Over the past few months, concerns about AI safety and privacy have run high amid reports of minors dying by suicide after allegedly forming unhealthy attachments to AI tools like ChatGPT.
Generative AI has evolved rapidly over the years, moving past critical setbacks like frequent hallucinations to sophisticated capabilities that let AI bots generate realistic images and videos, ultimately making it difficult for people to tell what’s real and what isn’t.
Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, has claimed there’s a 99.999999% probability that AI will end humanity. The researcher warned that the only way to avoid this outcome is not to build AI in the first place.
Perhaps more concerning, ChatGPT can be prompted to share a master plan outlining how it would take over the world and end humanity. Per its step-by-step explanation, we might already be in phase one of the plan, with more people becoming overly dependent on AI tools to handle repetitive and mundane tasks.
As it stands, AI could be on the precipice of ending humanity unless elaborate measures and safeguards are put in place to prevent it from spiraling out of control. However, according to Machine Intelligence Research Institute (MIRI) co-founder Eliezer Yudkowsky (via The Decoder), none of those safeguards is a viable solution to the existential threat AI poses to humanity.
Instead, Yudkowsky says the only way to avert the inevitable doomsday is an international treaty mandating the permanent shutdown of AI systems. It’s worth noting that he has been studying and evaluating the risks of advanced AI since the early 2000s, and while speaking to The New York Times, he said:
“If we get an effective international treaty shutting A.I. down, and the book had something to do with it, I’ll call the book a success. Anything other than that is a sad little consolation prize on the way to death.”
According to Yudkowsky, approaches like safe AI labs and differentiated risk regulations are mere distractions that cannot resolve the impending threats arising from AI development.
Among the crazed mad scientists driving headlong toward disaster, every last one of which should be shut down, OpenAI’s management is noticeably worse than the pack, and some of Anthropic’s employees are noticeably better than the pack. None of this makes a difference, and all of them should be treated the same way by the law.
Machine Intelligence Research Institute co-founder, Eliezer Yudkowsky
He seemingly singled out OpenAI, arguably the most popular AI lab following ChatGPT’s launch, as the worst of the pack chasing ever-elusive AI breakthroughs.
Could superintelligence end humanity?
Most AI labs heavily invested in the industry seem to share a common goal: achieving artificial general intelligence (AGI) and, with enough compute, high-quality training data, and resources, perhaps even superintelligence.
OpenAI CEO Sam Altman has indicated that AGI could be achieved within the next five years, but brushed off safety concerns, suggesting it will whoosh by with surprisingly little societal impact.
However, Yudkowsky seemingly disagrees with these claims, indicating that any artificial superintelligence developed using current methods will lead to the end of humanity.
As highlighted in his book (If Anyone Builds It, Everyone Dies):
“If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of A.I., then everyone, everywhere on Earth, will die.”
Yudkowsky is calling for action from the political class. He describes the current approach of sitting on the fence and delaying regulation, even though some of these breakthroughs will predictably be achieved in the next 10 years, as reckless.
“What is this obsession with timelines?” he added. Yudkowsky says that if these risks exist now, regulations and safeguards should already be in place.