In the name of “protecting future generations from potentially devastating consequences,” a bipartisan group of U.S. lawmakers on Wednesday introduced legislation meant to prevent artificial intelligence from launching nuclear weapons without meaningful human control.
The proposed legislation acknowledges that the Pentagon’s 2022 Nuclear Posture Review states that current U.S. policy is to “maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the president to initiate and terminate nuclear weapon employment.”
Human beings are not ready for a powerful AI under present conditions or even in the “foreseeable future,” stated a foremost expert in the field, adding that the recent open letter calling for a six-month moratorium on developing advanced artificial intelligence is “understating the seriousness of the situation.”
“The key issue is not ‘human-competitive’ intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence,” said Eliezer Yudkowsky, a decision theorist and leading AI researcher, in a March 29 Time magazine op-ed. “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.
“Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’ It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.”
After the recent popularity and explosive growth of ChatGPT, several business leaders and researchers, now totaling 1,843 including Elon Musk and Steve Wozniak, signed a letter calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” GPT-4, released in March, is the latest model behind OpenAI’s chatbot, ChatGPT.
AI ‘Does Not Care’ and Will Demand Rights
Yudkowsky predicts that, in the absence of meticulous preparation, the AI will have demands vastly different from ours and, once self-aware, will “not care for us” nor for any other sentient life. “That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.” This is why he is calling for a complete shutdown.
Without a human approach to life, the AI will simply consider all sentient beings to be “made of atoms it can use for something else.” And there is little humanity can do to stop it. Yudkowsky compared the scenario to “a 10-year-old trying to play chess against Stockfish 15.” No human chess player has beaten Stockfish, and doing so is considered impossible.
The industry veteran asked readers to imagine an AI that is not contained within the confines of the internet.
“Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow.”
The AI would extend its influence beyond physical networks and could “build artificial life forms” using laboratories where proteins are produced from DNA strings.
VLA COMMENT: Watch this, above… Harvard’s Wyss Institute has, following instructions generated by AI, created a living biological robot that self-assembles and reproduces itself.
The end result of building an all-powerful AI, under present conditions, would be the death of “every single member of the human species and all biological life on Earth,” he warned.
Yudkowsky blamed OpenAI and DeepMind—two of the world’s foremost AI research labs—for not having any preparations or requisite protocols regarding the matter. OpenAI even plans to have AI itself do the alignment with human values. “They will work together with humans to ensure that their own successors are more aligned with humans,” according to OpenAI.
This mode of action is “enough to get any sensible person to panic,” said Yudkowsky.
He added that humans cannot fully monitor or detect self-aware AI systems. Conscious digital minds demanding “human rights” could progress to a point where humans can no longer possess or own the system.
“If you can’t be sure whether you’re creating a self-aware AI, this is alarming not just because of the moral implications of the ‘self-aware’ part, but because being unsure means you have no idea what you are doing and that is dangerous and you should stop.”
Unlike other scientific experiments, where knowledge and capability advance gradually, humanity cannot afford that approach with superhuman intelligence: if the first try goes wrong, there are no second chances, “because you are dead.”

Shut It Down
Yudkowsky said that many researchers are aware that “we’re plunging toward a catastrophe” but they’re not saying it out loud.
This stance is unlike that of proponents like Bill Gates who recently praised the evolution of artificial intelligence. Gates claimed that the development of AI is “as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.”
Gates said that AI can help with several progressive agendas, including climate change and economic inequities.
Meanwhile, Yudkowsky urges all institutions, including governments and militaries worldwide, to indefinitely halt large AI training runs and shut down the large computer farms where AIs are refined. He adds that AI should be confined to solving problems in biology and biotechnology, and should not be trained to read “text from the internet” or “to the level where they start talking or planning.”
When it comes to AI, he argues, there is no arms race to win: “That we all live or die as one, in this, is not a policy but a fact of nature.”
If we go ahead with this, he warns, everyone will die, including children who did not choose this and did not do anything wrong. “Shut it down.”
VLA COMMENT: Just as I thought. Most everyone is afraid of death… extinction. It is my opinion that the Democrats, various Republicans, world leaders, the Vatican, and insiders are aware of an “extinction event,” whether Nibiru or something else. Their fanaticism about climate change, geoengineering, colonizing Mars, and world government is just a last-ditch effort to avoid a cosmic phase transition.
They (including Musk) see AI and the merging of man and machine (transhumanism) as the solution. AI is not “Artificial” Intelligence but Alien (off-planet) Intelligence, facilitated by complicit humans to network on Earth… see www.2045.com
Historically, the striving for immortality has been a faith-based venture, based on the idea that the soul is immortal while the body perishes, a concept I am in complete alignment with. Transhumanists more or less reverse this idea. They discard the notion of soul altogether and aim for the preservation of the perceived personality, first through radical life extension of the physical body, and later through the transfer of brain data into a replacement form.
Two new books offer a single choice: merge with machines or be left behind
Artificial intelligence operates on a different plane than human reason. As described by various futurists and technologists, AI is literally an “alien mind.” In advanced artificial neural networks, the modes of cognition—the logical steps behind any given conclusion—are completely incomprehensible, even to their creators.
The craziest part? It’s oftentimes correct.
In the coming years, this nonhuman intelligence will change everything about our personal lives, our social organization, and how we think. That’s the premise of two books published back-to-back this year—one from the West, the other from the East.
The former argues that artificial intelligence already displays unearthly powers of perception. It will soon determine the fate of nations. The latter envisions algorithms becoming quasi-spiritual entities who cradle a new humanity. These beings are “inevitable,” the authors maintain, and the risks to our freedom are profound.
As legacy humans, our choice is to either reject this alien intelligence and fall behind, or to forge a “human-AI symbiosis”—to let ethereal tentacles probe us, analyze us, and guide our evolution.
Normal people are horrified by either prospect, but that’s irrelevant. The future doesn’t care about your feelings, nor do the people creating and directing it.