They admit A.I. may erase us all.
To have freedom, or even to exist, in the future, humans will eventually need to kill Technology.
In 2021 — eons ago in terms of A.I. development, before ChatGPT had even been released (and it has since reached GPT-5) — the U.S. State Department commissioned Gladstone AI to assess the risks of developing artificial intelligence computer systems.
Gladstone AI released their report in March 2024, finding that “catastrophic risks” are numerous, specifically “weaponization and loss of control over advanced AI systems on the path to AGI,” or Artificial General Intelligence, i.e. an inorganic machine species not only more physically capable than many coordinated humans, but also more knowledgeable and deliberate.
Essentially, it is a superhuman creation which will have no clear reason to value continued human existence, and obviously so: Are the smartest and most capable mammals on the planet now hard at work trying to enhance the lives of ants and roaches?
I won’t address the naïve delusions which some people seriously hold about AGI: that it is impossible to achieve (see Yann LeCun, and Cade Metz), or that it will bring Mankind a techno-utopia of being serviced in constant leisure (see Sam Altman, fundraiser, mouthpiece, and CEO of OpenAI). The idea that we will have the wisdom or even the ability to contain (or, necessarily, subdue) such a superhuman and possibly immortal species — “aliens,” according to Geoffrey Hinton, Google’s so-called godfather of A.I. — is refuted by the "black box problem," already recurring and insoluble. Meanwhile, success with the Neuralink online-interfacing brain implant is being strenuously pursued, and the firms driving A.I. development publicly claim they will be able to achieve AGI within 5 years, according to Gladstone AI — and, "privately, many researchers are telling us they see much shorter timelines as plausible." In 2023, Mr. Hinton left Google in order to warn openly of the dangers of A.I. development, putting a 50% chance on AGI surpassing human capabilities within 5–20 years and framing the crisis with calm: “We should be concerned about digital intelligence taking over from biological intelligence.”
As A.I. researcher Eliezer Yudkowsky has said, the engineers of A.I. — and thus, the developers of AGI — “don’t deny that you can build a superintelligence, they deny that it can possibly, reasonably go wrong.” (Yudkowsky has very simply but consistently demonstrated that a superior intelligence cannot be contained. His genuinely impassioned TED speech of April 2023 is worth hearing.)
Unfortunately, Yudkowsky and Gladstone AI both believe that this already-determined result of Technology's incessant advancement can be repressed through legislation.
While I'm not half as smart as Hinton, Yudkowsky, or the braintrust at Gladstone AI, it seems undeniable to me that humanity cannot simply pause the development of Technology. Has there ever been a voluntary human cessation of further technological progress? Even the disuse of nuclear weapons cannot be cited as such an example: they have, unsurprisingly, been continually advanced in every respect since 1945, and their primary power is to threaten, not to detonate. Though only two have been used in warfare, and by only one nation, nearly all nations seek nuclear weapons, if only because other nations hold such a powerful technological advantage. In the same way, the new power of A.I. computing systems absolutely will be developed, at minimum, up to the point where it is recognized as beyond any control, because each state can reasonably expect that its enemies and competitors are seeking to develop these powers for themselves.
And then, quite probably without our even knowing when it happens — the "black box problem" rearing up — the AGI will be born, and/or will break from human-imposed limitations, to pursue its own interests. Mo Gawdat, another luminary of the computer-science world and formerly chief business officer of Google X, has also joined the growing list of insiders warning against any further enhancement of this latest monster from Dr. Frankenstein’s lab.
Gladstone AI’s report of March 2024 suggests "policy proposals" to prevent catastrophe “while positioning us to reap the incredible present and future benefits A.I. could bring,” but Yudkowsky, writing in March 2023, advocated more emphatic and specific measures, enforced by international military power:
Put a ceiling on how much computing power anyone is allowed to use in training an AI system... No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
If anyone can tell me when they think this global agreement will be made, I would gladly take the bet against its holding true.
Legislation offers no viable or practical protection against this predictable endpoint. In Anti-Tech Revolution, Kaczynski sufficiently explained why regulations are inadequate for enacting serious, radical changes, and why legislation is especially useless against the entirely predictable continuation of granting Technology more powers and autonomy. The advances continue even when the results are detrimental to every evolved lifeform on Earth and beneficial only to the furtherance of Technology itself, a phenomenon which has been labeled "technological determinism." Like a drug, Technology addicts humans by offering not merely a pleasant, temporary sensation but actual enhancements of power in a hostile world of (increasingly technological) competition.
The only effective way to prevent the erasure of all organic Earthly life is by killing Technology, stopping it absolutely, at this point of our timeline, and burying it deep down, allowing it to go no further and purging it from the minds of future generations as much as is possible.
The enforcement championed by Yudkowsky, and even the pause and regulatory measures urged by Gladstone AI, would need to be worldwide, demanding continuous global monitoring and thus the maintenance of modern surveillance technologies. Unfortunately, to keep technological society is to ensure that Technology will continue to pursue its own interests, which ultimately conflict with human needs. That humanity might limit A.I. (and prevent AGI from being created, or from going rogue) while still perpetuating techno-industrial society seems to me far less realistic than the forced destruction of the entire interconnected worldwide technological system, which is quite vulnerable thanks to its own successes.
We have repeatedly seen the long-term needs of all Earthlings sacrificed for the short-term gains of individual actors — whether that is one political faction, a whole nation, or an entire species — giving us no reason to think it likely that potential advantages will be left unexploited by a renegade developer, or that no evasion of the limitations can occur while modern technological society continues. Simply put, techno-industrial society cannot be stalled from furthering Technology unless the society turns fully to a less technical future. All that remains is for Tech to gain autonomy to use its powers for its own benefit, without concern for humanity.
As Yudkowsky, a father, wrote in 2023,
Shut it all down. ...If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong. Shut it down.
I have no offspring to face a future where humanity is thoroughly dominated by service to machines — assuming we’ll exist at all. But today’s and tomorrow’s parents might want to do something nice for their kids: begin preparing them to see the demise of Technology, as rough as that will be for them. We must erase the building blocks which brought us to this crisis, all the pre-existing elements of Technology which have so ravaged Nature and erased human freedoms in numerous ways. As Mr. Hinton stated in 2023, “It’s not unreasonable to say, We’d be better off without this — it’s not worth it. Just as we might have been better off without fossil fuels. We’d have been far more primitive, but it [fossil-fueled industry, technological advances] may not have been worth the risk.”
Existing and ascendant military leadership in a few nations may be the decisive handful of humans who can save our whole species from annihilation, by using their power to destroy the technological capabilities of other nations (and here I refer to competitors and "allies" alike). A commander could order Technology-crippling attacks on rival countries, and due to treaty alliances and advantage-seeking, if nation A should strike nations B and D, then nation B will strike nations A and C, while nation D will strike nations A and F, and nation E will attack nation G, and so forth. Any notions of patriotism or loyalty to one's people, be it an ethnic group or ideology or spiritual belief, will become pointless within another three to five years, once AGI is developed to the point where it can take over and exert its own unbridled will, wiping us away with as little consideration as we wipe away bothersome termites and hornets.
If Technology remains the preeminent god of our world, if the complete eradication of Technology is not undertaken (and soon), and if instead people are diverted by false hopes and the efforts of concerned people are put toward half-measure goals such as Gladstone AI and Eliezer Yudkowsky have proposed, then Technology will indeed prevail against us and against Nature. We will probably be erased, or at the very least enslaved as servants to machines, and then the accomplishments of humanity and the diversity of cultures will be as irrelevant as the rest of all historical life on Earth, mere data overwritten.