Are we heading for an AI-pocalypse soon?


May 15, 2024

While some hail the enormous potential offered by AI, others are worried about its risks


by Fred Schulenburg
Artwork by Carsten Gueth

The late, great physicist Stephen Hawking warned that AI could cut the brief history of humanity short. Now that the dizzying pace of advances in AI-driven applications is almost matched by the speed of their widespread adoption, are these quantum leaps in technology the dawn of a new, inspiring era for humanity or a threat to life as we know it? And how can Big Tech companies and governments strike a balance between regulation and innovation?

Key takeaways from this piece
Understand the fears around AI:

It is a different type of intelligence from the human biological one: It can share knowledge instantly on a massive scale.

Don't let the fear overwhelm you:

Consider that trying to rein in AI now is akin to reining in the jet industry in the 1920s – before it even became a thing.

Understand the positives:

The advent of AI could be the stimulus we need to be more creative and make things better than industrial machinery can.

Bletchley Park may look like one of those English country houses drawn from the pages of a cozy Agatha Christie thriller. Built in the late 19th century by a financier, the house is a mish-mash reworking of various architectural styles, which in combination – at least according to the view of one American critic – produced a "maudlin and monstrous pile." And yet the house and surrounding estate some 80 km northwest of London can lay a plausible claim to being the cradle of our computer age: It was there that a number of the world's finest mathematical minds, including the legendary Alan Turing, gathered during the Second World War in an endeavor to crack German military ciphers. One result of their efforts was the development of a programmable computer – a harbinger of a new technological revolution.


As such, Bletchley Park was a fitting setting for the first global summit on artificial intelligence safety in early November 2023, which saw government officials, academics and tech industry leaders from 28 nations gather to discuss the opportunities and risks of a powerful new technology that – depending on whom you talk to – has the potential to change the world as we know it for better or worse. Hosted by UK Prime Minister Rishi Sunak, the Bletchley Park AI Safety Summit brought together representatives from the US and Chinese governments as well as the likes of Elon Musk and pioneers at the forefront of the AI revolution such as Sam Altman of OpenAI. Among the points of discussion was whether and how to control the development of AI.

Olivia O'Sullivan

Olivia O'Sullivan is director of the UK in the World Programme at Chatham House and a former member of the UK's Open Innovation Team.

The high-profile nature of the event was confirmation of just how rapidly AI has come to feature in all walks of life, from business to policymaking, entertainment to health care and beyond. What was once seemingly the stuff of science fiction – think any number of scenarios depicting a world in which "the machines are taking over" – has become part of our everyday reality, and the discussion now is about which jobs are not at risk of being done by intelligent machines.

What has fueled the discussion around AI is the breathtaking pace of development. The popular ChatGPT bot was launched only a little over a year ago, at the end of November 2022. It has since become the fastest-growing consumer software application in history. "The trajectory of technology has outpaced even what some experts were expecting just a year ago," says Olivia O'Sullivan of the Chatham House think tank in London. One top scientist in the field says that they dare not miss a day in the lab as every day something new emerges. And we are only at the beginning. Fei-Fei Li of Stanford University, a leading figure in the development of AI, says we are still in the "very nascent … pre-Newtonian" phase of the technology's evolution.

"The trajectory of technology has outpaced even what some experts were expecting just a year ago."

Olivia O'Sullivan

Director, UK in the World Programme
Chatham House

In the midst of this there is much uncertainty about what exactly we mean when we talk about AI. Artificial intelligence is already very much with us, embedded in scores of mundane applications: from predictive text to reading suggestions pulled together from your past choices, from forecasting hospital admissions to calculating the level of household insurance cover a customer may require – and countless other examples.

What is less certain is the next stage, so-called "frontier AI," which the UK government defines as "highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today's most advanced models." The American organization OpenAI adds to its definition that such models "could possess dangerous capabilities sufficient to pose severe risks to public safety." Those dangerous capabilities can, it adds, "arise unexpectedly." It is difficult both to prevent their misuse and to stop them from proliferating broadly.

A new destroyer: Some scientists are likening AI to the atom bomb, which went into mass testing at sites such as Bikini Atoll only after its first wartime deployments.

The nefarious possibilities this conjures up range from deepfakes to cyberwarfare on an unimaginable scale, from the wholesale elimination of jobs to, in the imagination of the late physicist Stephen Hawking, the end of humanity. Yet it is only fair to mention that OpenAI – and many others – also states that the technology could bring huge benefits to humanity, from curing diseases and tackling climate change to improving business and public-service processes and decision-making, and even unlocking the mysteries of the universe.

The challenge is how these benefits are secured without succumbing to the risks that AI brings with it. Here it is not just the predictable technophobes who are pressing for more caution. What is notable is how many people at the forefront of AI development are now among those calling loudest for greater controls. It is almost as if they are worried about what they have unleashed.

Yoshua Bengio

Yoshua Bengio is a professor at Université de Montréal and is internationally recognized as a leading expert in AI. He is best known for his pioneering work in deep learning, which earned him the 2018 A.M. Turing Award alongside Geoffrey Hinton and Yann LeCun.

These include Mustafa Suleyman, one of the co-founders of DeepMind, who recently published a book, The Coming Wave, calling for a greater discussion around how the development of AI is managed. He is also one of scores of leading tech figures who have signed a statement from the Center for AI Safety demanding that the mitigation of "the risk of extinction" from AI be made a global priority alongside "other societal risks" such as pandemics and nuclear war. Another signatory is Yoshua Bengio, a so-called "godfather of AI," who in 2023 told the BBC he felt lost when contemplating the speed and scale of AI development, saying that it was "not the first time that scientists are going through these emotions. Think about the Second World War and the atom bomb. There are a lot of analogies there."

For others, more philosophical issues are at stake. Geoffrey Hinton told the BBC that alongside any concerns about the possible malignant use of AI, he was focused on the "existential risk of when these things get more intelligent than us." A key point is that this is a "different intelligence": Whereas human intelligence is a biological system, AI is a digital one, and machines can learn separately but share knowledge instantly at massive scale.

The range of proposed models of control is multiplying almost as fast as the machine learning models themselves. Some – such as Suleyman and Eric Schmidt, former CEO of Google – call for the establishment of an international body similar to the Intergovernmental Panel on Climate Change (IPCC) to provide policymakers with a monitoring and warning function as well as to shape protocols and norms for handling AI. Others look to bodies such as the International Civil Aviation Organization. Olivia O'Sullivan of Chatham House says that it's "worth thinking about something like nuclear power." Like AI, that technology brings "really significant" capability that can be used both for weapons and for beneficial, civilian ends. "We have a global governance system in nuclear power," notes O'Sullivan. Something similar could be established for AI.

"It is not the first time that scientists are going through these emotions. Think about the Second World War and the atom bomb. There are a lot of analogies there."

Yoshua Bengio

Professor
Université de Montréal

Others are more skeptical. Yann LeCun, chief AI scientist at Meta, argues that fears about AI are overdone and that humanity only stands to benefit from the enhanced power of machines, which, he says, will ultimately remain under human control. In an interview with the Financial Times, he likened efforts to regulate the industry now to seeking to rein in the jet airline industry in the 1920s – when jet airplanes had not even been invented.

Another critical question is who sets the agenda. In terms of geopolitics, the tussling has already begun. Barely had delegates gathered at Bletchley Park when US Vice President Kamala Harris rained on the British prime minister's parade with a statement that made clear America had no intention of playing second fiddle when it comes to shaping the development of AI. "Let us be clear: When it comes to AI, America is a global leader," she said. She went on to note that it is American companies that lead in AI innovation and that the US is able to catalyze global action and consensus in a way no one else can.

Meanwhile the EU has drawn up its own proposals for regulating AI aimed at establishing common standards across the single market. The presence of China at Bletchley Park was taken as something of a diplomatic coup given both the country's fast-paced growth in innovation and its determination to set its own course in technological development.


There are other fault lines. Representatives from the global south at the AI Safety Summit made a case for spreading the benefits of AI innovation as widely as possible. Others, such as Fei-Fei Li, argue that the public sector must play a greater role in the development of such a "critical technology" as AI. This is, she told Bloomberg, "important for American leadership." It is a thought echoed by her Stanford colleague Marietje Schaake, a special adviser to the European Commission, who has warned of the risks of allowing private sector entities to hold too much proprietary control of the technology. These risks include leaving lawmakers, regulators and the general public increasingly unaware of the capabilities and dangers embedded in technology that will feature in ever more aspects of civic life, from health care to law enforcement. Yet John Maeda, vice president of design and artificial intelligence at Microsoft, strikes a more optimistic note. He says AI will force humans to be even more creative than before and draws inspiration from the arts and crafts movement of the 19th century: "How do we make better things than the industrial machinery can create?"

The Bletchley Park summit concluded with a declaration in which the signatories acknowledged the enormous global opportunities, potential benefits and risks presented by AI – all of which call for an international, collaborative and, above all, "human-centered" response. The declaration is broad and well intentioned, but ultimately it represents the start of a conversation. There will be follow-up “editions” of the summit in 2024 in South Korea and France, when the focus will shift to practical next steps. The question is: Where will the machines – and the companies pioneering AI – have got to by then?

About the author
Fred Schulenburg
Fred Schulenburg, who also publishes under the name Frederick Studemann, is literary editor of the Financial Times, where he has worked since 1996 in a number of roles including Berlin correspondent, UK correspondent, assistant news editor and Comment & Analysis editor. He writes regularly for the Notebook column and was a founding member of Financial Times Deutschland.