Smart or scary? Tell-tale signs of an emerging AI-nxiety crisis 

AI

Fast, free, sparse, exponential are just some of the word associations floating around today’s frenzied AI movement — baiting public interest with grandiose claims of superintelligence and the Internet of Augmentation (IoA). Yet what started as a virtuous research and safety premise has morphed into an exploitation endeavour, devouring data at the expense of ethical reason.


OpenAI, the research organisation behind mainstream chatbot mania, has been mesmerising users with advancements in natural human-computer interaction. But an ominous Apple-esque business model is unfolding at the speed of computational milliseconds.

OpenAI’s CEO, Sam Altman, who appears to be emulating the visionary guise of Steve Jobs, has been captivating investors with his infectious ambition to build an AI chips company and upcoming app store — apparently leading the world to well-intentioned artificial general intelligence. But this computational quest is no tale of technologic triumph. It is something far more profound, obscure and sinister. An existential journey where economics and ethics are bound to collide in a cataclysmic conundrum — reshaping the way humans transact precious intellectual property in a world dumbfounded by data.

Recently, numerous legal disputes against OpenAI have been hitting the headlines, with Scarlett Johansson the latest high-profile defendant calling foul in a case that is bound to open pandora’s box for the explosive Internet of Augmentation — not only for OpenAI, but for any entity that has hastily used proprietary information to feed machine learning models. In her argument, the American actress claims a newly released chatbot voice (known as ‘Sky’) sounds ‘eerily similar’ to hers despite prior refusals to opt-out. Adding fuel to the fire, Altman’s subtly implied post on X references the film, ‘Her’, in which Johansson starred alongside Joaquin Phoenix as Samantha, an AI voice companion.

Growing signs of a legal uprising can be seen in additional lawsuits involving the New York Times, the Authors Guild et al, a bunch of US daily newspapers, and Getty Images versus Stability AI. Meanwhile, The Guardian, BBC, CNN, and Reuters have all taken action to prevent OpenAI’s web crawler from accessing material.

Whether such backlash is a sign of traditional market players resisting an inevitable inflection point, or merely intuitional responsiveness to copyright infringement, is up for debate. Regardless, ethical contention is becoming more pervasive as artificial narrow intelligence infiltrates wider industries.

Mo Gawdat, former chief business officer of Google [X] (now X Development) and author of Scary Smart, has described his terrifying existential concern for the trajectory of AI as imminently critical.

Mo Gawdat explains his terrifying existential concern for the trajectory of AI

In a recent podcast with host Steven Bartlett, Gawdat explains that to address current short-sighted interests, large-scale AI experimental efforts must become expensive to remedy unsustainable and unethical development. As clichéd as it sounds, this will require orchestrated impetus by governments, above and beyond what was endeavoured in 2023’s open letter to ‘Pause Giant AI Experiments’ by the Future of Life Institute.

Moreover, without the proactive application of ethical frameworks and regulations addressing AI safety at this crucial milestone, humanity is gambling with a seismic shift far more powerful than any forerunning event in the history of Homo sapiens. Unresolved, this could lead to calamitous ramifications for our species.

Yuval Noah Harari speaks of AI’s digital evolution being millions of times faster than organic evolution

To harness such groundbreaking technology for prosocial gain, certain dependencies are required to pave the way for harmonious innovation. Think: circular economics, job succession planning and universal basic income. These are fundamental components that are failing to receive equal attention. Moreover, widespread adoption of AI Ethicists could usher in a new kind of gate-keeping to resume deep learning at scale.

As AI ethics researcher, Sasha Luccioni, puts it, creating opt-in and opt-out mechanisms for developing data sets is crucial, lamenting that ‘artwork created by humans shouldn’t be an all-you-can-eat buffet for training AI language models’.

Sasha Luccioni offers practical solutions to regulate our AI-filled future
Actively block AI scrapers from your website with Spawning’s defence network

Curious to know if your work has been included in training data sets without your consent? Spawning.ai, an organisation founded by artists, has released a tool enabling users to search vast data sets to monitor if any proprietary information has been crawled. Find out here if you’ve been trained.

As Zephyr, an anonymous artist, so eloquently puts it: ‘Rather than fear AI, I will strive to learn from it, to adapt and grow alongside it. For in the end, it is not the tools we use that define us, but the passion, the vision, and the indomitable human spirit that guides our creative journey. Anxiety of AI may be real, but so too is the limitless potential of the human imagination. And that is a force that no machine can ever fully replicate or replace.’

On an equally constructive note, OpenAI’s recent deals with the Financial Times and Reddit signify a change in course for the prominent tech provider. Similar partnerships are gaining traction (such as content licensing deals with the Associated Press, Le Monde, El País, and Bild) as traditional media institutions adapt to the inevitable shared future of an open-source ecosystem.

But the question remains, will such initiatives uphold the ethical premise of OpenAI amid frantic innovation. In a disturbing sequence of events, Microsoft (a dominant stockholder of OpenAI) laid off an entire ethical team back in January 2023, while shockingly, as of May 2024, OpenAI’s long-term AI risk team has been disbanded. Evidently, market share supersedes moral compass for this strategic partnership. Left unchecked, the ramifications are dire: technocratic reign prioritising ‘shiny products’ over human safety.

Reducing the vulnerabilities reported in Large Language Models is crucial at this foundational phase. In an alarming publication on the scalable extraction of training data, DeepMind researchers at Cornell University demonstrate that open-LLMs (Pythia or GPT-Neo), semi-open-LLMs (LLaMA or Falcon) and closed-LLMs (ChatGPT) are all prone to adversarial attacks — estimating that a gigabyte of ChatGPTs training data set could be extracted at a rate of 150x higher than when behaving properly.

As superintelligence emerges amid our relentless quest for productivity gains, an inevitable near-term AGI event horizon beckons: Is this the biggest epistemological experiment to which none of us have consented?

Indeed, more questions should be asked at such a critical juncture in our journey to AGI. A pause in large-scale open source experimentation may not be such a bad thing after all, if collective effort results in a more resolute framework to developing ethical AI.

AI
Newt Scamander conjuring AI. HDR, realistic, 16K, sharp focus – ar 3:2. Midjourney
AI
Vincent van Gogh contemplating AI. HDR, realistic, 16K, sharp focus – ar 3:2. Midjourney

What the experts are saying about AI-nxiety

AI and our future with Yuval Noah Harari and Mustafa Suleyman
Sam Harris discusses his worry that humanity will have to declare bankruptcy to the internet

‘Look at how it [AI] was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.’

Geoffrey Hinton

‘By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.’

Eliezer Yudkowsky

‘The real danger is not that machines more intelligent than we are will usurp us as captains of our destinies, but that we will over-estimate the comprehension of our latest thinking tools, prematurely ceding authority to them far beyond their competence.’

Daniel Dennett

‘Within 10 years computers won’t even keep us as pets.’

Marvin Minsky

‘Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.’

Ray Kurzweil

‘My guess is that we’ll have AI that is smarter than any one human probably around the end of next year [2025].’

Elon Musk

‘The rise of powerful AI will either be the best or the worst thing ever to happen to humanity.’

Stephen Hawking