Opinion

Why we need to pause AI research, consider the real dangers

I have spent most of my life as a techno-optimist.

I believe the world has gotten better on average over the centuries — I would rather be a random person in 2023 than a random person at any other time in history.

This progress is largely thanks to advances in science.

I started working in the philosophy and ethics of artificial intelligence in large part because I was enthralled by its potential as the most transformative technology of my lifetime.

Google’s DeepMind says its mission is to “solve intelligence” and from there use the enhanced intelligence to solve all our other problems — global poverty, climate change, cancer, you name it. I find this vision compelling, and I still believe AI has that potential.

But I have reluctantly come to believe the path there is much narrower and much more dangerous than I once hoped.


Google’s DeepMind says its mission is to “solve intelligence.” (REUTERS)

This is why I signed the Future of Life Institute’s open letter calling for a moratorium on the further development of the most powerful modern AI (notably “large language models” like OpenAI’s GPT).

First, there are dangers of AI already present but quickly amplifying in both power and prevalence: misinformation, algorithmic bias, surveillance, and intellectual stultification, to name a few. I think these worries are already sufficient to call for more reflection before we proceed.

My colleague Phil Woodward has pointed out that though OpenAI has promised to proceed cautiously, it has released to the public a perfect cheating machine that has already done to higher education what publishing an easy recipe for an undetectable athletic enhancement would do to professional sports.

No doubt ChatGPT and its successors will have many positive effects on education, too — but the disruption in the meantime is undeniable and not obviously for the best.


OpenAI has already been used in higher education. (NurPhoto via Getty Images)

OpenAI is arguably one of the best-intentioned AI outfits out there, but intentions have a funny way of being warped when profit motives are also on the line. That one of OpenAI’s most notable founders (Elon Musk) also signed FLI’s letter is telling.

Near-term concerns are not the whole story, though.

Like many, I am convinced the long-term “existential risk” is very real.


AI has already presented some dangers, including misinformation and algorithmic bias. (NurPhoto via Getty Images)

About a decade ago I read the preliminary papers behind Nick Bostrom’s book “Superintelligence,” which argues, roughly, that 1) AI is likely to self-amplify until it reaches a level of scientific and technological sophistication far beyond our own, 2) once it reaches that state, humans will have essentially no say in what happens next, and 3) it is very hard to make sure such an AI will have our true interests at heart.

Since reading that work I have gone through some of the classic stages of grief.

First, I was in denial; I wrote an academic response to Bostrom in which I thought I could show his arguments were misguided.

The more directly I engaged his book, though, the more I realized he had already considered my objections and refuted them.


Popular science fiction can give us the impression artificial general intelligence (AGI) is in the far future or pure fantasy. (AFP via Getty Images)

In the years since, it’s been a mix of bargaining, depression, and acceptance of the fact that advances in my much-beloved AI pose a serious risk to human existence.

These arguments are not always portrayed well in the media. As with many subtle issues, the arguments for the position don’t fit well into a soundbite or 280 characters, but apparent takedowns (of oversimplified, straw-man versions) do.

Popular science fiction, especially, can mislead our imagination in two opposing directions. First, it can give us the impression artificial general intelligence (AGI) is in the far future or pure fantasy.

But we have already started to see hints of AGI.

And even if real AGI is still far off — say 50 years or more — the hurdles are at least as hard as those of climate change, and the stakes at least as high.

Second, because science fiction is ultimately about human concerns (and its AI must be portrayed by human actors), we are used to the idea that AI will be like us in most ways. But, as Yuval Harari, Tristan Harris, and Aza Raskin recently put it, we are in the process of summoning a truly alien intelligence.

AI will not share our biological history and so will not have our contingent wants. It will not necessarily be pro-social, for example, any more than it will love sugar and fat.

In this space, I can only urge the curious and concerned to engage with the nuances. Myself, I now devote most of my research time to the “alignment problem”: roughly, the problem of trying to make sure the goals of a superintelligent system are sufficiently aligned with ours to enable human flourishing.

This is a truly interdisciplinary field: it needs computer scientists, ethicists, psychologists, formal epistemologists, governance experts, neuroscientists, mathematicians, public-relations experts, engineers, economists . . . and it needs many more of each.


Accenture Research analyzed how companies are still experimenting with AI.

Accenture Research showed how AI is rising in different industries.

If you find yourself wanting to know more, you might perhaps start with Brian Christian’s excellent overview “The Alignment Problem.”

For those who want to help but aren’t sure how, a good place to start is the website 80,000 Hours.

As a philosopher, I am often haunted by a phrase from “Superintelligence”: that AI alignment is “philosophy with a deadline.”

Lately, as we’ve all noticed, that deadline has shortened dramatically.

Steve Petersen is a professor of philosophy at Niagara University.
