
The Trouble with Artificial Intelligence

Elon Musk has signed an open letter calling for the world to take a beat on Artificial Intelligence, or AI, citing “profound risks to society and humanity.”

The letter, signed along with hundreds of other people, says this:

“Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

This comes a week after Bill Gates penned a letter of his own, raising concerns such as AI sharing our personal information, becoming too strong, or deciding “that humans are a threat.” Gates’ take was basically: yeah, it could be dangerous, but let’s give it a go because it could help poor people. Only it won’t reach poor people; it will mostly be used by the privileged people who don’t need it.

This letter takes quite the opposite view. It is replete with warnings that we have not yet answered the philosophical questions and that we should do so first.

“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk the loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The letter calls on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

During this pause, the authors call for a set of standards and safety protocols to ensure “that systems adhering to them are safe beyond a reasonable doubt.” There is precedent for this, the letter says: “Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”

