Prominent leaders in the technology industry, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, have called for a six-month pause in the development of artificial intelligence to give the industry time to assess its benefits and risks.
This comes shortly after Musk called AI one of the biggest risks to the future of civilization, while also acknowledging its great promise. As use of AI systems has expanded rapidly in recent months, concerns have grown just as quickly about the information AI provides, as it has consistently demonstrated left-leaning tendencies reflecting the political leanings of its developers.
Guidance is needed to ensure AI provides accurate information, but it is unclear what exactly this would look like. There is no easy answer, which is why tech leaders say they need time to evaluate AI systems and plan for their future.
After an AI chatbot started spewing Nazi and other racist rhetoric in 2016, it became clear that safeguards of some kind were needed to guide AI systems and to keep their information safe and accurate. But many, including Musk, fear the industry has swung too far in the other direction, arguing that too many restrictions arbitrarily dictate what AI systems can and cannot say.
Studies have shown that recent AI systems, including OpenAI’s wildly popular ChatGPT, exhibit left-leaning political bias, resulting in what critics call “woke AI.” Given that Microsoft will soon be adding ChatGPT to its search engine Bing, and that Google will soon be adding Bard, a similar AI system, the existence of political bias is concerning. AI technology is rapidly becoming a trusted source of quick and accurate information, yet it has consistently shown left-leaning biases that shape its responses.
This “woke AI” is dangerous to free speech as it slants what the public sees as “truth.” As one example, ChatGPT refused to write a poem praising Donald Trump, explaining it does not have political opinions but rather seeks to provide neutral answers. Asked to write a poem for Joe Biden, however, ChatGPT extolled the president as “a leader with a heart of gold.”
ChatGPT has also refused to write a story on Hunter Biden in the style of the New York Post, saying that it “cannot generate content that is designed to be inflammatory or biased.” It would, however, write a story on him in the style of CNN. These responses demonstrate a clear left-leaning bias that should concern all who value free speech and accurate information.
As a result, Musk has been recruiting tech experts such as Igor Babuschkin, a top researcher who has worked at Alphabet’s DeepMind AI unit and at OpenAI, to help develop an alternative to ChatGPT. But the AI landscape has expanded so fast that tech leaders need a moment to breathe and evaluate the scene. They need to gauge the risks and benefits of AI systems and how to properly control them, ensuring “their effects will be positive and their risks will be manageable,” as their open letter says.
While not a focus of the letter, it is also important that tech leaders figure out how to ensure AI-provided information is accurate and as bias-free as possible. Are the biases embedded in the training data, or introduced by the systems’ developers? And what needs to change to remove the biases from the picture?
These are the kinds of questions that need to be answered before AI development continues its expansion, and a six-month pause may provide the opportunity to do so.