There is little doubt that artificial intelligence could transform how we live, for better and for worse. Unfortunately, experts have little faith that those in power are prepared for what is coming.
In 2019, OpenAI, a research group, developed a software program that could compose coherent paragraphs and perform basic reading comprehension and analysis without task-specific instructions.
OpenAI initially chose not to release that program, GPT-2, to the public, citing the risk that people with malicious aims could use it to generate huge volumes of fake news and propaganda. In a statement announcing the decision, the group called the program “too dangerous.”
Last November, three years after the limited release of its predecessor, a far more powerful successor arrived, marking a remarkable leap in the technology’s capabilities.
ChatGPT, the chatbot interface built on that model, caused a huge stir when it was released, as reporters and experts tried out its features – often with remarkable results – generating a flurry of news articles and social media posts.
ChatGPT wrote stand-up comedy in the style of George Carlin about the Silicon Valley Bank collapse. It weighed in on Christian theology, composed poetry, and explained quantum physics to a child in the voice of the rapper Snoop Dogg. Other AI models, such as DALL-E, have produced visuals compelling enough to spark debate over whether they belong on art websites.
As tech policymakers, investors, and executives from around the world gathered this week for the South by Southwest Interactive conference in Austin, Texas, OpenAI unveiled its newest AI program, GPT-4. The company says it has built robust safeguards against misuse, and it counts Microsoft, Merrill Lynch, and the government of Iceland among its early adopters. AI was easily the hottest topic of conversation at the event, as attendees weighed the technology’s immense power and potential.
Arati Prabhakar, head of the White House’s Office of Science and Technology Policy, expressed enthusiasm about the potential of artificial intelligence but also sounded a note of caution.
Arati Prabhakar says:
“What we are all seeing is the emergence of this extremely powerful technology. This is an inflection point,”
“All of history shows that these kinds of powerful new technologies can and will be used for good and for ill.”
“If in six months you are not completely freaked the (expletive) out, then I will buy you dinner,”
Amy Webb, head of the Future Today Institute and a professor at New York University’s business school, laid out some alarming possibilities in her SXSW presentation, suggesting that artificial intelligence could follow one of two paths over the next decade.
In the optimistic scenario, AI is developed with the collective good in mind: its systems are designed transparently, and people can choose whether the data they post online is incorporated into the AI’s knowledge base. In that world, the technology serves as a helpful aid that makes life easier, with AI features in consumer products anticipating users’ needs and helping accomplish almost any task.
In the catastrophic scenario Ms. Webb describes, protections for users’ data weaken, power concentrates in the hands of a few large corporations, and AI tries to anticipate users’ preferences but ends up narrowing their choices instead.
Ms. Webb told the BBC that which course the technology takes will depend largely on how responsibly companies develop it. Will they be transparent about, and keep track of, where their chatbots – built on large language models – get their information? She puts the chances of the optimistic outcome at only 20%.
The other factor, she said, is whether government entities – federal regulators and Congress among them – can move quickly to establish legal frameworks that guide technological development and curb its misuse.
The government’s track record with social media giants such as Facebook, Twitter, and Google is instructive here, and the results have not been encouraging.
Melanie Subin, managing director of the Future Today Institute, shared her impressions of the mood at South by Southwest while attending the conference.
Subin says:
“What I heard in a lot of conversations was concern that there aren’t any guardrails,”
“There is a sense that something needs to be done. And I think that social media as a cautionary tale is what’s in people’s minds when they see how quickly generative AI is developing.”
There is also the risk that AI, if developed in ways beyond our control, could pose a threat to humanity itself. That risk must be taken seriously, with deliberate steps to ensure the technology is built safely and for everyone’s benefit.
However it unfolds, the impact of artificial intelligence is likely to be significant and far-reaching. It falls to all of us to ensure that AI is developed and used in ways that benefit everyone, and that its risks and challenges are managed effectively.
Source: BBC News