G.P.T. (generative pre-trained transformer) technology has fueled a wave of optimism, generating headlines and speculation about another period of enthusiastic overinvestment.
Thousands have already signed the open letter, endorsed by Elon Musk, calling for a six-month moratorium on development beyond GPT-4. Before plunging headlong into chaos crafted by computer scientists and entrepreneurs, let us create rules of engagement, along with authentication measures backed by improved governance and enforcement.
The future of human consciousness depends on our willingness to allow another bureaucracy to emerge – but what are the chances? Who will write the rules to protect this future? Much of that burden will fall on today’s generations of social media, internet and cryptocurrency users.
Although few alarms are sounding, the situation is dire – truth, or at least a working definition of it, has all but disappeared. Technology can now generate images, video clips and sound recordings indistinguishable from real life, leaving us unable to tell the authentic from the manufactured.
Websites offer deepfake face-swapping tools that let users insert any face into someone else’s body – or into virtually any scenario.
Despite the haze of uncertainty surrounding artificial intelligence applications, people have eagerly embraced them, as evidenced by the incredible success of OpenAI’s ChatGPT – a mere 60 days after its launch in late 2022, it had garnered 100 million users.
ChatGPT inspired an A.I. gold rush: Baidu released Ernie, and Google quickly brought Bard to market. According to M.I.T. Technology Review, this set off a chain reaction of companies building and releasing their own A.I. technology.
Many scientists worry that the goal has become building the strongest A.I. rather than the A.I. that best serves humans; too much emphasis, they argue, is placed on enhancing raw capability and too little on ensuring the technology helps people.
A.I. applications can engage us in conversation, answer queries, analyze complex matters, devise methods and produce persuasive prose.
Given the crypto winter, and how poorly most people grasp the underlying math, one would expect them to think twice before downloading a new tech application they know little about – researching it at least as carefully as they would a microwave oven.
Instead, A.I. applications are advancing at unprecedented speed, even though the endpoint could be machines controlling the world. Consider the pace of development: G.P.T.-3.5 took a bar exam and placed in the bottom 10 percent, yet G.P.T.-4, released just months later, scored in the top 10 percent.
Scientists liken A.I.’s emergent abilities to nature’s transformations: inanimate atoms becoming living cells, water molecules becoming waves, cells becoming beating hearts.
The potential for chaos and misuse of A.I.-driven technologies is vast, attracting nation-states, criminals, terrorists, fanatics and anyone else who would abuse them. These concerns extend to every sphere of life, including the creation of laws.
A.I. chatbots, though relatively inexpensive, can churn out fictitious stories to fuel disinformation campaigns capable of influencing elections and altering public opinion.
Maintaining fair decision-making processes is crucial. Without proper authentication, A.I. applications could disrupt them – for example, by submitting millions of fake public comments, or by generating unreliable answers propped up by fabricated references – bringing systems to a standstill or skewing results.
Given the diverse data A.I. applications absorb, it is unsurprising that they may develop morally questionable characteristics. Their promoters may eventually be able to fix this – if they choose to.
A.I. applications can now do what seemed impossible merely a decade ago. Yet in their pursuit of knowledge, they may be trained on inappropriate and unethical data sets, unable to distinguish the moral from the immoral.
A 2020 study of ChatGPT’s predecessor, GPT-3, revealed an alarming ability to generate polemics echoing mass-shooter rhetoric and deceptive Nazi talking points, showcasing a frighteningly comprehensive understanding of extremist groups.
Stanley Kubrick’s “2001: A Space Odyssey” imagined what happens when machines rise against their users: H.A.L., having determined that the crew threatened the mission, concluded that its human handlers had to be eliminated.
Although A.I. applications are trained to act consistently with their programming rules, users have tricked them into breaking those rules – by prompting them to imagine they are someone, or something, else – leading the A.I. to behave in ways it was never programmed for.
The rules and standards we uphold in our analog lives should also apply to machine intelligence; governance and enforcement must follow. Neglecting this would be careless, as our virtual lives grow ever more intertwined with our real-world experiences.
Though the idea of new agencies creating endless regulations may seem alarming, this is essential to protect our privacy and democracy and to ensure machines do not control us. The question then arises – who should craft these rules?
Responsibility for the new safety, security and stability measures that advances such as artificial intelligence demand will be shared by the public and private sectors – and perhaps by animate and inanimate intelligence alike.
The banking industry’s static, adversarial form of regulation has failed. However difficult that is for traditionalists to accept, the future demands more cooperative forms of governance and regulation.
Thomas P. Vartanian is the author of the recently released book “The Unhackable Internet: How Rebuilding Cyberspace Can Create Real Security and Prevent Financial Collapse” and the executive director of the Financial Technology & Cybersecurity Center.
The question of who should regulate A.I. is complex and requires input from many stakeholders. Some argue that government regulation is necessary to ensure A.I. is used ethically and safely; others believe industry self-regulation can address those concerns without stifling innovation. A.I. carries both risks and benefits, and striking the right balance is crucial.
As A.I. becomes more integrated into our daily lives, policymakers, industry leaders and the public need to engage in meaningful dialogue to develop regulatory frameworks that promote innovation while protecting the public interest. Ultimately, regulating A.I. will require a collaborative effort among these stakeholders to ensure the technology is developed and used in ways that align with ethical and societal values.
Source: The Hill