Digging Deeper Into The Delusion At The Heart Of The A.I. Boom

Although House of the Dragon has received high ratings, I'm not interested in watching it: it's a prequel to Game of Thrones, whose ending was so unsatisfying that I no longer want anything to do with that fictional universe – though A.I. could potentially make me reconsider.

At this year’s South by Southwest, OpenAI president Greg Brockman spoke about the potential applications of ChatGPT. He suggested, “What if you could direct your A.I. to compose a unique conclusion for a story that diverges from the original?”

Could A.I. be used to tailor revisions of novels and scripts toward more desirable results – making them shorter or longer, toning down the violence, or adjusting the level of "wokeness"? On this view, A.I. could fix whatever problem anyone has with any creative work.

But A.I. alterations cannot make movies and books more valuable to us, because much of these works' worth lies in the shared conversations they spark around a common, historically situated text. No matter what changes you desire in your personalized version, the fundamental discourse these works drive would remain the same.

Since the introduction of generative-A.I. apps like ChatGPT and DALL-E 2 this past fall, excitement and funding have surged around the newest generation of artificial intelligence products.

Many of the technologies' suggested use cases look like solutions in search of problems – a symptom of what may be described as "solutionism," and a reason boosters have such difficulty identifying real issues these tools can address.

Technological solutionism, a term coined by technology critic Evgeny Morozov, is the misguided view that complex dilemmas can be meaningfully addressed or solved by recasting them as simpler "engineering-type" problems.

Solutionism is seductive for three reasons. First, it is psychologically reassuring: it feels good to believe that complicated challenges can be easily met. Second, it is financially enticing: technology-based solutions can be immensely profitable, and purported silver bullets for today's most pressing problems can be sold at an attractive price.

Third, solutionism reinforces the belief that innovation – especially technocratic innovation that leans heavily on engineering fixes – is superior to approaches that work through social and political change.

Sure, it would be nice to have a new ending for a show that ended badly. But the problem with solutionism is that it misrepresents problems and obscures their real causes. And if something seems too good to be true, it probably is.

Solutionists go wrong when they fail to consider vital information, often about context – the kind of insight gained by consulting people with relevant expertise and firsthand understanding.

Big Tech, eager to promote its visions of innovation, has embraced solutionism. Facebook exemplified the approach in its Super Bowl commercial, which pushed the idea that physical reality is inadequate and that the solution lies in virtual alternatives.

Law enforcement officials, for instance, now see the metaverse as an "online solution" to their recruitment problem: prospective recruits can have immersive experiences such as driving police vehicles and working cases, supposedly gaining a much deeper understanding of what it truly entails to be a member of law enforcement.

Mark Zuckerberg recently shifted Meta's trajectory from the metaverse to prioritize A.I., and the company has pulled employees back to the office despite its initial enthusiasm for Horizon Workrooms, its V.R. collaboration platform. The reversal suggests that, for Meta, solutionism functions more as a P.R. strategy than as a commitment.

Solutionism is a major theme of the current wave of artificial-intelligence hype. A.I.-driven technologies will undeniably change how we live, but the significance of these so-called breakthroughs is greatly exaggerated.

The Federal Trade Commission has expressed concern that companies are overstating the capabilities of their A.I. products. As the agency warns, "A.I. hype is playing out today across many products, from toys to cars to chatbots and a lot of things in between."

Though there has been real progress in A.I. for medical research and diagnostics, the notion that robotic A.I. counselors will expand access to care is, even at its most optimistic, far-fetched.

A.I. medical advisers may come to offer consistently sound advice in certain spheres of healthcare. Even so, there is cause for skepticism.

According to Benjamin Mazer, an assistant professor of pathology at Johns Hopkins University, A.I. advances will likely drive healthcare costs up, not down.

As A.I. becomes more capable of analyzing physical examinations, medical histories, and symptoms, the fee-for-service U.S. medical industry can be expected to bill each A.I. inquiry and analysis as a separate service.

Mazer warns that A.I.-driven diagnostics might trigger "cascades of care": excessive, costly tests and procedures that can be unnecessary, alarming, or outright harmful. If A.I. becomes a moneymaker, in other words, it could drive medical costs up significantly.

Meanwhile, people who struggle to afford healthcare would still face the costs of testing, procedures, medicine, physical therapy, healthy eating, and other typical treatments. Being advised to pursue a treatment is little help if the expense puts acting on that advice out of reach – and on that front, A.I. offers no real progress.

A.I. advice could conceivably make some processes cheaper, though how a chatbot would accomplish this is unclear. And even where A.I. advice is inexpensive to deliver, the results may not match those of human care.

While some appreciate the efficiency of A.I.-based cognitive behavioral therapy, others recognize the advantages of engaging with an empathetic human therapist who can offer genuine concern and accountability on a personal level.

Some may see A.I. as a way of making inequality worse, particularly in therapeutic care: those with less financial means get subpar bot treatment, while the better-off get preferred human consultations.

To gauge whether OpenAI C.E.O. Sam Altman is applying an A.I.-solutionist mindset or has simply offered a poor example, consider another case that reflects his thinking.

Low-income individuals, the thinking goes, could obtain cost-effective legal advice from free or inexpensive A.I. legal advisers instead of paying the premium that human lawyers charge.

Using an A.I., someone considering renting an apartment, taking a job, or hiring a home-repair service could have the contract reviewed, with terms that might otherwise be difficult to parse translated into plain language, pros and cons included.

MSNBC recently aired a segment on the debut of the first A.I. legal assistant, suggesting this prospect may become a reality soon.

Tess Wilkinson-Ryan, a professor at the University of Pennsylvania Carey Law School who specializes in contracts, was asked for her thoughts on the matter. She said:

“I am pretty skeptical about the utility of cheap A.I. for the kinds of contracting problems that pose the most serious threats to low-income parties. My general view is that the biggest problems do not have to do with people failing to understand their contracts; they have to do with people lacking the financial resources (access to credit, or cash reserves) to afford the transactions that come with good terms.”

Wilkinson-Ryan allowed that A.I. could play a role in assessing contract enforceability – flagging unenforceable terms, for example – but even here, she emphasized the technology's limited utility.

She went on to say that the technology "might help tenants who are negotiating disputes with their landlords," though it "might not be useful for choosing whether or not to rent an apartment."

ChatGPT – a technology its maker claims will make students "smarter" – has instead been met with a wave of panic in schools, even as some students use it to ease studying, asking it to explain material they didn't understand in class.

Teachers face multiple challenges: the technology makes plagiarism easier to commit and harder to detect, and chatbots' constant evolution makes it difficult to craft lasting educational policies.

Tools like ChatGPT have not yet been studied rigorously enough to discern their effect on learning, so it is impossible to say whether they will make students smarter.

The idea of increased productivity is attractive. But it can be misleading: many assume efficiency gains will mean doing less work, and better work – yet the likely effect is the opposite.

People are raving about how ChatGPT, now powered by OpenAI's GPT-4 model, can automate processes and save time, celebrating it as a productivity booster.

Adopting these tools early can indeed save time. But such gains are temporary: once automation is integrated into everyone's workflows, competition will simply raise the standard of expected output across the board.

Washington University professor Ian Bogost thus characterizes OpenAI as “adopting a classic model of solutionism” and notes that tools like ChatGPT “will impose new regimes of labor and management atop the labor required to carry out the supposedly labor-saving effort.”

Organizations should be realistic and honest when considering how best to use A.I.; overselling its capabilities feeds misplaced solutionism and ultimately hinders the technology's effective integration into society.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.

The delusion at the heart of the A.I. boom is a cautionary tale about overhyping and oversimplifying complex technologies. A more nuanced, critical perspective can help us realize A.I.'s promise while minimizing its risks and negative consequences.

Source: Slate Magazine
