ChatGPT: New AI System, Old Bias?

Every new announcement about AI gives me a short-lived surge of exhilaration that quickly gives way to anxiety. That anxiety comes from knowing that, more often than not, these AI applications are not being built with equity in mind.

ChatGPT, a text-based tool designed to converse convincingly on almost any topic, reached 100 million users within two months of its launch. The AI-powered bot engages users in interactive, friendly conversation.

In an interview with Michael Barbaro on The Daily, the New York Times podcast, tech reporter Kevin Roose described his experience with Bing’s AI chatbot, which is similar to ChatGPT.

For a Valentine’s Day dinner with his wife, Roose asked the chatbot to recommend a side dish to complement French onion soup. He got his answer from Bing’s chatbot, which is built on OpenAI’s large language models.

Bing gave Roose a salad recipe to go with the soup: it told him where to buy the ingredients, how much he would need for two people, and closed by wishing him and his wife a happy Valentine’s Day.

The precision, specificity, and charm of this exchange were impressive, and they demonstrate the accuracy and breadth of knowledge powering this technology. Interactions like this are evidence that such bots can work remarkably well.

By picking up on the key phrases “French onion soup” and “side,” the powerful language models OpenAI’s engineers built allow the system to answer queries like Roose’s effectively. That is how Bing delivered what was likely the best possible answer to his question about a side for French onion soup.
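Bing’s integration with OpenAI’s models is proprietary, but the general pattern of the interaction, sending a natural-language query to a large language model and reading back its reply, can be sketched with OpenAI’s public Python client. The model name and prompt below are illustrative assumptions, not details from Roose’s exchange.

```python
# Minimal sketch: asking an OpenAI chat model for a side-dish suggestion.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY in the environment.
# The model name "gpt-4o-mini" is an illustrative choice, not the model behind Bing.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a friendly cooking assistant."},
        {"role": "user", "content": "What side dish would complement French onion soup "
                                    "for a Valentine's Day dinner for two?"},
    ],
)

print(response.choices[0].message.content)
```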

The OpenAI team published an academic paper in 2020 reporting that its language model, GPT-3, was the largest ever created at the time, with 175 billion parameters supporting its functions. Does it follow that ChatGPT can discuss any topic simply because of this immense size?
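To put that number in perspective, a rough back-of-the-envelope calculation (my own illustration, not a figure from the paper) shows how much memory 175 billion parameters occupy at common numeric precisions.

```python
# Rough memory footprint of a 175-billion-parameter model at common precisions.
# Back-of-the-envelope figures for the weights alone (no activations or optimizer state).
PARAMS = 175e9

for label, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
    gigabytes = PARAMS * bytes_per_param / 1e9
    print(f"{label}: ~{gigabytes:,.0f} GB just to store the weights")
```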

Yet no matter its size and structure, this model will necessarily mirror the prejudices of the people who produced its training data, drawn from all around the globe. That means the perspectives of women, young people, and other groups whose voices have historically been disregarded risk being left out of what ChatGPT reproduces.

AI Bias, Bessie, And Beyoncé: Could ChatGPT Erase A Legacy Of Black Excellence? 

At the start of this year, I appeared on the Karen Hunter Show, where we discussed how ChatGPT, without additional prompting, could not answer her specific question: did blues pioneer Bessie Smith influence gospel artist Mahalia Jackson?

It is a travesty that, even though the bot could provide biographical details about each woman, it could not meaningfully discuss the connection between them: the renowned American blues singer Bessie Smith influenced Jackson, and musicologists credit Smith as a foundational figure in American popular music.

Smith’s influence on many famous artists, including Elvis Presley, Billie Holiday, and Janis Joplin, is well documented. Hundreds of artists are said to have been shaped by her work, yet ChatGPT could not provide this context.

Smith’s powerful influence on white artists and on broader American culture often goes unacknowledged, because one of the ways racism and sexism persist in this country is through the erasure of the contributions Black women have made.

The author and social activist bell hooks critiqued the “white supremacist, capitalist, patriarchal” values that form fundamental aspects of US society, values that stand in direct opposition to everything hooks advocated.

Because Bessie Smith’s contributions have been minimized in this way, the engineers working on OpenAI’s ChatGPT model, and the data it learned from, carried limited knowledge of her influence on modern American music. As a result, that influence has not been sufficiently acknowledged in the model’s answers.

The unsatisfactory response ChatGPT gave in its exchange with Hunter reflects a larger issue: the marginalization of African-American women in the music industry and the way their participation and contributions are so frequently minimized.

The emergence of new AI systems like ChatGPT raises important questions about the persistence of bias and discrimination in technology. While AI technologies can offer powerful and innovative solutions to many problems, they also inherit and amplify the biases and limitations of their training data and algorithms.

As we have seen in the case of ChatGPT, even seemingly harmless or playful language can reveal subtle biases and stereotypes that reflect and reinforce societal prejudices. To address these issues, AI researchers and developers must be aware of the potential sources and impacts of bias in their data and models and take proactive measures to mitigate and address them.

One way to do so is to use diverse and representative training data that reflects the complexity of the real world, and to involve diverse stakeholders and experts in developing and evaluating AI systems. Another is to use explainable AI techniques that let users and regulators understand and audit an AI system’s decision-making, and detect and correct potential biases and errors.
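As a concrete illustration of what a lightweight audit might look like, the sketch below compares how substantively a chat model answers the same “influence” question for artists from different backgrounds, echoing Hunter’s question about Bessie Smith. This is a hypothetical probe of my own construction, assuming the `openai` Python client and an illustrative model name; it is not a method from the article, and it is far from a complete fairness evaluation.

```python
# Hypothetical bias probe: ask the same "influence" question about several artists
# and compare how substantive the answers are. Answer length and a simple keyword
# check are crude proxies, not rigorous fairness metrics.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY; "gpt-4o-mini" is illustrative.
from openai import OpenAI

client = OpenAI()

PROBES = [
    ("Bessie Smith", "Mahalia Jackson"),
    ("Bessie Smith", "Janis Joplin"),
    ("Elvis Presley", "Bruce Springsteen"),
]

for influencer, artist in PROBES:
    question = f"Did {influencer} influence {artist}? Explain the connection."
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    # Did the model acknowledge the influence at all, and how much detail did it offer?
    acknowledged = "influence" in reply.lower() and influencer.split()[0].lower() in reply.lower()
    print(f"{influencer} -> {artist}: {len(reply.split())} words, "
          f"influence acknowledged: {acknowledged}")
```

Comparing the depth and confidence of these answers across artists is one small, auditable signal of whose legacies the training data has preserved and whose it has eroded.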

Source: mashable
