The development of generative AI technology has been a hot topic recently, with companies and researchers competing to create systems that generate new and original content, from music and art to text and images. However, as the field of generative AI grows, a concerning issue is starting to emerge: the problem of bias in AI systems.
Bias in AI can result in the creation of content that is sexist, racist, or otherwise discriminatory, and this is particularly problematic in the generative AI field, where the potential for AI-generated content to spread quickly and widely is high. The problem is not limited to generative AI, but it is a critical concern that must be addressed.
At the beginning of February, Google and Microsoft announced major updates to their search engines. Both companies have invested heavily in building or acquiring AI tools that use large language models to understand and answer complex queries.
Baidu, the Chinese search company, is likewise working to integrate similar AI capabilities with its services, aiming to give users a more precise and comprehensive search experience.
The excitement around these new technologies may be obscuring a grim reality. The race to build sophisticated, AI-driven search engines will demand a huge increase in computing power, and with it a sharp rise in tech companies’ energy use and carbon emissions.
Alan Woodward, a professor of cybersecurity at the University of Surrey in the UK, says:
“There are already huge resources involved in indexing and searching internet content, but the incorporation of AI requires a different kind of firepower.”
“It requires processing power as well as storage and efficient search. Every time we see a step change in online processing, we see significant increases in the power and cooling resources required by large processing centres. I think this could be such a step.”
The development of large language models (LLMs), like those behind OpenAI’s ChatGPT, Microsoft’s upgraded Bing search engine, and Google’s equivalent, Bard, requires analyzing vast amounts of data and the connections within them. This is why such LLMs are generally built only by corporations with considerable resources.
Carlos Gómez-Rodríguez, a computer scientist at the University of Coruña in Spain, says:
“Training these models takes a huge amount of computational power.”
“Right now, only the Big Tech companies can train them.”
Researchers have estimated that training GPT-3, the model ChatGPT is based on, consumed 1,287 MWh and produced more than 550 tons of carbon dioxide equivalent, roughly the emissions of a single person making 550 round trips between New York and San Francisco. Although neither OpenAI nor Google has disclosed how much computing power their products need, this third-party analysis gives an idea of the energy involved.
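The cited figures can be sanity-checked with simple arithmetic. The sketch below uses only the numbers above (1,287 MWh, 550 tCO2e, 550 round trips) to derive the grid carbon intensity and per-flight emissions they imply; it is an illustration of the estimate, not data from OpenAI.

```python
# Back-of-envelope check of the GPT-3 training estimate cited above.
training_energy_mwh = 1_287        # estimated training energy, MWh
training_emissions_tco2e = 550     # estimated emissions, tons CO2e
roundtrips = 550                   # NY–SF round trips in the comparison

# Implied grid carbon intensity: kg CO2e per kWh of training energy
intensity_kg_per_kwh = (training_emissions_tco2e * 1000) / (training_energy_mwh * 1000)

# Implied emissions per round-trip flight in the comparison
tco2e_per_roundtrip = training_emissions_tco2e / roundtrips

print(f"Implied carbon intensity: {intensity_kg_per_kwh:.2f} kg CO2e/kWh")
print(f"Implied emissions per round trip: {tco2e_per_roundtrip:.1f} tCO2e")
```

The implied intensity (about 0.43 kg CO2e/kWh) and per-flight figure (about 1 tCO2e per round trip) are both in line with typical published values, which suggests the comparison is internally consistent.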
Gómez-Rodríguez continues to say:
“It’s not that bad, but then you have to take into account [the fact that] not only do you have to train it, but you have to execute it and serve millions of users.”
According to the investment bank UBS, ChatGPT has an estimated 13 million users a day as a standalone product. That contrasts with its integration into Bing, which handles half a billion searches every day.
Martin Bouchard, one of the founders of the Canadian data center enterprise QScale, has concluded that integrating artificial intelligence into search engines, as Microsoft and Google have planned, would necessitate at least four or five times more computing power for each search.
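To see what Bouchard’s four-to-five-times estimate could mean at Bing’s scale, the sketch below multiplies an assumed per-search energy cost by the half-billion daily searches cited above. The baseline of 0.3 Wh per conventional search is an assumption for illustration only; neither Microsoft nor Google has published such a figure.

```python
# Hypothetical illustration of Bouchard's 4-5x computing estimate.
# The per-search energy value is an ASSUMPTION, not a published figure.
baseline_wh_per_search = 0.3        # assumed energy per conventional search, Wh
searches_per_day = 500_000_000      # Bing's cited daily search volume
multiplier = 4.5                    # midpoint of Bouchard's 4-5x estimate

baseline_mwh_per_day = baseline_wh_per_search * searches_per_day / 1e6
ai_mwh_per_day = baseline_mwh_per_day * multiplier

print(f"Baseline: {baseline_mwh_per_day:.0f} MWh/day")
print(f"With AI search: {ai_mwh_per_day:.0f} MWh/day")
```

Under these assumptions, AI-augmented search would consume several hundred MWh per day more than conventional search at the same volume, which is why the computing requirement, not just the model training, is the concern.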
He points out that, to reduce the computing power required, ChatGPT’s knowledge of the world currently stops at the end of 2021. That will not be enough to satisfy search engine users’ expectations, so the models will have to change.
Martin Bouchard says:
“If they’re going to retrain the model often and add more parameters and stuff, it’s a totally different scale of things.”
The development of generative AI technology is exciting, but it also underscores the importance of addressing bias in AI systems. Bias in AI can result in discriminatory content, which is particularly concerning in the generative AI field, where AI-generated content can spread quickly and widely.
We must address this issue head-on by developing and implementing AI technology responsibly and ethically. This means reducing bias in AI systems and ensuring AI-generated content is free from discrimination and prejudice.