OpenAI Co-founder: “We Were Wrong” – A Major Shakeup In AI

OpenAI is an AI research organization, founded as a non-profit, dedicated to developing and applying AI technology in a socially conscious way. Its research portfolio includes ChatGPT for natural language processing and DALL-E for image generation.

In an interview with The Verge, a technology news website, Ilya Sutskever, one of OpenAI's founding members, remarked that the company's initial idea of sharing its research openly, espoused when it launched in 2015, was misguided. He stated bluntly: "We were wrong."

GPT-4, the model behind the latest version of ChatGPT, has sparked a lot of buzz in the AI community. Not only can it score in the top 10% of test takers on a simulated bar exam and convince a human to solve a CAPTCHA for it by claiming to be visually impaired, it also exhibits remarkable performance and flexibility. However, some researchers and AI professionals have voiced concerns that not enough information has been disclosed about GPT-4.

Ben Schmidt, the vice president of information design at Nomic AI, strongly emphasizes the importance of releasing training data so that the mistakes and biases that may be built into AI models can be examined and corrected.

However, OpenAI has not disclosed the dataset or the training method used to build GPT-4. Commenting on this, Schmidt wrote:

"I think we can call it shut on 'Open' AI: the 98-page paper introducing GPT-4 proudly declares that they're disclosing nothing about the contents of their training set."

The passage of the GPT-4 technical report he is referring to reads:

"Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar."

In a discussion with The Verge, Ilya Sutskever, OpenAI's chief scientist and co-founder, defended the organization's decision to keep GPT-4's training data private. He cited "competitive" and "safety" grounds as justification for this choice, which he described as "obvious."

Sutskever commented that creating GPT-4 was not easy and that competition in this area is fierce, with numerous other companies striving to build similar products as the AI field expands. He also made a point about safety.

Sutskever said:

"These models are very powerful, and they will become even more powerful. At some point it will be quite easy, if one wanted, to cause a great deal of harm with them. As the AI gets more powerful, it makes sense that you don't want to disclose it."

OpenAI, founded to pursue open AI research, has thus made a significant shift in policy by opting to keep its AI closed. The change is striking when we look back to December 2015, when the blog post announcing OpenAI's launch, co-authored by Sutskever, stated: "As a non-profit, our aim is to build value for everyone rather than shareholders."

Sutskever articulated the rationale behind this drastic shift in view on open-sourcing AI research, noting: "If you recognize how immensely powerful general-purpose AI can be, then it is not wise to make it freely accessible."

Some anticipate that, in time, the decision will prove short-sighted because of the legal ramifications OpenAI could face by keeping the specifics of GPT-4 confidential.

Large language models are typically trained on huge datasets, and scraping the internet is a popular way of obtaining this data. When data is collected in this way, however, there is a risk that copyrighted material will be included in the dataset. The issue arises not only with language models but also with image-generating AI that produces paintings and illustrations. Asked about this, Sutskever said: "My view is that training data is technology. It may not look like it, but it is. The reason we don't disclose it is pretty much the same." He did not, however, answer whether OpenAI's training data infringes any copyright.
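To make the copyright concern concrete, here is a minimal, hypothetical Python sketch of how web-scraped text can end up in a training corpus with no record of its licensing status. The URLs, file name, and pipeline are illustrative assumptions, not OpenAI's actual process.

```python
# Illustrative sketch only: gathering web text for a training corpus.
# Nothing here checks whether the scraped pages are copyrighted or
# licensed for reuse, which is exactly the risk discussed above.
import requests
from bs4 import BeautifulSoup

# Hypothetical placeholder URLs, not a real crawl list.
urls = [
    "https://example.com/article-1",
    "https://example.com/article-2",
]

corpus = []
for url in urls:
    html = requests.get(url, timeout=10).text
    # Strip markup and keep only the visible text.
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)
    corpus.append(text)

# Append everything to a flat text file; provenance and license data are lost.
with open("training_corpus.txt", "w", encoding="utf-8") as f:
    f.write("\n\n".join(corpus))
```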

As AI develops rapidly, companies are rushing to incorporate it into their products, often leaving discussions about safety and ethics behind. For example, Microsoft, which revealed that its conversational AI Bing Chat is based on GPT-4, dismissed an internal team specializing in AI risk research in March 2023.

Jess Whittlestone, head of AI policy at the Centre for Long-Term Resilience in the UK, sympathizes with OpenAI's decision not to make GPT-4's details public, but she also voices concern about the increasing centralization of power in the AI field.

Whittlestone told The Verge that decisions like these should not be left up to individual companies, and that it needs to be examined independently whether releasing these models to the general public is sensible.

Still, OpenAI's willingness to admit "we were wrong" may ultimately be a positive development for the future of AI: it signals a readiness to learn from mistakes and course-correct toward ethical, transparent, and beneficial AI for all. As AI continues to evolve, we must remain vigilant in addressing potential risks and in ensuring that the technology is developed and used responsibly and accountably.

In the end, the AI community must work together to develop ethical guidelines and standards that ensure that AI is used for the betterment of society as a whole. By doing so, we can help to ensure that AI remains a force for good and continues to bring positive change to our world.

Source: GIGAZINE
