Why Chatbots Fail At Journalism & What You Can Do About It

Chatbots have become a ubiquitous presence in our daily lives. They help us order food, make appointments, and even assist us with our banking needs. When it comes to journalism, however, chatbots may not be up to the task. While they can produce content, their ability to report the news accurately and ethically is questionable, and some experts argue they may do more harm than good.

On one point, both A.I. boosters and skeptics agree: systems like ChatGPT are coming for white-collar jobs, journalism among them. And this is not some conceivable far-off scenario; it is already taking place.

Text-based media has embraced artificial intelligence (A.I.) to reduce costs and accelerate production. By cutting out expensive humans, firms can save money and publish faster than they could by relying on people alone.

Last month, when BuzzFeed announced that it would be taking advantage of OpenAI’s services, its stock price saw a distinct surge. The buzzy startup’s technology was meant to add an enriching touch to BuzzFeed’s already beloved quizzes.

The future of media looks dismal: human-owned websites profiting from automated banner ads placed on content authored by bots, indexed by search-engine bots, and rarely even seen by human visitors.

Every few months, we enter a frantic cycle of hype and fear surrounding A.I.-generated content and how widely it might spread. These warnings produce an almost unending stream of apprehension as everyone wonders what the technology could mean for us.

Journalists have not yet been displaced by A.I. and machine-learning tools like ChatGPT, largely because the evidence suggests it is not feasible from a business perspective: these technologies cannot yet reliably generate news, and they lack the skills of the journalists currently working in the field.

The Associated Press, Reuters, and the Washington Post have used automated and A.I. technology since 2014; Bloomberg employs it to individualize news feeds and search results for its readers; the Los Angeles Times deploys it to report swiftly on murders and earthquakes; and the Guardian has made use of it for international record-keeping.

Slate furthered its ongoing commitment to accessibility by transcribing its podcasts automatically, but it also tested something novel: ChatGPT’s ability to suggest advice for Dear Prudence.

By mining data sets, British publications such as the Times and the Press Association have surfaced contemporary trends and customized the newsletters delivered to their readers.

Journalism uses software to take on tedious tasks such as transcription and initial data collection, freeing up time for more in-depth reporting. Automating these chores has given reporters more room to pursue important stories.

The difference now is that some outlets are trying to use ChatGPT and other A.I. tools for more than mere donkey work, striving to squeeze greater efficiencies out of them.

While chatbots can perform a wide range of tasks in our daily lives, they are not suited to the complex and nuanced world of journalism. Their limitations in information gathering, accuracy, and ethical decision-making make them unsuitable for producing reliable and trustworthy news.

Furthermore, using chatbots for news production can harm society by spreading misinformation and eroding trust in journalism. We must therefore continue to invest in and support human journalists, who provide the context, analysis, and ethical judgment necessary for a functioning and trustworthy media ecosystem.

Source: slate.com
