Uncovering Misinformation Through Artificial Intelligence Speech Pattern Analysis

A report in The Federalist revealed that the U.S. federal government is assembling a collection of artificial intelligence and machine learning technologies through federal grants to academics and businesses.

According to the report, the government is attempting to gain tight control over “misinformation” and “disinformation,” using AI and ML to monitor online conversations through more than 500 federally funded contracts and grants awarded since 2020.

The Federalist states that the systems will be able to determine, in real time, where potentially threatening messages or hate speech originate, based on what the government considers dangerous.

This would make it feasible to stop the spread of disfavored communications on the web by dampening a discussion before it goes viral.

Among the organizations receiving federal support is NewsGuard, which received $750,000 from the Small Business Innovation Research program for its “Fingerprints” project, which is intended to monitor false information disseminated online by combining human intelligence and AI.

The Department of Defense has awarded PeakMetrics a $1.5 million grant to create technology for swiftly evaluating and gauging disinformation for its operators. The company monitors 1.5 million press outlets, blogs, social media networks, podcasts, television and radio programs, and email newsletters.

Omelas Inc. was awarded more than $1 million in federal research and development funding. It examines the influence of leading newspapers, TV networks, government organizations, militant groups, and similar sources across social media channels and messaging systems, including RSS feeds, websites, and applications, from countries such as Russia, Iran, and China.

Primer Technologies was granted $3 million for its “social media event monitoring” project. The company’s document “The Strategic Imperative of AI to Speed Up Decision Cycles” explains how its AI technology can scrutinize web-based conversations.

This technology can identify who is involved in a conversation, recognize their emotions, find visuals related to what is being discussed, and flag conflicting views along with who is voicing them.

Primer notifies users of connections that are easily overlooked, such as shifting attitudes across multiple languages in streaming data and stories propagated by automated bots. According to the document, the tool allows users to trace emerging threats and to group images and stories for reporting and further analysis.
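The report does not disclose how Primer’s system is implemented. As a rough illustration of the kind of analysis being described, the following is a minimal sketch using the open-source Hugging Face transformers library to tag named entities and sentiment in a handful of hypothetical social media posts and to flag posts that mention the same entity with opposite sentiment. The model defaults, the example posts, and the “conflicting views” heuristic are assumptions for illustration, not Primer’s actual pipeline.

```python
# Illustrative sketch only: a generic NLP pass (not Primer's system) that tags
# named entities and sentiment in hypothetical social media posts.
from transformers import pipeline

# Off-the-shelf pipelines: named-entity recognition and sentiment analysis.
ner = pipeline("ner", aggregation_strategy="simple")
sentiment = pipeline("sentiment-analysis")

posts = [
    "Governor Smith's new policy is a disaster and everyone knows it.",
    "Governor Smith's policy is working far better than critics claim.",
]

results = []
for post in posts:
    entities = [e["word"] for e in ner(post)]   # who/what is being discussed
    mood = sentiment(post)[0]                   # POSITIVE/NEGATIVE label with a score
    results.append({"post": post, "entities": entities, "sentiment": mood})

# A crude stand-in for "identifying conflicting views": posts that mention the
# same entity but carry opposite sentiment labels are flagged as a disagreement.
for i, a in enumerate(results):
    for b in results[i + 1:]:
        shared = set(a["entities"]) & set(b["entities"])
        if shared and a["sentiment"]["label"] != b["sentiment"]["label"]:
            print(f"Conflicting views about {', '.join(shared)}: "
                  f"{a['sentiment']['label']} vs. {b['sentiment']['label']}")
```

A production system of the scale described in the report would of course add streaming ingestion, multilingual models, and bot detection, but the basic building blocks are of this kind: entity extraction, sentiment scoring, and cross-document comparison.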

Tracking online conversations, building extensive databases of news-related information, and applying AI speech recognition appear to be the primary objectives behind these government programs, whose products are commonly purchased by military organizations such as the U.S. Air Force and Navy.

The use of AI to uncover misinformation through speech pattern analysis represents a notable development in the fight against false information. As the technology continues to evolve, it could play an increasingly important role in promoting accurate, reliable information and combating the spread of falsehoods.

Source: TheBlaze
