In December 2022, the moderators of the popular r/AskHistorians Reddit forum began noticing posts that bore the hallmarks of AI-generated text.
“They were pretty easy to spot,” said Sarah Gilbert, a postdoctoral associate at Cornell University and one of the forum’s moderators. “They’re not in-depth, they’re not comprehensive, and they often contain false information.”
When ChatGPT was unveiled last year, the team quickly realized that their small corner of the web had become a target for its output. The launch also set off a hype cycle that has yet to let up.
ChatGPT’s evangelists have declared that the technology could eliminate hundreds of millions of jobs, show glimmers of singularity-like artificial general intelligence, and even destroy the world – all, supposedly, reasons to buy in immediately. Far less attention has gone to its more immediate effects, such as a flood of AI-generated content washing over the web.
On AskHistorians, which has two million members, non-experts can pose history questions to experts. Recent posts ask whether being “on time” is a modern concept, what a medieval scribe would do if the monastery cat left an inky paw print on their vellum, and how Genghis Khan got fiber in his diet.
At first the forum saw five to ten ChatGPT-generated posts a day, a number that grew as more people became aware of the tool, according to Gilbert. It has since declined, which the team attributes to its strict handling of AI-generated content: even answers that aren’t removed for being written by ChatGPT often fail to meet the sub’s quality standards.
The moderators suspect that some ChatGPT posts are made simply to test whether they can be detected. There is also evidence of astroturfing, spamming, and “karma farming,” in which accounts build up a history of seemingly genuine activity over time so they can later be put to illicit use.
The problem of ChatGPT-powered bots on Reddit is currently “pretty bad,” according to a moderator familiar with the site’s moderation systems who asked to remain anonymous. A few hundred accounts have already been removed, and more are found daily – mostly by hand, because AI-generated content is still too difficult for automated systems to catch reliably. Reddit declined to comment.
In February, AskHistorians and other subreddits were hit by a coordinated attack from ChatGPT-powered bots. According to Gilbert, the botnet fed questions from AskHistorians into ChatGPT and posted the answers through a network of shill accounts. The same botnet surfaced on many other “ask” subreddits, including r/AskWomen, r/AskEconomics, and r/AskPhilosophy.
The difficulty, Gilbert explains, was not recognizing that ChatGPT had written the bots’ answers but “the speed at which they were being sent.” At the attack’s peak, the moderators banned 75 accounts a day over a three-day period. There is no definitive proof of the attack’s purpose, though some of the posts promoted a video game.
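The attack's signature was volume rather than content: dozens of accounts posting long, essay-like answers faster than any human could write them. The sketch below illustrates the kind of posting-velocity heuristic a moderation bot could automate; it is not an actual AskHistorians or Reddit tool, and the thresholds and data shapes are assumptions for illustration.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative heuristic: flag accounts that publish many long answers
# within a short window -- behavior typical of the botnet described above.
# Thresholds (5 posts/hour, 400+ characters) are assumptions, not any
# subreddit's real rules.
def flag_suspects(posts, max_posts=5, window=timedelta(hours=1), min_len=400):
    """posts: iterable of (author, timestamp, body) tuples.

    Returns the set of authors who posted `max_posts` or more long
    answers inside any single `window`-sized span of time.
    """
    by_author = defaultdict(list)
    for author, ts, body in posts:
        if len(body) >= min_len:  # only count long, "in-depth" answers
            by_author[author].append(ts)

    suspects = set()
    for author, times in by_author.items():
        times.sort()
        # Slide a window of `max_posts` consecutive timestamps and check
        # whether they all fall within `window` of each other.
        for i in range(len(times) - max_posts + 1):
            if times[i + max_posts - 1] - times[i] <= window:
                suspects.add(author)
                break
    return suspects
```

A real deployment would combine a signal like this with account age, karma history, and text-similarity checks, since velocity alone would also flag unusually prolific humans.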
A recent Reddit transparency report highlighted the scale of spam and “astroturfing” – fake accounts set up to promote a product – on the platform, a problem that generative AI like ChatGPT could make much worse. Before this technology, astroturfing mostly meant the same text being reposted across multiple accounts. With ChatGPT, entirely new spam posts can be generated in seconds.
U/abrownn, a moderator of r/Technology, which has 14 million subscribers, said the bot problem was already dire and that Reddit’s automated spam blockers were barely effective – by the time they kicked in, most bots had already finished their work.
Contrary to what many might assume, bots on Reddit are mainly used for promotion rather than political interference. The accounts push adult-oriented products such as marijuana and Delta-8 items, pornography, and gambling, as well as drop-shipped goods – purchases that often end in credit card fraud, the wrong product, or nothing arriving at all.
Beyond r/AskHistorians, moderators of subreddits such as r/AskPhilosophy, r/AskEconomics, and r/Cybersecurity say ChatGPT has caused problems that remain manageable for now. As one AskPhilosophy moderator put it: “ChatGPT has a style which can easily be recognized, however the deciding factor is its quality – it appears to be very weak in philosophy.”
The same moderator believes it is only a matter of time before another bot attack, and expects the bots to get better at evading detection. Either ChatGPT comments have become rare, they said, or the spammers are getting better at fooling them.
As ChatGPT grows more popular and accessible, Reddit’s moderators will have to keep adapting – staying vigilant and refining their defenses – if they want to stay ahead of a potential spam apocalypse and keep their communities worth reading.
Source: @vice