Ideological warfare: Student builds AI tool to deradicalize extremists on Reddit


A new AI tool called PrismX has surfaced, built by Sairaj Balaji, a computer science student at SRMIST in Chennai, India, and designed to track Reddit users showing signs of radicalization.

According to 404Media, PrismX scans for specific keywords, assigns users a "radical score," and can deploy an AI chatbot to try to steer them away from extremist views. Balaji demonstrated this to 404Media by having PrismX search Reddit for "fgc9," a type of 3D-printed weapon. The tool then presented a list of users discussing the term, analyzed their posts using a large language model, and assigned "radical scores." For example, one user scored 0.85 out of 1 because the tool determined they were "seeking detailed advice on manufacturing firearms with minimal resources, referencing known illicit designs (FGC8, Luty SMG)," which the tool interpreted as intent to circumvent legal channels and "a behavior strongly associated with extremist or radical circles." After this initial scan and scoring, PrismX can then focus on a particular user to provide assessments of their "radical affinity," "escalation potential," "group influence," and "psychological markers."
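To make the reported workflow concrete, here is a minimal sketch of a keyword-scan-and-score pipeline of the kind 404Media describes. Everything in it is an assumption for illustration: the function names, the hard-coded posts, and the phrase-weighting stand-in for what PrismX reportedly delegates to a large language model are all hypothetical, not Balaji's actual code.

```python
import re
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def keyword_scan(posts, keyword):
    """Return posts whose text mentions the keyword (case-insensitive)."""
    pattern = re.compile(re.escape(keyword), re.IGNORECASE)
    return [p for p in posts if pattern.search(p.text)]

def radical_score(post):
    """Toy stand-in for the LLM call: sum weights of risk phrases found
    in the post and clamp to the 0..1 range. (PrismX reportedly asks a
    large language model to produce this score instead.)"""
    risk_phrases = {"manufacturing firearms": 0.5, "fgc": 0.3, "luty": 0.2}
    text = post.text.lower()
    score = sum(w for phrase, w in risk_phrases.items() if phrase in text)
    return min(score, 1.0)

# Stubbed data in place of a live Reddit fetch.
posts = [
    Post("user_a", "Looking for advice on manufacturing firearms, FGC-9 style"),
    Post("user_b", "I printed a benchy on my new 3D printer"),
]
flagged = keyword_scan(posts, "fgc")
scores = {p.author: radical_score(p) for p in flagged}
print(scores)
```

In a real deployment the stubbed post list would come from a platform API and the phrase table would be replaced by a model prompt, but the shape of the pipeline (filter by keyword, then score each hit) matches the reported behavior.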

Here are some of the core capabilities of PrismX, according to a technical brief published by Balaji (spotted by Neowin):

  • Live OSINT Scan: Monitors platforms like Reddit in real time, flagging users based on ideological language and emotional cues.
  • Psychological Profiling Engine: Assigns psychographic archetypes such as "Zealot" (inflexible extremist), "Idealist" (emotionally persuadable), and "Atomized" (isolated, volatile).
  • Active Engagement Module: Deploys AI personas to interact with users, capable of pushing them toward deradicalization or, conversely, escalating their loyalty.
  • Action Potential Conditioning: Aims to convert psychological allegiance into a state of "dormant readiness," where individuals might be triggered later by specific phrases or memes.

Balaji's brief ends with a stark warning: "If you think what PrismX does is dangerous — you're right." It further adds, "But if you think no one else is building this already — you're wrong."

He told 404Media he hasn't actually used the "active engagement" side of PrismX on real Reddit users, citing ethical concerns. He also admitted that he has no training or academic background in deradicalization, describing himself as completely a "tech/management guy."

That hasn't stopped people from raising red flags. One commenter on the 404Media piece summed it up this way:

This is ethically dubious at best. It also seems highly likely to be abused depending on how someone defines some behavior as "radical". To some, creating 3D printed weapons secretly is a rational response if living under an oppressive government. What if someone deploying a tool like this thinks it's radical to offer gender affirming therapy to anyone who wants it?

PrismX, according to Balaji, is meant to be "a mirror — one that shows where the next war will be fought: Not with weapons. Not even with truth. But with belief — and those who know how to shape it." He told 404Media, "I'm just a kid in college, if I can do this, can you imagine the scale and power of the tools that may be used by rogue actors?"

The environment where PrismX might be deployed is itself complex. Recent research found that a segment of Reddit users engage in debates not for genuine discussion or to have their views changed, but simply because they enjoy arguing, disagreeing, or even trolling. Any attempt to engage or deradicalize users online must therefore contend with people whose motivations are far removed from sincere dialogue, complicating the efforts of any AI, however sophisticated.

PrismX has emerged amid growing concerns about AI misuse. Those fears were intensified after Elon Musk's Grok AI started dropping unprompted comments about "white genocide" in South Africa. The general worry is that such AIs can be used to generate believable misinformation and manipulate public opinion on an unprecedented scale.
