Slopaganda: the interaction between propaganda and generative AI
A summary of work by Michał Klincewicz, Mark Alfano, Amir Ebrahimi Fard
At least since Francis Bacon, the slogan “knowledge is power” has been used
to capture the relationship between decision-making at a group level and
information. We know that being able to shape the informational environment
for a group is a way to shape their decisions; it is essentially a way to make
decisions for them. This paper focuses on strategies that are designed to
shape the decision-making capacities of groups, and thereby their ability
to take advantage of information in their environment.
Among these, the best known are political rhetoric, propaganda, and
misinformation.
The phenomenon this paper singles out among these is a relatively new strategy, which we call slopaganda. According to The Guardian, News Corp Australia is currently churning out 3,000 “local” generative AI (GAI) stories each week.
In the coming years, such “generative AI slop” will present multiple knowledge-related (epistemic) challenges. We draw on contemporary research in cognitive science and artificial intelligence to diagnose the problem of slopaganda, describe some recent troubling cases, and then suggest several interventions that may help to counter it.



