
Is “AI Slop” Dividing Us? Why Unity Matters in Ethical AI Work

The Growing Divide: A Threat to Progress
Artificial intelligence is moving fast, and so are the conversations about its risks, potential, and ethical boundaries. Generative AI tools present real challenges: misinformation, opaque training data, content misuse, and embedded bias, among others. Equally real, though less discussed, is a growing rift among advocates. Some now broadly label generative AI users as creators of “AI slop,” a dismissive term originally aimed at low-effort, spam-like outputs.
This framing is more than unhelpful; it is actively counterproductive.
As argued in Atlas of AI, dismissing individuals based on the tools they use can alienate potential contributors, discourage grassroots experimentation, and entrench institutional gatekeeping (Crawford, 2021). The AI ethics movement needs more engagement, not less.
Where the “Slop” Narrative Falls Short
The term “AI slop,” though coined to flag concerns about quality and provenance, increasingly acts as a rhetorical filter: it pushes away people exploring good-faith uses, including those building public-facing tools to counter AI misinformation.
This dynamic creates several harms:
- Suppresses participation: Emerging builders and observers, especially those outside major institutions, may feel pushed out of the conversation (Crawford, 2021).
- Overlooks positive contributions: People using AI-generated outputs to raise awareness or offer educational resources are often swept into blanket criticism.
- Deepens division: Collaborative opportunities are lost, and bad actors benefit from confusion and fracturing within advocacy spaces (O'Neil, 2016).
Who Is Most Affected?
The consequences of exclusion are not distributed equally. Many creators using generative AI to challenge inequality, build cultural archives, or bring technical insight to wider audiences now face undue scrutiny, not for their ethics but for the tools they choose.
Scholarship such as Race After Technology shows how creators working to make AI fairer, from racial representation to dataset integrity, are often sidelined in discourse shaped by legacy institutions (Benjamin, 2019). When people already navigating systemic barriers are branded with a single dismissive term, that label can suppress meaningful work and reinforce existing disparities (Noble, 2018).
Accountability Should Start at the Top
The discomfort fueling these divides is real, especially around corporate misuse of ethics language to mask commercial aims (Matthias, 2019). This tactic, often called “ethics washing,” must remain in the spotlight.
But the solution is not policing every generative output. Attention belongs on the AI systems and developers responsible for harm at scale, not on users engaged in exploratory or values-driven creation.
A Better Path: Shared Standards, Strategic Dialogue
To build trust and integrity across the AI space, we can take clear, inclusive steps:
- Transparency – Promote voluntary watermarking and attribution by creators using AI tools (a minimal sketch of what attribution can look like follows this list).
- Constructive critique – Replace public shaming with peer collaboration and shared standards review.
- Invest in inclusion – Support organizations that train, amplify, and fund creators from communities often left out of traditional innovation pipelines.
- Demand accountability – Push for audits and accountability from large platforms and AI companies rather than discrediting smaller contributors.
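Voluntary attribution can be quite lightweight in practice. The sketch below, in Python using only the standard library, writes a small JSON “sidecar” file next to a generated asset declaring the tool and model that produced it. The function name, sidecar convention, and field names are illustrative assumptions, not an established schema; creators following a formal provenance standard such as C2PA would use its manifest format instead.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_attribution_sidecar(asset_path: str, tool_name: str,
                              model_name: str, prompt: str) -> Path:
    """Write a JSON sidecar next to a generated asset, declaring how it was made.

    All field names here are illustrative assumptions, not a published standard.
    """
    asset = Path(asset_path)
    record = {
        "asset": asset.name,
        # Hash of the file contents so the declaration can be tied to this exact output.
        "sha256": hashlib.sha256(asset.read_bytes()).hexdigest(),
        "generated_with": {"tool": tool_name, "model": model_name},
        # Hash the prompt rather than storing it, in case it contains private detail.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "declared_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = asset.parent / (asset.name + ".attribution.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

# Example (hypothetical file and tool names):
# write_attribution_sidecar("banner.png", "ExampleImageTool", "example-model-v1",
#                           "a calm cover image")
```

A sidecar keeps the declaration separate from the asset, so it works for any file type; embedding the same record in the file's own metadata (for example, EXIF for images) is a common alternative.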
Final Thoughts
Artificial intelligence is already shaping society. Our conversations about ethics and fairness must be just as forward-looking and just as varied. Framing some participants as inherently harmful because of the tools they use creates unnecessary friction, diminishes credible contributions, and leaves the movement open to strategic division.
Technology needs all of us, especially those asking the hard questions and imagining better systems.
References
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity.
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Matthias, A. (2019). Ethics washing in artificial intelligence. Philosophy & Technology, 32(4), 689–705.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
Additional Resources:
Partnership on AI: https://www.partnershiponai.org/
AI Now Institute: https://ainowinstitute.org/
Ethics Guidelines for Trustworthy AI (EU): https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
Algorithmic Justice League: https://www.ajl.org/