
Beyond the Glitch: Oversight Gaps, AI Hallucinations, and Responsible Data Use


By Lucky Star, Responsible AI | Blockchain Educator & Consultant

Introduction


[Image: A vision of charged stillness in a moment of solitude. An eerie lake in the early morning in subdued lighting with a photograph-like surreal aura.]

[May 13, 2025] — In recent months, a growing chorus of industry and academic voices has noted a surprising rise in AI “hallucinations”—instances where advanced models confidently produce plausible-sounding but unfounded content. While systems such as OpenAI’s o3 and o4-mini demonstrate remarkable reasoning and creative abilities, these hallucinations remind us both of technology’s promise and of our shared human imperfection.

 

What Are AI Hallucinations?

AI hallucinations occur when a model outputs information with high confidence that is not grounded in reality. In text-based large language models (LLMs), this might look like an invented quotation attributed to a real person or a fabricated statistic (Xu et al., 2025). A subtype termed “delusions” arises when models assert falsehoods so confidently that they resist correction (Xu et al., 2025). Rather than dismissing these as mere technical quirks, it is more useful to view them as part of a broader information ecosystem, where credible-sounding errors can mislead even expert users (Shao, 2025).

 

How Hallucinations Show Up Across AI Modalities

  • Generative Text & Reasoning Models: Benchmarks report that OpenAI’s o3 and o4-mini hallucinate on roughly one-third to nearly one-half of queries—about twice the rate of earlier reasoning variants—while non-reasoning models like GPT-4o produce far fewer ungrounded outputs (Lenoir, 2025; Murray, 2025).
  • Image Generation: Visual AI can yield “creative anomalies,” such as subtly distorted faces or impossible lighting, that slip past casual inspection because the output otherwise looks photorealistic (van Egmond, 2025; Yahoo Tech, 2025).
  • Everyday Services: In chatbots, virtual tutors, or smart speakers, a single hallucinated claim—whether a wrong dosage or a fabricated policy—can lead to wasted time, confusion, or harm if left unchecked (Infosecurity Magazine, 2025; MIT Sloan School of Management, 2025).

Why We Should All Care


AI now underpins many daily tasks—summaries, recommendations, tutoring. Each hallucination erodes time savings and trust: a student must fact-check a false history, a manager corrects a bogus metric, a patient hesitates at conflicting medical advice (Lenoir, 2025; Shao, 2025). These downstream impacts highlight the importance of verification layers and ongoing awareness of AI’s limits.
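To make the idea of a verification layer concrete, here is a minimal Python sketch, assuming a hypothetical check_claims() helper and a crude lexical-overlap heuristic: each claim in a model’s answer is checked against trusted source passages, and anything unsupported is flagged for human review rather than presented as fact.

```python
# A minimal, illustrative verification layer (hypothetical names throughout):
# every claim in a model's answer must be backed by at least one trusted source
# passage before it reaches the user; anything unsupported goes to human review.
from dataclasses import dataclass

@dataclass
class ClaimCheck:
    claim: str
    supported: bool
    evidence: str | None  # the source passage backing the claim, if any

def check_claims(claims: list[str], sources: list[str]) -> list[ClaimCheck]:
    """Naive lexical-overlap check; a production system would use retrieval plus
    an entailment model, but the gating logic would look much the same."""
    results = []
    for claim in claims:
        claim_words = set(claim.lower().split())
        best = max(sources, key=lambda s: len(claim_words & set(s.lower().split())))
        overlap = len(claim_words & set(best.lower().split())) / max(len(claim_words), 1)
        supported = overlap >= 0.5
        results.append(ClaimCheck(claim, supported, best if supported else None))
    return results

if __name__ == "__main__":
    sources = ["The recommended adult dose is 500 mg every 8 hours."]
    claims = [
        "The recommended adult dose is 500 mg every 8 hours.",
        "The drug was first approved in 1962.",   # not in the sources: flagged
    ]
    for check in check_claims(claims, sources):
        status = "OK" if check.supported else "NEEDS HUMAN REVIEW"
        print(f"[{status}] {check.claim}")
```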

 

Researchers’ Concerns & Broader Societal Stakes

Experts view the uptick in hallucinations as a symptom of deeper trade-offs and societal risks. Noisy or contradictory training data can embed persistent “delusions” that fine-tuning alone cannot erase (Xu et al., 2025; Shao, 2025). Real-world incidents—from defamatory automated content to misleading security advisories—demonstrate tangible harms (Infosecurity Magazine, 2025; Axios, 2025). Cognitive-science research shows that as LLMs grow more sophisticated, they increasingly mirror human-like intuitive errors—our own cognitive biases—suggesting AI hallucinates much as humans jump to conclusions (Frederick, 2005; LiveScience, 2025).

 

Hallucinations as a Mirror of Human Fragility


Our drive to “fix” AI hallucinations often reflects discomfort with our own fallibility. Cognitive-psychology studies document how narrative biases, overconfidence, and the Dunning-Kruger effect—the tendency of those least competent to overestimate their abilities—shape both human and machine missteps (Frederick, 2005). AI’s confident errors prompt us to confront our instinct to hide mistakes rather than accept them as part of design. Yet neither blind acceptance nor insistence on total control is prudent; robust guardrails—human-in-the-loop oversight, transparent audits, and continuous evaluation—let us learn from errors while preventing harm.

 

Data Colonialism, Informed Consent & Equitable AI Governance

Biometric Data & the Risk of AI Misuse

Biometric experiments in blockchain, such as WorldCoin’s iris-scanning “Orb,” offer a stark warning of how technology rollouts can repeat past transgressions and feed directly into AI development. Lucky Star’s analysis shows that over 4.5 million irises were captured—often in regions with minimal data protection—without truly informed consent (Lucky Star, 2025). Independent researchers confirm that “a lot of their data was collected without informed consent” in impoverished areas of Mexico (Howson, 2023), and privacy regulators in France, Germany, Portugal, and Kenya have repeatedly questioned or halted these practices (Reuters, 2023a; Reuters, 2023b; Reuters, 2023c; Reuters, 2024a). A Kenyan court ultimately ruled WorldCoin’s data collection unlawful, citing failures to perform mandatory impact assessments and secure valid consent (ICJ Kenya, 2025).


These patterns go beyond token privacy concerns—they risk creating vast biometric repositories that fuel AI training pipelines without oversight or benefit to the communities involved. In effect, vulnerable populations become unknowing “test subjects,” their unique physiological data used to optimize facial-recognition algorithms, synthetic avatars, or even emotion-detection systems. Such non-consensual extraction exemplifies data colonialism: valuable biometric and behavioral inputs are harvested from underserved groups, then monetized and deployed globally, while the original data sources remain excluded from decision-making and profit (Appleby, 1999).

Comparable issues have arisen in large-scale image-scraping initiatives that disproportionately ensnare Black and other underrepresented individuals, creating biased AI datasets without transparency or redress (Clearview AI Surveillance Tech, 2025). The result is a feedback loop where historic power imbalances are encoded into AI systems—undermining trust, amplifying imbalance, and opening the door to further misuse.

These examples underscore the urgent need to tie concerns about AI hallucinations back to the upstream practices that shape model behavior. If non-consensual biometric and visual data feed into AI training sets, those same models may reproduce or exacerbate ethical blind spots—hallucinating in ways that disproportionately harm the very communities from which the data was extracted. Without equitable governance and genuine consent, we risk perpetuating a cycle of technological harm under the guise of innovation.

Practices to consider:

  • Implement clear, accessible consent protocols that explain data use, retention, and sharing in local languages (CIPIT Strathmore, 2023).
  • Mandate Data Protection Impact Assessments with community representation and publish results openly (ICJ Kenya, 2025).
  • Enforce data sovereignty: participants must have the right to withdraw consent and to access, correct, or delete their data (a minimal sketch of such a consent check follows this list).
  • Establish equitable benefit-sharing: ensure that data providers receive tangible returns—financial, educational, or infrastructural—from AI projects.
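As one illustration of the data-sovereignty point above, the following sketch models a minimal consent registry that a pipeline would consult before using any biometric sample. It is a sketch under stated assumptions: the ConsentRecord and ConsentRegistry names, fields, and methods are hypothetical, not drawn from any real system or statute.

```python
# Illustrative only: a minimal consent registry that a data pipeline could consult
# before using any biometric sample. Class and field names (ConsentRecord,
# ConsentRegistry, may_use) are hypothetical, not drawn from any real system or law.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    subject_id: str
    purposes: set[str]              # e.g. {"identity_verification"}, not {"model_training"}
    language: str                   # consent explained in the participant's own language
    granted_at: datetime
    withdrawn_at: datetime | None = None

class ConsentRegistry:
    def __init__(self) -> None:
        self._records: dict[str, ConsentRecord] = {}

    def grant(self, record: ConsentRecord) -> None:
        self._records[record.subject_id] = record

    def withdraw(self, subject_id: str) -> None:
        """Right to withdraw: downstream pipelines must stop using this data."""
        record = self._records.get(subject_id)
        if record:
            record.withdrawn_at = datetime.now(timezone.utc)

    def may_use(self, subject_id: str, purpose: str) -> bool:
        """Check consent before every use, including AI training."""
        record = self._records.get(subject_id)
        return bool(record and record.withdrawn_at is None and purpose in record.purposes)

    def erase(self, subject_id: str) -> None:
        """Right to deletion: remove the record entirely."""
        self._records.pop(subject_id, None)
```

In this hypothetical design, a training pipeline would call may_use(subject_id, "model_training") before each ingest, so a withdrawal takes effect immediately rather than at the next audit cycle.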

 

Technology as Modern Faith, Its Ancient Roots & a Path Forward

Many today treat AI with near-religious devotion or dread—expecting miracles or fearing apocalypse. Yet in antiquity, inquiry and spirituality were inseparable: medieval European scholars saw natural philosophy as a path to divine understanding, and thinkers of the Islamic Golden Age regarded scientific exploration as a sacred duty (Gutas, 1998; Wikipedia, 2025). It was only during the ‘Enlightenment’ that a clear divide emerged between empirical science and faith (Appleby, 1999).

Recognizing this shared connection can temper modern technological faith with philosophical humility and guide us toward responsible AI stewardship. To honor both our creative ambitions and our collective responsibility, we can:

  • Embed human-in-the-loop oversight and continuous feedback loops, ensuring that errors—whether hallucinatory outputs or ethical oversights—are identified, analyzed, and learned from in real time (see the sketch after this list).
  • Foster transparent governance by engaging technologists, ethicists, and community representatives (especially from historically underserved regions) in defining data-use policies, consent standards, and accountability measures.
  • Build adaptive, resilient processes that accept complexity and uncertainty, shifting from a mindset of total control to one of collaborative stewardship—where AI systems evolve alongside human insight and societal values.
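The sketch below illustrates the first of these points in code, under clearly hypothetical assumptions: generate() and verify() stand in for a real model call and a real grounding check, low-confidence or unverified answers are escalated to a human reviewer, and every decision is appended to an audit log that feeds the continuous evaluation loop.

```python
# A minimal human-in-the-loop sketch. generate() and verify() are placeholders for a
# real model call and a real grounding check; the threshold, reviewer callback, and
# audit-log format are assumptions for illustration only.
import json
from datetime import datetime, timezone

AUDIT_LOG = "oversight_audit.jsonl"   # hypothetical location for the audit trail

def generate(prompt: str) -> tuple[str, float]:
    """Placeholder model call returning (answer, self-reported confidence)."""
    return "Example answer", 0.42

def verify(answer: str) -> bool:
    """Placeholder grounding check (retrieval, entailment, policy rules, etc.)."""
    return False

def answer_with_oversight(prompt: str, reviewer, threshold: float = 0.8) -> str:
    answer, confidence = generate(prompt)
    grounded = verify(answer)
    escalated = confidence < threshold or not grounded
    final = reviewer(prompt, answer) if escalated else answer
    # Continuous feedback: every decision is logged so the loop can be audited
    # and the threshold, prompts, or model improved over time.
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps({
            "time": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt, "model_answer": answer, "final_answer": final,
            "confidence": confidence, "grounded": grounded, "escalated": escalated,
        }) + "\n")
    return final

if __name__ == "__main__":
    human_reviewer = lambda prompt, answer: f"[reviewed] {answer}"
    print(answer_with_oversight("What is the correct dosage?", human_reviewer))
```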

By recognizing that AI hallucinations reflect not just technical faults but our own cognitive shortcuts—overconfidence, narrative bias, and the Dunning–Kruger effect (Xu et al., 2025; Frederick, 2005)—we see how easily falsehoods can feel true. These errors also mirror patterns of data colonialism, where vulnerable communities supply data without fair compensation or genuine consent (Appleby, 1999; Howson, 2023).

This article was born from a pressing question: What are the consequences when systems we trust present fabricated information as fact? The increased incidence of AI hallucinations reveals both the limits of machine reasoning and the risks of unchecked human trust. Yet within these challenges lies an opportunity: by embedding human-in-the-loop oversight, fostering transparent governance, and building adaptive, resilient processes, we can turn AI’s fragility into a foundation for more equitable and reliable technology.

You—the reader—have the power to shape this outcome. Your engagement, your critical questions, and your advocacy for informed consent and ethical data practices will influence policy and guide corporate behavior. The urgency is real: decisions made today about data use, consent and oversight will determine whether AI becomes a tool for the few or a benefit for all. May this insight allow you to stand ready to participate, to question, and to collaborate—so that AI advances with accountability, integrity and respect for human dignity.

 

Sources & References
Appleby, R. S. (1999). The ambivalence of the sacred: Religion, violence, and reconciliation (Carnegie Commission on Preventing Deadly Conflict series). Lanham, MD: Rowman & Littlefield. Retrieved from https://rowman.com/ISBN/9780847685547/The-Ambivalence-of-the-Sacred-Religion-Violence-and-Reconciliation
Axios. (2025, April 23). Advanced AI gets more unpredictable. Retrieved from https://www.axios.com/2025/04/23/ai-jagged-frontier-o3
Clearview AI Surveillance Tech. (2025, May). Clearview AI surveillance tech allegedly designed to target minorities. LinkedIn. Retrieved from https://www.linkedin.com/pulse/clearview-ai-surveillance-tech-allegedly-designed-target-minorities-ch6ne
CIPIT Strathmore. (2023). Case commentary on Worldcoin in Kenya. Retrieved from https://cipit.strathmore.edu/case-commentary-on-worldcoin-in-kenya/
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25–42. Retrieved from https://doi.org/10.1257/089533005775196732
Gutas, D. (1998). Greek thought, Arabic culture: The Graeco-Arabic translation movement in Baghdad and early ʻAbbāsid society. London, UK: Routledge. Retrieved from https://www.routledge.com/Greek-Thought-Arabic-Culture-The-Graeco-Arabic-Translation-Movement-in-Baghdad-and-Early-Abbasaid-Society-2nd-4th-5th-10th-c/Gutas/p/book/9780415061339
Howson, P. (2023, April). Crypto for biometrics? Privacy fears as Worldcoin scans Mexicans. Economic Times. Retrieved from https://www.reuters.com/article/business/media-telecom/feature-crypto-for-biometrics-privacy-fears-as-worldcoin-scans-mexicans-idUSL1N39A07M
ICJ Kenya. (2025, May). ICJ Kenya welcomes ruling declaring Worldcoin’s biometric data collection unlawful. Eastleigh Voice. Retrieved from https://eastleighvoice.co.ke/worldcoin/146166/icj-kenya-welcomes-ruling-declaring-worldcoin-s-biometric-data-collection-illegal
Infosecurity Magazine. (2025, May). AI’s dark side: The emergence of hallucinations in the digital age. Retrieved from https://www.infosecurity-magazine.com/opinions/ai-dark-side-hallucinations/
Lenoir, M. (2025, April 18). OpenAI’s new reasoning AI models hallucinate more. TechCrunch. Retrieved from https://techcrunch.com/2025/04/18/openais-new-reasoning-ai-models-hallucinate-more/
LiveScience. (2025, April). AI is just as overconfident and biased as humans can be, study shows. Retrieved from https://www.livescience.com/technology/artificial-intelligence/ai-is-just-as-overconfident-and-biased-as-humans-can-be-study-shows
Lucky Star. (2025). The human cost of digital currency: What biometric experiments in Web3 reveal. Lucky Star AI. Retrieved from https://luckystar.ai/blogs/blockchain/the-human-cost-of-digital-currency-what-biometric-experiments-in-web3-reveal
Murray, C. (2025, May 6). Why AI “hallucinations” are worse than ever. Forbes. Retrieved from https://www.forbes.com/sites/conormurray/2025/05/06/why-ai-hallucinations-are-worse-than-ever/
MIT Sloan School of Management. (2025). When AI gets it wrong: Addressing AI hallucinations and bias. Retrieved from https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/
Reuters. (2023a, July 28). France’s privacy watchdog questions legality of Worldcoin biometric data collection. Retrieved from https://www.reuters.com/technology/frances-privacy-watchdog-says-worldcoin-legality-seems-questionable-2023-07-28/
Reuters. (2023b, July 31). German data watchdog probing Worldcoin crypto project, official says. Retrieved from https://www.reuters.com/technology/german-data-watchdog-probing-worldcoin-crypto-project-official-says-2023-07-31/
Reuters. (2023c, September 1). Scrutiny of iris-scanning crypto project Worldcoin grows. Retrieved from https://www.reuters.com/technology/scrutiny-iris-scanning-crypto-project-worldcoin-grows-2023-09-01/
Reuters. (2024a, March 26). Portugal orders Sam Altman’s Worldcoin to halt data collection. Retrieved from https://www.reuters.com/markets/currencies/sam-altmans-worldcoin-ordered-stop-data-collection-portugal-2024-03-26/
Shao, A. (2025, April). Beyond misinformation: A conceptual framework for studying AI hallucinations in (Science) communication (arXiv:2504.13777). Retrieved from https://arxiv.org/abs/2504.13777
van Egmond, R. (2025, April). New OpenAI models hallucinate more often than their predecessors. Techzine. Retrieved from https://www.techzine.eu/news/applications/130720/new-openai-models-hallucinate-more-often-than-their-predecessors/
Xu, H., Yang, Z., Zhu, Z., Lan, K., Wang, Z., Wu, M., Ji, Z., Chen, L., Fung, P., & Yu, K. (2025). Delusions of large language models (arXiv:2503.06709). Retrieved from https://arxiv.org/abs/2503.06709
Wikipedia. (2025, May 10). Relationship between religion and science. Retrieved from https://en.wikipedia.org/wiki/Relationship_between_religion_and_science
Yahoo Tech. (2025, April). OpenAI’s hot new AI has an embarrassing problem. Retrieved from https://tech.yahoo.com/articles/openais-hot-ai-embarrassing-problem-193401856.html

 

This article is intended for informational purposes only. For direct consultation, please contact Lucky Star.
 