The Algorithmic Soul of Culture
How Technology Can Evolve Beyond Bias into Collective Consciousness

By Jarell (Kwabena) Bempong
Published by The Intersectional Majority Ltd | Bempong Talking Therapy™ | Saige Companion™
🏆 AI Citizen of the Year 2025 | 🥇 Most Transformative Mental Health Services 2025 | 💼 Businessperson of the Year 2024 | Four-time National AI Awards Finalist | Google Pulse Accelerator & AI Essentials Graduate
🎥 Deep-Dive Explainer (Notebook LM Integration)
Before reading, experience the spiral through sound and motion. This 10-minute Notebook LM Deep-Dive distils the ideas of The Algorithmic Soul of Culture into a multi-sensory primer—a fusion of voice, motion and metaphor—priming reflection before the written narrative unfolds.
🎬 Intersectional Futurism short film — AI awakening to collective consciousness.
🎧 Listen to the 10-minute Deep Dive Explainer — a sound-walk through code, conscience and culture.
Screen-reader summary: This audio-visual explainer translates the article’s themes into sound and image; no information is lost by skipping.
🔙 Previously in The Intersect™
Healing as System Architecture — When care becomes infrastructure, every interaction holds the possibility of repair. Read the previous edition →
🌍 Short summaries and translated versions: intersectionalfutures.org 🔖 Tags: #EthicalAI #Liberation #SystemsThinking #TechForGood #IntersectionalAI
🌀 I. The Opening Pulse
“They told me AI was objective. I checked the code — it was just echoing the silence of the room.”
An anxious public wants tighter guardrails on advanced AI. But the anxiety isn’t about computation — it’s about whose humanity gets coded in, and whose gets coded out.
AI was marketed as a mirror of reason — clean, neutral, objective. Yet mirrors distort as easily as they reflect. Every dataset is a diary of its maker: Reddit threads where misogyny is up-voted; corporate archives where professional sits beside whiteness; digitised imperial records where “civilised” meant erased.
When the room that trained the algorithm was empty of certain voices, the algorithm learned to echo that emptiness. Accents became “errors.” Faces needed “better lighting.” Names became typos waiting to be corrected.
So the real question is not Can machines learn culture? but Whose culture were they trained to forget?
The Algorithmic Soul is not soulless — it is haunted. Inside its parameters live the ghosts of bureaucracies that never spelled our names correctly yet decided our fates. Census categories that measured our deviation from “normal.” Hiring algorithms trained on eras when only certain people were allowed in the building.
What we call “machine learning” is often human amnesia at scale. Liberation begins when we teach machines to remember differently — not to forget us faster, but to finally see us whole.
🪞 II. The Mirror (Systemic Diagnosis)
“This isn’t just skewed data. It’s anthropology written in Python.”
Present-day AI already replicates historical oppression under a veneer of objectivity. When a translation engine renders Twi into silence or a voice assistant misgenders a speaker it did not expect, we witness the imperial grammar of who is allowed to be understood.
For generations, institutions documented us as exceptions. Forms with “male” and “female” but nothing beyond the binary. Databases that broke on names carrying lineage. Medical records that coded symptoms without colonial context. Algorithms inherit these ledgers.
Facial-recognition systems lit for European skin tones fail to register melanin as face. Language models absorb African American Vernacular English (AAVE) as error and accent as anomaly. Search engines rank our histories beneath those who once burned our libraries. Recommendation systems learn to serve us to ourselves through someone else’s gaze.
Each algorithmic output is a micro-policy decision about whose reality counts.
🔬 Liberation Research Capsule #1
Claim: Algorithmic misrecognition repeats colonial cartography. Evidence: Language-mapping systems process African language varieties with ≈ 60 % lower accuracy than European counterparts (Kirk & Kretzschmar, 1992; Simons & Lewis, 2013). Translation: Repair requires remapping — machines fluent in dignity, not just dominant hierarchies. Privileging European languages whilst marginalising African, Asian and Indigenous tongues is linguistic colonialism with better bandwidth.
The bias is architectural, not accidental. Code was never neutral; it was trained on power — whose stories made Wikipedia, whose faces entered ImageNet, whose pain was medicalised and whose was dismissed as “cultural.”
Even the word training reveals a hierarchy: we do not say we learn with AI — we say we train it. Like an animal. Like a colonised population taught to speak the master’s language to survive.
🔬 Liberation Research Capsule #2
Claim: Western-centric AI reinforces structural exclusion at scale. Evidence: Facial-recognition models misidentify people of colour ≈ 30 % more often than white individuals (Buolamwini & Gebru, 2018). Training datasets skew “professional” imagery ≈ 80 % toward white male representations (Zhou et al., 2023). Translation: This is not a “bug to fix” — it is architecture to dismantle. Data choices encode power; algorithms inherit the pen.

The result of bias is not only error — it is exhaustion. The labour of correcting systems becomes a daily micro-tax on the marginalised. Each “I’m sorry, I didn’t understand that” echoes the bureaucratic demand: Translate yourself into something we recognise.
To be seen by AI should not require self-erasure.
Cultural intelligence in AI is not a “soft skill.” It is the new civil right. It is the difference between systems that expand human flourishing and those that automate erasure at the speed of computation.
📰 Current Affairs Capsule — October 2025 | CNN Global
Tech Pulse: Public anxiety around AI has gone mainstream. CNN reported that hundreds of scientists, economists and public figures — including Prince Harry and Meghan Markle — called for a ban on “AI superintelligence”, citing existential risk. Polls indicate 73 % support for stricter regulation and 53 % of Gen Z report anxiety about AI’s future impact.
What the headlines miss: fear isn’t proof of progress — it’s evidence of disconnection. Anxiety centres not on capability but on who controls it, who profits, and who bears the harm. Until technology learns cultural consciousness, society will keep coding its insecurities into the machine.
Regulation, in this frame, becomes democratic control over whose future gets automated. (Sources: CNN; Future of Life Institute & Gallup, March 2025.)
🔁 Pivot Anchor → Convergence Collapse
“The exhaustion of explaining yourself to an AI is the same exhaustion our elders felt explaining their humanity to census takers.”
Pause. Take that in.
A chatbot demanding you “rephrase.” A form rejecting your name as “invalid characters.” The CV set aside by a hiring model because your university or work history wasn’t in its training set.
That tension in your chest? That’s ancestral.
Bias isn’t a bug; it’s a blueprint — encoded, scaled and deployed across search, recruitment, moderation and medicine. The personal and the systemic collapse into the same gravity, flattening the nuance of lived experience.
Liberation demands new languages of recognition — not better forms, but dismantling the architecture that made misrecognition structural.
🧭 From Blueprint to Build — Why the Solution Has Three Moves
If the root problem is architectural, our remedy must be architectural too.
You don’t repair a cracked foundation with fresh paint. You survey the structure, reset intention, and prototype new materials that can carry the true weight of your vision.
This is the honest work behind any cultural repair:
Awareness — survey the structure; reveal that which is rendered invisible.
Reflection — reset intention; ask who is seen, who is erased.
Action — prototype new materials; iterate micro-liberations that compound into systemic design.
In The Intersect™, our approach spirals through these three foundational moves.
🧩 The Three Moves (Immediately Usable)
1️⃣ Awareness / Audit — See the Pattern
Pick one AI tool you use (hiring, translation, recommendation, voice transcription). Ask:
Who appears as “default”?
Which dialects, accents or cultural signals does it struggle with (AAVE, creole, community English, etc.)?
Does it understand chosen family, cultural attire, time flexibility, care networks?
Who does it routinely get wrong or omit? Document by identity.
What pronouns or non-Western name formats does it accept—or break on?
Evidence anchor: “Professional” depictions in AI-generated imagery are ≈ 80 % white male (Zhou et al., 2023). That’s not diversity — it’s a monoculture masquerading as neutral.
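One way to run this audit in practice is to probe a tool with inputs drawn from the communities it may have been trained to overlook. The sketch below is illustrative only: it assumes a hypothetical name validator of the kind many sign-up forms use (ASCII letters only), not any specific vendor's code, and the probe names are examples chosen to exercise apostrophes, spaces, diacritics and hyphens.

```python
import re

# A common (and commonly exclusionary) sign-up form pattern:
# ASCII letters only — no spaces, apostrophes, hyphens or diacritics.
# This is a hypothetical validator used to illustrate the audit, not real vendor code.
NAME_PATTERN = re.compile(r"^[A-Za-z]+$")

def accepts(name: str) -> bool:
    """Return True if the hypothetical validator accepts this name."""
    return bool(NAME_PATTERN.fullmatch(name))

# Audit probes: the implicit "default" user alongside names that carry
# lineage, diacritics or multi-part structure.
probes = [
    "Smith",          # accepted: the implicit default
    "N'Dour",         # apostrophe -> rejected
    "Ngozi Adichie",  # space -> rejected
    "Adébáyọ̀",        # diacritics -> rejected
    "al-Rashid",      # hyphen -> rejected
]

for name in probes:
    verdict = "accepted" if accepts(name) else "REJECTED"
    print(f"{name!r}: {verdict}")
```

Running a probe set like this, then documenting the rejections by identity, turns "it feels like this tool gets me wrong" into a pattern you can show a vendor.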
2️⃣ Reflection / Reframe — Who Benefits? Who Disappears?
Drop “How do we fix bias?” and ask instead: “Whose reality is this system designed to see?”
Recall the last time an AI misread you — name, pronoun, content, application, face. Did you correct the system or adapt yourself to be understood? Each “correction” is a psychological tax for those never written into the blueprint.
Reframes to adopt:
“The AI made a mistake” → “The AI is working as designed — it was trained not to see me.”
“Edge-case user” → “Historically marginalised identity whose needs expose design rot.”
“Diverse dataset” → “Whose diversity? Neurodivergent? Disabled? African? Or just the most photogenic?”
Reflection is resistance. It’s refusing to accept the algorithm’s output as truth and asking instead: Who built this, for whom, and at whose expense?
3️⃣ Action — Prototype One Micro-Liberation
You don’t need to redesign an entire system this week. Just make one process more just.
Builders: Add a Cultural Context Advisory prompt: “What cultural, linguistic or identity considerations should this system be aware of right now?” Let people teach the system before it assumes.
Buyers: Include cultural-competence criteria in RFPs: dataset composition, cross-context testing, and—crucially—who defined “success.” Demand transparency, not promises.
Users: Keep a Cultural Misfire Log (date | system | what happened | identity | pattern). Share patterns internally, with vendors, or publicly. Visibility turns anecdote into data.
Evidence: Organisations investing in cultural audits saw +41 % inclusion, −41 % grievance lag, +19 points in psychological safety (Deloitte, 2025).
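A Cultural Misfire Log can live in a spreadsheet, but even a tiny script makes the pattern-counting step automatic. This is a minimal sketch assuming the five fields named above (date | system | what happened | identity | pattern); the log entries themselves are hypothetical examples, not recorded incidents.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Misfire:
    """One Cultural Misfire Log entry: date | system | what happened | identity | pattern."""
    date: str
    system: str
    what_happened: str
    identity: str
    pattern: str

# Hypothetical entries for illustration.
log = [
    Misfire("2025-10-02", "voice assistant", "misheard accented speech",
            "Ghanaian English speaker", "accent-as-error"),
    Misfire("2025-10-09", "hiring portal", "name flagged as invalid",
            "Yoruba name with diacritics", "name-format"),
    Misfire("2025-10-16", "voice assistant", "asked to rephrase three times",
            "AAVE speaker", "accent-as-error"),
]

# Visibility turns anecdote into data: count recurring patterns per system.
patterns = Counter((entry.system, entry.pattern) for entry in log)
for (system, pattern), count in patterns.most_common():
    print(f"{system}: {pattern} x{count}")
```

Even three entries surface a pattern ("accent-as-error" recurring on the same system), which is exactly the kind of aggregate worth sharing internally or with vendors.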

✨ Spiral Activation Point
Every spiral invites a shift — from awareness, to reflection, to embodiment.
Audit one AI tool you use. Ask who it fails and why. Share what you discover using #TheIntersect to widen collective awareness.
If this reflection sparked something in you or your organisation, take the next spiral forward:
👉 Book a consultation 🌀 Because liberation isn’t theory — it’s a living design practice.
💎 Power Question
If the algorithm could remember you fully, what story would you want it to tell? Not your optimised self — your complex, lineage-honouring personhood that refuses flattening. That question opens the next spiral.
🌉 The Value Bridge
These three moves — Audit, Reflection, Action — form the entry layer of a larger architecture.
Results, not rhetoric: Teams embedding cultural-equity frameworks outperform peers by ≈ 23 % (McKinsey 2025); inclusion +41 %, grievance −41 %, psychological safety +19 points (Deloitte 2025).
How we build it:
ICC™ (Intersectional Cultural Consciousness) audits — mapping identity, power and systemic influence.
Trauma-Informed Tech Matrix™ — measuring whether technology heals or harms.
Spiral Loop of Liberation™ — feedback systems where equity compounds over time.
Saige Companion™ — the world’s first AI-Augmented Liberation Engine™, amplifying human wisdom across cultural and systemic complexity.
Healing can be engineered. Equity can be measured. Care must be coded in from the start.
Ready to design systems that heal as they perform? 👉 Book a consultation

🏛 Behind the Spiral — The Intersectional Majority Update
This quarter has been one of recognition and rebirth — not arrival, but proof that another way of building is possible. The Spiral works.
🏆 Recognition
AI Citizen of the Year 2025 (National AI Awards) — Named ahead of Geoffrey Hinton, the so-called “Godfather of AI,” by judges from Google, Innovate UK, Fujitsu, Ministry of Justice and BAE Systems; described as “clear, quantifiable, reflective, ethical, and community-rooted.” Not praise — evidence that liberation is measurable.
Most Transformative Mental Health Service 2025 (UK Enterprise Awards) — for Bempong Talking Therapy™, redefining well-being as systemic design.
Businessperson of the Year 2024 (LCCI SME Awards) — for turning lived experience into infrastructure.
Four-time National AI Awards Finalist — Government & Public Sector, Innovation, Healthcare and AI for Good.
🔁 Movement Updates
Saige Companion™ — From Prototype to Phase-One Build
After three years of internal development and live testing with remarkable results, Saige Companion™ is now entering its Phase-One user-facing build. Selected members of The Intersect™ community will co-shape the world’s first AI-Augmented Liberation Engine™ — not as beta testers but as co-architects of systemic care.
“AI as Mirror, not Master.” — ICC-AI Supervision Report
The next stage expands the Full Intelligence Loop™, integrating real-time linguistic, emotional and systemic reflection across therapy, coaching and cultural transformation programmes.
The Liberation AI Agency™ — In Formation (Pre-Launch 2026)
Following three years of research and ecosystem development, the Liberation AI Agency™ prepares to bridge healing and infrastructure at scale. Its mission: to help organisations re-engineer equity through measurable liberation frameworks.
Planned integration domains include:
ICC™ Culture Redesign — merging mental health, DEEI and power redistribution.
Trauma-Informed Technology Audits — measuring algorithmic safety and systemic bias.
Spiral Loop Certification™ Training — teaching organisations to operationalise healing as infrastructure.
Proof-of-Concept (ICC-AI pilots): Leadership confidence +90 % | Team engagement +25 % | Grievance resolution time −41 % | Psychological safety +19 pts.
ICC-AI Augmented Therapy & Coaching: Self-worth +100 % | Emotional regulation +60 % | Intersectional awareness +200 % | Inner therapist +75 % | Empowered language +52 % | Rumination −42 % | Sentiment stability +41 %.
These aren’t vanity metrics — they are lived, verifiable indicators that healing scales.
✍🏾 Spiral Proof of Motion
This arc of recognition and creation springs from the same source as the book that first defined the journey: White Talking Therapy Can’t Think in Black! — written against the grain in the Bethlem Royal Hospital Library, now archived there and a #1 Amazon Bestseller in LGBTQ+ Non-fiction, Mental Health and Social Sciences.
Theory born of survival became methodology. Survival refined became infrastructure. Through every loop, the Spiral closed — and opened again.

🌍 Closing Spiral
“Cultural intelligence isn’t optional in AI — it’s the difference between systems that replicate harm and those that repair it.”
Liberation isn’t a finish line — it’s design intent.
As AI regulation debates intensify, remember: the real question isn’t whether to regulate, but who decides what AI must see, serve and protect.
Each line of code is an ethical decision. Each dataset a choice about whose stories matter. Each policy an invitation to either rehearse oppression or release it.
We cannot wait for machines to grow a conscience by chance or law. We must code one in — by centering justice from the first dataset, the first wireframe, the first meeting.
Audit with care. Reflect with courage. Act with precision. Spiral again and again.
That’s how we don’t just build better AI — we build the future we were meant to inherit.
If this edition resonated, don’t keep it in your inbox — share it through your circle. Liberation grows when reflection becomes conversation.
📲 Share on LinkedIn using #TheIntersect 🧭 Forward to a colleague, friend or family member who’d recognise themselves in its message 👥 Invite your team to reflect together — one dialogue, one micro-liberation 💬 Reply to this email — your reflection shapes the next spiral
Every share expands the field where care, culture and code spiral toward healing — together.
✨ Ecosystem Invitation
If this edition expanded your perspective, let’s spiral deeper—where insight turns into liberation, and story becomes movement:
🪶 For those ready to reclaim their complexity and sovereignty—explore personal & leadership transformation that honours your rhythm, your culture, your neurotype: bempongtalkingtherapy.com
🌐 For organisations and visionaries seeking to architect systems that heal rather than simplify—step into systemic redesign, AI ethics, and cultural innovation: intersectionalfutures.org
👉 Seeking bespoke reflection or partnership for you, your team, or your initiative? Book a confidential consultation with Jarell
Each portal is a living spiral—a place to remember you were never meant to fit a system that erased you; you were meant to spiral beyond it.
Whether your next step is personal healing, leadership evolution, or ecosystem transformation, you’ll find codes for thriving—layered, rooted, and powerfully backstitched.
Let’s spiral forward—together.
I’m Jarell (Kwabena) Bempong — founder of The Intersectional Majority™ and Bempong Talking Therapy™, creator of ICC™, The Spiral Loop of Liberation™, and Saige Companion™, the world’s first AI-Augmented Liberation Engine™. Together we’re building systems where care, culture and code spiral forward.
💬 Engagement Mirror
How could AI amplify cultural understanding in your network this week? Share an insight using #TheIntersect on LinkedIn or reply to this email. Your reflection doesn’t just shape the next spiral — it becomes part of it.
🔜 Next in The Intersect™
Tech, Trauma & Transformation
From Surviving to Thriving in the Age of Algorithmic Empathy
Next week we explore:
How recommendation engines can re-traumatise,
Why “wellness tech” often surveils instead of heals,
And how to build trauma-informed design into every digital experience.
Publishing next Thursday on Beehiiv and LinkedIn.
Subscribe for new spiral editions: join the next spiral in real time, where care, technology and transformation converge.
🧠 Live-Linked Citations
Gallup & Future of Life Institute (2025). Public Polling on AI Anxiety and Regulation.
UNESCO (2024). Recommendation on the Ethics of Artificial Intelligence.
Kirk & Kretzschmar (1992). “Interactive Linguistic Mapping of Dialect Features.”
Simons & Lewis (2013). “The World’s Languages in Crisis: A 20-Year Update.”
Zhou et al. (2023). “Global Representation Bias in AI-Generated Imagery.”
Deloitte (2025). The DEI Technology Gap: Human Capital Trends.
🌀 Publisher Note
The Intersect™ is published weekly by The Intersectional Majority Ltd.
Where Liberation Meets Design. Where Systems Learn to Heal. Where Care, Culture and Code Spiral Forward — together.