Picture this: It’s 2030. You’re haggling for a price on a locally crafted wooden sculpture through a popular online marketplace, something akin to a souped-up Jumia. The seller, ‘Mama Zawadi’s Crafts,’ is offering a beautiful piece. You type, “Can you do 80,000 shillings?” A response pops up instantly: “Asante sana for your interest! This piece is handcrafted with love. My current price is firm, but I can offer free delivery within Dar es Salaam if you purchase today. (AI Assistant: GPT-7 by OpenAI)”
That little parenthetical at the end, the one declaring the respondent an AI assistant, is now as ubiquitous as a ‘Made in China’ label on a plastic bucket. It’s not just a courtesy; it’s the law. This isn't some far-off European fantasy; this is our reality, right here in Tanzania, and indeed, across much of the globe. The right to know if you’re talking to an AI, once a niche demand from privacy advocates, has become a cornerstone of digital interaction, fundamentally reshaping our relationship with technology and with each other.
The Great AI Reveal: A Future Unveiled
In this not-so-distant future, every interaction with an AI, whether it’s a customer service chatbot for Vodacom, a financial advisor bot from CRDB Bank, or even the subtle AI nudges in your social media feed, comes with a clear, unambiguous disclosure. No more guessing games, no more feeling manipulated by unseen algorithms. The days of sophisticated bots masquerading as humans, subtly influencing our decisions or extracting information without our full awareness, are largely over. The digital wild west, where every click felt like a gamble, has been fenced in, at least a little.
This transparency extends beyond simple chatbots. Imagine applying for a microloan. The system tells you, “Your application is being processed by ‘Uwezo Credit AI,’ an algorithmic lending model developed by Google DeepMind, which analyzes 30 data points including your M-Pesa transaction history and community social credit score. (AI System: Decision-making algorithm)” Or perhaps you’re reading a news article, and a small tag at the bottom states, “This article was drafted by a human journalist, Zawadi Mutembo, with factual verification assistance from ‘Veritas AI’ by Anthropic.” Even the deepfake detection systems themselves are required to declare their presence.
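In this imagined future, the simplest way platforms comply is a mandatory post-processing step: every AI-generated message passes through a disclosure layer before it reaches the user. Here is a minimal, purely illustrative sketch of that idea; all names (`AIMessage`, `with_disclosure`) are hypothetical, and the real wording of any tag would come from the regulation, not from code:

```python
from dataclasses import dataclass

@dataclass
class AIMessage:
    """A hypothetical container for an AI-generated reply."""
    text: str
    model_name: str   # e.g. "GPT-7"
    provider: str     # e.g. "OpenAI"
    role: str = "AI Assistant"

def with_disclosure(msg: AIMessage) -> str:
    """Append a disclosure tag to an AI-generated message.

    Sketch only: in practice the exact label format and placement
    would be dictated by the applicable law, and omitting the tag
    would be treated as deception.
    """
    tag = f"({msg.role}: {msg.model_name} by {msg.provider})"
    return f"{msg.text} {tag}"

reply = AIMessage(
    text="My current price is firm, but I can offer free delivery.",
    model_name="GPT-7",
    provider="OpenAI",
)
print(with_disclosure(reply))
```

The point of the sketch is that disclosure is enforced at the boundary, not left to each bot's prompt: whatever the model says, the platform stamps it before delivery.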
How We Got Here: A Rocky Road to Revelation
How did we arrive at this brave new world of algorithmic honesty? It wasn't a sudden epiphany, but a slow, grinding realization that the unchecked proliferation of AI was eroding trust at an alarming rate. The turning point, in my humble opinion, wasn't some grand UN declaration, but a series of utterly absurd yet deeply impactful local incidents. You can't make this stuff up, really.
Around 2025, we saw a surge in sophisticated scams. People were being duped by AI voices mimicking relatives asking for emergency funds, or by AI-generated customer service agents promising impossible discounts. Here in Dar es Salaam, there was a particularly memorable case of an AI chatbot for a popular duka la dawa (pharmacy) giving out incorrect dosage advice, leading to a minor public health scare. The public outcry was deafening. People felt betrayed, not just by the scammers, but by the technology itself, and by the companies that deployed it without clear boundaries.
Simultaneously, global bodies and national governments, spurred by the European Union's pioneering AI Act, started to take notice. The US, initially hesitant, followed suit with its own