"The future has not been written. There is no fate but what we make for ourselves." - (John Connor) - Terminator 3: Rise of the Machines (2003).
I've just rolled out of an AI course at the University of Helsinki, and calling my mind "blown" is like calling a Motörhead concert "a bit noisy." Apart from getting a sort-of-clearer, sort-of-murkier idea of how the mysterious engine room of AI actually spins along (algorithms, machine learning, deep learning, robotics, data science, and some terminology I still can't get my tongue around), I can now appreciate even more why the "propeller heads" out there earn the dosh they do. This mind-bender also cracked open a bigger box: "What on earth does this mean for humanity?"
So here I am, armed with scribbles, half-baked theories, and a newfound respect for the fact that machines don't just crunch numbers; they provoke existential crises. The course didn't just explain how the gears turn; it left me questioning why we built the gears in the first place and what happens if they start spinning faster than we can keep up. Here are a few key takeaways from the course, uncovered as part of the research coursework. I caution you all, I may have totally misunderstood the spoon-bending session!
1. The Hype vs. The Harsh Reality
Generative AI was supposed to be the golden goose. Instead, it's looking more like a very expensive ostrich: lots of feathers, very little flight. A recent MIT study found that 95% of generative AI projects flop. Yes, flop. This despite companies shovelling more than $44 billion into the AI furnace in early 2025. Why? Unrealistic expectations, clunky integrations, and strategies so misaligned they make my chiropractor wince. If it all sounds eerily familiar, that's because it is: hello, dot-com déjà vu. I've seen this first-hand with the plethora of software salespeople who promise their product is THE silver bullet, when what one too often gets is a bag of mere "vapourware"!
2. Ethics & Jobs: The Elephant in the Server Room
Hemant Taneja of General Catalyst calls this the age of "peak ambiguity" in AI. Translation: no one has a clue what's going to happen, but everyone's pretending they do. The biggest fear? Mass job displacement. Emerging economies could be hit hardest, and unless reskilling keeps up, AI won't just widen inequality; it'll rip the seams wide open. Imagine telling millions of workers: "Good news, you've been automated. Bad news, we forgot to retrain you." Again, I've seen this first-hand: executives chasing the next "bright and shiny thing" in the belief that it will solve everything, forgetting what I call the People Sigma, and then wondering why their business implodes around them!
3. Governance, Trust, & the Rusty Pipework Behind the Curtain
Apparently, 96% of companies say they're adopting AI. But down in the bowels of the engine room? Legacy systems creak like dial-up modems, security is often an afterthought, and "governance" is basically crossing your fingers and hoping the chatbot doesn't hallucinate. Researchers are shouting for tailored governance models, but businesses are racing ahead like teenagers joyriding in their parents' car: no seatbelts, no insurance, off to the next Metallica gig on the other side of the country with no map on how to get there! Rock ON! (Blindly!)
4. Leadership Accountability (Or Lack Thereof)
Bain & Company tells us only 12% of business transformations succeed. That means 88% fail, and AI is already marching happily into that same pit. Predictions suggest 30% of GenAI projects will be abandoned by the end of 2025: not because the tech isn't cool, but because no one can explain the ROI without resorting to air guitar and interpretive dance. Leaders want disruption, but what they're getting is expensive chaos dressed up as innovation theatre: imagine a group of executives, all in suits, in a muddy mosh pit at Ozzfest!
5. Walking the Tightrope
Executives are trapped in a balancing act: embrace AI and risk breaking the business, or ignore AI and risk becoming Blockbuster. Neither option screams "relaxing weekend." The role of leadership now isn't just to champion shiny tools; it's to guard trust, ensure transparency, and deploy ethically. AI without responsibility is just disruption with a better press release. As I said recently at a conference in Canada, too many CEOs let the tail wag the dog, i.e., they let the IT guy dictate what the business needs without the business actually being consulted on what it really needs!
The Bottom Line
Generative AI is transformative, unavoidable, and, to me, slightly terrifying yet intriguing. The problem isn't the algorithms; it's the humans deploying them with half-baked strategies, outdated infrastructure, and no governance plan beyond "move fast and hope nothing breaks." To harness AI's promise without fuelling backlash, leaders need to slow down, get strategic, and treat governance as more than a compliance box-tick. Otherwise, this won't be the Fourth Industrial Revolution; it'll be the world's most expensive group project gone wrong. Remember my piece on the $1.5b A.I. unicorn that turned out to be nothing more than a $1 scruffy goat, "The Collapse of Builder A.I.: A Stark Reminder in the Age of "Fake" A.I. Tools!" Need I say more? See https://shorturl.at/yXSzk
Comments welcomed, as always…
MC