
My mission is to make complex topics feel simple.
Curious what my keynotes are about? Here are some examples.

AI Responsibility
The most dangerous phrase in technology today is "technology is neutral." It gives everyone permission to avoid the real question: who's responsible when things go wrong?
I've seen this at the highest levels. The Dutch childcare benefits scandal: an automated system flagged thousands of innocent families as fraudsters. When the damage became undeniable, developers blamed the data. Administrators blamed the model. Decision-makers blamed the system. The system pointed nowhere. Each piece made sense alone. Together, they destroyed lives.
This is what happens when we hide behind neutrality. Technology amplifies human values, biases, and priorities. Every dataset reflects decisions. Every algorithm encodes assumptions. Claiming neutrality isn't protecting innovation, it's avoiding ownership.
In this keynote, you'll learn how to:
Identify where responsibility fragments in AI systems
Shift from reactive compliance to proactive leadership
Build accountability structures that connect every stakeholder
Turn governance from burden into competitive advantage

Governance vs. Regulation
Regulation is what governments impose from outside. Governance is what you build inside. If you're waiting for the former to tell you how to do the latter, you're already losing.
Think of it this way: regulation is the skeleton, providing structure and boundaries. Governance is your immune system, keeping you healthy day-to-day and responding to threats before they become crises. Your immune system doesn't wait for disease. Neither should your governance.
Companies freeze AI initiatives waiting for regulatory clarity that's never coming. The EU AI Act took years. US regulation fragments across states. Global frameworks conflict. Meanwhile, companies with strong governance adapted within weeks when the rules changed, because they'd already answered the hard questions.
In this keynote, you'll learn:
The critical distinction between external compliance and internal accountability
How to build governance that works under any regulatory scenario
Why regulatory confusion is permanent (not temporary)
How compliance obligations become market differentiators

Leading AI Transformation
The biggest barrier to AI transformation isn't technical, it's human. Your data scientists speak one language, your legal team another, your executives a third. Everyone's optimizing for different outcomes, and none of them are talking to each other.
I learned this while uniting Big Tech, civil society, and European governments around policies where compromise seemed impossible. The breakthrough wasn't finding perfect technical solutions, it was creating frameworks where each party understood how their piece connected to the whole, where developers, compliance, leadership, and operations stopped seeing each other as obstacles and started seeing themselves as parts of one system.
Most AI transformations fail because organizations treat them as technology projects when they're actually culture projects. You can have the best models and the clearest regulations, but if your stakeholders can't communicate across silos, you'll stay stuck in the confusion-and-paralysis loop.
In this keynote, you'll learn how to:
Map your stakeholder landscape to find communication breakdowns
Build decision-making frameworks where ownership is crystal clear
Create shared language across technical and non-technical teams
Scale AI initiatives without losing alignment as you grow
