Exploring AI – Unknown Unknowns

Have you ever wished for a way to spot your blind spots—the things you don’t even know you don’t know? I set out to see whether AI could help me get to the root of that problem.

Charlie Munger once joked that if a genie granted him one wish, he’d ask to know where he was going to die, so he could simply never go there. Donald Rumsfeld talked about “unknown unknowns.” Carl Jacobi advised “invert, always invert” to solve tough problems. All point to the same idea: the biggest risks and biggest opportunities hide in territory we haven’t yet mapped in our own minds.

Now we have something close to a genie—large language models. The question is whether we can get one to grant a version of Munger’s wish.

The trap of thinking we’re done learning

We “die” intellectually at the point where we believe we have no more unknown unknowns, but we’re wrong. The Dunning-Kruger effect describes exactly this: people with limited knowledge in a domain often overestimate their competence because they lack the very skill needed to recognise their own mistakes. The original studies were inspired by a bank robber who smeared lemon juice on his face, believing that because lemon juice works as invisible ink, it would make him invisible to security cameras. He had no idea how far off track he was.

Inverting the problem means finding a way to surface the questions we don’t yet know to ask. The only way an AI can help with that is if it has some sense of what we already understand. Otherwise it just guesses at our level and often flatters or hallucinates to keep the conversation flowing.

Shifting from one-off questions to collaborative sessions

The usual way we use chatbots—fire off a quick question and expect a perfect answer—works fine when we’re inside familiar territory. It falls apart precisely when we’re exploring unknown unknowns, because that’s where hallucinations and sycophancy hit hardest.

Flip the dynamic instead. Start every serious inquiry by summarising (out loud, in the chat) what you think you understand so far and why it makes sense to you. Then explicitly say where you’re stuck or what doesn’t yet fit. Finally, ask the model to point out gaps, misconceptions, or next-level questions you haven’t considered.

This small shift does three useful things:

– It gives the model a map of your current knowledge, so its responses can stay anchored close to what you can verify.

– Hallucinations become easier to spot because they appear right at the edge of what you already grasp, not miles away in unfamiliar terrain.

– Sycophancy gets blunted because you’ve led with your own reasoning; the model is now extending your ideas rather than inventing a persona it thinks you’ll like.

You’re no longer asking the AI to read your mind. You’re turning the conversation into a genuine back-and-forth where you teach it what you know, and it teaches you what comes next.

The fix—treat it as a dialogue, not an oracle

Stop treating AI chatbots as all-knowing oracles that tailor answers perfectly to your unstated needs. Instead, approach each topic as an ongoing session (a rough sketch in code follows the steps):

1. Explain (briefly) what you currently understand and why it seems solid to you.

2. State clearly where the picture breaks down or what feels missing.

3. Ask the model to highlight assumptions you’re making, concepts you might be overlooking, or questions you haven’t thought to ask yet.
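To make those three moves concrete, here is a minimal sketch in Python that assembles them into a single opening message. Everything in it—the function name, the wording, the worked example—is my own illustration, not a prescribed template; paste the printed output into whichever chatbot you use.

    # Assemble the three-step session opener: what I understand,
    # where I'm stuck, and a request for the gaps I can't see.
    def build_session_prompt(understanding: str, sticking_points: str) -> str:
        return (
            "Here is what I currently understand, and why it seems solid to me:\n"
            f"{understanding}\n\n"
            "Here is where my picture breaks down or feels incomplete:\n"
            f"{sticking_points}\n\n"
            "Please point out assumptions I'm making, concepts I might be "
            "overlooking, and questions I haven't thought to ask yet."
        )

    # Example: opening a session on one of the terms seeded in this column.
    prompt = build_session_prompt(
        understanding=(
            "The Dunning-Kruger effect says people with limited skill in a "
            "domain overrate themselves, because judging performance requires "
            "the same skill as performing."
        ),
        sticking_points=(
            "I'm unsure how it differs from ordinary overconfidence, and "
            "whether the original findings have held up."
        ),
    )
    print(prompt)  # copy the printed text into your chat session

Leading with your own summary instead of a bare question is what hands the model the map of your knowledge described earlier.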

As long as you’re exploring territory that’s well-covered in the model’s training data, this method reliably expands the edge of your knowledge without dragging you into pure fiction. When you venture beyond that corpus, you’ll notice the responses getting wobblier—and now you’ll have the context to recognise it.

Try it with this column

I deliberately seeded this piece with a handful of references and terms that might be unfamiliar: unknown unknowns, “invert, always invert”, Dunning-Kruger, sycophancy, epistemic margins. Don’t just ask an AI “what does this mean?”

Instead, pick one you’re unsure about. Summarise what you already recognise around it, guess at the parts that feel fuzzy, then ask the model to correct or extend your map. Watch how the conversation naturally surfaces questions you didn’t know you had.

That’s AI-augmented learning in action—using the genie not to grant impossible wishes, but to illuminate the paths we couldn’t see before.

Aegisyx

Copyright © Aegisyx