OSINT is Still a Thinking Game

  • Writer: Nico Dekens | dutch_osintguy
  • 7 min read

I’ve been thinking a lot about why conversations around OSINT feel increasingly shallow, even though the tools we use have never been more powerful.


We have more data, automation, dashboards and AI-generated summaries than at any point in the history of intelligence.


And yet, when I look at how conclusions are drawn, how confidence is expressed, and how often assumptions go unchallenged, I can’t escape a deeply uncomfortable feeling:


We are collecting more than ever and thinking less than ever.


AI didn’t cause this.

AI simply made it visible.



What Intelligence Was Always Supposed to Be


Let me be very clear about something that gets misunderstood constantly.


The purpose of an OSINT investigator or intelligence analyst has never been to study information.

It has been to diagnose information into actionable intelligence.


That difference sounds subtle. It isn’t.


Information exists everywhere.

Intelligence exists only when information is interpreted, weighed, challenged, contextualised, and ultimately used to support a decision that carries consequences.


Accuracy alone doesn’t make something intelligence.

Volume doesn’t either. Neither does speed.


Intelligence is about relevance under uncertainty. Uncertainty is something no tool, no matter how advanced, is willing to take responsibility for.


That responsibility still sits with us, humans.

OSINT Was Never About Tools


When I say OSINT is a thinking game, I’m not being poetic. I’m being literal.


OSINT was never about knowing more than others.

It was about reasoning better than others when information is incomplete, manipulated, emotional, contradictory, or deliberately misleading.


Somewhere along the way, the profession drifted. We started measuring capability in terms of tooling instead of judgment. We rewarded speed instead of reflection.

We equated output with insight.


AI didn’t disrupt this model. It completed it.


Because now AI does (some of) the collecting, correlating, summarising, and pattern-surfacing faster and more consistently than humans ever could.


And that forces a hard question we’ve avoided for too long:


If AI can do what you call “analysis,” what part of your work was actually intelligence?


AI Didn’t Replace Analysts (It Replaced the Illusion of Analysis)


This is where some people get uncomfortable, and honestly, they should.


If your role was primarily about retrieving information, organising it, summarising it, or presenting it neatly, that was never the core of intelligence work. That was pre-analysis.


Intelligence begins where tools stop being helpful.


It begins when you ask:

  • What’s missing here?

  • What doesn’t make sense?

  • What assumption am I making without realising it?

  • What would change my conclusion, and have I actually looked for that?

  • What happens if I’m wrong?


AI can’t carry those questions responsibly, because AI doesn’t own the consequences of being wrong.


We (humans) do.


When Information Was Everywhere and Intelligence Still Failed


One of the most dangerous myths in OSINT is the idea that failures happen because “we didn’t have the data.”


In reality, some of the most significant intelligence failures of recent years unfolded in full view of the open internet.


Take Ukraine.

Before the invasion, there was no shortage of satellite imagery, logistics indicators, public statements, and open-source signals. What fractured wasn’t access, it was judgment. Intent was debated, signals were mis-weighted, confidence oscillated between certainty and denial.


Or look at Gaza.

Here, the problem isn’t lack of information, it’s overload. Images, narratives, claims, counter-claims, all competing for attention. In that environment, virality starts to masquerade as validation, repetition as truth, and visibility as importance.


When everything is visible, thinking becomes the scarce resource.


I see the same pattern in financial investigations, fraud cases, and influence operations. Patterns are detected. Networks are mapped. Anomalies are flagged. And yet, conclusions fail because intent, motivation, timing, and human behaviour are treated as secondary, or worse, ignored.


AI is very good at telling us what is happening.

AI is not good at explaining why it matters.


Operational Security Is Not a Separate Topic


There’s another conversation we need to stop postponing: operational security.


Not “OPSEC” as a checklist. OPSEC as a mindset that should sit inside collection and analysis, not outside it.


Because the moment you collect, you create risk.


You create risk for your source of truth, for your client, for your organisation, for your team, and sometimes for the people you are investigating or trying to protect. You create risk through what you search, where you search from, how you store the material, what you share internally, and what trail your tooling leaves behind.


And AI changes the OPSEC equation in subtle ways.


When you paste case context into a model, when you upload a file to a platform, when you run a query in a “smart” tool that promises magic results, you are doing more than collecting. You are moving sensitive intent into someone else’s infrastructure. You’re transferring not just data, but the very thing intelligence depends on: what you care about, what you’re looking for, and how you think.


For some people, that’s acceptable risk.

For others, it’s an unacceptable exposure.


But either way, it needs to be a deliberate decision, not a habit.


If you can’t explain the OPSEC implications of your workflow, you don’t have a workflow, you have a liability.


The OSINT SaaS Temptation: Outsourcing Collection, Outsourcing Thinking


I’m also watching something else happen in the OSINT world: new companies pop up seemingly overnight. They look polished. They market themselves as the “all-in-one OSINT solution.” They appear big, funded, and inevitable. Sometimes they are. Sometimes they aren’t.


I always feel uncomfortable talking about this, since I work for an OSINT company (ShadowDragon) myself. But I also worked in government for more than 20 years, where we often had to rely on outsourcing. In my book that always meant: do your homework before you buy or associate yourself with a SaaS solution and company.


And even when a product works, there’s a deeper question I rarely see people ask out loud:


Do I actually understand who I’m doing business with?


Not just “is the tool good?”

But: who operates it, who benefits from my usage, and what is the long-term risk of building my intelligence workflow on top of someone else’s black box?


Because if you outsource open source intelligence collection and analysis to a SaaS solution, you’re not just buying software. You’re outsourcing parts of:


  • your operational footprint

  • your investigative methodology

  • your institutional memory

  • your analysts’ skill development

  • and potentially your legal and reputational exposure


That matters for companies. It matters even more for government institutions. And it matters most when decision-makers see a shiny platform and start believing OSINT is something you can “just buy.”


If you can’t clearly explain how a tool gets its results, what assumptions it bakes in, and what it misses by design, you are not buying capability, you are buying dependence.


And dependence is expensive in intelligence. It shows up later, at the worst possible time. And again: who really owns that company, and how transparent are they about what they do, how they do it, and how they market themselves? What are they hiding, and how? Maybe more important: if they are hiding, obfuscating, or blurring something, why?


What This Means for Younger or Less Experienced Analysts


This is where I get genuinely concerned.


New analysts are entering the field in an era where “click button = get results” is becoming normalised. That feels empowering at first. But there’s a hidden cost: it can quietly teach people that OSINT is a product output, not a reasoning process.


And that’s deadly for tradecraft.


Because tools can give you material. They can give you leads. They can give you correlations. They can even give you narratives.


But if you don’t have the discipline to ask:

  • What is the intelligence requirement?

  • What would prove this wrong?

  • What evidence is missing?

  • What’s the alternative explanation?

  • What’s the decision this supports?

then you’re not doing OSINT. You’re doing assisted browsing.


The risk for young analysts isn’t that they’ll be “replaced by AI.”


The risk is that they’ll never be forced to develop the muscles that make an analyst valuable in the first place: skepticism, framing, hypothesis-testing, and the courage to say “I don’t know, yet.”


If you learn OSINT as a button-clicking exercise, you’ll struggle the first time you’re confronted with deception, conflict narratives, coordinated influence, fraud ecosystems, or adversaries who understand exactly how your tooling works.


The Most Dangerous Failure Mode of the AI Era


People often worry about AI hallucinations.

I worry about something else.


The greatest risk AI introduces into intelligence work is false confidence in correct-looking answers to the wrong questions.


AI answers prompts.

Intelligence defines problems.


If analysts outsource problem-framing (if we let tools decide what deserves attention) we don’t become more efficient. We become less relevant.


And once decision-makers stop trusting analysts to think independently, they stop asking for judgment altogether. That’s not a technical failure. That’s a professional one.


The Split That’s Coming (Or Is Already In Progress)


I’m convinced the intelligence world is heading toward a split.


On one side will be people who operate tools, generate outputs, optimise prompts, and measure success in volume and speed.


On the other side will be people who diagnose reality. Those who understand context, psychology, bias, uncertainty, and consequence, and who can explain why something matters now.


AI will erase the middle ground.


The future doesn’t belong to those who can produce more information.

It belongs to those who can think clearly when information is overwhelming.



Why I Keep Saying This Out Loud


I’m not writing this to be provocative for the sake of it.

I’m writing it because I care deeply about where OSINT is heading.


If OSINT continues to drift toward automation without reflection, it won’t evolve, it will dissolve into noise.


But if we reclaim what it was always meant to be, a disciplined way of thinking about the world using openly available information, then this moment becomes an inflection point.


Not the end of OSINT. Its maturation.


An Open Invitation


If this resonates with you, I’d like to talk.

If it frustrates you or if you disagree, even better.


This isn’t a closed argument. I do not claim to have all the answers. But I am genuinely looking for them.

It’s a conversation the field needs to have, publicly and honestly.


OSINT is still a thinking game.

The question is whether we’re willing to play it that way again.


If you are, reach out.
