Critical Thinking Collapse 2.0: AI’s Impact on OSINT
- Nico Dekens | dutch_osintguy
A follow-up to “The Slow Collapse of Critical Thinking in OSINT Due to AI”
In my November 2025 piece I warned that AI was quietly eroding critical thinking in OSINT. Since then, the problem has evolved.
This isn’t just about lazy analysts or bad prompts anymore. It’s about AI-shaped distortion entering the intelligence cycle itself, in digital investigations, threat intelligence, crisis reporting, and ultimately in executive and national-security decision-making.
The (uncomfortable) reality:
AI is not just changing how we do OSINT.
It is changing what we think is true.
This blog walks through how that’s happening, where it’s already visible, and what investigators, analysts, and leadership need to do about it.

From “helpful assistant” to invisible decision-driver
Most teams didn’t adopt AI with malicious intent. They adopted it because:
- you can summarise 300 pages in 30 seconds
- you can translate in real time
- you can generate a first draft of an assessment
- you can ask “what am I missing?” and get ideas back
Nothing wrong with any of that… if analysts still think, verify, and challenge.
But here’s the shift I see over and over:
Old mindset:
“I’ll collect, verify, analyze - then maybe use a tool to speed up the writing.”
New mindset:
“I’ll prompt the model, skim the answer, and adjust a few sentences.”
The mental workload moved:
from: evidence → reasoning → conclusion
to: prompt → output → cosmetics
That’s not “augmentation”. That’s outsourcing the middle of the intelligence cycle, the part where judgement actually lives.
AI hallucinations: when fluency replaces truth
Let’s be painfully precise for a moment.
What is an AI hallucination in our context?
Vendors like IBM and Google describe hallucinations as those moments when a model produces responses that are confident but wrong - not just minor mistakes, but details, citations, events, or attributions that were never in the underlying data in the first place.
This is not a bug you can patch away. It’s baked into how large language models work: they generate the most probable next token, not the most verified fact.
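To make that mechanic concrete, here is a deliberately tiny sketch in Python - a bigram frequency table standing in for a language model. It is nothing like a production LLM, and the corpus and prompt are invented, but it shows the core point: the continuation is whatever is statistically most likely, and no step anywhere checks a fact.

```python
# Toy illustration only (nothing like a real LLM): a bigram table that always
# emits the most frequent continuation it has seen, whether or not it is true.
from collections import Counter, defaultdict

corpus = (
    "the capital of france is paris . "
    "the capital of italy is rome . "
    "the capital of spain is madrid ."
).split()

# Count which word most often follows each word in the "training" text.
next_word = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_word[a][b] += 1

def continue_text(prompt: str, steps: int = 2) -> str:
    words = prompt.split()
    for _ in range(steps):
        options = next_word.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])  # most probable, not most verified
    return " ".join(words)

# The table holds no facts about "atlantis", but the output is still fluent and
# confident, because the pattern after "is" always leads somewhere plausible.
print(continue_text("the capital of atlantis is"))  # prints something like: "the capital of atlantis is paris ."
```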
A recent business-oriented review for enterprise LLM use put it bluntly: hallucinations are what you get when pattern-matching and probability are treated as if they were evidence.
For OSINT, threat intelligence, and investigations, that matters because:
- OSINT is already noisy.
- Disinformation and propaganda are already everywhere.
- Now we’re adding a system on top that will confidently invent whatever is missing to make the narrative “smooth”.
So when an analyst asks:
“Summarise the network around this protest group / threat actor / shell company.”
…the model will happily fill in gaps from:
- partially relevant data
- contaminated or biased sources
- its own generative imagination
And it will sound good.
When “authoritative-looking lies” fool AI - and why that matters to OSINT
There’s a nice, sharp, real-world study that illustrates the problem.
Researchers at Mount Sinai tested 20 large language models with different sources of medical misinformation: Reddit posts, physician-written scenarios, and fake statements embedded in realistic hospital discharge notes.
The models were much more likely to repeat false information when it appeared in “official-looking” documents than when it appeared in messy social media posts.
Translated into OSINT terms:
- An unverified Telegram rant? The model might be cautious.
- A polished “think-tank” PDF seeded by a state-linked front? The model is far more likely to treat it as truth.
That’s exactly how hostile influence operations work:
- Wrap the lie in something that looks official.
- Get it indexed, linked, and repeated enough.
- Let humans and AI systems pick it up downstream as “trustworthy context”.
The more we lean on AI for “quick context”, the more we risk inheriting authoritative-looking fakes.
Adversaries are actively “grooming” AI models
This isn’t paranoia. It’s documented.
Analysts at the Center for European Policy Analysis (CEPA) looked at a Kremlin-aligned network pumping out English-language propaganda through a global “news” infrastructure. When AI chatbots were asked about topics this network targeted, they repeated Kremlin narratives roughly one-third of the time.
A separate investigation commissioned by ISD and covered by The Guardian showed how the “Pravda” network - a pro-Kremlin content farm producing tens of thousands of articles a day - was being treated as a legitimate source by hundreds of English-language sites. That content doesn’t just mislead humans; it also feeds search engines and LLM training corpora, a technique researchers now call LLM grooming: flooding the internet with aligned narratives to influence what AI will later repeat.
In its own threat report, OpenAI disclosed it had disrupted multiple foreign influence campaigns by actors from Russia, China, Iran, and Israel using generative tools to create fake personas, articles, and social content for political influence operations.
Put together, this paints a clear picture:
AI isn’t just processing the information environment.
AI is being targeted as part of that environment.
If your OSINT pipeline leans on models to surface “what people are saying” or “what narratives are out there”, you are swimming in a pool that adversaries are actively poisoning.
Conflict as the testbed: Ukraine and AI-driven disinformation
The war in Ukraine has been a brutal case study in AI-enhanced information warfare.
Investigations by the Atlantic Council’s DFRLab show how Russian actors use AI tools to:
- generate fake news articles and pseudo-local outlets
- fabricate visual content supporting specific narratives
- scale troll-like operations across platforms such as X/Twitter, Telegram, and fringe sites
Meanwhile, academic work and policy briefs document that AI-written disinformation sites have grown rapidly, often outpacing manual detection and fact-checking, turning the info-sphere into a high-volume, low-trust environment.
Now layer your OSINT workflow on top of that:
- You’re scraping posts, articles, “news” sites, and viral content.
- You’re using models to help cluster, summarise, and characterise narratives.
- Some portion of that input is AI-generated propaganda, designed to be AI-friendly.
If you don’t deliberately design your process to detect and discount these signals, your own intelligence products can end up amplifying the very narratives you’re supposed to understand and counter.
How the intelligence cycle is being deformed
Let’s compare two cycles.
The traditional OSINT / intelligence cycle
Most frameworks boil down to:
1. Direction & Planning
2. Collection
3. Processing & Verification
4. Analysis & Integration
5. Production & Dissemination
6. Feedback
That middle band - processing, verification, analysis - is where:
- source reliability is assessed
- geolocation and chronolocation happen
- cross-source triangulation is done
- hypotheses are challenged
- confidence levels are assigned
The AI-shaped cycle in too many teams
What I increasingly see looks more like:
1. Prompt (“Give me an assessment of…”)
2. Output
3. Light editing
4. Briefing slide
Verification is assumed to have happened inside the model.
Analysis is assumed to have happened in the text.
Bias, gaps, and poisoning are not even visible.
This is why I keep saying:
AI isn’t replacing analysts.
It’s replacing thinking analysts with unthinking ones.
Concrete use cases where this goes wrong
Let’s make this less abstract and more operational.
Use case 1 - Cybercrime & CTI: phantom infrastructure
A cyber threat intel team is investigating a ransomware incident:
- They feed domain names, IPs, and snippets from a ransom note into a model.
- The model “recognises” patterns and outputs a neat narrative:
  - connects the incident to a known threat actor
  - lists previous “campaigns”
  - suggests probable TTPs
Except:
- Several of those “previous campaigns” are hallucinated labels drawn from blogspam OSINT sites.
- Some IOC relationships were inferred incorrectly from co-occurrence, not from actual joint infrastructure (see the sketch below).
- A key domain was mis-categorised as C2 infrastructure based on a misread forum discussion.
Leadership is briefed: “We assess with medium confidence this is Actor X, likely aligned with country Y.”
The operator? Actually a small opportunistic crew reusing leaked tooling.
Mis-attribution → wrong diplomatic messaging → unnecessary escalation.
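To make the co-occurrence trap from this scenario tangible, here is a simplified sketch. The domains, IPs, and counts are invented, and in practice the verified side would come from your own passive DNS or hosting data; the point is only that “mentioned together in write-ups” and “share infrastructure you checked yourself” are two different signals, and only the second one is evidence.

```python
# Hypothetical data, for illustration only. Co-mention in the same write-ups is a
# narrative-level signal; overlap in infrastructure you resolved yourself is an
# evidence-level signal. A model summarising the write-ups only sees the first.

# What the model effectively "saw": blogspam posts that mention both domains together.
co_mentions = {
    ("payload-drop.example", "old-campaign.example"): 4,   # 4 posts mention both
}

# What your own collection shows (passive DNS / hosting records you actually pulled).
resolved_ips = {
    "payload-drop.example": {"203.0.113.10"},
    "old-campaign.example": {"198.51.100.7"},
}

def shared_infrastructure(d1: str, d2: str) -> set[str]:
    """IP overlap in data you verified yourself, independent of who blogged about what."""
    return resolved_ips.get(d1, set()) & resolved_ips.get(d2, set())

pair = ("payload-drop.example", "old-campaign.example")
print(f"co-mentioned in {co_mentions[pair]} posts")                        # looks like a link
print(f"shared infrastructure: {shared_infrastructure(*pair) or 'none'}")  # no actual overlap
```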
Use case 2 - Protest & civil unrest monitoring: synthetic crowd
A public-order intelligence cell is monitoring rallies around an upcoming political decision:
- They use external OSINT + internal tools to track social posts, hashtags, channels.
- An LLM is wired in to summarise “top narratives”, “influential accounts”, and “likely risk areas”.
Unknown to the team:
- A small group is running a botnet of AI-generated personas posting emotional but fake reports about weapons, “foreign agitators”, and calls for violence.
- The LLM clusters these as a major narrative and flags them as high-salience risk signals.
The result:
- Overestimation of violent intent.
- Underestimation of organic, non-violent grievances.
- Policing plan and political messaging shaped by synthetic noise - exactly what an adversary might want.
Use case 3 - Corporate threat intel: reputational oversteer
A multinational is tracking extremist groups that occasionally target its sector:
- The OSINT team asks a model to map “links” between a specific activist network and known extremist groups.
- The model trawls old forum posts, mis-tagged blog posts, and a couple of dubious investigative threads.
- It outputs a “web of associations” that looks like a graph of convergence.
The company responds by:
- labeling the activist group as “violent extremist-adjacent” internally
- quietly pressuring partners to disengage
- shaping its risk narrative accordingly
Later, a manual review by a human analyst finds:
- no direct operational connection
- sarcasm mis-read as endorsement
- one single photo used to justify a “relationship”
Too late - the reputational and legal exposure from over-steering is already in motion.
AI as both threat and tool in information warfare
Here’s the uncomfortable duality:
- AI lowers the barrier for hostile actors to generate disinformation, deepfake personas, and tailored propaganda at scale.
- AI is also being used by defenders to detect, cluster, and mitigate those same campaigns.
OpenAI’s own reporting on the foreign campaigns it disrupted shows both sides:
- hostile actors using AI to generate fake personas and content
- defenders using AI to catch patterns faster than humans alone could
Academic work on AI-driven disinformation stresses that this is now a structural feature of the information environment, not a temporary glitch. Disinformation sites and synthetic outlets are multiplying, and traditional manual fact-checking cannot keep up.
For OSINT and intelligence work, that means:
- You can’t just opt out of AI. Your adversaries, your data sources, and your own platforms are already using it.
- You can’t just blindly embrace AI. Doing so turns your analytic process into an un-audited black box.
The only viable option is conscious, disciplined integration.
What investigators and analysts must reclaim
Let’s talk about what you (and your teams) can actually control.
Re-own the middle of the cycle
Non-negotiables:
- Human-driven verification steps
  - Geolocation, chronolocation, and cross-source confirmation remain human tasks.
  - If an AI helps, it’s a suggestion engine, not an oracle.
- Explicit source tagging (a minimal sketch follows this list)
  - Distinguish: human primary source, scraped document, AI-generated content, “AI-assisted summary”.
  - Track what’s raw OSINT vs what’s AI-processed.
- Hypothesis challenge
  - Periodically ask: “If this is wrong, what would I expect to see?”
  - Run that manually or with alternative tooling, not just the same model with a different prompt.
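For the “explicit source tagging” point above, here is a minimal sketch of what that could look like as a data structure. The labels, fields, and the citable() rule are illustrative assumptions, not a standard; adapt them to your own tradecraft and tooling.

```python
# Minimal sketch of explicit source tagging for collected items.
# The origin labels and fields are illustrative, not a standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Origin(Enum):
    HUMAN_PRIMARY = "human primary source"
    SCRAPED_DOCUMENT = "scraped document"
    AI_GENERATED = "AI-generated content"
    AI_ASSISTED_SUMMARY = "AI-assisted summary"

@dataclass
class SourcedItem:
    content: str
    origin: Origin
    url: str | None = None          # where it actually came from, if anywhere
    ai_processed: bool = False      # has any model touched this on our side?
    collected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    notes: str = ""

def citable(item: SourcedItem) -> bool:
    """Only items traceable to an underlying document or human source are citable;
    model output on its own never is."""
    return item.origin in (Origin.HUMAN_PRIMARY, Origin.SCRAPED_DOCUMENT) and item.url is not None

raw = SourcedItem("Post claiming weapons at the rally", Origin.SCRAPED_DOCUMENT,
                  url="https://example.org/post/123")
summary = SourcedItem("Model summary of 40 posts", Origin.AI_ASSISTED_SUMMARY, ai_processed=True)

print(citable(raw))      # True  - traceable to a document
print(citable(summary))  # False - useful for triage, never a source
```

The exact mechanism matters far less than the habit: every item in a product carries its origin with it, so an “AI-assisted summary” can never silently turn into a “source”.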
Build “AI hygiene” into your OSINT workflow
Concrete habits:
- Treat all model outputs as secondary sources
- Never cite a model directly as “source”; always trace to an underlying document or dataset
- Use multiple independent search engines and data sources before you ever ask a model to “summarise the state of X”
- Flag when content you’re analysing is likely AI-generated itself (repetitive phrasing, odd timing, abnormal volume, stylometric patterns)
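The last habit can be partially automated. Below is a deliberately crude sketch: none of these heuristics prove AI authorship, the thresholds are arbitrary placeholders, and the post fields are hypothetical. The goal is only to route suspicious content to a human for a closer look, never to auto-label it.

```python
# Crude heuristics only: nothing here proves AI authorship, it just decides which
# content deserves a closer human look. Thresholds are arbitrary placeholders.
import re
from collections import Counter

def repetition_ratio(text: str, n: int = 3) -> float:
    """Share of word 3-grams that occur more than once; template-like text scores high."""
    words = re.findall(r"\w+", text.lower())
    grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not grams:
        return 0.0
    counts = Counter(grams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(grams)

def flag_for_review(post: dict) -> list[str]:
    """Return human-readable reasons a post looks synthetic enough to review manually."""
    reasons = []
    if repetition_ratio(post["text"]) > 0.3:
        reasons.append("highly repetitive phrasing")
    if post.get("account_posts_last_hour", 0) > 20:
        reasons.append("abnormal posting volume")
    if post.get("account_age_days", 9999) < 7:
        reasons.append("very new account")
    return reasons

post = {"text": "Foreign agitators bring weapons. Foreign agitators bring weapons to the rally.",
        "account_posts_last_hour": 35, "account_age_days": 2}
print(flag_for_review(post))  # all three reasons fire for this invented example
```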
Design for adversarial conditions
Assume:
- Some of what you see is synthetic.
- Some of what the model “knows” is poisoned.
- Some of the narratives are engineered specifically to grab algorithmic and AI attention.
Build counter-moves:
- Cross-lingual checks (propaganda often “leaks” unevenly across language lines)
- Temporal pattern checks (sudden surges of very similar content - see the sketch after this list)
- Network analysis of accounts and outlets pushing specific frames
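As one example of a temporal pattern check, here is a minimal sketch that flags hours in which an unusually large share of collected posts are near-duplicates of one another. The similarity threshold, minimum volume, and duplicate share are placeholder values, and the pairwise comparison only works at small scale; on real volumes you would swap it for MinHash or embedding clustering, but the principle is the same.

```python
# Minimal sketch of a temporal pattern check: flag hours in which a large share of
# collected posts closely resemble each other. Thresholds are placeholders, and the
# O(n^2) comparison per hour is only workable for small collections.
from collections import defaultdict
from difflib import SequenceMatcher

def near_duplicate(a: str, b: str, threshold: float = 0.85) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def surge_hours(posts, min_posts: int = 10, dup_share: float = 0.5):
    """posts: iterable of (timestamp, text) pairs, where timestamp is a datetime.
    Returns (hour, total_posts, near_duplicate_posts) for hours that look coordinated."""
    by_hour = defaultdict(list)
    for ts, text in posts:
        by_hour[ts.replace(minute=0, second=0, microsecond=0)].append(text)

    flagged = []
    for hour, texts in sorted(by_hour.items()):
        if len(texts) < min_posts:
            continue
        dupes = sum(
            any(near_duplicate(t, other) for j, other in enumerate(texts) if j != i)
            for i, t in enumerate(texts)
        )
        if dupes / len(texts) >= dup_share:
            flagged.append((hour, len(texts), dupes))
    return flagged
```

A surge flagged this way is not proof of an operation either; it is a cue to look at the accounts and outlets behind the content before the “narrative” makes it into a product.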
What leadership and decision-makers must demand
If you’re in leadership - CSO, CISO, intel director, policy lead, commander, executive - this part is for you.
You should be asking your teams:
- Where exactly are AI tools in our intelligence pipeline?
  - Collection? Translation? Triage? Drafting? Analysis? All of the above?
- How do we handle hallucinations and errors?
  - Is there a documented process for verification and correction?
  - Do products flag when AI was used?
- What adversarial risks have we mapped?
  - Could our tools be ingesting poisoned narratives?
  - Are we monitoring for AI-generated disinformation that feeds back into our own reporting?
- Can we audit and reproduce our assessments?
  - If a key decision relied on an AI-assisted brief, can we reconstruct the evidence trail?
- Are we training analysts and managers in AI literacy?
  - Not just “how to use the tools”, but how they fail, how they can be manipulated, and what they must never be trusted with alone.
Because in 2026 and beyond, “I didn’t know the tool could do that” is not a defensible position for anyone signing off on high-impact decisions.

OSINT is still a thinking game
Let’s end where this really lives.
AI:
- scales our reach
- accelerates our work
- widens our situational awareness
AI also:
- amplifies our biases
- hides its own errors well
- can be manipulated upstream
The decisive variable isn’t the model. It’s the mind using it.
OSINT without thinking is just automated guesswork.
Intelligence without verification is just storytelling.
AI without tradecraft is just a very fast way to be confidently wrong.
The future belongs to investigators, analysts, and leaders who:
- treat AI as a power tool, not a substitute for judgement
- build verification into every workflow
- recognise adversarial manipulation as a given, not an edge case
- are willing to slow down thinking, even when the tools speed everything else up
Because at the end of the day, intelligence still comes from people.
Not prompts.