
Vibe Coding Is Becoming an OSINT Risk

  • Writer: Nico Dekens | dutch_osintguy
  • 4 minutes ago
  • 8 min read

AI has made it easier to vibe code and install investigative tools. It has not made it safer to trust them.


There is a shift happening inside OSINT, and I do not think enough people are taking it seriously.


More investigators are building, or vibe coding, their own tools with AI. Others are installing tools built by strangers because they are trending on GitHub, getting reposted on LinkedIn, or being pushed around in private communities as the next must-have thing. In both cases, the same pattern keeps showing up: people are starting to trust software long before they truly understand it.


In OSINT, tools are not just convenience. They shape how we collect, what we notice, what we miss, and what starts to look credible. Once that happens, weak software stops being just a technical issue. It becomes an analytical one. Sometimes an operational one. And in the wrong hands, or under the wrong conditions, even a counterintelligence one.


This is not about rejecting AI. AI can absolutely help people prototype ideas, automate repetitive work, and explore new workflows. I use it often myself. But that is not the issue. The issue is that AI is lowering the barrier to building while doing nothing to reduce the responsibility that comes with using software in real investigative environments. The model may generate the code. The user still owns the risk.


The user ALWAYS owns the risk

The problem is not the vibe coding tool. It is the confidence wrapped around it.


What AI has done is give a lot of people access to technical capability they did not have before. That feels powerful, and it is. And I understand the attraction. If you have ever wanted a custom scraper, a monitor, a dashboard, a browser extension, a translation workflow, or a local assistant that fits your own way of working, AI vibe coding makes that feel reachable in a way it did not before.


But there is a big difference between producing software and understanding software.


That difference matters more in OSINT than people seem willing to admit.


Most OSINT practitioners are not engineers. They may be very good at investigation, verification, geolocation, attribution, social media analysis, or target development. That does not mean they know how to review dependencies, handle secrets safely, think about logging, secure an app, isolate risk, or understand what a piece of software is really doing once it starts running. These are different skills. AI makes it easy to blur that line because it removes the friction that used to warn people they were entering territory they did not fully understand.
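

To make one of those skills concrete, here is a minimal sketch of safer secret handling. The tool and variable names are hypothetical; the point is the pattern, not the product.

```python
# A minimal sketch of safer secret handling, not a complete solution.
# Vibe-coded scripts often hardcode API keys directly in the source;
# reading them from the environment keeps them out of the code and
# out of anything you later share or push to a repo.
import os
import sys

# Bad (common in generated scripts): API_KEY = "sk-live-abc123..."
# Better: load the key at runtime and fail loudly if it is missing.
API_KEY = os.environ.get("MY_TOOL_API_KEY")  # hypothetical variable name
if not API_KEY:
    sys.exit("Set MY_TOOL_API_KEY in the environment; do not hardcode it.")
```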


Now the app runs, the interface loads, the script returns output, and confidence arrives before understanding ever has a chance to catch up. That is where the danger begins.


A weak tool does not stay a technical problem for long


This is one of the biggest blind spots in the current conversation.


If a hobby project breaks, that is annoying. If an OSINT tool breaks, the damage can be much harder to spot. A parser fails and relevant posts stop being collected. A monitor silently misses updates. A matching workflow links the wrong identities. A summarisation layer smooths over uncertainty and makes weak information sound stronger than it is. A translation step strips tone or context. A dashboard buries contradiction and surfaces what looks neat instead. None of that has to look dramatic. That is what makes it dangerous.
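

What that silent failure often looks like in code is nothing more than a swallowed exception. The sketch below is hypothetical, but the pattern shows up constantly in generated scrapers: the first version drops posts without a trace, the second at least tells you it is failing.

```python
# A hypothetical parser loop illustrating the silent-failure pattern.
def parse_posts_silent(raw_items):
    posts = []
    for item in raw_items:
        try:
            posts.append({"user": item["user"], "text": item["text"]})
        except Exception:
            pass  # any post that fails to parse simply vanishes
    return posts

# The honest version: same logic, but failures are counted and surfaced,
# so the analyst knows the collection is incomplete.
def parse_posts_honest(raw_items):
    posts, failures = [], 0
    for item in raw_items:
        try:
            posts.append({"user": item["user"], "text": item["text"]})
        except (KeyError, TypeError) as exc:
            failures += 1
            print(f"parse failure: {exc!r} in {item!r}")
    print(f"parsed {len(posts)} posts, {failures} failures")
    return posts
```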


The software does not need to crash to mislead you. It only needs to quietly shape what you see and what you stop questioning. Once that happens, the tool is no longer just supporting analysis. It is bending it.


That is why polished interfaces can be so deceptive in this field. They make fragile processes look stable. They give weak assumptions a clean surface. They create confidence the underlying logic has not earned.


So the real question is not whether a tool saves time. The real question is what it changes in your judgment.


“It’s only internal” is still one of the weakest excuses in this space


I keep seeing people defend risky tools with language that sounds reassuring but means very little in practice.


It is only internal. Only local. Only experimental. Only for testing. Only for the team.


Fine.


That still does not make it safe.


Internal tools still touch real workflows. They still sit on machines used for real work. They still end up near screenshots, watchlists, saved searches, notes, usernames, keywords, target sets, geolocations, exports, and all the other fragments that make up actual investigative activity. And once people start using those tools regularly, they stop being experiments. They become dependencies. Usually before anyone has seriously reviewed them.


This is how shadow infrastructure gets built. Not through some grand plan, but through convenience. A small tool solves a problem. People reuse it. Someone adds a feature. Someone else shares it. Nobody wants to slow it down. A few months later it is part of the workflow, trusted by habit, but still poorly understood.


Calling something “internal” does not answer the real question. It often avoids it.


The viral tool problem is getting worse


Not everybody is building their own tools. A growing number of people are installing software built by others because it is moving fast through social media or GitHub.


That creates a different kind of risk, and I think a lot of practitioners are being far too casual about it.


A repo trends. Screenshots get posted. Someone with a strong following says it is excellent. A few people you know start using it. The interface looks sharp. The feature list sounds exactly right. The setup is easy. At that point, skepticism starts to disappear.


That should sound familiar, because it is exactly how trust often gets manufactured online.


A popular GitHub repo is not a security review. A slick README is not evidence of safe engineering. Open source is not the same as audited source. Most people do not inspect the code, trace the dependencies, monitor outbound connections, verify the build process, or think carefully about what happens the moment the software starts running. They download it, spin it up, click around, maybe connect a few things, and start seeing where it fits.
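

Even a little of that inspection goes a long way. As one example of what "monitor outbound connections" can mean in practice, here is a rough sketch using the third-party psutil library: run the unfamiliar tool in a throwaway VM, note its process ID, and record where it actually connects while you click around. The PID and time window below are placeholders, and this is a spot check, not an audit.

```python
# A rough due-diligence sketch, not a full audit: snapshot the outbound
# connections a newly installed tool opens while you exercise it.
# Assumes the third-party psutil package (pip install psutil); run the
# tool in a disposable VM and pass in its process ID.
import time
import psutil

def snapshot_remotes(pid: int, seconds: int = 60) -> set[str]:
    """Collect remote ip:port pairs the process talks to over a window."""
    seen: set[str] = set()
    deadline = time.time() + seconds
    while time.time() < deadline:
        for conn in psutil.net_connections(kind="inet"):
            if conn.pid == pid and conn.raddr:
                seen.add(f"{conn.raddr.ip}:{conn.raddr.port}")
        time.sleep(1)
    return seen

if __name__ == "__main__":
    remotes = snapshot_remotes(pid=12345, seconds=30)  # hypothetical PID
    for addr in sorted(remotes):
        print(addr)  # ask: does this tool really need these endpoints?
```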


That is not tradecraft. That is social proof doing the work of technical judgment.


Take a project like World Monitor as an example of why this category is so attractive. It presents itself as a real-time global intelligence dashboard, promotes AI-powered aggregation and monitoring, and offers a quick setup path through the README. It also has broad visibility and an active releases page, which helps explain why tools like this spread so quickly. That does not mean the project is unsafe. It means this is exactly the kind of tool people are likely to trust too early because it looks useful, polished, and easy to adopt. That is the point.


The danger is not that every viral tool is malicious. The danger is that virality makes people lower their guard.


The adversarial angle is not theoretical


This is where I think the profession still needs to get more honest.


An adversary does not always need to hack an investigator. Sometimes it only needs to get invited into the workflow.


That invitation can come in the form of a useful app, a browser extension, a downloadable desktop tool, a hosted monitoring platform, or an AI coding environment that promises speed and flexibility. If that system can observe user behaviour, collect inputs, store files, capture telemetry, or reveal workflow patterns, then the practitioner may be disclosing far more than they realise simply by using it.


That exposure does not need to be dramatic to matter.


Search terms matter. Keyword lists matter. Watchlists matter. Imported spreadsheets matter. Uploaded screenshots matter. API keys matter. Case themes matter. Timing matters. Focus areas matter. To an adversary, those things can reveal what an investigator is paying attention to, which names or networks are under scrutiny, when attention shifts, or what issue has suddenly become active.


In other words, your interest can be intelligence too.


Your interest can be intelligence too

That is why I think this has to be framed as more than a software hygiene problem. There is a real counterintelligence angle here. The most effective lure is often a useful one. A genuinely helpful tool lowers skepticism. A workflow that saves time lowers resistance. The application does not need to look suspicious. In many cases, the more practical and relevant it looks, the better the lure becomes. A good recent example is the change of ownership of the Chrome extension "Instant Data Scraper", a tool that has always been very popular among OSINT practitioners. Since the ownership change it has gained new features that should make you frown. Micah Hoffman did a great write-up of his findings, which you can read in his LinkedIn posts here and here.



That does not mean every useful app is hostile. It means OSINT practitioners should stop behaving as if useful automatically means safe.


Hosted AI coding platforms deserve the same suspicion


The same logic applies to the platforms people now use to build with AI.


A lot of users think they are only asking for code. In reality, they are often pasting in much more than that. Workflow logic. Collection ideas. Search structures. Schema designs. Target categories. Internal pain points. Case-adjacent examples. Fragments of methodology. All of that can reveal how a person or team works, even if they never intend it to.
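

If you use those platforms anyway, at least control what leaves the machine. Here is a minimal sketch of one mitigation, stripping case-specific terms from a prompt before it is sent. The term list is a hypothetical stand-in for your own keywords and watchlists, and no particular platform API is assumed.

```python
# A minimal sketch, not a product: redact case-specific terms from a
# prompt before it leaves your machine for a hosted AI coding platform.
# SENSITIVE_TERMS is a hypothetical stand-in for your own watchlists,
# usernames, and case keywords.
import re

SENSITIVE_TERMS = ["example-target-name", "case-7-keyword", "internal-project-x"]

def redact(prompt: str) -> str:
    for term in SENSITIVE_TERMS:
        prompt = re.sub(re.escape(term), "[REDACTED]", prompt, flags=re.IGNORECASE)
    return prompt

draft = "Write a scraper that monitors posts mentioning example-target-name."
print(redact(draft))  # the structure survives; the case detail does not
```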


And again, the issue is not that every platform is malicious. The issue is that too many users do not really know what the privacy boundaries are, what gets logged, what is retained, what is visible, how integrations behave, or how much of their working method they are externalising in the process.


That should matter in OSINT.


This field is supposed to produce people who question interfaces, inspect assumptions, and stay skeptical of anything that makes the hard part disappear too neatly. Yet when it comes to software that promises speed or convenience, many practitioners seem willing to suspend exactly those instincts.


That is a bad habit to build.


This is the decision point the field needs


None of this means people should stop experimenting. That would be lazy advice.


Build if building genuinely helps. Use AI if it saves real time. Test tools if they solve real problems. But stop confusing speed with maturity. Stop assuming a working prototype deserves trust. Stop treating labels like “open source,” “local,” “private,” or “AI-powered” as if they settle the risk question. They do not.


More importantly, stop thinking about tooling decisions as if they are only technical decisions. They are not. They are operational decisions. They are analytical decisions. In some cases, they are counterintelligence decisions too.


That is the shift I want more OSINT practitioners to make.


Not less curiosity. Better judgment.

Not less experimentation. More discipline.

Not fear of new tools. A more serious standard for deciding when a tool deserves trust.


Closing


OSINT practitioners speak often about deception, collection, exposure, and manipulation. Good. They should.


But those instincts need to be turned inward as well.


They need to be applied to the software entering the workflow, the platforms shaping behaviour, the repos going viral, and the tools being trusted because they look useful enough to save time.


The real danger of vibe coding in OSINT is not only bad code.


It is the slow normalisation of dependence on systems too many practitioners cannot properly explain, assess, or defend.


That is not progress. It is operational weakness wearing a modern interface.



