
The Slow Collapse of Critical Thinking in OSINT Due to AI

59 points · 7 hours ago · dutchosintguy.com
palmotea · 3 hours ago

One way to achieve superhuman intelligence in AI is to make humans dumber.

imoverclocked · 2 hours ago

That’s only if our stated goal is to make superhuman AI and we use AI at every level to help drive that goal. Point received.

jruohonen · 6 hours ago

"""

• Instead of forming hypotheses, users asked the AI for ideas.

• Instead of validating sources, they assumed the AI had already done so.

• Instead of assessing multiple perspectives, they integrated and edited the AI’s summary and moved on.

This isn’t hypothetical. This is happening now, in real-world workflows.

"""

Amen, and OSINT is hardly unique in this respect.

And implicitly related, philosophically:

https://news.ycombinator.com/item?id=43561654

cmiles74 · 3 hours ago

Anyone using these tools would do well to take this article to heart.

ridgeguy · 1 hour ago

I think this post isn't limited to OSINT. It's widely applicable, probably where AI is being adopted as a new set of tools.

BariumBlue · 2 hours ago

Good point in the post about confidence: most people equate confidence with accuracy, and since AIs always sound confident, they always sound correct.

rglover · 2 hours ago

Yep. Last night I was asking ChatGPT (4o) to help me generate a simple HTML canvas that users could draw on. Multiple times, it spoke confidently about a solution that wasn't even close to working (copying the text from the chat below):

- "Final FIXED & WORKING drawing.html" (it wasn't working at all)

- "Full, Clean, Working Version (save as drawing.html)" (not working at all)

- "Tested and works perfectly with: Chrome / Safari / Firefox" (not working at all)

- "Working Drawing Canvas (Vanilla HTML/JS — Save this as index.html)" (not working at all)

- "It Just Works™" (not working at all)

The last one was so obnoxious I moved over to Claude (3.5 Sonnet) and it knocked it out in 3-5 prompts.
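For context on how small the ask was: a minimal mouse-only sketch of a drawing canvas (my own rough version, not the file either model actually produced) is only a handful of lines:

```html
<!-- drawing.html: minimal sketch of a mouse-drawable canvas -->
<!DOCTYPE html>
<html>
<body>
<canvas id="pad" width="600" height="400" style="border:1px solid #ccc"></canvas>
<script>
  const canvas = document.getElementById('pad');
  const ctx = canvas.getContext('2d');
  let drawing = false;

  // Translate viewport mouse coordinates into canvas coordinates
  function pos(e) {
    const r = canvas.getBoundingClientRect();
    return { x: e.clientX - r.left, y: e.clientY - r.top };
  }

  canvas.addEventListener('mousedown', e => {
    drawing = true;
    const p = pos(e);
    ctx.beginPath();
    ctx.moveTo(p.x, p.y);
  });
  canvas.addEventListener('mousemove', e => {
    if (!drawing) return;        // only draw while the button is held
    const p = pos(e);
    ctx.lineTo(p.x, p.y);
    ctx.stroke();
  });
  ['mouseup', 'mouseleave'].forEach(ev =>
    canvas.addEventListener(ev, () => drawing = false));
</script>
</body>
</html>
```

(No touch/stylus support, no undo, no line-width controls; the point is just that the core loop is mousedown/mousemove/mouseup plus `beginPath`/`lineTo`/`stroke`.)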

dullcrisp · 1 hour ago

To be fair, I wouldn't really expect working software if someone described it that way either.

morkalork · 2 hours ago

The number of times I've caught ChatGPT passing something off with perfect confidence is growing, but what's truly annoying is when you point it out and you get that ever-so-cheerful "oh I'm so sorry teehee" response from it. It's dumb stuff too, like a formula it's simplified based on an assumption that was never prompted.

treyfitty · 3 hours ago

Well, if I want to first understand the basics, such as “what do the letters OSINT mean,” I’d think the homepage (https://osintframework.com/) would tell me. But alas, it does not, and a simple chatgpt query would have told me the answer without the wasted effort.

OgsyedIE · 3 hours ago

Similar criticisms, that outsiders need to do their own research to acquire foundational-level understanding before they start on the topic, can be made about other popular topics on HN that frequently use abbreviations, such as TLS, BSDs, URL, and MCP, but somehow those get a pass.

Is it unfair to make such demands for the inclusion of 101-level stuff in non-programming content, or is it unfair to give IT topics a pass? Which approach fosters a community of winners and which one does the opposite? I'm confident that you can work it out.

walterbell · 3 hours ago
dullcrisp · 1 hour ago

Ironically, my local barber shop also wouldn't explain to me what OSINT stands for.

hmcq6 · 2 hours ago

The OSINT framework isn’t meant to be an intro to OSINT. This is like getting mad that https://planningpokeronline.com/ doesn’t explain what Kanban is.

If anything, you've just pointed out how over-reliance on AI is weakening your ability to search for relevant information.

FrankWilhoit · 5 hours ago

A crutch is one thing. A crutch made of rotten wood is another.

add-sub-mul-div · 4 hours ago

Also, a crutch for doing long division is not the same as a crutch for general thinking and creativity.

rini17 · 2 hours ago

It isn't something completely new; there are many cases of unwarranted trust in machines from even before computers existed. AI just adds persuasion.

The "Pray Mr. Babbage..." anecdote comes to mind: https://www.azquotes.com/quote/14183

aaron695 · 3 hours ago

[dead]

nonrandomstring · 4 hours ago

> This isn’t a rant against AI. I use it daily

It is, but it adds a disingenuous apologetic.

Not wishing to pick on this particular author, or even this particular topic, but it follows a clear pattern that you can find everywhere in tech journalism:

  Some really bad thing X is happening. Everyone knows X is happening.
  There is evidence X is happening, But I am *not* arguing against X
  because that would brand me a Luddite/outsider/naysayer.... and we
  all know a LOT of money and influence (including my own salary)
  rests on nobody talking about X.
Practically every article on the negative effects of smartphones or social media printed in the past 20 years starts with the same chirpy disavowal of the author's actual message. Something like:

"Smartphones and social media are an essential part of modern life today... but"

That always sounds like those people who say "I'm not a racist, but..."

Sure, we get it, there's a lot of money and powerful people riding on "AI". Why water down your message of genuine concern?

rini17 · 1 hour ago

There were too many cheap accusations of hypocrisy ("you say X is bad, so why do you use it yourself?"). So everyone is now preempting it.

trinsic2 · 2 hours ago

I think this is a good point regardless of how much you have been downvoted. I hope you're not using this context to sub-communicate that this issue isn't important. If not, it might have been better to put your last line at the top.

AIorNot · 3 hours ago

This is another silly rant against AI tools, one that doesn't offer useful or insightful suggestions on how to adapt, or an informed study of the areas of concern, and one that capitalizes on the natural worries we have on HN because of our generic fears about critical thinking being lost when AI takes over our jobs. In general, it's rather like the concerns about the web in the pre-internet age and about SEO in the digital-marketing age.

OSINT only exists because of internet capabilities and Google search, i.e., someone had to learn how to use those new tools just a few years ago and apply critical thinking.

AI tools and models are rapidly evolving, with more in-depth capabilities appearing in the models. All this means the tools are hardly set in stone and the workflows will evolve with them; it's still up to human oversight to evolve with the tools, and the skill of humans overseeing AI is something that will develop too.

card_zero · 3 hours ago

The article is all about that oversight. It ends with a ten-point checklist with items such as "Did I treat GenAI as a thought partner—not a source of truth?".

cmiles74 · 3 hours ago

So weak! No matter how good a model gets, it will always present information with confidence regardless of whether or not it's correct. Anyone that has spent five minutes with the tools knows this.

salgernon · 2 hours ago

OSINT (not a term I was particularly familiar with, personally) actually goes back quite a ways[1]. Software certainly makes it easier to aggregate the information and find the signal in the noise, but bad security practices do far more to make that information accessible.

[1] https://www.tandfonline.com/doi/full/10.1080/16161262.2023.2...