The AI Shortcut Trap (and What Moby Dick Taught Me)


Written by Georg Lindsey

I am the co-founder and CEO of CGNET. I love my job and spend a lot of time in the office; I enjoy interacting with folks around the world. Outside the office, I enjoy the coastline, listening to audiobooks, photography, and cooking. You can read more about me here.

October 23, 2025

I still remember the slight knot in my stomach back in school when I’d reach for the CliffsNotes instead of finishing the actual book. Okay — to be fair, I did read Lord of the Flies and The Great Gatsby. But Moby Dick? That was a different story. I just couldn’t muscle through all those whaling chapters, so I leaned on the CliffsNotes version. Or maybe it was a Reader’s Digest Condensed Book — either way, it felt a little embarrassing. And, if I’m honest, a little dishonest.

But here’s the thing: I knew it was a shortcut. I knew I was skipping over some of the depth and nuance of the original work. It saved time, sure. It spared me some boredom. But I never would have claimed I’d read every page.

That’s why I find it so interesting — and unsettling — that we don’t always treat AI the same way. These days, people will copy and paste something from ChatGPT or Perplexity or whatever tool they’re using and pass it off as if it were entirely their own thinking. No hesitation. No second-guessing. And no sense that maybe, just maybe, they’re missing something important.

The truth is, that’s a big mistake.

AI can be a wonderful shortcut — one that saves you time and energy, just like those yellow-and-black CliffsNotes did. But if you don’t read carefully, question what it gives you, and add your own thinking, you’re not just cutting corners — you’re handing over the steering wheel. And that can lead you straight into trouble: wrong facts, flawed logic, even legal or security risks.

(And in the spirit of full disclosure: yes, I used voice-to-text, Perplexity, and ChatGPT to write this. See? It’s okay to use the tools — as long as you’re still the one driving.)

Deloitte Caught Cheating

“Verify, verify, verify” is the golden rule when it comes to using artificial intelligence at work. Remember: whatever you put out there will be attributed to you!

One of the world’s biggest consulting firms just got a very expensive reminder of why.

Recently, Deloitte — yes, the global consulting giant — had to refund part of a $440,000 contract to the Australian government after a serious AI mishap. The company had delivered a report on welfare policy that looked polished and professional. But there was one tiny problem: Much of it was based on AI-generated information that turned out to be completely made up.

We’re talking fake academic citations, nonexistent books, and even invented quotes from a federal judge. An academic reviewing the report spotted references to legal works that didn’t exist — and once people started digging, the whole thing unraveled. Deloitte admitted it had used a generative AI tool to “fill in some gaps” without ever telling the client.

The result? A very public embarrassment, a refund, and a major hit to its reputation.

Lesson #1: Always Say When AI Is Involved

Look, there’s nothing wrong with using AI to help with your work. Lots of nonprofits and small organizations are already doing it — drafting emails, summarizing policies, or even analyzing data. That’s smart. But if AI played a role in something you’re delivering — especially something important, like a grant report, a compliance plan, or a cybersecurity policy — be upfront about it.

Think of it like labeling food. If you bring a dish to a potluck, people want to know what’s in it — especially if they’re allergic to peanuts. The same goes for your work. If AI helped you write it, people deserve to know so they can evaluate it with that context in mind.

Lesson #2: AI Lies (With Confidence) — So Check Everything

The second lesson is even more important: never trust AI blindly. These tools are excellent at sounding smart — and sometimes even more convincing than a real expert — but that doesn’t mean they’re right. They “hallucinate” facts, cite sources that don’t exist, and occasionally make up laws out of thin air.

That might sound funny until you realize what’s at stake. Imagine relying on AI to write a security plan and discovering later that it referenced a cybersecurity standard that doesn’t actually exist. Or that it missed a critical legal requirement because it “guessed” what the law said.

The fix is simple but essential: check its work. Click the links. Verify the sources. Make sure a real human reviews and approves anything before it goes out the door.
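In fact, part of that checking can be scripted. Here’s a minimal sketch in Python (the check_links helper and the sample draft below are illustrative, not a real CGNET tool) that pulls the URLs out of an AI-generated draft and reports which ones actually resolve. A dead link is a strong hint that a citation was hallucinated; a live one still doesn’t prove the source says what the AI claims it does, so a human reviewer still has to read it.

```python
import re
import urllib.request
import urllib.error

# Rough pattern for URLs embedded in prose; good enough for a first pass.
URL_PATTERN = re.compile(r'https?://[^\s)\]">]+')

def check_links(draft_text, timeout=10):
    """Report which URLs cited in a draft actually resolve.

    This only flags candidates for human review: a live link does not
    mean the source supports the claim attached to it.
    """
    results = {}
    for url in sorted(set(URL_PATTERN.findall(draft_text))):
        # HEAD keeps the check lightweight; some servers reject it (405),
        # in which case a GET fallback would be needed.
        request = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                results[url] = f"OK ({response.status})"
        except urllib.error.HTTPError as err:
            results[url] = f"HTTP error {err.code}"
        except OSError as err:  # DNS failures, timeouts, refused connections
            results[url] = f"unreachable ({err})"
    return results

if __name__ == "__main__":
    draft = "Per NIST guidance (https://www.nist.gov/cyberframework), ..."
    for url, status in check_links(draft).items():
        print(f"{url}: {status}")
```

The point isn’t automation for its own sake. A script like this turns “click the links” from a chore into a checklist, so your reviewer’s time goes to reading the sources that do exist.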

Why This Matters for Nonprofits and Small Teams

For many nonprofits, AI feels like a lifeline — a way to get more done with limited staff and tight budgets. And it can be. But with that convenience comes risk. A single inaccurate report, a misinterpreted regulation, or a fake citation can lead to compliance headaches, funding problems, or security gaps.

That’s especially true in cybersecurity. Policies, training materials, and response plans need to be accurate. If they’re not, you’re leaving the door wide open for bigger problems down the road.

AI Is a Great Assistant, Not a Replacement

The Deloitte fiasco teaches us two simple but powerful lessons:

  • Always disclose when AI is part of the process. Transparency builds trust.
  • Always verify AI-generated information before sharing or acting on it. Accuracy protects you.

AI is a powerful tool — like a helpful assistant who works fast but isn’t always careful. You wouldn’t hand over a grant application without reading it first, and you shouldn’t trust AI without reviewing its work, either.

Sure, use AI as a shortcut. Enjoy the time it saves. But keep a human in the loop. Because at the end of the day, it’s your organization’s name — and reputation — on the line.

 

At CGNET, we work with nonprofits and mission-driven organizations around the world to make technology safer, smarter, and more effective. That includes helping teams adopt AI tools responsibly — with clear policies, strong oversight, and cybersecurity practices that protect your people and your data. If you’d like to learn how to integrate AI into your organization without putting your reputation or security at risk, get in touch with us — we’re happy to help you put the right guardrails in place.

Please check out our website or drop me a line at g.*******@***et.com.

 
