The AI Shortcut Trap (and What Moby Dick Taught Me)

Written by Georg Lindsey

I am the co-founder and CEO of CGNET. I love my job and spend a lot of time in the office -- I enjoy interacting with folks around the world. Outside the office, I enjoy the coastline, listening to audiobooks, photography, and cooking. You can read more about me here.

October 23, 2025

I still remember the slight knot in my stomach back in school when I'd reach for the Cliff Notes instead of finishing the actual book. Okay -- to be fair, I did read Lord of the Flies and The Great Gatsby. But Moby Dick? That was a different story. I just couldn't muscle through all those whaling chapters, so I leaned on the Cliff Notes version. Or maybe it was a Reader's Digest Condensed Book -- either way, it felt a little embarrassing. And, if I'm honest, a little dishonest.

But here's the thing: I knew it was a shortcut. I knew I was skipping over some of the depth and nuance of the original work. It saved time, sure. It spared me some boredom. But I never would have claimed I'd read every page.

That's why I find it so interesting -- and unsettling -- that we don't always treat AI the same way. These days, people will copy and paste something from ChatGPT or Perplexity or whatever tool they're using and pass it off as if it were entirely their own thinking. No hesitation. No second-guessing. And no sense that maybe, just maybe, they're missing something important.

The truth is, that's a big mistake.

AI can be a wonderful shortcut -- one that saves you time and energy, just like those yellow-and-black Cliff Notes did. But if you don't read carefully, question what it gives you, and add your own thinking, you're not just cutting corners -- you're handing over the steering wheel. And that can lead you straight into trouble: wrong facts, flawed logic, even legal or security risks.

(And in the spirit of full disclosure: yes, I used voice-to-text, Perplexity, and ChatGPT to write this. See? It's okay to use the tools -- as long as you're still the one driving.)

Deloitte Caught Cheating

Verify, verify, verify is the golden rule when it comes to using artificial intelligence at work. Remember, whatever you put out there will be attributed to you!

One of the worldโ€™s biggest consulting firms just got a very expensive reminder of why.

Recently, Deloitte -- yes, the global consulting giant -- had to refund part of a $440,000 contract to the Australian government after a serious AI mishap. The company had delivered a report on welfare policy that looked polished and professional. But there was one tiny problem: Much of it was based on AI-generated information that turned out to be completely made up.

We're talking fake academic citations, nonexistent books, and even invented quotes from a federal judge. An academic reviewing the report spotted references to legal works that didn't exist -- and once people started digging, the whole thing unraveled. Deloitte admitted it had used a generative AI tool to "fill in some gaps" without ever telling the client.

The result? A very public embarrassment, a refund, and a major hit to their reputation.

Lesson #1: Always Say When AI Is Involved

Look, there's nothing wrong with using AI to help with your work. Lots of nonprofits and small organizations are already doing it -- drafting emails, summarizing policies, or even analyzing data. That's smart. But if AI played a role in something you're delivering -- especially something important, like a grant report, a compliance plan, or a cybersecurity policy -- be upfront about it.

Think of it like labeling food. If you bring a dish to a potluck, people want to know what's in it -- especially if they're allergic to peanuts. The same goes for your work. If AI helped you write it, people deserve to know so they can evaluate it with that context in mind.

Lesson #2: AI Lies (With Confidence) -- So Check Everything

The second lesson is even more important: never trust AI blindly. These tools are excellent at sounding smart -- and sometimes even more convincing than a real expert -- but that doesn't mean they're right. They "hallucinate" facts, cite sources that don't exist, and occasionally make up laws out of thin air.

That might sound funny until you realize what's at stake. Imagine relying on AI to write a security plan and discovering later that it referenced a cybersecurity standard that doesn't actually exist. Or that it missed a critical legal requirement because it "guessed" what the law said.

The fix is simple but essential: check its work. Click the links. Verify the sources. Make sure a real human reviews and approves anything before it goes out the door.
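If your team reviews AI drafts regularly, even a tiny script can help with the mechanical part of that checklist: pulling every link out of a draft so a real person can open each one. Here's a minimal sketch in Python -- the `extract_urls` helper and the sample text are illustrative, not a real tool, and the human review step is still the point:

```python
import re

def extract_urls(text):
    """Pull every http(s) link out of a draft so a human can open and verify each one."""
    # Match URLs up to whitespace or closing punctuation, then trim trailing marks.
    return [u.rstrip(".,;") for u in re.findall(r"https?://[^\s)\]>\"']+", text)]

# Hypothetical AI-generated draft with two cited sources.
draft = (
    "Per the 2023 guidance (https://example.org/guidance), "
    "see also https://example.org/standard."
)

for url in extract_urls(draft):
    print(url)  # click each source by hand before the draft goes out the door
```

The script only gathers the links; deciding whether each source is real, relevant, and correctly quoted still takes a person.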

Why This Matters for Nonprofits and Small Teams

For many nonprofits, AI feels like a lifeline -- a way to get more done with limited staff and tight budgets. And it can be. But with that convenience comes risk. A single inaccurate report, a misinterpreted regulation, or a fake citation can lead to compliance headaches, funding problems, or security gaps.

That's especially true in cybersecurity. Policies, training materials, and response plans need to be accurate. If they're not, you're leaving the door wide open for bigger problems down the road.

AI Is a Great Assistant -- Not a Replacement

The Deloitte fiasco teaches us two simple but powerful lessons:

  • Always disclose when AI is part of the process. Transparency builds trust.
  • Always verify AI-generated information before sharing or acting on it. Accuracy protects you.

AI is a powerful tool -- like a helpful assistant who works fast but isn't always careful. You wouldn't hand over a grant application without reading it first, and you shouldn't trust AI without reviewing its work, either.

Sure, use AI as a shortcut. Enjoy the time it saves. But keep a human in the loop. Because at the end of the day, it's your organization's name -- and reputation -- on the line.

 

At CGNET, we work with nonprofits and mission-driven organizations around the world to make technology safer, smarter, and more effective. That includes helping teams adopt AI tools responsibly -- with clear policies, strong oversight, and cybersecurity practices that protect your people and your data. If you'd like to learn how to integrate AI into your organization without putting your reputation or security at risk, get in touch with us -- we're happy to help you put the right guardrails in place.

Please check out our website or drop me a line at g.*******@***et.com.

 

