GenAI Prompts at Work: Risky Business

GenAI prompts

Written by Jackie Bilodeau

I am the Communications Director for CGNET, having returned to CGNET in 2018 after a 10-year stint in the 1990's. I enjoy hiking, music, dance, photography, writing and travel. Read more about my work at CGNET here.

February 27, 2025

Generative AI (GenAI) tools are becoming increasingly popular in the workplace, but they come with significant risks. Researchers at Harmonic Security have found that employees are sharing a wide range of sensitive data through these tools, raising concerns about data security and privacy. For this reason, many organizations are hesitant to allow staff to fully adopt AI into their daily work routines.

Understanding the Danger

These fears aren’t unfounded, and here’s why: Whenever a user enters data into an AI tool like ChatGPT, Gemini, etc., that data is consumed by that service to be used as material to train the next generation of its algorithm. Unfortunately, if proper data security isn’t in place within an organization, that data may then become available for hackers to retrieve via nefarious methods. For example, one of the main AI threat vectors is known as “prompt injection”, where cleverly constructed AI prompts can trigger unintended or unauthorized action that calls upon that sensitive data.

Sensitive Data Categories

According to Harmonic Security’s analysis of thousands of prompts submitted into a wide variety of GenAI tools, they discovered that a significant number of them contained sensitive company information. And while in many of the cases the employees are using the tools for straightforward, simple tasks (for example, summarizing a piece of text or editing a blog), 8.5% of the prompts included sensitive data. The types of information can be categorized as follows:

  • Customer data
  • Employee data
  • Legal and finance
  • Security
  • Sensitive code

It should be noted that customer data is by far the most frequently shared at nearly 46%.

Finding the Risk-Reward Balance

The time-saving capabilities GenAI offers organizations can bring huge advantages when it comes to efficiency, productivity and innovation. On the other hand, the risky behaviors of employees who use it can threaten to bring huge reputational and/or financial damages upon the organization. So finding that perfect balance is key.

Security experts offer the following advice:

  • Deploy AI monitoring systems (e.g., Datadog, Dynatrace, and Brand24) that track GenAI input in real time
  • Use premium, paid plans that do not use inputted data to train their algorithms
  • Train employees on best practices, teaching responsible GenAI use

Take Action

As a next step, we encourage all organizations to conduct a thorough review of their current AI usage policies and implement the appropriate security measures. By doing so, you can enjoy the many advantages of AI while simultaneously ensuring a safer and more secure future for your organization.

 

 

Written by Jackie Bilodeau

I am the Communications Director for CGNET, having returned to CGNET in 2018 after a 10-year stint in the 1990's. I enjoy hiking, music, dance, photography, writing and travel. Read more about my work at CGNET here.

You May Also Like…

You May Also Like…

0 Comments

Submit a Comment

Your email address will not be published. Required fields are marked *

Translate »
Share This
Subscribe