Tips for Creating Generative AI Policies

Written by Jackie Bilodeau

I am the Communications Director for CGNET, having returned to CGNET in 2018 after a 10-year stint in the 1990s. I enjoy hiking, music, dance, photography, writing and travel. Read more about my work at CGNET here.

August 3, 2023

Generative AI is suddenly everywhere.  Students are using it to help with schoolwork. Teachers are using it to draw up lesson plans. Employees at all levels – and at organizations of every size – are finding ways to use it to make their work more efficient.  But is your organization prepared for the risk of exposing confidential data that generative AI brings?  Have you created a set of generative AI policies and made your staff – and vendors, consultants, and so forth – aware of them?  If not, here are some tips for doing just that.

Categorize and define data sensitivity

Before you even start setting rules, outline in detail the sensitivity level of all your organization’s information.  Essentially, it needs to be clear to staff what information can and cannot be fed into generative AI tools like ChatGPT.

Consider creating a chart that defines the level of sensitivity of each class of data and provides examples.  I recently saw such a chart created by a foundation. Here’s how they broke it down:

Public Data

This type of data is fairly obvious.  If the public can already find it on the internet, it’s safe to upload to a generative AI tool.  Examples include published news stories, public domain info, newsletters, audited financials, published lists of grants, and so on.

Non-Sensitive (but not public) Data

While not necessarily “out in the public” already, there is plenty of data that you may feel is safe for staff to share with AI tools.  Basically, anything that does not reveal identifiable individual information OR proprietary organizational information is okay to feed into these tools. Examples include anonymized grantee data, non-sensitive internal communications – think holiday calendar and the like – and departmental policies if – and only if – approved by the department head.
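A quick note on that first example: “anonymized” should mean genuinely de-identified, not just with the names deleted. As a purely illustrative sketch – the field names, record shape, and salt handling here are my own assumptions, not any particular foundation’s practice – here is one way staff could pseudonymize a grantee record before pasting it into an AI tool:

```python
# Illustrative only: replace identifying fields with stable, non-reversible
# tokens before any text is shared with a generative AI tool.
import hashlib

IDENTIFYING_FIELDS = {"name", "email", "phone"}  # assumption: adjust to your data

def pseudonymize(record: dict, salt: str) -> dict:
    """Return a copy of the record with identifying fields tokenized."""
    cleaned = {}
    for field, value in record.items():
        if field in IDENTIFYING_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:8]
            cleaned[field] = f"{field}_{digest}"  # e.g. "name_3fa1b2c4"
        else:
            cleaned[field] = value
    return cleaned

print(pseudonymize({"name": "Ada Lovelace", "grant_amount": 50000}, salt="keep-me-private"))
# {'name': 'name_xxxxxxxx', 'grant_amount': 50000}
```

Because the salt stays inside your organization, the same grantee maps to the same token across uploads, but the token can’t be reversed by anyone outside.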

Sensitive Data

Clearly, this is the most important section of the chart, since it’s the “What Not to Share” section.

This includes:

  • Any and all personal information of employees, vendors, etc. (personal contact info, SSNs, health information, login credentials, HR data like performance reviews, VIP contact info)
  • Legal documentation subject to attorney-client privilege
  • Contractual information
  • Any info subject to third-party confidentiality requirements or expectations
  • Information about organization processes or products of a proprietary nature, including investment information, partner evaluations, etc.
  • Source code of any custom applications within the organization
  • Undisclosed (not published) grant documentation
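If you want this chart to do double duty – readable by staff and usable by tooling – consider keeping it in one machine-readable place. Below is a minimal sketch; the tier names and examples simply mirror the chart above, so treat it as an illustration rather than a finished schema:

```python
# Illustrative sketch: the three-tier chart above, encoded so intranet pages
# and pre-submission checks can all reference a single source of truth.
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = "safe to share with generative AI tools"
    NON_SENSITIVE = "shareable if no identifiable or proprietary info"
    SENSITIVE = "never share with generative AI tools"

# Examples per tier, mirroring the chart above (not exhaustive).
EXAMPLES = {
    Sensitivity.PUBLIC: ["published news stories", "audited financials"],
    Sensitivity.NON_SENSITIVE: ["anonymized grantee data", "holiday calendar"],
    Sensitivity.SENSITIVE: ["SSNs", "source code", "unpublished grant documentation"],
}

def tier_for(example: str):
    """Look up an example's tier; unknown data should go to human review."""
    for tier, examples in EXAMPLES.items():
        if example in examples:
            return tier
    return None  # deliberately NOT defaulting to PUBLIC

print(tier_for("source code"))  # Sensitivity.SENSITIVE
```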

Create your generative AI policy

Principles

First, explain the principles behind the policies, so your staff understands fully why they are necessary. Go over both the positives and negatives of using AI tools. Explain the various risks of harm, including:

  • exposing private data
  • the potential for offensive, erroneous and/or biased outputs by AI
  • violating intellectual property rights

Usage: Guidance & responsible use

Consider a section that provides guidance on how AI tools are best used. Examples could be:

  • drafting an email
  • summarizing a meeting
  • creating an agenda
  • making a first draft of a (non-confidential) document.

As one foundation explained in the guidance section of their new policy, don’t input or “ask” an AI tool anything you wouldn’t want to see as a headline in a newspaper! And beyond this best-use advice, remind employees that they and they alone are responsible for both fact-checking the output and removing any information that violates the rules outlined in your policies.

Rules

Using the categories I provided in the Data Sensitivity section earlier, detail specifically what is restricted from being uploaded to generative AI tools.  Also, consider providing a list of the AI tools that are pre-approved for use at your organization and instructing staff to use only those. (Any other tool would then require approval from your head of IT.)
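Rules like these can also be backstopped with lightweight tooling. The sketch below is illustrative only – the approved-tool domains and the regex patterns are my own assumptions, and a simple pattern check is no substitute for a real data-loss-prevention product – but it shows the idea of flagging obvious problems before anything is submitted:

```python
# Illustrative only: flag unapproved tools and obviously restricted content
# before a prompt leaves the organization. Not a real DLP solution.
import re

APPROVED_TOOLS = {"chatgpt.com", "copilot.microsoft.com"}  # example allowlist

RESTRICTED_PATTERNS = {
    "possible SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credential-like string": re.compile(r"(?i)\b(password|api[_ ]?key)\b\s*[:=]"),
}

def check_submission(text: str, tool_domain: str) -> list:
    """Return a list of policy warnings; an empty list means no obvious issues."""
    warnings = []
    if tool_domain not in APPROVED_TOOLS:
        warnings.append(f"{tool_domain} is not on the pre-approved tool list")
    for label, pattern in RESTRICTED_PATTERNS.items():
        if pattern.search(text):
            warnings.append(f"text appears to contain a {label}")
    return warnings

print(check_submission("My password: hunter2", "chatgpt.com"))
# ['text appears to contain a credential-like string']
```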

Consequences

Determine, and then spell out, the consequences of noncompliance.

Share your policy

Obviously, these generative AI policies are ineffective if your staff aren’t aware of them.  But how best to make sure they’ve read and understood the new rules?  Or that they even know enough about AI tools to understand what all the fuss is about?  Consider training sessions: these could be added to your regular cybersecurity training or (given the novelty of some AI tools) offered as special, separate training. For best results, make the training interactive and fun, and include quizzes to ensure that everyone is paying attention.

Evolve your policy…and your training

As generative AI tools are a hot product right now – and for the foreseeable future – know that new ones will be hitting the internet regularly.  And the ones already out there will be constantly evolving.  For that reason, it is critical that your generative AI policy and training evolve to keep up.  Don’t assume that the policy and training you provide today will cover everything available six months from now.

The time to create these generative AI policies is now.  Don’t wait until your organization’s sensitive information is exposed and there’s no going back!
