Get Started with This AI Primer

Written by Dan Callahan

I am a Senior Technical Advisor to CGNET. Formerly, I managed our Cybersecurity and Cloud Services businesses, and provided consulting to many clients over the years. I wear a lot of hats. Professionally, I'm a builder of businesses. Outside of work, I'm a hobby farmer, chef, skier, dog walker, jokester, woodworker, structuralist, husband and father.

June 8, 2023

AI—artificial intelligence. Another landing spot for that NSFW joke. You know the one. The joke where

  • Everyone is talking about it
  • No one knows what it means
  • No one is actually doing it

Except in this case, plenty of people are doing it. I thought I would offer this AI primer (thanks Google!) so that others (like me) who do not live on the leading edge of technology can get up to speed with AI concepts, before the AI story moves from snow flurries to a blizzard, and the CEO is stopping in to ask for a briefing.

What is AI?

We start our AI primer with a definition of AI. “AI” is shorthand for “artificial intelligence.” (I see that some of the high-priced consulting firms now want to use the term “machine intelligence.”) The “artificial” here distinguishes what (and how) a machine learns from what (and how) a person learns.

Generative AI

Today, we read many stories about AI that refer to “generative AI.” Here, the “generative” descriptor means that AI is being used to generate something new, based on the inputs originally used to train the algorithm. I train the AI model to recognize the salient features of a cat’s face, and then ask it to draw a cat’s face for me. Or I train the model to replace images of my face with those of a cat’s face. Then I sell the technology to Zoom and retire to the British countryside.

Kidding!

Machine Learning

Another key concept for our AI primer is machine learning. Here, we “feed” the machine algorithm a dataset and let it determine what variables are correlated with one another. Once the correlation model is in place, we feed the machine new data, and ask it to generate new content based on the model. This is the step where we hope the machine will tell us something we could not figure out for ourselves.
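
To make that loop concrete, here is a minimal sketch in Python using scikit-learn: we fit a model to a tiny invented dataset, then ask it to predict an outcome for data it has not seen. The numbers and variable names are made up purely for illustration.

    # Train a simple model on a small, invented dataset, then predict on new data.
    from sklearn.linear_model import LinearRegression

    X_train = [[4], [6], [8], [10]]    # hours of sunlight per day (made-up inputs)
    y_train = [1.0, 1.8, 2.9, 4.1]     # tomato yield in kg (made-up outcomes)

    model = LinearRegression()
    model.fit(X_train, y_train)        # the model learns how X and y move together

    # New data the model has never seen; we ask it for a prediction.
    print(model.predict([[7]]))        # predicted yield for 7 hours of sunlight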

For instance, we develop a machine learning model that captures drug reactions and interactions. Then, we ask the model to tell us about the effects of the drug. We hope the model will tell us where the drug might be useful in addressing diseases we had not considered before.

Large Models

Large Models are our training datasets. Large Language Models are text-oriented datasets; other Large Models may contain content that is not text, such as images or GPS coordinates. We want these models to represent reality as best they can. That means the models must be big. Really big. Because these models can be so large, the computing power and expense needed to process them can exceed even a large organization’s budget. Hence, we see the rise of pre-trained large models whose outputs are made available for others to use.

Add to our AI primer the concept of foundation models. These are models that third parties make available to us, through an API or other means. You can start your vegetable garden with seeds that you bury in the soil. Or you can buy vegetable seedlings and get a head start on your summer garden. Foundation models are like that.
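
As a sketch of the “seedling” approach: the snippet below pulls a small, publicly available pre-trained language model through the Hugging Face transformers library and uses it right away, without doing any of the expensive training ourselves. The specific library and model are just one example of how a foundation model can be obtained.

    # Use a third party's pre-trained model instead of training one from scratch.
    from transformers import pipeline

    # Downloads a small, publicly available pre-trained language model.
    generator = pipeline("text-generation", model="distilgpt2")

    # We did none of the (expensive) training; we just use the model's output.
    print(generator("A foundation model is", max_new_tokens=20)[0]["generated_text"])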

Model Bias

As part of our AI primer, remember these characteristics of machine learning:

  • We do not know in advance (a priori, for you Latin geeks) what causes what or what correlates with what (that is why we are turning to machine learning in the first place).
  • We hope that if we use a sufficiently large model as input, we will get some valuable output. But we do not know much about what is in the large model, nor do we know in advance what biases it might contain. We have read about machine learning for facial recognition that performed poorly for groups other than white males, because the large models used to build the facial recognition applications did not adequately represent non-white and non-male faces.

One impact of potential model bias is that we should expect to fine-tune our AI models. This means feeding thousands of new inputs to the model so that it adjusts what it has learned.
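
Before (or alongside) fine-tuning, one simple bias check is to look at whether the training data actually represents the groups we care about. The sketch below does that with pandas; the file name and column name are hypothetical.

    # Check how well each group is represented in a (hypothetical) training dataset.
    import pandas as pd

    faces = pd.read_csv("face_training_set.csv")   # hypothetical file
    print(faces["demographic_group"].value_counts(normalize=True))   # hypothetical column

    # If one group dominates the counts, expect weaker performance on the others,
    # and plan to fine-tune with more examples from the under-represented groups.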

Prompts

We must discuss prompts as part of our AI primer. Think of prompts as the way we ask questions of an AI application. We might search for “broken shoulder recovery statistics” to understand how soon we might expect our broken shoulder to heal. Alternatively, we might ask “should I seek out shoulder surgery?” to have an AI model compare results from research studies and offer an opinion on the necessity and value of shoulder surgery.
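
The mechanics of sending a prompt look the same whether the question is factual or open-ended; only the wording changes. The sketch below reuses the small example model from earlier purely to show those mechanics; a real assistant model would give far more useful answers.

    # Send a factual prompt and an open-ended prompt to the same (example) model.
    from transformers import pipeline

    llm = pipeline("text-generation", model="distilgpt2")   # tiny example model

    fact_prompt = "Broken shoulder recovery statistics:"
    opinion_prompt = "Should I seek out shoulder surgery? Compare the research and give an opinion."

    for prompt in (fact_prompt, opinion_prompt):
        print(llm(prompt, max_new_tokens=40)[0]["generated_text"])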

A Detour on Statistics

Please humor me by taking in a bit of statistics talk. I promise it will not hurt.

As I said earlier, machine learning is the application of algorithms to a dataset to determine correlations between elements in the dataset. So what, then, is correlation?

A correlation between two variables occurs when the value of one variable changes in a way that is consistent with changes in the value of the other variable. The two variables could each get bigger at the same time. Or they could get smaller at the same time. Or one could get bigger while the other gets smaller. We measure the changes in variables a and b, and we apply statistical tests to determine whether those changes could be happening by chance. If not, something is going on; we just do not know what.
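
Here is what that measurement looks like in practice: a Pearson correlation coefficient plus a p-value, computed with SciPy on a pair of invented series. The coefficient says how strongly the two variables move together; the p-value estimates how likely a relationship this strong would be if it were pure chance.

    # Measure correlation between two (invented) variables and test for chance.
    from scipy.stats import pearsonr

    a = [1, 2, 3, 4, 5, 6]
    b = [2.1, 3.9, 6.2, 8.1, 9.8, 12.3]    # tends to grow as a grows

    r, p_value = pearsonr(a, b)
    print(f"correlation r = {r:.2f}")      # close to +1: they move together
    print(f"p-value = {p_value:.4f}")      # small: unlikely to be chance alone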

Correlation analysis can help us see that two (or more) variables change together. It cannot tell us why those variables change together. Here, we often substitute our own notions of causality. Fewer earthquake deaths occur on Sundays because more people are in church. (Or the opposite…)

Notice that, in trying to understand why two variables are correlated, we look for causation. The two variables are correlated because one variable causes a change in the other variable.

At this point, our statistics professor clears her throat and reminds us:

Correlation is not causation.

We know that two variables move with one another. We do not know why. So, we do not know if we have learned something meaningful or if we are witnessing random chance in action.
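
To see “random chance in action,” the sketch below generates many pairs of completely unrelated random series and keeps the strongest correlation it stumbles across. With enough tries, pure noise can look impressively correlated, which is exactly why a correlation on its own proves nothing.

    # Show how unrelated random data can still produce a strong-looking correlation.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    best = 0.0
    for _ in range(1000):
        x = rng.normal(size=10)
        y = rng.normal(size=10)     # generated independently of x
        r, _ = pearsonr(x, y)
        best = max(best, abs(r))

    print(f"strongest correlation found between unrelated noise: {best:.2f}")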

AI Primer Takeaway: Does it Make Sense?

The takeaway: machine learning will generate content that is correlated with the inputs it has observed. That content may or may not make sense. We humans must review the content and determine what makes sense and what is gibberish.

Will AI Be the Promised Land? Or Hell on Earth?

This belongs in the non-technology section of your AI primer. There have been news reports lately of people (who are not crackpots) predicting that AI could be the ultimate evil, threatening life on Earth. Could it happen? Who knows. Just bear in mind that some of those discussions are taking place in front of the US Congress, where AI leaders are asking Congress, “please, regulate us!”

Remember who else made that request? Facebook.

Why would a tech company ask to be regulated? Because then they can blame the regulators for things that go badly. And my confidence in Congress to intelligently regulate the AI industry is… very low.

So, I do not think that AI will kill us all. (Just to be safe, however, I end all my Siri and Alexa requests with “please.”) I also do not think AI will free me to sip cocktails while my trusty robot machine does all my work. I suspect the truth will be somewhere in between. Where exactly? That will be the interesting part to discover.
