AI Drove. I Believed.

Waymo AI Driving

Written by Georg Lindsey

I am the co-founder and CEO of CGNET. I love my job and spend a lot of time in the office; I enjoy interacting with folks around the world. Outside the office, I enjoy the coastline, listening to audiobooks, photography, and cooking. You can read more about me here.

December 18, 2025

I rode in a Waymo for the first time recently. It was fun and practical, and it meant I could talk to my colleague without worrying about driving or missing a turn. By the end of the trip, I was sold.

The ride experience

The first surprise was how normal the whole experience felt. The car pulled up, I got in, buckled up, tapped to start, and then… it just drove, smoothly and predictably, like a cautious but competent human driver. The cost seemed reasonable, especially when factoring in the ability to focus on conversation instead of traffic and navigation.

From “cool gadget” to AI system

What struck me afterward was how easy it was to treat Waymo as just another transportation option, when in fact I had just trusted a very complex AI system with my safety. Under the hood, that ride depended on years of work in machine learning, robotics, mapping, and simulation, all blended into something that feels as simple as calling a car from an app. The “magic” of AI disappears into the background the moment the experience becomes reliable and boring in the best possible way.

How the car knows where it is

A human driver orients using street signs, landmarks, and a general sense of the area. A robotaxi does something similar, but with far more detail and precision. Before operating in a city, teams drive specialized vehicles around to build extremely detailed maps: lane boundaries, curbs, crosswalks, traffic lights, even how wide a turn actually is. During your ride, the car constantly matches its sensor data against those maps, updating its understanding of where it is within inches. This combination of prebuilt maps and real time perception is the foundation on which everything else rests.
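
If you like to see the idea in code, here is a toy sketch of that matching step: given a handful of known map landmarks and a slightly shifted sensor view of them, a brute-force search recovers the car's position offset. The landmark coordinates, search window, and grid search are all my own illustrative choices, not Waymo's method; real systems use probabilistic filters over vastly richer maps.

```python
# Toy localization sketch: align a LiDAR-style scan against a prebuilt map.
# Everything here (landmark coordinates, grid search) is illustrative only;
# production systems use far richer maps and probabilistic state estimation.
import numpy as np

# Prebuilt map: known landmark positions (e.g., curb corners), in meters.
map_landmarks = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 3.0], [0.0, 3.0]])

# The car's sensors see the landmarks shifted, because its true position
# differs slightly from where it believes it is.
true_offset = np.array([0.32, -0.18])       # unknown to the car
observed = map_landmarks - true_offset      # sensor view of the landmarks

def localize(observed, landmarks, search=0.5, step=0.01):
    """Brute-force search for the offset that best aligns scan and map."""
    best_offset, best_err = None, np.inf
    for dx in np.arange(-search, search, step):
        for dy in np.arange(-search, search, step):
            shifted = observed + np.array([dx, dy])
            err = np.sum((shifted - landmarks) ** 2)  # alignment error
            if err < best_err:
                best_err, best_offset = err, np.array([dx, dy])
    return best_offset

estimate = localize(observed, map_landmarks)
print(f"true offset {true_offset}, estimated {estimate}")  # ~[0.32, -0.18]
```

The takeaway is the shape of the computation: the map is fixed and trusted, and localization is a continuous search for the pose that makes the live sensor data agree with it.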

Seeing the world: sensors and perception AI

If you glance at a Waymo car, the “hat” on top and the sensors around the body are hard to miss. Those are LiDAR units, cameras, and radar systems that collectively act as the car’s eyes and ears. Together they let the car see in 360 degrees, in rain or darkness, and at ranges beyond what most humans manage consistently. Perception AI models take this raw data and answer basic but crucial questions: “Is that object a pedestrian, a cyclist, a car, a traffic cone, or a dog?” and “Exactly how fast is it moving, and in what direction?”
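
Here is an equally simplified sketch of the second question, estimating motion: track the same object across two sensor frames and difference its positions. The Detection class, labels, and numbers are invented for illustration; in practice deep networks do the classifying over fused LiDAR, camera, and radar data.

```python
# Toy perception sketch: every frame must answer "what is this object?"
# and "how is it moving?" Names and numbers here are illustrative only.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g., "pedestrian", "cyclist", "car"
    x: float            # position in meters, car-centric frame
    y: float
    t: float            # timestamp in seconds

def estimate_velocity(prev: Detection, curr: Detection) -> tuple[float, float]:
    """Finite-difference velocity from two sightings of the same object."""
    dt = curr.t - prev.t
    return ((curr.x - prev.x) / dt, (curr.y - prev.y) / dt)

# The same cyclist seen in two consecutive sensor frames, 0.1 s apart.
a = Detection("cyclist", x=12.0, y=-2.0, t=0.0)
b = Detection("cyclist", x=12.4, y=-1.9, t=0.1)
print(estimate_velocity(a, b))  # ≈ (4.0, 1.0) m/s
```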

Predicting what everyone else will do

Driving is not just about knowing where things are now; it is about guessing what they will do next. A person walking near a curb might step into the street. A cyclist might merge into a lane to avoid a parked car. A driver might cut across multiple lanes to make an exit at the last second. Waymo-style systems use learned models to estimate many possible futures for every nearby road user, with probabilities attached. The car then plans its own motion so that, even if someone behaves in a slightly unexpected way, there is still a safe buffer.
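
A toy version of that "plan against many futures" idea might look like the following: a few candidate pedestrian futures with probabilities attached, and a check that the planned path keeps a buffer against every one of them. The probabilities, positions, and buffer distance are all made up for illustration, not Waymo's model.

```python
# Toy prediction sketch: assign probabilities to a few candidate futures
# for a pedestrian, then verify the car's plan keeps a buffer against ALL
# of them, not just the most likely one. Numbers are invented.
futures = [
    # (probability, pedestrian position 2 seconds from now, in meters)
    (0.70, (3.0, 4.0)),    # keeps walking along the sidewalk
    (0.25, (3.0, 2.0)),    # drifts toward the curb
    (0.05, (3.0, -0.5)),   # steps into the street
]

car_position_in_2s = (3.0, -1.5)   # where the planned path puts the car
SAFE_BUFFER_M = 1.5                # minimum clearance we insist on

def plan_is_safe(car_pos, futures, buffer_m):
    """Reject the plan if ANY plausible future comes within the buffer."""
    for prob, (px, py) in futures:
        dist = ((px - car_pos[0]) ** 2 + (py - car_pos[1]) ** 2) ** 0.5
        if dist < buffer_m:
            return False, prob, dist
    return True, None, None

ok, prob, dist = plan_is_safe(car_position_in_2s, futures, SAFE_BUFFER_M)
print("safe" if ok else f"replan: a {prob:.0%} future gets within {dist:.1f} m")
```

Note the design choice: even a 5% future is enough to force a replan, which is why these cars feel cautious.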

Planning and decision-making

Once the AI has a picture of the world and likely futures, it has to choose a specific course of action: stay in this lane or change, slow down or maintain speed, inch forward at a four-way stop or yield. Underneath, this is an optimization problem: given safety constraints, traffic rules, comfort, and efficiency, choose a path and speed profile that works. The system recomputes this many times per second, so it can adapt if a parked truck suddenly starts moving or an emergency vehicle appears.
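
To make the optimization framing concrete, here is a deliberately tiny sketch: score a few candidate maneuvers on safety, comfort, and progress, treat too-little clearance as a hard constraint, and pick the cheapest. The candidates and weights are invented; real planners search continuous trajectory spaces under far more constraints, many times per second.

```python
# Toy planning sketch: pick the lowest-cost maneuver subject to a hard
# safety constraint. Candidates and weights are illustrative only.
candidates = [
    # (name, clearance to nearest obstacle in m, |deceleration| m/s^2, progress m)
    ("keep lane, keep speed", 0.8, 0.0, 30.0),
    ("keep lane, slow down",  2.5, 2.0, 22.0),
    ("change lane left",      3.0, 0.5, 28.0),
]

def cost(clearance, decel, progress):
    if clearance < 1.0:                 # hard safety constraint: too close
        return float("inf")             # -> candidate is inadmissible
    return 1.0 * decel - 0.1 * progress  # penalize braking, reward progress

best = min(candidates, key=lambda c: cost(*c[1:]))
print(best[0])  # "change lane left": safe, gentle, and makes progress
```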

Learning from millions of miles

One human driver learns slowly over years of daily commuting. A fleet of autonomous cars can learn from millions of miles in the real world plus billions more in simulation. Edge cases — an unusual construction layout, a strange pedestrian behavior, an odd unprotected left turn — can be replayed, varied, and stress tested in virtual form thousands of times. When the team improves the models based on these cases, every car in the fleet benefits, not just the one that encountered the problem. That “hive mind” learning is one reason these systems can steadily improve over time.
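
A schematic of that replay-and-vary loop might look like this: take one logged edge case, perturb it a thousand ways, and collect the variants the current behavior fails on, which then feed back into training. The simulate function below is a stand-in rule I made up; an actual simulator models the full scene.

```python
# Toy fleet-learning sketch: replay one logged edge case in simulation
# with small variations and log which variants fail. Entirely schematic.
import random

random.seed(0)

def simulate(pedestrian_speed, crossing_offset):
    """Stand-in for a full simulation run; returns True if handled safely.
    (A made-up rule: fast, close crossings are the hard ones.)"""
    return not (pedestrian_speed > 2.0 and crossing_offset < 5.0)

# One real-world edge case, perturbed a thousand ways.
base_speed, base_offset = 1.8, 6.0
failures = []
for _ in range(1000):
    speed = base_speed + random.uniform(-0.5, 1.0)
    offset = base_offset + random.uniform(-3.0, 1.0)
    if not simulate(speed, offset):
        failures.append((speed, offset))

print(f"{len(failures)} variants failed; feed them back into training")
```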

Why this matters beyond the novelty

The obvious benefits are convenience and productivity: being able to talk with a colleague, answer email, or just enjoy the view instead of wrestling with city traffic. There are deeper implications, though. If autonomous systems continue to prove safer than human drivers, they could reduce crashes, injuries, and insurance costs, while changing how cities think about parking, transit, and street design. At the same time, they raise questions about jobs for human drivers, accessibility, and who bears responsibility when something goes wrong.

The invisible AI layer of everyday life

Riding in a Waymo made it clear how quickly society absorbs powerful AI into everyday routines. Yesterday, “AI” was an abstract buzzword. Today, it is quietly piloting a two-ton vehicle through crowded streets while passengers chat in the back seat. That gap — between how ordinary the ride feels and how extraordinary the underlying technology is — is exactly why a deeper dive is worth doing. The more people understand what these systems can and cannot do, the better the choices we can make about where and how to use them.

Want to learn more? AI has been a subject of my writing for several years, and CGNET has offered AI user training and implementation for both large- and small-scale organizations. I would love to answer your questions! Please check out our website or drop me a line at g.*******@***et.com.
