The Intersections Of Artificial Intelligence

 

Star Trek newborn AI
The car in front of me misjudged a traffic light and stopped in the crosswalk. Braking early, I left a gap in case the driver wished to back up. Then I wondered: would an autonomous vehicle extend the same courtesy?

But I'm getting ahead of myself. Before being courteous, autonomous driving has yet to master the rules of the road. There are two prevailing approaches: solve for general AI or use geo-fencing. The former, pursued by Tesla, is harder to achieve but would make autonomous cars capable of driving on almost any road. The latter, used by GM's Cruise and Alphabet's Waymo, is easier to achieve but restricts autonomous cars to certain locations. Cruise is already accepting fares for its driverless taxis in San Francisco. The service, however, is bound by both time and space, allowed to operate only between 10pm and 6am (lighter traffic) and within a 7x7 mile area.

Choices and trade-offs such as these are common when planning and building a future. Thirty years ago, Apple introduced the Newton MessagePad, along with a stylus and the promise of handwriting recognition. But the AI needed to be trained, and its failings became a target of humor. The comic strip Doonesbury had a character scribble out "I am writing a test sentence" only to have it rendered as "Siam fighting atomic sentry."

Today, using an Apple Pencil on an iPad, I can write that same test sentence -- naturally with my right hand or clumsily with my left -- with near-perfect accuracy.

In between, there was a decade when another handheld device achieved success not by solving general handwriting recognition, but by employing a "geo-fenced" alternative. That device maker was Palm, and its handwriting tech was known as Graffiti. I describe it as "geo-fenced" because instead of accepting my natural handwriting, Graffiti required that I write in a specific way:

This is a test sentence written in Graffiti

The dot indicates the starting point, and each character is written with a single stroke (no lifting of the stylus to dot an i or cross a t). It looks awkward, but Graffiti was surprisingly easy to learn and accurate, allowing Palm to succeed where the Newton could not. Palm eventually went out of business for myriad reasons, but the company contributed to the mass adoption of personal handheld and connected organizers.
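Part of what made the single-stroke constraint so effective is that it collapses recognition into a simple matching problem. Here is a minimal sketch in Python of how a Graffiti-style recognizer could work in principle; the direction-quantization scheme and the letter templates are my own illustration, not Palm's actual algorithm:

```python
import math

# Hypothetical single-stroke templates: each letter maps to an expected
# sequence of pen directions (screen coordinates, so y grows downward).
TEMPLATES = {
    "L": ["S", "E"],    # down, then right
    "V": ["SE", "NE"],  # down-right, then up-right
    "T": ["E", "S"],    # right, then down (no pen lift to cross the t)
}

def quantize(dx, dy):
    """Map a pen movement vector to one of 8 compass directions."""
    angle = math.degrees(math.atan2(dy, dx)) % 360
    names = ["E", "SE", "S", "SW", "W", "NW", "N", "NE"]
    return names[int((angle + 22.5) // 45) % 8]

def encode(points):
    """Collapse a stroke (a list of (x, y) samples) into its direction sequence."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        d = quantize(x1 - x0, y1 - y0)
        if not dirs or dirs[-1] != d:  # keep only direction changes
            dirs.append(d)
    return dirs

def recognize(points):
    """Return the letter whose template matches the stroke, or None."""
    dirs = encode(points)
    for letter, template in TEMPLATES.items():
        if dirs == template:
            return letter
    return None

# A stroke that goes straight down, then straight right, reads as "L".
print(recognize([(0, 0), (0, 5), (0, 10), (5, 10), (10, 10)]))  # -> L
```

Because every letter is exactly one stroke, the recognizer never has to guess where a character begins or ends, which is precisely the ambiguity that made the Newton's general approach so hard.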

Of course, autonomous driving is a much bigger problem to solve than handwriting recognition, but computing power and networking capabilities have grown to match. Autonomous vehicles are trained to understand the rules of the road and to deal with situations where the rules are broken: pedestrians crossing in the middle of the street, double-parked vehicles, cars running a red light, a ball rolling across the road. But the scenario I described in the first paragraph isn't at all necessary for safe driving; it was just a nice thing to do. If an AI wasn't trained for this specific behavior but eventually acted this way regardless, it would be called emergent behavior (and would still be a nice thing to do).

The colorful photo at the top is from the Star Trek: The Next Generation episode "Emergence," in which an AI life form emerges from the Enterprise. At the end, Captain Picard philosophizes:

"The intelligence that was formed on the Enterprise didn't just come out of the ship's systems. It came from us. From our mission records, personal logs, holodeck programs, our fantasies. Now, if our experiences with the Enterprise have been honorable, can't we trust that the sum of those experiences will be the same?"

I've been writing software long enough to recognize that a program's structure largely reflects the structure of the organization that developed it (an observation known as Conway's law). Similarly, an emergent AI, sentient perhaps, would be a reflection of our society. "Emergence" aired in 1994, and at the time, I took comfort in Picard's words: they meant a future AI could be, and would be, courteous -- more the annoying but endearing C-3PO and less the murderous Terminator T-1000. But that optimism took a turn after the corruption of the AI chatbot Tay in 2016.

Tay was developed by Microsoft and intended for the US Twitter audience between the ages of 18 and 24. Comedians and cynics would point out that this was Microsoft's first mistake, but Tay's predecessor, XiaoIce (Little Ice), was successful in China, interacting charmingly with over 40 million users. In the US, however, Tay began tweeting racist and misogynistic remarks less than 24 hours after joining Twitter. I initially took the news simply as pranksters trolling a major company, but sadly, Tay did accurately reflect an ugly corner of the internet.

It's 2022 as I write this, and the world has changed: bad behavior is amplified, celebrated, and rewarded, not just in the depths of the internet but in the highest offices in the land. And what is a potential AI to make of all this? How would it learn anything past the noise?

I'm a little bit scared, but not of a future where Terminators wipe out humanity. I fear we are already terminating ourselves, albeit slowly, through wars, climate change, and anti-science views. Yet I am also hopeful. More people than not are courteous, good, and honorable. It would be ironic and poetic if a sentient AI emerged to figure out how to amplify those qualities. But it would be better if we figured that out for ourselves.



Comments

Unknown said…
Lovely, well-written article.

I too hope that we mere mortals can and will figure out how best to amplify those behaviors that are just, wise, loving and benevolent.

But sadly, I trust that everything but those traits will become the emergent forces of the universe.

Deep sigh.
