In a recent Bloomberg column, “Let Robots Teach American Kids,” Tyler Cowen makes the case that robots’ intrinsic lack of emotion can make them more effective than humans in certain situations, because they avoid our psychological flaws. While I wouldn’t disagree that robotic communication may work in some situations, the author makes me nervous when he suggests this will be the new normal and that kids need to get used to AI early:
“Exposing children to robots early, and having them grow accustomed to human-machine interaction, is one path toward this important goal.”
Two quick stories.
Just making idle conversation in the medical lab chair this morning, I asked Laine about her name – is it short for Elaine? She chuckled as she poked the needle into my vein and told me her full name is Orlaine – but people always read her name tag wrong and call her Lorraine. So now it’s just Laine. She finished filling the tubes, released the rubber tourniquet from my arm and sent me off. I wished Orlaine a nice day as I headed out the door for my badly needed post-fast toast and coffee.
Earlier, a few minutes before I stepped into the chair, Laine couldn’t find my standing order in the lab’s database, likely thanks to the lab’s recent merger with another company, which has produced all kinds of database glitches. I expected her to tell me to come back later with a new requisition. But that’s not what happened. Instead, she called my doctor’s office to see if they could fax it over. The office wasn’t open yet. Since I had another blood requisition as well, she said she’d draw the extra tube anyway and hold it until I could get the requisition faxed over that morning. She spent probably ten minutes trying to sort out this problem for me so I wouldn’t have to make another appointment and come back.
Things like this make me feel good; I really, really appreciate that level of caring and professionalism, the willingness to go a few extra steps to help someone out. Robots don’t, and I predict never will, go to such lengths or adjust procedures to help out as Laine just did. They will be programmed for one thing: efficiency.
In a similar vein (OK, sorry), I appreciate the attendants at those self-checkout machines. Because the machines have no idea what’s actually happening beyond their minimal inputs, they become what can only be interpreted as passive-aggressive at the slightest perceived human malfunction. Taking a little too long to move items from one step to the next? The machine urges you in an overly friendly tone to get on with it. If you just dropped a can of beans on the floor and have to chase it down, it nags you to “Please put the item in the bagging area.” (Of course, it’s polite; it says “Please.” But can’t you see I’m busy, idiot?) When you dig through your wallet trying to find your debit card, it again loses patience and repeats the command to pay up, in an increasingly loud voice. All too often, it glitches out and asks you to locate an attendant. Usually, with a roll of the eyes at the machine, the wave of a magic ID card in front of the scanner, and the punching of a few mysterious buttons, the attendant resets the machine. The final “Thank you for shopping at …” robotically burped by the machine doesn’t really move me at that point.
So, I vote for keeping humans in the many systems where human interactions not only do the job well but also show caring and cut to the chase. I vote for never allowing ourselves, or our children, to get used to the dehumanizing shortcomings of robotic talk.
Now I’m trying to picture classrooms of the future staffed by robots …
Irwin, thanks for giving me a good laugh with your great storytelling style, in addition to serving your real purpose: offering a truly valuable example of the future we need to avoid.