Sometimes, you have to see the humor in it.
I’ve not been particularly fearful of the increasing use of AI. Some of my writer colleagues are freaking out. While I’m certainly not an expert, I think an essential limitation of AI, at least at this point, is that it’s linear.
There’s a fundamental difference between “intelligence” and “thinking.” Intelligence means knowing you don’t start sentences with a conjunction. But a thinking writer will do so to make a point. (See what I just did there?)
So here’s a story for you. I received a $50 debit card thanks to a settled class-action suit. It could only be used for online purchases, functioning like a prepaid credit card. The instructions warned that a transaction would be denied if the attempted purchase exceeded the balance on the card.
I used the card for a $48.52 purchase, leaving a $1.48 balance on it. My financial thinking hated acknowledging that I would be “losing” that $1.48 unless I made another online purchase for less than that amount.
This is not too complicated so far, right?
A month later, I received an email advising me that my $12.76 purchase was denied because it exceeded the balance on the card. That would make sense, except that I hadn’t used the card. So the most logical conclusion to me (and, I’m sure, to you) was that the card had been “compromised.” We are both intelligent and thinking.
So I emailed my concern to the card provider. “Noemi” almost immediately advised that I’d been assigned a case number and would be hearing back soon. (Apparently, Noemi must follow a prescribed, linear system to pretend it can think.)
When the reply came, it was AI-generated (emphasis on Artificial), reminding me that I was not allowed to make purchases that exceeded the balance, etc. It insulted my intelligence, but I reminded myself that our relationship wouldn’t involve much emotion. Noemi isn’t concerned about my feelings.
After some thinking (there’s that word again), I realized my only risk here was $1.48, which I’d already determined I would lose. But for the entertainment value, I continued to converse with Noemi via email.
Noemi continued sending me useless information that reflected both its failure to understand my emails and its inability to think.
ME: “I’m reporting this because there’s a problem with your system and this compromised card. The good news is the system denied the charge.”
NOEMI: “Please contact the vendor to dispute the charge.”
I briefly considered additional experimentation. Could I find a keyword that Noemi would recognize and respond to differently? I suspect that will become a required skill in the future, but I didn’t see much to gain this time.
One thing that does scare me a bit is that many humans are adopting this linear thinking pattern. I remember a conversation during COVID with a doctor’s office that was refusing to see me because “You have symptoms of COVID.”
I replied, “But I tested negative. The symptoms are attributed to my COPD.” (I should add this was a routine, non-essential visit.)
The human replied, “I’m sorry, but our policy is that we don’t see patients with COVID symptoms.” I thought her voice had a robotic tone.
I said, “Can you take a message for the doctor?” (Artificial intelligence likes closed-ended questions that require a “yes” or “no” answer.) “Please advise the doctor that she will never see me again, since I will always have these symptoms.”
Several days later, I received a call advising me that they had changed their policy and asking whether I could please come in next Tuesday.
So the good news is the bad news. AI doesn’t think the way humans do; that’s the good news. But humans sometimes “think” the way AI does; that’s the bad news. I’m more concerned about human thinking than about artificial intelligence. How about you?