Just yesterday, I asked a friend about freelancing possibilities. She wrote back to my Gmail account with the news that a certain spending spigot was closed off. She also provided the status on something I’d written. (Delayed, but still plowing ahead.)
Right above the reply blank (and below the email), I saw three possible responses. They were cooked up by my co-reader of this email, Google’s computer. All I had to do was pick one of them, and hit the send button. My choices:
No worries, thanks for the update!
These were viable responses. I could conceivably have chosen one of them, but only if I could remove the exclamation points, which remind me of a certain tweeter-in-chief.
The AI reading our emails is getting a lot smarter. It’s moved beyond primitive targeting for ads, and is now zeroed in on our motives. Why did we write the email? What were we looking for? The computer is interpreting our dialogues, or at least their dynamics. It can come to all kinds of conclusions, even judging the relationships in which we appear to be dominant, and others in which we edge toward subservience.
Today, a new chapter. A friend sent out a Gmail alert that her account may have been hacked.
Google, no doubt perceiving the emergency, provided sober-minded answers, with none of those light-hearted exclamation marks.
Thanks for the heads up.
For a few minutes, I toyed around with the Gmail system, trying to elicit suggested replies from the server. It seems to be a sporadic effort. This makes sense. Google, after all, is attempting to automate a layer of our communication. The company will want to roll it out slowly, gathering data from users, and calculating which types of people make use of it, which types of communication they use it for the most, and under what circumstances. An AI can learn a lot from humanity’s emails.
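The kind of gradual, data-gathering rollout described above could be sketched in a few lines of code. This is purely a hypothetical illustration, not Google’s actual telemetry: the event fields and context labels here are invented for the example.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical usage event: did the user pick one of the canned replies?
@dataclass
class ReplyEvent:
    user_segment: str        # e.g. "casual", "work" (invented labels)
    context: str             # e.g. "status_update", "security_alert"
    picked_suggestion: bool  # True if a suggested reply was clicked

def usage_rates(events):
    """Fraction of emails where a suggested reply was chosen, per context."""
    shown = Counter()
    picked = Counter()
    for e in events:
        shown[e.context] += 1
        if e.picked_suggestion:
            picked[e.context] += 1
    return {c: picked[c] / shown[c] for c in shown}

events = [
    ReplyEvent("casual", "status_update", True),
    ReplyEvent("casual", "status_update", False),
    ReplyEvent("work", "security_alert", False),
]
print(usage_rates(events))  # {'status_update': 0.5, 'security_alert': 0.0}
```

Aggregates like these would tell the operator which kinds of people use the feature, and for which kinds of messages, before widening the rollout.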