AI is for real. Accept it.
If you’re scared of the implications of malevolent AI applications, you have good cause. If you are encouraged by the benefits of its applications, you have good reason. See my latest experiment below.
I last addressed the subject of artificial intelligence about a year ago in Let’s Talk Artificial Intelligence (AI). That piece focused on the regulatory policies being bandied about as well as the possible impact of AI on the workplace. Regarding the latter, I found some analysis that supported the expectation of a greater impact on white-collar work than on so-called blue-collar jobs:
One such stab compared AI’s eventual capabilities with the human abilities of specific professions, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. It concluded that, unlike the skilled blue-collar jobs first clobbered by the Industrial Revolution, AI will come for highly educated, white-collar workers first. Lawyers, teachers, financial advisers, and real-estate brokers, among them. Ouch!
I’ve recently had some first-hand experience that puts new meat on the bones of this assessment, particularly reasoning and fluency. I want to share this with you and, if you have the time and interest, allow you to make your own real-time evaluation.
Here’s what happened. First, three weeks ago, I sent out to Pancake subscribers a newsletter with the headline, Why Trump won’t win, revisited. I followed that with an email to subscribers with a link to a podcast called Deep Dive, in which two commentators made my post the topic of their conversation. I then asked those subscribers to tell me what they thought of the Deep Dive podcast. The feedback I received was positive, and in a few cases, my respondents were impressed that these podcasters chose my post to be the subject of the “deep dive.”
What no one caught on to (which was the goal of this experiment) was that the Deep Dive podcast was entirely AI-generated, created by a new Google AI application, NotebookLM. The two commentators, one male voice and one female, were AI-generated. The entire podcast was drawn solely from a PDF file of my post. I caught a few mistakes, but none worse than what some real commentators would likely make in a discussion.
So far, so good—or so bad, depending on how nervous one wants to be about AI capabilities.
I followed up this past week with another Deep Dive using my post on The Pete Rose Hall of Fame Debate: Should He Be Admitted? which ultimately addressed the larger question of whether one's character should be a qualifier for public recognition of one's professional accomplishments. Again, I uploaded a PDF of my text. The “podcast” produced in about two minutes blew me away.
The “discussion” went well beyond what I wrote, veering into implications and commentary. For example, while my copy mentioned the “integrity” of baseball, the “discussants” extrapolated to this exchange at 1:48:
Speaker 2: It really gets at the core of what baseball is supposed to be. You know, the integrity of the game depends on everyone believing that every game is fair. When you have a manager like Rose was at the time betting on games, it makes people question every decision he makes, every play. It casts doubt on everything.
Speaker 1: Right. Like if you're betting on your own team, even if it's for them to win, it still makes you wonder, are you going to make decisions that could influence the outcome?
And then at 9:06, there was this exchange:
Speaker 2: And it goes way beyond sports. Think about politics, entertainment, even academics. It's always a challenge figuring out how to reconcile someone's accomplishments with their flaws. Does one cancel out the other? It's never easy.
Speaker 1: It's definitely more complicated than just ticking off achievements, right? You have to think about their character, how they treat people, what they stand for, especially if they're in a position where others are looking up to them.
Speaker 2: Without a doubt. It's easy to judge when you're on the outside, but real life is messy. You know, every situation is different. There aren't always easy answers.
Speaker 1: It reminds me of that F. Scott Fitzgerald quote¹, you know, the test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function. This whole Pete Rose thing. That's it right there.
Speaker 2: Love that quote. And you're so right. It's about recognizing that it's complicated. There are good arguments on both sides and sometimes there isn't a right answer. It's OK to hold both of those things in your head at the same time.
A big leap into abstract reasoning
My piece says nothing about F. Scott Fitzgerald. It is about two sides of an issue. The “intelligence” understood the high-level abstraction of what I was talking about and, on its own, associated the Fitzgerald quote, which it either already had in its large database or searched the Web to find as something appropriate for the dichotomy of the problem. It is an example of abstract reasoning:
Abstract Reasoning is most closely related to fluid intelligence: our ability to quickly reason with information to solve new, unfamiliar problems, independent of any prior knowledge. It includes lateral and flexible thinking, logical reasoning, and generating solutions beyond the most obvious. Someone who is strong in Abstract Reasoning would be able to use logic to extrapolate rules or relationships to other possible scenarios.
The AI was able to take the gist of the question I raised—that maybe there are times when an individual is worthy of recognition for their accomplishments yet unworthy of being honored for them—understand what Fitzgerald was referring to, and make the connection at what I would consider a point well up the abstraction ladder.
Implications, alas, on both sides
What’s the implication of this? Well, here again, we have an application of two opposing ideas that might have to coexist. On AI’s better-angels side, imagine feeding in a patient’s symptoms and vitals and receiving back a treatment plan unique to that single patient. On the flip side, the nightmare of student papers, even Ph.D. dissertations, complete with full citations and original analysis. In between, how about a script for a situation comedy, including director instructions, that is truly funny for a target audience? (Eventually, the entire production could be created without actual actors and sets: the actors’ and writers’ guilds’ nightmare scenario, and the reason for that long strike last year.)
We are still in the infant stages of AI applications. However, the changes made just since ChatGPT hit the public consciousness two years ago point to an exponential trajectory for AI. That’s because abstraction can be used to create new, more efficient, and more powerful algorithms, which can then be applied to create still newer, more efficient, and more powerful algorithms, which can then be applied…
Whose first-rate intelligence will be tested?
There’s no putting the genie back in the bottle. And no regulation can do more than nibble at the margins. If you’re scared of the implications of malevolent AI applications, you have good cause. If you are encouraged by the beneficial value of AI applications, you have good reason. The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function.
¹ F. Scott Fitzgerald, from “The Crack-Up”
OMG. This deep dive blew my mind. I’m beginning to think I’ve lived too long. Adaptation to a world that encompasses such applications of AI seems amazingly difficult, but then so does the ability to hold two opposing beliefs at the same time and still function. Suddenly I’m happy to be 78 years old.
While I tend to be on the "this is scary, it must be regulated" side, it also refers back to another of your recent posts about tradeoffs in risk-taking. Do the potential benefits outweigh the potential risks? How should we define and measure these risks and benefits? Is there a way to mitigate the potential harms through regulation or market-based initiatives? At this point, I only have questions, not answers. That said, I agree with you that it's here to stay. At a minimum, we need mechanisms to monitor AI's ongoing trajectory and its effects.