(I wrote this post on March 5, 2024. I will have more to say about the future of AI later on in this post.)
AI is all the rage these days. People use what they call “generative” AI for all kinds of things: fact-based research, generate alternative ideas, and to correct the work they created.
However, what we now call “generative AI” is not generative when it comes to new ideas. Instead, these tools generate text (or code) based on the many patterns the models scooped up from other people’s writing (or code). That makes these models excellent predictors of what other people have already generated. That is not the same as making good decisions for you.
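To show what “statistically predictable” looks like, here’s a toy sketch in Python. It is nothing like a real LLM in scale or mechanism; the tiny corpus and all the names are hypothetical, just an illustration of prediction from other people’s patterns:

```python
from collections import Counter, defaultdict

# Hypothetical tiny corpus; real models train on far more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the most common next word in the corpus after `word`."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # 'cat': the most frequent continuation, not a new idea
```

That is all the “prediction” in this sketch amounts to: the most frequent continuation wins. Scale changes the quality of the prediction, not the nature of it.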
However, there is work that AI excels at: almost anything that does not require insight. For example, copywriting. Copywriting is writing to entice people to buy a product or service. In addition, grammar-checking tools can check my writing against known “good” grammar and composition.
AI makes both of those tasks easier. I check much of my writing with Grammarly. However, I rarely take all of its suggestions. I often take the copywriting suggestions, because copywriting is about attracting readers to specific actions. Copywriting is quite predictable.
However, copywriting is not the same as helping my readers understand my insights that might—or might not—work for them. If I write “wrong” and take all of Grammarly’s suggestions, my writing will sound boring. Worse, if I make a mistake in my writing, Grammarly will try to fix it based on the composition, not the meaning.
At this time, Grammarly and most of the other AI tools have no content insights. That means I do not trust most AI tools as a way to correct my writing. However, there are areas where I do trust AI.
Where I Trust AI
See the image of my trust continuum at the top of this post. (See How Often Can You Make New Good Friends You Trust? for the origin of that continuum.)
I trust Grammarly’s AI as an acquaintance, or maybe a colleague. But I do not trust Grammarly as a friend or good friend. (I trust very few of the AI “companions” that every vendor seems to want to impose on me.)
That makes me an AI skeptic. Remember, most of the supposed “AI” products are LLMs, Large Language Models. They generate statistically predictable words or code, not insights or new ideas.
That means they recycle ideas; they do not create new ones. Here are specific reasons why I do not trust these LLMs:
- Friends (assuming they are sober) don’t create hallucinations or lies. Too many LLMs still do.
- Humans make innovative connections between ideas. LLMs do not. (They recycle and, therefore, regress to the mean for ideas.)
- Even worse, I rarely trust even my good friends to make good decisions on my behalf. Why would I trust the LLMs?
Here’s a relevant example: Grammarly did not like my sentence in the first paragraph about how people use generative AI. It wanted to correct that sentence to something that does not reflect what I mean. Here’s the screenshot of the correction:
I don’t like Grammarly’s proposed alternative because its verb choices, “generating” and “correcting,” assume AI can actually do those things. That’s not my current experience with these tools.
However, I have also not spent nearly enough time exploring these tools to make the best possible judgments. That’s because the AI of today is low-trust. I’m waiting for the AI of the future where I can use AI as a trusted colleague.
Future AI Must Create Trust
I don’t want a lot of AI interrupting my thinking. Instead, I want to know I can depend on AI to understand my content—even when I write it wrong. That’s because high-trust friends ask you questions, such as, “Do you mean XYZ or ABC when you say that?”
I want AI to fix my spreadsheets when I break them with relative references that change year over year. And while we’re on that topic, I want AI to fix my royalty spreadsheets. (Amazon reports payments in the buyer’s currency. However, they pay two months later in dollars, and that’s when they tell me the currency conversion rate. AI would work really well for this kind of problem.) Spreadsheets. I love them.
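To make that royalty problem concrete, here’s a minimal sketch in Python of the reconciliation I’d want a trustworthy AI to handle. Everything in it is hypothetical: the amounts, the rates, and the field names are stand-ins, not Amazon’s actual report format:

```python
from datetime import date

# Royalties as the store reports them: in the buyer's currency, at sale time.
# All amounts, rates, and field names are hypothetical stand-ins.
reported = [
    {"month": date(2024, 1, 1), "currency": "EUR", "amount": 120.50},
    {"month": date(2024, 1, 1), "currency": "GBP", "amount": 80.00},
    {"month": date(2024, 2, 1), "currency": "EUR", "amount": 95.25},
]

# Conversion rates to dollars, which arrive two months later with the payment.
rates_at_payment = {
    (date(2024, 1, 1), "EUR"): 1.08,
    (date(2024, 1, 1), "GBP"): 1.27,
    (date(2024, 2, 1), "EUR"): 1.07,
}

def expected_usd(reported, rates):
    """Total each month's royalties in dollars, using the late-arriving rate."""
    totals = {}
    for row in reported:
        key = (row["month"], row["currency"])
        if key not in rates:
            continue  # rate not published yet; cannot reconcile this row
        totals[row["month"]] = (
            totals.get(row["month"], 0.0) + row["amount"] * rates[key]
        )
    return totals

for month, usd in sorted(expected_usd(reported, rates_at_payment).items()):
    print(month.strftime("%Y-%m"), f"${usd:.2f}")
# 2024-01 $231.74
# 2024-02 $101.92
```

The hard part in real life is the two-month lag: until the rate arrives, the dollar amount is unknowable. That lag is exactly the kind of context an AI helper would need to track.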
Right now, as of March 5, 2024, AI is not yet trustworthy because it does not yet understand the context of my problem. Often, that’s one step away from AI not understanding what I tried to say—the content.
I want AI to act as an agent for me: to help me with research and to explain what it does not understand about my content. I do not want AI to blindly recommend changes because it does not understand what I meant to say. That day will come, but it’s not here yet. I’m looking forward to it, and I will adapt then.
That’s the question this week: How much do you trust AI to make good decisions for you?