Because I’m a technology addict, I check my tablet every night before I go to sleep for the last bits of sensationalism (aka, the news) for the day. And while I’m there, I might take a side look at the latest discussions on the various Slack channels I enjoy. We won’t talk about checking email – don’t you know you’re not supposed to do that at night in bed??? But I digress. On all those channels, the latest big thing is ChatGPT.
You’ve probably heard about ChatGPT by now, though your news bubble might be a bit different from mine. It’s an artificial intelligence-driven chatbot, and it’s quite an impressive bit of technology. It answers questions in ways that aren’t always distinguishable from how a real, physical person might answer them. It can write poems. It can write epic sagas. It can provide answers to some of the deep questions of life. Maybe not accurately, but it’ll give you something. (Forbes has a geeky-but-not-too-geeky write-up about it.) People are equally amazed by and anxious about this new wonder of modern technology.
As with any new technology, ChatGPT and other AI tools have all sorts of potential. They can improve tech support times. They can help writers unblock themselves by offering potential words. They can (sometimes) write decent code. They can offer information that sounds quite plausible, regardless of its accuracy. It almost sounds … human.
And therein lies the rub: it often does sound human, with all the brilliance and frailty therein. It sounds authoritative, but it’s often wrong. It can lead people down Very Bad Paths, but it has no conscience. It can only learn based on what information it’s given. Sounds very human to me. Is that a good thing or a bad thing?
I think the answer can’t help but be “Yes.” I know, not very helpful. At the end of the day, people need critical thinking skills to evaluate what they see online. The source of the content doesn’t matter: whether a human, a bot, or an infinite number of monkeys wrote it, readers still need to think about how they process what they read. I personally would like to see some indication of authorship on a body of work – whether what I’m reading is a direct result of human creativity or something automatically generated – but I don’t demand it. I’m still going to think about what I read, regardless.
Oh, and to answer the question in the title: I think AI can write a blog post, but it can’t write mine. Not and capture my voice, thoughts, and message. Not yet, at least. Give it another decade or so, when I have my personal AI that has learned all my habits and styles and can generate text just for me.
Thank you for reading my post! Please leave a comment if you found it useful. If you want to start your own blog or improve your writing, you might be interested in another effort I’m spinning up, The Writer’s Comfort Zone. Learn more here!
If you’d like to have me on a podcast or webinar, my media kit is available for your reference.