In a previous post, I mentioned how I was back to ruminating over decisions that I thought I had put to bed.
But in my attempt to actually put those decisions to rest, I did something I never expected of myself: I turned to AI. I know I make it sound like I committed a sin.
AI can help you get the closure you want, or at least that's what I initially thought. But after trying it multiple times with different prompts, I realized something unsettling: it senses the direction you want to go and shapes its response accordingly.
Take, for instance, my ongoing worry about the sunk cost of abandoning Colemak and returning to QWERTY. Depending on how I framed the question, ChatGPT gave me different answers, each convincing in its own way.
If it sensed I was leaning toward QWERTY, it helped me find closure around sticking with QWERTY, reassuring me not to dwell on the time I had spent learning Colemak and suggesting ways to reframe that effort. When I framed the question the other way, it did the opposite.
AI conforms to whatever confirmation bias we embed in our prompts.
The moral: don’t seek advice from ChatGPT.