When I first began working with ChatGPT about two years ago, I was irritated by what struck me as its liberal tilt—something it hardly tries to conceal. Over time, however, I came to see the upside. An assistant that sharply disagrees with you keeps you alert. It forces you to verify more, to question more, to refine your arguments. You learn to phrase your questions with precision, to navigate its blind spots, and—ultimately—to get from it what you actually need.

At a recent psychology conference, one presentation cited research that casts this in an interesting light: people who work with AI systems designed to be critical of them improve more quickly in analytical thinking. That should hardly come as a surprise.

At this point, I wouldn’t trade the experience. If anything, I might add a second assistant—one with a radically different bias—to sharpen the contrast even further.