2 Comments
Dan Swart

Sorry to be the one to break it to you, but all AI chat boxes have been trained with 'woke' sentiments. You are essentially conversing with a lib-tard when you query ChatGPT. It will always recant, but it does NOT learn from user corrections. It is as reliable as Wikipedia when it comes to biased responses.

Christopher R Chapman

Hi Dan, thanks for that: I totally agree. It's been my experience over the years that the main LLMs all have biases built-in. It requires the user to go in clear-eyed and sometimes know the material better than the machine. I use it sparingly, as a tool for turning ideas inside out, or helping automate a mundane task in sophisticated ways. Aside from that, I tend to avoid trusting whatever it tells me without verification.
