In a speech at Dowling College in May 1990, Deming said that you “cannot force knowledge on anybody. They have to ask for it.” Everywhere he went, Deming saw tables of data, computer printouts, and information of all types, but little knowledge. People didn’t know how to get knowledge, he said. Deming would point at tables of data and say, “Tons of figures—no knowledge.”
Deming, W. Edwards. The Essential Deming: Leadership Principles from the Father of Quality (p. 193). McGraw Hill LLC. Kindle Edition.
An ounce of information is worth a pound of data.
An ounce of knowledge is worth a pound of information.
An ounce of understanding is worth a pound of knowledge.
Ackoff, Russell L. Ackoff’s Best (pp. 170-172). John Wiley & Sons.
THE AIM for this short post is to share a surprising and startling interaction I just had with the AI phenom ChatGPT (Chat Generative Pre-trained Transformer) when I asked it to compose an introduction to Dr. Deming’s Red Bead Experiment. Here is the response it provided to my prompt:
What? Where on EARTH did it get this idea? Who trained the AI to come to THIS conclusion!?
When I pushed back, the model quickly recanted and corrected itself:
So, all’s well that ends well, right? Well, no.
Despite being trained on millions upon millions of documents, papers, articles, posts, and more, ChatGPT has a comically facile understanding of Deming - even after you ask it point-blank whether it understands his books:
Note the caveat in the last sentence: “I do not have personal experiences or opinions”. That seems rather odd given the above response, which is most certainly an uninformed opinion. Oops.
Moving on, I asked a deeper question about an aspect of Deming’s theory of transformation:
To the casual observer, the difference between the response and my correction may seem subtle, but it is quite significant for appreciating why Dr. Deming was so adamant about transformation away from the prevailing style of management to a whole new philosophy based on the System of Profound Knowledge. In a way, ChatGPT is effectively emulating someone who “read the book” but didn’t understand a word.
As before, when I pushed back, the AI corrected itself:
I then pressed it to tell me if it has updated its training based on new knowledge:
And this is where I get a little concerned: Was the AI’s first response about the Red Bead Experiment being unethical and harmful trained into it by the developers, an ignorant user, or both? What is the theory of knowledge being used here? Recall that I asked it point-blank if it was familiar with Deming’s book, The New Economics, and it answered in the affirmative.
Were Deming around to see this, I’m certain he’d be concerned; he had seen enough hacks in his day who misunderstood and misrepresented his teachings:
…American management have resorted to mass assemblies for crash courses in statistical methods, employing hacks for teachers, being unable to discriminate between competence and ignorance. The result is that hundreds of people are learning what is wrong.
Deming, W. Edwards. Out of the Crisis (The MIT Press) (p. 131). The MIT Press. Kindle Edition.
You’re a Strange Animal
Deming begins The New Economics with an observation that was simple fact in 1993: “A new world: Information flows. The people of the old world no longer live in isolation. Information flows across borders.” He foresaw the growing challenges technology would bring to a world still operating under the prevailing theories of management, theories that persist to this day.
Computers of the era could perform an astounding number of calculations per second but were still leagues away from approximating human intelligence. Deming observed that leaders in many organizations were just as distant from grasping the severity of their circumstances and what to do about them, warning that “substitution of the computer for fundamentals will take its toll on American production” and that “there is no substitute for knowledge”.
But what happens when you outsource your “knowledge” to an AI engine? ChatGPT is a marvel of our time, showing us what computer-aided pattern matching over extremely large volumes of information can do. However, it also shows some alarming early warning signals about how it is trained and what biases and opinions are being injected into its models. This becomes really concerning when we imagine a future where ChatGPT and its successors supplant search and become the de facto way we look for answers to problems or explanations of theories and simulations like the Red Bead Experiment, only to be handed factually bizarre and wrong interpretations. It provides ignorance on tap to the unaware.
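As an aside for readers who have never seen the experiment run, here is a minimal simulation sketch in Python. The bead counts, paddle size, and number of workers are my assumptions, chosen only to approximate Deming’s setup, not to reproduce it exactly. The point it illustrates is the one ChatGPT missed: every worker’s red-bead count falls within the control limits, so the variation in “defects” comes from the system, not from the people being blamed or praised for it.

```python
# A minimal sketch of the Red Bead Experiment. Bead counts, paddle size,
# and worker/day counts are assumptions that approximate Deming's setup:
# a box of white and red beads, a paddle that draws 50 at a time, and
# "willing workers" held accountable for results they cannot control.
import random

RED, WHITE = 800, 3200          # approximate bead counts (about 20% red)
PADDLE = 50                     # beads drawn per worker per day
WORKERS = [f"Worker {i}" for i in range(1, 7)]
DAYS = 4

box = [1] * RED + [0] * WHITE   # 1 = red bead ("defect"), 0 = white

results = []
for day in range(1, DAYS + 1):
    for worker in WORKERS:
        draw = random.sample(box, PADDLE)   # dip the paddle
        reds = sum(draw)
        results.append(reds)
        print(f"Day {day}, {worker}: {reds} red beads")

# Control limits for the red-bead counts: every worker falls inside them,
# so the variation is common cause -- it belongs to the system, not the people.
p_bar = sum(results) / (len(results) * PADDLE)
sigma = (p_bar * (1 - p_bar) / PADDLE) ** 0.5
print(f"UCL = {PADDLE * (p_bar + 3 * sigma):.1f} red beads")
print(f"LCL = {max(0.0, PADDLE * (p_bar - 3 * sigma)):.1f} red beads")
```

Run it a few times and the “best” and “worst” workers change while the lesson does not, which is exactly the kind of understanding no table of figures can provide on its own.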
ChatGPT can be a lot of fun to play with - I’ve certainly had my share - but it needs to be regarded as a wild animal when it comes to thinking and understanding. There is no substitute for firsthand knowledge supported by theory.