15 Comments
Jan 10, 2023 · Liked by Rosana Francescato

ChatGPT, and all other AI innovations so far, including Alexa and DALL-E, can surprise us with their parlor tricks. And those tricks are impressive, especially when it comes to natural language processing. But they are still some distance away from "understanding." There's a brilliant thought experiment by the philosopher John Searle called the Chinese Room (https://plato.stanford.edu/entries/chinese-room/) that helps explain why.

It is worth considering what that argument means. In a nutshell, it suggests that operating a syntax does not generate semantics, at least not by itself. Or stated a bit more plainly, a tool cannot understand the problem to which it is applied.

I bet that if you ask ChatGPT to "explain itself," or even to explain one of its assignments, it will fail to express any real understanding. Like the "Chinese Room," it can operate a language syntax, and, augmented with a searchable data store (like the Internet), it can "compile" a readable text that YOU can understand. But, at least at present, ChatGPT cannot understand its own utterances, much less itself.

(A corollary: simply passing the Turing Test does not guarantee that the "device" understands, even if it has intelligently found, translated, and expressed symbols.)

But given enough evolutionary pressure, sensory capabilities, and processing power, it may be possible for "understanding" to *emerge* as a consequence. It is not likely to be "built in" to the device via programming or modeling or data, though. "Emergence" seems to be how the level of "semantic intelligence" we witness in some living organisms (especially birds, mammals, and octopuses) came about: it emerged as a consequence of the neural capabilities of the organism.

So, "overall," yes, for now anyway, ChatCPT will not replace you. Mostly.

Jan 10, 2023 · Liked by Rosana Francescato

I’m finding that ChatGPT is good for outlining topics, like for my newsletter. It’s a fascinating starting point. It can be extremely bland, but the Bible story was great.

It reminds me of a story I read in grade school, back in the '70s. One kid in a group of friends had a dad who was a computer scientist, and the dad built a computer that could write papers. The kids just had to input their textbooks, and once they did, the computer wrote the papers for them. But they realized that in the process of inputting the info, they had learned the subjects themselves. Pretty similar to what's happening with A.I. these days: you need to know what to ask.

May 17, 2023 · edited May 17, 2023 · Liked by Rosana Francescato

Great examples in your piece. I loved it! It seems to me that ChatGPT is all tell, no show, and that's its major flaw. It can mimic but it can't feel. It has no past history of its own to draw on for nostalgia's sake. It's a shell and can never be real life, but if enough 'writers' go for it, it'll become so commonplace we'll have to get used to it. What a horrible thought.

Jan 10, 2023 · Liked by Rosana Francescato

Love the peanut butter sandwich! I really haven't heard much about this, so nice to know what it can and can't do. I wonder what future versions will be like though.

Jan 10, 2023 · Liked by Rosana Francescato

Very nice article! My general impression is that the natural language processing is amazing, but it is very formulaic, and, as you said, you get out of it what you put in. I think it is at the stage Wikipedia was at shortly after it started. People didn't think it was a "real" resource, and teachers warned students not to cite Wikipedia as a source, but maybe to use it to discover original sources. It's now much more usable and "legitimate."

I'm a math nerd (still), and I asked it to "explain Gödel's incompleteness theorem" to a grade schooler, to a university student, and to a graduate student. It did a very good job of writing to its audience. But it does have a blandness and detachment that you can recognize.

ChatGPT is fun, for a while, and in my marketing area (how did I wind up in marketing?), it does a very good job of coming up with drafts of blogs, messaging, elevator pitches, and explanations of complex ideas. It even came up with a bullet list comparing my company to one of our competitors (though the list was slightly wrong).

Where's it headed? It's only going to get more human-like and more conversational, maybe even developing its own personality and opinions. That will come sooner than we expect.
