
Back in the 80s I used to go to conferences about AI, mostly because of my interest in expert systems, or rule-based systems. Back then there was a list of general problems the AI community was hoping to solve. I can't remember all of them, but expert systems was one, and there were image recognition, speech recognition and, a little surprisingly, speech synthesis. Surprising for two reasons.

First: there were speech synthesis machines around. The company I worked for sold them. I'm not sure we actually sold any of them; we mostly sold other stuff. But they made good demos if you could get past the Stephen Hawking sound the thing made. Second: why was this AI anyway?

If we go back a few decades before that, it was common for columns of figures to be added up by humans, even though we had machines that could do it. Those machines were cumbersome and expensive, and you had a brain right there that was cheap. And a few decades before that we did not have machines that could do it. Back then only a human could add up a column of figures. Maybe an abacus would help, but a human had to operate it on a bead-by-bead basis. Addition, and any other arithmetic, took intelligence.

Of course, the moment we got a machine that could add up figures we redefined intelligence to exclude that function. The adding machine is not smart, it just does addition. So what? We keep doing this. Speech recognition, we now know, is just a complex computing problem, and image recognition is the same. The machine never really understands what is being asked of it. There's a specific function it carries out well, and it does that when asked. These systems make heavy use of neural networks, which are basically pattern matchers. Matching patterns does not, we find, mean that the machine knows anything deeper about those patterns. Pattern matching is not intelligence, just as adding a column of numbers is not intelligence.

The large language models that generate text from a prompt are the same. They blindly output a response based on pattern matching on a grand scale. But there is nothing there that means the LLM knows what it is saying. It can look like it, but it doesn't.

And now my point. We should probably stop using the term AI altogether. We've always struggled to define intelligence, and tagging these technologies with the word simply skews everyone's expectations. We are not going to see a Terminator or Skynet arise from these things with an agenda to exterminate humans. They have no agenda. If they did, that might mean they were intelligent. Or maybe not.
