Tuesday, July 03, 2018

Artificial Intelligence (AI): A series of notes

2005-06-20
“If it looks like a duck, and walks like a duck, and quacks like a duck, then it’s a duck” (Ancient wisdom)

Unless it’s a model of a duck.

Artificial Intelligence is model building – we want autonomous machines, but the best we can do is build models of autonomous machines.

Eg, an artificial ant could be made to behave like an ant in many ways, but it could not behave as an ant in an anthill, nor would it be capable of making more ants.

2015-10-21
It’s probably possible to make an artificial ant that behaves like an ant in an anthill. We may even be able to make an artificial ant that can reproduce in some way.

However, “behave like an ant” is not well defined. There are too many behaviours, and some are obviously easier to mimic than others. Nevertheless, it will soon be possible to make an ant-size robot that can navigate like an ant, climb vertical surfaces like an ant, etc.

But it will always be a model of an ant, and therefore its behaviour will in some respects not be antlike, and in other respects will be a bad imitation of ant behaviour. That’s simply the nature of models. Models are mixtures of emulation and imitation.

2016-05-15
Intelligence is even less well-defined than “ant behaviour”. We can mimic some intelligent behaviours, eg, sorting, learning correlations, recognising patterns, and so on, which are useful for augmenting human tasks such as diagnosing a fault or illness, or finding the data we want. If a task is well enough defined, we can build a machine to do it.
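
To see how far a well-defined task can be pushed onto a machine, here is a toy sketch (my own illustrative Python, not anything from these notes): a “pattern recogniser” that labels a new point by its nearest labelled neighbour. Every term in the task is exact, so a machine can do it outright.

```python
# A toy example: 1-nearest-neighbour "pattern recognition".
# The task is exactly specified, so a machine can carry it out.

def nearest_label(point, examples):
    """Return the label of the labelled example closest to `point`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(examples, key=lambda ex: sq_dist(point, ex[0]))
    return label

# Two labelled examples; the machine "recognises" which one a new
# point resembles, but it has no notion of when this rule applies.
examples = [((0.0, 0.0), "normal"), ((1.0, 1.0), "anomalous")]
print(nearest_label((0.2, 0.1), examples))  # -> normal
```

The program succeeds precisely because the task has been reduced to arithmetic comparisons; the fuzziness has been squeezed out in advance.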

But that’s the problem: “Intelligence” is simply not well enough defined. My notion of it is the ability to apply and adapt existing knowledge and insight to unanticipated problems. Every term in that definition is fuzzy and vague. Anyhow, some people (including me) would argue it’s more a definition of creativity than of intelligence.

Is consciousness part of “intelligence”? Many people would say it is. A machine that merely solves problems isn’t intelligent; it’s just an algorithm. It’s not enough to know how to do long division; you have to be able to recognise when and why you should do it. An intelligent entity, then, would be able to apply the rules of the algorithm to another problem. This claim entails that intelligence can abstract rules and patterns, and recognise them in different contexts.
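
To make the long-division point concrete, here is a minimal sketch (my own toy code, assuming the ordinary schoolbook method): the routine carries out the algorithm faithfully, yet contains nothing that could tell it when division is the right tool.

```python
# A toy long-division routine: it knows *how* to divide, digit by
# digit, but nothing in it can recognise *when* division applies.

def long_division(dividend, divisor):
    """Return (quotient, remainder) by the schoolbook method."""
    quotient = 0
    remainder = 0
    for digit in str(dividend):          # bring down one digit at a time
        remainder = remainder * 10 + int(digit)
        q_digit = remainder // divisor   # how many times the divisor fits
        remainder -= q_digit * divisor
        quotient = quotient * 10 + q_digit
    return quotient, remainder

print(long_division(1974, 12))  # -> (164, 6)
```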

“Understanding” is another component of intelligence. Isn’t it? Well, it does have something to do with learning: an intelligent person is one who can make sense of new explanations. “I don’t get it” at one extreme means “I haven’t figured it out yet”; at the other extreme it means “I can’t figure it out”. The latter is a measure of intelligence.

And that’s just three attempts to make sense of “intelligence”. We’re a long way from knowing exactly what we mean by “artificial intelligence”. Far enough that we may not even recognise it when we see it.

The recent development of “deep learning” neural nets crystallises the problem. It’s already clear that we can evaluate the results of their operations, but we can’t figure out how they produce them. What’s more, they have come up with solutions that humans have not only not produced, but have trouble recognising as viable solutions. For example, some AIs are better than humans at recognising cancerous tissue.

2018-07-03
If we accept “intelligence” as a label for problem-solving abilities, then consciousness is not required. That makes the neural-net AIs more than a little spooky.

