Thursday, April 07, 2016

The Limits of Knowledge

Theory, Model, Algorithm, and the Limits of Knowledge


Three terms that are often used interchangeably. They do have something in common; we’ll see what it is after an attempt to differentiate them by describing how what they refer to differs.

Framework: The world we live in is “reality”. We interact with it in various ways. As we grow from infancy to adulthood, we develop various methods of predicting how reality works so that we can get what we need and want. Explicit ideas about how reality works are the theories on which we base our actions. We reason about the state of reality right now so that we can change it to suit ourselves. For example, we plant seeds when we figure the weather is favourable so that we get tomatoes a couple of months later. We add fertiliser and soil conditioners and water to ensure that the tomatoes will grow. Those actions are based on a bundle of ideas and observations that form a more or less coherent theory about how tomatoes grow from seeds.

Theory: An explanation of how something works the way it does. It’s what you get when you test a hypothesis, which is a more or less speculative explanation of some observation(s). A good hypothesis links the observation(s) to some existing explanation, and predicts additional observation(s). If those predictions are proved true, then the hypothesis is confirmed and becomes a theory. A good theory implies or suggests further hypotheses, which in turn imply new observations. When a theory is applied to some practical problem, we get a model. That, and the desire to just figure things out, are what drive science and engineering.


Model: An explanation that can be used to predict how some part of reality will work. We use this term because a conceptual model about growing tomatoes is analogous to a physical model of, say, a steam locomotive. A scale model is not a replica; it is something that looks like, and in a limited way works like, its prototype. The model locomotive may operate on steam as the prototype does, but even so, there will be compromises. E.g., the thickness of the boiler shell will not be to scale, for that would make it too weak to contain the necessary steam pressure. And so on.

We use both models and theories to plan what to do so as to get some desired result. The difference is subtle. We test a theory’s predictions in order to discover its limits, so that if necessary we can modify it or even replace it. We use a model within its limits to control some aspect of reality as much as possible. We may use a model to test a theory: an experiment is a model constructed from that part of a theory that we wish to test. It’s not easy to derive a model from a theory: models also have to be tested.

Both models and theories are true insofar as they work. When a model is reduced to a precise set of rules, it becomes an algorithm.

Algorithm: A set of procedures applied to some inputs that will produce outputs in a predictable way. Thus, “long division” is an algorithm because it describes how to manipulate the input numbers (divisor and dividend) to get the answer (quotient). A recipe for a toasted cheese sandwich is an algorithm because it describes how to manipulate the inputs (ingredients and heating device) so as to get an output (tasty sandwich). And so on.
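To make “long division is an algorithm” concrete, here is a minimal sketch in Python (my own illustration, not from the original post) of the procedure as an explicit rule: process the dividend’s digits left to right, carrying the remainder forward.

    # Long division as an explicit procedure: manipulate the input
    # numbers (divisor and dividend) to produce the quotient.
    def long_division(dividend: int, divisor: int) -> tuple[int, int]:
        quotient, remainder = 0, 0
        for digit in str(dividend):              # work left to right
            remainder = remainder * 10 + int(digit)
            quotient = quotient * 10 + remainder // divisor
            remainder = remainder % divisor
        return quotient, remainder

    print(long_division(7215, 3))                # (2405, 0)

Run it on any inputs and the same steps yield the answer; that predictability is what makes it an algorithm.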

Algorithms are everywhere. They are especially handy for determining future values of present states. In this sense, an algorithm is a knowledge machine: input information about “this thing here and now”, turn the crank, and you get information about “this thing somewhere, somewhen, somehow else”.
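A minimal sketch of such a “knowledge machine”, again in Python and again my own illustration rather than the author’s: feed in the state of a savings account here and now, turn the crank once per year, and read off its state ten years hence. The compound-interest rule is standard; the function name is hypothetical.

    # From "this thing here and now" (balance, rate) to
    # "this thing somewhen else" (balance after n years).
    def future_balance(balance: float, annual_rate: float, years: int) -> float:
        for _ in range(years):                   # turn the crank
            balance *= 1 + annual_rate
        return balance

    print(round(future_balance(1000.0, 0.05, 10), 2))  # 1628.89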

If the above comments make sense, we may see a model as a set of interrelated algorithms, and a theory then becomes a set of validated and interconnected models.

And that brings us to what they have in common: All three are modes of gaining new knowledge. All three operate on the same fundamental principle: “If you do this, you will find out that”. None of them “describe reality”. They describe only how we may observe certain aspects of reality. Which ones? Those that the theory or model or algorithm “is about.” What “is about” means is not easy to say. An example will explain (as far as the example applies, that is):

We may use Newton’s laws of motion to build a model that calculates the course of a rocket launched towards Jupiter. If we know its mass and velocity, and the varying gravitational forces of the Moon, Mars, etc., we can calculate, and recalculate, its course to whatever precision we like. But the model will tell us nothing about the health of the crew. If we want to know that, we need another (and more complicated and less certain) model. The model cannot tell us what the rocket “really is”, only how it interacts with gravitational fields and the reaction forces of its engines. If we want to know other things about it, we must use other models. What’s more, even to monitor the course of the rocket, we have to use other models.
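For flavour, here is a drastically simplified sketch (my own, not the author’s) of the kind of model the paragraph describes: Euler integration of Newton’s law of gravitation for a craft moving around a single attractor. Real mission planning tracks many bodies with far more accurate integrators; the point here is that the model outputs positions and velocities and nothing else.

    # Euler integration of Newton's gravity around one attractor.
    # The model predicts the course; crew health is out of its scope.
    G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
    M = 1.989e30                   # mass of the central body (the Sun), kg

    def step(x, y, vx, vy, dt):
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -G * M * x / r3, -G * M * y / r3
        return x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt

    # Start near Earth's orbital radius at roughly Earth's orbital speed.
    state = (1.496e11, 0.0, 0.0, 2.98e4)
    for _ in range(365 * 24):      # one year in one-hour steps
        state = step(*state, 3600.0)
    print(f"position after one year: ({state[0]:.3e}, {state[1]:.3e}) m")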

Thus all theories, all models, all algorithms are knowledge engines. They are epistemological devices. But they are limited. They can’t tell us what some entity really is, only how we can interact with it, and what will happen when we do so. Even the notion of “entity” is fundamentally epistemological: An entity is a more or less consistent bundle of expected interactions. If any of them are missing or unexpected, we doubt that we are interacting with that entity. It may be a hallucination, or a dream, or a fake, or merely an image of the entity.

Kant was right, I think: There is no way to know reality in itself. That doesn’t mean there is no reality “out there”. It just means that we can know only our interactions with it. That we can know even that much is, I think, an even greater puzzle than what it is that we can’t know.

(c) 2016-04-07

