Philosophy

C.P. Snow, author of "The Two Cultures", would have no difficulty placing computer science: it grew up in the science culture and, as far as most people can tell, it still belongs there as much as engineering or physics does. That philosophy is not important in this culture is only natural, because in the eighteenth century, when the two cultures split, philosophers were still scientists and vice versa. Ever since, denizens of the science culture have done their philosophy implicitly, and done it in the eighteenth-century mold. When they look over the fence at the other culture, they assume that the philosophers are still mulling over the same old conundrums they were mulling over when science split off.

Some are. At least, this is true of philosophers who died as recently as Bertrand Russell. However, things are stirring in the Other Culture, and have been ever since the Great Split. This may not affect physics. After all, old philosophy is not necessarily bad philosophy and it may well be that philosophy adequate for some endeavours has been developed a long time ago and will continue to be adequate a long time into the future.

But other sciences have different requirements. Donald McCloskey ("The Rhetoric of Economics") has argued that this is the case for economics. He shows that what he calls "modernism", philosophy as of the eighteenth century, is not adequate for his subject. One can infer from this book that the same holds, a fortiori, for psychology. Since computer science has been dabbling in topics like artificial intelligence, and this has been going on since the 1940s, the same holds for computer science.

Thus, even though computer scientists may be as little inclined to do it as physicists are, it makes sense for them to have a look at what has been happening in philosophy since the great split. Not doing so can have adverse practical consequences, like expensive research projects getting nowhere, or generating lots of papers that are of no use.

Looking over the fence at the other culture can take the practical form of determining what the implicit philosophical presuppositions are of certain research programmes. The ones that seem ripe for this treatment are robotics, program verification, databases, and knowledge engineering. These subjects are closely related to logic and to the role that model theory plays in it. An indication of thinking in computer science that breaks the old mold is Robert Kowalski's "Logic Without Model Theory".

The abstract of this paper starts with: "Arguably, model theory serves two main functions: (1) to explain the relationship between language and *experience*, and (2) to specify the notion of logical consequence." I added the italics, because by not emphasizing it and not further commenting on the choice of this word, Kowalski assumes a degree of philosophical sophistication that is not widely shared among his colleagues in computer, or any other, science.

The remainder of this note serves as the quick introduction that is lacking in Kowalski's paper. For most non-philosophers, and perhaps still a fair number of philosophers, model theory serves the purpose of elucidating the relationship between language and reality. Of course it is clear that if you want to be precise, then you start by making the syntax of your language precise, and then you have formal logic, a step taken in the late nineteenth and early twentieth century.

But the relationship between language and world cannot be made precise with formality only at the language end: the world has to be abstracted from and formalized as well. The way this is customarily done, starting with Tarski in 1931, results in a mathematical structure of individuals, functions on these, and relations among these. Such a structure is called an interpretation, and model-theoretic semantics determines of any sentence of the language whether it is true in any given interpretation. If so, the interpretation is said to be a model of the sentence.
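
To make this concrete, here is a minimal sketch of my own (the names and the toy relation are illustrations, not anything from Tarski or Kowalski) of what model-theoretic semantics does: given a finite interpretation, it decides of a sentence whether the interpretation is a model of it.

```python
# A toy Tarski-style interpretation: a domain of individuals and one
# binary relation R on them ("successor modulo 3").
domain = {0, 1, 2}
R = {(0, 1), (1, 2), (2, 0)}

def holds_forall_exists():
    """Is the sentence  (forall x)(exists y) R(x, y)  true in this interpretation?"""
    return all(any((x, y) in R for y in domain) for x in domain)

def holds_exists_forall():
    """Is the sentence  (exists y)(forall x) R(x, y)  true in this interpretation?"""
    return any(all((x, y) in R for x in domain) for y in domain)

print(holds_forall_exists())  # True:  every element has an R-successor
print(holds_exists_forall())  # False: no single y is R-related to every x
```

The interpretation is thus a model of the first sentence but not of the second. Note that everything here, domain included, is itself a mathematical (and hence linguistic) construction, which is exactly the point made below about weak model theory.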

Thus model theory, formally at least, is just the study of a certain correspondence between two mathematical structures: a formal language on the one hand (can be viewed as a kind of algebra) and an interpretation (algebra plus relations) on the other hand. Does such an exercise have special significance, or is it just another one of the many arcane things mathematicians do?

Just as small children believe in Santa Claus, grown scientists believe in Reality. The difference of course is that the child grows out of that particular belief, but the scientist will never grow out of belief in Reality. Given that belief, model theory takes on a special significance. But only if one takes for granted that Reality consists of individuals, functions on them, and relations between them. And moreover one has to take for granted that these functions and relations are the precisely circumscribed objects that set theory in mathematics has reduced them to.

This requirement may tax the scientist's credulity, but may not yet prepare him for Kowalski's claim that model theory serves the purpose of relating language to experience rather than to Reality. Such preparation can be given by Richard Rorty's "Contingency, irony, and solidarity" (Cambridge University Press, 1989). In the following paragraphs I discuss the introduction to this book from the point of view of one who wonders what might make model theory more than an exercise in mathematics.

First, why Rorty? Surprisingly perhaps to scientists, a lot has happened in philosophy since the eighteenth century. I am grateful to Rorty for being, possibly among other things that I cannot judge, an excellent expositor of certain currents in philosophy that I'm tempted to equate with the "postmodern", realizing that neither Rorty nor McCloskey uses that term, though the latter inveighs against "modernism". Instead of invoking an ism, Rorty refers to a certain "sort of philosopher". Perhaps the closest he comes to identifying this sort (his sort) is when, on page 10, he drops a string of names:

   ... the traditional subject-object picture which
   idealism tried and failed to replace, and which
   Nietzsche, Heidegger, Derrida, James, Dewey, Goodman,
   Sellars, Putnam, Davidson, and others have tried to
   replace without entangling themselves in the idealists'
   paradoxes.

I think Rorty would say that the urge to provide languages with models is of a piece with adhering, usually unconsciously, to a philosophy that is not "of his sort". Below I briefly sketch the relevant part of his philosophy and this helps to classify philosophically the urge to provide logic, and other languages, with model theories.

The starting point of Rorty is the contrast between the view that truth is out there, and can be discovered, versus the view that truth is a man-made thing.

One can easily confuse the latter view with solipsism, the view that takes seriously the truism that I can never refute the thesis that nothing exists outside of my mind. Any putative evidence to the contrary will always be a phenomenon in my mind, as it was in Dr Johnson's mind when he kicked the large stone and said: "I refute it thus" (Dictionary of Quotations, Oxford University Press, 1985). Rorty assumes his audience is sufficiently sophisticated not to take solipsism seriously. I think he should address it, because his manner of speaking is the first I've seen that effectively addresses solipsism, as follows. In other contexts, Rorty acknowledges the impossibility of refuting certain theses. However, he views such a thesis as a "manner of speaking". These can be judged to be fruitful or not and can be rejected for being boring. I now see that all I need to reject solipsism is to find it boring and intellectually barren.

By the way, Creationism is amenable to the same treatment. Of course Earth and man could have been created during the week starting at 9 am on October 23, 4004 BC (why don't we celebrate this as Earth Day?), as Bishop Ussher calculated from the Bible (Encyclopedia Britannica, 15th edition). Of course at that time, Earth was created with fossils and all. But then it could also have been created ten minutes ago, not only with the fossils, but also with our memories, ready-made. We will never be able to refute it. But the thesis is boring, and not only to biologists.

Thus Rorty (page 5) distinguishes between denying that truth is out there (which he does) and denying that the world is out there (which is what solipsism does).

   To say that the world is out there, that it is not our
   creation, is to say, with common sense, that most things
   in space and time are the effect of causes which do not
   include human mental states. To say that truth is not
   out there is simply to say that where there are no
   sentences there is no truth, that sentences are elements
   of human languages, and that human languages are human
   creations.

Later on the same page he has a pithy redescription of this passage:

   ..., we shall not be tempted to confuse the platitude
   that the world may cause us to be justified in
   believing a sentence true with the claim that the
   world splits itself up, on its own initiative, into
   sentence-shaped chunks called "facts".

I think this says something about model theory. Not so much about model theory itself as about the esthetic or emotional need that people have for model theory. Such people take for granted that "the world splits itself up, on its own initiative, into sentence-shaped chunks". Only then does model theory become important. Otherwise model theory is just an exercise in translating from one language to another.

In fact, Rorty says somewhere in the same book that for his sort of philosopher, a proof is a redescription. If the proof is rigorous, then it is a very careful redescription. If the proof is formal, then it is a redescription in a formal language. But a redescription it is. It is not an appeal to truth that is out there. The redescription can be useful in so far as it helps us believe something that we doubted before. Though this may seem a truism, this point of view is not shared by the field of formal program verification, which aims at formal, machine-generated proofs of correctness of, say, computer programs written in Ada. Thus program verification has in common with some (all?) other research programs that it is based on implicit philosophical presuppositions. Model theory and program verification are especially striking examples. Certain research programs in artificial intelligence may be equally striking. Sir James Lighthill, who wrote an influential report on artificial intelligence in the early 1970s, might have had something like this in mind, but he was not enough of a philosopher to pull it off.

There seem to be two kinds of model theory for predicate logic. In the usual one the model is a mathematical structure with relations and functions. Let us call this weak model theory. I have never seen strong model theory in print, but heard it described by P.J. Hayes in the early 1970s as being a correspondence between names in the language and "real" objects, not elements in a mathematical structure. It seems that weak model theory is just an exercise in redescription, because the model, being an abstract mathematical structure, is necessarily itself a linguistic entity. Strong model theory is an example of the traditional kind of philosophy ("truth out there, waiting to be discovered") that leads to the conundrums that Rorty gets bored with.

In weak model theory, where models are mathematical structures, one can distinguish an ultra-weak strain, in which the correspondence between sentence and model is only a redescription, its sole motivation being that the description in terms of the model gives us insights that the sentence version does not. But the usual version of weak model theory accords the model version a special status that the sentence does not have: a special status derived from being somehow closer to "reality" or to the "nature of things", even though the model is acknowledged to be merely a mathematical structure.

Thus in ultra-weak model theory, the criterion is whether the redescription in terms of a model yields insight or is perhaps otherwise helpful. I doubt whether that is the case for the models for the lambda calculus.

Around 1970 Scott found a model of the lambda calculus. This calculus is a very intuitive way of "modelling" certain aspects of computation. It is simple. What more can one want? Not only Scott but, judging from the reception of his work, a lot of other people as well thought that the lambda calculus without a model was but a meagre and unsatisfactory thing. The model provided by Scott was described in terms of topological concepts like retracts, so that to those unversed in topology, but who could appreciate the lambda calculus, the model was complete gobbledygook. Later Scott found a less esoteric model in terms of sets of integers, but one which is still much less intuitive than the lambda calculus itself. So one asks: what could possibly have been added to an understanding or appreciation of the lambda calculus by these models? What was the calculus lacking without a model?
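
The simplicity claimed for the calculus can be made tangible: the sketch below (my own illustration; the term encoding and function names are not from any of the works cited) implements beta reduction, the calculus's one computational rule, in a few lines. For brevity, substitution assumes bound variable names are distinct, so no capture-avoiding renaming is done.

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', f, a)

def subst(term, x, value):
    """Replace free occurrences of variable x in term by value."""
    tag = term[0]
    if tag == 'var':
        return value if term[1] == x else term
    if tag == 'lam':
        y, body = term[1], term[2]
        return term if y == x else ('lam', y, subst(body, x, value))
    return ('app', subst(term[1], x, value), subst(term[2], x, value))

def reduce_once(term):
    """One leftmost-outermost beta step, or None if term is in normal form."""
    if term[0] == 'app':
        f, a = term[1], term[2]
        if f[0] == 'lam':                      # beta-redex: (\x.body) a
            return subst(f[2], f[1], a)
        step = reduce_once(f)
        if step is not None:
            return ('app', step, a)
        step = reduce_once(a)
        if step is not None:
            return ('app', f, step)
    elif term[0] == 'lam':
        step = reduce_once(term[2])
        if step is not None:
            return ('lam', term[1], step)
    return None

def normalize(term):
    step = reduce_once(term)
    while step is not None:
        term = step
        step = reduce_once(term)
    return term

# (\x.\y.x) a b  reduces to  a
K = ('lam', 'x', ('lam', 'y', ('var', 'x')))
print(normalize(('app', ('app', K, ('var', 'a')), ('var', 'b'))))  # ('var', 'a')
```

Nothing in this sketch appeals to a model; the reduction rules are the whole story, which is just what makes one wonder what the models were for.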

It was also not clear what counted as a model. For example, one can express the reduction rules of the lambda calculus by Horn clauses. These are guaranteed to have a (minimal Herbrand) model. Why is that not a model of the lambda calculus? Albert Meyer noticed this mystery and wrote a paper "What is a model of the lambda calculus?" (Information and Control, vol. 52 (1982), pp. 87--122).
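
The guarantee invoked here is that a set of Horn clauses always has a minimal model: the least fixpoint of the immediate-consequence operator. A sketch of my own (the tiny propositional program below is illustrative, not an encoding of the lambda calculus's reduction rules) shows the construction:

```python
# Propositional Horn clauses as (head, [body atoms]); facts have an empty body.
clauses = [
    ('p', []),            # p.
    ('q', ['p']),         # q :- p.
    ('r', ['q', 'p']),    # r :- q, p.
    ('s', ['t']),         # s :- t.   (t is never derivable)
]

def minimal_model(clauses):
    """Least fixpoint: add a head whenever its whole body is already derived."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

print(minimal_model(clauses))  # {'p', 'q', 'r'} -- the minimal Herbrand model
```

Every Horn program admits this construction, which is what gives Meyer's question its bite: by this route a "model" of the reduction rules comes for free, so what extra demand makes it fail to count as a model of the lambda calculus?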

The work on models of the lambda calculus usually comes under the label of "denotational semantics": a method of giving meaning to the lambda calculus and to programming languages by mapping their expressions to a mathematical model. I take it that "denotational semantics" is for these languages what model theory is for logic. Denotational semantics may be mathematically a bit more sophisticated in that its correspondences are explicitly between mathematical structures and are required to be homomorphisms. But researchers like H. Andreka and I. Nemeti have used the homomorphism method for the model theory of logic as well.

In closing, let me remark that even for a brief review I have not done justice to Rorty's book. I only touched on the epistemological aspect concerning the non-mental world. But his sort of philosopher repudiates (page 4)

   the very idea of anything -- mind or matter, self or
   world -- having an intrinsic nature to be expressed
   or represented.

This leads to a form of relativism that may seem extreme, and frightening, to some. The great value of Rorty's book is to show that it can lead to an agnostic personal philosophy that makes the holder less likely to inflict cruelty than philosophies that explicitly invoke lofty humanitarian goals.