Since Aristotle, and even before, many philosophers have held the idea that human logic could be reduced to a formal, mathematical system.
Much work on this topic was done between the 19th and the early 20th century: Boole introduced symbolic logic, Frege developed formal logic, Russell and Whitehead tried to reduce even mathematics to logic, and finally Gödel proved that some true statements cannot be demonstrated within a formal system; but the idea that human thought is basically formal logic remained very strong among researchers.
With the development of computers, many scientists began to believe that human intelligence could be reproduced, and even surpassed, by the artificial intelligence of computers.
This idea, in a mixture of hope and fear, is still alive, giving rise to a lot of science fiction films and novels, as well as books and academic studies.
Artificial intelligence is an illusion: the way humans and computers think is inherently different. Computers use a logic based on exact comparisons and mathematics; humans proceed by loose similarities, influenced by many different pieces of information whose correlation is often ill-defined. This is the basis of intuition, and it can't be formalized; logic is only a part of human intelligence, used in situations where intuition fails, most of the time for lack of practical experience.
Moreover, even a single neuron is alive and able to adapt to changes and different situations: something really different from a programmable electronic circuit.
Fuzzy logic and neural networks, as attempts to simulate the interactions between neurons, are very interesting, showing us that the basis of thinking can be something mechanical, but brains and computers remain two completely different things.
Because of this difference, humans are better at learning, adapting and evolving, while computers are fast at data management and calculation; each should be used for what it does best.
With the increasing power of computers, more and more complex tasks, once reserved for humans, can be done today by computer-based equipment; but a computer is a computer and a brain is a brain, and computers must be programmed following their own logic schemes, not to mimic humans, which is a pointless attempt. This is also true for neural networks: they are very effective tools for specific problems where an analytical treatment is unfeasible, but they are only an adaptive way to find the solution to a difficult mathematical problem.
But these were not the leading ideas among scientists, and Artificial Intelligence (AI) studies were fed by huge funding in the US and England in the sixties, and by the Japanese government in the eighties. Proud AI scientists promised everything, but there were not enough results, and the funding stopped after some years and many millions of dollars.
A lot of work in the field of artificial intelligence was done mainly at US universities, such as Stanford or MIT, developing methods and ideas that drove the subsequent development of computer languages.
Computer languages were classified following an evolution-like scheme:
first generation of computer languages: the instructions are binary numbers, represented by sequences of 0s and 1s (machine language).
second generation of computer languages: instructions are short mnemonics controlling the detailed internal behavior of the computer's central unit. Instructions can be grouped into reusable parts (routines). A program called an assembler collects the routines and translates the program into machine language to be used by the computer. Computers in the 50s were programmed entirely in assembly.
third generation of computer languages: these are nearer to the programmer's logic than to the circuitry logic of the computer. Nearly all the languages we use today are of this type: FORTRAN, COBOL, PASCAL, C, etc. They are sometimes referred to as "imperative" languages, because they implement algorithms by detailing each step.
fourth generation of computer languages: the term was widely publicized by James Martin in his books "Application Development Without Programmers" (Prentice-Hall, 1981) and "Fourth-Generation Languages" (three volumes, Prentice-Hall, 1985-86). These languages should use a natural (human) language to describe what to do, instead of dictating a precise procedure (as in the third-generation ones). They are also called "declarative" or "non-procedural" languages because they don't detail procedures.
I've seen this approach successful only in specific domains where procedures are simple and can be standardized: mainly database interaction and the production of reports.
There is some confusion about the classification of fourth-generation languages; an example is SQL, the language used to query relational databases. It is often cited as a fourth-generation language, but it is not really a natural or user-friendly language:
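As a small, hypothetical illustration (the table and column names are invented for the example), even a routine report already asks the user to know the database structure and the syntax for joins and grouping:

    -- hypothetical tables: customers(id, name) and orders(id, customer_id, total)
    -- total spent by each customer, largest amount first
    SELECT c.name, SUM(o.total) AS total_spent
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY total_spent DESC;

The single keywords (SELECT, FROM, GROUP BY) read like English, but the statement as a whole is closer to a formal specification than to a natural-language request.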
fifth generation of computer languages: in 1982 the Japanese Ministry of International Trade and Industry launched the "Fifth Generation Computer Systems" project (FGCS), a ten-year project to build a massively parallel computer for artificial intelligence based on logic programming. The US and England then also allocated new funds to artificial intelligence studies. The Japanese project failed, after producing some prototypes and dissipating tens of billions of yen.
The classification of languages for the fifth generation is confused. Prolog is often cited as a fifth-generation language; in Prolog the program consists of logical conditions (facts and rules), and Prolog tries by itself to find whether a given statement is satisfied or not, analyzing those conditions.
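As a minimal sketch of this idea (the facts and names below are invented for illustration), a Prolog program only states facts and rules, and the interpreter searches on its own for a proof of the query:

    % facts: parent(Parent, Child).
    parent(anna, bruno).
    parent(bruno, carla).

    % rule: X is an ancestor of Y if X is a parent of Y,
    % or a parent of someone who is an ancestor of Y.
    ancestor(X, Y) :- parent(X, Y).
    ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

    % query, typed at the interpreter prompt:
    % ?- ancestor(anna, carla).
    % true.

Nowhere does the programmer say how to search: the resolution strategy is built into the language.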
Prolog and Lisp were widely used for artificial intelligence studies. Lisp was developed at MIT by John McCarthy in 1958. The MIT AI laboratory was very active in the sixties and seventies, and two spin-offs from MIT later produced the Lisp machines: personal workstations with hardware support for Lisp, programmed in Lisp and running an operating system written in Lisp. Only a few thousand Lisp machines were produced; then their business was killed by the RISC workstations of the eighties, which were faster, cheaper and equipped with Lisp compilers.
The emphasis on artificial intelligence and logic, in spite of the overall failure, produced a lot of ideas and methods that influenced computer science in the following years. Along this line, languages such as Smalltalk, Eiffel, Haskell, Scheme, Simula and many, many others were developed. It was a fundamental moment in the evolution of languages and computers; but nothing of this reached me, and I missed it all. In those years I produced only old-style FORTRAN programs for IBM 3090 and CDC mainframes. I also saw COBOL, assembler and Basic, but nothing else.
I always wonder how that could happen, but Italy was at the margins of computer technology: most universities had a very theoretical approach to computing, interested in algorithms and theory rather than in real programming, and the computer market was driven by the big vendors, often with privileged relationships with government institutions whose management was unable to understand what it was buying. We have to remember that there was no internet in the seventies; information spread slowly, following rigid paths, and it was difficult even to find out where to search for a given topic. Specific technical knowledge was the prerogative of limited circles. The internet changed everything, but that is another story.
This text is released under the "Creative Commons" license.