The C language was created by Ken Thompson and Dennis Ritchie at AT&T's Bell Labs between 1969 and 1973.
Most of the operating systems of that period were written in assembly language; Unix, instead, was mostly written in C. For this reason the C language is closely tied to the Unix operating system: C was written for Unix development with a very pragmatic approach.
Unix, distributed at no cost to US universities, was very successful; together, C and Unix spread widely, both in academic circles and in industry. Unix was the operating system of the RISC workstations of the eighties, the architecture that killed the CISC mini-computer market.
Most languages that came after C borrowed from it their syntax and many ideas and constructs. C is a very important milestone in the evolution of programming languages; it is designed to be powerful but with a simple syntax, it contains all the statements of a structured language (but also the goto statement), and it allows for low-level operations, such as the direct manipulation of memory addresses.
C is still one of the most used programming languages in the world around 2010 [1], whereas functional and logic programming languages such as Prolog, Lisp, Haskell and Scheme are confined to academic circles or to specific domains. A sad symptom of the gap between reality and the academic world; and in Italy this problem is even bigger.
I like C: it is essential and powerful, with its clear syntax, its free format (no more 80 columns with reserved positions) and its natural tendency toward structured programming; but, coming from FORTRAN, I had to grasp some important changes:
The first problem I encountered is the need to deal explicitly with addresses, as pointers to the memory areas containing variables.
In C variables are passed to functions by copy: the function works on a copy of the variable, leaving the original untouched, and it returns only one value to the caller. The opposite of FORTRAN, where arguments are passed by reference and functions can alter the arrays passed to them.
So, to modify an array or several variables inside a function, you have to pass to the function the addresses of those arrays or variables. For this reason, in C it is common to find arrays of pointers, pointers to functions, pointer arithmetic and so on: pointers are everywhere, and arrays themselves are implemented using pointers.
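A minimal sketch of the difference (the function names are only illustrative):

    #include <stdio.h>

    /* The function receives a copy: the caller's variable is untouched. */
    void twice_by_value(int x)    { x = 2 * x; }

    /* The function receives an address and modifies the original variable. */
    void twice_by_pointer(int *x) { *x = 2 * (*x); }

    int main(void)
    {
        int n = 21;
        twice_by_value(n);      /* n is still 21 */
        twice_by_pointer(&n);   /* n is now 42 */
        printf("%d\n", n);
        return 0;
    }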
I like very much the flexibility this gives to the program, but creating a bogus pointer (pointing nowhere) becomes an easy mistake. This is not detected at compilation time and causes, at run time, all kinds of nasty things; you can even overwrite the program's own memory, causing errors in unrelated parts of your program. So "take care of your pointers" is a must in C programming.
C programs are usually modular: you split your code into different files, compiled independently and linked together at the end. The compiler has a preprocessor that helps in this task.
Each part of the program must know the structure of the functions it is calling; for this reason the function declarations (name, argument types, etc.) are put in a declaration file (the header file) and the body of the function in another file. If, in a file of the program, you have to call a function, its header file must be included. There is a special preprocessor instruction for this, the #include statement:
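A minimal sketch, assuming hypothetical files mylib.h, mylib.c and main.c:

    /* mylib.h: the declaration, visible to every file that calls the function */
    double average(const double *v, int n);

    /* mylib.c: the body of the function, compiled separately */
    #include "mylib.h"
    double average(const double *v, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; i++) sum += v[i];
        return sum / n;
    }

    /* main.c: to call the function, include its header */
    #include <stdio.h>
    #include "mylib.h"
    int main(void)
    {
        double data[3] = {1.0, 2.0, 3.0};
        printf("%f\n", average(data, 3));
        return 0;
    }

Each file can then be compiled on its own (for example with cc -c mylib.c and cc -c main.c) and the resulting object files linked together at the end.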
Variable scope: in FORTRAN all variables are local to subroutines and functions, except function arguments (passed by reference) and data in COMMON blocks. C is not so simple: variables declared inside a block are local to that block, variables declared outside a block are local to the function, variables declared outside any function are visible to the whole file. And you can declare a variable as external (global), sharing it among all the files of the program.
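A minimal sketch of the different scopes (the variable and function names are only illustrative):

    static int file_counter = 0;   /* static: visible only inside this file        */
    int global_counter = 0;        /* global: usable from other files with extern  */

    void count(void)
    {
        int calls = 0;             /* local to the function */
        calls++;
        for (int i = 0; i < 3; i++) {
            int step = i;          /* local to this block only */
            file_counter += step;
        }
        global_counter += calls;
    }

    int main(void)
    {
        count();
        return global_counter;
    }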
In more modern languages there are singletons to solve this problem: structures (classes) which are unique in your program and can be referenced everywhere. The use of singletons is bad practice; they represent a hidden internal interface, deeply buried in your program; an example of clear abuse of singletons can be seen in the Geant4 toolkit.
Structures: collections of things of different types, treated as a single entity. A very useful feature, which I had already found in some extensions to the FORTRAN used on VAX computers.
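A minimal sketch, with a hypothetical particle record:

    struct particle {
        char   name[16];   /* things of different types ...           */
        double energy;
        int    charge;
    };

    /* ... collected and treated as a single entity */
    struct particle p = { "proton", 938.272, 1 };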
Allocation of arrays at run time. Old FORTRAN lacks this important feature: in old FORTRAN you have to know in advance how much space you need for arrays, and their dimensions are hard coded in the program, a severe limitation.
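In C the size can be decided while the program runs, using the standard library allocation functions; a minimal sketch:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int n;
        if (scanf("%d", &n) != 1 || n <= 0) return 1;   /* size known only at run time */

        double *v = malloc(n * sizeof *v);              /* allocate the array */
        if (v == NULL) return 1;                        /* allocation can fail */

        for (int i = 0; i < n; i++) v[i] = i * i;
        printf("last element: %f\n", v[n - 1]);

        free(v);                                        /* release the memory */
        return 0;
    }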
Preprocessor: this is a phase preceding the compilation, in which strings can be defined and evaluated before the compilation. In this way you can also build custom C statements.
This is mainly used for conditional compilation of parts of the program (e.g. depending on the computer architecture you are using) and for the definition of constants.
This is a useful feature, but sometimes abused: people use the preprocessor to create complex functions; this can give some extra performance to the running program, but it can result in an unreadable mess of code and, worse, it makes debugging very difficult.
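A minimal sketch of the typical uses; the names BUFFER_SIZE, PATH_SEP and SQUARE are only illustrative, and SQUARE is the kind of function-like macro that is easy to abuse:

    #include <stdio.h>

    #define BUFFER_SIZE 1024                 /* definition of a constant */

    #ifdef __unix__                          /* conditional compilation, depending on the platform */
    #  define PATH_SEP '/'
    #else
    #  define PATH_SEP '\\'
    #endif

    #define SQUARE(x) ((x) * (x))            /* function-like macro: fast, but harder to debug */

    int main(void)
    {
        char buffer[BUFFER_SIZE];
        buffer[0] = PATH_SEP;
        printf("%c %d\n", buffer[0], SQUARE(3));
        return 0;
    }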
Accompanying library: C has an associated library of useful functions, good also for character and string handling, which old FORTRAN lacked.
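A minimal sketch using a few of the standard string functions:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char name[32];
        strcpy(name, "Unix");                 /* copy a string      */
        strcat(name, " and C");               /* concatenate        */
        printf("%s (%zu characters)\n", name, strlen(name));
        return 0;
    }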
C is a good language but, due to some features of the language, managing big C programs of hundreds of thousands of statements is a very complex task. In the following years, as program sizes grew, different languages became popular, such as C++ and Java.
This text is released under the "Creative Commons" license.