History of Programming Languages

Let’s start at the beginning. In the beginning, computers didn’t even have a keyboard! In fact, things were quite grim – they had neither a keyboard nor a screen, they had punched cards (those little pieces of cardboard with holes, or no holes, in them). Pins were pushed through them, or light was shone through them: a hole (or the absence of one) meant a one or a zero. And programs at that time were written in machine code. Every operation the computer could perform (addition, subtraction, more complicated operations) had its own machine code. People looked these codes up in a table themselves, worked out all the memory addresses themselves, punched it all out by hand, fed it into a reader, and the machine ran it. Of course, the job of a programmer was probably not particularly interesting back then – punching holes – and as science and technology progressed, people naturally started coming up with all sorts of more “interesting” things. For example, the assembler, which already made life a little easier.

Well, how did it make life easier? Instead of remembering some “magic” command code, you used words resembling “human” English, like add or mov, and then listed the registers, memory areas, or variables the operation should apply to. But of course this still required quite a lot of mental effort: keeping in mind which register held what, where the variables were, and what was going on. Why was it like that? Because computers were “dumb” and could not understand anything more “intelligent”. Besides, translating assembler into machine code also takes time and memory (and there was not much of either at the time).

Gradually, it became clear that developing large, complex programs this way was very difficult. A programmer’s productivity in those instructions was very low: he wrote a few meaningful lines per day, and each line did nothing special, just some simple arithmetic operation. So people wanted to make languages much more similar to human language, and to English in particular, so that writing programs would be easier and more convenient. And so it went!

Old and dead languages.

One of the first such languages was Fortran. It, by the way, was also punched onto punch cards – there were special punch cards for Fortran programs. But if you take the Fortran of that time – it appeared in the late 1950s, in 1957 in fact – and try to write something in it, you’ll be very unhappy, I guarantee you! Modern Fortran is still alive, but it’s already quite different from what it used to be.

There’s also a “funny” language called Algol (the version number, 68, indicates the year it was created). It’s an algorithmic language. People could certainly do things in it, but what exactly they could do doesn’t really interest us now. Here our excursion into the ancient and rarely used languages can end, and we can move on to what is still alive (and actively living).

Old, but living languages.

Algol was invented in Europe, while Fortran was mainly used in the States – there is no great difference between them. What trend is noticeable? At first, everything was very complicated: to be able to write anything at all you almost had to be an engineer, an electrical engineer, who understood where all the pins were connected and so on. Even after that, you still had to sit down with sheets of paper and count out the memory by hand, keep track of it. Gradually, everything became easier and easier for the programmer – as little thinking as possible for the human, as much as possible done automatically. Around the end of this period (the lecturer points to Algol and Cobol), languages begin to appear that have, in a sense, “survived” to this day.

BASIC. Maybe some people still write things in it; at least I’ve seen institutions that teach in QBasic – a blue box that says “1989”. In general, it’s alive and well! It was invented as a language for non-programmers. At that time, programming was a very specialized occupation, but here they said: “Well, we have a great language called BASIC, and any intelligent person can write a program in it, it’s easy.” Again, the BASIC of those days and modern BASIC are hugely different. All that business with numbering lines in steps of 10, all the GOTO stuff, and the rest of it – it has nothing to do with modern BASIC, and not even much to do with the BASIC of ’89.

Another funny story is the language Pascal, widely known in university circles. Surprisingly, it was and still is used as a teaching language. In the rest of the world it is less common, but it is alive and well. There’s a man named Niklaus Wirth – a scientist, a theorist. He took part in the discussions around Algol; he didn’t like the way it turned out, and he invented his own language, Pascal. And then Borland (and several other firms before it, Apple in particular) took it and ruined it. He had a beautiful, coherent theory – “everything will be fine” – and they took it and stuffed it with the things people needed for practical work. Well, it didn’t turn out as nice as he wanted it to.

And finally, C. C was invented by engineers. If Pascal was invented by a scientist, C was created by engineers at Bell Labs – primarily Dennis Ritchie (the Ritchie of “Kernighan and Ritchie”). How did this happen? At the time, you couldn’t write anything system-level in those languages (the lecturer points to Fortran, COBOL, Algol). What is “system stuff”? An operating system, a driver, that sort of thing. Those languages were designed for mathematical calculations, for business calculations, and so on, and everything else was written in assembler. There were some intermediate languages, now dead, so C didn’t grow straight out of assembler, but through some in-between things.