During a computer science degree, a student is likely to end up attending an object oriented programming lecture, sometimes more honestly called Java or C♯. In this lecture, the student will generally learn that Java, C♯ and C++ are object oriented languages. Some teachers will even try to define what an object is, saying that it is a group of data (like C’s structs) together with methods operating on that data.
Yet, by this definition, C is an object oriented programming language. It suffices to add function pointers to a structure and assign them during allocation, and voilà! You’ve got yourself an object! Admitting this is an incomplete explanation, some teachers may add that you need inheritance to do real OOP, and blah blah blah…
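To make this concrete, here is a hypothetical sketch (names are mine) of a C “object”: a struct holding data plus function pointers assigned at allocation time.

```c
#include <stdlib.h>

/* A hypothetical "class" in plain C: data plus function pointers
 * filled in during allocation. */
struct counter {
    int value;
    void (*increment)(struct counter *self);
    int  (*get)(const struct counter *self);
};

static void counter_increment(struct counter *self) { self->value++; }
static int  counter_get(const struct counter *self) { return self->value; }

/* The "constructor": allocate, then assign the "methods". */
struct counter *counter_new(void) {
    struct counter *c = malloc(sizeof *c);
    if (c) {
        c->value = 0;
        c->increment = counter_increment;
        c->get = counter_get;
    }
    return c;
}
```

Calling `c->increment(c)` even looks like a method call, except that `self` has to be passed explicitly.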
But one way or another, one has to accept it: Java, C♯ and C++ are not real object oriented languages.
What is OOP?
Traditionally, one learns that object oriented programming is a paradigm where everything is an object, and an object is a group of data together with some processes on that data (the famous functions that OOP calls methods). The subtlety, compared to a language like C, lies in the fact that data must be explicit parameters of functions, while they can be implicit for an object’s methods. For instance, if I want to convert an integer into a string in C, I have to write something like this:
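Something along these lines (a minimal sketch using `snprintf`; the function name is mine):

```c
#include <stdio.h>

/* The integer is passed explicitly: the function does not
 * "belong" to the data it operates on. */
void int_to_string(int n, char *out, size_t size) {
    snprintf(out, size, "%d", n);
}
```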
when I would write something like this in Java:
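For instance (a hypothetical minimal example; `Integer` is used because Java’s primitive `int` is not an object):

```java
public class ToStringDemo {
    static String convert(Integer i) {
        // No argument to toString(): the data lives inside the object.
        return i.toString();
    }

    public static void main(String[] args) {
        Integer i = 42;                       // autoboxed into an object
        System.out.println(convert(i));
    }
}
```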
The toString() method does not need an argument, as the data is enclosed in the object.
To instantiate an object, a special method needs to be defined: the constructor. The class itself is a kind of blueprint from which each object is built.
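A hypothetical blueprint, for illustration (class and field names are mine):

```java
public class Car {
    private final String model;   // data held by each object

    // The constructor: called once per instantiation.
    public Car(String model) {
        this.model = model;
    }

    public String getModel() {
        return model;
    }
}
```

Each `new Car("…")` stamps out a fresh object from the same blueprint.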
This idea of OOP was born with the success of C++ and Java, which imported the notion of object into their lexicon. Besides, in Java, every class implicitly inherits from Object. While this idea is not entirely untrue, it still represents an incomplete version of the OOP concept invented by Alan Kay in the 1970s.
Alan Kay’s vision
Alan Kay is often presented as the true creator of OOP. But, as often in science, an invention does not magically appear in some isolated genius’s mind. Alan Kay’s work, just like any other work in human history, relies on that of his predecessors. This is why you are likely to read people on the internet claiming that Alan Kay is not the true creator of OOP, that it is someone else’s creation, and so on. But that is not my point.
In Alan Kay’s original concept, object oriented programming consists in working with objects instead of data + functions. Everything in OOP is an object, and an object is a software component in itself. It is an abstraction of some real life concept (like a car, a tree, etc.). It can be a piece of memory containing data and data processing, as in Java, but it can be much more. For instance, any server queried through RPC can be considered an object. What lies behind the definition of an object, and how it is represented in memory, does not matter. We are talking about a concept: any piece of software.
This was Alan Kay’s point when he said that he did not have C++ and Java in mind when he coined the term “object oriented”. Even though developers afterward only kept the object idea, what was really important was the idea of message passing.
This is where Java and C++ have failed: they are not object oriented languages as Alan Kay defined them. While they have some OOP features, they completely miss the essential part: messages. They keep a clear separation between native types (char, etc.) and objects, yet native types cannot send or receive messages.
The main downside of having forgotten messages is that the only way to send a message in “modern OOP” is to call a method; which is sad, because it is a very simplistic vision. Those who have had to do event-based programming in Java know how awful implementing an observer pattern is. You can’t easily perform proper callback passing (yes, even in Java 8, because their implementation of lambdas is shit).
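To see the ceremony involved, here is a hypothetical minimal observer in Java (all names are mine): the “message” is reduced to an interface, a registration list, and a loop of method calls.

```java
import java.util.ArrayList;
import java.util.List;

// The "message" must first be frozen into an interface.
interface Listener {
    void onEvent(String payload);
}

class Button {
    private final List<Listener> listeners = new ArrayList<>();

    void addListener(Listener l) {
        listeners.add(l);
    }

    void click(String payload) {
        // "Sending a message" is just calling a method on each observer.
        for (Listener l : listeners) {
            l.onEvent(payload);
        }
    }
}
```

Every new kind of event means a new interface, a new registration method, and a new dispatch loop.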
It’s particularly simplistic as C++ and Java force the developer to pass objects of a given type as method parameters. Thus, the developer sometimes has to type cast the parameters. This was a stupid problem with Java 4’s lists and collections. They tried to solve it with generics, but only managed to make the syntax more awful:
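A hypothetical example of the kind of signature generics produce (the method and class names are mine):

```java
import java.util.Collection;

public class Sorter {
    // Bounded wildcards pile up just to say "a collection of comparable things".
    public static <T extends Comparable<? super T>> T max(Collection<? extends T> items) {
        T best = null;
        for (T item : items) {
            if (best == null || item.compareTo(best) > 0) {
                best = item;
            }
        }
        return best;
    }
}
```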
Here, the method’s signature indicates that you can pass anything implementing the Collection interface. But an interface has no other use than to assert that the object implements certain methods.
Python and Groovy solved the problem by adopting the duck typing principle, which goes as follows: if it looks like a duck, swims like a duck, and quacks like a duck, then it’s a duck. Simple. When a message is passed, the language does not give a single fuck about the object’s type. The only thing that should matter is: can this object respond to this message?
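In Python, for instance, this could look like the following sketch (class names are mine):

```python
class Duck:
    def quack(self):
        return "Quack!"

class Person:
    # Not a Duck, not related by inheritance, but it responds
    # to the same message.
    def quack(self):
        return "I'm quacking!"

def make_it_quack(thing):
    # No type check anywhere: the only question the language asks is
    # "can this object respond to quack()?"
    return thing.quack()
```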
Groovy does this gracefully: when a method is called, the language just verifies that the object has this method. If not, it calls methodMissing, which gives the developer a chance to recover from the error, and throws a MissingMethodException by default. You know what that is useful for? Dynamic finders like Grails GORM’s.
As a matter of fact, the purest form of OOP I have found among the classic examples (C++, C♯ and Java) is C++/Qt’s signals/slots mechanism. It is a simple mechanism: any class can define events that may happen, in the form of a method signature (a signal), and the processing that occurs when a signal is received (a slot).
When a slot is connected to a signal (a basic observer pattern), I can pass messages to many objects simultaneously with a very expressive syntax: I simply emit a signal. C♯ has an equivalent in the form of event handlers.
Having forgotten the notion of message passing from Alan Kay’s original idea causes classical “OOP” languages a lot of problems that the original approach had solved. By trying to solve them, their creators (Java’s, particularly) made these languages awfully complex. By insisting on the notion of objects rather than messages, they came to the idea of strong and static typing. But, by trying to prevent developers from making mistakes, they dropped the possibility of elegantly solving exotic problems. When they noticed that, they once again chose the wrong path with interfaces, even to the point of saying that inheritance was overrated…
But that is forgetting one of programming’s cardinal rules: the language should not decide what the programmer can or cannot do. It is the programmer’s job to be aware of what she or he is doing and what it implies.