
Go and Rust — objects without class


May 1, 2013

This article was contributed by Neil Brown

Since the advent of object-oriented programming languages around the time of Smalltalk in the 1970s, inheritance has been a mainstay of the object-oriented vision. It is therefore a little surprising that both "Go" and "Rust" — two relatively new languages which support object-oriented programming — manage to avoid mentioning it. Both the Rust Reference Manual and The Go Programming Language Specification contain the word "inherit" precisely once and the word "inheritance" not at all. Methods are quite heavily discussed, but inheritance is barely more than a "by the way".

This may be just an economy of expression, or it may be an indication of a sea change in attitudes towards object orientation within the programming language community. It is this second possibility which this article will consider while exploring and contrasting the type systems of these two languages.

The many faces of inheritance

While inheritance is a core concept in object-oriented programming, it is not necessarily a well-defined concept. It always involves one thing getting some features by association with some previously defined things, but beyond that languages differ. The thing is typically a "class", but sometimes an "interface" or even (in prototype inheritance) an "object" that borrows some behavior and state from some other "prototypical" object.

The features gained are usually fields (for storing values) and methods (for acting on those values), but the extent to which the inheriting thing can modify, replace, or extend these features is quite variable.

Inheriting from a single ancestor is common. Inheriting from multiple ancestors is sometimes possible, but is an even less well-defined concept than single inheritance. Whether multiple inheritance really means anything useful, how it should be implemented, and how to approach the so-called diamond problem all lead to substantial divergence among approaches to inheritance.

If we clear away these various peripheral details (important though they are), inheritance boils down to two, or possibly three, core concepts. It is the blurring of these concepts, caused by using the one word ("inheritance") for all of them, that seems to result in the wide variance among languages. And it is this blurring that is completely absent from Go and Rust.

Data embedding

The possible third core concept provided by inheritance is data embedding. This mechanism allows a data structure to be defined that includes a previously defined data structure in the same memory allocation. This is trivially achieved in C as seen in:

    struct kobject {
	char		*name;
	struct list_head entry;
	...
    };

where a struct list_head is embedded in a struct kobject. It can sometimes be a little more convenient if the members of the embedded structure (next and prev in this case) can be accessed in the embedding object directly rather than being qualified as, in this case, entry.next and entry.prev. This is possible in C11 and later using "anonymous structures".

While this is trivial in C, it is not possible in this form in a number of object-oriented languages, particularly languages that style themselves as "pure" object oriented. In such languages, another structure (or object) can only be included by reference, not directly (i.e. a pointer can be included in the new structure, but the old structure itself cannot).

Where structure embedding is not possible directly, it can often be achieved by inheritance, as the fields in the parent class (or classes) are directly available in objects of the child class. While structure embedding may not be strong motivation to use inheritance, it is certainly an outcome that can be achieved through using it, so it does qualify (for some languages at least) as one of the faces of inheritance.

Subtype polymorphism

Subtype polymorphism is a core concept that is almost synonymous with object inheritance. Polymorphic code is code that will work equally well with values from a range of different types. For subtype polymorphism, the values' types must be subtypes of some specified super-type. One of the best examples of this, which should be familiar to many, is the hierarchy of widgets provided by various graphical user interface libraries such as GTK+ or Qt.

At the top of this hierarchy for GTK+ is the GtkWidget, which has several subtypes including GtkContainer and GtkEditable. The leaves of the hierarchy are the widgets that can be displayed, such as GtkEntry and GtkRadioButton.

GtkContainer is an ancestor of all widgets that can serve to group other widgets together in some way, so GtkHBox and GtkVBox — which present a list of widgets in a horizontal or vertical arrangement — are two subtypes of GtkContainer. Subtype polymorphism allows code that is written to handle a GtkContainer to work equally well with the subtypes GtkHBox and GtkVBox.

Subtype polymorphism can be very powerful and expressive, but it is not without its problems. One of the classic examples in the literature involves "Point" and "ColorPoint" and exactly how the latter can be made a subtype of the former, which intuitively seems obvious but in practice raises various issues.

A real-world example of a problem with polymorphism can be seen with the GtkMenuShell widget in the GTK+ widget set. This widget is used to create drop-down and pop-up menus. It does this in concert with GtkMenuItem which is a separate widget that displays a single item in a menu. GtkMenuShell is declared as a subtype of GtkContainer so that it can contain a collection of different GtkMenuItems, and can make use of the methods provided by GtkContainer to manage this collection.

The difficulty arises because GtkMenuShell is only allowed to contain GtkMenuItem widgets; no other sort of child widget is permitted. So, while it is permitted to add a GtkButton widget to a GtkContainer, it is not permitted to add that same widget to a GtkMenuShell.

If this restriction were to be encoded in the type system, GtkMenuShell would not be a true subtype of GtkContainer, as it could not be used in every place that a GtkContainer could be used; specifically, it could not be passed to gtk_container_add() with a GtkButton as the widget to add.

The simple solution to this is to not encode the restriction into the type system. If the programmer tries to add a GtkButton to a GtkMenuShell, that is caught as a run-time error rather than a compile-time error. To the pragmatist, this is a simple and effective solution. To the purist, it seems to defeat the whole reason we have static typing in the first place.

This example seems to give the flavor of subtype polymorphism quite nicely. It can express a lot of type relationships well, but there are plenty of relationships it cannot express properly; in those cases you need to fall back on run-time type checking. As such, it can be both a reason to praise inheritance and a reason to despise it.

Code reuse

The remaining core concept in inheritance is code reuse. When one class inherits from another, it not only gets to include fields from that class and to appear to be a subtype of that class, but also gets access to the implementation of that class and can usually modify it in interesting ways.

Code reuse is, of course, quite possible without inheritance, as we had libraries long before we had objects. Doing it with inheritance seems to add an extra dimension. This comes from the fact that when some code in the parent class calls a particular method on the object, that method might have been replaced in the child object. This provides more control over the behavior of the code being reused, and so can make code reuse more powerful. A similar thing can be achieved in a C-like language by explicitly passing function pointers to library functions as is done with qsort(). That might feel a bit clumsy, though, which would discourage frequent use.
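
As a rough Go illustration of the qsort() pattern, the variable part of the behavior is passed in explicitly as a function value; the insertionSort helper below is invented purely for this sketch:

    // insertionSort reuses the same sorting code for any ordering the
    // caller supplies, much as qsort() in C is handed a comparison function.
    func insertionSort(a []int, less func(x, y int) bool) {
	for i := 1; i < len(a); i++ {
		for j := i; j > 0 && less(a[j], a[j-1]); j-- {
			a[j], a[j-1] = a[j-1], a[j]
		}
	}
    }

    // insertionSort(nums, func(x, y int) bool { return x > y })  // sort descending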

This code reuse may seem as though it is just the flip-side of subtype inheritance, which was, after all, motivated by the value of using code from an ancestor to help implement a new class. In many cases, there is a real synergy between the two, but it is not universal. The classic examination of this issue is a paper by William R. Cook that examines the actual uses of inheritance in the Smalltalk-80 class library. He found that the actual subtype hierarchy (referred to in the paper as protocol conformance) is quite different from the inheritance hierarchy. For this code base at least, subtypes and code reuse are quite different things.

As different languages have experimented with different perspectives on object-oriented programming, different attitudes to these two or three different faces have resulted in widely different implementations of inheritance. Possibly the place that shows this most clearly is multiple inheritance. When considering subtypes, multiple inheritance makes perfect sense as it is easy to understand how one object can have two orthogonal sets of behaviors which make it suitable to be a member of two super-types. When considering implementation inheritance for code reuse, multiple inheritance doesn't make as much sense because the different ancestral implementations have more room to trip over each other. It is probably for this reason that languages like Java only allow a single ancestor for regular inheritance, but allow inheritance of multiple "interfaces" which provide subtyping without code reuse.

In general, confusion over the purpose of inheritance can easily result in confusion over the use of inheritance in the mind of the programmer. This confusion appears in different ways, but perhaps the most obvious is in the choice between "is-a" relationships and "has-a" relationships, a choice that is endlessly debated on the Internet. "is-a" reflects subtyping; "has-a" can provide code reuse. Which is really appropriate is not always obvious, particularly if the language uses the same syntax for both.

Is inheritance spent?

Having these three very different concepts all built into the one concept of "inheritance" can hardly fail to result in people developing very different understandings. It can equally be expected to result in people trying to find a way out of the mess. That is just what we see in Go and Rust.

While there are important differences, there are substantial similarities between the type systems of the two languages. Both have the expected scalars (integers, floating point numbers, characters, booleans) in various sizes where appropriate. Both have structures and arrays and pointers and slices (which are controlled pointers into arrays). Both have functions, closures, and methods.

But, importantly, neither has classes. With inheritance largely gone, the primary vehicle for inheritance, the class, had to go as well. The namespace control provided by classes is left up to the "package" (in Go) or "module" (in Rust). Data declarations are left up to structures. The use of classes to store a collection of methods has partly been handed over to "interfaces" (Go) or "traits" (Rust), and partly been discarded.

In Go, a method can be defined anywhere that a function can be defined — there is simply an extra bit of syntax to indicate what type the method belongs to — the "receiver" of the method. So:

    func (p *Point) Length() float64 {
	return math.Sqrt(p.x * p.x + p.y * p.y)
    }

is a method that applies to a Point, while:

    func Length(p *Point) float64 {
	return math.Sqrt(p.x * p.x + p.y * p.y)
    }

would be a function with the same result. The two compile to identical code, and calls written as "p.Length()" and "Length(&p)" respectively generate identical code at the call sites.
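
For concreteness, here is a minimal, runnable sketch combining the two forms; the definition of Point is assumed, since the article never spells it out:

    package main

    import (
	"fmt"
	"math"
    )

    // Point is an assumed definition with two float64 fields.
    type Point struct {
	x, y float64
    }

    func (p *Point) Length() float64 {
	return math.Sqrt(p.x*p.x + p.y*p.y)
    }

    func Length(p *Point) float64 {
	return math.Sqrt(p.x*p.x + p.y*p.y)
    }

    func main() {
	p := Point{3, 4}
	fmt.Println(p.Length()) // method call: prints 5
	fmt.Println(Length(&p)) // function call: prints 5
    }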

Rust has a somewhat different syntax with much the same effect:

    impl Point {
	fn Length(&self) -> float {
	    sqrt(self.x * self.x + self.y * self.y)
	}
    }

A single impl section can define multiple methods, but it is perfectly legal for a single type to have multiple impl sections. So while an impl may look a bit like a class, it isn't really.

The "receiver" type on which the method operates does not need to be a structure — it can be any type though it does need to have a name. You could even define methods for int were it not for rules about method definitions being in the same package (or crate) as the definition of the receiver type.

So in both languages, methods have managed to escape from existing only in classes and can exist on their own. Every type can simply have some arbitrary collection of methods associated with it. There are times though when it is useful to collect methods together into groups. For this, Go provides "interfaces" and Rust provides "traits".

    type file interface {
	Read(b Buffer) bool
	Write(b Buffer) bool
	Close()
    }

    trait file {
	fn Read(&self, b: &Buffer) -> bool;
	fn Write(&self, b: &Buffer) -> bool;
	fn Close(&self);
    }

These two constructs are extremely similar and are the closest either language gets to "classes". They are, however, completely "virtual". They (mostly) don't contain any implementation or any fields for storing data; they are just sets of method signatures. Other concrete types can conform to an interface or a trait, and functions or methods can declare parameters in terms of the interfaces or traits they must conform to.
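
In Go, that conformance is implicit: any type whose method set includes the interface's methods satisfies it, with no "implements" declaration. A minimal sketch, assuming the Buffer type is defined elsewhere and using an invented nullFile type:

    // nullFile satisfies the file interface simply by having the
    // three methods; nothing ties it to the interface explicitly.
    type nullFile struct{}

    func (nullFile) Read(b Buffer) bool  { return false }
    func (nullFile) Write(b Buffer) bool { return true }
    func (nullFile) Close()              {}

    // A function can declare a parameter in terms of the interface it needs.
    func closeAll(files []file) {
	for _, f := range files {
		f.Close()
	}
    }

    var f file = nullFile{} // accepted by the compiler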

Traits and interfaces can be defined with reference to other traits or interfaces, but it is a simple union of the various sets of methods.

    type seekable interface {
	file
	Seek(offset u64) u64
    }

    trait seekable : file {
	fn Seek(&self, offset: u64) -> u64;
    }

No overriding of parameter or return types is permitted.

Both languages allow pointers to be declared with interface or trait types. These can point to any value of any type that conforms to the given interface or trait. This is where the real practical difference between the Length() function and the Length() method defined earlier becomes apparent. Having the method allows a Point to be assigned to a pointer with the interface type:

    type measurable interface {
        Length() float64
    }

The function does not allow that assignment.
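
A brief sketch of the difference, assuming the Point type sketched earlier (only *Point has the Length() method, so a pointer is assigned):

    var m measurable = &Point{3, 4} // fine: *Point has a Length() method
    fmt.Println(m.Length())         // prints 5

    // With only the standalone function Length(p *Point), the assignment
    // above would be rejected: interfaces are satisfied by methods alone.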

Exploring the new inheritance

Here we see the brave new world of inheritance. It is nothing more or less than simply sharing a collection of method signatures. It provides simple subtyping and doesn't even provide suggestions of code reuse or structure embedding. Multiple inheritance is perfectly possible and has a simple well-defined meaning. The diamond problem has disappeared because implementations are not inherited. Each method needs to be explicitly implemented for each concrete type so the question of conflicts between multiple inheritance paths simply does not arise.
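
In Go, for example, "inheriting" from several interfaces is just taking the union of their method sets; the Reader and Writer interfaces below are local illustrations rather than the standard library's io types:

    type Reader interface {
	Read(b Buffer) bool
    }
    type Writer interface {
	Write(b Buffer) bool
    }

    // ReadWriter combines both; any type with suitable Read and Write
    // methods satisfies it, and no conflict between paths can arise.
    type ReadWriter interface {
	Reader
	Writer
    }

Note that a concrete type still has to implement every one of those methods itself.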

This requirement to explicitly implement every method for every concrete type may seem a little burdensome. Whether it is in practice is hard to determine without writing a substantial amount of code — an activity that current time constraints don't allow. It certainly appears that the developers of both languages don't find it too burdensome, though each has introduced little shortcuts to reduce the burden somewhat.

The "mostly" caveat above refers to the shortcut that Rust provides. Rust traits can contain a "default" implementation for each method. As there are no data fields to work with, such a default cannot really do anything useful and can only return a constant, or call other methods in the trait. It is largely a syntactic shortcut, without providing any really inheritance-like functionality. An example from the Numeric Traits bikeshed is

    trait Eq {
        fn eq(&self, other: &Self) -> bool { return !self.ne(other) };
        fn ne(&self, other: &Self) -> bool { return !self.eq(other) };
    }

In this example it is clear that the defaults by themselves do not provide a useful implementation. The real implementation is expected to redefine at least one of these methods as something meaningful for the final type; the other can then usefully remain as a default. This is very different from traditional method inheritance, and is really just a convenience to save some typing.

In Go, structures can have anonymous members much like those in C11 described earlier. The methods attached to those embedded members are available on the embedding structure as delegates: if a method is not defined on a structure, it will be delegated to an anonymous member value which does define the method, provided such a value can be chosen uniquely.

While this looks a bit more like implementation inheritance, it is still quite different and much simpler. The delegated method can only access the value it is defined for and can only call the methods of that value. If it calls methods which have been redefined for the embedding object, it still gets the method in the embedded value. Thus the "extra dimension" of code reuse mentioned earlier is not present.
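
A small Go sketch of that limitation follows; the Animal and Dog types are invented for illustration:

    type Animal struct {
	name string
    }

    func (a Animal) Speak() string { return "..." }

    // Describe calls Speak() on the embedded Animal value; it cannot see
    // any Speak() defined on a type that embeds Animal.
    func (a Animal) Describe() string {
	return "a " + a.name + " that says " + a.Speak()
    }

    type Dog struct {
	Animal // anonymous member: Dog borrows Animal's methods by delegation
    }

    func (d Dog) Speak() string { return "woof" }

    // d := Dog{Animal{"dog"}}
    // d.Speak()    returns "woof"
    // d.Describe() is delegated to Animal and still uses Animal.Speak(),
    //              so it returns "a dog that says ..."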

Once again, this is little more than a syntactic convenience — undoubtedly useful but not one that adds new functionality.

Besides these little differences in interface declarations, there are a couple of significant differences in the two type systems. One is that Rust supports parameterized types while Go does not. This is probably the larger of the differences and would have a pervasive effect on the sort of code that programmers write. However, it is only tangentially related to the idea of inheritance and so does not fit well in the present discussion.

The other difference may seem trivial by comparison — Rust provides a discriminated union type while Go does not. When understood fully, this shows an important difference in attitudes towards inheritance exposed by the different languages.

A discriminated union is much like a C "union" combined with an enum variable — the discriminant. The particular value of the enum determines which of the fields in the union is in effect at a particular time. In Rust this type is called an enum:

    enum Shape {
        Circle(Point, float),
        Rectangle(Point, Point)
    }

So a "Shape" is either a Circle with a point and a length (center and radius) or a Rectangle with two points (top left and bottom right). Rust provides a match statement to access whichever value is currently in effect:

    match myshape {
	Circle(center, radius) => io::println("Nice circle!"),
	Rectangle(tl, br) => io::println("What a boring rectangle")
    }

Go relies on interfaces to provide similar functionality. A variable of interface type can point to any value with an appropriate set of methods. If the types to go in the union have no methods in common, the empty interface is suitable:

    type void interface {
    }

A void variable can now point to a circle or a rectangle.

    type Circle struct {
	center Point
	radius float64
    }
    type Rectangle struct {
	top_left, bottom_right Point
    }

Of course it can equally well point to any other value too.
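
For example (a sketch building on the declarations above):

    var v void = Circle{Point{0, 0}, 1.0}   // holds a Circle
    v = Rectangle{Point{0, 0}, Point{2, 1}} // now holds a Rectangle
    v = "anything at all"                   // any value satisfies the empty interface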

The value stored in a void pointer can only be accessed following a "type assertion". This can take several forms. A nicely illustrative one for comparison with Rust is the type switch.

    switch myshape.(type) {
    case Circle:
	fmt.Println("Nice circle!")
    case Rectangle:
	fmt.Println("What a boring rectangle")
    }
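
Another form is the single-type assertion with the "comma, ok" idiom, sketched here using the Circle type from above:

    // The assertion yields the concrete value and a boolean reporting
    // whether myshape actually holds a Circle.
    if c, ok := myshape.(Circle); ok {
	fmt.Println("A circle of radius", c.radius)
    }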

While Rust can equally create variables of empty traits and can assign a wide variety of pointers to such variables, it cannot copy Go's approach to extracting the actual value. There is no Rust equivalent of the "type assertion" used in Go. This means that the approaches to discriminated union in Rust and Go are disjoint — Go has nothing like "enum" and Rust has nothing like a "type assertion".

While a lot could be said about the comparative wisdom and utility of these different choices (and, in fact, much has been said) there is one particular aspect which relates to the topic of this article. It is that Go uses inheritance to provide discriminated unions, while Rust provides explicit support.

Are we moving forward?

The history of programming languages in recent years seems to suggest that blurring multiple concepts into "inheritance" is confusing and probably a mistake. The approach to objects and methods taken by both Rust and Go seems to acknowledge this, with a preference for separate, simple, well-defined concepts. It is then a little surprising that Go chooses to still blend two separate concepts, unions and subtyping, into one mechanism: interfaces.

This analysis only provides a philosophical objection to that blend and as such it won't and shouldn't carry much weight. The important test is whether any practical complications or confusions arise. For that we'll just have to wait and see.

One thing that is clear though is that the story of the development of the object-oriented programming paradigm is a story that has not yet been played out — there are many moves yet to make. Both Rust and Go add some new and interesting ideas which, like languages before them, will initially attract programmers, but will ultimately earn both languages their share of derision, just as there are plenty of detractors for C++ and Java today. They nonetheless serve to advance the art and we can look forward to the new ideas that will grow from the lessons learned today.





A little OO goes a long way

Posted May 1, 2013 19:36 UTC (Wed) by ncm (guest, #165) [Link]

The fundamental fact thrown out the window in the hysteria of '90s OO marketing was that a little bit of OO goes a long way. Inheritance is roughly as useful as function pointers in C: definitely useful in their place, but most C programs don't use them. Alex Stepanov, of STL fame, has referred to member functions as "OO gook". Member functions implement walled gardens mostly inaccessible to the template system, absent special effort. What made C++ uniquely powerful was not its OO features, but its destructor, combined (later on) with its ML-like template system. Rust is one of very few subsequent languages that have adopted (not to say inherited!) the destructor.

A little OO goes a long way

Posted May 1, 2013 21:49 UTC (Wed) by b7j0c (subscriber, #27559) [Link]

right. the purist languages have a dogmatic allure (everything is an object, pure functional, etc) but tend to become brittle when encountering real-world problems...and also fracture upon contact with the whimsical, fashion-like trends of the programming world.

i'm currently very interested in Go, Rust and Racket...but sadly as cool as Racket is, i think the jury is in - types are a good thing. Typed Racket isn't quite pervasive enough yet and i'm not sure it ever will be, the community probably too small to undertake the herculean effort of porting the Racket world to Typed Racket

A little OO goes a long way

Posted May 1, 2013 23:58 UTC (Wed) by ncm (guest, #165) [Link]

Purism has a practical correlate that may account for its perceived value. A language can promise regularities that improve interoperability among libraries. E.g., you can code a destructor in any language, but only the language can guarantee it will be called everywhere it should be.

Purism's role is much like that of religion's: religions enforce idiosyncratic behaviors, some of which turn out to have practical value to individuals, or to society, or to ruling powers. Once you identify these and codify them directly, the religions they come from may be left enforcing only irrelevant or actively harmful behaviors.

A little OO goes a long way

Posted May 5, 2013 15:20 UTC (Sun) by sionescu (subscriber, #59410) [Link]

It's funny how you say that "purist languages have a dogmatic allure" immediately followed by "the jury is in - types are a good thing".

A little OO goes a long way

Posted May 6, 2013 17:00 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

Just because a language has types doesn't mean the user needs to manage them manually. You can certainly use Haskell without any type decorations (they're useful "test cases" and documentation when given explicitly though).

A little OO goes a long way

Posted May 3, 2013 3:08 UTC (Fri) by tjc (guest, #137) [Link]

> What made C++ uniquely powerful was not its OO features, but its destructor, ...

Please elucidate.

A little OO goes a long way

Posted May 3, 2013 7:40 UTC (Fri) by jwakely (subscriber, #60262) [Link]

What isn't clear?

Destructors are guaranteed to run, so they give guaranteed, deterministic cleanup of arbitrary resources at scope exit, however the scope is exited. That's something not possible in many, many other languages, whereas inheritance (for code reuse or subtyping) and overridable function pointers are fairly easy to do with a bit of effort.

I was very pleased to learn from one of Stroustrup's talks at the recent ACCU conference that he added destructors to C++ before he added inheritance. He knew what was important and needed.

A little OO goes a long way

Posted May 3, 2013 9:15 UTC (Fri) by NAR (subscriber, #1313) [Link]

In many other languages you can use something like try-finally to achieve this.

A little OO goes a long way

Posted May 3, 2013 11:10 UTC (Fri) by hummassa (guest, #307) [Link]

when you have some resource external to your program represented by some type (a file, a window, a proxy to a variable in another machine), you'll *always* want to take steps to destroy a variable of this type. So, for each variable of this type, you'll be typing (and others will be reading, and it will inflate the SLOCcount of your code) "try \n ... finally \n ... ... \n end" for no good reason, without adding to the intended semantics, and exposing yourself as the programmer to an error introduced by a typo, the forgetting of some condition, or even the optimization of some condition that was impossible but suddenly becomes possible (e.g., when you wrote the code you knew you were closing some file in every possible path, but some maintenance creates a new code path, like some new exception coming from a new version of a library, where the file goes out of scope still open).

A little OO goes a long way

Posted May 3, 2013 13:00 UTC (Fri) by jwakely (subscriber, #60262) [Link]

Also, you'll be writing the same code everywhere you want to clean up a resource of that type. Why repeat yourself, why not associate the cleanup code with the type and make it run automatically?

I'm baffled why any self-respecting programmer would want to duplicate all that cleanup logic in every finally block or why they'd question the advantage of destructors.

Python's with statement is the right idea, the object's scope is limited and its type has some predefined cleanup code that runs automatically when the scope ends. Bingo.

A little OO goes a long way

Posted May 3, 2013 14:18 UTC (Fri) by ehiggs (subscriber, #90713) [Link]

To bring it back on topic, Go has 'defer'[1] which is a bit of try-finally and with/destructors by putting the deferred call near the construction site.

It's still not as good as Python's 'with', in my opinion, because in Go you have to be explicit about the cleanup call, while in Python it doesn't clutter up the code (aside from the indent level).

[1] http://golang.org/doc/effective_go.html#defer

A little OO goes a long way

Posted May 4, 2013 5:37 UTC (Sat) by danieldk (guest, #27876) [Link]

And it's by far not as good as destructors:

- It puts the burden on the programmer and not the type.
- defer only works when leaving the scope of the function calling it. A type's destructors are called when going out of any scope in C++, e.g. when an instance of a class is used as a (non-pointer) member variable.

A little OO goes a long way

Posted May 3, 2013 15:53 UTC (Fri) by hummassa (guest, #307) [Link]

Oh, the duplication is the least of the problems. A bigger one would be the combination of optimization by the programmer and external changes. So, the following code:
try {
  f = open(...)
  f.x()
  f.y()
  if( z ) f.writeCheckSumAndLastBuffer() else f.writeLastBuffer()
} finally {
  f.close()
}
generates a hidden bug when f.y(), which calls a w() function from an external library, starts seeing some exception and the last buffer is not written. Bonus points if the "if(z)" thing was put there by another programmer, in the course of normal maintenance of the program.

A little OO goes a long way

Posted May 4, 2013 5:45 UTC (Sat) by danieldk (guest, #27876) [Link]

I don't see how that is a problem of try/finally. Consider this C++
{
   Resource resource(...);
   resource.x();
   resource.y(); // throws
   resource.writeLastBuffer();
}

If y() throws here in C++, writeLastBuffer() is never called either. If you always want to write something, add it to close() or ~Resource().

Also, Java 7 has a nicer try-with-resources statement, like Python's with, for classes that implement AutoCloseable:

try (Resource r = new Resource(...)) {
  // Do something with r...
}

A little OO goes a long way

Posted May 4, 2013 10:21 UTC (Sat) by hummassa (guest, #307) [Link]

It still burdens the "client" programmer with remembering to use the new try thingy. Destructors have zero client-programmer overhead.

A little OO goes a long way

Posted May 4, 2013 10:26 UTC (Sat) by hummassa (guest, #307) [Link]

> I don't see how that is a problem of try/finally.

You are right, of course. But using destructors you have a much better chance of discovering that some code belongs in a destructor and putting it there, because in the client code the writeLastBuffer thing sticks out like a sore thumb. :-D

Ah, and once you wrote it, all call sites are correct from now on.

A little OO goes a long way

Posted May 7, 2013 14:52 UTC (Tue) by IkeTo (subscriber, #2122) [Link]

Consider this C++
{
   Resource resource(...);
   resource.x();
   resource.y(); // throws
   resource.writeLastBuffer();
}

C++ programmers are accustomed to a concept called RAII, Resource Acquisition is Initialization. So if they always want the last buffer written, they tend to write something like:

class LastBufferWriter {
public:
  LastBufferWriter(Resource& resource): resource_(resource) {}
  ~LastBufferWriter() { resource_.writeLastBuffer(); }
private:
  Resource& resource_;
};

... {
   Resource resource(...);
   LastBufferWriter writer(resource);
   // Anything below can throw or not throw, we don't care
   resource.x();
   resource.y();
}

Not to say that everybody likes having to define a class for every cleanup, though. But with C++0x lambda expressions, the above can easily be automated.

A little OO goes a long way

Posted May 3, 2013 11:39 UTC (Fri) by jwakely (subscriber, #60262) [Link]

Yes, I know. That's an inferior solution compared to destructors.

A little OO goes a long way

Posted May 3, 2013 12:52 UTC (Fri) by rleigh (guest, #14622) [Link]

try..finally is a very poor alternative. Running the destructor when the object goes out of scope or is deleted gives you strict, deterministic cleanup. Using try..finally means I have to reimplement the same logic, *by hand*, everywhere in the codebase where the object goes out of scope. And if I forget to do this in just one place, I'm now leaking resources. What's the chance that this will happen in a codebase of any appreciable size, especially allowing for changes as a result of ongoing maintenance and refactoring? It is effectively guaranteed.

The really great thing about this being done in the destructor is that I can be satisfied that I will never leak resources by default, ever. It's simply not possible. This is the real beauty of RAII; cleanup just happens under all circumstances, including unwinding by exceptions.

As a relatively recent newcomer to Java from a C++ background, I have to say I find the resource management awful, and this stems directly from its lack of deterministic destruction. While it might do a decent job of managing memory, every other resource requires great care to manage by hand, be it file handles, locks or whatever, and I've seen several serious incidents as a result, typically running out of file handles. And the enforcement of checking all thrown exceptions itself introduces many bugs--you can't just let it cleanly and automatically unwind the stack, thereby defeating one of the primary purposes of having exceptions in the first place--decoupling the throwing and handling of errors.

By way of comparison, I haven't had a single resource leak in the C++ program I maintain in 8 years, through effective use of RAII for all resources (memory, filehandles, locks).

Regards,
Roger

Liskov Substitution Principle

Posted May 2, 2013 14:08 UTC (Thu) by rriggs (guest, #11598) [Link]

A very good overview of the issues that OO has with the concept of inheritance.

It would have been good to see a mention of Liskov, especially when discussing the issues around GtkMenuShell. That is a clear violation of the LSP.

http://en.wikipedia.org/wiki/Liskov_substitution_principle

Robert C. Martin's SOLID principles are a must read for any budding (or experienced) programmer. It is where I was first introduced to the concept.

http://en.wikipedia.org/wiki/SOLID_%28object-oriented_des...

Liskov Substitution Principle

Posted May 2, 2013 16:28 UTC (Thu) by alexl (subscriber, #19068) [Link]

It's nice to have theories and principles, and something like the LSP makes a lot of sense in that it gives a well-defined meaning to terms like "subtype" that you can reason about.

However, in practice, things like the menu shell example happen. We have a common container class in order to have a common API for traversing the widget tree, and a menu shell has children, so it has to be a container. However, a menu lays out its children in a specific way (essentially it's a two-column layout where the first column is used for the icon/checkbox/radiobutton), so it cannot accept *any* kind of child.

The pragmatic solution in Gtk+ is that the container has a gtk_container_child_type() method that specifies what kind of children a specific container supports. Then the menu shell can rely on the menu item API to separately position the two columns of its rows.

Another possible solution is to make GtkMenuShell a grid-like container and force users to add separate widgets for e.g. the label and the radio button. This is rather bad API for users though, as it splits up a conceptual object like a checkbox menu row into two objects that you have to separately maintain.

Another approach is the one this article talks about, i.e. make container an interface rather than a class, so that we avoid talking about subtypes at all.

The gtk+ developers generally think that the Gtk+ class hierarchies are overly deep and that if we could break API we would have shallower hierarchies, less code sharing via inheritance, a greater reliance on interfaces to specify common APIs and more use of mixins to share code. However, even in such a world I think it makes sense to have some form of inheritance, including a container baseclass (or possibly just merge the container class into widget).

How would a widget toolkit in go or rust look?

Go and Rust — objects without class

Posted May 3, 2013 12:14 UTC (Fri) by tshow (subscriber, #6411) [Link]

Additional homework: Familiarize yourself with Self and Io. :)

Io is pretty neat.

I'd be interested to try a production-ready OO language that didn't have serious implementation warts and wasn't wedded to strong typing. I'm hopeful about Go in that regard, and I'm also hopeful that Go continues to treat OO as optional seasoning rather than The Way.

Go and Rust — objects without class

Posted May 3, 2013 17:59 UTC (Fri) by b7j0c (subscriber, #27559) [Link]

go has strong typing. it also has type inference, so the compiler can deduce an appropriate type if there isn't an explicit annotation, but that's different from a dynamic language that merely assigns types to values, not variables

Go and Rust — objects without class

Posted May 9, 2013 21:43 UTC (Thu) by VITTUIX-MAN (guest, #82895) [Link]

Well as I see it, Io and Self seem to go to the class of languages with the paradigm "objects as hash tables", which is kind of a heavyweight approach to object orientation, with debatable benefits at least in a compiled language.

Just how often do you clone an object and add some properties to it on the fly, at run time?

Go and Rust — objects without class

Posted Apr 11, 2014 8:32 UTC (Fri) by Blaisorblade (guest, #25465) [Link]

> Well as I see it, Io and Self seem to go to the class of languages with the paradigm "objects as hash tables", which is kind of a heavyweight approach to object orientation, with debatable benefits at least in a compiled language.

At least Self implemented several powerful optimizations to remove that cost.
Thanks to that work, virtual calls can even be *inlined* in Self/SmallTalk/Java/JavaScript. But since that requires a managed language runtime (to get statistics on the target of the call), that's not supported in most C++ implementations.

I mention Self/SmallTalk/Java/JavaScript because the work was done by (some of) the same people — see the history of StrongTalk: http://en.wikipedia.org/wiki/Strongtalk#History. Lars Bak went on from StrongTalk to HotSpot and then to lead Google V8 (IIUC), and I learned some bits of this history first-hand from him in his Aarhus lecture on virtual machines.

Go and Rust — objects without class

Posted May 21, 2013 6:01 UTC (Tue) by mmaroti (guest, #84368) [Link]

The article does not mention the main difference between the Rust and Go object models:

1) In Rust you pass the virtual table pointers separately from the object pointers. So if Rust wants to store a vector of objects implementing the Shape interface, it has to record both a virtual table pointer and a data pointer for each Shape. However, Rust can store a vector of Circle objects that implement the Shape interface by storing a single virtual table for Circle and a data pointer for each Circle. Haskell does the same (there, interfaces are called type classes and implementations are instances).

2) Go stores the virtual tables together with the objects. This is how C++ and Java store objects, so no matter whether you have a vector of Shapes or of Circles, both can be stored as a vector of data pointers.

Go and Rust — objects without class

Posted May 22, 2013 8:19 UTC (Wed) by neilbrown (subscriber, #359) [Link]

Hi. Thanks for your comment. I'm not sure I follow you though. The two type systems look very much the same in this particular respect.

In Rust a pointer to a value is usually just to the value - no vtable is implied. To get a vtable, you use the "as" operator. "mycircle as Shape" becomes a pair of pointers, one to 'mycircle', one to a vtable which implements "Shape" for mycircle.
This is described in section "8.1.10 Object types" of the Rust reference manual, and seems to agree with what you said.

In Go, a pointer to a value is just to that value, no vtable. To get a vtable you need to convert it to an 'interface' type, such as by "Shape(mycircle)". This will compute (possibly at runtime) the vtable if it doesn't already exist, and will create a pointer-pair, just like in Rust. In Go you don't need the explicit cast; assigning to an interface type or passing as a parameter where an interface type is expected is sufficient. This is a small difference from Rust, where I think the "as" is required (not sure though).

More details of the Go approach can be found in http://research.swtch.com/interfaces
This seems quite different to your description of Go.

Go and Rust — objects without class

Posted May 22, 2013 20:40 UTC (Wed) by mmaroti (guest, #84368) [Link]

Hi! Thanks for the pointers and clarification. Yes, as you write, interfaces in Go and Rust are stored essentially the same: both a pointer to the vtable and a pointer to the data is stored. However, I am under the impression that the vtable is computed dynamically for each cast in Go and statically at compile time in Rust. I am going by these two sources:

http://smallcultfollowing.com/babysteps/blog/2012/04/09/r...
https://news.ycombinator.com/item?id=3749860

By these accounts you cannot do upcasts in Rust, so the vtables (the actual type of objects) cannot be computed at runtime. In Go, the vtables (actual type) of objects can be computed.

From my point of view it is an implementation detail whether a language stores the vtable in the first field of the data object (the C++ way) or passes the vtable pointer together with the data pointer (the Go way). The important point is that you can try to cast from interface{} to any other interface.

By the way, does Go have polymorphic arrays, which would ensure that all objects in the array are of the exact same type, and only a single vtable pointer is stored together with a bunch of data pointers?

Go and Rust — objects without class

Posted May 22, 2013 23:04 UTC (Wed) by neilbrown (subscriber, #359) [Link]

  • Yes, the vtable (referred to in the page I linked as an 'itable') is computed dynamically at runtime in Go. However it is only computed once for a given interface/type pair - it isn't recomputed at each cast.
  • No, casts from an interface to a particular type (I call them downcasts, but you seem to call them upcasts) are not possible in Rust. The article mentions this in that Rust has no equivalent of Go's type assertion. You need to use an 'enum' type in Rust if you want that sort of functionality.
  • I see a couple of possibly-important differences between what you call the "C++ way" and the "Go way".
    • The C++ way doesn't scale well for tiny objects. The stored vtable pointer might be bigger than the rest of the object.
    • The C++ way requires a single vtable. I don't know how multiple interfaces work with that. The Go way uses a different itable for each different interface, so multiple interfaces are trivial.
  • I don't think that Go supports polymorphic arrays as you describe.

Go and Rust — objects without class

Posted May 23, 2013 2:25 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

>The C++ way requires a single vtable. I don't know how multiple interfaces work with that.
Reserve the first slot in the vtable for an interface lookup function, kinda like QueryInterface in COM.


Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds