Smalltalk Challenge: Post 5 - Talking to Objects

Everything I've read about Smalltalk makes an effort to stress that objects are capable of doing three things: maintaining state, receiving messages, and sending messages. The understanding that an object maintains its own state is common across many languages, but message passing may be unfamiliar to programmers working with other languages (though message passing does play an important role in Objective-C). Alan Kay has said that message passing is the most important concept to understand in Smalltalk, and that objects have been over-emphasized.

Smalltalk's object model follows the classical (class-based) style; the programmer writes a class which is then used by the runtime as a blueprint for instantiating an object. But unlike C++ or Java, in Smalltalk the programmer never works with objects by invoking their methods directly. Instead, a message is sent to the object, which receives the message, looks up the appropriate method, and then executes it. This is more than a conceptual difference; it's a concrete implementation detail, as explained by the following quote from Wikipedia's Objective-C article:
In a Simula-style language [like C++, Java, etc.], the method name is in most cases bound to a section of code in the target class by the compiler. In Smalltalk and Objective-C, the target of a message is resolved at runtime, with the receiving object itself interpreting the message.
Perhaps a suitable way of looking at it is that all of the properties and methods of an object are private. The only way to interact with the object is by sending it the appropriate messages.
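
As a small illustration (my own sketch, not from the original post), even reading a Point's x coordinate happens by sending it a message; there is no direct field access:
| p |
p := 3 @ 4.                             "@ is itself a message sent to 3; it answers a Point"
Transcript show: p x printString; cr.   "x is a message, not a direct read of the instance variable"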

There are three types of messages in Smalltalk: unary messages, which take no arguments; binary messages, which have operator-like selectors and take exactly one argument; and keyword messages, which can take one or more arguments.

Unary messages appear after the object instance they're sent to. The cr message is an example of a unary message that can be sent to a Transcript object, which responds by displaying a newline character.
Transcript cr.
Binary messages appear between the receiver and a single argument. For example, the binary message = can be sent to foo with the argument 42, and foo responds by comparing its value with the argument and returning a Boolean object indicating whether the values are equal.
foo = 42.
Keyword messages end with a colon and can take multiple arguments. The at:put: message is an example of a keyword message which can be sent to an IntegerArray object. The array responds by placing the value 42 at index 5.
anIntegerArray at: 5 put: 42.
Keyword messages are a single message. That is, at:put: is not an at: message followed by a put: message. In Java, it might be written like anIntegerArray.atPut(5, 42).
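
To reinforce that a multi-part selector is one message, consider between:and: (a sketch I've added; the receivers and values are arbitrary):
7 between: 1 and: 10.        "a single message, #between:and:, with two arguments; answers true"
| ages |
ages := Dictionary new.
ages at: 'Alan' put: 42.     "likewise one message, #at:put:, not #at: followed by #put:"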

In statements that contain multiple messages, the order of precedence is from left to right with unary messages being sent first, then binary messages, and finally keyword messages, though the order can be influenced by parentheses. Recall the example 2 + 1 / 4 from Post 3. The expression contains two binary operators, + and /. Following the precedence rules, 2 is sent the message + with the argument 1, and the result (3) is then sent the message / with the argument 4, yielding 3/4. Contrast this with how the statement would be evaluated in Java following the mathematical order of operations. To force that execution in Smalltalk, the expression would need to be parenthesized as 2 + (1 / 4).
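
Here's a short sketch (reusing the anIntegerArray placeholder from above) that mixes the three message types to show the precedence in action:
2 + 3 factorial.                           "unary first: 3 factorial is 6, then 2 + 6 gives 8"
anIntegerArray at: 1 + 2 put: 5 squared.   "unary (5 squared) and binary (1 + 2) parts evaluate before the keyword message #at:put:"
2 + (1 / 4).                               "parentheses force the division first, giving the fraction 9/4"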

It seems the biggest benefit of the approach is that its late/dynamic binding affords the flexibility of duck typing in a compiled language... but now that dynamic languages are widely used, and something like an inferred typing system can give the "feel" of dynamic programming in a strongly-typed language, I wonder if this is only historically important now. There is a runtime performance hit since method lookup happens at execution time instead of at compilation, and some Smalltalk implementations have explored method caching to help mitigate the impact, but it seems there must also be a whole class of compile-time optimizations that simply cannot be performed, which could affect performance as well.
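
To make the duck-typing point concrete, here's a small sketch (my own, not from the post): any object that understands a selector can serve as the receiver, and the selector itself can even be chosen at runtime with perform:.
| things |
things := OrderedCollection new.
things add: 42; add: 'hello'; add: (3 @ 4).
"Each element resolves #printString at runtime; no common compile-time type is required."
things do: [:each | Transcript show: each printString; cr].
"The selector to send can be picked while the program runs."
Transcript show: (2 perform: #+ with: 3) printString; cr.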

I asked about the advantages and drawbacks of message passing on the PiLuD mailing list. Some of the respondents made pretty good points, and I recommend reading the exchange if you're interested in hearing some other opinions from a language-design perspective.
