
Absolute Measure for Elegance in Programming Languages?

Question posed by Wolfram Research

Does Google know if there is an absolute measure for elegance in programming languages?

My Opinion

My measure for programming language elegance is lexeme count (tokens, symbols, whatever you call it).[1] Why? Because the designer of the programming language (1) has free rein over all aspects of the language, including lexical syntax, grammar, and semantics, as long as (2) she can implement it in a parser. No one can tell her something is too succinct.[2] If the language takes lots of tokens to express a given program, then maybe the language needs better concepts to build programs out of!

How does one count tokens? I say, anything the programmer has to type, copy-paste, or get the computer to write for her is a token, so long as she has to think of it as a distinct element (even semicolons or parentheses), because each one adds to the stock of symbols the programmer has to keep in her head, helpful editors notwithstanding.
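To make that concrete, here is a minimal sketch of the counting rule in Haskell. The naive tokenizer and the name tokenCount are my own inventions for illustration, not a real lexer:

import Data.Char (isAlphaNum, isSpace)

-- A naive, hypothetical tokenizer for the counting rule above: every
-- identifier or literal is one token, and every punctuation mark
-- (even a semicolon or parenthesis) is one token of its own.
tokens :: String -> [String]
tokens [] = []
tokens (c:cs)
  | isSpace c    = tokens cs
  | isAlphaNum c = let (w, rest) = span isAlphaNum (c:cs) in w : tokens rest
  | otherwise    = [c] : tokens cs

tokenCount :: String -> Int
tokenCount = length . tokens

main :: IO ()
main = print (tokenCount "objs[i].DoWhatever();")  -- 9 by this rule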

You decide:

void RunAll(List<Obj> objs) {
    for (int i = 0; i < objs.Count; i++)
        objs[i].DoWhatever();
}

vs.

void RunAll(List<Obj> objs) {
    foreach (Obj obj in objs)
        obj.DoWhatever();
}

Which is clearer? Why? Less typing? Some will say "who cares about more typing?", to which I answer: fewer tokens means fewer opportunities for mistakes, especially in nested for loops, where it is easy to increment or index with the wrong counter! (If you prefer having to name a counting variable like i, or if you equate longer programs with clarity, you had better be programming in assembly, or I just don't think you have a consistent case.)

If you agree that foreach is good, why stop there? Why not allow the programmer to create her own control structures?

runAll objs = sequence_ (map doWhatever objs)

or

runAll = sequence_ . map doWhatever

or

runAll = mapM_ doWhatever

Is this too succinct? Why?
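In case you want to see those definitions run, here is a self-contained sketch; Obj and doWhatever are hypothetical stand-ins I made up so the snippet compiles:

-- Obj and doWhatever are invented placeholders, just so this compiles.
data Obj = Obj String

doWhatever :: Obj -> IO ()
doWhatever (Obj name) = putStrLn ("doing whatever to " ++ name)

-- The three spellings from above, side by side:
runAll1, runAll2, runAll3 :: [Obj] -> IO ()
runAll1 objs = sequence_ (map doWhatever objs)
runAll2 = sequence_ . map doWhatever
runAll3 = mapM_ doWhatever

main :: IO ()
main = runAll3 [Obj "a", Obj "b"]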

Which specific measure for token counts?

The truth is, I don't care if you allow for more leeway in token counts, e.g. ignore semicolons and braces since they more or less reiterate newlines and indentation. Programming languages only get more succinct when they are sufficiently powerful and high-level. Pretending "certain tokens are unimportant" or that "these two tokens are more like one token" is a sure sign the language is not succinct enough, e.g. %s = 4; – count that as three, four, or five tokens if you like.

Succinct enough at what?

That's the main problem with this or any metric: what domain are we talking about? For example, I bet you that Haskell is the ideal programming language for implementing Haskell compilers (i.e. the least code, counted in tokens). But you know something interesting? Haskell is also the ideal programming language for implementing a Java compiler (Java itself would be painful to implement a Java compiler in---though I will admit that Java is ideal for embedding Java in programs, because it is already introspective and has all the runtime support in some huge API somewhere). This means that of course there are areas where certain languages shine and others ... are not so shiny.

I suppose the trick is to find a wide variety of interesting domains, and specific applications within those domains that each language should be able to produce (we need to compare exact tasks to be scientific). Here is a partial list to get us going:

  • Compilers, interpreters
  • Highly concurrent server applications
  • Tools, e.g. version control systems
  • Graphics, image processing
  • Algorithms and frequently used data structures and methods (regexes, maps/sets/hashes, bit twiddling, etc.; see the sketch just after this list)
  • ?
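To make that last bullet concrete, here is the sort of small, exactly comparable task I have in mind, sketched with the standard containers library (the task choice, word frequency, is mine):

import qualified Data.Map as Map

-- Count word frequencies: a tiny, exactly reproducible comparison task.
wordFreq :: String -> Map.Map String Int
wordFreq = Map.fromListWith (+) . map (\w -> (w, 1)) . words

main :: IO ()
main = print (wordFreq "the quick fox the fox")
-- prints: fromList [("fox",2),("quick",1),("the",2)]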

I think that Haskell has really shown that it excels in many areas. It still has a ways to go, but considering the limited manpower, it's really quite impressive. We'll have to see what "killer apps" are on the horizon....

The weird thing about Haskell is that it's growing so fast: when someone writes a large and useful library, it gets absorbed into the standard libraries, and the language keeps compounding in power. Java has boatloads of enterprise libraries (and new releases of the language add certain functional programming features to try to clean things up), but I'm curious how big the codebase, libraries included, would be for comparable tasks in Haskell and in Java.

Interesting (flawed) experimental data

See the Computer Language Shootout for some fast, elegant hacks. Even for involved code, Haskell still packs more expression into fewer tokens.

Other notes

When Haskellers say that Haskell lets the programmer write more modular programs, what they really mean is that Haskell lets you write micro-modular programs. It provides the tools to write code that can be reused anywhere, and it encourages and allows for smaller pieces of code that you can then glue together. It allows more levels of modularity: not just "subroutines" and "classes" and "iterators" but functions from data to data, which can be used in any situation, even on very large data structures, like gigabyte files. (Saying that Haskell's laziness allows you to use infinite data structures sounds like some kind of one-trick pony, as if it must not be useful in "normal" programming; but "enormous files" are just a special case of effectively unbounded data, and somehow that sounds more useful.)
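Here is a minimal sketch of that point; numberLines is my own illustrative example, not anyone's library. The same pure function from data to data works on ten lines or ten gigabytes, because interact reads its input lazily and the pipeline streams:

-- numberLines is an invented example: a pure pipeline from data to data,
-- glued together from lines, zipWith, and unlines.
numberLines :: String -> String
numberLines = unlines . zipWith label [1 ..] . lines
  where
    label :: Int -> String -> String
    label n s = show n ++ ": " ++ s

main :: IO ()
main = interact numberLines  -- streams stdin to stdout lazily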


  1. For a given set of programs (e.g. the program that prints the digits of pi, the program that enumerates all of the rationals without repeats, the program that prints the first 8,000,000 primes, the program that implements the spec for a specific military project, etc.). ↩︎

  2. Unless they are whiny freshmen in a functional programming language class or the military. "Waaa! Waaa! It doesn't look C-like or Pascal-like!" ↩︎
