XO: The Next Lisp Machine?

I have ordered a Quanta XO-1 (One Laptop Per Child) on the Give One, Get One deal, where you pay $400 plus shipping and you get an XO and donate one to the project ($200 tax deduction). This is just so cool that I have to have one. And I need a lightweight box that can do email and browsing that I can carry around easily. There are other good options, but the XO is so novel and interesting! It weighs just over 3 lb, runs on 2–3 watts with an amazing lithium ferro-phosphate battery, is physically extremely durable, waterproof, and dirtproof, and has a great (but small, 7.5-inch) screen. There’s no disk or CD/DVD drive, but you can add them externally. And if the OLPC project is a big success, this may be the platform of the next generation of hackers. They are aiming to bring the price down to $100.

http://wiki.laptop.org/go/Hardware_specification

After watching a talk given at Google by Ivan Krstić, I got more and more excited hearing about the hardware and the software. A lot of hackers (14, apparently), at least some of whom are famous superhackers (e.g. Jim Gettys), were involved in putting together the software. They have thought of and taken care of a huge number of issues. Perhaps I’ll end up contributing open source code to the project someday, although at the moment I’m too busy for that to be feasible.

The Give One Get One deal is only available for another 7 days. It may be hard to get them after that, since they are going to be sold only to schools, other educational institutions, and governments in the third world. So if you want one, don’t hesitate:

http://www.laptopgiving.org/en/index.php
www.xogiving.org

The only thing I’m worried about is that David Pogue in the New York Times says that the XO’s keyboard is too small for an adult to touch-type on. I asked around, and Luke Gorrie (of SLIME fame) says that it’s frustrating at first, but he learned to touch-type on it at high speed. (I was going to walk over to the Media Lab and try one, but I have no time in the next seven days, and I’m just too convinced now.) And so many people seem to get along fine on much smaller keyboards, such as those on the BlackBerry or smart phones (not touch-typing, obviously, but good enough for email when I’m on the road). So I’ll chance it. Other drawbacks: two minutes to boot (hey, Lisp machines booted slowly), and switching between apps is “poky” (but the apps themselves are fast).

In a previous post, I mentioned capability architectures. The XO’s “Bitfrost” is not a capability system, but it does deal with the issue of mutually suspicious protection domains. Given how many XO-1s there will be if the project succeeds, it will be an obvious target for malware, and I think Bitfrost will be a big help there. Bitfrost works by dividing up protection domains at a coarse level, whereas I’m more interested in very fine-grained schemes. See:

http://en.wikipedia.org/wiki/Bitfrost

General info:

http://en.wikipedia.org/wiki/OLPC_XO-1

Main web site, but it seems to be down at the moment:

laptop.org

David Pogue’s review in The New York Times, both written and video. Pogue does lots of product reviews and I have a lot of confidence in his evaluations (and I love his books).

http://www.nytimes.com/2007/10/04/technology/circuits/04pogue.html?_r=1&oref=slogin

http://video.on.nytimes.com/?fr_story=6ffd976ed367bacae4171dd4999d36431c84b0f5

There’s plenty more if you Google for “OLPC”.

The XO does everything in Python. You can see all the code with a single keystroke (it shows the code of whatever is running), and you can even modify the code. In the video, the speaker (Ivan Krstić) is asked “Why not just use Lisp or Smalltalk?”, and the questioner cites Lisp machines! See, our influence is still there! Krstić replies (of course he, too, knows what a Lisp machine is) that doing everything in Python “comes close to the general Lisp machine idea”, except that the language doesn’t go all the way down to the metal (it’s based on Linux). They are also shipping Squeak (a modern Smalltalk). They used Python because of the “size and momentum” of the community, and because he feels that Lisp has a steeper learning curve than Python does for kids. I won’t object to those reasons.

Hey, Python, Lisp, what’s the difference? :) So, strange as it is to say, maybe this is the new Lisp machine!

Daily Grommet

Daily Grommet is a web site that tells you about one cool product every day. There’s a video showing all about it, as well as a written description, and you can click through if you want to buy it. Some of the products tend to be oriented towards women, but not all of them. The “product” is also sometimes a worthy charitable organization. Jules calls each product a grommet.

The company was founded by my friend, Jules Pieri. She and her team carefully test each product. In fact, Jules once recruited me to help test out a new kind of American caviar. (She provided champagne as well; it’s a tough job, but somebody’s got to do it.) The only other member of the team I’ve met so far is Nataly Kogan, the Chief Community Officer and a great entrepreneur as well. I’m looking forward to meeting the rest of the team; their office is very close to where I live.

I have bought four or five products through Daily Grommet, some as presents for my wife (shh, don’t tell her yet!) and friends, and some for myself. The coolest one I’ve bought so far is the “foodloop” Trussing Tool, which is like a reusable cable tie (sorry, I’m an engineer) that you can put around food, instead of using twine. I gave these to my wife, and my friends Ed and Scott, all of whom are experienced cooks, and they all liked them a lot. You can buy past grommets (click on “Past Grommets”).

If you know of any product that would make a good grommet, please send mail to them.

NoSQL Storage Systems Never Violate ACID. Never? Well, Hardly Ever!

Everybody agrees that the new “NoSQL” storage systems “aren’t ACID”, or “don’t have transactions”.  This is true in a sense, but without knowing the sense, it doesn’t tell you much.

In one sense, they do have transactions that are limited to having one operation per transaction.  One operation could mean reading, writing, incrementing, or doubling the value associated with a particular key.  For example, look at an “insert” operation in a key/value store.  Each operation acts on only one data object.  Are these single-operation transactions ACID?  Let’s check each criterion (a concrete sketch of such a store follows the list):

A means “atomic”: either all the operations happen, or none of them happens.  Well, there’s only one operation.  The key-value store does guarantee that either the insert happens, or it doesn’t.  So the transaction is atomic.

C means “consistent”.  In relational database systems, people use this to mean that various interesting consistency guarantees are maintained.  But here, we don’t have to worry about such things as referential integrity, since there are no references to have integrity; that is, there are no foreign keys.  So it’s consistent.

I means “isolated”: concurrency is never seen by the application.  The system behaves as if each operation happened at a particular, distinct moment in time.  The key-value stores all make this guarantee.

D means “durable”: before the application is told that the transaction has completed successfully (i.e. committed), all of its side-effects must be in stable storage, so that a failure (such as a crash of a process or a whole node) won’t lose the results of those side-effects.  Here, a transaction is only one operation, but that doesn’t change anything: the system does provide “durability”.  (Some systems might cheat by not actually forcing data to stable storage, but we’re not talking about those.)
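To make this concrete, here is a minimal sketch of such a store in Common Lisp. This is my own illustration, not any particular product’s API: the names kv-put and kv-get are hypothetical, and the lock comes from the bordeaux-threads library. The single lock is what makes each operation atomic and isolated; a real system would also force each write to stable storage before acknowledging it, for durability.

;; A toy key/value store in which each operation is its own "transaction".
(defvar *store* (make-hash-table :test #'equal))
(defvar *store-lock* (bt:make-lock))

;; One lock serializes all operations, making each one atomic and isolated.
(defun kv-put (key value)
  (bt:with-lock-held (*store-lock*)
    (setf (gethash key *store*) value)))

(defun kv-get (key)
  (bt:with-lock-held (*store-lock*)
    (gethash key *store*)))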

So it appears to be ACID!  OK, something has got to be wrong here, right?

Right.  Where I tried to pull the wool over your eyes is in the definition of “C”.  “C” doesn’t just mean conforming to the database’s integrity constraints.  It means that the system returns the correct answer!  That is, the response to any operation is consistent with some state that the database could be in.  There’s more than one such state when there are concurrent operations going on, which might be ordered in more than one way, depending on how the concurrency system works.  So it’s clearer to think of “C” as meaning “correct”.  (In the famous Gilbert and Lynch paper that “proves the CAP theorem”, that’s what they mean by “C”.)

The “NoSQL” storage systems are guaranteed to return the correct answer only if there are no partitions in the network.  But if there are (or were, e.g. at write time) partitions, they can return things like “two replicas say the value is X, but another replica says that the answer is Y”, and the application has to try to make sense of and cope with that.  That is not “C”.  This is usually called “eventual consistency”: if the partitions were to heal, and the system deferred accepting new operations until all in-progress operations finished, and something went over the whole database to fix up any inconsistencies that happened during writes, then the system would become fully consistent, and would behave correctly until the next partition.

So the “NoSQL” systems are ACID, provided that you accept that a transaction can perform only one operation, and that the only thing that gets in the way of being ACID is a network partition during which the system is called upon to perform operations.

“Partition” is a somewhat slippery concept that I will examine in an upcoming separate essay.  But the basic idea is that there are at least two nodes that cannot send messages to each other.  It’s important to know that if a node in your system is down, that’s considered a partition: it’s as if that node were disconnected from the network.

This also shows that the name “NoSQL” doesn’t explain everything that’s important about these systems.  But you can’t pack a whole lot into a short, punchy name, so I’m not really complaining.  (I do the same thing with the names of my blog essays; mea culpa.)  You just have to keep in mind that the lack of SQL is not the only important thing.

Adventures trying to use open-source libraries

It seems that whenever I want to use an open source library, I run into problems because of various kinds of dependencies. I’ve run into this with Java and C++ libraries. Most recently, I had one of these adventures with a Common Lisp library.

Babel is an excellent open-source Common Lisp library for converting between string representations, such as the different encodings of Unicode, as well as EBCDIC characters and so on. It’s portable and efficient. We use it to decode UTF-8 into full 32-bit Unicode.

Recently we suspected that it might be running more slowly than we’d like, and that we might be able to get a measurable speedup by optimizing it. So I thought I’d write a simple benchmark and try some changes that might speed it up, such as adding fixnum declarations.
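For example, here is the kind of change I had in mind. This is a toy loop of my own, not Babel’s actual code; the declarations tell the compiler that it may use fast untagged fixnum arithmetic in the inner loop:

;; A sketch of an inner loop with the sort of declarations that can
;; speed up byte-crunching code in Common Lisp.
(defun sum-octets (octets)
  (declare (type (simple-array (unsigned-byte 8) (*)) octets)
           (optimize (speed 3) (safety 0)))
  (let ((sum 0))
    (declare (type fixnum sum))
    (loop for octet of-type (unsigned-byte 8) across octets
          do (setf sum (the fixnum (+ sum octet))))
    sum))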

Babel includes a regression test. Obviously, I needed to make sure any speedups that I put in would not break Babel, so running the regression test would be important. This is where the fun began.

Babel’s regression test depends on a Common Lisp unit test framework called stefil (which I’d never heard of). I found stefil on the web, but there was no source distribution. The only way to get it was to use darcs.

The machine on my desktop uses an old version of Linux. (The reasons are too boring to go into here, and it’ll be upgraded soon.) It does not have darcs already installed on it. No problem, I said to myself, and proceeded to obtain darcs. It turns out that darcs comes in source form, so you have to compile it.

Darcs is written in Haskell, and my Linux machine does not already have the Haskell compiler. So I downloaded the compiler (GHC), and tried to compile it. But I got weird error messages about missing C header files. I could not figure this out, because the build mechanism for GHC is rather complicated, using tools that I would have had to figure out, etc.

Finally I gave up, and found someone with a more modern version of Linux that already had darcs. He got stefil for me.

Next, I found that stefil depends on several other Common Lisp libraries: Swank, alexandria, iterate, and metabang-bind. We already had Swank (which is part of Slime), and alexandria, so I found and downloaded iterate and metabang-bind.

I got error messages trying to compile stefil. It eventually turned out that stefil depends on a non-standard version of Swank, and will not compile with any other version. Since I did not need the feature that integrates stefil with Slime/Swank, I commented out the dependency on Swank in stefil’s asdf file (which is like a makefile).
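The change looked roughly like this (a simplified, hypothetical system definition, not stefil’s real one):

;; stefil.asd, with the Swank dependency disabled.
(asdf:defsystem :stefil
  :depends-on (:alexandria
               :iterate
               :metabang-bind
               ;; :swank   ; commented out: only needed for Slime integration
               )
  :components ((:file "stefil")))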

Compiling stefil still failed, because it uses the iterate library, and iterate includes a Common Lisp code walker; in the version of Clozure Common Lisp that we use at ITA, assert macroexpands into a non-portable form that the code walker does not understand. This feature of assert was added for us in order to make the code-coverage tool know that it’s OK that we do not cover assert forms, but, of course, iterate’s code walker didn’t know about it. (A code walker must know about every Lisp “special form”.) I fixed this by learning how the code walker is organized, and extending it to treat assert as a primitive special form.

Finally, the Babel regression tests turned out to have bugs. They depend on char-code always returning a fixnum, which the Common Lisp standard does not guarantee. I had to fix various things and comment out other things in order to make the unit tests work properly with Clozure Common Lisp (which was not at fault).
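The standard promises only that char-code returns a non-negative integer below char-code-limit, which need not be a fixnum. So a portable test has to be written something like this (my sketch, not the actual Babel test code):

;; Portable: the result is a non-negative integer below char-code-limit.
(assert (typep (char-code #\A) `(integer 0 (,char-code-limit))))

;; Non-portable: nothing in the standard guarantees this.
;; (assert (typep (char-code #\A) 'fixnum))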

After all this, I was able to run the regression test, and so I could proceed to make changes to Babel with some assurance that I didn’t introduce bugs. But it all took so much time that I fell behind in my work schedule, which was, to say the least, annoying.

The problem partly lies with my using such an old version of Linux, but this kind of problem seems to be common with open source libraries in all languages and domains. If they’re not used very widely, and not maintained, they often don’t work well together.

What Programming Language Do People Speak Well Of?

I usually don’t write blog entries that are merely pointers to someone else’s blog entries, but I’m making an exception this time. Lukas Biewald, writing on the Dolores Labs blog, posted an entry called The Programming Language With The Happiest Users.

He measured Twitter “tweets” that mention certain programming languages, and ascertained which were positive. I’m particularly interested because Lisp came in second place.

Interpreting this as “the programming language with the happiest users” depends on several tacit assumptions that seem dubious at best. We don’t know that the people writing these comments are actually users. The number of tweets sent about a language is not uncorrelated with the language itself; I bet there are fewer COBOL programmers using Twitter than Perl programmers. And not everybody tweets about how much they like or dislike their language to the same degree. He knows this and mentions some of these problems at the end of the post, so I’m not saying this to criticize him.

Yes, the title of the blog post is sort of misleading, but written to get the attention of readers. I cannot criticize him for that either, since I do the same thing. Sometimes it backfires; a lot of people seem to have seen my post named “Why Did M.I.T. Switch from Scheme to Python” without getting my points, which were (1) they didn’t make a high-level decision to switch languages, but rather this fell out as an end consequence of decisions that had nothing to do with languages, and (2) this is only for the freshman core courses, not the whole curriculum.

It’s hard to draw any solid, meaningful, or useful conclusions from this research, but I still find it interesting and entertaining.

Programming with Concurrency

New high-speed computers will have more and more cores as the years go by, and the ramp-up has started and is going very quickly.  To take advantage of those processors, some programs will need to use interesting (complicated and novel) concurrency.

But the history of concurrent software is littered with approaches that just turned out to be too hard to use, and the software was slow to develop and very hard to debug.  Now that we’re all in the same boat, how do we solve the software problem?

Many language designers think that the answer lies in pure (side-effect free) programming.  The best known, and quite practical, languages that are pure are Haskell and Erlang.

But many new languages are arriving based on the idea that you should use mostly side-effect-free code, and then when side-effects are needed, use transactions.  This is at least a trend if not a movement or revolution.

When Guy Steele came back from the JAOO Conference, I asked him for a quick report, and he sent me this (very slightly copy edited, used with Guy’s permission):

I was stunned by the end of the first day of JAOO 2008 when I realized that Anders Hejlsberg had given a plenary talk on C#, I had given a talk on Fortress, Bill Venners had given a talk on Scala, and Erik Meijer had given a talk on functional programming, and we had all delivered approximately the same message to this object-oriented crowd: the multicores are coming—no, they’re here—and the only plausible way to deal with them in the long run is to rein in the side effects inherent to the OO point of view and move as much as possible to a functional programming style with mostly-immutable data structures and implicit parallelism.

I am very excited by the new Clojure language, which is a dialect of Lisp based on exactly these same principles.  Rich Hickey apparently wasn’t at JAOO, but would have found friends there!

Normally I don’t try to learn a language unless I’m about to actually program in it.  But it’s worth learning a language when you pick up fundamental new ideas that might be helpful (or just interesting).  Haskell is like that (thanks, Alan Bawden, for letting me know).

If you might have to write highly-concurrent programs in the future, I recommend that you keep your eyes on all this.

Rumors of ITA Acquisition are Just Rumors

Many of my friends have been asking me about stories they’ve heard regarding Google purchasing ITA Software. It’s only a rumor.

Here’s what happened. On April 21, Bloomberg published a story that, citing only anonymous sources, claimed that Google was “in talks” with ITA. Many, many other web sites, including news agencies, blogs, and so on, repeated this story. But they all stem from that one Bloomberg story.

Since then, there has been no further information about this whatsoever.

(Of course, nobody is talking. If a company denies false rumors about news like acquisitions, but does not deny true rumors, anyone could figure out whether the rumors were true or not. To keep such events secret, the only thing to do is remain silent, no matter what.)

The “Worse is Better” idea and the future of Lisp

The tag line for the International Lisp Conference 2009 was Lisp: The Next 50 Years. I am very interested in the future of Lisp, and hope to be one of many participants in creating that future. A widely-read paper from 1991 introduced the world to the phrase and philosophy called Worse is Better, and says that this philosophy should be used for the design of the next Lisp. What does that mean, and what parts of the argument still apply and should guide us?

Richard Gabriel and Worse is Better

Richard P. Gabriel is a brilliant computer scientist, probably best known for his company Lucid, Inc., which produced an excellent Common Lisp implementation, and later developed a sophisticated software development environment called Energize.

He has written extensively about the process by which new technological ideas move to the marketplace. His ideas about this are unique and very much worth learning. The most well worked-out version of his thoughts is in his book, Patterns of Software, which I recommend highly.

His first essay on this topic is called Lisp: Good News, Bad News, How to Win Big, originally published in 1989. It’s primarily about why the Lisp language was not succeeding as a vehicle for the delivery of practical applications. It examines Lisp’s successes and apparent failures, and suggests how to improve things. I find it very accurate and thoughtful, and it holds up well over time.

The part that got the widest attention was Section 2.1, “The Rise of Worse is Better.” Jamie Zawinski, then of Lucid, forwarded this section to many people, and soon it was redistributed very widely. It became, in effect, its own paper, generally known as Worse is Better. Do a web search on that phrase and you’ll find all kinds of commentary.

It characterizes a school of design which Gabriel attributes to MIT and Stanford and calls “the right thing”. He contrasts this with what he calls the “worse-is-better” philosophy, which he says “is only slightly different”. Many commentators have oversimplified and overstressed the dichotomy, and so I strongly recommend that you read the original four points that he associates with each philosophy. You’ll see that his characterization is careful and nuanced.

The phrase “Worse is Better” is rather over-the-top, and I think some people have misinterpreted the point because of that name. I sympathize with Gabriel. If you follow my own blog, you’ll see that I use somewhat provocative names for the articles, in order to attract readers. Sometimes it backfires. In my case, I used “Why Did M.I.T. Switch from Scheme to Python?” for an entry whose point was that the switch is not what’s important. But perhaps since it was the title, people commented mostly on the language issue! Oops. Some of the commentary on Worse is Better gets confused and thinks the two philosophies are simple opposites, but it’s much more subtle than that.

Worse is Better contains a story, which starts: “Two famous people, one from MIT and another from Berkeley (but working on Unix) once met to discuss operating system issues.” He wrote the story based on an oral account from me. In fact, the “MIT guy” was me, and the “New Jersey guy” (from Berkeley; see the paper for why) was Bill Joy.

His account is basically right. As for the phrase “two famous people”: Bill Joy is far more famous than I am (see the current best-selling book Outliers, for example). Neither of us said “it takes a tough man to make a tender chicken” (a line from an old TV commercial), as far as I remember. If you want to know about the issue that he spells “PC-loser-ing”, see the excellent 1989 paper PCLSRing: Keeping Process State Modular, by my friend Alan Bawden. It has been described as “an unpublished but influential note by Bawden”, and has been widely cited. (The general concept of PCLSR has to do with forcing a thread of execution to be X-consistent, for some level of abstraction X, even if the thread is operating below the level of X.)

Gabriel’s section ends: “But, one can conclude only that the Lisp community needs to seriously rethink its position on Lisp design. I will say more about this later.”

What does this mean for the future of Lisp?

The paper is about Lisp, but if we look carefully, it doesn’t bring the “worse is better” point to bear on Lisp very much.

Section 3.6, “The Next Lisp”, starts: “I think there will be a next Lisp. This Lisp must be carefully designed, using the principles for success we saw in worse-is-better…. The kernel should emphasize implementational simplicity, but not at the expense of interface simplicity. Where one conflicts with the other, the capability should be left out of the kernel.”

He goes on: “Some aspects of the extreme dynamism of Common Lisp should be reexamined, or at least the tradeoffs reconsidered.” He gives an example of correct but undesirable Lisp code, in which a function redefines top-level functions.

It’s hard for a compiler to optimize code in the presence of this kind of runtime behavior. There’s no need to write programs this way. Lisp has better ways to do what this code fragment is trying to do, and any competent Common Lisp programmer knows that and knows the proper way. Therefore, the next Lisp should consider omitting this capability.
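Gabriel’s exact fragment isn’t reproduced here, but the flavor is something like this sketch of mine: perfectly legal Common Lisp in which an ordinary function quietly redefines a top-level function at run time, so the compiler can assume almost nothing about calls to it.

(defun frob (x)
  (* x 2))

(defun reconfigure ()
  ;; Legal, but it changes FROB out from under every caller.
  (setf (symbol-function 'frob)
        (lambda (x) (* x 3))))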

Reducing extreme dynamism, way out at the edges, sounds promising, and should be considered carefully. But this specific example is the only one he gives!

The rest of the section is about how to layer the implementation. All of this is great, but it does not seem to have anything to do with “Worse is Better”!

The next section is “Help Applications Writers Win”, and clearly the right thing philosophy makes things better for application writers than the worse is better philosophy, all other things being equal. The point of the paper is that all other things aren’t equal because the worse is better philosophy should help get the system done on time and help it spread. But that’s just the overall thesis of the paper, not specific to Lisp at all.

Why does this paper spend so much time on the Worse is Better philosophy, when it bears so little on Lisp?

I’ll go out on a limb and speculate that this was very much on Gabriel’s mind at the time. He felt it was relevant to Lisp because MIT/Stanford people were frustrated that Unix seemed to be ignoring lessons and techniques that had been developed, and widely used, over so many years. He might even have been thinking of the competition between his own Lucid Lisp product and its competitors. But I ought not put words in his mouth.

Gabriel later wrote much more about the Worse is Better philosophy. He famously conducted a debate with himself, writing the other side under the pseudonym “Nickieben Bourbaki” (an allusion to Nicolas Bourbaki). These include Worse is Better is Worse, Is Worse Really Better?, and even more.

What do you think: do the ideas in the Worse is Better series of papers bear on the question of the future of Lisp? I’d appreciate it if you’d take a look at Gabriel’s paper before answering!

P.S. Dept. of Fair Attribution: I borrowed some phrases of text from various Wikipedia articles. Look here for more general discussions about the future of Lisp.

Using Solid State Disks on Linux

Solid-state disks (SSDs) are getting less expensive, faster, and larger. I just bought a lightweight laptop with 128GB of SSD instead of a disk. Just to see what I’d find out, I poked around on the web looking for information on how to use SSDs under Linux. Keep in mind that I am not an expert on SSDs, nor on Linux! Bearing that in mind, here’s what I found:

Tuning Linux for SSDs

Here’s a quick summary of Tom Bryer’s “Four Tweaks for Using Linux with Solid State Drives” (September 2008):

If you’re using Linux with SSDs, it’s recommended to use the noatime option to turn off writing the “last accessed time” attribute of files. This avoids writes, increasing the lifetime of the SSD. (As root, edit /etc/fstab and change “relatime” to “noatime” on SSD partitions. This might only apply to ext3.)
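For example, an fstab line might end up looking like this (the device, mount point, and other options here are purely illustrative; keep whatever yours already has):

/dev/sda1 / ext3 defaults,noatime,errors=remount-ro 0 1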

You can create a tmpfs partition (in RAM) and make Firefox use it for its cache, to reduce disk writes. Edit the file /etc/fstab and add:

tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0

Then, in Firefox, open about:config, right click in an open area, create a new string value called:

browser.cache.disk.parent_directory

and set it to /tmp.

If you write a large file to the disk, Linux can stall other applications’ attempts to write, potentially for a long time. To greatly reduce the pause, change the I/O scheduler for SSDs. Do:

cat /sys/block/sda/queue/scheduler

to see the current scheduler for a disk (sda, in this case) and the alternative options. You’ll probably see four options; the one in brackets is the one currently in use by the disk specified in the command:

noop anticipatory deadline [cfq]

Now do (as root):

echo deadline > /sys/block/sda/queue/scheduler
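Note that this echo takes effect immediately but lasts only until the next reboot. If the deadline scheduler works well for you, one common way to make the change permanent (assuming a GRUB-style boot setup) is to add elevator=deadline to the kernel’s boot parameters.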

File Systems for SSDs

What’s a good Linux file system to use for SSDs? A lot of people have asked this on the web and gotten very few straight answers. There is jffs2, but everybody seems to think it’s lousy. Some people consider ext2 better than ext3, because ext3 is a journaling file system and so does more writes. However, journaling keeps the file system’s metadata consistent after a crash, so it’s quite valuable. Surely there’s a lot more to say than this, but I wasn’t able to find it.

SanDisk has announced ExtremeFFS. It looks like this is not a Linux file system; rather, the hardware just acts like a disk. If so, one could take advantage of this technology from non-Linux machines as well.

SanDisk says that ExtremeFFS uses a non-blocking architecture in which all of the NAND channels of the SSD can behave independently, so it can read and write at the same time. They also claim that it can speed up random writes by 100x! How they do it is explained in this article by Chris Mellor: they avoid the need for erases in many cases, and there is a garbage collector.

I found a comment saying that “this sounds like what Fusion-io is doing on the ioDrive.” Fusion-io makes very high-speed SSDs.

Also

One person points out that your SSD may well outlive your laptop, or that by then you can replace it with a larger, cheaper, faster one (assuming you can get your data off before it’s too late). But avoiding journaling is also good for speed, not just longevity.

It’s good to align your file system on an erase-block boundary, especially if you’re using RAID, so that a whole stripe can be copied efficiently. You want your partition aligned on a 128K boundary. Theodore Ts’o’s blog item provides vast technical detail.

The International Lisp Conference 2009 Succeeded!

Last December, I was invited to be general chair of the International Lisp Conference 2009. Since then I have done a great deal of work, and it has finally all paid off. The conference ran from last Sunday to Wednesday, and it went perfectly! I can hardly believe it. And we got at least 215 attendees, which was great! (I had planned for 175; apologies to those of you who didn’t get a tee shirt and a tote bag.)

The only unpleasant surprise was that two of the speakers were not able to show up. However, we reallocated their time for more lightning talks. These are five-minute talks on any topic bearing on Lisp. Three of them had been approved by the program committee and are in the proceedings. The program committee then agreed that we could post a sign-up sheet and let anybody talk about anything appropriate. We ended up having about twenty-five of them. They were almost all great! We learned about fascinating new open source libraries, fun applications, great anecdotes, and so on.

The lightning talks make the whole conference more participatory, rather than just “we give the talks, and you sit there and listen.” Although I’m sorry that the two speakers were unable to present their papers, the lightning talks were great. I recommend that other conference organizers in the future consider allocating plenty of time for such talks.

The Great Macro Debate went just as I had hoped. Lisp’s macros make the Lisp language extensible. It’s only because of macros that Lisp has stayed sufficiently up-to-date to still be a relevant language after fifty years of life. And macros are one of Lisp’s most distinguishing features, now that so many Lisp ideas have been adopted by other languages.

Earlier this year, I was having lunch with my former co-worker, Jeremy Brown. He had been one of the senior engineers on the Polaris project at ITA Software, and we had worked together closely. (He left to start his own company, Rep Invariant.) We were talking about the use of Lisp in Polaris, and specifically about Lisp macros. To my surprise, Jeremy opined that having macros in the language was a net drawback! Many people have objected to macros, but Jeremy really knows all about macros; he’s a very proficient Lisp programmer, and has seen how we use macros in Polaris.

So I had the idea of having him debate someone about this at the Lisp conference. Guy Steele, as program chair, took over the idea, and found people to be in the debate. Pascal Costanza, who is one of the deepest thinkers about Common Lisp these days, was Jeremy’s prime opponent. Guy Steele himself was Pascal’s “second”, and Dick Gabriel was Jeremy’s. I moderated.

Jeremy prepared very thoroughly, with slides that presented all of his attacks, and were also very funny. The debaters both made important real points, and kept the whole thing hilarious. There was a great deal of contention and disagreement, to the point where audience members, unable to contain themselves, started shouting out questions and comments. Indeed, I felt the same way myself, and misused my privilege of having a microphone to participate in the debate. Finally Dick Gabriel said, “OK, Weinreb, enough of this. Sit down at the table, and I’ll be the moderator!” I replied, “Oh, thank you! Now that I’m a panelist, I can say what I want into this other microphone!” Sadly, we didn’t videotape this, but we all had a great time.

David Moon’s talk about how to do macros for a language with syntax was very innovative, to the point where, in his introduction, Dave said “some of you may think this is mad scientist stuff”! It’s certainly fascinating, and the people who had worked on Dylan (and therefore grappled with the same problems) were particularly interested and felt that it looked very promising.

Tom Sgouros performed his one-man, one-robot show: “Judy, or, What Is It Like To Be A Robot”. I had seen this once at ITA (Tom works at ITA) and knew that it was perfect for this audience. It’s about the concept of intelligent robots, and the nature of consciousness, and it’s also very clever and funny. Tom did a wonderful job.

I’ve been catching up on my sleep (really). But now I’m busy again! This year’s family opera show, The Weaver’s Wedding, opens tomorrow. I’ve been involved in the North Cambridge Family Opera company for about ten years. While the conference was going on, my wife Cheryl was working very long hours, day and evening, getting the set and props finished, teaching the stagehands what to do, and so on. (As you can imagine, it’s been rather crazy around here at home, with both of those things going on at once!) I hope to blog more about the conference and papers in the future. In the meantime, I expect some of the attendees will write their own descriptions.

Thanks again to all our sponsors, who made the relatively low registration fee possible. Special thanks to ITA Software, our Platinum sponsor, and to my wonderful boss, Sundar Narasimhan (CTO and Chief Architect of Polaris), for allowing me to take part-time leave from my work at ITA in order to run the conference.