I had a great time at the European Common Lisp Meeting (ECLM) in Amsterdam, April 19-20, 2008. I met many of the important people in today’s Common Lisp world, an almost completely new generation of folks compared to 15 years ago. The papers were excellent, and demonstrated that Common Lisp is still a vibrant and uniquely powerful language. (I’m writing this on the plane back home, on my OLPC laptop, as I learn to touch-type with my big hands on the child-friendly little keys.) Arthur Lemmens and Edi Weitz did a great job organizing and running the meeting. Everything went entirely smoothly and I felt that everyone enjoyed it very much.
Jeremy Jones of Clozure Associates demonstrated InspireData, an educational application that lets you analyze data and draw conclusions. The user interface and interaction design are superb. It performs very well, even on weak, old PC’s, which matters since that’s what many schools actually have. It has gotten excellent reviews and sales, and is widely used in schools.
The user would have no reason to know that it was written in Lisp. No Lisp is exposed to the user. They used LispWorks, since it runs on all of their target platforms (including Windows 95, as well as more modern Windows and MacOS X), it has a good interactive development environment, it provides a portable library for accessing the platform’s native menus and other widgets (called CAPI), and had favorable licensing terms. The rest of the graphics was done by calling the OpenGL graphics library, using a library from LispWorks. They found all of these technologies to work very well.
They wrote it as a contract job, building it to specs provided to them in a 200-page document. It took eight person-years of development plus two person-years of QA; the QA staff were brought on board from the very beginning. It’s 270K lines of Lisp code plus 470 lines of C.
The primary advantage of using Lisp is that they could produce a prototype in only two months, and then do incremental additions and refinements. You might ask, why was this necessary if they had a 200-page requirements document?
- The specs were vague. It would have taken a spec of well over 1,000 pages to really be unambiguous. (In my opinion, that’s absolutely normal for software.)
- The specs kept changing. (That always happens. Always!)
- In particular, the designers would change the specs because of what they saw the program doing. In other words, specifying it in advance would have been impossible in any number of pages. Design and implementation must be interleaved.
- Even if the spec were known in advance, the best implementation techniques are not initially apparent. Sometimes you have to get pretty far along in the implementation to realize that some architectural decision did not work out well.
Lisp is very malleable. Experience over the years has shown that even large Lisp systems are particularly easy to re-factor and even re-architect. (I have seen this over and over again.) In fact, Jeremy feels that they didn’t re-architect enough! (One usually hears the opposite lament.) He emphasized that iterative development — build, test, refine — was the only way to go and the only way they could have succeeded.
LispWorks performance in this application is excellent. As I could see in the demo, it is extremely responsive. Jeremy says he has never perceived any delay from the garbage collector. InspireData is a shining proof that real applications can be done just fine in Common Lisp.
Nicholas Neuss (IANM, U. Karlsruhe) presented FEMLISP, a system to do finite-element analysis (FEM). FEM is used for solving partial differential equations. It’s used to model things like convection, diffusion, viscous fluid flow, and so on.
I had thought this would just be a numeric library with some API, and wondered why doing it in Lisp would be helpful or make any difference. But it’s not like that at all. First, choosing a good way to run FEM is a hard problem. I only sort-of understand the issues, but I got a sense of it. There’s a big issue of how you “discretize” and solve the discrete problem. You also must make decisions about how to set the mesh. Second, you want an interactive environment that lets you display results graphically, and make small changes to the input spec and try again. Ideally, you’d like an expert system to make these decisions, but what he’s done so far he described as “rudimentary”.
He created a small domain-specific language to represent how to run a particular FEM problem. This lives in a Slime buffer and can be edited and recompiled quickly and conveniently. It can display the history of the iterations so you can see what’s going on and refine your input. You can insert Lisp code into the input, for computing or debugging.
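To give a flavor of what such an embedded DSL can look like, here is a minimal sketch, not FEMLISP’s actual language: the macro and keyword names (DEFPROBLEM, :DOMAIN, :MESH-SIZE, :LEVELS) are invented for illustration. The key point it shows is that because the spec is just Lisp, ordinary Lisp expressions can appear anywhere in it.

```lisp
;; Hypothetical sketch of an embedded problem-specification DSL.
;; DEFPROBLEM and the keywords below are invented names, not FEMLISP's API.
(defmacro defproblem (name &rest clauses)
  "Define NAME as a plist describing a problem; clause values are
evaluated as ordinary Lisp, so arbitrary computation can appear in a spec."
  `(defparameter ,name (list ,@clauses)))

(defproblem *laplace-on-square*
  :domain    :unit-square
  :mesh-size (/ 1 16)          ; ordinary Lisp inside the spec
  :levels    4)

(getf *laplace-on-square* :mesh-size)  ; => 1/16
```

Because the spec is data, a solver can read it with GETF, and a user can edit and recompile it in a Slime buffer like any other Lisp form.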
It runs on CMUCL, SBCL, LispWorks CL, and Allegro CL. He does graphics with OpenDX (Data Explorer), which was written by IBM and open-sourced. (He is considering switching to VTK.)
Why did he use Lisp?
- Dynamic typing worked out well
- Macros and a few reader macros let him make an embedded domain-specific language easily
- Dynamic testing and debugging (read-eval-print loop, etc.)
- High performance compiled code (as compared to Guile and other Scheme implementations that he tried)
- Common Lisp is stable; once you learn it, you know it
- There is no system/user dichotomy
- It only took 30K lines of code
- There were lots of useful libraries
Performance as compared to other available FEM packages is hard to determine for many reasons. For example, who chooses the benchmark? Can you find an informed third party to spec and judge the procedure? How do you know you’re not comparing apples and oranges? (These are all standard pitfalls of benchmarking.) Also, he has not spent much time on performance tuning anyway.
Nevertheless, he ran some basic comparisons against their company’s in-house FEM system, called M++, not only to measure speed but to make sure he got the same answers (he did). M++ turned out to be faster on small problems, but FEMLISP was faster on larger problems.
One reason for this is extremely interesting. Apparently there is a certain well-known technique for speeding up FEM. He had implemented it, but they had not yet done so. This illustrates the principle that higher-productivity software development can lead to faster performance! When considering the effects on performance of using Lisp, take this into account.
In other tests he found that FEMLISP is about as fast as a leading commercial product (FEMLAB) for comparable accuracy, and much easier to use.
So far he has not tried to encourage other people to use it, mainly for political reasons (his boss wrote M++). He used FFI for certain existing libraries (e.g. LAPACK).
Large Internet Systems
Stefan Richter of freiheit.com talked about “Using Common Lisp for Large Internet Systems”. His company, freiheit.com technologies (it means “freedom”, in the sense of not having to use the Microsoft platform any more!), has built many commercial web sites in Java. They have 60 developers, most using Java, but also a 6-person Common Lisp group. In an unusual twist, the manager of the group had to convince the reluctant programmers to use Common Lisp. Also, the clients had to be convinced that accepting a product in Common Lisp was OK. They have delivered one Common Lisp application so far, a social marketing tool.
By “large internet systems” he mainly means scalable web sites. Unfortunately, he has not actually built such a thing in Lisp yet. The talk suggests approaches to the problem, but he did not have actual experiences to report. He primarily prefers Lisp because he feels that Java is too verbose, and Ruby is basically like Lisp.
He explained a lot about how to build scalable and reliable servers (all of which I was very familiar with from my work at BEA and at ITA Software). Clusters, load balancers, a stateless app tier, separate DBMS’s for transactions and reporting, a shared memcached distributed cache, keeping functionally separate data on separate DBMS’s, plus one idea that’s still new or in the short-term future: shared-nothing database clusters using “shards” with replicated data for reliability. All of this is completely right, in my opinion, and I don’t think any of it is controversial.
Java has many good tools for doing such an architecture: Tomcat providing a framework for servlets/JSP’s, a memcached client, Hibernate for database access, and even Hadoop (a free MapReduce implementation).
How does Common Lisp compare? We have Hunchentoot (a sophisticated HTTP server), cl-memcached (a memcached client), cl-sql (to invoke relational DBMS’s), and two advanced tools for generating HTML: Weblocks (by Slava Akhmechet, I think), and UnCommon Web (by Marco Baringer).
He also suggests using cl-muproc (a library that provides Erlang semantics in Common Lisp, basically) which he feels could be a good basis for a Common Lisp MapReduce. I don’t know exactly what he has in mind here, but apparently he has implemented this.
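The computation pattern behind MapReduce is easy to show in plain, sequential Common Lisp; cl-muproc (or Erlang-style processes generally) would supply the distribution and fault-tolerance layer on top. This is a toy sketch with invented names, not cl-muproc’s API or anything Stefan showed.

```lisp
;; Toy, sequential illustration of the MapReduce computation pattern.
;; MAP-REDUCE and its argument conventions are invented for illustration.
(defun map-reduce (map-fn reduce-fn inputs)
  "MAP-FN: input -> list of (key . value) pairs.
REDUCE-FN: key, list of values -> result for that key."
  (let ((groups (make-hash-table :test #'equal))
        (results '()))
    (dolist (input inputs)                         ; map phase
      (dolist (pair (funcall map-fn input))
        (push (cdr pair) (gethash (car pair) groups))))
    (maphash (lambda (key values)                  ; reduce phase
               (push (cons key (funcall reduce-fn key values)) results))
             groups)
    results))

;; Word count over pre-tokenized "documents":
(map-reduce (lambda (doc) (mapcar (lambda (w) (cons w 1)) doc))
            (lambda (key counts)
              (declare (ignore key))
              (reduce #'+ counts))
            '(("lisp" "is" "lisp") ("is" "fun")))
;; "lisp" -> 2, "is" -> 2, "fun" -> 1 (pair order unspecified)
```

In a real system the map phase would run on many machines and the grouped values would be shuffled to reducers; the sequential version above is only the semantic skeleton.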
He doesn’t like existing conventional technology for generating web pages. Servlets clumsily embed HTML in Java code; JSP’s clumsily embed Java code in HTML. Using Common Lisp has many of the advantages of other popular languages that are being used to write HTML generation, such as Ruby, Groovy, and Python. Lisp has major advantages: you don’t have to write out files in order to compile things; CLOS is very useful, including the MOP; we can avoid the need for XML because programs and data use the same format; and of course macros help in all kinds of ways. (And, I was thinking, Common Lisp implementations typically execute code much faster than Ruby and Python!)
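The “programs and data use the same format” point is concrete: HTML can be represented directly as s-expressions and rendered by a few lines of Lisp. This is a minimal sketch in the spirit of the s-expression markup used by CL-WHO, Weblocks, and similar libraries, not any library’s actual API (attributes, escaping, and so on are omitted).

```lisp
;; Minimal s-expression HTML renderer (no attributes or escaping);
;; a sketch of the idea, not CL-WHO's or Weblocks' actual API.
(defun render-html (form)
  "Render a (:tag child...) tree to an HTML string; strings pass through."
  (if (stringp form)
      form
      (let ((tag (string-downcase (symbol-name (first form)))))
        (format nil "<~a>~{~a~}</~a>"
                tag
                (mapcar #'render-html (rest form))
                tag))))

(render-html '(:ul (:li "one") (:li "two")))
;; => "<ul><li>one</li><li>two</li></ul>"
```

Because the markup is just Lisp data, you get macros, functions, and quasiquotation for free where JSP gives you a separate template language.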
He talked about using continuations to save state between HTTP interactions. (Many papers have been written on this topic.) You want to be able to write a program in a normal style, that can say “do this web interaction” in the middle of any procedure; this makes flow of control much easier to understand. A continuation saves stack and execution state across interactions. He talked about Weblocks and how it uses continuations, as well as many of its other virtues (it sounds great, from what he said; I have yet to learn about it).
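The essence of this can be sketched with plain closures rather than true continuations: each pending interaction stores “what to do with the next request”, keyed by a session token. Libraries like Weblocks and cl-cont capture this state automatically from straight-line code; the explicit version below (with invented names ASK, ANSWER, CHECKOUT) just makes the mechanism visible.

```lisp
;; Sketch of continuation-style web flow using explicit closures.
;; ASK, ANSWER, and CHECKOUT are invented names for illustration.
(defvar *pending* (make-hash-table :test #'equal)
  "Maps a session token to the closure awaiting that session's next input.")

(defun ask (session prompt next-step)
  "Record NEXT-STEP to run on the session's next answer; return PROMPT."
  (setf (gethash session *pending*) next-step)
  prompt)

(defun answer (session value)
  "Resume the saved step with the user's VALUE."
  (funcall (gethash session *pending*) value))

;; A two-step flow; with real continuations this reads as straight-line code.
(defun checkout (session)
  (ask session "How many items?"
       (lambda (n)
         (ask session "Ship where?"
              (lambda (city)
                (format nil "~a items to ~a" n city))))))

(checkout "s1")            ; => "How many items?"
(answer "s1" 3)            ; => "Ship where?"
(answer "s1" "Amsterdam")  ; => "3 items to Amsterdam"
```

What continuation libraries add is exactly the removal of the nested lambdas: you write the flow top-to-bottom and the library captures the intermediate state for you.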
He feels that what’s needed now is to put it all together, and then write a good book about how to use it. He points out that Ruby on Rails would never have taken off without the excellent book. (I agree completely!) He encourages us to write books, and help develop the framework libraries.
This all led to a lively discussion of continuations, particularly persistent continuations, and how to best implement them for Common Lisp. Weblocks uses the cl-cont library. Marco Baringer said that cl-cont’s continuation states are extremely large, leading to performance problems, although it would not be hard to improve this.
We also talked about just how reliable a system like this needs to be. It often turns out that in exchange for a very small amount of unreliability, you can make big improvements in simplicity and performance. On a web site, it’s often quite acceptable to fail now and then, since the clients are human users who are much better at handling failure and retrying or finding alternatives.
Kilian Sprotte described PWGL, a tool for music composition and analysis. It is based on an earlier system called Patchwork, by Mikael Laurson in his 1996 doctoral dissertation, at IRCAM, the famous music lab in Paris. It is ten years old, and has always been in Common Lisp. Originally it was developed in MCL; now it’s based on LispWorks and runs on both Windows and Mac OS X. It’s now being developed at the Sibelius Academy in Finland. It’s currently in beta-test, downloadable, and version 1.0 is expected later this year.
According to the description on the web site: PWGL is a free cross-platform visual language based on Common Lisp, CLOS and OpenGL, specialized in computer aided composition and sound synthesis. It integrates several programming paradigms (functional, object-oriented, constraint-based) with high-level visual representation of data and it can be used to solve a wide range of musical problems.
It’s a visual dataflow functional language; in some ways it’s like doing Lisp by drawing boxes and lines.
It uses OpenGL for graphics, the PortAudio library for recording, playing back, and basic sound synthesis, and the libsndfile library for reading and writing files containing sampled sound. (It was interesting to see how many Lisp systems are capable of using non-Lisp libraries easily. This is another important counter-argument to the objection that Lisp has too few libraries.)
Embeddable Common Lisp (ECL)
Juan Jose Garcia-Ripoll described Embeddable Common Lisp. ECL is not just for embedding: it’s a full Common Lisp implementation. It’s a descendant of Kyoto Common Lisp, by Taiichi Yuasa and Masami Hagiya at the Research Institute for Mathematical Sciences at Kyoto University. Juan is the maintainer.
It is designed for portability. Rather than generating machine code for various processors, it generates C, and then allows the target host’s C compiler to produce machine language. This approach lets it take advantage of the target compiler’s optimizations, and specific knowledge of the target architecture. (However, compilation is not very fast.) All platforms these days include a free C compiler (even Microsoft). It makes minimal architectural assumptions: a pointer can be cast to an int, and C functions can be called with many arguments and with a variable number of arguments.
It supports a wide range of operating systems: Linux, NetBSD, FreeBSD, OpenBSD, Windows, Solaris, and Mac OS X.
The core and the Lisp interpreter are written in C; the rest is in Lisp. It borrows the Boehm-Demers-Weiser conservative GC, and provides CLOS with the PCL implementation. It uses native threads. It also contains a byte-code compiler and interpreter (instead of direct interpretation of Lisp as s-expressions). The implementation of subtypep uses the efficient method described by Henry Baker, and works with CLOS types.
It can build standalone executables and dynamically-linked libraries, and this is why it’s called “embeddable”. But it can be used as a regular Common Lisp implementation too, so don’t be put off by the name!
For more details, see his paper.
Kristoffer Kvello of Selvaag told us about House Developer, which is basically a CAD system for architects. It allows the architect to draw a very high-level drawing, and it takes care of filling in myriad specifics. It decides where to put windows and doors, and which way the doors should swing. It places electrical outlets and switches. It decides on wall types, wall offsets, wall junctions, heaters, fire exit paths, and so on.
There are many details, all of which must conform to regulations, company rules, and best practices. Doing all this by hand is costly, time-consuming, and error-prone. Automating it reduces errors, and lets the architect try lots of ideas and see their consequences promptly.
This is, in many ways, a classic rule-based expert system. They started writing it in 1994, using Knowledge-Based Engineering (KBE) technology of the time, which was primarily in Lisp. However, the rules are not like classic Artificial Intelligence rules; they are more like constraints. An example:
(define-attribute area (window)
  (* ?width ?height))
This defines a constraint that gets recomputed as necessary. These rules can use the full power of Lisp.
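To make the idea concrete, here is a hedged sketch of how a DEFINE-ATTRIBUTE macro like this could work: the formula becomes a function over the object, with each ?NAME variable rewritten into a lookup of that attribute. This is my own toy reconstruction (unhygienic, objects as plists, invented ATTRIBUTE helper), not Selvaag’s implementation, which also handles caching and recomputation.

```lisp
;; Toy reconstruction of a DEFINE-ATTRIBUTE facility; not Selvaag's code.
(defvar *attributes* (make-hash-table :test #'equal)
  "Maps (class-keyword . attribute-keyword) to a formula function.")

(defun attribute (object name)
  "Compute attribute NAME of OBJECT (here, a plist of primitive values)."
  (let ((fn (gethash (cons (getf object :class) name) *attributes*)))
    (if fn (funcall fn object) (getf object name))))

(defmacro define-attribute (name (class) &body body)
  "Register BODY as the formula for NAME on CLASS, rewriting ?FOO
into (ATTRIBUTE OBJ :FOO). Deliberately unhygienic: OBJ is captured."
  (labels ((walk (form)
             (cond ((and (symbolp form)
                         (plusp (length (symbol-name form)))
                         (char= (char (symbol-name form) 0) #\?))
                    `(attribute obj ,(intern (subseq (symbol-name form) 1)
                                             :keyword)))
                   ((consp form) (mapcar #'walk form))
                   (t form))))
    `(setf (gethash (cons ,(intern (symbol-name class) :keyword)
                          ,(intern (symbol-name name) :keyword))
                    *attributes*)
           (lambda (obj) ,@(mapcar #'walk body)))))

(define-attribute area (window)
  (* ?width ?height))

(attribute '(:class :window :width 3 :height 2) :area)  ; => 6
```

Since the formula body is rewritten, not interpreted, these rules compile to ordinary Lisp and can use the full power of the language, which matches what Kristoffer described.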
The core of the system is written in Allegro CL. There is a Java-based user interface, that sends S-expressions to the core. The core sends XML replies back to the user interface. It uses many available libraries: asdf, zip, cl-sql, cl-utilities, s-xml, aserve, Expresso Toolkit, and Screamer.
The Expresso Toolkit knows STEP (the Standard for the Exchange of Product model data) and EXPRESS (an ISO standard modeling language), which are important standards in the architecture industry.
Screamer supports “non-deterministic programming”: it does constraint satisfaction with mixed systems of numeric and symbolic constraints, based on a substrate that supports backtracking and undoable side effects.
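To illustrate the style of problem this addresses, here is a toy in plain Common Lisp. In Screamer one would write something like (all-values ... (an-integer-between 1 limit) ...) and let the substrate backtrack; the sketch below just enumerates exhaustively, so it shows the kind of search involved, not Screamer’s mechanism or API.

```lisp
;; Plain-Lisp sketch of a nondeterministic search problem; Screamer
;; would express this with choice points and backtracking instead of
;; nested exhaustive loops.
(defun pythagorean-triples (limit)
  "All (a b c) with a <= b <= c <= LIMIT and a^2 + b^2 = c^2."
  (loop for a from 1 to limit append
        (loop for b from a to limit append
              (loop for c from b to limit
                    when (= (+ (* a a) (* b b)) (* c c))
                      collect (list a b c)))))

(pythagorean-triples 15)
;; => ((3 4 5) (5 12 13) (6 8 10) (9 12 15))
```

What Screamer adds beyond this is constraint propagation: it can prune the search space before enumerating, and it can mix such numeric constraints with symbolic ones.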
The advantages of using Lisp for this system include:
- Interactive development, with fast recompilation, incremental changes, no need to constantly re-create the global state
- Break loops, with the ability to fix things and then restart
- Reader macros, so that we could customize the syntax
- Advice, so that we could customize behavior
- It’s easy to inspect the image to find out what to customize
- Extensibility in general
- handler-bind, for use in our test framework
- Many available relevant libraries, which worked fine
There have not been a lot of users so far, but they are planning to deliver it to a large customer soon.
Marc Battyani discussed a high-performance computer architecture, using Field-Programmable Gate Arrays (FPGA’s) that are programmed using a high-level special-purpose language, implemented in Lisp. He has a computer based on a Stratix II FPGA with memory and network. The FPGA has modules on it such as adders, multipliers, I/O pins, memory, and so on. Programming one consists of hooking the modules up to perform a particular special-purpose function. A problem with FPGA technology is that programming them is so hard; the novel feature here is to use a Lisp-based language, called HPCC, that compiles a high-level description into the FPGA’s program.
They have implemented two applications so far. One prices exotic financial instruments using Monte Carlo simulation. Currently, this kind of thing is done with grids of 10K-10K boxes. The other does multicast networking at 1 million messages per second. They plan to get funding, hire more Common Lisp programmers, and do more applications.
Ken Tilton talked about his Cells library, a dataflow extension to CLOS. The basic idea is that the values of slots are determined by formulas, like the cells in a spreadsheet. Cells tracks dependencies between cells and propagates values. He demonstrated widgets that grow and reshape graphics automatically.
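The spreadsheet analogy can be sketched in a few lines, though this is emphatically not Cells’ actual API: a cell holds a formula and a cached value, and reading it recomputes on demand. Real Cells tracks which cells a formula read, so a change propagates to dependents automatically; in this toy the invalidation is manual.

```lisp
;; Toy sketch of formula-valued cells; real Cells tracks dependencies
;; and propagates changes automatically instead of manual INVALIDATE.
(defstruct cell
  formula            ; thunk that computes the value
  (cache nil)        ; last computed value
  (valid nil))       ; is the cache current?

(defun cell-value (cell)
  "Return the cell's value, recomputing from its formula if stale."
  (unless (cell-valid cell)
    (setf (cell-cache cell) (funcall (cell-formula cell))
          (cell-valid cell) t))
  (cell-cache cell))

(defun invalidate (cell)
  "Mark the cell stale so its next read recomputes."
  (setf (cell-valid cell) nil))

(defvar *width* 3)
(defvar *area* (make-cell :formula (lambda () (* *width* 2))))

(cell-value *area*)   ; computed from the formula
(setf *width* 5)
(invalidate *area*)   ; in real Cells this step is automatic
(cell-value *area*)   ; recomputed with the new width
```

In Cells proper the formulas live in CLOS slots and the dependency graph is built as formulas run, which is what makes the GUI widgets in the demo reshape themselves without any explicit update code.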
Randall Pitts is looking for Lisp programmers to work on a speech understanding project that would help answer email, help call center agents, etc. They’re dealing with language, grammar, and syntax. You must work in Germany.
Nick Levine is looking for work. He has 20 years of Lisp experience and has been consulting for seven years.
Marty Simmons of LispWorks is looking for applications that use concurrency, to help test their new thread support.
One parting thought
One of the most widespread complaints about Common Lisp is the lack of available libraries. However, in several of the practical applications described here, we saw that there are many available libraries for Common Lisp that work well and can be built on.