The Technology and Business of ObjectStore
This is a follow-up to my previous article about the success of OODBMS’s, and ObjectStore in particular. For people interested in the more technical story behind the ObjectStore object-oriented database management system, here are some stories that you might enjoy. You’ll see why it was harder to do than we had originally anticipated. There are also stories about problems with the business, with some cautionary tales that you could take into account the next time you start a company.
I’ve been involved with or heard about many high-tech startups. Nearly always, the product turns out to appeal to a set of customers who aren’t the ones the founders originally had in mind. Smart founders dynamically adjust. We found that our customers’ technical requirements varied somewhat, and we had to make a lot of improvements and changes to the product to meet these new requirements. That took a lot of engineering talent.
This essay includes very substantial contributions by my colleagues, which I have tried to organize into a cogent whole. Contributors, in alphabetical order:
Gene Bonte: Co-founder, CFO
Sam Haradhvala: Co-founder
Guy Hillyer: Senior engineer
Charles Lamb: Co-founder
Benson Margulies: Senior engineer and head of porting (after Ed)
Dave Moon: Senior engineer
Jack Orenstein: Co-founder
Mark Sandeen: Senior salesperson
Ed Schwalenberg: Senior engineer and head of porting
Dave Stryker: Co-founder, VP of Engineering
Porting Was Hard
We knew that porting ObjectStore was going to be hard. Dave Stryker recalls: “That was the thing we talked about most during the crucial first three months when we were working out the implications of the architecture.” However, by the time all was said and done, it turned out to be more work than we had originally anticipated.
We ported ObjectStore to an amazing number of architectures: many versions of Windows, many flavors of Unix, OS/2, you name it. I can hardly remember them all. Worse, we often had to do a port simply because a vendor produced a new C++ compiler! So we’d have a version for Solaris on the SPARC with C++ version 4, and another for Solaris on the SPARC with C++ version 5, and so on. We did ports to hardware that never made it big, like the NeXT, and hardware that never even reached the market. (What, you don’t remember the Canon workstation? As Mark Sandeen, one of our best salespeople, points out: “We never should have spent the time to port to platforms with minuscule market share.”) And every so often our sales force would book a sale on a platform that we didn’t actually support. Quick guys, get to work! Our porting group pulled off miracles, but all this took up a lot of engineering talent.
Ed Schwalenberg reminds me that “another bane of our porting existence was the set of orthogonal choices to be made in compiling a library: threads vs. non-threaded, shared vs. static libraries, 32- vs. 64-bit instructions, exceptions vs. non-exceptions, etc. All of those were in addition to the choice of compilers.”
By the way, the first thing that would happen whenever we did an ObjectStore port is that we would discover bugs in the vendor’s C++ compiler. Every single time! As Ed Schwalenberg says: “We were the world’s C++ compiler quality assurance department for a decade.”
Dave Moon points out: “A lot of the early technical problems in ObjectStore were caused by our building on very immature products from other vendors. Since they weren’t open source, we could not work around problems, and had to wait for the vendors to fix them. This is inherent in working at the bleeding edge.”
Fun fact: In the early days of C++, the designers at Bell Labs came up with a specification for the first version of parameterized types. This was of great interest to us, since we wanted to support “a set of Transistors” so that we could query over such a set, and so on. At that time, there was only one C++ implementation, from Bell Labs, known as “cfront”, which translated C++ to C. The guys at Bell Labs apparently were not good enough compiler hackers to implement parameterized types in cfront. So we did it for them (I believe Sam Haradhvala did the work) and gave the code back to them and the world, in an early instance of de facto open source collaboration. We got a nice press release out of it. We were very much among the world’s C++ experts at the time.
We also kept finding operating system bugs. ObjectStore needed to be able to create a “cache” file, map each page, page by page, into the appropriate virtual address, and control its access permissions, using the Unix “mmap”, “mprotect”, and “munmap” system calls. Then the application program would attempt to read or write a no-access page, or write a read-only page, causing a SIGSEGV fault. Our SIGSEGV handler would then, analogously to a page fault handler, figure out what had occurred, and do whatever needed to be done: fetch the page from the server if necessary, map the page into the address space if necessary, set the access permissions, wait for locks when necessary, and so on, finally resuming the program. This was supposed to work in Unix, but Ed Schwalenberg says: “Recovering from a SIGSEGV did not work in any of the first dozen or so platforms we tried it on: Sun’s SunOS, IBM’s AIX, HP’s HP/UX, Digital Unix, OS/2, and the analogous thing on Win16, Win32s, and Windows NT. Every last one of these required a conversation with the relevant kernel development team to get the operating system fixed. Win16 and Win32s didn’t even have the concept of user-mode interception of memory faults, so we had to write kernel-level device drivers to add that capability. Also, SIGSEGV handling did not work recursively: anything that had to work inside a SIGSEGV handler could not, itself, take a SIGSEGV (this is fixed in modern versions of Unix and Windows).”
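To make this concrete, here is a minimal sketch of the user-level faulting trick on a modern Linux/POSIX system. I wrote this for this essay; it is not ObjectStore’s actual code, and fetch_page_from_server is a hypothetical stand-in for the real client/server protocol:

    #include <csignal>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>
    #include <sys/mman.h>
    #include <unistd.h>

    static size_t page_size;

    // Hypothetical stand-in for fetching a page's contents from the server.
    static void fetch_page_from_server(void* page) {
        memset(page, 0, page_size);
    }

    static void segv_handler(int, siginfo_t* info, void*) {
        // Round the faulting address down to a page boundary.
        uintptr_t addr = reinterpret_cast<uintptr_t>(info->si_addr);
        void* page = reinterpret_cast<void*>(addr & ~(uintptr_t)(page_size - 1));
        // Make the page accessible and fill it in; returning from the handler
        // re-executes the faulting instruction, which now succeeds.
        mprotect(page, page_size, PROT_READ | PROT_WRITE);
        fetch_page_from_server(page);
    }

    int main() {
        page_size = static_cast<size_t>(sysconf(_SC_PAGESIZE));

        // Reserve a no-access "cache" region; any touch raises SIGSEGV.
        char* cache = static_cast<char*>(mmap(nullptr, 16 * page_size, PROT_NONE,
                                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));

        struct sigaction sa = {};
        sa.sa_sigaction = segv_handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, nullptr);

        cache[3 * page_size] = 42;  // faults; the handler maps the page in
        printf("faulted page in, value = %d\n", cache[3 * page_size]);
    }

The hard part, as Ed’s list above suggests, was never the happy path shown here; it was getting every operating system to resume the faulting instruction correctly.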
Here’s a story of an operating system bug. Solaris writes out all modified pages every N seconds. The ObjectStore “cache” file could get pretty big, and had lots of modified pages, but there was no need to write them out to the disk, since the file was discarded after a crash anyway. We acquired a customer, Telstra in Australia, who needed real-time response: ObjectStore was invoked after a customer dialed a special phone number, to look up another phone number, and the phone switch had unforgiving time limits. Sun suggested that we put the cache into a special “tmpfs” file system. Files in “tmpfs” aren’t written out, because they’re known to be temporary. That made perfect sense. Unfortunately, we got rare and unrepeatable weird bugs, which finally turned out to be because the SIGSEGV/mmap/mprotect feature almost worked on “tmpfs” file systems, but not quite. We got around it somehow, but I can no longer remember how.
We found that Solaris was taking a very long time to execute mprotect system calls. It turned out that the architects of Solaris had apparently assumed that there would be very few mapped regions of memory. They had not anticipated our architecture, which mapped a huge number of pages independently. So they were using a simple linear search. Guy Hillyer wrote an improvement to Solaris, using skip lists to make the search run in O(log n) time. The hard part was the politics of getting Sun to accept our changes to Solaris! We only did this for Solaris, which was then our primary platform. (Maybe it should be done for Linux?)
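For illustration, here is a sketch of the O(log n) region lookup; I use std::map (a balanced tree) rather than a skip list to keep it short, and this is of course not the actual Solaris code:

    #include <cstdint>
    #include <cstdio>
    #include <map>

    struct Region {
        uintptr_t start, end;  // the half-open range [start, end)
        int prot;              // protection bits
    };

    // Mapped regions keyed by start address, so lookup is O(log n)
    // instead of a linear scan over every mapping.
    static std::map<uintptr_t, Region> regions;

    static const Region* find_region(uintptr_t addr) {
        auto it = regions.upper_bound(addr);   // first region starting after addr
        if (it == regions.begin()) return nullptr;
        --it;                                  // last region starting at or before addr
        return addr < it->second.end ? &it->second : nullptr;
    }

    int main() {
        regions[0x10000] = {0x10000, 0x20000, 1};
        regions[0x80000] = {0x80000, 0x81000, 3};
        if (const Region* r = find_region(0x80abc))
            printf("address is in the region starting at %#lx\n",
                   static_cast<unsigned long>(r->start));
    }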
When the new Windows technology (which was OS/2 at the time; IBM and Microsoft were still working together on it) came out, it was crucial for us that it be able to support memory mapping. Dave Stryker and Tom Atwood flew out to meet with Bill Gates in September of 1989. Dave Stryker recalls: “We originally had a 45-minute appointment, but Gates extended the meeting to a couple of hours, and called in Dave Cutler [the architect of OS/2]. At Tom’s urging, we told Gates and Cutler everything they wanted to know about ObjectStore. Gates was complimentary of the Object Design approach, but said, in a nice enough way, that if the Microsoft Empire ever needed such a thing, they would build it themselves. Still, Gates told Cutler to make sure that the OS/2 equivalent to mmap was powerful enough to run ObjectStore, and there were some changes made to make it so.” Later, this OS/2 technology turned into Windows NT. Dave Moon adds that it turned out to have a bug: it doesn’t free up disk space when it ought to. For some reason Microsoft hasn’t fixed this, even after many years. We found a way around it.
Speaking of industry luminaries, we also met with Steve Jobs when he was at NeXT, and Jobs made a big announcement praising our technology, which resulted in a nice press release. There was some discussion that NeXT might buy Object Design, but that never went anywhere.
It turned out to be hard to support customers who wanted to use the same ObjectStore database from many different client architectures. We had to support what we called “heterogeneity”. First there was “architecture hetero”: some machines have big-endian numbers and some have little-endian numbers, and we’d have to convert, for example. Much worse was “compiler hetero”: different C++ compilers represented C++ objects differently in memory, due to run-time compiler “dope”, padding, and so on. Objects were not even the same size under different compilers, which was a huge problem. We had to know every last thing about how objects were laid out: where the compiler put padding, where the compiler put “dope” information such as “vtbl pointers” and various displacement offsets, etc. Our engineers came up with clever solutions to these problems, but it was hard and used up a lot of engineering talent. I think that if we had realized, originally, that we’d run into this problem, we might never have started the company at all, thinking the technical issues too daunting. It’s a good thing we didn’t think about it then!
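To give a flavor of the compiler-hetero problem, here is a toy example (Transistor is hypothetical) showing how one can interrogate a single compiler’s layout decisions; the padding it reveals differs from compiler to compiler, and this doesn’t even touch vtbl pointers, which offsetof can’t portably see at all:

    #include <cstddef>
    #include <cstdio>

    // A hypothetical persistent class; the compiler inserts padding after
    // 'kind' so that 'width' is properly aligned.
    struct Transistor {
        char kind;
        double width;
        int terminals;
    };

    int main() {
        printf("sizeof(Transistor)  = %zu\n", sizeof(Transistor));
        printf("offsetof(width)     = %zu\n", offsetof(Transistor, width));
        printf("offsetof(terminals) = %zu\n", offsetof(Transistor, terminals));
    }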
The Virtual Memory Mapping Architecture
Was the page-mapping, virtual-memory mapping architecture worth it? Mark Sandeen says: “In competitive situations, against the other OODB companies, we sold on performance, performance, performance. Plus the fact that you got that performance by using an elegant architecture that was fundamentally different from anything our competitors had or ever would have. We used our incredible engineering team to win the benchmark wars, and then told our customers that the reason we won the benchmarks was the 2nd generation OODB architecture.”
It would have been easier to port had we not gone for transparent persistence, and the goal that dereferencing a pointer was done in one instruction, exactly as in a non-persistent program. None of our competitors did this; for C++, they used the “overloaded operator ->” approach, in which dereferencing a pointer did a software operation that usually consisted of going through an indirection in an object table. Our justification was that CAD people would never tolerate a slowdown in the time it took to redisplay a drawing. So once the pages were faulted in, C++ operations would run at full speed. This led to all kinds of pros and cons. Concurrency control was totally transparent and foolproof; on the other hand, it was at page granularity, causing unnecessary conflicts sometimes. We didn’t expect this to be a problem in the classic CAD scenario since we imagined designers would usually not be working on the very same drawing at the same time. But other scenarios did run into this sometimes. However, difficulty of porting was our own problem, not our customers’ problem, so they didn’t know or care.
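For contrast, here is a toy sketch, my own reconstruction rather than any competitor’s actual code, of the “overloaded operator ->” approach, in which every dereference pays for a software lookup through an object table:

    #include <cstdio>
    #include <unordered_map>

    using ObjectId = unsigned long;

    // Toy "object table" mapping persistent ids to in-memory addresses.
    static std::unordered_map<ObjectId, void*> object_table;

    struct Transistor { double width = 1.0; };

    template <typename T>
    class PersistentRef {
        ObjectId id_;  // a persistent identity, not a raw address
    public:
        explicit PersistentRef(ObjectId id) : id_(id) {}
        T* operator->() const {
            auto it = object_table.find(id_);
            if (it == object_table.end()) {
                // "Fault" the object in; a real OODBMS would read it
                // from the database here.
                it = object_table.emplace(id_, static_cast<void*>(new T{})).first;
            }
            return static_cast<T*>(it->second);
        }
    };

    int main() {
        PersistentRef<Transistor> t(42);
        printf("width = %f\n", t->width);  // table lookup on every dereference
    }

With ObjectStore, by contrast, once the page was faulted in, t->width compiled to an ordinary load instruction.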
Dave Stryker recalls some more reasons we stuck with our original idea of using the memory-mapping architecture. “First, our competitors had staked out the strategy of overloading C++ dereferences. Object Design came into existence after Versant (then Object Sciences) and Objectivity, and needed to be differentiated from the competition. Second, our approach was really clever, and won many ideological converts based on cleverness alone. We could usually count on the smartest guy in the room being an ally, because using faulting was such an impressive intellectual accomplishment. Third, we had really smart engineers who enabled us to undertake obligations, particularly porting obligations, that with more prudence we might have avoided. With engineers that talented, you need really disciplined, far-sighted top management, because in the short term it’s perfectly clear that engineering can work miracles. It’s only in the longer term that the cumulative miracles sap all the capacity of engineering.” (Mark Sandeen also remembers that we were the last of these three startups, whereas Gene Bonte says we all started at the same time and remembers a lot of details about it.)
Dave Stryker says: “As you say, for the largest early customers database meant concurrency, and at that point at least it was difficult to avoid concurrency conflicts among simultaneous users. In my memory, it seemed that there were often fires burning because customers had trouble getting ObjectStore to work concurrently. I know this got a lot better in the years after I left Object Design.” The main technical problem was that locks were at the granularity of pages, so sometimes ObjectStore thought that there was a concurrency conflict even though there really wasn’t, and that would hold up processing until the other transaction was finished. This is inherent in the virtual memory mapping architecture. Our competitors often pointed out this drawback.
He goes on: “I’ve certainly wondered if the architectural choice of page faults and native-format on-disk objects was the right one. I was an enthusiastic booster of the page fault architecture, but it certainly made porting, multi-architecture access, schema evolution and so on much, much harder. [Ken Rugg says that Dave Moon has made huge improvements in schema evolution in the latest releases.] Certainly, the page fault / native object on disk architecture was instrumental in many of the CAD industry wins.” And, “The open-source industry makes me wonder what’s the future of software products like ObjectStore. At Multiverse, where I work now, the large majority of libraries and development tools we use are open-source. The only things we buy right now are Microsoft licenses for Windows boxes and 3D modeling tools. The database is MySQL, and it’s going to be a fine solution for a fairly long time, because gaming isn’t hugely database intensive, even though the gaming objects would map naturally to an object database. In many product areas today, the best and/or most successful products are open source.” The whole concept of open source wasn’t around when we started in 1988. (Neither were Unix threads. Nor was Windows. ObjectStore was aimed at the class of computers then known as “workstations”, primarily the Sun-3.)
Sam Haradhvala says: “I have often wondered like Dave and others on this thread whether the use of page faults and native on-disk representation was the correct one. It seems that it was the right choice at that time and conferred some rather unique advantages. Given the current state of technology and the hot issues of today, the limited flexibility inherent in the approach might very well dictate a different set of choices.” But he also says: “I still find the architecture almost as appealing as on day one of the company, and feel very lucky that we had a chance to see it realized in a product.”
Ed Schwalenberg also points out that our architecture, by doing so many things transparently, avoided huge numbers of bugs, much as languages with automatic storage management (e.g. garbage collection) save you from bugs in storage allocation and deallocation.
Ken Rugg notes that we had always intended to do some kind of declarative mechanism to help support clustering and reclustering, since that’s so crucial for delivering ObjectStore’s performance advantages. That still hasn’t been done, and perhaps never will be, as the importance of C++ continues to decline.
Fun story: Ed Schwalenberg reminds me of the most vexing case we ever ran into with the virtual memory mapping architecture. The program went into a mysterious infinite loop. Guy Hillyer figured out that it had a single machine instruction with both source and destination operands in ObjectStore-managed persistent memory, in two different “versions” (this was when we were trying to support a very sophisticated database versioning feature). Fetching the source, in one version, was making the destination, in a second version, out of reach. Retrying would fault in the destination, putting the source out of reach, so the single instruction could never make progress.
The high performance that we designed ObjectStore for really did come out as we expected it to. If your data had good spatial and temporal locality, and especially if concurrent access was relatively rare, it was extremely fast.
However, it turned out that it was not so easy to anticipate the performance that would result from using it in certain ways. Sometimes customers would come to us literally a week before they wanted to deploy their product. They had just tried running it under heavy load, or with multiple users, for the very first time (yes, a week before they planned to deploy!), and all of a sudden ObjectStore was becoming a bottleneck. We had some amazingly competent consultants, who could fly in and fix these problems for the customers very quickly, but not before there was some anger from the customers. Mark Sandeen goes so far as to say that few of our customers were able to build a deployable application without help from our consultants, which limits the scalability of the business model.
Charles Lamb points out: “I think this happens in any database company.” Indeed, there is a whole industry of Oracle experts; we have engaged several at my current company. Ed Schwalenberg says: “ObjectStore made it easy — too easy — for any C++ programmer to write a ‘database application’, while being ignorant of concepts like lock contention, database hot spots, etc. It was folks like that, who never tested more than one user until a week before launch, who sometimes gave us a bad name.” Everybody out there, take heed: do testing under serious performance load way, way before you’re going to release your product!
Sam Haradhvala, who has had extensive real-world experience with relational databases in the last few years, remembers: “ObjectStore was characterized as being like a Ferrari, which if tuned right by the experts could be made to run like one. Tuning an application, almost as an afterthought, is a common practice even in the relational database world. ObjectStore did make it easy for people to write database applications, without worrying about lock contentions, database hot spots, etc, but so do SQL and PL/SQL. So what was it about ObjectStore that made it a harder problem? If it had been possible in ObjectStore to use object level locks the way relational programs use row level locks, it would probably not have been as much of an issue, but this is one of those areas where the architecture puts you at a disadvantage.”
There were many competitive benchmarks. ComputerVision wrote an early one, aimed at determining OODBMS performance for CAD systems, and we spent a lot of time winning this. The one that took the most effort was the OO7 benchmark, described in my previous posting. We spent a huge amount of time improving our performance on OO7. From the engineering point of view, this was very helpful. The OO7 crew at Wisconsin found many interesting performance problems that we didn’t even know about, many of which were easy to fix. I particularly remember how much benefit we got from setting the TCP_NODELAY flag. Meanwhile, the sales forces of every OODBMS company were using OO7 as a sales tool, each claiming to have gotten the best results! OO7 wasn’t really designed to compare competing products, but rather to act as an X-ray to analyze the systems and illustrate how they worked, and the researchers were rather unhappy to see it used in sales situations. At the same time, tension developed as the benchmark was revised in order to make it a better X-ray. The problem was that each revision favored some vendors and disfavored others. Sadly, Ken Marshall decided that the OO7 team was intentionally trying to make Object Design look bad (because one of the researchers was on the technical advisory board of one of our competitors), and Object Design pulled out of the benchmark, invoking the clause in our license saying that customers could not distribute benchmark results. As you can imagine, the Wisconsin team was pretty upset about this. Charlie Lamb and I eventually published our own OO7 numbers, with complete instructions for anyone about the exact procedure that we had used, so that they could duplicate it. In my opinion, we did the best overall, though not on every test, but it was never official because of Object Design’s having withdrawn from the study.
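For those who haven’t run into it: TCP_NODELAY turns off Nagle’s algorithm, so small request packets go out immediately rather than being buffered in hopes of coalescing, which matters enormously for chatty client/server round trips like ours. A minimal POSIX sketch (mine, not our actual networking code):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    // Create a TCP socket with Nagle's algorithm disabled, so that small
    // requests are sent immediately instead of waiting to be coalesced.
    int make_low_latency_socket() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int on = 1;
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof on);
        return fd;
    }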
Gene Bonte says: “I remember Ken Marshall [the CEO] telling me that in his days at Oracle (which he left to join us), 80-90% of the significant sales depended on benchmarks. For a new market like ours, this was the same or higher. Our salespeople and pre-sales engineers spent a lot of time trying to get customer benchmarks written so that they would favor our VMMA approach. Our competitors did the same for their approaches. Given there were almost no concurrent-user engineering applications in existence, this was always a weak to non-existent part of the benchmark, and we were always strong in these situations. Thus we won most of the benchmark wars.”
An important thing that we never got around to implementing was putting more data processing on the server side. In formulating the architecture, I was heavily influenced by work at Xerox PARC on database systems in which the server just stored pages of data without interpreting them. This matched ObjectStore’s needs very well, since the knowledge of C++ data layouts and the database schema lived on the client side, not the server. But sometimes this meant that you had to read a lot of data into the client side in order to search for small amounts of data in the database. We had originally hoped that this would not be a problem, on the grounds that local area networks are awfully fast. That was a good answer for many cases, but not all. I have only recently (in my present job) worked with sophisticated Oracle experts who have shown me more about how to improve performance by processing (in PL/SQL, in their case) on the server side; I didn’t appreciate that well enough back when we designed ObjectStore.
Dave Stryker points out: “One thing that has made it harder for OODBMS’s is ever-growing memory and CPU power in PCs. ObjectStore database sizes were typically just a few gigabytes or less. Our original Sun-3 workstations had 8 Mbytes of RAM, I believe, and if you’re going to search a couple of gigabytes on an 8-Mbyte machine, you’re going to need a database system with indexes. In contrast, today even my laptop has 2 Gbytes of memory, and lots of workstations have 8 Gbytes or more. It’s completely practical and common to slurp up a couple of gigs of information into memory and search it in memory on a machine like that. So the ‘object database cache’ of the past gets done now, most of the time, using in-memory data structures. Even when a database is the right answer, the extra overhead of translating from an on-disk representation to an object representation happens 100 times faster on today’s CPUs than on the 50 MHz CPUs of 1990. So the performance advantages of not translating are much smaller.”
Looking into the future, Dave Moon says: “The illusion of random access memory is becoming increasingly unconvincing on modern hardware. Although dereferencing a pointer takes only one instruction, when the target of the pointer is not cached in the CPU that instruction can take as long to execute as 1000 ordinary instructions executed at peak speed. It’s not clear that other approaches to database navigation are able to execute at peak speed, i.e. with no cache misses and no delays due to resource conflicts within the CPU, but if they were able to execute that fast, they would be able to expend hundreds of instructions to do what pointer dereferencing does and still come out equally fast, in the random access case where the target is not cached. Thus, the advantage of ObjectStore’s architecture is being eroded by hardware evolution. But at the same time, the advantage of C++ and other conventional programming languages is being eroded in the same way. It is not unreasonable to predict that we will see widespread abandonment of the illusion of random access memory in the next two decades. The IBM Cell processor used in video games is the first crack in the dam.”
Many customers wanted an industry standard, to avoid vendor lock-in. There was never a real standard for OODBMS’s. There was an attempted standardization effort called ODMG. Unfortunately, it was run by the vendors, not by the customers. So every vendor tried to adjust the standard to benefit its own technical approach and make life hard for the other companies’ technical approaches. It was really not done in good faith, and we were just as bad as anyone else, perhaps even worse. Unfortunately, there wasn’t any other OODBMS that worked the way ours did, so our customers really did have a vendor lock-in problem, which we never succeeded in addressing.
Ken Rugg points out that there wasn’t even a common understanding of what an object database is! “If you looked under the covers, the actual persistence mechanisms behind Versant and ObjectStore, let alone something like Cache, are very different. Also, these differences are much more visible to the user than differences in the engines of RDBMS products.”
Several key customers wanted support for versioning, e.g. so a CAD system could easily keep track of earlier versions of a design. But our highly sophisticated versioning system involved such complex semantics and such a complicated implementation that it made the whole ObjectStore client side mind-bogglingly complex. I remember Dave Andre and I reporting to Dave Stryker that it was almost working, but it made the product unmaintainable! We eventually had to rip it out. It was a huge waste of engineering resources and a good lesson in the virtues of simplicity, one of the hardest and most important lessons to learn in all of software engineering.
Java, PSE Pro for Java, and Smalltalk
(Thanks to Sam Haradhvala for help with this section.)
When Java came to prominence, we had to figure out how to turn ObjectStore into a Java OODBMS. Again, we went for transparency: persistent Java objects. You program with them just the way you regularly program in Java, except that you put in transaction boundaries and so on. Objects are persistent if they are reachable from any object designated as a persistent root object.
To do this, we used a novel trick: we took the Java class files, and added new JVM instructions before a read or write, to check whether the object being accessed had been read in. If not, we’d read it in on demand. As Sam Haradhvala points out, this can be thought of as a two-level faulting architecture. It used object-level faulting to fault in the contents of individual Java objects, while using VMMA to fault in the underlying C++ object representation, implement scalable collections, etc. This architecture could have provided the underpinnings for object-granularity locking and increased flexibility in other areas.
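The injected check amounted to something like the following, sketched here in C++ for consistency with the rest of this essay (the real product rewrote JVM class files, and these names are hypothetical):

    #include <cstdio>

    struct Widget;
    static void fault_in_object(Widget* w);

    // A persistent object with a per-object "contents loaded" flag.
    struct Widget {
        bool loaded = false;
        int value = 0;
    };

    // Hypothetical stand-in for reading the object's contents from the store.
    static void fault_in_object(Widget* w) { w->value = 7; }

    // The check the class-file rewriter injected before each field access:
    static int read_value(Widget& w) {
        if (!w.loaded) {          // object-level fault check
            fault_in_object(&w);
            w.loaded = true;
        }
        return w.value;
    }

    int main() {
        Widget w;
        printf("value = %d\n", read_value(w));  // first access faults it in
    }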
The PSE Pro for Java product used just object-level faulting, with its own specialized, lightweight, small-footprint storage engine. It provided atomicity and durability: committed changes happened all-or-nothing, even in the face of system crashes. However, it did not support concurrent access between separate Java processes. It was targeted at an entirely different market segment than the ObjectStore Java product, but had the same API, so that you could, e.g., use it as scaffolding.
There was even an ObjectStore Smalltalk product which used the VMMA architecture, with special hooks built into the ParcPlace Smalltalk VM so that it could co-exist with the Smalltalk GC. This was built by a team of very smart people on the West Coast. Unfortunately, they didn’t communicate tightly with the key developers on the East Coast, and so their work didn’t fit into the architecture properly. The code became too hard to maintain, and the demand for Smalltalk turned out to be a fad in those particular years, so we discarded this.
Jack Orenstein was very interested in object-relational mapping, which he describes as “my quixotic mission at Object Design”. “The idea was to bring relational database features to ObjectStore: collections, queries over them, and mappings to and from the relational model. A relational interface to ObjectStore would have expanded the pool of ObjectStore users, and opened up the product to off-the-shelf relational tools, e.g. Crystal Reports. The opposite direction (ObjectStore programming model on top of a relational database) would have opened up the ObjectStore API to other kinds of databases. These projects were of interest to a small number of customers (e.g. USWest, Credit Suisse), but for various reasons, some due to internal company politics, they were never internally funded and supported to the point where we came out with a product.” That was probably a mistake, perhaps a big one.
Objects can also be stored in RDBMS’s using object-relational mapping tools, and a vast number of applications do exactly that. Hibernate is a very popular system for doing this in Java. You use Java annotations and XML configuration files to specify the mapping, which can be pretty sophisticated. Hibernate is clever at generating efficient SQL. It’s widely used and well-documented. A big advantage of mapping tools is that they let you share data with other, relation-oriented applications. On the other hand, this approach is not appropriate for the kind of CAD-like applications at which ObjectStore is aimed. Sun’s Entity Enterprise Java Beans (particularly the EJB 3.0 standard) is another mapping tool. See here for a paper by Mick Jordan about other Java approaches to orthogonal persistence.
Benson Margulies says: “The idea of persistent storage of an object data model, is, in fact, ever-more-common … in the form of object-relational middleware. Relational databases have become so successful in exploiting hardware, and are such a ubiquitous feature of the computation infrastructure, that a vast number of applications map their objects to relational tables and go home for a nice lunch. The trio of ObjectStore, Object Design, and the OODBMS concept can claim much credit for this. We wouldn’t have Hibernate, not to mention 15 incomprehensible Java standard initialisms, if not for what we did. And we had to do it. Ironically, if we had set out to build the object-relational product, I think that we would have failed. It couldn’t have been fast enough. We identified and exploited a gap, and we had a relatively successful run in that gap.”
This entire section is by Jack Orenstein, regarding the “Third Generation Database System Manifesto” claim that query optimizers can always do better than a programmer can do by hand.
The relational side of this debate relies on an assumption that applications navigate to data of interest, and then, after the query, process that data. (Or, in a few cases, process the data inside the query, e.g. simple arithmetic, simple forms of aggregation, simple updates.) But in many applications, that separation of navigation and processing is impossible or not feasible.
I’ve implemented polygon overlay, which I think is typical of such applications. In polygon overlay, you need to traverse linked lists of vertices and edges making up polygons. You don’t navigate to an edge and then retrieve some of its data for later processing (after the database query). Instead, the navigation and processing of the data are tightly intertwined. Yes, with enough work you might be able to separate the implementation into navigation and processing parts, express the navigation part in a query language, and then have the query optimizer generate an execution plan better than the one implicit in your original code. An approach like this would obviously be completely alien to developers.
But if you really did write your application this way, separating navigation from processing, then the optimizer could, in principle, come up with an execution plan that reduces the number of disk reads compared to your original implementation.
But only if data is clustered in a predictable way. A relational optimizer uses a cost model to estimate the number of page accesses required to implement a query using a candidate execution plan. That cost model makes assumptions about how data is organized on disk, and uses some observations of actual data (e.g. key frequency distributions). If ObjectStore data were clustered as in a relational database, then the relational argument might have some merit. The optimizer would take estimates of page reads into account, something the low-level, data structure navigating C++ code is obviously not doing. But if the ObjectStore data is clustered intelligently, then that argument falls apart. In other words, a programmer can easily beat an optimizer if the programmer is also responsible for clustering the data. (The tools for clustering data in relational systems are extremely limited.)
ObjectStore was organized around providing persistence for a particular application. However, Ken Rugg points out that even in non-traditional market areas, some customers needed a DBMS shared across multiple applications with different access patterns. In such cases, it was hard to optimize for one application without hurting the others, since much of the performance depends on the way the data is clustered, and data can’t be clustered two different ways at the same time in the same database.
Ken says: “One area that we are working on is how to synchronize data in ObjectStore with relational data so you can ‘have your cake and eat it too’. I think having multiple special-purpose stores that are optimized for each consumer and synchronized and consistent with each other (assuming you can manage them all in a reasonable way) is better than a single ‘least common denominator’ store that is shared by all the applications in an enterprise. Of course, doing this synchronization isn’t an easy problem.”
Business Problems at Object Design
From time to time, I, and others, would lobby management to provide post-sales technical support, to help the customers learn how to best use ObjectStore. The pre-sales engineers tried to do this when they could, but they were usually too busy doing their pre-sales job. Periodically, one management regime or another would agree, and set up post-sales technical support. Life was good. But not for long, because management would see how valuable the customers thought post-sales technical support was, and they’d get the bright idea that we should charge for it and make it a profit center, making these guys into more consultants. (We always had consultants who could be hired.) Well, that was a big mistake. Lots of customers can’t pay for consultants. In some corporate cultures, for you to hire a consultant from the vendor tacitly implies that you are incompetent. What Object Design needed was successful customers to use as reference accounts when we tried to sell to new customers. Post-sales technical support was a long-term investment. But management would often lose sight of this and go for the short-term profit.
Mark Sandeen says: “The fact that we needed this level of technical support resulted in an interesting situation. Every now and then we’d hire the best and the brightest engineers from our customers, leaving our customers without the talent to architect their systems appropriately.” He and I can remember at least five of these, including several of our most awesome.
Our sales force faced obstacles. One of our sales reps, Ben Bassi, told me that the moment he walked in the door and said that he was there to sell a “database”, many customers would say “We already have Oracle: go away”, without giving us a chance to explain what we were about. (But Mark Sandeen says: “I never had that happen to me personally. And I trained all the staff that worked for me to never go anywhere near a prospect that was using Oracle (or RDBMS’s in general). In the early days we followed leads from folks who had purchased C++ compilers and tools, and after we had some wins in GIS, network management, etc., we would target those folks directly. We’d sell high-performance, concurrent persistence solutions to application developers.”)
We even thought of not calling it a database system at all: maybe it’s an “application data management” product, or something. Unfortunately our marketing department never really solved this problem. Our early salespeople were great. Later management regimes felt that you didn’t really need salespeople who understood the product; they were too hard to find and cost too much. Wrong. Some of the best salespeople left when that policy started to take over.
If you took any of our CEOs and locked him in a room with the product, he’d not have the faintest idea how to use it. It was a technical product aimed at programmers. Our first CEO, Ken Marshall, was very good at delegating, and his own lack of technical background wasn’t much of a problem. But after he left, the next CEO considered himself much more technically competent than he really was; he made a lot of bad decisions, hadn’t really wanted to be CEO anyway, and was only interested in wild ideas that would make the company grow super-fast, but those ideas never worked. The third CEO, acquired from a merger, was a good guy but, in my opinion, totally unfamiliar with how to run a software product company, and he pretty much ignored the advice of the technical people (particularly Ken Rugg, who was CTO and VP Engineering) even though he originally solicited it. That was when I finally threw in the towel. Fortunately, Progress Software bought the company, and the original ObjectStore part was put under a new general manager who was apparently quite good. So life is good again over there, and they’ve actually hired back a lot of very talented people who had left the company earlier!
Here’s a real life example of why it’s so hard to escape Oracle and embrace ObjectStore. I currently work at ITA Software, Inc., where we are building a new airline reservation system. We’re using Oracle RAC for the database system. Our rules say that all persistent mutable information must be stored in Oracle. Why? Because we are using Oracle Dataguard to copy data to our disaster recovery site(s), and to copy all online data to an archive, and our operations department wants data for disaster recovery handled uniformly across the system. We might use ObjectStore as a cache, but the place where we’d probably benefit most from a cache is a big module that’s written in Common Lisp, and there isn’t a good interface from Common Lisp to ObjectStore. It’s often for reasons like this that it’s hard for ObjectStore to get a foothold. However, there’s another product being developed at ITA for which ObjectStore, using its Java interface, looks like it might be a great fit.
Ken Rugg notes that the company took a big hit when the bubble burst in 2000. Object Design primarily sold to high-tech companies, since the users of the product were very technical and leading-edge. In particular, one of the major markets for ObjectStore was telecommunications companies, who were particularly hard-hit in that period. This contributed to a decline in revenues and eventual acquisition.
Caveats and Thanks
Everything here is my own personal opinion, and should not be taken as a statement by Object Design or Progress Software!
Much of this is in the past tense because I’ve been gone so long, and because things have changed, but ObjectStore is still alive.
Thanks to all the contributors named above, particularly Benson Margulies, whose highly cogent criticism compelled me to substantially reorganize the whole essay. I have made small edits to the contributions. Of course, I take responsibility for all errors.