The Open Source Experts
full-service open source library services provider


April 21st, 2008

ILSs have evolved over time. What started as a way to manage the printing of catalog cards (1950s and 1960s) eventually morphed into a system for tracking inventory, and then finally into a full-lifecycle management system for general library operations. None of this is news, of course.

What is news is that Evergreen is the first and only ILS to go into production that does not suffer from decisions made 20, 30 or even 40 years ago. Too often I see quotes from respected luminaries in the library world suggesting that Evergreen is simply duplicating what has been done for the last 30 years of ILS design. That means two things, actually:

  1. We’ve done a very good job of creating an interface to the Evergreen ILS that feels like what librarians are used to using. This means a shorter training cycle for front-line staff, greater ROI and lower TCO.
  2. We may have done too good a job of creating an interface that makes librarians comfortable with traditional ILSs feel at home in Evergreen, and we haven’t explained well enough how the core of Evergreen is designed, or shown just how easy it is to expand its services in ways no one has yet considered for an ILS.

Looking Back

What we now call the ILS, a product purchased from and maintained by vendors, first emerged in the late 1970s and early 1980s. This is, for all intents and purposes, where history begins.

Over the last thirty years we’ve seen, more or less, three architectural generations of ILS arise from the original proto-ILS. Each generation learns from the lessons of those that came before, as well as from the wider world of software engineering. At each stage there are specific changes in architecture which provide obvious differentiating features that are wholly new so far as the ILS is concerned, but there are also concepts and constructs carried over from previous generations. In addition, there are non-architectural design components that are typical of each stage of evolution, and while the basic architecture of systems at each evolutionary stage may not be directly relevant, there is an identifiable correlation between the overall architecture and these higher-level design decisions.

The Client-Server Age

In the beginning (the late 1970s to mid 1980s) there was the monolithic client-server system, and it was good. Mostly it was good because there was nothing else, and something was needed. These systems were designed to run on the computers of the day; that is, large (for their time) central computers with dumb terminals, or later, thin clients on PCs, for data entry and output. They were (and are, it’s amazing how some of these systems persist) consolidated 2-tier systems that contained all of the logic in a single program on the server.

The back-end storage systems, and in many cases the display engine, were integrated directly into the main program, and required that this one program run on one server. The only way to scale such a system is to purchase a larger server. Even today, it is entirely possible to outgrow the largest servers on the market, unless an institution is willing to dedicate a very large portion of its budget to such a system.

Another design constraint of these systems, though unrelated to their client-server architecture and still endemic to the age, is that they were never meant to run the operations of more than one organizational entity. There is little or no consideration for location independence. If a staff member has a permission, they have that permission everywhere within the system. In other words, there is no concept of location-specific privilege separation.
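To make the contrast concrete, here is a minimal sketch (illustrative only; the org unit names and functions are invented, not actual ILS code) of what location-aware privilege separation looks like: a permission grant is scoped to a node in an organizational tree and applies only to that node and its descendants, whereas the flat model described above effectively grants it everywhere.

```python
# Hypothetical consortium tree: child -> parent (None marks the root).
ORG_PARENT = {
    "CONS": None,   # consortium root
    "SYS1": "CONS", # a library system
    "BR1": "SYS1",  # a branch of SYS1
    "BR2": "SYS1",
    "SYS2": "CONS", # a second library system
}

def in_scope(org, scope):
    """True if `org` is `scope` itself or one of its descendants."""
    while org is not None:
        if org == scope:
            return True
        org = ORG_PARENT[org]
    return False

# Grants: user -> list of (permission, org unit where it was granted).
GRANTS = {"alice": [("EDIT_COPY", "SYS1")]}

def has_perm(user, perm, at_org):
    """Check a permission at a specific location, honoring grant scope."""
    return any(p == perm and in_scope(at_org, scope)
               for p, scope in GRANTS.get(user, []))
```

Under this model a staff member granted a permission at their own library system can exercise it at that system’s branches, but not at a sibling system elsewhere in the consortium, which is exactly the separation the older architecture cannot express.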

Case in point: PINES is the state-wide consortium in Georgia at which Evergreen was initially developed. Although functionality was lacking in their existing solution, an equally concerning problem was the performance of the server during normal business hours. The 48-processor E10K they were running on was straining under the load of a state-wide consortium, and regular maintenance jobs that would not be a problem for smaller scale installations were requiring a mid-day restart of an essential, core service, and a nightly restart of all services. To upgrade out of the red zone would have meant investment in a 2 million dollar E20K, or larger. The PINES budget for one year is around 1.5 million dollars. Putting aside the cost and scaling factors in running such a system, there was no way for PINES to separate privileges based on which library or library system a staff member worked in. Among other things, this meant that PINES could not implement centralized Acquisitions.

The Web Application Age

Then in the late 1990s, scurrying around the feet of the lumbering, antiquated 1- and 2-tier client-server dinosaurs, there evolved the 3-tier web application. These systems are characterized by a nod towards database independence, the use of non-library standards for core functionality, and interface rendering handled by a generic third-party application such as a web browser. Such systems can be made to scale horizontally — that is, the full spectrum of functional areas can be scaled up — by adding more web servers in a dumb cluster, but because all of the web servers must run all the business logic for the ILS, as well as generate web pages for display, this scaling is not linear — one sees less usable per-server capacity added with each new server in the cluster. A related scaling problem is that there is no opportunity for balancing base-line resource utilization against the specific needs of the installation — in a low-circulation, high-search environment the server still has to load and run all of the heavy circulation logic with each instance of the search logic, sapping memory and CPU resources that could be used elsewhere.

These systems also carried over the lack of location-aware privilege separation from the earlier generations. While adding new services to such systems is somewhat simpler than in a client-server architecture, they are still basically a monolithic code-base built to run the operations of a single organization. They are also targeted at smaller library systems lacking the need for some of the more sophisticated business processes of larger institutions. As such, scaling is not a primary concern — and rightly so, with their small-system focus — and is not addressed directly but comes as a side-effect of the architecture, to the level that such scaling exists.

The SOA/Distributed/Dis-integrated Age

In the mid-2000s we see the rise of REST and SOA. New software systems in most industries are being modeled on distributed and modular architectures that allow the dis-integration of services and interfaces. ILSs designed and developed in this time period have the specific characteristic of incorporating concepts of SOA and distributed computing from the ground up. Each functional unit of the ILS is implemented as a service with a well-defined API, and these services can be replaced or augmented over time without the fear of disrupting the rest of the code-base. New services snap in like Legos and are trivial to implement in comparison to new services in previous generations, and service APIs stay consistent across time and even development and implementation methodology. Because ILSs built this way have intentional separation between all service layers (UI, business logic, storage) they can present many different workflow-optimized interfaces to the user, or to other services, while leveraging existing code.
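The “services behind a well-defined API” idea can be sketched in a few lines. This is a toy dispatcher of my own invention for illustration (the names here are not OpenSRF’s actual API): callers address a service by name and method, so an implementation can be replaced behind the same API without touching any caller.

```python
# Registry mapping a service name to its published methods.
REGISTRY = {}

def register(service, methods):
    """Install (or replace) a service implementation under a stable name."""
    REGISTRY[service] = methods

def call(service, method, *args):
    """Callers know only the service name and method signature."""
    return REGISTRY[service][method](*args)

# Original search implementation snaps in...
register("search", {"title": lambda q: ["naive result for " + q]})
print(call("search", "title", "dune"))

# ...and a better one can replace it later without disrupting callers,
# because the service name and method signature stay consistent.
register("search", {"title": lambda q: ["ranked result for " + q]})
print(call("search", "title", "dune"))
```

The same indirection is what lets a dis-integrated system present multiple workflow-optimized interfaces over one body of business logic: every interface is just another caller of the published service APIs.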

Because of the focus on service dis-integration, resources can be applied precisely where they are most needed, and horizontal scaling is linear in the worst case. Scaling in this architecture is managed through the addition of simple, replaceable commodity hardware, making it extremely cost effective for initial capital outlay and providing the possibility of “just in time” cluster expansion using hardware that delivers the most bang for the buck at the time of need.

Greener pastures

Evergreen is the first production ILS, and the only one we are aware of as of this writing, to be designed with Service Oriented Architecture (SOA), Representational State Transfer (REST) and “n-tier” architectural concepts specifically in mind. Evergreen achieves this by building services and applications on the OpenSRF (Open Service Request Framework, pronounced “open surf”) platform. OpenSRF handles all of the details of implementing a stateful, decentralized service architecture, and the Evergreen code supplies the business logic and UI framework that matters to libraries.

To provide a specific example of the ease of extension that Evergreen and OpenSRF provide, the online bill pay functionality coming in the next major release was designed and developed by someone who had only a passing familiarity with the underlying architecture of Evergreen, or with the OpenSRF framework on which it is built. It was created in about 4 hours, and adds integrated, ILS-wide support for credit card and PayPal payment processing.

Another mistake I hear repeated all too often is the fallacy that Evergreen requires uniform policy definition across participating institutions. This could not be further from the truth — Evergreen supports the most flexible and location-aware circulation and hold policies of any ILS available, and can even be extended with ad hoc exceptions to any policy definition via simple rules. This is, of course, unrelated to the architecture of Evergreen, but as it is the only example of a system based on such an architecture, this feature is also endemic to this pattern.
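To illustrate the “ad hoc exceptions via simple rules” idea (this is a hypothetical sketch of mine, not Evergreen’s actual rule engine), imagine circulation policy as an ordered rule list, most specific first: a single library can add a local exception without touching the consortium-wide defaults.

```python
# Ordered policy rules: (predicate over the circulation context, loan days).
# Earlier rules are more specific and win over later, broader ones.
RULES = [
    # Ad hoc local exception: branch BR1 lends DVDs for only 3 days.
    (lambda c: c["org"] == "BR1" and c["item_type"] == "dvd", 3),
    # System-wide default for DVDs.
    (lambda c: c["item_type"] == "dvd", 7),
    # Consortium-wide fallback for everything else.
    (lambda c: True, 21),
]

def loan_period(ctx):
    """Return the loan period for the first rule matching this context."""
    for predicate, days in RULES:
        if predicate(ctx):
            return days
```

Because exceptions are just rules prepended to the list, each participating institution can diverge from shared policy exactly where it needs to, which is the opposite of requiring uniform policy definition.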

But wait, there’s more!

Another natural consequence of the SOA/dis-integrated architecture of Evergreen, facilitated by OpenSRF, is that most if not all of its code can be re-purposed largely without modification. We at Equinox Software are currently beginning development on FulfILLment, a large-scale, cross-ILS borrowing platform to facilitate the functionality of large ILL consortia, such as state or region-wide networks. This system will leverage much of the organization-aware business logic that Evergreen already provides to support cross-institution circulation in a completely ILS-agnostic fashion.

The Future

While neither I nor anyone else can predict what the Next Big Thing will be in terms of application architecture, we can draw some clues from the current state of the art. Bleeding edge application development in 2008 seems to be moving towards refining the SOA model. Efforts such as SOA 2.0 and Service Component Architecture (SCA) point toward a near-term future where SOA and dis-integrated services become mainstream, and service-on-demand and mashups are the norm. If these trends are any predictor, Evergreen and OpenSRF are in for a long ride, especially as more features and functionality are incorporated over time.

Note: Updated 2008-04-22T10:00:00-04 to add some term clarification.

