Not so fast, maybe relational databases aren’t dead!

Maybe the obituary announcing the demise of the relational database was premature!

Much has been written recently about the demise (or in some cases, the impending demise) of the relational database. “Relational databases are dead,” writes Savio Rodrigues on July 2nd. I guess I missed the announcement and the funeral in the flood of emails and tweets about yet another high-profile demise.

A few days ago, Michael Stonebraker authored an article with the title “The End of a DBMS Era (Might be Upon Us)”. He made a similar argument in this article from September 2007, and also in this 2005 paper with Uğur Çetintemel.

What Michael says here is absolutely true. And, in reality, Savio’s article just has a catchy title (and it worked). The body of the article makes a valid argument: there are situations where the current “one size fits all” relational database offering, born in the OLTP days, may not be adequate for every data management problem.

So, let’s be perfectly clear about this: the issue isn’t that relational databases are dead. It is that a variety of use cases are pushing the current relational database offerings to their limits.

I must emphasize that I consider relational databases (RDBMSs) to be those systems that use a relational model (a definition consistent with http://en.wikipedia.org/wiki/Relational_database). By that definition, columnar (or vertical) representations, row (or horizontal) representations, and systems with hardware acceleration (FPGAs, …) are all relational databases. There is arguably some confusion in terminology in the rest of this post, especially where I quote others who tend to use the term “Relational Database” more narrowly, so as to create a perception of differentiation between their product (columnar, analytic, …) and the conventional row-oriented database, which they refer to as an RDBMS.

Tony Bain begins his three-part series about the problem with relational databases with an introduction where he says

“The specialist solutions have be[en] slowly cropping up over the last 5 years and now today it wouldn’t be that unusual for an organization to choose a specialist data analytics database platform (such as those offered from Netezza, Greenplum, Vertica, Aster Data or Kickfire) over a generic database platform offered by IBM, Microsoft, Oracle or Sun for housing data for high end analytics.”

While I have some issues with his characterization of “specialist analytic database platforms” as something other than a Relational Database, I assume that he is using the term RDBMS to refer to the commonly available (general purpose) databases that are most often seen in OLTP environments.

I believe that whether you use a column-oriented architecture (with or without compression), an architecture with hardware acceleration (Kickfire, Netezza, …), or a materialized view, you are attempting to address the same underlying issue: I/O is costly, and performance improves significantly when you reduce the I/O cost. Columnar representations reduce I/O cost by not performing DMA on unnecessary columns of data. The FPGAs in Netezza serve a similar purpose; (among other things) they perform projections, thereby reducing the amount of data that is DMA’ed. A materialized view with only the required columns (a narrow, or thin, table) serves the same purpose. In a similar manner (but for different reasons), indexes improve performance by quickly identifying the tuples that need to be DMA’ed.
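To make the I/O arithmetic concrete, here is a minimal Python sketch; the table shape, column widths, and row count are hypothetical, and this illustrates the principle rather than any vendor’s implementation:

```python
# A toy illustration of why columnar layouts cut I/O.
# Table shape, column widths, and row count are all hypothetical.

ROWS = 1_000_000
# A wide "customer" table: column name -> width in bytes.
columns = {"id": 8, "name": 64, "address": 128, "balance": 8}
row_width = sum(columns.values())  # 208 bytes per row

# Query: SELECT SUM(balance) FROM customer
# A row store must transfer every full row to reach one column.
row_store_io = ROWS * row_width

# A column store (or a narrow materialized view) reads only
# the 8-byte balance column.
column_store_io = ROWS * columns["balance"]

print(f"row store reads    {row_store_io / 1e6:6.1f} MB")
print(f"column store reads {column_store_io / 1e6:6.1f} MB")
# ~208 MB vs ~8 MB: the same 26:1 saving that an FPGA projection
# or a thin materialized view is chasing.
```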

Notice that all of these solutions fundamentally address one aspect of the problem: how to reduce the cost of I/O. The challenges facing databases these days are somewhat different. In addition to the huge amounts of data being amassed (see the Richard Winter article on the subject), there is a much broader variety of things being demanded of the repository of that information. For example, there is the “Search” model that has been discussed in a variety of contexts (web, peptide/nucleotide), and the stream processing and data warehousing cases that have also received a fair amount of discussion.

Unlike the problem of I/O cost, many of these problems reflect issues with the fundamental structure and semantics of the SQL language. Some of these issues can be addressed with language extensions, User Defined Functions, MapReduce extensions, and the like. But none of these address the underlying issue: the language and semantics were defined for a class of problems that we have today come to classify as the “OLTP use case”.
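As a rough illustration of the kind of computation these extensions push into the engine, here is a toy map/reduce aggregation in plain Python; the records and functions are made up and this is not any particular vendor’s extension API:

```python
from collections import defaultdict

# A toy MapReduce-style aggregation: the sort of computation that
# UDF/MapReduce extensions let you push into the engine when plain
# SQL semantics get in the way. Records and functions are made up.
records = [("web", 120), ("mail", 40), ("web", 80), ("dns", 5)]

def mapper(record):
    service, latency = record
    yield service, latency

def reducer(key, values):
    return key, max(values)  # e.g. worst-case latency per service

groups = defaultdict(list)
for record in records:
    for key, value in mapper(record):
        groups[key].append(value)

print([reducer(k, v) for k, v in groups.items()])
# [('web', 120), ('mail', 40), ('dns', 5)]
```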

Relational databases are not dead; on the contrary, with the huge amounts of information being handled, they are more alive than ever before. The SQL language is not dead, but it is in need of some improvements. That’s nothing new; we’ve seen revisions in ’92, ’99, … More importantly, the reason the Relational Database and SQL have survived this long is that they are widely used and portable. By being an extensible and declarative language, SQL has managed to adapt to many of the new requirements placed on it.

And if the current problems are significant, two more problems are just around the corner and waiting to rear their ugly heads. The first is the widespread adoption of virtualization and the abstraction of computing resources. In addition to making it much harder to adopt solutions with custom hardware (which cannot be virtualized), it introduces a level of unpredictability in I/O bandwidth, latency, and performance. Right along with this, users are going to want the database to live on the cloud. With that will come all the requirements of scalability, ease of use, and deployment that one associates with a cloud-based offering (not just the deployment model). The second is the fact that users will expect one “solution” to meet a wide variety of demands, from the current OLTP and reporting through the real-time alerting that today’s “Google/Facebook/Twitter Generation” has come to demand (look-ma-no-silos).

These problems are going to drive a round of innovation, and the NoSQL trend is a good and healthy trend. In the same breath as all the NoSQL and analytics alternatives, one should also mention the various vendors who are working on CEP (complex event processing) solutions. As a result of all of these efforts, Relational Databases as we know them today (general-purpose, OLTP-optimized, small-data-volume systems) will evolve into systems capable of managing huge volumes of data in a distributed/cloud/virtualized environment and capable of meeting a broad variety of consumer demands.

The current architectures that we know of (shared disk, shared nothing, shared memory) will need to be reconsidered in a virtualized environment. The architectures of our current databases will also need some changes to address the wide variety of consumer demands. Current optimization techniques will need to be adapted, and the underlying data representations will have to change. But, in the end, I believe that the thing that will decide the success or failure of a technology in this area is the extent of its compatibility and integration with the existing SQL language. If a system has a whole new set of semantics and is fundamentally incompatible with SQL, I believe that adoption will be slow. A system that extends SQL and meets these new requirements will do much better.

Relational Databases aren’t dead, but the model of “one-size-fits-all” is certainly on shaky ground! There is a convergence of the virtualization/cloud paradigms, the cost and convenience advantages of managing large infrastructures in that model, and the business need for large databases.

Fasten your seat-belts because the ride will be rough. But, it is a great time to be in the big-data-management field!


Highland Light, Provincetown, MA

If you have ever been to the Highland Light Lighthouse at Provincetown, you will likely recognize the picture below. The image is panoramic; the high-resolution version is 15MB and about 13k pixels wide. The image appears cropped here; you can see the complete image on Flickr (link on the left).

Highland Light, Provincetown, MA

The image is a composite of 10 discrete images that were stitched using AutoStitch. The software is available for free evaluation at this site.

For more information about Highland Light you can visit their web page at http://www.lighthouse.cc/highland/

IE+Microsoft ROCKS. Ubuntu+Firefox BLOWS

I’m running Ubuntu 9.04, and it appears that upgrading to Firefox 3.5 requires a PhD. Could I borrow yours?

Kevin Purdy has the following “One line install”

That’s not a true install; he just unpacks the tar-ball into the current directory.

Web Upd8 has the following steps

But then I end up getting daily builds as updates?

What’s with this garbage? Complain about Microsoft all you want, but Software Update will give you the latest Internet Explorer easily.

Could someone please make this easy?

It’s just numbers, people!

I hadn’t read Justin Swanhart’s post (Why is everybody so steamed about a benchmark anyway?) from June 24th until just a short while ago. You can see it here. Justin is right; it’s just numbers.

He did remind me of a statement from a short while ago by another well-known philosopher.

“It's a budget”

I could not embed the script provided by quotesdaddy.com into this post. I have used their material in the past; they have a great selection of statements that will live on in history. Check them out at http://www.quotesdaddy.com/ 🙂

Head for the hills, here comes more jargon: Anti-Databases and NoSQL

We all know what SQL is; now get ready to learn about NoSQL. There was a meet-up last month in San Francisco to discuss it, and some details are available at this link. The call for the “inaugural get-together of the burgeoning NoSQL community” drew 150 people, and if you are so inclined, you can subscribe to the email list here.

I’m no expert at NoSQL, but the major gripes seem to be the schema restrictions and what Jon Travis calls “twisting” your object data to fit an RDBMS.

NoSQL appears to be a newly coined collective noun for a lot of technologies you have heard about before. They include Hadoop, BigTable, HBase, Hypertable, and CouchDB, and maybe some that you have not heard of before, like Voldemort, Cassandra, and MongoDB.

But then there is this other thing called NoSQL. The first line on that web page reads

NoSQL is a fast, portable, relational database management system without arbitrary limits (other than memory and processor speed) that runs under, and interacts with, the UNIX Operating System.

Now, I’m confused. Is this the NoSQL that was referenced in the non-conference at non-san-francisco last non-month?

My Point of View

From what I’ve seen so far, NoSQL seems to be a case of disruption at the “low end”. By “low end”, I refer to circumstances where the data model is simple, and in those cases I can see the point that SQL imposes a severe overhead. The same can be said of a model that is relatively simple but refers to “objects” that don’t map naturally onto a relational model (like a document). But, while this appears to be “low end” disruption right now, there is real reason to believe that over time, NoSQL will grow into more complex models.
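Here is a minimal sketch of that “twisting” point, using a made-up blog-post document; a document store keeps the record as one unit, while a normalized relational schema shreds it across tables and reassembles it with joins:

```python
# A hypothetical blog-post "document". A document store (CouchDB,
# MongoDB, ...) keeps it as one unit; a normalized relational schema
# shreds it into posts, tags, and comments tables and reassembles
# it with joins.
post = {
    "_id": "post-42",
    "title": "Not so fast",
    "tags": ["databases", "nosql"],
    "comments": [
        {"who": "alice", "text": "Nice post"},
        {"who": "bob",   "text": "Agreed"},
    ],
}

# Document model: one fetch, no joins.
print(post["title"], len(post["comments"]))

# Relational model: the same data "twisted" into flat rows.
posts = [("post-42", "Not so fast")]
tags = [("post-42", "databases"), ("post-42", "nosql")]
comments = [("post-42", "alice", "Nice post"),
            ("post-42", "bob", "Agreed")]
```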

But the big benefit of SQL is that it is a declarative programming language, which gives implementations the flexibility to arrange program flow in a manner that matches the physical representation of the data. Witness the fact that the same basic SQL language has produced a myriad of representations and architectures, ranging from shared nothing to shared disk, SMP to MPP, row-oriented to column-oriented, with cost-based and rule-based optimizers, and so on.
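To see the declarative point in miniature, compare a query with the hand-rolled loop it replaces. This sketch uses Python’s built-in sqlite3 module with a made-up table; the engine, not the programmer, decides the access path:

```python
import sqlite3

# The SQL states *what* is wanted; the engine decides *how*, which
# is what lets the same query run unchanged on row stores, column
# stores, SMP or MPP systems. Table and data are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 10.0), ("west", 20.0), ("east", 5.0)])

for region, total in conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region"):
    print(region, total)

# The imperative equivalent bakes one access path into the code:
totals = {}
for region, amount in [("east", 10.0), ("west", 20.0), ("east", 5.0)]:
    totals[region] = totals.get(region, 0.0) + amount
print(totals)
```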

There is (thus far) a concern about the relative complexity of the NoSQL offerings when compared with the currently available SQL offerings. I’m sure that folks are giving this considerable attention, and what comes next may be a very interesting piece of technology.



More on TPC-H comparisons

Three charts showing comparisons of TPC-H benchmark data.

Just a quick post to upload three charts that help visualize the numbers that Curt and I have been referring to in our posts. Curt’s original post is here; my post is here.

The first chart shows the disk-to-data ratio that was mentioned. Note that the X-axis, showing TPC-H scale factor, is a logarithmic scale. The benchmark information shows that the ParAccel solution has in excess of 900TB of storage for the 30TB benchmark; the ratio is therefore in excess of 30:1.

The second chart shows the memory-to-data ratio. Note that both the X and Y axes are logarithmic scales. The benchmark information shows that the ParAccel solution has 43 nodes and approximately 2.7TB of RAM; the ratio is therefore approximately 1:11 (or 9%).

The third chart shows the load time (in hours) for various recorded results. The ParAccel result indicates a load time of 3.48 hours. Note again that the X-axis is a logarithmic scale.
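For anyone who wants to check the arithmetic behind the charts, the ratios reduce to two divisions. A quick sketch using the figures quoted above:

```python
# The ratios behind the charts, using the figures quoted above.
scale_factor_tb = 30   # TPC-H SF 30,000 is roughly 30TB of raw data
disk_tb = 900          # reported storage: "in excess of 900TB"
memory_tb = 2.7        # approximately 2.7TB of RAM across 43 nodes

print(f"disk to data:   {disk_tb / scale_factor_tb:.0f}:1")     # 30:1
print(f"memory to data: 1:{scale_factor_tb / memory_tb:.0f}"
      f" ({memory_tb / scale_factor_tb:.0%})")                  # 1:11 (9%)
```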

For easy reading, I have labeled the ParAccel 30TB value on the chart. I have to admit, I don’t understand Curt’s point, and maybe others share this bewilderment. I think I’ve captured the numbers correctly; could someone help verify these, please?

If the images above are shown as thumbnails, you may not be able to see the point I’m trying to make; you need the bigger images to see the pattern.

Revision

In response to an email, I looked at the data again and computed the following RANK() information. Of the 151 results available today, the ParAccel 30TB numbers rank 58th in memory-to-data and 115th in disk-to-data. It is meaningless to compare load-time ranks without factoring in the scale, and I’m not going to bother with that, as the sample size at SF=30,000 is too small.
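For what it’s worth, a rank like the ones above is just a position in the sorted list of derived ratios. A minimal sketch; the ratios below are placeholders, and the real inputs are the 151 published results in the spreadsheet:

```python
# How a rank like the ones above is derived, in spirit: position of
# one result's ratio within the sorted list of all results' ratios.
# The ratios below are placeholders; the real inputs are the 151
# published TPC-H results in the spreadsheet.
def rank(value, values):
    """1-based rank of value among values, ascending (ties share
    the position of the first occurrence, like SQL RANK())."""
    return sorted(values).index(value) + 1

all_ratios = [5.2, 9.8, 30.1, 12.4, 30.1]  # made-up disk:data ratios
print(rank(30.1, all_ratios))              # 4
```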

If you are willing to volunteer some of your time to review the spreadsheet with all of this information, I am happy to send you a copy. Just ask!