Old write-up about CAP Theorem

In 2011, Ken Rugg and I had a number of conversations about the CAP Theorem, and after much discussion we came up with the succinct inequality below. It helped us speak much more precisely about what constitutes “availability”, “partition tolerance”, and “consistency”. It also confirmed our suspicion that availability and partition tolerance are not simple binary attributes (yes or no), but rather come in shades of gray.

So here’s the write-up we prepared at that time.

[Image: parelastic-brewers-conjecture]

Unfortunately, the six-part blog post that we wrote (on parelastic.com) never made it through the transition to the new owners.

More than 99% of Blockchain Use Cases Are Bullshit

I’ve been following the blockchain ecosystem for some time now largely because it strikes me as yet another distributed database architecture, and I dream about those things.

For some time now I’ve been wondering what to do after Tesora, and blockchain was one of the technologies I had been looking at as promising, but I just wasn’t seeing it. Of late I’ve been asking people who claim to be devotees at the altar of blockchain what they see as the killer app. All I hear in response is a lot of low rumbling.

And then I saw this article by Jamie Burke of Convergence.vc, and I feel better knowing that I’m not the only one who thinks this emperor is in need of a wardrobe.

Let’s be clear: I absolutely agree that bitcoin is a wonderful use of blockchain technology, and that it solves the issue of trust very cleverly through proof of work. I think there is little dispute about the elegance of that solution.
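For anyone who hasn’t peeked under the hood, the essence of proof of work fits in a few lines. Here is a toy sketch (not Bitcoin’s actual double-SHA-256, block-header-and-target scheme; the data string and difficulty are made up) that shows the trick: the nonce is expensive to find and cheap to verify.

```python
# Toy proof-of-work sketch: find a nonce so that SHA-256(data + nonce)
# starts with a given number of zero hex digits. Bitcoin's real scheme is
# more involved, but the trust mechanism is the same: producing the nonce
# costs work, while checking it costs almost nothing.
import hashlib

def proof_of_work(data: str, difficulty: int = 4) -> int:
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

def verify(data: str, nonce: int, difficulty: int = 4) -> bool:
    digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = proof_of_work("block-of-transactions")
print(nonce, verify("block-of-transactions", nonce))
```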

But once we go past bitcoin, the applications largely sound and feel like my stomach after eating gas station sushi; they sound horrible and throw me into convulsions of pain.

In his article, Jamie Burke talks of 3D printing based on a blockchain-sharded CAD file. I definitely don’t see how blockchain can prevent the double-spend (i.e., buy one Yoda CAD file, print 10,000).

Most of the blockchain ideas I’m seeing are attempts to piggyback on the hot new buzzword, with “blockchain” being used to mean “secure and encrypted database”. After all, there’s a bunch of crypto involved and there’s data stored there, right? So it must be a secure, encrypted database.

To which I say, Bullshit!

P.S. Oh, the irony. This blog post references a blog post with a picture labeled “Burke’s Bullshit Cycle”, and the name of this blog is hypecycles.com.

Is this the end of NoSQL?

If it is, you read it here first!

I posted this article on my other (work related) blog.

I think the future for NoSQL isn’t as bright as a lot of pundits would have you believe. Yes, yes, I know that MongoDB got a $1.2 billion valuation. Here are some other things to keep in mind.
  1. In the heyday of OODBMS, XML DB, and OLAP/MDX, there was similar hype about those technologies.
  2. Today, more and more NoSQL vendors are trying to build “SQL’isms” into their products. I often hear of people who want a product that has the scalability of NoSQL with transactions and a standard query language. Yes, we have that; it is called a horizontally scalable RDBMS!

Technologies come and technologies go but the underlying trends are worth understanding.

And the trends don’t favor NoSQL.

Ingesting data at over 1,000,000 rows/second with MySQL in Amazon’s cloud!

I just posted this article on my other (work related) blog.

http://www.parelastic.com/blog/ingesting-over-1000000-rows-second-mysql-aws-cloud

Just to be clear, this was with standard MySQL, InnoDB, and with machines in Amazon’s cloud (AWS).

The data was inserted using standard SQL INSERT statements and can be queried immediately using SQL as well. All standard database stuff, no NoSQL tomfoolery going on.

This kind of high ingest rate has long been considered out of the reach of traditional databases; that is simply not true.
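The post itself has the configuration details. Purely as an illustration of the kind of client-side batching that matters most for ingest rates, here is a sketch of multi-row INSERT statements. The table, columns, and connection parameters are placeholders, mysql-connector-python is just one driver choice, and the real numbers also depend on running many such clients in parallel and on InnoDB tuning.

```python
# Rough sketch of client-side batching for high ingest rates: send
# multi-row INSERT statements instead of one row per statement/commit.
# Table name, columns, and connection parameters are placeholders.
import mysql.connector

conn = mysql.connector.connect(host="db-host", user="app", password="...",
                               database="metrics")
cur = conn.cursor()

BATCH = 1000
rows = [(i, f"sensor-{i % 64}", i * 0.1) for i in range(100_000)]

for start in range(0, len(rows), BATCH):
    chunk = rows[start:start + BATCH]
    placeholders = ", ".join(["(%s, %s, %s)"] * len(chunk))
    flat = [value for row in chunk for value in row]
    # One round trip inserts 1000 rows instead of one.
    cur.execute(
        "INSERT INTO readings (id, sensor, value) VALUES " + placeholders,
        flat,
    )
conn.commit()
cur.close()
conn.close()
```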

Why MongoDB and NoSQL make me want to scream

Recently I saw an article on the MongoHQ blog where they described a “slow query” and how to improve the performance.

The problem is this. I have a document with the following fields:

  • ID
  • submitdate
  • status
  • content

And the status can be something like ‘published’, ‘rejected’, ‘in progress’, ‘draft’, etc.

I want to find all articles with some set of statuses and sorted by submit date.

Apparently the MongoDB solution to this problem (according to their own blog) is to:

  1. create a new field called ‘unpublished_submit_date’
  2. set that field to a ‘null’ value if the document is of an uninteresting status (i.e., published)
  3. set that field to the submitdate if the document is of an interesting status (i.e., not published)
  4. then query on the single column unpublished_submit_date
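For contrast, here is roughly what the query the poster actually wanted looks like, stated directly. The collection, table, and field names follow the example above and are otherwise made up; the pymongo calls are illustrative and not taken from the MongoHQ post.

```python
# The query the poster wanted, stated directly. Collection and table
# names are made up; field names follow the list above.
from pymongo import MongoClient

articles = MongoClient()["blog"]["articles"]

# MongoDB: filter on a set of statuses, sort by submit date. The MongoHQ
# advice was to avoid this shape entirely by maintaining a derived
# unpublished_submit_date field and querying that single field instead.
unpublished = (articles
               .find({"status": {"$in": ["draft", "in progress", "rejected"]}})
               .sort("submitdate", 1))

# The SQL equivalent needs no schema contortions at all:
sql = """
SELECT id, submitdate, status, content
FROM articles
WHERE status IN ('draft', 'in progress', 'rejected')
ORDER BY submitdate
"""
```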

Really? Really? You’ve got to be kidding me.

For more on this interesting exchange, a response from a MongoDB fanboy, and a follow-up, read my work blog at

http://parelastic.com/blog/more-subject-improving-performance-removing-query-logic

The things people have to do to use NoSQL boggle the mind!

Comparing parallel databases to sharding

I just posted an article comparing parallel databases to sharding on the ParElastic blog at http://bit.ly/JaMeVr

It was motivated by the fact that I’ve been asked a couple of times recently how the ParElastic architecture compares with sharding, and it occurred to me this past weekend that

“Parallel Database” is a database architecture but sharding is an application architecture
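To make that distinction concrete, here is a minimal sketch of application-side sharding; the shard DSNs, the key choice, and the table name are all made up. The point is that the routing logic lives in the application, whereas a parallel database presents a single connection and does the equivalent work inside the engine.

```python
# Minimal sketch of application-side sharding: the application picks the
# shard for every query. Shard DSNs and table names are made up.
import hashlib

SHARDS = ["mysql://shard0/app", "mysql://shard1/app", "mysql://shard2/app"]

def shard_for(customer_id: int) -> str:
    # Route by hashing the shard key and taking it modulo the shard count.
    digest = hashlib.md5(str(customer_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

def load_orders(customer_id: int):
    dsn = shard_for(customer_id)
    # connect(dsn) elided; the point is that routing, rebalancing, and any
    # cross-shard query or join all become the application's problem.
    return dsn, ("SELECT * FROM orders WHERE customer_id = %s", (customer_id,))

print(load_orders(42))
```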

Read the entire blog post here:

http://bit.ly/JaMeVr

Scaling MongoDB: A year with MongoDB (Engineering at Kiip)

Here is the synopsis:

  • A year with MongoDB in production
  • Nine months spent moving 95% of the data off MongoDB and onto PostgreSQL

Over the past 6 months, we’ve “scaled” MongoDB by moving data off of it.

Read the complete article here: http://bit.ly/HIQ8ox


The MongoDB rant. Truth or hoax?

Two days ago, someone called ‘nomoremongo’ posted this on Y Combinator News.

Several people (me included) stumbled upon the article, read it, and took it at face value. It’s on the Internet, it’s got to be true, right?

No, seriously. I read it, and parts of it resonated with my understanding of how MongoDB works. I saw some of the “warnings” and they seemed real. Ironically, it was this one (#7) that convinced me the post was genuine.

**7. Things were shipped that should have never been shipped**

Things with known, embarrassing bugs that could cause data
problems were in "stable" releases--and often we weren't told
about these issues until after they bit us, and then only b/c
we had a super duper crazy platinum support contract with 10gen.

The response was to send us a hot patch that they were
calling an RC internally, and then run that on our data.


Who but a naive engineer would feel this kind of self-righteous outrage? 😉 I’ve shared this outrage at some point in my career, but then I’ve also seen companies ship backup software (and have a party) when they knew that restore couldn’t possibly work (yes, a hot patch), ship software that could corrupt data in pretty mainstream circumstances (yes, a hot patch before anyone installed it), and so on.

I spoke with a couple of people who know MongoDB much better than I do, and they all nodded along at some of the things they read. The same article was also forwarded to me by someone who is clearly knowledgeable about MongoDB.

OK, truth has been established.

Then I saw this tweet.

Which was odd. Danny doesn’t usually swear (well, I’ve done things to him that have made him swear, and a lot more, but that was a long time ago). Right, Danny?


Well, he had me at “Start thinking for yourself”. But then he went off the meds: “MongoDB is the next MySQL”. Really…


I think there’s a kernel of truth in the MongoDB rant. And it is certainly the case that a lot of startups are making dumb architectural decisions because someone told them that “MongoDB is web-scale”, or that the CAP Theorem proves relational databases are dead.

Was this a hoax? I don’t know. But it was certainly a reminder that not all scams originate in Nigeria, and not all of them begin by telling me that I could make a couple of billion dollars if I just put up a couple of thousand.

On migrating from Microsoft SQL Server to MongoDB

I was just reading this article http://www.wireclub.com/development/TqnkQwQ8CxUYTVT90/read describing one company’s experience migrating from SQL Server to MongoDB.

Having read the article, my only question to these folks is “why do it”?

Let’s begin by saying that we should discount all one-time costs related to data migration. They are just that: one-time migration costs. However monumental, if you believe that the final outcome is going to justify them, grin and bear the cost.

But, once you are in the (promised) MongoDB land, what then?

The things the author believes they will miss are:

  • maturity
  • tools
  • query expressiveness
  • transactions
  • joins
  • case insensitive indexes on text fields
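To make a few of those concrete, here is a minimal sketch of three items on that list (transactions, joins, and case-insensitive indexes on text fields), using Python’s built-in sqlite3 purely as a stand-in for SQL Server or any other RDBMS; the table and column names are made up.

```python
# Minimal sketch of transactions, joins, and a case-insensitive index,
# using sqlite3 as a stand-in for any RDBMS. Names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE articles (
        id INTEGER PRIMARY KEY,
        author_id INTEGER REFERENCES authors(id),
        title TEXT,
        status TEXT
    );
    -- case-insensitive index on a text field
    CREATE INDEX idx_articles_title ON articles (title COLLATE NOCASE);
""")

# A transaction: either both rows land, or neither does.
with conn:
    cur.execute("INSERT INTO authors (id, name) VALUES (1, 'Amrith')")
    cur.execute("INSERT INTO articles (id, author_id, title, status) "
                "VALUES (10, 1, 'On Sharding', 'draft')")

# A join, plus a case-insensitive lookup that can use the index above.
cur.execute("""
    SELECT a.title, au.name
    FROM articles a JOIN authors au ON au.id = a.author_id
    WHERE a.title = 'on sharding' COLLATE NOCASE
""")
print(cur.fetchall())   # [('On Sharding', 'Amrith')]
```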

Really? And you would still roll the dice in favor of a NoSQL science project? Well, then the benefits must be really, really awesome! Let’s take a look at what those are:

  • MongoDB is free
  • MongoDB is fast
  • Freedom from rigid schemas
  • ObjectID’s are expressive and handy
  • GridFS for distributed file storage
  • Developed in the open

OK, I’m scratching my head now. None of these really blows me away. Let’s look at these one at a time.

  • MongoDB is free
    • So are PostgreSQL and MySQL
  • MongoDB is fast
    • So are PostgreSQL and MySQL if you put them on the same SSD and multiple HDD’s like you claim you do with MongoDB
  • Freedom from rigid schemas
    • I’ll give you this one, relational databases are kind of “old school” in this department
  • ObjectID’s are expressive and handy
    • Elastic Transparent Sharding schemes like ParElastic overcome this with Elastic Sequences, which give you the same benefits. A half-way decent developer could do this for you with a simple sharded architecture (see the sketch after this list).
  • GridFS for distributed file storage
    • Replication anyone?
  • Developed in the open
    • Yes, MongoDB is free and developed in the open the way a puppy is “free”. You just told us all the “costs” associated with this “free puppy”.
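On the ObjectID point above: this is not ParElastic’s Elastic Sequences mechanism, but as one example of what a half-way decent developer could do, stock MySQL can hand out non-colliding IDs across shards using its auto_increment_increment and auto_increment_offset settings. A sketch of the idea (the shard count is illustrative):

```python
# One well-known way a sharded MySQL setup gets globally unique IDs without
# ObjectIDs: every shard uses the same auto_increment_increment and a
# distinct auto_increment_offset, so their auto-generated IDs never collide.
# Shown as the SQL each shard would run; connection handling is elided.
NUM_SHARDS = 4

def id_settings_for_shard(shard_index):
    return [
        f"SET GLOBAL auto_increment_increment = {NUM_SHARDS};",
        f"SET GLOBAL auto_increment_offset = {shard_index + 1};",
    ]

for shard in range(NUM_SHARDS):
    print(f"-- shard {shard}")
    print("\n".join(id_settings_for_shard(shard)))
# Shard 0 then generates 1, 5, 9, ...; shard 1 generates 2, 6, 10, ...; and so on.
```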

So really, why do people use MongoDB? I know there are good circumstances where MongoDB will whip the pants off any relational database, but I submit to you that those are the 1%.

To this day, I believe that the best description of MongoDB is this one:

http://www.xtranormal.com/watch/6995033/mongo-db-is-web-scale

Mongo DB is web scale
by: gar1t

Database scalability myth (again)

A common myth that has been perpetrated is that relational databases do not scale beyond two or three nodes. That, and the CAP Theorem, are held up as the reasons why relational databases are unscalable and why NoSQL is the only feasible solution!

Yesterday I ran into a very thought-provoking article that makes just this case. You can read that entire post here. In this post, the author Srinath Perera provides an interesting template for choosing the data store for an application. In it, he makes the case that relational databases do not scale beyond 2 to 5 nodes. He writes,

The low scalability class roughly denotes the limits of RDBMS where they can be scaled by adding few replicas. However, data synchronization is expensive and usually RDBMSs do not scale for more than 2-5 nodes. The “Scalable” class roughly denotes data sharded (partitioned) across many nodes, and high scalability means ultra scalable systems like Google.

In 2002, when I started at Netezza, the first system I worked on (affectionately called Monolith) had almost 100 nodes. The first production-class “Mercury” system had 108 active nodes (112 nodes, 4 of them spares). By 2006, the systems had over 650 nodes, and more recently much larger systems have been put into production. Yet people still believe that relational databases don’t scale beyond two or three nodes!

Systems like ParElastic (Elastic Transparent Sharding) can certainly scale to much more than two or three nodes, and I’ve run prototype systems with up to 100 nodes on Amazon EC2!

Srinath’s post does contain an interesting perspective on unstructured and semi-structured data though, one that I think most will generally agree with.

All you ever wanted to know about the CAP Theorem but were scared to ask!

I just posted a longish blog post (six parts actually) about the CAP Theorem at the ParElastic blog.

http://www.parelastic.com/database-architectures/an-analysis-of-the-cap-theorem/

-amrith

Running databases in virtualized environments

I have long believed that databases can be successfully deployed in virtual machines. Among other things, that is one of the central ideas behind ParElastic, a start-up I helped launch earlier this year. Many companies (Amazon, Rackspace, and Microsoft, for example) offer hosted databases in the cloud!

But yesterday I read this post on RWW. The article talks about a report published by Principled Technologies in July 2011, commissioned by Intel, which

tested 12 database applications simultaneously – and all delivered strong and consistent performance. How strong? Read the case study, examine the results and testing methodology, and see for yourself.

Unfortunately, I believe that discerning readers of this report are more likely to question the conclusion(s) based on the methodology. What do you think?


A Summary of the Principled Technologies Report

In a nutshell, this report seeks to make the case that industry-standard servers with virtualization can in fact deliver the performance required to run business-critical database applications.

It attempts to do so by running VMware vSphere 5.0 on the newest four-socket Intel Xeon E7-4870 based server, hosting 12 database applications, each with an 80GB database in its own virtual machine. The Intel Xeon E7-4870 is a 10-core processor with two hardware threads per core, clocked at 2.4GHz, and the server had 1TB of RAM (64 modules of 16GB each). Storage in the server itself was 2 disks of 146GB each (10k SAS); in addition, there was an EMC CLARiiON Fibre Channel SAN with some disks configured in RAID0. In total they configured 6 LUNs, each of 1066GB (over a TB each). The VMs ran Windows Server 2008 R2 and SQL Server 2008 R2.

The report claims that the test that was performed was “Benchmark Factory’s TPC-H like workload”. Appendix B somewhat (IMHO) misleadingly calls this the “Benchmark Factory TPC-H score”.

The result is that these twelve VMs, each running against an 80GB database, were able to consistently process in excess of 10,000 queries per hour.

A comparison is made to a Netezza whitepaper which claims that the TwinFin data warehouse appliance, running the “Nationwide Financial Services” workload, was able to process around 2,500 queries per hour, with a maximum of 10,000 queries per hour.

The report leads the reader to believe that since the 12 VMs in the test consistently ran more than 10,000 queries per hour, business-critical applications can in fact be deployed in virtualized environments and deliver good performance.

The report therefore concludes that business-critical applications can be run on virtualized platforms, deliver good performance, and reduce cost.


My opinion

While I entirely believe that virtualized database servers can produce very good performance, and while I entirely agree with the conclusion that was reached, I don’t believe that this whitepaper makes even a modestly credible case.

I ask you to consider this question: “Is the comparison with Netezza running 2,500 queries per hour legitimate?”

Without digging too far, I found that the Netezza whitepaper talks of a data warehouse with “more than 4.5TB of data”, 10 million database changes per day, 50 concurrent users at peak times and 10-15 on average, running 2,500 qph with a peak of 10,000 qph at month end, with 99.5% of queries completing in under one minute.

Based on the information disclosed, this comparison does not appear to be valid. Note well that I am not saying the comparison is invalid, rather that the case has not been made sufficiently to justify it.

An important reason for my skepticism is that when processing database operations like joins between two tables, doubling the data volume can quadruple the amount of computation required. If you are performing a three-table join, doubling the data can increase the computation involved by as much as eight times. This is the very essence of the scalability challenge with databases!
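To put rough numbers on that, here is the back-of-the-envelope version. This is a worst-case, cross-product view of join cost; real planners with indexes and hash joins do far better, but the super-linear growth is the point.

```python
# Back-of-the-envelope join scaling: if a worst-case join touches the cross
# product of its inputs, scaling every table by a factor k scales the work
# by k**n for an n-table join.
def worst_case_join_cost(table_sizes):
    cost = 1
    for size in table_sizes:
        cost *= size
    return cost

two_way = worst_case_join_cost([1000, 1000])
two_way_doubled = worst_case_join_cost([2000, 2000])
print(two_way_doubled / two_way)      # 4.0 -> doubling the data quadruples the work

three_way = worst_case_join_cost([1000, 1000, 1000])
three_way_doubled = worst_case_join_cost([2000, 2000, 2000])
print(three_way_doubled / three_way)  # 8.0 -> and 8x for a three-table join
```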

I get an inkling that this may not be a valid comparison when I look at Appendix B, which states that the total test time was under 750 seconds in all cases.

That feeling is compounded by the fact that I don’t see how many concurrent queries were run against each database. Single-user database performance is a whole lot better and more predictable than multi-user performance, and the Netezza paper specifically talks about multi-user concurrency performance, not single-user performance.

Reading very carefully, I did find a mention that a single server running 12 VMs hosted the client(s) for the benchmark. Since ~15k queries were completed in under 750s, we can say that each query lasted about 0.05s. Those are really, really short queries. Impressive, but not the kind of workload on which one would generally expect Netezza to be deployed. The Netezza report clearly states that 99.5% of queries completed in under one minute, which leads me to conclude that the queries being run in the subject benchmark are at least two orders of magnitude away!

Conclusion

Virtualized environments like Amazon EC2, Rackspace, Microsoft Azure, and VMware are perfectly capable of running databases and database applications. One need only look at Amazon RDS (now with MySQL and Oracle), database.com, SQL Azure, and offerings like that to realize that this is in fact the case!

However, this report fails to make a compelling case for it. Making a comparison to a different whitepaper, and simply relating the results to the “queries per hour” in that paper, causes me to question the methodology. Once readers question the method(s) used to reach a conclusion, they are likely to question the conclusion itself.

Therefore, I don’t believe that this report achieves what it set out to do.


References

You can get a copy of the white paper here (a link to Scribd) or here (a link to the PDF on RWW).

This case study references a Netezza whitepaper on concurrency, which you can get here. The Netezza whitepaper is “CONCURRENCY & WORKLOAD MANAGEMENT IN NETEZZA”, prepared by Winter Corp and sponsored by Netezza.

I have also archived copies of the two documents here and here.

A link to the TPC-H benchmark can be found on the TPC web site here.

Disclosure

In the interest of full disclosure: in the past I was an employee of Netezza, a company that is referenced in this report.

Oracle’s new NoSQL announcement

Oracle’s announcement of a NoSQL solution at Oracle Open World 2011 has produced a fair amount of discussion. Curt Monash blogged about it some days ago, and so did Dan Abadi. A great description of the new offering (Dan credits it to Margo Seltzer) can be found here or here. I think the announcement and this whitepaper do in fact bring something new to the table, something we’ve not had until now.

1. First, the Oracle NoSQL solution extends the notion of configurable consistency in a surprising way. Solutions so far have ranged from synchronous consistency to eventual consistency, but all of them spoke of consistency at some point in time; eventual consistency has been the minimum guarantee of other NoSQL solutions. The whitepaper referenced above makes this very clear and characterizes this not in terms of consistency but of durability.

Oracle NoSQL Database also provides a range of durability policies that specify what guarantees the system makes after a crash. At one extreme, applications can request that write requests block until the record has been written to stable storage on all copies. This has obvious performance and availability implications, but ensures that if the application successfully writes data, that data will persist and can be recovered even if all the copies become temporarily unavailable due to multiple simultaneous failures. At the other extreme, applications can request that write operations return as soon as the system has recorded the existence of the write, even if the data is not persistent anywhere. Such a policy provides the best write performance, but provides no durability guarantees. By specifying when the database writes records to disk and what fraction of the copies of the record must be persistent (none, all, or a simple majority), applications can enforce a wide range of durability policies.
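To make the range of policies concrete, here is a conceptual sketch of the “how many copies must acknowledge before the write returns” knob. This is not the Oracle NoSQL Database API; the replica objects and their put() method are hypothetical.

```python
# Conceptual sketch of configurable write durability (NOT the Oracle NoSQL
# Database API). Replicas are hypothetical objects whose put() returns True
# once the write has reached stable storage locally.
from concurrent.futures import ThreadPoolExecutor, as_completed

def required_acks(policy, num_replicas):
    return {"none": 0,
            "majority": num_replicas // 2 + 1,
            "all": num_replicas}[policy]

def write(key, value, replicas, policy="majority"):
    """Send the write to every replica; return once the policy is satisfied."""
    needed = required_acks(policy, len(replicas))
    pool = ThreadPoolExecutor(max_workers=max(len(replicas), 1))
    futures = [pool.submit(replica.put, key, value) for replica in replicas]
    if needed == 0:                    # fastest, but no durability guarantee
        pool.shutdown(wait=False)
        return True
    acks = 0
    for future in as_completed(futures):
        if future.result():            # this copy reports the write is durable
            acks += 1
        if acks >= needed:
            pool.shutdown(wait=False)  # don't wait for the slower replicas
            return True
    pool.shutdown(wait=False)
    return False                       # not enough copies made it durable
```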

2. It sets forth a very specific set of use-cases for this product. There has been much written by NoSQL proponents about its applicability in all manner of data management situations. I find this section of the whitepaper to be particularly fact-based.

The Oracle NoSQL Database, with its “No Single Point of Failure” architecture is the right solution when data access is “simple” in nature and application demands exceed the volume or latency capability of traditional data management solutions. For example, click-stream data from high volume web sites, high-throughput event processing, and social networking communications all represent application domains that produce extraordinary volumes of simple keyed data. Monitoring online retail behavior, accessing customer profiles, pulling up appropriate customer ads and storing and forwarding real-time communication are examples of domains requiring the ultimate in low-latency access. Highly distributed applications such as real-time sensor aggregation and scalable authentication also represent domains well-suited to Oracle NoSQL Database.

Several have also observed that this position is in stark contrast to Oracle’s previous position on NoSQL. Oracle released a whitepaper in May 2011 entitled “Debunking the NoSQL Hype”. That document has since been removed from Oracle’s website. You can, however, find cached copies all over the internet. Ironically, the last line of that document reads,

Go for the tried and true path. Don’t be risking your data on NoSQL databases.

With all that said, this certainly seems to be an offering that brings an interesting twist to the NoSQL space, if nothing else by highlighting the shortcomings of existing NoSQL solutions.

[2011-10-07] Two short updates here.

1. There has been an interesting exchange in the comments on Dan Abadi’s blog between him and Margo Seltzer (the author of the whitepaper) on the definition of eventual consistency. I subscribe to Dan’s interpretation, which says that perpetually returning to the T0 state is not a valid definition (in the limit) of eventual consistency.
2. Some kind soul has shared the Oracle “Debunking the NoSQL Hype” whitepaper here. You have to click download a couple of times and then wait 10 seconds for an ad to complete.