The GCE outage on June 2, 2019

I happened to notice the GCE outage on June 2 for an odd reason. I have a number of motion-activated cameras that continually stream to a small Raspberry Pi cluster (where TensorFlow does some nifty stuff). This cluster pushes the more serious processing onto GCE. Just as a fail-safe, I have the system also generate an email when it notices an anomaly, some unexplained movement, and so on.

And on June 2nd, this all went dark for a while, and I wasn’t quite sure why. Digging around later, I realized that the issue was that I relied on GCE for the cloud infrastructure and Gmail for the email. So when GCE had an outage, the whole thing came apart – there’s no resiliency if you have a single point of failure (SPOF), and GCE was my SPOF.

While I was receiving mobile alerts that there was motion, I got no notification of what the cause was. The expected behavior was that I would receive alerts on my mobile device and explanations by email. For example, the alert would read “Motion detected, camera-5 <time>”. The explanation would be something like “NORMAL: camera-5 motion detected at <time> – timer activated light change”, “NORMAL: camera-3 motion detected at <time> – garage door closed”, or “WARNING: camera-4 motion detected at <time> – unknown pattern”.

I now realize that the reason was that both the email notification and the pattern detection relied on GCE, and that SPOF caused delays in the processing and in the email notification. OK, so I fixed my error and now use Office365 for email generation, so at least I’ll get a warning email.
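The fix is easy enough to sketch in Python. The SMTP endpoints below are the providers’ public ones, but the credentials, subject line, and fallback ordering are placeholders for illustration, not my actual setup:

```python
import smtplib
from email.message import EmailMessage

# Each entry is (host, port, username, password). The credentials are
# placeholders; the ordering puts Office365 first so a GCE/Gmail outage
# cannot take out the warning path by itself.
PROVIDERS = [
    ("smtp.office365.com", 587, "alerts@example.com", "app-password-1"),
    ("smtp.gmail.com", 587, "alerts@example.com", "app-password-2"),
]

def send_warning(to_addr: str, body: str) -> bool:
    """Try each provider in turn; return True on the first success."""
    for host, port, user, password in PROVIDERS:
        msg = EmailMessage()
        msg["Subject"] = "WARNING: motion anomaly"
        msg["From"] = user
        msg["To"] = to_addr
        msg.set_content(body)
        try:
            with smtplib.SMTP(host, port, timeout=10) as smtp:
                smtp.starttls()
                smtp.login(user, password)
                smtp.send_message(msg)
            return True
        except (smtplib.SMTPException, OSError):
            continue  # this provider is down; fall through to the next
    return False
```

The point isn’t the code; it’s that the warning path no longer shares fate with the processing path.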

But I’m puzzled by Google’s blog post about this outage. The summary of that post is that a configuration change that was intended for a small number of servers ended up going to other servers, shit happened, and the shit cleanup took longer because the troubleshooting network was the same as the affected network.

So, just as I had a SPOF, Google appears to have had a SPOF. But why is it that we still have these issues, where a configuration change intended for a small number of servers ends up going to a large number of servers?

Wasn’t this the same kind of thing that caused the 2017 Amazon S3 outage?

At 9:37AM PST, an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended.

Shouldn’t there be a better way to detect the intended scope of a change, and to verify that this scope is what the operator actually meant? This seems like an opportunity for a different kind of check and balance.
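I don’t mean anything exotic. Even a guard along the lines of this sketch (all names invented for illustration) would force the operator to declare the intended scope up front and fail closed on a mismatch:

```python
def apply_change(change, targets, declared_count, fleet_size,
                 max_fraction=0.05):
    """Apply `change` only when the resolved scope matches what was declared."""
    if len(targets) != declared_count:
        raise RuntimeError(
            f"scope mismatch: command resolved to {len(targets)} servers "
            f"but the operator declared {declared_count}; aborting"
        )
    if len(targets) > max_fraction * fleet_size:
        raise RuntimeError(
            f"blast radius {len(targets)}/{fleet_size} exceeds "
            f"{max_fraction:.0%}; a second approver is required"
        )
    for server in targets:
        change(server)
```

Arguably, a check like this would have stopped the S3 command above the moment the mistyped input resolved to a larger set of servers than the playbook called for.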

Building completely redundant systems sounds like a simple solution, but at some point the cost becomes exorbitant. Likewise, building completely independent control and user networks may seem like the obvious solution, but is it cost-effective to do that?

Old write-up about CAP Theorem

In 2011, Ken Rugg and I were having a number of conversations around the CAP Theorem, and after much discussion we came up with a succinct inequality. It helped us speak much more precisely about what constituted “availability”, “partition tolerance”, and “consistency”. It also confirmed our suspicion that availability and partition tolerance are not simple binary attributes, yes or no, but rather have shades of gray.

So here’s the write-up we prepared at that time.

parelastic-brewers-conjecture

Unfortunately, the six part blog post that we wrote (on parelastic.com) never made it in the transition to the new owners.

Blockchain is an overhyped technology solution looking for a problem

The article below makes a very simple argument for something that I have felt for a while: blockchain is a cool technology, but the majority of the use cases people talk about are just bullshit.

http://www.coindesk.com/blockchain-intermediaries-hype/

More than 99% of Blockchain Use Cases Are Bullshit

I’ve been following the blockchain ecosystem for some time now largely because it strikes me as yet another distributed database architecture, and I dream about those things.

For some time now, I’ve been wondering what to do after Tesora, and blockchain was one of the things I’d been looking at as a promising technology, but I wasn’t seeing it. Of late I’ve been asking people who claim to be devotees at the altar of blockchain what they see as the killer app. All I hear are a large number of low rumbling sounds.

And then I saw this article by Jamie Burke of Convergence.vc, and I felt better knowing that I’m not the only one who thinks this emperor is in need of a wardrobe.

Let’s be clear: I absolutely agree that bitcoin is a wonderful use of blockchain technology, and it solves the issue of trust very cleverly through proof of work. I think there is little dispute about the elegance of this solution.
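For anyone who hasn’t looked under the hood, here is a toy sketch of the proof-of-work idea; it is not the real Bitcoin header format or difficulty encoding, just the shape of the trick:

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 20) -> int:
    """Grind nonces until SHA-256(block_data || nonce) has at least
    `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # expensive to find, one hash to verify
        nonce += 1
```

The elegance is in the asymmetry: finding the nonce takes real work, while verifying it takes a single hash.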

But once we go past bitcoin, the applications largely sound and feel like my stomach after eating gas station sushi; they sound horrible and throw me into convulsions of pain.

In his article, Jamie Burke talks of 3D printing based on a blockchain-sharded CAD file. I definitely don’t see how blockchain can prevent the double-spend (i.e., buy one Yoda CAD file, print 10,000).

Most of the blockchain ideas I’m seeing are attempts to piggyback on the hot new buzzword, with “blockchain” being used to mean “secure and encrypted database”. After all, there’s a bunch of crypto involved and there’s data stored there, right? So it must be a secure encrypted database.

To which I say, Bullshit!

P.S. Oh, the irony. This blog post references a blog post with a picture labeled “Burke’s Bullshit Cycle”, and the name of this blog is hypecycles.com.

Comparing parallel databases to sharding

I just posted an article comparing parallel databases to sharding on the ParElastic blog at http://bit.ly/JaMeVr

It was motivated by the fact that I’ve been asked a couple of times recently how the ParElastic architecture compares with sharding and it occurred to me this past weekend that

“Parallel Database” is a database architecture but sharding is an application architecture

Read the entire blog post here:

http://bit.ly/JaMeVr
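To make that one-liner concrete, here is a minimal sketch; the shard files, table, and schema are invented, and sqlite3 just stands in for any per-shard database connection:

```python
import sqlite3  # stand-in for any per-shard database connection

# Sharding: the *application* owns the routing logic.
SHARDS = [sqlite3.connect(f"shard{i}.db") for i in range(4)]

def get_user_sharded(user_id: int):
    conn = SHARDS[user_id % len(SHARDS)]  # the app decides where the row lives
    return conn.execute("SELECT * FROM users WHERE id = ?",
                        (user_id,)).fetchone()

# Parallel database: the application speaks plain SQL to one endpoint and
# the engine distributes the work across nodes behind the scenes.
def get_user_parallel(conn, user_id: int):
    return conn.execute("SELECT * FROM users WHERE id = ?",
                        (user_id,)).fetchone()
```

With sharding, the `user_id % len(SHARDS)` decision lives in, and constrains, every application that touches the data; with a parallel database, the application sees one endpoint and the engine owns the distribution.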

SQL, NoSQL, NewSQL and now a new term …

NonsenSQL!

Read all about it at C Mohan’s blog (cmohan.tumblr.com).

Mohan knows a thing or two about databases. As a matter of fact, keeping track of his database-related achievements, publications and citations is in itself a big-data problem.

NonsenSQL, read all about it!