What the recent Facebook/WhatsApp announcements could mean

Ever since Facebook acquired WhatsApp (in 2014), I have wondered how long it would take before we found that our supposedly “end-to-end encrypted” messages were being mined by Facebook for its own purposes.

It has been a while coming, but I think it is now clear that end-to-end encryption in WhatsApp isn’t really what it appears to be, and that it will definitely be less secure in the future.

Over a year ago, Gregorio Zanon described in detail why end-to-end encryption doesn’t really mean that Facebook can’t snoop on the messages you exchange with others. There has always been a difference between one-to-one messages and group messages in WhatsApp, and in how the encryption is handled for each. For details of how it is done in WhatsApp, see the detailed write-up from April 2016.
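
To make that distinction concrete, here is a toy sketch – emphatically not WhatsApp’s actual Signal-protocol implementation, just an illustration that assumes the Python cryptography package is installed – of pairwise encryption versus a group “sender key” that is distributed over those pairwise channels. Either way, the relaying server only ever sees ciphertext; the interesting attack surface is the client itself.

```python
# Toy illustration (NOT WhatsApp's real protocol) of pairwise vs. group
# ("sender key") encryption, using the symmetric Fernet scheme from the
# `cryptography` package purely for readability.
from cryptography.fernet import Fernet

# One-to-one: Alice and Bob share a session key; the server relays ciphertext.
pair_key = Fernet.generate_key()
ciphertext = Fernet(pair_key).encrypt(b"hello Bob")
assert Fernet(pair_key).decrypt(ciphertext) == b"hello Bob"

# Group: Alice generates a sender key once, sends it to each member over the
# pairwise channels above, then encrypts every group message just once.
sender_key = Fernet.generate_key()
group_msg = Fernet(sender_key).encrypt(b"hello group")
for member in ("Bob", "Carol", "Dave"):   # each member already holds sender_key
    assert Fernet(sender_key).decrypt(group_msg) == b"hello group"
```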

Now we learn that Facebook is going to be relaxing “end-to-end encrypted”. As reported by Bruce Schneier, who quotes Kalev Leetaru:

Facebook’s model entirely bypasses the encryption debate by globalizing the current practice of compromising devices by building those encryption bypasses directly into the communications clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once.

Some years ago I happened to be in India, at a loose end, and accompanied someone who went to a Government office to get some work done. The work was something to do with a real-estate transaction. The Government office was the usual bustle of people, hangers-on, sweat, the sound of people talking on telephones, and the clacking of typewriters. All of that I was used to, but there was something new that I’d not seen before.

At one point documents were handed to one of the ‘brokers’ who was facilitating the transaction. He set them out on a table and proceeded to take pictures. Aadhar Card (an identity card), PAN Card (tax identification), Driver’s License, … all quickly photographed – and this made my skin crawl (a bit). Then these were quickly sent off to the document writer, sitting three floors down, just outside the building under a tree at his typewriter, generating the documents that would then be certified.

And how was this done? WhatsApp! Not email, not some secure server with 256-bit encryption, just WhatsApp! India in general has rather poor security practices, and this kind of thing is commonplace; people are used to it.

So now that Facebook says they are going to be intercepting and decrypting all messages and potentially sending them off to their own servers, guess what information they could get their hands on!

It seems pointless to expect that US regulators will do anything to protect consumers’ privacy, given that they’re pushing to weaken communication security themselves, and it seems like a foregone conclusion that Facebook will misuse this data, given that they have no moral compass (at least not one that is functioning).

This change has far-reaching implications, and only time will tell how badly it will turn out, but given Facebook’s track record, this isn’t going to end well.

The importance of longevity testing

I worked for many years with, and for, Stratus Technologies, a company that made fault-tolerant computers – computers that just didn’t go down. One of the important things that we did at Stratus was longevity testing.

Not all software errors are detectable quickly – some take time to manifest. Sometimes, just leaving a system to idle for a long time can cause problems. And we used to test for all of those things.
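
For what it’s worth, a longevity (or “soak”) test doesn’t have to be exotic; the essence is keeping the system powered up for a long time and watching for slow drift, not just outright crashes. Here’s a minimal sketch – the /health endpoint, thresholds, and duration are illustrative assumptions, not how Stratus actually did it:

```python
# Minimal sketch of a longevity ("soak") test loop, assuming a hypothetical
# service that exposes a /health endpoint. Names and thresholds are illustrative.
import time
import urllib.request

SOAK_DURATION = 14 * 24 * 3600   # run for two weeks of wall-clock time
CHECK_INTERVAL = 60              # probe once a minute
MAX_LATENCY = 0.5                # seconds; flag slow drift, not just failures

def probe(url="http://localhost:8080/health"):
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=5) as resp:
        ok = resp.status == 200
    return ok, time.monotonic() - start

def soak():
    deadline = time.monotonic() + SOAK_DURATION
    anomalies = []
    while time.monotonic() < deadline:
        try:
            ok, latency = probe()
            if not ok or latency > MAX_LATENCY:
                anomalies.append((time.time(), ok, latency))
        except Exception:                      # a hung or dead service counts too
            anomalies.append((time.time(), False, None))
        time.sleep(CHECK_INTERVAL)
    return anomalies

if __name__ == "__main__":
    print(f"{len(soak())} anomalies over the soak period")
```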

Which is why, when I see stuff like this, it makes me wonder what knowledge we are losing in this mad race towards ‘agile’ and ‘CI/CD’.

Airbus A350 software bug forces airlines to turn planes off and on every 149 hours

The AD (Airworthiness Directive) reads, in part:

Prompted by in-service events where a loss of communication occurred between some avionics systems and avionics network, analysis has shown that this may occur after 149 hours of continuous aeroplane power-up. Depending on the affected aeroplane systems or equipment, different consequences have been observed and reported by operators, from redundancy loss to complete loss on a specific function hosted on common remote data concentrator and core processing input/output modules.

and this:

Required Action(s) and Compliance Time(s):

Repetitive Power Cycle (Reset):

(1) Within 30 days after 01 August 2017 [the effective date of the original issue of this AD], and, thereafter, at intervals not to exceed 149 hours of continuous power-up (as defined in the AOT), accomplish an on ground power cycle in accordance with the instructions of the AOT.

What is ridiculous about this particular issue is that it comes on the heels of “Boeing 787 software bug can shut down planes’ generators IN FLIGHT”, a bug where the generators would shut down after 248 days of continuous operation – a problem that prompted its own AD!
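
Neither directive states the root cause, but both numbers line up suspiciously well with a 32-bit counter overflowing. A purely speculative back-of-the-envelope check (the tick sizes here are guesses, not anything Airbus or Boeing has confirmed):

```python
# Speculative arithmetic: how long until a signed 32-bit tick counter
# overflows, for a couple of plausible (but unconfirmed) tick sizes?
INT32_MAX = 2**31 - 1

def overflow_time(tick_seconds):
    seconds = INT32_MAX * tick_seconds
    return seconds / 3600, seconds / 86400    # (hours, days)

hours, _ = overflow_time(0.00025)   # 250-microsecond ticks
print(f"250 us ticks overflow after ~{hours:.1f} hours")   # ~149.1 hours

_, days = overflow_time(0.01)       # 10-millisecond ticks
print(f"10 ms ticks overflow after ~{days:.1f} days")      # ~248.5 days
```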

Come on, Airbus – my Windows PC has been up longer than your A350!

The GCE outage on June 2, 2019

I happened to notice the GCE outage on June 2 for an odd reason. I have a number of motion-activated cameras that continually stream to a small Raspberry Pi cluster (where TensorFlow does some nifty stuff). This cluster pushes some of the more serious processing onto GCE. Just as a fail-safe, I also have the system generate an email when it notices an anomaly, some unexplained movement, and so on.

And on June 2nd, this all went dark for a while, and I wasn’t quite sure why. Digging around later, I realized that the issue was that I relied on GCE for the cloud infrastructure and Gmail for the email. So when GCE had an outage, the whole thing came apart – there’s no resiliency if you have a single point of failure (SPOF), and GCE was my SPOF.

While I was receiving mobile alerts that there was motion, I got no notification(s) on what the cause was. The expected behavior was that I would receive alerts on my mobile device, and explanations as email. For example, the alert would read “Motion detected, camera-5 <time>”. The explanation would be something like “NORMAL: camera-5 motion detected at <time> – timer activated light change”, “NORMAL: camera-3 motion detected at <time> – garage door closed”, or “WARNING: camera-4 motion detected at <time> – unknown pattern”.

I now realize that the reason was that both the email notification and the pattern detection relied on GCE, and that SPOF caused delays in processing and in email notification. OK, so I fixed my error and now use Office365 for email generation, so at least I’ll get a warning email.
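
The fix is mostly about not letting the alerting path share fate with the thing being monitored. Here is a minimal sketch of a notification sender with a fallback provider – the hostnames, accounts, and the send_alert interface are all hypothetical, not my actual setup:

```python
# Sketch of sending an alert with a fallback SMTP provider, so that a single
# cloud outage doesn't also silence the notification path. Hostnames, accounts
# and passwords below are illustrative placeholders.
import smtplib
from email.message import EmailMessage

PROVIDERS = [
    ("smtp.gmail.com", 587, "alerts@example.com", "app-password-1"),
    ("smtp.office365.com", 587, "alerts@example.org", "app-password-2"),
]

def send_alert(subject, body, to="me@example.com"):
    for host, port, user, password in PROVIDERS:
        msg = EmailMessage()
        msg["Subject"], msg["From"], msg["To"] = subject, user, to
        msg.set_content(body)
        try:
            with smtplib.SMTP(host, port, timeout=10) as smtp:
                smtp.starttls()
                smtp.login(user, password)
                smtp.send_message(msg)
            return host                        # delivered via this provider
        except (smtplib.SMTPException, OSError):
            continue                           # try the next provider
    raise RuntimeError("all mail providers failed")
```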

But I’m puzzled by Google’s blog post about this outage. The summary of that post is that a configuration change that was intended for a small number of servers ended up going to other servers, shit happened, and the cleanup took longer because the troubleshooting network was the same as the affected network.

So, just as I had a SPOF, Google appears to have had a SPOF. But why is it that we still have these issues, where a configuration change intended for a small number of servers ends up going to a large number of servers?

Wasn’t this the same kind of thing that caused the 2017 Amazon S3 outage?

At 9:37AM PST, an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended.

Shouldn’t there be a better way to detect the intended scope of a change, and to verify that the actual scope matches it before the change is applied? It seems like an opportunity for a different kind of check and balance.
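
One low-tech answer is to make the rollout tooling itself enforce a declared blast radius. A rough sketch of the idea (entirely hypothetical – neither Google’s nor Amazon’s internal tooling is described in enough detail to know what they actually do):

```python
# Sketch of a "blast radius" guard for a rollout tool: refuse to apply a change
# to more targets than the author declared, and require an explicit override
# for anything beyond a small canary. All names here are illustrative.
def check_scope(targets, declared_max, canary_limit=5, override=False):
    if len(targets) > declared_max:
        raise RuntimeError(
            f"change resolves to {len(targets)} servers, "
            f"but was declared for at most {declared_max}")
    if len(targets) > canary_limit and not override:
        raise RuntimeError(
            f"{len(targets)} servers exceeds the canary limit of "
            f"{canary_limit}; rerun with an explicit override")
    return targets

# Hypothetical usage: the operator intended ~3 servers, but the selector
# actually matched thousands – the guard raises instead of proceeding.
# check_scope(resolve("s3-billing-subsystem"), declared_max=3)
```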

Building completely redundant systems sounds like a simple solution, but at some point the cost becomes exorbitant. Building completely independent control and user networks may seem like the obvious fix, but is it cost-effective to do that?