DIY Geiger Counters

In the previous post, I described my false starts with off-the-shelf radon detectors. Radon is radioactive, and anyone who has seen a movie or two knows that the good guys carry Geiger counters that click when there is radioactivity around. So of course, the solution is to get a Geiger counter.

The first one I tried was a kit made by Mighty Ohm. Not knowing any better, I got the version with an SBM-20 tube. This tube detects beta and gamma radiation, but not alpha. It's a nice Geiger counter and a great kit, and it was fun to put together and get working, but let's skip forward: I need one that measures not just beta and gamma, but also alpha particles.

Recall that when Radon decays, it releases an alpha particle. See the picture below.

Figure 1. Radioactive decay of Uranium to Lead, including half lives, and emission. Radon (Rn) is the only element that is a gas, the others are all solids.

I had two choices: get another kit, or get something pre-built. I chose the GMC-600+ from GQ Electronics. It comes pre-assembled and pre-calibrated, it detects alpha, beta, and gamma particles, and reviews gave it good battery life. Most importantly, it was available on Amazon with two-day delivery. So I ordered one and waited. After a false start with the first unit (it had a line of dead pixels), the second one has proved to be really good.

It also has a USB port on which it appears as a simple serial device, and you can read from and write to it directly. They also provide some software (I didn't try it; it is Windows-only). I wrote my own software based on their documented protocol, and it worked quite easily. GQ Electronics makes some interesting hardware, but they are clearly not software people. Still, I do like their Geiger counter, and I'll open-source the software I've written.
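I'll share that code in a later post, but the gist is simple serial I/O. Here is a minimal sketch of the approach, assuming pyserial and the `<GETCPM>>` command from GQ's protocol documentation; the baud rate, the command name, and the 4-byte big-endian reply are assumptions you should verify against the protocol spec for your firmware version:

```python
import struct

# Assumption: on the GMC-600+, <GETCPM>> replies with a 4-byte
# big-endian counts-per-minute value.
def parse_cpm(reply: bytes) -> int:
    """Decode the 4-byte big-endian CPM reply from the counter."""
    if len(reply) != 4:
        raise ValueError("expected a 4-byte CPM reply, got %d bytes" % len(reply))
    return struct.unpack(">I", reply)[0]

def read_cpm(port="/dev/ttyUSB0"):
    """Open the counter's USB-serial port and ask for the current CPM."""
    import serial  # pyserial; assumed installed
    with serial.Serial(port, baudrate=115200, timeout=2) as s:
        s.write(b"<GETCPM>>")
        return parse_cpm(s.read(4))
```

Splitting the byte-parsing out of the I/O makes the decoding testable without the hardware attached.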


The GMC-600+ uses an LND-7317 tube. As shown on its specification page, it can detect alpha, beta, and gamma particles. I found that to convert CPM to uSv/h for this tube, one divides by 350. I'm not really sure where this number comes from, but for now I'm using it and moving forward.
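For reference, here is that conversion as code – a tiny sketch using the divide-by-350 factor from above; the hours-per-year constant is just 24 × 365.25:

```python
CPM_PER_USVH = 350.0  # sensitivity factor for the LND-7317, from above

def cpm_to_usvh(cpm: float) -> float:
    """Convert counts per minute to microsieverts per hour."""
    return cpm / CPM_PER_USVH

def usvh_to_msv_per_year(usvh: float) -> float:
    """Convert a dose rate in uSv/h to mSv/year (8766 hours per year)."""
    return usvh * 24 * 365.25 / 1000.0
```

So 350 CPM works out to 1 uSv/h, or roughly 8.8 mSv/year if sustained.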


On two different days, I conducted the following experiment. I placed the Geiger counter (inside a ziploc bag) in the air-conditioning duct, right next to the filter.

I then ran the circulating fans for two hours, and then shut them off.

On 9/26 the fans were run between 12:30 and 14:30 (local time). On 10/9 the fans were run between 13:45 and 15:45 (local time). Here are the results from the GMC-600+. Note that I converted CPM to mSv/year.

Figure 2. Test on 09/26, fans were run between 12:30 and 14:30 (local time).
Figure 3. Test on 10/09, fans were run between 13:45 and 15:45 (local time).

In the test on 10/09, the counter was placed in the a/c duct many hours before I started the fans. The background radiation is about 1 mSv/year in both tests. On 9/26 the peak was just over 3 mSv/year; on 10/09 it was just over 1.5 mSv/year.

In both cases, the radiation level dropped by 50% (over background) in about 50 minutes.


If you look at the radioactive decay chart above, going from Po218 to Pb210 takes about 50 minutes. It sure looks like the dust in the filter is radioactive, with a decay characteristic that could be related to the chain from Po218 to Pb210! Lots of fun and interesting math to follow in the next blog post.
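The 50-minute figure falls out of the usual exponential-decay formula. A sketch of the back-of-the-envelope estimate, assuming a single effective decay constant (a simplification, as the warning that follows explains):

```python
import math

def half_life_from_observation(t_minutes, level0, level_t, background):
    """Estimate an effective half-life from two above-background readings."""
    excess0 = level0 - background
    excess_t = level_t - background
    return t_minutes * math.log(2) / math.log(excess0 / excess_t)

# 9/26 test: peak ~3 mSv/yr over a ~1 mSv/yr background,
# dropping to ~2 mSv/yr (half the excess) in about 50 minutes.
print(half_life_from_observation(50, 3.0, 2.0, 1.0))  # → 50.0
```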


WARNING: You can't just add half-lives to get effective decay rates. A half-life describes an exponential decay curve, and mere addition is meaningless. It has been a while since I studied the Bateman equations, but in their simplest form they assume a decay chain that begins with all particles of the first type in the chain. That's not what I'm dealing with here – at the moment the fan goes off, there is a collection of particles on the filter, each with its own half-life and fraction. The effective half-life is more complex than the simple Bateman case.
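To make that warning concrete, here is a toy sketch (the isotope fractions are made-up numbers, not measurements) showing that a mixture of exponentials does not halve after the "average" half-life – and note that it also ignores chain ingrowth, which is precisely what the Bateman equations handle:

```python
import math

def remaining(mix, t):
    """Activity of a mixture at time t (minutes).

    mix is a list of (fraction, half_life_minutes) pairs; fractions sum to 1.
    Each component decays independently -- no chain ingrowth.
    """
    return sum(f * 0.5 ** (t / hl) for f, hl in mix)

def time_to_halve(mix, hi=10000.0):
    """Find when the mixture's activity first drops to 50%, by bisection."""
    lo_t, hi_t = 0.0, hi
    for _ in range(100):
        mid = (lo_t + hi_t) / 2
        if remaining(mix, mid) > 0.5:
            lo_t = mid
        else:
            hi_t = mid
    return lo_t

# Hypothetical mix: 60% of a 3-minute isotope, 40% of a 27-minute isotope.
mix = [(0.6, 3.0), (0.4, 27.0)]
print(time_to_halve(mix))
```

The answer comes out near 6 minutes – nowhere near the fraction-weighted average of 12.6 – and it drifts as the short-lived component burns off.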

Of residential radon tests, and sensors

In the last blog post, I started to describe how radon gets into houses, and the radioactive decay chain that produces it. During the home inspection process, a radon test places canisters in the basement for 12 or 24 hours. These canisters are then sent to a lab for testing, and you get a result in a day or two. The Commonwealth of Massachusetts has some information about radon as well as specific details about testing.

Figure 1. Side-by-side comparison of two radon detectors.

These tests give you a number representing the radon level at the time the test was done. But radon levels change throughout the day, even after the test is done. When I saw a passive radon system in the basement, I looked into getting a digital radon meter. I purchased a couple [you can find them at your local hardware store; I found mine online] and set them up in my basement.

After 24 hours they started showing numbers, and they consistently showed different numbers! I’ve blurred the manufacturer’s name on purpose.

Within reason, I can accept differences, but when they were consistently different, and sometimes diverging, I wasn't sure what to make of it. A few days later, one of them consistently showed a reading in excess of 6 pCi/L (pico-Curies per liter) while the second stayed stubbornly below 1.75 or so. What would you do?

I purchased a third one, put all three through a "reset" cycle, and then tossed them into a solid lead box. I left them there, in that box, for 36 hours. When I took them out, I reviewed the readings for the preceding 12 hours [there's a 24-hour period after a reset when the meters report nothing], and all three of them were different. I wasn't comfortable with that result; I wanted higher confidence in the readings.


The next post will cover what came next, my own radon detector.

Of Radon, Radon tests, and home ownership

If you live in the New England area and are about to purchase a house, you will likely come face to face with a radon test. When you get a home inspection, the inspector will likely do this for you. [Even if you waive your home inspection contingency, I strongly recommend that you get a home inspection – in the best case it is uneventful, and in the worst case you cap your loss at whatever you put down with your offer to purchase.]

Radon is the number one cause of lung cancer among non-smokers, according to EPA estimates. Overall, radon is the second leading cause of lung cancer. Radon is responsible for about 21,000 lung cancer deaths every year. About 2,900 of these deaths occur among people who have never smoked. On January 13, 2005, Dr. Richard H. Carmona, the U.S. Surgeon General, issued a national health advisory on radon.

Health Risk of Radon

I had a radon inspection, and the result was that the radon level was "acceptable" [1.7 pCi/L]. The EPA suggests an action level of 4 pCi/L, so all's well, right? Nothing to worry about.

When I moved in, I noticed that the basement had a "passive radon mitigation system". So I started looking into this a bit further, and found little other than a bunch of companies trying to sell me a test. The more credible documents, from the EPA and elsewhere, are hard to read and understand. I tried to find something easier to digest. Hopefully this helps someone else looking for comprehensible radon information.


Here is some high school physics that you’ll need to understand what comes next. Radioactive elements decay over time. When a radioactive element decays, it emits some radiation, and transforms into another element which may, or may not itself be radioactive.

The rate of decay is measured by the element's half-life. If you start with a gram of a radioactive substance with a half-life of 1 day, then at the end of a day you will have 1/2 a gram, after another day 0.25 g, and so on. Hence the name, half-life.
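That rule is just repeated halving, which a one-line formula captures; a quick sketch:

```python
def remaining_grams(initial_g, days, half_life_days=1.0):
    """Mass left after `days`, halving once per half-life."""
    return initial_g * 0.5 ** (days / half_life_days)

print(remaining_grams(1.0, 1))  # → 0.5
print(remaining_grams(1.0, 2))  # → 0.25
```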


Another thing you'll need to understand is where radon comes from. I've summarized that below. The radioactive decay begins with Uranium (U238) and progresses through various elements until we end up with Lead (Pb206). Along the way, each decay has an associated half-life and emits either an alpha (α) or a beta (β) particle.

Figure 1. Radioactive decay of Uranium to Lead, including half lives, and emission. Radon (Rn) is the only element that is a gas, the others are all solids.

The important thing to notice is that Radon (Rn) is the only element in the chain that is a gas; all the others are solids. The gas leaks into the house through cracks in the basement floor, and within a few days decays into Polonium (Po218). The solid decay products tend to stick to dust particles (due to electrical charge) and end up getting inhaled. If the contaminated dust sticks to the airways, further decay occurs within the body and can damage the sensitive cells nearby. This is what leads to cancer.


The next post continues with a description of my adventures with household radon meters.

What the recent Facebook/WhatsApp announcements could mean

Ever since Facebook acquired WhatsApp (in 2014) I have wondered how long it would take before we found that our supposedly “end to end encrypted” messages were being mined by Facebook for its own purposes.

It has been a while coming, but I think it is now clear that end-to-end encryption in WhatsApp isn't really what it appears to be, and it will definitely be less secure in the future.

Over a year ago, Gregorio Zanon described in detail why end-to-end encryption didn't really mean that Facebook couldn't snoop on the messages you exchange with others. There has always been a difference between one-to-one messages and group messages in WhatsApp, and in how the encryption is handled for each. For the details of how it is done in WhatsApp, see the detailed write-up from April 2016.

Now we learn that Facebook is going to relax "end to end encrypted". As reported by Schneier, who quotes Kalev Leetaru:

Facebook’s model entirely bypasses the encryption debate by globalizing the current practice of compromising devices by building those encryption bypasses directly into the communications clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once.

Some years ago, I happened to be in India, and at a loose end, and accompanied someone who went to a Government office to get some work done. The work was something to do with a real-estate transaction. The Government office was the usual bustle of people, hangers-on, sweat, and the sounds of people talking on telephones, and the clacking of typewriters. All of that I was used to, but there was something new that I’d not seen before.

At one point documents were handed to one of the ‘brokers’ who was facilitating the transaction. He set them out on a table, and proceeded to take pictures. Aadhar Card (an identity card), PAN Card (tax identification), Drivers License, … all quickly photographed – and this made my skin crawl (a bit). Then these were quickly sent off to the document writer, sitting three floors down, just outside the building under a tree at his typewriter, generating the documents that would then be certified.

And how was this done? WhatsApp! Not email, not some secure server with 256-bit encryption, just WhatsApp! India in general has rather poor security practices, and this kind of thing is commonplace; people are used to it.

So now that Facebook says they are going to be intercepting and decrypting all messages and potentially sending them off to their own servers, guess what information they could get their hands on!

It seems pointless to expect that US regulators will do anything to protect consumers' privacy, given that they're pushing to weaken communication security themselves, and it seems like a foregone conclusion that Facebook will misuse this data, given that they have no moral compass (at least not one that is functioning).

This change has far-reaching implications, and only time will tell how badly it will turn out. But given Facebook's track record, this isn't going to end well.

The importance of longevity testing

I worked for many years with, and for, Stratus Technologies, a company that made fault-tolerant computers – computers that just didn't go down. One of the important things we did at Stratus was longevity testing.

Not all software errors are detectable quickly – some take time to manifest. Sometimes just leaving a system idle for a long time can cause problems. We used to test for all of those things.

Which is why, when I see stuff like this, it makes me wonder what knowledge we are losing in this mad race towards ‘agile’ and ‘CI/CD’.

Airbus A350 software bug forces airlines to turn planes off and on every 149 hours

The AD reads, in part:

Prompted by in-service events where a loss of communication occurred between some avionics systems and avionics network, analysis has shown that this may occur after 149 hours of continuous aeroplane power-up. Depending on the affected aeroplane systems or equipment, different consequences have been observed and reported by operators, from redundancy loss to complete loss on a specific function hosted on common remote data concentrator and core processing input/output modules.

and this:

Required Action(s) and Compliance Time(s):

Repetitive Power Cycle (Reset):

(1) Within 30 days after 01 August 2017 [the effective date of the original issue of this AD], and, thereafter, at intervals not to exceed 149 hours of continuous power-up (as defined in the AOT), accomplish an on ground power cycle in accordance with the instructions of the AOT .

What is ridiculous about this particular issue is that it comes on the heels of Boeing 787 software bug can shut down planes' generators IN FLIGHT, a bug where the generators would shut down after roughly 248 days of continuous operation, a problem that prompted an AD of its own!
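Neither advisory says publicly what the root cause is, but the numbers are suspicious: a signed 32-bit tick counter overflows at 2^31 ticks, which is about 149 hours at a 4 kHz tick rate and about 248.5 days at a 100 Hz (centisecond) tick rate. Purely speculation on my part, but the arithmetic is a fun sketch:

```python
TICKS_MAX = 2 ** 31  # where a signed 32-bit tick counter overflows

# 4 kHz tick: overflow after ~149 hours (the A350 figure)
hours_at_4khz = TICKS_MAX / 4000 / 3600

# 100 Hz (centisecond) tick: overflow after ~248.5 days (the 787 figure)
days_at_100hz = TICKS_MAX / 100 / 86400

print(round(hours_at_4khz, 1))  # → 149.1
print(round(days_at_100hz, 1))  # → 248.6
```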

Come on, Airbus – my Windows PC has been up longer than your A350!

The GCE outage on June 2 2019

I happened to notice the GCE outage on June 2 for an odd reason. I have a number of motion-activated cameras that continually stream to a small Raspberry Pi cluster (where TensorFlow does some nifty stuff). This cluster pushes the more serious processing onto GCE. Just as a fail-safe, I have the system also generate an email when it notices an anomaly, some unexplained movement, and so on.

And on June 2nd, this all went dark for a while, and I wasn't quite sure why. Digging around later, I realized that the issue was that I relied on GCE for the cloud infrastructure and Gmail for the email. So when GCE had an outage, the whole thing came apart – there's no resiliency if you have a single point of failure (SPOF), and GCE was my SPOF.

While I was receiving mobile alerts that there was motion, I got no notification of what the cause was. The expected behavior was that I would receive alerts on my mobile device, and explanations as email. For example, the alert would read "Motion detected, camera-5 <time>". The explanation would be something like "NORMAL: camera-5 motion detected at <time> – timer activated light change", "NORMAL: camera-3 motion detected at <time> – garage door closed", or "WARNING: camera-4 motion detected at <time> – unknown pattern".

I now realize that the email notification and the pattern detection both relied on GCE, and that SPOF caused delays in processing and in email notification. OK, so I fixed my error and now use Office365 for email generation, so at least I'll get a warning email.

But I'm puzzled by Google's blog post about this outage. The summary of that post is that a configuration change intended for a small number of servers ended up going to other servers, shit happened, and the cleanup took longer because the troubleshooting network was the same as the affected network.

So, just as I had a SPOF, Google appears to have had one. But why do we still have these issues, where a configuration change intended for a small number of servers ends up going to a large number of servers?

Wasn’t this the same kind of thing that caused the 2017 Amazon S3 outage?

At 9:37AM PST, an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended.

Shouldn’t there be a better way to detect the intended scope of a change, and a verification that this is intended? Seems like an opportunity for a different kind of check-and-balance?
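One imaginable check-and-balance – a sketch, not anything GCE or S3 actually does, and the function and threshold are hypothetical – is to make every rollout declare its intended blast radius up front, and refuse to proceed when the matched set is larger:

```python
class BlastRadiusExceeded(Exception):
    pass

def checked_rollout(targets, declared_max, apply_fn):
    """Apply a change only if it touches no more servers than declared.

    targets: the servers the change actually matched.
    declared_max: the operator's declared intent (e.g. from the playbook).
    apply_fn: callback that performs the change on one server.
    """
    if len(targets) > declared_max:
        raise BlastRadiusExceeded(
            "change matched %d servers but only %d were declared"
            % (len(targets), declared_max))
    for server in targets:
        apply_fn(server)
    return len(targets)
```

With something like this, a fat-fingered selector that matches the whole fleet fails loudly before touching anything, instead of after.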

Building completely redundant systems sounds like a simple solution, but at some point the cost becomes exorbitant. Building completely independent control and user networks may seem like the obvious answer, but is it cost-effective to do that?

Try this DIY Neutral Density Filter for Long Exposure Photos

I have heard of the trick of using welder's glass as a cheap ND filter. But from my childhood experience of arc welding, I was not sure how one would deal with the reality that welder's glasses are not precision optics.

This article addresses at least the issue of coloration and offers some nice tips for adjusting color balance in general.

https://digital-photography-school.com/diy-neutral-density-filter/

Automate everything

I like things to be automated – everything. Coffee in the morning, bill payment, cycling the cable modem when it goes wonky, everything. The adage used to be: if you do something twice, automate it. I think it should be, "if you do anything, automate it; you will likely have to do it one more time".

So I used to automate things like converting DOCX to PDF and PPTX to PDF on Windows all the time. But for the past two years, after moving to a Mac, this is one thing I've not been able to automate, and it bugged me, a lot.

No longer.

I had to make a presentation which went with a descriptive document, and I wanted to submit the whole thing as a PDF. Try as I might, Powerpoint and Word on the Mac would not make this easy.

It is disgusting that I had to resort to AppleScript + Automator to do this.

I found this, and this.

It is a horrible way to do it, but yes, it works.

Now, before the Mac purists flame me for using Microsoft Word and Microsoft PowerPoint, let me point out that the Mac default tools don't make it any easier. Apple Keynote does not appear to offer a solution either; you have to resort to Automator for it too.

So, eventually, I had to resort to automation based on those two links to make two PDFs and then this to combine them into a single PDF.

This is shitty and horrible, and I am using it now. But do you know of some other solution, using simple Python, and not requiring LibreOffice or a handful of other tools? Isn't this a solved problem? If not, I wonder why?

Monitoring your ISP – Fun things to do with a Raspberry Pi (Part 2)

In Part 1 of this blog post, I described a problem I’ve been facing with my internet service, and the desired solution – a gizmo that would reboot my cable modem when the internet connection was down.

The first thing I got was a PiRelay from SB Components. This nifty HAT has four relays that will happily switch a 110V or 250V load. The site claims 7A @ 240V, more than enough for all of my network gear. See image below, left.

Next I needed some way to put this in a power source. Initially I thought I'd get a simple power strip with individual switches on the outlets – I could just connect the relays in place of the switches and I'd be all set! So I bought one of those (above right).

Finally I just made a little junction box with four power outlets, and wired them up to the relays.

The software to control this is very straightforward.

  1. It turns out that the way Microsoft checks for internet connectivity is to do a GET on "http://www.msftncsi.com/ncsi.txt", which returns the text "Microsoft NCSI". OK, so I do that.
  2. I also made a list of a dozen or so web sites that I visit often, and I make a conn.request() to each of them to fetch the HEAD.

If internet connectivity appears to be down, power-cycle "relay 0", which is where my cable modem is plugged in. This is a simple cron job that runs every 10 minutes.
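The whole thing fits in a page of Python. A sketch of the logic – the NCSI probe is real, but I've simplified the fallback checks to plain GETs here, the fallback site list is illustrative, and the relay call is a placeholder for whatever your relay HAT's library provides:

```python
import urllib.request

NCSI_URL = "http://www.msftncsi.com/ncsi.txt"
FALLBACK_SITES = ["https://example.com", "https://example.org"]  # illustrative

def _default_fetch(url):
    with urllib.request.urlopen(url, timeout=5) as r:
        return r.read().decode("utf-8", "replace")

def internet_up(fetch=_default_fetch):
    """True if Microsoft's NCSI probe, or any fallback site, answers."""
    try:
        if fetch(NCSI_URL).strip() == "Microsoft NCSI":
            return True
    except Exception:
        pass
    for site in FALLBACK_SITES:
        try:
            fetch(site)
            return True
        except Exception:
            continue
    return False

# Cron entry point: if the checks fail, power-cycle relay 0 via the
# relay HAT's library (API varies by vendor), e.g.:
#   if not internet_up():
#       relay.cycle(0)
```

Injecting the `fetch` function makes the decision logic testable without touching the network.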

Works like a champ. Another simple Raspberry Pi project!

If you are interested, ping me and I’ll post more details. I intend to share the code for the project soon – once I shake out any remaining little gremlins!

The relationship between accuracy, precision, and relevance

OK, this is a rant.

It annoys me to no end when people present graphs like this one. Yes, the numbers do in fact add up to 100%, but does it make any sense to have so many digits after the decimal when in reality this is based on a sample size of 6? Wouldn't 1/2, 1/6, 1/3 have sufficed? What about 0.5, 0.17, and 0.33? Do you really have to go to all those decimal places?

Excel has made it easy for people to make meaningless graphs like this, where merely clicking a little button gives you more decimal places. I'm firmly convinced that just having more digits after the decimal point doesn't really add anything in most situations.

Let’s start first with some definitions

accuracy is a “degree of conformity of a measure to a standard or a true value“.

precision is the “the degree of refinement with which an operation is performed or a measurement stated“.

One can be precise and accurate. For example, when I say that the sun rises in the east 100% of the time, I am both precise and accurate. (I am just as accurate, and even more precise, if I say that the sun rises in the east 100.000% of the time.)

One can be precise, and inaccurate. For example, when I say that the sun rises in the east 90.00% of the time, I am being precise but inaccurate.

So, as you can see, it is important to be accurate; the question is how precise one has to be. Assume that I conduct an experiment and tabulate the results: I find that 1/2 the time I have outcome A, 1/6 of the time outcome B, and 1/3 of the time outcome C. It would be both precise and accurate to state the results (as shown in the pie chart above) as 50.0000%, 16.66667%, and 33.33333% for the various possible outcomes.
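A sketch of what I'd consider honest reporting for a sample of 6 – round to the nearest whole percent, since each single observation moves the result by almost 17 points anyway:

```python
def honest_percentages(counts):
    """Report whole-number percentages for small-sample tallies."""
    total = sum(counts)
    return [round(100 * c / total) for c in counts]

print(honest_percentages([3, 1, 2]))  # → [50, 17, 33]
```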

But does that really matter? I believe that it does. Consider the following two pictures, these are real pictures, of real street signs.

2018-08-16 18.18.09

This sign is on the outskirts of Mysore, in India.

2018-09-08 10.37.06

This sign is in Lancaster, MA.

In the first picture (the one from Mysore, India), we have distances to various places, precise to 0.01 km (apparently). Mysore Palace is 4.00 km away, the zoo is 4.00 km away, Mangalore is 270.00 km away. What's 0.01 km? That's about 10 m (about 33 feet). It is conceivable that this is accurate (possible, not probable). So I'd say this is precise, and may be accurate.

The second picture (the one from Lancaster, MA) is most definitely precise – to 4 decimal places, no less. The bridge clearance is 3.3528 meters, the sign claims. It also indicates that it is 11 feet. A foot is 12 inches, an inch is 2.54 centimeters, so a foot is exactly 0.3048 meters, and 11 feet is therefore exactly 3.3528 meters. So this is both precise and accurate (assuming that the bridge does in fact have an 11′ clearance).
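You can check that equivalence yourself – a foot is defined as exactly 0.3048 meters, so the sketch below just multiplies:

```python
FOOT_IN_METERS = 0.3048  # exact, by definition

clearance_m = 11 * FOOT_IN_METERS
print(round(clearance_m, 4))  # → 3.3528
```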

The question is this: is the precision (4.00 km, or 3.3528 m) really relevant? We're talking about street signs, measuring things with a fair amount of error. In the case of the bridge, the clearance could change by as much as 2″ between summer and winter because of expansion and contraction of the road surface (frost heaves). So wouldn't it make more sense to just stick with 11′, or 3.5 meters?

So back to our graph with the 50.0000%, 16.66667%, and 33.33333%. Does it really matter to the person looking at the graph that these numbers are presented to a precision of 0.00001%? Given that the experiment had a sample size of 6, absolutely not.

So please, when presenting facts (and numbers), do think about accuracy; that's important. But make the precision consistent with the relevance. When driving a car to the zoo, is the last 33 feet really going to kill me? And am I really interested in the clearance of the bridge to the thickness of a human hair, or a sheet of paper?


Android in a virtual machine

Very often, I’ve found that it is an advantage to have Android running in a virtual machine (say on a desktop or a laptop) and use the Android applications.

Welcome to android-x86

Android runs on x86-based machines thanks to the android-x86 project. I download images from here. What follows is a simple, step-by-step how-to for getting Android running on your x86-based computer with VMware. I assume much the same can be done with another hypervisor (like VirtualBox).

Install android-x86

The screenshots below show the steps involved in installing android-x86 on a Mac.

android-x86-1

Choose “Create a custom virtual machine”

android-x86-2

Choose “FreeBSD 64-bit”

android-x86-3

I used the default size (for now; I’ll increase the size later).

android-x86-4

For starters, we can get going with this.

android-x86-5

I was installing Android 8.1 RC1.

android-x86-6 android-x86-7

Increased #cores and memory as above.

android-x86-8

Resized the drive to 40GB.

android-x86-9

And with that, began the installation.

Options during installation.

The options and choices are self-explanatory. The screenshots show the options that were chosen (selected).

android-x86-10 android-x86-11 android-x86-12 android-x86-13 android-x86-14 android-x86-15 android-x86-16 android-x86-17 android-x86-18 android-x86-19 android-x86-20 android-x86-21 android-x86-22 android-x86-23 android-x86-24

One final thing – enable 3D graphics acceleration

Before booting, you should enable 3D graphics acceleration. I've found that without it, you end up at a text-mode shell prompt instead of the Android UI.

android-x86-26
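If you prefer to script this rather than click through the UI, the checkbox corresponds (to the best of my knowledge; verify against your VMware version, as key names can vary) to a key in the VM's .vmx file:

```ini
mks.enable3d = "TRUE"
```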

And finally, a reboot!

android-x86-25

That's all there is to it – you end up in the standard Android "first boot" process.

Daylight Saving Time isn’t worth it, European Parliament members say

I have long suspected that this was the case, especially since no one could give a simple explanation of why we do this.

  • For the farmers
  • For the farm animals
  • To save energy

Those are just some of the explanations I have heard.

But I get it: in countries that are wide (west to east), you need more time zones. That is the real solution.

https://arstechnica.com/tech-policy/2018/02/daylight-saving-time-isnt-worth-it-european-parliament-ministers-say/?amp=1

This article appeared almost a month ago

This article appeared almost a month ago, and I just got around to reading it. It almost looks like it was written looking in the rear-view mirror! Is there any of these seven trends that isn't already a hot topic?

7 #Cloud Computing Trends to Watch in 2018

http://ow.ly/dcG830ihigd

Don’t use a blockchain unless you really need one

Now, that’s easy to say.

But blockchain today is what Docker was 18 months ago.

Startups got funded for doing “Docker for sandwiches”, or “Docker for underpants” (not really, but bloody close).

Talks were accepted at conferences because of the title “Docker, Docker, Docker, Docker, Docker” (really).

And today it is blockchain.

https://www.coindesk.com/dont-use-blockchain-unless-really-need-one/

I hope you weren’t counting on Project Fi …

Google’s Project Fi international data service goes down.

One of the things I have come to realize is that it is not a good plan to depend on services provided by the likes of Google.

They work 99.9 or 99.95% of the time, and they work well. But to depend on them for five 9s – that's dumb.

https://www.engadget.com/amp/2018/01/13/google-project-fi-international-data-outage/

Mysore School of Architecture

I happened to be at the Mysore School of Architecture, in Mysore, India, and had a chance to walk around. And click some photographs 🙂 The building is bright and airy, and the pictures below show some views of this. There was a lot of art all around the campus, all of it created by the students:

  • Some modern art under the stairs.
  • Modern art in the courtyard.
  • A nice painting which shows two views, depending on where you stand.
  • A nice mural on the wall.
  • Some lovely photos; the sun and the wind have weathered the display, but the display and the captions are lovely.
  • Many of the classrooms have nice caricatures of famous architects.
  • And finally, some lovely origami under the stairs!

I loved my short trip to the college and will surely be back when there are some students there.

How do you answer this interview question, “what do you make in your current job?”

A couple of months ago, a former co-worker called me and asked if I would provide a reference for her in a job search (which I readily agreed to). Then she asked, "This company wants to make me an offer, and they called and asked what I currently make, and for a copy of a paystub. What should I do?"

Personally, I find this question stupid. I've been asked it many times (including quite recently), and in every instance I've been caught off guard by it (doh!) and answered in what I now consider to be the wrong way.

Every hiring manager has a range of salaries that they are willing to pay for a position, and they have a range of bonuses, a range of stock options, and other incentives. And then there are the common benefits that everyone gets (401(k), vacation, …). So why even ask the question? Why not make an offer that makes sense and be done with it?

If you are a hiring manager / HR person on the hiring side, do you ask this question?

If you are a candidate, how do you handle this question?

In any event, here’s what I recommended to my friend, answer the question along these lines.

  • I’m sure you are asking me this so you can make me a competitive offer that I’ll accept
  • I’m also sure that you have a range for all the components of the offer that you intend to make to me; base pay, bonus, stock options, …
  • So what I’ll tell you is what I am looking for in an offer and I’ll leave it to you to make me an offer based on the standard ranges that you have
  • I am looking for a take-home pay of $ _____ each month
  • Since you offer a 401(k) plan which I intend to contribute $ _____ to, that means I am looking for a total base pay of $ ______ per year.
  • I am looking for a total annual compensation of $ ______ including bonuses
  • In addition, I am looking for ______ days of vacation each year.

That’s it. When asked for a copy of current pay-stub or anything like that, I recommend that you simply decline to provide it and make it clear that this is not any of their business.

Now, whether one can get away with this answer or not depends on how strong your position is for the opening in question. Some companies have a ‘policy’ that they need this paystub/W-2 stuff.

Not providing last-pay information and not following their 'process' could make a crabby HR person label you 'not a team player' or some such bogus thing and put your resume in the 'special inbox' marked 'Basura'.

In any event, this all turned out fine: my friend told me that she was given a good offer, which she accepted.

How do you approach this question?

Blockchain is an over hyped technology solution looking for a problem

The article makes a very simple argument for something I have felt for a while: blockchain is a cool technology, but the majority of the use cases people talk about are just bullshit.

http://www.coindesk.com/blockchain-intermediaries-hype/

Stratoscale acquires Tesora

Yesterday it was announced that Tesora had been acquired by Stratoscale. A number of articles were published about this, along with the official announcement by Stratoscale.

Thanks to all of you who emailed, texted, tweeted, called, and pinged me on IRC 🙂 I’m overwhelmed by the volume and all the good wishes. I’ll reply to each of you individually, sorry it may take a couple of days for me to do that.

To all of our investors and advisors, first in ParElastic and later in Tesora, thank you all for your help and support. To everyone at Tesora who is moving to Stratoscale, all the very best to you. It has truly been a wonderful six years working with you and I thoroughly enjoyed it.

Doug: It’s been especially awesome working with you these past six years. You are a great leader of people, and you have built and managed a truly exceptional team. People in your team like you, respect you, are comfortable approaching you with the strangest questions, and are willing to work with you over and over again. Not an easy thing to pull off over the extraordinarily long period that you’ve been able to do this. You were clearly not an easy taskmaster and your team consistently delivered miracles. But along the way you managed to ensure that everyone was having a good time.

Ken: No entrepreneur can have hoped for a better partner than you. It has been an extraordinary ride and it has been truly my honor and privilege to have taken this ride along with you. I think you were instrumental in building a company which was a very special place to work, where we built some excellent technology, got some marquee customers, and had a lot of fun doing it. I’ve learned a lot, about startups, about technology, about business, and about myself; thank you very much for this awesome experience.

Several of you have asked me “what’s next for Amrith”. I don’t know yet, I’m trying to figure that out (thanks to all of you who have offered to help me figure this out, I will certainly take you up on that).

In the short term, I’m going to continue to work on Trove, finish up my term as the PTL for Ocata and continue to work on the project as we begin the Pike cycle.

What comes later, I have no idea but if you have ideas, I’m all ears.

The Acurite SmartHub and monitoring solution

I purchased an Acurite 10-sensor indoor humidity and temperature monitoring system and am surprisingly happy with it. I was expecting a generally crappy experience, but I have to say I was wrong, and it isn’t that I’d set my expectations low; the system is truly quite good.

The system I purchased is this one.

You get a smartHub and 10 sensors. Pictured below is the hub and a sensor.


The setup: smartHub

The smartHub comes with a little power adapter and an ethernet cable. Plug it into a wall outlet, connect the ethernet cable to your router, and DHCP does its thing; the smartHub is online.

It initiates a series of accesses to some locations and downloads firmware and the like. (I’ve captured network traces; if I find anything interesting, I’ll blog about it.) In a couple of minutes the lights stabilize, and you have to press a button that says “Activate”.

The setup: online

Then you create an online account at the Acurite site, and once you are logged in, you associate your account with the device. You identify it by the number on the bottom (spoiler alert: the number is the MAC address of the device).

Within about a minute, the device shows up and you are good to go for the next step.

The setup: sensors

Each sensor takes two AAA batteries; pop them in, and within a minute the web portal shows a new device, which you can rename and mount wherever you want. Very slick and easy.

Within about 15 minutes I had 10 sensors online and reporting.

I was so happy with this that I’ve purchased another smartHub and 10 more indoor/outdoor sensors (these don’t have the LCD display).

Enough of the happy talk

OK, so what did I not like?

  1. It has been years (literally, years and years) since I’ve purchased a gadget that requires batteries, and the batteries are not included. It’s a good thing that I purchase AAA’s in packs of 50. Like every other commodity piece of electronics these days, these sensors are made in China, so just stick two AAA’s in the box please.
  2. Once you power up a sensor, it takes under a minute to initialize and register with the smartHub. But, if you stick batteries in two of them in quick succession, there’s no way to tell (on the Web UI) which is which. There’s no number on the sensor, nothing which you can associate with what you see on the screen; just “Temperature and Humidity Sensor – NN” where NN is a number incrementing from 1 to 10.
  3. Once you get sensors on the Web UI, there is no way to re-order them. They will forever remain in the same order. So if you decide to move a device from one location to another, and you want to group your devices based on location, you cannot do that.
  4. Wired ethernet, really? I’m sure the cable they have to include costs about as much as a wireless chipset would, even if wireless would make the setup just a bit harder.
  5. The web app is just about OK. Fine, it sucks. It allows you to add alerts for each device. By default, low battery and loss of signal rules are added for each device. But, I want to add temperature rules for different devices. Yes, you can do that but you get to do it one device at a time. No copy/paste available.
  6. They claim to have an Android application, but it won’t work on a tablet; instead they expect you to use a full-blown web app there. The Android app won’t install on my Android phone either; lots of others seem to be complaining about this as well.

Closing thoughts

Acurite strikes me as a company that makes fine hardware, and they appear to have done an absolutely bang-up job on the initial setup and “getting started” part of the experience.

They are not a software company. The software part of the “after setup” experience is kind of horrible.

They offer no easy API-based mechanism to retrieve your sensor data. Yes, on the web app you can click a couple of buttons, play with date controls, and get a link to some AWS S3 bucket mailed to you with your data as a CSV. But really: advertise an API, get someone to write an IFTTT channel, and then you’ll be cooking with gas.

Next post will be a deconstruction of the protocol, what you get when you point your web browser at the smartHub’s IP address, and those kinds of fun things.

One more thing

The people at Acurite Support are wonderful. I have (in the past three days) spoken with two of them, and interacted with one via email. The people I spoke with were knowledgeable, and very helpful.

The wait times on hold are quite bad. I waited 25 minutes and 15 minutes respectively. There is the usual boring elevator music while you wait in line, with an announcement every minute that you are “XX in line”. There is no indication of how long your wait will be, but you are offered the option of getting a callback.

An odd thing though is that while I was in line and I heard the message “you are second in line” a couple of times, I suddenly ended up being “third in line”. How someone got ahead of me in line, I know not.

But, their support is great. 5 stars for that!

Amazon’s demented plans for its warehouse blimp with drone fleet 

http://arstechnica.com/information-technology/2016/12/amazons-demented-plans-for-its-warehouse-blimp-with-drone-fleet/?amp=1

Shit like this is what gives patents a bad name!

Another look at IFTTT

In March 2012 (that’s a while ago) I wrote this article about a new service I’d discovered called IF-This-Then-That.

Now, almost five years on, IFTTT has come a long way. Just looking at the channels (they now call them services), it is amazing how far they’ve come.

Time to go revisit IFTTT. It still amazes me that they are a free service.

Facebook at a Crossroads

Interesting article in MIT Technology review at https://www.technologyreview.com/s/603198/facebook-at-a-crossroads/.

More than half of the 3.4 billion people with Internet access log on to Facebook each month. Revenue in the first nine months of 2016 jumped 36 percent to $19 billion; profit nearly tripled, to $6 billion. Yet the company’s founder has spent the year talking up his plans to become something much larger and more meaningful.

The just-concluded election, the coming crackdown on fake news, and getting mired in a censorship controversy after blocking the video stream of Philando Castile after he was shot in Minnesota surely didn’t help.

I wonder how much all these things will affect Facebook, and how much that is driving the urge to do unnatural things.

Drones, Virtual Reality, get a grip …

The case(s) for and against PGP

When I read “I’m throwing in the towel on PGP, and I work in security”, which appeared as an op-ed in Ars Technica, I felt that it certainly deserved a response. While Filippo Valsorda makes some valid points about PGP/GPG, I felt they were less shortcomings in the scheme than usability issues that have unfortunately been ignored.

Then I read “Why I’m not giving up on PGP”, an excellent article, also in Ars Technica, and it does a much better job of refuting the original than I could ever have done.

Both are well worth the read.

May I please get whatever Windows version powers the Dreamliner?

It is being widely reported that the FAA has issued an Airworthiness Directive (AD) requiring that Boeing 787 Dreamliners be rebooted every 21 or so days.

This is not a hoax.

This is the AD issued by the FAA on 2016-09-24; I obtained a copy of this AD from here.

The AD states:

This AD requires repetitive cycling of either the airplane electrical power or the power to the three flight control modules (FCMs). This AD was prompted by a report indicating that all three FCMs might simultaneously reset if continuously powered on for 22 days. We are issuing this AD to address the unsafe condition on these products.

A little investigation indicates that this isn’t the first time the FAA has had to do this. The last time they had to do something like this was in 2015-09, when they issued this AD, which I obtained from here. That AD was more specific about the reason for the problem, stating:

This condition is caused by a software counter internal to the GCUs that will overflow after 248 days of continuous power.

It has been widely rumored that the present AD about the 21-day action is similarly motivated, the logic being a timer with millisecond precision that would overflow after about 24 days.

This is all very droll, and I hope to hell that they power cycle their planes on the ground regularly and all that. My only question is this: since they are in fact running Windows under the covers, how on earth are they able to keep the thing going for 21 days?

With Windows 7 that was a piece of cake, but this new Windows 10 that I have wants to reboot every night, and I don’t have any say in the matter.

So whatever Boeing did to keep the damn thing going for 21 days, it would be great if they shared that with the world.
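The arithmetic behind these overflow rumors is easy to check. Assuming a signed 32-bit counter (that part is speculation from press coverage, not something either AD confirms), a quick sketch:

```python
# Days until a signed 32-bit tick counter overflows, at a given tick resolution.
# The 32-bit assumption is widely rumored, not stated in the ADs themselves.

INT32_MAX = 2**31 - 1  # largest value a signed 32-bit counter can hold

def days_to_overflow(tick_seconds: float) -> float:
    """Days of continuous operation before an int32 tick counter overflows."""
    return INT32_MAX * tick_seconds / 86_400  # 86,400 seconds in a day

# A 10 ms (centisecond) tick overflows after ~248.6 days,
# matching the 248-day figure in the 2015 GCU AD.
print(f"10 ms ticks: {days_to_overflow(0.01):.1f} days")

# A 1 ms tick overflows after ~24.9 days, in the neighborhood
# of the 21/22-day figure in the present AD.
print(f"1 ms ticks:  {days_to_overflow(0.001):.1f} days")
```

The numbers line up closely enough with both ADs to make the rumor plausible, which is the whole point.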

The Monty Hall problem

I’ve long wanted a simple explanation of the Monty Hall problem, and I’ve never found one that I liked. Some I really detested, like one that tried to make some lame analogy to baseball pitchers.

Anyway, here is the simplest explanation I’ve found yet. First, what’s the problem?

In a game show, the contestant is shown into a room with three identical closed doors. He is informed that behind one door is a prize and behind the other two doors, there is nothing.

He is then asked to pick a door. Once he has picked a door, the host proceeds to open one of the other two doors (that he had not picked) and shows the contestant that there is nothing behind that door.

The host then offers the contestant the option of either changing his selection (picking the third remaining door), or sticking with his initial choice.

What should the contestant do?

The simplistic answer is that once the contestant has been shown that there is nothing behind one door, the problem reduces to two doors and therefore the odds are 50-50 and the contestant has no motivation to switch.

In reality, this is not the case, and the contestant would be wise to switch. Here is why.

Three doors: behind one of them is the prize; behind the other two, there is nothing.

The contestant now picks a door. For the purposes of this illustration, let’s assume that the contestant picks the door in the middle as shown below.

Since the prize is behind one of the three doors, the odds that the prize is behind the door the contestant picked are 1/3. By extension, the probability that it is behind one of the other two doors is 2/3 (1/3 for each of those doors).

So far, we’re all likely on solid footing, so let’s now bring in the twist. The game show host can always find a door behind which there is nothing. And as shown below, he does.

The game show host has picked the third door, and there’s nothing there.

However, nothing has changed the fact that the probability that the prize is behind the door the contestant chose is 1/3, and the probability that it is behind one of the other two doors is 2/3. What has changed is that the host has revealed that it is not behind the door at the far right. Since the combined probability for the two doors the contestant did not pick is 2/3, and the far right door is now eliminated, the probability that the prize is behind the far left door must be 2/3.

With this new information therefore, the contestant would be wise to switch his choice.
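If the argument still feels slippery, a simulation settles it. Here is a small sketch using Python’s standard random module; the 2/3 win rate for switching falls right out:

```python
import random

def play(switch: bool) -> bool:
    """Simulate one round of the Monty Hall game; return True if the contestant wins."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # The host always opens a door that is neither the contestant's pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
stay_wins = sum(play(switch=False) for _ in range(trials))
switch_wins = sum(play(switch=True) for _ in range(trials))
print(f"stay:   {stay_wins / trials:.3f}")    # hovers around 0.333
print(f"switch: {switch_wins / trials:.3f}")  # hovers around 0.667
```

Run it a few times: staying wins about a third of the time, switching about two thirds, exactly as the door-counting argument predicts.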

Defining Success in OpenStack (With Heisenberg in Mind)

This article first appeared at http://www.tesora.com/defining-success-in-openstack/

I recently read Thierry Carrez’s blog post where he references a post by Ed Leafe. Both reminded me that in the midst of all this hand wringing about whether the Big Tent was good or bad, at fault or not at fault, and whether companies were gaming the system (or not), the much bigger issue is being ignored.

We don’t incentivize people and organizations to do the things that will make OpenStack successful, and this shortcoming poses a real and existential threat to OpenStack.

Werner Heisenberg observed that the act of measuring the position of a sub-atomic particle affects its momentum, and vice versa. In much the same way, the act of measuring an individual’s (or organization’s) performance in some area impacts that performance itself.

By measuring commits, lines of code, reviews and other such metrics that are not really measures of OpenStack’s success, we are effectively causing individuals and organizations to do the things that make them appear “good” on those metrics. They aren’t “gaming the system”, they are trying to look good on the measures that you have established for “success”.

At Tesora, we have always had a single-minded focus on a single project: Trove. We entered OpenStack as the DBaaS company, and have remained true to that. All the changes we have submitted to OpenStack, and the reviews and participation by Tesora have been focused on the advancement of DBaaS. We have contributed code, documentation, tests, and reviews that have helped improve Trove. To us, this single minded focus is a good thing because it has helped us advance the project, and to make it easier for people to deploy and use it in practice. And to us, that is the only thing that really matters.

The same is true for all of OpenStack. Actual adoption is all that matters. What we need from the Technical Committee and the community at large is a concerted effort to drive adoption, and to make it easier for prospects to deploy, and bring into production, a cloud based on OpenStack. And while I am a core reviewer, and I am the Trove PTL, and I wrote a book about Trove, and our sales and marketing team do mention that in customer engagements, we do that only because those are the “currency” of OpenStack. To us, the only things that really matter are ease of use, adoption, a superlative user experience, and a feature-rich product. Without that, all this talk about contribution, and the number of cores and PTLs, is as completely meaningless as whether the Big Tent approach resulted in a loss of focus in OpenStack.

But, remember Heisenberg! Knowing that what one measures changes how people act means that it would be wise for the Technical Committee to take the leadership in defining success in terms of things that are surrogates for ease of installation, ease of deployment, the number of actual deployments, and things that would truly indicate the success of OpenStack.

Let’s stop wasting time defending the Big Tent. It was done for good reasons, and it had consequences. Recognize what those consequences are, perceive the reality, and act accordingly.