Tom’s Hardware: How to Install Ubuntu on Your Raspberry Pi.
Ever since Facebook acquired WhatsApp (in 2014) I have wondered how long it would take before we found that our supposedly “end-to-end encrypted” messages were being mined by Facebook for its own purposes.
It has been a while coming, but I think it is now clear that end-to-end encryption in WhatsApp isn’t really what it appears to be, and it will definitely be less secure in the future.
Over a year ago, Gregorio Zanon described in detail why it was that end-to-end encryption didn’t really mean that Facebook couldn’t snoop on all of the messages you exchanged with others. There’s always been this difference between one-to-one messages and group messages in WhatsApp, and how the encryption is handled on each. For details of how it is done in WhatsApp, see the detailed write-up from April 2016.
Facebook’s model entirely bypasses the encryption debate by globalizing the current practice of compromising devices by building those encryption bypasses directly into the communications clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once.
Some years ago, I happened to be in India, and at a loose end, and accompanied someone who went to a Government office to get some work done. The work was something to do with a real-estate transaction. The Government office was the usual bustle of people, hangers-on, sweat, and the sounds of people talking on telephones, and the clacking of typewriters. All of that I was used to, but there was something new that I’d not seen before.
At one point documents were handed to one of the ‘brokers’ who was facilitating the transaction. He set them out on a table, and proceeded to take pictures. Aadhaar Card (an identity card), PAN Card (tax identification), Driver’s License, … all quickly photographed – and this made my skin crawl (a bit). Then these were quickly sent off to the document writer, sitting three floors down, just outside the building under a tree at his typewriter, generating the documents that would then be certified.
And how was this done: WhatsApp! Not email, not some secure server with 256-bit encryption, just WhatsApp! India in general has rather poor security practices, and this kind of thing is commonplace; people are used to it.
So now that Facebook says they are going to be intercepting and decrypting all messages and potentially sending them off to their own servers, guess what information they could get their hands on!
It seems pointless to expect that US regulators will do anything to protect consumers’ privacy given that they’re pushing for weakened communication security themselves, and it seems like a foregone conclusion that Facebook will misuse this data, given that they have no moral compass (at least not one that is functioning).
This change has far-reaching implications and only time will tell how badly it will turn out but given Facebook’s track record, this isn’t going to end well.
I worked for many years with, and for Stratus Technologies, a company that made fault tolerant computers – computers that just didn’t go down. One of the important things that we did at Stratus was longevity testing.
Not all software errors are detectable quickly – some take time to manifest. Sometimes, just leaving a system to idle for a long time can cause problems. And we used to test for all of those things.
Which is why, when I see stuff like this, it makes me wonder what knowledge we are losing in this mad race towards ‘agile’ and ‘CI/CD’.
The AD reads, in part:
Prompted by in-service events where a loss of communication occurred between some avionics systems and avionics network, analysis has shown that this may occur after 149 hours of continuous aeroplane power-up. Depending on the affected aeroplane systems or equipment, different consequences have been observed and reported by operators, from redundancy loss to complete loss on a specific function hosted on common remote data concentrator and core processing input/output modules.
Required Action(s) and Compliance Time(s):
Repetitive Power Cycle (Reset):
(1) Within 30 days after 01 August 2017 [the effective date of the original issue of this AD], and, thereafter, at intervals not to exceed 149 hours of continuous power-up (as defined in the AOT), accomplish an on ground power cycle in accordance with the instructions of the AOT.
What is ridiculous about this particular issue is that it comes on the heels of “Boeing 787 software bug can shut down planes’ generators IN FLIGHT”, a bug where the generators would shut down after 250 days of continuous operation, a problem that prompted an AD of its own!
Come on Airbus, my Windows PC has been up longer than your A350!
This is a great article with a lovely video clip.
This was captured from about 45 to 60 feet …
I was too lazy to get any closer.
This at the Mysore School of Architecture
I happened to notice the GCE outage on June 2 for an odd reason. I have a number of motion-activated cameras that continually stream to a small Raspberry Pi cluster (where TensorFlow does some nifty stuff). This cluster pushes some more serious processing onto GCE. Just as a fail-safe, I have the system also generate an email when it notices an anomaly, some unexplained movement, and so on.
And on June 2nd, this all went dark for a while, and I wasn’t quite sure why. Digging around later, I realized that the issue was that I relied on GCE for the cloud infrastructure, and Gmail for the email. So when GCE had an outage, the whole thing came apart – there’s no resiliency if you have a single point of failure (SPOF), and GCE was my SPOF.
While I was receiving mobile alerts that there was motion, I got no notification(s) on what the cause was. The expected behavior was that I would receive alerts on my mobile device, and explanations as email. For example, the alert would read “Motion detected, camera-5 <time>”. The explanation would be something like “NORMAL: camera-5 motion detected at <time> – timer activated light change”, “NORMAL: camera-3 motion detected at <time> – garage door closed”, or “WARNING: camera-4 motion detected at <time> – unknown pattern”.
I now realize that the reason was that the email notification and the pattern detection both relied on GCE, and that SPOF caused delays in processing and in email notification. OK, so I fixed my error and now use Office365 for email generation, so at least I’ll get a warning email.
But, I’m puzzled by Google’s blog post about this outage. The summary of that post is that a configuration change that was intended for a small number of servers ended up going to other servers, shit happened, shit cleanup took longer because troubleshooting network was the same as the affected network.
So, just as I had a SPOF, Google appears to have had one too. But why is it that we still have these issues, where a configuration change intended for a small number of servers ends up going to a large number of servers?
Wasn’t this the same kind of thing that caused the 2017 Amazon S3 outage?
At 9:37AM PST, an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended.
Shouldn’t there be a better way to detect the intended scope of a change, and a verification that this is intended? Seems like an opportunity for a different kind of check-and-balance?
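Here is one shape such a check-and-balance could take – a hypothetical sketch (the function and its names are mine, not anything AWS or Google actually runs): make the operator declare the expected blast radius up front, and refuse to run if the resolved target list doesn’t match.

```python
# Hypothetical sketch: refuse to run a fleet-wide change unless the number
# of targeted servers matches what the operator declared they expected.
def confirm_scope(targets, expected_count, tolerance=0):
    """Raise if the resolved target list differs from the declared intent."""
    actual = len(targets)
    if abs(actual - expected_count) > tolerance:
        raise RuntimeError(
            f"Change targets {actual} servers but {expected_count} were "
            "declared; aborting instead of guessing."
        )
    return targets

# The operator declares intent up front; a typo that resolves to the whole
# fleet now fails loudly instead of executing.
servers = [f"billing-{i}" for i in range(4)]
confirm_scope(servers, expected_count=4)
```

The point isn’t the ten lines of Python; it’s that the declared intent and the resolved scope come from two independent places, so one fat-fingered input can’t silently widen the change.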
Building completely redundant systems sounds like a simple solution but at some point the cost of this becomes exorbitant. So building completely independent control and user networks may seem like the obvious solution but is it cost effective to do that?
I have heard of this trick of using welder’s glass as a cheap ND filter. But from my childhood experience of arc welding, I was not sure how one would deal with the reality that welder’s glasses are not really precision optics.
This article addresses at least the issue of coloration and offers some nice tips for adjusting color balance in general.
I like things to be automated, everything. Coffee in the morning, bill payment, cycling the cable modem when it goes wonky, everything. The adage used to be, if you do something twice, automate it. I think it should be, “if you do anything, automate it, you will likely have to do it one more time”.
So I used to automate stuff like converting DOCX to PDF and PPTX to PDF on Windows all the time. But for the past two years, after moving to a Mac this is one thing that I’ve not been able to automate, and it bugged me, a lot.
I had to make a presentation which went with a descriptive document, and I wanted to submit the whole thing as a PDF. Try as I might, Powerpoint and Word on the Mac would not make this easy.
It is disgusting that I had to resort to AppleScript + Automator to do this.
It is a horrible way to do it, but yes, it works.
Now, before the Mac purists flame me for using Microsoft Word and Microsoft PowerPoint, let me point out that the Mac default tools don’t make it any easier. Apple Keynote does not appear to offer a solution to this either; you have to resort to Automator for that too.
So, eventually, I had to resort to automation based on those two links to make two PDFs and then this to combine them into a single PDF.
This is shitty, horrible, and I am using it now. But, do you know of some other solution, using simple Python, and not having to install LibreOffice or a handful of other tools? Isn’t this a solved problem? If not, I wonder why?
Before and After: Students Becoming Better Photographers After 31 Days Course
I have Comcast Internet service at home. I’ve used it for many years now, and one of the constant things over this period of time has been that the service is quite often very unreliable. I’ve gone for months with no problems, and then for some weeks or months the service gets to be terribly unreliable.
What do I mean by unreliable? That is best described in terms of what the service is like when it is reliable.
- I can leave an ssh session to a remote machine up and running for days (say, an EC2 instance) – if I have keep-alive and things like that set up
- VPN sessions stay up for days without a problem
- The network is responsive, DNS lookups are quick, ICMP response is good, surfing the web is effortless, things like Netflix and Amazon movies work well
- Both IPv4 and IPv6 are working well
You get the idea. With that in mind, here’s what I see from time to time:
- Keeping an ssh session up for more than an hour is virtually impossible
- VPN sessions terminate frequently, sometimes it is so bad that I can’t VPN successfully
- DNS lookups fail (using the Comcast default DNS servers, 75.75.75.75, 75.75.76.76, 2001:558:feed::1, and 2001:558:feed::2). It isn’t any better with Google’s DNS because the issue is basic network connectivity
- There is very high packet loss even pinging my default gateway!
- Surfing the web is a pain, click a link and it hangs … Forget about streaming content
During these incidents, I’ve found that the cable modem itself remains fine, I can ping the internal interface, signal strengths look good, and there’s nothing obviously wrong with the hardware.
What I’ve found is that rebooting my cable modem generally fixes the problem immediately. Now, this isn’t always the case – Comcast does have outages from time to time where you just have to wait a few hours. But for the most part, resetting the cable modem just fixes things.
So I was wondering how I could make this all a bit better for myself.
An option is something like this. An “Internet Enabled IP Remote Power Switch with Reboot”. Or this, this, or this. The last one of those, Web Power Switch Pro Model, even sports a little web server, can be configured, and supports SNMP and a REST API! Some of these gadgets are even Alexa compatible!
But, no – I had to solve this with a Raspberry Pi! Continued in Part 2.
In Part 1 of this blog post, I described a problem I’ve been facing with my internet service, and the desired solution – a gizmo that would reboot my cable modem when the internet connection was down.
The first thing I got was a PiRelay from SB Components. This nifty HAT has four relays that will happily turn on and off a 110v or 250v load. The site claims 7A @ 240V, more than enough for all of my network gear. See image below, left.
Next I needed some way to wire this into a power source. Initially I thought I’d get a simple power strip with individual switches on the outlets. I thought I could just connect the relays up in place of the switches and I’d be all set! So I bought one of these (above right).
Finally I just made a little junction box with four power outlets, and wired them up to the relays.
The software to control this is very straightforward.
- It turns out that the way Microsoft checks for internet connectivity is to do a GET on “http://www.msftncsi.com/ncsi.txt”, which returns the text “Microsoft NCSI”. OK, so I do that.
- I also made a list of a dozen or so web sites that I visit often, and I make a conn.request() to each of them to fetch the HEAD.
If internet connectivity appears to be down, power cycle “relay 0”, which is the relay my cable modem is plugged into. All of this runs as a simple cron job, every 10 minutes.
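Stripped of the details, the watchdog looks something like this – a sketch, with the caveat that the relay object here is a stand-in for whatever the PiRelay library actually exposes:

```python
# Sketch of the connectivity watchdog. The relay interface is a stand-in;
# consult the SB Components PiRelay documentation for the real API.
import time
import urllib.request

NCSI_URL = "http://www.msftncsi.com/ncsi.txt"   # Microsoft's connectivity probe
EXPECTED = b"Microsoft NCSI"

def internet_up(timeout=10):
    """Return True if Microsoft's NCSI probe answers with the expected text."""
    try:
        with urllib.request.urlopen(NCSI_URL, timeout=timeout) as resp:
            return resp.read().strip() == EXPECTED
    except OSError:
        return False

def power_cycle(relay, off_seconds=15):
    """Turn the modem's relay off, wait, and turn it back on."""
    relay.off()
    time.sleep(off_seconds)
    relay.on()

# In the cron job, something like:
#   if not internet_up():
#       power_cycle(modem_relay)   # modem_relay wraps relay 0
```

A fallback check against the list of frequently visited sites would slot in next to internet_up(), so a hiccup at Microsoft’s probe alone doesn’t reboot the modem.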
Works like a champ. Another simple Raspberry Pi project!
If you are interested, ping me and I’ll post more details. I intend to share the code for the project soon – once I shake out any remaining little gremlins!
This is a wonderful, well written, and comprehensive write-up on taking night photographs (in general) and the Milky Way in particular.
Well worth the 10 minutes to read it through.
This is just freaking awesome. 50,000 images produced this masterpiece in 81 megapixel resolution.
Debugging things on the Raspberry Pi by flashing the power LED.
I’ve often found that the most useful debugging technique is to be able to provide a visual cue that something is going on. And for that, blinking the power light on the Raspberry Pi is the easiest thing to do.
The power light (often called LED1) is always on, and bright red. So turning it off, and back on is a great little debugging technique.
A short note about the LEDs on Raspberry Pi. There are two, one is the green one [led0] for network activity, and the other is the red one [led1] for power.
They are exposed through the sysfs interface, under /sys/class/leds/.
To turn off the red LED
echo 0 > /sys/class/leds/led1/brightness
To turn the red LED back on
echo 1 > /sys/class/leds/led1/brightness
Doing this requires that you are privileged. So to make things easy I wrote it in C, put the binary in /bin, and turned on the setuid bit. I’ve also used a library that blinks the power LED in simple Morse code to get a short message across. I can’t do more than about 10 wpm in my head now, so while it is slow, it is very, very useful.
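The Morse idea is easy to sketch in Python too (the path is the led1 interface from above; the Morse table here is deliberately tiny, and as noted you still need root to write to sysfs):

```python
# Blink the Raspberry Pi power LED (led1) in Morse code.
# Writing to /sys/class/leds requires root, so the hardware call is
# left commented out; the encoding is testable on any machine.
import time

MORSE = {                      # a minimal subset of the Morse table
    "s": "...", "o": "---", "e": ".", "t": "-",
}

def to_morse(text):
    """Translate text into dot/dash groups, one group per letter."""
    return " ".join(MORSE[c] for c in text.lower() if c in MORSE)

def blink(pattern, led="/sys/class/leds/led1/brightness", unit=0.2):
    """Flash the LED: a dot is one unit on, a dash is three."""
    for symbol in pattern:
        if symbol == " ":
            time.sleep(3 * unit)       # gap between letters
            continue
        with open(led, "w") as f:
            f.write("1")
        time.sleep(unit if symbol == "." else 3 * unit)
        with open(led, "w") as f:
            f.write("0")
        time.sleep(unit)               # gap between symbols

# blink(to_morse("sos"))   # needs root, and an actual Raspberry Pi
```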
4 Ways To Make Better Street Portraits While Traveling
A nice article, particularly tip 4.
How to Improve Low-Light Performance by Increasing Your ISO
I have got to try this out.
OK, this is a rant.
It annoys me to no end when people present graphs like this one. Yes, the numbers do in fact add up to 100%, but does it make any sense to have so many digits after the decimal when in reality this is based on a sample size of 6? Wouldn’t 1/2, 1/3, 1/6 have sufficed? What about 0.5, 0.33, and 0.17? Do you really, really have to go to all those decimal places?
Excel has made it easy for people to make meaningless graphs like this, where merely clicking a little button gives you more decimal places. I’m firmly convinced that just having more digits after the decimal point doesn’t really make a difference in a lot of situations.
Let’s start first with some definitions
accuracy is a “degree of conformity of a measure to a standard or a true value“.
precision is the “the degree of refinement with which an operation is performed or a measurement stated“.
One can be precise, and accurate. For example, when I say that the sun rises in the east 100% of the time, I am both precise and accurate. (I am just as precise and accurate if I say that the sun rises in the east 100.000% of the time.)
One can be precise, and inaccurate. For example, when I say that the sun rises in the east 90.00% of the time, I am being precise but inaccurate.
So, as you can see, it is important to be accurate; the question now is how precise does one have to be. Assume that I conduct an experiment and tabulate the results; I find that 1/2 the time I have outcome A, 1/6 of the time I have outcome B, and 1/3 of the time I have outcome C. It would be both precise, and accurate, to state the results are (as shown in the pie chart above) 50.0000%, 16.66667%, and 33.33333% for the various possible outcomes.
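To see how little those extra digits buy you, here is the same data – six observations split 3/1/2 – rendered both ways:

```python
# The pie chart's data: six observations split 3 / 1 / 2.
from fractions import Fraction

counts = {"A": 3, "B": 1, "C": 2}
n = sum(counts.values())                 # sample size of 6

for outcome, k in counts.items():
    exact = Fraction(k, n)               # 1/2, 1/6, 1/3 -- the honest answer
    print(f"{outcome}: {exact} = {100 * k / n:.5f}% or simply {100 * k / n:.0f}%")
```

Every one of those five decimal places is “correct”, in the sense that the arithmetic is exact; none of them tells the reader anything that 1/2, 1/6, and 1/3 don’t.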
But does that really matter? I believe that it does. Consider the following two pictures, these are real pictures, of real street signs.
This sign is on the outskirts of Mysore, in India.
This sign is in Lancaster, MA.
In the first picture (the one from Mysore, India), we have distances to various places, precise to 0.01 km (apparently). Mysore Palace is 4.00 km away, the zoo is 4.00 km away, Mangalore is 270.00 km away. What’s 0.01 km? That’s about 10 m (about 33 feet). It is conceivable that this is accurate (possible, though not probable). So I’d say this is precise, and may be accurate.
The second picture (the one from Lancaster, MA) is most definitely precise, to 4 places of the decimal point no less. The bridge clearance is 3.3528 meters (the sign claims). It also indicates that it is 11 feet. A foot is 12 inches, an inch is 2.54 centimeters, and therefore a meter (100 cm is 39.3701″) is 3.2808 feet. Therefore 11 feet is exactly 3.3528 meters. So this is both precise, and accurate (assuming that the bridge does in fact have an 11′ clearance).
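The sign’s arithmetic is easy to verify, since a foot is defined as exactly 0.3048 meters:

```python
# The sign's arithmetic: a foot is defined as exactly 0.3048 m,
# so 11 feet converts to 3.3528 m with no rounding at all.
FOOT_IN_METERS = 0.3048

clearance_ft = 11
clearance_m = clearance_ft * FOOT_IN_METERS
print(f"{clearance_ft} ft = {clearance_m:.4f} m")   # 11 ft = 3.3528 m
```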
The question is this: is the precision (4.00 km, or 3.3528 m) really relevant? We’re talking about street signs, measuring things with a fair amount of error. In the case of the bridge, the clearance could change by as much as 2″ between summer and winter because of expansion and contraction of the road surface (frost heaves). So wouldn’t it make more sense to just stick with 11′, or 3.3 meters?
So back to our graph with the 50.0000%, 16.66667%, and 33.33333%. Does it really matter to the person looking at the graph that these numbers are presented to a precision of 0.00001%? For the most part, given the fact that the experiment had a sample size of 6, absolutely not.
So please, when presenting facts (and numbers) please do think about accuracy; that’s important. But please make the precision consistent with the relevance. When driving a car to the zoo, is the last 33′ going to really kill me? or am I really interested in the clearance of the bridge accurate to the thickness of a human hair, or a sheet of paper?
How to run a Kubernetes cluster in OpenStack
Very often, I’ve found that it is an advantage to have Android running in a virtual machine (say on a desktop or a laptop) and use the Android applications.
Welcome to android-x86
Android runs on x86-based machines thanks to the android-x86 project. I download images from here. What follows is a simple, step-by-step how-to to get Android running on your x86-based computer with VMware. I assume that much the same thing can be done with another hypervisor (like VirtualBox).
The screenshots below show the steps involved in installing android-x86 on a Mac.
Choose “Create a custom virtual machine”
Choose “FreeBSD 64-bit”
I used the default size (for now; I’ll increase the size later).
For starters, we can get going with this.
I was installing Android v8.1 RC1.
Increased #cores and memory as above.
Resized the drive to 40GB.
And with that, began the installation.
Options during installation.
The options and choices are self explanatory. Screenshots show the options that are chosen (selected).
One final thing – enable 3D graphics acceleration
Before booting, you should enable 3D graphics acceleration. I’ve found that without this, you end up in a text screen at a shell prompt.
And finally, a reboot!
That’s all there is to it, you end up in the standard android “first boot” process.
The Boston Harbor hotel … http://ow.ly/i/GPnEp
Recently, I had the opportunity to photograph a wedding. Some background for people reading this (who don’t have the context). The wedding was an “Indian wedding” in San Francisco, CA. It was a hybrid of a traditional North Indian and a South Indian wedding, and was compressed to two hours. There were several events that occurred before the wedding itself, and there were a few after the wedding.
For example, there was a cruise around the San Francisco Bay (thankfully, good weather).
There were also several indoor events (which were conducted in a basement, and so little natural light). There were several religious ceremonies, a civil ceremony, and lots of food, and drink, and partying as well.
Before heading out to SFO, I read a bunch of stuff about photographing weddings, and I spoke with one person (Thanks Allison) very familiar with this. I took a bunch of gear with me, and I thought long and hard about how to deal with the professional photographer(s) who would also be covering the event.
I was hoping that I’d be able to work alongside them, and watch and learn (and not get in the way). I hoped that they’d not be too annoyed with a busybody with a bunch of gear, and I hoped that I could stay out of their way.
Thinking back, and looking at the pictures I took, I’ve learned a lot; a lot about taking photographs, a lot about myself, and a lot about the equipment that I have.
Shoot fully manual mode – most of the time
Outdoors, it may be possible to get away with auto ISO, but even there, shooting anything other than manual focus, manual exposure, and manual aperture is a bad idea. I’ve tried a number of different options for metering and focus preference, but did not find them to be particularly fun. It did mean, though, that I was mostly shooting stopped down (f5.6 or smaller).
Bounce the flash off the roof
You do really want f2.8 a lot of the time!
While I often shot f5.6 or smaller, I did find myself shooting f2.8 quite a lot. Not as much as I thought I would, but certainly quite a lot. And it was good that I had lenses that could go to f2.8. Most of the time I found that I was shooting between 50mm and 90mm so it was quite annoying that I needed two lenses to cover this range. But I managed …
Shoot RAW (+JPEG, but definitely RAW)
I’ve found that many of the pictures I took needed post-processing that was much easier with RAW. For example, some of them required significant color (temperature) and exposure adjustment.
One example is at left, I think the color temperature of the picture above is better than the one below. The significant amount of purple in the decorations caused the image to look a little bit too purple for my liking. Luckily the little white altar in the foreground gave me a good color reference.
I don’t want to get into the “can this be done with JPEG” debate; I’m sure that it can, and there are many who prefer JPEG. I just feel lucky that I shot everything RAW+JPEG.
LED Light Panels are a must
I have a great flash, but it is no match for a good LED light panel. I really need to get one of those things if I’m ever going to shoot a wedding, or any other event with a lot of people.
Take more pictures; way more pictures
I’m not a “spray and pray” kind of person. I tend to look through the viewfinder a while before clicking. I try to frame a shot well and get everything to look just right, and by then the subject has moved, or the ‘moment’ has passed. This happened a lot.
I really have to learn to accept a lower ‘good picture’ ratio, and capture the moment as best as I can, and crop, and post-process later.
Lose a lot of weight
The professionals were at least a hundred pounds lighter than I was. The way they moved clearly reflected a certain difference in our respective ‘momentums’!
I definitely need more experience with photographing people, something that I’ve known for a while. The wedding was a great excuse for me to happily point a camera at people who were having animated conversations, and click. Now I have to find other venues where I can do the same thing, and learn more about this aspect of photography that I’ve really neglected for too long.
P.S. My thanks to Allison Perkel for all the pointers she gave me before I went on this trip.
4 of the Most Common Composition Mistakes In Photography
As usual, articles in DPS are well written, and excellent things to learn from.
How to Make Fake Shallow Depth of Field Using Photoshop
I’ve long known and used ND filters and graduated ND filters in bright light; didn’t realize that you could get some wonderful effects with them in darkness.
The examples in this article (via ) are just outstanding
Like this one of the moon (below).
Here, the “double stacked graduated ND filters” helped bring the brightness of the moon to a level comparable with the foreground.
The takeaway is that ND filters and graduated ND filters can be used in places where there is a huge difference in brightness of the various elements in the photograph.
I have long suspected that this was the case, especially when no one could give a simple explanation why we did this.
- For the farmers
- For the farm animals
- To save energy
Those are just some of the explanations I have heard.
But I get it, in countries that are wide (west to east) you need more time zones. That is the real solution.
The developer’s dilemma
A nice, thought provoking article by @nayafia
“Zooming with your feet” means getting closer to your subject physically instead of relying on a longer lens, but you should be aware that the results won’t be the same. Here’s a 9-minute video from This Place that looks at how different focal lengths affect perspective when compared to “zooming with your feet.” Perspective distortion is often misunderstood; it’s an area of photography that many photographers haven’t taken the time to explore or understand properly.
This article appeared almost a month ago, and I just got to reading it. It almost looks like it was written looking in the rear-view mirror! Is there a single one of these seven trends that isn’t already a hot topic?
7 #Cloud Computing Trends to Watch in 2018
Some very good introductory material about TensorFlow
The article also provides a useful link to hot air balloon festivals near you, that link is: http://www.hotairballoon.com/
Excellent article about some research at Northeastern University.
Similar location leaks also reported in a number of recent articles related to fitness trackers.
In 2011, Ken Rugg and I were having a number of conversations around the CAP Theorem, and after much discussion we came up with the following succinct inequality. It helped us speak much more clearly to the question of what constituted “availability”, “partition tolerance”, and “consistency”. It also confirmed our suspicions that availability and partition tolerance were not simple binary (Yes or No) attributes, but rather had shades of gray.
So here’s the write-up we prepared at that time.
Unfortunately, the six part blog post that we wrote (on parelastic.com) never made it in the transition to the new owners.
Now, that’s easy to say.
But, blockchain today is what docker was 18 months ago.
Startups got funded for doing “Docker for sandwiches”, or “Docker for underpants” (not really, but bloody close).
Talks were accepted at conferences because of the title “Docker, Docker, Docker, Docker, Docker” (really).
And today it is blockchain.
Google’s Project Fi international data service goes down.
One of the things I have come to realize is that it is not a good plan to depend on services provided by the likes of Google.
They work 99.9 or 99.95% of the time. And they work well. But depend on them for five 9’s, that’s dumb.
Cold weather is the best time to look at—and photograph—the night sky
I happened to be at the Mysore School of Architecture, in Mysore, India and had a chance to walk around. And click some photographs 🙂 The building is bright and airy, and the pictures below show some views of this. There was a lot of art all around the campus, all of it created by the students. Some modern art under the stairs. Modern art in the courtyard. A nice painting which shows two views, depending on where you stand. A nice mural on the wall. Some lovely photos; the sun and the wind weathered the display, but the display and the captions are lovely. Many of the class rooms have nice caricatures of famous architects. And finally, some lovely origami under the stairs! I loved my short trip to the college and will surely be back when there are some students there.
Ten quick tips for photographing northern lights.
I would love to do this sometime.
A great article on the subject of hyperfocal distance.
The basic idea is that if you focus your camera at the hyperfocal distance (H), the depth of field is from H/2 to infinity.
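The formula behind this is H = f²/(N·c) + f, where f is the focal length, N the aperture, and c the circle of confusion. A quick sketch (the 0.03 mm circle of confusion is a common full-frame assumption; adjust it for your sensor):

```python
# Hyperfocal distance: H = f^2 / (N * c) + f, everything in millimetres.
# c = 0.03 mm is a common full-frame circle-of-confusion choice (an assumption).
def hyperfocal_m(focal_mm, aperture, coc_mm=0.03):
    """Distance to focus at so that depth of field runs from H/2 to infinity."""
    h_mm = focal_mm ** 2 / (aperture * coc_mm) + focal_mm
    return h_mm / 1000.0                     # metres

# A 50mm lens at f/8: focus roughly 10.5 m out, and everything from about
# half that distance to infinity is acceptably sharp.
print(round(hyperfocal_m(50, 8), 2))
```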
Last week I submitted my candidacy for election to the OpenStack Technical Committee.
One thing that I am liking about this new election format is the email exchanges on the OpenStack mailing list to get a sense of the candidates’ points of view on a variety of things.
In a totally non-work, non-technical context, I have participated and helped organize “candidates nights” events where candidates for election actually get to sit in front of an electorate and answer questions; real politics at the grass roots level. Some years back in one such election, I was elected to a position in the town where I live where I am required to lead with no real authority! So I look forward to doing the same with OpenStack.
You can read my candidacy statement at  and  so I won’t repeat those things here. I continue to work actively on OpenStack, now at Verizon. In the past I was not really a “user” of OpenStack; now I absolutely am, and I am also a contributor. I want to build a better and more functional DBaaS solution, and the good news is that there are four companies already interested in participating; companies that didn’t participate in the Trove project!
I’m looking forward to working on Hoard in the community, and to serving on the TC if you give me that opportunity!
Earlier this week, I attended the OpenDev conference in San Francisco, CA.
The conference was focused on the emerging “edge computing” use cases for the cloud. This is an area that is of particular interest, not just from the obvious applicability to my ‘day job’ at Verizon, but also from the fact that it opens up an interesting new set of opportunities for distributed computing applications.
The highlights of the show were two keynotes by M. Satyanarayanan of CMU. Both sessions were videotaped and I’m hoping that the videos will be made available soon.
His team is working on some really cool stuff, and he showed off some of their work. The one that I found most fascinating, and which most completely illustrates the value of edge computing, is the augmented-reality application for playing table tennis (which they call ping pong, and I know that annoys a lot of people :))
It was great to hear a user perspective presented by Andrew Mitry of Walmart. With 11,000 stores and an enormous number of employees (2 million?), their edge computing use-case truly represents the scale at which these systems will have to operate, and the benefits that they can bring to the enterprise.
The conference sessions were very interesting and some of my key takeaways were that:
- Edge Computing means different things to different people, because the term ‘Edge’ means different things to different applications. In some cases the edge device may be in a data center, in other cases in your house, and in other cases on top of a lamp post at the end of your street.
- A common API in orchestrating applications across the entirety of the cloud is very important, but different technologies may be better suited to each location in the cloud. There was a lot of discussion of the value (or lack thereof) of having OpenStack at the edge, and whether it made sense for edge devices to be orchestrated by OpenStack (or not).
- I think an enormous amount of time was spent debating whether or not OpenStack could be made to fit on a system with limited resources, and I found this discussion rather tiring. After all, OpenStack runs fine on a little Raspberry Pi, and for a deployment where there will be relatively few OpenStack operations (instance, volume, and security group creation, update, and deletion), the limited resources at the edge should be more than sufficient.
- There are different use-cases for edge computing; NFV/VNF workloads are not the only ones, and while they may be the early movers into this space, they may be unrepresentative of the larger market opportunity presented by the edge.
There is a lot of activity going on in the edge computing space, and many of the things we’re doing at Verizon fall into that category. There were several sessions that showcased some of the things that we have been doing, and AT&T had a couple of sessions describing their initiatives in this space as well.
There was a very interesting discussion of the edge computing use-cases and the etherpad for that session can be found here.
Some others who attended the session also posted summaries on their blogs. This one from Chris Dent provides a good summary.
After the eclipse, I’m sure that everyone will have a welding glass to try this with!
A couple of months ago, a former co-worker called me and asked if I would provide a reference for her in a job search (which I readily agreed to). Then she went on to ask me this, “This company wants to make me an offer and they called and asked me what I currently make, and asked for a copy of a paystub. What should I do?”
Personally, I find this question stupid. I’ve been asked it many times (including quite recently), and in every instance I’ve been surprised by it (doh!) and answered in what I now consider to be the wrong way.
Every hiring manager has a range of salaries that they are willing to pay for a position, and they have a range of bonuses, a range of stock options, and other incentives. And then there are the common incentives that everyone gets (401(k), vacation, …). So why even ask the question? Why not make an offer that makes sense and be done with it?
If you are a hiring manager / HR person on the hiring side, do you ask this question?
If you are a candidate, how do you handle this question?
In any event, here’s what I recommended to my friend: answer the question along these lines.
- I’m sure you are asking me this so you can make me a competitive offer that I’ll accept
- I’m also sure that you have a range for all the components of the offer that you intend to make to me; base pay, bonus, stock options, …
- So what I’ll tell you is what I am looking for in an offer and I’ll leave it to you to make me an offer based on the standard ranges that you have
- I am looking for a take-home pay of $ _____ each month
- Since you offer a 401(k) plan which I intend to contribute $ _____ to, that means I am looking for a total base pay of $ ______ per year.
- I am looking for a total annual compensation of $ ______ including bonuses
- In addition, I am looking for ______ days of vacation each year.
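The arithmetic behind those middle bullets can be sketched roughly. This assumes a single flat effective tax rate (a big simplification of real withholding) and a pre-tax 401(k) contribution; all of the figures are made up for illustration:

```shell
# Back out a required annual base pay from a target monthly take-home.
# Flat effective tax rate is an assumption; figures are illustrative.
monthly_take_home=6000
k401_contribution=19000
effective_tax_rate=0.25

base=$(awk -v t="$monthly_take_home" -v k="$k401_contribution" \
       -v r="$effective_tax_rate" \
       'BEGIN { printf "%.0f", (t * 12) / (1 - r) + k }')
echo "required annual base pay: \$$base"
```

With these numbers, the pre-tax income needed for the take-home is $72,000 / 0.75 = $96,000, and adding back the 401(k) contribution gives a base of $115,000.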
That’s it. When asked for a copy of a current pay-stub or anything like that, I recommend that you simply decline to provide it and make it clear that it is none of their business.
Now, whether one can get away with this answer or not depends on how strong your position is for the opening in question. Some companies have a ‘policy’ that they need this paystub/W-2 stuff.
Not providing your past pay information, and thereby not following their ‘process’, could make a crabby HR person label you ‘not a team player’ or some such bogus thing and put your resume in the ‘special inbox’ marked ‘Basura’.
In any event, this all was fine and my friend told me that she was given a good offer which she accepted.
How do you approach this question?
I agree with four of the five; I am not so sure about the ‘raise the ISO’ suggestion.
No matter how many articles I read about Photoshop, I always end up learning something new.
This article about HSL is no exception.
This blog post is dedicated to Sean McGinnis, the PTL of the OpenStack Cinder project. If you are an OpenStack type, you should really follow him on Twitter and read his blog which he promises to spruce up.
Update: March 2019. Another good blog post about configuring a Raspberry Pi as an IRC bouncer can be found at https://blog.ghosh.pro/2019/03/07/irc-bouncer-on-a-pi/
IRC is the lifeblood of communication and collaboration in many open source ecosystems, like OpenStack.
One of the biggest issues in working in this environment is that IRC is inherently a synchronous communications mechanism: if you aren’t on IRC, you aren’t plugged in.
A bouncer is a simple app that keeps a connection open to the IRC network, allowing your device (phone, laptop, tablet) to connect and disconnect at will and easily catch up on what you missed.
There are many other ways to accomplish this; some people leave an emacs session running on a machine and connect to it remotely. Whatever floats your boat.
But, you need a place to run this solution, I run mine on a Raspberry Pi.
To install and configure ZNC (my bouncer of choice) on my Raspberry Pi, I use this diskimage-builder element. (https://github.com/amrith/rpi-image-builder/tree/master/elements/rpi-znc)
Using this element is very straightforward; it is a dependency of rpi-generic, and you can follow the simple image building tutorial here. Since I enable the log module in ZNC, I run my Raspberry Pi with a read-only MicroSD card and a writable root file system on a USB flash drive. Instructions for that setup are found here.
Once you boot the system with this image, you first configure znc.
Simple instructions on how to do this are in the element listed above (in the README file). Those steps are
sudo -u znc /usr/bin/znc --datadir=/var/lib/znc --makeconf
echo "PidFile = /var/run/znc/znc.pid" | sudo su \
    - znc -s /bin/bash \
    -c "tee -a /var/lib/znc/configs/znc.conf"
That’s it! You can now enable and launch the znc service. In my case, I configured ZNC to run with SSL on port 6697.
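On a systemd-based image, enabling and launching the service looks something like this; the unit name ‘znc’ is an assumption based on the element’s naming:

```shell
# Enable the service at boot, start it now, and check that it came up.
# The unit name 'znc' is assumed; adjust to whatever your image installs.
sudo systemctl enable znc
sudo systemctl start znc
sudo systemctl status znc --no-pager
```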
root@pi:/var/lib/znc# netstat -nl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp6 0 0 :::6697 :::* LISTEN
This means that I can get to the webadmin console simply by pointing my browser at the Raspberry Pi on port 6697 (over https).
The only other thing that I do is enable the log module, which persists IRC logs on the Raspberry Pi so I can search them later. I also set up logrotate to purge them periodically.
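For the periodic purge, a logrotate fragment along these lines works; the path below is an assumption based on the --datadir used above (ZNC’s log module writes under moddata/log), so adjust the glob to your layout:

```shell
# /etc/logrotate.d/znc  (hypothetical file; path glob is an assumption)
/var/lib/znc/moddata/log/*/*/*/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```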
A Raspberry Pi (mine is an RPI 3) requires a MicroSD card to boot, but once booted the card is no longer required. This is great because, while very convenient, a MicroSD card is not as robust and hardy as a regular SSD or even a regular USB flash drive.
One of the things that I therefore do is run my RPIs on either an SSD or a USB thumb drive. I’ve also run mine with a 1TB external spinning-rust disk with external power. The technique illustrated here works on all of these.
My earlier post described the RPI boot process. The picture here shows a simple MicroSD card image for an RPI. The disk is partitioned into two parts, the first partition is a small FAT32 LBA addressable partition and the second is a larger ext4 partition. The FAT32 partition contains the bootloader and the ext4 partition contains the root filesystem.
The thing that ties these two together is cmdline.txt, which defines the root device with a declaration like:

root=/dev/mmcblk0p2 rootfstype=ext4
Since the RPI always mounts the MicroSD card as /dev/mmcblk0 and the partitions are numbered p1, p2, and so on, this indicates that the root partition is the ext4 partition as shown above.
To move this to a different location is a mere matter of adjusting cmdline.txt (and updating /etc/fstab) as shown below.
Here is my RPI with a USB thumb drive running the root (/) filesystem.
As you can see, I have a USB drive which shows up as /dev/sda and the MicroSD card.
amrith@rpi:~$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    1 115.7G  0 disk
└─sda1        8:1    1 115.7G  0 part /
mmcblk0     179:0    0    29G  0 disk
└─mmcblk0p1 179:1    0   100M  0 part /boot
amrith@rpi:~$ blkid
/dev/mmcblk0p1: LABEL="BOOT" UUID="E4F6-9E9D" TYPE="vfat" PARTUUID="01deb70e-01"
/dev/sda1: LABEL="root" UUID="7f4e0807-d745-4d6e-af6f-799d23a6450e" TYPE="ext4" PARTUUID="88578723-01"
I have changed cmdline.txt as shown below.
amrith@rpi:~$ more /boot/cmdline.txt
[...] root=/dev/sda1 rootfstype=ext4 [...]
and updated /etc/fstab (on the USB drive) as shown below.
amrith@rpi:~$ more /etc/fstab
# fstab generated by rpi-base element
proc /proc proc defaults 0 0
LABEL=BOOT /boot vfat defaults,ro 0 2
PARTUUID=88578723-01 / ext4 defaults,noatime 0 1
As you can see, I’ve also marked the MicroSD card (which provides /boot) read-only; in the unlikely event that I have to modify it, I can remount it without the ‘ro’ option and make my changes.
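That remount is a one-liner in each direction; a sketch, assuming the read-only /boot mount from the fstab above:

```shell
# Temporarily make /boot writable, then restore the read-only mount.
sudo mount -o remount,rw /boot
# ... edit /boot/cmdline.txt, config.txt, etc. ...
sudo mount -o remount,ro /boot
```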
On the left hand side is the MicroSD card. Note that in the MicroSD card’s FAT32 partition, cmdline.txt is in ‘/’ (there’s no /boot on the MicroSD card). The cmdline.txt points the root partition to /dev/sda1, which is the USB flash drive.
On the right hand side is the USB flash drive; it has an ext4 partition with an /etc/fstab entry that mounts the MicroSD card’s FAT32 partition on /boot and mounts itself on /.
This works just as well with any external disk; just make sure that you have adequate power!