Daylight Saving Time isn’t worth it, European Parliament members say

I have long suspected that this was the case, especially since no one could ever give a simple explanation of why we do this.

  • For the farmers
  • For the farm animals
  • To save energy

Those are just some of the explanations I have heard.

But I get it: in countries that are wide (west to east) you need more time zones. That is the real solution.

https://arstechnica.com/tech-policy/2018/02/daylight-saving-time-isnt-worth-it-european-parliament-ministers-say/?amp=1

This is Why ‘Zooming with Your Feet’ Isn’t the Same Thing

“Zooming with your feet” means physically getting closer to your subject instead of relying on a longer lens, but you should be aware that the results won’t be the same. Here’s a 9-minute video from This Place that looks at how different focal lengths affect perspective when compared to “zooming with your feet.” Perspective distortion is often misunderstood; it’s an area of photography that many photographers have never properly explored or understood.

Source: This is Why ‘Zooming with Your Feet’ Isn’t the Same Thing

This article appeared almost a month ago

This article appeared almost a month ago, and I just got around to reading it. It almost looks like it was written looking in the rear-view mirror! Is there any one of these seven trends that isn’t already a hot topic?

7 #Cloud Computing Trends to Watch in 2018

http://ow.ly/dcG830ihigd

Old write-up about CAP Theorem

In 2011, Ken Rugg and I were having a number of conversations around the CAP theorem and, after much discussion, we came up with a succinct inequality. It helped us speak much more precisely about what constituted “availability”, “partition tolerance”, and “consistency”. It also confirmed our suspicion that availability and partition tolerance were not simple binary (yes or no) attributes, but rather had shades of gray.

So here’s the write-up we prepared at that time.

parelastic-brewers-conjecture

Unfortunately, the six-part blog post that we wrote (on parelastic.com) never made it through the transition to the new owners.

Don’t use a blockchain unless you really need one

Now, that’s easy to say.

But blockchain today is what Docker was 18 months ago.

Startups got funded for doing “Docker for sandwiches”, or “Docker for underpants” (not really, but bloody close).

Talks were accepted at conferences because of the title “Docker, Docker, Docker, Docker, Docker” (really).

And today it is blockchain.

https://www.coindesk.com/dont-use-blockchain-unless-really-need-one/

I hope you weren’t counting on Project Fi …

Google’s Project Fi international data service goes down.

One of the things I have come to realize is that it is not a good plan to depend on services provided by the likes of Google.

They work 99.9 or 99.95% of the time, and they work well. But depending on them for five 9’s? That’s dumb.

https://www.engadget.com/amp/2018/01/13/google-project-fi-international-data-outage/

Mysore School of Architecture

I happened to be at the Mysore School of Architecture, in Mysore, India, and had a chance to walk around and click some photographs 🙂 The building is bright and airy, and the pictures below show some views of this. There was a lot of art all around the campus, all of it created by the students:

  • Some modern art under the stairs.
  • Modern art in the courtyard.
  • A nice painting which shows two views, depending on where you stand.
  • A nice mural on the wall.
  • Some lovely photos; the sun and the wind weathered the display, but the display and the captions are lovely.
  • Many of the classrooms have nice caricatures of famous architects.
  • And finally, some lovely origami under the stairs!

I loved my short trip to the college and will surely be back when there are some students there.

Running for election to the OpenStack TC!

Last week I submitted my candidacy for election to the OpenStack Technical Committee[1], [2].

One thing that I like about this new election format is the email exchanges on the OpenStack mailing list that give a sense of the candidates’ points of view on a variety of things.

In a totally non-work, non-technical context, I have participated in and helped organize “candidates’ night” events where candidates for election actually sit in front of the electorate and answer questions; real politics at the grassroots level. Some years back, in one such election, I was elected to a position in the town where I live, one in which I am required to lead with no real authority! So I look forward to doing the same with OpenStack.

You can read my candidacy statement at [1] and [2], so I won’t repeat those things here. I continue to work actively on OpenStack, now at Verizon. In the past I was not really a “user” of OpenStack; now I absolutely am, and I am also a contributor. I want to build a better and more functional DBaaS solution, and the good news is that there are four companies already interested in participating in the project; companies that didn’t participate in the Trove project!

I’m looking forward to working on Hoard in the community, and to serving on the TC if you give me that opportunity!

[1] https://review.openstack.org/#/c/510133/
[2] http://openstack.markmail.org/thread/y22fyka77yq7m3uj


Reflections on the (first annual) OpenDev Conference, SFO

Earlier this week, I attended the OpenDev conference in San Francisco, CA.

The conference was focused on the emerging “edge computing” use cases for the cloud. This area is of particular interest to me, not just because of its obvious applicability to my ‘day job’ at Verizon, but also because it opens up an interesting new set of opportunities for distributed computing applications.

The highlights of the show were two keynotes by M. Satyanarayanan of CMU. Both sessions were videotaped and I’m hoping that the videos will be made available soon.

His team is working on some really cool stuff, and he showed off some of their work. The one I found most fascinating, and the one that most completely illustrates the value of edge computing, is an augmented reality application for playing table tennis (which they call ping pong, and I know that annoys a lot of people :))

It was great to hear a user perspective presented by Andrew Mitry of Walmart. With 11,000 stores and an enormous number of employees (2 million?), their edge computing use-case truly represents the scale at which these systems will have to operate, and the benefits that they can bring to the enterprise.

The conference sessions were very interesting; some of my key takeaways were:

  • Edge Computing means different things to different people, because the term ‘edge’ means different things to different applications. In some cases the edge device may be in a data center, in others in your house, and in others on top of a lamp post at the end of your street.
  • A common API for orchestrating applications across the entirety of the cloud is very important, but different technologies may be better suited to each location in the cloud. There was a lot of discussion of the value (or lack thereof) of having OpenStack at the edge, and of whether it made sense for edge devices to be orchestrated by OpenStack.
  • An enormous amount of time was spent debating whether or not OpenStack could be made to fit on a system with limited resources, and I found this discussion rather tiring. After all, OpenStack runs fine on a little Raspberry Pi, and for a deployment where there will be relatively few OpenStack operations (instance, volume, and security group creation, update, and deletion), the limited resources at the edge should be more than sufficient.
  • There are different use-cases for edge computing; NFV/VNF are not the only ones, and while they may be the early movers into this space, they may be unrepresentative of the larger market opportunity presented by the edge.

There is a lot of activity going on in the edge computing space, and many of the things we’re doing at Verizon fall into that category. Several sessions showcased some of the things we have been doing, and AT&T had a couple of sessions describing their initiatives in the space as well.

There was a very interesting discussion of the edge computing use-cases and the etherpad for that session can be found here.

Some others who attended the session also posted summaries on their blogs. This one from Chris Dent provides a good summary.

A conclusion/wrap-up session identified some clear follow-up activities. The etherpad for that session can be found here.

How do you answer this interview question, “what do you make in your current job?”

A couple of months ago, a former co-worker called me and asked if I would provide a reference for her in a job search (which I readily agreed to). Then she went on to ask me this, “This company wants to make me an offer and they called and asked me what I currently make, and asked for a copy of a paystub. What should I do?”

Personally, I find this question stupid. I’ve been asked it many times (including quite recently); in every instance I’ve been surprised by it (doh!) and I’ve answered in what I now consider to be the wrong way.

Every hiring manager has a range of salaries that they are willing to pay for a position, and they have a range of bonuses, a range of stock options and other incentives. And then there’s the common incentives that everyone gets (401(k), vacation, …). So why even ask the question? Why not make an offer that makes sense and be done with it?

If you are a hiring manager / HR person on the hiring side, do you ask this question?

If you are a candidate, how do you handle this question?

In any event, here’s what I recommended to my friend: answer the question along these lines.

  • I’m sure you are asking me this so you can make me a competitive offer that I’ll accept
  • I’m also sure that you have a range for all the components of the offer that you intend to make to me; base pay, bonus, stock options, …
  • So what I’ll tell you is what I am looking for in an offer and I’ll leave it to you to make me an offer based on the standard ranges that you have
  • I am looking for a take-home pay of $ _____ each month
  • Since you offer a 401(k) plan which I intend to contribute $ _____ to, that means I am looking for a total base pay of $ ______ per year.
  • I am looking for a total annual compensation of $ ______ including bonuses
  • In addition, I am looking for ______ days of vacation each year.

That’s it. When asked for a copy of a current pay-stub or anything like that, I recommend that you simply decline to provide it and make it clear that this is none of their business.

Now, whether one can get away with this answer or not depends on how strong your position is for the opening in question. Some companies have a ‘policy’ that they need this paystub/W-2 stuff.

Not providing last-pay information and not following their ‘process’ could make the crabby HR person label you ‘not a team player’ or some such bogus thing and put your resume in the ‘special inbox’ marked ‘Basura’.

In any event, this all worked out fine; my friend told me that she was given a good offer, which she accepted.

How do you approach this question?

Running your IRC bouncer on a Raspberry Pi

This blog post is dedicated to Sean McGinnis, the PTL of the OpenStack Cinder project. If you are an OpenStack type, you should really follow him on Twitter and read his blog which he promises to spruce up.


Update: March 2019. Another good blog post about configuring a Raspberry Pi as an IRC bouncer can be found at https://blog.ghosh.pro/2019/03/07/irc-bouncer-on-a-pi/


IRC is the lifeblood of communication and collaboration in many open source ecosystems, like OpenStack.

One of the biggest issues in working in this environment is that IRC is inherently a synchronous communications mechanism; if you aren’t on IRC, you aren’t plugged in.

A bouncer is a simple app that keeps a connection open to the IRC network, allowing your device (phone, laptop, tablet) to connect and disconnect at will and easily catch up on what you missed.

There are many other ways to accomplish this; some people leave an emacs session running on a machine and connect to it remotely. Whatever floats your boat.

But you need a place to run this solution; I run mine on a Raspberry Pi.

To install and configure ZNC (my bouncer of choice) on my Raspberry Pi, I use this diskimage-builder element (https://github.com/amrith/rpi-image-builder/tree/master/elements/rpi-znc).


Using this element is very straightforward; it is a dependency of rpi-generic and you can follow the simple image building tutorial here. Since I enable the log module on ZNC, I run my Raspberry Pi with a read-only MicroSD card and a writable root file system on a USB Flash Drive. Instructions for that setup are found here.


Once you boot the system with this image, you first configure ZNC.

Simple instructions are in the README file of the element listed above. Those steps are:

sudo -u znc /usr/bin/znc --datadir=/var/lib/znc --makeconf

echo "PidFile = /var/run/znc/znc.pid" | sudo su \
- znc -s /bin/bash \
-c "tee -a /var/lib/znc/configs/znc.conf"

That’s it! You can now enable and launch the znc service. In my case, I configured ZNC to run with SSL on port 6697.
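
Enabling and starting the service looks like this (a sketch assuming the element installs a systemd unit named znc; check the element’s README for the actual unit name):

sudo systemctl enable znc
sudo systemctl start znc

# confirm that the service came up
sudo systemctl status znc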

root@pi:/var/lib/znc# netstat -nl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp6 0 0 :::6697 :::* LISTEN

[…]

This means that I can get to the webadmin console simply by pointing my browser to

https://<IP ADDRESS>:6697

The only other thing that I do is enable the log module, which persists IRC logs on the Raspberry Pi so that I can search them later. I also set up logrotate to purge them periodically.
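
For reference, here is a minimal logrotate sketch; the log path is an assumption (the ZNC log module writes under the data directory, /var/lib/znc in this setup) and the retention is arbitrary, so adjust both to taste.

# /etc/logrotate.d/znc-logs -- path and retention are assumptions
/var/lib/znc/moddata/log/*/*/*/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}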

Running your Raspberry Pi on a conventional (or SSD) hard disk or a USB thumb drive

A Raspberry Pi (mine is an RPI 3) requires a MicroSD card to boot, but once booted the card is no longer required. This is great because, while very convenient, a MicroSD card is not as robust and hardy as a regular SSD or even a regular USB flash device.

One of the things that I therefore do is run my RPIs on either an SSD or a USB thumb drive. I’ve also run mine with a 1TB external spinning-rust disk with external power. The technique illustrated here works on all of these.

My earlier post described the RPI boot process. The picture here shows a simple MicroSD card image for an RPI. The disk is partitioned into two parts: the first partition is a small, LBA-addressable FAT32 partition and the second is a larger ext4 partition. The FAT32 partition contains the bootloader and the ext4 partition contains the root filesystem.

The thing that ties these two together is cmdline.txt, which defines the root device with a declaration like:

root=/dev/mmcblk0p2 rootfstype=ext4

Since the RPI always addresses the MicroSD card as /dev/mmcblk0 and the partitions are numbered p1, p2, and so on, this indicates that the root partition is the ext4 partition shown above.

Moving the root filesystem to a different location is a mere matter of adjusting cmdline.txt (and updating /etc/fstab) as shown below.

Here is my RPI with a USB thumb drive running the root (/) filesystem.

As you can see, I have a USB drive which shows up as /dev/sda and the MicroSD card.

amrith@rpi:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 1 115.7G 0 disk
└─sda1 8:1 1 115.7G 0 part /
mmcblk0 179:0 0 29G 0 disk
└─mmcblk0p1 179:1 0 100M 0 part /boot

amrith@rpi:~$ blkid
/dev/mmcblk0p1: LABEL="BOOT" UUID="E4F6-9E9D" TYPE="vfat" PARTUUID="01deb70e-01"
/dev/sda1: LABEL="root" UUID="7f4e0807-d745-4d6e-af6f-799d23a6450e" TYPE="ext4" PARTUUID="88578723-01"

I have changed cmdline.txt as shown below.

amrith@rpi:~$ more /boot/cmdline.txt
[...] root=/dev/sda1 rootfstype=ext4 [...]

and updated /etc/fstab (on the USB drive) as shown below.

amrith@rpi:~$ more /etc/fstab
# fstab generated by rpi-base element
proc /proc proc defaults 0 0
LABEL=BOOT /boot vfat defaults,ro 0 2
PARTUUID=88578723-01 / ext4 defaults,noatime 0 1

As you can see, I’ve also marked the MicroSD card (which provides /boot) read-only; in the unlikely event that I have to modify it, I can remount it without the ‘ro’ option and make any changes.
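
A quick edit, under that setup, looks like this:

# make /boot writable, change it, then flip it back
sudo mount -o remount,rw /boot
# ... edit cmdline.txt, config.txt, etc. ...
sudo mount -o remount,ro /boot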

The overall layout is illustrated below.

On the left-hand side is the MicroSD card. Note that on the MicroSD card, in the FAT32 partition, cmdline.txt is in ‘/’ (there’s no /boot on the MicroSD card). The cmdline.txt points the root partition at /dev/sda1, which is the USB flash drive.

On the right-hand side is the USB flash drive; it has an ext4 partition with an /etc/fstab entry that mounts the MicroSD card’s FAT32 partition on /boot and mounts itself on /.

This works just as well with any external disk; just make sure that you have adequate power!

Everything you wanted to know about Raspberry Pi (RPI) images – Part 2

In Part 1 of this blog post I described the Raspberry Pi boot image structure (a very simple version) and provided a very brief description of how the boot process works.

This part talks about how you can build your own images for a Raspberry Pi and sets up a framework for people to collaborate and share their image components with each other.

In subsequent blogs I’ll build on the concepts here, describe other more sophisticated configurations, and progressively share more of the elements I have built for myself, cleaning them up before I make them public.

This blog post is structured into three parts:

  • Building your own images
  • Writing your own elements
  • Collaborating with elements

I build my own Raspberry Pi boot images from scratch. I do this so I can build images in a repeatable way, configuring the software I want in just the way I want it.

It is much more effective to do this than to take an image from some other source, remove the things I don’t want, and work around the quirks of the person who built that image to begin with.

In doing this, I can control exactly what goes on the image and make sure that it comes out the exact same way each time.

My tool of choice (as I use this for work-related things as well) is diskimage-builder. Like other image building tools, this one is quirky, but I’ve come to understand it and I prefer it to the pi-gen tool that is used to build the official Raspbian images.

Building your own images

This will work for you if you have any old Linux machine (I use an Ubuntu 16.04 LTS x86_64 machine). A simple-to-follow Quick Start Guide is available here.

To a reasonably Linux savvy person, building your first image should take no more than 20 minutes and you can have your Raspberry Pi up and running on your own custom image in about 30 minutes.

The steps are simple (a sketch of the commands appears right after this list):

  • Clone the repository of tools and elements
  • Install prerequisite software
  • Build your image
  • Install your image
  • Have fun!
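
Concretely, the flow looks something like the sketch below. Treat it as a sketch: the prerequisite package names are assumptions, and the element names are the ones from my rpi-image-builder repository, so check its README for the authoritative steps.

# clone the repository of tools and elements
git clone https://github.com/amrith/rpi-image-builder.git
cd rpi-image-builder

# install diskimage-builder and typical prerequisites (assumed names)
sudo apt-get install -y qemu-user-static kpartx
pip install diskimage-builder

# build a Raspbian-based image using the elements in this repository
export ELEMENTS_PATH=$PWD/elements
disk-image-create rpi-raspbian-core rpi-generic -a armhf -o raspbian -t raw -n

# write raspbian.raw to your MicroSD card (replace sdX with your card!)
sudo dd if=raspbian.raw of=/dev/sdX bs=4M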

Writing your own elements

diskimage-builder is a very flexible tool and it is very easy to write elements once you understand the general flow of things. Like all image building tools, it uses two environments: a host environment and an image environment. Some tools make the image environment a chroot (as diskimage-builder does); others use a VM or a docker container.

diskimage-builder works by allowing users to select elements, define dependencies between elements, and execute scripts provided in those elements in a deterministic order. Some of the scripts are executed in the host environment, others are executed in the image environment. Finally, it takes the image environment that was created and packages it up into the format of your choice.

I use diskimage-builder to make images for a cloud (and I use the qcow2 format for that). For the Raspberry Pi, I use the ‘raw’ format, which is nothing more than a disk image complete with an MBR (Master Boot Record), partition table, and volumes.

To write your own element, you can follow along with one of the many elements provided and insert your own commands. It is a very short, and not very steep, learning curve.

Collaborating with elements

I’ve shared (and I will continue to share) elements that build a basic Raspbian based image, a basic Debian based image, and elements that will install specific packages, configure WiFi, configure a secure user, and so on.

For example, consider the element rpi-generic, which produces a Raspberry Pi image that I use as a basic building block for many of my applications. It does not, by itself, provide an operating system, so I could choose either rpi-debian-core or rpi-raspbian-core. With just those two elements, I could execute the command

disk-image-create rpi-debian-core rpi-generic \
-a armhf -o debian -t raw -n

and this would produce a file (debian.raw) which is a properly bootable Raspberry Pi image.

But since rpi-generic depends on other elements, the diskimage-builder tool looks for those and uses them too. One of the elements it depends on is rpi-resize-root. Let’s look at that for a second.

The MicroSD cards I use are often 8 or 16GB; the image (for speed) is just 2GB. So when you install the image onto the MicroSD card and boot the machine for the first time, the software installed and configured by the rpi-resize-root element expands the root file system to take the full available space.

Similarly, rpi-generic depends on rpi-wifi. That element does the few things needed to configure my Raspberry Pi to automatically connect to the WiFi network.

If someone else wrote an element that I wanted to use, all I would have to do is get that element (clone the repository that they shared) and tell diskimage-builder where to find the element!

For example, the sample build.bash script sets the ELEMENTS_PATH variable (shown below), which can easily be extended like a standard Linux PATH variable!

As a practical matter, I have an element for my WiFi access point that I use at home (which I will share as soon as I can clean it up). That element is on my machine at /rpi-image-builder-private/elements/rpi-wifi-access-point.

The elements which I’ve shared on the rpi-image-builder repository are at /rpi-image-builder/elements/.

To build my WiFi access point, I specify:

export ELEMENTS_PATH=/rpi-image-builder-private/elements:/rpi-image-builder/elements

I encourage you to write and share elements, and I believe that this will be a great way for all Raspberry Pi users to collaborate on their cool projects.

Everything you wanted to know about Raspberry Pi (RPI) images

This is a two-part blog post that covers a lot of topics about a rather boring aspect of the RPI. I suspect that the vast majority of people using an RPI won’t be interested in most of this stuff. But there are a small number of people out there (like myself) who will no doubt find it very useful.

This (the first of the two) post covers the booting process of an RPI. You need to have some idea of how this works to make your own images.

I have an RPI 3; some of this stuff may not be accurate for other models.

RPI Booting demystified

I have not been able to find any documentation on the bootloader that runs in the firmware, but by trial and error, here is what I’ve found.

The firmware accesses the partition table on the MicroSD card, which it assumes is an MS-DOS-style partition table. The partition table is at the very beginning of the card and is part of the Master Boot Record (MBR). A good write-up about this layout is found here [1].

The bootloader looks (it appears) for an MS-DOS LBA-addressable partition. I have found that partition types 0x0C and 0x0E work. To boot reliably, I have also found that this must be the first partition on the disk.

RPIs don’t use bootstrap code on the MicroSD card.

This is very important if you wish to make your own images; it makes that process a whole lot simpler. The bootloader in firmware instead looks at this small MS-DOS partition to find a kernel, all the drivers it needs, and a pointer to the root file system.

The bootloader parses the cmdline.txt file to find the root partition. Here is a small portion of the cmdline.txt on one of my RPIs.

root=/dev/mmcblk0p2 rootfstype=ext4

The RPI addresses the MicroSD as /dev/mmcblk0 and the partitions on it are (in order) addressed as /dev/mmcblk0p1, /dev/mmcblk0p2, and so on.

The cmdline.txt file identifies the root partition either by providing this device name, or by specifying a partition UUID or a label. In the example above, cmdline.txt points to the second partition on the MicroSD card.
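
For illustration, each of the following forms identifies a root partition (the values are made up; I have only used the device name and PARTUUID forms myself, and the LABEL form may need initramfs support on some kernels):

root=/dev/mmcblk0p2 rootfstype=ext4
root=PARTUUID=01deb70e-02 rootfstype=ext4
root=LABEL=root rootfstype=ext4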

I found exactly one place that appears to document this in some detail; it is here [2].

Troubleshooting the boot process

When you power on the RPI, the red light indicates power. The green light (ACT/D5) indicates SD card access. If the red light comes on but the green one doesn’t, it means that the firmware did not find a card it liked. This is the most common problem described by people online.

If you have the tools to debug it, here are some things that I have found to be helpful.

If you can put the MicroSD card into a machine where you can mount it, you can inspect it there (hopefully that machine is Linux based; if it is Windows based, my apologies, I can’t help you).

lsblk will help you identify the device; for example, on my machine it is /dev/sdc. Here is the output of lsblk on my machine, with the MicroSD card showing up as sdc.

$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 150G 0 disk
├─sda1 8:1 0 134G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 16G 0 part [SWAP]
sdc 8:32 1 29.7G 0 disk
├─sdc1 8:33 1 100M 0 part
└─sdc2 8:34 1 29.6G 0 part
sr0 11:0 1 1024M 0 rom

And (as root) running ‘parted’ on that device gives me

(parted) unit b
(parted) print
Model: SD SL16G (sd/mmc)
Disk /dev/mmcblk0: 15931539456B
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number Start End Size Type File system Flags
 1 1048576B 1529000447B 1527951872B primary fat32 lba
[...]

As shown above, the partition table is ‘msdos’ and the partition is listed as ‘fat32 lba’. Both of these are essential.

Here are some examples of a closer look at the MicroSD card using hexdump.

# sudo dd if=/dev/sdc of=/tmp/head bs=4K count=1
# hexdump -C /tmp/head
00000000 fa b8 00 10 8e d0 bc 00 b0 b8 00 00 8e d8 8e c0 |................|
00000010 fb be 00 7c bf 00 06 b9 00 02 f3 a4 ea 21 06 00 |...|.........!..|
00000020 00 be be 07 38 04 75 0b 83 c6 10 81 fe fe 07 75 |....8.u........u|
00000030 f3 eb 16 b4 02 b0 01 bb 00 7c b2 80 8a 74 01 8b |.........|...t..|
00000040 4c 02 cd 13 ea 00 7c 00 00 eb fe 00 00 00 00 00 |L.....|.........|
00000050 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
000001b0 00 00 00 00 00 00 00 00 d4 5c 0b 00 00 00 00 00 |.........\......|
000001c0 01 20 0e 03 d0 ff 00 08 00 00 59 89 2d 00 00 03 |. ........Y.-...|
000001d0 d0 ff 05 03 d0 ff 59 91 2d 00 a7 3a ad 01 00 00 |......Y.-..:....|
000001e0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
[...]

The key byte is at offset 0x1c2 into the card; here it is 0x0E, which is the type code for a FAT16 LBA-mapped partition.

On another RPI, I have the following.

$ sudo dd if=/dev/sdc of=/tmp/header bs=4K count=1
$ hexdump -C /tmp/header
00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
000001b0 00 00 00 00 00 00 00 00 99 56 4c 6b 00 00 80 20 |.........VLk... |
000001c0 21 00 0c eb 17 0c 00 08 00 00 00 20 03 00 00 eb |!.......... ....|
000001d0 17 0c 83 f9 71 05 00 28 03 00 00 d0 3c 00 00 00 |....q..(....<...|
000001e0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
000001f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa |..............U.|
00000200 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00001000

Both of these work well for me. If you don’t see one of these, look up the number you do see here [3]. If it is not an MS-DOS LBA-mapped partition type, you have a problem.

If you made your own SD card and you have the image file handy, you can inspect the image as follows.

Here is an image I built for myself.

$ ls -l raspbian.raw
-rw-rw-r-- 1 amrith amrith 2147483648 Jul 16 22:26 raspbian.raw

It is a 2GB Raspbian clone that I use for some of my machines. It contains two partitions; the output below shows offsets and sizes in bytes because of the ‘unit b’ command.

$ sudo parted ./raspbian.raw unit b print
Model: (file)
Disk /home/amrith/images/raspbian.raw: 2147483648B
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number Start End Size Type File system Flags
 1 1048576B 105906175B 104857600B primary fat32 boot, lba
 2 105906176B 2146435071B 2040528896B primary ext4

Armed with that information, you can now do this.

$ sudo losetup --show -o 1048576 -f ./raspbian.raw
/dev/loop0
$ sudo losetup --show -o 105906176 -f ./raspbian.raw
/dev/loop1
$ mkdir /tmp/boot
$ mkdir /tmp/root
$ sudo mount /dev/loop0 /tmp/boot
$ sudo mount /dev/loop1 /tmp/root

/tmp/boot and /tmp/root now mount the MS-DOS and ext4 partitions in the image using loopback devices. This is very useful for debugging. You can, of course, do the exact same thing with a MicroSD card and make corrections to the card if you so desire.
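
When you are done, unmount the partitions and release the loop devices:

sudo umount /tmp/boot /tmp/root
sudo losetup -d /dev/loop0 /dev/loop1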

Making your own images

The next installment of this blog post will describe how you can build your own images.

References

[1] https://en.wikipedia.org/wiki/Master_boot_record
[2] https://github.com/raspberrypi/noobs/wiki/Standalone-partitioning-explained
[3] http://www.win.tue.nl/~aeb/partitions/partition_types-1.html

Blockchain is an over hyped technology solution looking for a problem

The article makes a very simple argument for something that I have felt for a while: blockchain is a cool technology, but the majority of the use cases people talk about are just bullshit.

http://www.coindesk.com/blockchain-intermediaries-hype/

Troubleshooting OpenStack gate issues with logstash

From time to time, I have to figure out why the Trove CI failed some job. By “from time to time”, I really mean “constantly”, “all the time”, and “every day”.

Very often the issue is some broken change that someone pushed up; the easy ones are pep8 or pylint failures, the slightly harder ones are the py27/py35 failures. The really hard ones are failures in the Trove scenario tests.

Very often the failures are transient and a recheck fixes them (which is annoying in itself) but sometimes the failure is repeatable.

In the past week, I’ve had to deal with one such issue; I first realized that it was a repeated failure after about a dozen rechecks of various changes.

I realized that the failure had a telltale signature that looked like this:

Test Replication Instance Multi-Promote functionality.
Test promoting a replica to replica source (master).        SKIP: Failure in <function wait_for_delete_non_affinity_master at 0x7f1aafd53320>
Verify data is still on new master.                         SKIP: Failure in <function wait_for_delete_non_affinity_master at 0x7f1aafd53320>
Add data to new master to verify replication.               SKIP: Failure in <function wait_for_delete_non_affinity_master at 0x7f1aafd53320>
Verify data exists on new master.                           SKIP: Failure in <function wait_for_delete_non_affinity_master at 0x7f1aafd53320>

The important part of the message (I realized later) was the part that read:

Failure in <function wait_for_delete_non_affinity_master ...

Looking a bit above this, I found the test that had in fact failed:

Wait for the non-affinity master to delete.                 FAIL

One important thing in this kind of debugging is to figure out when the failure really started to happen, and that’s one of the places where logstash comes in really handy.

For every single CI job run by the OpenStack integrated gate, the result artifacts are parsed and some of them are indexed in an Elasticsearch database.

It is now trivial to take the string that I felt was the ‘signature’ and search for it in logstash. Within seconds I could tell that this error began to occur on 4/11.
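
For example, a query along these lines finds the occurrences (the message field and console tag follow the conventions used by OpenStack’s logstash; adjust if your indexing differs):

message:"Failure in <function wait_for_delete_non_affinity_master" AND tags:"console"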

This, by itself was not sufficient to figure out what the problem was, but once Matt Riedemann identified a probable cause, I was able to confirm that the problem started occurring shortly after that change merged.

Logstash is a really, really powerful tool; give it a shot, you’ll find it very useful.

Raspberry Pi networking projects

The Raspberry Pi 3 that I have comes standard with two network interfaces: a wired interface that can do 100 Mbps and a WiFi interface. Older Raspberry Pis required a USB dongle for WiFi; I don’t use those units any longer.

So, for the purposes of all that follows, I assume a Raspberry Pi 3 with onboard WiFi and wired ethernet.

By default, these two interfaces are active and software that you run on the Raspberry Pi can connect to the outside world using one, or both.

I’ve found several interesting use-cases for the Raspberry Pi by changing the way these interfaces are configured.

  1. A WiFi satellite location

    In this image, three devices (devices 1, 2, and 3) are not WiFi enabled and are connected to the internet using the Raspberry Pi as, effectively, a wireless network extender.

    This setup is relatively straightforward on a Raspberry Pi.

  • Configure the wireless interface on the Raspberry Pi to connect to the wireless access point.
  • Enable IP forwarding
  • Configure dnsmasq
  • Enable packet forwarding between the wlan0 and eth0 interfaces

    With this setup, the three devices connected to the wired interface will get their DHCP leases from the Raspberry Pi, and packets will be forwarded by the Raspberry Pi between the wired and wireless interfaces. (A configuration sketch for this setup appears after this list.)

  2. A WiFi satellite location without DHCP

    The above configuration is very useful for some things, but not always. I have a printer (quite old) which I have connected to a single parallel-port ethernet print server (TP-Link TL-PS110P). I need to be able to access this printer from other wirelessly connected devices, and so I need it to have its DHCP lease come from the WiFi access point!

    This setup is similar to (1) above, but with no dnsmasq and no NAT; instead, you enable proxy ARP.

  3. The Raspberry Pi as a WiFi access point

    This is something I’ve just been playing with recently and it appears to work quite well. The Raspberry Pi 3’s WiFi interface can be configured to act as an access point using the hostapd package. The way I have this set up, dnsmasq is enabled and the wirelessly connected devices receive DHCP leases from the Raspberry Pi. Traffic is routed to the internet over the wired interface.

  4. The Raspberry Pi as a secure WiFi access point

    Eventually, this is what I want: a Raspberry Pi as a secure WiFi access point, with the WiFi interface running in access point mode and all traffic going out of the wired interface tunneled to a VPN.

    I use OpenVPN, and that already works fine on the Raspberry Pi. I still have to put the pieces together and make it all a bit more robust; right now, I’m not quite there.

    Equally interesting would be the converse of configuration (1) above, where all traffic out of the WiFi interface is tunneled. In that setup, I could, for example, connect my laptop to the wired interface and have the Raspberry Pi connect to any WiFi access point. Traffic over the WiFi interface would be tunneled by the Raspberry Pi, and this would be an ideal travel setup as the Raspberry Pi would just be powered off the USB port on the laptop 🙂
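
Here is a sketch of the configuration for setup (1) above. The interface names, addresses, and lease range are assumptions; adapt them to your network. It assumes eth0 (the wired side) has a static address and wlan0 has already joined the upstream WiFi access point.

# enable IP forwarding
echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/30-ipforward.conf
sudo sysctl -p /etc/sysctl.d/30-ipforward.conf

# dnsmasq hands out DHCP leases on the wired side
cat <<'EOF' | sudo tee /etc/dnsmasq.d/eth0.conf
interface=eth0
dhcp-range=192.168.50.10,192.168.50.100,12h
EOF
sudo systemctl restart dnsmasq

# NAT traffic from the wired devices out of the wireless interface
sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
sudo iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT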

Raspberry Pi basics

I have been using Raspberry Pis (I’ve bought a few of these on Amazon; at $50 each, they are a bargain) for some time now and have found them to be excellent for a number of things.

A recent project to set one up as a WiFi access point got me thinking that I should, maybe, share some of these use-cases.

So, here’s a primer on how I set up a Raspberry Pi. I have a new one on order right now, so this is an actual first-time setup.

If you have never done this before, don’t worry, it is very simple.

Assembling your Raspberry Pi

New and out of the box, the only things that you have to do are:

  • Figure out how to affix the heat sink to the processor; I always use the largest one that they provide. Do this once, do it carefully, and you will have no issues later.
  • Figure out how to get the board neatly into the nice clear plastic case.

Formatting your SD card

I don’t purchase the “complete kit”, which comes with a Micro-SD card. I usually have a card or three hanging around, and I set one up using NOOBS (that’s Raspberry Pi’s New Out Of Box Software).

Since I set up the card on a Windows machine, there is one thing I’d like to highlight. The documentation makes it sound hard; they have you download some special format utility and all that stuff. Don’t bother.

Just follow the easy instructions found here.

  1. Launch the disk management utility.

My new 16GB disk drive is the one that shows up as Disk 1.

This was a brand-new SD card; if you are reusing an SD card, you may see multiple partitions. Delete them all.

If you find that the “Delete Volume” options are greyed out, you will have to use the Windows command line. Use the diskpart utility, select the disk, then select each partition in turn and delete it (a sample diskpart session appears after this list). You will be left with a disk that looks like this.

Observe that now Disk 1 is shown as “Unallocated”. I always make sure I get here and format the disk.

  2. Format the disk
    You do this by simply right clicking on the “Unallocated” disk and choosing “Format”. Be careful to choose FAT32.
  3. Copy NOOBS onto the new SD card
    Download the latest NOOBS zip file and unzip it. Then just drag and drop the whole thing onto your new SD card. Safely eject the SD card, make sure the power is disconnected from the Raspberry Pi, and plug the card into the slot. Then … the moment of truth.
  4. Power up the Raspberry Pi for the first time
    If you did everything correctly, you should see a NOOBS screen that comes up and allows you to choose the operating system. I usually enable WiFi at this point (or if the wired network is connected, that works too), and then I follow the standard NOOBS documentation, set up Raspbian with PIXEL, and then reboot.
  5. On first boot, I enable the SSH server, set the locale, timezone and things like that, and from that point the rest of the setup is done from the command line.
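
For reference, the diskpart session mentioned in step 1 goes roughly like this (the disk and partition numbers are illustrative; pick your SD card very carefully, by its size):

diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> list partition
DISKPART> select partition 1
DISKPART> delete partition
DISKPART> exit

Repeat the select partition / delete partition pair for each partition; some protected partition types need “delete partition override”.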

That’s really all there is to your first time Raspberry Pi setup!


How to Understand the Curves Tool in Photoshop

Two of the most important things I’ve learned to work with in the past couple of weeks are the Curves tool in Photoshop (for post-processing) and the Histogram tool (either in Photoshop or on your camera itself, for use while taking pictures or in post-processing).

This article is about the Curves tool; it is well worth the read.

-amrith

After you’ve mastered Levels, it’s time to take a step up to the tool that is probably the most useful for color and contrast control in Photoshop: Curves. As with Levels, you should play around with the basic Curves command to get a feel for it.

Source: How to Understand the Curves Tool in Photoshop

Primes Versus Zoom Lenses: Which Lens to Use and Why?

Which type of lens is better, a prime lens or a zoom lens? This is one of the most debated topics in photography. Some of you might choose a zoom lens and others may choose a prime lens; it all depends on what and where you are going to shoot.

Source: Primes Versus Zoom Lenses: Which Lens to Use and Why?

More than 99% of Blockchain Use Cases Are Bullshit

I’ve been following the blockchain ecosystem for some time now largely because it strikes me as yet another distributed database architecture, and I dream about those things.

For some time now, I’ve been wondering what to do after Tesora, and blockchain was one of the things I’ve been looking at as a promising technology, but I wasn’t seeing it. Of late I’ve been asking people who claim to be devotees at the altar of blockchain what they see as the killer app. All I hear is a large number of low rumbling sounds.

And then I saw this article by Jamie Burke of Convergence.vc and I feel better that I’m not the only one who feels that this emperor is in need of a wardrobe.

Let’s be clear: I absolutely agree that bitcoin is a wonderful use of blockchain technology, and it solves the issue of trust very cleverly through proof of work. I think there is little dispute about the elegance of this solution.

But once we go past bitcoin, the applications largely sound and feel like my stomach after eating gas station sushi; they sound horrible and throw me into convulsions of pain.

In his article, Jamie Burke talks of 3D printing based on a blockchain-sharded CAD file. I definitely don’t see how blockchain can prevent the double-spend (i.e., buy one Yoda CAD file, print 10,000).

Most of the blockchain ideas I’m seeing are attempts to piggyback on the hot new buzzword, with “blockchain” being used to mean “secure and encrypted database”. After all, there’s a bunch of crypto involved and there’s data stored there, right? So it must be a secure encrypted database.

To which I say, Bullshit!

P.S. Oh, the irony. This blog post references a blog post with a picture labeled “Burke’s Bullshit Cycle”, and the name of this blog is hypecycles.com.

Reflections on the first #OpenStack PTG (Pike PTG, Atlanta)

A very long time ago, on a planet very unlike the earth we are on now, Thierry sent this email [1] proposing that after Barcelona we split the OpenStack Summit into what have now come to be known as the Design Summit and the Forum. The email thread that resulted from [1] was an active one, with 125 responses; along the way Thierry also posted [2], a summary of the issues and concerns raised.

I had my reservations about the idea, and now, after returning from the first of these, have had some time to reflect on the result.

On the whole, the event was very well done, and I believe that everyone who attended the feedback session had positive things to say about it; my congratulations to Erin Disney and the rest of the team at the Foundation. Attendance was a solid 500 to 600 people (I don’t know the exact number), and Thierry must be psychic because he predicted almost exactly that in February [3].

I did not realize that the Foundation had gone as far as getting Starbucks to customize a blend of coffee for us, and getting the Sheraton to distribute it in our rooms (image courtesy of Masayuki Igawa, @masayukig).

The format gave attendees the opportunity to get a significant amount of work done both within their own project teams, as well as with other project teams, without the interruptions and distractions of the summit.

I particularly liked the fact that I could attend two days of cross-project sessions and then two and a half days of sessions with other projects. Giving projects two or three room-days instead of four or five room-hours dramatically improved the amount of time that teams could focus on their own discussions and spend on cross-project discussions.

Personally, I think the PTG was a success; it seems to have delivered most, if not all, of the things it set out to do. Some things outside our control, certainly outside the control of the Foundation, cast a small shadow on the proceedings, and we need to seriously consider the consequences of where the next summits and PTGs are held. The location has implications for many attendees, and I think we should seriously consider having remote participation at future events.

From my recollection of the feedback session (unfortunately I don’t have a link to the etherpad; if someone has it, please post a comment with it), everyone had good things to say about the event as a whole. The consensus was that the food was good but cold sandwiches get boring by day 3, the air handlers were noisy, the rooms were too cold (or hot), the chairs were uncomfortable, and there was no soda. That feedback is consistent with organizing an event for 500 people in a hotel or convention facility anywhere in the world. And if that’s all that people could put down in the “needs improvement” section, the event was a huge success.

I think the award for the best picture at the event (thanks to Thierry for the tweet) goes to the Designate team [4]. I should’ve thought to get a Trove team photo while we were there!

[1] http://markmail.org/thread/v6h3qzs7rb35h6fo

[2] http://markmail.org/message/slzcvunoxccse5k4

[3] http://markmail.org/message/bultywgyxued5khl

[4] https://twitter.com/tcarrez/status/835149239571316736/photo/1

Stratoscale acquires Tesora

Yesterday it was announced that Tesora had been acquired by Stratoscale; here are some of the articles that were published about this,

and this official announcement by Stratoscale.

Thanks to all of you who emailed, texted, tweeted, called, and pinged me on IRC 🙂 I’m overwhelmed by the volume and all the good wishes. I’ll reply to each of you individually, sorry it may take a couple of days for me to do that.

To all of our investors and advisors, first in ParElastic and later in Tesora: thank you all for your help and support. To everyone at Tesora who is moving to Stratoscale: all the very best to you. It has truly been a wonderful six years working with you, and I thoroughly enjoyed it.

Doug: It’s been especially awesome working with you these past six years. You are a great leader of people, and you have built and managed a truly exceptional team. People in your team like you, respect you, are comfortable approaching you with the strangest questions, and are willing to work with you over and over again. Not an easy thing to pull off over the extraordinarily long period that you’ve been able to do this. You were clearly not an easy taskmaster and your team consistently delivered miracles. But along the way you managed to ensure that everyone was having a good time.

Ken: No entrepreneur can have hoped for a better partner than you. It has been an extraordinary ride and it has been truly my honor and privilege to have taken this ride along with you. I think you were instrumental in building a company which was a very special place to work, where we built some excellent technology, got some marquee customers, and had a lot of fun doing it. I’ve learned a lot, about startups, about technology, about business, and about myself; thank you very much for this awesome experience.

Several of you have asked me “what’s next for Amrith”. I don’t know yet, I’m trying to figure that out (thanks to all of you who have offered to help me figure this out, I will certainly take you up on that).

In the short term, I’m going to continue to work on Trove, finish up my term as the PTL for Ocata and continue to work on the project as we begin the Pike cycle.

What comes later, I have no idea but if you have ideas, I’m all ears.

The Acurite SmartHub and monitoring solution

I purchased an Acurite 10-sensor indoor humidity and temperature monitoring system and am surprisingly happy with it. I was expecting a generally crappy experience but I have to say I was wrong, and it wasn’t that I’d set my expectations so low; the system is truly quite good.

The system I purchased is this one.

You get a smartHub and 10 sensors. Pictured below are the hub and a sensor.


The setup: smartHub

The smartHub comes with a little power adapter and an ethernet cable. Stick it into a wall outlet, connect the ethernet cable to your router, DHCP does its thing, and the smartHub gets online.

It initiates a series of accesses to some locations and downloads firmware and the like. (I’ve captured network traces; if I find anything interesting, I’ll blog about that.) In a couple of minutes the lights stabilize and you have to press a button labeled “Activate”.

The setup: online

Then you create an online account at the Acurite site and, once you are logged in, you associate your account with the device. You identify it by the number on the bottom (spoiler alert: the number is the MAC address of the device).

Within about a minute, the device shows up and you are good to go for the next step.

The setup: sensors

Each sensor takes two AAA batteries; pop them in, and within a minute the web portal shows a new device, which you can rename and mount wherever you want it. Very slick and easy.

Within about 15 minutes I had 10 sensors online and reporting.

I was so happy with this that I’ve purchased another smartHub and 10 more indoor/outdoor sensors; those don’t have the LCD display.

Enough of the happy talk

OK, so what did I not like?

  1. It has been years (literally, years and years) since I’ve purchased a gadget that requires batteries where the batteries are not included. It’s a good thing that I purchase AAA’s in packs of 50. Like every other commodity piece of electronics these days, these sensors are made in China; just stick two AAA’s in the box, please.
  2. Once you power up a sensor, it takes under a minute to initialize and register with the smartHub. But if you stick batteries in two of them in quick succession, there’s no way to tell (in the Web UI) which is which. There’s no number on the sensor, nothing you can associate with what you see on the screen; just “Temperature and Humidity Sensor – NN”, where NN is a number incrementing from 1 to 10.
  3. Once you get sensors into the Web UI, there is no way to re-order them. They will forever remain in the same order. So if you decide to move a device from one location to another, and you want to group your devices based on location, you are not able to do that.
  4. Wired ethernet, really? I’m sure the stupid cable they have to give you costs about as much as a wireless setup would. But I suppose wireless would make the setup just a bit harder.
  5. The web app is just about OK. Fine, it sucks. It allows you to add alerts for each device; by default, low-battery and loss-of-signal rules are added for each device. But I want to add temperature rules for different devices. Yes, you can do that, but you get to do it one device at a time. No copy/paste available.
  6. They claim to have an Android application, but it won’t work on a tablet; instead they expect you to use a full-blown web app on the tablet. The Android app won’t install on my Android phone either; lots of others seem to be complaining about this as well.

Closing thoughts

Acurite strikes me as a company that makes fine hardware, and they appear to have done an absolutely bang-up job on the initial setup and “getting started” part of the experience.

They are not a software company. The software part of the “after setup” experience is kind of horrible.

They offer no easy API-based mechanism to retrieve your sensor data. Yes, in the web app you can click a couple of buttons, play with date controls, and get a link to some AWS S3 bucket mailed to you with your data as a CSV. But really: advertise an API, get someone to write an IFTTT channel, and then you’ll be cooking with gas.

Next post will be a deconstruction of the protocol, what you get when you point your web browser at the smartHub’s IP address, and those kinds of fun things.

One more thing

The people at Acurite Support are wonderful. I have (in the past three days) spoken with two of them, and interacted with one via email. The people I spoke with were knowledgeable, and very helpful.

The wait times on hold are quite bad. I waited 25 minutes and 15 minutes respectively on hold. There is the usual boring elevator music while you stay in line with an announcement every minute that you are “XX in line”. No indication of how long your wait will be but you are offered the option of getting a callback.

An odd thing, though: while I was in line and had heard the message “you are second in line” a couple of times, I suddenly ended up being “third in line”. How someone got ahead of me in line, I know not.

But, their support is great. 5 stars for that!

Automating OpenStack’s gerrit commands with a CLI

Every OpenStack developer has to interact with the gerrit code review system. Reviewers and core reviewers have to do this even more, and PTLs do a lot of this.

The web-based interface is not conducive to many of the more common things one has to do while managing a project, so early on I started using the gerrit query CLI.

Along the way, I started writing a simple CLI that I could use to automate more things, and recently a few people asked about these tools and whether I’d share them.

I’m not claiming that this is unique, or that this hasn’t been done before; it evolved slowly and there may be a better set of tools out there that does all of this (and more). I don’t know about them. If you have similar tools, please do share (comment below).

So, I’ve cleaned up these tools a bit (removed things like my private key, username, and password) and made them available here.

Full disclosure: they are kind of rough around the edges, and you could cause yourself some grief if you aren’t quite sure what you are doing.

Here’s a quick introduction.

Installation

It should be nothing more than cloning the repository git@github.com/amrith/gerrit-cli and running the install command. Note: I use Python 2.7 as my default Python on Ubuntu 16.04. If you use Python 3.x, your mileage may vary.

Simple commands

The simplest command is ‘ls’ to list reviews

gerrit-cli ls owner:self

As you can see, the search here is a standard gerrit query search.

You don’t have to type complex queries every time; you can store and reuse queries. A very simple configuration file is used for this (a sample configuration file is provided and gets installed by default).

amrith@amrith-work:~$ cat .gerrit-cli/gerrit-cli.json
{
    # global options
    "host": "review.openstack.org",
    "port": 29418,

    # "dry-run": true,

    # user defined queries
    "queries": {
        # each query is necessarily a list, even if it is a single string
        "trove-filter": ["(project:openstack/trove-specs OR project:openstack/trove OR project:openstack/trove-dashboard OR project:openstack/python-troveclient OR project:openstack/trove-integration)"],

        # the simple filter uses the trove-filter and appends status:open and is therefore a list

        "simple": ["trove-filter", "status:open"],

        "review-list": ["trove-filter", "status:open", "NOT label:Code-Review>=-2,self"],

        "commitids": ["simple"],

        "older-than-two-weeks": ["simple", "age:2w"]
    },

    # user defined results
    "results": {
        # each result is necessarily a list, even if it is a single column
        "default": ["number:r", "project:l", "owner:l", "subject:l:80", "state", "age:r"],
        "simple": ["number:r", "project:l", "owner:l", "subject:l:80", "state", "age:r"],
        "commitids": [ "number:r", "subject:l:60", "owner:l", "commitid:l", "patchset:r" ],
        "review-list": [ "number:r", "project:l", "branch:c", "subject:l:80", "owner:l", "state", "age:r" ]
    }
}

The file is simple JSON, and you can comment out lines just as you would in Python (#…).

Don’t do anything, just --dry-run

The best way to see what’s going on is to use the --dry-run option (or, to be sure, uncomment the “dry-run” line in your configuration file).

amrith@amrith-work:~$ gerrit-cli --dry-run ls owner:self
ssh review.openstack.org -p 29418 gerrit query --format=JSON --current-patch-set --patch-sets --all-approvals owner:self
+--------+---------+-------+---------+-------+-----+
| Number | Project | Owner | Subject | State | Age |
+--------+---------+-------+---------+-------+-----+
[...]
+--------+---------+-------+---------+-------+-----+

So the owner:self query makes a gerrit query and formats and displays the output as shown above.

So, what columns are displayed? The configuration contains a section called “results” and a default result is defined there.

"default": ["number:r", "project:l", "owner:l", "subject:l:80", "state", "age:r"],

You can override the default and cause a different set of columns to be shown. If a default is not found, the code has a hardcoded default as well.

Similarly, you could run the query:

amrith@amrith-work:~$ gerrit-cli --dry-run ls
ssh review.openstack.org -p 29418 gerrit query --format=JSON --current-patch-set --patch-sets --all-approvals owner:self status:open
+--------+---------+-------+---------+-------+-----+
| Number | Project | Owner | Subject | State | Age |
+--------+---------+-------+---------+-------+-----+
+--------+---------+-------+---------+-------+-----+

and a default query will be generated for you; that query is owner:self and status:open.

You can nest these definitions as shown in the default configuration.

amrith@amrith-work:~$ gerrit-cli --dry-run ls commitids
ssh review.openstack.org -p 29418 gerrit query --format=JSON --current-patch-set --patch-sets --all-approvals (project:openstack/trove-specs OR project:openstack/trove OR project:openstack/trove-dashboard OR project:openstack/python-troveclient OR project:openstack/trove-integration) status:open
+--------+---------+-------+---------+-------+-----+
| Number | Project | Owner | Subject | State | Age |
+--------+---------+-------+---------+-------+-----+
+--------+---------+-------+---------+-------+-----+

The query “commitids” is expanded as follows.

commitids -> simple
simple -> trove-filter, status:open
trove-filter -> (...)
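To make the nesting concrete, here is a minimal sketch of how such recursive expansion might work (illustrative only; the real gerrit-cli code may differ):

# Each name in a query list is either another named query or a
# literal Gerrit search term; expansion recurses until only
# literal terms remain.
QUERIES = {
    "trove-filter": ["(project:openstack/trove OR project:openstack/trove-specs)"],
    "simple": ["trove-filter", "status:open"],
    "commitids": ["simple"],
}

def expand(term, queries):
    if term in queries:
        result = []
        for item in queries[term]:
            result.extend(expand(item, queries))
        return result
    return [term]

print(" ".join(expand("commitids", QUERIES)))
# (project:openstack/trove OR project:openstack/trove-specs) status:open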

What else can I do?

You can do a lot more than just list reviews …

amrith@amrith-work:~$ gerrit-cli --help
usage: gerrit [-h] [--host HOST] [--port PORT] [--dry-run]
              [--config-file CONFIG_FILE] [-v]
              {ls,show,update,abandon,restore,recheck} ...

A simple gerrit command line interface

positional arguments:
  {ls,show,update,abandon,restore,recheck}
    ls                  list reviews
    show                show review(s)
    update              update review(s)
    abandon             abandon review(s)
    restore             restore review(s)
    recheck             recheck review(s)

optional arguments:
  -h, --help            show this help message and exit
  --host HOST           The gerrit host. Default: review.openstack.org
  --port PORT           The gerrit port. Default: 29418
  --dry-run             Whether or not to actually execute commands that
                        modify a review.
  --config-file CONFIG_FILE
                        The path to the gerrit-cli configuration file to use
                        for this session. (Default: ~/.gerrit-cli/gerrit-
                        cli.json)
  -v, --verbose         Provide additional (verbose) debug output.

Other things that I do quite often (and like to automate) are update, abandon, restore and recheck.

A word of caution: when you aren’t sure what a command will do, use --dry-run. Otherwise, you could end up in a world of hurt.

Like when you accidentally abandon 100 reviews 🙂

And even if you know what your query should do, remember I’ve hidden some choice bugs in the code. You may hit those too.

Enjoy!

I’ll update the readme with more information when I get some time.

Effective OpenStack contribution: Seven things to avoid at all cost

There are numerous blogs and resources for the new and aspiring OpenStack contributor, providing tips, listing what to do. Here are seven things to avoid if you want to be an effective OpenStack contributor.

I wrote one of these.

There have been presentations at summits that share other useful newbie tips as well; here is one.

Project repositories often include a CONTRIBUTING.rst file that shares information for newcomers. Here is the file for the Trove project.

Finally, many of these resources include a pointer to the OpenStack Developer’s Guide.

Over the past three years, I have seen several newbie mistakes repeated over and over again, and in thinking about some recent incidents, I think the community has not done a good job documenting these “Don’t Do’s”.

So here is a start: seven things you shouldn’t do if you want to be an effective OpenStack contributor.


1. Don’t submit empty commit messages

The commit message is a useful part of the commit; it serves to inform reviewers about what the change is, and how your proposed fix addresses the problem. In general (with the notable exception of procedural commits for things like releases or infrastructure), the commit message should not be empty. The words “Trivial Fix” do not suffice.

OpenStack documents best practices for commit messages. Make sure your commit message provides a succinct description of the problem, describes how you propose to fix it, and includes a reference (via the Closes-Bug, Partial-Bug, or Related-Bug tags) to the Launchpad entry for the issue you are fixing.
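For illustration, a commit message following those guidelines might look like this (the subject, description, and bug number below are entirely made up):

Fix timeout handling during backup restore

Restores of large backups could fail because the agent polled
with a fixed, short deadline. Make the poll interval and the
overall deadline configurable, and default the deadline to a
value large enough for multi-GB backups.

Closes-Bug: #1234567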


2. Don’t expect that reviews are automatic

In OpenStack, reviewing changes is a community activity. If you propose changes, they get merged because others in the community contribute their time and effort in reviewing your changes. This wouldn’t work unless everyone participates in the review process.

Just because you submitted some changes, don’t expect others to feel motivated or obligated to review them. In many projects, review bandwidth is at a premium, so you will have a better chance of getting your change reviewed and approved if you reciprocate and review other people’s changes.


3. Don’t leave empty reviews

When you review someone’s code, merely adding a +1 serves no useful purpose. At the very least, indicate what you did with the change. Equally useful is to say what you did not do.

For example, you could indicate that you only reviewed the code and did not actually test it. Or you could go further: download and test the patch set, and indicate in your review comment that you tested the change and found it to work. On occasion, such as when I review a change for the first time, I will indicate that I have reviewed the changes but not the tests.

Feel free to ask questions about the change if you don’t follow what is being done. Also, feel free to suggest alternate implementations if you feel that the proposed implementation is not the best one.

Don’t be shy about marking a change with a -1 if you feel that it is not ready to merge.

A drive-by +1 is generally unhelpful, and if you persist in doing that, others in the community will tend to discount your reviews anyway.


4. Don’t game the Stackalytics system

By far, the most egregious violation I’ve seen is when people blatantly try to game the Stackalytics system. Stackalytics is a tool that tracks individual and company participation in OpenStack.

Here, for example, is the Stackalytics page for the Trove project in the current release:

Reviews: http://stackalytics.com/?module=trove-group

Commits: http://stackalytics.com/?module=trove-group&metric=commits

It allows you to see many metrics in a graphical way, and allows you to slice and dice the data in a number of interesting ways.

New contributors, bubbling with enthusiasm, often fall into the trap of trying to game the system and rack up reviews or commits. This can end very badly for you if you go down this route. For example, one very enthusiastic person recently blasted a change out to about 150 projects, attempting to add a CONTRIBUTING.rst file to all of them. What ensued is documented in this mailing list thread:

A few of the changes were merged before being reverted; the vast majority were abandoned.

Changes like this serve no useful purpose. They also consume an inordinate amount of resources in the CI system. I computed that the little shenanigan described above generated approximately 1,050 CI jobs and consumed about 190 hours of time on the CI system.

I admit that numbers are important, and they are a good indication of participation. But quality is a much more important metric, because quality is an indicator of contribution. I firmly believe that participation is about showing up, while contribution is about what you do once you are here; contribution is a far more important thing to aim for than participation.


5. Don’t ignore review comments

When you’ve submitted a change and people review it and provide comments, don’t ignore them. If you are serious about a change, stay with it until it gets merged. Respond to comments in a timely manner, if only to say that you will come back with a new patch set in some time.

If you don’t, remember that review bandwidth is a scarce resource and in the future your changes may get scant attention from reviewers. Others who review your changes are taking time out of their schedules to participate in the community. At the very least you should recognize and respect that investment on their part and reciprocate with timely responses.


6. Don’t be shy

Above all, if you aren’t sure how to proceed, don’t be shy. Post a question on the mailing list. If that’s too public for you (and that’s perfectly alright), ask on the IRC channel for the project in question. If that is still too public, find someone who is active on the project (start with the PTL) and send that person an email.

An important aspect of the PTL role is fielding those questions, and all of us (PTLs) receive several of them each month. Not sure whom to ask? Toss the question out on IRC at #openstack or #openstack-dev and you should receive an answer before long.


7. Don’t be an IRC ghost

An important thing to remember about IRC is that it is an asynchronous medium, so don’t expect answers in real time. The OpenStack community is highly distributed, but it is most active during daylight hours, Monday to Friday, US time. If you pop up on IRC, ask a question, and then disappear, you may not get your answer. If you can’t stick around on IRC for long, post your question to the mailing list instead.

Better still, there are many ways to connect to IRC and leave the connection up (so you can read the scrollback), or you can use some other mechanism to review the scrollback (like eavesdrop.openstack.org) and see whether your question was answered.


If you have your own pet peeve, please share it in the comments section. I hope this will become a useful resource for aspiring OpenStack contributors.

Addressing a common misconception regarding OpenStack Trove security

Since my first OpenStack Summit in Atlanta (mid 2014), I have been to a number of OpenStack-related events, meetups, and summits. And at every one of these events, as well as numerous customer and prospect meetings, I’ve been asked some variant of the question:

Isn’t Trove insecure because the guestagent has RabbitMQ credentials?

A bug was entered in 2015 with the ominous (and factually inaccurate) description that reads “Guestagent config leaks rabbit password”.

And while I’ve tried to explain to people that this is not at all the case, this misconception has persisted.

At the Summit in Barcelona, I was asked yet again about this and I realized that obviously, whatever we in the Trove team had been doing to communicate the reality was insufficient. So, in preparation for the upcoming Summit in Boston, I’m writing this post as a handy resource.

What is the problem?

Shown here is a simplified representation of a Trove system with a single guest database instance. The control plane components (Trove API, Trove Task Manager, and Trove Conductor) and the Guest Agent communicate via oslo.messaging which is typically implemented with some messaging transport like RabbitMQ.

To connect to the underlying transport, each of these four components needs to store credentials; for RabbitMQ this is a username and password.

The contention is that if a guest instance is somehow compromised (and there are many ways to do this) and a bad actor gains access to the RabbitMQ credentials, then the OpenStack deployment is compromised.

Why is this not really a problem?

Here are some reasons this is not really an issue on a properly configured production system.

  1. Nothing requires that Trove use the same RabbitMQ servers as the rest of OpenStack, so at the very least the compromise can be limited to the RabbitMQ servers used by Trove (see the configuration sketch after this list).
  2. The guest instance is not intended to be a general-purpose instance that a user has access to; in the intended deployment, the only connectivity to the guest instance would be to the database ports for queries. These are configurable with each database (datastore) and enforced by Neutron. Shell access (port 22, ssh) is a no-no. No deployer would use images and configurations that allowed this kind of access.
  3. On the guest instance, other database-specific best practices are used to prevent shell escapes and other exploits that could give a user access to the RabbitMQ credentials.
  4. Guest instances can be spawned by Trove using service credentials, or credentials for a shadow tenant, to prevent an end user from directly accessing the underlying Nova instance. Similarly, Cinder volumes can be provisioned under a different tenant to prevent an end user from directly accessing the underlying volume.
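For example, the first point can be implemented by giving Trove its own messaging transport in trove.conf. The snippet below is a sketch; the hostname and credentials are placeholders:

[DEFAULT]
# Point Trove at a RabbitMQ cluster of its own, separate from the
# one used by the rest of OpenStack (placeholder values).
transport_url = rabbit://trove:SECRET@trove-rabbit.example.com:5672/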

All of this notwithstanding, the urban legend persisted that Trove was a security risk. The reasoning invariably involved a system configured by devstack: a single RabbitMQ, open access to port 22 on the guest, and a guest running in the same tenant as the requester of the database.

Yet, one can safely say that no one in their right mind would operate OpenStack as configured by devstack in production. And certainly, with Trove, one would not use the development images whose elements are part of the source tree in a production deployment.

Proposed security-related improvements in Ocata

In the Ocata release, one additional set of changes has been made to further secure the system. All RPC calls on the oslo.messaging bus are completely encrypted. Furthermore, different conversations are encrypted using unique encryption keys.

The messaging traffic on oslo.messaging is solely for oslo_messaging.rpc, the OpenStack Remote Procedure Call mechanism. The API service makes calls into the Task Manager, the Task Manager makes calls into the Guest Agent, and the Guest Agent makes calls into the Conductor.

The picture above shows these different conversations, and the encryption keys used on each. When the API service makes an RPC call to the Task Manager, all parameters to the call are encrypted using K1, which is stored securely on the control plane.

Unique encryption keys are created for each guest instance, and these keys are used for all communication. When the Task Manager wishes to make a call to Guest Agent 1, it uses the instance-specific key K2; when it wants to make a call to Guest Agent 2, it uses the instance-specific key K3. When the guest agents make calls to the Conductor, the traffic is encrypted using the instance-specific keys, and the Conductor decrypts the parameters using those same keys.

In a well-configured production deployment, one that takes steps to secure the system, if a bad actor were to compromise a guest instance (say Guest Agent 1) and gain access to K2 and the RabbitMQ credentials, that actor could access RabbitMQ but would not be able to impact either another guest instance (he wouldn’t have K3) or the Task Manager (he wouldn’t have K1).
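To illustrate the idea, here is a simplified sketch using one symmetric key per conversation (illustrative only; this is not the actual Trove or oslo.messaging code):

from cryptography.fernet import Fernet

# One key for the control plane (K1), and one per guest instance (K2, K3).
k1 = Fernet.generate_key()
guest_keys = {"guest-1": Fernet.generate_key(),
              "guest-2": Fernet.generate_key()}

def send_rpc(payload, key):
    # Encrypt the RPC payload with the key for this conversation.
    return Fernet(key).encrypt(payload)

def recv_rpc(token, key):
    return Fernet(key).decrypt(token)

# Task Manager -> Guest Agent 1 uses K2; a holder of K3 (or of the
# RabbitMQ credentials alone) cannot read or forge this message.
token = send_rpc(b"create_backup", guest_keys["guest-1"])
assert recv_rpc(token, guest_keys["guest-1"]) == b"create_backup"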

Code that implements this capability is currently in upstream review.


This blog post resulted in a brief Twitter exchange with Adam Young (@admiyoung).

Unfortunately, a single RabbitMQ user for Trove isn’t the answer. Should a guest get compromised, those credentials are sufficient to post messages to RabbitMQ and cause some amount of damage.

One would need per-guest-instance credentials to avoid this, or one of the many other solutions (like shadow tenants, and so on).

Amazon’s demented plans for its warehouse blimp with drone fleet 

Amazon’s demented plans for its warehouse blimp with drone fleet http://arstechnica.com/information-technology/2016/12/amazons-demented-plans-for-its-warehouse-blimp-with-drone-fleet/?amp=1

Shit like this is what gives patents a bad name!

4 Beginner Tips for Doing Architecture Photography

4 Beginner Tips for Doing Architecture Photography http://digital-photography-school.com/4-beginner-tips-for-doing-architecture-photography/

A great article from Digital Photography School. I especially like the picture of the temple at dusk, wonder where that is.

Another look at IFTTT

In March 2012 (that’s a while ago) I wrote this article about a new service I’d discovered called IF-This-Then-That.

Now, almost five years on, IFTTT has come a long way. Just looking at the channels (they now call them services), it is amazing how far they’ve come.

Time to go revisit IFTTT. It still amazes me that they are a free service.

Facebook at a Crossroads

Interesting article in MIT Technology review at https://www.technologyreview.com/s/603198/facebook-at-a-crossroads/.

More than half of the 3.4 billion people with Internet access log on to Facebook each month. Revenue in the first nine months of 2016 jumped 36 percent to $19 billion; profit nearly tripled, to $6 billion. Yet the company’s founder has spent the year talking up his plans to become something much larger and more meaningful.

The election, the coming crackdown on fake news, and the censorship controversy after Facebook blocked the video stream of Philando Castile after he was shot in Minnesota surely didn’t help.

I wonder how much all these things will affect Facebook, and how much that is driving the urge to do unnatural things.

Drones, Virtual Reality, get a grip …

The case(s) for and against PGP

When I read “I’m throwing in the towel on PGP, and I work in security”, which appeared as an op-ed in Ars Technica, I felt that it deserved a response. While Filippo Valsorda makes some valid points about PGP/GPG, I felt they were less about shortcomings in the scheme and more about usability issues that have unfortunately been ignored.

Then I read “Why I’m not giving up on PGP”, an excellent article, also in Ars Technica, which does a much better job of refuting the original than I could ever have done.

Both are well worth the read.

May I please get whatever Windows version powers the Dreamliner?

It is being widely reported that the FAA has issued an Airworthiness Directive (AD) requiring that Boeing 787 Dreamliners be rebooted every 21 or so days.

This is not a hoax.

This is the AD issued by the FAA on 2016-09-24; I obtained a copy of this AD from here.

The AD states:

This AD requires repetitive cycling of either the airplane electrical power or the power to the three flight control modules (FCMs). This AD was prompted by a report indicating that all three FCMs might simultaneously reset if continuously powered on for 22 days. We are issuing this AD to address the unsafe condition on these products.

A little investigation indicates that this isn’t the first time the FAA has had to do this. The last time was in September 2015, when they issued this AD, which I obtained from here. That AD was more specific about the reason for the problem, stating:

This condition is caused by a software counter internal to the GCUs that will overflow after 248 days of continuous power.

It has been widely rumored that the present AD about the 21-day action is similarly motivated; the logic is that a timer with millisecond precision would overflow after about 24 days.
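The arithmetic supports the rumor if you assume a signed 32-bit counter in both cases:

# Overflow of a signed 32-bit tick counter (2**31 ticks):
print(2**31 / 86400000.0)      # ~24.9 days at millisecond resolution
print(2**31 / (100 * 86400.0)) # ~248.6 days at 10 ms (centisecond) resolution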
This is all very droll, and I hope to hell that they power cycle their planes on the ground regularly. My only question is this: since they are in fact running Windows under the covers, how on earth are they able to keep the thing going for 21 days?

With Windows 7 that was a piece of cake, but this new Windows 10 that I have wants to reboot every night, and I don’t have any say in the matter.

So whatever Boeing did to keep the damn thing going for 21 days, it would be great if they shared that with the world.

The Monty Hall problem

I’ve long wanted a simple explanation of the Monty Hall problem and I’ve never found one that I liked. Some I really detested, like one that tried to make a lame analogy to baseball pitchers.

Anyway, here is what I’ve found to be the simplest explanation yet. First, what’s the problem.

In a game show, the contestant is shown into a room with three identical closed doors. He is informed that behind one door is a prize and behind the other two doors, there is nothing.

He is then asked to pick a door. Once he has picked a door, the host proceeds to open one of the other two doors (that he had not picked) and shows the contestant that there is nothing behind that door.

The host then offers the contestant the option of either changing his selection (picking the third remaining door), or sticking with his initial choice.

What should the contestant do?

The simplistic answer is that once the contestant has been shown that there is nothing behind one door, the problem reduces to two doors and therefore the odds are 50-50 and the contestant has no motivation to switch.

In reality, this is not the case, and the contestant would be wise to switch. Here is why.

Three doors; behind one of them is the prize, behind the other two, there is nothing.

The contestant now picks a door. For the purposes of this illustration, let’s assume that the contestant picks the door in the middle as shown below.

Since the prize is behind one of the three doors, the odds that the prize is behind the door that the contestant has picked are 1/3. By extension, the probability that it is behind one of the other two doors is 2/3 (1/3 for each door).

So far, we’re all likely on solid footing, so let’s now bring in the twist. The game show host can always find a door behind which there is nothing. And as shown below, he does.

The game show host has picked the third door, and there’s nothing there.

However, nothing has changed the fact that the probability that the prize is behind the door the contestant chose is 1/3, and the probability that it is behind one of the other two doors is 2/3. What has changed is that the host has revealed that it is not behind the door at the far right. Since the combined probability for the two doors the contestant did not pick is still 2/3, and the far right door has been eliminated, the probability that the prize is behind the far left door must be 2/3.

With this new information therefore, the contestant would be wise to switch his choice.
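If you remain unconvinced, here is a quick simulation (my addition, not part of the original explanation) that makes the 1/3 versus 2/3 split easy to verify:

import random

def play(switch, trials=100000):
    # Returns the fraction of games won with the given strategy.
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a door that is neither the pick nor the prize.
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        if pick == prize:
            wins += 1
    return wins / float(trials)

print(play(switch=False))  # ~0.333: sticking wins 1/3 of the time
print(play(switch=True))   # ~0.667: switching wins 2/3 of the time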

Defining Success in OpenStack (With Heisenberg in Mind)

This article first appeared at http://www.tesora.com/defining-success-in-openstack/

I recently read Thierry Carrez’s blog post where he references a post by Ed Leafe. Both reminded me that in the midst of all this hand wringing about whether the Big Tent was good or bad, at fault or not at fault, and whether companies were gaming the system (or not), the much bigger issue is being ignored.

We don’t incentivize people and organizations to do the things that will make OpenStack successful, and this shortcoming poses a real and existential threat to OpenStack.

Werner Heisenberg observed that the act of measuring the position of a sub-atomic particle affects its momentum, and vice versa. In much the same way, the act of measuring an individual’s (or organization’s) performance in some area impacts that performance itself.

By measuring commits, lines of code, reviews, and other such metrics that are not really measures of OpenStack’s success, we are effectively causing individuals and organizations to do the things that make them look good on those metrics. They aren’t “gaming the system”; they are trying to look good on the measures that we have established for “success”.

At Tesora, we have always had a single-minded focus on one project: Trove. We entered OpenStack as the DBaaS company and have remained true to that. All the changes we have submitted to OpenStack, and the reviews and participation by Tesora, have been focused on the advancement of DBaaS. We have contributed code, documentation, tests, and reviews that have helped improve Trove. To us, this single-minded focus is a good thing because it has helped us advance the project and make it easier for people to deploy and use it in practice. And to us, that is the only thing that really matters.

The same is true for all of OpenStack: actual adoption is all that matters. What we need from the Technical Committee and the community at large is a concerted effort to drive adoption, and to make it easier for prospects to deploy, and bring into production, a cloud based on OpenStack. And while I am a core reviewer, the Trove PTL, and the author of a book about Trove, and our sales and marketing team do mention that in customer engagements, we do that only because those are the “currency” in OpenStack. To us, the only things that really matter are ease of use, adoption, a superlative user experience, and a feature-rich product. Without that, all this talk about contribution, and the number of cores and PTLs, is as meaningless as whether the Big Tent approach resulted in a loss of focus in OpenStack.

But remember Heisenberg! Knowing that what one measures changes how people act, it would be wise for the Technical Committee to take the lead in defining success in terms of surrogates for ease of installation, ease of deployment, the number of actual deployments, and other things that would truly indicate the success of OpenStack.

Let’s stop wasting time defending the Big Tent. It was done for good reasons, and it had consequences. Recognize what those consequences are, perceive the reality, and act accordingly.
