A farewell to facts, rigor and civility

I read this article on Mike (“The Fein Line”) Feinstein’s blog, and it struck me that we have collectively chosen to ignore facts, rigor and civility.

For a whole bunch of reasons (beyond the one that Mike mentions), I have to say that I too am happy and proud that I live in Massachusetts.

OpenID first impressions

I have been meaning to try OpenID for some time now, and I just noticed that Verisign was offering free two-factor authentication (what they call VIP Credentials) for mobile devices, so I decided to give it a shot.

I picked Verisign’s OpenID offering; in the past I had a certificate (document signing) from Verisign and I liked the whole process so I guess that tipped the scales in Verisign’s favor.

Registration was a piece of cake; downloading the credential generator to my phone and linking it to my account was a breeze. They offer a File Vault (2GB) free with every account (hey Google, did you hear that?) and I gave that a shot.

I created a second OpenID and linked it to the same mobile credential generator (very cool). Then I figured out what to do if my cell phone (and with it, the mobile credential generator) were lost or misplaced; it was all very easy. It seemed too good to be true!

And, it was.

Facebook allows one to use an external ID for authentication: go to Account Settings, then Linked Accounts, and you can set up the linkage. Cool, let’s give that a shot!

Facebook OpenID failure

So much for that. I have an OpenID, anyone have a site I could use it on?

Oh yes! I could login to Verisignlabs with my OpenID 🙂

Update:

I tried to link my existing “Hacker News” (news.ycombinator.com) account with OpenID; after authenticating with Verisign, I got to a page that asked me to enter my HN information, which I did.

I ended up at a page (http://news.ycombinator.com/openid_merge) with a single word on the screen: “Unknown”.

I’ve got to be doing something wrong. Someone care to tell me how badly messed up I am?

Update (Sept 11)

Thanks to help from Gary (who commented on this post), I tried the “linking” on Facebook again and this time it worked a little better.

But I still have to enter my password when I want to log in to Facebook. Something is still not working the way it should.

Still the same issue with Hacker News.

What I learned from the Gmail outage

We have all heard about it, many (most) of us were affected by it, and some of us actually saw it. That makes it a fertile subject for conversation, in person over a cold pint or online. I have read at least a dozen blog posts explaining why the Gmail outage underscores the weakness, and foretells the imminent failure, of cloud computing. I have read at least two that explain why this outage proves enterprises must run their own mail servers. There are graphs showing the number of tweets at various phases of the outage. There are articles about whether Gmail users can sue Google over this failure.

The best three quotes I have read in the aftermath of the Gmail outage are these:

“So by the end of next May, we should start seeing the first of the Google Outage babies being born.” – Carla Levy, Systems Analyst

“Now I don’t look so silly for never signing up for an e-mail address, do I?” – Eric Newman, Pile-Driver Operator

“Remember the time when 150 million people couldn’t use Gmail for nearly ten years? From 1993–2003? And every year before that? Unimaginable.” – Adam Carmody, Safe Installer

Admittedly, all three came from “The Onion”.

This article is about none of those things. To me, the Gmail outage could not have come at a better time. I had just finished reconfiguring where my mail goes and how it gets there, and the outage gave me a chance to make sure that all the links worked.

I have a Gmail account, and I have email that comes to other (non-Gmail) addresses. I use Gmail as a catcher for the non-Gmail addresses using its “Imports and Forwarding” capability, which gives me a single web-based portal to all of my email. The email is also POP3’ed down to a PC, the one on which I am writing this blog post, and I read email on my phone (using its POP3 capability) from my Gmail account. Gmail gives me a great backup facility, a nice web interface, and a single place to get all of my email. And if for any reason it were to go kaput, as it did on the 1st, in a pinch I can get to the stuff in a second or even a third place.

But more importantly, if Gmail is unavailable for 100 minutes, who gives a crap? Technology will fail. We try to make it better, but it will still fail from time to time, and making a big hoopla about it is just plain dumb. On the other hand, an individual could lose access to his or her Gmail for a whole bunch of reasons, not just because Google had an outage. Learn to live with it.

So what did I learn from the Gmail outage? It gave me a good chance to see a bunch of addicts, and how they behave irrationally when they can’t get their “fix”. I’m a borderline addict myself (I do read email on my phone, as though I get things of such profound importance that an instant reaction is a matter of life and death). The Gmail outage showed me what I would become if I did not take some corrective action.

Technology has given us the means to “shrink the planet” and make a tightly interconnected world. With a few keystrokes, I can converse with a person next door, in the next state, or halfway across the world. Connectivity is making us accessible everywhere: in our homes, workplaces, cars, and now even in an aircraft. It has given us the ability to inundate ourselves with information, and many of us have been over-indulging, to the point where it has become unhealthy.

And here I was complaining about Verizon Wireless!

I thought poorly of Verizon Wireless’s service and features (though I’ve been a customer for a while).

That all changed when I read this.

Hey Verizon, where do I sign up for that two year contract? But, could you give me a cellular data plan with more than 5GB per month please …

Till January 1, 2010, Bye-Bye Linux!

Bye-Bye Ubuntu! Back to Windows …

Around the New Year each year, the fact that I am bored silly leads me to do strange things. For the past couple of years, in addition to drinking a lot of Samuel Adams Double Bock or Black Lager, I kick Windows XP, Vista or whatever Redmond has to offer and install Linux on my laptop.

For two years now, Ubuntu has been the Linux of choice. New Year 2009 saw me installing 8.10 (Ignorant Ignoramus) and later upgrading to 9.04 (Jibbering Jackass). But I write this blog post on my Windows XP (Service Pack 3) powered machine.

Why the change, you ask?

This has arguably been my longest stint with Linux. In the past (2007), it didn’t stay on the PC long enough to make it into work after the New Year holiday. In 2008, it lasted two or three weeks. In 2009, it lasted till the middle of August! Clearly, Linux (and Ubuntu has been a great part of this) has come a very long way towards being a mainstream replacement for Windows.

But, my benchmark for ease of use still remains:

  1. Ease of initial installation
    • On Windows, stick a CD in the drive and wait 2 hours
    • On Linux, stick a CD in the drive and wait 20 minutes
    • Click mouse and enter some basic data along the way
  2. Ease of setup, initial software update, adding basic software that is not part of the default distribution
    • On Windows, VMWare (to run linux), Anti-Virus, Adobe things (Acrobat, Flash, …)
    • On Linux, VMWare (to run windows), Adobe things
  3. Ease of installing and configuring required additional “stuff”, additional drivers
    • printers
    • wacom bamboo tablet
    • synchronization with PDA (Windows ActiveSync, Linux <sigh>)
    • On Windows, DELL drivers for chipset, display, sound card, pointer, …
  4. Configuring Display
    • resolution, alignment
  5. Configuring Mouse and Buttons
  6. Making sure that docking station works
    • On Windows, DELL has some software to help with this
    • On Linux, pull your hair out
  7. Setting Power properties for maximum battery life
    • On Windows, what a pain
    • On Linux, CPU Performance Applet
  8. Making sure that I login and can work as a non-dangerous user
    • On Windows, group = Users
    • On Linux, one who cannot administer the system; no root privileges
  9. Setup VPN
    • On Windows, Cisco VPN Client most often. Install it and watch your PC demonstrate all the blue pixels on the screen
    • On Linux, go through the gyrations of downloading the Cisco VPN client from three places; reading 14 blogs, web pages and how-tos on getting the right patches; finding the compilers aren’t on your system; finding that ‘patch’ and the system headers are not there either; and finally realizing that you forgot to save the .pcf file before you blew Windows away, so calling the IT manager on New Year’s Day to wish him Happy New Year and, oh, by the way, could you send me the .pcf file? (Thanks, Ed.)
  10. Setup Email and other Office Applications
    • On Linux, installing a Windows VM with all of the Office suite and Outlook
    • On Windows, installing all of the Office suite and Outlook and getting all the service packs
    • Install subversion (got to have everything under version control). There’s even a cool command line subversion client for Windows (Slik Subversion 1.6.4)
  11. Migrate Mozilla profile to new platform
    • Did you know that you can literally take .mozilla and copy it to someplace in %userprofile% or vice-versa and things just work? Way cool! Try that with Internet Exploder!
  12. Restore SVN dump from old platform

OK, so I liked Linux for the past 8 months. GIMP is wonderful, the Bamboo tablet (almost) just works, the system boots really fast, … I could go on and on.

But some things really annoyed me with Linux over the past 8 months:

  • Printing to the Xerox multi-function 7335 printer with color, double-sided output, stapling, etc. The setup is not for the faint-hearted
  • Could I please get the docking station to work?
  • Could you please make the new Mozilla part of the updates? If not, I have Firefox and Shrill-kokatoo or whatever the new thing is called. What a load of horse-manure that upgrade turned out to be. On Windows, it was a breeze. Really, open-source-brethren, could you chat amongst yourselves?

But the final straw was when I was visiting a friend in Boston and wanted to whip out a presentation and show him what I’d been up to. Driving an external display is not an easy thing: first you have to change resolutions, then restart X, then crawl through a minefield, then sing the national anthem backwards while holding your nose. Throughout this “setup”, you have to explain that it is a Linux thing.

Sorry folks, you aren’t ready for mainstream laptop use yet. But you’ve made wonderful improvements since 2007. I can’t wait till December 31, 2009 to try this all over again with Ubuntu 9.10 (Kickass Knickerbockers).

OpenDNS again

More on OpenDNS

I wasn’t about to try diddling the router at 11:30 last night but it seemed like a no-brainer to test out this OpenDNS service.

So, look at the little button below. If it says “You’re using OpenDNS”, then clearly someone in your network (your local PC, router, DNS, ISP, …) is using OpenDNS. The “code” for this button is simplicity itself:

<a title="Use OpenDNS to make your Internet faster, safer, and smarter." href="http://www.opendns.com/share/">
     <img style="border:0;"
          src="http://images.opendns.com/buttons/use_opendns_155x52.gif"
          alt="Use OpenDNS" width="155" height="52" />
</a>

So, if the lookup for images.opendns.com were sent to my ISP, it would likely resolve one way; sent to OpenDNS, it would resolve a different way. That means the image retrieved differs based on whether or not you are using OpenDNS.
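You can see the split for yourself with two lookups (a sketch, assuming dig is installed; 208.67.222.222 is one of OpenDNS’s published resolver addresses):

```shell
# Compare what your configured resolver says with what OpenDNS says.
dig +short images.opendns.com                    # answer from your current DNS
dig +short images.opendns.com @208.67.222.222    # answer straight from OpenDNS
```

If the two answers differ, you are seeing exactly the split-horizon trick that the button relies on.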

Use OpenDNS

Step 1: Setup was trivial. I logged in to my router and deposited the DNS server addresses and hit “Apply”. The router did its usual thing and voila, I was using OpenDNS.

Step 2: Set up an account on OpenDNS. Easy: provide an email address and add a network. In this case, it correctly detected the public IP address of my router and populated its page. It said it would take 3 minutes to propagate within the OpenDNS network. There’s an email confirmation, you click on a link, you know the drill.

Step 3: Set up stats (default: disabled)

All easy; web page resolution seems to be working OK. Let me go and look at those stats they talk about. (Click on the picture below to see it at full size.)

Stats don't seem to quite work!

August 17th you say? Today is September 1st. I guess tracking 16 billion or so DNS queries for two days in a row is a little too much for their infrastructure. I can suggest a database or two that would not break a sweat with that kind of data inflow rate.

July 30, 2009: OpenDNS announces that for the first time ever, it successfully resolves more than 16 Billion DNS queries for 2 days in a row.

(source: http://www.opendns.com/about/overview/)

So far, so good. I’ve got to see what this new toy can do 🙂 Let’s see what other damage this thing can cause.

Content Filtering

Nice, they support content filtering as part of the setup. That could be useful. Right now, I reduce the annoyances in my browsing experience with a suitably crafted “hosts” file (Windows users: %SYSTEMROOT%\system32\drivers\etc\hosts).

127.0.0.1       localhost
127.0.0.1       ad.doubleclick.net
127.0.0.1       hitbox.com
127.0.0.1       ai.hitbox.com
127.0.0.1       googleads.g.doubleclick.net
127.0.0.1       ads.gigaom.com
127.0.0.1       ads.pheedo.com
[... and so on ...]
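Maintaining that file by hand gets tedious; merging in new blocklist entries can be scripted. A minimal sketch that adds entries idempotently, working on a scratch copy (hosts.test) so you can inspect the result before touching the real file:

```shell
# Build the merge against a scratch copy first; inspect it, then
# copy it over the real hosts file if you like what you see.
cp /etc/hosts hosts.test 2>/dev/null || : > hosts.test
for h in ad.doubleclick.net hitbox.com googleads.g.doubleclick.net; do
    # Add the entry only if no existing line already ends with this host.
    grep -q "[[:space:]]$h\$" hosts.test || printf '127.0.0.1\t%s\n' "$h" >> hosts.test
done
```

Because of the grep guard, running it twice leaves no duplicate entries.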

I guess I can push this “goodness” over to OpenDNS and reduce the pop-up crap that everyone will get at home. (click on image for a higher resolution version of the screen shot)

Content Filtering on OpenDNS

Multiple Networks!

Very cool! I can set up multiple networks as part of a single user profile. So my phone and my home router could both end up being protected by my OpenDNS profile.

I wonder how that would work when I’m in a location that hands out a non-routable DHCP address, such as at a workplace. I guess the first person to register the public IP of the workplace would see traffic for everyone at the workplace who has a per-PC OpenDNS setting and shares that same public IP address? Unclear; that could be awkward.

Enabling OpenDNS on a per-PC basis

In last night’s post, I questioned the rationale for enabling OpenDNS on a per-PC basis. I guess there is some value to this, because OpenDNS provides me a way to influence the name resolution process. And if I were to push content filtering onto OpenDNS, then I would like to get the same content filtering when I am not at home, e.g. at work, at Starbucks, …

I’m sure that over-anxious-parents-who-know-a-thing-or-two-about-PCs could load the “Dynamic IP” updater thing on a PC and change the DNS entries to point to OpenDNS before junior went away to college 🙂

So per-PC OpenDNS settings may make some sense; it would be nice to have an easy way to enable them when required. That is a fun project to work on one of these days when I’m at Starbucks.

Jeremiah says, “I do it on a per computer basis because I occasionally need to disable it. (Mac OS X makes this super quick with Locations)”. Jeremiah, please do tell why you occasionally need to disable it. Does something fail?

Other uses of OpenDNS

kuzux writes in response to my previous post that OpenDNS can be used to get around restrictive ISPs. That is interesting, because the ISPs that have put these restrictions in place are likely only blocking name resolution, not connections and traffic. Further, the ISPs could just as well find the IP addresses of services like OpenDNS and put a damper on the festivities. And one does not have to look far to get the IP addresses of the OpenDNS servers 🙂

Two thoughts come to mind. First, if the authorities (in Turkey, as kuzux said) put the screws on OpenDNS, would it pour out the DNS lookup logs for the specific IP addresses (both source and destination) that they cared about? Second, a hypothetical country with huge manufacturing operations, a less-than-stellar human rights record, and a huge number of take-out restaurants all over the US (which shall remain nameless) could take a dim view of a foreigner who had OpenDNS on his or her laptop and was able to access “blocked” content.

Other comments

Janitha Karunaratne writes in response to my previous post that, “Lot of times if it’s a locked down limited network, they will intercept all DNS traffic, so using OpenDNS won’t help (their own default DNS server will reply no matter which DNS server you try to reach)”. I guess I don’t understand how that could be. When a machine attempts a DNS lookup, it addresses the packet specifically to the DNS server that it is targeting. Are you suggesting that these “locked down limited networks” will intercept that packet, redirect it to the in-house DNS server and have it respond?
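For what it’s worth, a gateway can do precisely that with a NAT rule: every outbound DNS packet gets rewritten so the in-house server answers it, no matter which resolver the client addressed. A sketch of the common Linux incantation (run as root on the gateway; eth0 and 10.0.0.53 are placeholder names for the LAN interface and the local DNS server):

```shell
# Transparently divert all client DNS traffic (UDP and TCP port 53)
# arriving on the LAN interface to the in-house DNS server.
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 \
         -j DNAT --to-destination 10.0.0.53:53
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 53 \
         -j DNAT --to-destination 10.0.0.53:53
```

The client still believes it is talking to OpenDNS; the replies simply come from somewhere else.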

David Ulevitch (aka Founder and CTO of OpenDNS) writes, “Yeah, there are all kinds of reasons people use our service. Speed, safety, security, reliability… I do tests when I travel, and have even done it with GoGo on a VA flight and we consistently outperform”. Mr. Ulevitch, your product is wonderful and easy to use. Very cool. But, I wonder about this performance claim. When I am traveling, potentially sitting in an airport lounge, a hotel room, a coffee shop or in a train using GPRS based internet service with unknown bandwidth, is the DNS lookup a significant part of the response time to a page refresh, mail message download, (insert activity of your choice)?

My Point of View

It seems to work, and it can’t hurt to use it at home (if my ISP has a problem with that, they can block traffic to the IP address). It doesn’t seem appreciably faster or slower than my ISP’s DNS. I’ll give it a shot for a while and see what the statistics say (when they get around to updating them).

OpenDNS is certainly an easy-to-use, non-disruptive service and is worth giving a shot. If you use the free version of OpenDNS (i.e., don’t create an account, just point to their name servers), there is little possible downside; if you get on a Virgin America flight, you may need to disable it. But if you use the registered service, just remember that OpenDNS is collecting a treasure trove of information about where you go and what you do, and they have your email address and IP address (hence a pretty good idea of where you live). They already target advertising to you on the default landing page for bad lookups. I’m not suggesting that you will get spammed up the wazoo, but bear in mind that you have yet another place where a wealth of information about you is quietly stored away for later use.

But, it is a cool idea. Give it a shot.

OpenDNS and paid wireless services.

Why would anyone ever want to use OpenDNS? I just don’t get it. But there must be a good reason that I’m totally missing.

I stumbled on this post, not sure how.

http://thegongshow.tumblr.com/post/176629519/virgin-america-inflight-wireless

I didn’t know about OpenDNS, but it seems like a strange idea; after all, why would someone not use the DNS server that came with their DHCP lease? The idea itself is simple enough: override the assigned DNS servers and force the choice to be the DNS servers that OpenDNS provides. But why would anyone care to do this on a PC-by-PC basis? That seems totally illogical!

And the problem that the author of the blog (above) mentioned is fairly straightforward. A PC comes onto the Virgin America wireless network and attempts to go to a web page. If the page’s IP address is already cached, the connection request goes out and receives a redirect to a named page (asking one to cough up money). That redirect mentions the page by name, the name is not cached, and the resulting DNS lookup goes to OpenDNS, whose servers (hardcoded into the network configuration) are not reachable because the user has not yet coughed up said money. Conversely, if the initial lookup was not satisfied from a cached IP address, then the DNS lookup (which would have been sent to OpenDNS) would not have made it very far either. One way or the other, the page asking the user to cough up cash would never show up.

So, could one really do this OpenDNS thing behind a pay-for-internet service? Not unless you can add preferred DNS servers to the network configuration without a reboot (a reboot restarts the DHCP cycle, and on high-volume WiFi access services, the DHCP server will expire the previous lease).
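As an aside, on Linux at least, the resolver can be switched without a reboot: /etc/resolv.conf is consulted on each lookup. A minimal sketch (this assumes no resolvconf daemon or NetworkManager is managing the file, in which case it may get rewritten at the next lease renewal):

```shell
# Point the system resolver at OpenDNS without touching the DHCP lease.
# 208.67.222.222 / 208.67.220.220 are OpenDNS's published resolvers.
printf 'nameserver 208.67.222.222\nnameserver 208.67.220.220\n' | sudo tee /etc/resolv.conf
```

Of course, behind a captive portal this only changes where the lookups are addressed, not whether they get through.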

But, to the more basic issue: why would someone ever enable this on a PC-by-PC basis? I can totally understand a system administrator using this at an enterprise level, potentially using the OpenDNS servers instead of the ones that the ISP provided. It sure beats me! And there sure can’t be enough money in showing advertisements on the redirect page for missing URLs (or there must be a lot of people who fat-finger URLs a lot more than I do).

And the functionality of being able to watch my OpenDNS traffic on a dashboard? I just don’t get it. Definitely something to think more about … They sure seem to be a successful operation, so there must be some value in the service they offer, to someone.

Making life interesting

I generally don’t like chain letters and spam, but once in a while I get a real masterpiece. Below is one that I received yesterday …

Working people frequently ask retired people what they do to make their days interesting. Well, for example, the other day the wife and I went into town and went into a shop. We were only in there for about 5 minutes.

When we came out, there was a cop writing out a parking ticket. We went up to him and I said, “Come on man, how about giving a senior citizen a break?”

He ignored us and continued writing the ticket. I called him a Dumb ass. He glared at me and started writing another ticket for having worn tires. So Mary called him a shit head. He finished the second ticket and put it on the windshield with the first.

Then he started writing a third ticket. This went on for about 20 minutes.

The more we abused him, the more tickets he wrote.

Just then our bus arrived and we got on it and went home.

We try to have a little fun each day now that we’re retired.

It’s important at our age.

Priceless!

I’ve lost my cookies!

Some days ago I read an article about Super Cookies (a link is on the Breadcrumbs tab on the right of my blog). I didn’t think to check my machine for these super cookies, and did not pay close attention to the lines that said:

GNU-Linux: ~/.macromedia

Sure enough …

amrith@amrith-laptop:~$ find . -name '*.sol' 2>/dev/null | wc -l
141
amrith@amrith-laptop:~$

Hmmm …

amrith@amrith-laptop:~$ rm `find . -name '*.sol' 2>/dev/null`
amrith@amrith-laptop:~$ find . -name '*.sol' 2>/dev/null | wc -l
0
amrith@amrith-laptop:~$

Much better. And my flash player still works. Let’s go look at some video …

amrith@amrith-laptop:~$ find . -name '*.sol' 2>/dev/null
./.macromedia/Flash_Player/macromedia.com/support/flashplayer/sys/settings.sol
./.macromedia/Flash_Player/macromedia.com/support/flashplayer/sys/#s.ytimg.com/settings.sol
./.macromedia/Flash_Player/#SharedObjects/CZRS8QS7/s.ytimg.com/soundData.sol
./.macromedia/Flash_Player/#SharedObjects/CZRS8QS7/s.ytimg.com/videostats.sol
amrith@amrith-laptop:~$

Time to fix this sucker …

amrith@amrith-laptop:~$ tail -n 1 .bashrc
find "$HOME/.macromedia" -name '*.sol' -delete 2>/dev/null
amrith@amrith-laptop:~$

Look Ma! NoSQL!

More musings on NoSQL, and a blog post I read: “NoSQL: If Only It Was That Easy”

There has definitely been more chatter about NoSQL in the Boston area lately. I hear there is a group forming around NoSQL (I will post more details when I get them). There were some NoSQL folks at the recent Cloud Camp, which I was not able to attend (damn!).

My views on NoSQL are unchanged from an earlier post on the subject. I think there are some genuine issues with database scaling that are being addressed through a variety of avenues (packages, tools, …). But, in the end, the reason SQL has survived for so long is that it is a descriptive language that is reasonably portable. That is also why, in the data warehousing space, you have each vendor going off and doing some non-SQL extension in a totally non-portable way. And they are all going to, IMHO, have to reconcile their differences before those features get wide mainstream adoption.

This morning I read a well-researched blog post by BJ Clark, by way of Hacker News. (If you don’t use HN, you should definitely give it a try.)

I strongly recommend that if you are interested in NoSQL, you read the conclusion section carefully. I have annotated the conclusion section below.

“NoSQL is a great tool, but it’s certainly not going to be your competitive edge, it’s not going to make your app hot, and most of all, your users won’t give a shit about any of this.

What am I going to build my next app on? Probably Postgres.

Will I use NoSQL? Maybe. [I would not, but that may just be my bias]

I might keep everything in flat files. [Yikes! If I had to do this, I’d consider something like MySQL CSV first]


If I need reporting, I won’t be using any NoSQL.

If I need ACIDity, I won’t use NoSQL.

If I need transactions, I’ll use Postgres.

…”

NoSQL is a great stepping stone, and what comes next will be really exciting technology. If what we need is a database that scales, let’s go make ourselves a database that scales. Base it on MySQL, PostgreSQL, … but please make it SQL-based. Extend SQL if you have to. I really do like being able to coexist with the rich ecosystem of visualization tools, reporting tools, dashboards, … you get the picture.


On over-engineered solutions

The human desire to over-engineer solutions

On a recent trip, I had way too much time on my hands and was ambling around the usual overpriced shops in the airport terminal. There I saw a 40th anniversary “special edition” of the Space Pen. For $799.99 (plus taxes) you could get one of these pens, and some other memorabilia, to commemorate the 40th anniversary of the first lunar landing. If you aren’t familiar with the Space Pen, you can learn more at the web site of the Fisher Space Pen Co.

Fisher Space Pen

For $800 you could get the AG7-40LE – Astronaut Space Pen 40th Year Moon Landing Celebration Commemorative Pen & Box.

Part of this pen actually circled the moon!

Salient features of this revolutionary device:

  • it writes upside down
  • it writes in any language (so the Russians bought some of these :))
  • it draws pictures
  • it writes in zero gravity
  • it writes under water
  • it writes on greasy surfaces
  • it fixes broken switches on lunar modules

If you want to know the history of this device, you can also look at NASA’s web page here. On the web page of the Fisher Space Pen Co, you can also see the promotion of the AG7-40 – Astronaut Space Pen 40th Year Moon Landing Celebration Engraving.

While in the airport, I saw a couple of security officers rolling around on Segway Personal Transporters. Did you know that for approximately $10,000 you could get yourself a Segway PT i2 Ferrari Limited Edition? I have no idea how much the airport paid for theirs, but the Segway i2 is available on Amazon for about $6,000. It did strike me as silly, until I noticed three officers (a lot more athletic) riding through the terminal on Trek bicycles. That seemed a lot more reasonable. I have a bicycle like that, and it costs maybe 10% of a Segway.

I got to thinking about the rampant over-engineering all around me, and happened upon this web page when I did a search for Segway!

How to render the Segway Human Transporter Obsolete by Maddox.

Who would have thought of it: just add a third wheel and you could have a vehicle just as revolutionary? I thought about it some more and figured that the third wheel in Maddox’s picture is probably not the best choice; maybe it should be a wheel more like the track-ball of a mouse. That would have no resistance to turning, and the contraption could do an effortless zero-radius turn if required. The ball could be spring-loaded, and the whole thing could be on an arm with some form of shock-absorbing mechanism.

And if we had done that, we would not have seen this: Bush Fails Segway Test! As an interesting aside, did you know that the high-tech heart stent used for people who would otherwise need bypass surgery was also invented by Dean Kamen?

We have a million ways to solve most problems. Why, then, do we over-engineer solutions to the point where the solution introduces more problems?

Keep it simple! There are fewer things that can break later.

Oh, and about that broken switch in the lunar module: Buzz Aldrin has killed that wonderful urban legend by saying he used a felt-tip pen.

Buzz Aldrin still has the felt-tipped pen he used as a makeshift switch needed to fire up the engines that lifted him and fellow Apollo 11 astronaut Neil Armstrong off the moon and started their safe return to Earth nearly 40 years ago.

“The pen and the circuit breaker (switch) are in my possession because we do get a few memorabilia to kind of symbolize things that happened,” Aldrin told reporters Friday.

If Buzz Aldrin used a felt-tipped pen, why do we need a Space Pen? And what exactly are we celebrating by buying an $800 pen that can’t even fix a broken switch? A pencil can write upside down (and also write in any language :)). Why do we need Segway Human Transporters in an airport when most security officers should be able to walk or ride a bicycle? Why do we build complex software products and make them bloated, unusable, incomprehensible and expensive?

That’s simple: we’re paying homage to our overpowering desire to over-engineer solutions.


Wondering about “Shared Nothing”

Is “Shared Nothing” the best architecture for a database? Is Stonebraker’s assertion of 1986 still valid? Were Norman et al. correct in 1996 when they made a case for a hybrid system? Have recent developments changed the dynamics?

Over the past several months, I have been wondering about the Shared Nothing architecture that seems to be very common in parallel and massively parallel systems (specifically for databases) these days. With the advent of technologies like cheap servers and fast interconnects, a considerable amount of literature points to an apparent “consensus” that Shared Nothing is the preferred architecture for parallel databases. One such example is Stonebraker’s 1986 paper (Michael Stonebraker. The case for shared nothing. Database Engineering, 9:4–9, 1986). A reference to this apparent “consensus” can be found in a paper by DeWitt and Gray (DeWitt and Gray. Parallel database systems: the future of high performance database systems. 1992).

A consensus on parallel and distributed database system architecture has emerged. This architecture is based on a shared-nothing hardware design [STON86] in which processors communicate with one another only by sending messages via an interconnection network.

But two decades or so later, is this still the case, is “Shared Nothing” really the way to go? Are there other, better options that one should consider?

As long ago as 1996, Norman, Zurek and Thanisch (Norman, Zurek and Thanisch. Much ado about shared nothing. 1996) made a compelling argument for hybrid systems. But even that was over a decade ago, and the last decade has seen some rather interesting changes. Is the argument proposed in 1996 by Norman et al. still valid? (Updated 2009-08-02: A related article in Computergram in 1995 can be read at http://www.cbronline.com/news/shared_nothing_parallel_database_architecture)

With the advent of clouds and virtualization, doesn’t one need to seriously reconsider the shared nothing architecture? Is a shared disk architecture in a cloud even a worthwhile conversation to have? I was reading the other day about shared nothing cluster storage. What? That seems like a contradiction, doesn’t it?
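For readers who haven’t run into the term, the core idea of shared nothing can be sketched in a few lines: each node owns a disjoint partition of the data, and a query is answered by sending each node a message and merging the partial replies. The sketch below is a toy illustration in Python with invented names, not any particular product’s implementation.

```python
# Toy shared-nothing partitioned table: each "node" holds only
# its own rows (no shared memory or disk); a query fans out to
# every node and the coordinator merges the partial results.

NUM_NODES = 4

# Each node's private storage is just a list of rows.
nodes = [[] for _ in range(NUM_NODES)]

def insert(row_key, row):
    """Route a row to the node that owns its hash partition."""
    nodes[hash(row_key) % NUM_NODES].append(row)

def parallel_count(predicate):
    """'Send' the predicate to every node; sum the partial counts."""
    return sum(sum(1 for r in part if predicate(r)) for part in nodes)

for i in range(1000):
    insert(i, {"id": i, "even": i % 2 == 0})

print(parallel_count(lambda r: r["even"]))  # 500
```

A shared disk design would instead let every node read every row; the partitioning step above is exactly what it omits.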

Some interesting questions to ponder:

  1. In the current state of technology, are the characterizations “Shared Everything”, “Shared Memory”, “Shared Disk” and “Shared Nothing” still sufficient? Do we need additional categories?
  2. Is Shared Nothing the best way to go (as advocated by Stonebraker in 1986) or is a hybrid system the best way to go (as advocated by Norman et al. in 1996) for a high performance database?
  3. What one or two significant technology changes could have a major impact on the answer?

I’ve been pondering these questions for some time now and can’t quite decide which way to lean. I am convinced that the answer to #1 is that we need additional categories based on advances in virtualization, but I am uncertain about the choice between a hybrid system and a Shared Nothing system. Advancements in virtualization, clouds, and related technologies seem to indicate that Shared Nothing is the way to go, but Norman and others make a compelling case for the hybrid approach.

What do you think?


Finally, it stopped raining. Not quite!

New England has been having a little bit of rain lately. A little too much actually. Today, the sun finally made an appearance but it didn’t stop raining … So we got quite a nice rainbow!

rainbow

IE+Microsoft ROCKS. Ubuntu+Firefox BLOWS

I’m running Ubuntu 9.04 and it appears that to upgrade to Firefox 3.5 requires a PhD. Could I borrow yours?

Kevin Purdy has the following “One line install”

That’s not a true install; he just unpacks the tar-ball into the current directory.

Web Upd8 has the following steps

But with those steps, I end up receiving daily builds as updates?

What’s with this garbage? Complain as much as you want, but Microsoft Software Updates will give you the latest Internet Explorer easily.

Could someone please make this easy?

It’s just numbers people!

I hadn’t read Justin Swanhart’s post (Why is everybody so steamed about a benchmark anyway?) from June 24th till just a short while ago. You can see it here. Justin is right, it’s just numbers.

He did remind me of a statement from a short while ago by another well known philosopher.

It's a budget

I could not embed the script provided by quotesdaddy.com into this post. I have used their material in the past; they have a great selection of statements that will live on in history. Check them out at http://www.quotesdaddy.com/ 🙂

Head for the hills, here comes more jargon: Anti-Databases and NoSQL

We all know what SQL is, get ready to learn about NoSQL. There was a meet-up last month in San Francisco to discuss it and some details are available at this link. The call for the “inaugural get-together of the burgeoning NoSQL community” drew 150 people and if you are so inclined, you can subscribe to the email list here.

I’m no expert at NoSQL but the major gripe seems to be schema restriction and what Jon Travis calls “twisting your object” data to fit an RDBMS.

NoSQL appears to be a newly coined collective noun for a lot of technologies you have heard about before. They include Hadoop, BigTable, HBase, Hypertable, and CouchDB, and maybe some that you have not heard of before like Voldemort, Cassandra, and MongoDB.

But, then there is this thing called NoSQL. The first line on that web page reads

NoSQL is a fast, portable, relational database management system without arbitrary limits (other than memory and processor speed) that runs under, and interacts with, the UNIX Operating System.

Now, I’m confused. Is this the NoSQL that was referenced in the non-conference at non-san-francisco last non-month?

My Point of View

From what I’ve seen so far, NoSQL seems to be a case of disruption at the “low end”. By “low end”, I refer to circumstances where the data model is simple and in those cases I can see the point that SQL imposes a very severe overhead. The same can be said in the case of a model that is relatively simple but refers to “objects” that don’t relate to a relational model (like a document). But, while this appears to be “low end” disruption right now, there is a real reason to believe that over time, NoSQL will grow into more complex models.
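To make the “twisting your object” complaint concrete, here is a toy comparison (hypothetical data and naming of my own, not from any of the products mentioned): a document store keeps the nested object whole, while a relational mapping forces it into separate flat tables that must be joined back together.

```python
# A nested "document" as the application sees it.
post = {
    "id": 1,
    "title": "Hello",
    "comments": [
        {"author": "alice", "text": "Nice!"},
        {"author": "bob", "text": "+1"},
    ],
}

# Document-style storage: keep the object whole, fetch by key.
doc_store = {post["id"]: post}

# Relational mapping: the same data "twisted" into two flat tables.
posts = [(1, "Hello")]
comments = [(1, "alice", "Nice!"), (1, "bob", "+1")]  # (post_id, author, text)

# Reassembling the original object requires a join on post_id.
rebuilt = {
    "id": posts[0][0],
    "title": posts[0][1],
    "comments": [
        {"author": a, "text": t} for (pid, a, t) in comments if pid == posts[0][0]
    ],
}
assert rebuilt == post
```

For a simple model like this, the join machinery buys you nothing, which is exactly the low-end disruption argument.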

But the big benefit of SQL is that it is a declarative programming language, which gives implementations the flexibility to organize the program flow in a manner that matches the physical representation of the data. Witness the fact that the same basic SQL language has produced a myriad of representations and architectures ranging from shared nothing to shared disk, SMP to MPP, row oriented to column oriented, with cost based and rules based optimizers, etc.
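The declarative point can be seen even with something as small as SQLite: the query states what is wanted, and the engine is free to choose the access path (a table scan before the index exists, an index lookup after) without any change to the SQL. A minimal sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(i, i * 2) for i in range(100)])

# The same declarative query, before and after adding an index:
# the result is identical; only the engine-chosen plan may differ.
query = "SELECT val FROM t WHERE id = 42"
before = conn.execute(query).fetchone()

conn.execute("CREATE INDEX idx_id ON t(id)")
after = conn.execute(query).fetchone()

assert before == after == (84,)
print(before)  # (84,)
```

The physical storage and execution strategy changed; the program did not. That separation is what the myriad SQL architectures all exploit.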

There is (thus far) a concern about the relative complexity of the NoSQL offerings when compared with the currently available SQL offerings. I’m sure that folks are giving this considerable attention and what comes next may be a very interesting piece of technology.



Learning from Joost

A lot of interesting articles have emerged in the wake of the recent happenings at Joost.

One post that caught my attention and deserves reading, and re-reading, and then reading one more time just to be sure is Ed Sim’s post, where he writes

raising too much money can be a curse and not a blessing

A lot to learn from in all of these articles.

Risks of mind controlled vehicles

Some risks of mind controlled vehicles are described.

It is now official, Toyota has announced that they are developing a mind controlled wheel chair. Unless Toyota is planning to take the world by storm by putting large numbers of motorized wheelchairs on the roads, I suspect this technology will soon make its way into their other products, maybe cars?

Of course, this will have some risks

  1. Cars may inherit their driver’s personality (ouch, there are some people who shouldn’t be allowed to drive)
  2. Cars with hungry drivers will make unpredictable turns into drive through lanes
  3. Cars could inadvertently eject annoying back seat drivers
  4. Mortality rates for cheating husbands may go up (see http://www.thebostonchannel.com/news/1965115/detail.html, http://www.click2houston.com/news/1576669/detail.html, http://www.freerepublic.com/focus/news/840828/posts and a thousand other such posts)
  5. It would be really hard to administer a driving test! Would the ability to think be a requirement for drivers?

Can you think of other risks? I didn’t think I would come to this conclusion but I kind of like the way things are right now where my car has a mind of its own.

What’s next in tech? Boston, June 25th 2009

Recap of “What’s next in Tech”, Boston, June 25th 2009

What’s next in tech: Exploring the Growth Opportunities of 2009 and Beyond, June 25th 2009.

Scott Kirsner moderated two panels during the event; the first with three Venture Capitalists and the second with five entrepreneurs.

The first panel consisted of:

– Michael Greeley, General Partner, Flybridge Capital Partners

– Bijan Sabet, General Partner, Spark Capital

– Neil Sequeira, Managing Director, General Catalyst Partners

The second panel consisted of:

– Mike Dornbrook, COO, Harmonix Music Systems (makers of “Rock Band”)

– Helen Greiner, co-founder of iRobot Corp. and founder of The Droid Works

– Brian Halligan, social media expert and CEO of HubSpot

– Tim Healy, CEO of EnerNOC

– Ellen Rubin, Founder & VP/Product of CloudSwitch

Those who asked questions got a copy of Dan Bricklin’s book, Bricklin on Technology. There was also a DVD of some other book (I don’t know what it was) that was being given away.

A show of hands at the beginning suggested that 100% of the people were optimistic about the recovery and the future. It was a nice discussion, though after leaving the session I don’t get the feeling that anyone addressed the question “What’s next in tech” head on. Most of the conversation was about what has already happened in tech, with a lot of discussion about Twitter and Facebook. There were a lot of questions from the audience about what the panelists’ companies were doing to help young entrepreneurs. And for an audience that was supposed to contain people interested in “what’s next”, no one wanted the DVD of the book; everyone wanted the paper copies. Hmm …

There has been a lot of interesting discussion about the topic and the event. One that caught my eye was Larry Cheng’s blog.

How many outstanding Shares in a start-up

Finding the number of outstanding shares in a start-up.

This question has come up quite often, most recently when I was having lunch with some co-workers last week.

While evaluating an offer from a start-up, have you wondered whether your 10,000 shares were a significant chunk of money or a drop in the ocean?

Have you ever wondered how many shares are outstanding in a start-up you heard about?

This is a non-issue for a public company, where the number of outstanding shares is a matter of public record. But not all start-ups want to share this information with you.

I can’t say this for every state in the union, but in MA you can get a pretty good idea by looking at the Annual Report that every organization is required to file. It won’t be current information; the best you can do is the most recent annual filing, which means you can’t learn anything about a very new company. Still, it is information I would have liked to have had on some occasions in the past.

The link to search the Corporate Database is http://corp.sec.state.ma.us/corp/corpsearch/corpsearchinput.asp

Enter the name of the company and scroll to the bottom where you see a box that allows you to choose “Annual Reports”. Click that and you can read a recent Annual Report which will have the information you need.
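Once you have the outstanding-share count, the arithmetic is simple. The numbers below are made up purely for illustration:

```python
# Hypothetical numbers: 10,000 offered shares out of 20 million outstanding.
your_shares = 10_000
shares_outstanding = 20_000_000

ownership = your_shares / shares_outstanding
print(f"{ownership:.4%}")  # 0.0500%

# At a hypothetical $50M exit (ignoring dilution, option pool
# refreshes, and liquidation preferences), that stake is worth:
exit_value = 50_000_000
print(f"${ownership * exit_value:,.0f}")  # $25,000
```

Real outcomes will differ; later financing rounds dilute the denominator, which is another reason the Annual Report snapshot is only a starting point.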

If you find that this doesn’t work or is incomplete, please post your comments here. I’m sure others will appreciate the information.

And if you have information about other states, that is most welcome.