On MapReduce and Relational Databases – Part 2

This is the second of a two-part blog post that presents a perspective on the recent trend to integrate MapReduce with Relational Databases, especially Analytic Database Management Systems (ADBMS).

The first part of this blog post provides an introduction to MapReduce, a short history of why it was created, and a description of its stated benefits.

This second part describes why I believe that integrating MapReduce with relational databases is a significant mistake. It concludes with some alternatives that would be much better solutions to the problems that MapReduce is supposed to solve.

Announcing the Boston Big Data Summit

Announcement for the kickoff of the Boston “Big Data Summit”. The event will be held on Thursday, October 22nd 2009 at 6pm at the Emerging Enterprise Center at Foley Hoag in Waltham, MA. Register at http://bigdata102209.eventbrite.com


The Boston “Big Data Summit” will be holding its first meeting on Thursday, October 22nd 2009 at 6pm at the Emerging Enterprise Center at Foley Hoag in Waltham, MA.

The Boston area is home to a large number of companies involved in the collection, storage, analysis, integration, and archival of "Big Data", along with data quality and master data management. If you are involved in any of these areas, then the Boston "Big Data Summit" is something you should plan to attend. Save the date!

The first meeting of the group will feature a discussion of “Big Data” and the challenges of “Big Data” analysis in the cloud.

Over 120 people signed up as of October 14th 2009.

There is a waiting list. If you are registered and won’t be able to attend, please contact me so we can allow someone on the wait list to attend instead.

Seating is limited so go online and register for the event at http://bigdata102209.eventbrite.com.

The Boston “Big Data Summit” thanks the Emerging Enterprise Center at Foley Hoag LLP for their support and assistance in organizing this event.

Agenda Updated

The Boston "Big Data Summit" is being sponsored by Foley Hoag LLP, Infobright, Expressor Software, and Kalido.

For more information about the Boston “Big Data Summit” please contact the group administrator at boston.bigdata@gmail.com


The Boston Big Data Summit is organized by Bob Zurek and Amrith (me) in partnership with the Emerging Enterprise Center at Foley Hoag LLP.


Tell me about something you failed at, and what you learnt from it.

I have been involved in a variety of interviews, both at work and as part of the selection process in the town where I live. Most people are prepared for questions about their background and qualifications. But in a number of recent interviews that I have participated in, candidates looked like deer in the headlights when asked the question (or a variation thereof),

“Tell me about something that you failed at and what you learned from it”

A few people turn that question around and try to give themselves a back-handed compliment. For example, one that I heard today was,

“I get very absorbed in things that I do and end up doing an excellent job at them”

Really? Why is this a failure? Can't you come up with a better one?

Folks, if you plan to go to an interview, please think about this in advance and have a good answer to this one. In my mind, not being able to answer this question with a “real failure” and some “real learnings” is a disqualifier.

One thing that I firmly believe is that failure is a necessary by-product of showing initiative, in just the same way that bugs are a natural by-product of software development. If someone has not made mistakes, then they probably have not shown any initiative. And if they can't recognize when they have made a mistake, that is scary too.

Finally, I have told people who have been in teams that I managed that it is perfectly fine to make a mistake; go for it. So long as it is legal, within company policy and in keeping with generally accepted norms of behavior, I would support them. So, please feel free to make a mistake and fail, but please, try to be creative and not make the same mistake again and again.

Oracle fined $10k for violating TPC’s fair use rules

Oracle fined $10k for violating TPC’s fair use rules.

In a letter dated September 25, 2009, the TPC fined Oracle $10k based on a complaint filed by IBM. You can read the letter here.

Recently, Oracle ran an advertisement in The Wall Street Journal and The Economist making unsubstantiated superior performance claims about an Oracle/Sun configuration relative to an official TPC-C result from IBM. The ad ran twice on the front page of The Wall Street Journal (August 27, 2009 and September 3, 2009) and once on the back cover of The Economist (September 5, 2009). The ad references a web page that contained similar information and remained active until September 16, 2009. A complaint was filed by IBM asserting that the advertisement violated the TPC’s fair use rules.

Oracle is required to do four things:

1. Pay a fine of $10,000.
2. Take all steps necessary to ensure that the ad will not be published again.
3. Remove the contents of the page www.oracle.com/sunoraclefaster.
4. Report back to the TPC on the steps taken for corrective action and the procedures implemented to ensure compliance in the future.

At the time of this writing, the link http://www.oracle.com/sunoraclefaster is no longer valid.

Can you copyright movie times?

MovieShowtimes.com, a site owned by West World Media, believes that they have!

In his article, Michael Masnick relates the experience of a reader, Jay Anderson, who found a loophole on the MovieShowtimes.com web page and figured out how to get movie times for a given zip code. He (Jay Anderson) then contacted the company asking how he could become an affiliate and drive traffic their way, and was rewarded with some legal mumbo jumbo.

First of all, I think the minion at the law firm was taking a course on "Nasty Letter Writing 101" and did a fine job. I'm no copyright expert, but if I received an offer from someone to drive more traffic to my site, my first answer would not be to get a lawyer involved.

Second, this whole episode could have well been featured in the book, Letters from a Nut, by Ted L. Nancy or the sequel More Letters from a Nut.

But, this reminds me of something a former co-worker told me about an incident where his daughter wrote a nice letter to a company and got her first taste of legal overzealousness. He can correct the facts and fill in the details, but if I recall correctly, the daughter in question had written letters to many companies asking the usual children's questions about how pretzels, or candy, or a nice toy was made. In response, some nice person in a marketing department sent a gift hamper back with a polite explanation of the process, etc. But one day the little child wanted to know (if my memory serves me correctly) why M&M's were called M&M's. So, along went the nice letter to the address on the box. The response was a letter from the same guy who now works for MovieTimesForDummies.com explaining that M&M's was a copyright of the so-and-so company and any attempt to blah blah blah.

I think it is only a matter of time before MovieTimesForDummies.com releases exactly the same app that Jay Anderson wanted to, closes the loophole that he found, and fires the developer who left it there in the first place.

Oh, wait, I just got a legal notice from Amazon saying that the link on this blog directing traffic to their site is a violation of something or the other …

Multithreaded File I/O (Reflections on Dr. Dobb’s article by Stefan Wörthmüller)

Thoughts on the results that Stefan Wörthmüller reports in his article on Dr. Dobb’s Journal.

I ran across an interesting article on Multi-Threaded File I/O in Dr. Dobb’s today. You can read the article at http://www.ddj.com/hpc-high-performance-computing/220300055

I was particularly intrigued by the statements on variability,

I repeated the entire test suite three times. The values I present here are the average of the three runs. The standard deviation in most cases did not exceed 10-20%. All tests have been also run three times with reboots after every run, so that no file was accessed from cache.

Initially, I thought 10-20% was a bit much; this seemed like a relatively straightforward test and variability should be low. Then I looked at the source code for the test and I’m now even more puzzled about the variability.

Get a copy of the sources here. It is a single source file, and in the only case of randomization, it uses rand() to get a location in the file.

The code to do the random seek is below:

   if(RandomCount)
   {
      // Seek new position for Random access
      if(i >= maxCount)
         break;
      long pos = (rand() * fileSize) / RAND_MAX - BlockSize;
      fseek(file, pos, SEEK_SET);
   }

While this is a multi-threaded program, I see no calls to srand() anywhere in the program. Just to be sure, I modified Stefan's program as attached here. (My apologies, the file has an extension of .jpg because I can't upload a .cpp or .zip onto this free WordPress blog. The file is a Windows ZIP file; just rename it.)

///////////////////////////////////////////////////////////////////////////////
// mtRandom.cpp   Amrith Kumar 2009 (amrith (dot) kumar (at) gmail (dot) com
// This program is adapted from the program FileReadThreads.cpp by Stefan Woerthmueller
// No rights reserved. Feel Free to do what ever you like with this code
// but don't blame me if the world comes to an end.

#include "Windows.h"
#include "stdio.h"
#include "conio.h"
#include <stdlib.h>

///////////////////////////////////////////////////////////////////////////////
// Worker Thread Function
///////////////////////////////////////////////////////////////////////////////

DWORD WINAPI threadEntry(LPVOID lpThreadParameter)
{
    int index = (int) lpThreadParameter;
    FILE * fp;
    char filename[32];

    sprintf ( filename, "file-%d.txt", index );

    fprintf ( stderr, "Thread %d started\n", index );
    if ((fp = fopen ( filename, "w" )) == (FILE *) NULL)
    {
        fprintf ( stderr, "Error opening file %s\n", filename );
    }
    else
    {
        /* With no srand() anywhere, the values written here are
           identical from run to run. */
        for (int i = 0; i < 10; i ++)
        {
            fprintf ( fp, "%u\n", rand());
        }

        fclose (fp);
    }

    fprintf ( stderr, "Thread %d done\n", index );

    return 0;
}

#define MAX_THREADS (5)

int main(int argc, char* argv[])
{
    HANDLE h_workThread[MAX_THREADS];

    for(int i = 0; i < MAX_THREADS; i++)
    {
        h_workThread[i] = CreateThread(NULL, 0, threadEntry, (LPVOID) i, 0, NULL );
        Sleep(1000);
    }

    WaitForMultipleObjects(MAX_THREADS, h_workThread, TRUE, INFINITE);
    printf ( "All done. Good bye\n" );
    return 0;
}

So, I confirmed that Stefan will be getting the same sequence of values from rand() over and over again, across reboots.
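
As an aside, if one actually wanted the access pattern to differ from run to run, a single seeding call at program start would be enough. Here is a minimal sketch; this is my own illustration, not anything in Stefan's program, and the time-based seed is just the conventional choice:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    /* Seed once per run; without this call, rand() produces the
       identical sequence on every run and after every reboot. */
    srand((unsigned int) time(NULL));

    for (int i = 0; i < 5; i ++)
        printf("%d\n", rand());

    return 0;
}

As published, though, there is no seeding at all, so the seek offsets should be identical from run to run.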

Why then is he still seeing 10-20% variability? Beats me, something smells here … I would assume that from run to run, there should be very little variability.

Thoughts?

From the “way-back machine”

We've all heard the expression "way-back machine" and some of us know about tools like the Time Machine. But, did you know that there is in fact a "way-back machine"?

From time to time, I have used this service; it is one of those corners of the web that is good to know about. I was reminded of it this morning in a conversation, and that led to a nice walk through history.

If you aren’t familiar with the “way-back machine”, take a look at http://www.archive.org/web/web.php

Some day you may wonder what a web page looked like a while ago and the “way-back machine” is your solution.

Here are some interesting ones that I looked at today. Time Magazine in February 1999:

Time magazine web page in February 1999

Ever wondered what the Dataupia web page looked like in February 2006? I know someone who would get a kick out of it so I went and looked it up.

The Dataupia web page from February 2006

Check it out sometime; the way-back machine is a wonderful afternoon diversion.

The “way back” archive is not complete, alas!

Florida recounts

Diluting education standards in Kansas (part II)

Coming in the aftermath of the efforts to outlaw the teaching of evolution in the state, this story about Kansas is unfortunate.

http://blog.acm.org/archives/csta/2009/09/post_4.html

http://usacm.acm.org/usacm/weblog/index.php?p=741

The state has significant employment problems, and the recent downturn in the economy has had a significant impact on its aircraft industry. With a nascent IT start-up scene there, this is probably the worst publicity the state could have hoped for.

Who are you, really? The value of an incorrect response in challenge-response style authentication.

We all know how service providers validate the identity of callers. But how do you validate the identity of the service provider on the other end of the telephone? In computer security, the challenge-response mechanism is a useful way of validating identities; a deliberately wrong answer, and the response to that wrong answer, can tell you a lot.

Service providers (electricity, cable, wireless phone, POTS telephone, newspaper, banks, credit card companies) are regularly faced with the challenge of identifying and validating the identity of the individual who has called customer service. They have come up with elaborate schemes involving the last four digits of your social security number, your mailing address, your mother’s maiden name, your date of birth and so on. The risks associated with all of these have been discussed at great length elsewhere; social security numbers are guessable (see “Predicting Social Security Numbers from Public Data”, Acquisti and Gross), mailing addresses can be stolen, mother’s maiden names can be obtained (and in some Latin American countries your mother’s maiden name is part of your name) and people hand out their dates of birth on social networking sites without a problem!

Bogus Parking ticket

So, apart from identity theft by someone guessing at your identity, we also have identity theft because people give out critical information about themselves. Phishing attacks are well documented, and we have heard of the viruses that have spread based on fake parking tickets.

Privacy and Information Security experts caution you against giving out key information to strangers; very sound advice. But, how do you know who you are talking to?

Consider these two examples of things that have happened to me.

1. I receive a telephone call from a person who identifies himself as being an investment advisor from a financial services company where I have an account. He informs me that I am eligible for a certain service that I am not utilizing and he would like to offer me that service. I am interested in this service and I ask him to tell me more. In order to tell me more, he asks me to verify my identity. He wants the usual four things and I ask him to verify in some way that he is in fact who he claims to be. With righteous indignation he informs me that he cannot reveal any account information until I can prove that I am who I claim to be. Of course, that sets me off and I tell him that I would happily identify myself to be who he thinks I am, if he can identify that he is in fact who he claims to be. Needless to say, he did not sell me the service that he wanted to.

2. I call a service provider because I want to make some change to my account. They have "upgraded their systems" and, having looked up my account number and having "matched my phone number to the account", they put me through to a real live person. After discussing how we will make the change that I want, the person then asks me to provide my address. OK, now I wonder why that would be. Don't they have my address? Surely they've managed to send me a bill every month.

“For your protection, we need to validate four pieces of information about you before we can proceed”, I am told.

The four items are my address, my date of birth, the last four digits of my social security number and the “name on the account”.

Of course, I ask the nice person to validate something (for example, tell me how much my last bill was) before I proceed. I am told that for my own protection, they cannot do that.

Computer scientists have developed several techniques that provide "challenge-response" style authentication, where both parties can convince themselves that the other is who they claim to be. For example, public-key/private-key encryption provides a simple way to do this. Either party can generate a random string and provide it to the other, asking the other to encrypt it using the key that they have. The encrypted response is returned to the sender, and that is sufficient to guarantee that the peer does in fact possess the appropriate "token".
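
To make the idea concrete, here is a minimal sketch of the exchange in C. To keep it short it uses a shared secret and HMAC-SHA256 (via OpenSSL) rather than the public-key pair described above, and the names and the secret are purely illustrative; the shape of the exchange is the same either way.

/* Minimal mutual challenge-response sketch (shared secret + HMAC-SHA256).
 * Illustrative only; compile with: cc challenge.c -lcrypto
 */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/rand.h>

/* Answer a random challenge by keying a MAC with the shared secret.
 * Possession of the secret is proven without ever sending the secret. */
static void answer_challenge(const unsigned char *secret, int secret_len,
                             const unsigned char *challenge, size_t challenge_len,
                             unsigned char *response, unsigned int *response_len)
{
    HMAC(EVP_sha256(), secret, secret_len,
         challenge, challenge_len, response, response_len);
}

int main(void)
{
    const unsigned char secret[] = "shared-account-secret";   /* illustrative */
    unsigned char challenge[16];
    unsigned char expected[EVP_MAX_MD_SIZE], response[EVP_MAX_MD_SIZE];
    unsigned int expected_len, response_len;

    /* Customer side: generate a random challenge and send it over. */
    if (RAND_bytes(challenge, sizeof(challenge)) != 1)
        return 1;

    /* Provider side: answer the challenge with its copy of the secret. */
    answer_challenge(secret, sizeof(secret) - 1, challenge, sizeof(challenge),
                     response, &response_len);

    /* Customer side: compute the expected answer and compare. */
    answer_challenge(secret, sizeof(secret) - 1, challenge, sizeof(challenge),
                     expected, &expected_len);

    if (response_len == expected_len &&
        memcmp(response, expected, expected_len) == 0)
        printf("Provider proved possession of the secret.\n");
    else
        printf("Provider failed the challenge.\n");

    return 0;
}

Either side can run the exchange in the other direction, so the customer answers the provider's challenge and the provider answers the customer's; neither side ever has to reveal the secret itself, which is exactly what the "last four digits of your SSN" ritual cannot offer.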

In the context of a service provider and a customer, there would be a mechanism for the service provider to verify that the "alleged customer" is in fact the customer he or she claims to be, but also for the customer to verify that the provider is in fact the real thing.

The risks in the first scenario are absolutely obvious; I recently received a text message (an attack vector) that read:

“MsgID4_X6V…@v.w RTN FCU Alert: Your CARD has been DEACTIVATED. Please contact us at 978-596-0795 to REACTIVATE your CARD. CB: 978-596-0795”

A quick web search does in fact show that this is a phishing attempt. Whether anyone has tracked that phone number down and found out if it belongs to a poor unsuspecting victim or to the perpetrator, I am not sure.

But what does one do when one receives an email or a phone call from a vendor with whom one actually has a relationship?

One could contact a psychic to find out if it is authentic; check out the New England SEERs, for example.

http://twitter.com/ILNorg/status/3786484194

http://twitter.com/NewEnglandSEERs

RT @Lucy_Diamond 978-596-0795 do not return call on text. Call police or your real bank. Caution bank fraud. Never give your pin to anyone

RT @Lucy_Diamond Warning bank scam via cell phone text remember never give your pin number to anyone. Your bank won’t ask you they know it

Agent: For your security, please verify some information about your account. What is your account number?

Me: Provide my account number

Agent: Thank you, could you give me your passphrase?

Me: ketchup

Agent: Thank you. Could you give me your mother’s maiden name

Me: Hoover Decker

Agent: Thank you. And the last four digits of your SSN?

Me: 2004

Agent: Just one more thing, your date of birth please

Me: February 14th 1942

Agent: Thank you

Agent: For your security, please verify some information about your account. What is your account number?

Me: Provide my account number

Agent: Thank you, could you give me your passphrase?

Me: ketchup

Agent: That’s not what I have on the account

Me: Really, let me look for a second. What about campbell?

Agent: No, that’s not it either. It looks like you chose something else, but similar.

Me: Oh, of course, Heinz58. Sorry about that

Agent: That’s right, how about your mother’s maiden name.

Me: Hoover Decker

Agent: No, that’s not it.

Me: Sorry, Hoover Bissel

Agent: That’s right. And the last four of your social please

Me: 2007

Agent: Thank you, and the date of birth?

Me: Feb 29, 1946

Agent: Thank you

Agent: For your security, please verify some information about your account. What is your account number?

Me: Provide my account number

Agent: Thank you, could you give me your passphrase?

Me: ketchup

Agent: Thank you. Could you give me your mother’s maiden name

Me: Hoover Decker

Agent: Thank you. And the last four digits of your SSN?

Me: 2004

Agent: Just one more thing, your date of birth please

Me: February 14th 1942

Agent: Thank you. Could you verify the address to which you would like us to ship the package.

(At this point, I’m very puzzled and not really sure what is going on)

Me: Provided my real address (say 10 Any Drive, Somecity, 34567)

Agent: I’m sorry, I don’t see that address on the account, I have a different address.

Me: What address do you have?

Agent: I have 14 Someother Drive, Anothercity, 36789.

The address the agent provided was in fact a previous location where I had lived.

What has happened is that the cable company (like many other companies these days) has outsourced the fulfillment of the orders related to this service. In reality, all they want is to verify that the account number and the address match! How they had an old address, I cannot imagine. But, if the address had matched, they would have mailed a little package out to me (it was at no charge anyway) and no one would be any the wiser.

But, I hung up and called the cable company on the phone number on my bill and got the full fourth-degree. And they wanted to talk to “the account owner”. But, I had forgotten what I told them my SSN was … Ironically, they went right along to the next question and later told me what the last four digits of my SSN were 🙂

Someone said they were interested in the security and privacy of my personal information?

We people born on the 29th of February 1946 are very skeptical.

Faster or Free

I don’t know how Bruce Scott’s article showed up in my mailbox but I’m confused by it (happens a lot these days).

I agree with him that too much has been made of whether a system is a columnar system, a truly columnar system, or a vertically partitioned row store; what really matters to a customer is TCO and price-performance in their own environment. Bruce says as much in his blog post:

Let’s start talking about what customers really care about: price-performance and low cost of ownership. Customers want to do more with less. They want less initial cost and less ongoing cost.

Then, he goes on to say:

On this last point, we have found that we always outperform our competitors in customer created benchmarks, especially when there are previously unforeseen queries. Due to customer confidentiality this can appear to be a hollow claim that we cannot always publicly back up with customer testimonials. Because of this, we’ve decided to put our money where our mouth is in our “Faster or Free” offer. Check out our website for details here: http://www.paraccel.com/faster_or_free.php

So, I went and looked at that link. There, it says:

Our promise: The ParAccel Analytic Database™ will be faster than your current database management system or any that you are currently evaluating, or our software license is free (Maintenance is not included. Requires an executed evaluation agreement.)

To be consistent, should the promise not be that the ParAccel offering will provide better price-performance and lower TCO than the current system or the one being evaluated? After all, that is what customers really care about.

I’m confused. More coffee!

Oh, there’s more! Check out this link http://www.paraccel.com/cash_for_clunkers.php

Talk about fine print:

* Trade-in value is equivalent to the first year free of a three year subscription contract based on an annual subscription rate of $15K/user terabyte of data. Servers are purchased separately.

Desktop Email Client vs. GMail: Why desktop mail clients are still better than the GMail interface

The GMail user interface, while very good and much better than some of the others, lacks some useful functionality needed to make it a complete replacement for a desktop email client like Outlook.

Joe Kissell writes in CIO magazine about the six reasons why desktop email clients still rule. He opines that he would take a desktop email client any day and provides the following reason, and six more:

Well, there is the issue of outages like the one Gmail experienced this week. I like to be able to access my e-mail whenever I want. But beyond that, webmail still lags far behind desktop clients in several key areas.

Much has been written by many on this subject. As long ago as 2005, Cedric pronounced his verdict. Brad Shorr had a more measured comparison that I read before I made the switch about a month ago. Lifehacker published the definitive comparison (IMHO it fell flat; their verdicts were shallow). Rakesh Agarwal presented some good insights and suggestions.

I read all of this and tried to decide what to do about a month ago. Here is a description of my usage.

My Usage

1. Email Accounts

I have about a dozen. Some are through GMail, some are on domains that I own, one is at Yahoo, one at Hotmail, and then there is a smattering of others like aol.com, ZoHo and mail.com. While employed (I am currently a free agent) I have always had an Exchange Server to look at as well.

2. Email volume

Excluding work-related email, I receive about 20 or 30 messages a day (after eliminating spam).

3. Contacts

I have about 1200 contacts in my address book.

4. Mobile device

I have a Windows Mobile based phone and I use it for calendaring, email and as a telephone. I like to keep my complete contact list on my phone.

5. Access to Email

I am NOT a Power-User who keeps reading email all the time (there are some who will challenge this). If I can read my email on my phone, I’m usually happy. But, I prefer a big screen view when possible.

6. I like to use instant messengers. Since I have accounts on AOL IM, Y!, HotMail and Google, I use a single application that supports all the flavors of IM.

Seems simple enough, right? Think again. Here is why, after migrating entirely to GMail, I have switched back to a desktop client.

The Problem

1. Google Calendar and Contact Synchronization is a load of crap.

Google does some things well. GMail (the mail and parts of the interface) is one of those things. They support POP and IMAP, they support consolidation of accounts through POP or IMAP, and they allow email to be sent as if from another account. They are far ahead of the rest. With Google Labs you can get a pretty slick interface. But, Calendar and Contact Synchronization really suck.

For example, I start off with 1200 contacts and synchronize my mobile device with Google. How do I do it? By adding an Exchange Server called m.google.com on the phone and having it provide Calendar and Contacts. You can read about that here. After synchronizing the two, I had 1240 or so contacts on my phone. OK, I had sent email to 40 people through GMail who were not in my address book. Great!

Then I changed one person's email address and the wheels came off the train. It tried to synchronize everything and ended up with some errors.

I started off with about 120 entries in my calendar; after synchronizing every hour, I now have 270 or so. Each time it felt that contacts had been changed, it refreshed them, and I now have seventeen appointments tomorrow saying it is someone's birthday. Really, do I get extra cake or something?

2. Google Chat and Contact Synchronization don’t work well together.

After synchronizing contacts, my Google Chat went to hell in a handbasket. There's no way to tell why; I just don't see anyone in my Google Chat window any more.

Google does some things well. The GMail server side is one of them. As Bing points out, Google Search returns tons of crap (not that Bing does much better). Calendar, Contacts and Chat are still not in the “does well” category.

So, it is back to Outlook Calendar and Contacts and POP Email. I will get all the email to land in my GMail account though, nice backup and all that. But GMail Web interface, bye-bye. Outlook 2007 here I come, again.

The best of both worlds

The stable interface between a phone and Outlook, a stable calendar, contacts and email interface (hangs from time to time but the button still works), and a nice online backup at Google. And, if I’m at a PC other than my own, the web interface works in a pinch.

POP all mail from a variety of accounts into one GMail account and access just that one account from both the telephone and the desktop client. And install that IM client application again.

What do I lose? The threaded message format that GMail has (that I don't like). Yippee!

ParAccel TPC-H 30TB results challenged!

Watch the feeding frenzy now that ParAccel’s TPC-H 30TB results have been challenged.

Before I had my morning cup of coffee, I found an email message with the subject "ParAccel ADVISORY" sitting in my mailbox. Now, I'm not exactly sure why I got this message from Renee Deger of GlobalFluency, so my first suspicion was that this was a scam and that someone in Nigeria would be giving me a million dollars if I did something.

But, I was disappointed. Renee Deger is not a Nigerian bank tycoon who will make me rich. In fact, ParAccel’s own blog indicates that their 30TB results have been challenged.

We wanted you to hear it from us first.  Our TPC-H Benchmark for performance and price-performance at the 30-terabyte scale was taken down following a challenge by one of our competitors and a review by the TPC.  We executed this benchmark in collaboration with Sun and under the watch of a highly qualified and experienced auditor.   Although it has been around for many years, the TPC-H specification is still subject to interpretation, and our interpretation of some of the points raised was not accepted by the TPC review board.

None of these items impacts our actual performance, which is still the best in the industry.  We will rerun the benchmark at our earliest opportunity according to the interpretations set forth by the TPC review board. We remain committed to the organization, as outlined in a blog post by Barry Zane, our CTO, here: http://paraccel.com/data_warehouse_blog/?p=74#more-74.

Please see the company blog for a post by David Steinhoff in the office of the CTO for further info: http://paraccel.com/data_warehouse_blog/?p=104#more-104

I read David Steinhoff's blog as well. He writes:

This last week, our June 2009 30TB results were challenged and found to be in violation of certain TPC rules. Accordingly, our results will soon be taken down from the TPC website.

We published our results in good faith. We used our standard, customer available database to run the benchmark (we wanted the benchmark to reflect the incredible performance our customers receive). However, the body of TPC rules is complex and undergoes constant interpretation; we are still relatively new to the benchmark game and are still learning, and we made some mistakes.

While we cannot get into the details of the challenges to our results (TPC proceedings are confidential and we would be in violation if we did), we can state with confidence that our query performance was in no way enhanced by the items that were challenged.

We can also say with confidence that we will publish again, soon.

Now, some competitor or competitor loyalist may try to make more of this than there is … we all know there is the risk of tabloid frenzy around any adversity in a society with free speech … and we wouldn’t have it any other way.

It is unfortunate that the proceedings are confidential and cannot be shared. I hope you republish your results at 30TB.

Contrary to a long list of pundits, I believe that these benchmarks have an important place in the marketing of a product and its capabilities.

I reiterate what I said in a previous blog post:

ParAccel’s solution is based on high-performance trilithium crystals. (Note: I don’t know why this wasn’t disclosed in the full disclosure report).

I hope the challenge was not about the trilithium crystals and the fact that you didn’t disclose it in the full disclosure report.

A meaningful networking platform for children with special needs.

Parlerai creates a secure network of family, friends and caregivers surrounding a child with special needs. Help spread the word, you can make a difference.

My friend Jon Erickson and his wife Kristin have been working on Parlerai for some time now. As parents of a child with special needs, Kristin and Jon were frustrated by the lack of tools available for parents like them to track and share information and ultimately collaborate with the people making a difference in their daughter's life. They have built the Parlerai platform, which will profoundly and positively impact the lives of the individuals in their family, and the lives of parents around the world.

Parlerai creates a secure network of family, friends and caregivers surrounding a child with special needs.

Parlerai creates a secure network of family, friends and caregivers surrounding a child with special needs and uses innovative and highly personalized tools to enhance collaboration and provide a highly secure method of communicating via the Internet. There are tools for parents, tools for children and tools for caregivers. As the world's first Augmentative Collaboration service, Parlerai (French for "shall speak") gives children with special needs a voice.

Parlerai is for children with special needs. They use it to communicate and access online media and information in a safe environment.

Parlerai is for parents of children with special needs. They use it to share and communicate information about their children. It gives them a single medium through which they can collaborate with others who are part of their children's world.

Parlerai is for consultants, educators and caregivers of children with special needs. With Parlerai, they are able to collaborate with others who are part of the children's world.

In an interview with Dana Puglisi, Kristin describes the value of Parlerai:

“For consultants, educators, and other caregivers, imagine trying to make a difference in a child’s life but only seeing that child for half an hour each week. How do you really get to know that child? How can you make the most impact? Now imagine having access to information provided by others who work with that child – her ski instructor, her physical therapist, her grandmother, her babysitter. Current information, consistent information. Imagine how much more you could learn about that child. Imagine how much greater your impact could be.”

Here is what you can do to help

I read some statistics about children with special needs some months ago. Based on those numbers, each and every one of us knows of at least three people whose children have special needs. Parlerai is a system that could profoundly change the lives of these children and their parents and caregivers. Do your part, and spread the word about Parlerai. You can make the difference.

Spread the word, visit http://tinyurl.com/parlerai (a link to this blog post) or http://www.parlerai.com.

More information is also available at Jon’s blog at http://jon-erickson.blogspot.com/ and Kristin’s blog at http://kristingerickson.blogspot.com/

Facebook’s OpenID integration is not very useful!

Is facebook paying lip service to OpenID integration?

Preamble:

I don't know a damn thing about OpenID and less about web applications, but I do know a thing about security, authentication and the like. And I am a facebook user and, like most other internet consumers in this day and age, I am not thrilled that I have to remember a whole bunch of different user names and passwords for each and every online location that I visit.

Facebook’s OpenID integration

Once, and for one last time, you log in to facebook with your existing credentials. Let's say your username is <joe@joeblow.com>; you then go over to Settings and create your OpenID as a Linked Account. In the interests of full disclosure, I am still working with Gary Krall of Verisign, who posted a comment on my previous post, on problems with this linking process. I am sure that we will get that squared away and I can get the linking to work.

Once this linkage is created, a cookie is deposited on your machine indicating that authentication is by OpenID. You wake up in the morning, power up your PC, launch your browser, and log in to your OpenID provider; in a second tab, you wander over to http://www.facebook.com.

The way it is supposed to work is this: something looks at the OpenID cookie deposited earlier and uses that to perform your validation.

Are you nuts?

As I said earlier, I don’t know a lot about building Web Applications. But, methinks the sensible way to do this is a little different from the way facebook is doing things.

Look, for example, at news.ycombinator.com. On the login screen, below the boxes for username and password is a button for other authentication mechanisms. If you click that, you can enter your OpenID URL and voila, you are on your way. No permanent cookies involved.

Now, if you didn't have your morning Joe and you went directly to news.ycombinator.com and tried to enter your OpenID name, you are promptly forwarded to your OpenID provider's page, which asks for authentication. Over, end of story. No permanent cookies involved.

Ok, just to verify, I did this …

I went to a friend's PC, one I had never used before, pointed his browser (Firefox) to news.ycombinator.com, clicked the button under login/password, entered my OpenID name, and sure enough it vectored over to Verisign Labs. I logged in and voila, I'm on Hacker News.

Am I missing something? It sounds to me like facebook is paying lip service to OpenID. Either that or they just don’t get it?

A farewell to facts, rigor and civility

I read this article from Mike (The Fein Line) Feinstein’s blog and it occurs to me that we have collectively chosen to ignore facts, rigor and civility.

For a whole bunch of reasons (beyond the one that Mike mentions), I have to say that I too am happy and proud that I live in Massachusetts.

OpenID first impressions

I have been meaning to try OpenID for some time now, and I just noticed that they were doing a free two-factor authentication offering (what they call VIP Credentials) for mobile devices, so I decided to give it a shot.

I picked Verisign’s OpenID offering; in the past I had a certificate (document signing) from Verisign and I liked the whole process so I guess that tipped the scales in Verisign’s favor.

The registration was a piece of cake; downloading the credential generator to my phone and linking it to my account was a breeze. They offer a File Vault (2GB) free with every account (hey Google, did you hear that?) and I gave that a shot.

I created a second OpenID and linked it to the same mobile credential generator (very cool). Then I figured out what to do if my cell phone (and mobile credential generator) were to be lost or misplaced; it was all very easy. Seemed too good to be true!

And, it was.

Facebook allows one to use an external ID for authentication. Go to Account Settings and Linked Accounts and you can set up the linkage. Cool, let's give that a shot!

Facebook OpenID failure

So much for that. I have an OpenID, anyone have a site I could use it on?

Oh yes! I could login to Verisignlabs with my OpenID 🙂

Update:

I tried to link my existing "Hacker News" (news.ycombinator.com) account with OpenID, and after authenticating with Verisign, I got to a page that asked me to enter my HN information, which I did.

I ended up with a page: http://news.ycombinator.com/openid_merge and a single word “Unknown” on the screen.

I’ve got to be doing something wrong. Someone care to tell me how badly messed up I am?

Update (Sept 11)

Thanks to help from Gary (who commented on this post), I tried the “linking” on Facebook again and this time it worked a little better.

But, I still have to enter my password when I want to log in to facebook. Something is still not working the way it should.

Still the same issue with Hacker News.

What I learnt from the GMAIL outage

We all have heard about it, many of us (most of us) were affected by it, and some of us actually saw it. This makes it a fertile subject for conversation, in person over a cold pint, or online. I have read at least a dozen blog posts that explain why the GMAIL outage underscores the weakness of, and the reason for the imminent failure of, cloud computing. I have read at least two that explain why this outage proves the point that enterprises must have their own mail servers. There are graphs showing the number of tweets at various phases of the outage. There are articles about whether GMAIL users can sue Google over this failure.

The best three quotes I have read in the aftermath of the Gmail outage are these:

“So by the end of next May, we should start seeing the first of the Google Outage babies being born.” – Carla Levy, Systems Analyst

“Now I don’t look so silly for never signing up for an e-mail address, do I?” – Eric Newman, Pile-Driver Operator

“Remember the time when 150 million people couldn’t use Gmail for nearly ten years? From 1993–2003? And every year before that? Unimaginable.” – Adam Carmody, Safe Installer

Admittedly, all three came from “The Onion“.

This article is about none of those things. To me, the GMAIL outage could not have come at a better time. I had just finished reconfiguring where my mail goes and how it gets there. The outage gave me a chance to make sure that all the links worked well.

I have a GMAIL account and I have email that comes to other (non-GMAIL) addresses. I use GMAIL as a catcher for the non-GMAIL addresses using the “Imports and Forwarding” capability of GMAIL. That gives me a single web based portal to all of my email. The email is also POP3’ed down to a PC, the one which I am using to write this blog post. I get to read email on my phone (using its POP3 capability) from my GMAIL account. Google is a great backup facility, a nice web interface, and a single place where I can get all of my email. And, if for any reason it were to go kaput, as it did on the 1st, in a pinch, I can get to the stuff in a second or even a third place.

But, more importantly, if GMAIL is unavailable for 100 minutes, who gives a crap. Technology will fail. We try to make it better but it will still fail from time to time. Making a big hoopla about it is just plain dumb. On the other hand, an individual could lose access to his or her GMAIL for a whole bunch of reasons; not just because Google had an outage. Learn to live with it.

So what did I learn from the GMAIL outage? It gave me a good chance to see a bunch of addicts, and how they behave irrationally when they can’t get their “fix”. I’m a borderline addict myself (I do read email on my phone, as though I get things of such profound importance that instant reaction is a matter of life and death). The GMAIL outage showed me what I would become if I did not take some corrective action.

Technology has given us the means to “shrink the planet” and make a tightly interconnected world. With a few keystrokes, I can converse with a person next door, in the next state or half way across the world. Connectivity is making us accessible everywhere; in our homes, workplaces, cars, and now, even in an aircraft. It has given us the ability to inundate ourselves with information, and many of us have been over-indulging (to the point where it has become unhealthy).

Full Corn Moon

Today was the “Full Corn Moon” and I was lucky enough to get a clear night.

Full Corn Moon, September 4, 2009

You can see a larger image by clicking on the picture above.

The moon is barely visible in the first image, hidden by the trees. In the second and the third, it makes it out of the trees just as the sun is setting behind me.

And here I was complaining about Verizon Wireless!

I thought poorly of Verizon Wireless service and features (though I’ve been a customer for a while).

That all changed when I read this.

Hey Verizon, where do I sign up for that two year contract? But, could you give me a cellular data plan with more than 5GB per month please …

Till January 1, 2010, Bye-Bye Linux!

Bye-Bye Ubuntu! Back to Windows …

Around the New Year each year, the fact that I am bored silly leads me to do strange things. For the past couple of years, in addition to drinking a lot of Samuel Adams Double Bock or Black Lager, I kick Windows XP, Vista or whatever Redmond has to offer and install Linux on my laptop.

For two years now, Ubuntu has been the Linux of choice. New Year 2009 saw me installing 8.10 (Ignorant Ignoramus) and later upgrading to 9.04 (Jibbering Jackass). But, I write this blog post on my Windows XP (Service Pack 3) powered machine.

Why the change, you ask?

This has arguably been the longest stint with Linux. In the past (2007) it didn’t stay on the PC long enough to make it into work after the New Year holiday. In 2008, it lasted two or three weeks. In 2009, it lasted till the middle of August! Clearly, Linux (and Ubuntu has been a great part of this) has come a very long way towards being a mainstream replacement for Windows.

But, my benchmark for ease of use still remains:

  1. Ease of initial installation
    • On Windows, stick a CD in the drive and wait 2 hours
    • On Linux, stick a CD in the drive and wait 20 minutes
    • Click mouse and enter some basic data along the way
  2. Ease of setup, initial software update, adding basic software that is not part of the default distribution
    • On Windows, VMWare (to run linux), Anti-Virus, Adobe things (Acrobat, Flash, …)
    • On Linux, VMWare (to run windows), Adobe things
  3. Ease of installing and configuring required additional “stuff”, additional drivers
    • printers
    • wacom bamboo tablet
    • synchronization with PDA (Windows ActiveSync, Linux <sigh>)
    • On Windows, DELL drivers for chipset, display, sound card, pointer, …
  4. Configuring Display
    • resolution, alignment
  5. Configuring Mouse and Buttons
  6. Making sure that docking station works
    • On Windows, DELL has some software to help with this
    • On Linux, pull your hair out
  7. Setting Power properties for maximum battery life
    • On Windows, what a pain
    • On Linux, CPU Performance Applet
  8. Making sure that I login and can work as a non-dangerous user
    • On Windows, group = Users
    • On Linux, one who can not administer the system, no root privileges
  9. Setup VPN
    • On Windows, CISCO VPN Client most often. Install it and watch your PC demonstrate all the blue pixels on the screen
    • On Linux, go through the gyrations of downloading the Cisco VPN client from three places, reading 14 blogs, web pages and how-to's on getting the right patches, finding the compilers aren't on your system, finding that 'patch' and the system headers are not there either. Finally, realizing that you forgot to save the .pcf file before you blew Windows away, so calling the IT manager on New Year's Day and wishing him a Happy New Year, and oh, by the way, could you send me the .pcf file (Thanks, Ed).
  10. Setup Email and other Office Applications
    • On Linux, installing a Windows VM with all of the Office suite and Outlook
    • On Windows, installing all of the Office suite and Outlook and getting all the service packs
    • Install subversion (got to have everything under version control). There’s even a cool command line subversion client for Windows (Slik Subversion 1.6.4)
  11. Migrate Mozilla profile to new platform
    • Did you know that you can literally take .mozilla and copy it to someplace in %userprofile% or vice-versa and things just work? Way cool! Try that with Internet Exploder!
  12. Restore SVN dump from old platform

OK, so I liked Linux for the past 8 months. GIMP is wonderful, the Bamboo tablet (almost) just works, the system booted really fast, … I can go on and on.

But, some things really annoyed me with Linux over the past 8 months:

  • Printing to the Xerox multi-function 7335 printer and being able to do color, double-sided, stapling, etc. The setup is not for the faint-hearted
  • Could I please get the docking station to work?
  • Could you please make the new Mozilla part of the updates? If not, I have Firefox and Shrill-kokatoo or whatever the new thing is called. What a load of horse-manure that upgrade turned out to be. On Windows, it was a breeze. Really, open-source-brethren, could you chat amongst yourselves?

But the final straw was that I was visiting a friend in Boston and wanted to whip out a presentation and show him what I'd been up to. Getting an external display working is not an easy thing. First you have to change resolutions, then restart X, then crawl through a minefield and sing the national anthem backwards while holding your nose. Throughout this "setup", you have to keep explaining that it is a Linux thing.

Sorry folks, you aren’t ready for mainstream laptop use, yet. But, you’ve made wonderful improvement since 2007. I can’t wait till December 31, 2009 to try this all over again with Ubuntu 9.10 (Kickass Knickerbockers).

OpenDNS again

More on OpenDNS

I wasn’t about to try diddling the router at 11:30 last night but it seemed like a no-brainer to test out this OpenDNS service.

So, look at the little button below. If it says "You're using OpenDNS" then clearly someone in your network (your local PC, router, DNS, ISP, …) is using OpenDNS. The "code" for this button is simplicity itself:

<a title="Use OpenDNS to make your Internet faster, safer, and smarter." href="http://www.opendns.com/share/">
     <img style="border:0;"
          src="http://images.opendns.com/buttons/use_opendns_155x52.gif"
          alt="Use OpenDNS" width="155" height="52" />
</a>

So, if the lookup for images.opendns.com was sent to my ISP's DNS server, it would likely resolve one way, and if it was sent to OpenDNS, it would resolve a different way. That means that the image retrieved would differ based on whether you are using OpenDNS or not.
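
If you are curious which answer your machine is getting right now, you can simply resolve the name with whatever DNS server your system is currently configured to use. Here is a minimal sketch (my own illustration, written against the POSIX getaddrinfo API, not anything OpenDNS ships):

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>

int main(void)
{
    struct addrinfo hints, *res, *p;
    char ip[INET6_ADDRSTRLEN];

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;        /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    /* The answer comes from whichever resolver this machine points at:
       the ISP's DNS server, or OpenDNS if you changed the router. */
    if (getaddrinfo("images.opendns.com", NULL, &hints, &res) != 0) {
        fprintf(stderr, "lookup failed\n");
        return 1;
    }

    for (p = res; p != NULL; p = p->ai_next) {
        void *addr = (p->ai_family == AF_INET)
            ? (void *) &((struct sockaddr_in *) p->ai_addr)->sin_addr
            : (void *) &((struct sockaddr_in6 *) p->ai_addr)->sin6_addr;
        inet_ntop(p->ai_family, addr, ip, sizeof(ip));
        printf("%s\n", ip);
    }

    freeaddrinfo(res);
    return 0;
}

Run it before and after pointing the router at OpenDNS and compare the addresses; in effect, that is what the little button is checking, just with an image instead of an IP address.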

Use OpenDNS

Step 1: Setup was trivial. I logged in to my router, entered the DNS server addresses, and hit "Apply". The router did its usual thing and voila, I was using OpenDNS.

Step 2: Set up an account on OpenDNS. Easy: provide an email address and add a network. In this case, it correctly detected the public IP address of my router and populated its page. It said it would take 3 minutes to propagate within the OpenDNS network. There's an email confirmation, you click on a link, you know the drill.

Step 3: Set up stats (default: disabled)

All easy, web page resolution seems to be working OK. Let me go and look at those stats they talk about. (Click on the picture below to see it at full size).

Stats don't seem to quite work!

August 17th you say? Today is September 1st. I guess tracking 16 billion or so DNS queries for two days in a row is a little too much for their infrastructure. I can suggest a database or two that would not break a sweat with that kind of data inflow rate.

July 30, 2009: OpenDNS announces that for the first time ever, it successfully resolves more than 16 Billion DNS queries for 2 days in a row.

(source: http://www.opendns.com/about/overview/)

So far, so good. I’ve got to see what this new toy can do 🙂 Let’s see what other damage this thing can cause.

Content Filtering

Nice, they support content filtering as part of the setup. That could be useful. Right now, I reduce annoyances in my browsing experience with a suitably crafted "hosts" file (Windows users: %SYSTEMROOT%\system32\drivers\etc\hosts).

127.0.0.1       localhost
127.0.0.1       ad.doubleclick.net
127.0.0.1       hitbox.com
127.0.0.1       ai.hitbox.com
127.0.0.1       googleads.g.doubleclick.net
127.0.0.1       ads.gigaom.com
127.0.0.1       ads.pheedo.com
[... and so on ...]

I guess I can push this “goodness” over to OpenDNS and reduce the pop-up crap that everyone will get at home. (click on image for a higher resolution version of the screen shot)

Content Filtering on OpenDNS

Multiple Networks!

Very cool! I can set up multiple networks as part of a single user profile. So, my phone and my home router could both end up being protected by my OpenDNS profile.

I wonder how that would work when I'm in a location that hands out a non-routable DHCP address, such as at a workplace. I guess the first person to register the public IP of the workplace will see the traffic of everyone in the workplace who has a per-PC OpenDNS setting and shares the same public IP address? Unclear; that may be awkward.

Enabling OpenDNS on a Per-PC basis.

In last night's post, I had questioned the rationale of enabling OpenDNS on a per-PC basis. I guess there is some value to this because OpenDNS provides me a way to influence the name resolution process. And, if I were to push content filtering onto OpenDNS, then I would like to get the same content filtering when I was not at home; e.g. at work, at Starbucks, …

I’m sure that over-anxious-parents-who-knew-a-thing-or-two-about-PC’s could load the “Dynamic IP” updater thing on a PC and change the DNS entries to point to OpenDNS before junior went away to college 🙂

So, I guess that per-PC OpenDNS settings may make some sense; it would be nice to have an easy way to enable this when required. I guess that is a fun project to work on one of these days when I’m at Starbucks.

Jeremiah says, “I do it on a per computer basis because I occasionally need to disable it. (Mac OS X makes this super quick with Locations)”. Jeremiah, please do tell why you occasionally need to disable it. Does something fail?

Other uses of OpenDNS

kuzux writes in response to my previous post that OpenDNS can be used to get around restrictive ISPs. That is interesting because the ISPs that have put these restrictions in place are likely only blocking name resolution and not connections and traffic. Further, the ISPs could just as well find the IP addresses of sites like OpenDNS and put a damper on the festivities. And, one does not have to look far to get the IP addresses of the OpenDNS servers 🙂

Two thoughts come to mind. First, if the authorities (in Turkey, as kuzux said) put the screws on OpenDNS, would they pour out the DNS lookup logs for specific IP addresses that they cared about (both source and destination)? Second, a hypothetical country with huge manufacturing operations, a less stellar human rights record, and a huge number of take-out restaurants all over the US (that shall remain nameless) could take a dim view of a foreigner who had OpenDNS on his/her laptop and was able to access "blocked" content.

Other comments

Janitha Karunaratne writes in response to my previous post that, “Lot of times if it’s a locked down limited network, they will intercept all DNS traffic, so using OpenDNS won’t help (their own default DNS server will reply no matter which DNS server you try to reach)”. I guess I don’t understand how that could be. When a machine attempts a DNS lookup, it addresses the packet specifically to the DNS server that it is targeting. Are you suggesting that these “locked down limited networks” will intercept that packet, redirect it to the in-house DNS server and have it respond?

David Ulevitch (aka Founder and CTO of OpenDNS) writes, “Yeah, there are all kinds of reasons people use our service. Speed, safety, security, reliability… I do tests when I travel, and have even done it with GoGo on a VA flight and we consistently outperform”. Mr. Ulevitch, your product is wonderful and easy to use. Very cool. But, I wonder about this performance claim. When I am traveling, potentially sitting in an airport lounge, a hotel room, a coffee shop or in a train using GPRS based internet service with unknown bandwidth, is the DNS lookup a significant part of the response time to a page refresh, mail message download, (insert activity of your choice)?

My Point of View

It seems to work, it can’t hurt to use it at home (if my ISP has a problem with it, they can block traffic to the IP address). It doesn’t seem to be appreciably faster or slower than my ISP’s DNS. I’ll give it a shot for a while and see what the statistics say (when they get around to updating them).

OpenDNS is certainly an easy to use, non-disruptive service and is worth giving a shot. If you use the free version of OpenDNS (i.e., don't create an account, just point to their name servers), there is little possible downside; if you get on a Virgin America flight, you may need to disable it. But, if you use the registered site, just remember that OpenDNS is collecting a treasure trove of information about where you go and what you do, and they have your email address and IP address (hence a pretty good idea of where you live). They already target advertising to you on the default landing page for bad lookups. I'm not suggesting that you will get spammed up the wazoo, but just bear in mind that you have yet another place where a wealth of information about you is getting quietly stored away for later use.

But, it is a cool idea. Give it a shot.

OpenDNS and paid wireless services.

Why would anyone ever want to use OpenDNS? I just don’t get it. But, there must be a good reason that I’m totally missing.

I stumbled on this post, not sure how.

http://thegongshow.tumblr.com/post/176629519/virgin-america-inflight-wireless

I didn’t know about OpenDNS, but it seems like a strange idea; after all, why would someone not use the DNS server that came with their DHCP lease? The concept itself is simple enough: override the DNS servers and force the machine to use the DNS servers that OpenDNS provides. But why would anyone care to do this on a PC-by-PC basis? That seems totally illogical!

And the problem that the author of the blog (above) mentioned is fairly straightforward. A PC comes onto the Virgin wireless network, attempts to go to a web page, sends out a connection request for the page (assume the IP address came from a cache) and receives an HTML redirect to a named page (asking one to cough up money). The HTML redirect mentions the page by name, that name is not cached, and the resulting DNS lookup goes to OpenDNS. Those servers (hardcoded into the network configuration) are not reachable because the user has not yet coughed up said money. Conversely, if the initial lookup was not from a cached IP address, then the DNS lookup (which would have been sent to OpenDNS) would not have made it very far either. One way or the other, the page asking the user to cough up cash would never have shown up.

So, could one really do this OpenDNS thing behind a pay-for-internet service? Not unless you can add preferred DNS servers to the network configuration without a reboot (a reboot starts the DHCP cycle all over again, and on high volume WiFi access services, a new DHCP request will expire the previous lease).
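
To be fair, on Linux at least you can switch resolvers without a reboot; whether the captive portal lets the packets through is a separate question. A minimal sketch, assuming you are comfortable editing /etc/resolv.conf and that NetworkManager or dhclient does not immediately overwrite it:

# point this machine at OpenDNS without rebooting (restore the backup to undo)
sudo cp /etc/resolv.conf /etc/resolv.conf.bak
printf 'nameserver 208.67.222.222\nnameserver 208.67.220.220\n' | sudo tee /etc/resolv.conf

# a DHCP lease can also be released and renewed without a reboot
sudo dhclient -r eth0 && sudo dhclient eth0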

But, to the more basic issue, why ever would someone enable this on a PC-by-PC basis? I can totally understand a system administrator using this at the enterprise level, potentially using the OpenDNS servers instead of the ones that the ISP provided; on individual PCs, it sure beats me. And there sure can’t be enough money in showing advertisements on the redirect page for missing URLs (or there must be a lot of people who fat-finger URLs a lot more than I do).

And the functionality of being able to watch my OpenDNS traffic on a dashboard? I just don’t get it. Definitely something to think more about … They sure seem to be a successful operation, so there must clearly be some value in the service they offer, to someone.

When “free” isn’t quite “free”. Or, remember to read the fine print on online photo sharing sites.

A quick comparison of some online photo sharing sites.

I have been a happy user of Flickr (I use the “frugal” free account) until yesterday, when I got a big pop-up box about a 200-image limit. Apparently Flickr does offer unlimited free image storage, but the fine print says that only the most recent 200 images can be shared. Not the worst thing in the world, but I began to think of all the other things I did not like about Flickr, so I started to look around to see whether I had some other alternatives.

There are any number of online image sharing sites. Many of them are also linked with photo printing services, and arguably that is where the money is. It appears that the devil is most certainly hiding in the fine print. Here is a short summary of what I found. I’m planning to move to Shutterfly; do you have some experience with them that makes this a bad idea?

Flickr
http://www.flickr.com/help/limits/
Free: Unlimited (modest resolution) image storage, but no more than 200 images can be shared at a time, and there are upload limits on the number and size of pictures that you can upload.
Paid: Unlimited upload, storage and high resolution storage. $24.95 per year.

Photobucket
http://photobucket.com/faq?catID=29&catSelected=f&topicID=320
http://photobucket.com/faq?catID=41&catSelected=f&topicID=323
http://photobucket.com/faq?catID=39&catSelected=f&topicID=520
Free: Limited (modest resolution) images. Periodic logins are required; failure to log in will deactivate your media.
Paid: Unlimited capacity, FTP uploads (what a concept), no advertising, personal URLs. $24.95 per year.

Shutterfly
http://www.shutterfly.com
Free: Unlimited storage of pictures. Images are stored at high resolution, but you cannot download them at high resolution. Personalized web portal.
Paid: I don’t think they even offer a paid account option. My kind of place!

Snapfish (HP)
http://getsatisfaction.com/snapfish/topics/downloading_snapfish_photos
http://www1.snapfish.com/helppricing#hires
Free: Snapfish offers unlimited photo sharing and storage, but customers must be “active”; the bar for an active customer is that you just need to make one purchase a year. But read the links above: you can store high resolution images, but downloading high resolution images is not free. OUCH!
Paid: No paid account.

Picasa (Google)
Free: 1GB limit (seems odd for the company that claims that storage is unlimited). Other limits also apply.
Paid: No paid offering that I could find.

Smugmug
http://www.smugmug.com/photos/photo-sharing-sites-compared/
No ads! Now, isn’t this a great graphic to illustrate the success in targeted advertising and reinforce their point?

Smug Mug "No Ads!"

Free: No free offering. There is a 14 day free trial.
Paid: Standard $39.95/year, Power $59.95/year, Pro $149.95/year.

Winkflash
http://www.winkflash.com/
http://www.winkflash.com/content/storage.asp
Free: Free image hosting, free unlimited storage, and free high resolution image downloads. 100% FREE. Too good to be true?

MPIX
http://mpix.com/
Free: The site is free for 60 days; after that, you need an order to keep content online. MPIX is a professional print outfit; online sharing is not their primary business.

WHCC (White House Custom Color)
http://www.whcc.com/
WHCC is a professional print outfit. They don’t do photo galleries and sharing stuff. Go here if you want serious prints. Free and paid accounts: not offered.

I think I’m heading to Shutterfly. On Sept 5th I found this Shutterfly article (answer id 181):

“Currently, we do not have full resolution downloading available. However, we do have an Archive DVD service that you can order which contains full resolution copies of your pictures. The images on an Archive DVD will not include any of the Shutterfly enhancements or rotations that have been applied to an image loaded to Shutterfly.”

What BS is this? I guess I’m going to stick with Flickr for a while longer.

Also read http://forums.dpreview.com/forums/read.asp?forum=1023&message=32634142

Making life interesting

I generally don’t like chain letters and SPAM but once in a while I get a real masterpiece. Below is one that I received yesterday …

Working people frequently ask retired people what they do to make their days interesting. Well, for example, the other day the wife and I went into town and went into a shop. We were only in there for about 5 minutes.

When we came out, there was a cop writing out a parking ticket. We went up to him and I said, “Come on man, how about giving a senior citizen a break?”

He ignored us and continued writing the ticket. I called him a Dumb ass. He glared at me and started writing another ticket for having worn tires. So Mary called him a shit head. He finished the second ticket and put it on the windshield with the first.

Then he started writing a third ticket. This went on for about 20 minutes.

The more we abused him, the more tickets he wrote.

Just then our bus arrived and we got on it and went home.

We try to have a little fun each day now that we’re retired.

It’s important at our age.

Priceless!

I’ve lost my cookies!

Some days ago I read an article about Super Cookies (a link is on the Breadcrumbs tab on the right of my blog). I didn’t think to check my machine for these super cookies and did not pay close attention to the lines that said:

GNU-Linux: ~/.macromedia

Sure enough …

amrith@amrith-laptop:~$ find . -name *.sol 2>/dev/null | wc -l
141
amrith@amrith-laptop:~$

Hmmm …

amrith@amrith-laptop:~$ rm `find . -name *.sol 2>/dev/null `
amrith@amrith-laptop:~$ find . -name *.sol 2>/dev/null | wc -l
0
amrith@amrith-laptop:~$

Much better. And my flash player still works. Let’s go look at some video …

amrith@amrith-laptop:~$ find . -name *.sol 2>/dev/null
./.macromedia/Flash_Player/macromedia.com/support/flashplayer/sys/settings.sol
./.macromedia/Flash_Player/macromedia.com/support/flashplayer/sys/#s.ytimg.com/settings.sol
./.macromedia/Flash_Player/#SharedObjects/CZRS8QS7/s.ytimg.com/soundData.sol
./.macromedia/Flash_Player/#SharedObjects/CZRS8QS7/s.ytimg.com/videostats.sol
amrith@amrith-laptop:~$

Time to fix this sucker …

amrith@amrith-laptop:~$ tail -n 1 .bashrc
rm -f `find ./.macromedia -name *.sol 2>/dev/null`
amrith@amrith-laptop:~$
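
One caveat with that .bashrc line (my observation, not part of the original session): the relative ./.macromedia path only works when the shell starts in the home directory, and the unquoted *.sol can be expanded by the shell before find ever sees it. A slightly more robust one-liner, assuming GNU find:

# delete Flash shared objects on every new shell, regardless of where it starts
find "$HOME/.macromedia" -name '*.sol' -type f -delete 2>/dev/null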

Boston Cloud Services meetup yesterday

A summary of the Boston Cloud Services meetup yesterday.

Tsahy Shapsa of Aprigo organized the second Boston Cloud Services meetup yesterday. There were two very informative presentations, the first by Riki Fine of EMC on the EMC Atmos project and the second by Craig Halliwell from TwinStrata.

What I learnt was that Atmos is EMC’s entry into the cloud arena. The initial product is a cloud storage offering with some additional functionality over other offerings like Amazon’s. Key product attributes appear to be scalability into the petabytes, policy and object metadata based management, multiple access methods (CIFS/NFS/REST/SOAP), and a common “unified namespace” for the entire managed storage. While the initial offering is focused on cloud storage, there was a mention of a compute offering in the not too distant future.

In terms of delivery, EMC has set up its own data centers to host some of the Atmos clients. But they have also partnered with other vendors (AT&T was mentioned) who will provide cloud storage offerings that expose the Atmos API. AT&T’s web page reads:

AT&T Synaptic Cloud Storage uses the EMC Atmos™ backend to deliver an enterprise-grade global distribution system. The EMC Atmos™ Web Services API is a Web service that allows developers to enable a customized commodity storage system over the Internet, VPNs, or private MPLS connectivity.

I read this as a departure from the approach being taken by the other vendors. I don’t believe that other offerings (Amazon, Azure, …) provide a standardized API and allow others to offer cloud services compliant with that interface. In effect, I see this as an opportunity to create a marketplace for “plug compatible” cloud storage. Assume that a half dozen more vendors begin to offer Atmos based cloud storage, each offering a different location, SLAs and price point; an end user then has the option to pick and choose from that set. To the best of my knowledge, today the best one can do is pick a vendor and then decide where in that vendor’s infrastructure the data would reside.

Atmos also seems to offer some cool resiliency and replication functionality. An application can leverage a collection of Atmos storage providers. Based on policy, an object could be replicated (synchronously or asynchronously) to multiple locations on an Atmos cloud with the options of having some objects only within the firewall and others being replicated outside the firewall.

Enter TwinStrata, who are an Atmos partner. They have a cool iSCSI interface to the Atmos cloud storage. With a couple of clicks of a mouse, they demonstrated the creation of a small Atmos based iSCSI block device. Going over to a Windows server machine and rescanning disks, they found the newly created volume. A couple of clicks later there was a newly minted “T:” drive that the application could use, just as it would a piece of local storage. TwinStrata provides some additional caching and ease of use features. We saw the “ease of use” part yesterday; the demo lasted a couple of minutes and no more than about a dozen mouse clicks. The version that was demoed was the iSCSI interface, and there was talk of a file system based interface in the near future.
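
For readers who have not played with iSCSI on Linux, the rough equivalent of that Windows “rescan disks” step is sketched below. This is the generic open-iscsi workflow, not TwinStrata’s actual product steps, and the portal address and target name are made up for illustration.

# discover targets exported by the iSCSI gateway (address is illustrative)
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# log in to one of the discovered targets (target IQN is illustrative)
sudo iscsiadm -m node -T iqn.2009-08.example:cloudvol0 -p 192.168.1.50 --login

# the new block device then appears like any local disk, e.g. /dev/sdb
sudo mkfs.ext3 /dev/sdb
sudo mkdir -p /mnt/cloudvol && sudo mount /dev/sdb /mnt/cloudvol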

Right now, all of these offerings are expected to be for Tier-3 storage. Over time, there is a belief that T2 and T1 will also use this kind of infrastructure.

Very cool stuff! If you are in the Boston area and are interested in the Cloud paradigm, definitely check out the next event on Sept 23rd.

Pizza and refreshments were provided by Intuit. If you haven’t noticed, the folks from Intuit are doing a magnificent job fostering these kinds of events all over the Boston Area. I have attended several excellent events that they have sponsored. A great big “Thank You” to them!

Finally, a big “Thank You” to Tsahy and Aprigo for arranging this meetup and offering their premises for the meetings.

Are you getting the whole picture?

The human field of vision, the shortcomings of simple cameras, and how to take breathtaking pictures with a simple point-and-shoot camera.

While taking pictures, the field of vision is something that is often overlooked. A normal point-and-shoot camera has a field of vision of about 40°x35°, but the human eyes provide you with a field of vision of almost 200°x130°. Very often, you come upon a sight that is breathtaking and you whip out your camera and shoot some pictures. When you get back home and look at the pictures on a PC monitor, they don’t look quite the same.

To get some idea of what a short focal length lens (wide field of vision) can do for you, take a look at this picture on Ken Rockwell’s web page. The image that I would like you to look at is here. This awesome image is copyrighted by KenRockwell.com. If you are a photo buff, you should bookmark kenrockwell.com and subscribe to the RSS feed. I find it absolutely invaluable.

I don’t have this kind of amazing 13mm lens but a panoramic image using stitching can produce a similar field of view.

Panoramic image of a rainbow

Panoramic images are a very cost effective way to get pictures with a very wide field of vision. If you are interested in all the science and technology behind the process of converting multiple segments of an image into a single panoramic image, you can refer to the FAQ at AutoStitch. There is an interesting paper on how all this works that you can read here and there is an informative presentation that goes with that paper.

Panoramic images are also better than short focal length lenses because there is less distortion towards the edges. Notice that the houses at the right and left edge of the first image above appear to be leaning. With panoramic stitching these effects can be eliminated.

Some quick tips if you plan to take a panoramic picture.

  1. Set the camera to manual exposure mode to reduce the corrections that need to be done in software.
  2. Use a tripod and make sure that you get a complete coverage of the area that you want to stitch.
  3. Make sure that you overlap images by about a third. I usually turn on the visible grid in the view finder to help with this.
  4. Take lots of pictures; there is nothing to beat practice.

In my previous post, some panoramic sunsets were shown. I took several sets of pictures, such as the five below. These were then stitched together using a piece of software called AutoStitch. You can get a copy of AutoStitch at http://www.autostitch.net/

image-1 image-2 image-3 image-4 image-5

Enjoy!

Sunset in Rye, New Hampshire

I got two interesting pictures of the sunset in Rye, NH today. The two pictures were each made by tiling 5 images using autostitch.

The two pictures are five minutes apart and the colors changed quite interestingly in that 5 minute interval.

And about five minutes later

I have reduced the size on these images to 70% so that they show up ok on the blog page. Larger images are on flickr (use the left panel).

Look Ma! NoSQL!

More musings on NoSQL and a blog I read “NoSQL: If Only It Was That Easy”

There has definitely been more chatter about NoSQL in the Boston area lately. I hear there is a group forming around NoSQL (I will post more details when I get them). There were some NoSQL folks at the recent Cloud Camp, which I was not able to attend (damn!).

My views on NoSQL are unchanged from an earlier post on the subject. I think there are some genuine issues about database scaling that are being addressed through a variety of avenues (packages, tools, …). But, in the end, the reason that SQL has survived for so long is because it is a descriptive language that is reasonably portable. That is also the reason why, in the data warehousing space, you have each vendor going off and doing some non-SQL extension in a totally non-portable way. And they are all going to, IMHO, have to reconcile their differences before the features get wide mainstream adoption.

This morning I read a well researched blog post by BJ Clark by way of Hacker News. (If you don’t use HN, you should definitely give it a try).

I strongly recommend that if you are interested in NoSQL, you read the conclusion section carefully. I have annotated the conclusion section below.

“NoSQL is a great tool, but it’s certainly not going to be your competitive edge, it’s not going to make your app hot, and most of all, your users won’t give a shit about any of this.

What am I going to build my next app on? Probably Postgres.

Will I use NoSQL? Maybe. [I would not, but that may just be my bias]

I might keep everything in flat files. [Yikes! If I had to do this, I’d consider something like MySQL CSV first]


If I need reporting, I won’t be using any NoSQL.

If I need ACIDity, I won’t use NoSQL.

If I need transactions, I’ll use Postgres.

…”

NoSQL is a great stepping stone, and what comes next will be really exciting technology. If what we need is a database that scales, let’s go make ourselves a database that scales. Base it on MySQL, PostgreSQL, … but please make it SQL based. Extend SQL if you have to. I really do like being able to coexist with the rich ecosystem of visualization tools, reporting tools, dashboards, … you get the picture.


On over-engineered solutions

The human desire to over-engineer solutions

On a recent trip, I had way too much time on my hands and was ambling around the usual overpriced shops in the airport terminal. There I saw a 40th anniversary “special edition” of the Space Pen. For $799.99 (plus taxes) you could get one of these pens, and some other memorabilia, to commemorate the 40th anniversary of the first lunar landing. If you aren’t familiar with the Space Pen, you can learn more at the web site of the Fisher Space Pen Co.

Fisher Space Pen

For $800 you could get AG7-40LE – Astronaut Space Pen 40th Year Moon Landing Celebration Commemorative Pen & Box.

Part of this pen actually circled the moon!

Salient features of this revolutionary device:

  • it writes upside down
  • it writes in any language (so the Russians bought some of these :))
  • it draws pictures
  • it writes in zero gravity
  • it writes under water
  • it writes on greasy surfaces
  • it fixes broken switches on lunar modules

If you want to know about the history of this device, you can also look at NASA’s web page here. On the web page of the Fisher Space Pen Co, you can also see the promotion of the  AG7-40 – Astronaut Space Pen 40th Year Moon Landing Celebration Engraving.

While in the airport, I saw a couple of security officers rolling around on the Segway Personal Transporter. Did you know that for approximately $10,000 you could get yourself a Segway PT i2 Ferrari Limited Edition? I have no idea how much the airport paid for them but the Segway i2 is available on Amazon for about $6,000. It did strike me as silly, till I noticed three officers (a lot more athletic) riding through the terminal on Trek bicycles. That seemed a lot more reasonable. I have a bicycle like that, and it costs maybe 10% of a Segway.

I got thinking about the rampant over-engineering that was all around me, and happened upon this web page, when I did a search for Segway!

How to render the Segway Human Transporter Obsolete by Maddox.

Who would have thought of this, just add a third wheel and you could have a vehicle just as revolutionary? I thought about it some more and figured that the third wheel in Maddox’s picture is probably not the best choice; maybe it should be a wheel more like the track-ball of a mouse. That would have no resistance to turning and the contraption could do an effortless zero radius turn if required. The ball could be spring loaded and the whole thing could be on an arm that had some form of shock absorbing mechanism.

And, if we were to have done that, we would not have seen this. Bush Fails Segway Test! As an interesting aside, did you know that the high tech heart stent used for people who have bypass surgery was also invented by Dean Kamen?

We have a million ways to solve most problems. Why then do we over-engineer solutions to the point where the solution introduces more problems?

Keep it simple! There are fewer things that can break later.

Oh, and about that broken switch in the lunar module. Buzz Aldrin has killed that wonderful urban legend by saying he used a felt tip pen.

Buzz Aldrin still has the felt-tipped pen he used as a makeshift switch needed to fire up the engines that lifted him and fellow Apollo 11 astronaut Neil Armstrong off the moon and started their safe return to Earth nearly 40 years ago.

“The pen and the circuit breaker (switch) are in my possession because we do get a few memorabilia to kind of symbolize things that happened,” Aldrin told reporters Friday.

If Buzz Aldrin used a felt tipped pen, why do we need a Space Pen? And what exactly are we celebrating by buying an $800 pen that can’t even fix a broken switch? A pencil can write upside down (and also write in any language :)). Why do we need Segway Human Transporters in an airport when most security officers should be able to walk or ride a bicycle? Why do we build complex software products and make them bloated, unusable, incomprehensible and expensive?

That’s simple: we’re paying homage to our overpowering desire to over-engineer solutions.


Xtremedata’s latest announcement

A short summary of the dbX announcement by XtremeData and of ISAs in data warehouse appliances.

XtremeData has announced its entry into the data warehouse appliance space. Called the dbX, this product is an interesting twist on the use of FPGAs (Field Programmable Gate Arrays). Unlike the other data warehouse appliance companies, XtremeData is not a data warehouse appliance startup. It is a six-year-old company with an established product line and a patent for the In-Socket Accelerator that is part of the dbX.

What is ISA?

The core technology that XtremeData makes is a device called an In-Socket Accelerator (ISA). Occupying a CPU or co-processor socket on a motherboard, this plug-compatible device emulates a CPU using an FPGA. But the ISA goes beyond the functionality provided by the CPU that it replaces.

XtremeData’s patent (US Patent 7,454,550, “Systems and methods for providing co-processors to computing systems”, issued in 2008) includes the following description:

It is sometimes desirable to provide additional functionality/capability or performance to a computer system and thus, motherboards are provided with means for receiving additional devices, typically by way of “expansion slots.” Devices added to the motherboard by these expansion slots communicate via a standard bus. The expansion slots and bus are designed to receive and provide data transport of a wide array of devices, but have well-known design limitations.

One type of enhancement of a computer system involves the addition of a co-processor. The challenges of using a co-processor with an existing computer system include the provision of physical space to add the co-processor, providing power to the co-processor, providing memory for the co-processor, dissipating the additional heat generated by the co-processor and providing a high-speed pipe for information to and from the co-processor.

Without replacing the socket, which would require replacing the motherboard, the CPU cannot be presently changed to one for which the socket was not designed, which might be desirable in providing features, functionality, performance and capabilities for which the system was not originally designed.

and

FPGA accelerators and the like, including counterparts therefore, e.g., application-specific integrated circuits (ASIC), are well known in the high-performance computing field. Nallatech, (see http://www.nallatech.com) is an example of several vendors that offer FPGA accelerator boards that can be plugged into standard computer systems. These boards are built to conform to industry standard I/O (Input/Output) interfaces for plug-in boards. Examples of such industry standards include: PCI (Peripheral Component Interconnect) and its derivatives such as PCI-X and PCI-Express. Some computer system vendors, for example, Cray, Inc., (see http://www.cray.com) offer built-in FPGA co-processors interfaced via proprietary interfaces to the CPU.

What else does Xtremedata do?

Their core product is the In-Socket Accelerator, with applications in Financial Services, Medical, Military, Broadcast, High Performance Computing and Wireless (source: company web page). The dbX is their latest venture, integrating the ISA into a standard HP server and providing “SQL on a chip”.

How does this compare with Netezza?

The currently shipping systems from Netezza also include an FPGA but, as described in their patent (US patent 709231000, “Field oriented pipeline architecture for a programmable data streaming processor”), place it between a disk drive and a processor (as a programmable data streaming processor). In the interest of full disclosure, I must also point out that I worked at Netezza and on the product described here. All descriptions in this blog are strictly based on publicly available information, and references are provided.

XtremeData, on the other hand, uses the FPGA in an ISA and emulates the processor.

So, the two are very different architectures, and accomplish very different things.

Other fun things that can be done with this technology 🙂

Similar technology was used to make a PDP-10 processor emulator. You can read more about that project here. Using similar technology and a Xilinx FPGA, the folks described in this project were able to make a completely functional PDP-10 processor.

Or, if you want a new Commodore 64 processor, you can read more about that project here.

Why are these links relevant? ISA’s and processor emulators seem like black magic. After all, Intel and AMD spend tons of money and their engineers spend a huge amount of time and resource to design and build CPU’s. It may seem outlandish to claim that one can reimplement a CPU using some device that one analyst called “Field Programmable Gatorade”. These two projects give you an idea of what people do with this Gatorade thing to make a processor.

And if you think I’m making up the part about Gatorade, look here and here (search for Gatorade, it is a long article).

About Xtremedata

XtremeData, Inc. creates hardware accelerated Database Analytics Appliances and is the inventor and leader in FPGA-based In-Socket Accelerators (ISA). The company offers many different appliances and FPGA-based ISA solutions for markets such as Decision Support Systems, Financial Analytics, Video Transcoding, Life Sciences, Military, and Wireless. Founded in 2003, XtremeData has established two centers: headquarters are located in Schaumburg, IL (near Chicago, Illinois, USA) and a software development location in Bangalore, India.

XtremeData, Inc. is a privately held company.

source: company web page

My Opinion

This is a technology that I have been watching for some time now. It will be interesting to see how this approach compares (price, performance, completeness) with the other vendors who are already in the field. The adoption of industry standard hardware (HP), XtremeData’s status as a certified vendor for HP (qualified by HP’s accelerator program), and the fact that the dbX isn’t the only thing they do with these accelerators all promise to make this an interesting development in the market.

Wondering about “Shared Nothing”

Is “Shared Nothing” the best architecture for a database? Is Stonebraker’s assertion of 1986 still valid? Were Norman et al., correct in 1996 when they made a case for a hybrid system? Have recent developments changed the dynamics?

Over the past several months, I have been wondering about the Shared Nothing architecture that seems to be very common in parallel and massively parallel systems (specifically for databases) these days. With the advent of technologies like cheap servers and fast interconnects, a considerable amount of literature points to an apparent “consensus” that Shared Nothing is the preferred architecture for parallel databases. One such example is Stonebraker’s 1986 paper (Michael Stonebraker. The case for shared nothing. Database Engineering, 9:4–9, 1986). A reference to this apparent “consensus” can be found in a paper by DeWitt and Gray (DeWitt and Gray. Parallel database systems: the future of high performance database systems. 1992).

A consensus on parallel and distributed database system architecture has emerged. This architecture is based on a shared-nothing hardware design [STON86] in which processors communicate with one another only by sending messages via an interconnection network.

But two decades or so later, is this still the case, is “Shared Nothing” really the way to go? Are there other, better options that one should consider?

As long ago as 1996, Norman, Zurek and Thanisch (Norman, Zurek and Thanisch. Much ado about shared nothing 1996) made a compelling argument for hybrid systems. But even that was over a decade ago. And the last decade has seen some rather interesting changes. Is the argument proposed in 1996 by Norman et al., still valid? (Updated 2009-08-02: A related article in Computergram in 1995 can be read at http://www.cbronline.com/news/shared_nothing_parallel_database_architecture)

With the advent of clouds and virtualization, doesn’t one need to seriously reconsider the shared nothing architecture? Is a shared disk architecture in a cloud even a worthwhile conversation to have? I was reading the other day about shared nothing cluster storage. What? That seems like a contradiction, doesn’t it?

Some interesting questions to ponder:

  1. In the current state of technology, are the characterizations “Shared Everything”, “Shared Memory”, “Shared Disk” and “Shared Nothing” still sufficient? Do we need additional categories?
  2. Is Shared Nothing the best way to go (As advocated by Stonebraker in 1986) or is a hybrid system the best way to go (as advocated by Norman et al., in 1996) for a high performance database?
  3. What one or two significant technology changes could cause a significant impact to the proposed solution?

I’ve been pondering this question for some time now and can’t quite seem to decide which way to lean. I’m convinced that the answer to #1 is that we need additional categories, based on advances in virtualization. But I am uncertain about the choice between a hybrid system and a Shared Nothing system. The inevitable advancement of virtualization, clouds and related technologies seems to indicate that Shared Nothing is the way to go. But Norman and others make a compelling case.

What do you think?

Michael Stonebraker. The case for shared nothing. Database Engineering, 9:4–9, 1986

Wow, the TPC-H speculation continues!

The real technology behind the ParAccel TPC-H results is revealed here!

It is definitely interesting to see that the ParAccel TPC-H result saga is not yet done. Posts by Daniel Abadi (interesting analysis but it seems simplistic at first blush) and reflections on Curt Monash’s blog are proving to be amusing, to say the least.

At the end of the day, whether ParAccel is a “true column store” (or a “vertically partitioned row store”), and whether it has too much disk capacity and disk bandwidth strike me as somewhat academic arguments that don’t recognize some basic facts.

  • ParAccel’s system (bloated, oversized, undersized, truly columnar, OR NOT) is only the second system to post 30TB results.
  • That system costs less than half of the price of the other set of published results. Since I said “basic facts”, I will point out that the other entry may provide higher resiliency, and that may be a part of the higher price. I have not researched whether the additional price is justified or related to the higher resiliency; I simply want to make the point that there is a difference in the stated resiliency of the two solutions that is not apparent from the performance claims.

It might just be my sheltered upbringing but:

  • I know of few customers who purchase cars solely for the manufacturer’s advertised MPG and, similarly, I know of no one who purchases a data warehouse from a specific vendor because of the published TPC-H results.
  • I know of few customers who will make a purchase decision based on whether a car has an inline engine, a V-engine or a rotary engine and, similarly, I know of no one who makes a buying decision on a data warehouse based on whether the technology is “truly columnar”, a “vertically partitioned row store”, a “row store with all indexes” or some other esoteric collection of interesting sounding words.

So, I ask you, why are so many people so stewed about ParAccel’s TPC-H numbers? In a few more days, will we have more posts about ParAccel’s TPC-H numbers than we will about whether Michael Jackson really died or whether this is all part of a “comeback tour”?

I can’t answer either of the questions above for sure but what I  do know is this, ParAccel is getting some great publicity out of this whole thing.

And, I have it on good authority (it came to me in a dream last night) that ParAccel’s solution is based on high-performance trilithium crystals. (Note: I don’t know why this wasn’t disclosed in the full disclosure report). I hear that they chose 43 nodes because someone misremembered the “universal answer” from The Hitchhikers Guide to the Galaxy. By the time someone realized this, it was too late because the data load had begun. Remember you read it here first 🙂

Give it a rest folks!

P.S. Within minutes of posting this a well known heckler sent me email with the following explanation that confirms my hypothesis.

When a beam of matter and antimatter collide in dilithium we get a plasma field that powers warp drives within the “Sun” workstations. The warp drives that ParAccel uses are the Q-Warp variant which allow queries to run faster than the speed of light. A patent has been filed for this technique, don’t mention it in your blog please.

Not so fast, maybe relational databases aren’t dead!

Maybe the obituary announcing the demise of the relational database was premature!

Much has been written recently about the demise (or in some cases, the impending demise) of the relational database. “Relational databases are dead,” writes Savio Rodrigues on July 2nd. I guess I missed the announcement and the funeral in the flood of emails and twitters about another high profile demise.

Some days ago, Michael Stonebraker authored an article with the title, “The End of a DBMS Era (Might be Upon Us)”. In September 2007 he made a similar argument in this article, and also in this 2005 paper with Uğur Çetintemel.

What Michael says here is absolutely true. And, in reality, Savio’s article just has a catchy title (and it worked). The body of the article makes a valid argument that there are some situations where the current “one size fits all” relational database offering that was born in the OLTP days may not be adequate for all data management problems.

So, let’s be perfectly clear about this; the issue isn’t that relational databases are dead. It is that a variety of use cases are pushing the current relational database offerings to their limits.

I must emphasize that I consider relational databases (RDBMS’s) to be those systems that use a relational model (a definition consistent with http://en.wikipedia.org/wiki/Relational_database). As a result, columnar (or vertical) representations, row (or horizontal) representations, systems with hardware acceleration (FPGA’s, …) are all relational databases. There is arguably some confusion in terminology in the rest of this post, especially where I quote others who tend to use the term “Relational Database” more narrowly, so as to create a perception of differentiation between their product (columnar, analytic, …) and the conventional row oriented database which they refer to as an RDBMS.

Tony Bain begins his three part series about the problem with relational databases with an introduction where he says

“The specialist solutions have be slowly cropping up over the last 5 years and now today it wouldn’t be that unusual for an organization to choose a specialist data analytics database platform (such as those offered from Netezza, Greenplum, Vertica, Aster Data or Kickfire) over a generic database platform offered by IBM, Microsoft, Oracle or Sun for housing data for high end analytics.”

While I have some issues with his characterization of “specialist analytic database platforms” as something other than a Relational Database, I assume that he is using the term RDBMS to refer to the commonly available (general purpose) databases that are most often seen in OLTP environments.

I believe that whether you refer to a column oriented architecture (with or without compression), an architecture that uses hardware acceleration (Kickfire, Netezza, …) or a materialized view, you are attempting to address the same underlying issue; I/O is costly and performance is significantly improved when you reduce the I/O cost. Columnar representations significantly reduce I/O cost by not performing DMA on unnecessary columns of data. FPGA’s in Netezza serve a similar purpose; (among other things) they perform projections thereby reducing the amount of data that is DMA’ed. A materialized view with only the required columns (narrow table, thin table) serves the same purpose. In a similar manner (but for different reasons), indexes improve performance by quickly identifying the tuples that need to be DMA’ed.

Notice that all of these solutions fundamentally address one aspect of the problem: how to reduce the cost of I/O. The challenges that are facing databases these days are somewhat different. In addition to the huge amounts of data that are being amassed (see the Richard Winter article on the subject), there is a much broader variety of things being demanded of the repository of that information. For example, there is the “Search” model that has been discussed in a variety of contexts (web, peptide/nucleotide), and there are the stream processing and data warehousing cases that have also received a fair amount of discussion.

Unlike the problem of I/O cost, many of these problems reflect issues with the fundamental structure and semantics of the SQL language. Some of these issues can be addressed with language extensions, User Defined Functions, MapReduce extensions and the like. But none of these address the underlying issue that the language and semantics were defined for a class of problems that we today come to classify as the “OLTP use case”.

Relational databases are not dead; on the contrary with the huge amounts of information that are being handled, they are more alive than ever before. The SQL language is not dead but it is in need of some improvements. That’s not something new; we’ve seen those in ’92, ’99, … But, more importantly the reason why the Relational Database and SQL have survived this long is because it is widely used and portable. By being an extensible and descriptive language, it has managed to adapt to many of the new requirements that were placed on it.

And if the current problems are significant, two more problems are just around the corner and waiting to rear their ugly heads. The first is the widespread adoption of virtualization and the abstraction of computing resources. In addition to making it much harder to adopt solutions with custom hardware (which cannot be virtualized), it introduces a level of unpredictability in I/O bandwidth, latency and performance. Right along with this, users are going to want the database to live on the cloud. With that will come all the requirements of scalability, ease of use and deployment that one associates with a cloud based offering (not just the deployment model). The second is the fact that users will expect one “solution” to meet a wide variety of demands, from the current OLTP and reporting through the real time alerting that today’s “Google/Facebook/Twitter Generation” has come to demand (look-ma-no-silos).

These problems are going to drive a round of innovation, and the NoSQL trend is a good and healthy trend. In the same description of all the NoSQL and analytics alternatives, one should also mention the various vendors who are working on CEP solutions. As a result of all of these efforts, Relational Databases as we know them today (general purpose OLTP optimized, small data volume systems) will evolve into systems capable of managing huge volumes of data in a distributed/cloud/virtualized environment and capable of meeting a broad variety of consumer demands.

The current architectures that we know of (shared disk, shared nothing, shared memory) will need to be reconsidered in a virtualized environment. The architectures of our current databases will also need some changes to address the wide variety of consumer demands. Current optimization techniques will need to be adapted and the underlying data representations will have to change. But, in the end, I believe that the thing that will decide the success or failure of a technology in this area is the extent of compatibility and integration with the existing SQL language. If the system has a whole new set of semantics and is fundamentally incompatible with SQL I believe that adoption will slow. A system that extends SQL and meets these new requirements will do much better.

Relational Databases aren’t dead; the model of “one-size-fits-all” is certainly on shaky ground! There is a convergence between the virtualization/cloud paradigms, the cost and convenience advantages of managing large infrastructures in that model and the business need for large databases.

Fasten your seat-belts because the ride will be rough. But, it is a great time to be in the big-data-management field!


Highland Light, Provincetown, MA

If you have ever been to the Highland Light Lighthouse at Provincetown, you will likely recognize the picture below. The image is panoramic and the high resolution image is 15MB and about 13k pixels wide. The image appears cropped here, you can see the complete image at flickr (link on the left).

Highland Light, Provincetown, MA

The image is a composite of 10 discrete images that were stitched using AutoStitch. The software is available for free evaluation at this site.

For more information about Highland Light you can visit their web page at http://www.lighthouse.cc/highland/

IE+Microsoft ROCKS. Ubuntu+Firefox BLOWS

I’m running Ubuntu 9.04 and it appears that upgrading to Firefox 3.5 requires a PhD. Could I borrow yours?

Kevin Purdy has the following “One line install”

That’s not a true install; he just unpacks the tar-ball into the current directory.
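
If one is going to go the tarball route anyway, a slightly less throwaway version might look like the sketch below. This assumes the Firefox 3.5 tarball has already been downloaded from mozilla.com; it still bypasses the package manager, which is a big part of my complaint.

# unpack to /opt instead of the current directory and put it on the PATH
sudo tar -xjf firefox-3.5.tar.bz2 -C /opt
sudo ln -sf /opt/firefox/firefox /usr/local/bin/firefox-3.5
firefox-3.5 &
# no apt integration means no automatic security updates for this copy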

Web Upd8 has the following steps

But, now I get daily updates?

What’s with this garbage? Complain as much as you want but Microsoft Software Updates will give you the latest Internet Explorer easily.

Could someone please make this easy?

It’s just numbers people!

I hadn’t read Justin Swanhart’s post (Why is everybody so steamed about a benchmark anyway?) from June 24th till just a short while ago. You can see it here. Justin is right, it’s just numbers.

He did remind me of a statement from a short while ago by another well known philosopher.

It's a budget

I could not embed the script provided by quotesdaddy.com into this post. I have used their material in the past, they have a great selection of statements that will live on in history. Check them out at http://www.quotesdaddy.com/ 🙂

Head for the hills, here comes more jargon: Anti-Databases and NoSQL

We all know what SQL is, get ready to learn about NoSQL. There was a meet-up last month in San Francisco to discuss it and some details are available at this link. The call for the “inaugural get-together of the burgeoning NoSQL community” drew 150 people and if you are so inclined, you can subscribe to the email list here.

I’m no expert at NoSQL but the major gripe seems to be schema restriction and what Jon Travis calls “twisting your object” data to fit an RDBMS.

NoSQL appears to be a newly coined collective noun for a lot of technologies you have heard about before. They include Hadoop, BigTable, HBase, Hypertable, and CouchDB, and maybe some that you have not heard of before, like Voldemort, Cassandra, and MongoDB.

But, then there is this thing called NoSQL. The first line on that web page reads

NoSQL is a fast, portable, relational database management system without arbitrary limits, (other than memory and processor speed) that runs under, and interacts with, the UNIX 1 Operating System.

Now, I’m confused. Is this the NoSQL that was referenced in the non-conference at non-san-francisco last non-month?

My Point of View

From what I’ve seen so far, NoSQL seems to be a case of disruption at the “low end”. By “low end”, I refer to circumstances where the data model is simple and in those cases I can see the point that SQL imposes a very severe overhead. The same can be said in the case of a model that is relatively simple but refers to “objects” that don’t relate to a relational model (like a document). But, while this appears to be “low end” disruption right now, there is a real reason to believe that over time, NoSQL will grow into more complex models.

But the big benefit of SQL is that it is a declarative programming language, which gives implementations the flexibility of arranging the program flow in a manner that matches the physical representation of the data. Witness the fact that the same basic SQL language has produced a myriad of representations and architectures, ranging from shared nothing to shared disk, SMP to MPP, row oriented to column oriented, with cost based and rules based optimizers, etc.

There is (thus far) a concern about the relative complexity of the NoSQL offerings when compared with the currently available SQL offerings. I’m sure that folks are giving this considerable attention and what comes next may be a very interesting piece of technology.



More on TPC-H comparisons

Three charts showing comparisons of TPC-H benchmark data.

Just a quick post to upload three charts that help visualize the numbers that Curt and I have been referring to in our posts (Curt’s original post and my earlier response on this subject).

The first chart shows the disk to data ratio that was mentioned. Note that the X-axis, showing the TPC-H scale factor, is a logarithmic scale. The benchmark information shows that the ParAccel solution has in excess of 900TB of storage for the 30TB benchmark; the ratio is therefore in excess of 30:1.

The second chart shows the memory to data ratio. Note that both the X and Y axes are logarithmic scales. The benchmark information shows that the ParAccel solution has 43 nodes and approximately 2.7TB of RAM; the ratio is therefore approximately 1:11 (or 9%).

The third chart shows the load time (in hours) for various recorded results. The ParAccel results indicate a load time of 3.48 hours. Note again that the X-Axis is a logarithmic scale.

For easy reading, I have labeled the ParAccel 30TB value on the chart. I have to admit, I don’t understand Curt’s point, and maybe others share this bewilderment. I think I’ve captured the numbers correctly; could someone help verify these please?

If the images above are shown as thumbnails, you may not be able to see the point I’m trying to make. You need to see the bigger images to see the pattern.

Revision

In response to an email, I looked at the data again and got the following RANK() information. Of the 151 results available today, the ParAccel 30TB numbers are 58th in Memory to Data and 115th in Disk to Data. It is meaningless to compare load time ranks without factoring in the scale and I’m not going to bother with that as the sample size at SF=30,000 is too small.

If you are willing to volunteer some of your time to review the spreadsheet with all of this information, I am happy to send you a copy. Just ask!

Learning from Joost

A lot of interesting articles have emerged in the wake of the recent happenings at Joost.

One post that caught my attention and deserves reading, and re-reading, and then reading one more time just to be sure is Ed Sim’s post, where he writes

raising too much money can be a curse and not a blessing

A lot to learn from in all of these articles.

Boston Cloud Services- June Meetup.

Boston Cloud Services- June Meetup.

Tsahy set up a meetup group for Cloud Services at http://www.meetup.com/Boston-cloud-services/. The first meeting is today; check out the meeting link at

Boston Cloud Services- June Meetup.

Location

460 Totten Pond rd
suite 660
Waltham, MA 02451

All,
We have a great agenda for this 1st Boston cloud services meetup! Broadcasting live on http://www.stickam.co…

1. Tsahy Shapsa – 15 minutes – a case study of an early stage start-up and what it’s like to build a new business nowadays, with all this cloud stuff going around: covering where we’re using cloud/SaaS to run our business, operations, IT etc., where we’re not and why, challenges that we faced / are facing etc. We can have an open discussion on the good, bad & ugly and I wouldn’t mind taking a few tips from the audience…

2. John Bennett – 30 minutes – will give a talk on separating fact from fiction in the cloud computing market. John is the former marketing director of a cloud integration vendor (SnapLogic), and has been watching this market closely for a couple of years now.
Blog: http://bestrategic.blogspot.com.
bio here: http://www.bennettstr…

3. Mark E. Hodapp – 30 minutes – ‘Competing against Amazon’s EC2’
Mark was Director of R&D / CTO at Sun Microsystems, where he led a team of 20 engineers working on an advanced research effort, Project Caroline, a horizontally scalable platform for the development and deployment of Internet services.

Risks of mind controlled vehicles

Some risks of mind controlled vehicles are described.

It is now official, Toyota has announced that they are developing a mind controlled wheel chair. Unless Toyota is planning to take the world by storm by putting large numbers of motorized wheelchairs on the roads, I suspect this technology will soon make its way into their other products, maybe cars?

Of course, this will have some risks

  1. Cars may inherit their drivers’ personalities (ouch, there are some people who shouldn’t be allowed to drive)
  2. Cars with hungry drivers will make unpredictable turns into drive through lanes
  3. Car could inadvertently eject annoying back seat drivers
  4. Mortality rates for cheating husbands may go up (see http://www.thebostonchannel.com/news/1965115/detail.html, http://www.click2houston.com/news/1576669/detail.html, http://www.freerepublic.com/focus/news/840828/posts and a thousand other such posts)
  5. It would be really hard to administer a driving test! Would the ability to think be a requirement for drivers?

Can you think of other risks? I didn’t think I would come to this conclusion but I kind of like the way things are right now where my car has a mind of its own.

Is TPC-H really a blight upon the industry?

A recap of some posts (Curt Monash, Merv Adrian) about the ParAccel TPC-H 30 TB benchmark numbers.

On June 22, Curt Monash posted an interesting entry on his blog about TPC-H in the wake of an announcement by ParAccel. On the same day, Merv Adrian posted another take on the same subject on his blog.

Let me begin with a couple of disclaimers.

First, I am currently employed by Dataupia and I was employed at Netezza in the past. I am not affiliated with ParAccel in any way, nor with Sun, nor the TPC committee, nor the Toyota Motor Corporation, nor the EPA, nor any other entity related in any way to this discussion. And if you are curious about my affiliations with any other body, just ask.

Second, this blog is my own and does not represent or intend to represent the point of view of my employer, former employer(s), or any other person or entity than myself. Any resemblance to the opinions or points of view of anyone other than myself are entirely coincidental.

As with any other benchmark, TPC-H only serves to illustrate how well or poorly a system was able to process a specified workload. If you happen to run a data warehouse that tracks parts, orders, suppliers, and lineitems in orders across the 25 nations and 5 regions of the TPC-H specification, your data warehouse may look something like the one specified in the benchmark. And if your business problems are similar to the twenty-two queries that are presented in the specification, you can leverage hundreds of person-hours of free tuning advice given to you by the makers of most major databases and hardware.

In that regard, I feel that excellent performance on a published TPC-H benchmark does not guarantee that the same configuration would work well in my data warehouse environment.

But, if I understand correctly, the crux of the argument that Curt makes is that the benchmark configurations are bloated (and he cites the following examples)

  • 43 nodes to run the benchmark at SF 30,000
  • each node has 64 GB of RAM (total of over 2.5TB of RAM)
  • each node has 24 TB of disk (total of over 900TB of disk)

which leads to a “RAM:DATA ratio” of approximately 1:11 and a “DISK:DATA ratio” of approximately 32:1.
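
A quick back-of-the-envelope check of those ratios from the node counts above (raw math only; the usable capacities in the full disclosure report are somewhat lower, which is presumably where the “over 900TB” and “approximately 32:1” figures come from):

# 43 nodes x 64 GB of RAM each, against 30 TB (30,000 GB) of data
echo $(( 43 * 64 ))   # 2752 GB, roughly 2.7 TB, i.e. a RAM:DATA ratio of about 1:11

# 43 nodes x 24 TB of raw disk each, against 30 TB of data
echo $(( 43 * 24 ))   # 1032 TB raw, i.e. a DISK:DATA ratio on the order of 30:1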

Let’s look at the DISK:DATA ratio first

What no one seems to have pointed out (and I apologize if I didn’t catch it in the ocean of responses) is that this 32:1 DISK:DATA ratio is the ratio between total disk capacity and data and therefore includes overheads.

First, whether it is in a benchmark context or a real life situation, one expects data protection in one form or another. The benchmark report indicates that the systems used RAID 0 and RAID 1 for various components. So, at the very least, the number should be approximately 16:1. In addition, the same disk space is also used for the Operating System, Operating System swap, and temporary table space. Therefore, I don’t know whether it is reasonable to assume that, even with good compression, a system would achieve a 1:1 ratio between data and disk space, but I would like to know more about this.

“By way of contrast, real-life analytic DBMS with good compression often have disk/data ratios of well under 1:1.”

Leaving the issue of DISK:DATA ratio aside, one thing that most performance tuning looks at is the number of “spindles”. And, having a large number of spindles is a good thing for performance whether it is in a benchmark or in real life. Given current disk drive prices, it is reasonable to assume that a pre-configured server comes with 500GB drives, as is the case with the Sun system that was used in the ParAccel benchmark. If I were to purchase a server today, I would expect either 500GB drives or 1TB drives. If it were necessary to have a lower DISK:DATA ratio and reducing that ratio had some value in real life, maybe the benchmark could have been conducted with smaller disk drives.

Reading section 5.2 of the Full Disclosure Report, it is clear that the benchmark did not use all 900 or so terabytes of storage. If I understand the math in that section correctly, the benchmark is using the equivalent of 24 drives and 50GB per drive on each node for data. That is a total of approximately 52TB of storage set aside for the database data. That’s pretty respectable! Richard Gostanian, in his post to Curt’s blog (June 24th, 2009 7:34 am), indicates that they only needed about 20TB. I can’t reconcile the math, but we’re at least in the right ball-park.

And as for the RAM:DATA ratio, the ratio is 1:11. I find it hard to understand how the benchmark could have run entirely from RAM as conjectured by Curt.

“And so I conjecture that ParAccel’s latest TPC-H benchmark ran (almost) entirely in RAM as well.”

In my experience, sizing a system involves more than just the physical disk capacity. One should also consider things like concurrency, query complexity, and expected response times. I’ve been analyzing TPC-H numbers (for an unrelated exercise) and will post more information from that analysis over the next couple of weeks.

On the whole, I think TPC-H performance numbers (QphH, $/QphH) are about as predictive of system performance in a specific data warehouse implementation as EPA ratings are of the actual mileage one sees in practice. If available, they may serve as one factor a buyer considers in a buying decision. In addition to reviewing a car’s mileage ratings, I’ll also take a test drive, speak to someone who drives the same car, and, if possible, rent the same make and model for a weekend to make sure I like it. I wouldn’t rely on the EPA ratings alone, so why assume that a person purchasing a data warehouse would rely solely on TPC-H performance numbers?

As an aside, does anyone want to buy a 2000 Toyota Sienna Mini Van? It is white in color and gave 22.4 mpg over the last 2000 or so miles.

No cloud in sight!

The conventional wisdom at the beginning of ’09 was that the economic downturn would catapult cloud adoption, but that hasn’t quite happened. This post explores the trends, some possible reasons for the slow adoption, and what the future may hold.

A lot has been written in the past few days about Cloud Computing adoption based on a survey by ITIC (http://www.itic-corp.com/). At the time of this writing, I haven’t been able to locate a copy of this report or a link with more details online but most articles referencing this survey quote Laura DiDio as saying,

“An overwhelming 85% majority of corporate customers will not implement a private or public cloud computing infrastructure in 2009 because of fears that cloud providers may not be able to adequately secure sensitive corporate data”.

In another part of the country, structure09 had a lot of discussion about Cloud Computing. Moderating a panel of VCs, Paul Kedrosky asked for a show of hands of VCs who run their businesses on the cloud. To quote Liz Gannes,

“Let’s just say the hands did not go flying up”.

Elsewhere, a GigaOM report by George Gilbert and Juergen Urbanski concludes that leading storage vendors are planning their innovation around a three-year time frame, expecting adoption of new storage technologies to coincide with emergence from the current recession.

My point of view

In the short term, services that are already “networked” will begin to migrate into the cloud. The migration may begin at the individual and SMB end of the market rather than at the Fortune 100. Email and CRM applications will be the poster-children for this wave.

PMCrunch also lists some SMB ERP solutions that will be in this early wave of migration.

But, this wave will primarily target the provision of application services through a different delivery model (application hosted on a remote server instead of a corporate server).

It will be a while before cloud-based office applications (word processing, spreadsheets, presentations) become mainstream. The issue is not so much security as network connectivity. The cloud is useless to a person who is not on the network, and until high-bandwidth connectivity is ubiquitous and available at a reasonable cost, the cloud platform will not be able to move forward.

We are beginning to see increased adoption of broadband WiFi and cellular data in the US, but the costs are still too high and the service is still inadequate. Just ask anyone who has tried to get online at one of the many airports and hotels in the US.

Gartner highlights five key attributes of Cloud Computing.

  1. Uses Internet Technologies
  2. Service Based
  3. Metered by Use
  4. Shared
  5. Scalable and Elastic

Note that I have re-ordered them into what I believe is the order in which cloud adoption will progress. Early adoption will come in applications that “Use Internet Technologies” and are “Service Based”; “Scalable and Elastic” will come last.

As stated above, the early adopters will deploy applications with a clearly defined and “static” set of deliverables, in areas that already require the user to have network connectivity (i.e., do no worse than today; change the application delivery model from in-house to hosted). In parallel, corporations will begin to deploy private clouds for use within their firewalls.

As high-bandwidth connectivity becomes more widely available, adoption will increase; right now, I think connectivity is the real limitation.

Data security will be built up along the way, as will best practices for things like escrow and mechanisms to migrate from one service provider to another.

Two other things that could kick cloud adoption into high gear are:

  1. the delivery of a cloud platform from a company like Akamai (why hasn’t this happened yet?)
  2. a mechanism that would allow applications to scale based on load and use the right amount of cloud resources. Applications like web servers can scale based on client demand, but this isn’t (yet) the case with other downstream services like databases or mail servers (a minimal sketch of the kind of control loop I have in mind follows this list).
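
To make the second point concrete, here is a minimal sketch of the kind of control loop I have in mind. Everything in it is hypothetical: the provider object, its list_instances/provision/decommission calls, and the utilization metric are stand-ins for whatever a real cloud platform might expose, not an existing API.

    import time

    # Hypothetical control loop that grows or shrinks a pool of database or
    # mail-server instances based on observed load. The provider object and
    # its methods are placeholders, not a real cloud API.

    TARGET_UTILIZATION = 0.70    # try to keep average utilization near 70%
    CHECK_INTERVAL_SECS = 60

    def autoscale(provider, min_instances=1, max_instances=20):
        while True:
            instances = provider.list_instances()
            avg_load = sum(i.utilization() for i in instances) / max(len(instances), 1)

            if avg_load > TARGET_UTILIZATION and len(instances) < max_instances:
                provider.provision(1)        # add capacity when the pool runs hot
            elif avg_load < TARGET_UTILIZATION / 2 and len(instances) > min_instances:
                provider.decommission(1)     # shed capacity when the pool is idle

            time.sleep(CHECK_INTERVAL_SECS)

The loop itself is the easy part; the hard part is getting a stateful service like a database to redistribute its data and connections when an instance is added or removed, which is exactly what is missing today.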

That’s my point of view, and I’d love to hear yours, especially about companies that are addressing the problem of letting a cloud user migrate from one provider to another, or mechanisms to dynamically scale services like databases and mail servers.

What’s next in tech? Boston, June 25th 2009

Recap of “What’s next in Tech”, Boston, June 25th 2009

What’s next in tech: Exploring the Growth Opportunities of 2009 and Beyond, June 25th 2009.

Scott Kirsner moderated two panels during the event; the first with three Venture Capitalists and the second with five entrepreneurs.

The first panel consisted of:

– Michael Greeley, General Partner, Flybridge Capital Partners

– Bijan Sabet, General Partner, Spark Capital

– Neil Sequeira, Managing Director, General Catalyst Partners

The second panel consisted of:

– Mike Dornbrook, COO, Harmonix Music Systems (makers of “Rock Band”)

– Helen Greiner, co-founder of iRobot Corp. and founder of The Droid Works

– Brian Halligan, social media expert and CEO of HubSpot

– Tim Healy, CEO of EnerNOC

– Ellen Rubin, Founder & VP/Product of CloudSwitch

Those who asked questions got a copy of Dan Bricklin’s book, Bricklin on Technology. There was also a DVD of some other book (I don’t know what it was) that was being given away.

A show of hands at the beginning indicated that 100% of the people were optimistic about the recovery and the future. It was a nice discussion, though after leaving the session I didn’t get the feeling that anyone addressed the question “What’s next in tech” head on. Most of the conversation was about what has already happened in tech, with a lot of discussion about Twitter and Facebook. There were a lot of questions from the audience about what the panelists’ companies were doing to help young entrepreneurs. And for an audience that was supposed to contain people interested in “what’s next”, no one wanted the DVD of the book; everyone wanted the paper copies. Hmm …

There has been a lot of interesting discussion about the topic and the event. One that caught my eye was Larry Cheng’s blog.

How many outstanding shares in a start-up?

Finding the number of outstanding shares in a start-up.

This question has come up quite often, most recently when I was having lunch with some co-workers last week.

While evaluating an offer at a start-up, have you wondered whether your 10,000 shares were a significant chunk of money or a drop in the ocean?

Have you ever wondered how many shares are outstanding in a start-up you heard about?

This is a non-issue for a public company, where the number of outstanding shares is a matter of public record. But not all start-ups want to share this information with you.

I can’t speak for every state in the union, but in MA you can get a pretty good idea by looking at the Annual Report that every organization is required to file. It won’t be current information; the best you can do is find the most recent annual filing, which means you can’t learn anything about a very new company. Still, it is information I would have liked to have had on some occasions in the past.

The link to search the Corporate Database is http://corp.sec.state.ma.us/corp/corpsearch/corpsearchinput.asp

Enter the name of the company and scroll to the bottom, where you will see a box that allows you to choose “Annual Reports”. Click that and you can read a recent Annual Report, which will have the information you need.
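
Once you have that number, the rest is trivial arithmetic. The figures below are made up purely for illustration; plug in your own offer and the share count from the Annual Report.

    # Hypothetical example: what fraction of the company would 10,000 shares be?
    your_shares = 10_000
    shares_outstanding = 25_000_000    # made-up figure; use the number from the Annual Report

    ownership = your_shares / shares_outstanding
    print(f"ownership: {ownership:.4%} of the company")   # 0.0400%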

If you find that this doesn’t work or is incomplete, please post your comments here. I’m sure others will appreciate the information.

And if you have information about other states, that is most welcome.