if this then that (@ifttt)

There’s a great service called “if this then that” (ifttt) that allows you to create tasks based on specific triggers from any of about forty channels.

I’m thrilled that ‘starring an item’ in Google Reader is a channel.

When a trigger occurs, you can have ifttt generate a specified action.

I have one …

When I post this article, it will be automatically tweeted … Very cool, check them out!
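
Conceptually, an ifttt recipe is nothing more than a (trigger, action) pair. Here is a tiny sketch of that idea in C++ (purely illustrative; the channel and trigger names are made up and have nothing to do with ifttt’s actual implementation):

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // A recipe pairs a named trigger with the action to run when it fires.
    struct Recipe {
        std::string trigger;                          // e.g. "blog.new_post" (made-up channel name)
        std::function<void(const std::string&)> act;  // what to do with the payload
    };

    int main() {
        std::vector<Recipe> recipes = {
            {"blog.new_post",  [](const std::string& url) { std::cout << "tweet: just posted " << url << "\n"; }},
            {"reader.starred", [](const std::string& url) { std::cout << "add to breadcrumbs feed: " << url << "\n"; }},
        };

        // When a trigger event arrives, run every matching action.
        auto fire = [&](const std::string& trigger, const std::string& payload) {
            for (const auto& r : recipes)
                if (r.trigger == trigger) r.act(payload);
        };

        fire("blog.new_post", "http://example.com/this-article");
        return 0;
    }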

My blog is all f’ed up

My blog has been all f’ed up for some time now, and I didn’t realize it. I’ve been reading stuff and tagging it on my tablet and in the past that used to make it pop up in an RSS feed that was displayed on my blog as ‘breadcrumbs’. But, somewhere along the way, all that fell apart.

  • Maybe it was because something changed in the way the bit.ly links were shared.
  • Maybe it was because the ‘unofficial’ bit.ly client that I was using didn’t really work, and therefore nothing made it to bit.ly and therefore to the RSS feed.
  • And Gimmebar did one thing, and they did it well. But they didn’t do the next thing they promised (an Android app).

So, from about November 2011 when Google went and wrecked Google Reader by eliminating the ‘share this’ functionality till today, all the stuff I’ve read and thought I shared is gone …

Time to use twitter as the sharing system. That seems to work. I don’t like it, but it will have to do for now.

Cloud CPU Cost over the years

Great article by Greg Arnette about the crashing cost of CPU over the years, thanks to the introduction of the cloud.

http://www.gregarnette.com/blog/2011/11/a-brief-history-cloud-cpu-costs-over-the-past-5-years/

Personally, I think the most profound change came in December 2009 with the introduction of “spot pricing”.

Effectively, there is an ongoing auction for the price of an instance; as long as the prevailing price is lower than the price you are willing to pay, you get to keep your instance.
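
The mechanics are easy to sketch: you state the maximum you are willing to pay, and the instance runs for any period in which the prevailing spot price is at or below that bid. A simplified illustration (the bid and prices are made-up numbers, and the real EC2 billing and termination rules have more detail than this):

    #include <iostream>
    #include <vector>

    // Simplified spot-market rule: the instance runs for any hour in which the
    // prevailing spot price is at or below your bid, and is terminated otherwise.
    int main() {
        const double bid = 0.10;  // the most you are willing to pay per hour
        const std::vector<double> spot_price = {0.04, 0.06, 0.09, 0.12, 0.07};  // hypothetical hourly prices

        double total_cost = 0.0;
        for (int hour = 0; hour < (int)spot_price.size(); ++hour) {
            if (spot_price[hour] <= bid) {
                total_cost += spot_price[hour];  // you pay the prevailing price, not your bid
                std::cout << "hour " << hour << ": running at $" << spot_price[hour] << "\n";
            } else {
                std::cout << "hour " << hour << ": price above bid, instance terminated\n";
            }
        }
        std::cout << "total spend: $" << total_cost << "\n";
        return 0;
    }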

Do one thing, and do it awesomely … Gimmebar!

From time to time you see a company come along that offers a simple product or service, and when they launch it just works.

The last time (that I can recall) when this happened was when I first used Dropbox. Download this little app and you got a 2GB drive in the cloud. And it worked on my Windows PC, on my Ubuntu PC, on my Android phone.

It just worked!

That was a while ago. And since then I’ve installed tons of software (and uninstalled 99% of it because it just didn’t work).

Last week I found Gimmebar.

There was no software to install, I just created an account on their web page. And it just worked!

What is Gimmebar? They consider themselves the 5th greatest invention of all time and they call themselves “a data steward”. I don’t know what that means. They also don’t tell you what the other 4 inventions are.

Here is how I would describe Gimmebar.

Gimmebar is a web saving/sharing tool that allows you to save things that you find interesting on the web in a nicely organised personal library in the cloud, and share some of that content with others if you so desire. They have something about saving stuff to your Dropbox account but I haven’t figured all of that out yet.

It has a bookmarklet for your browser; click it and things just get bookmarked and saved into your account.

But, it just worked!

I made a couple of collections, made one of them public and one of them shared.

If you share a collection it automatically gets a URL.

And that URL automatically supports an RSS Feed!

And they also back up your tweets (I don’t give a crap about that).

So, what’s missing?

  • Some way to import all your stuff (from Google Reader)
  • An Android application (more generally, mobile application for platform of choice …)
  • The default ‘view’ on the collections includes previews; I will have enough crap before long where the preview will be a drag. How about a way to get just a list?
  • Saving a bookmark is at least a three-click process right now: you visit the site and click the bookmarklet, a little banner appears at the bottom of the screen where you click to indicate whether you want the page to go to your private or public area, and then you click the collection you want to store it in. This is functional but not easy to use.

I had one interaction with their support (little feedback tab on their page). Very quick to respond and they answered my question immediately.

On the whole, this feels like my first experience with Dropbox. Give it a shot, I think you’ll like it.

Why? Because Gimmebar set out to do one thing and they did it awesomely. It just worked!

The MongoDB rant. Truth or hoax?

Two days ago, someone called ‘nomoremongo’ posted this on Y Combinator News.

Several people (me included) stumbled upon the article, read it, and took it at face value. It’s on the Internet, it’s got to be true, right?

No, seriously. I read it, and parts of it resonated with my understanding of how MongoDB works. I saw some of the “warnings” and they seemed real. I read this one (#7) and ironically, this was the one that convinced me that this was a true post.

**7. Things were shipped that should have never been shipped**

Things with known, embarrassing bugs that could cause data
problems were in "stable" releases--and often we weren't told
about these issues until after they bit us, and then only b/c
we had a super duper crazy platinum support contract with 10gen.

The response was to send up a hot patch and that they were
calling an RC internally, and then run that on our data.

 

Who but a naive engineer would feel this kind of self-righteous outrage? 😉 I’ve shared this outrage at some point in my career, but I have also seen companies ship backup software (and have a party) when they knew that restore couldn’t possibly work (yes, a hot patch), and ship software that could corrupt data in pretty mainstream circumstances (yes, a hot patch before anyone installed it).

I spoke with a couple of people who know MongoDB much better than I do, and they all nodded at some of the things they read. The same article was also forwarded to me by someone who is clearly knowledgeable about MongoDB.

OK, truth has been established.

Then I saw this tweet.

Which was odd. Danny doesn’t usually swear (well, I’ve done things to him that have made him swear and a lot more but that was a long time ago). Right Danny?

 

 

 

Well, he had me at the “Start thinking for yourself”. But then he went off the meds, “MongoDB is the next MySQL”, really …

 

I think there’s a kernel of truth in the MongoDB rant. And it is certainly the case that a lot of startups are making dumb architectural decisions because someone told them that “MongoDB was web-scale”, or that “CAP Theorem told them that databases were dead”.

Was this a hoax? I don’t know. But it was certainly a reminder that not all scams originate in Nigeria, and not all of them begin by telling me that I could make a couple of billion dollars if I just put up a couple of thousand.

On migrating from Microsoft SQL Server to MongoDB

I was just reading this article http://www.wireclub.com/development/TqnkQwQ8CxUYTVT90/read describing one company’s experience migrating from SQL Server to MongoDB.

Having read the article, my only question to these folks is “why do it”?

Let’s begin by saying that we should discount all one time costs related to data migration. They are just that, one time migration costs. However monumental, if you believe that the final outcome is going to justify it, grin and bear the cost.

But, once you are in the (promised) MongoDB land, what then?

The things the author believes they will miss are:

  • maturity
  • tools
  • query expressiveness
  • transactions
  • joins
  • case insensitive indexes on text fields

Really? And you would still roll the dice in favor of a NoSQL science project? Well, then the benefits must be really, really awesome! Let’s take a look at what those are:

  • MongoDB is free
  • MongoDB is fast
  • Freedom from rigid schemas
  • ObjectID’s are expressive and handy
  • GridFS for distributed file storage
  • Developed in the open

OK, I’m scratching my head now. None of these really blows me away. Let’s look at these one at a time.

  • MongoDB is free
    • So are PostgreSQL and MySQL
  • MongoDB is fast
    • So are PostgreSQL and MySQL if you put them on the same SSD and multiple HDD’s like you claim you do with MongoDB
  • Freedom from rigid schemas
    • I’ll give you this one, relational databases are kind of “old school” in this department
  • ObjectID’s are expressive and handy
    • Elastic Transparent Sharding schemes like ParElastic overcome this with Elastic Sequences, which give you the same benefits. A half-way decent developer could do this for you with a simple sharded architecture (see the sketch after this list).
  • GridFS for distributed file storage
    • Replication anyone?
  • Developed in the open
    • Yes, MongoDB is free and developed in the open like a puppy is “free”. You just told us all the “costs” associated with this “free puppy”
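
On the ObjectID point above, here is the kind of thing a half-way decent developer could do: compose a unique, roughly time-ordered ID out of a timestamp, a shard number, and a local counter, so that no central coordinator is needed. This is a sketch of the general technique, not ParElastic’s or MongoDB’s actual scheme, and the bit widths are arbitrary choices:

    #include <atomic>
    #include <cstdint>
    #include <ctime>
    #include <iostream>

    // Compose a 64-bit ID as [ seconds since epoch | 10-bit shard id | 16-bit counter ].
    // IDs are unique across shards (up to 65,536 per shard per second) and sort
    // roughly by creation time, with no central coordinator involved.
    class IdGenerator {
    public:
        explicit IdGenerator(uint16_t shard_id) : shard_id_(shard_id), counter_(0) {}

        uint64_t next() {
            uint64_t seconds = static_cast<uint64_t>(std::time(nullptr));
            uint64_t shard   = static_cast<uint64_t>(shard_id_ & 0x3FF);   // bits 16..25
            uint64_t seq     = counter_.fetch_add(1) & 0xFFFF;             // bits 0..15
            return (seconds << 26) | (shard << 16) | seq;
        }

    private:
        uint16_t shard_id_;
        std::atomic<uint64_t> counter_;
    };

    int main() {
        IdGenerator shard3(3);   // this node's shard number
        for (int i = 0; i < 3; ++i)
            std::cout << shard3.next() << "\n";
        return 0;
    }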

    So really, why do people use MongoDB? I know there are good circumstances where MongoDB will whip the pants off any relational database but I submit to you that those are the 1%.

    To this day, I believe that the best description of MongoDB is this one:

    http://www.xtranormal.com/watch/6995033/mongo-db-is-web-scale

    Mongo DB is web scale
    by: gar1t


    Google went and broke Google Reader!

A very nice feature of Google Reader (my RSS reader of choice) was the simple “Share” button at the bottom of each article; clicking it added the current URL to a list of shared articles, and an RSS feed could be created from that list!

    The breadcrumbs feature on my web page relied on that; as I read things, if I wanted to make them show up in breadcrumbs, all I did was to hit the Share button. If I visited some random URL and wanted to share that, I used the “Note in Reader” bookmarklet. All very good. Till Google went and broke it.

    Now all I get is this:

    This sucks!

    Others seem to have noticed this as well. A collection of related news:

    Trick or treat the google reader changes are coming tonight

    Save the Google Reader petition

    Upcoming changes

     

    Database scalability myth (again)

A common myth that has been perpetuated is that relational databases do not scale beyond two or three nodes. That, and the CAP Theorem, are held up as the reasons why relational databases are unscalable and why NoSQL is the only feasible solution!

Yesterday I ran into a very thought-provoking article that makes just this case. You can read the entire post here. In it, the author, Srinath Perera, provides an interesting template for choosing the data store for an application and makes the case that relational databases do not scale beyond 2 to 5 nodes. He writes,

    The low scalability class roughly denotes the limits of RDBMS where they can be scaled by adding few replicas. However, data synchronization is expensive and usually RDBMSs do not scale for more than 2-5 nodes. The “Scalable” class roughly denotes data sharded (partitioned) across many nodes, and high scalability means ultra scalable systems like Google.

    In 2002, when I started at Netezza, the first system I worked on (affectionately called Monolith) had almost 100 nodes. The first production class “Mercury” system had 108 nodes (112 nodes, 4 spares). By 2006, the systems had over 650 nodes and more recently much larger systems have been put into production. Yet, people still believe that relational databases don’t scale beyond two or three nodes!

Systems like ParElastic (Elastic Transparent Sharding) can certainly scale to much more than two or three nodes, and I’ve run prototype systems with up to 100 nodes on Amazon EC2!
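
The basic idea behind scaling out this way is not magic: pick a distribution column, hash it, and route each row to one of N nodes; queries that carry the key touch a single node, and the rest fan out. A minimal sketch of the routing step (illustrative only, not how ParElastic or any particular product implements it):

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // Route a row to one of num_nodes shards based on a hash of its distribution key.
    // Queries that supply the key go to exactly one node; the table itself never
    // has to fit on any single node.
    int shard_for(const std::string& key, int num_nodes) {
        return static_cast<int>(std::hash<std::string>{}(key) % num_nodes);
    }

    int main() {
        const int num_nodes = 100;  // e.g. the 100-node prototype mentioned above
        const std::vector<std::string> customers = {"acme", "globex", "initech"};
        for (const auto& customer : customers)
            std::cout << customer << " -> node " << shard_for(customer, num_nodes) << "\n";
        return 0;
    }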

    Srinath’s post does contain an interesting perspective on unstructured and semi-structured data though, one that I think most will generally agree with.

    All you ever wanted to know about the CAP Theorem but were scared to ask!

    I just posted a longish blog post (six parts actually) about the CAP Theorem at the ParElastic blog.

    http://www.parelastic.com/database-architectures/an-analysis-of-the-cap-theorem/

    -amrith

    Embracing opposing points of view

    In a blog aptly called “Both Sides of the Table”, I read a post entitled “Why You Should Embrace Opposing Views at Your Startup”.

    It is a great post by Mark Suster and I highly recommend you read it.

    If your startup lives in an echo chamber, and the only voice you hear is your own, it is most certainly doomed.

    Dayton, OH embraces immigrants, cites entrepreneurship as a big draw

Bucking a national trend, Dayton, Ohio has taken the bold step of welcoming immigrants. The city published a comprehensive 32-page report describing the program, which was approved some days ago.

    Here are some quotes that I read that I found encouraging.

    According to the city, immigrants are two times more likely than others to become entrepreneurs.

     

1. Focus on East Third Street, generally between Keowee and Linden, as an initial international market place for immigrant entrepreneurship. East Third Street, in addition to being a primary thoroughfare between Downtown and Wright Patterson Air Force Base, also encompasses an area of organic immigrant growth and available space to support continuing immigrant entrepreneurship.

     

    2. Create an inclusive community-wide campaign around immigrant entrepreneurship that facilitates startup businesses, opens global markets and restores life to Dayton neighborhoods.

    Other coverage of this and related issues can also be found here:

    Running databases in virtualized environments

    I have long believed that databases can be successfully deployed in virtual machines. Among other things, that is one of the central ideas behind ParElastic, a start-up I helped launch earlier this year. Many companies (Amazon, Rackspace, Microsoft, for example) offer you hosted databases in the cloud!

    But yesterday I read this post in RWW. This article talks about a report published by Principled Technologies in July 2011, a report commissioned by Intel, that

    tested 12 database applications simultaneously – and all delivered strong and consistent performance. How strong? Read the case study, examine the results and testing methodology, and see for yourself.

    Unfortunately, I believe that discerning readers of this report are more likely to question the conclusion(s) based on the methodology. What do you think?


    A Summary of the Principled Technologies Report

    In a nutshell, this report seeks to make the case that industry standard servers with virtualization can in fact deliver the performance required to run business critical database applications.

It attempts to do so by running VMware vSphere 5.0 on the newest four-socket Intel Xeon E7-4870 based server and hosting 12 database applications, each with an 80GB database, in its own virtual machine. The Intel Xeon E7-4870 is a 10-core processor with two hardware threads per core, clocked at 2.4GHz; the server had 1TB of RAM (64 modules of 16GB each). The storage in this server was 2 internal disks, each 146GB in size (10k SAS), plus an EMC Clariion Fibre Channel SAN with some disks configured in RAID0. In total they configured 6 LUNs, each of 1066GB (over a TB each). The VMs ran Windows Server 2008 R2 and SQL Server 2008 R2.

    The report claims that the test that was performed was “Benchmark Factory’s TPC-H like workload”. Appendix B somewhat (IMHO) misleadingly calls this “Benchmark Factory TPC-H score”.

    The result is that these twelve VM’s running against an 80GB database were able to consistently process in excess of 10,000 queries per hour each.

    A comparison is made to the Netezza whitepaper that claims that the TwinFin data warehouse appliance running the “Nationwide Financial Services” workload was able to process around 2,500 queries per hour and a maximum of 10,000 queries per hour.

    The report leaves the reader to believe that since the 12 VM’s in the test ran consistently more than 10,000 queries per hour, business critical applications can in fact be deployed in virtualized environments and deliver good performance.

    The report concludes therefore that business critical applications can be run on virtualized platforms, deliver good performance, and reduce cost.


    My opinion

    While I entirely believe that virtualized database servers can produce very good performance, and while I entirely agree with the conclusion that was reached, I don’t believe that this whitepaper makes even a modestly credible case.

    I ask you to consider this question, “Is the comparison with Netezza running 2,500 queries per hour legitimate?”

Without digging too far, I found that the Netezza whitepaper talks of a data warehouse with “more than 4.5TB of data”, 10 million database changes per day, 50 concurrent users at peak time and 10-15 on average, 2,500 qph with a peak of 10,000 qph at month end, and 99.5% of queries completing in under one minute.

Based on the information disclosed, this comparison does not appear to be justified. Note well that I am not saying that the comparison is invalid, rather that the case for it has not been made sufficiently.

An important reason for my skepticism is that when processing database operations like a join between two tables, doubling the data volume can quadruple the amount of computation required. If you are performing a three-table join, doubling the data can increase the computation involved by as much as a factor of eight. This is the very essence of the scalability challenge with databases!
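
To make that concrete: a naive nested-loop join compares rows pairwise, so its cost grows with the product of the input sizes. Doubling both inputs of a two-way join quadruples the comparisons, and doubling all three inputs of a three-way join multiplies them by eight. A toy illustration (real optimizers do far better than nested loops, but the growth trend is the point):

    #include <cstdint>
    #include <iostream>

    // Comparisons performed by a naive nested-loop join over two and three tables.
    uint64_t two_way(uint64_t n1, uint64_t n2) { return n1 * n2; }
    uint64_t three_way(uint64_t n1, uint64_t n2, uint64_t n3) { return n1 * n2 * n3; }

    int main() {
        const uint64_t n = 1000;
        std::cout << "2-way join, n rows per table:  " << two_way(n, n)                  << "\n";  // 1,000,000
        std::cout << "2-way join, 2n rows per table: " << two_way(2 * n, 2 * n)          << "\n";  // 4,000,000 (4x)
        std::cout << "3-way join, n rows per table:  " << three_way(n, n, n)             << "\n";  // 1,000,000,000
        std::cout << "3-way join, 2n rows per table: " << three_way(2 * n, 2 * n, 2 * n) << "\n";  // 8,000,000,000 (8x)
        return 0;
    }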

    I get an inkling that this may not be a valid comparison when we look at Appendix B that states that the total test time was under 750 seconds in all cases.

    This feeling is compounded when I don’t see how many concurrent queries are run against each database. Single user database performance is a whole lot better and more predictable than multi-user performance. The Netezza paper specifically talks about the multi-user concurrency performance not the single-user performance.

Reading very carefully, I did find a mention that a single server running 12 VMs hosted the client(s) for the benchmark. Since ~15k queries were completed in under 750s, we can say that each query lasted about 0.05s. Now, those are really, really short queries. Impressive, but not the kind of workload in which one would expect Netezza to be deployed. The Netezza report does clearly state that 99.5% of queries completed in under one minute, which leads me to conclude that the queries being run in the subject benchmark are at least two orders of magnitude away!

    Conclusion

Virtualized environments like Amazon EC2, Rackspace, Microsoft Azure, and VMware are perfectly capable of running databases and database applications. One need only look at Amazon RDS (now with MySQL and Oracle), database.com, SQL Azure, and offerings like that to realize that this is in fact the case!

However, this report fails to make a compelling case for it. Making a comparison to a different whitepaper and simply relating the results to the “queries per hour” in that paper causes me to question the methodology. Once readers question the method(s) used to reach a conclusion, they are likely to question the conclusion itself.

    Therefore, I don’t believe that this report achieves what it set out to do.


    References

    You can get a copy of the white paper here, a link to scribd, or here, a link to the PDF on RWW.

This case study references a Netezza whitepaper on concurrency, which you can get here. The Netezza whitepaper, “CONCURRENCY & WORKLOAD MANAGEMENT IN NETEZZA”, was prepared by Winter Corp and sponsored by Netezza.

    I have also archived copies of the two documents here and here.

    A link to the TPC-H benchmark can be found on the TPC web site here.

    Disclosure

    In the interest of full disclosure, in the past I was an employee of Netezza, a company that is referenced in this report.

    High Availability and Fault Tolerance

    This article on HA and FT at ReadWriteWeb caught my eye. A while ago I used to work at Stratus and it is not often that I hear their name these days. Stratus’ Fault Tolerant systems achieve their impressive uptime by hardware redundancy.

    In very simple terms, if the probability of some component or sub-system failure is p, then the probability of two failures at the same time is a much smaller p * p.

    When I was at Stratus, we used to guarantee “five nines”, or an uptime of 99.999% on systems that ran credit card networks, banking systems, air traffic control systems, and so on. Systems where the cost of downtime could be measured either in hundreds of thousands or millions of dollars an hour, or in human lives potentially lost.

Before I worked at Stratus, I worked for a Stratus customer, and my first experience with fault tolerance was when I received a box in the mail with a note saying, in effect, that a CPU board had failed in one of our systems about a month earlier, so would we please pop that board out and put the enclosed replacement in its place.

And we hadn’t even realized it; the system had been chugging along just fine!

    So what does uptime % translate to in terms of hours and minutes?

    99% uptime : 3.65 days of downtime per year

    99.9% uptime: 8.76 hours of downtime per year

    99.99% uptime: 52.56 minutes of downtime per year

    99.999% uptime: 5.256 minutes of downtime per year

    Stratus claims that across its customer base of 8000 servers the uptime is 99.9998%

    99.9998% uptime: 63 seconds of downtime per year.

    Now, that’s pretty awesome!
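
The arithmetic behind those numbers is simple: downtime per year is just (1 − uptime) × one year. A quick sketch that reproduces the figures above:

    #include <cstdio>

    // Convert an uptime percentage into allowed downtime per (non-leap) year.
    int main() {
        const double seconds_per_year = 365.0 * 24 * 3600;
        const double uptimes[] = {99.0, 99.9, 99.99, 99.999, 99.9998};

        for (double pct : uptimes) {
            double down = (1.0 - pct / 100.0) * seconds_per_year;
            std::printf("%.4f%% uptime -> %9.0f seconds (%8.2f minutes, %7.2f hours) of downtime per year\n",
                        pct, down, down / 60, down / 3600);
        }
        return 0;
    }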

And when I flew into Schiphol Airport, or saw containers being loaded onto ships in Singapore, or used my American Express credit card, logged into AOL, or looked at my 401(k) on Fidelity, I felt pretty darn proud of it!

    Oracle’s new NoSQL announcement

    Oracle’s announcement of a NoSQL solution at Oracle Open World 2011 has produced a fair amount of discussion. Curt Monash blogged about it some days ago, and so did Dan Abadi. A great description of the new offering (Dan credits it to Margo Seltzer) can be found here or here. I think the announcement, and this whitepaper do in fact bring something new to the table that we’ve not had until now.

    1. First, the Oracle NoSQL solution extends the notion of configurable consistency in a surprising way. Solutions so far had ranged from synchronous consistency to eventual consistency. But, all solutions did speak of consistency at some point in time. Eventual consistency has been the minimum guarantee of other NoSQL solutions. The whitepaper referenced above makes this very clear and characterizes this not in terms of consistency but durability.

    Oracle NoSQL Database also provides a range of durability policies that specify what guarantees the system makes after a crash. At one extreme, applications can request that write requests block until the record has been written to stable storage on all copies. This has obvious performance and availability implications, but ensures that if the application successfully writes data, that data will persist and can be recovered even if all the copies become temporarily unavailable due to multiple simultaneous failures. At the other extreme, applications can request that write operations return as soon as the system has recorded the existence of the write, even if the data is not persistent anywhere. Such a policy provides the best write performance, but provides no durability guarantees. By specifying when the database writes records to disk and what fraction of the copies of the record must be persistent (none, all, or a simple majority), applications can enforce a wide range of durability policies.

2. It sets forth a very specific set of use-cases for this product. There has been much written by NoSQL proponents about its applicability in all manner of data management situations. I find this section of the whitepaper to be particularly fact-based.

    The Oracle NoSQL Database, with its “No Single Point of Failure” architecture is the right solution when data access is “simple” in nature and application demands exceed the volume or latency capability of traditional data management solutions. For example, click-stream data from high volume web sites, high-throughput event processing, and social networking communications all represent application domains that produce extraordinary volumes of simple keyed data. Monitoring online retail behavior, accessing customer profiles, pulling up appropriate customer ads and storing and forwarding real-time communication are examples
    of domains requiring the ultimate in low-latency access. Highly distributed applications such as real-time sensor aggregation and scalable authentication also represent domains well-suited to Oracle NoSQL Database.

Several have also observed that this position is in stark contrast to Oracle’s previous position on NoSQL. Oracle released a whitepaper written in May 2011 entitled “Debunking the NoSQL Hype”. This document has been removed from Oracle’s website. You can, however, find cached copies all over the internet. Ironically, the last line in that document reads,

    Go for the tried and true path. Don’t be risking your data on NoSQL databases.

    With all that said, this certainly seems to be a solution that brings an interesting twist to the NoSQL solutions out there, if nothing else to highlight the shortcomings of existing NoSQL solutions.
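
To make the durability spectrum concrete, here is a generic sketch of the kind of policy knob the whitepaper describes, where a write is acknowledged after zero, a majority, or all of the replicas have made it durable. This is an illustration of the concept only, not Oracle’s actual API:

    #include <iostream>
    #include <vector>

    // How many replicas must have persisted a write before it is acknowledged.
    enum class DurabilityPolicy { NONE, MAJORITY, ALL };

    // Given per-replica persistence results, decide whether the write may be
    // acknowledged under the chosen policy.
    bool write_is_acknowledged(const std::vector<bool>& persisted, DurabilityPolicy policy) {
        int ok = 0;
        for (bool p : persisted) ok += p ? 1 : 0;
        switch (policy) {
            case DurabilityPolicy::NONE:     return true;  // ack as soon as the write is received
            case DurabilityPolicy::MAJORITY: return ok > static_cast<int>(persisted.size()) / 2;
            case DurabilityPolicy::ALL:      return ok == static_cast<int>(persisted.size());
        }
        return false;
    }

    int main() {
        std::vector<bool> persisted = {true, true, false};  // 2 of 3 replicas flushed to disk
        std::cout << "NONE:     " << write_is_acknowledged(persisted, DurabilityPolicy::NONE) << "\n";
        std::cout << "MAJORITY: " << write_is_acknowledged(persisted, DurabilityPolicy::MAJORITY) << "\n";
        std::cout << "ALL:      " << write_is_acknowledged(persisted, DurabilityPolicy::ALL) << "\n";
        return 0;
    }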

    [2011-10-07] Two short updates here.

    1. There has been an interesting exchange on Dan Abadi’s blog (comments) between him and Margo Seltzer (the author of the whitepaper) on the definition of eventual consistency. I subscribe to Dan’s interpretation that says that perpetually returning to T0 state is not a valid definition (in the limit) of eventual consistency.
    2. Some kind soul has shared the Oracle “Debunking the NoSQL Hype” whitepaper here. You have to click download a couple of times and then wait 10 seconds for an ad to complete.

    A brief hiatus, and we’re back!

    Since early last year when I posted my last blog entry, I’ve been a bit “preoccupied”. Around that time, I started in earnest on getting a start-up off the ground. It was a winding road, and I did not get around to writing anything on this blog. Over the past several months, I have been resurrecting this blog.

    The old blog (there’s still a shell there) was called Hypecycles (https://hypecycles.wordpress.com) and try as I might, I could not get http://www.hypecycles.com for this blog.

    What’s with “Pizza and Code”?

    The last eighteen or so months have been spent getting ParElastic off the ground. The quintessential startup is two guys working in the garage, and subsisting on Pizza! The software startup is therefore two things, Pizza and Code!

    What’s ParElastic?

    ParElastic is a startup that is building elastic database middleware for the cloud. Want to know more about ParElastic? Go to http://www.parelastic.com. Starting ParElastic has been an incredible education, one that can only be acquired by actually starting a company.

    Over the next couple of blog posts, I will quickly cover the two or so years from mid 2009 to the present.

    Enjoy!

     

    It is 2010 and RAID5 still works …

    Some years ago (2007, 2008) when I cared a little more about things like RAID and RAID recovery, I read an article in ZDNET by Robin Harris that made the case for why disk capacity increases coupled with an almost invariant URE (Unrecoverable Read Error) rate meant that RAID5 was dead in 2009. A follow-on article appeared recently, also by Robin Harris that extends the same logic and claims that RAID6 would stop working in 2019.

The crux of the argument is this. As disk drives have become larger and larger (approximately doubling in capacity every two years), the URE rate has not improved at the same pace. The URE rate measures the frequency of occurrence of an Unrecoverable Read Error and is typically expressed in errors per bits read. For example, a URE rate of 1E-14 (10^-14) implies that, statistically, an unrecoverable read error would occur once in every 1E14 bits read (1E14 bits = 1.25E13 bytes, or approximately 12TB).

Further, Robin considers an array (RAID5 or RAID6) that is running normally when one drive suffers a catastrophic failure, prompting a reconstruction from parity. In that scenario, it is perfectly conceivable that, while reading the (N-1) surviving drives (data and parity) in order to rebuild the failed drive, a single URE occurs. That URE would render the RAID volume failed.

The argument is that as disk capacities grow and the URE rate does not improve at the same pace, the probability of a RAID5 rebuild failure increases over time. Statistically, he shows that by 2009 disk capacities would have grown enough to make it meaningless to use RAID5 for any array of meaningful size.
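
You can put rough numbers on that argument. During a rebuild the controller must read every bit on the surviving drives, and with a URE rate of 1 in 1E14 bits the chance of getting through unscathed shrinks quickly as drives grow. A back-of-the-envelope sketch, assuming (as the articles do) independent errors and full-drive reads; the drive sizes and the 7-drive array below are my own illustrative choices:

    #include <cmath>
    #include <cstdio>

    // Probability that a RAID5 rebuild completes without hitting a single URE,
    // assuming independent errors at a fixed rate per bit read.
    double rebuild_success(double drive_tb, int surviving_drives, double ure_per_bit) {
        double bits_read = drive_tb * 1e12 * 8 * surviving_drives;  // every surviving drive is read in full
        return std::exp(bits_read * std::log1p(-ure_per_bit));      // (1 - p)^bits, computed stably
    }

    int main() {
        const double ure = 1e-14;  // the 1-error-per-1E14-bits rate discussed above
        for (double tb : {0.5, 1.0, 2.0, 4.0}) {
            double p = rebuild_success(tb, 6, ure);  // a 7-drive RAID5 set: 6 survivors to read
            std::printf("%4.1f TB drives: %5.1f%% chance the rebuild sees no URE\n", tb, 100 * p);
        }
        return 0;
    }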

    So, in 2007 he wrote:

    RAID 5 protects against a single disk failure. You can recover all your data if a single disk breaks. The problem: once a disk breaks, there is another increasingly common failure lurking. And in 2009 it is highly certain it will find you.

    and in 2009, he wrote:

    SATA RAID 6 will stop being reliable sooner unless drive vendors get their game on. More good news: one of them already has.

    The logic proposed is accurate but, IMHO, incomplete. One important aspect that the analysis fails to point out is something that RAID vendors have already been doing for many years now.

    Image courtesy of http://www.computer-history.info

    When disk drives looked like this (picture at right), the predominant failure mode was the catastrophic failure. Drives either worked or didn’t work any longer. At some level, that was a reflection of the fact that the Drive Permanent Failure (DPF) frequency was significantly higher than the URE frequency, and therefore the only observed failure mode was catastrophic failure.

    As drives got bigger, and certainly in 1988 when Patterson and others first proposed the notion of RAID, it made perfect sense to wait for a DPF and then begin drive reconstruction. The possibility of a URE was so low (given drive capacities) that all you had to worry about was the rebuild time, and the degraded performance during the rebuild (as I/O’s may have to be satisfied through block reconstruction).

    But, that isn’t how most RAID controllers today deal with drive URE’s and drive failures. On the contrary, for some time now, RAID controllers (at least the recent ones I’ve read about) have used better methods to determine when to perform the rebuild.

A 5400 RPM SATA drive

Consider this alternative, which I know to be used by at least a couple of array vendors. When a drive in a RAID volume reports a URE, the array controller increments a counter and satisfies the I/O by rebuilding the block from parity. It then performs a rewrite on the disk that reported the URE (potentially with verify) and, if the sector is bad, the drive’s microcode will remap it and all will be well.

When the counter exceeds some threshold, and with the disk that reported the URE still in a usable condition, the RAID controller will begin the RAID5 recovery. Robin is correct that RAID recovery after a DPF is something that will become less and less useful as drive capacities grow. But, with better integration of SMART and significant improvements in the predictability of drive failures, the frequency of RAID5 and RAID6 reconstruction failures is dramatically lower than predicted in the referenced articles, because these reconstructions are triggered by UREs and not by DPFs.
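
In code, the policy described above looks roughly like this: repair each URE in place from parity, count them, and kick off a full rebuild of the suspect drive only when the count crosses a threshold, while the drive is still readable. A schematic sketch of the policy, not any vendor’s actual firmware:

    #include <iostream>

    // Schematic policy: recover individual UREs from parity and remap the sector;
    // start a proactive rebuild only when a drive has reported too many UREs.
    class RaidVolume {
    public:
        explicit RaidVolume(int ure_threshold) : threshold_(ure_threshold) {}

        void on_unrecoverable_read_error(int drive, long long block) {
            rebuild_block_from_parity(drive, block);   // satisfy the I/O immediately
            rewrite_and_remap(drive, block);           // let the drive remap the bad sector
            if (++ure_count_[drive] >= threshold_)
                start_proactive_rebuild(drive);        // drive is still readable at this point
        }

    private:
        void rebuild_block_from_parity(int, long long) { /* XOR of the other drives' blocks */ }
        void rewrite_and_remap(int, long long)         { /* rewrite, verify, remap if needed */ }
        void start_proactive_rebuild(int drive) {
            std::cout << "drive " << drive << ": URE threshold reached, rebuilding to spare\n";
        }

        int threshold_;
        int ure_count_[16] = {0};   // per-drive URE counters (16 slots assumed)
    };

    int main() {
        RaidVolume vol(3);
        for (int i = 0; i < 3; ++i)
            vol.on_unrecoverable_read_error(/*drive=*/2, /*block=*/1000 + i);
        return 0;
    }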

    Look at the specifications for the RAID controller you use.

    When is RAID recovery initiated? Upon the occurrence of an Unrecoverable Read Error (URE) or upon the occurrence of a Drive Permanent Failure (DPF)?

Several have proposed that ZFS with multiple copies is the way to go. While it addresses the issue, I submit to you that it does so at the wrong level of the stack. Mirroring at the block level, with the option to have multiple mirrors, is the correct (IMHO) solution. Disk block error recovery should not be handled in the file system.

    The Application Marketplace: Android’s worst enemy?

I recently got an Android phone (a Motorola A855, aka Droid). I had been using a Windows-based device (and had been since about 2003). I was concerned about the reviews reporting poor battery life and the fact that Bluetooth voice dialing was not present. I figured that the latter was a software thing and could be added later. So, with some doubt, I started using my phone.

On the first day, with a battery charged overnight, I proceeded to surf the Marketplace and download a few applications. I got a Google Voice dialer (not the one from Google), and a couple of other “marketplace” applications. I used the maps with the GPS for a short while, and in about 8 hours the yellow “low battery” warning came on. I had Google (Gmail) synchronization set to the default (sync enabled).

Pretty crappy, I thought. My Samsung went for two days without a problem, and it had ActiveSync against an Exchange server, or Gmail refreshing every 5 minutes, for years!

    The Google Voice dialer I downloaded had some bugs (it messed up the call log pretty badly) and I got bored of the other applications I had downloaded.

Time for a hard reset and a restart for the phone (just to be sure I got rid of all the gremlins; after all, I was a Windows phone user, and this was a weekly ritual).

    I got the update to Google Maps, set synch to continuous, downloaded the “sky map” application and charged the phone up fully. That was on Wednesday afternoon (17th). Today is the 20th and the battery is still all green on the home page.

    The robustness of downloaded Android Apps

One of the things that makes the Android phone so attractive (the application marketplace) is also a big problem: the robustness and stability of the downloaded applications cannot be guaranteed. We all realize that “your mileage may vary”. But a quick look at the “Best Practices” on the Android SDK site indicates that a badly written application can keep the CPU too busy and burn through your battery.
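
The “keep the CPU too busy” failure mode usually comes down to polling in a tight loop instead of blocking until there is something to do. A generic illustration of the difference (plain C++ threads rather than Android code; busy_poll is shown for contrast and deliberately not run):

    #include <chrono>
    #include <condition_variable>
    #include <mutex>
    #include <thread>

    std::mutex m;
    std::condition_variable cv;
    bool work_ready = false;

    // Battery-hostile: spins at 100% CPU checking a flag.
    void busy_poll() {
        while (true) {
            std::lock_guard<std::mutex> lock(m);
            if (work_ready) break;            // burns a core the whole time it waits
        }
    }

    // Battery-friendly: sleeps until the flag is actually set.
    void blocking_wait() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return work_ready; });  // CPU is idle while waiting
    }

    int main() {
        std::thread waiter(blocking_wait);
        std::this_thread::sleep_for(std::chrono::seconds(1));
        {
            std::lock_guard<std::mutex> lock(m);
            work_ready = true;
        }
        cv.notify_one();
        waiter.join();
        return 0;
    }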

Maybe the trouble with Android phones (and battery life in particular) is more an issue of poorly written applications.

    Apple (with the Macintosh) had a tight grip on the applications that could be released on the Mac. This helped them ensure that buggy software didn’t give the Mac a bad name. I’m sure Windows users can relate to this.

    They seem to have the same control on the iPhone App Store. Maybe that’s why I don’t hear so much about crappy applications on the iPhone that crash or suck the battery dry!

    Should Google take some control over the crap on the marketplace or will it all straighten itself out over time?

    Punishment must fit the crime

    I regularly read Dr. Dobbs Code Talk and noticed this article today. What caught my attention was not the article itself, but rather the first response to the article from Jack Woehr.

    Reproduced below is a screen shot of the page that I read and Jack’s comments. Really, I ask you, is C# all that bad?

    Microsoft patent 7617530, the flap about sudo

The blogosphere has been buzzing with indignation about Microsoft patent 7617530, which apparently was granted earlier this month. You can read it here.

Yes, enough people have complained that this is just like sudo and asked why Microsoft was granted a patent for it. In fairness, the patent does attempt to distinguish what is being claimed from sudo and provides copious references to it. What few have mentioned is that the thing Microsoft patented is in fact the exact functionality that systems like Ubuntu use to allow non-privileged users to perform privileged tasks.

    In PC Magazine, Matthew Murray writes,

    Because a graphical interface is not a part of sudo, it seems clear the patent refers to a Windows component and not a Linux one. The patent even references several different online sudo resources, further suggesting Microsoft isn’t trying to put anything over on anyone. The same section’s reference to “one, many, or all accounts having sufficient rights” suggests a list that sudo also doesn’t possess.

    IMHO, they may be missing something here.

    Let’s set that all aside. What I find interesting is this. The patent application states, and I reproduce three paragraphs of the patent application here and have highlighted three sentences (the first sentences in each paragraph).

    Standard user accounts permit some tasks but prohibit others. They permit most applications to run on the computer but often prohibit installation of an application, alteration of the computer’s system settings, and execution of certain applications. Administrator accounts, on the other hand, generally permit most if not all tasks.

    Not surprisingly, many users log on to their computers with administrator accounts so that they may, in most cases, do whatever they want. But there are significant risks involved in using administrator accounts. Malicious code may, in some cases, perform whatever tasks are permitted by the account currently in use, such as installing and deleting applications and files–potentially highly damaging tasks. This is because most malicious code performs its tasks while impersonating the current user of the computer–thus, if a user is logged on with an administrator account, the malicious code may perform dangerous tasks permitted by that account.

    To reduce these risks, a user may instead log on with a standard user account. Logging on with a standard user account may reduce these risks because the standard user account may not have the right to permit malicious code to perform many dangerous tasks. If the standard user account does not have the right to perform a task, the operating system may prohibit the malicious code from performing that task. For this reason, using a standard user account may be safer than using an administrator account.

    Absolutely! Most people don’t realize that they are logged in as users with Administrator rights and can inadvertently do damaging things.

My question is this: why is the default user created when you install Windows on a PC an administrator user? As you go through the install process, the installer asks you questions like “what is your name” and “how would you like to log in to your PC”, and uses the answers to set up the first user on the machine. Why is that user an administrator?

If you are smart (and if Microsoft really wanted to be good about this), the installation process would create two users: a non-administrator user for day-to-day use, and an administrator user.

    I’m a PC and if Windows 8 comes up with an installation process that creates two users, a non-administrator user and an administrator user, then it would have been my idea. But, I don’t intend to go green holding my breath for this to happen. Someone tell me if it does.

    A Report from Boston’s First “Big Data Summit”

    A short write-up about last night’s Big Data Summit appeared on xconomy today.

    My thanks to our sponsors, Foley Hoag LLP and the wonderful team at the Emerging Enterprise Center, Infobright, Expressor Software, and Kalido.

    Boston Big Data Summit Kickoff, October 22nd 2009

Since the announcement of the Boston Big Data Summit on the 2nd of October, we have had a fantastic response. The event sold out two days ago. We figured that we could remove the tables from the room and accommodate more people. And we sold out again. The response has been fantastic!

    If you have registered but you are not going to be able to attend, please contact me and we will make sure that someone on the waiting list is confirmed.

There has been some question about what “Big Data” is. Curt Monash, who will be delivering the keynote and moderating the discussion at the event next week, writes:

    … where “Big Data” evidently is to be construed as anything from a few terabytes on up.  (Things are smaller in the Northeast than in California …)

When you catch a fish (whether it is the little fish on the left or the bigger fish on the right), the steps to prepare it for the table are surprisingly similar. You may have more work to do with the big fish and you may use different tools to do it; but the steps are the same.

    So, while size influences the situation, it isn’t only about the size!

In my opinion, whether data is “Big” or not is more of a threshold discussion. Data is “Big” if the tools and techniques being used to acquire, cleanse, pre-process, store, process, and archive it are either unable to keep up or are not cost-effective.

    Yes, everything is bigger in California, even the size of the mess they are in. Now, that is truly a “Big Problem”!

    The 50,000 row spreadsheet, the half a terabyte of data in SQL Server, or the 1 trillion row table on a large ADBMS are all, in their own ways, “Big Data” problems.

The user with 50k rows in Excel may not want (or be able to afford) a solution with a “real database”, and may resort to splitting the spreadsheet into two sheets. The user with half a terabyte of SQL Server or MySQL data may adopt some home-grown partitioning or sharding technique instead of upgrading to a bigger platform, and the user with a trillion CDRs may reduce the retention period; but they are all responding to the same basic challenge of “Big Data”.

    We now have three panelists:

    It promises to be a fun evening.

    I have some thoughts on subjects for the next meeting, if you have ideas please post a comment here.

    Massachusetts Non-Compete Public Hearing

A quick update on the hearings on the non-compete legislation that were held today.

    [On Sept 12, 2022 I’m salvaging this old post that I published on my old blog in 2009]

    A quick update on the Public Hearings at the Joint Committee on Labor and Workforce Development held in Boston on October 7th, 2009.

Today I went to the State House in Boston and testified before the Joint Committee on Labor and Workforce Development on the subject of Non-Competes in the state. The hearings today were dominated by bills that had to do with “paid sick days”. Here is the day’s agenda:

    invite

    If you were a mother and wanted to make the case for paid sick days to care for your child, what would be better than to bring your child with you when you are about to testify to the Committee on Labor and Workforce Development on a bill about paid sick days? To be fair, the child sat quietly and ate a peanut butter and jelly sandwich and at one point tried to help read out her mother’s prepared testimony.

Hearing the testimony from several people and seeing how many children there were in the room drove home the point that many people made: when their children were sick, they had to take them along to work because they could not risk losing their jobs. That’s just wrong; I had assumed that most people had paid sick leave. Unfortunately, I learned today that this is not the case.

    Hearing on Paid Sick Days
    Testifying on the bills to allow Paid Sick Days.

    Continue reading “Massachusetts Non-Compete Public Hearing”

    On MapReduce and Relational Databases – Part 1

    Describes MapReduce and why WOTS (Wart-On-The-Side) MapReduce is bad for databases.

    This is the first of a two-part blog post that presents a perspective on the recent trend to integrate MapReduce with Relational Databases especially Analytic Database Management Systems (ADBMS).

    The first part of this blog post provides an introduction to MapReduce, provides a short description of the history and why MapReduce was created, and describes the stated benefits of MapReduce.

    The second part of this blog post provides a short description of why I believe that integration of MapReduce with relational databases is a significant mistake. It concludes by providing some alternatives that would provide much better solutions to the problems that MapReduce is supposed to solve.
    Continue reading “On MapReduce and Relational Databases – Part 1”

    On MapReduce and Relational Databases – Part 2

    This is the second of a two-part blog post that presents a perspective on the recent trend to integrate MapReduce with Relational Databases especially Analytic Database Management Systems (ADBMS).

    The first part of this blog post provides an introduction to MapReduce, provides a short description of the history and why MapReduce was created, and describes the stated benefits of MapReduce.

    The second part of this blog post provides a short description of why I believe that integration of MapReduce with relational databases is a significant mistake. It concludes by providing some alternatives that would provide much better solutions to the problems that MapReduce is supposed to solve.
    Continue reading “On MapReduce and Relational Databases – Part 2”

    Announcing the Boston Big Data Summit

    Announcement for the kickoff of the Boston “Big Data Summit”. The event will be held on Thursday, October 22nd 2009 at 6pm at the Emerging Enterprise Center at Foley Hoag in Waltham, MA. Register at http://bigdata102209.eventbrite.com


    The Boston “Big Data Summit” will be holding its first meeting on Thursday, October 22nd 2009 at 6pm at the Emerging Enterprise Center at Foley Hoag in Waltham, MA.

    The Boston area is home to a large number of companies involved in the collection, storage, analysis, data integration, data quality, master data management, and archival of “Big Data”. If you are involved in any of these, then the meeting of the Boston “Big Data Summit” is something you should plan to attend. Save the date!

    The first meeting of the group will feature a discussion of “Big Data” and the challenges of “Big Data” analysis in the cloud.

    Over 120 people signed up as of October 14th 2009.

    There is a waiting list. If you are registered and won’t be able to attend, please contact me so we can allow someone on the wait list to attend instead.

    Seating is limited so go online and register for the event at http://bigdata102209.eventbrite.com.

    The Boston “Big Data Summit” thanks the Emerging Enterprise Center at Foley Hoag LLP for their support and assistance in organizing this event.

    Agenda Updated

The Boston “Big Data Summit” is being sponsored by Foley Hoag LLP, Infobright, Expressor Software, and Kalido.

    For more information about the Boston “Big Data Summit” please contact the group administrator at boston.bigdata@gmail.com


    The Boston Big Data Summit is organized by Bob Zurek and Amrith (me) in partnership with the Emerging Enterprise Center at Foley Hoag LLP.


    Tell me about something you failed at, and what you learnt from it.

I have been involved in a variety of interviews, both at work and as part of the selection process in the town where I live. Most people are prepared for questions about their background and qualifications. But at a whole lot of recent interviews that I have participated in, candidates looked like deer in the headlights when asked the question (or a variation thereof),

    “Tell me about something that you failed at and what you learned from it”

    A few people turn that question around and try to give themselves a back-handed compliment. For example, one that I heard today was,

    “I get very absorbed in things that I do and end up doing an excellent job at them”

    Really, why is this a failure? Can’t you get a better one?

    Folks, if you plan to go to an interview, please think about this in advance and have a good answer to this one. In my mind, not being able to answer this question with a “real failure” and some “real learnings” is a disqualifier.

One thing that I firmly believe is that failure is a necessary by-product of showing initiative, in just the same way as bugs are a natural by-product of software development. And, if someone has not made mistakes, then they probably have not shown any initiative. And if they can’t recognize when they have made a mistake, that is scary too.

    Finally, I have told people who have been in teams that I managed that it is perfectly fine to make a mistake; go for it. So long as it is legal, within company policy and in keeping with generally accepted norms of behavior, I would support them. So, please feel free to make a mistake and fail, but please, try to be creative and not make the same mistake again and again.

    Oracle fined $10k for violating TPC’s fair use rules

    Oracle fined $10k for violating TPC’s fair use rules.

    In a letter dated September 25, 2009, the TPC fined Oracle $10k based on a complaint filed by IBM. You can read the letter here.

    Recently, Oracle ran an advertisement in The Wall Street Journal and The Economist making unsubstantiated superior performance claims about an Oracle/Sun configuration relative to an official TPC-C result from IBM. The ad ran twice on the front page of The Wall Street Journal (August 27, 2009 and September 3, 2009) and once on the back cover of The Economist (September 5, 2009). The ad references a web page that contained similar information and remained active until September 16, 2009. A complaint was filed by IBM asserting that the advertisement violated the TPC’s fair use rules.

    Oracle is required to do four things:

    1. Oracle is required to pay a fine of $10,000.
    2. Oracle is required to take all steps necessary to ensure that the ad will not be published again.
    3. Oracle is required to remove the contents of the page www.oracle.com/sunoraclefaster.
    4. Oracle is required to report back to the TPC on the steps taken for corrective action and the procedures implemented to ensure compliance in the future.

    At the time of this writing, the link http://www.oracle.com/sunoraclefaster is no longer valid.

    Can you copyright movie times?

MovieShowtimes.com, a site owned by West World Media, believes that they have!

In his article, Michael Masnick relates the experience of a reader, Jay Anderson, who found a loophole on the MovieShowtimes.com web page and figured out how to get movie times for a given zip code. He (Jay Anderson) then contacted the company asking how he could become an affiliate and drive traffic their way, and was rewarded with some legal mumbo jumbo.

    First of all, I think the minion at the law firm was taking a course on “Nasty Letter Writing 101” and did a fine job. I’m no copyright expert but if I received an offer from someone to drive more traffic to my site my first answer would not be to get a lawyer involved.

    Second, this whole episode could have well been featured in the book, Letters from a Nut, by Ted L. Nancy or the sequel More Letters from a Nut.

But this reminds me of something a former co-worker told me about an incident in which his daughter wrote a nice letter to a company and got her first taste of legal over-zealousness. He can correct the facts and fill in the details, but if I recall correctly, the daughter in question had written letters to many companies asking the usual children’s questions about how pretzels, or candy, or a nice toy was made. In response, some nice person in a marketing department would send back a gift hamper with a polite explanation of the process. But one day the little child wanted to know (if my memory serves me correctly) why M&M’s were called M&M’s. So, along went the nice letter to the address on the box. The response was a letter from the same guy who now works for MovieTimesForDummies.com explaining that M&M’s was a copyright of the so-and-so company and any attempt to blah blah blah.

    I think it is only a matter of time before MovieTimesForDummies.com releases exactly the same app that Jay Anderson wanted to, closes the loophole that he found and fires the developer who left it there in the first place.

    Oh, wait, I just got a legal notice from Amazon saying that the link on this blog directing traffic to their site is a violation of something or the other …

    Multithreaded File I/O (Reflections on Dr. Dobb’s article by Stefan Wörthmüller)

    Thoughts on the results that Stefan Wörthmüller reports in his article on Dr. Dobb’s Journal.

    I ran across an interesting article on Multi-Threaded File I/O in Dr. Dobb’s today. You can read the article at http://www.ddj.com/hpc-high-performance-computing/220300055

    I was particularly intrigued by the statements on variability,

    I repeated the entire test suite three times. The values I present here are the average of the three runs. The standard deviation in most cases did not exceed 10-20%. All tests have been also run three times with reboots after every run, so that no file was accessed from cache.

    Initially, I thought 10-20% was a bit much; this seemed like a relatively straightforward test and variability should be low. Then I looked at the source code for the test and I’m now even more puzzled about the variability.

    Get a copy of the sources here. It is a single source file and in the only case of randomization, it uses rand() to get a location into the file.

    The code to do the random seek is below

       if(RandomCount)
       {
          // Seek new position for Random access
          if(i >= maxCount)
             break;
          long pos = (rand() * fileSize) / RAND_MAX - BlockSize;
          fseek(file, pos, SEEK_SET);
       }
    

    While this is a multi-threaded program, I see no calls to srand() anywhere in the program. Just to be sure, I modified Stefan’s program as attached here. (My apologies, the file has an extension of .jpg because I can’t upload a .cpp or .zip onto this free wordpress blog. The file is a Windows ZIP file, just rename it).

    ///////////////////////////////////////////////////////////////////////////////
    // mtRandom.cpp   Amrith Kumar 2009 (amrith (dot) kumar (at) gmail (dot) com
    // This program is adapted from the program FileReadThreads.cpp by Stefan Woerthmueller
    // No rights reserved. Feel free to do whatever you like with this code
    // but don't blame me if the world comes to an end.

    #include "Windows.h"

    #include <stdio.h>
    #include <stdlib.h>     /* rand() */

    ///////////////////////////////////////////////////////////////////////////////
    // Worker Thread Function: each thread writes ten rand() values to its own file.
    ///////////////////////////////////////////////////////////////////////////////

    DWORD WINAPI threadEntry(LPVOID lpThreadParameter)
    {
        int index = (int)(INT_PTR)lpThreadParameter;
        FILE *fp;
        char filename[32];

        sprintf(filename, "file-%d.txt", index);

        fprintf(stderr, "Thread %d started\n", index);
        if ((fp = fopen(filename, "w")) == (FILE *)NULL)
        {
            fprintf(stderr, "Error opening file %s\n", filename);
        }
        else
        {
            for (int i = 0; i < 10; i++)
            {
                fprintf(fp, "%u\n", rand());    /* note: no srand() anywhere in this program */
            }

            fclose(fp);
        }

        fprintf(stderr, "Thread %d done\n", index);

        return 0;
    }

    #define MAX_THREADS (5)

    int main(int argc, char *argv[])
    {
        HANDLE h_workThread[MAX_THREADS];

        for (int i = 0; i < MAX_THREADS; i++)
        {
            h_workThread[i] = CreateThread(NULL, 0, threadEntry, (LPVOID)(INT_PTR)i, 0, NULL);
            Sleep(1000);
        }

        WaitForMultipleObjects(MAX_THREADS, h_workThread, TRUE, INFINITE);
        printf("All done. Good bye\n");
        return 0;
    }
    

    So, I confirmed that Stefan will be getting the same sequence of values from rand() over and over again, across reboots.

    Why then is he still seeing 10-20% variability? Beats me, something smells here … I would assume that from run to run, there should be very little variability.
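
For what it’s worth, if the runs were meant to differ, each run would need its own seed, something along these lines (a generic sketch, not a suggested change to the benchmark):

    #include <cstdio>
    #include <cstdlib>
    #include <ctime>

    int main() {
        // Without this call, every run of the program sees the identical rand() sequence.
        std::srand(static_cast<unsigned>(std::time(nullptr)));

        for (int i = 0; i < 5; ++i)
            std::printf("%d\n", std::rand());
        return 0;
    }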

    Thoughts?

    From the “way-back machine”

    We’ve all heard the expression “way-back machine” and some of us know about tools like the Time Machine. But, did you know that there is in fact a “way-back machine” ?

    From time to time I have used this service, and it is one of those nice corners of the web that is worth knowing about. I was reminded of it this morning in a conversation, and that led to a nice walk through history.

    If you aren’t familiar with the “way-back machine”, take a look at http://www.archive.org/web/web.php

    Some day you may wonder what a web page looked like a while ago and the “way-back machine” is your solution.

    Here are some interesting ones that I looked at today. The Time Magazine in February 1999.

    Time magazine web page in February 1999

    Ever wondered what the Dataupia web page looked like in February 2006? I know someone who would get a kick out of it so I went and looked it up.

    The Dataupia web page from February 2006

    Check it out sometime, the way back machine is a wonderful afternoon diversion.

    The “way back” archive is not complete, alas!

    Florida recounts

    Diluting education standards in Kansas (part II)

    Coming in the aftermath of the efforts to outlaw the teaching of evolution in the state, this story about Kansas is unfortunate.

    http://blog.acm.org/archives/csta/2009/09/post_4.html

    http://usacm.acm.org/usacm/weblog/index.php?p=741

    The state has significant employment problems, and the recent downturn in the economy has hit the aircraft industry in the state hard. With a nascent IT start-up scene there, this is probably the worst publicity that the state could have hoped for.

    Who are you, really? The value of incorrect response in challenge-response style authentication.

    We all know how service providers validate the identity of callers. But, how do you validate the identity of the service provider on the other end of the telephone? In computer security, an inexact challenge-response mechanism is a useful way of validating identities; a wrong answer, and the response to a wrong answer, can tell you a lot.

    Service providers (electricity, cable, wireless phone, POTS telephone, newspaper, banks, credit card companies) are regularly faced with the challenge of identifying and validating the identity of the individual who has called customer service. They have come up with elaborate schemes involving the last four digits of your social security number, your mailing address, your mother’s maiden name, your date of birth and so on. The risks associated with all of these have been discussed at great length elsewhere; social security numbers are guessable (see “Predicting Social Security Numbers from Public Data”, Acquisti and Gross), mailing addresses can be stolen, mother’s maiden names can be obtained (and in some Latin American countries your mother’s maiden name is part of your name) and people hand out their dates of birth on social networking sites without a problem!

    Bogus Parking ticket

    So, apart from identity theft by someone guessing at your identity, we also have identity theft because people give out critical information about themselves. Phishing attacks are well documented, and we have heard of the viruses that have spread based on fake parking tickets.

    Privacy and Information Security experts caution you against giving out key information to strangers; very sound advice. But, how do you know who you are talking to?

    Consider these two examples of things that have happened to me.

    1. I receive a telephone call from a person who identifies himself as being an investment advisor from a financial services company where I have an account. He informs me that I am eligible for a certain service that I am not utilizing and he would like to offer me that service. I am interested in this service and I ask him to tell me more. In order to tell me more, he asks me to verify my identity. He wants the usual four things and I ask him to verify in some way that he is in fact who he claims to be. With righteous indignation he informs me that he cannot reveal any account information until I can prove that I am who I claim to be. Of course, that sets me off and I tell him that I would happily identify myself to be who he thinks I am, if he can identify that he is in fact who he claims to be. Needless to say, he did not sell me the service that he wanted to.

    2. I call a service provider because I want to make some change to my account. They have “upgraded their systems” and, having looked up my account number and “matched my phone number to the account”, they put me through to a real live person. After discussing how we will make the change that I want, the person then asks me to provide my address. Ok, now I wonder why that would be? Don’t they have my address? Surely they’ve managed to send me a bill every month.

    “For your protection, we need to validate four pieces of information about you before we can proceed”, I am told.

    The four items are my address, my date of birth, the last four digits of my social security number and the “name on the account”.

    Of course, I ask the nice person to validate something (for example, tell me how much my last bill was) before I proceed. I am told that for my own protection, they cannot do that.

    Computer scientists have developed several techniques that provide “challenge-response” style authentication where both parties can convince themselves that the other is who they claim to be. For example, public-key/private-key encryption provides a simple way to do this. Either party can generate a random string and provide it to the other, asking the other to encrypt it using the key that they hold. The encrypted response is returned to the sender, and that is sufficient to guarantee that the peer does in fact possess the appropriate “token”.

    In the context of a service provider and a customer, this would give the provider a mechanism to verify that the “alleged customer” is in fact the customer he or she claims to be, while the customer also verifies that the provider is in fact the real thing.
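    To make the idea concrete, here is a minimal sketch of one round of such an exchange. It is my own illustration, not something any provider actually runs; it uses a shared secret and HMAC-SHA256 (via OpenSSL) rather than the public-key variant described above, and the secret, names and flow are all assumptions.

        #include <openssl/evp.h>
        #include <openssl/hmac.h>
        #include <openssl/rand.h>
        #include <stdio.h>
        #include <string.h>

        // One round of challenge-response over a shared secret. The party that
        // issues the challenge can verify that the other side holds the secret
        // without the secret ever being spoken or transmitted.
        static void answerChallenge(const unsigned char *secret, int secretLen,
                                    const unsigned char *challenge, size_t challengeLen,
                                    unsigned char *response, unsigned int *responseLen)
        {
            HMAC(EVP_sha256(), secret, secretLen,
                 challenge, challengeLen, response, responseLen);
        }

        int main()
        {
            // Stand-in for whatever token both parties hold.
            const unsigned char sharedSecret[] = "not-a-real-secret";
            const int secretLen = (int)(sizeof(sharedSecret) - 1);

            // The customer picks a random challenge and gives it to the provider.
            unsigned char challenge[16];
            RAND_bytes(challenge, (int)sizeof(challenge));

            // The provider computes its answer using the shared secret ...
            unsigned char fromProvider[EVP_MAX_MD_SIZE];
            unsigned int fromProviderLen = 0;
            answerChallenge(sharedSecret, secretLen, challenge, sizeof(challenge),
                            fromProvider, &fromProviderLen);

            // ... and the customer recomputes the expected answer and compares.
            unsigned char expected[EVP_MAX_MD_SIZE];
            unsigned int expectedLen = 0;
            answerChallenge(sharedSecret, secretLen, challenge, sizeof(challenge),
                            expected, &expectedLen);

            if (fromProviderLen == expectedLen &&
                memcmp(fromProvider, expected, expectedLen) == 0)
            {
                printf("The other party holds the shared secret.\n");
            }
            else
            {
                printf("The other party failed the challenge.\n");
            }
            return 0;
        }

    Run the same round in the other direction (provider challenges customer) and both sides have validated each other, without anyone reading out a date of birth or the last four digits of anything.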

    The risks in the first scenario are absolutely obvious; I recently received a text message (an attack vector) that read

    “MsgID4_X6V…@v.w RTN FCU Alert: Your CARD has been DEACTIVATED. Please contact us at 978-596-0795 to REACTIVATE your CARD. CB: 978-596-0795”

    A quick web search does in fact show that this is a phishing event. Whether anyone has tracked that phone number down to find out if it belongs to a poor unsuspecting victim or to the perpetrator, I am not sure.

    But, what does one do when the email or phone call really is from a vendor with whom one has a relationship?

    One could always contact a psychic to find out if it is authentic; say, the New England SEERs.

    http://twitter.com/ILNorg/status/3786484194

    http://twitter.com/NewEnglandSEERs

    RT @Lucy_Diamond 978-596-0795 do not return call on text. Call police or your real bank. Caution bank fraud. Never give your pin to anyone

    RT @Lucy_Diamond Warning bank scam via cell phone text remember never give your pin number to anyone. Your bank won’t ask you they know it

    Agent: For your security please verify some information about your account. What is your account number

    Me: Provide my account number

    Agent: Thank you, could you give me your passphrase?

    Me: ketchup

    Agent: Thank you. Could you give me your mother’s maiden name

    Me: Hoover Decker

    Agent: Thank you. and the last four digits of your SSN

    Me: 2004

    Agent: Just one more thing, your date of birth please

    Me: February 14th 1942

    Agent: Thank you

    Agent: For your security please verify some information about your account. What is your account number

    Me: Provide my account number

    Agent: Thank you, could you give me your passphrase?

    Me: ketchup

    Agent: That’s not what I have on the account

    Me: Really, let me look for a second. What about campbell?

    Agent: No, that’s not it either. It looks like you chose something else, but similar.

    Me: Oh, of course, Heinz58. Sorry about that

    Agent: That’s right, how about your mother’s maiden name.

    Me: Hoover Decker

    Agent: No, that’s not it.

    Me: Sorry, Hoover Bissel

    Agent: That’s right. And the last four of your social please

    Me: 2007

    Agent: thank you, and the date of birth

    Me: Feb 29, 1946

    Agent: Thank you

    Agent: For your security please verify some information about your account.

    What is your account number

    Me: Provide my account number

    Agent: Thank you, could you give me your passphrase?

    Me: ketchup

    Agent: Thank you. Could you give me your mother’s maiden name

    Me: Hoover Decker

    Agent: Thank you. and the last four digits of your SSN

    Me: 2004

    Agent: Just one more thing, your date of birth please

    Me: February 14th 1942

    Agent: Thank you. Could you verify the address to which you would like us to ship the package.

    (At this point, I’m very puzzled and not really sure what is going on)

    Me: Provided my real address (say 10 Any Drive, Somecity, 34567)

    Agent: I’m sorry, I don’t see that address on the account, I have a different address.

    Me: What address do you have?

    Agent: I have 14 Someother Drive, Anothercity, 36789.

    The address the agent provided was in fact a previous location where I had lived.

    What has happened is that the cable company (like many other companies these days) has outsourced the fulfillment of the orders related to this service. In reality, all they want is to verify that the account number and the address match! How they had an old address, I cannot imagine. But, if the address had matched, they would have mailed a little package out to me (it was at no charge anyway) and no one would be any the wiser.

    But, I hung up and called the cable company on the phone number on my bill and got the full fourth-degree. And they wanted to talk to “the account owner”. But, I had forgotten what I told them my SSN was … Ironically, they went right along to the next question and later told me what the last four digits of my SSN were 🙂

    Someone said they were interested in the security and privacy of my personal information?

    We people born on the 29th of February 1946 are very skeptical.

    Faster or Free

    I don’t know how Bruce Scott’s article showed up in my mailbox but I’m confused by it (happens a lot these days).

    I agree with him that too much has been made about whether a system is a columnar system or a truly columnar system or a vertically partitioned row store and what really matters to a customer is TCO and price-performance in their own environment. Bruce says as much in his blog post

    Let’s start talking about what customers really care about: price-performance and low cost of ownership. Customers want to do more with less. They want less initial cost and less ongoing cost.

    Then, he goes on to say

    On this last point, we have found that we always outperform our competitors in customer created benchmarks, especially when there are previously unforeseen queries. Due to customer confidentiality this can appear to be a hollow claim that we cannot always publicly back up with customer testimonials. Because of this, we’ve decided to put our money where our mouth is in our “Faster or Free” offer. Check out our website for details here: http://www.paraccel.com/faster_or_free.php

    So, I went and looked at that link. There, it says:

    Our promise: The ParAccel Analytic Database™ will be faster than your current database management system or any that you are currently evaluating, or our software license is free (Maintenance is not included. Requires an executed evaluation agreement.)

    To be consistent, should that not make the promise that the ParAccel offering would provide better price-performance and lower TCO than the current system or the one being evaluated? After all, that is what customers really care about.

    I’m confused. More coffee!

    Oh, there’s more! Check out this link http://www.paraccel.com/cash_for_clunkers.php

    Talk about fine print:

    * Trade-in value is equivalent to the first year free of a three year subscription contract based on an annual subscription rate of $15K/user terabyte of data. Servers are purchased separately.

    Desktop Email Client vs. GMail: Why desktop mail clients are still better than the GMail interface

    The GMail user interface, while very good and much better than some of the others, lacks some useful functionality needed to make it a complete replacement for a desktop email client like Outlook.

    Joe Kissell writes in CIO magazine about the six reasons why desktop email clients still rule. He opines that he would take a desktop email client any day and provides the following reason, and six more:

    Well, there is the issue of outages like the one Gmail experienced this week. I like to be able to access my e-mail whenever I want. But beyond that, webmail still lags far behind desktop clients in several key areas.

    Much has been written by many on this subject. As long ago as 2005, Cedric pronounced his verdict. Brad Shorr had a more measured comparison that I read before I made the switch about a month ago. Lifehacker pronounced the definitive comparison (IMHO it fell flat, their verdicts were shallow). Rakesh Agarwal presented some good insights and suggestions.

    I read all of this and tried to decide what to do about a month ago. Here is a description of my usage.

    My Usage

    1. Email Accounts

    I have about a dozen. Some are through GMail, some are on domains that I own, one is at Yahoo, one at Hotmail and then there are a smattering of others like aol.com, ZoHo and mail.com. While employed (currently a free agent) I have always had an Exchange Server to look at as well.

    2. Email volume

    Excluding work related email, I receive about 20 or 30 messages a day (after eliminating SPAM).

    3. Contacts

    I have about 1200 contacts in my address book.

    4. Mobile device

    I have a Windows Mobile based phone and I use it for calendaring, email and as a telephone. I like to keep my complete contact list on my phone.

    5. Access to Email

    I am NOT a Power-User who keeps reading email all the time (there are some who will challenge this). If I can read my email on my phone, I’m usually happy. But, I prefer a big screen view when possible.

    6. I like to use instant messengers. Since I have accounts on AOL IM, Y!, HotMail and Google, I use a single application that supports all the flavors of IM.

    Seems simple enough, right? Think again. Here is why, after migrating entirely to GMail, I have switched back to a desktop client.

    The Problem

    1. Google Calendar and Contact Synchronization is a load of crap.

    Google does some things well. GMail (the mail and parts of the interface) is one of those things. They support POP and IMAP, they support consolidation of accounts through POP or IMAP, and they allow email to be sent as if from another account. They are far ahead of the rest. With Google Labs you can get a pretty slick interface. But Calendar and Contact Synchronization really suck.

    For example, I start off with 1200 contacts and synchronize my mobile device with Google. How do I do it? By creating an Exchange Server called m.google.com and having it provide Calendar and Contacts. You can read about that here. After synchronizing the two, I had 1240 or so contacts on my phone. Ok, I had sent email to 40 people through GMail who were not in my address book. Great!

    Then I changed one person’s email address and the wheels came off the train. It tried to synchronize everything and ended up with some errors.

    I started off with about 120 entries in my calendar; after synchronizing every hour, I now have 270 or so. Each time it felt that contacts had been changed, it refreshed them, and I now have seventeen appointments tomorrow saying it is someone’s birthday. Really, do I get extra cake or something?

    2. Google Chat and Contact Synchronization don’t work well together.

    After synchronizing contacts my Google Chat went to hell in a hand-basket. There’s no way to tell why, I just don’t see anyone in my Google Chat window any more.

    Google does some things well. The GMail server side is one of them. As Bing points out, Google Search returns tons of crap (not that Bing does much better). Calendar, Contacts and Chat are still not in the “does well” category.

    So, it is back to Outlook Calendar and Contacts and POP Email. I will get all the email to land in my GMail account though, nice backup and all that. But GMail Web interface, bye-bye. Outlook 2007 here I come, again.

    The best of both worlds

    The stable interface between a phone and Outlook, a stable calendar, contacts and email interface (hangs from time to time but the button still works), and a nice online backup at Google. And, if I’m at a PC other than my own, the web interface works in a pinch.

    POP all mail from a variety of accounts into one GMail account and access just that one account from both the telephone and the desktop client. And install that IM client application again.

    What do I lose? The threaded message format that GMail has (that I don’t like). Yippie!

    ParAccel TPC-H 30TB results challenged!

    Watch the feeding frenzy now that ParAccel’s TPC-H 30TB results have been challenged.

    Before I had my morning cup of coffee, I found an email message with the subject “ParAccel ADVISORY” sitting in my mail box. Now, I’m not exactly sure why I got this message from Renee Deger of GlobalFluency so my first suspicion was that this was a scam and that someone in Nigeria would be giving me a million dollars if I did something.

    But, I was disappointed. Renee Deger is not a Nigerian bank tycoon who will make me rich. In fact, ParAccel’s own blog indicates that their 30TB results have been challenged.

    We wanted you to hear it from us first.  Our TPC-H Benchmark for performance and price-performance at the 30-terabyte scale was taken down following a challenge by one of our competitors and a review by the TPC.  We executed this benchmark in collaboration with Sun and under the watch of a highly qualified and experienced auditor.   Although it has been around for many years, the TPC-H specification is still subject to interpretation, and our interpretation of some of the points raised was not accepted by the TPC review board.

    None of these items impacts our actual performance, which is still the best in the industry.  We will rerun the benchmark at our earliest opportunity according to the interpretations set forth by the TPC review board. We remain committed to the organization, as outlined in a blog post by Barry Zane, our CTO, here: http://paraccel.com/data_warehouse_blog/?p=74#more-74.

    Please see the company blog for a post by David Steinhoff in the office of the CTO for further info: http://paraccel.com/data_warehouse_blog/?p=104#more-104

    I read David Steinhoff’s blog as well. He writes

    This last week, our June 2009 30TB results were challenged and found to be in violation of certain TPC rules. Accordingly, our results will soon be taken down from the TPC website.

    We published our results in good faith. We used our standard, customer available database to run the benchmark (we wanted the benchmark to reflect the incredible performance our customers receive). However, the body of TPC rules is complex and undergoes constant interpretation; we are still relatively new to the benchmark game and are still learning, and we made some mistakes.

    While we cannot get into the details of the challenges to our results (TPC proceedings are confidential and we would be in violation if we did), we can state with confidence that our query performance was in no way enhanced by the items that were challenged.

    We can also say with confidence that we will publish again, soon.

    Now, some competitor or competitor loyalist may try to make more of this than there is … we all know there is the risk of tabloid frenzy around any adversity in a society with free speech … and we wouldn’t have it any other way.

    It is unfortunate that the proceedings are confidential and cannot be shared. I hope you republish your results at 30TB.

    Contrary to a long list of pundits, I believe that these benchmarks have an important place in the marketing of a product and its capabilities.

    I reiterate what I said in a previous blog post

    ParAccel’s solution is based on high-performance trilithium crystals. (Note: I don’t know why this wasn’t disclosed in the full disclosure report).

    I hope the challenge was not about the trilithium crystals and the fact that you didn’t disclose it in the full disclosure report.

    A meaningful networking platform for children with special needs.

    Parlerai creates a secure network of family, friends and caregivers surrounding a child with special needs. Help spread the word, you can make a difference.

    My friend Jon Erickson and his wife Kristin have been working on Parlerai for some time now. As parents of a child with special needs, Kristin and Jon were frustrated by the lack of tools available for parents like them to track and share information and, ultimately, to collaborate with the people making a difference in their daughter’s life. They have built the Parlerai platform, which will profoundly and positively impact the lives of the individuals in their family, and the lives of parents around the world.

    Parlerai creates a secure network of family, friends and caregivers surrounding a child with special needs and uses innovative and highly personalized tools to enhance collaboration and provide a highly secure method of communicating via the Internet. There are tools for parents, tools for children and tools for caregivers. As the world’s first Augmentative Collaboration service, Parlerai (French for “shall speak”) gives children with special needs a voice.

    Parlerai is for children with special needs. They use it to communicate and access online media and information in a safe environment.

    Parlerai is for parents of children with special needs. They use it to share and communicate information about their children. It gives them a single medium through which they can collaborate with others who are part of their children’s world.

    Parlerai is for consultants, educators and caregivers of children with special needs. With Parlerai, they are able to collaborate with others who are part of the child’s world.

    In an interview with Dana Puglisi, Kristin describes the value of Parlerai

    “For consultants, educators, and other caregivers, imagine trying to make a difference in a child’s life but only seeing that child for half an hour each week. How do you really get to know that child? How can you make the most impact? Now imagine having access to information provided by others who work with that child – her ski instructor, her physical therapist, her grandmother, her babysitter. Current information, consistent information. Imagine how much more you could learn about that child. Imagine how much greater your impact could be.”

    Here is what you can do to help

    I read some statistics about children with special needs some months ago. Based on those numbers, each and every one of us knows of at least three people whose children have special needs. Parlerai is a system that could profoundly change the lives of these children and their parents and caregivers. Do your part, and spread the word about Parlerai. You can make the difference.

    Spread the word, visit http://tinyurl.com/parlerai (a link to this blog post) or http://www.parlerai.com.

    More information is also available at Jon’s blog at http://jon-erickson.blogspot.com/ and Kristin’s blog at http://kristingerickson.blogspot.com/

    Facebook’s OpenID integration is not very useful!

    Is facebook paying lip-service to OpenID integration?

    Preamble:

    I don’t know a damn thing about OpenID and less about web applications, but I do know a thing about security, authentication and the like. And, I am a facebook user and like most other internet consumers in this day and age, I am not thrilled that I have to remember a whole bunch of different user names and passwords for each and every online location that I visit.

    Facebook’s OpenID integration

    Once, and for one last time, you login to facebook with your existing credentials. Let’s say that is your username <joe@joeblow.com> and then you go over to Settings and create your OpenID as a Linked Account. In the interests of full disclosure, I am still working with Gary Krall of Verisign who posted a comment on my previous post describing problems with this linking process. I am sure that we will get that squared away and I can get the linking to work.

    Once this linkage is created, a cookie is deposited on your machine indicating that authentication is by OpenID. You wake up in the morning, power up your PC and launch your browser and login to your OpenID provider, and in a second tab, you wander over to http://www.facebook.com.

    The way it is supposed to work is this: something looks at the OpenID cookie deposited earlier and uses that to perform your validation.

    Are you nuts?

    As I said earlier, I don’t know a lot about building Web Applications. But, methinks the sensible way to do this is a little different from the way facebook is doing things.

    Look, for example, at news.ycombinator.com. On the login screen, below the boxes for username and password is a button for other authentication mechanisms. If you click that, you can enter your OpenID URL and voila, you are on your way. No permanent cookies involved.

    Now, if you didn’t have your morning Joe, and you went directly to news.ycombinator.com and tried to enter your OpenID name, you are promptly forwarded to your OpenID provider’s page to authenticate. Over, end of story. No permanent cookies involved.

    Ok, just to verify, I did this …

    I went to a friend’s PC (never used it before), pointed his browser (Firefox) to news.ycombinator.com, clicked the button under login/password, entered my OpenID name, and sure enough it vectored over to Verisign Labs. I logged in and voila, I’m on Hacker News.

    Am I missing something? It sounds to me like facebook is paying lip service to OpenID. Either that or they just don’t get it?

    A farewell to facts, rigor and civility

    I read this article from Mike (The Fein Line) Feinstein’s blog and it occurs to me that we have collectively chosen to ignore facts, rigor and civility.

    For a whole bunch of reasons (beyond the one that Mike mentions), I have to say that I too am happy and proud that I live in Massachusetts.

    OpenID first impressions

    I have been meaning to try OpenID for some time now, and I just noticed that Verisign was doing a free two-factor authentication offering (what they call VIP Credentials) for mobile devices, so I decided to give it a shot.

    I picked Verisign’s OpenID offering; in the past I had a certificate (document signing) from Verisign and I liked the whole process so I guess that tipped the scales in Verisign’s favor.

    The registration was a piece of cake, downloading the credential generator to my phone and linking it to my account was a breeze. They offer a File Vault (2GB) free with every account (Hey Google, did you hear that?) and I gave that a shot.

    I created a second OpenID and linked it to the same mobile credential generator (very cool). Then I figured out what to do if my cell phone (and mobile credential generator) were to be lost or misplaced; it was all very easy. Seemed too good to be true!

    And, it was.

    Facebook allows one to use an external ID for authentication. Go to Account Settings and Linked Accounts and you can setup the linkage. Cool, let’s give that a shot!

    Facebook OpenID failure

    So much for that. I have an OpenID, anyone have a site I could use it on?

    Oh yes! I could login to Verisignlabs with my OpenID 🙂

    Update:

    I tried to link my existing “Hacker News” (news.ycombinator.com) account with OpenID and after authenticating with verisign, I got to a page that asked me to enter my HN information which I did.

    I ended up with a page: http://news.ycombinator.com/openid_merge and a single word “Unknown” on the screen.

    I’ve got to be doing something wrong. Someone care to tell me how badly messed up I am?

    Update (sept 11)

    Thanks to help from Gary (who commented on this post), I tried the “linking” on Facebook again and this time it worked a little better.

    But, I still have to enter my password when I want to login to facebook. Something is still not working the way it should.

    Still the same issue with Hacker News.

    What I learnt from the GMAIL outage

    We have all heard about it, many of us (most of us) were affected by it, and some of us actually saw it. This makes it a fertile subject for conversation, in person and over a cold pint, or online. I have read at least a dozen blog posts that explain why the GMAIL outage underscores the weakness of cloud computing and foretells its imminent failure. I have read at least two that explain why this outage proves the point that enterprises must have their own mail servers. There are graphs showing the number of tweets at various phases of the outage. There are articles about whether GMAIL users can sue Google over this failure.

    The best three quotes I have read in the aftermath of the Gmail outage are these:

    “So by the end of next May, we should start seeing the first of the Google Outage babies being born.” – Carla Levy, Systems Analyst

    “Now I don’t look so silly for never signing up for an e-mail address, do I?” – Eric Newman, Pile-Driver Operator

    “Remember the time when 150 million people couldn’t use Gmail for nearly ten years? From 1993–2003? And every year before that? Unimaginable.” – Adam Carmody, Safe Installer

    Admittedly, all three came from “The Onion“.

    This article is about none of those things. To me, the GMAIL outage could not have come at a better time. I have just finished reconfiguring where my mail goes and how it gets there. The outage gave me a chance to make sure that all the links worked well.

    I have a GMAIL account and I have email that comes to other (non-GMAIL) addresses. I use GMAIL as a catcher for the non-GMAIL addresses using the “Imports and Forwarding” capability of GMAIL. That gives me a single web based portal to all of my email. The email is also POP3’ed down to a PC, the one which I am using to write this blog post. I get to read email on my phone (using its POP3 capability) from my GMAIL account. Google is a great backup facility, a nice web interface, and a single place where I can get all of my email. And, if for any reason it were to go kaput, as it did on the 1st, in a pinch, I can get to the stuff in a second or even a third place.

    But, more importantly, if GMAIL is unavailable for 100 minutes, who gives a crap. Technology will fail. We try to make it better but it will still fail from time to time. Making a big hoopla about it is just plain dumb. On the other hand, an individual could lose access to his or her GMAIL for a whole bunch of reasons; not just because Google had an outage. Learn to live with it.

    So what did I learn from the GMAIL outage? It gave me a good chance to see a bunch of addicts, and how they behave irrationally when they can’t get their “fix”. I’m a borderline addict myself (I do read email on my phone, as though I get things of such profound importance that instant reaction is a matter of life and death). The GMAIL outage showed me what I would become if I did not take some corrective action.

    Technology has given us the means to “shrink the planet” and make a tightly interconnected world. With a few keystrokes, I can converse with a person next door, in the next state or half way across the world. Connectivity is making us accessible everywhere; in our homes, workplaces, cars, and now, even in an aircraft. It has given us the ability to inundate ourselves with information, and many of us have been over-indulging (to the point where it has become unhealthy).

    Full Corn Moon

    Today was the “Full Corn Moon” and I was lucky enough to get a clear night.

    Full Corn Moon, September 4, 2009

    You can see a larger image by clicking on the picture above.

    The moon is barely visible in the first image, hidden by the trees. In the second and the third, it makes it out of the trees just as the sun is setting behind me.

    And here I was complaining about Verizon Wireless!

    I thought poorly of Verizon Wireless service and features (though I’ve been a customer for a while).

    That all changed when I read this.

    Hey Verizon, where do I sign up for that two year contract? But, could you give me a cellular data plan with more than 5GB per month please …

    Till January 1, 2010, Bye-Bye Linux!

    Bye-Bye Ubuntu! Back to Windows …

    Around the New Year each year, the fact that I am bored silly leads me to do strange things. For the past couple of years, in addition to drinking a lot of Samuel Adams Double Bock or Black Lager, I kick Windows XP, Vista or whatever Redmond has to offer and install Linux on my laptop.

    For two years now, Ubuntu has been the linux of choice. New Year 2009 saw me installing 8.10 (Ignorant Ignoramus) and later upgrading to 9.04 (Jibbering Jackass). But, I write this blog post on my Windows XP (Service Pack 3) powered machine.

    Why the change, you ask?

    This has arguably been the longest stint with Linux. In the past (2007) it didn’t stay on the PC long enough to make it into work after the New Year holiday. In 2008, it lasted two or three weeks. In 2009, it lasted till the middle of August! Clearly, Linux (and Ubuntu has been a great part of this) has come a very long way towards being a mainstream replacement for Windows.

    But, my benchmark for ease of use still remains:

    1. Ease of initial installation
      • On Windows, stick a CD in the drive and wait 2 hours
      • On Linux, stick a CD in the drive and wait 20 minutes
      • Click mouse and enter some basic data along the way
    2. Ease of setup, initial software update, adding basic software that is not part of the default distribution
      • On Windows, VMWare (to run linux), Anti-Virus, Adobe things (Acrobat, Flash, …)
      • On Linux, VMWare (to run windows), Adobe things
    3. Ease of installing and configuring required additional “stuff”, additional drivers
      • printers
      • wacom bamboo tablet
      • synchronization with PDA (Windows ActiveSync, Linux <sigh>)
      • On Windows, DELL drivers for chipset, display, sound card, pointer, …
    4. Configuring Display
      • resolution, alignment
    5. Configuring Mouse and Buttons
    6. Making sure that docking station works
      • On Windows, DELL has some software to help with this
      • On Linux, pull your hair out
    7. Setting Power properties for maximum battery life
      • On Windows, what a pain
      • On Linux, CPU Performance Applet
    8. Making sure that I login and can work as a non-dangerous user
      • On Windows, group = Users
      • On Linux, one who can not administer the system, no root privileges
    9. Setup VPN
      • On Windows, CISCO VPN Client most often. Install it and watch your PC demonstrate all the blue pixels on the screen
      • On Linux, go through the gyrations of downloading Cisco VPN client from three places, reading 14 blogs, web pages and how-to’s on getting the right patches, finding the compilers aren’t on your system, finding that ‘patch’ and system headers are not there either. Finally, realizing that you forgot to save the .pcf file before you blew Windows away so calling IT manager on New Year’s day and wishing him Happy New Year, and oh, by the way, could you send me the .pcf file (Thanks Ed).
    10. Setup Email and other Office Applications
      • On Linux, installing a Windows VM with all of the Office suite and Outlook
      • On Windows, installing all of the Office suite and Outlook and getting all the service packs
      • Install subversion (got to have everything under version control). There’s even a cool command line subversion client for Windows (Slik Subversion 1.6.4)
    11. Migrate Mozilla profile to new platform
      • Did you know that you can literally take .mozilla and copy it to someplace in %userprofile% or vice-versa and things just work? Way cool! Try that with Internet Exploder!
    12. Restore SVN dump from old platform

    OK, so I liked Linux for the past 8 months. GIMP is wonderful, the Bamboo tablet almost just works, the system booted really fast, … I can go on and on.

    But, some things that really annoyed me with Linux over the past 8 months

    • Printing to the Xerox multi-function 7335 printer and being able to do color, double-sided, stapling, etc. The setup is not for the faint-hearted
    • Could I please get the docking station to work?
    • Could you please make the new Mozilla part of the updates? If not, I have Firefox and Shrill-kokatoo or whatever the new thing is called. What a load of horse-manure that upgrade turned out to be. On Windows, it was a breeze. Really, open-source-brethren, could you chat amongst yourselves?

    But the final straw was that I was visiting a friend in Boston and wanted to whip out a presentation and show him what I’d been up to. External display is not an easy thing to do. First you have to change resolutions, then restart X, then crawl through a minefield, sing the national anthem backwards while holding your nose. Throughout this “setup”, you have to be explaining that it is a Linux thing.

    Sorry folks, you aren’t ready for mainstream laptop use, yet. But, you’ve made wonderful improvement since 2007. I can’t wait till December 31, 2009 to try this all over again with Ubuntu 9.10 (Kickass Knickerbockers).

    OpenDNS again

    More on OpenDNS

    I wasn’t about to try diddling the router at 11:30 last night but it seemed like a no-brainer to test out this OpenDNS service.

    So, look at the little button below. If it says “You’re using OpenDNS” then clearly someone in your network (your local PC, router, DNS, ISP, …) is using OpenDNS. The “code” for this button is simplicity itself

    <a title="Use OpenDNS to make your Internet faster, safer, and smarter." href="http://www.opendns.com/share/">
         <img style="border:0;"
              src="http://images.opendns.com/buttons/use_opendns_155x52.gif"
              alt="Use OpenDNS" width="155" height="52" />
    </a>

    So, if a lookup for images.opendns.com is sent to my ISP’s DNS servers, it resolves one way, and if it is sent to OpenDNS, it resolves a different way. That means the image retrieved differs based on whether or not you are using OpenDNS.
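    If you would rather check this without loading the button, a tiny Winsock program can print what the name resolves to on your machine. This is a sketch of mine, not anything OpenDNS ships:

        #include <winsock2.h>
        #include <ws2tcpip.h>
        #include <stdio.h>
        #include <string.h>

        #pragma comment(lib, "ws2_32.lib")

        // Print the address(es) that images.opendns.com resolves to on this
        // machine. Through OpenDNS the name should resolve to an
        // OpenDNS-operated host that serves the "You're using OpenDNS" image;
        // through another resolver you should get a different answer.
        int main()
        {
            WSADATA wsa;
            if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
                return 1;

            struct addrinfo hints;
            memset(&hints, 0, sizeof(hints));
            hints.ai_family = AF_INET;
            hints.ai_socktype = SOCK_STREAM;

            struct addrinfo *result = NULL;
            if (getaddrinfo("images.opendns.com", "80", &hints, &result) == 0)
            {
                for (struct addrinfo *p = result; p != NULL; p = p->ai_next)
                {
                    struct sockaddr_in *addr = (struct sockaddr_in *)p->ai_addr;
                    printf("images.opendns.com resolves to %s\n", inet_ntoa(addr->sin_addr));
                }
                freeaddrinfo(result);
            }
            else
            {
                fprintf(stderr, "lookup failed\n");
            }

            WSACleanup();
            return 0;
        }

    Compare the output before and after pointing your router at OpenDNS and you can see the trick at work.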

    Use OpenDNS

    Step 1: Setup was trivial. I logged in to my router and deposited the DNS server addresses and hit “Apply”. The router did its usual thing and voila, I was using OpenDNS.

    Step 2: Setup an account on OpenDNS. Easy, provide an email address and add a network. In this case, it correctly detected the public IP address of my router and populated its page. It said it would take 3 minutes to propagate within the OpenDNS network. There’s an email confirmation, you click on a link, you know the drill.

    Step 3: Setup stats (default: disabled)

    All easy, web page resolution seems to be working OK. Let me go and look at those stats they talk about. (Click on the picture below to see it at full size).

    Stats don't seem to quite work!

    August 17th you say? Today is September 1st. I guess tracking 16 billion or so DNS queries for two days in a row is a little too much for their infrastructure. I can suggest a database or two that would not break a sweat with that kind of data inflow rate.

    July 30, 2009: OpenDNS announces that for the first time ever, it successfully resolves more than 16 Billion DNS queries for 2 days in a row.

    (source: http://www.opendns.com/about/overview/)

    So far, so good. I’ve got to see what this new toy can do 🙂 Let’s see what other damage this thing can cause.

    Content Filtering

    Nice, they support content filtering as part of the setup. That could be useful. Right now, I reduce annoyances in my browsing experience with a suitably crafted “hosts” file (Windows users: %SYSTEMROOT%\system32\drivers\etc\hosts).

    127.0.0.1       localhost
    127.0.0.1       ad.doubleclick.net
    127.0.0.1       hitbox.com
    127.0.0.1       ai.hitbox.com
    127.0.0.1       googleads.g.doubleclick.net
    127.0.0.1       ads.gigaom.com
    127.0.0.1       ads.pheedo.com
    [... and so on ...]

    I guess I can push this “goodness” over to OpenDNS and reduce the pop-up crap that everyone will get at home. (click on image for a higher resolution version of the screen shot)

    Content Filtering on OpenDNS

    Multiple Networks!

    Very cool! I can setup multiple networks as part of a single user profile. So, my phone and my home router could both end up being protected by my OpenDNS profile.

    I wonder how that would work when I’m in a location that hands out a non-routable DHCP address, such as a workplace. I guess the first person to register the workplace’s public IP will see the traffic for everyone at that workplace who has a per-PC OpenDNS setting and shares the same public IP address? Unclear; that may be awkward.

    Enabling OpenDNS on a Per-PC basis.

    In last night’s post, I had questioned the rationale of enabling OpenDNS on a per-PC basis. I guess there is some value to this because OpenDNS provides me a way to influence the name resolution process. And, if I were to push content filtering onto OpenDNS, then I would like to get the same content filtering when I am not at home; e.g. at work, at Starbucks, …

    I’m sure that over-anxious-parents-who-knew-a-thing-or-two-about-PC’s could load the “Dynamic IP” updater thing on a PC and change the DNS entries to point to OpenDNS before junior went away to college 🙂

    So, I guess that per-PC OpenDNS settings may make some sense; it would be nice to have an easy way to enable this when required. I guess that is a fun project to work on one of these days when I’m at Starbucks.
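    For what that project might look like, here is a strawman sketch. It assumes a network interface named “Local Area Connection” (yours will probably differ), shells out to netsh with the well-known OpenDNS resolver addresses, and needs to run with administrator rights; the program and its flow are mine, not something OpenDNS provides.

        #include <stdlib.h>
        #include <string.h>

        // Toggle OpenDNS on or off for one interface by shelling out to netsh.
        // No argument (or anything other than "off") points DNS at OpenDNS;
        // "off" reverts to the DHCP-supplied servers.
        int main(int argc, char *argv[])
        {
            if (argc > 1 && strcmp(argv[1], "off") == 0)
            {
                // Go back to whatever DNS servers DHCP hands out.
                return system("netsh interface ip set dns name=\"Local Area Connection\" dhcp");
            }

            // 208.67.222.222 and 208.67.220.220 are the OpenDNS resolvers.
            system("netsh interface ip set dns name=\"Local Area Connection\" static 208.67.222.222");
            return system("netsh interface ip add dns name=\"Local Area Connection\" 208.67.220.220 index=2");
        }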

    Jeremiah says, “I do it on a per computer basis because I occasionally need to disable it. (Mac OS X makes this super quick with Locations)”. Jeremiah, please do tell why you occasionally need to disable it. Does something fail?

    Other uses of OpenDNS

    kuzux writes in response to my previous post that OpenDNS can be used to get around restrictive ISP’s. That is interesting because the ISP’s that have put these restrictions in place are likely only blocking name resolution and not connection and traffic. Further, the ISP’s could just as well find the IP addresses of the sites like OpenDNS and put a damper on the festivities. And, one does not have to look far to get the IP addresses of the OpenDNS servers 🙂

    Two thoughts come to mind. First, if the authorities (in Turkey, as kuzux said) put the screws on OpenDNS, would they pour out the DNS lookup logs for the specific IP addresses that they cared about (both source and destination)? Second, a hypothetical country with huge manufacturing operations, a less than stellar human rights record, and a huge number of take-out restaurants all over the US (that shall remain nameless) could take a dim view of a foreigner who had OpenDNS on his or her laptop and was able to access “blocked” content.

    Other comments

    Janitha Karunaratne writes in response to my previous post that, “Lot of times if it’s a locked down limited network, they will intercept all DNS traffic, so using OpenDNS won’t help (their own default DNS server will reply no matter which DNS server you try to reach)”. I guess I don’t understand how that could be. When a machine attempts a DNS lookup, it addresses the packet specifically to the DNS server that it is targeting. Are you suggesting that these “locked down limited networks” will intercept that packet, redirect it to the in-house DNS server and have it respond?

    David Ulevitch (aka Founder and CTO of OpenDNS) writes, “Yeah, there are all kinds of reasons people use our service. Speed, safety, security, reliability… I do tests when I travel, and have even done it with GoGo on a VA flight and we consistently outperform”. Mr. Ulevitch, your product is wonderful and easy to use. Very cool. But, I wonder about this performance claim. When I am traveling, potentially sitting in an airport lounge, a hotel room, a coffee shop or in a train using GPRS based internet service with unknown bandwidth, is the DNS lookup a significant part of the response time to a page refresh, mail message download, (insert activity of your choice)?

    My Point of View

    It seems to work, it can’t hurt to use it at home (if my ISP has a problem with it, they can block traffic to the IP address). It doesn’t seem to be appreciably faster or slower than my ISP’s DNS. I’ll give it a shot for a while and see what the statistics say (when they get around to updating them).

    OpenDNS is certainly an easy to use, non-disruptive service and is worth giving a shot. If you use the free version of OpenDNS (ie don’t create an account, just point to their name servers), there is little possible downside; if you get on a Virgin Atlantic flight, you may need to disable it. But, if you use the registered site, just remember that OpenDNS is collecting a treasure trove of information about where you go, what you do, and they have your email address, IP address (hence a pretty good idea of where you live). They already target advertising to you on the default landing page for bad lookups. I’m not suggesting that you will get spammed up the wazoo but just bear in mind that you have yet another place where a wealth of information about you is getting quietly stored away for later use.

    But, it is a cool idea. Give it a shot.

    OpenDNS and paid wireless services.

    Why would anyone ever want to use OpenDNS? I just don’t get it. But, there must be a good reason that I’m totally missing.

    I stumbled on this post, not sure how.

    http://thegongshow.tumblr.com/post/176629519/virgin-america-inflight-wireless

    I didn’t know about OpenDNS, but it seems like a strange idea; after all, why would someone not use the DNS server that came with their DHCP lease? It seems like a fairly simple idea: override the DHCP-provided DNS servers and force the machine to use the servers that OpenDNS provides. But why would anyone care to do this on a PC-by-PC basis? That seems totally illogical!

    And the problem that the author of the blog (above) mentioned is fairly straightforward. A PC joins the Virgin wireless network and attempts to go to a web page. If the page’s IP address happens to be cached, the connection request comes back with an HTML redirect to a named page (asking one to cough up money); that name is not cached, so it triggers a DNS lookup, which goes to OpenDNS. Those servers (hard-coded into the network configuration) are not reachable because the user has not yet coughed up said money. Conversely, if the initial lookup was not satisfied from a cached IP address, then the DNS lookup (which would have been sent to OpenDNS) would not have made it very far either, for the same reason. One way or the other, the page asking the user to cough up cash would never show up.

    So, could one really use OpenDNS behind a pay-for-internet service? Not unless you can change the preferred DNS servers in the network configuration without a reboot (a reboot starts the DHCP cycle over again, and on high-volume WiFi access services a new DHCP request will expire the previous lease).

    But, to the more basic issue, why ever would someone enable this on a PC-by-PC basis? I can totally understand a system administrator using this at an enterprise level; potentially using the OpenDNS server instead of the ones that the ISP provided. Sure beats me! And there sure can’t be enough money in showing advertisements on the redirect page for missing URL’s (or there must be a lot of people who fat-finger URL’s a lot more than I do).

    As for the ability to watch my own DNS traffic on a dashboard, I just don’t get it. Definitely something to think more about … They sure seem to be a successful operation, so there must clearly be some value in the service they offer, to someone.