Software and hardware annotations 2008 July

This document contains only my personal opinions and calls of judgement, and where any comment is made as to the quality of anybody's work, the comment is an opinion, in my judgement.


080718 Fri Storage systems, indices and sequential access
Having written about the importance of well constructed indices in databases I should add that they are sometimes overused, as they have two costs, one of which has been growing with time: the cost of maintaining them as the table is updated, and the cost of the random accesses incurred when they are traversed. The latter point matters in general: things like RAM and hard disks are still random access in the sense that accessing a random item is easy, but that is becoming slower and slower relative to accessing the next item.
The numbers for disks are pretty clear: an 80MiB/s (that is, 80KiB/ms) disk with an all inclusive access time of 5ms could transfer 400KiB in that dead time, which means that random accesses should happen no more often than once every 400KiB processed to achieve a disk utilization of just 50%.
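The arithmetic, as a small Python sketch that just restates the figures above:

# Break-even between seeking and streaming, using the disk figures above.
seq_rate_kib_per_ms = 80      # 80MiB/s is roughly 80KiB/ms
access_time_ms = 5            # all inclusive random access time
# data that could have been streamed while one random access was serviced:
dead_time_kib = seq_rate_kib_per_ms * access_time_ms    # 400KiB
# for 50% utilization each random access must be "paid for" by at least as
# much sequential transfer as was lost while it was being serviced
print("at most one random access per", dead_time_kib, "KiB for 50% utilization")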
As to RAM, memory subsystems currently achieve around 4GiB/s, or 4KiB/µs, in sequential mode, using the full cache-to-RAM pipeline, which also does read-ahead and write-behind; this involves accessing memory 8B at a time (most current CPUs have 64b wide memory subsystems) at a rate of about 500 accesses per µs. Current CPUs can issue and retire 2-3K instructions per µs, and breaking the sequential stride means losing 100-200 memory cycles, so to achieve 50% RAM utilization random accesses must occur no more often than every 1KiB, which, while two orders of magnitude smaller than the figure for disk, is still quite large.
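The same estimate for RAM, again in Python and again with the figures above taken as given:

# Break-even between striding and streaming, using the RAM figures above.
access_width_b = 8                # 64 bit wide memory subsystem, 8B per access
accesses_per_us = 500             # gives about 4KiB/us sequential
seq_rate_b_per_us = access_width_b * accesses_per_us
stall_cycles = 150                # midpoint of the 100-200 cycles lost per broken stride
stall_us = stall_cycles / accesses_per_us
# data that could have been streamed during one broken stride:
dead_bytes = seq_rate_b_per_us * stall_us
print("about", int(dead_bytes), "B (roughly 1KiB) of streaming lost per random access")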
In practice my experience is that disk and memory utilization are often much worse than 50%. As an extreme example some physics simulation (dynamics, collision detection) I was involved with would manage to run on average 1 instruction every 2-4 cycles even on superscalar CPUs capable of retiring 2-3 instructions per cycle, entirely because of high latency random memory accesses, and that was around 10-20% utilization.
There is a classic data management problem: how to implement efficient selection of all the elements of a table that satisfy a predicate, usually an equality test. It is often tempting to use indices, if available, to pick out just the right elements; the problem however is that this can cause a very large number of random accesses when selectivity is low (a large subset of the table rows is selected) and/or the index selects elements in an order very different from that in which they are physically stored.
The alternative is to scan the whole table sequentially in physical order and just gather the elements that match, as this keeps the memory subsystem streaming, and reading elements that are not needed costs relatively little compared to breaking the sequential stride. A similar logic applies to joins between two tables, for which there are essentially only two algorithms: sort both tables and match them in a single sequential scan, or match every element against every other element. The latter strategy is usually good only if one of the two tables is rather small and fits into low latency memory, as the other will get sequentially scanned and the lookups in the smaller one will usually be random; but still the larger one is scanned sequentially. Sorting by key or address and then doing a sequential sweep is also good in RAM, as I have argued previously, especially for non sequential writing.
Interestingly things may be changing for mass storage: with flash based mass storage the latency of random reads is about the same as, or even rather smaller than, that of (single sector) sequential reads (though random writes can be much, much slower than sequential writes), which rather changes the tradeoffs pertaining to database indices and probably much else besides.
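To make the alternatives concrete, here is a minimal in-memory sketch in Python; the tuple representation and function names are mine, purely for illustration, and a real database engine adds buffering, paging and spilling to disk:

# Tables are lists of (key, payload) tuples; selection by scanning and the
# two join strategies discussed above, all written to favour sequential scans.

def select_scan(table, predicate):
    # Scan the whole table in physical order, keeping accesses streaming
    # instead of chasing an index into random locations.
    return [row for row in table if predicate(row)]

def sort_merge_join(left, right):
    # Sort both tables by key, then match them in one sequential sweep.
    left, right = sorted(left), sorted(right)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][0], right[j][0]
        if lk < rk:
            i += 1
        elif lk > rk:
            j += 1
        else:
            # find the run of equal keys on each side, emit their cross product
            i2, j2 = i, j
            while i2 < len(left) and left[i2][0] == lk:
                i2 += 1
            while j2 < len(right) and right[j2][0] == lk:
                j2 += 1
            out.extend((left[li][1], right[rj][1])
                       for li in range(i, i2) for rj in range(j, j2))
            i, j = i2, j2
    return out

def scan_and_lookup_join(big, small):
    # Good only when 'small' fits in low latency memory: the scan of 'big'
    # streams, while the lookups into 'small' are random but cheap.
    index = {}
    for k, v in small:
        index.setdefault(k, []).append(v)
    return [(bv, sv) for k, bv in big for sv in index.get(k, ())]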
080711 Fri When large application services are unavailable
I have been thinking recently about the implications of large scale application services being rather like banks, and I just read an interesting article that supports the notion by mentioning outages at major service providers and how expensive they have been:

Such technology stalwarts as Yahoo, Amazon.com and Research in Motion, the company behind the BlackBerry, have all suffered embarrassing technical problems in the last few months.

About a month ago, a sudden surge of visitors to Mr. Payne's site began asking about the normally impervious Amazon. That site was ultimately down for several hours over two business days, and Amazon, by some estimates, lost more than a million dollars an hour in sales.

Companies like Google want us to store not just e-mail online but also spreadsheets, photo albums, sales data and nearly every other piece of personal and professional information. That data is supposed to be more accessible than information tucked away in the office computer or filing cabinet.

Last holiday season, Yahoo's system for Internet retailers, Yahoo Merchant Solutions, went dark for 14 hours, taking down thousands of e-commerce companies on one of the busiest shopping days of the year. In February, certain Amazon services that power the sites of many Web start-up companies had a day of intermittent failures, knocking many of those companies offline.

Jesse Robbins, a former Amazon executive who was responsible for keeping Amazon online from 2004 to 2006, says the outcries over failures are understandable.

"When these sites go away, it's a sudden loss. It's like you are standing in the middle of Macy's and the power goes out," he said. "When the thing you depend on to live your daily life suddenly goes away, it's trauma."

The last quote may be true for e-commerce sites, but it is not quite right for cloud computing and application service providers, as there is a fundamental asymmetry between the two: when sites like Amazon or Google are down it is their business that suffers and their users are merely inconvenienced, as with a shop that is without power; but when an application service provider is unavailable, it is like when a bank closes: the users' data become inaccessible, and it is their business that suffers.
For corporate settings I am in general in favour of dispersed provision to minimize the impact of problems (of availability, reliability and performance), and this may be a good option for the wider Internet too: centralized utility style computing services are even more complicated than traditional mass utilities, and when those fail the consequences can be very vast.
But there is another argument: that a shared service may be run much better than many dispersed ones, because in practice most organizations are not that good at running their computing services, or at least less good than Google or Amazon. So perhaps the real choice is between quality and resilience: a very diverse and resilient population of not-so-good independently run services or a few large service providers providing a better shared infrastructure that can however result in much wider outages.
This is not a simple tradeoff, and happens in biological systems too. Usually when the rate of change in environmental conditions is high then diversity and resilience work better, and when that rate is low, a small number of best adapted variants work better (the initial cost matters too). So far for electricity and phone and train systems, and even retail, big shared provision seems to have worked better for the fairly stable, capital rich societies of the first world.
For computing the opposite seems to have happened: the PC revolution was indeed a revolution against unavailable, inflexible central services run as petty empires, and even if the situation with those has improved, PCs are still very common, even if the ambition is to turn them back into terminals. But the only shared services that seem to be justified as such are connectivity services, like the phone system and mobile phones and the road system. My impression is that this will not change much: high level single user oriented services will continue to be largely local and independent, while connectivity oriented ones relying on network effects will move to application service providers.
080709 Wed Good Linux power management
After some experimenting with the various power management options in the Linux kernel I have condensed them into my recently published sabiwatts script. The results are pleasing: doing editing and other light work I can use my Toshiba Satellite U300 laptop (with a 6 cell battery) for 3-4 hours, and 2-3 hours doing hard work like compiling and developing. I have had a look with the useful powertop utility, and while I am typing power consumption is around 12W:
     PowerTOP version 1.9       (C) 2007 Intel Corporation

Cn                Avg residency       P-states (frequencies)
C0 (cpu running)        ( 3.3%)         1500 Mhz     0.0%
C1                0.0ms ( 0.0%)         1000 Mhz   100.0%
C2                0.0ms ( 0.0%)
C3               11.9ms (96.7%)


Wakeups-from-idle per second : 81.5     interval: 15.0s
Power usage (ACPI estimate): 11.7W (3.9 hours) (long term: 15.0W,/3.0h)

Top causes for wakeups:
  20.2% ( 19.1)      <kernel IPI> : Rescheduling interrupts
  18.5% ( 17.5)            kicker : schedule_timeout (process_timeout)
  17.8% ( 16.8)                 X : do_setitimer (it_real_fn)
  13.9% ( 13.1)       <interrupt> : extra timer interrupt
Running find / raises that to 16.5W, and driving up the brightness of the screen adds another 1-2W. The main factors in saving power seem to be keeping the number of wakeups from idle down, minimizing disk activity, and keeping the screen brightness low. As to looking at disk activity, a small aside: the best thing is to direct the debug level in syslog.conf to a named pipe as in:
*.=debug -|/tmp/debug
to prevent the log events themselves going to disk... As to minimizing disk activity, I have found that one should minimize the number of syslog.conf lines that are unbuffered, and the number of running dæmons. To this effect I take advantage of the run levels of sysvinit: I use run level 2 for standalone power saving operation, run level 3 for networked power saving use as a client, and run level 4 for non power saving use with all my server dæmons running. I don't use run level 5, because it is traditionally used to enter GUI mode, and run levels are not really designed to offer combinations of choices (low or normal power, text or GUI), so I enter GUI mode by running xinit after logging in in text mode.
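For the record, the named pipe has to exist before syslogd can write to it, and something should keep reading from it, since a pipe only buffers a small amount in the kernel; a minimal reader sketch in Python, assuming the /tmp/debug path used above:

# Create (if needed) and drain the /tmp/debug named pipe fed by syslog.conf.
import errno
import os

PIPE = "/tmp/debug"
try:
    os.mkfifo(PIPE, 0o600)            # syslogd only writes to it, never creates it
except OSError as e:
    if e.errno != errno.EEXIST:
        raise

# Opening a FIFO for reading blocks until a writer (syslogd) shows up; each
# line is then a debug level log event that never touches the disk.
with open(PIPE) as pipe:
    for line in pipe:
        print(line, end="")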
080708 Tue UML and the future of American Programmers
While chatting about the future impact of cloud computing on third world businesses aimed at first world customers, I was reminded of one of the more optimistic assessments of the future of American programmers, which addresses the issue of how USA programmers can compete with equally good (or not) programmers from other countries where the cost of living is one third to one tenth of that in the USA. Well, I was joking: the real worry in that post is about competition from business analysts and end users, and the solution proposed is to develop UML related expertise:

More importantly to those of us who are already feeling the squeeze from watching so many of our jobs go overseas to countries with lower labor rates, will business analysts soon be competing with us as well? Will tools advance to the point that many business people will be doing their own application development?

I think those who wish to remain on the same career path (corporate developer) must continue to learn the business they are supporting, continue to improve their rapport with business stakeholders through communication skills, and continuously retrain themselves as new technology lifecycles appear. This is not much different than what many of us have already been doing.

If you want to remain in the field of programming stay on top of UML, OCL, & MDA.

Unfortunately it is not as easy as that: as the cost of programmers in the third world is one or a few dollars per hour, very few business analysts or business people in the first world will waste their expensive time trying to write their own software or executable models.
There is a colossal global oversupply of programmers because many people can buy or rent a PC and some books on UML or anything else, download some software, and become self-taught developers, or get cheap training in just about any programming technology including UML. The reason why 20 or 40 years ago programmers were rare and thus expensive was that computers used to be very expensive themselves (including UNIX and telecoms for PCs), and thus only relatively few people could become programmers, because only the wealthiest universities in first world countries could afford the computers.
Encouraging people to get into UML related technologies to compete with analysts and users seems to me a bit wistful when the big issue is that a programmer in the third world costs a lot less than many analysts or users in the first world, and the best way to develop a career in programming is to live in a country with a low cost base. It also seems wistful because model based tools sound like a rather improbable silver bullet in either the visual language or the executable specification aspect, and the idea that analysts and users can develop their own executable specifications or models is very ancient, at least as old as COBOL, which was described as a simple English like language, as some still say:

COBOL is a high-level, English-like language which, when used correctly, can resemble a well-structured novel with appendices, cross-reference tables, chapters, footnotes and paragraphs.

so much so that analysts or users were supposed to be able to write programs directly in it. This did not turn out to be the case, to say the least.
080704 Fri Penny foolish is not pounds wise for applications
I was recently begging some people writing applications that process large image files to use O_DIRECT access to those files where appropriate (which is almost always), and they pointed out that on their workstations (top end laptops and desktops), running one instance of the application locally, performance was acceptable. My point was that the cumulative effect of several users hitting the same fileserver would perhaps have been rather harder to ignore.
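For the record, here is a minimal sketch in Python on Linux of the kind of access I was asking for; the file name and the per-chunk processing are made up, and O_DIRECT requires the buffer, offset and length to be aligned to the device block size, hence the page aligned anonymous mmap buffer:

# Stream a large file with O_DIRECT, bypassing the page cache so one big
# sequential read does not evict everybody else's cached data.
import mmap
import os

CHUNK = 1024 * 1024                    # 1MiB, a multiple of any sane block size

def process(chunk):                    # hypothetical per-chunk processing
    pass

fd = os.open("big_image.raw", os.O_RDONLY | os.O_DIRECT)
buf = mmap.mmap(-1, CHUNK)             # anonymous mapping: page aligned, writable
try:
    while True:
        n = os.readv(fd, [buf])        # read straight into the aligned buffer
        if n == 0:
            break
        process(buf[:n])
finally:
    buf.close()
    os.close(fd)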
Then I was reminded of the scratch my itch mindset that is so popular: as one voice in the wilderness says, applications still suck as they are written with utter disregard for common sense frugality of resource consumption (from file metadata accesses to memory). The waste in each is easily masked by the abundance of resources in current workstations (including recent laptops), but what is hard to notice in a single application can become very annoying in the aggregate, when one wants to take advantage of the boon of modern technology to run more things rather than one thing more slowly (even if not obviously so).
080702 Wed Internet "computing" vs. "publishing", the credit
Ah, I just remembered that as to the distinction between computing and publishing I should have credited one of the classic books on the dot.com boom, Burn Rate by Michael Wolff. Some quotes from reviews illustrate why:

Although Wolff (Where We Stand) was an early believer in the ability of the Internet to deliver powerful content to a mass audience, by the time he resigned from his own company in 1997, he had come to see the Net as more of a transactional medium.

An entrepreneur's 1994-1996 tale of search for IPO; premise rests on essential tension between East Coast (content) and West Coast (technology).

The Internet is not, as the East Coast media companies wanted to believe, a new medium in which "content is king." However well crafted the content, he laments, "We could see the 'user' hover for a second, or half second, and alight on a paragraph and then linger for a moment and be gone. Bolt. Vamoose. Vaporize." Mr Wolff explains, "In some perfect irony we had created a new publishing medium," but there was no one to read it.

The more prosaic reality, Mr. Wolff came to believe, was that the Internet was not a new medium, like television or cinema, but a communications technology, like the telephone.

It was a glorified phone with wonderful and startling commercial possibilities, true, but still a utility or part of the infrastructure rather than a medium for creative endeavor. The Net is for looking up factual information, such as airline flights or stock prices, or buying things, such as computers or, well, sex.

The book indeed describes the difference, even of personal styles, between the East Coast publishing culture (for example Time-Warner) and the West Coast computing culture (for example Microsoft). Now it is quite ironic that transaction oriented, technology driven Amazon was founded by a New York investment banker, while content and publishing oriented Google was founded by Stanford engineering postgrads. Of course I am talking about business models here, because both approaches are heavily technology based.
However, arguably Wolff turned out to be mostly right after all, as in one important way Google is nearly as transaction oriented as Amazon or eBay: the content on which its publishing based business model is founded is about selling things to people. And vice versa, a lot of Amazon's appeal is its loss-leading publishing: the reader reviews, the lists.
080702b Wed Hell full of racks
Hell must be full of racks, and I mean the 42U variety for computer use, not the medieval one.