This document contains only my personal opinions and calls of judgement, and where any comment is made as to the quality of anybody's work, the comment is an opinion, in my judgement.
Yesterday I was replacing the one-disk storage unit in an Ubuntu based workstation, and this was at first somewhat complicated because the system boots, using GRUB2, from /boot and / filesystems on LVM2 logical volumes. This can be handled by the LVM2 module for GRUB2, but it requires a careful set of updates because the new volume group and logical volume labels changed: such are the benefits of having multiple layers of software virtualization.
I did all this by attaching the flash SSD as a new SATA target, then setting up on it a new volume group, logical volumes and filetrees, and then copying the data across with rsync.
To fix the initrd images, the GRUB2 configuration and the bootloader install it is easiest if the new flash SSD is already the primary/boot unit on the workstation, so I recabled it as such, and then used the GRML live image to boot the system and run the relevant commands in a chroot to the flash SSD's new / filetree. It is a bit sad that it is non-trivial to do all this except in the default case, but then I wanted to take a shortcut.
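For the record, the sequence of commands is roughly the following. This is only a sketch: the device name /dev/sdb and the volume group and logical volume names newvg, root and boot are placeholders for whatever the new flash SSD actually got, and /etc/fstab on the new / filetree is assumed to have already been updated to refer to the new names:

# from the GRML live image: activate the new volume group and
# mount the new / and /boot filetrees
vgchange -a y newvg
mount /dev/newvg/root /mnt
mount /dev/newvg/boot /mnt/boot

# make the live system's device, process and sysfs trees visible
# inside the chroot, then enter it
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt /bin/bash

# inside the chroot: regenerate the initrd images and the GRUB2
# configuration, and reinstall the bootloader on the new unit
update-initramfs -u -k all
update-grub
grub-install /dev/sdb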
So far so ordinary. But then I also noticed that the just-copied system was way out of date, and I used the aptitude command in the chroot to upgrade it. That failed, because some packages (notably the awful udev by the despicable GregKH) cannot be successfully upgraded unless DBUS is running, or Upstart is running, or both. Neither is available under GRML, which is an old-style Debian derivative, or in a GRML chroot. The resulting system could not boot properly.
Despite some efforts to fix it by hand, I had to restore the / filetree as it was, reapply the configuration updates, and once rebooted, I could upgrade the outdated packages.
All of the above is quite regrettable, because the moral of the story is that the boot process and the package installation process have become a lot more fragile: the tangle of dependencies, mostly undocumented of course, has become a lot deeper than in the traditional UNIX days.
PS While writing this I noticed in an IRC channel another complication:
[16:01] jena | hello everybody, yesterday I upgraded linux-image deb. Today when booting i get a lot of repeated message like "volume group sda5_crypt not found \n skipping volume group sda5_crypt \n unable to find LVM volume sda5_crypt/ENCRYPTED_OS" and (initramfs) promt, what should I do? [PS /dev/sda5 is a luks partition on which there are lvm for / and /home, /boot is on /dev/sda2] (should I ask in a different channel?)
A review of a recent fast M.2 flash SSD device is quite interesting and insightful. Some of the more notable points:
The 512GB model boasts a sequential read speed of 2.5GB/s and a write speed of 1.5GB/s
That seems quite impressive to me, and it is confirmed by synthetic benchmarks; it is even better than the 1.2GB/s rate I have observed for the MacBook Air.
The 950 Pro series is backed by a five-year limited warranty, the warranty is limited by how much data can be written. The 256GB version is capped at 200 terabytes written (TBW) while the larger 512GB model doubles that to 400 TBW.
That seems fairly generous to me: over five years the 512GB model has a daily quota of around 220GB/day, which seems good endurance even for a hard disk (see the quick calculation after these points).
To ensure overheating does not become an issue like it did with the XP941, Samsung has equipped the 950 Pro series with Dynamic Thermal Throttling Protection technology, which monitors and maintains the drive's optimal operating temperature.
As previously reported, in a flash SSD the drive's own CPU chip, which has to checksum and encrypt every block, can become very hot during sustained transfers, so it is good that this unit will slow down to prevent CPU overheating.
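As a back-of-the-envelope check of that endurance figure, assuming the quoted 400 TBW is spread evenly over the five-year warranty (decimal terabytes, ignoring leap days):

$ # 400 TBW over 5 years, expressed in GB/day
$ echo $(( (400 * 1000) / (5 * 365) ))
219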
Because of the small size and low power draw it is clearly targeted at laptops, but it seems a bit overkill for a laptop.
Because of the endurance and temperature limits it seems mostly aimed at short-to-medium bursts of peak performance, but I guess that with suitable cooling it could have wider applicability, for example for collecting data at fairly high speed.
Reading a piece about container based deployment of applications caused me a lot of amusement, in particular this astute observation:
2. Are you operationally prepared to manage multiple languages/libraries/repositories?
Last year, we encountered an organization that developed a modular application while allowing developers to “use what they want” to build individual components. It was a nice concept but a total organizational nightmare — chasing the ideal of modular design without considering the impact of this complexity on their operations.
The organization was then interested in Docker to help facilitate deployments, but we strongly recommended that this organization not use Docker before addressing the root issues. Making it easier to deploy these disparate applications wouldn’t be an antidote to the difficulties of maintaining several different development stacks for long-term maintenance of these apps.
That encapsulates unwittingly the major reason why containerized deployment formats exist: precisely to avoid dealing in the short term with the difficulties of maintaining several different development stacks.
The biggest appeal of containers is that they shift the burden of doing the latter to the developers of the applications, by first letting them “use what they want” and then having them take complete responsibility for the maintenance of all that.
But of course developers, once they have delivered the application, are rarely going to be involved in its long term maintenance: many will be moved to different projects, or leave, or will simply not have the incentives to look after the application stacks they have already delivered.
The second biggest appeal of container deployment is that it front-loads the benefits and back-loads the costs: deployment is cheap and easy, while it is maintaining the deployed applications and their stacks that is made harder, because they are isolated in each container.
Containerization is mostly a new memetic wrapper for an old and established technique: DLL-hell, the practice of delivering an application with a whole copy of its development stack and deploying it in a separate subtree of the filesystem, so that each application on the host system has a whole separate environment. This has been the standard deployment technique for a long time on consumer-oriented platforms like MS-Windows and Google Android.
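A hypothetical illustration of what that looks like on the host filesystem; the paths and library versions below are made up for the purpose:

$ # each application subtree carries a private copy of its stack
$ ls /opt/app1/lib /opt/app2/lib
/opt/app1/lib:
libpython2.7.so  libssl.so.1.0.0

/opt/app2/lib:
libpython2.6.so  libssl.so.0.9.8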
It is the DLL-hell technique implicit in containerization that has the effect of shifting environment stack maintenance from the people responsible for the hosting to the people responsible for the development of applications. This works under the following assumptions: that the applications and their bundled stacks will need little or no maintenance during their useful life, and that when maintenance does become unavoidable the whole environment gets replaced rather than updated.
These conditions usually apply to workstations and mobile phones, which are usually replaced as a whole every few years rather than maintained. When however the DLL-hell deployment strategy or its equivalents like containers are used in a business environment, the usual result is not maintenance but its opposite: once deployed, applications and their development stacks get frozen in place, usually because it is so expensive to maintain them, and freezing each application in an isolated environment can at least confine the impact of many issues to that single application. For businesses that, as commonly happens, deploy hundreds of applications that can be a good strategy.
And this is easily observed in practice, because containerization has been happening in business IT for a long time, with physical containers: that is, the very common practice of deploying one application per server. It is such a common practice that the success of virtualization has been based on consolidation: turning physical containers, each running one application and its development stack, into virtual containers as virtual machines. Full virtualization was required to avoid any issues in running legacy operating system layers.
The main objective of consolidation into virtual machines of an existing set of deployments onto physical machines was of course to keep those deployments frozen in place, by creating a virtual host as close as possible to the physical one.
Less complete virtualization, as in virtual containers like Docker, is essentially a reversal of that practice: instead of deploying applications and their development stacks to their own physical containers and then virtualizing those to achieve consolidation, consolidation into virtual containers is the starting point, with applications delivered already frozen in place inside a container together with their custom development stacks. This has at least the advantage that, by targeting a standalone container from the start, a more efficient, less complete virtualization technique becomes feasible.
That is, as argued above, the purpose of containers is indeed to ignore the difficulties of maintaining several different development stacks for long-term maintenance of these apps.
Therefore containerization is an advantageous strategy when
that long-term maintenance
is not important: for
example for applications that have a relatively short expected
life, or for situations where their long-term maintainability
is not otherwise relevant in the short term.
For example in many projects driven by developers long-term
maintainability is not that relevant, because the developers
are rewarded on speed of delivery and the looks of their
applications, and the ability to “use what they want” to
build individual components
helps them towards that.
I keep seeing incorrect uses of the concept of performance where it is confused with the concept of speed or some other simple ratio, and the two are very different. A contrived example might illustrate: compare how quickly a lithe sprinter, an endurance runner, and a heavier-built athlete can carry various weights over various distances; each will do best at a rather different combination of weight and distance.
This illustrates three important points:
Performance is an envelope: it is about how large the region covered by a set of metrics is. In the example the region is defined by the metrics of time, distance, and weight carried. I have previously listed what I think are the important dimensions of filesystem performance envelopes.
Performance envelopes can be anisotropic, and usually are so, because specialized performance envelopes are either quite useful or inevitable. The performance envelopes of a lithe sprinter, an endurance runner, and a heavier-built athlete may be equivalent in terms of the area they cover, but they are likely to have rather different shapes. The sprinter will have a performance envelope with better times for shorter distances and lower weights, for example.
Note: performance envelopes with a small number of dimensions are often illustrated with what are commonly called diamond diagrams, Kiviat or radar charts, or star plots (1, 2).
Because of anisotropy, even if two performance envelopes are equivalent, in some cases one will allow reaching better results than the other; and even when a performance envelope is overall better than another, it may still deliver worse results in cases where the other has an advantage for which it is specialized.
Performance envelope profiles and their anisotropy are particularly important for computer system performance, because that anisotropy is likely to be significant. To begin with computer systems are designed for anisotropy: they are usually not very good at running, digging, or philosophy compared to human beings, but are much better at storing data and repetitive calculations over masses of data.
Thus computer engineers have found that many computer
applications are specialized and benefit from anisotropy, and
sometimes these applications are benchmarks
which matter for sales even if they are unrepresentative of
common uses precisely because they are too anisotropic.
Anisotropy of performance envelopes is particularly common in storage systems, because it seems quite difficult to build storage systems that are fairly good in most respects, but easier to build them so that they are very good at some aspects and quite bad at others.
Storage systems performance anisotropy may be inevitable because they are more or less necessarily based on the replication of multiple identical components, which have anisotropic performance envelopes themselves, and that are connected in parallel in some ways and in serial in others.
For example in disk systems the components are magnetic tracks; there is a single arm with multiple read-write heads, and the organization into concentric cylinders means that all the tracks in a cylinder can be read almost in parallel (almost, because of single-channel electronics), but access to tracks in other cylinders can only be serialized, expensively, by moving the disk arm.
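That anisotropy is easy to observe directly by comparing sequential with random access rates on the same drive. A sketch with fio (my choice of tool here, not something discussed above), where /dev/sdb and the sizes are just placeholders:

# large sequential reads: the favourable corner of the drive's envelope
fio --name=seq  --filename=/dev/sdb --readonly --direct=1 --rw=read     --bs=1M --size=4G
# small random reads: the same drive, a much less favourable corner
fio --name=rand --filename=/dev/sdb --readonly --direct=1 --rw=randread --bs=4k --size=4G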
So a simple metric like speed is not very useful for evaluating the general value of a computer component, and it is not performance.
Unfortunately performance is not a simplistic figure of merit but an often anisotropic volume of possible tradeoffs, and it takes considerable skill to profile it, and judgement to evaluate it with respect to potential applications.
Recently for example in storage we have flash SSD drives that can deliver high speed in many applications but also high latency in sustained writes, and even more recently SMR disk drives that deliver high capacity and low power consumption but also low IOPS-per-TB and high write latency because of RMW (read-modify-write).
Sometimes it amazes me that some people lack awareness (or
pretend to) of the difference between a simplistic metric and
a performance envelope, which is usually quite anisotropic.
After all the saying choose your poison
is ancient, and
fast, cheap, reliable: choose any two
is more recent
but still well known. It is tempting to put the blame on the
shift from an engineering mindset to a marketing one, but then
that's not exactly new either.
Canonical have been funding the development of several interesting tools, for example Mir, Ubuntu Touch, Unity, Juju, MAAS, bzr, Launchpad (plus significant contributions to other projects like Upstart).
While some people think this is because of not-invented-here syndrome, the obvious reason is that most of the tools above filled a gap, and anyhow Canonical transparently wants to build an inventory of intellectual property assets, even if usually licensed as free software.
My impression of most of them is that they are
not-quite-finished, proof of concept
initial versions, in particular because of lack of
documentation, instrumentation and testing, that is overall
maintainability.
Canonical's hope is that the community will help: that volunteers will provide the missing documentation, instrumentation and testing, which are very expensive to do, so crowdsourcing them would be valuable. According to the author of the book The mythical man-month, adding maintainability to a proof-of-concept project requires twice as much time and effort as writing the proof of concept, and ensuring in addition robustness for system-level software costs another factor of two. These seem to me plausible guidelines.
The difficulty with that is that a lot of software
development has just for fun
or scratch your
itch
or reputation-building motivations. Developing
half-finished cool-looking stuff feels like fun and scratches
itches for many people, and there is an abundance of
half-finished cool-looking pet projects.
Not many people enjoy the often dull investment of time and
effort into adding maintainability to the pet projects of
other people. That investment in higher engineering quality
also has the effect of boosting the reputation of the initial
authors of the proof-of-concept pet project, as their name is
on the cover page of the project, rather than that of the
somewhat invisible
contributors that put in
most of the hard and skilled effort of improving quality and
maintainability of what is often an unfinished cool demo.
As a result the usual pattern is for people in the community to develop cool-looking half-begun pet projects, and then for commercial entities to pay some anonymous gophers to turn that into a maintainable product. Doing the opposite is a much harder sell to the community.
The better option for Canonical would be developing fully finished (that is, with good quality documentation and engineering) tools with initially limited functionality. Because of their good engineering they would be easier to extend (thanks to documentation and a clean and robust internal structure) and a good example to potential extenders.
Then the community would have the incentive to enrich them in functionality rather than in quality, and community contributors would be able to point at a specific aspect of functionality as its cover-page authors. Something like EMACS, or UNIX, or Perl, or Python, or git: their development largely happened like that. I hope the reader sees the pattern...
But Canonical, or rather its clever employees, seem to prefer developing applications with half-begun but sprawling functionality, and opaque, messy, unreliable codebases. That is probably good for Canonical employee job security, but less good for attracting voluntary contributions.
I was using my recent 32in B326HUL monitor and I brought it home with some misgivings, because my home computer desk is much smaller than the work desk. Indeed the 32in monitor barely fits on it and needs to be pulled a bit nearer to the viewer, but then my home computer desk has an extendible keyboard slat.
It looks a bit out of place but it is very usable. Since the desk is small it could not have comfortably carried two sensibly sized monitors side-by-side, so it turned out that this was the best way for me to get a lot more screen space.
I am still very happy about this monitor, both the size and the very high quality of the picture from its AMVA display. Some additional notes:
It has no analogue (VGA) input, only digital ones.
There is an audio input jack socket, but the monitor has no audio output (pass-through) jack.
I have just tested the built-in flash SSD of a MacBook Air and I have been astonished to see that it can read and write at 1.2GB/s:
$ time sh -c 'dd bs=1m count=200000 if=/dev/zero of=/var/tmp/TEST; sync'
200000+0 records in
200000+0 records out
209715200000 bytes transferred in 168.928632 secs (1241442599 bytes/sec)

real    2m49.217s
user    0m0.483s
sys     1m58.570s
I also recently read in CustomPC magazine that some recent desktop motherboards have 4× M.2 slots and can therefore reach 700-800MB/s, while 2× M.2 slots and typical SATA flash SSD drives can reach around 550MB/s.
The MacBook Air flash SSD considerably exceeds that and the high transfer rate seems made possible by direct connection to the system bus.
Quick testing also confirmed the manufacturer's claimed transfer rates for a Corsair USB3 32GiB flash SSD stick: reads at 170MB/s and writes at around 70MB/s, which is also quite impressive:
# lsscsi | grep Corsair
[4:0:0:0]    disk    Corsair  Voyager SliderX2  000A  /dev/sdb
# sysctl vm/drop_caches=1; dd bs=1M count=2000 if=/dev/sdb iflag=direct of=/dev/zero
vm.drop_caches = 1
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 12.0935 s, 173 MB/s