Blog Archives

Plumbing at Fukushima

I’ve always thought that I’ll start taking the idea of a looming technological singularity more seriously once humanity manages to master plumbing. It seems to me that the most annoying issues in life still come from defective plumbing, from toilets to heart attacks. It may have been with us since the Romans, but we’re still not good at it. Expensive house problems are typically of the type: the roof leaks, a pipe leaks (or is clogged), or the walls are wet.

Well, now we have another corroboration of my thesis, one so astonishing I would not have thought it could unfold this way: the Japanese nuclear crisis. What went wrong essentially boils down to pipes that won’t carry water and valves that won’t open, to liquids and ventilation flowing the wrong way, or not flowing at all. On top of that, there are still serious issues with not knowing how much water is in some concrete pools, or how hot it is. The plumbing failed and can’t be fixed in time.

There is actually a wider range of things that leave me speechless about the Fukushima crisis. Set aside the question of whether one could have predicted that backup generators situated in the basement of a reactor won’t be useful in case of a tsunami. Set aside the question of whether a tsunami could have been anticipated for a plant at the ocean’s edge in the country that invented the word “tsunami”. Set aside the wisdom of having automatic earthquake shutdowns for a reactor without ensuring that the backup systems will work. Set aside the problem that even engineering, never mind finance, still seems to have with correlated failures (when backups, or redundancy, or diversification, every once in a while collapse in synchrony). Set aside the idea that things such as nuclear plants ought to be inherently safe, so that one could walk away from them completely, shut all active control systems off, and at worst they’d self-destruct – but nothing else. All of the above is hindsight and spilled milk to some extent, although it is hard to swallow that these things were not taken seriously beforehand.

No: the most stunning feature of this crisis, to me, is how little useful improvisation there was after the accident had degenerated into an emergency, and how little technology was used at all. These reactors did not go out of control instantly, as the Chernobyl reactor did (ironically, the Chernobyl experiment was carried out to ensure that enough cooling capacity would be available in case of an emergency shutdown, but an experimental shutdown of the cooling system lasting less than 60 seconds led to catastrophic failure). No, in Fukushima nothing too bad happened for days, radiation levels were initially benign, and improvisation in cooling and control apparatus was still possible. But all we heard was that gauges went offline, that no one knew the temperatures and radiation levels in key areas, and that even mundane optical tasks such as checking water levels were impossible. Or so it seems. And while the regular and backup cooling were disabled, where were the creative, easy-to-conceive improvisations? Nothing useful happened until not one but four reactors had severe and catastrophic issues, including reactors that had been shut down long before the earthquake.

In the country of electronics and robotics, it was not possible to install on the fly some $20 webcam with a battery to check water levels? Remote-logging, battery-powered thermometers? It was not possible to airlift in some heavy generators? The fire engines that were later used could not have been used from day one, to prevent the catastrophic explosions that made the radiation issue so much more severe later on? Dragging mains power to the site took a week? And once radiation went out of control, humans had to be endangered? We hear of military drones every week, and there are robots for everything from vacuuming floors to driving a car across the country, but here there were no robots to crawl close to the plants and visually inspect them? Never mind robots: no remote-controlled army tanks exist that could drag a fire hose near these cooling ponds? No iPhone-controlled RC helicopter toys could have been used to check conditions on site? One could even have strung a cable between two cranes on either side of a reactor and dragged a hose, or a camera, over the reactors.

So in the end it doesn’t even take sophisticated doubts about impending machine rule and technological singularity. Never mind the possibility of episodic solar storms that fry everything electrical. For now, we’re still at the point where the world’s most advanced nations don’t really master plumbing, and are unable to repair said plumbing fast enough to maintain the smooth functioning of essential systems in case of an accident. All the fancy toys were useless. Humans had to be sent in to get irradiated.

Update: Slate has an article on the ins and outs of robots in nuclear emergencies. I still don’t understand why the (limited) existing worldwide know-how is only now starting to be used. France has apparently had robots ready for this kind of task for many years.

Levels of meaning

There is one ubiquitous feature of biological systems which I believe is underrated in its implications. Worse, it is seen as the proverbial bug instead of a feature – the observation that many biological structures, including fundamental ones, seem to have several functions at once. They are part of several biological subsystems at the same time. I believe that this kind of organization results from a process of organic development, from the way complex systems generically grow. Human-made but organically developing and evolving systems, such as cities, economies, and polities, also share this feature. Human-made but constructed (engineered) systems, by contrast, usually keep functions well separated from each other, for reasons ranging from the logic of production, assembly, and distribution processes to ease of troubleshooting.

It is ironic when the organization of, and functions within, complex biological systems such as genomes are studied with the expectation of finding a “neat” system – as if they had been engineered. I don’t think these systems can be understood when the questions are framed through an engineering metaphor. Of course, in the larger scheme of things biologists assume evolution, not construction. But ground-level research is usually framed with a “one purpose per component” assumption, and this amounts to a fundamental analogy with man-made engineering. The whole idea of phylogeny and ontogeny, though, means that all these biological systems originated in an entirely different way: they grew and evolved. To my mind this evolutionary logic has an even more important corollary: the process of growth comes first, and the functions associated with components come later. The means justify the ends.

If function comes after structure, then meaning does so too, and it becomes easier to understand how several meanings can be overlaid onto the same structure. This can happen either synoptically in the same organism – say, one DNA string, many functions – or over time: one conserved structure assuming different functions in phylogenetically related species. It would of course be impossible to infer the evolutionary model of descent with modification if this were not the case – how would we guess that one organism descended from another if we couldn’t structurally relate their body parts, or their DNA? At the same time, some kind of change in function is also required if there is to be any evolution to begin with. So evolution means change in function, and proof of descent of one evolved structure and function from an earlier one requires a somehow-conserved structure.

In the social sciences the same contrast of engineering vs. growth worldviews also exists; rough examples would be J.-J. Rousseau’s ‘social contract’ model compared to F. A. Hayek’s stance that many social and economic structures result from human action but not from (explicit) human design. Hayek quite explicitly believed that social structures usually result from growth – organic development – rather than from some kind of social engineering (see for example volume I of Law, Legislation and Liberty). And from yet another angle, Christopher Alexander’s life’s work can be read as one long and meandering affirmation that good architecture results from emulating the process of goal-oriented, piecemeal, function-oriented organic growth, and not from design ex ante (see for example volume II of The Nature of Order, ‘The Process of Creating Life’).

For some reason, structures that grow seem to acquire meaning and purpose (function) at many levels during their evolution, while structures designed ex ante tend to be limited in the number of functions per component part. I have at least a partial answer as to why this should generically be so.

Take, for instance, the levels of meaning – in other words, the complexity – of DNA. To quote from “What is a gene?”, a 2006 feature in Nature:

Instead of discrete genes dutifully mass-producing identical RNA transcripts, a teeming mass of transcription converts many segments of the genome into multiple RNA ribbons of differing lengths. These ribbons can be generated from both strands of DNA, rather than from just one as was conventionally thought. Some of these transcripts come from regions of DNA previously identified as holding protein-coding genes. But many do not. “It’s somewhat revolutionary,” says […] Phillip Kapranov. “We’ve come to the realization that the genome is full of overlapping transcripts.”

Simpler examples of this kind of complexity were known long ago: for instance, a string of DNA can become part of several “genes” and end up building different proteins through the process of alternative splicing. And of course the resulting gene products may interfere with the function of a host of other genes. In other words, a single string of DNA can have many levels of meaning.
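To make the splicing point concrete, here is a minimal sketch in Python. Everything in it is invented for illustration – the exon sequences, the two isoforms, the miniature codon table – but it shows the mechanism: the same underlying string, with different segments kept, yields different protein products.

```python
# A toy sketch (not real bioinformatics): the same underlying DNA
# segments, spliced together in different combinations, yield
# different proteins. Sequences and the codon table are made up.

# One stretch of DNA, divided into exons (segments that can be kept).
exons = {
    "e1": "ATGGCT",   # begins with the ATG start codon
    "e2": "GGTTCA",
    "e3": "TGCAAA",
}

# Two splice variants built from the *same* underlying string.
variants = {
    "isoform_A": ["e1", "e2", "e3"],
    "isoform_B": ["e1", "e3"],        # exon e2 spliced out
}

# Tiny, incomplete codon table, just enough for this demo.
CODON_TABLE = {
    "ATG": "M", "GCT": "A", "GGT": "G",
    "TCA": "S", "TGC": "C", "AAA": "K",
}

def translate(dna: str) -> str:
    """Translate a DNA string codon by codon (toy version)."""
    return "".join(CODON_TABLE.get(dna[i:i + 3], "?")
                   for i in range(0, len(dna) - 2, 3))

for name, exon_ids in variants.items():
    mrna = "".join(exons[e] for e in exon_ids)
    print(name, mrna, "->", translate(mrna))

# Output:
# isoform_A ATGGCTGGTTCATGCAAA -> MAGSCK
# isoform_B ATGGCTTGCAAA -> MACK
```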

So in the genome many elements have function, or meaning, at different levels. These levels of meaning don’t have to be hierarchically nested – quite the contrary, often there are simply various degrees of overlap in function. Thus some DNA string will not just have one function in one gene; it will have a role in building several genes. A complete gene may not just participate in building one feature of an organism, but take part in several. And so on – the same gene may serve to regulate a whole set of genes on yet another level of function, or meaning.

This multi-level overlap makes for a crucial difference from the common image many people have of complex systems: nested hierarchies of modular subsystems, with a top-down “chain of command” and increasing numbers of half-autonomous subsystems below it, each containing ever more subsystems. Human-engineered systems are of course often built with exactly this kind of modularity, and for good reason – it lends itself to economies of scale on the production side, and to simple command and control on the user side. Conceptually this is close to Arthur Koestler’s “holons”: modules in nested hierarchies.

But systems that developed organically, whether biological or not in origin, usually aren’t made of this kind of holon. Their “modules”, if one should even call them that, are often interwoven with several functions at different levels, and controlled by several inputs, not just one coming from the top. Even more to the point, biological components are notoriously controlled not by purely exogenous inputs but by their own outputs, in positive/negative feedback loops. None of this is captured by strictly modular models or the “holon” model sensu Koestler.
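As a toy illustration of that last point – a component regulated by its own output – here is a minimal sketch of a negative feedback loop, with rate constants invented purely for the demonstration: a protein whose production falls as its own concentration rises settles at a steady state instead of growing without bound.

```python
# A minimal sketch (invented rate constants, not a real measured
# system) of a component controlled by its own output: a protein
# that represses its own production. The negative feedback makes
# the concentration settle at a steady state.

def simulate(steps: int = 200, dt: float = 0.1) -> list[float]:
    k_prod = 1.0   # maximal production rate
    k_deg = 0.5    # degradation rate constant
    K = 1.0        # repression threshold
    x = 0.0        # protein concentration
    trace = []
    for _ in range(steps):
        production = k_prod / (1.0 + x / K)  # more protein -> less production
        x += dt * (production - k_deg * x)
        trace.append(x)
    return trace

trace = simulate()
print(f"steady-state concentration ~ {trace[-1]:.3f}")  # converges to 1.0
```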

At face value this multifunctionality makes it hard to reduce biological features to the level of single functions – they typically have many – and it makes it even harder to imagine how such an interwoven network of structures and functions could have been built, never mind how it is controlled. But I believe the puzzle comes mainly from framing the problem as a presumed optimal engineering solution rather than as a piecemeal growth process with “good enough” rather than optimal outcomes. And in this way the genome’s multifunctionality can perhaps also stand as one giant metaphor for the entire ill-defined set of systems called “complex”.

A prototype description of a simpler, human system that grows and develops many levels of meaning in various degrees of overlap is told in Christopher Alexander’s “A city is not a tree” (alternate link here). Here Alexander contrasts the mathematical notion of a tree with the structure of a semi-lattice. In Alexander’s terms, a (mathematical, not biological) “tree” is a hierarchical structure where going from one branch to another of the same scale means backtracking to a larger branch they both belong to. In a semi-lattice, by contrast, elements of the same level of scale overlap in such a way that they can connect directly. The real-life example he gives of a (grown) semi-lattice structure is a street crossing, which may serve as a simple crossing, as a location for a newspaper or ice cream stand, as a meeting point, etc. – all different functions not organized as end points in different locations of a tree-like structure, but overlapping in one and the same location. One can easily find reasons for how this organization came into being: the crossing served as a focal point and attraction that had several functions grafted on top of its “primal” or original “function”, and as a result it now has many.

And this is precisely my point: structure comes first and “grows” meaning. Grown cities therefore have overlapping functions in any particular location; each structure has been given, or rather has “found”, many meanings. “Constructed” cities, by contrast, tend to follow the tree-like arrangement: for reasons of planning neatness, functions are deliberately kept separate. Koestler’s holon model, incidentally, is also a “tree”. In cities, a lack of overlap – too much “tree-ness” – then leads to the often-noted lack of connection and sheer inefficiency of planned cities. Maybe overlaps of meaning of this kind are common in organically grown systems simply because they are more efficient than the deliberately disjointed organization of many engineered systems.
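Alexander in fact gives the distinction a set-based form, which fits in a few lines of Python. The sketch below is only an illustration – the “city units” are invented – but the is_tree check encodes his axiom that in a tree any two units are either disjoint or nested, while the grown example violates it through partial overlap.

```python
# A small sketch of Alexander's set-based definitions: a collection
# of "units" (sets) forms a tree if any two are either disjoint or
# nested; a semi-lattice also allows partial overlap. The city units
# below are invented for illustration.

def is_tree(sets: list[set]) -> bool:
    """True if every pair of sets is disjoint or one contains the other."""
    for i, a in enumerate(sets):
        for b in sets[i + 1:]:
            overlap = a & b
            if overlap and overlap != a and overlap != b:
                return False  # partial overlap -> semi-lattice, not a tree
    return True

# A planned city: functions kept in separate, neatly nested zones.
planned = [
    {"homes"}, {"shops"}, {"offices"},
    {"homes", "shops", "offices"},      # the district containing all three
]

# A grown city: the street corner belongs to several overlapping
# units (traffic system, newsstand, meeting point) at the same time.
corner = "street corner"
grown = [
    {corner, "roads", "traffic lights"},   # the traffic system
    {corner, "newsstand"},                 # the newspaper-selling unit
    {corner, "cafe", "benches"},           # the social meeting point
]

print("planned city is a tree:", is_tree(planned))  # True
print("grown city is a tree:  ", is_tree(grown))    # False
```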

Another case for levels of meaning as a proxy for complexity can be made for yet another human product: art. Depth of meaning in poetry, painting, the movies, etc. arguably arises when the piece of art in question has overlapping meanings at different levels, using one and the same structure – not simply when a particular principal storyline is made more ponderous. In writing, a single sentence can have literal meaning, syntactic wordplay or semantic ambiguity, symbolic meaning at different levels, meaning within the paragraph, hyperbolic meaning within the chapter, foreboding meaning within the entire storyline, etc. – the levels of meaning are potentially endless. Nor should we forget that, following Wittgenstein, even the so-called literal meaning of a word or phrase already results from its embedding in the mundane context in which it appears. One could spin this even further: a richer context should automatically increase the potential meaning of a component.

In the end, the richness in levels of meaning per component could serve as a practical way to discriminate between the modes of organic growth and deliberate construction. “Irreducible complexity” may just be the defining feature of organic, evolving growth, as opposed to the simpler, one-meaning-per-component kind of human design. Or, to use a different vocabulary, richness in levels of meaning is another measure of complexity.