July 5, 2011
"Many economists consider it respectable to wait until a catastrophe strikes" for, without enough information, "preemptive action will tend to allocate resources inefficiently".
This argument is somewhat circular. A catastrophe occurs precisely because, in hindsight, whatever preemptive actions were taken proved inefficient. But who can fully account for even one known averted catastrophe, let alone identify them all and prove preemptive action to be statistically inefficient?
Also, as Jason Pontin stresses in his essay (*), "it assumes that human beings are infinitely adaptable" and will therefore survive and live to see other and better days. Indeed, whether one looks at an individual, a state, a civilization or even the whole human race, a catastrophe may pass a certain threshold beyond which its effects are irreversible. How can one say the fates of Easter Island and the Classic Maya will never scale up to the whole Earth?
The third flaw in the argument is that heeding a lesson learnt in hindsight, when still possible, may prove even more inefficient, what I call the curse of the retrofit.
Take Sony for instance. According to John Gapper, "the financial cost [of recent hacking attacks] was about $170m and the reputational cost has been greater" (**). Although he does not estimate what more careful software design and programming would have cost Sony upfront, he leaves no doubt it would have been minimal. "This is far from rocket science", "the problem has been a lack of will, or even awareness high up".
Sony is not alone in having shortchanged network security. "The engineers who designed [the Internet]'s underlying technology were concerned about reliable, rather than secure, communications". Now they pay dearly "to make e-mail and e-commerce more secure", as John Markoff reports (***). For some of Jason Pontin's sources, even this is too little, too late. "Until there is a major disaster", expect palliative, not curative solutions.
As a latter-day Cassandra, surveying eprivacy and predicting its slow ruin, I have an obvious interest in debating whether to try to prevent catastrophes.
Notice that Jason Pontin's editorial mentions Internet security and climate change but not eprivacy. Yet the same issue features Emily Singer's article about "the not-so-distant future, when the monitoring tools now typical of a hospital's intensive-care unit will be transformed into wearable gadgets" (****). After spam and scams, what will happen "when [the self-tracking movement's] adherents merge their findings into databases"?
FICO scores us to make us 'take our meds'. Per Stephanie Rosenbloom (*****), Klout weighs our influence before social sites sell us into data slavery to advertisers. It's only a beginning. In a heartbeat, Catbert may well monitor your blood pressure as he interviews you for your next job.
Still, Facebook highly values our privacy. Anonymization techniques are absolutely safe. Injurious discrimination has been eradicated. Why worry? It so happens that the first difficulty in tackling catastrophes is not about preemptive measures. It is to recognize there is a problem in the first place.
Assume, though, that people have some degree of awareness. One must further examine why, despite its flaws, waiting appeals to human nature. The strength of the argument lies in the implicit opposition between action and inaction.
From a logical perspective, there is no difference. See Kevin Sack's report (******)(1). A US court found it "constitutional for Congress to require that Americans buy health insurance". For Judge Boyce F. Martin Jr., "the activity of foregoing health insurance and [...] self-insuring is no less economic than the activity of purchasing an insurance plan". For Judge Jeffrey S. Sutton, "inaction is action. [...] Each requires affirmative choices".
Yet it makes all the difference from a human point of view. Since inaction preserves the status quo, there is a natural bias toward it. Once again, the argument appears tautological. As efficient measures require significant changes in this status quo, taking them would be a true catastrophe in itself.
Thus making the status quo more plastic would enhance our preparedness and decrease the potential for catastrophes. Why is it so hard? One reason, as Harry Eyres reminds us by quoting Pascal, is that "the heart has its reasons which reason knows nothing of" (*******). Indeed, science being but one of the three sources of truth, equating reason with science is the first sign of irrationality in what logically becomes a disastrous debate.
In truth the issue at hand comes from our inability to represent the future. While self-insurance is logically a type of insurance, the costs one bears when buying insurance are present, whereas those of self-insurance lie in the future. Economists do know how to compute the present value of a future outcome. But when the latter is both extremely large and highly uncertain in the near term, it is akin to multiplying infinity by zero (2). Science gives no answer.
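To make this indeterminacy explicit, here is a minimal sketch in LaTeX of the standard discounted expected-value formula; the symbols p (probability of the catastrophe), L (the loss it would inflict), r (the discount rate) and t (the time horizon) are illustrative assumptions of mine, not figures drawn from the sources quoted here.
% present value of an uncertain future loss, under the assumed symbols above
\[
  \mathrm{PV} \;=\; \frac{p \cdot L}{(1+r)^{t}}
\]
% As L grows without bound while p shrinks toward zero for the near term,
% the product p \cdot L takes the indeterminate form "infinity times zero";
% as note (2) observes, the answer then depends entirely on the model one adopts.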
In our Western civilization, the breakdown of its third leg compounds this issue. As critical as a Greek, as litigious as a Roman, we are no longer bound by our Judeo-Christian faith. How then can a sustainable consensus be derived from fiercely held personal convictions? The answer our civilization has evolved is to combine the two sources of truth other than science into the popular selection of the most trustworthy advocate.
This method has its limits. Whether in jury trials or official elections, we now expect nothing but paid-for, partisan presentations, and this fatally undermines our ability to trust. Yet, as Harry Eyres so well understands, "trust is obviously one of the key factors". Without it, no truth can emerge.
Granted, clashes between lawyers or candidates can be revelatory as each party puts forth its best arguments. But not when we decide the future of our community, nay perhaps of humanity, for we are then both judge and party. The other party, the future, is not present; it cannot ever be.
Economic bubbles inflate when an overwhelming majority agree to caricature an absent future. This particular form of status quo is well documented. Preventing the catastrophe which occurs when the real future suddenly presents itself is not any easier. How can one go against the consensus?
In view of this pessimistic conclusion, I can only offer palliative measures. As much as possible, suppress conflicts of interest and decentralize actors and roles through mechanisms. As John Kay notes, "control of structure is a more powerful regulatory tool than monitoring of behavior" (********).
And so do not rely on watching Facebook watch over its users' personal data. Better to mandate that profiles stay private (3); better to listen to a company which conceives of its role as limited than to those whose promises about Big Data hardly hide their desire to emulate Big Oil and Big Corn.
If we do not curb the power of Big Interests, expect Western democracies to end in an irreversible catastrophe, like the Athenian and Roman ones.
Philippe Coueignoux
- (*) ............... The Problem with Waiting for Catastrophes, by Jason Pontin (Technology Review) - July/August, 2011
- (**) ............. Companies make it easy for hackers, by John Gapper (Financial Times) - June 30, 2011
- (***) ........... A Stronger Net Security System Is Deployed, by John Markoff (New York Times) - June 25, 2011
- (****) ......... The Measured Life, by Emily Singer (Technology Review) - July/August, 2011
- (*****) ....... Got Twitter? You've Been Scored, by Stephanie Rosenbloom (New York Times) - June 26, 2011
- (******) ..... Round 1 in Appeals of Health Care Overhaul Goes to Obama, by Kevin Sack (New York Times) - June 27, 2011
- (*******) ... The heart has its reasons, by Harry Eyres (Financial Times) - July 2, 2011
- (********) . A flawed approach to better consumer protection, by John Kay (Financial Times) - June 29, 2011
- (1) for the text of the opinion in full, see Thomas More Law Center v. Barack Hussein Obama, US Court of Appeals for the Sixth Circuit, June 29, 2011
- (2) more scientifically, the answer is contained in a model whose validity cannot be proven. Models are then created and adopted for the answers they give.
- (3) for more details, see US Patents Number 6,092,197 and 7,945,954 and US Patent Application 2009/0076914.