November 16, 2010
"In the next few weeks, both the Federal Trade Commission and the Commerce Department are planning to release independent, and possibly conflicting, reports about online privacy", Edward Wyatt and Tanzina Vega report (*).
There is nothing wrong with conflicting advice. Too much harmony, in fact, should cast doubt on the independence of the sources. But the potential for conflict reminds us that good rule making is quite difficult, an art as well as a science.
One danger John Kay backs with historical examples (**) is how naturally a wish to enforce regulatory compliance upon an industry begets a regulator's pliancy to the wishes of that industry. Whether plain or pronaocratic, corruption is unfortunately not the only reason behind such "regulatory capture". The columnist also speaks of "intellectual capture", in essence the reverse of the Stockholm syndrome. All the while claiming it has been taken hostage by a hostile regulator, the industry succeeds in getting its positions shared and defended by its supposed overseer.
John Kay's warnings are born of experience, but I find fault with his declared preference for judges. In the United States at least, judges are not immune to the wiles of pronaocratic pressure and the type of corruption it embeds in the culture. And in the absence of a priori rules, the economic costs of a posteriori adjudications can become burdensome in the extreme, as is known to be the case in the delivery of healthcare in America.
For these reasons, rules remain necessary. Some regulators I am privileged to know do not fall short of John Kay's requirements of "an abrasive personality and considerable intellectual curiosity". And what prevents the judicious executive from hiring such regulators and rotating them, as better run prisons do with their jailers? As far as eprivacy is concerned today, however, I must say John Kay could not be more on key.
According to Edward Wyatt and Tanzina Vega, Lawrence E. Strickling, the assistant Commerce secretary, said he believed in "a strong role for voluntary but enforceable code of conduct". In practice this is an oxymoron. How can a rule be enforced against its very maker? Computer programmers like to say Microsoft code has no bugs, only features. Similarly, if I volunteer my own rules, I never break them, I only update them.
On the other hand, "Jon Leibowitz, the trade commission's chairman, told Congress in July that the commission was exploring [...] whether to propose a 'do not track' feature". Although "marketers hate the idea", or so they say, this is the perfect illustration of John Kay's quote from "Charles Adams, president of the Pacific Railroad": "something having a good sound, but quite harmless, which will impress the popular mind with the idea that a great deal is being done, when, in reality, very little is intended to be done".
The main reason behind this proposal is the very success of the "Do Not Call Registry" (2), of which I am an enthusiastic user, one among millions (3). Alas, remember Cardinal Richelieu's proposed epitaph (4). When copying, one tends to render the worst examples the best and the best the worst. You may agree that privacy notices have been well recast into privacy site policies (1), but perhaps you wonder what is wrong with the new registry.
Few people will deny that a rule cannot deliver unless it is enforced. But how can one enforce a rule if one does not know when it is broken? The "Do Not Call Registry" works because the associated discovery process is simple, immediate and costless. Called by a marketer, the registered consumer needs only write down the name of the company and the time of the call and report them to the FTC, which merely keeps running counts.
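As a thought experiment, this bookkeeping can be sketched in a few lines of Python; the class, the company names and the date are all made up for the illustration:

```python
from collections import Counter

class DoNotCallLog:
    """Toy model of the "Do Not Call" discovery process: a report is just
    a company name and a call time, and the FTC keeps running counts."""

    def __init__(self):
        self.counts = Counter()

    def report(self, company, when):
        # The consumer supplies only what she observed directly:
        # who called, and when.
        self.counts[company] += 1

    def worst_offenders(self, n=3):
        return self.counts.most_common(n)

log = DoNotCallLog()
for company in ["AcmeTel", "AcmeTel", "BlueSky", "AcmeTel"]:
    log.report(company, when="2010-11-16 09:00")
print(log.worst_offenders(1))  # [('AcmeTel', 3)]
```

The point is not the code but its brevity: the entire discovery process fits in a counter, because the consumer's observation is unambiguous the moment the phone rings.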
Assuming a "Do Not Track Registry", the display of an advertisement to a registered user does not, in and of itself, imply the rule has been broken. One has to rely on a tracking-ad self-labeling scheme, such as the "little i icon" proposed by the industry. Further assuming the label appears, what can a consumer do? Report the advertiser? Chances are the advertiser will claim the responsibility for the breach lies with one of the ad networks it uses. But which one? Even if the "little i" discloses it, the process requires some savvy from ordinary consumers.
There is more. When SNCF, the French railroad company, feels free to bundle online reservations with a "voluntary" opt-in to receive its marketing spam, who thinks US Internet content providers will let their viewers opt out of being tracked? Assume, then, that consumers accept tracking ads from the ad networks servicing their favorite sites; are they expected to track down whether unwanted tracking ads come from other ad networks?
There is worse. The ad networks to which advertisers outsource their campaigns may very well be located outside of US jurisdiction. What prevents such a network from going rogue and tracking consumers stealthily? As "Jeff Chester, executive director of the Center for Digital Democracy" declares, it is not easy to "keep [...] your computer free of tracking programs", as his interviewer, Riva Richmond, proceeds to amply illustrate (***).
With such a discovery process, as confusing as it is diffuse, where will the FTC find the budget to verify the validity of a claim? Entitled, like individuals, to contest charges, companies will have endless opportunities to do so. The whole proposal would have warmed Charles Adams' heart.
Are you, readers, challenging me to do better? Please consider these two pragmatic principles. First, focus the whole process on the one and only company the consumer knows ahead of time and without doubt (5): the site which displays an ad, the ad's first beneficiary, which should bear the entire responsibility for it, free to litigate at will against its own suppliers. Second, ask this one company to submit to a very simple rule: for each ad it displays, explain why to the consumer on the spot and therefore, in the case of a tracking ad, give access to the full underlying profile.
To see how valuable this would be, consider how the advertising industry resents the prospect, arguing it would yield "a bunch of ones and zeros [which] to a consumer would mean nothing". Can a user-friendly translation be too much for an industry so transparently afraid of real transparency?
In fact the implementation of my rule is as simple as could be. How difficult can it be for a Google to say "company X won the auction for your search key Y", or for a Yahoo to state "the ad of company X was randomly picked from my current inventory of xx display advertisers"? Even for a tracking ad, the reasons, though more complex, are still readily available, as they but reflect the very execution which delivered the ad in the first place.
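To make the point concrete, here is a minimal Python sketch of what such on-the-spot explanations could look like; the function name, the field names and the message formats are hypothetical, not any site's existing API:

```python
def explain_ad(reason, detail):
    """Return a plain-language explanation for why an ad was displayed.
    'reason' and the keys of 'detail' are illustrative placeholders."""
    if reason == "auction":
        # The search-engine case: an advertiser won the keyword auction.
        return ("company %s won the auction for your search key %s"
                % (detail["winner"], detail["key"]))
    if reason == "inventory":
        # The display case: a random pick from current inventory.
        return ("the ad of company %s was randomly picked from a current "
                "inventory of %d display advertisers"
                % (detail["advertiser"], detail["inventory_size"]))
    if reason == "profile":
        # The tracking case: the explanation must expose the full profile.
        return ("this ad was targeted from your profile, available in full "
                "at %s" % detail["profile_url"])
    return "no explanation available; please report this ad"

print(explain_ad("auction", {"winner": "X", "key": "Y"}))
# company X won the auction for your search key Y
```

Each branch merely echoes a decision the ad server already made in order to deliver the ad, which is why the rule costs so little to implement.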
Expect rogues to come up with phony explanations. Unfortunately for them, it is extremely difficult to lie convincingly and consistently over large samples, where statistics hold sway. If, say, a company like Facebook were made responsible for all the ads it shows, wouldn't it make sure to police what could amount to life-threatening liabilities? Were it to be remiss, wouldn't it make an ideal target for the FTC, helped by academia (6)?
Pronaocracy uses intellectual capture to forbid such solutions. For stronger regulation, amend the constitution towards a working democracy.
- (*) ..... Stage Set for Showdown on Online Privacy, by Edward Wyatt and Tanzina Vega (New York Times) - November 10, 2010
- (**) ... Better a distant judge than a pliant regulator, by John Kay (Financial Times) - November 3, 2010
- (***) . Resisting The Online Tracking Programs, by Riva Richmond (New York Times) - November 11, 2010
- (1) for more details see the Gramm-Leach-Bliley Act entry in the references quoted in my lecture about Marketing Campaigns.
- (2) for more details see the "National Do Not Call Registry" entry in the references quoted in my lecture about Marketing Campaigns.
- (3) why doesn't it apply to election campaigns? Since candidacies are launched and managed like a new bar of soap, why exempt their marketing?
- (4) "He did much which was bad and little which was good. The bad, he did well and the good, quite badly" - see the original in the French Wikipedia
- (5) I am well aware this site could actually be a fake, but covert redirection is a crime in itself quite distinct from today's topic
- (6) academia has a long and successful history of analysing "log dumps" provided by social networks, search engines and sundry other sites.