Some topics are more than Twitter can handle. The other day, I tweeted:
If bills in Congress are enacted, this #databreach wouldn’t require notification: http://bit.ly/qeqRmR I think it should.
I didn’t indicate why I think it should. Nevertheless, Jim Harper of Cato subsequently responded with his own tweet:
Data breach notice is making its way from a functional aid to ID fraud prevention to all-purpose penalty. http://t.co/wNQPd0T
It’s not clear to me why Jim seemingly interpreted my tweet as advocating for breach notifications as a penalty or for punitive purposes. In any event, we had a bit of a back-and-forth on Twitter that I thought I would elaborate upon here.
At the risk of putting words in Jim’s mouth (which he is welcome to spit out if distasteful), Jim seems to argue that businesses should only provide breach notifications if there are damages or harm to the individual. We didn’t quite get to how Jim defines “harm” or “damages,” and he did acknowledge that social embarrassment might constitute “damages,” but consider this tweet of his (emphasis added by me):
Negligence requires duty, breach (of the duty), causation, and damages. A data breach without damages doesn’t matter.
I disagree. There’s a fundamental flaw in thinking that unless you can demonstrate damages or risk of damages, data breaches don’t matter and don’t need to be disclosed. Consumers cannot make informed decisions about whom to trust with their business and their personal information if they are kept in the dark about security failures.
Suppose that on Tuesday, Company A has a data breach in which they discover that a company laptop containing customers’ names, contact details, and types of sexual aids purchased was stolen from an employee’s car where it had been left overnight. Because there were no credit card data on the laptop, Company A would probably not be required to notify consumers of the breach under proposed federal data breach notification laws (although they would be required under some states’ laws that would get preempted by the federal law).
Some might argue that if notifying the consumers doesn’t really help them as there’s nothing they need to do or can do, we shouldn’t require notification. But that neglects to give due weight to the fact that the customer who’s notified might decide not to trust that business again and that by failing to require notification, we will have deprived the consumer of information that they may need and/or want. Additionally, we cannot assume that because the breach is over and done with and there has been no immediate evidence of misuse of data, the same company won’t have the same security failure again next month or the month after that – or a similar breach involving even more sensitive data. Indeed, by allowing businesses to avoid having to disclose breaches, we take away what may be an important incentive to improve data security.
Jim and I agree that breach notifications are not a panacea. But I apparently see more value in them than he does. Then, too, as Javelin studies consistently report year after year, those who receive breach notifications are four times more likely to become victims of card fraud or other problems within the next 12 months. Do we really want to reduce notifications, or do we want to ensure that consumers have a better way to assess their risks? A number of experts have suggested that consumers will get “burnout” and begin to ignore notifications if there are too many, but I think any such burnout is partly a function of how notifications are worded, and that is an issue that can be addressed. Just as our government’s “orange” warnings on homeland security threats tended to be ignored over time, breach notifications will also be ignored if the notices and risk assessments aren’t clear and commensurate with the actual risk.
So no, I’m not saying that businesses should notify customers of breaches because I want to penalize the businesses. I’m saying they should notify because I think consumers should decide whether they need to do anything for the particular set of circumstances and whether they want to continue a business relationship with an entity. I think there are other ways to handle breach notifications so as not to make the process so costly, but cost is not a justification for failing to disclose a breach. Even if you view it as a matter of business ethics and transparency, if a business promised to keep your data secure and failed to do so, they should let you know.
Posit a restaurant that serves hundreds of customers a day. One afternoon, one of the sous chefs absent-mindedly picks up a bottle of cleaning fluid and pours it into the consommé. It simmers a while, but soon enough another chef takes a taste, spits out the broth, and exclaims, “Zees is terreeble! Srow eet out immeediatement!” They throw out the soup and start another batch.
Should the restaurant’s customers get notice of their risk of eating bad food? Had it not been for the lucky happenstance that someone tasted the soup, they may have been served a toxic and potentially carcinogenic potion.
If consumers are to be fully informed, they would get that notice. We don’t want restaurants wantonly exposing customers to these risks and paying no price. Bad food notices would help the restaurant market function.
The article you linked to dealt with the discovery that a foster care outfit’s files were placed next to a recycling bin, exposing the files to collection and exposing the subjects of those files to a risk of exposure. If you want a breach notice for that, you want a “bad food” notice for every malformed soup. You want a “bad driver” notice for everyone sharing the road with someone who has caused an accident. You’ve chosen a very different and peculiar way of managing risk, and a highly inefficient and expensive one.
How do we manage other risks in society? Liability. Put data holders in the shoes of the people they harm so that they must pay for whatever harms they cause. Yes, this means they might make mistakes. This means they might make close calls. But as much as we want data security, we don’t want data oversecurity. If we spend $4 billion as a society to prevent $2.5 billion in losses, we’re worse off. I dare say that more notices, such as to inform people that data about them was subject to a “close call,” would make the society worse off.
Instead of a regulatory regime where we all pick a direction and run that way – data breach notice! – I would prefer to use common law liability. Holders of data should have, and arguably already do have, an obligation to protect the subjects of that data.
As I pointed out in a piece I wrote years ago, data security regulations have already failed in the financial services area. I speculated that Bank of America may have been focusing on its compliance obligations rather than on actually securing data, going on to talk about some common law cases addressing data security more nimbly by putting the correct incentive structure in place.
http://www.cato.org/pub_display.php?pub_id=11476
“Harm” is a well-known legal concept. In Black’s Law Dictionary, it’s “the existence of loss or detriment in fact of any kind to a person resulting from any cause.” Exactly what constitutes harm in the area of privacy invasion is a challenging question, and it should be! It’s bound up with deep societal mores. Courts have grappled and will continue to grapple with privacy harms. The alternative is “simplify and spend.” Instead of letting law and markets find the rules and practices that maximize consumer welfare, just make everybody send a data breach notification whenever there’s a hiccup.
There’s no doubt that you want better security. But we should have better than that. We should have optimal security.
In your restaurant hypothetical, I don’t think consumers need to be notified as the food never left the kitchen. But I can think of other “close call” situations where we might think they should be.
My friend was an inspector of nuclear power plants. On one inspection, he discovered that a critical/emergency key secured to a panel by a chain so it could not be lost or misplaced was in place, but the chain was too short to allow the key to be inserted in the corresponding keyhole. In the event of an emergency, they would not have been able to shut down. Should the public – particularly those who live around the plant – have been informed of that finding even after it was corrected or do we say, “close call, but no need to let people know?”
On another inspection, he discovered two guards asleep at their posts. That was handled internally. Should the public have been informed of that finding, even after the guards were fired and steps taken to prevent a recurrence?
In the foster-care documents incident, I think you may be making an assumption that I don’t make. How do you know that none of those files or documents walked away before the woman reported what she found? Are you sure none of those papers are in the wild? And – more to the point in terms of what I was thinking when I made that original tweet – if that breach would have to be reported if those same medical data were under the control of a HIPAA-covered entity, why shouldn’t it be reported if it’s under the control of a non-HIPAA-covered entity? One of my main themes, Jim, is that it doesn’t matter to me what type of entity lost control of the data. If we’re protecting the data, then the notification requirements should go with the data type, not the entity type.
I’m not totally opposed to a liability approach. The problem has been that people who suffer the expense of time and worry and having to monitor their accounts are not compensated under the existing laws. And how do you compensate someone who finds an embarrassing order exposed on the web? As you note, privacy “harm” is a challenging topic.
So…. if we were to adopt your liability approach, how does the individual even know who is responsible or who to approach for compensation or restoration if they haven’t been notified of a breach but find themselves the victim of financial fraud or medical ID theft, etc.?
I don’t know anyone who thinks that regulating data breach notification is going to significantly improve data security. We need both, and I agree with you that focusing on paper compliance instead of actual security is part of the problem. But withholding information about actual “misses” based on a risk assessment by an entity that is motivated not to see risk if it costs them is not adequately protective of the consumer. Then, too, I suspect that some of what you consider a “close call,” I consider an actual “miss.”
The blog post is an interesting juxtaposition of the pros and cons of disclosure. I thought it might be instructive to frame a similar ethical dilemma in a different context.
A new restaurant, Hypothétique, discovers that it served adulterated food to some diners. The substance was not a poison, allergen, carcinogen, pathogen or dangerous in any way. There is no evidence that anyone became ill, or that they are reasonably likely to become ill as a result of ingesting the substance. The restaurant was advised by legal counsel that it is not required to disclose the incident to public health officials. Hypothétique’s management promptly changed its food handling procedures to prevent a recurrence or a similar event.
Should Hypothétique publicize that it served adulterated food?
Should Hypothétique contact the affected diners and offer to pay for periodic medical check-ups to ensure that the adulterated food does not result in health issues?
As a prospective diner, I’d like to know about this past problem – but what practical purpose is served by my knowing about it if the problem has been corrected?
Would the benefits of disclosure outweigh the potential harm, assuming disclosure would tend to discourage diners from visiting newly opened Hypothétique?
Your hypothetical actually happened with an existing restaurant in my area. They didn’t disclose and their risk assessment was wrong. My daughter wound up in the hospital, we called the department of health, and then the restaurant told the health dept what had happened and that they didn’t think there was any real risk. Uh huh.
As I mentioned to Jim H., I am leery of self-serving risk assessments as we’ve seen too many data breaches where entities claimed “no real risk” followed later by reports of fraud or other problems. Even thefts that were described as opportunistic – “for the hardware” – have been later used for fraudulent purposes. Are some risks seemingly less likely than others? Sure. If a laptop with unencrypted data is in a car that drives off a bridge and sinks to the bottom of a river, I’d say there’s probably a really low risk of misuse of data. Should the entity have to provide free credit monitoring? No, but they should be liable if there turns out to be misuse of the data. The thing is, how will the consumer know who lost their data if they’re not notified?
Can your hypothetical restaurant say, “It’s come to our attention that our opening was not as smooth as we had hoped and that one or two of our dishes didn’t live up to our own high standards for using only the best ingredients. If you were unhappy with your dining experience, please let us know so we can apologize and make it right….” That way, a person who may have been adversely affected (however remote a possibility the restaurant thinks it might be) might make the connection to what they ate and let the restaurant know or take steps to seek treatment.
And why are you gentlemen both using cooking examples? Has someone been complaining about my cooking? 🙂
Interesting that two of your interlocutors—both Jims—chose to use a restaurant analogy. We both wrote at the same time.
What’s clear from your responses to both is that you haven’t devised a general rule for when there should and shouldn’t be notice. It’s not because of any defect of yours. It’s because the circumstances of data breaches are going to be so endlessly variable. A disclosure mandate regime requires either a philosopher king deciding “yeah, disclosure this time” and “no, don’t need to do it,” or it requires a rule that comes close in most situations, overdisclosing some of the time, underdisclosing some of the time.
On the philosopher king side, watch what you wish for, because it might not be you. Amitai Etzioni of the (incoherent to me) communitarian philosophy wrote a book a few years ago in which he assessed various privacy problems from his perspective as philosopher king. It was “The Limits of Privacy,” and he came down on the anti-privacy side in most cases.
The alternative is top-down regulation. We could talk about the problems of detail in a rule that would satisfactorily reach most circumstances, and the imposing legal burden on all companies as the complexity of the rule rises. These costs take away from productive activity that serves consumers more and better everything.
But I think it’s more important to understand where the power is in top-down, government regulatory processes. Note that your own objection is to proposed legislation that would NOT require notice in this situation. As part of a disorganized interest group, you are essentially powerless to affect what the regulation says. The organized interests—the companies likely to do the most consequential breaching—have lobbyists right there. They pony up funds for re-election campaigns. The staff who put on the hearings, write the bills, and work in the agencies anticipate future work in industry.
You’re attracted to regulation because it’s intellectually interesting to think about what the rules would be if you were writing them. Well, you’re not writing them. You won’t be the first to fall for the regulatory scam, and you won’t be the last. I’m trying to help you recognize it.
Common law liability works by creating a general rule—“do no harm”—leaving administration of that rule to the parties who are subject to it. They can do anything they want, but if they harm someone they pay. So when they commit a breach, they look at what data got out, the circumstances of its theft or loss, the circumstances of the people who might be affected, the likelihood of that effect manifesting itself, and so on. They make a judgment: What do we do to protect the data subjects? We want to pay anything less than what we would pay in damages were these harms to come to fruition. We’re aware that our indifference to the plight of our victims may bring punitive damages.
I’m not going to tell you what result that produces from situation to situation to situation. Sometimes it’s a data breach notice, sometimes it’s nothing, sometimes it’s password changes, sometimes, sometimes, sometimes. It’s often a variety of different actions. Why we should think that this one-note response—data breach notification—is the right thing is beyond me. Watch out for your regulation writers in industry to take data breach notification as the one responsibility they have while they insulate themselves from other responsibilities they should have, including paying damages.
I’ve talked about harm. It’s well understood and constantly assessed and reassessed in the courts. The courts, by the way, have little or no dog in the fight. Courts are far more neutral arbitrators than legislatures or agencies, both of which are overrun with lobbyists and pushing their institutional interests while they mouth consumer protection.
You ask about the discovery problem. I love this one because people always think they’ve found the flaw in the common law approach with that issue. What if someone conceals their data breach? A would-be plaintiff has nobody to sue!
The problem is *exactly the same* with regulation. When a data breacher sweeps it under the rug, there’s nobody to enforce the regulation against either!
If they’re found out, they get punitive damages on the litigation side. On the regulatory side, they get whatever politicians decide….
I’m so amused, though, by all the people who think data breach notification requirements are the subject of magical 100% compliance! Data breach notification regulation is subject to the exact same discovery problem as common law liability. The difficulty of finding out who was responsible for a breach differs not one bit from one regime to the other.
I haven’t supported any proposed data breach notification bills, and at this rate, am not likely to. You may have missed my disagreement with CDT, who seems willing to settle for what they think is the best we can get. I’d rather have no federal law than a bad one. But if there is no federal law, businesses are left with a costly patchwork of state laws, and consumers still don’t get the information I think they should be getting. Fewer than 10 states mandate breach notification in the case of paper records. If an employee walks out with a sheaf of names, SSNs, and credit card data, most state breach notification laws do not require notice (although other existing state laws might).
As a privacy advocate, I’m actually least concerned about the breaches that might lead to card fraud. Why not just immunize consumers so they have no liability on debit cards as well as credit cards, and put the onus on merchants to notify the card brands with a list of numbers that have been involved in a breach? Then the banks can decide what to do and simply notify their customers: “We received a report of a breach from [merchant name] saying that [a laptop with your info was stolen, an employee stole your info, etc.], so here’s what we’re doing and here’s what we think you need to do (if anything).” The cost of the notifications would be passed along, in part or in whole, to the merchant, but it would still be a helluva lot cheaper than having individual notifications by merchants who have to comply with numerous state laws, and the consumers would get the info they need. In the past I’ve asked why all merchants don’t just chip into a big pool that provides credit protection monitoring for all consumers instead of each entity having to arrange for it per incident. This could all be done so much more cheaply and efficiently than it has been. In my fantasy model, merchants would also report breaches to the FTC, who could maintain a public listing of breaches with some info/coding as to type of breach, data types, etc. That way researchers could also get the data they/we need to analyze threat trends, etc. For routine hacks/card fraud scenarios, then, I think we really can simplify things.
I’m more concerned about other kinds of harm, and that’s where a liability model fails if it doesn’t recognize the other kinds of harm. In my fantasy model, we’d require individual notice if the data involved in a breach included health information or sensitive personal information such as sexual orientation, religion, if the person’s name or identity were in the context of confidential informant, if a password or PIN was involved, etc. Even if you or others might argue that for some of these situations, “Well, there’s nothing that the individual can really do – the data are now out there,” I think people need to be informed. And maybe, just maybe, if we had such requirements, entities would think harder about whether they really need to collect and retain sensitive personal information or leave it unencrypted.
As to Social Security Numbers, any breach involving SSN should require notice to individuals, but I really wish such collection and storage was just flat out outlawed by now with an order to securely destroy all such existing records used for non-SSA purposes after alternative ID numbers are generated for the entity’s database.
You are quite right about failures with existing legislation but I would say the same for self-regulatory approaches. Look at the restaurant subsector – has PCI DSS really decreased breaches? Visa would say it works, but I don’t see it when I look at the number of incidents. More than 4 years after they issued warnings, we’re still seeing merchants with default configs that get hacked or lack of firewalls, etc. Does more money need to be thrown into security? Undoubtedly, but I don’t expect the federal govt to set the minimum security standards in any way that would work.
We agree on the discovery issue. As I said previously, I do not expect enacting a breach notification law would result in compliance – and I don’t think that any of those arguing for notification expect it, either. I’m not sure why you think people harbor that notion. That said, I think we would have fewer discovery problems than if there is no mandated reporting.
I’m not naive, Jim, and I don’t have much faith in the federal government. But self-regulation hasn’t worked, either. We would probably agree that Visa or MasterCard fining entities after a breach has not improved security, and I think issuers and acquirers need to do a lot more for merchants than what they’re doing to help them have better security. But ultimately, it is the consumers who pay the price for a privacy breach, and I don’t trust the fox to voluntarily report on problems in that hen house. I suspect too many would rather try to hide the breach and risk liability if the consumer finds out than voluntarily disclose.
I didn’t mean to say or imply that you’re naive. Most people who have expertise in an issue area—data security, meat packing, transportation, health care, whatever—don’t also have expertise in regulatory economics, public choice theory, and the other factors that make government regulation such a deeply flawed enterprise.
Most smart people also attribute to themselves the capacity to perceive the needs of society quite a bit better than they actually can. This is what Friedrich Hayek called “the fatal conceit.” Why does it matter? Because in eras not too far past, planners took it upon themselves to organize society as they perceived it should be organized, and millions died.
I don’t expect mass death to arise from your plans (just mass inefficiency and waste, which deprives society of resources and causes neatly hidden morbidity and mortality). But, oh, you are a planner. You’ve got a half-dozen problems solved with a sentence each. Never mind that there is massive complexity to each one, and the information needed to solve these problems isn’t available to anyone.
I’m particularly interested in the “patchwork of state laws” argument. Are you a business lobbyist? Why is it my problem that businesses should have to argue in each state for the laws that work best? Should we abandon the federalist political economy created by the constitution for the sake of business efficiency? I guess if you believe in centralization, you believe in centralization. But I don’t think we should let the whole country rise or fall on one “expert’s” plans or any expert group’s plans.
You are too dismissive of liability, I suspect because you have paid it no mind. (I mean, I’ve been explaining to you what the elements of a simple negligence action are. What’s the chance you know the state of the law on negligent handling of data nationwide?) I regularly see cases where “other kinds of harm” are addressed in common law courts. (I assume you mean mental and emotional distress. If it’s something else, say so. And if you’re inventing a new “harm” to force some security-maximizing outcome, we can talk about that.)
Here’s a case recently filed in which non-economic, emotional and mental harms are surely at issue. Yet you just say flatly “liability fails.” Failure to capture your attention is not the relevant failure.
http://www.techdirt.com/articles/20110801/02434415338/student-sues-former-principal-privacy-rights-violation-showing-surveillance-video-her-having-sex.shtml
Now let me ask you why you are attempting to shift the argument to self-regulation? I didn’t advocate for self-regulation here, nor have I elsewhere. I’m not fooled when industry sits down with government to figure out the “non-regulatory” “principles” that will guide them.
You’re not naive, but I don’t think you perceive the difference between more security and optimal security. You’re a little bit trapped inside the regulatory box, treating government-industry “self-regulation” as the only alternative.
Perhaps an analogy from another system might be more illustrative of my view. The federal government sets “floor” education rights for children. States can raise that bar and require districts to provide even greater rights and entitlements, and districts can set that bar even higher, as you know. I would like to see the government recognize those “floor” protections and rights when it comes to data privacy of sensitive personal information – regardless of what type of entity is the steward of such information. I recognize that businesses don’t like the patchwork system, so if we had a federal law that was at least as protective as the strongest state law, preemption would not particularly concern me – but only if what we gain is worth it. I have not given up on what you call “optimal security,” but I admit I am skeptical of seeing it in my lifetime. I think the recent hacks by Anonymous/LulzSec have exposed just how inadequately most businesses – even the giants – have secured data.
I don’t claim to know what’s best for everyone on this issue. I just want others to respect my right to decide for myself and not make paternalistic decisions as to what I need to know if they’ve lost or failed to adequately secure my sensitive personal information. I’m less concerned about states’ rights than the individual’s rights.
As to liability: a liability model is intellectually appealing, but in the real world, most people cannot afford to litigate. I’ve spent over 20 years in my “offline” work assisting children and families whose education and civil rights were trampled by their district or state. They have redress/remedies, including the courts, but most cannot afford lawyers to fight to enforce their rights. I see this as no different. Having rights on paper under liability law doesn’t really help if you can’t afford to pursue your rights. If there has been a privacy harm or damages, how many people can afford a lawyer’s retainer to sue or pursue a liability claim? So no, it’s not that I haven’t thought about liability enough – it’s that I don’t think that redress would be accessible enough to most people. If you think otherwise, please show me how you think it would/could work.
I hope someday, we can sit down together over a cup of coffee or whatever and discuss these issues in person. I suspect we agree on more than you might think and appreciate your willingness to share your thoughts.