jakke said: Actually I don’t agree at all. That’s directly analogous to leveraging (some is a positive externality to credit, too much is a negative externality to risk, the threshold differs depending on whom you’re talking about) and we can regulate that.

The literature that has looked at the economics of privacy over the past decade or two has been absolutely dismal, insofar as efforts to operationalize the ‘value’ of privacy are pervaded with assumptions of rationality, comprehension, ability to enact privacy choices, and so forth. The literature on privacy more generally is still struggling – after 40+ years – to really move beyond squabbling about what ‘privacy’ even means. The consequence is that ascertaining the externalities linked to privacy infringements/violations/concerns/(term of the month) necessarily requires adopting one definition or another.

Unlike more ‘defined’ harms (e.g. X percentage of Y particulate in the water is linked to Z), those linked with privacy tend to be more normative, and harder to measure as a result. Ascertaining what the chilling effect of corporate surveillance is, or what the consequences are of non-transparency in how communications infrastructures subtly modulate discourse and association, is an exercise in theory as much as anything else. Consumers, for lots of good reasons, are poor rational actors in lots of areas, and privacy is argued to be one of those areas.

So the quotation emerged from a (longer) argument concerning the efficacy of economic analyses of privacy and the place such analyses have within the broader dimensions of the contested individual, communal, and intersubjective natures of privacy. It’s on these bases that economic analyses fall short: while they *might* improve the situation, marginally, what is improved will be regarded by some as perpetuating the harm, and by others as the wrong means of alleviating harms.