In my last privacy post I identified certain cloud-computing privacy issues that may be regulated by the free-market. This post will outline a risk-based approach to analyzing privacy issues that laws and legislation may be required to address.
A risk-based analysis is beneficial in that it changes how a problem is viewed and the type of solution sought. With respect to privacy in cloud computing, instead of trying to create a “Cadillac” solution with a worldwide integrated system of service providers and third-party credential certifiers, a risk-based approach seeks to accomplish only what is necessary and efficient. It does so by recognizing that a fixed amount of time and resources is available, and that some privacy issues are more important to manage than others.
Risk has two components. The first component is the probability or likelihood of an event occurring. The second component is the severity of the consequences if the event were to occur. In the context of a risk analysis, the result is that an event with a low probability and high severity may expose someone to the same amount of risk as an event with a high probability and low severity.
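This equivalence can be made concrete by multiplying the two components into an expected-loss figure. A minimal sketch (the function name and the dollar figures are hypothetical, chosen only to illustrate the point):

```python
def expected_risk(probability, severity):
    """Expected loss: the likelihood of an event times its cost if it occurs."""
    return probability * severity

# A 1-in-10,000 breach costing $50,000 ...
rare_but_severe = expected_risk(0.0001, 50_000)

# ... carries the same expected loss as a 1-in-10 incident costing $50.
frequent_but_minor = expected_risk(0.1, 50)

# Both work out to an expected loss of $5, so on this measure the two
# events expose someone to the same amount of risk.
```

This simple product is only one way of combining the two components, but it captures why a remote catastrophe and a routine nuisance can demand equal attention.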
Once a risk is identified and decomposed, the benefits of taking it and the costs of managing it can be assessed. A risk-based analysis considers both the rewards of taking a specific risk and the costs of managing that risk. Four options are available for risk management:
1) Accept: Risk can be accepted as a potential cost of engaging in an activity. This is a good option for risks that are too small or remote to be of concern. Accepting a risk may also be the only option if the costs of managing it are prohibitive. Frequently, an opportunity-cost analysis is used to determine which risks should be actively managed and which can simply be accepted.
2) Transfer: Risk may be actively managed by transferring it to a third party through, for example, an outsourcing arrangement or an insurance policy.
3) Mitigate: An alternative to transferring risk is to mitigate risk by creating special controls or a system of prevention.
4) Avoid: Finally, risk may be avoided by not engaging in the activity that creates the risk in the first place.
One of the privacy issues Reshika Dhir identified in her comment on my last post, and which has been highly criticized in the media, is Facebook’s information retention policy. Applying a risk-based analysis to this issue may provide further insight into whether and how this risk should be managed.
At face value this practice may initially offend users’ sensibilities. It seems unjust that a user cannot control the information they have posted and generated on Facebook. However, what are the true risks and benefits associated with Facebook retaining this information?
The risk a user is exposed to when their account is deactivated appears to be no greater than the risk while the account is active. This is because the account information does not change; Facebook merely retains it. Furthermore, it appears that Facebook does not continue to provide deactivated account information to the network. Rather, Facebook notes that user information may be archived within its system, and that some users may have copied, saved, or cached Facebook content to their own computers, putting it outside of Facebook’s control. With respect to the two components of risk, while the severity remains constant, the probability of a privacy-compromising event occurring is diminished once an account is deactivated, because Facebook no longer actively provides the user’s information to the network.
Another potential risk is that a user may not be able to “turn over a new leaf” and erase their Facebook past. A situation may arise in which a user no longer wants others to see the acts they have committed, the pictures they have taken, or the things they have said. While such a risk seems remote, it could have severe repercussions for a user’s career and social network.
While this may be disadvantageous to an individual user, there are benefits to society that should not be overlooked. First, retention acts as a deterrent to online misbehavior. Closer social scrutiny by a community may give users an incentive to act as good citizens and refrain from misconduct; the threat of having this information retained in perpetuity furthers this end. Second, a user’s content is frequently generated with the aid or assistance of other users. Examples include posting and tagging photos, messaging, and creating groups. Should all such content be removed irrespective of who else authored it? Lastly, some users may want to return to Facebook and continue with their existing profile.
Based on the popularity of Facebook within Canada (currently around 7 million users), it can be inferred that most users accept the risk as a cost of staying connected. Users’ options to transfer this risk are limited. Mitigating the risk is also difficult, since there is always the possibility that other users will save Facebook data on their own computers. The best mitigation strategy is therefore to take advantage of the privacy settings available on Facebook and to carefully scrutinize any information before posting it to a profile. Lastly, there is always the option of avoiding the risk entirely by using a different network, or by not using the service at all.
New applications of technology have given rise to a multitude of potential privacy issues. Adopting a risk-based approach when analyzing these issues may help create realistic solutions and ensure that limited resources are spent on areas that will have the greatest impact.