
Art. 29 Working Party: “unacceptable that Facebook changed default settings to the user’s detriment”

Yesterday the Art. 29 Working Party issued a press release in which it declared it unacceptable that Facebook radically altered its privacy policies to the detriment of users.
The issue concerns the default settings. The WP recommended:
a default setting in which access to the profile information and information about the connections of a user is limited to self-selected contacts. Any further access, such as by search engines, should be an explicit choice of the user.
However, as Facebook users know, restoring such a default currently requires navigating the privacy settings and going through a series of opt-outs.
Furthermore, the WP points out to Facebook and other social networks the lesson Google learned in Italy: when a user uploads content involving a third party’s personal data, it is necessary to “obtain free and unambiguous consent.”
On the latter point there are still some fundamental technical problems. In fact, I am not sure how it would be possible to protect the interests of third parties by assigning such tasks to the social network platform, unless the WP requires that Facebook or other social networking sites add another check-box on their terms … something like this:
“You represent that the uploaded or shared content (“Content”) does not violate the privacy and/or other personal rights of third parties and that in any case you have previously obtained free and unambiguous consent to the publication of the Content from the owner of any personal data or personal rights relating to the Content.”
It would be easy for users to tick this check-box (among the many others). Who knows whether it would be any more effective than similar disclaimers about copyrighted content.

Privacy Bill Introduced. More Notices and Consent.

The House of Representatives recently introduced a bill (pdf here) that, if passed, would significantly change the landscape of privacy regulation. The bill requires notice to, and in some cases consent of, an individual prior to the collection and disclosure of certain personal information related to that individual.
This bill would raise the bar for all businesses, whether on-line or off-line. It also promises to harmonize US regulation with the higher EU standard.
However, one of the limits of this proposed bill is that it stresses the importance of privacy policies and privacy notices, even though both the Internet industry and consumer groups seem to agree that privacy policies are ineffective: very few people read them and even fewer understand their content.

Brief History of Online Feedback / Rating… Some Thoughts

Stage 1. Men were created with the ability to have opinions of their own.  Opinions can be complex, simple, logical, contradictory, expressed verbally, non-verbally, etc.  

Stage 2. Then men became Internet users and were allowed to give feedback on a 5-star basis.  Internet users still have opinions and share them in the form of “Comments”.

Stage 3. Next, users were given the possibility to give a thumbs-up OR a thumbs-down.  Users can reply to comments or even re-post them in their own opinion-stream.

Stage 4. More recently, users can “Like” stuff.  No complexity whatsoever is necessary or allowed.  Similarly, any negative feedback is impractical (or curtailed), since it would be bad form with respect to the “business partners” of the site in use.   Yes, it is possible to write something if you really, really care (which happens when you are upset about a service you paid for). Other than in that case, however, who bothers typing when they can just click?

But wait, there is more. The evolution of feedback and rating systems is not limited to the progressive simplification of the tools and the user interface.  It rather (and more importantly) concerns what data are collected and associated with the users’ ratings.

The person rating content or items (the “rater”) has become the center of the action.  This is somehow counterintuitive, yet the most valuable information for the website has nothing to do with the items being rated.  In fact, the most valuable information is that concerning the “rater”: his or her tastes, preferences, behavior, etc.

Therefore, it is key that such information be made as standardized and accessible as possible.  For this reason, rating on a single dimension (the “Like” button) may work better, as it eliminates certain “noise” in the analysis of users’ behavior.  Hence, simplification serves to enhance user profiling and, eventually, to target those users with more relevant ads.

On the other hand, all kinds of feedback regarding content/items have become a very effective “bait” to incentivize interaction and attract traffic. This happens despite it being common knowledge that on-line feedback and ratings are easily manipulated and often unreliable.

I believe we can start looking at feedback and ratings from a different perspective.  In fact, I have the impression that their social significance has changed: they no longer represent a vehicle for conveying opinions but rather a form of basic interaction, an acknowledgment that something exists.  Conversely, if there is no “Like” button underneath it, a thing cannot become part of the user’s profile for advertising purposes.  And that somehow makes that piece of content invisible on the Internet.

Italian Republic v. Google Video. Why Was Google Convicted?

(This post has been imported from another blog I own.  The original publication date is March 4, 2010 and the post has not been updated since).

Three of Google’s executives were recently convicted by the Court of Milan for violating Italian privacy laws.  Google’s executives were held liable because Google Video hosted, allowed comments on, and did not promptly (or promptly enough upon notice) take down a video portraying a disabled kid being abused by his schoolmates.  The schoolmates who actually uploaded the video were convicted and sentenced to 10 months of community service.  You may find a more complete description of the facts here.

The ruling is subject to appeal and Google has said it will challenge the ruling.  Many people have commented on the case reflecting upon its deep implications over freedom of speech, privacy law and the Internet business model.  I personally share the concern that prior control over UGC might dangerously compress freedom of speech, yet the solution certainly is not to neglect users’ and third parties’ privacy protection.  The bottom line is that such a ruling may shake the foundations of the Internet industry and that nobody I know has a clear idea about a good trade-off among the many conflicting interests.

In this post, however, I will more simply focus on explaining the probable reasons for Google’s conviction.  I say “probable” because the full opinion has not yet been published.  Still, I have formed an impression of the likely reasons for the conviction from the statements released by Google’s attorney and by the Prosecutor in charge of the case.

Reason #1.  There is no safe harbor for privacy violations. This may be surprising for most Internet companies but, as a matter of law, EU privacy regulation in general, and the Italian one in particular, do NOT apply the safe harbor exemption to matters concerning the right to privacy.  It may appear to be a loophole in the system, but probably, when drafting the E-Commerce directive, the EU legislator did not have in mind a service totally based on UGC, such as a video sharing platform.  Directive 2000/31/CE shields mere conduit, caching and hosting services against commercial liabilities, but not against the rights “of individuals with regard to the processing of personal data…”.

Article 1, §5, lett. b of the E-Commerce directive says:  “This Directive shall not apply to: …
(b) questions relating to information society services covered by Directives 95/46/EC and 97/66/EC.”

Guess what subject matter is regulated by Directive 95/46/EC?

Pretty straightforward, isn’t it?  If there is no safe harbor for a privacy violation, the service provider may be held liable for contributing to the criminal conduct.

The same rule applies in Italy by virtue of Legislative Decree n. 70/2003, art. 1, comma 2, lett. b, which precludes the application of the safe harbor regime (art. 14 ff.) to matters involving privacy rights.

Reason #2.  Google violated the Italian Personal Data Protection Code. Google’s counsel released a statement indicating the rules allegedly violated by Google, as specified in the indictment: art. 23, 26 and 17 of the Data Protection Code.  Art. 23 and art. 26 (here an official version in English) provide that before processing any personal data it is necessary to seek the consent of the owner of the data.  Written consent is moreover required if the processing concerns sensitive data, such as the existence of a health condition.  It is probable that the Court considered that depicting the likeness of a person affected by Down Syndrome is sufficient to constitute “processing of sensitive data”, given the apparent and recognizable health condition of the person portrayed.

Art. 17 provides a general obligation to consult with the Data Protection Authority before initiating any potentially dangerous processing of personal data.  The violation of such article may be construed as creating liability for other independent violations. We’ll see when the opinion is published…

Reason #3.  Google did not take down the video promptly. Proof of a prompt take-down is crucial to proving or disproving damages.  In fact, damages are necessary for the application of the criminal sanction set forth in art. 167 of the Data Protection Code.

Google took the video down after the police requested it.  The victim’s representative argued that the video had actually been accessible for about 2 months and that, even after the video was flagged by some users and a take-down notice was filed by the victim’s representative, Google did not respond with the requested promptness.

Thus, the Prosecutor accused Google of being inefficient and untimely in taking down the video since, as Google’s deputy general counsel for Europe also admits, Google took the video down only when the police asked it to, many days after the first flagging.  However, there is no clear information about the timeline of the facts mentioned.  We’ll see…

Furthermore, the Court probably concluded that the accessibility of the video for about 2 months, coupled with the significant amount of views and comments, gave Google notice (or should have given Google notice) of the existence of the video.  This is my speculation. We’ll see…

Unfortunately, there is no black-letter law applicable to take-down notices for privacy violations.  In fact, nothing like the DMCA exists in the EU, even with respect to IP rights. I bet Google misses the clear-cut provisions of the DMCA when dealing with the EU.

In fact, the Prosecutor remarked on Google’s vague and evasive attitude towards certain discovery requests.  It turns out that Google was unable to restore the original page with the comments or to produce the first flagging of the video.  Google’s defense, according to the Prosecutor, was that producing this evidence was too costly and complicated for Google’s engineers.