TPRC42: Predicting trouble & navigating policy solutions

I attended TPRC42 last weekend thanks to the sponsorship of Georgetown University's Communication, Culture and Technology program – my program. TPRC is an incredible conference on telecommunications policy research. You can browse the program of the event and, more importantly, download any of the papers that interest you.
TPRC is also quite unique in that FCC staff attend the event in large numbers to understand what's going on at the cutting edge of the academy – and what they should care about. Or, in the words of Mike Nelson (a member of CCT's faculty):

In fact, many of the more interesting conversations at TPRC orbited around the need for more, less or different regulation and policies in the world of ICT, around concerns that are more or less common knowledge: privacy, net neutrality, mobile spectrum, cybersecurity, etc. The panel "Governing the ungovernable: Algorithms, Bots, and Threats to Our Information Comfort-Zones" highlighted the differences between the proponents of corporate responsibility and good faith, and what Prof. Nelson called the pessimists. The pessimists see gloomy prospects when they learn that Facebook has developed facial recognition algorithms that are actually better than the human eye.

In a less gruesome and more constructive approach, other attendees demanded ways of making both algorithms and their effects on the data they crunch transparent to the public.

But, as Mike Nelson noted, algorithms can't be open because they are protected by patent law. Many of the good proposals and better intentions at the conference revolved around the recognition of complexity and the problematization of existing regulation. From the opening remarks of one of the essential references shared during the panel on algorithms:

Regulation is the bugaboo of today’s politics. We have too much of it in most areas, we have too little of it in others, but mostly, we just have the wrong kind, a mountain of paper rules, inefficient processes, and little ability to adjust the rules or the processes when we discover the inevitable unintended results.

Opening Notice and Consent Policies

The 80+ papers discussed at the conference have given me the perfect excuse to narrow down my personal highlights to just one paper with a strong affinity to some of my interests, namely consumer protection and the creation of initiatives that can ensure a better public understanding of laws, policies and regulations.

Disagreeable Privacy Policies: Mismatches between Meaning and Users’ Understanding

Led by Princeton's Prof. J.R. Reidenberg and supported by funding from the National Science Foundation, this project analyzed the notice-and-choice privacy policies of six major websites (three news outlets plus three e-commerce companies) in order to gauge the clarity of their content.

The approach is very smart. Instead of trying to establish what the policies really mean, the researchers gave them to three different groups: privacy policy experts, graduate students in computer science and public policy, and crowd workers. Using an online tool, each person responded separately to questions about the meaning of the policies.
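To make the setup concrete, here is a minimal sketch of how agreement within one of those groups might be measured. The function, the group, and the toy answers are all invented for illustration; the paper's actual scoring method may differ.

```python
from itertools import combinations
from statistics import median

def pairwise_agreement(answers):
    """Fraction of annotator pairs that gave identical answers to one question."""
    pairs = list(combinations(answers, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

# Toy data: three hypothetical experts answering four yes/no questions
# about a single privacy policy.
expert_answers = [
    ["yes", "yes", "no"],   # question 1
    ["yes", "no",  "no"],   # question 2
    ["no",  "no",  "no"],   # question 3: unanimous
    ["yes", "yes", "yes"],  # question 4: unanimous
]

scores = [pairwise_agreement(q) for q in expert_answers]
print(median(scores))  # median agreement across the questions
```

A median like this, computed per group across all six policies, is the kind of figure behind the percentages the authors report.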

The conclusions were significant: even amongst privacy experts, levels of agreement were relatively low in some of the areas covered, with a median across all policies of 75% for experts, 60% for knowledgeable users (the grad students) and 50% for crowd workers. This supports the common-sense impression that notice-and-consent privacy policies are too obscure. For Prof. Reidenberg this has two possible explanations:

  1. Companies don't realize the general public does not understand their policies
  2. Companies don't want the general public to understand their policies

If 1) holds true, Prof. Reidenberg could find powerful allies in the private sector for his ultimate plan: to create some kind of automated method to better communicate the consequences of consenting to particular privacy policies online. As a reference, he mentioned a traffic-light visual code that would signal invasive policies with red, middling policies with yellow and friendly policies with green.
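The traffic-light idea could be sketched roughly as follows. Everything here is hypothetical: the invasiveness score, the thresholds, and the function name are my own illustration of the proposal, not anything from the paper.

```python
def traffic_light(invasiveness: float) -> str:
    """Map a hypothetical invasiveness score in [0, 1] to a signal color."""
    if not 0.0 <= invasiveness <= 1.0:
        raise ValueError("score must be between 0 and 1")
    if invasiveness < 0.33:
        return "green"   # privacy-friendly policy
    if invasiveness < 0.66:
        return "yellow"  # middling policy
    return "red"         # invasive policy

print(traffic_light(0.2))   # green
print(traffic_light(0.5))   # yellow
print(traffic_light(0.9))   # red
```

The hard part, of course, is not the color mapping but producing a defensible invasiveness score automatically – which is precisely where expert disagreement, as measured in the paper, becomes a problem.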

But what if 2) is the case? Well, according to Prof. Reidenberg (anticipating the question I wanted to pose), if companies with a proven record of obscure policies failed to remedy the situation, this paper and similar evidence could be used to start proceedings at the Federal Trade Commission.

Of course, this either/or framing is just my summary of the situation. What I really liked about the paper was its commitment to the betterment of a common practice that leaves users helpless with regard to their online rights.

People you should follow

TPRC42 had an overwhelming concentration of genius. You literally can't attend the conference without meeting, listening to and speaking with people with very advanced proposals and powerful insights into the world of telecommunications. Without any claim to exhaustiveness, here are some incredible talents:

Erhardt Graeff, MIT & Berkman Center for Internet & Society at Harvard

Graeff moderated the panel on the regulation of algorithms. According to his blog, Erhardt studies “information flows across mainstream and social media, and explores technologies that help entrepreneurs from marginalized groups, especially youth, to be greater agents of change”. Can you think of a more important topic?

Josephine Wolff, MIT

If you want to understand the internet, you would be off to a good start just by reading Josephine's articles in Slate. Also a PhD candidate at MIT, she offers very revealing insights into cybersecurity.


Luis Hestres, American University

Winner of the student paper contest, and a member of the faculty at the University of Texas at San Antonio since this fall, Luis Hestres is an expert in the use of technological tools by advocacy organizations. He's also an alumnus of Georgetown University's Communication, Culture & Technology program.


Bruce Schneier, CTO at Co3 Systems, Harvard University

Another expert in cybersecurity? Well, no: Bruce Schneier is the expert in cybersecurity. He speaks with the confidence and clarity of a guru. In the keynote panel on Friday night, he sent a very clear message: given the ontology of the internet, from a cybersecurity perspective it makes no sense to claim that government intelligence agencies and other legitimate actors should be entitled to actions that cannot be allowed outside those areas of legitimation, because "everybody is using the same stuff". The corollary: we can only choose between private, obscure networks for everybody or highways of open-ended information for everybody. Nothing in between.

Jonathan Cave, Rand Europe and Warwick University

Jonathan intervened in each and every session he attended – and he went to the same sessions I did 90% of the time. Amazingly fast, realist, hard-boiled thinking, with very little space (and mercy) for wishful thinking. It's just a pity he doesn't blog.
