My name is Naveed Babar, an Independent IT Expert and researcher. I received my Master's degree in IT. I live in Peshawar, Khyber Pakhtunkhwa, Pakistan. Buzzwords in my world include: Info tech, Systems, Networks, public/private, identity, context, youth culture, social network sites, social media. I use this blog to express random thoughts about whatever I am thinking.

Wednesday, June 23, 2010

Facebook Diss|Like: Designing Digital Warning Signs

Like many friends, I have been horrified to see Facebook take aggressive measures to make as much of its content as possible publicly available. Since its shift in privacy defaults last December, Facebook has been working diligently to take away our privacy in an attempt to ‘colonize’ the web’s social graph (as Kara Swisher suggests). It is now ridiculously easy for any website to embed Facebook functionality and thus personalize its experience for each visiting user. Truth is, I am torn: torn between hating Facebook as a user and being excited about the opportunity as a web entrepreneur; mostly excited at the prospect of creating compelling, contextualized, socially rich user experiences. And as much as I despise Facebook, I will not delete my account.



I am sure I’m not the only one who feels this way, since ceasing to exist on Facebook would drastically reduce my ability to communicate with many friends. And this gets to the crux of the challenge: are we so addicted to Facebook that we can’t tell what’s good for us anymore? Is Facebook evil? Is it trying to monopolize the social web? All of the above?

Last December, Facebook broke the social “contract” that we all signed up for by changing its privacy defaults. It switched the context right under our noses, prompting some 65% of users to go public without even knowing it. Many users still have no clue how visible their profile information and photos are (we all know how unintuitive FB privacy controls are). While this is totally unacceptable behavior and places some users in potentially risky situations, I can’t help but also look at the flip side. Facebook is on its way to becoming the first truly global social network platform that has potential to fundamentally change the way we experience the web. By placing social information in context and not in a single, aggregated feed, Facebook might actually succeed at creating some fantastically useful socially-aware and personalized browsing experiences. All that simply traded for our privacy!
Well, not so simple.

Some think that it is possible to bring about the demise of Facebook by creating applications that will scare users; creepy apps that know way too much about you. While this might make headlines, it is unlikely that such an approach will prove successful in the long term. As a society, we’ve become so hooked on Facebook that we are willing to take potential future risks in return for current socializing. And realistically, unless I were a hormone-fluctuating, socially uncomfortable teen, what content could such an app possibly surface that is so detrimental to my life?

Raul Pacheco hits the spot when he writes that Facebook’s actions are ‘not enough for us to care’:

There has been a lot of debate online about how Facebook keeps making it more difficult for users to keep their privacy. My question to everyone is — if Facebook is that “evil,” why are we all still using it? Why not be completely democratic and demonstrate (with our vote, e.g. with our not having a Facebook account) that this loss of privacy is unacceptable?
The answer is — because not enough of us care. If the millions of users of Facebook really cared that much about their privacy, they would make the Big Brother/Sister accountable. But in a society that is valuing privacy less and less, accountability has become an afterthought and not mainstream. Sadly, that also means that we have lost the power of protecting our privacy to commercial interests.

I wouldn’t say that Facebook users don’t care about privacy. I just think that many don’t care enough to be obsessing and worrying about potential future risks. Even if one recognizes a slightly risqué photo or comment, it is tempting to just leave it online, as the fun of social interaction trumps the thought of potential future discomfort. While these types of actions most likely don’t affect users in the near term, there are two things that we should be aware of: (1) the consequences of our actions on others, and (2) the long-term implications of sharing our data.

This is where User Experience Design can play a significant role, as we are facing an extremely difficult design challenge. We need to create a visual language that helps users understand the potential risks they take by making content visible. Not unlike the automobile association in West London that set up the first road warning signs in 1908, or the cigarette manufacturers who were mandated to highlight the medical issues correlated with smoking, we need to figure out best practices for displaying potential risks without scaring users away. We need to design digital warning signs that keep attracting people’s attention and don’t fade into the background. We should be aware of our privacy controls at all times – perhaps by placing icons showing just how many people can see an item before it is submitted.
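To make the idea concrete, here is a minimal sketch of such a pre-submission warning sign. Everything in it is hypothetical: the privacy-setting names, the audience sizes, and the risk thresholds are invented for illustration, not taken from any real Facebook API.

```python
# Hypothetical sketch: estimate roughly how many people could see a post
# before it is submitted, and render a simple textual "warning sign".
# The settings, audience sizes, and thresholds below are invented.

AUDIENCE_SIZE = {
    "only_me": 1,
    "friends": 130,              # assumed average friend count
    "friends_of_friends": 130 * 130,
    "everyone": 500_000_000,     # rough 2010-era Facebook user base
}

def audience_warning(privacy_setting: str) -> str:
    """Return a warning label showing roughly how many people can see a post."""
    size = AUDIENCE_SIZE[privacy_setting]
    if size <= 500:
        level = "low"
    elif size <= 50_000:
        level = "medium"
    else:
        level = "HIGH"
    return f"Visible to ~{size:,} people (risk: {level})"

print(audience_warning("friends"))
print(audience_warning("everyone"))
```

The point of the sketch is the shape of the interaction, not the numbers: a small, always-visible indicator next to the submit button that translates an abstract privacy setting into a concrete audience size.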

I shouldn’t have to dive into complicated settings that offer the fiction of privacy control but don’t deliver it — settings so hard to understand that they’re simply ignored. I shouldn’t need a flowchart to understand what friends of friends of friends can share with others. Things should be naturally clear and easy for me . . .

Would you like to see your dad, teacher and ex-girlfriend’s icons next to an item before submitting it? Probably not.
Is there a system that can help us visualize the audience we are writing to? That’s something users don’t want to see, which makes it a challenging design problem.

There is a growing need for applications that help us understand our personal online brand: how we are portrayed online, and what potential risks we face. What’s the equivalent of an anti-virus application that, instead of protecting our computer, protects our online persona? We need something that can warn us when a risky action is taken online (either by us or by someone within our social network).
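A toy version of such a “persona anti-virus” might simply scan recent posts for patterns that tend to cause trouble later. This is a sketch only: the risk categories, patterns, and sample posts are all made up, and a real tool would need far more nuance than keyword matching.

```python
# Hypothetical sketch of an "anti-virus for your online persona": scan
# posts (ours or our network's) for patterns that may pose a future risk.
# The risk categories, regexes, and sample posts are purely illustrative.

import re

RISK_PATTERNS = {
    "location leak": re.compile(r"\b(home alone|on vacation|out of town)\b", re.I),
    "employer rant": re.compile(r"\bhate my (job|boss)\b", re.I),
    "contact info":  re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scan_posts(posts):
    """Return (post, risk_label) pairs for posts matching a risk pattern."""
    flagged = []
    for post in posts:
        for label, pattern in RISK_PATTERNS.items():
            if pattern.search(post):
                flagged.append((post, label))
    return flagged

alerts = scan_posts([
    "Off on vacation for two weeks!",
    "Lovely weather today.",
    "I hate my boss so much.",
])
for post, label in alerts:
    print(f"[{label}] {post}")
```

Unlike a virus scanner, the hard part here isn’t detection but presentation: the warnings have to surface at the moment of posting, in a form people don’t learn to ignore.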

Facebook’s new APIs make it super easy for web developers to build on top of its social graph. Almost too easy. By embedding widgets in the form of Like buttons and status-update boxes, websites can easily personalize their content for each visitor. For a growing number of services, this happens without even requiring users to log in. For example, on likebutton.me you will see your Facebook friends’ activities from a variety of websites, as long as you have previously logged into Facebook: a central listing of what my friends recommend, separated by topic. Creepy, but potentially useful.

The same type of connection happens with both Yelp and Pandora. At first it feels creepy, yet as an experience it is potentially something we may get used to, or even like.

Here are two examples where things can get out of hand:
(1) There are Facebook “community pages” that automatically add any status updates that include the page keyword. From CIA and FBI to Terrorism, they’ve got it all, with your name and thoughts right there, thanks to your inability to understand their privacy defaults! As a user, without even knowing it, your name is automatically associated with a community that algorithmically formed around a keyword you used.
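The mechanism behind these community pages can be sketched in a few lines. The matching logic and data below are invented for illustration; the point is how little it takes to publicly associate a name with a topic its author never opted into.

```python
# Illustrative sketch of how a "community page" could algorithmically
# attach public status updates that merely mention its keyword, author
# name included. The updates below are invented for illustration.

def build_community_page(keyword, public_updates):
    """Collect every public (author, text) update whose text mentions the keyword."""
    keyword_lower = keyword.lower()
    return [
        (author, text)
        for author, text in public_updates
        if keyword_lower in text.lower()
    ]

updates = [
    ("Alice", "Reading a history of the FBI tonight."),
    ("Bob",   "Great pizza for dinner."),
    ("Carol", "The FBI raid was all over the news."),
]
page = build_community_page("FBI", updates)
# Alice and Carol are now publicly listed on the "FBI" page,
# without ever having opted in.
```

Note that the aggregation is purely lexical: there is no understanding of context, sentiment, or intent — only a substring match — which is exactly why the resulting association can be so misleading.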

(2) It is dead simple to create evil “Like” buttons by hacking the button to point to another page, again adding the risk that our names will be associated with something we are not aware of.
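To see why this is so easy, recall that the Like button widget is just an iframe whose `href` query parameter names the page being liked — and nothing forces that parameter to match the page the button actually sits on. The sketch below builds the widget URL; the endpoint reflects the 2010-era plugin, but treat the details and URLs as illustrative.

```python
# Hedged sketch of the "evil Like button" trick: the widget's iframe src
# carries an `href` parameter naming the page being liked. A page can
# point that parameter anywhere, so a click on "Like this article" may
# record a Like for an entirely different URL. Endpoint and example URLs
# are illustrative.

from urllib.parse import urlencode

def like_button_src(liked_url: str) -> str:
    """Build the iframe src for a Like button pointing at liked_url."""
    return "https://www.facebook.com/plugins/like.php?" + urlencode(
        {"href": liked_url, "layout": "standard"}
    )

honest = like_button_src("https://example.com/my-article")
# An "evil" page embeds an identical-looking button, but the Like lands
# on a page the user never saw:
evil = like_button_src("https://example.com/something-you-never-saw")
```

Since the two buttons render identically, a user has no visual way to tell an honest button from a hijacked one — another place where a digital warning sign would have to do the work.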

As a User Experience designer, my task is to think about users first: place them at the center of my design, protect them, respect their needs, and help them accomplish whatever they came to do in the best possible way. However, informing users of privacy hazards is a difficult design challenge, one that Facebook obviously doesn’t want to handle. As web entrepreneurs, should we be leveraging this powerful yet scary technology that Facebook has enabled?
If so, how do we warn our users without scaring them away? How do we show users what they don’t really want to see or deal with? How can we warn of risks that only affect the far future?

We should also ask ourselves whether regulation is needed. And if so, what would it look like, and how might it further complicate the matter?
