Getting Profiled by Facebook

Its veracity be damned, the conspiracy theory explained to me by Columbia Law Professor Eben Moglen was just plausible enough to be deeply unsettling, probably because it involved that most ubiquitous of generational signifiers: online social networking. It went a little something like this: back when Mark Zuckerberg needed startup capital for a certain internet venture, he attracted the interest of two downright sinister sources of funding: cyber-libertarian futurists and the intelligence community. Representing the latter was Accel, a venture capital firm and early Facebook investor headed by a former board member at In-Q-Tel, a congressionally chartered firm that expedites government investment in civilian technology on behalf of the intelligence community. Conspiracy enthusiasts view Facebook as serving two complementary and deeply unnerving purposes.

Representing the former was PayPal founder and current Facebook board member Peter Thiel. According to the Idler’s Tom Hodgkinson, the entrepreneur and hedge fund manager is also an arch-libertarian and futurist philosopher who views the internet as a venue for unfettered human freedom, if not human perfectibility. He believes that an internet in which capital and information can travel with uncontrollable momentum and speed could bring on a libertarian utopia, one that will trickle into and eventually overwhelm an outmoded, still-fettered external world. Writes Hodgkinson in the Guardian, “Thiel is trying to destroy the real world … and install a virtual world in its place.”

Conspiracy enthusiasts thus argue that Facebook acts as an extension of the Thiel-PayPal view of the internet as a space for radical capitalist experimentation, a development working against the cooperative, even socialist ethos espoused by the Free Culture and Free Software movements. Those movements, inspired by the theoretical work of thinkers like Lawrence Lessig and the legal work of Moglen and others, hold that software should be treated as a public trust and managed by people who aren’t motivated by profit. For them, Facebook is the anti-Wikipedia—a site that has yet to be monetized despite being, according to Alexa, the sixth most-visited on the web.

At the same time, the social information so vital to Facebook is also invaluable to those in the business of social control. Facebook users, Moglen hypothesized, are participating in a massive social modeling experiment conducted for the benefit of the CIA. Analyze enough of the data at Facebook’s disposal, the professor claimed, and human action would become completely predictable. Eventually, America’s intelligence apparatus would begin to know Facebook users better than they knew themselves.

Facebook was sitting on the kind of information that could turn once-perplexing questions of human nature into mere equation-fodder, information as hypothetically useful to the CIA as it would be to advertisers or social scientists. Only later did I realize that the alleged CIA connection was offered not as a fact to be investigated but as an intellectual provocation, a thought experiment spurred by the alarming reality that 300 million people had signed away their most intimate information without any sense of where it could go, how it could be used, or who could control it.

As Michael “Six” Silberman, an Informatics Ph.D. candidate at UC Irvine and SEAS ’08, explained, Facebook’s use of our personal information can only be surmised from the website’s terms of use, which grant it complete ownership and control over everything that’s ever appeared on it. “There’s no way to have a political, technical understanding of Facebook beyond the legal documents available to us,” he said of the website’s Terms of Service. To know what Facebook is really doing with our personal information—Are they analyzing it? Giving it to sociological researchers? Selling it? Handing it over to the government? Just sort of letting it sit there?—we would need “an insider who’s willing to post a bunch of internal documents or WikiLeaks,” a Facebook employee willing to expose any misuse of the information the company owns and controls.

Even without overt CIA or techno-futurist collusion, Facebook has resulted in the greatest accumulation of social data in history, with little accountability for how that data is used. In a sense, though, we already know how it’s being used: this past September, a class-action lawsuit forced Facebook to stop using Beacon, a program introduced in late 2007. Ostensibly, Beacon tracked information from 44 partner websites and relayed it to the Facebook pages of logged-in users and their friends. If a user made an eBay purchase while logged into Facebook, it would appear on his (and everybody else’s) newsfeed; if he visited Gamefly frequently, he could expect more videogame-related targeted advertising on his homepage.

In truth, Beacon was tracking users’ purchases and surfing habits even when they were logged out of Facebook, and was doing so without giving them the option of turning the “beacon” off. According to a November 30, 2007 article in PC World, “users aren’t informed that data on their activities at these sites is flowing back to Facebook, nor given the option to block that information from being transmitted.”
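To picture the mechanics, here is a minimal sketch of how a cross-site “beacon” of this general shape could work (this is not Facebook’s actual code; the endpoint URL, names, and parameters are invented for illustration). A partner site reports a user’s action to the network’s servers, and a cookie ties that report to an account, whether or not the user ever opted in.

```python
# Minimal sketch of a cross-site "beacon" (hypothetical, not Facebook's
# actual implementation): a partner site reports one user action to a
# tracking endpoint, and a cookie ties the report to an account.
import urllib.parse
import urllib.request

TRACKER_URL = "https://tracker.example.com/beacon"  # invented endpoint

def report_activity(session_cookie: str, partner: str, action: str) -> None:
    """Send one activity event to the tracking endpoint.

    In a real browser, the network's own cookie is attached automatically
    to any request bound for its domain, so the event can be linked to an
    account even if the user is logged out and was never asked to opt in.
    """
    params = urllib.parse.urlencode({"partner": partner, "action": action})
    request = urllib.request.Request(
        f"{TRACKER_URL}?{params}",
        headers={"Cookie": session_cookie},  # identifies the account
    )
    urllib.request.urlopen(request)

# Example: report_activity("uid=12345", "ebay.example", "purchase:dvd")
```

The lawsuit and the PC World report turned on exactly the two properties this sketch makes visible: the report fires regardless of consent, and the cookie links it to a person.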

In December 2007, Facebook admitted to using Beacon to track the web history of users who were logged off of the site. That month, Zuckerberg announced that users would now have to opt into Beacon, a change he explained in a revealing blog post:

When we first thought of Beacon, our goal was to build a simple product to let people share information across sites with their friends. It had to be lightweight so it wouldn’t get in people’s way as they browsed the web, but also clear enough so people would be able to easily control what they shared. We were excited about Beacon because we believe a lot of information people want to share isn’t on Facebook, and if we found the right balance, Beacon would give people an easy and controlled way to share more of that information with their friends. (emphasis mine)

Zuckerberg couches a breakthrough in invasive online advertising methods (Beacon’s real accomplishment) in the language of the participatory web—in this blog post, Beacon isn’t an advertising program but a tool for enhancing social connectivity. Beacon is even offered as a benign corrective to what Zuckerberg audaciously views as one of his project’s built-in flaws: there’s actually information that falls outside of Facebook’s purview. It also happens to be exploitable information; in Zuckerberg’s presumptuous terms, the two are one and the same: “information people want to share.”

For Zuckerberg, Beacon’s social utility was intrinsically connected to its success in helping Facebook acquire information that “isn’t on Facebook.” Beacon was an expansion of the social and economic space in which Facebook could operate, an expansion predicated on the assumption that users would passively accept the idea that their social networking website’s interests were somehow in lockstep with their own. It was an assumption Zuckerberg could safely treat as self-evident. Sharing information is a good thing, if not a deeply held generational value, whether you’re thoughtlessly posting drunk photos on Facebook or advocating for communally managed open-source software. Only incidentally is it a vein that advertisers can lucratively mine.

The success of the class-action suit—as well as the rapidity with which Zuckerberg apologized for his handling of the Beacon launch—suggests that the website isn’t beyond external influence. But the Beacon row highlights the awesome power that Facebook possesses, while the blog post itself demonstrates Facebook’s worrisome attitude toward the source of that power: the information we’re constantly entrusting to it.

This is especially troubling in light of a recent study by MIT students demonstrating that a statistical analysis of a person’s Facebook friendships could accurately predict that person’s sexual orientation. One can only wonder what a similar program could determine about a person’s religious or political beliefs, or whether this kind of friends-list analysis could one day become the social equivalent of a drug test—a quick and easy means of coerced transparency.
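The principle behind such studies is homophily: people tend to befriend people like themselves, so an undisclosed attribute can often be guessed from what a person’s friends disclose. Here is a toy sketch of that idea (the MIT study’s actual method was more sophisticated, and the names and data below are invented):

```python
# Toy friends-list inference via homophily: guess a user's undisclosed
# attribute by majority vote among friends who disclose theirs.
# Invented data; the real study used far more sophisticated statistics.
from collections import Counter

friends = {"alice": ["bob", "carol", "dave", "erin"]}
disclosed = {"bob": "A", "carol": "A", "dave": "B"}  # erin shares nothing

def predict(user: str) -> str:
    """Majority vote over the disclosed attributes of a user's friends."""
    votes = Counter(disclosed[f] for f in friends[user] if f in disclosed)
    return votes.most_common(1)[0][0]

print(predict("alice"))  # -> "A", though alice herself disclosed nothing
```

The unsettling part is the last line: the prediction requires nothing from alice herself, only the public choices of the people around her.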

It is one of this decade’s ironies that a kind of voluntary erosion of privacy has coincided with an increased consciousness of civil liberties—indeed, the Patriot Act and warrantless wiretapping opened up a discussion on transparency focused on the government’s powers vis-à-vis the personal information of the citizens living under it. The Web 2.0 revolution ushered in an era of large business interests whose values seemed to be aligned with our own—YouTube, Facebook, and Google are structured to improve our access to information; Google’s goal seems to be the aggregation of practically all information in existence. Skepticism of the public sector’s interest in its citizens coincided with optimism about what the private sector could do with the information we were constantly handing over to it. Bush’s domestic anti-terror policies provided an unintended justification for Thiel’s techno-libertarianism; a promising online world seemed to be winning out over a cynical “real” one. How else to explain the sensitivity toward privacy in one realm, and the complete obliviousness to it in the other?

Of course, this is a dichotomy that doesn’t really exist—the “real world” is everywhere, and online entities can’t be trusted simply because they appear to exist outside of it. Those who embraced Web 2.0 must now decide what they are going to demand of the architecture they helped build. Perhaps there’s an open-source solution, as proponents of Free Software argue—a Facebook where anyone with the ability to program can become a kind of mini-Zuckerberg. Or maybe the solution is less radical, as programmer Ron Gejman, CC ’10, argues. “Once you put something out there you should expect it to be out of your hands,” he says. “We should embrace it, we should have more legal protections in place, and we should be very judicious in what we put online.”

Gejman’s solution is to accept as a starting point a world in which information runs wild, and in which we’re not capable of knowing where it ends up. In a sense, we don’t have any other choice, since that’s exactly the world that Facebook and its users have helped create.

But there’s a slightly darker read you could put on this: in 1992, in his “Postscript on the Societies of Control,” philosopher Gilles Deleuze speculated that technological innovations would lead to “societies of control.” In Deleuze’s reading, Foucault had proven that technology and individual freedom were in constant conflict, and that a rapid acceleration in technological development was about to make things a whole lot worse.

But his postscript is jarringly equivocal. “What counts,” he wrote, “is that we are at the beginning of something.” Facebook is only 5 ½ years old. And once it’s publicly traded, the company with a generation’s worth of social information will be thrust into the ranks of Microsoft and Google, and will become more accountable to its bottom line than to Web 2.0 idealism. Microsoft was once a serial antitrust violator, and as Moglen pointed out, Google has cooperated with the Chinese government’s crackdown on online dissidents. As Facebook gets bigger, the possibility of reining it in becomes more and more faint. But at this point, what truly counts for the first Web 2.0-socialized generation is that we are only “at the beginning of something.” Whether Facebook ushers in an actual rather than hypothetical cyber-dystopia is very much up to us.