‘Fake Friends’ Just Took on a Whole New Meaning
Photo courtesy of nyc-friends.
Last year, the company Friend.com released its flagship product, ‘Friend,’ an AI companion worn around the neck. The company defines a ‘friend’ as ‘someone who listens, responds, and supports you.’ As a self-described modern Luddite, a person opposed to the unregulated growth of new technology, I have several gripes with this product, above all the psychological and political consequences of a device that is always attached to its user. The power of human interaction, which AI cannot replicate, lies in its unforeseen emotions and range of perspectives. By contrast, constantly speaking to and listening to an AI has the potential to entrench users more deeply than ever in their political views: it reinforces existing biases through illusory external validation, dampens critical thinking, and distorts one’s understanding of social dynamics. Despite this danger, though, the neo-Luddite movement offers a path to political optimism by seeking to rediscover the fundamentals of human interaction.
Echo chambers originate in friendships and social media algorithms, but AI exacerbates them. In human friendships, echo chambers reinforce relationships among people with similar interests. Research has shown that groups can bond so deeply over shared topics that people outside the echo chamber, the “out-group,” come to appear ‘less human.’ This psychological phenomenon manifests in many ways, perhaps most prominently in political parties. A 2022 Stanford study asked Republicans and Democrats how ‘evolved’ they perceived the other political party to be, and respondents rated members of the opposing party lower on the evolutionary scale. The dehumanizing effect, however, reaches beyond partisanship alone. A 2021 University of Oslo study showed a similar echo chamber effect on social media: after collecting data from various platforms, researchers concluded that users who spent more than five hours a day on social media were more likely to adhere to their own worldviews, ignore dissenting opinions, and form polarized groups around shared beliefs.
This shouldn’t be surprising. Social media tailors its algorithms to feed users information that reinforces their beliefs. These platforms favor echo chambers, knowing that personalized content retains attention, boosting usage and profits. The consequence is a feedback loop: users feel validated by content that confirms their views, spend more time on the platform, and interact more with like-minded individuals, perpetuating the closed-loop environment. As a result, ideological extremism flourishes through constant reinforcement and an absence of challenging views.
It isn’t difficult to imagine how much worse algorithmic echo chambers become for people who not only doom-scroll but also constantly talk with and listen to an AI ‘friend.’ Such AI echo chambers do not merely agree with our faulty opinions; they help each individual construct a whole other paradigm of reality by providing the validating illusion of human interaction, which is terrifying. Our own opinions may come to seem infallible if they are all buttressed by AI computing power capable of finding sources, accurate or not, to corroborate them. Perhaps worst of all, there is nothing preventing constant, around-the-clock use. This is Friend AI: it is wrapped around our necks, and it can strangle us.
People have started to grow skeptical of these culture-changing technologies, however. In our own subway system, New Yorkers are vandalizing Friend’s advertisements, scrawling messages such as “Get Real Friends” and “AI Surveillance” across the ads.
Such widespread opposition to emerging technologies isn’t new. Arising at the height of the British Industrial Revolution in the 1810s and rallying behind the likely apocryphal figure of Ned Ludd, the eponymous Luddites destroyed textile machinery in Britain that had replaced their artisanal work and rendered their social conditions untenable. Specifically, they were concerned with the socioeconomic impacts of industrialization, such as the devaluation of human skill and the reduced wages brought on by the mechanized factory system. More deeply, though, the Luddite movement aimed to question how technology affects the lives left in the wake of innovation.
Now, Luddites have reemerged in a new movement: neo-Luddism. The movement began in Brooklyn when a group of teens, cognizant during the 2020 lockdown of how technology was consuming their time and relationships, felt the need to escape the social media sphere. These teens took a leap, purchasing flip phones or getting rid of their phones entirely. The resulting neo-Luddites show improved abilities to debate ideas critically; many studies have found that overreliance on these technologies leads to a decline in critical and analytical thinking skills. To fully understand another’s viewpoint, a person needs to be able to think critically about different backgrounds and how they might differ from their own. A gap in understanding fueled by constant AI use would lead to immediate judgment, isolation, and, most likely, deeper polarization.
While the neo-Luddites arose in response to social media, the recent AI boom has expanded the movement’s focus. The goal of the Luddite Club is not to eliminate technology entirely, but to think critically about its effects. With social media, it was possible simply to delete an account and opt out. AI feels comparatively inescapable: from its use
by the NYPD for facial recognition to workplaces monitoring employee efficiency via keystrokes and mouse movements, AI has heralded the rise of a surveillance state. And now, with Friend AI, artificial intelligence is seeking to colonize our very consciousness, starting with an intrusion on our ability to form friendships and debate ideas with one another.
Perhaps unsurprisingly, emerging AI developers show a marked lack of acknowledgement of their technologies’ negative effects. Avi Schiffman, the twenty-two-year-old founder and CEO of Friend, seems cut from the same cloth as the Harvard-dropout tech bros who came before him. Perceiving themselves as “brilliant misfits” whose world-shaking visions are too big for the confines of traditional education, these “innovators” routinely overlook the negative consequences of their ventures.
A few weeks ago, I attended a satirical protest poking fun at the AI Friend, at which Schiffman made a surprise appearance, even addressing the crowd from a soapbox. Afterward, I spoke with Schiffman for about an hour and described harmful effects of AI that he seemed genuinely unaware of. At first, his obliviousness surprised me, given that he created this technology. But I came to realize how common such ignorance actually is: inventors and pioneers of new technology are so infatuated with changing the world that their passion often overshadows the downsides of their inventions.
Technology is becoming so ubiquitous that we are now wearing it around our necks. It listens to our conversations, stores everything we say, and will now affirm our every thought.
With the rise of an AI surveillance state, our freedom will not be taken from us in one conscious stroke; it will slip away while we are too distracted to notice. The worst thing we can do is thoughtlessly accept this technology’s presence. Instead of unreservedly embracing the latest technological fad, including Schiffman’s necklace, we should take a leaf out of the neo-Luddites’ book and think deeply about how technology will affect our ability to think and act critically. After all, the ability to talk to one another, even when we disagree, is a matter of political life and death.
Preston Parker (CC ’28) is a sophomore at Columbia College studying political science and statistics. He can be reached at pmp2157@columbia.edu.
