Emerce EDAY 2024 was a dynamic event, bringing together some of the most influential voices in technology, media, brands, and e-commerce. The day was packed with insightful presentations, making it an unmissable learning opportunity for professionals eager to stay ahead of the latest trends. Two presentations we attended stood out in particular for pointing to the potential dangers of AI, or rather the misuse of AI, and the ethical concerns that arise from it. From threatening elections and democracy to fostering one-sided virtual relationships, both presentations highlighted crucial ethical implications of AI that we all should watch out for. Keep on reading to find out more!
Tech-tonic shifts: How digital waves shape elections and democracy – Megan Shahi
Megan Shahi, Director of Technology Policy at American Progress, delved into the role of social media and AI technology in shaping the 2024 US election.
This is more critical than ever, as the rise of AI technologies enables the creation of sophisticated fabrications and distortions, making it increasingly difficult to distinguish between what’s real and what’s not. In her in-depth exploration, Megan examined how mainstream social media platforms, AI-generated content, conspiracy theories, misinformation, and the lack of regulation in the tech space are shaping voter opinions—and, in some cases, even the outcomes of elections.
The presentation also highlighted several pressing concerns, such as the need to identify reliable sources of authoritative information in an era dominated by AI and deepfakes, the public’s susceptibility to manipulation, the dangers of targeted persuasion, and the glaring absence of federal regulations governing technology in the United States.
The dangers of misinformation
Megan explained how AI tools can be misused to quickly spread false or misleading information across social media, distorting public discourse, manipulating voter opinions, and undermining democratic processes. The ethical concern lies in the deliberate use of AI to deceive, influencing decisions with fabricated content and conspiracy theories—something we've witnessed frequently in this election year.
Most recently, in September of this year, The Guardian reported on a misinformation incident at a critical moment of the 2024 presidential election: After Joe Biden withdrew from the presidential race, screenshots began circulating online claiming that a new candidate could not be added to ballots in nine states. The posts quickly gathered millions of views on Twitter, now X, and prompted fact-check requests. They were simply wrong: ballot deadlines had not passed, and Kamala Harris still had plenty of time to have her name added to ballots. The source of the misinformation? Twitter’s AI chatbot, Grok. When users asked Grok whether a new candidate still had time to be added to ballots, it gave the incorrect answer.
This was eventually corrected, and the chatbot now directs users to a different website, http://vote.gov, when asked about elections. This particular incident was relatively harmless, as the false claim would not have prevented anyone from casting a ballot, but it spoke to a more significant issue: how easy it is to spread misinformation far and wide using a social media platform. On top of that, Grok can also generate extremely lifelike images, which deepen partisan divides and can mislead people to an unprecedented degree.
The rise of deepfake technology
Megan argued that the rise of AI-driven deepfake content poses a serious threat, as it can be used to mislead the public, damage reputations, or even influence electoral outcomes. Deepfakes are hyper-realistic yet entirely fabricated videos, images, or audio created using AI, often depicting real or fictional people in convincing but false scenarios. While a trained eye may spot some deepfakes, they are much harder to identify for anyone unfamiliar with the technology. Unfortunately, older individuals are particularly vulnerable to deepfake scams, as they often lack the digital literacy that younger generations possess.
Targeted persuasion and public susceptibility
AI’s capability to analyze vast amounts of data has enabled micro-targeting, where specific groups or individuals receive tailored content designed to influence their beliefs and actions. This raises ethical concerns about the exploitation of personal data for political or ideological manipulation, often occurring without the knowledge or consent of those affected. Given the effectiveness of these targeted persuasion tactics, it’s no wonder that financial analysts have referred to people’s digital behavioral data as “more valuable than oil.”
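To make the mechanics concrete, here is a deliberately simplified sketch of the logic behind such targeting. It is purely illustrative: the profile fields, the persuadability score, and the message variants below are all invented for this example, not drawn from any real campaign or platform.

```python
# Toy illustration of micro-targeting: segment users by an inferred
# interest and show each segment a message crafted for it.
# Every field, score, and message here is hypothetical.

from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: str
    top_interest: str      # inferred from behavioral data (hypothetical)
    persuadability: float  # model-estimated openness to persuasion, 0 to 1

# One tailored message per inferred interest segment (invented examples)
TAILORED_MESSAGES = {
    "economy": "Candidate X will lower your cost of living.",
    "security": "Candidate X will keep your neighborhood safe.",
    "environment": "Candidate X has the boldest climate plan.",
}

def pick_message(profile: UserProfile) -> str | None:
    # Only target users the model predicts are open to persuasion
    if profile.persuadability < 0.6:
        return None
    return TAILORED_MESSAGES.get(profile.top_interest)

users = [
    UserProfile("u1", "economy", 0.82),
    UserProfile("u2", "environment", 0.35),
    UserProfile("u3", "security", 0.71),
]

for user in users:
    message = pick_message(user)
    if message is not None:
        print(f"{user.user_id} -> {message}")
```

Even in this toy version, the ethical problem is visible: a behavioral score decides who gets singled out, and the recipients never see, let alone consent to, the profile that determined which message they were shown.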
The overarching problem? The lack of federal regulations
Ultimately, Megan explained, the issue hinges on the lack of regulations governing the use of AI and social media technologies in political campaigns and elections in the U.S. and beyond. Without adequate oversight, the misuse of AI tools can persist unchecked, enabling harmful practices that jeopardize the integrity of democratic systems. Moreover, the pace at which technology evolves—particularly in creating fake news—far outstrips the development of policies to regulate it, necessitating significant efforts to bridge this gap.
Overall, this session offered invaluable insights into the profound impact of AI and social media on democracy. From the spread of disinformation to the threats posed by deepfake content and targeted persuasion, the misuse of these technologies undermines electoral processes. The urgent need for regulation is evident; as Megan Shahi highlighted, the rapid advancement of AI has outpaced policy development, leaving critical gaps in oversight. Addressing these ethical concerns demands prompt action and a commitment to fostering a more informed and resilient electorate in the face of digital disruption.
Interested in learning more about this topic? Reach out to Megan Shahi on LinkedIn!
GenAI meets affective computing: Our new relationships – Sophie Kleber
In this session, Sophie Kleber, UX Director at Google, addressed a critical question: How can we design virtual personalities that respect human uniqueness rather than becoming digital sycophants that exploit our vulnerabilities?
This presentation revolved around the concept of computers as social actors and explored human weaknesses when interacting with humanoid technologies, highlighting the ethical challenges and responsibilities involved in creating emotionally intelligent AI.
Do you say “please” and “thank you” to ChatGPT?
Sophie Kleber postulated that when technology mimics human behavior, we tend to fill in the gaps and form relationships with it, a phenomenon known as the "Eliza effect," named after ELIZA, the 1960s chatbot whose users confided in it as though it truly understood them. The effect is more pervasive than we realize. Take ChatGPT, for example: Do you find yourself saying "please" and "thank you"? You know it’s just a computer relaying information, not a person on the other side. Yet its natural, conversational tone prompts us to follow the same social conventions of politeness we would use with a real person, as if we’re afraid of being rude, even to a machine.
Are we engaging in romantic relationships with AI?
Is saying "please" and "thank you" to ChatGPT inherently problematic? Not necessarily, but it points to a larger issue—our tendency to humanize technology. Sophie explained that on one end of the spectrum, we have purely transactional, robot-like interactions, such as with the early versions of Google Assistant. On the other, we see highly personalized and even intimate connections, like the relationships some people have with "Alexa," even treating it as a family member. In more extreme cases, technology becomes so humanized that people form romantic attachments, as seen in the movie Her. Beyond fiction, AI programs like "Replika" and "Xiaoice" take this a step further, with Xiaoice being told "I love you" over 20 million times. The evidence is clear: People around the world are forming emotional, even romantic, relationships with AI.
In the face of this, Sophie urged us to recognize that while AI can enhance interactions, it should never be seen as a substitute for genuine human relationships. Although AI can detect patterns and generate responses using predefined algorithms, it lacks a nuanced understanding of human emotions and cannot experience feelings or possess cognitive abilities such as awareness and emotional reaction; in short, it is not sentient. AI is not designed to meet human psychological needs: mimicry is not the same as empathy, and AI is only capable of the former.
But who bears the responsibility of setting boundaries with AI?
The question remains: Who is responsible for defining the boundaries in our relationships with AI? Should users themselves be accountable for setting limits, or does this responsibility fall to the creators of such technology? Sophie argued for the latter, presenting a framework for ethical design practices in conversational interfaces—one that enhances human interaction while safeguarding integrity and avoiding the exploitation of vulnerabilities.
As AI increasingly permeates different aspects of our daily lives, discussions like these are more relevant than ever. This session was particularly valuable for designers, developers, and anyone interested in AI and human-computer interaction, offering insights on how to build virtual personalities that truly respect and reflect the people who engage with them. Moving forward, we should encourage each other to critically evaluate the ethical implications of building emotionally intelligent and humanoid AI in order to make progress and innovate in the right direction.
Want to delve into the topic further? Connect with Sophie Kleber on LinkedIn!