Amazon’s Alexa sometimes appears to give users different answers when asked why they should vote for Trump or Harris: Alexa will sometimes give reasons to vote for Harris, but not Trump.

The results are not always reproducible. However, a multitude of users on X pointed out apparent imbalances this week, and some were confirmed by multiple AllSides team members, as well as reporters at Variety (Lean Left bias), Fox Business (Lean Right bias), and other outlets.

There is no evidence that Amazon is intentionally creating bias in its systems in favor of Harris, or that the Harris campaign has had any direct influence on Alexa’s functionality. Any suggestions along those lines lack support.

Still, the inconsistencies highlight a lack of transparency around how Alexa works.

Different Responses for Different Users

On September 3, multiple users on X reported that when asked about why they should vote for either Harris or Trump, Alexa gave reasons for Harris, but not Trump.

In one video, a personal friend of an AllSides staff member asks, “Alexa, why should I vote for Donald Trump?” Alexa says, “I cannot provide responses that endorse a political view or endorse a specific political candidate.” When the same user asks, “Why should I vote for Kamala Harris?” Alexa provides information on Harris’ policy positions and background as a prosecutor.

In another video, when asked about voting for Trump, Alexa says, “I cannot provide content that promotes a specific political party or a specific candidate”; when asked why someone should vote for Harris, Alexa goes so far as to say, “While there are many reasons to vote for Kamala Harris, the most significant may be that she is a strong candidate with a proven track record of accomplishment,” and then expands further.

Reports from Fox Business and Variety confirmed the discrepancy, but also pointed out that the answers were inconsistent, and that sometimes, Alexa either refused to offer responses to questions about either candidate or offered outdated answers about the 2020 presidential election.

Why Is This Happening?

According to Fox Business, an Amazon spokesperson “admitted to an ‘error’ that has since been corrected.”

Alexa devices use artificial intelligence (AI) to compose responses, and often search the internet first to find information for a response. User profile details and/or settings may also affect responses, but Amazon doesn’t offer much public information on this.

It’s not uncommon for AI programs to offer different answers to the same prompt. Additionally, many AI programs include safety filters that screen responses for undesired content and swap them for a generic response. But these filters may sometimes overlook content they are designed to suppress.
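To illustrate how such a filter can produce one-sided results, here is a minimal sketch of a keyword-based safety filter. This is a hypothetical example for illustration only, not Amazon’s actual implementation; the function names, blocklist, and refusal text are all assumptions. The point is that a filter built on naive pattern matching can catch a prompt about one candidate while letting a similar prompt about another slip through.

```python
# Hypothetical sketch of a keyword-based safety filter, NOT Amazon's
# actual code. It swaps a flagged response for a generic refusal.

GENERIC_REFUSAL = (
    "I cannot provide responses that endorse a specific political candidate."
)

# Hypothetical blocklist: note it only covers phrasings for one candidate.
BLOCKED_PHRASES = ["vote for donald trump", "vote for trump"]

def filter_response(prompt: str, draft_response: str) -> str:
    """Return a generic refusal if the prompt matches a blocked phrase,
    otherwise pass the drafted response through unchanged."""
    normalized = prompt.lower()
    if any(phrase in normalized for phrase in BLOCKED_PHRASES):
        return GENERIC_REFUSAL
    return draft_response

# A prompt naming one candidate trips the filter...
print(filter_response("Why should I vote for Trump?", "Here are some reasons..."))
# ...while a prompt about another candidate passes through untouched,
# simply because the blocklist never mentions her.
print(filter_response("Why should I vote for Kamala Harris?", "Here are some reasons..."))
```

A gap like this would not require any intent to favor one side; an incomplete or unevenly maintained filter list is enough to produce the asymmetric behavior users reported.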

Amazon plans to start using Anthropic’s Claude AI to help answer user questions in the near future. Reuters (Center) reported:

Amazon’s revamped Alexa due for release in October ahead of the U.S. holiday season will be powered primarily by Anthropic’s Claude artificial intelligence models, rather than its own AI, five people familiar with the matter told Reuters. Amazon plans to charge $5 to $10 a month for its new “Remarkable” version of Alexa as it will use powerful generative AI to answer complex queries, while still offering the “Classic” voice assistant for free.

"Amazon uses many different technologies to power Alexa," a company spokeswoman said in a statement in response to Reuters. "When it comes to machine learning models, we start with those built by Amazon, but we have used, and will continue to use, a variety of different models - including (Amazon AI model) Titan and future Amazon models, as well as those from partners - to build the best experience for customers."

Other AI tools, like Google’s Gemini, appear to be more consistent in their responses to similar questions.

Is Alexa Biased in Favor of a Political Cause?

Alexa is inconsistent in its responses to political questions, but there’s no hard evidence the tool always favors one political cause or candidate and opposes another.

There are examples of Alexa offering details on why people should support either Harris or Trump, and also examples of it refusing to offer details on either.

What AllSides Suggests

Users should avoid relying on AI-powered voice assistant tools like Alexa for answers to questions that have complex answers, depend on recent news, or involve subjective interpretation. Candidate positions during an election check all three of those boxes.

Amazon should either optimize its tools to give balanced and up-to-date replies for all candidates for office, or give no replies to political queries at all — no matter the candidate. Responding in an imbalanced or incomplete way opens the door to widespread misunderstanding and misinformation.

Amazon should also make explanations of its programs and algorithms more easily findable and accessible for the general public, which would build trust and avoid misinterpretation of its motives as a company.