Are Amazon’s Book Algorithms Sexist?
Let’s cut right to the chase. Are Amazon’s book-recommending algorithms sexist? It’s nigh impossible to prove—or disprove.
But it’s a question worth investigating, given the sheer power Amazon now possesses in the book publishing and bookselling industries. The matter is also worth probing given how little we know about Amazon’s algorithms, the blips of computational magic that determine which books you (supposedly) want to see, and which books you (supposedly) don’t.
If you buy books from Amazon, its product recommendation engine wields immense power over what you see and, ultimately, what you buy. Think of it as a librarian recommending books to you based on your interests—but the librarian is invisible and discovered your interests by covertly recording your Goodreads searches, and then comparing them with everyone else’s Goodreads searches.
What does Amazon do with that kind of power? What do its choices mean for you, the user and consumer of its service? These are the questions we should be asking a company that dominates book sales. And the place to start is with its recommendation algorithms.
What Exactly Are Algorithms?
Let’s start with the most important question: what the heck are algorithms?
Amazon’s algorithms—like those of most major tech companies—are proprietary, the “secret sauce” that makes internet giants run. That lack of transparency is one of many troubling tech trends Sara Wachter-Boettcher examines in the eye-opening Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech.
Algorithms themselves are just specific sets of steps for performing a computation, Wachter-Boettcher explains. They’re not magic. Algorithms perform mathematical computations humans could do ourselves, just at a scale we could never manage. And we use them everywhere, to do everything.
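To make that concrete, here’s a minimal, purely illustrative sketch (not anything Amazon actually runs) of what “a specific set of steps” looks like in practice: count how often each book was viewed, then sort the titles by that count. There’s no magic in it, just arithmetic a person could do by hand, except that a computer can repeat it across millions of products.

```python
from collections import Counter

def rank_books_by_views(view_log):
    """Rank book titles from most to least viewed.

    view_log is a list of (shopper_id, book_title) view events.
    """
    # Step 1: count views per title. Step 2: sort by that count, highest first.
    view_counts = Counter(title for _, title in view_log)
    return [title for title, _ in view_counts.most_common()]

# A toy log of four view events from three shoppers.
views = [
    ("shopper_a", "Pachinko"),
    ("shopper_b", "Pachinko"),
    ("shopper_b", "The Way of Kings"),
    ("shopper_c", "Pachinko"),
]
print(rank_books_by_views(views))  # ['Pachinko', 'The Way of Kings']
```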
“Algorithms now control a huge number of systems that we interact with every day—from which posts bubble up to the top of your Facebook feed, to whether image recognition software can correctly identify a person, to what kinds of job ads you see online,” Wachter-Boettcher writes.
The trouble? Algorithms aren’t built in a vacuum.
“No matter how much tech companies talk about algorithms like they’re nothing but advanced math, they always reflect the values of their creators: the programmers and product teams working in tech,” Wachter-Boettcher explains in Technically Wrong. “And as we’ve seen time and again, the values that tech culture holds aren’t neutral. After all, the same biases that lead teams to launch a product that assumes all its users are straight, or a signup form that assumes people aren’t multiracial, are what lead them to launch machine-learning products that are just as exclusive and alienating—and, even worse, locked in a black box, where they’re all but invisible.”
An Experiment With Amazon’s Book Algorithms
Despite what you may think, I’m not an army of automated tester bots. There’s almost no way I could gather enough data to determine definitively the biases of Amazon’s book algorithms.
Earlier this year, The Atlantic looked into whether Amazon privileged its own products in search results. It was nearly impossible to tell because of those proprietary algorithms, their mechanics shielded from view inside the “black box” Wachter-Boettcher described. (Amazon representatives denied the claims, though they did admit the algorithms look at “profitability” when displaying products.)
Still, I was curious to see what I could learn from a brief experiment with Amazon’s book recommendations. Scrubbed of all my identifying data—a consumer with an entirely clean slate—I wanted to know which books Amazon would recommend for me, and if those recommendations showed any clear patterns toward favoring male authors.
To start, I created an entirely new email account with as little profile information as possible, including no gender specification. (Who knows how much information one service provider shares with another?) Then I used that fresh new email to create an equally blank Amazon account.
From there, I selected a handful of test titles to search. The titles I picked ran the genre gamut deliberately, so Amazon couldn’t easily peg me as any certain kind of reader. In evaluating the results, I looked at several avenues Amazon uses to recommend additional titles:
- Customers Who Viewed This Item: These recommendations appear on the page of the book you’re looking at, apparently culled from data about other users who’ve looked at the same book. (A rough sketch of that kind of co-viewing logic follows this list.)
- Inspired By Your Browsing History: Amazon recommends items similar (by some criteria) to those you’ve viewed.
- Related to Items You Viewed: This category of recommendations is labeled differently and appears in a different place within Amazon’s site, but ultimately performs much the same function as the “Inspired By Your Browsing History” recommendations.
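Amazon doesn’t disclose how any of these features actually work, but “Customers Who Viewed This Item” is, at least in shape, classic item-to-item co-occurrence recommendation. Here’s a rough, entirely hypothetical sketch of that logic; the function name and data layout are my own, not Amazon’s:

```python
from collections import defaultdict

def co_viewed_recommendations(view_log, seed_title, top_n=5):
    """Recommend the titles most often viewed by shoppers who also
    viewed seed_title. view_log is a list of (shopper_id, title) pairs."""
    titles_by_shopper = defaultdict(set)
    for shopper, title in view_log:
        titles_by_shopper[shopper].add(title)

    co_view_counts = defaultdict(int)
    for titles in titles_by_shopper.values():
        if seed_title in titles:
            for title in titles - {seed_title}:
                co_view_counts[title] += 1

    # Titles most frequently co-viewed with the seed book come first.
    ranked = sorted(co_view_counts, key=co_view_counts.get, reverse=True)
    return ranked[:top_n]
```

Note what’s missing: nothing in that recipe knows or cares who wrote the books. It simply mirrors what other shoppers already looked at, which is exactly how recommendations inherit whatever patterns those shoppers bring with them.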
The Results: Amazon’s Book Recommendations
I started the experiment by searching my chosen test titles. For each title, I recorded the “Customers Who Viewed This Item” recommendations.
- Search No. 1: Harry Potter and the Sorcerer’s Stone by J.K. Rowling (sigh). I picked the first Harry Potter novel because it’s an enormously popular book with a plethora of read-alikes in different genres. Of course, there are also just a lot of Harry Potter–related books, movies, and merchandise. So the related recommendations were all Harry Potter products. I should’ve seen that coming…
- Search No. 2: The Way of Kings by Brandon Sanderson. I pivoted to the historically male-dominated genre of high fantasy and searched for a title by a male author. I was curious to see if Amazon would recommend any fantasy works by women (of which there are plenty!) when I wasn’t specifically looking for them. It did not. Recommendations included novels by 24 authors; only two of those authors were women (N.K. Jemisin and Robin Hobb).
- Search No. 3: Pachinko by Min Jin Lee. Next, I searched for this work of literary fiction written by a woman. And here the results tell a different story: works by 49 authors were recommended, 35 of them written by women. Notably, these recommendations also reflected much greater racial parity than the Brandon Sanderson search, an interesting data point when comparing a search for a novel written by a Korean American woman with a search for a novel written by a white man.
- Search No. 4: We Were Eight Years in Power by Ta-Nehisi Coates. To mix things up further, the fourth title I searched for was nonfiction from a male author of color. The titles recommended to me came from 34 authors. Once again, a high percentage of the books displayed in the “Customers Who Viewed This Item” results were written by authors of color. But fewer than half of the recommended titles, 13 to be precise, were written by women.
- Search No. 5: Aru Shah and the End of Time by Roshani Chokshi. The final search was for a middle grade fantasy from a female author of color. Works by 47 authors appeared as recommendations for “Customers Who Viewed This Item”; an impressive 36 of them were written by women.
Amazon’s Other Recommendations
I waited a couple of days after those searches—and after I put each of the searched titles into my cart—to see how they would affect Amazon’s recommendations throughout the rest of my account. Following those searches, Amazon recommended 35 titles “Inspired by Your Browsing History.” Sixteen of those 35 works were from female authors, 11 of which were various Harry Potter editions.
As for the “Related to Items You Viewed” recommendations, Amazon’s algorithms came up with 25 titles; again, 16 works were by female authors, nine of which were Harry Potter–related editions.
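For anyone who wants the ratios in one place, here’s a small script that recomputes the female-author share of each tally reported above. It uses only the counts already quoted in this piece; the underlying recommendation lists aren’t reproduced here.

```python
# (total recommendations counted, works by women) for each tally above
tallies = {
    "The Way of Kings: Customers Who Viewed This Item": (24, 2),
    "Pachinko: Customers Who Viewed This Item": (49, 35),
    "We Were Eight Years in Power: Customers Who Viewed This Item": (34, 13),
    "Aru Shah and the End of Time: Customers Who Viewed This Item": (47, 36),
    "Inspired by Your Browsing History": (35, 16),
    "Related to Items You Viewed": (25, 16),
}

for label, (total, by_women) in tallies.items():
    print(f"{label}: {by_women}/{total} by women ({by_women / total:.0%})")
```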
What the Results Mean
So are Amazon’s algorithms sexist when recommending books? These results can’t support that conclusion. But I did learn something from my shadow existence on Amazon: If you seek out diverse reads, Amazon will oblige you with diverse recommendations. If you don’t, Amazon won’t go out of its way to find them for you.
(Before I break down that conclusion, a note: As of this writing, Amazon representatives had not responded to a request for comment.)
Earlier, I compared Amazon’s product recommendation engine to an invisible, spying librarian. The trouble is that it’s not like a librarian. Amazon’s book algorithms look at what’s popular and at what you’re already reading. This mega-conglomerate isn’t in the business of pushing your reading boundaries or expanding your horizons; it’s in the business of pushing product.
For those advocating for diversity in publishing, that’s a flawed computation. If a user searches for titles written by white cisgender men (because they’re well-known), Amazon won’t mess with success, continuing to recommend similar titles without thought or care for the author’s identity.
This approach creates a book recommendation echo chamber. You, a reader, start to think that because you see a book everywhere in your recommendations, it’s popular. And if it’s that popular, it must be good. You buy the book, which ensures it shows up in someone else’s recommended results and all but codifies the algorithm’s understanding that you like this exact type of book. It’s a self-perpetuating cycle in which bestseller comes to mean best quality, a cycle that privileges well-known authors, many of whom are men.
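That feedback loop is easy to see in a toy simulation. The model below is purely illustrative, not Amazon’s system: two books start at nearly equal popularity, each round a book is recommended (and “bought”) in proportion to its current popularity, and every purchase feeds back into the next round.

```python
import random

# Toy rich-get-richer loop: purchases drive recommendations,
# and recommendations drive purchases.
popularity = {"Well-known title": 11, "Lesser-known title": 10}

for _ in range(1000):
    weights = list(popularity.values())
    pick = random.choices(list(popularity), weights=weights)[0]
    popularity[pick] += 1  # the "purchase" makes the book more recommendable

print(popularity)
# Whichever book racks up early purchases keeps getting surfaced more,
# so the gap between the two tends to widen rather than even itself out.
```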
Compare that to local indie bookstores, many of which have a feminist focus and make a point of stocking diverse shelves.
Thinking About Algorithms
Maybe this observation about book recommendations doesn’t seem like a life-or-death problem on its own, but it is a problem—and one magnified by Amazon’s stranglehold on so many markets.
“Algorithms are making choices that affect your life, from whether you can find or keep a job to how much you pay for a product to what information you can access,” Wachter-Boettcher writes in Technically Wrong.
And every single one of them, she explains, is subject to the biases of those who designed the algorithm. If you think this is some kind of exaggeration, consider this Reuters report from 2018 about Amazon’s scrapped artificially intelligent recruiting tool that showed clear bias against female candidates.
This isn’t to say all algorithms are warped or evil, or even that Amazon’s are worse than others. In the age we live in, it’s impossible to avoid them. It would do us well, however, to treat their recommendations—to treat the information or products pushed to us—with a healthy amount of skepticism. Consider that these algorithms, though computer-based, are created by humans, with the same flaws and biases as anyone else.
In Technically Wrong, Wachter-Boettcher notes that most developers who create these systems aren’t considering the downstream effects of their work. But that’s not because they’re consciously biased; it’s because they’re not thinking about their own identity in the context of their creations. She cites University of Utah computer science professor Suresh Venkatasubramanian for a particularly important observation.
“No one really spends a lot of time thinking about privilege and status,” Venkatasubramanian said in an interview with Motherboard. “If you are the defaults, you just assume you just are.”