From the Shorenstein Center
As a society, we have shifted from a world where policy fears are focused on the ubiquity of digital data, to one where those concerns now center on the potential harm caused by the automated processing of this data. Given this, I find it useful as an economist to investigate what leads algorithms to reach apparently biased results—and whether there are causes grounded in economics.
Excellent work from the discipline of computer science has already documented apparent bias in the algorithmic delivery of internet advertising. Recent research of mine built on this finding by running a field test on Facebook (and replicated on Google and Twitter), which revealed that an ad promoting careers in science, technology, engineering, and math (STEM) was shown to between 20 and 40 percent more men than women across different age groups. This test covered users from 190 different countries, with the ad displayed to at least 5,000 eyeballs in each country. In every case, the ad's targeting was specified as gender-neutral.
When my team and I investigated why the ad was shown to far more men than women, we found that it was not because men use these internet sites more than women. Nor was it because women fail to show interest or click on these types of ads, thereby prompting the algorithm to respond to a perceived lack of interest. (In fact, our results showed that when women do see a STEM career ad, they are more likely than men to click on it.) Nor does it seem to echo any cultural bias against women in the workplace: the extent of female equality in each country, as measured by the World Bank, proved empirically irrelevant for predicting this bias.
Instead, we discovered that the reason this variety of ad is shown to more men than women is that other types of advertisers value the opportunity to get their ads in front of female (rather than male) eyeballs, and they will spend more to do it. Because some advertisers are willing to pay a premium to show ads to women, an ad that does not specify a gender target ends up being shown to fewer women than men. In essence, the algorithm in this case was designed to minimize costs and maximize exposure, so it shows the ad in question to fewer of the relatively expensive women and to a greater number of the relatively cheap men.
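To make the price effect concrete, here is a minimal sketch of a cost-minimizing delivery rule of the kind described above. This is not the platform's actual algorithm; the prices, supply figures, and budget are hypothetical, and the resulting skew is exaggerated for clarity. The point is only that a rule which maximizes impressions under a fixed budget will systematically fill the cheaper audience first.

```python
def allocate_impressions(budget, groups):
    """Greedy cost-minimizer: maximize total impressions under a fixed budget
    by buying from the cheapest audience group first.

    groups maps a group name to (price_per_impression, available_impressions).
    """
    bought = {name: 0 for name in groups}
    remaining = budget
    # Fill groups in order of increasing price per impression.
    for name, (price, supply) in sorted(groups.items(), key=lambda kv: kv[1][0]):
        n = min(supply, int(remaining // price))
        bought[name] = n
        remaining -= n * price
    return bought


# Hypothetical market: other advertisers have bid up the price of showing
# ads to women, so female impressions cost twice as much as male ones.
groups = {
    "women": (0.50, 10_000),  # price per impression, available impressions
    "men": (0.25, 300),
}
result = allocate_impressions(100.0, groups)
# The budget is spent on the cheap male impressions first (all 300 available),
# and only the leftover budget buys the more expensive female impressions (50).
print(result)  # {'women': 50, 'men': 300}
```

Note that the ad itself is gender-neutral here; the skew comes entirely from the prices other advertisers set, which is why inspecting this code alone would not reveal the source of the apparent bias.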
I emphasize that, as with most econometric studies, there are caveats about generalizability. This was a case study of a single ad and a single instance of apparent bias. But since the apparent bias was not particular to the domain of careers, but rather resulted from a general price effect, I predict it may apply to other sensitive types of information advertisers might want to promote online. Even if that prediction does not hold, I believe there are useful regulatory insights to take from this example.
One policy tool often discussed as a panacea for bias is algorithmic transparency, under which platforms are asked to make public the underlying code of their algorithms so that it can be analyzed for potential issues that may lead to prejudice. In this case, however, it is unlikely that much could have been prevented (or gained) by mandating algorithmic transparency, even supposing it were technologically possible. The apparent bias arose from other advertisers' higher valuation of female eyeballs, something that would not have been clear from analyzing an algorithm that was simply intended to minimize costs.
Read the full piece at the Shorenstein Center.
Catherine Tucker is the Sloan Distinguished Professor of Management and Professor of Marketing at MIT Sloan.