At a recent Algorithmic Fairness meeting, there was some discussion of algorithmic homogenization. The concern, as expressed for example in Kleinberg and Raghavan (2021), is that
“the quality of decisions may decrease when multiple firms use the same algorithm. Thus, the introduction of a more accurate algorithm may decrease social welfare—a kind of “Braess’ paradox” for algorithmic decision-making”.
Now, no new model is needed to exhibit such a paradox for algorithmic decision making. The prisoner’s dilemma will do the job for us. Consider the instance of the dilemma in the table below.
|   | C | D |
|---|---|---|
| C | (1,1) | (-1,5) |
| D | (5,-1) | (0,0) |
Here are two algorithms our players can choose from to play the game. The first is a silly algorithm: it selects a strategy uniformly at random from among those available. The second is a `better' algorithm: it selects a strategy uniformly at random from among those that are rationalizable (meaning they are a best response to some mixed strategy of the opponent). Why is the second better? Holding the strategy of the opponent fixed, the second delivers a higher expected payoff than the first, because strategy D, defect, is the only rationalizable strategy in the prisoner's dilemma.
If both players use the inferior algorithm, each player's expected payoff will be 5/4. If both players use the better algorithm, each player's expected payoff will be 0. Thus social welfare is lower when both players switch to the better algorithm. Why doesn't each player stick with the silly algorithm? Because if I know my rival is playing the silly algorithm, I am better off switching to the better one: defecting against a uniform randomizer yields 5/2 in expectation.
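For concreteness, here is a minimal Python sketch (my own, not from KR (2021)) that verifies these expected payoffs; the payoff matrix is the one in the table above, and the function names are purely illustrative.

```python
import itertools

# Row player's payoff; the game is symmetric, so the column
# player's payoff at (a, b) is payoff[(b, a)].
payoff = {('C', 'C'): 1, ('C', 'D'): -1, ('D', 'C'): 5, ('D', 'D'): 0}

def expected_payoff(p_row, p_col):
    """Row player's expected payoff when each player independently
    mixes over {C, D} with the given probabilities."""
    return sum(p_row[a] * p_col[b] * payoff[(a, b)]
               for a, b in itertools.product('CD', repeat=2))

silly = {'C': 0.5, 'D': 0.5}    # uniform over all strategies
better = {'C': 0.0, 'D': 1.0}   # uniform over rationalizable ones: just D

print(expected_payoff(silly, silly))    # 5/4 = 1.25 for each player
print(expected_payoff(better, better))  # 0.0 for each player
print(expected_payoff(better, silly))   # 5/2 = 2.5: deviating pays
```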
While this example makes the point, it does so unconvincingly because it is not tied to a compelling context. KR (2021) does not suffer from this shortcoming because it uses algorithmic hiring as motivation. Two firms compete to hire individuals, and each may deploy an algorithm to screen candidates. Unlike the example above, the algorithm does not choose each firm's actions; it only provides information about candidate quality. In other words, the algorithm makes predictions, not decisions.
The idea that better information in a competitive context can make the players worse off is not a new one. Nevertheless, it is always useful to understand the precise mechanism by which this plays out in different contexts. The same is true in this case, and I direct the reader to KR (2021), but I would be remiss not to also mention this closely related paper by Immorlica et al. (2011). As an aside, there is an obvious connection between homogenization and algorithmic price fixing (see Greenwald and Kephart (1999)) that is a subject for a future post.
Next, I consider a feature absent from KR (2021) that I think is critical: wages. To see that their presence makes a difference to how we view algorithmic homogenization, suppose two firms compete for a worker. The worker's type is their productivity, denoted $t$. If a firm hires the worker at a wage of $w$, it earns a profit of $t - w$. Neither firm knows the worker's type, but each receives a conditionally independent noisy signal of it. We can think of the signal as being delivered by some mythical algorithm; conditional independence is to be interpreted as the firms using different algorithms. Upon receiving their respective signals, each firm submits to the worker a take-it-or-leave-it wage offer. The worker selects the firm that offers the highest wage. What has just been described is a sealed-bid first-price auction in a common values setting. In equilibrium, each firm will offer a wage below its estimate of worker productivity conditional on its signal, because of the winner's curse. On the other hand, suppose both firms use the same algorithm to estimate worker productivity and that it is error free, i.e., they receive perfect information about the worker's productivity. Now we have Bertrand competition, and each firm offers a wage of $t$. In this toy model, algorithmic monoculture does not affect efficiency, but it does affect the distribution of surplus. In particular, the worker benefits from homogenization! For a privacy slant on the same setup see Ali et al. (2023).
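To see the winner's curse mechanically, here is a small Monte Carlo sketch. The Gaussian specification of productivity and signal noise is my illustrative assumption, not part of the model above; the point it makes is that if each firm naively offered its conditional estimate of productivity, the winning firm would overpay on average, which is why equilibrium offers must be shaded downward.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

t = rng.normal(0, 1, n)        # worker productivity
s1 = t + rng.normal(0, 1, n)   # firm 1's noisy signal
s2 = t + rng.normal(0, 1, n)   # firm 2's noisy signal

# With these variances, E[t | s_i] = s_i / 2. Suppose each firm
# naively offered that estimate as its wage.
w1, w2 = s1 / 2, s2 / 2
winning_wage = np.maximum(w1, w2)

# Conditional on winning, the naive offer overpays on average,
# because the winner is the firm with the more optimistic signal.
print((t - winning_wage).mean())   # negative: the winner's curse
```

Hence equilibrium offers are shaded below $E[t \mid s_i]$, whereas with a shared error-free algorithm Bertrand competition drives the wage to $t$ and the entire surplus to the worker.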
Irrespective of whether algorithmic homogenization improves or reduces efficiency, we might still be worried about it because of the possibility of systematic errors. In the meeting referenced earlier, Ashia Wilson asked her audience to imagine what might happen if a company like HireVue, for example, were to occupy a dominant position in the recruitment market (this would allow HireVue to affect wages, but that is a different story). HireVue and companies like it claim to provide accurate signals about a job candidate's fit with a given employer (there is a discussion to be had about whether this is what they should be doing), rapidly and at scale. Suppose the prediction algorithm that HireVue uses is indeed more accurate (in aggregate) than the alternative, but exhibits systematic errors for some well-defined subset of job seekers. For example, it consistently underestimates (or overestimates) the quality of fit for candidates whose last names have too many consonants. Depending on the nature of the alternative, this might be a concern. Does it call for intervention or regulation? Why is it not in HireVue's interest to discover such errors and correct them? If a company like HireVue is rewarded per person placed, then such a systematic error would lower its placement rate and thus its revenues.
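To illustrate the revenue argument with a toy simulation: the subgroup share, bias size, hiring threshold, and per-placement fee below are all hypothetical numbers of my choosing, not anything claimed about HireVue.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

fit = rng.normal(0, 1, n)        # true candidate-employer fit
subgroup = rng.random(n) < 0.2   # 20% of candidates, e.g. by surname

# The platform's prediction: unbiased noise overall, plus a
# systematic downward shift of 0.5 on the subgroup.
prediction = fit + rng.normal(0, 0.5, n) - 0.5 * subgroup

threshold = 1.0   # employer hires above this score
fee = 1.0         # revenue per person placed

placed_biased = (prediction > threshold).sum()
placed_unbiased = ((prediction + 0.5 * subgroup) > threshold).sum()

# The biased predictor places fewer candidates, so a platform paid
# per placement leaves revenue on the table by not fixing the error.
print(fee * placed_biased, fee * placed_unbiased)
```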