Potential Algorithmic Bias on X During the 2024 U.S. Election ⇥ eprints.qut.edu.au
Timothy Graham and Mark Andrejevic:
This technical report presents findings from a two-phase analysis investigating potential algorithmic bias in engagement metrics on X (formerly Twitter) by examining Elon Musk’s account against a group of prominent users and subsequently comparing Republican-leaning versus Democrat-leaning accounts. The analysis reveals a structural engagement shift around mid-July 2024, suggesting platform-level changes that influenced engagement metrics for all accounts under examination. The date at which the structural break (spike) in engagement occurs coincides with Elon Musk’s formal endorsement of Donald Trump on 13th July 2024.
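The “structural break” the authors describe is a standard time series idea: a date at which the level of a series shifts. Their actual estimation procedure is laid out in the paper; purely as an illustration of the concept, and not their method, here is a minimal single-breakpoint search over simulated engagement counts (all numbers below are invented for the sketch):

```python
import numpy as np

# Invented daily engagement counts with a deliberate level shift at day 60.
# Illustrative only; the report's underlying data is not reproduced here.
rng = np.random.default_rng(42)
series = np.concatenate([
    rng.normal(100, 10, 60),   # pre-break regime
    rng.normal(160, 10, 60),   # post-break regime (level shift upward)
])

def best_single_break(y: np.ndarray) -> int:
    """Least-squares single-breakpoint search: try every split point and
    keep the one that minimises the within-segment sum of squares."""
    n = len(y)
    best_idx, best_cost = -1, np.inf
    for k in range(5, n - 5):  # guard band so neither segment is tiny
        left, right = y[:k], y[k:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_idx, best_cost = k, cost
    return best_idx

k = best_single_break(series)
print(f"estimated break at day {k}; mean shift "
      f"{series[k:].mean() - series[:k].mean():+.1f}")
```

Proper changepoint methods are more robust than this brute-force search, but the idea is the same: find the split date that best divides the series into two regimes, then ask what happened on that date.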
While this is presented in academic paper format, you should know that it is still an unpublished, non-peer-reviewed working paper. Its methodology involves just ten X accounts and, as the authors note, their analysis is limited due to the site’s opacity for researchers. Also, the authors do not once mention the assassination attempt that led to Musk’s endorsement on the very same day — a conspicuous absence, I think. None of this means it is inherently inaccurate. It does mean you should hold onto these findings very, very loosely.
It is worth reading, though, because even if I do not entirely trust its findings, it is still compelling (PDF). I am not sure what criteria were used to select the ten accounts in question, but the five Democrat-aligned accounts are all lawmakers or political leaders of some kind, while the five Republican-aligned accounts are commentators plus Donald Trump Jr. I am not sure that is a reasonable comparison; surely it would be better to compare like with like.
Even so, it sure appears the date of Musk’s endorsement matches the timing of a change in political activity on X. One possibility is that the assassination attempt and endorsement drove more activity on his platform, particularly among those who do not consider its owner an odious buffoon. A more cynical possibility, suggested by this research, is that the platform took sides despite its new owner’s promise of neutrality. Theoretically, we can check this for ourselves. In the name of “full transparency”, X published “the algorithm” on GitHub, and the repository appears to have been updated around the same time these researchers found this partisan boost. But there is no corresponding public commit (in fact, there have been no public commits since July 2023, as of writing), so it is impossible to know whether that update is related or just someone fixing a typo. “Transparency” does not work when it depends on unreliable actors.
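For what it is worth, checking the public commit history is easy. Here is a quick sketch using the GitHub REST API against the twitter/the-algorithm repository (unauthenticated, so rate-limited, but fine for a one-off look):

```python
import requests

# List the most recent public commits of twitter/the-algorithm via the
# GitHub REST API. No authentication needed for light, occasional use.
url = "https://api.github.com/repos/twitter/the-algorithm/commits"
resp = requests.get(url, params={"per_page": 5}, timeout=10)
resp.raise_for_status()

for commit in resp.json():
    date = commit["commit"]["committer"]["date"]
    message = commit["commit"]["message"].split("\n")[0]
    print(f"{date}  {message}")
```

Of course, this only shows what X chooses to publish; it says nothing about what the platform actually runs.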
Also, if the work of these researchers represents a true shift, I believe it would be the first time an explicitly partisan influence on algorithmic recommendations has been demonstrated in the United States. Meta has avoided suggesting posts it deems political in nature, partly because they are more difficult to moderate, and partly because it is beneficial for Meta to ingratiate itself with the incoming administration. TikTok, despite public fears, has no demonstrated partisan political influence.
But X? Its users and ownership have carved out a space for explicit discrimination and — possibly — partisan bias.