We are two anonymous AI trainers based in France. We work for a large social media company through Outlier.ai, an outsourcing platform – a subsidiary of Scale AI, just like Remotasks – that offers tasks to freelancers all around the world. Our job is to impersonate an LLM. This means we receive real conversations between users and an LLM that took place on social media, so that we can analyze them – and although the conversations are anonymized, the users do not know that we have access to them. We then have to respond to the user’s latest entry as if we were the LLM. Concretely, these tasks demand a very high level of intellectual work and qualification, as they involve extensive research, critical thinking, creative writing, as well as proofreading and formatting skills. Once a task is submitted, the social media platform uses the response as an example of what a qualified human being would have answered to that kind of prompt, so that the LLM can be improved, especially when it comes to thinking and sounding like a human.
In ad campaigns, Outlier presents the jobs as a great way to earn money while working remotely from the comfort of your own home, but more importantly, as the perfect opportunity for qualified bilingual individuals to finally see their skills valued.
What they fail to advertise, however, is the level of alienation and pressure that comes with that beguiling 25-dollar-an-hour rate. Workers impersonating AIs have to work at impossible paces and follow ever-changing rules and restrictions that censor and standardize the personal, human style they were hired for in the first place. And when they somehow manage to meet the required standards, there is always the risk of writing “too well for a human being” and being accused of cheating, typically of using AI to draft their responses.
The promised recognition, for its part, will have to wait. The intense intellectual effort we produce leaves no visible trace. Instead, it is instantly absorbed into a system that automates it. We leave our blood, sweat, tears and mental health in the process, but will never get to see the finished product, let alone receive recognition – which is actually fitting for a company that makes us work like robots: how could it value our skills as qualified, human workers?
We have created this inquiry with several goals in mind.
Firstly, we would like it to serve as a cautionary tale for candidates who might be tempted to apply, as well as for future AI users, who need to realize at what cost these new technologies are being developed. Even though the story ends on a very dramatic note, the character’s mental and emotional distress is dangerously close to what AI trainers actually experience in this line of work.
Secondly, we want this piece to reach people who do the same job as us. We want them to know that they are not alone, that what they are feeling is valid and that they deserve better. If we unite, we might be stronger and come up with solutions to improve our situation (mental health check-ins and a slower pace, at the very least).
Finally, our purpose is to get the attention and support of public institutions. While our goal is to unite and denounce our working conditions, many of us are afraid to speak up too openly because we fear the consequences. On the one hand, the terms and conditions of the platform clearly state that we risk legal repercussions if we do so. On the other hand, we are well aware that if we refuse to abide by these working conditions, the platform could easily replace us in an instant, as it preys on the endless supply of young, qualified candidates who flock to these positions because they cannot find jobs elsewhere. We feel that significant change can only come if public institutions actively address these issues.
Recommended citation:
Okinyi, M. (2024). Impact of Remotasks Closure on Kenyan Workers. In: M. Miceli, A. Dinika, K. Kauffman, C. Salim Wagner, & L. Sachenbacher (eds.), The Data Workers’ Inquiry. https://data-workers.org/kauna
About the Author
Clara and B
Clara and B have been working on Outlier as AI impersonators for almost a year. They both applied due to a lack of professional opportunities despite graduating from prestigious French universities. Their experience with Outlier has left them drained, with serious damage to their mental health and self-esteem. They created this project to show the gap between what Outlier advertises in its recruitment campaigns and the reality of working there. They believe only public awareness and the intervention of public authorities can help them gain recognition for their work and improve this damaging working environment.