Innovators Can Laugh podcast is now B2B Marketers Can Laugh!
April 12, 2023

Fauxtomation and Ghost Work: The Dark Side of AI Training Data and How to Avoid It in Your Startup

Artificial Intelligence (AI) has become an integral part of our daily lives, from chatbots to image recognition and predictive analysis. However, as AI continues to evolve, so do ethical concerns about its use in the workforce. Iva Gumnishka, co-founder of Humans in the Loop, a social enterprise that provides ethical human-in-the-loop workforce solutions to power the AI industry, believes that we need a more human-centric approach to ensure that AI doesn't displace people.

In a recent interview, Iva shared her thoughts on a range of topics concerning AI, including misconceptions about AI, fauxtomation, ghost work, and ethical considerations for the AI industry.

One of the biggest misconceptions about AI is the belief that it will eventually become sentient, with feelings, emotions, and subjective experiences. Iva says we are far from that point, and that the actual applications of AI are quite practical and focused on things that make life easier for users.

Iva also explained the concept of fauxtomation (a blend of "faux" and "automation"), which is relevant to the work her company does in preparing datasets used to train AI systems. Fauxtomation refers to a company presenting a service as fully automated while hidden human workers actually carry out the work behind the scenes. An example is a chatbot marketed as AI-powered customer service that is, in fact, operated by humans in the background.

Ghost work is a related concept: the hidden, often uncredited human labor that makes AI systems function, sometimes performed without the worker even realizing it. For example, when you fill out a CAPTCHA form, you are helping to train an AI system to recognize images. You may not be aware that this is the case, and you are not compensated for your work.

These practices raise ethical concerns about the use of AI in the workforce. Companies need to be transparent about the use of AI and human involvement in their systems. There needs to be clear communication with consumers and workers about how their data is being used and whether they are contributing to the training of AI systems.

Iva believes that a more human-centric approach is needed to ensure that AI doesn't displace people. One solution is a human-in-the-loop approach, where human workers are part of the AI process. For example, if an AI system makes a mistake, a human can correct it, and the system can learn from that correction. This approach is more ethical because it keeps humans involved in the process and provides a check on the AI system's decisions.
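To make the idea concrete, here is a minimal sketch of how a human-in-the-loop review step can work in practice. All names and thresholds are illustrative assumptions, not a description of Humans in the Loop's actual system: low-confidence model predictions are routed to a human reviewer, and the corrections are saved so the model can be retrained on them later.

```python
# Hypothetical sketch of a human-in-the-loop review step.
# model_predict and human_review are stand-ins, not a real API.

CONFIDENCE_THRESHOLD = 0.8  # predictions below this go to a human reviewer

def model_predict(item):
    """Stand-in for a trained model: returns (label, confidence)."""
    return ("cat", 0.55) if "blurry" in item else ("cat", 0.95)

def human_review(item):
    """Stand-in for a human annotator supplying the correct label."""
    return "dog"

def label_with_review(items):
    labels, corrections = [], []
    for item in items:
        label, confidence = model_predict(item)
        if confidence < CONFIDENCE_THRESHOLD:
            label = human_review(item)         # human overrides the model
            corrections.append((item, label))  # kept for later retraining
        labels.append(label)
    return labels, corrections

labels, corrections = label_with_review(["sharp photo", "blurry photo"])
print(labels)       # ['cat', 'dog']
print(corrections)  # [('blurry photo', 'dog')]
```

The design point is the threshold: confident predictions flow through automatically, while uncertain ones get human judgment, so people remain a check on the system rather than being replaced by it.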

Startups that are interested in AI should consider a more ethical approach to AI. Iva suggests that startups should focus on developing AI systems that augment human capabilities rather than replacing them. By doing this, startups can ensure that humans are still involved in the process and that there is a balance between human and AI involvement.

In conclusion, AI has the potential to transform our lives, but ethics must be part of the equation. Startups need to be transparent about their use of AI and about the human involvement in their systems, and should take a human-centric approach so that AI doesn't displace people. By doing this, startups can create AI systems that benefit society and promote human-centric values.

📫 Find out about each guest and be the first to know when new shows drop when subscribing to the ICL newsletter! (subscribe on right side of page). Catch the podcast interview with Iva Gumnishka here.