At the IFS Unleashed event in Orlando, industry experts discussed bias in AI and what it takes to address it.
AI bias in hiring: AI is increasingly used in recruiting and people management, but models trained on historical data can learn past discrimination and treat some candidates unfairly. Stephanie Poore, an executive at IFS, said AI systems often fail to account for the people who were excluded or underrepresented in the data they learned from.
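The speakers did not go into how such bias would be detected, but a minimal, purely illustrative check is to compare how often a model selects candidates from different groups (a demographic-parity style measurement). Everything in this sketch, the predictions and the group labels, is invented for illustration:

```python
# Hypothetical example: compare a hiring model's selection rates across two
# candidate groups. A large gap is a signal of possible bias to investigate.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]                 # 1 = model recommends interview
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]      # candidate group labels (invented)

def selection_rate(preds, grps, group):
    """Fraction of candidates in `group` that the model recommends."""
    members = [p for p, g in zip(preds, grps) if g == group]
    return sum(members) / len(members)

rate_a = selection_rate(predictions, groups, "A")
rate_b = selection_rate(predictions, groups, "B")
print(f"Group A selection rate: {rate_a:.2f}")   # 0.75
print(f"Group B selection rate: {rate_b:.2f}")   # 0.25
print(f"Gap: {abs(rate_a - rate_b):.2f}")        # 0.50 -> worth investigating
```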
Jacqueline de Rojas, another speaker, shared the story of a female doctor who could not get into the gym locker room because the system assumed her profession was held only by men. De Rojas said we live in a world full of bias, and it is our job to correct it, including in the technology we build.
Transparency: The experts argued that AI needs to be transparent and explainable. Bob de Caux, chief AI officer at IFS, said people fear AI because they do not understand how it reaches its decisions; the more open and understandable the systems are, the more people will trust them.
New ways to reduce bias: De Caux also described an approach in which two AI models work together: one learns the task while the other checks its output for mistakes and bias, helping teams find and fix problems faster.
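De Caux gave no implementation details, but one way to read the two-model idea is an "auditor" that tries to recover a protected attribute from the main model's decisions; if it can, those decisions carry bias worth correcting. The sketch below is a hypothetical illustration using invented data and scikit-learn logistic regressions, not IFS's actual system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: 4 features, a binary protected group, and labels that are
# deliberately correlated with group membership (i.e. biased training data).
X = rng.normal(size=(1000, 4))
group = rng.integers(0, 2, size=1000)
y = ((X[:, 0] + 0.8 * group + rng.normal(scale=0.5, size=1000)) > 0.5).astype(int)

# Model 1: the "trainer" learns the main prediction task.
trainer = LogisticRegression().fit(X, y)
decisions = trainer.predict(X).reshape(-1, 1)

# Model 2: the "auditor" tries to recover the protected group from the
# trainer's decisions alone.
auditor = LogisticRegression().fit(decisions, group)
audit_accuracy = auditor.score(decisions, group)

# Accuracy well above 0.5 means the decisions encode group information,
# i.e. a bias signal the team should correct.
print(f"Auditor accuracy at recovering the group: {audit_accuracy:.2f}")
```

In a full adversarial-debiasing setup, the main model is additionally penalized whenever the auditor succeeds, so the two models push each other toward fairer behavior during training.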
Diversity: Building better AI also requires diverse teams. De Rojas believes AI may eventually build and train itself, which makes it all the more important that diverse teams are involved now to keep it fair; otherwise, AI will keep repeating the same mistakes.
Both Poore and de Rojas believe AI can be used for good given the right governance and the right teams. They are optimistic about its future, but stressed that it must be deployed with care and deliberation.
The experts agreed that AI can be made fairer through better understanding, diverse teams, and careful planning.
See also: AI and Chatbots Are Struggling to Stop Fake News Before U.S. Elections