In February, I spoke on this panel about how to mitigate and de-risk bias in AI. If you're interested in watching, the recording is available on the BrightTALK website here.
If you're not interested in watching, the TL;DR of what I think about this topic is:
- As some of the first people really building out and implementing AI solutions, we have a huge responsibility to do it well, and to think about it deeply while we're doing it.
- To implement these systems well, we owe these problems the diversity of thought, teams, and solution space that they deserve.
- Human-in-the-loop, transparent systems are better than opaque systems with no human in the loop. You'll also need to empower your teams to speak up if the human in the loop finds something objectionable.
- It's always important to ask yourself whether the problem you're trying to solve is really best solved with AI, or whether another solution fits better. Don't fall into the tech-solutionism trap.