Audio AIs are trained on data full of bias and offensive language
Seven major datasets used to train audio-generating AI models contain the words "man" or "men" three times as often as "woman" or "women", raising fears of bias