Google Trains AI Model to Recognize Human Emotions

Announced Thursday, the PaliGemma 2 family of models can analyze images, allowing AI to generate captions and answer questions about the people it “sees” in photos.

“PaliGemma 2 generates rich, contextually relevant captions for images, going beyond simple object identification to describe actions, emotions, and the overall narrative of a scene,” Google writes in its official blog. Emotion recognition does not work out of the box: PaliGemma 2 must be specifically configured for it. Even so, experts who spoke to TechCrunch were alarmed by the prospect of a publicly available emotion detector.

Google says it conducted “extensive testing” to assess PaliGemma 2’s demographic bias and found “significantly lower levels of toxicity and profanity” compared to industry benchmarks. However, the company did not provide a full list of the benchmarks it used or describe the types of tests it ran. The only benchmark Google named was FairFace, a collection of tens of thousands of human portraits, on which the company says PaliGemma 2 performed well. Some experts, though, have criticized FairFace itself as biased, noting that it represents only a handful of racial groups.

The experts’ greatest concern is that such models could be put to harmful use, for example by law enforcement agencies, HR departments, or border services to discriminate against marginalized groups.
