Zoom has been flirting with the concept of emotion AI ever since the pandemic gave it a second wind. As we touched on last month, tech giant Intel has been working alongside an e-learning software company to produce an emotion-analyzing program that integrates with Zoom. The program would supposedly benefit teachers by flagging when students appear confused or bored, allowing them to tailor their instruction and boost engagement. Protocol similarly reported in April that companies have begun using emotion AI during sales calls to assess potential customers’ moods and adjust their strategy accordingly. Unbeknownst to the customers, each one is graded on an “emotion scorecard” throughout the call.
Digital rights non-profit Fight for the Future quickly caught wind of Protocol’s report. So did the American Civil Liberties Union (ACLU), Access Now, Jobs With Justice, and 24 other human rights groups—all of whom signed an open letter to Zoom published Wednesday. The letter asks Zoom founder and CEO Eric Yuan to scrap the company’s plans to introduce emotion AI, saying the technology is punitive, manipulative, discriminatory, rooted in pseudoscience, and a data integrity risk.
“Zoom claims to care about the happiness and security of its users but this invasive technology says otherwise,” the letter reads. “This move to mine users for emotional data points based on the false idea that AI can track and analyze human emotions is a violation of privacy and human rights. Zoom needs to halt plans to advance this feature.”
The open letter is far from the first criticism leveled at emotion AI. Many have argued the technology constitutes excessive surveillance, especially when the targeted students or customers don’t know their body language, tone, and other alleged emotional markers are being assessed. Others have warned that emotion AI could dish out negative (or simply incorrect) analyses of people whose cultures express emotions differently.
The advocacy groups’ letter closes by reminding Yuan that his company has previously “made decisions that center users’ rights,” such as walking back its decision to implement face-tracking features over privacy concerns. “This is another opportunity to show you care about your users and your reputation,” the organizations write. “You can make it clear that this technology has no place in video communications.”
Now Read:
- Google Introduces Monk Skin Tone Scale to Improve AI Color Equity
- Seattle PD Is Testing a Brain Stimulation Device For Mental Wellness
- Researchers Use AI to Design Plastic-Eating Enzyme
from ExtremeTech https://ift.tt/FUaY316