Friday, June 24, 2022


(Photo: Turag Photography/Unsplash)
Microsoft has decided to retire its emotion-recognition technology and to place restrictions on the use of its facial recognition systems.

In a blog post published Tuesday, the software giant announced that it would be sunsetting facial analysis tools that claim to identify emotional states and personal attributes, such as a person’s gender or age. Until now, these capabilities were openly available within the Azure Face API, Computer Vision, and Video Indexer. Customers who already have access to the emotion-reading features will have one year of continued use before their access is revoked.
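For context, the retired capabilities were exposed as optional attributes on the Face API’s detect endpoint. The Python sketch below shows roughly what such a request looked like under the v1.0 REST API as Microsoft documented it before the retirement; the endpoint, subscription key, and image URL are placeholders, not real values.

import requests

# Placeholders: substitute a real Azure resource endpoint and key.
ENDPOINT = "https://YOUR_RESOURCE.cognitiveservices.azure.com"
KEY = "YOUR_SUBSCRIPTION_KEY"

resp = requests.post(
    f"{ENDPOINT}/face/v1.0/detect",
    # emotion, age, and gender are among the attributes being retired
    params={"returnFaceAttributes": "emotion,age,gender"},
    headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
    },
    json={"url": "https://example.com/portrait.jpg"},  # placeholder image
)
resp.raise_for_status()

for face in resp.json():
    attrs = face["faceAttributes"]
    # "emotion" scored eight states (anger, contempt, disgust, fear,
    # happiness, neutral, sadness, surprise) on a 0-1 scale.
    print(attrs["age"], attrs["gender"], attrs["emotion"])

It is this kind of one-line access to sensitive inferences that the new application-and-approval process, described below, replaces.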

“We collaborated with internal and external researchers to understand the limitations and potential benefits of this technology and navigate the tradeoffs,” wrote Sarah Bird, Azure AI’s principal group product manager, in the post. “In the case of emotion classification specifically, these efforts raised important questions about privacy, the lack of consensus on a definition of ‘emotions,’ and the inability to generalize the linkage between facial expression and emotional state across use cases, regions, and demographics. API access to capabilities that predict sensitive attributes also opens up a wide range of ways they can be misused—including subjecting people to stereotyping, discrimination, or unfair denial of services.”

Microsoft says it will continue using its emotion and facial recognition software in Seeing AI, which “narrates the world” for people who are blind or have low vision. (Photo: Microsoft)

Microsoft has also chosen to restrict who will be able to use its facial recognition technology in the future. Going forward, anyone interested in using these tools will need to submit an application detailing their project, after which Microsoft will approve or deny access. The company said it will independently assess the benefits and risks of continuing to use both emotion and facial recognition tools in “controlled accessibility scenarios,” such as its own Seeing AI.

The company’s decision follows a widespread effort to investigate the social implications of emotion and facial recognition tech. Well-known human rights groups recently called out Zoom for its mood-recognition AI, which ranks unsuspecting sales-call recipients on an “emotional scorecard.” Similarly, some worry that Intel’s emotion-reading e-learning software will incorrectly target or alienate students it deems “distracted” or “confused.” While these calls to action often go ignored, they sometimes work: last month the IRS dropped its ID.me program, which required users to upload a video selfie to access government services, after near-unanimous backlash. Now Microsoft seems to have joined the ranks of those willing to backpedal; whether that’s to do the right thing or to keep customers, who knows.

from ExtremeTech https://ift.tt/WTV0tzo
