IBM, Amazon Stop Selling Facial Recognition Over Bias – For The Time Being

What’s happening? IBM will stop selling facial recognition services and has called for a national debate on the technology’s future use. In a letter, CEO Arvind Krishna wrote that the company opposes and would not use technology offered by vendors which could lead to “violations” of human rights and that a “national dialogue” is needed. AI systems, especially those used in law enforcement, should be tested for bias in a transparent way, he added.

Why does this matter? The Black Lives Matter movement has pushed facial recognition technology into the spotlight once more, with privacy concerns overshadowed by a renewed focus on its inherent bias.

IBM had been attempting to address this issue, last year adding one million diverse faces to its dataset to broaden its system and better train its AI. Its pledge to stop selling the technology could suggest this effort did not yield significant improvements or, as TechCrunch points out, that the technology was simply not making IBM money.

Amazon’s Rekognition face recognition technology is perhaps more notorious and controversial because of its association with law enforcement. Amazon this week also moved to curtail its use in policing – for at least one year. The company said it was doing so until stronger regulations are developed, and amid criticism of the technology's disproportionate effect on minorities.

A study by Comparitech found Rekognition wrongly matched police mugshots with 105 politicians in the US and the UK, and MIT research has previously shown it performs more poorly on darker-skinned women.

Such issues led AI researchers to write an open letter to Amazon last year calling on it to stop selling Rekognition to police agencies. Shareholders subsequently rejected a proposal to limit sales of the technology to government agencies and law enforcement.

It is worth noting that police body cameras in the US were introduced to increase officer accountability, and their adoption accelerated over the years in response to Black Lives Matter protests. Their effectiveness has since been questioned, so regulators looking to tackle US policing issues may need to address both human and machine-learning bias to improve police transparency.

Microsoft also announced this week that it will wait until federal laws are in place in the US before selling its facial recognition technology to police. The move prompted angst from President Donald Trump, who endorsed banning the tech giant from federal contracts via social media.

Google, which does not currently sell facial recognition products, last year employed contractors who were reported to have duped black homeless people in Atlanta into having their likenesses captured for use in facial-recognition research.

Further thought from Curation – It’s not just facial recognition systems that display inherent bias. Speech recognition systems from Amazon, Apple, Google, IBM and Microsoft have been shown to make fewer mistakes with white users than with black users. AI hiring systems are also argued to be biased by default, and research is now underway to tackle this issue.

Nick Finegold is Founder & CEO of Curation Corp, an emerging and peripheral risks monitoring service.

© The Sortino Group Ltd

All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or scanning or otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency or other Reprographic Rights Organisation, without the written permission of the publisher. For more information about reprints from AlphaWeek, click here.