
Digital payments and fintech company Ant International has won the NeurIPS Competition on Fairness in AI Face Detection. The company says it's committed to developing secure and inclusive financial services, particularly as deepfake technologies become more common.
The growing use of facial recognition across many sectors has highlighted the issue of algorithmic bias in AI. Research by NIST (the National Institute of Standards and Technology) shows that many widely used facial recognition algorithms exhibit considerably higher error rates when analysing the faces of women and people of colour, a disparity that stems from a lack of diversity in training data and in the demographics of those building and controlling many mainstream AI platforms. The consequences can be serious: biased algorithms can deny financial services to large sections of the population and open vulnerabilities in security protocols.
The competition was held alongside the Conference on Neural Information Processing Systems, the well-respected AI conference, and challenged participants to create AI models that combine high performance with fairness across a range of demographic factors: gender, age, and skin tone. Ant International's team beat more than 2,100 submissions from 162 teams worldwide. The task was to accurately detect 1.2 million AI-generated face images, chosen to be representative of demographic groups.
The approach taken by Ant's winning AI model combines a Mixture of Experts (MoE) architecture with a bias-detection mechanism. The system trains two competing neural networks: one focused on identifying deepfakes, and the other designed to challenge the first, forcing it to disregard demographic characteristics. This dynamic process helps ensure the system learns to detect genuine signs of manipulation rather than inadvertently relying on demographic patterns. The model was trained on a globally representative dataset and incorporated real-world payment fraud scenarios to ensure performance at scale.
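The article does not publish Ant's architecture, but the "two competing networks" idea it describes is a standard adversarial-debiasing setup, which can be sketched in a few lines of numpy. Everything below is a hypothetical toy, not Ant's system: a linear detector is trained on synthetic data in which a demographic proxy feature (`x2`) spuriously correlates with the deepfake label, while an adversary tries to recover the demographic attribute from the detector's score. The detector receives the adversary's gradient with its sign reversed, pushing it to ignore the demographic cue.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_data(n=4000, seed=0):
    """Synthetic, deliberately biased training data (illustrative only).
    x1 carries the genuine manipulation signal; x2 is a proxy for a
    demographic attribute `a` that correlates with the label, tempting
    a naive detector to lean on demographics instead of manipulation cues."""
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, n)                    # 1 = deepfake
    flip = rng.random(n) < 0.1
    a = np.where(flip, 1 - y, y)                 # demographic attribute, ~90% correlated with y
    x1 = (2 * y - 1) + rng.normal(0, 1.0, n)     # true manipulation cue (noisy)
    x2 = (2 * a - 1) + rng.normal(0, 0.3, n)     # demographic proxy (clean)
    return np.column_stack([x1, x2]), y, a

def train(lam, epochs=500, lr=0.2):
    """Detector vs. adversary with a gradient-reversal update.
    The adversary tries to predict `a` from the detector's score s;
    the detector subtracts (rather than adds) the adversary's gradient,
    scaled by lam, so its score becomes uninformative about demographics."""
    X, y, a = make_data()
    n = len(y)
    w = np.zeros(2); b = 0.0                     # detector parameters
    u = 0.0; c = 0.0                             # adversary parameters (reads score s)
    for _ in range(epochs):
        s = X @ w + b
        p = sigmoid(s)                           # detector's P(fake)
        q = sigmoid(u * s + c)                   # adversary's P(a = 1)
        # adversary step: ordinary gradient descent on its own loss
        u -= lr * np.mean((q - a) * s)
        c -= lr * np.mean(q - a)
        # detector step: detection gradient MINUS the adversary's gradient
        ds = (p - y) / n - lam * u * (q - a) / n  # gradient reversal on the adversary path
        w -= lr * (X.T @ ds)
        b -= lr * ds.sum()
    acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
    return w, acc

w_base, acc_base = train(lam=0.0)  # naive detector: free to exploit the demographic proxy
w_fair, acc_fair = train(lam=2.0)  # adversarially debiased detector
# w_fair[1] (weight on the demographic proxy) shrinks relative to w_base[1]
```

The design choice mirrors the article's description: fairness is enforced during training rather than by post-hoc score correction, so the detector is rewarded only for signals that survive the adversary's scrutiny.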
“A biased AI system is inherently an insecure one,” explained Dr. Tianyi Zhang, general manager of risk management and cybersecurity at Ant International. “Our model’s fairness isn’t just a matter of ethics; it’s fundamental to preventing exploitation from deepfakes and ensuring reliable identity verification for every user.”
The technology behind the winning entry is now being integrated into Ant's payment and financial services to help counter the threat of deepfakes, and the company says it achieves a detection rate in excess of 99.8% across all demographics and the 200 markets in which Ant operates.
Ant's technology helps its customers meet global Electronic Know Your Customer (eKYC) standards, particularly during customer onboarding, without algorithmic bias. That's held to be particularly important in emerging markets, where algorithmic bias can hamper efforts to broaden financial inclusion.
Ant International serves over 150 million merchants and 1.8 billion user accounts, and is known for services like Alipay+, Antom, Bettr and WorldFirst. The company has stated that AI security is a pillar of its operations. Its AI SHIELD risk-management framework is built on AI Security Docker to help mitigate vulnerabilities in AI services, such as unauthorised access and data leakage.
AI SHIELD underpins a suite of risk-management solutions that provide broader protection for financial transactions, including measures against deepfake attacks and fraud. Alipay+ EasySafePay 360 has reduced incidents of account takeover in digital wallet payments by 90%, the company says.
(Image source: “abstract art of a beautiful portrait, solid shapes, geometric shapes, neotokyo colors, muted colors, pixar, artstation, greg rutkowski, samdoesart, ge” – public domain)
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
