OpenAI unveils open-weight AI safety models for developers

OpenAI is putting more safety controls directly into the hands of AI developers with a new research preview of “safeguard” models. The new gpt-oss-safeguard family of open-weight models is aimed squarely at customisable content classification.

The new offering will include two models, gpt-oss-safeguard-120b and a smaller gpt-oss-safeguard-20b. Both are fine-tuned versions of the existing gpt-oss family and will be available under the permissive Apache 2.0 license. This will allow any organisation to freely use, tweak, and deploy the models as they see fit.

The real difference here isn’t just the open license; it’s the method. Rather than relying on a fixed set of rules baked into the model, gpt-oss-safeguard uses its reasoning capabilities to interpret a developer’s own policy at the point of inference. This means AI developers using OpenAI’s new model can set up their own specific safety framework to classify anything from single user prompts to full chat histories. The developer, not the model provider, has the final say on the ruleset and can tailor it to their specific use case.

This approach has a couple of clear advantages:

  1. Transparency: The models use a chain-of-thought process, so a developer can actually look under the bonnet and see the model’s logic for a classification. That’s a huge step up from the typical “black box” classifier.
  2. Agility: Because the safety policy isn’t permanently trained into OpenAI’s new model, developers can iterate and revise their guidelines on the fly without needing a complete retraining cycle. OpenAI, which originally built this system for its internal teams, notes this is a far more flexible way to handle safety than training a traditional classifier to indirectly guess what a policy implies.

Rather than relying on a one-size-fits-all safety layer from a platform holder, developers using these open-weight models can now build and enforce their own specific standards.
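In practice, the policy-at-inference approach means the safety policy travels with each request rather than being baked into the weights. The sketch below illustrates how a developer-supplied policy might be packaged alongside the content to classify; the message layout, label names, and model identifier usage are illustrative assumptions, not OpenAI’s documented format — the article only states that the model reads the policy at inference time and returns a classification with its reasoning.

```python
def build_safeguard_request(policy: str, content: str) -> list[dict]:
    """Package a custom safety policy plus the content to classify as
    chat messages for an OpenAI-compatible inference endpoint.

    This is a hypothetical request shape for illustration only.
    """
    return [
        # The policy is supplied per request, so revising guidelines
        # needs no retraining cycle -- just edit this string.
        {"role": "system", "content": policy},
        {
            "role": "user",
            "content": (
                "Classify the following content against the policy "
                "above and explain your reasoning:\n\n" + content
            ),
        },
    ]

# Example: a minimal in-house policy (labels are our own invention).
policy = (
    "Label content as VIOLATING if it shares game cheats or exploits; "
    "otherwise label it as ALLOWED."
)
messages = build_safeguard_request(policy, "How do I beat the final boss?")

# These messages would then be sent to wherever the open-weight model is
# hosted, e.g. (hypothetical client call, not a confirmed API):
#   response = client.chat.completions.create(
#       model="gpt-oss-safeguard-20b", messages=messages)
```

Because the policy is just a string in the request, swapping in a revised ruleset is a one-line change — which is the agility advantage the article describes.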

While not live at the time of writing, developers will be able to access OpenAI’s new open-weight AI safety models on the Hugging Face platform.

See also: OpenAI restructures, enters ‘next chapter’ of Microsoft partnership

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, including the Cyber Security Expo.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

Recommended

Google reveals its own version of Apple’s AI cloud

Google has rolled out Private AI Compute, a new cloud-based processing system designed to bring the privacy of on-device AI to the cloud. The platform aims to give users faster, more capable AI experiences without compromising data security. It combines Google’s most advanced Gemini models with strict privacy safeguards, reflecting the company’s ongoing effort to make AI both powerful and responsible.

Cisco: Only 13% have a solid AI strategy and they’re lapping rivals

If you’ve ever thought companies talk more than act when it comes to their AI strategy, a new Cisco report backs you up. It turns out that just 13 percent of organisations globally are actually prepared for the AI revolution.

How Lumana is redefining AI’s role in video surveillance

For all the progress in artificial intelligence, most video security systems still fail at recognising context in real-world conditions. The majority of cameras can capture real-time footage but struggle to interpret it. That gap is a growing concern for smart city designers, manufacturers, and schools, each of which may depend on AI to keep people and property safe.

Reply’s pre-built AI apps aim to fast-track AI adoption

Adopting AI at scale can be difficult. Enterprises around the world are discovering the pace of AI deployment is frustratingly slow as they face implementation, integration, and customisation challenges. Generative AI is undoubtedly powerful, but it can be complex, particularly for businesses starting from scratch.

China’s generative AI user base doubles to 515 million in six months

AI adoption in China has reached unprecedented levels, with the country’s generative artificial intelligence user base doubling to 515 million in just six months, according to a report released by the China Internet Network Information Centre (CNNIC).
