    Microsoft to rank ‘safety’ of AI models sold to cloud customers

Press Room | June 7, 2025

    Microsoft will start ranking artificial intelligence models based on their safety performance, as the software group seeks to build trust with cloud customers as it sells them AI offerings from the likes of OpenAI and Elon Musk’s xAI.

    Sarah Bird, Microsoft’s head of Responsible AI, said the company would soon add a “safety” category to its “model leaderboard”, a feature it launched for developers this month to rank iterations from a range of providers including China’s DeepSeek and France’s Mistral.

    The leaderboard, which is accessible by tens of thousands of clients using the Azure Foundry developer platform, is expected to influence which AI models and applications are purchased through Microsoft.

    Microsoft currently ranks three metrics: quality, cost and throughput, which is how quickly a model can generate an output. Bird told the Financial Times that the new safety ranking would ensure “people can just directly shop and understand” AI models’ capabilities as they decide which to purchase.

    The decision to include safety benchmarks comes as Microsoft’s customers grapple with the potential risks posed by new AI models to data and privacy protections, particularly when deployed as autonomous “agents” that can work without human supervision.

    Microsoft’s new safety metric will be based on its own ToxiGen benchmark, which measures implicit hate speech, and the Center for AI Safety’s Weapons of Mass Destruction Proxy benchmark. The latter assesses whether a model can be used for malicious purposes such as building a biochemical weapon.

The rankings give users access to objective metrics when selecting from a catalogue of more than 1,900 AI models, helping them make an informed choice about which to use.

“Safety leaderboards can help businesses cut through the noise and narrow down options,” said Cassie Kozyrkov, a consultant and former chief decision scientist at Google. “The real challenge is understanding the trade-offs: higher performance at what cost? Lower cost at what risk?”

    Alongside Amazon and Google, the Seattle-based group is considered one of the largest “hyperscalers” that together dominate the cloud market.

Microsoft is also positioning itself as an agnostic platform for generative AI, signing deals to sell models by xAI and Anthropic, rivals to start-up OpenAI, which it has backed with roughly $14bn in investment.

    Last month, Microsoft said it would begin offering xAI’s Grok family of models under the same commercial terms as OpenAI.

    The move came despite a version of Grok raising alarm when an “unauthorised modification” of its code led to it repeatedly referencing “white genocide” in South Africa when responding to queries on social media site X. xAI said it introduced a new monitoring policy to avoid future incidents.

    “The models come in a platform, there is a degree of internal review, and then it’s up to the customer to use benchmarks to figure it out,” Bird said.  

There is no global standard for AI safety testing, but the EU’s AI Act will enter into force later this year and compel companies to conduct safety tests.

Some model builders, including OpenAI, are dedicating less time and money to identifying and mitigating risks, the FT previously reported, citing several people familiar with the start-up’s safety processes. The start-up said it had identified efficiencies without compromising safety.

Bird declined to comment on OpenAI’s safety testing, but said it was impossible to ship a high-quality model without investing a “huge amount” in evaluation, and that the processes were being automated.

Microsoft in April also launched an “AI red teaming agent” that automates the process of stress-testing computer programmes by launching attacks to identify vulnerabilities. “You just specify the risk, you specify the attack difficulty . . . And then it’s off attacking your system,” Bird said.

There are concerns that, without adequate supervision, AI agents could take unauthorised actions, opening their owners up to liabilities.

“The risk is that leaderboards can lull decision makers into a false sense of security,” said Kozyrkov. “Safety metrics are a starting point, not a green light.”
