AI Model Transparency Under Scrutiny: OpenAI and Google Face Criticism

Delayed Disclosures and Missing Details Raise Concerns About AI Safety

Leading AI developers, including OpenAI and Google, are facing increasing scrutiny over the transparency of their AI models. The core issue is the delayed release, or complete omission, of detailed safety documentation, including system cards and comprehensive risk assessments. This lack of transparency is fueling concerns among experts and regulators about the potential risks posed by increasingly sophisticated AI systems.

Google’s Gemini: A Case Study in Limited Transparency

Google’s release of the technical report for ‘Gemini 2.5 Pro’ on April 17th, nearly three weeks after the model’s initial launch on March 26th, has drawn criticism. While the report ostensibly details the safety evaluation results, experts argue that it lacks crucial specifics, making a genuine risk assessment difficult. The report contains internal reviews analyzing the model’s risks and capabilities, but critics point out the absence of concrete figures and verifiable data.

The report is very sparse, and it only became available weeks after the model was released.

Peter Wildeford, Institute for AI Policy and Strategy

Unlike some competitors, Google typically releases technical reports only after a model is deemed to be beyond the ‘experimental’ stage. Furthermore, the company discloses information selectively, often opting for separate audits that reveal only portions of the overall risk evaluation. The ‘Gemini 2.5 Pro’ report notably omits any mention of the Frontier Safety Framework (FSF) that Google introduced last year, raising questions about how thoroughly the company is addressing the serious risks its AI could pose.

Adding to the concerns, the safety report for ‘Gemini 2.5 Flash,’ also announced on April 17th, remains undisclosed, with Google stating only that it will be released “soon.”

OpenAI’s Approach: System Cards and Frontier Models

OpenAI is also under fire for its approach to transparency. The recent release of ‘GPT-4.1’ was not accompanied by a system card, the document that typically details a model’s capabilities, limitations, and potential risks. According to Shaokyi Amdo, a spokesperson for OpenAI, a system card was not deemed necessary because “GPT-4.1 is not a frontier model.”

Past incidents have further fueled concerns about OpenAI’s transparency practices. The release of ‘o1’ and ‘o1-pro’ in December of last year was marred by suspicions that the published safety reports were based on versions of the models less capable than those actually deployed. Similarly, OpenAI delayed the release of the system card for its ‘Deep Research’ model until weeks after launch.

The Regulatory Landscape and the Push for Mandatory Disclosure

Leading AI companies pledged to the US government two years ago that they would publish safety reports for major public AI models, and later made similar transparency commitments to other countries. However, the absence of legal mandates for AI safety reporting allows companies to operate with considerable discretion.

Steven Adler, a former OpenAI safety researcher, has noted that safety reports are not a legal duty but a voluntary practice. This means that companies face no legal repercussions for failing to disclose these reports, even as the potential risks associated with advanced AI models continue to grow.

The debate over mandatory disclosure is intensifying. Last year, California’s SB 1047 bill, which included provisions for mandatory disclosure by AI developers, faced opposition from both OpenAI and Google. This opposition highlights the tension between the industry’s desire for innovation and the growing need for greater transparency and accountability.

The Stakes: Why Transparency Matters

As AI models become increasingly sophisticated, the potential risks they pose also escalate. These risks range from bias and discrimination to the spread of misinformation and the potential for misuse in malicious activities. Without adequate transparency, it becomes difficult to assess and mitigate these risks effectively.

The more sophisticated the model becomes, the greater the risk it poses.

Thomas Woodside, Secure AI Project

The call for greater transparency in AI development is not simply about compliance; it is about ensuring the responsible and ethical development of a technology with the potential to transform society. By embracing transparency, AI developers can foster trust, promote accountability, and ultimately contribute to a safer, more beneficial future for all.
