ChatGPT at HHS: Employee Use Approved

by Archynetys Technology & Science Desk

September 25, 2025

3 min read

Key takeaways:

  • An HHS spokesperson confirmed that department employees may choose to use ChatGPT.
  • ChatGPT will be used for “administrative tasks,” the HHS spokesperson said.

HHS is allowing employees to use ChatGPT in their workflow, according to an HHS statement.

Andrew G. Nixon, HHS director of communications, told Healio that the option to use ChatGPT is not mandated but is available to employees who choose to use it.

HHS plans to use ChatGPT for “administrative tasks,” a spokesperson said.

“By using ChatGPT, HHS staff can stay focused on serving the American people while the tool supports a wide range of administrative tasks, helping the department operate more efficiently,” Nixon said.

According to a report from Fedscoop, Deputy HHS Secretary Jim O’Neill announced the implementation plan to employees over email, with instructions on how to use AI. Staff were instructed to treat outputs as suggestions, compare AI responses with original source documents and avoid entering sensitive or confidential information to maintain HIPAA compliance, the outlet reported.
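The third instruction — keep sensitive or confidential information out of prompts — is a standard data-loss-prevention practice. A minimal sketch of what such a pre-submission screen might look like is below; the patterns and function name here are illustrative assumptions, not HHS’s actual tooling.

```python
import re

# Hypothetical patterns standing in for a real data-loss-prevention rule set.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in the prompt.

    An empty list means the prompt passed the screen and may be sent
    to the AI tool; a non-empty list means it must be redacted first.
    """
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

# Example: a prompt containing a Social Security number is flagged.
hits = screen_prompt("Summarize the case for patient SSN 123-45-6789.")
print(hits)  # → ['ssn']
```

Real deployments typically layer many more detectors (names, dates of birth, addresses) and block or redact automatically rather than relying on the user.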

Fedscoop further reported that O’Neill’s email follows a directive in President Donald J. Trump’s AI Action Plan, an initiative launched by his administration to accelerate policy recommendations and develop infrastructure for AI usage across the U.S.

HHS Secretary Robert F. Kennedy Jr. previously incorporated AI into the FDA. However, the FDA’s AI platform, called Elsa, generated fake studies to support responses, according to a report from CNN.

Ravi Hariprasad, MD, MPH, founder and CEO of Zenara Health and a Healio Psychiatry Peer Perspective Board member, told Healio that the news about HHS’ use of ChatGPT “underscores the importance of approaching these tools with caution,” specifically by using safeguards that ensure data-loss prevention and ground retrieved information in official documents and policies.


Ravi Hariprasad

“Enterprise AI can be safe and valuable for administrative work — if it’s engineered and governed correctly,” he explained. “The risk comes from ‘partial grounding,’ where the model has some but not all the facts and confidently fills the gaps. Pairing non-retention settings, retrieval grounding to official sources, a documented prohibited-use list and mandatory human review turns a risky tool into a reliable assistant.”
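The safeguards Hariprasad describes — retrieval grounding to official sources, refusal when grounding is absent and mandatory human review — can be sketched roughly as follows. All names and the keyword-matching retrieval here are simplified assumptions for illustration, not any real HHS or vendor system.

```python
from dataclasses import dataclass

# Hypothetical corpus of official documents; in practice this would be
# a retrieval index over approved policies.
OFFICIAL_SOURCES = {
    "travel-policy": "Employees must book travel through the approved portal.",
    "leave-policy": "Annual leave requests require supervisor approval.",
}

def keywords(text: str) -> set[str]:
    """Crude keyword extraction: lowercase words longer than 3 characters."""
    return {w.strip(".,?!").lower() for w in text.split() if len(w) > 3}

@dataclass
class Draft:
    answer: str
    sources: list[str]          # official documents grounding the answer
    needs_review: bool = True   # human review is mandatory before use

def grounded_answer(question: str) -> Draft:
    """Answer only from official sources; refuse when nothing matches.

    Refusing on no match avoids the 'partial grounding' failure mode,
    where a model confidently fills gaps it has no facts for.
    """
    q_words = keywords(question)
    matches = [doc_id for doc_id, text in OFFICIAL_SOURCES.items()
               if q_words & keywords(text)]
    if not matches:
        return Draft(answer="No grounded answer; escalate to a person.",
                     sources=[])
    combined = " ".join(OFFICIAL_SOURCES[m] for m in matches)
    return Draft(answer=combined, sources=matches)

print(grounded_answer("How do I book travel?").sources)  # → ['travel-policy']
```

The design choice worth noting is that `needs_review` defaults to `True` on every draft, matching the “mandatory human review” requirement rather than trusting the model’s confidence.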

Hariprasad said AI has “incredible” potential but acknowledged the importance of grounding it with careful guidance.

“Its value comes from thoughtful and expert use,” he added.

Also in response to the HHS announcement, Deepti Pandita, MD, FACP, FAMIA, chief medical information officer at University of California Irvine Health and a Healio Primary Care Peer Perspective Board member, emphasized the importance of transparency for building trust around AI usage.


Deepti Pandita

“The HHS’s rollout of ChatGPT reflects a bold move toward AI-enabled workflows in health,” she told Healio. “While HHS uses these tools, they need to be clear on how they will prevent misinformation, be transparent and maintain trust with the tools.”

To do this, Pandita highlighted the importance of safe procedures in AI implementation.

“Ensuring clinical safety demands rigorous data validation, governance and clear usage boundaries,” she said.

For more information:

Ravi Hariprasad, MD, MPH, can be reached at ravi@zenara.health.

Deepti Pandita, MD, FACP, FAMIA, can be reached at primarycare@healio.com.
