At this point, it's safe to assume you've used a chatbot like ChatGPT or Gemini. Besides asking normal questions or getting lengthy texts summarized, you might have asked a health question, too. Maybe you were trying to figure out whether a symptom was worth worrying about, or make sense of lab results, often late at night when a doctor isn't available.

OpenAI's January 2026 report found that more than 5% of all ChatGPT messages globally are about health care, and more than 40 million weekly active users worldwide ask health care questions every day.

(Disclosure: Ziff Davis, CNET's parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

In January 2026, OpenAI launched ChatGPT Health to turn that habit into a dedicated feature. This "health-focused experience" inside ChatGPT is designed to help you understand medical information and prepare for real conversations with clinicians. It is not a diagnosis or treatment tool.

So what exactly is ChatGPT Health, and how does it differ from asking a chatbot a random health question? Let's double-click the new Health tab in ChatGPT.

What is ChatGPT Health?

ChatGPT Health isn't a separate app like OpenAI's AI browser, Atlas. It lives inside ChatGPT as a dedicated space or tab, focused on health-related questions, documents and workflows.

The Health tab in ChatGPT. (ChatGPT/Screenshot by CNET)

In its launch note, OpenAI says it worked with more than 260 physicians who have practiced in 60 countries and dozens of specialties over two years, reviewing health-related model responses more than 600,000 times. As a result, it doesn't answer anything that sounds medical in the same open-ended way as a normal chat.
Instead, it responds more cautiously, with stricter limits on how information is explained and clearer prompts to seek professional care.

ChatGPT Health is available on the web and in the mobile app. You don't need to download anything or sign up outside ChatGPT itself. Access depends on location and rollout stage, but you can join the waitlist.

As of early 2026, it's available in the US, Canada, Australia, and parts of Asia and Latin America, where ChatGPT already supports health features. It's not currently available in the EU and the European Economic Area, the UK, China or Russia. OpenAI has said availability will expand, but timelines vary by region due to local regulations and health data rules.

How ChatGPT Health works

ChatGPT Health uses the same underlying large language models (LLMs) as ChatGPT. You ask a question, and the model generates a response. The difference with Health is context, grounding and constraints.

Alex Kotlar, founder of Bystro AI, a genetics-focused LLM platform for health insights, told CNET that OpenAI didn't build a new foundational health model. "They haven't created a model that suddenly understands medical records much better. It's still ChatGPT, just connected to your medical records," Kotlar said.

Health draws from the data you choose to sync, but it can't access anything unless you explicitly grant permission. Besides medical records, you can connect apps such as Apple Health, lab results from Function and food logs from MyFitnessPal. You can also link Weight Watchers for GLP-1 meal ideas, Instacart to turn meal plans into shopping lists and Peloton for workout recommendations. This allows the AI to offer personalized insights based on your history, rather than generic advice.
These conversations now follow stricter rules around tone, sourcing and response style, which OpenAI says it tests using its evaluation framework, HealthBench. It uses physician-written rubrics to grade model responses across 5,000 simulated health conversations, applying more than 48,000 specific criteria to assess quality and safety.

You can upload documents to Health, use voice commands and do everything you'd normally do in a regular chat. If you're reviewing multiple test results or prepping for a specialist visit, Health can keep track of what you've already shared and help organize information over time.

OpenAI suggests using it to review lab results, organize questions before an appointment, translate medical language into plain English and summarize long documents like discharge notes or insurance explanations.

(ChatGPT/Screenshot by CNET)

OpenAI is explicit that the tool is meant to support conversations with health care professionals, not shortcut them. It can't order tests, prescribe medication or confirm a diagnosis. If you treat it like a doctor, you're using it incorrectly. But Dr. Saurabh Gombar, clinical instructor at Stanford Health Care and chief medical officer at Atropos Health, told CNET, "I think preparing and education itself actually already crosses a boundary into being medical advice."

Health keeps conversations, connected apps, files and Health-specific memory separate from your main chats, so health details don't flow back into the rest of ChatGPT. However, Health can use memory from regular chats. Say you mentioned a recent move or lifestyle change, like becoming vegan. Health can draw on that context to make the conversation more relevant.

It can also track patterns over time.
If you connect Apple Health, you can ask about sleep trends, workout patterns or other metrics, then use that summary to talk to your doctor.

ChatGPT Health and ChatGPT for health care are not the same

ChatGPT Health is a consumer feature for personal wellness. OpenAI's Help Center says HIPAA doesn't apply to consumer health products like Health. HIPAA is the Health Insurance Portability and Accountability Act, which federally protects a patient's sensitive health information from being disclosed without their consent.

Separately, OpenAI offers "ChatGPT for Healthcare" for organizations that need controls designed for regulated clinical use and support for HIPAA compliance, including Business Associate Agreements (e.g., contracts between health care providers and billing companies).

If you're using ChatGPT Health, you're not entering a hospital system, even though your medical records can be connected with the feature. You're using a consumer product with extra protections that OpenAI controls. So don't assume "health feature" automatically equals HIPAA.

Privacy and data controls

OpenAI says Health adds extra protections on top of ChatGPT's existing controls, including "purpose-built encryption and isolation to keep health conversations protected and compartmentalized." An OpenAI spokesperson told CNET that conversations and files in ChatGPT are encrypted at rest and in transit by default, and that Health adds more layered protections because of the sensitive nature of health data. The spokesperson added that any employee access to your Health data would be restricted to safety and security operations, and that access is more restricted and purpose-limited than typical product data flows.

"When consumers hear that something is encrypted, they often think that nobody can see it. That's not really how it works.
Encrypted at rest doesn't mean the company itself can't access the data," Kotlar said.

You can disconnect apps, remove access to medical records and delete Health memories. Dane Stuckey, OpenAI's chief information security officer, also says Health conversations are not used to train its foundation models by default.

(X/Screenshot by CNET)

Still, "more protected than regular chats" isn't the same as risk-free. Even strong security can't eliminate every risk tied to storing sensitive health information online. That's one reason privacy experts have urged users to think carefully before uploading full medical records into any AI tool.

Limitations and safety concerns

ChatGPT Health can help you understand information, but it can still get things wrong. There's a risk people will take Health's answers at face value. In today's world, everyone wants answers fast, and AI can sound convincing even when it should be cautious. Unsurprisingly, ECRI, a patient safety nonprofit, listed AI chatbots as the No. 1 health technology hazard for 2026.

Hallucinations, the AI habit of confidently producing incorrect details, matter far more in health care than when you ask it to summarize a PDF. If a tool completely fabricates a study, misreads a lab value or overstates what a symptom means, you could be seriously endangering your health.

"The biggest danger for consumers is that unless they have a medical background, they're going to have a hard time evaluating when it's saying something right and when it's saying something wrong," Kotlar said.

When asked to comment on hallucination rates, the OpenAI spokesperson said the models powering ChatGPT Health have "dramatically reduced" hallucinations and other high-risk errors in challenging medical conversations.
According to OpenAI's internal HealthBench evaluations, GPT-5 reduces hallucinations in difficult health scenarios by eightfold compared with earlier models, cuts errors in potentially urgent situations by more than 50 times compared with GPT-4o and shows no detected failures in adjusting for global health context. The company also says its newer models are significantly more likely to ask follow-up questions when uncertain, which it argues lowers the risk of confident but incorrect responses.

"Companies behind these tools need to share that they have put these sorts of checks and balances in place or these benchmarks to ensure that the quality of the answer is high," Gombar tells CNET.

OpenAI says Health is not intended for diagnosis or treatment and should support, not replace, clinicians' care. Kotlar says health care is complex and highly regulated, so tools like this can be "great, but also sort of irresponsible at the same time," since a lot can still go wrong.

Should you use it?

People were already using ChatGPT for health-related questions even when it wasn't the right place to do it. Because Health adds tighter guardrails for medical topics, it may be slightly safer than a standard chat for that purpose. Even physicians are using AI more in practice, and that use nearly doubled between 2023 and 2024, according to the American Medical Association, which surveyed just over 1,100 physicians.

Gombar said interactions between traditional health care and AI are likely here to stay, but adds, "there's room for improvement and for benefit." Many people don't have reliable access to routine clinical care, and if LLMs can at least help triage whether someone should see a physician, that alone could be meaningful.

If you're careful with the data you share, it could be useful.
You can use Health to translate medical language or draft questions so you don't freeze in a 10-minute appointment. You can also ask it other low-risk questions about diet and exercise. Just always verify the information with reputable sources or a professional. Never use it to self-diagnose, decide whether to take or stop medication, or interpret a serious symptom.

"Health care is not like coding or writing," Kotlar said. "When it fails, it fails in ways that are really dangerous for a human being."

ChatGPT Health can help you make sense of information, but it can't take responsibility for your health. Like most AI tools, its value depends on how well you understand its limits.
