
Commure
Founded Year: 2017
Stage: Series E | Alive
Total Raised: $813.89M
Valuation: $0000
Last Raised: $70M | 1 yr ago
Mosaic Score: +53 points in the past 30 days (The Mosaic Score is an algorithm that measures the overall financial health and market potential of private companies.)
About Commure
Commure is a technology company that develops software and AI solutions for healthcare systems. The company offers products for provider, administrator, and patient workflows, including tools for clinical documentation, revenue cycle management, and patient engagement. Commure's technologies aim to assist the healthcare workforce, allowing providers to allocate more time to patient interactions. Commure was formerly known as PatientKeeper. It was founded in 2017 and is based in Mountain View, California.
ESPs containing Commure
The ESP matrix leverages data and analyst insight to identify and rank leading companies in a given technology landscape.
The healthcare developer toolkits market provides software tools, libraries, and resources designed to help developers build healthcare applications, platforms, and systems. These toolkits provide developers with a range of components, such as APIs, SDKs, and frameworks, to simplify the process of creating healthcare software and to ensure that it complies with relevant industry standards.
Commure named as Leader among 12 other companies, including Google Cloud Platform, Redox, and Health Gorilla.
Research containing Commure
Get data-driven expert analysis from the CB Insights Intelligence Unit.
CB Insights Intelligence Analysts have mentioned Commure in 2 CB Insights research briefs, most recently on Sep 13, 2024.
Expert Collections containing Commure
Expert Collections are analyst-curated lists that highlight the companies you need to know in the most important technology spaces.
Commure is included in 3 Expert Collections, including Unicorns- Billion Dollar Startups.
Unicorns- Billion Dollar Startups
1,270 items
Digital Health
11,305 items
The digital health collection includes vendors developing software, platforms, sensor & robotic hardware, health data infrastructure, and tech-enabled services in healthcare. The list excludes pureplay pharma/biopharma, sequencing instruments, gene editing, and assistive tech.
Digital Health 150
150 items
The winners of the third annual CB Insights Digital Health 150.
Commure Patents
Commure has filed 3 patents.
The 3 most popular patent topics include:
- electronic health records
- health informatics
- health standards

Application Date | Grant Date | Title | Related Topics | Status
---|---|---|---|---
9/10/2018 | 10/31/2023 | | | Grant
Latest Commure News
Mar 12, 2025
The president’s deregulatory agenda will mean more responsibility rests on healthcare stakeholders to get artificial intelligence right.

Published March 10, 2025

Healthcare companies wrestling with how to adopt artificial intelligence are unlikely to get much help from the Trump administration, placing the burden of responsible implementation squarely on the industry’s shoulders. The challenge facing companies is becoming increasingly difficult, too, as the technology grows more complex, experts said during the HIMSS healthcare conference in Las Vegas.

“One thing that’s clear is that this administration is not going to regulate AI. Good or bad, take that for what it is,” Tanay Tandon, the CEO of provider automation company Commure, said during a conference panel.

President Donald Trump’s deregulatory approach means hospitals turning to AI tools to save money and give overworked clinicians relief will likely operate in a regulatory gray area for at least the next few years. The president says his goal is to free up U.S. developers to innovate. But that comes with downsides for tech developers and healthcare companies seeking guardrails on AI’s proclivity to make mistakes and exacerbate existing bias. A lack of national standards might even hamper AI development and adoption in the industry, according to some experts.

“When there isn’t a federal framework, it can just absolutely cause all kinds of problems,” said Leigh Burchell, the chair of the Electronic Health Records Association. “We all just want to know what our rules are. And then we can comply.”

Comparing Biden and Trump

But neither Congress nor the executive branch has come up with a comprehensive framework to regulate the models — despite some progress during the Biden administration, when an HHS task force worked to build a unified regulatory structure. The task force unveiled a strategic plan just 10 days before Trump’s inauguration in January.
Trump nixed the blueprint in one of his first executive orders, however. Federal employees working on AI oversight, including at the FDA, have meanwhile been caught up in the Trump administration’s purge of the government’s workforce. Amid the turmoil, the future of the HHS office that oversees AI policy is unclear.

As a result, the little momentum there was in Washington to create a concrete strategy for health AI appears to have stalled, at least for now. In its place, Trump announced the Stargate Project, a $500 billion investment deal with private companies to prioritize AI development and maintain U.S. supremacy — a high-stakes bet complicated by the release of DeepSeek, a high-performing and inexpensive open-source model from China.

The Trump administration did issue a request for information in early February to get public input on a potential national AI action plan. But the plan’s wording makes clear its priorities: to “sustain and enhance America’s AI dominance, and to ensure that unnecessarily burdensome requirements do not hamper private sector AI innovation.”

The revocation of the Biden-era AI plan was largely symbolic, as agencies hadn’t yet gotten around to imposing any requirements on developers or users.

In “the current administration — the brakes have come off and the accelerator has come down,” said Brian Spisak, program director of AI and leadership at Harvard University's National Preparedness Leadership Initiative, at HIMSS. “There’s a lot of responsibility to leadership of health systems to find the optimal balance between innovation and speed and safety and tradition.”

A sea change

Currently, even the most futuristic AI at healthcare institutions is being used to automate administration. Its impact on patient care has largely been peripheral. But that appears to be changing.
There’s growing interest among medical organizations in clinical use cases for AI, like tailoring treatment plans or helping clinicians arrive at a diagnosis, according to a survey from HIMSS conducted in the fall.

Many of those use cases involve generative AI, which can create original text and images. Such models are known to hallucinate, though, or provide answers that are factually incorrect or irrelevant. AI can also leave important information out, an error known as omission. And models may drift, a term for when an AI’s performance changes or degrades over time. As AI is used to pull data from EHR systems, transcribe doctor-patient visits and more, errors like these could interfere with patient care, experts say.

All the while, the technology is advancing. Last year, the healthcare industry was only just beginning to come to terms with governance for generative AI. Already, the conversation has moved on to AI agents, which can complete complex tasks largely unsupervised by humans.

[Image: The HIMSS health IT conference in March in Las Vegas, where a number of companies touted AI agents. Courtesy of HIMSS]

Commure’s Tandon equated the current moment to when the U.S. adopted electricity at the end of the 19th century. “The way that we did things six months ago is completely irrelevant,” he said.

Federal standards from any administration would likely need to be flexible for that very reason, experts say. The Biden administration’s HHS task force suggested the government could create guidelines around testing and piloting tools, but shied away from a prescriptive approach.

That’s in line with what many stakeholders want. Executives at tech companies and hospital systems said any federal standards should be stratified by the level of risk an AI model poses. Stricter oversight may be needed for algorithms that aid doctors in diagnosing disease, for example, while looser restrictions may be appropriate for algorithms that help hospital staff allocate patients to beds.
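The error modes described above lend themselves to automated monitoring. As a rough sketch — the function name, scores, and threshold below are invented for illustration, not taken from any vendor mentioned in this article — drift can be flagged by comparing a quality metric across time windows:

```python
# Hypothetical sketch: flag model drift by comparing a quality metric
# (e.g., reviewer-judged transcript accuracy) across two time windows.
# All names and numbers here are illustrative.

def detect_drift(baseline_scores, recent_scores, tolerance=0.05):
    """Return True when the recent mean score falls more than
    `tolerance` below the baseline mean."""
    baseline = sum(baseline_scores) / len(baseline_scores)
    recent = sum(recent_scores) / len(recent_scores)
    return (baseline - recent) > tolerance

# Weekly accuracy of an AI scribe's output, as scored by human reviewers.
baseline = [0.94, 0.95, 0.93, 0.96]   # scores at deployment
recent   = [0.88, 0.86, 0.87, 0.85]   # scores this month

print(detect_drift(baseline, recent))  # True -> performance has degraded
```

Real governance programs layer human review and ongoing audits on top of simple checks like this; the point is only that drift is measurable, not subjective.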
“We have to weigh the balance between underregulation, which can potentially increase risk, and overregulation, which will stunt innovation,” said Anthony Chang, chief intelligence and innovation officer at the Children’s Hospital of Orange County in California, during a panel. “The current administration is more likely to be on the underregulation side. So we have to be careful as a profession that we don’t allow that to happen,” Chang said.

States, industry groups stepping in

Without a federal playbook, hospitals and medical groups are building their own internal controls amid a patchwork of state laws and voluntary standards released by industry groups. States including Colorado, Utah and California have enacted legislation establishing disclaimer requirements for AI systems. More states are considering similar laws: The Electronic Health Records Association is tracking 150 different state bills related to health AI, according to Burchell.

“There is a massive explosion of bills,” Burchell said.

Differing standards could stop health AI developers and software companies from rolling out products in specific states, potentially disadvantaging patients depending on where they live. More risk-averse software companies may avoid certain states altogether, she added.

“Legislation of a lot of sizes and shapes at the state level is a risk to us, because it means we have to do all kinds of different development. We would rather develop one system that can be used broadly and accepted across the country,” Burchell said.

Health AI standards groups are also stepping up. The groups — often made up of leading hospitals, digital health companies and tech giants — include the Health AI Partnership, an AI learning network for the industry, and the Coalition for Health AI, which recently launched an AI registry for hospitals.
“I think we’ll probably see more of these non-government organizations like Health AI Partnership and some of those emerge as maybe our north stars today, as here’s some leadership in the space,” said Rachel Wilkes, the corporate lead for generative AI initiatives at EHR vendor Meditech.

But standards from industry consortia hold less weight without the federal government behind them, experts say. Historically, voluntary standards aren’t particularly effective.

‘We still don’t quite know how to deal with it’

EHR vendors and hospital operators say they’re building up rigorous internal standards for AI tools, including validation and frequent auditing.

“Government oversight has its place, but I do think the way that clinical practice evolves tends to be more driven by what’s happening at a health system,” said Seth Howard, the executive vice president of research and development for Epic, the largest EHR company in the U.S.

In interviews, executives with Epic, Oracle, Meditech and eClinicalWorks said they’re making AI available to doctors with rigorous oversight, including back-end accuracy checks and ongoing monitoring.

“We are working in an industry that deals with human life. It cannot be trivial. It cannot be under-exaggerated on what guardrails and checks and balances and what discussions need to happen,” said Girish Navani, the CEO of eClinicalWorks.

Technology giants betting heavily on AI talk similarly. Google, for example, has worked with for-profit hospital network HCA on an evaluation framework to catch any errors generated by its AI models and ensure their reliability, according to Aashima Gupta, the head of healthcare for Google Cloud.

“We provide those tools for an evaluation framework, and for all of this there’s a human in the loop capturing the feedback, and that feedback loop then makes the model more effective,” Gupta said. “That’s what gives me comfort.”

Yet AI remains challenging to oversee, according to AI engineers.
The main advantage of generative AI — its creativity — also introduces subjectivity, making grading its outputs complicated. For example, if two clinicians are tasked with summarizing a patient’s medical history based on clinical notes, their results could be quite different, albeit still accurate. It’s the same with generative AI: How do you measure quality in a standardized way when there’s that degree of variability?

“AI governance is still very much a maturing process,” said Harvard’s Spisak.

Some research suggests governance systems for simpler predictive AI models already aren’t rigorous enough. Hospitals with explicit procedures for the use and assessment of AI tools are struggling to identify and mitigate problems, according to a study published last year in The New England Journal of Medicine.

[Image: A patient receives an exam in a room equipped with ambient listening AI technology to transcribe the interaction. Permission granted by Nuance Communications]

Even some of the most well-resourced and tech-savvy systems are encountering hurdles.

“I don’t think we’ve figured it all out,” said Rohit Chandra, the Cleveland Clinic’s chief digital officer. “The term hallucination, for one, has just shown up in the last two, three years. And we still don’t quite know how to deal with it.”

Hospitals should try to make specific people accountable for the performance of the tools, as part of a larger governing body that includes executives, lawyers, doctors and nurses, according to Brenton Hill, the head of operations at standards group CHAI.

“There’s not one silver bullet governance structure that you can put out there that will solve all your problems,” Hill said.

‘A pipe dream’

While a federal road map would be helpful, stakeholders working to integrate AI tools into healthcare said they’re not holding their breath. When asked what she expects from the Trump administration, Google’s Gupta was noncommittal. “It’s hard to say at this point.
We are trying to figure out how we best work with them, share our best practices with them ... Too early to say. I think the entire healthcare community is waiting for that,” Gupta said.

Other experts said the Trump administration is a wake-up call for hospital executives who believed Washington would assume responsibility for overseeing AI.

“The easy answer is, if this regulatory body says it’s safe, then I can trust it. I think people were hoping that might happen for AI,” said Aaron Neinstein, the chief medical officer of agentic AI company Notable. “I think that was a pipe dream.”
Commure Frequently Asked Questions (FAQ)
When was Commure founded?
Commure was founded in 2017.
Where is Commure's headquarters?
Commure's headquarters is located at 1300 Terra Bella Ave, Mountain View.
What is Commure's latest funding round?
Commure's latest funding round is Series E.
How much did Commure raise?
Commure raised a total of $813.89M.
Who are the investors of Commure?
Investors of Commure include General Catalyst, Liquid 2 Ventures, HCA Healthcare, Greenoaks, Human Capital and 5 more.
Who are Commure's competitors?
Competitors of Commure include Zus Health, iLLUMEAi, Bridge Connector, Corepoint Health, HiChina Web Solutions and 7 more.
Compare Commure to Competitors

iNTERFACEWARE operates in the healthcare technology sector and offers the Iguana integration engine. This engine allows for data integration across various healthcare applications, including EMRs, billing systems, and medical devices. iNTERFACEWARE serves the healthcare industry and works with healthcare providers and software companies. It is based in Toronto, Ontario.
Eversolve, LLC is a company in healthcare IT that focuses on application integration and interoperability within the healthcare industry. The company provides tools, services, and methodologies for standard-based interoperability solutions that allow the exchange of medical information between applications and devices. Eversolve serves healthcare vendors, providers, and government agencies, helping them meet interoperability standards. It is based in New Hampshire, United States.

Caristix provides HL7 FHIR integration solutions and services within the healthcare IT sector. The company offers software tools and services for health data integration, data quality, and health data standards, focusing on HL7 and FHIR messaging protocols. Caristix serves medical device startups, health IT vendors, and health data integrators, offering solutions for health data exchange. It is based in Quebec City, Quebec.
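HL7 v2 — one of the messaging protocols such tools center on — encodes a message as pipe-delimited segments. As a hedged illustration of that plumbing (the message below is synthetic, and the parsing is deliberately naive; real integration engines also handle escaping, repetition, and sub-components):

```python
# Synthetic HL7 v2 ADT message: segments are separated by carriage
# returns, fields within a segment by "|". No real patient data.
hl7_message = (
    "MSH|^~\\&|SENDER|FACILITY|RECEIVER|DEST|202501011200||ADT^A01|MSG00001|P|2.5\r"
    "PID|1||12345^^^HOSP||DOE^JOHN\r"
)

# Split into segments, then index each segment's fields by position.
segments = [s for s in hl7_message.split("\r") if s]
parsed = {seg.split("|")[0]: seg.split("|") for seg in segments}

print(parsed["PID"][5])  # PID-5, the patient name field -> DOE^JOHN
```

Even this toy example hints at why interoperability vendors exist: field meanings depend on segment type and position, and every interface pair needs the same reading of the spec.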

Linkmed specializes in healthcare information system integration. It focuses on interface solutions for the healthcare industry. It offers HL7 and DICOM interface software that enables seamless integration of medical devices and systems with healthcare information systems. It primarily serves the healthcare technology sector. It was founded in 2019 and is based in Boston, Massachusetts.

Redox offers application programming interface (API) technology for healthcare, allowing software to easily and securely interoperate with electronic health records (EHRs) in a health system's infrastructure. The company offers various products, such as Nova, which provides real-time data in the cloud, and Chroma, which manages patient identity records. It was founded in 2014 and is based in Madison, Wisconsin.
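Healthcare APIs of this kind commonly expose FHIR-style REST interactions. The sketch below shows the standard FHIR "read" pattern, `GET [base]/Patient/[id]`, against a placeholder base URL — it is not a real Redox endpoint, and the sample resource is minimal and synthetic:

```python
# Illustrative FHIR R4 "read" interaction. The base URL is a
# placeholder, not a real server.
import json

FHIR_BASE = "https://fhir.example.com/r4"  # hypothetical endpoint

def patient_read_url(patient_id):
    """Build the standard FHIR read URL: GET [base]/Patient/[id]."""
    return f"{FHIR_BASE}/Patient/{patient_id}"

# A minimal FHIR Patient resource, shaped like a server's JSON response.
response_body = json.loads("""
{
  "resourceType": "Patient",
  "id": "12345",
  "name": [{"family": "Rivera", "given": ["Ana"]}]
}
""")

print(patient_read_url("12345"))
print(response_body["name"][0]["family"])  # -> Rivera
```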

Truveta provides electronic health record (EHR) data and analytics within the healthcare sector. The company offers EHR data, including clinical notes and images, linked with social determinants of health, mortality, and claims data for research and development purposes. Truveta's solutions are used by life science, government, academic, health system, and research entities. It was founded in 2020 and is based in Bellevue, Washington.