
Harrison.ai launches world-leading AI model to transform healthcare

Press releases may be edited for formatting or style | September 06, 2024 Artificial Intelligence

The critical and highly regulated nature of healthcare has limited the application of other AI models to date. This new model and its applications, however, are qualitatively different, opening up a new conversation in radiology innovation, patient care, and the potential for regulatory assurance.

Dr. Aengus Tran noted, "We are already excited by the performance of the model to date. It outperforms major LLMs in the Royal College of Radiologists' (FRCR) 2B exam by approximately 2x. The launch of this model and our plan to engage in further open and competitive evaluation by professionals underscores our commitment to responsible AI development."

"Harrison.ai is committed to being a leading global voice in helping inform and contribute to an important conversation on the future of AI in healthcare. This is why we are making Harrison.rad.1 accessible to researchers, industry partners, regulators and others in the community to begin this conversation today."

Harrison.rad.1 has demonstrated remarkable performance, excelling in radiology examinations designed for human radiologists and outperforming other foundational models in benchmarks. Specifically, it surpasses other foundational models on the challenging Fellowship of the Royal College of Radiologists (FRCR) 2B Rapids examination – an exam that only 40–59% of human radiologists manage to pass on their first attempt. When reattempted within a year of passing, radiologists score an average of 50.88 out of 60[1]. Harrison.rad.1 performed on par with accredited and experienced radiologists at 51.4 out of 60, while other competing models such as OpenAI's GPT-4o, Anthropic's Claude-3.5-sonnet, Google's Gemini-1.5 Pro and Microsoft's LLaVA-Med scored below 30 on average[2].

Additionally, when assessing Harrison.rad.1 using the VQA-Rad benchmark, a dataset of clinically generated visual questions and answers on radiological images, Harrison.rad.1 achieved an impressive 82% accuracy on closed questions, outperforming other leading foundational models. Similarly, when evaluated on RadBench, a comprehensive and clinically relevant open-source dataset developed by Harrison.ai, the model achieved an accuracy of 73%, the highest among its peers[2].

Building on the accuracy and effectiveness achieved through its existing Annalise line of products, Harrison.ai is seeking collaborations to accelerate the development of further AI products in healthcare, helping to expand capacity and improve patient outcomes.
