
Radiology-specific foundation model

183 points · 3 months ago · (harrison.ai)
ilaksh · 3 months ago

I think the only real reasons the general public can't access this now are greed and a lack of understanding of technology. They will say that it is dangerous or something to let the general public access it, because people may attempt to self-diagnose.

But radiologists are very busy and this could help many people. Put a strong disclaimer in there. Open it up to subscriptions for everyone. Charge $40 per analysis or something. Integrate some kind of directory or referral service for human medical professionals.

Anyway, I hope some non-profit organizations will see the capabilities of this model and work together to create an open dataset. That might involve recruiting volunteers to sign up before they have injuries. Or maybe just recruiting different medical providers that get waivers and give discounts on the spot. Won't be easy. But will be worth it.

arathis · 3 months ago

You think the only real reason the public don't get to use this tool is because of greed?

Like, that's the only REAL reason? Not the technological or ethical implications? The dangers of giving people who have no real concept of how any of this works the means to evaluate themselves?

mhuffman · 3 months ago

Not to speak of the "greed" on this particular item, but in Europe you can buy real-time glucose monitors, portable ECG, and low-calorie meal replacements over the counter. In the US, all of these require a doctor's prescription. It wouldn't take a leap of logic to think that was greed or pressure from the AMA lobby (one of the most funded lobbies in the US, btw).

lsaferite · 3 months ago

Progress is being made on the glucose monitor front.

https://www.fda.gov/news-events/press-announcements/fda-clea...

rscho · 3 months ago

> in Europe you can buy real-time glucose monitors, portable ECG, and low-calorie meal replacements over the counter.

True! And, aside from people with chronic conditions like diabetics, who are forced to know how their glucose levels work, nobody uses those. So it certainly does change the cost, but I don't think it would be any more useful in the US.

medimikka · 3 months ago

Unfortunately not. There are dozens of companies reselling "old" Libre 2 sensors for "fitness and health" applications. BG (blood glucose) has joined HRV (heart rate variability) and other semi-bogus metrics as one of the numbers that drive a whole subculture of health data.

One correction, though: you can buy all of those in the US as well. Holter and FirstBeat are selling clinically validated, FDA-approved multi-lead ECGs; Dexcom is selling an over-the-counter CGM, as is Abbott with the Libre 2; and a Chinese company has recently joined them, too.

Low-calorie meal replacements are all over the stores, too.

If you're a member of this orthorexia/orthovivia crowd, you have the same access to tools as you do in the EU, often more so.

phkahler · 3 months ago

>> Like, that's the only REAL reason? Not the technological or ethical implications? The dangers of giving people who have no real concept of how any of this works the means to evaluate themselves?

On the surface those all sound like additional reasons not to make it available. But they are also great rationalizations for those who want to maintain a monopoly on analysis.

Personally, I found all the comparisons to other AIs' performance bothersome. None of those were specifically trained on diagnostics, AFAICT. Comparison against human experts would seem to be the appropriate way to test it. And not people just out of training taking their first test; I assume experts do better over time, though I might be wrong on that.

jarrelscy · 3 months ago

Developer here. It's a good point that most of the models were not specifically trained on diagnostic imaging, with the exception of LLaVA-Med. We would love to compare against other models trained on diagnostic imaging if anyone can grant us access!

Comparison against human experts is the gold standard, but information on human performance in the FRCR 2B Rapids examination is hard to come by; we've provided a reference (1) which shows comparable (at least numerically) performance by human radiologists.

To your point about people just out of training (keeping in mind that training for the FRCR takes 5 years, while practicing medicine in a real clinical setting) taking their first test: the reference shows that after radiologists pass the FRCR 2B Rapids for the first time, their performance actually declines (at least in the first year), so I'm not sure experts would do better over time.

1. https://www.bmj.com/content/bmj/379/bmj-2022-072826.full.pdf

rscho · 3 months ago

Someone downvoted the author!? This site never ceases to amaze.

K0balt · 3 months ago

Yeah, we should also limit access to medical books too. With a copy of the Merck Manual, what's to stop me from diagnosing my own diseases or even setting up shop at the mall as a medical "counselor"?

The infantilization of the public in the name of “safety” is offensive and ridiculous. In many countries, you can get the vast majority of medicines at the pharmacy without a prescription. Amazingly, people still pay doctors and don’t just take random medications without consulting medical professionals.

It’s only “necessary” to limit access to medical tools in countries that have perverted the incentive structure of healthcare to the point where, out of desperation, people will try nearly anything to deal with health issues that they desperately need care for but cannot afford.

In countries where healthcare costs are not punitive and are in alignment with the economy, people opt for sane solutions and quality advice because they want to get well and don’t want to harm themselves accidentally.

If developing nations with arguably inferior education systems can responsibly live with open access to medical treatment resources like diagnostic imaging and pharmaceuticals, maybe we should be asking ourselves what it is, exactly, that is perverting the incentives so badly that having ungated access to these lifesaving resources would be dangerous?

Calavar · 3 months ago

> If developing nations with arguably inferior education systems can responsibly live with open access to medical treatment resources like diagnostic imaging and pharmaceuticals,

Well, the conditional in this if statement doesn't hold.

Yes, pharmaceuticals are open access in much of the developing world, but it has not happened responsibly. For example, carbapenem-resistant bacteria are 20 times as common in India as they are in the U.S. [1]

I really don't like this characterization of medical resource stewardship as "infantilization" because it implies some sort of elitism amongst doctors, when it's exactly the opposite. It's a system of checks and balances that limits the power afforded to any one person, no matter how smart they think they are. In a US hospital setting, doctors do not have 100% control over antibiotics. An antibiotic stewardship pharmacist or infectious disease specialist will deny and/or cancel antibiotics left and right, even if the prescribing doctor is chief of their department or the CMO.

[1] https://www.fic.nih.gov/News/GlobalHealthMatters/may-june-20...

rscho · 3 months ago

Honestly, that's a short-sighted interpretation. Would you get treated by someone who's fresh out of school? If not, why? They're the ones with the most up-to-date and extensive knowledge. Today, medicine is still mostly know-how acquired through practical training, not books. A good doc is mostly experience, with a few bits of real science mixed in.

K0balt · 3 months ago

I don't get the relevance to my comment here. Maybe you replied to the wrong one? Or were you thinking I was seriously implying that a book was a suitable substitute for a doctor? (I wasn't.)

taneq · 3 months ago

Could there be, perhaps, a middle ground between “backyard chop shops powered by YouTube tutorials and Reddit posts” and the U.S.’ current regulatory-and-commercial-capture exploitation?

K0balt · 3 months ago

I honestly don’t think we need more amateurs performing healthcare services for fun and profit, but I also think that barriers to self-care should be nearly nonexistent while encouraging an abundance of caution. Not sure how to best accommodate those somewhat disparate goals.

BaculumMeumEst · 3 months ago

> The infantilization of the public in the name of “safety” is offensive and ridiculous.

It comes from dealing with the public.

> In many countries, you can get the vast majority of medicines at the pharmacy without a prescription. Amazingly, people still pay doctors and don’t just take random medications without consulting medical professionals.

I see people on this site of allegedly smart people recommending random medications ALL THE TIME. Not only without consulting medical professionals, but _in spite of medical professionals' advice_, because they think they _know better_.

Let's roll out the unbelievably dumb idea of selling self-diagnosis AI for radiology scans in the countries you're referring to and ask them how it works out. If you want the freedom to shoot from the hip on your healthcare, you've got the freedom to move to Tijuana. We're not going to subject our medical professionals to an onslaught of confidently wrong individuals armed with their $40 AI results from an overhyped startup. Those startups can make their case to the providers directly and have their tools vetted.

ilaksh · 3 months ago

Well.. funny you say that.. I did move to Tijuana some years ago. One time while I was there, I was sick and a neighbor (Mexican) seemed to insist that I go to the doctor. She recommended a hole in the wall office above a pharmacy that looked like a little-league concession stand.

It was a serious thirty-something woman who collected something like 50 pesos (around $3), listened to me for about 30 seconds, and told me to make sure I slept and ate well (I think she specifically said chicken soup). I asked about antibiotics or medicine and she indicated they weren't necessary.

So I rested quite seriously and ate as well as I could and got better about a week later.

During the time that I was in Playas de Tijuana I would normally go to nicer pharmacies, though, and they didn't ask for a prescription for my asthma or other medicine, which was something like 8x cheaper over there. They did always wear nice lab coats and take their jobs very seriously if I asked for advice, although I rarely did that.

I do remember one time asking about my back acne problems at a place in the mall and the lady immediately gave me an antibiotic for maybe $15 which didn't cure it but made it about 75% better for a few months.

Another time at the grocery store I asked about acne medicine and the lady was about to sell me something like tretinoin cream for probably a quarter of the US price. She didn't have anything like oral Accutane, of course. It was just a Calimax Plus (a grocery store).

There are of course quite serious and more expensive actual doctors in Tijuana but I never ended up visiting any of them. I was on a budget and luckily did not have any really critical medical needs. But if I had, I am sure it would have cost dramatically less than across the border.

EDIT: not to say the concession-stand office lady wasn't an actual doctor. I don't know, she may have had training, and certainly had a lot of experience.

rscho · 3 months ago

Even if this worked as well as a human radiologist, diagnosis is not only made of radiology. That's why radiology is a support specialty. Other specialists incorporate radiology exams into their own assessment to decide on a treatment plan. So in the end, I don't think it'll change as much as you'd think, even if freely accessible.

crabbone · 3 months ago

Absolutely this. Also, radiologists are usually given notes on patients that accompany whatever image they are reading, and in cases like, e.g., ultrasound, often perform the exam themselves. So they are able to assess presentation, hear the patient's complaints, learn the history of the patient, etc.

Not to mention that in particularly sick patients, problems tend to compound one another, and exams are often requested to deal with a particular side of the problem, ignoring, perhaps, the major (but already known and diagnosed) problem, etc.

Oftentimes factors specific to a hospital play a crucial role: e.g., in hospitals for rich (but older) patients it may be common to take chest X-rays in a seated position (so as not to discomfort the valuable patients...), whereas in poorer hospitals a seated position would indicate some kind of problem (i.e., the patient couldn't stand for whatever reason).

That's not to say that automatic image reading is worthless: radiologists are perhaps among the most overbooked specialists in any hospital, and are getting even more overbooked because other specialists tend to be afraid to diagnose without imaging / are over-reliant on imaging. From talking to someone who worked as a clinical radiologist: most images are never read. So if an automated system could identify images requiring human attention, that'd already be a huge leap.
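
A minimal sketch of that triage idea, assuming a model that emits a probability of abnormality; predict_proba is a stand-in for whatever interface such a model would expose, not a real product API:

  # Sort the worklist so studies the model flags as likely abnormal,
  # or is genuinely unsure about, reach a human reader first.
  def prioritize(worklist, model):
      def urgency(study):
          p = model.predict_proba(study.image)  # stand-in: P(abnormal)
          uncertainty = 1 - 2 * abs(p - 0.5)    # 1.0 at p=0.5, 0.0 at p=0 or 1
          return max(p, uncertainty)
      return sorted(worklist, key=urgency, reverse=True)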

robertlagrant · 3 months ago

You could imagine imprinting additional info into the scan, such as "seated preferred" or "seated for pain". There is more encoding that could be done.
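
DICOM already has standard slots for exactly this kind of context. A minimal pydicom sketch, where the file name and comment text are made up but ViewPosition (0018,5101) and ImageComments (0020,4000) are real DICOM attributes:

  import pydicom

  ds = pydicom.dcmread("chest_xray.dcm")  # hypothetical file
  ds.ViewPosition = "AP"  # (0018,5101): standard projection attribute
  ds.ImageComments = "Acquired seated: patient unable to stand (pain)"  # (0020,4000)
  ds.save_as("chest_xray_annotated.dcm")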

xarope · 3 months ago

Putting on my cynical hat, I feel this will just be another way for unscrupulous healthcare organizations to charge yet another service line item to patients/insurance...

  - X-Ray: $20
  - Radiologist Consultation: $200
  - Harrison.AI interpretation: $2000
gosub100 · 3 months ago

Yep, while justifying a reduction in force at radiology practices and keeping the extra salaries for the CEO and investors. Then when it inevitably kills someone, throw the AI under the bus, with a pre-planned escape hatch so the AI company never has to pay any settlements. Have them sell their "assets" to the next third party.

vrc · 3 months ago

Yeah, and the bill will come back adjusted to

  - X-Ray: $15
  - Radiologist Consultation: $125
  - Harrison.AI interpretation: $20
The cat and mouse between payer and system will never die given how it's set up. Providers have a disincentive to bill less than maximally, and payers therefore have a disincentive not to deny and adjust as much as possible. Somewhere in the middle, patients get squished with the burden of copays and uncovered expenses that the hospital is now legally obligated to try to collect on, or forfeit that portion for all future claims (and still charge a copay on that new adjustment).
littlestymaar · 3 months ago

A model that's accurate only 50% of the time is far from helpful in terms of public health: that's high enough that people will trust it and low enough to cause harm through misdiagnosis.
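
To make that concrete with assumed numbers (not figures from the article): screen 1,000 scans of which 50 are truly abnormal. A model that is right half the time on each scan flags 25 of the 50 abnormal ones plus 475 of the 950 normal ones, so only 25 of its 500 flags (5%) are real findings, while half of the true problems are missed anyway.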

CamperBob2 · 3 months ago

The models are already more accurate than highly-trained human diagnosticians in many areas.

littlestymaar · 3 months ago

If you want it to be used by the public, it doesn't matter if it's more accurate at some things when it's very bad at others and the user has no idea which situation they're in.

As a senior developer I routinely use LLMs to write boilerplate code, but that doesn't mean that the layman can get something working by using an LLM. And it's exactly the same for other professions.

rscho · 3 months ago

On paper. Not in the trenches.

robertlagrant · 3 months ago

I don't understand the greed argument. Is the reason you draw a salary "greed"? Would gating it behind $40 not be "greed" to someone?

It's more likely that regardless of disclaimers people will still use it, and at some point someone will decide that that outcome is still the provider's fault, because you can't expect people to not use a service when they're impoverished and scared, can you?

rscho · 3 months ago

> a lack of understanding of technology

Unfortunately, it's the other way around. The tech sector understands very little about clinical medicine, and therefore spends its time tilting at windmills and shouting in the dark at docs.

ImHereToVote · 3 months ago

Doctors should be like thesis advisors for their patients, provided the patients pass a minimum competency test. If you can't pass, you don't get a thesis advisor.

owenpalmer · 3 months ago

I had an MRI on my ankle several years ago. At first glance, the doctor told me there was nothing wrong, even though I had very painful symptoms. While the visit was unproductive, I requested the MRI images on a CD, just because I was curious (I wanted to reconstruct the layers into a 3D model). After receiving the data in the mail weeks later, I was surprised to find a formal diagnosis on the CD. Apparently a better doctor had gotten around to analyzing it (they never followed up). If I hadn't requested my records, I never would have gotten a diagnosis. I had a swollen retrocalcaneal bursa. I googled the treatments, and eventually got better.

I'm curious whether this AI model would have been able to detect my issue more competently than the shitty doctor.

rasmus1610 · 3 months ago

To be honest, I've heard of several radiology practices that hand patients a normal report directly after the exam and look at the actual images only after the patient has left.

I guess the reasoning is that they want to provide "good service" by giving the patient something to work with directly after the exam, and the workload is so high that they couldn't actually read the images that fast. And they accept the risk that some people will get angry because their exam wasn't normal in the end.

But at the scale a typical radiology practice operates today, the few patients who don't have a normal exam don't matter (the proportion of normal exams in an outpatient setting is quite high).

I find it highly unethical, but some radiologists are a little bit more ethically relaxed, I guess.

What I want to say is that it might be more of a structural/organisational problem than incompetence by the radiologist in your case.

(Disclaimer: I’m a radiologist myself)

HPsquared · 3 months ago

This is one of those comments where I started out thinking "oh come on, no way, this guy clearly has no idea what he's talking about", then read the last part, and the realization dawned that the world is actually a very messy place.

lostlogin · 3 months ago

How did this happen?

Surely your results went to a requesting physician who should have been following up with you? Radiology doctors don't usually organise follow-up care.

Or was the inaccurate result from the requesting physician?

owenpalmer · 3 months ago

I don't know, just incompetence and disorganization on their part. Directly after my MRI, they told me the images didn't indicate any meaningful information.

rscho · 3 months ago

You got lost in the mess of files and admin. The process is usually that you get the exam and they give you a first impression orally. Then they really get to work, look properly, and produce a written report, which the requesting doc will use for treatment decisions. At that point, they're supposed to get back to you, but apparently someone dropped you along the way.

quantumwoke · 3 months ago

The radiographer or the radiologist? Did you see your requesting doctor afterwards?

daedalus_f · 3 months ago

The FRCR 2B examination consists of three parts: a rapid reporting component (the candidate assesses around 35 x-rays in 30 minutes and is simply expected to mark each film as normal or abnormal; this is a perceptual test and is largely limited to simple fracture vs. normal), alongside a viva and a long cases component in which the candidate reviews more complex examinations and is expected to provide a report, differential diagnosis and management plan.

A quick look at the paper in the BMJ shows that the model did not sit the FRCR 2B examination as claimed, but was given a cut-down mock-up of the rapid reporting part of the examination, invented by one of the authors.

https://www.bmj.com/content/bmj/379/bmj-2022-072826.full.pdf

nopinsight · 3 months ago

The paper you linked to was published in 2022. The results there were for a different system for sure.

Were the same tests also used here?

jarrelscy · 3 months ago

One of the developers here. The paper links to an earlier model from a different group that could only interpret X-rays of specific body parts. Our model does not have such a limitation.

However, the actual FRCR 2B Rapids exam question bank is not publicly available, and the FRCR is unlikely to agree to release it, as this would compromise the integrity of their examination in the future. So the tests used are mock examinations, none of which were provided to the model during training.

daedalus_f · 3 months ago

Interesting, is your model still based on radiographs alone, or can it look at cross-sectional imaging as well?

jarrelscy · 3 months ago

This current model is radiographs alone. The FRCR 2B Rapids exam is based on only radiographs.

zeagle · 3 months ago

So you should disclose this in your advertising spiel?

nopinsight · 3 months ago

This is impressive. The next step is to see how well it generalizes outside of such tests.

"The Fellowship of the Royal College of Radiologists (FRCR) 2B Rapids exam is considered one of the leading and toughest certifications for radiologists. Only 40-59% of human radiologists pass on their first attempt. Radiologists who re-attempt the exam within a year of passing score an average of 50.88 out of 60 (84.8%).

Harrison.rad.1 scored 51.4 out of 60 (85.67%). Other competing models, including OpenAI’s GPT-4o, Microsoft’s LLaVA-Med, Anthropic’s Claude 3.5 Sonnet and Google’s Gemini 1.5 Pro, mostly scored below 30*, which is statistically no better than random guessing."
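
A quick back-of-the-envelope check of that random-guessing claim, assuming the exam is scored as 60 independent normal/abnormal calls (which matches the "out of 60" scoring quoted above); the scores tested here are just illustrative:

  from scipy.stats import binomtest

  # A coin flip over 60 binary calls expects 30/60 (sd ~3.9); a score is
  # only distinguishable from chance once its two-sided p-value is < 0.05.
  for score in (25, 30, 38, 51):
      p = binomtest(score, n=60, p=0.5).pvalue
      print(score, round(p, 3))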

rafram · 3 months ago

Impressive, but was it trained on questions from the exam? Were any of those other models?

aengustran · 3 months ago

harrison.rad.1 was not trained on any of the exam questions. It can't be guaranteed, however, that other models were not trained on them.

trashtester · 3 months ago

AI models for regular X-rays seem to be achieving high-quality, human-level performance, which is not unexpected.

But if someone is able to connect a network to the raw data outputs from CT or MR machines, we may start seeing these AIs radically outperform humans at a fraction of the cost.

For CT machines, this could also be used to concentrate radiation doses into parts of the body where the uncertainty of the current state is greatest, even in real time.

For instance, if using a CT machine to examine a fracture in a leg bone, one could start out with a very low-dosage scan, simply to find the exact location of the bone; then a slightly higher, concentrated scan of the bone in the general area; and then an even higher dosage in the area where the fracture is detected, to get a high-resolution picture of the damage, splinters, etc.

This could reduce the total dosage the patient is exposed to, or be used to get a higher resolution image of the damaged area than one would otherwise want to collect, or possibly to perform more scans during treatment than is currently considered worth the radiation exposure.
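
A minimal sketch of that adaptive loop; acquire and find_regions are hypothetical stand-ins for a scanner interface and a detection model, not any real device API:

  def adaptive_scan(acquire, find_regions, max_dose):
      dose = 0.05 * max_dose
      image = acquire(region=None, dose=dose)  # low-dose localizer pass
      total = dose
      for region in find_regions(image):  # e.g. suspected fracture sites
          step = min(2 * dose, max_dose - total)
          if step <= 0:
              break  # stay inside the total dose budget
          # Concentrate dose where uncertainty about the anatomy is greatest.
          image = acquire(region=region, dose=step)
          total += step
      return image, total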

Such machines could also be made multimodal, meaning the same machine could carry CT, MR, and ultrasound sensors (Doppler + regular), possibly even secondary sensors such as thermal sensors, pressure sensors, or even invasive types of sensors.

By fusing all such inputs (plus the medical records, blood sample data, etc.) for the patient, such a machine may be able to build a more complete picture of a patient's conditions than even the best hospitals can provide today, and at a fraction of the cost.

Especially for diffuse issues, like back pains where information about bone damage, bloodflow (from the Doppler ultrasound), soft tissue tension/condition etc could be collected simultaneously and matched with the reported symptoms in real time to find location where nerve damage or irritation could occur.

To verify findings (or to exclude them, if more than one possible explanation exists), such an AI could then suggest experiments that would confirm or exclude possibilities, including stimulating certain areas electrically, applying physical pressure, or even inserting a tiny probe to inspect the location directly.

Unfortunately (or fortunately, for the medical companies), while this could lower the cost per treatment, the market for such diagnostics could grow even faster, meaning medical costs (insurance/taxes) might still go up.

smitec · 3 months ago

A very exciting release and I hope it stacks up in the field. I ran into their team a few times in a previous role and they were always extremely robust in their clinical validation which is often lacking in the space.

I still see somewhat of a product gap in this whole area when selling into clinics but that can likely be solved with time.

davedx · 3 months ago

“AI has peaked”

“AI is a bubble”

We’re still scratching the surface of what’s possible. I’m hugely optimistic about the future, in a way I never was in other hype/tech cycles.

Almondsetat · 3 months ago

"AI" here refers to general intelligence. A highly specific ML model for radiology is not AI, but a new avenue for improvements in the field of computer vision.

the8472 · 3 months ago

So, hypothetically, a general-intelligence-capable architecture isn't allowed to specialize in a particular task without losing its GI status? I.e. trained radiologists wouldn't be a general intelligence? E.g. their ability to produce text is really just a part of their radiologist-function to output data, right?

Almondsetat · 3 months ago

It's impossible for humans to know a lot about everything, while LLMs can. So an LLM that sacrifices all that knowledge for a specific application is no longer an AI, since it would show its shortcomings more obviously.

the8472 · 3 months ago

They're still very bounded systems (not some galaxy brain) and training them is expensive. Learning tradeoffs have to be made. The tradeoffs are just different than in humans. Note that they're still able to interact via natural language!

whamlastxmas · 3 months ago

The world’s shittiest calculator powered by a coin battery is an AI. I think you’re being overly narrow or confusing it with AGI

davedx · 3 months ago

What? No it doesn't; that's "AGI" - it has a G in it. This is ML/AI.

GaggiX · 3 months ago

When did "AI" become "general intelligence"?

ygjb · 3 months ago

It's bikeshedding. It's easier to argue about definitions and terminology than about the technology, so that's what people go for.

gosub100 · 3 months ago

They're still going to charge the same amount (or more). At best this will divert money from intelligent, hard-working physicians to SV tech bros who dropped out of undergrad (while putting patients' lives at higher risk).

bobbiechen · 3 months ago

"We'd better hope we can actually replace radiologists with AI, because medical students are no longer choosing to specialize in it."

- one of the speakers at a recent health+AI event

I'm wondering what others in healthcare think of this. I've been skeptical about the death of software engineering as a profession (just as spreadsheets increased the number of accountants), but neither of those jobs requires going to medical school for several years.

doctoring · 3 months ago

I don't know about other countries, but for the United States, "medical students are no longer choosing it" is very, very untrue, and it is trivial to look up, as this information is public from the NRMP (the organization that runs the residency match).

Radiology remains one of the most competitive and in-demand specialties. In this year's match, only 4 out of ~1200 available radiology residency positions went unfilled. Last year it was 0. Only a handful of other specialties have similar rates.

For comparison, 251 out of ~900 pediatric residency slots went unfilled this year, and 636 out of ~5000 family medicine residency slots went unfilled. (These numbers are much higher than in previous years.)

However, I do somewhat agree with the speaker's sentiment, if for a different reason. Radiologist supply in the US is roughly stable (thanks to the US's strange stranglehold on residency slots), but demand is increasing: the number of scans ordered per patient continues to rise, as does the complexity of those scans. I've heard of hospital systems with backlogs that leave patients waiting months for, say, their cancer staging scan. One can hope we find some way to make things more efficient. Maybe AI can help.

bobbiechen · 3 months ago

Thanks for the info (and same for the sibling comments)! Seems that the hype does not match the reality again.

yurimo · 3 months ago

Interesting take. Had a friend recently start med school (in the US), and he said radiology was one of the top specialties people were considering because, as he put it, "the pay is decent and they have a life". Anecdotal, but I wonder what the reason for not specializing in it would be, then. If anything, AI can help reduce the workload further and identify patterns that can be missed.

husarcik · 3 months ago

I'm a third-year radiology resident. That speaker is misinformed, as diagnostic radiology has become one of the most competitive subspecialties to get into. All spots fill every year. We need more radiology spots to keep up with the demand.

nradov · 3 months ago

I'm glad to see that this model uses multiple patient chart data elements beyond just images. Some earlier more naive models attempted to treat it as a pure image classification problem which isn't sufficient outside the simplest cases. Human radiologists rely heavily on other factors including patient age, sex, previous diagnoses, patient reported symptoms, etc.

lostlogin · 3 months ago

> patient reported symptoms

You make it sound like the reporting radiologist is given a referral with helpful, legible information on it. That this ever happens is doubtful.

nradov · 3 months ago

Referrals are more problematic but if the radiologist works in the same organization as the ordering physician then they should have access to the full patient chart in the EHR.

nightski · 3 months ago

Is it really a foundation model if it is for a specific purpose?

marsh_mellow · 3 months ago

They list seven different use cases in this technical blog:

https://harrison.ai/news/reimagining-medical-ai-with-the-mos...

I'd interpret it as a foundation model for the radiology domain.

aqme28 · 3 months ago

This is far from the first company to try to tackle AI radiology, or even AI x-ray radiology. It's not even the first company to have a model that works on par with or better than radiologists. I'm curious how they solve the commercial angle here, which seems to be the big point of failure.

crabbone · 3 months ago

The real problem is liability. Radiologists, if they make a mistake, can be sued. Who are you going to sue when the program misdiagnoses you?

NB. In all the claims I've seen so far about outperforming radiologists, the common denominator was that the people creating these models had mostly never even seen a real radiologist and had no idea how to read the images. Consequently, the models "worked" due to some kind of luck, where they accidentally (or deliberately) were fed data that made them look good.

augustinemp · 3 months ago

I spoke to a radiologist in a customer interview yesterday. They mentioned that they would really like a tool that could zoom in on a specific part of an image and explain what is happening. For extra points, they would like it to be able to reference literature where similar images were shown.
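
A hedged sketch of the "zoom and explain" half of that request, cropping a region and asking a general vision model about it; the file path, crop box, and prompt are made up, and GPT-4o stands in for whatever model would actually be used:

  import base64, io
  from PIL import Image
  from openai import OpenAI

  # Crop the region the radiologist zoomed in on (coordinates made up).
  roi = Image.open("wrist_xray.png").crop((420, 310, 680, 560))
  buf = io.BytesIO()
  roi.save(buf, format="PNG")
  data_url = "data:image/png;base64," + base64.b64encode(buf.getvalue()).decode()

  reply = OpenAI().chat.completions.create(
      model="gpt-4o",
      messages=[{"role": "user", "content": [
          {"type": "text", "text": "Explain what is happening in this cropped radiograph region."},
          {"type": "image_url", "image_url": {"url": data_url}},
      ]}],
  )
  print(reply.choices[0].message.content)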

hgh · 3 months ago

Connecting your comment to another one about the commercial model: it seems the potential win here is selling useful AI-powered tools to radiologists, rather than selling to end customers with the idea of replacing some radiology consultations.

This seems generally aligned with AI realities today: it won't necessarily replace whole job functions, but it can increase productivity when applied thoughtfully.

Workaccount2 · 3 months ago

Aren't radiologists that "tool" from the perspective of primary doctors?

darby_nine · 3 months ago

Sort of like primary doctors are just a "tool" to get referrals for treatment

isaacfrond · 3 months ago

From the article: Other competing models, including OpenAI’s GPT-4o, Microsoft’s LLaVA-Med, Anthropic’s Claude 3.5 Sonnet and Google’s Gemini 1.5 Pro, mostly scored below 30*, which is statistically no better than random guessing.

How is ChatGPT the competition? It's mostly a text model?

aubanel · 3 months ago

GPT-4o also has vision capabilities: https://platform.openai.com/docs/guides/vision

exe34 · 3 months ago

GPT-4o is multimodal.

husarcik · 3 months ago

As a radiology resident, it would be nice to have a tool to better organize my dictation automatically. I don't ever want to touch a PowerScribe template again.

I'd be 2x as productive if I could just speak and it auto-filled my template in the correct spots.
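
A toy sketch of that auto-fill idea, routing dictated sentences into template sections by keyword; real products layer speech recognition and an LLM on top, and the section names and patterns here are made up:

  import re

  ROUTES = {r"\b(lung|pneumothorax|effusion)": "LUNGS",
            r"\b(heart|cardiac|mediastin)": "HEART",
            r"\b(rib|fracture|spine)": "BONES"}

  def fill(dictation: str) -> dict:
      report = {"LUNGS": [], "HEART": [], "BONES": [], "IMPRESSION": []}
      for sentence in re.split(r"(?<=[.!?])\s+", dictation):
          # First matching pattern wins; unmatched text lands in IMPRESSION.
          section = next((s for pat, s in ROUTES.items()
                          if re.search(pat, sentence, re.I)), "IMPRESSION")
          report[section].append(sentence.strip())
      return report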

akashtomer1 · 3 months ago

I'm a practicing radiologist and I share the same frustration. I went ahead and built exactly what you just mentioned. Would love for you and anyone else interested to try it out. A short demo: https://youtu.be/Adpjff0t_FE To request access: https://quillr.ai/

transcranial · 3 months ago

As a former radiology resident, I totally agree. That's why we're building exactly this: https://md.ai/product/reporting/.

seanvelasco · 3 months ago

Following this. Gonna integrate it with a DICOM viewer I'm developing from the ground up.

lostlogin · 3 months ago

Fixing the RIS would make radiology happier than fixing the viewer.

And while you're at it: the current 'integrations' between RIS and PACS are so jarring they set my teeth on edge.

rasmus1610 · 3 months ago

Yes please. We hope to move away from our RIS and integrate our reporting workflow into our PACS this year.

rahkiin · 3 months ago

Can you help me with those acronyms?

ZahiF · 3 months ago

Super cool, love to see it.

I recently joined [Sonio](https://sonio.ai/platform/), where we work on AI-powered prenatal ultrasound reporting and image management. Arguably, prenatal ultrasounds are some of the more challenging to get right, but we've already deployed our solution in clinics across the US and Europe.

Exciting times indeed!

haldujai · 3 months ago

> Arguably, prenatal ultrasounds are some of the more challenging to get right

Prenatal ultrasounds are one of the most rote and straightforward exams to get right.

ZahiF · 3 months ago

By "get right" I meant analyzing, not just taking the ultrasound.

haldujai · 3 months ago

Yes, that's what I meant too.

Acquiring the images is the hard part in obstetrical ultrasound; reporting is very mechanical for the most part and lends itself well to AI.

whamlastxmas · 3 months ago

It’s weird that I have to attest I’m a healthcare professional just to view your job openings

naveen99 · 3 months ago

X-ray-specific model. Fractures are relatively easy; chest and abdomen x-rays are hard. Very large chest x-ray datasets have been out for a long time (like the ones from Stanford). Problem solving is done with CT, ultrasound, PET, MRI, fluoroscopy, and other nuclear scans.

hammock · 3 months ago

I looked at my rib images for days trying to find the fracture. Couldn't do it. Could barely count the ribs. All my doctor friends found it right away, though.

naveen99 · 3 months ago

Ok, yeah, rib fractures on chest x-rays are hard too. Even extremity fractures can be hard. Some are not directly visible, but you can look for indirect signs such as hematomas displacing fat pads. Stress fractures show up only on MRI or bone scans…

Improvement · 3 months ago

I can't find any git link; hopefully I will look into it later.

From their benchmarks it's looking like a great model that beats the competition, but I'll wait for third-party tests after release to determine the real performance.

moralestapia · 3 months ago

"Exclusive Dataset"

"We have proprietary access to extensive medical imaging data that is representative and diverse, enabling superior model training and accuracy. "

Oh, I'd love to see the loicenses on that, :^).

infocollector · 3 months ago

I don't see a release? Perhaps it's an internal distribution to subscribers/people? Does anyone see a download/GitHub page for the model?

stevenbuscemi · 3 months ago

Harrison.ai typically productionize and commercialize their models through child companies (Annalise.ai for radiology, Franklin.ai for pathology).

I'd imagine access to the model itself will remain pretty exclusive, but would love to see them adopt a more open approach.

blazerunner · 3 months ago

I can see a link to join a waitlist for the model, and there is also this:

> Filtered for plain radiographs, Harrison.rad.1 achieves 82% accuracy on closed questions, outperforming other generalist and specialist LLM models available to date (Table 1).

The code and methodology used to reach this conclusion will be made available at https://harrison-ai.github.io/radbench/.

joelthelion · 3 months ago

Too bad it's not available Llama-style. We'd see a lot of progress and new applications if something like that were available.

newyankee · 3 months ago

I wonder if there is any open-source radiology model that can be used to test and assist real-world radiologists.

zxexz · 3 months ago

I recall there being a couple of non-commercial ones on PhysioNet trained on the MIMIC-CXR dataset. I could be wrong; I'll hopefully remember to check.

amitport · 3 months ago

There are a few for specific tasks (e.g., lung cancer), but no "foundation" models AFAICT.

zxexz · 3 months ago

There really should be at this point. Annotated radiology datasets, with patients numbering into the millions, are the easiest healthcare datasets to obtain. I suspect there are many startups, and know of several long since failed, who trained on these. I've met radiologists who assert that most of their job comes down to contextualizing their findings for their colleagues, as well as within the scope of the case itself. That's relevant here: it doesn't matter how accurate or precise your model is if it can't do that. Radiologists already use "AI" tools that are very good, and radiology is a very welcoming field for new technology. I think the promise of foundation models at the moment would be to ease burden and help prevent burnout. Unfortunately, those models aren't "sexy" - they reduce administrative burden and assemble contextual evidence for better retrieval (with interfaces that don't suck when integrated with the EMR).

ilaksh · 3 months ago

Can you provide a link or search term to give a jumpstart for finding good radiology datasets?

naveen99 · 3 months ago

TCGA from NCIA has some for cancer.

DeepLesion is another one, out of the NIH.

Segmed is a YC company that sells access to radiology datasets.

hammock · 3 months ago

Radiology is the best job ever. Work from home, click through pictures all day. Profit
