One of the touted applications of generative AI in healthcare is increasing access to medical care, often through some kind of chatbot interface. But in the rush to capitalize on this new technology, proper risk assessments (safety, equity, etc.) become "check-the-box" activities.
Yes, accessibility of quality, affordable care is a major problem. No, we can't tech-solution our way out of provider shortages.
Yes, patients can and should do their own research to make informed decisions. No, not everything on the internet (aka everything LLMs are trained on) is a reliable reference for patients. And an even bigger NO, not everything a chatbot responds with can be substantiated.
If companies are pushing genAI solutions (or even relying on other AI technologies) as a substitute for addressing the systemic issues behind barriers to healthcare, they should at least give users very public, plain-English safety assessment results before they interact with the tool.
https://lnkd.in/gqHaqbwQ