How Can AI Help Build Trust and Collaboration in Medical Decisions?


@cryptokraze

Can machine learning improve health care for people diagnosed with depression, and if so, what support do doctors want from machine learning decision support tools?

If you're curious about the questions above, then this Steemit post is definitely going to be beneficial for you :) So stick around and scroll down to get clear insights. HOORAY!!!


Here, we've got a brief explanation of a CHI 2021 paper in which the researchers unveiled what healthcare providers want from machine learning models and how these tools can support doctors' care decisions.
Machine learning models could be crucial for improving care for people diagnosed with depression. Selecting an effective medication for a person diagnosed with major depression is difficult: there are many different antidepressants, and we don't know in advance how someone will respond to an individual treatment.


Therefore, an individual will often have to try several antidepressants before seeing improvements in their symptoms; doctors consistently describe this as a process of trial and error.
Current work in machine learning is developing algorithms that provide treatment recommendations based on medical record data, but these systems are rarely integrated into clinical practice due to low user satisfaction, technology that is too difficult to use, and a failure to account for user expectations in the system design.
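To make that concrete, here is a toy sketch (not the paper's actual model) of how such a recommender might rank antidepressants by predicted response probability from medical record features. All feature names, drug choices, and data below are hypothetical placeholders:

```python
# A toy sketch (NOT the paper's actual model) of ranking antidepressants
# by predicted response probability from medical-record features.
# All feature names, drugs, and data here are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
drugs = ["sertraline", "fluoxetine", "bupropion"]

# Synthetic stand-in for record features: e.g., age, symptom score,
# number of prior medication trials.
X = rng.normal(size=(500, 3))

# One response model per drug, trained on placeholder outcome labels.
models = {}
for drug in drugs:
    y = rng.integers(0, 2, size=500)  # 1 = responded, 0 = did not
    models[drug] = LogisticRegression().fit(X, y)

def recommend(patient_features):
    """Rank candidate drugs by predicted probability of response."""
    scores = {d: m.predict_proba([patient_features])[0, 1]
              for d, m in models.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(recommend([0.3, -1.2, 0.5]))
```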


Here, we'll look at how machine learning tools can be made helpful and usable in the clinical setting. To answer this question, the researchers ran a set of remote co-design sessions with primary care providers.

An important result from this study was that effective decision support tools need to engage with the broader healthcare system, not just an individual healthcare provider, including the other people, processes, resource constraints, and clinical domain knowledge that make up the sociotechnical healthcare system. Do you want to know how clinicians expect decision support tools to engage with other people and with resource constraints?


First, every single provider who participated in this study said that treatment decisions are a collaborative process with patients, and they wanted tools that would help engage patients in the decision-making process. This means that decision support tools should integrate patient preferences, such as the side effects they want to avoid, and produce personalized patient education.
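As a purely illustrative sketch (the side-effect table and helper below are hypothetical stand-ins, not the paper's prototype), preference integration could re-rank the model's suggestions so that drugs whose side effects a patient wants to avoid drop below the acceptable options:

```python
# An illustrative sketch of integrating patient preferences: drugs whose
# (hypothetical) side-effect profiles clash with what the patient wants
# to avoid are moved below the acceptable options. Not the paper's code.

SIDE_EFFECTS = {
    "sertraline": {"insomnia", "nausea"},
    "fluoxetine": {"insomnia", "weight_change"},
    "bupropion": {"dry_mouth"},
}

def apply_preferences(ranked_drugs, avoid):
    """Keep the model's ordering, but place drugs that conflict with the
    patient's side-effect preferences after the acceptable ones."""
    acceptable = [(d, p) for d, p in ranked_drugs
                  if not (SIDE_EFFECTS.get(d, set()) & avoid)]
    conflicting = [(d, p) for d, p in ranked_drugs
                   if SIDE_EFFECTS.get(d, set()) & avoid]
    return acceptable + conflicting

ranked = [("sertraline", 0.72), ("bupropion", 0.61), ("fluoxetine", 0.55)]
print(apply_preferences(ranked, avoid={"insomnia"}))
# -> bupropion first; the two insomnia-linked drugs drop to the bottom
```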

Based on this finding, the researchers created an interactive prototype for integrating patient preferences, and they highlighted an important opportunity for future machine learning tools to be co-designed with patients directly, expanding patients' voices in their own care decisions.


Second, clinicians expected these tools to be designed around existing resource constraints, especially time constraints. Surprisingly, participants rarely mentioned issues of trust or accuracy with machine learning algorithms. Through discussions, it was revealed that this is because doctors have very little time with patients and therefore will not have time to decide whether the technology or an individual prediction is trustworthy.

Participants believed that machine learning explanations provided too much information, which would not fit into their short patient appointments. Therefore, they suggested that decision support tools should display how the tool is validated rather than individual explanations for each prediction, and they believe more responsibility needs to be placed upfront to determine when the system is likely to err and when a prediction should perhaps not be shown at all.
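One way to read that "know when not to show a prediction" idea is selective prediction: abstain below a confidence threshold and surface a validation summary instead of a per-prediction explanation. The threshold, messages, and function below are my own illustrative assumptions, not the paper's design:

```python
# A hedged sketch of "know when not to show a prediction": abstain below
# a confidence threshold and show a validation summary instead of a
# per-prediction explanation. Threshold and messages are assumptions.

VALIDATION_SUMMARY = ("Model validated retrospectively on held-out "
                      "patient records; see the paper for details.")

def present_prediction(drug, prob, threshold=0.65):
    """Show a recommendation only when confidence clears the threshold;
    otherwise abstain rather than risk an unreliable suggestion."""
    if prob < threshold:
        return (f"No recommendation shown (confidence {prob:.2f} is "
                f"below {threshold}). {VALIDATION_SUMMARY}")
    return f"Suggested: {drug} ({prob:.2f} confidence). {VALIDATION_SUMMARY}"

print(present_prediction("sertraline", 0.72))
print(present_prediction("fluoxetine", 0.55))
```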


The researchers behind this CHI 2021 paper identify aspects of the healthcare sociotechnical system that should be analyzed to design machine learning decision support tools, and they make several recommendations for designing machine learning tools for health care.

They also demonstrate how the latest trends in explainable AI may be inappropriate for clinical environments and analyze paths toward designing these tools for real-world medical systems. This work is an initial step in a broader research agenda ascertaining how the new wave of intelligent systems must account for the complexity of medical work.

For deeper insights, check out the paper at the link below:
https://arxiv.org/abs/2102.00593

GOOD LUCK EXPLORING this paper!!!


@steem.history I humbly appreciate it. I'll go through those links :)

Excellent article, and I am sure your blogs will give some thoughts and inspiration to SIZ members in the coming days.

You are warmly welcome to the SIZ Community.

thanks a bunch, @cryptokraze :)
Much Appreciated!!!

No doubt machines help us learn and decide, but machine knowledge has its limits compared to the human brain and its ways of thinking.

Thank you, Hamdaan, for acknowledging this little effort! I agree with your perspective. But AI's impact on the future is undeniable. Big time!!!