Finding reliable and trustworthy sources of information, and debunking misinformation and disinformation when necessary, remains a growing challenge that health professionals face routinely. At one of my previous jobs, I received a parting-gift mug that said, “Please don’t confuse your Google search with my medical degree,” a sarcastic quip from a time when physicians and other health professionals feared the consequences of patients looking up health information via internet search. That time has long passed, and we find ourselves at a new crossroads, wondering: with generative artificial intelligence (AI) tools like ChatGPT, how will we face the challenges of the confabulations and hallucinations such AI presents? How will this new tool, which patients can readily use to generate answers to their pressing medical questions, (mis)lead them in their quest for health and wellness?

On this specific issue of chatbots driven by generative AI, I’m not too concerned that our routine physician work and interactions with patients will change. For at least a couple of decades, we have been acclimating to the reality of information democratization via the internet and the ability of people to search for whatever they wish to find there. Adjusting how we address misinformation to cover the related problem of generative AI offering inaccurate or outright incorrect information will look quite similar. We still need to foster among patients and populations a keener critical eye toward health information sources and their reliability and credibility. Four steps offer a foundational place to start: be vigilant; make sure patients cross-reference information; verify claims; and don’t click on everything.1

Beyond the individual patient-physician encounter, as much as we in the Society of General Internal Medicine seek to address the roots of systemic issues, I also think our greatest opportunities to influence our patients’ technology use for health lie in engaging in the design of technologies built for those purposes. There is currently no way to track exactly who physician informatics professionals are or how many exist; the limited data available suggest that a large proportion of clinical informatics (CI) board-certified physicians in the United States are internists (36.6% as of 2020), but information about gender, race, and ethnicity is entirely missing.2 Limited data on CI training programs suggest that fewer than 25% of CI fellowship applicants were women (2016-2017), and in that same survey, one applicant identified as Black or African American and none identified as Hispanic or Latino.3 Regardless of formal training, understanding the ethical, legal, social, and policy issues surrounding technology applications in health care will increasingly become an essential skill in our routine practice.

Our work is cut out for us as technologies evolve, whether or not they are intended for health care and medicine. Let’s be sure we are equipped and engaged to tackle these issues as they arise, and let’s support our patients to do the same. I look forward to #SGIM23, when the SGIM presidency transitions from LeRoi Hicks to Martha Gerrity, who pens her first President’s column in this issue, as one of many venues to continue these vital discussions on our communities’ professional development.

References

  1. Mesko B. A practical guide about digital health for medical professionals. https://leanpub.com/guide-to-digital-health. Last updated March 20, 2023. Accessed April 15, 2023.
  2. Desai S, Mostaghimi A, Nambudiri VE. Clinical informatics subspecialists: Characterizing a novel evolving workforce. J Am Med Inform Assoc. 2020 Nov 1;27(11):1711-1715. doi:10.1093/jamia/ocaa173.
  3. Bell DS, Baldwin K, Bell EJ, et al. Characteristics of the National Applicant Pool for Clinical Informatics Fellowships (2016-2017). AMIA Annu Symp Proc. 2018 Dec 5;2018:225-231.
