INTERNIST-I (or INTERNIST-1[1]) was a broad-based computer-assisted diagnostic tool developed in the early 1970s at the University of Pittsburgh as an educational experiment. The INTERNIST system was designed primarily by AI pioneer and computer scientist Harry Pople to capture the diagnostic expertise of Jack D. Myers, chairman of internal medicine at the University of Pittsburgh School of Medicine. The Division of Research Resources and the National Library of Medicine funded INTERNIST-I. Other major collaborators on the project included Randolph A. Miller and Kenneth "Casey" Quayle, who did much of the implementation of INTERNIST and its successors.
Development
INTERNIST-I was the successor to the earlier DIALOG system. Over a decade, INTERNIST-I played a central role in the Pittsburgh course titled "The Logic of Problem-Solving in Clinical Diagnosis." Fourth-year medical students in the course collaborated with faculty experts to handle much of the data entry and system updates. These students encoded the findings of standard clinicopathological reports. By 1982, the INTERNIST-I project represented fifteen person-years of work and, by some reports, covered 70–80% of all the possible diagnoses in internal medicine.
Data input into the system by operators included signs and symptoms, laboratory results, and other items of patient history. The principal investigators on INTERNIST-I did not follow other medical expert system designers in adopting Bayesian statistical models or pattern recognition. This was because, as Myers explained, “The method used by physicians to arrive at diagnoses requires complex information processing which bears little resemblance to the statistical manipulations of most computer-based systems.” INTERNIST-I instead used a powerful ranking algorithm to reach diagnoses in the domain of internal medicine. The heuristic rules that drove INTERNIST-I relied on a partitioning algorithm to create problem areas, and on exclusion functions to eliminate diagnostic possibilities.
These rules, in turn, produced a list of ranked diagnoses based on the disease profiles stored in the system’s memory. When the system was unable to settle on a diagnosis, it asked questions or recommended further tests or observations to resolve the uncertainty. INTERNIST-I worked best when only a single disease was present in the patient; it handled complex cases, in which more than one disease was present, poorly. This was because the system relied exclusively on hierarchical or taxonomic decision-tree logic, which linked each disease profile to only one “parent” disease class.
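The sketch below is only a rough, hypothetical illustration of this general score-and-rank idea, written in Python. The disease profiles, findings, weights, and exclusion rules are invented for the example and do not reproduce INTERNIST-I's actual knowledge base or scoring scheme (described in Miller et al., 1982, listed in the references).

```python
# Illustrative sketch only: a toy scoring-and-ranking loop in the spirit of
# INTERNIST-I's approach, not the actual algorithm or knowledge base.
# All disease profiles, weights, and findings below are invented.

# Each (hypothetical) disease profile maps findings to a crude weight:
# how strongly the finding supports the disease when present.
DISEASE_PROFILES = {
    "disease_A": {"fever": 3, "jaundice": 5, "fatigue": 1},
    "disease_B": {"fever": 2, "cough": 4, "fatigue": 2},
}

# Findings that, if reported absent, rule a disease out entirely
# (a stand-in for INTERNIST-I's exclusion functions).
REQUIRED_FINDINGS = {
    "disease_A": {"jaundice"},
    "disease_B": set(),
}


def rank_diagnoses(present, absent):
    """Score each candidate disease against the reported findings and
    return candidates ranked from most to least plausible."""
    scores = {}
    for disease, profile in DISEASE_PROFILES.items():
        # Exclusion step: a required finding reported absent eliminates the disease.
        if REQUIRED_FINDINGS[disease] & absent:
            continue
        # Credit findings the profile explains; penalize findings the profile
        # predicts but that are reported absent.
        support = sum(w for f, w in profile.items() if f in present)
        penalty = sum(w for f, w in profile.items() if f in absent)
        scores[disease] = support - penalty
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)


if __name__ == "__main__":
    ranked = rank_diagnoses(present={"fever", "fatigue"}, absent={"cough"})
    print(ranked)  # e.g. [('disease_A', 4), ('disease_B', 0)]
```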
Use of INTERNIST-I
By the late 1970s, INTERNIST-I was in experimental use as a consultant program and educational “quizmaster” at Presbyterian-University Hospital in Pittsburgh. INTERNIST-I’s designers hoped that the system could one day become useful in remote environments—rural areas, outer space, and foreign military bases, for instance—where experts were in short supply or unavailable. Still, physicians and paramedics wanting to use INTERNIST-I found the training period lengthy and the interface unwieldy. An average consultation with INTERNIST-I took about thirty to ninety minutes, too long for most clinics. To meet this challenge, researchers at nearby Carnegie Mellon University wrote a program called ZOG that allowed those unfamiliar with the system to master it more rapidly. INTERNIST-I never moved beyond its original status as a research tool. A failed attempt in the mid-1970s to extract “synthetic” case studies of “artificial patients” from the system’s knowledge base, for example, starkly demonstrated its “shallowness” in practice.
INTERNIST-I and QMR
In the first version of INTERNIST-I (completed in 1974) the computer program “treated the physician as unable to solve a diagnostic problem,” or as a “passive observer” who merely performed data entry. Miller and his collaborators came to see this function as a liability in the 1980s, referring to INTERNIST-I derisively as an example of the outmoded “Greek Oracle” model for medical expert systems. In the mid-1980s INTERNIST-I was succeeded by a powerful microcomputer-based consultant developed at the University of Pittsburgh called Quick Medical Reference (QMR). QMR, meant to rectify the technical and philosophical deficiencies of INTERNIST-I, remained dependent on many of the same algorithms developed for INTERNIST-I, and the two systems are often referred to together as INTERNIST-I/QMR. The main competitors to INTERNIST-I included CASNET, MYCIN, and PIP.
References
- Pople, H.E., “Heuristic Methods for Imposing Structure on Ill-Structured Problems: The Structuring of Medical Diagnostics,” in Artificial Intelligence in Medicine, ed. Peter Szolovits (Westview Press, 1982).
- Gregory Freiherr, The Seeds of Artificial Intelligence: SUMEX-AIM (NIH Publication 80-2071, Washington, D.C.: National Institutes of Health, Division of Research Resources, 1979).
- Randolph A. Miller, et al., “INTERNIST-1: An Experimental Computer-Based Diagnostic Consultant for General Internal Medicine,” New England Journal of Medicine 307 (August 19, 1982): 468-76.
- Jack D. Myers, “The Background of INTERNIST-I and QMR,” in A History of Medical Informatics, eds. Bruce I. Blum and Karen Duncan (New York: ACM Press, 1990), 427-33.
- Jack D. Myers, et al., “INTERNIST: Can Artificial Intelligence Help?” in Clinical Decisions and Laboratory Use, eds. Donald P. Connelly, et al., (Minneapolis: University of Minnesota Press, 1982), 251-69.
- Harry E. Pople, Jr., “Presentation of the INTERNIST System,” in Proceedings of the AIM Workshop (New Brunswick, N.J.: Rutgers University, 1976).
- D. A. Wolfram, “An appraisal of INTERNIST-I,” Artificial Intelligence in Medicine 7, no. 2 (1995): 93–116. doi:10.1016/0933-3657(94)00028-Q
1. Kolata, Gina (17 November 2024). "A.I. Chatbots Defeated Doctors at Diagnosing Illness". The New York Times. Retrieved 20 November 2024.