The use of artificial intelligence to help healthcare providers handle their growing workload and improve patient outcomes increases every day. Yet the technology has had no major impact in radiology, despite being best suited to solving problems that are repetitious and consistent.

Radiology is a seemingly target-rich environment for machine learning and deep learning techniques, with archives full of previously interpreted diagnostic examinations. While much has gone right, the mainstream approach clings to long-held suppositions about the combined use of patient “big data” and maintains an almost insatiable appetite for more previously read images that can now serve as “labelled data”. Unfortunately, these methods have made very little progress. It’s time to take a fresh approach.

In the interest of brevity, only a few of the most prevalent and historically relevant suppositions are explored here. They provide a clearer picture of why most of the energy behind artificial intelligence in radiology has been rather unproductive to date. For example, with the advent of the electronic health record, it became possible to match outcomes data with patients’ associated medical images, which portends enhanced diagnoses and prognoses. The tie to outcomes seems so clear, and the path so obvious, that today almost every data scientist will consider moving to an employer with more data, specifically labelled data. Realistically, however, the data in the EMR may not be arranged in a way that allows such a match to be made, and the images in the archive have likely been interpreted by many different readers over many years, as best practices changed and personnel turned over.

Artificial intelligence methods are still just math at their core. When solving problems with math, such as taking a simple opinion poll, a larger sample size yields greater accuracy; of course, selection biases can increase a poll’s margin of error. So, too, radiology-focused artificial intelligence machines need to be supervised, or need very clean labelled data, in order to deliver satisfactory results. Because physicians have individual belief systems and human error imposes its own limits, big data projects that use these data sets inject a high rate of noise and error into the training of the machine. Trying to overcome this with even more data may not be the best approach.

Another example of a counter-intuitive supposition consistently applied to the science of artificial intelligence today pertains to how radiologists might use the output of a perfect intelligence machine, if one existed. Today, many intelligence machines are described as “assistive” and are designed to flag findings for physician review and possible inclusion in the radiology report. Unfortunately, these machines typically deliver their clinical findings and measurements as a derived set of medical images pushed into the PACS, which usually means the output data reach the permanent image archive before being reviewed and accepted by the physician. Even if the machine’s results are perfect every time, a new user is unlikely to trust the output blindly; that trust must be earned over time. Notably, physicians are very accustomed to being able to adjust the findings in their images, making any static measurements unacceptable. Whether pre-generated images or pre-generated reports can be adjusted, and whether they can be deleted before they are saved, becomes a key point of respecting the physician’s autonomy. If an intelligent machine is truly assistive, then the physician should be able to pick up where the machine left off, or at least reject its opinion.

The industry should reflect on the checkered past of lung CAD. Why did it struggle to obtain PMA from the FDA? Most likely because the number of studies used to train it was in the hundreds. Why did physicians’ excitement toward it cool? Probably because it never learned and improved enough to be considered an assistive productivity tool. The missed opportunity was to better utilize physician inputs and optimize its performance over time, which would have increased the sample size and simultaneously resolved the performance issues. It is hard to imagine, but today there are more than 50 lung nodule detectors from various research and corporate sources. There must be something wrong with the workflow and performance of these technologies, or they would be in far wider use.

It seems that we have a “last mile” problem today, like the internet in the 1990s. Many leading universities face this last mile problem inside their own institutions: they have intelligence machines they have developed but do not use clinically. The focus then turns to sharing these technologies, but the recipient faces the same last mile problem. A push is expected in the radiology market to compile intelligence machines into marketplaces, a problem that is technologically solvable with off-the-shelf technology and a little effort, and many marketplaces or app stores will come online. What’s missing is a suite of technologies to connect the current diagnostic interpretation systems and electronic health records to these intelligence machines, and to truly engage and delight the clinical end-user.

How can these machines be implemented in routine clinical workflow? How can a physician experiment with many different machines to choose the best one for his or her application in the context of routine clinical workflow? How is the physician’s belief system incorporated into the interactions the machines provide? Finally, how does the machine benefit from ordinary physician use to improve its output? These are all technology challenges that require a more ground-up approach; adapting current technology seems unlikely to solve them, or it would have worked already.

It’s time for a fresh approach to artificial intelligence in medicine. Presenting findings and conclusions in a format where the suggestions of many intelligence engines can be considered, and accepted or rejected by the physician in real time, provides a reward system through which the intelligence machine can improve its overall performance. Similarly, interactions with the image data and the machines’ findings during routine diagnostic interpretation can be captured for future training. This requires technology that ensures the applicable source data are processed prior to interpretation, that applicable intelligence engines are properly suggested during interpretation, and that the physician remains in control of which findings are propagated into the interpretation within the PACS environment.
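As a purely illustrative sketch of the feedback loop described above (all class and field names here are hypothetical, not from any actual product), physician accept/reject decisions on machine-generated findings could be captured as structured events for later model retraining:

```python
from dataclasses import dataclass, field
from typing import Optional
import time

# Hypothetical illustration: recording physician accept/reject decisions
# on machine suggestions so they can serve as future training signal.

@dataclass
class Finding:
    engine_id: str        # which intelligence engine produced the suggestion
    label: str            # e.g. "lung nodule"
    measurement_mm: float # machine-proposed measurement

@dataclass
class FeedbackEvent:
    finding: Finding
    accepted: bool                              # physician's real-time decision
    adjusted_measurement_mm: Optional[float] = None  # physician's adjustment, if any
    timestamp: float = field(default_factory=time.time)

class FeedbackLog:
    """Collects decisions made during interpretation for later retraining."""

    def __init__(self):
        self.events = []

    def record(self, finding, accepted, adjusted=None):
        self.events.append(FeedbackEvent(finding, accepted, adjusted))

    def acceptance_rate(self, engine_id):
        """Fraction of this engine's suggestions the physician accepted."""
        relevant = [e for e in self.events if e.finding.engine_id == engine_id]
        if not relevant:
            return None
        return sum(e.accepted for e in relevant) / len(relevant)

# Usage: the physician accepts one suggestion (with an adjustment)
# and rejects another outright.
log = FeedbackLog()
log.record(Finding("engine-A", "lung nodule", 6.2), accepted=True, adjusted=5.9)
log.record(Finding("engine-A", "lung nodule", 3.1), accepted=False)
print(log.acceptance_rate("engine-A"))  # 0.5
```

Per-engine acceptance rates like this are one simple way such a system could rank competing intelligence engines and reward the ones whose output physicians actually keep.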

The technologies required to achieve this future-state machine intelligence workflow are: 1) one or more app stores with intelligence machine content, 2) data transport and machine instantiation technologies to solve the last mile integration into routine clinical interpretations, and 3) a viewer or embeddable viewing component that allows interaction with a plurality of machines, findings, and observed user behaviors.

To meet these needs, TeraRecon acquired McCoy Medical Technologies in June of 2017, now called Envoy AI, with a focus on establishing a full-service platform approach to integrating artificial intelligence machines into current interpretation workflows. TeraRecon also invested heavily in a new research and development office in Research Triangle Park, North Carolina in 2016 to house the development of its Northstar™ AI-enabled viewer. Both products will debut at RSNA 2017, working together via published open interfaces and available as stand-alone solutions designed to help health providers, PACS and EMR companies rapidly integrate and embed these intelligence technologies into their current products and installed-base customer workflows.


Topics: health IT