RSNA Spotlight Discussion Recap: The Current State of AI

September 24, 2019

Grab a hot cup of coffee and make time for this in your schedule because you will NOT want to miss this discussion!

Last month at a special RSNA Spotlight event on The Current State of AI, Dr. Eliot Siegel, a renowned physician advocate for medical imaging AI and Professor of Medicine at the University of Maryland, and Jeff Sorenson, CEO of AI technology leader TeraRecon and EnvoyAI, discussed the issues and opportunities surrounding the current state of AI applications. This lively discussion covered some of the hottest AI topics, including:

  • AI workflows and the need for an interoperability layer
  • Approaches to leveraging labeled data to create useful AI
  • A snapshot of real AI use cases and implementations happening today

Below we recap some of the key discussion points from the event, but we highly recommend listening to the entire discussion on the Beyond the Screen podcast or watching the video recording. You will hear both a physician's and an industry innovator's perspective on what is currently going on in the AI industry.


WATCH THE FULL VIDEO RECORDING NOW!
Artificial Intelligence Workflows and the Need for an Interoperability Layer

There is a lot of discussion around where AI should fit within your workflow. Should AI live in your modality, your report, or your PACS? Jeff and Eliot talk through this confusion and the broken workflows that result when PACS demonstrations leave out reporting, and vice versa. They explore EnvoyAI, an interoperability layer that lets you interoperate consistently with AI results across any system and interact with any AI, regardless of which vendor produced it.

Because an interoperability layer is needed, it is important to understand how to distinguish one vendor's algorithm from another and whether they can work together or be combined. Jeff explains that a proper AI platform lets the outputs of one algorithm become the inputs of another. From there, you want to pick the best findings from the best AI algorithms, present them in a single display, and understand the confidence behind each one. Many things need to come together, and that is where an AI results explorer comes in.
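To make the chaining idea concrete, here is a minimal Python sketch of passing one algorithm's output to another and selecting the highest-confidence findings for a single combined display. All class, function, and vendor names here are hypothetical illustrations, not the EnvoyAI API.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    label: str         # e.g. "lung nodule"
    confidence: float  # 0.0 - 1.0, as reported by the algorithm
    source: str        # which algorithm produced it

def detect_nodules(study_id: str) -> list[Finding]:
    # Hypothetical first-stage detector; in practice this would call a vendor algorithm.
    return [Finding("lung nodule", 0.91, "vendor_a_detector"),
            Finding("lung nodule", 0.42, "vendor_a_detector")]

def characterize(findings: list[Finding]) -> list[Finding]:
    # Hypothetical second-stage algorithm that consumes the first algorithm's output.
    return [Finding(f.label + " (suspicious)", min(f.confidence + 0.05, 1.0), "vendor_b_classifier")
            for f in findings if f.confidence >= 0.5]

def best_findings(findings: list[Finding], threshold: float = 0.5) -> list[Finding]:
    # Keep only confident results and sort them for a single, combined display.
    return sorted((f for f in findings if f.confidence >= threshold),
                  key=lambda f: f.confidence, reverse=True)

stage_one = detect_nodules("study-123")
stage_two = characterize(stage_one)  # outputs of one algorithm become inputs to the next
for f in best_findings(stage_one + stage_two):
    print(f"{f.label}: {f.confidence:.2f} ({f.source})")
```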

Many vendors focused only on making the buying process simple. Yes, we all needed to consolidate how we purchased AI from the vast world of algorithms, but much more than that, we needed a way to make our systems work seamlessly together, keeping the physician in their native workflow and giving them a way to interact with the algorithm results. There are currently more than 20 FDA-cleared algorithms out there, and if you count the deterministic ones, there are thousands, but you can't put them in your workflow without a big IT project.

The big message here is that you have to take the content and make it compatible, and the best example of that is Apple. Apple has a developer API, and you have to use it to make your content compatible and to publish an app on the Apple App Store. We used the same type of process and built a developer portal that helps developers wrap and run their content. Then what you need is a way to interoperate consistently with the AI results across any system.
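As a rough illustration of what "wrapping" an algorithm might involve, a developer could register a manifest describing their container's inputs, outputs, and compute needs so a platform knows how to run it. This is a hypothetical sketch; the field names and validation logic are assumptions, not the actual developer-portal format.

```python
# Hypothetical wrapper manifest a developer might register through a portal.
# Field names are illustrative only.
algorithm_manifest = {
    "name": "lung-nodule-detector",
    "version": "1.2.0",
    "container_image": "registry.example.com/vendor-a/nodule-detector:1.2.0",
    "inputs": {"series": "DICOM CT chest"},                   # what the platform feeds the container
    "outputs": {"findings": "JSON list with confidence scores"},
    "compute": {"gpu": True, "min_memory_gb": 8},
}

def validate_manifest(manifest: dict) -> None:
    # Minimal check that required fields are present before publishing.
    required = {"name", "version", "container_image", "inputs", "outputs"}
    missing = required - manifest.keys()
    if missing:
        raise ValueError(f"Manifest is missing fields: {sorted(missing)}")

validate_manifest(algorithm_manifest)
print(f"{algorithm_manifest['name']} {algorithm_manifest['version']} ready to publish")
```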


Cloud vs. Local: Consuming Algorithms Now and In the Future


The EnvoyAI platform can be deployed for local inferencing, grabbing the algorithms and running them on a local GPU or CPU appliance, or the algorithms can be run in the cloud. Approximately 80% of our customers start out thinking they must deploy locally because of IT restrictions, but in reality, 80% of deployments end up running in the cloud. The cloud is often the final choice because it makes it much easier to access a wide range of algorithms. In the US, people are rapidly coming to understand that this is a manageable problem and that clouds are secure; once that is understood, a cloud solution is very acceptable.
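A simplified way to picture the local-versus-cloud choice is a deployment configuration that routes inference requests either to an on-premises GPU appliance or to a cloud endpoint. The configuration fields, endpoint URL, and routing function below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class DeploymentConfig:
    mode: str                    # "local" or "cloud"
    local_device: str = "cuda:0"                                   # local GPU/CPU appliance
    cloud_endpoint: str = "https://inference.example.com/v1/run"   # placeholder URL

def run_inference(config: DeploymentConfig, study_id: str, algorithm: str) -> str:
    # Route the request based on the site's deployment choice.
    if config.mode == "local":
        return f"Running {algorithm} on {config.local_device} for {study_id}"
    elif config.mode == "cloud":
        return f"POST {config.cloud_endpoint} algorithm={algorithm} study={study_id}"
    raise ValueError(f"Unknown deployment mode: {config.mode}")

# A site might start local and later move the same workflow to the cloud.
print(run_inference(DeploymentConfig(mode="local"), "study-123", "lung-nodule-detector"))
print(run_inference(DeploymentConfig(mode="cloud"), "study-123", "lung-nodule-detector"))
```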


Labeled Data to Create Useful AI

The FDA has shown that they want to make sure the product is labeled correctly; if you claim that a certain algorithm looks for certain things, the process isn't onerous, you just need the evidence to prove it. The real missing link has been the user interaction database. With an AI Results Explorer, you can interact with, accept, and reject findings, and if you track that by user, you can start to map the physician's belief system. But it's a big data problem.
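One way to picture the user interaction database being described is a simple record of each accept or reject decision, keyed by physician. The schema and names below are hypothetical, meant only to show the kind of data that could map a physician's belief system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class FindingInteraction:
    user_id: str        # the reading physician
    study_id: str
    algorithm: str
    finding_label: str
    confidence: float   # confidence reported by the algorithm
    action: str         # "accepted" or "rejected"
    timestamp: datetime

interactions = [
    FindingInteraction("dr_smith", "study-123", "lung-nodule-detector",
                       "lung nodule", 0.91, "accepted", datetime.now(timezone.utc)),
    FindingInteraction("dr_smith", "study-124", "lung-nodule-detector",
                       "lung nodule", 0.48, "rejected", datetime.now(timezone.utc)),
]

def acceptance_rate(records: list[FindingInteraction], user_id: str) -> float:
    # Fraction of findings a given physician accepted -- a crude first look at their "belief system".
    mine = [r for r in records if r.user_id == user_id]
    return sum(r.action == "accepted" for r in mine) / len(mine) if mine else 0.0

print(f"dr_smith acceptance rate: {acceptance_rate(interactions, 'dr_smith'):.0%}")
```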

At this point, we are looking at machine learning on structured, database-style data rather than deep learning on images. With machine learning working on the user interaction data, we can start to understand whether the product is showing physicians things they do not actually consider important. This has been the problem with CAD: every physician had their own understanding of what they wanted, but the product did the same thing every time, and that was maddening. Tailoring results to the physician's belief system is why we embarked on a platform approach and built the NorthStar AI Results Explorer, specifically to solve the problem of not being able to bring your belief system into your work.

Listen or watch the full discussion:

Listen and subscribe to our podcast from your mobile device:
Via Apple Podcasts | Via Google Play | Via Spotify

WATCH THE FULL VIDEO RECORDING NOW!

Have a question about today’s episode or want to join the discussion about cutting-edge and innovative technologies in the advanced visualization space? Email us at info@terarecon.com.

This podcast is brought to you by TeraRecon, the leader in advanced visualization, image sharing, post-processing, and artificial intelligence solutions.
