AI: The Past, Present and Future

January 22, 2018

From the looks of each morning’s newsfeed, it’s evident that the buzz around AI has turned into productive dialogue and won’t be slowing down anytime soon. As many of us experienced, hundreds of companies (including TeraRecon) came to RSNA (the Radiological Society of North America’s Annual Meeting) to show off what they have contributed to the industry since this time last year. Talk of radiation dose management and 3D printing was abundant, but nothing was so pervasive as Artificial Intelligence (AI). Every booth seemed to have something to contribute, whether it was the development of eclectic algorithms or app-store-like platforms that did little more than provide quicker access to algorithms a company was already selling.

The somewhat nauseating pace of technological growth in this area can make it confusing and difficult for an institution to decide whether to pick up this platform or that algorithm, and whether it is even worth the time to integrate those technologies into its PACS. With all this noise, it’s helpful to keep time in perspective relative to the growth of AI as a tool. AI has been built upon since the Church-Turing thesis was first published in 1937. That thesis postulated three classes of computability and has since been continually strengthened by other works.3

It wasn’t until 1956 that the first AI program, written by Arthur Samuel, was able to play and potentially beat an amateur checkers player.6 After this major breakthrough, academic research into artificial intelligence, machine learning, and now deep learning started making significant headway.2 A slew of new research programs were launched and meticulously groomed, and in 1997, 41 years after the advent of Arthur Samuel’s checkers AI, IBM built Deep Blue, a chess machine that defeated world chess champion Garry Kasparov.5 Fourteen years after Deep Blue, in 2011, IBM’s Watson won against sitting “Jeopardy!” champions.1

The reason for giving this brief account of AI in games is that games follow procedures that can be defined simply. Yet even in a simple game such as checkers, there is strategy, not just a set of possible moves. Sure, you can define movement on the board, but if that movement is random rather than planned, the computer will be easily beaten. Now take into account that there are 500 billion billion (that’s 5 × 10^20) possible board combinations.4 In theory, if the computer could be programmed to know all possible combinations, it would be able to discern the winning combinations each time the human player made a move. The problem is that the computing power needed to instantaneously know all possible combinations far exceeded the capabilities of the time, and so learning from statistical inference became the optimal route for training computers. Using this method, a system trained on YouTube videos could identify when cats appeared, but that leap only happened in 2012.
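
To make that scale concrete, here is a quick back-of-the-envelope sketch in Python. Only the 500-billion-billion figure comes from the article’s source; the one-billion-positions-per-second evaluation rate is purely an assumption for illustration.

```python
# Rough arithmetic: how long would it take to enumerate every checkers position
# at an assumed rate of one billion positions per second?
positions = 5e20          # ~500 billion billion board combinations (source 4)
rate_per_second = 1e9     # assumed evaluation speed, for illustration only

seconds = positions / rate_per_second
years = seconds / (60 * 60 * 24 * 365)

print(f"{seconds:.1e} seconds, or roughly {years:,.0f} years")
# -> 5.0e+11 seconds, or roughly 15,855 years
```

Even with generous assumptions about hardware, brute-force enumeration is hopeless, which is exactly why statistical learning took over.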

This last point is perhaps the most interesting. While games are an interesting starting model for computer learning, they are very constrained, given that a game has specific rules and directions. Pattern recognition thus took the stage and has been the focus of the latest academic research and commercial expansion. Pattern-recognition algorithms have enabled companies like Google to learn more about us, our behavior, and even our beliefs, and this applies not only to individuals but to populations as well.

Steering back toward the medical industry, it’s almost obvious that early AI would not have been helpful to physicians, given that those systems could only recognize pre-programmed rules and directions. With further development in pattern recognition, we can set aside some of those rules and directions. You can’t tell a computer exactly where tumors will pop up, because in real life a tumor can form anywhere. What you can do, however, is teach a machine what a tumor looks like, and it can then search an image (or set of images) for matching patterns based on other tumors it has seen (and learned from).
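
As a minimal sketch of that idea, here is an illustrative Python example. It uses synthetic data and a generic scikit-learn classifier; it is not TeraRecon’s or EnvoyAI’s actual method, and a real system would be trained on curated clinical images rather than random noise.

```python
# Pattern-based detection in miniature: train a classifier on labeled image
# patches ("tumor-like" vs. "normal"), then score a new, unseen patch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# 200 synthetic 32x32 patches, flattened into feature vectors.
normal = rng.normal(loc=0.3, scale=0.1, size=(100, 32 * 32))
tumor = rng.normal(loc=0.6, scale=0.1, size=(100, 32 * 32))  # brighter "lesions"
X = np.vstack([normal, tumor])
y = np.array([0] * 100 + [1] * 100)  # 0 = normal, 1 = tumor-like

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# The model generalizes from examples it has seen, not from hand-written rules
# about where a tumor must appear.
new_patch = rng.normal(loc=0.58, scale=0.1, size=(1, 32 * 32))
print("probability tumor-like:", model.predict_proba(new_patch)[0, 1])
```

The point of the sketch is not the particular model but the workflow: the machine learns what the target pattern looks like from examples and then scores new images against it.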

The industry seems to be in the middle stages of understanding the full extent of pattern recognition, which is why this year’s RSNA felt like a madhouse of talks on AI. Hundreds of algorithms and store pages were displayed with claims of saving time or shifting priorities (think of the tools now available to quantify calcification of the coronary arteries), but everyone seemed to have drastically different approaches to marketing and selling to physicians and health systems. This kind of cacophony happens when a budding technology is being seized upon before there is a road map for how it will evolve over the next year, five years, or beyond.

Considering the history of AI and understanding the gaps in its clinical application in healthcare, TeraRecon began investing and innovating in this arena many years ago, with an approach that keeps the physician in control the entire time. Positioning AI as a tool in the physician’s workflow is the real future of this technology. With AI’s help, physicians can significantly reduce the amount of time spent reading one aspect of a study and move on to work that is higher priority and less tedious.

TeraRecon has carried its momentum and lessons learned from advanced volume visualization into its investment in EnvoyAI and the development of NorthStar, with a specific focus on the end-user perspective. There is a real and powerful clinical interest in AI, a missing link in the current workflow, and a growing need for algorithm distribution.

EnvoyAI’s platform for algorithms is truly innovative. It has been called “Amazon for AI,” and what’s special is that when developers put their algorithms on the platform, they keep the intellectual property and charge what they want for its use. Physicians have instant access to a library of algorithms through the EnvoyAI Exchange and can run the platform in a localized or cloud-based model. This fair play and widespread distribution make the EnvoyAI platform unlike any other medical platform. The only thing missing from the formula was a means of interacting with algorithms in the clinical environment. With TeraRecon’s NorthStar AI-enablement viewer and the EnvoyAI platform bringing new and powerful clinical tools to the table, there is finally a way to bring cutting-edge AI directly into established workflows.

 

Sources

  1. Best J. IBM Watson: The inside story of how the Jeopardy-winning supercomputer was born, and what it wants to do next. TechRepublic. https://www.techrepublic.com/article/ibm-watson-the-inside-story-of-how-the-jeopardy-winning-supercomputer-was-born-and-what-it-wants-to-do-next/. Accessed December 13, 2017.
  2. Arthur Lee Samuel. Computer Pioneers - Arthur Lee Samuel. http://history.computer.org/pioneers/samuel.html. Accessed December 13, 2017.
  3. Copeland BJ. The Church-Turing Thesis. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/church-turing/. Published November 10, 2017. Accessed December 13, 2017.
  4. Hirshon B. Checkers Solved. Science NetLinks. http://sciencenetlinks.com/science-news/science-updates/checkers-solved/. Accessed December 13, 2017.
  5. Swearingen J. Why Deep Blue Beating Garry Kasparov Wasn't the Beginning of the End of the Human Race. Popular Mechanics. http://www.popularmechanics.com/technology/apps/a19790/what-deep-blue-beating-garry-kasparov-reveals-about-todays-artificial-intelligence-panic/. Published November 14, 2017. Accessed December 13, 2017.
  6. The IBM 700 Series. IBM100 - The IBM 700 Series. http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/ibm700series/impacts/. Accessed December 13, 2017. (Gives the date on which Arthur Samuel's checkers-playing program was publicly demonstrated.)
