RapidRead Technology

  • Artificial Intelligence (AI) is undeniably a part of our everyday lives. As veterinarians, we have a responsibility to educate ourselves about AI and the impact it will have on our practices and our patients’ care.
  • This article explains machine learning in the context of veterinary imaging diagnostics and reviews how Antech Imaging Services (AIS) is uniquely positioned to develop a strong and accurate AI product.

Article written by Diane U. Wilson, DVM, DACVR (August 2023, updated March 2024)

Artificial Intelligence at Antech Imaging Services

We think we know about artificial intelligence (AI) and machine learning (ML). After all, we encounter them every day. Artificial intelligence can be convenient when it tailors news topics to our interests on our electronic devices. We might find it annoying, or even an invasion of privacy, when it targets marketing ads to our most common internet searches. We are fascinated when our Alexa, Cortana, or Google device can seemingly hold a conversation with us. We can be excited at the prospect of how AI might improve our quality of life and, at the same time, concerned that AI may someday achieve consciousness and begin making catastrophic decisions, as in some Hollywood movie scenario. In preparing to further interact with AI in veterinary medicine, we need to resolve this conflict between optimism and pessimism. To do that, we must first understand the difference between generative and discriminative machine learning.

Machine Learning

In generative machine learning models (or algorithms), the machine can create solutions it was not explicitly programmed to produce. One example is the chatbot ChatGPT, which can learn, create, and evolve conversations as it is being used. In discriminative machine learning models, by contrast, the machine resolves information only into the classifications for which it is programmed. That is, it identifies and categorizes only what it was initially trained to identify and categorize; a discriminative model cannot evolve into something more. Currently, AIS uses only discriminative machine learning models. (Figure 1)
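
To make the distinction concrete, the minimal Python sketch below shows why a discriminative classifier can never produce a category it was not trained on. The label set, model, and feature sizes are hypothetical illustrations, not AIS code.

    # A discriminative classifier can only map an input to one of the
    # classes fixed at training time. Labels and model are hypothetical.
    import torch
    import torch.nn as nn

    FINDINGS = ["normal", "bronchial_pattern", "diskospondylitis"]  # fixed label set

    class FindingClassifier(nn.Module):
        def __init__(self, num_features: int, num_classes: int):
            super().__init__()
            self.layer = nn.Linear(num_features, num_classes)

        def forward(self, x):
            # One score per *known* class; the model cannot output a
            # class it was never trained on.
            return self.layer(x)

    model = FindingClassifier(num_features=512, num_classes=len(FINDINGS))
    features = torch.randn(1, 512)  # stand-in for extracted image features
    predicted = FINDINGS[model(features).argmax(dim=1).item()]
    print(predicted)  # always one of FINDINGS, never a new category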

Regardless of whether you are optimistic or pessimistic, there is already significant use of AI in veterinary medicine. As in other areas of everyday living, the use of artificial intelligence can be transparent to the user. Whether the end user is a general practitioner, veterinary technician, or specialist, it may not be evident that artificial intelligence was employed, wholly or in part, to reach an objective. Sometimes, AI is used to assist another program in workflow or to assist a specialist in providing diagnostic information. Less commonly, AI performs an entire diagnostic test without human input.

As with any new technology, AI must be respected for the infant technology that it is. As developers and users, we are responsible for educating ourselves about the capabilities and limitations of the machine. End users must understand the differences between any new tool and the traditional tools on which we currently rely. Developers must fully think through and map out the implications of the new tool. This means not only focusing on the benefits it brings, but fully weighing the harm it might do and taking responsible steps to mitigate it.
There are three major components necessary to create a strong and accurate AI tool: building AI models requires massive amounts of data, domain specialists, and data scientists. AIS and Mars are uniquely poised to bring all the necessary components together to build AI diagnostic tools that improve patient throughput, speed information to the point of care, and advance veterinary medicine for pets, clients, veterinarians, and technicians.

Artificial Intelligence is Already Present in Veterinary Medicine

Workflows are enhanced with AI every day. Artificial intelligence is tackling mundane tasks such as instantaneous measurements for vertebral heart size, linear and volumetric measurements, orienting images for viewing, and identifying and mechanically counting cells on a slide. This allows specialists, veterinarians, and technicians to focus human attention where it is needed most—on the patient.
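
As one concrete example of such a measurement, a vertebral heart size (VHS) is the sum of the cardiac long and short axes expressed in vertebral-body lengths counted caudally from T4 (the Buchanan method). The sketch below approximates the vertebral count with a mean vertebral-body length; the pixel values are hypothetical stand-ins for what an automated segmentation tool might produce.

    # Sketch of an automated vertebral heart size (VHS) calculation.
    # Inputs are hypothetical pixel measurements from a segmentation model.

    def vertebral_heart_size(long_axis_px: float,
                             short_axis_px: float,
                             mean_vertebra_px: float) -> float:
        """VHS = cardiac long axis + short axis, each expressed in
        vertebral-body lengths measured caudally from T4."""
        return (long_axis_px + short_axis_px) / mean_vertebra_px

    # Example: 610 px long axis, 450 px short axis, vertebrae averaging 105 px.
    vhs = vertebral_heart_size(610.0, 450.0, 105.0)
    print(f"VHS = {vhs:.1f} vertebrae")  # ~10.1; reference ranges are breed-dependent
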
Reports are created in seconds and sent to practitioners from several specialties, allowing the next steps in patient care to occur sooner. Machine-generated reports save practitioners time writing the medical record and further allow their attention to remain on patients and clients.
In pathology, AI recognizes and counts mitotic figures and can differentiate tissue types to assist pathologists in their workflow. In cardiology, machine learning is used to read electrocardiograms and return information to the practitioner in a matter of minutes. This is valuable for pre-anesthetic workups, where timely information helps practitioners decide whether to proceed with elective surgeries and whether special precautions are necessary in urgent surgeries. In human medicine, AI is used to monitor patients during anesthesia, and veterinary anesthesiologists have expressed interest in having AI monitor veterinary patients as well.

Machine Learning Must be Carried Out Responsibly

As stated previously, it is crucial for end users and developers alike to become educated and understand both the positive and negative implications of a given AI tool. There are three main requirements for developing strong and accurate models: massive amounts of data, data scientists, and domain experts. We will discuss these from the point of view of diagnostic imaging.

Massive amounts of data are needed to train a model on each finding. Without data, we cannot reliably assert claims of accuracy, sensitivity, and specificity. For a person, we say one is an expert at a particular imaging finding after experience with at least 500 cases of that finding. Unfortunately, a machine needs far more cases to learn.
At AIS, we have determined that we can be confident in the measured accuracy, sensitivity, and specificity of a model for a particular finding once the model has encountered four to five thousand instances of that finding. For common findings, like a pulmonary bronchial pattern, the necessary number of cases can be easy to acquire. For less common findings, such as diskospondylitis, it can take longer and require collaboration among multiple groups to gain the number of cases needed to confidently train the model.
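
A rough statistical sketch of why thousands of cases are required: the 95% confidence interval around a measured sensitivity (or specificity) narrows only with the square root of the number of cases. The numbers below are illustrative, not AIS statistics.

    # Why case counts matter: the 95% confidence interval around a
    # measured sensitivity shrinks roughly with 1/sqrt(n).
    import math

    def ci95_halfwidth(p: float, n: int) -> float:
        # Normal approximation to the binomial confidence interval.
        return 1.96 * math.sqrt(p * (1 - p) / n)

    p = 0.90  # suppose the model's measured sensitivity is 90%
    for n in (500, 4500):
        print(f"n={n}: sensitivity {p:.2f} +/- {ci95_halfwidth(p, n):.3f}")
    # n=500:  +/- 0.026 (a roughly 5-point-wide interval)
    # n=4500: +/- 0.009 (under 2 points wide)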

In addition to the many cases needed to train the model, many more cases of a particular finding are necessary to test it. After a model is deemed accurate on a particular finding and released to hospitals as a diagnostic tool, it must be periodically tested to ensure continued accuracy and that no drift has occurred. Drift occurs when there are subtle changes to the environment, such as changes or improvements in equipment over time, changes in the electronic space, or advances in technique. Each time a model for a given finding is tested, many more cases with that finding are needed to confidently assess accuracy, sensitivity, and specificity and make any necessary adjustments. If adjustments are made, still more data is needed to test those adjustments.
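
In outline, periodic drift testing amounts to re-scoring the released model on a fresh batch of labeled cases and flagging any finding whose measured accuracy has slipped below the release threshold. A minimal sketch, with hypothetical names and an assumed 95% threshold:

    # Sketch of periodic drift monitoring on a fresh labeled test batch.
    # The finding name, batch, and threshold are illustrative.

    RELEASE_THRESHOLD = 0.95  # assumed minimum acceptable accuracy per finding

    def check_for_drift(finding: str,
                        predictions: list[bool],
                        labels: list[bool]) -> bool:
        """Return True if accuracy on the new batch has drifted below threshold."""
        correct = sum(p == y for p, y in zip(predictions, labels))
        accuracy = correct / len(labels)
        if accuracy < RELEASE_THRESHOLD:
            print(f"{finding}: accuracy {accuracy:.3f} -- adjustment and retesting needed")
            return True
        return False

    # Example batch: 97 of 100 fresh cases correct -> no drift flagged.
    check_for_drift("pulmonary_bronchial_pattern",
                    predictions=[True] * 97 + [False] * 3,
                    labels=[True] * 100)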

Domain experts (board-certified specialists) are needed to train and measure the accuracy of each model. Radiologists (or pathologists, cardiologists, dentists, etc.) spend hours labeling images and offering corrected data used to train, assess, and retrain the machine. Moreover, it is necessary to include a team of domain experts so that consensus can be reached and the machine is not trained on the opinion of a single individual. Third-party investigations should be used to validate accuracy against known cases.
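
One simple way to picture consensus labeling is as a majority vote across the expert panel, with ties sent back for further review rather than guessed. A minimal sketch with hypothetical findings:

    # Sketch of consensus labeling: train on the panel's majority opinion,
    # never on a single radiologist. Votes are hypothetical.
    from collections import Counter

    def consensus_label(votes: list[str]) -> str:
        """Majority vote across the panel; ties go back for panel review."""
        (label, count), = Counter(votes).most_common(1)
        if count <= len(votes) // 2:
            return "needs_panel_review"
        return label

    print(consensus_label(["pleural_effusion", "pleural_effusion", "normal"]))
    # -> pleural_effusion (two of three experts agree)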

Data scientists form the basis of the programming team required to develop model algorithms. The best teams include data scientists with vision for the product, strong collaboration with domain experts, and knowledge of end-user needs. A diverse team with unique subspecialties (segmentation, regression, data analysis, various software) and a solid understanding of all aspects of data science is best able to tackle obstacles with creative solutions.

AI at AIS

AIS and Mars are uniquely positioned with in-house access to all three major requirements for developing a strong, accurate AI product. AIS has been offering teleradiology services since 1999. With 24 years of board-certified radiologist reports and approximately 8 billion stored images, massive amounts of data are readily available. Many of the cases have hospital follow-up and pathology reports with definitive diagnoses.

Fifteen board-certified radiologists on the AIS AI team work tirelessly to label images, provide feedback on AI evaluations, and maintain quality control on pilot study reports to ensure patient care remains paramount. The Mars Science and Diagnostics Next Generation Technologies Team includes the AIS-funded team of ten data scientists, each with extensive experience, a unique set of qualifications, and a broad understanding of machine learning.
The result is AIS’ RapidRead Radiology, which has been in pilot studies for nearly two years in the US and Europe. With the help of both Mars family and non-Mars family hospitals that participate in the pilot studies, RapidRead Radiology has developed into a robust diagnostic tool providing expert-level consultation for many findings in a matter of minutes. During the pilot studies, RapidRead has proven useful to general practitioners, specialists, and overnight and emergency care doctors. RapidRead Radiology currently evaluates for more than 50 findings in the thorax, abdomen, and limbs of dogs and cats (Figure 2), with no finding being released unless it is proven to be at least 95% as accurate as the consensus of our team of board-certified radiologists.

Clinical application of a list of findings means several different findings can be present in any given report. Each individual finding may have an accuracy of at least 95%; however, consideration must be given to the report’s overall contribution of information to the case.

AIS’ current quality control statistics demonstrate that the RapidRead tool adequately answers the clinical question associated with a given case 82% of the time when used alone, without radiologist oversight. When radiologist oversight is employed with RapidRead review, the proportion of reports that contribute significant information to a case increases to 92%. The remaining 8% of submitted cases are inappropriate for RapidRead review because the AI is not trained for the body part, species, or modality in the submitted images. (Figure 3)
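
To make the arithmetic explicit, the sketch below scales those percentages to 1,000 hypothetical submissions; the case counts are a worked illustration, not AIS data.

    # Worked breakdown of the quality-control figures, per 1,000 submissions.
    submissions = 1000
    inappropriate = int(0.08 * submissions)            # wrong body part/species/modality
    appropriate = submissions - inappropriate          # 920 cases the AI is trained for
    adequate_alone = int(0.82 * submissions)           # answered by RapidRead alone
    adequate_with_oversight = int(0.92 * submissions)  # with radiologist oversight

    print(appropriate, adequate_alone, adequate_with_oversight)  # 920 820 920
    # With oversight, essentially every appropriate case contributes
    # significant information (920 of 920).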

Radiologist accuracy has been reported at roughly 96% (a 4% error rate). For this reason, AIS implements a strong system of quality control with board-certified radiologists readily in the loop through daily quality review of submitted cases, auto-routing of emergent findings for radiologist review, and one-click submission to request a radiologist review. When the RapidRead tool is used properly, and in conjunction with robust radiologist oversight, the contribution of useful information to the case and overall accuracy approach 100%.
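
That quality-control loop can be pictured as simple routing logic; the finding names and the emergent set below are hypothetical illustrations, not the AIS rule set.

    # Sketch of report routing with radiologists in the loop.
    EMERGENT_FINDINGS = {"pneumothorax", "gastric_dilatation_volvulus"}

    def route_report(findings: set[str], review_requested: bool) -> str:
        if findings & EMERGENT_FINDINGS:
            return "auto-route to radiologist"       # emergent findings reviewed first
        if review_requested:
            return "radiologist review (one-click)"  # clinician escalation
        return "release AI report; sampled in daily quality review"

    print(route_report({"pneumothorax"}, review_requested=False))
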
Despite the high level of agreement between RapidRead and radiologist-read reports, understanding the proper use of AI for generating radiology reports remains the driving factor in how useful such a tool can be for an individual clinic. Given the similarity in appearance and ‘feel’ between a RapidRead report (Figure 4) and a report from a radiologist, it is easy to fall into the belief that RapidRead is the equivalent of a ‘mechanical radiologist’ evaluating for all possible abnormalities just as a radiologist would. This is far from the case. An artificial intelligence model will evaluate only for those findings for which it has been trained. For example, in looking at the list of findings for RapidRead (Figure 2), it is evident that pregnancy check (puppy count) is not on the list. Submitting a consult to RapidRead expecting to learn whether the patient is pregnant, or how many puppies are present, would not yield useful information. Consider the analogy of a complete blood count (CBC) versus a chemistry panel: we would not submit a CBC expecting to receive liver enzyme information, since we understand that liver enzyme levels are not part of the CBC. As noted above, approximately 8% of the cases submitted to RapidRead are inappropriate for evaluation by the machine; that is, they include a body part, species, or modality for which the machine is not yet trained.

It is also important to remember the difference between discriminative and generative AI models described earlier in this article: RapidRead is built with discriminative models, not generative ones, and is not programmed for independent learning. Therefore, submitting images of a body part or species not currently evaluated by RapidRead and expecting the tool to ‘learn’ from those images would also not be useful.
On the other hand, RapidRead is extensively trained on the findings listed in Figure 2. For example, if we want to know whether a patient has cardiac or lung changes that might contraindicate anesthesia, or whether a patient with gastrointestinal signs is obstructed, RapidRead is a very useful tool for speeding expert-level information to the point of care in a matter of minutes (versus hours to days for traditional radiologist reads).

Conclusion

Artificial intelligence has entered our daily lives at an exciting, some might even say alarming, rate. As with all new technology, there is a period of learning and adjustment. With knowledge can come understanding, often followed by assimilation. The use of AI in veterinary medicine already exists in many areas, including workflow, measurement, and segmentation, and as a diagnostic tool in specialties such as pathology, cardiology, and diagnostic imaging.
Understanding the basics of machine learning is the key to helping each of us understand its use in diagnostic imaging. Developers must understand that large amounts of data, data scientists, and domain experts are required to build a responsible and accurate AI product. End users must understand the proper use of AI tools to best benefit the patient. Only when these factors come together can we reach our end goal of bettering the lives of pets and clients with artificial intelligence.

Bibliography

“Colossus: The Forbin Project”. Universal Pictures. 1970. Retrieved from Prime Video.

Pu, Y., Apel, D.B., Wei, C. Applying Machine Learning Approaches to Evaluating Rockburst Liability: A Comparation of Generative and Discriminative Models. Pure Appl. Geophys. 176, 4503–4517 (2019). https://doi.org/10.1007/s00024-019-02197-1

Wilson, D.U., Bailey, M.Q., Craig, J. The Role of Artificial Intelligence in Clinical Imaging and Workflows. Vet Radiol Ultrasound. 63(S1), 897–902 (2022). https://doi.org/10.1111/vru.13157

Fitzke, M., et al. OncoPetNet: A Deep Learning Based AI System for Mitotic Figure Counting on H&E Stained Whole Slide Digital Images in a Large Veterinary Diagnostic Lab Setting. YouTube video. https://www.youtube.com/watch?v=y_ZZUeW7Bvg&t=26s

Singh, M., Nath, G. Artificial intelligence and anesthesia: A narrative review. Saudi J Anaesth. 16(1), 86–93 (2022). https://doi.org/10.4103/sja.sja_669_21. PMID: 35261595; PMCID: PMC8846233

van Leeuwen, K.G., de Rooij, M., Schalekamp, S., et al. How does artificial intelligence in radiology improve efficiency and health outcomes? Pediatr Radiol 52, 2087–2093 (2022). https://doi.org/10.1007/s00247-021-05114-8

Front. Hum. Neurosci. 13:213, Sec. Sensory Neuroscience (25 June 2019). https://doi.org/10.3389/fnhum.2019.00213

Cohen, J., Fischetti, A.J., Daverio, H. Veterinary radiologic error rate as determined by necropsy. Vet Radiol Ultrasound. 1–12 (2023). https://doi.org/10.1111/vru.13259
