AI in ultrasound imaging: Part 2 – Can Neuro be the next big thing?

14 January 2025 • Publications

V. Hingot, Ph.D., CTO of Resolve Stroke.

Artificial Intelligence (AI) is revolutionizing radiology, with its impact already evident in automating measurements, providing real-time guidance, and enhancing image interpretation. As AI technology continues to evolve, the exploration of richer and more expansive datasets opens up opportunities for even greater imaging capabilities.

Of the roughly 700 FDA-cleared AI products in medical imaging, most are based on CT, mammography, or MRI images, and fewer than 10% target ultrasound.

This disparity comes from the inherently qualitative nature of ultrasound practice, which relies on quick bedside assessments. Data storage and labeling are time-consuming tasks that many clinicians prefer to avoid. Moreover, the variability in image quality across different hardware and user techniques makes it all the more difficult for AI models to interpret the data effectively.

Ultrasound also underperforms in certain applications due to its inherent limitations. Ultrasound waves do not propagate well through bone or gas, which makes it challenging to produce high-quality images of the brain, lungs, and bowels. Consequently, neuroimaging — a significant application for CT and MRI — is virtually absent in ultrasound practice.

Since neuroimaging is often performed in time-sensitive situations, AI products that assist in decision-making and shorten the time to diagnosis have demonstrated some of the greatest clinical value across radiology solutions. This explains why companies like Viz.ai and RapidAI are among the few big startups in a space dominated by large manufacturers.

AI’s success in medical imaging comes from its ability to identify features within complex datasets.

Convolutional Neural Networks (CNNs) are highly effective at detecting subtle patterns, even across broad contexts and in challenging imaging conditions. By training on large datasets, AI excels at distinguishing normal from abnormal findings, reducing variability, improving reproducibility, and ensuring faster, more consistent results.
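
To make this concrete, here is a minimal sketch of the kind of CNN classifier that underlies normal-versus-abnormal triage. The architecture, input size, and labels are illustrative assumptions and do not describe any of the cleared products mentioned in this article.

```python
import torch
import torch.nn as nn

class UltrasoundClassifier(nn.Module):
    """Tiny CNN that maps a grayscale B-mode frame to normal/abnormal scores."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # pools to 1x1, so any input size works
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (batch, 1, H, W) B-mode frames
        return self.head(self.features(x).flatten(1))

model = UltrasoundClassifier()
frame = torch.randn(1, 1, 224, 224)           # one simulated grayscale frame
logits = model(frame)                         # unnormalized normal vs. abnormal scores
```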

Accurate probe placement is critical for obtaining high-quality images but can be particularly challenging for less experienced practitioners. AI solutions like Caption Guidance, Sonio AI, Heart Focus, and ScanNav Assist offer real-time feedback, enabling clinicians to achieve precise positioning. This not only reduces variability between operators but also accelerates image acquisition and enhances diagnostic accuracy.

Traditionally, ultrasound measurements have been labor-intensive and prone to variability. AI tools like EchoGo and LVivo automate tasks such as cardiac function analysis, significantly improving efficiency and consistency. By minimizing human error, these technologies allow clinicians to dedicate more time to interpretation and decision-making while ensuring reliable results across operators.

Once exclusive to high-end ultrasound systems from large manufacturers, these AI-driven capabilities are now increasingly available on affordable point-of-care devices. This democratization is transforming accessibility and reshaping the ultrasound landscape.

AI is well adapted to fields where swift and precise imaging is critical, and standardized measurements are central to practice.

Two primary examples are cardiology, with tasks like measuring cardiac output, and obstetrics, where fetal biometry is used to monitor growth. Integrating AI into these procedures streamlines workflows, reducing the required technical skill and the workload while accelerating the overall process.

Brain imaging has similar clinical needs but remains hard to address with today’s ultrasound technologies. The rigidity of the skull and its acoustic properties interfere with the transmission of ultrasound waves, causing high attenuation and poor image quality.

Additionally, variability in image quality makes it harder for AI algorithms to process and interpret brain ultrasound data effectively. Until we can change this, brain imaging will continue to rely on more complex and less accessible methods like MRI and CT.

To unlock AI’s potential in ultrasound brain imaging, the key is to dig into the raw channel data.

Raw channel data is the direct output from the ultrasound sensor, capturing unprocessed echoes reflected from the tissue. These data are typically an order of magnitude larger than the images displayed to users and are usually discarded. In doing so, valuable information about tissue properties is lost, along with the potential to significantly enhance the performance of ultrasound systems.
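
As a rough illustration, the back-of-the-envelope sketch below compares the size of one frame of raw channel data to the B-mode image eventually shown on screen. Every acquisition parameter here is an illustrative assumption rather than the specification of any particular system.

```python
# Illustrative acquisition parameters (assumptions, not a product specification)
n_elements       = 128      # receive channels on the probe
n_samples        = 4096     # time samples per channel per transmit event
n_transmits      = 16       # transmit events compounded into one frame
bytes_per_sample = 2        # 16-bit ADC output

raw_per_frame   = n_elements * n_samples * n_transmits * bytes_per_sample
image_per_frame = 512 * 512 * 1          # 8-bit grayscale B-mode image

print(f"raw channel data: {raw_per_frame / 1e6:.1f} MB per frame")
print(f"displayed image : {image_per_frame / 1e6:.2f} MB per frame")
print(f"ratio           : ~{raw_per_frame // image_per_frame}x")
```

With these assumptions the raw data is about 64 times larger than the displayed image; the exact ratio depends on the probe, the sequence, and the display resolution.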

Software-defined systems can now stream large sets of raw data directly to the GPU, where advanced software processing and AI can be performed in real time.
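
Below is a minimal sketch of what that GPU processing can look like, assuming a single zero-degree plane-wave transmit and nearest-neighbour delay-and-sum beamforming written in PyTorch. The array geometry, sampling rate, and pixel grid are placeholder assumptions, and real pipelines add interpolation, apodization, compounding, and much more; this is a generic illustration, not any specific product's pipeline.

```python
import torch

def das_beamform(rf, elem_x, fs, c, grid_x, grid_z):
    """rf: (n_elements, n_samples) raw channel data for one 0-degree plane wave."""
    n_elem, n_samp = rf.shape
    zz, xx = torch.meshgrid(grid_z, grid_x, indexing="ij")     # pixel grid (n_z, n_x)
    t_tx = zz / c                                              # plane wave travels straight down
    dx = xx.unsqueeze(0) - elem_x.view(-1, 1, 1)               # (n_elem, n_z, n_x)
    t_rx = torch.sqrt(dx ** 2 + zz.unsqueeze(0) ** 2) / c      # pixel-to-element return path
    # Convert total time of flight to sample indices (nearest-neighbour interpolation)
    idx = torch.clamp(((t_tx + t_rx) * fs).round().long(), 0, n_samp - 1)
    # Gather the delayed samples for every element and sum them coherently
    gathered = torch.gather(rf, 1, idx.view(n_elem, -1)).view(n_elem, *zz.shape)
    return gathered.sum(dim=0)                                 # (n_z, n_x) beamformed frame

device = "cuda" if torch.cuda.is_available() else "cpu"
fs, c = 40e6, 1540.0                                           # sampling rate [Hz], sound speed [m/s]
elem_x = torch.linspace(-19e-3, 19e-3, 128, device=device)     # 128-element linear array
rf = torch.randn(128, 4096, device=device)                     # placeholder raw channel data
grid_x = torch.linspace(-19e-3, 19e-3, 256, device=device)
grid_z = torch.linspace(5e-3, 45e-3, 512, device=device)
img = das_beamform(rf, elem_x, fs, c, grid_x, grid_z)
```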

In my earlier post on software-defined ultrasound, I explored how access to raw data could unlock new possibilities for ultrasound imaging. In particular, replacing frame-by-frame processing with 4D spatio-temporal analysis pipelines allows for the capture of fine dynamics and temporal evolutions, providing significantly more accurate and comprehensive insights, especially in dynamic and complex scenarios.
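
One concrete example of this kind of spatio-temporal processing, widely used in ultrafast Doppler, is SVD clutter filtering of a whole block of frames at once (the Casorati-matrix approach), which separates slowly moving tissue from blood flow. The sketch below is a generic illustration; the block size and the number of rejected singular vectors are arbitrary assumptions.

```python
import numpy as np

def svd_clutter_filter(frames, n_reject=10):
    """frames: (n_z, n_x, n_t) beamformed data for one time block."""
    n_z, n_x, n_t = frames.shape
    casorati = frames.reshape(n_z * n_x, n_t)          # space x time matrix
    u, s, vh = np.linalg.svd(casorati, full_matrices=False)
    # The highest-energy singular vectors capture slowly varying tissue motion;
    # zeroing them keeps the faster, lower-energy blood signal.
    s_filtered = s.copy()
    s_filtered[:n_reject] = 0.0
    blood = (u * s_filtered) @ vh
    return blood.reshape(n_z, n_x, n_t)

# Example on synthetic data: a block of 200 frames covering a 128 x 128 region
block = np.random.randn(128, 128, 200)
blood_signal = svd_clutter_filter(block, n_reject=15)
power_doppler = np.mean(np.abs(blood_signal) ** 2, axis=-1)    # one perfusion map per block
```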

AI models have the potential to pick up fine patterns in vascular networks and perfusion that traditional imaging misses.

The development of software-defined ultrasound products specifically designed for the brain, incorporating quantitative markers similar to what Viz.ai and RapidAI have achieved for CT and MRI, is already a reality and could revolutionize clinical practices in neurology. Although they may not match the performance of their CT and MRI counterparts, having the option available repeatedly at the bedside could be a game changer.

Two new types of datasets are emerging in this field. The first is continuous velocity data collected from transcranial Doppler (TCD) systems. With advancements in robotic systems, such as the NG2 Intelligent Ultrasound, and sensor miniaturization, like Sonologi, continuous TCD monitoring is now feasible. The second involves analyzing raw data from software-defined systems, enabling real-time analysis of blood flow, like the implant developed by Forest Neurotech or the RS Neuro Suite developed by Resolve Stroke.
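
As a sketch of the first kind of dataset, the snippet below computes two standard markers from a continuous TCD velocity envelope on a per-beat basis: the mean flow velocity and Gosling's pulsatility index. The fixed-length beat segmentation and the synthetic trace are simplifying assumptions.

```python
import numpy as np

def beat_markers(velocity, fs, beats_per_min=60):
    """velocity: 1-D envelope of blood flow velocity [cm/s], sampled at fs [Hz]."""
    samples_per_beat = int(fs * 60 / beats_per_min)    # naive fixed-length segmentation
    n_beats = len(velocity) // samples_per_beat
    markers = []
    for b in range(n_beats):
        beat = velocity[b * samples_per_beat:(b + 1) * samples_per_beat]
        v_sys, v_dia, v_mean = beat.max(), beat.min(), beat.mean()
        pi = (v_sys - v_dia) / v_mean                   # Gosling pulsatility index
        markers.append((v_mean, pi))
    return np.array(markers)                            # (n_beats, 2): mean velocity, PI

# Synthetic 10-second trace at 100 Hz, pulsing around 60 cm/s
t = np.arange(0, 10, 0.01)
trace = 60 + 25 * np.maximum(np.sin(2 * np.pi * 1.0 * t), 0)
print(beat_markers(trace, fs=100).mean(axis=0))         # average MFV and PI over the trace
```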

One key application is the rapid detection of large vessel occlusions at the patient’s bedside or in acute care settings, which would be transformative for the management of brain injuries and stroke. Such a capability could streamline workflows, especially in guiding patients more efficiently toward mechanical thrombectomy. In intensive care units, tracking intracranial pressure, vasospasm, or other life-threatening complications could offer renewed hope for both patients and clinicians.

In the future, having a way to characterize the microvasculature deep inside the brain could play a pivotal role in understanding how aging and dementia affect cognitive functions. This breakthrough could not only enhance early diagnosis but also pave the way for more targeted and effective treatments, improving outcomes for millions of patients worldwide.

Our Vision

We believe that focusing on software-based technologies to improve ultrasound neuroimaging offers a promising path forward. While AI has excelled in fields like echocardiography and obstetrics, the neurotech space remains underexplored. By applying AI and data processing to brain imaging, we can unlock powerful insights into cerebral blood flow and neurological health, significantly impacting how brain conditions are understood and managed.