
Learn how GE Healthcare used AWS to develop a new AI model that interprets MRIs




MRI images are complex and difficult to understand.

As a result, developers training large language models (LLMs) for MRI analysis have had to slice captured images into 2D. But this uses only a fraction of the original image, limiting the model's ability to analyze the body's internal anatomical structure. That creates problems for complex cases involving brain tumors, skeletal diseases or cardiovascular diseases.

But GE Healthcare appears to have overcome this significant hurdle, introducing the industry's first full-body 3D MRI research foundation model (FM) this year at AWS re:Invent. For the first time, models can use full 3D images of the entire body.

GE Healthcare's FM was built on AWS from the ground up (there are very few models specifically designed for medical imaging such as MRIs) and is based on more than 173,000 images from over 19,000 studies. Developers say they were able to train the model with five times less compute than previously required.

GE Healthcare has not yet commercialized the foundation model; it is still in an evolutionary research phase. An early evaluator, Mass General Brigham, is expected to begin experimenting with it soon.

"Our vision is to put these models in the hands of technical teams working in healthcare systems, giving them powerful tools for developing research and clinical applications faster, and also more cost-effectively," GE HealthCare chief AI officer Parry Bhatia told VentureBeat.

Enabling real-time analysis of complex 3D MRI data

Although this is a significant milestone, AI development and LLMs are not new territory for the company. The team has been working with advanced technologies for more than a decade, Bhatia explained.

One of its most notable offerings is AIR Recon DL, a deep learning-based reconstruction technology that allows radiologists to achieve crisp images faster. The algorithm removes noise from raw images and improves the signal-to-noise ratio, cutting scan times by up to 50%. As of 2020, 34 million patients have been scanned with AIR Recon DL.

GE Healthcare began working on its MRI FM at the beginning of 2024. Because the model is multimodal, it can support image-to-text search, link images with words, and segment and classify diseases. The goal is to give healthcare professionals more detail in a single scan than ever before, Bhatia said, leading to faster, more accurate diagnosis and treatment.

“This model has great potential to facilitate real-time analysis of 3D MRI data, which could revolutionize medical procedures such as biopsies, radiation therapy and robotic surgery,” Dan Sheeran, GM of healthcare and life sciences at AWS, told VentureBeat.

In research, the model has already outperformed other publicly available models in tasks including classification of prostate cancer and Alzheimer's disease. It has shown up to 30% accuracy when matching MRI scans with text descriptions in image retrieval, which may not sound all that impressive, but it is a significant improvement over the 3% capability exhibited by similar models.

“It’s gotten to the point where it’s giving solid results,” Bhatia said. “The consequences are huge.”

Doing more with (a lot less) data

The MRI process requires a few different types of datasets to support the various techniques used to scan the human body, Bhatia explained.

The T1-weighted imaging technique, for instance, highlights fatty tissue and decreases the signal from water, while T2-weighted imaging enhances water signals. The two methods are complementary and create a full picture of the brain to help clinicians detect abnormalities such as tumors, trauma or cancer.

"MRI images come in different shapes and sizes, just like you would have books in different formats and sizes, right?" Bhatia said.

To overcome the challenges presented by varied datasets, the developers incorporated a "resize and adapt" approach so that the model can process and react to different variations. Also, data may be missing in some areas (an image may be incomplete, for instance), so they taught the model simply to ignore those instances.
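GE Healthcare has not published its implementation, but the normalization step can be sketched in miniature. Below, a hypothetical `resize_volume` helper (not the company's actual method) uses nearest-neighbor index selection to map 3D scans of arbitrary shapes onto one fixed grid:

```python
import numpy as np

def resize_volume(vol, target_shape):
    """Nearest-neighbor resample of a 3D volume to a fixed target shape.

    A crude stand-in for the kind of shape normalization needed when
    scans arrive in many different sizes and resolutions.
    """
    # For each axis, pick target_shape[i] evenly spaced source indices.
    idx = [
        np.round(np.linspace(0, s - 1, t)).astype(int)
        for s, t in zip(vol.shape, target_shape)
    ]
    return vol[np.ix_(*idx)]

# Scans of different sizes all mapped onto one 32x32x32 grid.
scan_a = np.zeros((40, 50, 60))
scan_b = np.zeros((128, 128, 30))
batch = [resize_volume(s, (32, 32, 32)) for s in (scan_a, scan_b)]
```

Production pipelines would more likely use trilinear interpolation and spacing-aware resampling, but the principle is the same: heterogeneous inputs are brought onto a common grid before training.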

"Instead of getting stuck, we taught the model to skip the gaps and focus on what's available," Bhatia said. "Think of it like solving a puzzle with some pieces missing."
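One common way to "skip the gaps" is to mask incomplete regions out of the training loss, so missing voxels contribute nothing to the gradient. The article doesn't describe GE Healthcare's exact mechanism; this is a minimal NumPy sketch of the masking idea, with the `masked_mse` name and shapes chosen for illustration:

```python
import numpy as np

def masked_mse(pred, target, valid_mask):
    """Mean squared error computed only over voxels marked valid.

    pred, target: 3D volumes of the same shape.
    valid_mask: boolean array, False where the scan is incomplete
    (e.g. missing slices or cropped regions).
    """
    n_valid = valid_mask.sum()
    if n_valid == 0:
        return 0.0  # nothing to score against
    diff = (pred - target) ** 2
    return float(diff[valid_mask].sum() / n_valid)

# Toy example: a 4x4x4 volume whose last slice is missing.
target = np.zeros((4, 4, 4))
pred = np.ones((4, 4, 4))      # off by 1 everywhere
mask = np.ones((4, 4, 4), dtype=bool)
mask[:, :, 3] = False          # treat the last slice as missing

loss = masked_mse(pred, target, mask)
```

Because the mask zeroes out the missing slice, whatever garbage the model predicts there never affects the loss, so training is not penalized by gaps in the data.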

The developers also used semi-supervised student-teacher learning, which is particularly helpful when data is limited. With this method, two neural networks are trained on both labeled and unlabeled data, with the teacher creating labels that help the student learn and make predictions.

"We're now using a lot of self-supervised technology, which doesn't require large amounts of data or labels to train large models," Bhatia said. "It reduces the dependencies, where you can learn more from these raw images than in the past."
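The student-teacher setup described above can be illustrated with a deliberately tiny example. Everything here (the 1-D feature, the midpoint teacher, the logistic-regression student) is a toy stand-in, not GE Healthcare's architecture; it just shows how a teacher fit on scarce labels can pseudo-label abundant raw data for the student:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: two well-separated clusters; only 10 samples labeled,
# 200 unlabeled -- mimicking scarce annotations and plentiful raw scans.
x_labeled = np.concatenate([rng.normal(-2.0, 0.5, 5), rng.normal(2.0, 0.5, 5)])
y_labeled = np.array([0.0] * 5 + [1.0] * 5)
x_unlabeled = np.concatenate([rng.normal(-2.0, 0.5, 100), rng.normal(2.0, 0.5, 100)])

# Teacher: fit on the small labeled set (here, simply the midpoint
# between the two class means), then pseudo-label the raw data.
threshold = (x_labeled[y_labeled == 1].mean() + x_labeled[y_labeled == 0].mean()) / 2
pseudo_labels = (x_unlabeled > threshold).astype(float)

# Student: logistic regression trained on labeled + pseudo-labeled data.
xs = np.concatenate([x_labeled, x_unlabeled])
ys = np.concatenate([y_labeled, pseudo_labels])
w, b = 0.0, 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * xs + b)))  # student predictions
    w -= 0.5 * np.mean((p - ys) * xs)        # gradient step on weight
    b -= 0.5 * np.mean(p - ys)               # gradient step on bias

student_pred = (w * xs + b) > 0
```

The student ends up learning from 210 examples even though only 10 were ever labeled by hand, which is the payoff the approach offers when annotation is expensive, as it is for medical images.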

This helps ensure that the model performs well in hospitals with fewer resources, older machines and different kinds of datasets, Bhatia explained.

He also emphasized the importance of the model's multimodality. "A lot of technologies in the past were unimodal," Bhatia said. "They would look only at the image, or only at the text. But now they're becoming multimodal, they can go from image to text and text to image, so you can bring in a lot of things that were done with separate narrow models in the past and really unify the workflow."

He emphasized that researchers only used datasets they were authorized to use; GE Healthcare has certified partners for de-identified data, and is committed to compliance standards and policies.

Using AWS SageMaker to tackle compute and data challenges

Undoubtedly, there are many challenges in building such sophisticated models, such as limited compute capacity for processing 3D images that are gigabytes in size.

"It's a massive volume of data in 3D," Bhatia said. "You have to bring it into the memory of the model, which is a really complex problem."

To address this, GE Healthcare built on Amazon SageMaker, which provides high-speed networking and distributed training capabilities across multiple GPUs, and leveraged Nvidia A100 Tensor Core GPUs for large-scale training.

"Because of the size of the data and the size of the models, you cannot send it into a single GPU," Bhatia explained. SageMaker allowed them to customize and scale operations across multiple GPUs that could communicate with one another.
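The article does not detail GE Healthcare's training configuration, but the core idea behind multi-GPU data parallelism can be shown in a few lines. In this NumPy sketch (simulated workers, no real GPUs), a global batch is sharded across four workers, each computes a local gradient, and averaging those local gradients (the all-reduce step) recovers exactly the gradient a single large device would have computed:

```python
import numpy as np

rng = np.random.default_rng(1)

def grad(X, y, w):
    """Gradient of MSE loss for a linear model y_hat = X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

X = rng.normal(size=(64, 8))  # one "global batch" of 64 samples
y = rng.normal(size=64)
w = np.zeros(8)

# Single-device reference: gradient over the whole batch at once.
full_grad = grad(X, y, w)

# Data parallelism: shard the batch across 4 simulated workers, compute
# local gradients, then average them (the all-reduce step).
n_workers = 4
local_grads = [
    grad(X_shard, y_shard, w)
    for X_shard, y_shard in zip(np.split(X, n_workers), np.split(y, n_workers))
]
averaged = np.mean(local_grads, axis=0)
```

Because the averaged sharded gradient equals the full-batch gradient, each device only ever has to hold its own shard in memory, which is what makes training on volumes too large for a single GPU feasible.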

Developers also used Amazon FSx with Amazon S3 object storage, which allows for fast reading and writing of datasets.

Bhatia said another challenge is cost optimization; with Amazon's Elastic Compute Cloud (EC2), developers were able to move unused or infrequently used data into lower-cost storage tiers.

"Leveraging SageMaker for training these large models — mainly for efficient, distributed training across multiple high-end GPU clusters — was one of the critical components that really helped us to move faster," said Bhatia.

He emphasized that all components were built with data integrity and compliance in mind, taking into account HIPAA and other regulatory laws and frameworks.

Ultimately, "these technologies can really streamline our work, help us innovate faster, and improve operational efficiency by reducing the administrative load, and ultimately drive better patient care, because now you're providing more personalized care."

Serving as the basis for other fine-tuned, specialized models

Although the model is for now specific to the MRI domain, researchers see great opportunity to extend it to other areas of medicine.

Sheeran pointed out that, historically, AI in medical imaging has been constrained by the need to develop custom models for specific conditions in specific organs, requiring expert annotation of every image used in training.

But this approach is inherently limiting due to the different ways diseases manifest across individuals, and it introduces generalizability problems.

"What we need are thousands of such models and the ability to rapidly create new ones as we encounter novel information," he said. High-quality labeled datasets for every model are also essential.

Now with generative AI, instead of training discrete models for each disease and organ combination, developers can train a single foundation model that can serve as the basis for other fine-tuned, specialized models downstream.

For example, the GE Healthcare model could be extended to areas such as radiation therapy, where radiologists spend significant time manually marking organs that might be at risk. It could also help reduce scan time during X-rays and other procedures that currently require patients to sit still in a machine for extended periods, Bhatia said.

Sheeran marveled that "we're not only expanding access to medical imaging data through cloud-based tools; we're transforming how that data can be utilized to drive AI advancements in healthcare."
