Cancer Surgeons. Photo by @nci.

Memory Usage in Medical AI Applications

Addressing the memory bottleneck in AI model training

Alex Moltzau
Apr 12, 2020 · 3 min read


With the increasing use of medical imaging and of advanced machine-learning analysis, there is a need to make these applications more efficient. One way to do so is to combine software and hardware improvements. Part of the research team at Intel proposes some solutions.

An article by Tony Reina, Prashant Shah, David Ojika, Bhavesh Patel and Trent Boyer, called Addressing the Memory Bottleneck in AI Model Training, attempts to do exactly that.

Of course, this is in a way a product pitch from researchers working at Intel for Intel's own hardware.

“…with access to 1.5 TB of DDR4 RAM and an additional 6 TB per socket of Intel® Optane™ DC Persistent Memory, the 2nd generation Intel Xeon Scalable CPU minimizes the need for this workaround. Instead, researchers are able to use the full capacity of RAM without any modifications to their code.”

Even so, it does not hurt to look at their proposition.

They argue that healthcare workloads can be processed more efficiently.

“Healthcare workloads, particularly in medical imaging, may use more memory than other AI workloads because they often use higher resolution 3D images. In fact, many researchers are surprised to find how much memory these models use during both training and inference. In a recent publication, Intel, Dell, and the University of Florida demonstrate how the large memory capacity of a 2nd generation Intel® Xeon® Scalable processor-based server allows researchers to more efficiently train and deploy medical imaging models for brain tumor segmentation that use almost a terabyte (TB) of RAM.”
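To get a feel for why 3D medical volumes are so memory-hungry, here is a rough back-of-the-envelope sketch in Python. The input shape, layer width and batch size are my own illustrative assumptions (loosely BraTS-like), not figures from the paper:

```python
# Back-of-the-envelope memory estimate for a 3D segmentation workload.
# All shapes, layer widths and the batch size are my own illustrative
# assumptions, not figures from the Intel/Dell/University of Florida paper.

import numpy as np

BYTES_PER_FLOAT32 = 4

def tensor_gib(shape, bytes_per_element=BYTES_PER_FLOAT32):
    """Memory of one dense float32 tensor of the given shape, in GiB."""
    return np.prod(shape) * bytes_per_element / 2**30

# A multi-modal 3D MRI volume: 4 modalities, 240 x 240 x 155 voxels.
input_shape = (4, 240, 240, 155)
print(f"One input volume: {tensor_gib(input_shape):.2f} GiB")

# First conv block of a hypothetical 3D U-Net: 32 feature maps at full
# resolution. During training these activations are kept for backpropagation,
# so the cost is paid for every sample in the batch.
feature_shape = (32, 240, 240, 155)
batch_size = 8
print(f"First-layer activations, batch of {batch_size}: "
      f"{batch_size * tensor_gib(feature_shape):.1f} GiB")
```

Even under these modest assumptions, a single early layer at full resolution already takes around a gigabyte per sample, and a deep 3D network stores many such activation maps for backpropagation, which is how training runs can approach the terabyte of RAM mentioned above.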

Intel AI research shared this on their Twitter account.

The Max Planck Institute worked with Intel to run inference on a full 3D dataset with a 3D brain-imaging model.

One of their achievements was to: “…reduce the original 24 TB memory requirement by a factor of 16 via efficient reuse of memory enabled by the Intel® Distribution of OpenVINO™ toolkit.”

As a result, processing each image required only 1.5 TB of RAM to perform AI inference, and processing took less than an hour compared to 24 hours during initial tests.

This can be represented roughly as follows.
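As a quick sanity check, here is the same arithmetic as a minimal Python sketch, using only the figures quoted above (the runtime comparison treats "less than an hour" as roughly one hour):

```python
# Memory and runtime figures quoted from the Max Planck / Intel write-up,
# reduced to simple arithmetic.
original_requirement_tb = 24   # naive memory requirement for the full 3D dataset
reduction_factor = 16          # reported reduction via memory reuse (OpenVINO toolkit)
per_image_ram_tb = original_requirement_tb / reduction_factor
print(f"RAM per image after optimization: {per_image_ram_tb:.1f} TB")  # -> 1.5 TB

runtime_before_h = 24          # initial tests
runtime_after_h = 1            # "less than an hour", taken here as ~1 hour
print(f"Wall-clock speed-up: roughly {runtime_before_h // runtime_after_h}x")
```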

I do not have any answers here, only an interesting direction to point to.

Since medical imaging needs to become more efficient and use less energy, this seems an important area to explore.

If we are to implement artificial intelligence solutions at large scale, there are likely many who will work on this, both to offer better products and, hopefully, to reduce the footprint of the AI solutions in question.

This is #500daysofAI and you are reading article 314. I am writing one new article about or related to artificial intelligence every day for 500 days. My focus for days 300–400 is AI, hardware and the climate crisis.


Alex Moltzau

AI Policy, Governance, Ethics and International Partnerships at www.nora.ai. All views are my own. twitter.com/AlexMoltzau