A Deep Learning Method for Reduction of Microbubble Accumulation Time in Ultrasound Localization Microscopy

Proceedings of the 2020 IEEE International Ultrasonics Symposium (IUS), 2020

Tsinghua University

Abstract
Ultrasound localization microscopy (ULM) achieves unprecedented subwavelength spatial resolution by localizing ultrasound contrast microbubbles (MBs). Conventional ULM methods use low-concentration MB solutions to ensure that the MBs are sufficiently sparse for individual MBs to be localized. This leads to a long data-acquisition time (i.e., microbubble-accumulation time) needed to accumulate enough MB events using plane-wave (PW) imaging. Exploiting the continuity of the microvessels and the redundancy of MB tracks, in this study we present a method that leverages a deep neural network (DNN) to predict the microvasculature from a significantly reduced data-acquisition time while achieving quality comparable to that obtained with the full acquisition time. Phantom experiments show that the proposed method significantly improves the density of localized MBs compared with the conventional method. In addition, in vivo experiments demonstrate that the proposed method can reconstruct ULM images from reduced data-acquisition time with only a small difference from those produced by the conventional method.
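The sparsity requirement described above means each frame contains a few isolated bright echoes whose centers can be estimated individually. The following is a minimal, hypothetical sketch of that localization step (not the authors' implementation): detect intensity peaks above a threshold in a 2-D frame and refine each peak to sub-pixel precision with an intensity-weighted centroid over its 3x3 neighborhood. The function name and threshold parameter are illustrative assumptions.

```python
def localize_microbubbles(frame, threshold):
    """Illustrative MB localization sketch (assumed, not the paper's method).

    frame: 2-D list of floats (envelope-detected intensities).
    threshold: minimum intensity for a pixel to count as a candidate peak.
    Returns a list of sub-pixel (row, col) estimates of MB centers.
    """
    rows, cols = len(frame), len(frame[0])
    peaks = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            v = frame[r][c]
            if v < threshold:
                continue
            # Collect the 3x3 neighborhood around the candidate pixel.
            neighborhood = [(frame[r + dr][c + dc], r + dr, c + dc)
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            if v < max(val for val, _, _ in neighborhood):
                continue  # not a local maximum
            # Intensity-weighted centroid gives a sub-pixel center estimate.
            total = sum(val for val, _, _ in neighborhood)
            cr = sum(val * rr for val, rr, _ in neighborhood) / total
            cc = sum(val * cc_ for val, _, cc_ in neighborhood) / total
            peaks.append((cr, cc))
    return peaks
```

In a full ULM pipeline these per-frame localizations would be accumulated over many frames to form the super-resolved vessel map; the paper's contribution is reducing how many such frames are needed by letting a DNN fill in the undersampled vasculature.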
Key words
Deep learning, generative adversarial network (GAN), microbubble-accumulation time, ultrasound localization microscopy (ULM)