
A Visual-Language Foundation Model for Computational Pathology

Computing Research Repository (CoRR), 2024

Department of Pathology

Cited 49 | Views 80
Abstract
The accelerated adoption of digital pathology and advances in deep learning have enabled the development of powerful models for various pathology tasks across a diverse array of diseases and patient cohorts. However, model training is often difficult due to label scarcity in the medical domain and the model's usage is limited by the specific task and disease for which it is trained. Additionally, most models in histopathology leverage only image data, a stark contrast to how humans teach each other and reason about histopathologic entities. We introduce CONtrastive learning from Captions for Histopathology (CONCH), a visual-language foundation model developed using diverse sources of histopathology images, biomedical text, and notably over 1.17 million image-caption pairs via task-agnostic pretraining. Evaluated on a suite of 13 diverse benchmarks, CONCH can be transferred to a wide range of downstream tasks involving either or both histopathology images and text, achieving state-of-the-art performance on histology image classification, segmentation, captioning, text-to-image and image-to-text retrieval. CONCH represents a substantial leap over concurrent visual-language pretrained systems for histopathology, with the potential to directly facilitate a wide array of machine learning-based workflows requiring minimal or no further supervised fine-tuning.
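The abstract describes pretraining by contrastive learning over paired histopathology images and captions. Purely as an illustration of that general setting, the sketch below implements a generic CLIP-style symmetric contrastive (InfoNCE) objective in PyTorch; the function name, embedding dimension, batch size, and temperature are illustrative assumptions and do not reproduce CONCH's actual encoders or training recipe.

import torch
import torch.nn.functional as F

def clip_style_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Normalize both sets of embeddings so dot products are cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # (B, B) similarity matrix; diagonal entries are the true image-caption pairs.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)      # match each image to its caption
    loss_t2i = F.cross_entropy(logits.t(), targets)  # match each caption to its image
    return 0.5 * (loss_i2t + loss_t2i)

# Toy usage with random tensors standing in for encoder outputs.
image_emb = torch.randn(8, 512)   # batch of 8 image embeddings
text_emb = torch.randn(8, 512)    # batch of 8 caption embeddings
print(clip_style_contrastive_loss(image_emb, text_emb).item())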
Key words
Histopathology Images, Digital Pathology, Medical Image Analysis, Feature Extraction
Related Papers

On Image Search in Histopathology

Journal of Pathology Informatics 2024

Cited 2

Foundation Models and Information Retrieval in Digital Pathology

H. R. Tizhoosh
Artificial Intelligence in Pathology 2025

Cited 0

Foundation Model for Advancing Healthcare: Challenges, Opportunities, and Future Directions

Yuting He, Fuxiang Huang, Xinrui Jiang, Yuxiang Nie, Minghao Wang, Jiguang Wang, Hao Chen
IEEE Reviews in Biomedical Engineering 2025

Cited 3

Recent Advances in Medical Image Classification

Loan Dao, Ngoc Quoc Ly
International Journal of Advanced Computer Science and Applications 2024

Cited 0

A Simplified Query-Only Attention for Encoder-Based Transformer Models

Hong-gi Yeom, Kyung-min An
Applied Sciences (Basel) 2024

Cited 0

A Generalist Medical Language Model for Disease Diagnosis Assistance

Xiaohong Liu, Hao Liu, Guoxing Yang, Zeyu Jiang, Shuguang Cui, Zhaoze Zhang, Huan Wang, Liyuan Tao, Yongchang Sun, Zhu Song, Tianpei Hong, Jin Yang, et al.
Nature Medicine 2025

Cited 0

Chat Paper

Key points: This paper introduces CONCH, a visual-language foundation model for computational pathology. Through contrastive learning on images and text, it generalizes across diseases and patient cohorts on tasks such as pathology image classification, segmentation, and captioning; its novelty lies in combining image and text data and adopting task-agnostic pretraining.

Methods: Contrastive learning with task-agnostic pretraining, combining histopathology images, biomedical text, and a large corpus of image-caption pairs.

Experiments: CONCH performs strongly across 13 diverse benchmarks and transfers to a wide range of downstream tasks involving histopathology images and text, achieving state-of-the-art results in histology image classification, segmentation, captioning, text-to-image retrieval, and image-to-text retrieval. The evaluation datasets are not explicitly listed here, but the model is shown to directly support machine-learning workflows with minimal or no further supervised fine-tuning.
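As a generic illustration of the cross-modal retrieval setting evaluated above (not the paper's code), the following sketch ranks a gallery of image embeddings against text-query embeddings by cosine similarity; the random tensors stand in for encoder outputs, and the function name and sizes are assumptions for illustration only.

import torch
import torch.nn.functional as F

def retrieve_top_k(query_emb, gallery_emb, k=5):
    # Cosine-similarity ranking: normalize, take dot products, pick the top-k gallery items.
    query_emb = F.normalize(query_emb, dim=-1)
    gallery_emb = F.normalize(gallery_emb, dim=-1)
    similarity = query_emb @ gallery_emb.t()   # (num_queries, gallery_size)
    return similarity.topk(k, dim=-1).indices

# Text-to-image retrieval with placeholder embeddings.
caption_emb = torch.randn(4, 512)    # 4 text-query embeddings
image_emb = torch.randn(100, 512)    # gallery of 100 image embeddings
print(retrieve_top_k(caption_emb, image_emb, k=5))  # top-5 image indices per caption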