
A Binning Approach for Predicting Long-Term Prognosis in Multiple Sclerosis.

Artificial Intelligence in Medicine, AIME 2023 (2023)

Katholieke Univ Leuven

Abstract
Multiple sclerosis is a complex disease with a highly heterogeneous disease course. Early treatment of multiple sclerosis patients could delay or even prevent disease worsening, but selecting the right treatment is difficult due to this heterogeneity. To support this decision-making process, predictions of the long-term prognosis of the individual patient are of interest, especially at diagnosis, when little is known yet. However, most prognosis studies for multiple sclerosis currently focus on a short-term binary endpoint, answering questions like "will the patient significantly progress in 2 years?". In this paper, we present a novel approach that provides a comprehensive perspective on the long-term prognosis of the individual patient by dividing the years after diagnosis into bins and predicting the level of disability in each of these bins. Our approach addresses several general issues in observational datasets, such as sporadic measurements at irregular time intervals, widely varying lengths of follow-up, and unequal numbers of measurements even for the same follow-up length. We evaluated our approach on real-world clinical data from an observational single-center cohort of multiple sclerosis patients in Belgium. On this dataset, a regressor chain of random forests achieved a Pearson correlation of 0.72 between its cross-validated test set predictions and the actual disability measurements assessed by a clinician.
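The core modeling idea in the abstract (one disability target per post-diagnosis year bin, predicted by a regressor chain of random forests, evaluated via cross-validated Pearson correlation) can be sketched with scikit-learn. This is a minimal illustration on synthetic data, not the authors' implementation; the feature set, number of bins, and data-generating process are all assumptions.

```python
# Hypothetical sketch: predict a disability score in each post-diagnosis
# year bin using a regressor chain of random forests (sklearn).
# The synthetic data below is purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.multioutput import RegressorChain
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_patients, n_features, n_bins = 200, 10, 5

# Features available at diagnosis (assumed: demographics, early clinical data).
X = rng.normal(size=(n_patients, n_features))
# Synthetic disability trajectories: one target column per year bin,
# drifting upward over time and depending on the first feature.
trend = X[:, :1] + 0.1 * np.arange(n_bins)
Y = trend + 0.3 * rng.normal(size=(n_patients, n_bins))

# Chain the bins in temporal order: the prediction for bin t is appended
# to the features when predicting bin t+1, so earlier-bin estimates
# inform later-bin estimates.
chain = RegressorChain(
    RandomForestRegressor(n_estimators=100, random_state=0),
    order=list(range(n_bins)),
)

# Cross-validated predictions for every patient, then an overall
# Pearson correlation between predictions and true values.
Y_pred = cross_val_predict(chain, X, Y, cv=5)
r = np.corrcoef(Y_pred.ravel(), Y.ravel())[0, 1]
print(Y_pred.shape, round(float(r), 2))
```

The chain order matters: predicting bins chronologically lets the model exploit the strong autocorrelation of disability scores across adjacent bins, which is presumably why a regressor chain was chosen over independent per-bin regressors.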
Key words
Machine learning for multiple sclerosis, Prognosis at diagnosis, Longitudinal data, Regressor chain