NCCN Guidelines® Insights: Prostate Cancer, Version 3.2024.
Journal of the National Comprehensive Cancer Network (JNCCN), 2024.
1Robert H. Lurie Comprehensive Cancer Center of Northwestern University. | 2Stanford Cancer Institute. | 3Indiana University Melvin and Bren Simon Comprehensive Cancer Center. | 4Yale Cancer Center/Smilow Cancer Hospital. | 5Duke Cancer Institute. | 6The University of Texas MD Anderson Cancer Center. | 7Fred Hutchinson Cancer Center. | 8Dana-Farber/Brigham and Women's Cancer Center. | 9UT Southwestern Simmons Comprehensive Cancer Center. | 10City of Hope National Cancer Center. | 11Memorial Sloan Kettering Cancer Center. | 12Prostate Health Education Network (PHEN). | 13Mass General Cancer Center. | 14Case Comprehensive Cancer Center/University Hospitals Seidman Cancer Center and Cleveland Clinic Taussig Cancer Institute. | 15Abramson Cancer Center at The University of Pennsylvania. | 16Siteman Cancer Center at Barnes-Jewish Hospital and Washington University School of Medicine. | 17Mayo Clinic Comprehensive Cancer Center. | 18Roswell Park Comprehensive Cancer Center. | 19University of Wisconsin Carbone Cancer Center. | 20The Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins. | 21UC San Diego Moores Cancer Center. | 22University of Michigan Rogel Cancer Center. | 23Moffitt Cancer Center. | 24UCLA Jonsson Comprehensive Cancer Center. | 25UCSF Helen Diller Family Comprehensive Cancer Center. | 26University of Colorado Cancer Center. | 27University of California San Francisco Patient Services Committee. | 28The Ohio State University Comprehensive Cancer Center - James Cancer Hospital and Solove Research Institute. | 29The UChicago Medicine Comprehensive Cancer Center. | 30Fred & Pamela Buffett Cancer Center. | 31Huntsman Cancer Institute at the University of Utah. | 32UC Davis Comprehensive Cancer Center. | 33Fox Chase Cancer Center. | 34National Comprehensive Cancer Network.
- Pretraining has recently driven major advances in natural language processing (NLP).
- We show that M6 outperforms the baselines on multimodal downstream tasks, and that the large M6 model with 10 billion parameters achieves even better performance.
- We propose M6, a method that processes information from multiple modalities and performs both single-modal and cross-modal understanding and generation (see the sketch after this list).
- The model is scaled to 10 billion parameters with sophisticated deployment, and the 10-billion-parameter M6-large is the largest pretrained model in Chinese.
- Experimental results show that the proposed M6 outperforms the baselines on a number of downstream tasks involving both single and multiple modalities. We will continue pretraining extremely large models on increasing amounts of data to explore the limits of their performance.
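
As a rough illustration of the unified single-modal/cross-modal design described in the bullets above, the following is a minimal PyTorch sketch of a single transformer backbone that consumes image-patch embeddings together with text tokens. The class name, dimensions, and architecture are illustrative assumptions for this sketch only, not M6's actual implementation.

```python
# Minimal sketch (assumption, not M6's actual code): one transformer backbone
# that accepts text tokens alone (single-modal) or image patches plus text
# (cross-modal), with a language-model head for generation-style outputs.
import torch
import torch.nn as nn

class UnifiedMultimodalTransformer(nn.Module):
    def __init__(self, vocab_size=30000, patch_dim=768, d_model=512,
                 nhead=8, num_layers=4):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)   # text tokens -> vectors
        self.image_proj = nn.Linear(patch_dim, d_model)        # patch features -> same space
        layer = nn.TransformerEncoderLayer(
            d_model, nhead, dim_feedforward=4 * d_model, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)           # logits over text vocabulary

    def forward(self, text_ids, image_patches=None):
        tokens = self.text_embed(text_ids)                       # (B, T, d_model)
        if image_patches is not None:                            # cross-modal: prepend image tokens
            tokens = torch.cat([self.image_proj(image_patches), tokens], dim=1)
        hidden = self.backbone(tokens)
        return self.lm_head(hidden)

# Toy usage: 2 samples, 16 image patches of dim 768, 10 text tokens.
model = UnifiedMultimodalTransformer()
text = torch.randint(0, 30000, (2, 10))
patches = torch.randn(2, 16, 768)
print(model(text, patches).shape)   # cross-modal path: torch.Size([2, 26, 30000])
print(model(text).shape)            # text-only path:   torch.Size([2, 10, 30000])
```

The point of the sketch is that projecting both modalities into one embedding space lets a single backbone and head serve understanding and generation tasks; scaling such a backbone to billions of parameters is then a matter of widening and deepening the same structure.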
