Global Patterns in Endemicity and Vulnerability of Soil Fungi.
Global Change Biology (2022), SCI Q1
Univ Tartu | Philipps Univ | Swedish Univ Agr Sci | Kharkov Natl Univ | Univ Pablo de Olavide | Univ Alicante | Univ Punjab | Univ Rosario | Estonian Univ Life Sci | Univ Antioquia UdeA | Univ Palermo | Univ Cagliari | Univ Ghent | Indonesia Int Inst Life Sci | Univ Dschang | Uppsala Univ | Univ Fed Parana | Botswana Int Univ Sci & Technol | Nat Hist Museum Zimbabwe | Univ Santo Tomas | Latvian State Forest Res Inst Silava | Pondicherry Univ | Qujing Normal Univ | Univ Nacl Cordoba | Nat Hist Museum Denmark | St Marys Univ | Altai State Univ | CSIRO Land & Water | Manchester Metropolitan Univ | Helmholtz Zentrum Munchen | Utah Valley Univ | Michigan State Univ | Agr Univ Iceland | Univ Copenhagen | Lithuanian Res Ctr Agr & Forestry LAMMC | Wageningen Univ & Res | Eszterhazy Karoly Catholic Univ | Qatar Univ | Stanford Univ | Arctic Univ Norway | Univ Parakou | Mae Fah Luang Univ | NERC British Antarctic Survey | Univ Ilorin | Gothenburg Ctr Sustainable Dev | Syracuse Univ | Quaid I Azam Univ | Univ Free State | Goethe Univ Frankfurt Main | Chinese Acad Sci | Free Univ Berlin | Czech Acad Sci | Univ Gothenburg | Univ Nacl Autonoma Mexico | Univ Austral Chile | King Saud Univ | Moscow Lomonosov State Univ | United Arab Emirates Univ | Calif State Polytech Univ Arcata | Univ Burundi | Seoul Natl Univ | Hasselt Univ | Royal Bot Gardens
- Pretraining has recently driven substantial advances in natural language processing (NLP).
- We show that M6 outperforms the baselines on multimodal downstream tasks, and that the larger M6 with 10 billion parameters reaches even better performance.
- We propose M6, a model that processes information from multiple modalities and performs both single-modal and cross-modal understanding and generation (see the sketch after this list).
- The model is scaled to 10 billion parameters with sophisticated deployment, and this 10-billion-parameter M6-large is the largest pretrained model in Chinese.
- Experimental results show that the proposed M6 outperforms the baselines on a number of downstream tasks involving both single and multiple modalities. We will continue pretraining extremely large models on increasing amounts of data to explore the limits of their performance.
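The unified design the bullets describe, one shared backbone consuming both image features and text tokens, can be illustrated with a minimal sketch in Python (PyTorch). This is not the authors' implementation: the class name, dimensions, segment embeddings, and the text-prediction head are all illustrative assumptions; the sketch only shows how image patch features and text embeddings might be projected into a single sequence for a shared Transformer.

```python
# Minimal sketch (not the M6 authors' code) of a single-stream multimodal
# encoder: image patch features and text tokens share one Transformer,
# so the same backbone can serve single-modal and cross-modal tasks.
# All names and sizes below are illustrative assumptions.

import torch
import torch.nn as nn

class UnifiedMultimodalEncoder(nn.Module):
    def __init__(self, vocab_size=30000, d_model=512, n_heads=8,
                 n_layers=6, patch_feat_dim=2048, max_len=512):
        super().__init__()
        # Map text tokens and image patch features into the same space.
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        self.img_proj = nn.Linear(patch_feat_dim, d_model)
        self.pos_embed = nn.Embedding(max_len, d_model)
        # Segment embedding distinguishes the two modalities.
        self.seg_embed = nn.Embedding(2, d_model)  # 0 = image, 1 = text
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)  # e.g. masked-LM head

    def forward(self, patch_feats, token_ids):
        # patch_feats: (B, P, patch_feat_dim); token_ids: (B, T)
        img = self.img_proj(patch_feats)           # (B, P, d_model)
        txt = self.tok_embed(token_ids)            # (B, T, d_model)
        x = torch.cat([img, txt], dim=1)           # one joint sequence
        B, L, _ = x.shape
        pos = torch.arange(L, device=x.device).expand(B, L)
        seg = torch.cat(
            [torch.zeros(B, img.size(1), dtype=torch.long, device=x.device),
             torch.ones(B, txt.size(1), dtype=torch.long, device=x.device)],
            dim=1)
        x = x + self.pos_embed(pos) + self.seg_embed(seg)
        h = self.encoder(x)                        # shared backbone
        return self.lm_head(h[:, img.size(1):])    # predict text positions

# Toy usage: 4 image patches with 2048-dim features, 6 text tokens.
model = UnifiedMultimodalEncoder()
logits = model(torch.randn(2, 4, 2048), torch.randint(0, 30000, (2, 6)))
print(logits.shape)  # torch.Size([2, 6, 30000])
```

Because the same encoder accepts image-only, text-only, or joint sequences, one backbone can in principle cover single-modal and cross-modal understanding and generation, which is the property the bullets claim for M6.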
