Developments and Results in the Context of the JEM-EUSO Program Obtained with the ESAF Simulation and Analysis Framework
The European Physical Journal C (2023)
Nihon University | University of Alabama in Huntsville | Université de Paris | City University of New York (CUNY) | INAF-Istituto di Astrofisica Spaziale e Fisica Cosmica di Palermo | Istituto Nazionale di Fisica Nucleare-Sezione di Torino | University of Tübingen | Istituto Nazionale di Fisica Nucleare-Sezione di Bari | Lomonosov Moscow State University | NASA Marshall Space Flight Center | Karlsruhe Institute of Technology (KIT) | RIKEN | Institute of Experimental Physics | KTH Royal Institute of Technology | University of Chicago | Istituto Nazionale di Fisica Nucleare-Sezione di Roma Tor Vergata | Istituto Nazionale di Fisica Nucleare-Sezione di Napoli | Max Planck Institute for Physics | Istituto Nazionale di Fisica Nucleare-Sezione di Catania | Palacký University | University of California | Università degli Studi di Torino | Pennsylvania State University | Omega | Universidad de Alcalá (UAH) | Università di Napoli Federico II | Colorado School of Mines | University of Tokyo | Konan University | ECAP | NASA Goddard Space Flight Center | Institute of Physics of the Czech Academy of Sciences | Università di Roma Tor Vergata | National Centre for Nuclear Research | University of Utah | Univ. Constantine I | National Astronomical Observatory | University of Iowa | Joint Institute for Nuclear Research | Sungkyunkwan University | University of Warsaw | Istituto Nazionale di Fisica Nucleare-Laboratori Nazionali di Frascati | Space Regatta Consortium | Hokkaido University | Shinshu University | ISDC Data Centre for Astrophysics | Centre for Development of Advanced Technologies (CDTA) | IRAP | Fairfield University

Measurement of Upward-going Milli-charged Particles at the Pierre Auger Observatory
Cited by: 0
Refined STACK-CNN for Meteor and Space Debris Detection in Highly Variable Backgrounds
Cited by: 0