
AllSpark: A Multimodal Spatio-Temporal General Intelligence Model with Ten Modalities via Language as Reference Framework

Run Shao, Cheng Yang, Qiujun Li, Qing Zhu, Yongjun Zhang, Yansheng Li, Yu Liu, Yong Tang, Dapeng Liu, Shizhong Yang, Haifeng Li

IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING (2025)

Central South University; Southwest Jiaotong University; Wuhan University; Peking University; Huawei Technologies Co.

Abstract
RGB, multispectral, point cloud, and other spatio-temporal modal data fundamentally represent different observational approaches for the same geographic object. Therefore, leveraging multimodal data is an inherent requirement for comprehending geographic objects. However, due to the high heterogeneity in structure and semantics among various spatio-temporal modalities, the joint interpretation of multimodal spatio-temporal data has long been an extremely challenging problem. The primary challenge resides in striking a trade-off between the cohesion and autonomy of diverse modalities. This trade-off becomes progressively nonlinear as the number of modalities expands. Inspired by the human cognitive system and linguistic philosophy, where perceptual signals from the five senses converge into language, we introduce the Language as Reference Framework (LaRF), a fundamental principle for constructing a multimodal unified model. Building upon this, we propose AllSpark, a multimodal spatio-temporal general artificial intelligence model. Our model integrates ten different modalities into a unified framework, including one-dimensional (language, code, table), two-dimensional (RGB, SAR, multispectral, hyperspectral, graph, trajectory), and three-dimensional (point cloud) modalities. To achieve modal cohesion, AllSpark introduces a modal bridge and multimodal large language model (LLM) to map diverse modal features into the language feature space. To maintain modality autonomy, AllSpark uses modality-specific encoders to extract the tokens of various spatio-temporal modalities. Finally, observing a gap between the model's interpretability and downstream tasks, we design modality-specific prompts and task heads, enhancing the model's generalization capability across specific tasks. Experiments indicate that the incorporation of language enables AllSpark to excel in few-shot classification tasks for RGB and point cloud modalities without additional training, surpassing baseline performance by up to 41.82%. Additionally, AllSpark, despite lacking expert knowledge in most spatio-temporal modalities and utilizing a unified structure, demonstrates strong adaptability across ten modalities. LaRF and AllSpark contribute to the shift in the research paradigm in spatio-temporal intelligence, transitioning from a modality-specific and task-specific paradigm to a general paradigm. The source code is available at https://github.com/GeoX-Lab/AllSpark.
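The pipeline described in the abstract (modality-specific encoders for autonomy, a modal bridge plus a multimodal LLM for cohesion in the language feature space, and modality-specific prompts and task heads) can be sketched in PyTorch-style code. This is a minimal illustrative sketch only: the class names, the cross-attention bridge design, the frozen-LLM interface, the encoder `out_dim` attribute, and all dimensions are assumptions rather than the authors' implementation; see the GitHub repository above for the actual code.

# Hypothetical sketch of the AllSpark pipeline, assuming a cross-attention
# bridge and a HuggingFace-style LLM interface (not the authors' design).
import torch
import torch.nn as nn

class ModalBridge(nn.Module):
    """Maps modality tokens into the LLM's language feature space (assumed cross-attention design)."""
    def __init__(self, modal_dim: int, llm_dim: int, num_queries: int = 32):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, llm_dim))
        self.proj = nn.Linear(modal_dim, llm_dim)
        self.attn = nn.MultiheadAttention(llm_dim, num_heads=8, batch_first=True)

    def forward(self, modal_tokens: torch.Tensor) -> torch.Tensor:
        # modal_tokens: (batch, n_tokens, modal_dim) from a modality-specific encoder
        kv = self.proj(modal_tokens)
        q = self.queries.unsqueeze(0).expand(modal_tokens.size(0), -1, -1)
        bridged, _ = self.attn(q, kv, kv)  # (batch, num_queries, llm_dim)
        return bridged

class AllSparkSketch(nn.Module):
    """Unified pipeline: per-modality encoders (autonomy) feed one shared LLM (cohesion)."""
    def __init__(self, encoders: dict, llm: nn.Module, llm_dim: int, task_heads: dict):
        super().__init__()
        self.encoders = nn.ModuleDict(encoders)      # e.g. {"rgb": ..., "point_cloud": ..., "sar": ...}
        self.bridges = nn.ModuleDict({
            name: ModalBridge(enc.out_dim, llm_dim)  # assumes each encoder exposes its token dim
            for name, enc in encoders.items()
        })
        self.llm = llm                               # shared (frozen) multimodal LLM backbone
        self.task_heads = nn.ModuleDict(task_heads)  # modality/task-specific output heads

    def forward(self, modality: str, x: torch.Tensor, prompt_embeds: torch.Tensor):
        tokens = self.encoders[modality](x)          # modality-specific tokens (autonomy)
        bridged = self.bridges[modality](tokens)     # mapped into the language feature space (cohesion)
        llm_in = torch.cat([prompt_embeds, bridged], dim=1)  # modality-specific prompt + bridged tokens
        hidden = self.llm(inputs_embeds=llm_in).last_hidden_state  # HuggingFace-style call, assumed
        return self.task_heads[modality](hidden[:, -1])            # task head on the final hidden state

The sketch makes the cohesion/autonomy trade-off concrete: only the encoders, bridges, and heads are modality-specific, while the LLM backbone and its language feature space are shared across all ten modalities.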
Key words
Point cloud compression, Training, Adaptation models, Philosophical considerations, Linguistics, Data models, Spatiotemporal phenomena, Trajectory, Cognitive systems, Synthetic aperture radar, General intelligence model, large language model (LLM), multimodal machine learning, spatiotemporal data