Robust and Efficient Synchronization for Structural Health Monitoring Data with Arbitrary Time Lags
Engineering Structures (2025), SCI Q2
Abstract
With the advancement of structural health monitoring (SHM) technology, operational modal analysis (OMA) plays an indispensable role in finite-element model updating, damage detection, and wind-resistance design. Owing to combined factors such as sensor errors and equipment failures, multi-channel response signals often exhibit slight or significant asynchrony, introducing unforeseeable uncertainty and phase deviation into OMA. Moreover, neither clock-based wireless sensor networks nor cable-based wired SHM systems can guarantee complete data synchronization. This paper presents a novel approach for detecting and synchronizing SHM data with arbitrary time lags in the post-processing stage. The approach focuses on phase variations across multiple modes and converts time lags into phase-period differences within a specified bandwidth. To improve the accuracy and automation of the frequency-domain analysis, the variational mode extraction (VME) algorithm is employed, providing a robust solution by extracting fundamental mode components. The feasibility of the approach is validated on a linear time-invariant simulation system with non-proportional damping. Finally, the proposed approach is applied to the SHM system of the Shanghai Tower, the tallest building in China, which exhibits actual time lags. The relative time lags between vibration response channels are successfully estimated, revealing that the channel asynchronization can be attributed to misaligned timestamps between data acquisition substations. This finding mitigates current challenges in estimating the mode shapes of the Shanghai Tower and helps reduce uncertainty during the OMA process.
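The abstract's core idea, that a constant time lag between two channels appears in the frequency domain as a phase difference growing linearly with frequency, can be illustrated with a simple cross-spectrum phase-slope estimator. This is only an illustrative sketch, not the paper's VME-based method: the function name, frequency band, and synthetic signals below are assumptions for demonstration.

```python
import numpy as np
from scipy.signal import csd

def estimate_time_lag(x, y, fs, band, nperseg=1024):
    """Estimate the time lag (seconds) of channel y behind channel x
    from the phase slope of their cross-power spectrum within `band`.

    A constant delay tau produces a linear phase: angle(Pxy) = -2*pi*f*tau.
    """
    f, pxy = csd(x, y, fs=fs, nperseg=nperseg)
    mask = (f >= band[0]) & (f <= band[1])
    phase = np.unwrap(np.angle(pxy[mask]))
    slope, _ = np.polyfit(f[mask], phase, 1)  # slope = -2*pi*tau
    return -slope / (2 * np.pi)

# Synthetic check: channel y is channel x delayed by 10 samples (0.05 s)
fs = 200.0
tau_true = 0.05
n = int(tau_true * fs)
rng = np.random.default_rng(0)
s = rng.standard_normal(60 * int(fs) + n)  # common broadband source
x = s[n:] + 0.01 * rng.standard_normal(s.size - n)
y = s[:-n] + 0.01 * rng.standard_normal(s.size - n)
tau_hat = estimate_time_lag(x, y, fs, band=(1.0, 50.0))
```

Fitting the phase over a band rather than reading it at a single frequency averages out noise at individual spectral lines, which is in the same spirit as the paper's use of phase variations across multiple modes.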
Key words
Structural health monitoring, Time lag, Data synchronization, Phase deviation, VME algorithm, Shanghai Tower