Bayesian Optimization and Ensemble Learning Algorithm Combined Method for Deformation Prediction of Concrete Dam
STRUCTURES (2023)
Hohai University | Nanjing Institute of Technology
Abstract
Concrete dams are significant infrastructure projects whose operational safety must be prioritized. Deformation is an essential element in the safety monitoring of concrete dams, and monitoring models that predict the trend of dam deformation are a fundamental tool for ensuring dam safety. The accuracy of the monitoring model is paramount to guaranteeing the reliability of subsequent safety assessments. Consequently, this paper proposes a novel combined prediction model for concrete dams, based on Bayesian optimization (BO) and random forests (RF), to achieve high-precision prediction of deformation. First, the monitoring data are pre-processed and the input parameters of the RF model are established. Next, BO based on Gaussian processes searches over the hyperparameters of the RF model and determines their optimal values. Finally, Gini-coefficient-based feature importance is used to explain the relationship between dam deformation and its influencing factors. Case-study analysis and model validation show that the prediction accuracy of the proposed model on the test set is superior to that of other benchmark models, with small residual dispersion. The proposed combined model has significant engineering value and provides a novel method for concrete dam safety monitoring.
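The pipeline sketched in the abstract (factor construction, GP-based Bayesian optimization over RF hyperparameters, then impurity/Gini feature importance) can be illustrated with a minimal Python sketch. It assumes scikit-learn and scikit-optimize and an HST-style factor set (water level, annual temperature harmonics, time-effect term); the synthetic data, feature names, and search ranges are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of the abstract's BO + RF pipeline; data and ranges are
# illustrative assumptions, not the paper's actual monitoring series.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from skopt import gp_minimize            # Gaussian-process-based BO
from skopt.space import Integer
from skopt.utils import use_named_args

rng = np.random.default_rng(0)
t = np.arange(1000)                      # days in service (synthetic)
X = np.column_stack([
    60 + 5 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1, t.size),  # water level
    np.sin(2 * np.pi * t / 365),         # annual temperature harmonic
    np.cos(2 * np.pi * t / 365),
    np.log1p(t / 100.0),                 # time-effect (ageing) term
])
y = 0.05 * X[:, 0] + 2.0 * X[:, 1] + 1.5 * X[:, 3] + rng.normal(0, 0.2, t.size)

# RF hyperparameters tuned by BO (search ranges are assumptions).
space = [
    Integer(50, 300, name="n_estimators"),
    Integer(2, 30, name="max_depth"),
    Integer(2, 10, name="min_samples_split"),
]

@use_named_args(space)
def objective(**params):
    rf = RandomForestRegressor(random_state=0, **params)
    # Minimize the cross-validated RMSE of the deformation prediction.
    return -cross_val_score(rf, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()

# GP surrogate with expected-improvement acquisition.
res = gp_minimize(objective, space, n_calls=25, acq_func="EI", random_state=0)
best = dict(zip(["n_estimators", "max_depth", "min_samples_split"], res.x))
print("best hyperparameters:", best, "| CV RMSE:", round(res.fun, 4))

# Impurity-based importances ("Gini importance" in the paper's terminology)
# indicate how strongly each factor drives the predicted deformation.
rf = RandomForestRegressor(random_state=0, **best).fit(X, y)
for name, imp in zip(["water level", "sin term", "cos term", "time effect"],
                     rf.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

In the paper's setting, X and y would come from the pre-processed monitoring series, and the comparison against benchmark models would be evaluated on a held-out test set rather than by cross-validation alone.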
Key words
Dam deformation, Prediction model, Random forests, Bayesian optimization, Gaussian process