Simulating the Near-Field Dynamic Plume Behavior of Disposed Fine Sediments
Frontiers in Marine Science (2024)
Leibniz University Hannover
Abstract
Projections of the effects of fine sediment disposals, relevant for managed estuaries and tidally influenced coastal areas, are typically based on numerical far-field models. To represent the disposal event itself accurately, near-field models are often needed. The open-source near-field model PROVER-M simulates the physics-based, dynamic behavior of disposed fine sediments in coastal waters and is applied in this study. First, new small-scale laboratory experiments of instantaneous disposals are presented, documenting the dynamic behavior of fine material disposed in shallow waters. Second, PROVER-M results are shown for disposals in three settings: (1) a field-scale study complementary to the laboratory set-up, (2) a parametric study with sequentially varied model inputs, and (3) a coupling to a far-field model to estimate the impact of PROVER-M. Comparison of the laboratory experiments with the PROVER-M results validates the physical behavior of the model. The simulations are used to evaluate the influence of the ambient setting and of the dredged material parameters, revealing complex, non-linear interdependencies between input parameters and disposal properties that depend on ambient site conditions and material composition. In this context, the limits of the model's applicability are assessed and critically discussed. Finally, an exemplary coupling to a far-field model, based on a real series of disposals in the tidally influenced Weser estuary (Germany), illustrates the potential impact of PROVER-M on assessed far-field suspended sediment concentration (SSC), with maximum SSC values increased by up to 10%.
Key words
sediment disposal, near-field model, turbidity plume, sediment management, dynamic plume
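To illustrate the parametric study described in the abstract, the following Python sketch varies one model input at a time around a base case. This is a minimal sketch only: the function `run_prover_m`, its return quantity, and all numerical values are hypothetical placeholders, not the actual PROVER-M interface or physics.

```python
# Illustrative sketch of a sequential parametric study: vary one input
# at a time around a base case. All names and numbers are assumptions.

water_depths_m = [5.0, 10.0, 20.0]     # ambient water depth (assumed values)
current_speeds_ms = [0.0, 0.5, 1.0]    # ambient current speed (assumed values)
fines_fractions = [0.25, 0.50, 0.75]   # fine-grained share of the load (assumed)

def run_prover_m(depth_m: float, current_ms: float, fines: float) -> float:
    """Hypothetical stand-in for one near-field disposal simulation.

    Returns the fraction of disposed mass remaining in suspension after
    the dynamic near-field phase. The formula below is a dummy placeholder,
    not the model physics.
    """
    return min(1.0, 0.05 + 0.1 * fines + 0.02 * current_ms * depth_m)

# Base case, then each parameter is varied on its own (sequential variation).
base = (10.0, 0.5, 0.50)
for i, values in enumerate([water_depths_m, current_speeds_ms, fines_fractions]):
    for v in values:
        args = list(base)
        args[i] = v
        print(args, "->", round(run_prover_m(*args), 3))
```

Varying inputs one at a time, as sketched here, isolates the sensitivity of the disposal outcome to each parameter; the non-linear interdependencies reported in the paper would show up as base-case-dependent sensitivities.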
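The far-field coupling can be pictured as handing the mass that remains in suspension after the near-field phase to the far-field model as a source term. The sketch below shows one plausible conversion; the function name, grid-cell volume, and mass figures are assumptions for demonstration, and the coupling used in the paper may hand over a different quantity or format.

```python
# Illustrative sketch of a near-field to far-field hand-over: the suspended
# mass left after the dynamic phase becomes an SSC source term in one
# far-field grid cell. All names and numbers are assumptions.

def ssc_source_rate(suspended_mass_kg: float,
                    cell_volume_m3: float,
                    release_duration_s: float) -> float:
    """Volumetric SSC source rate [kg m^-3 s^-1] for one far-field grid cell."""
    return suspended_mass_kg / (cell_volume_m3 * release_duration_s)

# Example: a 500 t hopper load of which 20% stays in suspension
# after the dynamic near-field phase (assumed figures).
suspended_kg = 0.20 * 500_000.0
rate = ssc_source_rate(suspended_kg,
                       cell_volume_m3=2.0e5,
                       release_duration_s=600.0)
print(f"SSC source rate: {rate:.2e} kg/m^3/s")  # ~8.33e-04
```

In practice such a hand-over typically spreads the suspended mass over several far-field cells and time steps rather than a single cell, which is one reason the near-field treatment changes the maximum far-field SSC (here by up to 10%) rather than its overall budget.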