Inter‐comparison of Methods to Homogenize Daily Relative Humidity
International Journal of Climatology (2018)
Zentralanstalt für Meteorologie und Geodynamik (ZAMG)
Abstract
Three homogenization methods (ACMANT, MASH and HOMOP) were evaluated for their efficiency in homogenizing daily relative humidity data. A homogeneous surrogate data set based on Austrian stations was created and perturbed to simulate realistic inhomogeneous time series ("validation data sets"). Two validation data sets ("simple" and "complex") were created. In both data sets the magnitude of the breaks depends on the time of year and on the measured values. They differ in the number of missing values and, in particular, in whether the break signal was perturbed by white noise. In the latter case, the noise level was also changed to account for changes in random measurement errors and other physical factors. The evaluation showed high agreement in statistical characteristics between the real data and the surrogate data set. The homogenization methods were compared in their ability both to detect breaks and to reproduce the homogeneous surrogate data set. For the evaluation of the final data set, the distribution, trends and root-mean-square error (RMSE) were analysed. The percentage of improved time series depends on the evaluation parameter considered. Fewer stations were improved when using the "complex" validation data set. Because of the large number of breaks and the small signal-to-noise ratio, improvement of the data by homogenization was not ideal for any of the methods used, each having its own advantages and disadvantages. The quality of the ACMANT and HOMOP results is comparable, with ACMANT correcting fewer stations but also falsely declaring fewer stations homogeneous. To get an impression of the influence on real data, ACMANT was applied to homogenize daily Austrian time series of relative humidity. While the quality of the data from some stations can be improved through homogenization, this is not the case for all time series. A final evaluation of the homogenized time series should be performed to ensure their quality before further use.
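The validation approach described above — perturbing a homogeneous surrogate series with breaks and scoring a homogenized result by RMSE against the known truth — can be sketched as follows. This is a minimal hypothetical illustration, not the paper's actual procedure: the annual cycle, noise level, break size and partial correction are all invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a "homogeneous" daily relative-humidity surrogate series
# (mean annual cycle plus white noise; all values are assumed, in %).
n_days = 3 * 365
t = np.arange(n_days)
seasonal = 70 + 10 * np.sin(2 * np.pi * t / 365.25)
homogeneous = seasonal + rng.normal(0, 3, n_days)

# Insert a step break to create an inhomogeneous validation series.
break_day = 500
inhomogeneous = homogeneous.copy()
inhomogeneous[break_day:] += 4.0

def rmse(a, b):
    """Root-mean-square error between two series."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Mimic an imperfect homogenization that removes only part of the break;
# a perfect correction would recover `homogeneous` exactly (RMSE 0).
corrected = inhomogeneous.copy()
corrected[break_day:] -= 3.0

# The correction should reduce the RMSE against the homogeneous truth.
assert rmse(corrected, homogeneous) < rmse(inhomogeneous, homogeneous)
```

In this framing, "improved" stations are those whose corrected series have a lower RMSE against the homogeneous truth than the inhomogeneous input, which is one of the evaluation criteria the abstract names alongside distributions and trends.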
Key words
daily data, homogenization, method comparison, relative humidity, surrogate data, validation data set