Enhanced Vernier Effect in Cascaded Fiber Loop Interferometers for Improving Temperature Sensitivity
Sensors (2025), SCI Zone 3
Abstract
This work presents a high-sensitivity temperature sensing system based on an enhanced Vernier effect implemented with cascaded fiber loop interferometers. High-sensitivity temperature sensors based on the Vernier effect have broad application prospects, but the sensitivity of traditional measurement schemes is difficult to improve further because the difference between the two free spectral ranges (FSRs) varies only slightly. Our sensing system combines two fiber loop interferometers and a single-mode fiber to form a Vernier spectral response characterized by two complementary optical filter responses. As the temperature of the sensing fiber changes, one FSR decreases while the other increases, enlarging the difference between the two FSRs and producing an enhanced Vernier effect. Experimental results show that a traditional Vernier effect measurement achieves a temperature sensitivity of only −298.29 kHz/°C, whereas the proposed enhanced Vernier effect sensing system reaches 618.14 kHz/°C, which is 92 times higher than that of a two-arm optical carrier-based microwave interferometry (OCMI) sensing system and 2.07 times higher than that of a traditional Vernier effect sensing system. This enhanced Vernier effect scheme based on cascaded fiber loop interferometers can be used to design high-sensitivity sensing systems for biometrics, smart cities, and the Internet of Things.
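The mechanism described in the abstract can be sketched numerically. The magnification formula and all FSR values and per-degree drifts below are illustrative assumptions for a generic two-interferometer Vernier scheme, not figures taken from the paper:

```python
# Sketch: Vernier envelope magnification for two interferometers with
# free spectral ranges fsr_sense and fsr_ref. All numbers are illustrative.

def vernier_magnification(fsr_sense: float, fsr_ref: float) -> float:
    """Envelope magnification M = FSR_ref / |FSR_sense - FSR_ref| for the
    standard two-interferometer Vernier configuration."""
    return fsr_ref / abs(fsr_sense - fsr_ref)

# Two slightly detuned FSRs (MHz, hypothetical values).
fsr_sense, fsr_ref = 100.0, 98.0
m = vernier_magnification(fsr_sense, fsr_ref)
print(m)  # 49.0: a small FSR mismatch yields a large envelope magnification

# Traditional scheme: temperature shifts only the sensing FSR, so the
# FSR difference changes by |d_sense| per degree.
# Enhanced scheme: one FSR decreases while the other increases, so the
# FSR difference changes by |d_sense - d_ref| per degree.
d_sense, d_ref = -0.5, +0.5   # hypothetical FSR drifts (MHz/°C)
delta_traditional = abs(d_sense)
delta_enhanced = abs(d_sense - d_ref)
print(delta_enhanced / delta_traditional)  # 2.0: roughly doubled response
```

With equal and opposite FSR drifts the difference grows twice as fast, which is consistent in spirit with the roughly 2× sensitivity gain over the traditional Vernier scheme reported in the abstract.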
Key words
microwave photonics, enhanced Vernier effect, optical fiber sensor, temperature sensor