Low-Complexity Fiber Nonlinearity Compensation Based on Operator Learning over 12,075 km Single-Mode Fiber
JOURNAL OF LIGHTWAVE TECHNOLOGY (2024)
Center for Information Photonics and Communications
Abstract
Fiber nonlinearity is a major constraint on the maximum achievable capacity of long-haul fiber-optic transmission systems. It can be compensated in digital signal processing by inverting the Manakov model with the split-step Fourier method (SSFM) using inverted channel parameters, as implemented by the digital backpropagation (DBP) algorithm. However, practical deployment of DBP and its variants incurs substantial latency and signal-processing resources. Data-driven approximation of partial differential equations (PDEs) is rapidly emerging as a powerful paradigm, and the operator-learning framework in particular has been shown to approximate the solution space of PDEs effectively. Here, we construct an inverse parameterized Manakov model based on a neural operator and use it to compensate fiber nonlinearity impairments at low computational complexity. The proposed model is validated in a 12,075 km wavelength division multiplexing (WDM) system. The received signal waveform compensated by this model closely matches that of single-channel DBP with the optimal step size, with an MSE of 0.0008. A computational-complexity analysis shows that the proposed algorithm incurs a performance loss of only 0.024 dB relative to single-channel DBP at the optimal step size, while its highly parallel structure significantly reduces computational latency.
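For context, DBP inverts the fiber channel by running the split-step Fourier method with negated channel parameters. The sketch below is a minimal single-polarization, single-span illustration of that idea (the paper itself works with the dual-polarization Manakov model); the function name and all numerical parameter values are illustrative assumptions rather than the paper's settings.

```python
# Minimal single-polarization DBP sketch using the symmetric split-step Fourier
# method. Back-propagation applies the forward channel with negated parameters
# (beta2 -> -beta2, gamma -> -gamma, loss replaced by gain). All values below
# are illustrative assumptions, not the system settings used in the paper.
import numpy as np

def dbp_one_span(rx, fs, span_km=75.0, steps_per_span=20,
                 beta2=-21.7e-27, gamma=1.3e-3, alpha_db_km=0.2):
    """Back-propagate one span of the received complex baseband field `rx`
    sampled at rate `fs` (samples/s)."""
    n = rx.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)        # angular-frequency grid (rad/s)
    dz = span_km * 1e3 / steps_per_span                  # spatial step size (m)
    alpha = alpha_db_km * np.log(10) / 10 / 1e3          # power attenuation (1/m)
    beta2_inv, gamma_inv = -beta2, -gamma                # negated channel parameters
    # Half-step linear operator: inverted dispersion plus gain that undoes the span loss.
    half_lin = np.exp((0.5j * beta2_inv * w**2 + alpha / 2) * dz / 2)
    a = np.asarray(rx, dtype=complex)
    for _ in range(steps_per_span):
        a = np.fft.ifft(np.fft.fft(a) * half_lin)        # half linear step
        a *= np.exp(1j * gamma_inv * np.abs(a)**2 * dz)  # inverted Kerr nonlinear phase
        a = np.fft.ifft(np.fft.fft(a) * half_lin)        # half linear step
    return a

# A full link would chain this span by span, e.g.
# for _ in range(num_spans): rx = dbp_one_span(rx, fs)
```

Because each nonlinear step depends on the output of the previous one, this procedure runs sequentially step by step and span by span; this serial structure is the latency bottleneck that the highly parallel operator-learning model in the paper is intended to alleviate.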
Key words
Optical fibers, Finite impulse response filters, Standards, Wavelength division multiplexing, Signal processing algorithms, Optical fiber dispersion, Mathematical models, digital backpropagation, deep learning, fiber nonlinearity compensation, operator learning