
Low-Complexity Fiber Nonlinearity Compensation Based on Operator Learning over 12,075 km Single-Mode Fiber

Journal of Lightwave Technology (2024)

Center for Information Photonics and Communications

Abstract
Fiber nonlinearity is a significant constraint on the maximum achievable capacity of long-distance fiber-optic transmission systems. Fiber nonlinearity can be compensated in digital signal processing by inverting the Manakov model with the split-step Fourier method (SSFM) and appropriately inverted channel parameters, an approach implemented as the digital backpropagation (DBP) algorithm. Nevertheless, the practical deployment of this algorithm and its variants requires significant latency and signal-processing resources. Meanwhile, data-driven approximation of partial differential equations (PDEs) is rapidly emerging as a powerful paradigm, and the operator-learning framework has been shown to effectively approximate the solution spaces of PDEs. Here, we construct an inverse parameterized Manakov model based on a neural operator and use it to compensate fiber nonlinearity impairments at low computational complexity. The proposed model is validated in a 12,075 km wavelength division multiplexing (WDM) system. The received signal waveform compensated by this model closely matches single-channel DBP (optimal step size), with an MSE of 0.0008. Computational complexity analysis shows that the proposed algorithm incurs a performance loss of 0.024 dB relative to single-channel DBP at the optimal step size, but its highly parallel structure significantly reduces computational latency.
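The DBP baseline the abstract describes alternates an inverted linear (dispersion) step in the frequency domain with an inverted nonlinear (Kerr) phase rotation in the time domain. As a rough illustration, here is a minimal scalar-NLSE sketch of one such split step; the paper itself inverts the dual-polarization Manakov model, and all function names, parameter values, and sign conventions below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dbp_step(signal, dt, beta2, gamma, dz):
    """One split step of digital backpropagation (scalar-NLSE sketch):
    undo chromatic dispersion in the frequency domain, then undo the
    Kerr nonlinear phase in the time domain, over a segment of length dz."""
    n = signal.size
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)  # angular frequency grid
    # Linear step: dispersion operator with sign-flipped beta2
    signal = np.fft.ifft(np.fft.fft(signal) * np.exp(0.5j * beta2 * omega**2 * dz))
    # Nonlinear step: Kerr phase rotation with sign-flipped gamma
    signal = signal * np.exp(-1j * gamma * np.abs(signal)**2 * dz)
    return signal

def dbp(signal, dt, beta2, gamma, span_len, n_steps):
    """Backpropagate through one span using n_steps split steps; the
    latency/complexity trade-off the paper targets grows with n_steps."""
    dz = span_len / n_steps
    for _ in range(n_steps):
        signal = dbp_step(signal, dt, beta2, gamma, dz)
    return signal
```

Both sub-steps are pure phase multiplications, so each split step is unitary and preserves signal energy; the cost lies in the repeated FFT/IFFT pairs per step per span, which is the serial bottleneck the neural-operator approach is meant to avoid.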
Key words
Optical fibers, finite impulse response filters, standards, wavelength division multiplexing, signal processing algorithms, optical fiber dispersion, mathematical models, digital backpropagation, deep learning, fiber nonlinearity compensation, operator learning

Key points: This paper proposes a low-complexity fiber nonlinearity compensation method based on a neural operator, constructing an inverse parameterized Manakov model to achieve efficient compensation in long-haul single-mode fiber communication systems.

Methods: The study builds an inverse parameterized Manakov model within the operator-learning framework, compensating fiber nonlinearity through data-driven approximation of the underlying partial differential equations (PDEs).

Experiments: The proposed model was validated in a 12,075 km WDM system. The compensated received signal waveform closely matches that of single-channel DBP (optimal step size), with a mean squared error (MSE) of 0.0008. Computational complexity analysis shows that the algorithm incurs a 0.024 dB performance loss relative to single-channel DBP at the optimal step size, while its highly parallel structure substantially reduces computational latency.