Is there a role for Machine Learning in Optical Communications and Networks?
Updated: Jan 15, 2018
This is my 10-min invited presentation at the ECOC 2017 Workshop on
"Opportunities for machine learning in optical communication: from components characterisation, systems design and network optimisation"
The slides of my talk can be downloaded from this link
Machine Learning has become the hottest area of research in recent years and it has captured the attention of the media and the general public at large. It seems that this wave of machine learning advances has wide and profound implications for many industries, and may even fundamentally disrupt humans' role in society. It is fun, and also scary, to speculate on how we will be overruled by robots. But before that happens, let's get real here and talk about immediate applications. In particular, is there a role for machine learning in optical communications and networks?
Difference and similarity between machine learning and communications
In my view, machine learning is fundamentally about pattern recognition: classifying input data into a discrete set of output classes or labels. Regression and prediction are consequences of knowing the underlying pattern and mapping the current input into a particular class. Speaking of classification, we telecom people in fact do classification ALL THE TIME. But unlike standard machine learning problems, communication engineers actually get to design our signals (or input data) so that the classification boundaries are simple straight lines and different class labels occur with equal probability (to maximize source entropy).
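To make this concrete, here is a minimal sketch (my own toy example, not from the talk): for a QPSK constellation over an additive white Gaussian noise channel, the optimal "classifier" is trivial, because we placed the symbols so that the decision boundaries are simply the real and imaginary axes.

```python
import numpy as np

rng = np.random.default_rng(0)

# QPSK: four equiprobable symbols on the corners of a square, unit power.
symbols = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

# Transmit 10k random symbols over an AWGN channel.
tx_idx = rng.integers(0, 4, size=10_000)
tx = symbols[tx_idx]
rx = tx + 0.1 * (rng.standard_normal(tx.shape) + 1j * rng.standard_normal(tx.shape))

# Because we designed the constellation ourselves, optimal detection is just
# nearest-symbol decision -- equivalently, checking the signs of I and Q.
rx_idx = np.argmin(np.abs(rx[:, None] - symbols[None, :]), axis=1)

ser = np.mean(rx_idx != tx_idx)
print(f"symbol error ratio: {ser:.4f}")
```

No training, no learned model: the "decision regions" are known in closed form because the transmitter chose them.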
Even when signals undergo linear distortions that induce inter-symbol interference (ISI), i.e. memory, we have maximum-likelihood sequence detection (MLSD), a provably optimal signal processing technique for minimizing the bit error ratio (classification errors). Therefore, for linear systems, we already have the best algorithm and there is no place for machine learning to further improve transmission performance.
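As an illustration (again a toy sketch of my own, with a made-up two-tap channel h = [1.0, 0.5] assumed known at the receiver), MLSD for BPSK over an ISI channel can be written as a short Viterbi search over the trellis of previous symbols:

```python
import numpy as np

def mlsd_viterbi(rx, h):
    # Trellis state = previous BPSK symbol x[n-1]; the transmitter is
    # assumed to start from a known symbol x[-1] = +1.
    states = (-1.0, 1.0)
    cost = {-1.0: np.inf, 1.0: 0.0}
    path = {-1.0: [], 1.0: []}
    for y in rx:
        new_cost, new_path = {}, {}
        for cur in states:                       # hypothesis for x[n]
            best_prev, best = 1.0, np.inf
            for prev in states:                  # hypothesis for x[n-1]
                m = cost[prev] + (y - (h[0] * cur + h[1] * prev)) ** 2
                if m < best:
                    best, best_prev = m, prev
            new_cost[cur] = best
            new_path[cur] = path[best_prev] + [cur]
        cost, path = new_cost, new_path
    return np.array(path[min(cost, key=cost.get)])

rng = np.random.default_rng(1)
h = [1.0, 0.5]                                   # hypothetical 2-tap ISI channel
tx = rng.choice([-1.0, 1.0], size=200)
prev = np.concatenate(([1.0], tx[:-1]))          # x[n-1], starting from +1
rx = h[0] * tx + h[1] * prev + 0.1 * rng.standard_normal(tx.size)

detected = mlsd_viterbi(rx, h)
ber = np.mean(detected != tx)
print(f"bit error ratio after MLSD: {ber:.4f}")
```

The point is that once the linear channel model is known, the optimal sequence detector is a deterministic dynamic program; there is nothing left for a learned model to improve on.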
Where will machine learning be useful in optical communications and networks?
Machine learning may have a role in scenarios in which the physics and mathematics of the problem cannot be explicitly stated. These include:
Fiber nonlinearity compensation
Optical Performance Monitoring
Impairment-aware Software Defined Networks
For the problems mentioned above, either we are not able to derive the exact physics, or the problem at hand is not expected to have an underlying physical model connecting the input data to the output class.
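Taking fiber nonlinearity as an example, here is a deliberately artificial sketch (the "channel" below is a made-up power-dependent phase rotation, loosely reminiscent of self-phase modulation, not a real fiber model): when the receiver does not know the channel physics, a model-free classifier such as k-nearest neighbours, trained on labeled pilot symbols, can learn the warped decision regions that a linear-channel detector gets wrong.

```python
import numpy as np

rng = np.random.default_rng(2)

# 16-QAM constellation, normalized to unit average power.
levels = np.array([-3.0, -1.0, 1.0, 3.0])
symbols = (levels[:, None] + 1j * levels[None, :]).ravel() / np.sqrt(10)

def channel(x):
    # Toy stand-in for an unknown nonlinearity: a power-dependent phase
    # rotation plus additive Gaussian noise. Purely illustrative.
    rotated = x * np.exp(1j * 0.6 * np.abs(x) ** 2)
    noise = 0.06 * (rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape))
    return rotated + noise

# Labeled pilots play the role of training data; no channel model is used.
train_idx = rng.integers(0, 16, size=4000)
train_rx = channel(symbols[train_idx])
test_idx = rng.integers(0, 16, size=2000)
test_rx = channel(symbols[test_idx])

def knn_detect(rx, ref, labels, k=15):
    # Classify each received sample by majority vote of its k nearest pilots.
    d = np.abs(rx[:, None] - ref[None, :])
    nearest = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(labels[row], minlength=16).argmax()
                     for row in nearest])

# Baseline that wrongly assumes a linear channel: nearest ideal symbol.
naive_idx = np.argmin(np.abs(test_rx[:, None] - symbols[None, :]), axis=1)
knn_idx = knn_detect(test_rx, train_rx, train_idx)

print("nearest-symbol SER:", np.mean(naive_idx != test_idx))
print("k-NN SER:          ", np.mean(knn_idx != test_idx))
```

The learned detector wins here precisely because the decision boundaries are no longer the straight lines we designed, and we have no closed-form expression for them.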
The way forward
Like many other areas of science, engineering and medicine, machine learning is expected to become a new tool to help advance the field, and optical communications and networks are no exception. The more interesting question is whether machine learning will take center stage in the next wave of advances in optics, or serve as another approach to certain subsets of problems surrounding the core issues of our field. My guess is the latter, but the jury is still out. Let's see how this field unfolds in the next year.