Machine learning and its applications in pure science and medicine are constantly evolving and becoming more effective. ML has extended its reach into physics, and in particular into optics. With optical systems such as lasers steadily improving in their functional aspects, it is no surprise that ML is playing an important role in their development. In this article, we explore how ML helps achieve self-tuning of the laser cavity in mode-locked lasers. Self-tuning compensates for the environmental and optical fluctuations that disrupt mode-locking, and thereby achieves better accuracy.
What Are Mode-locked Lasers?
Unlike most light sources, a laser emits light in a narrow, well-defined direction rather than spreading in all directions. In devices such as resonators, where the laser is the main optical component, this directionality lets the light reflect back and forth very strongly between a pair of mirrors. The mirror arrangement forms what is called an optical cavity, in which the light builds up into a standing wave that remains fixed in space rather than propagating away. This feedback is necessary to compensate for energy losses (optical losses).
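The article gives no numbers, but the standing-wave condition fixes the spacing between a cavity's resonant (longitudinal) modes at c/2L, where L is the mirror separation. A quick worked sketch using that standard relation (the 1 m cavity length is purely an illustrative assumption):

```python
# Longitudinal mode spacing (free spectral range) of a simple two-mirror cavity.
# The standing-wave condition only allows frequencies f_q = q * c / (2L),
# so adjacent modes are separated by c / (2L).
C = 299_792_458.0  # speed of light in vacuum, m/s

def mode_spacing_hz(cavity_length_m: float) -> float:
    """Frequency spacing between adjacent longitudinal cavity modes."""
    return C / (2.0 * cavity_length_m)

spacing = mode_spacing_hz(1.0)  # illustrative 1 m cavity
print(f"{spacing / 1e6:.1f} MHz")  # ~149.9 MHz
```

A shorter cavity spreads the modes further apart, which is why compact mode-locked lasers have higher pulse repetition rates.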

To establish and maintain this fixed phase relationship between the cavity's modes, a technique called mode-locking is used. Mode-locking causes the laser to emit its light in very short bursts, or pulses, with durations on the order of a femtosecond (10⁻¹⁵ of a second). This is the principle behind mode-locked lasers. Constructing lasers and similar optical systems that rely on mode-locking is a daunting task: many material-specific optical effects, such as diffraction and birefringence, disturb mode-locking. This is where an ML algorithm can help nullify those factors. With components like servo-controllers coupled with training software, mode-locked lasers can be controlled to achieve optimal output, making them more effective in practical applications such as corneal surgery and optical data storage.
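To see why locking the mode phases produces short pulses, one can sum a handful of equal-amplitude cavity modes: with identical phases the fields add coherently into a sharp spike of peak intensity N², whereas random phases give a noisy, low-peak waveform. A minimal illustration (the 10-mode count and the sampling grid are arbitrary choices, not values from the study):

```python
import math
import random

def peak_intensity(phases, samples=2000):
    """Peak intensity of a superposition of equal-amplitude modes
    cos(2*pi*n*t + phase_n), scanned over one repetition period t in [0, 1)."""
    peak = 0.0
    for k in range(samples):
        t = k / samples
        field = sum(math.cos(2 * math.pi * n * t + p)
                    for n, p in enumerate(phases))
        peak = max(peak, field * field)
    return peak

N = 10
locked = peak_intensity([0.0] * N)  # all modes in phase -> peak = N**2 = 100
random.seed(0)
unlocked = peak_intensity([random.uniform(0, 2 * math.pi) for _ in range(N)])
print(locked, unlocked)  # the locked peak (100) far exceeds the unlocked one
```

The locked case concentrates all the energy into one brief spike per round trip, which is exactly the femtosecond pulse train a mode-locked laser emits.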
Machine Learning And Maths In Lasers
Researchers at the University of Washington, Seattle, developed an equation-free method for eliminating disturbances in mode-locked lasers. The method relies on objective functions rather than on a model of the optics. The ML is divided into two stages: training and execution. The training stage identifies the best-performing regions of the objective function through an exhaustive search of the parameter space. The execution stage uses a sparse-sensing procedure to recognise the current operating state within that parameter space, and then applies an extremum-seeking control protocol to solve the optimisation problem and maintain the solution.
The paper lays out the implementation of the ML, starting with data mining and measurements over a parameter space. Since the study focuses on mode-locked lasers, the researchers follow a toroidal search algorithm to assess mode-locking performance, which depends on two factors: the servo-controller's ability to execute the algorithm and its position in the laser. This is what allows the laser system to attain self-tuning. The toroidal search begins by identifying the input parameters that actuate the controllers (the polariser and waveplate angles, in this case). From these parameters, a torus (4N-dimensional for N waveplate–polariser sets) is constructed for the toroidal algorithm.
In the next step, sampling (data mining) is carried out along a time-series trajectory on the 4N-torus, given by:

θⱼ(t) = ωⱼt + θⱼ₀

where j indexes the parameters, ωⱼ is the angular frequency assigned to the j-th parameter in the toroidal function, and θⱼ₀ is its initial angle. The equation holds for every j up to 4N, and together these trajectories implement the toroidal search algorithm.
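Under this parameterisation each actuator angle advances at its own angular frequency, and choosing incommensurate frequencies makes the trajectory wind around the torus without closing on itself, so over time it samples the whole parameter space. A minimal sketch for a single waveplate–polariser set, i.e. four angles (the specific frequencies and step size are illustrative assumptions):

```python
import math

def torus_trajectory(omegas, theta0, steps, dt=0.1):
    """Sample theta_j(t) = omega_j * t + theta_j0 (mod 2*pi) at discrete times.

    omegas, theta0: angular frequencies and initial angles, one per parameter.
    Returns a list of angle tuples, one tuple per time step."""
    two_pi = 2 * math.pi
    return [tuple((w * (k * dt) + t0) % two_pi
                  for w, t0 in zip(omegas, theta0))
            for k in range(steps)]

# Four actuator angles (N = 1): mutually incommensurate frequencies ensure
# the trajectory eventually covers the 4-torus densely.
omegas = [1.0, math.sqrt(2), math.sqrt(3), math.sqrt(5)]
path = torus_trajectory(omegas, [0.0, 0.0, 0.0, 0.0], steps=500)
print(len(path), len(path[0]))  # 500 4
```

Each sampled angle tuple would be sent to the servo-controllers, and the resulting laser output recorded as one training measurement.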
Once these values are generated, they are subjected to sparse sampling, because birefringence is the primary optical factor affecting the algorithm in mode-locked lasers. This is what makes the two ML stages mentioned earlier workable. In the words of the researchers:
“In testing, the birefringence K is varied following a Gaussian random walk. For each trial, the spectrogram corresponding to the current birefringence is computed and the L1-norm sparse search is executed. The recognition algorithm is tested in two scenarios:
- Well-aligned data given the assumption that the servo motors that control the waveplates and polarisers work without error
- The mis-aligned data that considers the error in the initial angle of the servo motors
A birefringence recognition rate of 98 percent is achieved with aligned data while 88 percent recognition is achieved in the mis-aligned (time series) scenario.”
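The paper's recognition step solves an L1-norm sparse-representation problem over a library of stored spectrograms. As a greatly simplified stand-in for that search, one can classify a measured spectrogram by its smallest L1 distance to the library entries; the toy library values below are invented purely for illustration:

```python
def recognise_birefringence(measured, library):
    """Return the library key whose reference spectrogram is closest
    to the measurement in L1 (sum-of-absolute-differences) distance.

    measured: flattened spectrogram (list of floats)
    library:  dict mapping birefringence value -> reference spectrogram"""
    def l1(ref):
        return sum(abs(m - r) for m, r in zip(measured, ref))
    return min(library, key=lambda k: l1(library[k]))

# Toy library: three known birefringence states with distinct signatures.
library = {
    0.1: [1.0, 0.0, 0.0, 0.0],
    0.2: [0.0, 1.0, 0.0, 0.0],
    0.3: [0.0, 0.0, 1.0, 0.0],
}
noisy = [0.1, 0.9, 0.05, 0.0]  # a noisy measurement of the K = 0.2 state
print(recognise_birefringence(noisy, library))  # 0.2
```

Once the birefringence state is recognised, the controller can jump straight to the actuator angles that worked best for that state during training.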
Based on these results, an objective function is chosen, and the extremum-seeking control method is used to find a maximum of this objective function by applying sinusoidal variations to the input parameters. After this step, the two ML stages, learning and execution, are performed in the mode-locked laser cavity. The flowchart is illustrated below.
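Extremum-seeking control can be sketched in a few lines: perturb the input sinusoidally, multiply the measured objective by the same sinusoid to estimate the local gradient, and integrate that estimate. The toy quadratic objective and the gain settings below are illustrative assumptions, not values from the study:

```python
import math

def extremum_seek(objective, u0, steps=3000, amp=0.2, gain=0.1, omega=1.0):
    """Drive u toward a maximum of `objective` without a model:
    perturb with amp*sin(omega*k), demodulate the measured response
    with the same sinusoid (a gradient estimate), and integrate."""
    u = u0
    for k in range(steps):
        s = math.sin(omega * k)
        reading = objective(u + amp * s)   # perturbed measurement
        u += gain * reading * s            # demodulate and integrate
    return u

# Toy objective peaked at u = 2.0 (a stand-in for cavity output energy
# as a function of a single actuator angle).
best = extremum_seek(lambda u: -(u - 2.0) ** 2, u0=0.0)
print(best)  # converges near u = 2.0
```

Because the loop only ever sees measured values of the objective, it keeps tracking the peak even as environmental drift moves it, which is exactly the "maintain the solution" role it plays in the laser.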

This algorithm enables self-tuning in the laser optical system. As mentioned earlier, optical factors such as birefringence are compensated, and the mode-locked laser achieves maximum performance.
Conclusion
As this method shows, ML can significantly help control systems achieve self-tuning in the field of physics, and the approach could equally be explored in other branches of science, such as biology. The possibility of automating for accuracy will be greatly enhanced with ML. It is also worth noting that the study above relies heavily on good optical data: this data-driven approach is what makes ML applications realistic and practical.
The post Now Machine Learning Can Even Help Develop Self-Tuning Optical Systems appeared first on Analytics India Magazine.