My broad research interests are in nonlinear adaptive and robust control of uncertain dynamical systems, with a current focus on using reinforcement learning methods to improve closed-loop system performance. I am interested in both the applied and theoretical aspects of learning-based/intelligent control systems, which adapt and reconfigure under uncertainties and faults while still performing optimally. In the future, I also plan to work on cyber-physical systems, which combine computation, communication, and control, e.g., smart grids, intelligent highway systems, and multi-agent systems.

Current Research

Adaptive Optimal Control for Uncertain Systems

This research focuses on the development of on-policy adaptive optimal controllers (AOC) for partially or completely unknown continuous-time (CT) systems, while ensuring stability of the closed-loop system and convergence of the tracking errors. Our initial work proposed an adaptive linear quadratic regulator (LQR) design for model-free LTI systems, under the assumption of persistence of excitation (PE), in which the control policy is updated continuously; this is in contrast to existing policy iteration (PI) algorithms in the recent literature, where the policy is updated iteratively after batch processing of data stored along the system trajectory. Although PE is a well-proven tool for convergence analysis, the condition is impractical to verify because it depends on future values of the regressor signal. In our next AOC design, the restrictive PE condition is therefore relaxed: past stored data are used together with current data to estimate the unknown control parameters under a milder rank condition on the matrix formed from the stored data (which, unlike the PE condition, is verifiable online), while ensuring uniformly ultimately bounded convergence. In a recent result, we proposed a method based on the popular Kleinman's algorithm for completely unknown LTI systems, in which an exponentially convergent system identifier, sampled at regular time intervals, replaces the actual system matrices in the successive iterations of Kleinman's algorithm.
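As a concrete illustration, Kleinman's algorithm solves the LQR algebraic Riccati equation by iterating Lyapunov-equation solves; each step improves the current stabilizing gain. The sketch below is a minimal model-based version on an arbitrary double-integrator example (not a system from our papers); the identifier-based scheme mentioned above replaces the true matrices A, B with their online estimates.

```python
import numpy as np

def lyap(Ac, Qc):
    """Solve Ac.T @ P + P @ Ac = -Qc via Kronecker vectorization (row-major vec)."""
    n = Ac.shape[0]
    I = np.eye(n)
    M = np.kron(Ac.T, I) + np.kron(I, Ac.T)
    P = np.linalg.solve(M, -Qc.reshape(-1)).reshape(n, n)
    return (P + P.T) / 2  # symmetrize against round-off

def kleinman(A, B, Q, R, K0, iters=20):
    """Kleinman's iteration: given a stabilizing initial gain K0, alternately
    solve a Lyapunov equation for the closed loop and update the gain;
    the iterates converge to the Riccati solution P* and optimal gain K*."""
    K = K0
    for _ in range(iters):
        Ac = A - B @ K
        P = lyap(Ac, Q + K.T @ R @ K)
        K = np.linalg.solve(R, B.T @ P)
    return P, K

# Double integrator with Q = I, R = 1; K0 = [1, 1] is stabilizing.
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])
Q, R = np.eye(2), np.eye(1)
P, K = kleinman(A, B, Q, R, K0=np.array([[1., 1.]]))
# For this plant the closed-form optimum is K* = [1, sqrt(3)].
```

The Lyapunov solve is written with a plain Kronecker product to keep the example self-contained; in practice a dedicated solver (e.g. a Bartels-Stewart routine) would be used.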

Initial Excitation-based Adaptive Control

We focus on the issue of parameter convergence in adaptive control, one of the crucial aspects in this area of research. Classical adaptive controllers typically guarantee global Lyapunov stability of the tracking and parameter estimation errors, while ensuring asymptotic convergence of the tracking error to zero. Parameter convergence, however, is guaranteed only if the restrictive condition of persistence of excitation (PE) is satisfied by the regressor signal. The adaptive controller we have developed builds on a comparatively recent area of adaptive control, called composite/combined adaptive control. The designed controller guarantees parameter convergence under a milder assumption, termed initial excitation (IE). The IE condition is significantly less restrictive than PE since, unlike PE, it does not rely on future values of the signal; moreover, it has been proved to be verifiable online by checking a rank condition. The work ensures exponential stability of the tracking and parameter estimation errors once the online-verifiable IE condition is satisfied.
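The online verifiability of IE can be illustrated by monitoring the minimum eigenvalue of the running integral of the regressor outer product: once it crosses a positive threshold, excitation is confirmed from past data alone, with no assumption about the future. A minimal sketch, assuming an illustrative two-dimensional regressor and threshold (not the signals or constants from our papers):

```python
import numpy as np

def ie_satisfied(gram, threshold=1e-3):
    """IE-style rank check: the integrated regressor Gram matrix is
    'sufficiently full rank' once its smallest eigenvalue exceeds a threshold."""
    return np.linalg.eigvalsh(gram)[0] > threshold

dt = 0.01
gram = np.zeros((2, 2))      # running integral of phi(t) phi(t)^T
satisfied_at = None
for k in range(1000):
    t = k * dt
    phi = np.array([np.sin(t), np.cos(t)])  # regressor exciting both directions
    gram += np.outer(phi, phi) * dt
    if satisfied_at is None and ie_satisfied(gram):
        satisfied_at = t     # IE verified online, using only data up to time t
```

Contrast with PE, which would require the same eigenvalue bound to hold over every future window of fixed length, and hence cannot be checked at any finite time.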

Adaptive Model Predictive Control (AMPC)

We focus on the problem of controlling systems with parametric uncertainties in the presence of actuator (input) constraints and safety (output/state) constraints. Model Predictive Control, with its efficient online constraint-handling methodology, is used to control an estimated model of the unknown system within the imposed constraint limits. The parameters of the estimated model are updated online by a suitably devised adaptive law that ensures stability of the estimation errors. The controller design poses two main challenges: 1) suitably characterizing the terminal constraint set required to ensure stability of the closed-loop system, and 2) ensuring recursive feasibility of the MPC problem in the presence of state-dependent errors introduced by the parameter updates at every instant.
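The certainty-equivalence structure (predict and optimize with the estimated model, update the estimate from the prediction error) can be caricatured on a scalar plant with a one-step horizon. This is a deliberately simplified sketch: the plant, gains, and normalized-gradient update below are illustrative, and the terminal-set and recursive-feasibility machinery of the actual AMPC design is omitted entirely.

```python
import numpy as np

theta_true, theta_hat = 1.5, 1.0   # unknown plant parameter vs. online estimate
x, gamma, u_max = 0.5, 1.0, 1.0    # state, adaptation gain, input constraint

for _ in range(100):
    # One-step "MPC": optimal unconstrained input for the *estimated* model
    # x+ = theta_hat*x + u, projected onto the input constraint |u| <= u_max.
    u = np.clip(-theta_hat * x, -u_max, u_max)
    x_next = theta_true * x + u               # true (unknown) plant
    e = x_next - (theta_hat * x + u)          # one-step prediction error
    theta_hat += gamma * e * x / (1.0 + x*x)  # normalized-gradient adaptive law
    x = x_next
```

Even in this toy setting the two challenges above are visible: the input projection is what a terminal/feasibility analysis must account for, and the estimate `theta_hat` changes at every step, perturbing the model the optimizer just used.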

Delay Compensation for Uncertain Nonlinear Systems

This research focuses on the design of robust compensators for nonlinear systems with an unknown, constant time delay in the input and uncertainty in the system dynamics. The control law is designed based on the filtered tracking error and a definite integral of past control values, and is capable of compensating large input delays. In the stability analysis, Lyapunov-Krasovskii functionals are used to prove a globally uniformly ultimately bounded tracking result, provided certain sufficient gain conditions are satisfied. The result has been extended to systems with unknown time-varying input delay, under the assumption that the first derivative of the delay is bounded by a known positive constant. Currently, the focus is on the problem of state delay in nonlinear systems, using an adaptive controller to estimate the delay.
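A representative form of these constructions (notation illustrative; the exact error system, gains, and functionals in our papers differ) is a filtered tracking error driving the control, with the delayed input history entering the analysis through a Lyapunov-Krasovskii term:

```latex
% filtered tracking error, with e the tracking error and \alpha > 0 a filter gain
r(t) = \dot{e}(t) + \alpha\, e(t)
% the input delay \tau is compensated through a finite integral of past control
% values, which appears in a Lyapunov-Krasovskii functional of the typical form
V_{LK}(t) = \omega \int_{t-\tau}^{t} \left( \int_{s}^{t} \lVert u(\theta) \rVert^{2}\, d\theta \right) ds , \qquad \omega > 0
```

Differentiating such a term along the trajectories produces $\lVert u(t)\rVert^{2}$ and delayed-input terms that cancel against the control, which is where the sufficient gain conditions arise.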

Aerial Manipulation

Much aerial robotics research focuses on autonomous control and navigation within unknown, unstructured environments, to perform surveillance and data acquisition in areas that are dangerous for human operators and inaccessible to ground vehicles. Nowadays, driven by both technology push and market pull, attention has shifted to aerial robots that not only fly autonomously for visual inspection and remote sensing, but can also physically interact with the surrounding environment to accomplish real robotic tasks. Motivated by this, work is ongoing to model, analyse, and design a hierarchical controller for an aggressively manoeuvrable multi-rotor vehicle equipped with an underactuated manipulator.

Past Research

Reinforcement Learning-based Feedback Control

Robust Identification-Based Control

Control of Robotic Systems Undergoing Impact