I'm interested in building systems that take inspiration from natural intelligence, tackling problems such as continual learning, composition-based learning, and the consolidation of multimodal input streams - all of which the human brain solves effortlessly. I am also interested in various aspects of memory - its relation to neural architectures, its ability to link disparate knowledge sources, its emergence in large pre-trained models, and its importance in enabling rapid learning of new tasks and cross-task knowledge transfer. A more detailed version can be found here.
I am a 3-year M.Tech. (RA) student at the Indian Institute of Technology Hyderabad, advised by Prof. Vineeth N. Balasubramanian. My current research interests include neuro-inspired AI, learning from multimodal data, and explainable neural architectures. I am also collaborating with Prof. Chirag Agarwal at the University of Virginia. My thesis focuses on adapting pre-trained models for downstream tasks - using such models for concept-based explanations, applying them in continual learning settings, and analyzing fine-tuning methods. I am also interested in understanding and modelling human behavior, with the aim of creating systems that can understand user requirements and adapt their outputs accordingly.
I have previously worked with Prof. R. Venkatesh Babu at the Indian Institute of Science across multiple areas of Computer Vision and Computational Photography, particularly High Dynamic Range Imaging and Representation Learning. I briefly worked at Qualcomm R&D on no-reference image quality analysis, image super-resolution, and image-to-image translation under hardware constraints. I have also interned with the Adobe Media Data Science and Research (MDSR) Team, where I collaborated with Balaji Krishnamurthy and Yaman Kumar on aligning LLMs to human opinions.