About me
This is a page not in the main menu.
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
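As a hedged illustration (the key name comes from the note above; everything else about the file is assumed, since Jekyll configs vary by site), the relevant line in config.yml looks like:

```yaml
# config.yml -- Jekyll site configuration (other keys omitted)
future: false   # when false, posts dated in the future are not built or published
```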
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Implementation of a project. This is a dummy file.
Authors: Satish Kumar Keshri, Nazreen Shah, Ranjitha Prasad, IIIT-Delhi
Under review, 2024
The holy grail of machine learning is to enable AI systems to learn continuously and adapt to changing environments. Continual Federated Learning (CFL) enhances the efficiency, privacy, and scalability of federated learning systems while simultaneously learning new tasks and preventing catastrophic forgetting of previous tasks. The primary challenge of CFL is global catastrophic forgetting, where the accuracy of the global model trained on new tasks declines on the old tasks. In this work, we propose a novel aggregation strategy for memory-based CFL and provide a convergence analysis focused on the factors that degrade CFL performance over $T$ communication rounds, such as client drift, bias, and forgetting. We show that the proposed CFL framework converges at a rate of $\mathcal{O}(1/\sqrt{T})$ on the current task while circumventing the effects of bias and global catastrophic forgetting. We provide empirical evidence that the proposed technique outperforms several baselines with respect to metrics such as accuracy and forgetting.
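The abstract does not spell out the aggregation rule, so the following is only a minimal sketch of the memory-based CFL setting it targets, assuming FedAvg-style weighted averaging at the server and a per-client episodic memory that replays stored past-task samples during local training. All names (Client, local_update, server_aggregate) and the linear-regression clients are illustrative, not the authors' implementation.

```python
import numpy as np

# Sketch of memory-based Continual Federated Learning (CFL).
# Assumptions (not from the paper): linear-regression clients, FedAvg-style
# server aggregation, and client-side rehearsal from a small memory of
# past-task samples to mitigate catastrophic forgetting.

class Client:
    def __init__(self, rng, dim=5, memory_size=50):
        self.rng = rng
        self.memory_X = np.empty((0, dim))   # rehearsal buffer: past-task inputs
        self.memory_y = np.empty(0)          # rehearsal buffer: past-task targets
        self.memory_size = memory_size

    def local_update(self, w, X, y, lr=0.01, steps=20):
        """A few gradient steps on current-task data mixed with replayed memory."""
        X_mix = np.vstack([X, self.memory_X])
        y_mix = np.concatenate([y, self.memory_y])
        for _ in range(steps):
            grad = 2 * X_mix.T @ (X_mix @ w - y_mix) / len(y_mix)
            w = w - lr * grad
        return w

    def store(self, X, y):
        """Keep at most memory_size of the most recent past-task samples."""
        self.memory_X = np.vstack([self.memory_X, X])[-self.memory_size:]
        self.memory_y = np.concatenate([self.memory_y, y])[-self.memory_size:]


def server_aggregate(client_weights, num_samples):
    """FedAvg: average client models weighted by local sample counts."""
    total = sum(num_samples)
    return sum(w * (n / total) for w, n in zip(client_weights, num_samples))


rng = np.random.default_rng(0)
dim, num_clients = 5, 4
w_global = np.zeros(dim)
clients = [Client(rng, dim) for _ in range(num_clients)]
for task in range(2):                        # sequence of tasks
    w_true = rng.normal(size=dim)            # synthetic task, for the sketch only
    for rnd in range(30):                    # T communication rounds per task
        updates, counts = [], []
        for c in clients:
            X = rng.normal(size=(40, dim))
            y = X @ w_true + 0.1 * rng.normal(size=40)
            updates.append(c.local_update(w_global.copy(), X, y))
            counts.append(len(y))
        w_global = server_aggregate(updates, counts)
    for c in clients:                        # populate memories after each task
        X = rng.normal(size=(20, dim))
        c.store(X, X @ w_true)
```

The rehearsal step is what distinguishes this from plain FedAvg: each client's local loss covers old-task samples as well as new ones, so the averaged global model is pulled toward solutions that remain accurate on previous tasks.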
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.