I am a final-year undergraduate student in Computer Science and Artificial Intelligence at Poznan University of Technology, Poland. My research interests lie in machine learning and explainable AI for enhancing the transparency, safety, and trustworthiness of AI systems, with an emphasis on counterfactual explanations. Recently, I have also been working on decentralized federated learning.

I work as a research assistant in the Machine Learning Laboratory at Poznan University of Technology, under the supervision of Prof. Jerzy Stefanowski and Dr. Mateusz Lango, where I develop robust counterfactual explanations to improve the interpretability of black-box models in dynamic environments. My ongoing project involves designing a statistical approach to generate explanations that remain effective despite model changes, aiming to increase their actionability for end-users.

Last summer, I worked at the AutonLab (Robotics Institute, Carnegie Mellon University) under the supervision of Dr. Artur Dubrawski, where my research focused on adaptive strategies for handling persistent agent failure or loss in the decentralized federated learning setting. I had also spent the previous summer there, working on the formal verification of Bayesian networks to ensure that critical design specifications are met in AI systems.

I have also been a member of an R&D team at the Poznan Supercomputing and Networking Center (Polish Academy of Sciences) since 2021. At PSNC, I have contributed to various EU-funded projects, including analyzing the behavior of black-box models and implementing anomaly detection solutions in industrial automotive settings and on HPC clusters.
