Mahsa Mozafarinia

Data Scientist | AI Researcher | Mathematician



Bio

I am a researcher passionate about building efficient, interpretable, and robust AI systems. My work focuses on neural network pruning, data efficiency, and explainability in deep learning, with applications in vision, language, and graph learning.
I enjoy connecting theory with real-world practice, whether that means making neural networks smaller and faster, adapting models across domains, or explaining why a model made a particular decision. I am currently based in the U.S. and always looking to connect with collaborators or teams passionate about advancing AI.

Research Interests

Machine Learning, Deep Learning, Data Science, Neural Network Optimization, Data Efficiency, Robust Neural Networks, Continual Learning, Network Interpretability, Natural Language Processing, Graph Neural Networks, Graph Theory and Applications, Combinatorics, Probability, Mathematics

Honors & Awards

  • Herz Award – Alexander von Humboldt Foundation, University of Konstanz, Germany (2021–2022)
  • Online ZUKOnnect Ambassador – University of Konstanz, Germany (2022)
  • Admitted as Distinguished Talent (Ph.D.) – Shahid Beheshti University, Iran (2018)
  • Admitted as Distinguished Talent (M.Sc.) – Isfahan University of Technology, Iran (2015)
  • Ranked 2nd out of 92 M.Sc. students – Isfahan University of Technology, Iran (2017)
  • Reviewer for Discrete Applied Mathematics

Research Projects

My recent projects in deep learning, continual learning, and neural network interpretability.

Generative Sample Removal in Continual Learning (Published in CVPR 2025 Workshop SynData4CV)

We investigate using synthetic data (from GANs and diffusion models) to replace natural samples in continual task learning. Our EpochLoss strategy removes uninformative examples, improving adversarial robustness and model generalization.
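As a rough illustration (not the published implementation), a loss-based sample-removal step in this spirit might track each sample's training loss across epochs and drop the ones that remain uninformative; the function name, scoring rule, and `keep_ratio` threshold below are all hypothetical:

```python
import numpy as np

def select_informative(per_epoch_losses, keep_ratio=0.8):
    """Hypothetical sketch: rank samples by mean loss across epochs
    and keep only the most informative fraction.

    per_epoch_losses: array of shape (epochs, n_samples) holding each
    sample's training loss recorded at every epoch.
    """
    scores = per_epoch_losses.mean(axis=0)          # average loss per sample
    n_keep = max(1, int(keep_ratio * scores.size))  # how many samples survive
    # Samples the model already fits with consistently low loss are
    # treated as uninformative and removed; high-loss samples are kept.
    keep_idx = np.argsort(scores)[-n_keep:]
    return np.sort(keep_idx)

# Toy usage: losses for 5 samples recorded over 3 epochs
losses = np.array([[0.9, 0.10, 0.5, 0.05, 0.7],
                   [0.8, 0.05, 0.4, 0.02, 0.6],
                   [0.7, 0.02, 0.3, 0.01, 0.5]])
print(select_informative(losses, keep_ratio=0.6))  # → [0 2 4]
```

In this toy run, samples 1 and 3 converge to near-zero loss early and are pruned, while the harder samples 0, 2, and 4 are retained for further training.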

Network Compression

Explaining Deep Network Compression via Latent Spaces (Published in ACM Transactions on Probabilistic Machine Learning)

A theoretical framework to explain DNN pruning through information-theoretic divergence in latent spaces. Introduces novel projection metrics AP2 and AP3 and validates them on ResNet, VGG16, and ViT.

Education

University of Maine, USA (2022 – present)

Ph.D. Student in Computer Science. Research focuses on deep learning model interpretability and optimization, aiming to make neural networks more compact, robust, and explainable.
GPA: 3.953 / 4.00

University of Konstanz, Germany (Oct 2021 – Aug 2022)

Researcher (Herz Fellow), Alexander von Humboldt Foundation Grant. Worked on graph algorithms and optimization methods in theoretical computer science.

Shahid Beheshti University, Iran

Research Assistant in Mathematics. Admitted as a Distinguished Talent. Conducted research in extremal graph theory with a focus on graph coloring problems.
GPA: 3.96 / 4.00

Isfahan University of Technology, Iran

M.Sc. in Mathematics. Ranked 2nd in the M.Sc. program. Specialized in graph coloring, combinatorics, and their applications in theoretical computer science.
GPA: 3.91 / 4.00

Isfahan University of Technology, Iran

B.Sc. in Mathematics. Ranked 4th. Built strong foundations in algebra, probability, discrete mathematics, and graph theory.
GPA: 3.63 / 4.00

My Expertise

Continual Learning

Developed robust and efficient CL frameworks using synthetic data and sample removal techniques like EpochLoss.

Prototype-based Learning

Applied prototypes to both regression and classification, enabling token/patch-wise interpretability in text and vision models.

Neural Network Interpretability

Focused on receptive fields, prototype visualization, and decision tree–based interpretation for explaining model behavior.

PyTorch & Python

Proficient in PyTorch and scientific Python libraries for building, training, and debugging scalable neural network pipelines.

Contact Me

Feel free to reach out to me for collaborations, research discussions, or any questions. I am always happy to connect!