Fengyu Yang
Ph.D. candidate
University of North Carolina at Chapel Hill
About Me
I am a Ph.D. candidate at the University of North Carolina at Chapel Hill, advised by Prof. Shahar Kovalsky. My research focuses on Differentiable Optimization, Machine Learning, and Statistical Modeling. I am currently a Research Scientist Intern on the ByteDance Seed-AI for Science team, working on AI-driven protein structure reconstruction.
Education
  • University of North Carolina, Chapel Hill
    Ph.D. in Applied Mathematics, Minor in Statistics & Operations Research
    Aug. 2021 - present
  • University of Chicago
    M.S. in Computational and Applied Mathematics
    Sep. 2019 - Mar. 2021
  • Shandong University
    B.S. in Statistics, School of Mathematics
    Sep. 2015 - Jun. 2019
Experience
  • ByteDance
    Seed-AI for Science, Research Scientist Intern
    May 2025 - Present
  • Argonne National Laboratory
    Givens Associate
    May 2023 - Aug. 2023
  • Deloitte Consulting (Shanghai)
    Data Visualization & Analyst Intern
    Aug. 2018 - Feb. 2019
Activities
2023
The Triangle Computational and Applied Mathematics Symposium (TriCAMS)
Lightning Talk + Poster: Coupling Structural and Functional Brain Connectome
Nov 10
NSF FRG Collaborative Research Meeting
Presentation: Spatio-Temporal Analysis of Brain Imaging
Nov 02
2023 Summer Argonne Student Symposium (SASSy)
Presentation: Statistical Model of Crystallographic Disorder
Jul 26
Research
Differentiable Optimization

Developed a modular convex optimization layer that enables differentiation through any forward solver, with seamless integration into neural networks and bi-level optimization tasks, addressing a limitation of existing methods that rely on specific forward solvers.

AI-driven Protein Structure Reconstruction

Leading the development of PyTorch-based open-source cryo-EM reconstruction software from scratch. Integrating the software with AI foundation models for downstream biology tasks, enabling enhanced protein structure prediction.

Statistical Model of Crystallographic Disorder

Developed a statistical parametric model for analyzing positional and substitutional disorder in crystallography.

Coupling Structural and Functional Brain Connectome

Developed a model for jointly analyzing the structural brain connectome (DT-MRI) and the temporal dynamics of individual brain regions (fMRI).

Publications
Differentiation Through Black-Box Quadratic Programming Solvers

Connor W. Magoon*, Fengyu Yang*, Noam Aigerman, Shahar Z. Kovalsky (* equal contribution)

NeurIPS 2025

Differentiable optimization has attracted significant research interest, particularly for quadratic programming (QP). Existing approaches for differentiating the solution of a QP with respect to its defining parameters often rely on specific integrated solvers. This integration limits their applicability, including their use in neural network architectures and bi-level optimization tasks, restricting users to a narrow selection of solver choices. To address this limitation, we introduce dQP, a modular and solver-agnostic framework for plug-and-play differentiation of virtually any QP solver. A key insight we leverage to achieve modularity is that, once the active set of inequality constraints is known, both the solution and its derivative can be expressed using simplified linear systems that share the same matrix. This formulation fully decouples the computation of the QP solution from its differentiation. Building on this result, we provide a minimal-overhead, open-source implementation that seamlessly integrates with over 15 state-of-the-art solvers. Comprehensive benchmark experiments demonstrate dQP’s robustness and scalability, particularly highlighting its advantages in large-scale sparse problems.
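The key insight above can be illustrated with a small NumPy sketch (a hypothetical toy, not the dQP API): for a QP min ½xᵀQx + qᵀx subject to Ax ≤ b with known active set S, both the solution and its derivative with respect to q are obtained from linear systems sharing the same KKT matrix.

```python
import numpy as np

def qp_solution_and_grad(Q, q, A, b, active):
    """Given the active set of  min 0.5 x'Qx + q'x  s.t.  Ax <= b,
    recover the solution x and dx/dq from one shared KKT matrix."""
    A_s = A[active]                      # rows of the active constraints
    n, m = Q.shape[0], A_s.shape[0]
    # Shared KKT matrix: used for both the solution and its derivative.
    K = np.block([[Q, A_s.T], [A_s, np.zeros((m, m))]])
    # Solution:  K [x; lam_S] = [-q; b_S]
    sol = np.linalg.solve(K, np.concatenate([-q, b[active]]))
    x = sol[:n]
    # Derivative: K [dx/dq; dlam/dq] = [-I; 0], solved column-wise
    rhs = np.vstack([-np.eye(n), np.zeros((m, n))])
    dx_dq = np.linalg.solve(K, rhs)[:n]
    return x, dx_dq

# Toy problem: min 0.5||x||^2 - [1,1]'x  s.t.  x1 + x2 <= 1 (active)
Q = np.eye(2)
q = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, dx_dq = qp_solution_and_grad(Q, q, A, b, [0])
```

Because the factorization of K can be reused, the backward pass adds little cost on top of recovering the solution itself, which is what makes the approach solver-agnostic: any forward solver that reports an optimal active set suffices.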
