Jie Chen (陈捷)

Jie Chen is an interdisciplinary researcher working at the intersection of computing and mathematics, with a current focus on foundation models and AI agents for scientific discovery.[1, a, 2] His research integrates machine learning, statistics, scientific computing, and numerical linear algebra, with contributions spanning graph neural networks, graph multimodal LLMs, graph structure learning, scalable Gaussian processes, graph coarsening, and matrix functions. He is widely recognized for transformative contributions to graph-based deep learning[3, 4, 5, 6, b] and large-scale statistical modeling,[7, 8, 9] and for bridging theory with real-world scientific and engineering applications.
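
As a flavor of the graph-based deep learning work cited above, the sketch below illustrates one graph convolution layer with FastGCN-style importance sampling [3]. It is a minimal NumPy illustration written for this page, not code from the paper: the variable names, the toy sizes, and the row normalization (standing in for the GCN's symmetric normalization) are all simplifying assumptions.

    # One GCN layer with FastGCN-style layer sampling (after [3]).
    # All names and sizes here are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d_in, d_out, s = 1000, 16, 8, 64                  # nodes, feature dims, sample size

    A = (rng.random((n, n)) < 0.01).astype(float)        # toy random adjacency matrix
    A_hat = A / np.maximum(A.sum(1, keepdims=True), 1)   # row-normalized (simplified)
    H = rng.standard_normal((n, d_in))                   # input node features
    W = rng.standard_normal((d_in, d_out))               # layer weights

    # Importance distribution proportional to the squared column norms
    # of A_hat, as proposed in the FastGCN paper.
    q = np.linalg.norm(A_hat, axis=0) ** 2
    q = q / q.sum()

    # Sample s nodes and form an unbiased Monte Carlo estimate of
    # A_hat @ H @ W, reweighting each sampled column by 1 / (s * q_u).
    idx = rng.choice(n, size=s, p=q)
    H_next = (A_hat[:, idx] * (1.0 / (s * q[idx]))) @ H[idx] @ W
    H_next = np.maximum(H_next, 0)                       # ReLU activation

Because only s of the n nodes are touched per layer, the per-batch cost no longer scales with the full graph size, which is the key idea behind the speedups reported in [3].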

Dr. Chen has led externally funded, multi-institutional research programs that unite AI, applied mathematics, and domain sciences. His work has been supported by major industrial sponsors such as Shell and Evonik, as well as by the U.S. Department of Energy. His interdisciplinary research has driven advances in materials discovery,[10, c, 11] financial forensics,[12, d, 13, e] and power system resilience,[14, f, 15] demonstrating how modern AI and mathematical tools translate into robust solutions for complex, high-stakes applications.

Dr. Chen maintains an exceptional publication record across the most selective venues in artificial intelligence, statistics, and mathematics. He has published extensively in top-tier AI conferences, including NeurIPS, ICML, ICLR, and AAAI; in premier machine learning and statistics journals such as JMLR, JASA, JCGS, and AOAS; and in flagship SIAM applied mathematics journals such as SISC, SIMAX, SIIMS, and MMS. His work is widely cited and recognized for its intellectual depth, methodological innovation, and interdisciplinary reach.

Dr. Chen’s work has received multiple honors, including several IBM Outstanding Technical Achievement Awards and the SIAM Student Paper Prize. He is also a sought-after speaker in the international research community and has been invited as a plenary speaker or panelist at major venues such as the International Conference on Preconditioning Techniques for Scientific and Industrial Applications.

Dr. Chen’s career spans both national laboratories and industrial research labs. He served as a Senior Research Scientist and Manager at IBM Research and the MIT-IBM Watson AI Lab, where he led collaborative, cross-disciplinary research initiatives. He also conducted postdoctoral research at Argonne National Laboratory, where he developed expertise in large-scale scientific computing and data-driven discovery. He earned his Ph.D. in Computer Science from the University of Minnesota and his B.S. in Mathematics with honors from Zhejiang University.


[1] Gang Liu, Michael Sun, Wojciech Matusik, Meng Jiang, and Jie Chen. Multimodal Large Language Models for Inverse Molecular Design with Retrosynthetic Planning. In Proceedings of the Thirteenth International Conference on Learning Representations (ICLR), 2025.

[2] Yuanzhe Liu, Ryan Deng, Tim Kaler, Xuhao Chen, Charles E. Leiserson, Yao Ma, and Jie Chen. Lessons Learned: A Multi-Agent Framework for Code LLMs to Learn and Improve. In Advances in Neural Information Processing Systems 38 (NeurIPS), 2025.

[3] Jie Chen, Tengfei Ma, and Cao Xiao. FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling. In Proceedings of the Sixth International Conference on Learning Representations (ICLR), 2018.

[4] Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kanezashi, Tim Kaler, Tao B. Schardl, and Charles E. Leiserson. EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), 2020.

[5] Yue Yu, Jie Chen, Tian Gao, and Mo Yu. DAG-GNN: DAG Structure Learning with Graph Neural Networks. In Proceedings of the Thirty-Sixth International Conference on Machine Learning (ICML), 2019.

[6] Tim Kaler, Nickolas Stathas, Anne Ouyang, Alexandros-Stavros Iliopoulos, Tao B. Schardl, Charles E. Leiserson, and Jie Chen. Accelerating Training and Inference of Graph Neural Networks with Fast Sampling and Pipelining. In Proceedings of Machine Learning and Systems 4 (MLSys), 2022.

[7] Jie Chen and Michael L. Stein. Linear-Cost Covariance Functions for Gaussian Random Fields. Journal of the American Statistical Association (JASA), 118(541):147–164, 2023.

[8] Michael L. Stein, Jie Chen, and Mihai Anitescu. Stochastic Approximation of Score Functions for Gaussian Processes. Annals of Applied Statistics (AOAS), 7(2):1162–1191, 2013.

[9] Mihai Anitescu, Jie Chen, and Lei Wang. A Matrix-Free Approach for Solving the Parametric Gaussian Process Maximum Likelihood Problem. SIAM Journal on Scientific Computing (SISC), 34(1):A240–A262, 2012.

[10] Minghao Guo, Veronika Thost, Beichen Li, Payel Das, Jie Chen, and Wojciech Matusik. Data-Efficient Graph Grammar Learning for Molecular Generation. In Proceedings of the Tenth International Conference on Learning Representations (ICLR), 2022.

[11] Michael Sun, Minghao Guo, Weize Yuan, Veronika Thost, Crystal Elaine Owens, Aristotle Franklin Grosz, Sharvaa Selvan, Katelyn Zhou, Hassan Mohiuddin, Benjamin J. Pedretti, Zachary P. Smith, Jie Chen, and Wojciech Matusik. Representing Molecules as Random Walks Over Interpretable Grammars. In Proceedings of the Forty-First International Conference on Machine Learning (ICML), 2024.

[12] Mark Weber, Giacomo Domeniconi, Jie Chen, Daniel Karl I. Weidele, Claudio Bellei, Tom Robinson, and Charles E. Leiserson. Anti-Money Laundering in Bitcoin: Experimenting with Graph Convolutional Networks for Financial Forensics. In 2nd KDD Workshop on Anomaly Detection in Finance (KDD-W), 2019.

[13] Claudio Bellei, Muhua Xu, Ross Phillips, Tom Robinson, Mark Weber, Tim Kaler, Charles E. Leiserson, Arvind, and Jie Chen. The Shape of Money Laundering: Subgraph Representation Learning on the Blockchain with the Elliptic2 Dataset. In KDD Workshop on Machine Learning in Finance (KDD-W), 2024.

[14] Enyan Dai and Jie Chen. Graph-Augmented Normalizing Flows for Anomaly Detection of Multiple Time Series. In Proceedings of the Tenth International Conference on Learning Representations (ICLR), 2022.

[15] Chao Shang, Jie Chen, and Jinbo Bi. Discrete Graph Structure Learning for Forecasting Multiple Time Series. In Proceedings of the Ninth International Conference on Learning Representations (ICLR), 2021.

[a] MIT News: Could LLMs help design our next medicines and materials?

[b] MIT News: Busy GPUs: Sampling and pipelining method speeds up deep learning on large graphs.

[c] MIT News: Generating new molecules with graph grammar.

[d] Yahoo Finance: IBM, MIT and Elliptic release world’s largest labeled dataset of bitcoin transactions.

[e] WIRED: A Vast New Data Set Could Supercharge the AI Hunt for Crypto Money Laundering.

[f] MIT News: Using artificial intelligence to find anomalies hiding in massive datasets.