Designing AI That Never Sees Your Data
March 20, 2026
The rapid advancement of artificial intelligence depends on access to vast amounts of data—much of it deeply sensitive. From medical records to financial histories, these datasets fuel powerful machine learning models while raising a fundamental challenge: how to train AI systems without exposing the underlying information.
Natalie Lang, a postdoctoral associate at the University of Maryland Institute for Advanced Computer Studies (UMIACS) and the Maryland Cybersecurity Center (MC2), is working to solve that problem.
“My goal for this postdoc is to move the needle on privacy and security in machine learning by combining my engineering background with modern cryptographic techniques,” Lang says.
She collaborates closely with Dana Dachman-Soled, an associate professor of electrical and computer engineering with an appointment in UMIACS and a core member of MC2.
“The ideal environment for this work requires expertise in both domains, and Dana perfectly exemplifies that intersection,” Lang explains.
Lang’s research tackles a central technical challenge: how to train powerful machine learning models without ever revealing the underlying data. One promising approach relies on advanced cryptographic techniques that allow computations to be performed directly on encrypted information—meaning a server could train a model without ever seeing the data itself.
“The difficulty is that these cryptographic techniques are traditionally extremely computationally expensive,” Lang says. “Our goal is to design learning algorithms and encryption methods that work together efficiently.”
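The kind of cryptography Lang describes can be illustrated with the classic Paillier cryptosystem, which lets anyone add two encrypted numbers without decrypting them. The sketch below is a textbook toy with tiny fixed primes, purely to show the idea of computing on ciphertexts; it is not Lang's scheme, and real deployments use much larger keys and far more elaborate protocols (which is exactly where the computational expense she mentions comes from).

```python
import math
import random

def keygen(p=293, q=433):
    # Toy Paillier keypair with small fixed primes (demo only; real
    # systems use primes of ~1024 bits or more).
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    mu = pow(lam, -1, n)              # modular inverse of lambda mod n
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    while True:                       # random r coprime to n
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    n2 = n * n
    l = (pow(c, lam, n2) - 1) // n    # Paillier's L(x) = (x - 1) / n
    return (l * mu) % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 15), encrypt(pk, 27)
c_sum = (c1 * c2) % (pk[0] ** 2)      # multiplying ciphertexts adds plaintexts
print(decrypt(pk, sk, c_sum))         # 42
```

The server holding `c1` and `c2` never learns 15 or 27, yet produces a valid encryption of their sum; only the secret-key holder can read the result. Each such operation costs several large modular exponentiations, which hints at why making this efficient enough for full model training is a research problem in itself.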
To make this possible, Lang is exploring how ideas from communication theory—particularly data compression—can help make encrypted machine learning practical at scale.
That perspective traces back to her Ph.D. work at Ben-Gurion University of the Negev in Israel, where she studied communication systems and signal processing. The experience shaped how she approaches machine learning problems.
“I see these systems as large, distributed networks that have to operate under real-world constraints like limited capacity, delays and data privacy,” she says.
This systems-level perspective has created a natural synergy with Dachman-Soled’s expertise in cryptography—a collaboration already producing promising early results. Within the interdisciplinary environment of MC2, their work highlights how bringing together distinct research traditions can address some of the most pressing challenges in AI security.
“Natalie is a brilliant researcher and a wonderful addition to my group,” Dachman-Soled says. “Despite having to quickly come up to speed on a substantial amount of cryptography background, she has already taken the lead on a project with promising early results. I look forward to continuing to collaborate with her during her time at UMD and beyond.”
Beyond the lab, Lang has embraced the collaborative culture at MC2, participating in research discussions and reading groups across the center. Through Dachman-Soled’s collaboration with J.P. Morgan Research, she has also gained a valuable window into how academic advances translate into real-world industry needs.
Finding balance outside the lab is equally important. Lang enjoys yoga, Zumba, spin classes and running—activities that help her recharge between research milestones. Since relocating from Israel six months ago, she has also spent weekends exploring the Washington, D.C., region with her husband and their two young children.
For Lang, the move represents more than a change in scenery—it marks a pivotal moment in a career defined by crossing disciplinary boundaries.
By bringing together machine learning and cryptography—two fields often studied separately—she is helping ensure that the AI systems of the future are secure and private by design.
—Story by Melissa Brachfeld, UMIACS Communications Group