About

I’m a Research Scientist in the NYU Alignment Research Group, led by Prof. Sam Bowman, where I work on reducing risks from advanced language models and improving the faithfulness of natural language explanations. Previously, I was an early employee at Cohere. I earned a Bachelor’s in Machine Learning and Mathematics from Duke University.

Publications & Preprints

Bias-Augmented Consistency Training Reduces Biased Reasoning in Chain-of-Thought
James Chua, Edward Rees, Hunar Batra, Samuel R. Bowman, Julian Michael, Ethan Perez, Miles Turpin
arXiv 2024
[arXiv] [Twitter thread] [Code]

Language Models Don’t Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting
Miles Turpin, Julian Michael, Ethan Perez, Samuel R. Bowman
NeurIPS 2023
[OpenReview] [Twitter thread] [Code]

A machine learning toolkit for genetic engineering attribution to facilitate biosecurity
Ethan C. Alley, Miles Turpin, Andrew Bo Liu, Taylor Kulp-McDowall, Jacob Swett, Rey Edison, Stephen E. Von Stetina, George M. Church, Kevin M. Esvelt
Nature Communications, 2020
[Paper] [Twitter thread] [Code]

Machine Learning Prediction of Surgical Intervention for Small Bowel Obstruction
Miles Turpin, Joshua Watson, Matthew Engelhard, Ricardo Henao, David Thompson, Lawrence Carin, Allan Kirk
medRxiv, 2021
[Preprint]

Past Projects

  • Scalable Hierarchical Bayesian Neural Networks via Factorization. During an internship at IBM Research in 2019, I worked with Dr. Soumya Ghosh on scaling hierarchical Bayesian modeling to Bayesian neural networks in settings with very large numbers of groups.

  • Probabilistic Wave Function Collapse with Markov Random Fields. I used Markov Random Fields to create a generalized version of the Wave Function Collapse algorithm for texture synthesis. This generalization enables the algorithm to handle continuous pixel values and model longer-range dependencies.
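The MRF view of texture synthesis can be illustrated with a toy sketch: treat each output pixel as a continuous variable in a pairwise MRF and update it by iterated conditional modes, copying the source pixel whose 4-neighborhood best matches the current output neighborhood. This is a minimal illustration of the general idea, not the project's actual implementation; the function name `synthesize` and all parameters here are hypothetical.

```python
import numpy as np

def synthesize(source, out_shape, iters=3, seed=0):
    """Toy MRF-style texture synthesis over continuous pixel values.

    Each interior output pixel is repeatedly re-estimated as the source
    pixel whose 4-neighborhood (up, down, left, right) best matches the
    output pixel's current neighborhood, i.e. a crude iterated-conditional-
    modes pass on a pairwise Markov Random Field.
    """
    rng = np.random.default_rng(seed)
    out = rng.random(out_shape)  # random init in [0, 1)

    # Precompute every interior source pixel together with its neighborhood.
    src_centers = source[1:-1, 1:-1].ravel()
    src_nbrs = np.stack([
        source[:-2, 1:-1].ravel(),   # up
        source[2:, 1:-1].ravel(),    # down
        source[1:-1, :-2].ravel(),   # left
        source[1:-1, 2:].ravel(),    # right
    ], axis=1)

    h, w = out_shape
    for _ in range(iters):
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                nbr = np.array([out[i-1, j], out[i+1, j],
                                out[i, j-1], out[i, j+1]])
                # Copy the source pixel with the best-matching neighborhood.
                d = ((src_nbrs - nbr) ** 2).sum(axis=1)
                out[i, j] = src_centers[np.argmin(d)]
    return out
```

Because matching is done on neighborhoods of real-valued pixels rather than on a fixed tile set, this handles continuous values directly; larger neighborhoods would capture the longer-range dependencies mentioned above.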

Get in touch

milesaturpin at gmail.com