Computer Science PhD Student (NLU).
Charles Lovering*, Jessica Forde*, Ellie Pavlick, Michael
Littman. Where, When & Which Concepts Does AlphaZero Learn?
AAAI, RLG Workshop, 2022. (*Equal contribution.)
Where, When & Which Concepts Does AlphaZero Learn? @ Jane Street, Research Symposium. Winter, 2022. [Upcoming]
Predicting Inductive Biases of Pre-Trained
Models. @ NLP & Fairness, Interpretability, and
Robustness; Google. Fall, 2020.
Minimum Description Length Probing @ Language Understanding and
Representations; Brown University. Summer, 2020.
Transformers @ Language Understanding and
Representations; Brown University. Summer, 2019.
Introduction to byte-pair encoding representations.
Introduction to beam search.
Introduction to the transformer architecture.
Introduction to Neural Turing Machines.
This site replicates the Distill design.
Adobe XD CC for diagrams, D3 for visualizations, and PyTorch for deep learning.