Charles Lovering

Computer Science PhD Student (NLU).

[Google Scholar]
[Github]
[Resume]
[Lab page]

Representative Work

Charles Lovering*, Jessica Forde*, George Konidaris, Ellie Pavlick, Michael Littman. Evaluation Beyond Task Performance: Analyzing Concepts in AlphaZero in Hex. NeurIPS, 2022. (*Equal contribution.)
Charles Lovering, Ellie Pavlick. Unit Testing for Concepts in Neural Networks. TACL, 2022.
Charles Lovering, Rohan Jha, Tal Linzen, Ellie Pavlick. Predicting Inductive Biases of Pre-Trained Models. ICLR, 2021. [github] [video]
Rohan Jha, Charles Lovering, Ellie Pavlick. Does Data Augmentation Improve Generalization in NLP? Preprint, 2020.

Talks

Where, When & Which Concepts Does AlphaZero Learn? @ Jane Street, Research Symposium. Winter, 2022.
Predicting Inductive Biases of Pre-Trained Models @ NLP & Fairness, Interpretability, and Robustness; Google. Fall, 2020.
Minimum Description Length Probing @ Language Understanding and Representations; Brown University. Summer, 2020.
Transformers @ Language Understanding and Representations; Brown University. Summer, 2019.

Artifacts

Lindenmayer systems.
Interactive visualizations.
Introduction to byte-encoding representations.
Introduction to beam search.
Introduction to the transformer architecture.
Introduction to neural Turing machines.

Acknowledgments

This site replicates the Distill design.

Adobe XD CC was used for diagrams, D3 for visualizations, and PyTorch for deep learning.