My research seeks to make machine learning models transparent, interpretable, and controllable.
The work described here was done in collaboration with multiple teams at Google.
Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation
Embedding Projector: Interactive Visualization and Interpretation of Embeddings
Direct-Manipulation Visualization of Deep Networks
How to Use t-SNE Effectively
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
Ad Click Prediction: a View from the Trenches
As part of the Google Brain team, my colleagues and I have worked to create tools for inspecting the workings of ML models. We have also seen that training data is often a key to understanding—a point of view summarized in the slogan, "Don't just debug the model, debug the data."
The image above shows the Embedding Projector, a visualization tool for rich interactive exploration of the kind of high-dimensional data sets that are common in machine learning.
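Tools like the Embedding Projector reduce high-dimensional embeddings to two or three dimensions for visual exploration, using methods such as t-SNE. As an illustrative sketch only (using scikit-learn, not the Projector's own code, with synthetic data standing in for learned embeddings):

```python
# A minimal sketch: projecting a synthetic high-dimensional data set
# to 2-D with t-SNE via scikit-learn. The clusters are made up for
# illustration; real embeddings would come from a trained model.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Two synthetic clusters in 50 dimensions.
cluster_a = rng.normal(loc=0.0, scale=1.0, size=(30, 50))
cluster_b = rng.normal(loc=5.0, scale=1.0, size=(30, 50))
embeddings = np.vstack([cluster_a, cluster_b])

# Perplexity must be smaller than the number of points.
projection = TSNE(n_components=2, perplexity=15,
                  random_state=0).fit_transform(embeddings)
print(projection.shape)  # (60, 2): one 2-D point per input embedding
```

Each row of `projection` can then be plotted as a point, with well-separated input clusters typically remaining separated in the 2-D view.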
A second challenge is to explore the actions of complex ML models in the real world. The power, and the challenge, of ML systems is that their behavior is not predefined by a human. The image below shows an application we created for monitoring changes in a large-scale, mission-critical ML system. Here we attacked the problem of understanding how a difference between models corresponds to a change in output.