More than 80 Amazon scientists and engineers will attend this year’s International Conference on Machine Learning (ICML) in Stockholm, Sweden, where 11 papers co-authored by Amazonians will be presented.
“ICML is one of the leading outlets for machine learning research,” says Neil Lawrence, director of machine learning for Amazon’s Supply Chain Optimization Technologies program. “It’s a great opportunity to find out what other researchers have been up to and share some of our own learnings.” At ICML, members of Lawrence’s team will present a paper titled “Structured Variationally Auto-encoded Optimization,” which describes a machine-learning approach to optimization, or choosing the values for variables in some process that maximize a particular outcome.
The first author on the paper is Xiaoyu Lu, a graduate student at the University of Oxford who worked on the project as an intern at Amazon last summer, then returned in January to do some follow-up work. Joining her are Lawrence and two machine-learning scientists in his group, Javier Gonzalez Hernandez and Zhenwen Dai. “We couldn't have done this without Xiaoyu’s excellent work,” Hernandez says.
“There’s a technique known as Bayesian optimization, which tries to learn the relationship between an input you can control and an output,” Lawrence explains. “Javier likes to use the example of brewing the world’s best beer.” There, Lawrence says, the inputs would be all the controllable parameters of the beer-brewing process, such as temperature, aeration, and yeast concentration, and the output would be some measure of the beer’s quality.
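To make that idea concrete, here is a minimal, hypothetical sketch of Bayesian optimization in the beer-brewing spirit: a Gaussian-process surrogate models quality as a function of a couple of controllable inputs, and an expected-improvement rule chooses the next recipe to try. The objective function, parameter ranges, and library choices below are illustrative assumptions, not details from the paper.

```python
# Hypothetical Bayesian-optimization sketch: learn the relationship between
# controllable inputs (temperature, yeast concentration) and an output (quality),
# and use it to pick the next experiment. All numbers here are invented.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def brew_quality(x):
    # Stand-in "quality score" for a recipe x = [temperature, yeast];
    # in reality each evaluation would mean brewing and tasting a batch.
    temp, yeast = x
    return -((temp - 18.0) ** 2) - 5.0 * (yeast - 0.4) ** 2

bounds = np.array([[10.0, 30.0], [0.1, 1.0]])  # temperature (C), yeast concentration

# Start with a handful of random recipes.
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))
y = np.array([brew_quality(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(15):
    gp.fit(X, y)
    # Expected improvement over the best recipe so far, scored on random
    # candidates (a simple stand-in for a proper inner optimizer).
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(1000, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, brew_quality(x_next))

print("best recipe:", X[np.argmax(y)], "quality:", y.max())
```

The surrogate model makes each expensive evaluation count: rather than trying recipes at random, the procedure spends its budget where the model predicts the largest expected improvement.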
“Bayesian optimization works well when the number of things you can change is small,” Lawrence says. “If you’re making your beer with a thousand different changes, these things don’t work well. The clever bit in this paper is to map those changes into a lower-dimensional space, then we do the optimization in the low-dimensional space.”
To perform that mapping, Lawrence explains, his team uses a type of neural network known as a variational autoencoder. An autoencoder is peculiar in that its output is intended to be exactly the same as its input. But between input and output, the network squeezes the input down into a much more compact representation. Training is a matter of learning how to produce a representation that preserves enough information about the input that the network’s output will be a fairly faithful reconstruction. In the variational version, that compact representation is a probability distribution rather than a single code, which keeps the low-dimensional space smooth enough to search during optimization.
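Below is a minimal sketch, not the authors’ implementation, of how such a mapping might look: a variational autoencoder compresses a high-dimensional parameter vector into a few latent variables, and the optimization can then run over that small latent space, decoding each candidate back to a full parameter setting. The dimensions and architecture are illustrative assumptions.

```python
# Hypothetical variational autoencoder sketch: compress 1000-dimensional inputs
# into a 5-dimensional latent code that an optimizer can search instead.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=1000, latent_dim=5, hidden=128):
        super().__init__()
        self.enc = nn.Linear(input_dim, hidden)
        self.mu = nn.Linear(hidden, latent_dim)      # mean of the latent code
        self.logvar = nn.Linear(hidden, latent_dim)  # log-variance of the latent code
        self.dec1 = nn.Linear(latent_dim, hidden)
        self.dec2 = nn.Linear(hidden, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def decode(self, z):
        return self.dec2(F.relu(self.dec1(z)))

    def forward(self, x):
        mu, logvar = self.encode(x)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)         # reparameterization trick
        return self.decode(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to a standard normal prior.
    recon_err = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl

# Once trained on observed parameter settings, an optimizer would propose new
# points z in the 5-dimensional latent space and evaluate the objective at
# vae.decode(z), rather than searching all 1000 raw dimensions directly.
```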
Also attending ICML is Alborz Geramifard, a machine-learning manager within the Alexa AI organization. “ICML is one of the top-tier conferences in the world for everyone who’s excited about machine learning, which is currently a very hot topic,” Geramifard says. “It attracts a really, really good pool of people.”
“One of the things that I like about ICML is that there are always good forums that focus on very specific topics,” Geramifard adds. “For example, I’m very excited about dialogue and machine learning, and there are workshops and specific paper tracks and other opportunities to talk about the topic where you get to sit in a room with pretty much the state-of-the-art contributors in that space, bounce ideas off them, and talk about their experiences.”

ICML papers with Amazon coauthors:

Title: Structured Variationally Auto-encoded Optimization
Authors: Xiaoyu Lu (University of Oxford) · Javier González (Amazon) · Zhenwen Dai (Amazon) · Neil Lawrence (Amazon)

Title: Semi-Supervised Learning on Data Streams via Temporal Label Propagation
Authors: Tal Wagner (MIT) · Sudipto Guha (Amazon) · Shiva Kasiviswanathan (Amazon) · Nina Mishra (Amazon)

Title: Detecting non-causal artifacts in multivariate linear regression models
Authors: Dominik Janzing (Amazon Research Tübingen) · Bernhard Schölkopf (Amazon / MPI for Intelligent Systems)

Title: Detecting and Correcting for Label Shift with Black Box Predictors
Authors: Zachary Lipton (Amazon / Carnegie Mellon University) · Yu-Xiang Wang (Amazon / UCSB) · Alexander Smola (Amazon)

Title: Improving the Gaussian Mechanism for Differential Privacy: Analytical Calibration and Optimal Denoising
Authors: Borja de Balle Pigem (Amazon Research) · Yu-Xiang Wang (Amazon / UCSB)

Title: signSGD: compressed optimisation for non-convex problems
Authors: Jeremy Bernstein (Caltech) · Yu-Xiang Wang (Amazon / UCSB) · Kamyar Azizzadenesheli (UC Irvine / Stanford) · Anima Anandkumar (Amazon AI / Caltech)

Title: Born Again Neural Networks
Authors: Tommaso Furlanello (University of Southern California) · Zachary Lipton (Amazon / Carnegie Mellon University) · Michael Tschannen (ETH Zurich) · Laurent Itti (University of Southern California) · Anima Anandkumar (Amazon AI / Caltech)

Title: Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization
Authors: Jiong Zhang (University of Texas at Austin) · Qi Lei (University of Texas at Austin) · Inderjit Dhillon (UT Austin / Amazon)

Title: Learning long term dependencies via Fourier recurrent units
Authors: Jiong Zhang (University of Texas at Austin) · Yibo Lin (UT Austin) · Zhao Song (UT Austin) · Inderjit Dhillon (UT Austin / Amazon)

Title: Towards Fast Computation of Certified Robustness for ReLU Networks
Authors: Tsui-Wei (Lily) Weng (MIT) · Huan Zhang (UC Davis) · Hongge Chen (MIT) · Zhao Song (UT Austin) · Cho-Jui Hsieh (UC Davis) · Luca Daniel (MIT) · Duane Boning (MIT) · Inderjit Dhillon (UT Austin / Amazon)

Title: Learning Steady-States of Iterative Algorithms over Graphs
Authors: Hanjun Dai (Georgia Tech) · Zornitsa Kozareva (Google) · Bo Dai (Georgia Tech) · Alex Smola (Amazon) · Le Song (Georgia Tech)
Separately, Amazon has announced the 11 focus areas of the 2018 Amazon Research Awards, a grant program providing up to $80,000 in funding and $20,000 in Amazon Web Services (AWS) credits to academic researchers investigating machine-learning-related topics.
