Colloquium: Learning Generalizable and Reliable Neural Machines in an Open World
The past several years have seen growing demand for intelligent machines that can learn from and generalize to complex data. Despite tremendous performance improvements, however, high-capacity models such as deep neural networks still struggle to generalize to the diversity of the real world. In this talk, I will present my recent work addressing the challenges of learning more generalizable and reliable visual representations. This requires machine learning models not only to classify data accurately from known classes and distributions, but also to remain aware of abnormal examples in open environments. To this end, I will first discuss work on improving generalization by leveraging multiple learning agents. I will then present an approach that effectively detects anomalies from outside the training distribution. Finally, I will share ongoing efforts on learning highly generalizable representations without strong human supervision, followed by a discussion of future directions toward a minimally supervised, continual learning paradigm.