University of Washington Computer Science & Engineering

Faculty
 Pedro Domingos
Graduate Students
 Geoff Hulten
 Laurie Spencer
Undergraduate Students
 Yeuhi Abe
 Chun-Hsiang Hung

Large-Scale Machine Learning


In many domains, data now arrives faster than we are able to learn from it. To avoid wasting this data, we must switch from the traditional "one-shot" machine learning approach to systems that are able to mine continuous, high-volume, open-ended data streams as they arrive. We have identified a set of desiderata for such systems, and developed an approach to building stream mining algorithms that satisfies all of them. The approach is based on explicitly minimizing the number of examples used in each learning step, while guaranteeing that user-defined targets for predictive performance are met. So far, we have applied this approach to four major (and widely differing) types of learner: decision tree induction, Bayesian network learning, k-means clustering, and the EM algorithm for mixtures of Gaussians. Our versions of these algorithms are able to mine orders of magnitude more data than the best previous algorithms (e.g., our decision tree learner can mine on the order of a billion examples per day on an ordinary PC). We are currently applying our approach to the difficult problem of large-scale relational learning, and have already obtained an order-of-magnitude speedup on a Web prediction task. We have released a beta version of the VFML toolkit with our current suite of stream mining algorithms. Our ultimate goal is to develop a set of primitives (or, more generally, a language) such that any learning algorithm built using them scales automatically to arbitrarily large data streams.
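The key step described above — using only as many examples as needed while meeting a user-defined confidence target — can be sketched with a Hoeffding-style bound, the standard tool for this kind of guarantee in stream mining. This is a minimal illustration, not the VFML API; all function and parameter names here are assumptions for the sake of the example:

```python
import math

def hoeffding_bound(value_range, delta, n):
    """With probability at least 1 - delta, the observed mean of n
    independent samples of a quantity with the given range lies within
    epsilon of the true mean."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def enough_examples(gain_best, gain_second, value_range, delta, n):
    """Decide whether n examples suffice to commit to a learning step
    (e.g., choosing a decision-tree split): if the observed advantage of
    the best candidate over the runner-up exceeds epsilon, the choice is
    correct with probability at least 1 - delta."""
    eps = hoeffding_bound(value_range, delta, n)
    return (gain_best - gain_second) > eps

# A small observed advantage is not trustworthy with few examples,
# but becomes statistically safe as the stream supplies more data.
print(enough_examples(0.30, 0.25, 1.0, 1e-6, 100))    # too few examples
print(enough_examples(0.30, 0.25, 1.0, 1e-6, 20000))  # enough examples
```

Because the bound shrinks as 1/sqrt(n), the learner can stop accumulating statistics for a step as soon as the decision is safe, which is why processing cost per step stays small even on very large streams.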


VFML (Very Fast Machine Learning)


Computer Science & Engineering
University of Washington
Box 352350
Seattle, WA  98195-2350
(206) 543-1695 voice, (206) 543-2969 FAX
[comments to Pedro Domingos]