Machine-learning researchers make many decisions when designing new models. They decide how many layers to include in neural networks and how to set training hyperparameters such as the learningning rate. The result of all this human decision-making is that complex models end up being “designed by intuition” rather than systematically, says Frank Hutter, head of the machine-learning lab at the University of Freiburg in Germany.
A growing field called automated machine learning, or autoML, aims to eliminate the guesswork. The idea is to have algorithms take over the decisions that researchers currently have to make when designing models. Ultimately, these techniques could make machine learning more accessible.
Although automated machine learning has been around for almost a decade, researchers are still working to refine it. Last week, a new conference in Baltimore, which organizers described as the first international conference on the subject, showcased efforts to improve autoML’s accuracy and streamline its performance.
There’s been a swell of interest in autoML’s potential to simplify machine learning. Companies like Amazon and Google already offer low-code machine-learning tools that take advantage of autoML techniques. If these techniques become more efficient, it could accelerate research and allow more people to use machine learning.
The idea is to get to a point where people can choose a question they want to ask, point an autoML tool at it, and receive the result they are looking for.
That vision is the “holy grail of computer science,” says Lars Kotthoff, a conference organizer and assistant professor of computer science at the University of Wyoming. “You specify the problem, and the computer figures out how to solve it, and that’s all you do.”
But first, researchers will have to figure out how to make these techniques more time- and energy-efficient.
What is autoML?
At first glance, the concept of autoML might seem redundant; after all, machine learning is already about automating the process of gaining insights from data. But because autoML algorithms operate at a level of abstraction above the underlying machine-learning models, relying only on the outputs of those models as guides, they can save time and computation. Researchers can apply autoML techniques to pre-trained models to gain fresh insights without wasting computation power repeating existing research.
For example, research scientist Mehdi Bahrami and his coauthors at Fujitsu Research of America presented recent work on how to use a BERT-sort algorithm with different pre-trained models to adapt them for new purposes. BERT-sort is an algorithm that can figure out what is called “semantic order” when trained on data sets. Given data on movie reviews, for example, it knows that “great” movies rank higher than “good” and “bad” movies.
With autoML techniques, the learned semantic order can also be extrapolated to classifying things like cancer diagnoses or even text in the Korean language, cutting down on time and computation.
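To make the idea of “semantic order” concrete, here is a toy sketch in plain Python. It is far simpler than BERT-sort (which works from a large pre-trained language model), but it shows the same intuition: an ordering over words like “great,” “good,” and “bad” can be recovered from labeled data rather than hand-coded. The review data here is invented for illustration.

```python
# Toy illustration of learning a "semantic order" from data, in the
# spirit of (but much simpler than) BERT-sort: rank sentiment words by
# the average star rating of the reviews in which they appear.
from collections import defaultdict

# Hypothetical (star rating, review text) pairs.
reviews = [
    (5, "a great film"),
    (5, "great performances"),
    (4, "a good film"),
    (3, "good acting overall"),
    (1, "a bad film"),
    (2, "bad pacing throughout"),
]

# Collect the ratings associated with each target word.
scores = defaultdict(list)
for stars, text in reviews:
    for word in ("great", "good", "bad"):
        if word in text.split():
            scores[word].append(stars)

# Sort words by their average rating, highest first.
order = sorted(scores, key=lambda w: sum(scores[w]) / len(scores[w]),
               reverse=True)
print(order)  # → ['great', 'good', 'bad']
```

The point of the real algorithm is that, once such an ordering is learned, it can be reused with other pre-trained models and data sets instead of being relearned from scratch each time.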
“BERT takes months of computation and is very expensive–like, a million dollars to generate that model and repeat those processes,” Bahrami says. “So if everyone wants to do the same thing, then it’s expensive–it’s not energy efficient, not good for the world.”
Although the field shows promise, researchers are still searching for ways to make autoML techniques more computationally efficient. For example, methods like neural architecture search currently build and test many different models to find the best fit, and the energy it takes to complete all those iterations can be significant.
AutoML techniques can also be applied to machine-learning algorithms that don’t involve neural networks, like creating random decision forests or support-vector machines to classify data. Research in those areas is further along, with many coding libraries already available for people who want to incorporate autoML techniques into their projects.
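As a minimal sketch of what such libraries do under the hood, the snippet below searches over two classical model families, random forests and support-vector machines, and keeps whichever cross-validates best. It uses scikit-learn as a stand-in; it is an illustration of the general pattern, not any specific autoML tool mentioned at the conference, and the candidate hyperparameter grids are arbitrary choices.

```python
# Sketch of autoML-style model selection for non-neural models:
# try several model families and hyperparameter settings, and keep
# the combination with the best cross-validated score.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Candidate model families with (arbitrary, illustrative) search grids.
candidates = [
    (RandomForestClassifier(random_state=0),
     {"n_estimators": [50, 200], "max_depth": [3, None]}),
    (SVC(),
     {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}),
]

best_score, best_model = -1.0, None
for model, grid in candidates:
    search = GridSearchCV(model, grid, cv=3)  # cross-validated grid search
    search.fit(X_train, y_train)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(type(best_model).__name__, round(best_model.score(X_test, y_test), 2))
```

Real autoML libraries automate the same loop at much larger scale, searching over preprocessing steps and many more model families and hyperparameters.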
The next step is to use autoML to quantify uncertainty and address questions of trustworthiness and fairness in the algorithms, says Hutter, a conference organizer. In that vision, standards around trustworthiness and fairness would be akin to any other machine-learning constraints, like accuracy. And autoML could capture and automatically correct biases found in those algorithms before they’re released.
The search continues
But for something like deep learning, autoML still has a long way to go. Data used to train deep-learning models, like images, documents, and recorded speech, is usually dense and complicated, and handling it takes immense computational power. The cost and time required to train these models can be prohibitive for anyone other than researchers at deep-pocketed private companies.
One of the competitions at the conference asked participants to develop energy-efficient alternative algorithms for neural architecture search. It’s a considerable challenge because the technique is infamous for its computational demands: it automatically cycles through countless deep-learning models to help researchers pick the right one for their application, but the process can take months and cost over a million dollars.
The goal of these alternative algorithms, called zero-cost neural architecture search proxies, is to make neural architecture search more accessible and environmentally friendly by significantly cutting down on its appetite for computation. The result takes only a few seconds to run, instead of months. These techniques are still in the early stages of development and are often unreliable, but machine-learning researchers predict that they have the potential to make the model selection process much more efficient.
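The core idea can be sketched in a few lines: instead of training every candidate architecture, score each one with a cheap proxy and keep the best. The proxy below, which simply prefers moderate depth and larger width, is a made-up stand-in for illustration; real zero-cost proxies inspect an untrained network’s gradients or activations on a single batch of data.

```python
# Toy sketch of zero-cost neural architecture search: score candidate
# architectures with a cheap proxy instead of training each one.
import random

random.seed(0)

def sample_architecture():
    # A candidate architecture, reduced here to two knobs.
    return {"depth": random.randint(2, 12),
            "width": random.choice([64, 128, 256, 512])}

def zero_cost_proxy(arch):
    # Hypothetical scoring heuristic, NOT a real published proxy:
    # prefer depth near 6 and larger widths. Real proxies compute
    # statistics from an untrained network's behavior on one batch.
    return -abs(arch["depth"] - 6) + arch["width"] / 256

candidates = [sample_architecture() for _ in range(1000)]
best = max(candidates, key=zero_cost_proxy)  # seconds, not GPU-months
print(best)
```

Because each proxy evaluation is nearly free, thousands of candidates can be screened in seconds; the expensive full training is then reserved for only the top-scoring architectures.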