The Pentagon’s blue-sky research agency is readying a nearly
four-year project to boost artificial intelligence systems by building machines
that can teach themselves — while making it easier for ordinary schlubs like us
to build them, too.
When Darpa talks
about artificial intelligence, it’s not talking about modeling computers after
the human brain. That path fell out
of favor among computer scientists years ago as a means of creating
artificial intelligence; we’d have to understand our own brains first before
building a working artificial version of one. But the agency thinks we can build machines that learn and evolve, using “probabilistic programming” algorithms to sift through vast amounts of data and pick out what is most useful. After that, the machine learns to repeat the process and do it better.
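To make that idea concrete, here is a minimal sketch in Python of the kind of calculation probabilistic programming is meant to automate: given a handful of observed coin flips, the program weighs every possible bias of the coin against the data and reports the most plausible one. This is only an illustrative assumption of how such inference works, not anything Darpa has built, and every name and number below is made up.

def posterior(flips, grid_size=101):
    """Return (bias, probability) pairs for a coin after seeing a list of 0/1 flips."""
    grid = [i / (grid_size - 1) for i in range(grid_size)]      # candidate biases from 0.0 to 1.0
    heads = sum(flips)
    tails = len(flips) - heads
    likelihood = [p ** heads * (1 - p) ** tails for p in grid]  # how well each bias explains the data
    total = sum(likelihood)                                     # flat prior, so normalizing is enough
    return list(zip(grid, [lk / total for lk in likelihood]))

if __name__ == "__main__":
    data = [1, 1, 0, 1, 1, 0, 1, 1]                             # eight made-up flips, 1 = heads
    best_bias, _ = max(posterior(data), key=lambda pair: pair[1])
    print("most plausible bias:", round(best_bias, 2))

The point of the project is that developers should only have to write the model part of a program like this; the inference machinery underneath would be supplied for them.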
But building such
machines remains really, really hard: The agency calls it “Herculean.” Development tools are scarce, which means “even a team of specially-trained machine learning experts makes only painfully slow progress.” So on April 10, Darpa is
inviting scientists to a Virginia conference to
brainstorm. What will follow are 46 months of development, along with annual
“Summer Schools,” bringing the scientists together with “potential customers”
from the private sector and the government.
Under the program, called “Probabilistic Programming for Advanced Machine Learning,” or PPAML, scientists
will be asked to figure out how to “enable new applications that are impossible
to conceive of using today’s technology,” while making experts in the field “radically
more effective,” according to a recent agency announcement. At the same time, Darpa wants to make the machines simple enough that non-experts can build machine-learning applications too.
It’s no surprise the
mad scientists are interested. Machine
learning can be used to make better systems for intelligence, surveillance
and reconnaissance, a core military necessity. The technology can also be used to make better speech-recognition applications and self-driving cars, and it helps wage the ever-enlarging war against the internet spam filling our search engines and e-mail inboxes.
“Our goal is that
future machine learning projects won’t require
people to know everything about both the domain of interest and machine
learning to build useful machine learning applications,” Darpa program manager Kathleen Fisher said in an announcement. “Through new probabilistic programming
languages specifically tailored to probabilistic inference, we hope to
decisively reduce the current barriers to machine learning and foster a boom in
innovation, productivity and effectiveness.”
Once that gets going, the scientists will first have to
improve the “front end” and “back end” of the machines. Respectively, those are
the parts of a computer learning system that developers see, and the parts
responsible for figuring out a predictive model that helps the computer become
smarter.
For developers at the front end, the machines can’t be too complicated, and the programming language should “balance the expressive power of the language with the corresponding difficulty of producing an efficient solver.” To make
developing the machines more accessible to non-experts, debuggers and testing
tools need to be understandable enough as well, so testers can figure out when
there’s a bug or if the computer is spitting out inaccurate results.
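As a rough illustration of that split, assuming nothing about Darpa’s actual design, the front end is the part a developer writes: a short, declarative description of how the data is generated. The back end is a generic solver that searches for the model parameters that best explain the observations. Everything in this Python sketch is hypothetical.

# Front end: the developer only describes how observations are generated.
def model(slope, x):
    return slope * x                                      # a simple, declarative model

# Back end: a generic solver the developer never has to modify.
def solve(model, xs, ys, candidates):
    """Pick the candidate parameter whose predictions best match the data."""
    def squared_error(param):
        return sum((model(param, x) - y) ** 2 for x, y in zip(xs, ys))
    return min(candidates, key=squared_error)

xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]               # made-up observations
best = solve(model, xs, ys, [i / 10 for i in range(50)])  # try slopes 0.0 through 4.9
print("inferred slope:", best)

In a setup like this, a debugger would need to tell the developer whether a bad answer came from a mistake in the model they wrote or from the solver underneath.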
The other question involves how to make computer-learning
machines more predictable. Darpa believes it’s likely that the algorithms used
in the systems will have to become much more sophisticated to find “the most
appropriate solver or set of solvers given a particular model, query or set of
prior data.” That could be done “by incorporating data from the compiler optimization community.” Finally, the solvers need to work efficiently across many different kinds of computers, “including multi-core machines, GPUs, cloud infrastructures, and potentially custom hardware.”
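How that solver selection might work is anyone’s guess, but one way to picture it, purely as a hypothetical sketch, is a dispatcher that looks at simple properties of the model and query and routes them to whichever inference strategy fits.

def choose_solver(model_size, needs_exact_answer):
    """Hypothetical dispatcher: pick an inference strategy from problem properties."""
    if needs_exact_answer and model_size <= 1_000:
        return "exact enumeration"                  # small models can be solved exactly
    if model_size <= 1_000_000:
        return "Markov chain Monte Carlo sampling"  # mid-sized models get sampled
    return "variational approximation"              # huge models get a fast approximation

print(choose_solver(model_size=500, needs_exact_answer=True))
print(choose_solver(model_size=10_000_000, needs_exact_answer=False))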
If it works, then it means more advanced
intelligence-gathering systems, less spam, and Minority Report-style
self-driving cars of the future. Sounds like a pretty good deal. But to produce
a machine-learning system that’s “effective,” the agency states: “Improvements
of two to four orders of magnitude over the state of the art are likely
necessary.” No pressure.
By Robert Beckhusen