
Predicting the future by watching TV

DECCAN CHRONICLE
Published Jun 30, 2016, 1:22 am IST
Updated Jun 30, 2016, 1:22 am IST
Researchers created an algorithm that analyzes video, then uses what it learns to predict how humans will behave.
A still from The Office

The next time you catch your robot watching sitcoms, don’t assume it’s slacking off. It may be hard at work. TV shows and video clips can help artificially intelligent systems learn about and anticipate human interactions, according to MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Researchers created an algorithm that analyzes video, then uses what it learns to predict how humans will behave.

Six hundred hours of clips from shows like The Office and The Big Bang Theory let the AI learn to identify high-fives, handshakes, hugs, and kisses. It then learned what the moments leading up to those interactions looked like.

After the AI devoured the videos to train itself, the researchers fed it a single frame from a video it had not seen and tasked it with predicting what would happen next. It was right about 43 per cent of the time.
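
To make that setup concrete, here is a minimal sketch of how such a predictor might be framed in code. The four interaction labels come from the article; the PyTorch model, the training loop, and every other detail are illustrative assumptions, not CSAIL's actual system.

import torch
import torch.nn as nn
from torchvision import models

# Illustrative sketch only, not MIT CSAIL's code: anticipating one of
# four interactions from a single earlier frame, framed as classification.
ACTIONS = ["high-five", "handshake", "hug", "kiss"]  # labels named in the article

# A pretrained image backbone stands in for whatever video features
# the real system learned from its 600 hours of footage.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(ACTIONS))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """frames: a batch of frames captured *before* an interaction;
    labels: the index of the interaction that eventually followed."""
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# At test time the model sees one unseen frame and guesses what comes next.
model.eval()
with torch.no_grad():
    frame = torch.randn(1, 3, 224, 224)  # placeholder for a real video frame
    guess = ACTIONS[model(frame).softmax(dim=1).argmax().item()]
print("Predicted next interaction:", guess)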

Humans nail the answer 71 per cent of the time, but the researchers still think the AI did a great job, given its rudimentary education. “Even a toddler has much more life experience than this,” says Carl Vondrick, the project’s lead author. “I’m interested to see how much the algorithms improve if we train it on years of videos.”

The AI doesn’t understand what’s happening in the scene in the same way a human does. It analyzes the composition and movement of pixels to identify patterns. “It drew its own conclusions in terms of correlations between the visuals and the eventual action,” says Vondrick.
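
As a rough illustration of what "the composition and movement of pixels" could mean in practice, the hypothetical helper below stacks a frame's raw intensities with its frame-to-frame difference, producing a feature vector that a classifier could correlate with whatever action follows. It is a sketch under assumed inputs, not the team's method.

import numpy as np

def pixel_features(prev_frame: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Encode appearance plus motion with no semantic understanding.

    prev_frame, frame: HxWx3 uint8 images from consecutive timesteps.
    Returns a flat vector a downstream classifier could correlate
    with the interaction that eventually follows.
    """
    composition = frame.astype(np.float32) / 255.0                    # appearance
    movement = composition - prev_frame.astype(np.float32) / 255.0    # motion
    return np.concatenate([composition.ravel(), movement.ravel()])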

Vondrick spent two years on the project. He says the efficient, self-reliant training could come in handy for more important things.

For example, an improved version of the system could have a future in hospitals and other settings where it could prevent injuries. He says smart cameras could analyze video feeds and alert emergency responders if something catastrophic is about to happen. Embed these systems in robots, and they could even intervene themselves.
Source: www.wired.com
