Google and the Department of Defense are jointly funding a project in which NEIL (Never Ending Image Learner), a computer system, processes images 24 hours a day and, with little human interference, is learning to understand and interpret visual cues. Beyond simply observing visual details, NEIL is gaining the ability to interpret what those various signals mean.
It began its education last July and has processed over three million images to date. NEIL has already shown ability in surprising ways, such as 'seeing' the similarity between a zebra's stripes and a tiger's stripes. Pretty simple for you and me, but groundbreaking for a computer.
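Similarity judgments like the zebra/tiger example are often built on comparing feature vectors extracted from images. The sketch below is a simplified illustration only, not NEIL's actual method: the feature values are made up, and cosine similarity stands in for whatever comparison the real system uses.

```python
import math

def cosine_similarity(a, b):
    """Score how alike two feature vectors are (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical texture descriptors: [stripe strength, contrast, warm color].
zebra = [0.9, 0.8, 0.1]
tiger = [0.8, 0.9, 0.3]
plain_horse = [0.1, 0.2, 0.1]

# The striped animals score as far more similar to each other
# than the zebra does to a plain-coated horse.
print(cosine_similarity(zebra, tiger))
print(cosine_similarity(zebra, plain_horse))
```

The point is that "stripes are like stripes" falls out of geometry once images are turned into numbers; the hard part, and what makes NEIL notable, is learning useful features without a human labeling everything first.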
While NEIL is in its infancy, it is learning to examine news videos and draw inferences, from one person's resemblance to another to location-based assumptions. Certainly one can see the possibilities. Yes, there are 21st-century war tactics and civilian defense against terrorist plots, but imagine a simple interface or robot able to understand one's needs based on visual input alone. This is actually quite awesome!
There are many sci-fi movies where computers see and interpret, and it is simply treated as part of reality. But where does that reality lead? When do computers decide they don't need us?