By Rachel Harrington
After the Boston Marathon bombings last year, it took authorities just three days to sift through an abundance of footage and find their suspects – light speed compared to the weeks it took to find those responsible for the London bombings in 2005.
Still, can this happen faster? Professor Venkatesh Saligrama (ECE, SE) thinks so, and he’s working to make that vision a reality.
The Office of Naval Research awarded him $900K for his project, Video Search and Retrieval, which will focus on developing a visual search system. Think Google but for security videos.
“Our initial idea was to develop a system that could annotate web videos,” said Saligrama, who collaborated with Pierre-Marc Jodoin at the University of Sherbrooke on early stages of this research. “That project turned out to be extremely challenging so we started to focus on surveillance videos, where the footage is obtained in a controlled environment.”
Manually searching large archives of footage can be both time-consuming and monotonous. Saligrama and Ph.D. students Greg Castanon (ECE) and Yuting Chen (SE) are now working closely with the U.S. Naval Research Laboratory to help change this.
Chen said she is looking forward to working on this project with Saligrama, whom she first encountered while conducting her own research.
“I spent almost a year and a half working on an idea that uses correlated motion cues to calibrate camera networks,” she said. “When I came to BU Systems Engineering and browsed the research papers, I found the exact idea implemented by Venkatesh’s group. I was surprised and just a little bit bitter.”
From there, she knew that she wanted to study with Saligrama.
“He is an experienced researcher and just as passionate and curious as a young freshman,” she said. “I find that one sentence from him can help me through a problem that’s been troubling me for weeks.”
Chen, Castanon and Saligrama hope that together, they can make the process of searching through security footage more automated and more responsive to user queries.
“Currently, for many YouTube videos, there are textual meta-tags that are used in the search process,” Saligrama explained. “For surveillance videos, we do not often have this so our searches need to be based purely on visual features and patterns.”
One of the challenges in video search is that activity patterns can be highly inconsistent and can occur for unpredictable amounts of time.
“Unlike image search though, videos have some temporal patterns we can exploit,” said Saligrama.
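The idea of searching footage by visual features and their temporal pattern, rather than by text tags, can be illustrated with a toy sketch. The code below is purely hypothetical and is not the group's actual method: it uses raw motion energy as a stand-in for real visual descriptors and a simple sliding-window comparison as a stand-in for real retrieval.

```python
# Hypothetical sketch: match a short query clip against a longer
# surveillance stream using only visual features and their temporal
# pattern -- no textual meta-tags.

def motion_energy(frames):
    """Per-frame feature: total absolute pixel change between
    consecutive frames (a crude proxy for real visual descriptors)."""
    feats = []
    for prev, cur in zip(frames, frames[1:]):
        feats.append(sum(abs(a - b) for a, b in zip(prev, cur)))
    return feats

def best_match(query_feats, archive_feats):
    """Slide the query's temporal feature pattern over the archive and
    return the start index with the smallest total difference."""
    n, m = len(archive_feats), len(query_feats)
    best_start, best_cost = None, float("inf")
    for start in range(n - m + 1):
        cost = sum(abs(q - a) for q, a in
                   zip(query_feats, archive_feats[start:start + m]))
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start

# Toy data: each "frame" is a flat list of pixel intensities.
archive = [[0]*4, [0]*4, [9, 9, 0, 0], [9, 9, 9, 9], [0]*4, [0]*4]
query = [[0]*4, [8, 8, 0, 0], [8, 8, 8, 8]]
print(best_match(motion_energy(query), motion_energy(archive)))  # -> 1
```

The query's burst of motion lines up with the similar burst in the middle of the archive, even though the pixel values differ, which is the sense in which temporal patterns can be exploited when no meta-tags exist.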
In the future, Saligrama hopes that the research will not only improve security but also improve medical database searches.
For more information about the project, visit our Research Spotlight page.