The growing use of surveillance cameras, whether for public safety, health surveillance or commercial purposes, has increased concerns about privacy. Nowadays, it seems that people’s movements are captured by surveillance cameras wherever they go.
The number of deployed surveillance systems has grown with no sign of slowing down. According to the US Bureau of Labor Statistics, security camera installations in the United States grew from 47 million in 2015 to 85 million in 2021, an increase of roughly 80%. That's roughly one camera for every four residents of the country. According to research from IHS Markit, the number of surveillance cameras in use around the world was expected to surpass 1 billion by 2021. And according to Reportlinker, the video surveillance market is projected to grow more than 10% annually through 2026.
The increasing reach of these systems has increased concerns about privacy violations, particularly with regard to the use of facial recognition. In addition to the loss of privacy resulting from the widespread use of facial recognition in China, studies by MIT and Stanford University and other institutions have uncovered built-in biases in facial recognition systems.
Some US cities have responded. In 2019, San Francisco banned the use of facial recognition in local government security cameras, and at least a dozen other US cities have since banned facial recognition for one use or another. But more surveillance doesn't necessarily mean less privacy.
Improvements in machine learning (ML) technology can both improve the efficiency of data collection from security camera feeds and go a long way in protecting the privacy of individuals appearing in those feeds. For example, a smart camera can do the processing locally, eliminating the need to transfer and store data. It can also have the intelligence to know the difference between what it should capture and what it should ignore. A smart camera not only performs its tasks more efficiently, but can also help prevent both intentional and unintentional data misuse.
How deep learning protects privacy
As well as becoming more prevalent, surveillance cameras have also become more powerful, with high-resolution lenses, greater local computing power, and high-bandwidth internet connections. In some systems, the use of machine learning and artificial intelligence (AI) has enhanced the ability to search through the hundreds or thousands of hours of video recorded by those systems.
As video surveillance systems become more powerful and potentially intrusive, ML and AI can also be used to protect privacy. Video intelligence software powered by deep learning – a subset of AI – can be trained to focus on what it should see and effectively look away from what it shouldn’t see.
Deep learning, which aims to mimic the functions of the human brain through the use of a neural network of three or more layers, can self-discover how to identify and classify objects and patterns. By using tagged data to train the system, a machine can “learn” to work independently and become more powerful with exposure to more data over time. Significantly, this can be done with a small footprint that allows for embedded, localized processing that can effectively manage privacy.
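To make the "learn from tagged data" idea concrete, here is a deliberately tiny sketch: a single-layer perceptron (the simplest ancestor of the multi-layer networks described above) that learns a binary label from a handful of labeled examples. Everything here is illustrative; real video analytics use deep, multi-layer networks trained on large image datasets, not four hand-written samples.

```python
# Illustrative only: a one-layer perceptron trained on tagged data,
# showing "learning from labeled examples" in miniature.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (features, label) pairs with label 0 or 1."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in samples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction
            # Nudge the weights toward reducing the error on this example
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Tagged training data: logical AND as a stand-in for any binary label
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 0, 0, 1]
```

The same principle, scaled up to many layers and millions of parameters, is what lets an embedded camera classify objects on-device with a small compute footprint.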
In one example, a CCTV system equipped with deep learning software can classify people approaching a building entrance (such as an office, stadium, or theater), grant or deny entry, and then discard all captured information. By processing information locally, without transmitting or storing data, it collects only the minimum needed and then "forgets" it. In another example, a camera monitoring a company's parking lot might also look into the window of a neighbor's house. The system can be configured to never capture images from that window. The software thus compensates for the camera's positioning, preventing both accidental capture and intentional misuse of images taken beyond the company's premises.
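The window scenario is a form of "privacy masking": pixels inside a configured exclusion zone are blanked before any analysis or storage happens. A minimal sketch of that idea, assuming a frame is a 2D grid of pixel values and zones are rectangles (the function name and layout are invented for illustration, not taken from any specific product):

```python
# Hypothetical sketch of privacy masking: blank every pixel inside a
# configured exclusion zone (e.g., a neighbor's window) before the
# frame reaches any analytics or storage.

def apply_privacy_mask(frame, exclusion_zones):
    """frame: 2D list of pixel values; each zone is (top, left, bottom, right)."""
    masked = [row[:] for row in frame]  # copy so the original is untouched
    for top, left, bottom, right in exclusion_zones:
        for y in range(top, bottom):
            for x in range(left, right):
                masked[y][x] = 0  # zero out the excluded pixel
    return masked

# A 4x6 toy "frame"; the zone covers rows 1-2, columns 3-4
frame = [[9] * 6 for _ in range(4)]
clean = apply_privacy_mask(frame, [(1, 3, 3, 5)])
print(clean[1][3], clean[0][0])  # → 0 9
```

Because the mask is applied on-device, ahead of everything else in the pipeline, the excluded pixels never exist anywhere downstream, which is stronger than deleting them after the fact.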
ML makes data actionable
Video intelligence software not only keeps out improper information, but also makes finding the right information in live and archived video feeds more efficient. Monitoring or retrieving information from video recordings has traditionally required manual review by human eyes, which is not only time-consuming but also prone to oversights, errors, and data breaches. ML video content analysis software with deep learning can extract, classify, and quickly index targeted objects, such as people or vehicles, making video feeds significantly more searchable, actionable, and quantifiable.
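The indexing step is what turns hours of footage into something searchable. A hedged sketch of the idea, where detector output is reduced to a class-to-timestamps index (the sample detections are invented; a real pipeline would populate them from an analytics model):

```python
# Illustrative sketch: index detector output so archived video becomes
# searchable by object class instead of by scrubbing through footage.
from collections import defaultdict

def build_index(detections):
    """detections: (timestamp_sec, class_label) pairs -> class -> timestamps."""
    index = defaultdict(list)
    for ts, label in detections:
        index[label].append(ts)
    return index

# Invented sample detections: seconds into the recording, object class
detections = [(12, "person"), (15, "vehicle"), (40, "person")]
index = build_index(detections)
print(index["person"])  # → [12, 40]
```

A query like "show every person detection" then becomes a dictionary lookup rather than a manual review of the recording.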
Object classification and indexing also enables intelligent alerts when specific objects, behaviors, or anomalous activity are detected. This can include count-based alerts when the number of people in a given area exceeds a set limit, alerts triggered by object identification, or face recognition where appropriate.
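A count-based alert is the simplest of these to sketch. Assuming per-frame people counts arrive from a detector (the counts below are made up), the alert logic itself is just a threshold check:

```python
# Illustrative count-based alert: flag any frame where the number of
# detected people in a zone exceeds a configured limit. The counts are
# invented; a real system would get them from an object detector.

def occupancy_alerts(frame_counts, limit):
    """frame_counts: people detected per frame -> list of (index, count) alerts."""
    return [(i, count) for i, count in enumerate(frame_counts) if count > limit]

counts = [3, 5, 12, 8, 15]           # people detected per frame
print(occupancy_alerts(counts, 10))  # → [(2, 12), (4, 15)]
```

Note that only the counts and alert records leave the device; no images of the people being counted need to be transmitted or stored.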
Video content analysis also aggregates metadata from live or archived feeds, allowing analysts to understand trends and develop techniques to improve safety, operations, and security. And by using properly implemented deep learning technology, it can be done without increased privacy risks.
Improving video surveillance while managing privacy
Regardless of privacy concerns and attempts to limit the use of facial recognition, the amount of video and other data collected will not diminish. For example, video systems can help health officials track the number of people wearing masks or maintaining social distancing. Local officials can get a clear view of traffic flows and bottlenecks. Companies can monitor people’s shopping habits. The security of public places increasingly depends on good video surveillance.
Aside from these uses, the proliferation of home systems with surveillance capabilities also fuels fears of lost privacy. More than 128 million cloud-connected voice assistants — like Google Home, Amazon Echo, and Facebook Portal — are used in US homes and can record and share information. And 76% of US television households report having smart TVs, which have raised concerns about their potential to spy on users.
However, the way video is collected, processed, and searched can achieve the goals of tighter security, better operations, or improved safety without further compromising privacy. The current approach of using cloud-connected security cameras with cloud-based analytics doesn't stand up to privacy and bias concerns. But ML software with deep learning capabilities enables localized, embedded intelligence and analytics — with high performance at low power — that can enhance security while managing privacy. Intelligent video technology can also be integrated seamlessly into most existing CCTV surveillance systems.
The use of deep learning technologies can also drive future improvements and enable companies to continuously increase the complexity of their systems through additional AI applications.
David Gamba is Vice President at Sima AI.
Welcome to the VentureBeat community!
DataDecisionMakers is the place where experts, including technical staff, working with data can share data-related insights and innovations.
If you want to read about innovative ideas and up-to-date information, best practices and the future of data and data technology, visit us at DataDecisionMakers.
You might even consider contributing an article of your own!
Read more from DataDecisionMakers