AI-powered building security, minus bias and privacy pitfalls?


Facial recognition has swept the physical security marketplace, with wide adoption by governments, banks, and retailers. Grand View Research estimates that “the global facial recognition market size was valued at USD 3.86 billion in 2020 and is expected to expand at a compound annual growth rate (CAGR) of 15.4% from 2021 to 2028”.

In a way, facial recognition has lodged itself in people’s minds as the de facto technology for visual surveillance, and we should all find that quite disturbing!

I was reminded of this when I stumbled across an interview with the founder of ambient.ai, a company that appears to be taking a refreshingly different approach:

The first generation of automatic image recognition, Shrestha said, was simple motion detection, little more than checking whether pixels were moving around on the screen — with no insight into whether it was a tree or a home invader. Next came the use of deep learning to do object recognition: identifying a gun in hand or a breaking window. This proved useful but limited and somewhat high maintenance, needing lots of scene- and object-specific training.
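To make the contrast concrete, that first generation really was as simple as it sounds. Here is a minimal sketch of pure frame differencing; the thresholds and function name are illustrative, not from the article:

```python
import numpy as np

def motion_detected(prev_frame: np.ndarray, frame: np.ndarray,
                    pixel_threshold: int = 25,
                    area_threshold: float = 0.01) -> bool:
    """First-generation motion detection: flag a frame when enough pixels change.

    Frames are HxW grayscale uint8 arrays. Note the limitation Shrestha
    describes: a swaying tree and a home invader trip this check identically.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed_fraction = (diff > pixel_threshold).mean()
    return changed_fraction > area_threshold
```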

What do they do differently?

“The insight was, if you look at what humans do to understand a video, we take lots of other information: is the person sitting or standing? Are they opening a door, are they walking or running? Are they indoors or outdoors, daytime or nighttime? We bring all that together to create a kind of comprehensive understanding of the scene,” Shrestha explained. “We use computer vision intelligence to mine the footage for a whole range of events. We break down every task and call it a primitive: interactions, objects, etc., then we combine those building blocks to create a ‘signature’.”
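The interview doesn’t spell out how primitives are combined, so the following is only a guess at the general shape: treat each primitive as a named event and a signature as a set of primitives that must co-occur before a risk indicator fires. The event names and the simple AND-combination rule are my assumptions for illustration:

```python
# Hypothetical sketch: primitives are named events inferred from footage;
# a "signature" is a set of primitives that must all be observed.
# Names and the AND logic are assumptions, not Ambient's actual design.
TAILGATING = {"door_opening", "second_person_present", "after_hours"}

def matches(observed: set[str], signature: set[str]) -> bool:
    """True when every primitive in the signature was detected in the scene."""
    return signature <= observed

scene = {"door_opening", "second_person_present", "after_hours", "outdoors"}
if matches(scene, TAILGATING):
    print("raise risk indicator: possible tailgating")
```

Notice that nothing in such a signature requires an identity: the risk indicator is composed entirely of behavioral events, which is presumably what lets the approach skip facial recognition altogether.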

They claim around 200 such rules, and count five of the largest tech companies (among others) as paying customers. Another area where they stand out for me is how they tackle bias:

“We built the platform around the idea of privacy by design,” Shrestha said. With AI-powered security, “people just assume facial recognition is part of it, but with our approach you have this large number of signature events, and you can have a risk indicator without having to do facial recognition. You don’t just have one image and one model that says what’s happening — we have all these different blocks that allow you to get more descriptive in the system.”

Essentially, this is done by keeping each individually recognized activity bias-free to begin with. For instance, whether someone is sitting or standing, or how long they’ve been waiting outside a door: if each of these detections can be audited and shown to perform consistently across demographics and groups, then a risk score composed from them inherits that property. In this way the system structurally reduces bias.
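The article doesn’t describe the audit procedure, but the per-primitive claim is at least testable: measure each detector’s accuracy separately for every demographic group on a labeled evaluation set, and reject detectors whose rates diverge. A hypothetical sketch, where the parity metric and threshold are my assumptions:

```python
from collections import defaultdict

def audit_primitive(results: list[tuple[str, bool, bool]],
                    max_gap: float = 0.05) -> bool:
    """Audit one primitive detector for demographic parity.

    `results` holds (group, ground_truth, detected) triples from a labeled
    evaluation set. The detector passes if its true-positive rate varies by
    less than `max_gap` across groups. The metric and threshold are
    illustrative assumptions, not Ambient's stated methodology.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for group, truth, detected in results:
        if truth:  # only ground-truth positives count toward the TPR
            totals[group] += 1
            hits[group] += detected
    rates = {g: hits[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()) < max_gap
```

Auditing at the primitive level is tractable precisely because each detector answers a narrow question (sitting vs. standing, door open vs. closed) rather than “who is this person”.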

I’ve no first-hand experience, so I won’t comment on efficacy, and this is not a recommendation. Still, any approach to physical security monitoring that moves us away from facial recognition by default is worth highlighting to decision-makers.