AI Cameras can make the world a better place…
Most CCTV is associated with mass surveillance and privacy issues. Cameras can be used to see where people are and what they are doing. Public opinion often sees cameras as a nuisance at best and a democratic threat at worst. Can they be a force for good?
Technology is evolving quickly and cameras will soon be 4K as standard. This means that a camera mounted in the ceiling of a building will be able to count the wrinkles on visitors' foreheads instead of seeing blurry shapes. New forms of surveillance will become possible. For most businesses, however, the amount of data produced by a 4K camera is so large that sending it all to the Cloud for processing is economically unviable. We will need to process data on the device or nearby.
AI accelerators are a technological innovation that enables these video streams to be processed in real time. Processing images before they leave the camera brings many advantages:
- GDPR & privacy compliance — if the camera does not need to recognise human faces, they can be blurred or encrypted, making compliance with complex privacy laws like GDPR much easier. If people give permission, extra value can be generated, e.g. opening doors without keys.
- Risks and dangers can be detected quickly — a wild animal entering a building, a person carrying a weapon, a fire, a water leak, an under-18 entering a casino, a person falling to the ground, a five-year-old lost in a shopping centre, a person violating a restraining order, … can all be detected and acted upon quickly.
- Better services — restaurant waiters can be alerted when a customer is raising their hand and not getting attention, queues can prompt staff and managers to take action, a vending machine can sell alcohol only to adults, returning customers can be offered personalised suggestions based on past behaviour, airline check-ins can be made faster, a nightly rodent problem can be spotted, …
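To make the first bullet concrete, here is a minimal sketch of on-device face redaction, in which a face region is pixelated before the frame ever leaves the camera. Everything in it is illustrative rather than taken from any real product: frames are represented as plain 2-D lists of grayscale pixel values, and `detect_faces` is a hypothetical stand-in for the neural face detector a real camera would run on its AI accelerator.

```python
# Illustrative sketch of on-device face redaction before a frame leaves
# the camera. Assumptions (not from the post): a frame is a 2-D list of
# grayscale pixels, and detect_faces() stands in for a real detector.

def detect_faces(frame):
    """Hypothetical detector: returns (row, col, height, width) boxes.
    A real camera would run a neural detector on the AI accelerator."""
    return [(1, 1, 2, 2)]  # pretend one face was found here

def pixelate(frame, box):
    """Replace a region with the average of its pixels so the face is no
    longer recognisable, while the rest of the frame stays untouched."""
    r, c, h, w = box
    region = [frame[i][c:c + w] for i in range(r, r + h)]
    avg = sum(sum(row) for row in region) // (h * w)
    for i in range(r, r + h):
        for j in range(c, c + w):
            frame[i][j] = avg
    return frame

def redact(frame):
    """Blur every detected face; only the result leaves the device."""
    for box in detect_faces(frame):
        frame = pixelate(frame, box)
    return frame
```

Because only the redacted frame is ever transmitted, the operator's servers never hold identifiable faces, which is what makes the GDPR argument above work.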
If we want to move away from mass surveillance and privacy issues and see more value in cameras, we need to be able to bring more intelligence towards them. We need to make it clear when a camera is recognising our faces and our behaviour and when it is not. Just as when visiting a website, we should be able to give permission or, otherwise, be assured that abuse is not happening. In a DeepFake world, we need to be able to use camera images to prove a crime was committed while at the same time protecting the innocent.
Unless we make AI cameras programmable, similar to apps on a mobile phone, they are likely to harm the innocent and protect the guilty. In most cases, AI cameras should encrypt our faces in such a way that operators have no way of telling who passed by. Only if we have given permission in exchange for better services, or a crime has been committed, should cameras be able to recognise our faces. Unless we actively work towards this goal, 4K camera abuse and DeepFake arguments used to lock up the innocent and free the guilty will become ever more common.
I am actively evaluating whether the world needs an open-source AI camera platform that protects and serves the innocent and not the guilty. If you agree, please share this post and reach out to help. A simple “Share” can save future lives.