Inference cameras use AI processing to classify a scene or object, or to detect or segment objects within an image.
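To make the three task types concrete, here is a toy illustration of the kind of output each produces for a single frame. The shapes, labels, and values are illustrative only, not the output of any real model or camera.

```python
import numpy as np

# One grayscale frame from the camera (illustrative size).
frame = np.zeros((240, 320), dtype=np.uint8)

# Classification: one qualitative decision for the whole frame.
classification = {"label": "scratch", "confidence": 0.91}

# Detection: a list of located objects, each with a bounding box
# given as (x_min, y_min, x_max, y_max) in pixel coordinates.
detection = [{"box": (40, 60, 120, 150), "label": "scratch"}]

# Segmentation: a per-pixel mask marking which pixels belong
# to the object of interest.
segmentation = np.zeros(frame.shape, dtype=bool)
segmentation[60:150, 40:120] = True
```

The progression from one label per frame, to one box per object, to one decision per pixel is what distinguishes the three tasks.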
Without delving too deeply into the specifics, several key advances over the last decade have pushed forward the use of AI in image processing: greater access to better processing hardware, more user-friendly software development platforms, major investment by large tech companies, and access to significantly more image data via the internet. Together, these have accelerated research and development.
AI-based processing can be used in industrial or non-industrial applications where a qualitative decision must be made and the target object or scene is variable due to its organic nature or because the conditions are unpredictable.
Edge processing is another popular term right now. When we talk about inference cameras, we mean cameras that use AI processing onboard, or on the edge, to classify a scene or object, or to detect or segment something in an image.
Rather than being connected to a central image processing unit within a facility or even a cloud-based solution, inference cameras provide results directly at the point of work. This has the advantage of reducing latency, simplifying the system architecture, and allowing for the use of lower-power hardware.
Rather than performing computationally expensive image reconstruction before inference, the LLI camera performs computationally efficient data preprocessing on the optically encoded pattern through the mask.
As a result, the LLI camera avoids costly computation while achieving real-time inference. The researchers propose a new data preprocessing approach for the optically encoded patterns produced by the mask, called local binary patterns map generation.
This preprocessing method greatly improves the encoded pattern’s robustness to local disturbances in the scene, allowing the LLI camera to be used in practical applications. The LLI camera’s performance is evaluated using optical experiments on handwritten digit recognition and gender estimation with changing illumination and a moving target.
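Local binary patterns (LBP) are a standard texture descriptor: each pixel is replaced by a bit code recording which of its neighbours are at least as bright as it is. A minimal NumPy sketch of a basic 8-neighbour LBP map follows; note this is a generic LBP implementation for illustration, not the specific preprocessing pipeline used in the LLI camera work.

```python
import numpy as np

def lbp_map(img):
    """Compute a basic 8-neighbour local binary pattern (LBP) map.

    Each interior pixel is replaced by an 8-bit code: one bit per
    neighbour, set when that neighbour is >= the centre pixel.
    """
    img = np.asarray(img, dtype=np.float32)
    h, w = img.shape
    # Offsets of the 8 neighbours, clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    code = np.zeros(centre.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbour >= centre).astype(np.uint8) << bit
    return code
```

Because the code depends only on brightness comparisons between neighbouring pixels, adding a constant offset to the whole image leaves it unchanged, which is the kind of robustness to illumination changes the preprocessing exploits.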
According to FLIR, the Firefly camera makes it simple to get started with deep learning inference. Users can plug it into existing applications and use it as a standard machine vision camera because it uses the familiar GenICam and USB3 protocols. Metadata derived from inference, such as object classification or position data, can be output as GPIO signals or GenICam chunk data.
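To show what consuming chunk-style metadata can look like on the host side, here is a small sketch that packs and unpacks a hypothetical inference payload. The layout (a 16-bit class id followed by a 32-bit float confidence) is an assumption made for illustration; a real camera's chunk layout is defined by its GenICam description and would be read through the vendor's SDK.

```python
import struct

# Hypothetical chunk payload layout: little-endian 16-bit class id
# followed by a 32-bit float confidence. Real GenICam chunk layouts
# are camera-specific and will differ.
CHUNK_FORMAT = "<Hf"

def parse_inference_chunk(payload: bytes):
    """Unpack a (class_id, confidence) pair from a chunk payload."""
    class_id, confidence = struct.unpack(CHUNK_FORMAT, payload)
    return class_id, confidence

# Round-trip example: the camera attaches the payload to each frame;
# the host only needs to unpack it alongside the image data.
payload = struct.pack(CHUNK_FORMAT, 3, 0.97)
result = parse_inference_chunk(payload)
```

Delivering results as compact metadata like this, rather than as full images for downstream processing, is what lets the host-side logic stay simple and low-latency.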