This article describes setting up Frigate with Double Take and CompreFace for facial recognition. As we’ll be using GPU offloading, we’ll install Frigate in a separate Docker container instead of running it as the HAOS add-on.
Goal:
A facial recognition system that can be used for automations/authentication while not being connected to the cloud. All data is stored locally.
We’ll be using the following components:
- An existing MQTT broker. This article assumes you have one running.
- A system running Docker. In this example we will use an LXC running Ubuntu, hosted on Proxmox 8.1.4.
- Frigate NVR: a local NVR designed for Home Assistant with AI object detection. GitHub - blakeblackshear/frigate: NVR with realtime local object detection for IP cameras
- Double Take: unified UI and API for processing and training images for facial recognition. GitHub - skrashevich/double-take: Unified UI and API for processing and training images for facial recognition
- CompreFace: an open-source face recognition system. GitHub - exadel-inc/CompreFace: Leading free and open-source face recognition system
- A GPU with CUDA support.
We’ll be using the same hardware and GPU passthrough settings as described in:
https://forum.tinkerpod.org/t/control-home-assistant-entitities-using-local-ai-whisper-piper/57
Set up your Docker Compose file
All three software components (Frigate, Double Take, CompreFace) run as their own Docker container. Assuming you’ve already installed Docker as described in the article above, GPU passthrough should already work on the LXC.
Update your Docker Compose file with the configuration for all three components:
volumes:
  postgres-data:
  double-take:

services:
  frigate:
    container_name: frigate
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
    shm_size: "64mb" # update for your cameras based on the calculation below
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./frigate/config:/config
      - ./frigate:/media/frigate
      - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    ports:
      - "5000:5000"
      - "8554:8554" # RTSP feeds
      - "8555:8555/tcp" # WebRTC over tcp
      - "8555:8555/udp" # WebRTC over udp
    environment:
      FRIGATE_RTSP_PASSWORD: "YOUR_RTSP_PASSWORD"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1 # number of GPUs
              capabilities: [gpu]

  double-take:
    container_name: double-take
    image: skrashevich/double-take
    restart: unless-stopped
    volumes:
      - double-take:/.storage
    ports:
      - "3000:3000"
    environment:
      - TZ=Europe/Amsterdam

  compreface-postgres-db:
    image: ${registry}compreface-postgres-db:${POSTGRES_VERSION}
    restart: always
    container_name: "compreface-postgres-db"
    environment:
      - POSTGRES_USER=${postgres_username}
      - POSTGRES_PASSWORD=${postgres_password}
      - POSTGRES_DB=${postgres_db}
    volumes:
      - postgres-data:/var/lib/postgresql/data

  compreface-admin:
    image: ${registry}compreface-admin:${ADMIN_VERSION}
    restart: always
    container_name: "compreface-admin"
    environment:
      - POSTGRES_USER=${postgres_username}
      - POSTGRES_PASSWORD=${postgres_password}
      - POSTGRES_URL=jdbc:postgresql://${postgres_domain}:${postgres_port}/${postgres_db}
      - SPRING_PROFILES_ACTIVE=dev
      - ENABLE_EMAIL_SERVER=${enable_email_server}
      - EMAIL_HOST=${email_host}
      - EMAIL_USERNAME=${email_username}
      - EMAIL_FROM=${email_from}
      - EMAIL_PASSWORD=${email_password}
      - ADMIN_JAVA_OPTS=${compreface_admin_java_options}
      - MAX_FILE_SIZE=${max_file_size}
      - MAX_REQUEST_SIZE=${max_request_size}B
    depends_on:
      - compreface-postgres-db
      - compreface-api

  compreface-api:
    image: ${registry}compreface-api:${API_VERSION}
    restart: always
    container_name: "compreface-api"
    depends_on:
      - compreface-postgres-db
    environment:
      - POSTGRES_USER=${postgres_username}
      - POSTGRES_PASSWORD=${postgres_password}
      - POSTGRES_URL=jdbc:postgresql://${postgres_domain}:${postgres_port}/${postgres_db}
      - SPRING_PROFILES_ACTIVE=dev
      - API_JAVA_OPTS=${compreface_api_java_options}
      - SAVE_IMAGES_TO_DB=${save_images_to_db}
      - MAX_FILE_SIZE=${max_file_size}
      - MAX_REQUEST_SIZE=${max_request_size}B
      - CONNECTION_TIMEOUT=${connection_timeout:-10000}
      - READ_TIMEOUT=${read_timeout:-60000}

  compreface-fe:
    image: ${registry}compreface-fe:${FE_VERSION}
    restart: always
    container_name: "compreface-ui"
    ports:
      - "8000:80"
    depends_on:
      - compreface-api
      - compreface-admin
    environment:
      - CLIENT_MAX_BODY_SIZE=${max_request_size}
      - PROXY_READ_TIMEOUT=${read_timeout:-60000}ms
      - PROXY_CONNECT_TIMEOUT=${connection_timeout:-10000}ms

  compreface-core:
    image: ${registry}compreface-core:${CORE_VERSION}
    restart: always
    container_name: "compreface-core"
    runtime: nvidia
    environment:
      - ML_PORT=3000
      - IMG_LENGTH_LIMIT=${max_detect_size}
      - UWSGI_PROCESSES=${uwsgi_processes:-1}
      - UWSGI_THREADS=${uwsgi_threads:-1}
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
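The CompreFace services above reference a number of `${...}` variables. Docker Compose reads these from an `.env` file next to the compose file. Below is an illustrative sketch based on the defaults shipped with CompreFace's own compose setup; the exact versions and values are assumptions, so align them with the CompreFace release you pull:

```shell
# .env -- illustrative values only; check them against your CompreFace release
registry=exadel/
POSTGRES_VERSION=latest
ADMIN_VERSION=latest
API_VERSION=latest
FE_VERSION=latest
CORE_VERSION=latest          # use a -gpu core image tag to match runtime: nvidia
postgres_username=postgres
postgres_password=CHANGE_ME
postgres_db=frs
postgres_domain=compreface-postgres-db
postgres_port=5432
enable_email_server=false
email_host=
email_username=
email_from=
email_password=
save_images_to_db=true
compreface_admin_java_options=-Xmx1g
compreface_api_java_options=-Xmx4g
max_file_size=5MB
max_request_size=10MB
max_detect_size=640
uwsgi_processes=2
uwsgi_threads=1
connection_timeout=10000
read_timeout=60000
```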
Start the containers:
docker compose up -d
Once everything has been pulled and the containers are running you’ll have the following services:
Frigate:
Webserver: TCP 5000
RTSP feeds: TCP 8554
WebRTC: TCP/UDP 8555
Double Take:
Webserver/API: TCP 3000
CompreFace:
Webserver/API: TCP 8000
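Once the stack is up, you can quickly smoke-test the three web endpoints from the Docker host (substitute the host's address if you query from another machine; `-sf` makes curl fail quietly on HTTP errors):

```shell
# Check that each web UI answers on its published port
for port in 5000 3000 8000; do
  if curl -sf -o /dev/null "http://localhost:${port}"; then
    echo "port ${port}: OK"
  else
    echo "port ${port}: no response"
  fi
done
```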
Note that Frigate and Double Take do not have any authentication in place. If you want to secure the web UIs you can use something like Traefik: Traefik Proxy Documentation - Traefik
Note that Frigate utilizes shared memory to store frames during processing. The default shm size provided by Docker is 64MB, which is fine for setups with two cameras detecting at 720p. If Frigate is exiting with “Bus error” messages, it is likely because you have too many high-resolution cameras and need to specify a larger shm size, using --shm-size (or service.shm_size in docker-compose). The Frigate container also stores logs in shm, which can take up to 30MB, so make sure to take this into account in your math as well.
You can calculate the necessary shm size for each camera with the following formula using the resolution specified for detect:
# Replace <width> and <height>
$ python -c 'print("{:.2f}MB".format((<width> * <height> * 1.5 * 9 + 270480) / 1048576))'
# Example for 1280x720
$ python -c 'print("{:.2f}MB".format((1280 * 720 * 1.5 * 9 + 270480) / 1048576))'
12.12MB
# Example for eight cameras detecting at 1280x720, including logs
$ python -c 'print("{:.2f}MB".format(((1280 * 720 * 1.5 * 9 + 270480) / 1048576) * 8 + 30))'
126.99MB
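The same formula as a small Python helper, so you can size `shm_size` for any mix of detect resolutions (the 30MB default allowance covers the logs mentioned above):

```python
def shm_mb(width: int, height: int) -> float:
    """Shared memory needed for one camera's detect stream, in MB."""
    return (width * height * 1.5 * 9 + 270480) / 1048576

def total_shm_mb(cameras: list[tuple[int, int]], log_mb: int = 30) -> float:
    """Total shm size for a set of (width, height) detect resolutions, plus logs."""
    return sum(shm_mb(w, h) for w, h in cameras) + log_mb

# Eight cameras detecting at 1280x720, including logs
print(f"{total_shm_mb([(1280, 720)] * 8):.2f}MB")  # -> 126.99MB
```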
CompreFace Configuration
- Navigate to the CompreFace web UI and set up your user. Then create an application (left section) using the “Create” link at the bottom of the page. An application is where you create and manage your Face Collections.
- Enter your application by clicking on its name. Here you have two options: adding new users and managing their access roles, or creating new Face Services. For now we’ll create a new service using the button on the right. We want to do recognition, so select it in the drop-down. After creating a new Face Service, you can see it in the Services List with its name and API key.
- To add known subjects to the Face Collection of your Face Recognition Service, you can use the REST API. Once you’ve uploaded all known faces, you can test the collection using the API on the TEST page. We recommend an image size no higher than 5MB, as larger images slow down the request. The supported image formats are JPEG/PNG/JPG/ICO/BMP/GIF/TIF/TIFF. You can also manually add images under the ‘manage collection’ button in the UI.
- Upload a photo and let the face recognition system match the image against the Face Collection. In the UI you can see the original picture with marks near every face; using the REST API, you’ll receive a response in JSON format.
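As a sketch of the REST calls involved, using CompreFace's Recognition Service API (the host, subject name, image paths, and API key are placeholders for your own values):

```shell
# Add a known face to the Face Collection of your Recognition Service
curl -X POST "http://COMPREFACEIP:8000/api/v1/recognition/faces?subject=David" \
  -H "x-api-key: YOUR_RECOGNITION_API_KEY" \
  -F "file=@/path/to/david.jpg"

# Recognize faces in a new image against the collection (returns JSON)
curl -X POST "http://COMPREFACEIP:8000/api/v1/recognition/recognize" \
  -H "x-api-key: YOUR_RECOGNITION_API_KEY" \
  -F "file=@/path/to/test.jpg"
```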
Frigate Configuration
- Log on to the web UI on port 5000 and navigate to ‘Config’ in the menu on the left.
- Your config will need the following at a minimum:
  - MQTT broker settings
  - Detector settings for GPU offloading
  - ffmpeg settings for GPU offloading
  - A model definition
  - Camera configuration
The Frigate documentation is very complete: Getting started | Frigate
An example configuration file:
mqtt:
  enabled: True
  user: ha_mqtt
  password: YOUR_MQTT_PASSWORD
  host: 10.0.30.10
  port: 1883
  topic_prefix: frigate
  stats_interval: 60

detectors:
  tensorrt:
    type: tensorrt
    device: 0 # this is the default, select the first GPU

ffmpeg:
  hwaccel_args: preset-nvidia-h264

model:
  path: /config/model_cache/tensorrt/yolov7-320.trt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 320
  height: 320

cameras:
  reolink_hallway:
    mqtt:
      timestamp: false
      bounding_box: false
      crop: true
    enabled: True
    ffmpeg:
      inputs:
        - path: rtsp://CAMERAUSER:CAMERAPASSWORD@YOURCAMERAIP/Preview_01_main # <----- The stream you want to use for detection
          roles:
            - detect
            - record
    detect:
      enabled: True
      width: 2650
      height: 1920
      fps: 5
    record:
      enabled: True
      events:
        pre_capture: 5
        post_capture: 5
        objects:
          - person
      retain:
        days: 3
        mode: motion
    motion:
      mask:
        - 0,461,3,0,1919,0,1919,843,1699,492,1344,458,1346,336,973,317,869,375,866,432
    objects:
      track:
        - person
      filters:
        person:
          min_ratio: 0.3
          min_score: 0.6
Depending on your camera, lighting, and storage, you may want to adjust the detection and retention settings. Expect some trial and error before detection works as expected, and depending on conditions you may also need to change settings on the camera itself.
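To verify detection end-to-end, you can query Frigate's HTTP events API; for example, a quick check for recent person events on the camera defined above (host and camera name are placeholders for your own setup):

```shell
# List the latest person events for the hallway camera as JSON
curl -s "http://FRIGATEIP:5000/api/events?camera=reolink_hallway&label=person&limit=5"
```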
Double Take Configuration
- Log on to the web UI on port 3000 and navigate to ‘Config’ in the top menu.
- The configuration will need the following settings at a minimum:
  - MQTT broker settings
  - Frigate configuration
  - Detector configuration (CompreFace)
An example configuration file:
mqtt:
  host: MQTT BROKER IP
  username: MQTT USER
  password: MQTT USER PASSWORD

frigate:
  url: http://FRIGATEIP:5000
  update_sub_labels: true # frigate 0.11+ option to include names in frigate events
  events:
    YOURCAMERANAME:
      attempts:
        latest: 16 # number of "latest" frigate snapshots to try to find face in
        snapshot: 16 # number of event frigate snapshots to try to find face in
        mqtt: true # whether or not to use mqtt snapshots
      image:
        height: 1920 # set this to the detect height of your frigate camera

detectors:
  compreface:
    url: http://COMPREFACEIP:8000
    # recognition api key
    key: API KEY FOR COMPREFACE PROJECT
    # number of seconds before the request times out and is aborted
    timeout: 15
    # minimum required confidence that a recognized face is actually a face
    # value is between 0.0 and 1.0
- Save the configuration and check that the connector indicators at the top of the page are all showing green.
- If so, you should be ready to start collecting faces. Use the faces under Matches to further train the model. If you’re not seeing anything come in, you’ll probably need to tinker with the camera settings; check contrast and light settings. For example, a camera with a lot of backlight in the image may need to be set to high contrast/monochrome mode. More information on the Double Take config can be found here: GitHub - skrashevich/double-take: Unified UI and API for processing and training images for facial recognition.
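Double Take also publishes its results over MQTT, which is handy for debugging before wiring up Home Assistant. Assuming the default topic prefix (`double-take`), you can watch everything it publishes with mosquitto_sub:

```shell
# Watch all Double Take topics (matches, cameras, errors) on your broker
mosquitto_sub -h MQTT_BROKER_IP -u MQTT_USER -P MQTT_USER_PASSWORD \
  -t "double-take/#" -v
```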
Home Assistant integration
Below is a basic notification automation for Home Assistant. Further integration with Home Assistant will be explored in a separate article.
alias: Notify
trigger:
  - platform: state
    entity_id: sensor.double_take_david
  - platform: state
    entity_id: sensor.double_take_unknown
condition:
  - condition: template
    value_template: "{{ trigger.to_state.state != trigger.from_state.state }}"
action:
  - service: notify.mobile_app
    data:
      message: |-
        {% if trigger.to_state.attributes.match is defined %}
        {{trigger.to_state.attributes.friendly_name}} is near the {{trigger.to_state.state}} @ {{trigger.to_state.attributes.match.confidence}}% by {{trigger.to_state.attributes.match.detector}}:{{trigger.to_state.attributes.match.type}} taking {{trigger.to_state.attributes.attempts}} attempt(s) @ {{trigger.to_state.attributes.duration}} sec
        {% elif trigger.to_state.attributes.unknown is defined %}
        unknown is near the {{trigger.to_state.state}} @ {{trigger.to_state.attributes.unknown.confidence}}% by {{trigger.to_state.attributes.unknown.detector}}:{{trigger.to_state.attributes.unknown.type}} taking {{trigger.to_state.attributes.attempts}} attempt(s) @ {{trigger.to_state.attributes.duration}} sec
        {% endif %}
      data:
        attachment:
          url: |-
            {% if trigger.to_state.attributes.match is defined %}
            http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.match.filename}}?box=true&token={{trigger.to_state.attributes.token}}
            {% elif trigger.to_state.attributes.unknown is defined %}
            http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.unknown.filename}}?box=true&token={{trigger.to_state.attributes.token}}
            {% endif %}
        actions:
          - action: URI
            title: View Image
            uri: |-
              {% if trigger.to_state.attributes.match is defined %}
              http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.match.filename}}?box=true&token={{trigger.to_state.attributes.token}}
              {% elif trigger.to_state.attributes.unknown is defined %}
              http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.unknown.filename}}?box=true&token={{trigger.to_state.attributes.token}}
              {% endif %}
mode: parallel
max: 10