Tech day: Facial recognition on my magic mirror

Last month we had another tech day, a day on which we get to try out cool stuff! This time my idea was to add facial recognition to my magic mirror (see [my previous blog][1]).

So with a group of five people, we started by thinking about the requirements. When someone is standing in front of the mirror, we want the mirror to detect that, which means we need motion detection! We also want to recognise the user and their emotion. If we can recognise the user, it would be cool to display a message on the mirror, or even speak one out loud. Enough requirements for a day! We divided the group into two teams, one working on the frontend and the other on the backend, and it was time to get to work!

Frontend

For the frontend we used [tracking.js][2] to detect whether someone is standing in front of the camera. Once a person is detected, we take a picture and send it to our backend. Below is the code for the tracking part. To prevent every movement from resulting in a picture, we added a check so that the next picture is only taken after 10 seconds have passed.

// Assumes tracking.js and its face classifier are loaded, and the page
// contains a <video id="video"> and a <canvas id="canvas"> element
const canvas = document.getElementById('canvas');
const context = canvas.getContext('2d');

const tracker = new tracking.ObjectTracker('face');
tracker.setInitialScale(4);
tracker.setStepSize(2);
tracker.setEdgesDensity(0.1);

// Timestamp of the last snapshot; start in the past so the first detection triggers one
let pictureTaken = new Date(0);

// Start tracking
tracking.track('#video', tracker, { camera: true });
tracker.on('track', function (event) {
    context.clearRect(0, 0, canvas.width, canvas.height);
    // Only take a new picture if the previous one is more than 10 seconds old
    const timeAgo = new Date(Date.now() - 10 * 1000);
    const inLastTime = pictureTaken.getTime() > timeAgo.getTime();
    if (event.data.length >= 1 && !inLastTime) {
        takeSnapshot(); // defined elsewhere: grabs a frame and sends it to the backend
        pictureTaken = new Date();
    }
    // Draw a rectangle and its coordinates around every tracked face
    event.data.forEach(function (rect) {
        context.strokeStyle = '#a64ceb';
        context.strokeRect(rect.x, rect.y, rect.width, rect.height);
        context.font = '11px Helvetica';
        context.fillStyle = '#fff';
        context.fillText('x: ' + rect.x + 'px', rect.x + rect.width + 5, rect.y + 11);
        context.fillText('y: ' + rect.y + 'px', rect.x + rect.width + 5, rect.y + 22);
    });
});

Backend

At our DevCon conference my colleague Bert Ertman showed how he used [AWS Rekognition][3] to identify who was ringing his doorbell. This looked very promising, so we decided to use the same service for the face recognition part. We want the backend to receive the photo taken by the frontend, which can then be checked against AWS Rekognition to see if the face is recognised. To make this work, we needed to configure a few things in AWS. Note that I’m using the AWS command line interface for most of these steps, but this can also be done in code.

  1. Add your AWS credentials to your local machine ([see the AWS CLI documentation][4])
  2. Create a bucket on S3
  3. Upload a file to the bucket (example commands for steps 1 to 3 follow after step 5)
  4. Create a collection

aws rekognition create-collection \
    --collection-id "someCollectionId" \
    --region eu-west-1 \
    --profile default

  5. Index each face that you want to be recognised

aws rekognition index-faces \
    --image '{"S3Object":{"Bucket":"bucket-name","Name":"file-name"}}' \
    --collection-id "collection-id" \
    --detection-attributes "ALL" \
    --external-image-id "example-image.jpg" \
    --region eu-west-1 \
    --profile default
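
For completeness, steps 1 to 3 can also be done from the command line. A minimal sketch, in which the bucket and file names are placeholders:

# Step 1: store your AWS credentials locally
aws configure

# Step 2: create an S3 bucket
aws s3 mb s3://bucket-name --region eu-west-1

# Step 3: upload a reference photo of the face you want to index
aws s3 cp ./example-image.jpg s3://bucket-name/file-name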

Now that we have indexed a face, we can check whether a person is recognised by AWS Rekognition. The following command searches the collection for faces that match the largest face in the given image.

aws rekognition search-faces-by-image \
    --image '{"S3Object":{"Bucket":"bucket-name","Name":"Example.jpg"}}' \
    --collection-id "collection-id"
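
An abbreviated, illustrative example of the kind of response you get back (the real response contains more fields, and the values here are made up):

{
    "FaceMatches": [
        {
            "Similarity": 96.5,
            "Face": {
                "ExternalImageId": "example-image.jpg",
                "Confidence": 99.9
            }
        }
    ]
}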

The response gives us each face that has been matched and the similarity score of the match. Another request we can do is detect-faces. Its response contains information such as emotions and gender, but also whether the person is wearing (sun)glasses or is smiling.

aws rekognition detect-faces \
    --image '{"S3Object":{"Bucket":"bucket","Name":"file"}}' \
    --attributes "ALL"
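
Again an abbreviated, illustrative slice of the response:

{
    "FaceDetails": [
        {
            "Smile": { "Value": true, "Confidence": 97.1 },
            "Sunglasses": { "Value": false, "Confidence": 99.2 },
            "Gender": { "Value": "Male", "Confidence": 99.8 },
            "Emotions": [
                { "Type": "HAPPY", "Confidence": 98.0 }
            ]
        }
    ]
}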

End of tech day

At the end of the day we had a working demo app which tracked the user, took a picture and displayed some information about the user on our laptop. While this was already great, it would be even better if it worked on my Raspberry Pi and magic mirror. So I decided to remove the tracking.js part and use a camera connected to my Raspberry Pi as input. On the backend I created an endpoint which can be called in the future by an IoT button, to prevent the camera from always being on or taking a picture every time someone walks by.

Adding AWS Polly

Because I want to give the user a personal message when they are recognised, I added [AWS Polly][5] to turn a message constructed from the AWS Rekognition information into lifelike speech. All you need to do is send a synthesize-speech request to AWS, and you will get a response containing the audio stream.

aws polly synthesize-speech \
    --output-format mp3 \
    --voice-id Joanna \
    --text 'Hello Roberto. You look happy today' \
    hello.mp3

This is now the flow on my Raspberry Pi. The backend is built with Spring Boot and provides an endpoint that triggers the whole process. The backend sends a request to my mirror with the text, which is shown by the [MMM-IFTTT module][6] of the [MagicMirror][7] platform, and it also plays the audio stream that we received from AWS Polly.
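
On the Raspberry Pi, playing the received mp3 can be as simple as this (assuming a command line player such as mpg123 is installed):

# Play the speech file that AWS Polly returned
mpg123 hello.mp3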

Result

I have made a video to show you the result. You will see that it recognises my happy face, greets me (in Dutch) and tells me that I look happy 🙂

I’m going to continue making the code more configurable, so it may be useful to others. Let me know if you have any questions or thoughts on what you think should be configurable.

[1]: https://amsterdam.luminis.eu/2017/07/25/techday-smart-mirror/
[2]: https://trackingjs.com/
[3]: https://aws.amazon.com/rekognition/
[4]: https://docs.aws.amazon.com/cli/latest/userguide/cli-config-files.html
[5]: https://aws.amazon.com/polly/
[6]: https://github.com/jc21/MMM-IFTTT
[7]: https://magicmirror.builders/