This is the final part of the series on creating a face-recognising security cam. We started by showing how to get motion detection working, followed that with an initial overview, and then covered the separate Facebook-downloader tool and the onboard face detection component in their own posts. In this post, we’ll see how we managed to connect up a USB-powered LED message-board from DreamCheeky.
I was originally inspired to use this device to present the results of the Facecam while having dinner at a friend’s place. He’s a fellow geek – even his welcome mat says so – so we Googled around for a solution and came up with the DreamCheeky, mainly because someone had already created a Linux driver for it.
The device is USB-powered and, thankfully, I’m able to run both it and the Logitech webcam directly from the Raspberry Pi: there’s no need to resort to a powered USB hub, which is sometimes required if your webcam draws more power than average.
Here are the steps I followed to install and build the necessary components on the Raspberry Pi:
# Create our main folder
cd
mkdir led
cd led
# Get the libhid source and build it
wget "http://alioth.debian.org/frs/download.php/1958/libhid-0.2.16.tar.gz"
tar -xzf *.tar.gz
rm *.gz
cd lib*
./configure && make
sudo make install
# We need a symbolic link to the libhid output for later
sudo ln -s /usr/local/lib/libhid.so.0 /usr/lib/libhid.so.0
sudo ldconfig
# Get the source code for the dcled component
wget "http://www.last-outpost.com/~malakai/dcled/dcled-2.0.tgz"
tar -xzf *.tgz
rm *.tgz
cd dc*
# Make the dcled component
sudo pacman -S make libusb
make
sudo make install
I’m not 100% happy with the above process: for home use it’s OK, but as the required library, libhid, is licensed under the GPL, there are serious limitations on how you might release this as part of a broader solution. From the libhid project’s own documentation:
We realise that this is a serious impediment. The GPL is a "viral" licence, and you will only be able to use libhid in other GPL projects. We would like to change the licence, but libhid uses the MGE UPS SYSTEMS HID Parser, which is GPL, and thus we cannot. Our solution is to rewrite the HID parser. One of these days. We are also in contact with MGE, trying to convince them to loosen their licence. If the licencing issues are solved, we are likely to re-release libhid under the Artistic Licence.
That said, there is apparently hope that it’ll be possible at some point to swap this module out for an alternative.
Once the needed components are built, it should be a simple matter of calling the dcled command to test whether it displays text properly or not. Here are the usage instructions for dcled:
[pi@alarmpi ~]$ dcled --help
Usage- dcled [opts] [files]
--brightness -b How bright, 0-2
--clock -c Show the time
--clock24h -C Show the 24h time
--bcdclock -B Show the time in binary
--debug -d Mostly useless
--echo -e Send copy to stdout
--help -h Show this message
--message -m A single line message to scroll
--nodev -n Don't use the device
--preamble -p Send a graphic before the text.
--repeat -r Keep scrolling forever
--fastprint -f Jump to end of message.
--speed -s General delay in ms
--test -t Output a test pattern
--font -g Select a font
--fontdir -G Select a font directory
Available preamble graphics:
1 - dots - A string of random dots
2 - static - Warms up like an old TV
3 - squiggle - A squiggly line
4 - clock24 - Shows the 24 hour time
5 - clock - Shows the time
6 - spiral - Draws a spiral
7 - fire - A nice warm hearth
8 - bcdclock - Shows the time in binary
Optional fonts:
1 - small - Very small characters
2 - sga - Standard galactic alphabet
3 - small_inv - Very small inverted characters
You can easily test the unit, then, by using this command to send a repeatedly scrolling message to the screen:
sudo dcled -r -m "This is a test"
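The other options listed in the help can be combined in the obvious way; for example (the brightness, font and message values here are purely illustrative):
# scroll a one-off message at maximum brightness, using the small font
sudo dcled -m "Hello world" -b 2 -g small
# show the current time on the display
sudo dcled -c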
If you see the following message, then you’ve probably forgotten to run as root via sudo:
hid_force_open failed with return code 6
Couldn't find the device. Was expecting to find a readable
device that matched vendor 1d34 and product 13. Is the
device plugged in? Do you have permission?
You can also use the lsusb command to check whether the device has been recognised by the Pi.
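For example, something along these lines will tell you whether the board has enumerated, using the vendor ID quoted in the error message above:
# look for the Dream Cheeky board (vendor ID 1d34) in the USB device list
lsusb | grep -i 1d34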
Now that we have the driver working, it’s a fairly simple matter to create another daemon that looks for files in a certain folder, sends the messages they contain to the message-board and then deletes them. We’ll use the "-p 7" option so that each message is preceded by a flame-like preamble graphic.
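Before we look at the daemon itself, here’s roughly the same idea sketched as a quick shell loop; it’s handy for experimenting and assumes the same watched folder, /home/pi/faces/out, that the daemon uses:
# poll the watched folder and scroll (then delete) any files found there
while true; do
  for f in /home/pi/faces/out/*; do
    [ -e "$f" ] || continue
    sudo dcled -p 7 "$f"
    rm "$f"
  done
  sleep 1
done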
Here’s the C/C++ code I used to implement this:
#include <sys/types.h>
#include <sys/stat.h>
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <errno.h>
#include <unistd.h>
#include <syslog.h>
#include <dirent.h>
#include <string.h>
#include <string>
#include <vector>
#include <iostream>
#include <fstream>
#include <algorithm>
using namespace std;
// Input folder location (watched for message files to display)
const char * inDir = "/home/pi/faces/out";
// Get the list of files in a directory
int getdir(string dir, vector<string> &files)
{
DIR *dp;
struct dirent *dirp;
if((dp = opendir(dir.c_str())) == NULL)
{
char msg[200];
snprintf(
msg,
sizeof(msg)-1,
"Error(%d) opening %s",
errno,
dir.c_str()
);
syslog(LOG_INFO, "%s", msg);
return errno;
}
while ((dirp = readdir(dp)) != NULL)
{
files.push_back(string(dirp->d_name));
}
closedir(dp);
sort(files.begin(), files.end());
return 0;
}
int main(void)
{
// Our process ID and Session ID
pid_t pid, sid;
// Fork off the parent process
pid = fork();
if (pid < 0)
{
exit(EXIT_FAILURE);
}
// If we got a good PID, then we can exit the parent process
if (pid > 0)
{
exit(EXIT_SUCCESS);
}
// Change the file mode mask
umask(0);
// Open any logs here
openlog("ledmsgd", LOG_PID|LOG_CONS, LOG_USER);
// Create a new SID for the child process
sid = setsid();
if (sid < 0)
{
// Log the failure
syslog(LOG_INFO, "Unable to get SID.");
closelog();
exit(EXIT_FAILURE);
}
// Change the current working directory
if (chdir("/") < 0)
{
// Log the failure
syslog(LOG_INFO, "Unable to change working directory.");
closelog();
exit(EXIT_FAILURE);
}
// Close out the standard file descriptors
close(STDIN_FILENO);
close(STDOUT_FILENO);
close(STDERR_FILENO);
// Daemon-specific initialization goes here
struct stat st = {0};
if (stat(inDir, &st) == -1)
{
mkdir(inDir, 0700);
}
vector<string> files = vector<string>();
const char *contents = NULL;
/* The Big Loop */
syslog(LOG_INFO, "main loop begins");
while (1)
{
// Get the files in our "in" directory
files.clear();
getdir(inDir, files);
// After sorting, entries 0 and 1 are "." and "..", so the first
// real file (if any) is at index 2
if ((int)files.size() >= 3)
contents = files[2].c_str();
else
contents = NULL;
if (contents != NULL)
{
syslog(LOG_INFO, "found a file");
syslog(LOG_INFO, "%s", contents);
// Build the full path to the message file (bounded to avoid overflow)
char input[256];
snprintf(input, sizeof(input), "%s/%s", inDir, contents);
char cmd[200];
snprintf(
cmd,
sizeof(cmd)-1,
"dcled -p 7 %s",
input
);
system(cmd);
remove(input);
}
usleep(500000); /* sleep() only takes whole seconds, so wait half a second via usleep() */
}
closelog();
exit(EXIT_SUCCESS);
}
The simplest way to get this source onto the device is to wget it from this blog. Then it’s a simple matter of building it (again, apologies for the lack of a makefile):
wget "http://through-the-interface.typepad.com/files/LedMsgDaemon.cpp"
g++ LedMsgDaemon.cpp -o ledmsgd
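If g++ isn’t already available on the Pi, it’s part of Arch’s gcc package:
sudo pacman -S gcc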
Once built, you should copy the executable to the appropriate folder:
sudo cp ledmsgd /etc/rc.d
And edit the last line of the /etc/rc.conf file to make sure ledmsgd gets launched along with facerecd on boot:
DAEMONS=(!hwclock syslog-ng network openntpd @netfs @crond @sshd @motion ledmsgd facerecd)
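If you’d rather check everything without rebooting, you can launch the daemon by hand and drop a message file into the folder it watches. This is just a quick sketch, assuming the paths used in the code and steps above:
# start the daemon manually (it forks itself into the background)
sudo /etc/rc.d/ledmsgd
# create a message file for the daemon to pick up, scroll and then delete
echo "Hello from the Facecam" | sudo tee /home/pi/faces/out/test.msg
# the daemon reports its progress to syslog (the exact log file depends on your syslog-ng setup)
grep ledmsgd /var/log/messages.log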
That should be it for getting the last of our components in place. At this stage, you should now have an at least partially functional security webcam, assuming you’ve been able to train and copy across a face database in the form of a facedata.xml file.
For fun, here’s a quick test of my own device at my home’s front door. I hadn’t realised that the human eye (and brain) perceives scrolling text on an LED message-board far better in person than through a video recording (even in HD), but then I suppose that saves me the trouble of attempting to protect the innocent (i.e. my Facebook friends, if you can call them that ;-).
I looked at the debug images that were stored in the ~/faces/debug folder, and saw that in general my face was getting detected appropriately, even if the recognition process clearly still needs tweaking to reduce (eliminate?) the false positives:
Part of the issue clearly stems from the fact that we don’t look the same when captured by a security camera (especially while recording a video about the experience :-) as we do in the photos we get tagged in on Facebook. It’s very possible the issue goes deeper than that, but I’m going to leave my investigations there, for now.