
[FIXED!!!] Turn on a lamp with a gesture – Image Processing! Machine learning!


Source: http://makezine.com/projects/raspberry-pi-potter-wand/

Make has a write-up on wand-control with a simple reflective wand…

…it *was* broken.  Fixed it.

Back in January, when I first read the article, I was so excited to try it that I ordered a bunch of parts, cloned the Git repo, and then discovered it didn't work.

Not to worry though.  A fellow named John Horton contacted me and inspired me to try again.  This time, I decided to try to understand the code and get it up to snuff.  If you want to skip ahead, the code is here:

https://github.com/mamacker/pi_to_potter

To be clear… I'm not a Python dev… those who are will definitely cringe… sorry.

Here is the original writeup…

http://makezine.com/projects/raspberry-pi-potter-wand/

The original concept is fantastic… it just didn’t work for me.

So I tried to get it going from the ground up, and rearchitected the source so it's multi-threaded and uses machine learning to match gestures.  Even the image feed is fast! 😉

First, prepare the Pi 3 by installing OpenCV

First – start with a fresh disk.  I use PiBakery:


These are the exact steps I used to get a fresh full-desktop Pi 3 up and running:

https://raw.githubusercontent.com/mamacker/pi_to_potter/master/steps_taken.txt

They were mostly gleaned from here (with one minor fix):

https://imaginghub.com/projects/144-installing-opencv-3-on-raspberry-pi-3#documentation

Like many of you, I tried piles of instructions without success.  I was surprised when this set worked without a hitch.

Next, get the code…

I’ve made the code available here:

git clone https://github.com/mamacker/pi_to_potter

It has a version that is very close to the original, just sped up and tweaked.  That one is called rpotter.py.

ML Pi…

The other one I created is called trained.py; in that one I used machine learning!!!  Which was extremely entertaining.

To run it, cd into pi_to_potter:

python trained.py

Note – it takes a while to start up, because it runs through all of the images in the Pictures directory to train itself to recognize those gestures.

Make sure your environment is mostly free of reflective surfaces.  Those reflections behind you will ruin the wand detection.  You want one dot… the wand. 🙂

Once the code is running, put something reflective in your camera's field of view.  Make sure it's just a point, otherwise your gesture will be difficult to see.  Once something is seen, two windows will come up:

(Screenshot of trained.py in action.)

The "Original" window will flicker between the real image and any detected, thresholded light reflection.  "Original" is where you should see motion.

The "Raspberry Potter" window will show you any tracks created by optical flow.

Finally, watch the command-line logs.  That's where you'll see the name of the recognized image.  When you are ready to do something based on a recognition, update the Spell function.  You can refer to some other articles on how to control outlets for fun:

Raspberry PI Controlling Power through GPIO (no wifi needed)

Raspberry PI for controlling TP-Link POWER

Raspberry PI for Controlling Wemo Power

Universal Studios' wands are really bad reflectors… Or there is other magic…

Universal Studios' wands are really bad reflectors.  They must have some serious emitters at the park, because getting light back from them is terribly difficult.  So I ordered a bunch of other materials to try out.

This tape worked:

White Reflective Tape

This is the tape on the end of cost-effective-for-kids wands I found:


I found the wands on Etsy:


There are other reflectors… but I have yet to try them.  I'll update once I give them a go:

Retroreflective Tape

The camera I used – the Pi NoIR

(Photos of the Pi NoIR camera.)

How this technique works…

This technique uses image processing to track the wand's position through a series of pictures taken by the camera.  It first has to find the wand within the view; once it's identified the wand light, it uses a function in the OpenCV package to track its movement:

calcOpticalFlowPyrLK:  Calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.

This provides points from the image set which can be matched against the gesture "shapes".  The shape check in the original simply takes two line segments and identifies each as a move up, left, down, or right; the combination of any two creates a recognizable request.

It’s really quite brilliant in its simplicity.

The original code for the image recognition is found here:

https://github.com/sean-obrien/rpotter/

And it’s wonderfully tiny.  The updated version is found in my repo here:

https://raw.githubusercontent.com/mamacker/pi_to_potter/master/rpotter.py

Now you can train it!!!

The “triangle” training set

So, while I was in there, I added the ability to train for gestures.  Once you have the whole system up and running, add the --train flag.  That will start storing new images in the Pictures directory.  These will be the gesture attempts people make in front of your camera.  You get a starter set I recorded when you clone the repo.

python trained.py --train

Practice the gestures until you get a good set of them in the Pictures folder, at least 5, and make them distinct enough from the other gestures not to conflict.  Once you have a good collection, create a folder for them with a simple name, something like "star", "book", or "funny".  That gesture will then be auto-learned at the next restart, as sketched below.

The last step is to add an “if” statement that uses it in the “Spell” function:

(Screenshot: the "Spell" function's if/elif chain.)

Add your new gesture's name to that if/elif chain… and make it do something!  Once you're done, restart the code, and watch for your recognition to show in the logs.

I’ll let you know…

I’ll try again as soon as my reflective bits come in.

If you have a little more money and less time – the build where the smarts are in the wand can be found here:

Raspberry Pi – Control by Magic Wand!


35 thoughts on “[FIXED!!!] Turn on a lamp with a gesture – Image Processing! Machine learning!”

  1. Hi,
    Thanks for the new code, it is working great. I modified it to use a HAT with relays. Anyway, I am having trouble running the code at start-up. I did not use PiBakery as I already had the initial drive built using the image from Sean.
    I used both the init.d method and systemd method to get it going at start-up, both failed to load. Below is the log from the systemd method, do you have any idea on why this is not working or the specific steps to get it going from start-up?
    Thanks,
    Tim
    Dec 20 20:39:51 PotterWandGizmo systemd[1]: Starting My Potter Service…
    Dec 20 20:39:51 PotterWandGizmo systemd[1]: Started My Potter Service.
    Dec 20 20:39:59 PotterWandGizmo python[795]: (Original:795): Gtk-WARNING **: cannot open display:
    Dec 20 20:39:59 PotterWandGizmo python[795]: Initializing point tracking
    Dec 20 20:39:59 PotterWandGizmo python[795]: About to start.
    Dec 20 20:39:59 PotterWandGizmo python[795]: START incendio_pin ON and set switch off if video is running
    Dec 20 20:39:59 PotterWandGizmo python[795]: Starting wand tracking…
    Dec 20 20:39:59 PotterWandGizmo python[795]: Running find…
    Dec 20 20:39:59 PotterWandGizmo systemd[1]: potter.service: main process exited, code=exited, status=1/FAILURE
    Dec 20 20:39:59 PotterWandGizmo systemd[1]: Unit potter.service entered failed state.

    1. Hi Tim,
      I've never tried to run it from systemd or init.d, but the clue may be the "Gtk-WARNING" line.
      That's probably because it's trying to open a window from a non-desktop root account, before the desktop is loaded.

      My bet… though again, I haven't tried it… is to use LXDE to start the app. As mentioned here:
      https://www.raspberrypi-spy.co.uk/2014/05/how-to-autostart-apps-in-rasbian-lxde-desktop/

      Specifically this part of that article:
      @/usr/bin/python /home/pi/example.py

      Would look like this:
      @/usr/bin/python /home/pi/pi_to_potter/trained.py

      Put that in your:
      /home/pi/.config/lxsession/LXDE-pi/autostart

      I wrote another article that talks a little about this here:
      https://bloggerbrothers.com/2016/12/27/boot-your-pixel-based-pi-into-chromium-kiosk/

      Good luck! If you still have trouble, I’ll give it a whirl myself, until I get it going on boot.

      1. I found something odd. The script rpotter.py works fine in the autostart but the trained.py does not start. Both are working fine outside of the autostart.

  2. Ah, that’s probably because I didn’t make sure the paths were absolute.

    This line:
    mypath = "./Pictures/"

    Should be the full path to the Pictures directory. Something like:
    mypath = "/home/pi/pi_to_potter/Pictures"

    Let me know if that doesn’t do it.

    1. I bow to you again. It fixed it. The only issue I have right now is that the "spell()" routine gets called twice on each detection; this was in the original code as well.

      I fixed it by adding a check to see if it happened twice quickly, so I am fine.

      1. Tim,
        would you be willing to share your startup script? I haven't been able to get mine to work on startup. Any help would be appreciated.
        Chad

  3. I'm a Linux newbie, so sorry if this is a dumb question. I'm going through your steps; all goes well until I run make, when I get linking errors: undefined references in the image codec .so file.
    Any thoughts on how to fix it? I'm lost.

    1. It sounds like one of your dependencies wasn’t properly installed. Have you tried running the package install steps a second time? They all have to succeed before you attempt the make command.

  4. I removed the build directory, re-ran the various library install steps, and it works now. Not sure what the problem was exactly, but it compiled!!
    Can't get the trained.py file to run as you describe. Getting errors on that step. I think you address this in a comment above, but I don't follow what I need to change to fix it. Do I need to edit the trained.py file somehow?

    1. Whoops!!! Sorry about that. You can either install the dependency:
      sudo pip install pytesseract
      -or-
      remove the import line in that file. That was an OCR library I was attempting, but tossed when I added the training. I've fixed it in the GitHub repo.

      Sorry again, and let me know if you have any other trouble!

  5. Brilliant work! I've finally gotten all the way to tracking, and the IR is picking up the signal well, BUT the lights on the Particle internet button aren't reacting at all. Any counsel is much appreciated!

    1. Hi John! Thanks!

      This code doesn’t attempt to turn on pins for the particle. That’s been stripped out to reduce complexity.

      If you’d like to put that back in, you need to bring back the calls to pigpio seen in the original here:
      https://github.com/sean-obrien/rpotter/blob/master/ollivanderslamp/rpotter.py

      So you’ll need to (on the pi command line):
      sudo pip install pigpio

      Add pigpio to the python file like this:
      import pigpio

      pi = pigpio.pi()

      Add the various pins you’d like to actuate:
      #pin for Trinket (Colovario)
      trinket_pin = 12
      pi.set_mode(trinket_pin,pigpio.OUTPUT)

      And specifically update the function “Spell” which has the calls to set pins like this:
      pi.write(trinket_pin,0)
      time.sleep(1)
      pi.write(trinket_pin,1)

      If you have trouble getting this done, let me know. I don't have a Particle or the various other bits he used, but I can probably make it function without them, and create a file that includes the "pins" work.

  6. Hilarious. I bought three different internet buttons, convinced I'd bricked them in the wiring process (this is outside my level of expertise, I'm afraid). If you're able to insert those spells into your functioning code (I could never get the original code to track properly), I'll gladly send you a Particle internet button. They're pretty cool. lmk.

    1. Heh. Thanks for the offer – but don’t worry about sending me anything! I’ve added the pin control. There is a new file called trainedwpins.py. Do a “git pull” to get the latest.

      You use it all the same way, but now you just have to make sure you first run the pigpio daemon. Like this:
      sudo pigpiod

      Then run the code like this:
      python trainedwpins.py

      I was able to test it with an LED. Kinda fun actually. 😉

      The spells that actuate the pins are:
      CENTER: One line from top to bottom, will turn on the trinket_pin.
      CIRCLE or SQUARE: Will turn on the incendio pin, and turn off switch and nox.
      LEFT: Will turn on the switch pin.
      TRIANGLE: Turns on the NOX pin.

      Let me know if you have trouble!

  7. Thanks! That was very kind of you. I've downloaded the file but am getting the pytesseract error when I try to run trainedwpins.py.

    Any suggestions?

  8. So close! I’m jammed up here:

    Starting wand tracking…
    OpenCV Error: Assertion failed (prevPyr[level * lvlStep1].size() == nextPyr[level * lvlStep2].size()) in calcOpticalFlowPyrLK, file /home/pi/opencv-3.1.0/modules/video/src/lkpyramid.cpp, line 1248
    OpenCV Error: Assertion failed (ssize.area() > 0) in resize, file /home/pi/opencv-3.1.0/modules/imgproc/src/imgwarp.cpp, line 3229
    CAST: mistakes

    Any ideas?

    1. I don't think that one matters, actually. That's just because the image hit the edge when detecting the wand.

      Have you tried using your wand in the view?

      I’ll double check it though…

  9. Yup. That message shouldn't matter. It's likely because you are getting a reflection near the edge of the screen. That reflection makes it so the image can't be resized, and the detection fails.

    If you reduce the reflections from your background and bring your shiny thing (the wand) into view, you should see the spells start to get detected.

    If it still doesn’t work – I can add more logging. Let me know!

  10. Your picture shows the camera with two large lights on the sides. One seller of these lights indicates that they generate significant heat, but should be able to run up to 30 hours. I would intend for this to run a lot longer than that, just like a "regular" light plugged into the wall socket. Any thoughts?

    Also – the non-trained version appears to be somewhat picky. It takes some effort to get the proper motion to trigger a spell. I am using a much less intense light which could be the problem, or the detection needs some tweaking to make it less “picky”. I have not yet tried the trained version. Thoughts on that also would be appreciated.

    1. Hi there! Those little LEDs feel a little cheap for something that will always be on. There are some other 110V-based ones you could consider. Something like these:
      https://www.amazon.com/Univivi-Infrared-Illuminator-Waterproof-Security/dp/B01G6K407Q

      The only trouble with these is that you might have to increase the distance to your target due to their brightness.

      As for trained vs. not trained: the trained version is a *lot* more robust for recognition.

      If you want to make a "real" installation of this, I recommend giving some feedback along the way: a chime or ring when the wand is detected, then a jingle as a pattern is recognized.

  11. Hello,
    I was hoping you could help me. I've gotten your project to work flawlessly, with the exception that I cannot for the life of me get it to work on startup. It starts up initially, but as soon as I use the wand and it tracks something, it shuts down. Below is the output from the program. I removed the extra stuff that always comes up. Any help would be most appreciated. Thank you in advance,

    Unable to init server: Could not connect: Connection refused

    (Original:649): Gtk-WARNING **: cannot open display:

    Starting wand tracking…
    OpenCV Error: Assertion failed (key_ != -1 && "Can't fetch data from terminated TLS container.") in getData, file /home/pi/opencv-3.3.0/modules/core/src/system.cpp, line 1507
    Exception in thread Thread-3:
    Traceback (most recent call last):
    File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
    File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
    File "/home/pi/pi_to_potter/trainedwpins.py", line 151, in FrameReader
    frame = imutils.resize(frame, width=400)
    File "/usr/local/lib/python2.7/dist-packages/imutils/convenience.py", line 91, in resize
    resized = cv2.resize(image, dim, interpolation=inter)
    error: /home/pi/opencv-3.3.0/modules/core/src/system.cpp:1507: error: (-215) key_ != -1 && "Can't fetch data from terminated TLS container." in function getData

  12. Sure! This is because it doesn't have access to a desktop to render UI. To start it with the desktop GUI, use LXDE to start the script. This article talks about how to do that:
    https://www.raspberrypi-spy.co.uk/2014/05/how-to-autostart-apps-in-rasbian-lxde-desktop/

    Specifically this part of that article:
    @/usr/bin/python /home/pi/example.py

    Would look like this – if you start it this way, it will be able to bring up the windows *after* the desktop starts up:
    @/usr/bin/python /home/pi/pi_to_potter/trained.py

    Put that in your:
    /home/pi/.config/lxsession/LXDE-pi/autostart

    I wrote another article that talks a little about this here:
    https://bloggerbrothers.com/2016/12/27/boot-your-pixel-based-pi-into-chromium-kiosk/

    If those articles don’t have enough to get you going – let me know and I’ll see about simplifying it in some way.

  13. Matt, I'm just starting to look at collecting materials for this project. Have you tried or thought about different IR emitters to solve the problem of poor reflection from the Universal Studios wands? Any suggestions on that side of the equation?

    Thanks for this great blog!

    Matt

    1. I used the JC 4pcs High Power LED IR Array Illuminator IR Lamp Wide Angle for Night Vision CCTV and IP Camera (Amazon.com, probably others too). The LED IR array is powered by a 12V DC power adaptor with a 2.1mm plug, sold by sococo (also on Amazon).
      Cost: $11.99 for the array and $4.90 for the 12V supply.
      I removed the array from the case, and removed the light sensor, which unhelpfully turns off the LEDs when there is ambient light. I hot-glued the camera to the back of the panel holding the 4 IR LEDs, using the hole for the light sensor as a view port for the camera.
      While this gave me a great deal of IR, the reflectivity of the wand is still an issue. I would like to have the lamp under normal lighting conditions, but I'm not sure this is possible. My memory of Diagon Alley at Universal is that it is very dimly lit.
      I have not yet had a chance to adequately test this out; if you do try it, I'd like to know if you have problems.
      You can see pictures of my light source at https://github.com/Breidenbach/ollivanderslamp.
