Electronics · python · raspberry pi

[FIXED!!!] Turn on a lamp with a gesture – Image Processing! Machine learning!

[Image: the Raspberry Potter wand project, from the Make article]

Source: http://makezine.com/projects/raspberry-pi-potter-wand/

Make has a write-up on wand-control with a simple reflective wand…

…it *was* broken.  Fixed it.

Back in January, when I first read the article, I was so excited to try it that I ordered a bunch of parts, downloaded the Git repo, and then figured out it didn’t work.

Not to worry though.  A fellow named John Horton contacted me and inspired me to try again.  This time, I decided to try to understand the code and get it up to snuff.  If you want to skip ahead, the code is here:

https://github.com/mamacker/pi_to_potter

To be clear… I’m not a Python dev… those who are will definitely cringe… Sorry.

Here is the original writeup…

http://makezine.com/projects/raspberry-pi-potter-wand/

The original concept is fantastic… it just didn’t work for me.

So I tried to get it going from the ground up, and rearchitected the source so it’s multi-threaded and uses machine learning to match gestures.  Even the image feed is fast! 😉

First, prepare the Pi 3 by installing OpenCV

First – start with a fresh disk.  I use PiBakery:

[Image: PiBakery blocks on the workspace]

These are the exact steps I used to get a fresh full-desktop Pi 3 up and running:

https://raw.githubusercontent.com/mamacker/pi_to_potter/master/steps_taken.txt

They were mostly gleaned from here (with one minor fix):

https://imaginghub.com/projects/144-installing-opencv-3-on-raspberry-pi-3#documentation

Like many of you, I tried piles of instructions without success.  I was surprised when this set worked without a hitch.
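Before going further, it’s worth a quick sanity check that the build actually imports (this snippet is mine, not part of the steps file):

# Confirm the OpenCV build imports and report its version before moving on.
import cv2
print("OpenCV version:", cv2.__version__)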

Next, get the code…

I’ve made the code available here:

git clone https://github.com/mamacker/pi_to_potter

It has a version that is very close to the original, just sped up and tweaked.  That one is called rpotter.py.

ML PI…

The other one I created is called trained.py; in that one I used machine learning, which was extremely entertaining!

To run it, cd into pi_to_potter:

python trained.py

Note – it takes a while to start up, because it runs through all of the images in the Pictures directory to train itself to recognize those gestures.
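For the curious, here is a rough sketch of that startup training (my own illustration, with an assumed Pictures/&lt;gesture&gt;/ folder layout, image size, and OpenCV’s built-in kNN – the real trained.py may differ in its details): each gesture folder becomes a label, every image becomes a flattened feature vector, and a nearest-neighbor model matches new wand tracks against them.

import os
import cv2
import numpy as np

PICTURES_DIR = "Pictures"   # assumed layout: Pictures/<gesture name>/<image files>
SIZE = (32, 32)             # assumed training resolution

samples, labels, names = [], [], []
for label, gesture in enumerate(sorted(os.listdir(PICTURES_DIR))):
    folder = os.path.join(PICTURES_DIR, gesture)
    if not os.path.isdir(folder):
        continue
    names.append(gesture)
    for fname in os.listdir(folder):
        img = cv2.imread(os.path.join(folder, fname), cv2.IMREAD_GRAYSCALE)
        if img is None:
            continue
        # Flatten each gesture image into one row-vector training sample.
        img = cv2.resize(img, SIZE)
        samples.append(img.flatten().astype(np.float32))
        labels.append(label)

# Fit OpenCV's k-nearest-neighbors model on all gesture samples.
knn = cv2.ml.KNearest_create()
knn.train(np.array(samples), cv2.ml.ROW_SAMPLE, np.array(labels, dtype=np.float32))

# Later, a completed wand track rendered to a 32x32 image could be classified with:
#   ret, result, _, _ = knn.findNearest(track_img.reshape(1, -1).astype(np.float32), k=3)
#   print("Recognized:", names[int(result[0][0])])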

Make sure your environment is mostly free of reflective surfaces.  Those reflections behind you will ruin the wand detection.  You want one dot… the wand. 🙂

Once the code is running, put something reflective in your camera’s field of view.  Make sure it’s just a point; otherwise your gesture will be difficult to see.  Once something is seen, two windows will come up:

[Screenshot: trained.py in action]

The “Original” window will flicker between the real image and any detected, thresholded light reflection.  “Original” is where you should see motion.

The “Raspberry Potter” window will show you any tracks created by optical flow.
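If you’re wondering what those two windows correspond to in code, it is roughly this (an illustration with made-up names, not the repo’s exact code): keep only the near-saturated pixels so the wand tip stands out, then show the thresholded frame and the track canvas.

import cv2

def show_windows(frame_bgr, track_canvas):
    # Keep only the brightest pixels – the retroreflective wand tip.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, bright = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY)
    cv2.imshow("Original", bright)                 # flickers with the thresholded view
    cv2.imshow("Raspberry Potter", track_canvas)   # optical-flow tracks drawn elsewhere
    cv2.waitKey(1)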

Finally, watch the command-line logs.  That’s where you’ll see the name of the recognized image.  When you are ready to do something based on a recognition, update the Spell function.  You can refer to some other articles on how to control outlets for fun (a minimal GPIO sketch follows these links):

Raspberry PI Controlling Power through GPIO (no wifi needed)

Raspberry PI for controlling TP-Link POWER

Raspberry PI for Controlling Wemo Power
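If you just want the simplest possible hook for the Spell function, a bare-bones RPi.GPIO sketch looks roughly like this (pin 18 and the helper names are placeholders for illustration, not from the articles above):

import RPi.GPIO as GPIO

LAMP_PIN = 18  # BCM pin wired to a relay board – use whichever pin you actually wired

GPIO.setmode(GPIO.BCM)
GPIO.setup(LAMP_PIN, GPIO.OUT)

def lamp_on():
    # Energize the relay to switch the lamp on.
    GPIO.output(LAMP_PIN, GPIO.HIGH)

def lamp_off():
    # De-energize the relay to switch the lamp off.
    GPIO.output(LAMP_PIN, GPIO.LOW)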

Universal Studios’ wands are really bad reflectors… or there is other magic…

Universal Studios’ wands are really bad reflectors.  They must have some serious emitters at the park, because getting light back from them is terribly difficult.  So I ordered a bunch of other materials to try out.

This tape worked:

White Reflective Tape

This is the tape on the end of cost-effective-for-kids wands I found:


I found the wands on Etsy:


There are other reflectors… but I have yet to try them.  I’ll update once I give them a go:

Retroreflective Tape

The camera I used – the Pi NoIR

[Images: the Pi NoIR camera module]

How this technique works…

This technique uses image processing to track the wand’s position through a series of pictures taken by the camera.  It first has to find the wand within the view; once it has identified the wand light, it uses a function in the OpenCV package to track its movement:

calcOpticalFlowPyrLK:  Calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.
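In practice the call looks something like this (a stripped-down illustration, not the repo’s exact code): feed it the previous frame, the current frame, and the last known wand point, and it hands back the new point plus a status flag.

import cv2
import numpy as np

# Standard Lucas-Kanade parameters; the window size and pyramid depth are tunable.
lk_params = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

def track_wand(prev_gray, curr_gray, prev_point):
    # prev_point is the (x, y) of the wand tip found in the previous frame.
    p0 = np.array([[prev_point]], dtype=np.float32)
    p1, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None, **lk_params)
    if status[0][0] == 1:
        return tuple(p1[0][0])  # where the wand tip moved to
    return None                 # lost the point this frame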

This provides points from the image set which can be matched against the gesture “shapes”.  The shape check in the original simply takes two line segments and identifies each as a move up, left, down, or right; the combination of any two creates a recognizable request.

It’s really quite brilliant in its simplicity.

The original code for the image recognition is found here:

https://github.com/sean-obrien/rpotter/

And it’s wonderfully tiny.  The updated version is found in my repo here:

https://raw.githubusercontent.com/mamacker/pi_to_potter/master/rpotter.py

Now you can train it!!!

The “triangle” training set

So, while I was in there, I added the ability to train for gestures.  Once you have the whole system up and running, add the --train flag.  That will start storing new images in the Pictures directory; these will be the gesture attempts people make in front of your camera.  You get a starter set I recorded when you clone the repo.

python trained.py --train

Practice the gestures until you get a good set of them in the Pictures folder – at least 5 – and they need to be distinct enough from the other gestures not to conflict.  Once you have a good collection, create a folder for them with a simple name, something like “star”, “book”, or “funny”.  That gesture will then be auto-learned at the next restart.

The last step is to add an “if” statement that uses it in the “Spell” function:

[Screenshot: the Spell function in trained.py]
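In case the screenshot isn’t clear, the shape of the change is just another branch in that function (a sketch – the gesture names and the lamp_on() call are placeholders; use whatever your folders and outlet code are actually called):

def Spell(spell):
    if spell == "circle":
        print("circle – lamp on!")
        lamp_on()               # e.g. the GPIO helper sketched earlier
    elif spell == "star":       # <-- the name of your new gesture folder
        print("star – do something new!")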

Add your new name in that list… and make it do something!  Once you’re done, restart the code, and watch for your recognition to show in the logs.

I’ll let you know…

I’ll try again as soon as my reflective bits come in.

If you have a little more in funds and less time, the build where the smarts are in the wand can be found here:

Raspberry Pi – Control by Magic Wand!

Other magic awaits!!!  Check out my Wizards Fire!

118 thoughts on “[FIXED!!!] Turn on a lamp with a gesture – Image Processing! Machine learning!”

  1. No luck even after bringing the threshold up to 254. But I did add some cool sounds by hooking a pygame sound script into the spells.

  2. fgbg = cv2.createBackgroundSubtractorMOG2()

    I had better luck taking out background noise with the line above, but it isn’t tracking the wand as well now.

    I am going to play around with BackgroundSubtractorCNT next, since it’s faster.

    Any more tips, let me know.

  3. Brilliant project. Downloaded it directly onto a Raspberry Pi 4, started it, and it worked!

    The only thing I did differently was use ‘sudo apt-get install python3-opencv’ and ‘sudo apt-get install python-opencv’ to load OpenCV. Worked for me like a dream, and I was expecting a complete nightmare.

    Brilliant, thanks again,
    IAN

  4. Awesome project. I am working with my daughters for a Potter-themed Halloween party next week. I am able to get everything working without a filter; however, when I apply an Edmunds IR bypass filter to my Raspberry Pi NoIR v2 camera, I do not get reflection off the wand with the recommended reflective tape. I tried using an IR floodlight to provide even more IR light, but no dice. If I point the IR light (or IR floodlight) at the camera, the more intense light appears. I am wondering if I need to use a different IR filter or adjust sensitivity settings. Any thoughts on what I might need to try? My girls and I are really looking forward to creating some magic for their friends.

    Also mamacker, are there any links to create the wizard’s lamp (looks so cool)?

    Appreciate any insights folks might share.

    1. Hi Andy – the NoIR doesn’t need the filter, and this project shouldn’t either. The only thing I can guess is that the frequency band of the filter and the frequency of your emitter aren’t well matched and the pixel diff is too low.

      You might try taking a picture using “raspistill” just to see what the program is receiving.

      Then, if you get anything, try adjusting the sensitivity of the function that does the binary split of the colors into black and white:
      th, frame_gray = cv2.threshold(frame_gray, 230, 255, cv2.THRESH_BINARY)

      For the lamp, that was done in the original build. Check out the link on Make magazine:
      https://makezine.com/projects/raspberry-pi-potter-wand/

      Good luck! Let me know if you need more specifics.

      1. Thank you! I am now able to see the wand create images.

        One challenge I am facing now is that when I draw a shape, like a circle or a square, the spells are not being recognized and “cast”. In addition, when I tried to run “python trained.py --train”, I can see the shapes I create; however, no images are being saved. Any thoughts on where I might have gone off course? Given Python is not my native language, I do recognize that spacing matters, but I am not seeing any glaring issues. Getting down to the wire for my daughters’ Halloween party this week, so any thoughts you might share would be welcome.

        Kind Regards,
        Andy
