A little stepper motor

I want to make a rotating 3D-printed head-on-a-spring for my podcast in a box (obviously). Last week I had a play with a tiny servo motor that Richard Sewell had given me ages ago, and discovered the difference between steppers, servos and DC motors. Servos can be modified to go all the way round – they don’t usually, as I found out last week – but modifying the 9g looked a bit hard. Meanwhile John Honniball kindly gave me a little Switec type stepper motor and some excellent links.

Getting it to work didn’t take too long – the code’s below, and there’s a simple Arduino library to make it work (on github) and some instructions to wire it up, including tips on getting the wires attached. I had to prise it apart to make it rotate continuously, but that was easy in this case (the one I had didn’t even have screws, just clips). I haven’t got a 3D-printed head yet, just a bit of wire, foil and foam sheet in its place (on a spring though).

Exciting video: https://www.flickr.com/photos/nicecupoftea/16759666917/

stepper

#include <SwitecX25.h>
 
// standard X25.168 range 315 degrees at 1/3 degree steps
#define STEPS (315*3)

SwitecX25 motor(STEPS, 4,5,6,7); 
int nLoops = 10;
int currentStep = 0;
int count = 0;
int state = 1;

void setup(void)
{
  Serial.begin(9600);
}
void loop() {
   char c = Serial.read();
   if(c == '0'){
     state = 0;
     Serial.println("stopped");
   }
   if(c == '1'){
     state = 1;
     Serial.println("started");
   }
   if(state == 1){
     motor.currentStep = 0;         // reset origin on each rotation 
     motor.setPosition(STEPS*2);    // set target to way past end of rotation
     while (motor.currentStep<STEPS-1){
       motor.update();  // turn until rotation is complete
     }
   }
}

Tiny micro servo 9g and Duemilanove w/ ATmega168

It’s eons since I messed with an Arduino, and I’ve forgotten it all. All I have handy is a Duemilanove with ATmega168, and the newish version of the Arduino IDE doesn’t have the Duemilanove ATmega168 as an option. Following this otherwise perfect tutorial with IDE 1.6.1 (which works even though it’s for Java 6 and I have Java 8 on this Mac), you get this error if you choose just Duemilanove:

avrdude: stk500_recv(): programmer is not responding
avrdude: stk500_getsync() attempt 1 of 10: not in sync: resp=0x00
[...]
Problem uploading to board.  See http://www.arduino.cc/en/Guide/Troubleshooting#upload for suggestions.
  This report would have more information with
  "Show verbose output during compilation"
  enabled in File > Preferences.

But it’s fine if you choose “Arduino NG or older” in Tools > Board, like this:

blog_arduino1

The other difference is that these days, you don’t have to reboot after installing the FTDI drivers.

Once I got blink working, I moved on to this servo tutorial, since I have a tiny micro servo 9g from somewhere (Richard?).

mystery

That tutorial says you need to download and add the servo library; in fact it’s already there in Sketch > Import Library.

Initially I got

Arduino: 1.6.1 (Mac OS X), Board: "Arduino NG or older, ATmega168"

Build options changed, rebuilding all
servo1.ino: In function 'void loop()':
servo1.ino:46:3: error: 'refresh' is not a member of 'Servo'
Error compiling.

  This report would have more information with
  "Show verbose output during compilation"
  enabled in File > Preferences.

The tutorial is evidently very out of date with respect to the servo library – this fixed it (and hi Anachrocomputer in that thread!)

From the processing code in that tutorial:

//Output the servo position ( from 0 to 180)
  port.write("s"+spos);

using the serial monitor (set to 19200 baud) I should be able to type "s100" into the serial monitor and have the servo move. But nothing happened. So I put in some more logging, and saw garbled output or nothing, and wondered if my Arduino was too old for that baud rate. Doing this, it reverses "foo" to "oof", but who knows.

Anyway, a few things:

  • Firstly, the code refers to the second servo, so the pin needs to be in analogue 2, not 1?
  • Secondly, the serial monitor seems happier with line endings enabled.
  • Thirdly – and the actual problem – it turned out I had the pin wrong. That tutorial is very out of date, or the library’s changed a lot, or something. What you need is one of these, e.g. digital 9.

Here’s the way I got it going.

Use this code:

// Sweep
// by BARRAGAN 
// This example code is in the public domain.
#include <Servo.h>
 
Servo myservo;  // create servo object to control a servo
                // a maximum of eight servo objects can be created
int pos = 0;    // variable to store the servo position
void setup()
{
  myservo.attach(9);  // attaches the servo on pin 9 to the servo object
}

void loop()
{
  for(pos = 0; pos < 180; pos += 1)  // goes from 0 degrees to 180 degrees
  {                                  // in steps of 1 degree
    myservo.write(pos);              // tell servo to go to position in variable 'pos'
    delay(15);                       // waits 15ms for the servo to reach the position
  }
  for(pos = 180; pos>=1; pos-=1)     // goes from 180 degrees to 0 degrees
  {
    myservo.write(pos);              // tell servo to go to position in variable 'pos'
    delay(15);                       // waits 15ms for the servo to reach the position
  }
}

and the digital 9 pin as in the diagram on that page. Arduino pin naming seems nearly as confusing as the Raspberry Pi’s.

I think this little servo might be a bit noisy for what I have in mind, but it’s nice to have it working.

beansy_servo

Catwigs, printing and boxes

Catwigs are a set of cards that help you interrogate your project and are described in detail here. This post is about how we got them made and how we put them in a box. Not everyone’s cup of tea perhaps.

At the start, we copied Giles’ use of Moo for his alpha “strategies for people like me” cards and had our catwigs printed at Moo, who print business cards, among other things, and are very reasonable. There are 25 catwigs in a set (analogies are pretty difficult to think up), so this posed us a problem: the Moo boxes (and most other business card boxes) were far too deep, as they’re all designed for 50 cards. In any case, Moo only provide a few boxes for any given delivery of cards. So what could we put them in that would fit correctly, be robust enough to carry round, but readily buyable or makeable at a reasonable price? It’s proved a surprisingly difficult problem.

First I put them in a slip of cardboard, inspired by some simple cardboard business card holders from Muji, using a bobble to secure them. They did the job (especially decorated with a round sticker from Moo) but we wanted something a bit more robust. I had a brief detour looking into custom-printed elastic, but didn’t get anywhere, sadly.

catwigs_hairband_box
catwigs_hairband_box2
catwigs_hairband_box3

I started to investigate different types of boxes, and made a few prototypes from Richard’s big book of boxes.

boxes

The problem with these was that they all involved glueing and were quite fiddly to make, so a bit impractical for more than a few.

At this point we started to look into tins. Tins are cheap, readily available and robust. However, THEY ARE ALL THE WRONG SHAPE.

Our cards are “standard” business card size (there isn’t really a standard size, but this is a common one). The closest tin size is this one (95 x 65 x 21mm), which is fractionally too small once you take into account the curve of the corners, such that the cards get a bit stuck when you try and get them out. I had an entertaining time (not entertaining) cleaning off cyanoacrylate-fumed fingerprints, accidentally made while glueing in ribbons so the cards could be more easily removed.

catwigs_tin

This tobacco tin isn’t a bad size, but the lid comes off too easily and it’s the wrong colour.

These are near-perfect (found thanks to Dirk) but cannot be ordered from outside the Netherlands.

There are plenty of tins from Alibaba, but it’s very hard to know whether they will fit, because of the curved corner issue. We could also have had bespoke tins made but not in small numbers.

At this point we looked into personalised playing and game card printers. It turns out there are lots of these – and this is what Giles went with for his Blockbox cards in the end (from MakePlayingcards). Here are some possibilities (that last one lets you publish your own games). They’re all in the US, however, which adds a layer of complexity when everything gets stuck in customs.

We did order some at a very reasonable price from Spingold, but we ran into the box problem again – they’re happy to put fewer cards in the box, but custom boxes are more problematic. The cards themselves shuffle better than the Moo ones, but I prefer the matt texture of the Moo cards, and also that I can keep stock low and just order as many as I need.

catwigs_bridge3

catwigs_bridge2

(One important thing to take account of is that if the depth of your box is close to 2.5cm then it’s £3.20 rather than “large letter” £0.93 to send them first class by Royal Mail which is both annoying and wasteful.)

Anyway, this is our current box design for these beta cards, also inspired by one in Structural Packaging Design. I’ve been cutting and scoring them on a laser cutter from 320 gsm “frost” polypropylene (from Paperchase). They’re reasonably robust, easy to put together and the correct size, so I think they’ll do for now.

poly_catwigs

Notes on a TV-to-Radio prototype

In the last couple of weeks at work we’ve been making “radios” in order to test the Radiodan prototyping approach. We each took an idea and tried to do a partial implementation. Here are some notes about mine. It’s not really a radio.

Idea

Make a TV that only shows the screen when someone is looking, else just play sound.

This is a response to the seemingly-pervasive idea that the only direction for TV is for devices and broadcasts (and catchup) to get more and more pixels. Evidence suggests that sometimes at least, TV is becoming radio with a large screen that is barely – but still sometimes – paid attention to.

There are a number of interesting things to think about once you see TV and radio as part of a continuum (as @fantasticlife does). One has to do with attention availability, and specifically the use cases that overlap with bandwidth availability.

As I watch something on TV and then move to my commuting device, why can’t it switch to audio only or text as bandwidth or attention allows? If it’s important for me to continue then why not let me continue in any way that’s possible or convenient? My original idea was to do a demo showing that as bandwidth falls, you get less and less resolution for the video, then audio (using the TV audio description track?) then text (then a series of vibrations or lights conveying mood?). There are obvious accessibility applications for this idea.

Then, pushing it further – and to make a more interesting demo – much TV is consumed while doing something else. Most of the time you don’t need the pictures, only when you are looking. So can we solve that? Would it be bad if we did? What would it be like?

SO: my goal is a startling demo of mixed format content delivery according to immediate user needs as judged by attention paid to the screen.

tv_as_radio

I’d also like to make some sillier variants:

  • Make a radio that is only on when someone is looking.
  • Make a tv that only shows the screen when someone is not looking.

I got part of the way – I can blank a screen when someone is facing it (or the reverse). Some basic code is here. Notes on installing it are below. It’s a slight update to this incredibly useful and detailed guide. The differences are that the face recognition code is now part of openCV, which makes it easier, and I’m not interested in whose face it is, only that there is one (or more than one), which simplifies things a bit.

I got as far as blanking the screen when it detects a face. I haven’t yet got it to work with video, I think it has potential though.

Implementation notes

My preferred platform is the Raspberry Pi, because then I can leave it places for demos, make several cheaply etc. Though in this case using the Pi caused some problems.

Basically I need
1. something to do face detection
2. something to blank the screen
3. something to play video

1. Face detection turned out to be the reasonably easy part.

A number of people have tried using OpenCV for face detection on the Pi and got it working, either with C++ or using Python. For my purposes, Python was much too slow – about 10 seconds to process the video – but C++ was sub-second, which is good enough, I think. Details are below.

2. Screen blanking turned out to be much more difficult, especially combined with my lack of experience of C++.

My initial thought was to do the whole thing in html, e.g. with a full screen browser and just hiding the video element when the face was not present. I’d done tests with Epiphany and knew it could manage html5 video (mp4 streaming). However on my Pi Epiphany point-blank refuses to run – segfaulting every time – and I’ve not worked out what’s wrong (could be that I started with a non-clean Pi img, I’ve not tested it on a clean one yet). Also Epiphany can’t run in kiosk mode, which would probably mean the experience wouldn’t be great. So I looked into lower-level commands that should work to blank the screen whatever is playing.

First (thanks to Andrew’s useful notes) I started looking at CEC, an HDMI-level control interface that allows you to do things like switch on a TV. The Raspberry Pi supports this, as do DVD players and the like. As far as I can tell, however, there are no CEC commands to blank the screen (you can switch the TV off, but that isn’t what I want, as it’s too slow).

There’s also something called DPMS, which allows you to put a screen into hibernation and other modes – for HDMI and DVI too, I think. On Linux systems you can set it using xset; however, it only seems to work once. I lost a lot of time on this. DPMS is pretty messy and hard to debug. After much struggling – thinking my C++ / system calls were the problem, replacing them with an X library, and finally making shell scripts and small C++ test cases – I figured it was just a bug.

Just for added fun, there’s no vbetool on the Pi, which would have been another method, and tvservice is too heavyweight.

I finally got it to work with xset s, which makes a (blank?) screensaver come on, and is near-instant, though that means I have to run X.

xset s activate #screen off
xset s reset #screen on

3. Something to play video

This is as far as I’ve got: I need to do a clean install and try Epiphany again, and also work out why OMXPlayer seems to cancel the screensaver somehow.

——————————————————–

Links

Installing opencv on a pi

Face recognition and opencv

Python camera / face detection

CEC

Best guide on power save / screen blanking

DPMS example code

xset

C++

xscreensaver

vbetool

tvservice

Samsung tv hacking

Run browser on startup

Notes on installing and running opencv on a pi with video

1. sudo raspi-config
expand file system
enable camera
reboot

2. sudo apt-get update && sudo apt-get -y upgrade

3. cd /opt/vc
sudo git clone https://github.com/raspberrypi/userland.git

sudo apt-get install cmake

cd /opt/vc/userland 
sudo chown -R pi:pi .
sed -i 's/DEFINED CMAKE_TOOLCHAIN_FILE/NOT DEFINED CMAKE_TOOLCHAIN_FILE/g' makefiles/cmake/arm-linux.cmake

sudo mkdir build
cd build
sudo cmake -DCMAKE_BUILD_TYPE=Release ..
sudo make
sudo make install

Go to /opt/vc/bin and test one file by typing: ./raspistill -t 3000

4. install opencv

sudo apt-get install libopencv-dev python-opencv

cd
mkdir camcv
cd camcv
cp -r /opt/vc/userland/host_applications/linux/apps/raspicam/*  .

curl https://gist.githubusercontent.com/libbymiller/13f1cd841c5b1b3b7b41/raw/b028cb2350f216b8be7442a30d6ee7ce3dc54da5/gistfile1.txt > camcv_vid3.cpp

edit CMakeLists.txt:

[[
cmake_minimum_required(VERSION 2.8)
project( camcv_vid3 )
SET(COMPILE_DEFINITIONS -Werror)
#OPENCV
find_package( OpenCV REQUIRED )

include_directories(/opt/vc/userland/host_applications/linux/libs/bcm_host/include)
include_directories(/opt/vc/userland/host_applications/linux/apps/raspicam/gl_scenes)
include_directories(/opt/vc/userland/interface/vcos)
include_directories(/opt/vc/userland)
include_directories(/opt/vc/userland/interface/vcos/pthreads)
include_directories(/opt/vc/userland/interface/vmcs_host/linux)
include_directories(/opt/vc/userland/interface/khronos/include)
include_directories(/opt/vc/userland/interface/khronos/common)
include_directories(./gl_scenes)
include_directories(.)
add_executable(camcv_vid3 RaspiCamControl.c RaspiCLI.c RaspiPreview.c camcv_vid3.cpp
RaspiTex.c RaspiTexUtil.c
gl_scenes/teapot.c gl_scenes/models.c
gl_scenes/square.c gl_scenes/mirror.c gl_scenes/yuv.c gl_scenes/sobel.c tga.c)
target_link_libraries(camcv_vid3 /opt/vc/lib/libmmal_core.so
/opt/vc/lib/libmmal_util.so
/opt/vc/lib/libmmal_vc_client.so
/opt/vc/lib/libvcos.so
/opt/vc/lib/libbcm_host.so
/opt/vc/lib/libGLESv2.so
/opt/vc/lib/libEGL.so
${OpenCV_LIBS}
pthread
-lm)
]]

cmake .
make
./camcv_vid3

Tangible enough

Thinking about the purpose of prototypes:

Make new and upcoming technologies and standards tangible enough to help people think through the consequences of them.

Technology is moving fast, but it is also unevenly distributed, and the consequences – good and bad – of emerging technologies may only become apparent as they move into the mainstream. By making these consequences tangible early we can choose between possible futures.

Links

MIT Design Fiction Research Group

What do Prototypes Prototype? by Stephanie Houde and Charles Hill

Intel’s Tomorrow project