Monthly Archives: February 2015

Catwigs, printing and boxes

Catwigs are a set of cards that help you interrogate your project and are described in detail here. This post is about how we got them made and how we put them in a box. Not everyone’s cup of tea perhaps.

At the start, we copied Giles’ use of Moo for his alpha “strategies for people like me” cards and had our catwigs printed at Moo, who print business cards, among other things, and are very reasonable. There are 25 catwigs in a set (analogies are pretty difficult to think up), so this posed us a problem: the Moo boxes (and most other business card boxes) were far too deep, as they’re all designed for 50 cards. In any case, Moo only provide a few boxes with any given delivery of cards. So what could we put them in that would fit correctly, be robust enough to carry round, and be readily buyable or makeable at a reasonable price? It’s proved a surprisingly difficult problem.

First I put them in a slip of cardboard, inspired by some simple cardboard business card holders from Muji, using a bobble to secure them. They did the job (especially decorated with a round sticker from Moo) but we wanted something a bit more robust. I had a brief detour looking into custom-printed elastic, but didn’t get anywhere, sadly.

catwigs_hairband_box
catwigs_hairband_box2
catwigs_hairband_box3

I started to investigate different types of boxes, and made a few prototypes from Richard’s big book of boxes.

boxes

The problem with these was that they all involved glueing and were quite fiddly to make, so a bit impractical for more than a few.

At this point we started to look into tins. Tins are cheap, readily available and robust. However, THEY ARE ALL THE WRONG SHAPE.

Our cards are “standard” business card size (there isn’t really a standard size, but this is a common one). The closest tin size is this one (95 x 65 x 21mm), which is fractionally too small when you take into account the curve of the corners, such that the cards get a bit stuck when you try and get them out. I had an entertaining time (not entertaining) cleaning off accidental cyanoacrylate-fumed fingerprints, caused by glueing in ribbons so that the cards could be more easily removed.

catwigs_tin

This tobacco tin isn’t a bad size but the lid comes off too easily and it’s the wrong colour

These are near-perfect (found thanks to Dirk) but cannot be ordered from outside the Netherlands.

There are plenty of tins from Alibaba, but it’s very hard to know whether they will fit, because of the curved corner issue. We could also have had bespoke tins made but not in small numbers.

At this point we looked into personalised playing and game card printers. It turns out there are lots of these – and this is what Giles went with for his Blockbox cards in the end (from MakePlayingcards). Here are some possibilities (that last one lets you publish your own games). They’re all in the US, however, which adds a layer of complexity when everything gets stuck in customs.

We did order some at a very reasonable price from Spingold, but we ran into the box problem again – they’re happy to put fewer cards in the box, but custom boxes are more problematic. The cards themselves shuffle better than the Moo ones, but I prefer the matt texture of the Moo cards, and also that I can keep stock low and just order as many as I need.

catwigs_bridge3

catwigs_bridge2

(One important thing to take account of is that if the depth of your box is close to 2.5cm then it’s £3.20 rather than “large letter” £0.93 to send them first class by Royal Mail which is both annoying and wasteful.)

Anyway, this is our current box design for these beta cards, also inspired by one in Structural Packaging Design. I’ve been cutting and scoring them on a laser cutter from 320 gsm “frost” polypropylene (from Paperchase). They are reasonably robust, easy to put together and the correct size, so I think they’ll do for now.

poly_catwigs

Notes on a TV-to-Radio prototype

In the last couple of weeks at work we’ve been making “radios” in order to test the Radiodan prototyping approach. We each took an idea and tried to do a partial implementation. Here are some notes about mine. It’s not really a radio.

Idea

Make a TV that only shows the screen when someone is looking, else just play sound.

This is a response to the seemingly-pervasive idea that the only direction for TV is for devices and broadcasts (and catchup) to get more and more pixels. Evidence suggests that sometimes at least, TV is becoming radio with a large screen that is barely – but still sometimes – paid attention to.

There are a number of interesting things to think about once you see TV and radio as part of a continuum (like @fantasticlife does). One has to do with attention availability, and specifically the use cases that overlap with bandwidth availability.

As I watch something on TV and then move to my commuting device, why can’t it switch to audio only or text as bandwidth or attention allows? If it’s important for me to continue then why not let me continue in any way that’s possible or convenient? My original idea was to do a demo showing that as bandwidth falls, you get less and less resolution for the video, then audio (using the TV audio description track?) then text (then a series of vibrations or lights conveying mood?). There are obvious accessibility applications for this idea.

Then pushing it further, and to make a more interesting demo – much TV is consumed while doing something else. Most of the time you don’t need the pictures, only when you are looking. So can we solve that? Would it be bad if we did? What would it be like?

SO: my goal is a startling demo of mixed format content delivery according to immediate user needs as judged by attention paid to the screen.

tv_as_radio

I’d also like to make some sillier variants:

  • Make a radio that is only on when someone is looking.
  • Make a TV that only shows the screen when someone is not looking.

I got part of the way – I can blank a screen when someone is facing it (or the reverse). Some basic code is here. Notes on installing it are below. It’s a slight update to this incredibly useful and detailed guide. The differences are that the face recognition code is now part of OpenCV, which makes things easier, and that I’m not interested in whose face it is, only that there is one (or more than one), which simplifies things a bit.

I got as far as blanking the screen when it detects a face. I haven’t yet got it to work with video, but I think it has potential.

Implementation notes

My preferred platform is the Raspberry Pi, because then I can leave it in places for demos, make several cheaply, and so on. Though in this case using the Pi caused some problems.

Basically I need
1. something to do face detection
2. something to blank the screen
3. something to play video

1. Face detection turned out to be the reasonably easy part.

A number of people have tried using OpenCV for face detection on the Pi and got it working, either with C++ or with Python. For my purposes, Python was much too slow – about 10 seconds to process the video – but C++ was sub-second, which is good enough, I think. Details are below.
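For reference, the detection bit boils down to something like this minimal OpenCV C++ sketch. It isn’t my exact code – on the Pi the frames really come from the camera via the userland/raspicam code in the notes below, so cv::VideoCapture and the cascade path here are stand-ins to keep the example self-contained:

#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cstdio>
#include <vector>

int main() {
    cv::CascadeClassifier face_cascade;
    // stock Haar cascade shipped with libopencv-dev; the path may differ on your system
    if (!face_cascade.load("/usr/share/opencv/haarcascades/haarcascade_frontalface_alt.xml")) {
        fprintf(stderr, "couldn't load cascade\n");
        return 1;
    }
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return 1;
    cv::Mat frame, gray;
    std::vector<cv::Rect> faces;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, CV_BGR2GRAY);
        cv::equalizeHist(gray, gray);
        // I don't care whose face it is, only whether there's at least one
        face_cascade.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(80, 80));
        printf("%d face(s) in frame\n", (int)faces.size());
    }
    return 0;
}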

2. Screen blanking turned out to be much more difficult, especially when combined with C++ and my lack of experience with it

My initial thought was to do the whole thing in HTML, e.g. with a full-screen browser, just hiding the video element when no face was present. I’d done tests with Epiphany and knew it could manage HTML5 video (MP4 streaming). However, on my Pi, Epiphany point-blank refuses to run – segfaulting every time – and I’ve not worked out what’s wrong (it could be that I started with a non-clean Pi image; I’ve not tested it on a clean one yet). Also, Epiphany can’t run in kiosk mode, which would probably mean the experience wouldn’t be great. So I looked into lower-level commands that should work to blank the screen whatever is playing.

First (thanks to Andrew’s useful notes) I started looking at CEC, an HDMI-level control interface that allows you to do things like switch on a TV. The Raspberry Pi supports it, as do DVD players and the like. As far as I can tell, however, there are no CEC commands to blank the screen (although you can switch the TV off, which isn’t what I want as it’s too slow).

There’s also something called DPMS, which allows you to put a screen into hibernation and other power-saving modes – for HDMI and DVI too, I think. On Linux systems you can set it using xset; however, it only seems to work once. I lost a lot of time on this: DPMS is pretty messy and hard to debug. After much struggling – thinking the problem was my C++ system calls, replacing them with an X library, and finally making shell scripts and small C++ test cases – I figured it was just a bug.
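For completeness, the DPMS route from C++ looks roughly like this – a sketch using libXext (build with -lX11 -lXext), and the approach that for me worked once and then stopped:

#include <X11/Xlib.h>
#include <X11/extensions/dpms.h>
#include <cstdio>
#include <cstring>

int main(int argc, char** argv) {
    Display* dpy = XOpenDisplay(NULL);
    if (!dpy) { fprintf(stderr, "no X display\n"); return 1; }
    int event_base, error_base;
    if (!DPMSQueryExtension(dpy, &event_base, &error_base) || !DPMSCapable(dpy)) {
        fprintf(stderr, "DPMS not available\n");
        return 1;
    }
    DPMSEnable(dpy);
    // DPMSModeOff blanks the screen; pass "on" to wake it again
    bool on = (argc > 1 && strcmp(argv[1], "on") == 0);
    DPMSForceLevel(dpy, on ? DPMSModeOn : DPMSModeOff);
    XFlush(dpy);
    XCloseDisplay(dpy);
    return 0;
}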

Just for added fun, there’s no vbetool on the Pi, which would have been another method, and tvservice is too heavyweight.

I finally got it to work with xset s, which makes a (blank?) screensaver come on and is near-instant, though it means I have to run X.

xset s activate # screen off
xset s reset    # screen on
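The glue between detection and blanking is then trivial – something along these lines (a simplified, hypothetical version of what I’m running), which only shells out to xset when the state actually changes rather than on every frame:

#include <cstdlib>

// call once per processed frame with the number of faces found;
// only runs xset when the blanked/unblanked state flips
void updateScreen(int facesInFrame) {
    static bool blanked = false;
    bool shouldBlank = (facesInFrame == 0); // invert this test for the sillier variants
    if (shouldBlank != blanked) {
        std::system(shouldBlank ? "xset s activate" : "xset s reset");
        blanked = shouldBlank;
    }
}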

3. Something to play video

This is as far as I’ve got: I need to do a clean install and try Epiphany again, and also work out why OMXPlayer seems to cancel the screensaver somehow.

——————————————————–

Links

Installing opencv on a pi

Face recognition and opencv

Python camera / face detection

CEC

Best guide on power save / screen blanking

DPMS example code

xset

C++

xscreensaver

vbetool

tvservice

Samsung tv hacking

Run browser on startup

Notes on installing and running OpenCV on a Pi with video

1. sudo raspi-config
expand file system
enable camera
reboot

2. sudo apt-get update && sudo apt-get -y upgrade

3. cd /opt/vc
sudo git clone https://github.com/raspberrypi/userland.git

sudo apt-get install cmake

cd /opt/vc/userland 
sudo chown -R pi:pi .
sed -i 's/DEFINED CMAKE_TOOLCHAIN_FILE/NOT DEFINED CMAKE_TOOLCHAIN_FILE/g' makefiles/cmake/arm-linux.cmake

sudo mkdir build
cd build
sudo cmake -DCMAKE_BUILD_TYPE=Release ..
sudo make
sudo make install

Go to /opt/vc/bin and test by typing: ./raspistill -t 3000

4. install opencv

sudo apt-get install libopencv-dev python-opencv

cd
mkdir camcv
cd camcv
cp -r /opt/vc/userland/host_applications/linux/apps/raspicam/*  .

curl https://gist.githubusercontent.com/libbymiller/13f1cd841c5b1b3b7b41/raw/b028cb2350f216b8be7442a30d6ee7ce3dc54da5/gistfile1.txt > camcv_vid3.cpp

edit CMakeLists.txt:

[[
cmake_minimum_required(VERSION 2.8)
project( camcv_vid3 )
SET(COMPILE_DEFINITIONS -Werror)
#OPENCV
find_package( OpenCV REQUIRED )

include_directories(/opt/vc/userland/host_applications/linux/libs/bcm_host/include)
include_directories(/opt/vc/userland/host_applications/linux/apps/raspicam/gl_scenes)
include_directories(/opt/vc/userland/interface/vcos)
include_directories(/opt/vc/userland)
include_directories(/opt/vc/userland/interface/vcos/pthreads)
include_directories(/opt/vc/userland/interface/vmcs_host/linux)
include_directories(/opt/vc/userland/interface/khronos/include)
include_directories(/opt/vc/userland/interface/khronos/common)
include_directories(./gl_scenes)
include_directories(.)
add_executable(camcv_vid3 RaspiCamControl.c RaspiCLI.c RaspiPreview.c camcv_vid3.cpp
RaspiTex.c RaspiTexUtil.c
gl_scenes/teapot.c gl_scenes/models.c
gl_scenes/square.c gl_scenes/mirror.c gl_scenes/yuv.c gl_scenes/sobel.c tga.c)
target_link_libraries(camcv_vid3 /opt/vc/lib/libmmal_core.so
/opt/vc/lib/libmmal_util.so
/opt/vc/lib/libmmal_vc_client.so
/opt/vc/lib/libvcos.so
/opt/vc/lib/libbcm_host.so
/opt/vc/lib/libGLESv2.so
/opt/vc/lib/libEGL.so
${OpenCV_LIBS}
pthread
-lm)
]]

cmake .
make
./camcv_vid3