Libbybot – a posable remote presence bot made from a Raspberry Pi 3 – updates

A couple of people have asked me about my presence-robot-in-a-lamp, libbybot – unsurprising at the moment maybe – so I’ve updated the code on GitHub to use the most recent RTCMultiConnection (WebRTC) library and done a general tidy-up.

I gave a presentation at EMFCamp about it a couple of years ago – here are the slides:

Libbybot

MyNatureWatch with a High Quality Raspberry Pi camera

I’ve been using the MyNatureWatch setup on my bird table for ages now, and I really love it (you should try it). The standard setup is with a Pi Zero (though it works fine with other versions of the Pi too). I’ve used the recommended, very cheap, Pi Zero camera with it, and also the usual Pi camera (you can fit it to a Zero using a special cable). I got myself one of the newish high quality Pi cameras (you need a lens too – I got this one) to see if I could get some better pics.

I could!

Pigeon portrait using the Pi HQ camera with wide angle lens

I was asked on Twitter how easy it is to set up with the HQ camera, so here are some quick notes on how I did it. Short answer – if you use the newish beta version of the MyNatureWatch downloadable image, it works just fine with no changes. If you are on the older version, you need to upgrade it, which is a bit fiddly because of the way it works (it creates its own wifi access point that you can connect to, so it’s never usually online). It’s perfectly doable, but you need to share your laptop’s network and use ssh.

Blackbird feeding its young, somewhat out of focus

MyNatureWatch Beta – this is much the easiest option. The beta is downloadable here (more details) and has some cool new features such as video. Just install as usual and connect the HQ camera using the zero cable (you’ll have to buy this separately, the HQ camera comes with an ordinary cable). It is a beta and I had a networking problem with it the first time I installed it (the second time it was fine). You could always put it on a new SD card if you don’t want to blat a working installation. Pimoroni have 32GB cards for £9.

The only fiddly bit after that is adjusting the focus. If you are not used to it, the high quality camera CCTV lens is a bit confusing, but it’s possible to lock all the rings so that you can set the focus while it’s in a less awkward position if you like. Here are the instructions for that (pages 9 and 10).

MyNatureWatch older version – to make this work with the HQ camera you’ll need to be comfortable with sharing your computer’s network over USB, and with using ssh. Download the img here, and install on an SD card as usual. Then, connect the camera to the zero using the zero cable (we’ll need it connected to check things are working).

Next, share your network with the Pi. On a mac it’s like this:

Sharing network using system preferences on a Mac

You might not have the RNDIS/Ethernet gadget option there on yours – I just ticked all of them the first time and *handwave* it worked after a couple of tries.

Now connect your zero to your laptop using the zero’s USB port (not its power port) – we’re going to be using the zero as a gadget (which the MyNatureWatch people have already kindly set up for you).
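(You don’t need to set this up yourself for MyNatureWatch, but for reference, USB gadget mode on a Zero is usually enabled with a couple of lines on the boot partition – something like:)

# in /boot/config.txt:
dtoverlay=dwc2

# added to the existing single line in /boot/cmdline.txt, after rootwait:
modules-load=dwc2,g_ether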

Once it’s powered up as usual, use ssh to login to the pi, like this:

ssh pi@camera.local
password: badgersandfoxes

On a Mac, you can always ssh in, but the Pi can’t necessarily reach the internet through your laptop. Test that the internet works from the Pi like this:

ping www.google.com

This sort of thing means it’s working:

PING www.google.com (216.58.204.228) 56(84) bytes of data.
64 bytes from lhr48s22-in-f4.1e100.net (216.58.204.228): icmp_seq=1 ttl=116 time=19.5 ms
64 bytes from lhr48s22-in-f4.1e100.net (216.58.204.228): icmp_seq=2 ttl=116 time=19.6 ms

If it just hangs, try unplugging the zero and trying again. I’ve no idea why it works sometimes and not others.
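If pinging by name hangs but you suspect the connection itself is fine, you can also ping an IP address directly to rule out a DNS problem (8.8.8.8 is Google’s public DNS server):

ping 8.8.8.8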

Once you have it working, stop MyNatureWatch from using the camera temporarily:

sudo systemctl stop nwcameraserver.service

and try taking a picture:

raspistill -o tmp.jpg

you should get this error (the camera won’t be detected until after the upgrade):

mmal: Cannot read camera info, keeping the defaults for OV5647
mmal: mmal_vc_component_create: failed to create component 'vc.ril.camera' (1:ENOMEM)
mmal: mmal_component_create_core: could not create component 'vc.ril.camera' (1)
mmal: Failed to create camera component
mmal: main: Failed to create camera component
mmal: Camera is not detected. Please check carefully the camera module is installed correctly

Ok so now upgrade:

sudo apt-get update
sudo apt-get upgrade

you will get a warning about hostapd – press q when you see this. The whole upgrade took about 20 minutes for me.

When it’s done, reboot:

sudo reboot

ssh in again, and test again if you want:

sudo systemctl stop nwcameraserver.service
raspistill -o tmp.jpg

re-enable hostapd:

sudo systemctl unmask hostapd.service
sudo systemctl enable hostapd.service

reboot again, and then you should be able to use it as usual (i.e. connect to its own wifi access point etc).
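In other words, something like this (the is-enabled check is optional – it just confirms hostapd will come back up before you go looking for the access point):

sudo systemctl is-enabled hostapd.service
sudo reboot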

The only fiddly bit after that is adjusting the focus. I used a gnome for that, but still sometimes get it wrong. If you are not used to it, the high quality camera CCTV lens is a bit confusing – it’s possible to lock all the rings so that you can set the focus while it’s in a less awkward position if you like. Here are the instructions for that (pages 9 and 10).

A gnome

Zoom on a Pi 4 (4GB)

It works using Chromium rather than the Zoom app (which only runs on x86, not ARM). I tested it with a two-person, two-video-stream call. You need a screen (I happened to have a spare 7″ touchscreen). You also need a keyboard for the initial setup, and a mouse if you don’t have a touchscreen.

The really nice thing is that Video4Linux (bcm2835-v4l2) support has improved, so it works with both v1 and v2 Raspberry Pi cameras, and there’s no need for the old options bcm2835-v4l2 gst_v4l2src_is_broken=1 workaround 🎉🎉

IMG_4695

So:

  • Install Raspbian Buster
  • Connect the screen, keyboard, mouse, camera and speaker/mic. I used a Sennheiser USB speaker/mic and a standard v2.1 Raspberry Pi camera.
  • Boot up. I had to add lcd_rotate=2 to /boot/config.txt to rotate my screen 180 degrees.
  • Don’t forget to enable the camera in raspi-config
  • Enable bcm2835-v4l2 – add it to /etc/modules (sudo nano /etc/modules)
  • I increased the swap size: sudo nano /etc/dphys-swapfile, set CONF_SWAPSIZE=2000, then sudo /etc/init.d/dphys-swapfile restart
  • I increased GPU memory: sudo nano /boot/config.txt, add gpu_mem=512 (command-line equivalents of these last three edits are sketched just after this list)
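If you’d rather do those last three edits over ssh than in nano, they’re roughly equivalent to the following – a sketch, so check the files first rather than blindly appending if you’ve already edited them:

echo "bcm2835-v4l2" | sudo tee -a /etc/modules
sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=2000/' /etc/dphys-swapfile
sudo /etc/init.d/dphys-swapfile restart
echo "gpu_mem=512" | sudo tee -a /boot/config.txt
sudo reboot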

You’ll need to set up Zoom and pass captchas using the keyboard and mouse. Once you have logged into Zoom you can often ssh in and start it remotely like this:

export DISPLAY=:0.0
/usr/bin/chromium-browser --kiosk --disable-infobars --disable-session-crashed-bubble --no-first-run https://zoom.us/wc/XXXXXXXXXX/join/

Note the URL format – this is what you get when you click “join from my browser”. If you use the standard Zoom URL you’ll need to click the “join from my browser” link yourself, ignoring the Open xdg-open prompts.
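(When you’re done you can kill the browser from the same ssh session too – something like:)

pkill -f chromium-browser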

IMG_4699

You’ll still need to join the audio and start the video, including allowing camera and mic access in the browser. You might need to pick the correct audio and video devices, but I didn’t need to.

I experimented a bit with an ancient Logitech webcam-speaker-mic: the speaker-mic part worked, and video started but stalled – which made me think that a better / more recent webcam might just work.

Removing rivets

I wanted to stay away from the computer during a week off work so I had a plan to fix up some garden chairs whose wooden slats had gone rotten:

IMG_4610

Looking more closely I realised the slats were riveted on. How do you get rivets off? I asked my hackspace buddies and Barney suggested drilling them out. They have an indentation in the back and you don’t have to drill very far to get them out.

The first chair took me two hours to drill out 15 rivets, and was a frustrating and sweaty experience. I checked YouTube to make sure I wasn’t doing anything stupid and tried a few different drill bits. My last chair today took 15 minutes, so! My amateurish top tips / reminder for me next time:

  1. Find a drill bit the same size as the hole that the rivet’s gone through
  2. Make sure it’s a tough drill bit, and not too pointy. You are trying to pop off the bottom end of the rivet – it comes off like a ring – and not drill a hole into the rivet itself.
  3. Wear eye protection – there’s the potential for little bits of sharp metal to be flying around
  4. Give it some welly – I found it was really fast once I started to put some pressure on the drill
  5. Get the angle right – it seemed to work best when I was drilling exactly vertically down into the rivet, and not at a slight angle.
  6. Once drilled, you might need to pop them out with a screwdriver or something of the right width plus a hammer

IMG_4616

More about rivets.

Pi / OpenCV / TensorFlow again

Cat detector code updated for Raspbian Buster. I used the Lite version. A few things have changed since last time. The code is here.

Download Raspbian

I got Raspbian Buster Lite (from https://www.raspberrypi.org/downloads/raspbian/ )

Burn it onto an SD card.
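I did this on a Mac (the /Volumes paths below assume that). Etcher works fine, or from the command line it’s something like this – a sketch, assuming the card shows up as /dev/disk2 (check with diskutil list) and using whatever image filename you actually downloaded:

diskutil list
diskutil unmountDisk /dev/disk2
sudo dd if=raspbian-buster-lite.img of=/dev/rdisk2 bs=1m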

Once the boot partition is mounted on your laptop, enable ssh:

touch /Volumes/boot/ssh

Add the wifi details:

nano /Volumes/boot/wpa_supplicant.conf

The file should contain something like:

country=GB
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
   ssid="foo"
   psk="bar"
}

Then eject the card and put it in the Pi.

ssh into it from your laptop, and change the hostname if you like (I called mine birdbot):

ssh pi@raspberrypi.local
password: raspberry
sudo nano /etc/hosts
sudo nano /etc/hostname
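For example, to rename it to birdbot (the hostname you’ll see in the prompts below), replace raspberrypi with birdbot in both files – or as a one-liner equivalent of the nano edits:

sudo sed -i 's/raspberrypi/birdbot/' /etc/hosts /etc/hostname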

Reboot

sudo reboot

Set up a virtualenv for python

This is not strictly necessary but keeps things tidy. You can also just use the built-in Python – just make sure you are using python3 and pip3 if so.

ssh into the pi again, then:

sudo apt update
sudo apt-get install python3-pip
sudo pip3 install virtualenv
virtualenv env
source env/bin/activate
(env) pi@birdbot:~ $ python --version
Python 3.7.3 # or similar
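Note that the virtualenv isn’t active in new ssh sessions – reactivate it each time before running the pip and python commands below:

source env/bin/activate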

Enable the camera

sudo raspi-config # and enable camera under 'interfacing'; reboot

Install TensorFlow

Increase the swap size:

sudo nano /etc/dphys-swapfile

The default value in Raspbian is:

CONF_SWAPSIZE=100

We will need to change this to:

CONF_SWAPSIZE=1024

Restart the service that manages the swapfile on Raspbian:

sudo /etc/init.d/dphys-swapfile restart
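You can check the new swap size has taken effect with:

free -m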

Install TensorFlow dependencies

sudo apt-get install libatlas-base-dev
sudo apt-get install git
pip install --upgrade tensorflow

(this takes a few minutes)

Test that tensorflow installed ok:

python -c "import tensorflow as tf; tf.enable_eager_execution(); print(tf.reduce_sum(tf.random_normal([1000, 1000])))"

You may see an error about hadoop –

HadoopFileSystem load error: libhdfs.so: cannot open shared object file: No such file or directory.

See also tensorflow/tensorflow#36141. That doesn’t seem to matter.

You could try some user-built tensorflow binaries – I tried this one, which seemed to corrupt my SD card, but haven’t tried this one. TensorFlow 2 would be better to learn (the APIs all changed between 1.x and 2).

Install OpenCV

sudo apt-get install libjasper-dev libqtgui4 libqt4-test libhdf5-dev libharfbuzz0b libilmbase-dev libopenexr-dev libgstreamer1.0-dev libavcodec-dev libavformat-dev libswscale5

pip install opencv-contrib-python==3.4.3.18 #(see this)

Test that it installed ok:

python -c 'import cv2; print(cv2.__version__)'

Install camera dependencies

pip install imutils picamera

Install speaking dependencies

sudo apt-get install espeak-ng
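Optionally, a couple of quick sanity checks – not part of the original instructions, just confirming the speaker and camera work (the second line assumes the camera is connected and enabled, and writes test.jpg to the current directory):

espeak-ng "hello"
python -c "from picamera import PiCamera; from time import sleep; cam = PiCamera(); sleep(2); cam.capture('test.jpg')"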

Finally:

git clone https://github.com/libbymiller/cat_loving_robot
cd cat_loving_robot
python classify_image.py

If you want to add the servos and so on for cat detecting and running towards cats, or start it up automatically, there’s more info on GitHub.