
Wednesday, 4 August 2021

The Little Computer that Could: Motion Detection with Outdoor Camera, Tensorflow and Raspberry Pi Part 3 of 3

 

"I have no spur to prick the sides of my intent, but only vaulting ambition, which o'erleaps itself and falls on the other." - Macbeth

Motion detection with an outdoor camera can be problematic, with wind, shadow and direct sunlight causing multiple false alarms. Suppressing these false alarms can result in genuine alarms being missed. One way around this is to filter all the alarms through an object recognition program.

The motion detection program of Part 1 writes all alarm image frames to the ./alarms/ directory. A modified object recognition program from Part 2 inspects each alarm frame and, if an object is recognized, writes it to another directory, ./alarm/. The alarm filter program is called tiny-yolo_alarmfilter.py, and its general shape is sketched below. This seemed to work well to start with, but as with any new project, time will tell. For starters it seemed to think my dog is a cow, probably because she was sniffing at the grass.
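For illustration only, here is a minimal sketch of that filter loop, assuming a placeholder recognizes_object() helper; the real detection logic lives in tiny-yolo_alarmfilter.py:

import os
import shutil

ALARMS_DIR = './alarms/'   # raw frames written by the Part 1 motion detector
FILTERED_DIR = './alarm/'  # frames that survive object recognition

def recognizes_object(path):
    # Placeholder: plug in YOLO, tiny-YOLO or Tensorflow Lite inference here
    # and return True when a COCO-class object is found in the frame.
    return False

os.makedirs(FILTERED_DIR, exist_ok=True)
for name in sorted(os.listdir(ALARMS_DIR)):
    src = os.path.join(ALARMS_DIR, name)
    print('Processing alarm file', name)
    if recognizes_object(src):
        shutil.copy(src, FILTERED_DIR)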

tiny YOLO mislabelling dog as cow

Now my dog does have a temper, but calling her a cow is a little harsh. Even so, both dogs and cows qualify (in my opinion) as valid alarm triggers, so it is not a show-stopper for now. It is particularly good at excluding changes in lighting and shadows. Proof positive that the little Raspberry Pi could, and did.

Recognizing passing vehicles 


Ah, vaulting ambition ... if the Pi can recognize an object, maybe it can also track it. Most alarms are quite passive, except for the loud siren, which tends to annoy the neighbors and should only be used as a last resort. A pan/tilt camera like the Trendnet TV-IP422WN that visibly tracks the object is a lot more menacing, and should scare off the more timid intruders like birds and squirrels. But that is another blog post.

A tensorflow model looks promising, as it can potentially be sped up with custom hardware like the Coral USB accelerator, which costs about the price of a Raspberry Pi 4.

Coral USB Accelerator

Installing tensorflow proved to be a bit hit and miss, but Katsuya Hyodo's github readme worked for me. This time I started with a squeaky clean version of Raspbian, 2021-05-07-raspios-buster-armhf.img.

Remember to uninstall the dud versions:

# pip3 uninstall tensorflow
# apt-get install -y libhdf5-dev libc-ares-dev libeigen3-dev gcc gfortran libgfortran5 libatlas3-base libatlas-base-dev libopenblas-dev libopenblas-base libblas-dev liblapack-dev cython3 openmpi-bin libopenmpi-dev libatlas-base-dev python3-dev
# pip3 install pip --upgrade
# pip3 install keras_applications==1.0.8 --no-deps
# pip3 install keras_preprocessing==1.1.0 --no-deps
# pip3 install h5py==2.9.0
# pip3 install pybind11
# pip3 install -U --user six wheel mock
# wget "https://raw.githubusercontent.com/PINTO0309/Tensorflow-bin/master/tensorflow-1.15.0-cp37-cp37m-linux_armv7l_download.sh"
# sh ./tensorflow-1.15.0-cp37-cp37m-linux_armv7l_download.sh
# pip3 install tensorflow-1.15.0-cp37-cp37m-linux_armv7l.whl

A quick test:
# python3
Python 3.7.3 (default, Jan 22 2021, 20:04:44)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
>>> tensorflow.__version__
'1.15.0'
>>>

For object detection I followed Edje Electronics' guide.
The packages tensorflow, libatlas-base-dev, libhdf5-dev and libhdf5-serial-dev I had already installed previously:

# apt-get install libjasper-dev
# apt-get install libqtgui4
# apt-get install libqt4-test

I used version 4.4.0.46 because 4.1.0.25 could not be found
# pip3 install opencv-contrib-python==4.4.0.46
# apt-get install protobuf-compiler
# pip install --user Cython
# pip install --user contextlib2
# pip install --user pillow
# pip install --user lxml
# pip install --user matplotlib

Got the tensorflow models
# git clone https://github.com/tensorflow/models.git

Then SSD_Lite:
# wget http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
# tar -xzvf ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz

The original tensorflow program, ./models/research/object_detection/TensorFlow.py, got its images from the default Raspberry Pi camera, so I made a simpler version, uTensorFlow.py, that processes one frame (./341front.jpg, 640x480) at a time.

Note SSD_Lite misclassified a dog as sheep
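For a rough idea of what a single-image version involves under Tensorflow 1.15's frozen-graph API, the core is something like the sketch below; the paths and the 0.5 score threshold are my assumptions, and the working script is uTensorFlow.py:

import cv2
import numpy as np
import tensorflow as tf

GRAPH = 'ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb'

detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(GRAPH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=detection_graph) as sess:
    frame = cv2.imread('./341front.jpg')                  # one 640x480 frame
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    outputs = [detection_graph.get_tensor_by_name(name + ':0')
               for name in ('detection_boxes', 'detection_scores',
                            'detection_classes', 'num_detections')]
    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
    boxes, scores, classes, num = sess.run(
        outputs, feed_dict={image_tensor: np.expand_dims(rgb, axis=0)})
    for i in range(int(num[0])):
        if scores[0][i] > 0.5:                            # assumed confidence cut-off
            print('class id', int(classes[0][i]), 'score', float(scores[0][i]))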

The processing time was over 40s on my Raspberry Pi 3. My Coral USB Accelerator will take more than a month to arrive, and it needs Tensorflow Lite, for which Edje Electronics has a very promising Tensorflow Lite repository, so why not try that in the meantime. Notice that this time the commands are run as a sudoer user and not as root, which I am told is the proper way to do things:
$ sudo pip3 install virtualenv
$ python3 -m venv tflite1-env
$ source tflite1-env/bin/activate

Then comes a whopper of a download:
$ git clone https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi.git
$ bash get_pi_requirements.sh
Notice with Tensorflow Lite there is no Tensorflow module:
$ python3
Python 3.7.3 (default, Jan 22 2021, 20:04:44)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'tensorflow'
>>>

$ wget https://storage.googleapis.com/download.tensorflow.org/models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip
$ unzip coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip
Archive:  coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip
  inflating: detect.tflite
  inflating: labelmap.txt

The original program is TFLite_detection_webcam.py, but I need a version that works on individual image files rather than a video stream, just as tiny-yolo_alarmfilter.py does. The program is tflite_alarmfilter.py, which took no time at all to write in python. You run it thus:

$ python3 tflite_alarmfilter.py --resolution='640x480' --modeldir=.
Processing alarm file 341front_20210804_094635_original.png
Processing alarm file 341front_20210804_094649_original.png
Processing alarm file 341front_20210804_094648_original.png
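Under the hood, per-image inference with the quantized SSD MobileNet model comes down to a handful of Interpreter calls. A minimal sketch, using the detect.tflite and labelmap.txt files unzipped above (the 0.5 score threshold and the tflite_runtime import path are my assumptions; the full Tensorflow package exposes the same Interpreter under tensorflow.lite):

import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path='detect.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height, width = input_details[0]['shape'][1:3]

with open('labelmap.txt') as f:
    labels = [line.strip() for line in f]

frame = cv2.imread('341front_20210804_094635_original.png')
rgb = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB), (width, height))
interpreter.set_tensor(input_details[0]['index'], np.expand_dims(rgb, axis=0))  # quantized model expects uint8
interpreter.invoke()

boxes = interpreter.get_tensor(output_details[0]['index'])[0]    # normalized ymin, xmin, ymax, xmax
classes = interpreter.get_tensor(output_details[1]['index'])[0]
scores = interpreter.get_tensor(output_details[2]['index'])[0]
for i, score in enumerate(scores):
    if score > 0.5:
        print(labels[int(classes[i])], float(score))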

The framerate is now an astounding 2fps, even without the Coral USB Accelerator. The detection also seems slightly better, with a single, more accurate bounding box instead of multiple boxes nested in the same object.

Tensorflow Lite SSDLite MobileNet v2: note accurate bounding box


tiny-YOLO: multiple bounding boxes over same object

I had originally planned to outsource the alarm files filtering to an x86 CPU, but the results with Tensorflow Lite made everything possible on the same Raspberry Pi 3, truly the little CPU that could. 

Happy Trails.

Tuesday, 27 July 2021

The Little Computer that Could: YOLO Object Recognition and Raspberry Pi Part 2 of 3

 

You Only Look Once

One cure for motion detector false alarms in Part 1 is object recognition; the alarm is only raised if the intruder is recognized. YOLO is fast, trendy (deep neural net!) and will run on a Raspberry Pi. arunponnusamy comes straight to the point.

I already had numpy and openCV from Part 1, so it is just

# wget https://pjreddie.com/media/files/yolov3.weights

Now this is a massive file and I needed a copper Ethernet connection to the Internet to download it. Next, download the zip file from arunponnusamy to get yolo_opencv.py, yolov3.cfg, yolov3.txt and dog.jpg. Do not use wget on the github page, or you will get HTML instead of the raw files. Then simply run it:

# python3 yolo.py --image dog.jpg --config yolov3.cfg --weights yolov3.weights --classes yolov3.txt

If you get 'OutOfMemoryError' you will probably need to reboot your Pi.

The output should be:



I substituted a frame from my IP Camera, and sure enough it recognized a person correctly:

# python3 yolo.py --image alarm.png --config yolov3.cfg --weights yolov3.weights --classes yolov3.txt



Output of yolov3.weights model

For convenience, see my github repository for copies of arunponnusamy's code. It took my Raspberry Pi 3 some 50 seconds to process the image, which is a little long but would probably keep up with the motion detector triggers of a few frames per hour. But arunponnusamy's reference is Joseph Redmon, who also mentioned a tiny version of the pre-trained neural net model. You will also need to download his yolov3-tiny.cfg, which I got by cloning from his github.
git clone https://github.com/pjreddie/darknet
Joseph's executable is a C program, but the input files are the same so I simply used his tiny model files with arunponnusamy's python script:

$ python3 yolo.py --image alarm.png --config yolov3-tiny.cfg --weights yolov3-tiny.weights --classes yolov3.txt

Note I also reused arunponnusamy's yolov3.txt, which is the dictionary of object names; Joseph's dictionary seems to be baked into his executable file. The command finishes in 5 seconds, ten times faster. The output bounding box is different, but it did recognize the object as a person.
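For the curious, the OpenCV dnn calls that a script like yolo.py builds on look roughly like this; the file names match the tiny-YOLO run above, while the 416x416 input size and 0.5 confidence threshold are assumptions on my part:

import cv2
import numpy as np

net = cv2.dnn.readNet('yolov3-tiny.weights', 'yolov3-tiny.cfg')
with open('yolov3.txt') as f:
    classes = [line.strip() for line in f]

image = cv2.imread('alarm.png')
h, w = image.shape[:2]
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

for output in outputs:                       # one array per YOLO detection layer
    for detection in output:
        scores = detection[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.5:
            cx, cy, bw, bh = detection[:4] * np.array([w, h, w, h])
            print(classes[class_id], confidence,
                  int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh))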

Output of yolov3-tiny.weights model

The idea is to use YOLO to filter out the false alarms from the motion detection program of Part 1. In Part 3, we will look at tiny-YOLO, Tensorflow, and SSD_Lite. The Raspberry Pi is truly the little computer that could.

Happy Trails.

The Little Computer that Could: motion detection with OpenCV and Raspberry Pi Part 1 of 3



One of the silver linings of this pandemic lockdown is that you get round to doing one or two things you always meant to do. For me it is vision systems. I happened to be testing an LCD panel using my Raspberry Pi 3. Like that little engine, I think I can ...

As it neared the top of the grade, which had so discouraged the larger engines, it went more slowly. However, it still kept saying, "I—think—I—can, I—think—I—can." - The Little Engine That Could


OpenCV seems as good a starting point as any. My Raspberry Pi 3 did not have the most recent image, so your mileage may vary.

# cat /proc/version
Linux version 4.19.66-v7+ (dom@buildbot) (gcc version 4.9.3 (crosstool-NG crosstool-ng-1.22.0-88-g8460611)) #1253 SMP Thu Aug 15 11:49:46 BST 2019

# cat /etc/debian_version
9.13

Using jeremymorgan's instructions, 

# apt-get update
# apt-get upgrade

After a really long wait, it finished. If like me your downloads get interrupted you can resume using:

# dpkg --configure -a

I also had to tweak jeremymorgan's instructions a little:

# wget https://bootstrap.pypa.io/pip/3.5/get-pip.py
# python3 get-pip.py

But the next command failed:
# pip install opencv-contrib-python
-su: /usr/bin/pip: No such file or directory

Wait, I just installed pip!
# whereis pip
pip: /etc/pip.conf /usr/local/bin/pip /usr/local/bin/pip2.7 /usr/local/bin/pip3.5 /usr/share/man/man1/pip.1.gz

Ah, my python install usually seeks its commands at /usr/bin, so
# ln -s /usr/local/bin/pip /usr/bin/pip

Now it completes:
# pip install opencv-contrib-python
Requirement already satisfied: numpy>=1.12.1 in /usr/lib/python3/dist-packages (from opencv-contrib-python) (1.12.1)
Installing collected packages: opencv-contrib-python
Successfully installed opencv-contrib-python-4.1.1.26

I'll be needing something to play video, so
# apt-get install mplayer

OpenCV seems to speak native C++, but python seems like a good idea to me, so:
# pip3 install opencv-python

The Raspberry Pi has a built-in camera interface and all the OpenCV worked examples seem to use it, but I did not have a camera handy. What I do have is my old Trendnet TV-IP422WN IP Camera. It works with an ancient, rickety and totally unsafe version of Internet Explorer (ActiveX - eeek!), but with a little bit of luck, and help from Aswinth Raj, I got it to work over plain http:

http://192.168.10.30/cgi/mjpg/mjpg.cgi

2021-08-04 update: The Trendnet TV-IP422WN also supports rtsp:

rtsp://192.168.10.30/mpeg4

The program to test your non-Raspberry Pi video camera is here.
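In case that link goes stale, a minimal test along the same lines looks something like this (the URL is my Trendnet's; press q to quit):

import cv2

URL = 'http://192.168.10.30/cgi/mjpg/mjpg.cgi'   # or rtsp://192.168.10.30/mpeg4

cap = cv2.VideoCapture(URL)
if not cap.isOpened():
    raise SystemExit('Cannot open camera stream')

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow('IP camera', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()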

Now Aswinth Raj managed to stream video from his DVR with OpenCV using rtsp. It is not http, but worth a try. His code uses cvui, so in it goes:

# pip3 install cvui

But when you first run motion_capture.py, there are errors. You need:
# apt-get install libatlas-base-dev

Then the python module numpy throws an exception. For some mysterious reason this made it work:
# apt-get remove python-numpy
# apt-get remove python3-numpy
# pip uninstall numpy
# pip3 uninstall numpy
# pip3 install numpy
# apt-get autoremove

This caused a crash and CPU reboot, after which
# dpkg --configure -a
# apt-get autoremove
# pip3 install numpy

And now, work it does. Here is my http version.

Next is motion detection, by automaticaddison. He has working python code for two methods: absolute difference and background subtraction. Both work without fuss; I only modified them for my Trendnet IP camera instead of the default Raspberry Pi camera.

Note the dog is also detected but the program chose the biggest contour
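For orientation, the absolute-difference method boils down to roughly the sketch below; the blur kernel and the threshold of 25 are my assumptions, and as in the caption, only the biggest contour gets a bounding box:

import cv2

cap = cv2.VideoCapture('http://192.168.10.30/cgi/mjpg/mjpg.cgi')
ok, first = cap.read()
previous = cv2.GaussianBlur(cv2.cvtColor(first, cv2.COLOR_BGR2GRAY), (21, 21), 0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    delta = cv2.absdiff(previous, gray)                       # per-pixel difference
    thresh = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    previous = gray
    cv2.imshow('motion', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()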


Now by any reasonable measure either program would solve the problem of motion detection using Raspberry Pi and OpenCV. True, the frame rate was not brilliant at 1 fps but it would detect most intrusions unless the intruder flat-out sprinted across the camera field of view.

My problem was that I had mounted my IP camera to look out on the front lawn, which pointed the camera northeast. When run 24/7, the morning sun and the evening shadow of the house, coupled with wind blowing through the vegetation, caused almost continuous triggering. It did not matter whether absolute difference or background subtraction was used. When motion_capture.py was modified to save the triggering images, it used up a few gigabytes every day.

One way to reduce false alarms is to mask out problem regions. I used the cruder approach of image cropping, which does work, but over 24 hours essentially my entire background still changes a few times a day.
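Cropping or masking in OpenCV is just numpy slicing; the coordinates below are placeholders:

import cv2

frame = cv2.imread('alarm.png')        # any captured frame

# keep only a region of interest: rows (y) first, then columns (x)
roi = frame[100:480, 0:500]

# or blank out a troublesome region (e.g. waving branches) before differencing
frame[0:100, 400:640] = 0
cv2.imwrite('cropped.png', roi)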

Adjusting the image threshold, including adaptive thresholding, made little difference. Next I took Adrian Rosebrock's weighted average (cv2.accumulateWeighted) of a few consecutive frames, and while this helped, the false alarms were still annoyingly high.

The next thing to try was to only raise an alarm on, say, 5 consecutive triggers. This caused some intrusions to be missed, particularly if the intruder moved quickly, and there were still too many false alarms. A similarly crude method was to put upper and lower limits on the size of the detected bounding rectangle (cv2.boundingRect); both are sketched below.
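In code, the debouncing and size limits amount to something like this; the area limits and the count of 5 are assumptions, and the contour comes from the detection loop:

import cv2

MIN_AREA, MAX_AREA = 500, 100000   # assumed pixel-area limits for a plausible intruder
CONSECUTIVE_NEEDED = 5             # triggers in a row before raising the alarm

consecutive = 0

def motion_event(contour):
    # Count only plausibly-sized detections; alarm after enough in a row.
    global consecutive
    x, y, w, h = cv2.boundingRect(contour)
    if MIN_AREA <= w * h <= MAX_AREA:
        consecutive += 1
    else:
        consecutive = 0
    return consecutive >= CONSECUTIVE_NEEDED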

I ended up using all the methods one after the other, and got maybe a few false alarms per hour. It sort of works, but is a little error-prone (i.e. it misses intrusions). The program is background_subtraction.py. One possible improvement is Ivan Kudriavtsev's take on motion detection. The installation of Numba looks a bit daunting, so I left it for another time.

In Part 2, we will investigate if object recognition will help.

There you have it: the little Raspberry Pi that could do motion detection using OpenCV. Happy Trails.

Sunday, 9 August 2020

If at first you don't succeed, try try again: Flashing an old Beaglebone Black eMMC

 

2019 Hugo Award for Best Novelette: Zen Cho

Back in 2016 I bought a few Beaglebone Blacks, but did not get round to using them. I guess they were superseded by the later-model Raspberry Pi's. 


Beaglebone Black


The Pi has always been less reliable than the Beagleboard or Beaglebone. The Broadcom USB subsystem was especially flaky, and since the Pi's USB bus handled both disk IO and Ethernet, you were often out of luck, as Broadcom is not really renowned for fixing things. Then there are little things like an industrial temperature rating of 85 degrees Celsius. And there is having to run Linux from sdcards, which are certain to wear out. Even worse, the full-size sdcard sockets fail after a few years. Luckily the later Pi models use microsd sockets, but I am still stuck with a bunch of Pi Model Bs and their flaky sdcards.

The Pi had irresistible things going for it. It was way cheaper, had more addon modules (ie 'hats'), and best of all, it had Debian. There was no longer the month-long struggle to get Angstrom Linux to run properly. Yet those niggling problems ...

Getting my Beaglebone Black running was unexpectedly painless. I plugged it into my laptop USB port and it was running. It came up as /dev/ttyACM0, a serial port. No problem, all I needed was minicom, and the default settings of 115200 baud, 8 bits, 1 stop, no parity.


The default account is 'debian' with password 'temppwd'. There is no root password. I did not have to struggle with Angstrom. In fact I forgot to put in the sdcard, which meant it booted from the on-board mass storage. And it was Debian!

root@beaglebone:~# cat /etc/dogtag

BeagleBoard.org Debian Image 2015-03-01

Turned out the onboard memory was eMMC, still flash memory, but in IC form without those dreaded sockets. RAM is 512MB and there is no built-in WiFi; that would come only with the Beaglebone Black Wireless.

From bottom left: USB Master socket, microsd and the elusive User-boot button


First order of business with Debian is to get it up to date. I connected it via copper LAN to my ADSL modem where it found the Internet on its own. But 'apt-get update' had errors, even though it technically did not fail:

root@beaglebone:~# apt-get update

W: Failed to fetch http://ftp.us.debian.org/debian/dists/wheezy/contrib/binary-armhf/Packages  404  Not Found [IP: 64.50.233.100 80]

W: Failed to fetch http://ftp.us.debian.org/debian/dists/wheezy/non-free/binary-armhf/Packages  404  Not Found [IP: 64.50.233.100 80]

W: Failed to fetch http://ftp.us.debian.org/debian/dists/wheezy-updates/main/binary-armhf/Packages  404  Not Found [IP: 64.50.233.100 80]

'apt-get upgrade' finished OK:
root@beaglebone:/home/debian# apt-get upgrade

But the version was wheezy, and really old. Not to worry, I reached for the latest images and downloaded AM3358 Debian 10.3 2020-04-06 4GB SD IoT. It only needed a tiny (4GB!) microsd card, and:

$xzcat bone-debian-10.3-iot-armhf-2020-04-06-4gb.img.xz | sudo dd of=/dev/sdc
7372800+0 records in
7372800+0 records out
3774873600 bytes (3.8 GB, 3.5 GiB) copied, 3001 s, 1.3 MB/s

But it did not boot from the microsd, and instead after an hour or so booted from eMMC. Time to Read the Manual. The manual is no fluffy faux-friendly 'Getting Started' guide; it reads like a datasheet with schematics in glorious abundance.

There is mention of a 'Boot button' which, if held down while the beaglebone is power-cycled, will force a boot from sdcard. But it did not work; it eventually always reverted to the old eMMC wheezy Debian.

After wasting a couple of days ruling out a hardware malfunction, the problem had to be the Debian image. One hint was that Debian Image 2015-03-01 would not mount Debian 2020-04-06. That would point to an ext3 filesystem incompatibility. The existing eMMC Debian code would be needed to mount and boot the new Debian before flashing could commence.

And yet all this has been solved before; one compromise is to use a fossil filesystem (like FAT16) just for booting, and indeed Debian 2015-03-01 had such a partition but Debian 2020-04-06 did not. Usually SoCs have a separate bootrom to prevent bricking incidents like this (as on its close cousin, the Beaglebone White), but this was easy enough to test.

beagleboard.org maintains a complete archive of old images, and hoping for an intermediate Debian that would be compatible with both, I picked Debian 2016-12-09. Then it is a simple matter of:

$xzcat bone-debian-8.6-iot-armhf-2016-12-09-4gb.img.xz > /dev/sdc

And it booted from microsd, just like that.

root@beaglebone:~# cat /proc/version
Linux version 4.4.36-ti-r72 (root@a2-imx6q-wandboard-2gb) (gcc version 4.9.2 (Debian 4.9.2-10) ) #1 SMP Wed Dec 7 22:29:53 UTC 2016

You need to prepare the new image for flashing. Just find the file /boot/uEnv.txt and uncomment the last line:

cmdline=init=/opt/scripts/tools/eMMC/init-eMMC-flasher-v3.sh

To flash the eMMC, I added the 5V power cable and powered off. Then, with the microsd still in, I held down the 'User boot' button and powered back on. This worked right off the bat: the blinkenlights did an impression of Pong and, when finished, all lit up, then shut down.

On powering up, and removing the microsd, I get:

root@beaglebone:~# cat /etc/dogtag
BeagleBoard.org Debian Image 2016-12-09

Our imugi is not yet a dragon, but this is clearly The Way. The trusty sdcard is then repurposed with:

$xzcat bone-debian-10.3-iot-armhf-2020-04-06-4gb.img.xz | sudo dd of=/dev/sdc

And now it booted off the microsd. Now all I need to do is to repeat the eMMC flashing process with the latest Debian Image, 2020-04-06. As a precaution I first upgraded without incident:

root@beaglebone:~# apt-get update
root@beaglebone:~# apt-get upgrade

I changed the default passwords and did the usual sysadmin stuff. The eMMC flash went without incident, and unlike Zen Cho's Byam, my Beaglebone Black did not turn back but transformed its eMMC to Debian 2020-04-06.

Happy Trails.

Thursday, 2 July 2020

Raspberry Pi 4 Voice Assistant Part 2 of 4: Google Text to Speech




gTTS: The Empire Strikes Back


In Part 1, one of my goals was to have my laptop issue voice commands to my Google Home smart speaker. After trying out Mycroft, Jasper seems like the logical next step. The other text to speech systems' voice quality was something like the Texas Instruments Speak & Spell products, especially ESP32Talkie. Even Mycroft sounded a bit sad next to my Home Mini speaker.

And then I stumbled upon gTTS, Google Text to Speech.  This python interface to Google's text to speech can be installed using:

pip install gTTS

And requires a program of only ten lines:

from gtts import gTTS

def text_to_speech(input_name, output_name, language):
    file = open(input_name, 'r')
    content = file.read()
    file.close()
    sound = gTTS(text=content, lang=language)
    sound.save(output_name + '.mp3')
    #https://pypi.org/project/gTTS/
text_to_speech('input_en.txt', 'sound_en', 'en')
#text_to_speech('input_tr.txt', 'sound_tr', 'tr')
print('Done!')

You put your text in a file, input_en.txt:
$cat ./input_en.txt
Hey, Google

$python tts.py

And you get back an mp3 file, sound_en.mp3

And all you need to do now to trigger the Google Home smart speaker is:

$mplayer sound_en.mp3

A typical complete command would be something like:
$mplayer HeyGoogle.mp3; sleep 1; mplayer OfficeLampOn.mp3

For some reason, the other trigger phrase 'OK Google' did not work, but for very little effort I can now integrate disparate IoT devices, be they home brewed, Alexa, or Google Home into one Voice Assistant that rules them all.

Here's what it sounds like:



Happy Trails.

Luke: Vader... Is the dark side stronger?

Yoda: No, no, no. Quicker, easier, more seductive.

Sunday, 28 June 2020

More Power! HY-M154 4-Channel Optocoupler PCB as Digital Output

Tim Allen's Home Improvement TV Series

More power always seems like a good idea at first. Well, at least you fail in style: if you were to crash and burn, you might as well burn the candle at both ends.

PC817-based 4-channel optocoupler PCB
The HY-M154 is a cheap and cheerful optocoupler board, and is very useful for converting digital input voltages to the 3.3V required by the ESP8266 or Raspberry Pi GPIO pins.

Note the 3K series resistors at the input as well as output

You could for instance feed 12V into the input IN1, and this will pass about 4mA into the optocoupler diode. The PC817 has a current transfer ratio (CTR) of 50%, so the output will put out at least 2mA. A typical V1 is 3.3V, so this is more than enough to pull the ESP8266 or Raspberry Pi GPIO pin low.
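As a rough check, assuming a forward drop of about 1.2V across the optocoupler's input LED, that is (12V - 1.2V) / 3K ≈ 3.6mA, which is where the "about 4mA" figure comes from.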

But what if I wanted to use it as an optoisolated digital output board? Connecting an ESP8266 3.3V GPIO configured as output lights up the input LED and feeds approximately 1mA into the optocoupler diode. With a CTR of 50% I would expect a maximum of 0.5mA at the output. With 5V at V1, I measured 0.09mA at the output. That will light up an LED dimly, but will not do much else.

One way to improve the output drive is to convert the output to a darlington stage. Indeed, with typically low CTRs, optocouplers with darlington outputs are quite common, like the 6N138.

Optoisolator with darlington output

I have plenty of the venerable 2N2222 in TO-92 package which will attach nicely to the HY-M154:

Darlington output stage using 2N2222 transistor
But this only got me 0.25mA, a gain of just 2.5, which was disappointing because darlingtons can often achieve gains of 100 or more. This might be because of the 3K series output resistor: I shorted it with a pair of tweezers and now get 5.6mA. Much, much better: I get a decent brightness at the output LED now, but more power! would be nice. Maybe 100mA, enough to drive a small relay, an ultrabright LED, or a buzzer.

To increase power I would need to drive the optocoupler input diode harder, and the input 3K resistor needs to be changed. I started with a 30R resistor.

HY-M154 with 30R input and 0R output resistors

I get 39mA at the darlington output at 5V. Now we're talking. Lowering it further to 10R should be good enough to drive a small relay.

I used this program to blink the outputs.

Lolin NodeMCU ESP-12E driving HY-M154. Note easyhook supplying 5V to HY-M154 output darlington 

Happy Trails.




Thursday, 14 May 2020

Raspberry Pi 4 Voice Assistant: Mycroft Part 1 of 3


(Shown a photo of a baby)
Mycroft: "Yes, looks very ... fully functioning."

Sherlock: "Is that the best you can do?"
Mycroft: "Sorry, I've never been very good with them."
Sherlock: "Babies?"
Mycroft: "Humans."

My Seeed Studio Respeaker 4-Mic Array arrived during the 2020 Covid-19 Lockdown, which pretty much guaranteed it some immediate attention.




The Seeed Studio link has some good instructions, and it was smooth sailing until the section "Alexa/Baidu/Snowboy SDK". I have Google Home Mini smart speakers and so was loath to register for Alexa. Baidu seemed like a good alternative, for in 2016 Baidu published a stunning paper on Deep Speech, using deep learning for speech to text.

Getting Baidu authorization keys, however proved way too slow so I took a quick look at Mozilla's implementation of Deep Speech, using Google's Tensorflow.

The purpose of all this (besides having some fun) is to see if I can voice-control my IoT devices without an Internet link. Also the added security and privacy seems worthwhile. And it is not like I'm going anywhere for a few days.

At this point Mycroft looks tempting, and since the instructions are straightforward, I downloaded the image file. There is a typo in the image write to sdcard; just replace /dev/sdb1 with /dev/sdb:

sudo dd if=path-to-your-image.img of=/dev/sdb bs=20M

Per the instructions, you will have to register with their website so keep a note of your registration code on the screen. Keep following until the section "Selecting audio output and audio input".

Respeaker is not listed in the microphones' list, but Dimitry Maslov comes to the rescue:

sudo apt-get update
sudo apt-get upgrade
git clone https://github.com/respeaker/seeed-voicecard.git
cd /home/pi/seeed-voicecard
./install.sh 4mic

Next, go back to Mycroft: a quick and clean way is to reboot. Reconfigure it again with

mycroft-setup-wizard

And select 'Other'. Mycroft should now work. Here's a video of mine:


Mycroft/Picroft on Raspberry Pi 4 and Respeaker 4-mic Array


Mycroft seems to run a lot slower than Google Assistant. This is because it also uploads the audio to cloud servers and Mycroft servers probably have a lot less oomph.

Next we want Mycroft to turn on an IoT lamp. We could use a few services for this, for example Adafruit, but for simplicity we will use an esp8266 1-channel relay and a webhook. We use 'mycroft-msk create' and fill in the questionnaire:

(.venv) pi@picroft:~ $ mycroft-msk create
Enter a short unique skill name (ie. "siren alarm" or "pizza orderer"): soldering station lamp

Class name: SolderingStationLampSkill
Repo name: soldering-station-lamp-skill

Looks good? (Y/n) n
Enter a short unique skill name (ie. "siren alarm" or "pizza orderer"): soldering station lamp on

Class name: SolderingStationLampOnSkill
Repo name: soldering-station-lamp-on-skill

Looks good? (Y/n) y
Enter some example phrases to trigger your skill:
- Soldering station lamp on
- Soldering station light on
- Turn on the soldering station lamp
- Turn on the soldering station light
-
Enter what your skill should say to respond:
- The soldering station light is now on
- Turning on the soldering station light
-
Enter a one line description for your skill (ie. Orders fresh pizzas from the store):
- Turns on the light on the soldering station
Enter a long description:
> Turns on the light on the soldering station
>
Enter author: cmheong
Go to Font Awesome (fontawesome.com/cheatsheet) and choose an icon.
Enter the name of the icon: lightbulb
Pick a color for your icon. Find a color that matches the color scheme at mycroft.ai/colors, or pick a color at: color-hex.com.
Enter the color hex code (including the #): #fff68f

Categories define where the skill will display in the Marketplace. It must be one of the following:
Daily, Configuration, Entertainment, Information, IoT, Music & Audio, Media, Productivity, Transport.
Enter the primary category for your skill:
- IoT
Enter additional categories (optional):
-
Enter tags to make it easier to search for your skill (optional):
- IoT
- Smart Home
- Home Assistant
-
For uploading a skill a license is required.
Choose one of the licenses listed below or add one later.

1: Apache v2.0
2: GPL v3.0
3: MIT
Choose license above or press Enter to skip? 3

Some of these require that you insert the project name and/or author's name. Please check the license file and add the appropriate information.

Does this Skill depend on Python Packages (PyPI), System Packages (apt-get/others), or other skills?
This will create a manifest.yml file for you to define the dependencies for your Skill.
Check the Mycroft documentation at mycroft.ai/to/skill-dependencies to learn more about including dependencies, and the manifest.yml file, in Skills. (y/N) y
Would you like to create a GitHub repo for it? (Y/n) y

Enumerating objects: 12, done.
Counting objects: 100% (12/12), done.
Delta compression using up to 4 threads
Compressing objects: 100% (10/10), done.
Writing objects: 100% (12/12), 2.52 KiB | 322.00 KiB/s, done.
Total 12 (delta 0), reused 0 (delta 0)
To https://github.com/cmheong/soldering-station-lamp-on-skill
 * [new branch]      master -> master
Branch 'master' set up to track remote branch 'master' from 'origin'.
Created GitHub repo: https://github.com/cmheong/soldering-station-lamp-on-skill
Created skill at: /opt/mycroft/skills/soldering-station-lamp-on-skill

And that is all there is to it. Don't worry about uploading to github - it is optional. You will now get a whole bunch of smallish files:

(.venv) pi@picroft:~ $ ls -l /opt/mycroft/skills/soldering-station-lamp-on-skill/
total 32
-rw-r--r-- 1 pi pi  393 May 14 15:15 __init__.py
-rw-r--r-- 1 pi pi 1058 May 14 15:20 LICENSE.md
drwxr-xr-x 3 pi pi 4096 May 14 15:13 locale
-rw-r--r-- 1 pi pi 1009 May 14 15:20 manifest.yml
drwxr-xr-x 2 pi pi 4096 May 14 15:15 __pycache__
-rw-r--r-- 1 pi pi  531 May 14 15:17 README.md
-rw-r--r-- 1 pi pi   35 May 14 15:17 settings.json
-rw-r--r-- 1 pi pi  631 May 14 15:20 settingsmeta.yaml

The file we are interested in is __init__.py. Since this is a toy example to get you going, we are going to use the crudest possible and most insecure method: using a bash shell to launch our webhook. The modified file is in my github repository, but it is so small I'll also list it here:

$cat __init__.py
from mycroft import MycroftSkill, intent_file_handler
import subprocess

class SolderingStationLampOn(MycroftSkill):
    def __init__(self):
        MycroftSkill.__init__(self)

    @intent_file_handler('off.lamp.station.soldering.intent')
    def handle_off_lamp_station_soldering(self, message):
        cmd = "curl -k http://ww.xx.yy.zz:8080/1/on"
        answer = ""
        try:
            answer = subprocess.check_output(cmd, shell=True)
        except:
            print(str(answer))
        print(str(answer))

        self.speak_dialog('off.lamp.station.soldering')


def create_skill():

    return SolderingStationLampOn()

Here's a video of the result. You will notice there is another skill to turn off the lamp.

Mycroft with IoT Skill

But what I really wanted was for Mycroft to function offline. There is some talk of a "Personal Server" version, and as this forum shows, there is a lot of code by the redoubtable JarbasAI, but it is not quite ready yet.

So, it is back to my Respeaker and DeepSpeech image: we will look at Jasper in Part 2.

Happy Trails

Friday, 8 May 2020

Hacking the Raspberry Pi Model B to use with Geekworm UPS Hat


Raspberry Pi Model B (not Plus) with Composite Video socket dismounted
I recently bought a Geekworm UPS Hat for the Raspberry Pi Model B Plus and later. It has, shall we say, a few rough edges, especially at loads of 500mA and above. It functions well enough as a UPS at loads of, say, 480mA, so paired with a Raspberry Pi Model B it works quite well.

Now I have always wanted a solar-powered wifi repeater. In the day, a solar panel provides DC power and also charges up an NS60 (nominal 60Ah at 12v) car battery. At night it runs off the battery. It seems reasonable enough: the Raspberry Pi 3 Model B Plus wifi repeater took up more than 500mA at 5V 24 hours a day, whilst the solar panel might supply 2A at 19V for maybe 6 hours a day.

But before I could buy the NS60, the Covid-19 pandemic of 2019 intervened. Rather than wait for lockdown to pass, why not reconfigure it to run from solar power in the day, and seamlessly switch over to mains power at night. I would need a couple of relays: one each for mains and solar power 5V DC-DC buck converters. And to ensure a trouble-free switchover, a UPS Hat for the Raspberry Pi would be nice ...

The Geekworm UPS Hat worked well at 500mA, and misbehaved over 600mA: the WiFi Repeater would reset on switchover, or it would not charge the lithium battery on switching back to mains power. Now this is actually self-recovering: when the battery runs down it resets the load and the battery charges again. There might be a minute of WiFi repeater service interruption; TM Net, my service provider, certainly does that a few times a day. But this is humiliating; not tolerable for anyone other than TM Net.

My Raspberry Pi Model B Plus drew 510mA clean and 600mA once the WiFi dongle started firing up. On the other hand a Raspberry Pi Model B drew only 440mA and might just work. The trouble is, the Geekworm UPS Hat has a 40-pin socket and the Model B only has a 26-pin header.

Raspberry Pi Model B's 26-pin Header

Many hardware designers seek backward compatibility when upgrading their designs. Often old hats will work on new models, but new hats will not. This means the headers will have a lot of similarity, sometimes enough to work. A quick comparison shows that the first 26 pins of the 40-pin header are pretty much identical, except for GPIO 19-21. The I2C pins are the same, and crucially, so are the power and ground pins.

Raspberry Pi Model B Plus' 40-pin Header
And as long as the 3 contentious GPIO pins are not used, they will default to GPIO inputs, and mis-wired but unused inputs are harmless.

But there is another problem: the Model B's composite video output, an RCA socket is too tall and gets in the way of the Geekworm 40-pin socket. This is easily de-soldered.

Remember, an electronic engineer's favorite programming language is solder!
Once the RCA socket is removed, the Geekworm UPS Hat mounts nicely onto a Model B.

Geekworm UPS Hat on Raspberry Pi Model B
The Model B powers on nicely from battery. But the proof of the pudding is in the eating. Besides the Geekworm UPS I2C device at address 0x36, I also had an ADS1115 4-channel analog input card at address 0x48.

# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- -- 
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
30: -- -- -- -- -- -- 36 -- -- -- -- -- -- -- -- -- 
40: -- -- -- -- -- -- -- -- 48 -- -- -- -- -- -- -- 
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
70: -- -- -- -- -- -- -- --             

And from my last post

# ./ups_read
97

# ./ups_read -vc
4.161250V 96.550781%


Blinkenlights galore: Raspberry Pi Model B booting from battery power

To probe it a little further, I printed out a few more registers of the MAX17048:
# ./ups_read2 -a
MODE 10 00
CFG 97 1c
CRATE ff f9
VER 00 12
STAT 00 ff
VRST 00 0c
VCELL cf f0
CREG 60 65
4.158750V 96.394531%
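As a sanity check on that dump: VCELL reads 0xcff0 = 53232, and at 78.125 microvolts per LSB (the conversion used in ups_read) that is 53232 x 78.125 microvolts ≈ 4.15875V. Likewise CREG reads 0x6065 = 24677, and 24677 / 256 ≈ 96.4%, matching the last line.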

The code is in my github repository.

Note that the 5V input into the Geekworm UPS Hat greatly affects its operation. In the picture of the WiFi extender below, a smaller 5V DC-DC buck converter (red PCB) was used, and this resulted in the Geekworm Hat not charging. Swapping it out for a 5V 3A unit (blue PCB) did the trick. Note that when the battery is at full charge, it takes a little while, maybe a few minutes, for the Hat to start charging.

Raspberry Pi Model B with Geekworm UPS Hat, installed as daytime solar powered WiFi extender. From top: ADS1115 I2C analog converter, WiFi dongle, Pi Model B with Geekworm UPS Hat, 2A buck converter (disconnected), and 3A CC CV buck converter

And yes, the Raspberry Pi daytime solar WiFi Repeater works for now. It's early days yet and there are many switchovers and switchbacks yet to come.

And there you have it: how to hack a 26-pin Raspberry Pi Model B for the Geekworm UPS Hat.

Happy Trails



Thursday, 30 April 2020

Geekworm Raspberry Pi UPS Hat

I did not really have much faith in the Geekworm Raspberry Pi UPS Hat. It cost just RM48 and change and it came with a 3.7V 2700mAh lithium battery. The software links did not work (the correct link is here) and the driver looked out of date.

Plus there have been some rather harsh comments about it. Some, like this one, seem reasonable. Now I am not saying they are wrong, but that Geekworm Raspberry Pi UPS Hat worked for me, much to my surprise.

It is quite well described here and I will not repeat the information. I originally had the idea of using a store-bought power bank for the very same purpose. I thought that if I ran the power bank with the charger always connected and the Raspberry Pi always drawing power, it should work like a cheap UPS. Not bad for an RM30 no-name 3000mAh power bank.
Power bank as UPS. Note the Qi receiver coil
I found one while walking my dogs. Its cover was cracked open and it looked like it had been thrown out of a car, but when it was dried out it worked. The battery looked intact, did not overheat and was not bulging.

But sadly it did not work consistently. When the mains charger was powered off it ran from battery well enough. The problem was when the charger was switched on, it sometimes rebooted. Looks like the output power was not quite stable enough during the switchover. A common enough problem with regular UPS.

Now I can still use it as a UPS for systems that can tolerate a reboot. In some cases I simply put a little restart code in /etc/rc.local. And I can't really complain about the price. Do be careful of discarded lithium batteries though. They have been known to explode, or burn white-hot.

But a real UPS would switch over and back without a glitch. And it would be nice to have an indication of the battery state. The Geekworm UPS actually looked like it was adapted from a power bank circuit (I am not sure; I did not check), and it would be good to see where I fell short.

Having heard some comments about bad batteries, miswired batteries and faulty UPS boards, I checked both the battery output and the polarity before I hooked it up. It all checked out and powered on a Raspberry Pi 3 Model B+ quite nicely while still on charge.

Geekworm UPS Hat installed
It ran nicely on battery, and did not reset when the charger was reconnected, even when running X Windows, Chrome and a youtube video at 1920x1080 resolution.

The extremely brief 'manual' (http://raspberrypiwiki.com/File:UserManual.pdf) called for installation of a driver:

User Guide:
1. Upgrade software:
sudo apt-get update
sudo apt-get upgrade
2. Enable the I2C function via raspi-config tool.
3. Install wiringPi .
git clone git://git.drogon.net/wiringPi
cd wiringPi
git pull origin
cd wiringPi
./build
4. Download the zip package 'rpi-ups-hat.zip'
unzip rpi-ups-hat.zip
cd rpi-ups-hat
5. Run the tested program
sudo python example.py

But there is also a C program, which I reproduce in full here:
$cat main.c

#include <unistd.h>                     // close read write
#include <stdio.h>                      // printf
#include <fcntl.h>                      // open
#include <linux/i2c-dev.h>
#include <sys/ioctl.h>
#include <getopt.h>


#define VREG 2
#define CREG 4
#define BUFSIZE 16
#define DEV "/dev/i2c-1"
#define ADRS 0x36


static int readReg(int busfd, __uint16_t reg, unsigned char *buf, int bufsize)
{
    unsigned char reg_buf[2];

    reg_buf[0] = (reg >> 0) & 0xFF;
    reg_buf[1] = (reg >> 8) & 0xFF;

    int ret = write(busfd, reg_buf, 2);

    if (ret < 0) {
        printf("Write failed trying to read reg: %04x (0x%02x 0x%02x)\n", reg, reg_buf[0], reg_buf[1], reg);
        return ret;
    }

    return read(busfd, buf, bufsize);
}

int main(int argc, char **argv)
{
    int vOpt = 0, cOpt = 0, o;

    while ((o = getopt (argc, argv, "vc")) != -1) {
        switch (o)
        {
        case 'v':
            vOpt = 1;
            break;
        case 'c':
            cOpt = 1;
            break;


        }
    }

    int bus = 1;
    unsigned char buf[BUFSIZE] = {0};

    int busfd;
    if ((busfd = open(DEV, O_RDWR)) < 0) {
        printf("can't open %s (running as root?)\n",DEV);
        return(-1);
    }

    int ret = ioctl(busfd, I2C_SLAVE, ADRS);
    if (ret < 0)
        printf("i2c device initialisation failed\n");

    if (ret < 0) return(-1);

    readReg(busfd, VREG, buf, 2);

    int hi,lo;
    hi = buf[0];
    lo = buf[1];
    int v = (hi << 8)+lo;
    if (vOpt) {
                printf("%fV ",(((float)v)* 78.125 / 1000000.0));
        }

    readReg(busfd, CREG, buf, 2);
    hi = buf[0];
    lo = buf[1];
    v = (hi << 8)+lo;
    if (!cOpt && !vOpt) {
                printf("%i",(int)(((float)v) / 256.0));
        }
        if (cOpt) {
                printf("%f%%",(((float)v) / 256.0));
        }

        printf("\n");

    close(busfd);
    return 0;

}

I started with the usual:

sudo apt-get update
sudo apt-get upgrade

And used raspi-config to turn on i2c. As soon as I did that an i2c device came up:

# ls -l /dev/i2*
crw-rw---- 1 root i2c 89, 1 Apr 30 20:28 /dev/i2c-1

A quick scan of the C program main.c showed that that was all it needed to run. So, I skipped all the other steps and went straight to:

# gcc main.c -o ups_read

And it runs:
# ./ups_read
97

When I disconnected the charger and ran on battery it actually reads a little higher:
# ./ups_read
98

But it quickly starts to show a correct, discharging trend:
# ./ups_read
97

I ran it for some 6 minutes at pretty much full power and it went to 83%:
# date;./ups_read
Thu Apr 30 20:46:49 +08 2020
83

When I plugged the charger in I get the wobble:
# date;./ups_read
Thu Apr 30 20:47:27 +08 2020
82
And yes, it charges:
# date;./ups_read
Thu Apr 30 20:47:49 +08 2020
83

# ./ups_read -vc
4.161250V 96.550781%

And that was all it took. Raspberry Pi UPS Hat on the cheap.

Happy Trails.


[Update 2020-05-06]
Managed to reproduce some of the problems mentioned above by increasing the current drawn from the UPS. Decreasing the UPS input power did not affect it much, except that it increased the charging time. But if I increased the UPS load too much (by loading up the Pi USB ports with, for example, a WiFi dongle at full power), a switchover from mains to battery supply now caused the Pi 1 to reset. And on reboot, the Geekworm UPS often failed to charge unless the load was power-cycled.

600mA peak current draw from the Geekworm UPS Hat was enough to trigger the problems mentioned. A 510mA peak draw was more or less OK for trouble-free operation. Treat the numbers as a guide: I only used a USB Charge Doctor to measure the peak current consumption. This almost guarantees an incorrect reading. An oscilloscope would be a better choice.  

The 4 LEDs do not accurately reflect the battery state of charge. The I2C value read by ups_read is much more accurate.