Saturday, 25 December 2021

Philips 9W LED Lightbulb Repair

 

Philips 9W LED lightbulb, with plastic cover off

For the first 40 years or so of my life lightbulbs were not repairable: you just threw them away. So when the living room lamp bulb went dark this Christmas afternoon, I had a bit of a lightbulb moment: lightbulbs are LED these days and perhaps they can be repaired! 

The 'bulb' was simply a plastic dome, and had to be pried off. It was just glued on, with no need for a hermetic seal. Just one LED, LED3, was blown, but since all the LEDs are connected in series, the entire string no longer worked.

LED3 (bottom center) had a crater in it and fell apart the minute it was touched.

The repair was simple: I just bridged LED3's contacts with solder.

LED3 solder-bridged

And it works!

Repaired lightbulb. Notice the dark spot missing an LED at far left of LED ring

I decided not to install the plastic cover: that will help compensate a little for the reduced light from the missing LED. But of course, hazardous voltages are now exposed. Which is why I installed it in my soldering station lamp, which is nearly entirely covered and out of harm's way:


Toolboom has a great article on LED lightbulb repair; do check it out. I had bought just one smart lightbulb, not wanting to throw out a bunch of electronics just because the bulb is faulty. Do not just bin your faulty IoT devices: they have to be physically destroyed as the electronics can be a security risk. Now that they can be repaired, it would be fun to hack a bunch of faulty bulbs. 

Have yourself a merry little Christmas. Happy Trails. 

Frank Sinatra - Have Yourself A Merry Little Christmas


Friday, 24 December 2021

Hacking the Hitachi RAC-EJ10CKM Air Conditioner Remote

 

Hitachi RAC-EJ10CKM and IR Remote

Most Internet of Things (IoT, ie smart home) work focuses on new products: new dimmable color LED lightbulbs, new robot vacuum cleaners, cameras, toasters and the like. Getting a reasonable degree of intelligence into a smart home requires considerable financial investment.

And yet there are serious issues that discourage such an outlay: computer security, reliability (especially for anything WiFi-based), inter-operability problems, and vendor lock-in. Google products rely on Google servers, and those can fail. Being locked out of your house by a smart lock is extremely annoying.

Perhaps one way of encouraging IoT adoption is to lower the cost of entry. Most people already have home appliances; maybe we should retrofit IoT onto existing appliances like smoke detectors, or listen for unusual noises: doors or windows opening, dogs barking, thunder. Or monitor the oven; a 'Hey Google, there's a chicken in the oven' function would be nice. But let us start with the easy ones, some low-hanging fruit: air conditioners.

I have always wanted to automate my air conditioner: it would be nice not to worry about leaving it on by accident. I usually need it for just an hour or two until I fall asleep, so it would be nice to have it take an input from a sleep tracking sensor, or have a passive infrared sensor turn it off when there is nobody in the room. Most air conditioners seem to be controlled from an infrared remote, so perhaps I can get an ESP8266 to transmit the control codes. There are quite a few Arduino projects hacking AC remotes, like this one from TaxeIT, whose source code is here.

Unfortunately my Hitachi seems to be one of the few exceptions. I could capture the control codes, but the AC stubbornly refused to respond. Yet TaxeIT's method worked well with my Toshiba TV remote. Perhof seems to have the answer: the Hitachi code is simply too long! Perhof uses code from Analysir, and my copy (tweaked to use GPIO14) is here. It is nearly the same except for a little tweak for the ESP8266:

void ICACHE_RAM_ATTR rxIR_Interrupt_Handler() {

The resulting capture files are in my github repository. It helps to check the length of the capture file from each button press:

$awk -F"," '{print NF-1}' AC_On_Raw.txt
530
$awk -F"," '{print NF-1}' AC_Off_Raw.txt
537

This is markedly longer than the files from TaxeIT:

$awk -F"," '{print NF-1}' hitachiACon.txt
98
$awk -F"," '{print NF-1}' hitachiACoff.txt
98

The bigger capture files need more memory, which may be awkward for some CPUs, but fortunately the ESP8266 was more than adequate. Analysir's tutorial has the code to transmit large files. Note the raw capture files have alternating positive and negative numbers, but the transmit code expects them all to be positive. This is easy enough to fix:
$sed 's/-//g' AC_On_Raw.txt > AC_On.txt
$sed 's/-//g' AC_Off_Raw.txt > AC_Off.txt
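
The clean-up and the conversion into a C array can also be scripted. Here is a minimal Python sketch of the idea, assuming the comma-separated raw capture format shown above; the script name, file names and array name are just examples, and the output may need touching up to match what sendRAW_Flash.ino expects:

# raw2progmem.py: turn a comma-separated raw IR capture into a C PROGMEM array
# Usage: python3 raw2progmem.py AC_On_Raw.txt HitachiAC_On
import sys

infile, array_name = sys.argv[1], sys.argv[2]
with open(infile) as f:
    # keep only the numbers, dropping signs and any empty trailing field
    values = [abs(int(v)) for v in f.read().replace('\n', '').split(',') if v.strip()]

print('const unsigned int %s[] PROGMEM = {' % array_name)
print(', '.join(str(v) for v in values))
print('};  // %d entries' % len(values))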

And the codes can then be read into sendRAW_Flash.ino with a little bit of editing. This worked the first time, as long as I held the transmitter no more than 1m away from the Hitachi air conditioner's IR receiver. Note that throughout, my hardware is still from TaxeIT, reproduced here for convenience:

Image from TaxeIT

Schematic from TaxeIT

There you have it, a proven hack for the Hitachi RAC-EJ10CKM IR Remote.

Happy Trails.

Thursday, 23 December 2021

Remote Control of Hitachi RAC-EJ10CKM Air Conditioner

 

NodeMCU ESP-12E with Baseboard and IR transmitter. The clothes peg is used to hold the IR LED in place aimed at the air conditioner

I have often worried about leaving the air conditioner on when I am out of the house, so being able to remotely monitor and control it seemed like a good idea. Using its infra-red remote link seemed like the natural way. 

The go-to method would be to buy a spare remote and wire an ESP8266-based WiFi relay to the On/Off button, but just for kicks I thought it might be fun to hack the 38kHz remote datalink itself. That is the subject of another post, but having hacked it, I now need to transmit the On/Off code to the  air conditioner's indoor unit. 

As usual someone, in this case TaxeIT, has beaten me to it. The relevant circuit here is the IR transmitter, which uses an ESP8266 output pin to drive an IR LED via a 2N2222 transistor. I ripped an IR LED off an old DVD player remote, and my power adapter is 9V DC from a long-dead ADSL modem. For ease of installation, the aim was to be able to park the transmitter as far away as possible and still reliably switch the air conditioner. I managed 2 metres; the Hitachi remote easily did 4 metres. My circuit is:

38kHz IR Transmitter Circuit

There is a good writeup on driving IR LEDs by 'E' here. I was probably a little conservative with my unknown LED, for 'E' drives his IR204 at 200mA. The IR204 has a maximum continuous current rating of 100mA but a peak current of 1000mA. Since the LED is only transmitting for milliseconds, this is probably OK.
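
As a sanity check on the drive current, a quick back-of-the-envelope calculation of the series resistor helps. The forward voltage and saturation voltage below are typical assumed values, not measurements:

# Rough series resistor calculation for the IR LED (assumed values)
V_supply = 9.0    # volts, from the old ADSL modem adapter
V_led    = 1.3    # volts, typical IR LED forward voltage (assumption)
V_cesat  = 0.2    # volts, 2N2222 saturation voltage (assumption)
I_led    = 0.1    # amps, a conservative 100mA drive
R = (V_supply - V_led - V_cesat) / I_led
print('Series resistor: %.0f ohms' % R)   # about 75 ohms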

Bear in mind my circuit is for convenience only; I happened to have a NodeMCU baseboard V1 for my ESP-12E, which lets you use up to 12V at the input. There is nothing wrong with using 5V and dispensing with the baseboard like TaxeIT. One advantage of 9V or higher is the extra headroom to drive more than one IR LED in series. Angling each LED in a slightly different direction will greatly ease the problem of lining up the transmitter with the air conditioner receiver. Try not to overdo it: if there is more than one air conditioner, you might accidentally switch the wrong one.

The other reason to use a baseboard is that it is easily powered by a battery or power bank, which makes it a lot more convenient to check out the possible installation points.

The decoded remote data is something like

const unsigned int HitachiAC_On[] PROGMEM = {3378, 1696, 448, 1255, 448, 398, 471, 398, 470, 398, 470, 399, 471, 397, 471, 399, 471, 406, 470, 398, 470, 398, 470, 398, 471, 397, 472, 1255, 449, 398, 471, 398, 471, 404, 471, 398, 470, 397, ....

Note that despite the Hitachi using the same button for On/Off, it sends a different bitstream on Off:

const unsigned int HitachiAC_Off[] PROGMEM = { 189, 63402, 2071, 133, 141, 791670, 3447, 1621, 512, 1189, 512, 355, 512, 355, 513, 354, 512, 355, 512, 355, 513, 356, 512, 362, 513, 354, 513, 354, 513, ...

These are simply timer intervals used to alternately turn the LED on and off. The hack was a little difficult as the bitstream turned out to be unexpectedly long; this is apparently true of some of the Hitachi models. The ESP8266 Arduino code is based on IRremote, with a pretty good explanation here. The source code is in github.
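
To convince yourself that the numbers really are alternating on/off intervals, a quick sanity check of a capture file is easy enough; the file name below is just one of the captures mentioned earlier, and the microsecond units are an assumption:

# Quick sanity check of a raw IR capture: entry count and total burst length
with open('AC_On_Raw.txt') as f:
    intervals = [abs(int(v)) for v in f.read().split(',') if v.strip()]
print('entries :', len(intervals))                      # around 530 for the 'On' code
print('duration: %.1f ms' % (sum(intervals) / 1000.0))  # assuming microsecond units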

To turn the air conditioner on, I use either http or MQTT. For http I use curl:
$curl --connect-timeout 2 -k http://12.34.56.78:8080/on
<!DOCTYPE HTML>
<html>
Aircond is on</html>

To use it with the MQTT server:
$mosquitto_pub  -t 'aircond/commands' -m 'StudyAC_On'

The MQTT server is typically started on power-up with something like:
$mosquitto -c /etc/mosquitto/mosquitto.conf
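
The same command can also be published from a Python script; a minimal sketch using the paho-mqtt client, with the broker address as a placeholder:

# Equivalent of the mosquitto_pub command above, using paho-mqtt
import paho.mqtt.publish as publish

# replace the hostname with whatever machine runs mosquitto
publish.single('aircond/commands', 'StudyAC_On', hostname='12.34.56.78')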

This works well as long as the transmitter is no more than 2m away and pointed directly at the Hitachi air conditioner, ie at the IR receiver in the bottom right corner. However, the command might be ignored if, say, the air conditioner is already on and the 'On' command is transmitted. This happens if, for example, someone else operated it via its regular IR remote. Worse, when the WiFi command is used, curl sometimes times out without completing the command, especially when there are WiFi connection problems.
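
One way to paper over those occasional dropouts is to retry the command a few times from a script. A minimal sketch, assuming the same URL and response text as the curl example above:

# Retry the 'On' command a few times if the ESP8266 does not answer
import time
import requests

def ac_on(url='http://12.34.56.78:8080/on', tries=3):
    for attempt in range(tries):
        try:
            r = requests.get(url, timeout=2)   # same 2s limit as curl --connect-timeout
            if 'Aircond is on' in r.text:
                return True
        except requests.RequestException:
            pass                               # WiFi hiccup, try again
        time.sleep(1)
    return False

print('AC is on' if ac_on() else 'command failed')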

To resolve lingering doubts about failing to turn off the unit, I use a separate IoT system, a Raspberry Pi, to visually detect the orange 'On' LED on the indoor unit. That seems like overkill, but that Pi can also be used to detect other events remotely: smoke detectors, thunderclaps, door bells, distress calls, etc. At some point. With a lot of programming. But you get the idea ... I integrated it into my Google Assistant smarthome server for the remote operation part.

Here is a video of it in operation:

Youtube video of voice activation



There you have it, a remote controlled Hitachi air conditioner, an IoT air conditioner.

Happy Trails.

Tuesday, 7 December 2021

Did I leave the Air Conditioner On? Indicator LED detection using Raspberry Pi and OpenCV

 

Air Conditioner Indoor Unit with Yellow Indicator LED

Air conditioners are essential in hot and humid Malaysia, especially if you want to work from home. Most of us have occasionally wondered if we have left it on after we left the house: the resulting electricity bill can be a nasty surprise. Most times you cannot do much about it, save for going back home to check.

But then I managed to hack its infrared remote using an ESP8266, turning it into an Internet of Things (IoT) device I can switch on and off from my smartphone. Now I need to know whether I left the aircond turned on at home.

When the indoor unit comes on, there is a beep and an orange LED lights up. The standard way is to mount an optocoupler diode in series with the orange LED and wire the optocoupler output to an ESP8266; the resulting IoT device will reliably report the aircond on/off status every time.

But variety, they say, is the spice of life, and I happened to have an obsolete Raspberry Pi Model B with OpenCV installed. And lots of ancient 640x480 webcams. Granted the lighting conditions would change through the day, but surely it can recognize that round orange light with some consistency?

USB webcam looking at the indicator LED from 3 feet away

It would be a bonus if software can be added later to detect that beep. That would have other applications like detection of smoke alarms' beeps, thunder, doorbell chimes and other interesting sounds. But that is another blog post.

OpenCV Raspberry Pi Model B with LAN connection (ie 'headless' mode)

Isaac Vidas's approach looks like a good starting point: first an HSV transform to isolate the color of interest, then cv2.HoughCircles() to precisely locate the LED itself.

Image after HSV transform

I did need some additional help in setting the color thresholds required in his code:

# Get lower orange hue
lower_orange_hue = create_hue_mask(hsv_image, [0, 0, 255], [0, 255, 255]) 
# Get higher orange hue 
higher_orange_hue = create_hue_mask(hsv_image, [0, 0, 255], [38, 255, 255])
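
create_hue_mask() is a helper from Isaac Vidas's code; if you are rolling your own, it is essentially cv2.inRange() plus a bitwise AND. A minimal version might look like this:

import cv2
import numpy as np

def create_hue_mask(hsv_image, lower, upper):
    # keep only the pixels whose HSV values fall between lower and upper
    lower = np.array(lower, dtype=np.uint8)
    upper = np.array(upper, dtype=np.uint8)
    mask = cv2.inRange(hsv_image, lower, upper)
    return cv2.bitwise_and(hsv_image, hsv_image, mask=mask)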

There is a handy python script by nathancy here, and together with a HSV color wheel, my threshold values could be determined using several images of the aircond LED under different lighting conditions.

HSV Color wheel

  


Image after color filtering



First of all, you will need programs to view the USB webcam video and still frames. I use mplayer and feh:

# apt-get install mplayer
# apt-get install feh

You set the Pi via raspi-config not to run the X Server (ie the GUI desktop), but it helps to have the X libraries installed. From your laptop/desktop you just ssh in:
fred@pi:~ $ ssh -t -Y 12.34.56.78

And from there, mplayer should display the video on your desktop. This lets you position the camera properly.
fred@pi:~ $ mplayer tv://

To get 10 still frames after 10s (some cameras auto-adjust brightness):
fred@pi:~ $ mplayer -vo jpeg -frames 10 -ss 10 -brightness 25 tv://

You can use 'mplayer -loop 0' to display the still images, but they flash on and off rather annoyingly. I much prefer something like feh:
fred@pi:~ $ feh .images/image_on.png

And best of all, the openCV code will execute as if you were using the Pi's console (ie HDMI).

Having selected your camera position, you should probably make a set of images under different lighting conditions. I used a fragment of Isaac Vidas's code to do this, in particular to see the effect of lighting on the separate operations like blurring and HSV transformation. This is named webcamTest.py and is available on my github repository. You typically do:
 
fred@pi:~/checkLed $ source ~/opencv/OpenCV-4.0-py3/bin/activate
(OpenCV-4.0-py3) fred@pi:~/checkLed $

(OpenCV-4.0-py3) fred@pi:~/checkLed $ python ./webcamTest.py image_on.png 

Next, use the nathancy code, which I named hsvThresholder.py. 
(OpenCV-4.0-py3) fred@pi:~/checkLed $ python hsvThresholder.py

hsvThresholder.py: adjust the sliders at the bottom. Runs very slowly on a Pi B, so be patient and watch the console output in the window below

You want to adjust the various sliders in order to mask out all regions whose color differs from your LED's. A Raspberry Pi 1 Model B will be extremely slow here, so patience is required. One way is to watch the bash console messages, as they update much more quickly than the picture. Copy the final settings from the console, which will be something like:
(hMin = 0 , sMin = 0, vMin = 85), (hMax = 28 , sMax = 255, vMax = 255)

My version of Isaac Vidas's code is named checkAC_led.py, and it pretty much works as advertised, except that it required a much larger (something like 6x diameter) image of the LED. I would have needed to mount my camera much closer, just 17cm from the LED. The other problem is that the camera needs to be square-on to the LED, as cv2.HoughCircles() does not detect ellipses very well. And outline (ie hollow) circles worked better than solid ones.

Image with test circle added: this is the minimum size circle cv2.HoughCircles() will detect

Mounting my camera closer and square-on to the LED is the correct solution. It also minimizes false alarms and improves the reliability of detection. But it probably means some sort of mounting bracket on the wall, which might get in the way when the air conditioner is being serviced. A software solution would be great, and the future beep detector would help filter out those false alarms ...

This led me to cv2.SimpleBlobDetector(), which does much better with smaller and deformed circles. Take care to set minArea as large as possible: I actually counted the number of LED pixels in my HSV transform.

The gotcha here is that the HSV image has to be inverted for blob detection to work:
    h, s, image_gray = cv2.split(full_image)          # use the V channel as the grayscale image
    image_gray_neg = cv2.bitwise_not(image_gray)      # invert: the bright LED becomes a dark blob
    detector = cv2.SimpleBlobDetector_create(params)
After conversion to grayscale and inversion

After successful blob detection
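
For reference, the detector setup might look something like the sketch below. The parameter values are illustrative only; minArea in particular should be set from the LED pixel count in your own HSV mask, and image_on.png is just one of the test images captured earlier:

import cv2

params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 150                       # illustrative: roughly your LED's pixel count
params.filterByCircularity = False
params.filterByConvexity = False
params.filterByInertia = False
detector = cv2.SimpleBlobDetector_create(params)

image = cv2.imread('image_on.png')
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)
keypoints = detector.detect(cv2.bitwise_not(v))   # invert so the bright LED becomes a dark blob
print('LED on' if keypoints else 'LED off')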


The final version, checkACvideo_led.py reads from the webcam instead of still image files, filters out false alarms based on the blob x and y coordinates and prints the air conditioner status. In its IoT form the print statement just needs to be modified to publish to an MQTT server like mosquitto.

So did I leave the air conditioner on? Hey Mycroft, is my air conditioner on or off?

Happy Trails





Monday, 6 December 2021

The Littlest Computer that Could: OpenCV using Raspberry Pi 1 Model B Rev 2

 

First there was the 1B: Raspberry Pi One Model B.

Now out of production, the Raspberry Pi 1 Model B was released in 2012, earlier than the Model A. I had a few lying unused in my parts box, some damaged by defective power supplies, but most simply superseded by better versions like the Pi 2, 3 and 4. The Pi 1 Model B was the slowest, had only 512MB of DRAM and used an sdcard as mass storage. The USB functionality was questionable: the LAN chip was internally routed through the USB bus, which crippled its throughput. And since the Pi was always touchy about its 5V input power, adding basic peripherals like a keyboard, HDMI display or USB devices was always a hit-and-miss affair.

Left: sdcard, right: micro-sdcard


The sdcard is getting very hard to find. You can use a micro-sdcard with an adapter, but these tend to develop contact problems and corrupt the onboard filesystem. To make matters worse, a Pi Model B which cannot boot gives no indication: there is just the red power LED on and nothing else, and it looks pretty much like a dead Pi. Some 60% of my discarded Pi 1s simply had microsd adapter contact problems and could not boot.

On the plus side, the Pi 1 drew the least power amongst the Pi series which meant most old Android phone chargers could power it. It also had audio and video jacks, which were very handy with retro electronics. 

I managed to put one to use monitoring my solar panel, but most of the little jobs are better served by the ESP8266 or the Microchip PIC. If only the Raspberry Pi 1 Model B could run OpenCV; with a bit of nifty image processing it might find a use, perhaps checking if my front gate has been left open, or the air conditioner left running, or if the smoke alarm is beeping.

Most of the time, the Pi 3 is the minimum recommended model, but there is no mention that the Raspberry Pi 1 cannot be used. No harm trying; and since the install process can be left to run on its own, it is easily done on the side.



Raspberry Pi 1 Model B Rev 2 with infamous microsd adapter

$dd if=2021-10-30-raspios-bullseye-armhf-lite.img of=/dev/sdc

After the standard install of Raspbian, I use raspi-config to turn on the ssh server and set a fixed ethernet IP. It can then be used as a headless (ie no monitor or keyboard) system via ssh from a host laptop or desktop. After which there is the usual obligatory

# apt-get update --allow-releaseinfo-change
# apt-get upgrade

And the Pi model:

root@pi:~# cat /sys/firmware/devicetree/base/model
Raspberry Pi Model B Rev 2

Jeremy Morgan's OpenCV install worked on my Pi 3 before, but this time it stops:

# pip install opencv-contrib-python
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting opencv-contrib-python
Downloading opencv-contrib-python-4.5.4.60.tar.gz (150.7 MB)
|████████████████████████████████| 150.4 MB 93 kB/s eta 0:00:04  Killed

From 'dmesg -T' it looks like I ran out of memory:

[Tue Nov 23 18:03:30 2021] [ 5714]     0  5714    87141    21594     104       0    21597             0 pip
[Tue Nov 23 18:03:30 2021] Out of memory: Kill process 5714 (pip) score 349 or sacrifice child
[Tue Nov 23 18:03:30 2021] Killed process 5714 (pip) total-vm:348564kB, anon-rss:86376kB, file-rss:0kB, shmem-rss:0kB
[Tue Nov 23 18:03:30 2021] oom_reaper: reaped process 5714 (pip), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

My free memory is:
# free -m
              total        used        free      shared  buff/cache   available
Mem:            369          18         285           0          65         304
Swap:         15358          21       15337

And I can get a little more by editing the boot partition's config.txt:
# vi /boot/config.txt

Add:
gpu_mem=16

And comment out
#start_x=1

After a reboot I get more memory:
# free -m
              total        used        free      shared  buff/cache   available
Mem:            477          31         338           6         106         390
Swap:            99           0          99
 
But this is still not enough. Now I could increase the swap file in my micro sdcard, but the thrashing might wear it out as the number of write operations is limited. Instead I used one of the many ancient thumbdrives, discarded just because of their low capacities. I ended up using a compactflash card for its speed:

# dd if=/dev/zero of=/dev/sda bs=1M count=1024
# mkswap /dev/sda
# swapon /dev/sda

# pip install --upgrade pip setuptools wheel
# python -m pip install --upgrade pip
# pip3 install opencv-contrib-python

pip seems to have gone walkabout so,

#  ln -s /usr/local/bin/pip /usr/bin/pip

Failure:
# pip install opencv-contrib-python
    File "setup.py", line 381, in _classify_installed_files_override
      with open(os.path.join(cmake_install_dir, "python", "cv2", "__init__.py"),
 'r') as opencv_init:
  FileNotFoundError: [Errno 2] No such file or directory: '_skbuild/linux-armv6l
-3.7/cmake-install/python/cv2/__init__.py'
  ----------------------------------------
  ERROR: Failed building wheel for opencv-contrib-python
Failed to build opencv-contrib-python
ERROR: Could not build wheels for opencv-contrib-python, which is required to in
stall pyproject.toml-based projects

Vishwesh Shrimali's instructions seem promising, and his minimum requirement is a Pi 2. There are more separate bash commands, which increases the chances of a successful debug. Since a fail was near-certain, I chose to key in the commands manually instead of running Shrimali's script.

root@pi:/root/opencv# apt-get -y purge wolfram-engine
root@pi:/root/opencv# apt-get -y purge libreoffice*
root@pi:/root/opencv# apt-get -y clean
root@pi:/root/opencv# apt-get -y autoremove
root@pi:/root/opencv# apt -y update
root@pi:/root/opencv# apt -y upgrade
root@pi:/root/opencv# apt-get -y remove x264 libx264-dev
root@pi:/root/opencv# apt-get -y install build-essential checkinstall cmake pkg-config yasm
root@pi:/root/opencv# apt-get -y install git gfortran
root@pi:/root/opencv# apt-get -y install libjpeg8-dev libjasper-dev libpng12-dev
root@pi:/root/opencv# apt-get -y install libtiff5-dev
root@pi:/root/opencv# apt-get -y install libtiff-dev
root@pi:/root/opencv# apt-get -y install libxine2-dev libv4l-dev
root@pi:/root/opencv# cd /usr/include/linux
root@pi:/usr/include/linux# ln -s -f ../libv4l1-videodev.h videodev.h
root@pi:/usr/include/linux# cd $cwd
root@pi:/root/opencv#
root@pi:/root/opencv# apt-get -y install libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev
root@pi:/root/opencv# apt-get -y install libgtk2.0-dev libtbb-dev qt5-default                         
root@pi:/root/opencv# apt-get -y install libatlas-base-dev
root@pi:/root/opencv# apt-get -y install libmp3lame-dev libtheora-dev
root@pi:/root/opencv# apt-get -y install libvorbis-dev libxvidcore-dev libx264-dev
root@pi:/root/opencv# apt-get -y install libopencore-amrnb-dev libopencore-amrwb-dev 
root@pi:/root/opencv# apt-get -y install libavresample-dev
root@pi:/root/opencv# apt-get -y install x264 v4l-utils

            The following are optional:
root@pi:/root/opencv# apt-get -y install libprotobuf-dev protobuf-compiler
root@pi:/root/opencv# apt-get -y install libgoogle-glog-dev libgflags-dev 
root@pi:/root/opencv# apt-get -y install libgphoto2-dev libeigen3-dev libhdf5-dev doxygen

            Required python libraries
root@pi:/root/opencv# apt-get -y install python3-dev python3-pip

            Virtual environment:
fred@pi:~/opencv $ python3 -m venv OpenCV-4.0-py3
fred@pi:~/opencv $ echo "# Virtual Environment Wrapper" >> ~/.bashrc
fred@pi:~/opencv $ echo "alias workoncv-4.0=\"source /root/opencv/OpenCV-4.0-py3/bin/activate\"" >> ~/.bashrc                                       
fred@pi:~/opencv $ source /root/opencv/OpenCV-4.0-py3/bin/activate
(OpenCV-4.0-py3) fred@pi:~/opencv $

 Increase the swap file from 100 to 1024:
(OpenCV-4.0-py3) fred@pi:~/opencv $ sudo sed -i 's/CONF_SWAPSIZE=100/CONF_SWAPSIZE=1024/g' /etc/dphys-swapfile
(OpenCV-4.0-py3) fred@pi:~/opencv $ sudo /etc/init.d/dphys-swapfile stop
[ ok ] Stopping dphys-swapfile (via systemctl): dphys-swapfile.service.      
(OpenCV-4.0-py3) fred@pi:~/opencv $ sudo /etc/init.d/dphys-swapfile start
[ ok ] Starting dphys-swapfile (via systemctl): dphys-swapfile.service.         

(OpenCV-4.0-py3) fred@pi:~/opencv $ pip install numpy dlib
(OpenCV-4.0-py3) fred@pi:~/opencv $ deactivate

This is an over 400MB porker of a file:
fred@pi:~/opencv $ git clone https://github.com/opencv/opencv.git

In retrospect I should have checked out 4.0.1 as having cv2.drawKeypoints() would have been handy.
fred@pi:~/opencv/opencv $ git checkout 4.0.0

fred@pi:~/opencv $ git clone https://github.com/opencv/opencv_contrib.git
fred@pi:~/opencv $ cd opencv_contrib
fred@pi:~/opencv/opencv_contrib $ git checkout 4.0.0

Then comes the config:
fred@pi:~/opencv/opencv/build $ cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/home/heong/opencv/installation/OpenCV-4.0 -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D WITH_TBB=ON -D WITH_V4L=ON -D OPENCV_PYTHON3_INSTALL_PATH=/home/heong/opencv/OpenCV-4.0-py3/lib/python3.5/site-packages -D WITH_QT=ON -D WITH_OPENGL=ON -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules -D BUILD_EXAMPLES=ON ..

Now for the make:
fred@pi:~/opencv/opencv/build $ CMAKE_INSTALL_PREFIX=/usr/local
fred@pi:~/opencv/opencv/build $ export CMAKE_INSTALL_PREFIX
fred@pi:~/opencv/opencv/build $ make

Produces the error
In file included from /home/fred/opencv/opencv_contrib/modules/cvv/src/qtutil/filter/sobelfilterwidget.cpp:3:
/home/heong/opencv/opencv/modules/imgproc/include/opencv2/imgproc.hpp:208:5: note:   'FILTER_SCHARR'
     FILTER_SCHARR = -1
     ^~~~~~~~~~~~~
make[2]: *** [modules/cvv/CMakeFiles/opencv_cvv.dir/build.make:453: modules/cvv/CMakeFiles/opencv_cvv.dir/src/qtutil/filter/sobelfilterwidget.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:12676: modules/cvv/CMakeFiles/opencv_cvv.dir/all] Error 2
make: *** [Makefile:163: all] Error 2

Following Nobuo Tsukamoto, in the file /home/fred/opencv/opencv_contrib/modules/cvv/src/qtutil/filter/sobelfilterwidget.cpp I added 'using namespace cv;' at line 13, thus:

#include "../../util/util.hpp"
#include "../filterfunctionwidget.hpp"
#include "../filterselectorwidget.hpp"

using namespace cv; // cmheong 2021-11-29

namespace cvv
{
namespace qtutil
{

SobelFilterWidget::SobelFilterWidget(QWidget *parent)

After which
fred@pi:~/opencv/opencv/build $ make
Produces the error
Scanning dependencies of target example_cpp_detect_mser
[ 89%] Building CXX object samples/cpp/CMakeFiles/example_cpp_detect_mser.dir/detect_mser.cpp.o
/home/fred/opencv/opencv/samples/cpp/detect_mser.cpp:28:10: fatal error: GL/glu.h: No such file or directory
 #include <GL/glu.h>
          ^~~~~~~~~~
compilation terminated.
make[2]: *** [samples/cpp/CMakeFiles/example_cpp_detect_mser.dir/build.make:63: samples/cpp/CMakeFiles/example_cpp_detect_mser.dir/detect_mser.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:29519: samples/cpp/CMakeFiles/example_cpp_detect_mser.dir/all] Error 2
make: *** [Makefile:163: all] Error 2

From RajkiranVeldur, just do
root@pi:~#  apt-get install libglfw3-dev libgl1-mesa-dev libglu1-mesa-dev

After which
fred@pi:~/opencv/opencv/build $ make
Produces the link error
[ 89%] Linking CXX executable ../../bin/example_cpp_detect_mser
/usr/bin/ld: CMakeFiles/example_cpp_detect_mser.dir/detect_mser.cpp.o: in function `draw(void*)':
detect_mser.cpp:(.text.startup.main+0x1c70): undefined reference to `gluPerspective'
collect2: error: ld returned 1 exit status
make[2]: *** [samples/cpp/CMakeFiles/example_cpp_detect_mser.dir/build.make:134: bin/example_cpp_detect_mser] Error 1
make[1]: *** [CMakeFiles/Makefile2:29519: samples/cpp/CMakeFiles/example_cpp_detect_mser.dir/all] Error 2
make: *** [Makefile:163: all] Error 2

regpa mentioned that I needed openGLU.so, but a brute-force search could not turn one up:
fred@pi:~/opencv/opencv/build $ sudo ls -lR / | grep -e openGLU 
fred@pi:~/opencv/opencv/build $

There is however, a file called libGLU.so mentioned by myinternetofthings:
fred@pi:~/opencv/opencv/build $ sudo ls -lR / 2>/dev/null | grep -e GLU.so
lrwxrwxrwx  1 root root       15 Sep 20  2015 libGLU.so -> libGLU.so.1.3.1
lrwxrwxrwx  1 root root       15 Sep 20  2015 libGLU.so.1 -> libGLU.so.1.3.1
-rw-r--r--  1 root root   358228 Sep 20  2015 libGLU.so.1.3.1

I added it to the 2 separate link.txt files:

fred@pi:~/opencv/opencv/build $ cat samples/opengl/CMakeFiles/example_opengl_opengl.dir/link.txt
/usr/bin/c++     -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-
dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wm
issing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -W
uninitialized -Winit-self -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-comme
nt -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthr
ead -fomit-frame-pointer -ffunction-sections -fdata-sections  -mfp16-format=ieee
 -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG  -DNDEBUG    -Wl,-
-gc-sections   CMakeFiles/example_opengl_opengl.dir/opengl.cpp.o  -o ../../bin/e
xample_opengl_opengl  -Wl,-rpath,/home/heong/opencv/opencv/build/lib -ldl -lm -l
pthread -lrt /usr/lib/arm-linux-gnueabihf/libGL.so ../../lib/libopencv_highgui.s
o.4.0.0 ../../lib/libopencv_videoio.so.4.0.0 ../../lib/libopencv_imgcodecs.so.4.
0.0 ../../lib/libopencv_imgproc.so.4.0.0 ../../lib/libopencv_core.so.4.0.0 /usr/
lib/arm-linux-gnueabihf/libGLU.so

fred@pi:~/opencv/opencv/build $ cat samples/cpp/CMakeFiles/example_cpp_detect_mser.dir/link.txt
/usr/bin/c++     -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-
dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wm
issing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -W
uninitialized -Winit-self -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-comme
nt -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthr
ead -fomit-frame-pointer -ffunction-sections -fdata-sections  -mfp16-format=ieee
 -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG  -DNDEBUG    -Wl,-
-gc-sections   CMakeFiles/example_cpp_detect_mser.dir/detect_mser.cpp.o  -o ../.
./bin/example_cpp_detect_mser  -Wl,-rpath,/home/heong/opencv/opencv/build/lib -l
dl -lm -lpthread -lrt /usr/lib/arm-linux-gnueabihf/libGL.so ../../lib/libopencv_
gapi.so.4.0.0 ../../lib/libopencv_stitching.so.4.0.0 ../../lib/libopencv_aruco.s
o.4.0.0 ../../lib/libopencv_bgsegm.so.4.0.0 ../../lib/libopencv_bioinspired.so.4
.0.0 ../../lib/libopencv_ccalib.so.4.0.0 ../../lib/libopencv_cvv.so.4.0.0 ../../
lib/libopencv_dnn_objdetect.so.4.0.0 ../../lib/libopencv_dpm.so.4.0.0 ../../lib/
libopencv_face.so.4.0.0 ../../lib/libopencv_freetype.so.4.0.0 ../../lib/libopenc
v_fuzzy.so.4.0.0 ../../lib/libopencv_hdf.so.4.0.0 ../../lib/libopencv_hfs.so.4.0
.0 ../../lib/libopencv_img_hash.so.4.0.0 ../../lib/libopencv_line_descriptor.so.
4.0.0 ../../lib/libopencv_reg.so.4.0.0 ../../lib/libopencv_rgbd.so.4.0.0 ../../l
ib/libopencv_saliency.so.4.0.0 ../../lib/libopencv_sfm.so.4.0.0 ../../lib/libope
ncv_stereo.so.4.0.0 ../../lib/libopencv_structured_light.so.4.0.0 ../../lib/libo
pencv_superres.so.4.0.0 ../../lib/libopencv_surface_matching.so.4.0.0 ../../lib/
libopencv_tracking.so.4.0.0 ../../lib/libopencv_videostab.so.4.0.0 ../../lib/lib
opencv_xfeatures2d.so.4.0.0 ../../lib/libopencv_xobjdetect.so.4.0.0 ../../lib/li
bopencv_xphoto.so.4.0.0 ../../lib/libopencv_shape.so.4.0.0 ../../lib/libopencv_p
hase_unwrapping.so.4.0.0 ../../lib/libopencv_optflow.so.4.0.0 ../../lib/libopenc
v_ximgproc.so.4.0.0 ../../lib/libopencv_datasets.so.4.0.0 ../../lib/libopencv_pl
ot.so.4.0.0 ../../lib/libopencv_text.so.4.0.0 ../../lib/libopencv_ml.so.4.0.0 ..
/../lib/libopencv_dnn.so.4.0.0 ../../lib/libopencv_video.so.4.0.0 ../../lib/libo
pencv_photo.so.4.0.0 ../../lib/libopencv_objdetect.so.4.0.0 ../../lib/libopencv_
calib3d.so.4.0.0 ../../lib/libopencv_features2d.so.4.0.0 ../../lib/libopencv_fla
nn.so.4.0.0 ../../lib/libopencv_highgui.so.4.0.0 ../../lib/libopencv_videoio.so.
4.0.0 ../../lib/libopencv_imgcodecs.so.4.0.0 ../../lib/libopencv_imgproc.so.4.0.
0 ../../lib/libopencv_core.so.4.0.0 /usr/lib/arm-linux-gnueabihf/libGLU.so

After which
fred@pi:~/opencv/opencv/build $ make

Completes successfully. But there is still the installation. Check that the environment variable CMAKE_INSTALL_PREFIX is still set:

fred@pi:~/opencv/opencv/build $ echo $CMAKE_INSTALL_PREFIX
/usr/local
fred@pi:~/opencv/opencv/build $ sudo make install

And a quick test run:
fred@pi:~/opencv/opencv/build $ source /home/fred/opencv/OpenCV-4.0-py3/bin/activate

(OpenCV-4.0-py3) fred@pi:~/opencv/opencv/build $ python
Python 3.7.3 (default, Jan 22 2021, 20:04:44)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> print(cv2.__version__)
4.0.0
>>> quit()

And there you have it, Raspberry Pi 1 Model B Rev 2, the littlest computer that could OpenCV.

Wednesday, 4 August 2021

The Little Computer that Could: Motion Detection with Outdoor Camera, Tensorflow and Raspberry Pi Part 3 of 3

 

"I have no spur to prick the sides of my intent, but only vaulting ambition, which o'erleaps itself and falls on the other." - Macbeth

Motion detection with an outdoor camera can be problematic, with wind, shadow and direct sunlight causing multiple false alarms. Suppressing these false alarms can result in genuine alarms being missed. One way is to filter all the alarms through an object recognition program.

The motion detection program of Part 1 writes all alarm image frames to the ./alarms/ directory. A modified object recognition program from Part 2 then inspects each alarm frame and, if an object is recognized, writes it to another directory, ./alarm/. The alarm filter program is called tiny-yolo_alarmfilter.py. This seemed to work well to start with, but as with any new project, time will tell. For starters it seemed to think my dog is a cow, probably because she was sniffing at the grass.
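
Conceptually the filter is just a loop over the saved frames; here is a simplified sketch of the idea, where detect() stands in for whatever recognizer is used and the directory names follow the ones above:

# Sketch of the alarm filter: keep only frames where an object is recognized
import os
import shutil

def detect(path):
    # placeholder: run the object recognizer on this frame and
    # return the list of recognized object labels
    return []

for name in sorted(os.listdir('./alarms')):
    frame = os.path.join('./alarms', name)
    labels = detect(frame)
    if labels:                                # person, dog, cow, ...
        print('Alarm:', name, labels)
        shutil.copy(frame, os.path.join('./alarm', name))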

tiny YOLO mislabelling dog as cow

Now my dog does have a temper, but calling her a cow is a little harsh. Even so, both dogs and cows qualify (in my opinion) as valid alarm triggers, so it is not a show-stopper for now. It is particularly good at excluding changes in lighting and shadows. Proof positive that the little Raspberry Pi could, and did:

Recognizing passing vehicles 


Ah, vaulting ambition ... if the Pi can recognize an object, maybe it can also track it. Most alarms are quite passive, except for the loud siren, which tends to annoy the neighbors and should only be used as a last resort. A pan/tilt camera like the Trendnet TV-IP422WN that visibly tracks the object is a lot more menacing, and should scare off the more timid intruders like birds and squirrels. But that is another blog post.

A tensorflow model looks promising, as it can potentially be speeded up with custom hardware like the Coral USB accelerator, which costs about the same as a Raspberry Pi 4.

Coral USB Accelerator

Installing tensorflow proved to be a bit hit and miss, but Katsuya Hyodo's github readme worked for me. This time I started with a squeaky clean version of Raspbian, 2021-05-07-raspios-buster-armhf.img.

Remember to uninstall the dud versions:

# pip3 uninstall tensorflow
# apt-get install -y libhdf5-dev libc-ares-dev libeigen3-dev gcc gfortran libgfortran5 libatlas3-base libatlas-base-dev libopenblas-dev libopenblas-base libblas-dev liblapack-dev cython3 openmpi-bin libopenmpi-dev libatlas-base-dev python3-dev
# pip3 install pip --upgrade
# pip3 install keras_applications==1.0.8 --no-deps
# pip3 install keras_preprocessing==1.1.0 --no-deps
# pip3 install h5py==2.9.0
# pip3 install pybind11
# pip3 install -U --user six wheel mock
# wget "https://raw.githubusercontent.com/PINTO0309/Tensorflow-bin/master/tensorflow-1.15.0-cp37-cp37m-linux_armv7l_download.sh"
# sh ./tensorflow-1.15.0-cp37-cp37m-linux_armv7l_download.sh
# pip3 install tensorflow-1.15.0-cp37-cp37m-linux_armv7l.whl

A quick test:
# python3
Python 3.7.3 (default, Jan 22 2021, 20:04:44)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
>>> tensorflow.__version__
'1.15.0'
>>>

For object detection I used Edje Electronics' guide. I had already installed the packages tensorflow, libatlas-base-dev, libhdf5-dev and libhdf5-serial-dev previously.

# apt-get install libjasper-dev
# apt-get install libqtgui4
# apt-get install libqt4-test

I used version 4.4.0.46 because 4.1.0.25 could not be found
# pip3 install opencv-contrib-python==4.4.0.46
# apt-get install protobuf-compiler
# pip install --user Cython
# pip install --user contextlib2
# pip install --user pillow
# pip install --user lxml
# pip install --user matplotlib

Got the tensorflow models
# git clone https://github.com/tensorflow/models.git

Then SSD_Lite:
# wget http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
# tar -xzvf ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz

The original tensorflow program, ./models/research/object_detection/TensorFlow.py, got its images from the default Raspberry Pi camera, so I made a simpler version, uTensorFlow.py, that takes one frame (./341front.jpg in 640x480) at a time.

Note SSD_Lite misclassified a dog as sheep

The processing time was over 40s on my Raspberry Pi 3. My Coral USB accelerator will take more than a month to arrive, and it needs Tensorflow Lite, for which Edje Electronics has a very promising Tensorflow Lite repository, so why not try it now. Notice that this time the commands are run as a sudoer user and not as root, which I am told is the proper way to do things:
$ sudo pip3 install virtualenv
$ python3 -m venv tflite1-env
$ source tflite1-env/bin/activate

Then comes a whopper of a download:
$ git clone https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi.git
$ bash get_pi_requirements.sh
Notice with Tensorflow Lite there is no Tensorflow module:
$ python3
Python 3.7.3 (default, Jan 22 2021, 20:04:44)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'tensorflow'
>>>

$ wget https://storage.googleapis.com/download.tensorflow.org/models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip
$ unzip coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip
Archive:  coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip
  inflating: detect.tflite
  inflating: labelmap.txt

The original program is TFLite_detection_webcam.py, but I need a version that works on individual image files rather than a video stream, as in tiny-yolo_alarmfilter.py. The program is tflite_alarmfilter.py, which took no time at all to write in python. You run it thus:

$ python3 tflite_alarmfilter.py --resolution='640x480' --modeldir=.
Processing alarm file 341front_20210804_094635_original.png
Processing alarm file 341front_20210804_094649_original.png
Processing alarm file 341front_20210804_094648_original.png

The framerate is now an astounding 2fps even without the Coral USB Accelerator. The detection also seems slightly better, with a more accurate bounding box and without multiple boxes nested in the same object.

Tensorflow Lite SSDLite MobileNet v2: note accurate bounding box


tiny-YOLO: multiple bounding boxes over same object

I had originally planned to outsource the alarm file filtering to an x86 CPU, but the results with Tensorflow Lite made everything possible on the same Raspberry Pi 3, truly the little CPU that could.

Happy Trails.

Tuesday, 27 July 2021

The Little Computer that Could: YOLO Object Recognition and Raspberry Pi Part 2 of 3

 

You Only Look Once

One cure for motion detector false alarms in Part 1 is object recognition; the alarm is only raised if the intruder is recognized. YOLO is fast, trendy (deep neural net!) and will run on a Raspberry Pi. arunponnusamy comes straight to the point.

I already had numpy and openCV from Part 1, so it is just

# wget https://pjreddie.com/media/files/yolov3.weights

Now this is a massive file and I needed a copper Ethernet connection to the Internet to download it. Next you download the zip file from arunponnusamy for the files yolo_opencv.py, yolov3.cfg, yolov3.txt and dog.jpg. Do not use wget on the github pages, or you will get HTML instead of the raw files. And simply run it:

# python3 yolo.py --image dog.jpg --config yolov3.cfg --weights yolov3.weights --classes yolov3.txt

If you get 'OutOfMemoryError' you will probably need to reboot your Pi.

The output should be:



I substituted a frame from my IP Camera, and sure enough it recognized a person correctly:

# python3 yolo.py --image alarm.png --config yolov3.cfg --weights yolov3.weights --classes yolov3.txt



Output of yolov3.weights model

For convenience, see my github repository for copies of arunponnusamy's code. It took my Raspberry Pi 3 some 50 seconds to process the image, which is a little long, but would probably keep up with the motion detector triggers of a few frames per hour. But arunponnusamy's reference is Joseph Redmon, who also mentions a tiny version of the pre-trained neural net model. You will also need to download his yolov3-tiny.cfg, which I got by cloning his github:
git clone https://github.com/pjreddie/darknet
Joseph's executable is a C program, but the input files are the same so I simply used his tiny model files with arunponnusamy's python script:

$ python3 yolo.py --image alarm.png --config yolov3-tiny.cfg --weights yolov3-tiny.weights --classes yolov3.txt

Note I also reused arunponnusamy's yolov3.txt which is the dictionary of object names. Joseph's dictionary seems to be baked into his executable file. The command finishes in 5 seconds, which is 10 times better. The output bounding box is different, but it did recognize the object as a person.

Output of yolov3-tiny.weights model

The idea is to use YOLO to filter out the false alarms from the motion detection program of Part 1. In Part 3, we will look at tiny-YOLO, Tensorflow, and SSD_Lite. The Raspberry Pi is truly the little computer that could.

Happy Trails.

The Little Computer that Could: motion detection with OpenCV and Raspberry Pi Part 1 of 3



One of the silver linings of this pandemic lockdown is that you get round to doing one or two things you always meant to do. For me it is vision systems. I happened to be testing an LCD panel using my Raspberry Pi 3. Like that little engine, I think I can ...

As it neared the top of the grade, which had so discouraged the larger engines, it went more slowly. However, it still kept saying, "I—think—I—can, I—think—I—can." - The Little Engine That Could


OpenCV seems as good a starting point as any. My Raspberry Pi 3 did not have the most recent image, so your mileage may vary.

# cat /proc/version
Linux version 4.19.66-v7+ (dom@buildbot) (gcc version 4.9.3 (crosstool-NG crosstool-ng-1.22.0-88-g8460611)) #1253 SMP Thu Aug 15 11:49:46 BST 2019

# cat /etc/debian_version
9.13

Using jeremymorgan's instructions, 

# apt-get update
# apt-get upgrade

After a really long wait, it finished. If like me your downloads get interrupted you can resume using:

# dpkg --configure -a

I also had to tweak jeremymorgan's instructions a little:

# wget https://bootstrap.pypa.io/pip/3.5/get-pip.py
# python3 get-pip.py

But the next command failed:
# pip install opencv-contrib-python
-su: /usr/bin/pip: No such file or directory

Wait, I just installed pip!
# whereis pip
pip: /etc/pip.conf /usr/local/bin/pip /usr/local/bin/pip2.7 /usr/local/bin/pip3.5 /usr/share/man/man1/pip.1.gz

Ah, my python install usually seeks its commands at /usr/bin, so
# ln -s /usr/local/bin/pip /usr/bin/pip

Now it completes:
# pip install opencv-contrib-python
Requirement already satisfied: numpy>=1.12.1 in /usr/lib/python3/dist-packages (from opencv-contrib-python) (1.12.1)
Installing collected packages: opencv-contrib-python
Successfully installed opencv-contrib-python-4.1.1.26

I'll be needing something to play video, so:
# apt-get install mplayer

OpenCV seems to speak native C++, but python seems like a good idea to me, so:
# pip3 install opencv-python

The Raspberry Pi has a built-in camera interface and all the OpenCV worked examples seem to use it, but I did not have a camera handy. What I do have is my old Trendnet TV-IP422WN IP Camera. It works with an ancient, rickety and totally unsafe version of Microsoft Internet Explorer (ActiveX - eeek!), but with a little bit of luck, and help from Aswinth Raj, I got it to work in plain http:

http://192.168.10.30/cgi/mjpg/mjpg.cgi

2021-08-04 update: The Trendnet TV-IP422WN also supports rtsp:

rtsp://192.168.10.30/mpeg4

The program to test your non-Raspberry Pi video camera is here.
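
If you want a quick test of your own, OpenCV's VideoCapture will take the http or rtsp URL directly; a minimal sketch using the Trendnet URL above:

import cv2

# the rtsp URL works here too, and is usually smoother
cap = cv2.VideoCapture('http://192.168.10.30/cgi/mjpg/mjpg.cgi')
ret, frame = cap.read()
if ret:
    cv2.imwrite('test_frame.jpg', frame)   # grab one frame to prove the link works
    print('Got a %dx%d frame' % (frame.shape[1], frame.shape[0]))
else:
    print('Could not read from camera')
cap.release()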

Now Aswinth Raj managed to stream video from his DVR with OpenCV using rtsp. It is not http, but worth a try. His code uses cvui, so in it goes:

# pip3 install cvui

But when you first run motion_capture.py, there are errors. You need:
# apt-get install libatlas-base-dev

Then the python module numpy throws an exception. For some mysterious reason this made it work:
# apt-get remove python-numpy
# apt-get remove python3-numpy
# pip uninstall numpy
# pip3 uninstall numpy
# pip3 install numpy
# apt-get autoremove

This caused a crash and CPU reboot, after which
# dpkg --configure -a
# apt-get autoremove
# pip3 install numpy

And now, work it does. Here is my http version.

Next is motion detection, by automaticaddison. He has working python code for two methods: absolute difference and background subtraction. Both work without fuss; I only modified them for my Trendnet IP camera instead of the default Raspberry Pi camera.

Note the dog is also detected but the program chose the biggest contour


Now by any reasonable measure either program would solve the problem of motion detection using Raspberry Pi and OpenCV. True, the frame rate was not brilliant at 1 fps but it would detect most intrusions unless the intruder flat-out sprinted across the camera field of view.

My problem was that I had mounted my IP camera to look out on the front lawn, pointing northeast. When run 24/7, the morning sun and the evening shadow of the house, coupled with wind blowing through the vegetation, caused almost continuous triggering. It did not matter whether absolute difference or background subtraction was used. When motion_capture.py was modified to save the triggering images, it used up a few gigabytes every day.

One way to reduce false alarms is to mask out problem regions. I used the cruder approach of image cropping, which does work, but over 24 hours essentially my entire background still changes a few times a day.

Adjusting the image threshold, including adaptive thresholding, made little difference. Next I took Adrian Rosebrock's weighted average (cv2.accumulateWeighted) of a few consecutive frames, and while this helped, the false alarms were still annoyingly frequent.
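
The weighted average itself is only a couple of lines; a minimal sketch of the idea, with the alpha value chosen arbitrarily:

import cv2

alpha = 0.1          # smaller alpha = slower-moving background average

def motion_diff(gray_frame, avg):
    # returns the difference image and the updated running average
    if avg is None:
        avg = gray_frame.astype('float')
    cv2.accumulateWeighted(gray_frame, avg, alpha)
    diff = cv2.absdiff(gray_frame, cv2.convertScaleAbs(avg))
    return diff, avg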

The next thing to try was to only raise an alarm on, say, 5 consecutive triggers. This caused some intrusions to be missed, particularly if the intruder moved quickly, and there were still too many false alarms. A similarly crude method was to put upper and lower limits on the size of the detected bounding rectangle (cv2.boundingRect).
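
Chained together, those last two filters amount to a few lines around the contour loop; a rough sketch, with the thresholds purely illustrative:

import cv2

MIN_AREA, MAX_AREA = 500, 50000     # bounding rectangle size limits (illustrative)
CONSECUTIVE = 5                     # triggers needed in a row before alarming

def frame_triggered(thresh_image):
    # True if any contour's bounding rectangle falls within the size limits
    contours, _ = cv2.findContours(thresh_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if MIN_AREA < w * h < MAX_AREA:
            return True
    return False

# in the main loop: count consecutive triggers, reset on a quiet frame
# hits = hits + 1 if frame_triggered(thresh) else 0
# alarm = hits >= CONSECUTIVE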

I ended up using all the methods one after the other, and got maybe a few false alarms per hour. It sort of works, but is a little error-prone (ie it misses intrusions). The program is background_subtraction.py. One possible improvement is Ivan Kudriavtsev's take on motion detection. The installation of Numba looks a bit daunting, so I left it for another time.

In Part 2, we will investigate if object recognition will help.

There you have it: the little Raspberry Pi that could do motion detection using OpenCV. Happy Trails.

Tuesday, 22 June 2021

BB View 7" LCD on Debian Buster 10.3: Abandonware revisited

 

BB View LCD by element14. Note: the FPC cable when correctly inserted may not show printing topside



The BB View LCD came out about mid-2013, and was one of the earliest LCD panels for the beaglebone. It came with a "cape", ie a beaglebone interface board. I bought mine in December 2014 and could not get it to work. No doubt it worked for many other people: it came with kernel patches and source code for Angstrom, the TI SDK as well as Debian. It is still listed for sale in element14 and Mouser, but recently has become "unavailable".

Currently, there is a worldwide semiconductor shortage, and component leadtimes are now a few months. The temptation to press the BB View 7" LCD back into service became irresistible. I think my problem was that I used the kernel patches, which were from mid-2013, with a later, 2014 version of Debian. The beaglebone black could recognize the cape and read its eeprom, but the LCD stayed blank, and the backlights kept blinking in perfect unison with the beaglebone CPU LED.

While no walk in the park, I thought it should be possible to get it working. After all, I have the source code and it should only be a matter of time. And given the pandemic, I have lockdown time: four weeks of it, to be precise. Hopefully I will learn a little about kernel display drivers too.

First you need a beefy 5V power supply. I used a 5V@3A one. While the average current draw of the BB View LCD is not that high at 240mA, there are regular nasty spikes; with a lesser supply I got the occasional CPU crash along with sdcard errors.


Beaglebone Black Serial Debug Header: Pin 1 is on the right

You also need to connect to the serial debug port. Any USB serial TTL adapter will do. The pinout is:


Beaglebone    Pin Description      USB Serial TTL Adapter
1                      Gnd                         Gnd
4                      TX                           RX
5                      RX                           TX

The Beaglebone Black manual specified an FTDI USB adapter, but I used a cheap CH340, which worked well enough. The baud rate is 115200, 8-bits and 1-stop with no hardware flow control.

FTDI Serial Debug cable. Click on picture for datasheet


The terminal program I use is minicom, set to 115200 baud, 8-bits, 1-stop and without hardware flow control. If you have the proper FTDI cable, you should use hardware flow control.

This may be a no-brainer to some, but an LCD's size is measured at its diagonal. The BB View LCD comes in 2 software-incompatible sizes, 4.3" and 7". For the unwary, a 7" LCD measures close to 4.3" on one side.

Also, the FPC cable is inserted with the metalled side facing down on both ends, and may not look like the picture on top. Each matching connector has a little lever which you pull up. Do not force the FPC cable.

The Debian image I used was the latest as of this writing, which is bone-debian-10.3-iot-armhf-2020-04-06-4gb.img.xz

To transfer the image to sdcard, I did:

# xzcat bone-debian-10.3-iot-armhf-2020-04-06-4gb.img.xz | dd of=/dev/sdb

Do take care to ensure /dev/sdb is the sdcard else you might end up wiping the drive on your laptop/desktop.

The changes to the sdcard /boot/uEnv.txt boot config file are:

dtb_overlay=/lib/firmware/BB-VIEW-LCD7-01-00A0.dtbo

disable_uboot_overlay_video=1

disable_uboot_overlay_audio=1

You can download the uEnv.txt file here. Notice that it specifies a binary device tree overlay file, BB-VIEW-LCD7-01-00A0.dtbo. The latest and greatest bone-debian-10.3-iot-armhf-2020-04-06-4gb.img.xz happened to have it pre-compiled. I have not seen it in earlier debian images, so this is indeed a stroke of luck. I have uploaded a copy here for your convenience.

By the way it also has BB-BONE-LCD7-01-00A0.dtbo which also works except for an annoyingly dim backlight setting and the inability to adjust it.

These dtbo files are usually something you get by recompiling the BB View Debian source files. However, the manufacturer-supplied BB View Debian source file, BB-VIEW-LCD7-01-00A0.dts, did not work for me when compiled to a dtbo. On close examination it is missing the pin definitions for the 8 upper LCD panel data pins, lcd_data16 - lcd_data23.

Typically the beaglebone black boots by default from the eMMC, so you need to hold down the 'Boot' button on the beagleboard before you power on. The BB View cape gets in the way, but it helpfully provides a second 'Boot' button on the cape. 

Given the power demands of the BB View, I found it more reliable not to boot from the power switch. Rather there is a 'Power' button on the beaglebone black which you can use. 

The LCD displayed the login screen for tty1, and you can log in by plugging a USB keyboard into the USB host (Type A) connector.



Working BB View 7" LCD with Debian Buster 10.3

To display picture files from the command prompt I used fbi with a little help from Drew Fustini:

root@beaglebone:~# apt-get update
root@beaglebone:~# apt-get install fbi
# wget https://kernel.org/theme/images/logos/tux.png
# fbi -d /dev/fb0 -T 1 -a tux.png

For video I used mplayer:
# apt-get install mplayer

This worked for me as there is no audio when using the BB View LCD:

# export SDL_NOMOUSE=1
# mplayer -nolirc -vo sdl:driver=fbcon  -ao null  test.mpg

My test mpg file happened to have a resolution of 480x360. The BB View 7" LCD is 800x600 but mplayer could not quite get the scaling correct. When scaled manually, this works:

# mplayer -nolirc -vo sdl:driver=fbcon  -ao null -x 800 -y 533 test.mpg

mplayer 

Debian 10.3 did not seem to recognise the touchscreen, but a little peek at the decompiled dtbo file mentions a tscadc kernel module:

        fragment@5 {
                target = <&tscadc>;
                __overlay__ {
                        status = "okay";
                        tsc {
                                ti,wires = <4>;
                                ti,x-plate-resistance = <200>;
                                ti,coordinate-readouts = <5>;
                                ti,wire-config = <0x00 0x11 0x22 0x33>;
                                ti,charge-delay = <0x400>;
                        };
                        adc {
                                ti,adc-channels = <4 5 6 7>;
                                ti,chan-step-opendelay = <0x098 0x3ffff 0x098 0x0>;
                                ti,chan-step-sampledelay = <0xff 0x0 0xf 0x0>;
                                ti,chan-step-avg = <16 2 4 8>;
                        };
                };
        };

A search of the dmesg log produced:
[Thu Jun 24 10:21:40 2021] input: ti-tsc as /devices/platform/ocp/44e0d000.tscadc/TI-am335x-tsc.0.auto/input/input0

Which led me to
# ls -l  /sys/devices/platform/ocp/44e0d000.tscadc/TI-am335x-tsc.0.auto/input/input0/event0

A quick check shows that tapping the LCD does indeed produce a response:
# hexdump -C -v /dev/input/event0
...
00000780  81 9f d5 60 e6 e6 06 00  03 00 01 00 2c 08 00 00  |...`........,...|
00000790  81 9f d5 60 e6 e6 06 00  03 00 18 00 92 03 00 00  |...`............|
000007a0  81 9f d5 60 e6 e6 06 00  00 00 00 00 00 00 00 00  |...`............|
000007b0  81 9f d5 60 1f 03 09 00  01 00 4a 01 00 00 00 00  |...`......J.....|
000007c0  81 9f d5 60 1f 03 09 00  03 00 18 00 00 00 00 00  |...`............|
000007d0  81 9f d5 60 1f 03 09 00  00 00 00 00 00 00 00 00  |...`............|
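
Each 16-byte record is a Linux input_event (a timeval followed by type, code and value); a minimal Python sketch to decode the taps, assuming the 32-bit ARM struct layout on the beaglebone:

# Decode touchscreen events from /dev/input/event0 (32-bit ARM layout assumed)
import struct

EVENT_FORMAT = 'llHHi'                       # tv_sec, tv_usec, type, code, value
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)   # 16 bytes on the beaglebone

with open('/dev/input/event0', 'rb') as f:
    while True:
        data = f.read(EVENT_SIZE)
        if len(data) < EVENT_SIZE:
            break
        tv_sec, tv_usec, ev_type, code, value = struct.unpack(EVENT_FORMAT, data)
        if ev_type == 3:                     # EV_ABS: absolute X, Y and pressure
            print('abs code %d value %d' % (code, value))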

Another bit of electronics rescued from the scrap heap cannot be that bad.

Happy Trails.

PS
Why not use the element14 files and go with the BB View manual? Believe me, I tried. For 14 days. I started off with the oldest Debian images I could find, coincidentally the same Debian 7.5 of 2014-05-14 that I used back then. And got the same result. And since there were 3 different patches, one each for Angstrom, the TI SDK and Debian, I tried them all, but none worked. The Angstrom version was older, from 2013-06-20, and worked slightly better: the LCD backlight did not blink in concert with the CPU light and was quite controllable on its own. But in all cases the LCD stayed resolutely blank.

There is a wealth of information on the subject, not to mention source code. But there is no schematic, so reading it would be harder. Two weeks later, I could rebuild the latest TI SDK, or even the mainline Linux kernel using Linaro, but neither version worked with the BB View LCD. The problem, as mentioned earlier, was the device tree source code, which I suspect was incomplete for the 7" LCD. The 4.3" version probably worked, but I did not test this.

It was starting to become clear why there was such heavy development over so many years: there is precious little standardization in ARM systems on how to interface to external boards. Each ARM board is basically unique, and back in 2013 this was hardcoded into the kernel source.

At some point each variation was defined in a 'device tree' that was compiled separately from the kernel. In practice this simply means a separate sub-directory with its own Makefile, all under the same master ARM build.

Eventually, with a little encouragement from Linus Torvalds, the device tree code was shifted out of the kernel and into the ARM bootloader, 'Das U-Boot'. This is where the Beaglebone serial debug port comes in handy: many of the BB View LCD problems are reported only as system console messages, ie to the serial debug port, and may not even show up in the kernel log.

Thus, unlike the manufacturer, Debian admirably maintained support for the BB View. As usual, the documentation became hopelessly out of date, as befits abandonware. 

After 2 weeks, Lady Luck took pity on me. I always keep a late version of Beaglebone Debian around, in this case Debian Buster 10.3 of 2020-04-06. The micro-sdcards are too tiny to label, and I swapped one into the BB View LCD system by mistake. It complained that it was missing the file BB-VIEW-LCD7-01-00A0.dtbo, which just happened to be in the eMMC partition (as /dev/mmcblk1p1). One small change in the boot config file and it worked! Took all of 5 minutes.

Which leaves me 2 more weeks of lockdown time to fill. Maybe it's time to watch 'Das Boot' again ...

Das Boot: Wolfgang Petersen's 1981 classic