Personal Thoughts and Notes: Making Useful Things for My VR Headset and Raspberry Pi

Michael McAnally
6 min read · Sep 28, 2022

--

Personal notes: While doing research for my last article, using a Raspberry Pi for serving Metaverse VR pages to an Oculus Quest 2 headset, I discovered some very interesting things. I would say I am still researching after close to two months or more. I'd love to write an article now; however, I feel it is too early. I may do it anyway, just because I am driven to write about what I have learned. It's just a part of me I can't turn off! Here are some of the things I have explored and discovered over the last couple of months.

Background: The original intention was to find things that would extend VR through the Raspberry Pi hosted on a personal home network, as opposed to, or in addition to, the cloud, providing for privacy of data and access. Things like AI and IoT capabilities for the home, as in edge computing, a fast-growing tech market segment.

What I discovered the hard way:

A lot of incompatibilities between the various 64-bit and 32-bit versions of the Raspberry Pi operating systems. For example, I wanted to use Deep Speech, a standalone, non-cloud-based speech recognition package, for recognizing voice commands sent from my VR headset to the Raspberry Pi. I found this more challenging than I first thought. It seems the only build of Deep Speech I could use effectively was compiled for ARMv7 and supported only the 32-bit Buster (legacy) OS on the Pi.
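For anyone in a similar spot, a quick way to confirm what a given Pi is actually running before chasing prebuilt packages is to ask Python itself. This is just a generic check, not anything specific to Deep Speech:

```python
# Report the architecture and OS details of the running system, since prebuilt
# wheels (e.g. for speech recognition packages) are often ARMv7/32-bit only.
import platform

print("machine:", platform.machine())                # e.g. 'armv7l' (32-bit) or 'aarch64' (64-bit)
print("python bitness:", platform.architecture()[0]) # bitness of the running interpreter
print("platform:", platform.platform())              # kernel and distro string
```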

That ARMv7-only constraint went counter to the 64-bit Ubuntu OS I had chosen in the original VR-Pi article. I also found out that most developers of Deep Speech had moved to another GitHub repository after Mozilla was forced to downsize and scale back development of Deep Speech. That new repository was Coqui STT. Not a great or easily memorable name, I would say, but be that as it may, this looked like the way forward. I also found a great resource in something called spchcat, recently written by Pete Warden.

I was able to increase the accuracy of the offline speech recognition on the Pi using a quad USB microphone plugged into the Pi and a much larger TensorFlow Lite model file for English recognition. The Pi was able to support this larger file because of the 8 GB model 4 I had purchased. I tested it in a noisy environment outside, with city street, people, and bus noises, and found it had problems recognizing words. However, when it was tested in a much quieter home environment it worked well on the Pi. I was able to test from many feet away from the microphone and it seemed to work well as long as my voice was not too soft-spoken.

OK, so now I had the beginnings of the code base to send voice commands to the Pi while wearing my VR headset, as long as I was in the same room as the Pi's microphone setup. But what I needed now was a way to programmatically tie into spchcat, or some other code, to execute commands on the Pi. It would be even nicer if the microphone on the VR headset could be used to send commands to the Pi over Wi-Fi while it was in a different room of the home, or even over the internet from a different physical location. Hmm, that's going to take some more thought and actual programming work to solve, as well as to tie the recognized speech commands into an execution capability on the Pi itself.
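To make that concrete, here is a minimal sketch of the kind of glue I have in mind, assuming spchcat prints its transcriptions to standard output while listening on the default microphone. The command phrases and the actions they trigger are purely placeholder examples, not anything I have actually wired up:

```python
# Sketch: pipe spchcat's transcription output into simple keyword-triggered
# actions on the Pi. Assumes spchcat is installed and writes recognized text
# to stdout; the phrases and actions below are hypothetical placeholders.
import subprocess

COMMANDS = {
    "lights on": ["echo", "turning lights on"],    # placeholder action
    "lights off": ["echo", "turning lights off"],  # placeholder action
}

proc = subprocess.Popen(["spchcat"], stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    heard = line.strip().lower()
    for phrase, action in COMMANDS.items():
        if phrase in heard:
            subprocess.run(action)  # swap in real home-automation calls here
```

The same loop could just as easily sit behind a small web endpoint, which is roughly what sending commands from the headset over Wi-Fi would require.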

Around this time I also started investigating OpenCV, the open source library of AI vision algorithms, and I found a very resourceful website in Q-Engineering. At this time I also noticed the legacy issues with the original Pi camera projects and the changes in the camera support library between the older Buster (32-bit) and newer Bullseye (64-bit) releases. This would affect usage of some of the OpenCV code bases.

So as a result, I finally decided to regress my Metaverse setup to a 32-bit Debian Buster legacy version of the Pi OS. I regretted doing this, but it seemed the only way I could effectively combine all the parts I needed into a single Pi software architecture that would presently allow for all the things I wanted to accomplish. Perhaps in the future I can return to a 64-bit OS.

Around this time I also discovered this Pi-Hosted video, which shows what I think is a useful approach for running Docker containers and managing them with Portainer on the same Pi. This potentially opens up a world of open source software tools which could be integrated with the VR headset simply through web pages. It also allows for running a variety of open source Home Automation (HA) software.
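For reference, here is a minimal sketch of standing Portainer up as a container, written with the Docker SDK for Python rather than the usual docker run one-liner. The port and volume choices are just the conventional defaults, not anything the video prescribes:

```python
# Sketch: start Portainer CE as a container using the Docker SDK for Python
# (pip install docker). Equivalent to the usual `docker run` invocation.
import docker

client = docker.from_env()
client.containers.run(
    "portainer/portainer-ce:latest",
    name="portainer",
    detach=True,
    restart_policy={"Name": "always"},
    ports={"9000/tcp": 9000},  # Portainer web UI
    volumes={
        "/var/run/docker.sock": {"bind": "/var/run/docker.sock", "mode": "rw"},
        "portainer_data": {"bind": "/data", "mode": "rw"},
    },
)
print("Portainer should be reachable at http://<pi-address>:9000")
```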

I have since installed Docker, Portainer, Homer, and Watchtower. About this time I noticed more disk access on the SSD; the blue activity light was solid more often than not. I'll need to investigate this further in the near future. Perhaps some of the Docker containers are accessing the drive often or continuously?
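One way I could check is to poll Docker's per-container block I/O counters, the same numbers docker stats reports. A rough sketch with the Docker SDK for Python:

```python
# Sketch: list running containers with their cumulative block I/O, to spot
# which one is hitting the SSD. Uses the Docker SDK for Python (pip install docker).
import docker

client = docker.from_env()
for container in client.containers.list():
    stats = container.stats(stream=False)  # one-shot stats snapshot
    entries = stats.get("blkio_stats", {}).get("io_service_bytes_recursive") or []
    total_bytes = sum(e.get("value", 0) for e in entries)
    print(f"{container.name}: {total_bytes / 1024 / 1024:.1f} MiB of block I/O since start")
```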

In addition, around this time (the second or third week of February 2022), while I was also dealing with a jury summons and subsequent selection for jury duty, I came upon an original idea for helping blind or sight-impaired individuals using the OpenCV algorithms I had been investigating. Originally I had been thinking about using them for HA, or even, in the future, for controlling a home robot. I was also thinking about privacy, and how China is monitoring its population using these algorithms, as probably is the US, for recognition of faces and such.

I was trying to think of applications when suddenly I thought: what can vision recognition software do for people who can't see? It was a natural, logical extension of my thinking around the Touch Voice apps for the speech impaired, and helping people with disabilities.

The basic first use case: a visually impaired individual wearing a smartphone with an AI NPU (Neural Processing Unit), TPU (Tensor Processing Unit), A15 Bionic chip (Apple), or Snapdragon 888 would enter a room and speak the command "Find chair," and the app would respond via a Bluetooth earbud, "There are three chairs in the room, two at 12 o'clock and one at 3 o'clock."

It might even be able to identify how many feet away the chairs were, whether there are any obstructions between the person and the chair, and whether someone is already sitting in it. It could also identify how many people are in the room, if any. The visually impaired person would then walk toward the chair and feel around to sit down.
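Sketching the idea a little further: given bounding boxes from any off-the-shelf object detector (MobileNet SSD, YOLO, or whatever the phone's NPU runs), turning them into that kind of spoken summary is mostly geometry. Everything below (the frame width, the camera field of view, and the example detections) is a hypothetical stand-in, not a working app:

```python
# Sketch: turn object-detector output into a "two chairs at 12 o'clock" style
# summary. The detections are hard-coded stand-ins for what a real detector
# would return; the frame width and field of view are assumed values.
FRAME_WIDTH = 640        # pixels, assumed camera resolution
HORIZONTAL_FOV = 62.0    # degrees, assumed camera field of view

def clock_direction(x_center: float) -> str:
    """Map a horizontal pixel position to a rough clock direction."""
    angle = (x_center / FRAME_WIDTH - 0.5) * HORIZONTAL_FOV  # negative = left of center
    if angle < -15:
        return "11 o'clock"
    if angle > 15:
        return "1 o'clock"
    return "12 o'clock"

def summarize(detections, wanted="chair"):
    hits = [d for d in detections if d["label"] == wanted]
    if not hits:
        return f"no {wanted}s found"
    positions = [clock_direction((d["x1"] + d["x2"]) / 2) for d in hits]
    return f"{len(hits)} {wanted}(s) found: " + ", ".join(positions)

# Hypothetical detections: label plus bounding-box x extents in pixels.
example = [
    {"label": "chair", "x1": 250, "x2": 350},
    {"label": "chair", "x1": 20, "x2": 120},
    {"label": "person", "x1": 500, "x2": 600},
]
print(summarize(example))  # "2 chair(s) found: 12 o'clock, 11 o'clock"
```

Estimating how many feet away an object is would additionally take a depth source (stereo, LiDAR, or a monocular depth model), which is where the phone hardware mentioned above starts to matter.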

Many more use cases could be identified to help the visually impaired, such as finding a restroom, or reading signs and transcribing them into audio. The ideas are limited only by imagination and by testing of applicability and capability!

I shared this idea with a number of friends, including an entrepreneur named Jim who was on jury duty with me, and everyone thought it was a good idea, if it could be done.

Unfortunately, after looking more into this, I clearly realized that I don't have the resources, energy, and mental commitment necessary for this idea to proceed.

Perhaps the best use of my time and energy now would be to move the Funbit64 server to the Pi to save on monthly hosting costs; the downsides are home network security and bandwidth utilization. Another idea would be a webpage camera view of downtown, with advertising to make money. Finally, there is the idea of a 3D scanner to create 3D models for printing.

It may be that I’m going to abandon the idea of tying VR into the Pi. I feel a little bad about that. It turned out to be more difficult than I originally thought. I need to think on this a little more. I’ve meet some new people on jury duty I should probably try to connect with in the hopes that some may become new friends. It looks like I may be abandoning the idea of writing the book as well… Plans are really up in the air for reevaluation at this time given I’m days away from being 61 and a year away from moving to social security. I don’t want to be in an analysis paralysis mode, but I need to move causiously with my time and resources. Pressures and stress taking care of my mother are increasing as well and I had a stroke just over a year ago now.

I think mostly I want to get back to looking at what can be done with the newer A-Frame 1.3. Some things that Ada Rose Cannon did recently may fix or mitigate some of the problems I see with that framework (navmesh, teleporting). She showed an example of moving around a room that included boundaries and even hand-position recognition movements. I need to get the terminology right. I did manage to get my own private cloud up on the Pi with Nextcloud, so that's a real plus for the home server.
