Monday, December 16, 2019

First Oculus Quest App!

After fighting with several ADB driver issues (reinstalling Unity, restarting, unplugging, replugging, running adb kill-server and adb start-server), I finally figured out how to get my own Unity projects loaded onto this device. In Unity, I used the default/built-in Android SDK tools that can be enabled during the Unity install. I had it working, then did a firmware upgrade on my Quest and everything broke. I then installed the Android SDK (via Android Studio) thinking it would help. That may have resolved some adb version errors (41 instead of 40) that were coming up in Unity. Installing it, and then updating my environment variable to include the platform-tools folder that contains adb.exe, appears to have fixed the version issue.

In addition to this, the key to getting the device to show up under adb devices (it wasn't listed there, even though I could see it in Windows and navigate its files) was to go into the Oculus app on my phone and re-enable developer mode. I had already done this before, but I think the Oculus firmware update reset it back to off. I was pulling my hair out trying to fix something with ADB drivers when this was likely all I needed to get it working.

I did a quick write-up of code to control my player model, then realized there was already a pretty decent player controller script that came with the prefab, so I threw away my code and am starting with theirs until I find a specific need to revamp it (I just wanted to walk around anyway). A few assets later and a quick load, and I have a fun-looking scene! I can't wait to show my GF!

Now I've built and deployed apps for HoloLens, Quest, and Vive! Cool! Gotta keep creating.



Sunday, November 24, 2019

Image extracting from Videos


Extracting Images
In order to extract images for photogrammetry and/or machine learning, I needed a set of images with diverse perspectives of the scene; if you extract every frame from a video there isn't much diversity of perspective. To do this, I installed ffmpeg, took some OBS screen captures from gameplay, and went to work:

Here's the line that extracts images

ffmpeg -i lucio.mp4 -vf "select=not(mod(n\,100))" -vsync vfr -q:v 2 img_%03d.jpg

https://superuser.com/questions/391257/extracting-one-of-every-10-frames-in-a-video-using-vlc-or-ffmpeg
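If you'd rather stay in Python, a rough OpenCV equivalent of that ffmpeg line (assuming the same lucio.mp4 input and the same every-100th-frame spacing) might look something like this:

import cv2

# Open the gameplay capture and save roughly every 100th frame as a JPEG
cap = cv2.VideoCapture("lucio.mp4")
frame_idx = 0
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 100 == 0:
        cv2.imwrite("img_%03d.jpg" % saved, frame)
        saved += 1
    frame_idx += 1
cap.release()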


Labeling Images
Once I had all the images extracted, I wanted to use them to retrain a YOLO object detector, so I found and used this tool. As an aside, I virus-scanned it with Jotti too and it was clean; I used the Windows version. It was super easy to use! I'll do a subsequent post if I build an interesting AI model.

https://github.com/developer0hye/Yolo_Label


Sunday, November 17, 2019

Pi-hole = Amazing ad-blocking!

This simple software package can block something like 90% of the ads you see, both on websites and in videos like YouTube. It works as a DNS-level blocker: your network's DNS queries go through the Raspberry Pi, and requests for known ad domains simply aren't passed along to the ad servers, so the rest of the page content is still served up normally. It's amazing; I highly encourage you to check it out.

This video walks you through doing it:
https://www.youtube.com/watch?v=KBXTnrD_Zs4
https://github.com/pi-hole/pi-hole/#one-step-automated-install

Pihole administrative site

Fixing TF+ anaconda GPU support on windows

For whatever reason, the YOLO model I was running on TensorFlow yesterday appeared to be running only on the CPU instead of the GPU. The low frame rate is the only reason I noticed. I'm wondering if some Windows update messed up my cuDNN install or something. Whatever the reason, I decided to make a new conda environment to see if I could fix it. I knew I had already installed cuDNN from NVIDIA for the CUDA toolkit, so I was skeptical that package was broken.

Installing TF2

So I made a new conda environment to start over and install TF2 from scratch with:
conda create --name tf-gpu
conda activate tf-gpu
conda install tensorflow-gpu

Even though I followed the very good instructions at Puget Systems, when I go into Python to validate eager execution it doesn't work, but when I print the TensorFlow version it comes up as 2.0.0.
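For reference, this is roughly the check I was running at the Python prompt (assuming the standard tf.executing_eagerly() call, which should return True by default in TF 2.x):

import tensorflow as tf

print(tf.__version__)          # comes up as 2.0.0 for me
print(tf.executing_eagerly())  # should print True when eager execution is working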

Installing OpenCV

I was getting an error
Traceback (most recent call last):
  File "webcam_demo.py", line 14, in <module>
    import cv2
ModuleNotFoundError: No module named 'cv2'

which bewildered me because I thought I had it installed. I then tried the following:
pip install opencv-python
This didn't work.
I then tried:
conda install py-opencv
this worked, likely because it respects Anaconda's install processes.
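A quick sanity check (just my own verification snippet, nothing from the repo) to confirm the conda package is actually importable:

import cv2
print(cv2.__version__)  # if this prints a version string, the import problem is gone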

Installing Yolov3 Reqs

Then I tried to install the requirements for the well-written TensorFlow YOLOv3 repo I had downloaded.
pip3 install -r ./docs/requirements.txt

This almost worked, except I got the same error as yesterday:
ModuleNotFoundError: No module named 'easydict'

So I needed to install easydict with pip (NOT pip3!):
pip install easydict
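For context, easydict just wraps a dict so a config can be read with attribute syntax; here's a tiny illustrative example (made-up keys, not the repo's actual config):

from easydict import EasyDict

cfg = EasyDict({"input_size": 416, "score_threshold": 0.3})
print(cfg.input_size)  # attribute access instead of cfg["input_size"]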

CUDA Bug & Fix

I encountered a weird NVIDIA graphics card issue with TF2. I've seen this issue both at work and at home on NVIDIA cards (a 2080 Ti and a 970 GTX at home); here is the fix I added at the top of my Python file:

# Graphics Card Fix - https://github.com/tensorflow/tensorflow/issues/24496
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession

config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)

This let me run YOLOv3 at about 6 FPS (~160 ms per frame) on my GPU, instead of 1 FPS on the CPU.
Yolo v3 running on GPU and tensorflow


Saturday, November 16, 2019

Yolo v3 on tensorflow and windows

I'm on a mission to get proficient with deploying several object detection and classification models. With that foundation, I want to practice implementing transfer learning too. I'm even accepting the additional complexity of doing all this on Windows (purely for operating-system convenience).

I've previously gotten some basic transfer learning working with a full image classifier on PyTorch by following their tutorials; I had to wrap some of the main functions to make it work on Windows, since I've been using Anaconda and its conda environments to get these various frameworks running on Windows.

Today I set out to do the same thing with YOLOv3, specifically on TensorFlow, because the books and tutorials I've been reading are 90% TensorFlow and it's already hard enough to learn. I'm confident there are real value propositions to PyTorch as well; it just feels like a lot of information to absorb already. So I went to Google to find YOLOv3 implemented on TensorFlow and stumbled onto this straightforward implementation:

https://github.com/YunYang1994/TensorFlow2.0-Examples/tree/master/4-Object_Detection/YOLOV3



I had to manipulate my conda environment a bit by manually using pip to install the various libraries (easydict in particular was hard to get right because I had a version mismatch with the requirements). The quickstart guide suggested:

pip3 install -r ./docs/requirements.txt

Which is kind of a neat way to auto-install a bunch of libraries for a given git repo. I should remember that command when I release stuff. To finish the easydict fix I had to uninstall it and then reinstall it with pip3. I got it working on the CPU, so the frame rate is pretty bad (1 FPS). I was able to simply use OpenCV to change the video source from a local file to a webcam, and here you can see it working.


import cv2

# Grab the webcam
cap = cv2.VideoCapture(0)

while True:
    # Read the webcam image
    return_value, frame = cap.read()
    if not return_value:
        break
    # (the detector runs on 'frame' here, then the annotated frame is shown)
    cv2.imshow("result", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

More photogrammetry experimentation

Since I recently learned those better refinement tricks for photogrammetry, I've gained a renewed interest in it. I found myself digging through old datasets that I had attempted to use for photogrammetry, generally without success. I always thought it was because I was taking bad photogrammetry photos, but given what I know now, it's likely because I didn't do enough post-processing work to help Agisoft focus on the right features, by using photomasks and manually cleaning up the dense mesh.

After playing for a few days with more datasets, I've learned that how and where you apply a photomask is critical to getting all the cameras aligned. Take this example:
Before

After
You can see that before, I had cropped out just the tank. This has pros and cons: by masking, everything outside the mask is excluded when computing tie points and the dense cloud. The problem is that I was unable to align all my photos (Agisoft failed to compute where the cameras were) because there weren't enough features in the images to align them all. The After image solves this because the newspaper, which is feature rich (the faces, corners, blocks of text), makes a great element that can be clearly seen in multiple images from different perspectives. So in this case, by recropping the masks to include the newspaper, I could get all the images aligned and build a dense mesh. You can see that from just my 28 images I was able to get an 'OK' dense cloud, from which I can now delete the newspaper.
The next problem I have to face is that the tank treads were really dark, and Agisoft couldn't localize the voxels in these regions, likely due to difficulty matching these features between subsequent pictures. I guess I could've taken more pictures, or tried to get flatter lighting to combat this. Obviously, I have more practice and learning to do.


Tuesday, November 12, 2019

Messing with photogrammetry





I was trying to apply photogrammetry to a game asset out of curiosity. I have used photogrammetry at work to build high-fidelity maps and was curious about further building the skill for game asset creation. I stumbled on this tutorial:

https://www.agisoft.com/index.php?id=38


At work, I had it easy because I had the professional version, which permits adding markers to the pictures to help with alignment. At home, I have the home license, which doesn't have this feature, so while trying to apply photogrammetry at home I needed to learn how to use the basic tools more effectively. It looks like I've been taking the background for granted and assuming the solver would address it for me. With some experimentation, it wasn't working. A few lessons learned:

  • I tried resizing the region of interest (this was key) after the photo alignment on 400 photos, and I got a useful result!
  • I tried a few more runs with masks cropping out HUD elements and this helped too. To apply a photomask across my 400 images, I did one, exported it as a file, and then imported it as a mask to the remaining 400 with default settings. It should also be noted that I learned the hard way NOT to apply masks to key points or tie points; that resulted in an empty mesh for me. Just use None.
  • I manually deleted outlier voxels from the dense cloud to help make a meaningful mesh. This Agisoft tutorial was really well done; it takes some patience to get through, but you really need to see all the clicks and the reasoning behind them.


Now I'm going to try to use a smaller (80-picture) dataset, but with carefully applied masks so it doesn't look at the background so much. This didn't work =(. The background must've helped; it was too sparse. I tried combining my 400 and 80 images and that didn't work either. Then I started over on my 400-image dataset that had worked well, aligned only the first 100 images, then aligned the remaining 300; this went fairly fast, likely due to the smaller problem size. I theorized that doing so can give a suboptimal alignment for the first 100, so I reset and realigned those. Then, after all 400 were aligned, I ran the alignment optimization function, which is only available in the batch options.

I have some more skill refinement to do on this, but now I feel much more confident in photogrammetry by manually massaging the data. Geez! I thought it'd be click, play, go! I guess this is why they say it's an art.

Saturday, November 9, 2019

Installing Tensorflow 2.0 on Windows (NOT 1.14!)

Unfortunately, the process I wrote up resulted in TensorFlow 1.14. What I really wanted was TensorFlow 2.0.

Googling yielded pretty much the same instructions that I executed, so I'm a bit at a loss:
https://medium.com/@shaolinkhoa/install-tensorflow-gpu-2-0-alpha-on-anaconda-for-windows-10-ubuntu-ced099010b21


Looking around...
I ended up trying

(tf-gpu-cuda9) F:\Projects\ML\TF2>pip install --cache-dir=/data/ --build /data/ tensorflow-gpu==2.0

It upgraded a bunch of packages, but still


ERROR: tensorflow 1.14.0 has requirement tensorboard<1.15.0,>=1.14.0, but you'll have tensorboard 2.0.1 which is incompatible.
ERROR: tensorflow 1.14.0 has requirement tensorflow-estimator<1.15.0rc0,>=1.14.0rc0, but you'll have tensorflow-estimator 2.0.1 which is incompatible.
ERROR: tensorboard 2.0.1 has requirement grpcio>=1.24.3, but you'll have grpcio 1.16.1 which is incompatible.

To fix this I did the following. I had to uninstall TensorFlow 1.14, because pip had upgraded everything except tensorflow itself, which was stuck at 1.14. If I tried to install tensorflow with a plain pip install, it threw this error:
EnvironmentError: [Errno 28] No space left on device
The solution to uninstall and reinstall TensorFlow 2.0 was to do this:

pip uninstall tensorflow
pip install --cache-dir=/data/ --build /data/ tensorflow

Tuesday, November 5, 2019

Installing TensorFlow on Windows: Links



At the cmd prompt, to confirm CUDA is installed, type:
nvcc -V

To start running ML code, open the Anaconda prompt from the Start menu and type:

conda activate tf-gpu-cuda9
Simple ML program to run
To test TF 2.0, these commands should work:
import tensorflow as tf
assert tf.test.is_gpu_available()
assert tf.test.is_built_with_cuda()
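Another quick check I've seen suggested (using TF 2.0's experimental config API, so treat this as a best guess for this exact version) is to list the visible GPUs:

import tensorflow as tf
print(tf.config.experimental.list_physical_devices('GPU'))  # should list at least one GPU device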

Sunday, November 3, 2019

Installing TensorFlow on Windows


TF 1.14 with Python 3 on Windows: this worked for me by using Anaconda to set up the virtual environments and address most of the dependencies; there were still a few things that needed to be added manually in the Anaconda prompt.

The instructions say this should get you TensorFlow 2.0.0, but it looks like these commands gave me a working TensorFlow 1.14; I'll need to revisit this to get TF 2.0 working properly. Also, it appears PyCharm didn't work in the venv until I installed OpenCV with pip (for Python 2).
Strangely, I learned that while I wanted to use PIL for image operations, it has been deprecated in favor of Pillow, which includes all the same libraries. I wanted this so I could do screen grabs from Python.
https://docs.anaconda.com/anaconda/user-guide/tasks/tensorflow/

conda create -n tf-gpu-cuda9 tensorflow-gpu cudatoolkit=9.0
conda activate tf-gpu-cuda9
pip install opencv-python
pip3 install opencv-python
pip install matplotlib
pip3 install matplotlib
pip install Pillow
pip3 install Pillow
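Since screen grabs were the whole point of installing Pillow, here's a minimal sketch of what I had in mind (assuming Pillow's ImageGrab module, which works on Windows):

from PIL import ImageGrab  # Pillow provides the PIL namespace

# Capture the full screen (or pass bbox=(left, top, right, bottom) for a region) and save it
img = ImageGrab.grab()
img.save("screengrab.png")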


In addition to getting a basic version of TensorFlow working, I modeled a Nuka-Cola bottle cap for one of her broken guns in SolidWorks. Simplify3D is the best slicer; I love it!


Learning Blender Modeling, Rigging, Texturing

Learning to Model

I chose to install Blender 2.79 because the latest version was too hard, and I'm trying to follow this tutorial:

https://www.youtube.com/watch?v=DiIoWrOlIRw&list=PLFt_AvWsXl0fEx02iXR8uhDsVGhmM9Pse


  • To import background images, they have to be in .jpg format; other formats are questionable
  • You have to be in Orthographic view (Numpad 5) to look at background images
Key Hotkeys
  • Shift+Mouse3 - to pan (you may have to try a couple of times)
  • Shift+1 or Shift+3 - to change to front or side view
  • a, then b - bounding box select
  • Ctrl + R - Start cut, right click selects the middle
  • Tab - to get to edit mode.
  • Z - wireframe mode
  • x - delete
  • e - extrude
  • g - grab (move) the selected vertices
  • typical 'babg' to select group of vertices and move them.

To Round out a model (from an isometric POV)
  • Alt + RMB - Select Edge
  • Alt + Shift + RMB - Select another edge to close loop and get surface selection
  • ... add more here....
and... I ran out of time, so I'll pick this up later.

Thursday, June 6, 2019

Playing with Mapbox

I was playing around with Mapbox in Unity 2019.1.5f1 and it gave me an error; after some googling I figured out a simple solution. I'm not messing around with AR, so I deleted these folders and then the project compiled. I figured this is fine because it's a bunch of AR assets I wouldn't be using anyway:

  • GoogleARCore
  • MapboxAR
  • UnityARInterface
  • UnityARKitPlugin
Another interesting note: I tried to load Mapbox into a premade scene with HDRP settings and something with the shaders was messed up. My globe came in purple; at least when I did it in a blank scene it rendered correctly. I didn't feel like trying to fix it, but long term it might be worth the effort to get the juicy visual quality that Unity can provide.

After poking around with the tutorial I was able to get some decent-looking results. I'm pleased with this result for so little effort! Not to be picky, but I noticed some of the buildings weren't identified; I presume the data comes from some ML-derived satellite analysis by Google or Mapbox with a building classifier... interesting! Probably good for prototyping, but it wouldn't be contextually perfect. Also, the imagery holds up pretty well wherever you zoom in, though it's a little worse than Google in terms of resolution.

That said, scaling up the extents and zoom really provides amazing results! What a powerful tool!



Creating Jetson OS Backup

I was googling around because I wanted to create a flashable image of my Jetson Nano before I do things that could potentially break the OS. This would give me a rapid way to restore my Jetson Nano instead of the 30-60 minute process it takes to re-flash, reconfigure, and reinstall all the basic software tools I want to use. Flashing a premade image takes about 5 minutes.

It looks easier than I expected! It seems I don't have to install anything to create the image or flash it on Linux:

https://thepihut.com/blogs/raspberry-pi-tutorials/17789160-backing-up-and-restoring-your-raspberry-pis-sd-card

Tuesday, June 4, 2019

Posting a new repo to github

Jotting some quick notes on how to properly push up local files and get them synced with github.com, because I'm starting to do it fairly frequently and it takes me some time...

1. Adding an existing project to github using the command-line
2. When the upload fails, it says:
git push origin master
Warning: Permanently added the RSA host key for IP address '192.30.X.X' to the list of known hosts.
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.

3. generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent
4. adding-a-new-ssh-key-to-your-github-account
5. I messed up by creating a readme, and so now it gave me this error:
$ git push origin master
To git@github.com:nickswimsfast/SpatialPerception.git
 ! [rejected]        master -> master (fetch first)
error: failed to push some refs to 'git@github.com:nickswimsfast/SpatialPerception.git'
hint: Updates were rejected because the remote contains work that you do
hint: not have locally. This is usually caused by another repository pushing
hint: to the same ref. You may want to first integrate the remote changes
hint: (e.g., 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

The solution (a rough and risky one) was to:
git push origin master --force


Monday, June 3, 2019

ML Dive

Diving into some ML... While I have cloned repos and applied them successfully with transfer learning using models like Inception v3 and ResNet, I wanted to go a little deeper. Before I could get started, I set out to set up my VR computer (GTX 1070 graphics) as a development box. This entailed getting dual-boot Windows/Ubuntu set up along with VNC.

Tensorflow Setup
I was surprised because just getting my computer properly set up for GPU-accelerated machine learning was more of a chore than I expected. It took me a solid 3 hours to decipher everything I had to do in order to actually get neural nets training on the GPU. I'm confident it was worth it for speed, but geez, it shouldn't be this hard! Docker, CUDA 10 drivers, linking to the OS so I could save files permanently, etc. Anyway, I documented the entire arduous process on GitHub; it's a little messy because they were quick notes for myself, but it might help somebody:

[SetupML Github repository]

VNC = dev speed
Can I just mention that the ability to log into my VR computer from anywhere in the house is a godsend! I can sit with my laptop and work on it from the couch, from the bed, from my desktop computer or laptop! So much more productive.

Getting started with ML
Once I was set up, I verified things were working with these quick tutorials:

* [Basic Classification]
* [Text Classification]


Synthetic Data
Enough with the tutorials! I wanted to branch out on my own, more-or-less scratch-built perception model, so I semi-automatically crafted about 200 synthetic images with truth labels in Unity and set to work. I spent pretty much this whole weekend putting it all together: getting the data formatting right and the arrays set up with the right datatypes, so I can have automatically generated training and test datasets.


Complex Model
Unfortunately, I'm trying to get my last layer to be a multi-dimensional regression output, and this is proving more challenging than I thought. The few times I successfully trained my rough models, I was getting at best 60% accuracy. I've got to spend a few more days researching Keras model architecture; I think mine is bad. I've been messing around with the sigmoid, linear, and softmax activation functions, along with various loss functions (see the sketch after these links). I stumbled onto these interesting links which might help me find the answer:
* [Keras Github Examples]
* [Keras Documentation - with examples]
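For my own future reference, the kind of architecture I'm circling around looks roughly like this: a small CNN trunk with a linear (not softmax) regression head trained with MSE. This is a hypothetical sketch with made-up layer sizes and a 4-value output, not the model I actually trained:

import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical multi-dimensional regression model: linear output head + MSE loss
model = models.Sequential([
    layers.Conv2D(16, 3, activation='relu', input_shape=(128, 128, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation='relu'),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation='relu'),
    layers.Dense(4, activation='linear'),  # e.g. a 4-value regression target
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])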

Whew! Time for sleep!

First Github Repo!

I've made several GitHub repositories at work on our local network's Git server, but I had never posted my own repo outside that ecosystem. Today is my first public repo on GitHub! I made some scripts to make it easier to change the clock rate/power consumption of the Jetson Nano instead of having to remember all the commands and syntax. Here it is:

Jetson Power Scripts
[Jetson Nano power management](https://github.com/nickswimsfast/jetson_powermgmt)

Good instructions for starting a [new github repo](https://kbroman.org/github_tutorial/pages/init.html).

SSH/VNC to Jetson Nano
I set up SSH beforehand and VNC with Vino on the Jetson itself; the default instructions weren't exactly right, but some googling later and you'll get it! Also, I've found that RealVNC works great for connecting to it (free solution). TightVNC costs money?! On my x86 Windows desktop computer (booted into Ubuntu) it was easy to use Ubuntu's built-in remote desktop sharing. I also had to install openssh-server on the x86 computer, but I don't think I needed to do that on the Jetson; I think it came with it by default.


Wednesday, April 24, 2019

Playing with Nvidia Jetson Nano

Back again
I'm going to necro-post on my own blog... Haha, how fast does time fly? I figure the best way to get back into it is to just dive in...

While at NVIDIA's GPU conference this year, I was lucky enough to pick up a Jetson Nano edge computer and have played around with it a bit. There are quirks since it's still a developer board, and, well, it's Linux, so it's good if you can troubleshoot things as they come...

Jetson Nano + apt vncserver == bad idea
I immediately tried to install tightvncserver on my Nano running Ubuntu 18.04... I tried following standard desktop install instructions, and those didn't work. Most install instructions have you install several dependencies and desktop environments; I added some GNOME panels and pretty much bricked my OS. apt remove doesn't undo it...

Jetson Nano + VNC solution!
Then I noticed some documentation in the JetPack OS referencing Vino as a means of providing VNC capability (remotely logging into your computer over the network). I tried this, but it didn't work either; specifically, there was no indication it was running. Then I found the Desktop Sharing icon built into Linux. This is the Vino package, but its UI is broken; this website tells you how to fix it:
https://blog.hackster.io/getting-started-with-the-nvidia-jetson-nano-developer-kit-43aa7c298797

Let's see if I can keep the momentum going...