Sunday, November 24, 2019

Image extracting from Videos


Extracting Images
To extract images for photogrammetry and/or machine learning, I needed images with a lot of scene diversity; if you extract every frame from a video, consecutive frames barely differ in perspective. To get around this I installed ffmpeg, took some OBS screen captures from gameplay, and went to work:

Here's the line that extracts every 100th frame:

ffmpeg -i lucio.mp4 -vf "select=not(mod(n\,100))" -vsync vfr -q:v 2 img_%03d.jpg

https://superuser.com/questions/391257/extracting-one-of-every-10-frames-in-a-video-using-vlc-or-ffmpeg
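To sanity-check what the `select=not(mod(n\,100))` filter actually keeps: ffmpeg evaluates the expression per frame number n and keeps frames where it is non-zero, i.e. every 100th frame. A quick sketch of the surviving frame numbers for a hypothetical 450-frame clip:

```python
# Frames kept by select=not(mod(n,100)) for a 450-frame video:
# the expression is non-zero exactly when n is a multiple of 100.
kept = [n for n in range(450) if n % 100 == 0]
print(kept)  # [0, 100, 200, 300, 400]
```

Change the 100 to thin the output more or less aggressively.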


Labeling Images
Once I had all the images extracted, I wanted to use them to retrain a YOLO object detector, so I found and used the tool below. As an aside, I virus-scanned it with Jotti first - it came back clean - and used the Windows version. It was super easy to use! I'll do a subsequent post if I build an interesting AI model.

https://github.com/developer0hye/Yolo_Label
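Labeling tools like this emit YOLO-format text files: one line per box, with a class index followed by the normalized box center and size. As a sketch (my own helper, with made-up example values), converting one label line back to pixel-space corners looks like:

```python
# Parse one line of a YOLO-format label file:
# "class x_center y_center width height" with coordinates normalized to [0, 1].
def parse_yolo_line(line, img_w, img_h):
    cls, xc, yc, w, h = line.split()
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    # Convert normalized center/size back to pixel-space box corners.
    x1 = (xc - w / 2) * img_w
    y1 = (yc - h / 2) * img_h
    x2 = (xc + w / 2) * img_w
    y2 = (yc + h / 2) * img_h
    return int(cls), (x1, y1, x2, y2)

print(parse_yolo_line("0 0.5 0.5 0.25 0.5", 640, 480))
# (0, (240.0, 120.0, 400.0, 360.0))
```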


Sunday, November 17, 2019

Pi-hole = Amazing ad-blocking!

This simple software package can block something like 90% of the ads you see on websites and in videos like YouTube. To block an advertisement, DNS requests for ad domains are routed to the Raspberry Pi, which never passes them along to the user - so technically the content is still being served up, it just never reaches you. It's amazing; I highly encourage you to check it out.

This video walks you through doing it:
https://www.youtube.com/watch?v=KBXTnrD_Zs4
https://github.com/pi-hole/pi-hole/#one-step-automated-install
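The core idea is just a DNS sinkhole: if a requested domain is on the blocklist, answer with a null address so the client has nothing to fetch. A toy sketch of that decision (hypothetical domains, not Pi-hole's actual code):

```python
# Toy DNS-sinkhole decision, the idea behind Pi-hole's blocking.
# Blocked names get a null address; everything else would be
# forwarded to the real upstream DNS server.
BLOCKLIST = {"ads.example.com", "tracker.example.net"}

def resolve(domain):
    if domain in BLOCKLIST:
        return "0.0.0.0"              # sinkholed - the ad never loads
    return "forward to upstream DNS"  # normal lookup continues

print(resolve("ads.example.com"))   # 0.0.0.0
print(resolve("blog.example.org"))  # forward to upstream DNS
```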

The Pi-hole administrative site

Fixing TF+ anaconda GPU support on windows

For whatever reason, the YOLO model I was running on TensorFlow yesterday appeared to be running only on the CPU instead of the GPU. The low frame rate is the only reason I noticed. I'm wondering if some Windows update messed up my cuDNN install or something. Whatever the reason, I decided to make a new conda environment to see if I could fix it. I knew I had already installed cuDNN from NVIDIA for the CUDA toolkit, so I was skeptical that that package was broken.

Installing TF2

So I made a new conda environment to start over and install TF2 from scratch with:
conda create --name tf-gpu
conda activate tf-gpu
conda install tensorflow-gpu

Even though I followed the very good instructions at Puget Systems, when I go into Python to validate eager execution it doesn't work, but when I print the TensorFlow version it comes up as 2.0.0.

Installing OpenCV

I was getting an error
Traceback (most recent call last):
  File "webcam_demo.py", line 14, in
    import cv2
ModuleNotFoundError: No module named 'cv2'

which bewildered me because I thought I had it installed. I then tried the following:
pip install opencv-python
This didn't work.
I then tried:
conda install py-opencv
This worked, likely because it respects Anaconda's install process.
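A pattern I could have used to fail fast with an actionable hint instead of a bare traceback (my own helper, not from the repo):

```python
import importlib

def require(name, hint):
    """Import a module, or exit with an install hint if it's missing."""
    try:
        return importlib.import_module(name)
    except ModuleNotFoundError:
        raise SystemExit(f"No module named '{name}' - {hint}")

# Works for anything importable; 'json' is just a stdlib stand-in here.
json_mod = require("json", "part of the stdlib, should always be present")
print(json_mod.__name__)  # json
```

For cv2 the hint would be something like "try 'conda install py-opencv' inside the env".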

Installing Yolov3 Reqs

Then I tried to install the requirements for the well-written TensorFlow YOLOv3 repo I had downloaded.
pip3 install -r ./docs/requirements.txt

This almost works except I got the same error as yesterday:
ModuleNotFoundError: No module named 'easydict'

So I needed to install easydict with pip (NOT pip3!):
pip install easydict
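What easydict buys the repo is attribute-style access to config dicts. As a rough stand-in (simplified - the real easydict also converts nested dicts, and the keys here are made up), the idea looks like:

```python
# Minimal stand-in for easydict's attribute-style dict access.
class EasyDictLike(dict):
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        self[name] = value

# Hypothetical config values, just to show the access pattern.
cfg = EasyDictLike(input_size=416, score_threshold=0.3)
print(cfg.input_size)  # 416  (instead of cfg["input_size"])
cfg.iou_threshold = 0.45
print(cfg["iou_threshold"])  # 0.45
```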

CUDA Bug & Fix

I encountered a weird NVIDIA graphics card issue with TF2. I've seen this issue both at work and at home on NVIDIA cards (a 2080 Ti and a GTX 970 at home); here's the fix I added at the top of my Python file:

# Graphics Card Fix - https://github.com/tensorflow/tensorflow/issues/24496
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession
config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)

This let me run Yolo v3 @ 6 FPS (~160 ms) on my gpu, instead of 1 FPS on the CPU.
Yolo v3 running on GPU and tensorflow


Saturday, November 16, 2019

Yolo v3 on tensorflow and windows

I'm on a mission to get proficient with deploying several object detection and classification models. Then, with that foundation, I want to practice implementing transfer learning too. I'm even accepting the additional complexity of doing all this on Windows (purely from an operating-system convenience perspective).

I've previously gotten some basic transfer learning working with a full image classifier on PyTorch by following their tutorials - I had to wrap some of the main functions to make it work on Windows, since I've been using Anaconda and its conda environments to get these various frameworks running on Windows.

Today I set out to do the same thing with YOLOv3, specifically on TensorFlow, because the books and tutorials I've been reading have been 90% TensorFlow and it's already hard enough to learn. I'm confident there are real value propositions to PyTorch as well - it just feels like a lot of information to absorb already. So I went to Google to find YOLOv3 implemented on TensorFlow and stumbled onto this straightforward implementation:

https://github.com/YunYang1994/TensorFlow2.0-Examples/tree/master/4-Object_Detection/YOLOV3



I had to sort of manipulate my conda environment by manually using pip to install the various libraries (easydict in particular was hard to get right because I had a version mismatch with the requirements). The quickstart guide suggested

pip3 install -r ./docs/requirements.txt

Which is kind of a neat way to auto-install a bunch of libraries for a given git repo. I should remember this command for when I release stuff. To finish off easydict I had to uninstall it and then reinstall it with pip3. I got it working on the CPU, so the frame rate is pretty bad (1 FPS). I was able to simply use OpenCV to change the video object from a local file to a webcam, and here you can see it working.


import cv2

# Grab the webcam instead of a local video file
cap = cv2.VideoCapture(0)

while True:
    # Read the webcam image
    return_value, frame = cap.read()

More photogrammetry experimentation

Since I recently learned those better refinement tricks for photogrammetry, I've developed a renewed interest in it. I found myself digging through old datasets I had tried to use for photogrammetry, generally without success. I always thought it was because I was taking bad photogrammetry photos, but given what I know now, it's more likely that I didn't do enough post-processing work to help Agisoft focus on the right features - using photomasks and manually cleaning up the dense mesh.

After playing with more datasets for a few days, I've learned that how and where you apply a photomask is critical to getting all the cameras aligned. Take this example:
Before

After
You can see that in the Before image I had cropped out just the tank. This has pros and cons: by masking, everything outside the mask is excluded when computing tie points and the dense cloud. The problem is that I was unable to align all my photos (the solver failed to compute where the cameras were) because there weren't enough features left in the images. The After image solves this: the newspaper is feature-rich (faces, corners, blocks of text) and makes a great element that can be clearly seen in multiple images from different perspectives. So I tried again, recropping the masks to include the newspaper, and this time I got all the images aligned and built a dense mesh. From just my 28 images I was able to get an 'OK' dense cloud, from which I can now delete the newspaper.
The next problem I have to face is that the tank treads were really dark, and Agisoft couldn't localize the voxels in these regions, likely due to difficulty matching features between subsequent pictures. I guess I could've taken more pictures or tried flatter lighting to combat this. Obviously, I have more practice and learning to do.


Tuesday, November 12, 2019

Messing with photogrammetry





I was trying to apply photogrammetry to a game asset out of curiosity. I've used photogrammetry in my work to build high-fidelity maps and was curious about building the skill further for game asset creation. I stumbled on this tutorial:

https://www.agisoft.com/index.php?id=38


At work, I had it easy because I had the professional version, which permits adding markers to the pictures to help with alignment. At home, I have the home license, which doesn't have this feature, so while applying photogrammetry at home I needed to learn to use the basic tools more effectively. It looks like I'd been taking the background for granted, assuming the solver would handle it for me. With some experimentation, it clearly wasn't. A few lessons learned:

  • I resized the region of interest (this was key) after photo-alignment on 400 photos, and got a useful result!
  • I did a few more runs with masks cropping out HUD elements, and this helped too. To apply a photomask across my 400 images, I made one, exported it as a file, and then imported it as a mask for the remaining 400 with default settings. It should also be noted that I learned the hard way NOT to apply masks to key points or tie points - this resulted in an empty mesh for me. Just use None.
  • I manually deleted outlier voxels from the dense cloud to help make a meaningful mesh. This Agisoft tutorial was really well done; it takes some patience to get through, but you really need to see all the clicks and the reasoning behind them.


Next I tried a smaller (80-picture) dataset, carefully applying masks so it wouldn't look at the background so much. This didn't work =(. The background must've helped; without it the data was too sparse. I tried combining my 400 and 80 images and that didn't work either. Then I started over on my 400-image dataset that had worked well: I aligned only the first 100 images, then aligned the remaining 300, which went fairly fast, likely because the problem size was cut down. I theorized that doing it this way could leave a suboptimal alignment for the first 100, so I reset and realigned those. Then, after all 400 were aligned, I ran the alignment optimization function, which is only available in the batch options.

I still have some skill refinement to do here, but after manually massaging the data I feel much more confident in photogrammetry. Geez! I thought it'd be click, play, go! I guess this is why they say it's an art.

Saturday, November 9, 2019

Installing Tensorflow 2.0 on Windows (NOT 1.14!)

Unfortunately, the process I wrote up resulted in TensorFlow 1.14; what I really wanted was TensorFlow 2.0.

Googling yielded pretty much the same instructions I had executed, so I'm a bit at a loss:
https://medium.com/@shaolinkhoa/install-tensorflow-gpu-2-0-alpha-on-anaconda-for-windows-10-ubuntu-ced099010b21


Looking around...
I ended up trying

(tf-gpu-cuda9) F:\Projects\ML\TF2>pip install --cache-dir=/data/ --build /data/ tensorflow-gpu==2.0

It upgraded a bunch of packages, but still


ERROR: tensorflow 1.14.0 has requirement tensorboard<1.15.0,>=1.14.0, but you'll have tensorboard 2.0.1 which is incompatible.
ERROR: tensorflow 1.14.0 has requirement tensorflow-estimator<1.15.0rc0,>=1.14.0rc0, but you'll have tensorflow-estimator 2.0.1 which is incompatible.
ERROR: tensorboard 2.0.1 has requirement grpcio>=1.24.3, but you'll have grpcio 1.16.1 which is incompatible.

To fix this I did the following. I had to uninstall TensorFlow 1.14 because pip upgraded everything except tensorflow itself, which was stuck at 1.14. When I tried to install tensorflow with a plain pip install, it threw this error:
EnvironmentError: [Errno 28] No space left on device
The solution that let me uninstall and reinstall TensorFlow 2.0 was:

pip uninstall tensorflow
pip install --cache-dir=/data/ --build /data/ tensorflow

Tuesday, November 5, 2019

Installing Tensorflow on windows Links



At a cmd prompt, to confirm CUDA is installed, type:
nvcc -V

To start running ML code: open the Anaconda prompt from the Start menu and type

conda activate tf-gpu-cuda9
Simple ML program to run
To test TF 2.0, these commands should work:
import tensorflow as tf
assert tf.test.is_gpu_available()
assert tf.test.is_built_with_cuda()

Sunday, November 3, 2019

Installing TensorFlow on Windows


TF 1.14 with Python 3 on Windows: this worked for me by using Anaconda to set up the virtual environments and address most of the dependencies; there were still a few things that needed to be added manually in the Anaconda prompt.

The instructions say this should get you TensorFlow 2.0.0, but it looks like these commands gave me a working TensorFlow 1.14; I'll need to revisit this to get TF 2.0 working properly. Also, it appears my PyCharm didn't work in the venv until I installed opencv with pip (for Python 2).
Strangely, I learned that while I wanted to use PIL for image operations, it has been deprecated in favor of Pillow, which includes all the same libraries. I wanted this so I could do screen grabs from Python.
https://docs.anaconda.com/anaconda/user-guide/tasks/tensorflow/

conda create -n tf-gpu-cuda9 tensorflow-gpu cudatoolkit=9.0
conda activate tf-gpu-cuda9
pip install opencv-python
pip3 install opencv-python
pip install matplotlib
pip3 install matplotlib
pip install Pillow
pip3 install Pillow


In addition to getting a basic version of TensorFlow working, I modeled a Nuka-Cola bottle cap for one of her broken guns in SolidWorks. Simplify3D is the best slicer; I love it!


Learning Blender Modeling, Rigging, Texturing

Learning to Model

I chose to install Blender 2.79 because the latest version was too hard to follow along with for this tutorial:

https://www.youtube.com/watch?v=DiIoWrOlIRw&list=PLFt_AvWsXl0fEx02iXR8uhDsVGhmM9Pse


  • To import background images, they have to be in .jpg format; other formats are questionable
  • You have to be in Orthographic view (Numpad 5) to look at background images
Key Hotkeys
  • Shift+Mouse3 - to pan (may have to try a couple of times)
  • Shift+1 or Shift+3 - to change to front or side view
  • a, then b - bounding box select
  • Ctrl + R - Start cut, right click selects the middle
  • Tab - to get to edit mode.
  • Z - wireframe mode
  • x - delete
  • e - extrude
  • g - grab (move) the current selection
  • the typical 'b, a, b, g' sequence to select a group of vertices and move them

To Round out a model (from an isometric POV)
  • Alt + RMB - Select Edge
  • Alt + Shift + RMB - Select another edge to close loop and get surface selection
  • ... add more here....
...and I ran out of time, so I'll pick this up later.