Since last week, we gathered 20000 images. We visualized the triplets to make sure our triplet creation code was working correctly, and in doing so discovered that the last 5000 images were actually from a boat camera instead of the highway cam (we used a YouTube video, which must have autoplayed to a different stream). That forced us to cut the dataset down to 15000 images, and we visually verified that triplet creation was correct for the remaining images.
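A minimal sketch of the kind of side-by-side check we mean, assuming each triplet is stored as a tuple of three image paths (the function name, frame labels, and example filenames here are placeholders, not our actual code):

```python
import cv2
import matplotlib.pyplot as plt

def show_triplet(triplet_paths):
    """Display the three frames of one triplet side by side for a quick eyeball check."""
    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    for ax, path, label in zip(axes, triplet_paths, ["frame 1", "frame 2", "frame 3"]):
        img = cv2.imread(path)
        ax.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))  # cv2 reads BGR; matplotlib expects RGB
        ax.set_title(label)
        ax.axis("off")
    plt.show()

# e.g. show_triplet(("img_0001.jpg", "img_0002.jpg", "img_0003.jpg"))
```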
We also realized that our code was extra slow because we were loading the original resolution images and resizing all 15000 of them every time we ran. We took some time to resize and save all of the images beforehand, so we no longer waste time resizing on every run.
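The preprocessing pass is essentially a one-time resize-and-save loop; a rough sketch (the directory names and target size here are made up, not our actual values):

```python
import glob
import os
import cv2

SRC_DIR = "frames_full"        # original full-resolution frames (hypothetical path)
DST_DIR = "frames_resized"     # where the downsized copies get written
TARGET_SIZE = (320, 240)       # (width, height) -- placeholder resolution

os.makedirs(DST_DIR, exist_ok=True)

for path in glob.glob(os.path.join(SRC_DIR, "*.jpg")):
    img = cv2.imread(path)
    if img is None:  # skip anything cv2 can't read
        continue
    small = cv2.resize(img, TARGET_SIZE, interpolation=cv2.INTER_AREA)
    cv2.imwrite(os.path.join(DST_DIR, os.path.basename(path)), small)
```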
We also had a quick issue where cv2 wasn't importing. We have absolutely no idea why this happened. We just reinstalled cv2 in our virtual environment and it worked again.
We are getting a weird error when training now, and we are a little confused as to why. For some reason, it appears that we need a batch size divisible by 8. This isn't so bad, because we can just choose a batch size that IS divisible by 8, but we just aren't sure WHY. If we don't, we get an error that says: `tensorflow.python.framework.errors.InvalidArgumentError: Incompatible shapes: [8] vs. [<batch_size>]`. Has anyone seen this error before?
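For context, this is the kind of TF1-style code that produces a shape-mismatch message of that form: a tensor somewhere in the graph with a hard-coded dimension of 8 getting combined with a tensor whose batch dimension is dynamic. This is only a guess at the mechanism, not our actual training code:

```python
import numpy as np
import tensorflow as tf

labels = tf.placeholder(tf.float32, [None])  # batch dimension left dynamic
fixed = tf.zeros([8])                        # first dimension hard-coded to 8
diff = fixed - labels                        # broadcasting only works if the batch is 8

with tf.Session() as sess:
    batch = np.ones(4, dtype=np.float32)     # any batch size other than 8
    sess.run(diff, feed_dict={labels: batch})
    # raises InvalidArgumentError: Incompatible shapes: [8] vs. [4]
```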