
Amber Alert VII: The NPairs Awakens

Last week, our confusion was: how were we going to use NPairs loss without discrete classes?

With Dr. Pless and Abby, we figured out that we could create classes by grouping together a handful of images from a traffic camera that were all taken within a certain time interval (we decided to go with 15-frame groups). We then skip the next 15 frames before creating the next class, so there is no overlap between classes. We applied this process to this video: https://www.youtube.com/watch?v=wqctLW0Hb_0

This gave us a total of 51,000 frames.
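A rough sketch of that grouping, assuming the frames are simply numbered 0 through 50,999 (the 15-frame groups, 15-frame gaps, and the 51,000-frame total are from above; the function name and indexing scheme are made up for illustration):

    GROUP_SIZE = 15  # frames per pseudo-class
    GAP_SIZE = 15    # frames skipped between consecutive classes

    def build_classes(num_frames):
        """Return {class_id: [frame indices]} using the keep-15 / skip-15 pattern."""
        classes, class_id, i = {}, 0, 0
        while i + GROUP_SIZE <= num_frames:
            classes[class_id] = list(range(i, i + GROUP_SIZE))
            class_id += 1
            i += GROUP_SIZE + GAP_SIZE  # jump over the gap so classes never overlap
        return classes

    classes = build_classes(51000)
    print(len(classes), "classes,", sum(len(v) for v in classes.values()), "frames kept for training")

With 51,000 frames, that pattern works out to roughly 1,700 classes of 15 frames each (~25,500 frames), which lines up with the train/validation counts mentioned below.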

We just threw these new classes into Hong's npairs model without changing anything about the model itself. It trained, and the loss decreased. We plan on doing the over-training check we talked about for our last model (test on a small number of classes to see if the loss goes to 0).
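We didn't touch the loss itself, but for context, here is a minimal sketch of the standard n-pairs objective (Sohn 2016) that a model like this optimizes; this is the textbook formulation, not Hong's exact code:

    import torch
    import torch.nn.functional as F

    def npairs_loss(anchors, positives):
        """anchors, positives: (N, D) embeddings, where row i of each comes from
        the same class. Each anchor should match its own positive (the diagonal)
        more strongly than every other positive in the batch."""
        logits = anchors @ positives.t()  # (N, N) similarity matrix
        targets = torch.arange(anchors.size(0), device=anchors.device)
        return F.cross_entropy(logits, targets)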

We got some results (we literally just started training, and it's taking quite a while, so these results are only from epoch #2):

(^ I feel like this is bad)

We'll have to dig into this a bit more to see what these graphs actually mean; I figured I would include them because they are the graphs that Hong's code outputs.

We are also training on resnet18; we'd like to try resnet50. It's also really slow (40 min/epoch, so about 7 hours for 10 epochs). We are using ~25,000 images to train and ~25,000 images to validate (because we need to leave a gap between the classes anyway, we can use the gap frames as our validation data; a sketch of that split is below).
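A sketch of how the gap frames could be pulled out as the validation set, reusing the constants from the grouping sketch above (again, the frame indexing is an assumption):

    GROUP_SIZE, GAP_SIZE = 15, 15  # same constants as the grouping sketch above

    def build_gap_validation(num_frames):
        """Collect the skipped gap frames as validation groups, so the
        validation data never overlaps with the training classes."""
        val_groups, group_id, i = {}, 0, GROUP_SIZE  # first gap starts right after the first class
        while i + GAP_SIZE <= num_frames:
            val_groups[group_id] = list(range(i, i + GAP_SIZE))
            group_id += 1
            i += GROUP_SIZE + GAP_SIZE
        return val_groups

    val_groups = build_gap_validation(51000)
    print(sum(len(v) for v in val_groups.values()), "validation frames")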

We plan to train the model more, and we plan to run t-SNE on the data to visualize our embedding.
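For the t-SNE step, something like the scikit-learn version should be enough (assuming the embeddings come out as a NumPy array with one row per frame, plus the pseudo-class label of each frame; the function name is ours):

    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    def plot_tsne(embeddings, labels):
        """embeddings: (num_frames, dim) array from the trained model;
        labels: pseudo-class id of each frame. Both assumed precomputed."""
        coords = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(embeddings)
        plt.scatter(coords[:, 0], coords[:, 1], c=labels, cmap="tab20", s=3)
        plt.title("t-SNE of frame embeddings (colored by pseudo-class)")
        plt.show()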

1 thought on "Amber Alert VII: The NPairs Awakens"

  1. Abby

    The fact that the training accuracy is 0% and the testing accuracy is 100% is definitely concerning, as you noted. 🙂

    Have you had any success in debugging that? If that's for the overfitting test, I would wonder if you had the training and test evaluation mixed up?

