We started this week attempting to fix our weird batch size bug. We talked to Abby, and we determined that this is (probably) some weird Keras bug. Abby also recommended that we switch to PyTorch, both to stay consistent within the lab and to avoid the bug entirely.

Hong sent us his code for N-pair loss, which we took a look at and started to modify to work with our dataset. However, it's not as easy as just swapping in our images. Hong's model works by saying "we have N classes with a bunch of samples in each; train so that class X is grouped together and is far away from the other N-1 classes." The problem for us is that each image by itself is not in any class; it's only near or far relative to some other image. We believe our options for the loss function are these (rough sketches of each follow the list):

  1. Change N-pair loss to some sort of thresholded N-pair loss. This would mean the negatives we push away from are some fraction of the dataset we determine to be far away in time (for now I'll say 3/4); see the sketch after this list.

If these are the timestamps of the images we have, and we are on image [0]:

[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

The loss would try to push [0] close to [1, 2, 3] (the images we defined to be close to it, time-wise), and far away from images [4, 5, 6, 7, 8, 9].

1a. We could make this continuous instead of discrete (which I think makes more sense), where each image's contribution to the loss is proportional to the distance between the timestamps (also sketched below).

2. Implement triplet loss in PyTorch (sketched below).
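
For reference, this is roughly the shape of the standard N-pair loss described above, as we understand it (our own simplified, per-anchor sketch, not Hong's code): each anchor gets one positive from its own class and one negative from each of the other N-1 classes, and its similarity to the positive has to beat all the negatives. This is exactly why our data doesn't fit directly; we have no class labels to pick the positive and negatives from.

```python
import torch
import torch.nn.functional as F

def npair_loss(anchor, positive, negatives):
    """Simplified per-anchor N-pair loss: `anchor` and `positive` are (d,)
    embeddings from the same class, `negatives` is (N-1, d) with one
    embedding from each of the other classes."""
    logits = torch.cat([(anchor @ positive).view(1), negatives @ anchor]).unsqueeze(0)
    # Index 0 (the positive) should win the softmax over all the negatives.
    return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))
```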
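
Here is a minimal sketch of what option 1 could look like in PyTorch, assuming we already have a batch of embeddings and their timestamps. The function name `thresholded_npair_loss`, the `near_frac` parameter, and the batch layout are ours for illustration, not Hong's code:

```python
import torch
import torch.nn.functional as F

def thresholded_npair_loss(embeddings, timestamps, anchor_idx=0, near_frac=0.25):
    """N-pair-style loss where the "classes" come from time: the near_frac of
    images closest in time to the anchor are the positives, and the remaining
    far-away fraction (3/4 by default) are the negatives we push away from."""
    anchor = embeddings[anchor_idx]                                    # (d,)
    others = torch.cat([embeddings[:anchor_idx], embeddings[anchor_idx + 1:]])
    other_t = torch.cat([timestamps[:anchor_idx], timestamps[anchor_idx + 1:]])

    # Rank the other images by how far they are from the anchor in time.
    order = torch.argsort((other_t - timestamps[anchor_idx]).abs())
    n_near = max(1, round(near_frac * len(order)))
    pos, neg = others[order[:n_near]], others[order[n_near:]]

    # N-pair-style softmax: each positive's similarity to the anchor should
    # beat the similarities of all the negatives.
    pos_sim = pos @ anchor                                             # (P,)
    neg_sim = neg @ anchor                                             # (N,)
    logits = torch.cat([pos_sim.unsqueeze(1), neg_sim.expand(len(pos_sim), -1)], dim=1)
    return F.cross_entropy(logits, torch.zeros(len(pos_sim), dtype=torch.long))

# Toy usage on the 10-frame example above; near_frac=1/3 makes [1, 2, 3] the positives.
emb = F.normalize(torch.randn(10, 64), dim=1)
loss = thresholded_npair_loss(emb, torch.arange(10.0), anchor_idx=0, near_frac=1/3)
```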
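
For option 1a, one way to make the split continuous is a contrastive-style loss where every pair is weighted by its (normalized) timestamp gap instead of a hard near/far cutoff. This is only a sketch of the idea; the weighting scheme and the `margin` value are assumptions we would have to tune:

```python
import torch
import torch.nn.functional as F

def time_weighted_loss(embeddings, timestamps, anchor_idx=0, margin=1.0):
    """Continuous version of option 1a: every other image pulls the anchor
    toward it or pushes it away with strength proportional to how near or
    far it is in time."""
    anchor = embeddings[anchor_idx]
    others = torch.cat([embeddings[:anchor_idx], embeddings[anchor_idx + 1:]])
    other_t = torch.cat([timestamps[:anchor_idx], timestamps[anchor_idx + 1:]])

    # Time gaps normalized to [0, 1]: 0 = same moment, 1 = farthest apart in the batch.
    gaps = (other_t - timestamps[anchor_idx]).abs()
    w = gaps / gaps.max()

    # Embedding distances from the anchor.
    d = (others - anchor).norm(dim=1)

    # Temporally-near images (small w) get pulled in; temporally-far images
    # (large w) get pushed out to at least `margin`.
    pull = ((1 - w) * d.pow(2)).mean()
    push = (w * F.relu(margin - d).pow(2)).mean()
    return pull + push
```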
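
For option 2, PyTorch already provides `torch.nn.TripletMarginLoss`, so most of the work on our side would be mining (anchor, positive, negative) triplets from the timestamps. A rough sketch, with hand-picked indices standing in for real triplet mining:

```python
import torch
import torch.nn.functional as F

# PyTorch ships a triplet loss out of the box; we would just need to build
# triplets where the positive is close to the anchor in time and the negative is far.
triplet_loss = torch.nn.TripletMarginLoss(margin=1.0, p=2)

emb = F.normalize(torch.randn(10, 64), dim=1)  # fake embeddings for the 10 frames above
anchor = emb[0].unsqueeze(0)     # image [0]
positive = emb[1].unsqueeze(0)   # close in time, e.g. [1]
negative = emb[7].unsqueeze(0)   # far in time, e.g. [7]
loss = triplet_loss(anchor, positive, negative)
```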

(Please let us know if this doesn't make any sense / we have a fundamental misunderstanding of what we are doing.)