
Improvement based on training history

This week:

  1. I finished training my first network on the terra dataset. The network was trained on 1,000 random samples from each class, with data augmentation.
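As a rough illustration of that sampling-plus-augmentation step (the class names, counts, and torchvision-style transform choices below are hypothetical, not my actual pipeline):

```python
import random
from collections import defaultdict

import torchvision.transforms as T

# Hypothetical labels; in practice these come from the terra dataset itself.
labels = [f"class_{i % 16}" for i in range(16_000)]

# Group sample indices by class, then draw 1,000 at random from each class.
by_class = defaultdict(list)
for idx, label in enumerate(labels):
    by_class[label].append(idx)
subset = [i for idxs in by_class.values() for i in random.sample(idxs, 1_000)]

# A typical augmentation pipeline applied to each sampled training image.
augment = T.Compose([
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(),
    T.ColorJitter(brightness=0.2, contrast=0.2),
    T.ToTensor(),
])
```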

The result looks like this:

[figure: training and validation accuracy curves]

The fluctuation of the validation accuracy suggests that the dataset is too small or the learning rate is too large. For the first issue, I intend to oversample the minority classes and use all of the data from the majority classes. For the second issue, I intend to use learning rate decay. I have finished the coding, but training the network on all of the data will take longer.
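For the decay, something like PyTorch's StepLR scheduler is what I have in mind. This is only a minimal sketch: the model, learning rate, step size, and decay factor are placeholders, not the values from my experiment.

```python
import torch

# Placeholder model and optimizer; the architecture and hyperparameters
# are illustrative, not the ones from the actual experiment.
model = torch.nn.Linear(512, 16)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# StepLR multiplies the learning rate by `gamma` every `step_size` epochs:
# 0.1 -> 0.01 after epoch 10 -> 0.001 after epoch 20, and so on.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

for epoch in range(30):
    # ... forward/backward passes and optimizer.step() for each batch ...
    scheduler.step()  # decay once per epoch
```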

My confusion about the above is: based on experience, how different do the class sizes have to be before the data should be described as imbalanced? My class sizes range from 1k to 30k. Is it reasonable to oversample the minority classes so that all classes stay in the range of 15k to 30k?
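A minimal sketch of the oversampling scheme I am considering, with made-up class counts spanning the 1k-30k range:

```python
import numpy as np

# Hypothetical class counts; the real ones span roughly 1k to 30k.
class_counts = {"class_a": 1_000, "class_b": 8_000, "class_c": 30_000}
target_floor = 15_000  # oversample any class below this size

rng = np.random.default_rng(seed=0)
for name, n in class_counts.items():
    if n < target_floor:
        # Draw indices with replacement until the class reaches the floor.
        extra = rng.choice(n, size=target_floor - n, replace=True)
        print(f"{name}: {n} originals + {extra.size} resampled duplicates")
    else:
        print(f"{name}: {n} samples, left unchanged")
```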


  2. I have finished the code for the confusion matrix, but at first it could not generate meaningful results because I could not tell the train set and the test set apart across runs. I have solved this problem by fixing the seed in the random split of the train and test sets. I hope we can use
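A minimal sketch of the fix, assuming a PyTorch-style split; the toy dataset and the stand-in predictions are placeholders:

```python
import torch
from torch.utils.data import TensorDataset, random_split
from sklearn.metrics import confusion_matrix

# Toy stand-in dataset; the key line is the seeded generator, which makes
# the train/test split identical on every run.
data = TensorDataset(torch.randn(100, 8), torch.randint(0, 3, (100,)))
gen = torch.Generator().manual_seed(42)
train_set, test_set = random_split(data, [80, 20], generator=gen)

# With a reproducible test set, the confusion matrix is comparable run to run.
y_true = [int(label) for _, label in test_set]
y_pred = y_true  # stand-in for the model's predictions
print(confusion_matrix(y_true, y_pred))
```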


  3. While waiting for the program to run, I studied the paper about finding the best bin and learned the rationale behind PCA. (I finally understand why we need those mathematical procedures.)
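For my own notes, the rationale boils down to a few lines of linear algebra; here is a toy sketch with synthetic 2-D data:

```python
import numpy as np

# The core of PCA: principal directions are the eigenvectors of the data's
# covariance matrix, ordered by eigenvalue (variance explained).
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])

Xc = X - X.mean(axis=0)                 # 1. center the data
cov = Xc.T @ Xc / (len(Xc) - 1)         # 2. sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # 3. eigendecomposition (ascending)

order = np.argsort(eigvals)[::-1]       # 4. sort components by variance
projected = Xc @ eigvecs[:, order[:1]]  # 5. project onto the top component
print("variance explained:", eigvals[order])
```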


The next steps will be:

Waiting for the training of the next network with all of the above improvements, and reading the paper about Isomap as well as the other papers introduced in previous paper-discussion sessions. (I found the record on Slack.)
