Tags: Capstone 9/11

This week I turned in the problem statement.
The previous week I had tested training an AI on plant seedlings using this dataset (https://www.kaggle.com/c/plant-seedlings-classification).
That week I had my crappy 2012 hand-me-down MacBook train a convolutional AI where I just kind of randomly picked the layers. It trained for about 2 hours, had barely completed 1 epoch, and was only around 20% accurate because it was so slow. There were 8 types of plants, so even random guessing would get 12.5%.
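
To give a sense of what "randomly picking the layers" looked like, here is a minimal sketch of the kind of thrown-together Keras CNN I mean. The layer sizes and the 64x64 input shape are assumptions for illustration, not the exact network I ran.

    import tensorflow as tf

    def build_model():
        # A thrown-together CNN: a couple of conv blocks picked without much
        # thought, then dense layers, ending in an 8-way softmax (one output
        # per plant type, so random guessing sits around 12.5%).
        return tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(64, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(8, activation="softmax"),
        ])

    model = build_model()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])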

This week I brought my home computer, which I normally use for games and which has an Nvidia GeForce 1070 card, to my dorm. The hardest part was installing Ubuntu: I had to get a CD and use my MacBook with an external CD reader to burn the installer onto it, after about 4 hours of trying at 1 am on Sunday night. I also had to figure out how to mess with the boot settings and turn off RAID (or some similarly weird setting) until it worked. Then I ran the AI and it was still super slow, since you are supposed to install a driver for TensorFlow to be able to use Nvidia graphics cards. I accidentally bricked the Ubuntu installation while installing the drivers, because I had installed the wrong version and then tried to manually delete it through the terminal and messed something up. After reinstalling Ubuntu, eventually finding the correct driver version, and then remembering that you are supposed to reboot the computer after installing the driver, I got it to train using the GPU.
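
One thing that would have saved me time is checking up front whether TensorFlow can actually see the card, since it silently falls back to the CPU otherwise. Something like the check below; the exact call differs between TensorFlow versions, so treat it as a sketch.

    import tensorflow as tf

    # Newer TensorFlow (2.x): list the GPUs it can see. An empty list means
    # training will quietly run on the CPU, which is what was happening
    # before the right Nvidia driver was installed and the machine rebooted.
    print("GPUs visible to TensorFlow:", tf.config.list_physical_devices("GPU"))

    # On older 1.x releases the rough equivalent is:
    #   print(tf.test.is_gpu_available())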

Before, one epoch took about 30 minutes on the Mac's CPU with a hard drive; it took about 20 seconds on the Dell's GeForce 1070 GPU with an SSD. After roughly 40 seconds the AI was already more accurate than the MacBook's after an hour of training, and the code was exactly the same, I had just copied it over through Google Drive. I quickly wrapped the training code in a while True loop in Python so it would run overnight, then forgot about it: I woke up at like 8:25 the next morning because my alarm was muted for some reason, and didn't remember the AI until statistics class. I went back up during lunch and the following free period, and after training for about 12 hours it was still only at 95%. Also, I hadn't added the code to save the model, so I lost all the training and several cents' worth of the dorm's electricity, but at least in concept it works: if I actually thought about how the layers should be laid out instead of just randomly adding convolutional and dense layers, it would probably do better than 95%. That 95% was on the training data; 10% of the images are set aside and never trained on to detect overfitting, and on that validation data it was only 90%, which is closer to the real accuracy it would get on real-world data.
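
Next time I would skip the while True hack and just ask Keras to train for a lot of epochs with a checkpoint callback, so the weights get saved as it goes and the 10% validation split happens in the same call. A rough sketch, assuming the model from the earlier block, with fake random data standing in for the seedling images and a made-up file name:

    import numpy as np
    import tensorflow as tf

    # Fake data standing in for the real seedling images (assumed 64x64 RGB,
    # 8 classes); the real run would load the Kaggle dataset from disk.
    images = np.random.rand(800, 64, 64, 3).astype("float32")
    labels = np.random.randint(0, 8, size=(800,))

    # Save the best weights as training goes, so an overnight run isn't lost
    # if I forget to save at the end. The file name is a placeholder.
    checkpoint = tf.keras.callbacks.ModelCheckpoint(
        "seedlings_best.h5",
        monitor="val_accuracy",
        save_best_only=True,
    )

    model.fit(
        images, labels,
        epochs=500,               # effectively "run overnight" without a while True
        validation_split=0.1,     # hold out 10% of the images to detect overfitting
        callbacks=[checkpoint],
    )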

But the top person on the leaderboard had 97% on the validation data, using a bunch of extra image-processing code run ahead of time, while I slapped my code together in 30 minutes and got 90% validation accuracy after 12 hours, so this is probably easier than I thought if I just do some more optimizing.
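
The kind of "extra image processing ahead of time" the top entries use is, as far as I can tell, mostly data augmentation: randomly flipping, rotating, and zooming the training images so the network sees more variation. A sketch using Keras's built-in generator; the specific ranges and the "train/" folder path are guesses, not tuned values from my project.

    import tensorflow as tf

    # Random flips/rotations/zooms applied on the fly; each epoch the network
    # sees slightly different versions of the same seedling photos, which
    # usually helps validation accuracy more than training accuracy.
    augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
        rotation_range=30,
        zoom_range=0.2,
        horizontal_flip=True,
        vertical_flip=True,
        validation_split=0.1,
    )

    train_flow = augmenter.flow_from_directory(
        "train/",                 # hypothetical path to the Kaggle training folder
        target_size=(64, 64),
        subset="training",
    )
    val_flow = augmenter.flow_from_directory(
        "train/",
        target_size=(64, 64),
        subset="validation",
    )
    # Training would then use the generators instead of arrays, e.g.:
    #   model.fit(train_flow, validation_data=val_flow, epochs=...)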