When I start working on a machine learning project, my first impulse is always to try fitting some models. By the end of the project I always remember how important exploratory data analysis (EDA) is, and wish I had remembered sooner. Even on projects where EDA doesn't seem necessary, it usually is.
I have been working on an instance detection challenge, and what use could EDA possibly be on a dataset of annotated images? It turns out, a lot. After doing some EDA I found that many of the annotations were wrong, and simply by correcting them I was able to greatly improve my model's performance.
In addition, doing some EDA on the predictions from a fitted model let me identify some common causes of errors and attempt to address them.
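To make this concrete, here is a minimal sketch (not my actual project code) of the kind of sanity checks that catch bad annotations, assuming hypothetical COCO-style `(image_id, x, y, width, height)` boxes and made-up image dimensions:

```python
from collections import Counter

# Hypothetical annotations: (image_id, x, y, width, height)
annotations = [
    (1, 10, 20, 50, 80),
    (1, 0, 0, 0, 0),          # degenerate box: zero area
    (2, 300, 100, 200, 150),
    (2, 300, 100, 200, 150),  # exact duplicate
    (3, -5, 40, 60, 30),      # negative coordinate
]

def suspicious(ann, img_w=640, img_h=480):
    """Return a list of reasons this box looks wrong (empty if it looks OK)."""
    _, x, y, w, h = ann
    reasons = []
    if w <= 0 or h <= 0:
        reasons.append("non-positive size")
    if x < 0 or y < 0 or x + w > img_w or y + h > img_h:
        reasons.append("out of bounds")
    return reasons

# Exact duplicates are another common labeling error worth flagging
dupes = [a for a, n in Counter(annotations).items() if n > 1]
flagged = {a: suspicious(a) for a in annotations if suspicious(a)}
print(len(flagged), "suspicious,", len(dupes), "duplicated")
```

A few lines like these, plus actually drawing the boxes on a sample of images, surface a surprising number of labeling errors before any model is trained.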
I've had good luck with multi-scale training for image detection, so I wanted to try it for classifying images of different sizes with objects at differing scales. I found some base code here, but it is built on plain PyTorch datasets rather than ImageFolders. I wrote some code to extend it to ImageFolders, which is in the gist below:
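For readers who can't see the gist, the core idea can be sketched as follows. This is an illustrative implementation, not the gist's code: each batch is resized to a randomly chosen resolution via a `collate_fn`, which works with a standard `ImageFolder` because the base transform already produces fixed-size tensors. The scale list is an assumption; adjust it to your model.

```python
import random
import torch
import torch.nn.functional as F

# Assumed set of training resolutions; pick ones your backbone handles.
SCALES = [224, 288, 320, 384]

def multiscale_collate(batch):
    """Collate (image, label) pairs, rescaling the batch to a random size.

    Expects every image tensor to have the same fixed shape already
    (e.g. from a Resize transform), so they can be stacked first.
    """
    size = random.choice(SCALES)
    imgs = torch.stack([img for img, _ in batch])
    labels = torch.tensor([label for _, label in batch])
    # Rescale the whole batch to the sampled resolution
    imgs = F.interpolate(imgs, size=(size, size), mode="bilinear",
                         align_corners=False)
    return imgs, labels

# With an ImageFolder dataset it plugs in via DataLoader's collate_fn:
#   dataset = torchvision.datasets.ImageFolder("data/train", transform=...)
#   loader = torch.utils.data.DataLoader(dataset, batch_size=32,
#                                        shuffle=True,
#                                        collate_fn=multiscale_collate)
```

Doing the rescale per batch (rather than per image) keeps all images in a batch the same size, which is what the DataLoader needs to stack them.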
I have some free Azure student credit, so yesterday I decided to try using some Azure VMs to train my models. I soon realized that a student account does not include a quota for any GPU more powerful than a K80, and there is no way to request a quota increase on a student account. However, the student account does include a quota for "low priority" or spot instances, which are preemptible. So I set up a spot VM.
On AWS, spot VMs can sometimes run for days before being preempted. Not so on Azure. I tried about half a dozen times, and no instance ever lasted long enough to complete even half an epoch, or about an hour. This was very disappointing because the spot prices were much better than AWS spot prices. Azure lets you set a maximum price you are willing to pay for a spot instance, but even setting it above the on-demand price made no difference.
My final complaint about Azure VMs is the shortage of machine images. AWS has a huge number of deep learning images, so you can basically start an instance and be ready to go. Azure has only a few such images, and they still required considerable configuration and package installation, which was made especially difficult by the instance constantly shutting down.
I may use Azure on-demand VMs in the future, but the spot instances were largely useless.
I have been using CoLab for quite a few years now and have always really appreciated getting access to GPUs (and TPUs) for free. So when I recently found out about CoLab Pro, I was reluctant to pay $10 a month for something I had been getting for free. At the same time, though, I was paying hundreds of dollars a month for cloud GPU instances. Last week, after going well over my AWS budget the month before, I decided to give CoLab Pro a try, and I am very glad I did.
CoLab Pro gives you priority on high-end GPUs: so far I have always gotten a V100, the same GPU I was paying a $0.90/hour spot (preemptible) rate for on AWS. For me, the main disadvantage of CoLab was that each instance usually lasted only about 10 hours before shutting down, and sessions would time out if left unattended or if I wasn't at the computer. CoLab Pro instances last up to 24 hours, and they do not time out. I had one running at work the other day and figured it would have timed out by the time I got home, but when I went back the next morning it was still running!
Obviously, CoLab Pro is better suited to running experiments than to long training runs, and it doesn't support multiple GPUs. (If you are using TensorFlow you also have TPUs; I prefer PyTorch.) In the past I have repeatedly kicked myself after spending hundreds of dollars training a model, only to find a small mistake. In the future I will run my experiments on CoLab Pro and only use VMs once I am sure everything is correct and I need to train models quickly.