A few days ago they must have changed something in the corporate proxy server at work, because all of a sudden no one could push to or pull from our git remotes. I was getting the error "fatal: unable to access [proxy server address]: Timed out".
After much frustration, it turns out there was a fairly simple solution. I had the http_proxy and https_proxy environment variables set to the proxy server. If I set a "no_proxy" environment variable listing the domains git needs to reach directly, everything works fine.
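For example (the hostnames here are hypothetical placeholders; substitute your actual proxy and git remote hosts):

```sh
# Traffic still goes through the proxy by default...
export http_proxy="http://proxy.example.com:8080"
export https_proxy="http://proxy.example.com:8080"
# ...but these hosts are reached directly, bypassing the proxy.
export no_proxy="git.internal.example.com,github.example.com"
```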
A few weeks ago I decided one morning to spend an hour going through a ReactJS tutorial, because I keep hearing so much about it. After about half an hour I stopped the tutorial and started rewriting something I was working on in React. Since then I've been doing all my web-related work in React and rewriting earlier web projects with it.
I had been trying to train a version of VAE-GAN for a few weeks and it wasn't working as well as I had hoped. As suggested in the VAE-GAN paper, I had added an auxiliary output to the discriminator that attempts to predict the 40 attributes provided with each image in the CelebA dataset, and I was scaling that loss to bring it in line with the GAN discriminator loss. But I was doing the scaling incorrectly, so the auxiliary loss ended up overwhelming the GAN loss: I was summing rather than averaging over the attributes, and the lambda I was using was appropriate for a mean loss, so with 40 attributes the auxiliary loss started out roughly 40x the GAN loss. I needed to divide the lambda by 40 to get the effect I wanted.
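Here's a minimal sketch of the bug and the fix, assuming a PyTorch-style setup with a binary cross-entropy auxiliary loss; the tensors and lambda value below are hypothetical placeholders, not my actual training code:

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-ins: a scalar GAN loss and the discriminator's
# auxiliary predictions for the 40 CelebA attributes.
gan_loss = torch.tensor(0.7)
attr_logits = torch.randn(16, 40)                      # batch of 16
attr_targets = torch.randint(0, 2, (16, 40)).float()

lam = 0.5  # a lambda tuned for a *mean* auxiliary loss

# What I was doing: summing over the 40 attributes. This term is
# roughly 40x the size of a mean loss, so it swamps the GAN loss.
aux_sum = F.binary_cross_entropy_with_logits(
    attr_logits, attr_targets, reduction="sum") / attr_logits.shape[0]
bad_total = gan_loss + lam * aux_sum

# The fix: average over the attributes too (equivalently, divide
# lambda by 40) so the two terms are on the same scale.
aux_mean = F.binary_cross_entropy_with_logits(
    attr_logits, attr_targets, reduction="mean")
good_total = gan_loss + lam * aux_mean
```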
After correcting that error I am finally making some progress with these models. Below are sample images from two models I am training: the first outputs images at 160x160, the second at 128x128.
I guess the moral of this story is: if something isn't working the way you expect it to, double-check your math before you continue training!
This paper, released over the summer, describes a newly discovered method for obtaining eigenvectors from eigenvalues. While the method only works for Hermitian matrices, previous methods for computing eigenvectors were far more complicated and costly. Although conceptually simple, finding the dominant eigenvector of a matrix can be quite expensive, and that process had to be repeated, removing the dominant eigenvector from the matrix each time, in order to compute additional eigenvectors.
This new method shows that there is a straightforward relationship between the squared norms of an eigenvector's components, the eigenvalues of the matrix, and the eigenvalues of its submatrices. I can't stress enough how amazing this is. This will require that all linear algebra textbooks be revised.
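Concretely, for an $n \times n$ Hermitian matrix $A$ with eigenvalues $\lambda_i(A)$ and normalized eigenvectors $v_i$, and with $M_j$ the submatrix of $A$ obtained by deleting the $j$-th row and column, the identity from the paper is:

$$|v_{i,j}|^2 \prod_{\substack{k=1 \\ k \neq i}}^{n} \big(\lambda_i(A) - \lambda_k(A)\big) = \prod_{k=1}^{n-1} \big(\lambda_i(A) - \lambda_k(M_j)\big)$$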
I have a numpy implementation of this new method available here.
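For the curious, a minimal numpy sketch of the computation looks something like the following; this is an illustration rather than the linked code, and it assumes the eigenvalues of A are distinct (the left-hand product vanishes otherwise):

```python
import numpy as np

def eigvec_sq_magnitudes(A):
    """|v_{i,j}|^2 for a Hermitian matrix A with distinct eigenvalues,
    computed from eigenvalues of A and its submatrices alone."""
    n = A.shape[0]
    lam = np.linalg.eigvalsh(A)                  # eigenvalues of A, ascending
    V2 = np.empty((n, n))
    for j in range(n):
        # M_j: the submatrix with row j and column j removed.
        Mj = np.delete(np.delete(A, j, axis=0), j, axis=1)
        mu = np.linalg.eigvalsh(Mj)              # eigenvalues of the minor
        for i in range(n):
            num = np.prod(lam[i] - mu)
            den = np.prod(lam[i] - np.delete(lam, i))
            V2[i, j] = num / den
    return V2

# Quick check against a direct eigendecomposition:
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))
A = (X + X.T) / 2                                # real symmetric, i.e. Hermitian
_, vecs = np.linalg.eigh(A)
assert np.allclose(eigvec_sq_magnitudes(A), vecs.T ** 2)
```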