Flux.jl is a machine learning framework built in Julia. It has some similarities to PyTorch and, like most modern frameworks, includes automatic differentiation. It’s definitely still a work in progress, but it is being actively developed (including several GSoC projects this summer).
I was curious about how easy/difficult it might be to convert a PyTorch model into Flux.jl. I found a fairly simple PyTorch tutorial on RNNs to translate. My goal here isn’t to explain RNNs (see the linked article for that) - my intent is to see what is required to go from the PyTorch/Python ecosystem to the Flux.jl/Julia ecosystem.
Let’s start with the PyTorch code:
*(embedded gist: the tutorial’s PyTorch code)*
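In outline, the tutorial builds a small RNN by hand and trains it to predict the next point on a sine wave. Here’s a condensed sketch of that approach - the sizes, names, and loop details are illustrative, not the exact gist:

```python
import numpy as np
import torch
from torch.autograd import Variable
import matplotlib.pyplot as plt

# Illustrative sizes: w1 sees the current input concatenated with
# the previous hidden state, so input_size = 1 + hidden_size.
input_size, hidden_size, output_size = 7, 6, 1
seq_length = 20
lr = 0.1

# Training data: points on a sine wave; the target is the next point.
time_steps = np.linspace(2, 10, seq_length + 1)
data = np.sin(time_steps).reshape(seq_length + 1, 1)

x = Variable(torch.Tensor(data[:-1]))
y = Variable(torch.Tensor(data[1:]))

# Wrapping the weights in Variable (requires_grad=True) tells
# autograd to track them.
w1 = Variable(torch.randn(input_size, hidden_size) * 0.1, requires_grad=True)
w2 = Variable(torch.randn(hidden_size, output_size) * 0.1, requires_grad=True)

def forward(inp, context_state, w1, w2):
    xh = torch.cat((inp, context_state), 1)
    context_state = torch.tanh(xh.mm(w1))
    out = context_state.mm(w2)
    return out, context_state

for epoch in range(300):
    context_state = Variable(torch.zeros(1, hidden_size))
    for i in range(x.size(0)):
        pred, context_state = forward(x[i:i+1], context_state, w1, w2)
        loss = (pred - y[i:i+1]).pow(2).sum() / 2
        loss.backward()
        w1.data -= lr * w1.grad.data
        w2.data -= lr * w2.grad.data
        w1.grad.data.zero_()
        w2.grad.data.zero_()
        # Detach the hidden state so the graph doesn't grow across steps.
        context_state = Variable(context_state.data)

# Plot predictions against the actual series.
context_state = Variable(torch.zeros(1, hidden_size))
predictions = []
for i in range(x.size(0)):
    pred, context_state = forward(x[i:i+1], context_state, w1, w2)
    predictions.append(pred.data.numpy().ravel()[0])

plt.scatter(time_steps[:-1], x.data.numpy(), label="actual")
plt.scatter(time_steps[1:], predictions, label="predicted")
plt.legend()
plt.show()
```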
And now the Flux.jl version in Julia:
*(embedded gist: the Flux.jl translation; the line numbers mentioned below refer to this gist)*
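Again in outline - a condensed sketch rather than the exact gist, assuming the Tracker-based Flux API (param, back!, and the .data/.grad fields) that was current at the time:

```julia
using Flux
using Plots

input_size, hidden_size, output_size = 7, 6, 1
seq_length = 20
lr = 0.1

# Training data: sin. broadcasts over the range - no numpy needed.
time_steps = range(2, stop = 10, length = seq_length + 1)
data = sin.(time_steps)

x = data[1:end-1]   # plain Julia arrays, not tracked
y = data[2:end]

# param() marks the weights as tracked for gradient computation.
W1 = param(randn(input_size, hidden_size) .* 0.1)
W2 = param(randn(hidden_size, output_size) .* 0.1)

function forward(input, context_state, W1, W2)
    xh = hcat(input, context_state)
    context_state = tanh.(xh * W1)   # tanh. broadcasts over the matrix
    out = context_state * W2
    return out, context_state
end

function train!(x, y, W1, W2; epochs = 300)
    for epoch in 1:epochs
        context_state = zeros(1, size(W1, 2))
        for i in 1:length(x)
            input = reshape([x[i]], 1, 1)
            pred, context_state = forward(input, context_state, W1, W2)
            loss = sum((pred .- y[i]) .^ 2) / 2
            Flux.back!(loss)
            # .-= broadcasts the scaled gradient over each weight matrix.
            W1.data .-= lr .* W1.grad; W1.grad .= 0
            W2.data .-= lr .* W2.grad; W2.grad .= 0
            # Drop tracking on the hidden state between steps.
            context_state = Flux.data(context_state)
        end
    end
end

function predict(x, W1, W2)
    context_state = zeros(1, size(W1, 2))
    preds = Float64[]
    for i in 1:length(x)
        pred, context_state = forward(reshape([x[i]], 1, 1), context_state, W1, W2)
        push!(preds, Flux.data(pred)[1])
        context_state = Flux.data(context_state)
    end
    return preds
end

train!(x, y, W1, W2)
preds = predict(x, W1, W2)

# The plotting looks much like the matplotlib version.
scatter(time_steps[1:end-1], x, label = "actual")
scatter!(time_steps[2:end], preds, label = "predicted")
```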
The Flux.jl/Julia version is very similar to the PyTorch/Python version. A few notable differences:
- Numpy functionality is built into Julia. No need to import numpy.
- torch.Variable maps to Flux.param.
- x and y have type torch.Variable in the PyTorch version, while they’re just regular built-in matrices on the Julia side.
- Flux.param(var) indicates that the variable var will be tracked for the purposes of determining gradients (just as torch.Variable does); there’s a short example of this after the list.
- I did run into a bug in Flux.jl; you’ll notice the workaround on line 24 of the gist. Once the bug is fixed, you’ll be able to uncomment line 22 and eliminate line 24. The bug has to do with how certain tracked collections are converted to scalar types. The tracking is required for backpropagation, and the problem was that the input passed into the forward function picked up another level of unnecessary tracking each time forward was called.
- A ‘.’ before an operator (such as at line 41 in the Julia gist) indicates a broadcasting operation in Julia. Note also the ‘.’ after tanh at line 25: it indicates that tanh is broadcast over the matrix that results from xh*W1. (From what I can tell, numpy automatically determines whether an operation should broadcast based on the dimensions of the operands - Julia is more explicit about this.) There’s a small broadcasting example after the list.
- Even the plotting at the end is very similar between the two versions.
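To make the tracking correspondence concrete, here it is in miniature (again assuming the Tracker-era Flux API; the names are illustrative):

```julia
using Flux

W = param(rand(2, 2))   # tracked: Flux records operations involving W
x = rand(2)             # plain array: not tracked

loss = sum(W * x)       # a tracked scalar
Flux.back!(loss)        # backpropagate through the recorded operations
W.grad                  # d(loss)/dW, same shape as W
```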
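And a standalone illustration of the broadcasting syntax (not from the gist):

```julia
A = [1.0 2.0; 3.0 4.0]
v = [10.0, 20.0]

tanh.(A)        # '.' after a function: apply tanh elementwise
A .+ v          # '.' before an operator: v is broadcast across A's columns
A .-= 0.5 .* A  # in-place broadcast update, as in a gradient step
```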
In the next post I’ll modify the Julia version to use the GPU.