In the previous post I translated a simple PyTorch RNN to Flux.jl, a machine learning framework for Julia.
Here’s the Julia code modified to use the GPU (and refactored a bit from the previous version; I’ve put the prediction section into a predict function):
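The full script is a few dozen lines, so what follows is a condensed sketch of the relevant part rather than the exact listing: the hyperparameter values shown are representative rather than exact, the training loop and the predict function are left out, and the layout is arranged so that the line numbers mentioned below line up with the statements they describe.

# simple sine-wave RNN from the previous post, now targeting the GPU
using Flux
using CUDAnative
using CuArrays        # comment this out to run on the CPU (see the tanh note below)

# hyperparameters, in the spirit of the PyTorch original
input_size, hidden_size, output_size = 7, 6, 1
epochs = 300
seq_length = 20
lr = 0.1

# training data: points on a sine wave, predicted one step ahead
data_time_steps = range(2, stop=10, length=seq_length + 1)
data = sin.(data_time_steps)

x = gpu(data[1:end-1])
y = gpu(data[2:end])

w1 = param(gpu(randn(input_size, hidden_size)))
w2 = param(gpu(randn(hidden_size, output_size)))

# one RNN step: concatenate the input with the context state,
# squash it through tanh, then project down to a single output
function forward(input, context_state, W1, W2)
    xh = hcat(input, context_state)
    context_state = CUDAnative.tanh.(xh*W1)
    out = context_state*W2
    return out, context_state
end

# (training loop and predict function omitted from this sketch)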
On line 4 we’re now using CuArrays, which pulls in CUDA-backed arrays from CuArrays.jl.
To indicate that we want some data on the GPU, we wrap it in the Flux.gpu() function, as we do for the x and y assignments on lines 16 & 17. Note that the weights w1 and w2 also need to be tracked parameters for the sake of backpropagation, so for those assignments (lines 19 & 20) we call gpu on the randn matrices first and then pass the result to param.
The other significant change is in the forward() function, where the call to the activation function (line 26) must change from:
context_state = tanh.(xh*W1)
to:
context_state = CUDAnative.tanh.(xh*W1)
This wasn’t well documented in Flux or CuArrays, but without this change I got this rather unhelpful error message:
ERROR: LoadError: CUDA error: invalid program counter (code #718, ERROR_INVALID_PC)
So apparently we need to call the tanh defined in CUDAnative in order for the operation to actually run on the GPU.
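As a sanity check, the CUDAnative version broadcasts over a plain CuArray outside of the model as well; here is a small standalone snippet (my own, not part of the RNN script) that exercises just that path:

using CuArrays, CUDAnative

a = cu(randn(Float32, 4))    # a small array living on the GPU
b = CUDAnative.tanh.(a)      # the broadcast compiles a GPU kernel that calls the device tanh
println(collect(b))          # copy the result back to the host to inspect it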
It’s important to note that if you comment out the using CuArrays on line 4, the calls to gpu don’t do anything and the operations will run on the CPU as if those calls weren’t there.
So, how did running this model on the GPU affect performance? It looks like it actually made things slower. With the GPU enabled (line 4 uncommented):
real 0m30.648s
user 0m30.005s
sys 0m1.068s
And without the GPU enabled (line 4 commented and line 26 changed to use tanh() instead of CUDAnative.tanh()):
real 0m17.032s
user 0m17.091s
sys 0m0.452s
I think this is because this is a fairly small model, so the overhead of moving data to and from the GPU outweighs any speedup from doing the computation there.
I do kind of wish we could switch between GPU and CPU more easily - some of the Python frameworks seem to make this less painful. Calls to CUDAnative.op() have to be changed back to op() when running on the CPU (as with the tanh() call above). Perhaps some change in CuArrays.jl could get around this, such that if you’re using CuArrays you automatically get CUDAnative.op() instead of op() whenever the operands are CuArrays?
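In the meantime, one way I could imagine papering over this in my own script (USE_GPU and act are names I’m making up here, not anything provided by Flux or CuArrays) is to pick the op once near the top and refer to the alias everywhere else:

# Hypothetical switch: flip USE_GPU to move between CPU and GPU.
const USE_GPU = true

if USE_GPU
    using CuArrays, CUDAnative
end

# Pick the activation once; line 26 then becomes context_state = act.(xh*W1)
# and nothing else in the model has to change.
const act = USE_GPU ? CUDAnative.tanh : tanh

The gpu() calls can stay put in either mode, since they fall back to no-ops when CuArrays isn’t loaded.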