Neuromatch<p>Graphics processing unit <a href="https://neuromatch.social/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a> (W1D1)</p><p>When using Colab notebooks, you will not, by default, have access to a GPU. To start using GPUs, we need to request one. We can do this from the Runtime tab at the top of the page.</p><p>By following Runtime → Change runtime type and selecting GPU from the Hardware Accelerator dropdown list, we can start playing with sending tensors to GPUs.</p><p>Once you have done this, your runtime will restart, and you will need to rerun the first setup cell to re-import <a href="https://neuromatch.social/tags/PyTorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PyTorch</span></a>. Then proceed to the next cell.</p><p>For more information on the GPU usage policy, see the Appendix.</p><p>Compute Unified Device Architecture <a href="https://neuromatch.social/tags/CUDA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CUDA</span></a></p><p>CUDA is an API developed by Nvidia for interfacing with GPUs. PyTorch provides us with a layer of abstraction, allowing us to launch CUDA kernels from pure Python.</p><p>In short, we get the power of parallelizing our <a href="https://neuromatch.social/tags/tensor" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tensor</span></a> computations on GPUs whilst writing only (relatively) simple Python!</p><p>Here, we define the function set_device(), which returns the device used in the notebook, i.e. 'cpu' or 'cuda'. Unless otherwise specified, we call this function at the top of every tutorial and store the result in a variable:</p><p>DEVICE = set_device()</p><p>Let's define the function using the PyTorch subpackage torch.cuda, which is lazily initialized, so we can always import it, and use is_available() to determine whether our system supports CUDA.
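</p><p>A minimal sketch of what such a set_device helper might look like (the course notebook's version may differ, e.g. in the exact warning it prints):</p>

```python
import torch

def set_device():
    """Return the device to use in the notebook: 'cuda' if available, else 'cpu'.

    torch.cuda is lazily initialized, so importing torch is always safe;
    torch.cuda.is_available() reports whether the system supports CUDA.
    """
    device = "cuda" if torch.cuda.is_available() else "cpu"
    if device != "cuda":
        print("GPU is not enabled in this notebook.")
    return device

# Stored once at the top of each tutorial, as described above.
DEVICE = set_device()
```

<p>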
</p><p>Operations between CPU tensors and CUDA tensors<br>Note that the type of the tensor changed after calling .to(). What happens if we try to perform operations on tensors that live on different devices? We cannot combine CUDA tensors and CPU tensors in this fashion. If we want to compute an operation that combines tensors on different devices, we need to move them first! We can use the .to() method as before, or the .cpu() and .cuda() methods. Note that calling .cuda() will throw an error if CUDA is not enabled on your machine.</p><p>Generally, in this course, all <a href="https://neuromatch.social/tags/DeepLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DeepLearning</span></a> is done on the GPU, and any other computation is done on the CPU, so sometimes we have to pass data back and forth.</p><p><a href="https://neuromatch.social/tags/NeuromatchAcademy" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NeuromatchAcademy</span></a> <a href="https://neuromatch.social/tags/neuromatch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>neuromatch</span></a> <a href="https://neuromatch.social/tags/neuromatchstodon" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>neuromatchstodon</span></a></p>
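<p>The device-mismatch rules above can be sketched as follows; this is a minimal illustration rather than the course's exact code, and the CUDA branch only runs when a GPU is actually available:</p>

```python
import torch

# Tensors are created on the CPU by default.
x = torch.ones(3)
print(x.device)  # cpu

if torch.cuda.is_available():
    y = torch.ones(3, device="cuda")  # created directly on the GPU
    # x + y would raise a RuntimeError: the operands live on different devices.
    # Move one tensor first, using .to(), .cpu(), or .cuda():
    z_gpu = x.to("cuda") + y  # both operands now on the GPU
    z_cpu = x + y.cpu()       # or both on the CPU
    print(z_gpu.device, z_cpu.device)
```

<p>Since .cuda() errors out on a machine without CUDA, the portable pattern is .to(DEVICE) with the result of set_device().</p>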