Setting Up a PyTorch Environment for GPU
This page describes how to set up a ROCm-enabled PyTorch environment on a machine with a supported AMD GPU.
Prerequisites
GPUs: AMD Instinct™ accelerators, AMD Radeon™ RX graphics cards, or AMD Radeon™ PRO graphics cards
Linux
To simplify the installation process, this guide uses a pre-built Docker container. We provide a Dockerfile that builds on top of the official PyTorch Docker image.
The first step is to install Docker on your Linux machine; follow the official installation instructions, for instance Install Docker Engine on Ubuntu.
Once installed, make sure to check and apply the Linux post-installation steps for Docker Engine.
Clone this repository:
git clone https://github.com/AMDResearch/aup-ai-tutorials.git
Navigate to the docker folder and build the Docker image:
cd aup-ai-tutorials/docker
./build.sh
Note
This process can take at least 10 minutes, depending on your Internet connection.
Once built, launch the Docker container:
./run.sh
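The run.sh wrapper is a convenience script. As a rough illustration, launching a ROCm container of this kind typically means passing the AMD GPU device nodes through to Docker; the sketch below is hypothetical (the actual script and image tag in the repository may differ):

```shell
# Hypothetical sketch of what a ROCm run script typically does;
# the repository's run.sh and the image tag may differ.
# --device flags expose the AMD GPU device nodes to the container,
# --group-add video grants GPU access to non-root users, and
# -p forwards the Jupyter Lab port to the host.
docker run -it --rm \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --security-opt seccomp=unconfined \
  -p 8888:8888 \
  aup-ai-tutorials
```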
Once inside the Docker container, check that the GPU is detected:
check_gpu
Note
check_gpu is an alias defined in ~/.bashrc that executes python -c "import torch; print(f'GPU detected: {torch.cuda.is_available()}')"
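The one-liner behind check_gpu can be expanded into a slightly more informative script. This is a sketch; the gpu_status helper below is illustrative and not part of the repository:

```python
import importlib.util

def gpu_status() -> str:
    """Report whether PyTorch can see a GPU.

    ROCm builds of PyTorch expose the GPU through the torch.cuda API,
    so torch.cuda.is_available() works on AMD GPUs as well.
    """
    if importlib.util.find_spec("torch") is None:
        return "PyTorch is not installed"
    import torch
    return f"GPU detected: {torch.cuda.is_available()}"

print(gpu_status())
```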
Now you can launch Jupyter Lab. Navigate to the folder containing the repository and run (from inside the Docker container):
cd /ROCM_APP/aup-ai-tutorials
launch_jupyter
Note
launch_jupyter is an alias defined in ~/.bashrc that executes jupyter lab --ip='0.0.0.0' --allow-root --NotebookApp.token='' --NotebookApp.password=''
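On newer Jupyter Server releases the NotebookApp options above are deprecated in favor of ServerApp equivalents; an equivalent invocation (a sketch, assuming a recent jupyter lab) would be:

```shell
# Equivalent launch on newer Jupyter Server releases, where the
# NotebookApp options are deprecated in favor of ServerApp.
jupyter lab --ip=0.0.0.0 --allow-root --ServerApp.token='' --ServerApp.password=''
```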
On the host machine, launch a web browser and open localhost:8888/lab.
Windows
AI frameworks are not currently supported on Windows. See the list of Windows ROCm Component Support.
Copyright (C) 2025 Advanced Micro Devices, Inc. All rights reserved.
SPDX-License-Identifier: MIT