Generative AI on AMD platforms#

This section covers popular generative AI applications that you can run on your local machine.

Text generation web UI#

git clone https://github.com/oobabooga/text-generation-webui.git
cd text-generation-webui/

./start_linux.sh --listen-port 8888

In your browser, open localhost:8888. Go to the Model tab and, in the Download panel on the right, enter the Hugging Face model of your choice, for example lmsys/vicuna-7b-v1.5, then click Download. Once the download finishes, load the model and return to the Chat tab.
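Besides the browser UI, the server can also be queried programmatically. Below is a minimal sketch, assuming the server was started with the --api flag, which in recent versions exposes an OpenAI-compatible chat endpoint (by default on port 5000); the port and model behavior may differ in your setup.

```python
import json
import urllib.request

# Default API port when text-generation-webui is launched with --api;
# adjust if you changed it with --api-port.
API_URL = "http://localhost:5000/v1/chat/completions"

def build_chat_request(prompt, max_tokens=200):
    """Build the (url, payload) pair for the OpenAI-compatible chat endpoint."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return API_URL, payload

def send_chat(prompt):
    """POST the chat request and return the model's reply text."""
    url, payload = build_chat_request(prompt)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

With the server running and a model loaded, `send_chat("What is ROCm?")` returns the generated reply as a string.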

For more information on this tool, visit the oobabooga/text-generation-webui repository.

ComfyUI#

git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
wget https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O models/checkpoints/v1-5-pruned-emaonly.ckpt # download checkpoint
python main.py

Open a web browser on your host machine and start experimenting. A getting-started guide is available at https://docs.comfy.org/get_started/gettingstarted.
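ComfyUI can also be driven without the browser by POSTing a workflow to its HTTP endpoint. A minimal sketch, assuming the default listen address of 127.0.0.1:8188 and a workflow file previously exported from the UI via "Save (API Format)" (the filename workflow_api.json here is just a placeholder):

```python
import json
import urllib.request

# ComfyUI's default listen address; change if you started main.py with
# --listen or --port.
COMFY_URL = "http://127.0.0.1:8188/prompt"

def build_queue_request(workflow, client_id="local-script"):
    """Build the JSON body the /prompt endpoint expects."""
    return {"prompt": workflow, "client_id": client_id}

def queue_prompt(workflow, url=COMFY_URL):
    """Queue a workflow for execution and return the server's response."""
    body = json.dumps(build_queue_request(workflow)).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # The response includes a prompt_id you can use to poll for results.
        return json.load(resp)

# Usage, with the server running:
#   with open("workflow_api.json") as f:      # hypothetical exported workflow
#       print(queue_prompt(json.load(f)))
```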

InvokeAI#

cd /tmp
wget https://github.com/invoke-ai/InvokeAI/releases/download/v4.2.7post1/InvokeAI-installer-v4.2.7post1.zip -O invokeai.zip
unzip invokeai.zip
./InvokeAI-Installer/install.sh -r /ROCM_APP/invokeai
/ROCM_APP/invokeai/invoke.sh
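All three tools above run on PyTorch, so if generation falls back to CPU, a quick first debugging step is to confirm that the installed PyTorch is a ROCm build and can see the AMD GPU. A small sketch (on ROCm builds, torch.version.hip is set and the torch.cuda API maps to HIP):

```python
def rocm_status():
    """Return a short status string describing AMD GPU availability in PyTorch."""
    try:
        import torch
    except ImportError:
        return "pytorch-missing"
    # torch.version.hip is None on CUDA/CPU-only builds.
    if getattr(torch.version, "hip", None) is None:
        return "not-a-rocm-build"
    if not torch.cuda.is_available():
        return "no-gpu-visible"
    return "ok: " + torch.cuda.get_device_name(0)

if __name__ == "__main__":
    print(rocm_status())
```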

Copyright (C) 2025 Advanced Micro Devices, Inc. All rights reserved.

SPDX-License-Identifier: MIT