Gemma 2#

Gemma 2 is a family of lightweight, open large language models from Google, built from the same research and technology used to create the Gemini models.

🛠️ Supported Hardware#

This notebook can run on a CPU or a GPU.

✅ AMD Instinct™ Accelerators
✅ AMD Radeon™ RX/PRO Graphics Cards
⚠️ AMD EPYC™ Processors
⚠️ AMD Ryzen™ (AI) Processors

Suggested hardware: AMD Instinct™ Accelerators. This notebook can also run on a CPU, but inference on a CPU will be slow.

🎯 Goals#

  • Show you how to download a model from HuggingFace

  • Run Gemma2 on an AMD platform

  • Prompt the model

🚀 Run Gemma2 on an AMD Platform#

Import the necessary packages

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

To use Gemma 2, you need to request access to the model on Hugging Face and log in to Hugging Face from this notebook.

Authenticate with your Hugging Face access token: either pass it as an argument to the login function, or enter it in the text box when prompted. Make sure you uncheck Add token as git credential.

from huggingface_hub import login
login(token=None)
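If you prefer not to paste the token interactively, one option is to read it from an environment variable. This is a minimal sketch that assumes you export your token as `HF_TOKEN` beforehand (the variable name is a convention, not a requirement):

```python
import os

# Assumes you ran e.g. `export HF_TOKEN=hf_...` before starting the notebook.
token = os.environ.get("HF_TOKEN")
if token:
    from huggingface_hub import login
    login(token=token)
else:
    print("HF_TOKEN is not set; call login() interactively instead")
```

This keeps the token out of the notebook itself, which matters if you share or commit it.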

Check whether a GPU is available for acceleration.

Note

Running the model on a GPU is strongly recommended. If your device is cpu, token generation will be slow.

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f'{device=}')
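On ROCm builds of PyTorch, the familiar `torch.cuda` APIs transparently target AMD GPUs, so the check above works unchanged on Instinct and Radeon hardware. As an optional sanity check, you can inspect the build information (`torch.version.hip` is a version string on ROCm wheels and `None` on CUDA or CPU-only wheels):

```python
import torch

# torch.version.hip identifies a ROCm build; torch.cuda.device_count()
# reports how many accelerators PyTorch can see.
print(f"HIP runtime: {torch.version.hip}")
print(f"Visible GPUs: {torch.cuda.device_count()}")
```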

Download the model

model_id = "google/gemma-2-9b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map=device,
    torch_dtype=torch.bfloat16
)

Ask Gemma 2 to write a poem about machine learning. Note how we tokenize the prompt and then decode the model's response back into text.

prompt_text = "Write me a poem about Machine Learning."
inputs = tokenizer(prompt_text, return_tensors="pt").to(device)

outputs = model.generate(**inputs, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
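If tokenization is new to you, the encode/decode round trip can be illustrated with a toy word-level vocabulary. This is a pure-Python sketch for intuition only, not the real Gemma tokenizer (which uses subword units and a much larger vocabulary):

```python
# Toy illustration: tokenization maps text to integer IDs, decoding maps
# IDs back to text. Real tokenizers split into subwords, not whole words.
vocab = {"Write": 0, "me": 1, "a": 2, "poem": 3, ".": 4}
inv_vocab = {i: w for w, i in vocab.items()}

def toy_encode(words):
    return [vocab[w] for w in words]

def toy_decode(ids):
    return " ".join(inv_vocab[i] for i in ids)

ids = toy_encode(["Write", "me", "a", "poem"])
print(ids)              # [0, 1, 2, 3]
print(toy_decode(ids))  # Write me a poem
```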

You can also inspect the raw token IDs generated by the model

outputs[0][:32]

Copyright (C) 2025 Advanced Micro Devices, Inc. All rights reserved.

SPDX-License-Identifier: MIT