# Single-Node Deployment

This guide describes single-node deployment of AUP Learning Cloud using the `auplc-installer` script on the `develop` branch. This deployment is suitable for development, testing, and demo environments.

> **See also:** For the shortest path, see the Quick Start guide.
## Prerequisites

### Hardware Requirements

- **Device:** Supported AMD GPU or APU (select your device in the Installation section below). Examples:
  - Radeon PRO: AI PRO R9700/R9600D
  - Radeon: RX 9070/9060 series
  - Ryzen AI: Max+ PRO 395, Max PRO 390/385/380, Max+ 395, Max 390/385, 9 HX 375/370, 9 365
- **Memory:** 32 GB+ RAM (64 GB recommended for production-like testing)
- **Storage:** 500 GB+ SSD
- **Network:** Stable internet connection for downloading images
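Before installing, you can sanity-check RAM and free disk space against these requirements. A minimal sketch for a Linux host:

```shell
# Sanity-check host resources against the requirements above
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')
echo "RAM: ${mem_gb} GB (want 32+), free disk on /: ${disk_gb} GB (want 500+)"

# List PCI display devices to confirm the AMD GPU/APU is visible
lspci | grep -Ei 'vga|display' || true
```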
### Software Requirements

- **Operating System:** Ubuntu 24.04.3 LTS
- **Docker:** Version 20.10 or later (required for the default Docker-as-runtime mode)
- **Root/Sudo Access:** Required to run the installer
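A quick way to confirm the host matches these requirements, using standard Ubuntu tooling:

```shell
# OS release (expect Ubuntu 24.04.x)
grep PRETTY_NAME /etc/os-release

# Docker version, if already installed (expect 20.10+; see step 2 otherwise)
docker --version 2>/dev/null || echo "Docker not installed yet"

# Confirm you can obtain root privileges
sudo -v && echo "sudo OK"
```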
## Installation with auplc-installer

On the `develop` branch, single-node installation is done with the `auplc-installer` script at the repository root.

### 1. Package dependency

Install build tools (required for building container images):

```shell
sudo apt install build-essential
```
### 2. Install Docker

By default, Docker is used as the K3s container runtime (backend). If Docker is already installed and your user is in the `docker` group, skip this step.

```shell
# Install Docker
curl -fsSL https://get.docker.com | sh

# Add the current user to the docker group
sudo usermod -aG docker $USER

# Apply group changes (or log out and back in)
newgrp docker

# Verify installation
docker --version
```

See Docker Post-installation Steps for detailed configuration.
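To confirm the daemon works and the group change took effect, a quick smoke test (pulls the small `hello-world` image):

```shell
# Should run without sudo if the docker group membership is active
docker run --rm hello-world

# Print the server version Docker reports
docker info --format '{{.ServerVersion}}'
```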
### 3. Clone the repository and run the installer

Pass the `--gpu` value (`GPU_TYPE`) that matches your AMD device family. The example below uses `rdna4`; see step 5 for other values.

```shell
git clone https://github.com/AMDResearch/aup-learning-cloud.git
cd aup-learning-cloud && chmod +x auplc-installer
sudo ./auplc-installer install --gpu=rdna4
```
After installation completes, open http://localhost:30890 in your browser. The default uses auto-login — no credentials required.
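First startup can take a few minutes while images load, so you may prefer to poll the endpoint instead of refreshing the browser. A sketch assuming the default NodePort 30890:

```shell
# Wait up to ~5 minutes for JupyterHub to answer on the NodePort
for i in $(seq 1 30); do
  if curl -fsS -o /dev/null http://localhost:30890; then
    echo "JupyterHub is up"
    break
  fi
  sleep 10
done
```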
### 4. auplc-installer commands

| Command | Description |
|---|---|
| `install` | Full installation (K3s, tools, GPU plugin, images, JupyterHub) |
| `uninstall` | Remove K3s and all components |
|  | Install Helm and K9s only |
| `rt install` | Deploy JupyterHub runtime only |
| `rt upgrade` | Upgrade JupyterHub (e.g. after editing `runtime/values.yaml`) |
| `rt remove` | Remove JupyterHub runtime only |
| `rt reinstall` | Remove and reinstall JupyterHub (e.g. after image changes) |
| `img build` | Build all custom container images |
| `img build …` | Build specific images (e.g. …) |
| `img pull` | Pull external images for offline use |

Legacy long-form commands are still supported: `install-runtime`, `remove-runtime`, `upgrade-runtime`, `build-images`, `pull-images`.
Examples:

```shell
# Upgrade JupyterHub after changing runtime/values.yaml
sudo ./auplc-installer rt upgrade

# Rebuild images and reinstall the runtime after Dockerfile changes
sudo ./auplc-installer img build
sudo ./auplc-installer rt reinstall

# Show all options
./auplc-installer help
```
### 5. Runtime and mirror configuration

The installer supports `--flag=value` options (or equivalent environment variables):

| Flag | Env variable | Default | Description |
|---|---|---|---|
| `--gpu` | `GPU_TYPE` | auto-detect | GPU type (e.g. `rdna4`, `strix-halo`, `phx`) |
| `--docker` |  | `1` | Container runtime: `1` = Docker, `0` = containerd |
| `--mirror` |  | — | Registry mirror host (e.g. `mirror.example.com`) |
| `--mirror-pip` |  | — | PyPI mirror URL |
|  |  | — | npm registry URL |
Examples:

```shell
# Specify the GPU type explicitly
sudo ./auplc-installer install --gpu=strix-halo

# Use the containerd runtime instead of Docker
sudo ./auplc-installer install --docker=0

# Use registry and PyPI mirrors
sudo ./auplc-installer install --mirror=mirror.example.com --mirror-pip=https://pypi.tuna.tsinghua.edu.cn/simple

# Flags can be combined and placed anywhere
sudo ./auplc-installer install --gpu=phx --docker=0 --mirror=mirror.example.com
```
- **Docker mode (default, `--docker=1`):** images built with `make hub` are immediately visible to K3s after `rt upgrade`; no export needed. Requires Docker installed on the host.
- **containerd mode (`--docker=0`):** images are exported to the K3s image directory for offline/portable deployments.
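Because every flag has an environment-variable equivalent, the two invocations below should behave the same; `GPU_TYPE` is the variable documented for `--gpu` (check `./auplc-installer help` for the other variable names):

```shell
# Flag form
sudo ./auplc-installer install --gpu=rdna4

# Environment-variable form (GPU_TYPE mirrors --gpu)
sudo GPU_TYPE=rdna4 ./auplc-installer install
```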
### 6. Configure runtime (optional)

To customize auth, images, storage, network, and other options, edit `runtime/values.yaml`. For all available settings and the recommended workflow, see the Configuration Reference: `runtime/values.yaml`.

After editing, run:

```shell
sudo ./auplc-installer rt upgrade
```
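To watch the change roll out, you can follow the hub deployment; `hub` is the usual deployment name in JupyterHub Helm deployments, so adjust if your release differs:

```shell
# Follow the hub rollout triggered by the upgrade (up to 5 minutes)
kubectl -n jupyterhub rollout status deploy/hub --timeout=300s
```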
### 7. Verify deployment

```shell
# Check that all pods are running
kubectl get pods -n jupyterhub

# Check services
kubectl get svc -n jupyterhub

# Get admin credentials (if auto-admin is enabled)
kubectl -n jupyterhub get secret jupyterhub-admin-credentials \
  -o go-template='{{index .data "admin-password" | base64decode}}'
```
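In scripts, `kubectl wait` can replace manual polling of pod status:

```shell
# Block until every pod in the namespace reports Ready (up to 5 minutes)
kubectl wait --for=condition=Ready pods --all -n jupyterhub --timeout=300s
```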
Access JupyterHub#
NodePort (default): http://localhost:30890 or http://node-ip:30890
Domain: https://your-domain.com (if configured)
## Post-Installation

### Configure Authentication

See the Authentication Guide to set up:

- GitHub OAuth
- Native Authenticator
- User management

### Configure Resource Quotas

See User Quota System to configure resource limits and tracking.

### Manage Users

See the User Management Guide for batch user operations.
## Troubleshooting

### Pods Not Starting

```shell
# Check pod status
kubectl describe pod <pod-name> -n jupyterhub

# Check logs
kubectl logs <pod-name> -n jupyterhub
```

### Image Pull Errors

```shell
# Check events
kubectl get events -n jupyterhub

# Verify images are available (Docker mode)
docker images | grep ghcr.io/amdresearch
```
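In containerd mode (`--docker=0`) the images live in K3s's runtime rather than Docker, so inspect them through the bundled CRI client instead:

```shell
# List images known to K3s's containerd
sudo k3s crictl images | grep -i amdresearch

# Pre-loaded image archives used for offline operation
ls /var/lib/rancher/k3s/agent/images/
```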
### Connection Issues

```shell
# Check service status
kubectl get svc -n jupyterhub

# Check ingress (if using a domain)
kubectl get ingress -n jupyterhub
```
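A direct HTTP probe of the NodePort helps narrow down whether the problem is the service or the browser side (assumes the default port 30890):

```shell
# Print the HTTP status code JupyterHub returns (expect 200 or a redirect)
curl -fsS -o /dev/null -w '%{http_code}\n' http://localhost:30890
```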
## Upgrading

To upgrade JupyterHub after editing `runtime/values.yaml`:

```shell
sudo ./auplc-installer rt upgrade
```

To rebuild container images after changing Dockerfiles, then reinstall the runtime:

```shell
sudo ./auplc-installer img build
sudo ./auplc-installer rt reinstall
```
## Offline / Portable Operation

The installer automatically configures the system for offline and portable operation. When you run `sudo ./auplc-installer install`, it:

- Creates a dummy network interface (`dummy0`) with a stable IP address (`10.255.255.1`)
- Binds K3s to the dummy interface using `--node-ip` and `--flannel-iface`
- Pre-pulls all required container images to local storage
- Configures K3s to use local images from `/var/lib/rancher/k3s/agent/images/`
This ensures the cluster remains fully functional even when:

- The external network is disconnected (network cable unplugged)
- The WiFi network changes (connecting to different access points)
- No network is available at all

**How it works:** K3s is bound to a stable dummy-interface IP instead of the physical network interface, so cluster communication always uses the same internal IP and is unaffected by external network changes.
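You can verify this wiring directly: the dummy interface should carry the stable IP, and the node's INTERNAL-IP should match it:

```shell
# Dummy interface with the stable cluster IP
ip addr show dummy0 | grep 10.255.255.1

# Node INTERNAL-IP should be 10.255.255.1
kubectl get nodes -o wide
```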
Reference: K3s Air-Gap Installation
## Uninstalling

To remove the JupyterHub runtime only (keeps K3s and other components):

```shell
sudo ./auplc-installer rt remove
```

To remove everything (K3s, JupyterHub, and installer-managed resources):

```shell
sudo ./auplc-installer uninstall
```