Just want to share some of my prebuilt wheel files for AI libraries with CUDA support. My stack is:
- Ubuntu 24.04
- Python 3.12
- CUDA 12.8
- GCC 13
Many of these libraries only publish official releases for x86_64. Some, like PyTorch, only ship CPU-only builds for aarch64.
Lily
Cool! Thanks for sharing these prebuilt wheel files. If I do not have CUDA, can I still run these wheels on aarch64?
quocbao
Of course. You can force everything onto the CPU with device = torch.device("cpu"), like this (torch.cuda.is_available() returns True on my machine, but the tensors still stay on the CPU):
>>> torch.cuda.is_available()
True
>>> device = torch.device("cpu")
>>> print("PyTorch version:", torch.__version__)
PyTorch version: 2.7.1
>>> print("Using device:", device)
Using device: cpu
>>>
>>> # Create two tensors on CPU
>>> x = torch.rand(3, 3, device=device)
>>> y = torch.rand(3, 3, device=device)
>>> z = x + y
>>>
>>> print("Tensor X:\n", x)
Tensor X:
tensor([[0.7454, 0.4519, 0.2614],
        [0.1203, 0.6766, 0.1015],
        [0.6118, 0.3068, 0.3611]])
>>> print("Tensor Y:\n", y)
Tensor Y:
tensor([[0.7466, 0.0102, 0.0014],
        [0.4893, 0.4372, 0.7323],
        [0.8264, 0.7940, 0.5709]])
>>> print("X + Y =\n", z)
X + Y =
tensor([[1.4920, 0.4621, 0.2628],
        [0.6096, 1.1137, 0.8338],
        [1.4382, 1.1008, 0.9320]])
By the way, if you don't have a GPU, you can still install PyTorch from the official precompiled wheels.
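For reference, the official CPU-only wheels come from PyTorch's published CPU wheel index, so an install on a machine with no GPU or CUDA toolkit looks like this:

```shell
# Install the CPU-only PyTorch build from the official wheel index.
# No GPU, CUDA driver, or CUDA toolkit is required.
pip install torch --index-url https://download.pytorch.org/whl/cpu
```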
Edward
Do these wheel files support all GPU cards?
No. I compiled them with this arch list:
export TORCH_CUDA_ARCH_LIST="12.0;10.0;9.0;8.9;8.6;8.0;7.5"
You can check whether your GPU's compute capability is covered at this page.
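As a quick sketch, you can compare your card's compute capability against that arch list yourself. The helper below is hypothetical (not part of PyTorch); on a machine with PyTorch and a GPU you would get the capability from torch.cuda.get_device_capability(), as noted in the comment:

```python
# Hypothetical helper: check whether a compute capability is covered by the
# TORCH_CUDA_ARCH_LIST the wheels were compiled with.
ARCH_LIST = "12.0;10.0;9.0;8.9;8.6;8.0;7.5"

def is_supported(capability: str, arch_list: str = ARCH_LIST) -> bool:
    """Return True if a compute capability string (e.g. "8.6") is in the list."""
    return capability in arch_list.split(";")

# With PyTorch installed and a GPU present, you could obtain the capability via:
#   major, minor = torch.cuda.get_device_capability(0)
#   capability = f"{major}.{minor}"
print(is_supported("8.6"))  # RTX 30-series (Ampere) -> True
print(is_supported("6.1"))  # GTX 10-series (Pascal) -> False
```

Note that the list only matches exact capabilities compiled in; older architectures like Pascal (6.x) are not included in these wheels.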