ASRock Rack Ampere bundle - #40 by Civiloid - starting from here and down.
Yes, in my case, it was recognised as a PCI device out of the box; just the driver was OOPSing.
Also the open source NVIDIA driver: NVIDIA Releases Open-Source GPU Kernel Modules | NVIDIA Technical Blog
A quick look at the mmap syscall shows the MMIO size argument is 0:
```
mmap(NULL, 0, PROT_READ|PROT_WRITE, MAP_SHARED, 3, 0) = -1 EINVAL (Invalid argument)
```
and with a dirty fix applied (sketched after the output below), it reads fine:
```
flashrom -dirty on Linux 6.1.0-21-arm64 (aarch64)
flashrom is free software, get the source code at https://flashrom.org
Using clock_gettime for delay loops (clk_id: 1, resolution: 1ns).
Detected 1002:67ff@02:00.0 "Advanced Micro Devices, Inc. [AMD/ATI] Baffin [Radeon RX 550 640SP / RX 560/560X]"
===
This PCI device is UNTESTED. Please report the 'flashrom -p xxxx' output
to flashrom@flashrom.org if it works for you. Please add the name of your
PCI device to the subject. Thank you for your help!
===
Found GigaDevice flash chip "GD25Q20(B)" (256 kB, SPI) on ati_spi.
Reading flash... done.
```
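The gist of the dirty fix, assuming the EINVAL really does come from the zero length (the names below are illustrative, not flashrom's actual internals), is simply to stop handing mmap a zero-sized mapping:

```
#include <sys/mman.h>
#include <unistd.h>

/*
 * Illustrative sketch only: map_mmio_bar() and its arguments are made-up
 * stand-ins rather than flashrom's real helpers. mmap() rejects a length
 * of 0 with EINVAL, so when the computed MMIO/BAR size comes back as 0,
 * clamping it to at least one page lets the mapping succeed.
 */
static void *map_mmio_bar(int fd, off_t bar_offset, size_t bar_len)
{
	size_t page = (size_t)sysconf(_SC_PAGESIZE);

	if (bar_len == 0)	/* would otherwise fail with EINVAL */
		bar_len = page;

	/* round up to whole pages; mmap() works in page granularity anyway */
	bar_len = (bar_len + page - 1) & ~(page - 1);

	return mmap(NULL, bar_len, PROT_READ | PROT_WRITE, MAP_SHARED,
		    fd, bar_offset);
}
```

The proper fix is of course to figure out why that size gets computed as 0 on this platform in the first place.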
From what I read, it appears that this NVIDIA open-source driver only gives access to the GPU for compute purposes; it is not a display driver. So you still need to choose between the reverse-engineered nouveau driver and the binary-only driver.
I managed to port the Erratum PCIE-65 patches:
and get it running with a WX 3100 on v6.19.9:
however, I failed to get an RX 6400 and an RX 9060 XT to work:
(I tried it on 6.1.155, 6.12.50, 6.15.4 and 6.16.9)
```
[ 6.171966] amdgpu 0000:03:00.0: amdgpu: [drm] Display Core failed to initialize with v3.2.334!
[ 6.171980] [drm:amdgpu_show_reset_mask [amdgpu]] *ERROR* hw_init of IP block <dm> failed -22
[ 6.172412] amdgpu 0000:03:00.0: amdgpu: amdgpu_device_ip_init failed
[ 6.172416] amdgpu 0000:03:00.0: amdgpu: Fatal error during GPU init
[ 6.172421] amdgpu 0000:03:00.0: amdgpu: amdgpu: finishing device.
[ 6.172934] WARNING: CPU: 0 PID: 495 at drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c:635 amdgpu_irq_put+0xac/0xf4 [amdgpu]
[ 6.173271] Modules linked in: amdgpu(+) snd_hda_codec_hdmi drm_panel_backlight_quirks
amdxcp snd_hda_intel drm_suballoc_helper mt7921e btusb drm_ttm_helper snd_intel_dspcfg
snd_hda_codec ttm btrtl mt7921_common video snd_hda_core btmtk mt792x_lib gpu_sched
btbcm snd_hwdep snd_pcm drm_exec btintel cdc_ether mt76_connac_lib usbnet drm_buddy
mt76 snd_timer joydev onboard_usb_dev mii bluetooth ast drm_display_helper snd
drm_client_lib mac80211 soundcore polyval_ce cec drm_shmem_helper acpi_ipmi libarc4
drm_kms_helper igb cfg80211 ipmi_ssif(+) ghash_ce ipmi_devintf ixgbe(+) sbsa_gwdt
arm_spe_pmu i2c_algo_bit rfkill mdio ipmi_msghandler xgene_hwmon acpiphp_ampere_altra
arm_cmn arm_dmc620_pmu cppc_cpufreq arm_dsu_pmu acpi_tad
[ 6.173383] pc : amdgpu_irq_put+0xac/0xf4 [amdgpu]
[ 6.173710] lr : amdgpu_irq_put+0x30/0xf4 [amdgpu]
[ 6.174090] amdgpu_irq_put+0xac/0xf4 [amdgpu] (P)
[ 6.174413] amdgpu_fence_driver_hw_fini+0x5c/0x164 [amdgpu]
[ 6.174733] amdgpu_device_fini_hw+0x108/0x664 [amdgpu]
[ 6.175053] amdgpu_driver_load_kms+0xac/0xf4 [amdgpu]
[ 6.175372] amdgpu_file_to_fpriv+0x348/0x1268 [amdgpu]
```
Has anyone gone further down this rabbit hole who can help?
Turns out DRM_AMD_DC_FP has been disabled on clang-based builds due to a clang bug, which has since been resolved. As a result, GPUs using DCN weren’t working, while the one using DCE was. I removed the safeguard:
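For reference, the safeguard lives in drivers/gpu/drm/amd/display/Kconfig. From memory it looks roughly like the snippet below (the exact condition varies between kernel versions, so check your tree); dropping the clang exclusion is all it takes to get DRM_AMD_DC_FP, and with it DCN support, built again:

```
# drivers/gpu/drm/amd/display/Kconfig -- approximate, from memory
config DRM_AMD_DC
	bool "AMD DC - Enable new display engine"
	default y
	# the clang exclusion below is the "safeguard" I removed; it was added
	# because of the clang bug mentioned above, which is now fixed
	select DRM_AMD_DC_FP if ARCH_HAS_KERNEL_FPU_SUPPORT && (!ARM64 || !CC_IS_CLANG)
```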
It’s now running with the RX 6400 and the RX 9060 XT. Will report on stability and, eventually, gaming performance once I’ve managed to set things up further.
I’ve been using “those patches” daily on my machines; between 6.13 and 6.17 they only need refreshing to apply.
RX6700XT works for me.
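“Refreshing” is nothing fancy, by the way. Assuming you keep the series as plain patch files (paths and the kernel tag below are just placeholders), something along these lines is usually enough:

```
# illustrative only -- adjust paths and the kernel version to your setup
cd linux
git checkout v6.17
git am -3 ../pcie65-patches/*.patch   # 3-way apply absorbs small context drift
# if a hunk still fails: fix it up, `git add` the files, then `git am --continue`
```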
Too bad that those patches are still needed ;(
I’ve not gotten a kernel newer than 6.14 to boot.