Suggestions for the Ampere Glossary

I posted here that Ampere had started a glossary and immediately received a request to add words/terms, and another asking how the community can help out with this. Talking to the team that runs that page, they welcomed the suggestions. So if you would like to suggest terms, or better yet have a term with a definition, please add them to the list.

These will need to be approved by the team, and they might edit your submission. There won't be any attribution or payment involved. Please look at the list for examples of what we are looking for. There are terms, projects, foundations, etc.

The format that we have been using is this:

  1) What is X? 2) Why is X important? 3) 3-4 relevant links, including some from amperecomputing.com.

An example for Apache is here: Apache


@vikingforties added these items to the original post, so I wanted to capture them.

Density, Dark Silicon, Usage Power, NUMA. If you have definitions for any of these, please post them below.


I’d like to see the glossary focus on terms that are relevant to Ampere users, and on which we have some kind of opinion or knowledge. Some terms already in the glossary that fit that bill are AI Inference, CI/CD, cache, CPU, and memory page size. I’d also love to see the ARM ISA features supported in Ampere hardware get some coverage: things like NEON for SIMD, NUMA (which you mentioned), MPAM (which our marketing folks have branded Memory QoS), memory tagging, and so on. I would also like to see terms that are more relevant to performance; a few that come to mind off the top of my head are cache line, branch prediction, Translation Lookaside Buffer (TLB), barrier instructions, and MMU. Glossary entries for some performance tooling like perf, mpstat, … could be useful too.

Basically, I’d look at the terms that get used and defined in content on Ampere Computing (whether in video interviews, articles, tutorials, or tuning guides) and include the terms and their definitions in the glossary - then link back to the pages that mentioned those terms.

Dave.


If you’re looking for definitions, how about this one for NUMA:

NUMA: NUMA stands for Non-Uniform Memory Access. It is a system design feature that segments memory and I/O channels into multiple zones called NUMA nodes; processor cores can access memory and I/O devices in their own NUMA node faster than they can access resources in other NUMA nodes. This segmentation can be physical, with each CPU socket connected to its own memory and fast PCIe buses for attached peripherals. Ampere CPUs also support segmenting a single CPU into multiple NUMA nodes via BIOS configuration, with each PCIe root complex wired to a different NUMA node. Memory, network, and disk accesses that cross NUMA boundaries are considerably slower than those within the same NUMA node.

For performance analysis, understanding a server's NUMA configuration, and the placement of workloads on that server, is very important. If a single operating system is managing CPU cores across multiple NUMA nodes, ensuring that each process is allocated to a single NUMA node, and only uses resources attached to that node, is critical for maintaining high performance. On Linux, tools like lspci, numactl, and taskset can help system administrators ensure that processes always run on a single NUMA node.
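For example, here is a minimal sketch of inspecting NUMA topology on a Linux machine. The sysfs paths are standard on Linux; the pinned binary `./my_server` is a placeholder, not a real program:

```shell
# List the NUMA nodes the kernel knows about (standard Linux sysfs path)
ls /sys/devices/system/node/ | grep '^node'

# Show which CPU cores belong to node 0
cat /sys/devices/system/node/node0/cpulist

# With numactl installed, run a workload pinned to node 0's cores and memory
# (./my_server is a hypothetical placeholder for your own binary):
#   numactl --cpunodebind=0 --membind=0 ./my_server
```

On a dual-socket system, or one with sub-NUMA clustering enabled in the BIOS, you would see node1, node2, and so on listed as well, each with its own cpulist.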

Power usage effectiveness (PUE)

Power usage effectiveness (PUE) is a widely adopted metric that measures the energy efficiency of a data center by comparing the total facility power consumption to the power consumed by IT equipment. The metric’s simplicity enables operators to benchmark and track efficiency over time, and to identify the impact of operational changes such as server consolidation or cooling upgrades. PUE is defined as:

PUE = Total Facility Energy / IT Equipment Energy

The ideal PUE value is 1.0, indicating that all power supplied to the facility is used by computing hardware and none is lost to cooling, lighting, or other infrastructure. Between 2010 and 2022, the industry‑wide average PUE fell from 2.20 to an estimated 1.55, a reduction that represents substantial power savings.

In 2014, typical U.S. data centers (PUE = 1.75) allocated 43% of their power to servers and 41% to cooling and power‑distribution systems. In 2021, Google reported a PUE of 1.1 across its data centers worldwide, and less than 1.06 for its best sites.
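As a quick sketch of the formula in action, the 2014 example figures above imply a facility drawing 1.75 units of total power for every unit delivered to IT equipment. The meter readings below are illustrative, not real measurements:

```shell
# Compute PUE from assumed meter readings (illustrative figures only):
#   total = total facility energy in kWh
#   it    = IT equipment energy in kWh
awk 'BEGIN { total = 1750; it = 1000; printf "PUE = %.2f\n", total / it }'
# prints: PUE = 1.75
```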
