I’m wondering if folks here have a way to scan code for potential issues when migrating from 4K to 64K page size. I’m guessing a combination of static scans and runtime assessments is needed, but devs in this community might know of utilities or other methods for doing this.
Using Nix might help, since we can find packages declaratively, and nixpkgs has about 140k packages, many with multiple versions and configuration variants. I’m already aware of several packages that are known to have issues, and with Nix it’s possible to find what depends on jemalloc.
Sorry, I wasn’t clear with my ask - I’m looking for a way to scan existing code to determine whether it may have problems running on kernels with 64k page size, like in the jemalloc example.
Essentially, how could I have caught the jemalloc issue, preferably without running the code and having it crash?
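Short of running the code, one blunt first pass is to grep source trees for literals that usually encode a 4K-page assumption. This is only a heuristic sketch (the patterns, file extensions, and the `scan_page_size` name are my own assumptions, not an existing tool), and it will produce false positives, e.g. 4096-byte buffers that have nothing to do with pages:

```shell
# Heuristic scan for hard-coded 4K page-size assumptions in C/C++ sources.
# Patterns: the literal 4096, hex 0x1000, and shifts by 12 bits.
scan_page_size() {
  grep -rnE \
    -e '\b4096\b' \
    -e '\b0x1000\b' \
    -e '(<<|>>)[[:space:]]*12\b' \
    --include='*.c' --include='*.h' --include='*.cc' --include='*.cpp' \
    "$1"
}
# Example (hypothetical source checkout): scan_page_size ~/src/jemalloc
```

Code that derives the page size from `sysconf(_SC_PAGESIZE)` at runtime won’t trip this scan, which is exactly the behavior you’d want a package to have after migration.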
This only works for jemalloc because jemalloc #defines LG_PAGE at configure time, and that value gets baked into the object files and the static & shared jemalloc libs:
grep LG_PAGE jemalloc/include/jemalloc/internal/jemalloc_internal_defs.h
/* One page is 2^LG_PAGE bytes. */
#define LG_PAGE 16
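That grep can be scripted against the generated header of whichever jemalloc a package was built with (under Nix this header lands in the jemalloc dev output; the exact path is your own store path). A small sketch that decodes LG_PAGE into a byte count:

```shell
# Read jemalloc's compiled-in LG_PAGE from its generated config header
# (jemalloc_internal_defs.h) and decode it to a page size in bytes.
jemalloc_page_size() {
  lg=$(grep -E '^#define LG_PAGE ' "$1" | awk '{print $3}')
  [ -n "$lg" ] || { echo "LG_PAGE not found in $1" >&2; return 1; }
  echo $(( 1 << lg ))
}
# Example (adjust the prefix to your install or nix store path):
# jemalloc_page_size jemalloc/include/jemalloc/internal/jemalloc_internal_defs.h
```

A result of 4096 on a build destined for a 64K-page kernel is the red flag you’re looking for, without ever executing the library.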
Yup - LG_PAGE is the base-2 log of jemalloc’s page size: 2^12 = 4K, 2^14 = 16K, and 2^16 = 64K. If you set LG_PAGE=16, you are still allocating 64K chunks of memory in jemalloc; you’re just mapping 16 or 4 kernel pages at a time, depending on whether the underlying kernel page size is 4K or 16K.
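To spell out that arithmetic (page size = 2^LG_PAGE):

```shell
# Decode each LG_PAGE value mentioned above into its page size.
for lg in 12 14 16; do
  echo "LG_PAGE=$lg -> $(( (1 << lg) / 1024 ))K page"
done
```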