ZJU-SEC/NvidiaASLR

cuda_research

This repository consists of the following folders:

  • cuda_driver: Contains tools related to NVIDIA's open-source drivers.
  • extractor: A modified GPU page table dump tool.
  • kernel_module: A kernel driver for analyzing CPU-side memory and page tables.
  • mem_analysis_tool: A user-space program that interacts with the kernel driver device.
  • gpu-tlb: A tool from related work.
  • collector: Scripts for collecting randomized (ASLR) GPU addresses.
  • result: Contains our results and data.

Build environment

Turn off the Linux GUI.

sudo apt-get update
sudo service gdm stop
sudo service lightdm stop

Uninstall GPU driver.

sudo apt-get --purge remove nvidia*
sudo apt-get autoremove
sudo apt-get --purge remove "*cublas*" "cuda*"
sudo apt-get --purge remove "*nvidia*"

Disable nouveau.

sudo vim /etc/modprobe.d/blacklist.conf 
# append the following two lines to the end of the file and save it.
blacklist nouveau
options nouveau modeset=0

#  apply the change
sudo update-initramfs -u

Install the dependencies for building the driver and the Linux kernel.

sudo apt-get install git fakeroot build-essential ncurses-dev xz-utils libssl-dev bc flex libelf-dev bison gcc g++ make

Download NVIDIA driver 560.35.03

wget https://cn.download.nvidia.com/XFree86/Linux-x86_64/560.35.03/NVIDIA-Linux-x86_64-560.35.03.run
sudo chmod a+x NVIDIA-Linux-x86_64-560.35.03.run
sudo ./NVIDIA-Linux-x86_64-560.35.03.run
# choose GPL; do not install the 32-bit version

Download the CUDA toolkit

# choose the CUDA version you like, e.g., 11.8 to 12.6
wget https://developer.download.nvidia.com/compute/cuda/12.1.0/local_installers/cuda_12.1.0_530.30.02_linux.run
sudo sh cuda_12.1.0_530.30.02_linux.run

# or
wget https://developer.download.nvidia.com/compute/cuda/12.6.1/local_installers/cuda_12.6.1_560.35.03_linux.run
sudo sh cuda_12.6.1_560.35.03_linux.run

Configure environment variables

# or zshrc
vim ~/.bashrc 
# append the following three lines to the end of the file and save it.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
export PATH=$PATH:/usr/local/cuda/bin
export CUDA_HOME=/usr/local/cuda  # a single directory, not a search path

source ~/.bashrc
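When LD_LIBRARY_PATH starts out unset, the append form above leaves the value with a stray leading colon; a sketch of an equivalent setup that avoids this, followed by a quick sanity check (the /usr/local/cuda path assumes the installer's default symlink):

```shell
# Set CUDA_HOME to a single directory and append to the search paths
# without introducing a leading ':' when a variable is initially empty.
export CUDA_HOME=/usr/local/cuda
export PATH="$PATH:$CUDA_HOME/bin"
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$CUDA_HOME/lib64"

# Sanity check: the CUDA bin directory should now appear on PATH.
echo "$PATH" | tr ':' '\n' | grep -c cuda
```

Once a toolkit is installed, `nvcc --version` is the usual end-to-end check that the paths are correct.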

Download NVIDIA-open-kernel-module

git clone -b 560.35.03 https://github.com/NVIDIA/open-gpu-kernel-modules.git
cd open-gpu-kernel-modules
# The diff.patch is stored in cuda_driver/diff.patch
git apply diff.patch

# Build
make modules -j$(nproc)
# Remove installed kernel modules
sudo rmmod nvidia_uvm nvidia_drm nvidia_modeset nvidia-peermem nvidia
# Install the built kernel modules
sudo make modules_install -j$(nproc)

Dump GPU physical memory

Configure dumper

Configure dumper tool from gpu-tlb

git clone https://github.com/0x5ec1ab/gpu-tlb.git

cd gpu-tlb/dumper

# fix its Makefile
NVIDIA_DRIVER_PATH := ./open-gpu-kernel-modules  # path to the downloaded open kernel modules
INCLUDES += -I${NVIDIA_DRIVER_PATH}/kernel-open/common/inc 
INCLUDES += -I${NVIDIA_DRIVER_PATH}/kernel-open/nvidia   
INCLUDES += -I${NVIDIA_DRIVER_PATH}/kernel-open/nvidia-uvm  

make

Determine the maximum amount of physical memory the dumper can dump.

  1. Use the GPU-TLB dumper to dump the GPU memory. Set the dump boundary to a large value initially. The actual boundary will be indicated by an error message in dmesg. Use the value from the error message as the correct boundary.

  2. Example command to dump memory:

    ./dumper -d 0 -s 0 -b 0x10000000000 -o ./memdump_tmp
  3. Check dmesg for error messages like the following:

    [ 2628.937225] NVRM: GPU at PCI:0000:02:00: GPU-d71f4fe9-ecbe-b7e4-9e73-dbeaa194c6da
    [ 2628.937230] NVRM: Xid (PCI:0000:02:00): 31, pid='<unknown>', name=<unknown>, Ch 0000001e, intr 00000000. MMU Fault: ENGINE CE3 HUBCLIENT_CE1 faulted @ 0x3_f4200000. Fault is of type FAULT_INFO_TYPE_REGION_VIOLATION ACCESS_TYPE_PHYS_READ
    

    In this example, the maximum physical address the dumper can read is 0x3f4200000.
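The fault address in the Xid line is printed with an underscore separating the upper bits; a sketch of extracting and normalizing it automatically (the hard-coded sample line stands in for `sudo dmesg | grep 'MMU Fault'` on a live system):

```shell
# Sample Xid line from the dmesg output above.
line='NVRM: Xid (PCI:0000:02:00): 31, Ch 0000001e, intr 00000000. MMU Fault: ENGINE CE3 HUBCLIENT_CE1 faulted @ 0x3_f4200000. Fault is of type FAULT_INFO_TYPE_REGION_VIOLATION ACCESS_TYPE_PHYS_READ'

# Pull out the faulting physical address and strip the '_' separator.
addr=$(printf '%s\n' "$line" \
  | grep -oE 'faulted @ 0x[0-9a-fA-F_]+' \
  | grep -oE '0x[0-9a-fA-F_]+' \
  | tr -d '_')
echo "$addr"   # 0x3f4200000
```

The resulting value can then be passed directly as the dumper's `-b` boundary.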

Dump all physical memory

./dumper -d 0 -s 0x000000000 -b 0x3f4200000 -o ./memdump

Extract GPU page tables

Configure extractor

Build our new extractor tool; this creates a binary named extractor:

cd extractor 
make

Extractor usage

Extract GPU page tables from the dumped memory into pgtable.txt:

./extractor ./memdump > pgtable.txt

Analyze content

Configure the kernel module for analyzing GPU-to-CPU mappings

The kernel_module folder contains a kernel module designed to create a device file for analyzing kernel data.

  1. Recompiling and Installing the Kernel:

    • If you need to analyze the kernel's page table, you must recompile and install the Linux kernel. This is because Ubuntu's default kernel does not export the address of the init_mm symbol.
    • Use version 6.11 of the Linux kernel (other versions may work as well). Follow online tutorials for detailed steps to compile and install the kernel.
  2. Modify Kernel Source Code:

    • After downloading the kernel source for version 6.11, append the following line to the end of the linux-6.11/mm/init-mm.c file:
      EXPORT_SYMBOL(init_mm);
    • Recompile and install the kernel.
  3. Alternative to Kernel Compilation:

    • If you prefer not to recompile the kernel, you can comment out the following line in the gpu_mem_device.c file:
      #define INIT_MM_EXPORT 1
  4. Install kernel module:

    • Compile the device kernel module using:
    make
    • Install the kernel module with the following command:
    sudo insmod gpu_mem_device.ko
    • Disabling Signature Enforcement:
      Since the kernel module is unsigned, disable the restriction on loading unsigned kernel modules before installation. You can disable this by setting the appropriate kernel boot parameters (e.g., module.sig_enforce=0).
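Before loading, you can check whether the running kernel enforces module signatures; a sketch using the stock `module.sig_enforce` sysfs parameter (the file is absent on kernels built without module-signing support):

```shell
# Report the module-signature enforcement state of the running kernel.
p=/sys/module/module/parameters/sig_enforce
if [ -r "$p" ]; then
    cat "$p"    # Y: unsigned modules are rejected; N: they load (tainting the kernel)
else
    echo "no signing support"
fi
```

If this prints `Y`, add `module.sig_enforce=0` to the kernel boot parameters and reboot before running `insmod`.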

Configure analysis mem tool

The mem_analysis_tool folder contains a user-space program for interacting with the device module to analyze kernel data.

Compile the tool using gcc:

gcc analysis_mem.c -g -o analysis_mem

To simplify the analysis by avoiding the extra IOMMU page table translation, disable IOMMU in the boot parameters:
Add the following to the GRUB_CMDLINE_LINUX_DEFAULT in your GRUB configuration:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash iommu=pt intel_iommu=off"

Then update GRUB (sudo update-grub) and reboot.

Usage

  1. Analyze Physical Memory
    Extract 2 pages of memory starting from the physical address 0x200000 and save the content to output.txt:

    ./analysis_mem --pa 0x200000 --size 0x2000 --file output.txt
  2. Analyze Virtual Memory of a Process
    Extract 1 page of memory from virtual address 0x7fffe3fffc80 of process 48528 and save it to output.txt:

    ./analysis_mem --va 0x7fffe3fffc80 --id 48528 --size 0x1000 --file output.txt
  3. Check Physical Address Mapping
    Check if the physical address 0x200000 is mapped to process 48528's userspace. Results will be logged in dmesg:

    ./analysis_mem --pa 0x200000 --id 48528 --check 1
  4. Analyze Memory with IOMMU Enabled
    If IOMMU is enabled, analyze memory by providing an IOVA (translated from the GPU page table dump):

    ./analysis_mem --pa 0x200000 --size 0x1000 --file output.txt --iommu 1

By default, IOMMU is disabled.
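The --size values above are multiples of the 4 KiB page size; a small helper sketch for computing them (the `pages` variable and the printf formatting are illustrative, not options of analysis_mem):

```shell
# Compute the --size value for a given number of 4 KiB (0x1000-byte) pages.
pages=2
size=$(printf '0x%x' $((pages * 0x1000)))
echo "$size"   # 0x2000

# e.g.: ./analysis_mem --pa 0x200000 --size "$size" --file output.txt
```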

Collect randomized addresses on GPU

For ethical considerations, we only provide the results and scripts here; the true addresses are redacted to avoid leaking information.
