Timing versus DRR size

Along with tips for rendering DRRs that don’t fit in memory
import numpy as np
import torch

from diffdrr.data import load_example_ct
from diffdrr.drr import DRR
from diffdrr.visualization import plot_drr

# Read in the volume
subject = load_example_ct()
device = "cuda" if torch.cuda.is_available() else "cpu"

# Get parameters for the detector
rotations = torch.tensor([[0.0, 0.0, 0.0]], device=device)
translations = torch.tensor([[0.0, 850.0, 0.0]], device=device)
height = 100

drr = DRR(subject, sdd=1020, height=height, delx=2.0).to(device)
# Time the forward projection
%timeit drr(rotations, translations, parameterization="euler_angles", convention="ZYX")
del drr
9.94 ms ± 1.57 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
height = 200

drr = DRR(subject, sdd=1020, height=height, delx=2.0).to(device)
%timeit drr(rotations, translations, parameterization="euler_angles", convention="ZYX")
del drr
38.3 ms ± 49.3 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
height = 300

drr = DRR(subject, sdd=1020, height=height, delx=2.0).to(device)
%timeit drr(rotations, translations, parameterization="euler_angles", convention="ZYX")
del drr
85.4 ms ± 63.4 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
height = 400

drr = DRR(subject, sdd=1020, height=height, delx=2.0).to(device)
%timeit drr(rotations, translations, parameterization="euler_angles", convention="ZYX")
del drr
151 ms ± 165 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
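
The numbers above come from IPython's %timeit. To reproduce them in a plain Python script, here is a minimal sketch; the benchmark helper is illustrative, and note that torch.cuda.synchronize() is needed for accurate GPU timings because kernel launches are asynchronous.

import time

def benchmark(drr, n_runs=10):
    """Rough wall-clock timing of the forward pass (a sketch, not a %timeit replacement)."""
    # One warm-up call so one-time CUDA setup isn't measured
    drr(rotations, translations, parameterization="euler_angles", convention="ZYX")
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_runs):
        drr(rotations, translations, parameterization="euler_angles", convention="ZYX")
    if device == "cuda":
        torch.cuda.synchronize()  # wait for all GPU work before stopping the clock
    return (time.perf_counter() - start) / n_runs

for height in [100, 200, 300, 400]:
    drr = DRR(subject, sdd=1020, height=height, delx=2.0).to(device)
    print(f"height={height}: {benchmark(drr) * 1e3:.1f} ms per render")
    del drr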

Memory constraints

Up to this point, every ray in the DRR could be computed in a single pass on the GPU. However, as the DRRs get larger, memory runs out quickly: the number of rays grows quadratically with the detector side length. For example, on a 12 GB GPU, computing a 600 × 600 DRR in one pass raises a CUDA out-of-memory error.
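
To see why, a back-of-envelope estimate of the intermediate tensors helps. The per-ray sample count and the number of temporaries below are illustrative guesses, not DiffDRR internals:

def estimate_gb(height, width, samples_per_ray=1500, n_temporaries=4):
    """Back-of-envelope activation memory for a single-pass float32 render (illustrative)."""
    n_rays = height * width  # one ray per detector pixel
    return n_rays * samples_per_ray * n_temporaries * 4 / 1e9

print(f"{estimate_gb(400, 400):.1f} GB")  # ~3.8 GB: fits on a 12 GB GPU
print(f"{estimate_gb(600, 600):.1f} GB")  # ~8.6 GB: OOM once the volume and overhead are added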

Tip

To render DRRs whose computation won't fit in GPU memory, render the image in patches: pass patch_size to the DRR module to specify the side length of each square patch, and the patches will be computed one at a time. Note that patch_size must evenly tile (height, width), i.e., it must divide both dimensions.
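
Since patch_size must divide both detector dimensions, a quick way to list the valid choices is to enumerate the common divisors. This helper is hypothetical, not part of DiffDRR:

import math

def valid_patch_sizes(height, width):
    """All patch sizes that evenly tile a (height, width) detector."""
    g = math.gcd(height, width)
    return [p for p in range(1, g + 1) if g % p == 0]

valid_patch_sizes(600, 600)  # [1, 2, 3, ..., 150, 200, 300, 600]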

height = 600
patch_size = 150

drr = DRR(subject, sdd=1020, height=height, delx=2.0, patch_size=patch_size).to(device)
%timeit drr(rotations, translations, parameterization="euler_angles", convention="ZYX")
del drr
217 ms ± 142 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
height = 750
patch_size = 150

drr = DRR(subject, sdd=1020, height=height, delx=2.0, patch_size=patch_size).to(device)
%timeit drr(rotations, translations, parameterization="euler_angles", convention="ZYX")
del drr
305 ms ± 823 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
height = 1000
patch_size = 250

drr = DRR(subject, sdd=1020, height=height, delx=2.0, patch_size=patch_size).to(device)
%timeit drr(rotations, translations, parameterization="euler_angles", convention="ZYX")
del drr
466 ms ± 125 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
height = 1500
patch_size = 250

drr = DRR(subject, sdd=1020, height=height, delx=2.0, patch_size=patch_size).to(device)
%timeit drr(rotations, translations, parameterization="euler_angles", convention="ZYX")
del drr
924 ms ± 947 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

With patch_size, the limiting factor is storing the final image in memory rather than the computation: only one patch's worth of rays needs to fit on the GPU at a time.
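
As a closing example, the plot_drr helper imported at the top can visualize a full-resolution render produced in patches. This sketch follows the same call pattern as the cells above:

height = 1000
patch_size = 250

drr = DRR(subject, sdd=1020, height=height, delx=2.0, patch_size=patch_size).to(device)
img = drr(rotations, translations, parameterization="euler_angles", convention="ZYX")
plot_drr(img)  # display the full 1000 x 1000 DRR
del drr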