Training
We perform patient-specific X-ray to CT registration by pre-training an encoder/decoder architecture. The encoder, PoseRegressor, comprises two networks:

- A pretrained backbone (e.g., a convolutional or transformer network) that extracts features from an input X-ray image.
- A pair of linear layers that decodes these features into camera pose parameters (a rotation and a translation).

The decoder is DiffDRR, which renders a simulated X-ray from the predicted pose parameters. Because DiffDRR is fully differentiable, a loss computed between the simulated X-ray and the input X-ray can be backpropagated through the renderer to the encoder.
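As a minimal sketch, one pre-training step might look like the following (the constructor arguments, the `image_loss` callable, and the `drr`/`isocenter_pose` variables are illustrative assumptions, not the exact training script):

```python
import torch

# Assumed setup: `drr` is a DiffDRR renderer, `isocenter_pose` is a RigidTransform,
# and `img` is a batch of X-rays with known ground-truth pose offsets.
model = PoseRegressor("resnet18", parameterization="se3_log_map", pretrained=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(img):
    optimizer.zero_grad()
    offset = model(img)                               # encoder: X-ray -> pose offset
    pred_pose = isocenter_pose.compose(offset)        # compose with the isocenter pose
    pred_img = drr(None, None, None, pose=pred_pose)  # decoder: render a DRR at that pose
    loss = image_loss(pred_img, img)                  # e.g., a negative multiscale NCC
    loss.backward()                                   # gradients flow through DiffDRR to the encoder
    optimizer.step()
    return loss
```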
source
PoseRegressor
PoseRegressor (model_name, parameterization, convention=None,
pretrained=False, **kwargs)
A PoseRegressor comprises a pretrained backbone model that extracts features from an input X-ray and two linear layers that decode these features into rotational and translational camera pose parameters, respectively.
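For instance, a PoseRegressor could be instantiated and applied to a dummy X-ray as follows (the backbone name, parameterization string, and module path are assumptions; check the options available in your install):

```python
import torch
from diffpose.registration import PoseRegressor  # module path is an assumption

# Hypothetical setup: a ResNet-18 backbone decoding to an se(3) log-map pose
model = PoseRegressor(
    model_name="resnet18",
    parameterization="se3_log_map",
    pretrained=True,
)

x = torch.randn(1, 1, 256, 256)  # dummy single-channel X-ray
pose = model(x)                  # predicted camera pose (a RigidTransform)
```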
Sampling random camera poses
We sample random camera poses from the tangent space of SE(3), which is Euclidean.
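Because the tangent space is a vector space, sampling a random pose amounts to drawing a random 6-vector (three translational and three rotational components) and mapping it back onto SE(3) with the exponential map; the `get_random_offset` call in the example below plays this role. A minimal sketch of the idea, assuming PyTorch3D's `se3_exp_map` and illustrative sampling scales:

```python
import torch
from pytorch3d.transforms import se3_exp_map  # assumed dependency

batch_size = 36
log_t = torch.randn(batch_size, 3) * 10.0  # translational components (illustrative scale)
log_R = torch.randn(batch_size, 3) * 0.1   # rotational components (illustrative scale)

# Exponentiate the se(3) log-map back onto the SE(3) manifold
T = se3_exp_map(torch.cat([log_t, log_R], dim=1))  # (batch_size, 4, 4) rigid transforms
```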
```python
import matplotlib.pyplot as plt
import torch
from diffdrr.drr import DRR
from torchvision.utils import make_grid

from diffpose.deepfluoro import DeepFluoroDataset, get_random_offset

specimen = DeepFluoroDataset(1)
device = torch.device("cuda")

# Initialize a DRR renderer with the DeepFluoro intrinsics (detector downsampled 16x)
drr = DRR(
    specimen.volume,
    specimen.spacing,
    sdr=specimen.focal_len / 2,
    height=(1536 - 100) // 16,
    delx=0.194 * 16,
    x0=specimen.x0,
    y0=specimen.y0,
    reverse_x_axis=True,
).to(dtype=torch.float32, device=device)

# Sample random pose offsets and compose them with the isocenter pose
isocenter_pose = specimen.isocenter_pose.to(device)
offset = get_random_offset(batch_size=36, device=device)
pose = isocenter_pose.compose(offset)

# Render a batch of DRRs at the sampled poses and normalize to [0, 1]
with torch.no_grad():
    img = drr(None, None, None, pose=pose, bone_attenuation_multiplier=2.5)
    img = (img - img.min()) / (img.max() - img.min())

plt.figure(dpi=300)
plt.imshow(make_grid(img.cpu(), nrow=6)[0], cmap="gray")
plt.axis("off")
plt.show()
```
```python
# Check that the sampled rotation matrices are valid (det R = 1)
R = pose.get_matrix()[..., :3, :3].transpose(-1, -2)
R.det()
```

```
tensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
        1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
        1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
        1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000],
       device='cuda:0')
```
Test-time optimization
Given an initial pose estimate from the encoder, the estimate is refined by rendering sparse DRRs at the current pose and optimizing an image similarity metric with respect to the pose parameters.
SparseRegistration
SparseRegistration (drr:diffdrr.drr.DRR,
pose:diffpose.calibration.RigidTransform,
parameterization:str, convention:str=None,
features=None, n_patches:int=None, patch_size:int=13)
| | Type | Default | Details |
|---|---|---|---|
| drr | DRR | | |
| pose | RigidTransform | | |
| parameterization | str | | |
| convention | str | None | |
| features | NoneType | None | Used to compute biased estimate of mNCC |
| n_patches | int | None | If n_patches is None, render the whole image |
| patch_size | int | 13 | |
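A hedged sketch of how SparseRegistration might drive test-time optimization (the forward signature, the parameterization string, and the hyperparameters are assumptions, not the exact DiffPose script):

```python
import torch

# Assumed setup: `drr` is the renderer above, `init_pose` comes from the
# PoseRegressor, and `gt` is the target X-ray to register against.
registration = SparseRegistration(
    drr,
    pose=init_pose,
    parameterization="se3_log_map",  # assumed parameterization string
    n_patches=100,                   # render 100 random patches per iteration
    patch_size=13,
)
optimizer = torch.optim.Adam(registration.parameters(), lr=1e-2)
metric = VectorizedNormalizedCrossCorrelation2d()

for _ in range(100):
    optimizer.zero_grad()
    pred_img, mask = registration()  # sparse rendering at the current pose (assumed signature)
    loss = -metric(pred_img, gt)     # maximize similarity by minimizing negative NCC
    loss.backward()
    optimizer.step()
```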
Vectorized multiscale NCC
For computing multiscale NCC on sparse renderings.
source
VectorizedNormalizedCrossCorrelation2d
VectorizedNormalizedCrossCorrelation2d (eps=0.0001)
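A small usage sketch on dense images (the forward call signature and tensor shapes are assumptions; sparse patch renderings are compared the same way):

```python
import torch

metric = VectorizedNormalizedCrossCorrelation2d(eps=1e-4)

x = torch.randn(4, 1, 128, 128)     # batch of single-channel images
y = x + 0.05 * torch.randn_like(x)  # noisy copies of x
score = metric(x, y)                # similarity near 1 for strongly correlated inputs
print(score)
```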