boundlab.linearop.SelectOp#
- class boundlab.linearop.SelectOp[source]#
Bases: GetSliceOp
Select a single index along dim, removing that dimension. (Alias for GetSliceOp)
Examples
>>> import torch
>>> from boundlab.linearop import SelectOp
>>> op = SelectOp(torch.Size([2, 3]), dim=0, index=1)
>>> op.forward(torch.tensor([[1., 2., 3.], [4., 5., 6.]]))
tensor([4., 5., 6.])
Methods
__init__(input_shape, dim, index) – Initialize a LinearOp wrapper.
abs() – Return a LinearOp representing the element-wise absolute value of this LinearOp.
backward(grad) – Backward using F.pad for vmap compatibility.
force_jacobian() – Materialize Jacobian via batched forward/backward application.
forward(x) – Apply the original linear function to an input tensor.
jacobian() – Return an explicit Jacobian tensor when efficiently available.
jacobian_op() – Materialize this LinearOp as an explicit Jacobian tensor.
jacobian_scatter(src) – Add this operator's Jacobian contribution into an existing tensor.
sum_input() – Return a LinearOp that sums over the input dimensions, if supported.
sum_output() – Return a LinearOp that sums over the output dimensions, if supported.
vbackward(grad) – Apply the transposed linear function to an input tensor, supporting additional leading dimensions for batching.
vforward(x) – Apply the original linear function to an input tensor, supporting additional trailing dimensions for batching.
- __init__(input_shape, dim, index)[source]#
Initialize a LinearOp wrapper.
- Parameters:
input_shape – The expected shape of input tensors.
dim – The dimension along which to select.
index – The index to select along dim; that dimension is removed from the output.
- __add__(other)#
Add this LinearOp to another.
- __call__(x)#
Apply this LinearOp to an expression, returning a Linear.
- __mul__(other)#
Scale this LinearOp by a scalar factor.
- abs()#
Return a LinearOp representing the element-wise absolute value of this LinearOp.
- backward(grad)#
Backward using F.pad for vmap compatibility.
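The transpose of a select is a scatter: the incoming gradient is placed at row index of a zero tensor with the full input shape, which is exactly what padding the unsqueezed gradient with `index` zero-rows before and `size - index - 1` zero-rows after achieves. A minimal torch-free sketch of this idea, using plain nested lists and the example operator SelectOp(torch.Size([2, 3]), dim=0, index=1) (the list-based helper here is illustrative, not the library's implementation):

```python
# Sketch of the padding trick behind backward() for
# SelectOp(torch.Size([2, 3]), dim=0, index=1): place the (3,)-shaped output
# gradient at row `index` of a zero (2, 3) input-shaped gradient. F.pad on the
# unsqueezed gradient with `index` rows before and `size - index - 1` rows
# after produces the same result while staying vmap-friendly.
size, index = 2, 1
grad = [1.0, 2.0, 3.0]                 # gradient w.r.t. the (3,) output

zero_row = lambda: [0.0] * len(grad)
grad_input = ([zero_row() for _ in range(index)]            # zeros before
              + [grad]                                      # grad at row `index`
              + [zero_row() for _ in range(size - index - 1)])  # zeros after
```

The result has the full input shape (2, 3), with the gradient routed to the selected row and zeros everywhere else.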
- force_jacobian()#
Materialize Jacobian via batched forward/backward application.
This fallback constructs an identity basis and applies either vforward() or vbackward(), depending on whether the input or output side is smaller.
- Returns:
A dense Jacobian tensor with shape [*output_shape, *input_shape].
Notes
This may be expensive in time and memory and is mainly intended for debugging, validation, or rare paths that require explicit Jacobians.
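The identity-basis fallback described above can be sketched without torch at all. Plain nested lists stand in for tensors here, hypothetically specialized to SelectOp(torch.Size([2, 3]), dim=0, index=1); the real implementation batches the basis through vforward()/vbackward() rather than looping:

```python
# Torch-free sketch of force_jacobian()'s identity-basis fallback,
# specialized to SelectOp(torch.Size([2, 3]), dim=0, index=1).
def select_forward(x, index=1):
    # forward of SelectOp(dim=0, index=1): pick one row, dropping dim 0
    return x[index]

n_in = 2 * 3  # number of input elements

# One basis vector per input element (identity over the flattened input).
basis = [[1.0 if i == j else 0.0 for j in range(n_in)] for i in range(n_in)]

# Push each basis vector (reshaped to the input shape (2, 3)) through forward.
cols = []
for vec in basis:
    x = [vec[r * 3:(r + 1) * 3] for r in range(2)]  # flat -> (2, 3)
    cols.append(select_forward(x))                  # output has shape (3,)

# cols[k][o] == d output[o] / d input[k]; rearrange into the documented
# [*output_shape, *input_shape] layout, i.e. shape [3, 2, 3] here.
jacobian = [[[cols[r * 3 + c][o] for c in range(3)] for r in range(2)]
            for o in range(3)]
```

Each output element depends on exactly one input element (row `index`), so the resulting Jacobian is a slice of the identity: jacobian[o][1][o] == 1 and everything else is zero.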
- forward(x)#
Apply the original linear function to an input tensor.
- jacobian()#
Return an explicit Jacobian tensor when efficiently available.
- Returns:
A tensor with shape [*output_shape, *input_shape] if the concrete Jacobian can be produced directly. Returns NotImplemented for operators that only support implicit application.
- Return type:
- jacobian_op()#
Materialize this LinearOp as an explicit Jacobian tensor.
- Returns:
A tensor with shape [*output_shape, *input_shape] representing the Jacobian of this LinearOp.
- Return type:
Notes
This may be expensive in time and memory and is mainly intended for debugging, validation, or rare paths that require explicit Jacobians.
- jacobian_scatter(src)#
Add this operator’s Jacobian contribution into an existing tensor.
- Parameters:
src (torch.Tensor) – A tensor with Jacobian layout [*output_shape, *input_shape] that acts as the accumulation buffer.
- Returns:
A tensor with the same shape as src containing src + jacobian(self).
- Return type:
Notes
Subclasses may override this to implement structured/sparse updates without materializing the full Jacobian first.
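For SelectOp the structured update mentioned above is particularly cheap: the Jacobian has exactly one nonzero per output element, so its contribution can be added into the buffer directly. A hedged, torch-free sketch (the loop and list layout are illustrative assumptions, not the library's code):

```python
# Hypothetical structured jacobian_scatter() update for
# SelectOp(torch.Size([2, 3]), dim=0, index=1): add 1 at each position
# [o, index, o] of the buffer instead of materializing the dense Jacobian.
index = 1
out_len = 3                      # output_shape == (3,)

# Accumulation buffer with Jacobian layout [*output_shape, *input_shape],
# i.e. shape [3, 2, 3], initialized to zeros.
src = [[[0.0] * 3 for _ in range(2)] for _ in range(3)]

for o in range(out_len):
    src[o][index][o] += 1.0      # d output[o] / d input[index, o] == 1
```

In a real tensor implementation the same update would be a single indexed add rather than a Python loop; the point is that no [*output_shape, *input_shape] Jacobian needs to be built first.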
- sum_input()#
Return a LinearOp that sums over the input dimensions, if supported.
- sum_output()#
Return a LinearOp that sums over the output dimensions, if supported.
- vbackward(grad)#
Apply the transposed linear function to an input tensor, supporting additional leading dimensions for batching.
- vforward(x)#
Apply the original linear function to an input tensor, supporting additional trailing dimensions for batching.
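Taking the docstring at its word that vforward() batches over trailing dimensions, a select along dim 0 leaves those batch axes untouched. A small torch-free sketch under that assumption, with nested lists standing in for a tensor of shape (2, 3, B):

```python
# Sketch of vforward-style batching for SelectOp(dim=0, index=1), assuming
# batch dimensions trail input_shape as the docstring states. x has shape
# (2, 3, B); selecting index 1 along dim 0 yields shape (3, B), and the
# trailing batch axis passes through unchanged.
B = 4
x = [[[float(r * 100 + c * 10 + b) for b in range(B)]
      for c in range(3)]
     for r in range(2)]

y = x[1]                 # select index 1 along dim 0; result shape (3, B)
```

Each batch slice y[c][b] equals the unbatched forward applied to the corresponding input slice, which is what makes the implicit application cheap for whole batches at once.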
- input_shape: torch.Size#
Expected input tensor shape.
- output_shape: torch.Size#
Computed output tensor shape.