boundlab.linearop.SqueezeOp#

class boundlab.linearop.SqueezeOp[source]#

Bases: LinearOp

Remove size-1 dimension(s).
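As an illustrative standalone sketch (not the boundlab implementation): the linear map this operator represents removes a size-1 dimension with `squeeze`, and its transpose re-inserts it with `unsqueeze`, so the two are adjoint.

```python
import torch

# Illustrative sketch, not boundlab's code: forward removes a size-1
# dimension, and the transposed map re-inserts it.
input_shape = torch.Size([3, 1, 4])
dim = 1  # hypothetical choice of the size-1 dimension to remove

def forward(x):
    # Remove the size-1 dimension at `dim`: [3, 1, 4] -> [3, 4].
    return x.squeeze(dim)

def backward(grad):
    # Transposed map: re-insert the size-1 dimension at `dim`.
    return grad.unsqueeze(dim)

x = torch.randn(input_shape)
g = torch.randn(3, 4)
# Adjoint identity <forward(x), g> == <x, backward(g)>: squeeze and
# unsqueeze only reshape entries, they never mix them.
lhs = (forward(x) * g).sum()
rhs = (x * backward(g)).sum()
```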

Methods

__init__

Initialize a LinearOp wrapper.

abs

Return a LinearOp representing the element-wise absolute value of this LinearOp.

backward

Apply the transposed linear function to an input tensor.

force_jacobian

Materialize Jacobian via batched forward/backward application.

forward

Apply the original linear function to an input tensor.

jacobian

Return an explicit Jacobian tensor when efficiently available.

jacobian_op

Materialize this LinearOp as an explicit Jacobian tensor.

jacobian_scatter

Add this operator's Jacobian contribution into an existing tensor.

sum_input

Return a LinearOp that sums over the input dimensions, if supported.

sum_output

Return a LinearOp that sums over the output dimensions, if supported.

vbackward

Apply the transposed linear function to an input tensor, supporting additional leading dimensions for batching.

vforward

Apply the original linear function to an input tensor, supporting additional leading dimensions for batching.

__init__(input_shape, dim=None)[source]#

Initialize a LinearOp wrapper.

Parameters:
  • input_shape (torch.Size) – The expected shape of input tensors.

  • dim (int, optional) – The size-1 dimension to remove. If None, every size-1 dimension is removed.

forward(x)[source]#

Apply the original linear function to an input tensor.

backward(grad)[source]#

Apply the transposed linear function to an input tensor.

__add__(other)#

Add this LinearOp to another.

__call__(x)#

Apply this LinearOp to an expression, returning a Linear.

__mul__(other)#

Scale this LinearOp by a scalar factor.

abs()#

Return a LinearOp representing the element-wise absolute value of this LinearOp.

force_jacobian()#

Materialize Jacobian via batched forward/backward application.

This fallback constructs an identity basis and applies either vforward() or vbackward() depending on whether the input or output side is smaller.

Returns:

A dense Jacobian tensor with shape [*output_shape, *input_shape].

Notes

This may be expensive in time and memory and is mainly intended for debugging, validation, or rare paths that require explicit Jacobians.
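The identity-basis fallback can be sketched generically as follows (a standalone version under assumed shapes; `f` stands in for the operator's forward map, and only the forward-side branch is shown, not boundlab's actual code):

```python
import math
import torch

def force_jacobian(f, input_shape, output_shape):
    # Build an identity basis over the flattened input side and push
    # each basis vector through the forward map.
    n_in = math.prod(input_shape)
    basis = torch.eye(n_in).reshape(n_in, *input_shape)
    cols = torch.stack([f(b) for b in basis])  # [n_in, *output_shape]
    # Rearrange into the documented Jacobian layout
    # [*output_shape, *input_shape].
    J = cols.reshape(*input_shape, *output_shape)
    perm = list(range(len(input_shape), len(input_shape) + len(output_shape)))
    perm += list(range(len(input_shape)))
    return J.permute(perm)

# For a squeeze, the Jacobian is an identity up to reshaping.
J = force_jacobian(lambda x: x.squeeze(1), (3, 1, 4), (3, 4))
```

Flattening both sides of `J` recovers a 12x12 identity matrix, which is why dedicated operators avoid this dense path when an implicit application suffices.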

jacobian()#

Return an explicit Jacobian tensor when efficiently available.

Returns:

A tensor with shape [*output_shape, *input_shape] if the concrete Jacobian can be produced directly. Returns NotImplemented for operators that only support implicit application.

Return type:

torch.Tensor

jacobian_op()#

Materialize this LinearOp as an explicit Jacobian tensor.

Returns:

A tensor with shape [*output_shape, *input_shape] representing the Jacobian of this LinearOp.

Return type:

torch.Tensor

Notes

This may be expensive in time and memory and is mainly intended for debugging, validation, or rare paths that require explicit Jacobians.

jacobian_scatter(src)#

Add this operator’s Jacobian contribution into an existing tensor.

Parameters:

src (torch.Tensor) – A tensor with Jacobian layout [*output_shape, *input_shape] that acts as the accumulation buffer.

Returns:

A tensor with the same shape as src containing src + jacobian(self).

Return type:

torch.Tensor

Notes

Subclasses may override this to implement structured/sparse updates without materializing the full Jacobian first.
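As an illustration of the structured update the note mentions (a standalone sketch with assumed shapes, not boundlab's code): a squeeze-like operator's Jacobian is an identity up to reshaping, so its contribution can be added along a diagonal of the buffer without materializing a dense Jacobian first.

```python
import torch

# Hypothetical shapes for a squeeze [3, 1, 4] -> [3, 4].
output_shape, input_shape = (3, 4), (3, 1, 4)
n = 12  # number of elements on either side

# The accumulation buffer in the documented layout
# [*output_shape, *input_shape].
src = torch.zeros(*output_shape, *input_shape)

# Structured update: clone the buffer (the contract returns a new
# tensor), view it as an [n, n] matrix, and add the identity
# contribution in place along its diagonal.
acc = src.clone()
acc.reshape(n, n).diagonal().add_(1.0)
```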

sum_input()#

Return a LinearOp that sums over the input dimensions, if supported.

sum_output()#

Return a LinearOp that sums over the output dimensions, if supported.
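In Jacobian terms, these reductions can be pictured as follows (an interpretive sketch with assumed shapes; boundlab returns LinearOps, while this snippet simply sums a dense Jacobian over the corresponding axes):

```python
import torch

# Assumed shapes for illustration: output [3, 4], input [3, 1, 4].
output_shape, input_shape = (3, 4), (3, 1, 4)
J = torch.randn(*output_shape, *input_shape)  # layout [*output, *input]

# Summing over the input axes corresponds to sum_input(): the result
# keeps one entry per output element.
summed_in = J.sum(dim=(2, 3, 4))   # shape [3, 4]

# Summing over the output axes corresponds to sum_output(): the result
# keeps one entry per input element.
summed_out = J.sum(dim=(0, 1))     # shape [3, 1, 4]
```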

vbackward(grad_output)#

Apply the transposed linear function to an input tensor, supporting additional leading dimensions for batching.

vforward(x)#

Apply the original linear function to an input tensor, supporting additional leading dimensions for batching.
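The batched application these methods describe can be sketched with `torch.vmap` over a plain squeeze/unsqueeze pair (illustrative only; boundlab's own batching strategy may differ):

```python
import torch

dim = 1  # hypothetical size-1 dimension handled by the operator

# vmap lifts the unbatched maps to accept one extra leading batch
# dimension, mirroring the leading-dimension batching that vbackward
# documents.
vforward = torch.vmap(lambda x: x.squeeze(dim))
vbackward = torch.vmap(lambda g: g.unsqueeze(dim))

xs = torch.randn(5, 3, 1, 4)   # batch of 5 inputs of shape [3, 1, 4]
ys = vforward(xs)              # batch of 5 outputs of shape [3, 4]
gs = vbackward(ys)             # round-trips back to the input shape
```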

input_shape: torch.Size#

Expected input tensor shape.

output_shape: torch.Size#

Computed output tensor shape.