Status: Closed
Labels: hackamonth, module: double backwards (problem is related to the double backwards definition on an operator)
Description
While trying to implement WGAN with gradient penalty (WGAN-GP) for 3D volumes, I realized that MaxPool3d cannot be differentiated twice, although this works fine for MaxPool2d and even for AvgPool3d.
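For context, the gradient penalty is what forces the double backward: the penalty term itself contains the gradient of the critic with respect to its input, so backpropagating the total loss has to differentiate through that gradient. Below is a minimal sketch of the penalty from the WGAN-GP paper; critic, real, fake, and lambda_gp are hypothetical placeholders, and it uses plain tensors rather than Variable:

import torch
from torch import autograd

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    # Random interpolation points between real and fake samples,
    # broadcast over all non-batch dimensions (works for 3D volumes too).
    alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).detach().requires_grad_(True)
    scores = critic(interp)
    # First backward: d(scores)/d(interp), kept differentiable via create_graph.
    grads = autograd.grad(outputs=scores, inputs=interp,
                          grad_outputs=torch.ones_like(scores),
                          create_graph=True, retain_graph=True,
                          only_inputs=True)[0]
    grads = grads.view(grads.size(0), -1)
    # Penalize deviation of the per-sample gradient norm from 1.
    # Calling .backward() on this term triggers the second backward
    # through every op in the critic, including MaxPool3d.
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()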
Here is the minimal reproduction I am using:
import torch
import torch.nn as nn
from torch import autograd
from torch.autograd import Variable

class Network(nn.Module):
    def __init__(self):
        super().__init__()
        self.main = nn.AvgPool3d(2)    # Works
        # self.main = nn.MaxPool3d(2)  # Does not work
        # self.main = nn.MaxPool2d(2)  # Works for 2d

    def forward(self, x):
        return self.main(x)

net = Network()
net.cuda()

in_var = Variable(torch.Tensor(2, 2, 2, 2, 2).cuda(), requires_grad=True)
grad_out = Variable(torch.ones(2, 2, 1, 1, 1).cuda())
# in_var = Variable(torch.Tensor(2, 2, 2, 2).cuda(), requires_grad=True)  # Works for 2d
# grad_out = Variable(torch.ones(2, 2, 1, 1).cuda())                      # Works for 2d

# First backward: gradient of the output w.r.t. the input, with
# create_graph=True so the gradient itself is differentiable.
gradient = autograd.grad(outputs=net(in_var), inputs=in_var,
                         grad_outputs=grad_out,
                         create_graph=True, retain_graph=True,
                         only_inputs=True)[0]

# Second backward: this is where MaxPool3d fails, since its backward
# is not differentiable twice.
gradient.mean().backward()

Thanks
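As an aside, one way to probe whether an op supports double backward is torch.autograd.gradgradcheck. A minimal sketch, using double precision on CPU for a numerically stable check:

import torch
from torch.autograd import gradgradcheck

pool = torch.nn.MaxPool3d(2)
x = torch.randn(2, 2, 2, 2, 2, dtype=torch.double, requires_grad=True)
# Passes for AvgPool3d / MaxPool2d; errors out for MaxPool3d as long
# as its double backward is unimplemented.
print(gradgradcheck(lambda t: pool(t), (x,)))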