add reduce=True argument to MultiLabelMarginLoss #4924
Conversation
@pytorchbot retest this please

@pytorchbot retest this please
zou3519 left a comment:
Haven't reviewed the C code or kernels yet, will get to that soon. I left a few comments on the python side to start out
    template <typename Dtype, typename Acctype>
    __global__ void cunn_MultiLabelMarginCriterion_updateGradInput_kernel(Dtype *gradInput,
                                                                          Dtype *gradOutput,
torch/nn/modules/loss.py (outdated):
    size_average. Default: True
    Shape:
    - Input: :math:`(N)` or :math:`(N, *)` where `*` means any number of additional
torch/nn/modules/loss.py (outdated):
    - Input: :math:`(N)` or :math:`(N, *)` where `*` means any number of additional
      dimensions
    - Target: :math:`(N)` or :math:`(N, *)`, same shape as the input
    - Output: scalar. If `reduce` is False, then `(N)` or `(N, *)`, same shape as
torch/nn/modules/loss.py (outdated):
    `y` and `x` must have the same size.
    The criterion only considers the first non-negative `y[j]` targets.
    def multilabelmarginloss_no_reduce_test():
        t = Variable(torch.rand(5, 10).mul(10).floor().long())
        check_no_size_average=True,
    ),
    dict(
        module_name='MultiLabelMarginLoss',
test/common_nn.py (outdated):
    if input.dim() == 1:
        n = 1
        dim = input.size()[0]
    output = torch.Tensor(n).zero_()
test/common_nn.py (outdated):
        return output

    def _multilabelmarginloss_reference(input, target, is_target):
test/common_nn.py (outdated):
    if input.dim() == 1:
        n = 1
        dim = input.size()[0]
test/common_nn.py (outdated):
    def _multilabelmarginloss_reference(input, target, is_target):
        sum = 0
        for i in range(0, target.size()[0]):
            target_index = target[i]
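For context, the per-sample loss that a reference function like `_multilabelmarginloss_reference` has to reproduce can be sketched in plain Python. This is a hedged sketch, not the PR's actual test code; the function name is made up here, and the -1-terminated target convention and normalization by input size follow the MultiLabelMarginLoss documentation:

```python
def multilabel_margin_loss_1d(x, y):
    """Sketch of the multi-label margin loss for a single sample.

    x: list of C class scores.
    y: list of C target class indices; entries at and after the first
       negative value are ignored (the -1 termination convention).
    """
    C = len(x)
    # Collect the valid target indices: stop at the first negative entry.
    targets = []
    for t in y:
        if t < 0:
            break
        targets.append(t)
    total = 0.0
    for j in targets:            # each true class...
        for i in range(C):       # ...paired with each non-target class
            if i not in targets:
                total += max(0.0, 1.0 - (x[j] - x[i]))
    return total / C             # normalized by the number of classes

# Targets are classes 3 and 0; the -1 ends the target list.
loss = multilabel_margin_loss_1d([0.1, 0.2, 0.4, 0.8], [3, 0, -1, 1])  # ≈ 0.85
```

With `reduce=False`, this per-sample value is what the criterion would return for each minibatch element instead of a single aggregated scalar.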
zou3519 left a comment:
The THNN code looks good for the most part! Just a few nits here and there. Haven't looked at the CUDA kernel yet.
        isTarget_data += dim;
        gradInput_data += dim;
    }
    gradInput_data -= nframe*dim;
    }

    sum /= dim;
    THTensor_(set1d)(output, t, sum);
    else
    {
        THNN_CHECK_DIM_SIZE(gradOutput, 1, 0, nframe);
        gradOutput = THTensor_(newContiguous)(gradOutput);
    THTensor_(free)(input);
    THIndexTensor_(free)(target);
    THTensor_(free)(isTarget);
    THTensor_(free)(gradOutput);
torch/nn/modules/loss.py (outdated):
    reduce (bool, optional): By default, the losses are averaged or summed over
        observations for each minibatch depending on size_average. When reduce
        is False, returns a loss per batch element instead and ignores
        size_average. Default: True
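The docstring above specifies three behaviors. As a plain-Python sketch of that contract (a hypothetical helper for illustration, not PyTorch code):

```python
def reduce_losses(per_sample_losses, reduce=True, size_average=True):
    """Sketch of the reduce/size_average contract described in the docstring.

    per_sample_losses: one scalar loss per minibatch element.
    """
    if not reduce:
        # reduce=False: return a loss per batch element; size_average is ignored.
        return list(per_sample_losses)
    total = sum(per_sample_losses)
    # reduce=True: average over the minibatch by default, otherwise sum.
    return total / len(per_sample_losses) if size_average else total

losses = [1.0, 2.0, 3.0]
print(reduce_losses(losses))                      # 2.0 (mean)
print(reduce_losses(losses, size_average=False))  # 6.0 (sum)
print(reduce_losses(losses, reduce=False))        # [1.0, 2.0, 3.0]
```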
    if (threadIdx.x == 0) {
        gradInput_k[target_idx] += ScalarConvert<Acctype, Dtype>::to(totalSum);
    }
    __syncthreads();
torch/nn/modules/loss.py (outdated):
    size_average is set to ``False``, the losses are instead summed for
    each minibatch. Default: ``True``
    reduce (bool, optional): By default, the losses are averaged or summed over
        observations for each minibatch depending on size_average. When reduce
    THIndexTensor_(free)(target);
    THTensor_(free)(isTarget);
    THTensor_(free)(gradOutput);
    THTensor_(free)(gradInput);
zou3519 left a comment:
All comments addressed, LGTM unless anyone has something else to add. Thanks @li-roy!
Thanks @li-roy!

I'm not sure why the CI isn't running, but... let's give this one final test

@pytorchbot retest this please
As per #264. When `reduce` is False, MultiLabelMarginLoss outputs a loss per sample in the minibatch. When `reduce` is True (the default), the current behavior is kept.
Test Plan
test/run_test.sh
Added unit tests for the reduce=False case, unit tests for 1d tensors, and a reference function.
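The new argument can be exercised as below. This is a sketch assuming a PyTorch build that includes this change; the input values follow the example in the MultiLabelMarginLoss documentation (note that later PyTorch releases deprecate `reduce` in favor of `reduction`):

```python
import torch
import torch.nn as nn

x = torch.tensor([[0.1, 0.2, 0.4, 0.8],
                  [0.1, 0.2, 0.4, 0.8]])
# Target classes are 3 and 0; the -1 terminates each row's target list.
y = torch.tensor([[3, 0, -1, 1],
                  [3, 0, -1, 1]])

reduced = nn.MultiLabelMarginLoss()(x, y)                  # scalar, ≈ 0.85
per_sample = nn.MultiLabelMarginLoss(reduce=False)(x, y)   # shape (2,), each ≈ 0.85
```

With `reduce=False` the criterion skips the final averaging/summation, which is what the new unit tests and the reference function in test/common_nn.py check against.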