```python
import torch as th
from torch.autograd import Variable

# x does not require grad, y does.
x = Variable(th.Tensor([1, 2, 3]), requires_grad=False)
y = Variable(th.Tensor([100]), requires_grad=True)

# Move both Variables to GPU 0 (skip these two lines for the CPU case).
x = x.cuda(0)
y = y.cuda(0)

# In-place assignment of a grad-requiring value into x.
x[0] = y
print(x.requires_grad)
```
If `x = x.cuda(0)` and `y = y.cuda(0)` are executed, the output is `False`; if those two lines are skipped (keeping everything on the CPU), it is `True`.
I think it would be more natural for `x.requires_grad` to be `True` in both cases, since a value that requires grad is written into `x` in place.
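To make the comparison explicit, here is the same snippet wrapped in a small helper that runs the CPU and GPU cases side by side. This is a minimal sketch assuming a CUDA-capable build; the values in the comments reflect the behaviour described above, and the GPU result may differ on other PyTorch versions.

```python
import torch as th
from torch.autograd import Variable

def repro(use_cuda):
    # Build a non-grad tensor and a grad-requiring value, as in the snippet above.
    x = Variable(th.Tensor([1, 2, 3]), requires_grad=False)
    y = Variable(th.Tensor([100]), requires_grad=True)
    if use_cuda:
        # .cuda(0) returns new Variables on GPU 0; y's copy still requires grad.
        x = x.cuda(0)
        y = y.cuda(0)
    # In-place assignment of a grad-requiring value into x.
    x[0] = y
    return x.requires_grad

print(repro(use_cuda=False))  # True  (CPU case)
print(repro(use_cuda=True))   # False (GPU case, the surprising result)
```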