
Conversation

@IvanKobzarev (Contributor) commented Jul 14, 2020

Stack from ghstack:

Differential Revision: D22754939

[ghstack-poisoned]
@dr-ci bot commented Jul 15, 2020

💊 CI failures summary and remediations

As of commit 5ee5ca8 (more details on the Dr. CI page):


  • 1/7 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)
  • 6/7 broken upstream at merge base 48e978b on Aug 07 from 5:47pm to 11:04pm PDT (10 commits; 48e978b - b7a9bc0)

🚧 6 fixed upstream failures:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

Since your merge base is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

Check out the recency history of this "viable master" tracking branch.


ci.pytorch.org: 1 failed


This comment was automatically generated by Dr. CI. Follow this link to opt out of these comments for your pull requests.

Please report bugs/suggestions on the GitHub issue tracker or post in the (internal) Dr. CI Users group.



Tensor& vulkan_add_(Tensor& self, const Tensor& other, Scalar alpha) {
  auto& x = vtensor_from_vulkan(self.is_vulkan() ? self : self.vulkan());
  auto& y = vtensor_from_vulkan(other.is_vulkan() ? other : other.vulkan());
Contributor:

const. I think vtensor_from_vulkan should be overloaded on const.

Contributor Author:

I will add a const overload of vtensor_from_vulkan in this PR.
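The overload being discussed might look something like the following toy sketch (illustrative types only, not the actual PyTorch code): a non-const accessor for in-place ops plus a const overload so read-only callers such as `other` above don't need a `const_cast`.

```cpp
#include <utility>

// Stand-ins for the real VulkanTensor / Tensor types (hypothetical).
struct VulkanTensor { int id = 0; };
struct Tensor { VulkanTensor impl; };

// Non-const overload: mutable access for in-place operators.
VulkanTensor& vtensor_from_vulkan(Tensor& t) {
  return t.impl;
}

// Const overload, as suggested in the review comment: read-only access.
const VulkanTensor& vtensor_from_vulkan(const Tensor& t) {
  return t.impl;
}
```

Overload resolution picks the const version automatically when the argument is a `const Tensor&`, so call sites need no changes.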

  output.allocate_storage();
  vulkan::detail::add(output, x, y, a);
  x = std::move(output);
  return self;
Contributor:

I'm not sure I understand the logic here. If self is already allocated and the goal of this function is to implement c += a * b, do we need to allocate output as well? Can we not do it in place in self? Also, why the move? Can't we just pass output, output, input, and a (in that order) to vulkan::detail::add?

Contributor Author:

Here we read from x (self) and y (other) and write the result to output.
At the moment all shaders operate with separate output and input(s); this reuses the same shader that backs the copying variant of this op.

x = std::move(output) replaces the content of x (self), i.e. its pointer to VulkanTensor::Impl, with the result held in output.

vulkan::detail::add() uses only the image() representation, so we do not really need an allocated buffer().
For this implementation we need output.allocate_storage(), but with the changes in #42569 allocate_storage can be removed.
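The pattern described above can be sketched in plain C++ with toy types (the names mirror the PR but are illustrative, not the real PyTorch/Vulkan API): the "in-place" add writes into a freshly allocated output tensor and then move-assigns it into self, so the same out-of-place kernel serves both variants of the op.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Toy stand-in for VulkanTensor; the vector plays the role of GPU storage.
struct VTensor {
  std::vector<float> data;
};

// Out-of-place kernel: output = x + alpha * y (mirrors vulkan::detail::add).
void add(VTensor& output, const VTensor& x, const VTensor& y, float alpha) {
  output.data.resize(x.data.size());
  for (std::size_t i = 0; i < x.data.size(); ++i) {
    output.data[i] = x.data[i] + alpha * y.data[i];
  }
}

// In-place variant built on the out-of-place kernel: compute into a
// temporary, then move the temporary's storage into self.
VTensor& add_(VTensor& self, const VTensor& other, float alpha) {
  VTensor output;                  // temporary, like output.allocate_storage()
  add(output, self, other, alpha);
  self = std::move(output);        // replace self's storage with the result
  return self;
}
```

The move is cheap (a pointer/buffer swap in the real implementation), which is why writing to a temporary and moving is preferred over teaching every shader to read and write the same tensor.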

    x,
    min ? min.value().to<float>() : -std::numeric_limits<float>::infinity(),
    max ? max.value().to<float>() : std::numeric_limits<float>::infinity());
  x = std::move(output);
Contributor:

Same question. If this operation is in place, why do we need allocate_storage?

Contributor Author:

(As in the previous comment) the output VulkanTensor is used here as a temporary so we can reuse shaders that take separate output and inputs; allocate_storage will be removed in #42569 :)
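The clamp snippet above defaults absent bounds to ±infinity, which lets one kernel serve clamp, clamp_min, and clamp_max. A toy CPU sketch of that bound-defaulting pattern (illustrative names, not the real PyTorch/Vulkan API):

```cpp
#include <limits>
#include <optional>
#include <vector>

// Clamp each element of x to [min, max]; a missing bound defaults to
// -inf / +inf, mirroring the dispatch shown in the snippet above.
std::vector<float> clamp(const std::vector<float>& x,
                         std::optional<float> min,
                         std::optional<float> max) {
  const float lo = min ? *min : -std::numeric_limits<float>::infinity();
  const float hi = max ? *max : std::numeric_limits<float>::infinity();
  std::vector<float> out;
  out.reserve(x.size());
  for (float v : x) {
    out.push_back(v < lo ? lo : (v > hi ? hi : v));
  }
  return out;
}
```

Because every float compares as expected against ±infinity, the unbounded cases need no special-casing in the kernel itself.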

@facebook-github-bot (Contributor):

@IvanKobzarev merged this pull request in 5dd230d.

@facebook-github-bot facebook-github-bot deleted the gh/IvanKobzarev/58/head branch August 11, 2020 14:16