
Supporting scalar tensor broadcasting for AddOp #66

Open

dboyliao wants to merge 107 commits into develop from feature/add_op_broadcasting

Conversation

@dboyliao
Member

dboyliao commented Dec 8, 2017

Supporting scalar tensor broadcasting.

Example:
tensor1: shape=(50,)
tensor2: shape=(1,)
Broadcasting tensor2 over tensor1 in AddOp means tensor1 + tensor2 will have shape (50,).

Rationale:
It's common for TensorFlow users to initialize the bias term of an NN model as a scalar.
So I think supporting at least scalar broadcasting is more consistent with TensorFlow's behavior and the graph pb files it generates.
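A minimal sketch of the intended scalar-broadcasting behavior. This is illustrative only; the function name and the use of std::vector are assumptions for the example and are not the actual AddOp implementation in this PR:

```cpp
#include <cstddef>
#include <vector>

// Adds two rank-1 tensors; if one holds a single element it is broadcast
// over the other, so (50,) + (1,) yields a (50,) result.
std::vector<float> add_with_scalar_broadcast(const std::vector<float>& a,
                                             const std::vector<float>& b) {
  const std::vector<float>& longer  = (a.size() >= b.size()) ? a : b;
  const std::vector<float>& shorter = (a.size() >= b.size()) ? b : a;
  std::vector<float> out(longer.size());
  for (std::size_t i = 0; i < longer.size(); ++i) {
    // Reuse index 0 when the shorter tensor is a scalar (shape (1,)).
    out[i] = longer[i] + (shorter.size() == 1 ? shorter[0] : shorter[i]);
  }
  return out;
}
```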

Knight-X and others added 30 commits October 28, 2017 15:22
  fix include name NNOps to NnOps
  1. extend different type tensor for sd, memory
  2. inherit super class for polymorphism
  1. test idea quickly
  2. sync idea
  3. take type from tensor
  4. make type system in ramtensor
  1. implement add function
  2. implement customized ram tensor constructor
Feature tensor ref initial merge commit
Add python requirements for SD preparation
@dboyliao dboyliao force-pushed the feature/add_op_broadcasting branch from f095c05 to 9c7fcb1 Compare December 8, 2017 16:30
@dboyliao dboyliao requested review from Knight-X and neil-tan December 8, 2017 16:31
@neil-tan
Member

neil-tan commented Dec 9, 2017

Noted, but the broadcasting rule should extend to non-scalar cases.
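For reference, a rough sketch of the general (non-scalar) broadcasting rule being referred to, following NumPy/TensorFlow semantics: shapes are aligned from the trailing dimension, and each pair of dimensions must either match or one of them must be 1. The function below is an illustrative assumption, not code from this PR:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Returns true and fills `out` with the broadcast output shape if shapes
// `a` and `b` are compatible under the general broadcasting rule.
bool broadcast_shape(const std::vector<std::size_t>& a,
                     const std::vector<std::size_t>& b,
                     std::vector<std::size_t>& out) {
  const std::size_t n = std::max(a.size(), b.size());
  out.assign(n, 1);
  for (std::size_t i = 0; i < n; ++i) {
    // Walk both shapes from their trailing dimensions; missing dims are 1.
    const std::size_t da = (i < a.size()) ? a[a.size() - 1 - i] : 1;
    const std::size_t db = (i < b.size()) ? b[b.size() - 1 - i] : 1;
    if (da != db && da != 1 && db != 1) return false;  // incompatible dims
    out[n - 1 - i] = std::max(da, db);
  }
  return true;
}
```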

@dboyliao
Member Author

Yes, so let's just leave it here for now.

@mbartling
Member

@dboyliao Is this still relevant? Or can I close it?
