Conversation

@csarofeen (Contributor)

Added:
Scatter of input across dims other than 0
Arbitrarily ordered input

Still not supported: keyword arguments

For input:
Variables are scattered
Tensors are broadcast
Primitive variables are broadcast
All others are shallow copied
Nested lists/tuples are supported and their structure is maintained

Gather:
Nested lists, nested tuples, None and Variables are supported.
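
As a rough illustration of the structure-preserving behaviour described above, here is a minimal, framework-free sketch (the function and leaf handler are hypothetical; the real code dispatches to the Scatter/Broadcast autograd functions instead of the placeholder leaf below):

def scatter_sketch(obj, target_gpus):
    # Placeholder leaf handler: stands in for scattering Variables or
    # broadcasting tensors; here every device simply gets the same value.
    def scatter_leaf(value):
        return [value for _ in target_gpus]

    if isinstance(obj, (list, tuple)):
        # Scatter each element, then regroup so every device receives one
        # container with the same nesting as the input.
        return type(obj)(zip(*[scatter_sketch(o, target_gpus) for o in obj]))
    return scatter_leaf(obj)

print(scatter_sketch((1, [2, 3]), target_gpus=[0, 1]))
# -> ((1, (2, 3)), (1, (2, 3))): one copy per device, nesting preserved
# (note the inner list comes back as a tuple after the zip, as in the PR code)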

…no kwargs).

Variables will be scattered/gathered. Nested tuple/list structure will be preserved.
Anything else will be shallow copied to each thread.
assert min(chunk_sizes) > 0, "got a negative chunk_size"
chunks = [tensor.narrow(dim, start - size, size)
          for start, size in zip(_accumulate(chunk_sizes), chunk_sizes)]
chunks = tuple(chunk.contiguous() for chunk in chunks)
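
The offsets come from cumulative sums of chunk_sizes: each chunk's start is its cumulative end minus its own size. A standalone illustration (itertools.accumulate is used here as a stand-in for the internal _accumulate helper; CPU-only torch is enough):

from itertools import accumulate
import torch

tensor = torch.arange(10)                       # 10 elements along dim 0
chunk_sizes = [4, 3, 3]
chunks = [tensor.narrow(0, end - size, size)    # start = cumulative end - size
          for end, size in zip(accumulate(chunk_sizes), chunk_sizes)]
print([c.tolist() for c in chunks])             # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]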

        self.module.cuda(device_ids[0])

    def forward(self, *inputs):
        # leave non-Variable/non-tuples in place

        if isinstance(obj, tuple) or isinstance(obj, list):
            return type(obj)(zip(*map(scatter_map, obj)))
        if torch.is_tensor(obj):
            return broadcast(obj, target_gpus)

-            return Gather(target_device)(*outputs)
+            return Gather(target_device, dim=dim)(*outputs)
             if isinstance(out, tuple) or isinstance(out, list):
                 return type(out)(map(gather_map, zip(*outputs)))
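
Gather is the mirror image of scatter: per-device outputs are zipped back together element-wise, and leaves are merged on the target device. A framework-free sketch (placeholder leaf merge; the real code calls the Gather function with target_device and dim):

def gather_sketch(outputs):
    # outputs: one result per device, all sharing the same nested structure.
    out = outputs[0]
    if isinstance(out, (list, tuple)):
        # Regroup element-wise across devices, preserving the container type.
        return type(out)(map(gather_sketch, zip(*outputs)))
    # Placeholder leaf merge: the real implementation concatenates
    # Variables along `dim` on the target device.
    return list(outputs)

print(gather_sketch([(1, [2]), (10, [20])]))
# -> ([1, 10], [[2, 20]]): per-device leaves merged, nesting preserved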

    if isinstance(obj, tuple):
        return tuple(map(_to_cuda, obj))
    return obj
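
For reference, the recursion in this helper amounts to something like the following sketch (hypothetical name; assumes leaves expose a .cuda() method and everything else is passed through untouched):

def to_cuda_sketch(obj):
    if isinstance(obj, tuple):
        return tuple(to_cuda_sketch(o) for o in obj)
    if hasattr(obj, "cuda"):        # tensors / Variables
        return obj.cuda()
    return obj                      # anything else is left in place

print(to_cuda_sketch((1, "a")))     # -> (1, 'a'): non-tensor leaves pass through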

@csarofeen (Contributor, Author)

@apaszke @colesbury included the changes we discussed on slack.

        if kwargs:
            gpu_dict = {}
            for key in kwargs.keys():
                gpu_dict[key] = _to_cuda(kwargs[key])

Moved the block that applies `_to_cuda` to the kwargs inside the `with cuda.device` block.


-def parallel_apply(modules, inputs):
+def parallel_apply(modules, inputs, kwargs):

    else:
        threads = [threading.Thread(target=_worker,
                                    args=(module, input, kwargs, results, lock))
                   for module, input, kwargs in zip(modules, inputs, kwargs)]
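
The threading pattern above, reduced to a self-contained sketch (not the actual torch.nn.parallel.parallel_apply; plain callables stand in for modules):

import threading

def parallel_apply_sketch(modules, inputs, kwargs_tup):
    # One thread per (module, input, kwargs) triple; results are collected
    # under a lock and returned in the original order.
    results = {}
    lock = threading.Lock()

    def _worker(i, module, inp, kwargs):
        output = module(*inp, **kwargs)
        with lock:
            results[i] = output

    threads = [threading.Thread(target=_worker, args=(i, m, inp, kw))
               for i, (m, inp, kw) in enumerate(zip(modules, inputs, kwargs_tup))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return tuple(results[i] for i in range(len(threads)))

print(parallel_apply_sketch([sum, sum], [([1, 2, 3],), ([4, 5],)], ({}, {})))
# -> (6, 9)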

@csarofeen (Contributor, Author)

@apaszke @szagoruyko I think I've addressed all of your comments. Is there anything else you think needs to be done for this PR?

@apaszke (Contributor) left a comment

Can you please avoid changing the formatting of code that this PR doesn't need to touch? That keeps the diffs smaller, and there's no point anyway, since we don't merge PRs that fail the linter checks.

-        return Scatter(target_gpus)(obj)
+        return Scatter(target_gpus, dim=dim)(obj)
         assert not torch.is_tensor(obj), "Tensors not supported in DataParallel"
         return tuple(zip(*map(scatter_map, obj)))
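
To see what the new dim argument buys, compare splitting the same tensor along different dimensions (plain torch.chunk is used here only to show the resulting shapes; the Scatter function performs the analogous slicing as part of autograd):

import torch

x = torch.ones(4, 6)
print([tuple(t.shape) for t in torch.chunk(x, 2, dim=0)])   # [(2, 6), (2, 6)]
print([tuple(t.shape) for t in torch.chunk(x, 2, dim=1)])   # [(4, 3), (4, 3)]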

def scatter(input, target_gpus, dim=0):
    """
    Slices all variables and tensors into approximately equal chunks and
    distributes them across given GPUs.

test/test_nn.py Outdated
self.assertTrue(h_module is module)
self.assertEqual(input[0].data, torch.ones(5, 5))
self.assertEqual(output.data, torch.Tensor(5, 5).fill_(1 / (1 + 1 / math.e)))
self.assertEqual(

        for key in kwargs.keys():
            scatter_kwargs[key] = self.scatter(
                _to_cuda(kwargs[key]), self.device_ids)
        gpu_dicts = tuple([

            scatter_kwargs[key] = self.scatter(
                _to_cuda(kwargs[key]), self.device_ids)
        gpu_dicts = tuple([
            dict([(key, values[i])

-def data_parallel(module, inputs, device_ids, output_device=None):
+def data_parallel(module, inputs, device_ids, module_kwargs=None, output_device=None, dim=0):
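
A hedged usage sketch of the new functional interface (keyword arguments are used so the sketch does not depend on the exact parameter order; it needs at least two CUDA devices to actually run):

import torch

class Scale(torch.nn.Module):
    def forward(self, x, factor=1.0):
        return x * factor

net = Scale().cuda(0)
x = torch.ones(8, 5).cuda(0)
# Rows are scattered along dim 0 across the two GPUs, the per-call keyword
# argument is forwarded via module_kwargs, and outputs are gathered on GPU 0.
out = torch.nn.parallel.data_parallel(
    net, x, device_ids=[0, 1], module_kwargs={"factor": 2.0})
print(out.size())   # torch.Size([8, 5]), every element equal to 2.0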

        gpu_dicts = tuple([
            dict([(key, values[i]) for key, values in scatter_kwargs.items()])
            for i in device_ids
        ])
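
The comprehension above regroups per-key scattered values into one kwargs dict per device. A toy illustration with made-up values (note this sketch indexes positions 0..N-1, which coincides with device_ids only when they are exactly [0, 1, ...]):

scatter_kwargs = {"mask": ["mask_for_gpu0", "mask_for_gpu1"],
                  "scale": [0.5, 0.5]}
num_devices = 2
gpu_dicts = tuple(
    {key: values[i] for key, values in scatter_kwargs.items()}
    for i in range(num_devices)
)
print(gpu_dicts)
# ({'mask': 'mask_for_gpu0', 'scale': 0.5}, {'mask': 'mask_for_gpu1', 'scale': 0.5})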

     # Fast track
     if len(modules) == 1:
-        return (modules[0](*inputs[0]),)
+        return (wrap(modules[0], *inputs[0], **kwargs_tup[0]), )

    if kwargs_tup:
        assert len(modules) == len(kwargs_tup)
    else:
        kwargs_tup = tuple({} for gpu in modules)

    results = {}

-    def _worker(module, input, results, lock):
+    def _worker(module, input, results, lock, **kwargs):

@apaszke (Contributor) commented Mar 4, 2017

I think that once the comments are fixed and the conflict is resolved it's ready to merge.

Sorry for the delay, we wanted to discuss it a bit first.

…ub_personal

Reverted test_nn.py formatting from autopep8
# Conflicts:
#	test/test_nn.py
@apaszke apaszke merged commit b1ae7f9 into pytorch:master Mar 5, 2017
@apaszke (Contributor) commented Mar 5, 2017

Thanks!

pjh5 pushed a commit to pjh5/pytorch that referenced this pull request May 11, 2018
…27efbc

Previous import was 403ccfbd0161c38f0834413d790bad0874afbf9a

Included changes:
- **[69894f2](onnx/onnx@69894f2)**: Use op schema.all tensor types in random like definitions (pytorch#865) <Scott McKay>
- **[b9d6b90](onnx/onnx@b9d6b90)**: Clarify random like operators (pytorch#846) <Scott McKay>
- **[fc6b5fb](onnx/onnx@fc6b5fb)**: Refactor shape inference implementation (pytorch#855) <anderspapitto>
- **[b7d8dc8](onnx/onnx@b7d8dc8)**: fix cmake warning message (pytorch#863) <Eric S. Yu>
- **[f585c5d](onnx/onnx@f585c5d)**: add pytorch-operator test for tile (pytorch#831) <Wenhao Hu>
- **[993fe70](onnx/onnx@993fe70)**: add install step (pytorch#832) <Eric S. Yu>
- **[68bc26c](onnx/onnx@68bc26c)**: add type inference for traditional ml ops except classifier ops. (pytorch#857) <Ke Zhang>
- **[9cc0cda](onnx/onnx@9cc0cda)**: fix string representation of scalar types (pytorch#858) <G. Ramalingam>
- **[1078925](onnx/onnx@1078925)**: fix y in pow test case to scalar (pytorch#852) <Wenhao Hu>
- **[c66fb6f](onnx/onnx@c66fb6f)**: Add some math function shape inference (pytorch#845) <anderspapitto>
- **[ff667d1](onnx/onnx@ff667d1)**: Refactor return type and docs for ONNXIFI_BACKEND_DIRECTX_ID (pytorch#853) <Marat Dukhan>
- **[11c6876](onnx/onnx@11c6876)**: clear initializer names when clear initializer (pytorch#849) <Wenhao Hu>
- **[73c34ae](onnx/onnx@73c34ae)**: Clarify FeatureVectorizer description. (pytorch#843) <Scott McKay>
- **[1befb9b](onnx/onnx@1befb9b)**: Remove useless text in docs (pytorch#850) <Lu Fang>
- **[e84788f](onnx/onnx@e84788f)**: Fix SELU attributes' default values (pytorch#839) <Lu Fang>
- **[ebac046](onnx/onnx@ebac046)**: Add tile test case (pytorch#823) <Wenhao Hu>
- **[8b7a925](onnx/onnx@8b7a925)**: a few more shape inference functions (pytorch#772) <anderspapitto>
- **[9718f42](onnx/onnx@9718f42)**: Make the coefficient non optional for LinearClassifier (pytorch#836) <Jaliya Ekanayake>
- **[ef083d0](onnx/onnx@ef083d0)**: Add save_tensor and load_tensor functions for Protos (pytorch#770) <Lu Fang>
- **[45ceb55](onnx/onnx@45ceb55)**: Check if CMAKE_BUILD_TYPE set before project(). (pytorch#812) <Sergii Dymchenko>
- **[4b3d2b0](onnx/onnx@4b3d2b0)**: [WIP] reenable shape inference tests (pytorch#834) <anderspapitto>
- **[22d17ee](onnx/onnx@22d17ee)**: RNN tests: LSTM, GRU, SimpleRNN (pytorch#739) <Peyman Manikashani>
- **[de65b95](onnx/onnx@de65b95)**: dimension denotation (pytorch#443) <Tian Jin>
- **[eccc76e](onnx/onnx@eccc76e)**: fix field number issue in onnx operator proto and enable its build (pytorch#829) <Ke Zhang>
- **[d582beb](onnx/onnx@d582beb)**: disable shape inference test to unbreak ci (pytorch#830) <Lu Fang>
- **[485b787](onnx/onnx@485b787)**: function proto for composite op. (pytorch#802) <Ke Zhang>
- **[cd58928](onnx/onnx@cd58928)**: specify defaults for attributes of Affine op (pytorch#820) <G. Ramalingam>
- **[7ee2cf9](onnx/onnx@7ee2cf9)**: merge the dummy backend back into the main one (pytorch#743) <anderspapitto>
- **[1c03a5a](onnx/onnx@1c03a5a)**: [Proposal] ONNX Interface for Framework Integration (previously ONNX Backend API) header and docs (pytorch#551) <Marat Dukhan>
- **[3769a98](onnx/onnx@3769a98)**: Rename real model test case from VGG-16 to ZFNet (pytorch#821) <Lu Fang>
pjh5 added a commit that referenced this pull request May 11, 2018
* [bootcamp] Improve "Shape" operator to support axes specification

To improve the .shape operator of Caffe2 to support x.shape(tensor, axes), which takes an optional int array "axes" as input. For example, x.shape(tensor, [1, 0]) will return the dimensions for axes 1 and 0, in that order. In the current version, the "axes" input allows duplicates and can have arbitrary length.

* Back out "Add barrier net that runs before training nets"

Original commit changeset: b373fdc9c30f. Need additional changes to some callers to support barrier failures.

* Change warning to verbose log to reduce log spam

The `LOG(WARNING)` was a bit spammy for regular use, so let's just make it a `VLOG`.

* Extract the shared code from different caffe2_benchmark binaries

The OSS benchmark and Internal benchmark will share most functions in the benchmark.

* Support MFR in sequence training

As titled.

* Make knowledge distillation work with using logged prediction feature as teacher label.

1) Add loading raw dense feature as teacher label.
2) Optional calibration function for teacher label
3) Add teacher label into generic unit test
4) Deprecated TTSN workflow version using feature_options to config teacher label

* [C2/CUDA]: unjoined cross entropy sigmoid

as desc

* Add async_scheduling executor into deferrable_net_exec_test

Add async_scheduling into tests and fix some exception cases

* Fix Event disabled error

When disabling an event in RNN ops, make sure we don't call Finish on a disabled
event from the op's RunAsync.

* cuda ensure cpu output op can handle both TensorCPU and TensorCUDA

as desc.

* [C2 Core] Infer input device option in C2 hypothesis_test checkers

Improve how we default input blob device options.
Previously it defaulted to wherever the op lives, but that is not necessarily the case.

For example:
CopyCPUToGPU

* [C2 Op]SplitByLengthsOp CPU/GPU implementation

[C2 Op]SplitByLengthsOp CPU/GPU implementation

* fix undefined symbol error

Not sure why we're getting an undefined symbol even with link_whole = True.
Need to figure out why, but we need this workaround for now.

* Add tools in DAIPlayground platform to help debugging models

Add additional tools to allow Playground to override individual methods defined in AnyExp. This allows users to create modules that specifically change certain default method behaviors. An example included in this diff is deactivating the test model and checkpointing. When debugging model problems, switching off components helps quickly narrow down the location of the bug. The technique is extensively used in task T27038712 (steady memory increase in EDPM, eventually resulting in gloo/cuda.cu:34: out of memory).

* add shape and type inference for int8 conversion operator

* Fix flaky test for group_norm

Fix flaky test for group_norm

* Fix group_norm_op_test flaky

Fix group_norm_op_test flaky

* Implementation of composite learning rate policy

In many state-of-the-art deep learning works, people use a simple trick to
schedule the learning rate: use a fixed learning rate until the error plateaus,
then switch to a different fixed learning rate, and so on. In this diff,
we implemented a simple version of a composite learning rate. The user gives
a set of learning rate policies and corresponding iteration counts, and the
optimizer changes the learning rate policy based on the number of iterations so far.

For example, say the user gives two learning rate policies, FixedLearningRate
and PolyLearningRate, with an iteration count of 1k. For the first 1k iterations
we use FixedLearningRate; for the following iterations we use PolyLearningRate.

* Split two use cases of CachedReader into two classes, DBFileReader and CachedReader

# Use Cases:

1). input: DB file -> output: DatasetReader.

Use DBFileReader.

2). input: Reader -> build cache DB file -> output: DatasetReader.

Use CachedReader.

# Changes to CachedReader:

1). Move db_path to the constructor, because with a mock reader the cache will always be built ahead of time.

# Changes to tests:

1). Make a separate TestCase class for CachedReader and DBFileReader.

2). Make it possible to add more test functions by adding setUp, tearDown and _make_temp_path.

3). Make delete db_path more general. `db_path` could be a file for `log_file_db`, but could also be a directory for `leveldb`.

* Back out "On Mobile phones, call GlobalInit with no arguments in predictor in case we need to perform initialization"

Original commit changeset: 4489c6133f11

* Fix LARS bug

Fixed a bug in the LARS implementation which caused all subsequent blobs not using LARS to have the LARS learning rate multiplier applied to them.

* [tum] support sparse init & add uniformFill option

as title

* Propagate exception for async nets

Capture the exception when an exception is thrown in async nets and re-throw it after wait().  This allows exceptions to be propagated up to the caller.

This diff was a part of D7752068.  We split the diff so that C2 core files changes are in a separate diff.

* Automatic update of fbcode/onnx to 69894f207dfcd72d1e70497d387201cec327efbc

Previous import was 403ccfbd0161c38f0834413d790bad0874afbf9a

Included changes:
- **[69894f2](onnx/onnx@69894f2)**: Use op schema.all tensor types in random like definitions (#865) <Scott McKay>
- **[b9d6b90](onnx/onnx@b9d6b90)**: Clarify random like operators (#846) <Scott McKay>
- **[fc6b5fb](onnx/onnx@fc6b5fb)**: Refactor shape inference implementation (#855) <anderspapitto>
- **[b7d8dc8](onnx/onnx@b7d8dc8)**: fix cmake warning message (#863) <Eric S. Yu>
- **[f585c5d](onnx/onnx@f585c5d)**: add pytorch-operator test for tile (#831) <Wenhao Hu>
- **[993fe70](onnx/onnx@993fe70)**: add install step (#832) <Eric S. Yu>
- **[68bc26c](onnx/onnx@68bc26c)**: add type inference for traditional ml ops except classifier ops. (#857) <Ke Zhang>
- **[9cc0cda](onnx/onnx@9cc0cda)**: fix string representation of scalar types (#858) <G. Ramalingam>
- **[1078925](onnx/onnx@1078925)**: fix y in pow test case to scalar (#852) <Wenhao Hu>
- **[c66fb6f](onnx/onnx@c66fb6f)**: Add some math function shape inference (#845) <anderspapitto>
- **[ff667d1](onnx/onnx@ff667d1)**: Refactor return type and docs for ONNXIFI_BACKEND_DIRECTX_ID (#853) <Marat Dukhan>
- **[11c6876](onnx/onnx@11c6876)**: clear initializer names when clear initializer (#849) <Wenhao Hu>
- **[73c34ae](onnx/onnx@73c34ae)**: Clarify FeatureVectorizer description. (#843) <Scott McKay>
- **[1befb9b](onnx/onnx@1befb9b)**: Remove useless text in docs (#850) <Lu Fang>
- **[e84788f](onnx/onnx@e84788f)**: Fix SELU attributes' default values (#839) <Lu Fang>
- **[ebac046](onnx/onnx@ebac046)**: Add tile test case (#823) <Wenhao Hu>
- **[8b7a925](onnx/onnx@8b7a925)**: a few more shape inference functions (#772) <anderspapitto>
- **[9718f42](onnx/onnx@9718f42)**: Make the coefficient non optional for LinearClassifier (#836) <Jaliya Ekanayake>
- **[ef083d0](onnx/onnx@ef083d0)**: Add save_tensor and load_tensor functions for Protos (#770) <Lu Fang>
- **[45ceb55](onnx/onnx@45ceb55)**: Check if CMAKE_BUILD_TYPE set before project(). (#812) <Sergii Dymchenko>
- **[4b3d2b0](onnx/onnx@4b3d2b0)**: [WIP] reenable shape inference tests (#834) <anderspapitto>
- **[22d17ee](onnx/onnx@22d17ee)**: RNN tests: LSTM, GRU, SimpleRNN (#739) <Peyman Manikashani>
- **[de65b95](onnx/onnx@de65b95)**: dimension denotation (#443) <Tian Jin>
- **[eccc76e](onnx/onnx@eccc76e)**: fix field number issue in onnx operator proto and enable its build (#829) <Ke Zhang>
- **[d582beb](onnx/onnx@d582beb)**: disable shape inference test to unbreak ci (#830) <Lu Fang>
- **[485b787](onnx/onnx@485b787)**: function proto for composite op. (#802) <Ke Zhang>
- **[cd58928](onnx/onnx@cd58928)**: specify defaults for attributes of Affine op (#820) <G. Ramalingam>
- **[7ee2cf9](onnx/onnx@7ee2cf9)**: merge the dummy backend back into the main one (#743) <anderspapitto>
- **[1c03a5a](onnx/onnx@1c03a5a)**: [Proposal] ONNX Interface for Framework Integration (previously ONNX Backend API) header and docs (#551) <Marat Dukhan>
- **[3769a98](onnx/onnx@3769a98)**: Rename real model test case from VGG-16 to ZFNet (#821) <Lu Fang>

* [C2]ReluN Op

relu n op.

tf reference: https://www.tensorflow.org/api_docs/python/tf/nn/relu6

* Call destructor when assigning a blob value

* Add executor overrides

Add executor overrides flag to enable migration to async_scheduling executor

* Add barrier net that runs before training nets - attempt #2

Add a synchronize barrier net that is run before the training nets. With this net, shards that are faster will wait for the other shards before starting training. This reduces the chance of the faster shards timing out during Gloo AllReduce.
Removed the explicit data_parallel_model.py synchronize call in the holmes workflow.

This change was landed previously but caused errors for some EDPM workflows (see https://fb.facebook.com/groups/1426530000692545/permalink/1906766366002237/): EDPM assumes any call to CreateOrCloneCommonWorld and Gloo ops is wrapped in exception handlers, but in this case the exception thrown in the barrier init net is not handled.

To address this issue, we add _CreateOrCloneCommonWorld to the param_init_net instead of a new barrier init net. Since errors in the param_init_net run are handled gracefully with a re-rendezvous, this should fix the problem.

* Handle empty nets in async_scheduling

Make sure we don't get stuck on empty nets

* use CUDA_ARCH for conditional compile

* [C2 fix] infer function for ensure_cpu_output_op

* Update group_norm test to reduce flaky test

* Fix lr_multiplier for GPU
weiyangfb pushed a commit to weiyangfb/pytorch that referenced this pull request Jun 11, 2018
mrshenli pushed a commit to mrshenli/pytorch that referenced this pull request Apr 11, 2020
jjsjann123 pushed a commit to jjsjann123/pytorch that referenced this pull request Aug 5, 2021
KyleCZH pushed a commit to KyleCZH/pytorch that referenced this pull request Sep 20, 2021
hubertlu-tw pushed a commit to hubertlu-tw/pytorch that referenced this pull request Nov 1, 2022
…instead of the Default Cuda Stream. (pytorch#843)

* Adding C++ Multihead Attention implementation to contrib.

* Add reference test that at least works for forward.

* Remove CublasLt support from multihead attention.

* Add new Python version of self attention.

* Update python model of MHA with backward pass.

* Fixed Output Linear connection in MHA.

* Clean up compiles and add documentation to PySelfAttention.

* Add Encdec Python version of multihead attention.  Cleanup files.

* Tests for self and encdec multihead attention.

* Add reference pytorch implementation of attention with norm and add.

* Add cutlass branch definition.

* Add cutlass download to compile.

* Add norm/add tests.

* Add biases to pytorch python versions.

* Add tests and fix issues with python version of attention masking.

* Create README.md

* Update README.md

* Update README.md

* Update perf test parameters.

* Update README.md

* Update README.md

* Update README.md

* Add files via upload

* Update README.md

* Update README.md

* Update README.md

* Fix matmul1 output tensor size.  Fix tests that missed issue.

* Allow for Z dimensions of 64K and greater on batched GEMMs.

* remove redundant imports

* general cleanup, remove deprecated or unused functions

* Update Multihead Attention's softmax to use the Current Stream instead of the default stream.

* Fix setup.py that got messed up in merge with upstream.

* Update Multihead Attention strided batched gemms to use the current stream instead of the default.

Co-authored-by: pbialecki <pbialecki@nvidia.com>