996 commits
4cd7d78
correct arange docs (#21992)
Jun 20, 2019
19ef157
Updating submodules
Jun 20, 2019
1aae4b0
Fix 'error : detail is ambiguous' on Windows (#22025)
ezyang Jun 20, 2019
84a2d5d
Add hashing to bucket-weighted pooling (#20673)
ffjiang Jun 20, 2019
d4119f8
Automatic update of fbcode/onnx to 355a4954ea4e5836a5e943589509951c44…
houseroad Jun 20, 2019
058beae
Add IterableDataset (#19228)
ssnl Jun 21, 2019
88921fe
change return type for q_scale and q_zero_point (#21709)
jerryzh168 Jun 21, 2019
95aee81
more general fusion logic (#22015)
jspark1105 Jun 21, 2019
3838324
Add max/min/argmax/argmin/sort/argsort for quantized Tensor (#21546)
jerryzh168 Jun 21, 2019
a3fc6ed
Hook up liveness into profiling pipeline.
Krovatkin Jun 21, 2019
71741ba
rename test to be more consistent
ssnl Jun 21, 2019
7d81e62
Add mkldnn tests for running end to end resnet models
bddppq Jun 21, 2019
5d7cf66
add Int8SpatialBNRelu (#22014)
jspark1105 Jun 21, 2019
b36a041
Move UnsafeTensorFromTH and UnsafeStorageFromTH off Type (#21923)
Jun 21, 2019
edb5a16
Remove getDeviceFromPtr and allocator from Type (#21940)
Jun 21, 2019
b2197ef
Adding support for JIT Fusion on Windows for CUDA (#21861)
gavrielstate Jun 21, 2019
82dd693
Split nn.Module._save_to_state_dict to make it overridable (#21933)
Jun 21, 2019
2863052
Limit overall number of threads used by TBB (#22045)
Jun 21, 2019
fe580e8
Rewrite lerp operator to use TensorIterator and support compile-time …
VitalyFedyunin Jun 21, 2019
06c3bd0
Improve ListPtr::extract() (#21753)
smessmer Jun 21, 2019
38c9bb8
Remove most usages of THCHalfAutoNumerics. (#21878)
gchanan Jun 21, 2019
f9b3989
handle slice with negative indices and indices exceeding tensor dimen…
liqunfu Jun 21, 2019
1c5fe2e
Add support for Python 3.8 Constant node (#22007)
Jun 21, 2019
04e9278
First round of optimizations for segment_reduction_op kernels. (#22081)
iotamudelta Jun 21, 2019
4009089
Sparse BLAS: Remove workaround to check zero length inputs. (#22080)
iotamudelta Jun 21, 2019
5ff06a7
more complete tuple assignments (#21949)
wanchaol Jun 21, 2019
38aa5a5
Experimental option to use single thread pool (#22047)
Jun 21, 2019
f164c01
Adding liveness test cases back
Krovatkin Jun 21, 2019
18a904c
Updating submodules
Jun 21, 2019
4bc89bd
Implement tensor.select(Dimname,int) (#21795)
zou3519 Jun 21, 2019
36e4b54
s/uniqueName/debugName (#22048)
Jun 22, 2019
856268c
Revert D15947873: [JIT] s/uniqueName/debugName
Jun 22, 2019
63ca908
Updating submodules
Jun 22, 2019
7d637de
Reduce excessive CI printing in TestHub (#22043)
ssnl Jun 22, 2019
f7b2778
s/uniqueName/debugName/ (#22096)
Jun 22, 2019
b19b20e
fix minor comment (#21576)
jspark1105 Jun 22, 2019
91bf0a9
Move quantized tensor tests in test_torch.py to test_quantized_tensor…
jerryzh168 Jun 22, 2019
7b1d6c8
Update intra_inter_benchmark (#22051)
Jun 22, 2019
a256b09
Backout Liveness Tests again :-(
Krovatkin Jun 22, 2019
e0f5ab2
Tree based Iterator infrastructure: for in range/list/tensor/zip/enum…
wanchaol Jun 22, 2019
45b91bd
refactor all for in range/tensor tests to be together with other for …
wanchaol Jun 22, 2019
887ecf7
Fix DictType isSubtypeOf (#22104)
Jun 22, 2019
c0f96aa
Restore default values on premature test exit (#22115)
apaszke Jun 23, 2019
9b45237
PyTorch ThroughputBenchmark (#20766)
salexspb Jun 23, 2019
eab3575
support iteration tuple unpacking (#21985)
wanchaol Jun 24, 2019
c9344fc
add for in string support (#21990)
wanchaol Jun 24, 2019
d96ce9b
add for in dict support (#22006)
wanchaol Jun 24, 2019
08060e8
Revert D15435461: [pytorch][PR] PyTorch ThroughputBenchmark
soumith Jun 24, 2019
3ba654e
Add finding thnvrtc_library into torchconfig.cmake (#22126)
peterjc123 Jun 24, 2019
1b34ccf
Porting SpatialDilatedConvolution and VolumetricDilatedConvolution to…
pearu Jun 24, 2019
cd0d848
Remove many build options redundantly specified in Python build scrip…
xuhdev Jun 24, 2019
142361a
merge interfaces that have an optional scalartype parameter (#21088)
nairbv Jun 24, 2019
313960d
Use at::detail::* instead of detail::* to avoid ambiguity in windows …
syed-ahmed Jun 24, 2019
a7ec889
Add sparse tensor allreduce (#22036)
pietern Jun 24, 2019
77eda8d
Support sparse gradients in DistributedDataParallel (#22037)
pietern Jun 24, 2019
6edaa11
fix broken link
byronhe Jun 24, 2019
e016a42
Revert D15944971: [pytorch][PR] merge interfaces that have an optiona…
suo Jun 24, 2019
322261a
Fix dispatching of backwards kernel for ROCm. (#22125)
iotamudelta Jun 24, 2019
85cbe0d
Fix Concat Dimension Bug (#22088)
chandlerzuo Jun 24, 2019
2347a40
Fix tracing docs and add more comprehensive examples (#22082)
Krovatkin Jun 24, 2019
f1c7fa0
De-deprecate some warnings that hurt usability (#21999)
bwasti Jun 24, 2019
21da33f
Better trace comments
Krovatkin Jun 24, 2019
6350dbd
Fix sequential MKL case (#22062)
Jun 24, 2019
7c42064
Fix in ivalue::Future (#22114)
Jun 24, 2019
a458989
Document the Boolean tensor type.
xuhdev Jun 24, 2019
f177579
Fix minor issues with #21736 (#22074)
apaszke Jun 24, 2019
41d0525
Improve repr for IncompatibleKeys (#22119)
apaszke Jun 24, 2019
0ac28c8
Quick fix for #18215, the CPU case (#21910)
zh217 Jun 24, 2019
273b6c5
Cast return value of vector.at() to void to avoid nodiscard warning i…
xuhdev Jun 24, 2019
b2a3931
Make Dropout.__repr__ consistent with other modules (#22110)
apaszke Jun 24, 2019
2372e7e
DilatedMaxPool: expand incomplete kernel_size for the C++ API (#22073)
skrah Jun 24, 2019
88cdc16
AveragePool: expand incomplete kernel_size for the C++ API
skrah Jun 24, 2019
3b700a4
Add missing whitespace in error message (#21904)
cdancette Jun 24, 2019
ede0849
Enabled mul for bool tensors on CUDA (#21771)
izdeby Jun 25, 2019
f5df0c9
Don't end on inplace operators in einsum (#22111)
t-vi Jun 25, 2019
299ea84
Use latest stable flake8-bugbear in CI and fix B011 flake8 error. (#2…
xuhdev Jun 25, 2019
ac4913e
support both regularizable and sofmax re-weighting on sparse features…
Jun 25, 2019
b61693c
Optimize InstanceNormOp forward (#22130)
xiaomengy Jun 25, 2019
839b496
Fixes bugs in torch.multinomial without replacement (#22183)
syed-ahmed Jun 25, 2019
ce1a965
Remove more build options not needed to be explicitly set in Python b…
xuhdev Jun 25, 2019
b0bd875
Further remove redundant CMake option passing code for those CMake va…
xuhdev Jun 25, 2019
94e83da
Optimization of the Embedding and Embedding-Bag CUDA Kernel (#22016)
madsbk Jun 25, 2019
c8b5f1d
Switch autograd to use a pool of workers for each device (#21911)
Jun 25, 2019
9af8ea1
Not expose mkldnn reshape and transpose (#22193)
XiaobingSuper Jun 25, 2019
7daa96a
porting convtranspose3d to ATen (#22019)
xmnlab Jun 25, 2019
c681193
serialize torch.Size object (#20952)
ailzhang Jun 25, 2019
4ec6fbe
Show deprecation warning when stateful lambdas are used as kernels (#…
smessmer Jun 25, 2019
bcb5fd8
Port symeig to ATen and enable batching of inputs (#21858)
vishwakftw Jun 25, 2019
6ff0c6c
Remove THD (#22065)
pietern Jun 25, 2019
a7cb07e
Add missing algorithm header to Array utility (#22157)
Jun 25, 2019
f5a1ea1
SIMD version average pooling added (#22148)
Jun 25, 2019
1d705b4
Run clang-format on c10d bits (#22194)
pietern Jun 25, 2019
5b87049
remove uses of std::shared_ptr<Module> (#21934)
zdevito Jun 25, 2019
7ee82d4
Removed work around for convolution transpose op since the bug has be…
PenghuiCheng Jun 25, 2019
7b1ffba
ArgumentStash for Scalar arguments (#21931)
Jun 25, 2019
defd23b
Clean up old uses of checkScript (#22002)
Jun 25, 2019
f7a126f
fix optional type subtype relation (#22186)
wanchaol Jun 25, 2019
e425789
Fix "missing return statement" warning (#22216)
smessmer Jun 25, 2019
de85abf
Allow default construction of Dict/List (#22084)
smessmer Jun 26, 2019
1a164bf
remove unused mkldnn include (#22217)
ljk53 Jun 26, 2019
e8bc992
print device when it's not on default device (#22094)
Jun 26, 2019
c1fc2f2
export deleteFunction in torch/csrc/autograd/function.h (#22236)
t-vi Jun 26, 2019
17b37eb
Bump gloo (#22225)
pietern Jun 26, 2019
655a370
restoring HEADs for ideep and onnx to more recent versions
Krovatkin Jun 26, 2019
fde75a3
update IterableDataset doc to be consistent with current behavior
ssnl Jun 26, 2019
95b5718
Prevent VS from emitting errors when using swap in Optional.h (#22182)
peterjc123 Jun 26, 2019
b297552
Make nn functions configurable for different scalar types (#20729)
ifedan Jun 26, 2019
9f22805
Refactor function_wrapper.create_generic (#22077)
zou3519 Jun 26, 2019
f176950
Use lower case for strong wolfe option. (#22092)
vincentqb Jun 26, 2019
5f84f37
Use variable_data() in tensor_to_numpy (#22214)
Jun 26, 2019
a4f2814
introduce flags to set omp and mkl threads (#21472)
mingzhe09088 Jun 26, 2019
25eae3e
Disable test_proper_exit flaky worker_kill (#22208)
ssnl Jun 26, 2019
af9e008
Add the rest of the `dict` API (#21979)
Jun 26, 2019
2dc9643
Better error message for mismatched dict key type (#22231)
Jun 26, 2019
29b53b0
Fix bug in caffe2 transpose on GPU (#22233)
xiaomengy Jun 26, 2019
5bdc4db
Refactor named tensor helper code (#22150)
zou3519 Jun 26, 2019
516c7e4
Adding memory_format to empty and empty_like operators (#20558)
VitalyFedyunin Jun 26, 2019
8b02522
Avoid copy in ArrayRef<->vector comparison (#22218)
smessmer Jun 26, 2019
7707dee
Re apply optional ScalarType changes (#22237)
nairbv Jun 26, 2019
3ba72a1
Revert D15999938: [jit] Add the rest of the `dict` API
Jun 26, 2019
04fe245
conv2d/conv3d for LongTensor (#20730)
ifedan Jun 26, 2019
3f2a839
Add comments to bailoug_graph.*
Krovatkin Jun 26, 2019
f51de8b
Back out "Revert D15435461: [pytorch][PR] PyTorch ThroughputBenchmark…
salexspb Jun 26, 2019
45c6fa0
Refactor Tests for Multiple ONNX Opsets (#20036)
Jun 27, 2019
5e0a74d
Rename copy_tensor_data to copy_tensor_metadata (#22266)
Jun 27, 2019
e6d4a2d
Remove unused file cmake/Modules/FindMIOpen.cmake (#22244)
xuhdev Jun 27, 2019
f144b9e
Fix two overindent lint errors in test/common_nn.py. (#22287)
xuhdev Jun 27, 2019
30d890c
Removed an outdated comment above IMPLEMENT_UNARY_OP_VEC(abs) (#22272)
xuhdev Jun 27, 2019
f39b662
ChunkDataset checkpoint support (#21889)
xzhu1900 Jun 27, 2019
f13fadd
fix python2 corner-case in torch.distributed.launch (#20996)
soumith Jun 27, 2019
59c4259
Enabled gather and scatter for bool tensor (#21924)
izdeby Jun 27, 2019
e9d1b85
Functional conv2d (#21225)
Jun 27, 2019
d2bad94
Fix lint issues
ifedan Jun 27, 2019
bf677b8
Set MKLDNN (default) build variables in CMakeLists.txt, not in Python…
xuhdev Jun 27, 2019
c9626a1
Made a += b for lists do an in place add (#21896)
Chillee Jun 27, 2019
be0631b
Add the rest of the `dict` API (#21979)
Jun 27, 2019
6947e19
Remove unused param in Caffe2 LayerNormGradientOp (#22282)
xiaomengy Jun 27, 2019
2913f6a
Adding modules for Python 3 compatibility (#22295)
houseroad Jun 27, 2019
6386e4d
Named inference rule for `abs`. (#22151)
zou3519 Jun 27, 2019
69b702a
Implement unify_from_right (#22223)
zou3519 Jun 27, 2019
7732b1a
Enable named inference for some unary pointwise ops (#22267)
zou3519 Jun 27, 2019
177b8bf
Named inference rule for more pointwise ops. (#22268)
zou3519 Jun 27, 2019
b109699
Update ThroughputBenchmark to reflect new script::Module API (no (#22…
salexspb Jun 27, 2019
ac39869
Fixed list() not making a copy (#22093)
Chillee Jun 27, 2019
7a40412
Delay reduction of unused parameters until first autograd hook is cal…
pietern Jun 27, 2019
5c46e70
Implementation of nn.quantized.linear module (#21921)
jerryzh168 Jun 27, 2019
2832e33
Add serialization for nn.quantized.Linear module (#21925)
jerryzh168 Jun 27, 2019
83768f0
Add ONNX export support for multidim torch.sum. (#22240)
Jun 27, 2019
6f0f7e3
Support building caffe2 with clang-cl on Windows (#22307)
Jun 27, 2019
5e77111
nn.quantized.Relu and nn.quantize.Quantize/DeQuantize modules
jerryzh168 Jun 27, 2019
1bea27b
Remove three cpu sigmoid functions that are identical to IMPLEMENT_UN…
xuhdev Jun 28, 2019
0804452
fix lint in torch/nn/quantized/modules/linear.py (#22325)
jerryzh168 Jun 28, 2019
e259894
Test raising TypeError in torch.from_numpy() (#21607)
xuhdev Jun 28, 2019
042a2fd
Sync worker requirement mismatches
Jun 28, 2019
2132ea1
Fix "python: can't open file '.jenkins/pytorch/print_sccache_log.py':…
ezyang Jun 28, 2019
6cf4df5
add PT softmax ops to the benchmark suite (#21208)
mingzhe09088 Jun 28, 2019
89c709d
modify unary operators benchmark
mingzhe09088 Jun 28, 2019
3a19840
modify pool benchmarks
mingzhe09088 Jun 28, 2019
e76c975
Use lazy initialization in autograd record_function to avoid static (…
gdankel Jun 28, 2019
737f8a7
Fix onnx passes (#22319)
smessmer Jun 29, 2019
7cc8f37
Reduce needless copying when returning lists of tensors in the JIT in…
Jun 29, 2019
9e18234
Automatic update of fbcode/onnx to 806aa863020fa180e57f576cb032ec44ce…
houseroad Jun 29, 2019
b52621c
Revise error message for invalid Reduction (#22160)
kiddyboots216 Jun 29, 2019
d8de69d
Adds symbolic op for logsumexp
iiSeymour Jun 29, 2019
41e51ce
Fix QNNPACK and NNPACK settings (#22367)
vishwakftw Jun 30, 2019
3cba9e8
Error Message Paraphrasing (#22369)
ndrwnaguib Jun 30, 2019
9c8f9f0
Remove many usages of Type (#21941)
Jun 30, 2019
6c454ff
Stop using Type in Python bindings (#21963)
Jun 30, 2019
2a69868
Remove Type dispatch (#21964)
Jun 30, 2019
496e35f
More named inference rules for pointwise unary ops
zou3519 Jul 1, 2019
f894ef7
Add smoke test for information fn/method/attrs to test_namedtensor
zou3519 Jul 1, 2019
451c907
Adding qconv unpack operator for serialization (#22354)
dskhudia Jul 1, 2019
a43d9af
Comment on why Windows build_pytorch.bat builds twice (#22363)
ssnl Jul 1, 2019
bfeff1e
Stubs for torch.nn (#19089)
malmaud Jul 1, 2019
f7421b8
Remove versions constraints from `external_deps` (#22113)
andrewjcg Jul 1, 2019
577c04c
add mutation support for forward_pre_hook and forward_hook (#22285)
jerryzh168 Jul 1, 2019
2ab6ff4
Updating submodules
Jul 1, 2019
d632b1f
Expose is_mkldnn to python and register it as torchscript prim op
bddppq Jul 1, 2019
813b01e
Use at::AutoNonVariableTypeMode before calling ATen tensor factory fu…
Jul 1, 2019
d0db2a7
PyTorch ThroughputBenchmark: fix inaccuracy in number of iterations r…
salexspb Jul 1, 2019
d0348c0
ThroughputBenchmark: improve formatting for ExecutionStats (#22293)
salexspb Jul 1, 2019
10e4137
Optimize InstanceNormGradientOp (#22288)
xiaomengy Jul 1, 2019
30fedea
Updating submodules
Jul 1, 2019
dfa6fca
Supporting Manifold DB in Predictor Exporter (#22334)
houseroad Jul 1, 2019
007fd01
Enable PT operators running with {cpu, gpu} * {forward, backward} (#2…
mingzhe09088 Jul 1, 2019
8281909
add PT cat operator to the benchmark (#22404)
mingzhe09088 Jul 1, 2019
8a726f5
add PT split op to the benchmark (#22410)
mingzhe09088 Jul 1, 2019
402b9f9
add PT chunk op to the benchmark (#22409)
mingzhe09088 Jul 1, 2019
cbf5726
update mkldnn-bridge to avoid mem leak (#22392)
gujinghui Jul 2, 2019
a54acd3
Update the way boolean tensor are being printed (#22238)
izdeby Jul 2, 2019
1f9c4fd
split onnx passes (#22413)
smessmer Jul 2, 2019
f0f2331
Add support for cross-chunk shuffling in ChunkDataset (#22347)
xzhu1900 Jul 2, 2019
2c18bf2
Fix `ScriptModule.__dir__()` (#22426)
Jul 2, 2019
dff2c07
Manual revert of D16012838
Jul 2, 2019
671782d
Refactor file:line:col to be less ugly (#22177)
Jul 2, 2019
e05942c
Serialization methods for SourceRange and Source (#22178)
Jul 2, 2019
2c2a913
Preserve SourceRanges across serialization (#22179)
Jul 2, 2019
ffa15d2
Load original SourceRanges on import (#22180)
Jul 2, 2019
de84104
Lint ONNX Related Code (#22423)
houseroad Jul 2, 2019
edd5b77
Remove API-level guard on NeuralNetworks.h (#22429)
gkmhub Jul 2, 2019
5bd97be
Fix lint error in format_time() in throughput_benchmark.py and clean …
xuhdev Jul 2, 2019
2dd1323
Fix the GPU trainer for NoneCalibration and RNN
xianjiec Jul 2, 2019
6721e67
Remove hacky stub for quantized ops (#22388)
Jul 2, 2019
b9ede66
Remove the USE_MIOPEN build option as MIOpen is always used when buil…
xuhdev Jul 2, 2019
0ffda97
Make Gloo an optional c10d dependency (#22257)
pietern Jul 2, 2019
3d3d07b
Refactored math tests to iterate over all math ops
Chillee Jul 2, 2019
b768777
Added math.log2 and hypot
Chillee Jul 2, 2019
c9a8413
Numerical stability of embedding kernels (#22401)
madsbk Jul 2, 2019
a4b2f3e
Implement AdamW optimizer (#21250)
mjacar Jul 2, 2019
7ca7edc
ONNX Export LayerNorm
lara-hdr Jul 2, 2019
2dd71b1
Fix PoolWindow crash from thread_local (#22405)
dkotfis-oculus Jul 2, 2019
a845d02
Revert D16088191: Added math.log2 and hypot
Jul 2, 2019
7235532
Revert D16088193: Refactored math tests to iterate over all math ops
Jul 2, 2019
e74b0fc
Fix empty_like for quantized tensors. (#21978)
gchanan Jul 2, 2019
869ce89
use feenableexcept when glibc is available (#22241)
Jul 2, 2019
bb07f2d
Pass LRU hash output evicted_values to SparseLookup (#21389)
Jul 2, 2019
693871d
Rename macros and build options NAMEDTENSOR_ENABLED to BUILD_NAMEDTEN…
xuhdev Jul 2, 2019
6d58713
Use concrete types on call sites for Dict/List (#22004)
smessmer Jul 2, 2019
07ef85e
Add USE_MKLDNN_CBLAS build option. (#19014)
Jul 2, 2019
53a52f5
infer shape until no more change (#22425)
ChunliF Jul 2, 2019
e68dc89
Fix compiler warnings (#22162)
smessmer Jul 2, 2019
f5b3f9e
Remove unnecessary ROCm detection code in Python scripts. (#22464)
xuhdev Jul 2, 2019
474dec4
Warn on conditions that can trigger cuBLAS sgemm bug (#22034)
Jul 2, 2019
040a4bd
include conv_op_impl.h from conv_dnnlowp_op.cc (#22458)
jspark1105 Jul 2, 2019
34f950c
Create C2 operator to replace values in embedding table (#22279)
Jul 2, 2019
a6441c0
Remove build variable NCCL_EXTERNAL (#22467)
xuhdev Jul 2, 2019
830c659
EraseNumberTypes cleans itself up (#22461)
smessmer Jul 2, 2019
d684112
Output sequence probability with CTC beam search, optional multiple o…
warut-vijit Jul 3, 2019
bb0f299
Update MultiheadAttention module support key/value with different num…
Jul 3, 2019
dcd902b
provide "size" parameter in torch.normal when called with two floats …
Jul 3, 2019
e210c65
Add `torch.where` overload with only condition argument (#21986)
Jul 3, 2019
76e14c1
remove caffe2/core dependency from ATen/core/jit_type.h (#22441)
ljk53 Jul 3, 2019
17cc798
Fix dead code elimination in onnx export (#22476)
smessmer Jul 3, 2019
0d63619
Deprecate vector/unordered_map again (#22478)
smessmer Jul 3, 2019
7fef0b7
Take const refs in TensorIterator::mark_outputs
Jul 3, 2019
abb2e68
Don't construct a single element array for unary ops
Jul 3, 2019
c9f41e9
Add device guard around MPI operations (#22446)
pietern Jul 3, 2019
d9e15bc
Perform weight re-init for embedding table in sparse_lookup.py (#22348)
Jul 3, 2019
2732a5e
Another dce fix (#22499)
smessmer Jul 3, 2019
9c44f6c
generate tests based on op metadata (#21432)
mingzhe09088 Jul 3, 2019
319ef3b
Fix onnx custom op export & add initial test case (#21321)
BowenBao Jul 3, 2019
325ec23
create tensor based on provided datatype (#22468)
mingzhe09088 Jul 3, 2019
29ec476
Fix SyncBatchNorm running var update issue (#22248)
unlimblue Jul 4, 2019
b93f29d
add JIT path to the benchmark (#22309)
mingzhe09088 Jul 4, 2019
10c4b98
Remove weak script (#22212)
Jul 4, 2019
97a604e
Rereapply optional ScalarType interface changes that were reverted in…
nairbv Jul 4, 2019
799633e
move casting ops from prim to aten
wanchaol Jul 4, 2019
6f6a680
remove erase_fork_wait.h
Krovatkin Jul 4, 2019
08f9437
note about RNG state is added for dataloader
InnovArul Jul 4, 2019
8236558
note about RNG state is added for dataloader
InnovArul Jul 4, 2019
22 changes: 21 additions & 1 deletion .circleci/README.md
@@ -1,3 +1,23 @@
Structure of CI
===============

setup job:
1. Does a git checkout
2. Persists CircleCI scripts (everything in `.circleci`) into a workspace. Why?
   We don't always do a Git checkout on all subjobs, but we usually
   still want to be able to call scripts one way or another in a subjob.
   Persisting files this way lets us have access to them without doing a
   checkout. This workspace is conventionally mounted on `~/workspace`
   (this is distinguished from `~/project`, which is the conventional
   working directory that CircleCI will default to starting your jobs
   in.)
3. Write out the commit message to `.circleci/COMMIT_MSG`. This is so
   we can determine in subjobs if we should actually run the jobs or
   not, even if there isn't a Git checkout.
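
To make step 3 concrete, here is a minimal sketch (not part of this PR) of how a subjob might consult the persisted commit message before doing any work; the path and the `[ci skip]` marker are illustrative assumptions:

```python
# Hypothetical sketch: a subjob gating itself on the commit message that the
# setup job persisted into the workspace. Paths and the skip marker are
# illustrative, not taken from this repository.
import os
import sys

msg_path = os.path.expanduser("~/workspace/.circleci/COMMIT_MSG")
with open(msg_path) as f:
    commit_msg = f.read()

# Skip expensive work when the commit opts out of CI.
if "[ci skip]" in commit_msg.lower():
    print("Commit message requests skipping CI; exiting early.")
    sys.exit(0)

print("Proceeding with the job.")
```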




CircleCI configuration generator
================================

@@ -35,4 +55,4 @@ Future direction
See comment [here](https://github.com/pytorch/pytorch/pull/17323#pullrequestreview-206945747):

In contrast with a full recursive tree traversal of configuration dimensions,
> in the future future I think we actually want to decrease our matrix somewhat and have only a few mostly-orthogonal builds that taste as many different features as possible on PRs, plus a more complete suite on every PR and maybe an almost full suite nightly/weekly (we don't have this yet). Specifying PR jobs in the future might be easier to read with an explicit list when we come to this.
13 changes: 9 additions & 4 deletions .circleci/cimodel/data/binary_build_data.py
@@ -42,7 +42,7 @@ def get_processor_arch_name(cuda_version):
"3.6m",
"3.7m",
],
conda=dimensions.STANDARD_PYTHON_VERSIONS,
conda=dimensions.CONDA_PYTHON_VERSIONS,
libtorch=[
"2.7m",
],
@@ -52,7 +52,7 @@
    linux=(dimensions.CUDA_VERSIONS, LINUX_PACKAGE_VARIANTS),
    macos=([None], OrderedDict(
        wheel=dimensions.STANDARD_PYTHON_VERSIONS,
        conda=dimensions.STANDARD_PYTHON_VERSIONS,
        conda=dimensions.CONDA_PYTHON_VERSIONS,
        libtorch=[
            "2.7",
        ],
@@ -62,7 +62,6 @@

DEVTOOLSET_VERSIONS = [
    3,
    7,
]


@@ -86,7 +85,13 @@ def __init__(self, parent, os_name, cuda_versions, py_tree):
self.props["cuda_versions"] = cuda_versions

def get_children(self):
return [PackageFormatConfigNode(self, k, v) for k, v in self.py_tree.items()]
packaging_variants = [PackageFormatConfigNode(self, k, v) for k, v in self.py_tree.items()]

if self.find_prop("smoke"):
filtered_packaging_variants = list(filter(lambda x: x.get_label() != "libtorch", packaging_variants))
return filtered_packaging_variants
else:
return packaging_variants


class PackageFormatConfigNode(ConfigNode):
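
The `get_children` change above drops libtorch packaging variants from smoke builds. A standalone sketch of just that filtering behavior, with the `ConfigNode` machinery simplified away (variant names here are illustrative):

```python
# Simplified model of the smoke-build filter: when the "smoke" property is
# set on the config tree, libtorch packaging variants are dropped.
PACKAGING_VARIANTS = ["manywheel", "conda", "libtorch"]

def children_for(smoke):
    if smoke:
        return [v for v in PACKAGING_VARIANTS if v != "libtorch"]
    return PACKAGING_VARIANTS

print(children_for(smoke=True))   # ['manywheel', 'conda']
print(children_for(smoke=False))  # ['manywheel', 'conda', 'libtorch']
```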
37 changes: 18 additions & 19 deletions .circleci/cimodel/data/caffe2_build_data.py
@@ -1,8 +1,7 @@
#!/usr/bin/env python3

from cimodel.lib.conf_tree import ConfigNode, X
from cimodel.lib.conf_tree import ConfigNode, X, XImportant
from cimodel.lib.conf_tree import Ver
import cimodel.data.dimensions as dimensions


CONFIG_TREE_DATA = [
@@ -14,16 +13,17 @@
(Ver("cuda", "9.0"), [
# TODO make explicit that this is a "secret TensorRT build"
# (see https://github.com/pytorch/pytorch/pull/17323#discussion_r259446749)
# TODO Uh oh, were we supposed to make this one important?!
X("py2"),
X("cmake"),
XImportant("cmake"),
]),
(Ver("cuda", "9.1"), [X("py2")]),
(Ver("mkl"), [X("py2")]),
(Ver("gcc", "5"), [X("onnx_py2")]),
(Ver("cuda", "9.1"), [XImportant("py2")]),
(Ver("mkl"), [XImportant("py2")]),
(Ver("gcc", "5"), [XImportant("onnx_py2")]),
(Ver("clang", "3.8"), [X("py2")]),
(Ver("clang", "3.9"), [X("py2")]),
(Ver("clang", "7"), [X("py2")]),
(Ver("android"), [X("py2")]),
(Ver("clang", "7"), [XImportant("py2"), XImportant("onnx_py3.6")]),
(Ver("android"), [XImportant("py2")]),
]),
(Ver("centos", "7"), [
(Ver("cuda", "9.0"), [X("py2")]),
@@ -32,7 +32,7 @@
        # TODO ios and system aren't related. system qualifies where the python comes
        # from (use the system python instead of homebrew or anaconda)
        (Ver("ios"), [X("py2")]),
        (Ver("system"), [X("py2")]),
        (Ver("system"), [XImportant("py2")]),
    ]),
]

@@ -54,6 +54,8 @@ def get_children(self):
        return [self.child_constructor()(self, k, v) for (k, v) in self.subtree]

    def is_build_only(self):
        if str(self.find_prop("language_version")) == "onnx_py3.6":
            return False
        return str(self.find_prop("compiler_version")) in [
            "gcc4.9",
            "clang3.8",
@@ -95,16 +97,13 @@ def init2(self, node_name):
self.props["language_version"] = node_name
self.props["build_only"] = self.is_build_only()

def get_children(self):

children = []
for phase in dimensions.PHASES:
if phase == "build" or not self.props["build_only"]:
children.append(PhaseConfigNode(self, phase, []))

return children
def child_constructor(self):
return ImportantConfigNode


class PhaseConfigNode(TreeConfigNode):
class ImportantConfigNode(TreeConfigNode):
    def init2(self, node_name):
        self.props["phase_name"] = node_name
        self.props["important"] = True

    def get_children(self):
        return []
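
The new `ImportantConfigNode` replaces the per-phase leaf with an `important` flag. A rough sketch of what that flag buys downstream (the real `find_prop` lives in `cimodel/lib/conf_tree.py`, which is not part of this diff, so this is a simplified model, not the actual implementation):

```python
# Simplified model of the "important" flag: leaves built with XImportant()
# carry important=True, and the workflow generator uses it to decide whether
# a job runs on every PR or only on master and ci-all branches (see
# caffe2_build_definitions.py below).
def workflow_filters(is_important):
    if is_important:
        return {}  # no branch filter: job runs on every PR
    return {"filters": {"branches": {"only": ["master", r"/ci-all\/.*/"]}}}

print(workflow_filters(True))
print(workflow_filters(False))
```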
75 changes: 47 additions & 28 deletions .circleci/cimodel/data/caffe2_build_definitions.py
@@ -2,30 +2,35 @@

from collections import OrderedDict

import cimodel.data.dimensions as dimensions
import cimodel.lib.conf_tree as conf_tree
from cimodel.lib.conf_tree import Ver
import cimodel.lib.miniutils as miniutils
import cimodel.lib.visualization as visualization
from cimodel.data.caffe2_build_data import CONFIG_TREE_DATA, TopLevelNode


DOCKER_IMAGE_PATH_BASE = "308535385114.dkr.ecr.us-east-1.amazonaws.com/caffe2/"
from dataclasses import dataclass


DOCKER_IMAGE_VERSION = 276
DOCKER_IMAGE_PATH_BASE = "308535385114.dkr.ecr.us-east-1.amazonaws.com/caffe2/"

DOCKER_IMAGE_VERSION = 287

class Conf(object):
    def __init__(self, language, distro, compiler, phase, build_only):

        self.language = language
        self.distro = distro
        self.compiler = compiler
        self.phase = phase
        self.build_only = build_only
@dataclass
class Conf:
    language: str
    distro: Ver
    compiler: Ver
    build_only: bool
    is_important: bool

    # TODO: Eventually we can probably just remove the cudnn7 everywhere.
    def get_cudnn_insertion(self):

        omit = self.language == "onnx_py2" \
            or self.language == "onnx_py3.6" \
            or self.compiler.name in ["android", "mkl", "clang"] \
            or str(self.distro) in ["ubuntu14.04", "macos10.13"]

@@ -44,9 +49,6 @@ def construct_phase_name(self, phase):
        root_parts = self.get_build_name_root_parts()
        return "_".join(root_parts + [phase]).replace(".", "_")

    def get_name(self):
        return self.construct_phase_name(self.phase)

    def get_platform(self):
        platform = self.distro.name
        if self.distro.name != "macos":
@@ -57,27 +59,29 @@ def gen_docker_image(self):

        lang_substitutions = {
            "onnx_py2": "py2",
            "onnx_py3.6": "py3.6",
            "cmake": "py2",
        }

        lang = miniutils.override(self.language, lang_substitutions)
        parts = [lang] + self.get_build_name_middle_parts()
        return miniutils.quote(DOCKER_IMAGE_PATH_BASE + "-".join(parts) + ":" + str(DOCKER_IMAGE_VERSION))

    def gen_yaml_tree(self):
    def gen_yaml_tree(self, phase):

        tuples = []

        lang_substitutions = {
            "onnx_py2": "onnx-py2",
            "onnx_py3.6": "onnx-py3.6",
        }

        lang = miniutils.override(self.language, lang_substitutions)

        parts = [
            "caffe2",
            lang,
        ] + self.get_build_name_middle_parts() + [self.phase]
        ] + self.get_build_name_middle_parts() + [phase]

        build_env = "-".join(parts)
        if not self.distro.name == "macos":
@@ -88,7 +92,7 @@ def gen_yaml_tree(self):
if self.compiler.name == "ios":
tuples.append(("BUILD_IOS", miniutils.quote("1")))

if self.phase == "test":
if phase == "test":
# TODO cuda should not be considered a compiler
if self.compiler.name == "cuda":
tuples.append(("USE_CUDA_DOCKER_RUNTIME", miniutils.quote("1")))
@@ -103,11 +107,11 @@

        d = OrderedDict({"environment": OrderedDict(tuples)})

        if self.phase == "test":
        if phase == "test":
            resource_class = "large" if self.compiler.name != "cuda" else "gpu.medium"
            d["resource_class"] = resource_class

        d["<<"] = "*" + "_".join(["caffe2", self.get_platform(), self.phase, "defaults"])
        d["<<"] = "*" + "_".join(["caffe2", self.get_platform(), phase, "defaults"])

        return d

@@ -125,11 +129,11 @@ def instantiate_configs():
    for fc in found_configs:

        c = Conf(
            fc.find_prop("language_version"),
            fc.find_prop("distro_version"),
            fc.find_prop("compiler_version"),
            fc.find_prop("phase_name"),
            fc.find_prop("build_only"),
            language=fc.find_prop("language_version"),
            distro=fc.find_prop("distro_version"),
            compiler=fc.find_prop("compiler_version"),
            build_only=fc.find_prop("build_only"),
            is_important=fc.find_prop("important"),
        )

        config_list.append(c)
@@ -138,10 +142,13 @@


def add_caffe2_builds(jobs_dict):

    configs = instantiate_configs()
    for conf_options in configs:
        jobs_dict[conf_options.get_name()] = conf_options.gen_yaml_tree()
        phases = ["build"]
        if not conf_options.build_only:
            phases = dimensions.PHASES
        for phase in phases:
            jobs_dict[conf_options.construct_phase_name(phase)] = conf_options.gen_yaml_tree(phase)

    graph = visualization.generate_graph(get_root())
    graph.draw("caffe2-config-dimensions.png", prog="twopi")
@@ -158,11 +165,23 @@ def get_caffe2_workflows():
    x = []
    for conf_options in filtered_configs:

        requires = ["setup"]
        phases = ["build"]
        if not conf_options.build_only:
            phases = dimensions.PHASES

        for phase in phases:

            requires = ["setup"]
            sub_d = {"requires": requires}

            if phase == "test":
                requires.append(conf_options.construct_phase_name("build"))

        if conf_options.phase == "test":
            requires.append(conf_options.construct_phase_name("build"))
            if not conf_options.is_important:
                # If you update this, update
                # pytorch_build_definitions.py too
                sub_d["filters"] = {"branches": {"only": ["master", r"/ci-all\/.*/"]}}

        x.append({conf_options.get_name(): {"requires": requires}})
            x.append({conf_options.construct_phase_name(phase): sub_d})

    return x
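
For a concrete picture of the build/test expansion above, this is roughly the shape `get_caffe2_workflows()` now emits for one non-build-only, non-important config; the job names below are invented for illustration, the real ones come from `construct_phase_name()` and the config tree:

```python
# Illustrative output shape only; job names are hypothetical examples.
example_workflow_entries = [
    {"caffe2_py2_cuda9_0_centos7_build": {
        "requires": ["setup"],
        # non-important configs get the branch filter on every phase:
        "filters": {"branches": {"only": ["master", r"/ci-all\/.*/"]}},
    }},
    {"caffe2_py2_cuda9_0_centos7_test": {
        "requires": ["setup", "caffe2_py2_cuda9_0_centos7_build"],
        "filters": {"branches": {"only": ["master", r"/ci-all\/.*/"]}},
    }},
]
```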
6 changes: 6 additions & 0 deletions .circleci/cimodel/data/dimensions.py
@@ -15,3 +15,9 @@
"3.6",
"3.7",
]

CONDA_PYTHON_VERSIONS = [
"2.7",
"3.6",
"3.7",
]