
Conversation

@yf225 (Contributor) commented Mar 22, 2018

Previously, the perf numbers were stored in https://github.com/yf225/perf-tests/tree/cpu, but we couldn't figure out a way to push new numbers only from master builds. This PR moves the perf-number storage to S3, which gives us finer control over when to push new numbers.

This replaces #5844: storing numbers in RDS has its own problems with schema migration and backward compatibility, and a NoSQL database would be overkill at this point.
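For context, the overall flow the quoted hunks implement looks roughly like this. This is a minimal sketch assembled from the pieces visible in this PR; the per-commit baseline download in step 1 is inferred from the upload path rather than quoted from the diff, and the "run the perf tests" step is a placeholder:

# 1. Pull the current baseline pointer and the baseline numbers from S3.
aws s3 cp s3://ossci-perf-test/pytorch/cpu_runtime/LATEST_TESTED_COMMIT LATEST_TESTED_COMMIT
BASELINE_COMMIT="$(cat LATEST_TESTED_COMMIT)"
aws s3 cp s3://ossci-perf-test/pytorch/cpu_runtime/${BASELINE_COMMIT}.json perf_test_numbers_cpu.json

# 2. Run the perf tests against the baseline (placeholder); the new
#    timings end up in new_cpu_runtime.json.

# 3. Only master builds publish new numbers, so PR builds can never
#    overwrite the baseline.
if [[ "$COMMIT_SOURCE" == *master* ]]; then
    aws s3 cp new_cpu_runtime.json s3://ossci-perf-test/pytorch/cpu_runtime/${MASTER_COMMIT_ID}.json --acl public-read
fi

The hunks quoted from the review follow.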

# Use the baseline archived on the Jenkins host if present; otherwise fall
# back to the old GitHub-hosted copy. (The if-condition is reconstructed;
# only the two branches are quoted in the diff.)
if [ -f /var/lib/jenkins/host-workspace/perf_test_numbers_cpu.json ]; then
    cp /var/lib/jenkins/host-workspace/perf_test_numbers_cpu.json perf_test_numbers_cpu.json
else
    curl https://raw.githubusercontent.com/yf225/perf-tests/master/perf_test_numbers_cpu.json -O
fi

# Forward the Jenkins-provided AWS credentials into the build environment
# so the aws CLI can talk to the bucket.
export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}

git add perf_test_numbers_cpu.json
git commit -m "New CPU perf test baseline from ${PYTORCH_COMMIT_ID}"
# Only master builds push the new numbers to S3, keyed by commit hash.
if [[ "$COMMIT_SOURCE" == *master* ]]; then
    aws s3 cp new_cpu_runtime.json s3://ossci-perf-test/pytorch/cpu_runtime/${MASTER_COMMIT_ID}.json --acl public-read
fi
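Because the objects are uploaded with --acl public-read, a published baseline can be fetched without AWS credentials via the standard S3 URL layout, e.g.:

curl -O https://ossci-perf-test.s3.amazonaws.com/pytorch/cpu_runtime/${MASTER_COMMIT_ID}.json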

if [[ "$GIT_COMMIT" == *origin/master* ]]; then
# Get baseline file from ossci-perf-test S3 bucket
aws s3 cp s3://ossci-perf-test/pytorch/cpu_runtime/LATEST_TESTED_COMMIT LATEST_TESTED_COMMIT

This comment was marked as off-topic.

export LATEST_TESTED_COMMIT="$(cat LATEST_TESTED_COMMIT)"
if ! git merge-base --is-ancestor ${MASTER_COMMIT_ID} ${LATEST_TESTED_COMMIT}; then
echo "${MASTER_COMMIT_ID}" > LATEST_TESTED_COMMIT
aws s3 cp LATEST_TESTED_COMMIT s3://ossci-perf-test/pytorch/gpu_runtime/LATEST_TESTED_COMMIT --acl public-read

This comment was marked as off-topic.
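For reference, git merge-base --is-ancestor A B exits 0 exactly when A is an ancestor of B, so the negated check above fires only for master commits not already covered by the last tested commit. A quick sanity check in any git repository:

git merge-base --is-ancestor HEAD~1 HEAD; echo $?   # 0: HEAD~1 is an ancestor of HEAD
git merge-base --is-ancestor HEAD HEAD~1; echo $?   # 1: HEAD is not an ancestor of HEAD~1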

@ezyang (Contributor) left a comment

Some code quality things, but nothing blocking.

@ezyang (Contributor) commented Mar 24, 2018

@pytorchbot retest this please

1 similar comment:

@yf225 (Contributor, Author) commented Mar 24, 2018

@pytorchbot retest this please

@yf225 force-pushed the s3 branch 2 times, most recently from bfdeee9 to 230b4d2 on March 24, 2018 at 03:31
@ezyang merged commit 2f8d658 into pytorch:master on Mar 24, 2018
@sighingnow added a commit to sighingnow/pytorch that referenced this pull request on Mar 25, 2018:
* upstream/master: (663 commits)
  Fix "command not found" error in perf test (pytorch#5982)
  add pip mkl-devel to the error message when mkl is found but mkl headers are not (pytorch#5984)
  Support batch LowerCholeskyTransform (pytorch#5980)
  Linearly interpolating upsampling fix (pytorch#5927)
  Store perf numbers in S3 (pytorch#5951)
  Modidy setup docs for Windows (pytorch#5981)
  Group Normalization (pytorch#5968)
  [distributions] Implement Power transform (pytorch#5976)
  Disable TestBottleneck test_cuda on Windows (pytorch#5977)
  Fix crash when cat-ing empty cuda tensors (pytorch#5971)
  Update no_unions flag for nanopb gen and update ONNX proto files (pytorch#5972)
  Expose gradients w.r.t. input & weight for conv1d, conv2d, conv3d in Python (pytorch#5408)
  Fixed non-determinate preprocessing on DataLoader (pytorch#4640)
  add AVX2 implementation for sigmoid function (pytorch#5010)
  Implement torch.util.bottleneck (pytorch#5216)
  Remove pragma once from cpp file (pytorch#5965)
  fix mvn docs (pytorch#5967)
  Fix incorrect rendering of Tensor.index_*_ doc examples. (pytorch#5969)
  Implement range for loop in script (pytorch#5827)
  Add windows doc (pytorch#5859)
  ...

# Conflicts:
#	aten/src/TH/generic/THTensorMath.c
#	torch/_tensor_docs.py
#	torch/csrc/generic/methods/TensorCompare.cwrap