This is the companion code for the DL benchmarking study reported in the paper *Comparative Study of Deep Learning Software Frameworks* by *Soheil Bahrampour, Naveen Ramakrishnan, Lukas Schott, and Mohak Shah*. The paper can be found here: http://arxiv.org/abs/1511.06435. The code allows users to reproduce and extend the results reported in the study. It provides timings of the forward run and the forward+backward (gradient computation) run of several deep learning architectures using Caffe, Neon, TensorFlow, Theano, and Torch. The architectures used include LeNet, AlexNet, LSTM, and a stacked auto-encoder. Please cite the above paper when reporting, reproducing, or extending the results.

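The timing methodology described above (averaging forward and forward+backward runs over many iterations) can be sketched in a framework-agnostic way. The harness below is an illustrative assumption, not the benchmark's actual code: `time_step` and its parameters are hypothetical, and framework-specific details such as GPU synchronization are omitted.

```python
# Hypothetical sketch of the timing methodology: average wall-clock time
# (in ms) of a callable over many iterations, after a warm-up phase that
# excludes one-time costs such as memory allocation and kernel compilation.
import time

def time_step(step, n_warmup=10, n_iters=100):
    """Return the mean wall-clock time of `step()` in milliseconds."""
    for _ in range(n_warmup):   # warm-up iterations are not timed
        step()
    start = time.perf_counter()
    for _ in range(n_iters):
        step()
    elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / n_iters

# Example: time a dummy "forward pass" (a stand-in for a model's forward run).
forward_ms = time_step(lambda: sum(i * i for i in range(10000)))
```

In the actual benchmarks, the timed callable would be a framework-specific forward (or forward+backward) call on the network in question.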
# Updated results
Here you can find a set of new timings obtained using **cuDNNv4** on a **single M40 GPU** for the same experiments performed in the paper. The results are reported using **Caffe-Nvidia 0.14.5**, **TensorFlow 0.9.0rc0**, **Theano 0.8.2**, and **Torch7**.

1) **LeNet** using a batch size of 64 (Extension of Table 3 in the paper)

| Setting | Gradient (ms) | Forward (ms) |
|:----------:|:-------------:|:------------:|
| Caffe | 2.4 | 0.8 |
| Tensorflow | 2.7 | 0.8 |
| Theano | **1.6** | 0.6 |
| Torch | 1.8 | **0.5** |

2) **AlexNet** using a batch size of 256 (Extension of Table 4 in the paper)

| Setting | Gradient (ms) | Forward (ms) |
|:----------:|:-------------:|:------------:|
| Caffe | 279.3 | **88.3** |
| Tensorflow | **276.6** | 91.1 |
| Torch | 408.8 | 98.8 |

3) **LSTM** using a batch size of 16 (Extension of Table 6 in the paper)

| Setting | Gradient (ms) | Forward (ms) |
|:----------:|:-------------:|:------------:|
| Tensorflow | 85.4 | 37.1 |
| Theano | **17.3** | **4.6** |

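To make the recurrent architecture of setting 3 concrete, here is a minimal NumPy sketch of a single LSTM step. The dimensions, initialization, and gate ordering are illustrative assumptions; see the paper for the configuration the benchmark actually times.

```python
# Hypothetical single LSTM time step in NumPy. All four gates are computed
# with one fused matrix multiply, a common implementation strategy.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: returns the new hidden and cell states."""
    z = x @ W + h @ U + b             # pre-activations for all four gates
    n = h.shape[-1]
    i = sigmoid(z[..., :n])           # input gate
    f = sigmoid(z[..., n:2 * n])      # forget gate
    o = sigmoid(z[..., 2 * n:3 * n])  # output gate
    g = np.tanh(z[..., 3 * n:])       # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
batch, d_in, d_hid = 16, 32, 64       # batch size 16, as in the table above
x = rng.standard_normal((batch, d_in))
h = np.zeros((batch, d_hid))
c = np.zeros((batch, d_hid))
W = rng.standard_normal((d_in, 4 * d_hid)) * 0.1
U = rng.standard_normal((d_hid, 4 * d_hid)) * 0.1
b = np.zeros(4 * d_hid)
h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (16, 64)
```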
4) **Stacked auto-encoder** with encoder dimensions of 400, 200, and 100, using a batch size of 64 (Extension of Table 5 in the paper)

| Setting | Gradient (ms) AE1 | Gradient (ms) AE2 | Gradient (ms) AE3 | Gradient (ms) Total pre-training | Gradient (ms) SE | Forward (ms) SE |
|:----------:|:-----------------:|:-----------------:|:-----------------:|:--------------------------------:|:----------------:|:---------------:|
| Caffe | 0.8 | 0.9 | 0.9 | 2.6 | 1.1 | 0.6 |
| Tensorflow | 0.7 | 0.6 | 0.6 | 1.9 | 1.2 | 0.4 |
| Theano | 0.6 | 0.4 | 0.3 | **1.3** | **0.4** | **0.3** |
| Torch | 0.5 | 0.5 | 0.5 | 1.5 | 0.6 | **0.3** |

5) **Stacked auto-encoder** with encoder dimensions of 800, 1000, and 2000, using a batch size of 64 (Extension of Table 7 in the paper)

| Setting | Gradient (ms) AE1 | Gradient (ms) AE2 | Gradient (ms) AE3 | Gradient (ms) Total pre-training | Gradient (ms) SE | Forward (ms) SE |
|:----------:|:-----------------:|:-----------------:|:-----------------:|:--------------------------------:|:----------------:|:---------------:|
| Caffe | 0.9 | 1.2 | 1.7 | 3.8 | 1.9 | 0.9 |
| Tensorflow | 0.9 | 1.1 | 1.6 | 3.6 | 2.1 | 0.7 |
| Theano | 0.7 | 1.0 | 1.8 | 3.5 | **1.2** | **0.6** |
| Torch | 0.7 | 0.9 | 1.4 | **3.0** | 1.4 | **0.6** |

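As a concrete illustration of the stacked encoder (SE) timed in setting 4 above, here is a minimal NumPy sketch of its forward pass with encoder dimensions 400, 200, and 100. The 784-dimensional input (MNIST) and the sigmoid activations are assumptions; the benchmark's actual implementations live in the framework-specific folders.

```python
# Minimal NumPy sketch of a stacked auto-encoder forward pass.
# Layer sizes follow setting 4: 784 -> 400 -> 200 -> 100 (input dim assumed).
import numpy as np

rng = np.random.default_rng(0)
dims = [784, 400, 200, 100]  # input followed by the three encoder layers

# Randomly initialised encoder weights and biases, one pair per layer.
weights = [rng.standard_normal((d_in, d_out)) * 0.01
           for d_in, d_out in zip(dims[:-1], dims[1:])]
biases = [np.zeros(d_out) for d_out in dims[1:]]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(x):
    """Run the stacked encoder on a batch of inputs."""
    for w, b in zip(weights, biases):
        x = sigmoid(x @ w + b)
    return x

batch = rng.standard_normal((64, 784))  # batch size 64, as in the tables
codes = encode(batch)
print(codes.shape)  # (64, 100)
```

The forward (SE) timings in the tables correspond to one such pass through all three encoder layers; the gradient timings additionally include backpropagation through them.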
## Run the benchmarks
See the readme file within each folder to run the experiments.

## License