tl.alphas and tl.alphas_like added following the tf.ones/zeros and tf.zeros_like/ones_like #580

DEKHTIARJonathan merged 10 commits into master
I think that there may be some minor issues with the comments. First, the documentation for … Second, the documentation for …
The documentation might not be perfectly accurate. To be very honest with you, the function will keep supporting string dtypes. I have designed it to support every kind of dtype (at least I hope so) in a more efficient manner than the existing TF APIs. @novog, maybe you could submit a PR correcting the documentation. It's always easier to correct something with an external point of view.
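To illustrate the dtype flexibility discussed above, here is an illustrative sketch of how the API would be called (the shapes and values are assumptions, not taken from the PR):

```python
import tensorlayer as tl

# The fill value determines the dtype of the resulting tensor:
tl.alphas([2, 2], alpha_value=5)        # integer tensor filled with 5
tl.alphas([2, 2], alpha_value=0.5)      # float tensor filled with 0.5
tl.alphas([2, 2], alpha_value=True)     # boolean tensor filled with True
tl.alphas([2, 2], alpha_value="hello")  # string tensor, per the comment above
```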
…d tf.zeros_like/ones_like (#580)

* tl.alphas and tl.alphas_like added
* YAPF Error Correct
* Codacy Error Fix
* Docstring Fixed
* Codacy Error Fix
* Documentation Added
* Update CHANGELOG.md
In a research project, I encountered the need to create tensors of the same shape as another tensor, filled with a custom value (not just 0 or 1), which could be Boolean, Float, Integer, and so on.
For the record, it was something I originally developed for the TF repository (but they were not interested):
The function prototypes are the following; they have been implemented in tensorlayer/array_ops.py:
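The original snippet was stripped from this page; the prototypes presumably mirror their TF counterparts, roughly as sketched below (the optimize parameter is an assumption carried over from tf.ones_like, and the bodies are a minimal approximation, not the PR's actual code):

```python
import tensorflow as tf

def alphas(shape, alpha_value, name=None):
    # Sketch: a tensor of `shape` filled with `alpha_value`; the dtype is
    # inferred from `alpha_value`. The real implementation may special-case
    # some dtypes (e.g. strings) rather than relying on tf.fill alone.
    with tf.name_scope(name, "alphas", [shape]):
        alpha_tensor = tf.convert_to_tensor(alpha_value)
        return tf.fill(shape, alpha_tensor)

def alphas_like(tensor, alpha_value, name=None, optimize=True):
    # Sketch: the same fill, with the shape taken from `tensor`.
    with tf.name_scope(name, "alphas_like", [tensor]):
        tensor = tf.convert_to_tensor(tensor, name="tensor")
        if optimize and tensor.get_shape().is_fully_defined():
            # Static shape known at graph-construction time.
            return alphas(tensor.get_shape().as_list(), alpha_value)
        # Fall back to the dynamic shape.
        return alphas(tf.shape(tensor), alpha_value)
```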
The code was not created from scratch: it is highly inspired by tf.ones and tf.zeros for tl.alphas, and by tf.zeros_like and tf.ones_like for tl.alphas_like.
The code uses the latest implementation and has been designed to work with eager mode.
The number of modifications is relatively small, so I am fairly confident in the robustness of the new implementation (it is largely based on the existing one).
The idea is to reproduce and merge the functions while allowing any custom value to be set in the tensor:
tl.alphas merges: tf.ones and tf.zeros

tl.alphas_like merges: tf.ones_like and tf.zeros_like

How does the API work?
My new functions take a parameter alpha_value and fill the tensor with this value. This allows me to run a script like the following:
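The original script was stripped from this page; an illustrative usage (shapes and values are assumptions) would be:

```python
import tensorflow as tf
import tensorlayer as tl

# A 2x3 float tensor filled with 0.5 (dtype inferred from alpha_value).
x = tl.alphas(shape=[2, 3], alpha_value=0.5)

# A tensor with the same shape as `x`, filled with 0.9.
y = tl.alphas_like(x, alpha_value=0.9)

with tf.Session() as sess:
    print(sess.run(x))  # [[0.5 0.5 0.5] [0.5 0.5 0.5]]
    print(sess.run(y))  # [[0.9 0.9 0.9] [0.9 0.9 0.9]]
```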
Performance Optimization
Of course, the same thing could be done using the following commands:
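The commands themselves were stripped from this page; the obvious pure-TF composition being compared against is presumably along these lines:

```python
import tensorflow as tf

# Fill by scaling a ones tensor: two ops instead of a single fill op.
x = tf.ones(shape=[2, 3]) * 0.5
y = tf.ones_like(x) * 0.9
```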
Each method was executed 5 times and the execution times averaged:

* The method using TF code only (shown above): [47.5s, 47.5s, 48.1s, 47.6s, 47.8s] => Average Time: 47.7 secs
* The method I implemented, alphas_like: [25.0s, 25.0s, 25.0s, 24.9s, 25.0s] => Average Time: 25.0 secs
My method is almost twice as fast!
Code used to produce these numbers:
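The benchmark script was also stripped from this page; a hypothetical reconstruction (the iteration count, shape, and exact operations timed are all assumptions, and the original may have measured something different) might look like:

```python
import time
import numpy as np
import tensorflow as tf
import tensorlayer as tl

N_RUNS = 5
N_ITER = 10000
SHAPE = [100, 100]

def build_tf_only():
    base = tf.ones(SHAPE)
    return tf.ones_like(base) * 0.5

def build_alphas_like():
    base = tf.ones(SHAPE)
    return tl.alphas_like(base, alpha_value=0.5)

def benchmark(op_builder):
    # Build the graph once per run, then execute the op N_ITER times.
    times = []
    for _ in range(N_RUNS):
        tf.reset_default_graph()
        op = op_builder()
        start = time.time()
        with tf.Session() as sess:
            for _ in range(N_ITER):
                sess.run(op)
        times.append(time.time() - start)
    return times

tf_times = benchmark(build_tf_only)
tl_times = benchmark(build_alphas_like)

print("TF only:        %s => avg %.1fs" % (tf_times, np.mean(tf_times)))
print("tl.alphas_like: %s => avg %.1fs" % (tl_times, np.mean(tl_times)))
```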
Which gives me these results: