Conversation

@WilliamTambellini
Contributor

Add support for multigpu parallel training in the neural net example

@9prady9
Member

9prady9 commented Mar 13, 2019

The updated example looks good. I have made some CMake changes.

9prady9 requested a review from umar456 on March 13, 2019
@WilliamTambellini
Contributor Author

Cool, thanks. Feel free to merge or modify as you see fit.

9prady9 previously approved these changes on Mar 28, 2019
@9prady9
Member

9prady9 commented Mar 28, 2019

I ran it on multiple devices (two: GTX 1060, AMD Spectre R7) and it works without any issues. This change demonstrates how to use multiple devices, but not so much how to distribute training data across multiple devices. How difficult would such an example be to implement?

@WilliamTambellini
Contributor Author

> how to distribute training data across multiple devices. How difficult would such an example be to implement?

That will be worth doing as soon as
#2463
is done.
This example at least shows how to do multi-GPU, multi-model training (sketched below).
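
For illustration, here is a minimal sketch of that multi-GPU, multi-model pattern using the ArrayFire C++ API. It is not this PR's actual example code: the one-layer network, the synthetic data, the hyperparameters, and the `trainOneModel` helper name are all made up for the sketch. Each host thread pins itself to one device with `af::setDevice()` and trains its own independent model there.

```cpp
#include <arrayfire.h>
#include <thread>
#include <vector>

// Train one independent model on the given device. The single-layer
// network, synthetic data, and hyperparameters are all illustrative.
void trainOneModel(int device) {
    af::setDevice(device);  // arrays created below live on this device
    const int samples = 1024, features = 64, epochs = 100;
    const float lr = 0.1f;
    af::array X = af::randu(samples, features);    // synthetic inputs
    af::array y = af::randu(samples, 1);           // synthetic targets
    af::array W = af::randu(features, 1) * 0.01f;  // weights
    for (int e = 0; e < epochs; ++e) {
        af::array pred = af::sigmoid(af::matmul(X, W));
        af::array err  = pred - y;
        // MSE gradient through the sigmoid output
        af::array grad = af::matmul(X.T(), err * pred * (1.0f - pred));
        W -= lr * grad / static_cast<float>(samples);
    }
    af::sync(device);  // wait for this device's work to finish
}

int main() {
    const int n = af::getDeviceCount();
    std::vector<std::thread> workers;
    for (int d = 0; d < n; ++d)
        workers.emplace_back(trainOneModel, d);  // one model per device
    for (auto& t : workers) t.join();
    return 0;
}
```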

@WilliamTambellini
Contributor Author

That item would also allow comparing the speed of multi-GPU vs. single-GPU execution (see the timing sketch below).
Distributed single-model training is out of scope here.
Review, please?
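
A hedged sketch of that timing comparison, assuming the `trainOneModel()` helper from the sketch above is compiled alongside (with that sketch's `main()` removed); `runOnDevices()` is likewise a made-up name. It times n models trained concurrently on device 0 against n models trained one per device, using `af::timer`.

```cpp
#include <arrayfire.h>
#include <cstdio>
#include <thread>
#include <vector>

void trainOneModel(int device);  // defined in the previous sketch

// Run one trainOneModel() thread per entry in `devices` and return
// the wall-clock time in seconds.
double runOnDevices(const std::vector<int>& devices) {
    af::timer clock = af::timer::start();
    std::vector<std::thread> workers;
    for (int d : devices) workers.emplace_back(trainOneModel, d);
    for (auto& t : workers) t.join();
    return af::timer::stop(clock);
}

int main() {
    const int n = af::getDeviceCount();
    std::vector<int> single(n, 0);  // n models, all on device 0
    std::vector<int> multi(n);      // n models, one per device
    for (int d = 0; d < n; ++d) multi[d] = d;
    std::printf("single GPU: %g s\n", runOnDevices(single));
    std::printf("multi  GPU: %g s\n", runOnDevices(multi));
    return 0;
}
```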

WilliamTambellini changed the title from "Add support for multigpu training in the neural net example" to "Add support for multithreaded training in the neural net example" on Apr 14, 2020