🚀 Feature
Incorporate support for the Posit number format as an additional dtype in PyTorch
Motivation
The Posit number format (introduced in *Beating Floating Point at its Own Game*) has been sparking a lot of interest in the research community lately, in particular for its potential in low-precision training and inference of neural networks.
Researchers have been building their own frameworks for deep learning on top of Posit, some based on PyTorch (such as PositNN), others on TensorFlow (such as Deep PeNSieve).
However, it would be far more convenient if PyTorch supported Posits natively, so that researchers could use them directly without having to build a framework from scratch.
Ideas For Implementation
Following issue #52673 and after some discussion with @RaulMurillo, I believe the best approach is to support the various Posit types as additional dtypes, backed by an external library such as Universal, and to register kernel implementations with the dispatcher for each operator (see the sketch below).
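As a rough illustration, here is a minimal sketch of the kind of element-wise CPU loop such kernels could be built around, using Universal's `posit` type for the arithmetic. This is only a sketch under assumptions: the header path, namespace, and `<nbits, es>` configuration match recent Universal releases but may differ between versions, and all of the actual integration work (a new `c10::ScalarType` entry, type promotion rules, dispatcher registration, serialization, etc.) is omitted here.

```cpp
// Sketch only: shows how Universal's posit arithmetic could back an
// element-wise add kernel. None of this is existing PyTorch code.
#include <universal/number/posit/posit.hpp>  // header path in recent Universal releases

#include <cstddef>
#include <iostream>

// 16-bit posit with 2 exponent bits (the configuration is an assumption;
// Universal also ships posit<16,1> and other layouts).
using posit16 = sw::universal::posit<16, 2>;

// A plain element-wise loop over raw posit buffers -- the kind of inner
// loop a CPU kernel registered with the dispatcher would ultimately run.
void posit_add_kernel(const posit16* a, const posit16* b,
                      posit16* out, std::size_t n) {
  for (std::size_t i = 0; i < n; ++i) {
    out[i] = a[i] + b[i];  // Universal overloads the arithmetic operators
  }
}

int main() {
  // Posits convert from/to double and print via Universal's operator<<,
  // so they behave as a drop-in scalar type for kernel code.
  posit16 x = 3.14159, y = 2.71828;
  std::cout << (x + y) << '\n';
}
```

The appeal of this design is that the numerics live entirely in the external library, so the PyTorch side reduces to plumbing: defining the dtype and wiring kernels like the one above into the dispatcher, much as was done for other non-native scalar types.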
Issues #755 and #33152 document the process that complex numbers went through before they became supported, and I was thinking a similar process could be followed for Posits.
Since I am quite inexperienced with PyTorch and its internals, I would love some insight on this suggestion: whether it makes sense to go this way, whether there are obstacles I should be aware of, etc.
@albanD @ezyang (and other people interested), is this a good way forward?