@smessmer (Contributor) commented Jun 20, 2018

This is a first version of the dispatcher. It uses multi-dispatch by default but allows the operator schema to override that. It currently defines only the unboxed kernel registration and unboxed calling APIs; boxed kernel registration and boxed calling APIs will follow.
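To make the shape of this concrete, here is a minimal sketch of the registration-plus-dispatch idea: kernels are registered per dispatch key (e.g. a device type) and the call site selects a kernel by key. All names (`SimpleDispatcher`, `registerOp`, `DeviceTypeId` values) are illustrative, not the actual API added in this PR.

```cpp
#include <cassert>
#include <stdexcept>
#include <unordered_map>

// Hypothetical sketch: a table from dispatch key to a typed kernel pointer.
enum class DeviceTypeId { CPU, CUDA };

using KernelFn = int (*)(int, int);

class SimpleDispatcher {
 public:
  // Register one concrete kernel under a dispatch key; duplicates are an error.
  void registerOp(KernelFn fn, DeviceTypeId key) {
    auto result = kernels_.emplace(key, fn);
    if (!result.second) {
      throw std::logic_error("Tried to register conflicting operators to the dispatcher.");
    }
  }

  // Pick the kernel by key and invoke it with the unboxed arguments.
  int call(DeviceTypeId key, int a, int b) const {
    auto it = kernels_.find(key);
    if (it == kernels_.end()) {
      throw std::logic_error("Didn't find operator to dispatch to");
    }
    return it->second(a, b);
  }

 private:
  std::unordered_map<DeviceTypeId, KernelFn> kernels_;
};

int add_cpu(int a, int b) { return a + b; }
```

A call site would then do `dispatcher.registerOp(&add_cpu, DeviceTypeId::CPU);` followed by `dispatcher.call(DeviceTypeId::CPU, 2, 3)`, which returns 5.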

@ezyang (Contributor) commented Jun 20, 2018

@pytorchbot retest this please

template<class Key_>
void emplace(Key_&& key, void* value) {
// TODO Locking
//std::unique_lock<std::shared_timed_mutex> lock(mutex_);

void erase(const Key& key) {
// TODO Locking
//std::unique_lock<std::shared_timed_mutex> lock(mutex_);

private:
ska::flat_hash_map<Key, void*> map_;
// TODO Figure out how to get fast locking in C++11 (use boost::shared_timed_mutex? folly::SharedMutex?)


auto result = map_.emplace(std::forward<Key>(key), value);
if (!result.second) {
throw std::logic_error("Tried to register conflicting operators to the dispatcher.");

// TODO The current implementation below does not have the correctness characteristics
// that we need. It's worth spelling out exactly what we need:
//
// - We need LOCK FREE read access to the table (as per the performance benchmark
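One C++11-compatible way to satisfy a lock-free read requirement, sketched below, is read-copy-update in miniature: readers atomically load a `shared_ptr` to an immutable snapshot of the table, while writers copy the map, mutate the copy, and atomically publish it. This is an illustrative pattern, not the implementation chosen in this PR.

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

class SnapshotTable {
 public:
  using Map = std::map<std::string, void*>;

  SnapshotTable() : snapshot_(std::make_shared<Map>()) {}

  // Lock-free read path: one atomic load, then a lookup in an immutable map.
  void* lookup(const std::string& key) const {
    std::shared_ptr<const Map> snap = std::atomic_load(&snapshot_);
    auto it = snap->find(key);
    return it == snap->end() ? nullptr : it->second;
  }

  // Slow write path: copy, modify, publish. Concurrent writers would still
  // need mutual exclusion among themselves (omitted for brevity).
  void emplace(std::string key, void* value) {
    auto next = std::make_shared<Map>(*std::atomic_load(&snapshot_));
    (*next)[std::move(key)] = value;
    std::atomic_store(&snapshot_, std::shared_ptr<const Map>(std::move(next)));
  }

 private:
  std::shared_ptr<const Map> snapshot_;
};
```

The trade-off is that writes copy the whole table, which is acceptable for a dispatch table that is written rarely (at registration time) and read on every operator call.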

* @param func Concrete function implementation to register
* @param dispatch_key Dispatch key to implement this function with
*/
void registerOp(typename Schema::signature::func_type* func, typename Schema::dispatch::dispatch_key_type dispatch_key) {

auto dispatch_key = Schema::dispatch::dispatch_key(args...);
void* found = ops_.lookup(dispatch_key);
if (found == nullptr) {
throw std::logic_error("Didn't find operator to dispatch to");
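The unboxed call path around this snippet can be sketched as follows: compute the dispatch key from the arguments, look up the stored `void*`, cast it back to the concrete function-pointer type recorded in the schema's signature, and invoke it directly. `dispatch_key`, `FuncType`, and `Table` below stand in for `Schema::dispatch::dispatch_key`, `Schema::signature::func_type`, and the real lookup table; they are assumptions for illustration.

```cpp
#include <cassert>
#include <stdexcept>
#include <unordered_map>

enum class Key { CPU, CUDA };

struct Table {
  std::unordered_map<Key, void*> ops_;

  void* lookup(Key k) const {
    auto it = ops_.find(k);
    return it == ops_.end() ? nullptr : it->second;
  }
};

// Stand-in for Schema::dispatch::dispatch_key(args...).
inline Key dispatch_key(bool on_gpu) { return on_gpu ? Key::CUDA : Key::CPU; }

// Stand-in for Schema::signature::func_type.
using FuncType = int (*)(int);

int call(const Table& ops, bool on_gpu, int arg) {
  Key key = dispatch_key(on_gpu);
  void* found = ops.lookup(key);
  if (found == nullptr) {
    throw std::logic_error("Didn't find operator to dispatch to");
  }
  // Unboxed call: reinterpret the stored pointer as the typed function
  // and call it with the raw (unboxed) arguments.
  return reinterpret_cast<FuncType>(found)(arg);
}

int negate_cpu(int x) { return -x; }
```

Storing kernels type-erased as `void*` keeps the table uniform across schemas; type safety is recovered at the call site because the schema fixes `func_type` at compile time.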

* @tparam hasKernel Boolean for compile-time checking that a kernel is specified before finalizing the builder
* @tparam hasDispatchKey Boolean for compile-time checking that a dispatch key is specified before finalizing the builder
*/
template<class OpSchemaDef, bool hasKernel, bool hasDispatchKey>
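The two booleans track builder state in the type system: a minimal sketch of this pattern, with illustrative names and a plain `int` dispatch key rather than the PR's actual types, looks like this. Each setter returns a new builder type with the corresponding flag flipped to `true`, and `build()` static_asserts that both were set.

```cpp
#include <cassert>

using Kernel = int (*)(int);

template <bool hasKernel, bool hasDispatchKey>
class OpBuilder {
 public:
  OpBuilder(Kernel k, int key) : kernel_(k), key_(key) {}

  // Setting the kernel flips hasKernel to true in the returned type.
  OpBuilder<true, hasDispatchKey> kernel(Kernel k) const {
    return OpBuilder<true, hasDispatchKey>(k, key_);
  }

  // Setting the dispatch key flips hasDispatchKey to true in the returned type.
  OpBuilder<hasKernel, true> dispatchKey(int key) const {
    return OpBuilder<hasKernel, true>(kernel_, key);
  }

  // Calling build() on an incomplete builder fails at compile time.
  Kernel build() const {
    static_assert(hasKernel, "Forgot to call kernel() before build()");
    static_assert(hasDispatchKey, "Forgot to call dispatchKey() before build()");
    return kernel_;
  }

 private:
  Kernel kernel_;
  int key_;
};

inline OpBuilder<false, false> op() { return OpBuilder<false, false>(nullptr, 0); }

int twice(int x) { return x + x; }
```

Because the static_asserts live in a member function of a class template, they fire only when `build()` is instantiated, so partially-configured builders are legal intermediate values.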

@dzhulgakov (Collaborator) left a comment

Stamping to unblock - please address comments

case DeviceTypeId::CUDA: return stream << "DeviceTypeId(CUDA)";
case DeviceTypeId::UNDEFINED: return stream << "DeviceTypeId(UNDEFINED)";
}
throw std::logic_error("Unknown DeviceTypeId");

@smessmer smessmer merged commit cca2476 into pytorch:master Jun 25, 2018
@smessmer smessmer deleted the typeid5 branch June 25, 2018 20:12