
Conversation

@ClementPinard (Contributor)

I noticed that the tutorial was updated but not the codebase, so here it goes.
It is now on par with the tutorial: http://pytorch.org/tutorials/advanced/cpp_extension.html

- change AT_ASSERTM to TORCH_CHECK
- change .type() to .scalar_type()
- change PackedAccessor to PackedAccessor32
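
For context, a minimal sketch of what the three renames above look like in extension code; the function and tensor names here are hypothetical, not taken from this repo:

```cpp
#include <torch/extension.h>

// AT_ASSERTM(cond, msg) becomes TORCH_CHECK(cond, msg).
// check_input and `x` are illustrative names only.
void check_input(const torch::Tensor& x) {
  TORCH_CHECK(x.is_cuda(), "x must be a CUDA tensor");
  TORCH_CHECK(x.is_contiguous(), "x must be contiguous");
}

// Dispatch on .scalar_type() instead of the deprecated .type(), and pass
// tensors to kernels through the 32-bit-indexed accessor:
//
//   AT_DISPATCH_FLOATING_TYPES(x.scalar_type(), "my_op", ([&] {
//     my_kernel<scalar_t><<<blocks, threads>>>(
//         x.packed_accessor32<scalar_t, 2, torch::RestrictPtrTraits>());
//   }));
```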

This hopefully fixes #65 and #66 (although 1.6 normally only returns a deprecation warning).

Second significant change:
change fminf and fmaxf to their fmin and fmax counterparts, and make sure that the right template is used by casting the 0.0 to scalar_t. This is probably not needed anymore, as it was likely a bug with nvcc and gcc7, but it might help people with old configs get grad_check working.
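
A minimal sketch of that change inside a templated CUDA device helper (the helper name is hypothetical, not from this repo):

```cuda
#include <torch/extension.h>
#include <cuda.h>
#include <cuda_runtime.h>

// Before: fmaxf(z, 0.0f) hard-codes float, so with double tensors the
// wrong overload (or an nvcc/gcc7 quirk) could break grad_check.
// After: the overloaded fmax, with 0.0 cast to scalar_t so the correct
// instantiation is picked for both float and double.
template <typename scalar_t>
__device__ __forceinline__ scalar_t relu(scalar_t z) {
  return fmax(z, static_cast<scalar_t>(0.0));
}
```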

This fixes #27 and #42



Successfully merging this pull request may close these issues:

- Compiler error /cuda/setup.py
- This repo can not compile using Pytorch 1.6.0
