Description
Is this a duplicate?
- I confirmed there appear to be no duplicate issues for this bug and that I agree to the Code of Conduct
Type of Bug
Runtime Error
Component
cuda.core
Describe the bug
I'm seeing a failure with the CUDA stream protocol in cuda-core 0.5.0. The RMM test below is now failing where it used to pass:
>       buf = cuda_core_mr.allocate(1024, stream=rmm_stream)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
rmm/tests/test_stream.py:91:
------
cuda/core/_memory/_memory_pool.pyx:171: in cuda.core._memory._memory_pool._MemPool.allocate
    ???
------
>   ???
E   TypeError: Stream or GraphBuilder expected, got Stream
cuda/core/_stream.pyx:473: TypeError
Here, I think the "Stream" in "got Stream" is actually the RMM Stream type, which has the stream protocol defined.
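For context, here is a minimal sketch of what I mean by "has the stream protocol defined", as I understand cuda.core's __cuda_stream__ protocol; the class below is a hypothetical stand-in, not RMM's actual Stream implementation:

class ForeignStream:
    """Hypothetical protocol-compliant stream wrapper."""

    def __init__(self, handle: int):
        # Address of the underlying cudaStream_t
        self._handle = handle

    def __cuda_stream__(self) -> tuple[int, int]:
        # Return (protocol version, stream handle address)
        return (0, self._handle)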
@leofang suggested:
We would like users to "launder" their protocol-compliant streams to a cuda.core.Stream, like this:
dev = Device()
cuda_stream_from_rmm = dev.create_stream(obj=rmm_stream)
and use it in any cuda.core APIs.
I tried that and observed segfaults for both of the following snippets:
rmm_stream = current_device.create_stream(rmm.pylibrmm.stream.Stream())
and
owning_rmm_stream = rmm.pylibrmm.stream.Stream()
rmm_stream = current_device.create_stream(owning_rmm_stream)
How to Reproduce
Run the RMM test shown in the traceback above (rmm/tests/test_stream.py:91).
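If it helps, here is a condensed, hypothetical reproducer. I'm assuming cuda.core.experimental's Device and DeviceMemoryResource here; the RMM test may construct its memory resource differently.

import rmm.pylibrmm.stream
from cuda.core.experimental import Device, DeviceMemoryResource

dev = Device()
dev.set_current()
cuda_core_mr = DeviceMemoryResource(dev.device_id)

rmm_stream = rmm.pylibrmm.stream.Stream()

# Passing the RMM stream directly now raises:
#   TypeError: Stream or GraphBuilder expected, got Stream
cuda_core_mr.allocate(1024, stream=rmm_stream)

# "Laundering" it first, as suggested, segfaults instead
laundered = dev.create_stream(obj=rmm_stream)
cuda_core_mr.allocate(1024, stream=laundered)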
Expected behavior
No segfault should occur.
Also, it would be nice if a deprecation warning were issued pointing users to the create_stream function, with the hard break in the API deferred to a later release.
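To illustrate, a hypothetical sketch (not cuda.core's actual code) of the transition path I have in mind: keep accepting protocol-compliant objects for now, but warn and coerce them.

import warnings

from cuda.core.experimental import Device, Stream


def coerce_stream(obj) -> Stream:
    """Hypothetical helper: accept a cuda.core Stream or any __cuda_stream__ object."""
    if isinstance(obj, Stream):
        return obj
    if hasattr(obj, "__cuda_stream__"):
        warnings.warn(
            "passing a foreign stream object directly is deprecated; "
            "wrap it with Device().create_stream(obj=...) first",
            DeprecationWarning,
            stacklevel=2,
        )
        return Device().create_stream(obj=obj)
    raise TypeError(f"Stream expected, got {type(obj).__name__}")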
Operating System
No response
nvidia-smi output
No response