Conversation
…t myself out of the handle<->context fwd decl. issue.
…s to actually work.
cc @inducer
Thanks, @kaushikcfd! While a memory pool is one way to solve the issue you have encountered with PETSc, I'm afraid this PR is far too intrusive (yet backend-specific) for achieving the actual goal of eliminating the impact of the temporaries you've seen in PETSc. Please let me give some more thought to a more concise (yet equally powerful) fix that immediately carries over to the CUDA backend as well.
@karlrupp: Thanks for taking a look. This can be extended to the CUDA backend as well: PyCUDA and PyOpenCL share the same memory pool implementation, so we would only need to add another memory pool allocator class, which would involve minimal changes.
inducer
left a comment
@karlrupp I agree that we should preserve backend independence. One thing I would like to know is what you think of memory pools in general. You mention a different way of being able to avoid expensive temporary allocation--I would be curious to hear what you have in mind.
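For context, the pooling idea under discussion can be sketched in a few lines. The class below is a hypothetical, backend-free illustration (not ViennaCL's or PyOpenCL's actual code): freed blocks are kept in bins keyed by rounded-up size and handed back on the next allocation of a similar size, so repeated temporaries bypass the expensive underlying allocator.

```cpp
#include <cassert>
#include <cstdlib>
#include <map>
#include <vector>

// Minimal sketch of a bin-based memory pool (illustrative only):
// released blocks are binned by their rounded-up size; a later
// allocation of a similar size reuses a binned block instead of
// going back to the underlying allocator.
class simple_memory_pool
{
public:
  void * allocate(std::size_t bytes)
  {
    std::size_t const rounded = round_up(bytes);
    std::vector<void *> & bin = bins_[rounded];
    if (!bin.empty())
    {
      void * p = bin.back();
      bin.pop_back();
      return p;                   // reuse a previously released block
    }
    return std::malloc(rounded);  // fall back to a fresh allocation
  }

  void release(void * p, std::size_t bytes)
  {
    bins_[round_up(bytes)].push_back(p);  // return the block to its bin
  }

  ~simple_memory_pool()
  {
    for (std::map<std::size_t, std::vector<void *> >::iterator it = bins_.begin(); it != bins_.end(); ++it)
      for (std::size_t i = 0; i < it->second.size(); ++i)
        std::free(it->second[i]);
  }

private:
  static std::size_t round_up(std::size_t bytes)
  {
    std::size_t s = 1;
    while (s < bytes) s <<= 1;  // round up to the next power of two
    return s;
  }

  std::map<std::size_t, std::vector<void *> > bins_;
};
```

Under this scheme, the second allocation of a similarly sized temporary is a vector pop rather than a driver-level allocation, which is where the savings for repeated vector operations would come from.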
```cpp
/** @brief Copies data of the provided 'DataType' from 'handle_src' to 'handle_dst' and converts the data if the binary representation of 'DataType' among the memory domains differs. */
template<typename DataType>
void typesafe_memory_copy(mem_handle const & handle_src, mem_handle & handle_dst)
template<typename DataType, typename H = viennacl::ocl::handle<cl_mem> >
```
How come this defaults to the OpenCL handle?
```cpp
 * @tparam NumericT The floating point type, either 'float' or 'double'
 */
template<class NumericT, typename SizeT /* see forwards.h for default type */, typename DistanceT /* see forwards.h for default type */>
template<class NumericT, typename OCLHandle, typename SizeT /* see forwards.h for default type */, typename DistanceT /* see forwards.h for default type */>
```
As in the definition, OCLHandle -> HandleImpl?
```cpp
#include "viennacl/meta/enable_if.hpp"
#include "viennacl/version.hpp"

#include "CL/cl.h"
```
```cpp
 * Instead, this class collects all the necessary conditional compilations.
 *
 */
template <typename OCLHandle>
```
- Suggest renaming OCLHandle -> HandleImpl?

Maybe

```cpp
template <typename OCLHandle =
#if defined(VIENNACL_WITH_OPENCL)
  stuff
#elif defined(VIENNACL_WITH_CUDA)
  stuff
#endif
>
class mem_handle_with_impl
{ ... };

typedef mem_handle_with_impl<> mem_handle;
```

This might also avoid having to add many of the angle brackets?
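A compilable distillation of that suggestion might look as follows (the names and the `backend_id()` helper are illustrative, not ViennaCL's actual API): the handle implementation is picked at compile time via a defaulted template parameter, so call sites can keep using the plain typedef without sprinkling angle brackets everywhere.

```cpp
#include <cassert>

// Stand-in handle implementations (illustrative only).
struct opencl_handle { static int backend_id() { return 1; } };  // would wrap a cl_mem
struct cuda_handle   { static int backend_id() { return 2; } };  // would wrap a device pointer

// The backend is chosen once, via the default template argument,
// guarded by the usual ViennaCL configuration macros.
template <typename HandleImpl =
#if defined(VIENNACL_WITH_CUDA)
  cuda_handle
#else
  opencl_handle  // OpenCL default for this sketch
#endif
>
class mem_handle_with_impl
{
public:
  typedef HandleImpl implementation_type;
  static int backend_id() { return HandleImpl::backend_id(); }
};

typedef mem_handle_with_impl<> mem_handle;  // existing spelling keeps working
```

Code written against `mem_handle` then needs no angle brackets, while a specific backend can still be requested explicitly as `mem_handle_with_impl<cuda_handle>`.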
```cpp
private:
  typedef uint32_t bin_nr_t;
  typedef std::vector<cl_mem> bin_t;
```
The original PyOpenCL memory pool was not backend-specific--in fact, it was used identically (same header file!) in PyCUDA. How come cl_mem can't be obtained from the allocator type as in the original pool?
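The backend-independent pattern being referred to can be sketched like this (a hypothetical reduction, not the actual PyOpenCL source): the bins' element type is derived from the allocator instead of hard-wiring `cl_mem`.

```cpp
#include <cassert>
#include <vector>

// Illustrative reduction of an allocator-parameterized pool: the bins
// store whatever handle type the allocator exposes, so the same pool
// class can serve OpenCL (cl_mem) and CUDA (CUdeviceptr) alike.
template <typename Allocator>
class generic_pool
{
public:
  typedef typename Allocator::pointer_type pointer_type;

  typedef std::vector<pointer_type> bin_t;  // instead of std::vector<cl_mem>

  bin_t & bin() { return bin_; }

private:
  typedef unsigned int bin_nr_t;
  bin_t bin_;
};

// Mock allocators for illustration; real ones would call
// clCreateBuffer / cuMemAlloc and expose the matching handle type.
struct mock_opencl_allocator { typedef struct _cl_mem * pointer_type; };
struct mock_cuda_allocator   { typedef unsigned long long pointer_type; };
```

With this layout, nothing in the pool itself mentions an OpenCL type; the backend choice lives entirely in the allocator supplied as the template argument.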
```
@@ -0,0 +1,137 @@
// Various odds and ends
```
This shouldn't be necessary. Particularly, we shouldn't add our own error classes (but use VCL's existing ones instead).
This PR implements a memory pool for ViennaCL's OpenCL backend.

Brief overview of the implementation details:

- Memory-owning classes such as `vector_base` now take an additional template parameter that determines the handle type of the memory allocated. The handle type can be either `viennacl::ocl::mem_handle<cl_mem>` or `viennacl::ocl::pooled_clmem_handle`. This breaks backward compatibility, but I expect there wouldn't be many instances in which user-facing code would deal with memory handles.
- Temporaries in `linalg/vector_operations.hpp` are now allocated through a pooled handle.

These allocation calls for the temporaries were substantial when vector operations were called from PETSc. Timings (in ms) were recorded for `VecNorm` and `VecMDot` before and after the introduction of the pooled memory handle.

Details of the test:

- `VecNorm` on a vector of length $2^{18}$.
- `VecMDot` on vectors of length $2^{18}$, involving 30 inner products.

I have attached the files for the tests, along with their makefiles.
Attachments: timings.tar.gz