Allow shared `Arc<Collector>` usage #81
Conversation
```rust
///
/// Note that all `Guard` references used to access the map must be produced by
/// the provided `collector`.
pub fn collector<C: Borrow<Collector>>(self, collector: C) -> HashMapBuilder<K, V, S, C> {
```
This method is unsound without the extra bounds on `K`/`V` that were mentioned. It should also be distinct from the default collector builder, which takes an owned `Collector` without requiring extra bounds.
Sorry, I mistakenly pushed an outdated version. As I mentioned in the PR, setting an `Arc<Collector>` should be done via a distinct `shared_collector` builder method.
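The distinction being discussed can be sketched roughly as follows. This is a simplified, hypothetical builder, not the PR's actual code: `Collector` here is a stand-in for `seize::Collector`, and the method names and bounds are illustrative of the safety argument (a uniquely owned collector needs no extra bounds, while a shared one requires `K: Send + 'static, V: Send + 'static`).

```rust
use std::marker::PhantomData;
use std::sync::Arc;

// Stand-in for seize::Collector (assumption, not the real type).
pub struct Collector;

pub struct HashMapBuilder<K, V, C = Collector> {
    collector: C,
    _marker: PhantomData<(K, V)>,
}

impl<K, V> HashMapBuilder<K, V, Collector> {
    // Default builder: takes an owned `Collector`, no extra bounds needed,
    // because the map uniquely owns its collector and drops it with itself.
    pub fn collector(self, collector: Collector) -> HashMapBuilder<K, V, Collector> {
        HashMapBuilder { collector, _marker: PhantomData }
    }
}

impl<K, V, C> HashMapBuilder<K, V, C> {
    // Shared builder: a shared collector can outlive any single map, so
    // retired entries may be reclaimed after the map is gone. `K` and `V`
    // must therefore be `Send + 'static` for that deferred drop to be safe.
    pub fn shared_collector(
        self,
        collector: Arc<Collector>,
    ) -> HashMapBuilder<K, V, Arc<Collector>>
    where
        K: Send + 'static,
        V: Send + 'static,
    {
        HashMapBuilder { collector, _marker: PhantomData }
    }
}
```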
```diff
 // `HashMap` itself. If this was not true, we would require stricter bounds
 // on `HashMap` operations themselves.
-unsafe impl<K: Send + Sync, V: Send + Sync, S: Sync> Sync for HashMap<K, V, S> {}
+unsafe impl<K, V, S: Sync, C: Sync + Borrow<Collector>> Sync for HashMap<K, V, S, C> {}
```
```rust
raw: raw::HashMap<K, V, S>,

pub struct HashMap<K, V, S = RandomState, C = Collector>
where
    C: Borrow<Collector>,
```
I think I would prefer to just unconditionally store an `Arc<Collector>`. It's useful to limit the size of a `HashMap` on the stack anyways, and I don't think the extra generics are worth it.
Hmm, so you think it would be better to switch every `HashMap` to be initialized with an `Arc<Collector>`? I agree that the extra generics are not the cleanest.
Sorry for the delay, yes, I think that would be reasonable. `seize::Collector` is also quite a large type, and so allocating it would be useful either way.
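The suggestion of always heap-allocating the collector can be sketched as follows. Again a hypothetical, simplified layout (the stand-in `Collector` is padded to mimic `seize::Collector` being large; the real map stores a concurrent table, not a `Vec`): storing `Arc<Collector>` keeps the map itself pointer-sized on that field no matter how big the collector grows, with no extra generic parameter.

```rust
use std::sync::Arc;

// Stand-in for seize::Collector, padded to mimic it being a large type
// (assumption for illustration only).
pub struct Collector {
    _epochs: [u64; 32],
}

pub struct HashMap<K, V> {
    collector: Arc<Collector>, // always heap-allocated; no `C` generic needed
    entries: Vec<(K, V)>,      // stand-in for the real concurrent table
}

impl<K, V> HashMap<K, V> {
    pub fn new() -> Self {
        HashMap {
            collector: Arc::new(Collector { _epochs: [0; 32] }),
            entries: Vec::new(),
        }
    }
}
```

The trade-off is one extra pointer indirection and an allocation per map, in exchange for a small stack footprint and a single non-generic `HashMap` type.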
Force-pushed from 219c4b2 to e7cdac0
- create new `FileSysService`, which handles retrieving or creating new nodes given a `ShelfDataRef` and a `PathBuf`
- remove metadata from `File`; metadata is now loaded on `FileSummary`
- temporarily use a fork of `papaya` for better memory usage (ibraheemdev/papaya#81)
- remove `dtag_files` from `Node` and `dtags` from `File`
- change the reference type of an owned `Tag` to `SharedRef` to allow editing its information
- `attach` and `detach` for tags and dtags now take an input directory `FileId` instead of a path `PathBuf`
- update the respective calls in `RpcService` to query the `FileSysService` when required
As previously discussed in #66, this change is motivated by the memory usage of a freshly initialized `HashMap` or `HashSet`, which is ~1kB. This can cause memory usage to grow quite fast when defining multiple `HashMap`s within a single struct.

The change simply adds a `C: Borrow<Collector>` generic to the `HashMap`. In order to make the API safe, setting a shared collector is constrained to a `shared_collector` builder function with bounds `K: Send + 'static, V: Send + 'static`, which should be enough to enforce safety from my understanding.

The implemented `Clone` behavior for `HashMap<K, V, S, Arc<Collector>>` is to simply create a new `Arc<Collector>` rather than cloning it, as that seems the safer approach, even though it might be slightly confusing.
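That `Clone` behavior, each clone getting a fresh collector rather than sharing the original `Arc`, can be sketched like this. Stand-in types again (`Collector` is hypothetical, `entries` stands in for the real table); the point is only that `clone` does not call `Arc::clone` on the collector.

```rust
use std::sync::Arc;

// Stand-in for seize::Collector (assumption).
pub struct Collector;

pub struct HashMap<K: Clone, V: Clone> {
    collector: Arc<Collector>,
    entries: Vec<(K, V)>,
}

impl<K: Clone, V: Clone> Clone for HashMap<K, V> {
    fn clone(&self) -> Self {
        HashMap {
            // A brand-new collector is created instead of sharing the original
            // via `Arc::clone`, so each clone reclaims memory independently.
            collector: Arc::new(Collector),
            entries: self.entries.clone(),
        }
    }
}
```

The "slightly confusing" part is that cloning a map built with a shared collector silently drops the sharing; the upside is that the clone's guards can never be tied to a collector it does not control.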