Speed-optimize '--each-while'. #153
doublep wants to merge 8 commits into magnars:master from doublep:performance
Conversation
I have no problems with this.
Hi! This is a great initiative. One issue: do you have your FSF paperwork signed?
Yes, I signed it many years ago. You can find a few of my changes in Emacs' ChangeLog and in AUTHORS.
Speaking of benchmarks: if we're going to do an optimization project like this, it would make sense to prepare a suite we could run automatically, ideally covering all the functions. Obviously you don't need to write a benchmark for every function, but we could at least discuss and come up with some framework, ideally as automatic as possible (something similar to how the tests work now).
The problem is that benchmarking seems to give a really wide distribution of results. So, to get a good estimate you'd need to run each benchmark at least 10 times, if not 100.
I guess it has a lot to do with GC and other internals too... there might be some ways to prepare "good states", or alternatively to always run in a clean Emacs instance (loading one up with ...). Though this is maybe not a task for dash but for some generic framework to be developed (a la ert). Maybe there already is something like that, dunno.
Here is a simple try. I explicitly run GC before every benchmark.
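A minimal sketch of that idea, assuming the built-in `benchmark-run` macro (the helper name `my-best-of-n` is made up for illustration, not from the actual commits):

```elisp
(defun my-best-of-n (runs reps thunk)
  "Benchmark (funcall THUNK) REPS times, over RUNS independent runs.
Call `garbage-collect' before each run, and return the smallest
elapsed time, to damp the wide spread between runs."
  (let (times)
    (dotimes (_ runs)
      (garbage-collect)                 ; start each run from a clean heap
      (push (car (benchmark-run reps (funcall thunk))) times))
    (apply #'min times)))

;; e.g. (my-best-of-n 10 1000 (lambda () (-take 3 '(1 2 3 4 5))))
```

Taking the minimum rather than the mean is one common way to reduce the noise from GC and other background work.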
Hm, does this let binding work? `&as` should be at the beginning: `foo &as <destructure-here>`.
I guess we could theoretically make it work both ways... but it would differ from Clojure at that point.
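A small sketch of the placement in question, assuming dash's documented `&as` destructuring in `-let` (the whole-value symbol comes first, then `&as`, then the pattern for the parts):

```elisp
(require 'dash)

;; `whole' is bound to the entire list, `x' and `y' to its elements.
(-let [(whole &as x y) (list 1 2)]
  (list whole x y))
;; => ((1 2) 1 2)
```

Clojure's `:as`, by contrast, goes at the end of the destructuring form, which is presumably what the "both ways" remark refers to.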
You are right. It "sort of works" only because I never use the benchmark variable, but it is wrong. I will commit a fix.
How often GC is called and how much time it takes also indicate a function's efficiency. We already call `garbage-collect` before benchmarking.
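For reference, `benchmark-run` itself already reports the GC numbers alongside the elapsed time, so both can be read off the same result:

```elisp
;; `benchmark-run' returns (ELAPSED-SECONDS GC-RUNS GC-SECONDS).
(garbage-collect)                       ; start from a clean heap
(benchmark-run 1000
  (length (number-sequence 1 100)))
;; => something like (0.004 1 0.002): wall time, GC count, GC time
```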
As a side effect, (-cons*) evaluates to nil rather than failing with an error. And with a silly number of arguments it no longer exceeds the recursion depth limit.
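The `-cons*` behavior being described, for reference (return values per dash's documented semantics; the zero-argument case is the new one):

```elisp
(require 'dash)

(-cons* 1 2 3)  ;; => (1 2 . 3)
(-cons* 1 2)    ;; => (1 . 2)
(-cons* 1)      ;; => 1
(-cons*)        ;; => nil  (after this change; previously an error)
```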
I squashed the benchmark framework commits, also with a fix for that wrong binding.
This PR has gotten quite big. Would you be so kind as to split it into two (ideally one commit per function): one for the optimization work and another for the benchmarks? I'm very much looking forward to getting this merged... we've neglected the PRs here for quite some time :)
@doublep Are you still interested in rebasing this work on top of the latest master?
Ping. |
I want to optimize several functions in the library. For a start, here is the first optimization. Tell me if it's OK to continue or if I shouldn't waste my time.
Reasoning: dash is a low-level library used in many other places. It contains very generic functions that are purpose-agnostic and may or may not be used in performance-critical places. The implementations of the functions are also pretty simple, usually 1--10 lines. Therefore, I think, performance here is more important than code clarity.
This patch optimizes '--each-while':
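(Illustrative sketch only, not necessarily the actual patch; `my--each-while` is a made-up name. The usual trick for such a macro is to expand directly into a `while` loop over the list, with no closure call per element:)

```elisp
(defmacro my--each-while (list pred &rest body)
  "Anaphoric sketch: evaluate BODY with `it' bound to each item
of LIST, for as long as PRED evaluates to non-nil."
  (declare (indent 2))
  `(let ((--lst ,list)
         it)
     (while (and --lst
                 (progn (setq it (car --lst)) ,pred))
       ,@body
       (setq --lst (cdr --lst)))))

;; Same observable behaviour as `--each-while':
;; (let (acc)
;;   (my--each-while '(1 2 3 4) (< it 3) (push it acc))
;;   (nreverse acc))
;; => (1 2)
```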
Benchmark:
Before: 0.33+ seconds here; after: 0.28+ s here (runs differ a lot, I just repeat them several times and pick the best).