Make gaus_2d faster (2) #274
Conversation
After 291f1e0 it gets faster by another ~24%. The .gaul file is OK.
Can you provide me with the input file(s) you're using to do the profiling? I'd like to experiment a bit myself, and would like to be able to make a fair comparison. Also, did you just call …?
Since it is an optimisation, I assumed that it would be best to use the latest Python and NumPy, as these are constantly being updated to improve performance. I was not able to build against Python 3.14, nor using …

Paste all of this into the terminal in one go. It will remove the conda env and the cloned repo from the previous experiment and prepare a new conda env for a fresh installation. Now apply your experimental changes to the cloned repo with a text editor. Then paste all of this into the terminal in one go. It will install PyBDSF, move to a different folder, execute a profiled run, and display the result for …
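For what it's worth, a profiled run can be reproduced along these lines; this is only a sketch, the input image name is a placeholder rather than the file actually used in this thread, and only the standard `bdsf.process_image` entry point and the standard-library profiler are assumed:

```python
import cProfile

import bdsf

# Hypothetical input image -- substitute the FITS file actually used for profiling.
INFILE = "test_image.fits"

# Run a full PyBDSF source-finding pass under the profiler.
profiler = cProfile.Profile()
profiler.enable()
img = bdsf.process_image(INFILE)
profiler.disable()

# Dump the raw stats so several runs can be compared later.
profiler.dump_stats("run.prof")
```

Dumping the stats to a file makes it easy to compare repeated runs, which matters given the run-to-run variation noted below.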
Please note that the total running time is not stable:
As far as I know this is normal for high-level languages, since the garbage collector runs all over the code, etc. That's why I was always doing the profiling a few times. On the first run you can also display the full profiling results to see what should be optimised (a sketch of how to do this is shown below).

One comment about 326df50: at first I thought it would be optimal to let it compute on positive values for as long as possible, but I have profiled your suggestion and it turned out that there is no difference.
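As a minimal sketch of displaying the full results, assuming the stats were dumped to `run.prof` as in the example above:

```python
import pstats

# Load the dumped profile and print the most expensive calls first.
stats = pstats.Stats("run.prof")
stats.sort_stats("cumulative").print_stats(30)

# Restrict the listing to the routine being optimised here.
stats.print_stats("gaus_2d")
```

Sorting by cumulative time shows which high-level calls dominate; restricting the listing to `gaus_2d` makes it easy to watch just the routine under discussion.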
before (but after #273):
after:
~18.4% faster. A lot is done in place now. I think this is now as fast as it gets.
There are very rare differences in the .gaul file at the 13th decimal place, but this is normal.
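To illustrate the kind of change being discussed, here is a minimal sketch of evaluating an elliptical 2-D Gaussian with in-place NumPy operations; it is not the actual PyBDSF `gaus_2d` code, and the parameter convention (sigmas rather than FWHMs) is only an assumption for the example:

```python
import numpy as np

def gaus_2d_inplace(c, x, y):
    """Elliptical 2-D Gaussian; c = (amp, x0, y0, sigma_x, sigma_y, theta).

    Hypothetical example: x and y are float arrays, and intermediates are
    written in place to cut down on temporary allocations.
    """
    amp, x0, y0, sx, sy, theta = c
    ct, st = np.cos(theta), np.sin(theta)

    # Rotated, centred coordinates; reuse these buffers for all later steps.
    u = (x - x0) * ct + (y - y0) * st
    v = -(x - x0) * st + (y - y0) * ct
    np.square(u, out=u)
    np.square(v, out=v)
    u *= -1.0 / (2.0 * sx * sx)
    v *= -1.0 / (2.0 * sy * sy)
    u += v                      # combined exponent, still in u's buffer
    np.exp(u, out=u)
    u *= amp
    return u

# Example usage on a small grid (hypothetical parameter values).
y, x = np.indices((64, 64), dtype=float)
z = gaus_2d_inplace((1.0, 32.0, 32.0, 3.0, 2.0, 0.3), x, y)
```

Reusing the `u` and `v` buffers for the squared terms and the exponent avoids allocating a fresh temporary array at each step; small rearrangements of the arithmetic like this are also the usual source of harmless differences in the last decimal places of the output.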