Things to do:
Need to properly implement scaling between datasets
User can request specific coherent sums to be resultDumped. The resulting (coherent) sums have values very similar to the incoherent sums. It turns out that the normalization integrals for the cross terms are at the sub-percent level compared to the diagonal terms. This is not what I expected, since I thought different waves in the same coherent sum could have a larger integral element
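The observation above can be checked numerically. This is a hypothetical sketch (the Monte Carlo amplitudes below are random stand-ins; in practice they would come from the accepted-MC sample for each wave) that compares the off-diagonal (cross-term) normalization integrals to the diagonal terms:

```python
import numpy as np

# Stand-in complex amplitudes A_i(event) for three waves in one coherent sum.
rng = np.random.default_rng(0)
n_events, n_waves = 10_000, 3
amps = rng.normal(size=(n_waves, n_events)) + 1j * rng.normal(size=(n_waves, n_events))

# Normalization integral matrix: N_ij = (1/n) * sum_k A_i(k) * conj(A_j(k))
norm = amps @ amps.conj().T / n_events

# Size of each cross term relative to the geometric mean of its diagonal partners
diag = np.sqrt(np.abs(np.diag(norm)))
ratios = np.abs(norm) / np.outer(diag, diag)
off_diag_max = ratios[~np.eye(n_waves, dtype=bool)].max()
print(f"largest |N_ij| / sqrt(N_ii N_jj) off the diagonal: {off_diag_max:.3f}")
```

For uncorrelated waves the cross terms average to zero and only residual fluctuations of order 1/sqrt(n_events) survive, which is consistent with the sub-percent level seen in the fits.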
Allow pa run_mle to utilize the hyperparameter-scans feature. This would allow fits with alternative binnings, wavesets, etc., all defined declaratively in the YAML file. All that's left to do is to dump the results and analyze away. Need to store the values of the hyperparameters for user reference. The DataFrame should store NaN values for missing columns (which should just come from waveset differences?)
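The NaN-for-missing-columns behavior falls out of pandas column alignment. A minimal sketch, assuming each scan produces its own DataFrame (the column and hyperparameter names here are illustrative, not the real ones):

```python
import pandas as pd

# Two hypothetical scans with different wavesets, so each contributes
# different fit-fraction columns; the scanned hyperparameter values are
# stored alongside the results for user reference.
scan_a = pd.DataFrame({"nll": [101.2, 99.8], "frac_Sp0+": [0.4, 0.5],
                       "hyper_nbins": [40, 40]})
scan_b = pd.DataFrame({"nll": [103.1], "frac_Dp2+": [0.2],
                       "hyper_nbins": [80]})

# pd.concat aligns on column names and fills missing entries with NaN.
results = pd.concat([scan_a, scan_b], ignore_index=True)
print(results)  # columns unique to one waveset are NaN in the other rows
```

This way a single results table can hold every scan, and rows can be filtered by the stored hyperparameter columns afterwards.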
Allow plotting multiple AmpTools fits per bin in NIFTy plots
Main executor script allowing running up to MLE or up to IFT. I guess this groups certain commands in the analysis chain
Batch submission of results (related to the above point), kinda like mc_wrapper
If data_folder points to an AmpToolsFits directory, it can attempt to reuse that directory's already-split data and only distribute config files. This needs to reference the YAML file in the parent directory to ensure the binning is the same
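The binning consistency check could be as simple as comparing the binning keys of the two parsed YAML files. A sketch under assumptions (the dicts stand in for parsed YAML content, e.g. from yaml.safe_load, and the key names are made up for illustration):

```python
# Hypothetical check: before reusing pre-split data from an AmpToolsFits
# directory, verify its parent YAML describes the same binning as the
# current configuration.
def binning_matches(current: dict, parent: dict) -> bool:
    # Assumed key names; the real YAML schema may differ.
    keys = ("min_mass", "max_mass", "n_mass_bins", "min_t", "max_t", "n_t_bins")
    return all(current.get(k) == parent.get(k) for k in keys)

current_cfg = {"min_mass": 1.0, "max_mass": 2.0, "n_mass_bins": 40}
parent_cfg  = {"min_mass": 1.0, "max_mass": 2.0, "n_mass_bins": 40}
print(binning_matches(current_cfg, parent_cfg))  # True for identical binning
```

On a mismatch the safe fallback is to re-split the data rather than silently reuse it.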
IntensityCalculator's base_directory should perform a check of whether it has been updated by comparing against the nifty.pkl base folder
Need to update prior_sim_gen.py to be more robust and to reuse run_cfgGen somehow to ensure the syntax stays the same; its only job is to read in NIFTy amplitudes and dump them into an AmpTools Piecewise function
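The core of that job is just string formatting. A hypothetical sketch of the conversion step (the Piecewise argument order here is an assumption for illustration; run_cfgGen should remain the single source of truth for the real syntax, which is exactly why reusing it is preferable):

```python
# Hypothetical formatter: turn binned NIFTy amplitude values into an
# AmpTools-style Piecewise amplitude line. Argument order is assumed.
def piecewise_line(amp_name, m_min, m_max, values):
    n = len(values)
    parts = " ".join(f"{v.real:.6f} {v.imag:.6f}" for v in values)
    return f"amplitude {amp_name} Piecewise {m_min} {m_max} {n} {parts}"

line = piecewise_line("reaction::sum::Sp0+", 1.0, 2.0, [0.3 + 0.1j, 0.2 - 0.4j])
print(line)
```

Routing this through run_cfgGen instead of hand-rolling the format string removes the risk of the two drifting apart.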
split_mass_t dumping nbars and share_mc into the YAML file could cause problems if they are not properly reset
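One way to make the reset explicit is a small cleanup helper run after (or before) each split. A minimal sketch, assuming the parsed YAML is held as a dict and using the key names from the note above:

```python
# Hypothetical cleanup: drop the transient keys that split_mass_t writes
# into the YAML so stale nbars / share_mc entries cannot leak into a
# later run with a different binning.
def reset_split_keys(cfg: dict) -> dict:
    for key in ("nbars", "share_mc"):
        cfg.pop(key, None)  # no-op if the key was never written
    return cfg

cfg = {"waveset": "Sp0+_Dp2+", "nbars": [10, 12], "share_mc": True}
print(reset_split_keys(cfg))  # transient keys removed, rest untouched
```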
reload_fields_and_components in iftwpa appears to also spawn all the associated MPI processes. We should prevent this, or kill them all after the data is loaded
Issues?
Is run_mle safe to use for running more random fits after a set is already completed?
What happens if regex_merge did not capture fit fractions with names matching wave_names? Need a check in the loadAmpToolsResults function advising the user
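The check itself could just compare the captured column names against the expected waves and warn on the difference. A sketch under assumptions (the column and wave names below are illustrative, not the real schema):

```python
# Hypothetical validation for loadAmpToolsResults: warn the user when the
# regex merge did not capture a fit fraction for every expected wave.
def check_fit_fractions(columns, wave_names):
    missing = [w for w in wave_names if w not in columns]
    if missing:
        print(f"warning: no fit fractions captured for waves: {missing}; "
              "check the regex_merge patterns against the amplitude names")
    return missing

cols = ["nll", "Sp0+", "Dp2+"]
print(check_fit_fractions(cols, ["Sp0+", "Dp2+", "Dm2-"]))
```

Returning the missing list (rather than only printing) lets callers decide whether a partial capture should be a hard error.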
When running fit.py with MPI, the program crashes when attempting to exit. mpi4py automatically calls MPI_Init on import and MPI_Finalize when the script exits. This does not appear to affect any intermediate or final results and appears to be isolated to exiting.
Ideas?
Since we can bin finer than mass-independent fits, can we better isolate peak locations compared to running a mass-dependent fit on mass-independent fit results? What are the benefits?