Pedagogical overview of the (adaptive) HyperPipe code
Authors: Atul Kedia, Richard O'Shaughnessy

Purpose:
The goal of this code is to conduct parameter estimation (PE) adaptively on any observable/simulation data, given an executable that applies the physics and calculates the (marginalized) likelihood for a given set of parameters. The initial grid provided by the user may or may not include the likeliest regions of parameter space; the code adaptively explores different regions in order to obtain a posterior that is not skewed by the initial grid guess. The code is designed to submit parallelized jobs via HTCondor. The inputs, outputs, and structure of this pipeline are described below.

The user needs to provide the following:
1. An executable that calculates the likelihood for a set of parameters (e.g. example_gaussian.py).
2. An initial parameter grid in the standard RIFT format, i.e. with the header "#lnL sigma_lnL parameter_1 parameter_2 ..",
where the first two columns may be populated with zeros (e.g. blind_gaussian_3d.dat).
3. Exploration ranges for each parameter.
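As a concrete illustration of input item 1, here is a minimal sketch of what such an executable might look like. The Gaussian shape, the parameter names (x, y, z), and the function name `evaluate_grid` are illustrative assumptions, not the actual contents of example_gaussian.py:

```python
import sys
import numpy as np

def evaluate_grid(in_path, out_path, mu=(0.0, 0.0, 0.0), sigma=1.0):
    """Fill the lnL column of a RIFT-format grid with a toy 3D Gaussian.

    The file follows the header "#lnL sigma_lnL x y z"; the first two
    columns of the input may be zeros and are overwritten here.
    """
    data = np.atleast_2d(np.loadtxt(in_path))
    params = data[:, 2:]                    # columns after lnL, sigma_lnL
    lnL = -0.5 * np.sum((params - np.asarray(mu))**2, axis=1) / sigma**2
    data[:, 0] = lnL                        # lnL
    data[:, 1] = 0.01                       # nominal error estimate on lnL
    np.savetxt(out_path, data, header="lnL sigma_lnL x y z")

if __name__ == "__main__" and len(sys.argv) == 3:
    evaluate_grid(sys.argv[1], sys.argv[2])
```

The key point is only the I/O contract: read a grid whose first two columns are placeholders, and write the same grid back with lnL (and optionally its error) filled in.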

Note: If your executable performs the required task of evaluating likelihoods but is not formatted in the requisite way, a wrapper script can be written that accepts RIFT-format parameters and translates them for your code.
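A wrapper of that kind might look like the following sketch. Here `my_likelihood` is a placeholder for your own code's entry point (an import, or a subprocess call), and the file paths and parameter names are hypothetical:

```python
import sys
import numpy as np

def my_likelihood(theta):
    # Placeholder: replace with a call into your own likelihood code.
    return -0.5 * float(np.dot(theta, theta))

def main(in_path, out_path):
    """Read a RIFT-format grid, evaluate each row with the external
    likelihood, and write the grid back with the lnL column filled."""
    data = np.atleast_2d(np.loadtxt(in_path))
    for row in data:
        row[0] = my_likelihood(row[2:])   # lnL column
        row[1] = 0.0                      # sigma_lnL, if unknown
    np.savetxt(out_path, data, header="lnL sigma_lnL parameter_1 parameter_2")

if __name__ == "__main__" and len(sys.argv) == 3:
    main(sys.argv[1], sys.argv[2])
```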

The RIFT codes at play are the following:
`create_eos_posterior_pipeline.py` :- Sets up the pipeline to run, i.e. prepares the SUB and DAG files for an HTCondor run with the arguments and parameters supplied by the user. This sets the stage for the SUB scripts to run later.
Once the runs have started, a few shell scripts are written that perform minor tasks such as consolidating all marginalizations (con_marg.sh, con_prod.sh); each has a dedicated .SUB file.
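To give a flavor of this stage-setting step, here is a heavily simplified sketch of the kind of SUB/DAG file generation involved. The job names (MARG, POSTERIOR), file names, and the exact dependency structure are illustrative assumptions, not the script's actual output:

```python
def write_submit(path, executable, args):
    """Write a minimal HTCondor submit (SUB) file for one job type."""
    with open(path, "w") as f:
        f.write(f"executable = {executable}\n")
        f.write(f"arguments = {args}\n")
        f.write("output = job.$(Cluster).out\n")
        f.write("error = job.$(Cluster).err\n")
        f.write("log = job.log\n")
        f.write("queue 1\n")

def write_dag(path, n_iterations):
    """Chain marginalization and posterior-construction jobs per iteration."""
    with open(path, "w") as f:
        for it in range(n_iterations):
            f.write(f"JOB MARG_{it} marg.sub\n")
            f.write(f"JOB POSTERIOR_{it} posterior.sub\n")
            f.write(f"PARENT MARG_{it} CHILD POSTERIOR_{it}\n")
            if it > 0:
                f.write(f"PARENT POSTERIOR_{it-1} CHILD MARG_{it}\n")
```

The DAG enforces that each iteration's grid construction waits on that iteration's likelihood evaluations, and that the next iteration waits on the grid.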
`util_ConstructEOSPosterior.py` :- Used once the marginalized likelihoods for an iteration have been calculated and a new grid is to be adaptively generated. It takes the integral of the likelihood times the prior (i.e. the posterior) and produces weighted samples, then performs a fair draw to generate the grid for the next iteration. This new grid is employed for likelihood evaluation in the next iteration, but before that the next code is used for "puffing".
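The fair-draw idea can be sketched as follows. This is a minimal stand-in assuming a uniform prior; the real script also handles the Monte Carlo integral, its error estimate, and non-trivial priors:

```python
import numpy as np

def fair_draw(grid, n_out, rng=None):
    """Resample grid points with probability proportional to their
    posterior weight exp(lnL) (uniform prior assumed for illustration).

    `grid` follows the RIFT layout: columns [lnL, sigma_lnL, params...].
    """
    rng = np.random.default_rng() if rng is None else rng
    lnL = grid[:, 0]
    w = np.exp(lnL - lnL.max())     # subtract max lnL for numerical stability
    w /= w.sum()
    idx = rng.choice(len(grid), size=n_out, p=w)
    new_grid = grid[idx].copy()
    new_grid[:, 0:2] = 0.0          # reset lnL, sigma_lnL for the next iteration
    return new_grid
```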
`util_HyperparameterPuffball.py` :- Executed right after `util_ConstructEOSPosterior.py`. It pushes the search toward new parameter values that were not fair-drawn, in order to widen the search in parameter space. The previous step with `util_ConstructEOSPosterior.py` directly searches for posterior maxima; to complement that, so that we do not lose edge details or any other maxima away from the dominant one, `util_HyperparameterPuffball.py` "puffs" the parameters and spreads them over the space. This produces a new grid_puff, which is used ALONGSIDE the grid generated by `util_ConstructEOSPosterior.py` for the next iteration's marginalization.
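One simple way to realize "puffing" is Gaussian jitter scaled to the empirical spread of each parameter; the sketch below makes that assumption (the real script's puff rule and its user options may differ):

```python
import numpy as np

def puff(grid, puff_factor=0.2, rng=None):
    """Scatter fair-drawn points by Gaussian jitter whose width is
    puff_factor times the per-parameter standard deviation of the grid.
    The factor 0.2 is illustrative; in practice it is a user option."""
    rng = np.random.default_rng() if rng is None else rng
    out = grid.copy()
    params = out[:, 2:]                       # skip lnL, sigma_lnL columns
    scale = puff_factor * params.std(axis=0)  # per-parameter jitter width
    out[:, 2:] = params + rng.normal(scale=scale, size=params.shape)
    return out
```

The puffed grid deliberately overshoots the posterior support, so secondary maxima or sharp edges that a pure fair draw would miss can still be sampled.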



In summary, the pipeline does the following:
1. Evaluates the (marginalized) likelihood on a user-provided grid of parameters.
2. Generates two new grids by performing Monte Carlo integration of the posterior: one based purely on the posterior from the MC integral, and another that "puffs" the first by a user-specified amount.
3. Repeats the above two steps for the desired number of iterations.
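The loop above can be sketched end-to-end in a few lines. This is a self-contained toy version (in the real pipeline the steps run as separate HTCondor jobs, and the fair-draw and puff rules live in the utilities described earlier); all function and variable names here are illustrative:

```python
import numpy as np

def one_iteration(grid, evaluate_lnL, n_samples, puff_factor, rng):
    """One loop of the pipeline. Column layout is the RIFT one:
    [lnL, sigma_lnL, params...]."""
    grid = grid.copy()
    grid[:, 0] = evaluate_lnL(grid[:, 2:])                 # step 1: evaluate lnL
    w = np.exp(grid[:, 0] - grid[:, 0].max())              # posterior weights
    w /= w.sum()
    drawn = grid[rng.choice(len(grid), size=n_samples, p=w)]   # step 2: fair draw
    puffed = drawn.copy()                                  # ...and puff it
    puffed[:, 2:] += rng.normal(scale=puff_factor * drawn[:, 2:].std(axis=0),
                                size=puffed[:, 2:].shape)
    new_grid = np.vstack([drawn, puffed])                  # grids used together
    new_grid[:, :2] = 0.0                                  # reset for next iteration
    return new_grid

def run(grid, evaluate_lnL, n_iterations=3, n_samples=500,
        puff_factor=0.2, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(n_iterations):                          # step 3: repeat
        grid = one_iteration(grid, evaluate_lnL, n_samples, puff_factor, rng)
    return grid
```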


One can also provide two or more executables if additional constraints are to be applied for each observation. This methodology is used in the LIGO-NICER runs, and an example named "Gaussian_adaptive_bimodal" is also included.


Other tasks for authors:
Convert condor script to shell? - Task for ROS

ConcordanceMMA might be able to leverage HyperPipe, and could conduct kilonova inference.

