# Changelog

## Release 0.1.6

- Fixing incorrect instructions for GPU-compatible installation: most shells require
  quotes around `pip install` targets that include extras, as shown below.
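
  For example, assuming JAX's CUDA extra (`cuda12` here; the exact extra name
  depends on your CUDA version):

  ```bash
  # zsh and several other shells treat unquoted square brackets as glob
  # patterns, so the extras specifier must be quoted:
  pip install "jax[cuda12]"
  ```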

## Release 0.1.5

- Adding a batched-simulations feature for MD simulations and energy minimizations
  with the JAX-MD backend.
- Removing the now-obsolete `stress_virial` prediction.
- Fixing the correctness of the `stress` and 0 K `pressure` predictions. In 0.1.4,
  the stress computation involved a derivative with respect to the cell while the
  atomic positions were held fixed. Now, the strain also acts on the positions
  within the unit cell, thus deforming the material homogeneously. The resulting
  stress is rigorously translation invariant and requires no virial-term correction
  for cell-boundary effects (a minimal sketch is given after this list). See for
  instance
  [Thompson, Plimpton and Mattson 2009, eq. (2)](https://doi.org/10.1063/1.3245303).
- Migrating from Poetry to uv for dependency and package management.
- Improving the inefficient logging strategy in the ASE simulation backend.
- Clarifying in the documentation that we recommend a smaller timestep when
  running energy minimizations with the JAX-MD simulation backend.
- Removing the need for a separate install command for the JAX-MD dependency.
- Adding an easier install method for GPU-compatible JAX.
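
The stress fix above amounts to differentiating the energy with respect to a
homogeneous strain applied to both the cell and the atomic positions. Below is a
minimal conceptual JAX sketch of that definition; it is not the mlip
implementation, and `energy_fn` and the row-vector cell convention are
assumptions.

```python
import jax
import jax.numpy as jnp


def stress(energy_fn, positions, cell):
    """Stress as (1/V) dE/d(strain), evaluated at zero strain.

    Assumes energy_fn(positions, cell) returns the scalar potential energy
    and that the rows of `cell` are the lattice vectors.
    """

    def strained_energy(strain):
        # Deform the material homogeneously: the strain acts on the cell
        # vectors AND on the atomic positions within the unit cell.
        deformation = jnp.eye(3) + strain
        return energy_fn(positions @ deformation.T, cell @ deformation.T)

    volume = jnp.abs(jnp.linalg.det(cell))
    return jax.grad(strained_energy)(jnp.zeros((3, 3))) / volume
```

Because the positions and the cell deform together, the result is translation
invariant and no separate virial correction is needed.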

## Release 0.1.4

- Removing constraints on some dependencies, such as numpy, jax, and flax. The mlip
  library now allows for more flexibility in dependency versions for downstream
  projects. This includes support for the newest jax versions 0.6.x and 0.7.x.
- Fixing simulation tutorial notebook by pinning versions of visualization helper
  libraries.
- Adding the option to pass the `dataset_info` of a trained model to
  `GraphDatasetBuilder`, which is important for downstream tasks. Failure to do so
  might lead to silent inconsistencies in the mapping from atomic numbers to species
  indices, especially when the downstream data contains fewer elements than the
  training set (see e.g. the fine-tuning tutorial, and the illustration after this
  list).
- Fixing the `stress` predictions, with new formulas for the virial stress and
  0 Kelvin pressure term. These features should still be considered beta for now
  while we test them further (see the docstrings for more details).
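
To see why passing `dataset_info` matters, here is a self-contained illustration
of the failure mode in plain Python, independent of the mlip API:

```python
# If the species mapping is rebuilt from the downstream data alone, the same
# atomic number can silently map to a different species index.
train_atomic_numbers = [1, 6, 8]  # H, C, O present in the training set
train_mapping = {z: i for i, z in enumerate(sorted(train_atomic_numbers))}
# {1: 0, 6: 1, 8: 2}

downstream_atomic_numbers = [1, 8]  # fine-tuning data without carbon
rebuilt_mapping = {z: i for i, z in enumerate(sorted(downstream_atomic_numbers))}
# {1: 0, 8: 1} -> oxygen now maps to index 1 instead of 2

assert train_mapping[8] != rebuilt_mapping[8]  # silent inconsistency
```

Reusing the training-time `dataset_info` keeps the mapping fixed instead of
rebuilding it from whatever elements the downstream data happens to contain.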

## Release 0.1.3

- Adding two new options to our MACE implementation (see `MaceConfig`; these features
  should be considered in beta state for now):
    + `gate_nodes: bool` to apply a scalar node gating after the power expansion
      layer,
    + `species_embedding_dim: int | None` to optionally encode the pairwise node
      species of edges in the convolution block.

  Making use of these options may improve inference speed at similar accuracy
  (see the sketch after this list).

- Fixing a bug where stress predictions would overwrite the energy and force
  predictions with `None` when `predict_stress = True`. Note that stress
  computations should not be considered reliable for now and will be fixed in an
  upcoming release.
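
A hypothetical configuration sketch: the field names `gate_nodes` and
`species_embedding_dim` come from `MaceConfig` itself, but the import path, the
construction pattern, and the values shown are assumptions; consult the
`MaceConfig` docstring for the authoritative interface.

```python
from mlip.models import MaceConfig  # assumed import path; check the docs

config = MaceConfig(
    gate_nodes=True,           # scalar node gating after the power expansion layer
    species_embedding_dim=32,  # pairwise species embedding in the convolution block
)
```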

## Release 0.1.2

- Fixing the computation of metrics during training by reweighting the metrics of
  each batch to account for a varying number of real graphs per batch; this makes
  the metrics independent of the batching strategy and the number of GPUs employed.
- In addition to the point above, fixing the computation of RMSE metrics by
  computing only MSE metrics in the loss and taking the square root at the very
  end, when logging (see the sketch after this list).
- Removing relative and 95th-percentile metrics, as they are not straightforward to
  compute on the fly with our dynamic batching strategy; we recommend computing them
  separately for a model checkpoint if necessary.
- Minor modifications to the README and documentation.
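
A minimal sketch of the corrected RMSE aggregation, assuming per-batch MSE
values and real-graph counts are available (the names are illustrative, not the
library's API):

```python
import math


def aggregate_rmse(batch_mse, batch_num_real_graphs):
    """Weight each batch's MSE by its number of real (non-padding) graphs,
    then take the square root only at the very end, when logging."""
    total_graphs = sum(batch_num_real_graphs)
    weighted_mse = sum(
        mse * n for mse, n in zip(batch_mse, batch_num_real_graphs)
    ) / total_graphs
    return math.sqrt(weighted_mse)


# Two batches holding different numbers of real graphs due to dynamic batching:
print(aggregate_rmse(batch_mse=[0.04, 0.09], batch_num_real_graphs=[10, 2]))
```

Averaging per-batch RMSEs directly, or ignoring the per-batch graph counts,
would make the logged value depend on the batching strategy.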

## Release 0.1.1

- Minor modifications to the README and documentation.
- Adding a link to the white paper in the README.

## Release 0.1.0

- Implemented model architectures: MACE, NequIP and ViSNet
- Dataset preprocessing
- Training of MLIP models
- Batched inference with trained MLIP models
- MD simulations with MLIP models using JAX-MD and ASE simulation backends
- Energy minimizations with MLIP models using the same simulation backends
- Fine-tuning of pre-trained MLIP models (only for MACE)
