OpenFold Inference¶
In this guide, we will cover how to use OpenFold to make structure predictions.
Background¶
We currently offer three modes of inference:
Monomer
Multimer
Single Sequence (Soloseq)
This guide will focus on monomer prediction; the next sections describe Multimer and Single Sequence prediction.
Prerequisites¶
OpenFold Conda Environment. See OpenFold Installation for instructions on how to build this environment.
Sequence databases for performing multiple sequence alignments. We provide a script to download the AlphaFold databases here.
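For example, downloading the databases to a shared data directory might look like the following sketch (the script name is assumed to match the one in the repository's scripts/ directory; verify it against your checkout, and note that the full databases require several terabytes of disk space):

# Download the AlphaFold sequence databases to $BASE_DATA_DIR
bash scripts/download_alphafold_dbs.sh $BASE_DATA_DIR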
Running AlphaFold Model Inference¶
The script run_pretrained_openfold.py performs model inference. We will go through the steps of how to use this script.
An example directory for performing inference on PDB:6KWC is provided here. We refer to this example directory for the examples below.
Download Model Parameters¶
For monomer inference, you may either use the model parameters provided by DeepMind or the OpenFold trained parameters. Both should give similar performance; please see our main paper for further reference.
The model parameters provided by DeepMind can be downloaded with the following script located in this repository’s scripts/ directory:
$ bash scripts/download_alphafold_params.sh $PARAMS_DIR
To use the OpenFold trained parameters, you can use the following script:
$ bash scripts/download_openfold_params.sh $PARAMS_DIR
We recommend selecting openfold/resources as the params directory, as this is the default directory used by run_pretrained_openfold.py to locate parameters.
If you choose to use a different directory, you may make a symlink to the openfold/resources directory, or specify an alternate parameter path with the command line argument --jax_param_path for AlphaFold parameters or --openfold_checkpoint_path for OpenFold parameters.
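A minimal sketch of both options, assuming the parameters were downloaded to a custom $PARAMS_DIR (directory and file names below are illustrative; adjust them to match where your parameters actually live):

# Option 1: symlink the custom parameter directory into the default location
ln -s $PARAMS_DIR/params openfold/resources/params
# Option 2: point the inference script at the parameters explicitly, e.g.
#   --jax_param_path $PARAMS_DIR/params/params_model_1_ptm.npz         (AlphaFold params)
#   --openfold_checkpoint_path $PARAMS_DIR/openfold_params/<name>.pt   (OpenFold params)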
Model Inference¶
The input to run_pretrained_openfold.py is a directory of FASTA files. AlphaFold-style models also require a sequence alignment to perform inference.
If you do not have sequence alignments for your input sequences, you can compute them using the inference script directly by following the instructions in the model inference without pre-computed alignments section below.
Otherwise, if you already have alignments for your input FASTA sequences, skip ahead to the inference with pre-computed alignments section.
Model inference without pre-computed alignments¶
The following command performs a sequence alignment against the OpenProteinSet databases and then runs model inference.
python3 run_pretrained_openfold.py \
$INPUT_FASTA_DIR \
$TEMPLATE_MMCIF_DIR \
--output_dir $OUTPUT_DIR \
--config_preset model_1_ptm \
--uniref90_database_path $BASE_DATA_DIR/uniref90/uniref90.fasta \
--mgnify_database_path $BASE_DATA_DIR/mgnify/mgy_clusters_2018_12.fa \
--pdb70_database_path $BASE_DATA_DIR/pdb70 \
--uniclust30_database_path $BASE_DATA_DIR/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
--bfd_database_path $BASE_DATA_DIR/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
--model_device "cuda:0"
Required arguments:
--output_dir: Specifies the output directory.
$INPUT_FASTA_DIR: Directory of query FASTA files, one sequence per file, e.g. examples/monomer/fasta_dir.
$TEMPLATE_MMCIF_DIR: mmCIF files to use for template matching. This directory is required even if using template-free inference.
*_database_path: Paths to sequence databases for sequence alignment.
--model_device: Specify to use a GPU if one is available.
Model inference with pre-computed alignments¶
To perform model inference with pre-computed alignments, use the following command:
python3 run_pretrained_openfold.py ${INPUT_FASTA_DIR} \
$TEMPLATE_MMCIF_DIR \
--output_dir $OUTPUT_DIR \
--use_precomputed_alignments $PRECOMPUTED_ALIGNMENTS \
--config_preset model_1_ptm \
--model_device "cuda:0"
where $PRECOMPUTED_ALIGNMENTS is a directory that contains alignments. A sample alignments directory structure for a single query is:
alignments
└── 6KWC_1
├── bfd_uniclust_hits.a3m
├── hhsearch_output.hhr
├── mgnify_hits.sto
└── uniref90_hits.sto
bfd_uniclust_hits.a3m, mgnify_hits.sto, and uniref90_hits.sto are alignments of the query sequence against the BFD, MGnify, and UniRef90 databases, respectively. hhsearch_output.hhr contains hits against the PDB70 database used for template matching. The example directory examples/monomer/alignments shows examples of expected directories.
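As a concrete sketch, running the bundled 6KWC example with its pre-computed alignments might look like this (the template mmCIF and output directories are placeholders you must supply):

python3 run_pretrained_openfold.py examples/monomer/fasta_dir \
    $TEMPLATE_MMCIF_DIR \
    --output_dir $OUTPUT_DIR \
    --use_precomputed_alignments examples/monomer/alignments \
    --config_preset model_1_ptm \
    --model_device "cuda:0"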
Configuration settings for template modeling / pTM scoring¶
There are a few configuration settings available for template-based and template-free modeling, as well as for the option to estimate a predicted template modeling score (pTM).
This table provides guidance on which setting to use for each set of predictions, as well as the parameters to select for each preset.
| Setting | --config_preset | AlphaFold params (match config name) | OpenFold params (any are allowed) |
|---|---|---|---|
| With template, no pTM | model_1 | | |
| With template, with pTM | model_1_ptm | | |
| Without template, no pTM | model_3 | | |
| Without template, with pTM | model_3_ptm | | |
If you use AlphaFold parameters and they are located in the default parameter directory (e.g. openfold/resources), the parameters that match the --config_preset will be selected.
The full set of configurations available for all 5 AlphaFold model presets can be viewed in config.py. The OpenFold Parameters page contains more information about the individual OpenFold parameter files.
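For example, to run a template-free prediction with pTM scoring per the table above, you would select the model_3_ptm preset (sketch; all other arguments as in the earlier commands):

python3 run_pretrained_openfold.py $INPUT_FASTA_DIR $TEMPLATE_MMCIF_DIR \
    --output_dir $OUTPUT_DIR \
    --use_precomputed_alignments $PRECOMPUTED_ALIGNMENTS \
    --config_preset model_3_ptm \
    --model_device "cuda:0"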
Model outputs¶
The expected output contents are as follows:
alignments: Directory of alignments. One directory is made per query sequence, and each directory contains alignments against each of the databases used.
predictions: PDB files for predicted structures.
timings.json: JSON file with timings for inference and relaxation, if specified.
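For the single 6KWC query used above, the output directory would therefore look roughly as follows (prediction file names depend on the query name and config preset and are shown only as a placeholder):

$OUTPUT_DIR
├── alignments
│   └── 6KWC_1
│       ├── bfd_uniclust_hits.a3m
│       ├── hhsearch_output.hhr
│       ├── mgnify_hits.sto
│       └── uniref90_hits.sto
├── predictions
│   └── <predicted structure PDB files>
└── timings.json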
Optional Flags¶
Some commonly used command line flags are listed here. A full list of flags can be viewed with the --help option.
--config_preset: Specify a different model configuration. There are 5 available model preset settings, some of which support template modeling, others template-free modeling. The default is model_1. More details can be found in the Configuration settings for template modeling / pTM scoring section above.
--hmmsearch_binary_path, --hmmbuild_binary_path, etc.: HMMER, HH-suite, and kalign are required to run alignments. run_pretrained_openfold.py will search for these packages in the bin/ directory of your conda environment. If needed, you can specify a different binary directory with these arguments.
--openfold_checkpoint_path: Uses a checkpoint or parameter file. Expected types are DeepSpeed checkpoint files or .pt files. Make sure your selected checkpoint file matches the configuration setting chosen in --config_preset.
--data_random_seed: Specifies a random seed to use.
--save_outputs: Saves a copy of all outputs from the model, e.g. the output of the MSA track and pTM heads.
--experiment_config_json: Specify configuration settings using a JSON file. For example, passing a JSON with {globals.relax.max_iterations = 10} specifies 10 as the maximum number of relaxation iterations. See openfold/config.py for the full dictionary of configuration settings. Any parameters that are not manually set in this way will use the defaults specified by your config_preset.
--use_custom_template: Uses all .cif files in template_mmcif_dir as template input. Make sure the chains of interest have the identifier A and have the same length as the input sequence. The same templates will be read for all sequences that are passed for inference.
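A short sketch combining a few of these optional flags with the basic inference command (the seed value is illustrative):

python3 run_pretrained_openfold.py $INPUT_FASTA_DIR $TEMPLATE_MMCIF_DIR \
    --output_dir $OUTPUT_DIR \
    --use_precomputed_alignments $PRECOMPUTED_ALIGNMENTS \
    --config_preset model_1_ptm \
    --model_device "cuda:0" \
    --data_random_seed 42 \
    --save_outputs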
Advanced Options for Increasing Efficiency¶
Turning on TF32 (TensorFloat-32) precision on compatible hardware¶
When running on recent NVIDIA GPUs (Ampere and later), you can enable TF32 precision for roughly a 1.3x performance boost. TF32 uses 1 sign bit, 8 exponent bits (like FP32), and 10 mantissa (significand) bits (like FP16), packed into a 32-bit word. Running OpenFold (OF2) with TF32 instead of full FP32 has generally been found to be safe. To enable it globally in Torch:
torch.backends.cuda.matmul.allow_tf32 = True # Enable TF32 for matrix multiplications
torch.backends.cudnn.allow_tf32 = True # Enable TF32 for convolutions
Make sure the NVIDIA_TF32_OVERRIDE environment variable is either not defined or set to 1.
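For example, in the shell used to launch inference (a trivial sketch; the two lines are alternatives):

# Either leave the variable unset...
unset NVIDIA_TF32_OVERRIDE
# ...or set it to 1 so the TF32 kernels stay enabled
export NVIDIA_TF32_OVERRIDE=1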
Applying lower BF16 precision to EvoformerStack and ExtraMSAStack¶
BF16 occupies 16 bits: 1 sign bit, 8 exponent bits (same as FP32), and 7 mantissa (fraction) bits. Its dynamic range is equivalent to FP32, but BF16 can only represent numbers with about three decimal digits of precision. Casting the EvoformerStack and ExtraMSAStack to BF16 has generally been found to be safe and yields a ~1.5x speedup compared to TF32 inference of the whole model. To apply BF16, use the --precision=bf16 argument. --precision=fp16 is also supported, but not recommended due to numerical instability.
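For example (sketch), appended to the usual inference command:

python3 run_pretrained_openfold.py $INPUT_FASTA_DIR $TEMPLATE_MMCIF_DIR \
    --output_dir $OUTPUT_DIR \
    --config_preset model_1_ptm \
    --model_device "cuda:0" \
    --precision=bf16   # cast EvoformerStack / ExtraMSAStack to BF16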
Speeding up inference with custom attention and multiplicative update kernels¶
The DeepSpeed DS4Sci_EvoformerAttention kernel is a memory-efficient attention kernel developed as part of a collaboration between OpenFold and the DeepSpeed4Science initiative.
If your system supports DeepSpeed, using it generally leads to an inference speedup of 2-3x without significant additional memory use. You may enable this option by passing the --use_deepspeed_inference argument.
OF2 supports the cuEquivariance triangle_multiplicative_update and triangle_attention kernels, which can speed up inference/training of the model by a further 1.2x to 1.5x on top of DeepSpeed, and by even more for sequences with > 1000 residues. cuEquivariance attention also uses much less memory than the default or DeepSpeed attention. To enable these kernels, pass the --use_cuequivariance_attention and --use_cuequivariance_multiplicative_update arguments to run_pretrained_openfold.py. cuEquivariance falls back to DeepSpeed on shapes it does not support efficiently, so enable both for best effect.
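A sketch enabling both the DeepSpeed kernel and the cuEquivariance kernels together, using the flags named above:

python3 run_pretrained_openfold.py $INPUT_FASTA_DIR $TEMPLATE_MMCIF_DIR \
    --output_dir $OUTPUT_DIR \
    --config_preset model_1_ptm \
    --model_device "cuda:0" \
    --use_deepspeed_inference \
    --use_cuequivariance_attention \
    --use_cuequivariance_multiplicative_update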
If DeepSpeed is unavailable for your system, you may also try using FlashAttention by adding globals.use_flash = True to the --experiment_config_json. Note that FlashAttention appears to work best for sequences with < 1000 residues.
Speeding up inference with TensorRT¶
Alternatively (or together with cuEquivariance), you can try applying TensorRT to key modules. OF2 comes with built-in TensorRT lazy compilation support: a TensorRT engine is built for the Evoformer on the first inference run and reused on subsequent runs. To enable it, pass the --trt_mode=run, --trt_engine_dir, --trt_max_sequence_len, --trt_num_profiles, and --trt_optimization_level arguments to run_pretrained_openfold.py.
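A hedged sketch (verify the exact flag names and accepted values against the script's --help output; the values below are purely illustrative):

python3 run_pretrained_openfold.py $INPUT_FASTA_DIR $TEMPLATE_MMCIF_DIR \
    --output_dir $OUTPUT_DIR \
    --config_preset model_1_ptm \
    --model_device "cuda:0" \
    --trt_mode=run \
    --trt_engine_dir $TRT_ENGINE_DIR \
    --trt_max_sequence_len 1024 \
    --trt_num_profiles 1 \
    --trt_optimization_level 3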
Large-scale batch inference¶
For large-scale batch inference, we offer an optional tracing mode, which massively improves runtimes at the cost of a lengthy model compilation process. To enable it, add --trace_model to the inference command.
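For example (sketch):

python3 run_pretrained_openfold.py $INPUT_FASTA_DIR $TEMPLATE_MMCIF_DIR \
    --output_dir $OUTPUT_DIR \
    --config_preset model_1_ptm \
    --model_device "cuda:0" \
    --trace_model   # trace the model once, then reuse it across the batch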
Configuring the chunk size for sequence alignments¶
Note that chunking (as defined in section 1.11.8 of the AlphaFold 2 supplement) is enabled by default in inference mode. To disable it, set globals.chunk_size to None in the config. If a value is specified, OpenFold will attempt to dynamically tune it, considering the chunk size specified in the config as a minimum. This tuning process automatically ensures consistently fast runtimes regardless of input sequence length, but it also introduces some runtime variability, which may be undesirable for certain users. It is also recommended to disable this feature for very long chains (see below). To do so, set the tune_chunk_size option in the config to False.
Long sequence inference¶
To minimize memory usage during inference on long sequences, consider the following changes:
As noted in the AlphaFold-Multimer paper, the AlphaFold/OpenFold template stack is a major memory bottleneck for inference on long sequences. OpenFold supports two mutually exclusive inference modes to address this issue. One, average_templates in the template section of the config, is similar to the solution offered by AlphaFold-Multimer, which is simply to average individual template representations. Our version is modified slightly to accommodate weights trained using the standard template algorithm. Using said weights, we notice no significant difference in performance between our averaged template embeddings and the standard ones. The second, offload_templates, temporarily offloads individual template embeddings into CPU memory. The former is an approximation while the latter is slightly slower; both are memory-efficient and allow the model to utilize arbitrarily many templates across sequence lengths. Both are disabled by default, and it is up to the user to determine which best suits their needs, if either.
Inference-time low-memory attention (LMA) can be enabled in the model config. This setting trades off speed for vastly improved memory usage. By default, LMA is run with query and key chunk sizes of 1024 and 4096, respectively. These represent a favorable tradeoff in most memory-constrained cases. Power users can choose to tweak these settings in openfold/model/primitives.py. For more information on the LMA algorithm, see the aforementioned Staats & Rabe preprint.
Disable tune_chunk_size for long sequences. Past a certain point, it only wastes time.
As a last resort, consider enabling offload_inference. This enables more extensive CPU offloading at various bottlenecks throughout the model.
Disable FlashAttention, which seems unstable on long sequences.
Using the most conservative settings, we were able to run inference on a 4600-residue complex with a single A100. Compared to AlphaFold’s own memory offloading mode, ours is considerably faster; the same complex takes the more efficient AlphaFold-Multimer more than double the time. Use the long_sequence_inference config option to enable all of these interventions at once. The run_pretrained_openfold.py script can enable this config option with the --long_sequence_inference command line option.
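For example, enabling all of the long-sequence interventions from the command line (sketch):

python3 run_pretrained_openfold.py $INPUT_FASTA_DIR $TEMPLATE_MMCIF_DIR \
    --output_dir $OUTPUT_DIR \
    --config_preset model_1_ptm \
    --model_device "cuda:0" \
    --long_sequence_inference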
Input FASTA files containing multiple sequences are treated as complexes. In this case, the inference script runs AlphaFold-Gap, a hack proposed here, using the specified stock AlphaFold/OpenFold parameters (NOT AlphaFold-Multimer).