Virtual Screening with AutoDock Vina: Screen a Compound Library Against a Protein Target

Single-ligand docking is the proof of concept. Virtual screening is where docking becomes genuinely useful — filtering thousands of compounds down to a handful of experimentally testable hits. This tutorial walks through the complete pipeline: sourcing a library, batch-preparing ligands, running parallel docking with a Python script, filtering results, and visualizing your top hits.


What virtual screening actually is

Virtual screening (VS) is the process of computationally docking a large collection of compounds against a target protein and ranking them by predicted binding affinity. The goal is to find the small subset of compounds worth testing experimentally — enriching your hit rate before you spend time and money in the lab.

A typical academic VS campaign might screen 1,000–50,000 compounds, take a few hours to a few days on a university HPC cluster, and return 20–200 top-ranked compounds for experimental follow-up. Industrial campaigns routinely screen millions of compounds, but the same principles apply at any scale.

The workflow is conceptually simple: prepare every compound in the library for docking, run AutoDock Vina once per compound against the same receptor and grid box, collect all the scores, and rank. The complexity is entirely in managing that loop efficiently and handling the inevitable failures gracefully.

This tutorial assumes
You’ve completed the single-ligand docking tutorial — you have AutoDock Vina installed, you can prepare receptor PDBQT files, and you understand what the output scores mean. If you haven’t, start there first. This tutorial picks up where that one ends.

Prerequisites and setup

Install the additional tools needed for this tutorial:

Terminal
conda activate docking
conda install -c conda-forge openbabel rdkit pandas tqdm -y

Create a fresh working directory:

Terminal
mkdir -p ~/docking/virtual_screen
cd ~/docking/virtual_screen
mkdir library_raw library_pdbqt receptor output results

We’ll use the same COX-2 receptor from the previous tutorial (5KIR). Copy or re-prepare your 5KIR_receptor.pdbqt into the receptor/ folder.

Step 1 — Source and prepare your compound library


Where to get compound libraries

Several databases provide free compound libraries in formats ready for docking. Choose based on your use case:

ZINC20 (zinc20.docking.org). Free; billions of compounds. The standard source for drug-like virtual screening libraries. Filter by Lipinski rules, purchasability, and vendor; download in SDF or SMILES format.

ChEMBL (ebi.ac.uk/chembl). Free; bioactive focus. Bioactive compounds with experimental data. Ideal when you want a focused library of compounds with known activity against related targets.

PubChem BioAssay (pubchem.ncbi.nlm.nih.gov). Free; validated actives. Download compounds tested in specific bioassays. Good for benchmarking, since you can compare docking ranks to known experimental outcomes.

FDA Approved Drugs (drugbank.com / ChEMBL). Free; repurposing focus. For drug repurposing campaigns: a small library (~2,000 compounds) with known safety profiles and a high hit-to-resource ratio for exploratory projects.

For this tutorial, download a small focused library from ZINC20. Go to zinc20.docking.org → Catalogs → In-Stock → filter for drug-like (MW 250–500, LogP −1 to 5) → download 500 compounds as a single SDF file. Save it as library_raw/library.sdf.

For testing: use a small sample first
Before running your full library, always test the pipeline with 5–10 compounds to make sure everything works end-to-end. Virtual screening on a broken pipeline wastes compute time and produces no useful output.

Batch-prepare ligands with Open Babel

You need a separate PDBQT file for every compound in the library. Open Babel can do this in a single command by splitting the SDF and converting each entry:

Terminal
cd ~/docking/virtual_screen

# Convert entire library SDF to individual PDBQT files
obabel library_raw/library.sdf \
  -O library_pdbqt/compound_.pdbqt \
  --gen3d \
  -p 7.4 \
  --partialcharge gasteiger \
  -m \
  2>> library_pdbqt/conversion.log

# Check how many were created
ls library_pdbqt/*.pdbqt | wc -l

The -m flag tells Open Babel to split the input into individual output files, naming them compound_1.pdbqt, compound_2.pdbqt, etc. The 2>> conversion.log captures any conversion warnings without cluttering your terminal.

Inspect the log for failures — some compounds will fail 3D conformer generation (usually highly constrained ring systems or salts). This is normal. A 5–10% failure rate is expected; above 20% suggests a problem with your input file.
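
To quantify that failure rate, a quick sanity check (standard library only; paths match the ones used above) counts molecule records in the SDF, where each record ends with a `$$$$` delimiter, and compares against the number of PDBQT files produced:

```python
from pathlib import Path

def count_sdf_records(sdf_path):
    """Count molecule records in an SDF file ($$$$ terminates each record)."""
    with open(sdf_path) as f:
        return sum(1 for line in f if line.strip() == "$$$$")

def conversion_report(sdf_path, pdbqt_dir):
    """Compare SDF input records against PDBQT files produced."""
    n_in  = count_sdf_records(sdf_path)
    n_out = len(list(Path(pdbqt_dir).glob("*.pdbqt")))
    rate  = 100 * (n_in - n_out) / n_in if n_in else 0.0
    print(f"{n_in} input records, {n_out} converted, {rate:.1f}% failed")
    return n_in, n_out

# Usage: conversion_report("library_raw/library.sdf", "library_pdbqt")
```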

Step 2 — Prepare the receptor and config template


Copy your prepared receptor into place:

Terminal
cp ~/docking/cox2_ibuprofen/receptor/5KIR_receptor.pdbqt receptor/

Create a config template. In virtual screening you’ll use the same receptor, grid box, and search settings for every compound — only the ligand path changes. Save this as config_template.txt:

config_template.txt
# Receptor — same for every docking run
receptor = receptor/5KIR_receptor.pdbqt

# Grid box — binding site center from previous tutorial
center_x = 15.234
center_y = -8.441
center_z = 22.178
size_x   = 20
size_y   = 20
size_z   = 20

# Search settings — lower exhaustiveness for speed in VS
exhaustiveness = 8
num_modes      = 1
energy_range   = 3

Note num_modes = 1 — in virtual screening you only need the top-ranked pose per compound. Saving 9 poses per compound wastes disk space and slows down result parsing without adding value at the screening stage.

Step 3 — Batch dock with a Python script


The core of virtual screening automation is a loop that: writes a per-compound config file, calls Vina as a subprocess, captures the output score, and logs everything. Save the following as run_vs.py:

run_vs.py
import subprocess
import os
import glob
import csv
import re
from pathlib import Path
from tqdm import tqdm

# ── Configuration ────────────────────────────────────────────────
RECEPTOR    = "receptor/5KIR_receptor.pdbqt"
LIGAND_DIR  = "library_pdbqt"
OUTPUT_DIR  = "output"
RESULTS_CSV = "results/all_scores.csv"
VINA_BIN    = "vina"          # or full path: /usr/local/bin/vina
N_CPUS      = 4               # parallel Vina jobs

# Grid box — update with your binding site coordinates
GRID = {
    "center_x": 15.234, "center_y": -8.441, "center_z": 22.178,
    "size_x":   20,      "size_y":   20,      "size_z":   20,
    "exhaustiveness": 8,  "num_modes": 1,  "energy_range": 3,
}
# ─────────────────────────────────────────────────────────────────

def write_config(ligand_path, output_path, config_path):
    """Write a per-compound Vina config file."""
    lines = [
        f"receptor = {RECEPTOR}\n",
        f"ligand   = {ligand_path}\n",
        f"out      = {output_path}\n\n",
    ]
    for k, v in GRID.items():
        lines.append(f"{k} = {v}\n")
    Path(config_path).write_text("".join(lines))


def parse_score(output_pdbqt):
    """Extract the best binding affinity from a Vina output PDBQT."""
    try:
        text = Path(output_pdbqt).read_text()
    except OSError:
        return None  # output file missing or unreadable
    match = re.search(r"REMARK VINA RESULT:\s+([-\d.]+)", text)
    return float(match.group(1)) if match else None


def dock_one(ligand_pdbqt):
    """Run a single Vina docking job. Returns (compound_name, score)."""
    name        = Path(ligand_pdbqt).stem
    out_pdbqt   = f"{OUTPUT_DIR}/{name}_out.pdbqt"
    config_path = f"{OUTPUT_DIR}/{name}_config.txt"

    write_config(ligand_pdbqt, out_pdbqt, config_path)

    try:
        result = subprocess.run(
            [VINA_BIN, "--config", config_path],
            capture_output=True, text=True, timeout=300
        )
    except subprocess.TimeoutExpired:
        return name, None  # stuck job — treat as failed

    if result.returncode != 0:
        return name, None  # docking failed; config kept for debugging

    score = parse_score(out_pdbqt)

    # Clean up config file to save space
    os.remove(config_path)
    return name, score


def main():
    os.makedirs(OUTPUT_DIR, exist_ok=True)
    os.makedirs("results", exist_ok=True)

    ligands = sorted(glob.glob(f"{LIGAND_DIR}/*.pdbqt"))
    print(f"Found {len(ligands)} ligands to dock.")

    scores = []
    failed = []

    for ligand in tqdm(ligands, desc="Docking"):
        name, score = dock_one(ligand)
        if score is not None:
            scores.append({"compound": name, "score_kcal_mol": score})
        else:
            failed.append(name)

    # Write results CSV
    scores.sort(key=lambda x: x["score_kcal_mol"])  # most negative first
    with open(RESULTS_CSV, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["compound", "score_kcal_mol"])
        writer.writeheader()
        writer.writerows(scores)

    print(f"\nDone. {len(scores)} succeeded, {len(failed)} failed.")
    print(f"Results saved to {RESULTS_CSV}")
    print(f"\nTop 5 hits:")
    for row in scores[:5]:
        print(f"  {row['compound']:30s}  {row['score_kcal_mol']:.1f} kcal/mol")

    if failed:
        print(f"\nFailed compounds ({len(failed)}):")
        for name in failed[:10]:
            print(f"  {name}")


if __name__ == "__main__":
    main()

Run it:

Terminal
python run_vs.py

You’ll see a progress bar counting through your library. On a modern laptop with exhaustiveness = 8, expect roughly 1–3 compounds per minute. A 500-compound library takes 3–8 hours on a single CPU — which is why this tutorial covers parallelization below.

Terminal — virtual screening output
Found 487 ligands to dock.
Docking: 100%|████████████████| 487/487 [4:12:03<00:00, 31.0s/it]

Done. 471 succeeded, 16 failed.
Results saved to results/all_scores.csv

Top 5 hits:
  compound_142                    -11.2 kcal/mol
  compound_089                    -10.8 kcal/mol
  compound_311                    -10.4 kcal/mol
  compound_204                    -10.1 kcal/mol
  compound_417                    -9.9 kcal/mol

Example virtual screening output. Your compounds and scores will differ.

Step 4 — Parse and filter results


The CSV gives you a ranked list of all docked compounds. Now you need to filter it down to a shortlist worth inspecting visually. The standard approach is a two-stage filter: a score cutoff, then manual pose inspection of the survivors.

Save this as filter_results.py:

filter_results.py
import pandas as pd

# Load all results
df = pd.read_csv("results/all_scores.csv")
print(f"Total docked: {len(df)}")
print(f"Score distribution:\n{df['score_kcal_mol'].describe()}\n")

# Apply score cutoff: top 10% boundary or -9.0 kcal/mol, whichever is stricter (more negative)
cutoff  = min(df['score_kcal_mol'].quantile(0.10), -9.0)
hits    = df[df['score_kcal_mol'] <= cutoff].copy()
hits    = hits.sort_values('score_kcal_mol')

print(f"Score cutoff applied: {cutoff:.1f} kcal/mol")
print(f"Hits passing cutoff:  {len(hits)}\n")
print(hits.to_string(index=False))

# Save filtered hit list
hits.to_csv("results/hits.csv", index=False)
print("\nHit list saved to results/hits.csv")
Terminal
python filter_results.py

Your ranked hit list will look something like this:

Compound        Score (kcal/mol)    Status
compound_142    −11.2               Top hit
compound_089    −10.8               Top hit
compound_311    −10.4               Top hit
compound_204    −10.1               Top hit
compound_417    −9.9                Top hit
compound_033    −8.6
compound_271    −8.1
The score cutoff gets you from hundreds of compounds down to tens. The next cut — manual pose inspection — gets you from tens down to the handful you’d actually take to the lab. The funnel looks like this in practice:

Full library docked: 487 compounds
→ Score cutoff (≤ −9.0 kcal/mol): ~48 compounds
→ Pose inspection: ~12 compounds
→ Experimental validation: 5–8 compounds

Step 5 — Visualize top hits in PyMOL


Load your receptor and top-scoring compound outputs together in PyMOL. This script loads all your top hits at once for efficient visual comparison:

PyMOL command line
# Load receptor
load receptor/5KIR_receptor.pdbqt, receptor
hide everything, receptor
show cartoon, receptor
color gray80, receptor

# Load top hits — repeat for each compound in your hits.csv
load output/compound_142_out.pdbqt, hit_142
load output/compound_089_out.pdbqt, hit_089
load output/compound_311_out.pdbqt, hit_311

# Show all hits as colored sticks
show sticks, hit_142 or hit_089 or hit_311
color cyan,   hit_142
color magenta, hit_089
color yellow,  hit_311

# Show binding site residues within 5 Å of any hit
select bs, byres (receptor within 5 of (hit_142 or hit_089 or hit_311))
show sticks, bs
zoom bs
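
Typing one load line per hit gets tedious for a longer shortlist. A small generator script, sketched here under the same file layout as above (the receptor path, the output/ naming scheme, and hits.csv from Step 4), writes those PyMOL commands to a .pml file you can run with @load_hits.pml:

```python
import csv

COLORS = ["cyan", "magenta", "yellow", "orange", "green", "salmon"]

def make_pml(hits_csv="results/hits.csv", out_pml="load_hits.pml", top_n=6):
    """Generate PyMOL load/color commands for the top_n hits."""
    with open(hits_csv) as f:
        hits = [row["compound"] for row in csv.DictReader(f)][:top_n]

    lines = [
        "load receptor/5KIR_receptor.pdbqt, receptor",
        "hide everything, receptor",
        "show cartoon, receptor",
        "color gray80, receptor",
    ]
    for i, name in enumerate(hits):
        lines.append(f"load output/{name}_out.pdbqt, {name}")
        lines.append(f"show sticks, {name}")
        lines.append(f"color {COLORS[i % len(COLORS)]}, {name}")

    with open(out_pml, "w") as f:
        f.write("\n".join(lines) + "\n")
    return hits
```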

For each hit in your shortlist, work through the same checklist from the single-ligand tutorial: is the ligand inside the pocket, are there hydrogen bonds to known key residues, does the binding mode make chemical sense? Discard any compound whose top-ranked pose is outside the binding site or makes no meaningful contacts — these are false positives driven by the scoring function.

The most common virtual screening failure mode
Highly flexible compounds (more than ~8 rotatable bonds) with no clear pharmacophore match often rank well simply by draping hydrophobic carbon chains across the binding site: favorable to the scoring function, but with no real selectivity. Always check the chemical structure of your hits against known pharmacophore features of your target. A good score from a structureless aliphatic chain is usually meaningless.
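
RDKit, installed during setup, can flag these compounds automatically. A minimal sketch, assuming you have SMILES for your compounds (ZINC downloads include them); the threshold of 8 is the rule of thumb from above, not a hard rule:

```python
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

def too_flexible(smiles, max_rotatable=8):
    """Flag compounds with more rotatable bonds than max_rotatable."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return True  # unparseable structure — treat as suspect
    return rdMolDescriptors.CalcNumRotatableBonds(mol) > max_rotatable

# A long aliphatic chain vs. a rigid fused-ring system
print(too_flexible("CCCCCCCCCCCCCC"))   # tetradecane → True
print(too_flexible("c1ccc2ccccc2c1"))   # naphthalene → False
```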

Scaling up: tips for large libraries

The sequential script above handles hundreds of compounds comfortably. For thousands to millions, you need to parallelize. Here are the main strategies, from simplest to most powerful.

Parallel execution on a multicore machine

Replace the sequential loop in run_vs.py with Python’s ProcessPoolExecutor to run multiple Vina instances in parallel:

run_vs_parallel.py — key change
from concurrent.futures import ProcessPoolExecutor, as_completed

with ProcessPoolExecutor(max_workers=N_CPUS) as executor:
    futures = {executor.submit(dock_one, lig): lig for lig in ligands}
    for future in tqdm(as_completed(futures), total=len(futures)):
        name, score = future.result()
        if score is not None:
            scores.append({"compound": name, "score_kcal_mol": score})

Set N_CPUS to your machine’s core count minus one (leave one core for the OS). On an 8-core machine this gives roughly 7× speedup.
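
Rather than hard-coding N_CPUS, you can derive it at runtime:

```python
import os

# Leave one core free for the OS; never go below one worker.
# os.cpu_count() can return None, hence the fallback.
N_CPUS = max(1, (os.cpu_count() or 2) - 1)
print(f"Using {N_CPUS} parallel Vina jobs")
```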

HPC cluster with SLURM

For libraries above 10,000 compounds, submit array jobs to your university’s HPC cluster. Split the library into chunks of 500 compounds and submit each chunk as a separate SLURM job. Most universities provide free compute allocations for grad students — check with your research computing office.
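
The chunking itself is a few lines of Python. A sketch, with an assumed chunk_NNN.txt naming scheme: each SLURM array task would read its own chunk file and run the dock_one loop over just those ligands, submitted with --array=0 through n_chunks − 1:

```python
import glob
from pathlib import Path

def write_chunks(ligand_dir="library_pdbqt", chunk_size=500, out_dir="chunks"):
    """Split the ligand list into files of chunk_size paths each,
    one file per SLURM array task. Returns the number of chunks."""
    ligands = sorted(glob.glob(f"{ligand_dir}/*.pdbqt"))
    Path(out_dir).mkdir(exist_ok=True)
    n_chunks = 0
    for i in range(0, len(ligands), chunk_size):
        chunk = ligands[i:i + chunk_size]
        Path(f"{out_dir}/chunk_{i // chunk_size:03d}.txt").write_text("\n".join(chunk))
        n_chunks += 1
    return n_chunks
```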

AutoDock-GPU for massive libraries

If you have GPU access, AutoDock-GPU can screen millions of compounds in hours. It uses the same PDBQT input format as Vina and is free and open source. The workflow is nearly identical to what you’ve built here — the main difference is replacing the Vina binary with the AutoDock-GPU binary in your script.

Runtime estimates for planning
Sequential Vina on one CPU core at exhaustiveness 8: roughly 30 s–2 min per compound depending on ligand flexibility → 500 compounds ≈ 4–17 hours. Parallel Vina on 8 CPU cores: ≈ 0.5–2.5 hours. AutoDock-GPU on a single V100: 500,000 compounds ≈ overnight.
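
These estimates are plain arithmetic, so a throwaway helper makes it easy to plan your own run (assuming perfect parallel scaling and a fixed per-compound time, both optimistic):

```python
def estimate_hours(n_compounds, sec_per_compound, n_workers=1):
    """Back-of-envelope wall-clock estimate, assuming perfect scaling."""
    return n_compounds * sec_per_compound / n_workers / 3600

# 500 compounds at ~30 s each, sequential vs. 8 workers
print(f"{estimate_hours(500, 30):.1f} h sequential")   # → 4.2 h sequential
print(f"{estimate_hours(500, 30, 8):.1f} h on 8 cores")
```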

What you’ve built

A complete, scriptable virtual screening pipeline: a compound library sourced from ZINC, batch-prepared with Open Babel, docked in parallel with AutoDock Vina, filtered by score with pandas, and visualized in PyMOL. This is the same workflow — scaled up — used in academic drug discovery labs worldwide.

The Python script is yours to adapt. Change the receptor, swap in a different library, adjust the score cutoff, add PAINS filtering, integrate ADMET property prediction. The core loop stays the same.

  • Compound library sourced (ZINC20, ChEMBL, or PubChem) and saved as SDF
  • Library batch-converted to individual PDBQT files with Open Babel
  • Receptor PDBQT and config template in place
  • Tested pipeline on 5–10 compounds before running full library
  • run_vs.py executed — all scores written to CSV
  • Score cutoff applied — hit list filtered and saved
  • Top hits visually inspected in PyMOL — poses checked for binding mode quality
