
Evaluate a Single CLIP Model

This tutorial demonstrates how to evaluate a single CLIP (Contrastive Language-Image Pre-training) model on multiple downstream vision tasks using the FusionBench CLI. This serves as a baseline for understanding model performance before applying fusion techniques.

This example uses DummyAlgorithm, a class designed for single-model evaluation: it returns the pretrained model as-is, or the first available model if _pretrained_ is not present, without applying any modifications.
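Conceptually, the pass-through behavior looks like the sketch below. This is illustrative only; load_model and model_names are assumed model-pool accessors, not necessarily FusionBench's exact API.

def run_dummy(modelpool):
    # Return the pretrained model untouched; fall back to the first
    # available model when no "_pretrained_" entry exists.
    # (load_model / model_names are hypothetical accessor names.)
    if "_pretrained_" in modelpool.model_names:
        return modelpool.load_model("_pretrained_")
    return modelpool.load_model(modelpool.model_names[0])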

🔧 Standalone YAML Configuration

The example uses the following configuration, which evaluates a pretrained CLIP model on multiple image classification datasets:

config/_get_started/clip_evaluate_single_model.yaml
_target_: fusion_bench.programs.FabricModelFusionProgram
_recursive_: false
method:
  _target_: fusion_bench.method.DummyAlgorithm
modelpool:
  _target_: fusion_bench.modelpool.CLIPVisionModelPool
  models:
    _pretrained_: openai/clip-vit-base-patch32
taskpool:
  _target_: fusion_bench.taskpool.CLIPVisionModelTaskPool
  test_datasets:
    sun397:
      _target_: datasets.load_dataset
      path: tanganke/sun397
      split: test
    stanford-cars:
      _target_: datasets.load_dataset
      path: tanganke/stanford_cars
      split: test
  clip_model: openai/clip-vit-base-patch32
  processor: openai/clip-vit-base-patch32

1. Program Configuration: Specifies FabricModelFusionProgram to handle the evaluation workflow.
2. Method Configuration: Uses DummyAlgorithm, which passes the input model through unchanged.
3. Model Pool: Contains only the base pretrained CLIP model (openai/clip-vit-base-patch32):

   models={'_pretrained_': 'openai/clip-vit-base-patch32'}

4. Task Pool: Defines the evaluation datasets and specifies the CLIP model and processor used for inference. In this example (each entry is instantiated via datasets.load_dataset, as sketched after this list):

   test_datasets = {
       'sun397': ...,
       'stanford-cars': ...,
   }
    
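At runtime, each test_datasets entry resolves to a standard Hugging Face datasets call. A minimal, self-contained sketch (the variable names are illustrative):

from datasets import load_dataset

# Equivalent to the two test_datasets entries in the YAML above.
sun397_test = load_dataset("tanganke/sun397", split="test")
stanford_cars_test = load_dataset("tanganke/stanford_cars", split="test")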

🚀 Running the Example

Execute the model evaluation with the following command:

fusion_bench \
    --config-path $PWD/config/_get_started \
    --config-name clip_evaluate_single_model

Or override the model path by passing modelpool.models._pretrained_=<new_model_path>:

fusion_bench \
    --config-path $PWD/config/_get_started \
    --config-name clip_evaluate_single_model \
    modelpool.models._pretrained_=<new_model_path>
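
Hydra resolves such dotted overrides against the YAML tree above. To inspect the effective configuration from Python before launching a run, here is a minimal sketch using Hydra's compose API (assuming Hydra is installed and the config directory exists; the override value is illustrative):

import os
from hydra import compose, initialize_config_dir
from omegaconf import OmegaConf

# Load the same config the CLI uses, apply an override, and print
# the merged result. The checkpoint name below is only an example.
with initialize_config_dir(
    config_dir=os.path.join(os.getcwd(), "config/_get_started"),
    version_base=None,
):
    cfg = compose(
        config_name="clip_evaluate_single_model",
        overrides=["modelpool.models._pretrained_=openai/clip-vit-large-patch14"],
    )
print(OmegaConf.to_yaml(cfg))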

🐛 Debugging Configuration (VS Code)

.vscode/launch.json
{
    "name": "clip_evaluate_single_model",
    "type": "debugpy",
    "request": "launch",
    "module": "fusion_bench.scripts.cli",
    "args": [
        "--config-path",
        "${workspaceFolder}/config/_get_started",
        "--config-name",
        "clip_evaluate_single_model"
    ],
    "console": "integratedTerminal",
    "justMyCode": true,
    "env": {
        "HYDRA_FULL_ERROR": "1"
    }
}
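
Setting HYDRA_FULL_ERROR=1 makes Hydra print the complete stack trace when configuration instantiation fails, which helps pinpoint errors in _target_ paths and constructor arguments.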