Dummy TaskPool¶

The DummyTaskPool is used for debugging purposes. It inherits from the base TaskPool class.
Reference¶
DummyTaskPool¶
Bases: BaseTaskPool
A lightweight task pool implementation for debugging and development workflows.
This dummy task pool provides a minimal evaluation interface that focuses on model introspection rather than task-specific performance evaluation. It's designed for development scenarios where you need to test model fusion pipelines, validate architectures, or debug workflows without the overhead of running actual evaluation tasks.
The task pool is particularly useful when:
- You want to verify model fusion works correctly
- You need to check parameter counts after fusion
- You're developing new fusion algorithms
- You want to test infrastructure without expensive evaluations
Example
Source code in fusion_bench/taskpool/dummy.py
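A minimal, self-contained sketch of the kind of parameter-statistics report the task pool produces. This is illustrative only: the field names and the `get_model_summary` helper shape are assumptions, and a plain stand-in class replaces a real torch `nn.Module` so the snippet runs without PyTorch installed.

```python
# Hypothetical sketch: mimics the style of report DummyTaskPool.evaluate
# returns, using a stand-in model so no torch install is required.

class TinyModel:
    """Stand-in for a torch nn.Module exposing named parameters."""

    def __init__(self):
        # name -> element count, mimicking weight tensors
        self._params = {"linear.weight": 768 * 768, "linear.bias": 768}

    def named_parameters(self):
        return self._params.items()


def get_model_summary(model):
    """Minimal parameter-statistics report (field names are assumptions)."""
    counts = dict(model.named_parameters())
    return {
        "total_params": sum(counts.values()),
        "num_param_tensors": len(counts),
    }


report = get_model_summary(TinyModel())
print(report)
```

Because the pool only introspects the model, this kind of check is cheap enough to run after every fusion step during development.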
evaluate(model)¶
Perform lightweight evaluation and analysis of the given model.
This method provides a minimal evaluation that focuses on model introspection rather than task-specific performance metrics. It performs parameter analysis, optionally saves the model, and returns a summary report.
The evaluation process includes:

1. Printing human-readable parameter information (rank-zero only)
2. Optionally saving the model if a save path was configured
3. Generating and returning a model summary report
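The three steps above can be sketched as follows. This is a hedged illustration, not the library's actual implementation: `is_rank_zero`, `save_path`, and the summary format are stand-ins for whatever fusion_bench uses internally, and a tiny demo class substitutes for a real torch model.

```python
import json

def evaluate(model, save_path=None, is_rank_zero=True):
    """Sketch of the three-step evaluation flow described above."""
    counts = dict(model.named_parameters())

    # 1. Print human-readable parameter info on the main process only.
    if is_rank_zero:
        for name, n in counts.items():
            print(f"{name}: {n:,} elements")

    # 2. Optionally persist something if a save path was configured
    #    (real code would save the model weights, not just the counts).
    if save_path is not None:
        with open(save_path, "w") as f:
            json.dump(counts, f)

    # 3. Generate and return the summary report.
    return {"total_params": sum(counts.values())}


class _Demo:
    """Tiny stand-in model for demonstration."""

    def named_parameters(self):
        return [("w", 10), ("b", 2)]


report = evaluate(_Demo(), is_rank_zero=False)
print(report)  # {'total_params': 12}
```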
Parameters:

- model – The model to evaluate. Can be any PyTorch nn.Module, including fusion models, pre-trained models, or custom architectures.
Returns:

- dict – A model summary report containing parameter statistics and architecture information. See get_model_summary() for the detailed format specification.