Model Training/Fine-Tuning¶
CLIP vision model fine-tuning¶
- ImageClassificationFineTuningForCLIP: Fine-tuning the CLIP vision encoder on image classification tasks.
- ContinualImageClassificationFineTuningForCLIP: Continual fine-tuning of the CLIP vision encoder on image classification tasks.
ImageClassificationFineTuning¶
Bases: BaseAlgorithm
Fine-tuning algorithm for image classification models.
This class implements end-to-end fine-tuning for image classification tasks using PyTorch Lightning. It supports both epoch-based and step-based training with configurable optimizers, learning rate schedulers, and data loaders.
Parameters:

- `max_epochs` (Optional[int]) – Maximum number of training epochs. Mutually exclusive with max_steps.
- `max_steps` (Optional[int]) – Maximum number of training steps. Mutually exclusive with max_epochs.
- `label_smoothing` (float) – Label smoothing factor for cross-entropy loss (0.0 = no smoothing).
- `optimizer` (DictConfig) – Configuration for the optimizer (e.g., Adam, SGD).
- `lr_scheduler` (DictConfig) – Configuration for the learning rate scheduler.
- `dataloader_kwargs` (DictConfig) – Additional keyword arguments for DataLoader construction.
- `**kwargs` – Additional arguments passed to the base class.
Raises:

- `AssertionError` – If both max_epochs and max_steps are provided.
Example

```python
>>> config = {
...     'max_epochs': 10,
...     'max_steps': None,
...     'label_smoothing': 0.1,
...     'optimizer': {'_target_': 'torch.optim.Adam', 'lr': 0.001},
...     'lr_scheduler': {'_target_': 'torch.optim.lr_scheduler.StepLR', 'step_size': 5},
...     'dataloader_kwargs': {'batch_size': 32, 'num_workers': 4}
... }
>>> algorithm = ImageClassificationFineTuning(**config)
```
Source code in fusion_bench/method/classification/image_classification_finetune.py
get_dataloader(dataset, stage)¶
Create a DataLoader for the specified dataset and training stage.
Constructs a PyTorch DataLoader with stage-appropriate configurations:

- Training stage: shuffling enabled by default
- Validation/test stages: shuffling disabled by default
Parameters:

- `dataset` – The dataset to wrap in a DataLoader.
- `stage` (str) – Training stage, must be one of "train", "val", or "test". Determines default shuffling behavior.
Returns:

- `DataLoader` – Configured DataLoader for the given dataset and stage.
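A minimal sketch of the stage-dependent behavior, assuming `dataloader_kwargs` holds the constructor's DataLoader options (illustrative, not the actual implementation):

```python
from torch.utils.data import DataLoader

def get_dataloader(dataset, stage: str, dataloader_kwargs: dict) -> DataLoader:
    """Sketch: shuffle defaults to True for "train", False otherwise."""
    assert stage in ("train", "val", "test")
    kwargs = dict(dataloader_kwargs)
    kwargs.setdefault("shuffle", stage == "train")
    return DataLoader(dataset, **kwargs)
```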
Source code in fusion_bench/method/classification/image_classification_finetune.py
run(modelpool)¶
Execute the fine-tuning process on the provided model pool.
This method performs the complete fine-tuning workflow:

1. Loads the pretrained model from the model pool
2. Prepares training and validation datasets
3. Configures optimizer and learning rate scheduler
4. Sets up Lightning trainer with appropriate callbacks
5. Executes the training process
6. Saves the final fine-tuned model
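Invoking the workflow is a single call (a usage sketch reusing the `algorithm` instance from the class-level example; the `modelpool` object is assumed to supply the pretrained model and the train/val datasets):

```python
# Hypothetical usage: `modelpool` provides the pretrained model and datasets.
finetuned_model = algorithm.run(modelpool)
```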
Source code in fusion_bench/method/classification/image_classification_finetune.py
ImageClassificationFineTuning_Test¶
Bases: BaseAlgorithm
Test/evaluation algorithm for fine-tuned image classification models.
This class implements model evaluation on test or validation datasets using PyTorch Lightning. It can either evaluate a model directly or load a model from a checkpoint before evaluation. The evaluation computes standard classification metrics including top-1 and top-5 accuracy.
Parameters:

- `checkpoint_path` (str) – Path to the model checkpoint file. If None, uses the model directly from the model pool without loading from checkpoint.
- `dataloader_kwargs` (DictConfig) – Additional keyword arguments for DataLoader construction.
- `**kwargs` – Additional arguments passed to the base class.
Example
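A minimal usage sketch (the checkpoint path and `modelpool` are placeholders):

```python
>>> algorithm = ImageClassificationFineTuning_Test(
...     checkpoint_path='outputs/checkpoints/last.ckpt',  # or None to evaluate the pooled model directly
...     dataloader_kwargs={'batch_size': 64, 'num_workers': 4},
... )
>>> metrics = algorithm.run(modelpool)  # logs and returns top-1/top-5 accuracy
```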
Source code in fusion_bench/method/classification/image_classification_finetune.py
get_dataloader(dataset, stage)¶
Create a DataLoader for the specified dataset and evaluation stage.
Constructs a PyTorch DataLoader with stage-appropriate configurations for evaluation. Similar to the training version but typically used for test/validation datasets.
Parameters:

- `dataset` – The dataset to wrap in a DataLoader.
- `stage` (str) – Evaluation stage, must be one of "train", "val", or "test". Determines default shuffling behavior (disabled for non-train stages).
Returns:

- `DataLoader` – Configured DataLoader for the given dataset and stage.
Source code in fusion_bench/method/classification/image_classification_finetune.py
run(modelpool)¶
Execute model evaluation on the provided model pool's test/validation dataset.
This method performs the complete evaluation workflow:

1. Loads the model from the model pool (pretrained or first available)
2. Prepares the test or validation dataset (prioritizes test if both are available)
3. Sets up the Lightning module with appropriate metrics (top-1 and top-5 accuracy)
4. Loads from checkpoint if specified, otherwise uses the model directly
5. Executes the evaluation using the Lightning trainer
6. Logs and returns the test metrics
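Step 3 typically amounts to a torchmetrics setup along these lines (a sketch, assuming `torchmetrics` is installed and the number of classes is known from the dataset):

```python
import torchmetrics

num_classes = 1000  # assumption: taken from the dataset
top1 = torchmetrics.Accuracy(task="multiclass", num_classes=num_classes, top_k=1)
top5 = torchmetrics.Accuracy(task="multiclass", num_classes=num_classes, top_k=5)
```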
Source code in fusion_bench/method/classification/image_classification_finetune.py
ImageClassificationFineTuningForCLIP¶
Bases: CLIPClassificationMixin, SimpleProfilerMixin, ModelFusionAlgorithm
A class for fine-tuning CLIP models for image classification tasks.
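For context, CLIP-based classification typically encodes class names with the text encoder and scores images by scaled cosine similarity; the vision encoder is then fine-tuned against those logits. A sketch with the Hugging Face transformers API (model name, prompts, and class names are assumptions, not this class's defaults):

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

class_names = ["cat", "dog"]  # assumption: dataset class names
text_inputs = processor(
    text=[f"a photo of a {c}" for c in class_names],
    return_tensors="pt", padding=True,
)
with torch.no_grad():
    text_embeds = model.get_text_features(**text_inputs)
    text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)

def classify(pixel_values: torch.Tensor) -> torch.Tensor:
    """Sketch: logits are scaled cosine similarities to the class embeddings."""
    image_embeds = model.get_image_features(pixel_values=pixel_values)
    image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
    return model.logit_scale.exp() * image_embeds @ text_embeds.T
```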
Source code in fusion_bench/method/classification/clip_finetune.py
run(modelpool)¶
Executes the fine-tuning process.
Parameters:

- `modelpool` (CLIPVisionModelPool) – The modelpool is responsible for loading the pre-trained model and training datasets.
Returns:

- `VisionModel` – The fine-tuned vision model.
Source code in fusion_bench/method/classification/clip_finetune.py
save_model(model, save_path)¶
Save the vision model to the specified path.
Parameters:

- `model` (Union[HFCLIPClassifier, CLIPModel, CLIPVisionModel, CLIPVisionTransformer]) – The model to save.
- `save_path` (str) – The path to save the model.
Source code in fusion_bench/method/classification/clip_finetune.py
setup_model()¶
Sets up the model, optimizer, and learning rate scheduler.
This method initializes the CLIP model, applies LoRA if specified, and configures the optimizer and learning rate scheduler.
Returns:

- `Tuple` – A tuple containing the processor, classifier, optimizer, and learning rate scheduler.
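When LoRA is enabled, the setup resembles the standard peft pattern (a sketch; the rank, alpha, and target modules are assumptions, not this class's defaults):

```python
from peft import LoraConfig, get_peft_model
from transformers import CLIPVisionModel

vision_model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],  # assumption: attention projections
)
vision_model = get_peft_model(vision_model, lora_config)
vision_model.print_trainable_parameters()
```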
Source code in fusion_bench/method/classification/clip_finetune.py
ContinualImageClassificationFineTuningForCLIP¶
Bases: CLIPClassificationMixin, SimpleProfilerMixin, BaseAlgorithm
Source code in fusion_bench/method/classification/continual_clip_finetune.py
save_model(model, save_path)¶
Save the vision model to the specified path.
Parameters:

- `model` (Union[HFCLIPClassifier, CLIPModel, CLIPVisionModel, CLIPVisionTransformer]) – The model to save.
- `save_path` (str) – The path to save the model.
Source code in fusion_bench/method/classification/continual_clip_finetune.py
setup_model()¶
Sets up the model, optimizer, and learning rate scheduler.
This method initializes the CLIP model, applies LoRA if specified, and configures the optimizer and learning rate scheduler.
Returns:

- `Tuple` – A tuple containing the processor, classifier, optimizer, and learning rate scheduler.
Source code in fusion_bench/method/classification/continual_clip_finetune.py
LLM Fine-tuning¶
FullFinetuneSFT¶
Bases: BaseAlgorithm, FabricTrainingMixin
Source code in fusion_bench/method/lm_finetune/fullfinetune_sft.py
__init__(optimizer, lr_scheduler, dataloader_kwargs, max_epochs, max_steps=-1, max_steps_per_epoch=-1, lr_scheduler_interval='step', lr_scheduler_frequency=1, checkpoint_save_interval='epoch', checkpoint_save_frequency=1, accumulate_grad_batches=1, gradient_clip_val=None, gradient_clip_algorithm='norm', save_optimizer_state=False, save_full_model=False, save_ckpt_type='lightning', ckpt_path=None, max_length=6144, fix_token_embedding=True, **kwargs)¶
Class for full fine-tuning of a language model on given SFT datasets.
Parameters:

- `optimizer` (DictConfig) – Configuration for the optimizer.
- `lr_scheduler` (DictConfig) – Configuration for the learning rate scheduler.
- `dataloader_kwargs` (DictConfig) – Configuration for the dataloader, such as batch size, num_workers, etc.
- `max_epochs` (int) – Maximum number of epochs to train the model. If set to -1, the training will continue indefinitely or until max_steps is reached.
- `max_steps` (int, default: -1) – Maximum number of steps to train the model. If set to -1, the training will continue indefinitely or until max_epochs is reached.
- `max_steps_per_epoch` (int, default: -1) – Maximum number of steps to train the model in each epoch. If set to -1, the training will continue until the end of the epoch.
- `lr_scheduler_interval` (str, default: 'step') – Interval at which to run the learning rate scheduler. Available options: 'epoch', 'step'. If set to 'epoch', the scheduler will run at the end of each epoch. If set to 'step', the scheduler will run at the end of each step.
- `lr_scheduler_frequency` (int, default: 1) – Frequency at which to run the learning rate scheduler. The scheduler will run every `lr_scheduler_frequency` epochs or steps, depending on the value of `lr_scheduler_interval`.
- `checkpoint_save_interval` (str, default: 'epoch') – Interval at which to save the model checkpoint. Available options: 'epoch', 'step'. If set to 'epoch', the model will be saved at the end of each epoch. If set to 'step', the model will be saved at the end of each step.
- `checkpoint_save_frequency` (int, default: 1) – Frequency at which to save the model checkpoint. The model will be saved every `checkpoint_save_frequency` epochs or steps, depending on the value of `checkpoint_save_interval`.
- `accumulate_grad_batches` (int, default: 1) – Number of batches to accumulate gradients across before updating the model parameters.
- `gradient_clip_val` (float, default: None) – Value to clip the gradients. If set to None, no gradient clipping will be applied.
- `gradient_clip_algorithm` (str, default: 'norm') – Algorithm to use for gradient clipping. Available options: 'value', 'norm'. If set to 'value', the gradients will be clipped to the specified value. If set to 'norm', the gradients will be clipped to the specified norm.
- `save_optimizer_state` (bool, default: False) – Whether to save the optimizer and lr_scheduler state along with the model checkpoint.
- `save_full_model` (bool, default: False) – Whether to save the full model or only the trainable parameters in the model checkpoint.
- `save_ckpt_type` (str, default: 'lightning') – Type of checkpoint to save. Available options: 'lightning', 'hf'. If set to 'lightning', the checkpoint will be saved in the Lightning format. If set to 'hf', the checkpoint will be saved in the Hugging Face format.
- `ckpt_path` (str, default: None) – Path to the checkpoint to load before training. If set to None, no checkpoint will be loaded.
- `max_length` (int, default: 6144) – Maximum input length to consider. If the input length exceeds this value, it will be truncated.
- `fix_token_embedding` (bool, default: True) – Whether to fix the token embeddings during training. If set to True, the token embeddings will not be updated during training.
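A configuration sketch (values are illustrative; the optimizer and scheduler entries follow the Hydra `_target_` convention used in the example near the top of this page):

```python
from omegaconf import OmegaConf

config = OmegaConf.create({
    "optimizer": {"_target_": "torch.optim.AdamW", "lr": 2e-5},
    "lr_scheduler": {
        "_target_": "torch.optim.lr_scheduler.CosineAnnealingLR",
        "T_max": 1000,
    },
    "dataloader_kwargs": {"batch_size": 4, "num_workers": 8},
    "max_epochs": 3,
    "gradient_clip_val": 1.0,
    "save_ckpt_type": "hf",
})
algorithm = FullFinetuneSFT(**config)
```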
Source code in fusion_bench/method/lm_finetune/fullfinetune_sft.py
PeftFinetuneSFT¶
Bases: BaseAlgorithm, FabricTrainingMixin
Source code in fusion_bench/method/lm_finetune/peftfinetune_sft.py
__init__(optimizer, lr_scheduler, peft_config, dataloader_kwargs, adapter_name='default', merge_and_unload=False, max_epochs=1, max_steps=-1, max_steps_per_epoch=-1, lr_scheduler_interval='step', lr_scheduler_frequency=1, checkpoint_save_interval='epoch', checkpoint_save_frequency=1, accumulate_grad_batches=1, gradient_clip_val=None, gradient_clip_algorithm='norm', save_optimizer_state=False, save_full_model=False, save_ckpt_type='peft', ckpt_path=None, max_length=6144, **kwargs)¶
Class for parameter-efficient fine-tuning (PEFT) of a language model on given SFT datasets.
Parameters:

- `optimizer` (DictConfig) – Configuration for the optimizer.
- `lr_scheduler` (DictConfig) – Configuration for the learning rate scheduler.
- `peft_config` (DictConfig) – Configuration for the PEFT model.
- `dataloader_kwargs` (DictConfig) – Configuration for the dataloader, such as batch size, num_workers, etc.
- `adapter_name` (str, default: 'default') – Name of the adapter to use for the PEFT model.
- `merge_and_unload` (bool, default: False) – Whether to merge and unload the model after training.
- `max_epochs` (int, default: 1) – Maximum number of epochs to train the model. If set to -1, the training will continue indefinitely or until max_steps is reached.
- `max_steps` (int, default: -1) – Maximum number of steps to train the model. If set to -1, the training will continue indefinitely or until max_epochs is reached.
- `max_steps_per_epoch` (int, default: -1) – Maximum number of steps to train the model in each epoch. If set to -1, the training will continue until the end of the epoch.
- `lr_scheduler_interval` (str, default: 'step') – Interval at which to run the learning rate scheduler. Available options: 'epoch', 'step'. If set to 'epoch', the scheduler will run at the end of each epoch. If set to 'step', the scheduler will run at the end of each step.
- `lr_scheduler_frequency` (int, default: 1) – Frequency at which to run the learning rate scheduler. The scheduler will run every `lr_scheduler_frequency` epochs or steps, depending on the value of `lr_scheduler_interval`.
- `checkpoint_save_interval` (str, default: 'epoch') – Interval at which to save the model checkpoint. Available options: 'epoch', 'step'. If set to 'epoch', the model will be saved at the end of each epoch. If set to 'step', the model will be saved at the end of each step.
- `checkpoint_save_frequency` (int, default: 1) – Frequency at which to save the model checkpoint. The model will be saved every `checkpoint_save_frequency` epochs or steps, depending on the value of `checkpoint_save_interval`.
- `accumulate_grad_batches` (int, default: 1) – Number of batches to accumulate gradients across before updating the model parameters.
- `gradient_clip_val` (float, default: None) – Value to clip the gradients. If set to None, no gradient clipping will be applied.
- `gradient_clip_algorithm` (str, default: 'norm') – Algorithm to use for gradient clipping. Available options: 'value', 'norm'. If set to 'value', the gradients will be clipped to the specified value. If set to 'norm', the gradients will be clipped to the specified norm.
- `save_optimizer_state` (bool, default: False) – Whether to save the optimizer and lr_scheduler state along with the model checkpoint.
- `save_full_model` (bool, default: False) – Whether to save the full model or only the trainable parameters in the model checkpoint.
- `save_ckpt_type` (str, default: 'peft') – Type of checkpoint to save. Available options: 'lightning', 'peft'. If set to 'lightning', the model will be saved using the Lightning checkpointing mechanism. If set to 'peft', the model will be saved using the PEFT checkpointing mechanism.
- `ckpt_path` (str, default: None) – Path to the checkpoint to load before training. If set to None, no checkpoint will be loaded.
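A sketch of a LoRA-style `peft_config` (field values are assumptions; the `_target_` key mirrors the Hydra convention used elsewhere on this page):

```python
from omegaconf import OmegaConf

peft_config = OmegaConf.create({
    "_target_": "peft.LoraConfig",
    "r": 16,
    "lora_alpha": 32,
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
    "task_type": "CAUSAL_LM",
})
```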
Source code in fusion_bench/method/lm_finetune/peftfinetune_sft.py
Reward Modeling¶
BradleyTerryRewardModeling¶
Bases: BaseAlgorithm, FabricTrainingMixin
Source code in fusion_bench/method/lm_finetune/bradley_terry_rm.py
__init__(optimizer, lr_scheduler, dataloader_kwargs, max_epochs, max_steps=-1, max_steps_per_epoch=-1, lr_scheduler_interval='step', lr_scheduler_frequency=1, checkpoint_save_interval='epoch', checkpoint_save_frequency=1, accumulate_grad_batches=1, gradient_clip_val=None, gradient_clip_algorithm='norm', save_optimizer_state=False, save_full_model=False, save_ckpt_type='lightning', ckpt_path=None, max_length=6144, fix_token_embedding=True, **kwargs)¶
Class for reward modeling using the Bradley-Terry model.
Parameters:

- `optimizer` (DictConfig) – Configuration for the optimizer.
- `lr_scheduler` (DictConfig) – Configuration for the learning rate scheduler.
- `dataloader_kwargs` (DictConfig) – Configuration for the dataloader, such as batch size, num_workers, etc.
- `max_epochs` (int) – Maximum number of epochs to train the model. If set to -1, the training will continue indefinitely or until max_steps is reached.
- `max_steps` (int, default: -1) – Maximum number of steps to train the model. If set to -1, the training will continue indefinitely or until max_epochs is reached.
- `max_steps_per_epoch` (int, default: -1) – Maximum number of steps to train the model in each epoch. If set to -1, the training will continue until the end of the epoch.
- `lr_scheduler_interval` (str, default: 'step') – Interval at which to run the learning rate scheduler. Available options: 'epoch', 'step'. If set to 'epoch', the scheduler will run at the end of each epoch. If set to 'step', the scheduler will run at the end of each step.
- `lr_scheduler_frequency` (int, default: 1) – Frequency at which to run the learning rate scheduler. The scheduler will run every `lr_scheduler_frequency` epochs or steps, depending on the value of `lr_scheduler_interval`.
- `checkpoint_save_interval` (str, default: 'epoch') – Interval at which to save the model checkpoint. Available options: 'epoch', 'step'. If set to 'epoch', the model will be saved at the end of each epoch. If set to 'step', the model will be saved at the end of each step.
- `checkpoint_save_frequency` (int, default: 1) – Frequency at which to save the model checkpoint. The model will be saved every `checkpoint_save_frequency` epochs or steps, depending on the value of `checkpoint_save_interval`.
- `accumulate_grad_batches` (int, default: 1) – Number of batches to accumulate gradients across before updating the model parameters.
- `gradient_clip_val` (float, default: None) – Value to clip the gradients. If set to None, no gradient clipping will be applied.
- `gradient_clip_algorithm` (str, default: 'norm') – Algorithm to use for gradient clipping. Available options: 'value', 'norm'. If set to 'value', the gradients will be clipped to the specified value. If set to 'norm', the gradients will be clipped to the specified norm.
- `save_optimizer_state` (bool, default: False) – Whether to save the optimizer and lr_scheduler state along with the model checkpoint.
- `save_full_model` (bool, default: False) – Whether to save the full model or only the trainable parameters in the model checkpoint.
- `save_ckpt_type` (str, default: 'lightning') – Type of checkpoint to save. Available options: 'lightning', 'hf'. If set to 'lightning', the checkpoint will be saved in the Lightning format. If set to 'hf', the checkpoint will be saved in the Hugging Face format.
- `ckpt_path` (str, default: None) – Path to the checkpoint to load before training. If set to None, no checkpoint will be loaded.
- `max_length` (int, default: 6144) – Maximum input length to consider. If the input length exceeds this value, it will be truncated.
- `fix_token_embedding` (bool, default: True) – Whether to fix the token embeddings during training. If set to True, the token embeddings will not be updated during training.
Source code in fusion_bench/method/lm_finetune/bradley_terry_rm.py
compute_loss(batch)¶
Maximize the likelihood of the winner over the loser using the Bradley-Terry model.
Parameters:

- `batch` (Dict[str, Union[Tensor, Any]]) – A dictionary containing the input token ids and attention masks for the winner and loser.
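The objective minimizes the negative log-likelihood of the preference under the Bradley-Terry model, i.e. `-log sigmoid(r_winner - r_loser)`. A minimal sketch of the loss (the reward tensors are assumed to be the scalar model outputs for each pair):

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(winner_rewards: torch.Tensor,
                       loser_rewards: torch.Tensor) -> torch.Tensor:
    # P(winner > loser) = sigmoid(r_w - r_l) under the Bradley-Terry model,
    # so the negative log-likelihood is -logsigmoid(r_w - r_l).
    return -F.logsigmoid(winner_rewards - loser_rewards).mean()
```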
Source code in fusion_bench/method/lm_finetune/bradley_terry_rm.py
LLM Fine-tuning with AdaMerging¶
LayerWiseAdaMergingForLlamaSFT¶
Bases: BaseAlgorithm, LightningFabricMixin, SimpleProfilerMixin
Source code in fusion_bench/method/adamerging/llama_adamerging.py
__init__(seed, output_dir, optimizer, lr, sparsity_ratio, average_attntion, start_layer_idx, init_values, init_weights_path, clamp_weights, normalized_merging_weights, max_steps, tie_weights, strict, dataloader_kwargs, skip_training=False, save_interval=None, save_merged_model=True, **kwargs)¶
Layer-wise AdaMerging algorithm for Llama models. Unlike the original AdaMerging algorithm, which uses test-time adaptation to optimize an entropy loss, this algorithm optimizes the cross-entropy loss.
Parameters:

- `seed` (int) – Random seed to set at the beginning of the run.
- `output_dir` (str) – Directory to save the merged model. If None, the log directory will be used.
- `optimizer` (str) – Optimizer to use for training.
- `lr` (float) – Learning rate for training.
- `sparsity_ratio` (Optional[float]) – Ratio of zero weights in the task vectors. If None, no sparsity is enforced.
- `average_attntion` (bool) – Whether to average attention weights.
- `start_layer_idx` (Optional[Union[float, int]]) – Index of the layer to start merging.
- `init_values` (float) – Initial value for the merging weights.
- `init_weights_path` (str) – Path to the initial merging weights.
- `clamp_weights` (bool) – Whether to clamp the merging weights.
- `normalized_merging_weights` (bool) – Whether to normalize the merging weights.
- `max_steps` (int) – Maximum number of training steps.
- `tie_weights` (bool) – Whether to tie the weights of the same layer.
- `strict` (bool) – Whether to enforce strict merging.
- `dataloader_kwargs` (DictConfig) – Keyword arguments for dataloaders.
- `skip_training` (bool, default: False) – Whether to skip training.
- `save_interval` (int, default: None) – Interval at which to save the merging weights. If None, no intermediate weights are saved. The weights are saved to `{output_dir}/checkpoints/merging-weights_{step_idx}.ckpt`.
- `save_merged_model` (bool, default: True) – Whether to save the merged model. This will save the model to `{output_dir}/checkpoints/merged_model`.
Source code in fusion_bench/method/adamerging/llama_adamerging.py
construct_layer_wise_merged_model(modelpool)¶
Constructs a wrapped layer-wise merged model from the model pool.
This method creates a new wrapped model by merging the layers of a pretrained model with those of several fine-tuned models. The merging is controlled by layer-wise weights, a torch.Tensor of shape (num_models, num_layers). The merging weights can be initialized from a provided configuration or loaded from a file.
Parameters:

- `modelpool` (ModelPool) – An object containing the pretrained model and fine-tuned models to be merged.
Returns:

- `LayerWiseMergedModel` – An instance of the merged model with layer-wise weights applied.
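Conceptually, with merging weights `w` of shape `(num_models, num_layers)`, each merged layer is the pretrained layer plus a weighted sum of task vectors. A sketch of the per-layer arithmetic (state-dict level, not the actual wrapper class):

```python
import torch

def merge_layer(pretrained: dict, finetuned: list, w: torch.Tensor, layer: int) -> dict:
    """Sketch: merged_l = pretrained_l + sum_i w[i, layer] * (finetuned_i_l - pretrained_l)."""
    merged = {}
    for name, p in pretrained.items():  # parameters of layer `layer`
        merged[name] = p + sum(
            w[i, layer] * (ft[name] - p) for i, ft in enumerate(finetuned)
        )
    return merged
```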
Source code in fusion_bench/method/adamerging/llama_adamerging.py
run(modelpool)¶
Run the algorithm.
Parameters:

- `modelpool` (CausalLMPool) – The pool of models to be merged.
Returns:

- The merged model.
Source code in fusion_bench/method/adamerging/llama_adamerging.py
save_state(step_idx, causal_lm)¶
Save the merging weights of each layer. This method must be called on all processes.
Parameters:

- `step_idx` (Union[int, str]) – Step index of the training.
- `causal_lm` (Module) – The model to save.