LlamaTestGenerationTaskPool¶
The LlamaTestGenerationTaskPool class is used to evaluate a language model on a set of prompts. It can also be used in an interactive mode for debugging purposes.
References¶
test_generation¶
LlamaTestGenerationTaskPool¶
Bases: BaseTaskPool
This task pool is used to evaluate a language model on a set of prompts. For the purpose of debugging, it can also be used in an interactive mode.
Source code in fusion_bench/taskpool/llama/test_generation.py
__init__(test_prompts, max_length=1024, temperature=0.01, top_p=0.9, iterative_mode=False, **kwargs)¶
Parameters:

- test_prompts (List[str]) – A list of prompts to be used for testing the model.
- max_length (int, default: 1024) – The maximum length of the generated text. Defaults to 1024.
- temperature (float, default: 0.01) – The sampling temperature for text generation. Defaults to 0.01.
- top_p (float, default: 0.9) – The cumulative probability for nucleus sampling. Defaults to 0.9.
- iterative_mode (bool, default: False) – If True, enables interactive mode for debugging. Defaults to False.
Source code in fusion_bench/taskpool/llama/test_generation.py
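The near-zero default temperature makes generation close to greedy decoding. As an illustrative sketch in plain Python (not the library's internals), temperature rescales the logits before the softmax, so lower values concentrate probability on the most likely token:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Rescale logits by temperature, then apply softmax.

    A lower temperature sharpens the distribution toward the
    argmax token; as temperature -> 0 this approaches greedy decoding.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
sharp = softmax_with_temperature(logits, 0.01)  # near one-hot
smooth = softmax_with_temperature(logits, 1.0)  # softer distribution
```

With the default temperature of 0.01, `sharp` puts essentially all probability mass on the highest-scoring token, which is why the task pool's outputs are nearly deterministic.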
_generate_text(model, tokenizer, prompt)¶
Generate text using the provided model and tokenizer for a given prompt.
This method generates text based on the given prompt using the specified model and tokenizer. It prints the prompt and the generated response, and returns a dictionary containing the prompt, response, wall time, number of characters, and number of tokens.
Parameters:

- model (LlamaForCausalLM) – The language model to be used for text generation.
- tokenizer (PreTrainedTokenizer) – The tokenizer to be used for encoding and decoding text.
- prompt (str) – The input prompt for text generation.
Returns:

- dict – A dictionary containing the following keys:
    - "prompt" (str): The input prompt.
    - "response" (str): The generated response.
    - "wall_time" (float): The time taken to generate the response.
    - "num_chars" (int): The number of characters in the generated response.
    - "num_tokens" (int): The number of tokens in the generated response.
Source code in fusion_bench/taskpool/llama/test_generation.py
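The shape of the returned dictionary can be sketched with a hypothetical stub generator standing in for the real model and tokenizer (the timing and bookkeeping below are illustrative, not the library's implementation):

```python
import time

def stub_generate(prompt):
    """Hypothetical stand-in for model + tokenizer generation."""
    response = prompt.upper()   # fake "generation"
    tokens = response.split()   # fake "tokenization"
    return response, tokens

def generate_with_stats(prompt):
    start = time.time()
    response, tokens = stub_generate(prompt)
    wall_time = time.time() - start
    # Same keys as the dictionary documented above.
    return {
        "prompt": prompt,
        "response": response,
        "wall_time": wall_time,
        "num_chars": len(response),
        "num_tokens": len(tokens),
    }

result = generate_with_stats("hello world")
```

Keeping wall time and token counts alongside each response makes it easy to compare throughput across merged models on the same prompt set.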
generate_text(model, tokenizer, prompt, max_length=1024, temperature=0.01, top_p=0.9, device=None)¶
Generate text using the loaded model.
Parameters:

- model (LlamaForCausalLM) – The loaded language model.
- tokenizer (PreTrainedTokenizer) – The loaded tokenizer.
- prompt (str) – Input prompt text.
- max_length (int, default: 1024) – Maximum length of the generated sequence.
- temperature (float, default: 0.01) – Controls randomness (higher = more random).
- top_p (float, default: 0.9) – Nucleus sampling parameter.
- device (default: None) – The device on which to run generation.
Returns:

- str – The generated text.
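The top_p parameter selects the smallest set of tokens whose cumulative probability reaches the threshold (nucleus sampling). A minimal sketch of that filtering step, written independently of the library:

```python
def top_p_filter(probs, top_p):
    """Return the indices of the smallest set of tokens whose
    cumulative probability is >= top_p, in descending order of
    probability. Sampling is then restricted (and renormalized)
    over this "nucleus"."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return kept

# With top_p=0.9, the two most likely tokens (0.5 + 0.3 = 0.8) are not
# enough, so a third token is included to reach the 0.9 threshold.
nucleus = top_p_filter([0.5, 0.3, 0.15, 0.05], top_p=0.9)
```

With the defaults (temperature=0.01, top_p=0.9), the temperature term dominates and generation is effectively greedy; raising the temperature makes the top_p cutoff the main control on output diversity.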