Parameters
Weave lets you tune LLM text generation through the parameters below:
stop
What it does: Sets custom stop sequences that end generation when encountered.
Values: Strings
top_k
What it does: Selects the next token randomly from the k tokens with the highest probabilities.
Higher values: Lead to greater variability
Lower values: Lead to lesser variability
top_p
What it does: Selects the next token randomly from the smallest set of tokens whose cumulative probability exceeds a specified value, p.
Higher values: Lead to greater variability
Lower values: Reduce diversity and focus on more probable tokens
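The stop-sequence, top-k, and top-p settings above are typically passed together in a generation request. The sketch below assembles them into a request payload; the function name and parameter keys are illustrative assumptions (providers vary), so check your API reference for the exact names.

```python
def build_sampling_params(stop=None, top_k=50, top_p=0.9):
    """Assemble sampling-related parameters for a generation request.

    Hypothetical helper: the key names mirror common LLM APIs but are
    not tied to any specific provider.
    """
    params = {
        "top_k": top_k,  # sample only from the k most probable tokens
        "top_p": top_p,  # nucleus sampling: smallest set with cumulative prob > p
    }
    if stop:
        # custom stop sequences end generation early when encountered
        params["stop"] = list(stop)
    return params

params = build_sampling_params(stop=["\n\n"], top_k=40, top_p=0.95)
```

Lower top_k or top_p narrows the candidate pool, which is why both reduce variability.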
truncate
What it does: Truncates the input tokens to the given size.
Higher values: Truncate less text
Lower values: Truncate more text
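Truncation can be sketched as keeping only as many input tokens as the limit allows. Whether a provider keeps the start or the end of the input varies; this assumed sketch keeps the most recent tokens.

```python
def truncate_input(tokens, max_input_tokens):
    """Keep only the most recent tokens when the input exceeds the limit.

    Illustrative only: some implementations keep the beginning instead.
    """
    if len(tokens) <= max_input_tokens:
        return tokens
    return tokens[-max_input_tokens:]
```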
logit_bias
What it does: Activates logit sampling and modifies the likelihood of specified tokens appearing in the completion.
Values: TRUE or FALSE
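Modifying token likelihoods amounts to adding per-token adjustments to the raw logits before sampling. A minimal sketch, assuming logits and biases are dictionaries keyed by token id:

```python
def apply_logit_bias(logits, bias):
    """Add per-token bias values to raw logits before sampling.

    `logits` and `bias` map token ids to floats. A positive bias makes
    a token more likely to be sampled; a negative bias makes it less so.
    """
    return {tok: score + bias.get(tok, 0.0) for tok, score in logits.items()}

adjusted = apply_logit_bias({1: 0.5, 2: -1.0}, {2: 3.0})
```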
typical_p
What it does: Makes the responses more typical or expected.
Higher values: Less unusual output
Lower values: More unusual output
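One way this kind of "typicality" filtering is commonly implemented is locally typical sampling: keep the tokens whose surprisal is closest to the distribution's entropy until their cumulative probability reaches p. The sketch below is an assumed illustration of that idea, not necessarily the exact algorithm Weave uses.

```python
import math

def typical_filter(probs, typical_p=0.9):
    """Return indices of tokens kept by a typicality filter.

    Tokens are ranked by how close their surprisal (-log p) is to the
    distribution's entropy, then accumulated until typical_p is reached.
    """
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    ranked = sorted(range(len(probs)),
                    key=lambda i: abs(-math.log(probs[i]) - entropy))
    kept, cumulative = [], 0.0
    for i in ranked:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= typical_p:
            break
    return kept
```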
watermark
What it does: Adds a watermark so that generated text/images can be identified as AI-generated. Useful to prevent overtraining LLMs on generated data.
Values: TRUE or FALSE
max_tokens
What it does: The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.
Higher values: More characters in the output
Lower values: Fewer characters in the output
temperature
What it does: Controls the randomness of the output.
Higher values: More random output
Lower values: More focused output
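Temperature works by dividing the logits before the softmax: values above 1 flatten the distribution (more random), values below 1 sharpen it (more focused). A minimal sketch:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert logits to probabilities, scaled by temperature.

    Higher temperature flattens the distribution; lower sharpens it.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

For example, with logits `[2.0, 1.0, 0.0]`, the top token's probability is noticeably higher at temperature 0.5 than at temperature 2.0.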
return_full_text
What it does: Whether to prepend the prompt to the generated text.
Values: TRUE or FALSE
frequency_penalty
What it does: Penalizes tokens based on how often they have already appeared in the text so far.
Higher values: Decrease the model's likelihood to repeat the same line verbatim
Lower values: Increase the model's likelihood to repeat the same line verbatim
presence_penalty
What it does: Penalizes tokens that have appeared in the text at all, regardless of how often.
Higher values: Increase the model's likelihood to talk about new topics
Lower values: Decrease the model's likelihood to talk about new topics
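The two penalties above can be sketched with the formula described in the OpenAI API documentation: the frequency penalty scales with how many times a token has appeared, while the presence penalty is a flat charge applied once a token has appeared at all.

```python
def penalize_logits(logits, counts, frequency_penalty=0.0, presence_penalty=0.0):
    """Apply frequency and presence penalties to logits.

    `counts` maps token id -> number of times the token has appeared in
    the text so far. The frequency penalty is multiplied by the count;
    the presence penalty is subtracted once for any seen token.
    """
    adjusted = {}
    for tok, score in logits.items():
        c = counts.get(tok, 0)
        adjusted[tok] = (score
                         - c * frequency_penalty
                         - (1.0 if c > 0 else 0.0) * presence_penalty)
    return adjusted
```

A token repeated three times with `frequency_penalty=0.1` and `presence_penalty=0.5` loses 0.8 from its logit, while unseen tokens are untouched.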