"num_train_epochs": trial.suggest_int("num_train_epochs", 1, 5), "learning_rate": trial.suggest_float("learning_rate", 1e-4, 1e-2, log=True), Here is an example if you want to search higher learning rates than the default with optuna: def my_hp_space(trial): To customize the hyperparameter search space, you can pass a function hp_space to this call. Note that you need an installation from source of nlp to make the example work. If you have both, it will use optuna by default, but you can pass backend="ray" to use Ray Tune. This will use optuna or Ray Tune, depending on which you have installed. ![]() Trainer.hyperparameter_search(direction="maximize") # Defaut objective is the sum of all metrics when metrics are provided, so we have to maximize it. Training_args = TrainingArguments("test", evaluate_during_training=True, eval_steps=500, disable_tqdm=True)ĭata_collator=DataCollatorWithPadding(tokenizer),Įval_dataset=encoded_dataset, # Disabling tqdm is a matter of preference. # Evaluate during training and a bit more often than the default to be able to prune bad trials early. Return pute(predictions=predictions, references=labels) Predictions = predictions.argmax(axis=-1) ![]() Return om_pretrained('bert-base-cased', return_dict=True) # Won't be necessary when this PR is merged with master since the Trainer will do it automaticallyĮncoded_t_format(columns=) Outputs = tokenizer(examples, examples, truncation=True)Įncoded_dataset = dataset.map(encode, batched=True) Tokenizer = om_pretrained('bert-base-cased') ![]() Here is an example of use: from nlp import load_dataset, load_metricįrom transformers import AutoModelForSequenceClassification, AutoTokenizer, DataCollatorWithPadding, Trainer, TrainingArguments