Last Update: July 18, 2024

Model Evaluations

The Model Evaluations feature lets users evaluate their models against a range of benchmarks spanning several categories. This page describes the available benchmarks, organized by category, to help users choose the most relevant tests for their model. Once training has completed, users can view their model's performance in Model Evaluations; loss during training can be monitored in Training Metrics.

To use Model Evaluations, simply select one or more benchmarks from the available categories. Each benchmark evaluates the model's performance on a specific task or set of tasks, providing valuable insight into its strengths and weaknesses.
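Conceptually, a benchmark evaluation runs the model over a set of test items and scores its outputs against reference answers. The minimal sketch below illustrates this idea only; `model_predict`, the sample items, and the accuracy metric are hypothetical stand-ins, not Tromero's API. Actual evaluations run on the platform once a benchmark is selected.

```python
# Conceptual sketch of a benchmark evaluation. "model_predict" and the
# sample items are hypothetical placeholders, not part of the Tromero API.

def model_predict(prompt: str) -> str:
    # Stand-in for a call to the trained model.
    return "4" if "2 + 2" in prompt else "unknown"

benchmark = [
    {"prompt": "What is 2 + 2?", "answer": "4"},
    {"prompt": "What is the capital of France?", "answer": "Paris"},
]

# Score each item: one point when the prediction matches the reference.
correct = sum(
    model_predict(item["prompt"]) == item["answer"] for item in benchmark
)
accuracy = correct / len(benchmark)
print(f"Accuracy: {accuracy:.0%}")  # e.g. "Accuracy: 50%"
```

Real benchmarks differ mainly in their items and metrics (accuracy, F1, exact match, and so on), but follow this same predict-then-score pattern.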

For further assistance, please contact support@tromero.ai; we would be happy to help!
