With Evals, OpenAI hopes to crowdsource AI model testing

22:01 14.03.2023
Alongside GPT-4, OpenAI has open-sourced Evals, a framework for evaluating the performance of the company’s AI models. The tooling is designed to let anyone report shortcomings in OpenAI’s models and help guide further improvements, the company says, describing it as a sort of crowdsourced approach to model testing. “We use Evals to guide […]
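In practice, contributing to this kind of crowdsourced testing means supplying test cases the framework can score automatically. The snippet below is a minimal sketch, assuming the JSON Lines sample format used by basic match-style evals in the open-source repository (a chat-formatted prompt plus an ideal answer); the file name and example questions are hypothetical, not taken from OpenAI's own evals.

```python
import json

# Hypothetical samples for a simple match-style eval: each record pairs a
# chat-formatted prompt with the answer the model is expected to produce.
samples = [
    {
        "input": [
            {"role": "system", "content": "Answer with a single word."},
            {"role": "user", "content": "What is the capital of France?"},
        ],
        "ideal": "Paris",
    },
    {
        "input": [
            {"role": "system", "content": "Answer with a single word."},
            {"role": "user", "content": "What is the chemical symbol for gold?"},
        ],
        "ideal": "Au",
    },
]

# Write the samples as JSON Lines, one record per line, which is the format
# the evaluation harness reads when running an eval against a model.
with open("my_eval_samples.jsonl", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")
```

A contributed eval like this lets the framework compare a model's output against the ideal answer and report where the model falls short.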