Conversation
Noticed that an evaluation with different params but the same name resolves to the same ID; this should error.
Yup, currently the params just get overwritten, so we never end up with two evaluations that have the same name but different IDs.
I have a local branch of platform that raises an error when evaluations have duplicates, but I'm thinking we should be consistent across all the other services too. Currently the corpus uploader https://github.com/arcee-ai/arcee-platform/blob/eaec257eca5e1061813babd70006983200b7d57e/backend/app/api/v2/services/corpus.py#L171 has the same update-with-new-params logic. Pretraining https://github.com/arcee-ai/arcee-platform/blob/eaec257eca5e1061813babd70006983200b7d57e/backend/app/api/v2/services/pretraining.py#L65, deployment, etc. seem to either assume the existing params have not changed, or look up each field in supabase separately and throw. Any thoughts on the best consistent way to deal with repeated start_x calls @mryave @nason ?
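One consistent option across services would be a get-or-create that treats a repeated `start_x` call as idempotent when the params match and errors when they don't. A minimal sketch of that behavior, with an in-memory dict standing in for the supabase table and `DuplicateNameError` as a hypothetical exception name (none of this is the actual platform code):

```python
# Sketch: get-or-create that refuses to silently overwrite params.
# The registry, return shape, and exception name are illustrative assumptions.

class DuplicateNameError(ValueError):
    pass

_registry: dict = {}  # stand-in for the supabase table


def start_evaluation(name: str, params: dict) -> dict:
    existing = _registry.get(name)
    if existing is None:
        # First call with this name: create the record.
        _registry[name] = params
        return {"name": name, "params": params, "created": True}
    if existing != params:
        # Same name, different params: raise instead of overwriting.
        raise DuplicateNameError(
            f"evaluation {name!r} already exists with different params"
        )
    # Repeated call with identical params: idempotent no-op.
    return {"name": name, "params": existing, "created": False}
```

The same shape would apply to corpus, pretraining, and deployment: comparing the full param dict in one place avoids the per-field supabase lookups mentioned above.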
This PR introduces support for evaluations in the Arcee Python SDK.
Added
`start_evaluation` function to `arcee/api.py`. Usage example for testing:
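A rough sketch of how such a function might be shaped and exercised in a test, with the HTTP call injectable so it can be stubbed. The route, payload fields, and return shape are assumptions, not the actual `arcee/api.py` implementation:

```python
# Illustrative sketch only; the real start_evaluation signature may differ.
def start_evaluation(name: str, params: dict, post=None) -> dict:
    """Kick off an evaluation. `post` is injectable so tests can stub HTTP."""
    payload = {"name": name, **params}
    if post is None:
        raise RuntimeError("no HTTP client configured in this sketch")
    return post("/v2/evaluation", payload)


# Usage with a stubbed client, as one might do for testing:
def fake_post(route, payload):
    return {"route": route, "status": "created", "name": payload["name"]}


resp = start_evaluation("demo-eval", {"model": "example"}, post=fake_post)
```

Injecting the client keeps the SDK function testable without hitting the platform.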