Make it possible for private AI models to get tested. #10
---
Here are some additional considerations for evaluating private models; several of them probably apply more generally as well. Enabling the evaluation of private AI models within a decentralized scoring system introduces a few additional challenges that we might want to address:

- Confidentiality & IP Protection
- Validation Protocol & Trust
- Data Privacy & Compliance
- Auditability for Dispute Resolution (plus it would probably be a generally good practice; see the sketch after this comment)
- Technical Integration & Infrastructure
- Scalability & Cost
- Benchmark Evolution & Fairness
- Community & Governance

These are my initial thoughts. If anyone else has additional ideas, please add.
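On the auditability point: one lightweight way to support dispute resolution is for each validator to keep an append-only, hash-chained log of its evaluation results, so a disputed score can be checked against a tamper-evident record. Below is a minimal sketch of that idea; the record fields (`model_id`, `score`) and function names are illustrative assumptions only, not part of any proposal in this thread:

```python
import hashlib
import json
import time


def _entry_hash(record, prev_hash, timestamp):
    """Hash a canonical JSON encoding of an entry's contents."""
    payload = json.dumps(
        {"record": record, "prev_hash": prev_hash, "timestamp": timestamp},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()


def append_record(log, record):
    """Append an evaluation record; each entry commits to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    timestamp = time.time()
    entry = {
        "record": record,
        "prev_hash": prev_hash,
        "timestamp": timestamp,
        "hash": _entry_hash(record, prev_hash, timestamp),
    }
    log.append(entry)
    return entry


def verify_chain(log):
    """Replay the chain; any tampered entry breaks the hash links."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        if _entry_hash(entry["record"], entry["prev_hash"], entry["timestamp"]) != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True


# Hypothetical usage: a validator logs two scores for a private model.
log = []
append_record(log, {"model_id": "private-model-1", "score": 0.87})
append_record(log, {"model_id": "private-model-1", "score": 0.91})
assert verify_chain(log)
```

Because later entries commit to earlier hashes, a validator cannot quietly rewrite a past score during a dispute without invalidating everything logged after it.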
---
Here is an idea that I think takes both Ruben's and Jason's concerns into consideration; please let me know your thoughts: encrypted storage would prevent exposure of sensitive prompts and responses, and it accounts for data privacy, compliance, auditability, and IP protection. I have a few other ideas, such as distributing private API credentials to validators, but please provide feedback on this one first. Thank you!
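As a rough illustration of the encrypted-storage idea, here is a minimal sketch in Python using the `cryptography` package's Fernet recipe. Key management is the hard part and is deliberately left out; the key handling, function names, and record fields below are all illustrative assumptions, not a concrete design:

```python
import json

from cryptography.fernet import Fernet  # pip install cryptography

# In practice the key would live in a KMS or be held only by authorized
# validators; generating it inline here is purely for illustration.
key = Fernet.generate_key()
box = Fernet(key)


def store_interaction(prompt: str, response: str) -> bytes:
    """Encrypt a prompt/response pair before writing it to shared storage."""
    record = json.dumps({"prompt": prompt, "response": response}).encode()
    return box.encrypt(record)  # the ciphertext is safe to persist or replicate


def load_interaction(ciphertext: bytes) -> dict:
    """Decrypt a stored record for an authorized validator or auditor."""
    return json.loads(box.decrypt(ciphertext))


token = store_interaction("example prompt", "example response")
assert load_interaction(token) == {"prompt": "example prompt", "response": "example response"}
```

The point of the design is simply that only ciphertext ever touches shared storage, so replicating evaluation records across validators does not by itself expose the underlying prompts or responses.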
---
There are many AI companies with significant revenue whose served AI models cannot be openly purchased by the public but must first go through a B2B sales and KYB (know-your-business) process.
Allowing this would require