Fair Usage Policy
To ensure reliable access and performance for all users, AICamp implements a fair usage policy across its supported AI models.
Why This Policy Exists
Some models are more expensive and in higher demand than others. To avoid service degradation and ensure that no single user overloads system capacity, we apply usage limits per model category.
This helps:
- Maintain platform speed and reliability
- Prevent misuse or accidental overuse
- Ensure fair access across teams and time zones
Usage Tiers
AICamp groups models into three categories based on their performance, cost, and demand. Each category has its own usage limit:
| Category | Limit | Models |
| --- | --- | --- |
| Category 1 | Unlimited (spam-protected) | GPT-3.5, GPT-4o mini, Claude 3 Haiku, Gemini 1.5 Flash |
| Category 2 | 200 messages / 3 hours | GPT-4o, Claude 3.5 Sonnet, OpenAI o1-mini, Mistral models, Llama models |
| Category 3 | 100 messages / 3 hours | GPT-4 Turbo, Claude 3 Opus, Gemini 1.5 Pro, OpenAI o1-preview |
Note: These limits apply per user, not per workspace.
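If you are building internal tooling around these tiers, the table above maps naturally to a small lookup. The sketch below is illustrative only: the names (`USAGE_TIERS`, `limit_for`) are hypothetical, it is not an AICamp API, and the values simply restate the table.

```python
# Illustrative lookup of the usage tiers above. Hypothetical names and
# structure; not an AICamp API. A limit of None means "unlimited (spam-protected)".
USAGE_TIERS = {
    "Category 1": {"limit": None, "window_hours": None,
                   "models": ["GPT-3.5", "GPT-4o mini", "Claude 3 Haiku", "Gemini 1.5 Flash"]},
    "Category 2": {"limit": 200, "window_hours": 3,
                   "models": ["GPT-4o", "Claude 3.5 Sonnet", "OpenAI o1-mini",
                              "Mistral models", "Llama models"]},
    "Category 3": {"limit": 100, "window_hours": 3,
                   "models": ["GPT-4 Turbo", "Claude 3 Opus", "Gemini 1.5 Pro", "OpenAI o1-preview"]},
}

def limit_for(model: str):
    """Return (limit, window_hours) for a model; (None, None) means unlimited."""
    for tier in USAGE_TIERS.values():
        if model in tier["models"]:
            return tier["limit"], tier["window_hours"]
    return None, None  # unknown model: treated as unlimited in this sketch
```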
What Happens If You Hit a Limit?
- You’ll receive a notification inside AICamp when a limit is reached.
- You can continue working by switching to another model.
- Limits reset automatically once the 3-hour window ends (see the sketch after this list).
- These limits affect fewer than 1% of users.
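For intuition on how a per-user limit behaves over a 3-hour window, here is a minimal sketch of a rolling-window message counter. It assumes a rolling (rather than fixed) window and uses hypothetical names such as `MessageWindow` and `try_send`; it is not AICamp's actual rate limiter.

```python
import time
from collections import deque

WINDOW_SECONDS = 3 * 60 * 60  # 3-hour rolling window

class MessageWindow:
    """Illustrative rolling-window counter; not AICamp's actual implementation."""

    def __init__(self, limit: int):
        self.limit = limit   # e.g. 200 for Category 2, 100 for Category 3
        self.sent = deque()  # timestamps of messages inside the current window

    def _prune(self, now: float):
        # Drop timestamps older than the window; this is what makes the
        # limit "reset automatically" as time passes.
        while self.sent and now - self.sent[0] >= WINDOW_SECONDS:
            self.sent.popleft()

    def try_send(self) -> bool:
        now = time.time()
        self._prune(now)
        if len(self.sent) >= self.limit:
            return False     # limit reached: switch models or wait
        self.sent.append(now)
        return True

# Usage: a Category 3 model allows 100 messages per 3 hours.
window = MessageWindow(limit=100)
if not window.try_send():
    print("Limit reached; switch to another model or wait for the window to clear.")
```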
Abuse Protection
While Category 1 models carry no message cap, they are still protected against spamming, scraping, and automated abuse.
If you believe you’ve hit a limit in error or need extended usage for a high-load project, please contact support@aicamp.so.