Releases: BerriAI/litellm
v1.56.8-dev1
Full Changelog: v1.56.8...v1.56.8-dev1
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.8-dev1
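Once the container is up, the proxy exposes OpenAI-compatible routes such as `/chat/completions` (the endpoint exercised in the load tests below). A minimal sketch using the OpenAI Python SDK; the model name and the `sk-1234` key are placeholders for whatever is configured on your proxy:

```python
# Minimal sketch: call the LiteLLM proxy's OpenAI-compatible /chat/completions
# route (the same route exercised in the load test results below).
# Assumptions: proxy running on localhost:4000, a model named "gpt-4o" is
# configured on the proxy, and "sk-1234" is a placeholder virtual key.
from openai import OpenAI

client = OpenAI(api_key="sk-1234", base_url="http://localhost:4000")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello from the proxy"}],
)
print(response.choices[0].message.content)
```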
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 250.0 | 284.69056873304487 | 6.157751312397796 | 0.0 | 1843 | 0 | 211.56842700003153 | 2410.6343400000014 |
Aggregated | Passed ✅ | 250.0 | 284.69056873304487 | 6.157751312397796 | 0.0 | 1843 | 0 | 211.56842700003153 | 2410.6343400000014 |
v1.56.8
What's Changed
- Prometheus - custom metrics support + other improvements by @krrishdholakia in #7489
- (feat) POST `/fine_tuning/jobs` support passing vertex specific hyper params by @ishaan-jaff in #7490 (see the sketch after this list)
- (Feat) - LiteLLM Use `UsernamePasswordCredential` for Azure OpenAI by @ishaan-jaff in #7496
- (docs) Add docs on load testing benchmarks by @ishaan-jaff in #7499
- (Feat) Add support for reading secrets from Hashicorp vault by @ishaan-jaff in #7497
- Litellm dev 12 30 2024 p2 by @krrishdholakia in #7495
- Refactor Custom Metrics on Prometheus - allow setting k,v pairs on all metrics via config.yaml by @krrishdholakia in #7498
- (fix) GCS bucket logger - apply `truncate_standard_logging_payload_content` to `standard_logging_payload` and ensure GCS flushes queue on fails by @ishaan-jaff in #7500
- Litellm dev 01 01 2025 p3 by @krrishdholakia in #7503
- Litellm dev 01 02 2025 p2 by @krrishdholakia in #7512
- Revert "(fix) GCS bucket logger - apply
truncate_standard_logging_payload_content
tostandard_logging_payload
and ensure GCS flushes queue on fails" by @ishaan-jaff in #7515 - (perf) use
aiohttp
forcustom_openai
by @ishaan-jaff in #7514 - (perf) use threadpool executor - for sync logging integrations by @ishaan-jaff in #7509
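The `POST /fine_tuning/jobs` change above (#7490) is reached through the proxy's OpenAI-compatible fine-tuning route. A minimal sketch with the OpenAI Python SDK; the base URL, key, model alias, and file ID are placeholders, and the Vertex-specific hyperparameter names added in #7490 are not spelled out here since they are provider-specific:

```python
# Minimal sketch: create a fine-tuning job through the proxy's
# OpenAI-compatible POST /fine_tuning/jobs route (see #7490 above).
# Assumptions: proxy on localhost:4000, placeholder key, placeholder
# training file ID, and a model alias configured on the proxy.
from openai import OpenAI

client = OpenAI(api_key="sk-1234", base_url="http://localhost:4000")

job = client.fine_tuning.jobs.create(
    model="my-vertex-finetune-model",   # placeholder model alias
    training_file="file-abc123",        # placeholder uploaded file ID
    hyperparameters={"n_epochs": 2},    # standard OpenAI-format field
)
print(job.id, job.status)
```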
Full Changelog: v1.56.6...v1.56.8
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.8
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 230.0 | 247.81903455189286 | 6.181081075067931 | 0.0 | 1850 | 0 | 191.81740900000932 | 2126.8676100000903 |
Aggregated | Passed ✅ | 230.0 | 247.81903455189286 | 6.181081075067931 | 0.0 | 1850 | 0 | 191.81740900000932 | 2126.8676100000903 |
v1.56.6.dev1
What's Changed
- Prometheus - custom metrics support + other improvements by @krrishdholakia in #7489
- (feat) POST `/fine_tuning/jobs` support passing vertex specific hyper params by @ishaan-jaff in #7490
- (Feat) - LiteLLM Use `UsernamePasswordCredential` for Azure OpenAI by @ishaan-jaff in #7496
- (docs) Add docs on load testing benchmarks by @ishaan-jaff in #7499
- (Feat) Add support for reading secrets from Hashicorp vault by @ishaan-jaff in #7497
- Litellm dev 12 30 2024 p2 by @krrishdholakia in #7495
- Refactor Custom Metrics on Prometheus - allow setting k,v pairs on all metrics via config.yaml by @krrishdholakia in #7498
- (fix) GCS bucket logger - apply `truncate_standard_logging_payload_content` to `standard_logging_payload` and ensure GCS flushes queue on fails by @ishaan-jaff in #7500
- Litellm dev 01 01 2025 p3 by @krrishdholakia in #7503
Full Changelog: v1.56.6...v1.56.6.dev1
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.6.dev1
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 230.0 | 255.89973974836954 | 6.151774848433542 | 0.003343355895887794 | 1840 | 1 | 94.9865199999067 | 1259.9916519999965 |
Aggregated | Passed ✅ | 230.0 | 255.89973974836954 | 6.151774848433542 | 0.003343355895887794 | 1840 | 1 | 94.9865199999067 | 1259.9916519999965 |
v1.56.6
What's Changed
- (fix) `v1/fine_tuning/jobs` with VertexAI by @ishaan-jaff in #7487
- (docs) Add docs on using Vertex with Fine Tuning APIs by @ishaan-jaff in #7491
- Fix team-based logging to langfuse + allow custom tokenizer on `/token_counter` endpoint by @krrishdholakia in #7493
- Fix team admin create key flow on UI + other improvements by @krrishdholakia in #7488
- docs: added missing quote by @dsdanielko in #7481
- fix ollama embedding model response #7451 by @svenseeberg in #7473
- (Feat) - Add PagerDuty Alerting Integration by @ishaan-jaff in #7478
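The Ollama embedding fix above (#7473) touches the `litellm.embedding` call path. A minimal sketch of that call, assuming a local Ollama server; the model name is a placeholder for one you have pulled:

```python
# Minimal sketch: embeddings against a local Ollama model via litellm
# (the call path fixed in #7473). Assumes Ollama is running locally and
# "nomic-embed-text" is a placeholder for a model you have pulled.
import litellm

response = litellm.embedding(
    model="ollama/nomic-embed-text",
    input=["hello world"],
)
print(response.data[0]["embedding"][:3])  # first few dimensions of the vector
```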
New Contributors
- @dsdanielko made their first contribution in #7481
- @svenseeberg made their first contribution in #7473
Full Changelog: v1.56.5...v1.56.6
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.6
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 250.0 | 287.411814751915 | 6.114731230663012 | 0.0 | 1830 | 0 | 228.32058200003758 | 3272.637599999939 |
Aggregated | Passed ✅ | 250.0 | 287.411814751915 | 6.114731230663012 | 0.0 | 1830 | 0 | 228.32058200003758 | 3272.637599999939 |
v1.56.5
What's Changed
- Refactor: move all bedrock invoke providers to BaseConfig by @krrishdholakia in #7463
- (fix) `litellm.amoderation` - support using `model=openai/omni-moderation-latest`, `model=omni-moderation-latest`, `model=None` by @ishaan-jaff in #7475 (see the sketch after this list)
- [Bug Fix]: rerank restfulapi response parse still too strict by @ishaan-jaff in #7476
- Litellm dev 12 30 2024 p1 by @krrishdholakia in #7480
- HumanLoop integration for Prompt Management by @krrishdholakia in #7479
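The `litellm.amoderation` item above (#7475) names the accepted model values directly. A minimal async sketch, assuming `OPENAI_API_KEY` is set in the environment and that the response mirrors the OpenAI moderation format:

```python
# Minimal sketch: litellm.amoderation with one of the model values listed
# in #7475 ("openai/omni-moderation-latest", "omni-moderation-latest", or None).
# Assumes OPENAI_API_KEY is set in the environment.
import asyncio
import litellm

async def main():
    result = await litellm.amoderation(
        model="openai/omni-moderation-latest",
        input="I want to say something nice.",
    )
    # Assumes the OpenAI moderation response shape
    print(result.results[0].flagged)

asyncio.run(main())
```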
Full Changelog: v1.56.4...v1.56.5
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.5
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 230.0 | 268.0630784626629 | 6.174316845767241 | 0.0 | 1848 | 0 | 212.08500100010497 | 3189.481879000027 |
Aggregated | Passed ✅ | 230.0 | 268.0630784626629 | 6.174316845767241 | 0.0 | 1848 | 0 | 212.08500100010497 | 3189.481879000027 |
v1.56.4
What's Changed
- Update model_prices_and_context_window.json by @superpoussin22 in #7452
- (Refactor) 🧹 - remove deprecated litellm server by @ishaan-jaff in #7456
- 📖 Docs - Using LiteLLM with 1M rows in spend logs by @ishaan-jaff in #7461
- (Admin UI - 1) - added the model used either directly before or after the "Assistant" so that it's clear which model provided the given assistant output by @ishaan-jaff in #7459
- (Admin UI - 2) UI chat should render the output in markdown by @ishaan-jaff in #7460
- (Security fix) - Upgrade to `fastapi==0.115.5` by @ishaan-jaff in #7447
- fix OR deepseek by @paul-gauthier in #7425
- (Bug Fix) Add health check support for realtime models by @ishaan-jaff in #7453
- (Refactor) - Re use litellm.completion/litellm.embedding etc for health checks by @ishaan-jaff in #7455
- Litellm dev 12 28 2024 p3 by @krrishdholakia in #7464
- Fireworks AI - document inlining support + model access groups for wildcard models by @krrishdholakia in #7458
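The health-check refactor above (#7455) routes health checks through the same `litellm.completion` / `litellm.embedding` calls used for normal traffic. A minimal sketch of the underlying completion call, assuming `OPENAI_API_KEY` is set; the model is a placeholder:

```python
# Minimal sketch: the plain litellm.completion call that health checks now
# reuse per #7455. Assumes OPENAI_API_KEY is set; the model is a placeholder.
import litellm

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=5,
)
print(response.choices[0].message.content)
```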
Full Changelog: v1.56.3...v1.56.4
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.4
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 240.0 | 268.74238744669225 | 6.116896356155644 | 0.0 | 1829 | 0 | 214.29422199992132 | 1969.7571099999323 |
Aggregated | Passed ✅ | 240.0 | 268.74238744669225 | 6.116896356155644 | 0.0 | 1829 | 0 | 214.29422199992132 | 1969.7571099999323 |
v1.56.3
What's Changed
- Update Documentation - Gemini Embedding by @igorlima in #7436
- (Bug fix) missing `model_group` field in logs for `aspeech` call types by @ishaan-jaff in #7392
- (Feat) - new endpoint `GET /v1/fine_tuning/jobs/{fine_tuning_job_id:path}` by @ishaan-jaff in #7427 (see the sketch after this list)
- Update model_prices_and_context_window.json by @superpoussin22 in #7345
- LiteLLM Minor Fixes & Improvements (12/27/2024) - p1 by @krrishdholakia in #7448
- Litellm dev 12 27 2024 p2 1 by @krrishdholakia in #7449
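The new `GET /v1/fine_tuning/jobs/{fine_tuning_job_id:path}` endpoint above (#7427) follows the OpenAI fine-tuning retrieve shape. A minimal sketch with the OpenAI Python SDK; base URL, key, and job ID are placeholders:

```python
# Minimal sketch: retrieve a fine-tuning job through the proxy's new
# GET /v1/fine_tuning/jobs/{fine_tuning_job_id} route (#7427).
# Assumptions: proxy on localhost:4000, placeholder key and job ID.
from openai import OpenAI

client = OpenAI(api_key="sk-1234", base_url="http://localhost:4000")

job = client.fine_tuning.jobs.retrieve("ftjob-abc123")  # placeholder job ID
print(job.status)
```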
New Contributors
- @igorlima made their first contribution in #7436
Full Changelog: v1.56.2...v1.56.3
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.3
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 250.0 | 276.9724297749999 | 6.148940938190872 | 0.003341815727277648 | 1840 | 1 | 112.37049800001842 | 1700.1428350000083 |
Aggregated | Passed ✅ | 250.0 | 276.9724297749999 | 6.148940938190872 | 0.003341815727277648 | 1840 | 1 | 112.37049800001842 | 1700.1428350000083 |
v1.56.2
What's Changed
- Litellm dev 12 24 2024 p2 by @krrishdholakia in #7400
- (feat) Support Dynamic Params for `guardrails` by @ishaan-jaff in #7415
- docs: cleanup docker compose comments by @marcoscannabrava in #7414
- (Security fix) UI - update `next` version by @ishaan-jaff in #7418
- (security fix) - fix docs snyk vulnerability by @ishaan-jaff in #7419
- LiteLLM Minor Fixes & Improvements (12/25/2024) - p1 by @krrishdholakia in #7411
- LiteLLM Minor Fixes & Improvements (12/25/2024) - p2 by @krrishdholakia in #7420
- Ensure 'disable_end_user_cost_tracking_prometheus_only' works for new prometheus metrics by @krrishdholakia in #7421
- (security fix) - bump fast api, fastapi-sso, python-multipart - fix snyk vulnerabilities by @ishaan-jaff in #7417
- docs - batches cost tracking by @ishaan-jaff in #7422
- Add `/openai` pass through route on litellm proxy by @ishaan-jaff in #7412
- (Feat) Add logging for `POST v1/fine_tuning/jobs` by @ishaan-jaff in #7426
- (docs) - show all supported Azure OpenAI endpoints in overview by @ishaan-jaff in #7428
- (docs) - custom guardrail show how to use dynamic guardrail params by @ishaan-jaff in #7430
- Support budget/rate limit tiers for keys by @krrishdholakia in #7429
- (fix) initializing OTEL Logging on LiteLLM Proxy - ensure OTEL logger is initialized only once by @ishaan-jaff in #7435
- Litellm dev 12 26 2024 p3 by @krrishdholakia in #7434
- fix(key_management_endpoints.py): enforce user_id / team_id checks on key generate by @krrishdholakia in #7437
- LiteLLM Minor Fixes & Improvements (12/26/2024) - p4 by @krrishdholakia in #7439
- Refresh VoyageAI models, prices and context by @fzowl in #7443
- Revert "Refresh VoyageAI models, prices and context" by @krrishdholakia in #7446
- (feat) `/guardrails/list` show guardrail info params by @ishaan-jaff in #7442 (see the sketch after this list)
- add openrouter o1 by @paul-gauthier in #7424
- ✨ (Feat) Log Guardrails run, guardrail response on logging integrations by @ishaan-jaff in #7445
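The `/guardrails/list` item above (#7442) is a plain authenticated GET on the proxy. A minimal sketch with `requests`; the URL and key are placeholders, and the exact response fields (the guardrail info params added in #7442) are not spelled out here:

```python
# Minimal sketch: list configured guardrails via GET /guardrails/list (#7442).
# Assumptions: proxy on localhost:4000 and a placeholder virtual key.
import requests

resp = requests.get(
    "http://localhost:4000/guardrails/list",
    headers={"Authorization": "Bearer sk-1234"},  # placeholder key
)
resp.raise_for_status()
print(resp.json())  # per-guardrail info params per #7442
```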
New Contributors
- @marcoscannabrava made their first contribution in #7414
- @fzowl made their first contribution in #7443
Full Changelog: v1.55.12...v1.56.2
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.2
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 250.0 | 275.3240164096845 | 6.143891773397197 | 0.0 | 1838 | 0 | 224.26387399997338 | 1437.5524760000076 |
Aggregated | Passed ✅ | 250.0 | 275.3240164096845 | 6.143891773397197 | 0.0 | 1838 | 0 | 224.26387399997338 | 1437.5524760000076 |
v1.55.12
What's Changed
- Add 'end_user', 'user' and 'requested_model' on more prometheus metrics by @krrishdholakia in #7399
- (feat) `/batches` Add support for using `/batches` endpoints in OAI format by @ishaan-jaff in #7402 (see the sketch after this list)
- (feat) `/batches` - track `user_api_key_alias`, `user_api_key_team_alias` etc for /batch requests by @ishaan-jaff in #7401
- Litellm dev 12 24 2024 p3 by @krrishdholakia in #7403
- (Feat) add `"/v1/batches/{batch_id:path}/cancel" endpoint by @ishaan-jaff in #7406
- Litellm dev 12 24 2024 p4 by @krrishdholakia in #7407
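The `/batches` items above (#7402, #7406) expose the OpenAI batches shape on the proxy, including the new cancel route. A minimal sketch with the OpenAI Python SDK; base URL, key, and input file ID are placeholders:

```python
# Minimal sketch: OpenAI-format /batches against the LiteLLM proxy (#7402),
# plus the new cancel route (#7406). Placeholder key, URL, and input file ID.
from openai import OpenAI

client = OpenAI(api_key="sk-1234", base_url="http://localhost:4000")

batch = client.batches.create(
    input_file_id="file-abc123",      # placeholder uploaded .jsonl file
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
print(batch.id, batch.status)

# Cancel it via POST /v1/batches/{batch_id}/cancel (#7406)
cancelled = client.batches.cancel(batch.id)
print(cancelled.status)
```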
Full Changelog: v1.55.11...v1.55.12
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.12
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 220.0 | 241.51418849604215 | 6.334659319234715 | 0.0 | 1895 | 0 | 191.11329300005764 | 3854.987871999924 |
Aggregated | Passed ✅ | 220.0 | 241.51418849604215 | 6.334659319234715 | 0.0 | 1895 | 0 | 191.11329300005764 | 3854.987871999924 |
v1.55.11
What's Changed
- LiteLLM Minor Fixes & Improvements (12/23/2024) - p3 by @krrishdholakia in #7394
Full Changelog: v1.55.10...v1.55.11
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.11
Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 250.0 | 290.3865391657403 | 6.034920682874279 | 0.0 | 1804 | 0 | 229.06071099987457 | 2909.605226000167 |
Aggregated | Passed ✅ | 250.0 | 290.3865391657403 | 6.034920682874279 | 0.0 | 1804 | 0 | 229.06071099987457 | 2909.605226000167 |