v3.8.0 #462
3.8.0 (2025-05-17)

Features
- LLamaChatPromptOptions["onFunctionCallParamsChunk"]
- ResolveModelFileOptions["endpoints"]
- QwenChatWrapper: support discouraging the generation of thoughts (#460) (f2cb873) (documentation: API: QwenChatWrapper constructor > thoughts option)
- getLlama: dryRun option (#460) (f2cb873) (documentation: API: LlamaOptions["dryRun"])
- getLlamaGpuTypes function (#460) (f2cb873) (documentation: API: getLlamaGpuTypes)

Bug Fixes
- llama.cpp changes (#460) (f2cb873)

Shipped with llama.cpp release b5414
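As a rough illustration of the two getLlama-related additions above, the new dryRun option and getLlamaGpuTypes function might be used as follows. This is a hedged sketch, not authoritative usage: it assumes node-llama-cpp >= 3.8.0 is installed, and the argument passed to getLlamaGpuTypes is an assumption — consult the linked API documentation for the exact signatures.

```typescript
// Sketch only — assumes node-llama-cpp >= 3.8.0 is available in this project.
import {getLlama, getLlamaGpuTypes} from "node-llama-cpp";

// The new `dryRun` option (per these release notes) lets getLlama resolve
// a llama.cpp binary without fully initializing it.
const llama = await getLlama({dryRun: true});
console.log(llama.gpu);

// The new `getLlamaGpuTypes` function lists GPU types for this machine;
// the "supported" argument here is an assumption, not confirmed by the notes.
const gpuTypes = await getLlamaGpuTypes("supported");
console.log(gpuTypes);
```

If the goal is only to check which GPU backend would be used (for example, in a diagnostics screen), the dryRun path avoids the cost of loading the full bindings.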
This discussion was created from the release v3.8.0.