Update LM Studio steps
danbarr committed Jan 22, 2025
1 parent d21fbfd commit 2684115
Showing 1 changed file with 9 additions and 3 deletions.
docs/partials/_cline-providers.mdx: 12 changes (9 additions, 3 deletions)
@@ -32,7 +32,7 @@ To enable CodeGate, enable **Use custom base URL** and enter
 
 You need an [OpenAI API](https://openai.com/api/) account to use this provider.
 To use a different OpenAI-compatible endpoint, set the `CODEGATE_OPENAI_URL`
-[configuration parameter](../how-to/configure.md).
+[configuration parameter](../how-to/configure.md) when you launch CodeGate.
 
 In the Cline settings, choose **OpenAI Compatible** as your provider, enter your
 OpenAI API key, and set your preferred model (example: `gpt-4o-mini`).
@@ -80,9 +80,14 @@ locally using `ollama pull`.
 <TabItem value="lmstudio" label="LM Studio">
 
 You need LM Studio installed on your local system with a server running from LM
-Studio's Developer tab to use this provider. See the
+Studio's **Developer** tab to use this provider. See the
 [LM Studio docs](https://lmstudio.ai/docs/api/server) for more information.
 
+Cline uses large prompts, so you will likely need to increase the context length
+for the model you've loaded in LM Studio. In the Developer tab, select the model
+you'll use with CodeGate, open the **Load** tab on the right and increase the
+**Context Length** to _at least_ 18k (18,432) tokens, then reload the model.
+
 <ThemedImage
   alt='LM Studio dev server'
   sources={{
@@ -96,7 +101,8 @@ In the Cline settings, choose LM Studio as your provider and set the **Base
 URL** to `http://localhost:8989/openai`.
 
 Set the **Model ID** to `lm_studio/<MODEL_NAME>`, where `<MODEL_NAME>` is the
-name of the model you're serving through LM Studio (shown in the Developer tab).
+name of the model you're serving through LM Studio (shown in the Developer tab),
+for example `lm_studio/qwen2.5-coder-7b-instruct`.
 
 <LocalModelRecommendation />
 
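As a quick sanity check of the LM Studio settings described in this change, the sketch below sends a single chat request through CodeGate's OpenAI-compatible endpoint using the documented base URL and the example model ID. It is a minimal sketch, not part of the docs change: it assumes the `openai` Python package is installed, that CodeGate is running locally on its default port, that LM Studio is serving `qwen2.5-coder-7b-instruct`, and that the API key value is a placeholder a local setup typically ignores.

```python
# Sketch: verify that the Base URL and Model ID values used in Cline reach a
# working endpoint. Assumptions (not from the diff): the `openai` package is
# installed, CodeGate is running locally, and LM Studio is serving the
# example model with a sufficiently large context length loaded.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8989/openai",  # base URL from the Cline settings above
    api_key="not-needed",                     # placeholder; local setups typically ignore it
)

response = client.chat.completions.create(
    model="lm_studio/qwen2.5-coder-7b-instruct",  # lm_studio/<MODEL_NAME> format from the docs
    messages=[{"role": "user", "content": "Reply with the single word: ready"}],
    max_tokens=20,
)
print(response.choices[0].message.content)
```

If this prints a short reply, the same base URL and model string should work when entered in the Cline provider settings.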