
Feature Request: log *which* LLM we are talking to, not just **Assistant** #41


Open
gitcnd opened this issue Mar 10, 2025 · 2 comments
Labels
enhancement New feature or request

Comments

gitcnd commented Mar 10, 2025

Different models are wildly different in capability, and I was trying to work out which one I used with extreme success a few days ago... but no luck: the specstory didn't capture that info :-(
I don't think it's captured in the Cursor DB either :-(

It is here though! https://www.cursor.com/settings

Sorry for dropping a gnarly-hard suggestion into your issues :-)

@belucid belucid added the enhancement New feature or request label Mar 10, 2025
belucid commented Mar 10, 2025

No need to apologize! It's a good suggestion. Not sure that we have easy access to it, but we'll look into whether we can capture which model you were working with during the AI interaction.

@belucid belucid changed the title Feature Request (hard!): log *which* agent we are talking to, not just _**Assistant**_ Feature Request: log *which* LLM we are talking to, not just _**Assistant**_ Mar 10, 2025
@belucid belucid changed the title Feature Request: log *which* LLM we are talking to, not just _**Assistant**_ Feature Request: log *which* LLM we are talking to, not just **Assistant** Mar 10, 2025

gitcnd commented Mar 12, 2025

I tried the idea of asking the assistant itself to identify its model and variant... but all it knows is whatever Cursor told it in the prompt (which has a bug: if you pick 4o after using 3.7, Cursor still sends the 3.7 prompt). The LLM can guess whether it's a "thinking" variant from the presence of a tag in its context, but making it do that on every message is going to degrade our task context...
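For what it's worth, here's a rough sketch of what I was hoping the log could look like: each assistant turn tagged with the model that produced it, rather than a bare **Assistant** header. Everything here is hypothetical — `get_active_model()` stands in for wherever the extension could actually read the selected model from Cursor's state, which (as noted) may not be easily accessible.

```python
from datetime import datetime, timezone

def get_active_model() -> str:
    """Hypothetical lookup of the model currently selected in Cursor.

    A real implementation would read this from Cursor's own state
    (if it is exposed at all); stubbed out here for illustration.
    """
    return "claude-3.7-sonnet"

def format_turn(role: str, text: str) -> str:
    """Render one chat turn; assistant turns get the model name appended."""
    header = role
    if role == "Assistant":
        header = f"Assistant ({get_active_model()})"
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    return f"**{header}** [{stamp}]\n\n{text}\n"

print(format_turn("Assistant", "Here is the code you asked for."))
```

Even just the model name per turn (no timestamps needed) would solve my use case below.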

My use case, by the way: I switch models a lot during each chat (and, as new ones arrive, so will many others), and last week, the model I picked spat out 3200 lines of C# code into 4 files, which built and ran flawlessly first try (a Windows service for full read/write access to the clipboard, running as a local named-pipe tool with agentic self-documenting capability advertising/discovery, with a test suite!). I forgot which model that was, and every attempt to replicate that awesomeness since has failed me :-(
