AI Agent doesn't store the Tool usages in memory #14361
Comments
Hey @fjrdomingues, We have created an internal ticket to look into this which we will be tracking as "GHC-1434" |
Hey @fjrdomingues, Is this a bug or an enhancement request? |
Hey @Joffcom, if there's such a category then it probably fits better as an enhancement request. |
I totally agree. I want my AI to remember the output of a tool call, which is currently stored as empty. For now, agents only seem to see tool output within the turn in which the tool was called. |
I'd suggest this should rank much higher than "enhancement request". Consider this scenario:
When working directly with LangChain, that tool output, including the ID, would be in the memory context and no round-trip would be necessary. This is significantly holding back agent tool usage via n8n, IMHO. Working with LangChain directly this was never an issue, though it does introduce concerns around managing tool verbosity versus filling the context with a lot of tokens. |
Yes, this is a bug for sure. Many applications of AI agents don't work at the moment because of this issue. Multi-turn agents with tools are currently not working. |
Right now I'm working around this by having the tools manually insert their output into memory using the Memory Manager, but it's an ugly patch with many pitfalls. Not sure what the proper etiquette is here to get an update and/or more eyes on this? |
Hi @Joffcom, just want to bring this to your attention. In addition to the example I've given above, I am now seeing multiple models hallucinating tool calls due to this shortcoming.
IMHO this is totally kneecapping n8n's agents when compared to straight Langchain/Langgraph implementations. |
Totally agree with you @GuillaumeRoy |
A perfect explanation of the issue. Alongside this, it would be better for the AI to remember the output of a tool call, since the tool output may contain information the model doesn't include in its initial response but that becomes relevant later on. |
I encountered a similar issue while building an appointment scheduling system via API using an AI agent in n8n. At one point, the agent would call a tool that returned crucial data like available staff IDs and time slots. Initially everything worked fine: the AI had access to those values in the current turn and could make correct suggestions.

The problem started when the customer confirmed an option in a later turn. At that point the AI no longer had access to the previous tool response, and it started "making up" IDs and times because that data was no longer available in context. The root cause is that tool outputs are not automatically persisted or injected into the prompt context across turns. And since n8n lets us configure the memory window, even if the tool response is saved to memory, it may be forgotten quickly as the conversation grows.

My solution was to manually inject all essential data (like IDs, times, names) into the prompt so it would remain available across turns. This worked and made the system more reliable, but it increased prompt complexity and maintenance.

It would be extremely useful if n8n provided an option to persist tool outputs directly into the prompt context, as a kind of "prompt extension," without relying solely on memory. That would reduce complexity and help avoid fragile behavior in multi-turn flows. |
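For readers who want to try the same workaround, here is a minimal sketch of the pattern described in that comment: carrying essential tool output forward into the prompt on every turn. It is plain TypeScript around whatever calls the agent; the names `toolFacts`, `rememberToolOutput`, and `buildSystemPrompt` are invented for this example and are not part of n8n or LangChain.

```typescript
// Facts we must not lose between turns, kept outside the LLM memory entirely.
type ToolFacts = {
  staffIds?: string[];
  slots?: string[];
};

const toolFacts: ToolFacts = {};

// Call this whenever a tool returns data the agent will need in later turns.
function rememberToolOutput(output: { staffIds?: string[]; slots?: string[] }): void {
  if (output.staffIds) toolFacts.staffIds = output.staffIds;
  if (output.slots) toolFacts.slots = output.slots;
}

// Prepend the remembered facts to the system prompt on every turn, so the
// model always sees real IDs and times instead of inventing them.
function buildSystemPrompt(basePrompt: string): string {
  const facts = [
    toolFacts.staffIds ? `Known staff IDs: ${toolFacts.staffIds.join(", ")}` : "",
    toolFacts.slots ? `Available time slots: ${toolFacts.slots.join(", ")}` : "",
  ].filter(Boolean);
  return facts.length
    ? `${basePrompt}\n\nFacts from earlier tool calls:\n${facts.join("\n")}`
    : basePrompt;
}

// Example: after the scheduling tool runs, stash its output, then build the
// next turn's prompt from it.
rememberToolOutput({ staffIds: ["staff_17"], slots: ["09:00", "11:30"] });
console.log(buildSystemPrompt("You are a scheduling assistant."));
```

This trades prompt size for reliability, which matches the trade-off the commenter describes: the facts survive the memory window, but the prompt grows and has to be maintained by hand.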
This needs to be fixed asap. I am currently doing a workaround with normal HTTP requests and firestore. It's so ugly, lol. |
Without this fix, Agent AI with tools is not usable in real workflows. The agent does not retain tool output and therefore cannot reliably act on previous results. Please consider increasing the priority of this issue. Thank you! |
Can you please provide more details on how you achieved this using memory manager? I tried but failed pathetically. |
Using Redis memory and workflow tool nodes, at the end of the subworkflow I was injecting a message directly into the Redis context using the Memory Manager's insert functionality. It kinda sucks because 1) limited applicability, 2) it's manual, 3) it can still lead to hallucinations, and 4) message ordering and type are not exactly what they should be. I ended up implementing my own memory layer as a stopgap and I'm moving serious agent use cases away from n8n 💔 |
I tried the same using PostgreSQL, but when inserting the data into memory at the end of the sub-workflow, the tool response ends up inserted before the user prompt. |
Is there any movement on this? |
@ptr-bloch I went on a limb and reached out to a member of the n8n team directly over LinkedIn on May 2nd to bring this issue+thread to their attention and learned that "AI Squad (...) already looking into it and discussing". |
I've tested different LLM providers, and there seems to be an issue when the provider responds with:
It seems to work when:
Another thing: for each call to the chat model, the memory action afterwards is always loadMemoryVariables, never saveContext. When it works, saveContext is called. |
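For context, the two calls mentioned in that comment are the standard LangChain JS memory methods. Here is a minimal sketch using BufferMemory purely as an illustration (n8n's memory nodes wrap their own chat-history backends, so the concrete class differs, but the read/write split is the same):

```typescript
import { BufferMemory } from "langchain/memory";

async function demo(): Promise<void> {
  const memory = new BufferMemory({ returnMessages: true, memoryKey: "chat_history" });

  // loadMemoryVariables: read the stored history before calling the chat model.
  const vars = await memory.loadMemoryVariables({});
  console.log(vars.chat_history);

  // saveContext: persist the latest exchange after the model has responded.
  // If this is never called (as observed above), nothing new reaches memory.
  await memory.saveContext(
    { input: "Book me an appointment" },
    { output: "Sure, which day works for you?" }
  );
}

demo().catch(console.error);
```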
Bug Description
The current implementation of the AI Agent and the Memory nodes stores only the input and output messages, not the Tool messages.
Why is this important?
Have you noticed the agent claiming that it called a tool when it didn't? Whatever the flaws of the models themselves, they are greatly aggravated by this problem. The context window gets filled with exchanges where the user asks for an action, the AI replies that it succeeded, and the user responds with positive feedback. Without the tool messages, the LLM learns that pattern and repeats it, so the next time it won't call a tool at all and will just reply directly to the user.
I have a fork with a working suggestion on how to fix it: fjrdomingues@1af9450
To Reproduce
Use the Simple Memory node or the Postgres one (the only ones I tested) and check the messages that were saved. The tool calls are always an empty array, both on save and on load.
Expected behavior
The tool_calls array should be populated when saving memories.
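For illustration, here is a sketch of the kind of message sequence the memory would need to hold once tool calls are persisted, written with the @langchain/core message classes. The tool name, arguments, and call ID are made up for this example, and n8n's internal serialization may differ; today only the first and last messages survive a save/load round trip, and the two in the middle are lost, which is what this issue is about.

```typescript
import { HumanMessage, AIMessage, ToolMessage } from "@langchain/core/messages";

// A history that preserves the full tool round-trip across turns.
const history = [
  new HumanMessage("What slots are free tomorrow?"),
  new AIMessage({
    content: "",
    // The assistant turn that requests the tool, with its call ID and arguments.
    tool_calls: [{ id: "call_1", name: "get_available_slots", args: { date: "tomorrow" } }],
  }),
  // The tool result, linked back to the call above via tool_call_id.
  new ToolMessage({ content: '["09:00", "11:30"]', tool_call_id: "call_1" }),
  new AIMessage("Tomorrow I can offer 09:00 or 11:30."),
];

console.log(history.map((m) => m.getType()));
```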
Operating System
NA
n8n Version
1.83.2
Node.js Version
20.18.3
Database
PostgreSQL
Execution mode
main (default)