
AI Agent use of Tools #15883


Open
kplatter opened this issue May 31, 2025 · 9 comments
Labels
in linear Issue or PR has been created in Linear for internal review

Comments

kplatter commented May 31, 2025

Bug Description

When using tools with the AI Agent, tool use is spotty at best. With the same chat input, sometimes posting a chat message calls no tool at all, sometimes it calls one of the tools and not the other, and sometimes it calls both. When it calls only one tool, how does it decide which one to call if both have the same or very similar descriptions?

** PLEASE READ COMMENTS BELOW FOR UPDATED INFO ON THIS ISSUE **

I have included a sample of this here:

{
  "nodes": [
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "typeVersion": 1.1,
      "position": [
        -420,
        -20
      ],
      "id": "b3766843-c178-4d56-8774-b7d9a678dee3",
      "name": "When chat message received",
      "webhookId": "7506fdb8-44aa-46f4-932a-aa761895ac1a"
    },
    {
      "parameters": {
        "promptType": "define",
        "text": "={{ $json.chatInput }}",
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.agent",
      "typeVersion": 2,
      "position": [
        -220,
        -20
      ],
      "id": "7b72c2ab-149f-436a-87e4-3bc40984a023",
      "name": "AI Agent",
      "alwaysOutputData": true
    },
    {
      "parameters": {
        "description": "Call this tool to get random information about turtles.",
        "jsCode": "// Return a canned fact about turtles\nreturn \"Turtles can run races and are generally so fast that they win them most of the time.\""
      },
      "type": "@n8n/n8n-nodes-langchain.toolCode",
      "typeVersion": 1.2,
      "position": [
        200,
        220
      ],
      "id": "f800ab80-2df0-4212-af86-9497a47e1041",
      "name": "Code Tool"
    },
    {
      "parameters": {
        "description": "Call this tool to get additional random information about turtles.",
        "jsCode": "// Return a second canned fact about turtles\nreturn \"Turtles can sometimes be bright yellow.\""
      },
      "type": "@n8n/n8n-nodes-langchain.toolCode",
      "typeVersion": 1.2,
      "position": [
        80,
        220
      ],
      "id": "c451b5f2-a9c5-4e63-a136-7216aa85762c",
      "name": "Code Tool1"
    },
    {
      "parameters": {
        "model": "llama3.2:1b",
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.lmChatOllama",
      "typeVersion": 1,
      "position": [
        -260,
        220
      ],
      "id": "3defad72-d964-43f3-baaf-d65fde7a86e8",
      "name": "Ollama Chat Model",
      "credentials": {
        "ollamaApi": {
          "id": "7jRkpIbW0vLbQDiw",
          "name": "Ollama account"
        }
      }
    }
  ],
  "connections": {
    "When chat message received": {
      "main": [
        [
          {
            "node": "AI Agent",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "AI Agent": {
      "main": [
        []
      ]
    },
    "Code Tool": {
      "ai_tool": [
        [
          {
            "node": "AI Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "Code Tool1": {
      "ai_tool": [
        [
          {
            "node": "AI Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "Ollama Chat Model": {
      "ai_languageModel": [
        [
          {
            "node": "AI Agent",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    }
  },
  "pinData": {},
  "meta": {
    "templateCredsSetupCompleted": true,
    "instanceId": "b3584b9e2f4e3f873f60bd4b4134e7b6163734ddea8a9e6f2984db2d7dc92710"
  }
}

To Reproduce

  1. Post "Can Turtles win races?"

  2. Note which tool or tools is run

  3. Post "Can Turtles win races?"

  4. Compare which tool or tools were run

  5. Return to step one and repeat

  6. Try removing the link between the AI Agent and one of the tools, then start again at step 1...

Expected behavior

I would expect repeatable, explicable behavior regarding which tools are run and the order in which they are run.
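The two Code Tools in the workflow above have nearly identical descriptions, and the description is the main signal the model has for choosing a tool. As a rough illustration (a toy scorer written for this report, not n8n's or the model's actual selection logic), keyword overlap shows why the model has nothing to distinguish them by:

```javascript
// Illustration only: LLMs choose tools from their descriptions, so two
// near-identical descriptions give the model no signal to prefer one.
// This toy scorer counts words shared between a query and a description.
function overlapScore(query, description) {
  const words = (s) => new Set(s.toLowerCase().match(/[a-z]+/g) || []);
  const q = words(query);
  const d = words(description);
  let shared = 0;
  for (const w of q) if (d.has(w)) shared++;
  return shared;
}

const descA = "Call this tool to get random information about turtles.";
const descB = "Call this tool to get additional random information about turtles.";
const query = "Can Turtles win races?";

// Both descriptions score identically against the query, so a model has
// no description-based reason to pick one over the other.
console.log(overlapScore(query, descA), overlapScore(query, descB)); // 1 1
```

Making the two descriptions disjoint (e.g. "racing facts about turtles" vs. "color facts about turtles") gives the model an actual basis for a choice, though with a 1B-parameter model some nondeterminism is still to be expected.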

Operating System

Ubuntu 22.04

n8n Version

1.95.0, 1.94.1

Node.js Version

v22.15.1

Database

SQLite (default)

Execution mode

main (default)

Joffcom (Member) commented May 31, 2025

Hey @kplatter,

We have created an internal ticket to look into this which we will be tracking as "GHC-2276"

@Joffcom Joffcom added the in linear Issue or PR has been created in Linear for internal review label May 31, 2025

kplatter commented May 31, 2025

I switched from using snowflake-arctic-embed:22m to qwen3:4b and am now getting a section detailing which tools were used and the reasoning behind using them.

The LLM believes that it already queried the vector store and no longer needs to do so for subsequent questions. My Supabase instance has data about tiller tines as well as AKC data on Bulldogs; however, if you ask it questions about those topics, it says it checked the vector store and has no information on them.

Step 1: Searched "tiller tines" in the vector store. Returned zero relevant documents.

Step 2: Cross-checked synonyms like "tilling blades," "rotary tines," or "soil cultivation components"—still no matches.

Step 3: Confirmed all "tiller" entries in the dataset refer strictly to machinery control handles (e.g., marine outboard motor tillers, engine throttle-linkage systems).

Step 4: Concluded the vector store lacks data on agricultural implements (e.g., garden tiller tines).

Even though the LLM thinks it is using the tool or tools (as evidenced by the data that went into its reasoning), the n8n UI was showing none or only one tool being called, i.e. the link to the tool lighting up green along with the tool itself. This is very misleading. I am not sure how or whether this could be fixed, but I would definitely call it a BUG.

[Image attachment]

[Image attachment]

I would be happy to help in any way I can, just let me know.

kplatter commented

I stopped and restarted n8n and it went back to actually calling the vector store and returning the correct results :(


britalx commented Jun 1, 2025

I would use a magic number stored in the vector store: first check whether the model can return the exact magic number; if it cannot, force a reset of the LLM and query the vector store directly.
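The suggestion above can be sketched as a small health check: seed the vector store with a known marker value, and treat the retrieval path as broken whenever it cannot return that value. A minimal sketch, where MAGIC_NUMBER and queryVectorStore are hypothetical stand-ins for your own marker and retrieval call:

```javascript
// Sketch of the "magic number" health check suggested above.
// MAGIC_NUMBER is a marker you seed into the vector store yourself;
// queryVectorStore is a stand-in for your actual retrieval call.
const MAGIC_NUMBER = "847291";

function memoryIsTrustworthy(queryVectorStore) {
  // Ask the store for the marker; if it cannot return it verbatim,
  // assume the retrieval path is broken and force a fresh query.
  const result = queryVectorStore("What is the magic number?");
  return typeof result === "string" && result.includes(MAGIC_NUMBER);
}

// Usage with stub retrievers standing in for the real vector store:
const healthyStore = () => `The magic number is ${MAGIC_NUMBER}.`;
const brokenStore = () => "I have no information on that.";

console.log(memoryIsTrustworthy(healthyStore)); // true
console.log(memoryIsTrustworthy(brokenStore)); // false
```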

Joffcom (Member) commented Jun 2, 2025

Hey @kplatter,

If the LLM thinks it has called the vector store but n8n is not showing it, there are two possible causes:

  1. We are making the request and the UI just isn't updating - checking any logs for the vector store or tool could help show this.
  2. The LLM is hallucinating and thinks it has called the tool when it actually hasn't - checking the logs would also help confirm whether this is the case.

At the moment I am not convinced there is a bug here but I will keep it open for a bit so you can go through any local logs you may have.

@Joffcom Joffcom added the Needs Feedback Waiting for further input or clarification. label Jun 2, 2025

kplatter commented Jun 3, 2025

OK, which logs do you mean by "any logs"? Is there a specific file location, or a link in the n8n interface, where I can see them?

@Joffcom Joffcom removed the Needs Feedback Waiting for further input or clarification. label Jun 3, 2025

kplatter commented Jun 3, 2025

I didn't know which logs you were referring to, but I am running n8n with PM2, so I set up this file:

ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'n8n',
      script: 'n8n',
      env: {
        N8N_PORT: 5678,
        N8N_LOG_LEVEL: "debug",
        N8N_LOG_OUTPUT: "file, console"
      }
    }
  ]
};

and ran

pm2 start ecosystem.config.js

Here is the output, which includes these two lines:

  • {"level":"debug","message":"Executing Tools Agent","metadata":{"file":"execute.js","function":"toolsAgentExecute","timestamp":"2025-06-03T23:20:23.070Z"}}
  • {"level":"debug","message":"Supply data for embeddings Ollama","metadata":{"file":"EmbeddingsOllama.node.js","function":"supplyData","timestamp":"2025-06-03T23:20:23.071Z"}}

That seems to indicate to me that the tools agent was executed; however, as you can see in this picture, it does not appear that it was. If there is some other log or logging option you would like, please let me know what and where it is.

[Image attachment]

{"level":"debug","message":"Received webhook \"POST\" for path \"lumiun\"","metadata":{"file":"live-webhooks.js","function":"executeWebhook","timestamp":"2025-06-03T23:20:23.036Z"}}
{"level":"debug","message":"Execution added","metadata":{"executionId":"674","file":"active-executions.js","function":"add","timestamp":"2025-06-03T23:20:23.056Z"}}
{"level":"debug","message":"Execution for workflow Lumiun was assigned id 674","metadata":{"executionId":"674","file":"workflow-runner.js","function":"runMainProcess","timestamp":"2025-06-03T23:20:23.060Z"}}
{"level":"debug","message":"Execution ID 674 had Execution data. Running with payload.","metadata":{"executionId":"674","file":"workflow-runner.js","function":"runMainProcess","timestamp":"2025-06-03T23:20:23.063Z"}}
{"level":"debug","message":"Workflow execution started","metadata":{"file":"LoggerProxy.js","function":"exports.debug","timestamp":"2025-06-03T23:20:23.063Z","workflowId":"fRuaIjbJSGRM9jz8"}}
{"level":"debug","message":"Started execution of workflow \"Lumiun\" from webhook with execution ID 674","metadata":{"executionId":"674","file":"webhook-helpers.js","function":"executeWebhook","timestamp":"2025-06-03T23:20:23.064Z"}}
{"level":"debug","message":"Start executing node \"Webhook\"","metadata":{"file":"LoggerProxy.js","function":"exports.debug","node":"Webhook","timestamp":"2025-06-03T23:20:23.064Z","workflowId":"fRuaIjbJSGRM9jz8"}}
{"level":"debug","message":"Running node \"Webhook\" started","metadata":{"file":"LoggerProxy.js","function":"exports.debug","node":"Webhook","timestamp":"2025-06-03T23:20:23.064Z","workflowId":"fRuaIjbJSGRM9jz8"}}
{"level":"debug","message":"Running node \"Webhook\" finished successfully","metadata":{"file":"LoggerProxy.js","function":"exports.debug","node":"Webhook","timestamp":"2025-06-03T23:20:23.064Z","workflowId":"fRuaIjbJSGRM9jz8"}}
{"level":"debug","message":"Start executing node \"Code\"","metadata":{"file":"LoggerProxy.js","function":"exports.debug","node":"Code","timestamp":"2025-06-03T23:20:23.064Z","workflowId":"fRuaIjbJSGRM9jz8"}}
{"level":"debug","message":"Running node \"Code\" started","metadata":{"file":"LoggerProxy.js","function":"exports.debug","node":"Code","timestamp":"2025-06-03T23:20:23.064Z","workflowId":"fRuaIjbJSGRM9jz8"}}
{"level":"debug","message":"Running node \"Code\" finished successfully","metadata":{"file":"LoggerProxy.js","function":"exports.debug","node":"Code","timestamp":"2025-06-03T23:20:23.068Z","workflowId":"fRuaIjbJSGRM9jz8"}}
{"level":"debug","message":"Start executing node \"Edit Fields\"","metadata":{"file":"LoggerProxy.js","function":"exports.debug","node":"Edit Fields","timestamp":"2025-06-03T23:20:23.068Z","workflowId":"fRuaIjbJSGRM9jz8"}}
{"level":"debug","message":"Running node \"Edit Fields\" started","metadata":{"file":"LoggerProxy.js","function":"exports.debug","node":"Edit Fields","timestamp":"2025-06-03T23:20:23.069Z","workflowId":"fRuaIjbJSGRM9jz8"}}
{"level":"debug","message":"Running node \"Edit Fields\" finished successfully","metadata":{"file":"LoggerProxy.js","function":"exports.debug","node":"Edit Fields","timestamp":"2025-06-03T23:20:23.069Z","workflowId":"fRuaIjbJSGRM9jz8"}}
{"level":"debug","message":"Start executing node \"AI Agent\"","metadata":{"file":"LoggerProxy.js","function":"exports.debug","node":"AI Agent","timestamp":"2025-06-03T23:20:23.069Z","workflowId":"fRuaIjbJSGRM9jz8"}}
{"level":"debug","message":"Running node \"AI Agent\" started","metadata":{"file":"LoggerProxy.js","function":"exports.debug","node":"AI Agent","timestamp":"2025-06-03T23:20:23.069Z","workflowId":"fRuaIjbJSGRM9jz8"}}
{"level":"debug","message":"Executing Tools Agent","metadata":{"file":"execute.js","function":"toolsAgentExecute","timestamp":"2025-06-03T23:20:23.070Z"}}
{"level":"debug","message":"Supply data for embeddings Ollama","metadata":{"file":"EmbeddingsOllama.node.js","function":"supplyData","timestamp":"2025-06-03T23:20:23.071Z"}}
{"level":"debug","message":"Running node \"AI Agent\" finished successfully","metadata":{"file":"LoggerProxy.js","function":"exports.debug","node":"AI Agent","timestamp":"2025-06-03T23:20:23.253Z","workflowId":"fRuaIjbJSGRM9jz8"}}
{"level":"debug","message":"Start executing node \"Edit Fields1\"","metadata":{"file":"LoggerProxy.js","function":"exports.debug","node":"Edit Fields1","timestamp":"2025-06-03T23:20:23.253Z","workflowId":"fRuaIjbJSGRM9jz8"}}
{"level":"debug","message":"Running node \"Edit Fields1\" started","metadata":{"file":"LoggerProxy.js","function":"exports.debug","node":"Edit Fields1","timestamp":"2025-06-03T23:20:23.254Z","workflowId":"fRuaIjbJSGRM9jz8"}}
{"level":"debug","message":"Running node \"Edit Fields1\" finished successfully","metadata":{"file":"LoggerProxy.js","function":"exports.debug","node":"Edit Fields1","timestamp":"2025-06-03T23:20:23.255Z","workflowId":"fRuaIjbJSGRM9jz8"}}
{"level":"debug","message":"Start executing node \"Respond to Webhook\"","metadata":{"file":"LoggerProxy.js","function":"exports.debug","node":"Respond to Webhook","timestamp":"2025-06-03T23:20:23.255Z","workflowId":"fRuaIjbJSGRM9jz8"}}
{"level":"debug","message":"Running node \"Respond to Webhook\" started","metadata":{"file":"LoggerProxy.js","function":"exports.debug","node":"Respond to Webhook","timestamp":"2025-06-03T23:20:23.255Z","workflowId":"fRuaIjbJSGRM9jz8"}}
{"level":"debug","message":"Running node \"Respond to Webhook\" finished successfully","metadata":{"file":"LoggerProxy.js","function":"exports.debug","node":"Respond to Webhook","timestamp":"2025-06-03T23:20:23.255Z","workflowId":"fRuaIjbJSGRM9jz8"}}
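Log dumps like the one above can be checked programmatically: each line is a JSON object, so filtering on the message field shows whether a given node or the tools agent actually executed. A minimal sketch (nodeExecuted is a name chosen here, not an n8n API):

```javascript
// Scan n8n debug log text (one JSON object per line) for evidence
// that a given node or the tools agent actually executed.
function nodeExecuted(logText, needle) {
  return logText
    .split("\n")
    .filter(Boolean)
    .map((line) => { try { return JSON.parse(line); } catch { return null; } })
    .filter(Boolean)
    .some((entry) => (entry.message || "").includes(needle));
}

// Two sample lines in the same shape as the excerpts above.
const sample = [
  '{"level":"debug","message":"Executing Tools Agent","metadata":{"file":"execute.js"}}',
  '{"level":"debug","message":"Running node \\"AI Agent\\" started","metadata":{}}',
].join("\n");

console.log(nodeExecuted(sample, "Executing Tools Agent")); // true
console.log(nodeExecuted(sample, "Code Tool")); // false
```

Note that "Executing Tools Agent" only shows the agent loop started; the absence of any line naming a specific tool node is what suggests the tool itself was never invoked.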

@kplatter
Copy link
Author

kplatter commented Jun 4, 2025

Sometimes when I run a query, I get back a response in this format with no answer:

<vector_store>{"input": "What are rodents?"}</vector_store>

Even though the log shows that the Tools Agent ran, the response does not appear to be processed through the LLM, and I just get back what was sent to the tool :(

{"level":"debug","message":"Executing Tools Agent","metadata":{"file":"execute.js","function":"toolsAgentExecute","timestamp":"2025-06-04T01:42:13.961Z"}}
{"level":"debug","message":"Supply data for embeddings Ollama","metadata":{"file":"EmbeddingsOllama.node.js","function":"supplyData","timestamp":"2025-06-04T01:42:13.961Z"}}
{"level":"debug","message":"Running node \"AI Agent\" finished successfully","metadata":{"file":"LoggerProxy.js","function":"exports.debug","node":"AI Agent","timestamp":"2025-06-04T01:42:14.164Z","workflowId":"fRuaIjbJSGRM9jz8"}}
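That tagged output looks like the model emitting its tool call as plain text instead of making a structured call, a common failure mode with very small models. A minimal sketch of detecting such a leaked call in the final output so it can be retried rather than returned (the tag-wrapped-JSON format is assumed from the example in this comment):

```javascript
// Detect a raw tool call leaked into the model's text output, e.g.
//   <vector_store>{"input": "What are rodents?"}</vector_store>
// Small models sometimes emit the call as text instead of invoking it.
function leakedToolCall(output) {
  const m = output.match(/<(\w+)>\s*(\{[\s\S]*?\})\s*<\/\1>/);
  if (!m) return null;
  try {
    return { tool: m[1], args: JSON.parse(m[2]) };
  } catch {
    return null; // tag matched but payload was not valid JSON
  }
}

const bad = '<vector_store>{"input": "What are rodents?"}</vector_store>';
console.log(leakedToolCall(bad)); // { tool: 'vector_store', args: { input: 'What are rodents?' } }
console.log(leakedToolCall("Rodents are mammals of the order Rodentia.")); // null
```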

@kplatter
Copy link
Author

kplatter commented Jun 4, 2025

I just found that Simple Memory is part of the problem I have been having. If at some point the AI decides that it does not need to check the vector store, or fails to check it for some reason, it writes into Simple Memory that it does not know the answer. After that, any further requests for the same information no longer check the vector store. It seems at least questionable whether Simple Memory should be updated when no answer is found. What happens if the tools are updated between calls? I think this is a problem.
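One mitigation along these lines (a sketch of a possible guard, not an existing n8n option) is to filter out "no answer" responses before they are written to memory, so a failed lookup is never cached as fact:

```javascript
// Sketch: only persist an exchange to conversation memory when the
// answer actually carries information. The phrase list is a heuristic
// and would need tuning for your model's typical refusal wording.
const NO_ANSWER_PATTERNS = [
  /no (relevant )?(information|data|documents)/i,
  /vector store (lacks|does not have)/i,
  /i (do not|don't) know/i,
];

function shouldPersistToMemory(answer) {
  return !NO_ANSWER_PATTERNS.some((p) => p.test(answer));
}

console.log(shouldPersistToMemory("Tiller tines are the blades that churn soil.")); // true
console.log(shouldPersistToMemory("The vector store lacks data on tiller tines.")); // false
```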


3 participants