
Bug: transfer_to_agent not found in canonical_tools() during HITL request_confirmation resume #5633

@settler-av

Description


Using tool_context.request_confirmation() inside a before_tool_callback to gate transfer_to_agent behind human approval fails when the confirmation is resumed, with:

ValueError: Tool 'transfer_to_agent' not found. Available tools:

This is a structural incompatibility between two ADK subsystems:

  • _AgentTransferLlmRequestProcessor injects transfer_to_agent dynamically into llm_request.tools_dict at flow time. It is never added to agent.tools.
  • On confirmation resume, _RequestConfirmationLlmRequestProcessor (Step 4) rebuilds tools_dict exclusively from await agent.canonical_tools(), which only reads from agent.tools.
  • Since transfer_to_agent was never in agent.tools, the rebuilt tools_dict is empty, so _get_tool("transfer_to_agent", {}) raises the ValueError.
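
The mismatch can be sketched without ADK at all. The stub classes and function names below are hypothetical stand-ins for the two processors; only the dict-level logic mirrors the report:

```python
import asyncio


class FakeAgent:
    """Stand-in for LlmAgent: canonical_tools() reads only self.tools."""

    def __init__(self, tools):
        self.tools = tools

    async def canonical_tools(self):
        return list(self.tools)


def inject_transfer_tool(tools_dict):
    # Mirrors _AgentTransferLlmRequestProcessor: the tool lands in the
    # per-request tools_dict only, never in agent.tools.
    tools_dict["transfer_to_agent"] = object()


async def confirmation_resume(agent):
    # Mirrors the Step 4 rebuild: tools_dict comes exclusively from
    # canonical_tools(), so the flow-injected tool is invisible.
    tools_dict = {name: object() for name in await agent.canonical_tools()}
    if "transfer_to_agent" not in tools_dict:
        raise ValueError(
            "Tool 'transfer_to_agent' not found. Available tools: "
            + ", ".join(tools_dict)
        )


agent = FakeAgent(tools=[])        # transfer_to_agent is never in agent.tools
request_tools = {}
inject_transfer_tool(request_tools)
assert "transfer_to_agent" in request_tools   # visible at flow time

try:
    asyncio.run(confirmation_resume(agent))
except ValueError as e:
    print(e)   # the same lookup failure as on the real resume
```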

Steps to Reproduce:

  1. Create an LlmAgent orchestrator with sub_agents (e.g. RemoteA2aAgent or plain LlmAgent)
  2. Add a before_tool_callback that calls tool_context.request_confirmation() when tool.name == "transfer_to_agent"
  3. Send a message that causes the LLM to call transfer_to_agent
  4. Respond to the adk_request_confirmation event with ToolConfirmation(confirmed=True)
  5. Error is raised

Script that reproduces the issue:

import asyncio
from google.adk.agents import LlmAgent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.adk.tools.tool_context import ToolContext
from google.adk.tools.tool_confirmation import ToolConfirmation
from google.adk.tools.base_tool import BaseTool
from google.genai import types
import os
from dotenv import load_dotenv
from google.adk.models.lite_llm import LiteLlm

load_dotenv()

# ── HITL callback ─────────────────────────────────────────────────────────────

async def human_approval_callback(tool: BaseTool, args: dict, tool_context: ToolContext):
    if tool.name != "transfer_to_agent":
        return None

    if tool_context.tool_confirmation is None:          # first call → suspend
        tool_context.request_confirmation(hint=f"Approve transfer to '{args.get('agent_name')}'?")
        return {"status": "pending"}

    if tool_context.tool_confirmation.confirmed:        # resume → approved
        return None
    return {"status": "denied"}                         # resume → denied


# ── Agents ────────────────────────────────────────────────────────────────────

math_agent = LlmAgent(
    name="math_assistant",
    model=LiteLlm(
        model=f"azure/{os.getenv('AZURE_MODEL_NAME')}",
        api_key=os.getenv("AZURE_API_KEY"),
        api_base=os.getenv("AZURE_API_BASE"),
        api_version=os.getenv("AZURE_API_VERSION"),
    ),
    description="Specialist for math questions.",
    instruction="You are a math expert. Answer math questions concisely.",
)

root_agent = LlmAgent(
    name="orchestrator",
    model=LiteLlm(
        model=f"azure/{os.getenv('AZURE_MODEL_NAME')}",
        api_key=os.getenv("AZURE_API_KEY"),
        api_base=os.getenv("AZURE_API_BASE"),
        api_version=os.getenv("AZURE_API_VERSION"),
    ),
    description="Routes questions to specialists.",
    instruction="If the user asks a math question, transfer to math_assistant immediately.",
    sub_agents=[math_agent],
    before_tool_callback=human_approval_callback,
)


# ── Runner ────────────────────────────────────────────────────────────────────

APP, USER, SESSION = "repro", "user1", "s1"
session_service = InMemorySessionService()
runner = Runner(agent=root_agent, app_name=APP, session_service=session_service)


async def main():
    await session_service.create_session(app_name=APP, user_id=USER, session_id=SESSION)

    # Turn 1 — trigger a transfer
    print("=== Turn 1: user asks a math question ===")
    confirmation_id = None
    async for event in runner.run_async(
        user_id=USER, session_id=SESSION,
        new_message=types.Content(role="user", parts=[types.Part(text="What is 12 * 12?")]),
    ):
        if event.content:
            for part in event.content.parts or []:
                fc = getattr(part, "function_call", None)
                if fc and fc.name == "adk_request_confirmation":
                    confirmation_id = fc.id
                    print(f"  → Approval requested (id={fc.id})")

    if not confirmation_id:
        print("No transfer triggered — try a clearer math prompt.")
        return

    # Turn 2 — approve the transfer
    # BUG: this raises ValueError: Tool 'transfer_to_agent' not found. Available tools:
    print("\n=== Turn 2: human approves the transfer ===")
    approval = ToolConfirmation(confirmed=True)
    async for event in runner.run_async(
        user_id=USER, session_id=SESSION,
        new_message=types.Content(role="user", parts=[types.Part(
            function_response=types.FunctionResponse(
                name="adk_request_confirmation",
                id=confirmation_id,
                response={"response": approval.model_dump_json()},
            )
        )]),
    ):
        if event.is_final_response() and event.content:
            for part in event.content.parts or []:
                if part.text:
                    print(f"  Final: {part.text}")


if __name__ == "__main__":
    asyncio.run(main())

Expected Behavior:

After human approval, the confirmation processor re-executes transfer_to_agent and the transfer to the sub-agent completes normally.

Observed Behavior:

ValueError: Tool 'transfer_to_agent' not found. Available tools: 

  File "google/adk/flows/llm_flows/functions.py", in _get_tool
    raise ValueError(error_msg)
  File "google/adk/flows/llm_flows/request_confirmation.py", in run_async
    {
        tool.name: tool
        for tool in await agent.canonical_tools(   # ← transfer_to_agent never here
            ReadonlyContext(invocation_context)
        )
    },

Environment Details:

  • ADK Library Version: 2.0.0b1
  • Desktop OS: Linux
  • Python Version: 3.10+

Model Information:

  • Are you using LiteLLM: Yes
  • Which model is being used: gpt-4.1

🟡 Optional Information

Regression: Not a regression — pre-existing architectural gap. request_confirmation() was designed for regular tools in agent.tools; transfer_to_agent is a flow-level tool and never goes through that path.
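
Until the gap is closed, one possible workaround is to register an explicit transfer tool in agent.tools, so canonical_tools() can see it on resume. This is a sketch only: approved_transfer and the stub classes are hypothetical, and it assumes tool_context.actions.transfer_to_agent still triggers a transfer in 2.0 as it did in earlier ADK versions:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class StubActions:
    """Stand-in for ADK's EventActions (only the field this sketch needs)."""
    transfer_to_agent: Optional[str] = None


@dataclass
class StubToolContext:
    """Stand-in for ADK's ToolContext."""
    actions: StubActions = field(default_factory=StubActions)


def approved_transfer(agent_name: str, tool_context) -> dict:
    """Explicit transfer tool: setting actions.transfer_to_agent asks the
    runtime to hand control to the named sub-agent."""
    tool_context.actions.transfer_to_agent = agent_name
    return {"status": f"transferring to {agent_name}"}


# With real ADK this would be registered as a regular tool, e.g.:
#   root_agent = LlmAgent(..., tools=[FunctionTool(approved_transfer)], ...)
# so the before_tool_callback HITL gate applies to a tool that
# canonical_tools() can actually find on confirmation resume.
ctx = StubToolContext()
result = approved_transfer("math_assistant", ctx)
print(result["status"])                               # transferring to math_assistant
assert ctx.actions.transfer_to_agent == "math_assistant"
```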

Root Cause (potential):

agent_transfer.py ~line 51:

transfer_to_agent_tool = TransferToAgentTool(agent_names=[...])
await transfer_to_agent_tool.process_llm_request(...)  # adds to llm_request.tools_dict ONLY
# never added to agent.tools → invisible to canonical_tools()

request_confirmation.py ~line 168:

{
    tool.name: tool
    for tool in await agent.canonical_tools(   # only reads agent.tools
        ReadonlyContext(invocation_context)
    )
}
# tools_dict = {} → Tool 'transfer_to_agent' not found
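
A possible fix direction would be for the confirmation processor to fall back to the flow-injected llm_request.tools_dict instead of rebuilding from canonical_tools() alone. A dict-level sketch (resolve_tool is a hypothetical name, not an ADK function):

```python
def resolve_tool(name, canonical_tools_dict, request_tools_dict):
    """Union of the rebuilt canonical dict and the flow-injected dict;
    canonical entries win on name collisions."""
    merged = {**request_tools_dict, **canonical_tools_dict}
    if name not in merged:
        raise ValueError(
            f"Tool '{name}' not found. Available tools: {', '.join(merged)}"
        )
    return merged[name]


canonical = {}                                    # agent.tools is empty here
injected = {"transfer_to_agent": object()}        # flow-time injection
tool = resolve_tool("transfer_to_agent", canonical, injected)
assert tool is injected["transfer_to_agent"]      # lookup now succeeds
```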

How often has this issue occurred?: Always (100%)

Metadata

Labels

  • core [Component]: This issue is related to the core interface and implementation
  • request clarification [Status]: The maintainer needs clarification or more information from the author
  • v2: Affects only the 2.0 version
