feat: enable will_continue-like function response with streaming mode. #2788
base: main
Conversation
Summary of Changes
Hello @Lin-Nikaido, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a significant feature that allows function tools to provide streaming responses, enabling real-time progress updates for long-running operations. It refactors the function call handling mechanism to support asynchronous generators, ensuring that intermediate results from tools can be streamed back to the user when Server-Sent Events (SSE) mode is enabled. This enhances the user experience by providing transparency into ongoing tool executions.
Highlights
- Streaming Function Tool Responses: Introduced the capability for function tools to return streaming responses, allowing for real-time progress updates during long-running operations. This is achieved by enabling tools to return generator or async generator objects (see the sketch after this list).
- Asynchronous Generator Integration: Refactored the core function call handling in BaseLlmFlow and functions.py to process and yield intermediate events from asynchronous generators returned by tools, specifically when StreamingMode.SSE is active.
- Parallel Streaming Support: Implemented a mechanism (_concat_function_call_generators) to merge and stream events from multiple parallel function calls, ensuring that progress updates from concurrent tool executions are handled correctly.
- Function Schema Updates: Updated the function parameter parsing utility to correctly recognize and represent Python's Generator, Iterator, Iterable, AsyncGenerator, AsyncIterator, and AsyncIterable types as ARRAY in the generated tool schemas.
- Enhanced Tool Context: Added run_config to ToolContext to provide tools with direct access to the current run's configuration, including the streaming mode, enabling conditional streaming behavior within tools.
- Comprehensive Unit Tests: Added a new, extensive suite of unit tests (test_tools_generative_call.py) to validate the streaming functionality for various types of generator and async generator functions, covering both single and parallel tool executions.
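As a rough illustration of the first two highlights, here is a minimal sketch of the kind of streaming tool this enables. The `tool_context` parameter name and the shape of the progress payload are assumptions based on the summary above, not the PR's actual code.

```python
import asyncio
from typing import Any, AsyncGenerator


async def long_running_search(query: str, tool_context: Any) -> AsyncGenerator[dict, None]:
    """Hypothetical tool that streams progress while it works."""
    # Per the highlights, ToolContext now carries the current run's config, so a
    # tool could branch on tool_context.run_config.streaming_mode if it only
    # wants to emit progress when SSE streaming is active.
    for step in range(1, 4):
        await asyncio.sleep(1)  # stand-in for one slow unit of work
        # In StreamingMode.SSE, intermediate yields are surfaced as events.
        yield {"status": "in_progress", "step": step, "query": query}
    yield {"status": "done", "result": f"results for {query!r}"}
```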
Code Review
This pull request introduces a significant feature to enable streaming responses from function tools, allowing them to yield progress updates. The implementation involves changes across the flow, function handling, and tool execution logic to support generators and async generators. A comprehensive set of unit tests has been added to validate the new functionality. My review focuses on improving code quality by addressing an unnecessary import, removing commented-out code, and identifying several areas with code duplication that could be refactored for better maintainability.
Force-pushed from 3bcc6b1 to a0cd676.
Hi, I really appreciate the work on this project. I’ve been eagerly waiting for this feature to be merged, as it would be very helpful for my use case. Thank you for your time!
PiperOrigin-RevId: 835268090
Co-authored-by: Max Ind <[email protected]> PiperOrigin-RevId: 835269976
Resolves google#3641 Co-authored-by: Shangjie Chen <[email protected]> PiperOrigin-RevId: 835292931
Resolves google#3644 Co-authored-by: Shangjie Chen <[email protected]> PiperOrigin-RevId: 835302097
Merge google#3546 Co-authored-by: Xuan Yang <[email protected]> COPYBARA_INTEGRATE_REVIEW=google#3546 from ryanaiagent:feat/stale-issue-agent bcf4509 PiperOrigin-RevId: 835327931
Merge google#3638 Thanks for this great project 👏 This PR updates the GitHub actions dependencies to the latest version. ### Checklist - [X] I have read the [CONTRIBUTING.md](https://github.com/google/adk-python/blob/main/CONTRIBUTING.md) document. - [X] I have performed a self-review of my own code. Co-authored-by: Hangfei Lin <[email protected]> COPYBARA_INTEGRATE_REVIEW=google#3638 from gioboa:ci/actions-versions f7d6f3b PiperOrigin-RevId: 835343177
Merge google#3538 Main change from: E(inv=2, role=user), E(inv=2, role=model), E(inv=2, role=user), To: E(inv=2, role=user), E(inv=2, role=model) I think the last E(inv=2, role=user) was wrong. Also reformatted. Co-authored-by: Hangfei Lin <[email protected]> COPYBARA_INTEGRATE_REVIEW=google#3538 from adrianad:patch-1 627b933 PiperOrigin-RevId: 835346467
Merge google#3576 **Please ensure you have read the [contribution guide](https://github.com/google/adk-python/blob/main/CONTRIBUTING.md) before creating a pull request.** ### Link to Issue or Description of Change **1. Link to an existing issue (if applicable):** - Closes: google#3557 - Related: #_issue_number_ **2. Or, if no issue exists, describe the change:** _If applicable, please follow the issue templates to provide as much detail as possible._ **Problem:** When creating a BaseAgent with multiple sub-agents, there was no validation to ensure that all sub-agents have unique names. This could lead to confusion when trying to find or reference specific sub-agents by name, as duplicate names would make it ambiguous which agent is being referenced. **Solution:** Added a @field_validator for the sub_agents field in BaseAgent that validates all sub-agents have unique names. The validator: Checks for duplicate names in the sub-agents list Raises a ValueError with a clear error message listing all duplicate names found Returns the validated list if all names are unique Handles edge cases like empty lists gracefully ### Testing Plan _Please describe the tests that you ran to verify your changes. This is required for all PRs that are not small documentation or typo fixes._ **Unit Tests:** - [x] I have added or updated unit tests for my change. - [x] All unit tests pass locally. _Please include a summary of passed `pytest` results._ Added 4 new test cases in tests/unittests/agents/test_base_agent.py: test_validate_sub_agents_unique_names_single_duplicate: Verifies that a single duplicate name raises ValueError test_validate_sub_agents_unique_names_multiple_duplicates: Verifies that multiple duplicate names are all reported in the error message test_validate_sub_agents_unique_names_no_duplicates: Verifies that unique names pass validation successfully test_validate_sub_agents_unique_names_empty_list: Verifies that empty sub-agents list passes validation All tests pass locally. You can run with: pytest tests/unittests/agents/test_base_agent.py::test_validate_sub_agents_unique_names_single_duplicate tests/unittests/agents/test_base_agent.py::test_validate_sub_agents_unique_names_multiple_duplicates tests/unittests/agents/test_base_agent.py::test_validate_sub_agents_unique_names_no_duplicates tests/unittests/agents/test_base_agent.py::test_validate_sub_agents_unique_names_empty_list -v **Manual End-to-End (E2E) Tests:** _Please provide instructions on how to manually test your changes, including any necessary setup or configuration. Please provide logs or screenshots to help reviewers better understand the fix._ Test Case 1: Duplicate names should raise error from google.adk.agents import Agent agent1 = Agent(name="sub_agent", model="gemini-2.5-flash") agent2 = Agent(name="sub_agent", model="gemini-2.5-flash") # Same name # This should raise ValueError try: parent = Agent( name="parent", model="gemini-2.5-flash", sub_agents=[agent1, agent2] ) except ValueError as e: print(f"Expected error: {e}") # Output: Found duplicate sub-agent names: `sub_agent`. All sub-agents must have unique names. 
Test Case 2: Unique names should work from google.adk.agents import Agent agent1 = Agent(name="agent1", model="gemini-2.5-flash") agent2 = Agent(name="agent2", model="gemini-2.5-flash") # This should work without error parent = Agent( name="parent", model="gemini-2.5-flash", sub_agents=[agent1, agent2] ) print("Success: Unique names validated correctly") ### Checklist - [x] I have read the [CONTRIBUTING.md](https://github.com/google/adk-python/blob/main/CONTRIBUTING.md) document. - [x] I have performed a self-review of my own code. - [x] I have commented my code, particularly in hard-to-understand areas. - [x] I have added tests that prove my fix is effective or that my feature works. - [x] New and existing unit tests pass locally with my changes. - [x] I have manually tested my changes end-to-end. - [x] Any dependent changes have been merged and published in downstream modules. ### Additional context This change adds validation at the BaseAgent level, so it automatically applies to all agent types that inherit from BaseAgent (e.g., LlmAgent, LoopAgent, etc.). The validation uses Pydantic's field validator system, which runs during object initialization, ensuring the constraint is enforced early and consistently. The error message clearly identifies which names are duplicated, making it easy for developers to fix the issue: COPYBARA_INTEGRATE_REVIEW=google#3576 from sarojrout:feat/validate-unique-sub-agent-names 07adf1f PiperOrigin-RevId: 835358118
…AgentTool runner Merge google#2779 Fixes google#2780 ### testing plan not available as is doesn't introduce new functionality Co-authored-by: Wei Sun (Jack) <[email protected]> COPYBARA_INTEGRATE_REVIEW=google#2779 from davidkl97:feature/agent-tool-plugins a602c80 PiperOrigin-RevId: 835366974
… provided in FunctionDeclaration Co-authored-by: Xuan Yang <[email protected]> PiperOrigin-RevId: 835379243
This change replaces the use of `ChatCompletionDeveloperMessage` with `ChatCompletionSystemMessage` and sets the role to "system" for providing system instructions to LiteLLM models Close google#3657 Co-authored-by: George Weale <[email protected]> PiperOrigin-RevId: 835388738
Co-authored-by: Shangjie Chen <[email protected]> PiperOrigin-RevId: 835466120
Merge google#2588 ## Description Fixes an issue in `base_llm_flow.py` where, in Bidi-streaming (live) mode, the multi-agent structure causes duplicated responses after tool calling. ## Problem In Bidi-streaming (live) mode, when utilizing a multi-agent structure, the leaf-level sub-agent and its parent agent both process the same function call response, leading to duplicate replies. This duplication occurs because the parent agent's live connection remains open while initiating a new connection with the child agent. ## Root Cause The issue originated from the placement of agent transfer logic in the `_postprocess_live` method at lines 547-557. When a `transfer_to_agent` function call was made: 1. The function response was processed in `_postprocess_live` 2. A recursive call to `agent_to_run.run_live` was initiated 3. This prevented the closure of the parent agent's connection at line 175 of the `run_live` method, as that code path was never reached 4. Both the parent and child agents remained active, causing both to process subsequent function responses ## Solution This PR addresses the issue by ensuring the parent agent's live connection is closed before initiating a new one with the child agent. Changes made: **Connection Management**: Moved the agent transfer logic from `_postprocess_live` method to the `run_live` method, specifically: - Removed agent transfer handling from lines 547-557 in `_postprocess_live` - Added agent transfer handling after connection closure at lines 176-184 in `run_live` **Code Refactoring**: The agent transfer now occurs in the proper sequence: 1. Parent agent processes the `transfer_to_agent` function response 2. Parent agent's live connection is properly closed (line 175) 3. New connection with child agent is initiated (line 182) 4. Child agent handles subsequent function calls without duplication **Improved Flow Control**: This ensures that each agent processes function call responses without duplication, maintaining proper connection lifecycle management in multi-agent structures. ## Testing To verify this fix works correctly: 1. **Multi-Agent Structure Test**: Set up a multi-agent structure with a parent agent that transfers to a child agent via `transfer_to_agent` function call 2. **Bidi-Streaming Mode**: Enable Bidi-streaming (live) mode in the configuration 3. **Function Call Verification**: Trigger a function call that results in agent transfer 4. **Response Monitoring**: Verify that only one response is generated (not duplicated) 5. **Connection Management**: Confirm that parent agent's connection is properly closed before child agent starts **Expected Behavior**: - Single function response per call - Clean agent handoffs without connection leaks - Proper connection lifecycle management ## Backward Compatibility This change is **fully backward compatible**: - No changes to public APIs or method signatures - Existing single-agent flows remain unaffected - Non-live (regular async) flows continue to work as before - Only affects the internal flow control in live multi-agent scenarios Co-authored-by: Hangfei Lin <[email protected]> COPYBARA_INTEGRATE_REVIEW=google#2588 from AlexeyChernenkoPlato:fix/double-function-response-processing-issue 3339260 PiperOrigin-RevId: 835619170
…on in AgentLoader Merge google#3609 Co-authored-by: George Weale <[email protected]> COPYBARA_INTEGRATE_REVIEW=google#3609 from p-kris10:fix/windows-cmd 8cb0310 PiperOrigin-RevId: 836395714
PiperOrigin-RevId: 836399603
Co-authored-by: Shangjie Chen <[email protected]> PiperOrigin-RevId: 836409638
Merge google#2437 Current implementation of `transfer_to_agent` doesn't enforce strict constraints on agent names, we could use JSON Schema's enum definition to implement stricter constraints. Co-authored-by: Xuan Yang <[email protected]> COPYBARA_INTEGRATE_REVIEW=google#2437 from qieqieplus:main 052e8e7 PiperOrigin-RevId: 836410397
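An illustrative, hand-written declaration for the constraint this commit describes; the agent names and exact schema shape are assumptions, not the generated output.

```python
# Sketch: restrict transfer_to_agent's agent_name parameter to known agent
# names via a JSON Schema enum, instead of accepting any string.
transfer_to_agent_declaration = {
    "name": "transfer_to_agent",
    "description": "Transfer the conversation to another agent.",
    "parameters": {
        "type": "object",
        "properties": {
            "agent_name": {
                "type": "string",
                # Illustrative names; in practice this is the set of agents
                # reachable from the current agent.
                "enum": ["billing_agent", "support_agent"],
            },
        },
        "required": ["agent_name"],
    },
}
```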
… agent Co-authored-by: Xuan Yang <[email protected]> PiperOrigin-RevId: 836416597
Merge google#3284 **Problem:** When debugging agents that utilize the `VertexAiSearchTool`, it's currently difficult to inspect the specific configuration parameters (datastore ID, engine ID, filter, max_results, etc.) being passed to the underlying Vertex AI Search API via the `LlmRequest`. This lack of visibility can hinder troubleshooting efforts related to tool configuration. **Solution:** This PR enhances the `VertexAiSearchTool` by adding a **debug-level log statement** within the `process_llm_request` method. This log precisely records the parameters being used for the Vertex AI Search configuration just before it's appended to the `LlmRequest`. This provides developers with crucial visibility into the tool's runtime behavior when debug logging is enabled, significantly improving the **debuggability** of agents using this tool. Corresponding unit tests were updated to rigorously verify this new logging output using `caplog`. Additionally, minor fixes were made to the tests to resolve Pydantic validation errors. Co-authored-by: Xuan Yang <[email protected]> COPYBARA_INTEGRATE_REVIEW=google#3284 from omkute10:feat/add-logging-vertex-search-tool 199c12b PiperOrigin-RevId: 836419886
PiperOrigin-RevId: 836614561
PiperOrigin-RevId: 856400285
Co-authored-by: Xiang (Sean) Zhou <[email protected]> PiperOrigin-RevId: 856404282
…tifactsTool The LoadArtifactsTool now checks if an artifact's inline data MIME type is supported by Gemini. If not, it attempts to convert the artifact content into a text Part Close google#4028 Co-authored-by: George Weale <[email protected]> PiperOrigin-RevId: 856404510
Co-authored-by: Xuan Yang <[email protected]> PiperOrigin-RevId: 856412858
There was an extra test_mcp_toolset in the tools/ directory with only one test; I moved it into the main file. Co-authored-by: Kathy Wu <[email protected]> PiperOrigin-RevId: 856419611
… LiteLLM Introduces a function to map LiteLLM finish reason strings to the internal types.FinishReason enum and populates the finish_reason field in LlmResponse. Removes custom logic for handling file URIs, including special casing for different providers, and updates tests accordingly Close google#4125 Co-authored-by: George Weale <[email protected]> PiperOrigin-RevId: 856421317
Audio is transcribed and thus doesn't need to be sent, but other blobs (e.g. images) should still be sent. Co-authored-by: Xiang (Sean) Zhou <[email protected]> PiperOrigin-RevId: 856422986
1. Convert A2A responses containing a DataPart to ADK events. By default, this is done by serializing the DataPart to JSON and embedding it within the inline_data field of a GenAI Part, wrapped with custom tags (<a2a_datapart_json> and </a2a_datapart_json>). 2. Convert ADK events back to A2A requests. Specifically, messages stored in inline_data with the text/plain mime type and content wrapped within the custom tags (<a2a_datapart_json> and </a2a_datapart_json>) are deserialized from JSON back into an A2A DataPart PiperOrigin-RevId: 856426615
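A minimal sketch of the tag-wrapping convention described above; the helper names are hypothetical, and the real conversion operates on GenAI Part and A2A DataPart objects rather than bare strings.

```python
import json
from typing import Optional

A2A_DATAPART_OPEN = "<a2a_datapart_json>"
A2A_DATAPART_CLOSE = "</a2a_datapart_json>"


def wrap_datapart(data: dict) -> str:
    # Serialize the DataPart payload and wrap it in the custom tags so it can
    # travel inside a text/plain inline_data field.
    return f"{A2A_DATAPART_OPEN}{json.dumps(data)}{A2A_DATAPART_CLOSE}"


def unwrap_datapart(text: str) -> Optional[dict]:
    # Recover the payload only when the text uses the custom tag wrapping.
    if text.startswith(A2A_DATAPART_OPEN) and text.endswith(A2A_DATAPART_CLOSE):
        return json.loads(text[len(A2A_DATAPART_OPEN):-len(A2A_DATAPART_CLOSE)])
    return None
```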
…ager Merge google#4025 **Please ensure you have read the [contribution guide](https://github.com/google/adk-python/blob/main/CONTRIBUTING.md) before creating a pull request.** ### Link to Issue or Description of Change **1. Link to an existing issue (if applicable):** - Closes: - google#3950 - google#3731 - google#3708 **2. Or, if no issue exists, describe the change:** **Problem:** - `ClientSession` of https://github.com/modelcontextprotocol/python-sdk uses AnyIO for async task management. - AnyIO TaskGroup requires its start and close must happen in a same task. - Since `McpSessionManager` does not create task per client, the client might be closed by different task, cause the error: `Attempted to exit cancel scope in a different task than it was entered in`. **Solution:** I Suggest 2 changes: Handling the `ClientSession` in a single task - To start and close `ClientSession` by the same task, we need to wrap the whole lifecycle of `ClientSession` to a single task. - `SessionContext` wraps the initialization and disposal of `ClientSession` to a single task, ensures that the `ClientSession` will be handled only in a dedicated task. Add timeout for `ClientSession` - Since now we are using task per `ClientSession`, task should never be leaked. - But `McpSessionManager` does not deliver timeout directly to `ClientSession` when the type is not STDIO. - There is only timeout for `httpx` client when MCP type is SSE or StreamableHTTP. - But the timeout applys only to `httpx` client, so if there is an issue in MCP client itself(e.g. modelcontextprotocol/python-sdk#262), a tool call waits the result **FOREVER**! - To overcome this issue, I propagated the `sse_read_timeout` to `ClientSession`. - `timeout` is too short for timeout for tool call, since its default value is only 5s. - `sse_read_timeout` is originally made for read timeout of SSE(default value of 5m or 300s), but actually most of SSE implementations from server (e.g. FastAPI, etc.) sends ping periodically(about 15s I assume), so in a normal circumstances this timeout is quite useless. - If the server does not send ping, the timeout is equal to tool call timeout. Therefore, it would be appropriate to use `sse_read_timeout` as tool call timeout. - Most of tool calls should finish within 5 minutes, and sse timeout is adjustable if not. - If this change is not acceptable, we could make a dedicate parameter for tool call timeout(e.g. `tool_call_timeout`). ### Testing Plan - Although this does not change the interface itself, it changes its own session management logics, some existing tests are no longer valid. - I made changes to those tests, especially those of which validate session states(e.g. checking whether `initialize()` called). - Since now session is encapsulated with `SessionContext`, we cannot validate the initialized state of the session in `TestMcpSessionManager`, should validate it at `TestSessionContext`. - Added a simple test for reproducing the issue(`test_create_and_close_session_in_different_tasks`). - Also made a test for the new component: `SessionContext`. **Unit Tests:** - [x] I have added or updated unit tests for my change. - [x] All unit tests pass locally. 
```plaintext =================================================================================== 3689 passed, 1 skipped, 2205 warnings in 63.39s (0:01:03) =================================================================================== ``` **Manual End-to-End (E2E) Tests:** _Please provide instructions on how to manually test your changes, including any necessary setup or configuration. Please provide logs or screenshots to help reviewers better understand the fix._ ### Checklist - [x] I have read the [CONTRIBUTING.md](https://github.com/google/adk-python/blob/main/CONTRIBUTING.md) document. - [x] I have performed a self-review of my own code. - [x] I have commented my code, particularly in hard-to-understand areas. - [x] I have added tests that prove my fix is effective or that my feature works. - [x] New and existing unit tests pass locally with my changes. - [x] I have manually tested my changes end-to-end. - [ ] ~~Any dependent changes have been merged and published in downstream modules.~~ `no deps has been changed` ### Additional context This PR is related to modelcontextprotocol/python-sdk#1817 since it also fixes endless tool call awaiting. Co-authored-by: Kathy Wu <[email protected]> COPYBARA_INTEGRATE_REVIEW=google#4025 from challenger71498:feat/task-based-mcp-session-manager f7f7cd0 PiperOrigin-RevId: 856438147
So that we can access self._auth_config in McpTool for getting auth headers Co-authored-by: Kathy Wu <[email protected]> PiperOrigin-RevId: 856451693
use yield type as return type Co-authored-by: Xiang (Sean) Zhou <[email protected]> PiperOrigin-RevId: 856459995
… register them Original code used tool.__name__ to register streaming tools. This is problematic: it only works with Python functions passed as tools directly; if they are wrapped in FunctionTool, the wrapper doesn't have a "__name__" property. canonical_tools wraps Python functions in FunctionTool uniformly, so we can use tool.name uniformly. Co-authored-by: Xiang (Sean) Zhou <[email protected]> PiperOrigin-RevId: 856472936
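A small, self-contained illustration of the problem this commit fixes; FunctionToolLike is a hypothetical stand-in for FunctionTool.

```python
def roll_dice(sides: int) -> int:
    """Plain Python function: exposes a __name__ attribute."""
    return sides  # trivial body, for illustration only


class FunctionToolLike:
    """Hypothetical stand-in for FunctionTool: no __name__, but a name field."""

    def __init__(self, func):
        self.func = func
        self.name = func.__name__  # the wrapper records the name explicitly


tool = FunctionToolLike(roll_dice)
print(roll_dice.__name__)         # roll_dice -- works for bare functions
print(hasattr(tool, "__name__"))  # False -- why tool.__name__ registration broke
print(tool.name)                  # roll_dice -- works uniformly after wrapping
```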
…eature enabled Co-authored-by: Xuan Yang <[email protected]> PiperOrigin-RevId: 856508415
Merge google#3988 ### Link to Issue or Description of Change **1. Link to an existing issue (if applicable):** - Closes: google#3987 - Related: google#3596 **2. Or, if no issue exists, describe the change:** **Problem:** Idea about my use case I'm building the report generation system using google-ask (1.18.0) and building multiple subagents here, I'm passing the one subagent Agent as a tool to another Parent Agent. Note: Sub-agent can do web search. here, parent agent triggers multiple sub-agent (same agent) multiple times according to use case or complexity of the user input Describe the bug here, the bug sometimes sub agents doesn't provide the proper output and resulted in the ``` merged_text = '\n'.join(p.text for p in last_content.parts if p.text) ^^^^^^^^^^^^^^^^^^ TypeError: 'NoneType' object is not iterable and it's breaking the system of Agents workflow ``` **Solution:** Creating fallback if there is no **last_content.parts** it will return the empty parts so we won't face the NoneType issue ### Testing Plan Created a unit test file for this issue test_google_search_agent_tool_repro.py **Unit Tests:** - [X] I have added or updated unit tests for my change. - [X] All unit tests pass locally. _Please include a summary of passed `pytest` results._ 3677 passed, 2208 warnings in 42.64s **Manual End-to-End (E2E) Tests:** N/A ### Checklist - [X] I have read the [CONTRIBUTING.md](https://github.com/google/adk-python/blob/main/CONTRIBUTING.md) document. - [X] I have performed a self-review of my own code. - [X] I have commented my code, particularly in hard-to-understand areas. - [X] I have added tests that prove my fix is effective or that my feature works. - [X] New and existing unit tests pass locally with my changes. - [X] I have manually tested my changes end-to-end. - [ ] Any dependent changes have been merged and published in downstream modules. ### Additional context N/A Co-authored-by: Liang Wu <[email protected]> COPYBARA_INTEGRATE_REVIEW=google#3988 from ananthanarayanan-28:none-type-issue e6ba948 PiperOrigin-RevId: 856515019
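A sketch of the described fallback, using the names from the quoted traceback; `last_content` is assumed to be a GenAI `Content` whose `parts` may be `None`.

```python
from google.genai import types


def merge_text_parts(last_content: types.Content) -> str:
    # Treat a missing parts list as empty instead of raising TypeError.
    parts = last_content.parts or []
    return "\n".join(p.text for p in parts if p.text)
```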
Merge google#4059 ### Link to Issue or Description of Change **1. Link to an existing issue (if applicable):** - Closes: google#3961 **Problem:** Excessive notifications from periodic workflow runs (every 6 hours for triage, daily for stale-bot and docs upload) in forks where they are not needed. **Solution:** Add repository checks to prevent scheduled workflows from running in forked repositories. The workflows will now only run in the main google/adk-python repository. ### Testing Plan I think that unit tests and E2E are not needed because this change is for GitHub Actions (not ADK source code). _Please describe the tests that you ran to verify your changes. This is required for all PRs that are not small documentation or typo fixes._ **Unit Tests:** - [ ] I have added or updated unit tests for my change. - [ ] All unit tests pass locally. _Please include a summary of passed `pytest` results._ **Manual End-to-End (E2E) Tests:** _Please provide instructions on how to manually test your changes, including any necessary setup or configuration. Please provide logs or screenshots to help reviewers better understand the fix._ ### Checklist - [x] I have read the [CONTRIBUTING.md](https://github.com/google/adk-python/blob/main/CONTRIBUTING.md) document. - [x] I have performed a self-review of my own code. - [x] I have commented my code, particularly in hard-to-understand areas. - [ ] I have added tests that prove my fix is effective or that my feature works. - [ ] New and existing unit tests pass locally with my changes. - [ ] I have manually tested my changes end-to-end. - [ ] Any dependent changes have been merged and published in downstream modules. COPYBARA_INTEGRATE_REVIEW=google#4059 from ftnext:disable-fork-actions 991cc94 PiperOrigin-RevId: 856691019
PiperOrigin-RevId: 856706510
…chema Co-authored-by: Xiang (Sean) Zhou <[email protected]> PiperOrigin-RevId: 856713741
…gentEngine Co-authored-by: Yeesian Ng <[email protected]> PiperOrigin-RevId: 856724694
Co-authored-by: George Weale <[email protected]> PiperOrigin-RevId: 856737497
Co-authored-by: Yeesian Ng <[email protected]> PiperOrigin-RevId: 856749290
Currently only tool calling supports MCP auth. This refactors the auth logic into an auth_utils file and uses it for tool listing as well. Fixes google#2168. Co-authored-by: Kathy Wu <[email protected]> PiperOrigin-RevId: 856798589
Merge google#4117 **Overview** This PR implements the feature request in google#4108 to allow `thinking_config` to be set directly within `generate_content_config`, bringing the Python SDK in line with the Go implementation. **Changes** - **llm_agent.py**: Relaxed the validation logic in `validate_generate_content_config` to remove the `ValueError` for `thinking_config`. - **Precedence Warning**: Added an override of `model_post_init` in `LlmAgent` to issue a `UserWarning` if both a `planner` and a manual `thinking_config` are provided. - **built_in_planner.py**: Updated `apply_thinking_config` to log an `INFO` message when the planner overwrites an existing configuration on the `LlmRequest`. **Testing** Verified with a reproduction script covering: 1. Successful initialization of an agent with direct `thinking_config`. 2. Validation of `UserWarning` during initialization when conflicting configurations are present. 3. Confirmation of logger output when the planner performs an overwrite. Closes: google#4108 Tagging @invictus2010 for visibility. Co-authored-by: Liang Wu <[email protected]> COPYBARA_INTEGRATE_REVIEW=google#4117 from Akshat8510:feat/allow-thinking-config-4108 5deeb89 PiperOrigin-RevId: 856821447
…hema Co-authored-by: Liang Wu <[email protected]> PiperOrigin-RevId: 856882485
…un_async method with streaming mode.
…-with-stream' into feat/generative-function-calling-with-stream # Conflicts: # src/google/adk/agents/readonly_context.py # src/google/adk/flows/llm_flows/functions.py # src/google/adk/tools/_function_parameter_parse_util.py # src/google/adk/tools/function_tool.py # tests/unittests/testing_utils.py
I resolved the conflicts and updated to the latest version.
Linked Issue
Closes #2014
Abstract
It is expected that the tool returns a generator, and that when `streaming_mode` is `StreamingMode.SSE`, the `runner.run_async` method yields each generator result as a separate `Event`. When `streaming_mode` is not SSE, there is no change in behavior. The expected use case is a function tool that takes a few minutes in total, where the user wants to be notified of its progress.
e.g., a function like the one sketched below.
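(Illustrative sketch only; the tool name, stages, and timing are assumptions, not taken from the PR.)

```python
import time
from typing import Generator


def generate_report(topic: str) -> Generator[dict, None, None]:
    """Long-running tool that reports progress while it works."""
    stages = ["collecting sources", "drafting", "reviewing"]
    for i, stage in enumerate(stages, start=1):
        time.sleep(60)  # stand-in for a slow step; the whole tool takes minutes
        # With streaming_mode=StreamingMode.SSE, each yield is surfaced to the
        # user as an Event instead of being held until the tool finishes.
        yield {"status": "in_progress", "step": f"{i}/{len(stages)}", "stage": stage}
    yield {"status": "done", "report": f"Report on {topic}"}
```

When `streaming_mode` is not SSE, the same tool runs with no change in behavior, as noted above.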
Testing Plan
Added `/tests/unittests/tools/test_tools_generative_call.py` and it passed.
