
agents: fix the function call of openai llm. #1023

Open · wants to merge 1 commit into base: main
Conversation


@chainhelen chainhelen commented Sep 15, 2024


PR Checklist

  • Read the Contributing documentation.
  • Read the Code of conduct documentation.
  • Name your Pull Request title clearly, concisely, and prefixed with the name of the primarily affected package you changed according to Good commit messages (such as memory: add interfaces for X, Y or util: add whizzbang helpers).
  • Check that there isn't already a PR that solves the problem the same way to avoid creating a duplicate.
  • Provide a description in this PR that addresses what the PR is solving, or reference the issue that it solves (e.g. Fixes #123).
  • Describes the source of new concepts.
  • References existing implementations as appropriate.
  • Contains test coverage for new functions.
  • Passes all golangci-lint checks.

According to the OpenAI [function-calling guide](https://platform.openai.com/docs/guides/function-calling):
`Once you've executed these function calls in your application, you can provide the result back to the model
by adding one new message to the conversation for each function call, each containing the result of one
function call, with a tool_call_id referencing the id from tool_calls`.
We need to return each function-call result to the model in that structure;
otherwise the OpenAI server rejects the request and returns a 400 status code.

chainhelen commented Sep 15, 2024

If I use agents with a function-call response from the OpenAI LLM gpt-4o-2024-05-13, langchaingo gets a 400 status code; TestExecutorWithOpenAIFunctionAgent can also reproduce this issue.
This commit fixes it.
