Tool Use (Function Calling)
Concept: The ability of an LLM to invoke external functions (APIs, databases, code execution) as part of its reasoning.
Tool use, also called function calling, lets an LLM go beyond text generation. Instead of only producing words, the model outputs structured requests to call specific tools — search the web, query a database, run code, or hit an API.
The model receives a list of available tools with their schemas. During generation, it can choose to call one or more tools, receive the results, and continue reasoning. This grounds LLM output in real data and enables agents to take actions in the world.
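The loop described above (model proposes a tool call, the runtime executes it, the result is fed back) can be sketched in Python. The model is stubbed out here, and the message and tool-call dict shapes are illustrative assumptions, not any provider's wire format; `get_weather` is a hypothetical tool.

```python
import json

# Hypothetical tool implementation the model can invoke.
def get_weather(city: str) -> str:
    # A real tool would call a weather API; we return canned data.
    return json.dumps({"city": city, "temp_c": 21})

TOOLS = {"get_weather": get_weather}

def fake_model(messages):
    """Stand-in for an LLM: emits a tool call first, then a final answer.

    A real model decides this from the tool schemas in the request;
    these dict shapes are illustrative only.
    """
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_weather",
                              "arguments": {"city": "Paris"}}}
    result = json.loads(messages[-1]["content"])
    return {"text": f"It is {result['temp_c']} C in {result['city']}."}

def run(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    while True:
        reply = fake_model(messages)
        if "tool_call" in reply:
            # Execute the requested tool and append its result so the
            # model can continue reasoning with real data.
            call = reply["tool_call"]
            output = TOOLS[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "content": output})
        else:
            return reply["text"]

print(run("What's the weather in Paris?"))
```

The key design point is that the model never executes anything itself: it only emits a structured request, and the surrounding runtime decides whether and how to run it.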
Major providers implement this differently: OpenAI uses a tools array with JSON Schema parameters, Anthropic uses tool_use content blocks with an input_schema, and the Model Context Protocol (MCP) standardizes how clients discover and call tools across providers.
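To make the difference concrete, here is the same hypothetical get_weather function declared in both OpenAI's and Anthropic's tool formats. The field names (type/function/parameters vs. name/input_schema) follow each provider's documented shape; the function itself and its city parameter are illustrative.

```python
import json

# OpenAI-style tool entry: parameters are described with JSON Schema,
# nested under a "function" object inside the tools array.
openai_style_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# The same function declared for Anthropic's Messages API, which puts
# the JSON Schema under "input_schema" at the top level.
anthropic_style_tool = {
    "name": "get_weather",
    "description": "Get current weather for a city.",
    "input_schema": openai_style_tool["function"]["parameters"],
}

print(json.dumps(anthropic_style_tool, indent=2))
```

Note that both formats carry the same JSON Schema payload; only the envelope around it differs, which is what makes cross-provider standards like MCP feasible.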