LangChain tools. Tools are interfaces that an agent, chain, or LLM can use to interact with the world; they can be generic utilities (such as search), other chains, or even other agents. A tool is an association between a function and its schema. Besides the actual function that is called, a tool consists of several components: a name (str), which is required and must be unique within the set of tools provided to an agent; a description (str), which is optional but recommended because the agent uses it to decide when to call the tool; and an argument schema. All tools derive from BaseTool (class langchain_core.tools.BaseTool), the abstract class that defines the interface every LangChain tool must implement. Because tools are also Runnables, they expose the additional Runnable methods such as with_types, with_retry, assign, bind, and get_graph.

Tool calling allows a model to detect when one or more tools should be called and to respond with the inputs that should be passed to those tools. Calling tools with an LLM is generally more reliable than pure prompting, but it isn't perfect. This guide covers how to bind tools to an LLM and then invoke the LLM to generate the tool arguments. Several constructor parameters shape a tool's behavior: return_direct controls whether the tool's result is returned directly rather than continuing the agent loop; args_schema is an optional argument schema the user can specify; and response_format determines how the output is interpreted, either "content" (the output becomes the contents of a ToolMessage) or "content_and_artifact" (the output must be a two-tuple corresponding to the (content, artifact) of a ToolMessage).

LangChain ships a large collection of built-in tools; refer to the integrations pages for the full list of pre-built tools. Examples include the SQLDatabaseToolkit for interacting with a SQL database, the Google Serper component for searching the web, the Dall-E tool for letting your agent create images with OpenAI's Dall-E, tools for working with the local file system, the Polygon IO toolkit, the AWS Lambda tool, the Connery Action tool (which integrates an individual Connery Action into your agent), the Jina tool, and a Shell/Bash tool for executing commands (note that the Shell tool does not work on Windows). When constructing your own agent, you will need to provide it with the list of tools it can use. LangChain simplifies every stage of the LLM application lifecycle: during development you build applications from LangChain's open-source building blocks, components, and third-party integrations, and you can deploy and scale with LangGraph Platform, which offers APIs for state management, a visual studio for debugging, and multiple deployment options. Related how-to guides cover creating tools, using built-in tools and toolkits, using chat models to call tools, passing tool outputs to chat models, and few-shot prompting tool behavior. A minimal example of binding a tool to a chat model follows.
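The workflow described above (define a tool, bind it to a chat model that supports tool calling, and inspect the tool calls the model produces) can be sketched as follows. This is a minimal illustration rather than the canonical example from the docs: the multiply tool, the gpt-4o-mini model name, and the use of langchain-openai are assumptions.

```python
# Minimal sketch: define a tool, bind it, and inspect the generated tool calls.
# The tool, model name, and provider package are illustrative assumptions.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI  # assumes langchain-openai is installed


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


llm = ChatOpenAI(model="gpt-4o-mini")        # hypothetical model choice
llm_with_tools = llm.bind_tools([multiply])  # give the model awareness of the tool

ai_msg = llm_with_tools.invoke("What is 6 times 7?")
# The model does not run the tool; it returns the arguments it wants passed to it.
print(ai_msg.tool_calls)
```

The model itself never executes the function; it only proposes a call, which your code (or an agent loop) is responsible for running.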
For detailed documentation of all API toolkit features and configurations, head to the API reference for RequestsToolkit. Giving models discretion to execute real-world actions carries inherent risks, so take precautions to mitigate them. Tool calling itself can also fail: strategies like keeping schemas simple, reducing the number of tools you pass at once, and writing good names and descriptions help, but they aren't foolproof. Scalable access to tools means an agent can be equipped with hundreds or even thousands of tools.

Tool binding is the step of connecting a tool to a model that supports tool calling. LangChain provides a standard tool calling API: a uniform interface for binding tools to models, accessing the tool call requests made by models, and sending tool results back to the model. The key to using models with tools is correctly prompting the model and parsing its response so that it chooses the right tools and supplies valid arguments. A distinguishing feature of LangChain's tool calling is that it abstracts these provider-specific capabilities behind a single, shared interface; the documentation includes a table showing which models support which features. Each tool can also carry optional metadata (metadata: Optional[Dict[str, Any]] = None) associated with it.

In short, tools are interfaces that an agent, chain, or LLM can use to interact with the world: essentially functions that extend the agent's capabilities, components that agents call to perform specific actions. (Other frameworks share the idea; in CrewAI, for example, a tool is a skill or function that agents can utilize to perform various actions.) In LangChain, an "Agent" is an AI entity that interacts with various tools to perform tasks or answer queries, and LangChain lets developers combine large language models such as GPT-4 with external sources of data and computation. Use LangSmith to debug and evaluate your applications with trace-level visibility and monitoring, and explore the curated list of tools, ports, services, agents, templates, and platforms that use LangChain.

The core classes live in langchain_core.tools: BaseTool (the base class for all LangChain tools, with Bases: RunnableSerializable[Union[str, dict, ToolCall], Any]), Tool (a tool that takes in a function or coroutine directly), and StructuredTool (a tool that can operate on any number of inputs). Concrete integrations such as BraveSearch implement the standard Runnable interface as well, and the Shell tool lets the LLM execute arbitrary shell commands. There are several ways to build custom tools: the simplest is the @tool decorator, where you define a function and let LangChain infer the schema, while StructuredTool.from_function gives more control for multi-input tools, as sketched below.
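A sketch of the multi-input route, assuming a hypothetical web-search function and an explicit Pydantic argument schema; the names here (SearchInput, web_search) are illustrative, not from the original text.

```python
# Sketch of a multi-input custom tool built with StructuredTool.from_function.
# The search function is a stand-in; a real tool would call an actual API.
from pydantic import BaseModel, Field
from langchain_core.tools import StructuredTool


class SearchInput(BaseModel):
    query: str = Field(description="Search query to run")
    max_results: int = Field(default=5, description="How many results to return")


def fake_search(query: str, max_results: int = 5) -> str:
    return f"Top {max_results} results for {query!r}"


search_tool = StructuredTool.from_function(
    func=fake_search,
    name="web_search",
    description="Search the web and return a short summary of results.",
    args_schema=SearchInput,
)

# Tools are runnables, so they can be invoked directly with a dict of arguments.
print(search_tool.invoke({"query": "LangChain tools", "max_results": 3}))
```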
load_tools(tool_names: List[str], llm: BaseLanguageModel | None = None, callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None, allow_dangerous_tools: bool = False, **kwargs: Any) → List[BaseTool], found in langchain_community.agent_toolkits.load_tools, loads tools based on their names. LangChain offers an extensive library of off-the-shelf tools and an intuitive framework for customizing your own, and you can build, prototype, and monitor LLM apps using LangChain, LangGraph, LangFlow, and LangSmith. To expose LangChain tools over MCP, the recommendation is to use FastMCP together with langchain-mcp-adapters, with Streamable HTTP as the transport.

Toolkits are collections of tools designed to be used together for specific tasks, and they provide convenient loading methods; the SQLDatabase toolkit, for example, gets you started working with a SQL database, and the OpenAPI toolkit requires an OpenAPI spec file to set up. Hosted search tools such as Google Serper require you to sign up for a free account with the provider and obtain an API key before use. Beyond the built-ins, LangChain has a large collection of third-party tools, each with its own description, and some models are capable of tool calling, that is, generating arguments that conform to a specific, user-provided schema.

So what are LangChain tools? They are interfaces that allow an AI model (such as GPT-4) to interact with external systems, retrieve data, or perform actions beyond generating text, and LangChain offers tooling for every step of the agent development lifecycle, from design to evaluation to deployment. A common use case is letting the LLM interact with your local file system; others include connecting Google Drive data and performing financial data analysis with Alpha Vantage, and applications such as an AI video analyzer and chat agent have been built with Streamlit, Agno, and LangChain's DuckDuckGo tool. Later sections demonstrate how to take the tool calls a model produces, actually call the corresponding functions, and pass the results back to the model. Finally, while the LangChain framework can be used standalone, it also integrates seamlessly with the rest of the LangChain ecosystem, giving developers a full suite of tools when building LLM applications. A sketch of loading prebuilt tools by name follows.
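A small sketch of load_tools, based on the signature quoted above. The tool name "llm-math" and the model choice are assumptions; the tool names available in your installed version may differ, and dangerous tools additionally require allow_dangerous_tools=True.

```python
# Sketch: load prebuilt tools by name and inspect them.
# The "llm-math" tool name and the model are illustrative assumptions.
from langchain_community.agent_toolkits.load_tools import load_tools
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")     # some tools need an LLM to work
tools = load_tools(["llm-math"], llm=llm)

for t in tools:
    print(t.name, "-", t.description)     # each loaded tool carries a description
```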
Alongside structured tools, LangChain introduced a new agent class that works well with these new types of tools. When building a complex agentic system, use LangGraph for controllable orchestration; LangChain agents (the AgentExecutor in particular) have multiple configuration parameters, and those parameters map onto the LangGraph react agent executor created with the create_react_agent prebuilt helper, as sketched below. To improve your LLM application development, pair LangChain with LangSmith, which is helpful for agent evals and observability, and visit the Tool Integrations page for the list of available tools. LangChain's architecture lets developers combine LLMs with external data, prompt engineering, retrieval-augmented generation (RAG), semantic search, and agent workflows.

Tools are also runnables, so they can be used directly within a chain. If a model does not support tool calling, you can instead prompt the model directly to invoke tools, without relying on any special model APIs. One practical detail: a tool's output may be a list while a ToolMessage's content should be a string, which is why an index such as [0] is sometimes applied to the tool output before it is sent back to the model. In an API call, you can describe tools and have the model intelligently choose to output a structured object (such as JSON) containing the arguments for calling them; and although "function calling" sometimes refers to invoking a single function, LangChain treats all models as though they can return multiple tool or function calls in each message.

A key feature of LangChain is the ability to create custom tools tailored to your specific use case, whether you're a seasoned developer or just starting out. The @tool decorator is backed by tool(*args: str | Callable | Runnable, return_direct: bool = False, args_schema: Type | None = None, infer_schema: bool = True, response_format: Literal['content', 'content_and_artifact'] = 'content', parse_docstring: bool = False, error_on_invalid_docstring: bool = True) → Callable, which makes tools out of functions and can be used with or without arguments; *args are extra positional arguments and must be empty, and infer_schema controls whether the argument schema is inferred from the function's signature. Tools can also carry tags, which are associated with each call to the tool and passed to the handlers defined in callbacks, useful for identifying a specific instance of a tool and its use case. Recent improvements to the core tool interfaces make it easier to turn any code into a tool, handle diverse inputs, enrich tool outputs, and handle tool errors effectively, and the changes are fully backwards compatible. A few practical notes: while the Requests tools are fine for static sites, the PlayWright Browser toolkit lets your agent navigate and interact with dynamically rendered sites; search tools are particularly helpful for answering questions about current events; giving agents access to the shell is powerful but risky outside a sandboxed environment; and the SQL database toolkit is useful for asking questions, performing queries, validating queries, and more against a SQL database.
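A minimal sketch of the prebuilt LangGraph react agent mentioned above, assuming langgraph and langchain-openai are installed; the weather tool and the model name are illustrative assumptions.

```python
# Sketch: a prebuilt react-style agent that can call a single tool.
# The weather tool and model name are illustrative assumptions.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city."""
    return f"It is sunny in {city}."


agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), [get_weather])

# The agent loop calls the tool, feeds the result back, and returns the final answer.
result = agent.invoke({"messages": [("user", "What's the weather in Paris?")]})
print(result["messages"][-1].content)
```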
With LangChain's ingestion and retrieval methods, developers can easily augment the LLM's knowledge with company data, user information, and other private sources. An exciting use case for LLMs is building natural language interfaces for other "tools", whether those are APIs, functions, or databases, and one of the most powerful applications this enables is the sophisticated question-answering (Q&A) chatbot: an application that answers questions about specific source information, typically using a technique known as Retrieval Augmented Generation (RAG). More and more LLM providers are exposing APIs for reliable tool calling, and open-source projects such as MCP-Use let developers connect any LLM to any MCP server and build custom MCP agents with tool access (web browsing, file operations, and more) without closed-source application clients.

To recap the key concepts: (1) tool creation, using the @tool decorator (or the underlying tool function) to create a tool, which is an association between a function and its schema; (2) tool binding, connecting the tool to a model that supports tool calling; (3) tool calling, where the model, when appropriate, decides to call a tool and returns arguments that conform to the tool's schema. A tool combines a few things: the name of the tool, a description of what it does, a schema of its inputs, the function to call, and whether its result should be returned directly to the user; having all this information is what makes it possible to build action-taking systems. (In LangChain.js, the tool abstraction similarly associates a TypeScript function with a schema that defines the function's name, description, and input.) Keep in mind that the model may try to call a tool that doesn't exist or fail to return arguments that match the requested schema. After a tool is executed, the results can be fed back into the LLM, using either LangChain's messages format or OpenAI's format, so it can determine whether more actions are needed or whether it is okay to finish; the example below uses HumanMessage and ToolMessage from langchain_core.messages for this round trip.

On the integrations side, the Requests toolkit can be used to construct agents that generate HTTP requests, Alpha Vantage provides realtime and historical financial market data through powerful and developer-friendly APIs and spreadsheets, the Discord tool gives your agent the ability to search, read, and write Discord messages, and the Jina tool has its own quick-start overview. All toolkits expose a get_tools method that returns a list of tools, and they come with convenient loading methods. The chain-focused guides concentrate on chains, since agents can already route between multiple tools by default, and if you want automated tracing from runs of individual tools you can set your LangSmith API key. A separate community package, also called LangChain Tools, aims to simplify, enhance, and extend the LangChain library, providing a richer and more flexible experience for developers working in natural language processing.
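A sketch of that round trip, reusing a multiply tool like the one defined earlier; the model name is an assumption and error handling is omitted for brevity.

```python
# Sketch: execute the model's tool calls and report results back via ToolMessage.
# The multiply tool and model name are illustrative assumptions.
from langchain_core.messages import HumanMessage, ToolMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


llm_with_tools = ChatOpenAI(model="gpt-4o-mini").bind_tools([multiply])

messages = [HumanMessage("What is 12 * 7?")]
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)

# Run each requested tool call and pair the result with its tool_call_id.
for call in ai_msg.tool_calls:
    output = multiply.invoke(call["args"])
    messages.append(ToolMessage(content=str(output), tool_call_id=call["id"]))

# The model now sees the tool results and can produce a final answer.
final = llm_with_tools.invoke(messages)
print(final.content)
```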
A common application is to enable agents to answer questions using data in a relational database. For detailed documentation of all SQLDatabaseToolkit features and configurations, head to its API reference; to set it up, follow the setup instructions and place the sample .db file in the directory where your code lives. Build controllable agents with LangGraph, LangChain's low-level agent orchestration framework, and use LangGraph, LangChain, LangSmith, and the other products together to build and scale powerful AI agents with less code and friction. (New to LangChain or LLM app development in general? The introductory material will get you up and running building your first applications.) Further integrations include the ArXiv tool, which lets an agent query arXiv, and DataForSEO for comprehensive SEO data.

Tools let us extend a model's capabilities beyond just outputting text or messages. LangChain's structured-tool abstraction was introduced precisely to allow more complex tools: while previous tools took in a single string input, newer tools can take in an arbitrary number of inputs of arbitrary types. LangChain supports creating tools from functions, from LangChain Runnables, or by sub-classing BaseTool; creating tools from functions with a simple @tool decorator is sufficient for most use cases, while sub-classing BaseTool is the most flexible method and provides the largest degree of control, at the expense of more effort and code (see the sketch after this paragraph). LangChain is well suited to building such natural language interfaces because it has good model output parsing, which makes it easy to extract JSON, XML, or OpenAI function calls from model outputs. This is often achieved via tool calling, and it is what lets you build conversational agents: chatbots that interact with other systems and APIs using tools. Tool use is not limited to LangChain's own catalog, either; tools from the CrewAI Toolkit and from LangChain can be combined, enabling everything from simple searches to complex interactions and effective teamwork among agents. For a list of toolkit integrations, see the toolkits page; the LangChain.js repository has a sample OpenAPI spec file in its examples directory that you can use to test the OpenAPI toolkit.
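A sketch of the sub-classing route: a BaseTool subclass gives full control over the name, description, argument schema, and run logic. The word-counting behaviour and class names are illustrative assumptions.

```python
# Sketch: a custom tool defined by sub-classing BaseTool.
# The word-counting behaviour is an illustrative assumption.
from typing import Type

from pydantic import BaseModel, Field
from langchain_core.tools import BaseTool


class WordCountInput(BaseModel):
    text: str = Field(description="Text to count words in")


class WordCountTool(BaseTool):
    name: str = "word_count"
    description: str = "Count the number of words in a piece of text."
    args_schema: Type[BaseModel] = WordCountInput

    def _run(self, text: str) -> int:
        # Synchronous execution path; an async _arun can also be defined.
        return len(text.split())


tool = WordCountTool()
print(tool.invoke({"text": "LangChain tools are functions with schemas"}))  # 6
```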
The LangChain MCP Adapters package makes it easy to use Anthropic Model Context Protocol (MCP) tools with LangChain and LangGraph, and many providers also ship standalone langchain-{provider} integration packages for improved versioning, dependency management, and testing. The SQL examples in this section use the Chinook database, a sample database available for SQL Server, Oracle, MySQL, and others; you can use that file to test the toolkit, as sketched after this passage. A security note: there are inherent risks in giving models discretion to execute real-world actions, so take precautions when enabling them. Other integrations include the Apify integration, the Google search component, and lifelike speech synthesis from ElevenLabs.

To understand the foundational concepts and building blocks behind agents, it helps to build a simple agent in LangChain first; by keeping it simple we get a better grasp of the ideas, which allows us to build more complex agents later. Every chat model that supports tool calling in LangChain accepts tools bound through the same schema, which gives the model awareness of each tool and the input schema it requires, and the agent uses a tool's description to choose the right tool for the job. We use the term tool calling interchangeably with function calling. To standardize this further, LangChain introduced a tool_calls attribute on AIMessage, whose goal is to provide a standard interface for interacting with tool invocations. For agents with very large tool inventories, tool metadata (descriptions, namespaces, and other information) can be stored through LangGraph's built-in persistence layer, with in-memory and Postgres backends, and you can optionally define custom functions for tool retrieval. Earlier guides built a chatbot using RunnableWithMessageHistory; the focus here is on moving from legacy LangChain agents to more flexible LangGraph agents. In short, LangChain tools are modular utilities designed to support many aspects of NLP and ML projects, and they can be just about anything: APIs, functions, databases, and more.
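A sketch of the SQLDatabaseToolkit setup over a local Chinook.db SQLite file; the file location, the SQLite connection string, and the model name are assumptions, and the same pattern applies to other databases.

```python
# Sketch: build the SQL toolkit over a local Chinook.db file and list its tools.
# The file path and model name are illustrative assumptions.
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI

db = SQLDatabase.from_uri("sqlite:///Chinook.db")
toolkit = SQLDatabaseToolkit(db=db, llm=ChatOpenAI(model="gpt-4o-mini"))

# Every toolkit exposes get_tools(); these tools can then be handed to an agent.
for t in toolkit.get_tools():
    print(t.name)
```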
The adapters package converts MCP tools into LangChain- and LangGraph-compatible tools, enables interaction with tools across multiple MCP servers, and integrates the hundreds of tool servers already published into LangGraph agents; visit the mcp-use docs to learn how to build and deploy MCP agents, and see the sketch after this passage for what loading MCP tools can look like. Further integrations include the Gmail tool, which lets your agent create and view messages from a linked email account, the Polygon toolkit, which provides access to Polygon's Stock Market Data API, and the SearxNG search tool, a wrapper around the SearxNG meta-search API that connects your agents and chains to the internet. Paired with Gemini 1.5 Flash, the AI video analyzer mentioned earlier supports video analysis, insight extraction, and AI-powered chat, with content analysis, real-time web searches, and multi-modal analysis.

To sum up: LangChain implements standard interfaces for defining tools, passing them to LLMs, and representing tool calls. Tools are functions with schemas that can be passed to chat models that support tool calling, and a LangChain tool carries both a description (to pass to the language model) and the implementation of the function to call. In LangChain.js, the simplest way to create a tool is through the StructuredToolParams schema, which has only three fields: name (the name of the tool), schema (the schema of the tool, defined with a Zod object), and a description. LangChain also offers a standard API for structuring outputs via the with_structured_output method, provides a standard interface for connecting models, tools, and data, integrates seamlessly with the rest of the Lang family of products, and supplies many integrations and end-to-end chains for common applications. From a chain that calls a single tool, you can go on to augment the chain so that it can pick from a number of tools to call, and LangChain supports building full agents, systems that use LLMs as reasoning engines to determine which actions to take and the inputs necessary to perform them, with LangGraph providing stateful agents with first-class streaming and human-in-the-loop support.
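A heavily hedged sketch of loading MCP tools into a LangGraph agent via langchain-mcp-adapters. The MultiServerMCPClient API shape, the stdio transport configuration, the math_server.py script, and the model name are all assumptions based on the package's documented usage and may differ between versions.

```python
# Hedged sketch: expose MCP server tools as LangChain tools for a LangGraph agent.
# The client API, server command, and model name are assumptions; check your
# installed langchain-mcp-adapters version for the exact interface.
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


async def main() -> None:
    client = MultiServerMCPClient(
        {
            "math": {
                "command": "python",
                "args": ["math_server.py"],  # hypothetical local MCP server
                "transport": "stdio",
            }
        }
    )
    tools = await client.get_tools()  # MCP tools exposed as LangChain tools
    agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), tools)
    result = await agent.ainvoke({"messages": [("user", "What is 3 + 5?")]})
    print(result["messages"][-1].content)


asyncio.run(main())
```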