AI Agent Integration

ZeroQuant provides first-class support for AI agents, enabling autonomous DeFi operations through natural language commands and intelligent decision-making.

Supported Frameworks

LangChain

Our LangChain integration includes 5 pre-built tools for vault operations:

  • CreateVaultTool - Deploy new vaults
  • ConnectVaultTool - Connect to existing vaults
  • GetVaultBalanceTool - Query vault balances
  • ExecuteSwapTool - Execute token swaps with slippage protection
  • ExecuteBatchTool - Batch multiple operations atomically

Plus:

  • VaultMemory - Conversation memory for agent context
  • Output Formatters - Beautiful formatting for vault state, transactions, and errors

from zeroquant.langchain.tools import CreateVaultTool, ExecuteSwapTool

tools = [
    CreateVaultTool(client=client),
    ExecuteSwapTool(client=client),
]

Learn more →

Mastra Framework (TypeScript)

Complete toolkit with workflows and agent templates:

  • ZeroQuantToolkit - 5 tools for vault operations
  • 3 Workflows - VaultCreation, SafeSwap, Rebalance
  • 3 Agent Templates - Trading, Yield, Portfolio
  • VaultStateManager - Persistent state management

import { ZeroQuantToolkit, SafeSwapWorkflow } from '@zeroquant/mastra';

const toolkit = new ZeroQuantToolkit(config);
const workflow = new SafeSwapWorkflow(params);

Learn more →

Quick Example

Here's a complete example of a trading agent using LangChain:

import asyncio
import os

from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from web3 import Web3

from zeroquant import ZeroQuantClient
from zeroquant.langchain.tools import CreateVaultTool, ExecuteSwapTool


async def main():
    # Initialize client
    w3 = Web3(Web3.HTTPProvider(os.getenv("RPC_URL")))
    client = ZeroQuantClient(
        web3=w3,
        private_key=os.getenv("PRIVATE_KEY"),
        factory_address=os.getenv("FACTORY_ADDRESS"),
        permission_manager_address=os.getenv("PERMISSION_MANAGER_ADDRESS"),
    )

    # Create tools
    tools = [
        CreateVaultTool(client=client),
        ExecuteSwapTool(client=client),
    ]

    # Create agent
    llm = ChatOpenAI(model="gpt-4", temperature=0)
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a DeFi trading assistant. Help users manage their vaults and execute swaps safely."),
        ("human", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ])

    agent = create_openai_functions_agent(llm, tools, prompt)
    executor = AgentExecutor(agent=agent, tools=tools)

    # Execute (await must run inside a coroutine)
    result = await executor.ainvoke({
        "input": "Create a new vault with salt 123"
    })

    print(result["output"])


asyncio.run(main())

Use Cases

Autonomous Trading Bot

Build agents that:

  • Analyze market conditions
  • Execute trades based on technical indicators
  • Manage position sizing and risk
  • Rebalance portfolios automatically
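The signal-driven part of such a bot can be sketched in a few lines. The moving-average crossover below is a toy indicator and the price series is made up; a real bot would add position sizing and risk checks before handing a trade to ExecuteSwapTool:

```python
def sma(prices: list[float], n: int) -> float:
    """Simple moving average over the last n prices."""
    return sum(prices[-n:]) / n

def crossover_signal(prices: list[float]) -> str:
    """'buy' when the fast (3-period) average is above the slow
    (10-period) one, else 'sell'. A toy indicator for illustration."""
    return "buy" if sma(prices, 3) > sma(prices, 10) else "sell"

# Illustrative uptrending price series
prices = [100, 101, 102, 103, 104, 105, 106, 107, 108, 110]
print(crossover_signal(prices))  # → buy
```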

Yield Optimizer

Create agents that:

  • Monitor APY across protocols
  • Automatically move funds to the highest-yield protocol
  • Compound rewards
  • Optimize for gas costs
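The "highest yield, net of gas" decision above reduces to a small comparison. The protocol names, APYs, and migration costs below are entirely illustrative:

```python
def best_pool(apys: dict[str, float], move_cost: dict[str, float]) -> str:
    """Pick the pool with the highest APY after subtracting an
    estimated migration/gas cost (both in percentage points).
    All names and numbers here are made up for illustration."""
    return max(apys, key=lambda p: apys[p] - move_cost.get(p, 0.0))

pools = {"aave": 4.2, "compound": 4.5, "morpho": 4.4}
costs = {"compound": 0.5}  # migrating here is expensive
print(best_pool(pools, costs))  # → morpho (4.4 net beats 4.2 and 4.0)
```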

Portfolio Manager

Deploy agents that:

  • Rebalance based on target allocations
  • Execute tax-loss harvesting
  • Manage multi-strategy portfolios
  • Generate performance reports
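The rebalancing math behind the first bullet is straightforward; a sketch in value terms (the asset names and amounts are illustrative, and the resulting orders would be routed through ExecuteSwapTool or ExecuteBatchTool in practice):

```python
def rebalance_orders(holdings: dict[str, float],
                     targets: dict[str, float]) -> dict[str, float]:
    """Compute the buy (+) / sell (-) amount per asset, in value
    terms, needed to reach the target weights. Illustrative only."""
    total = sum(holdings.values())
    return {a: targets[a] * total - holdings.get(a, 0.0) for a in targets}

orders = rebalance_orders({"ETH": 600.0, "USDC": 400.0},
                          {"ETH": 0.5, "USDC": 0.5})
print(orders)  # → {'ETH': -100.0, 'USDC': 100.0}
```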

Architecture

┌─────────────────────────────────────┐
│           Your AI Agent             │
│      (LangChain / Custom Logic)     │
└──────────────┬──────────────────────┘
               │ Natural Language / Function Calls
┌──────────────▼──────────────────────┐
│        ZeroQuant Agent Tools        │
│     (Tools, Memory, Formatters)     │
└──────────────┬──────────────────────┘
               │ Structured Calls
┌──────────────▼──────────────────────┐
│            ZeroQuant SDK            │
│        (TypeScript / Python)        │
└──────────────┬──────────────────────┘
               │ Smart Contract Calls
┌──────────────▼──────────────────────┐
│           Smart Contracts           │
│   (Vaults, Intents, Permissions)    │
└─────────────────────────────────────┘

Best Practices

Security

  • Never expose private keys - Use environment variables
  • Validate all inputs - Tools include built-in validation
  • Set spending limits - Use permission manager for AI wallets
  • Monitor operations - Review agent decisions regularly
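A minimal off-chain pre-check for the spending-limit point can look like the sketch below. This is only a local guard; the on-chain permission manager is the real enforcement layer, and the limit values here are illustrative:

```python
def within_limit(amount: float, spent_today: float,
                 daily_limit: float) -> bool:
    """Reject a swap locally before it is ever submitted if it would
    push the agent past its daily budget. Illustrative numbers only."""
    return spent_today + amount <= daily_limit

print(within_limit(50.0, 100.0, 200.0))   # → True (150 <= 200)
print(within_limit(150.0, 100.0, 200.0))  # → False (250 > 200)
```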

Performance

  • Batch operations - Use ExecuteBatchTool for multiple ops
  • Cache responses - Reduce LLM calls with memory
  • Optimize prompts - Clear, specific instructions work best
  • Error handling - All tools provide detailed error messages
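For the caching point, immutable chain data (token metadata, for example) can be memoized so repeated agent turns don't trigger repeated lookups. The static table below is a stand-in for a real RPC call:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def token_decimals(token_address: str) -> int:
    # Stand-in for an on-chain decimals() call; the value never
    # changes, so it is safe to cache for the life of the process.
    static = {"0xUSDC": 6, "0xWETH": 18}
    return static.get(token_address, 18)

print(token_decimals("0xUSDC"))  # → 6 (first call populates the cache)
print(token_decimals("0xUSDC"))  # → 6 (served from the cache)
```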

Reliability

  • Use memory - VaultMemory maintains context across conversations
  • Validate outputs - Check tool responses before proceeding
  • Set timeouts - Prevent hanging operations
  • Log decisions - Track agent reasoning for debugging
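The timeout guidance above can be sketched with asyncio.wait_for, wrapping any async agent call such as the executor.ainvoke from the Quick Example:

```python
import asyncio

async def invoke_with_timeout(coro, seconds: float):
    """Run an agent call with a hard deadline instead of hanging."""
    try:
        return await asyncio.wait_for(coro, timeout=seconds)
    except asyncio.TimeoutError:
        # Surface the failure so it can be logged and retried
        raise RuntimeError(f"agent call exceeded {seconds}s timeout")

# Usage sketch (executor is the AgentExecutor from the Quick Example):
# result = await invoke_with_timeout(
#     executor.ainvoke({"input": "Create a new vault with salt 123"}), 60
# )
```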

Next Steps

Choose your integration path:

Python

Native async with LangChain tools included

TypeScript

TypeScript with OpenAI, Anthropic, and more

Mastra

Workflows and templates for instant deployment

Examples

Check out these complete examples: