LLMs are good at reasoning, pattern recognition, and synthesizing information. What they lack is real-time market data. An agent that can only reason over its training data cannot tell you what six KOLs bought in the last hour or whether the deployer behind a new token has a strong track record.
The MadeOnSol Python SDK bridges that gap. It wraps the MadeOnSol API in a typed Python client and ships first-class LangChain and CrewAI integrations, so you can hand real-time Solana intelligence directly to your agents as callable tools.
This tutorial walks through building a Solana trading research agent from scratch: first with a single LangChain ReAct agent, then with a multi-agent CrewAI crew. It also covers x402 micropayment mode for fully autonomous agents that pay per request from their own wallet.
Prerequisites
- Python 3.10 or higher
- An OpenAI API key (or any LLM provider supported by LangChain)
- A MadeOnSol API key — get one free at madeonsol.com/developer (BASIC tier: 200 calls/day, no credit card required)
Installation
Install the SDK with the LangChain extras:
pip install "madeonsol-x402[langchain]"
This installs the core madeonsol-x402 package plus langchain-core, langchain-openai, and the MadeOnSol LangChain tool wrappers. If you also want CrewAI support:
pip install "madeonsol-x402[crewai]"
You can install both extras together:
pip install "madeonsol-x402[langchain,crewai]"
Step 1: Initialize the Client
from madeonsol import MadeOnSolClient
client = MadeOnSolClient(api_key="msk_your_key_here")
The client handles authentication, base URL configuration, and rate limits. All API methods return typed dataclasses. You can also pass the key via the MADEONSOL_API_KEY environment variable and call MadeOnSolClient() with no arguments.
Verify connectivity:
status = client.get_status()
print(status)
Step 2: Built-in LangChain Tools
The SDK ships a set of LangChain-compatible tools that you can pass directly to any LangChain agent or chain:
from madeonsol.langchain import get_madeonsol_tools
tools = get_madeonsol_tools(client)
get_madeonsol_tools returns a list of BaseTool instances, one per MadeOnSol endpoint. Here is what each tool does:
kol_feed — Returns the most recent KOL trades. Accepts parameters for time window, trade direction (buy/sell), and minimum SOL size. The agent can call this to answer questions like "what are KOLs buying right now?" or "show me the biggest KOL sells in the last 6 hours."
deployer_alerts — Queries the deployer-hunter database for newly launched tokens from tracked deployers. Filters by deployer tier (elite, good, neutral, bad), time window, and Pump.fun graduation status. Use this to surface new launches from historically successful deployers.
kol_leaderboard — Returns ranked KOL wallets by PnL, win rate, or total volume over a configurable period (7d, 30d, 90d, all-time). Includes strategy tags per KOL so the agent can characterize trading styles.
token_info — Looks up a token by mint address. Returns the deployer address, deployer tier, social links, current market cap (from the DEX feed), and any KOL activity on the token.
kol_coordination — Returns tokens where multiple KOLs are trading in the same direction within a time window. Includes signal: "accumulating" or signal: "distributing" and the list of KOLs involved.
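To make the "one tool per endpoint" idea concrete, here is a plain-Python sketch of what each entry in that list amounts to. The real SDK returns LangChain BaseTool instances; the Tool dataclass, fake_kol_feed, and its canned return value below are invented for illustration only.

```python
from dataclasses import dataclass
from typing import Callable

# Plain-Python stand-in for a LangChain tool: a named, described callable
# the agent can invoke. Illustrative only -- not the SDK's implementation.
@dataclass
class Tool:
    name: str
    description: str
    func: Callable

def fake_kol_feed(window="1h", direction=None, min_sol=0.0):
    # Stand-in for the real API call; returns a canned trade.
    return [{"kol": "wallet_1", "side": "buy", "sol": 12.5}]

kol_feed_tool = Tool(
    name="kol_feed",
    description="Most recent KOL trades, filterable by window, direction, and size.",
    func=fake_kol_feed,
)
trades = kol_feed_tool.func(window="6h", direction="buy")
print(trades[0]["sol"])  # 12.5
```

The name and description are what the LLM sees when deciding which tool to call; the callable is what actually runs.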
Step 3: Build a ReAct Agent with LangChain
A ReAct (Reason + Act) agent alternates between reasoning about what it needs to know and calling tools to get that information. Here is a complete working example:
import os
from madeonsol import MadeOnSolClient
from madeonsol.langchain import get_madeonsol_tools
from langchain_openai import ChatOpenAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain import hub
# Initialize
client = MadeOnSolClient(api_key=os.environ["MADEONSOL_API_KEY"])
tools = get_madeonsol_tools(client)
# Pull the standard ReAct prompt from LangChain Hub
prompt = hub.pull("hwchase17/react")
# Initialize the LLM
llm = ChatOpenAI(model="gpt-4o", temperature=0)
# Create and run the agent
agent = create_react_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True, max_iterations=8)
result = executor.invoke({
"input": (
"Which KOL bought the most tokens in the last 30 days? "
"Are any of them buying the same token right now? "
"If so, is the deployer behind that token reputable?"
)
})
print(result["output"])
With verbose=True you can see the agent's reasoning chain: it will first call kol_leaderboard to find the top KOL, then call kol_coordination to check for multi-KOL convergence, then call token_info to check the deployer. Each step is visible in the terminal output.
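Conceptually, the loop the executor runs can be sketched in a few lines. The "LLM" below is a scripted stub and the tool returns canned data; the real AgentExecutor asks the model which tool to call next and feeds each observation back into the prompt.

```python
# Stripped-down sketch of the ReAct loop AgentExecutor runs internally.
def react_loop(decide, tools, question, max_iterations=8):
    scratchpad = []
    for _ in range(max_iterations):
        action = decide(question, scratchpad)  # model picks a tool or finishes
        if action["type"] == "finish":
            return action["answer"]
        observation = tools[action["tool"]](**action.get("args", {}))
        scratchpad.append((action["tool"], observation))
    return "stopped: hit max_iterations"

# Scripted stand-ins for the real LLM and the MadeOnSol tools:
script = iter([
    {"type": "call", "tool": "kol_leaderboard"},
    {"type": "finish", "answer": "Top KOL is wallet ABC"},
])
answer = react_loop(
    decide=lambda q, pad: next(script),
    tools={"kol_leaderboard": lambda: [{"wallet": "ABC", "pnl": 1200}]},
    question="Which KOL bought the most?",
)
print(answer)  # Top KOL is wallet ABC
```

The max_iterations cap is the same safety valve the AgentExecutor above uses: a confused model that never emits "finish" terminates instead of looping forever.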
You are not limited to OpenAI. Swap ChatOpenAI for ChatAnthropic, ChatGroq, or any other LangChain-compatible provider. The tools work the same regardless of which LLM is driving the agent.
Handling Rate Limits
On the BASIC tier (200 calls/day), a single complex research query may consume 3-5 tool calls. For prototype work this is fine. If your agent is running in a loop or handling multiple concurrent queries, upgrade to PRO (10,000 calls/day at $49/month) to avoid hitting the daily limit mid-session. You can check your remaining quota programmatically:
status = client.get_status()
print(f"Calls remaining today: {status.quota_remaining}")
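A simple pre-flight guard built on that check, sketched below. The quota_remaining field name is taken from the snippet above; has_budget and the 10-call reserve are illustrative conventions, not part of the SDK.

```python
# Hypothetical pre-flight guard: check remaining quota before launching a
# multi-call research query, keeping a small reserve for later.
def has_budget(quota_remaining, estimated_calls, reserve=10):
    """Return True if the query fits within today's quota, minus a reserve."""
    return quota_remaining - estimated_calls >= reserve

print(has_budget(180, 5))  # True
print(has_budget(12, 5))   # False: would dip into the 10-call reserve
```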
Step 4: Multi-Agent Crew with CrewAI
For more complex research workflows, CrewAI lets you compose multiple specialized agents, each with a focused role. Here is a three-agent crew that produces a trading thesis for a given token:
import os
from madeonsol import MadeOnSolClient
from madeonsol.crewai import get_madeonsol_crewai_tools
from crewai import Agent, Task, Crew
client = MadeOnSolClient(api_key=os.environ["MADEONSOL_API_KEY"])
tools = get_madeonsol_crewai_tools(client)
# Assign specific tools to each agent
deployer_tool = next(t for t in tools if t.name == "deployer_alerts")
coordination_tool = next(t for t in tools if t.name == "kol_coordination")
token_tool = next(t for t in tools if t.name == "token_info")
leaderboard_tool = next(t for t in tools if t.name == "kol_leaderboard")
# Agent 1: Find elite deployer launches
research_agent = Agent(
role="Deployer Research Specialist",
goal="Identify tokens launched by elite or good-tier deployers in the last 2 hours",
backstory="You scan for new token launches from historically successful Pump.fun deployers.",
tools=[deployer_tool, token_tool],
verbose=True,
)
# Agent 2: Check KOL sentiment on those tokens
analysis_agent = Agent(
role="KOL Sentiment Analyst",
goal="Determine whether top KOLs are accumulating any of the tokens identified by research",
backstory="You cross-reference deployer launches with KOL trading activity to find conviction signals.",
tools=[coordination_tool, leaderboard_tool],
verbose=True,
)
# Agent 3: Produce a trading thesis
report_agent = Agent(
role="Trading Thesis Writer",
goal="Synthesize deployer reputation and KOL activity into a concise trading thesis",
backstory="You write clear, actionable trading theses backed by on-chain evidence.",
tools=[],
verbose=True,
)
# Define tasks
research_task = Task(
description="Find all tokens launched by elite or good-tier deployers in the last 2 hours. Return the token mints, names, and deployer tier.",
expected_output="A list of token mints with deployer tier and launch time.",
agent=research_agent,
)
analysis_task = Task(
description="For each token from the research task, check whether 2 or more KOLs are accumulating it using the coordination endpoint (period=2h, min_kols=2). Return tokens with KOL count and net SOL flow.",
expected_output="Tokens with multi-KOL accumulation, including kol_count and net_sol_flow.",
agent=analysis_agent,
context=[research_task],
)
report_task = Task(
description="Using the deployer reputation data and KOL coordination data, write a trading thesis for the top 1-2 opportunities. Include risk factors.",
expected_output="A 200-300 word trading thesis with entry rationale and risk factors.",
agent=report_agent,
context=[research_task, analysis_task],
)
crew = Crew(
agents=[research_agent, analysis_agent, report_agent],
tasks=[research_task, analysis_task, report_task],
verbose=True,
)
result = crew.kickoff()
print(result)
The crew executes sequentially by default: the research agent finds launches, the analysis agent checks KOL sentiment, and the report agent synthesizes everything into a thesis. CrewAI handles passing context between tasks automatically.
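That context-passing behavior can be shown in miniature. Each "task" below is a plain function that receives the outputs of the tasks listed as its context; the task bodies are stand-ins, not real agent calls, and run_sequential is an illustrative sketch rather than CrewAI internals.

```python
# Miniature model of sequential task execution with context passing.
def run_sequential(tasks):
    outputs = {}
    for name, (fn, context) in tasks.items():
        # Each task receives the outputs of its context tasks, in order.
        outputs[name] = fn(*[outputs[c] for c in context])
    return outputs

result = run_sequential({
    "research": (lambda: ["TokenA (elite)"], []),
    "analysis": (lambda launches: [t for t in launches if "elite" in t],
                 ["research"]),
    "report":   (lambda launches, picks: f"Thesis covers {len(picks)} token(s)",
                 ["research", "analysis"]),
})
print(result["report"])  # Thesis covers 1 token(s)
```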
You can extend this pattern — add a fourth agent to execute the trade via a Solana RPC client, or a fifth to post the thesis to a Discord channel.
Step 5: x402 Micropayments for Autonomous Agents
The API key flow works well for subscriptions and human-in-the-loop workflows. For fully autonomous agents — agents that run without supervision and pay for their own data — the x402 mode is a better fit.
Instead, the agent holds SOL or USDC in its own wallet and pays per API request using the x402 payment standard: no subscription, no daily quota, no credit card. The agent just needs a funded wallet.
Initialize the client in x402 mode by passing a base58-encoded private key:
from madeonsol import MadeOnSolClient
client = MadeOnSolClient(private_key="your_base58_encoded_private_key")
Everything else works identically. The get_madeonsol_tools call, the LangChain and CrewAI integrations, and all API methods behave the same. The only difference is how each request is paid for: instead of checking an API key against a quota, the client signs a micropayment transaction on Solana.
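One convenient pattern this enables (my convention, not part of the SDK): pick the auth mode from the environment so the same agent code runs under both a subscription key and an x402 wallet. The environment variable name MADEONSOL_PRIVATE_KEY is an assumption for symmetry with MADEONSOL_API_KEY.

```python
# Illustrative helper: choose constructor kwargs based on which credential
# is present. Pass the result as MadeOnSolClient(**auth_kwargs(os.environ)).
def auth_kwargs(env):
    if env.get("MADEONSOL_PRIVATE_KEY"):
        return {"private_key": env["MADEONSOL_PRIVATE_KEY"]}
    if env.get("MADEONSOL_API_KEY"):
        return {"api_key": env["MADEONSOL_API_KEY"]}
    raise RuntimeError("set MADEONSOL_PRIVATE_KEY or MADEONSOL_API_KEY")

print(auth_kwargs({"MADEONSOL_API_KEY": "msk_test"}))
```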
You can discover the x402 payment endpoint and supported resources at madeonsol.com/api/x402.
x402 mode is particularly useful when:
- The agent is fully autonomous and should not depend on a human managing a subscription
- You want to pay exactly for what you use with no monthly commitment
- You are running many short-lived agents, each with their own wallet
For agents with predictable, high-volume usage, a PRO or ULTRA subscription will be more cost-efficient than per-request micropayments.
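A back-of-envelope way to find that crossover point, working in integer cents to avoid float noise. The $0.01-per-call price below is a made-up placeholder, not a published x402 rate; substitute the real per-request price when comparing.

```python
# Break-even call volume: above this many calls per month, the flat
# subscription is cheaper than paying per request.
def break_even_calls(monthly_fee_cents, price_per_call_cents):
    return monthly_fee_cents // price_per_call_cents

# PRO at $49/month vs. a hypothetical $0.01-per-call x402 price:
print(break_even_calls(4900, 1))  # 4900 calls/month
```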
Practical Tips
Start with BASIC. The free tier at 200 calls/day is enough to build and test a full research agent. Most single research queries consume 3-5 tool calls.
Cache leaderboard results. The KOL leaderboard does not change minute-to-minute. Cache the result for 15-30 minutes to avoid burning quota on repeated lookups.
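A minimal TTL cache for this, using only the standard library. The fetch callable is a stand-in for whatever client method returns the leaderboard; the class and key names are my own, not part of the SDK.

```python
import time

# Minimal TTL cache for slow-moving data like the KOL leaderboard.
class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key, fetch):
        hit = self._store.get(key)
        if hit and time.monotonic() - hit[0] < self.ttl:
            return hit[1]  # fresh: no API call spent
        value = fetch()
        self._store[key] = (time.monotonic(), value)
        return value

calls = []
cache = TTLCache(ttl_seconds=1800)  # 30 minutes
fetch = lambda: calls.append(1) or ["kol_1", "kol_2"]
cache.get("leaderboard_30d", fetch)
cache.get("leaderboard_30d", fetch)
print(len(calls))  # 1 -- second lookup served from cache
```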
Use coordination signals as a second layer. The most reliable pattern is: deployer alert fires (new elite launch) → coordination signal confirms (KOLs accumulating) → position entry. Using both together filters out a significant amount of noise.
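That two-layer filter reduces to a small pure function. The field names (mint, deployer_tier, signal, kol_count) echo the endpoint descriptions earlier but are illustrative, not the SDK's exact schema.

```python
# Sketch of the two-layer filter: enter only when the deployer tier is
# strong AND a multi-KOL accumulation signal confirms it.
def entry_candidates(launches, coordination, min_kols=2):
    accumulating = {
        c["mint"] for c in coordination
        if c["signal"] == "accumulating" and c["kol_count"] >= min_kols
    }
    return [
        l["mint"] for l in launches
        if l["deployer_tier"] in ("elite", "good") and l["mint"] in accumulating
    ]

launches = [
    {"mint": "A", "deployer_tier": "elite"},
    {"mint": "B", "deployer_tier": "bad"},    # filtered: weak deployer
    {"mint": "C", "deployer_tier": "good"},   # filtered: no KOL confirmation
]
coordination = [
    {"mint": "A", "signal": "accumulating", "kol_count": 3},
    {"mint": "B", "signal": "accumulating", "kol_count": 5},
]
print(entry_candidates(launches, coordination))  # ['A']
```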
Set max_iterations on your AgentExecutor. Without a cap, a confused agent can loop indefinitely. Eight iterations is usually enough for complex multi-step research.
Upgrade to PRO for production. 10,000 calls/day at $49/month covers production-grade agents polling every few minutes around the clock. ULTRA at $199/month gives 100,000 calls/day for high-frequency setups.
Get your free API key and review the full endpoint reference at madeonsol.com/developer.