Solana processes over 4,000 transactions per second. If you are building a token indexer, NFT activity tracker, or DeFi protocol monitor, polling RPC endpoints will never keep up. The Yellowstone Geyser plugin interface solves this by letting you tap directly into the validator data pipeline -- receiving account updates, transaction notifications, and slot changes the moment they happen, with zero network overhead.
This guide walks you through building a custom Geyser plugin from scratch in Rust, deploying it on a validator, and using it to power real-time indexing pipelines that outperform any RPC-based approach.
Why Geyser plugins beat RPC polling
Traditional Solana data pipelines rely on getBlock, getTransaction, or WebSocket subscriptions through JSON-RPC. These approaches share fundamental limitations:
- Latency: RPC requests add network round-trips and queue behind other clients
- Completeness: WebSocket subscriptions can silently drop messages under load
- Cost: High-frequency polling burns through rate limits and inflates infrastructure bills
- Filtering: You download entire blocks and filter client-side, wasting bandwidth
Geyser plugins run inside the validator process itself. When the runtime finishes executing a transaction, your plugin receives the results through a direct function call -- no network, no serialization overhead, no dropped messages. You filter at the source and only forward what matters to your downstream systems.
Hosted providers like Helius and Triton offer managed Geyser-powered gRPC streams that abstract this complexity. But when you need full control over filtering logic, data transformation, or want to avoid per-request pricing, building your own plugin is the way to go.
How the Geyser plugin interface works
The Solana validator exposes a plugin interface defined in the solana-geyser-plugin-interface crate. Your plugin implements the GeyserPlugin trait, which the validator loads as a shared library (.so file) at startup.
The trait defines four core callbacks:
| Callback | Fires when | Typical use |
|---|---|---|
| update_account | An account's data changes | Token balance tracking, program state indexing |
| notify_transaction | A transaction is processed | Trade parsing, transfer monitoring |
| notify_block_metadata | A block is completed | Slot-level analytics, block production metrics |
| update_slot_status | A slot's status changes | Chain progress monitoring |
Each callback runs synchronously in the validator replay pipeline. Your plugin must return quickly -- heavy processing should be offloaded to a background thread or external queue.
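That constraint shapes the standard design: the callback does a non-blocking send into a bounded channel, and a separate thread does the real work. A minimal std-only sketch of the pattern (the plugin skeleton later in this guide uses crossbeam-channel, but the idea is identical):

```rust
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};
use std::thread;

// Spawn the consumer thread and hand back the producer end.
fn spawn_worker() -> SyncSender<u64> {
    let (tx, rx) = sync_channel::<u64>(10_000); // bounded: backpressure, not OOM
    thread::spawn(move || {
        while let Ok(slot) = rx.recv() {
            let _ = slot; // heavy work (deserialization, DB writes) goes here
        }
    });
    tx
}

// The validator-side callback must never block, so it uses try_send
// and drops the event if the buffer is full.
fn on_update(tx: &SyncSender<u64>, slot: u64) -> bool {
    match tx.try_send(slot) {
        Ok(()) => true,
        Err(TrySendError::Full(_)) | Err(TrySendError::Disconnected(_)) => false,
    }
}

fn main() {
    let tx = spawn_worker();
    assert!(on_update(&tx, 42));
    println!("event queued without blocking");
}
```

Dropping an event under extreme load is almost always preferable to stalling the validator's replay pipeline; count drops and alert on them instead.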
Project setup
Start by creating a new Rust library with the correct crate type. Geyser plugins compile to a C-compatible shared library that the validator loads at runtime.
# Cargo.toml
[package]
name = "my-geyser-plugin"
version = "0.1.0"
edition = "2021"
[lib]
crate-type = ["cdylib", "rlib"]
[dependencies]
solana-geyser-plugin-interface = "2.2"
solana-sdk = "2.2"
solana-transaction-status = "2.2"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
bs58 = "0.5"
crossbeam-channel = "0.5"
log = "0.4"
Pin your solana-* crate versions to match the validator version you are targeting; a version mismatch will cause the plugin to fail at load time. Note that with the Agave v2.0 rename, the interface crate is published on crates.io as agave-geyser-plugin-interface (the solana-geyser-plugin-interface name stops at the 1.x line), so use whichever name matches your target release.
Implementing the GeyserPlugin trait
Here is a minimal but functional plugin skeleton that filters account updates for a specific program and forwards them to a background processing thread:
use solana_geyser_plugin_interface::geyser_plugin_interface::{
    GeyserPlugin, GeyserPluginError, ReplicaAccountInfoVersions,
    ReplicaBlockInfoVersions, ReplicaTransactionInfoVersions,
    Result as PluginResult, SlotStatus,
};
use crossbeam_channel::{bounded, Sender};
use std::thread;

#[derive(Default)]
pub struct MyGeyserPlugin {
    sender: Option<Sender<AccountEvent>>,
    target_program: [u8; 32],
}

struct AccountEvent {
    pubkey: Vec<u8>,
    data: Vec<u8>,
    slot: u64,
    lamports: u64,
}

impl GeyserPlugin for MyGeyserPlugin {
    fn name(&self) -> &'static str {
        "MyGeyserPlugin"
    }

    fn on_load(&mut self, config_file: &str, _is_reload: bool) -> PluginResult<()> {
        let config: serde_json::Value = serde_json::from_str(
            &std::fs::read_to_string(config_file)
                .map_err(|e| GeyserPluginError::ConfigFileReadError { msg: e.to_string() })?,
        )
        .map_err(|e| GeyserPluginError::ConfigFileReadError { msg: e.to_string() })?;

        // No expect/unwrap here: a panic anywhere in the plugin, on_load
        // included, takes down the whole validator process.
        let program_id = config["target_program"].as_str().ok_or_else(|| {
            GeyserPluginError::ConfigFileReadError {
                msg: "target_program required in config".to_string(),
            }
        })?;
        let decoded = bs58::decode(program_id).into_vec().map_err(|e| {
            GeyserPluginError::ConfigFileReadError {
                msg: format!("invalid base58 program ID: {e}"),
            }
        })?;
        if decoded.len() != 32 {
            return Err(GeyserPluginError::ConfigFileReadError {
                msg: "program ID must decode to exactly 32 bytes".to_string(),
            });
        }
        self.target_program.copy_from_slice(&decoded);

        let (sender, receiver) = bounded::<AccountEvent>(10_000);
        self.sender = Some(sender);
        thread::spawn(move || {
            while let Ok(event) = receiver.recv() {
                log::info!(
                    "Account {} updated at slot {}",
                    bs58::encode(&event.pubkey).into_string(),
                    event.slot
                );
            }
        });
        Ok(())
    }

    fn update_account(
        &self,
        account: ReplicaAccountInfoVersions,
        slot: u64,
        _is_startup: bool,
    ) -> PluginResult<()> {
        let info = match account {
            ReplicaAccountInfoVersions::V0_0_3(info) => info,
            _ => return Ok(()),
        };
        if info.owner != self.target_program.as_slice() {
            return Ok(());
        }
        if let Some(sender) = &self.sender {
            // try_send, never send: dropping an event beats blocking the validator.
            let _ = sender.try_send(AccountEvent {
                pubkey: info.pubkey.to_vec(),
                data: info.data.to_vec(),
                slot,
                lamports: info.lamports,
            });
        }
        Ok(())
    }

    fn notify_transaction(
        &self,
        _transaction: ReplicaTransactionInfoVersions,
        _slot: u64,
    ) -> PluginResult<()> {
        Ok(())
    }

    fn notify_block_metadata(&self, _blockinfo: ReplicaBlockInfoVersions) -> PluginResult<()> {
        Ok(())
    }

    fn update_slot_status(
        &self,
        _slot: u64,
        _parent: Option<u64>,
        _status: SlotStatus,
    ) -> PluginResult<()> {
        Ok(())
    }

    fn account_data_notifications_enabled(&self) -> bool {
        true
    }

    fn transaction_notifications_enabled(&self) -> bool {
        false
    }
}
The key design decisions here: filter by owner inside update_account to discard irrelevant accounts immediately, use a bounded channel to decouple the validator hot path from your processing logic, and keep the callback itself allocation-light.
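One piece the skeleton above does not show is the entry point. The validator never constructs your plugin type directly: it opens the shared library and looks up an exported _create_plugin symbol, which must return a raw pointer to a boxed GeyserPlugin trait object. A sketch (this fragment assumes the crate and struct from the skeleton, so it is not standalone):

```rust
use solana_geyser_plugin_interface::geyser_plugin_interface::GeyserPlugin;

/// Entry point looked up by name when the validator loads the .so file.
///
/// # Safety
/// The validator calls this once at load time and takes ownership of the
/// returned pointer, dropping it when the plugin is unloaded.
#[no_mangle]
#[allow(improper_ctypes_definitions)]
pub unsafe extern "C" fn _create_plugin() -> *mut dyn GeyserPlugin {
    let plugin = MyGeyserPlugin {
        sender: None,
        target_program: [0u8; 32],
    };
    Box::into_raw(Box::new(plugin))
}
```

Without this export, the .so compiles cleanly but the validator fails at startup when it cannot resolve the symbol.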
Use case: custom token indexer
To build a token balance indexer, set target_program to the SPL Token program (TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA) and parse the account data in your background thread:
struct TokenBalance {
    mint: String,
    owner: String,
    amount: u64,
}

fn parse_token_account(data: &[u8]) -> Option<TokenBalance> {
    if data.len() != 165 {
        return None; // SPL token accounts are exactly 165 bytes
    }
    let mint = &data[0..32];
    let owner = &data[32..64];
    let amount = u64::from_le_bytes(data[64..72].try_into().ok()?);
    Some(TokenBalance {
        mint: bs58::encode(mint).into_string(),
        owner: bs58::encode(owner).into_string(),
        amount,
    })
}
This gives you every token balance change across the entire chain in real-time -- no missed transfers, no polling delays. Pair it with a PostgreSQL sink and you have a complete token indexer running at validator speed.
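If you want to sanity-check the byte offsets before wiring the parser into the plugin, a mock account round-trip takes a few lines (std only; mint and owner are left as zero bytes here):

```rust
// SPL token account layout (first 72 of 165 bytes):
//   0..32  mint pubkey
//  32..64  owner pubkey
//  64..72  amount, u64 little-endian
fn mock_token_account(amount: u64) -> Vec<u8> {
    let mut data = vec![0u8; 165];
    data[64..72].copy_from_slice(&amount.to_le_bytes());
    data
}

fn parse_amount(data: &[u8]) -> Option<u64> {
    if data.len() != 165 {
        return None; // reject anything that is not a token account
    }
    Some(u64::from_le_bytes(data[64..72].try_into().ok()?))
}

fn main() {
    let acct = mock_token_account(1_000_000);
    assert_eq!(parse_amount(&acct), Some(1_000_000));
    assert_eq!(parse_amount(&[0u8; 10]), None); // wrong size is rejected
    println!("layout round-trip ok");
}
```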
Use case: NFT activity tracker
For NFT monitoring, listen to transactions instead of accounts. Enable transaction notifications and filter for Metaplex or Tensor program IDs:
fn notify_transaction(
    &self,
    transaction: ReplicaTransactionInfoVersions,
    slot: u64,
) -> PluginResult<()> {
    let info = match transaction {
        ReplicaTransactionInfoVersions::V0_0_2(info) => info,
        _ => return Ok(()),
    };
    if info.is_vote {
        return Ok(()); // Skip vote transactions (the bulk of all traffic)
    }
    let account_keys = info.transaction.message().account_keys();
    let involves_nft_program = account_keys.iter().any(|key| {
        key.as_ref() == METAPLEX_PROGRAM_ID || key.as_ref() == TENSOR_PROGRAM_ID
    });
    if involves_nft_program {
        if let Some(sender) = &self.tx_sender {
            let _ = sender.try_send(TxEvent {
                slot,
                signature: info.signature.as_ref().to_vec(),
            });
        }
    }
    Ok(())
}
Filtering vote transactions early is critical -- they account for roughly 90% of all Solana transactions and carry no useful data for most indexers.
Use case: DeFi protocol monitor
Monitoring a specific DeFi protocol like a Raydium pool or a Jito vault requires watching both account updates (pool state, reserves) and transactions (swaps, deposits). Jito validators already run modified Geyser plugins for MEV extraction -- the same pattern works for any protocol.
Configure your plugin to watch multiple program IDs and route events to different handlers:
fn update_account(
    &self,
    account: ReplicaAccountInfoVersions,
    slot: u64,
    _is_startup: bool,
) -> PluginResult<()> {
    let info = match account {
        ReplicaAccountInfoVersions::V0_0_3(info) => info,
        _ => return Ok(()),
    };
    if info.owner == self.raydium_program.as_slice() {
        if let Some(sender) = &self.pool_sender {
            let _ = sender.try_send(PoolUpdate {
                pubkey: info.pubkey.to_vec(),
                data: info.data.to_vec(),
                slot,
            });
        }
    } else if info.owner == self.orca_program.as_slice() {
        if let Some(sender) = &self.whirlpool_sender {
            let _ = sender.try_send(PoolUpdate {
                pubkey: info.pubkey.to_vec(),
                data: info.data.to_vec(),
                slot,
            });
        }
    }
    Ok(())
}
Compiling and deploying the plugin
Build the plugin as a release shared library. Debug builds are too slow for production validators.
cargo build --release
# Output: target/release/libmy_geyser_plugin.so
Create a JSON configuration file. The validator itself reads only the libpath field to locate your shared library; it then passes the file's path to your plugin's on_load method, so any extra fields (like target_program here) are yours to define:
{
  "libpath": "/opt/geyser-plugins/libmy_geyser_plugin.so",
  "target_program": "TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA"
}
Add the plugin to your validator startup command using the --geyser-plugin-config flag:
solana-validator \
--identity /root/validator-keypair.json \
--geyser-plugin-config /opt/geyser-plugins/config.json \
--ledger /mnt/ledger \
--known-validator dv1ZAGvdsz5hHLwWXsVnM94hWf1pjbKVau1QVkaMJ92 \
--expected-genesis-hash 5eykt4UsFv8P8NJdTREpY1vzqKqZKvdpKuc147dw2N9d \
--entrypoint entrypoint.mainnet-beta.solana.com:8001 \
--limit-ledger-size
For a full walkthrough on validator setup and maintenance, see our guide to running a Solana validator.
Testing your plugin
Never deploy an untested plugin to a mainnet validator. A panic or deadlock in your plugin will crash the entire validator process.
Local test with solana-test-validator:
solana-test-validator \
--geyser-plugin-config /opt/geyser-plugins/config.json \
--reset
Run transactions against the test validator and verify your plugin processes them correctly. Check stderr for any log output from your plugin.
Integration test pattern:
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_account_filter() {
        let mut plugin = MyGeyserPlugin::default();
        plugin.on_load("test-config.json", false).unwrap();
        assert!(plugin.account_data_notifications_enabled());
    }

    #[test]
    fn test_token_parsing() {
        let data = create_mock_token_account(1_000_000);
        let parsed = parse_token_account(&data).unwrap();
        assert_eq!(parsed.amount, 1_000_000);
    }
}
Performance benchmarks: Measure your update_account callback latency. It should complete in under 1 microsecond for the fast path (non-matching accounts). Use try_send instead of send on your channel to avoid blocking the validator if your consumer falls behind.
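A rough harness for the fast path can live in an ordinary binary or test: time the owner comparison and early return over many iterations (std only; absolute numbers depend on hardware and optimization level, so treat the sub-microsecond target as a goal to verify, not a given):

```rust
use std::hint::black_box;
use std::time::Instant;

// Simulates the fast path: compare a 32-byte owner and bail out.
fn fast_path(owner: &[u8; 32], target: &[u8; 32]) -> bool {
    owner == target
}

fn main() {
    let target = [7u8; 32];
    let other = [1u8; 32]; // non-matching account owner
    let iterations: u32 = 1_000_000;
    let start = Instant::now();
    let mut matched = 0u32;
    for _ in 0..iterations {
        // black_box keeps the optimizer from deleting the loop body.
        if fast_path(black_box(&other), black_box(&target)) {
            matched += 1;
        }
    }
    let per_call_ns = start.elapsed().as_nanos() / iterations as u128;
    println!("~{} ns per non-matching call ({} matches)", per_call_ns, matched);
}
```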
Managed alternatives
Building and maintaining a custom Geyser plugin requires running your own validator, which costs $500-2,000/month in infrastructure. If you need Geyser-level data without the operational burden, several providers offer managed solutions:
- Helius provides Yellowstone gRPC streams with account and transaction filters, plus enhanced webhooks that deliver parsed data directly to your endpoint
- Triton runs the original Yellowstone gRPC implementation and offers dedicated gRPC nodes for high-throughput indexing workloads
- Jito exposes block engine and bundle data through its own Geyser-powered infrastructure, essential for MEV-aware applications
For a detailed comparison of gRPC streaming approaches, including connection setup and filter configuration, read our guide to streaming Solana data with gRPC. For a broader overview of indexing solutions, see our comparison of the best Solana data indexers.
Performance considerations
Running a Geyser plugin adds overhead to your validator. Keep these guidelines in mind:
- Channel buffer size: A bounded channel of 10,000-50,000 events handles most burst scenarios. Monitor the fill level and alert when it crosses 80%.
- Serialization: Avoid serializing data in the callback. Copy raw bytes and deserialize in the background thread.
- Memory: Each buffered account update holds a copy of the account data. A full SPL Token account is 165 bytes, but program state accounts can be megabytes. Set sane size limits.
- Disk I/O: If your consumer writes to disk, use batched writes. Fsync on every event will bottleneck the entire validator.
- Startup replay: When the validator starts, it replays recent slots and fires update_account with is_startup = true for every account. Your plugin should handle this gracefully -- either skip startup events or process them in bulk mode.
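For the channel fill-level alert mentioned above, crossbeam-channel's bounded channels expose len() and capacity() directly, or you can keep a shared counter that works with any channel. A std-only sketch of the counter approach (the 10,000 capacity and 80% threshold mirror the numbers above):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

const CAPACITY: usize = 10_000;
const ALERT_THRESHOLD: usize = CAPACITY * 8 / 10; // alert at 80% full

struct FillGauge {
    in_flight: AtomicUsize,
}

impl FillGauge {
    fn new() -> Self {
        FillGauge { in_flight: AtomicUsize::new(0) }
    }

    // Call right after a successful try_send; returns true when the
    // buffered-event count crosses the alert line.
    fn on_send(&self) -> bool {
        let level = self.in_flight.fetch_add(1, Ordering::Relaxed) + 1;
        level >= ALERT_THRESHOLD
    }

    // Call after the consumer pops an event.
    fn on_recv(&self) {
        self.in_flight.fetch_sub(1, Ordering::Relaxed);
    }
}

fn main() {
    let gauge = FillGauge::new();
    for _ in 0..7_999 {
        assert!(!gauge.on_send()); // below the 80% line: no alert
    }
    assert!(gauge.on_send()); // the 8,000th buffered event fires the alert
    gauge.on_recv();
    println!("fill gauge behaves as expected");
}
```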
FAQ
What Solana version supports Geyser plugins?
Geyser plugins have been part of the validator since roughly the v1.10 line (earlier releases shipped the same mechanism under the name AccountsDb plugin) and are actively maintained in the Agave v2.x line. The plugin interface has gone through several versions (V0_0_1 through V0_0_3 for account info, V0_0_1 through V0_0_2 for transactions). Always match your plugin interface crate version to the validator version you are running. You can check compatibility in the Cargo.toml of the Agave (formerly Solana) GitHub repository for the release branch you target.
Can I run multiple Geyser plugins on one validator?
Yes, the Solana validator supports loading multiple Geyser plugins simultaneously. Pass multiple --geyser-plugin-config flags, one for each plugin configuration file. Each plugin receives the same callbacks independently. Be aware that running multiple plugins increases the per-slot processing time linearly, so benchmark the combined overhead before deploying to production.
How does a custom Geyser plugin compare to Helius or Triton gRPC?
A custom plugin gives you maximum flexibility: you control the filtering logic, data transformation, and output format. You pay only for the validator infrastructure and avoid per-request pricing. The tradeoff is operational complexity -- you are responsible for validator uptime, upgrades, and plugin maintenance. Managed providers like Helius and Triton handle all of this and offer higher-level APIs (webhooks, enhanced transactions) on top of the raw Geyser data. For most teams, starting with a managed provider and moving to a custom plugin when you hit scale or cost limits is the practical path.
What happens if my Geyser plugin panics?
A panic in any Geyser plugin callback will crash the validator process. The Solana runtime does not catch panics from plugins -- they propagate up and terminate execution. This makes thorough testing essential. Use catch_unwind in your callbacks as a safety net, log the error, and return a plugin error instead of panicking. In production, run your validator with a process supervisor that automatically restarts it on crash, and monitor for unexpected restarts.
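A sketch of that safety net, using a stand-in error type so it runs standalone (in a real plugin you would return GeyserPluginError from the callback instead):

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};

#[derive(Debug, PartialEq)]
enum CallbackError {
    Panicked,
}

// Wraps a callback body so a panic becomes an error return
// instead of unwinding through the validator and killing it.
fn guarded<F>(body: F) -> Result<(), CallbackError>
where
    F: FnOnce() -> Result<(), CallbackError>,
{
    match catch_unwind(AssertUnwindSafe(body)) {
        Ok(result) => result,
        Err(_) => {
            // In a real plugin, log the panic payload here before returning.
            Err(CallbackError::Panicked)
        }
    }
}

fn main() {
    // Silence the default panic message so the demo output stays clean.
    std::panic::set_hook(Box::new(|_| {}));
    assert_eq!(guarded(|| Ok(())), Ok(()));
    assert_eq!(
        guarded(|| panic!("bug in account parsing")),
        Err(CallbackError::Panicked)
    );
    println!("panics converted to errors");
}
```

Note that catch_unwind cannot stop aborts (e.g. from panic = "abort" builds or stack overflows), so it is a mitigation, not a substitute for testing.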