Advanced: The DApp Publisher Proxy Pattern
In the "Working with Multiple Publishers" tutorial, you learned the standard pattern for building an aggregator:
Maintain a list of all known publisher addresses.
Loop through this list.
Call sdk.streams.getAllPublisherDataForSchema() for each address.
Merge and sort the results on the client side.
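The merge-and-sort step runs entirely client-side. A minimal sketch of such a helper might look like this (the record shape is hypothetical and depends on your schema's fields):

```typescript
// Hypothetical decoded record shape; adapt it to your schema's fields.
interface ScoreRecord {
  player: string
  score: bigint
}

// Flatten the per-publisher result arrays and sort descending by score.
export function mergeAndSort(perPublisher: ScoreRecord[][]): ScoreRecord[] {
  const merged = perPublisher.flat()
  // Comparator returns 0 for equal scores so the sort contract holds.
  merged.sort((a, b) => (b.score > a.score ? 1 : b.score < a.score ? -1 : 0))
  return merged
}
```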
This pattern is simple and effective for a known, manageable number of publishers (e.g., 50 IoT sensors from a single company).
But what happens at a massive scale?
The Problem: The 10,000-Publisher Scenario
Imagine you are building a popular on-chain game. You have a leaderboardSchema and 10,000 players actively publishing their scores.
If you use the standard aggregator pattern, your "global leaderboard" DApp would need to:
Somehow find all 10,000 player addresses.
Perform 10,000 separate read calls (getAllPublisherDataForSchema) to the Somnia RPC node.
This is neither scalable nor efficient. It places an enormous (and slow) data-fetching burden on your application.
The Solution: The DApp Publisher Proxy
This is an advanced architecture that inverts the model to solve the read-scalability problem.
Instead of having 10,000 publishers write to Streams directly, they all write to your DApp's smart contract, which then publishes to Streams on their behalf.
The Flow:
User (Publisher): Calls a function on your DApp's contract (e.g., myGame.submitScore(100)). The msg.sender is the user's address.
DApp Contract (The Proxy): Internally, your submitScore function adds the user's address (msg.sender) into the data payload to preserve provenance, then calls somniaStreams.esstores(...) using its own contract address.
Somnia Data Streams: Records the data. To the Streams contract, the only publisher is your DApp Contract's address.
The Result:
Your global leaderboard aggregator now needs only a single read call to fetch all 10,000 players' data:
sdk.streams.getAllPublisherDataForSchema(schemaId, YOUR_DAPP_CONTRACT_ADDRESS)
This is massively scalable and efficient for read-heavy applications.
Tutorial: Building a GameLeaderboard Proxy
Let's build a conceptual example of this pattern.
What You'll Build
A new Schema that includes the original publisher's address.
A GameLeaderboard.sol smart contract that acts as the proxy.
A Client Script that writes to the proxy contract instead of Streams.
A new Aggregator that reads from the proxy contract's address.
Step 1: The Schema (Solving for Provenance)
Since the msg.sender to the Streams contract will always be our proxy contract, we lose the built-in provenance. We must re-create it by adding the original player's address to the schema itself.
src/lib/schema.ts
// Schema: 'uint64 timestamp, address player, uint256 score'
export const leaderboardSchema =
'uint64 timestamp, address player, uint256 score'
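As a quick sanity check on the new field order, you can split the schema string into its typed fields on the client. This small helper is a sketch for illustration, not part of the Streams SDK:

```typescript
export const leaderboardSchema =
  'uint64 timestamp, address player, uint256 score'

interface SchemaField {
  type: string
  name: string
}

// Split a comma-separated schema string into { type, name } pairs.
export function parseSchema(schema: string): SchemaField[] {
  return schema.split(',').map((part) => {
    const [type, name] = part.trim().split(/\s+/)
    return { type, name }
  })
}
```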
Step 2: The Proxy Smart Contract (Solidity)
This is a new smart contract you would write and deploy for your DApp. It acts as the gatekeeper.
src/contracts/GameLeaderboard.sol
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;
// A simplified interface for the Somnia Streams contract
interface IStreams {
struct DataStream {
bytes32 id;
bytes32 schemaId;
bytes data;
}
// This is the correct low-level function name
function esstores(DataStream[] calldata streams) external;
}
/**
* @title GameLeaderboard
* This contract is a DApp Publisher Proxy.
* Users call submitScore() here.
* This contract then calls somniaStreams.esstores() as a single publisher.
*/
contract GameLeaderboard {
IStreams public immutable somniaStreams;
bytes32 public immutable leaderboardSchemaId;
event ScoreSubmitted(address indexed player, uint256 score);
/**
* @param _streamsAddress The deployed address of the Somnia Streams contract
* (e.g., 0x6AB397FF662e42312c003175DCD76EfF69D048Fc on Somnia Testnet).
* @param _schemaId The pre-computed schemaId for 'uint64 timestamp, address player, uint256 score'.
*/
constructor(address _streamsAddress, bytes32 _schemaId) {
somniaStreams = IStreams(_streamsAddress);
leaderboardSchemaId = _schemaId;
}
/**
* @notice Players call this function to submit their score.
* @param score The player's score.
*/
function submitScore(uint256 score) external {
// 1. Get the original publisher's address
address player = msg.sender;
uint64 timestamp = uint64(block.timestamp);
// 2. Encode the data payload to match the schema
// Schema: 'uint64 timestamp, address player, uint256 score'
bytes memory data = abi.encode(timestamp, player, score);
// 3. Create a unique dataId (e.g., hash of player and time)
bytes32 dataId = keccak256(abi.encodePacked(player, timestamp));
// 4. Prepare the DataStream struct
IStreams.DataStream[] memory d = new IStreams.DataStream[](1);
d[0] = IStreams.DataStream({
id: dataId,
schemaId: leaderboardSchemaId,
data: data
});
// 5. Call Somnia Streams. The `msg.sender` for this call
// is THIS contract (GameLeaderboard).
somniaStreams.esstores(d);
// 6. Emit a DApp-specific event for good measure
emit ScoreSubmitted(player, score);
}
}
Step 3: The Client Script (Publishing to the Proxy)
The client-side logic changes: the user no longer needs the Streams SDK to publish, only a way to call your DApp's submitScore function.
src/scripts/publishScore.ts
import 'dotenv/config'
import { createWalletClient, http, createPublicClient, parseAbi } from 'viem'
import { privateKeyToAccount } from 'viem/accounts'
import { somniaTestnet } from '../lib/chain' // From previous tutorials
import { waitForTransactionReceipt } from 'viem/actions'
// --- DApp Contract Setup ---
// This is the address you get after deploying GameLeaderboard.sol
const DAPP_CONTRACT_ADDRESS = '0x...' // Your deployed GameLeaderboard contract address
// A minimal ABI for our GameLeaderboard contract
const DAPP_ABI = parseAbi([
'function submitScore(uint256 score) external',
])
// --- --- ---
function getEnv(key: string): string {
const value = process.env[key]
if (!value) throw new Error(`Missing environment variable: ${key}`)
return value
}
// We can use any publisher wallet
const walletClient = createWalletClient({
account: privateKeyToAccount(getEnv('PUBLISHER_1_PK') as `0x${string}`),
chain: somniaTestnet,
transport: http(getEnv('RPC_URL')),
})
const publicClient = createPublicClient({
chain: somniaTestnet,
transport: http(getEnv('RPC_URL')),
})
async function main() {
const newScore = Math.floor(Math.random() * 10000)
console.log(`Player ${walletClient.account.address} submitting score: ${newScore}...`)
try {
const { request } = await publicClient.simulateContract({
account: walletClient.account,
address: DAPP_CONTRACT_ADDRESS,
abi: DAPP_ABI,
functionName: 'submitScore',
args: [BigInt(newScore)],
})
const txHash = await walletClient.writeContract(request)
console.log(`Transaction sent, hash: ${txHash}`)
await waitForTransactionReceipt(publicClient, { hash: txHash })
console.log('Score submitted successfully!')
} catch (e: any) {
console.error(`Failed to submit score: ${e.message}`)
}
}
main().catch(console.error)
Step 4: The Aggregator Script (Simple, Scalable Reads)
This is the pay-off. The aggregator script is now dramatically simpler and more scalable. It only needs to know the single DApp contract address.
src/scripts/readLeaderboard.ts
import 'dotenv/config'
import { SDK, SchemaDecodedItem } from '@somnia-chain/streams'
import { createPublicClient, http } from 'viem'
import { somniaTestnet } from '../lib/chain'
import { leaderboardSchema } from '../lib/schema' // Our new schema
// --- DApp Contract Setup ---
const DAPP_CONTRACT_ADDRESS = '0x...' // Your deployed GameLeaderboard contract address
// --- --- ---
function getEnv(key: string): string {
const value = process.env[key]
if (!value) throw new Error(`Missing environment variable: ${key}`)
return value
}
const publicClient = createPublicClient({
chain: somniaTestnet,
transport: http(getEnv('RPC_URL')),
})
// Helper to decode the leaderboard data
interface ScoreRecord {
timestamp: number
player: `0x${string}`
score: bigint
}
function decodeScoreRecord(row: SchemaDecodedItem[]): ScoreRecord {
const val = (field: any) => field?.value?.value ?? field?.value ?? ''
return {
timestamp: Number(val(row[0])),
player: val(row[1]) as `0x${string}`,
score: BigInt(val(row[2])),
}
}
async function main() {
// The aggregator only needs a public client
const sdk = new SDK({ public: publicClient })
const schemaId = await sdk.streams.computeSchemaId(leaderboardSchema)
if (!schemaId) throw new Error('Could not compute schemaId')
console.log('--- Global Leaderboard Aggregator ---')
console.log(`Reading all data from proxy: ${DAPP_CONTRACT_ADDRESS}\n`)
// 1. Make ONE call to get all data for the DApp
const data = await sdk.streams.getAllPublisherDataForSchema(
schemaId,
DAPP_CONTRACT_ADDRESS
)
if (!data || data.length === 0) {
console.log('No scores found.')
return
}
// 2. Decode and sort the records
const allScores = (data as SchemaDecodedItem[][]).map(decodeScoreRecord)
allScores.sort((a, b) => (b.score > a.score ? 1 : b.score < a.score ? -1 : 0)) // Sort descending by score
// 3. Display the leaderboard
console.log(`Total scores found: ${allScores.length}\n`)
allScores.forEach((record, index) => {
console.log(
`#${index + 1}: Player ${record.player} - Score: ${record.score} (at ${new Date(record.timestamp * 1000).toISOString()})` // timestamp is in seconds
)
})
}
main().catch(console.error)
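Because every submitScore call appends a new record, the same player can appear many times in the results. If you want a single entry per player, a small post-processing helper (optional, a sketch only) keeps each player's best score:

```typescript
// Matches the ScoreRecord shape decoded by the aggregator script.
interface ScoreRecord {
  timestamp: number
  player: string
  score: bigint
}

// Keep only the highest score per player.
export function bestScorePerPlayer(records: ScoreRecord[]): ScoreRecord[] {
  const best = new Map<string, ScoreRecord>()
  for (const r of records) {
    const prev = best.get(r.player)
    if (!prev || r.score > prev.score) best.set(r.player, r)
  }
  return [...best.values()]
}
```

You would apply this to allScores before sorting, so each player ranks once with their personal best.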
Trade-Offs & Considerations
This pattern is powerful, but it's important to understand the trade-offs.
| Feature | Standard Pattern (Multi-Publisher) | Proxy Pattern (Single Publisher) |
| --- | --- | --- |
| Read Scalability | Low. Requires N read calls (N = number of publishers). | High. Requires 1 read call, regardless of publisher count. |
| Publisher Gas Cost | Low. 1 transaction (streams.set). | Higher. 1 transaction plus an internal call; the user pays more gas. |
| Provenance | Automatic and implicit. msg.sender is the user. | Manual. Must be built into the schema (address player). |
| Complexity | Simple. Requires only the SDK. | Complex. Requires writing, deploying, and maintaining a custom smart contract. |
Conclusion
The DApp Publisher Proxy is an advanced but essential pattern for any Somnia Data Streams application that needs to scale to thousands or millions of publishers (e.g., games, social media, large IoT networks).
It simplifies the data aggregation logic from N read calls down to 1, at the cost of higher gas fees for publishers and increased development complexity.
For most DApps, we recommend starting with the simpler "Multi-Publisher Aggregator" pattern. When your application's read performance becomes a bottleneck due to a high number of publishers, you can evolve to this proxy pattern to achieve massive read scalability.