Technical specification for a dual-layer monetary system: UBX (elastic transactional layer, COP-reference band) and KNEX (fixed-supply reserve + settlement layer). Built on a Nano-inspired block-lattice DAG with Proof-of-Bandwidth (PoB) consensus and Open Representative Voting (ORV) finality.
Bootstrap: Assets are currently issued on Stellar for distribution + liquidity rails. Target: Migration to a native L1 DAG where UBX and KNEX are first-class protocol assets.
UBX is the elastic, bearer-like circulation unit for everyday payments. Merchants price goods in UBX using a COP reference (~1 UBX ≈ 1,000 COP, ±20% tolerance). KNEX is the fixed-supply (100,000,000) reserve asset for settlement, validator bonding, and long-term value accumulation. Neither token works alone: UBX provides velocity, KNEX provides scarcity. Economic gravity flows directionally from UBX (circulation) to KNEX (accumulation).
Phase 1 (Current): Both UBX and KNEX are issued on Stellar. DEX trading provides liquidity. Wallets use Stellar SDK. Settlement is 3-5 seconds. Phase 2 (Target): Native block-lattice DAG with Proof-of-Bandwidth consensus, sub-second finality, feeless UBX transactions, and full validator economics. Migration via balance snapshot and 1:1 address mapping.
UBX does not require centralized exchanges to function. Stellar DEX is used as a bootstrap liquidity layer (on/off ramps, early market depth, and conversion rails) while the “true” price discovery model is merchant-driven: goods and services priced directly in UBX using a COP reference band. Over time, as UBX velocity and merchant density grow, reliance on speculative venues decreases and stability emerges from real economic usage.
Traditional blockchains serialize all transactions into sequential blocks, creating bottlenecks. A DAG allows parallel transaction processing — each account maintains its own chain, enabling unlimited horizontal scaling. For a dual-token system processing both UBX micropayments and KNEX settlement, this parallelism is essential.
BLOCKCHAIN (Sequential) DAG / BLOCK-LATTICE (Parallel)
───────────────────── ────────────────────────────────
┌───┐ ┌───┐ ┌───┐ Account A: ●──●──●──●──●
│ 1 │──▶│ 2 │──▶│ 3 │ Account B: ●──●──●──●──●
└───┘ └───┘ └───┘ Account C: ●──●──●
▼ Account D: ●──●──●──●
All txns wait ↑ ↑ ↑
for blocks Independent chains
Parallel processing
SYSTEM ARCHITECTURE (Current → Target)
═══════════════════════════════════════
┌──────────────────────────────────────────────────┐
│ APPLICATION LAYER │
│ KnexPay PWA │ NFC SmartBills │ KnexKeys │
│ (merchant) (NTAG 424 DNA) (keygen) │
└───────────────────────┬──────────────────────────┘
│
┌───────────────────────┼──────────────────────────┐
│ DISTRIBUTION LAYER │
│ ┌─────────────┐ │ ┌─────────────────┐ │
│ │ Stellar │ ◄──┤───► │ Native L1 DAG │ │
│ │ (Phase 1) │ │ │ (Phase 2) │ │
│ │ 3-5s finality│ │ │ <1s finality │ │
│ └─────────────┘ │ └─────────────────┘ │
└──────────────────────┼───────────────────────────┘
│
┌───────────────────────┼──────────────────────────┐
│ TOKEN LAYER │
│ UBX (elastic, COP-reference band, merchant payments) │
│ KNEX (fixed 100M, settlement, validator bonds) │
└──────────────────────────────────────────────────┘
The KNEX/UBX system implements a dual-layer monetary design. Each layer serves a distinct economic function. Neither token works in isolation — UBX provides velocity, KNEX provides scarcity. Value flows directionally from the circulation layer to the reserve layer over time.
UBX is the primary user-facing currency for everyday payments. It is designed to behave like digital cash for Colombia: fast, fee-less, bearer-like, and anchored by real commerce.
| Property | Value | Notes |
|---|---|---|
| Role | Elastic circulation currency | Daily payments, merchant settlement, payroll, NFC SmartBills |
| Supply Model | Elastic / policy-driven | No permanent hard cap. Supply expands/contracts via distribution throttling + treasury ops + conversion gravity. |
| Initial Authorized Distribution Program | 10,000,000,000 UBX | “Program ceiling” for early bootstrapping releases. Future authorization requires governance + health metrics. |
| COP Reference | 1 UBX ≈ 1,000 COP | Informational reference only — not a peg or guarantee |
| Tolerance Band | ±20% (ε = 0.20) | 800 ≤ PUBX ≤ 1,200 COP operational band |
| Distribution Model | φ-decay × stability multiplier | Smooth release curve; automatically throttled by price deviation |
| DEX Liquidity | Bootstrap-only rails | Stellar DEX provides early liquidity rails; long-term anchoring comes from merchant pricing + payroll demand |
| Supply Control | Treasury ops + throttling + conversion sink | No per-transaction burns required |
| Design Goal | Circulation, not accumulation | UBX is designed to move; surplus is structurally converted into KNEX |
UBX functions as a digital bearer instrument. Possession equals ownership. No KYC required for peer-to-peer transfers. NFC SmartBills (NTAG 424 DNA) enable physical-to-digital value transfer without internet connectivity at point of sale. The combination of feeless transfers, COP-denominated pricing, and bearer-like portability makes UBX function like digital cash for the Colombian economy.
| Property | Value | Notes |
|---|---|---|
| Role | Fixed-supply reserve asset | Settlement, validator bonding, long-term accumulation |
| Maximum Supply | 100,000,000 KNEX | Absolute hard cap — no inflation possible |
| Deflation | Staking locks + conversion absorption | Validator bonds and UBX→KNEX conversion continuously remove KNEX from active circulation |
| Settlement Finality | Irreversible, on-chain | KNEX transactions represent final settlement |

| Category | Amount | % | Purpose |
|---|---|---|---|
| Network Release Reserve | 90,000,000 | 90% | Validator incentives and long-term network security via PoB |
| Treasury | 7,000,000 | 7% | Ecosystem development, liquidity support, infrastructure |
| Team | 3,000,000 | 3% | Development, operations, long-term maintenance |
| Total (HARD CAP) | 100,000,000 | 100% | Absolute maximum — no additional minting possible |
This distribution prioritizes long-term network operation over insider ownership. 90% is aligned with infrastructure participation. Team and Treasury tokens are subject to time-locked vesting schedules. All reserve accounts are publicly visible on-chain with verifiable balances. No rebase. No post-distribution minting authority.
KNEX is not freely tradable on external markets. Liquidity is utility-driven and ecosystem-based, ensuring that KNEX value derives from protocol participation — not speculation.
| Path | Mechanism | Who |
|---|---|---|
| 1. UBX → KNEX Conversion | Protocol conversion module. UBX is absorbed, KNEX is released at the current epoch rate (K = U / Ee) | Any UBX holder |
| 2. Validator Rewards | Released from the 90M Network Reserve via PoB. Proportional to Bandwidth Score, subject to 3% cap and geographic multiplier | Active validators |
| 3. Treasury Operations | Ecosystem grants, liquidity seeding, and strategic allocations governed by treasury policy | Treasury (7M pool) |
No dependency on centralized exchange listings. KNEX does not derive value from speculative trading volume. Its worth comes from: (1) validator bond requirements, (2) slashing collateral at risk, (3) conversion gravity absorbing UBX surplus, and (4) governance weight. When demand for KNEX exceeds release rate, scarcity emerges naturally from protocol utility — not artificial restriction.
KNEX market access opens in three phases, each gated by concrete ecosystem health metrics. Premature opening risks speculation bubbles; permanent restriction risks thin liquidity. Graduated liberation balances both.
| Phase | Timeline | Access Level | Trigger Conditions |
|---|---|---|---|
| 1. Bootstrap | Years 0–2 | Protocol conversion + validator rewards + treasury ops + authorized Stellar DEX pools only | Default — active from genesis |
| 2. Maturation | Years 2–4 | Open authorized DEX pools with controlled depth. Community-governed liquidity pools (protocol approval required). No CEX. | Validators >100 AND Merchants >2,000 AND V >3 |
| 3. Liberation | Years 4+ | Remove pool authorization requirement. DEX aggregator integration. CEX becomes protocol-neutral (not endorsed, not blocked). | KNEX price stability >1 year AND Merchants >10,000 AND V ≥4 |
Each phase requires meeting ALL listed trigger conditions before progressing. Regression is possible: if metrics drop below thresholds for >90 consecutive days, the protocol reverts to the previous phase. Governance can override timelines but not trigger conditions.
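The gating and regression rules above can be sketched in Rust. This is an illustrative sketch covering only the Bootstrap → Maturation transition; the enum, struct, and function names are hypothetical, not from the protocol codebase.

```rust
// Hypothetical sketch of phase gating: ALL triggers must hold to progress,
// and regression requires >90 consecutive days below threshold.
#[derive(Debug, PartialEq, Clone, Copy)]
pub enum MarketPhase {
    Bootstrap,
    Maturation,
    Liberation,
}

pub struct EcosystemMetrics {
    pub validators: u32,
    pub merchants: u32,
    pub velocity: f64, // monthly UBX velocity V
}

/// Maturation triggers: Validators >100 AND Merchants >2,000 AND V >3.
pub fn maturation_triggers_met(m: &EcosystemMetrics) -> bool {
    m.validators > 100 && m.merchants > 2_000 && m.velocity > 3.0
}

/// Progression is immediate once all triggers hold; regression only after
/// metrics have been below threshold for more than 90 consecutive days.
pub fn next_phase(
    current: MarketPhase,
    m: &EcosystemMetrics,
    days_below_threshold: u32,
) -> MarketPhase {
    match current {
        MarketPhase::Bootstrap if maturation_triggers_met(m) => MarketPhase::Maturation,
        MarketPhase::Maturation
            if !maturation_triggers_met(m) && days_below_threshold > 90 =>
        {
            MarketPhase::Bootstrap
        }
        other => other,
    }
}
```

Note how a single failing trigger blocks progression: the conditions are joined with logical AND, matching the "ALL listed trigger conditions" rule.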
New UBX distribution follows a time-decay schedule based on the golden ratio. Early epochs distribute more UBX, creating adoption incentives while limiting long-term inflationary pressure.
```
// Golden ratio decay applied to UBX distribution
const φ = 1.6180339887  // Golden ratio

// Distribution rate per epoch:
Re = R0 / φ^e
// Where:
//   R0 = base distribution rate
//   e  = epoch number (fixed time intervals)
//   Re = distribution rate at epoch e

// Conversion factor per epoch (UBX per KNEX):
Ee = E0 / φ^e

// UBX → KNEX conversion:
K = U / Ee
// Inverse:
U = K · Ee
```
φ-DECAY RELEASE CURVE
═══════════════════════
Distribution
Rate (R)
│
R₀ │●
│ ╲
│ ╲
R₁ │ ●
│ ╲
│ ╲
R₂ │ ●
│ ╲
R₃ │ ●
│ ╲
R₄ │ ●───●───●───●──▸ (asymptotic)
│
└────────────────────────────▸
e₀ e₁ e₂ e₃ e₄ e₅ Epoch
Each epoch: Re = R0 / φ^e
Early participants receive higher distribution rates.
Release rate decreases predictably, never reaches zero.
Traditional halving models create sharp supply shocks every N years. φ-decay provides smooth, continuous reduction following the golden ratio — a curve found in natural growth patterns. This eliminates supply cliff events while still rewarding early network participation.
Canonical v5.1 requirement: UBX distribution is implemented as φ-decay multiplied by a stability multiplier (price-band throttling). Any earlier prototypes that used halving-based schedules are deprecated and must be treated as non-canonical.
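The canonical rate, φ-decay multiplied by the clamped stability multiplier, can be sketched in a few lines of Rust. The constants are the documented policy values; the function names are illustrative, not the production implementation.

```rust
// Sketch of the canonical v5.1 UBX release rate:
//   R_UBX(e) = (R0 / φ^e) × M(P), with M clamped to [0.5, 1.5].
const PHI: f64 = 1.618_033_988_7;   // golden ratio
const BETA: f64 = 2.0;              // policy sensitivity parameter
const P_TARGET: f64 = 1_000.0;      // COP reference price

/// Stability multiplier M(P) = clamp(1 + β·D, 0.5, 1.5),
/// where D = (P_UBX − P_target) / P_target.
fn stability_multiplier(p_ubx: f64) -> f64 {
    let d = (p_ubx - P_TARGET) / P_TARGET;
    (1.0 + BETA * d).clamp(0.5, 1.5)
}

/// Canonical release rate: φ-decay throttled by the price band.
fn ubx_release_rate(r0: f64, epoch: u32, p_ubx: f64) -> f64 {
    (r0 / PHI.powi(epoch as i32)) * stability_multiplier(p_ubx)
}
```

For example, at epoch 0 with UBX trading at 900 COP, M = 0.80 and the release rate is cut 20% relative to R0; deep deviations are bounded by the 0.5 floor so distribution never halts entirely.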
UBX and KNEX use different release curves, each optimized for its economic role:
```
R_UBX(e)  = (R0 / φ^e) × M(P)             // φ-decay × stability multiplier
R_KNEX(t) = 4,500,000/year × NAI_damping  // linear release × counter-cyclical damping
```

All 100,000,000 KNEX are created at genesis. The 90,000,000 KNEX Network Emission Reserve is locked and released over a 20-year program to fund validators and infrastructure participants via Proof-of-Bandwidth.
This is a release schedule, not inflation. Total supply is fixed from genesis and can never exceed 100,000,000 KNEX.
```
// All KNEX created at genesis. Reserve is RELEASED, not minted.
// Invariant: Released_total ≤ 90,000,000 KNEX
// Invariant: Total_supply  ≤ 100,000,000 KNEX

// Primary release: linear over 20 years
Rbase = 90,000,000 / 20 = 4,500,000 KNEX/year

// Epoch duration: 1 hour (8,760 epochs/year)
// Per-epoch base release: floor(4,500,000 / 8,760) = 513 KNEX
// Annual truncation remainder: 4,500,000 − (513 × 8,760) = 6,120 KNEX
//   Distributed in the final epoch of each year.

// NAI counter-cyclical damping (see §04 PoB):
Reffective = Rbase × NAIdamping
// NAI damping range: 0.25× to 3.0×
//   Low activity  → damping > 1 → higher rewards (retain validators)
//   High activity → damping < 1 → lower rewards (prevent overshoot)
//   NAI adjusts TIMING only, never total supply.

// End of program (year 20):
//   No further issuance occurs.
//   Undistributed reserve (if any) handled by governance only.
//   No new minting authority exists.
```
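The schedule arithmetic can be checked directly. The constants below are the documented values; the function names are illustrative.

```rust
// Arithmetic check of the linear KNEX release schedule.
const ANNUAL_RELEASE: u64 = 4_500_000; // KNEX per year from the 90M reserve
const EPOCHS_PER_YEAR: u64 = 8_760;    // 1-hour epochs
const PROGRAM_YEARS: u64 = 20;

/// Base release per epoch (integer division = floor).
fn per_epoch_release() -> u64 {
    ANNUAL_RELEASE / EPOCHS_PER_YEAR
}

/// Truncation remainder, distributed in the final epoch of each year.
fn annual_remainder() -> u64 {
    ANNUAL_RELEASE - per_epoch_release() * EPOCHS_PER_YEAR
}

/// Total released over the full program — must equal the 90M reserve.
fn program_total() -> u64 {
    ANNUAL_RELEASE * PROGRAM_YEARS
}
```

Because the per-epoch value is floored, the 6,120 KNEX remainder exists at all; distributing it in the year's final epoch keeps the annual total exactly at 4,500,000.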
At every epoch:
- `released_total ≤ 90,000,000 KNEX` (hard cap on reserve release)
- `total_supply ≤ 100,000,000 KNEX` (hard cap on total supply)
- NAI adjusts timing, never total supply.
If the projected reserve depletion date would fall earlier than the canonical narrative horizon, the protocol automatically caps reward acceleration:
`NAI_damping` is capped at 1.0× until the horizon recovers. This prevents sustained bear-market acceleration from draining the reserve early and breaking the 20-year distribution narrative.
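One plausible reading of this rule, sketched in Rust: project the depletion date at the requested damped rate, and cap damping at 1.0× whenever that projection lands before the program horizon. This is an illustrative interpretation only; the actual `horizon_safe_damping()` in `release.rs` may differ in detail.

```rust
// Illustrative horizon protection: if projected depletion of the remaining
// reserve falls before the 20-year horizon, cap damping at 1.0×.
fn horizon_safe_damping(
    raw_damping: f64,       // NAI damping, 0.25× to 3.0×
    remaining_reserve: f64, // KNEX left in the 90M reserve
    base_rate: f64,         // KNEX per epoch at damping = 1.0
    epochs_to_horizon: f64, // epochs left in the 20-year program
) -> f64 {
    // Projected epochs until depletion at the requested (damped) rate:
    let projected_epochs = remaining_reserve / (base_rate * raw_damping);
    if projected_epochs < epochs_to_horizon {
        // Depletion would land before the horizon → cap acceleration.
        raw_damping.min(1.0)
    } else {
        raw_damping
    }
}
```

With a full reserve and the full 20-year horizon, sustained 3.0× damping would deplete the reserve in roughly a third of the program, so the cap engages; damping below 1.0× always passes through unchanged.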
The current implementation (`lib.rs`) uses `MAX_SUPPLY = 100,000,000` with linear release from the 90M Network Release Reserve + NAI counter-cyclical damping + horizon protection. The `release.rs` module handles schedule computation, `horizon_safe_damping()`, and validator reward distribution. See the DLT repository for current implementation status.
Proof-of-Bandwidth validators share 4,500,000 KNEX per year released from the Network Emission Reserve, distributed proportionally to bandwidth score. All KNEX was created at genesis — this is a release schedule, not inflation.
Validator rewards are funded from the 90M reserve during the primary program. After the program completes (or governance finalizes any remaining undistributed reserve), validator income transitions to protocol-level revenue sources (e.g., settlement service charges, streaming settlement rails, and other governed network services). No new KNEX minting is permitted.
UBX uses a target range model rather than a fixed peg. The protocol does not guarantee redemption at a fixed COP value. Instead, stability emerges from real economic demand, merchant acceptance, controlled distribution (φ-decay), and structural conversion toward KNEX.
```
// Target range (not a peg)
Ptarget = 1,000 COP
ε = 0.20  // 20% acceptable deviation

// Tolerance formula:
Ptarget(1 − ε) ≤ PUBX ≤ Ptarget(1 + ε)

// With ε = 20%:
800 ≤ PUBX ≤ 1,200 COP

// Deviation metric:
D = (Pmarket − Ptarget) / Ptarget
|D| ≤ ε  // Must stay within the tolerance band
```
The system does not guarantee a fixed peg. Instead, it maintains a managed floating band using automated and treasury-driven mechanisms. Three coordinated levers keep UBX within the operational band.
```
// Volume-Weighted Average Price from multiple sources
PUBX = Σ(Price_i × Volume_i) / Σ(Volume_i)

// Price sources:
//   1. Stellar DEX trades (UBX/COP, UBX/USDC pairs)
//   2. Merchant pricing data (UBX/COP posted prices)
//   3. Payroll conversion rates
//   4. Treasury operations

// This produces a real-time VWAP that reflects
// actual economic activity, not speculative order flow.
```
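A minimal VWAP over (price, volume) observations from those sources might look like the sketch below. This is illustrative, not the production oracle; in particular, the empty-market handling is an assumption.

```rust
// Minimal VWAP sketch: Σ(price × volume) / Σ(volume) over observations
// pooled from DEX trades, merchant pricing, payroll, and treasury ops.
fn vwap(observations: &[(f64, f64)]) -> Option<f64> {
    let total_volume: f64 = observations.iter().map(|(_, v)| v).sum();
    if total_volume == 0.0 {
        return None; // no trades in the window → no price (assumption)
    }
    let weighted: f64 = observations.iter().map(|(p, v)| p * v).sum();
    Some(weighted / total_volume)
}
```

Volume weighting is what makes a handful of thin speculative trades unable to move the reference price against large merchant and payroll flows.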
| Parameter | Value |
|---|---|
| Ptarget | 1,000 COP |
| Lower Band | 0.8 × Ptarget = 800 COP |
| Upper Band | 1.2 × Ptarget = 1,200 COP |
| Tolerance (ε) | ±20% |
The Treasury maintains continuous liquidity on Stellar DEX with daily intervention limits to prevent reserve depletion.
```
// When UBX is BELOW the lower band:
if PUBX < Lower {
    // Treasury places buy orders (UBX ← reserve assets)
    // Removes UBX from circulation → upward price pressure
    treasury.buy_ubx(amount);
}

// When UBX is ABOVE the upper band:
if PUBX > Upper {
    // Treasury sells UBX from reserves
    // Increases circulating supply → downward price pressure
    treasury.sell_ubx(amount);
}

// Daily intervention limit (prevents reserve depletion):
Maxdaily = α × TreasuryUBX
// Where α = 0.5% to 2% (policy parameter)
// At α = 1%, a 100M UBX treasury can deploy 1M/day max
```
UBX distribution rate adjusts automatically based on price deviation from target. This creates supply elasticity without manual intervention. Tokens are released from the distribution pool, not minted.
```
// Price deviation from target:
D = (PUBX − Ptarget) / Ptarget

// Distribution multiplier:
M = 1 + β × D
// Where β ≈ 2 (policy sensitivity parameter)

// Behavior:
//   Price LOW  (D < 0) → M < 1 → distribution DECREASES
//   Price HIGH (D > 0) → M > 1 → distribution INCREASES

// Floor and ceiling (prevents extreme swings):
0.5 ≤ M ≤ 1.5

// Example: UBX at 900 COP (D = −0.10)
//   M = 1 + 2(−0.10) = 0.80 → distribution cut 20%
// Example: UBX at 1,100 COP (D = +0.10)
//   M = 1 + 2(+0.10) = 1.20 → distribution up 20%
```
When UBX is weak, the conversion mechanism absorbs excess supply. Users convert UBX into KNEX — UBX is locked, KNEX released from reserve pool. This is the economic gravity engine.
```
// UBX → KNEX conversion
KNEXout = (UBXin × PUBX) / PKNEX

// Effect: excess UBX flows into scarce KNEX
//   → UBX circulating supply decreases
//   → KNEX demand increases
//   → Both layers stabilize
```
Merchants display dual pricing: COP price and UBX equivalent. This creates a real-economy feedback loop that anchors UBX to tangible goods and services.
```
// Merchants display both COP and UBX prices
UBXprice = COPprice / PUBX

// Example: Coffee = 5,000 COP
//   At P_UBX = 1,000 COP → 5.0 UBX
//   At P_UBX =   900 COP → 5.6 UBX (merchant adjusts)
//   At P_UBX = 1,100 COP → 4.5 UBX

// This creates price feedback:
//   UBX drops → merchants adjust → demand increases → price stabilizes
```
Employer payroll creates predictable, recurring baseline demand for UBX independent of speculative activity.
```
// Employee receives partial salary in UBX
SalaryCOP = total COP salary
r = UBX allocation (3% to 30%)
UBXissued = (SalaryCOP × r) / PUBX

// Example: 3,000,000 COP salary, r = 10%
//   UBX_issued = (3,000,000 × 0.10) / 1,000 = 300 UBX

// At 1,000 employees: 300,000 UBX/month recurring demand
// This is non-speculative, productivity-backed demand
```
| Condition | Action |
|---|---|
| Price within band (800-1,200 COP) | No intervention |
| Price near lower band (~800-850 COP) | Reduce distribution (Lever 2) |
| Price below band (<800 COP) | Treasury buys + conversion encouraged (Levers 1+3) |
| Price near upper band (~1,150-1,200 COP) | Increase distribution (Lever 2) |
| Price above band (>1,200 COP) | Treasury sells UBX from reserves (Lever 1) |
Over time, the system stabilizes through five reinforcing mechanisms: (1) continuous payroll demand, (2) merchant pricing anchors, (3) treasury market depth on DEX, (4) UBX→KNEX gravity conversion, and (5) controlled distribution elasticity. This avoids hard pegs, algorithmic death spirals, and purely speculative price discovery. Stellar DEX is a bootstrap market mechanism, not the economic source of truth.
The system creates a one-directional economic flow from high-velocity circulation (UBX) toward low-velocity reserves (KNEX). This structural dynamic is the engine that sustains both layers.
ECONOMIC GRAVITY: DIRECTIONAL VALUE FLOW
═════════════════════════════════════════
┌─────────────────────────────────────────────────────┐
│ REAL ECONOMY │
│ Merchants ● Payroll ● Services ● Retail │
└──────────────────────┬──────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────┐
│ UBX CIRCULATION │
│ High velocity ● Bearer-like ● ~1,000 COP │
│ Daily payments, NFC SmartBills, P2P transfers │
└──────────────────────┬──────────────────────────────┘
│
Operational Surplus
│
▼
┌─────────────────────────────────────────────────────┐
│ DEX CONVERSION │
│ Merchant treasury converts excess UBX → KNEX │
│ K = U / Ee (conversion rate decays with φ) │
└──────────────────────┬──────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────┐
│ KNEX ACCUMULATION │
│ Low velocity ● Fixed supply ● Settlement │
│ Validator bonds, reserves, deflation via locks │
└─────────────────────────────────────────────────────┘
Flow direction: UBX circulation → surplus → DEX → KNEX
Value flows from high-velocity to low-velocity over time.
```
// Merchant conversion variables
P = UBX revenue received by merchant
O = UBX retained for operations
S = surplus converted to KNEX

// Surplus calculation:
S = P − O

// KNEX acquired:
Kacquired = S / Ee

// Aggregate economic gravity:
//   As more merchants convert surplus → KNEX:
//   - UBX circulating supply stabilizes
//   - KNEX liquid supply decreases
//   - d(KNEX_accumulated)/dt > 0 → increasing scarcity
```
The dual-layer system must be continuously monitored against quantitative thresholds. These metrics determine whether the economic engine is functioning as designed.
| Metric | Formula | Healthy | Critical |
|---|---|---|---|
| Monetary Velocity (V) | V = T / M | V ≥ 2 (2-4x/month target) | V < 2 → stagnation |
| Conversion Flow (C) | C = Σ(UBX→KNEX) per period | C > 0 (positive flow) | C → 0 → gravity collapse |
| DEX Buy Liquidity (L) | Lbuy = total buy-side depth | Lbuy ≥ 3 × Voldaily | Lbuy < Voldaily → thin market |
| COP Deviation (ε) | ε = (Pmarket − Ptarget) / Ptarget | \|ε\| ≤ 0.20 | \|ε\| > 0.30 → anchor breach |
| Ecosystem Stability Index | ESI = f(V, C, L, ε) | ESI > 3 | ESI < 1 → overdistribution |
```
// Monetary velocity (UBX must circulate, not accumulate)
V = T / M
// T = total UBX transaction volume in period
// M = UBX in active circulation
//   V ≥ 2 → healthy system (UBX is being used)
//   V < 2 → WARNING: velocity collapse risk

// KNEX accumulation rate (must be positive for gravity)
d(KNEXaccumulated) / dt > 0
// If accumulation rate is positive:
//   → liquid KNEX supply decreases
//   → relative economic value increases
//   → economic gravity is functioning

// Equilibrium condition
D ≥ RUBX  // Demand ≥ Distribution → price stability
RUBX > D  // Distribution > Demand → downward pressure
```
The single greatest threat to this system is a liquidity death spiral. If UBX→KNEX conversion (C) approaches zero AND UBX velocity (V) drops below 2 simultaneously, the system enters a self-reinforcing downward cascade.
The following mechanisms are designed to detect and halt the death spiral before it reaches critical cascade.
| Trigger | Condition | Response |
|---|---|---|
| Velocity Alert | V < 2 for > 7 days | Increase UBX distribution incentives, merchant onboarding push |
| Conversion Halt | C = 0 for > 14 days | Treasury provides bridge liquidity on DEX buy side |
| Anchor Breach | \|ε\| > 0.30 | Treasury intervention: sell KNEX reserves for UBX buy pressure |
| Liquidity Crisis | Lbuy < Voldaily | Emergency treasury injection into DEX order book |
Failure is not theoretical. The triggers above (velocity collapse, conversion halt, anchor breach, liquidity crisis) are the concrete failure modes that the protocol team must monitor and defend against.
| Asset | Supply Behavior | Role |
|---|---|---|
| UBX | Elastic / expanding | Circulation and liquidity |
| KNEX | Fixed / scarce (100M cap) | Reserve and settlement |
| Conversion | Directional (UBX → KNEX) | Economic gravity engine |
| Distribution | φ-Decay by epochs | Bootstrapping with natural decay |
The complementary dynamics — elastic circulation against fixed reserves — create a self-reinforcing monetary system where increased economic activity in UBX structurally benefits KNEX holders through conversion gravity.
Inspired by Nano's block-lattice architecture. Each account has its own chain (account-chain) in the DAG. Transactions only touch the sender and receiver chains.
```rust
pub struct Block {
    // Block identification
    hash: [u8; 32],           // Blake2b hash of block
    previous: [u8; 32],       // Previous block in account-chain

    // Account info
    account: [u8; 32],        // Account public key
    representative: [u8; 32], // Voting representative
    balance: u128,            // Account balance after block

    // Dual-token support (L1)
    token_type: TokenType,    // UBX | KNEX — first-class on L1

    // Block type specific
    block_type: BlockType,    // Send | Receive | Change | Bandwidth |
                              // StreamOpen | StreamSettle | StreamClose
    link: [u8; 32],           // Destination, source block hash, or stream session ID

    // Proof-of-Bandwidth (KnexCoin specific)
    bandwidth_proof: Option<BandwidthProof>,

    // Streaming payment data (optional — only for StreamOpen/Settle/Close)
    stream_data: Option<StreamData>,

    // Signature & anti-spam
    signature: [u8; 64],      // Ed25519 signature
    work: u64,                // Client-side anti-spam PoW (Nano model, NOT a fee)
}

pub enum TokenType {
    UBX,  // Elastic transactional layer (~1,000 COP)
    KNEX, // Fixed-supply reserve layer (100M cap)
}

// Stream block types for off-chain payment channels
//   StreamOpen:   Locks funds, creates session (sender's chain)
//   StreamSettle: Batch settlement of off-chain vouchers (receiver's chain)
//   StreamClose:  Final settlement + reclaim remaining (sender's chain)
```
On the native L1 DAG, both UBX and KNEX are first-class token types. Each block explicitly declares its token type. An account can hold balances in both tokens simultaneously (separate balance fields or separate account-chains per token type). During the Stellar phase, this distinction is handled by Stellar asset codes; on L1, it becomes a native protocol field.
The work: u64 field is a lightweight client-side computation (Nano model) that prevents network spam. It is not a transaction fee — no value is transferred to validators or burned. "Feeless" means zero value transfer for transaction processing. The PoW is computed by the sending wallet before broadcast and verified by nodes on receipt. Difficulty adjusts based on account activity frequency.
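The shape of this mechanism can be shown with a toy sketch: the sender searches for a nonce whose hash clears a threshold, and nodes verify with a single hash. The real protocol hashes with Blake2b against a dynamic, activity-based difficulty; this sketch substitutes std's `DefaultHasher` and a fixed toy threshold purely for illustration.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy anti-spam work (Nano model). NOT the real scheme: the protocol uses
// Blake2b and per-account dynamic difficulty; this uses SipHash + a fixed
// threshold to show the asymmetry (expensive search, cheap verification).
const DIFFICULTY_THRESHOLD: u64 = u64::MAX / 1024; // toy difficulty

fn work_value(block_hash: &[u8; 32], nonce: u64) -> u64 {
    let mut h = DefaultHasher::new();
    block_hash.hash(&mut h);
    nonce.hash(&mut h);
    h.finish()
}

/// Sender side: search for a valid nonce before broadcast.
fn compute_work(block_hash: &[u8; 32]) -> u64 {
    (0u64..)
        .find(|&n| work_value(block_hash, n) < DIFFICULTY_THRESHOLD)
        .expect("search space exhausted")
}

/// Node side: verification is a single hash — cheap for the network.
fn verify_work(block_hash: &[u8; 32], nonce: u64) -> bool {
    work_value(block_hash, nonce) < DIFFICULTY_THRESHOLD
}
```

The asymmetry is the point: the wallet does roughly a thousand hashes per block at this toy difficulty, while every node verifies with one, so spam costs the attacker compute without charging honest users a fee.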
BLOCK-LATTICE STRUCTURE (UBX Example)
══════════════════════════════════════
Alice's Chain Bob's Chain Carol's Chain
──────────── ─────────── ─────────────
│ │ │
┌────┴────┐ │ │
│ GENESIS │ │ │
│ 1000 UBX│ │ │
└────┬────┘ │ │
│ ┌────┴────┐ │
│ │ GENESIS │ │
│ │ 500 UBX │ │
│ └────┬────┘ │
┌────┴────┐ │ │
│ SEND │────────────────┼──────────────┐ │
│ -100 UBX│ │ │ │
│ 900 UBX │ │ ▼ │
└────┬────┘ │ ┌────┴────┐ │
│ │ │ RECEIVE │ │
│ │ │ +100 UBX│ │
│ │ │ 100 UBX │ │
│ │ └────┬────┘ │
│ ┌────┴────┐ │ │
│ │ SEND │─────────┼───────┼──▶ Pending
│ │ -50 UBX │ │ │
│ │ 450 UBX │ │ │
│ └────┬────┘ │ │
▼ ▼ ▼ ▼
Same structure applies to KNEX blocks (token_type = KNEX).
Each account can maintain parallel chains for each token type.
Bandwidth is difficult to prove trustlessly. Unlike PoW (verifiable computation) or PoS (verifiable stake), bandwidth proofs can be spoofed, colocated, or gamed. Our solution uses multiple verification vectors.
```rust
pub struct BandwidthProof {
    // Challenge data
    challenge_hash: [u8; 32], // Hash of random challenge data
    challenge_size: u64,      // Size of data transferred (bytes)
    timestamp_start: u64,     // Challenge initiated (ms)
    timestamp_end: u64,       // Challenge completed (ms)

    // Verification
    challenger_nodes: Vec<[u8; 32]>, // Nodes that issued challenge
    attestations: Vec<Attestation>,  // Peer confirmations

    // Computed metrics
    measured_bandwidth: u64,  // Calculated Mbps
    latency_avg: u32,         // Average latency (ms)

    // VDF commitment (prevents pre-computation)
    vdf_output: [u8; 32],
    vdf_proof: [u8; 64],
}

pub struct Attestation {
    attester: [u8; 32],       // Attesting node pubkey
    observed_bandwidth: u64,  // Their measurement
    confidence: u8,           // 0-100 confidence score
    signature: [u8; 64],
}
```
Validators earn a Bandwidth Score (BS) that determines their voting weight and rewards eligibility. All component metrics are normalized to [0, 1] before weighting to prevent any single factor from dominating.
```
// Bandwidth Score (BS) — all components normalized to [0, 1]

// Step 1: Normalize each metric
BW_norm  = min(BW_measured / BW_cap, 1.0)      // Cap: 10 Gbps
Lat_norm = max(1.0 − (Latency / Lat_max), 0.0) // Max: 500ms
Up_norm  = Uptime / 100.0                      // Already 0-100%
Rep_norm = min(AttestScore / Rep_cap, 1.0)     // Cap: max rep

// Step 2: Weighted sum
BS = (BW_norm  × 0.40) +  // 40% throughput
     (Lat_norm × 0.25) +  // 25% latency
     (Up_norm  × 0.20) +  // 20% uptime (rolling 30 days)
     (Rep_norm × 0.15)    // 15% peer reputation

// Result: BS ∈ [0.0, 1.0]
// Weights tunable via governance after Phase 3

// Caps prevent diminishing-return gaming:
BW_cap  = 10_000  // 10 Gbps — beyond this, no extra weight
Lat_max = 500     // 500ms — above this, latency score = 0
Rep_cap = 1000    // Maximum peer attestation score
```
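The normalization and weighting above translate directly into Rust. Weights and caps are the documented values; the function name is illustrative.

```rust
// Normalized Bandwidth Score, BS ∈ [0.0, 1.0].
const BW_CAP: f64 = 10_000.0; // Mbps (10 Gbps)
const LAT_MAX: f64 = 500.0;   // ms
const REP_CAP: f64 = 1_000.0; // max peer attestation score

fn bandwidth_score(bw_mbps: f64, latency_ms: f64, uptime_pct: f64, attest: f64) -> f64 {
    let bw_norm = (bw_mbps / BW_CAP).min(1.0);          // capped at 10 Gbps
    let lat_norm = (1.0 - latency_ms / LAT_MAX).max(0.0); // zero above 500ms
    let up_norm = uptime_pct / 100.0;
    let rep_norm = (attest / REP_CAP).min(1.0);
    bw_norm * 0.40 + lat_norm * 0.25 + up_norm * 0.20 + rep_norm * 0.15
}
```

For example, a validator at the 10 Gbps cap with 100ms latency, 99% uptime, and a 500 attestation score scores 0.40 + 0.20 + 0.198 + 0.075 = 0.873; pushing bandwidth to 50 Gbps would change nothing, which is exactly the anti-gaming property the caps provide.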
Challenge-response proofs require unbiased challenger selection. A Verifiable Random Function (VRF) ensures challengers are unpredictable and cannot be bribed or colocated in advance.
The full VDF commitment system (shown in the BandwidthProof struct) is deferred for v1. Initial implementation uses challenge-response with VRF-selected challengers only. VDF integration is planned for v2 when hardware-accelerated VDF libraries mature. The struct retains the VDF fields as reserved/optional.
| Parameter | v1 (Launch) | Target (Maturity) |
|---|---|---|
| Minimum challenger regions | 3 | 5 |
| Proof window | 60 seconds | 30 seconds |
| Challenge data size | 1-10 MB random | 1-50 MB adaptive |
| Minimum validators | 21 | 100+ |
| Geographic diversity | 3 countries min | 5+ regions, 10+ countries |
Bandwidth proofs are validated against anomaly thresholds to detect spoofing, proxy attacks, and colocation.
| Anomaly | Detection | Threshold | Action |
|---|---|---|---|
| Latency inconsistency | Triangulation mismatch > 2σ | > 2× standard deviation from expected RTT | Flag proof, require re-challenge |
| Bandwidth spike | Sudden >5× increase vs rolling average | 500% of 7-day rolling mean | Cap at rolling average, investigate |
| Attestation collusion | Same attesters repeatedly paired | > 3 consecutive pairings | Rotate challengers, flag both parties |
| Geographic impossibility | Claimed location vs latency mismatch | Latency < speed-of-light minimum for distance | Reject proof, slash reputation |
v1 uses threshold-based detection. ML-based anomaly detection (neural network models for bandwidth pattern analysis) is deferred to v2.
```
// Reputation decays toward baseline without fresh evidence
Rep_t = Rep_{t−1} × λ + evidence

// Where:
//   λ = decay factor (0.95 — 5% decay per period)
//   evidence = new attestation score from latest proof window
//   Rep_t = reputation at time t

// A validator that stops proving bandwidth sees reputation
// decay exponentially toward zero. No free riding on past proofs.

// Minimum reputation threshold for consensus participation:
Rep_min = 100  // Below this → excluded from voting
```
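How quickly does a silent validator fall out of consensus? A short Rust sketch of the recurrence answers this; the constants are the documented values, the helper names are illustrative.

```rust
// Exponential reputation decay with no fresh evidence (λ = 0.95).
const LAMBDA: f64 = 0.95;
const REP_MIN: f64 = 100.0;

/// One decay step: Rep_t = Rep_{t-1} × λ + evidence
fn decay_step(rep: f64, evidence: f64) -> f64 {
    rep * LAMBDA + evidence
}

/// Periods until a silent validator (evidence = 0) drops below Rep_min.
fn periods_until_excluded(mut rep: f64) -> u32 {
    let mut periods = 0;
    while rep >= REP_MIN {
        rep = decay_step(rep, 0.0);
        periods += 1;
    }
    periods
}
```

Starting from a reputation of 1,000, a validator that stops proving bandwidth falls below the 100-point voting threshold in 45 periods, while steady fresh evidence holds reputation at its equilibrium level.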
Proof-of-Bandwidth validators share 4,500,000 KNEX per year from the 90M Network Release Reserve (20-year program). Rewards are distributed proportionally to Bandwidth Score:
```
// Annual release from Network Reserve (not minted — unlocked)
Rannual = 4,500,000 KNEX

// Reward for validator i in period p:
Reward_i = Rperiod × (BS_i / ΣBS_all)
// Where:
//   Rperiod = KNEX released in this period (daily/weekly)
//   BS_i    = Bandwidth Score of validator i
//   ΣBS_all = Sum of all active validator scores

// Anti-centralization: max 3% of period rewards per validator
MaxReward_i = Rperiod × 0.03

// Geographic diversity multiplier:
GeoMul_i = 1.0 + γ × (1 − RegionDensity_i)
//   γ = 0.15 (diversity bonus coefficient)
//   RegionDensity = validators_in_region / total_validators
//   Validators in underrepresented regions earn up to 15% bonus
//   Validators in overrepresented regions receive no penalty (floor = 1.0)

// Final reward with anti-centralization + geographic diversity:
Reward_i = min(MaxReward_i, Rperiod × (BS_i × GeoMul_i) / Σ(BS_j × GeoMul_j))

// Example: 100 validators, equal BS
//   Reward_i = 4,500,000 / 100 = 45,000 KNEX/year each
//   Validator in rare region (5% density):    GeoMul = 1.0 + 0.15 × 0.95 ≈ 1.14
//   Validator in common region (30% density): GeoMul = 1.0 + 0.15 × 0.70 ≈ 1.11
```
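A runnable sketch of one period's reward split, with both the 3% cap and the geographic multiplier applied. The struct and validator data are illustrative, not from the codebase.

```rust
// One-period reward split with the 3% anti-centralization cap and the
// geographic diversity multiplier (floor 1.0, never a penalty).
const CAP_FRACTION: f64 = 0.03;
const GAMMA: f64 = 0.15;

struct Validator {
    bs: f64,             // Bandwidth Score ∈ [0, 1]
    region_density: f64, // validators_in_region / total_validators
}

fn geo_mul(region_density: f64) -> f64 {
    1.0 + GAMMA * (1.0 - region_density) // always ≥ 1.0 for density ∈ [0, 1]
}

fn rewards(period_release: f64, vals: &[Validator]) -> Vec<f64> {
    let total: f64 = vals.iter().map(|v| v.bs * geo_mul(v.region_density)).sum();
    vals.iter()
        .map(|v| {
            let share = period_release * (v.bs * geo_mul(v.region_density)) / total;
            share.min(period_release * CAP_FRACTION) // 3% cap per validator
        })
        .collect()
}
```

With only two validators, each would earn 50% proportionally but is clipped to the 3% cap; with 50 equal validators the 2% shares pass through uncapped, showing the cap only binds in concentrated sets.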
Three mechanisms prevent reward concentration and geographic monoculture:

- Per-validator cap: no single validator can receive more than 3% of any period's rewards, regardless of Bandwidth Score.
- Geographic diversity multiplier: validators in underrepresented regions earn up to a 15% bonus (γ = 0.15). This incentivizes geographic spread without penalizing existing operators.
- NAI counter-cyclical damping: rewards scale with real network activity, as described below.

The NAI adjusts validator rewards based on real network activity, preventing reward gaming during low-activity periods and providing higher incentives when the network is under-utilized. This is a counter-cyclical mechanism: rewards increase when activity drops (incentivizing validators to stay) and decrease when activity spikes (preventing inflationary overshoot).
| Input | Weight | Source |
|---|---|---|
| Transaction Volume | 40% | Total UBX + KNEX transaction volume in RAW |
| Active Accounts | 20% | Unique accounts with ≥1 transaction in the epoch |
| Active Validators | 10% | Validators that submitted PoB proofs in the epoch |
| DEX Volume | 30% | Volume between D-prefix DEX accounts (Stellar bootstrap + L1 pools) |
```
// NAI computes a damping multiplier applied to base validator rewards
// Range: 0.25× (high activity) to 3.0× (low activity)
NAIdamping = clamp(target_activity / current_activity, 0.25, 3.0)

// Where:
//   target_activity  = rolling average of NAI score over 90-day window
//   current_activity = NAI score for current epoch (1 hour)

// Effective validator reward:
Rewardeffective = Rewardbase × NAIdamping

// Counter-cyclical behavior:
//   Network busy  → damping < 1.0 → rewards decrease (prevent overshoot)
//   Network quiet → damping > 1.0 → rewards increase (retain validators)
//   Network dead  → damping = 3.0 → maximum incentive to bootstrap

// NAI bootstrap: first 7 days (168 epochs) use damping = 1.0
// Rolling window: 90 days (2,160 epochs) for target calculation
// Parameters tunable only via governance process
```
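The damping rule, including the bootstrap window and the clamp, fits in a few lines of Rust. Constants are the documented values; the function name is illustrative.

```rust
// NAI damping: clamp(target / current, 0.25, 3.0), with the first
// 168 epochs fixed at 1.0× while the rolling window fills.
const DAMP_MIN: f64 = 0.25;
const DAMP_MAX: f64 = 3.0;
const BOOTSTRAP_EPOCHS: u64 = 168; // first 7 days of 1-hour epochs

fn nai_damping(target_activity: f64, current_activity: f64, epoch: u64) -> f64 {
    if epoch < BOOTSTRAP_EPOCHS {
        return 1.0; // bootstrap: neutral damping
    }
    // A dead network (current → 0) drives the ratio toward infinity,
    // which the clamp pins at the 3.0× maximum incentive.
    (target_activity / current_activity).clamp(DAMP_MIN, DAMP_MAX)
}
```

Note that the clamp also handles the degenerate "network dead" case: as current activity approaches zero the ratio diverges, and the clamp pins damping at 3.0×, matching the documented behavior.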
Any account whose Base62-encoded address starts with D is flagged as a DEX account. Transactions between two D-accounts count as DEX volume in the NAI calculation. This allows the protocol to measure on-chain DEX activity without requiring a separate oracle. On L1, D-prefix accounts host authorized liquidity pools; on Stellar, they map to the existing order book infrastructure.
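The D-prefix rule reduces to a string check on already-encoded addresses. A minimal sketch (helper names are illustrative):

```rust
// Sketch of the D-prefix DEX-account rule described above.
// Assumes addresses are already Base62-encoded strings.
fn is_dex_account(address: &str) -> bool {
    address.starts_with('D')
}

fn counts_as_dex_volume(from: &str, to: &str) -> bool {
    // Only transfers between two D-accounts enter the NAI DEX-volume input
    is_dex_account(from) && is_dex_account(to)
}

fn main() {
    assert!(counts_as_dex_volume("DpoolAAAA", "DpoolBBBB"));
    assert!(!counts_as_dex_volume("DpoolAAAA", "KuserCCCC")); // user withdrawal
}
```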
Adapted from Nano's ORV consensus. Voting weight is determined primarily by Bandwidth Score. Stake functions as a bond threshold with ramp-up, not a weight multiplier.
The previous model (sqrt(Stake) × BS) still gave whales disproportionate power: 100× more stake = 10× more base weight before bandwidth. The new formula uses a bond threshold with linear ramp-up — below 10,000 KNEX you get proportional eligibility, at or above 10,000 KNEX you are fully eligible, but additional KNEX gives zero extra weight. Beyond the threshold, only bandwidth determines weight. A validator with 10,000 KNEX and excellent infrastructure has identical weight to one with 1,000,000 KNEX and the same infrastructure. Capital cannot buy consensus influence — only real infrastructure contribution matters.
// NEW: Bandwidth-primary, stake is bond threshold with ramp-up
VotingWeight = BandwidthScore × min(1, Stake / MIN_STAKE)

// Where:
// BandwidthScore = BS ∈ [0.0, 1.0] (from Section 04)
// Stake          = KNEX bonded by validator
// MIN_STAKE      = 10,000 KNEX

// The min(1, Stake/MIN_STAKE) term:
// Stake < 10,000    → factor = Stake/10,000 (partial, ramping)
// Stake ≥ 10,000    → factor = 1.0 (fully eligible)
// Stake = 1,000,000 → factor = 1.0 (no extra weight)

// RESULT: Only bandwidth determines consensus influence.
const MIN_STAKE: u128 = 10_000 * KNEX_RAW; // 10,000 KNEX in raw units

fn calculate_voting_weight(validator: &Validator) -> f64 {
    // Bond threshold with ramp-up: 0.0 to 1.0
    let stake_factor = (validator.staked_amount as f64 / MIN_STAKE as f64)
        .min(1.0);

    // Bandwidth score is the real weight (0.0 to 1.0)
    let bandwidth_score = validator.bandwidth_score;

    // Weight = bandwidth × eligibility
    bandwidth_score * stake_factor
}

// Examples:
// Validator A: 10,000 KNEX,    BS=0.85 → 0.85 × 1.0 = 0.85
// Validator B: 1,000,000 KNEX, BS=0.85 → 0.85 × 1.0 = 0.85
//   ↳ 100× more KNEX gives ZERO extra weight
// Validator C: 10,000 KNEX,    BS=0.40 → 0.40 × 1.0 = 0.40
//   ↳ Less infrastructure = less influence (as intended)
// Validator D: 5,000 KNEX,     BS=0.90 → 0.90 × 0.5 = 0.45
//   ↳ Below min stake = reduced eligibility
CONSENSUS FLOW (ORV + PoB)
══════════════════════════
1. BROADCAST 2. VOTE 3. CONFIRM
─────────── ────── ─────────
┌─────────┐ ┌───────────────┐ ┌──────────┐
│ User │ │ Representatives│ │ Quorum │
│ creates │─────▶│ receive & vote │─────▶│ reached │
│ block │ │(bandwidth-wtd)│ │ (67%) │
└─────────┘ └───────────────┘ └──────────┘
│ │ │
│ ┌───────┴───────┐ │
│ │ │ │
│ ┌───┴───┐ ┌───┴───┐ │
│ │Rep A │ │Rep B │ │
│ │BS:0.85│ │BS:0.72│ │
│ │Vote:✓ │ │Vote:✓ │ ▼
│ └───────┘ └───────┘ ┌─────────┐
│ │CONFIRMED│
│ ┌───────┐ ┌───────┐ │ <1 sec │
│ │Rep C │ │Rep D │ └─────────┘
│ │BS:0.60│ │BS:0.91│
│ │Vote:✓ │ │Vote:✗ │
│ └───────┘ └───────┘
│
Weight = BandwidthScore × min(1, Stake / MIN_STAKE)
Quorum = 67% of total online voting weight
| Parameter | Value | Notes |
|---|---|---|
| Quorum threshold | 67% (2/3 + 1) | Of total online voting weight |
| Minimum stake | 10,000 KNEX | Bond threshold with linear ramp-up |
| Delegation | Supported | Users delegate to representatives via Change blocks |
| Confirmation target | <1 second (L1) | 3-5 seconds during Stellar phase |
| Fork resolution | Heaviest-weight branch | Branch with most bandwidth-weighted votes wins |
| Slashing | KNEX bond at risk | Double-voting or provably malicious behavior |
In this design, KNEX stake serves three purposes: (1) Sybil resistance — creating a validator costs 10,000 KNEX, making mass-creation expensive; (2) Slashing collateral — malicious validators lose their bond; (3) Alignment — validators have economic skin in the game. What stake does not do: determine voting power, earn proportional rewards, or create governance influence. Infrastructure contribution is the only path to consensus weight.
Both UBX and KNEX use the same asynchronous two-phase transaction model. The token_type field in each block determines which layer the transaction operates on.
The primary transaction flow. UBX payments between merchants, consumers, and payroll systems. Feeless, sub-second, NFC-compatible.
| Step | Action | Time (L1) | Time (Stellar) | State |
|---|---|---|---|---|
| 1 | Customer taps NFC SmartBill or opens wallet | ~0ms | ~0ms | Intent created |
| 2 | UBX SEND block created (token_type: UBX) | ~0ms | N/A (Stellar tx) | Block signed locally |
| 3 | Block broadcast / Stellar submit | ~50ms | ~500ms | Propagating |
| 4 | ORV quorum (67%) / Stellar consensus | ~200-500ms | ~3-5s | SEND confirmed |
| 5 | Merchant RECEIVE block created | ~0ms | Automatic | Balance credited |
| 6 | RECEIVE confirmed | ~200-500ms | Included in step 4 | Payment complete |
Full UBX transaction confirmation with zero fees. On L1: sub-second via ORV with bandwidth-weighted voting. During Stellar phase: 3-5 seconds via Stellar consensus. Both paths are suitable for point-of-sale use. NFC SmartBills (NTAG 424 DNA) enable tap-to-pay at physical locations.
The secondary flow. Merchants converting UBX surplus to KNEX via DEX, validator bond deposits, and high-value settlement transfers.
| Step | Action | Time | State |
|---|---|---|---|
| 1 | Merchant identifies UBX surplus (P − O = S) | Business logic | Surplus calculated |
| 2 | UBX sell order placed on DEX (UBX/KNEX pair) | ~1-3s | Order submitted |
| 3 | DEX matches order (K = S / Ee) | ~3-5s (Stellar) | Conversion executed |
| 4 | KNEX credited to merchant wallet | ~3-5s (Stellar) | Settlement complete |
| 5 | KNEX optionally locked in treasury/staking | Varies | Long-term reserve |
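The surplus-and-conversion arithmetic in the table above (S = P − O, K = S / Ee) can be sketched directly. All names here are illustrative; P is UBX received, O is UBX operating outflow, and Ee is the effective UBX-per-KNEX rate at execution.

```rust
// Sketch of the merchant settlement arithmetic from the flow above.
fn ubx_surplus(p_received: f64, o_costs: f64) -> f64 {
    // S = P − O; a merchant with no surplus converts nothing
    (p_received - o_costs).max(0.0)
}

fn knex_acquired(surplus_ubx: f64, rate_ubx_per_knex: f64) -> f64 {
    // K = S / Ee
    surplus_ubx / rate_ubx_per_knex
}

fn main() {
    let s = ubx_surplus(10_000.0, 7_000.0); // 3,000 UBX surplus
    let k = knex_acquired(s, 150.0);        // at 150 UBX per KNEX
    println!("{} UBX surplus -> {} KNEX", s, k);
}
```

The rate of 150 UBX/KNEX is a made-up example value, not a protocol constant.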
DUAL-TOKEN TRANSACTION FLOWS
════════════════════════════
FLOW 1: UBX DAILY PAYMENT (feeless, sub-second)
─────────────────────────────────────────────────
Consumer Merchant
┌──────────┐ ┌──────────┐
│ Wallet │── UBX SEND ────────▶│ Wallet │
│ (NFC/ │ token: UBX │ (POS / │
│ App) │ fee: 0 │ App) │
└──────────┘ └──────────┘
│ ORV │
└──── <1s confirmation ───────────┘
FLOW 2: KNEX SETTLEMENT (DEX conversion)
─────────────────────────────────────────
Merchant DEX Reserve
┌──────────┐ ┌────────────┐ ┌──────────┐
│ UBX │── Sell ▶│ UBX/KNEX │── Buy ─▶│ KNEX │
│ Surplus │ │ Order Book│ │ Treasury│
└──────────┘ └────────────┘ └──────────┘
│ │ │
└──── Economic Gravity (directional) ────────┘
| Network | Confirmation | Fees | Finality |
|---|---|---|---|
| KnexCoin L1 (UBX) | <1 second | Zero | Immediate (ORV quorum) |
| KnexCoin Stellar (UBX) | 3-5 seconds | ~0.00001 XLM | Immediate (SCP) |
| Nano | <1 second | Zero | Immediate (ORV) |
| Solana | ~400ms | ~$0.00025 | Probabilistic |
| Ethereum | ~12 seconds | $0.50-50+ | ~15 minutes |
| Bitcoin | ~10 minutes | $1-50+ | ~60 minutes |
UBX supports continuous, per-second payment streams via off-chain signed vouchers with periodic on-chain settlement. This enables payroll drip, bandwidth metering, service billing, and micro-commerce without chain spam.
| Token | Streamable? | Use Cases | Rationale |
|---|---|---|---|
| UBX | ✓ Yes | Payroll drip, service metering, bandwidth payments, micro-commerce | High-velocity layer designed for continuous flow |
| KNEX (retail) | ✗ No | — | Low velocity reserve asset; should not be continuously spent by retail users |
| KNEX (protocol) | ✓ Yes | Validator rewards, treasury vesting, protocol distributions | Controlled release from reserves needs continuous scheduling |
struct StreamSession {
    session_id: Hash,            // Unique channel ID (Blake2b of params)
    sender: PublicKey,           // Payer
    receiver: PublicKey,         // Payee
    token_type: TokenType,       // UBX (retail) or KNEX (protocol-only)
    rate_raw_per_second: u128,   // Continuous payment rate in RAW
    max_total_raw: u128,         // Spending cap for this session
    settled_raw: u128,           // Cumulative on-chain settlements
    expiration: u64,             // Unix timestamp auto-close
    created_at: u64,             // Session start time
}

// Precision: 1.0000000 = 10,000,000 RAW (7 decimal places)
// Example: Payroll stream of 400 UBX/month
//   rate      = (400 × 10,000,000) / (30 × 86,400) = 1,543 RAW/second
//   max_total = 400 × 10,000,000 = 4,000,000,000 RAW

// Rounding: all RAW calculations truncate toward zero (floor division)
// No fractional RAW exists. Remainder stays with sender on StreamClose
// Min settlement amount: 1,000 RAW (0.0001000 tokens) — prevents dust
// Min settle interval: 60 seconds per session (anti-spam)
// Sender signs cumulative vouchers off-chain
struct Voucher {
    session_id: Hash,
    cumulative_amount: u128,  // Total owed (grows over time)
    sequence: u64,            // Monotonically increasing
    signature: Signature,     // Sender signs (cumulative, not delta)
}

// Receiver holds the LATEST voucher as proof of payment
// Only the most recent voucher matters (cumulative design)

// Settlement triggers (any one triggers an on-chain block):
// 1. Time interval: every N minutes (default 5 min)
// 2. Amount threshold: unsettled exceeds X RAW
// 3. Session close: final settlement on expiry or manual close
// 4. Dispute: receiver submits voucher if sender goes offline

// Dispute resolution:
// If the sender goes offline, the receiver submits the latest voucher
// to create a StreamSettle block (unilateral close).
// The sender's locked funds up to voucher.cumulative_amount
// are transferred. The remainder returns to the sender after a grace period.
// Grace period: 24 hours from session expiration
// After grace: receiver can claim, sender can reclaim unclaimed

// On-chain footprint: ONE StreamSettle block per settlement —
// NOT one block per micro-payment. Anti-spam by design.
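The cumulative design makes the receiver-side acceptance rule simple: a new voucher is valid only if its sequence and cumulative amount both advance and stay within the session cap. A minimal sketch (signature verification elided; the helper name `accept_newer` is invented here):

```rust
// Simplified mirror of the Voucher fields above (signature omitted).
struct Voucher {
    cumulative_amount: u128,
    sequence: u64,
}

fn accept_newer(latest: &Voucher, incoming: &Voucher, max_total_raw: u128) -> bool {
    // Only the most recent voucher matters: sequence must strictly increase,
    // the cumulative amount must never shrink, and the session cap must hold.
    incoming.sequence > latest.sequence
        && incoming.cumulative_amount >= latest.cumulative_amount
        && incoming.cumulative_amount <= max_total_raw
}

fn main() {
    let v1 = Voucher { cumulative_amount: 1_543, sequence: 1 };
    let v2 = Voucher { cumulative_amount: 3_086, sequence: 2 };
    assert!(accept_newer(&v1, &v2, 4_000_000_000)); // normal progression
    assert!(!accept_newer(&v2, &v1, 4_000_000_000)); // replayed older voucher
}
```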
| Parameter | Value | Purpose |
|---|---|---|
| Min settlement interval | 60 seconds per session | Prevents rapid-fire on-chain blocks |
| Max concurrent sessions | 100 per account | Prevents account-level spam |
| Voucher expiry | Session expiration + 24h grace | Receiver can always claim owed funds |
| KNEX streaming guard | Protocol-only (validator/treasury) | Retail KNEX streams are rejected at validation |
Example: a payroll stream uses rate = monthly_salary / (30 × 86,400) and settles hourly on-chain; the employee can spend accrued UBX in real time.
STREAMING PAYMENT FLOW
═══════════════════════
Sender Receiver
┌──────────────┐ ┌──────────────┐
│ StreamOpen │── on-chain ──▶ │ Session │
│ (lock funds) │ │ Registered │
└──────┬───────┘ └──────┬───────┘
│ │
│ ┌─────────────────────────┐ │
│ │ OFF-CHAIN VOUCHERS │ │
├─▶│ v1: 1,543 RAW │──┤
├─▶│ v2: 3,086 RAW │──┤
├─▶│ v3: 4,629 RAW ... │──┤
│ └─────────────────────────┘ │
│ │
┌──────┴───────┐ ┌──────┴───────┐
│ │ │ StreamSettle │
│ │◀── on-chain ──│ (batch claim)│
└──────┬───────┘ └──────┬───────┘
│ │
┌──────┴───────┐ ┌──────┴───────┐
│ StreamClose │── on-chain ──▶ │ Final Settle │
│ (reclaim │ │ (claim │
│ remainder) │ │ remaining) │
└──────────────┘ └──────────────┘
UBX launches as a local payment rail in Colombia before expanding regionally. Price stability, merchant density, and payroll integration are achieved in a single market first — proving the economic model with real transaction volume before geographic expansion.
| Phase | Name | Merchants | Users | Volume (USD/mo) | Target |
|---|---|---|---|---|---|
| 1 | Pilot Local | 50–100 | 1,000–5,000 | $50K–$200K | Single city (Cali), prove unit economics |
| 2 | Service Sector | 500–2,000 | 10,000–50,000 | $500K–$2M | 3–5 cities (Bogotá, Medellín, Cali, Barranquilla) |
| 3 | Enterprise Integration | 2,000+ | 50,000–300,000 | $2M–$10M | Payroll API, corporate treasury, SME onboarding |
| 4 | Regional Network | 5,000+ | 300,000–1,000,000 | $10M+ | National merchant directory, inter-city payment corridors |
| 5 | International Settlement | 10,000+ | 1,000,000+ | $50M+ | Remittance corridors, LATAM expansion, KNEX as settlement layer |
ADOPTION FLYWHEEL
═══════════════════
┌──────────────┐ ┌──────────────┐
│ Merchant │────────▶│ Consumer │
│ Accepts │ │ Pays UBX │
│ UBX │ │ │
└──────┬───────┘ └──────┬───────┘
│ │
▼ ▼
┌──────────────┐ ┌──────────────┐
│ Payroll │◀────────│ Velocity │
│ Integration │ │ Increases │
│ (COP→UBX) │ │ (V ≥ 2) │
└──────┬───────┘ └──────┬───────┘
│ │
▼ ▼
┌──────────────┐ ┌──────────────┐
│ Surplus │────────▶│ KNEX │
│ UBX Flow │ │ Conversion │
│ │ │ Gravity │
└──────────────┘ └──────────────┘
Merchants accept UBX → Consumers spend UBX → Velocity rises →
Payroll creates recurring demand → Surplus flows to KNEX →
More merchants attracted by network density → Flywheel accelerates
KnexCoin deploys NFC-enabled physical cards and SmartBills for frictionless in-store payments without requiring advanced smartphones.
| Feature | Specification | Purpose |
|---|---|---|
| Chip | NXP NTAG 424 DNA | Cryptographic authentication hardware |
| Authentication | SUN (Secure Unique NFC) message | Dynamic signature per tap — prevents cloning |
| Anti-Cloning | Dynamic counter + CMAC verification | Each tap generates a unique response |
| Wallet Binding | Card ↔ Wallet address link | Physical card triggers wallet-level transactions |
| Form Factor | Card / SmartBill / sticker | Multiple deployment formats for merchant needs |
NFC deployment begins in Phase 1 with pilot merchants. Regional-scale card distribution starts in Phase 2. Each card is provisioned via the KnexCard system at nfc.serial.cash/auth/.
Employers integrate via the Payroll API, allowing employees to receive a configurable percentage of their salary in UBX. This creates recurring, predictable demand that anchors UBX velocity.
// Company → Payroll API → Split payout
// Employee configures UBX percentage (r = 3% to 30%)
UBXpayout = (SalaryCOP × r) / PUBX
COPpayout = SalaryCOP × (1 − r)

// Example: 4,000,000 COP salary, r = 10%, P_UBX = 1,000 COP
//   UBX payout = (4,000,000 × 0.10) / 1,000 = 400 UBX
//   COP payout = 4,000,000 × 0.90 = 3,600,000 COP (to bank)

// Payroll creates:
// - Recurring UBX demand (buy pressure at each pay cycle)
// - Velocity floor (employees spend UBX at merchants)
// - Natural COP anchor reinforcement
UBX percentage is fully employee-configurable with no minimum mandate. Integration with existing accounting systems (SAP, Siigo) planned for Phase 3. Corporate treasury can also hold KNEX as a reserve asset. Streaming mode: Payroll can use the streaming payment channel (see §06) for per-second UBX drip — employees accrue UBX in real time and can spend it before the pay period ends.
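For streaming-mode payroll, the monthly UBX payout converts to a per-second RAW rate with floor division, matching the §06 example (400 UBX/month → 1,543 RAW/s). A minimal sketch with illustrative names:

```rust
// Sketch of the payroll-to-stream rate conversion (7-decimal RAW precision).
const RAW_PER_UBX: u128 = 10_000_000;       // 1.0000000 UBX = 10,000,000 RAW
const SECONDS_PER_MONTH: u128 = 30 * 86_400; // 2,592,000 s

fn stream_rate_raw_per_sec(monthly_ubx: u128) -> u128 {
    // Integer division truncates toward zero, as specified:
    // no fractional RAW exists; the remainder stays with the sender.
    (monthly_ubx * RAW_PER_UBX) / SECONDS_PER_MONTH
}

fn main() {
    // 400 UBX/month → 4,000,000,000 RAW / 2,592,000 s = 1,543 RAW/s
    println!("{}", stream_rate_raw_per_sec(400));
}
```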
Colombia receives $9B+ in annual remittances, primarily from the United States. UBX/KNEX provides a settlement rail with sub-5-second finality and less than 0.1% transaction cost.
| Corridor | Volume Estimate | Infrastructure |
|---|---|---|
| Colombia ↔ USA | $9B+/year (primary) | Stellar + local on-ramps |
| Colombia ↔ Spain | Remittances + commerce | Stellar + EU partners |
| LATAM Regional | Bilateral trade | Regional node networks |
| Colombia ↔ Asia | Imports | Stellar cross-border settlement |
| Route | Monthly Volume | Use Case |
|---|---|---|
| Bogotá ↔ Medellín | $1M–$5M | Inter-city commerce |
| Bogotá ↔ Cali | $500K–$3M | Supply chain payments |
| Venezuela Border | $200K–$1M | Remittances + commerce |
| Ecuador Border | $100K–$500K | Bilateral trade |
| Metric | Target | Why It Matters |
|---|---|---|
| WAU (Weekly Active Users) | >500 | Minimum network activity for meaningful data |
| UBX Velocity (V) | 2–4× rotations/month | Below 2 = stagnation risk, above 4 = healthy commerce |
| 3-Month Retention | >50% | Users must stay for flywheel to work |
| UBX→KNEX Conversion | >20% of merchants | Proves KNEX gravity is active |
| NPS (Merchant) | >40 | Merchant satisfaction drives word-of-mouth |
| Settlement Time | <3 seconds | Must match or beat card network UX |
| Year | Milestone | Users | Key Deliverable |
|---|---|---|---|
| 2026 | Launch & Validation | 1K–5K | Mainnet activation, pilot in Cali, 20–50 merchants |
| 2027 | Local Expansion | 20K–50K | Multiple urban zones, NFC card deployment at regional scale |
| 2028 | Enterprise Integration | 100K–300K | SME integration, payroll API live, KNEX corporate treasury |
| 2029 | National Network | 1M+ | Multiple major cities, national merchant directory |
| 2030 | International Expansion | 3M+ | Remittance corridors, 3+ LATAM countries, KNEX settlement layer |
Minimum 6 months of stable Colombian operation required before international expansion. Each market requires a local partner for regulatory compliance.
Security spans both token layers: UBX supply integrity (distribution accuracy, conversion rate manipulation, COP anchor defense), KNEX lockup enforcement (validator bonds, slashing execution, reserve accounting), and conversion manipulation prevention (rate oracle attacks, front-running, sandwich attacks on the conversion module).
All slashing penalties are denominated in KNEX from the validator’s bonded collateral. This ensures economic consequences are tied to the settlement layer, not the transactional layer.
| Violation | Penalty | Detection | Recovery |
|---|---|---|---|
| Double voting | 100% of bond (10,000 KNEX) | Cryptographic proof (conflicting signatures) | None — permanent ban from validator set |
| Bandwidth spoofing | 100% of bond | PoB cross-validation, latency triangulation | None |
| Attestation collusion | 75% of bond (7,500 KNEX) | Statistical analysis, repeated pairing detection | Re-bond after cooldown (90 days) |
| Attestation fraud | 50% of bond (5,000 KNEX) | Challenge-response verification failure | Re-bond after cooldown (30 days) |
| Extended downtime | 5% of bond per week | Missing PoB proofs for >48 hours | Resume proofs to stop penalty |
| Stale proofs | Reputation decay (no KNEX slash) | Proof age exceeds freshness window | Submit fresh PoB proofs |
Slashed KNEX is burned (removed from circulating supply), not redistributed. This makes slashing deflationary, reinforcing KNEX scarcity. The Stellar bootstrap phase uses reputation-based penalties only; full KNEX slashing activates on L1 mainnet.
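The penalty schedule above maps cleanly to a lookup on the violation type. A minimal sketch (types and names are illustrative; the L1 slashing rules in the table are normative):

```rust
// Sketch of the slashing table: penalty as a percentage of the bonded KNEX.
enum Violation {
    DoubleVoting,
    BandwidthSpoofing,
    AttestationCollusion,
    AttestationFraud,
}

fn slash_amount(bond_knex: u64, violation: &Violation) -> u64 {
    let pct = match violation {
        // 100% of bond, permanent consequences
        Violation::DoubleVoting | Violation::BandwidthSpoofing => 100,
        Violation::AttestationCollusion => 75, // re-bond after 90-day cooldown
        Violation::AttestationFraud => 50,     // re-bond after 30-day cooldown
    };
    // Slashed KNEX is burned, not redistributed (deflationary)
    bond_knex * pct / 100
}

fn main() {
    assert_eq!(slash_amount(10_000, &Violation::DoubleVoting), 10_000);
    assert_eq!(slash_amount(10_000, &Violation::AttestationCollusion), 7_500);
    assert_eq!(slash_amount(10_000, &Violation::AttestationFraud), 5_000);
}
```

Downtime (5% per week) and stale-proof decay are time-dependent and would be handled outside this per-event lookup.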
Full security hardening details — including cryptographic primitives, network security, consensus safety, economic security, quantum resistance, and formal verification — are specified in the comprehensive security section below (see implementation phases and detailed protocol review).
| Layer | Domain | Key Measures |
|---|---|---|
| 1 | Cryptographic | Ed25519 signatures, Blake2b hashing, deterministic key derivation |
| 2 | Network | Peer authentication, encrypted transport (Noise protocol), eclipse resistance |
| 3 | Consensus | ORV quorum (67%), bandwidth-weighted voting, fork resolution |
| 4 | Economic | KNEX bond slashing, PoB spoofing detection, conversion rate integrity |
| 5 | Quantum | FALCON-512 post-quantum migration path, hybrid signature scheme |
| 6 | Operational | Rate limiting, DoS mitigation, client-side PoW anti-spam |
Each phase includes difficulty rating, time estimate, AI prompts for development, and test commands for validation.
Build the block-lattice data structure, account-chain management, and basic transaction processing without consensus.
Create a Rust implementation for a Nano-inspired block-lattice DAG cryptocurrency called KnexCoin.
Requirements:
1. Define a Block struct with fields: hash ([u8; 32]), previous ([u8; 32]), account ([u8; 32]), representative ([u8; 32]), balance (u128), block_type (enum: Send, Receive, Change, Bandwidth), link ([u8; 32]), bandwidth_proof (Option<BandwidthProof>), signature ([u8; 64]), work (u64)
2. Implement Blake2b-256 for block hashing
3. Implement Ed25519 for signatures using the ed25519-dalek crate
4. Create serialization/deserialization using bincode
5. Add validation methods for each block type
6. Include proof-of-work validation (anti-spam, minimal difficulty)
The block should be compatible with a block-lattice structure where each account has its own chain. Send blocks create pending receivables; Receive blocks claim them.
Output complete, production-ready Rust code with proper error handling and documentation.
Create a RocksDB-based storage layer for KnexCoin's block-lattice DAG.
Requirements:
1. AccountChainDB struct wrapping RocksDB with column families:
   - accounts: account_pubkey -> AccountInfo (head block hash, balance, representative)
   - blocks: block_hash -> Block (serialized)
   - pending: (destination_account, source_block_hash) -> amount
   - representatives: rep_pubkey -> total_weight
2. Implement CRUD operations:
   - put_block(block) - validate and store block, update account head
   - get_block(hash) -> Option<Block>
   - get_account_chain(account) -> Vec<Block> (traverse from head)
   - get_pending(account) -> Vec<(BlockHash, Amount)>
   - update_representative(account, new_rep)
3. Atomic operations using RocksDB WriteBatch for consistency
4. Implement proper error types with thiserror
5. Add LRU cache for hot accounts (100k entries)
Use the rocksdb crate. Include comprehensive unit tests.
Create a CLI wallet for KnexCoin using Rust and the clap crate.
Commands needed:
1. knex wallet create - Generate new Ed25519 keypair, display mnemonic (BIP39)
2. knex wallet import <mnemonic> - Restore from mnemonic
3. knex wallet balance [account] - Show balance and pending
4. knex send <to> <amount> - Create and sign send block
5. knex receive [block_hash] - Create receive block for pending
6. knex change-rep <representative> - Create change block
7. knex history [account] - Show transaction history
Requirements:
- Secure key storage (encrypted with password, use argon2)
- BIP39 mnemonic support (24 words)
- Offline signing capability
- Human-readable account addresses (knex1... bech32 format)
- JSON output option for scripting
Use clap v4 with derive macros. Include colored output using the colored crate.
# Setup Rust development environment
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustup default stable
rustup component add clippy rustfmt llvm-tools-preview
# Install testing tools
cargo install cargo-tarpaulin # Code coverage
cargo install cargo-nextest # Faster test runner
cargo install cargo-watch # Auto-run tests
# Create project structure
mkdir knexcoin && cd knexcoin
cargo new node --lib
cargo new wallet
# Add dependencies to node/Cargo.toml
cat >> node/Cargo.toml << 'EOF'
[dependencies]
tokio = { version = "1", features = ["full"] }
rocksdb = "0.21"
ed25519-dalek = "2"
blake2 = "0.10"
serde = { version = "1", features = ["derive"] }
bincode = "1"
thiserror = "1"
lru = "0.12"
hex = "0.4"
rand = "0.8"
[dev-dependencies]
criterion = "0.5"
tempfile = "3"
proptest = "1"
EOF
# Run all unit tests
cargo test --workspace

# Run with verbose output
cargo test --workspace -- --nocapture

# Run specific block module tests
cargo test -p knex-node block::tests

# Test block creation
cargo test test_create_genesis_block
cargo test test_create_send_block
cargo test test_create_receive_block
cargo test test_create_change_block
cargo test test_block_type_enum

# Test block serialization
cargo test test_block_serialize_deserialize
cargo test test_block_bincode_roundtrip
cargo test test_block_json_roundtrip

# Test block validation
cargo test test_block_hash_verification
cargo test test_block_signature_verification
cargo test test_block_balance_validation
cargo test test_block_previous_link_validation
cargo test test_reject_invalid_block_type
cargo test test_reject_tampered_block
# Blake2b hashing tests
cargo test test_blake2b_hash_correctness
cargo test test_blake2b_test_vectors  # RFC 7693 test vectors
cargo test test_blake2b_empty_input
cargo test test_blake2b_deterministic
cargo test test_hash_different_inputs_differ

# Ed25519 signature tests
cargo test test_ed25519_keypair_generation
cargo test test_ed25519_sign_verify
cargo test test_ed25519_invalid_signature_fails
cargo test test_ed25519_wrong_key_fails
cargo test test_ed25519_test_vectors  # RFC 8032 test vectors
cargo test test_signature_malleability_protection

# BIP39 mnemonic tests
cargo test test_mnemonic_generation_24_words
cargo test test_mnemonic_to_seed
cargo test test_mnemonic_restore_keypair
cargo test test_invalid_mnemonic_rejected
cargo test test_mnemonic_checksum_validation
# Database CRUD tests
cargo test test_db_put_block
cargo test test_db_get_block
cargo test test_db_get_nonexistent_block
cargo test test_db_delete_block
cargo test test_db_update_block

# Account chain tests
cargo test test_account_chain_traversal
cargo test test_account_head_update
cargo test test_account_balance_tracking
cargo test test_account_representative_change
cargo test test_multiple_accounts_isolation

# Pending receivables tests
cargo test test_add_pending_receivable
cargo test test_get_pending_receivables
cargo test test_claim_pending_receivable
cargo test test_pending_removed_after_receive
cargo test test_multiple_pending_same_account

# RocksDB specific tests
cargo test test_rocksdb_column_families
cargo test test_rocksdb_write_batch_atomicity
cargo test test_rocksdb_persistence_after_restart
cargo test test_rocksdb_concurrent_access
# Full transaction flow integration test
cargo test --test integration_tests test_full_send_receive_flow

# Manual integration testing steps:
# 1. Create two test wallets
./knex wallet create --output wallet_a.json
./knex wallet create --output wallet_b.json

# 2. Fund wallet A with genesis block (testnet)
./knex genesis create --account $(cat wallet_a.json | jq -r .address) --amount 1000000

# 3. Send from A to B
./knex send --wallet wallet_a.json --to $(cat wallet_b.json | jq -r .address) --amount 100

# 4. Verify pending on B
./knex pending --wallet wallet_b.json

# 5. Receive on B
./knex receive --wallet wallet_b.json

# 6. Verify balances
./knex balance --wallet wallet_a.json  # Should be 999900
./knex balance --wallet wallet_b.json  # Should be 100

# 7. Verify chain integrity
./knex verify-chain --account $(cat wallet_a.json | jq -r .address)
./knex verify-chain --account $(cat wallet_b.json | jq -r .address)
# Edge case tests
cargo test test_send_entire_balance
cargo test test_send_zero_amount_rejected
cargo test test_send_negative_amount_rejected
cargo test test_send_more_than_balance_rejected
cargo test test_receive_already_claimed_rejected
cargo test test_double_spend_rejected
cargo test test_fork_detection
cargo test test_orphan_block_handling
cargo test test_max_balance_u128
cargo test test_very_long_account_chain

# Property-based testing (proptest)
cargo test test_proptest_block_roundtrip
cargo test test_proptest_signature_validity
cargo test test_proptest_balance_never_negative

# Stress tests (run with --release)
cargo test --release test_stress_1000_transactions -- --ignored
cargo test --release test_stress_concurrent_sends -- --ignored
cargo test --release test_stress_rapid_blocks -- --ignored
# Run all benchmarks
cargo bench

# Individual benchmarks
cargo bench bench_block_creation
cargo bench bench_block_hashing
cargo bench bench_signature_creation
cargo bench bench_signature_verification
cargo bench bench_block_serialization
cargo bench bench_db_write
cargo bench bench_db_read
cargo bench bench_chain_traversal

# Expected targets:
# - Block hashing: < 1μs
# - Signature creation: < 100μs
# - Signature verification: < 200μs
# - DB write: < 1ms
# - DB read: < 100μs
# - Block serialization: < 10μs
# Check for common mistakes
cargo clippy --all-targets --all-features -- -D warnings

# Format code
cargo fmt --all --check

# Generate coverage report (target: >90%)
cargo tarpaulin --out Html --output-dir coverage/
open coverage/tarpaulin-report.html

# Run all checks before commit
cargo fmt --all
cargo clippy --all-targets --all-features
cargo test --workspace
cargo tarpaulin --fail-under 90
PHASE 1 COMPLETION CHECKLIST:

BLOCK STRUCTURE:
[ ] Block struct compiles and serializes correctly
[ ] All block types implemented (Send, Receive, Change, Bandwidth)
[ ] Block validation rejects invalid blocks
[ ] Genesis block creation works

CRYPTOGRAPHY:
[ ] Blake2b hash matches RFC 7693 test vectors
[ ] Ed25519 signatures match RFC 8032 test vectors
[ ] BIP39 mnemonic generation and recovery works
[ ] Keypair derivation is deterministic

DATABASE:
[ ] RocksDB stores and retrieves blocks correctly
[ ] Account head updates atomically on new block
[ ] Pending receivables tracked correctly
[ ] Column families properly isolated
[ ] Data persists after restart

TRANSACTIONS:
[ ] Send block decrements balance correctly
[ ] Receive block increments balance correctly
[ ] Previous block hash links correctly
[ ] Double-spend prevention works
[ ] Balance underflow prevented

CLI WALLET:
[ ] Creates valid Ed25519 keypairs
[ ] Mnemonic backup/restore works
[ ] Send command creates valid blocks
[ ] Receive command claims pending
[ ] Balance display accurate

QUALITY:
[ ] All unit tests pass
[ ] All integration tests pass
[ ] Code coverage >90%
[ ] No clippy warnings
[ ] Code formatted with rustfmt
[ ] Documentation complete for public APIs
[ ] Benchmarks meet performance targets
Implement node discovery, block propagation, and peer communication using libp2p.
Create a P2P networking layer for KnexCoin using libp2p in Rust.
Requirements:
1. NetworkService struct with:
- Kademlia DHT for peer discovery
- GossipSub for block/vote propagation
- Request-Response for direct queries
- Identify protocol for peer info exchange
2. Message types (use protobuf or serde):
- BlockAnnounce { block: Block }
- VoteMessage { block_hash, voter, signature }
- BlockRequest { hash } / BlockResponse { block }
- AccountInfoRequest / AccountInfoResponse
- PeerListRequest / PeerListResponse
3. Peer management:
- Bootstrap from seed nodes
- Maintain 8-50 active connections
- Peer scoring based on behavior (good blocks, response time)
- Ban misbehaving peers (invalid blocks, spam)
4. NAT traversal using AutoNAT and relay protocols
5. Encryption using Noise protocol (XX handshake pattern)
Use libp2p 0.53+. Include connection limits, rate limiting, and backpressure handling.
Implement a GossipSub-based block propagation system for KnexCoin.
Requirements:
1. Topics:
   - /knex/blocks/1.0.0 - New block announcements
   - /knex/votes/1.0.0 - Consensus votes
   - /knex/confirmations/1.0.0 - Block confirmations
2. Message validation:
   - Reject messages > 1MB
   - Verify block signatures before forwarding
   - Deduplicate by message ID (hash)
   - Rate limit per peer (max 100 msg/sec)
3. Propagation optimization:
   - Priority queue for votes (consensus-critical)
   - Batch small blocks for efficiency
   - Lazy push for large blocks (announce hash first)
4. Metrics:
   - Messages sent/received per topic
   - Propagation latency percentiles
   - Peer message rates
Include flood prevention, eclipse attack resistance, and peer diversity requirements (min 3 different /24 subnets).
# Add libp2p dependencies
cat >> node/Cargo.toml << 'EOF'
libp2p = { version = "0.53", features = [
"tokio", "tcp", "noise", "yamux", "gossipsub",
"kad", "identify", "autonat", "relay", "dcutr"
]}
prost = "0.12"
prost-build = "0.12"
tracing = "0.1"
tracing-subscriber = "0.3"
[dev-dependencies]
tokio-test = "0.4"
mockall = "0.11"
EOF
# Create protobuf definitions
mkdir -p node/proto
cat > node/proto/messages.proto << 'EOF'
syntax = "proto3";
package knex;
message BlockAnnounce {
bytes block_data = 1;
}
message VoteMessage {
bytes block_hash = 1;
bytes voter_pubkey = 2;
bytes signature = 3;
uint64 timestamp = 4;
}
EOF
# Install network testing tools
cargo install libp2p-lookup
sudo apt install iperf3 tcpdump # Linux
brew install iperf3 tcpdump # macOS
# Network service initialization
cargo test test_network_service_creation
cargo test test_network_service_start_stop
cargo test test_peer_id_generation
cargo test test_keypair_persistence

# Transport layer tests
cargo test test_tcp_transport_connect
cargo test test_noise_handshake
cargo test test_yamux_multiplexing
cargo test test_connection_limits

# Identify protocol tests
cargo test test_identify_exchange
cargo test test_protocol_negotiation
cargo test test_agent_version_reporting
# Kademlia DHT tests
cargo test test_kademlia_bootstrap
cargo test test_kademlia_peer_discovery
cargo test test_kademlia_routing_table
cargo test test_kademlia_closest_peers
cargo test test_kademlia_provider_records

# Bootstrap tests
cargo test test_bootstrap_from_seed_nodes
cargo test test_bootstrap_with_invalid_seed
cargo test test_bootstrap_timeout_handling
cargo test test_bootstrap_multiple_seeds

# Peer management tests
cargo test test_peer_add_remove
cargo test test_peer_connection_state
cargo test test_maintain_peer_count
cargo test test_peer_eviction_policy
cargo test test_max_connections_enforced
# GossipSub subscription tests
cargo test test_gossipsub_subscribe_blocks
cargo test test_gossipsub_subscribe_votes
cargo test test_gossipsub_unsubscribe
cargo test test_gossipsub_topic_validation

# Message propagation tests
cargo test test_gossipsub_publish_block
cargo test test_gossipsub_publish_vote
cargo test test_gossipsub_receive_message
cargo test test_gossipsub_message_deduplication
cargo test test_gossipsub_message_validation

# GossipSub scoring tests
cargo test test_peer_scoring_valid_messages
cargo test test_peer_scoring_invalid_messages
cargo test test_peer_scoring_spam_detection
cargo test test_peer_ban_threshold
# Message serialization tests
cargo test test_block_announce_serialize
cargo test test_vote_message_serialize
cargo test test_protobuf_roundtrip
cargo test test_invalid_protobuf_rejected

# Message validation tests
cargo test test_validate_block_signature_before_gossip
cargo test test_reject_oversized_message
cargo test test_reject_malformed_message
cargo test test_message_rate_limiting

# Request-response tests
cargo test test_block_request_response
cargo test test_account_info_request_response
cargo test test_peer_list_request_response
cargo test test_request_timeout_handling
# Start local test network (3 nodes)
./target/release/knexd --port 26656 --data-dir ./node1 --seed &
PID1=$!
sleep 2
./target/release/knexd --port 26657 --data-dir ./node2 --bootstrap /ip4/127.0.0.1/tcp/26656 &
PID2=$!
./target/release/knexd --port 26658 --data-dir ./node3 --bootstrap /ip4/127.0.0.1/tcp/26656 &
PID3=$!
sleep 5

# Verify peer discovery
curl -s localhost:26656/api/peers | jq '.peers | length'  # Should be 2
curl -s localhost:26657/api/peers | jq '.peers | length'  # Should be 2
curl -s localhost:26658/api/peers | jq '.peers | length'  # Should be 2

# Test block propagation
curl -X POST localhost:26656/api/test/create_block
sleep 1
curl -s localhost:26657/api/blocks/latest | jq  # Should have new block
curl -s localhost:26658/api/blocks/latest | jq  # Should have new block

# Cleanup
kill $PID1 $PID2 $PID3
# Automated integration tests
cargo test --test network_integration test_3_node_block_propagation
cargo test --test network_integration test_5_node_block_propagation
cargo test --test network_integration test_10_node_block_propagation

# Vote propagation tests
cargo test --test network_integration test_vote_reaches_all_nodes
cargo test --test network_integration test_vote_aggregation

# Partition tests
cargo test --test network_integration test_network_partition_recovery
cargo test --test network_integration test_node_rejoin_after_disconnect

# Measure propagation latency
cargo test --test network_integration test_measure_propagation_latency -- --nocapture
# Target: <500ms for 10-node network
# Eclipse attack resistance
cargo test test_eclipse_attack_detection
cargo test test_peer_diversity_enforcement
cargo test test_min_unique_subnets_requirement

# Sybil attack resistance
cargo test test_connection_limit_per_ip
cargo test test_rate_limit_new_connections
cargo test test_peer_reputation_scoring

# DDoS protection
cargo test test_message_rate_limiting
cargo test test_connection_rate_limiting
cargo test test_backpressure_handling

# Flood prevention
cargo test test_gossipsub_flood_protection
cargo test test_duplicate_message_filtering
cargo test test_invalid_message_ban
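The peer-diversity requirement (min 3 different /24 subnets) reduces, for IPv4, to counting distinct 3-octet prefixes. A minimal sketch, with the function name assumed:

```rust
use std::collections::HashSet;
use std::net::Ipv4Addr;

/// Illustrative eclipse-resistance check: connected peers must span at
/// least `min_subnets` distinct /24 subnets before the node treats its
/// view of the network as trustworthy.
fn has_subnet_diversity(peers: &[Ipv4Addr], min_subnets: usize) -> bool {
    let subnets: HashSet<[u8; 3]> = peers
        .iter()
        .map(|ip| {
            let o = ip.octets();
            [o[0], o[1], o[2]] // the /24 prefix
        })
        .collect();
    subnets.len() >= min_subnets
}
```

A fuller version would also handle IPv6 (e.g. by /48) and feed the result into the peer-eviction policy rather than just returning a boolean.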
# AutoNAT tests
cargo test test_autonat_detection
cargo test test_nat_type_discovery

# Relay tests (requires external setup)
cargo test test_relay_reservation
cargo test test_relay_connection
cargo test test_dcutr_hole_punch

# Test NAT traversal manually:
# 1. Start relay node with public IP
./knexd --relay-mode --external-ip YOUR_PUBLIC_IP --port 26656
# 2. Start node behind NAT
./knexd --bootstrap /ip4/YOUR_PUBLIC_IP/tcp/26656 --enable-relay
# 3. Verify connectivity
curl localhost:26657/api/nat_status
# Should show "relayed" or "public"
# Run network benchmarks
cargo bench --bench network_benchmarks

# Individual benchmarks
cargo bench bench_message_serialization
cargo bench bench_message_validation
cargo bench bench_peer_discovery_time
cargo bench bench_propagation_latency_10_nodes
cargo bench bench_propagation_latency_50_nodes
cargo bench bench_max_message_throughput

# Expected targets:
# - Message serialization: < 10μs
# - Message validation: < 100μs
# - Peer discovery: < 5s for 10 peers
# - Propagation (10 nodes): < 200ms
# - Propagation (50 nodes): < 500ms
# - Throughput: > 10,000 msgs/sec
# Network stress tests (run with --release)
cargo test --release test_stress_100_concurrent_connections -- --ignored
cargo test --release test_stress_1000_messages_per_second -- --ignored
cargo test --release test_stress_rapid_connect_disconnect -- --ignored
cargo test --release test_stress_large_messages -- --ignored

# Long-running stability test
cargo test --release test_24_hour_stability -- --ignored

# Memory leak detection
cargo test --release test_memory_usage_over_time -- --ignored
# Use valgrind for detailed memory analysis
valgrind --leak-check=full ./target/release/knexd --test-mode
PHASE 2 COMPLETION CHECKLIST:

LIBP2P CORE:
[ ] Network service starts and stops cleanly
[ ] Peer ID generated and persisted
[ ] TCP transport with Noise encryption works
[ ] Yamux multiplexing works
[ ] Connection limits enforced

PEER DISCOVERY:
[ ] Kademlia DHT bootstraps from seed nodes
[ ] Peers discovered within 5 seconds
[ ] Routing table maintained correctly
[ ] Peer count stays within limits (8-50)

GOSSIPSUB:
[ ] Block topic subscription works
[ ] Vote topic subscription works
[ ] Messages propagate to all subscribers
[ ] Message deduplication works
[ ] Peer scoring rejects bad actors

MESSAGE HANDLING:
[ ] Protobuf serialization works
[ ] Invalid messages rejected
[ ] Rate limiting enforced
[ ] Request-response protocol works

SECURITY:
[ ] Eclipse attack detection works
[ ] Sybil attack mitigation works
[ ] DDoS protection effective
[ ] Peer reputation system functional

NAT TRAVERSAL:
[ ] AutoNAT detects NAT type
[ ] Relay connections work
[ ] Hole punching works (when possible)

PERFORMANCE:
[ ] Propagation < 500ms for 10 nodes
[ ] Throughput > 10,000 msgs/sec
[ ] Memory usage stable over time
[ ] No connection leaks

QUALITY:
[ ] All unit tests pass
[ ] All integration tests pass
[ ] Stress tests pass
[ ] Code coverage > 85%
[ ] No clippy warnings
Implement Open Representative Voting. For phased delivery, early testnets may use simplified weighting for stability testing. However, the canonical v5.1 protocol weight rule is PoB-first: VotingWeight = BandwidthScore × min(1, Stake / MIN_STAKE). Stake is eligibility + slashing collateral only; it is not a power multiplier beyond the bond threshold.
Implement Open Representative Voting (ORV) consensus for KnexCoin in Rust.
Requirements:
1. Representative system:
- Anyone with stake can become a representative
- Users delegate voting weight to representatives
- Weight = BandwidthScore × min(1, Stake/MIN_STAKE) — bond threshold, not plutocratic
- Track online/offline status of representatives
2. Election struct:
- block_hash being voted on
- votes: HashMap<PublicKey, Vote>
- status: Active | Confirmed | Rejected
- created_at, confirmed_at timestamps
3. Voting logic:
- Representatives vote on new blocks
- Quorum = 67% of online voting weight
- Confirmation when quorum votes for same block
- Reject conflicting blocks (forks)
4. Fork resolution:
- Higher cumulative weight wins
- Rollback losing fork blocks
- Reprocess transactions on winning fork
5. Election persistence:
- Store active elections in memory
- Persist confirmed elections to disk
- Garbage collect old elections (24h)
Include vote deduplication, late vote handling, and network partition recovery.
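The canonical weight rule is simple enough to state directly. A sketch (the MIN_STAKE value in the test below is illustrative):

```rust
/// Canonical v5.1 weight rule: stake is an eligibility bond, not a
/// power multiplier. Below the bond threshold weight scales down
/// linearly; above it, only bandwidth matters.
/// VotingWeight = BandwidthScore × min(1, Stake / MIN_STAKE)
fn voting_weight(bandwidth_score: f64, stake: u64, min_stake: u64) -> f64 {
    let bond_factor = (stake as f64 / min_stake as f64).min(1.0);
    bandwidth_score * bond_factor
}
```

This is the property the "not plutocratic" note encodes: doubling stake beyond MIN_STAKE leaves weight unchanged, while a validator below the bond sees weight reduced proportionally.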
# Add consensus dependencies
cat >> node/Cargo.toml << 'EOF'
parking_lot = "0.12"
dashmap = "5"
priority-queue = "1"
[dev-dependencies]
tokio = { version = "1", features = ["test-util"] }
test-case = "3"
EOF
# Create test fixtures directory
mkdir -p node/tests/fixtures/consensus
# Representative registration tests
cargo test test_register_representative
cargo test test_register_below_min_stake_rejected
cargo test test_unregister_representative
cargo test test_representative_stake_requirements

# Delegation tests
cargo test test_delegate_to_representative
cargo test test_change_delegation
cargo test test_delegation_weight_calculation
cargo test test_delegation_to_offline_rep_warning

# Weight calculation tests
cargo test test_bond_threshold_weight_formula
cargo test test_weight_calculation_accuracy
cargo test test_weight_updates_on_stake_change
cargo test test_weight_cap_enforcement

# Online/offline status tests
cargo test test_rep_online_detection
cargo test test_rep_offline_after_timeout
cargo test test_online_weight_calculation
cargo test test_rep_comes_back_online
# Election creation tests
cargo test test_create_election_for_block
cargo test test_election_struct_initialization
cargo test test_election_unique_per_block
cargo test test_no_duplicate_elections

# Vote handling tests
cargo test test_add_vote_to_election
cargo test test_vote_signature_verification
cargo test test_reject_duplicate_vote
cargo test test_reject_vote_wrong_block
cargo test test_vote_weight_accumulation

# Election state transitions
cargo test test_election_active_state
cargo test test_election_confirms_at_quorum
cargo test test_election_rejection
cargo test test_election_timeout_handling
# Quorum threshold tests
cargo test test_quorum_threshold_67_percent
cargo test test_quorum_not_reached_66_percent
cargo test test_quorum_exact_boundary
cargo test test_quorum_with_offline_reps

# Quorum calculation tests
cargo test test_quorum_weight_calculation
cargo test test_quorum_online_weight_only
cargo test test_quorum_updates_on_rep_change

# Confirmation tests
cargo test test_block_confirms_at_quorum
cargo test test_confirmation_timestamp_recorded
cargo test test_confirmation_irreversible
cargo test test_confirmation_broadcast
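The exact-boundary test above is easiest to satisfy with integer arithmetic, which avoids float rounding right at the 67% threshold. A sketch, with names assumed:

```rust
/// Illustrative quorum check: a block confirms when the weight voting
/// for it reaches 67% of the currently online voting weight. Comparing
/// `votes * 100 >= online * 67` in integers keeps the boundary exact,
/// where `0.67 * online` in floats would not.
fn quorum_reached(votes_for_block: u128, online_weight: u128) -> bool {
    online_weight > 0 && votes_for_block * 100 >= online_weight * 67
}
```

Note that the denominator is *online* weight, not total delegated weight, which is why representative liveness tracking feeds directly into quorum math.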
# Fork detection tests
cargo test test_detect_conflicting_blocks
cargo test test_detect_double_spend_fork
cargo test test_fork_with_same_previous

# Fork resolution tests
cargo test test_higher_weight_fork_wins
cargo test test_fork_resolution_deterministic
cargo test test_losing_fork_rollback
cargo test test_transactions_reprocessed_after_rollback

# Edge cases
cargo test test_equal_weight_fork_tiebreaker
cargo test test_late_vote_fork_resolution
cargo test test_deep_fork_resolution
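A minimal fork-choice sketch: higher cumulative weight wins. The tiebreaker used here (lower block hash) is an assumed rule, since the spec only requires that the tiebreak be deterministic so all honest nodes converge on the same branch:

```rust
/// Illustrative fork choice between two conflicting blocks, each given
/// as (block_hash, cumulative_vote_weight). Higher weight wins; equal
/// weights fall back to a deterministic hash comparison (assumption:
/// lexicographically lower hash wins).
fn fork_winner(a: ([u8; 32], u128), b: ([u8; 32], u128)) -> [u8; 32] {
    if a.1 != b.1 {
        if a.1 > b.1 { a.0 } else { b.0 }
    } else if a.0 <= b.0 {
        a.0
    } else {
        b.0
    }
}
```

The losing branch is then rolled back and its transactions reprocessed against the winning branch, per the requirements above.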
# Full consensus flow integration tests
cargo test --test consensus_integration test_full_block_confirmation_flow
cargo test --test consensus_integration test_multi_block_consensus_sequence

# Multi-node consensus tests
cargo test --test consensus_integration test_5_node_consensus
cargo test --test consensus_integration test_10_node_consensus
cargo test --test consensus_integration test_consensus_with_slow_node

# Timing tests
cargo test --test consensus_integration test_confirmation_under_1_second
cargo test --test consensus_integration test_vote_propagation_timing

# Manual integration test steps:
# 1. Start 5-node network with different stake amounts
./knexd --node-id 1 --stake 100000 --port 26656 &
./knexd --node-id 2 --stake 50000 --port 26657 --bootstrap ... &
./knexd --node-id 3 --stake 30000 --port 26658 --bootstrap ... &
./knexd --node-id 4 --stake 15000 --port 26659 --bootstrap ... &
./knexd --node-id 5 --stake 10000 --port 26660 --bootstrap ... &
# 2. Create transaction on node 1
curl -X POST localhost:26656/api/test/create_transaction
# 3. Monitor confirmation time
curl localhost:26656/api/elections/latest | jq '.confirmation_time_ms'
# Should be < 1000ms
# 4. Verify all nodes agree
for port in 26656 26657 26658 26659 26660; do
  curl -s localhost:$port/api/blocks/latest | jq '.hash'
done
# All should show same hash
# Byzantine fault tolerance tests
cargo test --test bft test_consensus_with_33_percent_byzantine -- --ignored
cargo test --test bft test_consensus_fails_with_34_percent_byzantine -- --ignored
cargo test --test bft test_byzantine_double_voting_detected -- --ignored
cargo test --test bft test_byzantine_equivocation_slashed -- --ignored

# Malicious behavior tests
cargo test --test bft test_invalid_vote_signature_rejected
cargo test --test bft test_vote_for_invalid_block_rejected
cargo test --test bft test_late_vote_attack_prevention
cargo test --test bft test_nothing_at_stake_mitigation

# Network partition tests
cargo test --test partition test_partition_50_50_no_consensus -- --ignored
cargo test --test partition test_partition_67_33_consensus_continues -- --ignored
cargo test --test partition test_partition_recovery_convergence -- --ignored
cargo test --test partition test_partition_different_blocks_resolved -- --ignored
# Election persistence tests
cargo test test_active_elections_in_memory
cargo test test_confirmed_elections_to_disk
cargo test test_election_recovery_after_restart
cargo test test_election_garbage_collection_24h

# Election cleanup tests
cargo test test_old_elections_cleaned
cargo test test_memory_bounded_elections
cargo test test_disk_bounded_elections

# Durability tests
cargo test test_confirmed_election_survives_crash
cargo test test_partial_write_recovery
# Consensus benchmarks
cargo bench --bench consensus_benchmarks

# Individual benchmarks
cargo bench bench_vote_verification
cargo bench bench_quorum_calculation
cargo bench bench_election_creation
cargo bench bench_fork_resolution
cargo bench bench_weight_calculation

# Throughput benchmarks
cargo bench bench_consensus_tps_100_validators
cargo bench bench_consensus_tps_500_validators
cargo bench bench_concurrent_elections

# Expected targets:
# - Vote verification: < 200μs
# - Quorum calculation: < 10μs
# - Election creation: < 50μs
# - Confirmation time: < 500ms (5 nodes)
# - Confirmation time: < 1000ms (100 nodes)
# - Throughput: > 1000 TPS
PHASE 3 COMPLETION CHECKLIST:

REPRESENTATIVE SYSTEM:
[ ] Representatives can register with stake
[ ] Minimum stake requirement enforced
[ ] Delegation to representatives works
[ ] Bond threshold weight formula implemented (BS × min(1, Stake/MIN_STAKE))
[ ] Online/offline status tracked
[ ] Weight updates correctly on changes

ELECTION SYSTEM:
[ ] Elections created for new blocks
[ ] Votes added with signature verification
[ ] Duplicate votes rejected
[ ] Vote weights accumulated correctly
[ ] Election states transition properly

QUORUM DETECTION:
[ ] 67% quorum threshold works
[ ] Online weight calculated correctly
[ ] Confirmation at quorum is instant
[ ] Confirmation is irreversible
[ ] Confirmation broadcast to network

FORK RESOLUTION:
[ ] Conflicting blocks detected
[ ] Higher weight fork wins
[ ] Losing fork rolled back
[ ] Transactions reprocessed correctly
[ ] Deterministic tiebreaker works

BYZANTINE TOLERANCE:
[ ] Consensus works with 33% Byzantine
[ ] Consensus fails with 34% Byzantine
[ ] Double voting detected and slashed
[ ] Invalid votes rejected

PARTITION TOLERANCE:
[ ] 50/50 partition = no consensus
[ ] 67/33 partition = consensus continues
[ ] Partition recovery converges

PERSISTENCE:
[ ] Active elections in memory
[ ] Confirmed elections to disk
[ ] Recovery after restart
[ ] Garbage collection works

PERFORMANCE:
[ ] Confirmation < 1 second (10 nodes)
[ ] Throughput > 1000 TPS
[ ] Vote verification < 200μs
[ ] Memory usage bounded

QUALITY:
[ ] All unit tests pass
[ ] All integration tests pass
[ ] BFT tests pass
[ ] Code coverage > 85%
Implement bandwidth measurement, challenge-response system, VDF, and weight calculations.
Implement a Proof-of-Bandwidth challenge-response system for KnexCoin.
Requirements:
1. BandwidthChallenge struct:
- challenge_id: [u8; 32] (random)
- challenger: PublicKey
- target: PublicKey
- data_size: u64 (bytes to transfer, 1-100MB)
- created_at: u64 (timestamp)
- vdf_difficulty: u64
2. Challenge flow:
a) Challenger generates random data, computes hash
b) Challenger starts VDF computation (takes ~10 seconds)
c) Target receives challenge, downloads data
d) Target sends back: data_hash, download_time, VDF_output
e) Challenger verifies: hash matches, time reasonable, VDF valid
3. Multi-path verification:
- Require 5+ challengers from different continents
- Use VRF to randomly select challengers (no gaming)
- Aggregate results with outlier detection
4. Spoofing prevention:
- VDF prevents pre-computation
- Multiple challengers prevent collusion
- Latency triangulation detects proxies
- Statistical anomaly detection over time
5. BandwidthProof struct (on-chain):
- Contains merkle root of all challenge responses
- Signatures from all challengers
- Computed bandwidth_score
Include rate limiting (max 1 challenge per minute per peer).
Implement a Wesolowski VDF (Verifiable Delay Function) for KnexCoin's Proof-of-Bandwidth.
Requirements:
1. Use an RSA group with a 2048-bit modulus (or class groups)
2. Eval(x, t) -> (y, proof) where:
- y = x^(2^t) mod N
- t = difficulty parameter (~10 seconds on average hardware)
- proof = Wesolowski proof (single group element)
3. Verify(x, t, y, proof) -> bool
- Must be fast (<10ms) even for large t
- Uses Fiat-Shamir for the challenge
4. Security requirements:
- Sequential computation (no parallelization speedup)
- Fast verification
- Small proof size (<256 bytes)
5. Integration:
- VDF input = hash(challenge_id || target_pubkey || timestamp)
- VDF output included in bandwidth proof
- Verifiers check VDF before accepting proof
Use existing VDF libraries if available (vdf crate), or implement from scratch with clear documentation.
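The eval/verify algebra can be demonstrated with toy parameters. This sketch uses a small public modulus, so it has none of the VDF's sequentiality guarantees (those need a hidden-order group and a Fiat-Shamir-derived prime); it only shows why verification is cheap: y = x^(2^t) = (x^q)^l · x^r, with q = ⌊2^t / l⌋ and r = 2^t mod l.

```rust
/// Toy Wesolowski-style VDF over a small modulus, for illustrating the
/// algebra only. The parameters here are assumptions for demonstration;
/// real security requires a 2048-bit RSA modulus or class groups.
const N: u128 = 1_000_000_007; // toy modulus (NOT a hidden-order group)

/// Square-and-multiply modular exponentiation.
fn modpow(mut base: u128, mut exp: u128, m: u128) -> u128 {
    let mut acc = 1u128;
    base %= m;
    while exp > 0 {
        if exp & 1 == 1 { acc = acc * base % m; }
        base = base * base % m;
        exp >>= 1;
    }
    acc
}

/// Eval: y = x^(2^t) mod N via t sequential squarings (the slow part).
fn eval(x: u128, t: u32) -> u128 {
    let mut y = x % N;
    for _ in 0..t { y = y * y % N; }
    y
}

/// Prove against a challenge prime l: pi = x^(floor(2^t / l)) mod N.
/// t is kept small here so 2^t fits in a u128.
fn prove(x: u128, t: u32, l: u128) -> u128 {
    let q = (1u128 << t) / l;
    modpow(x, q, N)
}

/// Verify: pi^l * x^(2^t mod l) == y (mod N). No t squarings needed,
/// which is why verification stays fast even for huge t.
fn verify(x: u128, t: u32, y: u128, pi: u128, l: u128) -> bool {
    let r = (1u128 << t) % l;
    modpow(pi, l, N) * modpow(x, r, N) % N == y
}
```

In the real protocol l is a prime derived via Fiat-Shamir from (x, y), so the prover cannot pick it; here it is passed in explicitly just to keep the sketch small.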
# Add PoB dependencies
cat >> node/Cargo.toml << 'EOF'
num-bigint = "0.4"
num-traits = "0.2"
sha2 = "0.10"
vdf = "0.1"
geoip2 = "0.1"
statistical = "1"
[dev-dependencies]
tokio-test = "0.4"
mockall = "0.11"
assert_approx_eq = "1"
EOF

# Install bandwidth testing tools
sudo apt install iperf3 netperf  # Linux
brew install iperf3              # macOS

# Download GeoIP database
mkdir -p data
wget "https://download.maxmind.com/app/geoip_download?edition_id=GeoLite2-City" -O data/GeoLite2-City.mmdb
# VDF core tests
cargo test test_vdf_eval_produces_output
cargo test test_vdf_verify_valid_proof
cargo test test_vdf_reject_invalid_proof
cargo test test_vdf_deterministic_output
cargo test test_vdf_different_inputs_different_outputs

# VDF timing tests
cargo test test_vdf_10_second_delay
cargo test test_vdf_cannot_parallelize
cargo test test_vdf_verification_fast  # < 10ms

# Wesolowski proof tests
cargo test test_wesolowski_proof_generation
cargo test test_wesolowski_proof_verification
cargo test test_wesolowski_proof_size  # < 256 bytes

# VDF security tests
cargo test test_vdf_no_shortcut_attack
cargo test test_vdf_commitment_binding

# Benchmark VDF
cargo bench bench_vdf_eval_10_seconds
cargo bench bench_vdf_verify
# Challenge creation tests
cargo test test_create_bandwidth_challenge
cargo test test_challenge_random_data_generation
cargo test test_challenge_data_hash_correct
cargo test test_challenge_vdf_difficulty_set

# Challenge flow tests
cargo test test_challenge_initiated_correctly
cargo test test_challenge_data_received
cargo test test_challenge_response_generated
cargo test test_challenge_timing_recorded
cargo test test_challenge_complete_flow

# Response validation tests
cargo test test_validate_response_hash
cargo test test_validate_response_timing
cargo test test_validate_response_vdf
cargo test test_reject_invalid_response_hash
cargo test test_reject_late_response
cargo test test_reject_invalid_vdf_proof

# Rate limiting tests
cargo test test_challenge_rate_limit_per_peer
cargo test test_max_one_challenge_per_minute
cargo test test_challenge_cooldown_enforced
# VRF challenger selection tests
cargo test test_vrf_challenger_selection
cargo test test_vrf_selection_unpredictable
cargo test test_vrf_selection_verifiable
cargo test test_vrf_cannot_choose_colluders

# Geographic diversity tests
cargo test test_challengers_from_5_continents
cargo test test_reject_insufficient_geographic_diversity
cargo test test_geoip_lookup_accuracy
cargo test test_continent_classification

# Multi-challenger aggregation tests
cargo test test_aggregate_5_challenger_results
cargo test test_outlier_detection
cargo test test_median_bandwidth_calculation
cargo test test_reject_if_too_few_responses

# Merkle proof tests
cargo test test_merkle_root_of_challenges
cargo test test_merkle_proof_verification
cargo test test_no_selective_response_attack
# Bandwidth calculation tests
cargo test test_bandwidth_mbps_calculation
cargo test test_bandwidth_from_size_and_time
cargo test test_bandwidth_cap_at_10_gbps
cargo test test_bandwidth_minimum_threshold

# Latency measurement tests
cargo test test_latency_measurement_accuracy
cargo test test_latency_average_calculation
cargo test test_latency_outlier_removal

# Throughput window tests
cargo test test_sustained_bandwidth_measurement
cargo test test_bandwidth_over_time_window
cargo test test_burst_vs_sustained_detection

# Bandwidth score calculation tests
cargo test test_bandwidth_score_formula
cargo test test_score_weight_throughput_40_percent
cargo test test_score_weight_latency_25_percent
cargo test test_score_weight_uptime_20_percent
cargo test test_score_weight_reputation_15_percent
cargo test test_score_range_0_to_1000
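The score-weighting tests above imply a fixed linear combination. A sketch, with the assumption that each component has already been normalized to [0, 1] upstream (e.g. throughput against the 10 Gbps cap, latency inverted so lower is better):

```rust
/// Illustrative bandwidth score per the component weights tested above:
/// throughput 40%, latency 25%, uptime 20%, reputation 15%.
/// Inputs are assumed normalized to [0, 1]; output lands in [0, 1000].
fn bandwidth_score(throughput: f64, latency: f64, uptime: f64, reputation: f64) -> f64 {
    let clamp = |v: f64| v.clamp(0.0, 1.0); // defend against out-of-range inputs
    1000.0
        * (0.40 * clamp(throughput)
            + 0.25 * clamp(latency)
            + 0.20 * clamp(uptime)
            + 0.15 * clamp(reputation))
}
```

Because the weights sum to 1, the score range [0, 1000] is guaranteed by construction, which is exactly what test_score_range_0_to_1000 checks.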
# Attestation creation tests
cargo test test_create_attestation
cargo test test_attestation_signature
cargo test test_attestation_confidence_score
cargo test test_attestation_observed_bandwidth

# Attestation validation tests
cargo test test_validate_attestation_signature
cargo test test_reject_self_attestation
cargo test test_reject_invalid_attestation
cargo test test_attestation_within_tolerance

# Reputation-weighted attestations
cargo test test_high_rep_attestation_weight
cargo test test_low_rep_attestation_weight
cargo test test_aggregate_weighted_attestations
cargo test test_reject_colluding_attesters
# Spoofing detection tests
cargo test test_detect_pre_computation_attack
cargo test test_detect_proxy_usage
cargo test test_detect_colocation_spoofing
cargo test test_detect_timing_manipulation

# Statistical anomaly detection tests
cargo test test_anomaly_score_calculation
cargo test test_detect_sudden_bandwidth_increase
cargo test test_detect_unrealistic_latency
cargo test test_historical_consistency_check
cargo test test_ml_anomaly_detection

# Collusion detection tests
cargo test test_detect_coordinated_attestations
cargo test test_detect_same_measurement_from_different_peers
cargo test test_detect_timing_correlation

# Latency triangulation tests
cargo test test_latency_triangulation
cargo test test_detect_geographic_impossibility
cargo test test_speed_of_light_check
# Full PoB flow integration tests
cargo test --test pob_integration test_full_bandwidth_proof_flow
cargo test --test pob_integration test_proof_verification_by_network
# Multi-node PoB tests
cargo test --test pob_integration test_10_node_bandwidth_verification -- --ignored
cargo test --test pob_integration test_cross_continent_verification -- --ignored
# Manual integration test steps:
# 1. Start nodes on different machines (ideally different continents)
# Machine 1 (US): ./knexd --node-id us1 --port 26656 --region us-east
# Machine 2 (EU): ./knexd --node-id eu1 --port 26656 --region eu-west
# Machine 3 (ASIA): ./knexd --node-id asia1 --port 26656 --region asia-sg
# 2. Trigger bandwidth challenge
curl -X POST localhost:26656/api/bandwidth/challenge \
--data '{"target": "eu1_peer_id"}'
# 3. Wait for proof generation (~15 seconds for VDF)
sleep 20
# 4. Check bandwidth score
curl localhost:26656/api/bandwidth/score/eu1_peer_id | jq
# 5. Verify proof on-chain
curl localhost:26656/api/bandwidth/proofs/latest | jq
# Consensus weight integration tests
cargo test --test consensus_pob test_voting_weight_includes_bandwidth
cargo test --test consensus_pob test_bandwidth_score_affects_weight
cargo test --test consensus_pob test_zero_bandwidth_zero_weight

# Weight formula tests
cargo test --test consensus_pob test_weight_bond_threshold_times_bandwidth
cargo test --test consensus_pob test_high_bandwidth_compensates_low_stake
cargo test --test consensus_pob test_balanced_weight_distribution

# Update cycle tests
cargo test --test consensus_pob test_bandwidth_score_updates_hourly
cargo test --test consensus_pob test_stale_proof_penalty
cargo test --test consensus_pob test_score_decay_without_proofs
# Attack simulation tests (run with --ignored)
cargo test --release test_spoof_bandwidth_attack -- --ignored
cargo test --release test_proxy_relay_attack -- --ignored
cargo test --release test_colocation_attack -- --ignored
cargo test --release test_vdf_precomputation_attack -- --ignored
cargo test --release test_challenger_bribery_attack -- --ignored
cargo test --release test_selective_response_attack -- --ignored
# All attacks should be detected and rejected
# Expected: 100% detection rate for known attack vectors

# Slashing tests
cargo test test_spoofer_slashed_100_percent
cargo test test_collusion_slashed_75_percent
cargo test test_attestation_fraud_slashed_50_percent

# Fuzzing tests
cargo fuzz run fuzz_bandwidth_proof_parsing
cargo fuzz run fuzz_vdf_verification
# PoB benchmarks
cargo bench --bench pob_benchmarks

# Individual benchmarks
cargo bench bench_vdf_eval_10_second
cargo bench bench_vdf_verify
cargo bench bench_bandwidth_measurement
cargo bench bench_merkle_proof_generation
cargo bench bench_attestation_verification
cargo bench bench_score_calculation
cargo bench bench_anomaly_detection

# Expected targets:
# - VDF eval: ~10 seconds (tunable)
# - VDF verify: < 10ms
# - Bandwidth measurement: < 1 second
# - Merkle proof generation: < 1ms
# - Attestation verification: < 100μs
# - Score calculation: < 1ms
# - Full proof verification: < 100ms
PHASE 4 COMPLETION CHECKLIST:

VDF IMPLEMENTATION:
[ ] Wesolowski VDF eval works correctly
[ ] VDF verification is fast (< 10ms)
[ ] VDF proof size < 256 bytes
[ ] VDF prevents pre-computation
[ ] VDF timing is sequential (~10 seconds)

CHALLENGE-RESPONSE:
[ ] Challenges created with random data
[ ] VDF difficulty included in challenge
[ ] Responses validated correctly
[ ] Timing measured accurately
[ ] Rate limiting enforced (1/min)

MULTI-PATH VERIFICATION:
[ ] VRF selects challengers unpredictably
[ ] 5 challengers from different continents
[ ] Outlier detection works
[ ] Merkle proof prevents selective response
[ ] Geographic diversity enforced

BANDWIDTH MEASUREMENT:
[ ] Mbps calculation accurate
[ ] Latency measurement accurate
[ ] Sustained throughput measured
[ ] Bandwidth capped at 10 Gbps
[ ] Score formula implemented correctly

PEER ATTESTATIONS:
[ ] Attestations created and signed
[ ] Signature verification works
[ ] Reputation weighting applied
[ ] Self-attestation rejected
[ ] Collusion detected

SPOOFING DETECTION:
[ ] Pre-computation attack detected
[ ] Proxy usage detected
[ ] Colocation detected
[ ] Statistical anomalies detected
[ ] Latency triangulation works

CONSENSUS INTEGRATION:
[ ] Bandwidth score affects voting weight
[ ] Weight = BandwidthScore × min(1, Stake/MIN_STAKE)
[ ] Scores update hourly
[ ] Stale proofs penalized

SECURITY:
[ ] 100% slash for bandwidth spoofing
[ ] 75% slash for collusion
[ ] 50% slash for attestation fraud
[ ] All attack vectors mitigated

PERFORMANCE:
[ ] VDF verify < 10ms
[ ] Full proof verify < 100ms
[ ] Score calculation < 1ms
[ ] Measurement overhead minimal

QUALITY:
[ ] All unit tests pass
[ ] All integration tests pass
[ ] Attack simulation tests pass
[ ] Code coverage > 85%
Deploy public testnet with faucet, explorer, and documentation.
Create a block explorer for KnexCoin using Next.js 14 and React.
Pages needed:
1. Home - Network stats, recent blocks, TPS chart
2. /block/[hash] - Block details, transactions
3. /account/[address] - Balance, history, representative
4. /tx/[hash] - Transaction details, confirmation status
5. /validators - Validator list, bandwidth scores, uptime
6. /richlist - Top accounts by balance
Features:
- Real-time updates via WebSocket
- Search by block hash, account, or tx hash
- Mobile responsive design
- Dark theme (neon green/cyan accents on black)
- Transaction graph visualization
API integration:
- Connect to KnexCoin RPC endpoint
- Cache responses with SWR
- Rate limit API calls
Use: Next.js 14 (app router), Tailwind CSS, Recharts for graphs, Framer Motion for animations.
Create a testnet faucet service for KnexCoin.
Requirements:
1. API endpoint: POST /api/faucet
- Input: { address: "knex1..." }
- Output: { tx_hash, amount, message }
- Rate limit: 10,000 KNEX per address per 24 hours
2. Anti-abuse measures:
- CAPTCHA (hCaptcha or Cloudflare Turnstile)
- IP rate limiting (10 requests per IP per hour)
- Cooldown display for returning users
3. Backend:
- Node.js/Express or Rust/Actix
- Redis for rate limiting state
- Faucet wallet with auto-replenishment alerts
4. Frontend:
- Simple form with address input
- Real-time transaction status
- Link to explorer after success
5. Monitoring:
- Discord webhook for low balance alerts
- Daily usage statistics
- Abuse detection alerts
Deploy on: Cloudflare Workers (edge) or traditional VPS.
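The two faucet limits (one payout per address per 24 h, a per-IP hourly request cap) can be sketched in memory before moving the state to Redis as the requirements suggest. Names and the exact window mechanics here are illustrative assumptions:

```rust
use std::collections::HashMap;

/// In-memory sketch of the faucet's anti-abuse limits. A production
/// deployment would keep this state in Redis so it survives restarts
/// and is shared across edge workers.
struct FaucetLimiter {
    last_claim: HashMap<String, u64>,     // address -> unix seconds of last payout
    ip_hits: HashMap<String, (u64, u32)>, // ip -> (hour-window start, request count)
}

impl FaucetLimiter {
    fn new() -> Self {
        Self { last_claim: HashMap::new(), ip_hits: HashMap::new() }
    }

    /// Check both limits; on success, record the payout.
    fn allow(&mut self, address: &str, ip: &str, now: u64) -> Result<(), &'static str> {
        // Per-IP fixed window: 10 requests per hour.
        let win = self.ip_hits.entry(ip.to_string()).or_insert((now, 0));
        if now - win.0 >= 3600 {
            *win = (now, 0); // window expired, start a fresh one
        }
        win.1 += 1;
        if win.1 > 10 {
            return Err("ip rate limited");
        }
        // Per-address cooldown: one payout per 24 hours.
        if let Some(&t) = self.last_claim.get(address) {
            if now - t < 86_400 {
                return Err("address already claimed today");
            }
        }
        self.last_claim.insert(address.to_string(), now);
        Ok(())
    }
}
```

The returned error strings map naturally onto the cooldown display the frontend requirement calls for.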
# Provision cloud infrastructure (Terraform)
cd infrastructure/terraform
terraform init
terraform plan -var="environment=testnet"
terraform apply -var="environment=testnet"
# Expected resources:
# - 3x seed nodes (US-East, EU-West, Asia-SG)
# - 1x load balancer for RPC
# - 1x PostgreSQL for explorer
# - 1x Redis for faucet rate limiting
# - DNS records for testnet.knexcoin.com
# Configure Ansible inventory
cat > inventory/testnet.yml << 'EOF'
all:
  children:
    seed_nodes:
      hosts:
        seed1-us-east:
          ansible_host:
        seed2-eu-west:
          ansible_host:
        seed3-asia-sg:
          ansible_host:
EOF
# Deploy seed nodes
ansible-playbook -i inventory/testnet.yml playbooks/deploy-seed-nodes.yml
# Create validators.json with seed validator info
cat > validators.json << 'EOF'
{
  "validators": [
    {
      "name": "seed1-us-east",
      "pubkey": "",
      "stake": 1000000
    },
    {
      "name": "seed2-eu-west",
      "pubkey": "",
      "stake": 1000000
    },
    {
      "name": "seed3-asia-sg",
      "pubkey": "",
      "stake": 1000000
    }
  ]
}
EOF
# Generate genesis block
./knexd genesis create \
--chain-id knex-testnet-1 \
--genesis-time "2025-03-01T00:00:00Z" \
--total-supply 100000000 \
--network-reserve 90000000 \
--treasury 7000000 \
--team 3000000 \
--validators validators.json \
--output genesis.json
# Validate genesis
./knexd genesis validate genesis.json
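The allocation flags above should sum exactly to the fixed KNEX supply (90M reserve + 7M treasury + 3M team = 100M); a trivial cross-check of the kind genesis validation would perform:

```rust
/// Sanity check mirroring the genesis parameters above: the fixed
/// supply must be fully accounted for by the three allocations, with
/// nothing minted outside them. (Function name is illustrative.)
fn genesis_balanced(total: u64, reserve: u64, treasury: u64, team: u64) -> bool {
    reserve + treasury + team == total
}
```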
# Distribute genesis to all seed nodes
scp genesis.json seed1-us-east:/opt/knexd/config/
scp genesis.json seed2-eu-west:/opt/knexd/config/
scp genesis.json seed3-asia-sg:/opt/knexd/config/
# Start seed nodes
ssh seed1-us-east "sudo systemctl start knexd"
ssh seed2-eu-west "sudo systemctl start knexd"
ssh seed3-asia-sg "sudo systemctl start knexd"
# Wait for nodes to start
sleep 30
# Test 1: Verify all nodes are running
for host in seed1-us-east seed2-eu-west seed3-asia-sg; do
echo "Checking $host..."
curl -s https://$host.testnet.knexcoin.com/status | jq '.node_info.id'
done
# Test 2: Verify peer connections
for host in seed1-us-east seed2-eu-west seed3-asia-sg; do
echo "Peers on $host:"
curl -s https://$host.testnet.knexcoin.com/net_info | jq '.n_peers'
done
# Expected: Each node should have 2 peers
# Test 3: Verify genesis block matches
GENESIS_HASH=$(./knexd genesis hash genesis.json)
for host in seed1-us-east seed2-eu-west seed3-asia-sg; do
REMOTE_HASH=$(curl -s https://$host.testnet.knexcoin.com/block/0 | jq -r '.block.hash')
if [ "$GENESIS_HASH" != "$REMOTE_HASH" ]; then
echo "ERROR: Genesis mismatch on $host"
exit 1
fi
done
echo "All nodes have matching genesis"
# Test 4: Verify consensus is working
curl -X POST https://seed1-us-east.testnet.knexcoin.com/api/test/create_block
sleep 5
for host in seed1-us-east seed2-eu-west seed3-asia-sg; do
echo "Latest block on $host:"
curl -s https://$host.testnet.knexcoin.com/block/latest | jq '.block.height'
done
# Expected: All nodes at same height
# Deploy faucet
cd faucet
npm run build
npm run deploy # Cloudflare Workers
# Configure faucet environment
wrangler secret put FAUCET_PRIVATE_KEY
wrangler secret put HCAPTCHA_SECRET
# Test 1: Basic faucet request
curl -X POST https://faucet.testnet.knexcoin.com/api/faucet \
-H "Content-Type: application/json" \
-d '{"address": "knex1test123...", "captcha": "test-token"}'
# Expected: {"tx_hash": "...", "amount": 10000}
# Test 2: Rate limiting
for i in {1..12}; do
curl -s -X POST https://faucet.testnet.knexcoin.com/api/faucet \
-d '{"address": "knex1test'$i'...", "captcha": "test"}' | jq '.error'
done
# Expected: First 10 succeed, last 2 fail with rate limit error
# Test 3: Daily limit per address
curl -X POST https://faucet.testnet.knexcoin.com/api/faucet \
-d '{"address": "knex1same...", "captcha": "test"}'
curl -X POST https://faucet.testnet.knexcoin.com/api/faucet \
-d '{"address": "knex1same...", "captcha": "test"}'
# Expected: Second request fails - already claimed today
# Test 4: Verify transaction on chain
TX_HASH=$(curl -s -X POST ... | jq -r '.tx_hash')
curl -s https://rpc.testnet.knexcoin.com/tx/$TX_HASH | jq '.status'
# Expected: "confirmed"
# Deploy explorer
cd explorer
npm run build
npm run deploy # Vercel or self-hosted
# Test 1: Homepage loads
curl -s https://explorer.testnet.knexcoin.com | grep -q "KnexCoin"
echo "Homepage: OK"
# Test 2: Block page loads
curl -s https://explorer.testnet.knexcoin.com/block/0 | grep -q "Genesis"
echo "Block page: OK"
# Test 3: Account page loads
curl -s https://explorer.testnet.knexcoin.com/account/knex1faucet... | grep -q "Balance"
echo "Account page: OK"
# Test 4: Search functionality
curl -s "https://explorer.testnet.knexcoin.com/search?q=knex1" | grep -q "results"
echo "Search: OK"
# Test 5: WebSocket real-time updates
wscat -c wss://explorer.testnet.knexcoin.com/ws
# Send: {"subscribe": "blocks"}
# Expected: Receive new block notifications
# Test 6: API endpoints
curl -s https://explorer.testnet.knexcoin.com/api/stats | jq
# Expected: {"tps": ..., "total_accounts": ..., "total_txs": ...}
# RPC endpoint base URL
RPC="https://rpc.testnet.knexcoin.com"
# Test all RPC methods:
# Status endpoints
curl -s $RPC/status | jq '.node_info.version'
curl -s $RPC/health | jq '.status' # "healthy"
curl -s $RPC/net_info | jq '.n_peers'
# Block endpoints
curl -s $RPC/block/0 | jq '.block.hash'
curl -s $RPC/block/latest | jq '.block.height'
curl -s "$RPC/blocks?from=0&to=10" | jq '.blocks | length'
# Account endpoints
curl -s $RPC/account/knex1... | jq '.balance'
curl -s $RPC/account/knex1.../history | jq '.transactions | length'
curl -s $RPC/account/knex1.../pending | jq '.receivables'
# Transaction endpoints
curl -s $RPC/tx/$TX_HASH | jq '.status'
curl -X POST $RPC/tx/broadcast -d '{"signed_tx": "..."}'
# Validator endpoints
curl -s $RPC/validators | jq '.validators | length'
curl -s $RPC/validators/<address> | jq '.bandwidth_score'
# Verify rate limiting
for i in {1..110}; do curl -s $RPC/status > /dev/null; done
# Expected: 429 Too Many Requests after 100 requests/min
# Create test wallets
./knex wallet create --output wallet_a.json
./knex wallet create --output wallet_b.json
ADDR_A=$(cat wallet_a.json | jq -r '.address')
ADDR_B=$(cat wallet_b.json | jq -r '.address')
# Get testnet tokens from faucet
curl -X POST https://faucet.testnet.knexcoin.com/api/faucet \
-d "{\"address\": \"$ADDR_A\", \"captcha\": \"test\"}"
sleep 5
# Test 1: Check balance received
BALANCE=$(./knex balance --address $ADDR_A --rpc $RPC)
echo "Balance A: $BALANCE" # Should be 10000
# Test 2: Send transaction
./knex send --wallet wallet_a.json --to $ADDR_B --amount 1000 --rpc $RPC
sleep 3
# Test 3: Verify send block confirmed
./knex history --address $ADDR_A --rpc $RPC | jq '.[0].status'
# Expected: "confirmed"
# Test 4: Check pending on B
./knex pending --address $ADDR_B --rpc $RPC | jq
# Expected: 1 pending receivable
# Test 5: Receive on B
./knex receive --wallet wallet_b.json --rpc $RPC
sleep 3
# Test 6: Verify final balances
echo "Balance A: $(./knex balance --address $ADDR_A --rpc $RPC)" # 9000
echo "Balance B: $(./knex balance --address $ADDR_B --rpc $RPC)" # 1000
# Test 7: Verify transaction in explorer
echo "Check: https://explorer.testnet.knexcoin.com/account/$ADDR_A"
# Setup monitoring (Prometheus + Grafana)
cd monitoring
docker-compose up -d
# Health check script (run every 5 minutes via cron)
cat > /opt/scripts/health-check.sh << 'EOF'
#!/bin/bash
RPC="https://rpc.testnet.knexcoin.com"
# Check node status
STATUS=$(curl -s $RPC/status | jq -r '.sync_info.catching_up')
if [ "$STATUS" = "true" ]; then
echo "ALERT: Node is syncing" | slack-notify
fi
# Check block production
LAST_BLOCK_TIME=$(curl -s $RPC/block/latest | jq -r '.block.header.time')
NOW=$(date +%s)
BLOCK_AGE=$((NOW - $(date -d "$LAST_BLOCK_TIME" +%s)))
if [ $BLOCK_AGE -gt 60 ]; then
echo "ALERT: No blocks in $BLOCK_AGE seconds" | slack-notify
fi
# Check validator participation
VALIDATORS=$(curl -s $RPC/validators | jq '.validators | map(select(.online)) | length')
if [ $VALIDATORS -lt 2 ]; then
echo "ALERT: Only $VALIDATORS validators online" | slack-notify
fi
# Check faucet balance
FAUCET_BALANCE=$(curl -s $RPC/account/knex1faucet... | jq -r '.balance')
if [ $FAUCET_BALANCE -lt 1000000 ]; then
echo "ALERT: Faucet balance low: $FAUCET_BALANCE" | slack-notify
fi
EOF
chmod +x /opt/scripts/health-check.sh
crontab -e
# Add: */5 * * * * /opt/scripts/health-check.sh
# Install load testing tools
npm install -g artillery
cargo install drill
# Load test RPC endpoints
artillery run load-test-rpc.yml
# Expected: 1000 req/sec with p99 < 100ms
# Transaction flood test
cargo run --release --bin tx-flood -- \
--rpc $RPC \
--tps 100 \
--duration 60s \
--wallets 1000
# Expected: All transactions confirmed within 5 seconds
# Stress test consensus
cargo run --release --bin consensus-stress -- \
--nodes 5 \
--conflicting-blocks 100
# Expected: All conflicts resolved correctly
# Memory leak test (24 hours)
./knexd start --test-mode &
PID=$!
for i in {1..1440}; do
MEM=$(ps -o rss= -p $PID)
echo "$(date): $MEM KB" >> memory.log
sleep 60
done
# Expected: Memory stable, no continuous growth
PHASE 5 COMPLETION CHECKLIST:

INFRASTRUCTURE:
[ ] Cloud resources provisioned (3 regions)
[ ] DNS records configured
[ ] SSL certificates installed
[ ] Load balancer configured
[ ] Monitoring setup (Prometheus/Grafana)

GENESIS:
[ ] Genesis block created
[ ] Validators configured
[ ] Genesis distributed to all nodes
[ ] Genesis hash verified on all nodes

SEED NODES:
[ ] 3 seed nodes running (US, EU, Asia)
[ ] Nodes peered with each other
[ ] Consensus working across nodes
[ ] Blocks propagating correctly
[ ] RPC endpoints accessible

FAUCET:
[ ] Faucet deployed and accessible
[ ] Rate limiting working (10/hour/IP)
[ ] Daily limit working (1/day/address)
[ ] CAPTCHA protection enabled
[ ] Low balance alerts configured

EXPLORER:
[ ] Homepage loads with network stats
[ ] Block pages display correctly
[ ] Account pages show balance/history
[ ] Transaction pages show status
[ ] Search functionality works
[ ] WebSocket real-time updates work

RPC API:
[ ] All endpoints documented
[ ] Status endpoints working
[ ] Block endpoints working
[ ] Account endpoints working
[ ] Transaction endpoints working
[ ] Rate limiting configured

TRANSACTIONS:
[ ] End-to-end send/receive works
[ ] Confirmation within 1 second
[ ] Balance updates correctly
[ ] History displayed correctly
[ ] CLI wallet works with testnet

MONITORING:
[ ] Health checks running
[ ] Alerts configured (Slack/Discord)
[ ] Metrics dashboard available
[ ] Log aggregation setup

DOCUMENTATION:
[ ] RPC API docs published
[ ] Developer quick-start guide
[ ] Wallet setup guide
[ ] FAQ page
[ ] Discord support channel
Comprehensive security review, attack simulations, and third-party audits.
Create a comprehensive security audit checklist for KnexCoin, covering:

1. Cryptographic Review:
- Ed25519 implementation correctness
- Blake2b usage and parameters
- VDF security assumptions
- Random number generation (CSPRNG)
- Key derivation (BIP39/BIP32)

2. Consensus Security:
- 51% attack resistance
- Long-range attack prevention
- Nothing-at-stake mitigation
- Fork choice rule correctness
- Finality guarantees

3. Network Security:
- Eclipse attack resistance
- Sybil attack resistance
- DDoS mitigation
- Peer authentication
- Message authentication

4. Bandwidth Verification:
- Spoofing attack vectors
- Collusion resistance
- VDF security
- Statistical manipulation

5. Economic Security:
- Slashing correctness
- Reward calculation
- Release schedule / supply accounting bugs
- Integer overflow/underflow

6. Code Quality:
- Memory safety (Rust guarantees)
- Panic conditions
- Error handling
- Input validation

Output as markdown checklist with severity ratings.
# Install security testing tools
cargo install cargo-fuzz
cargo install honggfuzz
cargo install cargo-audit
cargo install cargo-deny
cargo install cargo-tarpaulin
cargo install cargo-geiger
rustup component add miri --toolchain nightly
# Install external tools
pip install slither-analyzer # Smart contract analysis
npm install -g snyk # Dependency scanning
apt install afl # American Fuzzy Lop
# Setup fuzzing corpus directory
mkdir -p fuzz/corpus/{blocks,signatures,messages,proofs}
# Create security test configuration
cat > security-test-config.toml << 'EOF'
[fuzzing]
max_time = 3600
max_len = 1000000
corpus_dir = "fuzz/corpus"
[audit]
ignore_advisories = []
severity_threshold = "medium"
EOF
# Block parsing fuzzing
cd node
cargo fuzz run fuzz_block_deserialize -- \
-max_len=1000000 \
-max_total_time=3600
# Signature verification fuzzing
cargo fuzz run fuzz_signature_verify -- \
-max_len=100000 \
-max_total_time=1800
# Network message fuzzing
cargo fuzz run fuzz_network_message_parse -- \
-max_len=1048576 \
-max_total_time=3600
# Bandwidth proof fuzzing
cargo fuzz run fuzz_bandwidth_proof_verify -- \
-max_len=500000 \
-max_total_time=1800
# VDF verification fuzzing
cargo fuzz run fuzz_vdf_verify -- \
-max_len=10000 \
-max_total_time=1800
# RPC input fuzzing
cargo fuzz run fuzz_rpc_input -- \
-max_len=100000 \
-max_total_time=1800
# Check for crashes
ls fuzz/artifacts/
# Should be empty (no crashes)
# Dependency vulnerability audit
cargo audit
# Expected: 0 vulnerabilities found
# Comprehensive dependency check
cargo deny check
# Checks: licenses, bans, advisories, sources
# Unsafe code audit
cargo geiger
# Review all unsafe blocks for correctness
# Supply-chain review (requires cargo-crev)
cargo crev crate verify
# Clippy with all lints
cargo clippy --all-targets --all-features -- \
-W clippy::all \
-W clippy::pedantic \
-W clippy::nursery \
-W clippy::cargo \
-D warnings
# Security-focused clippy lints
cargo clippy -- \
-W clippy::unwrap_used \
-W clippy::expect_used \
-W clippy::panic \
-W clippy::todo \
-W clippy::unimplemented
# Check for integer overflow potential
cargo clippy -- -W clippy::integer_arithmetic
# Run tests under Miri (interpreter that detects undefined behavior)
cargo +nightly miri test
# Detects: UB, use-after-free, buffer overflows
# AddressSanitizer
RUSTFLAGS="-Z sanitizer=address" \
cargo +nightly test --target x86_64-unknown-linux-gnu
# ThreadSanitizer (race conditions)
RUSTFLAGS="-Z sanitizer=thread" \
cargo +nightly test --target x86_64-unknown-linux-gnu
# MemorySanitizer (uninitialized reads)
RUSTFLAGS="-Z sanitizer=memory" \
cargo +nightly test --target x86_64-unknown-linux-gnu
# LeakSanitizer
RUSTFLAGS="-Z sanitizer=leak" \
cargo +nightly test --target x86_64-unknown-linux-gnu
# Valgrind full analysis
valgrind --leak-check=full \
--show-leak-kinds=all \
--track-origins=yes \
./target/debug/knex-tests
# Ed25519 implementation tests
cargo test test_ed25519_rfc8032_test_vectors
cargo test test_ed25519_malleability_protection
cargo test test_ed25519_small_subgroup_attack
cargo test test_ed25519_timing_safe_comparison
# Blake2b implementation tests
cargo test test_blake2b_rfc7693_test_vectors
cargo test test_blake2b_length_extension_resistance
cargo test test_blake2b_collision_resistance
# VDF security tests
cargo test test_vdf_sequentiality
cargo test test_vdf_no_parallel_speedup
cargo test test_vdf_proof_uniqueness
# Random number generation tests
cargo test test_csprng_entropy_source
cargo test test_csprng_output_distribution
cargo test test_rng_reseed_behavior
# Key derivation tests
cargo test test_bip39_test_vectors
cargo test test_key_derivation_deterministic
cargo test test_key_stretching_sufficient
# Consensus attacks
cargo test --release test_sybil_attack_simulation -- --ignored
cargo test --release test_eclipse_attack_simulation -- --ignored
cargo test --release test_51_percent_attack_simulation -- --ignored
cargo test --release test_long_range_attack_simulation -- --ignored
cargo test --release test_nothing_at_stake_attack -- --ignored
cargo test --release test_grinding_attack -- --ignored
# Network attacks
cargo test --release test_ddos_resistance -- --ignored
cargo test --release test_amplification_attack -- --ignored
cargo test --release test_routing_attack -- --ignored
cargo test --release test_man_in_middle_attack -- --ignored
# Bandwidth spoofing attacks
cargo test --release test_bandwidth_spoofing_attack -- --ignored
cargo test --release test_proxy_spoofing_attack -- --ignored
cargo test --release test_colocation_attack -- --ignored
cargo test --release test_timing_manipulation_attack -- --ignored
# Economic attacks
cargo test --release test_flash_loan_attack -- --ignored
cargo test --release test_front_running_attack -- --ignored
cargo test --release test_denial_of_service_attack -- --ignored
# All simulated attacks should fail or be detected
# RPC API penetration tests
# Test for SQL injection (if using SQL backend)
curl -X POST $RPC/api/account \
-d '{"address": "knex1\" OR 1=1--"}'
# Test for path traversal
curl "$RPC/../../../etc/passwd"
curl "$RPC/api/block/../../config/secrets"
# Test for SSRF
curl -X POST $RPC/api/webhook \
-d '{"url": "http://169.254.169.254/latest/meta-data/"}'
# Test for rate limit bypass
for i in {1..1000}; do
curl -H "X-Forwarded-For: 1.2.3.$i" $RPC/status &
done
# Test for header injection
curl -H "Host: evil.com" $RPC/status
# Test for WebSocket security
wscat -c "wss://rpc.testnet.knexcoin.com/ws" -x '{"action": "../../etc/passwd"}'
# Faucet abuse testing
# Test CAPTCHA bypass
# Test rate limit bypass with IP rotation
# Test for race conditions in claiming
# Prepare documentation for auditors
mkdir -p audit-package
# Generate architecture documentation
cat > audit-package/ARCHITECTURE.md << 'EOF'
# KnexCoin Architecture
## Components
1. Core DAG Engine (node/src/block/)
2. P2P Networking (node/src/network/)
3. ORV Consensus (node/src/consensus/)
4. Proof-of-Bandwidth (node/src/bandwidth/)
5. RPC API (node/src/rpc/)
## Security-Critical Paths
- Block validation: node/src/block/validation.rs
- Signature verification: node/src/crypto/ed25519.rs
- VDF verification: node/src/bandwidth/vdf.rs
- Consensus voting: node/src/consensus/voting.rs
- Slashing logic: node/src/consensus/slashing.rs
EOF
# Generate threat model
cat > audit-package/THREAT_MODEL.md << 'EOF'
# Threat Model
## Assets
- User funds (account balances)
- Validator stakes
- Network integrity
## Threat Actors
- Malicious validators (up to 33%)
- Network-level attackers
- Bandwidth spoofers
- Colluding validators
## Attack Vectors
[See security-tests.md for full list]
EOF
# Package source code with commit hash
git archive --format=tar.gz HEAD > audit-package/source-$(git rev-parse --short HEAD).tar.gz
# Generate test coverage report
cargo tarpaulin --out Html --output-dir audit-package/coverage/
# Recommended auditors:
# - Trail of Bits
# - OpenZeppelin
# - Consensys Diligence
# - NCC Group
# Bug bounty tiers

CRITICAL (up to $100,000):
- Remote code execution
- Consensus manipulation
- Fund theft
- Total network disruption

HIGH ($10,000 - $50,000):
- Denial of service
- Validator slashing bypass
- Bandwidth proof forgery
- Significant economic impact

MEDIUM ($1,000 - $10,000):
- Information disclosure
- Partial DoS
- Privilege escalation
- Configuration bypass

LOW ($100 - $1,000):
- Minor security misconfigurations
- Non-sensitive data exposure
- Best practice violations

# Scope:
# - In scope: Core node, wallet, RPC API, explorer, faucet
# - Out of scope: Third-party services, social engineering
# Platforms to consider:
# - HackerOne
# - Immunefi (crypto-focused)
# - Bugcrowd
# Setup security@knexcoin.com for responsible disclosure
# GitHub Actions security workflow
cat > .github/workflows/security.yml << 'EOF'
name: Security Checks
on: [push, pull_request]
jobs:
audit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Cargo audit
run: |
cargo install cargo-audit
cargo audit
clippy-security:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Security lints
run: |
rustup component add clippy
cargo clippy -- -D warnings \
-W clippy::unwrap_used \
-W clippy::expect_used
fuzz-quick:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Quick fuzz
run: |
cargo install cargo-fuzz
cargo fuzz run fuzz_block_deserialize -- -max_total_time=60
snyk:
runs-on: ubuntu-latest
steps:
- uses: snyk/actions/node@master
with:
args: --severity-threshold=high
EOF
PHASE 6 COMPLETION CHECKLIST:

FUZZING:
[ ] Block parsing fuzzing: 0 crashes
[ ] Signature verification fuzzing: 0 crashes
[ ] Network message fuzzing: 0 crashes
[ ] Bandwidth proof fuzzing: 0 crashes
[ ] VDF verification fuzzing: 0 crashes
[ ] RPC input fuzzing: 0 crashes

STATIC ANALYSIS:
[ ] cargo audit: 0 vulnerabilities
[ ] cargo deny: All checks pass
[ ] cargo clippy: 0 warnings (pedantic)
[ ] cargo geiger: Unsafe blocks reviewed

MEMORY SAFETY:
[ ] Miri: No UB detected
[ ] AddressSanitizer: No issues
[ ] ThreadSanitizer: No races
[ ] LeakSanitizer: No leaks
[ ] Valgrind: Clean report

CRYPTOGRAPHY:
[ ] Ed25519 RFC test vectors pass
[ ] Blake2b RFC test vectors pass
[ ] VDF security verified
[ ] CSPRNG properly seeded
[ ] No timing side channels

ATTACK SIMULATIONS:
[ ] Sybil attack: Detected/prevented
[ ] Eclipse attack: Detected/prevented
[ ] 51% attack: Detected/prevented
[ ] Long-range attack: Prevented
[ ] Bandwidth spoofing: Detected
[ ] DDoS: Mitigated

PENETRATION TESTING:
[ ] RPC API: No vulnerabilities
[ ] WebSocket: No vulnerabilities
[ ] Faucet: Abuse prevented
[ ] Explorer: No XSS/CSRF

THIRD-PARTY AUDIT:
[ ] Audit firm selected
[ ] Audit package prepared
[ ] Audit completed
[ ] Critical findings: 0
[ ] High findings: 0
[ ] All findings addressed

BUG BOUNTY:
[ ] Program launched
[ ] Scope defined
[ ] Tiers defined ($100K max)
[ ] Running for 30+ days
[ ] Valid submissions addressed

DOCUMENTATION:
[ ] Security architecture documented
[ ] Threat model documented
[ ] Incident response plan
[ ] Security contact published
Production launch with full tokenomics, governance, and ecosystem activation.
Create the mainnet genesis configuration for KnexCoin.

Genesis parameters:
- Chain ID: knex-mainnet-1
- Genesis time: [TO BE DETERMINED]
- Total supply (all created at genesis): 100,000,000 KNEX (hard cap — no further minting authority)

Token distribution (Genesis):
- Network Release Reserve: 90,000,000 KNEX (90%) — released over 20 years via PoB
- Treasury: 7,000,000 KNEX (7%) — ecosystem dev, liquidity support
- Team: 3,000,000 KNEX (3%) — development, operations

Initial validators:
- 10 foundation seed validators (first 90 days)
- Bootstrap from known validator set
- Progressive decentralization schedule

Consensus parameters:
- Block time: 2 seconds target
- Quorum: 67%
- Min stake: 10,000 KNEX
- Unbonding period: 21 days

Generate genesis.json with all allocations and parameters.
PRE-LAUNCH VERIFICATION (T-30 days):

SECURITY SIGN-OFF:
[ ] Third-party audit completed
[ ] All critical findings resolved
[ ] All high findings resolved
[ ] Bug bounty running 30+ days
[ ] No unpatched vulnerabilities
[ ] Security documentation published

TESTNET STABILITY:
[ ] Testnet running 60+ days
[ ] No consensus failures in 30 days
[ ] TPS meets requirements (>1000)
[ ] Uptime > 99.9%
[ ] All features tested

CODE FREEZE:
[ ] Code freeze initiated
[ ] Release candidate tagged
[ ] Release notes prepared
[ ] Changelog finalized
[ ] Binary checksums published

GENESIS PREPARATION:
[ ] Token distribution finalized
[ ] Vesting schedules coded
[ ] Genesis reviewed by 3+ parties
[ ] Genesis signatures collected
[ ] Genesis hash published
# Validator verification script
cat > scripts/verify-validators.sh << 'EOF'
#!/bin/bash
VALIDATORS=(
"seed1-us-east"
"seed2-eu-west"
"seed3-asia-sg"
# Add community validators...
)
echo "=== VALIDATOR READINESS CHECK ==="
for VAL in "${VALIDATORS[@]}"; do
echo "Checking $VAL..."
# Check connectivity
ping -c 3 $VAL.knexcoin.com > /dev/null 2>&1
[ $? -eq 0 ] && echo " ✓ Connectivity OK" || echo " ✗ Connectivity FAILED"
# Check RPC
STATUS=$(curl -s https://$VAL.knexcoin.com/status | jq -r '.sync_info.catching_up')
[ "$STATUS" = "false" ] && echo " ✓ Synced" || echo " ✗ Not synced"
# Check version
VERSION=$(curl -s https://$VAL.knexcoin.com/status | jq -r '.node_info.version')
echo " → Version: $VERSION"
# Check stake
STAKE=$(curl -s https://$VAL.knexcoin.com/validator | jq -r '.stake')
echo " → Stake: $STAKE KNEX"
done
EOF
# Run validator checks
./scripts/verify-validators.sh
# Expected: All validators online, synced, correct version
# Infrastructure verification
echo "=== INFRASTRUCTURE CHECKS ==="
# DNS verification
echo "Checking DNS..."
dig +short rpc.knexcoin.com
dig +short explorer.knexcoin.com
dig +short wallet.knexcoin.com
# SSL verification
echo "Checking SSL..."
echo | openssl s_client -connect rpc.knexcoin.com:443 2>/dev/null | openssl x509 -noout -dates
# Load balancer check
echo "Checking load balancer..."
for i in {1..10}; do
curl -s https://rpc.knexcoin.com/status | jq -r '.node_info.id'
done | sort | uniq -c # Should show distribution
# CDN/Cache check
echo "Checking CDN..."
curl -I https://explorer.knexcoin.com | grep -i "cache\|cdn"
# Monitoring check
echo "Checking monitoring..."
curl -s https://status.knexcoin.com/api/status | jq
# Backup verification
echo "Checking backups..."
ssh backup-server "ls -la /backups/knexcoin/"
# Genesis ceremony steps (coordinate via secure channel)
# Step 1: Generate final genesis
./knexd genesis create \
--chain-id knex-mainnet-1 \
--genesis-time "2025-06-01T00:00:00Z" \
--total-supply 100000000 \
--network-reserve 90000000 \
--treasury 7000000 \
--team 3000000 \
--validators validators-mainnet.json \
--output genesis-mainnet.json
# Step 2: Each validator signs genesis
./knexd genesis sign \
--genesis genesis-mainnet.json \
--validator-key /path/to/key \
--output genesis-signed.json
# Step 3: Aggregate signatures
./knexd genesis aggregate \
--inputs genesis-signed-*.json \
--output genesis-final.json
# Step 4: Verify all signatures
./knexd genesis verify genesis-final.json
echo "Genesis hash: $(./knexd genesis hash genesis-final.json)"
# Step 5: Distribute to all validators
for VAL in seed1-us-east seed2-eu-west seed3-asia-sg; do
scp genesis-final.json $VAL:/opt/knexd/config/genesis.json
done
# Step 6: Verify distribution
for VAL in seed1-us-east seed2-eu-west seed3-asia-sg; do
REMOTE_HASH=$(ssh $VAL "./knexd genesis hash /opt/knexd/config/genesis.json")
LOCAL_HASH=$(./knexd genesis hash genesis-final.json)
if [ "$REMOTE_HASH" = "$LOCAL_HASH" ]; then
echo "$VAL: ✓ Genesis verified"
else
echo "$VAL: ✗ Genesis mismatch!"
exit 1
fi
done
# Launch day countdown script
cat > scripts/launch.sh << 'EOF'
#!/bin/bash
set -e
GENESIS_TIME="2025-06-01T00:00:00Z"
GENESIS_EPOCH=$(date -d "$GENESIS_TIME" +%s)
echo "=== KNEXCOIN MAINNET LAUNCH ==="
echo "Genesis time: $GENESIS_TIME"
# Wait for genesis time
while [ $(date +%s) -lt $GENESIS_EPOCH ]; do
REMAINING=$((GENESIS_EPOCH - $(date +%s)))
echo -ne "Launch in: ${REMAINING}s\r"
sleep 1
done
echo ""
echo "LAUNCHING..."
# Start all validators simultaneously
for VAL in seed1-us-east seed2-eu-west seed3-asia-sg; do
ssh $VAL "sudo systemctl start knexd" &
done
wait
echo "All validators started. Waiting for first block..."
# Wait for first block
while true; do
HEIGHT=$(curl -s https://rpc.knexcoin.com/status | jq -r '.sync_info.latest_block_height')
if [ "$HEIGHT" -gt "0" ]; then
echo "✓ First block produced! Height: $HEIGHT"
break
fi
sleep 1
done
echo ""
echo "=== MAINNET IS LIVE ==="
echo "Explorer: https://explorer.knexcoin.com"
echo "RPC: https://rpc.knexcoin.com"
EOF
chmod +x scripts/launch.sh
# Post-launch monitoring checklist
# Every 5 minutes for first hour:
watch -n 300 '
echo "=== NETWORK STATUS ==="
curl -s https://rpc.knexcoin.com/status | jq "{
height: .sync_info.latest_block_height,
time: .sync_info.latest_block_time,
validators: .validators.total
}"
echo "=== CONSENSUS HEALTH ==="
curl -s https://rpc.knexcoin.com/consensus_state | jq ".round_state.height_vote_set[0].prevotes_bit_array"
echo "=== PEER COUNT ==="
for VAL in seed1-us-east seed2-eu-west seed3-asia-sg; do
PEERS=$(curl -s https://$VAL.knexcoin.com/net_info | jq ".n_peers")
echo "$VAL: $PEERS peers"
done
'
# Check for:
# - Block production (continuous)
# - All validators voting
# - Peer counts stable (>5 per node)
# - No error logs
# - Memory/CPU stable
# Alert thresholds:
# - No block in 30 seconds: WARNING
# - No block in 60 seconds: CRITICAL
# - Validator offline: CRITICAL
# - Peer count < 3: WARNING
# - Memory > 80%: WARNING
# Post-launch verification tests
# Test 1: First transaction on mainnet
./knex wallet create --output mainnet-test-wallet.json
ADDR=$(cat mainnet-test-wallet.json | jq -r '.address')
echo "Test wallet: $ADDR"
# (Have a pre-funded wallet send to test wallet)
# Verify receipt
sleep 10
BALANCE=$(./knex balance --address $ADDR --rpc https://rpc.knexcoin.com)
echo "Balance received: $BALANCE"
# Test 2: Verify token distribution
curl -s https://rpc.knexcoin.com/account/knex1treasury... | jq '.balance'
# Should match genesis allocation
# Test 3: Verify vesting contracts
curl -s https://rpc.knexcoin.com/account/knex1founders... | jq '{
balance: .balance,
vested: .vesting_info.vested,
locked: .vesting_info.locked
}'
# Test 4: Governance module check
curl -s https://rpc.knexcoin.com/gov/params | jq
# Test 5: Bandwidth proof generation
curl -s https://rpc.knexcoin.com/validators/seed1/bandwidth_proof | jq '.latest_proof'
# Test 6: Explorer accuracy
# Manually verify:
# - Block data matches RPC
# - Account balances correct
# - Transaction history accurate
# - Validator list correct
# Communication checklist

PRE-LAUNCH (T-7):
[ ] Blog post: "Mainnet Launch Announcement"
[ ] Twitter thread: Launch details
[ ] Discord announcement: @everyone
[ ] Telegram announcement
[ ] Email newsletter to subscribers
[ ] Press release to crypto media

LAUNCH DAY (T-0):
[ ] Tweet: "Mainnet is LIVE!"
[ ] Discord: Real-time updates in #announcements
[ ] Reddit post in r/CryptoCurrency, r/altcoins
[ ] Medium article: Launch summary
[ ] Update website hero: "Now Live"

POST-LAUNCH (T+1):
[ ] Blog post: "Day 1 Stats & Highlights"
[ ] Twitter: First transaction, first block stats
[ ] Discord: Community celebration

POST-LAUNCH (T+7):
[ ] Blog post: "Week 1 Report"
[ ] Governance proposal for community feedback
[ ] Validator onboarding guide published
[ ] Tutorial videos released

ONGOING:
[ ] Weekly transparency reports
[ ] Monthly development updates
[ ] Quarterly roadmap reviews
# Emergency response procedures

SEVERITY LEVELS:
- P0 (Critical): Network halt, fund loss, consensus failure
- P1 (High): Validator issues, performance degradation
- P2 (Medium): Non-critical bugs, UX issues
- P3 (Low): Minor issues, documentation

P0 RESPONSE (< 15 min):
1. Alert all core team via PagerDuty
2. Assess severity and impact
3. If fund risk: Coordinate validator pause
4. If consensus failure: Identify root cause
5. Hotfix or rollback decision
6. Public communication within 1 hour

EMERGENCY PAUSE:
# Coordinate with validators (need 51% to pause)
./knexd admin emergency-pause --reason "Security incident"
# Resume after fix:
./knexd admin emergency-resume --upgrade-height <height>

ROLLBACK PROCEDURE (last resort):
# Identify safe block height
SAFE_HEIGHT=$(./knexd admin find-safe-height)
# Coordinate rollback
./knexd admin rollback --height $SAFE_HEIGHT
# This requires 67% validator coordination

WAR ROOM CONTACTS:
- Lead Dev: +1-XXX-XXX-XXXX
- Security Lead: +1-XXX-XXX-XXXX
- Infrastructure: +1-XXX-XXX-XXXX
- Communications: +1-XXX-XXX-XXXX
PHASE 7 COMPLETION CHECKLIST:

PRE-LAUNCH (T-30 to T-7):
[ ] Security audit passed
[ ] All findings resolved
[ ] Bug bounty 30+ days
[ ] Testnet 60+ days stable
[ ] Code freeze initiated
[ ] Release candidate tagged
[ ] Genesis prepared
[ ] Genesis reviewed by 3+ parties

VALIDATOR READINESS (T-7):
[ ] All validators online
[ ] All validators synced
[ ] All validators correct version
[ ] All validators staked
[ ] Communication channels tested

INFRASTRUCTURE (T-7):
[ ] DNS configured
[ ] SSL certificates valid
[ ] Load balancer tested
[ ] CDN configured
[ ] Monitoring active
[ ] Backups verified

GENESIS CEREMONY (T-1):
[ ] Genesis generated
[ ] All validators signed
[ ] Signatures aggregated
[ ] Genesis distributed
[ ] Genesis verified on all nodes

LAUNCH DAY (T-0):
[ ] Countdown coordinated
[ ] All validators start
[ ] First block produced
[ ] Consensus working
[ ] Explorer live
[ ] RPC accessible
[ ] Announcements posted

POST-LAUNCH (T+24h):
[ ] Continuous block production
[ ] All validators voting
[ ] No critical issues
[ ] First transactions successful
[ ] Token distribution verified
[ ] Vesting active

POST-LAUNCH (T+7):
[ ] Network stable
[ ] Community validators onboarded
[ ] First governance proposal
[ ] Bug bounty payouts processed
[ ] Week 1 report published
[ ] No P0/P1 incidents

ONGOING:
[ ] 24/7 monitoring active
[ ] On-call rotation established
[ ] Weekly reports published
[ ] Governance active
[ ] Community growing
| Component | Technology | Rationale |
|---|---|---|
| Core Node | Rust | Memory safety, performance, no GC pauses |
| Networking | libp2p | Battle-tested, modular, used by IPFS/Polkadot |
| Database | RocksDB | Fast key-value store, used by Nano |
| Cryptography | Ed25519 + Blake2b | Fast signatures, secure hashing |
| Serialization | bincode / MessagePack | Compact binary format |
| RPC API | JSON-RPC + WebSocket | Standard, easy integration |
| CLI Wallet | Rust (clap) | Cross-platform, consistent with node |
| Explorer | Next.js + React | Fast, SEO-friendly |
| VDF | Sloth / Wesolowski | Proven VDF constructions |
knexcoin/
├── node/                  # Core node implementation
│   ├── src/
│   │   ├── block/         # Block structures & validation
│   │   ├── chain/         # Account-chain management
│   │   ├── consensus/     # ORV implementation
│   │   ├── network/       # P2P networking
│   │   ├── bandwidth/     # PoB module
│   │   ├── rpc/           # JSON-RPC server
│   │   └── main.rs
│   └── Cargo.toml
├── wallet/                # CLI wallet
├── explorer/              # Block explorer (Next.js)
├── docs/                  # Documentation
└── tests/                 # Integration tests
Maximize security without raising barriers to entry. Anyone with decent internet can participate, but attacks become economically and technically infeasible. Target: 9.5/10 security score.
Multiple overlapping defenses make Sybil attacks cost-prohibitive:
pub struct SybilResistance {
    // Progressive trust building
    maturity_days: u32,           // Days since registration (0-90)
    maturity_multiplier: f64,     // 0.25 → 1.0 over 90 days

    // Network topology
    ip_subnet: [u8; 3],           // /24 subnet identifier
    geo_region: GeoRegion,        // Continental region
    geo_diversity_bonus: f64,     // 1.0-1.5x for rare regions

    // Hardware attestation (optional)
    tpm_attestation: Option<TPMProof>,
    hardware_trust_bonus: f64,    // 1.0-1.2x with TPM

    // Social proof (optional)
    social_verifications: Vec<SocialProof>,
    social_trust_score: u8,       // 0-100
}

impl SybilResistance {
    fn calculate_trust_multiplier(&self) -> f64 {
        let base = 0.25 + (0.75 * (self.maturity_days as f64 / 90.0).min(1.0));
        let geo = self.geo_diversity_bonus;
        let hw = self.hardware_trust_bonus;
        let social = 1.0 + (self.social_trust_score as f64 * 0.001);
        base * geo * hw * social
    }
}
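As a worked example, the multiplier formula above can be exercised standalone. This is a minimal sketch with the optional bonuses passed as plain values; the sample inputs are hypothetical.

```rust
// Standalone sketch of calculate_trust_multiplier: the bonuses are
// passed directly rather than read from the SybilResistance struct.
fn trust_multiplier(maturity_days: u32, geo_bonus: f64, hw_bonus: f64, social_score: u8) -> f64 {
    // Base trust ramps linearly from 0.25 at day 0 to 1.0 at day 90
    let base = 0.25 + 0.75 * (maturity_days as f64 / 90.0).min(1.0);
    base * geo_bonus * hw_bonus * (1.0 + social_score as f64 * 0.001)
}

fn main() {
    // Fresh validator: only a quarter of full trust weight
    println!("{:.4}", trust_multiplier(0, 1.0, 1.0, 0));    // 0.2500
    // 45 days old, rare region (1.2x), no TPM, social score 50
    println!("{:.4}", trust_multiplier(45, 1.2, 1.0, 50));  // 0.7875
    // Fully matured with every bonus maxed out
    println!("{:.4}", trust_multiplier(90, 1.5, 1.2, 100)); // 1.9800
}
```

Note that a brand-new identity can never exceed a quarter of a matured validator's weight, which is what makes mass identity creation unprofitable.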
Multi-layer bandwidth verification makes spoofing practically impossible:
pub struct FortifiedBandwidthProof {
    // Core proof data
    base_proof: BandwidthProof,

    // Cryptographic timestamps
    tsa_timestamp: TSAResponse,      // RFC 3161 timestamp
    vrf_seed: [u8; 32],              // VRF output for challenger selection
    vrf_proof: [u8; 80],             // VRF proof (verifiable)

    // Multi-path verification
    challenger_paths: Vec<ChallengerPath>, // Min 5 from different regions
    path_merkle_root: [u8; 32],            // Root of all challenge responses

    // Statistical validation
    historical_consistency: f64,     // 0-1 consistency with past proofs
    anomaly_score: f64,              // 0-1 (lower = more trustworthy)
}

pub struct ChallengerPath {
    challenger_id: [u8; 32],
    challenger_region: GeoRegion,
    challenge_data_hash: [u8; 32],
    response_hash: [u8; 32],
    measured_latency_ms: u32,
    measured_throughput_mbps: u64,
    merkle_proof: Vec<[u8; 32]>,
    signature: [u8; 64],
}
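The challenger-path quorum implied by the comment above (at least 5 paths, drawn from different regions) can be sketched as a standalone acceptance check. The `Region` enum and the 3-distinct-region floor are illustrative assumptions, not protocol constants:

```rust
// Sketch of a path-quorum check: enough challenger paths, with enough
// geographic spread that no single colocated cluster can vouch alone.
use std::collections::HashSet;

#[allow(dead_code)]
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Region { NorthAmerica, SouthAmerica, Europe, Africa, Asia, Oceania }

fn paths_acceptable(regions: &[Region]) -> bool {
    let distinct: HashSet<&Region> = regions.iter().collect();
    // Assumed thresholds: >=5 paths spanning >=3 distinct regions
    regions.len() >= 5 && distinct.len() >= 3
}

fn main() {
    use Region::*;
    // 5 paths across 3 continents: accepted
    assert!(paths_acceptable(&[NorthAmerica, Europe, Asia, Europe, NorthAmerica]));
    // 5 paths but only 2 continents: rejected (colocation risk)
    assert!(!paths_acceptable(&[Europe, Europe, Europe, Asia, Asia]));
    println!("ok");
}
```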
Make attacks economically irrational with aggressive slashing:
```rust
pub enum SlashingOffense {
    // Critical offenses: 100% slash + permanent ban
    BandwidthSpoofing {
        evidence: SpoofingEvidence,
    },
    DoubleVoting {
        conflicting_votes: (Vote, Vote),
    },

    // Severe offenses: 50-75% slash
    CollusionDetected {
        colluding_validators: Vec<[u8; 32]>,
        evidence: CollusionEvidence,
    }, // 75% slash
    AttestationFraud {
        false_attestation: Attestation,
        actual_measurement: BandwidthMeasurement,
    }, // 50% slash

    // Minor offenses: gradual penalty
    UptimeViolation {
        uptime_percent: f64,
        days_below_threshold: u32,
    }, // 1% slash per day below threshold
}

pub struct SlashingResult {
    validator: [u8; 32],
    offense: SlashingOffense,
    slashed_amount: u128,
    burned_amount: u128,    // 50% burned (deflationary)
    treasury_amount: u128,  // 50% to security treasury
    ban_until: Option<u64>, // None = permanent
    proof_hash: [u8; 32],   // Immutable on-chain evidence
}
```
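The 50/50 burn/treasury split behind `SlashingResult` reduces to a small pure function. This is a sketch; sending the odd remainder of an uneven split to the treasury is our assumption, not a spec rule:

```rust
// Compute the slash distribution for a given stake and offense severity.
// Half of the slashed amount is burned (deflationary); half funds the
// security treasury. Odd remainders go to the treasury here (an assumption).
fn apply_slash(stake: u128, slash_percent: u8) -> (u128, u128, u128) {
    let slashed = stake * slash_percent as u128 / 100;
    let burned = slashed / 2;
    let treasury = slashed - burned;
    (slashed, burned, treasury)
}

fn main() {
    // A 75% collusion slash on a 1,000 KNEX stake
    let (slashed, burned, treasury) = apply_slash(1_000, 75);
    println!("slashed={} burned={} treasury={}", slashed, burned, treasury);
}
```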
The first 90 days are the most vulnerable period, so special protections apply:
BOOTSTRAP SECURITY TIMELINE
═══════════════════════════
Day 0 Day 30 Day 60 Day 90 Day 120+
│ │ │ │ │
▼ ▼ ▼ ▼ ▼
┌────────────────────────────────────────────────────────────┐
│ FOUNDATION WEIGHT │
│ ███████████████████████▓▓▓▓▓▓▓▓░░░░░░░░░░░░░░░░░░░░░░░░░░ │
│ 51% ───▶ 35% ───▶ 15% ───▶ 0% │
└────────────────────────────────────────────────────────────┘
┌────────────────────────────────────────────────────────────┐
│ COMMUNITY WEIGHT │
│ ░░░░░░░░░░░░░░░░░░░░░░▓▓▓▓▓▓▓▓███████████████████████████ │
│ 49% ───▶ 65% ───▶ 85% ───▶ 100% │
└────────────────────────────────────────────────────────────┘
Security Features:
├─ [Day 0-30] Full checkpoint system, emergency pause enabled
├─ [Day 30-60] Reduced foundation weight, community growth
├─ [Day 60-90] Minimal foundation, full community consensus
└─ [Day 90+] Fully decentralized, foundation = regular validator
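The decay schedule in the timeline can be expressed as piecewise-linear interpolation. This is a sketch: the spec only fixes the 0/30/60/90-day milestone values, so interpolating linearly between them is our assumption:

```rust
// Foundation voting weight (percent) as a function of network age in days.
// Milestones from the bootstrap timeline: 51% -> 35% -> 15% -> 0%.
fn foundation_weight(day: u32) -> f64 {
    // (start_day, start_weight, end_day, end_weight) segments
    let segments = [(0u32, 51.0f64, 30u32, 35.0f64), (30, 35.0, 60, 15.0), (60, 15.0, 90, 0.0)];
    for (d0, w0, d1, w1) in segments {
        if day <= d1 {
            let t = (day - d0) as f64 / (d1 - d0) as f64;
            return w0 + t * (w1 - w0);
        }
    }
    0.0 // Day 90+: the foundation is a regular validator
}

fn main() {
    for day in [0, 30, 45, 60, 90, 120] {
        println!("day {:>3}: foundation {:>4.1}% / community {:>5.1}%",
                 day, foundation_weight(day), 100.0 - foundation_weight(day));
    }
}
```

Community weight is simply the complement, so the two bars in the diagram always sum to 100%.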
Real-time monitoring with automatic response:
```rust
pub struct SecurityMonitor {
    // Eclipse attack detection
    peer_diversity_threshold: f64, // Min 0.7 diversity score
    min_unique_subnets: u32,       // Must connect to 10+ /24s
    geographic_spread_min: u32,    // Must have peers in 3+ continents

    // Long-range attack prevention
    checkpoint_interval: u64,      // Every 1000 blocks
    weak_subjectivity_period: u64, // 30 days of checkpoints

    // 51% attack resistance
    finality_delay_ms: u64,        // 10,000ms detection window
    quorum_threshold: f64,         // 0.67 (67%)

    // Anomaly thresholds
    bandwidth_drop_alert: f64,     // 0.20 (20% sudden drop)
    validator_churn_alert: f64,    // 0.10 (10% leave in 1 hour)
}

pub enum SecurityAlert {
    EclipseAttempt { affected_nodes: Vec<[u8; 32]>, severity: u8 },
    LongRangeAttempt { fork_point: u64, attacker_chain_length: u64 },
    ConsensusAnomaly { expected_quorum: f64, actual: f64 },
    BandwidthCliff { before: u64, after: u64, drop_percent: f64 },
    MassValidatorExit { count: u32, timeframe_hours: u32 },
}

pub enum AutoResponse {
    IncreaseCheckpointFrequency,
    RaiseQuorumTemporarily { new_threshold: f64 },
    EnableEmergencyMode,
    AlertFoundationValidators,
    TriggerGovernanceVote { proposal: String },
}
```
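As a concrete example of one alert path, the bandwidth-cliff check reduces to a relative-drop comparison against the configured threshold (a minimal sketch; the 20% figure corresponds to `bandwidth_drop_alert`):

```rust
// Fire a bandwidth-cliff alert when aggregate network bandwidth falls by
// more than the configured fraction between two measurement windows.
// Returns the observed drop fraction when it exceeds the threshold.
fn bandwidth_cliff(before_mbps: u64, after_mbps: u64, threshold: f64) -> Option<f64> {
    if before_mbps == 0 {
        return None; // No baseline to compare against
    }
    let drop = before_mbps.saturating_sub(after_mbps) as f64 / before_mbps as f64;
    if drop > threshold { Some(drop) } else { None }
}

fn main() {
    // 10 Gbps -> 7 Gbps is a 30% drop: exceeds the 20% alert threshold
    println!("{:?}", bandwidth_cliff(10_000, 7_000, 0.20));
    // 10 Gbps -> 9 Gbps is only a 10% drop: no alert
    println!("{:?}", bandwidth_cliff(10_000, 9_000, 0.20));
}
```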
Long-term behavior tracking prevents hit-and-run attacks:
```rust
pub struct ValidatorReputation {
    validator_id: [u8; 32],

    // Core reputation (0-1000)
    score: u16,
    score_history: Vec<(u64, u16)>, // (timestamp, score)

    // Behavior tracking
    uptime_30d: f64,
    correct_votes: u64,
    total_votes: u64,
    bandwidth_proofs_valid: u64,
    bandwidth_proofs_total: u64,
    attestations_given: u64,
    attestations_accurate: u64,

    // Vouching
    vouched_by: Vec<[u8; 32]>,   // High-rep validators who vouch
    vouching_for: Vec<[u8; 32]>, // Nodes this validator vouches for

    // Penalties
    slashing_history: Vec<SlashingEvent>,
    warnings: u32,
}

// Reputation calculation
fn calculate_reputation(v: &ValidatorReputation) -> u16 {
    let uptime_score = (v.uptime_30d * 200.0) as u16;                                                   // Max 200
    let vote_accuracy = (v.correct_votes * 300 / v.total_votes.max(1)) as u16;                          // Max 300
    let bandwidth_accuracy = (v.bandwidth_proofs_valid * 200 / v.bandwidth_proofs_total.max(1)) as u16; // Max 200
    let attestation_accuracy = (v.attestations_accurate * 200 / v.attestations_given.max(1)) as u16;    // Max 200
    let vouch_bonus = (v.vouched_by.len() * 10).min(100) as u16;                                        // Max 100

    let base = uptime_score + vote_accuracy + bandwidth_accuracy + attestation_accuracy + vouch_bonus;

    // Apply penalties from slashing history and warnings
    let penalty = v.slashing_history.len() as u16 * 200 + v.warnings as u16 * 50;
    base.saturating_sub(penalty).min(1000)
}
```
| Defense Layer | Score | Mechanism |
|---|---|---|
| ✓ Sybil Resistance | 9/10 | Progressive trust + IP limits + geo-diversity |
| ✓ Bandwidth Verification | 9/10 | 5-continent VRF challengers + TSA timestamps |
| ✓ Economic Security | 10/10 | 100% slash for critical offenses |
| ✓ Bootstrap Protection | 10/10 | Foundation oversight + gradual decentralization |
| ✓ Attack Detection | 9/10 | Real-time monitoring + auto-response |
| ✓ Reputation System | 10/10 | Web of trust + slow build/fast decay |
| ✓ Quantum Resistance | 10/10 | NIST PQC from Genesis (FIPS 203-205, draft 206) |
KnexCoin implements NIST-standardized post-quantum cryptography from day one. No migration needed—all addresses are quantum-safe from Genesis.
All cryptographic primitives use NIST-standardized or final-round NIST-selected algorithms, chosen to resist both classical and quantum attacks.
| Layer | Algorithm | FIPS | Purpose |
|---|---|---|---|
| Primary Signatures | FN-DSA-512 (FALCON) | FIPS 206 (draft) | Transaction signing (smallest PQC signatures) |
| Backup Signatures | ML-DSA-65 (Dilithium) | FIPS 204 | Governance-switchable fallback |
| Emergency Signatures | SLH-DSA (SPHINCS+) | FIPS 205 | Hash-based (different math family) |
| Key Encapsulation | ML-KEM-768 (Kyber) | FIPS 203 | Node-to-node encrypted communication |
| Backup KEM | HQC | FIPS pending | Alternative key exchange (NIST 2025 backup KEM selection) |
| Hashing | SHA3-256 + BLAKE3 | FIPS 202 | Addresses, Merkle trees, block hashes |
| Symmetric | AES-256-GCM | FIPS 197 | Data encryption (Grover-resistant) |
For a DAG ledger with unlimited TPS, signature size directly impacts network throughput and storage:
| Algorithm | Public Key | Signature | DAG Suitability |
|---|---|---|---|
| ECDSA (legacy) | 33 bytes | 64 bytes | ❌ Quantum vulnerable |
| FN-DSA-512 (FALCON) | 897 bytes | 666 bytes | ✓ Best balance for DAG |
| ML-DSA-65 (Dilithium) | 1,952 bytes | 3,293 bytes | ⚠ 5x larger signatures |
| SLH-DSA-128f (SPHINCS+) | 32 bytes | 17,088 bytes | ⚠ Emergency backup only |
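To make the table concrete, the per-transaction wire overhead (public key + signature) translates directly into bandwidth at a given load. The arithmetic below is illustrative; the 1,000 TPS figure is an assumed load, not a spec value:

```rust
// Wire overhead per transaction for each signature scheme (pubkey + signature),
// and the resulting signature-only bandwidth at an assumed 1,000 TPS.
fn per_tx_bytes(pubkey: u64, sig: u64) -> u64 {
    pubkey + sig
}

fn mbps_at_tps(bytes_per_tx: u64, tps: u64) -> f64 {
    (bytes_per_tx * tps * 8) as f64 / 1_000_000.0 // megabits per second
}

fn main() {
    // (scheme, public key bytes, signature bytes) from the table above
    let schemes = [
        ("ECDSA (legacy)", 33u64, 64u64),
        ("FN-DSA-512", 897, 666),
        ("ML-DSA-65", 1_952, 3_293),
        ("SLH-DSA-128f", 32, 17_088),
    ];
    for (name, pk, sig) in schemes {
        let b = per_tx_bytes(pk, sig);
        println!("{:<16} {:>6} B/tx  {:>8.2} Mbps @ 1k TPS", name, b, mbps_at_tps(b, 1_000));
    }
}
```

FN-DSA-512's ~1.5 KB per transaction keeps signature bandwidth in the low tens of Mbps at this load, while SLH-DSA is roughly an order of magnitude heavier, which is why it is reserved as an emergency backup.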
```toml
# Cargo.toml - Post-Quantum Dependencies
[dependencies]
pqcrypto-falcon = "0.3"        # FN-DSA (FALCON) signatures
pqcrypto-dilithium = "0.5"     # ML-DSA backup signatures
pqcrypto-sphincsplus = "0.7"   # SLH-DSA emergency backup
pqcrypto-kyber = "0.8"         # ML-KEM key encapsulation
pqcrypto-traits = "0.3"        # Shared byte-conversion traits
sha3 = "0.10"                  # SHA3-256 hashing
blake3 = "1.5"                 # BLAKE3 performance hashing
aes-gcm = "0.10"               # AES-256-GCM encryption
bs58 = "0.5"                   # Base58 address encoding
```

```rust
// src/crypto/pqc.rs - Quantum-Safe Signatures
use pqcrypto_falcon::falcon512::*;
use pqcrypto_traits::sign::{PublicKey, SecretKey, SignedMessage};

pub struct QuantumKeyPair {
    pub public_key: Vec<u8>, // 897 bytes (FALCON-512)
    secret_key: Vec<u8>,     // Never exposed
}

impl QuantumKeyPair {
    pub fn generate() -> Self {
        let (pk, sk) = keypair();
        Self {
            public_key: pk.as_bytes().to_vec(),
            secret_key: sk.as_bytes().to_vec(),
        }
    }

    pub fn sign(&self, message: &[u8]) -> Vec<u8> {
        let sk = SecretKey::from_bytes(&self.secret_key).unwrap();
        let signed = sign(message, &sk);
        signed.as_bytes().to_vec() // ~666-byte signature + embedded message
    }

    pub fn verify(pubkey: &[u8], message: &[u8], signature: &[u8]) -> bool {
        let Ok(pk) = PublicKey::from_bytes(pubkey) else { return false };
        let Ok(sm) = SignedMessage::from_bytes(signature) else { return false };
        // Open the signed message and confirm it matches the expected message
        matches!(open(&sm, &pk), Ok(m) if m == message)
    }
}

// Quantum-safe address derivation
pub fn derive_address(pubkey: &[u8]) -> String {
    use sha3::{Digest, Sha3_256};
    let hash = Sha3_256::digest(pubkey);
    let encoded = bs58::encode(&hash[..32]).into_string();
    format!("knexq1{}", encoded) // "knexq" prefix = quantum-safe
}
```
Legacy (vulnerable): knex1qxy2kgdygjrsqtzq2n0yrf...
Quantum-safe: knexq1qxy2kgdygjrsqtzq2n0yrf...
The knexq prefix indicates a quantum-resistant address using FALCON signatures.
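Wallets can distinguish the two address families purely by prefix; note that the longer `knexq1` prefix must be tested before the legacy `knex1`, since every quantum-safe address also begins with `knex`. A minimal sketch (the `AddressKind` naming is ours):

```rust
// Classify an address by its human-readable prefix.
#[derive(Debug, PartialEq)]
enum AddressKind {
    QuantumSafe, // "knexq1..." - FALCON-signed
    Legacy,      // "knex1..."  - ECDSA (quantum vulnerable)
    Unknown,
}

fn classify(addr: &str) -> AddressKind {
    // Check the longer prefix first: "knexq1..." also starts with "knex".
    if addr.starts_with("knexq1") {
        AddressKind::QuantumSafe
    } else if addr.starts_with("knex1") {
        AddressKind::Legacy
    } else {
        AddressKind::Unknown
    }
}

fn main() {
    println!("{:?}", classify("knexq1qxy2kgdygjrsqtzq2n0yrf")); // QuantumSafe
    println!("{:?}", classify("knex1qxy2kgdygjrsqtzq2n0yrf"));  // Legacy
}
```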
KNEXCOIN QUANTUM-PROOF ARCHITECTURE (FROM GENESIS)
══════════════════════════════════════════════════
┌─────────────────────────────────────────────────┐
│ TRANSACTION LAYER │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ FN-DSA-512 │ │ ML-DSA-65 │ │
│ │ (Primary) │ │ (Backup) │ │
│ │ 666 bytes │ │ 3,293 bytes│ │
│ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────┘
│
┌─────────────────────────────────────────────────┐
│ NETWORK LAYER │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ ML-KEM-768 │ │ HQC │ │
│ │ (Primary) │ │ (Backup) │ │
│ │ FIPS 203 │ │ FIPS 207 │ │
│ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────┘
│
┌─────────────────────────────────────────────────┐
│ HASHING LAYER │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ SHA3-256 │ │ BLAKE3 │ │
│ │ Addresses │ │ Performance│ │
│ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────┘
│
┌─────────────────────────────────────────────────┐
│ SYMMETRIC LAYER │
│ ┌─────────────────────────────┐ │
│ │ AES-256-GCM │ │
│ │ (Grover-resistant @ 128-bit)│ │
│ └─────────────────────────────┘ │
└─────────────────────────────────────────────────┘
With all layers active, attack costs become prohibitive:
| Attack Type | Without Hardening | With Hardening | Improvement |
|---|---|---|---|
| Sybil Attack (1000 nodes) | ~$100K (just stake) | ~$10M+ (stake + IPs + 90 days + regions) | 100x harder |
| Bandwidth Spoofing | Moderate (single vector) | Near-impossible (5 continents + VRF) | ∞ |
| 51% Attack | Possible at bootstrap | Requires corrupting foundation + community | Foundation protected |
| Eclipse Attack | Possible with network control | Detected in minutes, auto-response | Auto-mitigated |
| Long-Range Attack | Possible without checkpoints | Blocked by weak subjectivity | Impossible |
```bash
# Create new Rust workspace
mkdir knexcoin && cd knexcoin
cargo new node --lib
cargo new wallet
```

```toml
# node/Cargo.toml
[dependencies]
tokio = { version = "1", features = ["full"] }
libp2p = "0.53"
rocksdb = "0.21"
serde = { version = "1", features = ["derive"] }
bincode = "1"

# Post-Quantum Cryptography (NIST FIPS Standards)
pqcrypto-falcon = "0.3"      # FN-DSA-512 primary signatures (FIPS 206 draft)
pqcrypto-dilithium = "0.5"   # ML-DSA-65 backup signatures (FIPS 204)
pqcrypto-kyber = "0.8"       # ML-KEM-768 key encapsulation (FIPS 203)
sha3 = "0.10"                # SHA3-256 quantum-safe hashing
blake3 = "1.5"               # BLAKE3 performance hashing
aes-gcm = "0.10"             # AES-256-GCM encryption
```

```bash
# Start with block.rs
touch node/src/block.rs
```