Master Crypto Exchange APIs for Trading & Market Data

Wallet Finder


April 29, 2026

Manual trading usually breaks at the same point. You catch a wallet buy on-chain, open a centralized exchange, check liquidity, and by the time you click through the order ticket the move is already underway. The problem isn’t just speed. It’s consistency, repeatability, and the ability to turn signals into execution without rebuilding the same workflow every hour.

That’s where crypto exchange APIs stop being a developer concern and become trading infrastructure. If you’re building a copy trading system, a PnL tracker, a market scanner, or a research pipeline that blends on-chain activity with exchange liquidity, the API layer is the system. Bad integration creates stale prices, missed fills, duplicate orders, and account risk. Good integration gives you clean data, predictable execution, and logs you can trust when something goes wrong.

The practical challenge is that exchange APIs are never just about endpoints. They force decisions about transport, authentication, rate limiting, retries, schema normalization, and where your strategy should run. Most generic guides stop at “use REST for data and WebSockets for real-time updates.” That’s not enough if you’re trying to mirror wallet activity across Ethereum, Solana, and Base while routing execution through one or more CEX venues.

This guide takes the builder’s view. It focuses on what works in production, what fails under volatility, and how to design a system that turns market data plus on-chain signals into actions you can automate.

Introduction: The Engine of Automated Trading

A wallet you track buys on-chain. Seconds later, the same token starts moving on a centralized exchange, spreads tighten, and liquidity shifts across venues. If your system cannot read that change, compare it against account risk, and send an order fast enough to matter, the signal has little trading value.

Exchange APIs sit at the center of that workflow. They feed market data into scanners, route orders into matching engines, return fills for PnL analysis, and expose balances and positions for risk checks. For copy trading in particular, the API layer decides whether a wallet signal becomes a controlled trade or a late entry with poor execution.

That infrastructure is now broad enough that teams do not need to wire every venue from scratch. According to ChangeHero’s roundup of crypto exchange APIs, leading aggregators cover 100,000+ coins across 200+ exchanges and 10,000+ DeFi protocols. The same source says projections for 2026 pointed to live WebSocket coverage for the top 12 exchanges. The point is not the exact count. The hard part is fragmentation. Each exchange uses different symbols, payload formats, precision rules, and timing behavior, which means integration work still decides whether the system is dependable under load.

Core API capabilities

A trading application usually needs several API surfaces working together:

  • Public market data for tickers, trades, candles, and order books
  • Private account endpoints for balances, positions, and order state
  • Execution endpoints for placing, canceling, and replacing orders
  • Supplementary data feeds for broader coverage across exchanges and DeFi sources

The harder problem starts when CEX data has to be combined with wallet activity on-chain. A useful system does more than detect a wallet buy. It checks whether the move is already reflected in the order book, whether size can be copied without heavy slippage, and whether the trade still fits portfolio and exposure limits. That is the difference between signal collection and actual execution logic.

Practical rule: If your API flow cannot take an on-chain event, price it against live exchange liquidity, and return a tracked order state, the strategy is still only partially automated.

What dependable systems have in common

The systems that hold up during volatility tend to share the same design choices:

| Requirement | Weak implementation | Dependable implementation |
| --- | --- | --- |
| Market data | Repeated polling | Stream first, poll only for recovery |
| Order handling | Fire-and-forget submissions | Full order lifecycle tracking |
| Security | One key for everything | Separate scoped keys by use case |
| Exchange support | Venue-specific code everywhere | Normalization layer or wrapper |
| Failure recovery | Manual restarts | Retries, reconciliation, and alerts |

Those choices affect trading results directly. Missed sequence numbers produce bad local books. Loose order tracking creates duplicate submissions. Poor key separation turns one leaked credential into an account-wide incident. Good API integration is not just engineering hygiene. It is part of execution quality, risk control, and trustworthy PnL.

Core Concepts: REST vs. WebSocket APIs

The first architectural choice isn’t strategy logic. It’s transport. In practice, most production systems use both REST and WebSocket APIs, but they use them for different jobs.


REST works best for discrete actions

REST is request-response. Your application asks for something, the exchange sends back a result, and the connection ends. That model is easy to reason about and easy to test.

Use REST when you need to:

  • Fetch historical candles for backtesting or chart context
  • Check balances and positions at controlled intervals
  • Create or cancel orders through authenticated endpoints
  • Pull exchange metadata such as symbols, filters, or instrument status

REST is also where many teams start because the tooling is familiar. You can inspect payloads, replay requests, and build a clean client library around the exchange’s endpoints.

The downside is obvious once a strategy becomes time-sensitive. Polling for order book changes or recent trades is inefficient and eventually hits rate limits. You also create uneven visibility because your app only sees the market when it asks.

WebSockets are for state that changes continuously

A WebSocket stays open. The server pushes updates to you as they happen. That’s the right model for order books, live trades, and user events such as fills or cancels.

Use WebSockets when you need:

  • Live order book depth for slippage-sensitive execution
  • Trade streams for momentum or tape-based logic
  • Immediate order status changes without polling loops
  • Responsive user interfaces that can update without lag

For copy trading and wallet mirroring, WebSockets are usually the difference between reacting to a move and trailing it. If your system waits for repeated REST calls to discover price changes, it’s already late.

REST API vs. WebSocket API at a Glance

| Characteristic | REST API | WebSocket API |
| --- | --- | --- |
| Communication model | Request-response | Persistent two-way connection |
| Best use | Historical data, account checks, order submission | Live trades, order books, user events |
| Latency profile | Higher for repeated updates | Lower for continuous streams |
| Rate limit pressure | High if used for polling | Lower when streams replace polling |
| Simplicity | Easier to start with | Harder to operate correctly |
| Failure mode | Missed updates between requests | Disconnects, stale streams, resync issues |

The trade-off most teams underestimate

WebSockets reduce polling, but they add state management. You need heartbeat handling, reconnect logic, sequence validation, and book resynchronization after drops. REST is simpler, but simplicity disappears when you try to brute-force real-time behavior through polling.

Use REST for control paths. Use WebSockets for time-sensitive state.

That split keeps your design sane. Historical candles, symbol metadata, and order placement can stay in a conventional client. Streaming books, trades, and account events should run in an event-driven pipeline.

Authentication and Secure Key Management

If your strategy can place trades, your API keys are operational funds access. Treat them like private keys, because the practical risk is similar. One leaked key with broad permissions can wipe out a trading account long before anyone notices.


The baseline security model is already well established. Crypto APIs’ standards and conventions documentation ties secure exchange API practice to IEEE 2140.5-2020 and ISO/TR 23576:2020, including HTTPS-only transmission, permission-scoped keys, and strict request timing, with exchanges often enforcing a ±5-second window on signed requests. That timing detail matters more than many teams expect. Clock drift can make valid requests fail, and retry logic can create duplicate actions if you don’t design around it.

Scope keys by job, not by account

One of the easiest mistakes is creating a single master key for every workflow. Don’t.

A safer setup looks like this:

  • Read-only key for balances, history, and portfolio views
  • Trading key for order entry and cancellation
  • Separate environment keys for production and testing
  • No withdrawal permission unless there is a hard operational reason

Exchanges such as Kraken also support controls like 2FA, IP whitelisting, and permission-scoped keys. Those aren’t optional extras for serious systems. They’re part of your blast-radius reduction plan.

How request signing should work

Private endpoints usually require your client to sign each request with a secret. The exact signature payload varies by exchange, but the pattern is consistent.

  1. Build the canonical payload the exchange expects.
  2. Add the current timestamp.
  3. Sign the payload with the secret, commonly with an HMAC scheme.
  4. Send the signature and key in the required headers.
  5. Reject stale requests and log the exact exchange response on failure.

Two operational rules matter here. First, your application clock must stay synchronized. Second, retries must be idempotent where possible, so a network timeout doesn’t create accidental duplicate orders.
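The five steps above can be sketched in Python with the standard library. The canonical payload here is a sorted, URL-encoded query string signed with HMAC-SHA256, which is a common shape, but the exact payload format, timestamp unit, and header names are exchange-specific assumptions in this sketch, not any particular venue's scheme.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

def sign_request(secret: str, params: dict) -> dict:
    """Attach a timestamp and HMAC-SHA256 signature to request params.

    Assumes a millisecond timestamp and a URL-encoded canonical payload;
    check the target exchange's docs for its exact requirements.
    """
    payload = dict(params)
    payload["timestamp"] = int(time.time() * 1000)  # many venues expect ms
    # Canonical form: sorted key=value pairs, URL-encoded.
    query = urlencode(sorted(payload.items()))
    signature = hmac.new(
        secret.encode(), query.encode(), hashlib.sha256
    ).hexdigest()
    payload["signature"] = signature
    return payload
```

The signature travels in a header or as a final parameter, depending on the venue. The important property is that the same logical request always produces the same canonical payload, so a failed signature can be debugged by comparing canonical strings rather than guessing.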

Secure storage and rotation

A good key policy is boring by design:

  • Store secrets outside source code
  • Load them from environment variables or a dedicated secret manager
  • Rotate keys on a schedule and after any exposure
  • Audit who and what can read them
  • Split services so analytics jobs never inherit trading credentials


A trading stack usually fails on security through convenience. Someone hardcodes a key, reuses it across services, or grants permissions “temporarily” and never removes them.

What secure teams actually do

| Practice | Why it matters |
| --- | --- |
| IP whitelisting | Limits where requests can originate |
| Permission-scoped keys | Contains damage if a key is exposed |
| Clock synchronization | Prevents replay rejection and auth failures |
| Regular rotation | Reduces exposure window |
| Separate keys per workflow | Stops analytics or reporting services from inheriting trade access |

The best security posture is simple: assume a service will eventually fail, then make sure the credentials attached to it can’t do much damage.

Handling API Rate Limits and Performance

A bot can have good signals, clean auth, and solid order logic, then still fail because its data plane falls apart under load. That failure usually starts with avoidable request patterns. A worker polls too often, retries in sync with every other worker, or burns request budget on endpoints that do not affect trading decisions.


OpenWare’s discussion of crypto exchange API integration challenges highlights the pattern clearly. It cites an analysis projecting that by 2026, a large share of trading bot failures would come from unoptimized API polling, and it notes that some platforms enforce limits as low as 10 requests per second. The practical lesson is simple. REST is for snapshots, reconciliation, and recovery. It is a poor substitute for a live event stream.

Polling gets expensive fast in multi-strategy systems. One service asks for balances every second. Another refreshes open orders. A third checks ticker data across dozens of pairs. Add copy trading logic that watches a source account on a CEX while also tracking related wallet flows on-chain, and the pressure multiplies. If those services are not coordinated, they compete for the same request budget and create stale state at the worst possible moment.

A workable design uses three lanes:

  • WebSocket first for fast-changing state. Trades, order book deltas, fills, positions, and user events belong here.
  • REST for recovery and verification. Use it to confirm balances, reload missed state after a disconnect, and resync snapshots on startup.
  • Shared caching for low-change metadata. Symbol filters, precision rules, contract specs, and fee schedules should not be fetched repeatedly by every worker.

If you want a more detailed reference, this guide on API rate limit strategies for crypto apps covers the control patterns that keep traffic predictable.

Where performance gains usually come from

The biggest improvements rarely come from shaving a few milliseconds off JSON parsing. They come from removing waste and isolating critical paths.

| Failure point | Better design choice |
| --- | --- |
| Repeated REST polling from multiple services | Central request scheduler with shared cache |
| Full state rebuild after every reconnect | Incremental stream processing plus periodic checksum or snapshot validation |
| Retry storms after 429s or timeouts | Exponential backoff with jitter and per-endpoint budgets |
| Analytics jobs competing with execution traffic | Separate queues and credentials for trading, reporting, and research |
| Strategy code waiting on network calls | Event-driven ingest layer feeding internal state stores |

One-sentence rule: protect order placement traffic before anything else.

Hardcoded sleeps are not rate-limit handling

Sleep-based throttling works in a toy script. It breaks in production because exchanges assign different weights to different endpoints, change quotas, and enforce limits across IPs, accounts, or API keys.

Use adaptive throttling instead. Track request cost by endpoint, maintain rolling counters, and shed low-priority work before the exchange starts rejecting calls. For example, if a copy trading engine is syncing fills from a centralized exchange and also enriching those fills with wallet activity from an EVM indexer, trade execution and account updates should keep their budget. PnL backfills and wallet-label refreshes can wait.
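A minimal version of that idea is a token bucket with per-endpoint weights. The capacity, refill rate, and weights below are illustrative placeholders, not any exchange's real quotas; the point is that expensive calls spend more of a shared budget, and callers that can't afford a call get a clean "shed or queue" answer instead of a 429.

```python
import time

class WeightedTokenBucket:
    """Token bucket where each endpoint consumes a different weight."""

    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity
        self.last = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        self.tokens = min(
            self.capacity, self.tokens + (now - self.last) * self.refill_per_sec
        )
        self.last = now

    def try_acquire(self, weight: float = 1.0) -> bool:
        """Spend tokens if the budget allows; otherwise signal shed/queue."""
        self._refill()
        if self.tokens >= weight:
            self.tokens -= weight
            return True
        return False

# Hypothetical per-endpoint weights, mirroring how exchanges price heavy calls.
WEIGHTS = {"order": 1, "balance": 5, "full_book": 50}
```

With this in place, low-priority workers check `try_acquire` before calling and back off when it returns False, while order placement can draw from its own dedicated bucket so analytics traffic can never starve it.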

Controls worth building early

  • Per-endpoint rate budgets so expensive calls cannot starve execution
  • Backoff with jitter so workers do not retry in lockstep
  • Centralized token-bucket or leaky-bucket throttling across services
  • Circuit breakers that pause noncritical jobs during exchange degradation
  • Fast resubscribe and resync logic for WebSocket disconnects
  • Latency and staleness metrics on books, balances, positions, and fills

The last item matters more than many teams expect. Low latency is useful. Fresh state is what protects PnL.
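One control from the list above, backoff with jitter, is small enough to sketch in full. This is the "full jitter" variant: the retry delay is drawn uniformly from the whole window, which keeps a fleet of workers from retrying in lockstep after a shared 429 or disconnect. The base and cap values are arbitrary defaults.

```python
import random

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Full-jitter exponential backoff.

    Returns a random delay in [0, min(cap, base * 2**attempt)], so the
    window grows exponentially but individual workers stay decorrelated.
    """
    return random.uniform(0, min(cap, base * (2 ** attempt)))
```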

For advanced strategies, performance also means matching two clocks. The CEX side tells you where you can execute and at what depth. The on-chain side tells you which wallets are accumulating, distributing, or routing flow before it becomes obvious in exchange prints. If API limits delay either side, the combined signal degrades. Copy a wallet too late and you inherit worse entry. Attribute PnL on stale fills and your strategy evaluation drifts away from reality.

Accessing Public Market Data Feeds

Public market data is the raw material for almost everything else. If the feed is incomplete, delayed, or poorly normalized, your strategy logic, PnL attribution, and execution assumptions all degrade.

The three feeds that matter most

Most applications depend on three public data categories.

Tickers

Tickers are snapshots. They usually provide a current price, a recent change view, and a volume summary. They’re useful for scanners, watchlists, and broad market dashboards.

Tickers are not enough for execution decisions. They don’t tell you how much size the market can absorb, how the book is shaped, or whether a move is being driven by actual prints.

Order books

The order book is where execution logic becomes informed. It shows resting bids and asks, which lets you estimate spread, depth, and likely slippage.

For copy trading, the book matters more than the last traded price. A wallet can buy a token on-chain with one liquidity profile while the centralized venue shows a very different depth profile. If you mirror off the last price alone, your expected fill quality is mostly guesswork.

Historical candles and trades

Historical OHLCV data gives you context for research, backtests, and signal filters. Historical trade data is even more useful when you care about microstructure, but candles are often enough for strategy prototyping.

If you’re comparing venues or validating broad price behavior, a separate market data provider can reduce a lot of effort. This overview of an API for crypto prices and market feeds is useful when you need broader coverage than a single exchange provides.

Practical retrieval pattern

A clean market data stack often looks like this:

  • REST for startup state
    • Pull recent candles
    • Fetch symbol metadata
    • Seed your initial order book snapshot
  • WebSocket for live state
    • Subscribe to trades
    • Subscribe to book deltas
    • Subscribe to ticker updates only if you need them
  • Periodic reconciliation
    • Refresh snapshots at controlled intervals
    • Detect sequence gaps
    • Rebuild state when stream integrity breaks
Sample workflow

| Data type | Best transport | Common use |
| --- | --- | --- |
| Ticker | REST or WebSocket | Watchlists, broad scanners |
| Order book | WebSocket | Slippage checks, execution logic |
| OHLCV candles | REST | Backtesting, indicators |
| Public trades | WebSocket | Momentum, tape, microstructure |

What not to do

Don’t use a ticker endpoint as a substitute for book data. Don’t backtest off one venue’s candles and assume another venue will execute similarly. And don’t trust a streaming book unless you can detect gaps and rebuild it.

Clean data doesn’t just improve research. It prevents execution mistakes that look like strategy mistakes.
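The gap check is mechanical enough to sketch. The field names `first_seq` and `last_seq` are placeholders; real feeds use venue-specific fields and continuity rules. The invariant is the same everywhere: every applied delta must begin exactly one past the last applied sequence, and anything else forces a resnapshot.

```python
class BookGapError(Exception):
    """Raised when a sequence gap means the local book must be rebuilt."""

class LocalBook:
    def __init__(self, snapshot_seq: int):
        self.seq = snapshot_seq  # sequence number of the REST snapshot

    def apply(self, update: dict) -> None:
        """Apply a streamed delta, rejecting anything out of sequence."""
        if update["last_seq"] <= self.seq:
            return  # stale delta from before the snapshot; safe to drop
        if update["first_seq"] > self.seq + 1:
            raise BookGapError("missed deltas; resnapshot and rebuild")
        # ... merge bid/ask price levels here ...
        self.seq = update["last_seq"]
```

On `BookGapError`, the recovery path is the REST lane described above: fetch a fresh snapshot, discard buffered deltas older than it, and resume streaming.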

Executing Trades Programmatically

Reading market data is the easy half. Trading through APIs is where small mistakes become financial errors. Order type selection, signing, retries, and order-state reconciliation all matter more than the initial create_order call.

Order types that deserve different treatment

A market order is useful when urgency matters more than price precision. It’s simple, but in thin books or fast moves it can fill worse than expected.

A limit order gives price control. That makes it the default choice when you care about slippage or want to join liquidity. The trade-off is obvious. The order may sit partially filled or not fill at all.

Conditional orders vary by venue, but the intent is consistent:

  • Stop-loss logic exits or reduces when price moves against you
  • Take-profit logic realizes gains at predefined levels
  • Trigger-based entries wait for a threshold before placing the active order

The actual order lifecycle

A capable bot doesn’t treat order submission as the end of the task. It tracks the entire lifecycle:

  1. Build the order request with venue-specific precision and symbol formatting.
  2. Sign and submit it through the private endpoint.
  3. Receive an acknowledgment, or a rejection with a reason.
  4. Watch for status changes through user streams or fallback polling.
  5. Reconcile partial fills, final fills, or cancellations.
  6. Update internal positions and realized or unrealized PnL.

If you skip reconciliation, you eventually trade on stale assumptions. That’s how bots double-enter, fail to exit, or size a new order using a position that no longer exists.

Guardrails worth adding before live deployment

| Risk | Guardrail |
| --- | --- |
| Duplicate order on retry | Client order IDs and idempotency design |
| Rejection from precision rules | Local validation against exchange filters |
| Stale position state | User data stream plus periodic reconciliation |
| Slippage on urgent entries | Pre-trade book check and size caps |
| Hanging open orders | Timeout policy and cancellation routine |

Practical implementation choices

Many teams write a thin exchange adapter that normalizes a few concepts across venues:

  • place_limit_order
  • place_market_order
  • cancel_order
  • get_open_orders
  • get_position_state

That works better than exposing the raw exchange schema everywhere in your codebase. Keep the strategy thinking in normalized concepts, and isolate exchange quirks inside the adapter.
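One way to pin that boundary down is an abstract base class with exactly the five operations listed above. The signatures here are illustrative choices, not a standard; the value is that strategy code depends only on this surface while each venue subclass handles symbol formats, precision filters, and signing internally.

```python
from abc import ABC, abstractmethod

class ExchangeAdapter(ABC):
    """Normalized surface the strategy sees; venue quirks live in subclasses."""

    @abstractmethod
    def place_limit_order(self, symbol: str, side: str,
                          price: float, qty: float) -> str:
        """Return a venue order ID after venue-specific validation."""

    @abstractmethod
    def place_market_order(self, symbol: str, side: str, qty: float) -> str:
        """Return a venue order ID for an immediate execution."""

    @abstractmethod
    def cancel_order(self, order_id: str) -> None:
        """Cancel by venue order ID."""

    @abstractmethod
    def get_open_orders(self, symbol: str) -> list:
        """Return normalized open-order records."""

    @abstractmethod
    def get_position_state(self, symbol: str) -> dict:
        """Return normalized position/balance state for risk checks."""
```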

A final caution matters here. Never let “temporary” execution shortcuts survive into production. If your code can’t prove whether an order filled, it can’t manage risk.

Advanced Trading: Margin and Derivatives APIs

Margin and derivatives endpoints look similar to spot APIs on the surface, but the risk model is different enough that they deserve their own controls. You’re no longer just buying or selling an asset. You’re managing borrowed capital, collateral, liquidation risk, and contract-specific behavior.

What changes compared with spot

The API surface expands in a few important ways:

  • Position endpoints become central because exposure can persist without a simple asset balance
  • Margin controls matter because available collateral changes as markets move
  • Contract metadata matters because perpetuals and futures have their own symbols, tick sizes, and settlement rules
  • Price references become critical because mark price and index price can drive liquidation behavior differently from the last traded price

That means the execution engine can’t just “reuse spot code” and call it done.

Checks worth enforcing before every leveraged trade

A safer derivatives workflow usually includes:

  1. Validate current margin mode and product settings.
  2. Confirm available collateral before placing the order.
  3. Check whether the order changes net exposure or opens a new position.
  4. Attach or schedule protective risk controls where the venue supports them.
  5. Reconcile fills and liquidation-sensitive metrics after execution.
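Steps 2 and 3 can run locally before the order ever reaches the venue. This is a deliberately simplified sketch: the margin formula assumes a plain `notional / leverage` requirement, and all thresholds are hypothetical. Real margin math is venue- and product-specific, so treat this as the shape of the check, not the formula.

```python
def pretrade_check(order_notional: float, leverage: float,
                   free_collateral: float, max_exposure: float,
                   current_exposure: float) -> tuple[bool, str]:
    """Reject a leveraged order locally before it reaches the venue.

    Simplified margin model (notional / leverage); real venues apply
    tiered margin, fees, and mark-price buffers on top of this.
    """
    required_margin = order_notional / leverage
    if required_margin > free_collateral:
        return False, "insufficient free collateral"
    if current_exposure + order_notional > max_exposure:
        return False, "would exceed exposure limit"
    return True, "ok"
```

A rejected check should fail loudly into logs and metrics: with leverage, a silently skipped order and a silently placed one are both incidents waiting to be explained.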

Common failure points

| Failure point | Why it hurts |
| --- | --- |
| Assuming spot and perp symbols map cleanly | Contract naming often differs |
| Ignoring funding and margin state | Carry and exposure can drift |
| Using market orders blindly | Slippage compounds under leverage |
| Missing partial liquidation signals | Position state can change before your strategy reacts |

With leveraged APIs, every weak assumption gets magnified. A stale balance view on spot is annoying. A stale margin view can become a liquidation problem.

Practical stance

If you trade derivatives through APIs, keep the risk engine close to the execution engine. Don’t separate them into unrelated services that communicate lazily. Position state, collateral state, and order state need tight reconciliation. If they drift apart, your strategy logic can be technically correct and still operationally dangerous.

Leveraging SDKs and Data Aggregators

Writing every exchange connector by hand is rarely the best use of engineering time. The better question is where abstraction helps and where it creates blind spots that hurt execution, monitoring, or reconciliation later.

Where SDKs help

Official SDKs are a good fit when one venue matters more than portability. They usually handle signing, timestamp formatting, endpoint models, and version updates with less friction than a custom client. That saves time on setup and reduces avoidable mistakes in authentication code.

Multi-exchange wrappers such as CCXT fit a different job. They are useful for research systems, market scanners, and early-stage bots where the goal is to compare venues quickly without building five separate adapters first. For normalized tasks like fetching markets, balances, tickers, and basic order types, that trade-off is often worth it.

The catch is predictable. Exchange-specific behavior still leaks through the abstraction layer. Conditional orders, client order IDs, trigger price rules, post-only handling, and settlement asset conventions often differ enough that a wrapper gives you a common interface but not identical behavior. Teams usually discover that during production incidents, not during backtests.

Where aggregators help

Aggregators solve a different problem. They are better for coverage than for execution.

If the system needs broad market discovery, historical reference data, or cross-venue monitoring, an aggregator can simplify the pipeline a lot. Instead of polling dozens of exchange APIs just to answer basic questions about listings, prices, or market availability, one data provider can feed the research layer while direct exchange connections stay focused on trading.

That separation is useful in hybrid strategies built around wallet tracking. An on-chain watcher might flag a token because a smart wallet accumulated it on DEXs, but the execution service still needs to answer practical questions fast: Is the asset listed on a liquid CEX, which symbol maps correctly, and do venue prices diverge enough to justify acting? Aggregator data helps triage those questions before the strategy commits more expensive exchange-specific calls.

If you want a practical reference for that workflow, this guide to CoinGecko API documentation for crypto data workflows shows how developers use an aggregator feed for discovery and analytics.

A simple decision table

| Need | Better fit |
| --- | --- |
| Single-venue execution with venue-specific features | Official SDK or direct client |
| Multi-venue prototyping | CCXT or similar wrapper |
| Broad market discovery and analytics | Aggregator API |
| Low-level control over exchange edge cases | Direct exchange integration |

The real trade-off

SDKs and aggregators reduce plumbing. They do not remove responsibility for correctness.

Production systems still need exchange-aware validation, retries with backoff, idempotent order handling, symbol mapping checks, and post-trade reconciliation. For copy trading and PnL analysis, this matters even more. If your on-chain signal says a target wallet bought an asset, but your CEX adapter maps the wrong market, rounds size incorrectly, or lags on fill status, the strategy can look right in theory and lose money in practice. Use abstractions to speed up integration. Keep the hard parts close to your own control.

Integration Pattern: Combining CEX Data and On-Chain Signals

The most useful pattern for advanced crypto systems is a hybrid one. Let on-chain activity generate the signal, then let centralized exchange infrastructure handle validation and execution. That gives you early intent from smart wallets and better trade mechanics where books are deeper and execution is easier to automate.

The architecture that supports this pattern has to be deliberate. This practical guide to crypto exchange architecture describes an effective model built around microservices, an API Gateway for authentication and rate limiting, an Order Management System, and WebSocket feeds for real-time price updates, enabling sub-100ms latency on high-volume workflows. That’s the right shape for a system that ingests on-chain events and reacts on a CEX quickly enough to matter.

A production-friendly hybrid flow

At a high level, the flow looks like this:

  1. An on-chain watcher detects a wallet action.
  2. The signal service classifies the event.
  3. A market-data service checks whether the asset is tradable on one or more centralized venues.
  4. A liquidity module evaluates spread, depth, and likely slippage.
  5. The execution service decides whether to place an order and how to size it.
  6. A portfolio service tracks fills, position state, and PnL.
  7. An analytics layer records the full decision trail for review.

That sequence sounds simple. The edge cases are not.

Where systems usually break

Asset mapping

The on-chain token identifier rarely maps cleanly to a CEX symbol without a maintained reference layer. Wrapped assets, migrated contracts, and exchange naming quirks can all produce false matches.

Time alignment

On-chain timestamps, exchange event timestamps, and your own internal processing timestamps need to be comparable. If they aren’t, your replay analysis becomes unreliable and your live strategy can react to stale signals.

Liquidity mismatch

A wallet may execute into a DEX pool that supports the trade, while the token’s CEX market has weak depth or wide spreads. If your copy engine doesn’t check the book, it can chase the signal into bad execution.
    A practical service split

    ServicePrimary responsibility
    Signal ingestionCapture wallet buys, sells, and swaps
    Asset normalizationMap token contracts to tradeable symbols
    Market dataMaintain exchange books and trades
    ExecutionSubmit, cancel, and reconcile orders
    Risk engineEnforce sizing, exposure, and venue rules
    PnL and analyticsAttribute fills and evaluate signal quality

Decision logic that works better than blind mirroring

Don’t copy every wallet action. Score it.

Useful filters often include:

  • Was the wallet action large enough to matter?
  • Is the centralized venue liquid enough right now?
  • Has the spread widened beyond acceptable execution bounds?
  • Does the signal conflict with existing exposure?
  • Can the trade be entered with controlled slippage using limit logic?
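Filters like these reduce to a gate function that sits between signal ingestion and execution. Every threshold below is a hypothetical placeholder you would tune per strategy; the structure is the point: the signal only earns execution permission if every venue-aware check passes.

```python
def should_copy(signal: dict, book: dict, limits: dict) -> bool:
    """Gate a wallet signal through venue-aware filters (thresholds illustrative).

    signal: {"usd_size": ...}              -- observed on-chain trade size
    book:   {"bid": ..., "ask": ..., "depth_usd": ...}  -- live CEX state
    limits: strategy configuration
    """
    spread = (book["ask"] - book["bid"]) / book["bid"]
    checks = [
        signal["usd_size"] >= limits["min_signal_usd"],   # large enough to matter
        book["depth_usd"] >= limits["min_depth_usd"],     # venue liquid enough now
        spread <= limits["max_spread"],                   # spread within bounds
        # copied size stays inside per-order exposure rules
        signal["usd_size"] * limits["copy_ratio"] <= limits["max_order_usd"],
    ]
    return all(checks)
```

In production each failed check would also be logged with its value, so rejected signals feed back into evaluating both the wallets you track and your own thresholds.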

That’s where a lot of generic crypto exchange APIs content falls short. It talks about data access and order entry, but not about the middle layer where on-chain intelligence becomes a constrained execution decision.

The best hybrid systems don’t “mirror trades.” They translate signals into venue-aware execution plans.

Operational rules worth enforcing

A few hard rules make these systems much safer:

  • Make user streams authoritative for fills. Poll only to reconcile.
  • Require deterministic asset mapping before any order can route.
  • Treat every external event as replayable. You’ll need it for debugging.
  • Use idempotent internal commands. Retries happen.
  • Separate signal confidence from execution permission. A strong signal can still be a bad trade if the venue is thin.

What this architecture buys you

You get earlier discovery from wallets, cleaner execution on centralized books, and far better observability than trying to trade directly from fragmented manual workflows. You also get a system you can audit. That matters when performance changes and you need to know whether the issue came from the wallets you track, the exchange feed, the order router, or your own risk filters.

The result is not “automatic alpha.” It’s something more useful. A pipeline that can consistently turn external signals into structured decisions, then into orders, then into measurable PnL.

Wallet tracking is only useful if you can act on it. Wallet Finder.ai helps traders identify profitable wallets across ecosystems like Ethereum, Solana, and Base, inspect full trading histories, monitor PnL and entry timing, and receive real-time alerts when tracked wallets move. If you’re building or refining a hybrid workflow that combines on-chain signals with exchange execution, it’s a practical way to surface the signals worth routing into your trading stack.