Deep Value uses a proprietary execution management system that allows us to scale trading simply by adding commodity boxes. Our trading grid is tolerant of machine and process failures. It provides load balancing and an ideal multi-threaded platform for developing sophisticated, event-based trading strategies. The grid is a hybrid shared-memory and message-passing design that allows high throughput with low latency while still preserving transactionality and simplicity. We use transactional databases, with a custom database-mediation layer providing batch updates in a fault-tolerant, zero-pause fashion. We have sophisticated monitoring infrastructure to understand in fine detail what the trading grid and the algorithms are doing, both in real time and at end of day.
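As a rough illustration of the batch-update idea (not the production implementation: the OrderEventWriter class, the order_events table, and the SQLite backend below are all hypothetical stand-ins), the sketch shows trading threads enqueueing events while a single writer thread flushes them to the database in transactional batches, so no trading path ever pauses on a write:

    # Minimal sketch of a batching database-mediation layer.
    # Illustrative only: names and schema are hypothetical, and SQLite
    # stands in for whatever transactional database is actually used.
    import queue
    import sqlite3
    import threading

    class OrderEventWriter:
        """Accepts events without blocking; flushes them in transactional batches."""

        def __init__(self, db_path, batch_size=500, flush_secs=0.25):
            self._queue = queue.Queue()
            self._batch_size = batch_size
            self._flush_secs = flush_secs
            self._db_path = db_path
            threading.Thread(target=self._writer_loop, daemon=True).start()

        def record(self, order_id, event_type, payload):
            # Called from trading threads; never touches the database directly.
            self._queue.put((order_id, event_type, payload))

        def _writer_loop(self):
            conn = sqlite3.connect(self._db_path)
            conn.execute("CREATE TABLE IF NOT EXISTS order_events "
                         "(order_id TEXT, event_type TEXT, payload TEXT)")
            while True:
                batch = [self._queue.get()]            # block for the first event
                while len(batch) < self._batch_size:
                    try:
                        batch.append(self._queue.get(timeout=self._flush_secs))
                    except queue.Empty:
                        break                          # flush a partial batch
                with conn:                             # one transaction per batch
                    conn.executemany(
                        "INSERT INTO order_events VALUES (?, ?, ?)", batch)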
Master orders are typically received via direct FIX connections, partner front-end vendors, or an in-house installation within your infrastructure. In-house installation is typical for high-volume clients looking to leverage their own economics on market center connectivity and/or clearing.
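For context on the FIX intake path, the following simplified sketch decodes a raw tag=value FIX NewOrderSingle into a master-order dictionary. It omits session management and checksums, and the output fields are illustrative rather than Deep Value's internal order model:

    # Minimal sketch of decoding an inbound FIX NewOrderSingle.
    # Fields shown are illustrative; real intake handles sessions, sequence
    # numbers, and checksums, none of which appear here.
    SOH = "\x01"

    def parse_fix(raw):
        """Split a raw FIX message into a {tag: value} dict."""
        return dict(field.split("=", 1) for field in raw.strip(SOH).split(SOH))

    def to_master_order(fields):
        if fields.get("35") != "D":           # 35=D is NewOrderSingle
            raise ValueError("not a new order")
        return {
            "client_order_id": fields["11"],  # ClOrdID
            "symbol": fields["55"],           # Symbol
            "side": "BUY" if fields["54"] == "1" else "SELL",
            "quantity": int(fields["38"]),    # OrderQty
            "limit_price": float(fields["44"]) if "44" in fields else None,
        }

    raw = SOH.join(["8=FIX.4.2", "35=D", "11=ABC123", "55=IBM",
                    "54=1", "38=5000", "44=128.50"]) + SOH
    print(to_master_order(parse_fix(raw)))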
Deep Value has developed a sophisticated behavior-oriented API. Complex algorithms are broken down into smaller agent-based behaviors that coordinate to produce more complex overall algorithmic outcomes. These sit on top of sophisticated market data and a stock-oriented toolkit. The market data toolkit allows us to ask detailed statistical questions about the market data we are seeing. This in turn is built on top of a proprietary event- and message-passing model that allows callbacks to be easily managed, with both intra-node and cluster-wide eventing available.
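A minimal sketch of this agent-based decomposition might look like the following. The EventBus, SpreadWatcher, and Slicer names and the "quote"/"spread_widened" topics are hypothetical; the point is only that small behaviors subscribe to events and coordinate by publishing new ones:

    # Minimal sketch of behaviors coordinating over an event bus.
    # All names here are illustrative, not the actual API.
    from collections import defaultdict

    class EventBus:
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, topic, callback):
            self._subscribers[topic].append(callback)

        def publish(self, topic, event):
            for callback in self._subscribers[topic]:
                callback(event)

    class SpreadWatcher:
        """Small behavior: watches quotes and announces when the spread widens."""
        def __init__(self, bus, threshold_bps=10):
            self._bus = bus
            self._threshold = threshold_bps
            bus.subscribe("quote", self.on_quote)

        def on_quote(self, quote):
            spread_bps = (quote["ask"] - quote["bid"]) / quote["bid"] * 1e4
            if spread_bps > self._threshold:
                self._bus.publish("spread_widened", {"spread_bps": spread_bps})

    class Slicer:
        """Small behavior: reacts to spread events by adjusting child orders."""
        def __init__(self, bus):
            bus.subscribe("spread_widened", self.on_spread_widened)

        def on_spread_widened(self, event):
            print("pausing passive slices, spread is %.1f bps" % event["spread_bps"])

    bus = EventBus()
    watcher, slicer = SpreadWatcher(bus), Slicer(bus)   # the algorithm is the composition
    bus.publish("quote", {"bid": 100.00, "ask": 100.15})

In the grid described above, the same publish/subscribe pattern extends beyond a single in-memory bus, with both intra-node and cluster-wide eventing.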
Our algorithms emit structured descriptions of what they are doing through time, allowing our research and automated validation groups to develop sophisticated visualizations and tests that ensure orders are being processed as expected and that behavioral anomalies are caught early and addressed.
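A structured decision record might, for example, be emitted as one timestamped JSON object per event (the field names here are illustrative only, not the actual emitted format):

    # Minimal sketch of structured decision logging.
    import json, sys, time

    def emit(stream, algo, order_id, action, **details):
        record = {"ts": time.time(), "algo": algo,
                  "order_id": order_id, "action": action, **details}
        stream.write(json.dumps(record) + "\n")

    emit(sys.stdout, algo="VWAP", order_id="ABC123",
         action="child_order_sent", venue="NASDAQ", qty=300, limit=128.49)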
We have a dedicated support desk that monitors our trading cluster and algorithms and handles specific questions from markets and clients. In the event of an issue with specific child orders or market connections, they coordinate and take appropriate action. They have a large array of both cluster and algorithmic monitoring tools (graphs, alerts, automated tests) to help them understand what is occurring at a fine level of detail and respond to customer queries in a timely and informed manner.
Deep Value’s research team uses state-of-the-art Big Data work-distribution frameworks (currently Hadoop-based) to test algorithm behavior against historical level 2 and trade data, confirming that performance is as expected and trying out new ideas. They routinely run 200-machine high-performance clusters to analyze how specific changes will impact performance. Our simulation framework lets us test algorithmic changes against actual order flow to ascertain the impact of any change.
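A typical job of this kind can be sketched as a Hadoop Streaming mapper that reads historical quote records from standard input and emits per-symbol spread observations for a reducer to aggregate. The tab-separated input layout assumed here is hypothetical:

    # Minimal sketch of a Hadoop Streaming mapper over historical quote data.
    # Assumes a hypothetical tab-separated input of symbol, bid, ask per line;
    # a companion reducer would average the spreads per symbol.
    import sys

    for line in sys.stdin:
        try:
            symbol, bid, ask = line.rstrip("\n").split("\t")
            spread_bps = (float(ask) - float(bid)) / float(bid) * 1e4
        except ValueError:
            continue                      # skip malformed records
        # Hadoop Streaming groups by the key before the tab for the reducer.
        print("%s\t%.2f" % (symbol, spread_bps))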
Testing algorithmic logic is hard, as it involves a complex set of interactions between the market and our event-based logic. Deep Value has invested extensively in automated approaches to behavior validation. All objects in the system store their state as they move through time, both in our continuous integration environment and in production. Test suites written by the firm’s Automated Validation team are “dropped” onto these object-time graphs. The underlying framework maps these tests against the emitted data to confirm that each object behaved as expected (e.g., “The spread increased between this quote and the previous one by more than 10 bps versus the average spread over the last 5 minutes. Did the SpreadNarrowerTrader start and execute its logic?”).
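A validation check of that kind might be sketched as follows. The record layout is illustrative rather than the real emitted format, but the predicate mirrors the example above: wherever the spread widens by more than 10 bps against its trailing 5-minute average, a SpreadNarrowerTrader start event must follow within a short window:

    # Minimal sketch of a validation check "dropped" onto emitted object state.
    # Data layout is hypothetical; SpreadNarrowerTrader follows the example in
    # the text.
    FIVE_MIN = 300.0

    def spread_widening_triggers_trader(quotes, trader_events, window_secs=5.0):
        """Return the timestamps of spread-widening quotes that were NOT
        followed by a SpreadNarrowerTrader start within window_secs."""
        failures = []
        for i, q in enumerate(quotes):
            trailing = [p["spread_bps"] for p in quotes[:i]
                        if q["ts"] - p["ts"] <= FIVE_MIN]
            if not trailing:
                continue
            avg = sum(trailing) / len(trailing)
            if q["spread_bps"] - avg > 10:
                started = any(e["object"] == "SpreadNarrowerTrader"
                              and e["event"] == "started"
                              and 0 <= e["ts"] - q["ts"] <= window_secs
                              for e in trader_events)
                if not started:
                    failures.append(q["ts"])
        return failures

    quotes = [{"ts": 0.0, "spread_bps": 8.0},
              {"ts": 60.0, "spread_bps": 9.0},
              {"ts": 120.0, "spread_bps": 25.0}]   # widens ~16.5 bps vs average
    events = [{"ts": 121.0, "object": "SpreadNarrowerTrader", "event": "started"}]
    assert spread_widening_triggers_trader(quotes, events) == []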