How three-phase commits led to better testing

An algorithmic trading saga unfolds

Testing our high frequency trading platform has always been a challenge. The amount of trading, and the complexity of that trading, have been increasing rapidly. This has led us to deploy more machinery to ensure we are performing as we expect. The three phases of our testing life cycle were as follows.

Phase 1: JUnit, Market Simulation & Logging

At the core of testing any algorithmic strategy is a suitable market simulator. Two things need to be simulated:

The market data (“IBM is 103.05/6 right now”) coming into the system, and what would happen to the orders sent to the market (“will the Buy 100 shares of IBM@103.07 be filled?”)

Our simulators started off simply. The market data simulator produced random but meaningful market data (shout out to the project), and the market simulator (shout out to quickfix) filled orders in a sensible fashion ("we have an order to BUY 100 shares of IBM@103.07, so let's fill it all").
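
As an illustration of that style of fill logic, a minimal sketch might look like the following (the class name and the hard-coded quote are hypothetical, not our actual simulator):

// Hypothetical sketch of naive fill logic: fill the whole order at the
// touch whenever its limit crosses the current quote.
public class NaiveFillSimulator
{
    private double bid = 103.05; // hard-coded quote for illustration
    private double ask = 103.06;

    // Returns the fill price, or -1 if the order does not cross.
    public double onOrder(boolean isBuy, int qty, double limit)
    {
        if (isBuy && limit >= ask)
            return ask;   // "BUY 100 IBM@103.07" crosses 103.06, fill it all
        if (!isBuy && limit <= bid)
            return bid;
        return -1;        // resting order; a real simulator would queue it
    }
}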

Our algorithmic logic then produced logs (shout out to log4j) saying "now doing this because 3 secs have elapsed" or perhaps "not doing that because (ask-bid) > 0.01." We ran several strategies, visually watched the orders they created, and pored through the logs to confirm everything was working well. It was tedious, but this method allowed us to trade 500,000 shares a day.

Example of a log line

17:01:00,653 | DEBUG | SerialDepletionTrader | workContainer:127.0.1.1:6053 | In:B:GCF:IN_SIM1:L000HB7W:IS:CUSTOMER_1:0      | pool-3-thread-15-AlgoTimerWaitComplete                   | SerialDepletionTrader:FIRST_REFRESHING:IN_SIM1:L000HB7W:0:5:163: We did not receive any fills from the PNP orders & its not a DNS Order so doing a NON-PNP Sweep.destination=AMEX,_isDns=false, doNonPNPNow=false | DVInboundTicket | 127.0.1.1 | 4e8ab041-2b19-4b24-82a1-91d7e267d616 |  | 01 Jul 2011

Phase 2: Structured logging and graphs

As our trading increased, understanding how the algorithm was operating over time became more important, and visualization is key to making sense of large amounts of data. To visualize, you need to know what the data is, so we developed an "emit" data format: as the algorithmic logic makes trading decisions, it logs name-value pairs of all the data it is using to make each decision. At each key event in the algorithm's life cycle, we emit another line of name-value pairs.

Example of an emission

emitTime=09:21:33:381\ticketId=IN_SIM:GOA 3089/03252011, emitType=Outbound:New, traderId=DisplayedPegOffOppositeCoreTrader, traderRunID=IN_SIM:GOA3087/03252008:0:4:10, algoId=10b18_twap, customer=CUST1, side=BUY, symbol=ARI, orderQty=97, limitPrice=4.92, UOT=100, cumQty=0, leavesQty=97, avgFillPrice=0.0, openOutboundTicketCount=1, committedQty=97, clOrdID=GOA3089/03252011, quoteBid=4.92, quoteAsk=4.94, quoteLast=4.93, quoteVolume=273000, quoteTradeCount=72, quoteVwap=4.94, quoteLastUpdateFromMarket=09:21:33:369, level2LastRefresh=09:20:50:971, obtAction=New, outboundTicketID=DEEPVALIN_SIM:JOA7217/03252011, obtOrderQty=97, obtDisplayQty=97, obtLimit=4.92, obtDisplayLimit=4.92, obtOrderType=LIMIT, obtDestination=NYSE, obtIsImmediateOrCancel=false, obtCumQty=0, obtLeavesQty=97

We then parsed these name-value pairs into a table format (names mapped to columns, each timestamp a row). Various variables were then grouped together and graphed ("fill % and price against time," for example). We created numerous graphs, up to 20 per algorithm, and our research team looked at them in detail to understand whether the algorithm was behaving as expected.
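
To make the parsing step concrete, here is a minimal sketch (the class name and the exact delimiters are assumptions based on the example emission above):

import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: split one emit line on tabs and commas into
// name-value pairs; each resulting map becomes one row of the table.
public class EmitLineParser
{
    public static Map<String, String> parse(String emitLine)
    {
        Map<String, String> row = new LinkedHashMap<>();
        for (String pair : emitLine.split("[,\t]"))
        {
            int eq = pair.indexOf('=');
            if (eq > 0)
                row.put(pair.substring(0, eq).trim(), pair.substring(eq + 1).trim());
        }
        return row;
    }
}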

Example of a graph generated from an emission

This was very helpful, as we could now ascertain what our system was doing and discuss the results easily as a group.

We relied on these graphs to discuss corner cases and to build our understanding of market structure. The issue, however, is that interpreting the graphs requires someone well versed in what each particular algorithm is trying to do. This approach helped us reach 5 million shares a day, but we began to struggle to identify when corner cases had been encountered, and to analyze enough data to know we were always doing the right thing.

Phase 3: EmitTest framework

Graphs were good, but we needed automation. Our EmitTest framework evolved out of this need.
The central idea behind the EmitTest framework is that a system comprises various interacting objects that change as time passes. These are linked to each other by references in the JVM (shout out to Java: why we use Java is a topic for another post). Events take place that cause these objects to change, sometimes resulting in changes to other objects. The EmitTest framework captures and analyzes this object-state-through-time graph.

The EmitTest framework comprises two parts: IEmittables and EmitTests. IEmittables are objects that can spit out their state at any time. When they do so, they create a unique emit row (emitID being the primary key). Part of this emit row is the emitID of every IEmittable that this IEmittable is linked to.

import java.util.Set;

public interface IEmittable
{
    // A unique identifier for this object's emit rows.
    String getEmitObjectID();

    // The other IEmittables this object is linked to.
    Set<IEmittable> getLinkedEmittables();
}
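
For illustration, a strategy object might implement the interface along these lines (a minimal sketch; the Strategy class and its quote field are hypothetical, not our production code):

import java.util.Collections;
import java.util.Set;

// Hypothetical strategy that emits its state and is linked to the
// quote it is trading against.
public class Strategy implements IEmittable
{
    private final String runId;      // identifies this strategy run
    private final IEmittable quote;  // the linked quote object

    public Strategy(String runId, IEmittable quote)
    {
        this.runId = runId;
        this.quote = quote;
    }

    public String getEmitObjectID()
    {
        return runId;
    }

    public Set<IEmittable> getLinkedEmittables()
    {
        return Collections.singleton(quote);
    }
}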

What this gives us in the emit data is all the objects in the system: what their state was at particular known times in the past, and what the state of their linked objects was at that time. This emit data can now be loaded into a set of Java objects called Emissions. Emissions, which are wrappers around a map, allow querying the value of a particular variable for that Emission. Emissions also allow navigation through time (Emissions.getNext returns the next Emission for this object in time) and to their linked Emissions ("get the linked quote Emissions from this strategy Emission"). This creates an entire object graph of each object in our system, its state through time, and its links to every other object (and its state through time).
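
A hedged sketch of such a wrapper, based only on the description above (not our exact API):

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of an Emission: a wrapper around one parsed emit row, with
// navigation through time and to the emissions of linked objects.
public class Emission
{
    private final Map<String, String> values; // name-value pairs of one emit row
    private Emission next;                    // next Emission for this object in time
    private final List<Emission> linked = new ArrayList<>(); // linked objects' Emissions

    public Emission(Map<String, String> values)
    {
        this.values = values;
    }

    // Query the value of a particular variable for this Emission.
    public String get(String name)
    {
        return values.get(name);
    }

    // Navigate through time to this object's next Emission.
    public Emission getNext()
    {
        return next;
    }

    public void setNext(Emission next)
    {
        this.next = next;
    }

    // Navigate to the Emissions of linked objects (e.g. the quote).
    public List<Emission> getLinked()
    {
        return linked;
    }

    public void link(Emission other)
    {
        linked.add(other);
    }
}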

EmitTests can then be "dropped" onto this Emission graph at any node they are interested in ("drop me on any strategy Emission node where we just got a fill"), navigate to any other linked node ("give me the next strategy Emission node in time and the linked quote Emission node"), and confirm that what we expect to happen did happen ("did we place another order 15 to 30ms later, one cent below the current bid, which we would get from the linked quote node?").
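
Building on the Emission sketch above, such a test could look roughly like this (the class name, the assumption that the quote is the first linked Emission, and the helper method are all hypothetical; the field names come from the example emission earlier):

// Hypothetical EmitTest: dropped on a strategy Emission where we just
// got a fill, it checks that the follow-up order arrived 15-30ms later
// at one cent below the bid taken from the linked quote Emission.
public class RefreshAfterFillTest
{
    public boolean run(Emission fillNode)
    {
        Emission next = fillNode.getNext();
        if (next == null || next.getLinked().isEmpty())
            return false;

        Emission quote = next.getLinked().get(0); // assumes quote is first link
        long elapsed = toMillis(next.get("emitTime")) - toMillis(fillNode.get("emitTime"));
        double bid = Double.parseDouble(quote.get("quoteBid"));
        double limit = Double.parseDouble(next.get("obtLimit"));

        return elapsed >= 15 && elapsed <= 30
            && Math.abs(limit - (bid - 0.01)) < 1e-9;
    }

    // Parse the HH:mm:ss:SSS timestamps used in the emit rows.
    private long toMillis(String t)
    {
        String[] p = t.split(":");
        return ((Long.parseLong(p[0]) * 60 + Long.parseLong(p[1])) * 60
                + Long.parseLong(p[2])) * 1000 + Long.parseLong(p[3]);
    }
}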

These EmitTests then either pass or fail, and we log the failures and pass counts. This lets us confirm that a production day had no issues. It also works with our continuous build server (shout out to Bamboo).

We make extensive use of annotations and ensure the tests can be written as part of the strategy itself. In addition, Emissions and EmitTests can be subclassed, so the nodes of the test graph (the Emissions) grow richer over time and become building blocks for future test writers.
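
As a rough illustration of the annotation style (the @EmitTest annotation shown here is hypothetical):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker annotation: tags a method inside a strategy as an
// EmitTest and tells the framework where on the Emission graph to drop it.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface EmitTest
{
    String dropOn() default ""; // e.g. "Strategy:Fill"
}

A method inside the strategy could then be tagged with, say, @EmitTest(dropOn = "Strategy:Fill") and discovered by the framework via reflection.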

This has helped us reach 140 million shares per day.

What's next

We are now planning to run our trading platform with sped-up time, by moving the "clock" forward whenever there is no work to be done and pushing recorded market data through the market-data simulator. This, in combination with Hadoop and Amazon EC2 (shout out to them), will let us run simulations of our clients' trading back through time and test far more corner cases more quickly.
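
A minimal sketch of the sped-up clock idea (the class and method names are hypothetical):

import java.util.PriorityQueue;

// Hypothetical virtual clock: instead of waiting in real time, jump the
// clock straight to the next scheduled event whenever no work is pending.
public class VirtualClock
{
    private long nowMillis;
    private final PriorityQueue<Long> eventTimes = new PriorityQueue<>();

    public long now()
    {
        return nowMillis;
    }

    public void schedule(long atMillis)
    {
        eventTimes.add(atMillis);
    }

    // Called when the platform has no work to do: skip ahead instantly.
    public void advanceToNextEvent()
    {
        Long next = eventTimes.poll();
        if (next != null && next > nowMillis)
            nowMillis = next;
    }
}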

Testing algorithmic trading strategies is a complex task, one that has taken us several years to crack. The EmitTest framework has been running for some time now and has helped identify numerous issues with our existing algorithms, allowing us to be more confident when rolling out new strategies.

By Paul Haefele, Managing Director – Technology
