Tuesday, May 08, 2012

Quantopian Update

I've been working for almost a year on Quantopian! The year passed very quickly, but the exuberance I felt embarking on a new venture hasn't faded one bit. One of my best friends (JB) joined as co-founder in January, we hired a stellar Python engineer in early February (the world-famous RealDiehl), and most recently an ML specialist. The four of us have been heads down on a from-scratch build of the Quantopian backtesting engine and IDE. Working with my teammates has been exhilarating and humbling - the code they cut is beautiful stuff, and the breadth of their combined expertise is overwhelming.

We are still sprinting to roll the latest and greatest to private alpha but we need more help! Please check out the Quantopian jobs page and pass it to your nerd/geek/hacker/supercool friends and anyone who wants to come hack Wall Street with us.

Another big announcement: Quantopian will soon have a permanent office right in downtown Boston. We debated heavily where we should base ourselves: NYC, Boston, or San Francisco. I had Caitie and the kids ready to roll west, but one of our board members made an impassioned pitch for Boston. He got us to fall in love with the Hub all over again. In Boston, a startup stands out against the backdrop of established businesses, the staid medical profession, and academia. In a way, you feel a bit more respect for the risks you're taking with a new venture. You also feel just a tad crazy trying to start something brand new while you watch friends in other industries enjoy country-club living. That is good, because you have to be a little crazy to try starting a company. If you forget that, you might lull yourself into thinking this startup life is a safe bet, which would likely be the last thing to cross your mind before you flame out.

On the more practical side, Boston is where JB and I have the strongest network. After we made the decision to stay in Boston, JB found a great article describing the quantitative analysis of startups done at Google Ventures. Here's the best snippet: 
Kraus says that analysts have discovered research that overturns some of Silicon Valley's most cherished bits of lore. Take that old idea that it pays to fail in the Valley: Wrong! Google Ventures' analysts found that first-time entrepreneurs with VC backing have a 15% chance of creating a successful company, while second-timers who had an auspicious debut see a 29% chance of repeating their achievement. By contrast, second-time entrepreneurs who failed the first time? They have only a 16% chance of success, in effect returning them to square one. "Failure doesn't teach you much," Kraus says with a shrug.
Location, in fact, plays a larger role in determining an entrepreneur's odds than failure, according to the Google Ventures data team. A guy who founded a successful company in Boston but is planning to start his next firm in San Francisco isn't a sure bet. "He'll revert back to that 15% rate," Kraus says, "because he's out of his personal network and that limits how quickly he can scale up." 
Nice to have a little statistical rationale, but it was really the speech in the board room that got us.

If you want to read more about Quantopian, please also check out the inaugural Quantopian blog post.

3 comments:

  1. That's great! If I see you in Boston I'll try to mug you. What is an ML specialist btw? I'm acronym challenged.

  2. Mug?! You can take the kid out of Somerville...

    ML is short for machine learning. We are researching different parameter optimization techniques for trading algorithms. Suppose you have a trading signal that takes the price of a stock as its input and outputs a value between 0 and 10.0. You could declare a parameter called "threshold" and tell your algorithm to buy the stock when signal >= threshold. Something like this:

    def handle_frame(frame, context):
        # Buy 100 shares of IBM whenever the price clears the threshold.
        if frame[context.IBM]['price'] >= context.threshold:
            order(context.IBM, 100)

    Parameter optimization will then try to find the optimal value for threshold. Our first approach is rather brute force - we just run many backtest simulations and use the output of the simulations as feedback for optimization methods. Right now we're trying gradient descent. Originally I had tried a genetic algorithm, but our ML expert politely told me that genetic algorithms are what people unfamiliar with ML use :).
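
    Roughly, the brute-force loop looks like the sketch below. Note that run_backtest is just a made-up stand-in for a full simulation that returns a performance score for a given threshold - it isn't real Quantopian API.

    # Sketch of the brute-force search over the threshold parameter.
    # run_backtest is a hypothetical helper: it runs one full backtest with
    # the given threshold and returns a score (say, total return or Sharpe).
    def brute_force_optimize(run_backtest, low=0.0, high=10.0, steps=50):
        best_threshold, best_score = None, float('-inf')
        for i in range(steps + 1):
            threshold = low + i * (high - low) / steps
            score = run_backtest(threshold)  # one full simulation per candidate
            if score > best_score:
                best_threshold, best_score = threshold, score
        return best_threshold, best_score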

    The main problem with the brute-force approach is the amount of computation time required. Our next project is to try online learning techniques. Techniques from that family adjust the parameter over the course of a single simulation - or, if you imagine flicking a switch, during the course of live trading.
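
    To give a flavor of the online approach, here's a toy sketch - the update rule below is invented purely for illustration, not the method we've settled on. The idea is to nudge the threshold a little after each trade instead of waiting for a whole backtest to finish:

    # Toy illustration of adjusting the parameter during a single run.
    class OnlineThreshold(object):
        def __init__(self, threshold=5.0, learning_rate=0.05):
            self.threshold = threshold
            self.learning_rate = learning_rate

        def update(self, signal, realized_return):
            # Only adjust when the signal actually triggered a trade:
            # a loss nudges the threshold up, a gain relaxes it slightly.
            if signal >= self.threshold:
                self.threshold -= self.learning_rate * realized_return
            # Keep the parameter inside the signal's 0-10.0 range.
            self.threshold = min(10.0, max(0.0, self.threshold))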

    Some of this may sound a bit too magical, so to be sure people feel comfortable trusting our implementation for optimization (and for trade simulation), we will be open-sourcing Zipline, our backtest engine. Drop me a line if you want preview access to the Zipline code!

  3. I would have said GA as well, so apparently I don't know ML much either. That sounds really interesting. I will probably have more time to play with this stuff after I finish my last two courses - one this summer and another in the fall.

    It is funny that you mentioned performance, because a buddy of mine has been blathering about using Hadoop and Amazon's cloud to do real-time beta calculations for stocks and bonds.
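
    (For anyone following along, the beta itself is just the slope of a stock's returns against the market's - covariance over variance for some window of returns. A toy NumPy sketch is below, with a made-up function name; the Hadoop/cloud piece would only be about doing this at scale and in real time.)

    import numpy as np

    def beta(asset_returns, market_returns):
        # beta = cov(asset, market) / var(market) over the same return window
        asset = np.asarray(asset_returns, dtype=float)
        market = np.asarray(market_returns, dtype=float)
        return np.cov(asset, market)[0, 1] / market.var(ddof=1)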

    Keep us all posted on how you are doing.

    -Crash
