{"title": "How to Hedge an Option Against an Adversary: Black-Scholes Pricing is Minimax Optimal", "book": "Advances in Neural Information Processing Systems", "page_first": 2346, "page_last": 2354, "abstract": "We consider a popular problem in finance, option pricing, through the lens of an online learning game between Nature and an Investor. In the Black-Scholes option pricing model from 1973, the Investor can continuously hedge the risk of an option by trading the underlying asset, assuming that the asset's price fluctuates according to Geometric Brownian Motion (GBM). We consider a worst-case model, in which Nature chooses a sequence of price fluctuations under a cumulative quadratic volatility constraint, and the Investor can make a sequence of hedging decisions. Our main result is to show that the value of our proposed game, which is the regret'' of hedging strategy, converges to the Black-Scholes option price. We use significantly weaker assumptions than previous work---for instance, we allow large jumps in the asset price---and show that the Black-Scholes hedging strategy is near-optimal for the Investor even in this non-stochastic framework.\"", "full_text": "How to Hedge an Option Against an Adversary:\n\nBlack-Scholes Pricing is Minimax Optimal\n\nJacob Abernethy\n\nUniversity of Michigan\n\njabernet@umich.edu\n\nPeter L. Bartlett\n\nUniversity of California at Berkeley\n\nand Queensland University of Technology\n\nbartlett@cs.berkeley.edu\n\nRafael M. Frongillo\nMicrosoft Research\n\nraf@cs.berkeley.edu\n\nAndre Wibisono\n\nUniversity of California at Berkeley\nwibisono@cs.berkeley.edu\n\nAbstract\n\nWe consider a popular problem in \ufb01nance, option pricing, through the lens of an\nonline learning game between Nature and an Investor. 
In the Black-Scholes op-\ntion pricing model from 1973, the Investor can continuously hedge the risk of\nan option by trading the underlying asset, assuming that the asset\u2019s price \ufb02uctu-\nates according to Geometric Brownian Motion (GBM). We consider a worst-case\nmodel, in which Nature chooses a sequence of price \ufb02uctuations under a cumula-\ntive quadratic volatility constraint, and the Investor can make a sequence of hedg-\ning decisions. Our main result is to show that the value of our proposed game,\nwhich is the \u201cregret\u201d of hedging strategy, converges to the Black-Scholes option\nprice. We use signi\ufb01cantly weaker assumptions than previous work\u2014for instance,\nwe allow large jumps in the asset price\u2014and show that the Black-Scholes hedging\nstrategy is near-optimal for the Investor even in this non-stochastic framework.\n\n1\n\nIntroduction\n\nAn option is a \ufb01nancial contract that allows the purchase or sale of a given asset, such as a stock,\nbond, or commodity, for a predetermined price on a predetermined date. The contract is named as\nsuch because the transaction in question is optional for the purchaser of the contract. Options are\nbought and sold for any number of reasons, but in particular they allow \ufb01rms and individuals with\nrisk exposure to hedge against potential price \ufb02uctuations. Airlines, for example, have heavy fuel\ncosts and hence are frequent buyers of oil options.\nWhat ought we pay for the privilege of purchasing an asset at a \ufb01xed price on a future expiration\ndate? The dif\ufb01culty with this question, of course, is that while we know the asset\u2019s previous prices,\nwe are uncertain as to its future price. In a seminal paper from 1973, Fischer Black and Myron\nScholes introduced what is now known as the Black-Scholes Option Pricing Model, which led to a\nboom in options trading as well as a huge literature on the problem of derivative pricing [2]. 
Black\nand Scholes had a key insight that a \ufb01rm which had sold/purchased an option could \u201chedge\u201d against\nthe future cost/return of the option by buying and selling the underlying asset as its price \ufb02uctuates.\nTheir model is based on stochastic calculus and requires a critical assumption that the asset\u2019s price\nbehaves according to a Geometric Brownian Motion (GBM) with known drift and volatility.\nThe GBM assumption in particular implies that (almost surely) an asset\u2019s price \ufb02uctuates continu-\nously. The Black-Scholes model additionally requires that the \ufb01rm be able to buy and sell contin-\nuously until the option\u2019s expiration date. Neither of these properties are true in practice: the stock\nmarket is only open eight hours per day, and stock prices are known to make signi\ufb01cant jumps even\n\n1\n\n\fduring regular trading. These and other empirical observations have led to much criticism of the\nBlack-Scholes model.\nAn alternative model for option pricing was considered1 by DeMarzo et al. [3], who posed the\nquestion: \u201cCan we construct hedging strategies that are robust to adversarially chosen price \ufb02uc-\ntuations?\u201d Essentially, the authors asked if we may consider hedging through the lens of regret\nminimization in online learning, an area that has proved fruitful, especially for obtaining guarantees\nrobust to worst-case conditions. Within this minimax option pricing framework, DeMarzo et al. pro-\nvided a particular algorithm resembling the Weighted Majority and Hedge algorithms [5, 6] with a\nnice bound.\nRecently, Abernethy et al. [1] took the minimax option pricing framework a step further, analyzing\nthe zero-sum game being played between an Investor, who is attempting to replicate the option\npayoff, and Nature, who is sequentially setting the price changes of the underlying asset. 
The\nInvestor\u2019s goal is to \u201chedge\u201d the payoff of the option as the price \ufb02uctuates, whereas Nature attempts\nto foil the Investor by choosing a challenging sequence of price \ufb02uctuations. The value of this game\ncan be interpreted as the \u201cminimax option price,\u201d since it is what the Investor should pay for the\noption against an adversarially chosen price path. The main result of Abernethy et al. was to show\nthat the game value approaches the Black-Scholes option price as the Investor\u2019s trading frequency\nincreases. Put another way, the minimax price tends to the option price under the GBM assumption.\nThis lends signi\ufb01cant further credibility to the Black-Scholes model, as it suggests that the GBM\nassumption may already be a \u201cworst-case model\u201d in a certain sense.\nThe previous result, while useful and informative, left two signi\ufb01cant drawbacks. First, their tech-\nniques used minimax duality to compute the value of the game, but no particular hedging algorithm\nfor the Investor is given. This is in contrast to the Black-Scholes framework (as well as to the De-\nMarzo et al.\u2019s result [3]) in which a hedging strategy is given explicitly. Second, the result depended\non a strong constraint on Nature\u2019s choice of price path: the multiplicative price variance is uniformly\nconstrained, which forbids price jumps and other large \ufb02uctuations.\nIn this paper, we resolve these two drawbacks. We consider the problem of minimax option pricing\nwith much weaker constraints: we restrict the sum over the length of the game of the squared price\n\ufb02uctuations to be no more than a constant c, and we allow arbitrary price jumps, up to a bound \u21e3. We\nshow that the minimax option price is exactly the Black-Scholes price of the option, up to an additive\nterm of O(c\u21e31/4). 
Furthermore, we give an explicit hedging strategy: this upper bound is achieved when the Investor\u2019s strategy is essentially a version of the Black-Scholes hedging algorithm.\n\n2 The Black-Scholes Formula\n\nLet us now briefly review the Black-Scholes pricing formula and hedging strategy. The derivation requires some knowledge of continuous random walks and stochastic calculus\u2014Brownian motion, It\u00f4\u2019s Lemma, a second-order partial differential equation\u2014and we shall only give a cursory treatment of the material. For further development we recommend a standard book on stochastic calculus, e.g. [8]. Let us imagine we have an underlying asset A whose price is fluctuating. We let W(t) be a Brownian motion, also known as a Wiener process, with zero drift and unit variance; in particular, W(0) = 0 and W(t) \u223c N(0, t) for t > 0. We shall imagine that A\u2019s price path G(t) is described by a geometric Brownian motion with drift \u00b5 and volatility \u03c3, which we can describe via the definition of a Brownian motion: G(t) d= exp{(\u00b5 - \u03c3\u00b2/2)t + \u03c3W(t)}.\nIf an Investor purchases a European call option on some asset A (say, MSFT stock) with a strike price of K > 0 that matures at time T, then the Investor has the right to buy a share of A at price K at time T. Of course, if the market price of A at T is G(T), then the Investor will only \u201cexercise\u201d the option if G(T) > K, since the Investor has no benefit of purchasing the asset at a price higher than the market price. Hence, the payoff of a European call option has a profit function of the form max{0, G(T) - K}. 
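The call payoff and the GBM price model just described can be made concrete in a few lines of Python. This is an illustrative sketch only; the strike, drift, and volatility values below are ours, not the paper's.

```python
import math
import random

def european_call_payoff(x, K):
    """Payoff max{0, x - K} of a European call with strike K at asset price x."""
    return max(0.0, x - K)

def sample_gbm_terminal(S0, mu, sigma, T, rng):
    """One sample of G(T) = S0 * exp((mu - sigma^2/2) T + sigma W(T)),
    drawing W(T) ~ N(0, T) directly rather than simulating a path."""
    w_T = rng.gauss(0.0, math.sqrt(T))
    return S0 * math.exp((mu - 0.5 * sigma ** 2) * T + sigma * w_T)

rng = random.Random(0)
# Illustrative parameters: at-the-money strike, 5% drift, 20% volatility, 1 year.
S0, K, mu, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
payoffs = [european_call_payoff(sample_gbm_terminal(S0, mu, sigma, T, rng), K)
           for _ in range(10_000)]
avg_payoff = sum(payoffs) / len(payoffs)
```

Averaging many such samples gives a Monte Carlo estimate of the option's expected payoff under the GBM assumption.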
Throughout the paper we shall use gEC(x) := max{0, x - K} to refer to the payout of the European call when the price of asset A at time T is x (the parameter K is implicit).\n\n1Although it does not have quite the same flavor, a similar approach was explored in the book of Vovk and Shafer [7].\n\n2\n\n\fWe assume the current time is t. The Black-Scholes derivation begins with a guess: assume that the \u201cvalue\u201d of the European call option can be described by a smooth function V(G(t), t), depending only on the current price of the asset G(t) and the time to expiration T - t. We can immediately define a boundary condition on V, since at the expiration time T the value of the option is V(G(T), 0) = gEC(G(T)).\nSo how do we arrive at a value for the option at another time point t? We assume the Investor has a hedging strategy, \u0394(x, t), that determines the amount to invest when the current price is x and the time is t. Notice that if the asset\u2019s current price is G(t) and the Investor purchases \u0394(G(t), t) dollars of asset A at t, then the incremental amount of money made in an infinitesimal amount of time is \u0394(G(t), t) dG/G(t), since dG/G(t) is the instantaneous multiplicative price change at time t. Of course, if the earnings of the Investor are guaranteed to exactly cancel out the infinitesimal change in the value of the option dV(G(t), t), then the Investor is totally hedged with respect to the option payout for any sample of G for the remaining time to expiration. In other words, we hope to achieve dV(G, t) = \u0394(G, t) dG/G. However, by It\u00f4\u2019s Lemma [8] we have the following useful identity:\n\ndV(G, t) = (\u2202V/\u2202x) dG + (\u2202V/\u2202t) dt + (1/2)\u03c3\u00b2G\u00b2 (\u2202\u00b2V/\u2202x\u00b2) dt.   (1)\n\nBlack and Scholes proposed a generic hedging strategy, that the investor should invest\n\n\u0394(x, t) = x \u2202V/\u2202x   (2)\n\ndollars in the asset A when the price of A is x at time t. 
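For the European call with zero interest rate, the dollar position x ∂V/∂x prescribed above has the familiar closed form x Φ(d1). The sketch below assumes that zero-rate setting; the function name and parameter values are ours.

```python
import math

def norm_cdf(z):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_dollar_hedge(x, K, sigma, tau):
    """Dollars Delta(x, t) = x * dV/dx to invest in the asset for a European
    call with strike K, volatility sigma, and time to expiration tau = T - t,
    assuming zero interest rate (so dV/dx = Phi(d1))."""
    if tau <= 0.0:
        return x if x > K else 0.0  # at expiration: fully invested iff in the money
    d1 = (math.log(x / K) + 0.5 * sigma ** 2 * tau) / (sigma * math.sqrt(tau))
    return x * norm_cdf(d1)
```

As the option moves deep in the money the dollar hedge approaches the full price x (one share held), and deep out of the money it approaches zero, matching the intuition that the hedge tracks the option's sensitivity to the price.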
As mentioned, the goal of the Investor is to hedge out risk so that it is always the case that dV(G, t) = \u0394(G, t) dG/G. Combining this goal with Equations (1) and (2), we have\n\n\u2202V/\u2202t + (1/2)\u03c3\u00b2x\u00b2 \u2202\u00b2V/\u2202x\u00b2 = 0.   (3)\n\nNotice the latter is an entirely non-stochastic PDE, and indeed it can be solved explicitly:\n\nV(x, t) = EY[gEC(x \u00b7 exp(Y))] where Y \u223c N(-(1/2)\u03c3\u00b2(T - t), \u03c3\u00b2(T - t)).   (4)\n\nRemark: While we have described the derivation for the European call option, with payoff function gEC, the analysis above does not rely on this specific choice of g. We refer the reader to a standard text on asset pricing for more on this [8].\n\n3 The Minimax Hedging Game\n\nWe now describe a sequential decision protocol in which an Investor makes a sequence of trading decisions on some underlying asset, with the goal of hedging away the risk of some option (or other financial derivative) whose payout depends on the final price of the asset at the expiration time T. We assume the Investor is allowed to make a trading decision at each of n time periods, and before making this trade the investor observes how the price of the asset has changed since the previous period. Without loss of generality, we can assume that the current time is 0 and the trading periods occur at {T/n, 2T/n, . . . , 1}, although this will not be necessary for our analysis.\nThe protocol is as follows.\n1: Initial price of asset is S = S0.\n2: for i = 1, 2, . . . , n do\n3:   Investor hedges, invests \u0394i \u2208 R dollars in asset.\n4:   Nature selects a price fluctuation ri and updates price S \u2190 S(1 + ri).\n5:   Investor receives (potentially negative) profit of \u0394i ri.\n6: end for\n7: Investor is charged the cost of the option, g(S) = g(S0 \u00b7 \u220f_{i=1}^n (1 + ri)).\n\nStepping back for a moment, we see that the Investor is essentially trying to minimize the following objective:\n\ng(S0 \u00b7 \u220f_{i=1}^n (1 + ri)) - \u2211_{i=1}^n \u0394i ri.\n\n3\n\n\fWe can interpret the above expression as a form of regret: the Investor chose to execute a trading strategy, earning him \u2211_{i=1}^n \u0394i ri, but in hindsight might have rather purchased the option instead, with a payout of g(S0 \u00b7 \u220f_{i=1}^n (1 + ri)). What is the best hedging strategy the Investor can execute to minimize the difference between the option payoff and the gains/losses from hedging? Indeed, how much regret may be suffered against a worst-case sequence of price fluctuations?\n\nConstraining Nature. The cost of playing the above sequential game is clearly going to depend on how much we expect the price to fluctuate. In the original Black-Scholes formulation, the price volatility is a major parameter in the pricing function. In the work of Abernethy et al., a key assumption was that Nature may choose any r1, . . . , rn with the constraint that E[ri\u00b2 | r1, . . . , r_{i-1}] = O(1/n).2 Roughly, this constraint means that in any \u03b5-sized time interval, the price fluctuation variance shall be no more than \u03b5. This constraint, however, does not allow for large price jumps during trading. In the present work, we impose a much weaker set of constraints, described as follows:3\n\n\u2022 TotVarConstraint: The total price fluctuation is bounded by a constant c: \u2211_{i=1}^n ri\u00b2 \u2264 c.\n\u2022 JumpConstraint: Every price jump |ri| is no more than \u03b6, for some \u03b6 > 0 (which may depend on n).\n\nThe first constraint above says that Nature is bounded by how much, in total, the asset\u2019s price path can fluctuate. 
The latter says that at no given time can the asset\u2019s price jump more than a given value. It is worth noting that if c \u2265 n\u03b6\u00b2 then TotVarConstraint is superfluous, whereas JumpConstraint becomes superfluous if c < \u03b6\u00b2.\n\nThe Minimax Option Price We are now in a position to define the value of the sequential option pricing game using a minimax formulation. That is, we shall ask how much the Investor loses when making optimal trading decisions against worst-case price fluctuations chosen by Nature. Let V_\u03b6^(n)(S; c, m) be the value of the game, measured by the investor\u2019s loss, when the asset\u2019s current price is S \u2265 0, the TotVarConstraint is c \u2265 0, the JumpConstraint is \u03b6 > 0, the total number of trading rounds is n \u2208 N, and there are 0 \u2264 m \u2264 n rounds remaining. We define recursively:\n\nV_\u03b6^(n)(S; c, m) = inf_{\u0394 \u2208 R} sup_{r : |r| \u2264 min{\u03b6, \u221ac}} [-\u0394r + V_\u03b6^(n)((1 + r)S; c - r\u00b2, m - 1)],   (5)\n\nwith the base case V_\u03b6^(n)(S; c, 0) = g(S). Notice that the constraint under the supremum enforces both TotVarConstraint and JumpConstraint. For simplicity, we will write V_\u03b6^(n)(S; c) := V_\u03b6^(n)(S; c, n). This is the value of the game that we are interested in analyzing.\nTowards establishing an upper bound on the value (5), we shall discuss the question of how to choose the hedge parameter \u0394 on each round. We can refer to a \u201chedging strategy\u201d in this game as a function of the tuple (S, c, m, n, \u03b6, g(\u00b7)) that returns a hedge position. 
In our upper bound, in fact we need only consider hedging strategies \u0394(S, c) that depend on S and c; there certainly will be a dependence on g(\u00b7) as well but we leave this implicit.\n\n4 Asymptotic Results\n\nThe central focus of the present paper is the following question: \u201cFor fixed c and S, what is the asymptotic behavior of the value V_\u03b6^(n)(S; c)?\u201d and \u201cIs there a natural hedging strategy \u0394(S, c) that (roughly) achieves this value?\u201d In other words, what is the minimax value of the option, as well as the optimal hedge, when we fix the variance budget c and the asset\u2019s current price S, but let the number of rounds tend to \u221e? We now give answers to these questions, and devote the remainder of the paper to developing the results in detail.\nWe consider payoff functions g : R\u22650 \u2192 R\u22650 satisfying three constraints:\n\n2The constraint in [1] was E[ri\u00b2 | r1, . . . , r_{i-1}] \u2264 exp(c/n) - 1, but this is roughly equivalent.\n3We note that Abernethy et al. [1] also assumed that the multiplicative price jumps |ri| are bounded by \u03b6\u0302n = \u03a9(\u221a((log n)/n)); this is a stronger assumption than what we impose on (\u03b6n) in Theorem 1.\n\n4\n\n\f1. g is convex.\n2. g is L-Lipschitz, i.e. |g(x) - g(y)| \u2264 L|x - y|.\n3. g is eventually linear, i.e. there exists K > 0 such that g(x) is a linear function for all x \u2265 K; in this case we also say g is K-linear.\n\nWe believe the first two conditions are strictly necessary to achieve the desired results. The K-linearity may not be necessary but makes our analysis possible. We note that the constraints above encompass the standard European call and put options.\nHenceforth we shall let G be a zero-drift GBM with unit volatility. In particular, we have that log G(t) \u223c N(-(1/2)t, t). For S, c \u2265 0, define the function\n\nU(S, c) = EG[g(S \u00b7 G(c))],\n\nand observe that U(S, 0) = g(S). 
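For the European call payoff, U(S, c) has a closed form (a Black-Scholes formula with total variance c and zero rate), and a Monte Carlo average over log G(c) ~ N(-c/2, c) should reproduce it. A sketch, with sample sizes and parameters of our choosing:

```python
import math
import random

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def U_call(S, c, K):
    """Closed form of U(S, c) = E[g_EC(S * G(c))] for the European call,
    where log G(c) ~ N(-c/2, c): a Black-Scholes formula with total
    variance c and zero interest rate."""
    if c == 0.0:
        return max(0.0, S - K)  # U(S, 0) = g(S)
    d1 = (math.log(S / K) + 0.5 * c) / math.sqrt(c)
    d2 = d1 - math.sqrt(c)
    return S * norm_cdf(d1) - K * norm_cdf(d2)

def U_monte_carlo(S, c, K, n_samples=200_000, seed=0):
    """Estimate U(S, c) by sampling Y = log G(c) ~ N(-c/2, c)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        y = rng.gauss(-0.5 * c, math.sqrt(c))
        total += max(0.0, S * math.exp(y) - K)
    return total / n_samples
```

The -c/2 shift in the mean of log G(c) is exactly what makes G(c) a martingale (E[G(c)] = 1), so the closed form and the simulation agree.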
Our goal will be to show that U is asymptotically the minimax price of the option. Most importantly, this function U(S, c) is identical to V(S, T - c/\u03c3\u00b2), the Black-Scholes value of the option in (4) when the GBM volatility parameter is \u03c3 in the Black-Scholes analysis. In particular, analogous to (3), U(S, c) satisfies a differential equation:\n\n(1/2)S\u00b2 \u2202\u00b2U/\u2202S\u00b2 - \u2202U/\u2202c = 0.   (6)\n\nThe following is our main result of this paper.\nTheorem 1. Let S > 0 be the initial asset price and let c > 0 be the variance budget. Assume we have a sequence {\u03b6n} with lim_{n\u2192\u221e} \u03b6n = 0 and lim inf_{n\u2192\u221e} n\u03b6n\u00b2 > c. Then\n\nlim_{n\u2192\u221e} V_{\u03b6n}^(n)(S; c) = U(S, c).\n\nThis statement tells us that the minimax price of an option, when Nature has a total fluctuation budget of c, approaches the Black-Scholes price of the option when the time to expiration is c. This is particularly surprising since our minimax pricing framework made no assumptions as to the stochastic process generating the price path. This is the same conclusion as in [1], but we obtained our result with a significantly weaker assumption. Unlike [1], however, we do not show that the adversary\u2019s minimax optimal stochastic price path necessarily converges to a GBM. The convergence of Nature\u2019s price path to GBM in [1] was made possible by the uniform per-round variance constraint.\nThe previous theorem is the result of two main technical contributions. First, we prove a lower bound on the limiting value of V_{\u03b6n}^(n)(S; c) by exhibiting a simple randomized strategy for Nature in the form of a stochastic price path, and appealing to the Lindeberg-Feller central limit theorem. Second, we prove an O(c\u03b6^{1/4}) upper bound on the deviation between V_\u03b6^(n)(S; c) and U(S, c). 
The upper bound is obtained by providing an explicit strategy for the Investor:\n\n\u0394(S, c) = S \u2202U(S, c)/\u2202S,\n\nand carefully bounding the difference between the output using this strategy and the game value. In the course of doing so, because we are invoking Taylor\u2019s remainder theorem, we need to bound the first few derivatives of U(S, c). Bounding these derivatives turns out to be the crux of the analysis; in particular, it uses the full force of the assumptions on g, including that g is eventually linear, to avoid the pathological cases when the derivative of g converges to its limiting value very slowly.\n\n5 Lower Bound\n\nIn this section we prove that U(S, c) is a lower bound to the game value V_{\u03b6n}^(n)(S; c). We note that the result in this section does not use the assumptions in Theorem 1 that \u03b6n \u2192 0, nor that g is convex and eventually linear. In particular, the following result also applies when the sequence (\u03b6n) is a constant \u03b6 > 0.\n\n5\n\n\f
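The randomized strategy for Nature used in the lower bound, price fluctuations drawn uniformly from {±√(c/n)}, can be simulated directly: the expected payoff of this binomial price walk should approach U(S, c) as n grows. A small sketch for the call payoff (sample sizes and parameters are ours):

```python
import math
import random

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def U_call(S, c, K):
    """Closed-form U(S, c) for the European call payoff g_EC."""
    d1 = (math.log(S / K) + 0.5 * c) / math.sqrt(c)
    return S * norm_cdf(d1) - K * norm_cdf(d1 - math.sqrt(c))

def nature_walk_payoff(S, c, K, n, rng):
    """One sample of g_EC(S * prod_i (1 + R_i)) with R_i ~ Uniform{+-sqrt(c/n)}.
    This path satisfies TotVarConstraint with equality: sum R_i^2 = c."""
    step = math.sqrt(c / n)
    price = S
    for _ in range(n):
        price *= 1.0 + (step if rng.random() < 0.5 else -step)
    return max(0.0, price - K)

rng = random.Random(1)
S, c, K, n = 100.0, 0.04, 100.0, 200
n_samples = 20_000
estimate = sum(nature_walk_payoff(S, c, K, n, rng)
               for _ in range(n_samples)) / n_samples
```

By the central limit theorem the product of the (1 + R_i) factors converges in distribution to G(c), which is exactly the content of Lemma 3 below.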
G(c)\nMoreover, we also have the convergence in expectation\n(1 + Ri,n)!# = U(S, c).\nnYi=1\n\nE\"g S \u00b7\n\nn ! 1.\n\nlim\nn!1\n\nas\n\nWith the help of Lemma 3, we are now ready to prove Theorem 2.\n\n(7)\n\nV (n)\n\u21e3n\n\nsup\nr1 \u00b7\u00b7\u00b7 inf\n\n(S; c) = inf\n1\n\nn > c. Let Ri,n \u21e0 Uniform{\u00b1pc/n}\nProof of Theorem 2. Let n be suf\ufb01ciently large such that n\u21e32\ni.i.d., for 1 \uf8ff i \uf8ff n. As noted above, (Ri,n) satis\ufb01es both TotVarConstraint and JumpConstraint.\nThen we have\ng\u21e3S \u00b7\n(1 + ri)\u2318 \nnYi=1\nnXi=1\niRi,ni\n(1 + Ri,n)\u2318 \nEhg\u21e3S \u00b7\nnXi=1\nnYi=1\n1 \u00b7\u00b7\u00b7 inf\n inf\n(1 + Ri,n)\u2318i.\n= Ehg\u21e3S \u00b7\nnYi=1\n\nThe \ufb01rst line follows from unrolling the recursion in the de\ufb01nition (5); the second line from replacing\nthe supremum over (ri) with expectation over (Ri,n); and the third line from E[Ri,n] = 0. Taking\nlimit on both sides and using (7) from Lemma 3 give us the desired conclusion.\n\nsup\nrn\n\niri\n\nn\n\nn\n\n6 Upper Bound\nIn this section we prove that U(S, c) is an upper bound to the limit of V (n)\nTheorem 4. Let g : R0 ! R0 be a convex, L-Lipschitz, K-linear function. Let 0 < \u21e3 \uf8ff 1/16. Then\nfor any S, c > 0 and n 2 N, we have\n\n(S; c).\n\n\u21e3\n\nV (n)\n\u21e3\n\n(S; c) \uf8ff U(S, c) +\u271318c +\n\n8\n\np2\u21e1\u25c6 LK \u21e31/4.\n\nWe remark that the right-hand side of the above bound does not depend on the number of trading\nperiods n. The key parameter is \u21e3, which determines the size of the largest price jump of the stock.\nHowever, we expect that as the trading frequency increases, the size of the largest price jump will\nshrink. Plugging a sequence {\u21e3n} in place of \u21e3 in Theorem 4 gives us the following corollary.\nCorollary 1. Let g : R0 ! R0 be a convex, L-Lipschitz, K-linear function. Let {\u21e3n} be a sequence\nof positive numbers with \u21e3n ! 0. 
Then for S, c > 0,\n\nlim sup\nn!1\n\nV (n)\n\u21e3n\n\n(S; c) \uf8ff U(S, c).\n\nNote that the above upper bound relies on the convexity of g, for if g were concave, then we would\nhave the reverse conclusion:\n\nV (n)\n\u21e3\n\n(S; c) g(S) = g(S \u00b7 E[G(c)]) E[g(S \u00b7 G(c))] = U(S, c).\n\nHere the \ufb01rst inequality follows from setting all r = 0 in (5), and the second is by Jensen\u2019s inequality.\n\n6\n\n\f6.1\n\nIntuition\n\nFor brevity, we write the partial derivatives Uc(S, c) = @U(S, c)/@c, US(S, c) = @U(S, c)/@S, and\nUS2(S, c) = @2U(S, c)/@S2. The proof of Theorem 4 proceeds by providing a \u201cguess\u201d for the In-\nvestor\u2019s action, which is a modi\ufb01cation of the original Black-Scholes hedging strategy. Speci\ufb01cally,\nwhen the current price is S and the remaining budget is c, then the Investor invests\n\n(S, c) := SUS(S, c).\n\nWe now illustrate how this strategy gives rise to a bound on V (n)\nsuppose for some m 1 we know that V (n)\nm) around U(S, c) gives us\nthat a Taylor approximation of the function rm 7! U(S + Srm, c r2\n1\nmUc(S, c) +\n2 r2\n\n(S; c) as stated in Theorem 4. First\n(S; c, m1) is a rough approximation to U(S, c). Note\n\nm) = U(S, c) + rmSUS(S, c) r2\n\nU(S + Srm, c r2\n\nmS2US2(S, c) + O(r3\nm)\n\n\u21e3\n\n\u21e3\n\n= U(S, c) + rmSUS(S, c) + O(r3\n\nm),\n\nwhere the last line follows from the Black-Scholes equation (6). 
Now by setting = SUS(S, c) in\nthe de\ufb01nition (5), and using the assumption and the Taylor approximation above, we obtain\nm, m 1)\n\nrm + V (n)\n\nV (n)\n\u21e3\n\n\u21e3\n\nsup\n\n|rm|\uf8ffmin{\u21e3,pc}\n\n(S; c, m) = inf\n2R\nrm rmSUS(S, c) + V (n)\n\uf8ff sup\n= sup\nrm rmSUS(S, c) + U(S + Srm, c r2\n\n(S + Srm; c r2\n\n(S + Srm; c r2\nm, m 1)\n\n\u21e3\n\nm) + (approx terms)\n\n= U(S, c) + O(r3\n\nm) + (approx terms).\n\nIn other words, on each round of the game we add an O(r3\ntime we reach V (n)\n\n(S; c, n) we will have an error term that is roughly on the order ofPn\nm \uf8ff c and |rm| \uf8ff \u21e3 by assumption, we getPn\n\nm=1 |rm|3 \uf8ff \u21e3c.\n\nm) term to the approximation error. By the\nm=1 |rm|3.\n\nThe details are more intricate because the error O(r3\non S and c. Trading off the dependencies of c and \u21e3 leads us to the bound stated in Theorem 4.\n\nm) from the Taylor approximation also depends\n\nSincePn\n\nm=1 r2\n\n\u21e3\n\n6.2 Proof (Sketch) of Theorem 4\n\nIn this section we describe an outline of the proof of Theorem 4. Throughout, we assume g is a\nconvex, L-Lipschitz, K-linear function, and 0 < \u21e3 \uf8ff 1/16. The proofs of Lemma 5 and Lemma 7\nare provided in Appendix B, and Lemma 6 is proved in Appendix C.\nFor S, c > 0 and |r| \uf8ff pc, we de\ufb01ne the (single-round) error term of the Taylor approximation,\n\n\u270fr(S, c) := U(S + Sr, c r2) U(S, c) rSUS(S, c).\n\n(8)\nm=0 to keep track of the cumulative errors. We de\ufb01ne\n\nWe also de\ufb01ne a sequence {\u21b5(n)(S, c, m)}n\nthis sequence by setting \u21b5(n)(S, c, 0) = 0, and for 1 \uf8ff m \uf8ff n,\n\n\u21b5(n)(S, c, m) :=\n\nsup\n\n|r|\uf8ffmin{\u21e3,pc}\n\n\u270fr(S, c) + \u21b5(n)(S + Sr, c r2, m 1).\n\n(9)\n\nFor simplicity, we write \u21b5(n)(S, c) \u2318 \u21b5(n)(S, c, n). Then we have the following result, which\nformalizes the notion from the preceding section that V (n)\n(S; c, m) is an approximation to U(S, c).\nLemma 5. 
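The cancellation above rests on the Black-Scholes equation (6), which can be sanity-checked numerically: for the call payoff, finite differences applied to the closed-form U should make (1/2) S² U_SS − U_c vanish. The step sizes and tolerances in this sketch are ours:

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def U_call(S, c, K=100.0):
    """Closed-form U(S, c) for the European call payoff."""
    d1 = (math.log(S / K) + 0.5 * c) / math.sqrt(c)
    return S * norm_cdf(d1) - K * norm_cdf(d1 - math.sqrt(c))

def pde_residual(S, c, hS=1e-2, hc=5e-5):
    """Finite-difference estimate of (1/2) S^2 U_SS - U_c, which the
    Black-Scholes equation (6) says is identically zero."""
    U_SS = (U_call(S + hS, c) - 2.0 * U_call(S, c) + U_call(S - hS, c)) / (hS * hS)
    U_c = (U_call(S, c + hc) - U_call(S, c - hc)) / (2.0 * hc)
    return 0.5 * S * S * U_SS - U_c
```

The two terms being compared are individually large (both are on the order of 100 for at-the-money parameters), so a residual near zero is a nontrivial check that the second-order Taylor terms really do cancel.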
For S, c > 0, n 2 N, and 0 \uf8ff m \uf8ff n, we have\n\n\u21e3\n\nV (n)\n\u21e3\n\n(S; c, m) \uf8ff U(S, c) + \u21b5(n)(S, c, m).\n\n(10)\n\nIt now remains to bound \u21b5(n)(S, c) from above. A key step in doing so is to show the following\nbounds on \u270fr. This is where the assumptions that g be L-Lipschitz and K-linear are important.\n\n7\n\n\fLemma 6. For S, c > 0, and |r| \uf8ff min{1/16, pc/8}, we have\n\n\u270fr(S, c) \uf8ff 16LK \u21e3max{c3/2, c1/2} |r|3 + max{c2, c1/2} r4\u2318 .\n\nMoreover, for S > 0, 0 < c \uf8ff 1/4, and |r| \uf8ff pc, we also have\nr2\npc\n\n\u270fr(S, c) \uf8ff\n\n4LK\np2\u21e1 \u00b7\n\n.\n\n(11)\n\n(12)\n\nUsing Lemma 6, we have the following bound on \u21b5(n)(S, c).\nLemma 7. For S, c > 0, n 2 N, and 0 < \u21e3 \uf8ff 1/16, we have\n\n\u21b5(n)(S, c) \uf8ff\u271318c +\n\n8\n\np2\u21e1\u25c6 LK \u21e31/4.\n\nProof (sketch). By unrolling the inductive de\ufb01nition (9), we can write \u21b5(n)(S, c) as the supremum\nof f(r1, . . . , rn), where\n\nf(r1, . . . , rn) =\n\n\u270frm\u21e3S\ni\u2318.\nLet (r1, . . . , rn) be such that |rm| \uf8ff \u21e3 andPn\n(18c + 8/p2\u21e1) LK \u21e31/4. Let 0 \uf8ff n\u21e4 \uf8ff n be the largest index such thatPn\u21e4\n\nm1Yi=1\n(1 + ri), c \nm \uf8ff c. We will show that f(r1, . . . , rn) \uf8ff\nm \uf8ff c p\u21e3. We\n\nsplit the analysis into two parts.\n\nm1Xi=1\n\nnXm=1\n\nm=1 r2\n\nm=1 r2\n\nr2\n\n1. For 1 \uf8ff m \uf8ff min{n, n\u21e4 + 1}: By (11) from Lemma 6 and a little calculation, we have\n\n\u270frm\u21e3S\n\nm1Yi=1\n\n(1 + ri), c \n\nm1Xi=1\n\nr2\n\ni\u2318 \uf8ff 18LK \u21e31/4 r2\n\nm.\n\nSumming over 1 \uf8ff m \uf8ff min{n, n\u21e4 + 1} then gives us\ni\u2318 \uf8ff 18LK \u21e31/4\nmin{n, n\u21e4+1}Xm=1\nmin{n, n\u21e4+1}Xm=1\n2. 
For n\u21e4 + 2 \uf8ff m \uf8ff n (if n\u21e4 \uf8ff n 2): By (12) from Lemma 6, we have\n\n\u270frm\u21e3S\n\n(1+ri), c\n\nm1Xi=1\n\nm1Yi=1\n\nr2\n\nm1Yi=1\n\n(1 + ri), c \n\nr2\n\ni\u2318 \uf8ff\n\n4LK\np2\u21e1 \u00b7\n\nm1Xi=1\ni\u2318 \uf8ff\n\nr2\ni=m r2\ni\n\n.\n\nmpPn\nmpPn\n\nm \uf8ff 18LK \u21e31/4 c.\nr2\n\n\u270frm\u21e3S\nm1Yi=1\n\nTherefore,\n\nnXm=n\u21e4+2\n\n\u270frm\u21e3S\n\n(1 + ri), c \n\nr2\n\nm1Xi=1\n\n4LK\np2\u21e1\n\nnXm=n\u21e4+2\n\nr2\ni=m r2\n\ni \uf8ff\n\n8LK\np2\u21e1\n\n\u21e31/4,\n\nwhere the last inequality follows from Lemma 8 in Appendix B.\n\nCombining the two cases above gives us the desired conclusion.\n\nProof of Theorem 4. Theorem 4 follows immediately from Lemma 5 and Lemma 7.\n\nAcknowledgments. We gratefully acknowledge the support of the NSF through grant CCF-\n1115788 and of the ARC through Australian Laureate Fellowship FL110100281.\n\n8\n\n\fReferences\n[1] J. Abernethy, R. M. Frongillo, and A. Wibisono. Minimax option pricing meets Black-Scholes\nin the limit. In Howard J. Karloff and Toniann Pitassi, editors, STOC, pages 1029\u20131040. ACM,\n2012.\n\n[2] F. Black and M. Scholes. The pricing of options and corporate liabilities. The Journal of Political\n\nEconomy, pages 637\u2013654, 1973.\n\n[3] P. DeMarzo, I. Kremer, and Y. Mansour. Online trading algorithms and robust option pricing.\nIn Proceedings of the 38th Annual ACM Symposium on Theory of Computing, pages 477\u2013486.\nACM, 2006.\n\n[4] R. Durrett. Probability: Theory and Examples (Fourth Edition). Cambridge University Press,\n\n2010.\n\n[5] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an appli-\n\ncation to boosting. In Computational learning theory, pages 23\u201337. Springer, 1995.\n\n[6] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Compu-\n\ntation, 108(2):212\u2013261, 1994.\n\n[7] G. Shafer and V. Vovk. Probability and Finance: It\u2019s Only a Game!, volume 373. 
Wiley-\n\nInterscience, 2001.\n\n[8] J. M. Steele. Stochastic Calculus and Financial Applications, volume 45. Springer Verlag,\n\n2001.\n\n9\n\n\f", "award": [], "sourceid": 1126, "authors": [{"given_name": "Jacob", "family_name": "Abernethy", "institution": "University of Pennsylvania"}, {"given_name": "Peter", "family_name": "Bartlett", "institution": "UC Berkeley"}, {"given_name": "Rafael", "family_name": "Frongillo", "institution": "Microsoft Research"}, {"given_name": "Andre", "family_name": "Wibisono", "institution": "UC Berkeley"}]}