{"title": "Poincar\u00e9 Recurrence, Cycles and Spurious Equilibria in Gradient-Descent-Ascent for Non-Convex Non-Concave Zero-Sum Games", "book": "Advances in Neural Information Processing Systems", "page_first": 10450, "page_last": 10461, "abstract": "We study a wide class of non-convex non-concave min-max games that generalizes over standard bilinear zero-sum games. In this class, players control the inputs of a smooth function whose output is being applied to a bilinear zero-sum game. This class of games is motivated by the indirect nature of the competition in\nGenerative Adversarial Networks, where players control the parameters of a neural network while the actual competition happens between the distributions that the generator and discriminator capture. We establish theoretically, that depending on the specific instance of the problem gradient-descent-ascent dynamics can exhibit a variety of behaviors antithetical to convergence to the game theoretically meaningful min-max solution. Specifically, different forms of recurrent behavior (including periodicity and Poincar\\'{e} recurrence) are possible as well as convergence to spurious (non-min-max) equilibria for a positive measure of initial conditions. At the technical level, our analysis combines tools from optimization theory, game theory and dynamical systems.", "full_text": "Poincar\u00e9 Recurrence, Cycles and Spurious Equilibria\n\nin Gradient-Descent-Ascent for Non-Convex\n\nNon-Concave Zero-Sum Games\n\nLampros Flokas\u2217\n\nDepartment of Computer Science\n\nColumbia University\nNew York, NY 10025\n\nEmmanouil V. 
Vlatakis-Gkaragkounis∗
Department of Computer Science
Columbia University
New York, NY 10025

lamflokas@cs.columbia.edu
emvlatakis@cs.columbia.edu

Georgios Piliouras
Engineering Systems and Design
Singapore University of Technology and Design
Singapore
georgios@sutd.edu.sg

Abstract

We study a wide class of non-convex non-concave min-max games that generalizes standard bilinear zero-sum games. In this class, players control the inputs of a smooth function whose output is fed into a bilinear zero-sum game. This class of games is motivated by the indirect nature of the competition in Generative Adversarial Networks, where players control the parameters of a neural network while the actual competition happens between the distributions that the generator and discriminator capture. We establish theoretically that, depending on the specific instance of the problem, gradient-descent-ascent dynamics can exhibit a variety of behaviors antithetical to convergence to the game-theoretically meaningful min-max solution. Specifically, different forms of recurrent behavior (including periodicity and Poincaré recurrence) are possible, as well as convergence to spurious (non-min-max) equilibria for a positive measure of initial conditions. At the technical level, our analysis combines tools from optimization theory, game theory and dynamical systems.

1 Introduction

Min-max optimization is a problem of interest in several communities, including Optimization, Game Theory and Machine Learning. 
In its most general form, given an objective function $r : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$, we would like to solve the following problem:

$$(\theta^*, \phi^*) = \arg\min_{\theta \in \mathbb{R}^n} \arg\max_{\phi \in \mathbb{R}^m} r(\theta, \phi). \qquad (1)$$

This problem is much more complicated than classical minimization problems, as even understanding under which conditions such a solution is meaningful is far from trivial [DP18, MPR+17, OSG+18, JNJ19]. What is even more demanding is understanding what kinds of algorithms/dynamics are able to solve this problem when a solution is well defined.

∗Equal contribution

33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

Recently this problem has attracted renewed interest, motivated by the advent of Generative Adversarial Networks (GANs) and their numerous applications [GPM+14, RMC16, IZZE17, ZXL17, ACB17, LTH+17, SGZ+16]. A classical GAN architecture mainly revolves around the competition between two players, the generator and the discriminator. On the one hand, the generator aims to train a neural-network-based generative model that can generate high fidelity samples from a target distribution. On the other hand, the discriminator's goal is to train a neural network classifier that can distinguish between samples of the target distribution and artificially generated samples. While one could consider each of these tasks in isolation, it is the competitive interaction between the generator and the discriminator that has led to the resounding success of GANs. It is the "criticism" from a powerful discriminator that pushes the generator to capture the target distribution more accurately, and it is the access to high fidelity artificial samples from a good generator that gives rise to better discriminators. 
Machine Learning researchers and practitioners have tried to formalize this competition using the min-max optimization framework mentioned above, with great success [AGL+17, Ma18, GXC+18, YFW+19].
One of the main limitations of this framework, however, is that to this day efficiently training GANs can be a notoriously difficult task [SGZ+16, MPPS17, MPP18, KAHK17]. Addressing this limitation has been the object of a long line of work in recent years [MGN18, MPPS17, PV16, RMC16, TGB+17, BSM17, GAA+17]. Despite the intensified study, very little is known about efficiently solving general min-max optimization problems. Even for the relatively simple case of bilinear games, the few results that are known usually have a negative flavour. For example, the continuous-time analogues of standard game dynamics such as gradient-descent-ascent or multiplicative weights lead to cyclic or recurrent behavior [PS14, MPP18], whereas when they are actually run in discrete time² they lead to divergence and chaos [BP18, CP19, BP19a]. While positive results for the case of bilinear games exist, like extra-gradient (optimistic) training [DISZ18, MLZ+19a, DP19] and other techniques [BRM+18, GHP+19a, GBV+19, ALW19], these results fail to generalize to complex non-convex non-concave settings [OSG+18, LLRY18, SRL18]. In fact, for the case of non-convex-concave optimization, game theoretic interpretations of equilibria might not even be meaningful [MR18, JNJ19, ADLH19].
In order to shed some light on this intellectually challenging problem, we propose a quite general class of min-max optimization problems that includes bilinear games as well as a wide range of non-convex non-concave games. In this class of problems, each player submits its own decision vector, just like in general min-max optimization problems. Then each decision vector is processed separately by a (potentially different) smooth function. 
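As a toy illustration of this structure (a minimal sketch of our own, not the paper's code; the sigmoid maps and the Matching-Pennies payoff matrix below are illustrative assumptions), a hidden bilinear payoff composes a smooth per-player map with an ordinary bilinear game:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "hidden" maps F, G: a scalar parameter is mapped smoothly
# to a mixture over two strategies.
def F(theta):
    p = sigmoid(theta)
    return np.array([p, 1.0 - p])

def G(phi):
    q = sigmoid(phi)
    return np.array([q, 1.0 - q])

# A Matching-Pennies-style payoff matrix for the hidden bilinear game.
U = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def r(theta, phi):
    # The hidden bilinear payoff: r(theta, phi) = F(theta)^T U G(phi).
    return F(theta) @ U @ G(phi)
```

At `theta = phi = 0` both hidden mixtures are uniform and the payoff is 0; pushing both parameters far positive drives both players onto their first strategy and the payoff toward `U[0, 0]`.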
Each player finally gets rewarded by plugging the processed decision vectors into a simple bilinear game. More concretely, there are functions $F : \mathbb{R}^n \to \mathbb{R}^N$ and $G : \mathbb{R}^m \to \mathbb{R}^M$ and a matrix $U_{N \times M}$ such that

$$r(\theta, \phi) = F(\theta)^\top U G(\phi). \qquad (2)$$

We call the resulting class of problems Hidden Bilinear Games.
The motivation behind the proposed class of games is the setting of training GANs itself. During the training process of GANs, the discriminator and the generator "submit" the parameters of their corresponding neural network architectures, denoted as $\theta$ and $\phi$ in our problem formulation. However, deep networks introduce nonlinearities in mapping their parameters to their output space, which we capture through the non-convex functions $F, G$. Thus, even though hidden bilinear games do not exhibit the full complexity of modern GAN architectures and training, they capture two of their most pervasive properties: i) the indirect competition of the generator and the discriminator, and ii) the non-convex non-concave nature of training GANs. Both features are markedly missing from simple bilinear games.
Our results. We provide, to the best of our knowledge, the first global analysis of gradient-descent-ascent for a class of non-convex non-concave zero-sum games that by design includes features both of bilinear zero-sum games and of single-agent non-convex optimization. Our analysis focuses on the (smoother) continuous-time dynamics (Sections 4 and 5) but we also discuss the implications for discrete time (Section 7). The unifying thread of our results is that gradient-descent-ascent can exhibit a variety of behaviors antithetical to convergence to the min-max solution. In fact, convergence to a set of parameters that implement the desired min-max solution (as e.g. 
GANs require), if it actually happens, is more of an accident due to fortuitous system initialization than an implication of the adversarial network architecture.

²Interestingly, running alternating gradient-descent-ascent in discrete time results once again in recurrent behavior [BGP19].

Informally, we prove that these dynamics exhibit conservation laws, akin to energy conservation in physics. Thus, instead of making progress over time, their natural tendency is to "cycle" through their parameter space. If the hidden bilinear game $U$ is 2x2 (e.g. Matching Pennies) with an interior Nash equilibrium, then the behavior is typically periodic (Theorem 3). If it is a higher-dimensional game (e.g. akin to Rock-Paper-Scissors), then even more complex behavior is possible. Specifically, the system is formally analogous to Poincaré recurrent systems (e.g. the many-body problem in physics) (Theorems 6, 7). Due to the non-convexity of the operators $F, G$, the system can sometimes get stuck at equilibria; however, these fixed points may be merely artifacts of the nonlinearities of $F, G$ rather than meaningful solutions of the underlying min-max problem $U$ (Theorem 8).
In Section 7, we show that moving from continuous to discrete time only enhances the disequilibrium properties of the dynamics. Specifically, instead of energy conservation, energy now increases over time, leading away from equilibrium (Theorem 9), whilst spurious (non-min-max) equilibria remain an issue (Theorem 10). Despite these negative results, there is some positive news: at least in some cases we can show that time-averaging over these non-equilibrium trajectories (or, equivalently, choosing a distribution of parameters instead of a single set of parameters) recovers the min-max equilibrium (Theorem 4). 
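The energy-growth phenomenon in discrete time is easy to observe even on the simplest bilinear instance. The following self-contained sketch (a toy of our own choosing, not the paper's experiments) runs discrete gradient-descent-ascent on a Matching-Pennies-style objective and checks that the "energy", i.e. the squared distance from the interior equilibrium, grows monotonically:

```python
# Toy bilinear objective r(x, y) = v * (x - p) * (y - q), where x is the
# minimizing variable and y the maximizing one (constants are our choices).
v, p, q = 4.0, 0.5, 0.5
alpha = 0.05  # learning rate of the Euler discretization

def energy(x, y):
    # Squared distance from the interior equilibrium (p, q).
    return (x - p) ** 2 + (y - q) ** 2

x, y = 0.6, 0.5
energies = [energy(x, y)]
for _ in range(200):
    gx = v * (y - q)  # dr/dx
    gy = v * (x - p)  # dr/dy
    x, y = x - alpha * gx, y + alpha * gy  # descent in x, ascent in y
    energies.append(energy(x, y))

# A short calculation shows each Euler step multiplies the energy by exactly
# (1 + alpha^2 * v^2), so the orbit spirals away from (p, q).
assert all(later > earlier for earlier, later in zip(energies, energies[1:]))
```

Shrinking `alpha` slows the spiral but does not eliminate it, which is the qualitative picture of the discrete-time results above; the continuous-time flow, by contrast, keeps this energy exactly constant.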
Technically, our results combine tools from dynamical systems (e.g. the Poincaré recurrence theorem, the Poincaré-Bendixson theorem, Liouville's theorem) along with tools from game theory and non-convex optimization.
Understanding the intricacies of GAN training requires broadening our vocabulary and horizons in terms of what types of long-term behaviors are possible, and developing new techniques that can hopefully counter them.
The structure of the rest of the paper is as follows. In Section 2 we present key results from prior work on the problem of min-max optimization. In Section 3 we present the main mathematical tools for our analysis. Sections 4 through 6 are devoted to studying interesting special cases of hidden bilinear games. Section 8 concludes our work.

Figure 1: Trajectories of a single player using gradient-descent-ascent dynamics for a hidden Rock-Paper-Scissors game with sigmoid activations. The different colors correspond to different initializations of the dynamics. The trajectories exhibit Poincaré recurrence, as expected by Theorem 7.

2 Related Work

Non-equilibrating dynamics in game theory. [KLPT11] established non-convergence for a continuous-time variant of Multiplicative Weights Update (MWU), known as the replicator dynamic, for a 2x2x2 game and showed that as a result the system converges to states whose social welfare dominates that of all Nash equilibria. [PPP17] proved the existence of Li-Yorke chaos in MWU dynamics of 2x2 potential games. From the perspective of evolutionary game theory, which typically studies continuous-time dynamics, numerous non-convergence results are known, but again typically for small games, e.g., [San10]. 
[PS14] shows that replicator dynamics exhibit a specific type of near-periodic behavior in bilinear (network) zero-sum games, known as Poincaré recurrence. Recently, [MPP18] generalized these results to more general continuous-time variants of FTRL dynamics (e.g. gradient-descent-ascent). Cycles arise also in evolutionary team competition [PS18] as well as in network competition [NMP18]. Technically, [PS18] is the paper closest to our own, as it studies evolutionary competition between Boolean functions; however, the dynamics in the two models are different and that paper is strictly focused on periodic systems. The papers in the category of cyclic/recurrent dynamics combine delicate arguments such as volume preservation and the existence of constants of motion ("energy preservation"). In this paper we provide a wide generalization of these types of results by establishing cycle- and recurrence-type behavior for a large class of non-convex non-concave games. In the case of discrete-time dynamics, such as standard gradient-descent-ascent, the system trajectories are first-order approximations of the above motion and these conservation arguments do not hold exactly. Instead, even in bilinear games, the "energy" slowly increases over time [BP18], implying chaotic divergence away from equilibrium [CP19]. We extend such energy-increase results to non-linear settings.
Learning in zero-sum games and connections to GANs. Several recent papers have shown positive results about convergence to equilibria in (mostly bilinear) zero-sum games for suitably adapted variants of first-order methods, and then apply these techniques to Generative Adversarial Networks (GANs), showing improved performance (e.g. [DISZ18, DP19]). [BRM+18] made use of conservation laws of learning dynamics in zero-sum games (e.g. 
[BP19b]) to develop new algorithms for training GANs that add a new component to the vector field aimed at minimizing this energy function. Different energy-shrinking techniques for convergence in GANs (non-convex saddle point problems) exploit connections to variational inequalities and employ mirror descent techniques with an extra gradient step [GBVL18, MLZ+19a]. Moreover, adding negative momentum can help with stability in zero-sum games [GHP+19b]. Game-theoretically inspired methods such as time-averaging work well in practice for a wide range of architectures [YFW+19].

3 Preliminaries

3.1 Notation

Vectors are denoted in boldface $\mathbf{x}, \mathbf{y}$ and, unless otherwise indicated, are considered column vectors. We use $\|\cdot\|$ to denote the $\ell_2$-norm. For a function $f : \mathbb{R}^d \to \mathbb{R}$ we use $\nabla f$ to denote its gradient. For functions of two vector arguments, $f(\mathbf{x}, \mathbf{y}) : \mathbb{R}^{d_1} \times \mathbb{R}^{d_2} \to \mathbb{R}$, we use $\nabla_{\mathbf{x}} f, \nabla_{\mathbf{y}} f$ to denote its partial gradients. For the time derivative we use the dot accent abbreviation, i.e., $\dot{\mathbf{x}} = \frac{d}{dt}[\mathbf{x}(t)]$. A function $f$ belongs to $C^r$ if it is $r$ times continuously differentiable. The term "sigmoid" function refers to $\sigma : \mathbb{R} \to \mathbb{R}$ such that $\sigma(x) = (1 + e^{-x})^{-1}$. Finally, we use $P(\cdot)$, operating over a set, to denote its (Lebesgue) measure.

3.2 Definitions

Definition 1 (Hidden Bilinear Zero-Sum Game). 
In a hidden bilinear zero-sum game there are two players, each one equipped with a smooth function, $F : \mathbb{R}^n \to \mathbb{R}^N$ and $G : \mathbb{R}^m \to \mathbb{R}^M$ respectively, and there is a payoff matrix $U_{N \times M}$. Each player inputs its own decision vector, $\theta \in \mathbb{R}^n$ and $\phi \in \mathbb{R}^m$ respectively, and is trying to minimize or maximize $r(\theta, \phi) = F(\theta)^\top U G(\phi)$ respectively.

In this work we mostly study continuous-time dynamics of solutions for the problem of Equation 1 for hidden bilinear zero-sum games, but we also make some important connections to discrete-time dynamics that are prevalent in practice. In order to make this distinction clear, let us define the following terms.

Definition 2 (Continuous Time Dynamical System). A system of ordinary differential equations $\dot{\mathbf{x}} = f(\mathbf{x})$, where $f : \mathbb{R}^d \to \mathbb{R}^d$, will be called a continuous time dynamical system. Solutions of the equation $f(\mathbf{x}) = 0$ are called the fixed points of the dynamical system.

We will call $f$ the vector field of the dynamical system. In order to understand the properties of continuous time dynamical systems, we often need to study their behaviour under different initial conditions. This behaviour is captured by the flow of the dynamical system. More precisely,

Definition 3. If $f$ is Lipschitz-continuous, there exists a continuous map $\Phi(\mathbf{x}_0, t) : \mathbb{R}^d \times \mathbb{R} \to \mathbb{R}^d$, called the flow of the dynamical system, such that for all $\mathbf{x}_0 \in \mathbb{R}^d$ the map $t \mapsto \Phi(\mathbf{x}_0, t)$ is the unique solution of the problem $\{\dot{\mathbf{x}} = f(\mathbf{x}),\ \mathbf{x}(0) = \mathbf{x}_0\}$. We will refer to $\Phi(\mathbf{x}_0, t)$ as a trajectory or orbit of the dynamical system.

In this work we will mainly study the gradient-descent-ascent dynamics for the problem of Equation 1. 
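The gradient-descent-ascent updates defined next can be prototyped directly. Below is a minimal executable sketch for a toy hidden Matching-Pennies instance with sigmoid hidden maps, where the payoff simplifies to $r(\theta, \phi) = v(\sigma(\theta) - p)(\sigma(\phi) - q)$ with $v = 4$, $p = q = 1/2$ (all names and constants here are our own illustrative choices, not the paper's code):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dsigmoid(x):
    # Derivative of the sigmoid: sigma'(x) = sigma(x) * (1 - sigma(x)).
    s = sigmoid(x)
    return s * (1.0 - s)

# Toy hidden Matching-Pennies instance:
# r(theta, phi) = v * (sigmoid(theta) - p) * (sigmoid(phi) - q).
v, p, q = 4.0, 0.5, 0.5

def grad_theta(theta, phi):
    # dr/dtheta = v * sigma'(theta) * (g(phi) - q)
    return v * dsigmoid(theta) * (sigmoid(phi) - q)

def grad_phi(theta, phi):
    # dr/dphi = v * sigma'(phi) * (f(theta) - p)
    return v * dsigmoid(phi) * (sigmoid(theta) - p)

def dgda_step(theta, phi, alpha):
    # One discrete gradient-descent-ascent step: theta descends, phi ascends.
    return (theta - alpha * grad_theta(theta, phi),
            phi + alpha * grad_phi(theta, phi))

# (theta, phi) = (0, 0) realizes the hidden equilibrium f = p, g = q, so the
# gradient field vanishes there and (0, 0) is a fixed point of the dynamics.
assert abs(grad_theta(0.0, 0.0)) < 1e-12
assert abs(grad_phi(0.0, 0.0)) < 1e-12
assert dgda_step(0.0, 0.0, 0.1) == (0.0, 0.0)
```

Away from this fixed point the field is rotational (each gradient depends only on the other player's position), which is the source of the cycling behavior analyzed in Section 4.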
The continuous- and discrete-time versions of the dynamics (the latter with learning rate $\alpha$) are based on the following equations:

$$\text{(CGDA)}: \begin{cases} \dot{\theta} = -\nabla_{\theta} r(\theta, \phi) \\ \dot{\phi} = \nabla_{\phi} r(\theta, \phi) \end{cases} \qquad \text{(DGDA)}: \begin{cases} \theta_{k+1} = \theta_k - \alpha \nabla_{\theta} r(\theta_k, \phi_k) \\ \phi_{k+1} = \phi_k + \alpha \nabla_{\phi} r(\theta_k, \phi_k) \end{cases}$$

A key notion in our analysis is that of (Poincaré) recurrence. Intuitively, a dynamical system is recurrent if, after a sufficiently long (but finite) time, almost every state returns arbitrarily close to the system's initial state.

Definition 4. A point $x \in \mathbb{R}^d$ is said to be recurrent under the flow $\Phi$ if, for every neighborhood $U \subseteq \mathbb{R}^d$ of $x$, there exists an increasing sequence of times $t_n$ such that $\lim_{n \to \infty} t_n = \infty$ and $\Phi(x, t_n) \in U$ for all $n$. Moreover, the flow $\Phi$ is called Poincaré recurrent in a non-zero measure set $A \subseteq \mathbb{R}^d$ if the set of non-recurrent points in $A$ has zero measure.

4 Cycles in hidden bilinear games with two strategies

In this section we focus on a particular case of hidden bilinear games where both the generator and the discriminator play only two strategies. 
Let $U$ be our zero-sum game; without loss of generality we can assume that there are functions $f : \mathbb{R}^n \to [0, 1]$ and $g : \mathbb{R}^m \to [0, 1]$ such that

$$F(\theta) = \begin{pmatrix} f(\theta) \\ 1 - f(\theta) \end{pmatrix}, \qquad G(\phi) = \begin{pmatrix} g(\phi) \\ 1 - g(\phi) \end{pmatrix}, \qquad U = \begin{pmatrix} u_{0,0} & u_{0,1} \\ u_{1,0} & u_{1,1} \end{pmatrix}$$

Let us assume that the hidden bilinear game has a unique mixed Nash equilibrium $(p, q)$:

$$v = u_{0,0} - u_{0,1} - u_{1,0} + u_{1,1} \neq 0, \qquad p = -\frac{u_{1,0} - u_{1,1}}{v} \in (0, 1), \qquad q = -\frac{u_{0,1} - u_{1,1}}{v} \in (0, 1)$$

Then we can write down the equations of gradient-descent-ascent:

$$\begin{cases} \dot{\theta} = -v \nabla f(\theta)\,(g(\phi) - q) \\ \dot{\phi} = v \nabla g(\phi)\,(f(\theta) - p) \end{cases} \qquad (3)$$

In order to analyze the behavior of this system, we would like to understand the topology of the trajectories of $\theta$ and $\phi$, at least individually. The following lemma makes a connection between the trajectories of each variable in the min-max optimization system of Equation 3 and simple gradient ascent dynamics.

Lemma 1. Let $k : \mathbb{R}^d \to \mathbb{R}$ be a $C^2$ function. Let $h : \mathbb{R} \to \mathbb{R}$ be a $C^1$ function and let $\mathbf{x}(t) = \rho(t)$ be the unique solution of the dynamical system $\Sigma_1$. 
Then for the dynamical system $\Sigma_2$ the unique solution is $\mathbf{z}(t) = \rho\!\left(\int_0^t h(s)\, ds\right)$, where

$$\Sigma_1: \begin{cases} \dot{\mathbf{x}} = \nabla k(\mathbf{x}) \\ \mathbf{x}(0) = \mathbf{x}_0 \end{cases} \qquad \Sigma_2: \begin{cases} \dot{\mathbf{z}} = h(t)\, \nabla k(\mathbf{z}) \\ \mathbf{z}(0) = \mathbf{x}_0 \end{cases}$$

By applying the previous result for $\theta$ with $k = f$ and $h(t) = -v(g(\phi(t)) - q)$, we get that even under the dynamics of Equation 3, $\theta$ remains on a trajectory of the simple gradient ascent dynamics with initial condition $\theta(0)$. This necessarily affects the possible values of $f$ and $g$ given the initial conditions. Let us define the sets of values attainable for each initialization.

Definition 5. For each $\theta(0)$, $f_{\theta(0)}$ is the set of possible values that $f(\theta(t))$ can attain under gradient ascent dynamics. Similarly, we define $g_{\phi(0)}$ as the corresponding set for $g$.

What is special about the trajectories of gradient ascent is that along such a curve $f$ is strictly increasing (for a detailed explanation, the reader can check the proof of Theorem 1 in the Appendix) and therefore each point $\theta(t)$ of the trajectory has a unique value of $f$. Therefore, even in the system of Equation 3, $f(\theta(t))$ uniquely identifies $\theta(t)$. This can be formalized in the next theorem.

Theorem 1. 
For each $\theta(0)$, $\phi(0)$, under the dynamics of Equation 3, there are $C^1$ functions $(X_{\theta(0)}, X_{\phi(0)})$ such that $X_{\theta(0)} : f_{\theta(0)} \to \mathbb{R}^n$, $X_{\phi(0)} : g_{\phi(0)} \to \mathbb{R}^m$ and $\theta(t) = X_{\theta(0)}(f(t))$, $\phi(t) = X_{\phi(0)}(g(t))$.

Equipped with these results, we are able to reduce this complicated dynamical system of $\theta$ and $\phi$ to a planar dynamical system involving $f$ and $g$ alone.

Lemma 2. If $\theta(t)$ and $\phi(t)$ are solutions of Equation 3 with initial conditions $(\theta(0), \phi(0))$, then $f(t) = f(\theta(t))$ and $g(t) = g(\phi(t))$ satisfy the following equations:

$$\begin{aligned} \dot{f} &= -v \|\nabla f(X_{\theta(0)}(f))\|^2 (g - q) \\ \dot{g} &= v \|\nabla g(X_{\phi(0)}(g))\|^2 (f - p) \end{aligned} \qquad (4)$$

As one can observe from both Equation 3 and Equation 4, fixed points of the gradient-descent-ascent dynamics correspond to either solutions of $f(\theta) = p$ and $g(\phi) = q$, or stationary points of $f$ and $g$, or combinations of the aforementioned conditions. Although all of them are fixed points of the dynamical system, only the former equilibria are game-theoretically meaningful. We will therefore define a subset of initial conditions of Equation 3 for which convergence to game-theoretically meaningful fixed points may actually be feasible:

Definition 6. 
We will call the initialization $(\theta(0), \phi(0))$ safe for Equation 3 if $\theta(0)$ and $\phi(0)$ are not stationary points of $f$ and $g$ respectively, and $p \in f_{\theta(0)}$ and $q \in g_{\phi(0)}$.

For safe initial conditions we can show that gradient-descent-ascent dynamics applied to the class of hidden bilinear zero-sum games mimic properties and behaviors of conservative/Hamiltonian physical systems [BP19b], like an ideal pendulum or an ideal spring-mass system. In such systems there is a notion of energy that remains constant over time, and hence the system trajectories lie on level sets of this function. To motivate this intuition further, it is easy to check that in the simplified case where $\|\nabla f\| = \|\nabla g\| = 1$ the level sets correspond to cycles centered at the Nash equilibrium, and the system as a whole captures gradient-descent-ascent for a bilinear $2 \times 2$ zero-sum game (e.g. Matching Pennies).

Theorem 2. Let $\theta(0)$ and $\phi(0)$ be safe initial conditions. Then for the system of Equation 3 the following quantity is time-invariant:

$$H(f, g) = \int_p^f \frac{z - p}{\|\nabla f(X_{\theta(0)}(z))\|^2}\, dz + \int_q^g \frac{z - q}{\|\nabla g(X_{\phi(0)}(z))\|^2}\, dz$$

The existence of this invariant immediately guarantees that the Nash equilibrium $(p, q)$ cannot be reached if the dynamical system is not initialized there. Taking advantage of the planarity of the induced system (a necessary condition of the Poincaré-Bendixson Theorem), we can prove that:

Theorem 3. Let $\theta(0)$ and $\phi(0)$ be safe initial conditions. 
Then for the system of Equation 3 the orbit $(\theta(t), \phi(t))$ is periodic.

On a positive note, we can prove that the time averages of $f$ and $g$, as well as the time averages of the expected utilities of both players, converge to their Nash equilibrium values.

Theorem 4. Let $\theta(0)$ and $\phi(0)$ be safe initial conditions and let $(P, Q) = \left(\begin{pmatrix} p \\ 1-p \end{pmatrix}, \begin{pmatrix} q \\ 1-q \end{pmatrix}\right)$. Then for the system of Equation 3:

$$\lim_{T \to \infty} \frac{\int_0^T f(\theta(t))\, dt}{T} = p, \qquad \lim_{T \to \infty} \frac{\int_0^T g(\phi(t))\, dt}{T} = q, \qquad \lim_{T \to \infty} \frac{\int_0^T r(\theta(t), \phi(t))\, dt}{T} = P^\top U Q$$

5 Poincaré recurrence in hidden bilinear games with more strategies

In this section we extend our results by allowing both the generator and the discriminator to play hidden bilinear games with more than two strategies. We specifically study the case of hidden bilinear games where each coordinate of the vector-valued functions $F$ and $G$ is controlled by disjoint subsets of the variables $\theta$ and $\phi$, i.e.

$$\theta = \begin{pmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_N \end{pmatrix}, \quad F(\theta) = \begin{pmatrix} f_1(\theta_1) \\ f_2(\theta_2) \\ \vdots \\ f_N(\theta_N) \end{pmatrix}, \quad \phi = \begin{pmatrix} \phi_1 \\ \phi_2 \\ \vdots \\ \phi_M \end{pmatrix}, \quad G(\phi) = \begin{pmatrix} g_1(\phi_1) \\ g_2(\phi_2) \\ \vdots \\ g_M(\phi_M) \end{pmatrix} \qquad (5)$$

where each function $f_i$ and $g_j$ takes an appropriately sized vector and returns a non-negative number. To account for possible constraints (e.g. that the probabilities of each distribution must sum to one), we incorporate this restriction using Lagrange multipliers. The resulting problem becomes
The resulting problem becomes\n\n\uf8f9\uf8fa\uf8fa\uf8fb\n\ngM (\u03c6\u03c6\u03c6M )\n\n...\n\ng2(\u03c6\u03c6\u03c62)\n\n\u03c6\u03c6\u03c62\n...\n\u03c6\u03c6\u03c6M\n\n\uf8ee\uf8ef\uf8ef\uf8f0 \u03c6\u03c6\u03c61\n\n\uf8ee\uf8ef\uf8ef\uf8f0 g1(\u03c6\u03c6\u03c61)\n\uf8f9\uf8fa\uf8fa\uf8fb GGG(\u03c6\u03c6\u03c6) =\n\uf8f9\uf8fa\uf8fa\uf8fb \u03c6\u03c6\u03c6 =\n\uf8eb\uf8ed M(cid:88)\n(cid:32) N(cid:88)\n(cid:33)\n\uf8f6\uf8f8 \u02d9\u03c6\u03c6\u03c6j =\u2207gj(\u03c6\u03c6\u03c6j)\n(cid:32) N(cid:88)\n(cid:32) N(cid:88)\n\nfi(\u03b8\u03b8\u03b8i) \u2212 1\n\n(cid:33)\n\n+ \u00b5\n\ni=1\n\ni=1\n\ni=j\n\nfi(\u03b8\u03b8\u03b8i) \u2212 1\n\n\u02d9\u03bb =\n\ngj(\u03c6\u03c6\u03c6j) \u2212 1\n\nui,jfi(\u03b8\u03b8\u03b8i) + \u00b5\n\n\uf8f6\uf8f8\n(cid:33)\n\n(5)\n\n(6)\n\n(7)\n\n(8)\n\nWriting down the equations of gradient-ascent-descent we get\n\n\u03b8\u03b8\u03b8\u2208Rn,\u00b5\u2208R max\n\n\u03c6\u03c6\u03c6\u2208Rm,\u03bb\u2208R FFF (\u03b8\u03b8\u03b8)(cid:62)UGGG(\u03c6\u03c6\u03c6) + \u03bb\n\nmin\n\n\uf8eb\uf8ed M(cid:88)\n\nj=1\n\n\uf8f6\uf8f8\n\n\u02d9\u03b8\u03b8\u03b8i = \u2212 \u2207fi(\u03b8\u03b8\u03b8i)\n\nui,jgj(\u03c6\u03c6\u03c6j) + \u03bb\n\n\uf8eb\uf8ed M(cid:88)\n\n\u02d9\u00b5 = \u2212\n\ngj(\u03c6\u03c6\u03c6j) \u2212 1\n\nj=1\n\ni=1\n\nOnce again we can show that along the trajectories of the system of Equation 7, \u03b8\u03b8\u03b8i can be uniquely\nidenti\ufb01ed by fi(\u03b8\u03b8\u03b8i) given \u03b8\u03b8\u03b8i(0) and the same holds for the discriminator. This allows us to construct\nfunctions X\u03b8\u03b8\u03b8i(0) and X\u03c6\u03c6\u03c6j (0) just like in Theorem 1. We can now write down a dynamical system\ninvolving only fi and gj.\nLemma 3. 
If $\theta(t)$ and $\phi(t)$ are solutions of Equation 7 with initial conditions $(\theta(0), \phi(0), \lambda(0), \mu(0))$, then $f_i(t) = f_i(\theta_i(t))$ and $g_j(t) = g_j(\phi_j(t))$ satisfy the following equations:

$$\dot{f}_i = -\|\nabla f_i(X_{\theta_i(0)}(f_i))\|^2 \left(\sum_{j=1}^M u_{i,j}\, g_j + \lambda\right), \qquad \dot{g}_j = \|\nabla g_j(X_{\phi_j(0)}(g_j))\|^2 \left(\sum_{i=1}^N u_{i,j}\, f_i + \mu\right)$$

Similarly to the previous section, we can define a notion of safety for Equation 7. Let us assume that the hidden game has a fully mixed Nash equilibrium $(\mathbf{p}, \mathbf{q})$. Then we can define:

Definition 7. We will call the initialization $(\theta(0), \phi(0), \lambda(0), \mu(0))$ safe for Equation 7 if $\theta_i(0)$ and $\phi_j(0)$ are not stationary points of $f_i$ and $g_j$ respectively, and $p_i \in f_{i,\theta_i(0)}$ and $q_j \in g_{j,\phi_j(0)}$.

Theorem 5. Assume that $(\theta(0), \phi(0), \lambda(0), \mu(0))$ is a safe initialization. Then there exist $\lambda^*$ and $\mu^*$ such that the following quantity is time-invariant:

$$H(F, G, \lambda, \mu) = \sum_{i=1}^N \int_{p_i}^{f_i} \frac{z - p_i}{\|\nabla f_i(X_{\theta_i(0)}(z))\|^2}\, dz + \sum_{j=1}^M \int_{q_j}^{g_j} \frac{z - q_j}{\|\nabla g_j(X_{\phi_j(0)}(z))\|^2}\, dz + \int_{\lambda^*}^{\lambda} (z - \lambda^*)\, dz + \int_{\mu^*}^{\mu} (z - \mu^*)\, dz$$

Given that even our reduced dynamical system has more than two state variables, we cannot apply the Poincaré-Bendixson Theorem. 
Instead, we can prove that there exists a one-to-one differentiable transformation of our dynamical system such that the resulting system becomes divergence-free. Applying Liouville's formula, the flow of the transformed system is volume-preserving. Combined with the invariant of Theorem 5, we can prove that the variables of the transformed system remain bounded. This gives us the following guarantees:

Theorem 6. Assume that $(\theta(0), \phi(0), \lambda(0), \mu(0))$ is a safe initialization. Then the trajectory under the dynamics of Equation 7 is diffeomorphic to a trajectory of a Poincaré recurrent flow.

This result implies that if the corresponding trajectory of the Poincaré recurrent flow is itself recurrent, which almost all of them are, then the trajectory of the dynamics of Equation 7 is also recurrent. This is, however, not enough to reason about how often any given trajectory of the dynamics of Equation 7 is recurrent. In order to prove that the flow of Equation 7 is Poincaré recurrent, we make some additional assumptions:

Theorem 7. Let the $f_i$ and $g_j$ be sigmoid functions. Then the flow of Equation 7 is Poincaré recurrent. The same holds for all functions $f_i$ and $g_j$ that are one-to-one and for which all initializations are safe.

It is worth noting that for the unconstrained version of the previous min-max problem we arrive at the same conclusions/theorems by repeating the above analysis without the Lagrange multipliers.

6 Spurious equilibria

In the previous sections we analyzed the behavior of safe initializations and proved that they lead to either periodic or recurrent trajectories. For initializations that are not safe for some equilibrium of the hidden game, game-theoretically interesting fixed points are not even realizable solutions. In fact, we can prove something stronger:

Theorem 8. 
One can construct functions f and g for the system of Equation 3 so that for a positive measure set of initial conditions the trajectories converge to fixed points that do not correspond to equilibria of the hidden game.

The main idea behind our theorem is that we can construct functions f and g with local optima that break the safety assumption. For a careful choice of the values of these local optima we can make the corresponding fixed points stable, and then the Stable Manifold Theorem guarantees that a positive measure set of points in the vicinity of each such fixed point converges to it. Of course, the idea behind these constructions can be extended to our analysis of hidden games with more strategies.

7 Discrete Time Gradient-Descent-Ascent

In this section we discuss the implications of our analysis of continuous time gradient-descent-ascent dynamics for the properties of their discrete time counterparts. In general, the behavior of discrete time dynamical systems can be significantly different [LY75, BP18, PPP17], so it is critical to perform this non-trivial analysis. We are able to show that the picture of non-equilibration persists for an interesting class of hidden bilinear games.

Theorem 9. Let f_i and g_j be sigmoid functions. Then for the discretized version of the system of Equation 7 and for safe initializations, the function H of Theorem 5 is non-decreasing.

An immediate consequence of the above theorem is that the discretized system cannot converge to the equilibrium (p, q) if it is not initialized there. For the case of non-safe initializations, the conclusions of Theorem 8 persist as well:

Theorem 10.
One can choose a learning rate α and functions f and g for the discretized version of the system of Equation 3 so that for a positive measure set of initial conditions the trajectories converge to fixed points that do not correspond to equilibria of the hidden game.

8 Conclusion

In this work, inspired broadly by the structure of the complex competition between generators and discriminators in GANs, we defined a broad class of non-convex non-concave min-max optimization games, which we call hidden bilinear zero-sum games. In this setting, we showed that gradient-descent-ascent behavior is considerably more complex than the straightforward convergence to the min-max solution that one might at first suspect. We showed that the trajectories even of the simplest but evocative 2x2 game exhibit cycles. In higher dimensional games, the induced dynamical system can exhibit even more complex behavior such as Poincaré recurrence. On the other hand, we explored safety conditions whose violation may result in convergence to spurious, game-theoretically meaningless equilibria. Finally, we showed that even for a simple but widespread family of functions like sigmoids, discretizing gradient-descent-ascent can further intensify the disequilibrium phenomena, resulting in divergence away from equilibrium.

As a consequence of this work numerous open problems emerge. First, extending such recurrence results to more general families of functions, as well as examining possible generalizations to multi-player network zero-sum games, are fascinating questions. Recently, there has been some progress in resolving cyclic behavior in simpler settings by employing different training algorithms/dynamics (e.g., [DISZ18, MLZ+19b, GHP+19b]). It would be interesting to examine whether these algorithms could enhance equilibration in our setting as well.
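As a concrete baseline for such comparisons, the growth of the invariant under discretization (Theorem 9) can be reproduced in a few lines. The instance below is our own illustrative choice, not a construction from the paper: a single strategy pair with f = g = σ (sigmoid) and payoff (f(θ) − 1/2)(g(φ) − 1/2), for which the invariant of Theorem 5 reduces, up to an additive constant, to H(θ, φ) = cosh θ + cosh φ. Forward-Euler gradient-descent-ascent with a fixed step then pushes H upward at every step, i.e., away from the equilibrium:

```python
import math

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

def dsigma(x):
    s = sigma(x)
    return s * (1.0 - s)

def H(th, ph):
    # Theorem 5 invariant for sigmoids, up to an additive constant.
    return math.cosh(th) + math.cosh(ph)

alpha = 0.1                       # fixed learning rate
th, ph = 1.0, 0.5
values = [H(th, ph)]
for _ in range(2000):
    # simultaneous descent in th / ascent in ph on (sigma(th)-1/2)(sigma(ph)-1/2)
    g_th = dsigma(th) * (sigma(ph) - 0.5)
    g_ph = dsigma(ph) * (sigma(th) - 0.5)
    th, ph = th - alpha * g_th, ph + alpha * g_ph
    values.append(H(th, ph))

monotone = all(b >= a for a, b in zip(values, values[1:]))
print(monotone, values[-1] - values[0])
```

Since H is exactly conserved by the continuous flow, the per-step increase is second order in α (the Hessian of H is positive definite here), so the discrete trajectory spirals outward: the disequilibrium of the flow is intensified, not cured, by discretization.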
Additionally, the proposed safety conditions show that a major source of spurious equilibria in GANs could be the bad local optima of the individual neural networks of the discriminator and the generator. Lessons learned from overparametrized neural network architectures that converge to global optima [DLL+18] could lead to improved efficiency in training GANs. Finally, analyzing different simplifications/models of GANs where provable convergence is possible could lead to interesting comparisons, as well as to the emergence of theoretically tractable hybrid models that capture both the hardness of GAN training (e.g. non-convergence, cycling, spurious equilibria, mode collapse, etc.) as well as their power.

Acknowledgements

Georgios Piliouras acknowledges MOE AcRF Tier 2 Grant 2016-T2-1-170, grant PIE-SGP-AI-2018-01 and NRF 2018 Fellowship NRF-NRFF2018-07. Emmanouil-Vasileios Vlatakis-Gkaragkounis was supported by NSF CCF-1563155, NSF CCF-1814873, NSF CCF-1703925, NSF CCF-1763970. Finally this work was supported by the Onassis Foundation - Scholarship ID: F ZN 010-1/2017-2018.

References

[ACB17] Martín Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. CoRR, abs/1701.07875, 2017.

[ADLH19] Leonard Adolphs, Hadi Daneshmand, Aurélien Lucchi, and Thomas Hofmann. Local saddle point optimization: A curvature exploitation approach. In The 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019, 16-18 April 2019, Naha, Okinawa, Japan, pages 486–495, 2019.

[AGL+17] Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and equilibrium in generative adversarial nets (gans). In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 224–232, 2017.

[ALW19] Jacob Abernethy, Kevin A. Lai, and Andre Wibisono. Last-iterate convergence rates for min-max optimization.
CoRR, abs/1906.02027, 2019.

[BGP19] James P. Bailey, Gauthier Gidel, and Georgios Piliouras. Finite regret and cycles with fixed step-size via alternating gradient descent-ascent. CoRR, abs/1907.04392, 2019.

[BP18] James P. Bailey and Georgios Piliouras. Multiplicative weights update in zero-sum games. In Proceedings of the 2018 ACM Conference on Economics and Computation, Ithaca, NY, USA, June 18-22, 2018, pages 321–338, 2018.

[BP19a] James P. Bailey and Georgios Piliouras. Fast and furious learning in zero-sum games: Vanishing regret with non-vanishing step sizes. In NeurIPS, 2019.

[BP19b] James P. Bailey and Georgios Piliouras. Multi-agent learning in network zero-sum games is a hamiltonian system. In 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2019.

[BRM+18] David Balduzzi, Sebastien Racaniere, James Martens, Jakob Foerster, Karl Tuyls, and Thore Graepel. The mechanics of n-player differentiable games. In International Conference on Machine Learning, pages 363–372, 2018.

[BSM17] David Berthelot, Tom Schumm, and Luke Metz. BEGAN: boundary equilibrium generative adversarial networks. CoRR, abs/1703.10717, 2017.

[CP19] Yun Kuen Cheung and Georgios Piliouras. Vortices instead of equilibria in minmax optimization: Chaos and butterfly effects of online learning in zero-sum games. In COLT, 2019.

[DISZ18] Constantinos Daskalakis, Andrew Ilyas, Vasilis Syrgkanis, and Haoyang Zeng. Training gans with optimism. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings, 2018.

[DLL+18] Simon S. Du, Jason D. Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. CoRR, abs/1811.03804, 2018.

[DP18] Constantinos Daskalakis and Ioannis Panageas.
The limit points of (optimistic) gradient descent in min-max optimization. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montréal, Canada, pages 9256–9266, 2018.

[DP19] Constantinos Daskalakis and Ioannis Panageas. Last-iterate convergence: Zero-sum games and constrained min-max optimization. In 10th Innovations in Theoretical Computer Science Conference, ITCS 2019, January 10-12, 2019, San Diego, California, USA, pages 27:1–27:18, 2019.

[GAA+17] Ishaan Gulrajani, Faruk Ahmed, Martín Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of wasserstein gans. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5767–5777, 2017.

[GBV+19] Gauthier Gidel, Hugo Berard, Gaëtan Vignoud, Pascal Vincent, and Simon Lacoste-Julien. A variational inequality perspective on generative adversarial networks. In ICLR, 2019.

[GBVL18] Gauthier Gidel, Hugo Berard, Pascal Vincent, and Simon Lacoste-Julien. A variational inequality perspective on generative adversarial nets. CoRR, abs/1802.10551, 2018.

[GHP+19a] Gauthier Gidel, Reyhane Askari Hemmat, Mohammad Pezeshki, Gabriel Huang, Rémi Lepriol, Simon Lacoste-Julien, and Ioannis Mitliagkas. Negative momentum for improved game dynamics. In AISTATS, 2019.

[GHP+19b] Gauthier Gidel, Reyhane Askari Hemmat, Mohammad Pezeshki, Rémi Le Priol, Gabriel Huang, Simon Lacoste-Julien, and Ioannis Mitliagkas. Negative momentum for improved game dynamics. In The 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019, 16-18 April 2019, Naha, Okinawa, Japan, pages 1802–1811, 2019.

[GPM+14] Ian J.
Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 2672–2680, 2014.

[GXC+18] Hao Ge, Yin Xia, Xu Chen, Randall Berry, and Ying Wu. Fictitious GAN: training gans with historical models. In Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part I, pages 122–137, 2018.

[IZZE17] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-image translation with conditional adversarial networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 5967–5976, 2017.

[JNJ19] Chi Jin, Praneeth Netrapalli, and Michael I. Jordan. Minmax optimization: Stable limit points of gradient descent ascent are locally optimal. CoRR, abs/1902.00618, 2019.

[KAHK17] Naveen Kodali, Jacob D. Abernethy, James Hays, and Zsolt Kira. On convergence and stability of gans. CoRR, abs/1705.07215, 2017.

[KLPT11] Robert D. Kleinberg, Katrina Ligett, Georgios Piliouras, and Éva Tardos. Beyond the nash equilibrium barrier. In Innovations in Computer Science - ICS 2010, Tsinghua University, Beijing, China, January 7-9, 2011. Proceedings, pages 125–140, 2011.

[LLRY18] Qihang Lin, Mingrui Liu, Hassan Rafique, and Tianbao Yang. Solving weakly-convex-weakly-concave saddle-point problems as weakly-monotone variational inequality. CoRR, abs/1810.10207, 2018.

[LTH+17] Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew P. Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi.
Photo-realistic single image super-resolution using a generative adversarial network. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 105–114, 2017.

[LY75] Tien-Yien Li and James A. Yorke. Period three implies chaos. The American Mathematical Monthly, 82(10):985–992, 1975.

[Ma18] Tengyu Ma. Generalization and equilibrium in generative adversarial nets (gans) (invited talk). In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2018, Los Angeles, CA, USA, June 25-29, 2018, page 2, 2018.

[MGN18] Lars M. Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for gans do actually converge? In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, pages 3478–3487, 2018.

[MLZ+19a] Panayotis Mertikopoulos, Bruno Lecouat, Houssam Zenati, Chuan-Sheng Foo, Vijay Chandrasekhar, and Georgios Piliouras. Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, 2019.

[MLZ+19b] Panayotis Mertikopoulos, Bruno Lecouat, Houssam Zenati, Chuan-Sheng Foo, Vijay Chandrasekhar, and Georgios Piliouras. Optimistic mirror descent in saddle-point problems: Going the extra(-gradient) mile. In ICLR, 2019.

[MPP18] Panayotis Mertikopoulos, Christos H. Papadimitriou, and Georgios Piliouras. Cycles in adversarial regularized learning. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018, New Orleans, LA, USA, January 7-10, 2018, pages 2703–2717, 2018.

[MPPS17] Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks.
In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017.

[MPR+17] Tung Mai, Ioannis Panageas, Will Ratcliff, Vijay V. Vazirani, and Peter Yunker. Rock-paper-scissors, differential games and biological diversity. CoRR, abs/1710.11249, 2017.

[MR18] Eric Mazumdar and Lillian J. Ratliff. On the convergence of gradient-based learning in continuous games. CoRR, abs/1804.05464, 2018.

[NMP18] Sai Ganesh Nagarajan, Sameh Mohamed, and Georgios Piliouras. Three body problems in evolutionary game dynamics: Convergence, periodicity and limit cycles. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2018, Stockholm, Sweden, July 10-15, 2018, pages 685–693, 2018.

[OSG+18] Frans A. Oliehoek, Rahul Savani, Jose Gallego-Posada, Elise van der Pol, and Roderich Groß. Beyond local nash equilibria for adversarial networks. CoRR, abs/1806.07268, 2018.

[PPP17] Gerasimos Palaiopanos, Ioannis Panageas, and Georgios Piliouras. Multiplicative weights update with constant step-size in congestion games: Convergence, limit cycles and chaos. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5872–5882, 2017.

[PS14] Georgios Piliouras and Jeff S. Shamma. Optimization despite chaos: Convex relaxations to complex limit sets via poincaré recurrence. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pages 861–873, 2014.

[PS18] Georgios Piliouras and Leonard J. Schulman.
Learning dynamics and the co-evolution of competing sexual species. In 9th Innovations in Theoretical Computer Science Conference, ITCS 2018, January 11-14, 2018, Cambridge, MA, USA, pages 59:1–59:3, 2018.

[PV16] David Pfau and Oriol Vinyals. Connecting generative adversarial networks and actor-critic methods. CoRR, abs/1610.01945, 2016.

[RMC16] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016.

[San10] William H. Sandholm. Population Games and Evolutionary Dynamics. MIT Press, 2010.

[SGZ+16] Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 2226–2234, 2016.

[SRL18] Maziar Sanjabi, Meisam Razaviyayn, and Jason D. Lee. Solving non-convex non-concave min-max games under polyak-łojasiewicz condition. CoRR, abs/1812.02878, 2018.

[TGB+17] Ilya O. Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann Simon-Gabriel, and Bernhard Schölkopf. Adagan: Boosting generative models. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5424–5433, 2017.

[YFW+19] Yasin Yazıcı, Chuan-Sheng Foo, Stefan Winkler, Kim-Hui Yap, Georgios Piliouras, and Vijay Chandrasekhar. The unusual effectiveness of averaging in gan training. In ICLR, 2019.

[ZXL17] Han Zhang, Tao Xu, and Hongsheng Li. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks.
In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 5908–5916, 2017.