{"title": "Adaptive Influence Maximization with Myopic Feedback", "book": "Advances in Neural Information Processing Systems", "page_first": 5574, "page_last": 5583, "abstract": "We study the adaptive influence maximization problem with myopic feedback under the independent cascade model: one sequentially selects k nodes as seeds one by one from a social network, and each selected seed returns the immediate neighbors it activates as the feedback available for later selections, and the goal is to maximize the expected number of total activated nodes, referred to as the influence spread. We show that the adaptivity gap, the ratio between the optimal adaptive influence spread and the optimal non-adaptive influence spread, is at most 4 and at least e/(e-1), and the approximation ratios with respect to the optimal adaptive influence spread of both the non-adaptive greedy and adaptive greedy algorithms are at least \frac{1}{4}(1 - \frac{1}{e}) and at most \frac{e^2 + 1}{(e + 1)^2} < 1 - \frac{1}{e}. 
Moreover, the approximation ratio of the non-adaptive greedy algorithm is no worse than that of the adaptive greedy algorithm, when considering all graphs.\nOur result confirms a long-standing open conjecture of Golovin and Krause (2011) on the constant approximation ratio of adaptive greedy with myopic feedback, and it also suggests that adaptive greedy may not bring much benefit under myopic feedback.", "full_text": "Adaptive Influence Maximization with Myopic Feedback\n\nBinghui Peng∗\nColumbia University\nbp2601@columbia.edu\n\nWei Chen\nMicrosoft Research\nweic@microsoft.com\n\nAbstract\n\nWe study the adaptive influence maximization problem with myopic feedback under the independent cascade model: one sequentially selects k nodes as seeds one by one from a social network, and each selected seed returns the immediate neighbors it activates as the feedback available for later selections, and the goal is to maximize the expected number of total activated nodes, referred to as the influence spread. We show that the adaptivity gap, the ratio between the optimal adaptive influence spread and the optimal non-adaptive influence spread, is at most 4 and at least e/(e - 1), and the approximation ratios with respect to the optimal adaptive influence spread of both the non-adaptive greedy and adaptive greedy algorithms are at least (1/4)(1 - 1/e) and at most (e^2 + 1)/(e + 1)^2 < 1 - 1/e. Moreover, the approximation ratio of the non-adaptive greedy algorithm is no worse than that of the adaptive greedy algorithm, when considering all graphs. Our result confirms a long-standing open conjecture of Golovin and Krause (2011) on the constant approximation ratio of adaptive greedy with myopic feedback, and it also suggests that adaptive greedy may not bring much benefit under myopic feedback.\n\n1 Introduction\n\nInfluence maximization is the task of, given a social network and a stochastic diffusion model on the network, finding the k seed nodes with the largest expected influence spread in the model [11]. Influence maximization and its variants have applications in viral marketing, rumor control, etc., and have been extensively studied (cf. [6, 12]).\nIn this paper, we focus on the adaptive influence maximization problem, where seed nodes are sequentially selected one by one, and after each seed selection, partial or full diffusion results from the seed are returned as the feedback, which can be used for subsequent seed selections. Two main types of feedback have been proposed and studied before: (a) full-adoption feedback, where the entire diffusion process from the selected seed is returned as the feedback, and (b) myopic feedback, where only the immediate neighbors activated by the selected seed are returned as the feedback. Under the common independent cascade (IC) model, where every edge in the graph has an independent probability of passing influence, Golovin and Krause [7] show that the full-adoption feedback model satisfies the key adaptive submodularity property, which enables a simple adaptive greedy algorithm to achieve a (1 - 1/e) approximation to the adaptive optimal solution. However, the IC model with myopic feedback is not adaptive submodular, and Golovin and Krause [7] only conjecture that in this case the adaptive greedy algorithm still guarantees a constant approximation.
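To make the conjectured setting concrete, here is a minimal Python sketch of adaptive greedy with myopic feedback under the IC model. It is not the paper's implementation: the toy graph, round counts, and the crude marginal-gain estimate (which counts only newly activated nodes and ignores interactions between seeds' future cascades) are illustrative assumptions.

```python
import random

# Toy IC influence graph: u -> [(v, p_uv), ...]. Illustrative only.
GRAPH = {
    "a": [("b", 0.9), ("c", 0.9)],
    "b": [("d", 0.5)],
    "c": [("d", 0.5)],
    "d": [],
    "e": [("a", 0.1)],
}

def marginal_gain(graph, u, activated, rng, rounds=300):
    """Crude Monte Carlo estimate of the expected number of *new*
    activations if u is seeded now, counting only nodes outside the
    already-activated set (a sketch-level simplification)."""
    total = 0
    for _ in range(rounds):
        active, stack = {u}, [u]
        while stack:
            w = stack.pop()
            for v, p in graph[w]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    stack.append(v)
        total += len(active - activated)
    return total / rounds

def adaptive_greedy_myopic(graph, k, rng):
    """Pick k seeds one by one; after each pick, observe only the seed's
    one-step (myopic) feedback and condition later picks on it."""
    seeds, activated = [], set()
    for _ in range(k):
        best = max((u for u in graph if u not in seeds),
                   key=lambda u: marginal_gain(graph, u, activated, rng))
        seeds.append(best)
        activated.add(best)
        # Myopic feedback: the status of best's out-going edges only.
        for v, p in graph[best]:
            if rng.random() < p:
                activated.add(v)
    return seeds
```

On this toy graph the first pick is the high-degree, high-probability node, and the observed one-step feedback steers the second pick away from nodes that are already activated.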
To the best of our knowledge, this conjecture remained open before our result in this paper, which confirms that adaptive greedy is indeed a constant approximation of the adaptive optimal solution.\n\n∗Most of this work was done while Binghui Peng was at Tsinghua University and visiting Microsoft Research Asia as an intern.\n\n33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.\n\nIn particular, our paper presents two sets of related results on adaptive influence maximization with myopic feedback under the IC model. We first study the adaptivity gap of the problem (Section 3), which is defined as the ratio between the adaptive optimal solution and the non-adaptive optimal solution, and is an indicator of how useful adaptivity can be for the problem. We show that the adaptivity gap for our problem is at most 4 (Theorem 1) and at least e/(e - 1) (Theorem 2). The proof of the upper bound 4 is the most involved, because the problem is not adaptive submodular, and we have to create a hybrid policy that involves three independent runs of the diffusion process in order to connect an adaptive policy with a non-adaptive policy. Next we study the approximation ratio with respect to the adaptive optimal solution for both the non-adaptive greedy and adaptive greedy algorithms (Section 4). We show that the approximation ratios of both algorithms are at least (1/4)(1 - 1/e) (Theorem 3), which combines the adaptivity gap upper bound of 4 with the results that both algorithms achieve a (1 - 1/e) approximation of the non-adaptive optimal solution (the (1 - 1/e) approximation ratio for the adaptive greedy algorithm requires a new proof).
We further show that the approximation ratios for both algorithms are at most (e^2 + 1)/(e + 1)^2 ≈ 0.606, which is strictly less than 1 - 1/e ≈ 0.632, and that the approximation ratio of non-adaptive greedy is the same as the worst approximation ratio of adaptive greedy over a family of graphs (Theorem 4).\nIn summary, our contribution is a systematic study of adaptive influence maximization with myopic feedback under the IC model. We prove constant upper and lower bounds on the adaptivity gap in this case, and constant upper and lower bounds on the approximation ratios (with respect to the optimal adaptive solution) achieved by the non-adaptive greedy and adaptive greedy algorithms. The constant approximation ratio of the adaptive greedy algorithm answers a long-standing open conjecture affirmatively. Our result on the adaptivity gap is the first one for a problem not satisfying adaptive submodularity. Our results also suggest that adaptive greedy may not bring much benefit under the myopic feedback model.\nDue to the space constraint, full proof details are included in the supplementary material.\n\nRelated Work. Influence maximization as a discrete optimization task was first proposed by Kempe et al. [11], who propose the independent cascade, linear threshold and other models, study their submodularity, and give the greedy approximation algorithm for the influence maximization task. Since then, influence maximization and its variants have been extensively studied. We refer to recent surveys [6, 12] for general coverage of this area.\nAdaptive submodularity is formulated by Golovin and Krause [7] for general stochastic adaptive optimization problems, and they show that the adaptive greedy algorithm achieves a (1 - 1/e) approximation if the problem is adaptive monotone and adaptive submodular.
They study the in\ufb02uence maximization\nproblem under the IC model as an application, and prove that the full-adoption feedback under the IC\nmodel is adaptive submodular. However, in their arXiv version, they show that the myopic feedback\nversion is not adaptive submodular, and they conjecture that adaptive greedy would still achieve a\nconstant approximation in this case.\nAdaptive in\ufb02uence maximization has been studied in [19, 20, 16, 13, 18, 10, 17, 5]. Tong et al. [19]\nprovide both adaptive greedy and ef\ufb01cient heuristic algorithms for adaptive in\ufb02uence maximization.\nTheir theoretical analysis works for the full-adoption feedback model but has a gap when applied\nto myopic feedback, which is con\ufb01rmed by the authors. Yuan and Tang [20] introduce the partial\nfeedback model and develop algorithms that balance the tradeoff between delay and performance,\nand their partial feedback model does not coincide with the myopic feedback model. Salha et al. [13]\nconsider a different diffusion model where edges can be reactivated at each time step, and they show\nthat myopic feedback under this model is adaptive submodular. Sun et al. [16] study the multi-round\nadaptive in\ufb02uence maximization problem, where k seeds are selected in each round and at the end\nof the round the full-adoption feedback is returned. Tong [18] introduces a general feedback model\nand develops some heuristic algorithms for this model. Han et al. [10] and Tang et al. [17] propose\nef\ufb01cient adaptive algorithms for in\ufb02uence maximization and seed set minimization respectively based\non the reverse in\ufb02uence sampling approach, both for IC models with full-adoption feedback. In\na separate paper [5], we study the adaptivity gap in the IC model with full-adoption feedback for\nseveral classes of graphs such as trees and bipartite graphs. 
A different two-stage seeding process has also been studied [14, 3, 15], but the model is quite different, since its first stage of selecting a node set X only introduces the neighbors of X as seeding candidates for the second stage.\n\nAdaptivity gap has been studied by two lines of research. The first line of work utilizes multilinear extension and adaptive submodularity to study adaptivity gaps for the class of stochastic submodular maximization problems, and gives an e/(e - 1) upper bound for matroid constraints [2, 1]. The second line of work [8, 9, 4] studies the stochastic probing problem and proposes the idea of a random-walk non-adaptive policy on the decision tree, which partially inspires our analysis. However, their analysis also implicitly depends on adaptive submodularity. In contrast, our result on the adaptivity gap is the first on a problem that does not satisfy adaptive submodularity (see Section 3.1 for more discussion).\n\n2 Model and Problem Definition\n\nDiffusion Model. In this paper, we focus on the well-known Independent Cascade (IC) model as the diffusion model. In the IC model, the social network is described by a directed influence graph G = (V, E, p), where V is the set of nodes (|V| = n), E ⊆ V × V is the set of directed edges, and each directed edge (u, v) ∈ E is associated with a probability p_uv ∈ [0, 1]. The live-edge graph L = (V, L(E)) is a random subgraph of G, where each edge (u, v) ∈ E belongs to L(E) with independent probability p_uv. If (u, v) ∈ L(E), we say edge (u, v) is live; otherwise, we say it is blocked. The dynamic diffusion in the IC model is as follows: at time t = 0, a live-edge graph L is sampled and the nodes in a seed set S ⊆ V are activated. At every discrete time t = 1, 2, . . ., if a node u was activated at time t - 1, then all of u's out-going neighbors in L are activated at time t.
The propagation continues until there are no more newly activated nodes at a time step. The dynamic model can be viewed equivalently as every activated node u having one chance to activate each of its out-going neighbors v with independent success probability p_uv. Given a seed set S, the influence spread of S, denoted σ(S), is the expected number of nodes activated in the diffusion process from S, i.e. σ(S) = E_L[|Γ(S, L)|], where Γ(S, L) is the set of nodes reachable from S in graph L.\n\nInfluence Maximization Problem. Under the IC model, we formalize the influence maximization (IM) problem in both non-adaptive and adaptive settings. Influence maximization in the non-adaptive setting follows the classical work of [11], and is defined below.\nDefinition 1 (Non-adaptive Influence Maximization). Non-adaptive influence maximization is the problem of, given a directed influence graph G = (V, E, p) with IC model parameters {p_uv}_{(u,v)∈E} and a budget k, finding a seed set S* of at most k nodes such that the influence spread of S*, σ(S*), is maximized, i.e. finding S* ∈ argmax_{S⊆V, |S|≤k} σ(S).\nWe formulate influence maximization in the adaptive setting following the framework of [7]. Let O denote the set of states, which informally correspond to the feedback information in the adaptive setting. A realization φ is a function φ : V → O, such that for u ∈ V, φ(u) represents the feedback obtained when selecting u as a seed node. In this paper, we focus on the myopic feedback model [7], in which the feedback of a node u only contains the status of the out-going edges of u being live or blocked. Informally, it means that after selecting a seed we can only see its one-step propagation effect as the feedback. The realization φ then determines the status of every edge in G, and thus corresponds to a live-edge graph.
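The live-edge view of σ(S) = E_L[|Γ(S, L)|] above translates directly into a Monte Carlo estimator. The following self-contained Python sketch (the toy graph and round count are illustrative assumptions, not from the paper) samples a live-edge graph by flipping each edge independently and averages the size of the reachable set:

```python
import random

# Toy influence graph G = (V, E, p): u -> [(v, p_uv), ...]. Illustrative only.
GRAPH = {0: [(1, 0.5), (2, 0.5)], 1: [(3, 1.0)], 2: [(3, 1.0)], 3: []}

def sample_live_edge_graph(graph, rng):
    """Flip each edge (u, v) independently: live with probability p_uv."""
    return {u: [v for v, p in out if rng.random() < p]
            for u, out in graph.items()}

def reachable(live, seeds):
    """Gamma(S, L): the set of nodes reachable from S via live edges."""
    seen, stack = set(seeds), list(seeds)
    while stack:
        u = stack.pop()
        for v in live.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def influence_spread(graph, seeds, rng, rounds=2000):
    """Monte Carlo estimate of sigma(S) = E_L[|Gamma(S, L)|]."""
    return sum(len(reachable(sample_live_edge_graph(graph, rng), seeds))
               for _ in range(rounds)) / rounds
```

On this toy graph the exact value is σ({0}) = 1 + 0.5 + 0.5 + (1 - 0.5^2) = 2.75, which the estimator approaches as `rounds` grows.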
As a comparison, the full-adoption feedback model [7] is such that for each seed node u, the feedback contains the status of every out-going edge of every node v that is reachable from u in a live-edge graph L. This means that after selecting a seed u, we can see the full cascade from u as the feedback. In the full-adoption feedback case, each realization φ also corresponds to a unique live-edge graph. Henceforth, we refer to φ as both a realization and a live-edge graph interchangeably. In the remainder of this section, the terminologies we introduce apply to both feedback models, unless we explicitly point out which feedback model we are discussing.\nLet R denote the set of all realizations. We use Φ to denote a random realization, following the distribution P over random live-edge graphs (i.e. each edge (u, v) ∈ E has an independent probability p_uv of being live in Φ). Given a subset S and a realization φ, we define the influence utility function f : 2^V × R → R_+ as f(S, φ) = |Γ(S, φ)|, where R_+ is the set of non-negative real numbers. That is, f(S, φ) is the number of nodes reachable from S in realization (live-edge graph) φ. Then it is clear that the influence spread σ(S) = E_{Φ~P}[f(S, Φ)].\nIn the adaptive influence maximization problem, we sequentially select nodes as seeds; after selecting one seed node, we obtain its feedback and can use it to guide further seed selections. A partial realization ψ maps a subset of nodes in V, denoted dom(ψ) for the domain of ψ, to their states. Partial realization ψ represents the feedback we could obtain after the nodes in dom(ψ) are selected as seeds. For convenience, we also represent ψ as a relation, i.e., ψ = {(u, o) ∈ V × O : u ∈ dom(ψ), o = ψ(u)}.
We say that a full realization φ is consistent with a partial realization ψ, denoted φ ~ ψ, if φ(u) = ψ(u) for every u ∈ dom(ψ).\nAn adaptive policy π is a mapping from partial realizations to nodes. Given a partial realization ψ, π(ψ) represents the next seed node that policy π would select when it sees the feedback represented by ψ. Under a full realization φ consistent with ψ, after selecting π(ψ), the policy would obtain feedback φ(π(ψ)), the partial realization would grow to ψ′ = ψ ∪ {(π(ψ), φ(π(ψ)))}, and policy π could pick the next seed node π(ψ′) based on partial realization ψ′. For convenience, we only consider deterministic policies in this paper; the results we derive can be easily extended to randomized policies. Let V(π, φ) denote the set of nodes selected by policy π under realization φ. For the adaptive influence maximization problem, we consider the simple cardinality constraint |V(π, φ)| ≤ k, i.e. the policy selects at most k nodes. Let Π(k) denote the set of such policies.\nThe objective of an adaptive policy π is its adaptive influence spread, which is the expected number of nodes that are activated under policy π. Formally, we define the adaptive influence spread of π as σ(π) = E_{Φ~P}[f(V(π, Φ), Φ)]. The adaptive influence maximization problem is defined as follows.\nDefinition 2 (Adaptive Influence Maximization).
Adaptive influence maximization is the problem of, given a directed influence graph G = (V, E, p) with IC model parameters {p_uv}_{(u,v)∈E} and a budget k, finding an adaptive policy π* that selects at most k seed nodes such that the adaptive influence spread of π*, σ(π*), is maximized, i.e. finding π* ∈ argmax_{π∈Π(k)} σ(π).\nNote that for any fixed seed set S, we can create a policy π_S that always selects set S regardless of the feedback, which means any non-adaptive solution is a feasible solution for adaptive influence maximization. Therefore, the optimal adaptive influence spread is at least as good as the optimal non-adaptive influence spread, under the same budget constraint.\n\nAdaptivity Gap. Since adaptive policies are usually hard to design and analyze, and the adaptive interaction process may also be slow in practice, a fundamental question for adaptive stochastic optimization problems is whether adaptive algorithms are really superior to non-adaptive algorithms. The adaptivity gap measures the gap between the optimal adaptive solution and the optimal non-adaptive solution. More concretely, if we use OPT_N(G, k) (resp. OPT_A(G, k)) to denote the influence spread of the optimal non-adaptive (resp. adaptive) solution for the IM problem in an influence graph G under the IC model with seed budget k, then we have the following definition.\nDefinition 3 (Adaptivity Gap for IM). The adaptivity gap in the IC model is defined as the supremum of the ratios of the influence spread between the optimal adaptive policy and the optimal non-adaptive policy, over all possible influence graphs G and seed budgets k, i.e.,\n\nsup_{G,k} OPT_A(G, k) / OPT_N(G, k).   (1)\n\nSubmodularity and Adaptive Submodularity.
Non-adaptive influence maximization is often solved via submodular function maximization techniques. A set function f : 2^V → R is submodular if for all S ⊆ T ⊆ V and all u ∈ V \ T, f(S ∪ {u}) - f(S) ≥ f(T ∪ {u}) - f(T). Set function f is monotone if for all S ⊆ T ⊆ V, f(S) ≤ f(T). Kempe et al. [11] show that the influence spread function σ(S) under the IC model is monotone and submodular, and thus a simple non-adaptive greedy algorithm achieves a (1 - 1/e) approximation of the optimal non-adaptive solution, assuming the function evaluation σ(S) is given by an oracle.\nGolovin and Krause [7] define adaptive submodularity for the adaptive stochastic optimization framework. In the context of adaptive influence maximization, adaptive submodularity can be defined as follows. Given a utility function f, for any partial realization ψ and a node u ∉ dom(ψ), we define the marginal gain of u given ψ as Δf(u | ψ) = E_{Φ~P}[f(dom(ψ) ∪ {u}, Φ) - f(dom(ψ), Φ) | Φ ~ ψ], i.e. the expected marginal gain in influence spread when adding u to the partial realization ψ. A partial realization ψ is a sub-realization of another partial realization ψ′ if ψ ⊆ ψ′ when treating both as relations. We say that the utility function f is adaptive submodular with respect to P if for any two fixed partial realizations ψ and ψ′ such that ψ ⊆ ψ′, and for any u ∉ dom(ψ′), we have Δf(u | ψ) ≥ Δf(u | ψ′); that is, the marginal influence spread of a node given more feedback is
at most its marginal influence spread given less feedback. We say that f is adaptive monotone with respect to P if for any partial realization ψ with Pr_{Φ~P}(Φ ~ ψ) > 0 and any u ∉ dom(ψ), Δf(u | ψ) ≥ 0.\n\nGolovin and Krause [7] show that the influence utility function under the IC model with full-adoption feedback is adaptive monotone and adaptive submodular, and thus the adaptive greedy algorithm achieves a (1 - 1/e) approximation of the adaptive optimal solution. However, they show that the influence utility function under the IC model with myopic feedback is not adaptive submodular. They conjecture that the adaptive greedy policy still provides a constant approximation. In this paper, we show that the adaptive greedy policy provides a (1/4)(1 - 1/e) approximation, and thus finally address this conjecture affirmatively.\n\n3 Adaptivity Gap in Myopic Feedback Model\n\nIn this section, we analyze the adaptivity gap for influence maximization problems under the myopic feedback model and derive both upper and lower bounds.\n\n3.1 Upper Bound on the Adaptivity Gap\n\nOur main result is an upper bound on the adaptivity gap for the myopic feedback model, which is formally stated below.\nTheorem 1. Under the IC model with myopic feedback, the adaptivity gap for the influence maximization problem is at most 4.\n\nProof outline. We now outline the main ideas and the structure of the proof of Theorem 1. The main idea is to show that for each adaptive policy π, we can construct a non-adaptive randomized policy W(π) such that the adaptive influence spread σ(π) is at most four times the non-adaptive influence spread of W(π), denoted σ(W(π)). This would immediately imply Theorem 1.
The non-adaptive policy W(π) is constructed by viewing the adaptive policy π as a decision tree whose leaves represent the final seed set selected (Definition 4), and W(π) simply samples such a seed set according to the distribution over the leaves (Definition 5). The key to connecting σ(π) with σ(W(π)) is to introduce a fictitious hybrid policy π̄ such that σ(π) ≤ σ̄(π̄) ≤ 4σ(W(π)), where σ̄(π̄) is the aggregate adaptive influence spread (defined in Eqs. (2) and (3)). Intuitively, π̄ works on three independent realizations Φ^1, Φ^2, Φ^3, and it adaptively selects seeds as π does when working on Φ^1. The difference is that each selected seed has three independent chances to activate its out-neighbors, according to the union of Φ^1, Φ^2, Φ^3. The inequality σ(π) ≤ σ̄(π̄) is immediate, and the main effort is in proving σ̄(π̄) ≤ 4σ(W(π)).\nTo do so, we first introduce general notations σ^t(S) and σ^t(π) for t = 1, 2, 3, where σ^t(S) is the t-th aggregate influence spread for a seed set S and σ^t(π) is the t-th aggregate adaptive influence spread for an adaptive policy π; both mean that all seed nodes have t independent chances to activate their out-neighbors. Obviously, σ̄(π̄) = σ^3(π) and σ(W(π)) = σ^1(W(π)). We then represent σ^t(W(π)) and σ^t(π) as summations of k non-adaptive marginal gains Δf^t(u | dom(ψ_s)) and adaptive marginal gains Δf^t(u | ψ_s), respectively (Definition 6 and Lemma 1), with respect to nodes s at different levels of the decision tree.
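The "t independent chances" semantics of σ^t(S) can be simulated directly. In the Python sketch below (the toy graph and parameters are illustrative assumptions), each edge leaving a seed uses the closed form 1 - (1 - p)^t, which is the probability that the union of t independent live-edge samples makes that edge live; non-seed nodes keep the usual single chance:

```python
import random

# Toy influence graph: u -> [(v, p_uv), ...]. Illustrative assumption.
GRAPH = {0: [(1, 0.3), (2, 0.3)], 1: [(3, 0.6)], 2: [(3, 0.6)], 3: []}

def aggregate_spread(graph, seeds, t, rng, rounds=4000):
    """Monte Carlo estimate of sigma^t(S): each seed's out-edges get t
    independent chances (union of t live-edge samples on S), while
    non-seed nodes get a single chance."""
    seeds = set(seeds)
    total = 0
    for _ in range(rounds):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            chances = t if u in seeds else 1
            for v, p in graph.get(u, []):
                if v not in active and rng.random() < 1 - (1 - p) ** chances:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / rounds
```

On this instance the estimate of σ^2({0}) exceeds that of σ^1({0}) but stays below twice σ^1({0}), consistent with the bound σ^t(S) ≤ t · σ(S) used in the proof outline.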
Next, we establish the key connection between the adaptive marginal gain and the non-adaptive marginal gain (Lemma 3): Δf^3(u | ψ^1) ≤ 2Δf^2(u | dom(ψ^1)). This immediately implies that σ^3(π) ≤ 2σ^2(W(π)). Finally, we prove that the t-th aggregate non-adaptive influence spread σ^t(S) is bounded by t · σ(S), which implies that σ^2(W(π)) ≤ 2σ(W(π)). This concludes the proof.\nWe remark that our introduction of the hybrid policy π̄ is inspired by the analysis in [4], which shows that the adaptivity gap for the stochastic multi-value probing (SMP) problem is at most 2. However, our analysis is more involved than theirs and is novel in several aspects. First, the SMP problem is simpler than our problem, with the key difference being that SMP is adaptive submodular but our problem is not. Therefore, we cannot apply their inductive reasoning, which implicitly relies on adaptive submodularity. Instead, we have to use our marginal gain representation and redo the bounding analysis carefully based on the (non-adaptive) submodularity of the influence utility function on live-edge graphs. Moreover, our influence utility function is also more sophisticated, and we have to use three independent realizations in order to apply the submodularity on live-edge graphs, which results in an adaptivity gap bound of 4, while their analysis only needs two independent realizations to achieve a bound of 2. We now provide the technical proof of Theorem 1. We first formally define the decision tree representation.\nDefinition 4 (Decision tree representation for adaptive policy).
An adaptive policy π can be seen as a decision tree T(π), where each node s of T(π) corresponds to a partial realization ψ_s, with the root being the empty partial realization, and node s′ is a child of s if ψ_{s′} = ψ_s ∪ {(π(ψ_s), φ(π(ψ_s)))} for some realization φ ~ ψ_s. Each node s is associated with a probability p_s, which is the probability that the policy π generates partial realization ψ_s, i.e. the probability that the policy would walk on the tree from the root to node s.\nNext we define the non-adaptive randomized policy W(π), which randomly selects a leaf of T(π).\nDefinition 5 (Random-walk non-adaptive policy [9]). For any adaptive policy π, let L(π) denote the set of leaves of T(π). Then we construct a randomized non-adaptive policy W(π) as follows: for any leaf ℓ ∈ L(π), W(π) picks leaf ℓ with probability p_ℓ and selects dom(ψ_ℓ) as the seed set.\nBefore proceeding further with our analysis, we introduce some notation for the myopic feedback model. In the myopic feedback model, the state spaces of all nodes are mutually independent and disjoint. Thus we can decompose the realization space R into independent subspaces, R = ×_{u∈V} O_u, where O_u is the set of all possible states for node u. For any full realization φ (resp. partial realization ψ), we use φ_S (resp. ψ_S) to denote the feedback for the node set S ⊆ V. Note that φ_S and ψ_S are partial realizations with domain S. Similarly, we also use P_S to denote the probability space ×_{u∈S} P_u, where P_u is the probability distribution over O_u (i.e. each out-going edge (u, v) of u is live with independent probability p_uv). With a slight abuse of notation, we further use φ_S (resp.
\u03c8S) to denote the set of live edges leaving from S under \u03c6 (resp. \u03c8). Then\nwe could use notation \u03c61\nS to represent the union of live-edges from \u03c61 and \u03c62 leaving from S,\nand similarly \u03c8 \u222a \u03c62\n\nS \u222a \u03c62\n\nS with dom(\u03c8) = S.\n\nConstruction of the hybrid policy \u00af\u03c0. For any adaptive policy \u03c0, we de\ufb01ne a \ufb01ctitious hybrid policy\n\u00af\u03c0 that works on three independent random realizations \u03a61, \u03a62 and \u03a63 simultaneously, thinking about\nthem as from three copies of the graphs G1, G2 and G3. Note that \u00af\u03c0 is not a real adaptive policy\n\u2014 it is only used for our analytical purpose to build connections between the adaptive policy \u03c0 and\nthe non-adaptive policy W(\u03c0). In terms of adaptive seed selection, \u00af\u03c0 acts exactly the same as \u03c0\non G1, responding to partial realizations \u03c81 obtained so far from the full realization \u03a61 of G1, and\ndisregarding the realizations \u03a62 and \u03a63. However, the difference is when we de\ufb01ne adaptive in\ufb02uence\nspread for \u00af\u03c0, we aggregate the three partial realizations on the seed set together. More precisely, for\nany t = 1, 2, 3, we de\ufb01ne the t-th aggregate in\ufb02uence utility function as f t : 2V \u00d7 Rt \u2192 R+\n\n(2)\nwhere (\u222ai\u2208[t]\u03c6i\nV \\S) means a new realization \u03c6(cid:48) where on set S its set of out-going live-edges is\nthe same as the union of \u03c61,\u00b7\u00b7\u00b7 \u03c6t, and on set V \\ S its set of out-going live-edges is the same as \u03c61,\nand f is the original in\ufb02uence utility function de\ufb01ned in Section 2. 
The objective of the hybrid policy\n\u00af\u03c0 is then de\ufb01ned as the adaptive in\ufb02uence spread under policy \u00af\u03c0, i.e.,\n\nS, \u03c61\n\nS, \u03c61\n\nV \\S)\n\n,\n\nS, (\u222ai\u2208[t]\u03c6i\n\nf t(cid:0)S, \u03c61,\u00b7\u00b7\u00b7 , \u03c6t(cid:1) := f\n\n(cid:16)\n\n(cid:17)\n\n(cid:2)f 3(V (\u03c0, \u03a61), \u03a61, \u03a62, \u03a63)(cid:3)\n(cid:16)\n(cid:104)\n\nf\n\nV (\u03c0, \u03a61), (\u03a61\n\n\u00af\u03c3(\u00af\u03c0) :=\n\nE\n\n\u03a61,\u03a62,\u03a63\u223cP\n\n=\n\nE\n\n\u03a61,\u03a62,\u03a63\u223cP\n\nV (\u03c0,\u03a61) \u222a \u03a62\n\nV (\u03c0,\u03a61) \u222a \u03a63\n\nV (\u03c0,\u03a61), \u03a61\n\nV \\V (\u03c0,\u03a61))\n\n.\n\n(3)\n\n(cid:17)(cid:105)\n\nIn other words, the adaptive in\ufb02uence spread of the hybrid policy \u00af\u03c0 is the in\ufb02uence spread of seed\nnodes V (\u03c0, \u03a61) selected in graph G1 by policy \u03c0, where the live-edge graph on the seed set part\nV (\u03c0, \u03a61) is the union of live-edge graphs of G1, G2 and G3, and the live-edge graph on the non-seed\nset part is only that of G1. It can also be viewed as each seed node has three independent chances to\nactivate its out-neighbors. Since the hybrid policy \u00af\u03c0 acts the same as policy \u03c0 on in\ufb02uence graph G1,\nwe can easily conclude:\nClaim 1. \u00af\u03c3(\u00af\u03c0) \u2265 \u03c3(\u03c0).\nWe also de\ufb01ne t-th aggregate in\ufb02uence spread for a seed set S, \u03c3t(S), as \u03c3t(S) =\n(cid:96)\u2208L(\u03c0) p(cid:96) \u00b7 \u03c3t(dom(\u03c8(cid:96))), that is, the t-th aggregate in\ufb02uence spread of W(\u03c0) is the\naverage t-th aggregate in\ufb02uence spread of seed nodes selected by W(\u03c0) according to distribution\nof the leaves in the decision tree T (\u03c0). Similarly, we de\ufb01ne the t-th aggregate adaptive in\ufb02uence\n\nE\u03a61,\u00b7\u00b7\u00b7 ,\u03a6t\u223cP(cid:2)f t(S, \u03a61,\u00b7\u00b7\u00b7 , \u03a6t)(cid:3). 
Then, for the random-walk non-adaptive policy W(π), we define

    σ^t(W(π)) := Σ_{ℓ∈L(π)} p_ℓ · σ^t(dom(ψ_ℓ)),

that is, the t-th aggregate influence spread of W(π) is the average t-th aggregate influence spread of the seed sets selected by W(π), according to the distribution over the leaves of the decision tree T(π). Similarly, we define the t-th aggregate adaptive influence spread of an adaptive policy π as σ^t(π) := E_{Φ^1,...,Φ^t ∼ P}[ f^t(V(π, Φ^1), Φ^1, ..., Φ^t) ]. Note that σ̄(π̄) = σ^3(π).

Now we can define the conditional expected marginal gain of the aggregate influence utility function f^t over live-edge graph distributions.

Definition 6. The expected non-adaptive marginal gain of u given a set S under f^t is defined as

    Δ_{f^t}(u | S) := E_{Φ^1,...,Φ^t ∼ P}[ f^t(S ∪ {u}, Φ^1, ..., Φ^t) − f^t(S, Φ^1, ..., Φ^t) ].    (4)

The expected adaptive marginal gain of u given a partial realization ψ^1 under f^t is defined as

    Δ_{f^t}(u | ψ^1) := E_{Φ^1,...,Φ^t ∼ P}[ f^t(dom(ψ^1) ∪ {u}, Φ^1, ..., Φ^t) − f^t(dom(ψ^1), Φ^1, ..., Φ^t) | Φ^1 ∼ ψ^1 ].    (5)

The following lemma connects σ^t(π) (and thus σ̄(π̄)) with the adaptive marginal gain Δ_{f^t}(u | ψ), and connects σ^t(W(π)) with the non-adaptive marginal gain Δ_{f^t}(u | S). Let P^π_i denote the probability distribution over the nodes at depth i of the decision tree T(π). The proof applies a telescoping sum to convert the influence spread into a sum of marginal gains.

Lemma 1.
For any adaptive policy π and any t ≥ 1, we have

    σ^t(π) = Σ_{i=0}^{k−1} E_{s ∼ P^π_i}[ Δ_{f^t}(π(ψ_s) | ψ_s) ],   and   σ^t(W(π)) = Σ_{i=0}^{k−1} E_{s ∼ P^π_i}[ Δ_{f^t}(π(ψ_s) | dom(ψ_s)) ].

The next lemma bounds two intermediate adaptive marginal gains to be used in Lemma 3. The proof crucially depends on (a) the independence of the realizations Φ^1, Φ^2 and Φ^3, (b) the independence of the feedback of different selected seed nodes, and (c) the submodularity of the influence utility function on live-edge graphs.

Lemma 2. Let S = dom(ψ^1) and S^+ = S ∪ {u} for any partial realization ψ^1 and any u ∉ dom(ψ^1). Then we have

    E_{Φ^1,Φ^2,Φ^3 ∼ P}[ f(S^+, (Φ^1_S ∪ Φ^2_S ∪ Φ^3_S, Φ^1_u, Φ^1_{V∖S^+})) − f(S, (Φ^1_S ∪ Φ^2_S ∪ Φ^3_S, Φ^1_{V∖S})) | Φ^1 ∼ ψ^1 ] ≤ Δ_{f^2}(u | S),    (6)

    E_{Φ^1,Φ^2,Φ^3 ∼ P}[ f(S^+, (Φ^1_S ∪ Φ^2_S ∪ Φ^3_S, Φ^1_u ∪ Φ^2_u ∪ Φ^3_u, Φ^1_{V∖S^+})) − f(S^+, (Φ^1_S ∪ Φ^2_S ∪ Φ^3_S, Φ^1_u, Φ^1_{V∖S^+})) | Φ^1 ∼ ψ^1 ] ≤ Δ_{f^2}(u | S).    (7)

Combining the two inequalities above, we obtain the following key lemma, which bounds the adaptive marginal gain Δ_{f^3}(u | ψ^1) by the non-adaptive marginal gain Δ_{f^2}(u | dom(ψ^1)).

Lemma 3.
For any partial realization ψ^1 and any node u ∉ dom(ψ^1), we have

    Δ_{f^3}(u | ψ^1) ≤ 2 Δ_{f^2}(u | dom(ψ^1)).    (8)

The next lemma gives an upper bound on the t-th aggregate (non-adaptive) influence spread σ^t(S) in terms of the original influence spread σ(S). The idea of the proof is that each seed node in S has t independent chances to activate its out-neighbors, but afterwards the diffusion among nodes not in S proceeds as in the original diffusion.

Lemma 4. For any t ≥ 1 and any subset S ⊆ V, σ^t(S) ≤ t · σ(S).

Proof of Theorem 1. It suffices to show that for every adaptive policy π, σ(π) ≤ 4σ(W(π)). This follows from the derivation sequence

    σ(π) ≤ σ̄(π̄) = σ^3(π) = Σ_{i=0}^{k−1} E_{s ∼ P^π_i}[ Δ_{f^3}(π(ψ_s) | ψ_s) ] ≤ Σ_{i=0}^{k−1} E_{s ∼ P^π_i}[ 2 Δ_{f^2}(π(ψ_s) | dom(ψ_s)) ] = 2σ^2(W(π)) ≤ 4σ(W(π)),

where the first inequality is by Claim 1, the second and third equalities are by Lemma 1, the second inequality is by Lemma 3, and the last inequality is by Lemma 4.

3.2 Lower bound

Next, we give a lower bound on the adaptivity gap of the influence maximization problem in the myopic feedback model. Our result is stated as follows.

Theorem 2. Under the IC model with myopic feedback, the adaptivity gap of the influence maximization problem is at least e/(e − 1).

Proof Sketch. We construct a bipartite graph G = (L, R, E, p) with |L| = (m^3 choose m^2) and |R| = m^3.
For each subset X ⊂ R with |X| = m^2, there is exactly one node u ∈ L that connects to all nodes in X. We show that for any ε > 0, there is a large enough m such that in the above graph with parameter m the adaptivity gap is at least e/(e − 1) − ε.

4 Adaptive and Non-Adaptive Greedy Algorithms

In this section, we consider two prevalent algorithms for the influence maximization problem: the greedy algorithm and the adaptive greedy algorithm. To the best of our knowledge, we provide the first approximation ratios for these algorithms with respect to the adaptive optimal solution in the IC model with myopic feedback. We formally describe the algorithms in Figure 1.

Greedy Algorithm:
    S = ∅
    while |S| < k do
        u = argmax_{u ∈ V∖S} Δf(u | S)
        S = S ∪ {u}
    end while
    return S

Adaptive Greedy Algorithm:
    S = ∅, Ψ = ∅
    while |S| < k do
        u = argmax_{u ∈ V∖S} Δf(u | Ψ)
        Select u as a seed and observe Φ(u)
        S = S ∪ {u}, Ψ = Ψ ∪ {(u, Φ(u))}
    end while

Figure 1: Description of greedy and adaptive greedy.

Our main result is summarized below.

Theorem 3. Both greedy and adaptive greedy are (1/4)(1 − 1/e)-approximate to the optimal adaptive policy under the IC model with myopic feedback.

Proof Sketch. The proof for the non-adaptive greedy algorithm is straightforward: the non-adaptive greedy algorithm provides a (1 − 1/e) approximation to the non-adaptive optimal solution, which by Theorem 1 is at least 1/4 of the adaptive optimal solution.
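For illustration, the non-adaptive greedy procedure of Figure 1 can be sketched as follows, with the marginal gain Δf(u | S) estimated by Monte Carlo simulation of the IC diffusion. This is our own sketch: the estimation budget `trials` and all function names are our choices, and the adaptive variant would additionally condition each gain estimate on the observed myopic feedback Ψ.

```python
import random
from collections import deque

def simulate_spread(graph, S, rng):
    """One IC diffusion from seed set S; returns the number of activated nodes."""
    active, queue = set(S), deque(S)
    while queue:
        v = queue.popleft()
        for (w, p) in graph[v]:
            if w not in active and rng.random() < p:
                active.add(w)
                queue.append(w)
    return len(active)

def est_spread(graph, S, rng, trials=500):
    """Monte Carlo estimate of the influence spread sigma(S)."""
    return sum(simulate_spread(graph, S, rng) for _ in range(trials)) / trials

def greedy(graph, k, seed=0, trials=500):
    """Non-adaptive greedy (Figure 1): repeatedly add the node with the
    largest estimated marginal gain Delta-f(u | S)."""
    rng = random.Random(seed)
    S = set()
    for _ in range(min(k, len(graph))):
        base = est_spread(graph, S, rng, trials)
        best = max((u for u in graph if u not in S),
                   key=lambda u: est_spread(graph, S | {u}, rng, trials) - base)
        S.add(best)
    return S
```

On deterministic instances (all edge probabilities 0 or 1) the estimates are exact; in general, `trials` trades accuracy for running time.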
For the adaptive greedy algorithm, we separately prove that it also provides a (1 − 1/e) approximation to the non-adaptive optimal solution, and then the result follows as for the non-adaptive greedy algorithm.

Theorem 3 shows that greedy and adaptive greedy achieve an approximation ratio of at least (1/4)(1 − 1/e) with respect to the adaptive optimal solution. We further show that their approximation ratio is at most (e^2 + 1)/(e + 1)^2 ≈ 0.606, which is strictly less than 1 − 1/e ≈ 0.632. To do so, we first present an example on which non-adaptive greedy has approximation ratio at most (e^2 + 1)/(e + 1)^2. Next, we show that myopic feedback does not help adaptive greedy much, in that the approximation ratio of the non-adaptive greedy algorithm is no worse than that of adaptive greedy over a family of graphs.

Theorem 4. The approximation ratio of greedy and adaptive greedy is no better than (e^2 + 1)/(e + 1)^2 ≈ 0.606, which is strictly less than 1 − 1/e ≈ 0.632. Moreover, the approximation ratio of non-adaptive greedy on any influence graph G with budget k is the same as the infimum of the approximation ratios of adaptive greedy on a family of graphs with the same budget k.

5 Conclusion and Future Work

In this paper, we systematically study the adaptive influence maximization problem with myopic feedback under the independent cascade model, and provide constant upper and lower bounds on the adaptivity gap and on the approximation ratios of the non-adaptive greedy and adaptive greedy algorithms. There are a number of future directions in which to continue this line of research. First, there is still a gap between the upper and lower bound results in this paper, and closing this gap is the next challenge.
Second, our results suggest that adaptive greedy may not bring much benefit under the myopic feedback model, so are there other adaptive algorithms that could do much better? Third, for the IC model with full-adoption feedback, the feedback on different seed nodes may be correlated, so the existing adaptivity gap results in [1, 4] cannot be applied even though the problem is adaptive submodular. Here, our recent study [5] provides partial answers for several special classes of graphs, such as trees and bipartite graphs, but the adaptivity gap on general graphs is still open. One may also explore beyond the IC model and study adaptive solutions for other models, such as the linear threshold model and the general threshold model [11].

Acknowledgment

Wei Chen is partially supported by the National Natural Science Foundation of China (Grant No. 61433014).

References

[1] Arash Asadpour and Hamid Nazerzadeh. Maximizing stochastic monotone submodular functions. Management Science, 62(8):2374–2391, 2016.

[2] Arash Asadpour, Hamid Nazerzadeh, and Amin Saberi. Stochastic submodular maximization. In International Workshop on Internet and Network Economics, pages 477–489. Springer, 2008.

[3] Ashwinkumar Badanidiyuru, Christos Papadimitriou, Aviad Rubinstein, Lior Seeman, and Yaron Singer. Locally adaptive optimization: Adaptive seeding for monotone submodular functions. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, pages 414–429. SIAM, 2016.

[4] Domagoj Bradac, Sahil Singla, and Goran Zuzic. (Near) optimal adaptivity gaps for stochastic multi-value probing. In Proceedings of the 23rd International Conference on Randomization and Computation (RANDOM), 2019.

[5] Wei Chen and Binghui Peng. On adaptivity gaps of influence maximization under the independent cascade model with full adoption feedback.
In Proceedings of the 30th International Symposium on Algorithms and Computation (ISAAC), 2019.

[6] Wei Chen, Laks V. S. Lakshmanan, and Carlos Castillo. Information and Influence Propagation in Social Networks. Morgan & Claypool Publishers, 2013.

[7] Daniel Golovin and Andreas Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research, 42:427–486, 2011. The arXiv version (arxiv.org/abs/1003.3967) includes discussions on the myopic feedback model.

[8] Anupam Gupta, Viswanath Nagarajan, and Sahil Singla. Algorithms and adaptivity gaps for stochastic probing. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1731–1747. SIAM, 2016.

[9] Anupam Gupta, Viswanath Nagarajan, and Sahil Singla. Adaptivity gaps for stochastic probing: Submodular and XOS functions. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1688–1702. SIAM, 2017.

[10] Kai Han, Keke Huang, Xiaokui Xiao, Jing Tang, Aixin Sun, and Xueyan Tang. Efficient algorithms for adaptive influence maximization. PVLDB, 11(9):1029–1040, 2018.

[11] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 137–146. ACM, 2003.

[12] Yuchen Li, Ju Fan, Yanhao Wang, and Kian-Lee Tan. Influence maximization on social graphs: A survey. IEEE Transactions on Knowledge and Data Engineering, 30(10):1852–1872, 2018.

[13] Guillaume Salha, Nikolaos Tziortziotis, and Michalis Vazirgiannis. Adaptive submodular influence maximization with myopic feedback. In 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pages 455–462. IEEE, 2018.

[14] Lior Seeman and Yaron Singer. Adaptive seeding in social networks.
In 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, pages 459–468. IEEE, 2013.

[15] Yaron Singer. Influence maximization through adaptive seeding. ACM SIGecom Exchanges, 15(1):32–59, 2016.

[16] Lichao Sun, Weiran Huang, Philip S. Yu, and Wei Chen. Multi-round influence maximization. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2249–2258. ACM, 2018.

[17] Jing Tang, Keke Huang, Xiaokui Xiao, Laks V. S. Lakshmanan, Xueyan Tang, Aixin Sun, and Andrew Lim. Efficient approximation algorithms for adaptive seed minimization. In Proceedings of the 2019 International Conference on Management of Data (SIGMOD), pages 1096–1113, 2019.

[18] Guangmo Tong. Adaptive influence maximization under general feedback models. arXiv preprint arXiv:1902.00192, 2019.

[19] Guangmo Tong, Weili Wu, Shaojie Tang, and Ding-Zhu Du. Adaptive influence maximization in dynamic social networks. IEEE/ACM Transactions on Networking, 25(1):112–125, 2017.

[20] Jing Yuan and Shao-Jie Tang. No time to observe: Adaptive influence maximization with partial feedback. In Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI), 2017.