__ Summary and Contributions__: The authors propose a model-agnostic method for generating adversarial node samples and attacking graph matching models. Specifically, a kernel density estimation function is proposed to push attacked nodes (around the target node) to aggregate together. Hence, the attacked nodes can confuse the graph matching model into making wrong decisions when matching pairwise nodes. Also, a meta-learning-based projected gradient descent method is proposed to select the attack starting nodes.
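For concreteness, the density objective described above can be illustrated with a minimal Gaussian-KDE sketch over node embeddings (hypothetical code, not the authors' implementation; `node_density` and the bandwidth are my own illustrative choices):

```python
import numpy as np

def node_density(embeddings, bandwidth=1.0):
    """Gaussian kernel density estimate for each node embedding.

    density[i] is proportional to (1/n) * sum_j exp(-||x_i - x_j||^2 / (2 h^2));
    normalization constants are omitted since they do not change the ranking.
    """
    diff = embeddings[:, None, :] - embeddings[None, :, :]  # (n, n, d)
    sq_dist = (diff ** 2).sum(-1)                           # (n, n)
    kernel = np.exp(-sq_dist / (2 * bandwidth ** 2))
    return kernel.mean(axis=1)

# Nodes in a tight cluster score high; an outlier scores low.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
dens = node_density(emb)
# the outlier (index 3) has the lowest density
```

Pushing attacked nodes toward high-density regions, in this view, means perturbing them so their `node_density` value increases, making them blend in with many neighbors.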

__ Strengths__: The authors conduct a solid analysis of their proposed method and evaluate their attack on multiple graph models. Besides, the comparison baselines are also solid.

__ Weaknesses__: In the experimental part, the authors should show some evidence that the perturbations on the graph are unnoticeable. Also, could the authors briefly explain why they chose the Gaussian kernel for parameter estimation?

__ Correctness__: The whole method is valid, and the authors provide sufficient analysis in their methodology description.

__ Clarity__: The paper is well organized and written, except for some minor flaws (e.g., Equation 5 appears somewhat abruptly; the authors should introduce it before presenting it).

__ Relation to Prior Work__: The paper is the first to introduce kernel density estimation (KDE) into the graph adversarial attack task. Also, little prior work addresses adversarial attacks on graph matching models, so this work is important for evaluating the robustness of graph matching models.

__ Reproducibility__: Yes

__ Additional Feedback__: The rebuttal indeed improved the paper, and I will increase the score.

__ Summary and Contributions__: The paper proposes an attack method on GNNs. The authors utilize a kernel density estimation function to estimate node densities and generate perturbations under a given budget by pushing attacked nodes into dense regions of the two graphs. Moreover, they develop a meta-learning-based projected gradient descent method to optimize the objective function.
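As background, the projected-gradient component can be sketched generically (this is a standard PGD ascent step under an l-infinity budget, not the paper's exact MLPGD; `pgd_step` and its constants are illustrative):

```python
import numpy as np

def pgd_step(delta, grad, step_size=0.1, budget=0.5):
    """One projected gradient ascent step on a perturbation `delta`.

    Ascend the attack loss along the gradient sign, then project back
    onto the l-infinity ball of radius `budget` (the attack budget).
    """
    delta = delta + step_size * np.sign(grad)   # gradient ascent
    return np.clip(delta, -budget, budget)      # projection onto budget

# Repeated steps grow the perturbation but never exceed the budget.
delta = np.zeros(4)
grad = np.array([1.0, -2.0, 3.0, 0.0])
for _ in range(10):
    delta = pgd_step(delta, grad)
```

The meta-learning part of the paper then concerns *where* to initialize such steps (the attack starting nodes), which plain PGD leaves unaddressed.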

__ Strengths__: 1. The method is novel and sound. The theoretical analysis is comprehensive and convincing.
2. The meta-learning helps in finding good attack starting points, which alleviates the overly large search space on large graphs.
3. The experimental results demonstrate the effectiveness of the proposed method.

__ Weaknesses__: 1. What is the time complexity of the attack method? Since the meta-learning involves a repeated training/attacking procedure, I worry that the time cost of finding the adversarial nodes could be very high, especially compared with existing methods.
2. Recently, many defense methods against adversarial attacks on GNNs have been proposed, such as [1-4]. The authors should evaluate against at least one of them to demonstrate that the attack remains effective under such defenses.
3. The related work section is missing. Leveraging meta-learning to generate adversarial attacks has already been shown in [5], so the difference should be clarified.
Minor question:
From Table 2, it seems that the proposed method becomes more powerful as the dataset grows larger. Can any insight be drawn from this?
[1] Certifiable Robustness and Robust Training for Graph Convolutional Networks
[2] Adversarial Examples on Graph Data: Deep Insights into Attack and Defense
[3] Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective
[4] All You Need is Low (Rank): Defending Against Adversarial Attacks on Graphs
[5] Adversarial Attacks on Graph Neural Networks via Meta Learning

__ Correctness__: The claims are proved in the paper and appear correct.

__ Clarity__: The paper is well written and many experimental details are stated in the appendix.

__ Relation to Prior Work__: Not very comprehensive.

__ Reproducibility__: Yes

__ Additional Feedback__: Overall, I acknowledge the novelty of this paper. I would be willing to change my score if the authors can answer the questions in the [Weaknesses] part.
===================================
After reading the rebuttal: the authors addressed my questions well, and I have decided to raise my score to 6. The contribution needs to be clarified in the revision.

__ Summary and Contributions__: In this paper, the authors propose a method for adversarially attacking graph matching methods, based on a kernel density estimation approach and a meta-learning-based gradient descent method. Theoretical results are provided in deriving the KDE, and experiments demonstrate the effectiveness of the proposed method in successfully degrading a range of recent deep graph matching methods. In the supplementary material, the authors also generalize their method to node classification and link prediction tasks.

__ Strengths__: (+): The authors study a novel problem, i.e., adversarial attacks of graph matching.
(+): The literature survey is quite comprehensive.
(+): Experiments demonstrate the effectiveness of the proposed method.
(+): The proposed method may be generalized to other settings such as node classification.

__ Weaknesses__: (-): The proposed method does not precisely correspond to the motivation.
(-): Some technical details and experimental settings are not clear.
The negative points are explained more specifically as follows.
(1) The authors claimed the main attack strategy as “a kernel density estimation approach to push attacked nodes to dense regions in two graphs, such that they are indistinguishable from many neighbors” and focus on deriving the KDE in Section 3. However, in the objective function Eq. 9, KDE is only adopted as a sort of regularization. I feel it is the first term, which explicitly pushes away ground-truth matching points, that really matters and it does not depend on the sophisticated KDE. The authors should prove that the KDE term is indeed helpful, e.g., by an explicit theorem or conducting an ablation study.
(2) I am also not sure why the authors claimed that using KDE “reduces the possibility of perturbation detection by humans or defender programs”, since the level of attack, i.e., whether the perturbation is perceptible or not, is only determined by the budget.
(3) As for the experimental setting, it seems that most baselines are designed for GNN-based node classification tasks, and how to adapt them to the graph matching problem remains unexplained (e.g., do you still use misclassifying node labels as the objective function?). In addition, following (1), I think the authors should also compare with the most intuitive method of directly maximizing the first term in Eq. 9.
(4) Also, what is the adopted projection function M? Do you use a surrogate model as in [98], or do you need the actual graph matching model?

__ Correctness__: Yes, as far as I can tell.

__ Clarity__: Yes

__ Relation to Prior Work__: Yes

__ Reproducibility__: Yes

__ Additional Feedback__: (1) I suggest using a consistent metric, i.e., either mismatching rate or precision, in all the figures, since they show explicitly the opposite trends and mixing them is confusing.
(2) I also suggest that the authors try extending their method to an “unsupervised” setting, e.g., not using ground-truth matching pairs, which would make the proposed model more practical, e.g., for anonymizing a social network, since it may not be feasible to collect such ground truths in the first place.
(3) A brief discussion of how KDE can be utilized to other graph tasks beyond attacking graph matching may also be interesting.
===Updates===
(1) I appreciate the ablation study in Figure 5 about KDE and MLPGD, but this is not what I asked. Let me rephrase to check whether I understand this correctly: the authors refer to all of Eq. 5 as the KDE, and I am curious whether, in this equation, it is the first part (pushing away matching nodes) or the second part (maximizing density) that actually matters. I believe this is important since only the second term is the actual KDE (we can push away matching nodes even without KDE).
(2) I agree with the authors that “small attack budget is not enough for imperceptible attacks”. However, the examples in the rebuttal slightly improve the motivation but do not entirely clarify it, since what the authors suggest seems to be a degree-related attack budget. Using the authors’ example, for a node with two edges, how can KDE make the attack imperceptible when changing one edge is inevitably obvious? The square example seems reasonable at first, but raises questions on second thought, since this is not how humans or most machine learning models analyze graphs (we cannot assess graph density as easily as we assess crowd density). More appropriate examples are needed (e.g., a synthetic graph example?).
(3) Baseline setting: I appreciate the clarification and believe it’s important to add these details in an updated version.
Considering that the rebuttal addressed some of my concerns (though not entirely) and this paper studies a new question, I have raised my rating to 6.

__ Summary and Contributions__: The paper studies the problem of attacking graph matching models. The authors propose an adversarial attack model that perturbs the graph structure and degrades the quality of graph matching. First, kernel density estimation is used to maximize node densities and derive imperceptible perturbations. Then a meta-learning-based PGD method is utilized to choose the attack starting points and improve search performance. The experimental results show the effectiveness of their approach.

__ Strengths__: The paper is well motivated, and the idea is very interesting.
The paper is the first work to design the strategy to attack the graph matching model.
The theoretical analysis is detailed.
The empirical evaluation setting is reasonable, the experiment is complete, and the results look great.

__ Weaknesses__: The authors assume that the graph data follow a Gaussian distribution, but I cannot find any evidence to support this assumption. Empirical analysis or a reference should be provided.

__ Correctness__: In my opinion, the method and claims are correct. Also, the empirical methodology is correct.

__ Clarity__: The motivation for the paper is clear. Also, the motivations for using kernel density estimation and the meta-learning-based PGD are provided.
However, the methodology part is not very easy to follow.

__ Relation to Prior Work__: Yes

__ Reproducibility__: No

__ Additional Feedback__: ---Updates---
The response from the authors resolves my concerns, and my score remains the same. Please add the references and results in the paper.