The paper attempts to scale nearest neighbor search using heterogeneous memory hardware. To this end, the authors devise a practical trick on top of HNSW: a clean node promotion strategy along the memory hierarchy that uses degree information. The method was evaluated on common large-scale datasets, though not necessarily difficult ones. Reviewers found the setup for leveraging the memory hierarchy interesting, and the benefits obtained from it appear promising. Although all reviewers agree that the algorithmic contribution is straightforward and fairly limited, simplicity is not always bad. The other concern was whether the empirical evaluation is sufficient; as a result, an extra expert review was sought after the feedback period (provided in detail below). In particular, there are still a few missing comparisons to baselines, and the system-based caching baseline should be tuned properly. The authors are strongly encouraged to add these experiments to the final version of the paper. Overall, owing to the simplicity of the method and its practical niche, I am recommending acceptance to NeurIPS.

Another expert review

=== 4: reject ===

Overview: The central contribution is a graph node promotion strategy, applied to the HNSW algorithm to perform fast nearest neighbor search. The strategy moves high-degree graph nodes higher in the memory hierarchy. In the ablation study, this simple strategy is compared against random node promotion and against a system-level data management solution, and the paper shows that the proposed strategy achieves lower latency in the high-recall region.

===

Pros:
1. Works on a practically useful direction, leveraging cutting-edge hardware for fast nearest neighbor search.
2. Provides source code.

Cons:
1. The amount of algorithmic innovation is small. While node promotion based on degree is a simple and sensible strategy, it is effectively a heuristic for caching more frequently accessed items.
As such, I am not convinced it is significant enough for a NeurIPS conference paper. In my opinion, there are other more important unsolved problems in this architecture. For example, indexing 1B nodes requires 96/117 hours, which practically limits the algorithm's ability to scale to larger datasets.
2. The comparison to DiskANN is insufficient. There is no fundamental difference between using SSD and Optane for NNS beyond the underlying hardware: the implementation can be very similar for both and can be done through file I/O or mmap-ed memory. While I understand that DiskANN is not open source, it would at least be informative to run the strategy in this paper on SSD and compare against DiskANN's published numbers.

===

Question: In Figure 4, the system-based caching strategy ("memory mode") is surprisingly effective, almost on a par with the proposed method at 90% recall; the proposed caching strategy becomes visibly more effective only beyond 98%. This is somewhat alarming to me, and it suggests it might be possible to tune the system-level strategy to achieve more or less the same effect as the proposed strategy.

===

Conclusion: While this paper takes a practically interesting direction, the research contribution seems incremental and some important experiments are missing (in particular, the comparison to DiskANN).
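To make the central idea concrete: a minimal sketch of degree-based node promotion, as I understand it from the paper. This is not the authors' code; the function and the `fast_budget` parameter are illustrative assumptions. The idea is simply that high-degree hubs are visited most often during greedy graph traversal, so they are placed in the fast tier (e.g. DRAM) while the rest stay in the slow tier (e.g. Optane).

```python
def promote_by_degree(adjacency, fast_budget):
    """Split node ids into (fast_ids, slow_ids) by degree rank.

    adjacency: dict mapping node id -> list of neighbor ids.
    fast_budget: number of nodes that fit in the fast memory tier.
    """
    # Rank nodes by degree, highest first: high-degree hubs are the
    # most frequently accessed during graph search.
    ranked = sorted(adjacency, key=lambda v: len(adjacency[v]), reverse=True)
    fast_ids = set(ranked[:fast_budget])
    slow_ids = set(ranked[fast_budget:])
    return fast_ids, slow_ids

# Toy example: node 0 is a hub with three neighbors, so it is promoted.
graph = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
fast, slow = promote_by_degree(graph, fast_budget=1)
```

Seen this way, the strategy is indeed a static cache-admission heuristic, which is why the comparison against a well-tuned system-level cache matters.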
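On Con 2 above: the claim that an SSD and an Optane implementation can share the same code path via mmap can be sketched as follows. This is a hypothetical illustration (dimensionality and file layout are my own assumptions, not the paper's); the same read path works unchanged whether the backing file lives on SSD or Optane, with only the access latency differing.

```python
import mmap
import os
import struct
import tempfile

DIM = 4  # illustrative vector dimensionality

def read_vector(mm, idx):
    """Read vector `idx` (DIM float32 values) from a memory-mapped file."""
    off = idx * DIM * 4  # 4 bytes per float32
    return struct.unpack(f"{DIM}f", mm[off:off + DIM * 4])

# Write two toy vectors to a file, then map it read-only and fetch one.
path = tempfile.mkstemp()[1]
with open(path, "wb") as f:
    f.write(struct.pack(f"{DIM}f", 1.0, 2.0, 3.0, 4.0))
    f.write(struct.pack(f"{DIM}f", 5.0, 6.0, 7.0, 8.0))
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    v = read_vector(mm, 1)
    mm.close()
os.remove(path)
```

Because the storage medium is abstracted away like this, porting the paper's strategy to SSD for a DiskANN comparison should be a modest engineering effort.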