NeurIPS 2020

BayReL: Bayesian Relational Learning for Multi-omics Data Integration


Review 1

Summary and Contributions: This study proposes a Bayesian formulation for multi-omics data integration that combines within-view and between-view interactions.

Strengths: The authors address a very important problem in computational biology, and their formulation seems novel.

Weaknesses: The \alpha parameter mentioned in line 180 was not introduced earlier. This parameter essentially determines the trade-off between within-view and between-view interactions, yet there is no discussion of its effect on solution quality. The comparison is mainly against the BCCA algorithm, which by definition uses within-view interactions only; if that is not the case, the authors should explain how they integrated between-view interactions into BCCA. I would suggest that they compare their algorithm against BCCA under the setting \alpha = 0 if within-view interactions are not used by BCCA.

Correctness: The claims and method seem correct.

Clarity: The paper is well written.

Relation to Prior Work: Most of the related prior work has been discussed. However, there are some co-embedding algorithms that make use of within-view and between-view interactions, and these were not mentioned.

Reproducibility: Yes

Additional Feedback:


Review 2

Summary and Contributions: This work considers the problem of relation learning across multiple views where each view corresponds to an omic data type. The authors propose a Bayesian model for relation learning that accounts for known relations in each view and can leverage non-linear transformations. Inference in this model is performed using a variational formalism. The authors apply their model to 3 specific applications to demonstrate promising results.
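
As context for the variational formalism mentioned above, the generic objective maximized in this kind of inference is the evidence lower bound (ELBO); the form below is the textbook one and is only a reference point, since the exact likelihood, prior, and factorization used by BayReL are specified in the paper itself:

    L(\theta, \phi) = E_{q_\phi(Z | X)}[ \log p_\theta(X | Z) ] - KL( q_\phi(Z | X) || p(Z) ),

i.e., a reconstruction term for the observed views minus a KL term that keeps the approximate posterior over the latent quantities close to their prior.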

Strengths: This work provides a coherent probabilistic framework that generalizes a number of previous studies. The full probabilistic treatment coupled with the flexibility is a strength. The empirical results across a range of applications are careful and convincing and support the broad utility of this model.

Weaknesses: A discussion of the computational requirements of inference and learning (relative to the baselines) would be important to add.

Correctness: - The empirical results are convincing. - While the comparisons to BCCA and Spearman rank correlation are useful, it would be informative to probe the BayReL model directly to understand which of its components contribute to the improvements, e.g., how important are the graphs vs. the feature embeddings? Comparing BayReL to simpler sub-models could provide additional insights.

Clarity: Yes

Relation to Prior Work: Yes

Reproducibility: Yes

Additional Feedback: Update after reading author comments: I appreciate the additional results comparing BayReL to MOFA and the ablation experiments. This work presents a useful methodological contribution.


Review 3

Summary and Contributions: The authors present BayReL, a method for analysing linked data-sets from different genomics platforms. These linked data-sets may relate to different types of genomic data, such as gene-expression / transcriptomic data, or gene-regulatory data such as chromatin accessibility data. The main idea of BayReL is to uncover latent variables that give rise to the structure observed in the various data-sets (similar to canonical correlation analysis). In this context, these latent variables represent biological processes related to gene regulation that give rise to the patterns observed in the various types of genomics data.
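
For reference, the shared-latent-variable idea compared here to canonical correlation analysis can be written in its textbook Gaussian form (this is the generic probabilistic-CCA template, not BayReL's exact model, which additionally uses graph structure within each view):

    z ~ N(0, I),   x_m = W_m z + \epsilon_m,   \epsilon_m ~ N(0, \Psi_m),   m = 1, ..., M,

so that correlation across the M data-sets (views) is explained entirely through the shared latent variable z, while \Psi_m absorbs view-specific variation.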

Strengths: The problem of simultaneously analysing several types of linked genomics data is timely in the fields of cell biology and computational statistics. To approach this problem, the authors fit a sophisticated hierarchical probabilistic model using variational inference. The mathematical arguments are detailed and clear, and the results are well presented.

Weaknesses: Conceptually, BayReL is very similar to Multi-Omics Factor Analysis (MOFA; https://doi.org/10.15252/msb.20178124), which also uses variational inference to fit a latent variable model simultaneously to multiple genomic data-sets, in order to infer common structure and variation corresponding to biological processes of interest. As MOFA is well established, has an equivalent level of mathematical and computational sophistication to BayReL, and has been validated in high-profile studies in the genomics application area, I recommend that the authors benchmark BayReL against MOFA. Furthermore, I recommend that the authors specifically include the most important types of genomic data in this comparison: in particular, I think that they should include RNA-Seq and ChIP-Seq data. This would give a clearer picture of the relative merits of BayReL.

Correctness: The authors evaluate BayReL using some unusual types of genomic data, specifically microbiome and micro-RNA data. As BayReL is designed to simultaneously analyse different types of genomic data in the most general sense, it seems important to first benchmark it on much more common types of genomic data. Specifically, I would recommend that the authors benchmark BayReL on gene-expression (such as RNA-seq) data in conjunction with ChIP-seq data as a minimum, and also include DNA methylation data if possible, before considering micro-RNA and microbiome data.

Clarity: The mathematical arguments are detailed and clear, the results are well presented, and the manuscript is easy to read.

Relation to Prior Work: Conceptually, BayReL is very similar to Multi-Omics Factor Analysis (MOFA; https://doi.org/10.15252/msb.20178124). This previous work was not mentioned in the current manuscript but should be discussed.

Reproducibility: Yes

Additional Feedback:


Review 4

Summary and Contributions: In this paper, the authors propose a Bayesian representation learning framework that can infer links between heterogeneous graphs generated from multi-omics datasets. The main idea is to use the underlying relationship information within each dataset (or view) by modeling it as a graph. The method has four steps: (1) embed the nodes of each view-specific graph into the same latent space; (2) generate a multi-view adjacency tensor from the similarity scores between node embeddings across views; (3) infer the prior latent variables from the node embeddings and multi-view graphs and the posterior from the view-specific data; (4) perform variational inference to optimize the model parameters and variational parameters. The paper attempts to solve the important problem of multi-omics data integration by learning relationships that can exist between different modalities, modeled as multi-view link prediction. This work could be useful to the broader ML community.
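
To make steps (1) and (2) concrete, a minimal sketch is given below; the single-layer GCN, the inner-product similarity, and all dimensions are illustrative assumptions rather than the architecture actually used in the paper:

    # Illustrative sketch of steps (1)-(2): view-specific GCN embeddings, then
    # cross-view similarity scores interpreted as link probabilities.
    import numpy as np

    def gcn_embed(adj, feats, weight):
        # One graph-convolution layer: symmetrically normalized adjacency, then ReLU.
        a_hat = adj + np.eye(adj.shape[0])                      # add self-loops
        d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
        return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight, 0.0)

    def cross_view_links(z_a, z_b):
        # Step (2): similarity of node embeddings across views -> link probabilities.
        return 1.0 / (1.0 + np.exp(-(z_a @ z_b.T)))             # sigmoid of inner products

    rng = np.random.default_rng(0)
    n_a, n_b, d_in, d_lat = 30, 20, 16, 8                       # nodes per view, feature/latent dims
    adj_a = (rng.random((n_a, n_a)) < 0.1).astype(float)
    adj_b = (rng.random((n_b, n_b)) < 0.1).astype(float)
    adj_a, adj_b = np.maximum(adj_a, adj_a.T), np.maximum(adj_b, adj_b.T)   # symmetrize
    x_a, x_b = rng.normal(size=(n_a, d_in)), rng.normal(size=(n_b, d_in))
    w_a, w_b = rng.normal(size=(d_in, d_lat)), rng.normal(size=(d_in, d_lat))

    z_a = gcn_embed(adj_a, x_a, w_a)                            # step (1): embed view A
    z_b = gcn_embed(adj_b, x_b, w_b)                            # step (1): embed view B
    link_probs = cross_view_links(z_a, z_b)                     # one slice of the multi-view adjacency tensor
    print(link_probs.shape)                                     # (30, 20) candidate cross-view links

In BayReL these quantities are treated probabilistically and learned end-to-end via variational inference (steps (3)-(4)), rather than computed from fixed random weights as in this sketch.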

Strengths:
-- The paper introduces an interesting Bayesian framework to perform multi-view link prediction by using the relationships within each view as prior information (in the form of graphs).
-- The use of GCNs to learn node embeddings allows the method to capture the complex non-linear interactions in the data.
-- The probabilistic framework lends the method interpretability, which is useful for datasets in the biological and clinical domains.
-- This task is important for multi-domain data integration.
-- The paper uses biological datasets (microbiome networks, gene and drug interaction networks, etc.) to demonstrate that the proposed method outperforms the chosen baselines, Spearman's Rank Correlation Analysis and Bayesian CCA, by producing higher precision scores.
-- The paper also uses the scores for inferred interactions to verify the top interactions against evidence in the literature.

Weaknesses:
-- It would be useful to know how well the method scales with the size of the graphs for practical implementations.
-- The paper mentions two different similarity functions (lines 83-84); which of these was used for the final results? The paper should clarify how the performance varies with this choice.
-- It is unclear why the scalar multiplication is useful (lines 180-181). How does this quantity affect the performance?

Correctness: It would be great if the following were clarified:
-- It is unclear whether proper cross-validation was performed to select the final set of hyperparameters (for the method and the baselines). If not, the results could be misleading.
-- Is there a reason to report only precision scores for the evaluation instead of false positives (important for clinical/biological tasks) or F1-scores?
-- Line 217: why was the threshold for negative accuracy chosen to be 97%?
-- Why is Table 2 missing the SRCA baseline?
-- It would be useful if the results in lines 278-279 were quantified. For example, what was the percentage of occurrences of mir-155 that led the authors to investigate it? Was there a threshold for picking these examples?

Clarity: The paper is quite well written

Relation to Prior Work: Yes

Reproducibility: Yes

Additional Feedback:
-- Figure 2, panel A: the negative accuracy axis has the label 1.1.
-- The broader impact section could be revised to be more specific about applications, and to mention the potential issues/advantages of deploying such methods in the clinical domain.
-- It is a little unclear why graph-based neural networks (which handle multi-view or heterogeneous data) could not be used as baselines for the paper.