NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 8097
Title: Computational Mirrors: Blind Inverse Light Transport by Deep Matrix Factorization

Reviewer 1


I actually don't have much to say about this paper. It is clearly written and easy to understand. It presents compelling results which outperform existing techniques. I'm not sure why the title of the paper refers to "computational mirrors"; this feels out of place, and the term is not used in the paper except in the very last section (the conclusion). Also, since motion is the focus of the work, perhaps it would be worth mentioning "video" in the title? Perhaps something like "Blind Inverse Video Light Transport by Deep Matrix Factorization"?

Reviewer 2


Non-line-of-sight (NLoS) imaging has recently been receiving attention in the computer vision / computational photography community; for example, a paper on this topic received the CVPR 2019 best paper award. Most existing research on NLoS imaging relies on active illumination. In contrast, this paper is purely passive. It factorizes the observed video into the matrix product of an unknown hidden-scene video and an unknown light transport matrix, so the problem setting of this paper is extremely challenging but interesting. The solution space is extremely large, and the popular priors of nonnegativity and spatial smoothness are shown to be insufficient. The authors propose to use a deep image prior, which seems to confine the solution space appropriately. Experimental results show that the factorization is quite good at capturing motion in the illumination, yet less capable of recovering color and object details. I think this is reasonable, since the cluttered scene produces prominent cast shadows.
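For concreteness, the kind of deep-prior factorization described above can be sketched roughly as follows: the observed video Z is modeled as the product of a light transport matrix T and a hidden-scene video L, with both factors produced by untrained networks fed fixed random inputs, in the spirit of a deep image prior. The toy dimensions, network architecture, and the softplus nonnegativity trick below are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

P, Q, F = 256, 64, 100          # observed pixels, hidden pixels, frames (toy sizes)
Z = torch.rand(P, F)            # stand-in for the observed video, one column per frame

def dip(out_shape):
    """A tiny stand-in for a deep image prior: an untrained CNN mapping fixed
    random noise to a 2-D array of the requested shape."""
    net = nn.Sequential(
        nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1), nn.Softplus(),   # softplus keeps outputs nonnegative
    )
    z = torch.randn(1, 8, *out_shape)                    # fixed random input
    return net, z

net_T, z_T = dip((P, Q))        # generates the light transport matrix T
net_L, z_L = dip((Q, F))        # generates the hidden-scene video L

params = list(net_T.parameters()) + list(net_L.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

for step in range(2000):
    T = net_T(z_T)[0, 0]                 # (P, Q)
    L = net_L(z_L)[0, 0]                 # (Q, F)
    loss = ((T @ L - Z) ** 2).mean()     # reconstruction error of the factorization
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Only the network weights are optimized; the structure of the generators acts as the implicit prior that constrains the otherwise enormous solution space.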

Reviewer 3


In this paper, the authors study the problem of reconstructing a hidden scene from observed videos. The proposed method seeks to invert the light transport matrix without a calibration step. The problem is challenging and ill-posed. The authors learn a low-dimensional basis from the observed videos and use deep image prior models to generate the hidden scene and the coefficients of the light transport basis.

Originality:
+ The paper uses inverse light transport to recover a video of the hidden scene without any calibration, which seems novel.
+ The idea of using a deep image prior to model the light transport coefficients and the hidden scene is also interesting.
- The paper is missing comparisons with other related techniques for reconstructing hidden scenes.

Quality:
+ The paper is well written.
+ The paper provides experiments on real data.
- The results presented in the paper are very weak. Even though the authors claim that they can see the motion of hidden objects, this is convincing only in the example of the moving discs. I am not convinced that the true scene or motion is reconstructed in the more complex examples.
- The authors did not quantify the accuracy of their reconstruction in any meaningful way. Since deep image priors and deep decoders tend to produce "realistic" images, the blurry patterns they produce (which look somewhat like images) cannot by themselves be taken as sufficient evidence that the method is reconstructing the hidden scene.

Clarity:
+ The paper is well written and structured. The problem is clearly stated and formulated. The supplementary materials help in understanding the experimental setup and the significance of the problem.
+ I like the fact that the authors provide experimental results both for the case where the light transport matrix is known and for the blind factorization where both T and L are unknown (see the sketch after this review).

Significance:
- The results are not significant or convincing in my opinion.
- The reconstructed videos are too blurry to perform any reasonable computer vision or machine learning task.
+ I think the paper has some merits, but it is not complete. It could be a really strong paper if the authors had performed some analysis of the accuracy of the reconstruction or demonstrated that the recovered video contains salient information about the hidden scene.
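As a point of reference for the non-blind case mentioned above, a minimal sketch under assumed toy dimensions follows: when the light transport matrix T is known, the hidden-scene video L can be estimated by ordinary least squares. The variable names and the plain least-squares formulation are assumptions for illustration; the paper presumably adds nonnegativity or smoothness priors on top of this.

```python
import torch

torch.manual_seed(0)

P, Q, F = 256, 64, 100                       # observed pixels, hidden pixels, frames (toy sizes)
T = torch.rand(P, Q)                          # light transport matrix, assumed known/calibrated
L_true = torch.rand(Q, F)                     # hidden-scene video we hope to recover
Z = T @ L_true + 0.01 * torch.randn(P, F)     # noisy observation

# Unconstrained least-squares recovery of the hidden-scene video.
L_hat = torch.linalg.lstsq(T, Z).solution     # (Q, F)

# Relative reconstruction error as a crude sanity check.
print((torch.norm(L_hat - L_true) / torch.norm(L_true)).item())
```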