We present a framework for image-based surface appearance editing of light-field data. Our framework improves over the state of the art without requiring a full "inverse rendering", so neither complete geometric data nor the presence of highly specular or reflective surfaces is needed. It is robust to noisy or missing data and handles many types of camera-array setups, ranging from a dense light field to a wide-baseline stereo image pair. We start by extracting intrinsic layers from the light-field image set while maintaining consistency between views. Each layer is then decomposed separately into frequency bands, to which a wide range of "band-sifting" operations is applied. This approach enables a rich variety of perceptually plausible surface finishes and materials, achieving novel effects such as translucency. Our GPU-based implementation allows interactive editing of an arbitrary light-field view, which can then be consistently propagated to the rest of the views. We provide an extensive evaluation of our framework on various datasets and against state-of-the-art solutions.
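To make the band-sifting step concrete, below is a minimal sketch applied to a single intrinsic layer (e.g., the shading layer from the intrinsic decomposition). This is not the paper's implementation: the function name, band count, and gain parameters are illustrative, and it assumes a grayscale floating-point layer and a simple Gaussian band decomposition in the log domain.

import numpy as np
from scipy.ndimage import gaussian_filter

def band_sift(layer, num_bands=4, gain=2.0, sift_positive=True, eps=1e-4):
    # Illustrative sketch, not the paper's code. Work in the log domain
    # so that additive band edits act multiplicatively on the layer.
    log_img = np.log(np.clip(layer, eps, None))
    bands, current = [], log_img
    for i in range(num_bands):
        low = gaussian_filter(current, sigma=2.0 ** (i + 1))
        bands.append(current - low)  # band-pass detail at this scale
        current = low                # low-frequency residual
    out = current
    for band in bands:
        # "Sift" one sign subband per band and boost (gain > 1)
        # or attenuate (gain < 1) it; the other subband is kept as-is.
        mask = band > 0 if sift_positive else band < 0
        out = out + np.where(mask, gain * band, band)
    return np.exp(out)

For instance, band_sift(shading, gain=3.0, sift_positive=True) would exaggerate bright detail across scales for a glossier look, while a gain below 1 on the negative subband would soften dark details. In the usual intrinsic-image model the edited layer is then recombined with the reflectance layer (image = reflectance × shading) before being propagated to the other views.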
Shida Beigpour, Sumit Shekhar, Mohsen Mansouryar, Karol Myszkowski, Hans-Peter Seidel
Light-Field Appearance Editing Based on Intrinsic Decomposition
Journal of Perceptual Imaging (JPI), 2018
@article{Beigpour:2018,
  title   = "Light-Field Appearance Editing Based on Intrinsic Decomposition",
  author  = "Beigpour, Shida and Shekhar, Sumit and Mansouryar, Mohsen and Myszkowski, Karol and Seidel, Hans-Peter",
  journal = "Journal of Perceptual Imaging",
  year    = "2018",
  volume  = "1",
  number  = "1",
  pages   = "10502-1-10502-15",
  issn    = "2575-8144",
  url     = "https://www.ingentaconnect.com/content/ist/jpi/2018/00000001/00000001/art00003",
  doi     = "10.2352/J.Percept.Imaging.2018.1.1.010502"
}
© 2018 The Authors. This is the author's version of the work. It is posted here for your personal use. Not for redistribution.
We would like to thank Osman Ali Mian and Zeeshan Khan Suri for their help during this project, and Abhimitra Meka and Elena Garces for kindly providing the necessary comparisons. We also thank the reviewers for their insightful comments. The project was supported by the Fraunhofer and Max Planck cooperation program within the German Pact for Research and Innovation (PFI).