Podcast Episode Details


Generative Refocusing: Flexible Defocus Control from a Single Image


Episode 1499


🤗 Upvotes: 26 | cs.CV

Authors:
Chun-Wei Tuan Mu, Jia-Bin Huang, Yu-Lun Liu

Title:
Generative Refocusing: Flexible Defocus Control from a Single Image

Arxiv:
http://arxiv.org/abs/2512.16923v1

Abstract:
Depth-of-field control is essential in photography, but achieving the desired focus often takes several attempts or specialized equipment. Single-image refocusing remains difficult: it requires both recovering sharp content and synthesizing realistic bokeh. Current methods have significant drawbacks: they need all-in-focus inputs, depend on synthetic data from simulators, and offer limited control over aperture. We introduce Generative Refocusing, a two-step process that uses DeblurNet to recover all-in-focus images from various inputs and BokehNet to synthesize controllable bokeh. Our main innovation is semi-supervised training, which combines synthetic paired data with unpaired real bokeh images, using EXIF metadata to capture real optical characteristics beyond what simulators can provide. Our experiments show top performance on defocus deblurring, bokeh synthesis, and refocusing benchmarks. Additionally, Generative Refocusing supports text-guided adjustments and custom aperture shapes.
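
For listeners who want a mental model of the two-stage pipeline the abstract describes, here is a minimal Python/PyTorch sketch. DeblurNet and BokehNet are the module names from the paper, but everything else below (the wrapper class, parameter names such as aperture, focus_depth, aperture_shape, and text_prompt) is a hypothetical illustration of the described design, not the authors' actual code or API.

from torch import nn

class GenerativeRefocusingSketch(nn.Module):
    """Illustrative two-stage refocusing pipeline (assumed interface)."""

    def __init__(self, deblur_net: nn.Module, bokeh_net: nn.Module):
        super().__init__()
        self.deblur_net = deblur_net  # stage 1: recover an all-in-focus image
        self.bokeh_net = bokeh_net    # stage 2: re-render controllable bokeh

    def forward(self, image, aperture=2.8, focus_depth=0.5,
                aperture_shape=None, text_prompt=None):
        # Stage 1: remove whatever defocus blur the input already contains.
        all_in_focus = self.deblur_net(image)
        # Stage 2: synthesize new defocus blur under user-chosen controls
        # (aperture size/shape, focus plane, optional text guidance).
        refocused = self.bokeh_net(
            all_in_focus,
            aperture=aperture,
            focus_depth=focus_depth,
            aperture_shape=aperture_shape,
            text_prompt=text_prompt,
        )
        return refocused

The split mirrors the abstract: decoupling deblurring from bokeh synthesis is what lets the second stage expose user controls (aperture, shape, text prompt) independently of how blurred the input was.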


Published on 6 days, 13 hours ago





