Podcast Episode Details

OmniPSD: Layered PSD Generation with Diffusion Transformer


Episode 1465


🤗 Upvotes: 27 | cs.CV

Authors:
Cheng Liu, Yiren Song, Haofan Wang, Mike Zheng Shou

Title:
OmniPSD: Layered PSD Generation with Diffusion Transformer

Arxiv:
http://arxiv.org/abs/2512.09247v1

Abstract:
Recent advances in diffusion models have greatly improved image generation and editing, yet generating or reconstructing layered PSD files with transparent alpha channels remains highly challenging. We propose OmniPSD, a unified diffusion framework built upon the Flux ecosystem that enables both text-to-PSD generation and image-to-PSD decomposition through in-context learning. For text-to-PSD generation, OmniPSD arranges multiple target layers spatially into a single canvas and learns their compositional relationships through spatial attention, producing semantically coherent and hierarchically structured layers. For image-to-PSD decomposition, it performs iterative in-context editing, progressively extracting and erasing textual and foreground components to reconstruct editable PSD layers from a single flattened image. An RGBA-VAE is employed as an auxiliary representation module to preserve transparency without affecting structure learning. Extensive experiments on our new RGBA-layered dataset demonstrate that OmniPSD achieves high-fidelity generation, structural consistency, and transparency awareness, offering a new paradigm for layered design generation and decomposition with diffusion transformers.


Published 2 weeks ago





