
PhyEdit: Towards Real-World Object Manipulation via Physically-Grounded Image Editing

Ruihang Xu, Dewei Zhou, Xiaolong Shen, Fan Ma, Yi Yang

Recommendation Score: 32

Tags: significant · Advanced · Robotics · Robot Manipulation · Benchmark · Useful for both

Research context

Primary field

Robotics

Embodied systems, control, manipulation, and navigation.

Topics

Robot Manipulation

Paper type

Benchmark

Best for

Useful for both

arXiv categories

cs.CV

Why It Matters

Adds 3D geometry and physical constraints to image editing, plus a new benchmark, making object manipulation edits far more reliable for world-model, simulation, and synthetic-data workflows.

Abstract

Achieving physically accurate object manipulation in image editing is essential for its potential applications in interactive world models. However, existing visual generative models often fail at precise spatial manipulation, resulting in incorrect scaling and positioning of objects. This limitation primarily stems from the lack of explicit mechanisms to incorporate 3D geometry and perspective projection. To achieve accurate manipulation, we develop PhyEdit, an image editing framework that leverages explicit geometric simulation as contextual 3D-aware visual guidance. By combining this plug-and-play 3D prior with joint 2D–3D supervision, our method effectively improves physical accuracy and manipulation consistency. To support this method and evaluate performance, we present a real-world dataset, RealManip-10K, for 3D-aware object manipulation featuring paired images and depth annotations. We also propose ManipEval, a benchmark with multi-dimensional metrics to evaluate 3D spatial control and geometric consistency. Extensive experiments show that our approach outperforms existing methods, including strong closed-source models, in both 3D geometric accuracy and manipulation consistency.

Published April 8, 2026
© 2026 A2A.pub — AI to Action. From papers to practice, daily.
Summaries are AI-assisted.