High Dynamic Range (HDR) images offer clear advantages over Low Dynamic Range (LDR) images, such as greater bit depth, a wider color gamut, and a higher dynamic range, enhancing both visual quality and post-production flexibility. However, challenges remain in HDR content
acquisition and display. This thesis investigates deep learning methods informed by physical priors to address these challenges. It explores HDR
reconstruction from sparse, defocused LDR inputs using implicit neural representations, and extends this approach to all-in-focus HDR field reconstruction from multi-view inputs via 3D Gaussian Splatting. It further explores HDR generation from in-the-wild LDR images or limited HDR data,
leveraging the resulting learned HDR prior for LDR-to-HDR restoration. Finally, it proposes a self-supervised tone mapping framework based on a feature
contrast masking loss to enable perceptually faithful HDR display on LDR devices.
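
As one concrete illustration of the first thread, the sketch below fits an implicit neural representation (a small coordinate MLP) to an HDR signal observed only through clipped, bracketed LDR exposures. This is a minimal PyTorch sketch under stated assumptions: the `HDRField` architecture, the toy gamma-and-clipping camera model in `ldr_observe`, and the synthetic `scene_radiance` are illustrative placeholders, not the thesis's actual method, which additionally handles defocused inputs and multi-view geometry.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HDRField(nn.Module):
    """Coordinate MLP mapping (x, y) in [-1, 1]^2 to linear HDR radiance (R, G, B)."""
    def __init__(self, hidden=128, n_freqs=8):
        super().__init__()
        self.n_freqs = n_freqs
        self.net = nn.Sequential(
            nn.Linear(4 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def encode(self, xy):
        # NeRF-style Fourier features let the MLP fit high-frequency detail.
        freqs = 2.0 ** torch.arange(self.n_freqs, dtype=torch.float32,
                                    device=xy.device) * torch.pi
        angles = xy.unsqueeze(-1) * freqs                 # (N, 2, n_freqs)
        return torch.cat([angles.sin(), angles.cos()], -1).flatten(-2)

    def forward(self, xy):
        # Softplus keeps predicted radiance non-negative (linear HDR domain).
        return F.softplus(self.net(self.encode(xy)))

def ldr_observe(hdr, exposure, gamma=2.2):
    """Toy camera model: exposure scaling, sensor clipping, gamma encoding."""
    return torch.clamp(hdr * exposure, 0.0, 1.0) ** (1.0 / gamma)

def scene_radiance(xy):
    """Synthetic stand-in for a real scene: a bright highlight on a dim background."""
    d2 = (xy ** 2).sum(-1, keepdim=True)
    return 0.05 + 10.0 * torch.exp(-20.0 * d2).repeat(1, 3)

model = HDRField()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
exposures = [0.05, 0.5, 4.0]   # bracketed exposures; each clips a different range

for step in range(2000):
    xy = torch.rand(4096, 2) * 2.0 - 1.0      # random coordinates in [-1, 1]^2
    pred_hdr = model(xy)
    gt_hdr = scene_radiance(xy)
    # Supervise only through simulated LDR projections, never the HDR ground truth.
    loss = sum(F.mse_loss(ldr_observe(pred_hdr, e), ldr_observe(gt_hdr, e))
               for e in exposures)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because each exposure clips a different part of the luminance range, the network recovers the full range only by fusing information across the LDR observations; this is the core inverse problem that the reconstruction chapters address in harder settings.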