ViViD·Lab

Vision and Video Dynamics Lab

We study how the visual world changes over time — and build systems that perceive, reconstruct, and predict that change.

From pixel-level motion to world-scale simulation, ViViD Lab pursues a single research agenda — temporal dynamics — at increasing levels of abstraction.

Video Processing

Low-level video enhancement

  • Frame Interpolation
  • Super-Resolution
  • Stabilization
  • Compression

Video Understanding

Temporal semantics & representation

  • Video Foundation Models
  • Action Recognition
  • Retrieval & QA

3D / 4D Vision

3D scene reconstruction & rendering

  • NeRF
  • Gaussian Splatting
  • Novel View Synthesis

World Models

Generative simulation of the visual world

  • World Foundation Models
  • Action-Conditioned Video Generation
  • Physics-Aware Generation
  • Embodied AI

News

2026
  • Lab

    ViViD Lab officially launches at SeoulTech. Welcome!

2025
  • Paper

    Temporal Smoothness-Aware Rate-Distortion Optimized 4D Gaussian Splatting accepted to NeurIPS 2025.

2023
  • Paper

    Exploring Discontinuity for Video Frame Interpolation accepted to CVPR 2023 (Highlight, top 10%).