Augmented reality and efficient endoscopic video segmentation

In this project I designed and developed an efficient intra-operative endoscopic video segmentation technique for robotic surgery systems.
The video above shows 3D models of the kidney (green), the external tumor (red), and the internal tumor (blue) overlaid on a 2D intra-operative video. The large black marker is an artificial tool that occludes the objects in the 2D scene. Our framework incorporates camera motion information so that it does not lose track of the objects during the occlusion. Our framework:
✔ segments multiple structures;
✔ handles multi-view endoscopic videos simultaneously;
✔ incorporates patient-specific prior (pre-operative 3D scan);
✔ handles non-rigid deformations of different structures;
✔ incorporates laparoscopic camera motion prior to stabilize the segmentation;
✔ runs in real time (≈45 ms per frame) on a single CPU core.
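
The camera motion prior can be illustrated with a minimal sketch: when a tool occludes an object, there is no image evidence to segment, so the last known object position is propagated with the estimated inter-frame camera motion. The function and variable names below are illustrative, not the project's actual API, and a 3x3 homography stands in for whatever motion model the system estimates.

```python
import numpy as np

def predict_under_occlusion(prev_centroid, camera_motion_H):
    """Predict an object's 2D position from the camera motion prior alone.

    prev_centroid: (x, y) pixel position of the object in the previous frame.
    camera_motion_H: 3x3 homography describing inter-frame camera motion.
    Returns the predicted (x, y) position in the current frame.
    """
    p = np.array([prev_centroid[0], prev_centroid[1], 1.0])  # homogeneous coords
    q = camera_motion_H @ p                                  # apply camera motion
    return q[:2] / q[2]                                      # back to pixel coords

# Example: the camera translates by (5, -3) pixels between frames
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
print(predict_under_occlusion((100.0, 80.0), H))  # -> [105.  77.]
```

This keeps the overlay anchored to the (hidden) anatomy until the occluder moves away and image evidence becomes available again.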

The idea is to segment the kidney and tumors in the pre-operative 3D data, then transform and deform the resulting models so that their projections onto the 2D intra-operative views delineate the boundaries of the objects of interest.
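
The projection step can be sketched with a standard pinhole camera model: each 3D model point is transformed into the camera frame and perspective-divided into pixel coordinates. The intrinsics and pose below are hypothetical placeholders, not calibration values from the project.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project 3D model points onto the 2D image plane (pinhole model).

    points_3d: (N, 3) points in world coordinates.
    K: (3, 3) camera intrinsics.
    R, t: world-to-camera rotation (3, 3) and translation (3,).
    Returns (N, 2) pixel coordinates.
    """
    cam = points_3d @ R.T + t          # transform into the camera frame
    proj = cam @ K.T                   # apply intrinsics
    return proj[:, :2] / proj[:, 2:3]  # perspective divide

# Hypothetical intrinsics: focal length 500 px, principal point (320, 240)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 2.0]])      # a point 2 m in front of the camera
print(project_points(pts, K, R, t))    # point on the optical axis -> principal point
```

In the actual pipeline, the projected silhouettes of the deformed kidney and tumor models are what get compared against (and aligned to) the object boundaries observed in each intra-operative view.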