1
00:00:01,834 --> 00:00:06,434
Our model for disparity remapping is built on perceptual experiments.

2
00:00:08,261 --> 00:00:15,561
In those, participants had to decide whether a stimulus is moving in depth or not.

3
00:00:17,140 --> 00:00:22,073
Or whether its disparity corrugation is being scaled.

4
00:00:28,130 --> 00:00:32,930
From that, we have learned threshold speeds for our model.

5
00:00:33,579 --> 00:00:38,446
We use them to power our applications, such as gaze-driven remapping.

6
00:00:38,807 --> 00:00:43,607
Here you see a static linear mapping of disparity, with the gaze shown as a white dot.

7
00:00:44,098 --> 00:00:48,198
Such a mapping does not adapt to changes in gaze location in any way.

8
00:00:48,592 --> 00:00:53,792
An alternative, shown here, shifts the disparity at the gaze to the screen.

9
00:00:54,318 --> 00:00:56,918
That, however, leads to annoying jumps.

10
00:01:00,332 --> 00:01:07,465
Hanhart et al. proposed a temporally coherent extension to improve stability.

11
00:01:12,382 --> 00:01:15,882
Our method additionally takes the scaling of depth into account.

12
00:01:16,264 --> 00:01:19,064
That opens up many more mapping possibilities.

13
00:01:19,135 --> 00:01:23,135
The model ensures that no temporal artifacts are introduced.

14
00:01:28,081 --> 00:01:32,614
Here we present the resulting mappings as recorded using an eye tracker.

15
00:01:32,792 --> 00:01:36,459
Note that this cannot reproduce a real user experience.

16
00:01:36,848 --> 00:01:38,915
First, we show a static mapping.

17
00:01:41,564 --> 00:01:43,631
Then, our gaze-adaptive mapping.

18
00:01:44,044 --> 00:01:47,977
You may follow the gaze dot and note the depth enhancement.

19
00:01:53,624 --> 00:01:57,824
And finally, a comparison of all four methods using a split screen.

20
00:01:58,183 --> 00:02:02,783
Our method does not show artifacts like the immediate shift does.

21
00:02:03,164 --> 00:02:07,097
It also leads to a stronger depth impression than the alternatives.

22
00:02:07,536 --> 00:02:10,403
Please refer to the paper for a user study.

23
00:02:10,821 --> 00:02:14,421
Here are some more results for both CG and real content.

24
00:03:54,461 --> 00:03:58,094
Our transition model is also useful for reducing abrupt disparity changes...

25
00:03:58,210 --> 00:04:01,077
...in movie cuts such as this one.

26
00:04:01,246 --> 00:04:05,446
The disparity change is seamlessly distributed over neighboring frames.

27
00:04:05,876 --> 00:04:06,876
A comparison.

28
00:04:11,353 --> 00:04:14,286
We also support offline depth preprocessing.

29
00:04:15,926 --> 00:04:20,126
Mapping curves constructed independently for each frame are temporally incoherent.

30
00:04:27,576 --> 00:04:32,303
Our model finds new curves and seamless transitions between them.

31
00:04:34,912 --> 00:04:38,122
A comparison.

32
00:04:47,176 --> 00:04:49,676
Thank you.
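
The gaze-driven remapping narrated in cues 4-9 can be illustrated with a minimal sketch: the disparity at the fixated point is shifted toward the screen plane, but the per-frame change is capped by a threshold speed so the transition stays below the detection limit. The value V_SHIFT, the function name, and the simple clamping scheme below are illustrative assumptions, not the measured thresholds or the paper's implementation.

import numpy as np

# Assumed detection-threshold speed (disparity units per second);
# the real values come from the paper's perceptual experiments.
V_SHIFT = 0.8

def gaze_remap_step(disparity, gaze_xy, offset, dt):
    # Target: shift the whole disparity map so the fixated pixel
    # lands on the screen plane (zero disparity).
    gx, gy = gaze_xy
    target = -disparity[gy, gx]
    # Move the current offset toward the target no faster than the
    # threshold speed, so the change stays invisible to the viewer.
    step = np.clip(target - offset, -V_SHIFT * dt, V_SHIFT * dt)
    offset = offset + step
    return disparity + offset, offset

Called once per frame with the previous frame's offset, this converges to the new gaze depth without the visible jump of the immediate-shift alternative.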
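
The cut handling in cues 24-27 can be sketched the same way: the disparity jump at a cut is cancelled and then re-introduced over enough frames that the change stays sub-threshold. The one-sided ramp and all names here are assumptions for illustration; the paper's transition model distributes the change more generally.

import numpy as np

def spread_cut_jump(disp_levels, cut, v_thresh, dt):
    # disp_levels: per-frame global disparity level (e.g., the mean);
    # an assumed simplification of a full per-pixel disparity map.
    disp = np.asarray(disp_levels, dtype=float)
    jump = disp[cut] - disp[cut - 1]   # discontinuity introduced by the cut
    # Number of frames needed to re-introduce the jump below threshold.
    n = max(1, int(np.ceil(abs(jump) / (v_thresh * dt))))
    offsets = np.zeros_like(disp)
    # Cancel the jump at the cut frame, then ramp the correction back
    # to zero in sub-threshold steps of at most v_thresh * dt.
    ramp = np.linspace(-jump, 0.0, n + 1)
    end = min(cut + n + 1, len(disp))
    offsets[cut:end] = ramp[: end - cut]
    return disp + offsets              # smoothed disparity levels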
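
For the offline preprocessing in cues 28-31, one simple stand-in for finding seamless transitions between per-frame mapping curves is to cap how fast each curve control point may move from frame to frame, which removes the incoherence of independently constructed curves. Again a sketch under assumed inputs, not the paper's actual optimization.

import numpy as np

def smooth_mapping_curves(curves, v_thresh, dt):
    # curves: array of shape (num_frames, num_control_points) holding
    # per-frame disparity mapping curves (an assumed representation).
    curves = np.asarray(curves, dtype=float)
    out = curves.copy()
    max_step = v_thresh * dt           # largest invisible per-frame change
    for t in range(1, len(out)):
        # Follow each frame's desired curve, but limit control-point
        # motion so consecutive curves transition below threshold.
        delta = curves[t] - out[t - 1]
        out[t] = out[t - 1] + np.clip(delta, -max_step, max_step)
    return out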