@@ -207,10 +207,10 @@ <h4 class="speaker-name">
           --> </a>, Meta
         </h4>
         <p class="speaker-bio">
-          <span class="talk-title">Talk title: </span> TBC
+          <span class="talk-title">Talk title</span>: TBC
         </p>
         <p class="speaker-bio">
-          <span class="talk-title">Bio: </span> Jakob Engel joined the Surreal Vision team at Oculus
+          <span class="talk-title">Bio</span>: Jakob Engel joined the Surreal Vision team at Oculus
           Research in Redmond in 2016, working on the future of 3D-enabled Machine Perception. He did his
           Bachelor and Master at TU Munich (2009 and 2012), followed up with a PhD at the Computer Vision
           Group there, headed by Professor Daniel Cremers. He spent 6 months as Intern at Intel Research
@@ -231,16 +231,16 @@ <h4 class="speaker-name">
           --> </a>, Naver Labs Europe
         </h4>
         <p class="speaker-bio">
-          <span class="talk-title">Talk title: </span> Grounding Image Matching in 3D with MASt3R <br/>
-          <span class="talk-title">Summary: </span> The journey from CroCo to MASt3R exemplify a
+          <span class="talk-title">Talk title</span>: Grounding Image Matching in 3D with MASt3R <br/>
+          <span class="talk-title">Summary</span>: The journey from CroCo to MASt3R exemplifies a
           significant paradigm shift in 3D vision technologies. This presentation will delve into the
           methodologies, innovations, and synergistic integration of these frameworks, demonstrating their
           impact on the field and potential future directions. The discussion aims to highlight how these
           advancements unify and streamline the processing of 3D visual data, offering new perspectives
           and capabilities in map-free visual relocalization, robotic navigation and beyond.
         </p>
         <p class="speaker-bio">
-          <span class="talk-title">Bio: </span> Vincent is a research scientist in Geometric Deep Learning
+          <span class="talk-title">Bio</span>: Vincent is a research scientist in Geometric Deep Learning
           at Naver Labs Europe.
           He joined 5 years ago, in 2019, after completing his PhD on Multi-View Stereo Reconstruction for
           dynamic shapes at the INRIA Grenoble-Alpes under the supervision of E. Boyer and J-S. Franco.
@@ -261,10 +261,10 @@ <h4 class="speaker-name">
           --> </a>, Google
         </h4>
         <p class="speaker-bio">
-          <span class="talk-title">Talk title: </span> TBC
+          <span class="talk-title">Talk title</span>: TBC
         </p>
         <p class="speaker-bio">
-          <span class="talk-title">Bio: </span> Simon Lynen is a tech lead manager at Google Zurich. His
+          <span class="talk-title">Bio</span>: Simon Lynen is a tech lead manager at Google Zurich. His
           group focuses on providing high precision mobile-phone localization as part of the Visual
           Positioning Service (VPS). Devices with Google’s augmented reality capabilities can leverage VPS
           to enable global scale location aware experiences such as ARCore CloudAnchors and GoogleMaps
287
287
--> </ a > , CTU Prague
288
288
</ h4 >
289
289
< p class ="speaker-bio ">
290
- < span class ="talk-title "> Talk title:</ span > TBC
290
+ < span class ="talk-title "> Talk title</ span > : Scene Representations for Visual Localization < br />
291
+ < span class ="talk-title "> Summary</ span > : Visual localization is the problem of estimating the
292
+ exact position and orientation from which a given image was taken. Traditionally, localization
293
+ approaches either used a set of images with known camera poses or a sparse point cloud, obtained
294
+ from Structure-from-Motion, to represent the scene. In recent years, the list of available scene
295
+ representations has grown considerably. In this talk, we review a subset of the available
296
+ representations.
291
297
</ p >
292
298
< p class ="speaker-bio ">
293
- < span class ="talk-title "> Bio: </ span > Torsten Sattler is a Senior Researcher at CTU. Before, he
299
+ < span class ="talk-title "> Bio</ span > : Torsten Sattler is a Senior Researcher at CTU. Before, he
294
300
was a tenured associate professor at Chalmers University of Technology. He received a PhD in
295
301
Computer Science from RWTH Aachen University, Germany, in 2014. From Dec. 2013 to Dec. 2018, he
296
302
was a post-doctoral and senior researcher at ETH Zurich. Torsten has worked on feature-based
@@ -316,9 +322,9 @@ <h4 class="speaker-name">
           --> </a>, Carnegie Mellon University
         </h4>
         <p class="speaker-bio">
-          <span class="talk-title">Talk title: </span> Rethinking Camera Parametrization for Pose
+          <span class="talk-title">Talk title</span>: Rethinking Camera Parametrization for Pose
           Prediction<br/>
-          <span class="talk-title">Summary: </span> Every student of projective geometry is taught to
+          <span class="talk-title">Summary</span>: Every student of projective geometry is taught to
           represent camera matrices via an extrinsic and intrinsic matrix, and learning-based methods that
           seek to predict viewpoints given a set of images typically adopt this (global) representation.
           In this talk, I will advocate for an over-parametrized local representation which represents
@@ -327,7 +333,7 @@ <h4 class="speaker-name">
           for neural learning and lead to more accurate camera prediction.
         </p>
         <p class="speaker-bio">
-          <span class="talk-title">Bio: </span> Shubham Tulsiani is an Assistant Professor at Carnegie
+          <span class="talk-title">Bio</span>: Shubham Tulsiani is an Assistant Professor at Carnegie
           Mellon University in the Robotics Institute. Prior to this, he was a research scientist at
           Facebook AI Research (FAIR). He received a PhD in Computer Science from UC Berkeley in 2018
           where his work was supported by the Berkeley Fellowship. He is interested in building perception
@@ -347,10 +353,10 @@ <h4 class="speaker-name">
           --> </a>, Niantic and Oxford University
         </h4>
         <p class="speaker-bio">
-          <span class="talk-title">Talk title: </span> TBC (opening remarks)
+          <span class="talk-title">Talk title</span>: TBC (opening remarks)
         </p>
         <p class="speaker-bio">
-          <span class="talk-title">Bio: </span> Professor Victor Adrian Prisacariu received the Graduate
+          <span class="talk-title">Bio</span>: Professor Victor Adrian Prisacariu received the Graduate
           degree (with first class hons.) in computer engineering from Gheorghe Asachi Technical
           University, Iasi, Romania, in 2008, and the D.Phil. degree in engineering science from the
           University of Oxford in 2012.<br/>
@@ -464,7 +470,14 @@ <h4>📅 On Monday, 30<sup>th</sup> September, 2024, AM </h4
           </tr>
           <tr>
             <td class="text-nowrap">11:50 - 12:20</td>
-            <td><a href="https://tsattler.github.io/">Torsten Sattler</a>, CTU Prague</td>
+            <td><a href="https://tsattler.github.io/">Torsten Sattler</a>, CTU Prague<br/>
+              <i>Scene Representations for Visual Localization</i><br/>
+              Visual localization is the problem of estimating the exact position and orientation from which a
+              given image was taken. Traditionally, localization approaches either used a set of images with
+              known camera poses or a sparse point cloud, obtained from Structure-from-Motion, to represent
+              the scene. In recent years, the list of available scene representations has grown considerably.
+              In this talk, we review a subset of the available representations.
+            </td>
           </tr>
           <tr>
             <td class="text-nowrap">12:25 - 12:55</td>
@@ -481,7 +494,25 @@ <h4>📅 On Monday, 30<sup>th</sup> September, 2024, AM </h4
           </tr>
           <tr>
             <td class="text-nowrap">12:55 - 13:00</td>
-            <td>Closing Remarks and award photos</td>
+            <td>
+              <div class="table-responsive">
+                <table class="table table-borderless p-0 m-0">
+                  <tr class="p-0 m-0">
+                    <td class="p-0 m-0 col-4">Closing Remarks and award photos</td>
+                    <td class="p-0 m-0 col-1"></td>
+                    <td class="p-0 m-0 col-6">
+                      <div class="ratio ratio-1x1 text-center align-middle">
+                        <iframe class="yourClass"
+                          style="border: 0; width: calc(100% - 0px); height: calc(100% - 0px);"
+                          allowfullscreen
+                          src="https://scaniverse.com/scan/qafjd2g35pd7awe4?embed=1"></iframe>
+                      </div> <!-- ratio -->
+                    </td>
+                    <td class="p-0 m-0 col-1"></td>
+                  </tr>
+                </table>
+              </div> <!-- table-responsive -->
+            </td>
           </tr>
         </tbody>
       </table>