Dataset Viewer
Auto-converted to Parquet
image
imagewidth (px)
1.73k
3.3k
pdf_name
stringclasses
100 values
page_number
int64
0
49
markdown
stringlengths
0
10.8k
html
stringlengths
0
10.8k
layout
stringlengths
2
11.3k
lines
stringlengths
2
29.6k
images
stringlengths
2
2.03k
equations
stringlengths
2
7.76k
tables
stringclasses
1 value
page_size
stringclasses
4 values
content_list
stringlengths
2
34.2k
base_layout_detection
stringlengths
435
49.1k
pdf_info
stringlengths
584
161k
2305.03027
0
# NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads TOBIAS KIRSCHSTEIN, Technical University of Munich, Germany SHENHAN QIAN, Technical University of Munich, Germany SIMON GIEBENHAIN, Technical University of Munich, Germany TIM WALTER, Technical University of Munich, Germany MATTHIAS NIESSNER, Technical University of Munich, Germany Fig. 1. NeRSemble: Given multi-view video recordings from twelve cameras (left), our method is capable of synthesizing highly realistic novel views of human heads in complex motion. Our renderings from unseen views (right) faithfully represent static scene parts and regions undergoing highly non-rigid deformations. Along with our method, we publish our high-quality multi-view video capture data of 31.7 million frames from a total of 222 subjects. We focus on reconstructing high-fidelity radiance fields of human heads, capturing their animations over time, and synthesizing re-renderings from novel viewpoints at arbitrary time steps. To this end, we propose a new multi-view capture setup composed of 16 calibrated machine vision cameras that record time-synchronized images at $$7.1\;\mathrm{MP}$$ resolution and 73 frames per second. With our setup, we collect a new dataset of over 4700 high- resolution, high-framerate sequences of more than 220 human heads, from which we introduce a new human head reconstruction benchmark. The recorded sequences cover a wide range of facial dynamics, including head motions, natural expressions, emotions, and spoken language. In order to re- construct high-fidelity human heads, we propose Dynamic Neural Radiance Fields using Hash Ensembles (NeRSemble). We represent scene dynamics by combining a deformation field and an ensemble of 3D multi-resolution hash encodings. The deformation field allows for precise modeling of simple scene movements, while the ensemble of hash encodings helps to represent complex dynamics. As a result, we obtain radiance field representations of human heads that capture motion over time and facilitate re-rendering of arbitrary novel viewpoints. In a series of experiments, we explore the design choices of our method and demonstrate that our approach outperforms state-of-the-art dynamic radiance field approaches by a significant margin. CCS Concepts: $$\cdot$$ Computing methodologies $$\rightarrow$$ Rendering; 3D imaging; Volumetric models; Reconstruction. Additional Key Words and Phrases: Neural Radiance Fields, Dynamic Scene Representations, Novel View Synthesis, Multi-View Video Dataset, Human Heads # 1 INTRODUCTION In recent years, we have seen tremendous growth in the impor- tance of digital applications that rely on photo-realistic rendering of images from captured scene representations, both in society and in- dustry. In particular, the synthesis of novel views of dynamic human faces and heads has become the center of attention in many graphics applications ranging from computer games and movie productions to settings in virtual or augmented reality. Here, the key task is the following: given a recording of a human actor who is displaying facial expressions or talking, reconstruct a temporally-consistent 3D representation. This representation should enable the synthesis of photo-realistic re-renderings of the human face from arbitrary viewpoints and time steps. However, reconstructing a 3D representation capable of photo- realistic novel viewpoint rendering is particularly challenging for dynamic objects. Here, we not only have to reconstruct the static appearance of a person, but we also have to simultaneously capture the motion over time and encode it in a compact scene represen- tation. The task becomes even more challenging in the context of human faces, as fine-scale and high-fidelity detail are required for downstream applications, where the tolerance for visual artifacts
<h1>NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads</h1> <p>TOBIAS KIRSCHSTEIN, Technical University of Munich, Germany SHENHAN QIAN, Technical University of Munich, Germany SIMON GIEBENHAIN, Technical University of Munich, Germany TIM WALTER, Technical University of Munich, Germany MATTHIAS NIESSNER, Technical University of Munich, Germany</p> <p>Fig. 1. NeRSemble: Given multi-view video recordings from twelve cameras (left), our method is capable of synthesizing highly realistic novel views of human heads in complex motion. Our renderings from unseen views (right) faithfully represent static scene parts and regions undergoing highly non-rigid deformations. Along with our method, we publish our high-quality multi-view video capture data of 31.7 million frames from a total of 222 subjects.</p> <p>We focus on reconstructing high-fidelity radiance fields of human heads, capturing their animations over time, and synthesizing re-renderings from novel viewpoints at arbitrary time steps. To this end, we propose a new multi-view capture setup composed of 16 calibrated machine vision cameras that record time-synchronized images at $$7.1\;\mathrm{MP}$$ resolution and 73 frames per second. With our setup, we collect a new dataset of over 4700 high- resolution, high-framerate sequences of more than 220 human heads, from which we introduce a new human head reconstruction benchmark. The recorded sequences cover a wide range of facial dynamics, including head motions, natural expressions, emotions, and spoken language. In order to re- construct high-fidelity human heads, we propose Dynamic Neural Radiance Fields using Hash Ensembles (NeRSemble). We represent scene dynamics by combining a deformation field and an ensemble of 3D multi-resolution hash encodings. The deformation field allows for precise modeling of simple scene movements, while the ensemble of hash encodings helps to represent complex dynamics. As a result, we obtain radiance field representations of human heads that capture motion over time and facilitate re-rendering of arbitrary novel viewpoints. In a series of experiments, we explore the design choices of our method and demonstrate that our approach outperforms state-of-the-art dynamic radiance field approaches by a significant margin.</p> <p>CCS Concepts: $$\cdot$$ Computing methodologies $$\rightarrow$$ Rendering; 3D imaging; Volumetric models; Reconstruction.</p> <p>Additional Key Words and Phrases: Neural Radiance Fields, Dynamic Scene Representations, Novel View Synthesis, Multi-View Video Dataset, Human Heads</p> <h1>1 INTRODUCTION</h1> <p>In recent years, we have seen tremendous growth in the impor- tance of digital applications that rely on photo-realistic rendering of images from captured scene representations, both in society and in- dustry. In particular, the synthesis of novel views of dynamic human faces and heads has become the center of attention in many graphics applications ranging from computer games and movie productions to settings in virtual or augmented reality. Here, the key task is the following: given a recording of a human actor who is displaying facial expressions or talking, reconstruct a temporally-consistent 3D representation. This representation should enable the synthesis of photo-realistic re-renderings of the human face from arbitrary viewpoints and time steps.</p> <p>However, reconstructing a 3D representation capable of photo- realistic novel viewpoint rendering is particularly challenging for dynamic objects. Here, we not only have to reconstruct the static appearance of a person, but we also have to simultaneously capture the motion over time and encode it in a compact scene represen- tation. The task becomes even more challenging in the context of human faces, as fine-scale and high-fidelity detail are required for downstream applications, where the tolerance for visual artifacts</p>
[{"type": "title", "coordinates": [51, 75, 561, 94], "content": "NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads", "block_type": "title", "index": 1}, {"type": "text", "coordinates": [51, 105, 343, 174], "content": "TOBIAS KIRSCHSTEIN, Technical University of Munich, Germany\nSHENHAN QIAN, Technical University of Munich, Germany\nSIMON GIEBENHAIN, Technical University of Munich, Germany\nTIM WALTER, Technical University of Munich, Germany\nMATTHIAS NIESSNER, Technical University of Munich, Germany", "block_type": "text", "index": 2}, {"type": "image", "coordinates": [49, 184, 561, 337], "content": "", "block_type": "image", "index": 3}, {"type": "text", "coordinates": [51, 356, 561, 384], "content": "Fig. 1. NeRSemble: Given multi-view video recordings from twelve cameras (left), our method is capable of synthesizing highly realistic novel views of\nhuman heads in complex motion. Our renderings from unseen views (right) faithfully represent static scene parts and regions undergoing highly non-rigid\ndeformations. Along with our method, we publish our high-quality multi-view video capture data of 31.7 million frames from a total of 222 subjects.", "block_type": "text", "index": 4}, {"type": "text", "coordinates": [51, 391, 295, 592], "content": "We focus on reconstructing high-fidelity radiance fields of human heads,\ncapturing their animations over time, and synthesizing re-renderings from\nnovel viewpoints at arbitrary time steps. To this end, we propose a new\nmulti-view capture setup composed of 16 calibrated machine vision cameras\nthat record time-synchronized images at $$7.1\\;\\mathrm{MP}$$ resolution and 73 frames\nper second. With our setup, we collect a new dataset of over 4700 high-\nresolution, high-framerate sequences of more than 220 human heads, from\nwhich we introduce a new human head reconstruction benchmark. The\nrecorded sequences cover a wide range of facial dynamics, including head\nmotions, natural expressions, emotions, and spoken language. In order to re-\nconstruct high-fidelity human heads, we propose Dynamic Neural Radiance\nFields using Hash Ensembles (NeRSemble). We represent scene dynamics\nby combining a deformation field and an ensemble of 3D multi-resolution\nhash encodings. The deformation field allows for precise modeling of simple\nscene movements, while the ensemble of hash encodings helps to represent\ncomplex dynamics. As a result, we obtain radiance field representations of\nhuman heads that capture motion over time and facilitate re-rendering of\narbitrary novel viewpoints. In a series of experiments, we explore the design\nchoices of our method and demonstrate that our approach outperforms\nstate-of-the-art dynamic radiance field approaches by a significant margin.", "block_type": "text", "index": 5}, {"type": "text", "coordinates": [317, 392, 561, 412], "content": "CCS Concepts: $$\\cdot$$ Computing methodologies $$\\rightarrow$$ Rendering; 3D imaging;\nVolumetric models; Reconstruction.", "block_type": "text", "index": 6}, {"type": "text", "coordinates": [317, 418, 561, 449], "content": "Additional Key Words and Phrases: Neural Radiance Fields, Dynamic Scene\nRepresentations, Novel View Synthesis, Multi-View Video Dataset, Human\nHeads", "block_type": "text", "index": 7}, {"type": "title", "coordinates": [318, 460, 405, 471], "content": "1 INTRODUCTION", "block_type": "title", "index": 8}, {"type": "text", "coordinates": [317, 475, 562, 606], "content": "In recent years, we have seen tremendous growth in the impor-\ntance of digital applications that rely on photo-realistic rendering of\nimages from captured scene representations, both in society and in-\ndustry. In particular, the synthesis of novel views of dynamic human\nfaces and heads has become the center of attention in many graphics\napplications ranging from computer games and movie productions\nto settings in virtual or augmented reality. Here, the key task is the\nfollowing: given a recording of a human actor who is displaying\nfacial expressions or talking, reconstruct a temporally-consistent\n3D representation. This representation should enable the synthesis\nof photo-realistic re-renderings of the human face from arbitrary\nviewpoints and time steps.", "block_type": "text", "index": 9}, {"type": "text", "coordinates": [317, 606, 561, 694], "content": "However, reconstructing a 3D representation capable of photo-\nrealistic novel viewpoint rendering is particularly challenging for\ndynamic objects. Here, we not only have to reconstruct the static\nappearance of a person, but we also have to simultaneously capture\nthe motion over time and encode it in a compact scene represen-\ntation. The task becomes even more challenging in the context of\nhuman faces, as fine-scale and high-fidelity detail are required for\ndownstream applications, where the tolerance for visual artifacts", "block_type": "text", "index": 10}]
[{"type": "text", "coordinates": [52, 78, 559, 94], "content": "NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads", "score": 1.0, "index": 1}, {"type": "text", "coordinates": [50, 107, 343, 120], "content": "TOBIAS KIRSCHSTEIN, Technical University of Munich, Germany", "score": 1.0, "index": 2}, {"type": "text", "coordinates": [51, 120, 315, 135], "content": "SHENHAN QIAN, Technical University of Munich, Germany", "score": 1.0, "index": 3}, {"type": "text", "coordinates": [51, 134, 338, 148], "content": "SIMON GIEBENHAIN, Technical University of Munich, Germany", "score": 1.0, "index": 4}, {"type": "text", "coordinates": [50, 149, 296, 162], "content": "TIM WALTER, Technical University of Munich, Germany", "score": 1.0, "index": 5}, {"type": "text", "coordinates": [51, 162, 341, 176], "content": "MATTHIAS NIESSNER, Technical University of Munich, Germany", "score": 1.0, "index": 6}, {"type": "text", "coordinates": [49, 355, 562, 368], "content": "Fig. 1. NeRSemble: Given multi-view video recordings from twelve cameras (left), our method is capable of synthesizing highly realistic novel views of", "score": 1.0, "index": 7}, {"type": "text", "coordinates": [50, 366, 561, 377], "content": "human heads in complex motion. Our renderings from unseen views (right) faithfully represent static scene parts and regions undergoing highly non-rigid", "score": 1.0, "index": 8}, {"type": "text", "coordinates": [51, 377, 536, 386], "content": "deformations. Along with our method, we publish our high-quality multi-view video capture data of 31.7 million frames from a total of 222 subjects.", "score": 1.0, "index": 9}, {"type": "text", "coordinates": [50, 393, 295, 403], "content": "We focus on reconstructing high-fidelity radiance fields of human heads,", "score": 1.0, "index": 10}, {"type": "text", "coordinates": [51, 403, 294, 412], "content": "capturing their animations over time, and synthesizing re-renderings from", "score": 1.0, "index": 11}, {"type": "text", "coordinates": [51, 414, 294, 423], "content": "novel viewpoints at arbitrary time steps. To this end, we propose a new", "score": 1.0, "index": 12}, {"type": "text", "coordinates": [51, 424, 294, 432], "content": "multi-view capture setup composed of 16 calibrated machine vision cameras", "score": 1.0, "index": 13}, {"type": "text", "coordinates": [51, 433, 186, 443], "content": "that record time-synchronized images at", "score": 1.0, "index": 14}, {"type": "inline_equation", "coordinates": [186, 432, 210, 441], "content": "7.1\\;\\mathrm{MP}", "score": 0.39, "index": 15}, {"type": "text", "coordinates": [210, 433, 295, 443], "content": " resolution and 73 frames", "score": 1.0, "index": 16}, {"type": "text", "coordinates": [50, 443, 295, 452], "content": "per second. With our setup, we collect a new dataset of over 4700 high-", "score": 1.0, "index": 17}, {"type": "text", "coordinates": [51, 453, 295, 463], "content": "resolution, high-framerate sequences of more than 220 human heads, from", "score": 1.0, "index": 18}, {"type": "text", "coordinates": [51, 463, 294, 471], "content": "which we introduce a new human head reconstruction benchmark. The", "score": 1.0, "index": 19}, {"type": "text", "coordinates": [51, 473, 294, 482], "content": "recorded sequences cover a wide range of facial dynamics, including head", "score": 1.0, "index": 20}, {"type": "text", "coordinates": [51, 483, 296, 493], "content": "motions, natural expressions, emotions, and spoken language. In order to re-", "score": 1.0, "index": 21}, {"type": "text", "coordinates": [51, 492, 295, 503], "content": "construct high-fidelity human heads, we propose Dynamic Neural Radiance", "score": 1.0, "index": 22}, {"type": "text", "coordinates": [50, 502, 294, 512], "content": "Fields using Hash Ensembles (NeRSemble). We represent scene dynamics", "score": 1.0, "index": 23}, {"type": "text", "coordinates": [50, 513, 294, 522], "content": "by combining a deformation field and an ensemble of 3D multi-resolution", "score": 1.0, "index": 24}, {"type": "text", "coordinates": [50, 522, 294, 532], "content": "hash encodings. The deformation field allows for precise modeling of simple", "score": 1.0, "index": 25}, {"type": "text", "coordinates": [50, 532, 295, 542], "content": "scene movements, while the ensemble of hash encodings helps to represent", "score": 1.0, "index": 26}, {"type": "text", "coordinates": [51, 543, 295, 552], "content": "complex dynamics. As a result, we obtain radiance field representations of", "score": 1.0, "index": 27}, {"type": "text", "coordinates": [50, 552, 295, 562], "content": "human heads that capture motion over time and facilitate re-rendering of", "score": 1.0, "index": 28}, {"type": "text", "coordinates": [51, 563, 294, 572], "content": "arbitrary novel viewpoints. In a series of experiments, we explore the design", "score": 1.0, "index": 29}, {"type": "text", "coordinates": [51, 572, 294, 582], "content": "choices of our method and demonstrate that our approach outperforms", "score": 1.0, "index": 30}, {"type": "text", "coordinates": [51, 582, 294, 592], "content": "state-of-the-art dynamic radiance field approaches by a significant margin.", "score": 1.0, "index": 31}, {"type": "text", "coordinates": [317, 392, 367, 404], "content": "CCS Concepts:", "score": 1.0, "index": 32}, {"type": "inline_equation", "coordinates": [367, 394, 373, 400], "content": "\\cdot", "score": 0.35, "index": 33}, {"type": "text", "coordinates": [374, 392, 471, 404], "content": "Computing methodologies", "score": 1.0, "index": 34}, {"type": "inline_equation", "coordinates": [471, 393, 483, 401], "content": "\\rightarrow", "score": 0.77, "index": 35}, {"type": "text", "coordinates": [483, 392, 561, 404], "content": "Rendering; 3D imaging;", "score": 1.0, "index": 36}, {"type": "text", "coordinates": [318, 402, 428, 412], "content": "Volumetric models; Reconstruction.", "score": 1.0, "index": 37}, {"type": "text", "coordinates": [317, 419, 560, 429], "content": "Additional Key Words and Phrases: Neural Radiance Fields, Dynamic Scene", "score": 1.0, "index": 38}, {"type": "text", "coordinates": [317, 430, 561, 440], "content": "Representations, Novel View Synthesis, Multi-View Video Dataset, Human", "score": 1.0, "index": 39}, {"type": "text", "coordinates": [316, 441, 339, 449], "content": "Heads", "score": 1.0, "index": 40}, {"type": "text", "coordinates": [319, 463, 323, 469], "content": "1", "score": 1.0, "index": 41}, {"type": "text", "coordinates": [331, 461, 405, 471], "content": "INTRODUCTION", "score": 1.0, "index": 42}, {"type": "text", "coordinates": [317, 476, 561, 486], "content": "In recent years, we have seen tremendous growth in the impor-", "score": 1.0, "index": 43}, {"type": "text", "coordinates": [317, 487, 561, 497], "content": "tance of digital applications that rely on photo-realistic rendering of", "score": 1.0, "index": 44}, {"type": "text", "coordinates": [316, 498, 561, 508], "content": "images from captured scene representations, both in society and in-", "score": 1.0, "index": 45}, {"type": "text", "coordinates": [317, 509, 561, 519], "content": "dustry. In particular, the synthesis of novel views of dynamic human", "score": 1.0, "index": 46}, {"type": "text", "coordinates": [316, 519, 560, 531], "content": "faces and heads has become the center of attention in many graphics", "score": 1.0, "index": 47}, {"type": "text", "coordinates": [317, 531, 561, 541], "content": "applications ranging from computer games and movie productions", "score": 1.0, "index": 48}, {"type": "text", "coordinates": [316, 542, 560, 552], "content": "to settings in virtual or augmented reality. Here, the key task is the", "score": 1.0, "index": 49}, {"type": "text", "coordinates": [316, 552, 561, 564], "content": "following: given a recording of a human actor who is displaying", "score": 1.0, "index": 50}, {"type": "text", "coordinates": [316, 563, 561, 574], "content": "facial expressions or talking, reconstruct a temporally-consistent", "score": 1.0, "index": 51}, {"type": "text", "coordinates": [317, 575, 560, 585], "content": "3D representation. This representation should enable the synthesis", "score": 1.0, "index": 52}, {"type": "text", "coordinates": [317, 586, 560, 596], "content": "of photo-realistic re-renderings of the human face from arbitrary", "score": 1.0, "index": 53}, {"type": "text", "coordinates": [316, 596, 415, 608], "content": "viewpoints and time steps.", "score": 1.0, "index": 54}, {"type": "text", "coordinates": [325, 607, 560, 617], "content": "However, reconstructing a 3D representation capable of photo-", "score": 1.0, "index": 55}, {"type": "text", "coordinates": [316, 618, 560, 629], "content": "realistic novel viewpoint rendering is particularly challenging for", "score": 1.0, "index": 56}, {"type": "text", "coordinates": [317, 629, 560, 639], "content": "dynamic objects. Here, we not only have to reconstruct the static", "score": 1.0, "index": 57}, {"type": "text", "coordinates": [316, 640, 560, 651], "content": "appearance of a person, but we also have to simultaneously capture", "score": 1.0, "index": 58}, {"type": "text", "coordinates": [316, 650, 561, 662], "content": "the motion over time and encode it in a compact scene represen-", "score": 1.0, "index": 59}, {"type": "text", "coordinates": [316, 662, 562, 673], "content": "tation. The task becomes even more challenging in the context of", "score": 1.0, "index": 60}, {"type": "text", "coordinates": [317, 673, 560, 682], "content": "human faces, as fine-scale and high-fidelity detail are required for", "score": 1.0, "index": 61}, {"type": "text", "coordinates": [317, 685, 560, 694], "content": "downstream applications, where the tolerance for visual artifacts", "score": 1.0, "index": 62}]
[{"coordinates": [49, 184, 561, 337], "index": 7, "caption": "", "caption_coordinates": []}]
[{"type": "inline", "coordinates": [186, 432, 210, 441], "content": "7.1\\;\\mathrm{MP}", "caption": ""}, {"type": "inline", "coordinates": [367, 394, 373, 400], "content": "\\cdot", "caption": ""}, {"type": "inline", "coordinates": [471, 393, 483, 401], "content": "\\rightarrow", "caption": ""}]
[]
[612.0, 792.0]
[{"type": "text", "text": "NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads ", "text_level": 1, "page_idx": 0}, {"type": "text", "text": "TOBIAS KIRSCHSTEIN, Technical University of Munich, Germany SHENHAN QIAN, Technical University of Munich, Germany SIMON GIEBENHAIN, Technical University of Munich, Germany TIM WALTER, Technical University of Munich, Germany MATTHIAS NIESSNER, Technical University of Munich, Germany ", "page_idx": 0}, {"type": "image", "img_path": "images/ab2245f0990c99a4bec88df2f438eac35b5b04162fa6cb5df763240b2d3f3494.jpg", "img_caption": [], "img_footnote": [], "page_idx": 0}, {"type": "text", "text": "Fig. 1. NeRSemble: Given multi-view video recordings from twelve cameras (left), our method is capable of synthesizing highly realistic novel views of human heads in complex motion. Our renderings from unseen views (right) faithfully represent static scene parts and regions undergoing highly non-rigid deformations. Along with our method, we publish our high-quality multi-view video capture data of 31.7 million frames from a total of 222 subjects. ", "page_idx": 0}, {"type": "text", "text": "We focus on reconstructing high-fidelity radiance fields of human heads, capturing their animations over time, and synthesizing re-renderings from novel viewpoints at arbitrary time steps. To this end, we propose a new multi-view capture setup composed of 16 calibrated machine vision cameras that record time-synchronized images at $7.1\\;\\mathrm{MP}$ resolution and 73 frames per second. With our setup, we collect a new dataset of over 4700 highresolution, high-framerate sequences of more than 220 human heads, from which we introduce a new human head reconstruction benchmark. The recorded sequences cover a wide range of facial dynamics, including head motions, natural expressions, emotions, and spoken language. In order to reconstruct high-fidelity human heads, we propose Dynamic Neural Radiance Fields using Hash Ensembles (NeRSemble). We represent scene dynamics by combining a deformation field and an ensemble of 3D multi-resolution hash encodings. The deformation field allows for precise modeling of simple scene movements, while the ensemble of hash encodings helps to represent complex dynamics. As a result, we obtain radiance field representations of human heads that capture motion over time and facilitate re-rendering of arbitrary novel viewpoints. In a series of experiments, we explore the design choices of our method and demonstrate that our approach outperforms state-of-the-art dynamic radiance field approaches by a significant margin. ", "page_idx": 0}, {"type": "text", "text": "CCS Concepts: $\\cdot$ Computing methodologies $\\rightarrow$ Rendering; 3D imaging; \nVolumetric models; Reconstruction. ", "page_idx": 0}, {"type": "text", "text": "Additional Key Words and Phrases: Neural Radiance Fields, Dynamic Scene Representations, Novel View Synthesis, Multi-View Video Dataset, Human Heads ", "page_idx": 0}, {"type": "text", "text": "1 INTRODUCTION ", "text_level": 1, "page_idx": 0}, {"type": "text", "text": "In recent years, we have seen tremendous growth in the importance of digital applications that rely on photo-realistic rendering of images from captured scene representations, both in society and industry. In particular, the synthesis of novel views of dynamic human faces and heads has become the center of attention in many graphics applications ranging from computer games and movie productions to settings in virtual or augmented reality. Here, the key task is the following: given a recording of a human actor who is displaying facial expressions or talking, reconstruct a temporally-consistent 3D representation. This representation should enable the synthesis of photo-realistic re-renderings of the human face from arbitrary viewpoints and time steps. ", "page_idx": 0}, {"type": "text", "text": "However, reconstructing a 3D representation capable of photorealistic novel viewpoint rendering is particularly challenging for dynamic objects. Here, we not only have to reconstruct the static appearance of a person, but we also have to simultaneously capture the motion over time and encode it in a compact scene representation. The task becomes even more challenging in the context of human faces, as fine-scale and high-fidelity detail are required for downstream applications, where the tolerance for visual artifacts is typically very low. In particular, human heads exhibit several properties that make novel view synthesis (NVS) extremely challenging, such as the complexity of hair, differences in reflectance properties, and the elasticity of human skin that creates heavily non-rigid deformations and fine-scale wrinkles. ", "page_idx": 0}]
[{"category_id": 2, "poly": [143.1289825439453, 1790.775146484375, 820.7708740234375, 1790.775146484375, 820.7708740234375, 1927.2777099609375, 143.1289825439453, 1927.2777099609375], "score": 0.9999957084655762}, {"category_id": 1, "poly": [881.7755737304688, 1320.6949462890625, 1561.79541015625, 1320.6949462890625, 1561.79541015625, 1683.9954833984375, 881.7755737304688, 1683.9954833984375], "score": 0.9999874830245972}, {"category_id": 1, "poly": [881.644775390625, 1684.75830078125, 1559.719482421875, 1684.75830078125, 1559.719482421875, 1928.7205810546875, 881.644775390625, 1928.7205810546875], "score": 0.9999843835830688}, {"category_id": 1, "poly": [880.8367919921875, 1161.34326171875, 1559.1865234375, 1161.34326171875, 1559.1865234375, 1249.6429443359375, 880.8367919921875, 1249.6429443359375], "score": 0.9999812841415405}, {"category_id": 3, "poly": [138.2310333251953, 513.0323486328125, 1560.5162353515625, 513.0323486328125, 1560.5162353515625, 938.2305908203125, 138.2310333251953, 938.2305908203125], "score": 0.999976634979248}, {"category_id": 1, "poly": [142.70962524414062, 292.5888366699219, 954.7571411132812, 292.5888366699219, 954.7571411132812, 485.52801513671875, 142.70962524414062, 485.52801513671875], "score": 0.9999756813049316}, {"category_id": 2, "poly": [143.85250854492188, 1682.814697265625, 817.486328125, 1682.814697265625, 817.486328125, 1751.0677490234375, 143.85250854492188, 1751.0677490234375], "score": 0.9999669790267944}, {"category_id": 0, "poly": [142.28530883789062, 210.73602294921875, 1559.8887939453125, 210.73602294921875, 1559.8887939453125, 261.8651123046875, 142.28530883789062, 261.8651123046875], "score": 0.9999582171440125}, {"category_id": 2, "poly": [42.772090911865234, 575.1369018554688, 101.96903228759766, 575.1369018554688, 101.96903228759766, 1548.8795166015625, 42.772090911865234, 1548.8795166015625], "score": 0.9999510645866394}, {"category_id": 1, "poly": [142.10813903808594, 1086.964111328125, 821.2460327148438, 1086.964111328125, 821.2460327148438, 1644.5509033203125, 142.10813903808594, 1644.5509033203125], "score": 0.9998125433921814}, {"category_id": 0, "poly": [883.4697875976562, 1280.06591796875, 1126.474609375, 1280.06591796875, 1126.474609375, 1309.957275390625, 883.4697875976562, 1309.957275390625], "score": 0.9997137784957886}, {"category_id": 1, "poly": [882.9937133789062, 1090.837890625, 1558.720703125, 1090.837890625, 1558.720703125, 1145.205810546875, 882.9937133789062, 1145.205810546875], "score": 0.9942539930343628}, {"category_id": 1, "poly": [144.4083709716797, 990.1754760742188, 1559.2747802734375, 990.1754760742188, 1559.2747802734375, 1069.1806640625, 144.4083709716797, 1069.1806640625], "score": 0.9920875430107117}, {"category_id": 13, "poly": [1311, 1093, 1342, 1093, 1342, 1114, 1311, 1114], "score": 0.77, "latex": "\\rightarrow"}, {"category_id": 13, "poly": [519, 1202, 585, 1202, 585, 1226, 519, 1226], "score": 0.39, "latex": "7.1\\;\\mathrm{MP}"}, {"category_id": 13, "poly": [1022, 1097, 1038, 1097, 1038, 1112, 1022, 1112], "score": 0.35, "latex": "\\cdot"}, {"category_id": 15, "poly": [142.0, 1792.0, 822.0, 1792.0, 822.0, 1820.0, 142.0, 1820.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 1814.0, 822.0, 1814.0, 822.0, 1843.0, 141.0, 1843.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [140.0, 1834.0, 825.0, 1834.0, 825.0, 1866.0, 140.0, 1866.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [140.0, 1857.0, 824.0, 1857.0, 824.0, 1888.0, 140.0, 1888.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [143.0, 1883.0, 820.0, 1883.0, 820.0, 1906.0, 143.0, 1906.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [143.0, 1906.0, 371.0, 1906.0, 371.0, 1929.0, 143.0, 1929.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [881.0, 1323.0, 1561.0, 1323.0, 1561.0, 1352.0, 881.0, 1352.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [882.0, 1355.0, 1559.0, 1355.0, 1559.0, 1381.0, 882.0, 1381.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [880.0, 1384.0, 1560.0, 1384.0, 1560.0, 1412.0, 880.0, 1412.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [882.0, 1415.0, 1559.0, 1415.0, 1559.0, 1444.0, 882.0, 1444.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [880.0, 1442.0, 1558.0, 1442.0, 1558.0, 1475.0, 880.0, 1475.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [883.0, 1476.0, 1559.0, 1476.0, 1559.0, 1505.0, 883.0, 1505.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [879.0, 1506.0, 1558.0, 1506.0, 1558.0, 1535.0, 879.0, 1535.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [878.0, 1534.0, 1560.0, 1534.0, 1560.0, 1568.0, 878.0, 1568.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [880.0, 1566.0, 1559.0, 1566.0, 1559.0, 1597.0, 880.0, 1597.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [882.0, 1598.0, 1558.0, 1598.0, 1558.0, 1627.0, 882.0, 1627.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [883.0, 1628.0, 1558.0, 1628.0, 1558.0, 1657.0, 883.0, 1657.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [880.0, 1657.0, 1154.0, 1657.0, 1154.0, 1689.0, 880.0, 1689.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [905.0, 1687.0, 1558.0, 1687.0, 1558.0, 1716.0, 905.0, 1716.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [880.0, 1717.0, 1557.0, 1717.0, 1557.0, 1749.0, 880.0, 1749.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [883.0, 1749.0, 1557.0, 1749.0, 1557.0, 1777.0, 883.0, 1777.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [880.0, 1780.0, 1558.0, 1780.0, 1558.0, 1810.0, 880.0, 1810.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [880.0, 1808.0, 1560.0, 1808.0, 1560.0, 1840.0, 880.0, 1840.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [880.0, 1840.0, 1563.0, 1840.0, 1563.0, 1870.0, 880.0, 1870.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [882.0, 1872.0, 1556.0, 1872.0, 1556.0, 1897.0, 882.0, 1897.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [882.0, 1903.0, 1558.0, 1903.0, 1558.0, 1930.0, 882.0, 1930.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [881.0, 1165.0, 1557.0, 1165.0, 1557.0, 1192.0, 881.0, 1192.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [882.0, 1195.0, 1559.0, 1195.0, 1559.0, 1223.0, 882.0, 1223.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [880.0, 1226.0, 943.0, 1226.0, 943.0, 1249.0, 880.0, 1249.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 298.0, 954.0, 298.0, 954.0, 335.0, 141.0, 335.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [142.0, 334.0, 876.0, 334.0, 876.0, 375.0, 142.0, 375.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [142.0, 374.0, 941.0, 374.0, 941.0, 413.0, 142.0, 413.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 415.0, 824.0, 415.0, 824.0, 452.0, 141.0, 452.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [143.0, 451.0, 948.0, 451.0, 948.0, 490.0, 143.0, 490.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [140.0, 1678.0, 821.0, 1678.0, 821.0, 1711.0, 140.0, 1711.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 1705.0, 795.0, 1705.0, 795.0, 1731.0, 141.0, 1731.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [143.0, 1730.0, 665.0, 1730.0, 665.0, 1753.0, 143.0, 1753.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [146.0, 218.0, 1555.0, 218.0, 1555.0, 262.0, 146.0, 262.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [40.0, 577.0, 110.0, 577.0, 110.0, 1551.0, 40.0, 1551.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 1093.0, 821.0, 1093.0, 821.0, 1120.0, 141.0, 1120.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [144.0, 1122.0, 818.0, 1122.0, 818.0, 1147.0, 144.0, 1147.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [142.0, 1150.0, 819.0, 1150.0, 819.0, 1175.0, 142.0, 1175.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [143.0, 1178.0, 819.0, 1178.0, 819.0, 1202.0, 143.0, 1202.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [142.0, 1203.0, 518.0, 1203.0, 518.0, 1231.0, 142.0, 1231.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [586.0, 1203.0, 820.0, 1203.0, 820.0, 1231.0, 586.0, 1231.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [139.0, 1231.0, 822.0, 1231.0, 822.0, 1258.0, 139.0, 1258.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [142.0, 1259.0, 820.0, 1259.0, 820.0, 1287.0, 142.0, 1287.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [143.0, 1287.0, 819.0, 1287.0, 819.0, 1311.0, 143.0, 1311.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [143.0, 1316.0, 819.0, 1316.0, 819.0, 1341.0, 143.0, 1341.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [142.0, 1342.0, 824.0, 1342.0, 824.0, 1370.0, 142.0, 1370.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [142.0, 1369.0, 821.0, 1369.0, 821.0, 1399.0, 142.0, 1399.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 1397.0, 819.0, 1397.0, 819.0, 1424.0, 141.0, 1424.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 1426.0, 819.0, 1426.0, 819.0, 1450.0, 141.0, 1450.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 1452.0, 818.0, 1452.0, 818.0, 1480.0, 141.0, 1480.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 1480.0, 821.0, 1480.0, 821.0, 1508.0, 141.0, 1508.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [143.0, 1510.0, 822.0, 1510.0, 822.0, 1534.0, 143.0, 1534.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 1534.0, 820.0, 1534.0, 820.0, 1562.0, 141.0, 1562.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [143.0, 1564.0, 819.0, 1564.0, 819.0, 1591.0, 143.0, 1591.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [143.0, 1591.0, 818.0, 1591.0, 818.0, 1618.0, 143.0, 1618.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [142.0, 1618.0, 817.0, 1618.0, 817.0, 1646.0, 142.0, 1646.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [887.0, 1288.0, 898.0, 1288.0, 898.0, 1303.0, 887.0, 1303.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [921.0, 1281.0, 1127.0, 1281.0, 1127.0, 1310.0, 921.0, 1310.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [881.0, 1089.0, 1021.0, 1089.0, 1021.0, 1123.0, 881.0, 1123.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [1039.0, 1089.0, 1310.0, 1089.0, 1310.0, 1123.0, 1039.0, 1123.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [1343.0, 1089.0, 1561.0, 1089.0, 1561.0, 1123.0, 1343.0, 1123.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [884.0, 1119.0, 1191.0, 1119.0, 1191.0, 1147.0, 884.0, 1147.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [138.0, 988.0, 1563.0, 988.0, 1563.0, 1024.0, 138.0, 1024.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 1018.0, 1560.0, 1018.0, 1560.0, 1049.0, 141.0, 1049.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [143.0, 1048.0, 1491.0, 1048.0, 1491.0, 1074.0, 143.0, 1074.0], "score": 1.0, "text": ""}]
{"preproc_blocks": [{"type": "title", "bbox": [51, 75, 561, 94], "lines": [{"bbox": [52, 78, 559, 94], "spans": [{"bbox": [52, 78, 559, 94], "score": 1.0, "content": "NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads", "type": "text"}], "index": 0}], "index": 0}, {"type": "text", "bbox": [51, 105, 343, 174], "lines": [{"bbox": [50, 107, 343, 120], "spans": [{"bbox": [50, 107, 343, 120], "score": 1.0, "content": "TOBIAS KIRSCHSTEIN, Technical University of Munich, Germany", "type": "text"}], "index": 1}, {"bbox": [51, 120, 315, 135], "spans": [{"bbox": [51, 120, 315, 135], "score": 1.0, "content": "SHENHAN QIAN, Technical University of Munich, Germany", "type": "text"}], "index": 2}, {"bbox": [51, 134, 338, 148], "spans": [{"bbox": [51, 134, 338, 148], "score": 1.0, "content": "SIMON GIEBENHAIN, Technical University of Munich, Germany", "type": "text"}], "index": 3}, {"bbox": [50, 149, 296, 162], "spans": [{"bbox": [50, 149, 296, 162], "score": 1.0, "content": "TIM WALTER, Technical University of Munich, Germany", "type": "text"}], "index": 4}, {"bbox": [51, 162, 341, 176], "spans": [{"bbox": [51, 162, 341, 176], "score": 1.0, "content": "MATTHIAS NIESSNER, Technical University of Munich, Germany", "type": "text"}], "index": 5}], "index": 3}, {"type": "image", "bbox": [49, 184, 561, 337], "blocks": [{"type": "image_body", "bbox": [49, 184, 561, 337], "group_id": 0, "lines": [{"bbox": [49, 184, 561, 337], "spans": [{"bbox": [49, 184, 561, 337], "score": 0.999976634979248, "type": "image", "image_path": "ab2245f0990c99a4bec88df2f438eac35b5b04162fa6cb5df763240b2d3f3494.jpg"}]}], "index": 7, "virtual_lines": [{"bbox": [49, 184, 561, 235.0], "spans": [], "index": 6}, {"bbox": [49, 235.0, 561, 286.0], "spans": [], "index": 7}, {"bbox": [49, 286.0, 561, 337.0], "spans": [], "index": 8}]}], "index": 7}, {"type": "text", "bbox": [51, 356, 561, 384], "lines": [{"bbox": [49, 355, 562, 368], "spans": [{"bbox": [49, 355, 562, 368], "score": 1.0, "content": "Fig. 1. NeRSemble: Given multi-view video recordings from twelve cameras (left), our method is capable of synthesizing highly realistic novel views of", "type": "text"}], "index": 9}, {"bbox": [50, 366, 561, 377], "spans": [{"bbox": [50, 366, 561, 377], "score": 1.0, "content": "human heads in complex motion. Our renderings from unseen views (right) faithfully represent static scene parts and regions undergoing highly non-rigid", "type": "text"}], "index": 10}, {"bbox": [51, 377, 536, 386], "spans": [{"bbox": [51, 377, 536, 386], "score": 1.0, "content": "deformations. Along with our method, we publish our high-quality multi-view video capture data of 31.7 million frames from a total of 222 subjects.", "type": "text"}], "index": 11}], "index": 10}, {"type": "text", "bbox": [51, 391, 295, 592], "lines": [{"bbox": [50, 393, 295, 403], "spans": [{"bbox": [50, 393, 295, 403], "score": 1.0, "content": "We focus on reconstructing high-fidelity radiance fields of human heads,", "type": "text"}], "index": 12}, {"bbox": [51, 403, 294, 412], "spans": [{"bbox": [51, 403, 294, 412], "score": 1.0, "content": "capturing their animations over time, and synthesizing re-renderings from", "type": "text"}], "index": 13}, {"bbox": [51, 414, 294, 423], "spans": [{"bbox": [51, 414, 294, 423], "score": 1.0, "content": "novel viewpoints at arbitrary time steps. To this end, we propose a new", "type": "text"}], "index": 14}, {"bbox": [51, 424, 294, 432], "spans": [{"bbox": [51, 424, 294, 432], "score": 1.0, "content": "multi-view capture setup composed of 16 calibrated machine vision cameras", "type": "text"}], "index": 15}, {"bbox": [51, 432, 295, 443], "spans": [{"bbox": [51, 433, 186, 443], "score": 1.0, "content": "that record time-synchronized images at", "type": "text"}, {"bbox": [186, 432, 210, 441], "score": 0.39, "content": "7.1\\;\\mathrm{MP}", "type": "inline_equation", "height": 9, "width": 24}, {"bbox": [210, 433, 295, 443], "score": 1.0, "content": " resolution and 73 frames", "type": "text"}], "index": 16}, {"bbox": [50, 443, 295, 452], "spans": [{"bbox": [50, 443, 295, 452], "score": 1.0, "content": "per second. With our setup, we collect a new dataset of over 4700 high-", "type": "text"}], "index": 17}, {"bbox": [51, 453, 295, 463], "spans": [{"bbox": [51, 453, 295, 463], "score": 1.0, "content": "resolution, high-framerate sequences of more than 220 human heads, from", "type": "text"}], "index": 18}, {"bbox": [51, 463, 294, 471], "spans": [{"bbox": [51, 463, 294, 471], "score": 1.0, "content": "which we introduce a new human head reconstruction benchmark. The", "type": "text"}], "index": 19}, {"bbox": [51, 473, 294, 482], "spans": [{"bbox": [51, 473, 294, 482], "score": 1.0, "content": "recorded sequences cover a wide range of facial dynamics, including head", "type": "text"}], "index": 20}, {"bbox": [51, 483, 296, 493], "spans": [{"bbox": [51, 483, 296, 493], "score": 1.0, "content": "motions, natural expressions, emotions, and spoken language. In order to re-", "type": "text"}], "index": 21}, {"bbox": [51, 492, 295, 503], "spans": [{"bbox": [51, 492, 295, 503], "score": 1.0, "content": "construct high-fidelity human heads, we propose Dynamic Neural Radiance", "type": "text"}], "index": 22}, {"bbox": [50, 502, 294, 512], "spans": [{"bbox": [50, 502, 294, 512], "score": 1.0, "content": "Fields using Hash Ensembles (NeRSemble). We represent scene dynamics", "type": "text"}], "index": 23}, {"bbox": [50, 513, 294, 522], "spans": [{"bbox": [50, 513, 294, 522], "score": 1.0, "content": "by combining a deformation field and an ensemble of 3D multi-resolution", "type": "text"}], "index": 24}, {"bbox": [50, 522, 294, 532], "spans": [{"bbox": [50, 522, 294, 532], "score": 1.0, "content": "hash encodings. The deformation field allows for precise modeling of simple", "type": "text"}], "index": 25}, {"bbox": [50, 532, 295, 542], "spans": [{"bbox": [50, 532, 295, 542], "score": 1.0, "content": "scene movements, while the ensemble of hash encodings helps to represent", "type": "text"}], "index": 26}, {"bbox": [51, 543, 295, 552], "spans": [{"bbox": [51, 543, 295, 552], "score": 1.0, "content": "complex dynamics. As a result, we obtain radiance field representations of", "type": "text"}], "index": 27}, {"bbox": [50, 552, 295, 562], "spans": [{"bbox": [50, 552, 295, 562], "score": 1.0, "content": "human heads that capture motion over time and facilitate re-rendering of", "type": "text"}], "index": 28}, {"bbox": [51, 563, 294, 572], "spans": [{"bbox": [51, 563, 294, 572], "score": 1.0, "content": "arbitrary novel viewpoints. In a series of experiments, we explore the design", "type": "text"}], "index": 29}, {"bbox": [51, 572, 294, 582], "spans": [{"bbox": [51, 572, 294, 582], "score": 1.0, "content": "choices of our method and demonstrate that our approach outperforms", "type": "text"}], "index": 30}, {"bbox": [51, 582, 294, 592], "spans": [{"bbox": [51, 582, 294, 592], "score": 1.0, "content": "state-of-the-art dynamic radiance field approaches by a significant margin.", "type": "text"}], "index": 31}], "index": 21.5}, {"type": "text", "bbox": [317, 392, 561, 412], "lines": [{"bbox": [317, 392, 561, 404], "spans": [{"bbox": [317, 392, 367, 404], "score": 1.0, "content": "CCS Concepts:", "type": "text"}, {"bbox": [367, 394, 373, 400], "score": 0.35, "content": "\\cdot", "type": "inline_equation", "height": 6, "width": 6}, {"bbox": [374, 392, 471, 404], "score": 1.0, "content": "Computing methodologies", "type": "text"}, {"bbox": [471, 393, 483, 401], "score": 0.77, "content": "\\rightarrow", "type": "inline_equation", "height": 8, "width": 12}, {"bbox": [483, 392, 561, 404], "score": 1.0, "content": "Rendering; 3D imaging;", "type": "text"}], "index": 32}, {"bbox": [318, 402, 428, 412], "spans": [{"bbox": [318, 402, 428, 412], "score": 1.0, "content": "Volumetric models; Reconstruction.", "type": "text"}], "index": 33}], "index": 32.5}, {"type": "text", "bbox": [317, 418, 561, 449], "lines": [{"bbox": [317, 419, 560, 429], "spans": [{"bbox": [317, 419, 560, 429], "score": 1.0, "content": "Additional Key Words and Phrases: Neural Radiance Fields, Dynamic Scene", "type": "text"}], "index": 34}, {"bbox": [317, 430, 561, 440], "spans": [{"bbox": [317, 430, 561, 440], "score": 1.0, "content": "Representations, Novel View Synthesis, Multi-View Video Dataset, Human", "type": "text"}], "index": 35}, {"bbox": [316, 441, 339, 449], "spans": [{"bbox": [316, 441, 339, 449], "score": 1.0, "content": "Heads", "type": "text"}], "index": 36}], "index": 35}, {"type": "title", "bbox": [318, 460, 405, 471], "lines": [{"bbox": [319, 461, 405, 471], "spans": [{"bbox": [319, 463, 323, 469], "score": 1.0, "content": "1", "type": "text"}, {"bbox": [331, 461, 405, 471], "score": 1.0, "content": "INTRODUCTION", "type": "text"}], "index": 37}], "index": 37}, {"type": "text", "bbox": [317, 475, 562, 606], "lines": [{"bbox": [317, 476, 561, 486], "spans": [{"bbox": [317, 476, 561, 486], "score": 1.0, "content": "In recent years, we have seen tremendous growth in the impor-", "type": "text"}], "index": 38}, {"bbox": [317, 487, 561, 497], "spans": [{"bbox": [317, 487, 561, 497], "score": 1.0, "content": "tance of digital applications that rely on photo-realistic rendering of", "type": "text"}], "index": 39}, {"bbox": [316, 498, 561, 508], "spans": [{"bbox": [316, 498, 561, 508], "score": 1.0, "content": "images from captured scene representations, both in society and in-", "type": "text"}], "index": 40}, {"bbox": [317, 509, 561, 519], "spans": [{"bbox": [317, 509, 561, 519], "score": 1.0, "content": "dustry. In particular, the synthesis of novel views of dynamic human", "type": "text"}], "index": 41}, {"bbox": [316, 519, 560, 531], "spans": [{"bbox": [316, 519, 560, 531], "score": 1.0, "content": "faces and heads has become the center of attention in many graphics", "type": "text"}], "index": 42}, {"bbox": [317, 531, 561, 541], "spans": [{"bbox": [317, 531, 561, 541], "score": 1.0, "content": "applications ranging from computer games and movie productions", "type": "text"}], "index": 43}, {"bbox": [316, 542, 560, 552], "spans": [{"bbox": [316, 542, 560, 552], "score": 1.0, "content": "to settings in virtual or augmented reality. Here, the key task is the", "type": "text"}], "index": 44}, {"bbox": [316, 552, 561, 564], "spans": [{"bbox": [316, 552, 561, 564], "score": 1.0, "content": "following: given a recording of a human actor who is displaying", "type": "text"}], "index": 45}, {"bbox": [316, 563, 561, 574], "spans": [{"bbox": [316, 563, 561, 574], "score": 1.0, "content": "facial expressions or talking, reconstruct a temporally-consistent", "type": "text"}], "index": 46}, {"bbox": [317, 575, 560, 585], "spans": [{"bbox": [317, 575, 560, 585], "score": 1.0, "content": "3D representation. This representation should enable the synthesis", "type": "text"}], "index": 47}, {"bbox": [317, 586, 560, 596], "spans": [{"bbox": [317, 586, 560, 596], "score": 1.0, "content": "of photo-realistic re-renderings of the human face from arbitrary", "type": "text"}], "index": 48}, {"bbox": [316, 596, 415, 608], "spans": [{"bbox": [316, 596, 415, 608], "score": 1.0, "content": "viewpoints and time steps.", "type": "text"}], "index": 49}], "index": 43.5}, {"type": "text", "bbox": [317, 606, 561, 694], "lines": [{"bbox": [325, 607, 560, 617], "spans": [{"bbox": [325, 607, 560, 617], "score": 1.0, "content": "However, reconstructing a 3D representation capable of photo-", "type": "text"}], "index": 50}, {"bbox": [316, 618, 560, 629], "spans": [{"bbox": [316, 618, 560, 629], "score": 1.0, "content": "realistic novel viewpoint rendering is particularly challenging for", "type": "text"}], "index": 51}, {"bbox": [317, 629, 560, 639], "spans": [{"bbox": [317, 629, 560, 639], "score": 1.0, "content": "dynamic objects. Here, we not only have to reconstruct the static", "type": "text"}], "index": 52}, {"bbox": [316, 640, 560, 651], "spans": [{"bbox": [316, 640, 560, 651], "score": 1.0, "content": "appearance of a person, but we also have to simultaneously capture", "type": "text"}], "index": 53}, {"bbox": [316, 650, 561, 662], "spans": [{"bbox": [316, 650, 561, 662], "score": 1.0, "content": "the motion over time and encode it in a compact scene represen-", "type": "text"}], "index": 54}, {"bbox": [316, 662, 562, 673], "spans": [{"bbox": [316, 662, 562, 673], "score": 1.0, "content": "tation. The task becomes even more challenging in the context of", "type": "text"}], "index": 55}, {"bbox": [317, 673, 560, 682], "spans": [{"bbox": [317, 673, 560, 682], "score": 1.0, "content": "human faces, as fine-scale and high-fidelity detail are required for", "type": "text"}], "index": 56}, {"bbox": [317, 685, 560, 694], "spans": [{"bbox": [317, 685, 560, 694], "score": 1.0, "content": "downstream applications, where the tolerance for visual artifacts", "type": "text"}], "index": 57}], "index": 53.5}], "layout_bboxes": [], "page_idx": 0, "page_size": [612.0, 792.0], "_layout_tree": [], "images": [{"type": "image", "bbox": [49, 184, 561, 337], "blocks": [{"type": "image_body", "bbox": [49, 184, 561, 337], "group_id": 0, "lines": [{"bbox": [49, 184, 561, 337], "spans": [{"bbox": [49, 184, 561, 337], "score": 0.999976634979248, "type": "image", "image_path": "ab2245f0990c99a4bec88df2f438eac35b5b04162fa6cb5df763240b2d3f3494.jpg"}]}], "index": 7, "virtual_lines": [{"bbox": [49, 184, 561, 235.0], "spans": [], "index": 6}, {"bbox": [49, 235.0, 561, 286.0], "spans": [], "index": 7}, {"bbox": [49, 286.0, 561, 337.0], "spans": [], "index": 8}]}], "index": 7}], "tables": [], "interline_equations": [], "discarded_blocks": [{"type": "discarded", "bbox": [51, 644, 295, 693], "lines": [{"bbox": [51, 645, 295, 655], "spans": [{"bbox": [51, 645, 295, 655], "score": 1.0, "content": "Authors\u2019 addresses: Tobias Kirschstein, Technical University of Munich, Germany,", "type": "text"}]}, {"bbox": [50, 653, 295, 663], "spans": [{"bbox": [50, 653, 295, 663], "score": 1.0, "content": "[email protected]; Shenhan Qian, Technical University of Munich, Germany,", "type": "text"}]}, {"bbox": [50, 660, 297, 671], "spans": [{"bbox": [50, 660, 297, 671], "score": 1.0, "content": "[email protected]; Simon Giebenhain, Technical University of Munich, Germany,", "type": "text"}]}, {"bbox": [50, 668, 296, 679], "spans": [{"bbox": [50, 668, 296, 679], "score": 1.0, "content": "[email protected]; Tim Walter, Technical University of Munich, Germany,", "type": "text"}]}, {"bbox": [51, 677, 295, 686], "spans": [{"bbox": [51, 677, 295, 686], "score": 1.0, "content": "[email protected]; Matthias Nie\u00dfner, Technical University of Munich,", "type": "text"}]}, {"bbox": [51, 686, 133, 694], "spans": [{"bbox": [51, 686, 133, 694], "score": 1.0, "content": "Germany, [email protected].", "type": "text"}]}]}, {"type": "discarded", "bbox": [51, 605, 294, 630], "lines": [{"bbox": [50, 604, 295, 615], "spans": [{"bbox": [50, 604, 295, 615], "score": 1.0, "content": "1We will release all of our captured data, including all 4734 recordings and baseline", "type": "text"}]}, {"bbox": [50, 613, 286, 623], "spans": [{"bbox": [50, 613, 286, 623], "score": 1.0, "content": "codes, along with a new public benchmark to support further research in the area.", "type": "text"}]}, {"bbox": [51, 622, 239, 631], "spans": [{"bbox": [51, 622, 239, 631], "score": 1.0, "content": "Website: https://tobias-kirschstein.github.io/nersemble", "type": "text"}]}]}, {"type": "discarded", "bbox": [15, 207, 36, 557], "lines": [{"bbox": [14, 207, 39, 558], "spans": [{"bbox": [14, 207, 39, 558], "score": 1.0, "content": "arXiv:2305.03027v1 [cs.CV] 4 May 2023", "type": "text", "height": 351, "width": 25}]}]}], "need_drop": false, "drop_reason": [], "para_blocks": [{"type": "title", "bbox": [51, 75, 561, 94], "lines": [{"bbox": [52, 78, 559, 94], "spans": [{"bbox": [52, 78, 559, 94], "score": 1.0, "content": "NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads", "type": "text"}], "index": 0}], "index": 0, "page_num": "page_0", "page_size": [612.0, 792.0]}, {"type": "text", "bbox": [51, 105, 343, 174], "lines": [{"bbox": [50, 107, 343, 120], "spans": [{"bbox": [50, 107, 343, 120], "score": 1.0, "content": "TOBIAS KIRSCHSTEIN, Technical University of Munich, Germany", "type": "text"}], "index": 1}, {"bbox": [51, 120, 315, 135], "spans": [{"bbox": [51, 120, 315, 135], "score": 1.0, "content": "SHENHAN QIAN, Technical University of Munich, Germany", "type": "text"}], "index": 2}, {"bbox": [51, 134, 338, 148], "spans": [{"bbox": [51, 134, 338, 148], "score": 1.0, "content": "SIMON GIEBENHAIN, Technical University of Munich, Germany", "type": "text"}], "index": 3}, {"bbox": [50, 149, 296, 162], "spans": [{"bbox": [50, 149, 296, 162], "score": 1.0, "content": "TIM WALTER, Technical University of Munich, Germany", "type": "text"}], "index": 4}, {"bbox": [51, 162, 341, 176], "spans": [{"bbox": [51, 162, 341, 176], "score": 1.0, "content": "MATTHIAS NIESSNER, Technical University of Munich, Germany", "type": "text"}], "index": 5}], "index": 3, "page_num": "page_0", "page_size": [612.0, 792.0], "bbox_fs": [50, 107, 343, 176]}, {"type": "image", "bbox": [49, 184, 561, 337], "blocks": [{"type": "image_body", "bbox": [49, 184, 561, 337], "group_id": 0, "lines": [{"bbox": [49, 184, 561, 337], "spans": [{"bbox": [49, 184, 561, 337], "score": 0.999976634979248, "type": "image", "image_path": "ab2245f0990c99a4bec88df2f438eac35b5b04162fa6cb5df763240b2d3f3494.jpg"}]}], "index": 7, "virtual_lines": [{"bbox": [49, 184, 561, 235.0], "spans": [], "index": 6}, {"bbox": [49, 235.0, 561, 286.0], "spans": [], "index": 7}, {"bbox": [49, 286.0, 561, 337.0], "spans": [], "index": 8}]}], "index": 7, "page_num": "page_0", "page_size": [612.0, 792.0]}, {"type": "text", "bbox": [51, 356, 561, 384], "lines": [{"bbox": [49, 355, 562, 368], "spans": [{"bbox": [49, 355, 562, 368], "score": 1.0, "content": "Fig. 1. NeRSemble: Given multi-view video recordings from twelve cameras (left), our method is capable of synthesizing highly realistic novel views of", "type": "text"}], "index": 9}, {"bbox": [50, 366, 561, 377], "spans": [{"bbox": [50, 366, 561, 377], "score": 1.0, "content": "human heads in complex motion. Our renderings from unseen views (right) faithfully represent static scene parts and regions undergoing highly non-rigid", "type": "text"}], "index": 10}, {"bbox": [51, 377, 536, 386], "spans": [{"bbox": [51, 377, 536, 386], "score": 1.0, "content": "deformations. Along with our method, we publish our high-quality multi-view video capture data of 31.7 million frames from a total of 222 subjects.", "type": "text"}], "index": 11}], "index": 10, "page_num": "page_0", "page_size": [612.0, 792.0], "bbox_fs": [49, 355, 562, 386]}, {"type": "text", "bbox": [51, 391, 295, 592], "lines": [{"bbox": [50, 393, 295, 403], "spans": [{"bbox": [50, 393, 295, 403], "score": 1.0, "content": "We focus on reconstructing high-fidelity radiance fields of human heads,", "type": "text"}], "index": 12}, {"bbox": [51, 403, 294, 412], "spans": [{"bbox": [51, 403, 294, 412], "score": 1.0, "content": "capturing their animations over time, and synthesizing re-renderings from", "type": "text"}], "index": 13}, {"bbox": [51, 414, 294, 423], "spans": [{"bbox": [51, 414, 294, 423], "score": 1.0, "content": "novel viewpoints at arbitrary time steps. To this end, we propose a new", "type": "text"}], "index": 14}, {"bbox": [51, 424, 294, 432], "spans": [{"bbox": [51, 424, 294, 432], "score": 1.0, "content": "multi-view capture setup composed of 16 calibrated machine vision cameras", "type": "text"}], "index": 15}, {"bbox": [51, 432, 295, 443], "spans": [{"bbox": [51, 433, 186, 443], "score": 1.0, "content": "that record time-synchronized images at", "type": "text"}, {"bbox": [186, 432, 210, 441], "score": 0.39, "content": "7.1\\;\\mathrm{MP}", "type": "inline_equation", "height": 9, "width": 24}, {"bbox": [210, 433, 295, 443], "score": 1.0, "content": " resolution and 73 frames", "type": "text"}], "index": 16}, {"bbox": [50, 443, 295, 452], "spans": [{"bbox": [50, 443, 295, 452], "score": 1.0, "content": "per second. With our setup, we collect a new dataset of over 4700 high-", "type": "text"}], "index": 17}, {"bbox": [51, 453, 295, 463], "spans": [{"bbox": [51, 453, 295, 463], "score": 1.0, "content": "resolution, high-framerate sequences of more than 220 human heads, from", "type": "text"}], "index": 18}, {"bbox": [51, 463, 294, 471], "spans": [{"bbox": [51, 463, 294, 471], "score": 1.0, "content": "which we introduce a new human head reconstruction benchmark. The", "type": "text"}], "index": 19}, {"bbox": [51, 473, 294, 482], "spans": [{"bbox": [51, 473, 294, 482], "score": 1.0, "content": "recorded sequences cover a wide range of facial dynamics, including head", "type": "text"}], "index": 20}, {"bbox": [51, 483, 296, 493], "spans": [{"bbox": [51, 483, 296, 493], "score": 1.0, "content": "motions, natural expressions, emotions, and spoken language. In order to re-", "type": "text"}], "index": 21}, {"bbox": [51, 492, 295, 503], "spans": [{"bbox": [51, 492, 295, 503], "score": 1.0, "content": "construct high-fidelity human heads, we propose Dynamic Neural Radiance", "type": "text"}], "index": 22}, {"bbox": [50, 502, 294, 512], "spans": [{"bbox": [50, 502, 294, 512], "score": 1.0, "content": "Fields using Hash Ensembles (NeRSemble). We represent scene dynamics", "type": "text"}], "index": 23}, {"bbox": [50, 513, 294, 522], "spans": [{"bbox": [50, 513, 294, 522], "score": 1.0, "content": "by combining a deformation field and an ensemble of 3D multi-resolution", "type": "text"}], "index": 24}, {"bbox": [50, 522, 294, 532], "spans": [{"bbox": [50, 522, 294, 532], "score": 1.0, "content": "hash encodings. The deformation field allows for precise modeling of simple", "type": "text"}], "index": 25}, {"bbox": [50, 532, 295, 542], "spans": [{"bbox": [50, 532, 295, 542], "score": 1.0, "content": "scene movements, while the ensemble of hash encodings helps to represent", "type": "text"}], "index": 26}, {"bbox": [51, 543, 295, 552], "spans": [{"bbox": [51, 543, 295, 552], "score": 1.0, "content": "complex dynamics. As a result, we obtain radiance field representations of", "type": "text"}], "index": 27}, {"bbox": [50, 552, 295, 562], "spans": [{"bbox": [50, 552, 295, 562], "score": 1.0, "content": "human heads that capture motion over time and facilitate re-rendering of", "type": "text"}], "index": 28}, {"bbox": [51, 563, 294, 572], "spans": [{"bbox": [51, 563, 294, 572], "score": 1.0, "content": "arbitrary novel viewpoints. In a series of experiments, we explore the design", "type": "text"}], "index": 29}, {"bbox": [51, 572, 294, 582], "spans": [{"bbox": [51, 572, 294, 582], "score": 1.0, "content": "choices of our method and demonstrate that our approach outperforms", "type": "text"}], "index": 30}, {"bbox": [51, 582, 294, 592], "spans": [{"bbox": [51, 582, 294, 592], "score": 1.0, "content": "state-of-the-art dynamic radiance field approaches by a significant margin.", "type": "text"}], "index": 31}], "index": 21.5, "page_num": "page_0", "page_size": [612.0, 792.0], "bbox_fs": [50, 393, 296, 592]}, {"type": "list", "bbox": [317, 392, 561, 412], "lines": [{"bbox": [317, 392, 561, 404], "spans": [{"bbox": [317, 392, 367, 404], "score": 1.0, "content": "CCS Concepts:", "type": "text"}, {"bbox": [367, 394, 373, 400], "score": 0.35, "content": "\\cdot", "type": "inline_equation", "height": 6, "width": 6}, {"bbox": [374, 392, 471, 404], "score": 1.0, "content": "Computing methodologies", "type": "text"}, {"bbox": [471, 393, 483, 401], "score": 0.77, "content": "\\rightarrow", "type": "inline_equation", "height": 8, "width": 12}, {"bbox": [483, 392, 561, 404], "score": 1.0, "content": "Rendering; 3D imaging;", "type": "text"}], "index": 32, "is_list_end_line": true}, {"bbox": [318, 402, 428, 412], "spans": [{"bbox": [318, 402, 428, 412], "score": 1.0, "content": "Volumetric models; Reconstruction.", "type": "text"}], "index": 33, "is_list_start_line": true, "is_list_end_line": true}], "index": 32.5, "page_num": "page_0", "page_size": [612.0, 792.0], "bbox_fs": [317, 392, 561, 412]}, {"type": "text", "bbox": [317, 418, 561, 449], "lines": [{"bbox": [317, 419, 560, 429], "spans": [{"bbox": [317, 419, 560, 429], "score": 1.0, "content": "Additional Key Words and Phrases: Neural Radiance Fields, Dynamic Scene", "type": "text"}], "index": 34}, {"bbox": [317, 430, 561, 440], "spans": [{"bbox": [317, 430, 561, 440], "score": 1.0, "content": "Representations, Novel View Synthesis, Multi-View Video Dataset, Human", "type": "text"}], "index": 35}, {"bbox": [316, 441, 339, 449], "spans": [{"bbox": [316, 441, 339, 449], "score": 1.0, "content": "Heads", "type": "text"}], "index": 36}], "index": 35, "page_num": "page_0", "page_size": [612.0, 792.0], "bbox_fs": [316, 419, 561, 449]}, {"type": "title", "bbox": [318, 460, 405, 471], "lines": [{"bbox": [319, 461, 405, 471], "spans": [{"bbox": [319, 463, 323, 469], "score": 1.0, "content": "1", "type": "text"}, {"bbox": [331, 461, 405, 471], "score": 1.0, "content": "INTRODUCTION", "type": "text"}], "index": 37}], "index": 37, "page_num": "page_0", "page_size": [612.0, 792.0]}, {"type": "text", "bbox": [317, 475, 562, 606], "lines": [{"bbox": [317, 476, 561, 486], "spans": [{"bbox": [317, 476, 561, 486], "score": 1.0, "content": "In recent years, we have seen tremendous growth in the impor-", "type": "text"}], "index": 38}, {"bbox": [317, 487, 561, 497], "spans": [{"bbox": [317, 487, 561, 497], "score": 1.0, "content": "tance of digital applications that rely on photo-realistic rendering of", "type": "text"}], "index": 39}, {"bbox": [316, 498, 561, 508], "spans": [{"bbox": [316, 498, 561, 508], "score": 1.0, "content": "images from captured scene representations, both in society and in-", "type": "text"}], "index": 40}, {"bbox": [317, 509, 561, 519], "spans": [{"bbox": [317, 509, 561, 519], "score": 1.0, "content": "dustry. In particular, the synthesis of novel views of dynamic human", "type": "text"}], "index": 41}, {"bbox": [316, 519, 560, 531], "spans": [{"bbox": [316, 519, 560, 531], "score": 1.0, "content": "faces and heads has become the center of attention in many graphics", "type": "text"}], "index": 42}, {"bbox": [317, 531, 561, 541], "spans": [{"bbox": [317, 531, 561, 541], "score": 1.0, "content": "applications ranging from computer games and movie productions", "type": "text"}], "index": 43}, {"bbox": [316, 542, 560, 552], "spans": [{"bbox": [316, 542, 560, 552], "score": 1.0, "content": "to settings in virtual or augmented reality. Here, the key task is the", "type": "text"}], "index": 44}, {"bbox": [316, 552, 561, 564], "spans": [{"bbox": [316, 552, 561, 564], "score": 1.0, "content": "following: given a recording of a human actor who is displaying", "type": "text"}], "index": 45}, {"bbox": [316, 563, 561, 574], "spans": [{"bbox": [316, 563, 561, 574], "score": 1.0, "content": "facial expressions or talking, reconstruct a temporally-consistent", "type": "text"}], "index": 46}, {"bbox": [317, 575, 560, 585], "spans": [{"bbox": [317, 575, 560, 585], "score": 1.0, "content": "3D representation. This representation should enable the synthesis", "type": "text"}], "index": 47}, {"bbox": [317, 586, 560, 596], "spans": [{"bbox": [317, 586, 560, 596], "score": 1.0, "content": "of photo-realistic re-renderings of the human face from arbitrary", "type": "text"}], "index": 48}, {"bbox": [316, 596, 415, 608], "spans": [{"bbox": [316, 596, 415, 608], "score": 1.0, "content": "viewpoints and time steps.", "type": "text"}], "index": 49}], "index": 43.5, "page_num": "page_0", "page_size": [612.0, 792.0], "bbox_fs": [316, 476, 561, 608]}, {"type": "text", "bbox": [317, 606, 561, 694], "lines": [{"bbox": [325, 607, 560, 617], "spans": [{"bbox": [325, 607, 560, 617], "score": 1.0, "content": "However, reconstructing a 3D representation capable of photo-", "type": "text"}], "index": 50}, {"bbox": [316, 618, 560, 629], "spans": [{"bbox": [316, 618, 560, 629], "score": 1.0, "content": "realistic novel viewpoint rendering is particularly challenging for", "type": "text"}], "index": 51}, {"bbox": [317, 629, 560, 639], "spans": [{"bbox": [317, 629, 560, 639], "score": 1.0, "content": "dynamic objects. Here, we not only have to reconstruct the static", "type": "text"}], "index": 52}, {"bbox": [316, 640, 560, 651], "spans": [{"bbox": [316, 640, 560, 651], "score": 1.0, "content": "appearance of a person, but we also have to simultaneously capture", "type": "text"}], "index": 53}, {"bbox": [316, 650, 561, 662], "spans": [{"bbox": [316, 650, 561, 662], "score": 1.0, "content": "the motion over time and encode it in a compact scene represen-", "type": "text"}], "index": 54}, {"bbox": [316, 662, 562, 673], "spans": [{"bbox": [316, 662, 562, 673], "score": 1.0, "content": "tation. The task becomes even more challenging in the context of", "type": "text"}], "index": 55}, {"bbox": [317, 673, 560, 682], "spans": [{"bbox": [317, 673, 560, 682], "score": 1.0, "content": "human faces, as fine-scale and high-fidelity detail are required for", "type": "text"}], "index": 56}, {"bbox": [317, 685, 560, 694], "spans": [{"bbox": [317, 685, 560, 694], "score": 1.0, "content": "downstream applications, where the tolerance for visual artifacts", "type": "text"}], "index": 57}, {"bbox": [50, 81, 295, 91], "spans": [{"bbox": [50, 81, 295, 91], "score": 1.0, "content": "is typically very low. In particular, human heads exhibit several", "type": "text", "cross_page": true}], "index": 0}, {"bbox": [51, 92, 294, 102], "spans": [{"bbox": [51, 92, 294, 102], "score": 1.0, "content": "properties that make novel view synthesis (NVS) extremely chal-", "type": "text", "cross_page": true}], "index": 1}, {"bbox": [50, 103, 295, 114], "spans": [{"bbox": [50, 103, 295, 114], "score": 1.0, "content": "lenging, such as the complexity of hair, differences in reflectance", "type": "text", "cross_page": true}], "index": 2}, {"bbox": [51, 114, 294, 125], "spans": [{"bbox": [51, 114, 294, 125], "score": 1.0, "content": "properties, and the elasticity of human skin that creates heavily", "type": "text", "cross_page": true}], "index": 3}, {"bbox": [51, 126, 226, 135], "spans": [{"bbox": [51, 126, 226, 135], "score": 1.0, "content": "non-rigid deformations and fine-scale wrinkles.", "type": "text", "cross_page": true}], "index": 4}], "index": 53.5, "page_num": "page_0", "page_size": [612.0, 792.0], "bbox_fs": [316, 607, 562, 694]}]}
2305.03027
1
is typically very low. In particular, human heads exhibit several properties that make novel view synthesis (NVS) extremely chal- lenging, such as the complexity of hair, differences in reflectance properties, and the elasticity of human skin that creates heavily non-rigid deformations and fine-scale wrinkles. In the context of static scenes, we have seen neural radiance field representations (NeRFs) [Mildenhall et al. 2020] obtain compelling NVS results. The core idea of this seminal work is to leverage a vol- umetric rendering formulation as a reconstruction loss and encode the resulting radiance field in a neural field-based representation. Recently, there has been significant research interest in extending NeRFs to represent dynamic scenes. While some approaches rely on deformation fields to model dynamically changing scene content [Park et al. 2021a,b], others propose to replace the deformation field in favor of a time-conditioned latent code [Li et al. 2022b]. These methods have shown convincing results on short sequences with limited motion; however, faithful reconstructions of human heads with complex motion remain challenging. In this work, we focus on addressing these challenges in the con- text of a newly-designed multi-view capture setup and propose NeRSemble, a novel method that combines the strengths of de- formation fields and flexible latent conditioning to represent the appearance of dynamic human heads. The core idea of our approach is to store latent features in an ensemble of multi-resolution hash grids, similar to Instant NGP [Müller et al. 2022], which are blended to describe a given time step. Importantly, we utilize a deformation field before querying features from the hash grids. As a result, the deformation field represents all coarse dynamics of the scene and aligns the coordinate systems of the hash grids, which are then responsible for modeling fine details and complex movements. In order to train and evaluate our method, we design a new multi-view capture setup to record $$7.1\ \mathrm{MP}$$ videos at 73 fps with 16 machine vision cameras. With this setup, we capture a new dataset of 4734 sequences of 222 human heads with a total of 31.7 million individual frames. We evaluate our method on this newly-introduced dataset and demonstrate that we significantly outperform existing dynamic NeRF reconstruction approaches. Our dataset exceeds all compara- ble datasets w.r.t. resolution and number of frames per second by a large margin, and will be made publicly available. Furthermore, we will host a public benchmark on dynamic NVS of human heads, which will help to advance the field and increase comparability across methods. # To summarize, our contributions are as follows: • A dynamic head reconstruction method based on a NeRF representation that combines a deformation field and an en- semble of multi-resolution hash encodings. This facilitates high-fidelity NVS from a sparse camera array and enables detailed representation of scenes with complex motion. • A high-framerate and high-resolution multi-view video dataset of diverse human heads with over 4700 sequences of more than 220 subjects. The dataset will be publicly released and include a new benchmark for dynamic NVS of human heads. # 2 RELATED WORK Modeling and rendering human faces is a central topic in graphics and plays a crucial role in many applications, such as computer games, social media, telecommunication, and virtual reality. # 2.1 3D Morphable Models 3D morphable models (3DMMs) have been a staple approach over the last two decades. The use of a unified mesh topology enables rep- resenting identity and expression using simple statistical tools [Blanz and Vetter 1999; Li et al. 2017]. With the additional use of texture, one can already produce compelling renderings [Blanz and Vetter 1999; Paysan et al. 2009], but mesh-based 3DMMs are inherently limited w.r.t. modeling hair or fine identity-specific details. More recently, the use of neural fields [Xie et al. 2022] has alleviated the constraint of working on topologically uniform meshes. These mod- els are capable of modeling complete human heads, including hair [Yenamandra et al. 2021] and fine details [Giebenhain et al. 2022]. In another line of work, Zheng et al. [2022] combine ideas from neural fields and classical 3DMMs to fit monocular videos. # 2.2 Neural Radiance Fields Our work strives to achieve highly-realistic renderings of videos, including detailed hairstyles and complex deformations. Therefore, we deviate from common assumptions made in 3DMMs and focus on fitting a single multi-view video sequence to the highest de- gree of detail possible. Neural Radiance Fields (NeRFs) [Mildenhall et al. 2020] have recently become state-of-the-art in NVS. While the first NeRFs were usually trained for hours or days on a single scene, recent research advances have reduced the training time to several minutes. For example, this can be achieved by grid-based optimization [Fridovich-Keil and Yu et al. 2022; Karnewar et al. 2022; Sun et al. 2022], tensor decomposition [Chen et al. 2022], or Instant NGP’s [Müller et al. 2022] multi-resolution voxel hashing. # 2.3 Dynamic NeRF Extending NeRFs to time-varying, non-rigid content is another cen- tral research topic that has seen fast progress. Pumarola et al. [2020] and Park et al. [2021a; 2021b] model a single NeRF in canonical space and explicitly model backward deformations from observed frames to explain the non-rigid content of the scene. OLD: On the
<p>is typically very low. In particular, human heads exhibit several properties that make novel view synthesis (NVS) extremely chal- lenging, such as the complexity of hair, differences in reflectance properties, and the elasticity of human skin that creates heavily non-rigid deformations and fine-scale wrinkles.</p> <p>In the context of static scenes, we have seen neural radiance field representations (NeRFs) [Mildenhall et al. 2020] obtain compelling NVS results. The core idea of this seminal work is to leverage a vol- umetric rendering formulation as a reconstruction loss and encode the resulting radiance field in a neural field-based representation. Recently, there has been significant research interest in extending NeRFs to represent dynamic scenes. While some approaches rely on deformation fields to model dynamically changing scene content [Park et al. 2021a,b], others propose to replace the deformation field in favor of a time-conditioned latent code [Li et al. 2022b]. These methods have shown convincing results on short sequences with limited motion; however, faithful reconstructions of human heads with complex motion remain challenging.</p> <p>In this work, we focus on addressing these challenges in the con- text of a newly-designed multi-view capture setup and propose NeRSemble, a novel method that combines the strengths of de- formation fields and flexible latent conditioning to represent the appearance of dynamic human heads. The core idea of our approach is to store latent features in an ensemble of multi-resolution hash grids, similar to Instant NGP [Müller et al. 2022], which are blended to describe a given time step. Importantly, we utilize a deformation field before querying features from the hash grids. As a result, the deformation field represents all coarse dynamics of the scene and aligns the coordinate systems of the hash grids, which are then responsible for modeling fine details and complex movements. In order to train and evaluate our method, we design a new multi-view capture setup to record $$7.1\ \mathrm{MP}$$ videos at 73 fps with 16 machine vision cameras. With this setup, we capture a new dataset of 4734 sequences of 222 human heads with a total of 31.7 million individual frames. We evaluate our method on this newly-introduced dataset and demonstrate that we significantly outperform existing dynamic NeRF reconstruction approaches. Our dataset exceeds all compara- ble datasets w.r.t. resolution and number of frames per second by a large margin, and will be made publicly available. Furthermore, we will host a public benchmark on dynamic NVS of human heads, which will help to advance the field and increase comparability across methods.</p> <h1>To summarize, our contributions are as follows:</h1> <p>• A dynamic head reconstruction method based on a NeRF representation that combines a deformation field and an en- semble of multi-resolution hash encodings. This facilitates high-fidelity NVS from a sparse camera array and enables detailed representation of scenes with complex motion. • A high-framerate and high-resolution multi-view video dataset of diverse human heads with over 4700 sequences of more than 220 subjects. The dataset will be publicly released and include a new benchmark for dynamic NVS of human heads.</p> <h1>2 RELATED WORK</h1> <p>Modeling and rendering human faces is a central topic in graphics and plays a crucial role in many applications, such as computer games, social media, telecommunication, and virtual reality.</p> <h1>2.1 3D Morphable Models</h1> <p>3D morphable models (3DMMs) have been a staple approach over the last two decades. The use of a unified mesh topology enables rep- resenting identity and expression using simple statistical tools [Blanz and Vetter 1999; Li et al. 2017]. With the additional use of texture, one can already produce compelling renderings [Blanz and Vetter 1999; Paysan et al. 2009], but mesh-based 3DMMs are inherently limited w.r.t. modeling hair or fine identity-specific details. More recently, the use of neural fields [Xie et al. 2022] has alleviated the constraint of working on topologically uniform meshes. These mod- els are capable of modeling complete human heads, including hair [Yenamandra et al. 2021] and fine details [Giebenhain et al. 2022]. In another line of work, Zheng et al. [2022] combine ideas from neural fields and classical 3DMMs to fit monocular videos.</p> <h1>2.2 Neural Radiance Fields</h1> <p>Our work strives to achieve highly-realistic renderings of videos, including detailed hairstyles and complex deformations. Therefore, we deviate from common assumptions made in 3DMMs and focus on fitting a single multi-view video sequence to the highest de- gree of detail possible. Neural Radiance Fields (NeRFs) [Mildenhall et al. 2020] have recently become state-of-the-art in NVS. While the first NeRFs were usually trained for hours or days on a single scene, recent research advances have reduced the training time to several minutes. For example, this can be achieved by grid-based optimization [Fridovich-Keil and Yu et al. 2022; Karnewar et al. 2022; Sun et al. 2022], tensor decomposition [Chen et al. 2022], or Instant NGP’s [Müller et al. 2022] multi-resolution voxel hashing.</p> <h1>2.3 Dynamic NeRF</h1> <p>Extending NeRFs to time-varying, non-rigid content is another cen- tral research topic that has seen fast progress. Pumarola et al. [2020] and Park et al. [2021a; 2021b] model a single NeRF in canonical space and explicitly model backward deformations from observed frames to explain the non-rigid content of the scene. OLD: On the</p>
[{"type": "text", "coordinates": [52, 80, 294, 134], "content": "is typically very low. In particular, human heads exhibit several\nproperties that make novel view synthesis (NVS) extremely chal-\nlenging, such as the complexity of hair, differences in reflectance\nproperties, and the elasticity of human skin that creates heavily\nnon-rigid deformations and fine-scale wrinkles.", "block_type": "text", "index": 1}, {"type": "text", "coordinates": [52, 136, 294, 277], "content": "In the context of static scenes, we have seen neural radiance field\nrepresentations (NeRFs) [Mildenhall et al. 2020] obtain compelling\nNVS results. The core idea of this seminal work is to leverage a vol-\numetric rendering formulation as a reconstruction loss and encode\nthe resulting radiance field in a neural field-based representation.\nRecently, there has been significant research interest in extending\nNeRFs to represent dynamic scenes. While some approaches rely\non deformation fields to model dynamically changing scene content\n[Park et al. 2021a,b], others propose to replace the deformation field\nin favor of a time-conditioned latent code [Li et al. 2022b]. These\nmethods have shown convincing results on short sequences with\nlimited motion; however, faithful reconstructions of human heads\nwith complex motion remain challenging.", "block_type": "text", "index": 2}, {"type": "text", "coordinates": [52, 277, 294, 540], "content": "In this work, we focus on addressing these challenges in the con-\ntext of a newly-designed multi-view capture setup and propose\nNeRSemble, a novel method that combines the strengths of de-\nformation fields and flexible latent conditioning to represent the\nappearance of dynamic human heads. The core idea of our approach\nis to store latent features in an ensemble of multi-resolution hash\ngrids, similar to Instant NGP [M\u00fcller et al. 2022], which are blended\nto describe a given time step. Importantly, we utilize a deformation\nfield before querying features from the hash grids. As a result, the\ndeformation field represents all coarse dynamics of the scene and\naligns the coordinate systems of the hash grids, which are then\nresponsible for modeling fine details and complex movements. In\norder to train and evaluate our method, we design a new multi-view\ncapture setup to record $$7.1\\ \\mathrm{MP}$$ videos at 73 fps with 16 machine\nvision cameras. With this setup, we capture a new dataset of 4734\nsequences of 222 human heads with a total of 31.7 million individual\nframes. We evaluate our method on this newly-introduced dataset\nand demonstrate that we significantly outperform existing dynamic\nNeRF reconstruction approaches. Our dataset exceeds all compara-\nble datasets w.r.t. resolution and number of frames per second by\na large margin, and will be made publicly available. Furthermore,\nwe will host a public benchmark on dynamic NVS of human heads,\nwhich will help to advance the field and increase comparability\nacross methods.", "block_type": "text", "index": 3}, {"type": "title", "coordinates": [60, 552, 234, 561], "content": "To summarize, our contributions are as follows:", "block_type": "title", "index": 4}, {"type": "text", "coordinates": [67, 573, 295, 682], "content": "\u2022 A dynamic head reconstruction method based on a NeRF\nrepresentation that combines a deformation field and an en-\nsemble of multi-resolution hash encodings. This facilitates\nhigh-fidelity NVS from a sparse camera array and enables\ndetailed representation of scenes with complex motion.\n\u2022 A high-framerate and high-resolution multi-view video\ndataset of diverse human heads with over 4700 sequences of\nmore than 220 subjects. The dataset will be publicly released\nand include a new benchmark for dynamic NVS of human\nheads.", "block_type": "text", "index": 5}, {"type": "table", "coordinates": [316, 108, 561, 225], "content": "", "block_type": "table", "index": 6}, {"type": "title", "coordinates": [317, 239, 404, 251], "content": "2 RELATED WORK", "block_type": "title", "index": 7}, {"type": "text", "coordinates": [317, 254, 560, 286], "content": "Modeling and rendering human faces is a central topic in graphics\nand plays a crucial role in many applications, such as computer\ngames, social media, telecommunication, and virtual reality.", "block_type": "text", "index": 8}, {"type": "title", "coordinates": [318, 300, 431, 310], "content": "2.1 3D Morphable Models", "block_type": "title", "index": 9}, {"type": "text", "coordinates": [317, 313, 561, 455], "content": "3D morphable models (3DMMs) have been a staple approach over\nthe last two decades. The use of a unified mesh topology enables rep-\nresenting identity and expression using simple statistical tools [Blanz\nand Vetter 1999; Li et al. 2017]. With the additional use of texture,\none can already produce compelling renderings [Blanz and Vetter\n1999; Paysan et al. 2009], but mesh-based 3DMMs are inherently\nlimited w.r.t. modeling hair or fine identity-specific details. More\nrecently, the use of neural fields [Xie et al. 2022] has alleviated the\nconstraint of working on topologically uniform meshes. These mod-\nels are capable of modeling complete human heads, including hair\n[Yenamandra et al. 2021] and fine details [Giebenhain et al. 2022]. In\nanother line of work, Zheng et al. [2022] combine ideas from neural\nfields and classical 3DMMs to fit monocular videos.", "block_type": "text", "index": 10}, {"type": "title", "coordinates": [317, 468, 433, 478], "content": "2.2 Neural Radiance Fields", "block_type": "title", "index": 11}, {"type": "text", "coordinates": [317, 481, 561, 613], "content": "Our work strives to achieve highly-realistic renderings of videos,\nincluding detailed hairstyles and complex deformations. Therefore,\nwe deviate from common assumptions made in 3DMMs and focus\non fitting a single multi-view video sequence to the highest de-\ngree of detail possible. Neural Radiance Fields (NeRFs) [Mildenhall\net al. 2020] have recently become state-of-the-art in NVS. While\nthe first NeRFs were usually trained for hours or days on a single\nscene, recent research advances have reduced the training time to\nseveral minutes. For example, this can be achieved by grid-based\noptimization [Fridovich-Keil and Yu et al. 2022; Karnewar et al.\n2022; Sun et al. 2022], tensor decomposition [Chen et al. 2022], or\nInstant NGP\u2019s [M\u00fcller et al. 2022] multi-resolution voxel hashing.", "block_type": "text", "index": 12}, {"type": "title", "coordinates": [318, 625, 401, 636], "content": "2.3 Dynamic NeRF", "block_type": "title", "index": 13}, {"type": "text", "coordinates": [317, 640, 560, 693], "content": "Extending NeRFs to time-varying, non-rigid content is another cen-\ntral research topic that has seen fast progress. Pumarola et al. [2020]\nand Park et al. [2021a; 2021b] model a single NeRF in canonical\nspace and explicitly model backward deformations from observed\nframes to explain the non-rigid content of the scene. OLD: On the", "block_type": "text", "index": 14}]
[{"type": "text", "coordinates": [50, 81, 295, 91], "content": "is typically very low. In particular, human heads exhibit several", "score": 1.0, "index": 1}, {"type": "text", "coordinates": [51, 92, 294, 102], "content": "properties that make novel view synthesis (NVS) extremely chal-", "score": 1.0, "index": 2}, {"type": "text", "coordinates": [50, 103, 295, 114], "content": "lenging, such as the complexity of hair, differences in reflectance", "score": 1.0, "index": 3}, {"type": "text", "coordinates": [51, 114, 294, 125], "content": "properties, and the elasticity of human skin that creates heavily", "score": 1.0, "index": 4}, {"type": "text", "coordinates": [51, 126, 226, 135], "content": "non-rigid deformations and fine-scale wrinkles.", "score": 1.0, "index": 5}, {"type": "text", "coordinates": [60, 137, 294, 146], "content": "In the context of static scenes, we have seen neural radiance field", "score": 1.0, "index": 6}, {"type": "text", "coordinates": [50, 147, 294, 157], "content": "representations (NeRFs) [Mildenhall et al. 2020] obtain compelling", "score": 1.0, "index": 7}, {"type": "text", "coordinates": [50, 158, 295, 168], "content": "NVS results. The core idea of this seminal work is to leverage a vol-", "score": 1.0, "index": 8}, {"type": "text", "coordinates": [51, 169, 295, 180], "content": "umetric rendering formulation as a reconstruction loss and encode", "score": 1.0, "index": 9}, {"type": "text", "coordinates": [51, 180, 295, 190], "content": "the resulting radiance field in a neural field-based representation.", "score": 1.0, "index": 10}, {"type": "text", "coordinates": [51, 191, 294, 201], "content": "Recently, there has been significant research interest in extending", "score": 1.0, "index": 11}, {"type": "text", "coordinates": [50, 201, 295, 213], "content": "NeRFs to represent dynamic scenes. While some approaches rely", "score": 1.0, "index": 12}, {"type": "text", "coordinates": [50, 212, 295, 224], "content": "on deformation fields to model dynamically changing scene content", "score": 1.0, "index": 13}, {"type": "text", "coordinates": [50, 223, 295, 235], "content": "[Park et al. 2021a,b], others propose to replace the deformation field", "score": 1.0, "index": 14}, {"type": "text", "coordinates": [50, 234, 294, 245], "content": "in favor of a time-conditioned latent code [Li et al. 2022b]. These", "score": 1.0, "index": 15}, {"type": "text", "coordinates": [50, 245, 294, 256], "content": "methods have shown convincing results on short sequences with", "score": 1.0, "index": 16}, {"type": "text", "coordinates": [50, 256, 295, 267], "content": "limited motion; however, faithful reconstructions of human heads", "score": 1.0, "index": 17}, {"type": "text", "coordinates": [50, 267, 205, 279], "content": "with complex motion remain challenging.", "score": 1.0, "index": 18}, {"type": "text", "coordinates": [59, 278, 295, 289], "content": "In this work, we focus on addressing these challenges in the con-", "score": 1.0, "index": 19}, {"type": "text", "coordinates": [51, 290, 294, 300], "content": "text of a newly-designed multi-view capture setup and propose", "score": 1.0, "index": 20}, {"type": "text", "coordinates": [51, 300, 294, 311], "content": "NeRSemble, a novel method that combines the strengths of de-", "score": 1.0, "index": 21}, {"type": "text", "coordinates": [50, 311, 294, 322], "content": "formation fields and flexible latent conditioning to represent the", "score": 1.0, "index": 22}, {"type": "text", "coordinates": [50, 322, 295, 333], "content": "appearance of dynamic human heads. The core idea of our approach", "score": 1.0, "index": 23}, {"type": "text", "coordinates": [50, 333, 295, 343], "content": "is to store latent features in an ensemble of multi-resolution hash", "score": 1.0, "index": 24}, {"type": "text", "coordinates": [50, 344, 295, 354], "content": "grids, similar to Instant NGP [M\u00fcller et al. 2022], which are blended", "score": 1.0, "index": 25}, {"type": "text", "coordinates": [50, 354, 294, 366], "content": "to describe a given time step. Importantly, we utilize a deformation", "score": 1.0, "index": 26}, {"type": "text", "coordinates": [50, 365, 295, 377], "content": "field before querying features from the hash grids. As a result, the", "score": 1.0, "index": 27}, {"type": "text", "coordinates": [51, 376, 295, 387], "content": "deformation field represents all coarse dynamics of the scene and", "score": 1.0, "index": 28}, {"type": "text", "coordinates": [52, 388, 294, 398], "content": "aligns the coordinate systems of the hash grids, which are then", "score": 1.0, "index": 29}, {"type": "text", "coordinates": [51, 399, 295, 410], "content": "responsible for modeling fine details and complex movements. In", "score": 1.0, "index": 30}, {"type": "text", "coordinates": [51, 410, 294, 421], "content": "order to train and evaluate our method, we design a new multi-view", "score": 1.0, "index": 31}, {"type": "text", "coordinates": [51, 421, 140, 431], "content": "capture setup to record", "score": 1.0, "index": 32}, {"type": "inline_equation", "coordinates": [141, 420, 168, 430], "content": "7.1\\ \\mathrm{MP}", "score": 0.34, "index": 33}, {"type": "text", "coordinates": [168, 421, 294, 431], "content": " videos at 73 fps with 16 machine", "score": 1.0, "index": 34}, {"type": "text", "coordinates": [51, 432, 295, 442], "content": "vision cameras. With this setup, we capture a new dataset of 4734", "score": 1.0, "index": 35}, {"type": "text", "coordinates": [51, 443, 295, 453], "content": "sequences of 222 human heads with a total of 31.7 million individual", "score": 1.0, "index": 36}, {"type": "text", "coordinates": [51, 454, 294, 464], "content": "frames. We evaluate our method on this newly-introduced dataset", "score": 1.0, "index": 37}, {"type": "text", "coordinates": [51, 465, 294, 476], "content": "and demonstrate that we significantly outperform existing dynamic", "score": 1.0, "index": 38}, {"type": "text", "coordinates": [50, 475, 295, 487], "content": "NeRF reconstruction approaches. Our dataset exceeds all compara-", "score": 1.0, "index": 39}, {"type": "text", "coordinates": [50, 486, 294, 497], "content": "ble datasets w.r.t. resolution and number of frames per second by", "score": 1.0, "index": 40}, {"type": "text", "coordinates": [51, 498, 295, 508], "content": "a large margin, and will be made publicly available. Furthermore,", "score": 1.0, "index": 41}, {"type": "text", "coordinates": [51, 509, 295, 519], "content": "we will host a public benchmark on dynamic NVS of human heads,", "score": 1.0, "index": 42}, {"type": "text", "coordinates": [51, 519, 294, 530], "content": "which will help to advance the field and increase comparability", "score": 1.0, "index": 43}, {"type": "text", "coordinates": [50, 531, 110, 540], "content": "across methods.", "score": 1.0, "index": 44}, {"type": "text", "coordinates": [60, 552, 235, 562], "content": "To summarize, our contributions are as follows:", "score": 1.0, "index": 45}, {"type": "text", "coordinates": [67, 574, 294, 584], "content": "\u2022 A dynamic head reconstruction method based on a NeRF", "score": 1.0, "index": 46}, {"type": "text", "coordinates": [74, 585, 296, 596], "content": "representation that combines a deformation field and an en-", "score": 1.0, "index": 47}, {"type": "text", "coordinates": [74, 596, 294, 606], "content": "semble of multi-resolution hash encodings. This facilitates", "score": 1.0, "index": 48}, {"type": "text", "coordinates": [74, 607, 295, 619], "content": "high-fidelity NVS from a sparse camera array and enables", "score": 1.0, "index": 49}, {"type": "text", "coordinates": [74, 618, 278, 629], "content": "detailed representation of scenes with complex motion.", "score": 1.0, "index": 50}, {"type": "text", "coordinates": [66, 630, 278, 639], "content": "\u2022 A high-framerate and high-resolution multi-view video", "score": 1.0, "index": 51}, {"type": "text", "coordinates": [74, 640, 295, 650], "content": "dataset of diverse human heads with over 4700 sequences of", "score": 1.0, "index": 52}, {"type": "text", "coordinates": [74, 651, 294, 661], "content": "more than 220 subjects. The dataset will be publicly released", "score": 1.0, "index": 53}, {"type": "text", "coordinates": [74, 662, 294, 672], "content": "and include a new benchmark for dynamic NVS of human", "score": 1.0, "index": 54}, {"type": "text", "coordinates": [74, 673, 98, 682], "content": "heads.", "score": 1.0, "index": 55}, {"type": "text", "coordinates": [317, 242, 323, 249], "content": "2", "score": 1.0, "index": 56}, {"type": "text", "coordinates": [331, 240, 404, 250], "content": "RELATED WORK", "score": 1.0, "index": 57}, {"type": "text", "coordinates": [317, 255, 560, 266], "content": "Modeling and rendering human faces is a central topic in graphics", "score": 1.0, "index": 58}, {"type": "text", "coordinates": [317, 266, 560, 277], "content": "and plays a crucial role in many applications, such as computer", "score": 1.0, "index": 59}, {"type": "text", "coordinates": [317, 277, 536, 288], "content": "games, social media, telecommunication, and virtual reality.", "score": 1.0, "index": 60}, {"type": "text", "coordinates": [316, 298, 431, 311], "content": "2.1 3D Morphable Models", "score": 1.0, "index": 61}, {"type": "text", "coordinates": [317, 314, 560, 325], "content": "3D morphable models (3DMMs) have been a staple approach over", "score": 1.0, "index": 62}, {"type": "text", "coordinates": [317, 324, 561, 336], "content": "the last two decades. The use of a unified mesh topology enables rep-", "score": 1.0, "index": 63}, {"type": "text", "coordinates": [317, 336, 561, 346], "content": "resenting identity and expression using simple statistical tools [Blanz", "score": 1.0, "index": 64}, {"type": "text", "coordinates": [317, 346, 562, 357], "content": "and Vetter 1999; Li et al. 2017]. With the additional use of texture,", "score": 1.0, "index": 65}, {"type": "text", "coordinates": [317, 358, 560, 369], "content": "one can already produce compelling renderings [Blanz and Vetter", "score": 1.0, "index": 66}, {"type": "text", "coordinates": [317, 369, 560, 379], "content": "1999; Paysan et al. 2009], but mesh-based 3DMMs are inherently", "score": 1.0, "index": 67}, {"type": "text", "coordinates": [316, 380, 560, 390], "content": "limited w.r.t. modeling hair or fine identity-specific details. More", "score": 1.0, "index": 68}, {"type": "text", "coordinates": [317, 391, 560, 400], "content": "recently, the use of neural fields [Xie et al. 2022] has alleviated the", "score": 1.0, "index": 69}, {"type": "text", "coordinates": [316, 402, 561, 412], "content": "constraint of working on topologically uniform meshes. These mod-", "score": 1.0, "index": 70}, {"type": "text", "coordinates": [317, 413, 560, 423], "content": "els are capable of modeling complete human heads, including hair", "score": 1.0, "index": 71}, {"type": "text", "coordinates": [317, 424, 560, 433], "content": "[Yenamandra et al. 2021] and fine details [Giebenhain et al. 2022]. In", "score": 1.0, "index": 72}, {"type": "text", "coordinates": [317, 434, 561, 445], "content": "another line of work, Zheng et al. [2022] combine ideas from neural", "score": 1.0, "index": 73}, {"type": "text", "coordinates": [317, 445, 506, 455], "content": "fields and classical 3DMMs to fit monocular videos.", "score": 1.0, "index": 74}, {"type": "text", "coordinates": [317, 468, 434, 478], "content": "2.2 Neural Radiance Fields", "score": 1.0, "index": 75}, {"type": "text", "coordinates": [317, 482, 561, 493], "content": "Our work strives to achieve highly-realistic renderings of videos,", "score": 1.0, "index": 76}, {"type": "text", "coordinates": [317, 493, 562, 504], "content": "including detailed hairstyles and complex deformations. Therefore,", "score": 1.0, "index": 77}, {"type": "text", "coordinates": [316, 504, 561, 515], "content": "we deviate from common assumptions made in 3DMMs and focus", "score": 1.0, "index": 78}, {"type": "text", "coordinates": [317, 516, 560, 525], "content": "on fitting a single multi-view video sequence to the highest de-", "score": 1.0, "index": 79}, {"type": "text", "coordinates": [315, 526, 561, 537], "content": "gree of detail possible. Neural Radiance Fields (NeRFs) [Mildenhall", "score": 1.0, "index": 80}, {"type": "text", "coordinates": [317, 537, 560, 547], "content": "et al. 2020] have recently become state-of-the-art in NVS. While", "score": 1.0, "index": 81}, {"type": "text", "coordinates": [316, 547, 561, 559], "content": "the first NeRFs were usually trained for hours or days on a single", "score": 1.0, "index": 82}, {"type": "text", "coordinates": [316, 559, 561, 569], "content": "scene, recent research advances have reduced the training time to", "score": 1.0, "index": 83}, {"type": "text", "coordinates": [316, 570, 561, 581], "content": "several minutes. For example, this can be achieved by grid-based", "score": 1.0, "index": 84}, {"type": "text", "coordinates": [318, 581, 562, 591], "content": "optimization [Fridovich-Keil and Yu et al. 2022; Karnewar et al.", "score": 1.0, "index": 85}, {"type": "text", "coordinates": [317, 592, 560, 603], "content": "2022; Sun et al. 2022], tensor decomposition [Chen et al. 2022], or", "score": 1.0, "index": 86}, {"type": "text", "coordinates": [317, 603, 557, 614], "content": "Instant NGP\u2019s [M\u00fcller et al. 2022] multi-resolution voxel hashing.", "score": 1.0, "index": 87}, {"type": "text", "coordinates": [316, 625, 402, 637], "content": "2.3 Dynamic NeRF", "score": 1.0, "index": 88}, {"type": "text", "coordinates": [316, 640, 561, 651], "content": "Extending NeRFs to time-varying, non-rigid content is another cen-", "score": 1.0, "index": 89}, {"type": "text", "coordinates": [317, 651, 560, 661], "content": "tral research topic that has seen fast progress. Pumarola et al. [2020]", "score": 1.0, "index": 90}, {"type": "text", "coordinates": [317, 662, 561, 673], "content": "and Park et al. [2021a; 2021b] model a single NeRF in canonical", "score": 1.0, "index": 91}, {"type": "text", "coordinates": [317, 673, 561, 683], "content": "space and explicitly model backward deformations from observed", "score": 1.0, "index": 92}, {"type": "text", "coordinates": [316, 684, 561, 695], "content": "frames to explain the non-rigid content of the scene. OLD: On the", "score": 1.0, "index": 93}]
[]
[{"type": "inline", "coordinates": [141, 420, 168, 430], "content": "7.1\\ \\mathrm{MP}", "caption": ""}]
[]
[612.0, 792.0]
[{"type": "text", "text": "", "page_idx": 1}, {"type": "text", "text": "In the context of static scenes, we have seen neural radiance field representations (NeRFs) [Mildenhall et al. 2020] obtain compelling NVS results. The core idea of this seminal work is to leverage a volumetric rendering formulation as a reconstruction loss and encode the resulting radiance field in a neural field-based representation. Recently, there has been significant research interest in extending NeRFs to represent dynamic scenes. While some approaches rely on deformation fields to model dynamically changing scene content [Park et al. 2021a,b], others propose to replace the deformation field in favor of a time-conditioned latent code [Li et al. 2022b]. These methods have shown convincing results on short sequences with limited motion; however, faithful reconstructions of human heads with complex motion remain challenging. ", "page_idx": 1}, {"type": "text", "text": "In this work, we focus on addressing these challenges in the context of a newly-designed multi-view capture setup and propose NeRSemble, a novel method that combines the strengths of deformation fields and flexible latent conditioning to represent the appearance of dynamic human heads. The core idea of our approach is to store latent features in an ensemble of multi-resolution hash grids, similar to Instant NGP [M\u00fcller et al. 2022], which are blended to describe a given time step. Importantly, we utilize a deformation field before querying features from the hash grids. As a result, the deformation field represents all coarse dynamics of the scene and aligns the coordinate systems of the hash grids, which are then responsible for modeling fine details and complex movements. In order to train and evaluate our method, we design a new multi-view capture setup to record $7.1\\ \\mathrm{MP}$ videos at 73 fps with 16 machine vision cameras. With this setup, we capture a new dataset of 4734 sequences of 222 human heads with a total of 31.7 million individual frames. We evaluate our method on this newly-introduced dataset and demonstrate that we significantly outperform existing dynamic NeRF reconstruction approaches. Our dataset exceeds all comparable datasets w.r.t. resolution and number of frames per second by a large margin, and will be made publicly available. Furthermore, we will host a public benchmark on dynamic NVS of human heads, which will help to advance the field and increase comparability across methods. ", "page_idx": 1}, {"type": "text", "text": "To summarize, our contributions are as follows: ", "text_level": 1, "page_idx": 1}, {"type": "text", "text": "\u2022 A dynamic head reconstruction method based on a NeRF representation that combines a deformation field and an ensemble of multi-resolution hash encodings. This facilitates high-fidelity NVS from a sparse camera array and enables detailed representation of scenes with complex motion. \u2022 A high-framerate and high-resolution multi-view video dataset of diverse human heads with over 4700 sequences of more than 220 subjects. The dataset will be publicly released and include a new benchmark for dynamic NVS of human heads. ", "page_idx": 1}, {"type": "table", "img_path": "images/2326e82f66c74d3f54ec74ede6cced478976941a27f2fd85197bfcac0b9dca92.jpg", "table_caption": ["Table 1. Existing multi-view video datasets of human faces. Note that for each dataset, we only count the publicly accessible recordings. "], "table_footnote": [], "page_idx": 1}, {"type": "text", "text": "2 RELATED WORK ", "text_level": 1, "page_idx": 1}, {"type": "text", "text": "Modeling and rendering human faces is a central topic in graphics and plays a crucial role in many applications, such as computer games, social media, telecommunication, and virtual reality. ", "page_idx": 1}, {"type": "text", "text": "2.1 3D Morphable Models ", "text_level": 1, "page_idx": 1}, {"type": "text", "text": "3D morphable models (3DMMs) have been a staple approach over the last two decades. The use of a unified mesh topology enables representing identity and expression using simple statistical tools [Blanz and Vetter 1999; Li et al. 2017]. With the additional use of texture, one can already produce compelling renderings [Blanz and Vetter 1999; Paysan et al. 2009], but mesh-based 3DMMs are inherently limited w.r.t. modeling hair or fine identity-specific details. More recently, the use of neural fields [Xie et al. 2022] has alleviated the constraint of working on topologically uniform meshes. These models are capable of modeling complete human heads, including hair [Yenamandra et al. 2021] and fine details [Giebenhain et al. 2022]. In another line of work, Zheng et al. [2022] combine ideas from neural fields and classical 3DMMs to fit monocular videos. ", "page_idx": 1}, {"type": "text", "text": "2.2 Neural Radiance Fields ", "text_level": 1, "page_idx": 1}, {"type": "text", "text": "Our work strives to achieve highly-realistic renderings of videos, including detailed hairstyles and complex deformations. Therefore, we deviate from common assumptions made in 3DMMs and focus on fitting a single multi-view video sequence to the highest degree of detail possible. Neural Radiance Fields (NeRFs) [Mildenhall et al. 2020] have recently become state-of-the-art in NVS. While the first NeRFs were usually trained for hours or days on a single scene, recent research advances have reduced the training time to several minutes. For example, this can be achieved by grid-based optimization [Fridovich-Keil and Yu et al. 2022; Karnewar et al. 2022; Sun et al. 2022], tensor decomposition [Chen et al. 2022], or Instant NGP\u2019s [M\u00fcller et al. 2022] multi-resolution voxel hashing. ", "page_idx": 1}, {"type": "text", "text": "2.3 Dynamic NeRF ", "text_level": 1, "page_idx": 1}, {"type": "text", "text": "Extending NeRFs to time-varying, non-rigid content is another central research topic that has seen fast progress. Pumarola et al. [2020] and Park et al. [2021a; 2021b] model a single NeRF in canonical space and explicitly model backward deformations from observed frames to explain the non-rigid content of the scene. OLD: On the other hand, Li et al. [2022b] refrain from using explicit deformations and instead encode the state of the scene in a latent vector, which is directly conditioning a NeRF. Wang et al. [2022b] utilize Fourier-based compression of grid features to represent a 4D radiance field. Lombardi et al. [2019] use an image-to-volume generator in conjunction with deformation fields. ", "page_idx": 1}]
[{"category_id": 5, "poly": [880.1297607421875, 301.76025390625, 1559.5367431640625, 301.76025390625, 1559.5367431640625, 625.3253784179688, 880.1297607421875, 625.3253784179688], "score": 0.9999885559082031}, {"category_id": 6, "poly": [880.38330078125, 219.86480712890625, 1558.1937255859375, 219.86480712890625, 1558.1937255859375, 272.0751647949219, 880.38330078125, 272.0751647949219], "score": 0.999987781047821}, {"category_id": 1, "poly": [882.3058471679688, 871.0899658203125, 1558.7603759765625, 871.0899658203125, 1558.7603759765625, 1263.923583984375, 882.3058471679688, 1263.923583984375], "score": 0.9999873042106628}, {"category_id": 1, "poly": [882.7766723632812, 1338.2515869140625, 1558.5572509765625, 1338.2515869140625, 1558.5572509765625, 1703.77197265625, 882.7766723632812, 1703.77197265625], "score": 0.9999856948852539}, {"category_id": 1, "poly": [145.07762145996094, 224.6790313720703, 818.389404296875, 224.6790313720703, 818.389404296875, 374.22576904296875, 145.07762145996094, 374.22576904296875], "score": 0.9999822378158569}, {"category_id": 1, "poly": [144.613525390625, 377.9239196777344, 818.6535034179688, 377.9239196777344, 818.6535034179688, 771.5071411132812, 144.613525390625, 771.5071411132812], "score": 0.9999818801879883}, {"category_id": 1, "poly": [882.4254150390625, 708.210693359375, 1555.644775390625, 708.210693359375, 1555.644775390625, 796.6608276367188, 882.4254150390625, 796.6608276367188], "score": 0.9999811053276062}, {"category_id": 1, "poly": [187.54931640625, 1594.272705078125, 820.2022705078125, 1594.272705078125, 820.2022705078125, 1896.16064453125, 187.54931640625, 1896.16064453125], "score": 0.9999771118164062}, {"category_id": 1, "poly": [145.75416564941406, 772.1171875, 818.660400390625, 772.1171875, 818.660400390625, 1502.0670166015625, 145.75416564941406, 1502.0670166015625], "score": 0.9999712705612183}, {"category_id": 1, "poly": [882.2962646484375, 1777.8856201171875, 1557.1324462890625, 1777.8856201171875, 1557.1324462890625, 1927.4649658203125, 882.2962646484375, 1927.4649658203125], "score": 0.9999278783798218}, {"category_id": 0, "poly": [881.2869262695312, 666.5784912109375, 1122.9940185546875, 666.5784912109375, 1122.9940185546875, 697.5755615234375, 881.2869262695312, 697.5755615234375], "score": 0.9999236464500427}, {"category_id": 0, "poly": [884.7431030273438, 1738.75927734375, 1114.5732421875, 1738.75927734375, 1114.5732421875, 1767.9803466796875, 884.7431030273438, 1767.9803466796875], "score": 0.9998674392700195}, {"category_id": 0, "poly": [168.23590087890625, 1534.6922607421875, 652.7244873046875, 1534.6922607421875, 652.7244873046875, 1560.6395263671875, 168.23590087890625, 1560.6395263671875], "score": 0.9994601011276245}, {"category_id": 0, "poly": [882.8785400390625, 1301.167236328125, 1204.457763671875, 1301.167236328125, 1204.457763671875, 1329.5706787109375, 882.8785400390625, 1329.5706787109375], "score": 0.9991192817687988}, {"category_id": 0, "poly": [883.481201171875, 833.8076782226562, 1198.1636962890625, 833.8076782226562, 1198.1636962890625, 862.0162963867188, 883.481201171875, 862.0162963867188], "score": 0.9990148544311523}, {"category_id": 2, "poly": [141.11195373535156, 154.13836669921875, 909.0712280273438, 154.13836669921875, 909.0712280273438, 177.20474243164062, 141.11195373535156, 177.20474243164062], "score": 0.9988362789154053}, {"category_id": 13, "poly": [392, 1168, 467, 1168, 467, 1195, 392, 1195], "score": 0.34, "latex": "7.1\\ \\mathrm{MP}"}, {"category_id": 15, "poly": [882.0, 220.0, 1558.0, 220.0, 1558.0, 246.0, 882.0, 246.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [881.0, 247.0, 1452.0, 247.0, 1452.0, 276.0, 881.0, 276.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [883.0, 874.0, 1558.0, 874.0, 1558.0, 903.0, 883.0, 903.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [881.0, 902.0, 1560.0, 902.0, 1560.0, 934.0, 881.0, 934.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [882.0, 935.0, 1560.0, 935.0, 1560.0, 963.0, 882.0, 963.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [882.0, 963.0, 1562.0, 963.0, 1562.0, 993.0, 882.0, 993.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [883.0, 996.0, 1558.0, 996.0, 1558.0, 1025.0, 883.0, 1025.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [882.0, 1026.0, 1558.0, 1026.0, 1558.0, 1055.0, 882.0, 1055.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [880.0, 1057.0, 1558.0, 1057.0, 1558.0, 1085.0, 880.0, 1085.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [881.0, 1088.0, 1556.0, 1088.0, 1556.0, 1113.0, 881.0, 1113.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [880.0, 1117.0, 1561.0, 1117.0, 1561.0, 1146.0, 880.0, 1146.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [882.0, 1148.0, 1558.0, 1148.0, 1558.0, 1177.0, 882.0, 1177.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [883.0, 1180.0, 1556.0, 1180.0, 1556.0, 1205.0, 883.0, 1205.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [882.0, 1208.0, 1560.0, 1208.0, 1560.0, 1237.0, 882.0, 1237.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [881.0, 1238.0, 1407.0, 1238.0, 1407.0, 1266.0, 881.0, 1266.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [882.0, 1341.0, 1560.0, 1341.0, 1560.0, 1370.0, 882.0, 1370.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [882.0, 1372.0, 1563.0, 1372.0, 1563.0, 1402.0, 882.0, 1402.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [880.0, 1402.0, 1560.0, 1402.0, 1560.0, 1432.0, 880.0, 1432.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [883.0, 1434.0, 1558.0, 1434.0, 1558.0, 1461.0, 883.0, 1461.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [877.0, 1462.0, 1560.0, 1462.0, 1560.0, 1493.0, 877.0, 1493.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [882.0, 1494.0, 1558.0, 1494.0, 1558.0, 1521.0, 882.0, 1521.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [879.0, 1522.0, 1559.0, 1522.0, 1559.0, 1554.0, 879.0, 1554.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [880.0, 1554.0, 1559.0, 1554.0, 1559.0, 1583.0, 880.0, 1583.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [879.0, 1585.0, 1560.0, 1585.0, 1560.0, 1614.0, 879.0, 1614.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [884.0, 1616.0, 1563.0, 1616.0, 1563.0, 1644.0, 884.0, 1644.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [882.0, 1646.0, 1557.0, 1646.0, 1557.0, 1675.0, 882.0, 1675.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [881.0, 1676.0, 1549.0, 1676.0, 1549.0, 1706.0, 881.0, 1706.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 227.0, 820.0, 227.0, 820.0, 255.0, 141.0, 255.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [143.0, 258.0, 819.0, 258.0, 819.0, 286.0, 143.0, 286.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 288.0, 820.0, 288.0, 820.0, 317.0, 141.0, 317.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [142.0, 319.0, 818.0, 319.0, 818.0, 348.0, 142.0, 348.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [142.0, 350.0, 628.0, 350.0, 628.0, 376.0, 142.0, 376.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [168.0, 381.0, 819.0, 381.0, 819.0, 406.0, 168.0, 406.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 409.0, 819.0, 409.0, 819.0, 438.0, 141.0, 438.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 439.0, 821.0, 439.0, 821.0, 468.0, 141.0, 468.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [142.0, 471.0, 820.0, 471.0, 820.0, 500.0, 142.0, 500.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [143.0, 502.0, 821.0, 502.0, 821.0, 528.0, 143.0, 528.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [142.0, 532.0, 819.0, 532.0, 819.0, 561.0, 142.0, 561.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [140.0, 560.0, 821.0, 560.0, 821.0, 593.0, 140.0, 593.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [140.0, 589.0, 822.0, 589.0, 822.0, 624.0, 140.0, 624.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 621.0, 821.0, 621.0, 821.0, 653.0, 141.0, 653.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [140.0, 651.0, 819.0, 651.0, 819.0, 681.0, 140.0, 681.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 682.0, 819.0, 682.0, 819.0, 713.0, 141.0, 713.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 712.0, 820.0, 712.0, 820.0, 742.0, 141.0, 742.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [140.0, 743.0, 570.0, 743.0, 570.0, 777.0, 140.0, 777.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [881.0, 709.0, 1556.0, 709.0, 1556.0, 739.0, 881.0, 739.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [882.0, 739.0, 1556.0, 739.0, 1556.0, 771.0, 882.0, 771.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [881.0, 771.0, 1491.0, 771.0, 1491.0, 800.0, 881.0, 800.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [187.0, 1597.0, 819.0, 1597.0, 819.0, 1624.0, 187.0, 1624.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [207.0, 1626.0, 823.0, 1626.0, 823.0, 1656.0, 207.0, 1656.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [207.0, 1657.0, 819.0, 1657.0, 819.0, 1686.0, 207.0, 1686.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [207.0, 1687.0, 820.0, 1687.0, 820.0, 1720.0, 207.0, 1720.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [206.0, 1717.0, 773.0, 1717.0, 773.0, 1748.0, 206.0, 1748.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [186.0, 1750.0, 773.0, 1750.0, 773.0, 1777.0, 186.0, 1777.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [207.0, 1779.0, 821.0, 1779.0, 821.0, 1808.0, 207.0, 1808.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [208.0, 1811.0, 819.0, 1811.0, 819.0, 1838.0, 208.0, 1838.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [208.0, 1841.0, 818.0, 1841.0, 818.0, 1869.0, 208.0, 1869.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [207.0, 1871.0, 273.0, 1871.0, 273.0, 1897.0, 207.0, 1897.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [165.0, 773.0, 821.0, 773.0, 821.0, 805.0, 165.0, 805.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [142.0, 806.0, 818.0, 806.0, 818.0, 835.0, 142.0, 835.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [142.0, 836.0, 818.0, 836.0, 818.0, 864.0, 142.0, 864.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 866.0, 819.0, 866.0, 819.0, 896.0, 141.0, 896.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 896.0, 821.0, 896.0, 821.0, 925.0, 141.0, 925.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 927.0, 820.0, 927.0, 820.0, 954.0, 141.0, 954.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 958.0, 820.0, 958.0, 820.0, 984.0, 141.0, 984.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [140.0, 985.0, 819.0, 985.0, 819.0, 1018.0, 140.0, 1018.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 1016.0, 820.0, 1016.0, 820.0, 1049.0, 141.0, 1049.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [143.0, 1047.0, 820.0, 1047.0, 820.0, 1077.0, 143.0, 1077.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [145.0, 1080.0, 819.0, 1080.0, 819.0, 1108.0, 145.0, 1108.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [144.0, 1111.0, 820.0, 1111.0, 820.0, 1139.0, 144.0, 1139.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [142.0, 1140.0, 819.0, 1140.0, 819.0, 1170.0, 142.0, 1170.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [144.0, 1172.0, 391.0, 1172.0, 391.0, 1198.0, 144.0, 1198.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [468.0, 1172.0, 819.0, 1172.0, 819.0, 1198.0, 468.0, 1198.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [142.0, 1200.0, 821.0, 1200.0, 821.0, 1230.0, 142.0, 1230.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [143.0, 1232.0, 820.0, 1232.0, 820.0, 1259.0, 143.0, 1259.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [142.0, 1262.0, 819.0, 1262.0, 819.0, 1289.0, 142.0, 1289.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [142.0, 1292.0, 819.0, 1292.0, 819.0, 1323.0, 142.0, 1323.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [139.0, 1321.0, 820.0, 1321.0, 820.0, 1354.0, 139.0, 1354.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 1352.0, 819.0, 1352.0, 819.0, 1382.0, 141.0, 1382.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [143.0, 1384.0, 822.0, 1384.0, 822.0, 1412.0, 143.0, 1412.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [143.0, 1415.0, 821.0, 1415.0, 821.0, 1443.0, 143.0, 1443.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [142.0, 1443.0, 818.0, 1443.0, 818.0, 1473.0, 142.0, 1473.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 1475.0, 308.0, 1475.0, 308.0, 1501.0, 141.0, 1501.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [880.0, 1780.0, 1561.0, 1780.0, 1561.0, 1810.0, 880.0, 1810.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [882.0, 1811.0, 1557.0, 1811.0, 1557.0, 1838.0, 882.0, 1838.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [881.0, 1840.0, 1560.0, 1840.0, 1560.0, 1871.0, 881.0, 1871.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [881.0, 1872.0, 1560.0, 1872.0, 1560.0, 1898.0, 881.0, 1898.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [879.0, 1901.0, 1559.0, 1901.0, 1559.0, 1931.0, 879.0, 1931.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [883.0, 673.0, 899.0, 673.0, 899.0, 693.0, 883.0, 693.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [921.0, 668.0, 1124.0, 668.0, 1124.0, 697.0, 921.0, 697.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [879.0, 1737.0, 1118.0, 1737.0, 1118.0, 1770.0, 879.0, 1770.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [167.0, 1535.0, 654.0, 1535.0, 654.0, 1563.0, 167.0, 1563.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [881.0, 1301.0, 1206.0, 1301.0, 1206.0, 1329.0, 881.0, 1329.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [879.0, 830.0, 1199.0, 830.0, 1199.0, 864.0, 879.0, 864.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [141.0, 155.0, 908.0, 155.0, 908.0, 179.0, 141.0, 179.0], "score": 1.0, "text": ""}]
{"preproc_blocks": [{"type": "text", "bbox": [52, 80, 294, 134], "lines": [{"bbox": [50, 81, 295, 91], "spans": [{"bbox": [50, 81, 295, 91], "score": 1.0, "content": "is typically very low. In particular, human heads exhibit several", "type": "text"}], "index": 0}, {"bbox": [51, 92, 294, 102], "spans": [{"bbox": [51, 92, 294, 102], "score": 1.0, "content": "properties that make novel view synthesis (NVS) extremely chal-", "type": "text"}], "index": 1}, {"bbox": [50, 103, 295, 114], "spans": [{"bbox": [50, 103, 295, 114], "score": 1.0, "content": "lenging, such as the complexity of hair, differences in reflectance", "type": "text"}], "index": 2}, {"bbox": [51, 114, 294, 125], "spans": [{"bbox": [51, 114, 294, 125], "score": 1.0, "content": "properties, and the elasticity of human skin that creates heavily", "type": "text"}], "index": 3}, {"bbox": [51, 126, 226, 135], "spans": [{"bbox": [51, 126, 226, 135], "score": 1.0, "content": "non-rigid deformations and fine-scale wrinkles.", "type": "text"}], "index": 4}], "index": 2}, {"type": "text", "bbox": [52, 136, 294, 277], "lines": [{"bbox": [60, 137, 294, 146], "spans": [{"bbox": [60, 137, 294, 146], "score": 1.0, "content": "In the context of static scenes, we have seen neural radiance field", "type": "text"}], "index": 5}, {"bbox": [50, 147, 294, 157], "spans": [{"bbox": [50, 147, 294, 157], "score": 1.0, "content": "representations (NeRFs) [Mildenhall et al. 2020] obtain compelling", "type": "text"}], "index": 6}, {"bbox": [50, 158, 295, 168], "spans": [{"bbox": [50, 158, 295, 168], "score": 1.0, "content": "NVS results. The core idea of this seminal work is to leverage a vol-", "type": "text"}], "index": 7}, {"bbox": [51, 169, 295, 180], "spans": [{"bbox": [51, 169, 295, 180], "score": 1.0, "content": "umetric rendering formulation as a reconstruction loss and encode", "type": "text"}], "index": 8}, {"bbox": [51, 180, 295, 190], "spans": [{"bbox": [51, 180, 295, 190], "score": 1.0, "content": "the resulting radiance field in a neural field-based representation.", "type": "text"}], "index": 9}, {"bbox": [51, 191, 294, 201], "spans": [{"bbox": [51, 191, 294, 201], "score": 1.0, "content": "Recently, there has been significant research interest in extending", "type": "text"}], "index": 10}, {"bbox": [50, 201, 295, 213], "spans": [{"bbox": [50, 201, 295, 213], "score": 1.0, "content": "NeRFs to represent dynamic scenes. While some approaches rely", "type": "text"}], "index": 11}, {"bbox": [50, 212, 295, 224], "spans": [{"bbox": [50, 212, 295, 224], "score": 1.0, "content": "on deformation fields to model dynamically changing scene content", "type": "text"}], "index": 12}, {"bbox": [50, 223, 295, 235], "spans": [{"bbox": [50, 223, 295, 235], "score": 1.0, "content": "[Park et al. 2021a,b], others propose to replace the deformation field", "type": "text"}], "index": 13}, {"bbox": [50, 234, 294, 245], "spans": [{"bbox": [50, 234, 294, 245], "score": 1.0, "content": "in favor of a time-conditioned latent code [Li et al. 2022b]. These", "type": "text"}], "index": 14}, {"bbox": [50, 245, 294, 256], "spans": [{"bbox": [50, 245, 294, 256], "score": 1.0, "content": "methods have shown convincing results on short sequences with", "type": "text"}], "index": 15}, {"bbox": [50, 256, 295, 267], "spans": [{"bbox": [50, 256, 295, 267], "score": 1.0, "content": "limited motion; however, faithful reconstructions of human heads", "type": "text"}], "index": 16}, {"bbox": [50, 267, 205, 279], "spans": [{"bbox": [50, 267, 205, 279], "score": 1.0, "content": "with complex motion remain challenging.", "type": "text"}], "index": 17}], "index": 11}, {"type": "text", "bbox": [52, 277, 294, 540], "lines": [{"bbox": [59, 278, 295, 289], "spans": [{"bbox": [59, 278, 295, 289], "score": 1.0, "content": "In this work, we focus on addressing these challenges in the con-", "type": "text"}], "index": 18}, {"bbox": [51, 290, 294, 300], "spans": [{"bbox": [51, 290, 294, 300], "score": 1.0, "content": "text of a newly-designed multi-view capture setup and propose", "type": "text"}], "index": 19}, {"bbox": [51, 300, 294, 311], "spans": [{"bbox": [51, 300, 294, 311], "score": 1.0, "content": "NeRSemble, a novel method that combines the strengths of de-", "type": "text"}], "index": 20}, {"bbox": [50, 311, 294, 322], "spans": [{"bbox": [50, 311, 294, 322], "score": 1.0, "content": "formation fields and flexible latent conditioning to represent the", "type": "text"}], "index": 21}, {"bbox": [50, 322, 295, 333], "spans": [{"bbox": [50, 322, 295, 333], "score": 1.0, "content": "appearance of dynamic human heads. The core idea of our approach", "type": "text"}], "index": 22}, {"bbox": [50, 333, 295, 343], "spans": [{"bbox": [50, 333, 295, 343], "score": 1.0, "content": "is to store latent features in an ensemble of multi-resolution hash", "type": "text"}], "index": 23}, {"bbox": [50, 344, 295, 354], "spans": [{"bbox": [50, 344, 295, 354], "score": 1.0, "content": "grids, similar to Instant NGP [M\u00fcller et al. 2022], which are blended", "type": "text"}], "index": 24}, {"bbox": [50, 354, 294, 366], "spans": [{"bbox": [50, 354, 294, 366], "score": 1.0, "content": "to describe a given time step. Importantly, we utilize a deformation", "type": "text"}], "index": 25}, {"bbox": [50, 365, 295, 377], "spans": [{"bbox": [50, 365, 295, 377], "score": 1.0, "content": "field before querying features from the hash grids. As a result, the", "type": "text"}], "index": 26}, {"bbox": [51, 376, 295, 387], "spans": [{"bbox": [51, 376, 295, 387], "score": 1.0, "content": "deformation field represents all coarse dynamics of the scene and", "type": "text"}], "index": 27}, {"bbox": [52, 388, 294, 398], "spans": [{"bbox": [52, 388, 294, 398], "score": 1.0, "content": "aligns the coordinate systems of the hash grids, which are then", "type": "text"}], "index": 28}, {"bbox": [51, 399, 295, 410], "spans": [{"bbox": [51, 399, 295, 410], "score": 1.0, "content": "responsible for modeling fine details and complex movements. In", "type": "text"}], "index": 29}, {"bbox": [51, 410, 294, 421], "spans": [{"bbox": [51, 410, 294, 421], "score": 1.0, "content": "order to train and evaluate our method, we design a new multi-view", "type": "text"}], "index": 30}, {"bbox": [51, 420, 294, 431], "spans": [{"bbox": [51, 421, 140, 431], "score": 1.0, "content": "capture setup to record", "type": "text"}, {"bbox": [141, 420, 168, 430], "score": 0.34, "content": "7.1\\ \\mathrm{MP}", "type": "inline_equation", "height": 10, "width": 27}, {"bbox": [168, 421, 294, 431], "score": 1.0, "content": " videos at 73 fps with 16 machine", "type": "text"}], "index": 31}, {"bbox": [51, 432, 295, 442], "spans": [{"bbox": [51, 432, 295, 442], "score": 1.0, "content": "vision cameras. With this setup, we capture a new dataset of 4734", "type": "text"}], "index": 32}, {"bbox": [51, 443, 295, 453], "spans": [{"bbox": [51, 443, 295, 453], "score": 1.0, "content": "sequences of 222 human heads with a total of 31.7 million individual", "type": "text"}], "index": 33}, {"bbox": [51, 454, 294, 464], "spans": [{"bbox": [51, 454, 294, 464], "score": 1.0, "content": "frames. We evaluate our method on this newly-introduced dataset", "type": "text"}], "index": 34}, {"bbox": [51, 465, 294, 476], "spans": [{"bbox": [51, 465, 294, 476], "score": 1.0, "content": "and demonstrate that we significantly outperform existing dynamic", "type": "text"}], "index": 35}, {"bbox": [50, 475, 295, 487], "spans": [{"bbox": [50, 475, 295, 487], "score": 1.0, "content": "NeRF reconstruction approaches. Our dataset exceeds all compara-", "type": "text"}], "index": 36}, {"bbox": [50, 486, 294, 497], "spans": [{"bbox": [50, 486, 294, 497], "score": 1.0, "content": "ble datasets w.r.t. resolution and number of frames per second by", "type": "text"}], "index": 37}, {"bbox": [51, 498, 295, 508], "spans": [{"bbox": [51, 498, 295, 508], "score": 1.0, "content": "a large margin, and will be made publicly available. Furthermore,", "type": "text"}], "index": 38}, {"bbox": [51, 509, 295, 519], "spans": [{"bbox": [51, 509, 295, 519], "score": 1.0, "content": "we will host a public benchmark on dynamic NVS of human heads,", "type": "text"}], "index": 39}, {"bbox": [51, 519, 294, 530], "spans": [{"bbox": [51, 519, 294, 530], "score": 1.0, "content": "which will help to advance the field and increase comparability", "type": "text"}], "index": 40}, {"bbox": [50, 531, 110, 540], "spans": [{"bbox": [50, 531, 110, 540], "score": 1.0, "content": "across methods.", "type": "text"}], "index": 41}], "index": 29.5}, {"type": "title", "bbox": [60, 552, 234, 561], "lines": [{"bbox": [60, 552, 235, 562], "spans": [{"bbox": [60, 552, 235, 562], "score": 1.0, "content": "To summarize, our contributions are as follows:", "type": "text"}], "index": 42}], "index": 42}, {"type": "text", "bbox": [67, 573, 295, 682], "lines": [{"bbox": [67, 574, 294, 584], "spans": [{"bbox": [67, 574, 294, 584], "score": 1.0, "content": "\u2022 A dynamic head reconstruction method based on a NeRF", "type": "text"}], "index": 43}, {"bbox": [74, 585, 296, 596], "spans": [{"bbox": [74, 585, 296, 596], "score": 1.0, "content": "representation that combines a deformation field and an en-", "type": "text"}], "index": 44}, {"bbox": [74, 596, 294, 606], "spans": [{"bbox": [74, 596, 294, 606], "score": 1.0, "content": "semble of multi-resolution hash encodings. This facilitates", "type": "text"}], "index": 45}, {"bbox": [74, 607, 295, 619], "spans": [{"bbox": [74, 607, 295, 619], "score": 1.0, "content": "high-fidelity NVS from a sparse camera array and enables", "type": "text"}], "index": 46}, {"bbox": [74, 618, 278, 629], "spans": [{"bbox": [74, 618, 278, 629], "score": 1.0, "content": "detailed representation of scenes with complex motion.", "type": "text"}], "index": 47}, {"bbox": [66, 630, 278, 639], "spans": [{"bbox": [66, 630, 278, 639], "score": 1.0, "content": "\u2022 A high-framerate and high-resolution multi-view video", "type": "text"}], "index": 48}, {"bbox": [74, 640, 295, 650], "spans": [{"bbox": [74, 640, 295, 650], "score": 1.0, "content": "dataset of diverse human heads with over 4700 sequences of", "type": "text"}], "index": 49}, {"bbox": [74, 651, 294, 661], "spans": [{"bbox": [74, 651, 294, 661], "score": 1.0, "content": "more than 220 subjects. The dataset will be publicly released", "type": "text"}], "index": 50}, {"bbox": [74, 662, 294, 672], "spans": [{"bbox": [74, 662, 294, 672], "score": 1.0, "content": "and include a new benchmark for dynamic NVS of human", "type": "text"}], "index": 51}, {"bbox": [74, 673, 98, 682], "spans": [{"bbox": [74, 673, 98, 682], "score": 1.0, "content": "heads.", "type": "text"}], "index": 52}], "index": 47.5}, {"type": "table", "bbox": [316, 108, 561, 225], "blocks": [{"type": "table_caption", "bbox": [316, 79, 560, 97], "group_id": 0, "lines": [{"bbox": [317, 79, 560, 88], "spans": [{"bbox": [317, 79, 560, 88], "score": 1.0, "content": "Table 1. Existing multi-view video datasets of human faces. Note that for", "type": "text"}], "index": 53}, {"bbox": [317, 88, 522, 99], "spans": [{"bbox": [317, 88, 522, 99], "score": 1.0, "content": "each dataset, we only count the publicly accessible recordings.", "type": "text"}], "index": 54}], "index": 53.5}, {"type": "table_body", "bbox": [316, 108, 561, 225], "group_id": 0, "lines": [{"bbox": [316, 108, 561, 225], "spans": [{"bbox": [316, 108, 561, 225], "score": 0.9999885559082031, "type": "table", "image_path": "2326e82f66c74d3f54ec74ede6cced478976941a27f2fd85197bfcac0b9dca92.jpg"}]}], "index": 56, "virtual_lines": [{"bbox": [316, 108, 561, 147.0], "spans": [], "index": 55}, {"bbox": [316, 147.0, 561, 186.0], "spans": [], "index": 56}, {"bbox": [316, 186.0, 561, 225.0], "spans": [], "index": 57}]}], "index": 54.75}, {"type": "title", "bbox": [317, 239, 404, 251], "lines": [{"bbox": [317, 240, 404, 250], "spans": [{"bbox": [317, 242, 323, 249], "score": 1.0, "content": "2", "type": "text"}, {"bbox": [331, 240, 404, 250], "score": 1.0, "content": "RELATED WORK", "type": "text"}], "index": 58}], "index": 58}, {"type": "text", "bbox": [317, 254, 560, 286], "lines": [{"bbox": [317, 255, 560, 266], "spans": [{"bbox": [317, 255, 560, 266], "score": 1.0, "content": "Modeling and rendering human faces is a central topic in graphics", "type": "text"}], "index": 59}, {"bbox": [317, 266, 560, 277], "spans": [{"bbox": [317, 266, 560, 277], "score": 1.0, "content": "and plays a crucial role in many applications, such as computer", "type": "text"}], "index": 60}, {"bbox": [317, 277, 536, 288], "spans": [{"bbox": [317, 277, 536, 288], "score": 1.0, "content": "games, social media, telecommunication, and virtual reality.", "type": "text"}], "index": 61}], "index": 60}, {"type": "title", "bbox": [318, 300, 431, 310], "lines": [{"bbox": [316, 298, 431, 311], "spans": [{"bbox": [316, 298, 431, 311], "score": 1.0, "content": "2.1 3D Morphable Models", "type": "text"}], "index": 62}], "index": 62}, {"type": "text", "bbox": [317, 313, 561, 455], "lines": [{"bbox": [317, 314, 560, 325], "spans": [{"bbox": [317, 314, 560, 325], "score": 1.0, "content": "3D morphable models (3DMMs) have been a staple approach over", "type": "text"}], "index": 63}, {"bbox": [317, 324, 561, 336], "spans": [{"bbox": [317, 324, 561, 336], "score": 1.0, "content": "the last two decades. The use of a unified mesh topology enables rep-", "type": "text"}], "index": 64}, {"bbox": [317, 336, 561, 346], "spans": [{"bbox": [317, 336, 561, 346], "score": 1.0, "content": "resenting identity and expression using simple statistical tools [Blanz", "type": "text"}], "index": 65}, {"bbox": [317, 346, 562, 357], "spans": [{"bbox": [317, 346, 562, 357], "score": 1.0, "content": "and Vetter 1999; Li et al. 2017]. With the additional use of texture,", "type": "text"}], "index": 66}, {"bbox": [317, 358, 560, 369], "spans": [{"bbox": [317, 358, 560, 369], "score": 1.0, "content": "one can already produce compelling renderings [Blanz and Vetter", "type": "text"}], "index": 67}, {"bbox": [317, 369, 560, 379], "spans": [{"bbox": [317, 369, 560, 379], "score": 1.0, "content": "1999; Paysan et al. 2009], but mesh-based 3DMMs are inherently", "type": "text"}], "index": 68}, {"bbox": [316, 380, 560, 390], "spans": [{"bbox": [316, 380, 560, 390], "score": 1.0, "content": "limited w.r.t. modeling hair or fine identity-specific details. More", "type": "text"}], "index": 69}, {"bbox": [317, 391, 560, 400], "spans": [{"bbox": [317, 391, 560, 400], "score": 1.0, "content": "recently, the use of neural fields [Xie et al. 2022] has alleviated the", "type": "text"}], "index": 70}, {"bbox": [316, 402, 561, 412], "spans": [{"bbox": [316, 402, 561, 412], "score": 1.0, "content": "constraint of working on topologically uniform meshes. These mod-", "type": "text"}], "index": 71}, {"bbox": [317, 413, 560, 423], "spans": [{"bbox": [317, 413, 560, 423], "score": 1.0, "content": "els are capable of modeling complete human heads, including hair", "type": "text"}], "index": 72}, {"bbox": [317, 424, 560, 433], "spans": [{"bbox": [317, 424, 560, 433], "score": 1.0, "content": "[Yenamandra et al. 2021] and fine details [Giebenhain et al. 2022]. In", "type": "text"}], "index": 73}, {"bbox": [317, 434, 561, 445], "spans": [{"bbox": [317, 434, 561, 445], "score": 1.0, "content": "another line of work, Zheng et al. [2022] combine ideas from neural", "type": "text"}], "index": 74}, {"bbox": [317, 445, 506, 455], "spans": [{"bbox": [317, 445, 506, 455], "score": 1.0, "content": "fields and classical 3DMMs to fit monocular videos.", "type": "text"}], "index": 75}], "index": 69}, {"type": "title", "bbox": [317, 468, 433, 478], "lines": [{"bbox": [317, 468, 434, 478], "spans": [{"bbox": [317, 468, 434, 478], "score": 1.0, "content": "2.2 Neural Radiance Fields", "type": "text"}], "index": 76}], "index": 76}, {"type": "text", "bbox": [317, 481, 561, 613], "lines": [{"bbox": [317, 482, 561, 493], "spans": [{"bbox": [317, 482, 561, 493], "score": 1.0, "content": "Our work strives to achieve highly-realistic renderings of videos,", "type": "text"}], "index": 77}, {"bbox": [317, 493, 562, 504], "spans": [{"bbox": [317, 493, 562, 504], "score": 1.0, "content": "including detailed hairstyles and complex deformations. Therefore,", "type": "text"}], "index": 78}, {"bbox": [316, 504, 561, 515], "spans": [{"bbox": [316, 504, 561, 515], "score": 1.0, "content": "we deviate from common assumptions made in 3DMMs and focus", "type": "text"}], "index": 79}, {"bbox": [317, 516, 560, 525], "spans": [{"bbox": [317, 516, 560, 525], "score": 1.0, "content": "on fitting a single multi-view video sequence to the highest de-", "type": "text"}], "index": 80}, {"bbox": [315, 526, 561, 537], "spans": [{"bbox": [315, 526, 561, 537], "score": 1.0, "content": "gree of detail possible. Neural Radiance Fields (NeRFs) [Mildenhall", "type": "text"}], "index": 81}, {"bbox": [317, 537, 560, 547], "spans": [{"bbox": [317, 537, 560, 547], "score": 1.0, "content": "et al. 2020] have recently become state-of-the-art in NVS. While", "type": "text"}], "index": 82}, {"bbox": [316, 547, 561, 559], "spans": [{"bbox": [316, 547, 561, 559], "score": 1.0, "content": "the first NeRFs were usually trained for hours or days on a single", "type": "text"}], "index": 83}, {"bbox": [316, 559, 561, 569], "spans": [{"bbox": [316, 559, 561, 569], "score": 1.0, "content": "scene, recent research advances have reduced the training time to", "type": "text"}], "index": 84}, {"bbox": [316, 570, 561, 581], "spans": [{"bbox": [316, 570, 561, 581], "score": 1.0, "content": "several minutes. For example, this can be achieved by grid-based", "type": "text"}], "index": 85}, {"bbox": [318, 581, 562, 591], "spans": [{"bbox": [318, 581, 562, 591], "score": 1.0, "content": "optimization [Fridovich-Keil and Yu et al. 2022; Karnewar et al.", "type": "text"}], "index": 86}, {"bbox": [317, 592, 560, 603], "spans": [{"bbox": [317, 592, 560, 603], "score": 1.0, "content": "2022; Sun et al. 2022], tensor decomposition [Chen et al. 2022], or", "type": "text"}], "index": 87}, {"bbox": [317, 603, 557, 614], "spans": [{"bbox": [317, 603, 557, 614], "score": 1.0, "content": "Instant NGP\u2019s [M\u00fcller et al. 2022] multi-resolution voxel hashing.", "type": "text"}], "index": 88}], "index": 82.5}, {"type": "title", "bbox": [318, 625, 401, 636], "lines": [{"bbox": [316, 625, 402, 637], "spans": [{"bbox": [316, 625, 402, 637], "score": 1.0, "content": "2.3 Dynamic NeRF", "type": "text"}], "index": 89}], "index": 89}, {"type": "text", "bbox": [317, 640, 560, 693], "lines": [{"bbox": [316, 640, 561, 651], "spans": [{"bbox": [316, 640, 561, 651], "score": 1.0, "content": "Extending NeRFs to time-varying, non-rigid content is another cen-", "type": "text"}], "index": 90}, {"bbox": [317, 651, 560, 661], "spans": [{"bbox": [317, 651, 560, 661], "score": 1.0, "content": "tral research topic that has seen fast progress. Pumarola et al. [2020]", "type": "text"}], "index": 91}, {"bbox": [317, 662, 561, 673], "spans": [{"bbox": [317, 662, 561, 673], "score": 1.0, "content": "and Park et al. [2021a; 2021b] model a single NeRF in canonical", "type": "text"}], "index": 92}, {"bbox": [317, 673, 561, 683], "spans": [{"bbox": [317, 673, 561, 683], "score": 1.0, "content": "space and explicitly model backward deformations from observed", "type": "text"}], "index": 93}, {"bbox": [316, 684, 561, 695], "spans": [{"bbox": [316, 684, 561, 695], "score": 1.0, "content": "frames to explain the non-rigid content of the scene. OLD: On the", "type": "text"}], "index": 94}], "index": 92}], "layout_bboxes": [], "page_idx": 1, "page_size": [612.0, 792.0], "_layout_tree": [], "images": [], "tables": [{"type": "table", "bbox": [316, 108, 561, 225], "blocks": [{"type": "table_caption", "bbox": [316, 79, 560, 97], "group_id": 0, "lines": [{"bbox": [317, 79, 560, 88], "spans": [{"bbox": [317, 79, 560, 88], "score": 1.0, "content": "Table 1. Existing multi-view video datasets of human faces. Note that for", "type": "text"}], "index": 53}, {"bbox": [317, 88, 522, 99], "spans": [{"bbox": [317, 88, 522, 99], "score": 1.0, "content": "each dataset, we only count the publicly accessible recordings.", "type": "text"}], "index": 54}], "index": 53.5}, {"type": "table_body", "bbox": [316, 108, 561, 225], "group_id": 0, "lines": [{"bbox": [316, 108, 561, 225], "spans": [{"bbox": [316, 108, 561, 225], "score": 0.9999885559082031, "type": "table", "image_path": "2326e82f66c74d3f54ec74ede6cced478976941a27f2fd85197bfcac0b9dca92.jpg"}]}], "index": 56, "virtual_lines": [{"bbox": [316, 108, 561, 147.0], "spans": [], "index": 55}, {"bbox": [316, 147.0, 561, 186.0], "spans": [], "index": 56}, {"bbox": [316, 186.0, 561, 225.0], "spans": [], "index": 57}]}], "index": 54.75}], "interline_equations": [], "discarded_blocks": [{"type": "discarded", "bbox": [50, 55, 327, 63], "lines": [{"bbox": [50, 55, 326, 64], "spans": [{"bbox": [50, 55, 326, 64], "score": 1.0, "content": "2 \u2022 Tobias Kirschstein, Shenhan Qian, Simon Giebenhain, Tim Walter, and Matthias Nie\u00dfner", "type": "text"}]}]}], "need_drop": false, "drop_reason": [], "para_blocks": [{"type": "text", "bbox": [52, 80, 294, 134], "lines": [], "index": 2, "page_num": "page_1", "page_size": [612.0, 792.0], "bbox_fs": [50, 81, 295, 135], "lines_deleted": true}, {"type": "text", "bbox": [52, 136, 294, 277], "lines": [{"bbox": [60, 137, 294, 146], "spans": [{"bbox": [60, 137, 294, 146], "score": 1.0, "content": "In the context of static scenes, we have seen neural radiance field", "type": "text"}], "index": 5}, {"bbox": [50, 147, 294, 157], "spans": [{"bbox": [50, 147, 294, 157], "score": 1.0, "content": "representations (NeRFs) [Mildenhall et al. 2020] obtain compelling", "type": "text"}], "index": 6}, {"bbox": [50, 158, 295, 168], "spans": [{"bbox": [50, 158, 295, 168], "score": 1.0, "content": "NVS results. The core idea of this seminal work is to leverage a vol-", "type": "text"}], "index": 7}, {"bbox": [51, 169, 295, 180], "spans": [{"bbox": [51, 169, 295, 180], "score": 1.0, "content": "umetric rendering formulation as a reconstruction loss and encode", "type": "text"}], "index": 8}, {"bbox": [51, 180, 295, 190], "spans": [{"bbox": [51, 180, 295, 190], "score": 1.0, "content": "the resulting radiance field in a neural field-based representation.", "type": "text"}], "index": 9}, {"bbox": [51, 191, 294, 201], "spans": [{"bbox": [51, 191, 294, 201], "score": 1.0, "content": "Recently, there has been significant research interest in extending", "type": "text"}], "index": 10}, {"bbox": [50, 201, 295, 213], "spans": [{"bbox": [50, 201, 295, 213], "score": 1.0, "content": "NeRFs to represent dynamic scenes. While some approaches rely", "type": "text"}], "index": 11}, {"bbox": [50, 212, 295, 224], "spans": [{"bbox": [50, 212, 295, 224], "score": 1.0, "content": "on deformation fields to model dynamically changing scene content", "type": "text"}], "index": 12}, {"bbox": [50, 223, 295, 235], "spans": [{"bbox": [50, 223, 295, 235], "score": 1.0, "content": "[Park et al. 2021a,b], others propose to replace the deformation field", "type": "text"}], "index": 13}, {"bbox": [50, 234, 294, 245], "spans": [{"bbox": [50, 234, 294, 245], "score": 1.0, "content": "in favor of a time-conditioned latent code [Li et al. 2022b]. These", "type": "text"}], "index": 14}, {"bbox": [50, 245, 294, 256], "spans": [{"bbox": [50, 245, 294, 256], "score": 1.0, "content": "methods have shown convincing results on short sequences with", "type": "text"}], "index": 15}, {"bbox": [50, 256, 295, 267], "spans": [{"bbox": [50, 256, 295, 267], "score": 1.0, "content": "limited motion; however, faithful reconstructions of human heads", "type": "text"}], "index": 16}, {"bbox": [50, 267, 205, 279], "spans": [{"bbox": [50, 267, 205, 279], "score": 1.0, "content": "with complex motion remain challenging.", "type": "text"}], "index": 17}], "index": 11, "page_num": "page_1", "page_size": [612.0, 792.0], "bbox_fs": [50, 137, 295, 279]}, {"type": "text", "bbox": [52, 277, 294, 540], "lines": [{"bbox": [59, 278, 295, 289], "spans": [{"bbox": [59, 278, 295, 289], "score": 1.0, "content": "In this work, we focus on addressing these challenges in the con-", "type": "text"}], "index": 18}, {"bbox": [51, 290, 294, 300], "spans": [{"bbox": [51, 290, 294, 300], "score": 1.0, "content": "text of a newly-designed multi-view capture setup and propose", "type": "text"}], "index": 19}, {"bbox": [51, 300, 294, 311], "spans": [{"bbox": [51, 300, 294, 311], "score": 1.0, "content": "NeRSemble, a novel method that combines the strengths of de-", "type": "text"}], "index": 20}, {"bbox": [50, 311, 294, 322], "spans": [{"bbox": [50, 311, 294, 322], "score": 1.0, "content": "formation fields and flexible latent conditioning to represent the", "type": "text"}], "index": 21}, {"bbox": [50, 322, 295, 333], "spans": [{"bbox": [50, 322, 295, 333], "score": 1.0, "content": "appearance of dynamic human heads. The core idea of our approach", "type": "text"}], "index": 22}, {"bbox": [50, 333, 295, 343], "spans": [{"bbox": [50, 333, 295, 343], "score": 1.0, "content": "is to store latent features in an ensemble of multi-resolution hash", "type": "text"}], "index": 23}, {"bbox": [50, 344, 295, 354], "spans": [{"bbox": [50, 344, 295, 354], "score": 1.0, "content": "grids, similar to Instant NGP [M\u00fcller et al. 2022], which are blended", "type": "text"}], "index": 24}, {"bbox": [50, 354, 294, 366], "spans": [{"bbox": [50, 354, 294, 366], "score": 1.0, "content": "to describe a given time step. Importantly, we utilize a deformation", "type": "text"}], "index": 25}, {"bbox": [50, 365, 295, 377], "spans": [{"bbox": [50, 365, 295, 377], "score": 1.0, "content": "field before querying features from the hash grids. As a result, the", "type": "text"}], "index": 26}, {"bbox": [51, 376, 295, 387], "spans": [{"bbox": [51, 376, 295, 387], "score": 1.0, "content": "deformation field represents all coarse dynamics of the scene and", "type": "text"}], "index": 27}, {"bbox": [52, 388, 294, 398], "spans": [{"bbox": [52, 388, 294, 398], "score": 1.0, "content": "aligns the coordinate systems of the hash grids, which are then", "type": "text"}], "index": 28}, {"bbox": [51, 399, 295, 410], "spans": [{"bbox": [51, 399, 295, 410], "score": 1.0, "content": "responsible for modeling fine details and complex movements. In", "type": "text"}], "index": 29}, {"bbox": [51, 410, 294, 421], "spans": [{"bbox": [51, 410, 294, 421], "score": 1.0, "content": "order to train and evaluate our method, we design a new multi-view", "type": "text"}], "index": 30}, {"bbox": [51, 420, 294, 431], "spans": [{"bbox": [51, 421, 140, 431], "score": 1.0, "content": "capture setup to record", "type": "text"}, {"bbox": [141, 420, 168, 430], "score": 0.34, "content": "7.1\\ \\mathrm{MP}", "type": "inline_equation", "height": 10, "width": 27}, {"bbox": [168, 421, 294, 431], "score": 1.0, "content": " videos at 73 fps with 16 machine", "type": "text"}], "index": 31}, {"bbox": [51, 432, 295, 442], "spans": [{"bbox": [51, 432, 295, 442], "score": 1.0, "content": "vision cameras. With this setup, we capture a new dataset of 4734", "type": "text"}], "index": 32}, {"bbox": [51, 443, 295, 453], "spans": [{"bbox": [51, 443, 295, 453], "score": 1.0, "content": "sequences of 222 human heads with a total of 31.7 million individual", "type": "text"}], "index": 33}, {"bbox": [51, 454, 294, 464], "spans": [{"bbox": [51, 454, 294, 464], "score": 1.0, "content": "frames. We evaluate our method on this newly-introduced dataset", "type": "text"}], "index": 34}, {"bbox": [51, 465, 294, 476], "spans": [{"bbox": [51, 465, 294, 476], "score": 1.0, "content": "and demonstrate that we significantly outperform existing dynamic", "type": "text"}], "index": 35}, {"bbox": [50, 475, 295, 487], "spans": [{"bbox": [50, 475, 295, 487], "score": 1.0, "content": "NeRF reconstruction approaches. Our dataset exceeds all compara-", "type": "text"}], "index": 36}, {"bbox": [50, 486, 294, 497], "spans": [{"bbox": [50, 486, 294, 497], "score": 1.0, "content": "ble datasets w.r.t. resolution and number of frames per second by", "type": "text"}], "index": 37}, {"bbox": [51, 498, 295, 508], "spans": [{"bbox": [51, 498, 295, 508], "score": 1.0, "content": "a large margin, and will be made publicly available. Furthermore,", "type": "text"}], "index": 38}, {"bbox": [51, 509, 295, 519], "spans": [{"bbox": [51, 509, 295, 519], "score": 1.0, "content": "we will host a public benchmark on dynamic NVS of human heads,", "type": "text"}], "index": 39}, {"bbox": [51, 519, 294, 530], "spans": [{"bbox": [51, 519, 294, 530], "score": 1.0, "content": "which will help to advance the field and increase comparability", "type": "text"}], "index": 40}, {"bbox": [50, 531, 110, 540], "spans": [{"bbox": [50, 531, 110, 540], "score": 1.0, "content": "across methods.", "type": "text"}], "index": 41}], "index": 29.5, "page_num": "page_1", "page_size": [612.0, 792.0], "bbox_fs": [50, 278, 295, 540]}, {"type": "title", "bbox": [60, 552, 234, 561], "lines": [{"bbox": [60, 552, 235, 562], "spans": [{"bbox": [60, 552, 235, 562], "score": 1.0, "content": "To summarize, our contributions are as follows:", "type": "text"}], "index": 42}], "index": 42, "page_num": "page_1", "page_size": [612.0, 792.0]}, {"type": "text", "bbox": [67, 573, 295, 682], "lines": [{"bbox": [67, 574, 294, 584], "spans": [{"bbox": [67, 574, 294, 584], "score": 1.0, "content": "\u2022 A dynamic head reconstruction method based on a NeRF", "type": "text"}], "index": 43}, {"bbox": [74, 585, 296, 596], "spans": [{"bbox": [74, 585, 296, 596], "score": 1.0, "content": "representation that combines a deformation field and an en-", "type": "text"}], "index": 44}, {"bbox": [74, 596, 294, 606], "spans": [{"bbox": [74, 596, 294, 606], "score": 1.0, "content": "semble of multi-resolution hash encodings. This facilitates", "type": "text"}], "index": 45}, {"bbox": [74, 607, 295, 619], "spans": [{"bbox": [74, 607, 295, 619], "score": 1.0, "content": "high-fidelity NVS from a sparse camera array and enables", "type": "text"}], "index": 46}, {"bbox": [74, 618, 278, 629], "spans": [{"bbox": [74, 618, 278, 629], "score": 1.0, "content": "detailed representation of scenes with complex motion.", "type": "text"}], "index": 47}, {"bbox": [66, 630, 278, 639], "spans": [{"bbox": [66, 630, 278, 639], "score": 1.0, "content": "\u2022 A high-framerate and high-resolution multi-view video", "type": "text"}], "index": 48}, {"bbox": [74, 640, 295, 650], "spans": [{"bbox": [74, 640, 295, 650], "score": 1.0, "content": "dataset of diverse human heads with over 4700 sequences of", "type": "text"}], "index": 49}, {"bbox": [74, 651, 294, 661], "spans": [{"bbox": [74, 651, 294, 661], "score": 1.0, "content": "more than 220 subjects. The dataset will be publicly released", "type": "text"}], "index": 50}, {"bbox": [74, 662, 294, 672], "spans": [{"bbox": [74, 662, 294, 672], "score": 1.0, "content": "and include a new benchmark for dynamic NVS of human", "type": "text"}], "index": 51}, {"bbox": [74, 673, 98, 682], "spans": [{"bbox": [74, 673, 98, 682], "score": 1.0, "content": "heads.", "type": "text"}], "index": 52}], "index": 47.5, "page_num": "page_1", "page_size": [612.0, 792.0], "bbox_fs": [66, 574, 296, 682]}, {"type": "table", "bbox": [316, 108, 561, 225], "blocks": [{"type": "table_caption", "bbox": [316, 79, 560, 97], "group_id": 0, "lines": [{"bbox": [317, 79, 560, 88], "spans": [{"bbox": [317, 79, 560, 88], "score": 1.0, "content": "Table 1. Existing multi-view video datasets of human faces. Note that for", "type": "text"}], "index": 53}, {"bbox": [317, 88, 522, 99], "spans": [{"bbox": [317, 88, 522, 99], "score": 1.0, "content": "each dataset, we only count the publicly accessible recordings.", "type": "text"}], "index": 54}], "index": 53.5}, {"type": "table_body", "bbox": [316, 108, 561, 225], "group_id": 0, "lines": [{"bbox": [316, 108, 561, 225], "spans": [{"bbox": [316, 108, 561, 225], "score": 0.9999885559082031, "type": "table", "image_path": "2326e82f66c74d3f54ec74ede6cced478976941a27f2fd85197bfcac0b9dca92.jpg"}]}], "index": 56, "virtual_lines": [{"bbox": [316, 108, 561, 147.0], "spans": [], "index": 55}, {"bbox": [316, 147.0, 561, 186.0], "spans": [], "index": 56}, {"bbox": [316, 186.0, 561, 225.0], "spans": [], "index": 57}]}], "index": 54.75, "page_num": "page_1", "page_size": [612.0, 792.0]}, {"type": "title", "bbox": [317, 239, 404, 251], "lines": [{"bbox": [317, 240, 404, 250], "spans": [{"bbox": [317, 242, 323, 249], "score": 1.0, "content": "2", "type": "text"}, {"bbox": [331, 240, 404, 250], "score": 1.0, "content": "RELATED WORK", "type": "text"}], "index": 58}], "index": 58, "page_num": "page_1", "page_size": [612.0, 792.0]}, {"type": "text", "bbox": [317, 254, 560, 286], "lines": [{"bbox": [317, 255, 560, 266], "spans": [{"bbox": [317, 255, 560, 266], "score": 1.0, "content": "Modeling and rendering human faces is a central topic in graphics", "type": "text"}], "index": 59}, {"bbox": [317, 266, 560, 277], "spans": [{"bbox": [317, 266, 560, 277], "score": 1.0, "content": "and plays a crucial role in many applications, such as computer", "type": "text"}], "index": 60}, {"bbox": [317, 277, 536, 288], "spans": [{"bbox": [317, 277, 536, 288], "score": 1.0, "content": "games, social media, telecommunication, and virtual reality.", "type": "text"}], "index": 61}], "index": 60, "page_num": "page_1", "page_size": [612.0, 792.0], "bbox_fs": [317, 255, 560, 288]}, {"type": "title", "bbox": [318, 300, 431, 310], "lines": [{"bbox": [316, 298, 431, 311], "spans": [{"bbox": [316, 298, 431, 311], "score": 1.0, "content": "2.1 3D Morphable Models", "type": "text"}], "index": 62}], "index": 62, "page_num": "page_1", "page_size": [612.0, 792.0]}, {"type": "text", "bbox": [317, 313, 561, 455], "lines": [{"bbox": [317, 314, 560, 325], "spans": [{"bbox": [317, 314, 560, 325], "score": 1.0, "content": "3D morphable models (3DMMs) have been a staple approach over", "type": "text"}], "index": 63}, {"bbox": [317, 324, 561, 336], "spans": [{"bbox": [317, 324, 561, 336], "score": 1.0, "content": "the last two decades. The use of a unified mesh topology enables rep-", "type": "text"}], "index": 64}, {"bbox": [317, 336, 561, 346], "spans": [{"bbox": [317, 336, 561, 346], "score": 1.0, "content": "resenting identity and expression using simple statistical tools [Blanz", "type": "text"}], "index": 65}, {"bbox": [317, 346, 562, 357], "spans": [{"bbox": [317, 346, 562, 357], "score": 1.0, "content": "and Vetter 1999; Li et al. 2017]. With the additional use of texture,", "type": "text"}], "index": 66}, {"bbox": [317, 358, 560, 369], "spans": [{"bbox": [317, 358, 560, 369], "score": 1.0, "content": "one can already produce compelling renderings [Blanz and Vetter", "type": "text"}], "index": 67}, {"bbox": [317, 369, 560, 379], "spans": [{"bbox": [317, 369, 560, 379], "score": 1.0, "content": "1999; Paysan et al. 2009], but mesh-based 3DMMs are inherently", "type": "text"}], "index": 68}, {"bbox": [316, 380, 560, 390], "spans": [{"bbox": [316, 380, 560, 390], "score": 1.0, "content": "limited w.r.t. modeling hair or fine identity-specific details. More", "type": "text"}], "index": 69}, {"bbox": [317, 391, 560, 400], "spans": [{"bbox": [317, 391, 560, 400], "score": 1.0, "content": "recently, the use of neural fields [Xie et al. 2022] has alleviated the", "type": "text"}], "index": 70}, {"bbox": [316, 402, 561, 412], "spans": [{"bbox": [316, 402, 561, 412], "score": 1.0, "content": "constraint of working on topologically uniform meshes. These mod-", "type": "text"}], "index": 71}, {"bbox": [317, 413, 560, 423], "spans": [{"bbox": [317, 413, 560, 423], "score": 1.0, "content": "els are capable of modeling complete human heads, including hair", "type": "text"}], "index": 72}, {"bbox": [317, 424, 560, 433], "spans": [{"bbox": [317, 424, 560, 433], "score": 1.0, "content": "[Yenamandra et al. 2021] and fine details [Giebenhain et al. 2022]. In", "type": "text"}], "index": 73}, {"bbox": [317, 434, 561, 445], "spans": [{"bbox": [317, 434, 561, 445], "score": 1.0, "content": "another line of work, Zheng et al. [2022] combine ideas from neural", "type": "text"}], "index": 74}, {"bbox": [317, 445, 506, 455], "spans": [{"bbox": [317, 445, 506, 455], "score": 1.0, "content": "fields and classical 3DMMs to fit monocular videos.", "type": "text"}], "index": 75}], "index": 69, "page_num": "page_1", "page_size": [612.0, 792.0], "bbox_fs": [316, 314, 562, 455]}, {"type": "title", "bbox": [317, 468, 433, 478], "lines": [{"bbox": [317, 468, 434, 478], "spans": [{"bbox": [317, 468, 434, 478], "score": 1.0, "content": "2.2 Neural Radiance Fields", "type": "text"}], "index": 76}], "index": 76, "page_num": "page_1", "page_size": [612.0, 792.0]}, {"type": "text", "bbox": [317, 481, 561, 613], "lines": [{"bbox": [317, 482, 561, 493], "spans": [{"bbox": [317, 482, 561, 493], "score": 1.0, "content": "Our work strives to achieve highly-realistic renderings of videos,", "type": "text"}], "index": 77}, {"bbox": [317, 493, 562, 504], "spans": [{"bbox": [317, 493, 562, 504], "score": 1.0, "content": "including detailed hairstyles and complex deformations. Therefore,", "type": "text"}], "index": 78}, {"bbox": [316, 504, 561, 515], "spans": [{"bbox": [316, 504, 561, 515], "score": 1.0, "content": "we deviate from common assumptions made in 3DMMs and focus", "type": "text"}], "index": 79}, {"bbox": [317, 516, 560, 525], "spans": [{"bbox": [317, 516, 560, 525], "score": 1.0, "content": "on fitting a single multi-view video sequence to the highest de-", "type": "text"}], "index": 80}, {"bbox": [315, 526, 561, 537], "spans": [{"bbox": [315, 526, 561, 537], "score": 1.0, "content": "gree of detail possible. Neural Radiance Fields (NeRFs) [Mildenhall", "type": "text"}], "index": 81}, {"bbox": [317, 537, 560, 547], "spans": [{"bbox": [317, 537, 560, 547], "score": 1.0, "content": "et al. 2020] have recently become state-of-the-art in NVS. While", "type": "text"}], "index": 82}, {"bbox": [316, 547, 561, 559], "spans": [{"bbox": [316, 547, 561, 559], "score": 1.0, "content": "the first NeRFs were usually trained for hours or days on a single", "type": "text"}], "index": 83}, {"bbox": [316, 559, 561, 569], "spans": [{"bbox": [316, 559, 561, 569], "score": 1.0, "content": "scene, recent research advances have reduced the training time to", "type": "text"}], "index": 84}, {"bbox": [316, 570, 561, 581], "spans": [{"bbox": [316, 570, 561, 581], "score": 1.0, "content": "several minutes. For example, this can be achieved by grid-based", "type": "text"}], "index": 85}, {"bbox": [318, 581, 562, 591], "spans": [{"bbox": [318, 581, 562, 591], "score": 1.0, "content": "optimization [Fridovich-Keil and Yu et al. 2022; Karnewar et al.", "type": "text"}], "index": 86}, {"bbox": [317, 592, 560, 603], "spans": [{"bbox": [317, 592, 560, 603], "score": 1.0, "content": "2022; Sun et al. 2022], tensor decomposition [Chen et al. 2022], or", "type": "text"}], "index": 87}, {"bbox": [317, 603, 557, 614], "spans": [{"bbox": [317, 603, 557, 614], "score": 1.0, "content": "Instant NGP\u2019s [M\u00fcller et al. 2022] multi-resolution voxel hashing.", "type": "text"}], "index": 88}], "index": 82.5, "page_num": "page_1", "page_size": [612.0, 792.0], "bbox_fs": [315, 482, 562, 614]}, {"type": "title", "bbox": [318, 625, 401, 636], "lines": [{"bbox": [316, 625, 402, 637], "spans": [{"bbox": [316, 625, 402, 637], "score": 1.0, "content": "2.3 Dynamic NeRF", "type": "text"}], "index": 89}], "index": 89, "page_num": "page_1", "page_size": [612.0, 792.0]}, {"type": "text", "bbox": [317, 640, 560, 693], "lines": [{"bbox": [316, 640, 561, 651], "spans": [{"bbox": [316, 640, 561, 651], "score": 1.0, "content": "Extending NeRFs to time-varying, non-rigid content is another cen-", "type": "text"}], "index": 90}, {"bbox": [317, 651, 560, 661], "spans": [{"bbox": [317, 651, 560, 661], "score": 1.0, "content": "tral research topic that has seen fast progress. Pumarola et al. [2020]", "type": "text"}], "index": 91}, {"bbox": [317, 662, 561, 673], "spans": [{"bbox": [317, 662, 561, 673], "score": 1.0, "content": "and Park et al. [2021a; 2021b] model a single NeRF in canonical", "type": "text"}], "index": 92}, {"bbox": [317, 673, 561, 683], "spans": [{"bbox": [317, 673, 561, 683], "score": 1.0, "content": "space and explicitly model backward deformations from observed", "type": "text"}], "index": 93}, {"bbox": [316, 684, 561, 695], "spans": [{"bbox": [316, 684, 561, 695], "score": 1.0, "content": "frames to explain the non-rigid content of the scene. OLD: On the", "type": "text"}], "index": 94}, {"bbox": [51, 316, 295, 326], "spans": [{"bbox": [51, 316, 295, 326], "score": 1.0, "content": "other hand, Li et al. [2022b] refrain from using explicit deforma-", "type": "text", "cross_page": true}], "index": 4}, {"bbox": [51, 326, 295, 336], "spans": [{"bbox": [51, 326, 295, 336], "score": 1.0, "content": "tions and instead encode the state of the scene in a latent vector,", "type": "text", "cross_page": true}], "index": 5}, {"bbox": [51, 337, 294, 348], "spans": [{"bbox": [51, 337, 294, 348], "score": 1.0, "content": "which is directly conditioning a NeRF. Wang et al. [2022b] utilize", "type": "text", "cross_page": true}], "index": 6}, {"bbox": [50, 348, 295, 360], "spans": [{"bbox": [50, 348, 295, 360], "score": 1.0, "content": "Fourier-based compression of grid features to represent a 4D radi-", "type": "text", "cross_page": true}], "index": 7}, {"bbox": [50, 359, 294, 371], "spans": [{"bbox": [50, 359, 294, 371], "score": 1.0, "content": "ance field. Lombardi et al. [2019] use an image-to-volume generator", "type": "text", "cross_page": true}], "index": 8}, {"bbox": [51, 371, 195, 380], "spans": [{"bbox": [51, 371, 195, 380], "score": 1.0, "content": "in conjunction with deformation fields.", "type": "text", "cross_page": true}], "index": 9}], "index": 92, "page_num": "page_1", "page_size": [612.0, 792.0], "bbox_fs": [316, 640, 561, 695]}]}
2305.03027
10
"Contribution of Architectural Components. We ablate the effect of\nusing a hash ensemble and the de(...TRUNCATED)
"<p>Contribution of Architectural Components. We ablate the effect of\nusing a hash ensemble and the(...TRUNCATED)
"[{\"type\": \"image\", \"coordinates\": [52, 77, 558, 302], \"content\": \"\", \"block_type\": \"im(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [61, 375, 295, 384], \"content\": \"Contribution of Architec(...TRUNCATED)
"[{\"coordinates\": [52, 77, 558, 302], \"index\": 3.0, \"caption\": \"sharp detail already returns (...TRUNCATED)
"[{\"type\": \"inline\", \"coordinates\": [268, 396, 295, 405], \"content\": \"(\\\\mathrm{NGP~+~}\"(...TRUNCATED)
[]
[612.0, 792.0]
"[{\"type\": \"image\", \"img_path\": \"images/4c08c14de2f416c56b99f18635ac6f1bae045b127331a2e699473(...TRUNCATED)
"[{\"category_id\": 5, \"poly\": [963.1829223632812, 1228.8831787109375, 1476.2489013671875, 1228.88(...TRUNCATED)
"{\"preproc_blocks\": [{\"type\": \"image\", \"bbox\": [52, 77, 558, 302], \"blocks\": [{\"type\": \(...TRUNCATED)
2305.03027
11
"Content of Individual Hash Grids. We analyze the contents of the\nindividual hash grids $$\\mathcal(...TRUNCATED)
"<p>Content of Individual Hash Grids. We analyze the contents of the\nindividual hash grids $$\\math(...TRUNCATED)
"[{\"type\": \"image\", \"coordinates\": [51, 77, 296, 172], \"content\": \"\", \"block_type\": \"im(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [60, 259, 294, 270], \"content\": \"Content of Individual Ha(...TRUNCATED)
"[{\"coordinates\": [51, 77, 296, 172], \"index\": 3.25, \"caption\": \" denotes the first frame.\",(...TRUNCATED)
"[{\"type\": \"inline\", \"coordinates\": [132, 270, 144, 280], \"content\": \"\\\\mathcal{H}_{i}\",(...TRUNCATED)
[]
[612.0, 792.0]
"[{\"type\": \"image\", \"img_path\": \"images/c3276da60dd007d19d9f774856b5a11a0b6bb9b63e68638e85a72(...TRUNCATED)
"[{\"category_id\": 1, \"poly\": [138.63345336914062, 1562.0018310546875, 822.1433715820312, 1562.00(...TRUNCATED)
"{\"preproc_blocks\": [{\"type\": \"image\", \"bbox\": [51, 77, 296, 172], \"blocks\": [{\"type\": \(...TRUNCATED)
2305.03027
12
"Michael Broxton, John Flynn, Ryan Overbeck, Daniel Erickson, Peter Hedman, Matthew\nDuvall, Jason D(...TRUNCATED)
"<p>Michael Broxton, John Flynn, Ryan Overbeck, Daniel Erickson, Peter Hedman, Matthew\nDuvall, Jaso(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [50, 79, 297, 689], \"content\": \"Michael Broxton, John Fly(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [50, 82, 296, 92], \"content\": \"Michael Broxton, John Flyn(...TRUNCATED)
[]
"[{\"type\": \"inline\", \"coordinates\": [438, 536, 450, 545], \"content\": \"\\\\mathrm{Ng}^{\\\\a(...TRUNCATED)
[]
[612.0, 792.0]
"[{\"type\": \"text\", \"text\": \"\", \"page_idx\": 12}, {\"type\": \"text\", \"text\": \"\", \"pag(...TRUNCATED)
"[{\"category_id\": 1, \"poly\": [139.82847595214844, 222.00709533691406, 827.61572265625, 222.00709(...TRUNCATED)
"{\"preproc_blocks\": [{\"type\": \"text\", \"bbox\": [50, 79, 297, 689], \"lines\": [{\"bbox\": [50(...TRUNCATED)
2305.03027
13
"Cheng-hsin Wuu, Ningyuan Zheng, Scott Ardisson, Rohan Bali, Danielle Belko, Eric\nBrockmeyer, Lucas(...TRUNCATED)
"<p>Cheng-hsin Wuu, Ningyuan Zheng, Scott Ardisson, Rohan Bali, Danielle Belko, Eric\nBrockmeyer, Lu(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [49, 82, 295, 227], \"content\": \"Cheng-hsin Wuu, Ningyuan (...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [51, 82, 294, 91], \"content\": \"Cheng-hsin Wuu, Ningyuan Z(...TRUNCATED)
[]
[]
[]
[612.0, 792.0]
"[{\"type\": \"text\", \"text\": \"\", \"page_idx\": 13}, {\"type\": \"text\", \"text\": \"\", \"pag(...TRUNCATED)
"[{\"category_id\": 1, \"poly\": [877.4862060546875, 228.1995391845703, 1563.5543212890625, 228.1995(...TRUNCATED)
"{\"preproc_blocks\": [{\"type\": \"text\", \"bbox\": [49, 82, 295, 227], \"lines\": [{\"bbox\": [51(...TRUNCATED)
2305.03027
2
"other hand, Li et al. [2022b] refrain from using explicit deforma-\ntions and instead encode the st(...TRUNCATED)
"<p>other hand, Li et al. [2022b] refrain from using explicit deforma-\ntions and instead encode the(...TRUNCATED)
"[{\"type\": \"image\", \"coordinates\": [50, 80, 560, 274], \"content\": \"\", \"block_type\": \"im(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [51, 316, 295, 326], \"content\": \"other hand, Li et al. [2(...TRUNCATED)
"[{\"coordinates\": [50, 80, 560, 274], \"index\": 2.0, \"caption\": \"Fig. 2. Left: Our custom-buil(...TRUNCATED)
"[{\"type\": \"inline\", \"coordinates\": [345, 538, 357, 547], \"content\": \"93^{\\\\circ}\", \"ca(...TRUNCATED)
[]
[612.0, 792.0]
"[{\"type\": \"image\", \"img_path\": \"images/acfed8671b11540da9027122686f9fd7112341be8f37f18143bfb(...TRUNCATED)
"[{\"category_id\": 1, \"poly\": [143.4579315185547, 875.9676513671875, 820.0895385742188, 875.96765(...TRUNCATED)
"{\"preproc_blocks\": [{\"type\": \"image\", \"bbox\": [50, 80, 560, 274], \"blocks\": [{\"type\": \(...TRUNCATED)
2305.03027
3
"order to maximize the variety of motion. Specifically, our capture\nscript consists of 9 expression(...TRUNCATED)
"<p>order to maximize the variety of motion. Specifically, our capture\nscript consists of 9 express(...TRUNCATED)
"[{\"type\": \"table\", \"coordinates\": [51, 101, 296, 129], \"content\": \"\", \"block_type\": \"t(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [51, 343, 294, 353], \"content\": \"order to maximize the va(...TRUNCATED)
"[{\"coordinates\": [55, 126, 292, 294], \"index\": 14.75, \"caption\": \"quences feature a wide ran(...TRUNCATED)
[]
[]
[612.0, 792.0]
"[{\"type\": \"table\", \"img_path\": \"images/f9707d6954a293da079c8968b4d30337cc51e52e12dab78254bdc(...TRUNCATED)
"[{\"category_id\": 1, \"poly\": [143.88133239746094, 954.058349609375, 818.1675415039062, 954.05834(...TRUNCATED)
"{\"preproc_blocks\": [{\"type\": \"table\", \"bbox\": [51, 101, 296, 129], \"blocks\": [{\"type\": (...TRUNCATED)
2305.03027
4
"# 4.1 Preliminaries: Neural Radiance Fields\n\nOur work builds on top of the recent success of Neur(...TRUNCATED)
"<h1>4.1 Preliminaries: Neural Radiance Fields</h1>\n<p>Our work builds on top of the recent success(...TRUNCATED)
"[{\"type\": \"image\", \"coordinates\": [48, 77, 559, 260], \"content\": \"\", \"block_type\": \"im(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [51, 338, 226, 348], \"content\": \"4.1 Preliminaries: Neura(...TRUNCATED)
"[{\"coordinates\": [48, 77, 559, 260], \"index\": 3.0, \"caption\": \" from the blended features us(...TRUNCATED)
"[{\"type\": \"block\", \"coordinates\": [101, 402, 244, 438], \"content\": \"\", \"caption\": \"\"}(...TRUNCATED)
[]
[612.0, 792.0]
"[{\"type\": \"image\", \"img_path\": \"images/c7540b1718164c4caaba8f5f90071593dad073290846a6de23090(...TRUNCATED)
"[{\"category_id\": 8, \"poly\": [280.868408203125, 1119.9073486328125, 678.4691162109375, 1119.9073(...TRUNCATED)
"{\"preproc_blocks\": [{\"type\": \"image\", \"bbox\": [48, 77, 559, 260], \"blocks\": [{\"type\": \(...TRUNCATED)
2305.03027
5
"Using these learned correspondences, we modify Equation 4 to\noperate in the canonical space:\n\nTh(...TRUNCATED)
"<p>Using these learned correspondences, we modify Equation 4 to\noperate in the canonical space:</p(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [51, 80, 294, 102], \"content\": \"Using these learned corre(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [60, 80, 294, 92], \"content\": \"Using these learned corres(...TRUNCATED)
[]
"[{\"type\": \"block\", \"coordinates\": [117, 107, 227, 138], \"content\": \"\", \"caption\": \"\"}(...TRUNCATED)
[]
[612.0, 792.0]
"[{\"type\": \"text\", \"text\": \"Using these learned correspondences, we modify Equation 4 to oper(...TRUNCATED)
"[{\"category_id\": 9, \"poly\": [1517.369140625, 716.4633178710938, 1556.6090087890625, 716.4633178(...TRUNCATED)
"{\"preproc_blocks\": [{\"type\": \"text\", \"bbox\": [51, 80, 294, 102], \"lines\": [{\"bbox\": [60(...TRUNCATED)
End of preview. Expand in Data Studio
README.md exists but content is empty.
Downloads last month
42