diff --git "a/FNE0T4oBgHgl3EQfzAKR/content/tmp_files/load_file.txt" "b/FNE0T4oBgHgl3EQfzAKR/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/FNE0T4oBgHgl3EQfzAKR/content/tmp_files/load_file.txt" @@ -0,0 +1,780 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfzAKR/content/2301.02667v1.pdf,len=779 +page_content='Locomotion-Action-Manipulation: Synthesizing Human-Scene Interactions in Complex 3D Environments Jiye Lee Hanbyul Joo Seoul National University {kay2353,hbjoo}@snu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfzAKR/content/2301.02667v1.pdf'} +page_content='ac.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfzAKR/content/2301.02667v1.pdf'} +page_content='kr Figure 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfzAKR/content/2301.02667v1.pdf'} +page_content=' Our system, LAMA, produces high-quality and realistic 3D human motions that include locomotion, scene interactions, and manipulations given a 3D environment and designated interaction cues.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfzAKR/content/2301.02667v1.pdf'} +page_content=' Abstract Synthesizing interaction-involved human motions has been challenging due to the high complexity of 3D environ- ments and the diversity of possible human behaviors within.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfzAKR/content/2301.02667v1.pdf'} +page_content=' We present LAMA, Locomotion-Action-MAnipulation, to synthesize natural and plausible long term human move- ments in complex indoor environments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfzAKR/content/2301.02667v1.pdf'} +page_content=' The key motivation of LAMA is to build a unified framework to encompass a series of motions commonly observable in our daily lives, including locomotion, interactions with 3D scenes, and ma- nipulations of 3D objects.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfzAKR/content/2301.02667v1.pdf'} +page_content=' LAMA is based on a reinforce- ment learning framework coupled with a motion matching algorithm to synthesize locomotion and scene interaction seamlessly under common constraints and collision avoid- ance handling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfzAKR/content/2301.02667v1.pdf'} +page_content=' LAMA also exploits a motion editing frame- work via manifold learning to cover possible variations in interaction and manipulation motions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfzAKR/content/2301.02667v1.pdf'} +page_content=' We quantitatively and qualitatively demonstrate that LAMA outperforms ex- isting approaches in various challenging scenarios.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfzAKR/content/2301.02667v1.pdf'} +page_content=' Project page: https://lama-www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfzAKR/content/2301.02667v1.pdf'} +page_content='github.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfzAKR/content/2301.02667v1.pdf'} +page_content='io/.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNE0T4oBgHgl3EQfzAKR/content/2301.02667v1.pdf'} +page_content=' 1.' 
In our daily lives, we can easily observe that humans do not live in isolation or in voids, but continuously interact with a complex environment surrounded by many objects. Notably, humans perform such a diverse set of daily-life actions effortlessly. Imagine that we visit a new indoor environment (e.g., a hotel room) we have never been to before. We can still easily figure out how to move from room to room, how to sit on a chair, how to open the doors of closets, and so on. However, endowing machines or virtual humans with such abilities is still a largely unexplored area, despite its importance.
Synthesizing scene interactions within real-life 3D environments has been a challenging research problem due to its complexity and diversity. Human movements in real life consist of various types of behaviors, including locomotion while avoiding cluttered areas, diverse interactions with 3D scenes, and sophisticated object manipulations. In particular, the spatial constraints that arise from real-life 3D environments, where many objects are cluttered, make motion synthesis highly constrained and complex, and the various possible arrangements of 3D environments make generalization difficult. As human-scene interactions cover a wide range of technical challenges, previous approaches have focused on sub-problems, such as (1) modeling static poses [17,24,49,64,69,71,72] or (2) human-object interactions with a single target object or interaction type [10,47,53-55,66,67,70]. Recent methods [15,59,60] extend to synthesizing dynamic interaction motions in cluttered real-world 3D scenes.
However, the performance of these methods is fundamentally limited by the lack of 3D ground-truth data that contains both human motions and paired 3D environments.
In this paper, we present LAMA, Locomotion-Action-MAnipulation, to synthesize natural and plausible long-term human motions in complex indoor environments. The key motivation of LAMA is to build a unified framework that includes locomotion, interactions with 3D scenes, and manipulations of 3D objects, which are the series of motions commonly observable in our daily lives. LAMA is based on a reinforcement learning framework coupled with a motion matching algorithm to synthesize locomotion and scene interaction seamlessly while adapting to complicated 3D scenes with collision avoidance handling. The reinforcement learning framework interprets the 3D information of the given scene and optimally traverses the motion capture database via motion matching. As an advantage, our system does not require any "scene-paired" datasets where human movements are captured simultaneously with the surrounding 3D environments, which are rarely available. To further cover the numerous variations of interaction motions, we also exploit an autoencoder-based motion editing approach to learn the motion manifold space [20] in which the editing is performed. Through extensive quantitative and qualitative evaluations against existing approaches, we demonstrate that our method outperforms previous methods in various challenging scenarios.
Our contributions are summarized as follows: (1) we present the first method to generate realistic long-term motions that combine locomotion, interaction with the scene, and manipulation in complicated, cluttered scenes; (2) we propose a novel, unified framework that synthesizes locomotion and human-scene interactions in a seamless manner, by introducing scene interpretation terms to a reinforcement learning based approach to automatically generate optimal transitions; and (3) our outputs show state-of-the-art motion synthesis quality with longer duration (more than 10 sec) than previous methods.

2. Related Work
Generating Human-Scene Interactions. Generating natural human motion has been a widely researched topic in the computer vision community. Early methods focus on synthesizing or predicting human movements by exploiting neural networks [11,13,35,38,46,56,58]. However, these approaches primarily address the synthesis of human motion itself, without taking into account the surrounding 3D environments. Recent approaches begin to tackle modeling and synthesizing human interactions within 3D scenes or with objects. Most of the research focuses on statically posing humans within a given 3D environment [16,24,69,71], by generating human-scene interaction poses from various types of input including object semantics [17], images [21,23,64,65,68], and text descriptions [49,72]. More recently, there have been approaches to synthesize dynamic human-object interactions (e.g., sitting on chairs, carrying boxes).

Figure 2. Overview of LAMA.
Starke et al. [53] introduce an autoregressive learning framework with object geometry-based environmental encodings to synthesize various human-object interactions. Later work [15,70] extends this by synthesizing motions conditioned on variations of objects and contact points. Other approaches [47,54,55,66,67] focus on generating natural hand movements for manipulation, which is extended by including full-body motions [54]. Physics-based character control to synthesize human-object interactions has also been explored in [8,10,39,47,66]. Although these approaches cover a wide range of human-object interactions, most of them solely focus on the relationship between the human and the target object without long-term navigation in cluttered 3D scenes.
More recent approaches, closely related to ours, generate natural human-scene interactions within a complex 3D scene cluttered with many objects [6,59-61]. These methods are trained using human motion datasets paired with 3D scenes, which require both ground-truth motions and simultaneously captured 3D scenes for supervision. Due to such difficulties, some methods exploit synthetic datasets [6,61] or data fitted from depth videos [60]. In previous approaches [15,59], navigation through cluttered environments is often performed by a separate module via a path planning algorithm (e.g., the A* algorithm) that approximates the volume of a human as a cylinder.
These path planning based methods approximate the spatial information of the scene and the human body and therefore have limitations under highly complex conditions.
Motion Synthesis and Editing. Synthesizing natural human motions by leveraging motion capture data has also been a long-researched topic in computer graphics. Some approaches [26,37] construct motion graphs, where plausible transitions are inserted as edges and motion synthesis is done by traversing the constructed graph. Similar approaches [31,51] connect motion patches to synthesize interactions in a virtual environment or multi-person interactions. Due to its versatility and simplicity, a number of variations have been made on the graph-based approach, such as motion grammars [22], which enforce traversing rules in the motion graph. Motion matching [5,9] can also be understood as a special case of motion graph traversal, where the plausible transitions are not precomputed but searched at runtime. Recent advances in deep learning allow leveraging motion capture data for motion manifold learning [19,20,52]. Autoregressive approaches based on variational autoencoders (VAE) [36,46] and recurrent neural networks [14,29,41] are also used to forecast future motions based on past frames. These frameworks are generalized to synthesizing a diverse set of motions including locomotion on terrains [19], mazes [36], action-specified motions [46], and interaction-involved sports [29,41]. Neural network-based methods are also reported to be successful in various motion editing tasks such as skeleton retargeting [2], style transfer [3,20], and in-betweening [14].
Reinforcement learning (RL) has also been successful in combination with both data-driven and physics-based approaches for synthesizing human motions.
Combined with data-driven approaches, these RL frameworks serve as a control module that generates motions corresponding to a given user input by traversing motion graphs [28], latent spaces [34,36,57], or precomputed transition tables [30]. Deep reinforcement learning (DRL) has also been widely used in physics simulation to synthesize physically plausible movements with a diverse set of motor skills [4,32,41,43-45,62].

3. Method
3.1. Overview
Our system, dubbed LAMA, outputs a sequence of human poses M = {m_t}, t = 1...T, by taking the 3D surrounding cues W and the desired interaction cues Φ as inputs:

M = LAMA(W, Φ).  (1)

The output posture at time t, m_t = (p_0, r_1, ..., r_J) ∈ R^(3J+3), is a concatenated vector of the global root position p_0 ∈ R^3 and the local joint orientations of J joints, where each j-th joint is in angle-axis representation r_j ∈ so(3). Throughout our system, the skeleton tree structure and joint offsets are fixed and shown in Fig. 3 (a). We represent the 3D environment W = {w_i} as a set of 3D object and environment meshes, including the background scene mesh and other object meshes targeted for manipulation. The interaction cues Φ = [φ_1, φ_2, ..., φ_n] are an ordered list of desired interaction inputs φ_i = {q_j}, j ∈ J_i, where q_j ∈ R^3 indicates the desired position of the j-th joint, and J_i is a set of joints specified for the interaction (in practice, a few joints such as the root¹ or end-effectors).
¹ For the root, orientation in angle-axis representation is also included in φ.
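To make the notation concrete, the following minimal Python sketch shows one way the posture m_t, the scene W, and an interaction cue φ_i could be represented as plain data containers; the class and field names are illustrative assumptions, not the authors' implementation.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

import numpy as np


@dataclass
class Posture:
    """One frame m_t = (p0, r1, ..., rJ) in R^(3J+3)."""
    root_position: np.ndarray      # p0, shape (3,)
    joint_rotations: np.ndarray    # angle-axis r_j for each of the J joints, shape (J, 3)


@dataclass
class InteractionCue:
    """One interaction cue phi_i: desired positions for a few specified joints."""
    target_positions: Dict[str, np.ndarray]         # e.g. {"root": ..., "right_hand": ...}
    root_orientation: Optional[np.ndarray] = None   # angle-axis; only used when the root is specified


@dataclass
class Scene:
    """3D environment W: background scene mesh plus object meshes targeted for manipulation."""
    meshes: List[object] = field(default_factory=list)   # mesh objects from any geometry library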
Examples of the 3D environment W and interaction inputs φ_i are shown in Fig. 5 (a). Intuitively, φ_i specifies the expected positions of selected joints of the human character. Note that we do not specify the exact timing of the interaction, as the timing is automatically determined by our action controller. More details are addressed in Sec. 3.4.
To synthesize locomotion, interaction, and manipulation together, LAMA is designed as a three-level system composed of the action controller A and the motion synthesizer S, followed by a manifold-based motion editor E. Taking the 3D scene cues W and the desired interaction cues Φ as input, the action controller A makes use of a reinforcement learning (RL) framework by training a control policy π to sample an action at time t, π(a_t | s_t, W, Φ), where a_t contains the plausible next action cues, including predicted action types and short-term future forecasting. s_t is the state cue representing the current status of the human character, including its body posture, surrounding scene occupancy, and current target interaction cue, which can be computed via a function ψ, s_t = ψ(m_{t-1}, m_t, W, Φ). Intuitively, the action controller A predicts the plausible next action cues a_t by considering the current character-scene state s_t.

Figure 3. (a) Skeleton with joints and box nodes. (b) Automatically detected collision points (colored red).
The generated action signals a_t from the action controller A are provided as the input to the motion synthesizer S, which then determines the posture at the next time step m_{t+1}, i.e., S(m_t, a_t) = m_{t+1}. Afterwards, the character's next state can be computed again via s_{t+1} = ψ(m_t, m_{t+1}, W, Φ), which is fed to the action controller recursively. After the initial motion generation by A and S, our system furthermore applies a motion editor E(M) = M̃, where M̃ = {m̃_t}, t = 1...T, is the edited motion that further expresses motions involving complex human-object interactions such as manipulation (e.g., moving objects, opening doors). Fig. 2 shows the overview of LAMA.
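The recursion among ψ, π, and S described above, followed by the editing stage E, can be summarized as a short control loop. The following is a minimal Python sketch written for this text (not the authors' code); policy, synthesize, compute_state, and edit_motion are placeholder callables standing in for π, S, ψ, and E.

def generate_motion(scene, interaction_cues, initial_postures, policy,
                    synthesize, compute_state, edit_motion, num_frames):
    """Run the action controller / motion synthesizer loop, then apply the motion editor."""
    motion = list(initial_postures)                       # seed postures [m_0, m_1]
    for t in range(1, num_frames):
        # s_t = psi(m_{t-1}, m_t, W, Phi): posture, scene occupancy, current interaction cue
        state = compute_state(motion[t - 1], motion[t], scene, interaction_cues)
        # a_t ~ pi(a_t | s_t, W, Phi): action type, short-term future cues, posture offset
        action = policy(state)
        # m_{t+1} = S(m_t, a_t): motion matching plus the learned offset
        motion.append(synthesize(motion[t], action))
    # ~M = E(M): task-adaptive manifold editing (Sec. 3.5)
    return edit_motion(motion)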
3.2. Scene-Aware Action Controller
Based on reinforcement learning, our action controller A enables the character to perform locomotion and the desired actions while fulfilling the interaction cues Φ and avoiding collisions in the 3D environment W. A is a trained control policy π(a_t | s_t, W, Φ). Different from previous approaches where navigation and scene-object interactions (e.g., sitting) are performed by separate modules [15,59], our RL-based framework performs both in a unified way with a common objective by automatically determining the transition from navigation to specific actions. As a key advantage, LAMA can be robustly generalized to challenging unseen 3D clutter in long-term human motion synthesis and also outperforms previous methods by avoiding collisions throughout the whole process, including navigation and interaction.
State. The state s_t = ψ(m_{t-1}, m_t, W, Φ) at time t is a feature vector representing the current status of the human character. s_t = (s_body, s_scene, s_inter) is composed of the body configuration s_body, the 2D scene occupancy s_scene, and the current target interaction s_inter. The body configuration s_body = {r, ṙ, θ_up, h, p_e} includes r, ṙ ∈ R^(J'×6), the joint rotations and velocities for the J' joints excluding the root in 6D representations [73]; θ_up ∈ R, the up vector of the root (represented by the angle w.r.t. the Y-axis); h ∈ R, the root height from the floor; and p_e ∈ R^(e×3), the end-effector positions in person-centric coordinates (where e is the number of end-effectors). s_scene = {g_occ, g_root} includes scene occupancy information on the 2D floor plane, as shown in Fig. 4. g_occ ∈ R^(n²) represents the 2D occupancy grid on the floor plane of the neighboring n cells around the agent, and g_root ∈ R^2 denotes the current 2D global root position of the character in the discretized grid plane. s_inter is an element of Φ and represents the interaction cue the character is currently targeting, that is, s_inter = φ_i.
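For illustration only, the sketch below shows one possible way to assemble such a state vector, including a 2D occupancy patch around the root; the global floor-occupancy map, cell size, grid resolution, and feature ordering are assumptions not specified at this level of detail in the paper.

import numpy as np

def occupancy_patch(floor_occupancy, cell_size, root_xz, n=16):
    """g_occ: the n x n occupancy cells around the root, taken from a global 2D floor map.

    floor_occupancy: 2D boolean array (True = occupied) covering the whole scene floor.
    cell_size: side length of one grid cell in meters.
    root_xz: character root position projected onto the floor plane, shape (2,).
    """
    cx, cz = (np.asarray(root_xz) / cell_size).astype(int)
    half = n // 2
    patch = np.zeros((n, n), dtype=np.float32)
    for i in range(n):
        for j in range(n):
            gi, gj = cx - half + i, cz - half + j
            if 0 <= gi < floor_occupancy.shape[0] and 0 <= gj < floor_occupancy.shape[1]:
                patch[i, j] = float(floor_occupancy[gi, gj])
    return patch.reshape(-1)                              # flattened, length n^2

def build_state(body_features, floor_occupancy, cell_size, root_xz, interaction_cue):
    """Concatenate s_body, s_scene = (g_occ, g_root), and s_inter into one flat vector."""
    g_occ = occupancy_patch(floor_occupancy, cell_size, root_xz)
    g_root = np.asarray(root_xz) / cell_size              # root position in grid coordinates
    return np.concatenate([body_features, g_occ, g_root, interaction_cue])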
Action. Given the current status of the character s_t, the control policy π outputs the feasible action a_t = (a_type, a_future, a_offset). a_type provides the probabilities of the next action type among all possible actions, determining the transition timing between actions (e.g., from locomotion to sitting). a_future predicts future motion cues such as plausible root positions for the next 10, 20, and 30 frames. a_offset is intended to update the raw motion data searched from the motion database in the motion synthesizer module S. Intuitively, our learned control policy generates an optimal posture offset a_offset that is applied to the closest plausible raw posture in the database. This enables the character to perform more plausible scene-aware poses, allowing our system to generalize to unseen 3D scenes given a limited amount of motion capture data. More details are addressed in Sec. 3.3.

3.3. Motion Synthesizer
Given the current motion output m_t and the action signals a_t from the action controller A as inputs, the motion synthesizer produces the next plausible character posture: S(m_t, a_t) = m_{t+1}. As the first step, the motion synthesizer searches the motion database for the motion whose feature best matches the query, and then modifies the searched raw motion to be more suitable for the scene environment, as sketched below. The output m_{t+1} is in turn fed into the action controller recursively.
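A minimal sketch of this two-step synthesis (database search followed by the offset update τ) is given below; the attribute names on the action and database objects, and the helper callables, are illustrative assumptions rather than the paper's implementation.

def synthesize_next_posture(current_posture, action, database, feature_fn, search_best_match):
    """S(m_t, a_t): pick the best-matching database frame, then apply the learned offset."""
    # Query feature built from the current posture and the controller's predicted future cues.
    query = feature_fn(current_posture, action.action_type, action.future_cues)
    # Nearest-neighbour lookup over precomputed database features (Eq. 2, Sec. 3.3).
    matched = search_best_match(query, database)
    # Take the frame following the match and let a_offset adjust selected joints (tau).
    next_index = min(matched + 1, len(database.postures) - 1)
    return database.postures[next_index] + action.posture_offset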
We exploit a modified version of the motion matching algorithm [5,9,18] for the first step of motion synthesis. In motion matching, synthesis is performed periodically by searching for the most plausible next short motion segment from a motion database and compositing the segments into a long connected sequence.

Figure 4. Visual representation of the 2D occupancy grid near the root. The grid on the right represents the top view. Blue indicates the root position, gray indicates that the space is occupied, and occupied cells near the root are colored black.

Motion features. A motion feature represents the characteristics of each frame in a short motion segment and is computed as f(m) = {{p_j}, {ṗ_j}, θ_up, c, o_future}. From a posture m, the positions and velocities p_j, ṗ_j ∈ R^3 are extracted for the selected joints j ∈ {Head, Hand, Foot}, defined in the person-centric coordinates of m. θ_up ∈ R^3 is the up-vector of the root joint, and c ∈ {0, 0.5, 1} indicates the automatically computed foot contact cues of the left and right foot (0 for non-contact, 1 for contact, 0.5 for non-contact but close to the floor within a threshold). o_future = {{p_0^dt}, {r_0^dt}} contains cues for the short-term future postures, where p_0^dt and r_0^dt are the position and orientation of the root joint at dt frames after the current target frame. o_future is computed on the 2D XZ plane in the person-centric coordinates of the current target motion m, and thus p_0^dt, r_0^dt ∈ R^2.
The selected future frames are action-type specific; for locomotion we extract the frames 10, 20, and 30 steps in the future (at 30 Hz), following [9]. Intuitively, the motion feature captures the target frame's posture and temporal cues by considering the neighboring frames². We pre-compute motion features for every frame of the motion clips in the motion database. The motion feature of the current state of the character, or the query feature, is computed in the same way from the postures m_{t-1}, m_t and the a_future cues produced by the action controller, that is, x_t = f(m_{t-1}, m_t, a_type, a_future). The component a_future serves as o_future in the query feature, which can be understood as the action controller providing cues for the predicted future postures.
Motion searching and updating. The query motion feature x_t of the current character is computed as addressed above; let y_k denote the motion feature of the k-th clip frame in the database. Motion searching finds the best match in the motion database by computing the weighted Euclidean distance between the query feature and the database features:

k* = argmin_k || w_f^T (x_t - y_k) ||²,  (2)

where w_f is a fixed weight vector to control the importance of the feature elements.
² In practice, the input of the feature extractor f should take into account the motions of neighboring timesteps.
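As an illustration, a motion feature along the lines of f(m) and the weighted search of Eq. 2 could be implemented as below; the joint selection, future offsets (10/20/30 frames at 30 Hz), and contact labels follow the description above, while the exact array layout and the helper inputs are assumptions.

import numpy as np

def motion_feature(joint_pos, joint_vel, root_up, foot_contacts, future_root_xz, future_root_dir):
    """f(m): posture and short-term trajectory cues for one frame.

    joint_pos, joint_vel: person-centric positions / velocities of the selected joints
        (head, hands, feet), each of shape (k, 3).
    root_up: up-vector of the root joint, shape (3,).
    foot_contacts: labels in {0, 0.5, 1} for the left and right foot, shape (2,).
    future_root_xz, future_root_dir: root position and facing on the XZ plane at
        +10 / +20 / +30 frames (30 Hz), each of shape (3, 2); together they form o_future.
    """
    return np.concatenate([
        np.reshape(joint_pos, -1),
        np.reshape(joint_vel, -1),
        np.asarray(root_up),
        np.asarray(foot_contacts),
        np.reshape(future_root_xz, -1),
        np.reshape(future_root_dir, -1),
    ])

def best_match(query, database_features, w_f):
    """Eq. 2: index of the database feature closest to the query under per-dimension weights w_f."""
    diff = database_features - query[None, :]
    return int(np.argmin(np.sum((w_f * diff) ** 2, axis=1)))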
After finding the best match m̂_{k*} in the motion database, the motion synthesizer further updates it with the predicted motion offset a_offset from a_t, that is, τ(m̂_{k*+1}, a_offset) = m_{t+1}, where m̂_{k*+1} is the next plausible character posture and τ is an update function applied to selected joints. In practice, the motion searching is performed periodically (e.g., every N frames) to make the synthesized motion temporally more coherent.

3.4. Learning for Scene-Aware Action Controller
In the reinforcement learning framework, the objective is to learn the optimal policy that maximizes the discounted cumulative reward. In our method, we design rewards to guide the agent to perform locomotion towards the target object (e.g., a sofa) and to perform the desired interaction with the object (e.g., sitting). In particular, our RL framework performs both navigation and interaction with common constraints (e.g., smooth transitions, collision avoidance). Our reward function consists of the following terms:

R_total = w_tr R_tr + w_act R_act + w_reg R_reg,  (3)

where w_tr, w_act, and w_reg are weights to balance the reward terms. The trajectory reward R_tr is obtained when the character moves towards the desired interaction input φ while meeting the spatial constraints of the surrounding 3D scene:

R_tr = r_coli · r_pos · r_vel, where  (4)
r_coli = exp( -(1/σ_coli²) Σ_{b∈B} w_b ρ(b, W) ),  (5)
r_pos = exp( -(1/σ_root²) Σ_{j∈J} || p_0 - q_j ||² ),  (6)
r_vel = 1 if the root speed ||ṗ_0|| ≥ σ_th, and σ_vel ||ṗ_0||² otherwise.  (7)

The collision-avoidance reward r_coli penalizes collisions with the 3D scene.
As depicted in Fig. 3 (a), the body limbs in the skeletal structure are represented as a set of box-shaped nodes B with a fixed width, where each element b ∈ B is a 3D box representation of a leg or arm (we exclude the torso and head). The function ρ(b, W) detects collisions between the edges of a box-shaped node b and the 3D scene meshes W and returns the number of intersection points (Fig. 3 (b)). w_b is a weight controlling the importance of each limb b. The collision-avoidance reward is maximized when no penetration occurs, making the control policy π find the optimal trajectory and pose offsets to avoid physically implausible collisions and penetrations. r_pos is obtained when the agent moves to reach the targeted interaction cue φ, by encouraging the agent's root position p_0 to be closer to the target interaction cue {q_j}. r_vel encourages the character to move by penalizing the case where the root velocity ṗ_0 is less than a threshold σ_th. σ_coli, σ_root, and σ_vel are weights to control the balance between the terms.
The action reward R_act enforces the synthesized motion to fulfill the given interaction cue φ = {q_j}:

R_act = r_inter · r_∆t · r_∆v, where
r_inter = exp( -(1/σ_inter²) Σ_{j∈J} || p_j - q_j ||² ),
r_∆t = exp( -σ_∆t² C_tr ),  r_∆v = exp( -σ_∆v² C_vel ),  (8)

where the interaction reward r_inter is maximized when the performed action meets the positional constraints provided by the interaction cues. The smoothness reward terms r_∆t and r_∆v minimize the transition cost, based on sub-parts of the feature distance defined in Eq. 2: C_tr is the weighted feature distance of p_j, θ_up, and c, and C_vel is that of ṗ. These terms penalize the case where the character makes abrupt changes.
The regularization reward R_reg penalizes a_offset when it excessively modifies the original posture from the motion synthesizer, denoted m̂_t, and maintains temporal consistency among frames:

R_reg = exp( -(1/σ_reg²) ( || m̂_t - m_t ||² + || m_t - m_{t-1} ||² ) ).

It is reported [33,41] that multiplying rewards with consistent goals is suitable for learning, as the reward is received only when the conditions are simultaneously met. Furthermore, to accelerate learning, we use early termination conditions [43] and limited action transitions. The episode is terminated when the character moves out of the scene bounding box, or when the collision reward r_coli falls under a certain threshold. Also, the action controller first checks in advance whether the action signal is valid when it makes a transition from locomotion to another action: when the nearest feature distance of Eq. 2 in the motion synthesizer is over a certain threshold, the action controller discards the transition and continues navigating. The control policy is learned with the Proximal Policy Optimization (PPO) algorithm [50].
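Putting Eqs. (3)-(8) and R_reg together, a per-step reward could be computed as in the following sketch; the σ values, weights, and helper inputs (per-limb collision counts, feature-distance terms C_tr and C_vel) are placeholders for illustration, and only the multiplicative and exponential structure follows the text.

import numpy as np

def trajectory_reward(collision_counts, limb_weights, root_pos, cue_positions, root_speed,
                      sig_coli=1.0, sig_root=1.0, sig_vel=1.0, sig_th=0.1):
    """R_tr = r_coli * r_pos * r_vel (Eqs. 4-7), with illustrative sigma values."""
    r_coli = np.exp(-np.dot(limb_weights, collision_counts) / sig_coli ** 2)
    r_pos = np.exp(-sum(np.sum((root_pos - q) ** 2) for q in cue_positions) / sig_root ** 2)
    r_vel = 1.0 if root_speed >= sig_th else sig_vel * root_speed ** 2
    return r_coli * r_pos * r_vel

def action_reward(joint_positions, cue_targets, c_tr, c_vel,
                  sig_inter=1.0, sig_dt=1.0, sig_dv=1.0):
    """R_act = r_inter * r_dt * r_dv (Eq. 8); c_tr and c_vel are the feature-distance terms."""
    pos_err = sum(np.sum((joint_positions[j] - q) ** 2) for j, q in cue_targets.items())
    r_inter = np.exp(-pos_err / sig_inter ** 2)
    r_dt = np.exp(-(sig_dt ** 2) * c_tr)
    r_dv = np.exp(-(sig_dv ** 2) * c_vel)
    return r_inter * r_dt * r_dv

def total_reward(r_tr, r_act, r_reg, w_tr=1.0, w_act=1.0, w_reg=1.0):
    """R_total = w_tr R_tr + w_act R_act + w_reg R_reg (Eq. 3)."""
    return w_tr * r_tr + w_act * r_act + w_reg * r_reg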
3.5. Task-Adaptive Motion Editing
Interaction includes a massively diverse pool of motions, and these variations cannot be fully handled by a limited motion database. In order to cover such diversity, we include a task-adaptive motion editing module in our motion synthesis framework. The goal of the editing module E is (1) to edit the motion M to fit diverse target object geometries (e.g., sitting on chairs with different heights), and (2) to generate additional hand movements for manipulation (e.g., grasping). In particular, in the case of manipulation, an additional interaction cue φ can be provided to enforce an end-effector (e.g., a hand) to follow a desired trajectory expressing the manipulation task on the target object, as shown in Fig. 8 (left). The edited motion M̃ = E(M) should not only fulfill the sparsely given positional constraints, but also preserve the temporal consistency between frames and the spatial correlations among joints in order to maintain its naturalness. We adopt the motion manifold learning approach with convolutional autoencoders [20] to compress a motion into a latent vector within a motion manifold space; motion editing is then done by searching for an optimal latent vector in the manifold. For training the autoencoder, the motion sequence, which we denote as X (converted from M), is represented as a time series of human postures by concatenating the joint rotations in 6D representations [73], the root height, the root transform relative to the previous frame projected on the XZ plane, and the foot contact labels.

Figure 5. Visual representation of the system inputs Φ, W and the output motion sequence. On the left, interaction cues are shown as cyan spheres and arrows (indicating orientation). On the right is the synthesized human motion M̃.
The encoder and decoder modules are trained with the reconstruction loss ||X − Ψ⁻¹(Ψ(X))||², where Ψ is the encoder and Ψ⁻¹ is the decoder. The latent vector from the encoder, z = Ψ(X), represents the motion manifold space by preserving the spatio-temporal relationships among joints and frames within the motion sequence. As demonstrated in [20], editing motions in this manifold space ensures that the edited motion remains realistic and temporally coherent. To this end, we find the optimal latent vector z* by minimizing a loss function L that constrains the output motion to follow the interaction constraint φ. We also include additional regularizers in L so that the output motion maintains the foot locations and root trajectory of the original motion. See the supp. mat. for more details on L. Finally, the edited motion M̃ is computed via Ψ⁻¹(z*).
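A minimal sketch of this latent-space search is given below, assuming the MotionAutoencoder sketched earlier. The implementation details report Adam and roughly 500 optimization epochs; the learning rate and the exact form of loss_fn (the constraint φ plus the regularizers) are assumptions.

    import torch

    def edit_motion(autoencoder, X, loss_fn, epochs=500, lr=1e-3):
        # Encode the motion, then optimize the latent code so the decoded
        # motion satisfies the interaction constraints inside loss_fn.
        with torch.no_grad():
            z = autoencoder.encoder(X)
        z = z.clone().requires_grad_(True)
        optim = torch.optim.Adam([z], lr=lr)
        for _ in range(epochs):
            optim.zero_grad()
            X_edit = autoencoder.decoder(z)
            loss = loss_fn(X_edit, X)   # positional constraints + regularizers
            loss.backward()
            optim.step()
        return autoencoder.decoder(z).detach()   # edited motion ~M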
4. Experiments

We evaluate LAMA's ability to synthesize long-term motions involving various human-scene and human-object interactions. We use an extensive set of quantitative metrics and a perceptual study to evaluate the physical plausibility and naturalness of the synthesized motions.

Table 1. Baseline comparison. Foot slip loss (cm, ↓) is averaged over all frames. Penetration loss (percentage, ↓) is counted based on intersection points between the 3D environment and the skeleton. Naturalness scores are based on the Fréchet distance (FD, ↓). Wang et al. with an asterisk indicates results without the post-processing optimization.

Method             | Slip  | Penetration | FD_total | FD_root | FD_joint
Wang et al. [60]   | 5.13  | 3.88        | 1.38     | 0.45    | 0.93
Wang et al. [60]*  | 24.8  | 4.58        | 1.44     | 0.44    | 1.00
SAMP [15]          | 10.5  | 12.49       | 1.25     | 0.30    | 0.95
LAMA (ours)        | 5.21  | 1.52        | 1.22     | 0.31    | 0.91
Dataset. To construct the database for the motion synthesizer, motion capture data are selectively collected and refined from Ubisoft La Forge [14], COUCH [70], and SAMP [15]. All data used in this system are motion capture data (in BVH format) with no scene- or object-related information, and are retargeted onto a unified skeletal structure with MotionBuilder. We use the PROX [16] and Matterport3D [7] datasets for 3D environments and SAPIEN [63] object meshes for manipulation. Our code and pre-processed data will be publicly released.

Implementation Details. The policy and value networks of the action controller module consist of 4 and 2 fully connected layers of 256 nodes, respectively. The encoder and decoder of the task-adaptive motion editing module consist of three convolutional layers. The Adam optimizer [25] is used for training and optimization. We use an Nvidia RTX 3090 for training the action controller and the motion editing module. It takes 10 to 80 minutes to learn a single control policy, where the training time mainly depends on how difficult the given interaction cues are to achieve. The optimization in the motion editing module takes 3 to 4 minutes for 500 epochs. See the supp. mat. for more details.
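The network sizes above translate directly into simple MLPs; a hedged sketch follows, where state_dim and action_dim are placeholders since the exact observation and action dimensions are not restated in this section.

    import torch.nn as nn

    def mlp(in_dim, out_dim, num_layers, width=256):
        # Stack of fully connected layers with 256 nodes each, matching the
        # layer counts reported for the action controller.
        layers, d = [], in_dim
        for _ in range(num_layers):
            layers += [nn.Linear(d, width), nn.ReLU()]
            d = width
        layers.append(nn.Linear(d, out_dim))
        return nn.Sequential(*layers)

    # policy_net = mlp(state_dim, action_dim, num_layers=4)  # policy network
    # value_net  = mlp(state_dim, 1,          num_layers=2)  # value network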
4.1. Experimental Setup

Evaluation metrics. Quantifying motion synthesis quality is challenging due to the lack of ground-truth data or standard evaluation metrics. We quantify quality in terms of physical plausibility and naturalness.

Physical plausibility: We use contact and penetration metrics to evaluate the physical plausibility of the synthesized motions. The contact loss penalizes foot movement while the foot is in contact; since foot contact is a critical element of dynamics, a contact-based metric is closely related to the physical plausibility of a motion. The penetration loss ("Penetration" in Table 1) measures implausible cases in which the body penetrates objects in the scene. We compute the penetration metric by counting frames where the number of intersection points (Sec. 3.4) exceeds a certain threshold (10 for legs and 7 for arms).

Figure 6. Comparison between LAMA (left) and LAMA without the collision reward (right). As shown on the right, without the collision reward the character fails to avoid collisions with obstacles (marked in red).

Naturalness: We measure the naturalness of the synthesized motions with the Fréchet distance (FD), as reported in [15, 35, 40], between the synthesized motions and motions from motion capture data. Features are extracted from the motion sequences and the Fréchet distance is computed on the extracted features. We measure the naturalness of the character root movements, FD_root, including root orientation and velocity, and of the character joint rotations, FD_joint.
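A sketch of how these metrics could be computed is given below. The slip averaging details and the choice of a Gaussian-fit Fréchet distance are assumptions; only the threshold values (10 for legs, 7 for arms) and the general definitions come from the text above.

    import numpy as np
    from scipy.linalg import sqrtm

    def foot_slip_cm(foot_pos, contact):
        # Mean foot displacement (cm) over frames labeled as in contact.
        # foot_pos: (T, 3) positions in meters, contact: (T,) booleans.
        d = np.linalg.norm(np.diff(foot_pos, axis=0), axis=1) * 100.0
        mask = contact[1:] & contact[:-1]
        return float(d[mask].mean()) if mask.any() else 0.0

    def penetration_rate(intersections_per_frame, thresh=10):
        # Fraction of frames whose intersection-point count (Sec. 3.4)
        # exceeds the threshold (10 for legs, 7 for arms).
        counts = np.asarray(intersections_per_frame)
        return float((counts > thresh).mean())

    def frechet_distance(feats_a, feats_b):
        # Fréchet distance between Gaussian fits of two feature sets (N, D).
        mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
        cov_a = np.cov(feats_a, rowvar=False)
        cov_b = np.cov(feats_b, rowvar=False)
        covmean = sqrtm(cov_a @ cov_b).real
        return float(np.sum((mu_a - mu_b) ** 2)
                     + np.trace(cov_a + cov_b - 2.0 * covmean))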
Baselines. We compare LAMA with state-of-the-art approaches as well as with variations of our own method. Wang et al. [60] is the state-of-the-art long-term motion synthesis method for human-scene interactions within a given 3D scene; we use the authors' code for evaluation. As Wang et al. use optimization to post-process the synthesized motion to improve foot contact and reduce collisions, we compare against Wang et al. both with and without this optimization. SAMP [15] generates interactions that generalize not only across object variations but also across random starting points within a given 3D scene; SAMP explicitly relies on a path planning module to navigate through cluttered 3D environments.

Ablative baselines. We perform ablation studies on the action controller and the task-adaptive motion editing module. We ablate the collision reward r_coli and the action offset a_t^offset to show the contribution of both terms to our system's ability to generate scene-aware motions. We also compare our method without the transition reward terms r_∆t and r_∆v (Sec. 3.4) in the action controller. Finally, we demonstrate the strength of our task-adaptive motion editing module in editing motions naturally (Sec. 3.5) by comparing it with inverse kinematics (IK).
4.2. Comparisons with Previous Work

Quantitative Evaluation. We compare the methods in 6 different scenarios from various 3D scenes of the PROX dataset [16].

Figure 7. Comparison between LAMA (left) and LAMA without the action offset (right). The character in the original LAMA moves forward while tilting its arms to avoid collision with the walls, whereas the character in LAMA without the action offset does not.

Foot contact is automatically labeled based on the positional velocity of the foot joint, and the foot slip metric is measured from foot joint positions. To compute the penetration metric fairly, the SMPL-X outputs of Wang et al. and SAMP are converted to box-shaped skeletons as in ours, and intersection points are counted. Table 1 shows the results. As shown, our LAMA outperforms Wang et al. in both naturalness and physical plausibility. Note that Wang et al. performs post-processing optimization that explicitly minimizes foot slip, and yet LAMA still shows on-par performance against it (and better results on all other metrics).
Compared with SAMP, our method shows much better results on the plausibility metrics (both Slip and Penetration) and slightly better performance in naturalness. Unlike SAMP, which relies on a separate navigation module, our RL-based action controller handles collision avoidance in the same framework as scene interaction and shows much better performance in complex and cluttered 3D scenes.

A Human Study. To further validate our results, we compare the quality of our output against the other baselines, Wang et al. and SAMP, through A/B testing with human observers. For the study, we choose 5 scenarios from different indoor scenes and render the results of each method using exactly the same views and 3D characters, so that the methods cannot be distinguished by appearance. We build two separate sets, where in each set the result videos of our method are shown side by side with one competitor in a random order. Human observers are asked to choose the motion clip that is more human-like and plausible in the given 3D scene. We perform each set of tests with 15 non-overlapping participants. See our supp. mat. for more details about the study setup. As a result, the outputs of our method are preferred by the majority (more than 50% of the votes) in all cases. Considering all votes independently, our method is preferred 80.0% of the time over SAMP and 97.3% over Wang et al.'s work.
In particular, we found that our method greatly outperforms the competing methods in terms of the naturalness of foot stepping, the transitions between locomotion and action, and collision avoidance with the scene. See our supp. videos for more results.

Figure 8. (a) Comparison between LAMA (top) and LAMA with the manifold replaced by IK (bottom) for a character opening a toilet lid. (b) Comparison between LAMA (top) and LAMA without motion editing (bottom) when sitting.

4.3. Ablation Studies

Ablation Studies on Action Controller. We quantitatively compare the original LAMA and LAMA without the collision reward r_coli, in order to demonstrate the role of r_coli in enforcing the action controller to search for optimal actions that generate collision-free motions. The ablation studies are performed in 5 PROX scenes. In the original LAMA, penetrations occur in only 1.1% of the frames over all motion sequences, while the ratio is 15.7% for LAMA without the collision reward. This result supports that the collision reward r_coli enforces the action controller to compute optimal actions for synthesizing body movements that respect the spatial constraints of the given 3D scene. Example results are shown in Fig. 6.
We also examine the contribution of the other components of the action controller module to generating natural interactions. As seen in Fig. 7, with the action controller without a_t^offset, the character fails to avoid penetrating objects or walls, since the raw motion from the motion database carries no information about the scene. This demonstrates that the action offset also plays a role in generating detailed scene-aware poses even from raw motion capture data. Moreover, the results of the action controller without the smoothness rewards r_∆t and r_∆v are not smooth enough, showing unnatural movements such as jerking. These ablation studies justify the advantages of our reward terms.

Ablation Studies on Task-Adaptive Motion Editing. We ablate our motion editing module by replacing it with an alternative approach based on inverse kinematics (IK). An example result is shown in Fig. 8 (left). For manipulation, the IK results show jerky and awkward motions, because the temporal and inter-joint correlations of natural human motion are not reflected in IK, whereas the original LAMA with the task-adaptive motion editing module produces much more natural motions.

Figure 9. Examples of synthesized manipulation motions. The target object for manipulation is colored orange. The top is a motion sequence of walking and opening a toilet lid, and the bottom is a sequence of walking and opening doors. The character is colored purple at the start and aqua at the end.
Our motion editing module can also be used to further adjust the character's movements to different object geometries, going beyond the limits of the motion database. As seen in Fig. 8 (right), the motion editing module enables the character to properly sit on chairs of various sizes.

5. Discussion

In this paper, we present a method to synthesize locomotion, scene interaction, and manipulation in a unified system. Leveraging an RL framework coupled with motion matching, our method produces natural and plausible human motions in complex and cluttered 3D environments using only a limited amount of motion-only data. Our method has been thoroughly evaluated in diverse scenarios, outperforming previous approaches [15, 60]. We also demonstrate the robustness and generalization ability of our system by covering a wide range of human interactions in many different 3D environments. While our RL-based method generalizes to unseen 3D environments, a new control policy has to be trained for each motion sequence; combining RL with a supervised learning framework for better efficiency is an interesting direction for future research. Furthermore, although we assume fixed skeletal information throughout the system, interaction motions may change depending on the character's body shape and size. We leave synthesizing motions for varying body shapes as future work.
Acknowledgments: This work was supported by the SNU-Naver Hyperscale AI Center, the SNU Creative-Pioneering Researchers Program, and an NRF grant funded by the Korea government (MSIT) (No. 2022R1A2C209272411).

A. Supplementary Video

The supplementary video shows the results of our method, LAMA, in various scenarios. In the video, we show human motion synthesis results on PROX [16], Matterport3D [7], and our own home-brewed 3D scene captured with the Polycam app [1] on an iPad Pro. We use SAPIEN [63] object meshes for the manipulation examples. As shown, our method successfully produces plausible and natural human motions in many challenging scenarios. The supplementary video also contains several ablation studies of our method, showing the importance of the collision reward r_coli in Eq. (4), the transition rewards (r_∆t, r_∆v) in Eq. (8), the posture offset a_t^offset in the action controller (Sec. 3.2), and our motion editing module (Sec. 3.5) compared with traditional inverse kinematics (IK). We also show comparisons with previous state-of-the-art methods [15, 59, 60] and demonstrate that our results have better motion quality and better collision avoidance in complicated 3D scenes.
B. Additional Details on Implementations

B.1. Action Controller

Implementation Details. For the action controller A and the motion synthesizer module S, we use the animation library DART [27]. We also use a publicly available PPO implementation [32, 41], from which we remove the variable time-stepping functions of [32], following the original PPO algorithm. The training details for the policy and value networks of the action controller are given in Table 2.

Early Termination Conditions. As described in the main paper, the episode is terminated (1) when the character moves out of the scene bounding box; (2) when the collision reward r_coli falls below a certain threshold; or (3) when the root of the human character lies in a blocked (occupied) region of the scene's 2D grid space while in the locomotion state.

Table 2. Hyper-parameters for learning the control policy of the action controller A.

Name                               | Value
Learning rate of policy network    | 2e-4
Learning rate of value network     | 0.001
Discount factor (γ)                | 0.95
GAE and TD (λ)                     | 0.95
Clip parameter (ε)                 | 0.2
# of tuples per policy update      | 30000
Batch size for policy/value update | 512
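The three early-termination conditions above can be expressed as a simple per-step check; a sketch follows, where the collision-reward threshold and the occupancy_grid interface (a hypothetical helper whose is_blocked(x, z) returns True for a blocked cell) are assumptions.

    def should_terminate(root_pos, scene_bbox, r_coli, occupancy_grid,
                         is_locomotion, coli_thresh=0.1):
        # scene_bbox: ((x_min, z_min), (x_max, z_max)) on the ground plane.
        (x_min, z_min), (x_max, z_max) = scene_bbox
        x, _, z = root_pos
        out_of_bounds = not (x_min <= x <= x_max and z_min <= z <= z_max)    # (1)
        low_collision_reward = r_coli < coli_thresh                          # (2)
        in_blocked_cell = is_locomotion and occupancy_grid.is_blocked(x, z)  # (3)
        return out_of_bounds or low_collision_reward or in_blocked_cell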
B.2. Motion Synthesizer

Motion Database Information. As described in the main paper, we pre-process the motion segments by selectively collecting and clipping data from Ubisoft La Forge [14], COUCH [70], and SAMP [15]. The length (in frames) of the motion segments ("Seg. Length" in the tables), the number of motion segments ("Seg. Count"), and the total number of frames ("Total Frames") are summarized in Table 3.

Action-Specific Feature Definition. The motion feature, as defined in Sec. 3.3 of the main paper, represents both the current state of the motion and short-term future movements: f(m) = {{p_j}, {ṗ_j}, θ_up, c, o_future}. In particular, the action-specific feature o_future = {{p_0^dt}, {r_0^dt}} contains future motions so that the motion search can take future motion consistency into account, where p_0^dt, r_0^dt ∈ R² are the position and orientation of the root joint dt frames after the current target frame. For locomotion, we extract dt = 10, 20, and 30 frames into the future (at 30 Hz) following [9], as addressed in the main paper. For sitting, we specifically choose dt as the frame at which the character completes the sit-down motion. The main motivation of this design choice is to encourage the motion synthesizer to search for motion clips with the desired target action.
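Conceptually, the motion synthesizer performs a nearest-feature lookup over the pre-computed features f(m) of all database frames. The sketch below uses a plain Euclidean distance and a cutoff for discarding invalid action transitions; the weighting inside the actual feature distance (Eq. 2 of the main paper) is not restated here, so both are assumptions.

    import numpy as np

    def search_motion_database(query_feature, db_features, max_dist=None):
        # db_features: (N, D) features of all candidate frames/segments.
        dists = np.linalg.norm(db_features - query_feature[None, :], axis=1)
        best = int(np.argmin(dists))
        if max_dist is not None and dists[best] > max_dist:
            return None        # transition discarded, keep navigating
        return best            # index of the matched motion segment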
Computation Cost for Searching. Searching the motion database takes 1 to 2 milliseconds on a CPU (tested on an AMD Ryzen 5950X). The number of searches varies and depends on the 3D scene and the desired motions. In one of our scenarios, a total of 17 searches were performed for locomotion (walk) and 14 for the action (sit). For locomotion, the search time is 1.743 milliseconds on average (standard deviation 0.46), and for the action (sit) it is 1.103 milliseconds (standard deviation 0.63).

B.3. Motion Editing via Motion Manifold

Implementation Details. For the convolutional autoencoder of the task-adaptive motion editing module, we use PyTorch [42], FairMotion [12], and PyTorch3D [48]. The autoencoder is trained with the Adam optimizer with a learning rate of 0.0001. We use 3 layers of 1D temporal convolutions with a kernel width of 25 and stride 2, and the channel dimension of each output feature is 256. The training datasets are summarized in Table 4. Note that we use different pre-processing steps for the motion editing module and the motion synthesizer.

Reconstruction Loss. The encoder Ψ and decoder Ψ⁻¹ are trained with the reconstruction loss ||X − Ψ⁻¹(Ψ(X))||², where

    L_{\mathrm{recon}} = w_c L_{\mathrm{contact}} + w_r L_{\mathrm{root}} + w_q L_{\mathrm{quat}} + w_p L_{\mathrm{pos}}.   (9)
L_contact, L_root, and L_quat are the MSE losses on the foot contact labels, the root status (height and transform relative to the previous frame projected onto the XZ plane), and the joint rotations in the 6D representation [73], respectively. To penalize errors accumulating along the kinematic chain, we perform forward kinematics (FK) and measure the global positional distance of the joints between the original and reconstructed motions. As the global joint positions depend heavily on the root position, the distance is measured in root-centric coordinates during the early epochs to ignore the global root location, which we found empirically more stable.

Motion Editing Loss. For motion editing, the positional and regularization losses are defined as follows:

    L = w_p L_{\mathrm{pos}} + w_f L_{\mathrm{foot}} + w_r L_{\mathrm{root}}, where
    L_{\mathrm{pos}} = \sum_{j,\, q_j \in \varphi} \|p_j - q_j\|^{2} (if φ exists at t),
    L_{\mathrm{foot}} = \sum_{\mathrm{foot}} \|p^{e}_{\mathrm{foot}} - p^{i}_{\mathrm{foot}}\|^{2},
    L_{\mathrm{root}} = w_r \|r^{e}_{xz} - r^{i}_{xz}\|^{2} + w_{\Delta r} \|\dot{r}^{e}_{xz} - \dot{r}^{i}_{xz}\|^{2}.   (10)

p_j denotes the position of joint j, and r, ṙ denote the root positions and velocities, respectively. The superscripts e and i indicate the edited and initial motions, and the subscript xz indicates that the vector is projected onto the XZ plane. The loss L enforces the edited motion to maintain the contacts and root trajectory (in the XZ plane) of the initial motion, while generating natural movements of the other joints to meet the sparse positional constraints.
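A sketch of Eq. (10) in code form is shown below; the weight values, the foot joint indices, and the representation of the sparse constraint set phi are assumptions for illustration.

    import torch

    def editing_loss(p_edit, p_init, root_edit_xz, root_init_xz, phi,
                     w_p=1.0, w_f=1.0, w_r=1.0, w_dr=1.0, foot_ids=(3, 7)):
        # p_edit, p_init: (T, J, 3) joint positions of edited/initial motion.
        # phi: dict {(frame, joint): target 3D position} of sparse constraints.
        l_pos = sum(torch.sum((p_edit[t, j] - q) ** 2) for (t, j), q in phi.items())
        ids = list(foot_ids)
        l_foot = torch.sum((p_edit[:, ids] - p_init[:, ids]) ** 2)
        vel_e = root_edit_xz[1:] - root_edit_xz[:-1]
        vel_i = root_init_xz[1:] - root_init_xz[:-1]
        l_root = (w_r * torch.sum((root_edit_xz - root_init_xz) ** 2)
                  + w_dr * torch.sum((vel_e - vel_i) ** 2))
        return w_p * l_pos + w_f * l_foot + w_r * l_root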
Generating Interaction Cues for Manipulation. To synthesize arm motions that interact naturally with the movements of articulated target objects, we produce the desired interaction cues as 3D trajectories of a chosen 3D position on the object that the hand of the character is expected to touch. Specifically, we apply the expected articulated motion of the 3D object model to produce the 3D trajectory of a chosen object vertex, v(R_t, T_t, θ_t), where R_t, T_t are the global orientation and translation of the object and θ_t are the parameters of the object articulation (e.g., the hinge angle of the cover of a laptop) at time t. v(·) represents the 3D location of the chosen vertex v. We then input the produced trajectory as the desired 3D interaction cue for a character's joint (e.g., a hand joint), assuming the joint touches this object trajectory for manipulation: φ = [v(R_t, T_t, θ_t)]_t. Note that, in our visualizations, we apply the desired articulated motion to the 3D object at each time step, synced with the produced interaction cues.
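For a single revolute joint (e.g., a lid or a door), such a cue trajectory could be generated as in the sketch below, which rotates the chosen vertex about the hinge by θ_t (Rodrigues' formula) and then applies the object's global pose; the single-hinge articulation model and the argument conventions are assumptions for illustration.

    import numpy as np

    def interaction_cue(vertex_local, hinge_axis, hinge_origin, angles, R, T):
        # Returns phi = [v(R_t, T_t, theta_t)]_t as a (T, 3) trajectory;
        # here R, T are held fixed and only the hinge angle varies over time.
        k = hinge_axis / np.linalg.norm(hinge_axis)
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        traj = []
        for theta in angles:
            R_hinge = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
            v = hinge_origin + R_hinge @ (vertex_local - hinge_origin)
            traj.append(R @ v + T)
        return np.stack(traj)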
Table 3. Details of the pre-processed motion datasets per action category for training our motion synthesizer S.

Label      | Seg. Length | Seg. Count | Total Frames
Locomotion | 10          | 11063      | 11498
Sit        | 50 – 85     | 5842       | 14942

Table 4. Details of the pre-processed motion dataset for training our motion editing module M.

Name                              | Value
Motion sequence length            | 120
Number of sequences (training)    | 11397
Number of sequences (validation)  | 3135
Number of sequences (test)        | 2139

C. More Details on Experiments

C.1. Fréchet Distance Features

FD_root is computed from the root feature vector, which is a concatenation of the root orientation in angle-axis representation, the root up vector, and the root transform relative to the previous frame. We note that all motions used for comparison share the same up axis (y) and floor plane (xz). FD_joint is computed from the joint feature vector, represented as joint orientations in angle-axis representation, excluding the root.

References

[1] Polycam - LiDAR and 3D scanner for iPhone & Android. https://poly.cam/.
[2] Kfir Aberman, Peizhuo Li, Dani Lischinski, Olga Sorkine-Hornung, Daniel Cohen-Or, and Baoquan Chen. Skeleton-aware networks for deep motion retargeting. ACM Trans. Graph., 39(4), 2020.
[3] Kfir Aberman, Yijia Weng, Dani Lischinski, Daniel Cohen-Or, and Baoquan Chen. Unpaired motion style transfer from video to animation. ACM Trans. Graph., 39(4), 2020.
[4] Kevin Bergamin, Simon Clavet, Daniel Holden, and James Richard Forbes. DReCon: Data-driven responsive control of physics-based characters. ACM Trans. Graph., 38(6), 2019.
References

[1] Polycam - lidar and 3d scanner for iphone & android. https://poly.cam/.
[2] Kfir Aberman, Peizhuo Li, Dani Lischinski, Olga Sorkine-Hornung, Daniel Cohen-Or, and Baoquan Chen. Skeleton-aware networks for deep motion retargeting. ACM Trans. Graph., 39(4), 2020.
[3] Kfir Aberman, Yijia Weng, Dani Lischinski, Daniel Cohen-Or, and Baoquan Chen. Unpaired motion style transfer from video to animation. ACM Trans. Graph., 39(4), 2020.
[4] Kevin Bergamin, Simon Clavet, Daniel Holden, and James Richard Forbes. DReCon: Data-driven responsive control of physics-based characters. ACM Trans. Graph., 38(6), 2019.
[5] Michael Büttner and Simon Clavet. Motion matching - the road to next gen animation. In Proc. of Nucl.ai, 2015.
[6] Zhe Cao, Hang Gao, Karttikeya Mangalam, Qi-Zhi Cai, Minh Vo, and Jitendra Malik. Long-term human motion prediction with scene context. In ECCV, 2020.
[7] Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3D: Learning from RGB-D data in indoor environments. In 3DV, 2017.
[8] Yu-Wei Chao, Jimei Yang, Weifeng Chen, and Jia Deng. Learning to sit: Synthesizing human-chair interactions via hierarchical control. In AAAI, 2021.
[9] Simon Clavet. Motion matching and the road to next-gen animation. In Proc. of GDC, 2016.
[10] Haegwang Eom, Daseong Han, Joseph S Shin, and Junyong Noh. Model predictive control with a visuomotor system for physics-based character animation. ACM Trans. Graph., 39(1), 2019.
[11] Katerina Fragkiadaki, Sergey Levine, Panna Felsen, and Jitendra Malik. Recurrent network models for human dynamics. In ICCV, 2015.
[12] Deepak Gopinath and Jungdam Won. fairmotion - tools to load, process and visualize motion capture data. GitHub, 2020.
[13] Ikhsanul Habibie, Daniel Holden, Jonathan Schwarz, Joe Yearsley, and Taku Komura. A recurrent variational autoencoder for human motion synthesis. In BMVC, 2017.
[14] Félix G Harvey, Mike Yurick, Derek Nowrouzezahrai, and Christopher Pal. Robust motion in-betweening. ACM Trans. Graph., 39(4), 2020.
[15] Mohamed Hassan, Duygu Ceylan, Ruben Villegas, Jun Saito, Jimei Yang, Yi Zhou, and Michael Black. Stochastic scene-aware motion prediction. In ICCV, 2021.
[16] Mohamed Hassan, Vasileios Choutas, Dimitrios Tzionas, and Michael J. Black. Resolving 3D human pose ambiguities with 3D scene constraints. In ICCV, 2019.
[17] Mohamed Hassan, Partha Ghosh, Joachim Tesch, Dimitrios Tzionas, and Michael J Black. Populating 3D scenes by learning human-scene interaction. In CVPR, 2021.
[18] Daniel Holden, Oussama Kanoun, Maksym Perepichka, and Tiberiu Popa. Learned motion matching. ACM Trans. Graph., 39(4), 2020.
[19] Daniel Holden, Taku Komura, and Jun Saito. Phase-functioned neural networks for character control. ACM Trans. Graph., 36(4), 2017.
[20] Daniel Holden, Jun Saito, and Taku Komura. A deep learning framework for character motion synthesis and editing. ACM Trans. Graph., 35(4), 2016.
[21] Chun-Hao P Huang, Hongwei Yi, Markus Höschle, Matvey Safroshkin, Tsvetelina Alexiadis, Senya Polikovsky, Daniel Scharstein, and Michael J Black. Capturing and inferring dense full-body human-scene contact. In CVPR, 2022.
[22] Kyunglyul Hyun, Kyungho Lee, and Jehee Lee. Motion grammars for character animation. In Computer Graphics Forum, volume 35, 2016.
[23] Yuheng Jiang, Suyi Jiang, Guoxing Sun, Zhuo Su, Kaiwen Guo, Minye Wu, Jingyi Yu, and Lan Xu. NeuralHOFusion: Neural volumetric rendering under human-object interactions. In CVPR, 2022.
[24] Vladimir G Kim, Siddhartha Chaudhuri, Leonidas Guibas, and Thomas Funkhouser. Shape2Pose: Human-centric shape analysis. ACM Trans. Graph., 33(4), 2014.
[25] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[26] Jehee Lee, Jinxiang Chai, Paul SA Reitsma, Jessica K Hodgins, and Nancy S Pollard. Interactive control of avatars animated with human motion data. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, 2002.
[27] Jeongseok Lee, Michael X Grey, Sehoon Ha, Tobias Kunz, Sumit Jain, Yuting Ye, Siddhartha S Srinivasa, Mike Stilman, and C Karen Liu. DART: Dynamic animation and robotics toolkit. The Journal of Open Source Software, 3(22), 2018.
[28] Jehee Lee and Kang Hoon Lee. Precomputing avatar behavior from human motion data. In Proceedings of the 2004 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2004.
[29] Kyungho Lee, Seyoung Lee, and Jehee Lee. Interactive character animation by learning multi-objective control. ACM Trans. Graph., 37(6), 2018.
[30] Kyungho Lee, Sehee Min, Sunmin Lee, and Jehee Lee. Learning time-critical responses for interactive character control. ACM Trans. Graph., 40(4), 2021.
[31] Kang Hoon Lee, Myung Geol Choi, and Jehee Lee. Motion patches: building blocks for virtual environments annotated with motion data. In ACM SIGGRAPH 2006 Papers, 2006.
[32] Seyoung Lee, Sunmin Lee, Yongwoo Lee, and Jehee Lee. Learning a family of motor skills from a single motion clip. ACM Trans. Graph., 40(4), 2021.
[33] Seunghwan Lee, Moonseok Park, Kyoungmin Lee, and Jehee Lee. Scalable muscle-actuated human simulation and control. ACM Trans. Graph., 38(4), 2019.
[34] Sergey Levine, Jack M Wang, Alexis Haraux, Zoran Popović, and Vladlen Koltun. Continuous character control with low-dimensional embeddings. ACM Trans. Graph., 31(4), 2012.
[35] Ruilong Li, Shan Yang, David A. Ross, and Angjoo Kanazawa. AI Choreographer: Music conditioned 3d dance generation with AIST++. In ICCV, 2021.
[36] Hung Yu Ling, Fabio Zinno, George Cheng, and Michiel Van De Panne. Character controllers using motion VAEs. ACM Trans. Graph., 39(4), 2020.
[37] Kovar Lucas, Gleicher Michael, and Pighin Frédéric. Motion graphs. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, 2002.
[38] Julieta Martinez, Michael J Black, and Javier Romero. On human motion prediction using recurrent neural networks. In CVPR, 2017.
[39] Josh Merel, Saran Tunyasuvunakool, Arun Ahuja, Yuval Tassa, Leonard Hasenclever, Vu Pham, Tom Erez, Greg Wayne, and Nicolas Heess. Catch & carry: reusable neural controllers for vision-guided whole-body tasks. ACM Trans. Graph., 39(4), 2020.
[40] Evonne Ng, Hanbyul Joo, Liwen Hu, Hao Li, Trevor Darrell, Angjoo Kanazawa, and Shiry Ginosar. Learning to listen: Modeling non-deterministic dyadic facial motion. In CVPR, 2022.
[41] Soohwan Park, Hoseok Ryu, Seyoung Lee, Sunmin Lee, and Jehee Lee. Learning predict-and-simulate policies from unorganized human motion data. ACM Trans. Graph., 38(6), 2019.
[42] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc., 2019.
[43] Xue Bin Peng, Pieter Abbeel, Sergey Levine, and Michiel Van de Panne. DeepMimic: Example-guided deep reinforcement learning of physics-based character skills. ACM Trans. Graph., 37(4), 2018.
[44] Xue Bin Peng, Yunrong Guo, Lina Halper, Sergey Levine, and Sanja Fidler. ASE: Large-scale reusable adversarial skill embeddings for physically simulated characters. ACM Trans. Graph., 41(4), 2022.
[45] Xue Bin Peng, Ze Ma, Pieter Abbeel, Sergey Levine, and Angjoo Kanazawa. AMP: Adversarial motion priors for stylized physics-based character control. ACM Trans. Graph., 40(4), 2021.
[46] Mathis Petrovich, Michael J Black, and Gül Varol. Action-conditioned 3d human motion synthesis with transformer VAE. In ICCV, 2021.
[47] Yuzhe Qin, Yueh-Hua Wu, Shaowei Liu, Hanwen Jiang, Ruihan Yang, Yang Fu, and Xiaolong Wang. DexMV: Imitation learning for dexterous manipulation from human videos. In ECCV, 2022.
[48] Nikhila Ravi, Jeremy Reizenstein, David Novotny, Taylor Gordon, Wan-Yen Lo, Justin Johnson, and Georgia Gkioxari. Accelerating 3d deep learning with PyTorch3D. arXiv:2007.08501, 2020.
[49] Manolis Savva, Angel X Chang, Pat Hanrahan, Matthew Fisher, and Matthias Nießner. PiGraphs: learning interaction snapshots from observations. ACM Trans. Graph., 35(4), 2016.
[50] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[51] Hubert PH Shum, Taku Komura, Masashi Shiraishi, and Shuntaro Yamazaki. Interaction patches for multi-character animation. ACM Trans. Graph., 27(5), 2008.
[52] Sebastian Starke, Ian Mason, and Taku Komura. DeepPhase: periodic autoencoders for learning motion phase manifolds. ACM Trans. Graph., 41(4):1–13, 2022.
[53] Sebastian Starke, He Zhang, Taku Komura, and Jun Saito. Neural state machine for character-scene interactions. ACM Trans. Graph., 38(6), 2019.
[54] Omid Taheri, Vasileios Choutas, Michael J Black, and Dimitrios Tzionas. GOAL: Generating 4D whole-body motion for hand-object grasping. In CVPR, 2022.
[55] Omid Taheri, Nima Ghorbani, Michael J Black, and Dimitrios Tzionas. GRAB: A dataset of whole-body human grasping of objects. In ECCV, 2020.
[56] Graham W Taylor and Geoffrey E Hinton. Factored conditional restricted boltzmann machines for modeling motion style. In ICML, 2009.
[57] Adrien Treuille, Yongjoon Lee, and Zoran Popović. Near-optimal character animation with continuous control. In ACM SIGGRAPH 2007 Papers, 2007.
[58] Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin, and Honglak Lee. Learning to generate long-term future via hierarchical prediction. In ICML, 2017.
[59] Jingbo Wang, Yu Rong, Jingyuan Liu, Sijie Yan, Dahua Lin, and Bo Dai. Towards diverse and natural scene-aware 3d human motion synthesis. In CVPR, 2022.
[60] Jiashun Wang, Huazhe Xu, Jingwei Xu, Sifei Liu, and Xiaolong Wang. Synthesizing long-term 3d human motion and interaction in 3d scenes. In CVPR, 2021.
[61] Jingbo Wang, Sijie Yan, Bo Dai, and Dahua Lin. Scene-aware generative network for human motion synthesis. In CVPR, 2021.
[62] Jungdam Won, Deepak Gopinath, and Jessica Hodgins. A scalable approach to control diverse behaviors for physically simulated characters. ACM Trans. Graph., 39(4), 2020.
[63] Fanbo Xiang, Yuzhe Qin, Kaichun Mo, Yikuan Xia, Hao Zhu, Fangchen Liu, Minghua Liu, Hanxiao Jiang, Yifu Yuan, He Wang, et al. SAPIEN: A simulated part-based interactive environment. In CVPR, 2020.
[64] Xianghui Xie, Bharat Lal Bhatnagar, and Gerard Pons-Moll. CHORE: Contact, human and object reconstruction from a single RGB image. In ECCV, 2022.
[65] Xiang Xu, Hanbyul Joo, Greg Mori, and Manolis Savva. D3D-HOI: Dynamic 3d human-object interactions from videos. arXiv preprint arXiv:2108.08420, 2021.
[66] Zeshi Yang, Kangkang Yin, and Libin Liu. Learning to use chopsticks in diverse gripping styles. ACM Trans. Graph., 41(4), 2022.
[67] He Zhang, Yuting Ye, Takaaki Shiratori, and Taku Komura. ManipNet: Neural manipulation synthesis with a hand-object spatial representation. ACM Trans. Graph., 40(4), 2021.
[68] Jason Y. Zhang, Sam Pepose, Hanbyul Joo, Deva Ramanan, Jitendra Malik, and Angjoo Kanazawa. Perceiving 3d human-object spatial arrangements from a single image in the wild. In ECCV, 2020.
[69] Siwei Zhang, Yan Zhang, Qianli Ma, Michael J Black, and Siyu Tang. PLACE: Proximity learning of articulation and contact in 3d environments. In 3DV, 2020.
[70] Xiaohan Zhang, Bharat Lal Bhatnagar, Sebastian Starke, Vladimir Guzov, and Gerard Pons-Moll. COUCH: Towards controllable human-chair interactions. In ECCV, 2022.
[71] Yan Zhang, Mohamed Hassan, Heiko Neumann, Michael J Black, and Siyu Tang. Generating 3d people in scenes without people. In CVPR, 2020.
[72] Kaifeng Zhao, Shaofei Wang, Yan Zhang, Thabo Beeler, and Siyu Tang. Compositional human-scene interaction synthesis with semantic control. In ECCV, 2022.
[73] Yi Zhou, Connelly Barnes, Lu Jingwan, Yang Jimei, and Li Hao. On the continuity of rotation representations in neural networks. In CVPR, 2019.