Columns: title (string, lengths 24–189) · detail_url (string, lengths 27–46) · author_list (sequence, lengths 0–34) · abstract (string, lengths 33–403)
KGNv2: Separating Scale and Pose Prediction for Keypoint-Based 6-DoF Grasp Synthesis on RGB-D Input
https://ieeexplore.ieee.org/document/10342514/
[ "Yiye Chen", "Ruinian Xu", "Yunzhi Lin", "Hongyi Chen", "Patricio A. Vela" ]
We propose an improved keypoint approach for 6-DoF grasp pose synthesis from RGB-D input. Keypoint-based grasp detection from image input demonstrated promising results in a previous study, where the visual information provided by color imagery compensates for noisy or imprecise depth measurements. However, it relies heavily on accurate keypoint prediction in image space. We devise a new grasp gen...
Learning-Based Real-Time Torque Prediction for Grasping Unknown Objects with a Multi-Fingered Hand
https://ieeexplore.ieee.org/document/10341970/
[ "Dominik Winkelbauer", "Berthold Bäuml", "Rudolph Triebel" ]
When grasping objects with a multi-finger hand, it is crucial for the grasp stability to apply the correct torques at each joint so that external forces are countered. Most current systems use simple heuristics instead of modeling the required torque correctly. Instead, we propose a learning-based approach that is able to predict torques for grasps on unknown objects in real-time. The neural netwo...
A Grasp Pose is All You Need: Learning Multi-Fingered Grasping with Deep Reinforcement Learning from Vision and Touch
https://ieeexplore.ieee.org/document/10341776/
[ "Federico Ceola", "Elisa Maiettini", "Lorenzo Rosasco", "Lorenzo Natale" ]
Multi-fingered robotic hands have the potential to enable robots to perform sophisticated manipulation tasks. However, teaching a robot to grasp objects with an anthropomorphic hand is an arduous problem due to the high dimensionality of state and action spaces. Deep Reinforcement Learning (DRL) offers techniques to design control policies for this kind of problem without explicit environment or hand...
Physics-Informed Learning to Enable Robotic Screw-Driving Under Hole Pose Uncertainties
https://ieeexplore.ieee.org/document/10342151/
[ "Omey M. Manyar", "Santosh V Narayan", "Rohin Lengade", "Satyandra K. Gupta" ]
Screw-driving is an important operation in numerous applications. In many situations, hole pose cannot be estimated very accurately. Autonomous screw-driving cannot be performed by traditional industrial manipulators in position control mode when the hole pose uncertainty is high. This paper presents a mobile manipulator system for performing autonomous screw-driving in the presence of uncertainti...
ADMNet: Anti-Drone Real-Time Detection and Monitoring
https://ieeexplore.ieee.org/document/10341901/
[ "Xunkuai Zhou", "Guidong Yang", "Yizhou Chen", "Chuangxiang Gao", "Benyun Zhao", "Li Li", "Ben M. Chen" ]
We propose a lightweight, effective, and efficient anti-drone network, namely ADMNet, for visually detecting and monitoring unfriendly drones with a constrained view field, flying against a complex environment. We merge an SPP module to the first head of YOLOv4 to improve accuracy and perform network compression to reduce inference latency and model size. To compensate for the accuracy loss caused...
Multi-View Stereo with Learnable Cost Metric
https://ieeexplore.ieee.org/document/10341606/
[ "Guidong Yang", "Xunkuai Zhou", "Chuanxiang Gao", "Benyun Zhao", "Jihan Zhang", "Yizhou Chen", "Xi Chen", "Ben M. Chen" ]
In this paper, we present LCM-MVSNet, a novel multi-view stereo (MVS) network with learnable cost metric (LCM) for more accurate and complete depth estimation and dense point cloud reconstruction. To adapt to the scene variation and improve the reconstruction quality in non-Lambertian low-textured scenes, we propose LCM to adaptively aggregate multi-view matching similarity into the 3D cost volume...
A Comparison Between Framed-Based and Event-Based Cameras for Flapping-Wing Robot Perception
https://ieeexplore.ieee.org/document/10342500/
[ "R. Tapia", "J.P. Rodríguez-Gómez", "J.A. Sanchez-Diaz", "F.J. Gañán", "I.G. Rodríguez", "J. Luna-Santamaria", "J.R. Martínez-De Dios", "A. Ollero" ]
Perception systems for ornithopters face severe challenges. The harsh vibrations and abrupt movements caused during flapping are prone to produce motion blur and strong lighting condition changes. Their strict restrictions in weight, size, and energy consumption also limit the type and number of sensors to mount onboard. Lightweight traditional cameras have become a standard off-the-shelf solution...
Flexible Multi-DoF Aerial 3D Printing Supported with Automated Optimal Chunking
https://ieeexplore.ieee.org/document/10341882/
[ "Marios-Nektarios Stamatopoulos", "Avijit Banerjee", "George Nikolakopoulos" ]
The future of 3D printing utilizing unmanned aerial vehicles (UAVs) presents a promising capability to revolutionize manufacturing and to enable the creation of large-scale structures in remote and hard-to-reach areas e.g. in other planetary systems. Nevertheless, the limited payload capacity of UAVs and the complexity in the 3D printing of large objects pose significant challenges. In this articl...
Memory Maps for Video Object Detection and Tracking on UAVs
https://ieeexplore.ieee.org/document/10342453/
[ "Benjamin Kiefer", "Yitong Quan", "Andreas Zell" ]
This paper introduces a novel approach to video object detection and tracking on Unmanned Aerial Vehicles (UAVs). By incorporating metadata, the proposed approach creates a memory map of object locations in actual world coordinates, providing a more robust and interpretable representation of object locations in both image space and the real world. We use this representation to boost con...
Robust Localization of Aerial Vehicles via Active Control of Identical Ground Vehicles
https://ieeexplore.ieee.org/document/10341900/
[ "Igor Spasojevic", "Xu Liu", "Ankit Prabhu", "Alejandro Ribeiro", "George J. Pappas", "Vijay Kumar" ]
This paper addresses the problem of active collaborative localization in heterogeneous robot teams with unknown data association. It involves positioning a small number of identical unmanned ground vehicles (UGVs) at desired positions so that an unmanned aerial vehicle (UAV) can, through unlabelled measurements of UGVs, uniquely determine its global pose. We model the problem as a sequential two p...
Semantically-Enhanced Deep Collision Prediction for Autonomous Navigation Using Aerial Robots
https://ieeexplore.ieee.org/document/10342297/
[ "Mihir Kulkarni", "Huan Nguyen", "Kostas Alexis" ]
This paper contributes a novel and modularized learning-based method for aerial robots navigating cluttered environments containing hard-to-perceive thin obstacles without assuming access to a map or the full pose estimation of the robot. The proposed solution builds upon a semantically-enhanced Variational Autoencoder that is trained with both real-world and simulated depth images to compress the...
Demonstrating Autonomous 3D Path Planning on a Novel Scalable UGV-UAV Morphing Robot
https://ieeexplore.ieee.org/document/10342189/
[ "Eric Sihite", "Filip Slezak", "Ioannis Mandralis", "Adarsh Salagame", "Milad Ramezani", "Arash Kalantari", "Alireza Ramezani", "Morteza Gharib" ]
Some animals exhibit multi-modal locomotion capability to traverse a wide range of terrains and environments, such as amphibians that can swim and walk or birds that can fly and walk. This capability is extremely beneficial for expanding the animal's habitat range and they can choose the most energy efficient mode of locomotion in a given environment. The robotic biomimicry of this multi-modal loc...
Topology-Guided Perception-Aware Receding Horizon Trajectory Generation for UAVs
https://ieeexplore.ieee.org/document/10342075/
[ "Gang Sun", "Xuetao Zhang", "Yisha Liu", "Hanzhang Wang", "Xuebo Zhang", "Yan Zhuang" ]
The perception-aware motion planning method based on the localization uncertainty has the potential to improve the localization accuracy for robot navigation. However, most of the existing perception-aware methods pre-build a global feature map and cannot generate the perception-aware trajectory in real time. This paper proposes a topology-guided perception-aware receding horizon trajectory ge...
Nonlinear Deterministic Observer for Inertial Navigation Using Ultra-Wideband and IMU Sensor Fusion
https://ieeexplore.ieee.org/document/10342083/
[ "Hashim A. Hashim", "Abdelrahman E. E. Eltoukhy", "Kyriakos G. Vamvoudakis", "Mohammed I. Abouheaf" ]
Navigation in Global Positioning Systems (GPS)-denied environments requires robust estimators reliant on fusion of inertial sensors able to estimate rigid-body's orientation, position, and linear velocity. Ultra-wideband (UWB) and Inertial Measurement Unit (IMU) represent low-cost measurement technology that can be utilized for successful Inertial Navigation. This paper presents a nonlinear determ...
Model-Free Grasping with Multi-Suction Cup Grippers for Robotic Bin Picking
https://ieeexplore.ieee.org/document/10341555/
[ "Philipp Schillinger", "Miroslav Gabriel", "Alexander Kuss", "Hanna Ziesche", "Ngo Anh Vien" ]
This paper presents a novel method for model-free prediction of grasp poses for suction grippers with multiple suction cups. Our approach is agnostic to the design of the gripper and does not require gripper-specific training data. In particular, we propose a two-step approach, where first, a neural network predicts pixel-wise grasp quality for an input image to indicate areas that are generally g...
Vision-Based State and Pose Estimation for Robotic Bin Picking of Cables
https://ieeexplore.ieee.org/document/10342374/
[ "Andrea Monguzzi", "Christian Cella", "Andrea Maria Zanchettin", "Paolo Rocco" ]
This paper deals with the challenging task of picking semi-deformable linear objects (SDLOs) from a bin. SDLOs are deformable elements, such as cables, joined to a rigid part as a connector. We propose a vision-based strategy to detect, classify and estimate the pose and the state (free or occluded) of connectors belonging to an unspecified number of SDLOs, arranged in an unknown configuration in ...
Efficient Visuo-Haptic Object Shape Completion for Robot Manipulation
https://ieeexplore.ieee.org/document/10342200/
[ "Lukas Rustler", "Jiri Matas", "Matej Hoffmann" ]
For robot manipulation, a complete and accurate object shape is desirable. Here, we present a method that combines visual and haptic reconstruction in a closed-loop pipeline. From an initial viewpoint, the object shape is reconstructed using an implicit surface deep neural network. The location with highest uncertainty is selected for haptic exploration, the object is touched, the new information ...
Force Map: Learning to Predict Contact Force Distribution from Vision
https://ieeexplore.ieee.org/document/10342092/
[ "Ryo Hanai", "Yukiyasu Domae", "Ixchel G. Ramirez-Alpizar", "Bruno Leme", "Tetsuya Ogata" ]
When humans see a scene, they can roughly imagine the forces applied to objects based on their experience and use them to handle the objects properly. This paper considers transferring this “force-visualization” ability to robots. We hypothesize that a rough force distribution (named “force map”) can be utilized for object manipulation strategies even if accurate force estimation is impossible. B...
Push to Know! - Visuo-Tactile Based Active Object Parameter Inference with Dual Differentiable Filtering
https://ieeexplore.ieee.org/document/10341832/
[ "Anirvan Dutta", "Etienne Burdet", "Mohsen Kaboli" ]
For robotic systems to interact with objects in dynamic environments, it is essential to perceive the physical properties of the objects such as shape, friction coefficient, mass, center of mass, and inertia. This not only eases selecting manipulation action but also ensures the task is performed as desired. However, estimating the physical properties of especially novel objects is a challenging p...
IOSG: Image-Driven Object Searching and Grasping
https://ieeexplore.ieee.org/document/10342009/
[ "Houjian Yu", "Xibai Lou", "Yang Yang", "Changhyun Choi" ]
When robots retrieve specific objects from cluttered scenes, such as home and warehouse environments, the target objects are often partially occluded or completely hidden. Robots are thus required to search, identify a target object, and successfully grasp it. Preceding works have relied on pre-trained object recognition or segmentation models to find the target object. However, such methods requi...
DexRepNet: Learning Dexterous Robotic Grasping Network with Geometric and Spatial Hand-Object Representations
https://ieeexplore.ieee.org/document/10342334/
[ "Qingtao Liu", "Yu Cui", "Qi Ye", "Zhengnan Sun", "Haoming Li", "Gaofeng Li", "Lin Shao", "Jiming Chen" ]
Robotic dexterous grasping is a challenging problem due to the high degree of freedom (DoF) and complex contacts of multi-fingered robotic hands. Existing deep reinforcement learning (DRL) based methods leverage human demonstrations to reduce sample complexity due to the high dimensional action space with dexterous grasping. However, less attention has been paid to hand-object interaction represe...
Active Acoustic Sensing for Robot Manipulation
https://ieeexplore.ieee.org/document/10342481/
[ "Shihan Lu", "Heather Culbertson" ]
Perception in robot manipulation has been actively explored with the goal of advancing and integrating vision and touch for global and local feature extraction. However, it is difficult to perceive certain object internal states, and the integration of visual and haptic perception is not compact and is easily biased. We propose to address these limitations by developing an active acoustic sensing ...
Grasp Region Exploration for 7-DoF Robotic Grasping in Cluttered Scenes
https://ieeexplore.ieee.org/document/10341757/
[ "Zibo Chen", "Zhixuan Liu", "Shangjin Xie", "Wei-Shi Zheng" ]
Robotic grasping is a fundamental skill for robots, but it is quite challenging in cluttered scenes. In cluttered scenes, the precise prediction of high-quality grasp configurations such as rotation and grasping width while avoiding collisions is essential. To accomplish this, the grasp detection models require the capabilities of stronger fine-grained information extracted around the grasp points...
Bagging by Learning to Singulate Layers Using Interactive Perception
https://ieeexplore.ieee.org/document/10341634/
[ "Lawrence Yunliang Chen", "Baiyu Shi", "Roy Lin", "Daniel Seita", "Ayah Ahmad", "Richard Cheng", "Thomas Kollar", "David Held", "Ken Goldberg" ]
Many fabric handling and 2D deformable material tasks in homes and industries require singulating layers of material such as opening a bag or arranging garments for sewing. In contrast to methods requiring specialized sensing or end effectors, we use only visual observations with ordinary parallel jaw grippers. We propose SLIP: Singulating Layers using Interactive Perception, and apply SLIP to the...
Real-Time Simultaneous Multi-Object 3D Shape Reconstruction, 6DoF Pose Estimation and Dense Grasp Prediction
https://ieeexplore.ieee.org/document/10342307/
[ "Shubham Agrawal", "Nikhil Chavan-Dafle", "Isaac Kasahara", "Selim Engin", "Jinwook Huh", "Volkan Isler" ]
In this paper, we present a realtime method for simultaneous object-level scene understanding and grasp prediction. Specifically, given a single RGBD image of a scene, our method localizes all the objects in the scene and for each object, it generates the following: full 3D shape, scale, pose with respect to the camera frame, and a dense set of feasible grasps. The main advantage of our method is ...
Flexible Handover with Real-Time Robust Dynamic Grasp Trajectory Generation
https://ieeexplore.ieee.org/document/10341777/
[ "Gu Zhang", "Hao-Shu Fang", "Hongjie Fang", "Cewu Lu" ]
In recent years, there has been a significant effort dedicated to developing efficient, robust, and general human-to-robot handover systems. However, the area of flexible handover in the context of complex and continuous objects' motion remains relatively unexplored. In this work, we propose an approach for effective and robust flexible handover, which enables the robot to grasp moving objects wit...
HyperTraj: Towards Simple and Fast Scene-Compliant Endpoint Conditioned Trajectory Prediction
https://ieeexplore.ieee.org/document/10341647/
[ "Renhao Huang", "Maurice Pagnucco", "Yang Song" ]
An important task in trajectory prediction is to model the uncertainty of agents' motions, which requires the system to propose multiple plausible future trajectories for agents based on their past movements. Recently, many approaches have been developed following an endpoint-conditioned deep learning framework by firstly predicting the distribution of endpoints, then sampling endpoints from it and...
PanelPose: A 6D Pose Estimation of Highly-Variable Panel Object for Robotic Robust Cockpit Panel Inspection
https://ieeexplore.ieee.org/document/10342304/
[ "Han Sun", "Peiyuan Ni", "Zhiqi Li", "Yizhao Wang", "Xiaoxiao Zhu", "Qixin Cao" ]
In robotic cockpit inspection scenarios, the 6D pose of highly-variable panel objects is necessary. However, the buttons with different states on the panel cause the variable texture and point cloud, which confuses the traditional invariable object pose estimation method. The bottleneck is the variable texture and point cloud. To address this issue, we propose a simple yet effective method denoted...
Image Restoration via UAVFormer for Under-Display Camera of UAV
https://ieeexplore.ieee.org/document/10342454/
[ "Zhuoran Zheng", "Xiuyi Jia" ]
The exposed cameras of UAVs can shake, shift, or even malfunction under the influence of harsh weather, while the add-on devices (Dupont lines) are very vulnerable to damage. Although we can place a low-cost transparent film overlay around the camera to protect it, this would also introduce image degradation issues (such as oversaturation, astigmatism, etc.). To tackle the image degradation proble...
Semantic Scene Difference Detection in Daily Life Patroling by Mobile Robots Using Pre-Trained Large-Scale Vision-Language Model
https://ieeexplore.ieee.org/document/10342467/
[ "Yoshiki Obinata", "Kento Kawaharazuka", "Naoaki Kanazawa", "Naoya Yamaguchi", "Naoto Tsukamoto", "Iori Yanokura", "Shingo Kitagawa", "Koki Shinjo", "Kei Okada", "Masayuki Inaba" ]
It is important for daily life support robots to detect changes in their environment and perform tasks. In the field of anomaly detection in computer vision, probabilistic and deep learning methods have been used to calculate the image distance. These methods calculate distances by focusing on image pixels. In contrast, this study aims to detect semantic changes in the daily life environment using...
Seeing the Fruit for the Leaves: Robotically Mapping Apple Fruitlets in a Commercial Orchard
https://ieeexplore.ieee.org/document/10341502/
[ "Ans Qureshi", "David Smith", "Trevor Gee", "Mahla Nejati", "Jalil Shahabi", "JongYoon Lim", "Ho Seok Ahn", "Ben McGuinness", "Catherine Downes", "Rahul Jangali", "Kale Black", "Hin Lim", "Mike Duke", "Bruce MacDonald", "Henry Williams" ]
Aotearoa New Zealand has a strong and growing apple industry but struggles to access workers to complete skilled, seasonal tasks such as thinning. To ensure effective thinning and make informed decisions on a per-tree basis, it is crucial to accurately measure the crop load of individual apple trees. However, this task poses challenges due to the dense foliage that hides the fruitlets within the t...
Cross-Domain Autonomous Driving Perception Using Contrastive Appearance Adaptation
https://ieeexplore.ieee.org/document/10342103/
[ "Ziqiang Zheng", "Yingshu Chen", "Binh-Son Hua", "Yang Wu", "Sai-Kit Yeung" ]
Addressing domain shifts for complex perception tasks in autonomous driving has long been a challenging problem. In this paper, we show that existing domain adaptation methods pay little attention to the content mismatch issue between source and target domains, thus weakening the domain adaptation performance and the decoupling of domain-invariant and domain-specific representations. To solve the...
MENTOR: Multilingual Text Detection Toward Learning by Analogy
https://ieeexplore.ieee.org/document/10342419/
[ "Hsin-Ju Lin", "Tsu-Chun Chung", "Ching-Chun Hsiao", "Pin-Yu Chen", "Wei-Chen Chiu", "Ching-Chun Huang" ]
Text detection is frequently used in vision-based mobile robots when they need to interpret texts in their surroundings to perform a given task. For instance, delivery robots in multilingual cities need to be capable of doing multilingual text detection so that the robots can read traffic signs and road markings. Moreover, the target languages change from region to region, implying the need of eff...
Towards a Robust Adversarial Patch Attack Against Unmanned Aerial Vehicles Object Detection
https://ieeexplore.ieee.org/document/10342460/
[ "Samridha Shrestha", "Saurabh Pathak", "Eduardo K. Viegas" ]
Object detection techniques for autonomous Unmanned Aerial Vehicles (UAV) are built upon Deep Neural Networks (DNN), which are known to be vulnerable to adversarial patch perturbation attacks that lead to object detection evasion. Yet, current adversarial patch generation schemes are not designed for UAV imagery settings. This paper proposes a new robust adversarial patch generation attack agains...
Fast Point to Mesh Distance by Domain Voxelization
https://ieeexplore.ieee.org/document/10341468/
[ "Geordan Gutow", "Howie Choset" ]
Computing the distance from a point to a triangle mesh is a key computational step in robotics pipelines such as registration and collision detection, with applications to path planning, SLAM, and RGB-D vision. Numerous techniques to accelerate this computation have been developed, many of which use a cheap pre-processing step to construct a hierarchical decomposition of the mesh. If the mesh is f...
AirLine: Efficient Learnable Line Detection with Local Edge Voting
https://ieeexplore.ieee.org/document/10341655/
[ "Xiao Lin", "Chen Wang" ]
Line detection is widely used in many robotic tasks such as scene recognition, 3D reconstruction, and simultaneous localization and mapping (SLAM). Compared to points, lines can provide both low-level and high-level geometrical information for downstream tasks. In this paper, we propose a novel learnable edge-based line detection algorithm, AirLine, which can be applied to various tasks. In contra...
3D Skeletonization of Complex Grapevines for Robotic Pruning
https://ieeexplore.ieee.org/document/10341828/
[ "Eric Schneider", "Sushanth Jayanth", "Abhisesh Silwal", "George Kantor" ]
Robotic pruning of dormant grapevines is an area of active research in order to promote vine balance and grape quality, but so far robotic efforts have largely focused on planar, simplified vines not representative of commercial vineyards. This paper aims to advance the robotic perception capabilities necessary for pruning in denser and more complex vine structures by extending plant skeletonizati...
AdaptSeqVPR: An Adaptive Sequence-Based Visual Place Recognition Pipeline
https://ieeexplore.ieee.org/document/10341533/
[ "Heshan Li", "Guohao Peng", "Jun Zhang", "Sriram Vaikundam", "Danwei Wang" ]
Visual Place Recognition (VPR) is essential for autonomous robots and unmanned vehicles, as an accurate identification of visited places can trigger a loop closure to optimize the built map. The most prevalent methods tackle VPR as a single-frame retrieval task, which uses a CNN-based encoder to describe and compare each individual frame. These methods, however, overlook the temporal information b...
Towards Automated Void Detection for Search and Rescue with 3D Perception
https://ieeexplore.ieee.org/document/10341454/
[ "Ananya Bal", "Ashutosh Gupta", "Pranav Goyal", "David Merrick", "Robin Murphy", "Howie Choset" ]
In a structural collapse, debris piles up in a chaotic and unstable manner, creating pockets and void spaces that are difficult to see or access. Often, these regions have the highest chances of concealing survivors and identifying such regions can increase the success of a search and rescue (SAR) operation while ensuring the safety of both survivors and rescue teams. In this paper, we present an ...
Visual Localization Based on Multiple Maps
https://ieeexplore.ieee.org/document/10341812/
[ "Yukai Lin", "Liu Liu", "Xiao Liang", "Jiangwei Li" ]
This paper proposes a multi-map based visual localization method for image sequences. Given multiple single-map based localization results, we combine them with SLAM to estimate robust and accurate camera poses under challenging conditions. Our method comprises three modules connected in a sequence. First, we reconstruct multiple reference maps using the Structure-from-Motion technique, one map fo...
An Interacting Multiple Model Approach Based on Maximum Correntropy Student's T Filter
https://ieeexplore.ieee.org/document/10341366/
[ "Fethi Candan", "Aykut Beke", "Lyudmila Mihaylova" ]
This paper presents a novel approach called the Interacting Multiple Model (IMM)-based Maximum Correntropy Student's T Filter (MCStF), which addresses the challenges posed by non-Gaussian measurement noises. The MCStF demonstrates superior performance compared to the IMM algorithm based on Kalman Filters (KFs) in both simulation environments and real-time systems. The Crazyflie 2.0 nano Unmanned A...
Deep Robust Multi-Robot Re-Localisation in Natural Environments
https://ieeexplore.ieee.org/document/10341798/
[ "Milad Ramezani", "Ethan Griffiths", "Maryam Haghighat", "Alex Pitt", "Peyman Moghadam" ]
The success of re-localisation has crucial implications for the practical deployment of robots operating within a prior map or relative to one another in real-world scenarios. Using a single modality, place recognition and localisation can be compromised in challenging environments such as forests. To address this, we propose a strategy to prevent lidar-based re-localisation failure using lidar-imag...
FVLoc-NeRF: Fast Vision-Only Localization within Neural Radiation Field
https://ieeexplore.ieee.org/document/10342310/
[ "Guo Wenzhi", "Bai Haiyang", "Mou Yuanqu", "Liu Jia", "Chen Lijun" ]
In recent years, Neural Radiation Fields (NeRF) have shown tremendous potential in encoding highly-detailed 3D geometry and environmental appearance, thus making it a promising alternative to traditional explicit maps for robot localization. However, current NeRF localization methods suffer from significant computational overheads, primarily resulting from the large number of iterations or particl...
RADA: Robust Adversarial Data Augmentation for Camera Localization in Challenging Conditions
https://ieeexplore.ieee.org/document/10341653/
[ "Jialu Wang", "Muhamad Risqi U. Saputra", "Chris Xiaoxuan Lu", "Niki Trigoni", "Andrew Markham" ]
Camera localization is a fundamental problem for many applications in computer vision, robotics, and autonomy. Despite recent deep learning-based approaches, the lack of robustness in challenging conditions persists due to changes in appearance caused by texture-less planes, repeating structures, reflective surfaces, motion blur, and illumination changes. Data augmentation is an attractive solutio...
MagHT: A Magnetic Hough Transform for Fast Indoor Place Recognition
https://ieeexplore.ieee.org/document/10342269/
[ "Iad Abdul Raouf", "Vincent Gay-Bellile", "Steve Bourgeois", "Cyril Joly", "Alexis Paljic" ]
This article proposes a novel indoor magnetic field-based place recognition algorithm that is accurate and fast to compute. For that, we modified the generalized “Hough Transform” to process magnetic data (MagHT). It takes as input a sequence of magnetic measures whose relative positions are recovered by an odometry system and recognizes the places in the magnetic map where they were acquired. It ...
What to Learn: Features, Image Transformations, or Both?
https://ieeexplore.ieee.org/document/10342415/
[ "Yuxuan Chen", "Binbin Xu", "Frederike Dümbgen", "Timothy D. Barfoot" ]
Long-term visual localization is an essential problem in robotics and computer vision, but remains challenging due to the environmental appearance changes caused by lighting and seasons. While many existing works have attempted to solve it by directly learning invariant sparse keypoints and descriptors to match scenes, these approaches still struggle with adverse appearance changes. Recent develop...
Global Localization: Utilizing Relative Spatio-Temporal Geometric Constraints from Adjacent and Distant Cameras
https://ieeexplore.ieee.org/document/10342050/
[ "Mohammad Altillawi", "Zador Pataki", "Shile Li", "Ziyuan Liu" ]
Re-localizing a camera from a single image in a previously mapped area is vital for many computer vision applications in robotics and augmented/virtual reality. In this work, we address the problem of estimating the 6 DoF camera pose relative to a global frame from a single image. We propose to leverage a novel network of relative spatial and temporal geometric constraints to guide the training of...
Uncertainty-Aware Lidar Place Recognition in Novel Environments
https://ieeexplore.ieee.org/document/10341383/
[ "Keita Mason", "Joshua Knights", "Milad Ramezani", "Peyman Moghadam", "Dimity Miller" ]
State-of-the-art lidar place recognition models exhibit unreliable performance when tested on environments different from their training dataset, which limits their use in complex and evolving environments. To address this issue, we investigate the task of uncertainty-aware lidar place recognition, where each predicted place must have an associated uncertainty that can be used to identify and reje...
Data-Driven Based Cascading Orientation and Translation Estimation for Inertial Navigation
https://ieeexplore.ieee.org/document/10341493/
[ "Xiangyu Deng", "Shenyue Wang", "Chunxiang Shan", "Jinjie Lu", "Ke Jin", "Jijunnan Li", "Yandong Guo" ]
Recently, data-driven approaches have brought both opportunities and challenges for Inertial Navigation Systems. In this paper, we propose a novel data-driven method which is composed of cascading orientation and translation estimation with IMU-only measurements. For robust orientation estimation, we combine a CNN-based neural network with an EKF to eliminate orientation errors caused by sensor no...
Converting Depth Images and Point Clouds for Feature-Based Pose Estimation
https://ieeexplore.ieee.org/document/10341758/
[ "Robert Lösch", "Mark Sastuba", "Jonas Toth", "Bernhard Jung" ]
In recent years, depth sensors have become more and more affordable and have found their way into a growing number of robotic systems. However, mono- or multi-modal sensor registration, often a necessary step for further processing, faces many challenges on raw depth images or point clouds. This paper presents a method of converting depth data into images capable of visualizing spatial details th...
AirVO: An Illumination-Robust Point-Line Visual Odometry
https://ieeexplore.ieee.org/document/10341914/
[ "Kuan Xu", "Yuefan Hao", "Shenghai Yuan", "Chen Wang", "Lihua Xie" ]
This paper proposes an illumination-robust visual odometry (VO) system that incorporates both accelerated learning-based corner point algorithms and an extended line feature algorithm. To be robust to dynamic illumination, the proposed system employs the convolutional neural network (CNN) and graph neural network (GNN) to detect and match reliable and informative corner points. Then point feature ...
NeRF-SLAM: Real-Time Dense Monocular SLAM with Neural Radiance Fields
https://ieeexplore.ieee.org/document/10341922/
[ "Antoni Rosinol", "John J. Leonard", "Luca Carlone" ]
We propose a novel geometric and photometric 3D mapping pipeline for accurate and real-time scene reconstruction from casually taken monocular images. To achieve this, we leverage recent advances in dense monocular SLAM and real-time hierarchical volumetric neural radiance fields. Our insight is that dense monocular SLAM provides the right information to fit a neural radiance field of the scene in...
Scale Jump-Aware Pose Graph Relaxation for Monocular SLAM with Re-Initializations
https://ieeexplore.ieee.org/document/10341995/
[ "Runze Yuan", "Ran Cheng", "Lige Liu", "Tao Sun", "Laurent Kneip" ]
Pose graph relaxation has become an indispensable addition to SLAM enabling efficient global registration of sensor reference frames under the objective of satisfying pair-wise relative transformation constraints. The latter may be given by incremental motion estimation or global place recognition. While the latter case enables loop closures and drift compensation, care has to be taken in the mono...
Optimizing the Extended Fourier Mellin Transformation Algorithm
https://ieeexplore.ieee.org/document/10341356/
[ "Wenqing Jiang", "Chengqian Li", "Jinyue Cao", "Sören Schwertfeger" ]
With the increasing application of robots, stable and efficient Visual Odometry (VO) algorithms are becoming more and more important. Based on the Fourier Mellin Transformation (FMT) algorithm, the extended Fourier Mellin Transformation (eFMT) is an image registration approach that can be applied to downward-looking cameras, for example on aerial and underwater vehicles. eFMT extends FMT to multi-...
Marker-Based Visual SLAM Leveraging Hierarchical Representations
https://ieeexplore.ieee.org/document/10341891/
[ "Ali Tourani", "Hriday Bavle", "Jose Luis Sanchez-Lopez", "Rafael Muñoz Salinas", "Holger Voos" ]
Fiducial markers can encode rich information about the environment and aid Visual SLAM (VSLAM) approaches in reconstructing maps with practical semantic information. Current marker-based VSLAM approaches mainly utilize markers for improving feature detections in low-feature environments and/or incorporating loop closure constraints, generating only low-level geometric maps of the environment prone...
RVWO: A Robust Visual-Wheel SLAM System for Mobile Robots in Dynamic Environments
https://ieeexplore.ieee.org/document/10342183/
[ "Jaafar Mahmoud", "Andrey Penkovskiy", "Ha The Long Vuong", "Aleksey Burkov", "Sergey Kolyubin" ]
This paper presents RVWO, a system designed to provide robust localization and mapping for wheeled mobile robots in challenging scenarios. The proposed approach leverages a probabilistic framework that incorporates semantic prior information about landmarks and visual re-projection error to create a landmark reliability model, which acts as an adaptive kernel for the visual residuals in optimizati...
Event Camera-Based Visual Odometry for Dynamic Motion Tracking of a Legged Robot Using Adaptive Time Surface
https://ieeexplore.ieee.org/document/10342048/
[ "Shifan Zhu", "Zhipeng Tang", "Michael Yang", "Erik Learned-Miller", "Donghyun Kim" ]
Our paper proposes a direct sparse visual odometry method that combines event and RGBD data to estimate the pose of agile-legged robots during dynamic locomotion and acrobatic behaviors. Event cameras offer high temporal resolution and dynamic range, which can eliminate the issue of blurred RGB images during fast movements. This unique strength holds a potential for accurate pose estimation of agi...
Enhancing Robustness of Line Tracking Through Semi-Dense Epipolar Search in Line-Based SLAM
https://ieeexplore.ieee.org/document/10342497/
[ "Dong-Uk Seo", "Hyungtae Lim", "Eungchang Mason Lee", "Hyunjun Lim", "Hyun Myung" ]
Line information from urban structures can be exploited as an additional geometrical feature to achieve robust vision-based simultaneous localization and mapping (SLAM) systems in textureless scenes. Sometimes, however, conventional line tracking methods fail to track lines owing to image blur or occlusion. Even though these lost line features are just a small subset of the many available features, the failure in fea...
Stereo Visual Odometry with Deep Learning-Based Point and Line Feature Matching Using an Attention Graph Neural Network
https://ieeexplore.ieee.org/document/10341872/
[ "Shenbagaraj Kannapiran", "Nalin Bendapudi", "Ming-Yuan Yu", "Devarth Parikh", "Spring Berman", "Ankit Vora", "Gaurav Pandey" ]
Robust feature matching forms the backbone for most Visual Simultaneous Localization and Mapping (vSLAM), visual odometry, 3D reconstruction, and Structure from Motion (SfM) algorithms. However, recovering feature matches from texture-poor scenes is a major challenge and still remains an open area of research. In this paper, we present a Stereo Visual Odometry (StereoVO) technique based on point a...
Selective Presentation of AI Object Detection Results While Maintaining Human Reliance
https://ieeexplore.ieee.org/document/10341684/
[ "Yosuke Fukuchi", "Seiji Yamada" ]
Transparency in decision-making is an important factor for AI-driven autonomous systems to be trusted and relied on by users. Studies in the field of visual information processing typically attempt to make an AI system's behavior transparent by showing bounding boxes or heatmaps as explanations. However, it has also been found that an excessive amount of explanations sometimes causes information o...
Ego-Noise Reduction of a Mobile Robot Using Noise Spatial Covariance Matrix Learning and Minimum Variance Distortionless Response
https://ieeexplore.ieee.org/document/10342193/
[ "Pierre-Olivier Lagacé", "François Ferland", "François Grondin" ]
The performance of speech and events recognition systems significantly improved recently thanks to deep learning methods. However, some of these tasks remain challenging when algorithms are deployed on robots due to the unseen mechanical noise and electrical interference generated by their actuators while training the neural networks. Ego-noise reduction as a preprocessing step therefore can help ...
Extracting Dynamic Navigation Goal from Natural Language Dialogue
https://ieeexplore.ieee.org/document/10342509/
[ "Lanjun Liang", "Ganghui Bian", "Huailin Zhao", "Yanzhi Dong", "Huaping Liu" ]
Effective access to relevant environmental changes in large human environments is critical for service robots to perform tasks. Since the position of a dynamic goal such as a human is variable, it is difficult for the robot to locate them accurately. It is worth noting that humans can obtain information through social software and deal with daily affairs. The current robots search for targets...
TidyBot: Personalized Robot Assistance with Large Language Models
https://ieeexplore.ieee.org/document/10341577/
[ "Jimmy Wu", "Rika Antonova", "Adam Kan", "Marion Lepert", "Andy Zeng", "Shuran Song", "Jeannette Bohg", "Szymon Rusinkiewicz", "Thomas Funkhouser" ]
For a robot to personalize physical assistance effectively, it must learn user preferences that can be generally reapplied to future scenarios. In this work, we investigate personalization of household cleanup with robots that can tidy up rooms by picking up objects and putting them away. A key challenge is determining the proper place to put each object, as people's preferences can vary greatly d...
L3MVN: Leveraging Large Language Models for Visual Target Navigation
https://ieeexplore.ieee.org/document/10342512/
[ "Bangguo Yu", "Hamidreza Kasaei", "Ming Cao" ]
Visual target navigation in unknown environments is a crucial problem in robotics. Despite extensive investigation of classical and learning-based approaches in the past, robots lack common-sense knowledge about household objects and layouts. Prior state-of-the-art approaches to this task rely on learning the priors during the training and typically require significant expensive resources and time...
TopSpark: A Timestep Optimization Methodology for Energy-Efficient Spiking Neural Networks on Autonomous Mobile Agents
https://ieeexplore.ieee.org/document/10342499/
[ "Rachmad Vidya Wicaksana Putra", "Muhammad Shafique" ]
Autonomous mobile agents (e.g., mobile ground robots and UAVs) typically require low-power/energy-efficient machine learning (ML) algorithms to complete their ML-based tasks (e.g., object recognition) while adapting to diverse environments, as mobile agents are usually powered by batteries. These requirements can be fulfilled by Spiking Neural Networks (SNNs) as they offer low power/energy process...
Generating Executable Action Plans with Environmentally-Aware Language Models
https://ieeexplore.ieee.org/document/10341989/
[ "Maitrey Gramopadhye", "Daniel Szafir" ]
Large Language Models (LLMs) trained using massive text datasets have recently shown promise in generating action plans for robotic agents from high-level text queries. However, these models typically do not consider the robot's environment, resulting in generated plans that may not actually be executable, due to ambiguities in the planned actions or environmental constraints. In this paper, we pr...
Interaction-Aware and Hierarchically-Explainable Heterogeneous Graph-based Imitation Learning for Autonomous Driving Simulation
https://ieeexplore.ieee.org/document/10342051/
[ "Mahan Tabatabaie", "Suining He", "Kang G. Shin" ]
Understanding and learning the actor-to-X interactions (AXIs), such as those between the focal vehicles (actor) and other traffic participants (e.g., other vehicles, pedestrians) as well as traffic environments (e.g., city/road map), is essential for the development of a decision-making model and simulation of autonomous driving (AD). Existing practices on imitation learning (IL) for AD simulatio...
Zero-Shot Fault Detection for Manipulators Through Bayesian Inverse Reinforcement Learning
https://ieeexplore.ieee.org/document/10342143/
[ "Hanqing Zhao", "Xue Liu", "Gregory Dudek" ]
We consider the detection of faults in robotic manipulators, with particular emphasis on faults that have not been observed or identified in advance, which naturally includes those that occur very infrequently. Recent studies indicate that the reward function obtained through Inverse Reinforcement Learning (IRL) can help detect anomalies caused by faults in a control system (i.e. fault detection)....
Chat with the Environment: Interactive Multimodal Perception Using Large Language Models
https://ieeexplore.ieee.org/document/10342363/
[ "Xufeng Zhao", "Mengdi Li", "Cornelius Weber", "Muhammad Burhan Hafez", "Stefan Wermter" ]
Programming robot behavior in a complex world faces challenges on multiple levels, from dextrous low-level skills to high-level planning and reasoning. Recent pre-trained Large Language Models (LLMs) have shown remarkable reasoning ability in few-shot robotic planning. However, it remains challenging to ground LLMs in multimodal sensory input and continuous action output, while enabling a robot to...
Reinforcement Learning for Robot Navigation with Adaptive Forward Simulation Time (AFST) in a Semi-Markov Model
https://ieeexplore.ieee.org/document/10341985/
[ "Yu'an Chen", "Ruosong Ye", "Ziyang Tao", "Hongjian Liu", "Guangda Chen", "Jie Peng", "Jun Ma", "Yu Zhang", "Jianmin Ji", "Yanyong Zhang" ]
Deep reinforcement learning (DRL) algorithms have proven effective in robot navigation, especially in unknown environments, by directly mapping perception inputs into robot control commands. However, most existing methods ignore the local minimum problem in navigation and thereby cannot handle complex unknown environments. In this paper, we propose the first DRL-based navigation method modeled by ...
PACT: Perception-Action Causal Transformer for Autoregressive Robotics Pre-Training
https://ieeexplore.ieee.org/document/10342381/
[ "Rogerio Bonatti", "Sai Vemprala", "Shuang Ma", "Felipe Frujeri", "Shuhang Chen", "Ashish Kapoor" ]
Robotics has long been a field riddled with complex systems architectures whose modules and connections, whether traditional or learning-based, require significant human expertise and prior knowledge. Inspired by large pre-trained language models, this work introduces a paradigm for pretraining a general purpose representation that can serve as a starting point for multiple tasks on a given robot....
Neural Field Movement Primitives for Joint Modelling of Scenes and Motions
https://ieeexplore.ieee.org/document/10342170/
[ "Ahmet Tekden", "Marc Peter Deisenroth", "Yasemin Bekiroglu" ]
This paper presents a novel Learning from Demonstration (LfD) method that uses neural fields to learn new skills efficiently and accurately. It achieves this by utilizing a shared embedding to learn both scene and motion representations in a generative way. Our method smoothly maps each expert demonstration to a scene-motion embedding and learns to model them without requiring hand-crafted task pa...
Augmentation Enables One-Shot Generalization in Learning from Demonstration for Contact-Rich Manipulation
https://ieeexplore.ieee.org/document/10341625/
[ "Xing Li", "Manuel Baum", "Oliver Brock" ]
We introduce a Learning from Demonstration (LfD) approach for contact-rich manipulation tasks, i.e., tasks in which the manipulandum's motion is constrained by contact with the environment. Our approach is motivated by the insight that even a large number of demonstrations will often not contain sufficient information to obtain a general policy for the task. To obtain general policies, our approac...
Using Single Demonstrations to Define Autonomous Manipulation Contact Tasks in Unstructured Environments via Object Affordances
https://ieeexplore.ieee.org/document/10342493/
[ "Frank Regal", "Adam Pettinger", "John A. Duncan", "Fabian Parra", "Emmanuel Akita", "Alex Navarro", "Mitch Pryor" ]
Performing a manipulation contact task in an unknown and unstructured environment is still a challenge. Learning from Demonstration (LfD) techniques provide an intuitive means to define difficult-to-model contact tasks, but have attributes that make them undesirable for novice users in uncertain environments. We present a novel end-to-end system that captures a single manipulation task demonstrati...
Constrained Dynamic Movement Primitives for Collision Avoidance in Novel Environments
https://ieeexplore.ieee.org/document/10341839/
[ "Seiji Shaw", "Devesh K. Jha", "Arvind U. Raghunathan", "Radu Corcodel", "Diego Romeres", "George Konidaris", "Daniel Nikovski" ]
Dynamic movement primitives are widely used for learning skills that can be demonstrated to a robot by a skilled human or controller. While their generalization capabilities and simple formulation make them very appealing to use, they possess no strong guarantees to satisfy operational safety constraints for a task. We present constrained dynamic movement primitives (CDMPs), which can allow for po...
Learning Constraints on Autonomous Behavior from Proactive Feedback
https://ieeexplore.ieee.org/document/10341801/
[ "Connor Basich", "Saaduddin Mahmud", "Shlomo Zilberstein" ]
Learning from feedback is a common paradigm to acquire information that is hard to specify a priori. In this work, we consider an agent with a known nominal reward model that captures its high-level task objective. Furthermore, the agent operates subject to constraints that are unknown a priori and must be inferred from human interventions. Unlike existing methods, our approach does not rely on fu...
Learning Models of Adversarial Agent Behavior Under Partial Observability
https://ieeexplore.ieee.org/document/10341378/
[ "Sean Ye", "Manisha Natarajan", "Zixuan Wu", "Rohan Paleja", "Letian Chen", "Matthew C. Gombolay" ]
The need for opponent modeling and tracking arises in several real-world scenarios, such as professional sports, video game design, and drug-trafficking interdiction. In this work, we present Graph based Adversarial Modeling with Mutual Information (GrAMMI) for modeling the behavior of an adversarial opponent agent. GrAMMI is a novel graph neural network (GNN) based approach that uses mutual infor...
Robust Real-Time Motion Retargeting via Neural Latent Prediction
https://ieeexplore.ieee.org/document/10342022/
[ "Tiantian Wang", "Haodong Zhang", "Lu Chen", "Dongqi Wang", "Yue Wang", "Rong Xiong" ]
Human-robot motion retargeting is a crucial approach for quickly learning motion skills. Achieving real-time retargeting demands high levels of synchronization and accuracy. Even though existing retargeting methods compute swiftly, they still cause a time-delay effect in synchronous retargeting. To mitigate this issue, this paper proposes a motion retargeting method guided by prediction, whi...
Deep Probabilistic Movement Primitives with a Bayesian Aggregator
https://ieeexplore.ieee.org/document/10342441/
[ "Michael Przystupa", "Faezeh Haghverd", "Martin Jagersand", "Samuele Tosatto" ]
Movement primitives are trainable parametric models that reproduce robotic movements starting from a limited set of demonstrations. Previous works proposed simple linear models that exhibited high sample efficiency and generalization power by allowing temporal modulation of movements (reproducing movements faster or slower), blending (merging two movements into one), via-point conditioning (const...
Self-Supervised Visual Motor Skills via Neural Radiance Fields
https://ieeexplore.ieee.org/document/10341682/
[ "Paul Gesel", "Noushad Sojib", "Momotaz Begum" ]
In this paper, we propose a novel network architecture for visual imitation learning that exploits neural radiance fields (NeRFs) and key-point correspondence for self-supervised visual motor policy learning. The proposed network architecture incorporates a dynamic system output layer for policy learning. Combining the stability and goal adaption properties of dynamic systems with the robustness o...
Autonomous Ultrasound Scanning Towards Standard Plane Using Interval Interaction Probabilistic Movement Primitives
https://ieeexplore.ieee.org/document/10341685/
[ "Yi Hu", "Mahdi Tavakoli" ]
Learning from demonstrations is the paradigm where robots acquire new skills demonstrated by an expert and alleviate the physical burden on experts to perform repetitive tasks. Ultrasound scanning is one of the ways to view the anatomical structures of soft tissues, but it is repetitive for some tissue scanning tasks. In this study, an autonomous ultrasound scanning towards a standard plane framew...
Automated Key Action Detection for Closed Reduction of Pelvic Fractures by Expert Surgeons in Robot-Assisted Surgery
https://ieeexplore.ieee.org/document/10342019/
[ "Ming-Zhang Pan", "Ya-Wen Deng", "Zhen Li", "Yuan Chen", "Xiao-Lan Liao", "Gui-Bin Bian" ]
Pelvic fractures are one of the most serious traumas in orthopedics, and the technical proficiency and expertise of the surgical team strongly influence the quality of reduction results. With the advancement of information technology and robotics, robot-assisted pelvic fracture reduction surgery is expected to reduce the impact caused by inexperienced doctors and improve the accuracy and stability...
LAMP: Leveraging Language Prompts for Multi-Person Pose Estimation
https://ieeexplore.ieee.org/document/10341430/
[ "Shengnan Hu", "Ce Zheng", "Zixiang Zhou", "Chen Chen", "Gita Sukthankar" ]
Human-centric visual understanding is an important desideratum for effective human-robot interaction. In order to navigate crowded public places, social robots must be able to interpret the activity of the surrounding humans. This paper addresses one key aspect of human-centric visual understanding, multi-person pose estimation. Achieving good performance on multi-person pose estimation in crowded...
Detecting Changes in Functional State: A Comparative Analysis Using Wearable Sensors and a Sensorized Tip
https://ieeexplore.ieee.org/document/10341723/
[ "Janire Otamendi", "Asier Zubizarreta" ]
Gait analysis can provide relevant information about the physical and neurological conditions of individuals. For this reason, several studies have recently been carried out in an attempt to monitor people's gait and automatically detect gait anomalies. Among the various monitoring systems available for gait analysis, wearable sensors are considered the gold standard due to their wide capture rang...
DiffuPose: Monocular 3D Human Pose Estimation via Denoising Diffusion Probabilistic Model
https://ieeexplore.ieee.org/document/10342204/
[ "Jeongjun Choi", "Dongseok Shim", "H. Jin Kim" ]
Thanks to the development of 2D keypoint detectors, monocular 3D human pose estimation (HPE) via 2D-to-3D uplifting approaches have achieved remarkable improvements. Still, monocular 3D HPE is a challenging problem due to the inherent depth ambiguities and occlusions. To handle this problem, many previous works exploit temporal information to mitigate such difficulties. However, there are many rea...
BodySLAM++: Fast and Tightly-Coupled Visual-Inertial Camera and Human Motion Tracking
https://ieeexplore.ieee.org/document/10342291/
[ "Dorian F. Henning", "Christopher Choi", "Simon Schaefer", "Stefan Leutenegger" ]
Robust, fast, and accurate human state - 6D pose and posture - estimation remains a challenging problem. For real-world applications, the ability to estimate the human state in real-time is highly desirable. In this paper, we present BodySLAM++, a fast, efficient, and accurate human and camera state estimation framework relying on visual-inertial data. BodySLAM++ extends an existing visual-inertial...
Characterizing the Onset and Offset of Motor Imagery During Passive Arm Movements Induced by an Upper-Body Exoskeleton
https://ieeexplore.ieee.org/document/10342492/
[ "Kanishka Mitra", "Frigyes Samuel Racz", "Satyam Kumar", "Ashish D. Deshpande", "José Del R. Millán" ]
Two distinct technologies have gained attention lately due to their prospects for motor rehabilitation: robotics and brain-machine interfaces (BMIs). Harnessing their combined efforts is a largely uncharted and promising direction that has immense clinical potential. However, a significant challenge is whether motor intentions from the user can be accurately detected using non-invasive BMIs in the...
CLiFF-LHMP: Using Spatial Dynamics Patterns for Long-Term Human Motion Prediction
https://ieeexplore.ieee.org/document/10342031/
[ "Yufei Zhu", "Andrey Rudenko", "Tomasz P. Kucner", "Luigi Palmieri", "Kai O. Arras", "Achim J. Lilienthal", "Martin Magnusson" ]
Human motion prediction is important for mobile service robots and intelligent vehicles to operate safely and smoothly around people. The more accurate predictions are, particularly over extended periods of time, the better a system can, e.g., assess collision risks and plan ahead. In this paper, we propose to exploit maps of dynamics (MoDs, a class of general representations of place-dependent sp...
GloPro: Globally-Consistent Uncertainty-Aware 3D Human Pose Estimation & Tracking in the Wild
https://ieeexplore.ieee.org/document/10342032/
[ "Simon Schaefer", "Dorian F. Henning", "Stefan Leutenegger" ]
An accurate and uncertainty-aware 3D human body pose estimation is key to enabling truly safe but efficient human-robot interactions. Current uncertainty-aware methods in 3D human pose estimation are limited to predicting the uncertainty of the body posture, while effectively neglecting the body shape and root pose. In this work, we present GloPro, which is, to the best of our knowledge, the first fram...
Anytime, Anywhere: Human Arm Pose from Smartwatch Data for Ubiquitous Robot Control and Teleoperation
https://ieeexplore.ieee.org/document/10341624/
[ "Fabian C Weigend", "Shubham Sonawani", "Michael Drolet", "Heni Ben Amor" ]
This work devises an optimized machine learning approach for human arm pose estimation from a single smart-watch. Our approach results in a distribution of possible wrist and elbow positions, which allows for a measure of uncertainty and the detection of multiple possible arm posture solutions, i.e., multimodal pose distributions. Combining estimated arm postures with speech recognition, we turn t...
Recognizing Real-World Intentions using A Multimodal Deep Learning Approach with Spatial-Temporal Graph Convolutional Networks
https://ieeexplore.ieee.org/document/10341981/
[ "Jiaqi Shi", "Chaoran Liu", "Carlos Toshinori Ishi", "Bowen Wu", "Hiroshi Ishiguro" ]
Identifying intentions is a critical task for comprehending the actions of others, anticipating their future behavior, and making informed decisions. However, it is challenging to recognize intentions due to the uncertainty of future human activities and the complex influencing factors. In this work, we explore the method of recognizing intentions implied by human behaviors in the real world, aim...
VADER: Vector-Quantized Generative Adversarial Network for Motion Prediction
https://ieeexplore.ieee.org/document/10342324/
[ "Mohammad Samin Yasar", "Tariq Iqbal" ]
Human motion prediction is an essential component for enabling close-proximity human-robot collaboration. The task of accurately predicting human motion is non-trivial and is compounded by the variability of human motion and the presence of multiple humans in proximity. To address some of the open challenges in motion prediction, in this work, we propose VADER, a novel sequence learning algorithm ...
SG-LSTM: Social Group LSTM for Robot Navigation Through Dense Crowds
https://ieeexplore.ieee.org/document/10341954/
[ "Rashmi Bhaskara", "Maurice Chiu", "Aniket Bera" ]
As personal robots become increasingly accessible and affordable, their applications extend beyond large corporate warehouses and factories to operate in diverse, less controlled environments, where they interact with larger groups of people. In such contexts, ensuring not only safety and efficiency but also mitigating potential adverse psychological impacts on humans and adhering to unwritten soc...
Online Continual Learning for Robust Indoor Object Recognition
https://ieeexplore.ieee.org/document/10341474/
[ "Umberto Michieli", "Mete Ozay" ]
Vision systems mounted on home robots need to interact with unseen classes in changing environments. Robots have limited computational resources, labelled data and storage capability. These requirements pose some unique challenges: models should adapt without forgetting past knowledge in a data- and parameter-efficient way. We characterize the problem as few-shot (FS) online continual learning (OC...
PaintNet: Unstructured Multi-Path Learning from 3D Point Clouds for Robotic Spray Painting
https://ieeexplore.ieee.org/document/10341480/
[ "Gabriele Tiboni", "Raffaello Camoriano", "Tatiana Tommasi" ]
Popular industrial robotic problems such as spray painting and welding require (i) conditioning on free-shape 3D objects and (ii) planning of multiple trajectories to solve the task. Yet, existing solutions make strong assumptions on the form of input surfaces and the nature of output paths, resulting in limited approaches unable to cope with real-data variability. By leveraging on recent advances...
Switching Head-Tail Funnel UNITER for Dual Referring Expression Comprehension with Fetch-and-Carry Tasks
https://ieeexplore.ieee.org/document/10342165/
[ "Ryosuke Korekata", "Motonari Kambara", "Yu Yoshida", "Shintaro Ishikawa", "Yosuke Kawasaki", "Masaki Takahashi", "Komei Sugiura" ]
This paper describes a domestic service robot (DSR) that fetches everyday objects and carries them to specified destinations according to free-form natural language instructions. Given an instruction such as “Move the bottle on the left side of the plate to the empty chair,” the DSR is expected to identify the bottle and the chair from multiple candidates in the environment and carry the target ob...
FeatDANet: Feature-level Domain Adaptation Network for Semantic Segmentation
https://ieeexplore.ieee.org/document/10341639/
[ "Jiao Li", "Wenjun Shi", "Dongchen Zhu", "Guanghui Zhang", "Xiaolin Zhang", "Jiamao Li" ]
Unsupervised domain adaptation (UDA) is proposed to better adapt the network trained on labeled synthetic data to unlabeled real-world data for addressing the annotation cost. However, most of these methods pay more attention to domain distributions in input and output stages while ignoring the important differences in semantic expressions and local details in middle feature stages. Therefore, a n...
BlinkFlow: A Dataset to Push the Limits of Event-Based Optical Flow Estimation
https://ieeexplore.ieee.org/document/10341802/
[ "Yijin Li", "Zhaoyang Huang", "Shuo Chen", "Xiaoyu Shi", "Hongsheng Li", "Hujun Bao", "Zhaopeng Cui", "Guofeng Zhang" ]
Event cameras provide high temporal precision, low data rates, and high dynamic range visual perception, which are well-suited for optical flow estimation. While data-driven optical flow estimation has obtained great success in RGB cameras, its generalization performance is seriously hindered in event cameras mainly due to the limited and biased training data. In this paper, we present a novel sim...
Discovering Adaptable Symbolic Algorithms from Scratch
https://ieeexplore.ieee.org/document/10341979/
[ "Stephen Kelly", "Daniel S. Park", "Xingyou Song", "Mitchell McIntire", "Pranav Nashikkar", "Ritam Guha", "Wolfgang Banzhaf", "Kalyanmoy Deb", "Vishnu Naresh Boddeti", "Jie Tan", "Esteban Real" ]
Autonomous robots deployed in the real world will need control policies that rapidly adapt to environmental changes. To this end, we propose AutoRobotics-Zero (ARZ), a method based on AutoML-Zero that discovers zero-shot adaptable policies from scratch. In contrast to neural network adaption policies, where only model parameters are optimized, ARZ can build control algorithms with the full express...
Visual Pre-Training for Navigation: What Can We Learn from Noise?
https://ieeexplore.ieee.org/document/10342521/
[ "Yanwei Wang", "Ching-Yun Ko", "Pulkit Agrawal" ]
One powerful paradigm in visual navigation is to predict actions from observations directly. Training such an end-to-end system allows representations useful for downstream tasks to emerge automatically. However, the lack of inductive bias makes this system data inefficient. We hypothesize a sufficient representation of the current view and the goal view for a navigation policy can be learned by p...