Source: Robohub / by Women in Robotics It’s Ada Lovelace Day and once again we’re delighted to introduce you to “30 women in robotics you need to know about”! From 13-year-old Avye Couloute to Bala Krishnamurthy, who worked alongside the ‘Father of Robotics’ Joseph Engelberger in the 1970s and 1980s, these women showcase a wide range of roles in robotics. We hope these short bios will provide a world of …Read More
Tag Archives: research
It matters to me if you are human – Examining categorical perception in human and nonhuman agents
Source: Science Direct
Authors: Eva Wiese, Patrick P. Weis
Highlights:
- Evidence for categorical perception for human as well as nonhuman agent spectra.
- Cognitive conflict maxima located around category boundaries for human and nonhuman agent spectra.
- Enhanced cognitive conflict processing for the human as opposed to nonhuman agent spectra category boundaries.
- Category boundary shift stronger for human than for nonhuman agent spectra.
- Categorical perception quantitatively, not qualitatively, different in human agent spectra.
Abstract: Humanlike but …Read More
Better together: human and robot co-workers
Source: Science Daily More and more processes are being automated and digitised. Self-driving delivery vehicles, such as forklifts, are finding their way into many areas, and companies are reporting potential time and cost savings.
TECHNICAL GUEST POST: Reactive grasping using tactile sensors (Fraunhofer-IFF)
Industrial bin-picking applications usually involve a vision system. These systems identify and localize the objects to be picked, and often suggest an optimal picking strategy. In such situations, the robot moves from its original position to the grasp position of the object inside the bin (as defined by the vision system), grasps the object, and moves it to the final placing position. However, this whole process raises some …Read More
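The open-loop cycle described above can be sketched in a few lines. This is a toy model under loose assumptions: all names (`Grasp`, `pick_and_place`, the motion commands) are hypothetical, standing in for a vendor's robot and camera APIs, and there is deliberately no tactile feedback at the grasp step, which is exactly the gap reactive grasping addresses.

```python
# Toy sketch of a vision-driven, open-loop bin-picking cycle.
# All names here are hypothetical, not a real robot API.
from dataclasses import dataclass

@dataclass
class Grasp:
    x: float
    y: float
    z: float

def pick_and_place(detections, place_pose, moves):
    """Execute one open-loop pick per detected object.

    `detections` is a list of Grasp poses suggested by the vision
    system; `moves` collects the issued commands for inspection.
    """
    for grasp in detections:
        moves.append(("move_to", grasp))   # approach the grasp pose from the bin image
        moves.append(("close_gripper",))   # grasp blindly -- no tactile feedback here
        moves.append(("move_to", place_pose))
        moves.append(("open_gripper",))
    return moves

log = pick_and_place([Grasp(0.1, 0.2, 0.05)], Grasp(0.5, 0.0, 0.1), [])
```

Because the grasp command runs blind, any slip or mislocalization between the camera snapshot and the grasp goes unnoticed; tactile sensors close that loop.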
A dataset of daily interactive manipulation (ABSTRACT)
Source: The International Journal of Robotics Research Authors: Yongqiang Huang, Yu Sun Publication: First published May 13, 2019 Robots that succeed in factories may struggle to complete even the simplest daily task that humans take for granted, because the change of environment makes the task exceedingly difficult. Aiming to teach robots to perform daily interactive manipulation in a changing environment using human demonstrations, we collected our own data of interactive …Read More
TECHNICAL GUEST POST: Human-aware motion planning (CNR-STIIMA)
What does human-aware motion planning mean? In human-robot collaborative cells, one of the main problems is the robot stopping or slowing down due to the presence of a human. The origin of this problem is the robot’s lack of awareness of human motion. In fact, motion planners either treat the current human position as a mere obstacle, or they do not take care of …Read More
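One way to make the contrast concrete: instead of treating the human's current position as a static obstacle, a human-aware planner can score candidate robot paths against *predicted* human positions over time. The function below is a minimal illustrative sketch of such a cost, not CNR-STIIMA's actual planner; the threshold and penalty values are arbitrary assumptions.

```python
# Toy human-aware path cost: path length plus a penalty for coming
# close to the predicted human position at each timestep.
# Illustrative only; thresholds and weights are arbitrary.
import math

def path_cost(path, human_prediction, safe_dist=0.5, penalty=10.0):
    """`path` and `human_prediction` are equal-length lists of (x, y)
    waypoints sampled at the same timesteps."""
    cost = 0.0
    for i, (p, h) in enumerate(zip(path, human_prediction)):
        if i > 0:
            cost += math.dist(p, path[i - 1])     # path-length term
        d = math.dist(p, h)
        if d < safe_dist:
            cost += penalty * (safe_dist - d)     # proximity penalty
    return cost

# A path that swerves away from the predicted human scores lower
# than a shorter one that stays close to the human the whole time:
human    = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0)]
straight = [(0.0, 0.1), (0.1, 0.1), (0.2, 0.1)]
swerve   = [(0.0, 0.1), (0.1, 0.6), (0.2, 0.1)]
```

Under this cost, the longer swerving path wins because the proximity penalties it avoids outweigh its extra length, which is the behavioural difference between a human-aware planner and one that only reacts to the human as an obstacle.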
TECHNICAL GUEST POST: Multifunctional gripper (Mondragon Assembly)
What technologies are necessary to optimize the use of a multifunctional gripper? The use of bin-picking applications is increasing in industry. Generally speaking, these applications carry out operations of picking different products, previously identified by a vision system, out of a box. The products to be picked can differ in shape, weight and form. To ensure that an object is picked up, a technology capable of …Read More
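The decision a multifunctional gripper has to make per object can be sketched as a simple rule-based selector. The categories and thresholds below are illustrative assumptions for the sake of the example, not Mondragon Assembly's actual selection logic.

```python
# Toy per-object gripper-technology selector. The object properties,
# thresholds and tool names are illustrative assumptions.

def select_tool(obj):
    """Pick a gripping technology from coarse object properties.

    `obj` is a dict with 'weight_kg', 'surface' ('flat'|'irregular')
    and 'rigid' (bool).
    """
    if obj["surface"] == "flat" and obj["weight_kg"] < 1.0:
        return "suction"        # light objects with a flat face suit vacuum cups
    if obj["rigid"]:
        return "parallel_jaw"   # rigid, irregular parts need mechanical fingers
    return "soft_gripper"       # deformable products need compliant grasping

print(select_tool({"weight_kg": 0.3, "surface": "flat", "rigid": True}))
```

In practice the vision system's classification feeds this decision, so the object properties arrive from the same pipeline that localizes the part in the box.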
Library of actions: Implementing a generic robot execution framework by using manipulation action semantics (ABSTRACT)
Source: The International Journal of Robotics Research Authors: Mohamad Javad Aein, Eren Erdal Aksoy, Florentin Wörgötter Publication: First published May 29, 2019 When a robot has to imitate an observed action sequence, it must first understand the inherent characteristic features of the individual actions. Such features need to reflect the semantics of the action with a high degree of invariance between different demonstrations of the same action. At the same …Read More
TECHNICAL GUEST POST: Object selection and detection (IK4-TEKNIKER)
Which approach are you using for object selection and detection within the PICK-PLACE project? Due to the large number of references that the system needs to be able to cope with, we are using a deep-learning-based approach for object identification, segmentation and grasping-point selection. There are several steps involved in generating a deep learning model. First, a dataset with images of different objects needs to …Read More
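To make the last step of that pipeline concrete: once a segmentation model has produced a per-object binary mask, one simple grasp-point candidate is the mask's centroid. The sketch below uses a hard-coded mask as a stand-in for a trained model's output; the function name and this centroid heuristic are assumptions for illustration, not IK4-TEKNIKER's method.

```python
# Illustrative grasp-point selection from a binary segmentation mask.
# The mask here is a hard-coded stand-in for a deep-learning model's output.
import numpy as np

def grasp_point_from_mask(mask):
    """Return the (row, col) centroid of a binary segmentation mask."""
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

mask = np.zeros((10, 10), dtype=bool)
mask[4:7, 4:7] = True                # pretend the model segmented a 3x3 object
print(grasp_point_from_mask(mask))   # (5.0, 5.0)
```

A real system would refine this candidate with surface normals or a learned grasp-quality score, but the centroid illustrates where the segmentation output enters the grasp-selection step.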
Sensor-packed glove learns signatures of the human grasp (ABSTRACT)
Source: Science Daily Wearing a sensor-packed glove while handling a variety of objects, MIT researchers have compiled a massive dataset that enables an AI system to recognize objects through touch alone. The information could be leveraged to help robots identify and manipulate objects, and may aid in prosthetics design.