Worm Preston published 1 year, 3 months ago
An essential characteristic of an exploration robot is autonomy, because it will usually carry out its task in remote or hard-to-reach places. One of the primary inputs to a navigation system is the information that its sensors can acquire about the environment in which it operates. For this reason, an algorithm based on convolutional neural networks is proposed for the detection of rocks in Mars-like environments. The methodology is based on a Single-Shot Detector (SSD) network architecture, which has been modified and evaluated for this task. The main contribution of this study is an alternative methodology for detecting rocks in planetary images, since most previous work focuses only on classification problems and uses handcrafted feature vectors.

Relation classification (RC) aims at extracting structural information, i.e., triplets of two entities and a relation, from free text, which is pivotal for automatic knowledge base construction. In this paper, we investigate a fully automatic method to train an RC model that helps grow the knowledge base. Traditional RC models cannot extract new relations unseen during training, since they define RC as a multiclass classification problem. The recent development of few-shot learning (FSL) provides a feasible way to accommodate fresh relation types with a handful of examples. However, learning a promising few-shot RC model still requires a moderately large amount of training data, which consumes expensive human labor. This issue recalls a family of weak supervision methods, dubbed distant supervision (DS), which can generate training data automatically. To this end, we propose to investigate the task of few-shot relation classification under distant supervision.
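A minimal sketch of the kind of few-shot prototype classifier commonly used for this setting, assuming sentence embeddings from some upstream encoder; all names here are illustrative, not the authors' implementation:

```python
import numpy as np

def prototypes(support, labels):
    """Mean embedding per class computed from the support set.
    support: (N*K, d) embeddings; labels: (N*K,) class ids."""
    classes = np.unique(labels)
    return classes, np.stack([support[labels == c].mean(axis=0) for c in classes])

def classify(query, support, labels):
    """Assign each query embedding to the nearest class prototype
    (squared Euclidean distance, as in prototypical networks)."""
    classes, protos = prototypes(support, labels)
    d = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[np.argmin(d, axis=1)]
```

In the N-way K-shot setting, `support` holds the K labeled examples for each of the N relation types; noise-reduction methods such as multiple-instance learning would reweight or filter the support instances before the prototype mean is taken.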
As DS naturally brings in mislabeled training instances, we incorporate several multiple-instance learning methods into classic prototypical networks to achieve sentence-level noise reduction and alleviate this negative impact. In experiments, we evaluate the proposed model under the standard N-way K-shot setting of few-shot learning; the results show that our proposal achieves better performance.

Stroke is the leading cause of severe disability in adults, resulting in mobility, balance, and coordination deficits. Robotic exoskeletons (REs) for stroke rehabilitation can provide the user with consistent, high-dose repetition of movement, as well as balance and stability. The goal of this intervention study is to evaluate the ability of an RE to provide high-dose gait therapy and the resulting effect on functional recovery for individuals with acute stroke. The investigation included a total of 44 participants: twenty-two received RE gait training during inpatient rehabilitation (RE+SOC group), and a matched sample of 22 individuals admitted to the same inpatient rehabilitation facility received conventional standard-of-care treatment (SOC group). The effect of RE training was quantified using the total distance walked during inpatient rehabilitation and the functional independence measure (FIM). The total distance walked differed significantly between the SOC and RE+SOC groups: RE+SOC walked twice the distance of SOC over the same duration (time spent in inpatient rehabilitation) of training. In addition, the average change in motor FIM differed significantly between the groups, being higher in RE+SOC than in SOC. These results suggest that the RE provided an increased dose of gait training without increasing the duration of training during acute stroke rehabilitation.
The RE+SOC group increased their motor FIM score (change from admission to discharge) compared to the SOC group; since both groups were matched for admission motor FIM scores, this suggests that the increased dose may have improved motor function.

The elderly population has increased rapidly in recent years, bringing huge demand for devices serving the elderly, especially those with mobility impairment. Present assistive walkers designed for elderly users are primitive, with limited user interactivity and intelligence. We propose a novel smart robotic walker that targets a convenient-to-use indoor walking aid for the elderly. The walker supports multiple modes of interaction through voice, gait, or haptic touch, and allows intelligent control via learning-based methods to achieve mobility safety. Our design enables a flexible, proactive, and reliable walker because (1) we take a hybrid approach, combining a conventional mobile robotic platform with the existing rollator design to achieve a novel robotic system that fulfills the expected functionalities; (2) our walker tracks the user in front by detecting lower-limb gait, while providing close-proximity walking safety support; (3) our walker can detect human intentions and predict emergency events, e.g., falling, by monitoring force pressure on a specially designed soft-robotic interface on the handle; and (4) our walker performs reinforcement-learning-based sound source localization to locate and navigate to the user based on his or her voice signals. Experimental results demonstrate the sturdy mechanical structure, the reliability of the multiple novel interactions, and the efficiency of the intelligent control algorithms implemented. A demonstration video is available at https://sites.google.com/view/smart-walker-hku.

Quantifying rat behavior through video surveillance is crucial for medicine, neuroscience, and other fields.
In this paper, we focus on the challenging problem of estimating landmark points, such as the rat's eyes and joints, using image processing alone, and on quantifying the rat's motion behavior. First, we placed the rat on a special treadmill and used a high-frame-rate camera to capture its motion. Second, we designed two feature-extraction structures: the cascade convolution network (CCN) and the cascade hourglass network (CHN). Three coordinate calculation methods, fully connected regression (FCR), heatmap maximum position (HMP), and heatmap integral regression (HIR), were used to locate the coordinates of the landmark points. Third, using a strict normalized evaluation criterion, we analyzed the accuracy of the different structures and coordinate calculation methods for rat landmark point estimation at various feature map sizes. The results demonstrated that the CCN structure with the HIR method achieved the highest estimation accuracy of 75%, which is sufficient to accurately track and quantify rat joint motion.

Understanding why deep neural networks and machine learning algorithms act as they do is a difficult endeavor. Neuroscientists face similar problems. One way biologists address this issue is by closely observing behavior while recording neurons or manipulating brain circuits, an approach that has been called neuroethology. In a similar way, neurorobotics can be used to explain how neural network activity leads to behavior. In real-world settings, neurorobots have been shown to perform behaviors analogous to those of animals. Moreover, a neuroroboticist has total control over the network, and by analyzing different neural groups or studying the effect of network perturbations (e.g., simulated lesions), they may be able to explain how the robot's behavior arises from artificial brain activity.
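As a brief aside on the rat landmark study above: heatmap integral regression (HIR) is essentially a soft-argmax, the expected coordinate under a softmax over the heatmap. A minimal sketch, assuming a single-channel heatmap from some upstream network (names illustrative, not the authors' code):

```python
import numpy as np

def soft_argmax(heatmap, beta=1.0):
    """Integral regression: expected (x, y) under softmax(beta * heatmap).
    heatmap: (H, W) array of raw network scores."""
    h, w = heatmap.shape
    p = np.exp(beta * (heatmap - heatmap.max()))  # shift max for numerical stability
    p /= p.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return (p * xs).sum(), (p * ys).sum()
```

Unlike taking the heatmap maximum (HMP), this expectation is differentiable, which is what makes integral regression trainable end to end and sub-pixel accurate.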
In this paper, we review neurorobot experiments, focusing on how the robot's behavior leads to a qualitative and quantitative explanation of neural activity, and vice versa, how neural activity leads to behavior. We suggest that using neurorobots as a form of computational neuroethology can be a powerful methodology for understanding neuroscience, as well as artificial intelligence and machine learning.

Traditionally, the perception-action cycle is the first stage of building an autonomous robotic system and a practical way to implement a low-latency reactive system within a low size, weight, and power (SWaP) package. However, in complex scenarios this method can lack contextual understanding of the scene, such as object-recognition-based tracking or system attention. Object detection, identification, and tracking, along with semantic segmentation and attention, are all modern computer vision tasks in which convolutional neural networks (CNNs) have shown significant success, although such networks often have large computational overheads and power requirements, which are not ideal for smaller robotics tasks. Furthermore, cloud computing and massively parallel processing, as in graphics processing units (GPUs), fall outside the specification of many tasks due to their respective latency and SWaP constraints. In response, spiking convolutional neural networks (SCNNs) look to provide the feature extraction […] robust results of over 96% and 81% for accuracy and intersection over union, ensuring such a system can be successfully used for object recognition, classification, and tracking problems. This demonstrates that the attention of the system can be tracked accurately, while the asynchronous processing means the controller can give precise track updates with minimal latency.

Diverse stereotactic neuro-navigation systems are used daily in neurosurgery, and novel systems are continuously being developed.
Prior to the clinical implementation of new surgical tools, methods, or instruments, in vitro experiments on phantoms should be conducted. A stereotactic neuro-navigation phantom is a rigid or deformable structure resembling the cranium with the intracranial area. Phantoms are essential for testing complete procedures and their workflows, as well as for the final validation of application accuracy. The aim of this study is to provide a systematic review of stereotactic neuro-navigation phantom designs, to identify their most relevant features, and to identify methodologies for measuring the target point error, the entry point error, and the angular error (α). We searched the literature on phantom designs used for evaluating the accuracy of stereotactic neuro-navigation systems, i.e., robotic navigation systems, stereotactic frames, frameless navigation systems, and aiming devices. Eligible articles written in English in the period 2000-2020 were identified through the electronic databases PubMed, IEEE, Web of Science, and Scopus. The majority of phantom designs presented in those articles provide a suitable methodology for measuring the target point error, while objective measurements of the entry point error and angular error are lacking. We identified the need for a universal phantom design that would be compatible with the most common imaging techniques (e.g., computed tomography and magnetic resonance imaging) and suitable for simultaneous measurement of the target point, entry point, and angular errors.

We developed an intuitively operable shoulder disarticulation prosthesis system that can be used without long-term training. The developed system consists of joints with four degrees of freedom, as well as a user-adapting control system based on a machine learning technique and surface electromyography (EMG) of the trunk.
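The accuracy metrics named in the phantom review above follow directly from planned and achieved coordinates. A minimal sketch, assuming 3-D points expressed in a common image-space frame; the function names are illustrative:

```python
import numpy as np

def point_error(planned, actual):
    """Euclidean distance between a planned and an achieved point (e.g., target)."""
    return float(np.linalg.norm(np.asarray(actual) - np.asarray(planned)))

def angular_error(planned_entry, planned_target, actual_entry, actual_target):
    """Angle in degrees between the planned and achieved trajectory vectors."""
    u = np.asarray(planned_target) - np.asarray(planned_entry)
    v = np.asarray(actual_target) - np.asarray(actual_entry)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

The entry point error is the same Euclidean distance evaluated at the entry point rather than the target, so a phantom that exposes both points per trajectory supports all three measurements at once.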
We measured the surface EMG of the trunk of healthy subjects at multiple points and analyzed it through principal component analysis to identify the proper EMG measurement portions of the trunk, which were found to be distributed over the chest and back. Additionally, evaluation experiments demonstrated that four healthy subjects could grasp and move objects in both the horizontal and vertical directions using our developed system controlled via the EMG of the chest and back. Moreover, we quantitatively confirmed that a bilateral shoulder disarticulation amputee could complete the evaluation experiment similarly to the healthy subjects.
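The site-selection step above can be sketched as a principal component analysis over multi-channel EMG: channels with large loadings on the leading components carry most of the dominant activity and are candidate measurement sites. A minimal sketch under that assumption (names illustrative):

```python
import numpy as np

def pca_channel_loadings(emg, n_components=2):
    """PCA over multi-channel EMG.
    emg: (samples, channels) array of recorded signals.
    Returns the leading eigenvalues and the per-channel loadings
    (columns of the eigenvector matrix) of the covariance."""
    x = emg - emg.mean(axis=0)          # center each channel
    cov = np.cov(x, rowvar=False)       # channel-by-channel covariance
    vals, vecs = np.linalg.eigh(cov)    # eigh returns ascending order
    order = np.argsort(vals)[::-1][:n_components]
    return vals[order], vecs[:, order]
```

A channel whose absolute loading dominates the first component contributes most of the variance, which is one simple criterion for choosing where on the chest or back to place electrodes.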


