Shelton Stafford published 1 year, 3 months ago
In particular, co-evolution enables the learned knowledge to be orchestrated on the fly, expediting convergence in the target optimization task. We have conducted an extensive series of experiments across a set of practically motivated discrete and continuous optimization examples comprising a large number of source task instances, of which only a small fraction indicate source-target relatedness. The experimental results show that our proposed framework not only scales efficiently with a growing number of source tasks but is also effective in capturing relevant knowledge despite the sparsity of related sources, fulfilling the two salient features of scalability and online learning agility.

Automatic coronary artery segmentation is of great value in diagnosing coronary disease. In this paper, we propose an automatic coronary artery segmentation method for coronary computerized tomography angiography (CCTA) images based on a deep convolutional neural network. The proposed method consists of three steps. First, to improve the efficiency and effectiveness of the segmentation, a 2D DenseNet classification network is used to screen out the non-coronary-artery slices. Second, we propose a coronary artery segmentation network based on the 3D-UNet, which is capable of extracting, fusing and rectifying features efficiently for accurate coronary artery segmentation. Specifically, in the encoding process of the 3D-UNet network, we adapt the dense block into the 3D-UNet so that it can extract rich and representative features for coronary artery segmentation; in the decoding process, 3D residual blocks with feature rectification capability are applied to further improve the segmentation quality. Third, we introduce a Gaussian weighting method to obtain the final segmentation results. When merging the segmentation results of spatially overlapping data blocks, this operation highlights the more reliable segmentations at the center of the 3D data blocks while down-weighting the less reliable segmentations at the block boundaries. Experiments demonstrate that our proposed method achieves a Dice Similarity Coefficient (DSC) value of 0.826 on a CCTA dataset constructed by us. The code of the proposed method is available at https://github.com/alongsong/3D_CAS.
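The Gaussian weighting step described in the coronary segmentation abstract lends itself to a compact illustration. Below is a minimal sketch (not the authors' code) of Gaussian-weighted fusion of overlapping 3D patch predictions; the sigma scale and all function names are assumptions for illustration only.

```python
import numpy as np

def gaussian_weight_map(patch_shape, sigma_scale=0.25):
    """Build a 3D Gaussian weight map that peaks at the patch centre.

    sigma_scale is an assumed hyperparameter, not a value from the paper.
    """
    grids = np.meshgrid(*[np.arange(s) for s in patch_shape], indexing="ij")
    centre = [(s - 1) / 2.0 for s in patch_shape]
    sigmas = [s * sigma_scale for s in patch_shape]
    dist2 = sum(((g - c) / sg) ** 2 for g, c, sg in zip(grids, centre, sigmas))
    return np.exp(-0.5 * dist2)

def fuse_patch_predictions(volume_shape, patches, origins, sigma_scale=0.25):
    """Merge overlapping patch-wise probability maps into one volume.

    patches : list of 3D arrays with per-voxel coronary probabilities
    origins : list of (z, y, x) offsets of each patch in the full volume
    """
    accum = np.zeros(volume_shape, dtype=np.float64)
    weights = np.zeros(volume_shape, dtype=np.float64)
    for pred, (z, y, x) in zip(patches, origins):
        w = gaussian_weight_map(pred.shape, sigma_scale)
        sl = (slice(z, z + pred.shape[0]),
              slice(y, y + pred.shape[1]),
              slice(x, x + pred.shape[2]))
        accum[sl] += w * pred        # centre voxels contribute more
        weights[sl] += w
    return accum / np.clip(weights, 1e-8, None)  # weighted average per voxel
```

A voxel covered by several overlapping blocks thus receives a weighted average in which predictions near block boundaries are suppressed in favor of those near block centers.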
In this paper, a novel denoising method for electrocardiogram (ECG) signals is proposed to improve performance and availability under multiple noise conditions. The method is based on the framework of the conditional generative adversarial network (CGAN), which we improve for ECG denoising. The proposed framework consists of two networks: a generator composed of an optimized convolutional auto-encoder (CAE) and a discriminator composed of four convolutional layers and one fully connected layer. As the convolutional layers of the CAE preserve spatial locality and the neighborhood relations in the latent higher-level feature representations of the ECG signal, and the skip connections facilitate gradient propagation during denoising training, the trained denoising model has good performance and generalization ability. Extensive experimental results on the MIT-BIH databases show that for single noise and mixed noises, the average signal-to-noise ratio (SNR) of the denoised ECG signal is above 39 dB, which is better than that of state-of-the-art methods. Furthermore, the classification results on denoised signals for four cardiac diseases show that the average accuracy increases by more than 32% under multiple noise types at SNR = 0 dB. Thus, the proposed method removes noise effectively while preserving the detailed features of ECG signals.

Machine learning models have been successfully employed in the diagnosis of Schizophrenia. However, the impact of the classification models and the feature selection techniques on this diagnosis has not been evaluated. Here, we sought to assess the performance of classification models combined with different feature selection approaches on structural magnetic resonance imaging data. The data consist of 72 subjects with Schizophrenia and 74 healthy control subjects. We evaluated classification algorithms based on support vector machines (SVM), random forests, kernel ridge regression and randomized neural networks. Moreover, we evaluated the t-test, Receiver Operating Characteristic (ROC), Wilcoxon, entropy, Bhattacharyya, Minimum Redundancy Maximum Relevance (MRMR) and Neighbourhood Component Analysis (NCA) feature selection techniques. Based on the evaluation, SVM models with a Gaussian kernel proved better than the other classification models, and Wilcoxon feature selection emerged as the best feature selection approach. Moreover, in terms of data modality, the performance on the integration of grey matter and white matter proved better than on grey or white matter individually. Our evaluation showed that the classification algorithms, together with the feature selection approaches, impact the diagnosis of Schizophrenia. This indicates that proper selection of the features and the classification models can improve the diagnosis of Schizophrenia.
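As a rough illustration of the kind of pipeline the schizophrenia abstract describes (Wilcoxon-based feature selection followed by a Gaussian-kernel SVM), here is a minimal sketch; the number of retained features, the cross-validation setup, the placeholder data and all names are assumptions, not details from the paper.

```python
import numpy as np
from scipy.stats import ranksums
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wilcoxon_rank_features(X, y):
    """Rank features by the absolute Wilcoxon rank-sum statistic between groups."""
    scores = np.array([abs(ranksums(X[y == 0, j], X[y == 1, j]).statistic)
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1]          # most discriminative first

# X: subjects x features (e.g., concatenated grey- and white-matter measures)
# y: 0 = healthy control, 1 = schizophrenia
rng = np.random.default_rng(0)
X = rng.normal(size=(146, 500))              # placeholder data: 72 + 74 subjects
y = np.array([1] * 72 + [0] * 74)

top_k = 100                                   # assumed number of retained features
selected = wilcoxon_rank_features(X, y)[:top_k]

clf = Pipeline([("scale", StandardScaler()),
                ("svm", SVC(kernel="rbf", C=1.0, gamma="scale"))])
# Note: in a real study the feature selection should be nested inside the
# cross-validation folds to avoid information leakage; this sketch skips that.
acc = cross_val_score(clf, X[:, selected], y, cv=5).mean()
print(f"5-fold CV accuracy on selected features: {acc:.3f}")
```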
This brief focuses on reachable set estimation for memristive complex-valued neural networks (MCVNNs) with disturbances. Based on algebraic calculations and the Gronwall-Bellman inequality, the states of MCVNNs with bounded input disturbances are shown to converge within a sphere, from which the convergence speed is also obtained. In addition, an observer for MCVNNs is designed. Two illustrative simulations are given to show the effectiveness of the obtained conclusions.

Existing supervised methods have achieved impressive performance in forecasting skeleton-based human motion. However, they often rely on action class labels in both the training and inference phases. In practice, it can be a burden to request action class labels in the inference phase, and even in the training phase, the collected labels may be incomplete for sequences containing a mixture of multiple actions. In this article, we treat action class labels as a kind of privileged supervision that exists only in the training phase. We design a new architecture that includes motion classification as an auxiliary task alongside motion prediction. To deal with potentially missing labels of motion sequences, we propose a new classification loss function that exploits their relationships with the observed labels, and a perceptual loss that measures the difference between the ground-truth sequence and the generated sequence in the classification task. Experimental results on the most challenging Human3.6M dataset and the Carnegie Mellon University (CMU) dataset demonstrate the effectiveness of the proposed algorithm in exploiting action class labels for improved modeling of human dynamics.

Deep graph networks (DGNs) are a family of machine learning models for structured data that are finding heavy application in the life sciences (drug repurposing, molecular property prediction) and on social network data (recommendation systems). The privacy- and safety-critical nature of such domains motivates the need for effective explainability methods for this family of models. So far, progress in this field has been challenged by the combinatorial nature and complexity of graph structures. In this respect, we present a novel local explanation framework specifically tailored to graph data and DGNs. Our approach leverages reinforcement learning to generate meaningful local perturbations of the input graph whose prediction we seek to interpret. These perturbed data points are obtained by optimizing a multiobjective score that takes into account similarities both at the structural level and at the level of the deep model outputs. By this means, we are able to populate a set of informative neighboring samples for the query graph, which is then used to fit an interpretable model to the predictive behavior of the deep network locally around the query graph prediction. We show the effectiveness of the proposed explainer through a qualitative analysis on two chemistry datasets, TOX21 and Estimated SOLubility (ESOL), and through quantitative results on a benchmark dataset for explanations, CYCLIQ.

With the development of artificial intelligence, speech recognition and prediction have become important research domains with wide applications, such as intelligent control, education, individual identification, and emotion analysis. Chinese poetry reading contains rich features of continuous pronunciation, such as mood, emotion, rhythm schemes, lyric reading, and artistic expression. Therefore, predicting the pronunciation characteristics of Chinese poetry reading is significant for demonstrating high-level machine intelligence and has the potential to support an intelligent system for teaching children to read Tang poetry. The Mel frequency cepstral coefficient (MFCC) is currently used to represent important speech features. Due to the complexity and high degree of nonlinearity in poetry reading, however, accurate pronunciation feature prediction faces a tough challenge: how to model complex spatial correlations and temporal dynamics, such as rhyme schemes. Many current methods ignore the spatial and temporal characteristics in the MFCC representation, and they are also subject to limitations in long-term prediction. To solve these problems, we propose a novel spatial-temporal graph model based on multihead attention (STGM-MHA) for the pronunciation feature prediction of Chinese poetry. The STGM-MHA is designed with an encoder-decoder structure: the encoder compresses the data into a hidden space representation, while the decoder reconstructs the output from that representation. In the model, a novel gated recurrent unit module based on multihead attention (AGRU) is proposed to effectively extract the spatial and temporal features of MFCC data. The evaluation of our proposed model against state-of-the-art methods on six datasets reveals its clear advantage.
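The AGRU module is described only at a high level in the abstract above; one plausible reading is a GRU encoder whose hidden states are refined by multihead self-attention. The sketch below follows that reading; the layer sizes, the ordering of GRU and attention, and all names are assumptions rather than details from the paper.

```python
import torch
import torch.nn as nn

class AGRU(nn.Module):
    """Assumed sketch: GRU over MFCC frames followed by multihead self-attention."""

    def __init__(self, mfcc_dim=39, hidden_dim=128, num_heads=4):
        super().__init__()
        self.gru = nn.GRU(mfcc_dim, hidden_dim, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(hidden_dim)

    def forward(self, x):
        # x: (batch, frames, mfcc_dim)
        h, _ = self.gru(x)            # per-frame temporal features
        a, _ = self.attn(h, h, h)     # each frame attends to the whole sequence
        return self.norm(h + a)       # residual connection + normalization

# toy usage: 8 sequences of 200 MFCC frames with 39 coefficients each
feats = AGRU()(torch.randn(8, 200, 39))
print(feats.shape)  # torch.Size([8, 200, 128])
```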
Body-centric locomotion allows users to control both movement speed and direction with body parts (e.g., head tilt, arm swing or torso lean) to navigate in virtual reality (VR). However, little research has systematically investigated the effects of the body parts used for speed and direction control on virtual locomotion while taking into account different transfer functions (L: linear function, P: power function, and CL: piecewise function combining a constant and a linear segment). Therefore, we conducted an experiment to evaluate the combined effects of three factors (body parts for direction control, body parts for speed control, and transfer functions) on virtual locomotion. Results showed that (1) the head outperformed the torso for movement direction control in task completion time and environmental collisions; (2) arm-based speed control led to shorter traveled distances than both head- and knee-based control, and head-based speed control caused fewer environmental collisions than knee-based control; (3) body-centric locomotion with the CL function was faster but less accurate than with the L and P functions: task time significantly decreased from P to L to CL, while traveled distance and overshoot significantly increased from P to L to CL, and the L function received the highest USE-S, pragmatic and hedonic ratings; (4) the transfer function had a significant main effect on motion sickness: participants reported more headache and nausea when performing locomotion with the CL function. Our results provide implications for the design of body-centric locomotion in VR applications.
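To make the three transfer functions concrete, below is a minimal sketch of how a normalized body-motion amplitude (e.g., degree of torso lean or arm-swing magnitude mapped to [0, 1]) could be converted to virtual movement speed; the exponent, threshold, and maximum speed are illustrative assumptions, not the values used in the study.

```python
def linear_speed(amplitude, v_max=3.0):
    """L: speed grows proportionally with body-motion amplitude (m/s)."""
    return v_max * amplitude

def power_speed(amplitude, v_max=3.0, exponent=2.0):
    """P: fine control at small amplitudes, rapid growth near full amplitude."""
    return v_max * amplitude ** exponent

def constant_linear_speed(amplitude, v_max=3.0, threshold=0.5, v_const=1.0):
    """CL: constant speed below a threshold, then a linear ramp up to v_max."""
    if amplitude < threshold:
        return v_const
    # ramp from v_const at the threshold to v_max at full amplitude
    return v_const + (v_max - v_const) * (amplitude - threshold) / (1.0 - threshold)

for a in (0.2, 0.5, 0.8, 1.0):
    print(f"a={a:.1f}  L={linear_speed(a):.2f}  "
          f"P={power_speed(a):.2f}  CL={constant_linear_speed(a):.2f}")
```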


