Holck Levin published 1 year, 3 months ago
Five-fold cross-validation was applied to 70% of the data in each group, and the remaining 30% were used as test data. An F1-score of 0.974 was achieved in classifying the three groups by the long short-term memory network-based classifier, which used the double-limb support, stance, step, and stride times at usual-paced walking and the double- and single-limb support, stance, and stride times at fast-paced walking as inputs. The proposed approach would pave the way for earlier diagnosis of cognitive impairment in non-clinical settings without professional help, which can facilitate more timely intervention.

Individuals with spinal cord injury suffer from seated instability due to impaired trunk neuromuscular function. Monitoring seated stability, toward the development of closed-loop-controlled neuroprosthetic technologies, could be beneficial for restoring trunk stability during sitting in affected individuals. However, there is a lack of (1) a biomechanical characterization quantifying the relationship between trunk kinematics and sitting balance; and (2) a validated wearable biomedical device for assessing dynamic sitting posture and fall risk in real time. This study aims to (a) determine the limit of dynamic seated stability as a function of the trunk center of mass (COM) position and velocity relative to the base of support; (b) experimentally validate the predicted limit of stability using traditional motion capture; (c) compare the predicted limit of stability with those reported in the literature for standing and walking; and (d) validate a wearable device for assessing dynamic seated stability. Close agreement was observed between the trunk COM states estimated by the motion capture system and by inertial measurement units (IMUs). IMU-based wearable technology, along with the predicted limit of dynamic seated stability, can estimate the margin of stability during perturbed sitting.
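The limit-of-stability formulation above parallels the extrapolated centre-of-mass (XCoM) construction commonly used for standing and walking. A minimal sketch follows, in which the trunk pendulum length, base-of-support edge, and COM state are illustrative assumptions, not the study's values:

```python
import math

def extrapolated_com(com_pos, com_vel, pendulum_length, g=9.81):
    """Extrapolated centre of mass (XCoM): position plus velocity
    scaled by the inverted-pendulum eigenfrequency omega0 = sqrt(g/l)."""
    omega0 = math.sqrt(g / pendulum_length)
    return com_pos + com_vel / omega0

def margin_of_stability(com_pos, com_vel, bos_edge, pendulum_length):
    """Signed distance from the XCoM to the boundary of the base of
    support; positive values indicate a dynamically stable state."""
    return bos_edge - extrapolated_com(com_pos, com_vel, pendulum_length)

# Hypothetical seated posture: trunk COM 0.05 m from the midline, BoS edge
# at 0.20 m, COM moving forward at 0.10 m/s, 0.45 m trunk "pendulum" length.
mos = margin_of_stability(com_pos=0.05, com_vel=0.10, bos_edge=0.20,
                          pendulum_length=0.45)
print(round(mos, 3))  # 0.129
```

A positive margin means the extrapolated COM still falls inside the base of support; a closed-loop neuroprosthesis could, in principle, act when this margin approaches zero.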
Therefore, it has the potential to monitor the seated stability of wheelchair users affected by trunk instability.

This article investigates the consensus tracking problem of heterogeneous multivehicle systems (MVSs) under a repeatable control environment. First, a unified iterative learning control (ILC) algorithm is presented for all autonomous vehicles, each of which is governed by both discrete- and continuous-time nonlinear dynamics. Then, several consensus criteria for MVSs with switching topology and external disturbances are established based on our proposed distributed ILC protocols. For discrete-time systems, all vehicles can perfectly track the common reference trajectory over a specified finite time interval, and the corresponding digraphs need not contain spanning trees. Existing approaches for continuous-time systems generally require all vehicles to have strictly identical initial conditions, which is too idealistic in practice. We relax this impractical assumption and propose an additional distributed initial-state learning protocol so that vehicles can take different initial states, with finite-time tracking ultimately achieved regardless of the initial errors. Finally, a numerical example demonstrates the effectiveness of our theoretical results.

Scene classification of high-spatial-resolution (HSR) images can provide data support for many practical applications, such as land planning and utilization, and it has been a crucial research topic in the remote sensing (RS) community. Recently, deep learning methods driven by massive data have shown an impressive ability for feature learning in HSR scene classification, especially convolutional neural networks (CNNs). Although traditional CNNs achieve good classification results, it is difficult for them to effectively capture potential context relationships.
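The distributed ILC idea for the discrete-time case can be illustrated on a toy network of two single-integrator vehicles, where only the first vehicle observes the reference. The dynamics, gain, and topology here are simplifying assumptions (the toy also keeps identical zero initial states, which the article's initial-state learning protocol is designed to relax):

```python
import numpy as np

T, iters, gamma = 10, 100, 0.5
r = np.sin(np.linspace(0, np.pi, T + 1))       # common reference trajectory
A = np.array([[0., 0.], [1., 0.]])             # vehicle 1 receives vehicle 0's state
d = np.array([1., 0.])                         # only vehicle 0 observes the reference

def rollout(u):
    """Single-integrator vehicles x(t+1) = x(t) + u(t), zero initial states."""
    x = np.zeros((2, T + 1))
    for t in range(T):
        x[:, t + 1] = x[:, t] + u[:, t]
    return x

u = np.zeros((2, T))
for _ in range(iters):                         # learn along the iteration axis
    x = rollout(u)
    # distributed consensus tracking error from neighbours and, if seen, the leader
    xi = (A @ x[:, 1:] - A.sum(1, keepdims=True) * x[:, 1:]
          + d[:, None] * (r[1:] - x[:, 1:]))
    u = u + gamma * xi                         # P-type ILC update

final_err = np.abs(rollout(u) - r).max()
```

Each iteration replays the same finite interval with a refined input; for this single-integrator toy the per-iteration contraction factor is |1 - gamma|, so the tracking error over the whole interval vanishes as iterations accumulate.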
Graphs have a powerful capacity to represent the relevance of data, and graph-based deep learning methods can spontaneously learn intrinsic attributes contained in RS images. Inspired by these facts, we develop a deep feature aggregation framework driven by a graph convolutional network (DFAGCN) for HSR scene classification. First, an off-the-shelf CNN pretrained on ImageNet is employed to obtain multilayer features. Second, a graph convolutional network-based model is introduced to effectively reveal patch-to-patch correlations of convolutional feature maps, from which more refined features can be harvested. Finally, a weighted concatenation method is adopted to integrate multiple features (i.e., multilayer convolutional features and fully connected features) by introducing three weighting coefficients, and a linear classifier is then employed to predict the semantic classes of query images. Experiments on the UCM, AID, RSSCN7, and NWPU-RESISC45 data sets demonstrate that the proposed DFAGCN framework obtains more competitive performance than several state-of-the-art scene classification methods in terms of overall accuracies (OAs).

The Gaussian-Bernoulli restricted Boltzmann machine (GB-RBM) is a useful generative model that captures meaningful features from given n-dimensional continuous data. The difficulties associated with learning the GB-RBM have been reported extensively in earlier studies. They indicate that training the GB-RBM with the current standard algorithms, namely contrastive divergence (CD) and persistent contrastive divergence (PCD), needs a carefully chosen small learning rate to avoid divergence, which in turn results in slow learning. In this work, we alleviate these difficulties by showing that the negative log-likelihood of a GB-RBM can be expressed as a difference of convex functions if we keep the variance of the conditional distribution of the visible units (given the hidden unit states) and the biases of the visible units constant.
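A difference-of-convex structure is exactly what DC programming (the DCA, or convex-concave procedure) exploits: repeatedly linearize the concave part and minimize the remaining convex problem. A one-dimensional toy with f(x) = x^4 - 2x^2, so g(x) = x^4 and h(x) = 2x^2, chosen purely for illustration and unrelated to the GB-RBM likelihood itself:

```python
def dca_step(x):
    """One DCA step for f(x) = g(x) - h(x) with g(x) = x**4, h(x) = 2*x**2:
    linearize h at x (gradient 4*x) and minimize g(y) - 4*x*y exactly."""
    # argmin_y y**4 - 4*x*y  =>  4*y**3 = 4*x  =>  y = x**(1/3)
    return x ** (1.0 / 3.0)    # valid for the positive start used below

x = 2.0                        # initial point, kept positive
for _ in range(50):
    x = dca_step(x)
print(round(x, 6))             # 1.0, a global minimizer of x**4 - 2*x**2
```

Each subproblem is convex and solved in closed form, and f decreases monotonically along the iterates; the stochastic S-DCP algorithm applies this principle with sampled gradients.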
Using this, we propose a stochastic difference of convex functions programming (S-DCP) algorithm for learning the GB-RBM. We present extensive empirical studies on several benchmark data sets to validate the performance of the S-DCP algorithm. S-DCP outperforms the CD and PCD algorithms in terms of both the speed of learning and the quality of the learned generative model.

The linear discriminant analysis (LDA) method must be transformed into another form to acquire an approximate closed-form solution, which introduces error between the approximate solution and the true value. Furthermore, the sensitivity of dimensionality reduction (DR) methods to the subspace dimensionality cannot be eliminated. In this article, a new formulation of trace ratio LDA (TRLDA) is proposed, which has an optimal solution of LDA. When solving the projection matrix, our TRLDA method is transformed into a quadratic problem on the Stiefel manifold. In addition, we propose a new trace difference problem, named optimal dimensionality linear discriminant analysis (ODLDA), to determine the optimal subspace dimension. The nonmonotonicity of ODLDA guarantees the existence of an optimal subspace dimensionality. Both approaches achieve efficient DR on several data sets.

The Sit-to-Stand (STS) test is used in clinical practice as an indicator of lower-limb functional decline, especially in older adults. Because of its high variability, there is no standard approach for categorising the STS movement and recognising its motion pattern. This paper presents a comparative analysis between visual assessments and automated software for the categorisation of STS, relying on recordings from a force plate. Five participants (30 ± 6 years) took part in two sessions of visual inspections on 200 STS movements under self-paced and controlled-speed conditions.
Assessors were asked to identify, simultaneously with the software analysis, three specific STS events from the ground reaction force: the start of the trunk movement (Initiation), the beginning of stable upright stance (Standing), and the sitting movement (Sitting). The absolute agreement between the repeated raters' assessments, as well as between the raters' and the software's assessments in the first trial, was considered an index of human and software performance, respectively. No statistical differences between methods were found for the identification of the Initiation and Sitting events at self-paced speed, and for only the Sitting event at controlled speed. The estimated maximum discrepancies between visual and automated assessments were significant: 0.200 [0.039; 0.361] s in unconstrained conditions and 0.340 [0.014; 0.666] s for standardised movements. The software assessments displayed overall good agreement with visual evaluations of the ground reaction force while relying on objective measures.

Common reporting styles for statistical results in scientific articles, such as p-values and confidence intervals (CIs), have been reported to be prone to dichotomous interpretations, especially with respect to the null hypothesis significance testing framework. For example, when the p-value is small enough or the CIs of the mean effects of a studied drug and a placebo do not overlap, scientists tend to claim significant differences while often disregarding the magnitudes and absolute differences of the effect sizes. This type of reasoning has been shown to be potentially harmful to science. Techniques relying on visual estimation of the strength of evidence have been recommended to reduce such dichotomous interpretations, but their effectiveness has also been challenged.
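Returning to the force-plate categorisation task, a toy detector conveys the flavour of such automated event identification. The deviation band, body weight, and synthetic trace below are illustrative assumptions, not the validated software's rules:

```python
import numpy as np

def detect_sts_events(fz, fs, body_weight, tol=0.05):
    """Heuristic detection of the three STS events on a vertical
    ground-reaction-force trace fz (N) sampled at fs (Hz). Returns
    (Initiation, Standing, Sitting) times in seconds."""
    baseline = fz[: int(0.5 * fs)].mean()       # quiet-sitting baseline
    band = tol * body_weight
    # Initiation: first deviation from the quiet-sitting baseline
    initiation = int(np.argmax(np.abs(fz - baseline) > band))
    # Standing: first sample thereafter near full body weight
    standing = initiation + int(np.argmax(np.abs(fz[initiation:] - body_weight) < band))
    # Sitting: first later departure from body weight
    sitting = standing + int(np.argmax(np.abs(fz[standing:] - body_weight) > band))
    return initiation / fs, standing / fs, sitting / fs

# Synthetic, idealized trace: quiet sitting, rise phase, stance, sit-down.
fs = 100
fz = np.concatenate([np.full(100, 300.0), np.full(50, 500.0),
                     np.full(200, 700.0), np.full(50, 300.0)])
print(detect_sts_events(fz, fs, body_weight=700.0))   # (1.0, 1.5, 3.5)
```

On real, noisy force-plate data each threshold crossing would need smoothing and debouncing; the sketch only shows why event timing from the GRF can be made objective and repeatable.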
We ran two experiments on researchers with expertise in statistical analysis to compare several alternative representations of confidence intervals, and used Bayesian multilevel models to estimate the effects of the representation styles on differences in researchers' subjective confidence in the results. We also asked about the respondents' opinions of and preferences among the representation styles. Our results suggest that adding visual information to the classic CI representation can decrease the tendency towards dichotomous interpretations (measured as the 'cliff effect': the sudden drop in confidence around a p-value of 0.05) compared with the classic CI visualization and a textual representation of the CI with p-values. All data and analyses are publicly available at https://github.com/helske/statvis.

We present the Feature Tracking Kit (FTK), a framework that simplifies, scales, and delivers various feature-tracking algorithms for scientific data. The key to FTK is our simplicial spacetime meshing scheme, which generalizes both regular and unstructured spatial meshes to spacetime while tessellating spacetime mesh elements into simplices. The benefits of using simplicial spacetime meshes include (1) reducing ambiguity cases in feature extraction and tracking, (2) simplifying the handling of degeneracies using symbolic perturbations, and (3) enabling scalable and parallel processing. The use of simplicial spacetime meshing simplifies and improves the implementation of several feature-tracking algorithms for critical points, quantum vortices, and isosurfaces. As a software framework, FTK provides end users with VTK/ParaView filters, Python bindings, a command line interface, and programming interfaces for feature-tracking applications. We demonstrate use cases as well as scalability studies through both synthetic data and scientific applications, including tokamak, fluid dynamics, and superconductivity simulations.
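As a flavour of the simplicial machinery, the piecewise-linear critical-point test underlying such feature extraction can be sketched in 2-D on a single triangle (a spatial simplex rather than a full spacetime mesh; the vertex data are made up for illustration):

```python
import numpy as np

def critical_point_in_triangle(verts, vecs):
    """Piecewise-linear critical-point test: solve for the barycentric
    coordinates at which the linearly interpolated 2-D vector field
    vanishes. Returns the critical point's position, or None if the
    zero lies outside the triangle."""
    # Rows: x-components, y-components, partition-of-unity constraint.
    A = np.vstack([np.asarray(vecs, dtype=float).T, np.ones(3)])
    lam = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
    if np.all(lam >= 0):                 # inside the closed triangle
        return lam @ np.asarray(verts, dtype=float)
    return None

# The rotational field v(x, y) = (-y, x) has its critical point at the origin.
tri = [(-1.0, -1.0), (2.0, -1.0), (0.0, 2.0)]
vec = [(1.0, -1.0), (1.0, 2.0), (-2.0, 0.0)]   # v = (-y, x) at each vertex
print(critical_point_in_triangle(tri, vec))    # ~ the origin
```

In a simplicial spacetime mesh the analogous per-simplex solve also yields the feature's position in time, and symbolic perturbation resolves the degenerate cases where a zero lands exactly on a face.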


