Topp Grossman published 1 year, 8 months ago
[This corrects the article DOI 10.2196/mhealth.9987.]
Patient outcomes and experience during a spinal cord stimulation (SCS) screening trial can have a significant effect on the decision to proceed with long-term, permanent implantation of an SCS device for the treatment of chronic pain. Enhancing the ability to track and assess patients during this initial trial evaluation offers the potential for improved understanding of the suitability of permanent device implantation, as well as identification of the SCS-based neurostimulation modalities and parameters that may provide substantial analgesia in a patient-specific manner.
In this report, we aimed to describe a preliminary, real-world assessment of a new, real-time tracking, smart-device-based digital app used by patients with chronic pain undergoing trial screening for SCS therapy.
This is a real-world, retrospective evaluation of 13,331 patients diagnosed with chronic pain who used the new "mySCS" mobile app during an SCS screening trial. The app design is Health Insurance Portability and Accountability Act (HIPAA) compliant. The app was used by 84% (n=10,377) of patients who reached day 3 of the screening phase assessment and by 83.04% (n=11,070) of those who reached trial completion. A trial success rate of 91% was determined for those who used the app (versus an 85% success rate for nonusers).
Data from this initial, real-world examination of a mobile, digital-health-based tracking app ("mySCS"), as used during the SCS screening phase, demonstrate that substantial patient engagement can be achieved while also providing for the acquisition of more real-time patient-outcome measures that may help facilitate improved SCS trial success.
We study denial-of-service (DoS) attack power allocation optimization in a multiprocess cyber-physical system (CPS), where sensors observe different dynamic processes and send their local state estimates to a remote estimator over wireless channels, while a DoS attacker allocates its attack power across the channels as interference to reduce the wireless transmission rates and thereby degrade the estimation accuracy of the remote estimator. We consider two attack optimization problems: one maximizes the average estimation error across the processes, and the other maximizes the minimal one. We formulate these problems as Markov decision processes (MDPs). Unlike the majority of existing works, in which the attacker is assumed to have complete knowledge of the CPS, we consider an attacker with no prior knowledge of the wireless channel model or the sensor information. To address this uncertainty and the curse of dimensionality, we provide a learning-based attack power allocation algorithm stemming from the double deep Q-network (DDQN) method. First, with a defined partial order, the maximal elements of the action space are determined. By investigating the characteristics of the MDP, we prove that the optimal attack allocations for both problems belong to the set of these elements. This property reduces the entire action space to a smaller subset and speeds up the learning algorithm.
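The action-space reduction described above can be sketched in a few lines: under a componentwise partial order on power allocations, any allocation dominated by another (at least as much power on every channel, strictly more on some channel) can be discarded before learning begins. This is a minimal toy illustration with a discretized two-channel budget, not the authors' implementation.

```python
from itertools import product

def dominates(a, b):
    """a dominates b if a allocates at least as much power on every channel
    and strictly more on at least one channel (componentwise partial order)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def maximal_elements(actions):
    """Keep only the allocations not dominated by any other allocation."""
    return [a for a in actions if not any(dominates(b, a) for b in actions)]

# Hypothetical setup: power levels 0..3 on 2 channels, total budget of 3 units.
levels = range(4)
actions = [a for a in product(levels, repeat=2) if sum(a) <= 3]
reduced = maximal_elements(actions)
# Only full-budget allocations survive: (0, 3), (1, 2), (2, 1), (3, 0)
```

The learner then searches over `reduced` (4 actions) instead of the full feasible set (10 actions), which is the kind of shrinkage that speeds up the Q-learning updates.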
In addition, to further improve data efficiency and learning performance, we propose two enhanced attack power allocation algorithms that add two auxiliary MDP transition-estimation tasks inspired by model-based reinforcement learning: next-state prediction and current-action estimation. Experimental results demonstrate the versatility and efficiency of the proposed algorithms in different system settings compared with other algorithms, such as conventional value iteration, double Q-learning, and the deep Q-network.

In this article, we investigate how multiple agents learn to coordinate to achieve efficient exploration in reinforcement learning. Though straightforward, independent exploration of the joint action space of multiple agents becomes exponentially more difficult as the number of agents increases. To tackle this problem, we propose feudal latent-space exploration (FLE) for multi-agent reinforcement learning (MARL). FLE introduces a feudal commander that learns a low-dimensional global latent structure to instruct multiple agents to explore in a coordinated way. Under this framework, the multi-agent policy gradient (PG) is adopted to optimize both the agent policies and the latent structure end to end. We demonstrate the effectiveness of this method in two multi-agent environments that require explicit coordination. Experimental results validate that FLE outperforms baseline MARL approaches that use an independent exploration strategy in terms of mean rewards, efficiency, and the expressiveness of coordination policies.

Graph neural networks (GNNs) are recently proposed neural network structures for processing graph-structured data. Due to their neighbor aggregation strategy, existing GNNs focus on capturing node-level information and neglect high-level information. Existing GNNs therefore suffer from representational limitations caused by the local permutation invariance (LPI) problem.
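The purely local, node-level aggregation discussed above can be illustrated with a toy sketch (scalar features and mean pooling over neighbors; not any specific published GNN architecture): each node's updated representation depends only on its immediate neighborhood, which is exactly the locality that subgraph-level information is meant to supplement.

```python
def mean_aggregate(features, adjacency):
    """One round of mean neighbor aggregation: each node's new feature is
    the average of its own feature and its neighbors' features.
    `features`: dict node -> float; `adjacency`: dict node -> neighbor list."""
    new_features = {}
    for node, neighbors in adjacency.items():
        pool = [features[node]] + [features[n] for n in neighbors]
        new_features[node] = sum(pool) / len(pool)
    return new_features

# Tiny graph: a triangle (0, 1, 2) with a pendant node 3 attached to node 2.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
feat = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}
feat1 = mean_aggregate(feat, adj)
# Node 3 sees only its single neighbor: (4.0 + 3.0) / 2 = 3.5
```

Stacking such rounds only widens the neighborhood one hop at a time; higher-level structure such as the number of triangles a node participates in is never represented explicitly.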
To overcome these limitations and enrich the features captured by GNNs, we propose a novel GNN framework, referred to as the two-level GNN (TL-GNN), which merges subgraph-level information with node-level information. Moreover, we provide a mathematical analysis of the LPI problem, which demonstrates that subgraph-level information is beneficial for overcoming the problems associated with LPI. A subgraph counting method based on dynamic programming is also proposed, with time complexity O(n³), where n is the number of nodes in a graph. Experiments show that TL-GNN outperforms existing GNNs and achieves state-of-the-art performance.

Omics technologies are powerful tools for analyzing patterns in gene expression data for thousands of genes. Due to a number of systematic variations in experiments, raw gene expression data are often obfuscated by undesirable technical noise. Various normalization techniques have been designed in an attempt to remove these nonbiological errors prior to statistical analysis. One reason for normalizing data is the need to recover the covariance matrix used in gene network analysis. In this paper, we introduce a novel normalization technique, called the covariance shift (C-SHIFT) method. This normalization algorithm uses optimization techniques together with the blessing-of-dimensionality philosophy and an energy minimization hypothesis for covariance matrix recovery under additive noise (in biology, known as bias). Thus, it is perfectly suited for the analysis of logarithmic gene expression data. Numerical experiments on synthetic data demonstrate the method's advantage over classical normalization techniques; namely, the comparison is made with Rank, Quantile, cyclic LOESS (locally estimated scatterplot smoothing), and MAD (median absolute deviation) normalization methods.
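The additive-noise model that motivates this line of work can be made concrete with a generic sketch: in log space, a technical bias acts as a per-sample additive offset, so a simple (median-centering) correction removes it. Note this is only an illustration of the bias model, not the C-SHIFT algorithm, which instead recovers the covariance structure via optimization.

```python
import statistics

def center_samples(log_expr):
    """Remove a per-sample additive offset from log-expression data by
    subtracting each sample's median. A generic bias correction used here
    only to illustrate the additive-noise model, not the C-SHIFT method.
    `log_expr`: dict sample -> list of log-expression values."""
    centered = {}
    for sample, values in log_expr.items():
        shift = statistics.median(values)
        centered[sample] = [v - shift for v in values]
    return centered

# Two samples measuring the same three genes; sample B carries a +2.0 bias.
data = {"A": [1.0, 2.0, 3.0], "B": [3.0, 4.0, 5.0]}
out = center_samples(data)
# Both samples collapse to [-1.0, 0.0, 1.0] once the additive bias is removed
```

Crucially, such shifts change sample means but not the gene-gene covariance one wants to recover, which is why covariance-aware methods can target the bias directly.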
We also evaluate the performance of the C-SHIFT algorithm on real biological data.

Navigated transcranial magnetic stimulation (nTMS) is a widely used tool for motor cortex mapping. However, the full details of the activated cortical area during mapping remain unknown due to the spread of the stimulating electric field (E-field). Computational tools that combine the E-field with physiological responses have the potential to reveal the activated source area. We applied the minimum-norm estimate (MNE) method in a realistic head geometry to estimate the activated cortical area in nTMS motor mappings of the leg and hand muscles. We also calculated the MNE in a spherical head geometry to assess the effect of the head model on the MNE maps. Finally, we determined optimized coil placements based on the MNE map maxima and compared these placements with the initial hotspot placement. The MNE maps generally agreed well with the original motor maps in the realistic head geometry: the distance from the MNE map maximum to the motor map center of gravity (CoG) was 8.8 ± 4.6 mm in the leg motor area and 6.6 ± 2.5 mm in the hand motor area. The head model did not have a significant effect on these distances; however, it had a significant effect on the distance between the MNE CoG and the motor map CoG. The optimized coil locations were less than 1 cm from the initial hotspot in 7/10 subjects. Further research is required to determine the level of anatomical detail and the optimal mapping parameters required for robust and accurate localization.

During the creation of graphic designs, individuals inevitably spend considerable time and effort adjusting the visual attributes of elements (e.g., positions, colors, and fonts) to make them more aesthetically pleasing. It is a trial-and-error process that requires repetitive edits and relies on good design knowledge.
In this work, we seek to alleviate this difficulty by automatically suggesting aesthetic improvements, i.e., taking an existing design as input and generating a refined version with improved aesthetic quality as output. This goal presents two challenges: proposing a refined design based on the user-given one, and assessing whether the new design is aesthetically better. To cope with these challenges, we propose a design-principle-guided candidate generation stage and a data-driven candidate evaluation stage. In the candidate generation stage, we generate candidate designs by leveraging design principles as guidance to make changes around the existing design. In the candidate evaluation stage, we learn a ranking model on a dataset that reflects human aesthetic preference and use it to choose the most aesthetically pleasing design from the generated candidates. We implement a prototype system for presentation slides and demonstrate the effectiveness of our approach through quantitative analysis, sample results, and user studies.

Studies in virtual reality (VR) have introduced numerous multisensory simulation techniques for more immersive VR experiences. However, although they primarily focus on expanding sensory types or increasing individual sensory quality, they lack consensus on designing appropriate interactions between different sensory stimuli. This paper explores how the congruence between auditory and visual (AV) stimuli, the sensory stimuli typically provided by VR devices, affects the cognition and experience of VR users as a critical interaction factor in promoting multisensory integration. We defined the types of (in)congruence between AV stimuli and then designed 12 virtual spaces with different types or degrees of congruence between AV stimuli. We then evaluated presence, immersion, motion sickness, and cognition changes in each space.
We observed the following key findings: 1) there is a limit to the degree of temporal or spatial incongruence that can be tolerated, with few negative effects on user experience until that point is exceeded; 2) users are tolerant of semantic incongruence; and 3) a simulation that considers synesthetic congruence contributes to the user's sense of immersion and presence. Based on these insights, we identified essential considerations for designing sensory simulations in VR and proposed future research directions.
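The two-stage generate-then-rank scheme from the graphic-design refinement work above reduces to a simple loop: propose candidate edits around the existing design, score each with a learned aesthetic model, and keep the best. The sketch below uses a stand-in scoring function and a one-number "design" purely for illustration; the actual system operates on slide layouts with a trained ranking model.

```python
def refine(design, candidate_moves, score):
    """Generic generate-then-rank loop: apply candidate edits to an existing
    design, score each result with an aesthetic model, and keep the best.
    `score` is a placeholder for a learned ranking model."""
    candidates = [move(design) for move in candidate_moves]
    candidates.append(design)  # keep the original if nothing beats it
    return max(candidates, key=score)

# Toy example: a "design" is just an element's x-position; the hypothetical
# aesthetic score prefers positions close to a centerline at x = 50.
moves = [lambda x: x + 10, lambda x: x - 10, lambda x: 50]
best = refine(design=30, candidate_moves=moves, score=lambda x: -abs(x - 50))
# best == 50 (the snap-to-centerline move wins under this score)
```

Keeping the original design among the candidates makes the loop monotone: a poorly trained scorer can at worst leave the design unchanged.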


