• Garrison Gibbons published 5 months, 2 weeks ago

This model should therefore help the development of optimized dielectric elastomer loudspeakers, with improved frequency responses and directivity.

Little information exists on endocrine responses to noise exposure in marine mammals. In the present study, cortisol, aldosterone, and epinephrine levels were measured in 30 bottlenose dolphins (Tursiops truncatus) before and after exposure to simulated U.S. Navy mid-frequency sonar signals (3250-3450 Hz). Control and exposure sessions, each consisting of ten trials, were performed sequentially with each dolphin. While swimming across the experimental enclosure during exposure trials, each dolphin received a single 1-s exposure with received sound pressure levels (SPLs, dB re 1 μPa) of 115, 130, 145, 160, 175, or 185 dB. Blood samples, collected through behaviorally conditioned, voluntary participation of the dolphins approximately one week prior to, immediately following, and approximately one week after exposure, were analyzed for hormones via radioimmunoassay. Aldosterone was below detection limits in all samples. Neither cortisol nor epinephrine showed a consistent relationship with received SPL, even though dolphins abandoned trained behaviors after exposure to the highest SPLs and the severity of behavioral changes scaled with SPL. It remains unclear whether dolphins interpret high-level anthropogenic sound as stressful, annoying, or threatening, and whether behavioral responses to sound can be equated to a physiological (endocrine) response.

Over a decade after the Cook Inlet beluga (Delphinapterus leucas) was listed as endangered in 2008, the population has shown no sign of recovery. Lack of ecological knowledge limits the understanding of, and ability to manage, potential threats impeding recovery of this declining population.
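The dolphin study above reports received levels as SPLs in dB re 1 μPa, the standard underwater pressure reference. A minimal sketch of that convention (function names are my own, not from the study):

```python
import math

def spl_db_re_1upa(pressure_pa: float) -> float:
    """SPL = 20 * log10(p / p_ref), with the underwater reference p_ref = 1 uPa."""
    return 20.0 * math.log10(pressure_pa / 1e-6)

def pressure_from_spl(spl_db: float) -> float:
    """Inverse: pressure in pascals for a given SPL in dB re 1 uPa."""
    return 1e-6 * 10.0 ** (spl_db / 20.0)

# The study's exposure levels, expressed back in pascals:
for spl in (115, 130, 145, 160, 175, 185):
    print(spl, pressure_from_spl(spl))
```

Note that the same pressure corresponds to an SPL 26 dB lower in air, where the reference is 20 μPa, which is why underwater and in-air levels are not directly comparable.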
National Oceanic and Atmospheric Administration Fisheries, in partnership with the Alaska Department of Fish and Game, initiated a passive acoustic monitoring program in 2017 to investigate beluga seasonal occurrence by deploying a series of passive acoustic moorings. Data have been processed with semi-automated tonal detectors followed by time-intensive manual validation. To reduce this labor-intensive and time-consuming process, and to increase the accuracy of classification results, the authors constructed an ensemble deep-learning convolutional neural network model to classify beluga detections as true or false. Using a 0.5 threshold, the final model achieves 96.57% precision and 92.26% recall on the testing dataset. This methodology proves successful at classifying beluga signals, and the framework can be easily generalized to other acoustic classification problems.

This study compares the classification of Azerbaijani fricatives based on two sets of features: (a) spectral moments, spectral peak, amplitude, and duration, and (b) cepstral coefficients, employing Hidden Markov Models to divide each fricative into three regions such that the variances of the measures within each region are minimized. The cepstral coefficients were found to be more reliable predictors in the classification of all nine Azerbaijani fricatives, and the cepstral measures yielded highly successful classification rates (91.21% across both genders) in the identification of the full set of fricatives of Azerbaijani.

This study characterized medial olivocochlear (MOC) reflex effects on synchronized spontaneous otoacoustic emissions (SSOAEs) as compared to transient-evoked otoacoustic emissions (TEOAEs) in normal-hearing adults. Using two time windows, changes in TEOAE and SSOAE magnitude and phase due to an MOC reflex elicitor were quantified from 1 to 4 kHz.
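The beluga classifier above is evaluated by precision and recall at a fixed 0.5 score threshold. A minimal sketch of that computation (the scores and labels below are toy data, not the study's):

```python
# Precision/recall at a fixed detector-score threshold, as used to report
# the beluga classifier's 96.57% precision and 92.26% recall.
def precision_recall(scores, labels, threshold=0.5):
    preds = [s >= threshold for s in scores]
    tp = sum(p and y for p, y in zip(preds, labels))         # true detections kept
    fp = sum(p and not y for p, y in zip(preds, labels))     # false alarms kept
    fn = sum((not p) and y for p, y in zip(preds, labels))   # true detections missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

scores = [0.9, 0.8, 0.6, 0.4, 0.2]   # detector confidence per candidate call
labels = [1,   1,   0,   1,   0]     # 1 = validated beluga call, 0 = false detection
print(precision_recall(scores, labels))  # → (0.6666666666666666, 0.6666666666666666)
```

Raising the threshold typically trades recall for precision, so the 0.5 operating point is itself a design choice.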
In lower frequency bands, changes in TEOAE and SSOAE magnitude were significantly correlated and were significantly larger for SSOAEs. Changes in TEOAE and SSOAE phase were not significantly different, nor were they significantly correlated. The larger effects on SSOAE magnitude may improve the sensitivity for detecting the MOC reflex.

Classical ocean acoustic experiments involve the use of synchronized arrays of sensors. However, the need to cover large areas and/or the use of small robotic platforms has evoked interest in single-hydrophone processing methods for localizing a source or characterizing the propagation environment. One such processing method is "warping," a non-linear, physics-based signal processing tool dedicated to decomposing the multipath features of low-frequency transient signals that have propagated over ranges greater than 1 km. Since its introduction to the underwater acoustics community in 2010, warping has been adopted in the ocean acoustics literature, mostly as a pre-processing method for single-receiver geoacoustic inversion. Warping also has potential applications in other specialties, including bioacoustics; however, the technique can be daunting to many potential users unfamiliar with its intricacies. Consequently, this tutorial article covers basic warping theory, presents simulation examples, and provides practical experimental strategies. Accompanying supplementary material provides MATLAB code and simulated and experimental datasets for easy implementation of warping on both impulsive and frequency-modulated signals from both biotic and man-made sources. This combined material should provide interested readers with user-friendly resources for implementing warping methods in their own research.

Previous work has shown mixed findings concerning the role of voice quality cues in Mandarin tones, with some studies showing that creak improves identification. This study tests the linguistic importance of the acoustic properties of creak for Mandarin tone perception.
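The warping tutorial above is built around resampling the received signal along a physics-based time change. A minimal sketch, assuming the ideal-waveguide warping function h(t) = sqrt(t² + t_r²) commonly used in this literature; the range and sound-speed values are illustrative, not from the tutorial's datasets:

```python
import numpy as np

def warp(signal, fs, r=10_000.0, c=1500.0):
    """Warped signal w(t) = sqrt(h'(t)) * s(h(t)), h(t) = sqrt(t**2 + t_r**2).

    Assumes the time origin of the recording is the source emission time,
    so the signal starts near the direct-path travel time t_r = r / c.
    """
    t_r = r / c
    n = len(signal)
    t_orig = t_r + np.arange(n) / fs                    # original (received) time axis
    t_warp = np.linspace(0.0, np.sqrt(t_orig[-1] ** 2 - t_r ** 2), n)
    h = np.sqrt(t_warp ** 2 + t_r ** 2)                 # where to resample the original
    h_prime = t_warp / h                                # dh/dt (energy-preserving factor)
    return np.sqrt(h_prime) * np.interp(h, t_orig, signal)
```

Under these assumptions the dispersed modal arrivals of an impulsive source become approximately tonal in the warped domain, which is what makes them separable by ordinary band-pass filtering.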
Mandarin speakers identified tones with four resynthesized creak manipulations: low spectral tilt, irregular F0, period doubling, and extra-low F0. Two experiments with three conditions were conducted. In Experiment 1, the manipulations were confined to a portion of the stimuli's duration; in Experiment 2, the creak manipulations were modified and lengthened throughout the stimuli, and in a second condition, noise was incorporated to weaken F0 cues. Listeners remained most sensitive to extra-low F0, which affected identification of the four tones differently: it improved the identification accuracy of Tone 3 and hindered that of Tones 1 and 4. Irregular F0 consistently hindered Tone 1 identification. The effects of irregular F0, period doubling, and low spectral tilt emerged in Experiment 2, where F0 cues were less robust and creak cues were stronger. Thus, low F0 is the most prominent cue used in Mandarin tone identification, but other voice quality cues become more salient to listeners when the F0 cues are less retrievable.

This study examined how well individual speech recognition thresholds in complex listening scenarios could be predicted by a current binaural speech intelligibility model. Model predictions were compared with experimental data measured for seven normal-hearing and 23 hearing-impaired listeners who differed widely in their degree of hearing loss, age, and performance in clinical speech tests. The experimental conditions included two masker types (multi-talker or two-talker maskers) and two spatial conditions (maskers co-located with the frontal target or symmetrically separated from the target). The results showed that interindividual variability could not be well predicted by a model including only individual audiograms. Predictions improved when an additional individual "proficiency factor" was derived from one of the experimental conditions or a standard speech test.
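The "proficiency factor" idea above amounts to calibrating the model per listener: derive an individual offset from one reference condition and apply it to the predictions for the others. A minimal sketch, with all names and SRT values invented for illustration (this is not the study's model code):

```python
# Per-listener proficiency correction for speech reception thresholds (SRTs, dB SNR).
def with_proficiency(predicted, measured_ref, predicted_ref):
    """Shift every predicted SRT by the listener's offset in a reference condition."""
    proficiency = measured_ref - predicted_ref      # individual offset (dB)
    return {cond: srt + proficiency for cond, srt in predicted.items()}

predicted = {"co-located": -6.0, "separated": -12.0}   # audiogram-only model output
adjusted = with_proficiency(predicted, measured_ref=-4.0, predicted_ref=-6.0)
print(adjusted)  # → {'co-located': -4.0, 'separated': -10.0}
```

A single additive offset captures listener-specific factors the audiogram misses, which is consistent with the reported improvement when the factor is derived from one condition or a standard speech test.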
Overall, the current model can predict individual performance relatively well (except in conditions high in informational masking), but the inclusion of age-related factors may lead to further improvements.

Intracochlear electrocochleography (ECochG) is a potential tool for the assessment of residual hearing in cochlear implant users during implantation and for acoustical tuning postoperatively. It is, however, unclear how these ECochG recordings from different locations in the cochlea depend on the stimulus parameters, cochlear morphology, implant design, or hair cell degeneration. In this paper, a model is presented that simulates intracochlear ECochG recordings by combining two existing models, namely a peripheral model that simulates hair cell activation and a three-dimensional (3D) volume-conduction model of the current spread in the cochlea. The outcomes were compared to actual ECochG recordings from subjects with a cochlear implant (CI). The 3D volume-conduction simulations showed that the intracochlear ECochG is a local measure of activation. Simulations showed that increasing the stimulus frequency resulted in a basal shift of the peak cochlear microphonic (CM) amplitude. Increasing the stimulus level resulted in wider tuning curves as recorded along the array. Simulations with hair cell degeneration resulted in ECochG responses that resembled the recordings from the two subjects in terms of CM onset responses, higher harmonics, and the width of the tuning curve. It was concluded that the model reproduces the patterns seen in intracochlear hair cell responses recorded from CI subjects.

An infant perceptual experiment investigated the role of prosody.
All-nonsense-word sentences (e.g., Guin felli crale vur ti gosine), each in structure 1 ([[Determiner + Adjective + Noun] [Verb + Determiner + Noun]]) and structure 2 ([[Determiner + Noun] [Verb + Preposition + Determiner + Noun]]), were recorded (by mimicking real-word French sentences) with disambiguating prosodic groupings matching the two major constituents. French-learning 20- and 24-month-olds were familiarized with either structure 1 or structure 2. All infants were tested with noun-use trials (e.g., Le crale "the crale-Noun") versus verb-use trials (Tu crales "You crale-Verb"). Structure-2-familiarized infants, but not structure-1-familiarized infants, discriminated the test trials, demonstrating that prosody alone guides verb categorization. Noun categorization requires determiners, as shown in earlier work [S. Massicotte-Laforge and R. Shi, J. Acoust. Soc. Am. 138(4), EL441-EL446 (2015)].

Measurements along two ship tracks were obtained in an experiment to investigate the properties of acoustic propagation over the continental slope in the South China Sea. The measured data show a notable difference in transmission loss, about 35 dB, as sound crosses different geodesic paths. Numerical simulations indicate that range- and azimuth-dependent geological properties control the level of the transmission loss and lead to this large transmission-loss fluctuation. In addition, the model also suggests some small-scale horizontal-refraction features caused by irregular topography, but these are not observed in the measured data.

The vestibular and cochlear aqueducts serve as additional sound transmission paths and produce different degrees of volume-velocity shunt flow in cochlear sound transmission. To investigate their effect on forward and reverse stimulation, a lumped-parameter model of the human ear, which incorporates these third windows, was developed.
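To give scale to the ~35 dB transmission-loss difference reported in the South China Sea study above, a generic spreading-plus-absorption sketch is shown below. This simple formula is not the study's range- and azimuth-dependent numerical model, and the absorption coefficient is illustrative:

```python
import math

def transmission_loss_db(r_m, alpha_db_per_km=0.05):
    """TL = 20*log10(r / 1 m) + alpha*r: spherical spreading plus absorption."""
    return 20.0 * math.log10(r_m) + alpha_db_per_km * (r_m / 1000.0)

print(transmission_loss_db(10_000.0))  # → 80.5 (TL at 10 km under these assumptions)
```

Against a baseline of roughly 80 dB at 10 km, an extra 35 dB of loss along one geodesic path is a very large effect, which is why the study attributes it to the seabed geology rather than to spreading alone.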
The model combines a transmission-line ear-canal model, a middle-ear model, and an inner-ear model, which were developed previously by different investigators. The model is verified by comparison with experiments. The intracochlear differential-pressure transfer functions, which reflect the input force to the organ of Corti, were calculated. The results show that the middle-ear gain for forward sound transmission is greater than the gain for reverse sound transmission. Changes in the cochlear aqueduct impedance have little effect on either forward or reverse stimulation. The vestibular aqueduct has little effect on forward stimulation, but increasing its impedance degrades reverse stimulation below 300 Hz.
