hessen.social is one of many independent Mastodon servers you can use to participate in the fediverse.
hessen.social is the Mastodon community for all Hessians and everyone who feels connected to Hesse.

Server statistics: 1.7K active profiles

#newarticle


Journals | JEOS-Rapid Publications
#NewArticle #OpenAccess

"Research on the photoelectrical properties of TiO2-doped V2O5/FTO nanocomposite thin films under thermal and electrical excitation"
✍️ Yi Li et al., University of Shanghai for Science and Technology (#USST)
➡️ bit.ly/4gx3pAt

#JEOS_RP #EuropeanOptics #Physics #AppliedPhysics #OpticalEngineering #ScienceMastodon #ScientificPublishing @ScienceScholar @academia @academicsunite @academicchatter

Our new #OpenScience paper is out in Scientific Reports today! My coauthors and I jokingly called it "the masterpiece" because it wraps up a research line I began during my PhD ... 10 years ago! Here's a quick thread on the backstory and what we found. [1/X]
nature.com/articles/s41598-024 @psycholinguistics @psycholinguistic #psycholinguistics #psycholinguistique #phonetic #phonetics #NewPaper #NewArticle #ScienceMastodon

Nature: Mapping the spectrotemporal regions influencing perception of French stop consonants in noise - Scientific Reports

Understanding how speech sounds are decoded into linguistic units has been a central research challenge over the last century. This study follows a reverse-correlation approach to reveal the acoustic cues listeners use to categorize French stop consonants in noise. Compared to previous methods, this approach ensures an unprecedented level of detail with only minimal theoretical assumptions. Thirty-two participants performed a speech-in-noise discrimination task based on natural /aCa/ utterances, with C = /b/, /d/, /g/, /p/, /t/, or /k/. The trial-by-trial analysis of their confusions enabled us to map the spectrotemporal information they relied on for their decisions. In place-of-articulation contrasts, the results confirmed the critical role of formant consonant-vowel transitions, used by all participants, and, to a lesser extent, vowel-consonant transitions and high-frequency release bursts. Similarly, for voicing contrasts, we validated the prominent role of the voicing bar cue, with some participants also using formant transitions and burst cues. This approach revealed that most listeners use a combination of several cues for each task, with significant variability within the participant group. These insights shed new light on decades-old debates regarding the relative importance of cues for phoneme perception and suggest that research on acoustic cues should not overlook individual variability in speech perception.
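For readers unfamiliar with the reverse-correlation idea the abstract describes, the sketch below illustrates the general principle of a classification-image analysis: average the per-trial noise fields separately by response category and take the difference, which highlights the spectrotemporal regions that pushed decisions one way or the other. This is a simplified illustration on simulated data, with invented names (noise_fields, responses); it is not the authors' actual analysis pipeline.

```python
# Minimal sketch of a reverse-correlation ("classification image") analysis.
# Hypothetical data and variable names; not the paper's code.
import numpy as np

def classification_image(noise_fields, responses):
    """Estimate which spectrotemporal regions drive listeners' decisions.

    noise_fields : array, shape (n_trials, n_freq, n_time)
        Noise spectrogram added to the stimulus on each trial (assumed available).
    responses : array, shape (n_trials,)
        1 if the listener reported one category (e.g. /b/), 0 for the other (e.g. /d/).
    """
    responses = np.asarray(responses, dtype=bool)
    # Average the noise fields separately for each reported category ...
    mean_a = noise_fields[responses].mean(axis=0)
    mean_b = noise_fields[~responses].mean(axis=0)
    # ... their difference marks the time-frequency regions whose noise energy
    # biased the decision toward one category or the other.
    return mean_a - mean_b

# Toy usage with random data standing in for real trials:
rng = np.random.default_rng(0)
noise = rng.normal(size=(500, 64, 100))   # 500 trials, 64 freq bins, 100 time frames
resp = rng.integers(0, 2, size=500)       # simulated binary confusions
ci = classification_image(noise, resp)
print(ci.shape)                           # (64, 100) spectrotemporal weight map
```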