
Does the desynchronization of some bands mean an increase in the prevalence of the other bands?


In the article Broadband Cortical Desynchronization Underlies the Human Psychedelic State, the following sentence appears:

Psilocybin reduced spontaneous cortical oscillatory power from 1 to 50 Hz in posterior association cortices, and from 8 to 100 Hz in frontal association cortices.

I'll abbreviate posterior association cortices to PAC, and frontal association cortices to FAC.

As established in this question, the above sentence means that after psilocybin intake, the brain waves still present in the PAC are high gamma frequencies (above 50 Hz), and the brain waves still present in the FAC are delta and theta frequencies (below 8 Hz). They are the ones left, since the other bands were desynchronized away.

Now, since these are the bands left, does that mean that they are more prevalent? Does that mean that the PAC will be operating fully on high gamma frequencies, and that there will therefore be more of these frequencies in the PAC than usual?

If so, does this mean that the senses the PAC is responsible for will be heightened, since gamma is a higher frequency, meaning more information is passed in less time? And does that mean that the senses the FAC is responsible for will be diminished, since delta and theta are the lowest frequencies?
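For concreteness, here is a minimal sketch of what "power in a band" means computationally, using a plain periodogram (the paper's actual analysis is more involved; the function name and defaults here are my own):

```python
import numpy as np

def band_power(x, sfreq, fmin, fmax):
    """Mean spectral power of signal `x` within [fmin, fmax] Hz.

    A plain periodogram estimate: statements like 'reduced power from
    1 to 50 Hz' refer to this kind of quantity dropping after intake.
    """
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sfreq)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[mask].mean()
```

So "desynchronization of a band" would show up as a drop in this number for that band, while a band that is "left" keeps its power.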


53d. America Rocks and Rolls


The prosperity of the '50s allowed teenagers to spend money on records by their favorite bands and singers.

Rock and roll was everything the suburban 1950s were not. While parents of the decade were listening to Frank Sinatra, Perry Como, and big bands, their children were moving to a new beat.

In fact, to the horror of the older generation, their children were twisting, thrusting, bumping, and grinding to the sounds of rock and roll.

This generation of youth was much larger than any in recent memory, and the prosperity of the era gave them money to spend on records and phonographs. By the end of the decade, the phenomenon of rock and roll helped define the difference between youth and adulthood.

The Roots of Rock


Alan Freed, the Cleveland disc jockey credited with coining the phrase "rock and roll," was the master of ceremonies at many of the first rock concerts, including his 1955 Easter Jubilee.

The roots of rock and roll lay in African American blues and gospel. As the Great Migration brought many African Americans to the cities of the north, the sounds of rhythm and blues attracted suburban teens. Due to segregation and racist attitudes, however, none of the greatest artists of the genre could get much airplay.

Disc jockey Alan Freed began a rhythm-and-blues show on a Cleveland radio station. Soon the audience grew and grew, and Freed coined the term "rock and roll."

Early attempts by white artists to cover R&B songs resulted in weaker renditions that bled the heart and soul out of the originals. Record producers saw the market potential and began to search for a white artist who could capture the African American sound.


Chuck Berry's songs about girls and cars hit a nerve with American teens and sent his star rising high in the early days of rock and roll.

Sam Phillips, a Memphis record producer, found the answer in Elvis Presley. With a deep Southern sound, pouty lips, and gyrating hips, Elvis took an old style and made it his own.

From Memphis, the sound spread to other cities, and demand for Elvis records skyrocketed. Within two years, Elvis was the most popular name in the entertainment business.

After the door to rock and roll acceptance was opened, African American performers such as Chuck Berry, Fats Domino, and Little Richard began to enjoy broad success as well. White performers such as Buddy Holly and Jerry Lee Lewis also found artistic freedom and commercial success.

Satan's Music


Elvis Presley brought rock-and-roll music to the masses during the 1950s with hits such as "Love Me Tender" and "Heartbreak Hotel."

Rock and roll sent shockwaves across America. A generation of young teenagers collectively rebelled against the music their parents loved. In general, the older generation loathed rock and roll. Appalled by the new styles of dance the movement evoked, churches proclaimed it Satan's music.

Because rock and roll originated among the lower classes and a segregated ethnic group, many middle-class whites thought it was tasteless. Rock and roll records were banned from many radio stations and hundreds of schools.

But the masses spoke louder. When Elvis appeared on TV's The Ed Sullivan Show, the show's ratings soared.

Rock and roll is the most brutal, ugly, degenerate, vicious form of expression — lewd, sly, in plain fact, dirty — a rancid-smelling aphrodisiac and the martial music of every side-burned delinquent on the face of the earth.

The commercial possibilities were limitless. As a generation of young adults finished military service, bought houses in suburbia, and longed for stability and conformity, their children seemed to take comfort for granted. They wanted to release the tensions that bubbled beneath the smooth surface of postwar America.


Introduction

The way we sense and perceive our environment is not determined by physical input through the senses alone. The dynamics of ongoing brain activity affect the build-up of sensory representations and our conscious perception of the physical world. Recently, instantaneous fluctuations of both pupil-linked arousal (McGinley et al., 2015b; Lee et al., 2018; Pfeffer et al., 2018) and neural desynchronization (Curto et al., 2009; Marguet and Harris, 2011; Pachitariu et al., 2015) have been highlighted as sources of such sensory and perceptual variation: Arousal and cortical desynchronization are two ways of characterizing the brain state, which strongly influences sensory cortical responses, the encoding of information, and thus perception and ultimately behaviour.

The term arousal here and henceforth is used to refer to the general level of alertness, which likely traces back to neuromodulatory activity and is associated with the ascending reticular activating system (ARAS). Pupil-linked arousal, which captures locus coeruleus–norepinephrine activity (LC–NE; Aston-Jones and Cohen, 2005; Joshi et al., 2016; Reimer et al., 2016), has been shown to influence sensory evoked activity (McGinley et al., 2015a; McGinley et al., 2015b; Gelbard-Sagiv et al., 2018) and the processing of task-relevant information (Murphy et al., 2014; Lee et al., 2018). Despite evidence for an inverted u-shaped relation of tonic LC–NE activity to performance, long suspected from the Yerkes-Dodson law (Yerkes and Dodson, 1908), the precise associations between arousal, sensory processing, and behaviour are underspecified: Although optimal performance at intermediate levels of arousal has reliably been observed (Murphy et al., 2014; McGinley et al., 2015b; McGinley et al., 2015a; van den Brink et al., 2016; Faller et al., 2019), reports of linear effects on performance (Gelbard-Sagiv et al., 2018) or evoked activity (Neske and McCormick, 2018) in different tasks and species complicate this picture.

In a separate line of experimental work in non-human animals, relatively high neural desynchronization yielded improved encoding and representation of visual (Goard and Dan, 2009; Pinto et al., 2013; Beaman et al., 2017) as well as auditory input (Marguet and Harris, 2011; Pachitariu et al., 2015; Sakata, 2016). Such periods of desynchronization are characterized by reduced noise correlations in population activity, and these patterns are commonly referred to as desynchronized cortical states. They likely result from subtle changes in the balance of excitatory and inhibitory activity (Renart et al., 2010; Haider et al., 2013). Notably, behaviourally relevant changes in cortical desynchronization have been suggested to trace back to attention-related changes in thalamo-cortical interactions (Harris and Thiele, 2011). Thus, such desynchronization states can be expected to be local in nature and limited to sensory cortical areas of the currently attended sensory domain (Beaman et al., 2017). Although local desynchronization and perceptual performance are positively linked in general (Beaman et al., 2017; Speed et al., 2019), the exact shape of their relationship (e.g., linear vs. quadratic) is unclear. Most notably, evidence for a similar mechanism in humans has remained elusive.

On the one hand, a tight link between pupil size and desynchronization has been claimed (McCormick, 1989; McCormick et al., 1991; McGinley et al., 2015a; Vinck et al., 2015). On the other hand, both measures have also been found to be locally unrelated (Beaman et al., 2017; Okun et al., 2019). As of now, pupil-linked arousal and local cortical desynchronization may or may not be distinct signatures of the same underlying process: Varying noradrenergic and cholinergic activity could influence both local cortical activity and the more global measure of pupil size via afferent projections from brain-stem nuclei (Harris and Thiele, 2011). In sum, it is, first, unclear how pupil-linked arousal and local cortical desynchronization precisely shape sensory processing and perceptual performance in humans. Second, the interrelation of both measures and their potentially shared underlying formative process lacks specification.

Here, we set out to test the relationship of local desynchronization states and pupil-linked arousal, and to specify their relative impact on sensory processing and perception in healthy human participants. We recorded EEG and pupillometry while participants performed a challenging auditory discrimination task. We modelled ongoing neural activity, sensory processing, and perceptual performance based on both local cortical desynchronization and pupil-linked arousal. This way we were able to test the interrelations of both measures but also to directly inspect their shared as well as exclusive influence on sensory processing and behaviour. Specifically, the effects of local cortical desynchronization and pupil-linked arousal on perceptual sensitivity as well as response criterion were analysed.

A closed-loop real-time algorithm calculated on-line an information-theoretic proxy of auditory cortical desynchronization (weighted permutation entropy, WPE; Fadlallah et al., 2013; Waschke et al., 2017) based on EEG signal arising predominantly from auditory cortices. Of note, WPE as a proxy of desynchronization is tailored to the analysis of electrophysiological time series: It captures oscillatory as well as non-oscillatory contributions as a time-resolved estimate of desynchronization (see Materials and methods for details). Importantly, EEG entropy calculated for a previously published data set (Sarasso et al., 2015) aptly tracks changes in excitatory and inhibitory (E/I) cortical activity that occur under different anaesthetics (Figure 2—figure supplement 1). Also, EEG entropy as measured in the present data aligns closely with the spectral exponent, a previously suggested measure of E/I (Figure 2—figure supplement 1; Gao et al., 2017; Waschke et al., 2017). Entropy of EEG signals thus is not only sensitive to the basic features of desynchronization (e.g., reduced oscillatory power) but also captures changes in a central underlying mechanism (E/I balance).
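To illustrate the entropy measure involved, here is a minimal numpy sketch of weighted permutation entropy; the function signature and defaults are my own, and the authors' real-time implementation differs in detail:

```python
import numpy as np
from math import factorial

def weighted_permutation_entropy(x, order=3, delay=1):
    """Weighted permutation entropy (after Fadlallah et al., 2013), normalized to [0, 1].

    Each ordinal pattern of length `order` is weighted by the variance of the
    sub-vector it comes from, so patterns riding on large-amplitude activity
    contribute more than those occurring in near-flat segments.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    # embed the signal: each row is one sub-vector of length `order`
    emb = np.array([x[i:i + n] for i in range(0, order * delay, delay)]).T
    # encode the ordinal pattern of each sub-vector as a single integer
    patterns = np.argsort(emb, axis=1)
    codes = (patterns * (order ** np.arange(order))).sum(axis=1)
    # weight each pattern occurrence by the variance of its sub-vector
    weights = emb.var(axis=1)
    total = weights.sum()
    probs = np.array([weights[codes == c].sum() / total for c in np.unique(codes)])
    probs = probs[probs > 0]
    wpe = -(probs * np.log2(probs)).sum()
    return wpe / np.log2(factorial(order))  # normalize by max entropy
```

A strongly rhythmic or monotonic signal yields values near 0 (few ordinal patterns dominate), while desynchronized, noise-like activity yields values near 1.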

We used this measure of ongoing desynchronization to trigger stimulus presentation during relatively synchronized and desynchronized states, respectively. A continuously adapting criterion enabled us to effectively sample the whole desynchronization state space (Jazayeri and Afraz, 2017). Such a closed-loop set-up allows for selective stimulation during specific states of brain activity while accounting for changes in the appearance of those states, and hence represents a powerful tool with a multitude of potential applications in research but also therapy (Sitaram et al., 2017; Ezzyat et al., 2018). To evaluate the interrelation of pre-stimulus desynchronization with simultaneously acquired pupil-linked arousal, as well as their influence on stimulus-related activity, we employed linear mixed-effect models. Furthermore, psychophysical models were used to evaluate the impact of desynchronization and arousal on perceptual sensitivity, response criterion, and response speed.
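A continuously adapting criterion of this kind can be sketched as a percentile-tracking scheme; the class name, window length, and quantiles below are hypothetical, not the authors' actual parameters:

```python
import collections
import random

class AdaptiveStateTrigger:
    """Toy sketch of a continuously adapting trigger criterion.

    Keeps a sliding window of recent desynchronization estimates and fires a
    'desynchronized' trigger when the current value exceeds a high percentile
    of that window (and a 'synchronized' trigger below a low one). Because the
    percentiles are recomputed from recent history, the criterion follows slow
    drifts in the overall state of the signal.
    """
    def __init__(self, history=200, low_q=0.1, high_q=0.9):
        self.buf = collections.deque(maxlen=history)
        self.low_q, self.high_q = low_q, high_q

    def update(self, value):
        self.buf.append(value)
        if len(self.buf) < 50:          # wait until some history accumulates
            return None
        s = sorted(self.buf)
        lo = s[int(self.low_q * (len(s) - 1))]
        hi = s[int(self.high_q * (len(s) - 1))]
        if value >= hi:
            return "desynchronized"
        if value <= lo:
            return "synchronized"
        return None
```

Because the thresholds are relative rather than fixed, extreme states remain reachable even as the overall level of the measure drifts over a session.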

Although local cortical desynchronization and pupil-linked arousal were weakly positively correlated, they not only shaped ongoing EEG activity into distinct states but also differentially influenced sensory processing at the level of single trials: On the one hand, phase-locked activity in low frequencies as well as stimulus-related gamma power over auditory cortices was highest following intermediate levels of pre-stimulus desynchronization. On the other hand, low-frequency power during and after a stimulus increased linearly with pre-stimulus arousal. Response criterion and speed exhibited an inverted u-shaped relationship with local cortical desynchronization, where intermediate desynchronization corresponded to minimal response bias and fastest responses. An analogous relationship was found for arousal and sensitivity, revealing highest sensitivity at intermediate arousal levels.

Our results speak to a model in which global arousal states and local desynchronization states jointly influence sensory processing and performance. While fluctuations in arousal are likely realized by afferent cholinergic and noradrenergic projections into sensory cortical areas (Robbins, 1997; Carter et al., 2010), desynchronization states might result from efferent feedback connections (Harris and Thiele, 2011; Zagha et al., 2013).


Introduction

Adaptive sensory processing entails the prioritization of task-relevant features with respect to competing information. Top-down modulation of activity in neural ensembles encoding task-relevant or distracting information is crucial in achieving this goal. In particular, regionally specific power changes around the alpha frequency range have been linked to such a putative top-down-mediated gain modulation, with enhanced power reflecting relatively inhibited states (Jensen and Mazaheri, 2010; Klimesch et al., 2007). For the visual modality especially, a vast amount of empirical evidence supports this notion. For example, increased alpha power in parieto-occipital cortical regions contralateral to the unattended hemifield is a very robust finding (e.g., Busch and VanRullen, 2010; Thut et al., 2006). The general inhibitory gating function of localized alpha increases has also been reported with respect to more specific visual features, leading to remarkable spatially circumscribed alpha modulations (Jokisch and Jensen, 2007; Zumer et al., 2014), even at a retinotopic level (Popov et al., 2019). Also for the domain of working memory, alpha increases have been reported during the retention period in the visual (e.g., Jensen et al., 2002; Klimesch et al., 1999), somatosensory (e.g., Haegens et al., 2009) and auditory modalities (e.g., Obleser et al., 2012), putatively protecting the to-be-remembered information against interference. This load-dependent top-down amplification of alpha and its concomitant inhibition account are widely accepted, but circumscribed decreases in alpha to beta power (often labeled as desynchronization) have also been deemed functionally important in the context of working-memory tasks. In a prioritization account, they reflect an enhanced activation of performance-relevant neural ensembles (e.g., Noh et al., 2014; Sauseng et al., 2009).
A recent framework by Hanslmayr et al., 2016 explicitly links the extent of alpha/beta desynchronization to the representational strength of the information content in episodic memory (for supportive evidence see Griffiths et al., 2019). This is in line with a framework by van Ede, 2018 stressing the importance of regionally specific alpha and beta decreases when item-specific information needs to be prioritized in the retention period of working-memory tasks.

Distracting sounds are ever-present in natural listening environments and necessitate flexible exertion of inhibition or prioritization processes. Besides stimulus-feature information, which can influence the precise location of alpha modulations in the visual system (Popov et al., 2019), temporal cues can also be exploited (Rohenkohl and Nobre, 2011; van Ede and Chekroud, 2018): that is, when distracting sound input can be temporally predicted, inhibition or prioritization processes should be regulated in an anticipatory manner in relevant auditory regions. As in other sensory modalities (Frey et al., 2015; Weisz and Obleser, 2014), an increasing amount of evidence points to a functional role of alpha oscillations in listening tasks. Increased alpha oscillations have been observed in putatively visual brain regions when focusing attention on auditory input in cue-target periods (Frey et al., 2014; Fu et al., 2001; Snyder and Foxe, 2010). A similar posterior pattern is also observed in challenging listening situations, for example, with increased cognitive load or when faced with background noise (for reviews see Johnsrude and Rodd, 2016; Rönnberg et al., 2011). However, increases in alpha oscillations as a mechanism for selective inhibition (Strauß et al., 2014) have rarely been shown for auditory cortex, in which feature-specific processing of target and distractor sounds takes place. With regard to alpha desynchronization in auditory cortex, different lines of evidence showing an association between (also illusory) sound perception and low auditory cortical alpha power (e.g., Lange et al., 2014; Weisz et al., 2007; Weisz and Obleser, 2014; for invasive recordings illustrating sound-sensitive alpha desynchronization in anterolateral Heschl’s Gyrus, see Billig et al., 2019) suggest a link to representational content as described above.

The goal of the present study was to test whether power modulations in the alpha/beta range (Hanslmayr et al., 2016 van Ede, 2018) in task-relevant auditory cortical areas prior to a temporally predictable distractor, which was presented in the same (i.e. auditory) modality as the target, would better fit with an inhibition or prioritization account. On a general level, power increases would be predicted by an inhibition account, whereas decreases would be expected according to a prioritization account. Furthermore, both alternative accounts make opposing predictions regarding the relationship between pre-distractor alpha modulations and the strength of memorized information in the retention period (see Figure 1).

Modified auditory Sternberg paradigm and cartoon depiction of analysis rationale.

(A) A sequence of four consonants spoken by a female voice was presented. After the retention period, either a strong (consonant spoken by a male voice) or a weak (scrambled consonant) distractor was presented (at 1 s). Distractor type was kept constant during a block. Subsequently, participants indicated by a button press whether the probe was part of the memory set (‘part’) or not (‘no part’). At an individual level, temporal decoding was performed on whether the probe was part of the memory set or not. When the probe was part of the memory set, it should share distinct neural patterns with those elicited by the items of the memory set, while this should not be the case when the stimulus was not part of the memory set. By time-generalizing the classifiers trained on the probe to the period of the retention interval, we obtained a quantitative proxy for the strength of memorized information at the time of distractor presentation. The results were then statistically contrasted between weak and strong distractors across the group. (B) Alpha/beta power in lSTG was calculated at a single-trial level in a pre-distractor period and was used to bin high- and low-power trials. For a 0.5-s pre-distractor period, an analysis analogous to (A) was performed to quantify the relationship between regionally specific alpha/beta power and the strength of memorized information. A prioritization account would predict that lower ‘desynchronized’ states go along with relatively increased strength of memorized information. This pattern should be captured when contrasting the bins across the entire group and when taking into account the extent of modulation within single participants. An inhibition account would predict an opposing pattern.
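The single-trial power binning in (B) can be sketched as a simple median split on FFT band power; this is illustrative only, not the authors' exact source-space pipeline, and the band limits are stand-ins:

```python
import numpy as np

def bin_trials_by_band_power(trials, sfreq, fmin=8.0, fmax=30.0):
    """Median-split trials by mean spectral power in a frequency band.

    `trials` is (n_trials, n_samples) of pre-distractor activity. Returns
    boolean masks (low, high) over trials; trials at or below the median
    power go to the 'low' bin.
    """
    trials = np.asarray(trials, dtype=float)
    freqs = np.fft.rfftfreq(trials.shape[1], d=1.0 / sfreq)
    spec = np.abs(np.fft.rfft(trials, axis=1)) ** 2
    band = (freqs >= fmin) & (freqs <= fmax)
    power = spec[:, band].mean(axis=1)       # one band-power value per trial
    low = power <= np.median(power)
    return low, ~low
```

The decoding proxy from (A) can then be contrasted between the two masks.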

We adapted a Sternberg task variant introduced by Bonnefond and Jensen, 2012 to the auditory modality. These researchers illustrated pronounced alpha and beta increases, as well as phase effects, in parieto-occipital regions prior to the presentation of a more potent but temporally predictable visual distractor in the retention period. Using magnetoencephalography (MEG) and decoding, we first identified regions that were informative as to whether a speech item was part of a memory set or not, and focused subsequent spectral analysis on the left superior temporal gyrus (lSTG). This region, which is crucially involved in phonological short-term memory (Jacquemot and Scott, 2006), expressed marked alpha/beta desynchronization prior to a strong distractor. Importantly, by time-generalizing the aforementioned classifier (King and Dehaene, 2014), we implemented a proxy for the strength of memorized information that could be compared between trials with high or low power. Specifically, we show that lower pre-distractor beta power in lSTG goes along with relatively enhanced memory representation in the same period. For alpha power, however, a negative correlation was observed between the strength of memorized information and the extent to which power was modulated at an individual level. Overall, our study draws a nuanced picture that points to differential alpha and beta processes in the auditory cortex that altogether support the prioritization of relevant information in working memory (van Ede, 2018).


Introduction

Brain-Computer Interfaces (BCIs) are noninvasive systems that provide a channel of real-time communication and allow control of external devices (e.g. computers) without any muscle activity. The input signal for BCIs is physiological data obtained by various neuroimaging methods, which are transformed into the output response of an effector. Almost 60% of currently tested BCI systems use electroencephalography (EEG) to register brain activity 1. The most common EEG methods implemented for BCIs are based on sensorimotor rhythm (SMR) activity registered during motor imagery 1 and allow the design of so-called motor-imagery BCIs (MI-BCIs) or sensorimotor rhythm-based BCIs (SMR-BCIs). MI-BCIs have been used experimentally to control devices such as orthoses 2, drones 3, and wheelchairs 4, as well as communication software 5.

Sensorimotor rhythms are brain oscillations registered at electrodes placed over the sensorimotor cortex during the preparation, execution, and imagery of a motor act 6,7,8,9,10. Two phenomena can be observed in SMR: a decrease of power during movement preparation or execution, known as event-related desynchronization (ERD) 11, and an increase of power after completing a movement, i.e., event-related synchronization (ERS) 12. During actual or imagined hand movement, there is a preponderance of the contralateral ERD/ERS effect at central-parietal electrodes 7. Duann and Chiou 13 showed that, based on independent component analysis (ICA), it is possible to identify sources related to ERD/ERS activity in the motor cortex. The lateralization of the ERD/ERS effect is used to control SMR-BCIs.
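The ERD/ERS quantities can be illustrated with the classic percentage-change formula (a minimal sketch; band-pass filtering and averaging across trials are omitted here):

```python
import numpy as np

def erd_percent(power, baseline_idx, test_idx):
    """Band-power change in a test interval relative to a reference interval,
    expressed as a percentage (the classic Pfurtscheller-style measure).

    Negative values indicate desynchronization (ERD), positive values
    synchronization (ERS). `power` is a 1-D array of band power over time
    for one channel; the index arguments are slices into it.
    """
    power = np.asarray(power, dtype=float)
    R = power[baseline_idx].mean()   # reference-period power
    A = power[test_idx].mean()       # test-period power
    return (A - R) / R * 100.0
```

A value of -50, for instance, means band power in the test interval dropped to half of its reference level.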

SMR can be separated into an alpha band (8–13 Hz) and a beta band (15–30 Hz), which display different functional properties within the sensorimotor system. Oscillations in the alpha band enable functional coupling of remote cortical areas through the selection of task-relevant cortical regions and the inhibition of activity in task-irrelevant regions 14. Beta oscillations are engaged in the control of muscular activity and communication between the cortex and periphery, and are related to some cognitive aspects of motor control, such as visual cue anticipation and processing 15. On the one hand, researchers state that activity in the beta band simply reflects the maintenance of current sensorimotor parameters and cognitive state 16, or the activation/deactivation of motor cortical areas 17. On the other hand, in the course of movement preparation, ERD is regulated by uncertainty about the direction of an upcoming movement: lower confidence about the movement direction is associated with a weaker ERD effect 18.

The widespread usage of brain-computer interfaces encounters several obstacles. Two of the most significant are low efficiency in translating brain activity into information, and large individual differences in the ability to control BCIs effectively. Indeed, the reported phenomenon of BCI illiteracy 6 describes subjects who are unable to operate a given type of device at a rate higher than chance. This problem affects 15–30% of research participants across various BCI approaches. A lot of current research focuses on inter- and intra-subject variation in BCI performance (see the review by Ahn and Jun 19). In recent years, efforts to identify individual factors correlated with BCI performance have yielded interesting results. The ability to control an SMR-BCI is moderated by motor experience, particularly the average number of hand-and-arm movements per day, practice in playing musical instruments 20,21, frequency of manual activity 22, and cognitive skills, e.g. visual-motor integration 23 or mental rotation accuracy 24. Vuckovic and Osuagwu 25 reported that people with high BCI aptitude prefer kinesthetic rather than visual forms of motor imagery, as measured by a self-report questionnaire. However, this finding has not been confirmed in more recent studies 22, so estimating BCI performance using subjective methods could be ineffective.

In the study by Marchesotti et al. 26, participants with a similar temporal profile, obtained in a mental chronometry task, showed both higher BCI abilities and stronger lateralization of sensorimotor rhythms during motor imagery. In fact, SMR lateralization is a major issue in the area of motor cognition research. Stancák and Pfurtscheller 27 showed that hand dominance and handedness influence the lateralization of sensorimotor rhythm desynchronization during motor preparation. Right-handers exhibit stronger lateralization of SMRs preceding right-finger as compared to left-finger movements, whereas left-handed participants show a similar contralateral preponderance for both sides. In another experiment, McFarland and colleagues 28 reported differences between right- and left-hand movement or imagery in scalp topographies of SMR bands, as well as individual differences in signal lateralization between subjects. Still, there are no data indicating whether this pattern depends on the handedness of the subjects. Bai, Mari, Vorbach and Hallett 29 investigated the patterns of ERD prior to sequential finger movements in a group of right-handers. They observed a contralateral prevalence of ERD only during right-hand finger movements, while ERD during left-hand finger movements was bilateral. They also concluded that, for right-handers, activation of the left hemisphere during non-dominant hand movements is greater than activation of the right hemisphere during dominant hand movements. Neuroimaging studies likewise indicate that activation of motor areas during hand movements differs between right- and left-handed individuals: during sequential movements, left-handers activate larger volumes and a larger number of brain areas than right-handers, and also show significantly less brain lateralization; however, there are no such differences for simple movements 30. In other fMRI studies, Pool et al. 31,32 reported weaker asymmetry in motor-network effective connectivity in left-handers than in right-handers during fist closures and a resting state.

To the best of our knowledge, there is no research directly investigating the effects of a subject’s handedness on SMR desynchronization in motor imagery tasks. The ERD/ERS pattern for left- and right-hand imagery is well recognized and underlies SMR-BCI functioning, yet the majority of research concerns only right-handed participants, as they represent a vast majority of the population 33,34. Given that SMR-BCIs are based on recognizing patterns of EEG activity during imagined hand movement, and that existing research shows that SMR patterns during movement execution depend on handedness, it seems crucial to investigate the role of handedness in SMR-BCI control. For this purpose, we decided to use ICA decomposition to identify the neural substrates of SMR activity evoked by performance of the motor imagery task. Another essential feature of our current study is that we investigate this issue with an on-line BCI control task, whereas most studies use off-line data only, essentially relying on the classification of previously acquired signals without feedback. Moreover, we applied a complex, goal-oriented experimental procedure in order to eliminate random hits, which could result in lower BCI accuracy. For this reason, the reported results may seem relatively low compared to typical offline or online studies (e.g. those based on the Graz BCI paradigm).

In this study, we hypothesized that handedness would influence both the distribution of sensorimotor rhythms during motor imagery and the accuracy of SMR-BCI control. We predicted that desynchronization of SMR would be more pronounced, and BCI aptitude higher, in right- than in left-handed individuals. Additionally, our goal was to determine whether there would be differences in ERD of SMR in motor imagery tasks depending on an individual’s handedness and the estimated cortical location. We investigated the differences between the groups in a pure motor imagery task (off-line session) and a real-time BCI cursor control task (online session).

The “Methods” section can be found after the “Results” section.


The ‘DAWBA bands’ as an ordered-categorical measure of child mental health: description and validation in British and Norwegian samples

Objective

To describe and validate the ‘DAWBA bands’. These are novel ordered-categorical measures of child mental health, based on the structured sections of the Development and Well-Being Assessment (DAWBA).

Methods

We developed computer algorithms to generate parent, teacher, child and multi-informant DAWBA bands for individual disorders and for groups of disorders (e.g. ‘any emotional disorder’). The top two (out of six) levels of the DAWBA bands were used as computer-generated DAWBA diagnoses. We validated these DAWBA bands in 7,912 British children (7–19 years) and 1,364 Norwegian children (11–13 years), using clinician-rated DAWBA diagnoses as a gold standard.
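The banding rule described above can be sketched as follows, assuming the common 0–5 coding of the six DAWBA band levels (an assumption for illustration; the actual algorithms are considerably more elaborate):

```python
def dawba_band_to_diagnosis(band):
    """Computer-generated DAWBA diagnosis from a six-level DAWBA band.

    Assumes bands are coded 0-5 (an illustrative assumption): per the rule
    above, only the top two levels (4 and 5) count as a positive diagnosis.
    """
    if not 0 <= band <= 5:
        raise ValueError("DAWBA band must be in 0..5")
    return band >= 4
```

A separate band of this kind would be generated per informant and per disorder, then combined into multi-informant bands.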

Results

In general, the prevalence of clinician-rated diagnosis increased monotonically across the levels of the DAWBA bands, and also showed a dose–response association with service use and risk factors. Prevalence estimates from the computer-generated DAWBA diagnoses were roughly comparable in magnitude to those from the clinician-rated diagnoses, although not always very close. In contrast, the estimated effect sizes, significance levels and substantive conclusions regarding risk-factor associations were very similar or identical. The multi-informant and parent DAWBA bands performed especially well in these regards.

Conclusion

Computer-generated DAWBA bands avoid the cost and delay occasioned by clinical rating. They may, therefore, sometimes provide a useful alternative to clinician-rated diagnoses, when studying associations with risk factors, generating rough prevalence estimates or implementing routine mental health screening.


Ecology of Musical Preference: the Relationship Between Pathogen Prevalence and the Number and Intensity of Metal Bands

Ecological conditions and pathogen stress shape human cognition and behavior and can explain cultural and behavioral patterns cross-culturally. It has been previously shown that human values and preferences are also phenotypically plastic in response to parasitic stress across regions, and that parasite prevalence is associated with out-group prejudice and nonconformity tolerance. Human preference for music is also variable across the world, and no previous study has considered this variability in the light of behavioral ecology. Research has shown that in regions in which parasite stress is higher, there is greater aversion to out-groups and to unusual and deviant behaviors. In the current study we hypothesized that extreme forms of music such as heavy metal, which is associated with antisocial behavior, irreligiosity, and deviation from the norm, are less prevalent in regions with a higher prevalence of pathogenic stress. We tested our hypothesis using publicly available data measuring the number and intensity of metal bands and parasite prevalence, while controlling for importance of religion, human development index, and population size across European countries. Results showed that parasite stress negatively predicts the number of heavy metal bands. However, no relationship was found between the intensity of the music and parasite stress. We discuss our results in terms of the association of parasite stress with tolerance of out-group members (the metal community and culture) and with openness to new musical endeavors. Overall, this study extends the role of ecological and biological variation (parasite stress) in shaping human cognition to musical preference and prevalence.
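The kind of analysis described — predicting band counts from parasite stress while controlling for religiosity, HDI, and population — can be sketched with an ordinary least-squares fit. This is not the authors' exact specification; the data below are synthetic and built so that the parasite-stress slope is negative by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # hypothetical sample of countries

# Synthetic data: parasite stress is negatively related to the number of
# metal bands, with the paper's covariates as additional predictors.
parasite = rng.normal(size=n)
religion = rng.normal(size=n)
hdi = rng.normal(size=n)
log_pop = rng.normal(size=n)
bands = 5 - 2 * parasite + 0.5 * hdi + rng.normal(size=n)

# Design matrix: intercept + predictors; fit via ordinary least squares.
X = np.column_stack([np.ones(n), parasite, religion, hdi, log_pop])
coef, *_ = np.linalg.lstsq(X, bands, rcond=None)
print(coef[1] < 0)  # partial slope for parasite stress: negative, as built in
```

A count outcome like "number of bands" would more properly be modelled with a Poisson or negative-binomial regression; OLS is used here only to keep the sketch dependency-free.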



Introduction

1.3.4 Beta Waves (β)

Beta waves are fast, with frequencies of 14 Hz and above. They are most frequently picked up along both sides of the head in a symmetrical distribution and are most evident frontally. Beta activity is accentuated by sedative-hypnotic drugs, especially the benzodiazepines and the barbiturates, and may be missing in regions of cortical damage. It is mostly seen as a normal brain wave pattern and is the dominant rhythm in patients who are alert or anxious or who have their eyes open. Fig. 1.7 shows a sample beta wave.
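The conventional band boundaries can be collected into a small lookup. Exact cut-offs vary between sources; the 14 Hz lower edge for beta follows the text above, and the other boundaries are common conventions rather than values taken from this chapter.

```python
# Conventional EEG band boundaries in Hz (approximate; cut-offs vary
# between sources — the 14 Hz beta lower edge follows the text above).
EEG_BANDS = {
    "delta": (0.5, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (8.0, 14.0),
    "beta": (14.0, 30.0),
    "gamma": (30.0, 100.0),
}

def classify_frequency(freq_hz: float) -> str:
    """Return the band whose [low, high) range contains freq_hz."""
    for band, (low, high) in EEG_BANDS.items():
        if low <= freq_hz < high:
            return band
    return "out of range"

print(classify_frequency(18.0))  # beta
```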


Results

Figure 1 illustrates the spatial distribution and time course of ERS in the alpha, beta, and gamma bands. Figure 2 displays the electrode-averaged ERS at each interval in the alpha, beta, and gamma bands.
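The Results do not restate the ERS formula; as a sketch, the common Pfurtscheller-style convention expresses event-related power as a percent change relative to a baseline interval, with positive values read as synchronization (ERS) and negative values as desynchronization (ERD). The function name and the example numbers below are illustrative, not taken from the paper.

```python
def event_related_change(power_event: float, power_baseline: float) -> float:
    """Percent band-power change relative to baseline (Pfurtscheller-style):
    positive = event-related synchronization (ERS), negative = ERD."""
    return 100.0 * (power_event - power_baseline) / power_baseline

print(event_related_change(12.0, 8.0))  # 50.0  -> ERS
print(event_related_change(6.0, 8.0))   # -25.0 -> ERD
```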

Figure 1. ERS in the (A) alpha, (B) beta, and (C) gamma bands. Each topographic plot represents mean ERS calculated over a 20-s long interval, averaged over all trials within each music condition, and all participants within each group. Electrode sites, represented by the dots, are at standard positions given by the international 10–20 system.

Figure 2. Group × Music × Interval interaction plots of mean ERS in (A) alpha, (B) beta, and (C) gamma bands. Line plots show mean ERS over each time Interval within each Music condition, per Group.

Alpha

There was a significant Group × Music × Interval interaction [F(4,120) = 8.18, p < 0.001, η p 2 = 0.21] and a main effect of Electrode [F(29,870) = 4.04, p = 0.002, η p 2 = 0.12]. See Figures 1A, 2A. The absence of a main effect of Music [F(1,30) = 0.70, p = 0.41] suggests that differences in intrinsic music features such as tempo, meter, and rhythm were unlikely to have played a role in either group's alpha activity.

Post-hoc comparisons are provided in Table 1, using Bonferroni-corrected t-tests. Table 1A shows comparisons between Groups as they listened to their own music (tango in dancers vs. non-tango in non-dancers) at each interval, using independent-samples t-tests. Table 1B shows comparisons between Music conditions within each Group (for example, dancers listening to tango vs. dancers listening to non-tango) at each interval, using paired-samples t-tests.

Table 1A. Group-wise comparison: Post-hoc tests of significant Group × Music × Interval interactions in each band.

Table 1B. Music-wise comparison: Post-hoc tests of significant Group × Music × Interval interactions in each band.

Post-hoc analysis of the main effect of electrode identified that the posterior electrodes O1, Oz, and O2 were most active when compared with each of the other 29 electrodes (p-values < 0.001, 0.011, and 0.046, respectively).

Beta

There were significant interactions of Group × Music [F(1,30) = 6.53, p = 0.016, η p 2 = 0.18], Group × Music × Interval [F(4,120) = 6.66, p = 0.001, η p 2 = 0.18], and Group × Music × Electrode [F(29,870) = 3.83, p = 0.003, η p 2 = 0.11]. See Figures 1B, 2B. The lack of a main effect of Music [F(1,30) = 2.11, p = 0.16] suggests that differences in intrinsic music features such as tempo, meter, and rhythm were unlikely to have played a role in either group's beta activity.

Post-hoc tests for the Group × Music interaction via Bonferroni-corrected paired-samples t-tests indicated higher ERS in the tango condition compared to non-tango for the dancer group [t(2,399) = 18.62, p < 0.001, d = 0.38], and higher ERS in the non-tango condition compared to tango for the non-dancer group [t(2,399) = −6.89, p < 0.001, d = −0.14].

Post-hoc comparisons using Bonferroni-corrected t-tests for the Group × Music × Interval interaction for beta are shown in Table 1. Table 1A shows comparisons between groups as they listened to their own music (tango in dancers vs. non-tango in non-dancers) at each interval, using independent-samples t-tests. Table 1B shows comparisons between music conditions within each group (for example, dancers listening to tango vs. dancers listening to non-tango) at each Interval, using paired-samples t-tests.
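The Bonferroni correction applied throughout these post-hoc tests can be sketched as follows. The adjustment itself (multiply each raw p-value by the number of comparisons, cap at 1) is standard; the raw p-values below are hypothetical, not the paper's.

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni adjustment: scale each p-value by the number of tests m,
    cap at 1.0, and reject where the adjusted value stays below alpha."""
    m = len(p_values)
    adjusted = [min(p * m, 1.0) for p in p_values]
    rejected = [p_adj < alpha for p_adj in adjusted]
    return adjusted, rejected

# Hypothetical raw p-values from five interval-wise paired t-tests:
raw = [0.003, 0.020, 0.049, 0.30, 0.001]
adj, rej = bonferroni(raw)
print(rej)  # [True, False, False, False, True]
```

Note that a raw p of 0.049, significant uncorrected, no longer survives once scaled by the five comparisons.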

Gamma

There were significant interactions of Group × Music [F(1,30) = 27.73, p < 0.001, η p 2 = 0.48], and Group × Music × Electrode [F(29,870) = 3.00, p < 0.001, η p 2 = 0.09]. See Figures 1C, 2C. The lack of a main effect of Music [F(1,30) = 0.85, p = 0.37] suggests that differences in intrinsic music features such as tempo, meter, and rhythm were unlikely to have played a role in either group's gamma activity.

Post-hoc comparisons for the Group × Music interaction indicated higher ERS in the tango condition relative to non-tango in dancers [t(2,399) = 37.81, p < 0.001, d = 0.77], and higher ERS in the non-tango condition relative to tango in non-dancers [t(2,399) = �.94, p < 0.001, d = −0.51].

Comparisons between each Group's own Music (tango in dancers vs. non-tango in non-dancers) via Bonferroni-corrected independent-samples t-tests showed higher gamma activity in non-dancers when they listened to non-tango, compared to dancers when they listened to tango [t(4615.5) = −2.17, p = 0.03, d = −0.06]. Non-dancers also showed higher gamma activity when they listened to tango, compared to dancers when they listened to non-tango [t(4790.7) = �.10, p < 0.001, d = −0.35].

Behavioral

Both groups rated their own music higher than the other group's music on the enjoyment scale (1 = no enjoyment, 10 = most enjoyable). See Figure 3A. Mean rating scores for their own music were 9.4 ± 0.2 (dancers) and 8.8 ± 0.2 (non-dancers) and for the other group's music 6.7 ± 0.5 (dancers) and 5.7 ± 0.5 (non-dancers). A Group × Music ANOVA on the ratings yielded a significant Group × Music interaction [F(1,30) = 61.4, p < 0.001, η p 2 = 0.67], signifying higher ratings for their own music in both groups.

Figure 3. (A) Mean enjoyment ratings of music (1 = no enjoyment, 10 = most enjoyable) by music condition in each participant group. Error bars represent standard error of the mean. The tango condition served as dancers' own music, while the non-tango condition served as non-dancers' own music. (B) Estimated rating scores for all tango and non-tango music as a linear function of ERS in all participants for the alpha, beta, and gamma bands. Shaded regions indicate confidence intervals at each estimate. A linear mixed regression excluding extreme values (±3 SD from mean ERS) did not alter the findings [main effect of ERS p-values in original analysis: p = 0.991 (alpha), p = 0.695 (beta), p = 0.011 (gamma); main effect of ERS p-values in analysis without outliers: p = 0.388 (alpha), p = 0.888 (beta), p = 0.012 (gamma)].

To test the relation of the spectral activity with the enjoyment ratings of the music, a linear mixed regression was performed on rating scores for all tango and non-tango music using ERS values and Group (dancer, non-dancer) as fixed-effects factors, and participant as a random-effects factor. Overall model fit was assessed in each frequency band by comparing the null regression (intercept-only) model against the full model (with factors ERS and Group) using an F-test with the Satterthwaite approximation for degrees of freedom (Luke, 2017). Figure 3B shows marginal estimates produced by the overall models of enjoyment scores as a function of ERS in each frequency band. Results showed that ratings were significantly predicted by ERS values in both groups only in the gamma band [main effect of ERS: t(60.0) = 2.63, p = 0.011], and the full model was significantly different from the null model in the gamma band [χ 2 (3) = 12.30, p = 0.006]. ERS values did not predict ratings in the alpha or beta bands (p > 0.623); the full model was not significantly different from the null model in these bands [alpha: χ 2 (3) = 2.32, p = 0.509; beta: χ 2 (3) = 2.72, p = 0.437].
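The null-vs-full model comparison used here can be sketched with a likelihood-ratio test. To stay dependency-light, the sketch below uses plain OLS fits with Gaussian errors instead of the authors' mixed model (no random participant effect), and entirely synthetic data in which ratings are linked to ERS by construction; the full model adds three terms (ERS, Group, their interaction) over the intercept-only null, matching the χ²(3) comparison in the text. 7.815 is the standard χ² critical value for df = 3 at α = .05.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
ers = rng.normal(size=n)                      # synthetic gamma-band ERS values
group = rng.integers(0, 2, size=n).astype(float)
rating = 7 + 1.2 * ers + rng.normal(size=n)   # ratings tied to ERS by design

def gaussian_loglik(y, X):
    """Maximized log-likelihood of an OLS fit with Gaussian errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = np.mean(resid ** 2)              # MLE of the error variance
    return -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)

X_null = np.ones((n, 1))                                         # intercept only
X_full = np.column_stack([np.ones(n), ers, group, ers * group])  # +3 terms

# Likelihood-ratio statistic, compared against the chi-square critical
# value for df = 3 at alpha = .05 (7.815).
lr = 2 * (gaussian_loglik(rating, X_full) - gaussian_loglik(rating, X_null))
print(lr > 7.815)
```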


Background

Anxiety and depression are the most frequently diagnosed mental health disorders, both in the general population and, consequently, in child and adolescent mental health outpatient services (CAMHS) [1,2,3]. In the general population, up to 10% of children and 20% of adolescents will meet the criteria for an anxiety disorder at any point in time [4]. Adolescents are at high risk for the development of depression: the percentage of adolescents with major depressive disorder ranges from 8 to 20% before the age of 18 [5,6,7,8]. Emotional disorders interfere negatively with various aspects of functioning and quality of life [9,10,11,12,13]. The prevalence of both anxiety and depressive disorders increases during adolescence [14, 15]. Comorbidity and co-occurrence of anxiety and depression are high [16], and studies show that anxiety and depression have both shared and separate features and etiology [16]. Hence, combined treatments for emotional disorders could offer an effective approach to these complex disorders.

Cognitive behavioral therapy (CBT) and interpersonal treatment (IPT) are well-established interventions for adolescent depression [17], and numerous studies have also demonstrated that CBT relieves anxiety symptoms in youths [18, 19]. In a comprehensive multilevel meta-analysis [20], integrating the results of 140 studies from the past five decades (1963–2013), youth psychotherapies showed a significant post-treatment effect size (ES) of 0.46. For the separate disorders, the largest ES was reported for anxiety (0.61), while treatments of depression in youths have yielded weaker ES (0.29) [20]. Other systematic reviews and meta-analyses examining the effect of youth CBT, show moderate to large treatment effects on anxiety and depression in youths [21, 22]. The majority of the evidence-based protocols for youths target symptoms of single disorders [23, 24], or symptom domains [25]. So far, the most well-known and well-studied combined treatment for the comorbid features of emotional disorders is the unified protocol for treatment of emotional disorders in adolescents [26]. Although not all studies have found a relationship between treatment outcome and comorbidity [22], some have found that comorbidity predicts poorer response to interventions in youth with both primary anxiety [27,28,29] and primary depression [16]. According to the previously mentioned multilevel meta-analysis [20], treatments of concurrent multiple problems, as opposed to any single targeted problem, showed an effect that was not significantly different from zero at post-treatment or follow-up [20]. Some argue that this could suggest that efforts made to concurrently treat multiple problems have been less effective than focusing more narrowly [30], suggesting new ways to address comorbidity in youths [31,32,33]. 
In an earlier review of trials of 461 youth psychotherapies, spanning 50 years from the 1960s onward, Weisz and colleagues [20] found that the interventions were usually delivered in settings outside regular clinical practice, i.e. in research settings. Across the trials, only 2.1% of all study groups were described as involving clinically referred clients treated by practitioners in regular clinical practice settings [20]. When delivered in regular clinical practice, evidence-based treatments (EBT), compared to treatment as usual (TAU), have modest outcomes (ES, d = 0.29) [29]. Furthermore, in several instances TAU delivered in regular clinical practice outperformed standard EBT, usually delivered as single-disorder interventions. Even studies using exclusively diagnosed samples (d = 0.09) and studies on clinically referred youths (d = 0.17) showed low and non-significant ES values [34]. Despite the importance of quality assurance in routine practice, most CAMHS do not evaluate patients' clinical change systematically [35]. A report from the Child and Outcomes Research Consortium (CORC) 2013–2016, covering patients receiving treatment over six months in one of our participating CAMHS, showed improvement in many patients; however, as many as 27% deteriorated [36].
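The effect sizes (ES) quoted throughout this section are standardized mean differences. As a reference for how such a d is computed, here is a minimal sketch using a pooled standard deviation; the example scores are hypothetical, not data from any of the cited trials.

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Cohen's d: mean difference divided by the pooled sample SD."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled

# Hypothetical post-treatment scores (higher = better functioning):
print(cohens_d([4, 5, 6, 7, 8], [3, 4, 5, 6, 7]))  # ≈ 0.63, a medium effect
```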

In a CAMHS setting, there are high production requirements for staff, so the treatments need to be short and effective. Clinicians in a managed care setting reportedly emphasize short-term cognitive behavioral strategies [37]. Transdiagnostic treatment focuses on treatment strategies that may be generic across diverse conditions and can be defined as a therapy made available to individuals with a wide range of disorders [38]. Transdiagnostic treatment is characterized by a focus on cognitive, behavioral, and physiological processes that are shared or common across diverse disorders [38]. Although focal EBT are excellent in many ways (see 20), there may be challenges associated with implementation of several disorder-specific CBTs in regular clinical practice, and hence reasons for advocating training in one transdiagnostic CBT intervention that spans over several disorders or symptom clusters. In the framework of regular clinical practice, transdiagnostic CBT could be more applicable, time-saving, realistic to learn and cost-efficient for therapists in terms of training and application, and last but not least, it addresses the comorbid states we encounter in regular practice (e.g. 20,33).

There is a growing body of evidence demonstrating that transdiagnostic treatments could be effective in the reduction of symptoms of anxiety and depression [39], furthermore that transdiagnostic CBT has effects similar to those of disorder-specific interventions [40], and finally that effect sizes range from medium to large for these types of interventions [41]. The Unified Protocol for the Treatment of Emotional Disorders in Adolescents (UP-A) showed a significant effect compared to waiting-list controls on all outcome measures [26]. However, as highlighted in Weisz and colleagues' [20] extensive meta-analysis, the vast majority of the 1160 treatment and control groups included therapy that was not delivered in regular clinical care settings.

To the best of our knowledge, the present study is the first RCT performed with short-term transdiagnostic CBT for adolescents, the SMART protocol, with combined emotional disorders in regular clinical settings in CAMHS.

Objectives

The objective of the present study was to examine the effectiveness of a short-term, transdiagnostic CBT (SMART) in adolescents with clinically significant emotional symptoms referred to community clinics. The effectiveness is investigated both with regard to


53d. America Rocks and Rolls


The prosperity of the '50s allowed teenagers to spend money on records by their favorite bands and singers.

Rock and roll was everything the suburban 1950s were not. While parents of the decade were listening to Frank Sinatra , Perry Como , and big bands , their children were moving to a new beat.

In fact, to the horror of the older generation, their children were twisting, thrusting, bumping, and grinding to the sounds of rock and roll.

This generation of youth was much larger than any in recent memory, and the prosperity of the era gave them money to spend on records and phonographs. By the end of the decade, the phenomenon of rock and roll helped define the difference between youth and adulthood.

The Roots of Rock


Alan Freed, the Cleveland disc jockey credited with coining the phrase "rock and roll," was the master of ceremonies at many of the first rock concerts, including his 1955 Easter Jubilee.

The roots of rock and roll lay in African American blues and gospel . As the Great Migration brought many African Americans to the cities of the north, the sounds of rhythm and blues attracted suburban teens. Due to segregation and racist attitudes, however, none of the greatest artists of the genre could get much airplay.

Disc jockey Alan Freed began a rhythm-and-blues show on a Cleveland radio station. Soon the audience grew and grew, and Freed coined the term "rock and roll."

Early attempts by white artists to cover R&B songs resulted in weaker renditions that bled the heart and soul out of the originals. Record producers saw the market potential and began to search for a white artist who could capture the African American sound.


Chuck Berry's songs about girls and cars hit a nerve with American teens and sent his star rising high in the early days of rock and roll.

Sam Phillips , a Memphis record producer, found the answer in Elvis Presley. With a deep Southern sound, pouty lips, and gyrating hips, Elvis took an old style and made it his own.

From Memphis, the sound spread to other cities, and demand for Elvis records skyrocketed. Within two years, Elvis was the most popular name in the entertainment business.

After the door to rock and roll acceptance was opened, African American performers such as Chuck Berry, Fats Domino , and Little Richard began to enjoy broad success, as well. White performers such as Buddy Holly and Jerry Lee Lewis also found artistic freedom and commercial success.

Satan's Music


Elvis Presley brought rock-and-roll music to the masses during the 1950s with hits such as "Love Me Tender" and "Heartbreak Hotel."

Rock and roll sent shockwaves across America. A generation of young teenagers collectively rebelled against the music their parents loved. In general, the older generation loathed rock and roll. Appalled by the new styles of dance the movement evoked, churches proclaimed it Satan's music.

Because rock and roll originated among the lower classes and a segregated ethnic group, many middle-class whites thought it was tasteless. Rock and roll records were banned from many radio stations and hundreds of schools.

But the masses spoke louder. When Elvis appeared on TV's The Ed Sullivan Show, the show's ratings soared.

Rock and roll is the most brutal, ugly, degenerate, vicious form of expression — lewd, sly, in plain fact, dirty — a rancid-smelling aphrodisiac and the martial music of every side-burned delinquent on the face of the earth.

The commercial possibilities were limitless. As a generation of young adults finished military service, bought houses in suburbia, and longed for stability and conformity, their children seemed to take comfort for granted. They wanted to release the tensions that bubbled beneath the smooth surface of postwar America.


Introduction

The way we sense and perceive our environment is not determined by physical input through the senses alone. The dynamics of ongoing brain activity affect the build-up of sensory representations and our conscious perception of the physical world. Recently, instantaneous fluctuations of both pupil-linked arousal (McGinley et al., 2015b; Lee et al., 2018; Pfeffer et al., 2018) and neural desynchronization (Curto et al., 2009; Marguet and Harris, 2011; Pachitariu et al., 2015) have been highlighted as sources of such sensory and perceptual variation: arousal and cortical desynchronization are two ways of characterizing the brain state, which strongly influences sensory cortical responses and the encoding of information, and thus perception and ultimately behaviour.

The term arousal here and henceforth is used to refer to the general level of alertness, which likely traces back to neuromodulatory activity and is associated with the ascending reticular activating system (ARAS). Pupil-linked arousal, which captures locus coeruleus-norepinephrine activity (LC–NE; Aston-Jones and Cohen, 2005; Joshi et al., 2016; Reimer et al., 2016), has been shown to influence sensory evoked activity (McGinley et al., 2015a; McGinley et al., 2015b; Gelbard-Sagiv et al., 2018) and the processing of task-relevant information (Murphy et al., 2014; Lee et al., 2018). Despite evidence for an inverted u-shaped relation of tonic LC–NE activity to performance, long suspected from the Yerkes-Dodson law (Yerkes and Dodson, 1908), the precise associations between arousal, sensory processing, and behaviour are underspecified: although optimal performance at intermediate levels of arousal has reliably been observed (Murphy et al., 2014; McGinley et al., 2015b; McGinley et al., 2015a; van den Brink et al., 2016; Faller et al., 2019), reports of linear effects on performance (Gelbard-Sagiv et al., 2018) or evoked activity (Neske and McCormick, 2018) in different tasks and species complicate this picture.

In a separate line of experimental work in non-human animals, relatively high neural desynchronization yielded improved encoding and representation of visual (Goard and Dan, 2009; Pinto et al., 2013; Beaman et al., 2017) as well as auditory input (Marguet and Harris, 2011; Pachitariu et al., 2015; Sakata, 2016). Such periods of desynchronization are characterized by reduced noise correlations in population activity, and these patterns are commonly referred to as desynchronized cortical states. They likely result from subtle changes in the balance of excitatory and inhibitory activity (Renart et al., 2010; Haider et al., 2013). Notably, behaviourally relevant changes in cortical desynchronization have been suggested to trace back to attention-related changes in thalamo-cortical interactions (Harris and Thiele, 2011). Thus, such desynchronization states can be expected to be of local nature and be limited to sensory cortical areas of the currently attended sensory domain (Beaman et al., 2017). Although local desynchronization and perceptual performance are positively linked in general (Beaman et al., 2017; Speed et al., 2019), the exact shape of their relationship (e.g., linear vs. quadratic) is unclear. Most notably, evidence for a similar mechanism in humans has remained elusive.

On the one hand, a tight link of pupil size and desynchronization has been claimed (McCormick, 1989; McCormick et al., 1991; McGinley et al., 2015a; Vinck et al., 2015). On the other hand, both measures have also been found to be locally unrelated (Beaman et al., 2017; Okun et al., 2019). As of now, pupil-linked arousal and local cortical desynchronization may or may not be distinct signatures of the same underlying process: varying noradrenergic and cholinergic activity could influence both local cortical activity and the more global measure of pupil size via afferent projections from brain-stem nuclei (Harris and Thiele, 2011). In sum, it is, first, unclear how pupil-linked arousal and local cortical desynchronization precisely shape sensory processing and perceptual performance in humans. Second, the interrelation of both measures and their potentially shared underlying formative process lacks specification.

Here, we set out to test the relationship of local desynchronization states and pupil-linked arousal, and to specify their relative impact on sensory processing and perception in healthy human participants. We recorded EEG and pupillometry while participants performed a challenging auditory discrimination task. We modelled ongoing neural activity, sensory processing, and perceptual performance based on both local cortical desynchronization and pupil-linked arousal. This way we were able to test the interrelations of both measures but also to directly inspect their shared as well as exclusive influence on sensory processing and behaviour. Specifically, the effects of local cortical desynchronization and pupil-linked arousal on perceptual sensitivity as well as response criterion were analysed.

A closed-loop real-time algorithm calculated on-line an information-theoretic proxy of auditory cortical desynchronization (weighted permutation entropy, WPE; Fadlallah et al., 2013; Waschke et al., 2017) based on EEG signal arising predominantly from auditory cortices. Of note, WPE as a proxy of desynchronization is tailored to the analysis of electrophysiological time series: it captures oscillatory as well as non-oscillatory contributions as a time-resolved estimate of desynchronization (see Materials and methods for details). Importantly, EEG entropy calculated for a previously published data set (Sarasso et al., 2015) aptly tracks changes in excitatory and inhibitory (E/I) cortical activity that occur under different anaesthetics (Figure 2—figure supplement 1). Also, EEG entropy as measured in the present data aligns closely with the spectral exponent, a previously suggested measure of E/I (Figure 2—figure supplement 1; Gao et al., 2017; Waschke et al., 2017). Entropy of EEG signals thus is not only sensitive to the basic features of desynchronization (e.g. reduced oscillatory power) but also captures changes in a central underlying mechanism (E/I balance).
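A plausible implementation of weighted permutation entropy, following the general definition of Fadlallah et al. (ordinal patterns weighted by the variance of each embedding vector), looks roughly as follows. The embedding parameters (m = 3, tau = 1) and the test signals are illustrative; the authors' exact parameters and preprocessing are not reproduced here.

```python
import numpy as np
from math import factorial

def weighted_permutation_entropy(x, m=3, tau=1):
    """Weighted permutation entropy, normalized to [0, 1]. Ordinal patterns
    of length m are weighted by the variance of each embedding vector, so
    large-amplitude structure counts more than near-flat segments."""
    x = np.asarray(x, dtype=float)
    counts = {}
    for i in range(len(x) - (m - 1) * tau):
        vec = x[i : i + m * tau : tau]
        pattern = tuple(np.argsort(vec))      # ordinal pattern of the vector
        counts[pattern] = counts.get(pattern, 0.0) + np.var(vec)
    total = sum(counts.values())
    if total == 0:                            # constant signal carries no info
        return 0.0
    p = np.array(list(counts.values())) / total
    return float(-np.sum(p * np.log2(p)) / np.log2(factorial(m)))

rng = np.random.default_rng(0)
noise = rng.normal(size=2000)                      # desynchronized-like signal
wave = np.sin(np.linspace(0, 40 * np.pi, 2000))    # strongly oscillatory signal
print(weighted_permutation_entropy(noise) > weighted_permutation_entropy(wave))
# True: the irregular signal yields higher normalized entropy
```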

We used this measure of ongoing desynchronization to trigger stimulus presentation during relatively synchronized and desynchronized states, respectively. A continuously adapting criterion enabled us to effectively sample the whole desynchronization state space (Jazayeri and Afraz, 2017). Such a closed-loop setup allows for selective stimulation during specific states of brain activity while accounting for changes in the appearance of those states, and hence represents a powerful tool with a multitude of potential applications in research but also therapy (Sitaram et al., 2017; Ezzyat et al., 2018). To evaluate the interrelation of pre-stimulus desynchronization with simultaneously acquired pupil-linked arousal, as well as their influence on stimulus-related activity, we employed linear mixed-effect models. Furthermore, psychophysical models were used to evaluate the impact of desynchronization and arousal on perceptual sensitivity, response criterion, and response speed.
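The idea of a continuously adapting trigger criterion can be sketched as follows. This is emphatically not the authors' algorithm — the quartile thresholds, window length, and state labels are all hypothetical — it only illustrates how thresholds that rank each new estimate against a sliding window of recent estimates would track slow drifts in the state space.

```python
from collections import deque

def make_state_trigger(window=100):
    """Hypothetical adaptive criterion: label each incoming
    desynchronization estimate by where it falls among the last
    `window` estimates, so thresholds follow slow state drifts."""
    history = deque(maxlen=window)

    def check(entropy):
        history.append(entropy)
        ranked = sorted(history)
        lo = ranked[int(0.25 * (len(ranked) - 1))]   # lower quartile
        hi = ranked[int(0.75 * (len(ranked) - 1))]   # upper quartile
        if entropy >= hi:
            return "desynchronized"   # trigger trial in desynchronized state
        if entropy <= lo:
            return "synchronized"     # trigger trial in synchronized state
        return None                   # intermediate state: no stimulus

    return check

trigger = make_state_trigger()
labels = [trigger(v) for v in range(100)]  # steadily rising entropy stream
print(labels[-1])  # desynchronized: 99 sits in the upper quartile
```

Early labels are unstable while the window fills; a real closed-loop system would also need a warm-up period and a minimum inter-stimulus interval.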

Although local cortical desynchronization and pupil-linked arousal were weakly positively correlated, they not only shaped ongoing EEG activity into distinct states but also differentially influenced sensory processing at the level of single trials: on the one hand, phase-locked activity in low frequencies as well as stimulus-related gamma power over auditory cortices was highest following intermediate levels of pre-stimulus desynchronization. On the other hand, low-frequency power during and after a stimulus increased linearly with pre-stimulus arousal. Response criterion and speed exhibited an inverted u-shaped relationship with local cortical desynchronization, where intermediate desynchronization corresponded to minimal response bias and fastest responses. An analogous relationship was found for arousal and sensitivity, revealing highest sensitivity at intermediate arousal levels.

Our results speak to a model in which global arousal states and local desynchronization states jointly influence sensory processing and performance. While fluctuations in arousal are likely realized by afferent cholinergic and noradrenergic projections into sensory cortical areas (Robbins, 1997; Carter et al., 2010), desynchronization states might result from efferent feedback connections (Harris and Thiele, 2011; Zagha et al., 2013).


Ecology of Musical Preference: the Relationship Between Pathogen Prevalence and the Number and Intensity of Metal Bands

Ecological conditions and pathogen stress shape human cognition and behavior and can explain cultural and behavioral patterns cross-culturally. It has been previously shown that human values and preferences are also phenotypically plastic in response to parasitic stress across regions, and that parasite prevalence is associated with the out-group prejudice and nonconformity tolerance. Human preference for music is also variable across the world, and no previous study has considered this variability in the light of behavioral ecology. Research has shown that in the regions in which the parasite stress is higher, there is higher aversion against out-groups, and unusual and deviant behaviors. In the current study we hypothesized that extreme forms of music such as heavy metal, which is associated with antisocial behavior, irreligiosity, and deviation from the norm is less prevalent in the regions with higher prevalence of pathogenic stress. We tested our hypothesis using publicly available data measuring number and intensity of metal bands and parasite prevalence while controlling for importance of religion, human development index, and population size across European countries. Results showed that parasite stress negatively predicts the number of heavy metal bands. However, no relationship was found between the intensity of the music and parasite stress. We discussed our results in terms of association of parasite stress with tolerance to out-group members (metal community and culture) and with the openness to new musical endeavors. Overall, this study extends the role of the ecological and biological variation (parasite stress) in shaping human cognition to musical preference and prevalence.

This is a preview of subscription content, access via your institution.


Results

Figure 1 illustrates the spatial distribution and time course of ERS in the alpha, beta, and gamma bands. Figure 2 displays the electrode-averaged ERS at each interval in the alpha, beta, and gamma bands.

Figure 1. ERS in the (A) alpha, (B) beta, and (C) gamma bands. Each topographic plot represents mean ERS calculated over a 20-s long interval, averaged over all trials within each music condition, and all participants within each group. Electrode sites, represented by the dots, are at standard positions given by the international 10� system.

Figure 2. Group × Music × Interval interaction plots of mean ERS in (A) alpha, (B) beta, (C), and gamma bands. Line plots show mean ERS over each time Interval within each Music condition, per Group.

Alpha

There was a significant Group × Music × Interval interaction [F(4,120) = 8.18, p < 0.001, η p 2 = 0.21] and a main effect of Electrode [F(29,870) = 4.04, p = 0.002, η p 2 = 0.12]. See Figures 1A, 2A. The absence of a main effect of Music [F(1,30) = 0.70, p = 0.41] suggests that differences in intrinsic music features such as tempo, meter, and rhythm were unlikely to have played a role in either group's alpha activity.

Post-hoc comparisons are provided in Table 1, using Bonferroni-corrected t-tests. Table 1A shows comparisons between Groups as they listened to their own music (tango in dancers vs. non-tango in non-dancers) at each interval, using independent-samples t-tests. Table 1B shows comparisons between Music conditions within each Group (for example, dancers listening to tango vs. dancers listening to non-tango) at each interval, using paired-samples t-tests.

Table 1A. Group-wise comparison: Post-hoc tests of significant Group × Music × Interval interactions in each band.

Table 1B. Music-wise comparison: Post-hoc tests of significant Group × Music × Interval interactions in each band.
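As a hedged illustration of the correction scheme described above (one independent-samples t-test per interval, Bonferroni-corrected across intervals), the following sketch uses synthetic ERS values; the group sizes, interval count, and effect sizes are assumptions, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_intervals = 5  # assumed number of time intervals (illustrative)

# Synthetic ERS values (trials x intervals) for each group's "own" music
ers_dancers_tango = rng.normal(0.5, 1.0, size=(40, n_intervals))
ers_nondancers_nontango = rng.normal(0.0, 1.0, size=(40, n_intervals))

corrected_ps = []
for i in range(n_intervals):
    # Independent-samples t-test between groups at this interval
    t, p = stats.ttest_ind(ers_dancers_tango[:, i],
                           ers_nondancers_nontango[:, i])
    # Bonferroni: multiply the raw p by the number of comparisons, cap at 1
    corrected_ps.append(min(p * n_intervals, 1.0))

print([round(p, 3) for p in corrected_ps])
```

The within-group (music-wise) comparisons would use `stats.ttest_rel` on paired trials instead.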

Post-hoc analysis of the main effect of electrode identified that the posterior electrodes O1, Oz, and O2 were most active when compared with each of the other 29 electrodes (p-values < 0.001, 0.011, and 0.046, respectively).

Beta

There were significant interactions of Group × Music [F(1,30) = 6.53, p = 0.016, η p 2 = 0.18], Group × Music × Interval [F(4,120) = 6.66, p = 0.001, η p 2 = 0.18], and Group × Music × Electrode [F(29,870) = 3.83, p = 0.003, η p 2 = 0.11]. See Figures 1B, 2B. The lack of a main effect of Music [F(1,30) = 2.11, p = 0.16] suggests that differences in intrinsic music features such as tempo, meter, and rhythm were unlikely to have played a role in either group's beta activity.

Post-hoc tests for the Group × Music interaction via Bonferroni-corrected paired-samples t-tests indicated higher ERS in the tango condition compared to non-tango for the dancer group [t(2,399) = 18.62, p < 0.001, d = 0.38], and higher ERS in the non-tango condition compared to tango for the non-dancer group [t(2,399) = −6.89, p < 0.001, d = −0.14].

Post-hoc comparisons using Bonferroni-corrected t-tests for the Group × Music × Interval interaction for beta are shown in Table 1. Table 1A shows comparisons between groups as they listened to their own music (tango in dancers vs. non-tango in non-dancers) at each interval, using independent-samples t-tests. Table 1B shows comparisons between music conditions within each group (for example, dancers listening to tango vs. dancers listening to non-tango) at each Interval, using paired-samples t-tests.

Gamma

There were significant interactions of Group × Music [F(1,30) = 27.73, p < 0.001, η p 2 = 0.48], and Group × Music × Electrode [F(29,870) = 3.00, p < 0.001, η p 2 = 0.09]. See Figures 1C, 2C. The lack of a main effect of Music [F(1,30) = 0.85, p = 0.37] suggests that differences in intrinsic music features such as tempo, meter, and rhythm were unlikely to have played a role in either group's gamma activity.

Post-hoc comparisons for the Group × Music interaction indicated higher ERS in the tango condition relative to non-tango in dancers [t(2,399) = 37.81, p < 0.001, d = 0.77], and higher ERS in the non-tango condition relative to tango in non-dancers [t(2,399) = −24.94, p < 0.001, d = −0.51].

Comparisons between each Group's own Music (tango in dancers vs. non-tango in non-dancers) via Bonferroni-corrected independent-samples t-tests showed higher gamma activity in non-dancers when they listened to non-tango, compared to dancers when they listened to tango [t(4615.5) = −2.17, p = 0.03, d = −0.06]. Non-dancers also showed higher gamma activity when they listened to tango, compared to dancers when they listened to non-tango [t(4790.7) = −12.10, p < 0.001, d = −0.35].

Behavioral

Both groups rated their own music higher than the other group's music on the enjoyment scale (1 = no enjoyment, 10 = most enjoyable). See Figure 3A. Mean rating scores for their own music were 9.4 ± 0.2 (dancers) and 8.8 ± 0.2 (non-dancers) and for the other group's music 6.7 ± 0.5 (dancers) and 5.7 ± 0.5 (non-dancers). A Group × Music ANOVA on the ratings yielded a significant Group × Music interaction [F(1,30) = 61.4, p < 0.001, η p 2 = 0.67], signifying higher ratings for their own music in both groups.

Figure 3. (A) Mean enjoyment ratings of music (1 = no enjoyment, 10 = most enjoyable) by music condition in each participant group. Error bars represent standard error of the mean. The tango condition served as the dancers' own music, while the non-tango condition served as the non-dancers' own music. (B) Estimated rating scores for all tango and non-tango music as a linear function of ERS in all participants for the alpha, beta, and gamma bands. Shaded regions indicate confidence intervals at each estimate. A linear mixed regression excluding extreme values (±3 SD from mean ERS) did not alter the findings [Main effect of ERS p-values in original analysis: p = 0.991 (alpha), p = 0.695 (beta), p = 0.011 (gamma); Main effect of ERS p-values in analysis without outliers: p = 0.388 (alpha), p = 0.888 (beta), p = 0.012 (gamma)].

To test the relation of spectral activity to the enjoyment ratings of the music, a linear mixed regression was performed on rating scores for all tango and non-tango music, using ERS and Group (dancer, non-dancer) as fixed-effects factors and participant as a random-effects factor. Overall model fit was assessed in each frequency band by comparing the null (intercept-only) model against the full model (with factors ERS and Group) using an F-test with Satterthwaite approximation for degrees of freedom (Luke, 2017). Figure 3B shows marginal estimates produced by the overall models of enjoyment scores as a function of ERS in each frequency band. Results showed that ratings were significantly predicted by ERS values in both groups only in the gamma band [Main effect of ERS: t(60.0) = 2.63, p = 0.011], and the full model was significantly different from the null model in the gamma band [χ 2 (3) = 12.30, p = 0.006]. ERS values did not predict ratings in the alpha or beta bands (p > 0.623); the full model was not significantly different from the null model in these bands [alpha: χ 2 (3) = 2.32, p = 0.509; beta: χ 2 (3) = 2.72, p = 0.437].
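The null-vs-full model comparison can be sketched on simulated data as below. Note this is not the authors' pipeline: it uses statsmodels and a likelihood-ratio chi-square test rather than the Satterthwaite-approximated F-test, and every number is synthetic:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
n_sub, n_trials = 16, 10
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_sub), n_trials),
    "ers": rng.normal(size=n_sub * n_trials),
})
df["group"] = np.where(df["participant"] % 2 == 0, "dancer", "non_dancer")
# Simulated ratings with an assumed positive ERS effect
df["rating"] = 7 + 0.8 * df["ers"] + rng.normal(size=len(df))

# Fit with ML (reml=False) so the two log-likelihoods are comparable
null = smf.mixedlm("rating ~ 1", df, groups=df["participant"]).fit(reml=False)
full = smf.mixedlm("rating ~ ers + group", df,
                   groups=df["participant"]).fit(reml=False)

lr = 2 * (full.llf - null.llf)  # likelihood-ratio statistic
p = stats.chi2.sf(lr, df=2)     # two added fixed-effect terms
print(f"chi2 = {lr:.2f}, p = {p:.4f}")
```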


Introduction

1.3.4 Beta Waves (β)

Beta waves are fast, with a frequency range of 14 Hz and greater. They are most frequently picked up along both sides of the head in a symmetrical distribution and are most evident frontally. Beta activity is accentuated by sedative-hypnotic drugs, especially the benzodiazepines and the barbiturates, and may be missing over fields of cortical damage. It is mostly seen as a normal brain wave pattern and is the dominant brain wave in patients who are alert or anxious or who have their eyes open. Fig. 1.7 shows a sample beta wave.


The ‘DAWBA bands’ as an ordered-categorical measure of child mental health: description and validation in British and Norwegian samples

Objective

To describe and validate the ‘DAWBA bands’: novel ordered-categorical measures of child mental health, based on the structured sections of the Development and Well-Being Assessment (DAWBA).

Methods

We developed computer algorithms to generate parent, teacher, child and multi-informant DAWBA bands for individual disorders and for groups of disorder (e.g. ‘any emotional disorder’). The top two (out of 6) levels of the DAWBA bands were used as computer-generated DAWBA diagnoses. We validated these DAWBA bands in 7,912 British children (7–19 years) and 1,364 Norwegian children (11–13 years), using clinician-rated DAWBA diagnoses as a gold standard.
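A minimal sketch of the top-two-of-six rule for turning a DAWBA band level into a computer-generated diagnosis; the function name and the explicit threshold are hypothetical conveniences, and the real algorithms are of course much richer:

```python
def dawba_diagnosis(band_level: int) -> bool:
    """Hypothetical encoding: the DAWBA bands are ordered-categorical with
    six levels (0-5); the top two levels count as a computer-generated
    DAWBA diagnosis, per the description above."""
    if not 0 <= band_level <= 5:
        raise ValueError("DAWBA band level must be in 0..5")
    return band_level >= 4  # top two of six levels

print([dawba_diagnosis(b) for b in range(6)])
# → [False, False, False, False, True, True]
```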

Results

In general, the prevalence of clinician-rated diagnosis increased monotonically across all levels of the DAWBA bands, and also showed a dose–response association with service use and risk factors. The prevalence estimates of the computer-generated DAWBA diagnoses were of roughly comparable magnitude to the prevalence estimates from the clinician-generated diagnoses, but the estimates were not always very close. In contrast, the estimated effect sizes, significance levels and substantive conclusions regarding risk factor associations were very similar or identical. The multi-informant and parent DAWBA bands performed especially well in these regards.

Conclusion

Computer-generated DAWBA bands avoid the cost and delay occasioned by clinical rating. They may, therefore, sometimes provide a useful alternative to clinician-rated diagnoses, when studying associations with risk factors, generating rough prevalence estimates or implementing routine mental health screening.


Introduction

Adaptive sensory processing entails the prioritization of task-relevant features with respect to competing information. Top-down modulation of activity in neural ensembles encoding task-relevant or distracting information is crucial in achieving this goal. In particular, regionally specific power changes around the alpha frequency range have been linked to such a putative top-down-mediated gain modulation, with enhanced power reflecting relatively inhibited states (Jensen and Mazaheri, 2010; Klimesch et al., 2007). For the visual modality especially, a vast amount of empirical evidence supports this notion. For example, increased alpha power in parieto-occipital cortical regions contralateral to the unattended hemifield is a very robust finding (e.g. Busch and VanRullen, 2010; Thut et al., 2006). The general inhibitory gating function of localized alpha increases has also been reported with respect to more specific visual features, leading to remarkably spatially circumscribed alpha modulations (Jokisch and Jensen, 2007; Zumer et al., 2014), even at a retinotopic level (Popov et al., 2019). Also for the domain of working memory, alpha increases have been reported during the retention period in the visual (e.g. Jensen et al., 2002; Klimesch et al., 1999), somatosensory (e.g. Haegens et al., 2009) and auditory modalities (e.g. Obleser et al., 2012), putatively protecting the to-be-remembered information against interference. This load-dependent top-down amplification of alpha and its concomitant inhibition account are widely accepted, but circumscribed decreases in alpha to beta power (often labeled as desynchronization) have also been deemed functionally important in the context of working-memory tasks. In a prioritization account, they reflect an enhanced activation of performance-relevant neural ensembles (e.g. Noh et al., 2014; Sauseng et al., 2009).
A recent framework by Hanslmayr et al., 2016 explicitly links the extent of alpha/beta desynchronization to the representational strength of the information content in episodic memory (for supportive evidence see Griffiths et al., 2019). This is in line with a framework by van Ede, 2018 stressing the importance of regionally specific alpha and beta decreases when item-specific information needs to be prioritized in the retention period of working-memory tasks.

Distracting sounds are ever-present in natural listening environments and necessitate flexible exertion of inhibition or prioritization processes. Besides stimulus-feature information, which can influence the precise location of alpha modulations in the visual system (Popov et al., 2019), temporal cues can also be exploited (Rohenkohl and Nobre, 2011; van Ede and Chekroud, 2018): that is, when distracting sound input can be temporally predicted, inhibition or prioritization processes should be regulated in an anticipatory manner in relevant auditory regions. As in other sensory modalities (Frey et al., 2015; Weisz and Obleser, 2014), an increasing amount of evidence points to a functional role of alpha oscillations in listening tasks. Increased alpha oscillations have been observed in putatively visual brain regions when focusing attention on auditory input in cue-target periods (Frey et al., 2014; Fu et al., 2001; Snyder and Foxe, 2010). A similar posterior pattern is also observed in challenging listening situations, for example, with increased cognitive load or when faced with background noise (for reviews see Johnsrude and Rodd, 2016; Rönnberg et al., 2011). However, increases in alpha oscillations as a mechanism for selective inhibition (Strauß et al., 2014) have rarely been shown for auditory cortex, in which feature-specific processing of target and distractor sounds takes place. With regard to alpha desynchronization in auditory cortex, different lines of evidence show an association between (also illusory) sound perception and low auditory cortical alpha power (e.g. Lange et al., 2014; Weisz et al., 2007; Weisz and Obleser, 2014; for invasive recordings illustrating sound-sensitive alpha desynchronization in anterolateral Heschl’s Gyrus, see Billig et al., 2019), suggesting a link to representational content as described above.

The goal of the present study was to test whether power modulations in the alpha/beta range (Hanslmayr et al., 2016; van Ede, 2018) in task-relevant auditory cortical areas prior to a temporally predictable distractor, which was presented in the same (i.e. auditory) modality as the target, would better fit with an inhibition or prioritization account. On a general level, power increases would be predicted by an inhibition account, whereas decreases would be expected according to a prioritization account. Furthermore, both alternative accounts make opposing predictions regarding the relationship between pre-distractor alpha modulations and the strength of memorized information in the retention period (see Figure 1).

Modified auditory Sternberg paradigm and cartoon depiction of analysis rationale.

(A) A sequence of four consonants spoken by a female voice was presented. After the retention period, either a strong (consonant spoken by a male voice) or a weak (scrambled consonant) distractor was presented (at 1 s). Distractor type was kept constant during a block. Subsequently, participants indicated by a button press whether the probe was part of the memory set (‘part’) or not (‘no part’). At an individual level, temporal decoding was performed on whether the probe was part of memory set or not. When the probe was part of the memory set, it should have been seen to share distinct neural patterns with those elicited by the items of the memory set, while this should not have been the case when the stimulus was not part of the memory set. By time-generalizing the classifiers trained on the probe to the period of the retention interval, we obtained a quantitative proxy for the strength of memorized information at the time of distractor presentation. The results were then statistically contrasted between weak and strong distractors across the group. (B) Alpha/Beta power in lSTG was calculated at a single trial level in a pre-distractor period and was used to bin high and low power trials. For a 0.5-s pre-distractor period, analysis analogous to (A) was performed to quantify the relationship between regionally specific alpha/beta power and strength of memorized information. A prioritization account would predict that lower ‘desynchronized’ states go along with relatively increased strength of memorized information. This pattern should be captured when contrasting the bins across the entire group and when taking into account the extent of modulation within single participants. An inhibition account would predict an opposing pattern.

We adapted a Sternberg task variant introduced by Bonnefond and Jensen, 2012 to the auditory modality. These researchers illustrated pronounced alpha and beta increases, as well as phase effects, in parieto-occipital regions prior to the presentation of a more potent but temporally predictable visual distractor in the retention period. Using magnetoencephalography (MEG) and decoding, we first identified regions that were informative as to whether a speech item was part of a memory set or not, and focused subsequent spectral analysis on the left superior temporal gyrus (lSTG). This region, which is crucially involved in phonological short-term memory (Jacquemot and Scott, 2006), expressed marked alpha/beta desynchronization prior to a strong distractor. Importantly, by time-generalizing the aforementioned classifier (King and Dehaene, 2014), we implemented a proxy for the strength of memorized information that could be compared between trials with high or low power. Specifically, we show that lower pre-distractor beta power in lSTG goes along with relatively enhanced memory representation in the same period. For alpha power, however, a negative correlation was observed between the strength of memorized information and the extent to which power was modulated at an individual level. Overall, our study draws a nuanced picture that points to differential alpha and beta processes in the auditory cortex that altogether support the prioritization of relevant information in working memory (van Ede, 2018).
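The time-generalization step (train a classifier at each probe time point, then test it at every other time point, here standing in for the retention interval) can be sketched as follows. This toy version uses random data and plain scikit-learn rather than the authors' MEG pipeline, and it omits the cross-validation a real analysis requires:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials, n_channels, n_times = 60, 32, 20

X = rng.normal(size=(n_trials, n_channels, n_times))  # synthetic epochs
y = rng.integers(0, 2, size=n_trials)  # probe in memory set or not

# Temporal generalization: one classifier per training time point,
# evaluated at every testing time point.
scores = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(X[:, :, t_train], y)
    for t_test in range(n_times):
        # In-sample accuracy here; use cross-validation in practice
        scores[t_train, t_test] = clf.score(X[:, :, t_test], y)

print(scores.shape)  # → (20, 20)
```

In an actual MEG analysis a dedicated tool (e.g. a generalizing-estimator wrapper) would handle epoching, scaling, and cross-validated scoring.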


Introduction

Brain-computer interfaces (BCI) are noninvasive systems that provide a channel of real-time communication and allow control of external devices (e.g. computers) without muscle activity. The input signal for BCIs is physiological data obtained by various neuroimaging methods; the data are transformed into the output response of an effector. Almost 60% of currently tested BCI systems use electroencephalography (EEG) to register brain activity 1 . The most common EEG methods implemented for BCIs are based on sensorimotor rhythm (SMR) activity registered during imagery of movement 1 and allow the design of so-called motor-imagery BCIs (MI-BCI) or sensorimotor rhythm-based BCIs (SMR-BCI). MI-BCIs are experimentally used to control devices such as orthoses 2 , drones 3 , and wheelchairs 4 , as well as communication software 5 .

Sensorimotor rhythms are brain oscillations registered at electrodes placed over the sensorimotor cortex during the preparation, execution, and imagery of a motor act 6,7,8,9,10 . Two phenomena can be observed in SMR: a decrease of power during movement preparation or execution, i.e., event-related desynchronization (ERD) 11 , and an increase of power after completing a movement, i.e., event-related synchronization (ERS) 12 . During actual hand movement or imagery, there is a preponderance of a contralateral ERD/ERS effect at central-parietal electrodes 7 . Duann and Chiou 13 showed that, based on independent component analysis (ICA), it is possible to identify sources related to ERD/ERS activity in the motor cortex. The lateralization of the ERD/ERS effect is used to control SMR-BCIs.
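The ERD/ERS quantities described above are conventionally expressed as a percentage power change relative to a reference (baseline) interval; a minimal sketch, with illustrative numbers:

```python
import numpy as np

def erd_ers_percent(power: np.ndarray, ref: slice, event: slice) -> float:
    """Percent power change in an event window relative to a reference
    window. Negative values indicate desynchronization (ERD), positive
    values synchronization (ERS)."""
    p_ref = power[ref].mean()
    p_event = power[event].mean()
    return 100.0 * (p_event - p_ref) / p_ref

# Illustrative band-power time course: baseline 4, drop to 2, rebound to 6
power = np.array([4.0, 4.0, 4.0, 2.0, 2.0, 6.0, 6.0])
print(erd_ers_percent(power, ref=slice(0, 3), event=slice(3, 5)))  # → -50.0
print(erd_ers_percent(power, ref=slice(0, 3), event=slice(5, 7)))  # → 50.0
```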

SMR can be separated into an alpha band (8–13 Hz) and a beta band (15–30 Hz), which display different functional properties within the sensorimotor system. Oscillations in the alpha band enable functional coupling of remote cortical areas through the selection of task-relevant cortical regions and the inhibition of activity in task-irrelevant regions 14 . Beta oscillations are engaged in the control of muscular activity and in communication between the cortex and periphery, and are related to some cognitive aspects of motor control, such as visual cue anticipation and processing 15 . On the one hand, researchers state that activity in the beta band simply reflects the maintenance of the current sensorimotor parameters and cognitive state 16 , or the activation/deactivation of motor cortical areas 17 . On the other hand, in the course of movement preparation, ERD is regulated by uncertainty about the direction of an upcoming movement: lower confidence about the movement direction is associated with a reduced ERD effect 18 .
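Separating the two SMR bands can be sketched with a standard Butterworth band-pass filter; the sampling rate and the synthetic signal below are assumptions for illustration only:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0  # assumed EEG sampling rate in Hz

def band_power(signal: np.ndarray, low: float, high: float, fs: float) -> float:
    """Band-pass filter the signal and return its mean power in the band."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    return float(np.mean(filtered ** 2))

rng = np.random.default_rng(3)
eeg = rng.normal(size=int(fs * 4))      # 4 s of synthetic broadband data
alpha = band_power(eeg, 8, 13, fs)      # alpha band (8-13 Hz)
beta = band_power(eeg, 15, 30, fs)      # beta band (15-30 Hz)
print(f"alpha power = {alpha:.4f}, beta power = {beta:.4f}")
```

ERD/ERS values are then computed from such band powers, contrasting event and baseline windows.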

The widespread usage of brain-computer interfaces encounters several obstacles. Among the most significant issues, two shall be highlighted: a low efficiency in translating brain activity into information, and large individual differences in the ability to control BCIs effectively. Indeed, the reported phenomenon of BCI illiteracy 6 indicates that some subjects are unable to operate a given type of device at a rate higher than chance. This problem affects 15–30% of research participants across various BCI approaches. A lot of research currently focuses on inter- and intra-subject variation in BCI performance (see the review by Ahn and Jun 19 ). In recent years, efforts to identify individual factors correlated with BCI performance have yielded interesting results. The ability to control an SMR-BCI is moderated by motor experience, particularly the average number of hand-and-arm movements per day, practice in playing musical instruments 20,21 , and frequency of manual activity 22 , and by cognitive skills, e.g. visual-motor integration 23 or mental rotation accuracy 24 . Vuckovic and Osuagwu 25 reported that people with high BCI aptitude prefer kinesthetic rather than visual forms of motor imagery, as measured by a self-report questionnaire. However, this finding has not been confirmed in more recent studies 22 , so estimating BCI performance using subjective methods could be ineffective.

In the study by Marchesotti et al. 26 , participants with a similar temporal profile, obtained in a mental chronometry task, showed both higher BCI abilities and stronger lateralization of sensorimotor rhythms during motor imagery. In fact, SMR lateralization is a major issue in the area of motor cognition research. Stancák and Pfurtscheller 27 showed that hand dominance and handedness influence the lateralization of sensorimotor rhythm desynchronization during motor preparation. Right-handers exhibit stronger lateralization of SMRs preceding right-finger compared to left-finger movements, whereas in left-handed participants a similar contralateral preponderance was found for both sides. In another experiment, McFarland and colleagues 28 reported differences between right- and left-hand movement or imagery in scalp topographies of SMR bands, and individual differences in lateralization of the signal between subjects. Still, there are no data indicating whether this pattern depends on the handedness of the subjects. Bai, Mari, Vorbach and Hallett 29 investigated the patterns of ERD prior to sequential finger movements in a group of right-handers. They observed a contralateral prevalence of ERD only during right-hand finger movements, while ERD during left-hand finger movements was bilateral. The researchers also concluded that, for right-handers, activation of the left hemisphere during non-dominant hand movements is greater than that of the right hemisphere during dominant hand movements. Neuroimaging studies also indicate that activation of motor areas during hand movements differs between right- and left-handed individuals; e.g. during sequential movement, left-handers activate larger volumes and a larger number of brain areas than right-handers, and also show significantly less brain lateralization; however, there are no such differences for simple movement 30 . In other fMRI studies, Pool et al. 31,32 reported weaker asymmetry in motor network effective connectivity in left-handers than right-handers during fist closures and at rest.

To the best of our knowledge, there is no research directly investigating the effects of a subject's handedness on SMR desynchronization in motor imagery tasks. The ERD/ERS pattern for left- and right-hand imagery is well recognized and underlies SMR-BCI functioning, but the majority of research concerns only right-handed participants, as they represent the vast majority of the population 33,34 . Given that SMR-BCIs are based on recognizing patterns of EEG activity during hand-movement imagery, and that existing research demonstrates the dependence of SMR patterns during movement execution on the handedness of the subjects, it seems crucial to investigate the role of handedness in SMR-BCI control. For this purpose, we decided to use ICA decomposition to identify the neural substrates of SMR activity evoked by performance of the motor imagery task. Another essential value of our current study is that we investigate this issue with an on-line BCI control task, while most studies use off-line data only, essentially relying on the classification of previously acquired signals without feedback. Moreover, we applied a complex, goal-oriented experimental procedure in order to eliminate random hits, which could result in lower BCI accuracy. For this reason, the reported results may seem relatively low compared to typical off-line or on-line studies (e.g. those based on the Graz BCI paradigm).

In this study, we hypothesized that handedness would influence both the distribution of sensorimotor rhythms during motor imagery and the accuracy of SMR-BCI control. We predicted that desynchronization of SMR would be more pronounced, and BCI aptitude higher, in right- than in left-handed individuals. Additionally, our goal was to determine whether there would be differences in ERD of SMR in motor imagery tasks depending on an individual's handedness and the estimated cortical location. We investigated the differences between the groups in a pure motor imagery task (off-line session) and a real-time BCI cursor control task (on-line session).

The “Methods” section can be found after the “Results” section.


Background

Anxiety and depression are the most frequently diagnosed mental health disorders, both in the general population and, consequently, in child and adolescent mental health outpatient services (CAMHS) [1,2,3]. In the general population, up to 10% of children and 20% of adolescents will meet the criteria for an anxiety disorder at any point in time [4]. Adolescents are at high risk for the development of depression: the percentage of adolescents with major depressive disorder ranges from 8 to 20% before the age of 18 [5,6,7,8]. Emotional disorders interfere negatively with various aspects of functioning and quality of life [9,10,11,12,13]. The prevalence of both anxiety and depressive disorders increases during adolescence [14, 15]. Comorbidity and co-occurrence of anxiety and depression are high [16], and studies show that anxiety and depression have both shared and separate features and etiology [16]. Hence, combined treatments for emotional disorders could offer effective treatment for these complex disorders.

Cognitive behavioral therapy (CBT) and interpersonal treatment (IPT) are well-established interventions for adolescent depression [17], and numerous studies have also demonstrated that CBT relieves anxiety symptoms in youths [18, 19]. In a comprehensive multilevel meta-analysis [20], integrating the results of 140 studies from the past five decades (1963–2013), youth psychotherapies showed a significant post-treatment effect size (ES) of 0.46. For the separate disorders, the largest ES was reported for anxiety (0.61), while treatments of depression in youths have yielded weaker ES (0.29) [20]. Other systematic reviews and meta-analyses examining the effect of youth CBT, show moderate to large treatment effects on anxiety and depression in youths [21, 22]. The majority of the evidence-based protocols for youths target symptoms of single disorders [23, 24], or symptom domains [25]. So far, the most well-known and well-studied combined treatment for the comorbid features of emotional disorders is the unified protocol for treatment of emotional disorders in adolescents [26]. Although not all studies have found a relationship between treatment outcome and comorbidity [22], some have found that comorbidity predicts poorer response to interventions in youth with both primary anxiety [27,28,29] and primary depression [16]. According to the previously mentioned multilevel meta-analysis [20], treatments of concurrent multiple problems, as opposed to any single targeted problem, showed an effect that was not significantly different from zero at post-treatment or follow-up [20]. Some argue that this could suggest that efforts made to concurrently treat multiple problems have been less effective than focusing more narrowly [30], suggesting new ways to address comorbidity in youths [31,32,33]. 
In an earlier review of trials of 461 youth psychotherapies, spanning 50 years from the 1960s onward, Weisz and colleagues [20] found that the interventions were usually delivered in settings outside regular clinical practice, i.e. in research settings. Across the trials, only 2.1% of all study groups were described as involving clinically referred clients treated by practitioners in regular clinical practice settings [20]. When delivered in regular clinical practice, evidence-based treatments (EBT), compared to treatment as usual (TAU), have modest outcomes (ES, d = 0.29) [29]. Furthermore, in several instances TAU delivered in regular clinical practice outperformed standard EBT, usually delivered as single-disorder interventions. Even studies using exclusively diagnosed samples (d = 0.09) and studies on clinically referred youths (d = 0.17) showed low and non-significant ES values [34]. Despite the importance of quality assurance in routine practice, most CAMHS do not evaluate patients' clinical change systematically [35]. A report from the Child Outcomes Research Consortium (CORC) 2013–2016, covering patients receiving treatment over six months in one of our participating CAMHS, showed improvement in many patients; however, as many as 27% deteriorated [36].

In a CAMHS setting, there are high production requirements for staff, so the treatments need to be short and effective. Clinicians in a managed care setting reportedly emphasize short-term cognitive behavioral strategies [37]. Transdiagnostic treatment focuses on treatment strategies that may be generic across diverse conditions and can be defined as a therapy made available to individuals with a wide range of disorders [38]. Transdiagnostic treatment is characterized by a focus on cognitive, behavioral, and physiological processes that are shared or common across diverse disorders [38]. Although focal EBT are excellent in many ways (see 20), there may be challenges associated with implementation of several disorder-specific CBTs in regular clinical practice, and hence reasons for advocating training in one transdiagnostic CBT intervention that spans over several disorders or symptom clusters. In the framework of regular clinical practice, transdiagnostic CBT could be more applicable, time-saving, realistic to learn and cost-efficient for therapists in terms of training and application, and last but not least, it addresses the comorbid states we encounter in regular practice (e.g. 20,33).

There is a growing body of evidence demonstrating that transdiagnostic treatments can be effective in reducing symptoms of anxiety and depression [39]; furthermore, transdiagnostic CBT has effects similar to those of disorder-specific interventions [40]; and finally, effect sizes for these types of interventions range from medium to large [41]. The Unified Protocol for the Treatment of Emotional Disorders in Adolescents (UP-A) showed a significant effect compared to waiting-list controls on all outcome measures [26]. However, as highlighted in Weisz and colleagues' [20] extensive meta-analysis, the vast majority of the 1160 treatment and control groups included therapy that was not delivered in regular clinical care settings.

To the best of our knowledge, the present study is the first RCT performed with short-term transdiagnostic CBT for adolescents, the SMART protocol, with combined emotional disorders in regular clinical settings in CAMHS.

Objectives

The objective of the present study was to examine the effectiveness of a short-term, transdiagnostic CBT (SMART) in adolescents with clinically significant emotional symptoms referred to community clinics. The effectiveness is investigated both with regard to