
Is fear in response to the sight of predators innate or acquired?





Very young children have little knowledge. They know very little about harm, fear, and so on.

Can children feel fear with such limited knowledge?

Suppose a child is playing in the house and a lion suddenly walks in. Does the child feel fear upon seeing it?

The child does not know that the lion is dangerous at all. Does the child's brain signal that something bad is about to happen merely from the sight of the lion?

If the child does feel fear in a situation like that, does the brain have built-in knowledge about such threats?

Note: The lion does not make any noise. It just approaches the child slowly.



Alright, so my familiarity with this area primarily comes from Vanessa LoBue's work. And what I get from her research is that we don't really know if certain fears are innate or acquired.

LoBue seems to favor a prepared learning model, which is just what it sounds like. Infants are not born fearful of things like spiders, snakes, and heights (Adolph, Kretch, & LoBue, 2014; LoBue et al., 2012), but they are biased (prepared) to learn to fear them. She has supported this theory by showing that children are quicker to detect snakes than other stimuli (e.g., flowers, frogs, caterpillars), independent of snake knowledge/exposure (LoBue & DeLoache, 2008). Infants are also biased to associate undulating snakes, but not other stimuli, with negatively valenced sounds (DeLoache & LoBue, 2009); this wasn't true for static images of snakes, however.

Importantly, these stimuli are (ostensibly) evolutionarily significant (snakes, spiders, but not lions, tigers). However, the evidence for their threatening presence in human evolution, at least for spiders, is lacking (LoBue & Rakison, 2013). This has been a major criticism of the prepared learning model: which stimuli are 'prepared'?

LoBue suggests that the bias toward snakes might be mediated by low-level perceptual characteristics that pop out (i.e., curvilinear features) (LoBue, 2014). However, it's not clear if this visual saliency leads to actual fear learning of snakes (LoBue & Rakison, 2013). Moreover, this suggests that any predominantly curvilinear stimulus might bias attention.

Aside from her work, there are loads of mixed findings in this area that don't point to a clear answer. For example, some studies report an attention bias for animals in general, whether threatening or non-threatening (e.g., Lipp, 2006). Plus there are several outstanding methodological problems in much of the past research.

Frankly, there just needs to be a lot more research to figure this out, although I would argue that the "non-associative" model (the idea that infants don't need to learn, in some way, to fear snakes, spiders, etc.) is out of fashion and not well supported. For the most recent reviews, see LoBue & Rakison (2013) and LoBue (2013).

References

Adolph, K. E., Kretch, K. S., & LoBue, V. (2014). Fear of heights in infants? Current Directions in Psychological Science, 23(1), 60-66. doi: 10.1177/0963721413498895 PMCID: PMC4175923 PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4175923/pdf/nihms574402.pdf

DeLoache, J. S., & LoBue, V. (2009). The narrow fellow in the grass: Human infants associate snakes and fear. Developmental Science, 12(1), 201-207. doi: 10.1111/j.1467-7687.2008.00753.x

Lipp, O. V. (2006). Of snakes and flowers: Does preferential detection of pictures of fear-relevant animals in visual search reflect on fear-relevance? Emotion, 6(2), 296-308. doi: 10.1037/1528-3542.6.2.296

LoBue, V., & DeLoache, J. S. (2008). Detecting the snake in the grass: Attention to fear-relevant stimuli by adults and young children. Psychological Science, 19(3), 284-289. doi: 10.1111/j.1467-9280.2008.02081.x

LoBue, V. (2013). What are we so afraid of? How early attention shapes our most common fears. Child Development Perspectives, 7(1), 38-42. doi: 10.1111/cdep.12012

LoBue, V., Bloom Pickard, M., Sherman, K., Axford, C., & DeLoache, J. S. (2013). Young children's interest in live animals. British Journal of Developmental Psychology, 31(1), 57-69. doi: 10.1111/j.2044-835X.2012.02078.x

LoBue, V., & Rakison, D. H. (2013). What we fear most: A developmental advantage for threat-relevant stimuli. Developmental Review, 33(4), 285-303. doi: 10.1016/j.dr.2013.07.005

LoBue, V. (2014). Deconstructing the snake: The relative roles of perception, cognition, and emotion on threat detection. Emotion, 14(4), 701. doi: 10.1037/a0035898


Nature and Nurture as an Enduring Tension in the History of Psychology

Nature–nurture is a dichotomous way of thinking about the origins of human (and animal) behavior and development, where “nature” refers to native, inborn, causal factors that function independently of, or prior to, the experiences (“nurture”) of the organism. In psychology during the 19th century, nature–nurture debates were voiced in the language of instinct versus learning. In the first decades of the 20th century, it was widely assumed that humans and animals entered the world with a fixed set of inborn instincts. But in the 1920s and again in the 1950s, the validity of instinct as a scientific construct was challenged on conceptual and empirical grounds. As a result, most psychologists abandoned the term instinct, but they did not abandon the validity of distinguishing between nature and nurture. In place of instinct, many psychologists made a semantic shift to terms like innate knowledge, biological maturation, and/or hereditary/genetic effects on development, all of which extend well into the 21st century. Still, for some psychologists, the earlier critiques of the instinct concept remain just as relevant to these more modern usages.

The tension in nature–nurture debates is commonly eased by claiming that explanations of behavior must involve reference to both nature-based and nurture-based causes. However, for some psychologists there is growing pressure to see the nature–nurture dichotomy as oversimplifying the development of behavior patterns. The division is seen as both arbitrary and counterproductive. Rather than treat nature and nurture as separable causal factors operating on development, they treat nature–nurture as a distinction between product (nature) and process (nurture). Thus there has been a longstanding tension about how to define, separate, and balance the effects of nature and nurture.


Nature and Nurture in Development

The oldest and most persistent way to frame explanations of the behavioral and mental development of individuals is to distinguish between two separate sources of developmental causation: (a) intrinsic, preformed, or predetermined causes (“nature”) versus (b) extrinsic, experiential, or environmental causes (“nurture”). Inputs from these two sources are thought to add their own contribution to development (see Figure 1).

Figure 1. The traditional view of nature and nurture as separate causes of development. In the traditional view, nature and nurture are treated as independent causal influences that combine during development to generate outcomes. Note that, during development, the effects of nature and nurture (shown in horizontal crossing lines) remain independent so that their effects on outcomes are theoretically separable.

Because some traits seem to derive more from one source than the other, much of the tension associated with the nature–nurture division deals with disagreements about how to balance the roles of nature and nurture in the development of a trait.
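This separability assumption is not formalized in the article itself, but one common way to express it, borrowed from quantitative genetics and offered here only as an illustrative sketch, is an additive decomposition of trait variance into independent nature and nurture components:

\[
V_P = V_G + V_E, \qquad h^2 = \frac{V_G}{V_P}
\]

Here \(V_P\) is the total phenotypic variance of a trait, \(V_G\) the portion attributed to “nature” (genetic differences), \(V_E\) the portion attributed to “nurture” (environmental differences), and \(h^2\) the share assigned to nature. The criticisms discussed later in this article target precisely the assumption that such a clean, additive partition is meaningful.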

Evidence of Nature in Development

Evidence to support the nature–nurture division usually derives from patterns of behavior that suggest a limited role of environmental causation, thus implying some effect of nature by default. Table 1 depicts some common descriptors and conditions used to infer that some preference, knowledge, or skill is nature based.

Table 1. Common Descriptors and Associated Conditions for Inferring the Effects of Nature on Development

Displayed in the absence of relevant experience

Preparedness for learning

Rapidly or easily learned

Difficult or impossible to learn

Found in all like members of a species

Difficult to modify following its appearance

Emerges in an orderly sequence or at a specific time

Runs in families or with degrees of kinship

It is important to reiterate that nature-based causation (e.g., genetic determination) is inferred from these observations. Such inferences can generate tension because each of the observations listed here can be explained by nurture-based (environmental) factors. Confusion can also arise when evidence of one descriptor (e.g., being hereditary) is erroneously used to justify a different usage (e.g., that the trait is unlearned).

The Origins of Nature Versus Nurture

For much of recorded history, the distinction between nature and nurture was a temporal divide between what a person is innately endowed with at birth, prior to experience (nature), and what happens thereafter (nurture). It was not until the 19th century that the temporal division was transformed into a material division of causal influences (Keller, 2010). New views about heredity and Darwinian evolution justified distinguishing native traits and genetic causes from acquired traits and environmental causes. More so than before, the terms nature and nurture were often juxtaposed in an opposition famously described by Sir Francis Galton (1869) as that between “nature versus nurture.”

Galton began writing about heredity in the mid-1860s. He believed we would discover laws governing the transmission of mental as well as physical qualities. Galton’s take on mental heredity, however, was forged by his desire to improve the human race through a science he would later call “eugenics.” In the mid-19th century, British liberals assumed humans were equivalent at birth. Their social reform efforts were geared to enhancing educational opportunities and improving living conditions. Galton, a political conservative, opposed the notion of natural equality, arguing instead that people were inherently different at birth (Cowan, 2016), and that these inherited mental and behavioral inequalities were transmitted through lineages like physical qualities. Because Galton opposed the widely held Lamarckian idea that the qualities acquired in one’s lifetime could modify the inherited potential of subsequent generations, he believed long-lasting improvement of the human stock would only come by controlling breeding practices.

To explain the biological mechanisms of inheritance, Galton joined a growing trend in the 1870s to understand inheritance as involving the transmission of (hypothetical) determinative, germinal substances across generations. Foreshadowing a view that would later become scientific orthodoxy, Galton believed these germinal substances to be uninfluenced by the experiences of the organism. His theory of inheritance, however, was speculative. Realizing he was not equipped to fully explicate his theory of biological inheritance, Galton abandoned this line of inquiry by the end of that decade and refocused his efforts on identifying statistical laws of heredity of individual differences (Renwick, 2011).

Historians generally agree that Galton was the first to treat nature (as heredity) and nurture (everything else) as separate causal forces (Keller, 2010), but the schism gained biological legitimacy through the work of the German cytologist August Weismann in the 1880s. Whereas Galton’s theory was motivated by his political agenda, Weismann was motivated by a scientific, theoretical agenda. Namely, Weismann opposed Lamarckian inheritance and promoted a view of evolution driven almost entirely by natural selection.

Drawing upon contemporary cytological and embryological research, Weismann made the case that the determinative substances found in the germ cells of plants and animals (called the “germ-plasm”) that are transmitted across generations were physically sequestered very early in embryogenesis and remained buffered from the other cells of the body (“somato-plasm”). This so-called Weismann barrier meant that alterations in the soma that develop in the lifetime of the organism through the use or disuse of body parts would not affect the germinal substances transmitted during reproduction (see Winther, 2001, for review). On this view, Lamarckian-style inheritance of acquired characteristics was not biologically possible.

Galton’s and Weismann’s influence on the life sciences cannot be overstated. Their work convinced many to draw unusually sharp distinctions between the inherited (nature) and the acquired (nurture). Although their theories were met with much resistance and generated significant tension in the life sciences from cytology to psychology, their efforts helped stage a new epistemic space through which to appreciate Mendel’s soon-to-be-rediscovered breeding studies and usher in genetics (Müller-Wille & Rheinberger, 2012).

Ever since, psychology has teetered between nature-biased and nurture-biased positions. With the rise of genetics, the wedge between nature and nurture was driven deeper in the early to mid-20th century, creating fields of study that focused exclusively on the effects of either nature or nurture.

The “Middle Ground” Perspective on Nature–Nurture

Twenty-first-century psychology textbooks often state that the nature–nurture debates have been resolved, and the tension relaxed, because we have moved on from emphasizing nature or nurture to appreciating that development necessarily involves both nature and nurture. In this middle-ground position, one asks how nature and nurture interact. For example, how do biological (or genetic) predispositions for behaviors or innate knowledge bias early learning experiences? Or how might environmental factors influence the biologically determined (maturational) unfolding of bodily form and behaviors?

Rejection of the Nature–Nurture Divide

For some, the “middle-ground” resolution is as problematic as “either/or” views and does not resolve a deeper source of tension inherent in the dichotomy. On this view, the nature–nurture divide is neither a legitimate nor a constructive way of thinking about development. Instead, developmental analysis reveals that the terms commonly associated with nature (e.g., innate, genetic, hereditary, or instinctual) and nurture (environmental or learned) are so entwined and confounded (and often arbitrary) that their independent effects cannot be meaningfully discussed. The nature–nurture division oversimplifies developmental processes, takes too much for granted, and ultimately hinders scientific progress. Thus not only is there a lingering tension about how to balance the effects of nature and nurture in the middle-ground view, but there is also a growing push to move beyond the dichotomous nature–nurture framework.

Nativism in Behavior: Instincts

Definitions of instinct can vary tremendously, but many contrast (a) instinct with reason (or intellect, thought, will), which is related to but separable from contrasting (b) instinct with learning (or experience or habit).

Instinct in the Age of Enlightenment

Early usages of the instinct concept, following Aristotle, treated instinct as a mental, estimative faculty (vis aestimativa or aestimativa naturalis) in humans and animals that allowed for the judgments of objects in the world (e.g., seeing a predator) to be deemed beneficial or harmful in a way that transcends immediate sensory experience but does not involve the use of reason (Diamond, 1971). In many of the early usages, the “natural instinct” of animals even included subrational forms of learning.

The modern usage of instincts as unlearned behaviors took shape in the 17th century. By that point it was widely believed that nature or God had implanted in animals and humans innate behaviors and predispositions (“instincts”) to promote the survival of the individual and the propagation of the species. Disagreements arose as to whether instincts derived from innate mental images or were mindlessly and mechanically (physiologically) generated from innately specified bodily organization (Richards, 1987).

Anti-Instinct Movement in the Age of Enlightenment

Challenges to the instinct concept can be found in the 16th century (see Diamond, 1971), but they were most fully developed by empiricist philosophers of the French Sensationalist tradition in the 18th century (Richards, 1987). Sensationalists asserted that animals behaved rationally and all of the so-called instincts displayed by animals could be seen as intelligently acquired habits.

For Sensationalists, instincts, as traditionally understood, did not exist. Species-specificity in behavior patterns could be explained by commonalities in physiological organization, needs, and environmental conditions. Even those instinctual behaviors seen at birth (e.g., that newly hatched chicks peck and eat grain) might eventually be explained by the animal’s prenatal experiences. Erasmus Darwin (1731–1802), for example, speculated that the movements and swallowing experiences in ovo could account for the pecking and eating of grain by young chicks. The anti-instinct sentiment was clearly expressed by the Sensationalist Jean Antoine Guer (1713–1764), who warned that instinct was an “infantile idea” that could only be held by those who are ignorant of philosophy, that traditional appeals to instincts in animals not only explained nothing but served to hinder scientific explanations, and that nothing could be more superficial than to explain behavior by appealing to so-called instincts (Richards, 1987).

The traditional instinct concept survived. For most people, the complex, adaptive, species-specific behaviors displayed by naïve animals (e.g., caterpillars building cocoons, infants suckling) appeared to be predetermined and unlearned. Arguably as important, however, was the resistance to the theological implications of Sensationalist philosophy.

One of the strongest reactions to Sensationalism was put forward in Germany by Hermann Samuel Reimarus (1694–1768). As a natural theologian, Reimarus sought evidence of God in the natural world, and the species-specific, complex, and adaptive instincts of animals seemed to stand as the best evidence of God’s work. More so than any other scholar, Reimarus extensively catalogued instincts in humans and animals. Rather than treat instincts as behaviors, he defined instincts as natural impulses (inner drives) to act that were expressed perfectly, without reflection or practice, and served adaptive goals (Richards, 1987). He even proposed instincts for learning, a proposal that would resurface in the mid-20th century, as would his drive theory of instinct (Jaynes & Woodward, 1974).

Partly as a result of Reimarus’ efforts, the instinct concept survived into the 19th century. But many issues surrounding the instinct concept were left unsettled. How do instincts differ from reflexive behaviors? What role does learning play in the expression of instincts, if any? Do humans have more or fewer instincts than animals? These questions would persist well into the first decades of the 20th century and ultimately fuel another anti-instinct movement.

Instinct in the 19th Century

In the 19th century, the tension about the nature and nurture of instincts in the lifetime of animals led to debates about the nature and nurture of instincts across generations. These debates dealt with whether instincts should be viewed as “inherited habits” from previous generations or whether they result from natural selection. Debating the relative roles of neo-Lamarckian use-inheritance versus neo-Darwinian natural selection in the transmutation of species became a significant source of tension in the latter half of the 19th century. Although the neo-Lamarckian notion of instincts as inherited habits was rejected in the 20th century, it has resurged in recent years (e.g., see Robinson & Barron, 2017).

Darwinian evolutionary theory required drawing distinctions between native and acquired behaviors, and, perhaps more so than before, behaviors were categorized along a continuum from the purely instinctive (unlearned), to the partially instinctive (requiring some learning), to the purely learned. Still, it was widely assumed that a purely instinctive response would be modified by experience after its first occurrence. As a result, instinct and habit were very much entangled in the lifetime of the organism. The notion of instincts as fixed and unmodifiable would not be widely advanced until after the rise of Weismann’s germ-plasm theory in the late 19th century.

Given their importance in evolutionary theory, there was greater interest in more objectively identifying pure instincts beyond anecdotal reports. Some of the most compelling evidence was reported by Douglas Spalding (1844–1877) in the early 1870s (see Gray, 1967). Spalding documented numerous instances of how naïve animals showed coordinated, seemingly adaptive responses (e.g., hiding) to objects (e.g., the sight of predators) upon their first encounter, and he helped pioneer the use of the deprivation experiment to identify instinctive behaviors. This technique involved selectively depriving young animals of seemingly critical learning experiences or sensory stimulation. Should animals display some species-typical action following deprivation, then, presumably, the behavior could be labeled as unlearned or innate. In all, these studies seemed to show that animals displayed numerous adaptive responses at the very start, prior to any relevant experience. In a variety of ways, Spalding’s work anticipated 20th-century studies of innate behavior. Not only would the deprivation experiment be used as the primary means of detecting native tendencies by European zoologists and ethologists, but Spalding also provided evidence of what would later be called imprinting, critical period effects, and behavioral maturation.

Reports of pure instinct did not go unchallenged. Lloyd Morgan (1896) questioned the accuracy of these reports in his own experimental work with young animals. In some cases, he failed to replicate the results, and in other cases he found that instinctive behaviors were not as finely tuned to objects in the environment as had been claimed. Morgan’s research pointed to the need for greater precision in identifying learned and instinctive components of behavior, but, like most at the turn of the 20th century, he did not question that animal behavior involved both learned and instinctive elements.

A focus on instinctive behaviors intensified in the 1890s as Weismann’s germ-plasm theory grew in popularity. More so than before, a sharp distinction was drawn between native and acquired characteristics, including behavior (Johnston, 1995). Although some psychologists continued to maintain neo-Lamarckian notions, most German (Burnham, 1972) and American (Cravens & Burnham, 1971) psychologists were quick to adopt Weismann’s theory. They envisioned a new natural science of psychology that would experimentally identify the germinally determined, invariable set of native psychological traits in species and their underlying physiological (neural) basis. However, whereas English-speaking psychologists tended to focus on how this view impacted our understanding of social institutions and its social implications, German psychologists were more interested in the longstanding philosophical implications of Weismann’s doctrine as it related to the differences (if any) between man and beast (Burnham, 1972).

Some anthropologists and sociologists, however, interpreted Weismann’s theory quite differently and used it to elevate sociology as its own scientific discipline. In the 1890s, the French sociologist Émile Durkheim, for example, interpreted Weismann’s germinal determinants as a generic force on human behavior that influenced the development of general predispositions that are molded by the circumstances of life (Meloni, 2016). American anthropologists reached similar conclusions in the early 20th century (Cravens & Burnham, 1971). Because Weismann’s theory divorced biological inheritance from social inheritance, and because heredity was treated as a generic force, sociologists felt free to study social (eventually, “cultural”) phenomena without reference to biological or psychological concerns.

Anti-Instinct Movement in the 1920s

Despite their differences, in the first two decades of the 20th century both psychologists and sociologists generally assumed that humans and animals had some native tendencies or instincts. Concerns were even voiced that instinct had not received enough attention in psychology. Disagreements about instincts continued to focus on (the now centuries old debates of) how to conceptualize them. Were they complex reflexes, impulses, or motives to act, or should instinct be a mental faculty (like intuition), separate from reasoning and reflex (Herrnstein, 1972)?

In America, the instinct concept came under fire following a brief paper in 1919 by Knight Dunlap titled “Are There Any Instincts?” His primary concern dealt with teleological definitions of instincts in which an instinct referred to all the activities involved in obtaining some end-state (e.g., instincts of crying, playing, feeding, reproduction, war, curiosity, or pugnacity). Defined in this way, human instincts were simply labels for human activities, but how these activities were defined was arbitrarily imposed by the researchers. Is feeding, for instance, an instinct, or is it composed of more basic instincts (like chewing and swallowing)? The arbitrariness of classifying human behavior had led to tremendous inconsistencies and confusion among psychologists.

Not all of the challenges to instinct dealt with its teleological usage. Some of the strongest criticisms were voiced by Zing-Yang Kuo throughout the 1920s. Kuo was a Chinese animal psychologist who studied under Edward C. Tolman at the University of California, Berkeley. Although Kuo’s attacks on instinct changed throughout the 1920s (see Honeycutt, 2011), he ultimately argued that all behaviors develop in experience-dependent ways and that appeals to instinct were statements of ignorance about how behaviors develop. Like Dunlap, he warned that instincts were labels with no explanatory value. To illustrate, after returning to China, he showed that the so-called rodent-killing instinct in cats often cited by instinct theorists is not found in kittens reared with rodents (Kuo, 1930). These kittens, instead, became attached to the rodents and resisted attempts to train rodent-killing. Echoing the point made by Guer, Kuo claimed that appeals to instinct served to stunt scientific inquiry into the developmental origins of behavior.

But Kuo did not just challenge the instinct concept. He also argued against labeling behaviors as “learned.” After all, whether an animal “learns” depends on the surrounding environmental conditions, the physiological and developmental status of the animal, and, especially, the developmental (or experiential) history of that animal. Understanding learning also required developmental analysis. Thus Kuo targeted the basic distinction between nature and nurture, and he was not alone in doing so (e.g., see Carmichael, 1925), but his call to reject it did not spread to mainstream American psychologists.

By the 1930s, the term instinct had fallen into disrepute in psychology, but experimental psychologists (including behaviorists) remained committed to a separation of native from acquired traits. If anything, the dividing line between native and acquired behaviors became more sharply drawn than before (Logan & Johnston, 2007). For some psychologists, instinct was simply rebranded in the less contentious (but still problematic) language of biological drives or motives (Herrnstein, 1972). Many other psychologists simply turned to describing native traits as due to “maturation” and/or “heredity” rather than “instinct.”

Fixed Action Patterns

The hereditarian instinct concept received a reboot in Europe in the 1930s with the rise of ethology, led by Konrad Lorenz, Niko Tinbergen, and others. Just as animals inherit organs that perform specific functions, ethologists believed animals inherit behaviors that evolved to serve adaptive functions as well. Instincts were described as unlearned (inherited), blind, stereotyped, adaptive fixed action patterns, impervious to change, that are initiated (released) by specific stimuli in the environment.

Ethologists in the 1930s and 1940s were united under the banner of innateness. They were increasingly critical of the trend by American psychologists (i.e., behaviorists) to focus on studying how a limited number of domesticated species (e.g., the white rat) responded to training in artificial settings (Burkhardt, 2005). Ethologists instead began with rich descriptions of animal behavior in more natural environments along with detailed analyses of the stimulus conditions that released the fixed action patterns. To test whether behavioral components were innate, ethologists relied primarily on the deprivation experiment popularized by Spalding in the 19th century. Using these methods (and others), ethologists identified numerous fascinating examples of instinctive behaviors, which captured mainstream attention.

In the early 1950s, shortly after ethology had gained professional status (Burkhardt, 2005), a series of challenges regarding instinct and innateness were put forth by a small cadre of North American behavioral scientists (e.g., T. C. Schneirla, Donald Hebb, Frank Beach). Arguably the most influential critique was voiced by comparative psychologist Daniel Lehrman (1953), who presented a detailed and damning critique of deprivation experiments on empirical and logical grounds. Lehrman explained that deprivation experiments isolate the animal from some but not all experiences. Thus deprivation experiments simply change what an animal experiences rather than eliminating experience altogether, and so they cannot possibly determine whether a behavior is innate (independent of experience). Instead, these experiments show what environmental conditions do not matter in the development of a behavior but do not speak to what conditions do matter.

Lehrman went on to argue that the whole endeavor to identify instinctive or innate behavior was misguided from the start. All behavior, according to Lehrman, develops from a history of interactions between an organism and its environment. If a behavior is found to develop in the absence of certain experiences, the researcher should not stop and label it as innate. Rather, research should continue to identify the conditions under which the behavior comes about. In line with Kuo, Lehrman repeated the warning that to label something as instinctive (or inherited or maturational) is a statement of ignorance about how that behavior develops and does more to stunt than promote research.

Lehrman’s critique created significant turmoil among ethologists. As a result, ethologists took greater care in using the term innate, and it led to new attempts to synthesize or re-envision learning and instinct.

Some of these attempts focused on an increased role for learning and experience in the ontogeny of species-typical behaviors. These efforts spawned significant cross-talk between ethologists and comparative psychologists to more thoroughly investigate behavioral development under natural conditions. Traditional appeals to instinct and learning (as classical and operant conditioning) were both found to be inadequate for explaining animal behavior. In their stead, these researchers focused more closely on how anatomical, physiological, experiential, and environmental conditions influenced the development of species-typical behaviors.

Tinbergen (1963) was among those ethologists who urged for greater developmental analysis of species-typical behaviors, and he included it as one of his four problems in the biological study of organisms, along with causation (mechanism), survival value (function), and evolution. Of these four problems, Tinbergen believed ethologists were especially well suited to study survival value, which he felt had been seriously neglected (Burkhardt, 2005).

The questions of survival value coupled with models of population genetics would gain significant momentum in the 1960s and 1970s in England and the United States with the rise of behavioral ecology and sociobiology (Griffiths, 2008). But because these new fields seemed to promote some kind of genetic determinism in behavioral development, they were met with much resistance and reignited a new round of nature–nurture debates in the 1970s (see Segerstrale, 2000).

However, not all ethologists abandoned the instinct concept. Lorenz, in particular, continued to defend the division between nature and nurture. Rather than speaking of native and acquired behaviors, Lorenz later spoke of two different sources of information for behavior (innate/genetic vs. acquired/environmental), which was more a subtle shift in language than it was an actual change in theory, as Lehrman later pointed out.

Some ethologists followed Lorenz’s lead and continued to maintain more of a traditional delineation between instinct and learning. Their alternative synthesis viewed learning as instinctive (Gould & Marler, 1987). They proposed that animals have evolved domain-specific “instincts to learn” that result from the species’ genetic predispositions and innate knowledge. To support the idea of instincts for learning, ethologists pointed to traditional ethological findings (on imprinting and birdsong learning), but they also drew from the growing body of work in experimental psychology that seemed to indicate certain types of biological effects on learning.

Biological Constraints and Preparedness

While ethology was spreading in Europe in the 1930s–1950s, behaviorism reigned in the United States. Just as ethologists were confronted with including a greater role of nurture in their studies, behaviorists were challenged to consider a greater role of nature.

Behaviorists assumed there to be some behavioral innateness (e.g., fixed action patterns, unconditioned reflexes, primary reinforcers and drives). But because behaviorists focused on learning, they tended to study animals in laboratory settings using biologically (or ecologically) irrelevant stimuli and responses to minimize any role of instinct (Johnston, 1981). It was widely assumed that these studies would identify general laws of learning that applied to all species regardless of the specific cues, reinforcers, and responses involved.

Challenges to the generality assumption began to accumulate in the 1960s. Some studies pointed to failures that occurred during conditioning procedures. Breland and Breland (1961), for example, reported that some complex behaviors formed through operant conditioning would eventually become “displaced” by conditioned fixed action patterns, a phenomenon they called “instinctive drift.” Studies of taste-aversion learning (e.g., Garcia & Koelling, 1966) also reported the failure of rats to associate certain combinations of events (e.g., flavors with shock, or audiovisual stimuli with toxicosis).

Other studies pointed to enhanced learning. In particular, it was found that rats could form strong conditioned taste aversions after only a single pairing between a novel flavor and illness (“one-trial learning”). Animals, it seemed, had evolved innate predispositions to form (or not form) certain associations.

In humans, studies of biological constraints on learning were mostly limited to fear conditioning. Evidence indicated that humans conditioned differently to (biologically or evolutionarily) fear-relevant stimuli like pictures of spiders or snakes than to fear-irrelevant stimuli like pictures of mushrooms or flowers (Öhman, Fredrikson, Hugdahl, & Rimmö, 1976).

These findings and others were treated as a major problem for learning theory and led to calls for a new framework to study learning from a more biologically oriented perspective, one that integrated the evolutionary history and innate predispositions of the species. These predispositions were described as biological “constraints” on learning, “preparedness” for learning, or “adaptive specializations” for learning, all of which were consistent with the “instincts to learn” framework proposed by ethologists.
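As a rough illustration only (the cue names and parameter values below are assumptions, not taken from the studies cited above), preparedness is sometimes formalized as differential associability in a Rescorla–Wagner-style learning model: a fear-relevant cue is given a higher salience, so it reaches a strong conditioned response in fewer pairings.

# Minimal sketch (illustrative assumptions, not the authors' model):
# Rescorla-Wagner update V <- V + alpha * beta * (lambda - V), with
# "preparedness" expressed as a higher cue salience (alpha).

def conditioning_curve(alpha, beta=0.5, lam=1.0, trials=10):
    """Return associative strength V after each CS-US pairing."""
    v, history = 0.0, []
    for _ in range(trials):
        v += alpha * beta * (lam - v)  # prediction-error-driven increment
        history.append(round(v, 3))
    return history

# Hypothetical salience values: a "prepared" fear-relevant cue (e.g., a snake
# picture) versus a fear-irrelevant cue (e.g., a flower picture).
print("prepared cue:  ", conditioning_curve(alpha=0.6))
print("unprepared cue:", conditioning_curve(alpha=0.1))

On this sketch, both cues eventually condition, but the prepared cue does so far more rapidly, which is the qualitative pattern the preparedness view was introduced to capture.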

By the 1980s it was becoming clear that the biological preparedness/constraint view of learning suffered some limitations. For example, which constraints count as “biological” was questioned. It was well established that there were general constraints on learning associated with the intensity, novelty, and timing of stimuli. But, arbitrarily it seemed, these constraints were not classified as “biological” (Domjan & Galef, 1983). Other studies of “biological constraints” found that 5- and 10-day-old rats readily learned to associate a flavor with shock (unlike adults), but (like adults) such conditioning was not found in 15-day-old rats (Hoffman & Spear, 1988). In other words, the constraint on learning was not present in young rats but developed later in life, suggesting a possible role of experience in bringing about the adult-like pattern.

Attempts to synthesize these alternatives led to numerous calls for more ecologically oriented approaches to learning, not unlike the synthesis between ethology and comparative psychology in the 1960s. All ecological approaches to learning proposed that learning should be studied in the context of the “natural” (recurrent and species-typical) problems that animals encounter (and have evolved to encounter), using ecologically meaningful stimuli and responses. Some argued (e.g., Johnston, 1981) that studies of learning should take place within the larger context of studying how animals develop and adapt to their surroundings. Others (Domjan & Galef, 1983) pointed to more of a comparative approach to animal learning, in line with behavioral ecology, that takes into account how learning can be influenced by the possible selective pressures faced by each species. Still, how to synthesize biological constraints on learning (and evolutionary explanations of it) with a general-process approach remains a source of tension in experimental psychology.

Nativism in Mind: Innate Ideas

Nativism and Empiricism in Philosophy

In the philosophy of mind, nature–nurture debates are voiced as debates between nativists and empiricists. Nativism is a philosophical position that holds that our minds have some innate (a priori to experience) knowledge, concepts, or structure at the very start of life. Empiricism, in contrast, holds that all knowledge derives from our experiences in the world.

However, rarely (if ever) were there pure nativist or empiricist positions, but the positions bespeak a persistent tension. Empiricists tended to eschew innateness and promote a view of mental content as built by general mechanisms (e.g., association) operating on sensory experiences, whereas nativists tended to promote a view of mind that contains domain-specific, innate processes and/or content (Simpson, Carruthers, Laurence, & Stich, 2005). Although the tension about mental innateness would loosen as empiricism gained prominence in philosophy and science, the strain never went away and would intensify again in the 20th century.

Nativism in 20th Century Psychology: The Case of Language Development

In the first half of the 20th century, psychologists generally assumed that knowledge was gained or constructed through experience with the world. This is not to say that psychologists did not assume some innate knowledge. The Swiss psychologist Jean Piaget, for example, believed infants enter the world with some innate knowledge structures, particularly as they relate to early sensory and motor functioning (see Piaget, 1971). But the bulk of his work dealt with the construction of conceptual knowledge as children adapt to their worlds. By and large, there were no research programs in psychology that sought to identify innate factors in human knowledge and cognition until the 1950s (Samet & Zaitchick, 2017).

An interest in psychological nativism was instigated in large part by Noam Chomsky’s (1959) critique of B. F. Skinner’s book on language. To explain the complexity of language, he argued, we must view language as the knowledge and application of grammatical rules. He went on to claim that the acquisition of these rules could not be attributed to any general-purpose, learning process (e.g., reinforcement). Indeed, language acquisition occurs despite very little explicit instruction. Moreover, language is special in terms of its complexity, ease, and speed of acquisition by children and in its uniqueness to humans. Instead, he claimed that our minds innately contain some language-specific knowledge that kick-starts and promotes language acquisition. He later claimed this knowledge can be considered some sort of specialized mental faculty or module he called the “language acquisition device” (Chomsky, 1965) or what Pinker (1995) later called the “language instinct.”

To support the idea of linguistic nativism, Chomsky and others appealed to the poverty of the stimulus argument. In short, this argument holds that our experiences in life are insufficient to explain our knowledge and abilities. When applied to language acquisition, the argument holds that children’s knowledge of language (grammar) goes far beyond the limited, and sometimes broken, linguistic events that children directly encounter. Additional evidence for nativism drew upon the apparent maturational quality of language development. Despite wide variations in languages and child-rearing practices across the world, the major milestones in language development appear to unfold in children in a universal sequence and timeline, and some evidence suggested a critical period for language acquisition.

Nativist claims about language sparked intense rebuttals by empiricist-minded psychologists and philosophers. Some of these retorts tackled the logical limitations of the poverty of stimulus argument. Others pointed to the importance of learning and social interaction in driving language development, and still others showed that language (grammatical knowledge) may not be uniquely human (see Tomasello, 1995, for review). Nativists, in due course, provided their own rebuttals to these challenges, creating a persistent tension in psychology.

Extending Nativism Beyond Language Development

In the decades that followed, nativist arguments expanded beyond language to include cognitive domains that dealt with understanding the physical, psychological, and social worlds. Developmental psychologists were finding that infants appeared to be much more knowledgeable in cognitive tasks (e.g., on understanding object permanence) and skillful (e.g., in imitating others) than had previously been thought, and at much younger ages. Infants also showed a variety of perceptual biases (e.g., preference for face-like stimuli over equally complex non-face-like stimuli) from very early on. Following the standard poverty of the stimulus argument, these findings were taken as evidence that infants enter the world with some sort of primitive, innate, representational knowledge (or domain-specific neural mechanisms) that constrains and promotes subsequent cognitive development. The nature of this knowledge (e.g., as theories or as core knowledge), however, continues to be debated (Spelke & Kinzler, 2007).

Empiricist-minded developmental psychologists responded by demonstrating shortcomings in the research used to support nativist claims. For example, in studies of infants’ object knowledge, the behavior of infants (looking time) in nativist studies could be attributed to relatively simple perceptual processes rather than to the infants’ conceptual knowledge (Heyes, 2014). Likewise, reports of human neonatal imitation not only suffered from failures to replicate but could be explained by simpler mechanisms (e.g., arousal) than true imitation (Jones, 2017). Finally, perceptual preferences found in young infants, like newborn preferences for face-like stimuli, may not be specific preferences for faces per se but instead may reflect simpler, nonspecific perceptual biases (e.g., preferences for top-heavy visual configurations and congruency; Simion & Di Giorgio, 2015).

Other arguments from empiricist-minded developmental psychologists focused on the larger rationale for inferring innateness. Even if it is conceded that young infants, like two-month-olds, or even two-day-olds, display signs of conceptual knowledge, there is no good evidence to presume the knowledge is innate. Their knowledgeable behaviors could still be seen as resulting from their experiences (many of which may be nonobvious to researchers) leading up to the age of testing (Spencer et al., 2009).

In the 21st century, there is still no consensus about the reality, extensiveness, or quality of mental innateness. If there is innate knowledge, can experience add new knowledge or only expand the initial knowledge? Can the doctrine of innate knowledge be falsified? There are no agreed-upon answers to these questions. The recurring arguments for and against mental nativism continue to confound developmental psychologists.

Maturation Theory

The emergence of bodily changes and basic behavioral skills sometimes occurs in an invariant, predictable, and orderly sequence in a species despite wide variations in rearing conditions. These observations are often attributed to the operation of an inferred, internally driven, maturational process. Indeed, 21st-century textbooks in psychology commonly associate “nature” with “maturation,” where maturation is defined as the predetermined unfolding of the individual from a biological or genetic blueprint. Environmental factors play a necessary, but fundamentally supportive, role in the unfolding of form.

Preformationism Versus Epigenesis in the Generation of Form

The embryological generation of bodily form was debated in antiquity but received renewed interest in the 17th century. Following Aristotle, some claimed that embryological development involved “epigenesis,” defined as the successive emergence of form from a formless state. Epigenesists, however, struggled to explain what orchestrated development without appealing to Aristotelean souls. Attempts were made to invoke natural causes like physical and chemical forces, but, despite their best efforts, the epigenesists were forced to appeal to the power of presumed, quasi-mystical, vitalistic forces (entelechies) that directed development.

The primary alternative to epigenesis was “preformationism,” which held that development involved the growth of pre-existing form from a tiny miniature (homunculus) that formed immediately after conception or was preformed in the egg or sperm. Although it seems reasonable to guess that the invention and widespread use of the microscope would immediately lay to rest any claim of homuncular preformationism, this was not the case. To the contrary, some early microscopists claimed to see signs of miniature organisms in sperm or eggs, and failures to find these miniatures were explained away (e.g., the homunculus was transparent or deflated to the point of being unrecognizable). But as microscopes improved and more detailed observations of embryological development were reported in the late 18th and 19th centuries, homuncular preformationism was finally refuted.

From Preformationism to Predeterminism

Despite the rejection of homuncular preformationism, preformationist appeals can be found throughout the 19th century . One of the most popular preformationist theories of embryological development was put forth by Ernst Haeckel in the 1860s (Gottlieb, 1992). He promoted a recapitulation theory (not original to Haeckel) that maintained that the development of the individual embryo passes through all the ancestral forms of its species. Ontogeny was thought to be a rapid, condensed replay of phylogeny. Indeed, for Haeckel, phylogenesis was the mechanical cause of ontogenesis. The phylogenetic evolution of the species created the maturational unfolding of embryonic form. Exactly how this unfolding takes place was less important than its phylogenetic basis.

Most embryologists were not impressed with recapitulation theory. After all, the great embryologist Karl Ernst von Baer (1792–1876) had refuted strict recapitulation decades earlier. Instead, there was greater interest in how best to explain the mechanical causes of development, ushering in a new “experimental embryology.” Many experimental embryologists followed the earlier epigenesists by discussing vitalistic forces operating on the unorganized zygote. But it soon became clear that the zygote was structured, and many came to believe the zygote contained special (unknown) substances that specified development. Epigenesis-minded experimental embryologists soon warned that the old homuncular preformationism was being transformed into a new predetermined preformationism.

As a result, the debates between preformationism and epigenesis were reignited in experimental embryology, but the focus of these debates shifted to the various roles of nature and nurture during development. More specifically, research focused on the extent to which early cellular differentiation was predetermined by factors internal to cells like chromosomes or cytoplasm (preformationism, nature) or involved factors (e.g., location) outside of the cell (epigenesis, nurture). The former emphasized reductionism and developmental programming, whereas the latter emphasized some sort of holistic, regulatory system responsive to internal and external conditions. The tension between viewing development as predetermined or “epigenetic” persists into the 21st century.

Preformationism gained momentum in the 20th century following the rediscovery of Mendel’s studies of heredity and the rapid rise of genetics, but not because of embryological research on the causes of early differentiation. Instead, preformationism prevailed because it seemed embryological research on the mechanisms of development could be ignored in studies of hereditary patterns.

The initial split between heredity and development can be found in Galton’s speculations but is usually attributed to Weismann’s germ-plasm theory. Weismann’s barrier seemed to posit that the germinal determinants present at conception would be the same, unaltered determinants transmitted during reproduction. This position, later dubbed “Weismannism,” was ironically not one promoted by Weismann. Like nearly all theorists in the 19th century, he viewed the origins of variation and heredity as developmental phenomena (Amundson, 2005), and he claimed that the germ-plasm could be directly modified in the lifetime of the organism by environmental (e.g., climatic and dietary) conditions (Winther, 2001). Still, Weismann’s theory treated development as a largely predetermined affair driven by inherited, germinal determinants buffered from most developmental events. As such, it helped set the stage for a more formal divorce between heredity and development with the rise of Mendelism in the early 20th century.

Mendel’s theory of heredity was exceptional in how it split development from heredity (Amundson, 2005). More so than in Weismann’s theory, Mendel’s theory assumed that the internal factors that determine form and are transmitted across generations remain unaltered in the lifetime of the organism. To predict offspring outcomes, one need only know the combination of internal factors present at conception and their dominance relations. Exactly how these internal factors determined form could be disregarded. The laws of hereditary transmission of the internal factors (e.g., segregation) did not depend on the development or experiences of the organism or the experiences of the organism’s ancestors. Thus the experimental study of heredity (i.e., breeding) could proceed without reference to ancestral records or embryological concerns (Amundson, 2000). By the mid-1920s, the Mendelian factors (now commonly called “genes”) were found to be structurally arranged on chromosomes, and the empirical study of heredity (transmission genetics) was officially divorced from studies of development.

The splitting of heredity and development found in Mendel’s and Weismann’s work met with much resistance. Neo-Lamarckian scientists, especially in the United States (Cook, 1999) and France (Loison, 2011), sought unsuccessfully to experimentally demonstrate the inheritance of acquired characteristics into the 1930s.

In Germany during the 1920s and 1930s, resistance to Mendelism dealt with the chromosomal view of Mendelian heredity championed by American geneticists who were narrowly focused on studying transmission genetics at the expense of developmental genetics. German biologists, in contrast, were much more interested in the broader roles of genes in development (and evolution). In trying to understand how genes influence development, particularly of traits of interest to embryologists, they found the Mendelian theory to be lacking. In the decades between the world wars, German biologists proposed various expanded views of heredity that included some form of cytoplasmic inheritance (Harwood, 1985).

Embryologists resisted the preformationist view of development throughout the early to mid-20th century, often maintaining no divide between heredity and development, but their objections were overshadowed by genetics and its eventual synthesis with evolutionary theory. Consequently, embryological development was treated by geneticists and evolutionary biologists as a predetermined, maturational process driven by internal, “genetic” factors buffered from environmental influence.

Maturation Theory in Psychology

Maturation theory was applied to behavioral development in the 19th century through Haeckel’s recapitulation theory. Some psychologists believed that the mental growth of children recapitulated the history of the human race (from savage brute to civilized human). With this in mind, many people began to more carefully document child development. Recapitulationist notions were found in the ideas of many notable psychologists in the 19th and early 20th centuries (e.g., G. S. Hall), and, as such, the concept played an important role in the origins of developmental psychology (Koops, 2015). But for present purposes what is most important is that children’s mental and behavioral development was thought to unfold via a predetermined, maturational process.

With the growth of genetics, maturational explanations were increasingly invoked to explain nearly all native and hereditary traits. As the instinct concept lost value in the 1920s, maturation theory gained currency, although the shift was largely a matter of semantics. For many psychologists, the language simply shifted from “instinct versus learning” to “maturation versus practice/experience” (Witty & Lehman, 1933).

Initial lines of evidence for maturational explanations of behavior were often the same as those that justified instinct and native traits, but new embryological research presented in the mid-1920s converged to show support for strict maturational explanations of behavioral development. In these experiments (see Wyman, 2005, for review), spanning multiple laboratories, amphibians (salamanders and frogs) were exposed to drugs that acted as anesthetics and/or paralytics throughout the early stages of development, thus reducing sensory experience and/or motor practice. Despite the reduced sensory experiences and being unable to move, these animals showed no delays in the onset of motor development once the drugs wore off.

This maturational account of motor development in amphibians fit well with contemporaneous studies of motor development in humans. The orderly, invariant, and predictable (age-related) sequential appearance of motor skills documented in infants reared under different circumstances (in different countries and across different decades) was seen as strong evidence for a maturational account. Additional evidence was reported by Arnold Gesell and Myrtle McGraw, who independently presented evidence in the 1920s to show that the pace and sequence of motor development in infancy were not altered by special training experiences. Although the theories of these maturation theorists were more sophisticated when applied to cognitive development, their work promoted a view in which development was primarily driven by neural maturation rather than experience (Thelen, 2000).

Critical and Sensitive Periods

As the maturation account of behavioral development gained ground, it became clear that environmental input played a more informative role than had previously been thought. Environmental factors were found to either disrupt or induce maturational changes at specific times during development. Embryological research suggested that there were well-delineated time periods of heightened sensitivity in which specific experimental manipulations (e.g., tissue transplantations) could induce irreversible developmental changes, but the same manipulation would have no effect outside of that critical period.

In the 1950s–1960s a flurry of critical period effects were reported in birds and mammals across a range of behaviors including imprinting, attachment, socialization, sensory development, bird song learning, and language development (Michel & Tyler, 2005). Even though these findings highlighted an important role of experience in behavioral development, evidence of critical periods was usually taken to imply some rigid form of biological determinism (Oyama, 1979).

As additional studies were conducted on critical period effects, it became clear that many of the reported effects were more gradual, variable, and experience-dependent, and not as irreversible as previously assumed. In light of these reports, there was a push in the 1970s (e.g., Connolly, 1972) to substitute “sensitive period” for “critical period” to avoid the predeterminist connotations associated with the latter and to better appreciate that these periods simply describe (not explain) certain temporal aspects of behavioral development. As a result, a consensus emerged that behaviors should not be attributed to “time” or “age” but to the developmental history and status of the animal under investigation (Michel & Tyler, 2005).

Heredity and Genetics

In the decades leading up to and following the start of the 20th century, it was widely assumed that many psychological traits (not just instincts) were inherited or “due to heredity,” although the underlying mechanisms were unknown. Differences in intelligence, personality, and criminality within and between races and sexes were largely assumed to be hereditary and unalterable by environmental intervention (Gould, 1996). The evidence to support these views in humans was often derived from statistical analyses of how various traits tended to run in families. But all too frequently, explanations of data were clouded by pre-existing, hereditarian assumptions.

Human Behavioral Genetics

The statistical study of inherited human (physical, mental, and behavioral) differences was pioneered by Galton (1869). Although at times Galton wrote that nature and nurture were so intertwined as to be inseparable, he nevertheless devised statistical methods to separate their effects. In the 1860s and 1870s, Galton published reports purporting to show how similarities in intellect (genius, talent, character, and eminence) in European lineages appeared to be a function of degree of relatedness. Galton considered, but dismissed, environmental explanations of his data, leading him to confirm his belief that nature was stronger than nurture.

Galton also introduced the use of twin studies to tease apart the relative impact of nature versus nurture, but the twin method he used was markedly different from later twin studies used by behavioral geneticists. Galton tracked the life history of twins who were judged to be very similar or very dissimilar near birth (i.e., by nature) to test the power of various postnatal environments (nurture) that might make them more or less similar over time. Here again, Galton concluded that nature overpowers nurture.

Similar pedigree (e.g., the Kallikak study; see Zenderland, 2001) and twin studies appeared in the early 1900s, but the first adoption study and the modern twin method (which compares monozygotic to dizygotic twin pairs) did not appear until the 1920s (Rende, Plomin, & Vandenberg, 1990). These reports led to a flurry of additional work on the inheritance of mental and behavioral traits over the next decade.

Behavioral genetic research peaked in the 1930s but rapidly lost prominence due in large part to its association with the eugenics movement (spearheaded by Galton) but also because of the rise and eventual hegemony of behaviorism and the social sciences in the United States. Behavioral genetics resurged in the 1960s with the rising tide of nativism in psychology, and returned to its 1930s-level prominence in the 1970s (McGue & Gottesman, 2015).

The resurgence brought with it a new statistical tool: the heritability statistic. The origins of heritability trace back to early attempts by Ronald Fisher and others to synthesize Mendelian genetics with biometrics. This synthesis ushered in the new field of quantitative genetics, and it marked a new way of thinking about nature and nurture. The shift was to no longer think about nature and nurture as causes of traits in individuals but as causes of variation in traits between populations of individuals. Eventually, heritability came to refer to the amount of variance in a population sample that could be statistically attributed to genetic variation in that sample. Kinship (especially twin) studies provided seemingly straightforward ways of partitioning variation in population trait attributes into genetic versus environmental sources.
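
To make the variance-partitioning logic concrete, here is a minimal sketch (in Python, and not taken from any study cited here) of Falconer’s classic twin formula, which estimates heritability from the correlations of monozygotic (MZ) and dizygotic (DZ) twin pairs. The correlations used are hypothetical, and the formula builds in exactly the additivity and independence assumptions that the critiques below take aim at.

```python
# Minimal illustration of twin-based variance partitioning (Falconer's formula).
# Assumes purely additive genetic effects and "equal environments" for MZ and DZ
# pairs; the correlations are hypothetical, chosen for illustration only.

def falconer_components(r_mz: float, r_dz: float) -> dict:
    a = 2 * (r_mz - r_dz)   # "A": additive genetic share of variance (heritability, h^2)
    c = r_mz - a            # "C": shared (family) environment share
    e = 1.0 - r_mz          # "E": nonshared environment share (plus measurement error)
    return {"h2 (A)": round(a, 2), "C": round(c, 2), "E": round(e, 2)}

print(falconer_components(r_mz=0.75, r_dz=0.45))
# -> {'h2 (A)': 0.6, 'C': 0.15, 'E': 0.25}
```

A figure computed this way describes variation in a particular sample, not the makeup of any individual, and it is only as sound as the additive, independent-effects assumptions behind it, which is precisely what the challenges discussed below target.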

Into the early 21st century, hundreds of behavioral genetic studies of personality, intelligence, and psychopathology were reported. With rare exceptions, these studies converged in arguing for a pervasive influence of genetics on human psychological variation.

These studies have also fueled much controversy. Citing in part behavioral genetic research, the educational psychologist Arthur Jensen (1969) claimed that the differences in intelligence and educational achievement in the United States between black and white students appeared to have a strong genetic basis. He went on to assume that because these racial differences appeared hereditary, they were likely impervious to environmental (educational) intervention. His article fanned the embers of past eugenics practices and ignited fiery responses (e.g., Hirsch, 1975). The ensuing debates not only spawned a rethinking of intelligence and how to measure it, but they ushered in a more critical look at the methods and assumptions of behavioral genetics.

Challenges to Behavioral Genetics

Many of the early critiques of behavioral genetics centered on interpreting the heritability statistic commonly calculated in kinship (family, twin, and adoption) studies. Perhaps more so than any other statistic, heritability has been persistently misinterpreted by academics and laypersons alike (Lerner, 2002). Contrary to popular belief, heritability tells us nothing about the relative impact of genetic and environmental factors on the development of traits in individuals. It deals with accounting for trait variation between people, not the causes of traits within people. As a result, a high heritability does not indicate anything about the fixity of traits or their imperviousness to environmental influence (contra Jensen), and a low heritability does not indicate an absence of genetic influence on trait development. Worse still, heritability does not even indicate anything about the role of genetics in generating the differences between people.

Other challenges to heritability focused not on its interpretation but on its underlying computational assumptions. Most notably, heritability analyses assume that genetic and environmental contributions to trait differences are independent and additive. Interactions between genetic and environmental factors were dismissed a priori in these analyses. Studies of development, however, show that no factor (genes, hormones, parenting, schooling) operates independently, making it impossible to quantify how much of a given trait in a person is due to any causal factor. Thus heritability analyses are bound to be misleading because they are based on biologically implausible and logically indefensible assumptions about development (Gottlieb, 2003).

Aside from heritability, kinship studies have been criticized for not being able to disentangle genetic and environmental effects on variation. It had long been known that in family (pedigree) studies, environmental and genetic factors are confounded. Twin and adoption studies seemed to provide unique opportunities to statistically disentangle these effects, but these studies are also deeply problematic in their assumptions and methodology. There are numerous plausible environmental reasons why monozygotic twin pairs could resemble each other more than dizygotic twin pairs or why adoptive children might more closely resemble their biological than their adoptive parents (Joseph & Ratner, 2013).

A more recent challenge to behavioral genetics came from an unlikely source. Advances in genomic scanning in the 21st century made it possible in a single study to correlate thousands of genetic polymorphisms with variation in the psychological profiles (e.g., intelligence, memory, temperament, psychopathology) of thousands of people. These “genome-wide association” studies seemed to have the power and precision to finally identify genetic contributions to heritability at the level of single nucleotides. Yet, these studies consistently found only very small effects.

The failure to find large effects came to be known as the “missing heritability” problem (Maher, 2008). To account for the missing heritability, some behavioral geneticists and molecular biologists asserted that important genetic polymorphisms remain unknown, that they may be too rare to detect, and/or that current studies are simply not equipped to handle gene–gene interactions. These studies were also insensitive to epigenetic profiles (see the section on Behavioral Epigenetics), which deal with differences in gene expression. Even when people share genes, they may differ in whether those genes get expressed in their lifetimes.

But genome-wide association studies faced an even more problematic issue: Many of these studies failed to replicate (Lickliter & Honeycutt, 2015). For those who viewed heritability analyses as biologically implausible, the small effect sizes and failures to replicate in genome-wide association studies were not that surprising. The search for independent genetic effects was bound to fail, because genes simply do not operate independently during development.

Behavioral Epigenetics

Epigenetics was a term coined in the 1940s by the developmental biologist Conrad Waddington to refer to a new field of study that would examine how genetic factors interact with local environmental conditions to bring about the embryological development of traits. By the end of the 20th century, epigenetics came to refer to the study of how nongenetic, molecular mechanisms physically regulate gene expression patterns in cells and across cell lineages. The most-studied mechanisms involve organic compounds (e.g., methyl groups) that physically bind to DNA or the surrounding proteins that package DNA. The addition or removal of these compounds can activate or silence gene transcription. Different cell types have different, stable epigenetic markings, and these markings are recreated during cell division so that marked cells give rise to similar types of cells. Epigenetic changes were known to occur during developmental periods of cellular differentiation (e.g., during embryogenesis), but not until 2004 was it discovered that these changes can also occur at other periods of life, including after birth (Roth, 2013).

Of interest to psychologists were reports that different behavioral and physiological profiles (e.g., stress reactivity) of animals were associated with different epigenetic patterns in the nervous system (Moore, 2015). Furthermore, these different epigenetic patterns could be established or modified by environmental factors (e.g., caregiving practices, training regimes, or environmental enrichment), and, under certain conditions, they remain stable over long periods of time (from infancy to adulthood).

Because epigenetic research investigates the physical interface between genes and environment, it represents an exciting advance in understanding the interaction of nature and nurture. Despite some warnings that the excitement over behavioral epigenetic research may be premature (e.g., Miller, 2010), for many psychologists, epigenetics underscores how development involves both nature and nurture.

For others, what is equally exciting is the additional evidence epigenetics provides to show that the genome is an interactive and regulated system. Once viewed as the static director of development, buffered from environmental influence, the genome is better described as a developing resource of the cell (Moore, 2015). More broadly, epigenetics also points to how development is not a genetically (or biologically) predetermined affair. Instead, epigenetics provides additional evidence that development is a probabilistic process, contingent upon factors internal and external to the organism. In this sense, epigenetics is well positioned to help dissolve the nature–nurture dichotomy.

Beyond Nature–Nurture

In the final decades of the 20th century, a position was articulated to move beyond the dichotomous nature–nurture framework. The middle-ground position on nature–nurture did not seem up to the task of explaining the origins of form, and it brought about more confusion than clarity. The historical back-and-forth between nature- and nurture-based positions had only gone in circles. Moving forward would require moving beyond such dichotomous thinking (Johnston, 1987).

The anti-dichotomy position, referred to as the Developmentalist tradition, was expressed in a variety of systems-based, metatheoretical approaches to studying development, all of which extended the arguments against nature–nurture expressed earlier by Kuo and Lehrman. The central problem with all nativist claims according to Developmentalists is a reliance on preformationism (or predeterminism).

The problem with preformationism, they argue, besides issues of evidence, is that it is an anti-developmental mindset. It presumes the existence of the very thing(s) one wishes to explain and, consequently, discourages developmental analyses. To claim that some knowledge is innate effectively shuts down research on the developmental origins of that knowledge. After all, why look for the origins of conceptual knowledge if that knowledge is there all along? Or why search for any experiential contributions to innate behaviors if those behaviors by definition develop independently of experience? In the words of Developmentalists Thelen and Adolph (1992), nativism “leads to a static science, with no principles for understanding change or for confronting the ultimate challenge of development, the source of new forms in structure and function” (p. 378).

A commitment to maturational theory is likely one of the reasons why studies of motor development remained relatively dormant for decades following their heyday in the 1930s–1940s (Thelen, 2000). A commitment to maturational theory likewise helps explain neuroscience’s delay in examining how the brain physically changes in response to environmental conditions, a line of inquiry that only began in the 1960s.

In addition to the theoretical pitfalls of nativism, Developmentalists point to numerous studies that show how some seemingly native behaviors and innate constraints on learning are driven by the experiences of animals. For example, the comparative psychologist Gilbert Gottlieb (1971) showed that newly hatched ducklings display a naïve preference for a duck maternal call over a (similarly novel) chicken maternal call, even when duck embryos were repeatedly exposed to the chicken call prior to hatching (Gottlieb, 1991). It would be easy to conclude that ducklings have an innate preference to approach their own species’ call and that they are biologically constrained (contraprepared) in learning a chicken call. However, Gottlieb found that the naïve preference for the duck call stemmed from exposure to the duck embryos’ own (or other) vocalizations in the days before hatching (Gottlieb, 1971). Exposure to these vocalizations not only made duck maternal calls more attractive, but it hindered the establishment of a preference for heterospecific calls. When duck embryos were reared in the absence of the embryonic vocalizations (by devocalizing embryos in ovo) and exposed instead to chicken maternal calls, the newly hatched ducklings preferred chicken over duck calls (Gottlieb, 1991). These studies clearly showed how seemingly innate, biologically based preferences and constraints on learning derived from prenatal sensory experiences.

For Developmentalists, findings like these suggest that nativist explanations of any given behavior are statements of ignorance about how that behavior actually develops. As Kuo and Lehrman made clear, nativist terms are labels, not explanations. Although such appeals are couched in respectable, scientific language (e.g., “X is due to maturation, genes, or heredity”), they argue it would be more accurate simply to say that “We don’t know what causes X” or that “X is not due to A, B, or C.” Indeed, for Developmentalists, the more we unpack the complex dynamics about how traits develop, the less likely we are to use labels like nature or nurture (Blumberg, 2005).

On the other hand, Developmentalists recognize that labeling a behavior as “learned” also falls short as an explanatory construct. The empiricist position that knowledge or behavior is learned does not adequately take into account that what is learned and how easily something is learned depend on (a) the physiological and developmental status of the person, (b) the nature of the surrounding physical and social context in which learning takes place, and (c) the experiential history of the person. The empiricist tendency to say “X is learned or acquired through experience” can also short-circuit developmental analyses in the same way as nativist claims.

Still, Developmentalists appreciate that classifying behaviors can be useful. For example, the development of some behaviors may be more robust, reliably emerging across a range of environments and/or remaining relatively resistant to change, whereas others are more context-specific and malleable. Some preferences for stimuli require direct experience with those stimuli. Other preferences require less obvious (indirect) types of experiences. Likewise, it can still be useful to describe some behaviors in the ways shown in Table 1. Developmentalists simply urge psychologists to resist the temptation to treat these behavioral classifications as implying different kinds of explanations (Johnston, 1987).

Rather than treat nature and nurture as separate developmental sources of causation (see Figure 1), Developmentalists argue that a more productive way of thinking about nature–nurture is to reframe the division as that between product and process (Lickliter & Honeycutt, 2015). The phenotype or structure (one’s genetic, epigenetic, anatomical, physiological, behavioral, and mental profile) of an individual at any given time can be considered one’s “nature.” “Nurture” then refers to the set of processes that generate, maintain, and transform one’s nature (Figure 2). These processes involve the dynamic interplay between phenotypes and environments.

Figure 2. The developmentalist alternative view of nature–nurture as product–process. Developmentalists view nature and nurture not as separate sources of causation in development (see Figure 1) but as a distinction between process (nurture) and product (nature).

Conclusion

It is hard to imagine any set of findings that will end debates about the roles of nature and nurture in human development. Why? First, more so than other assumptions about human development, the nature–nurture dichotomy is deeply entrenched in popular culture and the life sciences. Second, throughout history, the differing positions on nature and nurture were often driven by other ideological, philosophical, and sociopolitical commitments. Thus the essential source of tension in debates about nature–nurture is not as much about research agendas or evidence as about basic differences in metatheoretical positions (epistemological and ontological assumptions) about human behavior and development (Overton, 2006).


Factors in avoidance behaviour

Warning calls and visual signals that are unique to different species of birds and mammals effectively and specifically evoke avoidance patterns. In some cases, learning clearly emerges as a factor; thus, members of a colony of birds seem to learn to respond to the alarm calls of all species present in the colony. Among ducklings, a visual model to evoke fleeing and hiding can be fashioned as a cardboard cutout. When moved overhead in one direction, the model resembles a short-necked, long-tailed hawk, and the ducklings flee from it; when moved in the other direction, the model looks like a harmless, long-necked goose, and the ducklings tend to stay calm. The model is effective, however, in eliciting the two kinds of behaviour only when the ducklings are accustomed to geese flying over but not hawks.

Innate factors also contribute to such responses (see instinct). Domestic chicks, for example, show crouching and freezing in response to the long alarm call of their species. Many of the perching birds (passerines) will gather to mob when stimulated by the sight of an owl. The eyes in the characteristic owl face have been found to be especially important; even birds reared in isolation respond to man-made models with appropriate eyespots painted on. It has been suggested that many human beings are specifically (and perhaps instinctively) disturbed by the sight of snakes, the notion of a legless object perhaps being a key stimulus. Human responses to spiders and centipedes with conspicuous legs also may be intense. In the reaction to snakes at least, notwithstanding Freudian explanations that they symbolize male sex organs, the behaviour of people may be compared with owl mobbing among passerine birds.

Specific chemical signals can induce avoidance behaviour; some are released by minnows and tadpoles when their skin is damaged (usually indicating to fellows that there is danger). These chemicals appear to be specific for each species of fish and are highly effective in producing fleeing (see chemoreception). Many ants produce volatile alarm substances (terpenes) that are attractants to other ants at low concentrations and, in high concentrations near their source, produce rapid locomotion, defense postures, and, sometimes, fleeing. Some invertebrate avoidance responses are reflexes evoked by very specific stimuli; rapid swimming by cockles clapping their shells, for example, is elicited by starfish extract. Shell jerking is produced in a freshwater snail (Physa) by contact with a leech, another specific response to a major predator.


Tracing the scent of fear: Study identifies neurons, brain region involved in rodent stress response

The odor of bobcat urine, if you ever get a chance to take a whiff, is unforgettable—like rotten meat combined with sweat, with something indescribably feral underlying it. To humans, it's just nose-wrinklingly disgusting.

But to mice, it smells like one thing: fear.

Rodents respond instinctually to this trace of their natural predator. Even those mice raised in the lab, which have never been exposed to bobcats—or cats of any sort—respond to it.

For mice, this instinctual reaction can be lifesaving. The fear response triggers a surge of stress hormones which sends the mice into hyper-preparedness, helping them to respond and flee quickly from hungry predators. Although humans and mice have different stress triggers, this response is reminiscent of our physiological responses to fear and stress.

Now, a study has identified nerve cells and a region of the brain behind this innate fear response. With a new technique that uses specially-engineered viruses to uncover the nerve pathway involved, a research team led by Fred Hutchinson Cancer Research Center biologist and Nobel Prize winner Dr. Linda Buck has pinpointed a tiny area of the mouse brain responsible for this scent-induced reaction.

It's known as the "amygdalo-piriform transition area," or AmPir for short. The researchers were surprised to find that the fear response was so concentrated in this one small region of the olfactory cortex, a part of the brain responsible for perceiving odors.

Although humans do not show innate fear to predator odors, studying how mice respond to predator cues can help us learn about our own innate emotions and responses, Buck said. On a general level, the rodent stress response looks a lot like our own.

"Understanding the neural circuitry underlying fear and stress of various sorts is very important, not just to understand the basic biology and functions of the brain, but also for potentially finding evolutionarily conserved neural circuits and genes that play an important role in humans," said Buck.

Buck and her team describe their findings in a paper published online March 21 in the journal Nature.


Babies’ Reactions To Spiders, Snakes Show Humans Have Evolved To Fear Them, Study Finds

LEIPZIG, Germany — The infant’s eyes grow wide at the sight of the eight-legged creature. She’s never been exposed to spiders before, but something inside her signals to pay attention.

Demonstrating this instinctual reaction, a new study out of the Max Planck Institute for Human Cognitive and Brain Sciences and Uppsala University shows that even six-month-old babies’ pupils dilate when seeing snakes or spiders. This response, researchers say, adds to the argument that fear of such creatures is facilitated by instinct, rather than just learned.

“When we showed pictures of a snake or a spider to the babies instead of a flower or a fish of the same size and colour, they reacted with significantly bigger pupils,” says lead investigator and neuroscientist Stefanie Hoehl in a press release. “In constant light conditions this change in size of the pupils is an important signal for the activation of the noradrenergic system in the brain, which is responsible for stress reactions. Accordingly, even the youngest babies seem to be stressed by these groups of animals.”

While this increased attention to the animals means that babies quickly learn to fear them, other studies suggest it is not the fear that is innate, rather it’s the increased arousal and attention to them that is instinctual. Indeed, some of the previous research the authors cite shows that younger babies’ pupils actually dilate more in response to happy faces than fearful ones.

In such instances, the authors refer to such dilation as an “arousal” rather than a “stress” response.

Other experiments have likewise found that infants are faster at detecting snakes, but not necessarily inherently afraid of them. In an earlier experiment, researcher Vanessa LoBue of Rutgers University helped show that while babies paid more attention to snakes, they weren’t startled more easily when looking at them.

“While we find differential responses to snakes early on, meaning they are special, it doesn’t seem to be related to fear early in development,” says LoBue in a BBC article on that experiment. “It’s possible that paying more attention to something might make fear learning easier later on. It facilitates fear learning.”

As the babies in Hoehl’s more recent study were only six months old, and are from a part of the world where there are few poisonous snakes or spiders, the study authors say the reactions (whether they represent stress or just increased interest) must be an ancestral instinct.

“We conclude that fear of snakes and spiders is of evolutionary origin. Similar to primates, mechanisms in our brains enable us to identify objects as ‘spider’ or ‘snake’ and to react to them very fast. This obviously inherited stress reaction in turn predisposes us to learn these animals as dangerous or disgusting. When this accompanies further factors it can develop into a real fear or even phobia,” Hoehl says. “A strong panicky aversion exhibited by the parents or a genetic predisposition for a hyperactive amygdala, which is important for estimating hazards, can mean that increased attention towards these creatures becomes an anxiety disorder.”

The scientists say that it’s the length of time our ancestors spent around spiders and snakes that makes them scarier to us than other potentially dangerous animals.

“We assume that the reason for this particular reaction upon seeing spiders and snakes is due to the coexistence of these potentially dangerous animals with humans and their ancestors for more than 40 to 60 million years—and therefore much longer than with today’s dangerous mammals,” says Hoehl. “The reaction which is induced by animal groups feared from birth could have been embedded in the brain for an evolutionarily long time.”

The results of Hoehl and her colleagues’ study were published recently in a paper in the journal Frontiers in Psychology.


Unlearning Conditioned Responses

It can be a challenge to determine if a response is conditioned or unconditioned. The key to understanding the difference is that an unconditioned response happens automatically. Meanwhile, a conditioned response is learned and is only acquired if the individual has made an association between an unconditioned and conditioned stimulus.

However, because a conditioned response must be learned, it can also be unlearned. Pavlov tested this after the dogs had developed conditioned responses to the light. He found that if he repeatedly shined the conditioned-stimulus light but refrained from giving the dog the food, the dog would salivate less and less until it stopped salivating completely. The gradual diminishing and eventual disappearance of the conditioned response is called extinction.

Extinction can happen to real-life conditioned responses, too. For example, if you see a new dentist who doesn’t make your gums raw when you have an appointment and compliments you on your healthy mouth, over time you may find you no longer dread the dentist's office.


Data Incompatible With Modular Theories

1. Considering that it is better to err on the side of caution than to wait until the cortex figures out whether the seen object is a snake or a curved stick, people should be afraid of all kinds of snakes or snake-like animals and objects. Indeed, it has been suggested (Larson et al., 2007, 2009, 2012; Van Strien et al., 2016) that curved lines and curvilinear shapes have an advantage in visual processing. Respondents are faster to find a curved line as a target among straight lines than to find a straight line among curved ones. Furthermore, curvilinear shapes are also detected faster than rectilinear (V-shaped) shapes when they have to be found among the same set of distractors (straight lines or circles).

False alarms (e.g., it looks like a snake but after all it is a log) are less costly than false negatives (e.g., it looks like a log but it is a snake); thus, it is safer to err on the side of excessive defensive expression (see, e.g., LeDoux, 1998; Nesse, 2001). However, if the system is built to be over-defensive and to err on the side of false positives, it should respond not only to snake-specific cues but also to other similar threat-related stimuli, because many non-venomous snakes mimic venomous snakes, and venomous snakes exhibit extreme pattern variability and different characteristics (e.g., Wüster, 1998). For example, young white-faced capuchin monkeys (Cebus capucinus) are known to utter false alarm calls at a wider range of harmless non-predator animals (e.g., indigo snakes) than adults do, and snake-species discrimination does not become apparent until the juvenile stage (Meno et al., 2013). Similarly, studies with vervet monkeys (Chlorocebus pygerythrus) (Seyfarth and Cheney, 1980) and with spectral tarsiers (Tarsius spectrum) (Gursky, 2003) found youngsters to give false alarm calls triggered by a wide range of harmless animals or objects, suggesting that this predator-recognition process likely involves experiential and vicarious learning refinements (Meno et al., 2013). Another example is given by forest-living Campbell’s monkeys (Cercopithecus campbelli), which vocalized to the presence of familiar Gaboon vipers but not to unfamiliar black mambas (Dendroaspis polylepis), a species unreported in the Taï forest where the study was performed (Ouattara et al., 2009).

2. Previous research using various paradigms has shown (see, e.g., Subra et al., 2018; Zsido et al., 2018a) that a modern threatening stimulus can lead to behavior similar to that evoked by an evolutionarily relevant one. For instance, when modern (e.g., gun) and evolutionarily relevant (e.g., snake) targets are compared directly in the classical visual search task proposed by Öhman and colleagues (see, e.g., Öhman et al., 2001), the modern threatening target caught participants’ attention faster than the evolutionary one (Zsido et al., 2018b).

3. Low-level features of the visual stimulus could have affected many of the previous findings, as contrast/luminance and spatial frequency affect visual detection speed and the processing of emotional content during early visual stages (Vlamings et al., 2009; Quinlan, 2013). Most studies did not control for the low-level features (including contrast) of snake stimuli, which can differ significantly from those of other categories and can affect the findings. The effects of contrast equalization on spatial frequency processing are particularly crucial during early (i.e., < 100 ms) visual processing (McFadyen et al., 2017).

4. In tropical and subtropical countries, envenoming mainly affects people whose occupations and lifestyles require moving through dense terrain, such as farmers, herders, labourers, hunters, shepherds, and field workers (Meenatchisundaram and Michael, 2009). However, most of the previous work relied on controlled laboratory paradigms without support from studies in naturalistic conditions, which may be the source of many sorts of potential bias (Quinlan, 2013; Paré and Quirk, 2017).

5. Many snakes’ pigmentation patterns appear to be strongly influenced by selection to avoid visually oriented predators (Jackson et al., 1976). Snakes rely on crypsis (i.e., background matching) to become indistinguishable from the surrounding background (Isaac and Gregory, 2013) as one of many strategies to avoid detection (see Allen et al., 2013).

Thus, even observers with over 20 years’ experience miss most snakes (more than 75%) aboveground around them in a flat and lightly vegetated area (Whitaker and Shine, 1999a,b). However, this could also suggest that snakes developed crypsis precisely because the primate visual system was so effective at detecting them.

7. Most snakebites occur at very close range (Whitaker et al., 2000; Clark et al., 2012) and strikes are extremely fast (Cundall, 2002; LaDuc, 2002; Clark et al., 2012; Penning et al., 2016), beyond humans’ capacity to escape.

8. There is evidence that the human right amygdala is a specialized neural adaptation dedicated to processing visual information about animals in general (Mormann et al., 2011), which could represent either predators or prey, such as snakes. Indeed, humans kill and eat snakes in several cultures (Headland and Greene, 2011). It seems likely that humans’ enhanced detection of snakes does not so much confer a significant escape advantage as favor attention and further evaluation (Purkis and Lipp, 2007). Because a stimulus that needs to be approached in one situation may need to be avoided in another (Mesulam, 1998), the brain is more adapted to behavioral flexibility than to automatic, modular responses. In the case of snakes, the evidence seems to point to a similar conclusion.

9. Various laboratory studies comparing snakes and spiders (animate objects) as prepared stimuli with flowers and mushrooms might have been flawed by animacy as a confound. Flowers and mushrooms “behave” similarly to inanimate objects (e.g., coffee mugs and telephones). Change detection, for instance, is slower and less precise for inanimate targets, even when the inanimate targets can potentially move (e.g., tools or cars) or have evolutionary relevance, such as plants or fruits (New et al., 2010; Jackson and Calvillo, 2013). This evidence does not necessarily deny the snake detection theory, but papers using flowers and mushrooms as comparison stimuli are often cited in support of it (e.g., Soares et al., 2017).

10. Some of the characteristics of human–snake interaction do not appear to be the ones commonly seen in predator–prey interactions that lead to specific evolved automated traits. Ultra-specializations make sense for ultra-dependent predator–prey interactions, such as the kangaroo rat–rattlesnake interaction. These rodents’ escape speed may have evolved in response to snake predators (Higham et al., 2017), as Mohave rattlesnakes (Crotalus scutulatus) show very fast strike accelerations (up to 362 m s−2), requiring a response from the kangaroo rats within about 61.5 ms in order to escape. Although humans evolved a large cortex that can modify instinctive behaviors, such cortical processes take time and reduce automaticity and response speed.

11. Finally, Grassini et al. (2016) found that snakes need to be consciously perceived to elicit emotion. Furthermore, it was also shown that affective processing requires awareness (Lähteenmäki et al., 2015).

The abovementioned discussion points raise the question of why and how humans would develop an ultra-fast detection mechanism specific to snakes. In the next part of the paper, we present some information about snakes that hopefully can assist in disentangling this problem.


Concluding remarks

While fear is an adaptive component of the response to potentially threatening stimuli, excessive or inappropriate fear accounts for chronic psychiatric disorders, including PTSD and phobias. Understanding the neurobiological basis of fear is therefore critical for elucidating the mechanisms of, and improving treatments for, these fear-related pathologies. Concerning phobias, two kinds of disorder can be distinguished: nonexperiential phobias, which engage innate fear, and experiential phobias, which engage conditioned fear. However, so far we know a lot about how the brain processes conditioned fear, while much less is known about innate fear. More research on innate fear is therefore needed.

Based on the research reviewed in this article, it appears that the amygdala is central to two phenomena that may support pathological innate fear: fear sensitization (with a decreased amygdala activity threshold and potentiation of amygdala activity) and failure of fear habituation (with decreased amygdala habituation). Mechanisms of fear sensitization may contribute to decreased amygdala habituation. As current research is limited, this hypothesis is not yet fully supported.

The cause of experiential phobia is attributed to some external learning experience. However, although phobia-like fears can be produced in the laboratory with fear-conditioning paradigms, experimental findings do not prove that classical fear conditioning is the cause of phobias in everyday life (Merckelbach et al. 1996). For example, not all people who experience fearful events go on to develop phobias (Aitken et al. 1981). Why? Phobia development may result from exaggerated fear conditionability, due to amygdala metaplastic changes (i.e., exaggerated LTP-like changes) in vulnerable individuals. However, not all phobic patients show exaggerated fear conditionability in laboratory experiments (Hermann et al. 2002). Poor extinction, due to impaired mechanisms of depotentiation in the amygdala of vulnerable individuals, is also hypothesized as a potential factor maintaining pathological phobia. In experiments by Öhman (1986), people conditioned to angry faces showed significant resistance to extinction compared with those exposed to happy and neutral faces, so nonphobic individuals can also show resistance to extinction.

As mentioned in the introduction, the development of experiential phobia seems to involve two stages. During the first stage, a pairing of a neutral stimulus and an aversive event results in a conditioned fear response to the neutral stimulus, which becomes a conditioned stimulus. During the second stage, the person learns that fear responses to the conditioned stimulus can be reduced by avoiding this stimulus. However, as shown in rats, strong avoidance causes resistance to extinction of classical fear conditioning (Nachon et al. 2014). The mechanisms involved also need to be examined in future studies. It would also be useful to consider, in this context, other phenomena that contribute to pathological fear, such as relapse of fear after extinction (e.g., renewal, recovery, and reinstatement).

Despite these critical remarks, poor habituation and poor extinction support, at least in part, an account of fear dysfunction in phobia.


4.1. The Nuts and Bolts of Respondent Conditioning

Section Learning Objectives

  • Describe Pavlov’s accidental discovery.
  • Define respondent conditioning.
  • Recognize other terms used for respondent conditioning.
  • Outline the three phases of respondent conditioning. Define all terms.
  • Describe and exemplify higher order conditioning.
  • Contrast appetitive and aversive conditioning.
  • Contrast excitatory and inhibitory conditioning.
  • Outline and describe the four temporal presentations of US and NS in respondent conditioning.
  • Describe the phenomenon of pseudoconditioning.

4.1.1. Pavlov and His Dogs

You have likely heard about Pavlov and his dogs, but what you may not know is that this was a discovery made accidentally. Ivan Petrovich Pavlov (1849–1936; 1927), a Russian physiologist, was interested in studying digestive processes in dogs in response to being fed meat powder. What he discovered was that the dogs would salivate even before the meat powder was presented. They would salivate at the sound of a bell, footsteps in the hall, a tuning fork, or the presence of a lab assistant. Pavlov realized there were some stimuli that automatically elicited responses (such as salivating to meat powder) and those that had to be paired with these automatic associations for the animal or person to respond to them (such as salivating to a bell). Armed with this stunning revelation, Pavlov spent the rest of his career investigating the learning phenomenon and won a Nobel Prize in 1904 for his work.

The important thing to understand is that not all behaviors occur due to reinforcement and punishment, as operant conditioning says. In the case of respondent conditioning, antecedent stimuli exert complete and automatic control over some behaviors. We saw this in the case of reflexes. When a doctor strikes your knee with that little hammer, your lower leg extends automatically. You do not have to do anything but watch. Babies will root for a food source if the mother’s breast is placed near their mouth. If a nipple is placed in their mouth, they will also automatically suck, as per the sucking reflex. Humans have several of these reflexes, though not as many as other animals, due to our more complicated nervous system.

4.1.2. Respondent Conditioning Described

Respondent conditioning occurs when we link or pair a previously neutral stimulus with a stimulus that is unlearned or inborn, called an unconditioned stimulus. Note that this form of learning also goes by the name classical conditioning or Pavlovian conditioning in honor of Ivan Pavlov.

Respondent conditioning is best described as occurring in three phases: pre-conditioning, conditioning, and post-conditioning. See Figure 4.1 for an overview of Pavlov’s classic experiment.

Let’s define terms first. The term conditioning means learning. So pre-conditioning is before learning occurs, conditioning is during learning, or the acquisition of the relationship between the two stimuli, and post-conditioning is after learning has occurred. If we say something is un-conditioned, it is not learned. Going back to our earlier philosophical discussion, this covers what is innate or present at birth. Also keep in mind that the stimulus is what is sensed in the world through vision, hearing, smell, taste, or touch. The response is the behavior that is made. Making sure you have the terms straight will help you to understand respondent conditioning more easily.

4.1.2.1. Pre-conditioning. Notice that pre-conditioning has both an A and a B panel. All this stage of learning signifies is that some knowledge is already present. In Panel A, the taste of food makes a dog salivate. This does not need to be trained and is the relationship of an unconditioned stimulus (US) yielding an unconditioned response (UR). The association occurs naturally. In Panel B, we see that a neutral stimulus (NS) yields nothing. Dogs do not enter the world knowing to respond to the ringing of a bell (which they hear).

4.1.2.2. Conditioning. Conditioning is when learning occurs. Through a pairing of a neutral stimulus and an unconditioned stimulus (bell and food, respectively), the dog will learn that the bell ringing (NS) signals food coming (US) and salivate (UR). The key is that the NS is presented just before the US, which yields a UR (in most cases; more on that in a bit).

4.1.2.3. Post-conditioning. Post-conditioning, or after learning has occurred, establishes a new and not naturally occurring relationship of a conditioned stimulus (CS; previously the NS) and conditioned response (CR; the same response). So, the dog now reliably salivates at the sound of the bell because he expects that food will follow, and it does. If it doesn’t, the response ends or extinguishes, as you will see later.
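
To make the three phases concrete, here is a toy simulation (not part of this module). It uses a simple error-correction update in the spirit of the Rescorla-Wagner model, with arbitrary parameter values, to show the bell’s associative strength starting at zero (pre-conditioning), rising across bell-food pairings (conditioning), and then extinguishing when the bell is tested alone (post-conditioning).

```python
# Toy sketch of the three phases (illustrative only; the learning rule and all
# numbers are assumptions, not taken from the module).

def simulate(pairing_trials=10, bell_alone_trials=10, learning_rate=0.3):
    v = 0.0                    # pre-conditioning: the bell (NS) predicts nothing
    acquisition, extinction = [], []

    for _ in range(pairing_trials):      # conditioning: bell followed by food (US = 1)
        v += learning_rate * (1.0 - v)   # learn in proportion to how surprising the food is
        acquisition.append(round(v, 3))

    for _ in range(bell_alone_trials):   # post-conditioning test: bell alone, no food (US = 0)
        v += learning_rate * (0.0 - v)   # the violated expectation weakens the response
        extinction.append(round(v, 3))

    return acquisition, extinction

acq, ext = simulate()
print("acquisition:", acq)  # climbs toward 1.0 as the bell comes to signal food
print("extinction:", ext)   # decays toward 0 when food no longer follows the bell
```

The declining values on the bell-alone trials correspond to the extinction described earlier in the section on unlearning conditioned responses.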

Figure 4.1. Pavlov’s Classic Experiment

Let’s now clearly define our terms:

  • Unconditioned stimulus — The stimulus that naturally elicits a response.
  • Unconditioned response —The response that occurs naturally when the US is present.
  • Neutral stimulus — A stimulus that causes no response.
  • Conditioned stimulus — The initially neutral stimulus that has been associated with a naturally occurring stimulus to bring about a response.
  • Conditioned response — The response elicited by the CS; it resembles the UR but is usually weaker (the dog salivates to the bell, though not as much as it does to the sight/smell/taste of the food).

Note to Student: Be sure you not only understand these terms but the acronyms used to represent them. I will use the shorthand the rest of the way through this module and in other places in the book.

To fully understand respondent conditioning, know that each pairing of an NS and US represents a single trial, called a conditioning trial. The period between conditioning trials is called the intertrial interval. The period between the presentation of the NS and then the US (Panel C) within a conditioning trial is called the interstimulus interval.

The entire process of conditioning, from when we first make the association between an NS and US to its strengthening over time through repeated pairings, is called acquisition. It is likely not surprising to learn that conditioning occurs more quickly if the US is more intense. We will be more motivated to learn to associate making an incorrect response with shock if we receive 150 volts compared to 25 volts.

Conditioning is also more effective when the trials are spaced rather than massed (Menzel et al., 2001). For instance, spacing the trials 5 minutes apart is more effective than spacing them 25 seconds apart. One explanation for this is that we have time to rehearse the CS and US in memory during the intertrial interval and if a new trial occurs too soon, it could interfere with rehearsal (Wagner, Rudy, & Whitlow, 1973).

And we can determine how good the learning is if we occasionally toss in a test trial in which the NS is presented alone to see if it elicits the response (UR/CR; ring the bell alone and see if salivation occurs). We can also wait to see whether, after the presentation of the NS (bell) and before the US (sight of food) appears, the UR/CR (salivation) appears on its own. In other words, does the response occur during the interstimulus interval?

4.1.3. Conditioning and its Different Forms

It is worth noting that the conditioning procedure described in the preceding section on Pavlov is not the only form it can take. In this section, we will discuss a type of layered conditioning, conditioning based on the event being something we desire or want to avoid, conditioning based on the presentation or removal of the US, and finally temporal factors that can produce unique conditioning procedures.

4.1.3.1. Higher order conditioning. Sometimes, a stimulus that is associated with a CS (formerly the NS) becomes a CS itself and elicits the CR. We call this higher order conditioning, and each level of conditioning is referred to as first, second, third, etc. order conditioning. So how might this work?

Being assaulted (US) will elicit fear (UR). A person wearing a ski mask would alone not cause any response (it is an NS1). If though, you pair the person wearing the ski mask (NS1) with the assault (US) which causes fear (UR), then the sight of a person wearing a ski mask (CS1) will elicit fear (CR). Keep in mind that with the stimuli, you see a person wearing a ski mask and feel the effects of the assault (touch or pain receptors in the skin will be activated). This is first-order conditioning (not to be confused with the training of First Order stormtroopers in Star Wars) and in this example involves a person being associated with fear.

But what if the assault occurred in an alley in your neighborhood? Now the alley (NS2) is paired with the person wearing the ski mask (CS1) which causes fear (CR), and post-conditioning shows that the alley (CS2) causes fear (CR). This is second-order conditioning and involves a location being associated with fear.

Could the time of day be a factor too? What if the mugging occurred at night? If night (NS3) is paired with the alley (CS2) which causes fear (CR), then being outside at night (CS3) could lead to fear (or at least some anxiety; CR). This would be third-order conditioning and now involves a time of day being associated with fear.

Fear was originally elicited by being assaulted. Through higher order conditioning, it was also elicited by the sight of a ski mask, being in an alley, and being outside at night. The fear reaction becomes weaker across the conditioning of these additional NS, such that our response to being outside at night could be better classified as anxiety and not so much the bona fide fear felt while being assaulted (and likely for a time afterward). This suggests that the response is strongest to the US and becomes weaker across CS1, CS2, and CS3.

4.1.3.2. Appetitive and aversive conditioning. Recall from Section 2.1.3 that appetitive stimuli are those that an organism desires and seeks out while aversive stimuli are readily avoided. In respondent conditioning, the US could be an appetitive or aversive stimulus. For instance, in appetitive conditioning, the US would be something desirable such as candy which makes us happy. Other examples could include water, food, sex, or drugs. In aversive conditioning, the stimulus is not pleasant and could include extreme temperatures, a painful sting such as from a wasp or a bite from a dog, electric shock, or something that does not smell nice. It would not be surprising to learn that conditioning occurs relatively fast when aversive US are involved. Since these stimuli could harm or kill us, learning to avoid them is adaptive and aids our survival.

4.1.3.3. Excitatory and inhibitory conditioning. Our discussion so far has included examples in which the NS is associated with the presentation of the US, called excitatory conditioning. Pavlov’s dogs associated the ringing of a bell (NS) with the presentation of the food (US), which caused their salivation (UR). Eventually, salivation (CR) occurred to just the ringing of the bell (CS).

Interestingly enough, the absence of the US can be associated with an NS too, in a process called inhibitory conditioning. Go back to our example of higher order conditioning. A person wearing a ski mask is an excitatory CS for fear, but seeing someone wearing such a mask during the daytime leads to an inhibition of fear. Daytime indicates a safe interval, and we will not be overly concerned about ski masks; we have only ever been assaulted at night. The excitatory CS is expressed as CS+ and the inhibitory CS as CS-.

4.1.3.4. Temporal factors affecting conditioning. In the previous section we saw that, generally, the US is presented after the NS, though the NS could be followed by the absence of a US. These examples have also always presented the NS before the US, but this is not necessary in all cases.

First, delay conditioning involves the presentation of the NS before the US, but the NS overlaps with the US for a short period of time. In the case of Pavlov’s experiment, the bell would ring for say 10 seconds, then the food would enter the room, and then the bell would end 5 seconds after this. The ISI (interstimulus interval) should be relatively brief to use this procedure.

What if we present the NS well ahead of the US in time? Let’s say we ring the bell for 10 seconds and then there is a 5-second gap before the food enters the room. The NS and US do not overlap. This is the basis of trace conditioning, and the trace is a memory that we have to access. The organism will need to remember that the NS occurred before the US to make the association, or that the bell rang before the food came in. The period of time between the NS terminating and the US beginning is called the trace interval and ideally should be short, no more than a few seconds.

The NS and US could occur at the same time, as in simultaneous conditioning. As you might expect, conditioning in this procedure is poor since the NS does not predict the occurrence of the US; they occur simultaneously. The bell would ring as the food enters the room, so the bell ringing does not create an expectation that food is about to arrive, and it is that predictive relationship which normally aids learning of the association.

Finally, the US could come before the NS in a procedure called backward conditioning. The US would occur first and last for a few seconds with the NS starting near the end of this time. Hence, the NS and US co-occur for a short period of time. Of the four methods, backward conditioning is the least effective for excitatory conditioning though it could lead to inhibitory conditioning. Consider a shock paradigm in which a rat is given a shock (US) and then near the end of the shock a light is turned on (NS). The light (NS) would signal the end of the shock (US) and serve as a safety signal. Hence, the NS would become a CS-.
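
A compact way to compare the four arrangements is to lay out the NS and US timings side by side. The sketch below encodes the bell/food examples from this section; the onsets for the delay and trace cases follow the text, while the offsets and the backward-conditioning numbers are arbitrary assumptions for illustration.

```python
# Timeline sketch of the four temporal procedures (times in seconds).
# Each tuple is (onset, offset); values are illustrative, not prescriptive.

procedures = {
    "delay":        {"bell (NS)": (0, 15), "food (US)": (10, 20)},  # NS first, overlapping the US
    "trace":        {"bell (NS)": (0, 10), "food (US)": (15, 25)},  # 5 s trace interval, no overlap
    "simultaneous": {"bell (NS)": (0, 10), "food (US)": (0, 10)},   # NS and US co-occur
    "backward":     {"bell (NS)": (8, 12), "food (US)": (0, 10)},   # US first, NS near its end
}

for name, timing in procedures.items():
    ns_on, ns_off = timing["bell (NS)"]
    us_on, us_off = timing["food (US)"]
    overlap = min(ns_off, us_off) > max(ns_on, us_on)   # do the two stimuli co-occur at all?
    print(f"{name:13s} NS {ns_on:>2}-{ns_off:<2}  US {us_on:>2}-{us_off:<2}  overlap: {overlap}")
```

Seen this way, the key difference between the procedures is simply whether, and in what order, the NS and US overlap in time.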

4.1.4. How Do You Know if Learning Occurred?

A cardinal feature of science is to verify that any change in your variable of interest (the DV) is caused by the treatment or manipulation (the IV). It could be that the elicited response was not actually caused by the NS/CS, and so is not a product of learning or conditioning, but was instead caused by sensitization; this is called pseudoconditioning.

Let’s say you were working with turtles and presented them with a tone (the NS) followed by tapping on the shell (US), which resulted in the turtles withdrawing into their shells (UR). With a few such pairings, the tone (CS) would lead to withdrawing into shells (CR). So the tone has been associated with tapping, right? Possibly, but let’s say in addition to the tone we also flash a light. The turtles also withdraw into their shells at the presentation of this stimulus. In the case of sensitization, repeated presentation of a stimulus leads to an increase in how strong the response is. It can also lead to other stimuli eliciting the same response, as in the case of the bright light and the tone both eliciting the withdraw-into-shell response.

To know whether the effect on behavior you are seeing is due to conditioning and not sensitization, a simple adjustment can be made: the inclusion of a control group. The experimental group would have the tone and tap paired together, resulting in a withdrawal response. The control group would have the tone played and then the tap made far apart in time. Now, when the tone is presented to each group alone, the experimental group would show a strong withdraw-into-shell response while the control group might show the same response, but weakly. The intensity of the response, in this case its being stronger in the experimental rather than the control condition, indicates that conditioning has truly occurred; in other words, there is no pseudoconditioning.
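The logic of this control comparison can be sketched in a few lines of Python. The response values below are invented purely for illustration (they are not data from any turtle experiment); the point is simply that conditioning is inferred from the difference between the paired and unpaired groups, not from the paired group's response alone.

# Hypothetical response strengths to the tone alone after training.
# Any response in the control (unpaired) group reflects sensitization;
# the extra response in the experimental (paired) group is attributed
# to genuine conditioning.
experimental = [8.2, 7.9, 8.5, 7.6]   # tone and tap paired closely in time
control      = [2.1, 1.8, 2.6, 2.0]   # tone and tap presented far apart

def mean(values):
    return sum(values) / len(values)

difference = mean(experimental) - mean(control)
print(f"paired group mean response:   {mean(experimental):.2f}")
print(f"unpaired group mean response: {mean(control):.2f}")
print(f"response attributable to conditioning: {difference:.2f}")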


What does fear do to our vision?

Consider the following scenario. A policeman is on patrol; maybe he’s quite new to working in the field. He sees a suspicious young man and decides to follow him.

He turns the corner and sees that the man has drawn a gun from his pocket. In a snap second – almost too fast to think twice – he takes out his own gun and shoots the man dead.

Only the man didn’t have a gun at all; it was a mobile phone.

Sadly, it’s a familiar story. An incident exactly like it occurred only last week (January 2016), and a quick trawl through more newspaper reports shows how commonly it occurs.

When people make snap decisions in situations like this, they are often under intense momentary stress. This can provoke a host of automatic mental and physical effects that some psychologists refer to as “freezing behaviour”. We usually think of this kind of reaction as occurring in animals – a mouse paralysed with fear or a deer trapped motionless in the headlights (resulting in much road kill).

In other words, it’s the moment before an animal decides what to do, whether to “fight or flee”. This is believed to be an innate response to a predator, to avoid being seen or heard. Research has shown that an animal’s heart rate actually decreases when in this state.

Although we hear about it less in humans, our physiological response can be similar. For instance, one 2005 study found that, in response to pictures of mutilated bodies, participants’ physical movements were reduced and their heart rates slowed. The same effect was found in a 2010 study in response to pictures of threatening faces.

However, there’s still much more we need to learn about the effects of the human freezing response – for example, what effect does it have on visual perception, and could any effects help explain some of the tragic instances when police have mistaken phones and other harmless objects for guns? A new study published in the Journal of Experimental Psychology aimed to find out.

Maria Lojowska of Radboud University and colleagues in the Netherlands tested 34 participants between the ages of 18 and 30. To create a situation that elicited freezing behaviour, the researchers occasionally gave their participants a mild electric shock, which was always preceded by a red dot. Participants were told they were taking part in a visual perception task and were fully informed about the nature of the shocks before the experiment started.

It was not the shock itself that made the participants show “freezing behaviour” (as measured by their heart rate), rather it was the anticipation of the shock. When participants saw a green dot (which did not presage a shock), they relaxed, but when they saw a red dot they felt more scared, regardless of whether a shock was actually given or not.

The participants’ task was to judge as accurately as possible the orientation of the lines inside small squares, which appeared on a computer screen on the left or right of their visual fields. The squares had either several lines (high detail) or few lines (low detail), as you can see below. Crucially, the researchers found that the participants’ visual performance was affected by whether or not they were stressed and showing physiological signs of freezing. When they were afraid and stressed, their performance at judging the squares with high detail was impaired, but their ability to judge the squares with coarse visual detail actually improved.

Figure: the square on the left features high detail and the one on the right low detail (stimuli from Lojowska et al., 2015). When scared, participants were better at perceiving low detail.
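For readers who want a concrete picture of the design, here is a hedged Python sketch of how the trial structure described above (threat cue by level of detail by visual field) might be laid out. The condition labels and the number of repetitions per cell are assumptions for illustration, not details reported by the authors.

import itertools, random

# Hypothetical trial list: a cue colour (red = shock possible, green = safe),
# a target square with high or low detail, shown in the left or right visual
# field. Ten repetitions per cell is an assumed number.
random.seed(0)
conditions = list(itertools.product(
    ["red", "green"],      # threat vs. safe cue
    ["high", "low"],       # amount of visual detail in the square
    ["left", "right"],     # visual field of the target
))
trials = conditions * 10
random.shuffle(trials)
print(len(trials), "trials; first three:", trials[:3])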

The researchers said that previous research in animals had suggested that the freezing response leads to an overall improvement in vision, but their new findings suggest a more nuanced situation – it seems that when we’re afraid, we perceive some aspects of the world more clearly, but at the cost of ignoring much of the detail.

Intuitively, it makes sense that an animal or human only sees the most basic detail of a potentially threatening object; it would take too much time to take in all the detail of a scene. Our brain has a clever way of quickly reconstructing what every object is likely to be, using its memory of similar events and situations rather than analysing each new thing afresh, in depth. It is these shortcuts that can result in errors and visual illusions.

Despite these potential flaws in our visual perception, it’s important for us to be able to perceive things quickly. If you are walking in a desert and glimpse a shape that could be a snake (but is more likely a stick), it’s better to show caution and stop than assume it’s a stick and walk right into danger.

Now that we better understand how our visual perception changes when we feel fear, Maria Lojowska and her team plan to discover exactly what’s going on in the brain when this happens. Meanwhile, the researchers hope their findings might help inform training programmes to improve a person’s performance when they are in a stressful environment. Many police forces in the US already train their officers to overcome their implicit bias towards race and sex. It would be helpful to add the limits of our visual perception to the list.
_________________________________



Lojowska, M., Gladwin, T., Hermans, E., & Roelofs, K. (2015). Freezing promotes perception of coarse visual features. Journal of Experimental Psychology: General, 144(6), 1080-1088. doi: 10.1037/xge0000117



Concluding remarks

While fear is an adaptive component of the response to potentially threatening stimuli, too much or inappropriate fear accounts for chronic psychiatric disorders, including PTSD and phobias. Understanding the neurobiological basis of fear is therefore critical for elucidating the underlying mechanisms and improving treatments of these fear-related pathologies. Concerning phobias, nonexperiential disorders (engaging innate fear) and experiential disorders (engaging conditioned fear) can be distinguished. However, so far we know a great deal about how the brain processes conditioned fear, while much less is known about innate fear. An increase in research on innate fear is therefore necessary.

Based on the research reviewed in this article, it appears that the amygdala is central to two phenomena that may support pathological innate fear: fear sensitization (with a lowered amygdala activation threshold and potentiation of amygdala activity) and failure of fear habituation (with decreased amygdala habituation). Mechanisms of fear sensitization may contribute to decreased amygdala habituation. As current research is limited, this hypothesis is not yet fully supported.

The cause of experiential phobia is attributed to some external learning experience. However, although phobia-like fears can be produced in the laboratory with fear-conditioning paradigms, experimental findings do not prove that, in everyday life, classical fear conditioning is the cause of phobias (Merckelbach et al. 1996). For example, not all people who have experienced fearful events go on to develop phobias (Aitken et al. 1981). Why? This may result from exaggerated fear conditionability, due to amygdala metaplastic changes (i.e., exaggerated LTP-like changes) in vulnerable individuals. However, not all phobic patients show exaggerated fear conditionability in laboratory experiments (Hermann et al. 2002). Poor extinction, due to impairment of depotentiation mechanisms in the amygdala of vulnerable individuals, is also hypothesized as a potential factor maintaining pathological phobia. In experiments by Öhman (1986), people conditioned to angry faces showed significant resistance to extinction compared with those exposed to happy and neutral faces, so nonphobic individuals can also show resistance to extinction.

As mentioned in the introduction, the development of experiential phobia seems to involve two stages. During the first stage, a pairing of a neutral stimulus and an aversive event results in a conditioned fear response to the neutral stimulus, which becomes a conditioned stimulus. During the second stage, the person learns that fear responses to the conditioned stimulus can be reduced by avoiding this stimulus. However, as shown in rats, strong avoidance causes resistance to extinction of classical fear conditioning (Nachon et al. 2014). The mechanisms involved also need to be examined in future studies. It would also be worth considering, in this context, other phenomena that contribute to pathological fear, such as relapse of fear after extinction (e.g., renewal, recovery, and reinstatement).

Despite these critical remarks, poor habituation and poor extinction support, at least in part, fear dysfunction in phobia.


Factors in avoidance behaviour

Warning calls and visual signals that are unique to different species of birds and mammals effectively and specifically evoke avoidance patterns. In some cases, learning clearly emerges as a factor; thus, members of a colony of birds seem to learn to respond to the alarm calls of all species present in the colony. Among ducklings, a visual model to evoke fleeing and hiding can be fashioned as a cardboard cutout. When moved overhead in one direction, the model resembles a short-necked, long-tailed hawk, and the ducklings flee from it; when moved in the other direction, the model looks like a harmless, long-necked goose, and the ducklings tend to stay calm. The model is effective in eliciting the two kinds of behaviour, however, only when the ducklings are accustomed to geese flying over but not hawks.

Innate factors also contribute to such responses (see instinct). Domestic chicks, for example, show crouching and freezing in response to the long alarm call of their species. Many of the perching birds (passerines) will gather to mob when stimulated by the sight of an owl. The eyes in the characteristic owl face have been found to be especially important; even birds reared in isolation respond to man-made models with appropriate eyespots painted on. It has been suggested that many human beings are specifically (and perhaps instinctively) disturbed by the sight of snakes, the notion of a legless object perhaps being a key stimulus. Human responses to spiders and centipedes with conspicuous legs may also be intense. In the reaction to snakes at least, notwithstanding Freudian explanations that they symbolize male sex organs, the behaviour of people may be compared with owl mobbing among passerine birds.

Specific chemical signals can also induce avoidance behaviour; some are released by minnows and tadpoles when their skin is damaged (usually indicating to fellows that there is danger). These chemicals appear to be specific for each species of fish and are highly effective in producing fleeing (see chemoreception). Many ants produce volatile alarm substances (terpenes) that are attractants to other ants at low concentrations and, in high concentrations near their source, produce rapid locomotion, defense postures, and, sometimes, fleeing. Some invertebrate avoidance responses are reflexes evoked by very specific stimuli; rapid swimming by cockles clapping their shells, for example, is elicited by starfish extract. Shell jerking is produced in a freshwater snail (Physa) by contact with a leech, another specific response to a major predator.


Babies’ Reactions To Spiders, Snakes Show Humans Have Evolved To Fear Them, Study Finds

LEIPZIG, Germany — The infant’s eyes grow wide at the sight of the eight-legged creature. She’s never been exposed to spiders before, but something inside her signals to pay attention.

Demonstrating this instinctual reaction, a new study out of the Max Planck Institute for Human Cognitive and Brain Sciences and Uppsala University shows that even six-month-old babies’ pupils dilate when seeing snakes or spiders. This response, researchers say, adds to the argument that fear of such creatures is facilitated by instinct, rather than just learned.


“When we showed pictures of a snake or a spider to the babies instead of a flower or a fish of the same size and colour, they reacted with significantly bigger pupils,” says lead investigator and neuroscientist Stefanie Hoehl in a press release. “In constant light conditions this change in size of the pupils is an important signal for the activation of the noradrenergic system in the brain, which is responsible for stress reactions. Accordingly, even the youngest babies seem to be stressed by these groups of animals.”

While this increased attention to the animals means that babies quickly learn to fear them, other studies suggest it is not the fear itself that is innate; rather, it is the increased arousal and attention to these creatures that is instinctual. Indeed, some of the previous research the authors cite shows that younger babies’ pupils actually dilate more in response to happy faces than fearful ones.

In such instances, the authors refer to such dilation as an “arousal” rather than a “stress” response.

Other experiments have likewise found that infants are faster at detecting snakes, but not necessarily inherently afraid of them. In an earlier experiment, researcher Vanessa LoBue of Rutgers University helped show that while babies paid more attention to snakes, they weren’t startled more easily when looking at them.

“While we find differential responses to snakes early on, meaning they are special, it doesn’t seem to be related to fear early in development,” says LoBue in a BBC article on that experiment. “It’s possible that paying more attention to something might make fear learning easier later on. It facilitates fear learning.”

As the babies in Hoehl’s more recent study were only six months old, and are from a part of the world where there are few poisonous snakes or spiders, the study authors say the reactions, whether they represent stress or just increased interest, must be an ancestral instinct.

“We conclude that fear of snakes and spiders is of evolutionary origin. Similar to primates, mechanisms in our brains enable us to identify objects as ‘spider’ or ‘snake’ and to react to them very fast. This obviously inherited stress reaction in turn predisposes us to learn these animals as dangerous or disgusting. When this accompanies further factors it can develop into a real fear or even phobia,” Hoehl says. “A strong panicky aversion exhibited by the parents or a genetic predisposition for a hyperactive amygdala, which is important for estimating hazards, can mean that increased attention towards these creatures becomes an anxiety disorder.”

The scientists say that it’s the length of time our ancestors spent around spiders and snakes that makes them scarier to us than other potentially dangerous animals.

“We assume that the reason for this particular reaction upon seeing spiders and snakes is due to the coexistence of these potentially dangerous animals with humans and their ancestors for more than 40 to 60 million years—and therefore much longer than with today’s dangerous mammals,” says Hoehl. “The reaction which is induced by animal groups feared from birth could have been embedded in the brain for an evolutionarily long time.”

The results of Hoehl and her colleagues’ study were published recently in the journal Frontiers in Psychology.


Data Incompatible With Modular Theories

1. Considering that it is better to err on the side of caution than to wait until the cortex figures out whether the seen object is a snake or a curved stick, people should be afraid of all kinds of snakes or snake-like animals and objects. Indeed, it has been suggested (Larson et al., 2007, 2009, 2012; Van Strien et al., 2016) that curved lines and curvilinear shapes have an advantage in visual processing. Respondents are faster to find a curved-line target among straight lines than to find a straight line among curved ones. Furthermore, curvilinear shapes are also detected faster than rectilinear (V-shaped) shapes when they have to be found among the same set of distractors, i.e., straight lines or circles.

False alarms (e.g., it looks like a snake but is after all a log) are less costly than false negatives (e.g., it looks like a log but is a snake); thus, it is safer to err on the side of excessive defensive expression (see e.g., LeDoux, 1998; Nesse, 2001). However, if the system is built to be over-defensive and err toward false positives, it should respond not only to snake-specific cues but also to other similar threat-related stimuli, because many non-venomous snakes mimic venomous snakes, and venomous snakes exhibit extreme pattern variability and different characteristics (e.g., Wüster, 1998). For example, young white-faced capuchin monkeys (Cebus capucinus) are known to utter false alarm calls at a wide range of harmless non-predator animals (e.g., indigo snakes) compared to adults, and snake-species discrimination does not become apparent until the juvenile stage (Meno et al., 2013). Similarly, studies with vervet monkeys (Chlorocebus pygerythrus) (Seyfarth and Cheney, 1980) and with spectral tarsiers (Tarsius spectrum) (Gursky, 2003) found youngsters to give false alarm calls triggered by a wide range of harmless animals or objects, suggesting that this predator-recognition process likely involves experiential and vicarious learning refinements (Meno et al., 2013). Another example is given by the forest-living Campbell’s monkeys (Cercopithecus c. campbelli), which vocalized to the presence of familiar Gaboon vipers but not to unfamiliar black mambas (Dendroaspis polylepis), a species unreported in the Taï forest where the study was performed (Ouattara et al., 2009).

2. Previous research using various paradigms has shown (see e.g., Subra et al., 2018; Zsido et al., 2018a) that a modern threatening stimulus can lead to behavior similar to that evoked by an evolutionarily relevant one. For instance, when modern (e.g., gun) and evolutionary (e.g., snake) targets are compared directly in the classical visual search task proposed by Öhman and colleagues (see e.g., Öhman et al., 2001), the modern threatening target caught participants’ attention faster than the evolutionary ones (Zsido et al., 2018b).

3. Low-level features of the visual stimulus could have affected many of the previous findings, as contrast/luminance and spatial frequency affect visual detection speed and the processing of emotional content during early visual stages (Vlamings et al., 2009; Quinlan, 2013). Most studies did not control for the low-level features (including contrast) of the snake stimuli, which can differ significantly from those of other categories and can affect the findings. These effects of contrast equalization on spatial frequency processing are particularly crucial during early (i.e., < 100 ms) visual processing (McFadyen et al., 2017). (A generic way of equating such low-level features is sketched after this list.)

4. In tropical and subtropical countries, envenoming affects mainly people whose occupations and lifestyles require movement through dense vegetation, such as farmers, herders, labourers, hunters, shepherds, and field workers (Meenatchisundaram and Michael, 2009). However, most of the previous work relied on controlled laboratory paradigms without support from studies in naturalistic conditions, which may be the cause of many sorts of potential bias (Quinlan, 2013; Paré and Quirk, 2017).

5. Many snakes’ pigmentation patterns appear to be strongly influenced by selection to avoid visually oriented predators (Jackson et al., 1976). Snakes rely on crypsis (i.e., background matching) to become indistinguishable from the surrounding background (Isaac and Gregory, 2013) as one of many strategies to avoid detection (see Allen et al., 2013).

6. Even highly experienced observers, with over 20 years of field experience, miss seeing most snakes (more than 75%) aboveground around them in a flat and lightly vegetated area (Whitaker and Shine, 1999a,b). However, this could also suggest that snakes developed crypsis because the primate visual system was so effective at detecting snakes.

7. Most snakebites occur at very close range (Whitaker et al., 2000; Clark et al., 2012) and are extremely fast (Cundall, 2002; LaDuc, 2002; Clark et al., 2012; Penning et al., 2016), beyond the capability of humans to escape.

8. There is evidence that the human right amygdala is a specialized neural adaptation dedicated to processing visual information about animals in general (Mormann et al., 2011), which could represent either predators or prey, such as snakes. Indeed, humans do kill and eat snakes in several cultures (Headland and Greene, 2011). It seems likely that humans’ enhanced detection of snakes does not so much contribute a significant escape advantage as favor attention and further evaluation (Purkis and Lipp, 2007). Because a stimulus that needs to be approached in one situation may be avoided in another (Mesulam, 1998), the brain is more adapted to behavioral flexibility than to automatic, modular responses. In the case of snakes, the evidence seems to point to a similar conclusion.

9. Various laboratory studies comparing snakes and spiders (animate objects) as prepared stimuli with flowers and mushrooms might have been flawed by animacy as a confound. Flowers and mushrooms “behave” similarly to inanimate objects (e.g., coffee mugs and telephones). Change detection, for instance, is slower and less precise for inanimate targets, even when the inanimate targets can potentially move (e.g., tools or cars) or have evolutionary relevance such as plants or fruits (New et al., 2010; Jackson and Calvillo, 2013). This evidence does not necessarily deny the snake detection theory, but previous papers using flowers and mushrooms are often cited to support it (e.g., Soares et al., 2017).

10. Some of the characteristics of human-snake interaction do not appear to be the ones commonly seen in the prey-predator interactions that lead to specific evolved automated traits. Ultra-specializations make sense for ultra-dependent predator-prey interactions, such as the kangaroo rat-rattlesnake interaction. These rodents’ escape speed may have evolved in response to snake predators (Higham et al., 2017), as Mohave rattlesnakes (Crotalus scutulatus) show very fast strike accelerations (up to 362 m/s²), requiring from the kangaroo rats a response within around 61.5 ms in order to escape. Although humans evolved a large cortex that can modify instinctive behaviors, such cortical processes take time and reduce automaticity and response speed.

11. Finally, Grassini et al. (2016) found that snakes need to be consciously perceived to elicit emotion. Furthermore, it was also shown that affective processing requires awareness (Lähteenmäki et al., 2015).
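Referring back to point 3, one common remedy for low-level confounds is to equate mean luminance and RMS contrast across stimulus categories before the experiment is run. The sketch below shows this in Python with NumPy, assuming grayscale images scaled to the range 0-1; it is a generic illustration, not the procedure used in any of the studies cited above.

import numpy as np

def match_luminance_contrast(img, target_mean=0.5, target_rms=0.15):
    # Rescale a grayscale image (values in [0, 1]) so that its mean luminance
    # and RMS contrast approximately match the given targets.
    img = img.astype(float)
    centred = img - img.mean()
    rms = centred.std()
    if rms > 0:
        centred = centred / rms * target_rms
    return np.clip(centred + target_mean, 0.0, 1.0)

# Random arrays stand in here for snake and flower photographs.
rng = np.random.default_rng(0)
snake_img, flower_img = rng.random((64, 64)), rng.random((64, 64)) * 0.6
for label, img in [("snake", snake_img), ("flower", flower_img)]:
    eq = match_luminance_contrast(img)
    print(f"{label}: mean luminance={eq.mean():.3f}, RMS contrast={eq.std():.3f}")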

The abovementioned discussion points raise the question of why and how humans would develop an ultra-fast detection mechanism specific to snakes. In the next part of the paper, we present some information about snakes that hopefully can assist in disentangling this problem.


Tracing the scent of fear: Study identifies neurons, brain region involved in rodent stress response


The odor of bobcat urine, if you ever get a chance to take a whiff, is unforgettable—like rotten meat combined with sweat, with something indescribably feral underlying it. To humans, it's just nose-wrinklingly disgusting.

But to mice, it smells like one thing: fear.

Rodents respond instinctually to this trace of their natural predator. Even those mice raised in the lab, which have never been exposed to bobcats—or cats of any sort—respond to it.

For mice, this instinctual reaction can be lifesaving. The fear response triggers a surge of stress hormones which sends the mice into hyper-preparedness, helping them to respond and flee quickly from hungry predators. Although humans and mice have different stress triggers, this response is reminiscent of our physiological responses to fear and stress.

Now, a study has identified nerve cells and a region of the brain behind this innate fear response. With a new technique that uses specially-engineered viruses to uncover the nerve pathway involved, a research team led by Fred Hutchinson Cancer Research Center biologist and Nobel Prize winner Dr. Linda Buck has pinpointed a tiny area of the mouse brain responsible for this scent-induced reaction.

It's known as the "amygdalo-piriform transition area," or AmPir for short. The researchers were surprised to find that the fear response was so concentrated in this one small region of the olfactory cortex, a part of the brain responsible for perceiving odors.

Although humans do not show innate fear to predator odors, studying how mice respond to predator cues can help us learn about our own innate emotions and responses, Buck said. On a general level, the rodent stress response looks a lot like our own.

"Understanding the neural circuitry underlying fear and stress of various sorts is very important, not just to understand the basic biology and functions of the brain, but also for potentially finding evolutionarily conserved neural circuits and genes that play an important role in humans," said Buck.

Buck and her team describe their findings in a paper published online March 21 in the journal Nature.


Nature and Nurture as an Enduring Tension in the History of Psychology

Nature–nurture is a dichotomous way of thinking about the origins of human (and animal) behavior and development, where “nature” refers to native, inborn, causal factors that function independently of, or prior to, the experiences (“nurture”) of the organism. In psychology during the 19th century, nature-nurture debates were voiced in the language of instinct versus learning. In the first decades of the 20th century, it was widely assumed that humans and animals entered the world with a fixed set of inborn instincts. But in the 1920s and again in the 1950s, the validity of instinct as a scientific construct was challenged on conceptual and empirical grounds. As a result, most psychologists abandoned the term instinct, but they did not abandon the validity of distinguishing between nature and nurture. In place of instinct, many psychologists made a semantic shift to using terms like innate knowledge, biological maturation, and/or hereditary/genetic effects on development, all of which extend well into the 21st century. Still, for some psychologists, the earlier critiques of the instinct concept remain just as relevant to these more modern usages.

The tension in nature-nurture debates is commonly eased by claiming that explanations of behavior must involve reference to both nature-based and nurture-based causes. However, for some psychologists there is a growing pressure to see the nature–nurture dichotomy as oversimplifying the development of behavior patterns. The division is seen as both arbitrary and counterproductive. Rather than treat nature and nurture as separable causal factors operating on development, they treat nature-nurture as a distinction between product (nature) versus process (nurture). Thus there has been a longstanding tension about how to define, separate, and balance the effects of nature and nurture.

Nature and Nurture in Development

The oldest and most persistent way to frame explanations about the behavioral and mental development of individuals is to distinguish between two separate sources of developmental causation: (a) intrinsic, preformed, or predetermined causes (“nature”) versus (b) extrinsic, experiential, or environmental causes (“nurture”). Inputs from these two sources are thought to add their own contribution to development (see Figure 1).

Figure 1. The traditional view of nature and nurture as separate causes of development. In the traditional view, nature and nurture are treated as independent causal influences that combine during development to generate outcomes. Note that, during development, the effects of nature and nurture (shown in horizontal crossing lines) remain independent so that their effects on outcomes are theoretically separable.

Because some traits seem to derive more from one source than the other, much of the tension associated with the nature–nurture division deals with disagreements about how to balance the roles of nature and nurture in the development of a trait.

Evidence of Nature in Development

Evidence to support the nature–nurture division usually derives from patterns of behavior that suggest a limited role of environmental causation, thus implying some effect of nature by default. Table 1 depicts some common descriptors and conditions used to infer that some preference, knowledge, or skill is nature based.

Table 1. Common Descriptors and Associated Conditions for Inferring the Effects of Nature on Development

Displayed in the absence of relevant experience

Preparedness for learning

Rapidly or easily learned

Difficult or impossible to learn

Found in all like members of a species

Difficult to modify following its appearance

Emerges in an orderly sequence or at a specific time

Runs in families or with degrees of kinship

It is important to reiterate that nature-based causation (e.g., genetic determination) is inferred from these observations. Such inferences can generate tension because each of the observations listed here can be explained by nurture-based (environmental) factors. Confusion can also arise when evidence of one descriptor (e.g., being hereditary) is erroneously used to justify a different usage (e.g., that the trait is unlearned).

The Origins of Nature Versus Nurture

For much of recorded history, the distinction between nature and nurture was a temporal divide between what a person is innately endowed with at birth, prior to experience (nature), and what happens thereafter (nurture). It was not until the 19th century that the temporal division was transformed into a material division of causal influences (Keller, 2010). New views about heredity and Darwinian evolution justified distinguishing between native traits and genetic causes from acquired traits and environmental causes. More so than before, the terms nature and nurture were often juxtaposed in an opposition famously described by Sir Francis Galton (1869) as that between “nature versus nurture.”

Galton began writing about heredity in the mid-1860s. He believed we would discover laws governing the transmission of mental as well as physical qualities. Galton’s take on mental heredity, however, was forged by his desire to improve the human race in a science he would later call “eugenics.” In the mid-19th century, British liberals assumed humans were equivalent at birth. Their social reform efforts were geared to enhancing educational opportunities and improving living conditions. Galton, a political conservative, opposed the notion of natural equality, arguing instead that people were inherently different at birth (Cowan, 2016), and that these inherited mental and behavioral inequalities were transmitted through lineages like physical qualities. Because Galton opposed the widely held Lamarckian idea that the qualities acquired in one’s lifetime could modify the inherited potential of subsequent generations, he believed long-lasting improvement of the human stock would only come by controlling breeding practices.

To explain the biological mechanisms of inheritance, Galton joined a growing trend in the 1870s to understand inheritance as involving the transmission of (hypothetical) determinative, germinal substances across generations. Foreshadowing a view that would later become scientific orthodoxy, Galton believed these germinal substances to be uninfluenced by the experiences of the organism. His theory of inheritance, however, was speculative. Realizing he was not equipped to fully explicate his theory of biological inheritance, Galton abandoned this line of inquiry by the end of that decade and refocused his efforts on identifying statistical laws of heredity of individual differences (Renwick, 2011).

Historians generally agree that Galton was the first to treat nature (as heredity) and nurture (everything else) as separate causal forces (Keller, 2010), but the schism gained biological legitimacy through the work of the German cytologist Auguste Weismann in the 1880s. Whereas Galton’s theory was motivated by his political agenda, Weismann was motivated by a scientific, theoretical agenda. Namely, Weismann opposed Lamarckian inheritance and promoted a view of evolution driven almost entirely by natural selection.

Drawing upon contemporary cytological and embryological research, Weismann made the case that the determinative substances found in the germ cells of plants and animals (called the “germ-plasm”) that are transmitted across generations were physically sequestered very early in embryogenesis and remained buffered from the other cells of the body (“somato-plasm”). This so-called Weismann’s barrier meant that alterations in the soma that develop in the lifetime of the organism through the use or disuse of body parts would not affect the germinal substances transmitted during reproduction (see Winther, 2001, for review). On this view, Lamarckian-style inheritance of acquired characteristics was not biologically possible.

Galton and Weismann’s influence on the life sciences cannot be overstated. Their work convinced many to draw unusually sharp distinctions between the inherited (nature) and the acquired (nurture). Although their theories were met with much resistance and generated significant tension in the life sciences from cytology to psychology, their efforts helped stage a new epistemic space through which to appreciate Mendel’s soon to be rediscovered breeding studies and usher in genetics (Muller-Wille & Rheinberger, 2012).

Ever since, psychology has teetered between nature-biased and nurture-biased positions. With the rise of genetics, the wedge between nature and nurture was deepened in the early to mid-20th century, creating fields of study that focused exclusively on the effects of either nature or nurture.

The “Middle Ground” Perspective on Nature–Nurture

Twenty-first-century psychology textbooks often state that the nature–nurture debates have been resolved, and the tension relaxed, because we have moved on from emphasizing nature or nurture to appreciating that development necessarily involves both nature and nurture. In this middle-ground position, one asks how nature and nurture interact. For example, how do biological (or genetic) predispositions for behaviors or innate knowledge bias early learning experiences? Or how might environmental factors influence the biologically determined (maturational) unfolding of bodily form and behaviors?

Rejection of the Nature–Nurture Divide

For some, the “middle-ground” resolution is as problematic as “either/or” views and does not resolve a deeper source of tension inherent in the dichotomy. On this view, the nature–nurture divide is neither a legitimate nor a constructive way of thinking about development. Instead, developmental analysis reveals that the terms commonly associated with nature (e.g., innate, genetic, hereditary, or instinctual) and nurture (environmental or learned) are so entwined and confounded (and often arbitrary) that their independent effects cannot be meaningfully discussed. The nature–nurture division oversimplifies developmental processes, takes too much for granted, and ultimately hinders scientific progress. Thus not only is there a lingering tension about how to balance the effects of nature and nurture in the middle-ground view, but there is also a growing tension to move beyond the dichotomous nature–nurture framework.

Nativism in Behavior: Instincts

Definitions of instinct can vary tremendously, but many contrast (a) instinct with reason (or intellect, thought, will), which is related to but separable from contrasting (b) instinct with learning (or experience or habit).

Instinct in the Age of Enlightenment

Early usages of the instinct concept, following Aristotle, treated instinct as a mental, estimative faculty (vis aestimativa or aestimativa naturalis) in humans and animals that allowed for the judgments of objects in the world (e.g., seeing a predator) to be deemed beneficial or harmful in a way that transcends immediate sensory experience but does not involve the use of reason (Diamond, 1971). In many of the early usages, the “natural instinct” of animals even included subrational forms of learning.

The modern usage of instincts as unlearned behaviors took shape in the 17th century. By that point it was widely believed that nature or God had implanted in animals and humans innate behaviors and predispositions (“instincts”) to promote the survival of the individual and the propagation of the species. Disagreements arose as to whether instincts derived from innate mental images or were mindlessly and mechanically (physiologically) generated from innately specified bodily organization (Richards, 1987).

Anti-Instinct Movement in the Age of Enlightenment

Challenges to the instinct concept can be found in the 16th century (see Diamond, 1971), but they were most fully developed by empiricist philosophers of the French Sensationalist tradition in the 18th century (Richards, 1987). Sensationalists asserted that animals behaved rationally and all of the so-called instincts displayed by animals could be seen as intelligently acquired habits.

For Sensationalists, instincts, as traditionally understood, did not exist. Species-specificity in behavior patterns could be explained by commonalities in physiological organization, needs, and environmental conditions. Even those instinctual behaviors seen at birth (e.g., that newly hatched chicks peck and eat grain) might eventually be explained by the animal’s prenatal experiences. Erasmus Darwin (1731–1802), for example, speculated that the movements and swallowing experiences in ovo could account for the pecking and eating of grain by young chicks. The anti-instinct sentiment was clearly expressed by the Sensationalist Jean Antoine Guer (1713–1764), who warned that instinct was an “infantile idea” that could only be held by those who are ignorant of philosophy, that traditional appeals to instincts in animals not only explained nothing but served to hinder scientific explanations, and that nothing could be more superficial than to explain behavior by appealing to so-called instincts (Richards, 1987).

The traditional instinct concept survived. For most people, the complex, adaptive, species-specific behaviors displayed by naïve animals (e.g., caterpillars building cocoons, infant suckling behaviors) appeared to be predetermined and unlearned. Arguably as important, however, was the resistance to the theological implications of Sensationalist philosophy.

One of the strongest reactions to Sensationalism was put forward in Germany by Herman Samuel Reimarus (1694–1768). As a natural theologian, Reimarus sought evidence of a God in the natural world, and the species-specific, complex, and adaptive instincts of animals seemed to stand as the best evidence of God’s work. More so than any other, Reimarus extensively catalogued instincts in humans and animals. Rather than treat instincts as behaviors, he defined instincts as natural impulses (inner drives) to act that were expressed perfectly, without reflection or practice, and served adaptive goals (Richards, 1987). He even proposed instincts for learning, a proposal that would resurface in the mid-20th century, as would his drive theory of instinct (Jaynes & Woodward, 1974).

Partly as a result of Reimarus’ efforts, the instinct concept survived into the 19th century. But many issues surrounding the instinct concept were left unsettled. How do instincts differ from reflexive behaviors? What role does learning play in the expression of instincts, if any? Do humans have more or fewer instincts than animals? These questions would persist well into the first decades of the 20th century and ultimately fuel another anti-instinct movement.

Instinct in the 19th Century

In the 19th century, the tension about the nature and nurture of instincts in the lifetime of animals led to debates about the nature and nurture of instincts across generations. These debates dealt with whether instincts should be viewed as “inherited habits” from previous generations or whether they result from natural selection. Debating the relative roles of neo-Lamarckian use-inheritance versus neo-Darwinian natural selection in the transmutation of species became a significant source of tension in the latter half of the 19th century. Although the neo-Lamarckian notion of instincts as inherited habits was rejected in the 20th century, it has resurged in recent years (e.g., see Robinson & Barron, 2017).

Darwinian evolutionary theory required drawing distinctions between native and acquired behaviors, and, perhaps more so than before, behaviors were categorized along a continuum from the purely instinctive (unlearned), to the partially instinctive (requiring some learning), to the purely learned. Still, it was widely assumed that a purely instinctive response would be modified by experience after its first occurrence. As a result, instinct and habit were very much entangled in the lifetime of the organism. The notion of instincts as fixed and unmodifiable would not be widely advanced until after the rise of Weismann’s germ-plasm theory in the late 19th century.

Given their importance in evolutionary theory, there was greater interest in identifying pure instincts more objectively, beyond anecdotal reports. Some of the most compelling evidence was reported by Douglas Spalding (1844–1877) in the early 1870s (see Gray, 1967). Spalding documented numerous instances of naïve animals showing coordinated, seemingly adaptive responses (e.g., hiding) to objects (e.g., the sight of predators) upon their first encounter, and he helped pioneer the use of the deprivation experiment to identify instinctive behaviors. This technique involved selectively depriving young animals of seemingly critical learning experiences or sensory stimulation. Should animals display some species-typical action following deprivation, then, presumably, the behavior could be labeled as unlearned or innate. In all, these studies seemed to show that animals displayed numerous adaptive responses at the very start, prior to any relevant experience. In a variety of ways, Spalding’s work anticipated 20th-century studies of innate behavior. Not only would the deprivation experiment be used as the primary means of detecting native tendencies by European zoologists and ethologists, but Spalding also showed evidence of what would later be called imprinting, critical-period effects, and behavioral maturation.

Reports of pure instinct did not go unchallenged. Lloyd Morgan (1896) questioned the accuracy of these reports in his own experimental work with young animals. In some cases he failed to replicate the results, and in other cases he found that instinctive behaviors were not as finely tuned to objects in the environment as had been claimed. Morgan’s research pointed to the need for greater precision in identifying the learned and instinctive components of behavior, but, like most at the turn of the 20th century, he did not question that animal behavior involved both learned and instinctive elements.

A focus on instinctive behaviors intensified in the 1890s as Weismann’s germ-plasm theory grew in popularity. More so than before, a sharp distinction was drawn between native and acquired characteristics, including behavior (Johnston, 1995). Although some psychologists continued to maintain neo-Lamarckian notions, most German (Burnham, 1972) and American (Cravens & Burnham, 1971) psychologists were quick to adopt Weismann’s theory. They envisioned a new natural science of psychology that would experimentally identify the germinally determined, invariable set of native psychological traits in species and their underlying physiological (neural) basis. However, whereas English-speaking psychologists tended to focus on how this view impacted our understanding of social institutions and its social implications, German psychologists were more interested in the longstanding philosophical implications of Weismann’s doctrine as it related to the differences (if any) between man and beast (Burnham, 1972).

Some anthropologists and sociologists, however, interpreted Weismann’s theory quite differently and used it to elevate sociology as its own scientific discipline. In the 1890s, the French sociologist Émile Durkheim, for example, interpreted Weismann’s germinal determinants as a generic force on human behavior that influenced the development of general predispositions that are molded by the circumstances of life (Meloni, 2016). American anthropologists reached similar conclusions in the early 20th century (Cravens & Burnham, 1971). Because Weismann’s theory divorced biological inheritance from social inheritance, and because heredity was treated as a generic force, sociologists felt free to study social (eventually, “cultural”) phenomena without reference to biological or psychological concerns.

Anti-Instinct Movement in the 1920s

Despite their differences, in the first two decades of the 20th century both psychologists and sociologists generally assumed that humans and animals had some native tendencies or instincts. Concerns were even voiced that instinct had not received enough attention in psychology. Disagreements about instincts continued to focus on the now centuries-old question of how to conceptualize them. Were they complex reflexes, impulses, or motives to act, or should instinct be a mental faculty (like intuition), separate from reasoning and reflex (Herrnstein, 1972)?

In America, the instinct concept came under fire following a brief paper in 1919 by Knight Dunlap titled “Are There Any Instincts?” His primary concern dealt with teleological definitions of instincts in which an instinct referred to all the activities involved in obtaining some end-state (e.g., instincts of crying, playing, feeding, reproduction, war, curiosity, or pugnacity). Defined in this way, human instincts were simply labels for human activities, but how these activities were defined was arbitrarily imposed by the researchers. Is feeding, for instance, an instinct, or is it composed of more basic instincts (like chewing and swallowing)? The arbitrariness of classifying human behavior had led to tremendous inconsistencies and confusion among psychologists.

Not all of the challenges to instinct dealt with its teleological usage. Some of the strongest criticisms were voiced by Zing-Yang Kuo throughout the 1920s. Kuo was a Chinese animal psychologist who studied under Edward C. Tolman at the University of California, Berkeley. Although Kuo’s attacks on instinct changed throughout the 1920s (see Honeycutt, 2011), he ultimately argued that all behaviors develop in experience-dependent ways and that appeals to instinct were statements of ignorance about how behaviors develop. Like Dunlap, he warned that instincts were labels with no explanatory value. To illustrate, after returning to China, he showed how the so-called rodent-killing instinct in cats, often cited by instinct theorists, is not found in kittens that are reared with rodents (Kuo, 1930). These kittens, instead, became attached to the rodents and resisted attempts to train rodent-killing. Echoing the point made by Guer, Kuo claimed that appeals to instinct served to stunt scientific inquiry into the developmental origins of behavior.

But Kuo did not just challenge the instinct concept. He also argued against labeling behaviors as “learned.” After all, whether an animal “learns” depends on the surrounding environmental conditions, the physiological and developmental status of the animal, and, especially, the developmental (or experiential) history of that animal. Understanding learning also required developmental analysis. Thus Kuo targeted the basic distinction between nature and nurture, and he was not alone in doing so (e.g., see Carmichael, 1925), but his call to reject it did not spread to mainstream American psychologists.

By the 1930s, the term instinct had fallen into disrepute in psychology, but experimental psychologists (including behaviorists) remained committed to a separation of native from acquired traits. If anything, the dividing line between native and acquired behaviors became more sharply drawn than before (Logan & Johnston, 2007). For some psychologists, instinct was simply rebranded in the less contentious (but still problematic) language of biological drives or motives (Herrnstein, 1972). Many other psychologists simply turned to describing native traits as due to “maturation” and/or “heredity” rather than “instinct.”

Fixed Action Patterns

The hereditarian instinct concept received a reboot in Europe in the 1930s with the rise of ethology, led by Konrad Lorenz, Niko Tinbergen, and others. Just as animals inherit organs that perform specific functions, ethologists believed animals also inherit behaviors that evolved to serve adaptive functions. Instincts were described as unlearned (inherited), blind, stereotyped, adaptive fixed action patterns, impervious to change, that are initiated (released) by specific stimuli in the environment.

Ethologists in the 1930s and 1940s were united under the banner of innateness. They were increasingly critical of the trend among American psychologists (i.e., behaviorists) to focus on how a limited number of domesticated species (e.g., the white rat) responded to training in artificial settings (Burkhardt, 2005). Ethologists instead began with rich descriptions of animal behavior in more natural environments, along with detailed analyses of the stimulus conditions that released the fixed action patterns. To test whether behavioral components were innate, ethologists relied primarily on the deprivation experiment popularized by Spalding in the 19th century. Using these methods (and others), ethologists identified numerous fascinating examples of instinctive behaviors, which captured mainstream attention.

In the early 1950s, shortly after ethology had gained professional status (Burkhardt, 2005), a series of challenges regarding instinct and innateness were put forth by a small cadre of North American behavioral scientists (e.g., T. C. Schneirla, Donald Hebb, Frank Beach). Arguably the most influential critique was voiced by comparative psychologist Daniel Lehrman (1953), who presented a detailed and damning critique of deprivation experiments on empirical and logical grounds. Lehrman explained that deprivation experiments isolate the animal from some but not all experiences. Thus deprivation experiments simply change what an animal experiences rather than eliminating experience altogether, and so they cannot possibly determine whether a behavior is innate (independent of experience). Instead, these experiments show what environmental conditions do not matter in the development of a behavior but do not speak to what conditions do matter.

Lehrman went on to argue that the whole endeavor to identify instinctive or innate behavior was misguided from the start. All behavior, according to Lehrman, develops from a history of interactions between an organism and its environment. If a behavior is found to develop in the absence of certain experiences, the researcher should not stop and label it as innate. Rather, research should continue to identify the conditions under which the behavior comes about. In line with Kuo, Lehrman repeated the warning that to label something as instinctive (or inherited or maturational) is a statement of ignorance about how that behavior develops and does more to stunt than promote research.

Lehrman’s critique created significant turmoil among ethologists. As a result, ethologists took greater care in using the term innate, and it led to new attempts to synthesize or re-envision learning and instinct.

Some of these attempts focused on an increased role for learning and experience in the ontogeny of species-typical behaviors. These efforts spawned significant cross-talk between ethologists and comparative psychologists to more thoroughly investigate behavioral development under natural conditions. Traditional appeals to instinct and learning (as classical and operant conditioning) were both found to be inadequate for explaining animal behavior. In their stead, these researchers focused more closely on how anatomical, physiological, experiential, and environmental conditions influenced the development of species-typical behaviors.

Tinbergen (1963) was among those ethologists who urged for greater developmental analysis of species-typical behaviors, and he included it as one of his four problems in the biological study of organisms, along with causation (mechanism), survival value (function), and evolution. Of these four problems, Tinbergen believed ethologists were especially well suited to study survival value, which he felt had been seriously neglected (Burkhardt, 2005).

The questions of survival value coupled with models of population genetics would gain significant momentum in the 1960s and 1970s in England and the United States with the rise of behavioral ecology and sociobiology (Griffiths, 2008). But because these new fields seemed to promote some kind of genetic determinism in behavioral development, they were met with much resistance and reignited a new round of nature–nurture debates in the 1970s (see Segerstrale, 2000).

However, not all ethologists abandoned the instinct concept. Lorenz, in particular, continued to defend the division between nature and nurture. Rather than speaking of native and acquired behaviors, Lorenz later spoke of two different sources of information for behavior (innate/genetic vs. acquired/environmental), which was more a subtle shift in language than it was an actual change in theory, as Lehrman later pointed out.

Some ethologists followed Lorenz’s lead and continued to maintain more of a traditional delineation between instinct and learning. Their alternative synthesis viewed learning as instinctive (Gould & Marler, 1987). They proposed that animals have evolved domain-specific “instincts to learn” that result from a species’ genetic predispositions and innate knowledge. To support the idea of instincts for learning, ethologists pointed to traditional ethological findings (on imprinting and birdsong learning), but they also drew from the growing body of work in experimental psychology that seemed to indicate certain types of biological effects on learning.

Biological Constraints and Preparedness

While ethology was spreading in Europe in the 1930s–1950s, behaviorism reigned in the United States. Just as ethologists were confronted with including a greater role of nurture in their studies, behaviorists were challenged to consider a greater role of nature.

Behaviorists assumed there to be some behavioral innateness (e.g., fixed action patterns, unconditioned reflexes, primary reinforcers and drives). But because behaviorists focused on learning, they tended to study animals in laboratory settings using biologically (or ecologically) irrelevant stimuli and responses to minimize any role of instinct (Johnston, 1981). It was widely assumed that these studies would identify general laws of learning that applied to all species regardless of the specific cues, reinforcers, and responses involved.

Challenges to the generality assumption began to accumulate in the 1960s. Some studies pointed to failures that occurred during conditioning procedures. Breland and Breland (1961), for example, reported that some complex behaviors formed through operant conditioning would eventually become “displaced” by conditioned fixed action patterns in a phenomenon they called “instinctive drift.” Studies of taste-aversion learning (e.g., Garcia & Koelling, 1966) also reported the failure of rats to associate certain events (e.g., flavors with shock or audiovisual stimuli with toxicosis).

Other studies were pointing to enhanced learning. In particular, it was found that rats could form strong conditioned taste aversions after only a single pairing between a novel flavor and illness. (This rapid “one-trial learning” emerged from the same taste-aversion research program of Garcia and colleagues cited above.) Animals, it seemed, had evolved innate predispositions to form (or not form) certain associations.

In humans, studies of biological constraints on learning were mostly limited to fear conditioning. Evidence indicated that humans conditioned differently to (biologically or evolutionarily) fear-relevant stimuli like pictures of spiders or snakes than to fear-irrelevant stimuli like pictures of mushrooms or flowers (Ohman, Fredrikson, Hugdahl, & Rimmö, 1976).

These findings and others were treated as a major problem in learning theory and led to calls for a new framework to study learning from a more biologically oriented perspective that integrated the evolutionary history and innate predispositions of the species. These predispositions were described as biological “constraints” on learning, “preparedness” for learning, or “adaptive specializations” for learning, all of which were consistent with the “instincts to learn” framework proposed by ethologists.

By the 1980s it was becoming clear that the biological preparedness/constraint view of learning suffered some limitations. For example, what constraints count as “biological” was questioned. It was well established that there were general constraints on learning associated with the intensity, novelty, and timing of stimuli. But, arbitrarily it seemed, these constraints were not classified as “biological” (Domjan & Galef, 1983). Other studies of “biological constraints” found that 5- and 10-day-old rats readily learned to associate a flavor with shock (unlike in adults), but (like in adults) such conditioning was not found in 15-day-old rats (Hoffman & Spear, 1988). In other words, the constraint on learning was not present in young rats but developed later in life, suggesting a possible role of experience in bringing about the adult-like pattern.

Attempts to synthesize these alternatives led to numerous calls for more ecologically oriented approaches to learning not unlike the synthesis between ethology and comparative psychology in the 1960s. All ecological approaches to learning proposed that learning should be studied in the context of “natural” (recurrent and species-typical) problems that animals encounter (and have evolved to encounter), using ecologically meaningful stimuli and responses. Some argued (e.g., Johnston, 1981) that studies of learning should take place within the larger context of studying how animals develop and adapt to their surroundings. Others (Domjan & Galef, 1983) pointed to more of a comparative approach in studying animal learning in line with behavioral ecology that takes into account how learning can be influenced by the possible selective pressures faced by each species. Still, how to synthesize biological constraints (and evolutionary explanations) on learning with a general process approach remains a source of tension in experimental psychology.

Nativism in Mind: Innate Ideas

Nativism and Empiricism in Philosophy

In the philosophy of mind, nature–nurture debates are voiced as debates between nativists and empiricists. Nativism is a philosophical position that holds that our minds have some innate (a priori to experience) knowledge, concepts, or structure at the very start of life. Empiricism, in contrast, holds that all knowledge derives from our experiences in the world.

However, rarely (if ever) were there pure nativist or empiricist positions, but the positions bespeak a persistent tension. Empiricists tended to eschew innateness and promote a view of mental content as built by general mechanisms (e.g., association) operating on sensory experiences, whereas nativists tended to promote a view of mind that contains domain-specific, innate processes and/or content (Simpson, Carruthers, Laurence, & Stich, 2005). Although the tension about mental innateness would loosen as empiricism gained prominence in philosophy and science, the strain never went away and would intensify again in the 20th century.

Nativism in 20th Century Psychology: The Case of Language Development

In the first half of the 20th century, psychologists generally assumed that knowledge was gained or constructed through experience with the world. This is not to say that psychologists did not assume some innate knowledge. The Swiss psychologist Jean Piaget, for example, believed infants enter the world with some innate knowledge structures, particularly as they relate to early sensory and motor functioning (see Piaget, 1971). But the bulk of his work dealt with the construction of conceptual knowledge as children adapt to their worlds. By and large, there were no research programs in psychology that sought to identify innate factors in human knowledge and cognition until the 1950s (Samet & Zaitchick, 2017).

An interest in psychological nativism was instigated in large part by Noam Chomsky’s (1959) critique of B. F. Skinner’s book on language. To explain the complexity of language, he argued, we must view language as the knowledge and application of grammatical rules. He went on to claim that the acquisition of these rules could not be attributed to any general-purpose, learning process (e.g., reinforcement). Indeed, language acquisition occurs despite very little explicit instruction. Moreover, language is special in terms of its complexity, ease, and speed of acquisition by children and in its uniqueness to humans. Instead, he claimed that our minds innately contain some language-specific knowledge that kick-starts and promotes language acquisition. He later claimed this knowledge can be considered some sort of specialized mental faculty or module he called the “language acquisition device” (Chomsky, 1965) or what Pinker (1995) later called the “language instinct.”

To support the idea of linguistic nativism, Chomsky and others appealed to the poverty of the stimulus argument. In short, this argument holds that our experiences in life are insufficient to explain our knowledge and abilities. When applied to language acquisition, this argument holds that children’s knowledge of language (grammar) goes far beyond the limited, and sometimes broken, linguistic events that children directly encounter. Additional evidence for nativism drew upon the apparent maturational quality of language development. Despite wide variations in languages and child-rearing practices across the world, the major milestones in language development appear to unfold in children in a universal sequence and timeline, and some evidence suggested a critical period for language acquisition.

Nativist claims about language sparked intense rebuttals by empiricist-minded psychologists and philosophers. Some of these retorts tackled the logical limitations of the poverty of stimulus argument. Others pointed to the importance of learning and social interaction in driving language development, and still others showed that language (grammatical knowledge) may not be uniquely human (see Tomasello, 1995, for review). Nativists, in due course, provided their own rebuttals to these challenges, creating a persistent tension in psychology.

Extending Nativism Beyond Language Development

In the decades that followed, nativist arguments expanded beyond language to include cognitive domains that dealt with understanding the physical, psychological, and social worlds. Developmental psychologists were finding that infants appeared to be much more knowledgeable in cognitive tasks (e.g., on understanding object permanence) and skillful (e.g., in imitating others) than had previously been thought, and at much younger ages. Infants also showed a variety of perceptual biases (e.g., preference for face-like stimuli over equally complex non-face-like stimuli) from very early on. Following the standard poverty of the stimulus argument, these findings were taken as evidence that infants enter the world with some sort of primitive, innate, representational knowledge (or domain-specific neural mechanisms) that constrains and promotes subsequent cognitive development. The nature of this knowledge (e.g., as theories or as core knowledge), however, continues to be debated (Spelke & Kinzler, 2007).

Empiricist-minded developmental psychologists responded by demonstrating shortcomings in the research used to support nativist claims. For example, in studies of infants’ object knowledge, the behavior of infants (looking time) in nativist studies could be attributed to relatively simple perceptual processes rather than to the infants’ conceptual knowledge (Heyes, 2014). Likewise, reports of human neonatal imitation not only suffered from failures to replicate but could be explained by simpler mechanisms (e.g., arousal) than true imitation (Jones, 2017). Finally, studies of perceptual preferences found in young infants, like newborn preferences for face-like stimuli, may not be specific preferences for faces per se but instead may reflect simpler, nonspecific perceptual biases (e.g., preferences for top-heavy visual configurations and congruency; Simion & Di Giorgio, 2015).

Other arguments from empiricist-minded developmental psychologists focused on the larger rationale for inferring innateness. Even if it is conceded that young infants, like two-month-olds, or even two-day-olds, display signs of conceptual knowledge, there is no good evidence to presume the knowledge is innate. Their knowledgeable behaviors could still be seen as resulting from their experiences (many of which may be nonobvious to researchers) leading up to the age of testing (Spencer et al., 2009).

In the 21st century, there is still no consensus about the reality, extensiveness, or quality of mental innateness. If there is innate knowledge, can experience add new knowledge or only expand the initial knowledge? Can the doctrine of innate knowledge be falsified? There are no agreed-upon answers to these questions. The recurring arguments for and against mental nativism continue to confound developmental psychologists.

Maturation Theory

The emergence of bodily changes and basic behavioral skills sometimes occurs in an invariant, predictable, and orderly sequence in a species despite wide variations in rearing conditions. These observations are often attributed to the operation of an inferred, internally driven, maturational process. Indeed, 21st-century textbooks in psychology commonly associate “nature” with “maturation,” where maturation is defined as the predetermined unfolding of the individual from a biological or genetic blueprint. Environmental factors play a necessary, but fundamentally supportive, role in the unfolding of form.

Preformationism Versus Epigenesis in the Generation of Form

The embryological generation of bodily form was debated in antiquity but received renewed interest in the 17th century. Following Aristotle, some claimed that embryological development involved “epigenesis,” defined as the successive emergence of form from a formless state. Epigenesists, however, struggled to explain what orchestrated development without appealing to Aristotelean souls. Attempts were made to invoke natural causes like physical and chemical forces, but, despite their best efforts, the epigenesists were forced to appeal to the power of presumed, quasi-mystical, vitalistic forces (entelechies) that directed development.

The primary alternative to epigenesis was “preformationism,” which held that development involved the growth of pre-existing form from a tiny miniature (homunculus) that formed immediately after conception or was preformed in the egg or sperm. Although it seems reasonable to guess that the invention and widespread use of the microscope would immediately lay to rest any claim of homuncular preformationism, this was not the case. To the contrary, some early microscopists claimed to see signs of miniature organisms in sperm or eggs, and failures to find these miniatures were explained away (e.g., the homunculus was transparent or deflated to the point of being unrecognizable). But as microscopes improved and more detailed observations of embryological development were reported in the late 18th and 19th centuries, homuncular preformationism was finally refuted.

From Preformationism to Predeterminism

Despite the rejection of homuncular preformationism, preformationist appeals can be found throughout the 19th century. One of the most popular preformationist theories of embryological development was put forth by Ernst Haeckel in the 1860s (Gottlieb, 1992). He promoted a recapitulation theory (not original to Haeckel) that maintained that the development of the individual embryo passes through all the ancestral forms of its species. Ontogeny was thought to be a rapid, condensed replay of phylogeny. Indeed, for Haeckel, phylogenesis was the mechanical cause of ontogenesis. The phylogenetic evolution of the species created the maturational unfolding of embryonic form. Exactly how this unfolding takes place was less important than its phylogenetic basis.

Most embryologists were not impressed with recapitulation theory. After all, the great embryologist Karl Ernst von Baer (1792–1876) had refuted strict recapitulation decades earlier. Instead, there was greater interest in how best to explain the mechanical causes of development, ushering in a new “experimental embryology.” Many experimental embryologists followed the earlier epigenesists by discussing vitalistic forces operating on the unorganized zygote. But it soon became clear that the zygote was structured, and many people believed the zygote contained special (unknown) substances that specified development. Epigenesis-minded experimental embryologists soon warned that the old homuncular preformationism was being transformed into a new predetermined preformationism.

As a result, the debates between preformationism and epigenesis were reignited in experimental embryology, but the focus of these debates shifted to the various roles of nature and nurture during development. More specifically, research focused on the extent to which early cellular differentiation was predetermined by factors internal to cells like chromosomes or cytoplasm (preformationism, nature) or involved factors (e.g., location) outside of the cell (epigenesis, nurture). The former emphasized reductionism and developmental programming, whereas the latter emphasized some sort of holistic, regulatory system responsive to internal and external conditions. The tension between viewing development as predetermined or “epigenetic” persists into the 21st century.

Preformationism gained momentum in the 20th century following the rediscovery of Mendel’s studies of heredity and the rapid rise of genetics, but not because of embryological research on the causes of early differentiation. Instead, preformationism prevailed because it seemed embryological research on the mechanisms of development could be ignored in studies of hereditary patterns.

The initial split between heredity and development can be found in Galton’s speculations but is usually attributed to Weismann’s germ-plasm theory. Weismann’s barrier seemed to posit that the germinal determinants present at conception would be the same, unaltered determinants transmitted during reproduction. This position, later dubbed “Weismannism,” was ironically not one promoted by Weismann. Like nearly all theorists in the 19th century, he viewed the origins of variation and heredity as developmental phenomena (Amundson, 2005), and he claimed that the germ-plasm could be directly modified in the lifetime of the organism by environmental (e.g., climatic and dietary) conditions (Winther, 2001). Still, Weismann’s theory treated development as a largely predetermined affair driven by inherited, germinal determinants buffered from most developmental events. As such, it helped set the stage for a more formal divorce between heredity and development with the rise of Mendelism in the early 20th century.

Mendel’s theory of heredity was exceptional in how it split development from heredity (Amundson, 2005). More so than in Weismann’s theory, Mendel’s theory assumed that the internal factors that determine form and are transmitted across generations remain unaltered in the lifetime of the organism. To predict offspring outcomes, one need only know the combination of internal factors present at conception and their dominance relations. Exactly how these internal factors determined form could be disregarded. The laws of hereditary transmission of the internal factors (e.g., segregation) did not depend on the development or experiences of the organism or the experiences of the organism’s ancestors. Thus the experimental study of heredity (i.e., breeding) could proceed without reference to ancestral records or embryological concerns (Amundson, 2000). By the mid-1920s, the Mendelian factors (now commonly called “genes”) were found to be structurally arranged on chromosomes, and the empirical study of heredity (transmission genetics) was officially divorced from studies of development.

The splitting of heredity and development found in Mendel’s and Weismann’s work met with much resistance. Neo-Lamarckian scientists, especially in the United States (Cook, 1999) and France (Loison, 2011), sought unsuccessfully to experimentally demonstrate the inheritance of acquired characteristics into the 1930s.

In Germany during the 1920s and 1930s, resistance to Mendelism dealt with the chromosomal view of Mendelian heredity championed by American geneticists who were narrowly focused on studying transmission genetics at the expense of developmental genetics. German biologists, in contrast, were much more interested in the broader roles of genes in development (and evolution). In trying to understand how genes influence development, particularly of traits of interest to embryologists, they found the Mendelian theory to be lacking. In the decades between the world wars, German biologists proposed various expanded views of heredity that included some form of cytoplasmic inheritance (Harwood, 1985).

Embryologists resisted the preformationist view of development throughout the early to mid-20th century, often maintaining no divide between heredity and development, but their objections were overshadowed by genetics and its eventual synthesis with evolutionary theory. Consequently, embryological development was treated by geneticists and evolutionary biologists as a predetermined, maturational process driven by internal, “genetic” factors buffered from environmental influence.

Maturation Theory in Psychology

Maturation theory was applied to behavioral development in the 19th century through Haeckel’s recapitulation theory. Some psychologists believed that the mental growth of children recapitulated the history of the human race (from savage brute to civilized human). With this in mind, many people began to more carefully document child development. Recapitulationist notions were found in the ideas of many notable psychologists in the 19th and early 20th centuries (e.g., G. S. Hall), and, as such, the concept played an important role in the origins of developmental psychology (Koops, 2015). But for present purposes what is most important is that children’s mental and behavioral development was thought to unfold via a predetermined, maturational process.

With the growth of genetics, maturational explanations were increasingly invoked to explain nearly all native and hereditary traits. As the instinct concept lost value in the 1920s, maturation theory gained currency, although the shift was largely a matter of semantics. For many psychologists, the language simply shifted from “instinct versus learning” to “maturation versus practice/experience” (Witty & Lehman, 1933).

Initial lines of evidence for maturational explanations of behavior were often the same as those that justified instinct and native traits, but new embryological research presented in the mid-1920s converged to show support for strict maturational explanations of behavioral development. In these experiments (see Wyman, 2005, for review), spanning multiple laboratories, amphibians (salamanders and frogs) were exposed to drugs that acted as anesthetics and/or paralytics throughout the early stages of development, thus reducing sensory experience and/or motor practice. Despite the reduced sensory experiences and being unable to move, these animals showed no delays in the onset of motor development once the drugs wore off.

This maturational account of motor development in amphibians fit well with contemporaneous studies of motor development in humans. The orderly, invariant, and predictable (age-related) sequential appearance of motor skills documented in infants reared under different circumstances (in different countries and across different decades) was seen as strong evidence for a maturational account. Additional evidence was reported by Arnold Gesell and Myrtle McGraw, who independently presented evidence in the 1920s to show that the pace and sequence of motor development in infancy were not altered by special training experiences. Although the theories of these maturation theorists were more sophisticated when applied to cognitive development, their work promoted a view in which development was primarily driven by neural maturation rather than experience (Thelen, 2000).

Critical and Sensitive Periods

As the maturation account of behavioral development gained ground, it became clear that environmental input played a more informative role than had previously been thought. Environmental factors were found to either disrupt or induce maturational changes at specific times during development. Embryological research suggested that there were well-delineated time periods of heightened sensitivity in which specific experimental manipulations (e.g., tissue transplantations) could induce irreversible developmental changes, but the same manipulation would have no effect outside of that critical period.

In the 1950s–1960s a flurry of critical period effects were reported in birds and mammals across a range of behaviors including imprinting, attachment, socialization, sensory development, bird song learning, and language development (Michel & Tyler, 2005). Even though these findings highlighted an important role of experience in behavioral development, evidence of critical periods was usually taken to imply some rigid form of biological determinism (Oyama, 1979).

As additional studies were conducted on critical period effects, it became clear that many of the reported effects were more gradual, variable, and experience-dependent, and not as irreversible as was previously assumed. In light of these reports, there was a push in the 1970s (e.g., Connolly, 1972) to substitute “sensitive period” for “critical period” to avoid the predeterminist connotations associated with the latter and to better appreciate that these periods simply describe (not explain) certain temporal aspects of behavioral development. As a result, a consensus emerged that behaviors should not be attributed to “time” or “age” but to the developmental history and status of the animal under investigation (Michel & Tyler, 2005).

Heredity and Genetics

In the decades leading up to and following the start of the 20th century, it was widely assumed that many psychological traits (not just instincts) were inherited or “due to heredity,” although the underlying mechanisms were unknown. Differences in intelligence, personality, and criminality within and between races and sexes were largely assumed to be hereditary and unalterable by environmental intervention (Gould, 1996). The evidence to support these views in humans was often derived from statistical analyses of how various traits tended to run in families. But all too frequently, explanations of data were clouded by pre-existing, hereditarian assumptions.

Human Behavioral Genetics

The statistical study of inherited human (physical, mental, and behavioral) differences was pioneered by Galton (1869). Although at times Galton wrote that nature and nurture were so intertwined as to be inseparable, he nevertheless devised statistical methods to separate their effects. In the 1860s and 1870s, Galton published reports purporting to show how similarities in intellect (genius, talent, character, and eminence) in European lineages appeared to be a function of degree of relatedness. Galton considered, but dismissed, environmental explanations of his data, leading him to confirm his belief that nature was stronger than nurture.

Galton also introduced the use of twin studies to tease apart the relative impact of nature versus nurture, but the twin method he used was markedly different from later twin studies used by behavioral geneticists. Galton tracked the life history of twins who were judged to be very similar or very dissimilar near birth (i.e., by nature) to test the power of various postnatal environments (nurture) that might make them more or less similar over time. Here again, Galton concluded that nature overpowers nurture.

Similar pedigree (e.g., the Kallikak study; see Zenderland, 2001) and twin studies appeared in the early 1900s, but the first adoption study and the modern twin method (which compares monozygotic to dizygotic twin pairs) did not appear until the 1920s (Rende, Plomin, & Vandenberg, 1990). These reports led to a flurry of additional work on the inheritance of mental and behavioral traits over the next decade.

Behavioral genetic research peaked in the 1930s but rapidly lost prominence due in large part to its association with the eugenics movement (spearheaded by Galton) but also because of the rise and eventual hegemony of behaviorism and the social sciences in the United States. Behavioral genetics resurged in the 1960s with the rising tide of nativism in psychology, and returned to its 1930s-level prominence in the 1970s (McGue & Gottesman, 2015).

The resurgence brought with it a new statistical tool: the heritability statistic. The origins of heritability trace back to early attempts to synthesize Mendelian genetics with biometrics by Ronald Fisher and others. This synthesis ushered in a new field of quantitative genetics, and it marked a new way of thinking about nature and nurture. The shift was to no longer think about nature and nurture as causes of traits in individuals but as causes of variation in traits between populations of individuals. Eventually, heritability came to refer to the amount of variance in a population sample that could be statistically attributed to genetic variation in that sample. Kinship (especially twin) studies provided seemingly straightforward ways of partitioning variation in population trait attributes into genetic versus environmental sources.
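
To make that variance-partitioning logic concrete, here is a minimal sketch (in Python, which this article does not itself use) of Falconer's classic twin-study formula, which estimates heritability as roughly twice the difference between the monozygotic and dizygotic twin correlations. The correlations in the example are hypothetical, and the formula embodies exactly the additivity assumptions criticized later in this section.

```python
# A minimal sketch of Falconer's twin-study variance decomposition.
# The correlations passed in below are hypothetical, purely for illustration.

def falconer_estimates(r_mz: float, r_dz: float) -> dict:
    """Partition trait variance from twin correlations.

    h2 (heritability)          ~= 2 * (r_mz - r_dz)
    c2 (shared environment)    ~= 2 * r_dz - r_mz
    e2 (nonshared environment) ~= 1 - r_mz
    Assumes additive, independent genetic and environmental effects.
    """
    h2 = 2 * (r_mz - r_dz)
    c2 = 2 * r_dz - r_mz
    e2 = 1 - r_mz
    return {"h2": round(h2, 2), "c2": round(c2, 2), "e2": round(e2, 2)}

if __name__ == "__main__":
    # Hypothetical twin correlations for some measured trait.
    print(falconer_estimates(r_mz=0.70, r_dz=0.45))
    # -> {'h2': 0.5, 'c2': 0.2, 'e2': 0.3}
```

Note that the resulting numbers describe variance in a sample, not the makeup of any individual's trait, which is precisely the interpretive point pressed in the critiques below.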

Into the early 21st century, hundreds of behavioral genetic studies of personality, intelligence, and psychopathology were reported. With rare exceptions, these studies converge to argue for a pervasive influence of genetics on human psychological variation.

These studies have also fueled much controversy. Citing in part behavioral genetic research, the educational psychologist Arthur Jensen (1969) claimed that the differences in intelligence and educational achievement in the United States between black and white students appeared to have a strong genetic basis. He went on to assume that because these racial differences appeared hereditary, they were likely impervious to environmental (educational) intervention. His article fanned the embers of past eugenics practices and ignited fiery responses (e.g., Hirsch, 1975). The ensuing debates not only spawned a rethinking of intelligence and how to measure it, but they ushered in a more critical look at the methods and assumptions of behavioral genetics.

Challenges to Behavioral Genetics

Many of the early critiques of behavioral genetics centered on interpreting the heritability statistic commonly calculated in kinship (family, twin, and adoption) studies. Perhaps more so than any other statistic, heritability has been persistently misinterpreted by academics and laypersons alike (Lerner, 2002). Contrary to popular belief, heritability tells us nothing about the relative impact of genetic and environmental factors on the development of traits in individuals. It deals with accounting for trait variation between people, not the causes of traits within people. As a result, a high heritability does not indicate anything about the fixity of traits or their imperviousness to environmental influence (contra Jensen), and a low heritability does not indicate an absence of genetic influence on trait development. Worse still, heritability does not even indicate anything about the role of genetics in generating the differences between people.

Other challenges to heritability focused not on its interpretation but on its underlying computational assumptions. Most notably, heritability analyses assume that genetic and environmental contributions to trait differences are independent and additive. Interactions between genetic and environmental factors were dismissed a priori in these analyses. Studies of development, however, show that no factor (genes, hormones, parenting, schooling) operates independently, making it impossible to quantify how much of a given trait in a person is due to any causal factor. Thus heritability analyses are bound to be misleading because they are based on biologically implausible and logically indefensible assumptions about development (Gottlieb, 2003).

Aside from heritability, kinship studies have been criticized for not being able to disentangle genetic and environmental effects on variation. It had long been known that in family (pedigree) studies, environmental and genetic factors are confounded. Twin and adoption studies seemed to provide unique opportunities to statistically disentangle these effects, but these studies are also deeply problematic in assumptions and methodology. There are numerous plausible environmental reasons for why monozygotic twin pairs could resemble each other more than dizygotic twin pairs or why adoptive children might more closely resemble their biological than their adoptive parents (Joseph & Ratner, 2013).

A more recent challenge to behavioral genetics came from an unlikely source. Advances in genomic scanning in the 21st century made it possible in a single study to correlate thousands of genetic polymorphisms with variation in the psychological profiles (e.g., intelligence, memory, temperament, psychopathology) of thousands of people. These “genome-wide association” studies seemed to have the power and precision to finally identify genetic contributions to heritability at the level of single nucleotides. Yet, these studies consistently found only very small effects.

The failure to find large effects came to be known as the “missing heritability” problem (Maher, 2008). To account for the missing heritability, some behavioral geneticists and molecular biologists asserted that important genetic polymorphisms remain unknown, they may be too rare to detect, and/or that current studies are just not well equipped to handle gene–gene interactions. These studies were also insensitive to epigenetic profiles (see the section on Behavioral Epigenetics), which deal with differences in gene expression. Even when people share genes, they may differ in whether those genes get expressed in their lifetimes.

But genome-wide association studies faced an even more problematic issue: Many of these studies failed to replicate (Lickliter & Honeycutt, 2015). For those who viewed heritability analyses as biologically implausible, the small effect sizes and failures to replicate in genome-wide association studies were not that surprising. The search for independent genetic effects was bound to fail, because genes simply do not operate independently during development.

Behavioral Epigenetics

Epigenetics was a term coined in the 1940s by the developmental biologist Conrad Waddington to refer to a new field of study that would examine how genetic factors interact with local environmental conditions to bring about the embryological development of traits. By the end of the 20th century, epigenetics came to refer to the study of how nongenetic, molecular mechanisms physically regulate gene expression patterns in cells and across cell lineages. The most-studied mechanisms involve organic compounds (e.g., methyl groups) that physically bind to DNA or the surrounding proteins that package DNA. The addition or removal of these compounds can activate or silence gene transcription. Different cell types have different, stable epigenetic markings, and these markings are recreated during cell division so that cells so marked give rise to similar types of cells. Epigenetic changes were known to occur during developmental periods of cellular differentiation (e.g., during embryogenesis), but not until 2004 was it discovered that these changes can occur at other periods of life, including after birth (Roth, 2013).

Of interest to psychologists were reports that different behavioral and physiological profiles (e.g., stress reactivity) of animals were associated with different epigenetic patterns in the nervous system (Moore, 2015). Furthermore, these different epigenetic patterns could be established or modified by environmental factors (e.g., caregiving practices, training regimes, or environmental enrichment), and, under certain conditions, they remain stable over long periods of time (from infancy to adulthood).

Because epigenetic research investigates the physical interface between genes and environment, it represents an exciting advance in understanding the interaction of nature and nurture. Despite some warnings that the excitement over behavioral epigenetic research may be premature (e.g., Miller, 2010), for many psychologists, epigenetics underscores how development involves both nature and nurture.

For others, what is equally exciting is the additional evidence epigenetics provides to show that the genome is an interactive and regulated system. Once viewed as the static director of development buffered from environmental influence, the genome is better described as a developing resource of the cell (Moore, 2015). More broadly, epigenetics also points to how development is not a genetically (or biologically) predetermined affair. Instead, epigenetics provides additional evidence that development is a probabilistic process, contingent upon factors internal and external to the organism. In this sense, epigenetics is well positioned to help dissolve the nature–nurture dichotomy.

Beyond Nature–Nurture

In the final decades of the 20th century, a position was articulated to move beyond the dichotomous nature–nurture framework. The middle-ground position on nature–nurture did not seem up to the task of explaining the origins of form, and it brought about more confusion than clarity. The historical back-and-forth between nature- and nurture-based positions, however balanced, had only gone in circles. Moving forward would require moving beyond such dichotomous thinking (Johnston, 1987).

The anti-dichotomy position, referred to as the Developmentalist tradition, was expressed in a variety of systems-based, metatheoretical approaches to studying development, all of which extended the arguments against nature–nurture expressed earlier by Kuo and Lehrman. The central problem with all nativist claims, according to Developmentalists, is a reliance on preformationism (or predeterminism).

The problem with preformationism, they argue, besides issues of evidence, is that it is an anti-developmental mindset. It presumes the existence of the very thing(s) one wishes to explain and, consequently, discourages developmental analyses. To claim that some knowledge is innate effectively shuts down research on the developmental origins of that knowledge. After all, why look for the origins of conceptual knowledge if that knowledge is there all along? Or why search for any experiential contributions to innate behaviors if those behaviors by definition develop independently of experience? In the words of Developmentalists Thelen and Adolph (1992), nativism “leads to a static science, with no principles for understanding change or for confronting the ultimate challenge of development, the source of new forms in structure and function” (p. 378).

A commitment to maturational theory is likely one of the reasons why studies of motor development remained relatively dormant for decades following their heyday in the 1930s–1940s (Thelen, 2000). Likewise, a commitment to maturational theory also helps explain the delay in neuroscience to examine how the brain physically changes in response to environmental conditions, a line of inquiry that only began in the 1960s.

In addition to the theoretical pitfalls of nativism, Developmentalists point to numerous studies that show how some seemingly native behaviors and innate constraints on learning are driven by the experiences of animals. For example, the comparative psychologist Gilbert Gottlieb (1971) showed that newly hatched ducklings display a naïve preference for a duck maternal call over a (similarly novel) chicken maternal call (Gottlieb, 1971), even when duck embryos were repeatedly exposed to the chicken call prior to hatching (Gottlieb, 1991). It would be easy to conclude that ducklings have an innate preference to approach their own species call and that they are biologically constrained (contraprepared) in learning a chicken call. However, Gottlieb found that the naïve preference for the duck call stemmed from exposure to the duck embryos’ own (or other) vocalizations in the days before hatching (Gottlieb, 1971). Exposure to these vocalizations not only made duck maternal calls more attractive, but it hindered the establishment of a preference for heterospecific calls. When duck embryos were reared in the absence of the embryonic vocalizations (by devocalizing embryos in ovo) and exposed instead to chicken maternal calls, the newly hatched ducklings preferred chicken over duck calls (Gottlieb, 1991). These studies clearly showed how seemingly innate, biologically based preferences and constraints on learning derived from prenatal sensory experiences.

For Developmentalists, findings like these suggest that nativist explanations of any given behavior are statements of ignorance about how that behavior actually develops. As Kuo and Lehrman made clear, nativist terms are labels, not explanations. Although such appeals are couched in respectable, scientific language (e.g., “X is due to maturation, genes, or heredity”), they argue it would be more accurate simply to say that “We don’t know what causes X” or that “X is not due to A, B, or C.” Indeed, for Developmentalists, the more we unpack the complex dynamics about how traits develop, the less likely we are to use labels like nature or nurture (Blumberg, 2005).

On the other hand, Developmentalists recognize that labeling a behavior as “learned” also falls short as an explanatory construct. The empiricist position that knowledge or behavior is learned does not adequately take into account that what is learned and how easily something is learned depends on (a) the physiological and developmental status of the person, (b) the nature of the surrounding physical and social context in which learning takes place, and (c) the experiential history of the person. The empiricist tendency to say “X is learned or acquired through experience” can also short-circuit developmental analyses in the same way as nativist claims.

Still, Developmentalists appreciate that classifying behaviors can be useful. For example, the development of some behaviors may be more robust, reliably emerging across a range of environments and/or remaining relatively resistant to change, whereas others are more context-specific and malleable. Some preferences for stimuli require direct experience with those stimuli. Other preferences require less obvious (indirect) types of experiences. Likewise, it can still be useful to describe some behaviors in the ways shown in Table 1. Developmentalists simply urge psychologists to resist the temptation to treat these behavioral classifications as implying different kinds of explanations (Johnston, 1987).

Rather than treat nature and nurture as separate developmental sources of causation (see Figure 1), Developmentalists argue that a more productive way of thinking about nature–nurture is to reframe the division as that between product and process (Lickliter & Honeycutt, 2015). The phenotype or structure (one’s genetic, epigenetic, anatomical, physiological, behavioral, and mental profile) of an individual at any given time can be considered one’s “nature.” “Nurture” then refers to the set of processes that generate, maintain, and transform one’s nature (Figure 2). These processes involve the dynamic interplay between phenotypes and environments.

Figure 2. The developmentalist alternative view of nature–nurture as product–process. Developmentalists view nature and nurture not as separate sources of causation in development (see Figure 1) but as a distinction between process (nurture) and product (nature).

Conclusion

It is hard to imagine any set of findings that will end debates about the roles of nature and nurture in human development. Why? First, more so than other assumptions about human development, the nature–nurture dichotomy is deeply entrenched in popular culture and the life sciences. Second, throughout history, the differing positions on nature and nurture were often driven by other ideological, philosophical, and sociopolitical commitments. Thus the essential source of tension in debates about nature–nurture is not as much about research agendas or evidence as about basic differences in metatheoretical positions (epistemological and ontological assumptions) about human behavior and development (Overton, 2006).


4.1. The Nuts and Bolts of Respondent Conditioning

Section Learning Objectives

  • Describe Pavlov’s accidental discovery.
  • Define respondent conditioning.
  • Recognize other terms used for respondent conditioning.
  • Outline the three phases of respondent conditioning. Define all terms.
  • Describe and exemplify higher order conditioning.
  • Contrast appetitive and aversive conditioning.
  • Contrast excitatory and inhibitory conditioning.
  • Outline and describe the four temporal presentations of US and NS in respondent conditioning.
  • Describe the phenomenon of pseudoconditioning.

4.1.1. Pavlov and His Dogs

You have likely heard about Pavlov and his dogs, but what you may not know is that this was a discovery made accidentally. Ivan Petrovich Pavlov (1849–1936), a Russian physiologist, was interested in studying digestive processes in dogs in response to being fed meat powder. What he discovered was that the dogs would salivate even before the meat powder was presented. They would salivate at the sound of a bell, footsteps in the hall, a tuning fork, or the presence of a lab assistant. Pavlov realized there were some stimuli that automatically elicited responses (such as salivating to meat powder) and those that had to be paired with these automatic associations for the animal or person to respond to them (such as salivating to a bell). Armed with this stunning revelation, Pavlov spent the rest of his career investigating the learning phenomenon and won a Nobel Prize in 1904 for his work on digestion.

The important thing to understand is that not all behaviors occur due to reinforcement and punishment as operant conditioning says. In the case of respondent conditioning, antecedent stimuli exert complete and automatic control over some behaviors. We saw this in the case of reflexes. When a doctor strikes your knee with that little hammer, your leg extends automatically. You do not have to do anything but watch. Babies will root for a food source if the mother’s breast is placed near their mouth. If a nipple is placed in their mouth, they will also automatically suck, as per the sucking reflex. Humans have several of these reflexes, though not as many as other animals, due to our more complicated nervous system.

4.1.2. Respondent Conditioning Described

Respondent conditioning occurs when we link or pair a previously neutral stimulus with a stimulus that is unlearned or inborn, called an unconditioned stimulus. Note that this form of learning also goes by the name classical conditioning or Pavlovian conditioning in honor of Ivan Pavlov.

Respondent conditioning is best described as occurring in three phases: pre-conditioning, conditioning, and post-conditioning. See Figure 4.1 for an overview of Pavlov’s classic experiment.

Let’s define terms first. The term conditioning means learning. So pre-conditioning is before learning occurs, conditioning is during learning or the acquisition of the relationship between the two stimuli, and post-conditioning is after learning has occurred. If we say something is un-conditioned, it is not learned. Going back to our earlier philosophical discussion, this is a response that is innate or present at birth. Also keep in mind that the stimulus is what is sensed in the world through vision, hearing, smell, taste, or touch. The response is the behavior that is made. Making sure you have the terms straight will help you understand respondent conditioning more easily.

4.1.2.1. Pre-conditioning. Notice that pre-conditioning has both an A and a B panel. All this stage of learning signifies is that some knowledge is already present. In Panel A, the taste of food makes a dog salivate. This does not need to be trained and is the relationship of an unconditioned stimulus (US) yielding an unconditioned response (UR). The association occurs naturally. In Panel B, we see that a neutral stimulus (NS) yields nothing. Dogs do not enter the world knowing to respond to the ringing of a bell (which they hear).

4.1.2.2. Conditioning. Conditioning is when learning occurs. Through a pairing of a neutral stimulus and an unconditioned stimulus (bell and food, respectively), the dog will learn that the bell ringing (NS) signals food coming (US) and salivate (UR). The key is that the NS is presented just before the US, which yields a UR (in most cases; more on that in a bit).

4.1.2.3. Post-conditioning. Post-conditioning, or after learning has occurred, establishes a new and not naturally occurring relationship of a conditioned stimulus (CS; previously the NS) and conditioned response (CR; the same response). So, the dog now reliably salivates at the sound of the bell because he expects that food will follow, and it does. If it doesn’t, the response ends or extinguishes, as you will see later.

Figure 4.1. Pavlov’s Classic Experiment

Let’s now clearly define our terms:

  • Unconditioned stimulus — The stimulus that naturally elicits a response.
  • Unconditioned response —The response that occurs naturally when the US is present.
  • Neutral stimulus — A stimulus that causes no response.
  • Conditioned stimulus — The initially neutral stimulus that has been associated with a naturally occurring stimulus to bring about a response.
  • Conditioned response — The response that is elicited by the CS; it resembles the UR but is usually weaker (the dog salivates to the bell, though not as much as it does to the sight/smell/taste of the food).
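
As a quick self-check on these terms, the short fragment below (a Python sketch added for illustration; the labels simply restate the definitions above) maps each element of Pavlov's bell-and-food experiment to its abbreviation.

```python
# The elements of Pavlov's classic experiment, labeled with the standard
# abbreviations defined above. Purely a restatement for self-checking.

PAVLOV_EXPERIMENT = {
    "US": "food (elicits salivation without any training)",
    "UR": "salivation to the food",
    "NS": "bell before conditioning (elicits no salivation)",
    "CS": "bell after repeated bell-food pairings",
    "CR": "salivation to the bell alone (weaker than the UR)",
}

if __name__ == "__main__":
    for abbreviation, meaning in PAVLOV_EXPERIMENT.items():
        print(f"{abbreviation}: {meaning}")
```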

Note to Student: Be sure you not only understand these terms but the acronyms used to represent them. I will use the shorthand the rest of the way through this module and in other places in the book.

To fully understand respondent conditioning, know that the pairings of an NS and US each represent a single trial, called the conditioning trial. The period between conditioning trials is called the intertrial interval. The period between the presentation of the NS and then the US (Panel C) within a conditioning trial is called the interstimulus interval.

The entire process of conditioning, from when we first make the association between an NS and US to its strengthening over time through repeated pairings, is called acquisition. It is likely not surprising to learn that conditioning occurs more quickly if the US is more intense. We will be more motivated to learn to associate making an incorrect response with shock if we receive 150 volts compared to 25 volts.
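
For readers who find a worked example helpful, here is a minimal simulation sketch of acquisition using a simple error-correction update of associative strength (in the spirit of the Rescorla–Wagner model, which this module does not formally introduce). The learning rate and asymptote values are arbitrary; a more intense US is modeled, as one plausible simplification, by a higher asymptote.

```python
# Minimal sketch: associative strength grows across conditioning trials.
# A more intense US is modeled as a higher asymptote (lambda_), so the
# curve both rises more steeply and levels off higher. Numbers are
# illustrative only.

def acquisition_curve(n_trials: int, lambda_: float, alpha: float = 0.3):
    """Return associative strength V after each NS-US pairing."""
    v = 0.0
    history = []
    for _ in range(n_trials):
        v += alpha * (lambda_ - v)   # error-correction update
        history.append(round(v, 3))
    return history

if __name__ == "__main__":
    print("weak US  :", acquisition_curve(8, lambda_=0.4))
    print("strong US:", acquisition_curve(8, lambda_=1.0))
```

Running it shows the strong-US curve climbing more steeply trial by trial and reaching a higher final level than the weak-US curve over the same eight pairings.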

Conditioning is also more effective when the trials are spaced rather than massed (Menzel et al., 2001). For instance, spacing the trials 5 minutes apart is more effective than spacing them 25 seconds apart. One explanation for this is that we have time to rehearse the CS and US in memory during the intertrial interval and if a new trial occurs too soon, it could interfere with rehearsal (Wagner, Rudy, & Whitlow, 1973).

And we can determine how good the learning is if we toss in a test trial occasionally in which the NS is presented alone to see if it elicits the response (UR/CR; ring the bell alone and see if salivation occurs). We can also wait to see whether, after the presentation of the NS (bell) and before the US appears (sight of food), the UR/CR (salivation) appears on its own. In other words, does the response occur during the interstimulus interval?

4.1.3. Conditioning and its Different Forms

It is worth noting that the conditioning procedure described in the preceding section on Pavlov is not the only form it can take. In this section, we will discuss a type of layered conditioning, conditioning based on the event being something we desire or want to avoid, conditioning based on the presentation or removal of the US, and finally temporal factors that can produce unique conditioning procedures.

4.1.3.1. Higher order conditioning. Sometimes, a stimulus that is associated with a CS (formerly the NS) becomes a CS itself and elicits the CR. We call this higher order conditioning, and each level of conditioning is referred to as first, second, third, etc. order conditioning. So how might this work?

Being assaulted (US) will elicit fear (UR). A person wearing a ski mask alone would not cause any response (it is an NS1). If, though, you pair the person wearing the ski mask (NS1) with the assault (US), which causes fear (UR), then the sight of a person wearing a ski mask (CS1) will elicit fear (CR). Keep in mind that with the stimuli, you see a person wearing a ski mask and feel the effects of the assault (touch or pain receptors in the skin will be activated). This is first-order conditioning (not to be confused with the training of First Order stormtroopers in Star Wars) and in this example involves a person being associated with fear.

But what if the assault occurred in an alley in your neighborhood? Now the alley (NS2) is paired with the person wearing the ski mask (CS1) which causes fear (CR), and post-conditioning shows that the alley (CS2) causes fear (CR). This is second-order conditioning and involves a location being associated with fear.

Could the time of day be a factor too? What if the mugging occurred at night? If night (NS3) is paired with the alley (CS2), which causes fear (CR), then being outside at night (CS3) could lead to fear (or at least some anxiety; CR). This would be third-order conditioning and now involves a time of day being associated with fear.

Fear was originally elicited by being assaulted. Through higher order conditioning, it was also elicited by the sight of a ski mask, being in an alley, and being outside at night. The fear reaction becomes weaker across the conditioning of these additional NS, such that our response to being outside at night might be better classified as anxiety rather than the bona fide fear felt while being assaulted (and likely for a time afterward). In other words, the response is strongest to the US and becomes weaker across CS1, CS2, and CS3.
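
The weakening of the response across successive orders can be pictured with a small sketch; the single decay factor per order is an arbitrary illustrative choice, not an empirical value.

```python
# Minimal sketch of higher order conditioning: each new CS is conditioned
# against the previous CS rather than the US, so response strength fades
# across orders. The 0.5 decay per order is purely illustrative.

CHAIN = ["assault (US)", "ski mask (CS1)", "alley (CS2)", "night (CS3)"]

def response_strengths(chain, decay: float = 0.5):
    strength = 1.0          # fear elicited by the US itself
    out = {}
    for stimulus in chain:
        out[stimulus] = round(strength, 2)
        strength *= decay   # each further order elicits a weaker response
    return out

if __name__ == "__main__":
    for stimulus, fear in response_strengths(CHAIN).items():
        print(f"{stimulus:15s} -> fear strength {fear}")
```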

4.1.3.2. Appetitive and aversive conditioning. Recall from Section 2.1.3 that appetitive stimuli are those that an organism desires and seeks out while aversive stimuli are readily avoided. In respondent conditioning, the US could be an appetitive or aversive stimulus. For instance, in appetitive conditioning, the US would be something desirable such as candy which makes us happy. Other examples could include water, food, sex, or drugs. In aversive conditioning, the stimulus is not pleasant and could include extreme temperatures, a painful sting such as from a wasp or a bite from a dog, electric shock, or something that does not smell nice. It would not be surprising to learn that conditioning occurs relatively fast when aversive US are involved. Since these stimuli could harm or kill us, learning to avoid them is adaptive and aids our survival.

4.1.3.3. Excitatory and inhibitory conditioning. Our discussion so far has included examples in which the NS is associated with the presentation of the US, called excitatory conditioning. Pavlov’s dogs associated the ringing of a bell (NS) with the presentation of the food (US), which caused their salivation (UR). Eventually, salivation (CR) occurred to just the ringing of the bell (CS).

Interestingly enough, the absence of the US could be associated with an NS too, in a process called inhibitory conditioning. Go back to our example of higher order conditioning. A person wearing a ski mask is an excitatory CS for fear, but seeing someone wearing such a mask during the daytime leads to an inhibition of fear. Daytime indicates a safe period, and we will not be overly concerned about ski masks. We have only ever been assaulted at night. The excitatory CS is expressed as CS+ and the inhibitory CS as CS-.

4.1.3.4. Temporal factors affecting conditioning. In the previous sections we saw that, generally, the US is presented after the NS, though the NS could also be followed by the absence of a US. These examples have always presented the NS before the US, but this ordering is not necessary in all cases.

First, delay conditioning involves the presentation of the NS before the US, but the NS overlaps with the US for a short period of time. In the case of Pavlov’s experiment, the bell would ring for, say, 10 seconds, then the food would enter the room, and the bell would stop ringing 5 seconds after this. The ISI (interstimulus interval) should be relatively brief to use this procedure.

What if we present the NS well ahead of the US in time? Let’s say we ring the bell for 10 seconds and then there is a 5-second gap before the food enters the room. The NS and US do not overlap. This is the basis of trace conditioning, and the trace is a memory that we have to access. The organism will need to remember that the NS occurred before the US to make the association, or that the bell rang before the food came in. The period of time between the NS terminating and the US beginning is called the trace interval and ideally should be short, on the order of a few seconds.

The NS and US could occur at the same time, as in simultaneous conditioning. As you might expect, conditioning in this procedure is poor since the NS does not predict the occurrence of the US; they occur simultaneously. The bell would ring as the food enters the room, so the bell ringing does not create an expectation that food will come shortly, and it is that predictive expectation that aids learning of the association.

Finally, the US could come before the NS in a procedure called backward conditioning. The US would occur first and last for a few seconds with the NS starting near the end of this time. Hence, the NS and US co-occur for a short period of time. Of the four methods, backward conditioning is the least effective for excitatory conditioning though it could lead to inhibitory conditioning. Consider a shock paradigm in which a rat is given a shock (US) and then near the end of the shock a light is turned on (NS). The light (NS) would signal the end of the shock (US) and serve as a safety signal. Hence, the NS would become a CS-.
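
Since the four arrangements differ only in when the NS and US begin and end, they can be summarized by onset and offset times. The sketch below classifies a trial from such timings; the numbers in the usage example are hypothetical and simply echo the 10- and 5-second figures used above.

```python
# Minimal sketch: classify a conditioning trial by the relative timing of
# the NS and US. Times are in seconds from trial onset; the values below
# are illustrative, echoing the bell/food examples in the text.

def classify_trial(ns_on, ns_off, us_on, us_off):
    if ns_on < us_on and ns_off > us_on:
        return "delay conditioning (NS starts first, overlaps US)"
    if ns_on < us_on and ns_off <= us_on:
        return "trace conditioning (gap = trace interval)"
    if ns_on == us_on:
        return "simultaneous conditioning (NS and US co-occur)"
    return "backward conditioning (US starts before NS)"

if __name__ == "__main__":
    print(classify_trial(0, 15, 10, 20))   # delay
    print(classify_trial(0, 10, 15, 25))   # trace
    print(classify_trial(0, 10, 0, 10))    # simultaneous
    print(classify_trial(5, 15, 0, 8))     # backward
```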

4.1.4. How Do You Know if Learning Occurred?

A cardinal feature of science is verifying that any change in your variable of interest (the DV) is caused by the treatment or manipulation (the IV). The elicited response might not actually be caused by the NS/CS, and so might not be a product of learning or conditioning at all; it could instead be caused by sensitization, a problem called pseudoconditioning.

Let’s say you were working with turtles and presented them with a tone (the NS) followed by tapping on the shell (US), which caused the turtles to withdraw into their shells (UR). After a few such pairings, the tone (CS) alone would lead to withdrawal into the shell (CR). So the tone has been associated with tapping, right? Possibly, but suppose that in addition to the tone we also flash a bright light, and the turtles withdraw into their shells at this stimulus too. In sensitization, repeated presentation of a stimulus increases the strength of the response, and it can also lead to other stimuli eliciting the same response, as with the bright light and the tone both eliciting shell withdrawal.

To know whether the effect on behavior is due to conditioning and not sensitization, a simple adjustment can be made: include a control group. The experimental group has the tone and tap paired together, resulting in a withdrawal response. The control group hears the tone and receives the tap, but far apart in time (unpaired). When the tone is later presented alone to each group, the experimental group shows a strong withdrawal response, while the control group may show the same response only weakly. A stronger response in the experimental condition than in the control condition indicates that conditioning has truly occurred; in other words, there is no pseudoconditioning.
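
The logic of that comparison can be written out as a tiny sketch. The response values below are invented numbers on an arbitrary scale, used purely to illustrate the design; the point is only that conditioning is inferred when the paired (experimental) group responds more strongly to the tone alone than the unpaired (control) group does.

```python
# Hypothetical withdrawal-response strengths (arbitrary 0-10 scale) measured
# when the tone is later presented ALONE to each turtle. These numbers are
# invented for illustration only.
experimental = [8, 9, 7, 8, 9]   # tone and tap were paired during training
control      = [3, 2, 4, 3, 2]   # tone and tap were presented far apart in time

def mean(values):
    return sum(values) / len(values)

difference = mean(experimental) - mean(control)

# If the paired group responds clearly more strongly than the unpaired group,
# the effect reflects conditioning rather than mere sensitization
# (pseudoconditioning). A real study would use an inferential statistical test.
print(f"paired mean = {mean(experimental):.1f}, unpaired mean = {mean(control):.1f}")
print("evidence of conditioning rather than pseudoconditioning?", difference > 0)
```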


What does fear do to our vision?

Consider the following scenario. A policeman, perhaps quite new to working in the field, is on patrol. He sees a suspicious young man and decides to follow him.

He turns the corner and sees that the man has drawn a gun from his pocket. In a split second, almost too fast to think twice, he takes out his own gun and shoots the man dead.

Only the man didn’t have a gun at all; it was a mobile phone.

Sadly, it’s a familiar story. An incident exactly like it occurred only last week (January 2016), and a quick trawl through more newspaper reports shows how commonly it happens.

When people make snap decisions in situations like this, they are often under intense momentary stress. This can provoke a host of automatic mental and physical effects that some psychologists refer to as “freezing behaviour”. We usually think of this kind of reaction as occurring in animals – a mouse paralysed with fear or a deer trapped motionless in the headlights (resulting in much road kill).

In other words, it’s the moment before an animal decides what to do, whether to “fight or flee”. This is believed to be an innate response to a predator, helping the animal avoid being seen or heard. Research has shown that an animal’s heart rate actually decreases when in this state.

Although we hear about it less in humans, our physiological response can be similar. For instance, one 2005 study found that in response to pictures of mutilated bodies, participants’ physical movements decreased and their heart rates slowed. The same effect was found in a 2010 study in response to pictures of threatening faces.

However, there’s still much more we need to learn about the effects of the human freezing response – for example, what effect does it have on visual perception, and could any effects help explain some of the tragic instances when police have mistaken phones and other harmless objects for guns? A new study published in the Journal of Experimental Psychology aimed to find out.

Maria Lojowska of Radboud University and colleagues in the Netherlands tested 34 participants between the ages of 18 and 30. To create a situation that elicited freezing behaviour, the researchers occasionally gave their participants a mild electric shock, which was always preceded by a red dot. Participants were told they were taking part in a visual perception task and were fully informed about the nature of the shocks before the experiment started.

It was not the shock itself that made the participants show “freezing behaviour” (as measured by their heart rate), rather it was the anticipation of the shock. When participants saw a green dot (which did not presage a shock), they relaxed, but when they saw a red dot they felt more scared, regardless of whether a shock was actually given or not.

The participants’ task was to judge as accurately as possible the orientation of the lines inside small squares, which appeared on a computer screen to the left or right of their visual field. The squares contained either many lines (high, fine detail) or a few lines (low, coarse detail), as you can see below. Crucially, the researchers found that the participants’ visual performance was affected by whether they were stressed and showing physiological signs of freezing: when they were afraid, their performance at judging the high-detail squares was impaired, but their ability to judge the squares with coarse visual detail actually improved.

The square on the left features high detail and the one on the right low detail. Stimuli from Lojowska et al 2015. When scared, participants were better at perceiving low detail.
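
To see the shape of that result, here is a toy 2 x 2 summary of the design (threat vs. safe cue crossed with coarse vs. fine detail). The accuracy values are invented for illustration and are not the study’s data; they simply show the crossover pattern described above.

```python
# Invented proportion-correct values illustrating the reported pattern:
# under threat, coarse perception improves while fine-detail perception
# suffers. These are NOT the study's actual numbers.
accuracy = {
    ("safe",   "coarse"): 0.70,
    ("threat", "coarse"): 0.78,   # better at coarse detail when afraid
    ("safe",   "fine"):   0.80,
    ("threat", "fine"):   0.72,   # worse at fine detail when afraid
}

for detail in ("coarse", "fine"):
    change = accuracy[("threat", detail)] - accuracy[("safe", detail)]
    print(f"{detail:6s} detail: threat minus safe = {change:+.2f}")
```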

The researchers said that previous research in animals had suggested that the freezing response leads to an overall improvement in vision, but their new findings suggest a more nuanced situation – it seems that when we’re afraid, we perceive some aspects of the world more clearly, but at the cost of ignoring much of the detail.

Intuitively, it makes sense that an animal or human only sees the most basic detail of a potentially threatening object: it would take too much time to take in all the detail of a scene. Our brain has a clever way of quickly reconstructing what every object is likely to be, using its memory of similar events and situations rather than analysing each new thing afresh, in depth. It is these shortcuts that can result in errors and visual illusions.

Despite these potential flaws in our visual perception, it’s important for us to be able to perceive things quickly. If you are walking in a desert and glimpse a shape that could be a snake (but is more likely a stick), it’s better to show caution and stop than assume it’s a stick and walk right into danger.

Now that we better understand how our visual perception changes when we feel fear, Maria Lojowska and her team plan to discover exactly what’s going on in the brain when this happens. Meanwhile, the researchers hope their findings might help inform training programmes to improve a person’s performance when they are in a stressful environment. Many police forces in the US already train their officers to overcome their implicit bias towards race and sex. It would be helpful to add the limits of our visual perception to the list.
_________________________________



Lojowska, M., Gladwin, T., Hermans, E., & Roelofs, K. (2015). Freezing promotes perception of coarse visual features. Journal of Experimental Psychology: General, 144(6), 1080-1088. doi: 10.1037/xge0000117



Unlearning Conditioned Responses

It can be a challenge to determine whether a response is conditioned or unconditioned. The key to understanding the difference is that an unconditioned response happens automatically, whereas a conditioned response is learned and is only acquired once the individual has made an association between the unconditioned and conditioned stimuli.

However, because a conditioned response must be learned, it can also be unlearned. Pavlov tested this after the dogs had developed conditioned responses to the light. He found that if he repeatedly shined the conditioned-stimulus light but refrained from giving the dog the food, the dog would salivate less and less until it stopped salivating completely. The gradual diminishing and eventual disappearance of the conditioned response is called extinction.

Extinction can happen to real-life conditioned responses, too. For example, if you see a new dentist who doesn’t make your gums raw when you have an appointment and compliments you on your healthy mouth, over time you may find you no longer dread the dentist's office.
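
As a rough illustration of how extinction unfolds trial by trial, here is a minimal sketch using a simple error-correction learning rule (a common modelling convention, not something described in the text); the learning rate and trial count are arbitrary.

```python
# Minimal sketch of extinction: the CS (the light) is presented repeatedly
# WITHOUT the US (the food), so the conditioned response weakens each trial.
# The learning rate and number of trials are arbitrary illustration values.
ALPHA = 0.25          # how strongly each unreinforced trial updates the association
strength = 1.0        # associative strength after conditioning (CR at full strength)

for trial in range(1, 11):
    # No US is delivered, so the learned expectation is corrected toward zero.
    strength += ALPHA * (0.0 - strength)
    print(f"extinction trial {trial:2d}: CR strength = {strength:.2f}")

# After enough unreinforced presentations, the CR effectively disappears,
# which is the gradual diminishing described above.
```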