Minimum duration for presenting a visual stimulus on screen

My experiment involves presenting simple visual stimuli briefly on screen (200 ms to 1000 ms). The stimuli would be simple enough (e.g. a circle and a line), but I want to use a range of durations to test out some hypotheses. I would like to know if there is a minimal time for human eyes to perceive simple visual stimuli, so I could say "the durations I've chosen for my experiment are safely above the minimal value suggested by so-and-so in their psychophysics experiment."

I am sure many factors would have an impact on such a value, such as the participant's condition and how far the target is (in my case it's the distance between someone, sitting at their desk, and the computer screen), etc., but perhaps there are data and rules already established? If anyone knows of applicable guidelines or literature, I would really appreciate it.

(I've tried searching under "tachistoscopic presentation minimum time" but haven't discovered anything useful yet. Any other search term combo would also be helpful.)


In psychophysics, this is known as an absolute threshold. The absolute threshold will depend on a variety of factors such as brightness, size, etc.

It is also important to keep in mind that the time needed to detect a stimulus may be too short to cause some desired effect on the participant (e.g. a manipulation). For example, the time needed to detect a stimulus is necessarily shorter than the time needed to identify a stimulus. Though research on subliminal stimuli suggests stimuli can sometimes be presented below threshold and still achieve some manipulation. However, some of this research is controversial.

Often, researchers will choose a range of display times specifically in order to unravel the time course of some cognitive process. For instance, when displaying a cluster of dots, how long is needed in order to subitize them?

There is insufficient information in your question to give you an exact number, but I suggest looking at similar studies and seeing what sort of display times they use.


This depends on intensity, but at most intensities the threshold is shorter than the briefest stimulus your screen could possibly present. Don't worry about it or cite references. It would be like citing a reference for why you don't need to worry about quantum effects when testing the best kind of screwdriver.
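
To make the point concrete, here is a minimal sketch of frame-locked presentation in PsychoPy (the package choice, window settings, and 60 Hz refresh rate are all assumptions, not something the asker specified). Because a monitor updates once per refresh, the shortest presentable stimulus is one frame (about 16.7 ms at 60 Hz), comfortably below the 200-1000 ms durations in question; requesting durations as whole frame counts keeps those durations accurate.

    # A minimal sketch, assuming PsychoPy and a 60 Hz monitor: durations are
    # quantized to whole refresh frames, so nothing shorter than one frame
    # (~16.7 ms at 60 Hz) can be shown -- far below the 200-1000 ms range.
    from psychopy import visual, core

    REFRESH_HZ = 60  # assumption: check your monitor's actual refresh rate

    win = visual.Window(fullscr=True, units='pix')
    circle = visual.Circle(win, radius=50, fillColor='white')

    for duration_ms in (200, 400, 600, 800, 1000):
        n_frames = round(duration_ms / 1000 * REFRESH_HZ)  # 200 ms -> 12 frames
        for _ in range(n_frames):
            circle.draw()
            win.flip()  # each flip holds the drawing for one monitor refresh
        win.flip()      # clear to a blank screen after the stimulus
        core.wait(1.0)  # inter-trial interval

    win.close()
    core.quit()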


Assessing User Reactions: Open-Ended vs. Structured

Once participants have been exposed to the design, the next step is to measure their responses. People’s aesthetic impressions can be very idiosyncratic and will need to be systematically analyzed to identify meaningful trends. This can be done with open-ended feedback, but using a slightly more structured approach makes it easier to understand overall patterns. Here are a few techniques that can be used, ranging from completely open-ended to highly structured:

  • Open-ended preference explanation: Ask users to explain why they like a design
  • Open word choice: Ask users to list 3 to 5 words that describe the design
  • Closed word choice (desirability testing): Provide users with a list of terms and ask them to pick the words which best describe the design
  • Numerical ratings: Collect numerical ratings about how much the design exhibits specific brand qualities

Open-Ended Preference Explanation

The first method, simply asking people to explain why they like (or don't like) a design, can work well for in-person sessions with highly motivated and articulate users. This question casts the broadest net and can be useful if you don't know much about your audience's expectations and want to discover what matters to them. It can also help identify opinions that are based on personal idiosyncrasies (such as "I like purple"), which can be screened out so you can focus on more substantive factors. The drawback of this approach is that you may get only brief or irrelevant responses if the participant is not motivated or just not very articulate. This method is especially risky in an unmoderated remote setting (such as a survey), since you won't be able to ask for more detail in follow-up questions if someone gives a vague response such as "It's nice."

Open Word Choice

A slightly more structured approach to assessing user perceptions is to ask test participants to list several words which describe the design. This format ensures you get at least some specific feedback, while still keeping the question open-ended enough to discover factors you may not have considered, but which are significant to your audience. You may get a wide range of descriptors back and will need to analyze them carefully to identify meaningful themes. A good approach for this analysis is to categorize terms as generally positive, negative, or neutral, then group terms with similar meanings, and evaluate whether they match your target brand attributes. For example, the table below shows descriptors provided about a business-to-business website whose brand goal was to be trustworthy, contemporary, and helpful. None of these brand terms was named verbatim by the study participants, but many users described the design as simple (with both positive and negative connotations).

Open-ended word-choice questions elicit a broad range of descriptors, which must be analyzed to determine whether they effectively express the desired brand traits.
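
As an illustration of the analysis just described, the sketch below tallies open word-choice responses by valence and by theme. The descriptors and their category assignments are invented; in practice the researcher builds the lookup after reading the actual responses.

    # Illustrative sketch (terms and categories are hypothetical): tally
    # open word-choice responses into positive / negative / neutral groups
    # and count recurring themes.
    from collections import Counter

    # Researcher-defined lookup, built after reading the responses.
    category = {
        'clean': 'positive', 'simple': 'positive', 'professional': 'positive',
        'plain': 'negative', 'boring': 'negative', 'dated': 'negative',
        'corporate': 'neutral',
    }

    responses = ['clean', 'simple', 'boring', 'simple', 'corporate', 'plain']

    counts = Counter(responses)                                    # theme frequency
    by_category = Counter(category.get(w, 'uncoded') for w in responses)

    print(counts.most_common())   # e.g. [('simple', 2), ...]
    print(by_category)            # overall positive/negative balance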

Structured Word Choice

Requiring users to choose descriptors from a list of terms you provide is a controlled variation of the word-choice method. By supplying users with a limited set of words, this method focuses specifically on whether the target brand attributes are perceived by participants. The brand traits you hope to convey should be included in your list of terms, along with other choices which describe contradictory or divergent qualities. Structured word choice (also known as "desirability testing") is less sensitive than open word choice for discovering new points of view, but makes it easier to compare different versions of a design, or the reactions of different audience groups to the same design. This technique works well in an in-person study, where you can ask users follow-up questions and let them refer to the design as they explain their reasoning for selecting each term. It can also be used in a remote study, but it's not a good idea to combine it with a "5-second" test format, because looking through a long list of words may take so much time that by the time users get to the end they don't recall much about a design they only saw for 5 seconds. Instead, use a survey tool which allows people to see the design as they are choosing words from the list.

Numerical Ratings of Brand Perceptions

Finally, the most controlled approach is to collect numerical ratings of how well each brand trait is expressed by the design. To avoid prohibitively long test sessions, pick the 3–5 most important brand qualities and ask people to rate how well each of them is captured by the design. (The more questions you have, the more difficult the questionnaire, and the higher the chance of random answers.) Because this paradigm limits the ability to discover different perspectives and reactions, numerical ratings are appropriate only if you’ve figured out the most common perceptions in previous research and simply want to assess the relative strength of each quality.
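
A hypothetical sketch of this aggregation, assuming 1-7 scale ratings for the three brand qualities named earlier (all numbers invented):

    # Average each brand quality's ratings across participants to compare
    # how strongly each quality is perceived.
    from statistics import mean, stdev

    ratings = {                      # participant ratings, invented for illustration
        'trustworthy': [6, 5, 7, 6, 5],
        'contemporary': [4, 3, 5, 4, 4],
        'helpful': [6, 6, 5, 7, 6],
    }

    for quality, scores in ratings.items():
        print(f'{quality}: M = {mean(scores):.2f}, SD = {stdev(scores):.2f}')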

Finally, a word about focus groups: although they can be used to capture user preferences, this method is risky if you don't have a talented, experienced focus-group facilitator available. Capturing detailed feedback about a visual design from each participant in a group conversation is difficult. One tactic that can help is to ask participants to write down their own perceptions before discussing them as a group, and to collect these written comments for later analysis. Also, focus groups don't capture any behavioral information.


Method

Participants

Thirty-four students from a German university (23 female) took part in the experiment in return for course credit. The data from two of the participants (both female) were discarded prior to analysis because they committed far too many errors or responded far too slowly (i.e. far-out values according to Tukey, 1977), resulting in a sample size of N = 32. Apart from one participant who did not report their age, the median age was 24 years (range: 19 to 32 years), and participants had normal or corrected-to-normal vision. Regarding power considerations, the SPE that has been documented with visual stimuli has been rather large in previous studies (dz > 0.80; Schäfer, Wesslein, et al., 2016; Sui et al., 2012). Importantly, we assume the SPE in our study to be smaller due to the switch of modality (Spence, Nicholls, & Driver, 2001). However, with N = 32, even an effect of dz = 0.45 would be found with a power of 1 – β > .80 (α = .05, one-tailed; calculated with G*Power; Faul, Erdfelder, Lang, & Buchner, 2007).
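
For readers without G*Power, the reported sensitivity figure can be approximated in Python with statsmodels (a substitute for the G*Power calculation the authors actually used):

    # Power for a one-sample/paired t-test with dz = 0.45, N = 32,
    # alpha = .05, one-tailed -- a statsmodels stand-in for G*Power.
    from statsmodels.stats.power import TTestPower

    power = TTestPower().solve_power(effect_size=0.45, nobs=32,
                                     alpha=0.05, alternative='larger')
    print(f'power = {power:.3f}')  # comes out above .80, as reported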

Procedure

In order to reduce the amount of environmental noise to a minimum, the participants were tested individually in a completely light- and sound-proofed testing chamber. An experimenter provided some initial instructions and then left the room, before the task instructions were presented on the screen. The experiment started with an association phase, in which the to-be-learned associations were shown on the display. Here, for each participant, each of three tactile stimuli, consisting of different onsets and offsets of vibrotactile pulses (see Figure 1), was assigned to one of three labels. The particular associations were presented to the participant by presenting one label on the screen in written form in each trial for 500 ms, followed by the delivery of a single tactile stimulus (i.e. a vibrotactile pulse) to the participant's hand for 300 ms; the label remained on the screen during the tactile stimulus and for a further 1,700 ms. After an interstimulus interval of 2,000 ms, the next trial followed. Each stimulus-label association was presented six times. Hence, the association phase comprised 18 trials presented in a random order. The participants were instructed to learn these associations – more specifically, associations of labels and vibrotactile pulses presented to their hand – and were informed that they would be asked about them later.

The matching task followed after the association phase. The participant’s task was to judge whether each stimulus-label combination that was presented corresponded to one of the initially learned associations or not. Here, the participants were told to place their left index finger on the S-key (non-matching response) and their right index finger on the L-key (matching response). Throughout the experiment, both of their arms were placed on armrests in order to prevent any disturbance because of the tactile stimulus that was delivered to the left hand.

After 12 practice trials, the test phase of the matching task started. In each trial, after a blank slide (500 ms) and a fixation cross (550 ms), a stimulus-label combination was presented. Crucially, this stimulus-label combination contained a label and a flashing visual stimulus which appeared on the screen simultaneously (for a schematic presentation of the stimuli, see Figure 1). The flashing stimulus had a duration of either 100 or 300 ms (see Figure 1), and the label remained on the screen for a maximum of 1,500 ms or until a response had been given. If no response was detected, a blank slide appeared until a response had been registered. Each of the nine possible stimulus-label combinations was presented several times and in a random order, whereby matching combinations were presented twice as often, giving rise to an equal number of matching and non-matching trials. The test phase consisted of 144 trials, and no feedback was given, to prevent further learning. Importantly, before this test phase, the participants were not informed that the tactile patterns could switch to visual patterns; thus they had no idea of a potential switch of modality.
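
A sketch of the trial list implied by this description (stimulus names and the label assignment are illustrative): doubling the three matching combinations yields 12 base entries, and repeating them 12 times gives 144 trials with matching and non-matching trials equally frequent.

    # Build the 144-trial matching-task list: 9 stimulus-label combinations,
    # with the 3 matching ones presented twice as often as each other one.
    import random

    stimuli = ['A', 'B', 'C']
    labels = {'A': 'Ich', 'B': 'Freund', 'C': 'Fremder'}  # one example assignment

    base = []
    for s in stimuli:
        for lab in labels.values():
            weight = 2 if lab == labels[s] else 1  # matching combos doubled
            base += [(s, lab, lab == labels[s])] * weight

    trials = base * 12             # 12 entries x 12 repetitions = 144 trials
    random.shuffle(trials)
    assert len(trials) == 144
    assert sum(match for _, _, match in trials) == 72  # half are matching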

Material and apparatus

The experiment was conducted using standard PCs with standard TFT monitors, German QWERTZ keyboards, and E-Prime 2.0 software. The labels were written in Courier New and presented in white on a black background. The labels were always presented in the middle of the screen to ensure the greatest consistency between the association phase with tactile stimuli and the matching task with the visual stimuli. Thus, in the association phase, each label was presented in the centre of the screen and each tactile stimulus was presented on the participant's left hand, while in the matching task, the label was presented in the centre of the screen with the flashing stimulus above it (see Figure 1). With a viewing distance of about 50 cm throughout the experiment, the labels subtended 0.7° visual angle vertically. The tactile stimuli used were constructed as follows: a continuous 300-ms pulse as Stimulus A; a sequence of a 100-ms pulse, a 100-ms pause, and a 100-ms pulse as Stimulus B; and a single 100-ms pulse as Stimulus C (see Figure 1). The presentation of the tactile stimuli (250 Hz, about 71 μm peak-to-peak amplitude) was controlled via a serial interface. The vibrotactile stimuli were delivered by means of a tactor (Model C-2, Engineering Acoustics, Inc.) which was 1.17 cm in diameter and 0.30 cm thick. The visual stimulus was presented as a sequence of light grey squares against a black background (with a visual angle of 2.6° × 2.6°), suggesting a flashing stimulus. Importantly, each visual pattern exactly mirrored one of the tactile patterns. Thus, either a continuous 300-ms flash (Stimulus A), a 100-ms flash, 100-ms pause, 100-ms flash sequence (Stimulus B), or a single 100-ms flash (Stimulus C) was presented (see Figure 1).1

1 Note that, before the experiment started, the participants went through a pattern-tone combination task that was completely irrelevant to the current study. In this task, the tactile stimuli were presented on either the participant's left or right hand and were combined with one of two tones. The participants were instructed to indicate the hand on which they perceived the stimulus. To test whether this preceding task had any influence on the pattern of data in the matching task, a 2 (tone condition in pattern-tone combination task: tone 1 vs. tone 2) × 2 (matching condition: matching vs. non-matching) × 3 (association: self vs. friend vs. stranger) repeated-measures MANOVA was conducted with 'tone condition in pattern-tone combination task' as a between-participants factor. The analysis did not reveal any influence: there was neither a main effect, F(1, 30) = 1.14, p = .294, ηp² = .04, nor any interaction effect with this factor, all Fs < 1, all ps > .474.

We used the German word Ich [I] as the self-relevant label and Freund [friend] and Fremder [stranger] as the two non-self-relevant labels. The friend label was chosen to provide a strong comparison condition for the self-relevant condition, because a friend is a 'close-to-the-self, but not myself' person. The assignments of these labels to the particular vibrotactile pulses, that is, the concrete stimulus-label associations, were varied following a Latin-square design. Thus, each label was assigned to each vibrotactile stimulus for one third of the participants.
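
A minimal sketch of such a Latin-square assignment (the rotation rule below is an assumption; any scheme that pairs each label with each stimulus for a third of the participants would satisfy the design):

    # Rotate the three labels across the three vibrotactile stimuli based
    # on participant number, so each of the three possible assignments is
    # used for one third of the participants.
    LABELS = ['Ich', 'Freund', 'Fremder']
    STIMULI = ['A', 'B', 'C']

    def assignment(participant_id: int) -> dict:
        shift = participant_id % 3
        return {stim: LABELS[(i + shift) % 3] for i, stim in enumerate(STIMULI)}

    # participant 0: A->Ich, B->Freund, C->Fremder
    # participant 1: A->Freund, B->Fremder, C->Ich, and so on.
    print(assignment(1))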

Design

As is typical for the matching paradigm, the experiment comprised a 2 (matching condition: matching vs. non-matching) × 3 (association: self vs. friend vs. stranger) repeated-measures experimental design. The factor 'association' thereby represents the association which was given to the tactile stimulus in the association phase, although performance was analysed only in those trials in which visual stimuli were presented (i.e. in the matching task). Hence, all matching or non-matching trials with one particular visual stimulus (e.g. Stimulus A) were grouped based on the fact that the tactile equivalent was associated with a particular label.

The SPE is defined as better performance in self-associated matching trials as compared to other-associated matching trials (Sui et al., 2012), so we tested our hypothesis in the matching condition. Non-matching trials only serve as filler trials that make the matching task meaningful (so that participants have to compare each presented combination to the previously learned combinations). Hence, no hypotheses were formulated in relation to this condition.
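
As an illustration, the SPE could be computed from matching-trial accuracy as below; the trial data structure is hypothetical, since the paper reports the analysis rather than code.

    # Mean accuracy in matching trials per association; the SPE is the
    # advantage for self-associated over other-associated matching trials.
    def spe(trials):
        """trials: iterable of (association, matching, correct) tuples."""
        acc = {}
        for assoc in ('self', 'friend', 'stranger'):
            correct = [c for a, m, c in trials if a == assoc and m]
            acc[assoc] = sum(correct) / len(correct)
        other = (acc['friend'] + acc['stranger']) / 2
        return acc['self'] - other  # positive values indicate an SPE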


The Absolute Threshold for Hearing

The absolute threshold for hearing refers to the lowest sound level that a person with normal hearing can detect at least 50% of the time when no other sounds are present. For example, researchers might test the absolute threshold for detecting the sound of a metronome.

In general, children tend to be able to detect lower levels of sound than adults because their hearing is more sensitive. As people age, hearing tends to decline.


Instructions

If you do not see six (6) tabs across the top, widen your browser or rotate your phone or tablet so that it is wide.

Full Screen Mode

To see the illustration in full screen, which is recommended, press the Full Screen button, which appears at the top of the page.

Stimulus Settings Tab

On this tab, you can adjust the type of stimulus and the various parameters of the stimulus to see how this might affect your ability to detect the dot. The settings include the following:

Dot Diameter: the diameter of the dot in pixels.

Dot Position: the distance the dot is above the fixation mark (in number of diameters of fixation mark).

Background Level: brightness of the background against which the dot is to be detected (in the basic intensity values of the screen). The dot values are differences from this level.

Reset: at the top of the settings page is a Reset button. Pressing this button restores the stimulus settings to their default values.

Method Settings Tab

On this tab, you can adjust how the method will work. The settings include the following:

Type of Method of Constant Stimuli: Traditional or Forced-Choice. In Traditional, you will be asked whether the stimulus is present. In Forced-Choice, you will be asked whether the stimulus is in one of two locations or one of two time periods.

Number of Levels of Relative Dot Luminance: how many different levels of intensity of your stimulus will be used.

Number of Repetitions: how many times each stimulus will be presented.

Minimum Value of Relative Dot Luminance: the least intense dot to be displayed.

Maximum Value of Relative Dot Luminance: the most intense dot to be displayed.

Reset: at the top of the settings page is a Reset button. Pressing this button restores the method settings to their default values.

Experiment Tab

On the Experiment tab, press the space bar or the Start button on the screen to start the experiment. Keep your eyes fixated on the red plus sign in the middle of the screen. A dot will be presented above this mark.
Traditional Method Instructions: After the dot has been presented, buttons will appear, asking if you have seen the dot. Click Yes (or press the Z key) if you saw the dot and No (or press the / key) if you did not. When the experiment has been completed, you will be instructed that you can view your results.
Forced-Choice Method Instructions: In addition to the dot being above the fixation mark, it will be either to the right or to the left of the fixation mark. After the dot has been presented, buttons will appear asking whether the dot was to the left or right of the fixation mark. Click the proper button, or press Z for left or the / key for right. If you do not know on which side the dot appeared, you must guess.

Results Tab

Your data will be presented on this tab. On the x-axis will be the different levels of intensity of the stimulus. On the y-axis will be the proportion of trials on which you reported detecting the stimulus at each intensity. To see your threshold, which is determined by a simple linear interpolation method, click the button that says Show Threshold. The value will be shown below the graph and highlighted on the graph. To display the data showing the proportion of trials you detected for each stimulus intensity, click the Show Data button.
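
A sketch of the 'simple linear interpolation' mentioned above, assuming detection proportions rise with intensity (the data values are invented):

    # Find the intensity at which the detection proportion crosses 50%,
    # interpolating linearly between the two bracketing stimulus levels.
    def threshold_50(intensities, proportions):
        points = list(zip(intensities, proportions))
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if y0 <= 0.5 <= y1:  # the 50% point lies between these levels
                return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
        return None              # detection never crossed 50% in this range

    print(threshold_50([2, 4, 6, 8, 10], [0.05, 0.20, 0.45, 0.80, 0.95]))
    # -> about 6.29, between the 6 and 8 intensity levels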

Settings for Stimuli in Method of Constant Stimuli Experiment

Change the settings below to alter the stimulus parameters in this experiment.


Keeping in time with social and non-social stimuli: Synchronisation with auditory, visual, and audio-visual cues

Everyday social interactions require us to closely monitor, predict, and synchronise our movements with those of an interacting partner. Experimental studies of social synchrony typically examine the social-cognitive outcomes associated with synchrony, such as affiliation. On the other hand, research on the sensorimotor aspects of synchronisation generally uses non-social stimuli (e.g. a moving dot). To date, the differences in sensorimotor aspects of synchronisation to social compared to non-social stimuli remain largely unknown. The present study aims to address this gap using a verbal response paradigm where participants were asked to synchronise a ‘ba’ response in time with social and non-social stimuli, which were presented auditorily, visually, or audio-visually combined. For social stimuli a video/audio recording of an actor performing the same verbal ‘ba’ response was presented, whereas for non-social stimuli a moving dot, an auditory metronome or both combined were presented. The impact of autistic traits on participants’ synchronisation performance was examined using the Autism Spectrum Quotient (AQ). Our results revealed more accurate synchronisation for social compared to non-social stimuli, suggesting that greater familiarity with and motivation in attending to social stimuli may enhance our ability to better predict and synchronise with them. Individuals with fewer autistic traits demonstrated greater social learning, as indexed through an improvement in synchronisation performance to social vs non-social stimuli across the experiment.


Blindsight: When the brain sees what you do not

When can you see what you can't see? When you have blindsight, a "condition," says the Oxford Concise Dictionary, "in which the sufferer responds to visual stimuli without consciously perceiving them." Here vision researcher Susana Martinez-Conde describes how a man named DB perceives flickering Gabor patches much more accurately and consistently in his "blind" visual field than in his sighted field -- even though he denies ever seeing anything in the blind field. Sacksian stuff; read it and wonder.

Blindsight: The Blind Leading the Sighted


DB is a 67-year-old man who has been blind to the left half of his visual field since age 26, following a neurosurgical operation. The operation, which was necessary to remove a vascular malformation in his occipital lobe, unfortunately destroyed the part of DB's right hemisphere corresponding to the primary visual cortex. The primary visual cortex, also called area V1, is the brain's largest visual area and one of over two dozen regions dedicated to vision. V1 serves many important functions. Among them, it relays visual information from the eyes to higher cortical visual areas. Since DB lost the right half of area V1, it did not surprise his doctors that he became blind in the left half of his visual field. (The right part of the brain processes visual information from the left half of the visual field, and vice versa.) But they were astounded that, although DB denied seeing any visual targets presented to the left visual field, he was nevertheless able to accurately "guess" many properties of targets presented there, such as shape, specific location, and other aspects one could only know of by seeing them. DB's ability to provide accurate information about unseen targets is called "blindsight." Blindsight is thought to be due to information flow through secondary neural pathways that bypass area V1 but nevertheless convey a small amount of visual information to higher visual cortices. For some unknown reason, these secondary routes are not sufficient to maintain the feeling of sight. Thus the blindsight patient has the subjective feeling that he or she is blind, and reports visual information only when forced to take a guess. Oxford University psychologist Larry Weiskrantz and colleagues began testing DB's abilities extensively in the 1970s and 1980s and have continued to this day. In the meantime, other blindsight patients have been identified and examined. DB's detection abilities are considerably superior to those of other blindsight cases, however. In the paper reviewed here, "Can Blindsight be Superior to 'Sighted-Sight'?", Ceri Trevethan, Arash Sahraie and Weiskrantz set out to directly compare DB's blind field sensitivity with his sighted field sensitivity, and also with normal vision in a group of healthy volunteers.

In the Country of the Sighted, the Blind-Sighted Man is King

Trevethan, Sahraie and Weiskrantz conducted three experiments. In Experiment 1, they informed DB that a visual stimulus called a Gabor patch would appear on a gray screen in one of two time spans. They would then ask DB to indicate, with a button press, which time interval the target had appeared in; he was asked to pick one of the two intervals even if he wasn't completely sure he saw anything -- in other words, to guess. In this first experiment the stimulus presentations to the blind and the sighted fields were "blocked": a block of 30 sequential stimulus presentations (each with two time intervals, only one of which contained a stimulus) was made to DB's blind field (his left visual field), followed by 30 presentations to the sighted field. Astonishingly, DB performed better in his blind field than in his sighted field, and by a wide margin. He correctly identified the time span containing the stimulus 87 percent of the time in his blind field, as opposed to only 50 percent of the time in his sighted field -- a rate consistent with what he would achieve if he were purely guessing.
However, because the trials were blocked, it was theoretically possible that DB might have somehow varied his performance based on whether he expected to see the stimulus; after all, he expected to see it in his sighted field but not in his blind field. So in a second variant of Experiment 1, a total of 100 stimulus presentations were randomly interleaved between the two fields. In this version DB could not know whether a given stimulus would appear in the blind or the sighted field, so he had to approach the task in exactly the same manner in either situation. He again did much better detecting the stimulus in the blind field (84 percent correct) than in the sighted field, where he again scored just 50 percent. Ironically, DB found the tests on the sighted side hard work, whereas the blind field tests seemed effortless ("No problem, I'm just guessing").

In Experiment 2, Trevethan and colleagues quantified the sensitivity difference between blind and sighted fields by presenting a range of stimulus contrasts, with some figures at higher contrast, and thus easier to detect, than others. In his blind field, DB was able to detect stimuli with contrast as low as 6 percent. In his sighted field he required a 12 percent contrast stimulus for successful detection. Although these results seemed astounding, one potentially mundane explanation remained. Perhaps DB's sighted field simply suffered from poor vision. If so, his superior performance in his blind field might simply reflect abnormally poor vision in his sighted field. To rule out this possibility Trevethan and colleagues carried out a final experiment. In Experiment 3, they tested a group of six naive (that is, new to such testing), age-matched participants with normal vision, using the same detection task as in Experiment 1. The results showed DB's vision in his sighted field to be equivalent to that of normal, healthy subjects. In conclusion, DB's blind field sensitivity is not merely superior to his own sighted field, but also to normal vision.

Awareness without perception

An interesting aside concerns DB's reports of subjective awareness during the experiments. In Experiment 1, DB was asked to report any feelings of subjective awareness of the stimuli, or lack thereof, after each stimulus presentation. He reported no awareness of stimuli presented to the sighted field (confirming that he was essentially guessing), but he had subjective awareness of 80 percent of stimuli presented to the blind field. However, this subjective awareness was nothing like a visual experience: he denied having any experience of vision in his blind field, but described his subjective awareness of stimuli as "feeling as if a finger is pointing through the screen." Interestingly, DB's subjective awareness of stimuli presented to the blind field disappeared during the interleaved trials, in which stimulus presentations to the blind and sighted fields were intermixed. This finding suggests that DB's subjective awareness during blind field presentations is related to his expectation that he will not be able to actually see the stimulus during those trials. The most fascinating aspects of this study are two-fold. First, the study intriguingly dissociates the feeling of awareness from the feeling of visual experience. That is, DB is sometimes subjectively aware of stimuli that he is blind to. Thus (the lack of) function in area V1 may not be critical to subjective awareness per se, but to the feeling of visual experience.
If a missing V1 is your vision system's only flaw, you may actually be aware of much that you cannot see. Second, this work revolutionizes the definition of blindsight from a residual set of vision skills to a type of superior sight. But let's not be too quick to conclude that blindsight individuals have vision superpowers -- Daredevil, the blind Marvel Comics superhero with radar-like perception, comes to mind. Someone playing (Dare)devil's advocate might suggest that DB, who has participated in many vision experiments over the years, shows impressive detection abilities because he has spent years practicing a specific type of skill in a familiar experimental condition. Other patients with similar lesions may not share DB's extraordinary blindsight. And despite his prowess in the lab, DB might not want to don a spandex costume and start fighting crime just yet. Nonetheless, this paper raises some questions that vision and consciousness researchers are sure to focus on, and it demonstrates that -- at least in certain conditions -- blindsight can be superior to normal sight. As Daredevil once said: "Yeah, tell them you got beat by a blind man, too."
Susana Martinez-Conde is the director of the Barrow Neurological Institute's Laboratory of Visual Neuroscience, where she studies the neural code and dynamics of visual perception.

The views expressed are those of the author(s) and are not necessarily those of Scientific American.


Human Factors and Applied Psychology Student Conference

The redundant signals effect (RSE) refers to a decrease in response time (RT) when multiple signals are present compared to when one signal is present. The RSE is widespread when responses are made to specific signals; for example, a participant who is asked to respond to the letter "N" will respond more quickly to two "Ns" than to one "N." The current research was conducted to determine whether or not the RSE generalizes to categorical signals. In Experiment 1, participants pressed a button when they saw any number on a computer screen. Each trial contained two stimuli subtending 1º of visual angle and placed 3º above and below the center of the screen. Both stimuli were letters on 50% of trials (no-signal condition), one stimulus was a number on 25% of trials (single-signal condition), and both stimuli were numbers on 25% of trials (redundant-signal condition). RT was faster in the redundant-signal condition (461 ms) than in the single-signal condition (509 ms).


Minimum duration for presenting a visual stimulus on screen - Psychology

1 Gippsland Physiotherapy Group, Melbourne, Australia 2 VIT University, Vellore, India.

Received May 30th, 2010; revised August 2nd, 2010; accepted August 7th, 2010.

Keywords: reaction time, auditory stimuli, visual stimuli, neuromuscular-physiological response, auditory cortex, visual cortex, muscle contraction

Objective: The purpose of this study was to find out whether simple reaction time was faster for an auditory or a visual stimulus, and to identify the factors responsible for improving the performance of the athlete. Methodology: 14 subjects were assigned randomly into groups consisting of 2 members each. Both members of each group performed both the visual and the auditory tests. The tests were taken from the DirectRT software program on a laptop. The DirectRT software includes Testlabvisual and Testlabsounds to test the reaction times to visual and auditory stimuli. After the 2 members of each group completed both the visual and auditory reaction time tests, the data were taken and the mean reaction time was calculated excluding the first and last values. Results: The results show that the mean visual reaction time is around 331 milliseconds, compared to the mean auditory reaction time of around 284 milliseconds. Conclusion: This shows that auditory reaction time is faster than visual reaction time. Males also had faster reaction times than females for both auditory and visual stimuli.

Reaction time (RT) is the elapsed time between the presentation of a sensory stimulus and the subsequent behavioral response. Simple reaction time is usually defined as the time required for an observer to detect the presence of a stimulus. It is a physical skill closely related to human performance. It represents the level of neuromuscular coordination in which the body, through different physical, chemical, and mechanical processes, decodes visual or auditory stimuli that travel via afferent pathways and reach the brain as sensory stimuli.

Simple reaction time can be determined when an individual is asked to press a button as soon as a light or sound appears. Research done by Pain & Hibbs, reference [1], shows that simple auditory reaction time is the fastest reaction time for any given stimulus. A study done by Thompson et al., reference [2], has documented that the mean reaction time to detect visual stimuli is approximately 180 to 200 milliseconds, whereas for sound it is around 140-160 milliseconds. On the other hand, there is also research, done by Yagi et al., reference [3], showing that reaction time to visual stimuli is faster than to auditory stimuli. Research by Verleger, reference [4], also finds that visual reaction time is faster than auditory reaction time during or after exercise.

There are various factors that affect the reaction time to a stimulus. Factors like the intensity and duration of the stimulus, the age and gender of the participant, and the effect of practice can all affect the reaction time of an individual to a particular stimulus. For example, there are relative differences between the reaction times to visual and auditory stimuli between genders. Male athletes tend to be faster than their female counterparts in responding to different stimuli. Research done by Engel, reference [5], shows the reaction time to sound to be faster in males than in females. Studies done by Dane et al., reference [6], show a difference in eye-hand reaction time between male and female handball players.

The purpose of this study was to find out whether the simple reaction time was faster for auditory or visual stimulus and the factors responsible for improving the performance of the athlete.

14 subjects were randomly divided into groups consisting of 2 members each. Both members of each group performed both the visual and the auditory tests. The tests were taken from the DirectRT software program on a laptop. The tests for visual reaction time were taken from the 'testlabvisual' file in the DirectRT program. Before starting the test, the subjects were asked to enter individual file numbers under the 'enter codes' menu, in order to access the data after the test. In the testlabvisual test, the subjects were asked to press the 'space bar' key every time they saw a yellow box on the screen. Once the test was completed, the data were taken from the output file and the mean reaction time was calculated excluding the first and last values. After both subjects from each group completed the visual test, they undertook the auditory reaction test. This was taken from the 'testlabsounds' file in the DirectRT program. In the testlabsounds test, the subjects were asked to press the 'space bar' key every time they heard a 'beep' sound. Once the test was completed, the data were taken from the output file and the mean reaction time was calculated excluding the first and last values. After both members of a group completed the visual and auditory tests, the mean reaction time data for both tests were entered into the laptop.
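
A small sketch of the trimming rule used here ('excluding the first and last values'); the RT values are invented:

    # Mean RT after dropping the first and last trials of a block.
    rts = [512, 331, 305, 298, 342, 310, 640]  # ms, in presentation order
    trimmed = rts[1:-1]                        # exclude first and last values
    mean_rt = sum(trimmed) / len(trimmed)
    print(f'{mean_rt:.0f} ms')                 # 317 ms for this example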

The results show that auditory reaction time is faster than visual reaction time. Males also had faster reaction times than females for both auditory and visual stimuli.

As the results show (Figure 1), the mean visual reaction time is around 331 milliseconds, compared to the mean auditory reaction time of around 284 milliseconds. This confirms that auditory reaction time is definitely faster than visual reaction time. This finding is similar to the studies done by Pain & Hibbs, reference [1], and Thompson et al., reference [2], which also show that auditory reaction time is faster than visual reaction time.

Figure 1. Graph showing faster simple reaction time for auditory stimulus compared to visual stimulus.

Reaction time is dependent on several factors, like the arrival of the stimulus at the sensory organ, conversion of the stimulus by the sensory organ to a neural signal, neural transmission and processing, muscular activation, soft tissue compliance, and the selection of an external measurement parameter (Pain & Hibbs, reference [1]). Research by Kemp et al., reference [7], shows that an auditory stimulus takes only 8-10 milliseconds to reach the brain, whereas a visual stimulus takes 20-40 milliseconds. This implies that the faster the stimulus reaches the motor cortex, the faster the reaction time to the stimulus will be. Therefore, since an auditory stimulus reaches the cortex faster than a visual stimulus, auditory reaction time is faster than visual reaction time.

Reaction times are widely used to evaluate neuromuscular-physiological responses in sports. Studies by Pain & Hibbs, reference [1], have shown that the neuromuscular-physiological component of an auditory reaction time for sprint athletes can be around 85 milliseconds. Faster reaction times are significant for better performance of athletes. The faster the stimulus reaches the brain, the faster the signal is processed and the necessary responses are sent for the required motor reaction. Van den Berg et al., reference [8], also found that fatigue due to sleep deprivation caused subjects to have slower reaction times. Studies by Ando et al., reference [9], reported that reaction times decreased with repeated practice. Therefore, reaction times to a particular stimulus can be made faster through repeated practice with that stimulus and with adequate rest between stimuli.

In this study, as seen in Figure 2, it was also found that the male subjects had faster reaction times than the female subjects for both auditory and visual stimuli. This finding is similar to the research done by Dane et al., reference [6]. The reason for this difference could be that it takes the same time for both the auditory and visual stimuli to reach the cortex, but the time taken for the corresponding motor response and muscle contraction might differ. This was documented in the study done by Silverman, reference [10], which found that the motor response is faster in males than in females because males are comparatively stronger. This explains why males have faster simple reaction times for both auditory and visual stimuli.

Figure 2. Graph showing males having faster simple reaction time compared to females for both auditory and visual stimuli.

From the above study it can be concluded that simple reaction time is faster for auditory stimuli than for visual stimuli. Auditory stimuli have:

• The fastest conduction time to the motor cortex.

• Fast processing time in the auditory cortex.

• Therefore a faster reaction time and quicker muscle contraction.

• On the whole, this improves the performance of the athlete.

As exercise physiologists, our main aim is to improve the speed, skill, and performance of the athlete. The above evidence suggests that the speed and performance of an activity can be improved with a faster reaction time to a stimulus. Based on the findings of this study, faster reaction times can be achieved by providing repeated auditory stimuli with adequate periods of rest between the stimuli.


Information processing through the first year of life: a longitudinal study using the visual expectation paradigm

This Monograph uses a developmental function approach to describe age-related change and individual differences in infant information processing during the first year of life. The Visual Expectation Paradigm (VExP) is used to measure speed of information processing, response variability, and expectancy formation. Eye-movement reaction times and anticipatory saccades were gathered from 13 infants assessed monthly from 2 to 9 months and then again at 12 months. Analysis of response patterns demonstrated the applicability of the paradigm throughout the age range studied. Converging operations strongly indicate that the traditional estimate of the minimum time required for infants to initiate a saccade to a peripheral stimulus may be as much as 100 milliseconds (ms) too long. Moreover, the newly estimated minimum of 133 ms does not appear to change during the 2-12-month period. Reanalysis of the present data and past research reveals that the new, shorter minimum reaction time is unlikely to affect findings based on mean reaction time. However, using the traditional minimum reaction time will inflate estimates of percentage anticipation, especially in infants older than 5 months. Group and individual growth curves are described through quantitative models of four variables: reaction time, standard deviation of reaction time, percentage anticipation, and anticipation latency. Developmental change in reaction time was best described by an asymptotic exponential function, and evidence for a local asymptote during infancy is presented. Variability in reaction time was found to decline with age, independent of mean reaction time, and was best described by a polynomial function with linear and quadratic terms. Anticipation showed little lawful change during any portion of the age span, but latency to anticipate declined linearly throughout the first year. Stability of individual differences was strong between consecutive assessments of mean reaction time. For nonconsecutive assessments, stability was found only for the 6-12-month period. Month-to-month stability was inconsistent for reaction-time variability and weak for both anticipation measures. Analyses of individual differences in growth curves were carried out using random regressions for the polynomial models. The only significant individual difference (in growth curves) was found for reaction-time variability. Parameter estimates from the exponential models for reaction time suggested two or three developmental patterns with different exponential trajectories. This finding indicates that the strong form of the exponential growth hypothesis, which states that processing speed develops at the same rate for all individuals, does not hold for the first year of life. In the concluding chapter, Grice's Variable Criterion Model (Grice, 1968) is used to integrate three key findings: regular age changes in mean reaction time and variability but no age change in the minimum reaction time. It is argued that the rate of growth of sensory-detection information is developmentally constant during much of the first year but that age changes occur in the level and spread of the distribution of response threshold values. The unique strengths of the paradigm are discussed, and future directions are suggested for further developing the paradigm itself and for using it as a tool to study broad issues in infant cognition.
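
The asymptotic exponential function mentioned in the abstract has the form RT(age) = asymptote + amplitude * exp(-age / tau); the sketch below fits it with scipy, using invented data points for illustration.

    # Fit an asymptotic exponential developmental function to mean RT.
    import numpy as np
    from scipy.optimize import curve_fit

    def asymptotic_exp(age, asymptote, amplitude, tau):
        return asymptote + amplitude * np.exp(-age / tau)

    age_months = np.array([2, 3, 4, 5, 6, 7, 8, 9, 12])
    mean_rt_ms = np.array([480, 430, 400, 380, 365, 355, 350, 345, 340])  # invented

    params, _ = curve_fit(asymptotic_exp, age_months, mean_rt_ms, p0=(340, 300, 2.0))
    print(dict(zip(('asymptote', 'amplitude', 'tau'), params)))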

