Difference in frequencies of mistakes between use of left-right vs. up-down


I have no hard data, but in my personal experience of people specifying directions, mistakes with left or right (similarly east or west) are made much more often than mistakes with up or down (north or south, top or bottom).

1) Is there some hard data on such a difference in frequencies?

2) If so, does cognitive-neuroscience address this difference?

(Certainly, there are physical properties of our world that enforce the distinction between up and down (gravity) but not between left and right, although people do generally have a right-hand motor bias.)


There is data on this question generated by research on the embodiment of spatial cognition. The idea here is that we mentally represent and construct space in relation to our bodies. From this perspective, the differences that you describe (left vs. right being harder than up vs. down) stem from the way we perceive the world through our bodies:

Tversky (2008), describing the research of Franklin and Tversky (1990):

We proposed that people keep track of the relative positions of the objects around them as they move by constructing a spatial-mental framework out of the three axes of the body and appending objects to it, updating it as the situation changes. We reasoned that accessibility of objects should reflect characteristics of the body axes and the world relevant to perception and action. The head-feet axis has salient asymmetries both perceptually and behaviorally; moreover, for the canonically upright observer, it correlates with the only asymmetric axis in the world, the up-down axis of gravity. The front-back axis separates the world that can be easily perceived and acted on from the world that cannot be easily perceived or acted on, but the left-right axis has few salient perceptual or behavioral asymmetries. This analysis predicts that, for the upright observer, things located along the head-feet axis should be fastest to retrieve, followed by things located on the front-back axis, followed by things located on the left-right axis.

Their data support this reasoning. Furthermore, in their experiments they also disentangled gravity from body orientation by having some participants recline:

For the reclining observer, no body axis correlates with gravity, so accessibility depends entirely on the body axes. In this case, things located along the front-back axis should be fastest because of the forward bias of perception and action.

This was the case as well.

Note that accessibility was operationalized as the speed with which participants could name the position of an object. Participants made very few errors in this task, so strictly speaking it does not answer your question about left-right errors. Nevertheless, it overlaps substantially with your question: the researchers could have increased the number of errors by imposing a short response window, for example, and in that case they would have expected the same effects on the error rate.

References

Franklin, N., & Tversky, B. (1990). Searching imagined environments. Journal of Experimental Psychology: General, 119, 63-76. doi:10.1037/0096-3445.119.1.63

Tversky, B. (2008). Spatial cognition: Situated and embodied. In P. Robbins & M. Aydede (Eds.), Cambridge handbook of situated cognition. Cambridge: Cambridge University Press.


Benchmarking

I decided to do my own benchmarking to see how the in-place assignment approach compares to the other solutions that have been offered in this question.

Here's a benchmark of the example based on the OP that I demonstrated earlier:

Here I benchmark on random input data, trying different scales and different patterns of key overlap between the two input tables. This benchmark is still restricted to the case of a single-column integer key. In addition, to ensure that the in-place solution would work for both left and right joins of the same tables, all random test data uses 0..1:0..1 cardinality. This is implemented by sampling without replacement the key column of the first data.frame when generating the key column of the second data.frame.

I wrote some code to create log-log plots of the above results. I generated a separate plot for each overlap percentage. It's a little bit cluttered, but I like having all the solution types and join types represented in the same plot.

I used spline interpolation to show a smooth curve for each solution/join type combination, drawn with individual pch symbols. The join type is captured by the pch symbol, using a dot for inner, left and right angle brackets for left and right, and a diamond for full. The solution type is captured by the color as shown in the legend.


Top-down processing is defined as the development of pattern recognition through the use of contextual information. For instance, you are presented with a paragraph written in difficult handwriting. It is easier to understand what the writer wants to convey if you read the whole paragraph rather than reading each word in isolation. The brain may be able to perceive and understand the gist of the paragraph due to the context supplied by the surrounding words.

Gregory's Theory

In 1970, psychologist Richard Gregory stated that perception is a constructive process that depends on top-down processing. He explained that past experience and prior knowledge related to a stimulus help us make inferences. For Gregory, perception is all about making the best guess or hypothesis about what we see. In terms of visual perception, Gregory argues that about 90% of visual information is lost by the time it arrives in the brain for processing. This loss leads to the creation of a perceptual hypothesis about the stimulus, based on the perceiver's memory and past experience that may be related to it. When it comes to visual illusions, such as the Necker cube, Gregory believed that the brain may create incorrect hypotheses, leading to errors of perception.


As of jQuery version 1.1.3, event.which normalizes event.keyCode and event.charCode so you don't have to worry about browser compatibility issues. Documentation on event.which

event.which will give 1, 2 or 3 for left, middle and right mouse buttons respectively so:
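A minimal sketch of such a handler (the #target selector and the messages are placeholders, not code from the original answer):

    $('#target').mousedown(function (e) {
        switch (e.which) {
            case 1:
                console.log('Left mouse button pressed');
                break;
            case 2:
                console.log('Middle mouse button pressed');
                break;
            case 3:
                console.log('Right mouse button pressed');
                break;
            default:
                console.log('Unusual button code: ' + e.which);
        }
    });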

Edit: I changed it to work for dynamically added elements using .on() in jQuery 1.7 or above:
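A sketch of the delegated form; binding to document (or any ancestor that exists at bind time) means the handler also fires for #target elements added later:

    // Delegated handler: also works for elements added to the DOM after binding
    $(document).on('mousedown', '#target', function (e) {
        if (e.which === 3) {
            console.log('Right mouse button pressed on a dynamically added element');
        }
    });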

[Start of original post] This is what worked for me:

In case you are into multiple solutions ^^

Edit: Tim Down brings up a good point: it's not always a right-click that fires the contextmenu event; it also fires when the context menu key is pressed (which is arguably a replacement for a right-click).

You can easily tell which mouse button was pressed by checking the which property of the event object on mouse events:
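For example (a sketch; the selector is a placeholder):

    $('#target').on('mousedown', function (e) {
        // e.which: 1 = left, 2 = middle, 3 = right
        console.log('Button pressed: ' + e.which);
    });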

You can also bind to contextmenu and return false:
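Something along these lines (a sketch, not the original snippet):

    $('#target').on('contextmenu', function () {
        // Suppresses the browser context menu for right-clicks
        // (and for the keyboard context-menu key)
        return false;
    });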

Or you can make a quick plugin that does the same:
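A rough sketch of what such a plugin could look like; the plugin name rightclick and its exact behavior are illustrative, not the original code:

    // Hypothetical helper: calls the handler on right-click and
    // suppresses the default context menu
    $.fn.rightclick = function (handler) {
        return this.on('contextmenu', function (e) {
            e.preventDefault();
            handler.call(this, e);
        });
    };

    // Usage:
    $('#target').rightclick(function () {
        console.log('Right-clicked');
    });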

Update for the current state of things:

There are a lot of very good answers, but I just want to touch on one major difference between IE9 and IE < 9 when using event.button.

According to the old Microsoft specification for event.button, the codes differ from the ones used by the W3C. The W3C considers only 3 cases:

  1. Left mouse button is clicked - event.button === 0
  2. Right mouse button is clicked - event.button === 2
  3. Middle mouse button is clicked - event.button === 1

In older versions of Internet Explorer, however, Microsoft flips a bit for each pressed button, so there are 8 cases:

  1. No button is clicked - event.button === 0 or 000
  2. Left button is clicked - event.button === 1 or 001
  3. Right button is clicked - event.button === 2 or 010
  4. Left and right buttons are clicked - event.button === 3 or 011
  5. Middle button is clicked - event.button === 4 or 100
  6. Middle and left buttons are clicked - event.button === 5 or 101
  7. Middle and right buttons are clicked - event.button === 6 or 110
  8. All 3 buttons are clicked - event.button === 7 or 111

Despite the fact that this is theoretically how it should work, no version of Internet Explorer has ever supported the two- or three-button combinations. I mention it because the W3C standard cannot even theoretically support them.
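For completeness, a sketch of how plain JavaScript might bridge the two schemes; the feature test for old IE (attachEvent present, addEventListener absent) is an assumption of this sketch, not something from the answer above:

    // Returns 'left', 'middle' or 'right' for a mouse event,
    // handling both the W3C values and the old-IE bitmask.
    function buttonName(e) {
        e = e || window.event;
        var isOldIE = document.attachEvent && !document.addEventListener; // IE < 9
        if (isOldIE) {
            // Old IE bitmask: 1 = left, 2 = right, 4 = middle (values may combine)
            if (e.button & 1) { return 'left'; }
            if (e.button & 2) { return 'right'; }
            if (e.button & 4) { return 'middle'; }
            return 'none';
        }
        // W3C: 0 = left, 1 = middle, 2 = right
        return ['left', 'middle', 'right'][e.button] || 'other';
    }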

It seems to me that a slight adaptation of TheVillageIdiot's answer would be cleaner:

EDIT: jQuery provides an e.which attribute, returning 1, 2, or 3 for left, middle, and right click respectively, so you could also use if (e.which == 3).

event.which === 1 ensures it's a left-click (when using jQuery).

But you should also think about modifier keys: Ctrl, Cmd, Shift, Alt.

If you're only interested in catching simple, unmodified left-clicks, you can do something like this:
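For instance (a sketch; adjust the selector and event type to taste):

    $('#target').on('click', function (e) {
        // Only plain left-clicks: ignore the click if any modifier key is held
        if (e.which === 1 && !e.ctrlKey && !e.metaKey && !e.shiftKey && !e.altKey) {
            console.log('Simple unmodified left-click');
        }
    });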

To those wondering whether or not to use event.which in vanilla JS or Angular: it is now deprecated, so prefer event.buttons instead.

Note: with this method and the (mousedown) event:

  • left click press is associated with 1
  • right click press is associated with 2
  • scroll button press is associated with 4

and the (mouseup) event will NOT return the same numbers, but 0 instead.
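A small vanilla-JS sketch of that behavior:

    // event.buttons is a bitmask of the buttons held down when the event fires
    document.addEventListener('mousedown', function (e) {
        if (e.buttons & 1) { console.log('left button is down'); }
        if (e.buttons & 2) { console.log('right button is down'); }
        if (e.buttons & 4) { console.log('middle (scroll) button is down'); }
    });

    document.addEventListener('mouseup', function (e) {
        // The button has already been released here, so e.buttons is
        // typically 0; use e.button if you need to know which one it was
        console.log('buttons bitmask on mouseup: ' + e.buttons);
    });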

There is also a way to do it without jQuery!

If you are looking for "Better JavaScript Mouse Events", which allow for:

  • left mousedown
  • middle mousedown
  • right mousedown
  • left mouseup
  • middle mouseup
  • right mouseup
  • left click
  • middle click
  • right click
  • mousewheel up
  • mousewheel down

Have a look at this cross-browser plain JavaScript, which triggers the above events and removes the headache. Just copy and paste it into the head of your script, or include it in a file in the <head> of your document. Then bind your events; a jQuery example of capturing the events and firing the functions assigned to them is shown further below (this works with normal JavaScript binding as well).
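The original snippet is not reproduced here, so the following is only a modernized sketch of the idea: it re-dispatches named custom events such as leftmousedown, rightclick, and mousewheelup. The event names are assumptions on my part, and it uses CustomEvent rather than whatever legacy IE shims the original contained.

    (function () {
        // W3C event.button: 0 = left, 1 = middle, 2 = right
        var names = ['left', 'middle', 'right'];

        function fire(target, name, original) {
            // Re-dispatch as a bubbling custom event, e.g. "leftmousedown"
            target.dispatchEvent(new CustomEvent(name, {
                bubbles: true,
                detail: { originalEvent: original }
            }));
        }

        ['mousedown', 'mouseup', 'click'].forEach(function (type) {
            document.addEventListener(type, function (e) {
                var prefix = names[e.button];
                if (prefix) { fire(e.target, prefix + type, e); }
            }, true);
        });
        // Note: most browsers never fire "click" for the middle/right buttons,
        // so a fuller implementation would synthesize middleclick/rightclick
        // from matching mousedown + mouseup pairs.

        document.addEventListener('wheel', function (e) {
            fire(e.target, e.deltaY < 0 ? 'mousewheelup' : 'mousewheeldown', e);
        }, true);
    })();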

If you're interested in seeing it work, have a look at the jsFiddle: https://jsfiddle.net/BNefn/

Better Mouse Click Events example (uses jQuery for simplicity, but the above will work cross-browser and fire the same event names; old IE prefixes the names with "on")
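A binding sketch in that style (the custom event names carry over from the sketch above and are assumptions, not documented names):

    // jQuery can bind to the custom events by name once they bubble
    $('#target')
        .on('leftmousedown',  function () { console.log('left mousedown'); })
        .on('rightmousedown', function () { console.log('right mousedown'); })
        .on('leftclick',      function () { console.log('left click'); })
        .on('mousewheelup',   function () { console.log('wheel up'); })
        .on('mousewheeldown', function () { console.log('wheel down'); });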


How Right-Brain vs. Left-Brain Thinking Impacts Learning

Curriculum–In order to be more “whole-brained” in their orientation, schools need to give equal weight to the arts, creativity, and the skills of imagination and synthesis.

Instruction–To foster a more whole-brained scholastic experience, teachers should use instruction techniques that connect with both sides of the brain. They can increase their classroom’s right-brain learning activities by incorporating more patterning, metaphors, analogies, role playing, visuals, and movement into their reading, calculation, and analytical activities.

Assessment–For a more accurate whole-brained evaluation of student learning, educators must develop new forms of assessment that honor right-brained talents and skills.


Introduction

Do we see with the eyes or with the brain? Is vision a discrete form of perception, distinct from others such as audition and touch? Is it possible for those who have lost their eyesight or have been born without vision to experience visual sensation or perception? Questions such as these have occupied the minds of philosophers and scientists for centuries (Morgan, 1977) and now lie at the heart of modern cognitive neuroscience. Today, with current experimental techniques and technologies including high-resolution functional brain imaging and devices which purport to transduce information from a lost sensory modality into the brain via another modality, inroads are being made toward finding answers to these questions. Sensory substitution devices (SSDs) aim to compensate for the loss of a sensory modality, typically vision, by converting information from the lost modality into stimuli in a remaining modality (Bach-y-Rita and Kercel, 2003). Here we utilized sensory substitution to examine how the very first stages of learning to “see with sound” occur, and the quality of the information transfer from vision to audition as assessed with a test of acuity. A more complete understanding of the way in which this occurs may assist in the development of devices that not only replicate lost sensory functionality, particularly in the blind, but, along with research on synesthesia and multisensory processing, also call into question our notion of sensory modalities as functionally discrete, non-overlapping entities.

Changes Following Sensory Loss

Major neuroplastic changes can occur in a brain that is undamaged but loses input from a sensory modality. Multisensory processes in which cues from multiple modalities unite to form a percept may also include a degree of redundancy: an object's shape can be discerned by the hands and eyes simultaneously or separately; the eyes and ears can be used in concert to determine the direction of a physical sound source more accurately than from sound alone. It may be this redundancy which helps the brain to compensate for sensory loss by enhancement of function of the remaining senses (Merabet and Pascual-Leone, 2010).

Blind individuals, particularly those born without sight or those who lost sight early in life, often show superior performance in other modalities, including finer pitch discrimination and sound localization, more accurate tactile discrimination, better speech discrimination, and verbal recall (Merabet et al., 2005; Pasqualotto and Proulx, 2012; Pasqualotto et al., 2013). Blind individuals lack normal visual input to their occipital cortices, but brain imaging studies have shown that this area is nevertheless active during a number of tasks, including Braille reading, auditory localization tasks, speech comprehension, and verbal recall (Merabet et al., 2005).

Sensory loss need not have occurred early in life however, and changes can occur rapidly in adults following sensory deprivation. In one study, participants were blindfolded 24 h per day for 5 days and given intensive training in tactile and spatial discrimination tasks. Participants experienced visual hallucinations soon after blindfolding and functional magnetic resonance imaging (fMRI) scans showed occipital cortex activation when fingertips were stimulated, with primary and secondary visual cortices becoming increasingly active over the blindfolded period (Pascual-Leone and Hamilton, 2001). Tactile discrimination skills learnt during the experiment were disrupted when participants' occipital cortices were subjected to repetitive transcranial magnetic stimulation (rTMS). After the blindfold period, participants' brains were scanned again and occipital cortex activity linked to tactile stimulation was absent. Because the changes seen in this experiment manifested and then reversed so quickly, it cannot have been that new neuronal connections were established. Instead, existing connections between somatosensory, auditory, and visual cortices were “unmasked” when input from the eyes temporarily ceased. The authors of this study suggest that neuroplasticity in response to sensory loss is a two-stage process: rapid unmasking of existing cortico-cortical connections followed by slower and more permanent formation of new neuronal connections (Pascual-Leone and Hamilton, 2001).

Sensory Substitution

Sensory substitution is the use of one modality (the substituting modality) to take the place of another (the substituted modality). The concept has been construed by some in a very broad sense to include, for example, reading, in which vision (the written word) takes the place of audition (the spoken word) (Bach-y-Rita and Kercel, 2003). More commonly, however, the term is used to refer to a means to allow a person who has suffered sensory loss to make use of their remaining senses to perform functions normally carried out using the lost sense. An obvious and widely used example of this is Braille, in which tactile perception via the fingers substitutes for vision (or, arguably, audition), allowing blind people to read. This system only replaces a specific aspect of a modality, however, namely language; substitution on a general level represents a much greater technical challenge. This challenge has been met over the past four decades by a variety of systems and devices, most of which have been designed to replace vision, either by touch or audition.

Auditory-Visual Sensory Substitution

In tactile-visual sensory substitution (TVSS) systems, the skin or tongue functions as an analog of the retina (Bach-y-Rita et al., 1969). However, by comparison it is very crude and low-resolution. Kokjer (1987) estimated the informational capacity of the human fingertip to be in the order of 100 bps. The eye, by comparison, has been estimated to deliver around 4.3 × 10⁶ bps (Jacobson, 1951), some four orders of magnitude greater bandwidth. The ear falls between these two limits; its capacity has been estimated at around 10⁴ bps (Jacobson, 1950). So although parallels between the auditory and visual systems are not obvious in the way that the skin/retina analog is, the ear has the potential to provide a higher-throughput means of directing visual information to the brain than the skin.

The first general-purpose auditory-visual sensory substitution (AVSS) system was developed by Meijer (1992). It is known as “The vOICe” and is the system used in the present study. The vOICe converts images captured by a camera into “soundscapes” delivered to the user through headphones at a default rate of one soundscape per second. Each soundscape is a left-to-right scan of the visual scene, with frequency representing the image's vertical axis and loudness representing brightness (these mappings are not arbitrary; see Evans and Treisman, 2010). The user therefore experiences a series of “snapshots” passing from the left to the right ear. Other AVSS devices have been developed: one uses a similar encoding protocol to The vOICe but converts scenes into images resembling line drawings and produces a more “musical” output (Cronly-Dillon et al., 1999, 2000); another, the Prosthesis for Substitution of Vision by Audition (PSVA), does not scan the visual field but lets frequency increase both from bottom to top and from left to right of the captured image, using a higher density of auditory “pixels” in the center of the image to simulate the fovea (Capelle et al., 1998); and a third, the Vibe, also does not scan the visual field, instead dividing it into several “receptive fields” which are presented simultaneously, their position encoded by frequency and left-right audio channel balance (Auvray et al., 2005; Hanneton et al., 2010).
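As a rough illustration of this style of encoding (a toy sketch only, not the actual vOICe algorithm; the frequency ladder and one-second scan duration are arbitrary choices here):

    // Toy sketch of a vOICe-style mapping, NOT the real implementation.
    // image: 2D array [row][col] of brightness values in 0..1, row 0 = top.
    // Returns mono samples for a one-second left-to-right scan of the image.
    function imageToSoundscape(image, sampleRate) {
        var rows = image.length;
        var cols = image[0].length;
        var samplesPerColumn = Math.floor(sampleRate / cols);
        var out = new Float32Array(samplesPerColumn * cols);
        for (var c = 0; c < cols; c++) {                     // scan left to right
            for (var s = 0; s < samplesPerColumn; s++) {
                var t = (c * samplesPerColumn + s) / sampleRate;
                var sum = 0;
                for (var r = 0; r < rows; r++) {
                    // higher image rows map to higher frequencies (arbitrary ladder)
                    var freq = 500 + (rows - 1 - r) * 50;
                    sum += image[r][c] * Math.sin(2 * Math.PI * freq * t);
                }
                out[c * samplesPerColumn + s] = sum / rows;  // brightness -> loudness
            }
        }
        return out;
    }

The stereo panning of the scan from the left ear to the right ear, which the actual device also uses, is omitted here for brevity.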

As with TVSS devices, users of AVSS systems report distal attribution (Auvray et al., 2005). Users have been shown to recognize patterns (Arno et al., 2001) and to recognize and locate objects in 3D space, including placing previously unseen objects into categories such as “plant” or “boot” (Auvray et al., 2007; Merabet et al., 2009). One expert late-blind user of The vOICe, P.F., has provided repeated, detailed descriptions of her experiences which, she claims, have gradually improved and become more like vision. Depth perception, smooth movement (as opposed to 1 Hz “snapshots”) and even experience of colors emerged with continued use of the device for P.F., suggesting that her brain had been gradually adapting to more efficiently process this novel kind of auditory information (Ward and Meijer, 2010).

Acuity in Sensory Substitution Systems

An important factor in the usefulness of a system in which vision is the substituted modality is the limit on detail resolvable by the user. Finding this limit can be achieved in much the same way that visual acuity is conventionally measured. Some studies have measured acuity through indirect means, by assessing the ability of participants to either localize or recognize objects with AVSS devices (Auvray et al., 2007; Proulx et al., 2008; Brown et al., 2011). The study by Proulx et al. (2008) even used an ophthalmic perimeter, commonly used to map the visual field, as a means of assessing the speed and accuracy of spatial localization using sensory substitution. Other studies determined the acuity limits of TVSS devices directly. The acuity limit of legal blindness in the United States is 20/200; that is, a person with this level of acuity can read an eye chart located 20 feet away as well as a person with normal vision would read the same eye chart were it 200 feet away (Social Security Act. United States Social Security Administration, 2006). Normal vision thus corresponds to an acuity of 20/20.

The translation of visual acuity to sensory substitution is not entirely straightforward as the computation requires consideration of the field of view provided by the device. For example, it might be physically possible to provide 20/20 vision with a SSD through telescopic means. However, if this is accompanied by severe tunnel vision due to a restricted field of view, then the end result is still classified as a severe visual impairment. In fact, the definition of legal blindness in the United States specifies an acuity of 20/200 or less, or a field of view of 20° or less. A full explanation and demonstrations of the issues involved in defining acuity for sensory substitution are also available online¹, but we will summarize the main points here. For our calculations of acuity we will assume a 60° field of view for the camera, as we used in the experiments reported here. This is a typical field of view for web-cams and similar devices (and may, for lack of a suitable standard, serve as a ballpark figure for a practical field of view). The minimum number of pixels required to portray most optotypes used in acuity measurement would be 2–3 pixels horizontally. Assuming 176 horizontal pixels for the camera input, as we also use in our experiments, then every pixel subtends approximately 0.35° in width. The smallest discernable optotype then spans about one degree for 3 pixels horizontally (3 × 60°/176 ≈ 1°), or 0.7° for 2 pixels (2 × 60°/176). Normal vision under Snellen's definition is the ability to recognize one of the Snellen chart optotypes when it subtends 5 min of arc (Duke-Elder and Abrams, 1970). Functionally, this means that visual acuity under the above conditions is between 8 and 12 times less than that possible with normal human vision. At best, visual acuity could be in the range 20/160–20/240. The crucial aspect of these calculations for comparisons with reports of visual acuity in the literature is that they are based on a horizontal resolution of 176 pixels for a 60° field of view. If the physical resolution of a sensory device provides much less than this, then the maximum visual acuity possible with that device is dramatically reduced for the same field of view.
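In equation form, using the 60° field of view, 176-pixel horizontal resolution, and 5 min of arc optotype criterion assumed above (a worked restatement of the calculation just described, not additional data):

    \text{acuity denominator} \;=\; 20 \times \frac{n_{\mathrm{px}} \times (\mathrm{FOV}/N_{\mathrm{px}}) \times 60\ \text{arcmin/deg}}{5\ \text{arcmin}}

where n_px is the optotype width in pixels and N_px is the horizontal pixel resolution of the device. For a 2-pixel optotype this gives 20 × (2 × 60/176 × 60)/5 ≈ 164, i.e. roughly 20/160; for a 3-pixel optotype it gives ≈ 245, i.e. roughly 20/240. The same formula reproduces the theoretical limits quoted below for the tactile displays, e.g. 20 × (2 × 54/12 × 60)/5 = 2160 for the 12-electrode array with a 54° field of view.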

The first study to assess visual acuity with sensory substitution was conducted in the domain of touch by Sampaio et al. (2001). Sampaio et al. used the Snellen tumbling E paradigm to test blind and sighted naïve participants' performance using a 3 cm², 12 × 12 electrotactile array or “tongue display unit” (TDU). Their setup included a camera with a 54° horizontal and 40° vertical field of view, and its 280 × 180 pixel frames were down-sampled to the 12 × 12 tactile display resolution by averaging adjacent pixels. Judging acuity as performance at or near 100% in letter orientation discrimination, they reported that all participants were able to achieve this to an acuity level of 20/860, and that two participants of median performance doubled their acuity after 9 h of training to 20/430. Because the device provided a resolution of 12 pixels horizontally, the actual functional acuity might be far less, with a maximum theoretical acuity of 20/2400 for a 2 pixel wide optotype and a 60° field of view, or 20/2160 when calculated for their camera's 54° field of view. For example, in the latter case the denominator is calculated as (2 pixels × 54°/12 electrodes) × (60 min of arc per degree/5 min of arc for normal vision) × 20 for normal vision = 2160.

The second study to assess acuity was conducted by Chebat et al. (2007), who tested a larger sample of early blind and sighted participants on a 4 cm², 10 × 10-array TDU. After a period of training, participants were also tested using the Snellen tumbling E. The criterion for passing each level was 70% correct responses. Acuity scores ranged between 20/1800 and 20/8400 for an estimated 29° field of view, and it was found that blind participants were overrepresented at higher acuity scores, with 8.4% of sighted and 31.3% of blind participants achieving the highest score. Again, by using the calculations and limitations described earlier, the maximum theoretical acuity for a 10 pixel device such as this would be 20/2880 for a 2 pixel wide optotype and a 60° field of view, or 20/1392 when calculated for their 29° field of view. The latter is consistent with the range of acuity scores reported by Chebat et al. (2007) for their narrower field of view.

Acuity using The vOICe AVSS device has recently been reported by Striem-Amit et al. (2012) for nine fully blind participants who had already been trained to use the device. Participants were trained for between 55 and 101 h and tested on Snellen tumbling Es. Using a criterion of 60% correct responses, participants' acuity is reported to have varied between 20/200 and 20/600 using a 66° field of view camera. The present study was designed to assess a number of additional issues beyond the scope of the study by Striem-Amit et al. First, their study was conducted only with expert users of the SSD who were also blind. It is thus unclear whether the acuity levels achieved reveal the resolution of the device, or rather the compensatory neural plasticity of the blind participants combined with their expertise in using the device. Furthermore, the mechanisms that give rise to the acuity performance are also unclear. To provide a benchmark measure of acuity, we here employed naïve sighted participants without previous experience of the device. Furthermore we tested them under different conditions (static and active use of a camera), and with additional experiments and questionnaires to determine the possible correlates of individual differences in acuity performance.

The present study also used the Snellen tumbling E in two separate experiments: in the first, The vOICe software was used to turn letter images of decreasing size into sound files offline, which were played to participants as static soundscapes; in the second, blindfolded participants used a sunglasses-mounted camera and headphones to “read” letters from a screen. Acuity in present tongue-based TVSS devices is limited by the number of electrodes on the array (144 in Sampaio et al., 2001, and 100 in Chebat et al., 2007). The vOICe software, by contrast, produces an equivalent resolution of 11,264 “voicels” or auditory pixels in the default setting. This fact, along with the higher informational capacity of the ear (Jacobson, 1950), suggests that higher acuity scores with audition should be possible than those in the tactile studies cited above (see, e.g., Sampaio et al., 2001; Chebat et al., 2007; Striem-Amit et al., 2012).

As well as assessing the mean acuity of a sample group, the present study also takes an individual differences approach to determine whether any correlations can be found between performance on acuity tests with a SSD and other metrics. It has been shown, for example, that musical training correlates with improved ability to extract information from sound pre-attentively (Koelsch et al., 1999), and to extract speech from noise (Parbery-Clark et al., 2009). Many of the participants also took part in additional experiments to explore such individual differences. First, we assessed whether there was any relationship between acuity and another form of auditory expertise, musical training. Their ability to discriminate between similar musical phrases and their pitch discrimination abilities were also tested. This study also considered whether early, naïve use of sensory substitution immediately draws upon the substituted modality (vision) or only the substituting modality (hearing). Work on synesthesia, a cross-wiring of the senses where a sound might evoke a visual experience, such as music evoking different colors (Hubbard et al., 2011), suggests that the sensory modalities are not always distinct, independent modules. Certainly one broad goal for work on sensory substitution is to ultimately provide the phenomenological experience of vision in a form of synthetic synesthesia (Proulx and Stoerig, 2006; Proulx, 2010). Along this line of interest, here participants also took a vividness of visual imagery questionnaire (Marks, 1973), as well as a psychophysical test designed by Cui et al. (2007) to correlate with the vividness of imagery reported by participants. If individual differences such as these can be found to correlate with acuity performance, they may be useful as proxies to gauge a person's likelihood of making successful use of an AVSS device such as The vOICe and to reveal potential mechanisms for such performance. This also assesses whether visual imagery evoked by the device, as a form of synthetic synesthesia (Proulx and Stoerig, 2006; Proulx, 2010), is related to measures of the functional resolution possible with the device.


5 Mixing Mistakes To Avoid: Using Panning & Imaging Tools

As we get caught up in using all the wonderful tools available for mixing in the modern world, it's sometimes easy to overlook the most basic. After level adjustments, panning (positioning different tracks between left and right in the stereo soundfield) is one of those basic mixing tasks that contributes a lot more than it may seem. And while panning is a pretty straightforward mix move, it's still possible to do a less-than-optimum job of it. This article covers a handful of things to look out for: potential positioning pitfalls when panning that could contribute to a less-than-perfect mix.

