I have been thinking about how using social networks (such as Facebook and Twitter) has been changing the way we think. For example, the Facebook feed is tailored to what we like, and on Twitter I usually follow people whose ideas I agree with. So we are increasingly exposed to information we like, which makes us biased toward the ideas we already favor.
My question is: can you kindly point me to research studies showing that we prefer to see and read information on themes, topics, and ideas we already like? Any pointers would be appreciated. Thanks in advance.
What you describe is known as Selective Exposure to Information.
Selective exposure is a theory within the practice of psychology, often used in media and communication research, that historically refers to individuals' tendency to favor information which reinforces their pre-existing views while avoiding contradictory information.
Below are extracts from this article about selective exposure:
The researchers found that people are about twice as likely to select information that supports their own point of view (67 percent) as to consider an opposing idea (33 percent). Certain individuals, those with closed-minded personalities, are even more reluctant to expose themselves to differing perspectives.
Perhaps more surprisingly, people who have little confidence in their own beliefs are less likely to expose themselves to contrary views than people who are very confident in their own ideas…
The researchers also found, not surprisingly, that people are more resistant to new points of view when their own ideas are associated with political, religious or ethical values.
The above article is based on this research article.
A closely related theory is confirmation bias.
Confirmation bias, also called confirmatory bias or myside bias, is the tendency to search for, interpret, favor, and recall information in a way that confirms one's beliefs or hypotheses, while giving disproportionately less consideration to alternative possibilities.
In my opinion, the reasons why we employ selective exposure are:
- To preserve and form identity
- The need for certainty: we need firm theories that do not change whenever we encounter disconfirming evidence
- Competence: the need to maintain good skills (e.g., participants in a fishing forum)
The introduction should contain your thesis statement or the topic of your research as well as the purpose of your study. You may include here the reason why you chose that particular topic or simply the significance of your research paper's topic. You may also state what type of approach it is that you'll be using in your paper for the entire discussion of your topic. Generally, your Introduction should orient your readers to the major points the rest of the paper will be covering, and how.
The body of your paper is where you will be presenting all your arguments to support your thesis statement. Remember the “Rule of 3,” which states that you should find 3 supporting arguments for each position you take. Start with a strong argument, followed by a stronger one, and end with the strongest argument as your final point.
Behaviourism and the Question of Free Will
Although they differed in approach, both structuralism and functionalism were essentially studies of the mind. The psychologists associated with the school of behaviourism, on the other hand, were reacting in part to the difficulties psychologists encountered when they tried to use introspection to understand behaviour. Behaviourism is a school of psychology that is based on the premise that it is not possible to objectively study the mind, and therefore that psychologists should limit their attention to the study of behaviour itself. Behaviourists believe that the human mind is a black box into which stimuli are sent and from which responses are received. They argue that there is no point in trying to determine what happens in the box because we can successfully predict behaviour without knowing what happens inside the mind. Furthermore, behaviourists believe that it is possible to develop laws of learning that can explain all behaviours.

The first behaviourist was the American psychologist John B. Watson (1878-1958). Watson was influenced in large part by the work of the Russian physiologist Ivan Pavlov (1849-1936), who had discovered that dogs would salivate at the sound of a tone that had previously been associated with the presentation of food. Watson and the other behaviourists began to use these ideas to explain how events that people and other organisms experienced in their environment (stimuli) could produce specific behaviours (responses). For instance, in Pavlov’s research the stimulus (either the food or, after learning, the tone) would produce the response of salivation in the dogs. In his research Watson found that systematically exposing a child to fearful stimuli in the presence of objects that did not themselves elicit fear could lead the child to respond with a fearful behaviour to the presence of the objects (Watson & Rayner, 1920; Beck, Levinson, & Irons, 2009).
In the best known of his studies, an eight-month-old boy named Little Albert was used as the subject. Here is a summary of the findings: The boy was placed in the middle of a room; a white laboratory rat was placed near him, and he was allowed to play with it. The child showed no fear of the rat. In later trials, the researchers made a loud sound behind Albert’s back by striking a steel bar with a hammer whenever the baby touched the rat. The child cried when he heard the noise. After several such pairings of the two stimuli, the child was again shown the rat. Now, however, he cried and tried to move away from the rat. In line with the behaviourist approach, the boy had learned to associate the white rat with the loud noise, resulting in crying.
Figure 1.7 Skinner. B. F. Skinner was a member of the behaviourist school of psychology. He argued that free will is an illusion and that all behaviour is determined by environmental factors.
The most famous behaviourist was Burrhus Frederick (B. F.) Skinner (1904-1990), who expanded the principles of behaviourism and also brought them to the attention of the public at large. Skinner (Figure 1.7) used the ideas of stimulus and response, along with the application of rewards or reinforcements, to train pigeons and other animals. And he used the general principles of behaviourism to develop theories about how best to teach children and how to create societies that were peaceful and productive. Skinner even developed a method for studying thoughts and feelings using the behaviourist approach (Skinner, 1957, 1972).
Research Focus: Do We Have Free Will?
The behaviourist research program had important implications for the fundamental questions about nature and nurture and about free will. In terms of the nature-nurture debate, the behaviourists agreed with the nurture approach, believing that we are shaped exclusively by our environments. They also argued that there is no free will, but rather that our behaviours are determined by the events that we have experienced in our past. In short, this approach argues that organisms, including humans, are a lot like puppets in a show who don’t realize that other people are controlling them. Furthermore, although we do not cause our own actions, we nevertheless believe that we do because we don’t realize all the influences acting on our behaviour.
Recent research in psychology has suggested that Skinner and the behaviourists might well have been right, at least in the sense that we overestimate our own free will in responding to the events around us (Libet, 1985; Matsuhashi & Hallett, 2008; Wegner, 2002). In one demonstration of the misperception of our own free will, neuroscientists Soon, Brass, Heinze, and Haynes (2008) placed their research participants in a functional magnetic resonance imaging (fMRI) brain scanner while they presented them with a series of letters on a computer screen. The letter on the screen changed every half second. The participants were asked, whenever they decided to, to press either of two buttons. Then they were asked to indicate which letter was showing on the screen when they decided to press the button. The researchers analyzed the brain images to see if they could predict which of the two buttons the participant was going to press, even before the letter at which he or she had indicated the decision to press a button. Suggesting that the intention to act occurred in the brain before the research participants became aware of it, the researchers found that the prefrontal cortex region of the brain showed activation that could be used to predict the button pressed as long as 10 seconds before the participants said that they had decided which button to press.
Research has found that we are more likely to think that we control our behaviour when the desire to act occurs immediately prior to the outcome, when the thought is consistent with the outcome, and when there are no other apparent causes for the behaviour. Aarts, Custers, and Wegner (2005) asked their research participants to control a rapidly moving square along with a computer that was also controlling the square independently. The participants pressed a button to stop the movement. When participants were exposed to words related to the location of the square just before they stopped its movement, they became more likely to think that they controlled the motion, even when it was actually the computer that stopped it. And Dijksterhuis, Preston, Wegner, and Aarts (2008) found that participants who had just been exposed to first-person singular pronouns, such as “I” and “me,” were more likely to believe that they controlled their actions than were people who had seen the words “computer” or “God.” The idea that we are more likely to take ownership for our actions in some cases than in others is also seen in our attributions for success and failure. Because we normally expect that our behaviours will be met with success, when we are successful we easily believe that the success is the result of our own free will. When an action is met with failure, on the other hand, we are less likely to perceive this outcome as the result of our free will, and we are more likely to blame the outcome on luck or our teacher (Wegner, 2003).
The behaviourists made substantial contributions to psychology by identifying the principles of learning. Although the behaviourists were incorrect in their beliefs that it was not possible to measure thoughts and feelings, their ideas provided new ideas that helped further our understanding regarding the nature-nurture debate and the question of free will. The ideas of behaviourism are fundamental to psychology and have been developed to help us better understand the role of prior experiences in a variety of areas of psychology.
What Happens When Dad Disappears
An absence of dad, no matter how he goes, means an absence of benefits. “For dads that live far away, it doesn’t seem there’s tons of evidence that what they do matters for their children,” says sociologist Marcy Carlson. “Dads living with their kids are much more involved; they read stories to their children and put their kids to bed. If you look at comparisons of resident and non-resident dads, there’s a consistent difference in average involvement.”
Children who lose a father to death or incarceration usually suffer more than those who have uninvolved fathers. Several research projects have focused on how a father’s incarceration can harm children. The largest of these efforts is Princeton University’s Fragile Families Study, which currently follows a cohort of 5,000 children born in the United States between 1998 and 2000. Most of the children in the study have unmarried parents and absentee fathers, for a variety of reasons. One of the most sobering findings of the Fragile Families Study is that when a dad is far away there is relatively little he can do to have a positive influence on his children.
When “being away” means being behind bars, kids face additional challenges — sometimes more serious ones than what they would have faced had their fathers died or left due to divorce. “Most of the literature on widowhood shows that kids whose dads died are better off than kids who go through a divorce,” she says. As for incarceration, “there’s a lot of stigma and stress. I wouldn’t be surprised if it’s worse for kids when their dads are away due to incarceration.”
Have expert determinations been applied outside of the health field?
Yes. The notion of expert certification is not unique to the health care field. Professional scientists and statisticians in various fields routinely determine and accordingly mitigate risk prior to sharing data. The field of statistical disclosure limitation, for instance, has been developed within government statistical agencies, such as the Bureau of the Census, and applied to protect numerous types of data.
Types of Qualitative Research Methods
Researchers collect data on the target population, place, or event using different types of qualitative research analysis.
Below are the most commonly used qualitative research types for writing a research paper, followed by a detailed description of each.
Ethnography

The list below shows the basic elements of the ethnography method.
- Describe cultural characteristics
- Identify the cultural aspects and variables by reviewing the literature
- Get involved in the environment, live with the target audience, and collect data through observing and interacting with subjects
Ethnography is a branch of anthropology that provides the scientific explanation of human societies and cultures. It is one of the most popular and widely used techniques of qualitative research.
The fieldwork requires the researcher to get involved in the environment and live with the group being studied. This immersion helps the researcher understand the goals, motivations, challenges, and cultures of the individuals.
Fieldwork also helps to illustrate cultural characteristics such as:
- Tribal systems
- Shared experience
- Lifestyle
Rather than conducting surveys and interviews, researchers experience the environment and act as an observer. Thus, the primary data collection method is observation over an extended period.
However, it would also be appropriate to interview those who have studied the same cultures.
Ethnographic research becomes difficult if the researcher is not familiar with the social morals and language of the group. Furthermore, interpretations by outsiders may also lead to confusion.
Thus, it requires the researcher to validate the data before presenting the findings.
A good approach to understanding the needs of customers is to observe their daily activities and notice how they interact with the product. For this, you don’t have to come up with any hypotheses to test; you only need to immerse yourself in the social life of the subjects.
Narrative Method

Have a look at the table below.
| Element | Description |
|---|---|
| Purpose | Collect data in the form of a cohesive story |
| Method | Review the sequence of events, and conduct interviews to describe the largest influences that affected an individual |
| Analysis | Analyze different life situations and opportunities |
| Outcomes | Present a short story with themes, conflicts, and challenges |
The narrative research method unfolds over a long period of data collection, weaving a sequence of events into a cohesive story. Like a story, it follows a subject from a starting point through different situations in life.
Here, the researcher conducts in-depth interviews, reads various documents, and reviews the events that have most shaped the individual’s personality.
Interviews may be conducted weeks, months, or even years apart. The outcomes are then presented as a short story with themes, which may include the conflicts, tensions, and challenges that can become opportunities for innovation.
The narrative method can be used in a business to understand the different challenges faced by the target audience. Moreover, it can be utilized for further innovation and development of products.
Phenomenological Method

The following are the essential aspects of this research method.
- Sampling and data collection through interviews, observation, surveys, and document review
- Describing and writing up the experience of the phenomenon
Phenomenological means the study of a phenomenon, such as an event, situation, or experience. This approach is well suited to describing something from different angles and adding to existing knowledge. Similarly, it focuses on subjective experiences.
Here, a researcher uses different methods to gather data and understand the phenomenon. These methods include interviews, visiting places, observation, surveys, and reading documents.
Lastly, this technique takes into account how participants feel about things during an event or activity. Thus, a database with themes is formed to validate the findings.
You can use this method to understand why students prefer to take online courses. Moreover, it will also identify the reasons behind the rise in the number of such students over the last few years.
Grounded Theory Method
Check out the list below to understand the elements of the grounded theory method.
- Used to develop theory and to identify social developments and ways to deal with them
- Involves the formulation and testing of data until a theory is developed
A phenomenological study describes an event, whereas a grounded theory approach provides an explanation, reasons, or theory behind that event. Grounded theory aims to develop new theories by collecting and analyzing data about a phenomenon.
Here, a researcher makes use of various data collection techniques, including observation, interviews, literature review, and analysis of relevant documents. Moreover, the unit of content analysis is a specific phenomenon or incident, not individual behaviors.
Usually, different coding techniques and large sample sizes are used to identify themes and develop a better theory.
This method can be used in businesses to conduct surveys and to demonstrate why consumers use the company’s products or services. The data collected through these surveys helps companies improve and maintain customer satisfaction and loyalty.
Case Study Method

Here are the main characteristics of the case study method.
| Element | Description |
|---|---|
| Purpose | Describe an experience, person, event, or place in detail |
| Method | Direct observation and interaction with the subject |
| Analysis | Analyze the experiences |
| Outcomes | An in-depth description of the subjects |
The case study approach compiles information over extended periods of time. It involves an in-depth understanding of a subject such as an event, person, business, or place.
Similarly, the data is collected from various sources, including interviews, direct observation, and historical documentation.
Case studies are carried out in different disciplines like law, education, medicine, and the sciences. They can be descriptive or explanatory in nature.
Furthermore, this method is used when the researcher wants to focus on:
- Answering ‘how’ and ‘why’ research questions
- The behaviors under observation
- Understanding the phenomenon
- The context of the phenomenon
Businesses can use case studies to showcase their business solutions effectively and to identify how they can solve a particular problem for the subject.
Suppose a company, AB, introduces new UX designs into an agile environment. A case study of that experience would be enlightening to many companies.
Historical Method

Have a look at the list below to understand the historical method.
- Develop your idea after reading the relevant literature
- Develop the types of qualitative research questions
- Identify the sources - archives, libraries, papers
- Clarify the reliability and validity of data sources
- Create a research outline to organize the process
- Collect data
- Analyze the data by accepting or rejecting it
- Identify the conflicting evidence
The historical method describes past events to understand present scenarios and predict future choices. It answers research questions based on a hypothetical idea; the technique then uses multiple sources to test that idea against potential challenges.
It also requires the development of the research outline to organize the whole process. Lastly, the historical method presents the findings in the form of a biography.
For creating new ads, businesses can use historical data of previous ad campaigns and the targeted demographics.
Research Paper Outline Example
Before you plan on writing a well-researched paper, make a rough draft.
Brainstorm again and again!
Pour all of your ideas into the outline.
There is no single set standard, but you can follow the research paper outline example below:
This example outlines the following elements:
- Thesis Statement
- Main Idea
- Sub Idea
Use this standard outline to polish your research papers. Here is a step-by-step guide that will help you write a research paper according to this format.
Example of a Literature Review
What if a novice reads your research paper?
They will never understand the critical elements involved in the research paper.
To enlighten them, focus on the literature review section. This section offers an extensive analysis of the past research conducted on the paper’s topic.
It is relatively easy to write compared to other sections of the paper.
Take a closer look at the paper below to find out.
Methods Section of Research Paper
Excellent research papers focus a great deal on the methodology.
Yes, the research sample and methodology define the fate of the papers.
Are you having trouble getting through the methodology section?
Relax and let comprehensive sample research papers clear your doubts.
Social Media Conversations About Race
Americans are increasingly turning to social media for news and political information and to encourage others to get involved with a cause or movement. Social media also can serve as an important venue where groups with common interests come together to share ideas and information. And at times, Twitter, Facebook and other social media sites can help users bring greater attention to issues through their collective voice.
In recent years, these platforms have provided new arenas for national conversations about race and racial inequality. Some researchers and activists credit social media – in particular, Black Twitter – with propelling racially focused issues to greater national attention. In fact, two of the most used hashtags around social causes in Twitter history focus on race and criminal justice: #Ferguson and #BlackLivesMatter. In addition to social and political issues, social media also serve as places where conversations about race intersect with a number of issues, including pop culture, sports and everyday personal experiences.
A new Pew Research Center survey finds significant differences in the way black and white adults use social media to share and interact with race-related content. And a Pew Research Center analysis of tweets reveals that key news events – from Baltimore, to Charleston, South Carolina, to Dallas – often serve as a catalyst for social media conversations about race.
Black social media users (68%) are roughly twice as likely as whites (35%) to say that at least some of the posts they see on social networking sites are about race or race relations. When it comes to their own postings, a similar racial gap exists. Among black social media users, 28% say most or some of what they post is about race or race relations; 8% of whites say the same. On the other hand, roughly two-thirds (67%) of whites who use social media say that none of the things they post or share pertain to race.
In addition to the survey data, Pew Research Center conducted three content analysis case studies using publicly available tweets. The first analysis found that over a 15-month period (from Jan. 1, 2015, through March 31, 2016) there were about 995 million tweets about race – or, on average, 2.1 million tweets per day on the subject. By contrast, about 500 million tweets in total were posted on Twitter each day in 2015, meaning that tweets mentioning race made up roughly 0.4% of all tweets posted.
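As a rough consistency check, the daily average and overall share implied by these totals can be recomputed directly. The date range and both totals come from the text above; only the day count is derived here:

```python
# Back-of-the-envelope check of the tweet-volume figures reported above.
from datetime import date

race_tweets_total = 995_000_000    # tweets about race, Jan. 1, 2015 - March 31, 2016
all_tweets_per_day = 500_000_000   # approximate total tweets posted per day in 2015

# Inclusive span of the study window: 456 days
days = (date(2016, 3, 31) - date(2015, 1, 1)).days + 1

race_tweets_per_day = race_tweets_total / days       # about 2.2 million per day
share = race_tweets_per_day / all_tweets_per_day     # about 0.0044

print(f"{days} days, {race_tweets_per_day / 1e6:.1f}M race tweets/day, "
      f"{share:.2%} of all tweets")
# → 456 days, 2.2M race tweets/day, 0.44% of all tweets
```

The result is close to the reported average of 2.1 million tweets per day, and it shows that race-related tweets, while numerous in absolute terms, were a small fraction of overall daily Twitter volume.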
Of the tweets about race, a majority (60%) were directly related to news events, like the church shooting in Charleston, South Carolina, or the Grammy performance of rapper Kendrick Lamar.
The second case study focuses on the use of the hashtag #BlackLivesMatter, which predated the Black Lives Matter organization. The hashtag was used approximately 12 million times from July 12, 2013, through March 31, 2016, and during this period it was used more often in support of the movement than in opposition to it. Roughly 40% of the time, #BlackLivesMatter was used to display solidarity with the social movement, compared with 11% of the time when it was used to criticize the movement.
The third case study examines the Twitter conversation following the deaths of two black men at the hands of police and the shootings of police officers in Dallas and Baton Rouge, Louisiana. During this period (July 5-17, 2016), the hashtags #BlackLivesMatter, #AllLivesMatter and #BlueLivesMatter were used more often than at any other time since they began appearing on Twitter in July 2013. And almost overnight, the tone of the online conversation around #BlackLivesMatter shifted following the attacks on law enforcement: there was a dramatic rise in the share of tweets criticizing the Black Lives Matter movement using that hashtag in the July analysis and a drop in the share of tweets supporting it. The rise in critical tweets was especially notable after the killing of police officers in Dallas.
The analysis of public tweets, which was conducted using computer coding software from Crimson Hexagon, also found that the volume of race-related tweets tended to peak in the immediate aftermath of high-profile events and reflected more of a synthesis of ideas and reactions than an account of the details of those events.
3. Concerns about the future of people’s well-being
About half of the people responding in this study were in substantial agreement that the positives of digital life will continue to outweigh the negatives. However, as in all great technological revolutions, digital life has and will continue to have a dark side.
Roughly a third of respondents predicted that harms to well-being will outweigh the positives overall in the next decade. In addition, even among those who said they are hopeful that digital life will be more helpful than harmful and among those who said there will not be much change, there were many who also expressed deep concerns about people’s well-being in the future. All of these voices are represented in this section of the report.
Rob Reich, professor of political science at Stanford University, said, “If the baseline for making a projection about the next decade is the current level of benefit/harm of digital life, then I am willing to express a confident judgment that the next decade will bring a net harm to people’s well-being. The massive and undeniable benefits of digital life – access to knowledge and culture – have been mostly realized. The harms have begun to come into view just over the past few years, and the trend line is moving consistently in a negative direction. I am mainly worried about corporate and governmental power to surveil users (attendant loss of privacy and security), about the degraded public sphere and its new corporate owners that care not much for sustaining democratic governance. And then there are the worries about AI [artificial intelligence] and the technological displacement of labor. And finally, the addictive technologies that have captured the attention and mindspace of the youngest generation. All in all, digital life is now threatening our psychological, economic and political well-being.”
Rich Salz, principal engineer at Akamai Technologies, commented, “We have already seen some negative effects, including more isolation, less ability to focus, more ability to be deceived by bad actors (fake news) and so on. I do not see those lessening. Sadly.”
Leora Lawton, lecturer in demography and sociology and executive director of the Berkeley Population Center at the University of California, Berkeley, shared these reasons digital life is likely to be mostly harmful: “The long-term effects of children growing up with screen time are not well understood but early signs are not encouraging: poor attention spans, anxiety, depression and lack of in-person social connections are some of the correlations already seen, as well as the small number of teens who become addicts and non-functioning adults.”
David Ellis, Ph.D., course director of the department of communication studies at York University in Toronto, said, “Much like a mutating virus, digital services and devices keep churning out new threats along with the new benefits – making mitigation efforts a daunting and open-ended challenge for everyone. Over the next decade, the majority of North Americans will experience harms of many different kinds thanks to the widespread adoption and use of digital technologies. The last year alone has seen an outpouring of commentary, including some 20 trade books, arguing that our digital habits are harming individual welfare and tearing up the social fabric. In marketing its services, Silicon Valley is committed to the relentless promotion of convenience and connectedness. Its success in doing so has wreaked havoc on personal privacy, online security, social skills and the ability to focus attention, not least in college classrooms. While they may be victims of a kind, most consumers are simply in denial about their compulsive use of smartphones and social media, as well as other services designed by their developers to be addictive – a problem that persists even when legal sanctions are in play, as with texting while driving. There’s growing evidence these digital addictions are promoting depression, loneliness, video-gaming abuse and even suicidal behavior, especially among teens and young adults. Instead of feeling obliged to moderate their level of connectivity, however, consumers have come to feel a sense of entitlement about their habits, unconstrained by social mores that previously framed these habits as inappropriate. Indeed, heavy use of digital devices is widely encouraged because of the misguided idea that so-called multitasking makes us more productive.”
An anonymous research scientist and professor said, “The grand internet experiment is slowly derailing. The technologies that 50 years ago we could only dream of in science fiction novels, which we then actually created with so much faith and hope in their power to unite us and make us freer, have been co-opted into tools of surveillance, behavioral manipulation, radicalization and addiction.”
The next few sections share primary concerns expressed by respondents, grouped under commonly expressed themes: digital deficits, digital addiction, digital distrust/divisiveness, digital duress and digital dangers.
Digital deficits: People’s cognitive capabilities will be challenged in multiple ways, including their capacity for analytical thinking, memory, focus, creativity, reflection and mental resilience
A number of respondents said people’s cognitive capabilities seem to be undergoing changes detrimental to human performance. Because these deficits are found most commonly among those who live a highly digital life, they are being attributed to near-constant connectivity online.
Steven Polunsky, a research scientist at Texas A&M University, wrote, “One way to describe how we behave is the OODA cycle – when something happens, we Observe it, Orient it to our personal context, Decide what to do and Act on that decision. The internet is easily weaponized to short-circuit that process, so we receive minimal information and are urged to act immediately on it. Unless behavior changes and adapts, this tendency will lead to greater dissatisfaction among internet users and those affected by their actions, which may be a wide audience.”
Nikki Graves, an associate professor at Emory University’s Goizueta Business School, said, “We currently live in a culture that fosters attention-deficit disorder because of hyperconnectivity. I have been teaching at the college level since 1993, and I can see a definitive decline in students’ ability to focus on details and in general. Additionally, I believe that the research on the relationship between hyperconnectivity and this has merit.”
Meg Mott, a professor of politics at Marlboro College, said, “The internet is harming well-being. My answer has to do with the disturbing trend amongst college students, who operate as if all questions should be answered online. The devices make it so easy to find answers elsewhere that students forget to ask deep questions of themselves. This lack of uninterrupted introspection creates a very human problem: the anxiety of not knowing oneself. The more the culture equates knowledge with data and social life with social media, the less time is spent on the path of wisdom, a path that always requires a good quotient of self-awareness. This becomes evident in classes where a portion of the grade is derived by open-ended writing assignments. In order to write a compelling essay, the author needs to know that the process of crafting a question is more interesting than the retrieval of any answer. Instead, the anxiety is attached to getting the ‘right’ piece of data. I am of the mind that a lot of the anxiety we see in college students is the agony of not having a clue about who they are. This hypothesis is now supported by Jean Twenge’s research on the impact of smartphones on the Millennial and post-Millennial generations.”
An anonymous director of one of the world’s foremost digital rights organizations said, “I’m concerned that the pace of technology creation is faster than the pace of our understanding, or our development of critical thinking. Consider, for a moment, the latest buzzword: blockchain. Yesterday, I heard about a blockchain app designed for consent in sexual interactions – designed, of course, by men in Silicon Valley. If it sounds ridiculous, that’s because it is. We’ve reached a phase in which men (always men) believe that technology can solve all of our social problems. Never mind the fact that a blockchain is a permanent ledger (and thus incontestable, even though sexual abuse can occur after consent is given) or that blockchain applications aren’t designed for privacy (imagine the outing of a sexual partner that could occur in this instance). This is merely one example, but I worry that we’re headed toward a world in which techno-solutionism reigns, ‘value’ has lost all its meaning, and we’re no longer taught critical-thinking skills.”
An anonymous president of a U.S.-based nonprofit commented, “Increasingly social media is continuing to reduce people’s real communication skills and working knowledge. Major industries – energy, religion, environment, etc., are rotting from lack of new leadership. The level of those with aliteracy – people who can read but choose not to do so – is increasing in percentage. The issues we face are complex and intertwined, obfuscated further by lazy bloated media and readers and huge established industry desperate to remain in power as cheaply, easily, safely and profitably as possible – of course! Those of us who still read actual books that require thinking rather than mere entertainment, must redouble our efforts to explain the complex phenomena we are in the midst of addressing in simple terms that can encourage, stimulate, motivate.”
Some respondents also more indirectly noted that individuals’ anxiety over online political divisiveness, security and privacy issues, bullying/trolling, their loss of independent agency due to lack of control over what they are served by platform providers, and other psychosocial stresses are contributing factors in this cognitive change.
An anonymous professor wrote, “As life becomes more and more monitored, what was previously private space will become public, causing more stress in people’s lives. Furthermore, some of these technologies will operate without a person’s knowledge or consent. People cannot opt out, advocate for themselves, or fix errors about themselves in proprietary algorithms.”
A sampling of additional comments about “digital deficits” from anonymous respondents:
- “We have less focus – too much multitasking – and not enough real connection.”
- “The downside is too much information and the lack of ability to manage it.”
- “Attention spans have certainly been decreasing recently because people are inundated with information today.”
- “There is increasing isolation from human interaction and increased Balkanization of knowledge and understanding.”
- “Over 50% of U.S. children over 10 now have some sort of social network-based application, whether it be Instagram, Snapchat or Minecraft. These children are always looking for what they may be missing online. They are increasingly finding it hard to be present and focused.”
- “The writing skills of students have been in constant decline, as they opt for abbreviations and symbols rather than appropriately structured sentences.”
- “Digital users who have not lived without technology will not know how to cope with utilizing resources outside of solely tech. With users relying on devices for companionship, we will no longer see people’s faces, only the blue or white screens reflecting from this effervescent gaze.”
Digital addiction: Internet businesses are organized around dopamine-dosing tools designed to hook the public
Some of the most-concerned respondents pointed to the monetization of attention – the foundation of the internet economy – as the driving force behind many wellness issues.
Douglas Rushkoff, writer, documentarian, and professor of media at City University of New York, said, “The real reason why digital technology will continue to compromise human cognition and well-being is that the companies dominating the space (Facebook, Google, Amazon) are run by people with no knowledge of human society or history. By leaving college at an early age, or running companies immediately after graduating, they fell under the spell of venture capitalists who push growth of capital over all other values. So the platforms will necessarily compromise humanity, democracy and other essential values. The larger the companies grow, the more desperate and extractive they will have to become to grow still further.”
Michael Kleeman, senior fellow at the University of California, San Diego and board member at the Institute for the Future, wrote, “The early promise of the Net has been realized, but the financial incentives to use it for harmful purposes, including legal and illegal ones, have proven too attractive. ‘Digital Life’ will continue to erode personal interactions, reduce the diversity of ideas and conversation and contribute to negative health impacts. Other than the use of data analytics we have virtually no proof that wearables, etc., alter health trajectories. We do have evidence of a radical reduction in privacy, increase in criminal activity (as digital means reduce the cost of major financial and personal crimes), reduction of engagement with and caring for the environment as a result of increased interaction with online and digital devices.”
Kate Thomas, a writer/editor based in North America, wrote, “Unfortunately, major social media corporations have discovered that anger and insecurity keep people glued to their screens. As long as profit is more important than people, digital life will only grow more destructive.”
An anonymous professor at one of the world’s leading technological universities who is well-known for several decades of research into human-computer interaction wrote, “Deterioration in privacy; slicing and dicing of identity for sale; identification of individuals as targets for political messaging. I don’t see the institutions growing that will bring this under control. I don’t see corporations taking sufficient responsibility for these issues.”
Sam Punnett, president of FAD Research Inc., said, “Distraction is our most prevalent commodity, paid for with attention span. The society-wide effects of ‘continuous partial attention’ and the tracking, analysis and corruption of the use of data trails are only beginning to be realized.”
Many respondents to this canvassing wrote about their concern that online products are designed to tap into people’s pleasure centers and create a dependence leading to addiction.
Richard Bennett, a creator of the WiFi MAC protocol and modern Ethernet, commented, “Highly-connected nations such as South Korea have had to develop treatment programs for internet addiction. Gamers in particular are subject to this malady, and Korea’s broadband networks make gaming very attractive to socially isolated teens.”
Vicki Davis, an IT director, teacher and podcaster based in North America, said, “Un-savvy consumers don’t realize the addictive nature of the dopamine hits they are getting through the social media sites they use. In an attempt to keep a Snapchat streak going or to perform for the illusion of a growing audience, this generation could easily live a life one inch deep and a mile wide instead of a deeper life with deeper relationships and deeper productivity. The future of society depends upon our ability to educate people who are willing to get out of the zone on their phone and live life in the real world. … Many students I work with seem to show some sort of withdrawal symptoms after just a few hours away from Snapchat or Instagram. The greatest innovations often happen with uninterrupted thought. This interruption generation must learn how to turn off their notifications and find satisfaction in solving problems that aren’t solved in a snap but take years of dedication. Without tenacity, self-control and some modicum of intelligence about the agenda of social media, the interruption generation will miss out on the greatness that could be theirs.”
Robert Stratton, cybersecurity entrepreneur, coach and investor, wrote, “While there may be beneficial uses for this technology … we cannot ignore the question of what happens when addictive technologies are coupled with very plausible but erroneous content, particularly when generated by skilled actors with specific goals. Additionally, there are decentralized, distributed-actor groups with information operations capabilities that I will assert now rival those of nation-states. Things are not what they seem. We now live in an environment where digital audio and video can be generated with modest skill to produce video that is functionally indistinguishable from photography while being essentially wholly specious. Most internet users and virtually all of the news media seem to be operating on two errant assumptions: 1) People mean what they write on the internet. 2) People are witting of their roles in events that occur due to their actions. I would respectfully assert that anyone with a basic knowledge of intelligence tradecraft would agree that these are naïve in the modern environment. Additionally, there are now generalized programming APIs that provide the ability to make essentially ANY application or website habituating for its users.”
An anonymous respondent predicted this scenario as a continuation of today’s trends into the next decade: “More and more will seem possible in all aspects of life. People may perceive that their lives are better, but it will be the experience of the lobster in the slowly boiling pot. Digital life will take people’s privacy and influence their opinions. People will be fed news and targeted information that they will believe since they will not access the information needed to make up their own minds. Out of convenience, people will accept limitations of privacy and narrowed information resources. Countries or political entities will be the influencers of certain groups of people. People will become more divided, more paranoid as they eventually understand that they have no privacy and need to be careful of what they say, even in their own homes. Some people will break free but at the loss of everything they had worked for. The digital divide will become worse, and many will be unable to pay for all the conveniences. To ensure simpler access and control, some political entities may try to make it available to everyone but at a cost of even more privacy. Convenience will be chosen over freedom. Perhaps.”
The massive change in people’s news-finding habits instigated by the rapid adoptions of the smartphone and social media was cited by some as the reason for the destruction of accurate, objective journalism, a foundation of democracy. An anonymous respondent commented, “The addictive nature of social media means the dis-benefits could be profound. Watch a young mother utterly engrossed in her phone and ignoring her small children and you will know what I mean. Humans need real-time, real-life interaction not just social interaction, yet the pull of the phone is overwhelming. More broadly, the platform companies are already destroying the business models of legacy media, and as that continues civic journalism will become thinner, poorer and possibly obsolete. Journalism won’t disappear. It will simply drift back to propaganda.”
A sampling of additional comments related to “digital addiction” from anonymous respondents:
- “Engaging apps and digital experiences are much like addictive substances such as alcohol, tobacco and even sweet foods and sex and there has been little progress in creating a ‘healthy’ consumption model for digital experiences.”
- “Kids and adults alike are prone to go for the quick fix, the easy high or pleasant feeling, but not well armed to understand its impact on their health.”
- “People’s well-being will continue to be affected by the internet because the software, hardware and structures that are already in place are built to do exactly this.”
- “As social networking becomes ‘professional grooming’ as well as providing family/friend updates, the need for multiple platforms (such as LinkedIn and Facebook/Instagram) becomes an assumed need. The amount of time it takes for workers to manage tedious online interactions will lead to an increasing lack of work/life balance.”
- “Behavioral and psychological impacts of digital life will continue to be discovered and will confirm negative trends.”
- “Digital communications and the time they take away from personal interactions are contributing to growing social isolation and eroding interpersonal relationships. This affects individuals’ mental well-being. People everywhere – walking, in their cars, in meetings, etc. – are glued to their cell phones.”
- “Unless we are more aware/careful/media literate, there are a lot of ‘analogue’ behaviours we will jettison that are actually more efficient, positive and valuable.”
- “When human beings are constantly reminding themselves about a selfish bubble they’ve lost touch with the truth.”
- “I fear … social media having us surround ourselves with people who think like we do, entrenching divisions among people.”
- “Engagement in social media takes a lot of time for the individual and gives back small and decreasing jolts of satisfaction for a substantial cost in time.”
- “There is a reason the iPhone was initially called a ‘crack-phone.’ Spending time on websites and apps is a very seductive way to avoid and/or ignore painful and difficult situations. I’ve seen very young children ignored while their caregiver texts, plays games or surfs the Net, and can’t help but wonder how this neglect is affecting them. Will these children learn to parent their children in a better way or will they do the same thing?”
Digital distrust/divisiveness: Personal agency will be reduced and emotions such as shock, fear, indignation and outrage will be further weaponized online, driving divisions and doubts
Among the most-expressed fears for well-being in the next decade were those having to do with issues of social isolation, societal distrust and identity and human agency.
Fay Niker, postdoctoral fellow at Stanford University’s Center for Ethics in Society, wrote, “Understanding well-being in terms of human flourishing – which includes among other things the exercise of autonomous agency and the quality of human relationships – it seems clear to me that the ongoing structuring of our lives by digital technologies will only continue to harm human well-being. This is a psychological claim, as well as a moral one. Unless we are able to regulate our digital environments politically and personally, it is likely that our mental and moral health will be harmed by the agency-undermining, disempowering, individuality-threatening and exploitative effects of the late-capitalistic system marked by the attention-extracting global digital communication firms.”
Evan Selinger, a professor of philosophy at Rochester Institute of Technology, wrote, “The repeal of the Obama administration’s 2015 rules for Net neutrality is a devastating blow. … Net neutrality is fundamentally about social control. Thanks to the [Ajit] Pai regime at the FCC, Internet Service Providers have more power than they deserve to micromanage how we conduct our online social, political, educational and economic lives. While Net neutrality advocates have identified several disheartening outcomes to be on our guard for, the projected parade-of-horribles only scratches the surface. If we can’t get the information superhighway right, it’s a bad omen for the future where we’ll need to govern a mature Internet of Things. Second, although analysis of the last U.S. presidential election is shining a spotlight on the problem of botified communication, the focus on internet propaganda obscures the more basic, habit-forming ways that we’re being techno-socially engineered to outsource more and more of our communication – and thus ourselves – to software. Third, despite increased awareness of the value of being able to spend time offline, practical constraints continue to make the freedom to unplug ever-harder to achieve.”
Adam Popescu, a freelance journalist who has written for The New York Times, Bloomberg and other publications wrote, “You see it everywhere. People with their heads down, more comfortable engaging with a miniature world-in-a-box than with the people around them. And you see it while they’re behind the wheel driving, while working and performing dangerous and focus-intensive tasks. Forget emotional happiness and the loss of focus and deep thought and the fact that we’re now more comfortable to choose who we sleep with based on an algorithm than we are based on serendipity, intuition, chance, and the potential for rejection by walking up to someone and saying ‘Hi, my name is …’ The biggest issue with our addiction to smartphones, one none of us talk about openly yet all engage in, is the threat to health and safety. Sure, no one says ‘hi’ anymore when they’re passing by, no one takes a moment to be friendly or reach out, even with just our eyes, because our eyes are no longer at eye-level, they’re down, hiding in our screens. Social media over the past year has been revealed for the ugly wolf-in-sheep’s clothing it is, a monster once draped in the skin of liberty. We see it for what it is. When will we see that it’s not just the programs and toys and apps and sites on our screens that are the problem – but our screens themselves?”
Judith Donath, author of “The Social Machine, Designs for Living Online,” also predicted, “We will see a big increase in the ability of technologies to affect our sense of well-being. The ability to both monitor and manipulate individuals is rapidly increasing. Over the past decade, technologies to track our online behavior were perfected; the next decade will see massively increased surveillance of our off-line behavior. It’s already commonplace for our physical location, heart rate, etc., to be tracked; voice input provides data not only about what we’re saying, but also the affective component of our speech; virtual assistants learn our household habits. The combination of these technologies makes it possible for observers (Amazon, government, Facebook, etc.) to know what we are doing, what is happening around us, and how we react to it all. At the same time, increasingly sophisticated technology for emotion and response manipulation is being developed. This includes devices such as Alexa and other virtual assistants designed to be seen as friends and confidants. Alexa is an Amazon interface – owned and controlled by a giant retailer: she’s designed, ultimately, to encourage you to shop, not to enhance your sense of well-being.”
A number of these experts wrote about their concerns that technology’s evolution would make people suffer a “loss of agency” and control over their world.
Dewayne Hendricks, CEO of Tetherless Access, said, “It is important to consider just how much of digital life is provided/controlled by cyber monopolies. Those entities will have an ever-increasing ability to control/shape the factors that make up that digital life. I see individuals for the most part having less control as time passes.”
John Klensin, Internet Hall of Fame member, longtime Internet Engineering Task Force and Internet Society leader, and an innovator of the Domain Name System administration, said, “I am impressed by the increasing anecdotal and research evidence of people not only using the internet to isolate themselves from others but to select the information they are exposed to in a way that confirms and strengthens their existing, predetermined views. While that behavior is certainly not new, the rapid turnaround and instant responsiveness of the internet and social media appear to be reinforcing it in ways that are ultimately undesirable, a situation that is further reinforced by the substitute of labeling and denunciations for examination and reasoning about facts.”
Rosanna Guadagno, a social psychologist with expertise in social influence, persuasion, and digital communication and researcher at the Peace Innovation Lab at Stanford University, wrote, “In my professional opinion, the current trends in digital communication are alarming and may have a negative long-term impact on human social interaction. It was naive of social media companies to fail to consider and prepare for the prospect that their platforms could be misused for large-scale information warfare (e.g., Russian interference in the 2016 U.S. presidential election). Furthermore, these companies have shirked their responsibility to their users by failing to protect their customers from cyberwarfare. This has not only interfered with people’s perception of reality and their ability to tell fact from fiction (I’ve actually conducted research demonstrating that information presented on a computer screen is perceived as more persuasive than comparable printed material). This has caused a lot of disinformation to spread online and has fueled myriad divisive online interactions. In addition to these issues, there is quite a bit of evidence mounting that people are spending more and more time alone using digital communication as a proxy for face-to-face interactions and this is increasing loneliness and depression among people, particularly our young adults. These technologies should be designed to promote healthy interactions. One way to accomplish this would be to switch to more interactive options for conversation (e.g., video chat instead of text-based conversation would reduce miscommunications and remind people that there are other people with real thoughts, feelings, and emotions behind the computer screen). It remains to be seen whether any of the promises made by digital technology companies to address these issues will be implemented.
As a faculty member, one issue I’ve also commonly noticed is how little time is spent on ethics and psychology as part of the typical software engineering course curriculum. The ethics of software development and the idea that technology should be designed to enhance people’s well-being are both principles that should be stressed as part of any education in software design.”
A sampling of quote excerpts tied to “digital distrust/divisiveness” from anonymous respondents:
- “The dominance of algorithmic decision-making and speed and reach of digital realms have proliferated cultures of misinformation and hatred. We have not yet adjusted to this. It may take a while for the political realm to fully engage with it, and for people to demand tech companies regulate better. I am more optimistic in the long run than I am in the short term.”
- “People spend too much time online, often devouring fake and biased items. They grow hateful of each other rather than closer in understanding. Negative and harmful ideologies now have platforms that can reach much farther.”
- “There will be an increase in isolation, further dependence on technology and an increase in unearned narcissism.”
Digital duress: Information overload + declines in trust and face-to-face skills + poor interface design = rises in stress, anxiety, depression, inactivity and sleeplessness
A swath of respondents argued that as digital life advances it will damage some individuals’ sense of self, their understanding of others and their faith in institutions. They project that as these technologies spread, they will suck up people’s time and attention and some will be overwhelmed to the point that they often operate under duress, in a near-constant state of alert.
Larry Rosen, a professor emeritus of psychology at California State University, Dominguez Hills known as an international expert on technology and its impacts on well-being, wrote, “1) We continue to spend more time connecting electronically rather than face-to-face, which lacks essential cues for understanding. 2) We also continue to attempt to multitask even though it harms performance. 3) We insist on using LED-based devices close to our eyes right up to bedtime even though it negatively impacts sleep and our brain’s nightly needs for synaptic rejuvenation, harming our ability to retain information.”
Susan Price, lead experience strategist at USAA, commented, “Mental health problems are rising and workplace productivity is falling. The tendency to engage with digital content and people not present instead of people in our immediate presence is growing, and small-screen trance has become an accepted interpersonal norm in the workplace. Culturally-induced attention-deficit behavior has already reached staggering proportions, and is still rising. The mini-serotonin payoffs we get when ‘connecting’ in this way are mildly, insidiously addictive and are squeezing out the more uneven, effortful, problematic real social connections we need for true productivity and intimacy.”
Stowe Boyd, futurist, publisher and editor-in-chief of Work Futures, said, “Well-being and digital life seem so intertangled because of the breakdown between personal and public life … that digital tools have amplified. One significant aspect of public life is our relationship to work. … We need to wake up to the proximate cause of the drive for well-being, which is the trap of overwork and the forced march away from living private lives.”
K.G. Schneider, dean of the university library at Sonoma State University, wrote, “Anonymized discourse, it turns out, is not a civilizing influence, nor is having one’s every thought broadcast in real time the best way for us to interact as humans.”
Marcus Foth, professor of urban informatics at Queensland University of Technology, wrote, “Advancement and innovation of digital technology is still predominantly driven by the goal to increase and optimise productivity rather than people’s quality of life or well-being. While proponents of an elusive work-life balance may argue that you can always switch off digital technology, the reality is that [it] is not being switched off – not because it cannot, but there is now a socio-cultural expectation to be always available and responding in real-time.”
Jan Schaffer, executive director at J-Lab, wrote, “Overall, people will be more harmed than helped by the way the internet is evolving. People’s trust in basic institutions has been hurt, perhaps irreparably, by conflicting accounts of what is true or not, online. People’s productivity at work has been hampered by the distractions of social media. People’s social and emotional intelligence have been impaired by the displacement of personal interactions with online interactions.”
An anonymous digital strategy director for a major U.S. professional association wrote, “Device use will lead to more social alienation, increased depression and less-fit people. Because it’s still relatively new, its dangers are not well understood yet.”
An anonymous professor wrote, “While there are many positive aspects to a more digitally connected life, I find that it is very difficult to keep up with the volume of spaces where one must go. I spend too much time answering emails, communicating in digital spaces and just trying to keep up. This causes a significant amount of stress and a lack of deliberate, thoughtful approach to information sharing. One cannot keep up with personal and professional email accounts, LinkedIn, Twitter, Facebook, Instagram and all the rest. Truly, it is just too much.”
A sampling of comments about “digital duress” from anonymous respondents:
- “There is too much connecting to other people’s anxieties and expectations.”
- “We already know there are negative effects for everyone waiting for a ‘like’ or other similar kind of gratification.”
- “I worry about mental illness and increasing social isolation as a result of more time spent with technology.”
- “Increased digitalization is leading to more sedentary lifestyles in a society already plagued with obesity challenges. Social media use has also led to poor communication skills; even in face-to-face settings, people opt to bury their faces in their smartphone screens.”
- “Some people are creating and then trying to live up to fake worlds they build with their phones.”
- “Constant connections to electronic-information feeds cause anxiety and damage to our eyes and brains.”
Digital dangers: The structure of the internet and pace of digital change invite ever-evolving threats to human interaction, security, democracy, jobs, privacy and more
A number of respondents pointed out that digital life opens the door to societal dangers that can affect individuals’ well-being. They say the digital world’s systems – the internet, the web, the smartphone, all networked digital hardware and software – have evolved so rapidly, due to their incredible appeal and the economic and social forces driving them forward, that there has been little recognition of, or real reckoning with, the wider negatives emerging alongside the positives.
What we are seeing now becoming reality are the risks and uncertainties that we have allowed to emerge at the fringes of innovation.
Anthony Rutkowski, internet pioneer and business leader, said, “Clearly – as DARPA’s director noted in his seminal 2000 millennium article on this topic – the past 17 years have demonstrated how the DARPA internet, which was never designed for public infrastructure use, has resulted in all kinds of adverse impacts to people’s lives and even the security of society. It has amplified the most outrageous behavior and alt[ernate]-truth as the new normal. See details of my position at http://www.circleid.com/posts/20170312_the_internet_as_weapon/ (Excerpt: ‘The existence of ‘an open platform that enables anyone, everywhere, to share information, access opportunities and collaborate across geographic and cultural boundaries’ globally is fundamentally a weapon. … Such an infrastructure has inherent economic, operational, and political self-destructive properties that are playing out exponentially every day.’)”
An anonymous longtime leader of research at one of the top five global technology companies said, “I chose my career believing that technology would improve our lives. Seeing what has happened, I’ve grown pessimistic. Our species has lived for millions of years in small communities – bands, tribes, extended families. We are wired to feel valued and good about ourselves through direct, repeated interactions in such groups. These tight-knit associations are disappearing as our activity moves online. Relationships are replaced by transactions. If we avoid catastrophe, in the long run natural selection will produce a new kind of human being that is adapted for the world we are creating. That individual will not be like most of us. Living through the transition will be painful.”
Aram Sinnreich, an associate professor at American University’s School of Communication, said, “In general, people’s lives will change for the worse over the next decade because of the internet. There are several factors I am taking into account here: 1) The increasing prevalence and power of internet-based surveillance of citizenry by state and commercial actors. 2) The catalyzing power of digital technology in exacerbating the gaps between haves and have-nots. 3) The as-yet-undertheorized and unchecked role of digital disinformation in polluting the democratic process and news dissemination channels. 4) The increasingly savvy and widespread use of the internet by crime syndicates. 5) The increasing vulnerability of our social infrastructure to internet disruption and hacking. 6) The environmental consequences of the internet, recently exemplified by studies analyzing the electrical power consumption that goes into Bitcoin transaction processing. This isn’t to say there aren’t many benefits to the internet, or that its impact won’t net positively over the longer term. But I don’t see any likely benefits outweighing the threats I outlined above over the next decade.”
An anonymous professor based in North America said there is a public perception of well-being – crafted by platform builders and policy (or lack of policy) – while well-being is actually being damaged. This respondent wrote, “People may very well experience an increase in subjective well-being. The techno-social world we’re building is increasingly geared toward engineering happy humans. While a life of cheap bliss, of satiated will, may yield more net well-being measured in terms of subjective happiness, it would at the same time be a rather pitiful life, devoid of many of the meaningful blessings of humanity. Brett Frischmann and Evan Selinger address the questions you’re asking in a 500-page book, ‘Re-Engineering Humanity,’ due out in April 2018. One chapter, ‘To What End?’ directly considers the normative values at stake and the issue of what well-being means. Other chapters explain in detail the technological path we’re on and how to evaluate techno-social engineering of humans.”
Bob Frankston, a technologist based in North America, said, “The internet is not a thing but rather a product of the ability to use software to program around limits. It enables the creation of systems of technologies that work in concert. But the benefits will be limited to point solutions as long as we are limited to solutions that are profitable in isolation, until we invest in common infrastructure and have open interfaces.”
Jeremy Blackburn, a computing sciences professor who specializes in the study of the impacts of digital life, wrote, “1) People will continue to be manipulated via targeted (mis/dis)information (sic) from a variety of sources. 2) There will be an increase in online harassment attacks that will be mostly ignored due to their statistical weight (Google/Facebook/Twitter/etc. do not care if 0.1% of their users are attacked, even though the raw numbers are substantial). 3) There will be an increase in extremists and their ability to recruit and radicalize vulnerable individuals. 4) There will be an increase in information silos, eventually resulting in extreme polarization of information acceptance. 5) There will be decreased concern about individual impact in the face of big data and large-scale machine learning (e.g., a 1% increase in revenue due to scale is worth it, even if it means a few people here and there will suffer). This will eventually cascade to large-scale suffering due to network effects. 6) There will be an increase in the acceptance of opinion as fact due to the democratization of information. No one knows if you are a dog on the Internet, and no one cares if you are an expert.”
An anonymous respondent commented, “What we are seeing now becoming reality are the risks and uncertainties that we have allowed to emerge at the fringes of innovation. One is the systemic loss of privacy, which is a precondition for deliberation and a sense of self-determination. Further, we already see how our critical infrastructures – ranging from energy supply to health systems and the internet itself – increasingly are at risk of failing us due to their openness for malicious attacks, but also due to the complexity of interrelated, networked processes. Due to the lack of traceability on the internet, there is no expectation that we will achieve accountability in such situations.”
An anonymous Ph.D. in biostatistics commented, “The culture of anonymity on the Web is scary and seems to allow people to behave in ways they wouldn’t otherwise (see recent news about ‘swatting’ in the online gaming community). Then there is the social media ‘hive’ that allows internet uproar to dictate what happens. There is no room for discourse, grey areas or mistakes. Lives can be ruined by the publicity of a simple mistake (and combined with people sharing home addresses this can also be dangerous).”
An anonymous professor in the United States commented, “My belief is that unless extensive regulation and user education occurs, we will see an increase in negative consequences of online activity such as violations of privacy, dissemination of misinformation, crime and displacement of jobs.”
An anonymous research scientist and internet pioneer commented, “We have reaped great benefit from digital life over the past decades. My answer compares the next decade to the current situation, not to the time prior to the digital life. The negative aspects of the digital life are becoming more pronounced, and I think the next decade will be one of retrenchment and adjustment, while society sorts out how to deal with our perhaps over-optimistic construction of the digital experience.”