To live in fear and falsehood is worse than death.

–Boyce, M. (2001).

Introduction

Some kids are smarter than others. Some are better looking than others. Some are kinder and more sensitive than others. Some are more talented than others. Some are more confident than others. But all these differences pale in comparison with what kids share, and kids do not change much at their core over the years. They want to be valued and accepted. They want to be safe. They want to learn and explore. They want to play and have fun. They need to find meaning in their lives and make a spiritual connection (Garbarino, 1995).

It is not these core themes and concerns that change. Rather, it is the cultural, psychological, and social messages and tools that are available to them as they go about the universal business of growing up (Garbarino, 1995). The nature of these messages and tools does have an effect on the process of growing up, however. Some ennoble; others degrade. Some promote social order; others promote chaos. Some are good; some are bad. Some result in young adults who want to serve humanity and carve out a spiritually meaningful life for themselves, like the kids I read about who raised money in their school to help Hurricane Katrina victims a thousand miles away. Others result in teenagers like the ones I watched on a “reality” program on television who, to a person, said their goal in life was “to be rich and famous.”

When the social environment spreads “fear and falsehood,” it becomes poisonous to the development of children and youth, much as when the physical environment is poisoned and misused it can undermine their physical well-being (Garbarino, 1995). This is particularly true for kids who are especially vulnerable to developmental harm because of their difficult temperament or mental health problems.

Social toxicity refers to the extent to which the social environment of children and youth is poisonous, in the sense that it contains serious threats to the development of identity, competence, moral reasoning, trust, hope, and the other features of personality and ideology that make for success in school, family, work, and the community (Garbarino, 1995). Like physical toxicity, it can be fatal – in the forms of suicide, homicide, and drug-related and other lifestyle-related preventable deaths. But mostly, it results in diminished “humanity” in the lives of children and youth by virtue of leading them to live in a state of degradation, whether they know it or not.

What are the social and cultural poisons that are psychologically equivalent to lead and smoke in the air, PCBs in the water, and pesticides in the food chain? We can see social toxicity in the values, practices, and institutions that breed feelings of fear about the world, feelings of rejection by adults inside and outside the family, exposure to traumatic images and experiences, absence of adult supervision, and inadequate exposure to positive adult role models. These feelings and experiences arise from being embedded in a shallow materialist culture, being surrounded with negative and degrading media messages, and being deprived of relationships with sources of character in the school, the neighborhood, and the larger community (Eron, Gentry, & Schlegel, 1994).

For example, research on the impact of televised violence indicates that its effect on increasing aggressive behavior by child viewers is equivalent to the effect of smoking on lung cancer – namely, that it accounts for about 10–15% of the variation (Eron et al., 1994). In this sense, violent television is a social toxin. By the same token, all the various “isms” – racism and sexism, for example – that diminish the worth of targeted groups are toxins in the sense that they are linked to negative developmental outcomes.

The bias against homosexuals has a similarly negative effect. Although the term homophobia is widely accepted, it may not be the most useful way to approach this issue (allowing the offending bigots to say, “I don’t fear homosexuals, I just don’t like them and think they are unnatural or deviant.”). No alternative term is widely accepted (terms like “homonegativity” and “heterosexualism” have been offered but are not widely used in public). Although it took decades of advocacy to do so, the professional psychological community has acknowledged that whatever we may call the bias against homosexuals, there is no scientific foundation for it. For example, in 1973, the American Psychiatric Association’s Board of Trustees declared that “homosexuality per se implies no impairment in judgment, stability, reliability, or general social or vocational capabilities” and came out squarely against public and private discrimination against gays and lesbians (American Psychiatric Association, 2000).

This has not ended homophobic actions, of course. A study of high school students published in 1998 found that, in comparison with heterosexual kids, gay, lesbian, and bisexual youth were five times more likely to miss school because they felt unsafe, four times more likely to be threatened with a weapon at school, twice as likely to have their property damaged at school, and three times more likely to require medical treatment after a fight at school (despite the fact that they were four times less likely to be involved in fighting at school; Garofalo, Wolf, Kessel, Palfrey, & Durant, 1998).

As reported earlier, it was not until 1973 that the American Psychiatric Association’s Board of Trustees declared that “homosexuality per se” is not pathological, and came out squarely against public and private discrimination against gays and lesbians. Things have changed for the better on this fundamental issue of human rights, the right to be who you are, albeit too late for many of an earlier generation. Now, many more people are comfortable with the idea of homosexuality and in relationships with real live homosexuals, and many more gay and lesbian individuals feel safe enough to come out (Garbarino, 2008). A cursory tour through prime time television and mainstream movies makes that clear.

However, rejection and hatred directed at gays and lesbians remain among the few forms of negative bias that can still be expressed openly in America by politicians, religious leaders, and other public figures. After all, even as late as 1998, the American Psychiatric Association’s Board of Trustees thought it necessary to issue a statement saying that it opposes any psychiatric treatment based upon the assumption that homosexuality per se is a mental disorder or the a priori assumption that a patient should want to or try to change his/her sexual orientation. And it is still true that openly homosexual individuals are barred from serving in the US military – and they continue to be discharged once their “secret” is officially acknowledged.

Homophobia, racism, sexism. All these dimensions of social toxicity are important, but superseding and infusing them all is spiritual emptiness, the loss of a sense of living in a positive meaningful universe beyond the material experience of day-to-day life. When there is no meaning beyond the material, there is no life beyond going to the shopping mall. I heard this once in its most terrible form when a 19-year-old who had just been sentenced to life in prison (for killing a police officer) said he was going to kill himself. “Why?” I asked. “Because I am never going to the mall again,” he replied. Indeed, if kids live only for their commercial lives, there really is no life left when denied access to the shopping mall that gives their lives material meaning (Garbarino, 1999).

Just as some children are more vulnerable than other children to physical poisons in the ground and in the air, some children are more vulnerable to social toxicity. Emotionally troubled and temperamentally vulnerable children living in a socially toxic environment are like psychological asthmatics living in an atmospherically polluted city. It seems that young children are most vulnerable to aspects of life that threaten the availability and quality of care by parents and other caregivers, while adolescents are most vulnerable to toxic influences in the broader culture and community, like pornography on the internet and violent video games in the mall.

Adolescence is usually the crystallization of childhood experience, and so the youth most at-risk are those who develop psychological vulnerabilities in childhood and then face social deprivation and trauma in adolescence (Loeber & Farrington, 1999). This is why research reveals that in some (positive) neighborhoods, only 15% of 9-year-olds who have developed a chronic pattern of aggression, bad behavior, acting out, and violating the rights of others (kids who might be diagnosed with “Conduct Disorder”) become serious violent delinquents, while in other (negative) neighborhoods the figure is 60%!

At-risk and marginalized youth act as “social weathervanes,” in the sense that they indicate the direction of social change in their societies. The particular cultural and social pathologies present in a society will generally be most evident in the lives of these youth. For example, when the old Soviet system in Eastern Europe collapsed, adolescent drug abuse became epidemic (Kelly & Amirkhanian, 2003). In Thailand and the Philippines, the epidemic is child prostitution (Mulhall, 1996). In Colombia, where the drug economy overwhelmed the justice system, murderous violence became epidemic (Wadlow, 2002).

In each case, psychologically vulnerable youth were most affected. They are the youth who already have accumulated the most developmental risk factors – youth who enter adolescence with a history of malfunctioning families, youth with unstable and reactive temperaments, and youth with emotional disabilities.

Amidst all the confusion and the temptations and the blind alleys of modern life, we can always gain clarity by asking, “Does this contribute to my character development?” (Lickona, 2004). If it does not, we must go back to the drawing board. Years ago a colleague of mine had a bumper sticker on his office door that read: “You can change the world…but unless you know what you are doing, please don’t.”

The nature of my work has exposed me to some of the dark side of America and to some of its moral and political limitations (Garbarino, 2008). I traveled to New Orleans in 2006, a year after Hurricane Katrina hit, and I saw reconstruction mired in racism, the interests of the affluent class trumping the needs of the poor, and “politics as usual.” Two years later, there are still reports that emergency aid has been diverted and wasted, to the detriment of meeting the basic needs of many residents of the city (Briscoe, 2007).

I have been to Cambodia and seen how American arrogance during the Vietnam War in the 1960s and 1970s all but guaranteed the success of Pol Pot’s Khmer Rouge in taking over the country, thus setting in motion the years of insane slaughter that followed (Garbarino, Kostelny, & Dubrow, 1991). I have been to Nicaragua and seen the toll taken on lives and spirits by American support for the Contras’ war against the Sandinistas during the 1980s. I have repeatedly been to the Middle East and seen how decades of American pro-Israeli bias and unwillingness to recognize the legitimate national aspirations of the Palestinians have allowed that conflict to fester and continue to the ugly point it has reached today.

And perhaps most to the point, among all the nearly 200 nations of the world the USA stands nearly alone (Bedard, 2007), one of only two UN members that have not ratified the UN Convention on the Rights of the Child. The other is Somalia, which can at least offer the defense that it does not have a functioning central government empowered to enact ratification. Two toxic forces have blocked ratification. The first is the fundamentalist impulse in American culture that fears and rejects human rights initiatives in general as a threat to the power of the entrenched interests of homophobic, patriarchal, punishment-oriented “traditional values.” The second is the power of those who believe that we are above and beyond the rest of the world – “We’re Number One!” – and therefore entitled to our exceptional status. Americans have a special difficulty in dealing with this issue. One of our problems is what historians have called our “historical exceptionalism.” What they mean in using this term is that we tend to view our history as unique, and to reject the idea that we are like everyone else, as a people and as a country. It is a rare politician who can refrain from saying, “This is the greatest nation on Earth.” Many would go so far as to say this is the greatest nation that has ever existed, unique among all countries. The theme of exceptionalism reverberates down through the decades of American history. I think we can begin to understand this by looking backward to the America of the 1950s (Kaplan, Pelcovitz, & Fornari, 2005).

America in the 1950s had just emerged from the Great Depression and World War II (Shales, 2007). During the Great Depression in the 1930s, large numbers of American workers were unemployed because of the economic crisis, and felt despair, fear, and anger that through no fault of their own they were being impoverished. Debate continues among historians and economists about the exact causes of the Depression and the strategies and tactics used to deal with it by the national government and other public policy entities. What does seem clear is that the actions of President Franklin Roosevelt, a Democrat elected to lead the nation in 1932, played an important role in inspiring demoralized unemployed workers, who prior to his arrival on the national scene felt betrayed and abandoned by the national political leadership and business leaders who were their allies. The renowned American writer John Updike (2007) was a child during the 1930s and recalls observing his own unemployed father’s desolation, and his reaction to the policies and words of President Roosevelt:

My father had been reared a Republican, but he switched parties to vote for Roosevelt and never switched back. His memory of being abandoned by society and big business never left him and, for all his paternal kindness and humorousness, communicated itself to me, along with his preference for the political party that offered ‘the forgotten man’ the better break. Roosevelt made such people feel less alone. The impression of recovery – the impression that a President was bending the old rules and, drawing upon his own courage and flamboyance in adversity and illness, stirring things up on behalf of the down-and-out – mattered more than any miscalculations in the moot mathematics of economics.

World War II built upon this sense of meaningfulness to create a powerful sense of confidence and solidarity. Brokaw (1998) captured all this in his book The Greatest Generation, and this was the launching pad for the 1950s. Despite the challenges parents of the 1950s faced with the emerging threat of atomic war, I believe that they had an easier time of it when it came to protecting children than I did as a parent in the 1980s and 1990s, and than do parents in the world of the twenty-first century. For one thing, the flow of information to children 50 years ago was under relatively tight and benign control. To be sure, this control had a downside (e.g., in its narrow portrayal of females and ethnic and racial minorities and the absence of people with other than heterosexual orientation). But on the plus side, television was effectively censored when it came to sex and vivid violence.

There was a strong sense that “children are watching,” which meant that adults should forgo the pleasure and titillation of explicit sexuality on the screen (Luke, 1990). Of course, this censorship limited the ability of television and the movies to deal with some adult subjects, but in retrospect I do not think the cost was too great. Themes of sexuality, infidelity, debilitating illness, depression, suicide, and murder could be presented, but in a manner that seems muted, dignified, and subtle by today’s “let it all hang out” standards.

There was violence, but it was highly stylized and sanitized. The “bad guys” were only moderately nasty, and the “good guys” subscribed to a strict code of honorable conduct. In the television environment of the 1950s, even the child of a negligent parent was at little risk sitting in front of the television set because the narrow range of available images and themes was tightly controlled by the adults who made and broadcast the programming. The same was true for movies (Luke, 1990).

The media technology of the 1950s also worked to the advantage of children (Hilmes, 2004; Luke, 1990). Special effects were primitive and not likely to produce the kind of visual trauma associated with contemporary images. The cumbersome quality of visual recording technology – limited for the most part to film – reduced to negligible the possibility that horrific events would be made available visually to the television and movie viewer, including the child viewer.

Today, the ubiquitous availability of video recording means that much of what is horrible to see will be made available for the seeing, and usually by children as readily as by adults (Garbarino, 2002). Consider the horror of the attack on Pearl Harbor in 1941 versus the attack on the World Trade Center 60 years later. The former was visually witnessed by a relative handful of children; the latter was seen via videotape by virtually every child in America – over and over again, in many cases. Repeat this for every violent and traumatic image, from the big events like plane crashes to the little events like ritual beatings purveyed over local television news as well as over YouTube and other internet sites that cater to kids. This exposure to traumatic imagery is one important feature of the social toxicity that compounds the challenge parents and other caring adults face in helping children deal with growing up in the age of terror. But it is not the only element.

As the atomic age began, the structure of benevolent adult authority was relatively intact, at least when compared to the world of twenty-first-century America. Adults were adults, and kids were kids. The social contract between children and adults was intact and in force: Children will live in their world (under the direct supervision of empowered adults); adults will live in theirs (mostly out of sight from the innocent eyes of children). Adults were in charge and in return took responsibility for protecting children.

This empowered adults to keep children out of the adult world, and the institutions of America cooperated and conspired to maintain the useful illusion that children did not have to worry because the grownups were taking care of business on their behalf. Perhaps one notable exception to this rule was to be found in the “duck and cover” scare tactics associated with the threat of atomic war. The very exceptionality of this violation of children’s sense of safety is evidence of the existence of the general rule of innocence.

But with each new year after 1950, children’s visual access to scary stuff increased, whether it be horrific violence of war and crime, parental incapacitation, family break up, the clay feet of political leaders, or the sweaty details of sexuality. Today’s routine exposure of children to social toxicity means that today’s child is already reeling from the sense of a broken social contract with the adult world before we even begin to factor in the challenges of living in the age of terror. Thus, if ever there was a time for parents to take up the mantle of “responsibility,” it is now. Mediating the child’s exposure to the dark side of human experience in today’s already toxic social environment will continue to be one of the principal challenges for “good” parents in the years to come.

One of the casualties of both trauma and social toxicity in general is social trust and faith in the future. Adults who grew up confined to the images and messages of a child-sized world may have a solid world view to sustain their social trust and faith in the future, but children who are growing up in an age in which mass media can and do bring vivid trauma to children from an early age onward may not. A study of adults seeking psychiatric intervention found that among those who had suffered a traumatic event at a young age, nearly three quarters replied “yes” when they were asked, “Have you given up all hope of finding meaning in your life?” Among those who were adults before they experienced trauma, the comparable figure was much lower – 20%. Parents must display empathic parenting grounded in an awareness of developmental processes when children are faced with trauma, lest the children slip into a profound sense of meaninglessness.

For most of us, seeing life from a spiritual perspective necessitates a shift in our thinking. It requires that we see ourselves as spiritual beings, first and foremost. This means that we acknowledge our spiritual identities and existence in addition to the physical and psychological realities of living as a human organism. Even for many who see themselves as religious, this recognition requires a fundamental shift: from a materialist metaphysic of body first, consciousness second, to one of spirit first, body second.

What are the requisite elements of this shift? One is a transcendent organizing belief in a coherent spiritual existence (a Higher Power, a spiritual Source, a spiritual Creator, an all-benevolent higher spiritual being). Another is a belief in oneself as being connected in spirit to the Higher Power and to other human beings as spiritual peers having a physical experience, and in the centrality of love in this approach to the world. The third concerns the way we approach reality in our efforts to understand and improve it. Each informs our analysis of how the search for meaning in the lives of children and youth facing issues of life and death makes an enormous difference in our understanding of human development. This is not just a matter of impersonal analysis. It is a matter of real lives shaped and defined by how well we do in guaranteeing each child’s basic human right to a healthy social environment, and how well we do in converting social toxicity to a socially healthy state in which all the “isms” and other cultural poisons that affect kids are replaced with nurturing acceptance.