Saturday, May 18, 2013

Truth Until Relativity: Is Cultural Relativism Applicable Today?

Note to the reader: this paper was written in 2009 and is not a very philosophic review of Cultural Relativity. Nonetheless, I think it is interesting enough to revisit.

Philosopher: When is truth false?
Anthropologist: When the belief of truth is relative.

          The nineteenth-century anthropological theory of Cultural Relativity has influenced a great deal of thought and research, despite its sweeping philosophical implications. The idea that truth is relative flies in the face of philosophical reason, but it may still provide a neutral viewpoint for anthropologists studying cultures that do not share common morals. As humanity enters the 21st century, the great debate between the two disciplines continues, and even after 200 years of deliberation, Cultural Relativism may have a valid place in academia today if restated in a way that resolves its philosophical issues.

          In an article for The Stanford Encyclopedia of Philosophy, Chris Swoyer states that Relativism is a family of theories built on the basic notion that many aspects of truth-bearing statements depend on concepts “relative to language, culture, or biological makeup” (Swoyer). Swoyer goes on to say that culture may significantly alter the way a person perceives and interprets information:
[Culture] includes ideals about how one should live, customs, mores, taken-for-granted common knowledge, systems of production and exchange, ways of coping with illness, disease and death, legal institutions, religion, rituals, rites of passage, myths, taboos, technologies, social hierarchies and status, sexual practices, accepted ways of displaying emotions, marriage, kinship structures, power hierarchies, sports, games, art, architecture, language. (Swoyer 3.2)
          With such a broad range of concepts included in the idea of culture, and a multitude of perceived absolutes possible within each concept, it seems that nearly every theory will carry with it some sense of the theorist’s culture. One way to state the argument is this: since separate cultures reach different decisions about the morality of the same action, and since cultures accept diverse standards, there are no universal moral standards (Schick 335). So it might be that the only culture that can judge a culture or any of its members is the culture in question. Despite the apparent validity of the argument, reason generally does not abide this conclusion, because it implies that a truth-bearing statement can be both true and false at the same time relative to culture. It also implies that truth can only be determined through complete agreement on a topic; a single voice of dissension would be enough to declare any claim invalid (Schick 336). Philosophically, that is akin to a mathematician invalidating two plus two equals four merely by stating that it does not. The implications simply do not follow.

          Additionally, this formulation implies that no culture, society, nation, government, state, city, neighborhood, or even household would be at all justified in making judgments about anyone who was not part of the exact same culture, because any major difference in the core ideals that make up a culture could provide grounds for dividing off a new culture. Therefore, a Generation X, rugby-playing, unwed American Sufi (Islamic mystic) living in the Southwest likely belongs to a completely different culture than the majority of Americans today. The greater the difference between two people, the lower the probability that they share a common culture.

          Western traditions tend toward a homogeneous view of societies by assuming sameness among members of any given set or subset. For instance, terms often used for fair-skinned people in America include English descendant, Anglo-Saxon, and Caucasian; however, many “white” Americans can trace their biological and cultural heritage to peoples that have little to do with any of those categories, and using such terms trivializes all other possibilities. While the tendency may be somewhat useful for people trying to wrap their heads around concepts larger than themselves, it denies acknowledgment and appreciation of the uniqueness of the many cultures active in a society.

The significance of recognition as an essential and universal human need is at the core of a very old and rich tradition of philosophical reflection and has further been enhanced by numerous interpretations of mythology, tragedy and other literary forms, as well as studies in psychology, social anthropology, and the phenomenology of religion. (van der Merwe)

          Therefore, given the importance of recognition, academics should not ignore or trivialize the subtle differences in each subculture as minor, aberrant deviations from the main category. On the other hand, this conclusion walks a fine line between Cultural and Subjective Relativism. The latter states that no action is universally right or wrong because every moral decision is completely relative to the individual (Schick 330). However, “it sanctions obviously immoral actions, it implies that people are morally infallible, and it denies that there are any substantive moral disagreements” (Schick 331). So, to maintain any sense of intelligible discourse on the topic, one must continue on the side of Cultural Relativism.

          To apply Cultural Relativity in any meaningful way, humanity would have to reevaluate many practices at all levels because of the lack of commonality. Citizens would have to question major tenets of the American judicial system, like trial by a jury of one’s peers, as it is doubtful that one could find two people sharing the exact same worldview and culture in one household, let alone twelve of them in one region. Larger issues, once examined, might lack moral grounds as well: it would be as wrong for one nation to judge another’s use of weapons of mass destruction as it would be for the second nation to use them.

          One anthropologist, Renato Rosaldo, commented along similar lines in an article, “Of Headhunters and Soldiers: Separating Cultural and Ethical Relativism,” by telling the story of his experiences living with a Filipino tribe known as the Ilongots. While studying the tribe’s headhunting practices, he received a draft notice to serve in the Vietnam War. He explained to the tribe what had happened, expecting that its members would respect him more because he was going off to kill in a manner he believed the violent tribe respected. Much to his surprise, however, the tribe was horrified and offered to hide him, because the Ilongots had experienced modern warfare during World War II, “when the Americans drove Japanese troops into the hills where the Ilongots lived. The tribe lost a third of its population during that time” (Rosaldo). At first, he was mistaken as to their reasons, believing that they opposed the massive loss of life, but he later came to understand that it was the structured nature of the military, and the willingness of officers to sacrifice their soldiers, that the Ilongots were so strongly against (Rosaldo).

          Rosaldo used his personal experience to illustrate how moral differences made judgments from one violent culture to another nearly nonsensical. For him, Cultural Relativity was a tool to keep the observer’s native culture from tainting the view of a dissimilar culture. That is not to say that he approved of the Ilongots’ practices; rather, he held that culture and ethics are two separate concepts (Rosaldo).

          Researchers are exploring the differences between Cultural and Ethical Relativity on many levels today. In his article for Death Studies, “Cultural and Ethical Relativity,” Arthur Zucker concludes that, under a slightly broader definition, Cultural Relativity, understood as a claim about “what people believe to be right or wrong,” seems true on its face (Zucker 98-99). For Zucker, the dividing line for relativity does not fall on the side of culture, as human cultures believe many conflicting things to be true, and it would not invalidate the theory to acknowledge the difference between a belief and a fact. However, he goes on to suggest that the core of the problem lies closer to the heart of morality, which he describes as Ethical Relativity: “what is right and wrong varies from culture to culture” (Zucker 98).

          In many ways, his division between beliefs and ethics solves many of the major philosophical issues of Cultural Relativity. The idea works like a colored windowpane: the viewer trapped inside may believe that his tinted view of the world, backed by his observed evidence, is a true understanding of the state of affairs, despite the reality beyond the pane. “Our own imagination is limited by the culture we have grown up in, but if we actually go elsewhere and look at what other people do, we can expand our world and challenge our own notions” (Rosaldo). In that view, one may only be able to overcome native culture by seeking out ways to challenge its beliefs. However, relegating the issue of truth to a different, companion theory does little to bolster Cultural Relativity’s standing, because it leaves researchers evaluating the beliefs of another culture that may be functionally equal to their own. It is doubtful that they can ascertain anything more meaningful about humanity by looking at what amounts to the opinions of a wider range of people.

          Another issue with Zucker’s Ethical Relativity is that it does not escape rational inconsistency. If an action is ethically right in one culture and ethically wrong in another, and the difference lies beyond the two cultures’ beliefs, somehow rooted in a broader state of affairs, then a paradox of sorts develops. The problem only compounds if the two cultures overlap.

          He cites the example of the Navajo culture’s view of life as “a mix of beauty, blessedness, goodness, order, harmony, and everything that is positive” against the biomedical ethical standards in the Death with Dignity Act (Zucker 99). To the modern American, any view that may prevent people from preparing for their impending deaths could seem backwards at best. However, it would violate the Navajo worldview to discuss death even with a doctor (Zucker 99). Zucker sees this as proof that what is right and wrong varies from culture to culture. On the other hand, the paradox shows most clearly in cases like this, where an action taken across cultures is at least partly wrong no matter what choices are available. A doctor would violate a terminal Navajo patient’s religious and personal way of life if the doctor followed the law and informed the patient of impending death. This illustrates one major issue with the theory: any attempt (like that of laws) to universalize a moral principle would be ultimately worthless, because the theory does not allow for any absolutes in moral beliefs (“Ethical” 1-4). Philosophical reason tells us that something can be either true or false, but not both, while Ethical Relativity claims just the opposite. Therefore, it fails in exactly the same way that the first formulation of Cultural Relativity does.

          Rather than focusing on specific dissimilarities of culture, Robin Fox explores the consistencies of morals across cultures. According to Fox, relativism has little to do with morals, because each society develops rules that suit its particular needs at the time:

. . . We know of no societies where there is not a recognizable framework of moral discourse, and that the contents of this discourse are pretty much the same everywhere, even if the actual prescriptions differ a lot--which one would expect, since people are not totally out to lunch and so will usually make their specific rules relative to their actual situations. (Fox)

          In his view, the various decisions that make up one culture’s moral framework are not relative to that culture but part of a “universal standard of moral discourse” (Fox). To him, every culture contributes a different aspect of a single conversation about how humans solve complex social problems. While each culture may come up with different answers in isolation, the basic rules that each follows are similar: no society is devoid of moral rules; all societies actively examine their own rules; all address the “fair distribution of resources,” the “legitimate use of violence,” a “theory of blame,” “a system of rewards for keeping the rules,” and so on (Fox). That is not to say that all societies contain a “good” set of moral rules.

          Much of the problem surrounding the concept of morality is that people misunderstand it as meaning “good.” Many cultures and subcultures generally thought of as undesirable, such as terrorists, prisoners, and headhunters, often live extremely moral lives (Fox). Therefore, if Fox is correct, each moral system is appropriate for the culture that developed it at the time, but not expressly good or bad.

          Along similar lines, Marc Hauser, the director of Harvard University’s Cognitive Evolution Laboratory, postulates that people learn moral codes in a way similar to language (qtd. in Tucker 21). In his article “Reinventing Morality,” Patrick Tucker expands the idea by claiming that morals pass “from groups to individuals in the form of traditions, institutions, codes, etc.” (Tucker 21). Comparing the human mind to a computer program, he states, “religion, upbringing, gender, third-grade experiences dealing with bullies, and so on all contribute lines of code to an individual’s moral software. For this reason, no two moral processes will be identical,” which he describes as Moral Relativism (21).

          The morals-as-language model allows Hauser to examine the similarities between the two concepts. He concludes that because both morals and language have “usage guidelines,” and because those guidelines have similar practical applications, each culture contains a kind of “moral grammar” (22). Like Fox’s universal standard of moral discourse, Hauser’s moral grammar accounts for the observed state of affairs by showing that each rule governing moral decisions, while not “good” in and of itself, fits the individual situation as needed at the time, within a greater context of cultural experience and training. This argument removes much of the philosophical protest against relativism by showing that the object of moral discourse may be the same even when the specific “words” differ, just as in language. Even though “sentence structure” differs and thought-object connections vary, the proper way to view the discourse is through the particulars of the cultural “language.” This means “translations” between two moral languages are possible.

          Another promising development is recent research by David Poeppel of the University of Maryland identifying the physical region of the brain that processes moral decisions (Tucker 23).

We all process moral decisions based on different assumptions or beliefs, but the process happens in the same place for each of us, an area in the front of the brain called the ventromedial prefrontal cortex. This is where our emotional experiences – religious, traumatic, joyous – connect with our higher-level social decision making to give us a sense of good or bad. (23)
          According to Tucker, the importance of this discovery is that it allows for a new “science of morality” that may help scrutinize the moral decision-making process in ways previously impossible (25). Continuing Tucker’s mind-as-computer analogy, the brain is the “hardware” on which the moral “software” runs. Understanding the physical way the brain processes moral decisions may help researchers challenge preconceived notions about morality and find the “default position” that humans are born with before social and cultural influences begin their programming (24-25).

          Tucker’s combination of relativistic and functionalistic theories offers the appealing possibility of creating a true scientific field of morality beyond the current limitations of social scientists and philosophers. However, its realization is likely years away, as the required neuroscience “is still in its infancy” (24). Until then, humanity finds itself in virtually the same place as before.

          Lee Carter, an instructor of Philosophy [at Glendale Community College, who retired in 2009], expressed one pragmatic issue with this view. To Carter, treating morals like a language leaves as much room for moral misunderstandings as syntax does, and when the misinterpretation occurs between nuclear-armed cultures, the results could be horrific. “. . . We better recognize a common humanity among us in spite of any differences we have among us in language or cultural relativistic views, to have a common bond in which we agree to live without trying to destroy one another, either economically or through warfare” (Carter). While Carter’s views echo those of many people, this utopian ideal has proved elusive, and its elusiveness may not be without a deep-seated cause. This may be especially true with regard to humanity’s affinity for warfare.

          In his book The First Armies, Doyne Dawson defines warfare as “coalitional intraspecific aggression” (25). Although William Tecumseh Sherman’s statement that “War is Hell” pithily conveys humanity’s fear of warfare, Dawson’s more scientific definition excludes a great deal of violently aggressive behavior. To qualify as war under Dawson’s definition, a conflict must involve a group of individuals acting together as one body in aggression against another group of the same species.

          Under Darwinian theory, any behavior that increases the likelihood that one group will survive over another falls in line with natural selection and therefore strengthens the species (Dawson 25). So warfare, a concept commonly framed in moral terms, becomes a requisite part of hominid evolution. In this view, despite any culturally relative views on war, violence, or the proper use of force, warfare is a behavioral adaptation present in all humans when conditions require it, not the product of higher-level reasoning. With that in mind, there exists at least one supposed moral concept that transcends not only culture but also species: “Genuinely coalitional aggression . . . seems to have evolved only twice, once in the line of primates that has ended in ourselves, and before that in ants” (25).

          On the other hand, Carter pointed out that people, being exceedingly similar because of evolution, should be able to communicate moral standards effectively, at least to the level required to prevent modern conflicts. However, if the roots of war lie in the evolution that brought humans to the apex of the food chain, then it is doubtful that humanity would easily part with a concept that has apparently served it so well. Additionally, regardless of culture, it would seem that all hominids would use warfare if environmental factors demanded it. If that is correct, the same may be true of other moral concepts as well. Nevertheless, there is a need for more research along these lines.

          In conclusion, despite the logical contradictions that have plagued it, theorists can restate Cultural Relativity in a way that gives anthropologists the objectivity needed to study cultures dissimilar to their own without claiming that a truth is false, because only the belief is relative. With neurological breakthroughs, humankind may soon have a detailed understanding of the abstractions humans use to make moral decisions. With a better moral model, anthropologists might gain the ability to “transcribe” entire moral systems and evaluate them in more meaningful ways. With more research, a deeper understanding of the moral “default” position all humans are born with might bridge the conceptual gap between relativistic anthropologists and universalistic philosophers, opening entirely new lines of thought.


Works Cited


Carter, Lee. Personal Interview. 10 Mar. 2009.

Dawson, Doyne. The First Armies. London: Cassell & Co., 2001.

"Ethical Relativism." Encyclopædia Britannica. 2009. Encyclopædia Britannica Online. 8 May 2009 <http://www.search.eb.com/eb/article-242045>.

Fox, Robin. "Moral Sense and Utopian Sensibility." Criminal Justice Ethics 13.2 (Summer/Fall 1994): 19. Religion and Philosophy Collection. EBSCO. Glendale Community College, Glendale, AZ. 8 May 2009 .

Rosaldo, Renato. "Of Headhunters and Soldiers: Separating Cultural and Ethical Relativism." Issues in Ethics 11 (2000). Santa Clara University. Winter 2000. Markkula Center for Applied Ethics. 8 May 2009 .

Schick, Theodore. Doing Philosophy: An Introduction Through Thought Experiments. Boston: McGraw-Hill, 2005.

Swoyer, Chris. "Relativism." Stanford Encyclopedia of Philosophy. 2 Feb. 2003. Stanford University. 8 May 2009 .

Tucker, Patrick. "Reinventing Morality." Futurist 43.1 (Jan. 2009): 20-25. Religion and Philosophy Collection. EBSCO. Glendale Community College, Glendale, AZ. 8 May 2009 .

van der Merwe, W.L. "Cultural Relativism and the Recognition of Cultural Differences." South African Journal of Philosophy 18.3 (Aug. 1999): 313. Religion and Philosophy Collection. EBSCO. Glendale Community College, Glendale, AZ. 8 May 2009 .

Zucker, Arthur. "Cultural and Ethical Relativity." Death Studies 20.1 (Jan. 1996): 98. Religion and Philosophy Collection. EBSCO. Glendale Community College, Glendale, AZ. 8 May 2009 .
