In linguistics, the Sapir-Whorf hypothesis (SWH) states that the way people think is strongly shaped by their native language, so that certain thoughts expressible in one language cannot be fully understood by speakers of another. It is a controversial theory championed by the linguist Edward Sapir and his student Benjamin Whorf.
First discussed by Sapir in 1929, the hypothesis became popular in the 1950s following the posthumous publication of Whorf's writings on the subject. In 1955, Dr. James Cooke Brown created the Loglan language (which later gave rise to the offshoot Lojban) in order to test the hypothesis. After vigorous attack from followers of Noam Chomsky in the following decades, the hypothesis is now accepted by most linguists only in the weak sense that language can have some effect on thought, which is referred to as linguistic relativity. For a Chomskyan refutation, see, for example, Steven Pinker's book The Language Instinct.
So-called politically correct language stems from the belief that using (for example) sexist language tends to make one think in a sexist manner.
Central to the Sapir-Whorf hypothesis is the idea of linguistic relativity: that distinctions of meaning between related terms in a language are often arbitrary and particular to that language. Sapir and Whorf took this one step further by arguing that a person's world view is largely determined by the vocabulary and syntax available in his or her language (linguistic determinism). Whorf in fact called his version of the theory the Principle of Linguistic Relativity.
A possible argument against the extreme ("Weltanschauung") version of this idea, that all thought is constrained by language, comes from personal experience: everyone has occasional difficulty expressing themselves due to constraints in the language, and is conscious that the language is not adequate for what they mean. A speaker may say or write something and then think "that's not quite what I meant to say", or may be unable to find a good way to explain a concept they understand to a novice. This suggests that what is being thought is not a set of words, because one can understand a concept without being able to express it in words.
The opposite extreme, that language does not influence thought at all, is also widely considered to be false. For example, it has been shown that people's discrimination of similar colors can be influenced by how their language organizes color names. Another study showed that deaf children of hearing parents may fail on some cognitive tasks unrelated to hearing, while deaf children of deaf parents succeed, apparently because the hearing parents are less fluent in sign language. Computer programmers who know different programming languages often see the same problem in completely different ways.
The Neuro-Linguistic Programming (NLP) analysis of the problem is direct: most people do some of their thinking by talking to themselves, and some by imagining images and other sensory phantasms. To the extent that people think by talking to themselves, they are limited by their vocabulary, the structure of their language, and their linguistic habits. (However, it should also be noted that individuals have idiolects.)
John Grinder, a founder of NLP, was a linguistics professor who perhaps unconsciously combined the ideas of Chomsky with the Sapir-Whorf hypothesis. A seminal NLP insight came from a challenge he gave to his students: coin a neologism to describe a distinction for which you have no words. Student Robert Dilts coined a word for the way people stare into space when they are thinking, and for the different directions they stare. These new words enabled users to describe patterns in the ways people stare into space, which led directly to NLP, a fitting piece of support for the hypothesis itself.
Sapir-Whorf and Programming Languages
The hypothesis is sometimes applied in computer science to postulate that programmers skilled in a certain programming language may not have a (deep) understanding of some concepts of other languages. Though it may equally apply to any area where languages are "synthesized" for specific purposes, computer science has proven especially fertile ground for creating new languages.
One way of stating the Church-Turing thesis is that any language that can simulate a Turing machine can be used to implement any effective algorithm; in this sense, it is irrelevant which language is used to implement a particular algorithm, as that exact algorithm can also be implemented in every other language. However, when designing an algorithm to solve a particular problem, programmers are typically heavily influenced by the language constructs available. Though a large part of this is undoubtedly the path of least resistance (implement whatever is easiest to implement), there is also an element of "appropriateness" or "naturalness" that seems to compel the programmer toward a design that "befits" the language.
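The point can be sketched with a toy example (the problem and function names here are illustrative, not from the source): the same computation, written in two styles within Python, mirrors the idioms that different language families make "natural". A programmer raised on loop-and-assignment languages tends to reach for the first form; one raised on languages with comprehensions or higher-order functions tends to reach for the second. Both are correct, yet each reflects the constructs its tradition makes readily available.

```python
def sum_even_squares_imperative(numbers):
    """Style natural to languages built around loops and mutable state."""
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total


def sum_even_squares_declarative(numbers):
    """Style natural to languages with comprehensions or map/filter/reduce."""
    return sum(n * n for n in numbers if n % 2 == 0)


# Both designs compute the same function, illustrating the Church-Turing
# point: the algorithm is language-independent, but its shape is not.
print(sum_even_squares_imperative([1, 2, 3, 4]))   # 20
print(sum_even_squares_declarative([1, 2, 3, 4]))  # 20
```

Neither version is "the" right one; the choice is a matter of which constructs the programmer's habitual language foregrounds.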
Most programmers consider this a Good Thing, and the bewildering multitude of programming languages can be defended with the remark that a new programming language, while not extending the set of all possible algorithms, does extend the set of all algorithms we can efficiently think about. A well-known epigram by Alan Perlis states that "a language that doesn't affect the way you think about programming is not worth knowing".