[Editor's Note: The following new entry by Michael Rescorla replaces the former entry on this topic by the previous author.]
The language of thought hypothesis (LOTH) proposes that thinking occurs in a mental language. Often called Mentalese, the mental language resembles spoken language in several key respects: it contains words that can combine into sentences; the words and sentences are meaningful; and each sentence’s meaning depends in a systematic way upon the meanings of its component words and the way those words are combined. For example, there is a Mentalese word whale that denotes whales, and there is a Mentalese word mammal that denotes mammals. These words can combine into a Mentalese sentence whales are mammals, which means that whales are mammals. To believe that whales are mammals is to bear an appropriate psychological relation to this sentence. During a prototypical deductive inference, I might transform the Mentalese sentence whales are mammals and the Mentalese sentence Moby Dick is a whale into the Mentalese sentence Moby Dick is a mammal. As I execute the inference, I enter into a succession of mental states that instantiate those sentences.
LOTH emerged gradually through the writings of Augustine, Boethius, Thomas Aquinas, John Duns Scotus, and many others. William of Ockham offered the first systematic treatment in his Summa Logicae (c. 1323), which meticulously analyzed the meaning and structure of Mentalese expressions. LOTH was quite popular during the late medieval era, but it slipped from view in the sixteenth and seventeenth centuries. From that point through the mid-twentieth century, it played little serious role within theorizing about the mind.
In the 1970s, LOTH underwent a dramatic revival. The watershed was publication of Jerry Fodor’s The Language of Thought (1975). Fodor argued abductively: our current best scientific theories of psychological activity postulate Mentalese; we therefore have good reason to accept that Mentalese exists. Fodor’s analysis exerted tremendous impact. LOTH once again became a focus of discussion, some supportive and some critical. Debates over the existence and nature of Mentalese continue to figure prominently within philosophy and cognitive science. These debates have pivotal importance for our understanding of how the mind works.
1. Mental Language
What does it mean to posit a mental language? Or to say that thinking occurs in this language? Just how “language-like” is Mentalese supposed to be? To address these questions, we will isolate some core commitments that are widely shared among LOT theorists.
1.1 The Representational Theory of Thought
Folk psychology routinely explains and predicts behavior by citing mental states, including beliefs, desires, intentions, fears, hopes, and so on. To explain why Mary walked to the refrigerator, we might note that she believed there was orange juice in the refrigerator and wanted to drink orange juice. Mental states such as belief and desire are called propositional attitudes. They can be specified using locutions of the form
X believes that p.
X desires that p.
X intends that p.
X fears that p.
etc.
By replacing “p” with a sentence, we specify the content of X’s mental state. Propositional attitudes have intentionality or aboutness: they are about a subject matter. For that reason, they are often called intentional states.
The term “propositional attitude” originates with Russell (1918–1919 [1985]) and reflects his own preferred analysis: that propositional attitudes are relations to propositions. A proposition is an abstract entity that determines a truth-condition. To illustrate, suppose John believes that Paris is north of London. Then John’s belief is a relation to the proposition that Paris is north of London, and this proposition is true iff Paris is north of London. Beyond the thesis that propositions determine truth-conditions, there is little agreement about what propositions are like. The literature offers many options, mainly derived from theories of Frege (1892 [1997]), Russell (1918–1919 [1985]), and Wittgenstein (1921 [1922]).
Fodor (1981: 177–203; 1987: 16–26) proposes a theory of propositional attitudes that assigns a central role to mental representations. A mental representation is a mental item with semantic properties (such as a denotation, or a meaning, or a truth-condition, etc.). To believe that p, or hope that p, or intend that p, is to bear an appropriate relation to a mental representation whose meaning is that p. For example, there is a relation belief* between thinkers and mental representations, where the following biconditional is true no matter what English sentence one substitutes for “p”:
X believes that p iff there is a mental representation S such that X believes* S and S means that p.
More generally:
- (1) Each propositional attitude A corresponds to a unique psychological relation A*, where the following biconditional is true no matter what sentence one substitutes for “p”: X As that p iff there is a mental representation S such that X bears A* to S and S means that p.
On this analysis, mental representations are the most direct objects of propositional attitudes. A propositional attitude inherits its semantic properties, including its truth-condition, from the mental representation that is its object.
Proponents of (1) typically invoke functionalism to analyze A*. Each psychological relation A* is associated with a distinctive functional role: a role that S plays within your mental activity just in case you bear A* to S. When specifying what it is to believe* S, for example, we might mention how S serves as a basis for inferential reasoning, how it interacts with desires to produce actions, and so on. Precise functional roles are to be discovered by scientific psychology. Following Schiffer (1981), it is common to use the term “belief-box” as a placeholder for the functional role corresponding to belief*: to believe* S is to place S in your belief box. Similarly for “desire-box”, etc.
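The box metaphor can be rendered as a toy sketch. The following Python snippet is purely illustrative and all names in it are hypothetical (it is not a serious psychological model): mental representations are stood in for by strings, and bearing belief* to S is modeled as S’s occupancy of a belief box.

```python
# A toy sketch of the "box" metaphor. All names are hypothetical:
# strings stand in for mental representations, and believing* S
# is modeled as S's membership in the thinker's belief box.

class Thinker:
    def __init__(self):
        self.belief_box = set()   # representations the thinker believes*
        self.desire_box = set()   # representations the thinker desires*

    def believes_star(self, s):
        """X believes* S iff S occupies X's belief box."""
        return s in self.belief_box

mary = Thinker()
mary.belief_box.add("there is orange juice in the refrigerator")
mary.desire_box.add("I drink orange juice")

print(mary.believes_star("there is orange juice in the refrigerator"))  # True
print(mary.believes_star("whales are mammals"))                         # False
```

On biconditional (1), Mary then counts as believing that there is orange juice in the refrigerator because a representation with that meaning sits in her belief box.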
(1) is compatible with the view that propositional attitudes are relations to propositions. One might analyze the locution “S means that p” as involving a relation between S and a proposition expressed by S. It would then follow that someone who believes* S stands in a psychologically important relation to the proposition expressed by S. Fodor (1987: 17) adopts this approach. He combines a commitment to mental representations with a commitment to propositions. In contrast, Field (2001: 30–82) declines to postulate propositions when analyzing “S means that p”. He posits mental representations with semantic properties, but he does not posit propositions expressed by the mental representations.
The distinction between types and tokens is crucial for understanding (1). A mental representation is a repeatable type that can be instantiated on different occasions. In the current literature, it is generally assumed that a mental representation’s tokens are neurological. For present purposes, the key point is that mental representations are instantiated by mental events. Here we construe the category of events broadly so as to include both occurrences (e.g., I form an intention to drink orange juice) and enduring states (e.g., my longstanding belief that Abraham Lincoln was president of the United States). When mental event e instantiates representation S, we say that S is tokened and that e is a tokening of S. For example, if I believe that whales are mammals, then my belief (a mental event) is a tokening of a mental representation whose meaning is that whales are mammals.
According to Fodor (1987: 17), thinking consists in chains of mental events that instantiate mental representations:
- (2) Thought processes are causal sequences of tokenings of mental representations.
A paradigm example is deductive inference: I transition from believing* the premises to believing* the conclusion. The first mental event (my belief* in the premises) causes the second (my belief* in the conclusion).
(1) and (2) fit together naturally as a package that one might call the representational theory of thought (RTT). RTT postulates mental representations that serve as the objects of propositional attitudes and that constitute the domain of thought processes.[1]
RTT as stated requires qualification. There is a clear sense in which you believe that there are no elephants on Jupiter. However, you probably never considered the question until now. It is not plausible that your belief box previously contained a mental representation with the meaning that there are no elephants on Jupiter. Fodor (1987: 20–26) responds to this sort of example by restricting (1) to core cases. Core cases are those where the propositional attitude figures as a causally efficacious episode in a mental process. Your tacit belief that there are no elephants on Jupiter does not figure in your reasoning or decision-making, although it can come to do so if the question becomes salient and you consciously judge that there are no elephants on Jupiter. So long as the belief remains tacit, (1) need not apply. In general, Fodor says, an intentional mental state that is causally efficacious must involve explicit tokening of an appropriate mental representation. In a slogan: “No Intentional Causation without Explicit Representation” (Fodor 1987: 25). Thus, we should not construe (1) as an attempt at faithfully analyzing informal discourse about propositional attitudes. Fodor does not seek to replicate folk psychological categories. He aims to identify mental states that resemble the propositional attitudes adduced within folk psychology, that play roughly similar roles in mental activity, and that can support systematic theorizing.
Dennett’s (1977 [1981]) review of The Language of Thought raises a widely cited objection to RTT:
In a recent conversation with the designer of a chess-playing program I heard the following criticism of a rival program: “it thinks it should get its queen out early”. This ascribes a propositional attitude to the program in a very useful and predictive way, for as the designer went on to say, one can usefully count on chasing that queen around the board. But for all the many levels of explicit representation to be found in that program, nowhere is anything roughly synonymous with “I should get my queen out early” explicitly tokened. The level of analysis to which the designer’s remark belongs describes features of the program that are, in an entirely innocent way, emergent properties of the computational processes that have “engineering reality”. I see no reason to believe that the relation between belief-talk and psychological talk will be any more direct.
In Dennett’s example, the chess-playing machine does not explicitly represent that it should get the queen out early, yet in some sense it acts upon a belief that it should do so. Analogous examples arise for human cognition. For example, we often follow rules of deductive inference without explicitly representing the rules.
To assess Dennett’s objection, we must distinguish sharply between mental representations and rules governing the manipulation of mental representations (Fodor 1987: 25). RTT does not require that every such rule be explicitly represented. Some rules may be explicitly represented—we can imagine a reasoning system that explicitly represents deductive inference rules to which it conforms. But the rules need not be explicitly represented. They may merely be implicit in the system’s operations. Only when consultation of a rule figures as a causally efficacious episode in mental activity does RTT require that the rule be explicitly represented. Dennett’s chess machine explicitly represents chess board configurations and perhaps some rules for manipulating chess pieces. It never consults any rule akin to Get the Queen out early. For that reason, we should not expect that the machine explicitly represents this rule even if the rule is in some sense built into the machine’s programming. Similarly, typical thinkers do not consult inference rules when engaging in deductive inference. So RTT does not demand that a typical thinker explicitly represent inference rules, even if she conforms to them and in some sense tacitly believes that she should conform to them.
1.2 Compositional Semantics
Natural language is compositional: complex linguistic expressions are built from simpler linguistic expressions, and the meaning of a complex expression is a function of the meanings of its constituents together with the way those constituents are combined. Compositional semantics describes in a systematic way how semantic properties of a complex expression depend upon semantic properties of its constituents and the way those constituents are combined. For example, the truth-condition of a conjunction is determined as follows: the conjunction is true iff both conjuncts are true.
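The conjunction example can be made concrete with a small recursive evaluator. The sketch below uses a hypothetical tuple encoding of sentences (not anything drawn from the LOT literature) and computes the truth-value of a complex expression solely from the truth-values of its constituents and their mode of combination:

```python
# A minimal compositional semantics for a toy propositional language.
# Sentences are encoded as nested tuples (hypothetical encoding);
# atomic sentences are strings whose truth-values come from `valuation`.

def evaluate(expr, valuation):
    """Return the truth-value of expr under the given valuation."""
    if isinstance(expr, str):           # atomic sentence, e.g., "P"
        return valuation[expr]
    op, *args = expr
    if op == "and":                     # true iff both conjuncts are true
        return evaluate(args[0], valuation) and evaluate(args[1], valuation)
    if op == "or":                      # true iff at least one disjunct is true
        return evaluate(args[0], valuation) or evaluate(args[1], valuation)
    if op == "not":
        return not evaluate(args[0], valuation)
    raise ValueError(f"unknown connective: {op!r}")

# ("and", "P", ("not", "Q")) is true iff "P" is true and "Q" is false
print(evaluate(("and", "P", ("not", "Q")), {"P": True, "Q": False}))  # True
```

The evaluator consults nothing beyond the constituents and the way they are combined, which is exactly the dependence that compositionality requires.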
Historical and contemporary LOT theorists universally agree that Mentalese is compositional:
Compositionality of mental representations (COMP): Mental representations have a compositional semantics: complex representations are composed of simple constituents, and the meaning of a complex representation depends upon the meanings of its constituents together with the constituency structure into which those constituents are arranged.
Clearly, mental language and natural language must differ in many important respects. For example, Mentalese surely does not have a phonology. It may not have a morphology either. Nevertheless, COMP articulates a fundamental point of similarity. Just like natural language, Mentalese contains complex symbols amenable to semantic analysis.
What is it for one representation to be a “constituent” of another? According to Fodor (2008: 108), “constituent structure is a species of the part/whole relation”. Not all parts of a linguistic expression are constituents: “John ran” is a constituent of “John ran and Mary jumped”, but “ran and Mary” is not a constituent because it is not semantically interpretable. The important point for our purposes is that all constituents are parts. When a complex representation is tokened, so are its parts. For example,
intending that (P & Q) requires having a sentence in your intention box… one of whose parts is a token of the very same type that’s in the intention box when you intend that (P), and another of whose parts is a token of the very same type that’s in the intention box when you intend that (Q). (Fodor 1987: 139)
More generally: mental event e instantiates a complex mental representation only if e instantiates all of the representation’s constituent parts. In that sense, e itself has internal complexity.
The complexity of mental events figures crucially here, as highlighted by Fodor in the following passage (1987: 136):
Practically everybody thinks that the objects of intentional states are in some way complex… [For example], what you believe when you believe that (P & Q) is… something composite, whose elements are—as it might be—the proposition that P and the proposition that Q. But the (putative) complexity of the intentional object of a mental state does not, of course, entail the complexity of the mental state itself… LOT claims that mental states—and not just their propositional objects—typically have constituent structure.
Many philosophers, including Frege and Russell, regard propositions as structured entities. These philosophers apply a part/whole model to propositions but not necessarily to mental events during which thinkers entertain propositions. LOTH as developed by Fodor applies the part/whole model to the mental events themselves:
what’s at issue here is the complexity of mental events and not merely the complexity of the propositions that are their intentional objects. (Fodor 1987: 142)
On this approach, a key element of LOTH is the thesis that mental events have semantically relevant complexity.
Contemporary proponents of LOTH endorse RTT+COMP. Historical proponents also believed something in the vicinity (Normore 1990, 2009; Panaccio 1999 [2017]), although of course they did not use modern terminology to formulate their views. We may regard RTT+COMP as a minimalist formulation of LOTH, bearing in mind that many philosophers have used the phrase “language of thought hypothesis” to denote one of the stronger theses discussed below. As befits a minimalist formulation, RTT+COMP leaves unresolved numerous questions about the nature, structure, and psychological role of Mentalese expressions.
1.3 Logical Structure
In practice, LOT theorists usually adopt a more specific view of the compositional semantics for Mentalese. They claim that Mentalese expressions have logical form (Fodor 2008: 21). More specifically, they claim that Mentalese contains analogues to the familiar logical connectives (and, or, not, if-then, some, all, the). Iterative application of logical connectives generates complex expressions from simpler expressions. The meaning of a logically complex expression depends upon the meanings of its parts and upon its logical structure. Thus, LOT theorists usually endorse a doctrine along the following lines:
Logically structured mental representations (LOGIC): Some mental representations have logical structure. The compositional semantics for these mental representations resembles the compositional semantics for logically structured natural language expressions.
Medieval LOT theorists used syllogistic and propositional logic to analyze the semantics of Mentalese (King 2005; Normore 1990). Contemporary proponents instead use the predicate calculus, which was discovered by Frege (1879 [1967]) and whose semantics was first systematically articulated by Tarski (1933 [1983]). The view is that Mentalese contains primitive words—including predicates, singular terms, and logical connectives—and that these words combine to form complex sentences governed by something like the semantics of the predicate calculus.
The notion of a Mentalese word corresponds roughly to the intuitive notion of a concept. In fact, Fodor (1998: 70) construes a concept as a Mentalese word together with its denotation. For example, a thinker has the concept of a cat only if she has in her repertoire a Mentalese word that denotes cats.
Logical structure is just one possible paradigm for the structure of mental representations. Human society employs a wide range of non-sentential representations, including pictures, maps, diagrams, and graphs. Non-sentential representations typically contain parts arranged into a compositionally significant structure. In many cases, it is not obvious that the resulting complex representations have logical structure. For example, maps do not seem to contain logical connectives (Fodor 1991: 295; Millikan 1993: 302; Pylyshyn 2003: 424–5). Nor is it evident that they contain predicates (Camp 2018; Rescorla 2009c), although some philosophers contend that they do (Blumson 2012; Casati & Varzi 1999; Kulvicki 2015).
Theorists often posit mental representations that conform to COMP but that lack logical structure. The British empiricists postulated ideas, which they characterized in broadly imagistic terms. They emphasized that simple ideas can combine to form complex ideas. They held that the representational import of a complex idea depends upon the representational import of its parts and the way those parts are combined. So they accepted COMP or something close to it (depending on what exactly “constituency” amounts to).[2] They did not say in much detail how compounding of ideas was supposed to work, but imagistic structure seems to be the paradigm in at least some passages. LOGIC plays no significant role in their writings.[3] Partly inspired by the British empiricists, Prinz (2002) and Barsalou (1999) analyze cognition in terms of image-like representations derived from perception. Armstrong (1973) and Braddon-Mitchell and Jackson (2007) propose that propositional attitudes are relations not to mental sentences but to mental maps analogous in important respects to ordinary concrete maps.
One problem facing imagistic and cartographic theories of thought is that propositional attitudes are often logically complex (e.g., John believes that if Plácido Domingo does not sing then either Gustavo Dudamel will conduct or the concert will be cancelled). Images and maps do not seem to support logical operations: the negation of a map is not a map; the disjunction of two maps is not a map; similarly for other logical operations; and similarly for images. Given that images and maps do not support logical operations, theories that analyze thought in exclusively imagistic or cartographic terms will struggle to explain logically complex propositional attitudes.[4]
There is room here for a pluralist position that allows mental representations of different kinds: some with logical structure, some more analogous to pictures, or maps, or diagrams, and so on. The pluralist position is widespread within cognitive science, which posits a range of formats for mental representation (Block 1983; Camp 2009; Johnson-Laird 2004: 187; Kosslyn 1980; McDermott 2001: 69; Pinker 2005: 7; Sloman 1978: 144–76). Fodor himself (1975: 184–195) suggests a view on which imagistic mental representations co-exist alongside, and interact with, logically structured Mentalese expressions.
Given the prominent role played by logical structure within historical and contemporary discussion of Mentalese, one might take LOGIC to be definitive of LOTH. One might insist that mental representations comprise a mental language only if they have logical structure. We need not evaluate the merits of this terminological choice.
2. Scope of LOTH
RTT concerns propositional attitudes and the mental processes in which they figure, such as deductive inference, reasoning, decision-making, and planning. It does not address perception, motor control, imagination, dreaming, pattern recognition, linguistic processing, or any other mental activity distinct from high-level cognition. Hence the emphasis upon a language of thought: a system of mental representations that underlie thinking, as opposed to perceiving, imagining, etc. Nevertheless, talk about a mental language generalizes naturally from high-level cognition to other mental phenomena.
Perception is a good example. The perceptual system transforms proximal sensory stimulations (e.g., retinal stimulations) into perceptual estimates of environmental conditions (e.g., estimates of shapes, sizes, colors, locations, etc.). Helmholtz (1867 [1925]) proposed that the transition from proximal sensory input to perceptual estimates features an unconscious inference, similar in key respects to high-level conscious inference yet inaccessible to consciousness. Helmholtz’s proposal is foundational to contemporary perceptual psychology, which constructs detailed mathematical models of unconscious perceptual inference (Knill & Richards 1996; Rescorla 2015). Fodor (1975: 44–55) argues that this scientific research program presupposes mental representations. The representations participate in unconscious inferences or inference-like transitions executed by the perceptual system.[5]
Navigation is another good example. Tolman (1948) hypothesized that rats navigate using cognitive maps: mental representations that represent the layout of the spatial environment. The cognitive map hypothesis, advanced during the heyday of behaviorism, initially encountered great scorn. It remained a fringe position well into the 1970s, long after the demise of behaviorism. Eventually, mounting behavioral and neurophysiological evidence won it many converts (Gallistel 1990; Gallistel & Matzel 2013; Jacobs & Menzel 2014; O’Keefe & Nadel 1978; Weiner et al. 2011). Although a few researchers remain skeptical (Mackintosh 2002), there is now a broad consensus that mammals (and possibly even some insects) navigate using mental representations of spatial layout. Rescorla (2017b) summarizes the case for cognitive maps and reviews some of their core properties.
To what extent should we expect perceptual representations and cognitive maps to resemble the mental representations that figure in high-level human thought? It is generally agreed that all these mental representations have compositional structure. For example, the perceptual system can bind together a representation of shape and a representation of size to form a complex representation that an object has a certain shape and size; the representational import of the complex representation depends in a systematic way upon the representational import of the component representations. On the other hand, it is not clear that perceptual representations have anything resembling logical structure, including even predicative structure (Burge 2010: 540–544; Fodor 2008: 169–195). Nor is it evident that cognitive maps contain logical connectives or predicates (Rescorla 2009a, 2009b). Perceptual processing and non-human navigation certainly do not seem to instantiate mental processes that would exploit putative logical structure. In particular, they do not seem to instantiate deductive inference.
These observations provide ammunition for pluralism about representational format. Pluralists can posit one system of compositionally structured mental representations for perception, another for navigation, another for high-level cognition, and so on. Different representational systems potentially feature different compositional mechanisms. As indicated in section 1.3, pluralism figures prominently in contemporary cognitive science. Pluralists face some pressing questions. Which compositional mechanisms figure in which psychological domains? Which representational formats support which mental operations? How do different representational formats interface with each other? Further research bridging philosophy and cognitive science is needed to address such questions.
3. Mental Computation
Modern proponents of LOTH typically endorse the computational theory of mind (CTM), which claims that the mind is a computational system. Some authors use the phrase “language of thought hypothesis” so that it definitionally includes CTM as one component.
In a seminal contribution, Turing (1936) introduced what is now called the Turing machine: an abstract model of an idealized computing device. A Turing machine contains a central processor, governed by precise mechanical rules, that manipulates symbols inscribed along a linear array of memory locations. Impressed by the enormous power of the Turing machine formalism, many researchers seek to construct computational models of core mental processes, including reasoning, decision-making, and problem solving. This enterprise bifurcates into two main branches. The first branch is artificial intelligence (AI), which aims to build “thinking machines”. Here the goal is primarily an engineering one—to build a system that instantiates or at least simulates thought—without any pretense at capturing how the human mind works. The second branch, computational psychology, aims to construct computational models of human mental activity. AI and computational psychology both emerged in the 1960s as crucial elements in the new interdisciplinary initiative cognitive science, which studies the mind by drawing upon psychology, computer science (especially AI), linguistics, philosophy, economics (especially game theory and behavioral economics), anthropology, and neuroscience.
From the 1960s to the early 1980s, computational models offered within psychology were mainly Turing-style models. These models embody a viewpoint known as the classical computational theory of mind (CCTM). According to CCTM, the mind is a computational system similar in important respects to a Turing machine, and certain core mental processes are computations similar in important respects to computations executed by a Turing machine.
CCTM fits together nicely with RTT+COMP. Turing-style computation operates over symbols, so any Turing-style mental computations must operate over mental symbols. The essence of RTT+COMP is postulation of mental symbols. Fodor (1975, 1981) advocates RTT+COMP+CCTM. He holds that certain core mental processes are Turing-style computations over Mentalese expressions.
One can endorse RTT+COMP without endorsing CCTM. By positing a system of compositionally structured mental representations, one does not commit oneself to saying that operations over the representations are computational. Historical LOT theorists could not even formulate CCTM, for the simple reason that the Turing formalism had not been discovered. In the modern era, Harman (1973) and Sellars (1975) endorse something like RTT+COMP but not CCTM. Horgan and Tienson (1996) endorse RTT+COMP+CTM but not CCTM, i.e., classical CTM. They favor a version of CTM grounded in connectionism, an alternative computational framework that differs quite significantly from Turing’s approach. Thus, proponents of RTT+COMP need not accept that mental activity instantiates Turing-style computation.
Fodor (1981) combines RTT+COMP+CCTM with a view that one might call the formal-syntactic conception of computation (FSC). According to FSC, computation manipulates symbols in virtue of their formal syntactic properties but not their semantic properties.
FSC draws inspiration from modern logic, which emphasizes the formalization of deductive reasoning. To formalize, we specify a formal language whose component linguistic expressions are individuated non-semantically (e.g., by their geometric shapes). We describe the expressions as pieces of formal syntax, without considering what if anything the expressions mean. We then specify inference rules in syntactic, non-semantic terms. Well-chosen inference rules will carry true premises to true conclusions. By combining formalization with Turing-style computation, we can build a physical machine that manipulates symbols based solely on the formal syntax of the symbols. If we program the machine to implement appropriate inference rules, then its syntactic manipulations will transform true premises into true conclusions.
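The idea of purely syntactic symbol manipulation can be illustrated with a minimal sketch. The snippet below again uses a hypothetical tuple encoding of formulas; the two inference rules operate by pattern-matching on a formula’s shape, never consulting what the formula means, yet (being sound rules) they preserve truth:

```python
# Formal-syntactic inference over a toy language. Formulas are nested
# tuples (hypothetical encoding). Each rule inspects only a formula's
# shape, never its meaning.

def modus_ponens(f1, f2):
    """From ("if", A, B) and A, derive B, by matching pure syntax."""
    if isinstance(f1, tuple) and f1[0] == "if" and f1[1] == f2:
        return [f1[2]]
    return []

def and_elimination(f):
    """From ("and", A, B), derive A and B, by matching pure syntax."""
    if isinstance(f, tuple) and f[0] == "and":
        return [f[1], f[2]]
    return []

premises = [("if", "P", "Q"), ("and", "P", "R")]
derived = and_elimination(premises[1])            # yields "P" and "R"
derived += modus_ponens(premises[0], derived[0])  # then yields "Q"
print(derived)  # ['P', 'R', 'Q']
```

Nothing in these functions knows that “if” expresses the material conditional; the machine transforms true premises into true conclusions purely by trafficking in shapes, which is the picture FSC generalizes to the mind.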
CCTM+FSC says that the mind is a formal syntactic computing system: mental activity consists in computation over symbols with formal syntactic properties; computational transitions are sensitive to the symbols’ formal syntactic properties but not their semantic properties. The key term “sensitive” is rather imprecise, allowing some latitude as to the precise import of CCTM+FSC. Intuitively, the picture is that a mental symbol’s formal syntax rather than its semantics determines how mental computation manipulates it. The mind is a “syntactic engine”.
Fodor (1987: 18–20) argues that CCTM+FSC helps illuminate a crucial feature of cognition: semantic coherence. For the most part, our thinking does not move randomly from thought to thought. Rather, thoughts are causally connected in a way that respects their semantics. For example, deductive inference carries true beliefs to true beliefs. More generally, thinking tends to respect epistemic properties such as warrant and degree of confirmation. In some sense, then, our thinking tends to cohere with semantic relations among thoughts. How is semantic coherence achieved? How does our thinking manage to track semantic properties? CCTM+FSC gives one possible answer. It shows how a physical system operating in accord with physical laws can execute computations that coherently track semantic properties. By treating the mind as a syntax-driven machine, we explain how mental activity achieves semantic coherence. We thereby answer the question: How is rationality mechanically possible?
Fodor’s argument convinced many researchers that CCTM+FSC decisively advances our understanding of the mind’s relation to the physical world. But not everyone agrees that CCTM+FSC adequately integrates semantics into the causal order. A common worry is that the formal syntactic picture veers dangerously close to epiphenomenalism (Block 1990; Kazez 1994). Pre-theoretically, semantic properties of mental states seem highly relevant to mental and behavioral outcomes. For example, if I form an intention to walk to the grocery store, then the fact that my intention concerns the grocery store rather than the post office helps explain why I walk to the grocery store rather than the post office. Burge (2010) and Peacocke (1994) argue that cognitive science theorizing likewise assigns causal and explanatory importance to semantic properties. The worry is that CCTM+FSC cannot accommodate the causal and explanatory importance of semantic properties because it depicts them as causally irrelevant: formal syntax, not semantics, drives mental computation forward. Semantics looks epiphenomenal, with syntax doing all the work (Stich 1983).
Fodor (1990, 1994) expends considerable energy trying to allay epiphenomenalist worries. Advancing a detailed theory of the relation between Mentalese syntax and Mentalese semantics, he insists that FSC can honor the causal and explanatory relevance of semantic properties. Fodor’s treatment is widely regarded as problematic (Arjo 1996; Aydede 1997b, 1998; Aydede & Robbins 2001; Perry 1998; Prinz 2011; Wakefield 2002), although Rupert (2008) and Schneider (2005) espouse somewhat similar positions.
Partly in response to epiphenomenalist worries, some authors recommend that we replace FSC with an alternative semantic conception of computation (Block 1990; Burge 2010: 95–101; Figdor 2009; O’Brien & Opie 2006; Peacocke 1994, 1999; Rescorla 2012a). Semantic computationalists claim that computational transitions are sometimes sensitive to semantic properties, perhaps in addition to syntactic properties. More specifically, semantic computationalists insist that mental computation is sometimes sensitive to semantics. Thus, they reject any suggestion that the mind is a “syntactic engine” or that mental computation is sensitive only to formal syntax.[6] To illustrate, consider Mentalese conjunction. This mental symbol expresses the truth-table for conjunction. According to semantic computationalists, the symbol’s meaning is relevant (both causally and explanatorily) to mechanical operations over it. That the symbol expresses the truth-table for conjunction rather than, say, disjunction influences the course of computation. We should therefore reject any suggestion that mental computation is sensitive to the symbol’s syntactic properties rather than its semantic properties. The claim is not that mental computation explicitly represents semantic properties of mental symbols. All parties agree that, in general, it does not. There is no homunculus inside your head interpreting your mental language. The claim is rather that semantic properties influence how mental computation proceeds. (Compare: the momentum of a baseball thrown at a window causally influences whether the window breaks, even though the window does not explicitly represent the baseball’s momentum.)
Proponents of the semantic conception differ as to how exactly they gloss the core claim that some computations are “sensitive” to semantic properties. They also differ in their stance towards CCTM. Block (1990) and Rescorla (2014a) focus upon CCTM. They argue that a symbol’s semantic properties can impact mechanical operations executed by a Turing-style computational system. In contrast, O’Brien and Opie (2006) favor connectionism over CCTM.
Theorists who reject FSC must reject Fodor’s explanation of semantic coherence. What alternative explanation might they offer? So far, the question has received relatively little attention. Rescorla (2017a) argues that semantic computationalists can explain semantic coherence and simultaneously avoid epiphenomenalist worries by invoking neural implementation of semantically-sensitive mental computations.
Fodor’s exposition sometimes suggests that CTM, CCTM, or CCTM+FSC is definitive of LOTH (1981: 26). Yet not everyone who endorses RTT+COMP endorses CTM, CCTM, or FSC. One can postulate a mental language without agreeing that mental activity is computational, and one can postulate mental computations over a mental language without agreeing that the computations are sensitive only to syntactic properties. For most purposes, it is not important whether we regard CTM, CCTM, or CCTM+FSC as definitive of LOTH. More important is that we track the distinctions among the doctrines.
4. Arguments for LOTH
The literature offers many arguments for LOTH. This section introduces four influential arguments, each of which supports LOTH abductively by citing its explanatory benefits. Section 5 discusses some prominent objections to the four arguments.
4.1 Argument from Cognitive Science Practice
Fodor (1975) defends RTT+COMP+CCTM by appealing to scientific practice: our best cognitive science postulates Turing-style mental computations over Mentalese expressions; therefore, we should accept that mental computation operates over Mentalese expressions. Fodor develops his argument by examining detailed case studies, including perception, decision-making, and linguistic comprehension. He argues that, in each case, computation over mental representations plays a central explanatory role. Fodor’s argument was widely heralded as a compelling analysis of then-current cognitive science.
When evaluating cognitive science support for LOTH, it is crucial to specify what version of LOTH one has in mind. Specifically, establishing that certain mental processes operate over mental representations is not enough to establish RTT. For example, one might accept that mental representations figure in perception and animal navigation but not in high-level human cognition. Gallistel and King (2009) defend COMP+CCTM+FSC through a number of (mainly non-human) empirical case studies, but they do not endorse RTT. They focus on relatively low-level phenomena, such as animal navigation, without discussing human decision-making, deductive inference, problem solving, or other high-level cognitive phenomena.
4.2 Argument from the Productivity of Thought
During your lifetime, you will only entertain a finite number of thoughts. In principle, though, there are infinitely many thoughts you might entertain. Consider:
Mary gave the test tube to John’s daughter.
Mary gave the test tube to John’s daughter’s daughter.
Mary gave the test tube to John’s daughter’s daughter’s daughter.
⋮
The moral usually drawn is that you have the competence to entertain a potential infinity of thoughts, even though your performance is bounded by biological limits upon memory, attention, processing capacity, and so on. In a slogan: thought is productive.
RTT+COMP straightforwardly explains productivity. We postulate a finite base of primitive Mentalese symbols, along with operations for combining simple expressions into complex expressions. Iterative application of the compounding operations generates an infinite array of mental sentences, each in principle within your cognitive repertoire. By tokening a mental sentence, you entertain the thought expressed by it. This explanation leverages the recursive nature of compositional mechanisms to generate infinitely many expressions from a finite base. It thereby illuminates how finite creatures such as ourselves are able to entertain a potential infinity of thoughts.
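The recursive mechanism can be sketched in a toy form: a finite lexicon plus one combining operation generates an unbounded family of distinct sentences, on the model of the test-tube examples above. The grammar and function names are invented for the illustration:

```python
# Productivity sketch: iterating a single compounding operation over a
# finite base yields unboundedly many distinct sentences.

def possessive(np):
    """Combine a noun phrase with 'daughter' to form a new noun phrase."""
    return np + "'s daughter"

def sentence(np):
    return "Mary gave the test tube to " + np + "."

np = "John"
for _ in range(3):
    np = possessive(np)
    print(sentence(np))
# Mary gave the test tube to John's daughter.
# Mary gave the test tube to John's daughter's daughter.
# Mary gave the test tube to John's daughter's daughter's daughter.
```

Nothing stops the loop from running arbitrarily long; the bound on which sentences actually get produced comes from performance limits (memory, attention), not from the generative mechanism itself.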
Fodor and Pylyshyn (1988) argue that, since RTT+COMP provides a satisfying explanation for productivity, we have good reason to accept RTT+COMP. A potential worry about this argument is that it rests upon an infinitary competence never manifested within actual performance. One might dismiss the supposed infinitary competence as an idealization that, while perhaps convenient for certain purposes, does not stand in need of explanation.
4.3 Argument from the Systematicity of Thought
There are systematic interrelations among the thoughts a thinker can entertain. For example, if you can entertain the thought that John loves Mary, then you can also entertain the thought that Mary loves John. Systematicity looks like a crucial property of human thought and so demands a principled explanation.
RTT+COMP gives a compelling explanation. According to RTT+COMP, your ability to entertain the thought that p hinges upon your ability to bear appropriate psychological relations to a Mentalese sentence S whose meaning is that p. If you are able to think that John loves Mary, then your internal system of mental representations includes a mental sentence John loves Mary, composed of mental words John, loves, and Mary combined in the right way. If you have the capacity to stand in psychological relation A* to John loves Mary, then you also have the capacity to stand in relation A* to a distinct mental sentence Mary loves John. The constituent words John, loves, and Mary make the same semantic contribution to both mental sentences (John denotes John, loves denotes the loving relation, and Mary denotes Mary), but the words are arranged in different constituency structures so that the sentences have different meanings. Whereas John loves Mary means that John loves Mary, Mary loves John means that Mary loves John. By standing in relation A* to the sentence Mary loves John, you entertain the thought that Mary loves John. Thus, an ability to think that John loves Mary entails an ability to think that Mary loves John. By comparison, an ability to think that John loves Mary does not entail an ability to think that whales are mammals or an ability to think that 56 + 138 = 194.
Fodor (1987: 148–153) supports RTT+COMP by citing its ability to explain systematicity. In contrast with the productivity argument, the systematicity argument does not depend upon infinitary idealizations that outstrip finite performance. Note that neither argument provides any direct support for CTM. Neither argument even mentions computation.
4.4 Argument from the Systematicity of Thinking
There are systematic interrelations among which inferences a thinker can draw. For example, if you can infer p from p and q, then you can also infer m from m and n. The systematicity of thinking requires explanation. Why is it that thinkers who can infer p from p and q can also infer m from m and n?
RTT+COMP+CCTM gives a compelling explanation. During an inference from p and q to p, you transit from believing* mental sentence (S1 & S2) (which means that p and q) to believing* mental sentence S1 (which means that p). According to CCTM, the transition involves symbol manipulation. A mechanical operation detaches the conjunct S1 from the conjunction (S1 & S2). The same mechanical operation is applicable to a conjunction (S3 & S4) (which means that m and n), corresponding to the inference from m and n to m. An ability to execute the first inference entails an ability to execute the second, because drawing the inference in either case corresponds to executing a single uniform mechanical operation. More generally, logical inference deploys mechanical operations over structured symbols, and the mechanical operation corresponding to a given inference pattern (e.g., conjunction introduction, disjunction elimination, etc.) is applicable to any premises with the right logical structure. The uniform applicability of a single mechanical operation across diverse symbols explains inferential systematicity. Fodor and Pylyshyn (1988) conclude that inferential systematicity provides reason to accept RTT+COMP+CCTM.
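The uniformity point can be made concrete with a toy sketch: one shape-sensitive operation covers every instance of conjunction elimination, whatever the conjuncts happen to mean. The tuple encoding is an illustrative assumption:

```python
# Inferential systematicity sketch: a single mechanical operation,
# defined over the shape of a symbol structure, handles every
# conjunction-elimination inference uniformly.

def eliminate_conjunction(sent):
    """From ('AND', S1, S2), detach the first conjunct S1."""
    assert sent[0] == 'AND'
    return sent[1]

# The very same operation executes both inferences:
print(eliminate_conjunction(('AND', 'p', 'q')))  # p
print(eliminate_conjunction(('AND', 'm', 'n')))  # m
```

Because drawing either inference just is applying this one operation, a system able to execute the first inference is thereby able to execute the second.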
Fodor and Pylyshyn (1988) endorse an additional thesis about the mechanical operations corresponding to logical transitions. In keeping with FSC, they claim that the operations are sensitive to formal syntactic properties but not semantic properties. For example, conjunction elimination responds to Mentalese conjunction as a piece of pure formal syntax, much as a computer manipulates items in a formal language without considering what those items mean.
Semantic computationalists reject FSC. They claim that mental computation is sometimes sensitive to semantic properties. Semantic computationalists can agree that drawing an inference involves executing a mechanical operation over structured symbols, and they can agree that the same mechanical operation uniformly applies to any premises with appropriate logical structure. So they can still explain inferential systematicity. However, they can also say that the postulated mechanical operation is sensitive to semantic properties. For example, they can say that conjunction elimination is sensitive to the meaning of Mentalese conjunction.
In assessing the debate between FSC and semantic computationalism, one must distinguish between logical versus non-logical symbols. For present purposes, it is common ground that the meanings of non-logical symbols do not inform logical inference. The inference from (S1 & S2) to S1 features the same mechanical operation as the inference from (S3 & S4) to S3, and this mechanical operation is not sensitive to the meanings of the conjuncts S1, S2, S3, or S4. It does not follow that the mechanical operation is insensitive to the meaning of Mentalese conjunction. The meaning of conjunction might influence how the logical inference proceeds, even though the meanings of the conjuncts do not.
5. The Connectionist Challenge
In the 1960s and 1970s, cognitive scientists almost universally modeled mental activity as rule-governed symbol manipulation. In the 1980s, connectionism gained currency as an alternative computational framework. Connectionists employ computational models, called neural networks, that differ quite significantly from Turing-style models. There is no central processor. There are no memory locations for symbols to be inscribed. Instead, there is a network of nodes bearing weighted connections to one another. During computation, waves of activation spread through the network. A node’s activation level depends upon the weighted activations of the nodes to which it is connected. Nodes function somewhat analogously to neurons, and connections between nodes function somewhat analogously to synapses. One should receive the neurophysiological analogy cautiously, as there are numerous important differences between neural networks and actual neural configurations in the brain (Bechtel & Abrahamsen 2002: 341–343; Bermúdez 2010: 237–239; Clark 2014: 87–89; Harnish 2002: 359–362).
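The activation rule just described can be sketched in a few lines. The weights and the logistic squashing function are illustrative choices, not claims about any particular connectionist model:

```python
import math

# Sketch of a connectionist node: its activation depends on the
# weighted activations of the nodes feeding into it, squashed into
# the interval (0, 1) by a logistic function.

def node_activation(input_activations, weights):
    total = sum(a * w for a, w in zip(input_activations, weights))
    return 1.0 / (1.0 + math.exp(-total))

inputs = [0.9, 0.1, 0.5]      # activations of three upstream nodes
weights = [0.8, -0.4, 0.3]    # strengths of the weighted connections
print(node_activation(inputs, weights))
```

A network computes by applying this rule at every node as activation propagates, with no central processor and no inscribed symbols anywhere in the process.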
Connectionists raise many objections to the classical computational paradigm (Rumelhart, McClelland, & the PDP Research Group 1986; Horgan & Tienson 1996; McLaughlin & Warfield 1994; Bechtel & Abrahamsen 2002), such as that classical systems are not biologically realistic or that they are unable to model certain psychological tasks. Classicists in turn launch various arguments against connectionism. The most famous arguments showcase productivity, systematicity of thought, and systematicity of thinking. Fodor and Pylyshyn (1988) argue that these phenomena support classical CTM over connectionist CTM.
Fodor and Pylyshyn’s argument hinges on the distinction between eliminative connectionism and implementationist connectionism (cf. Pinker & Prince 1988). Eliminative connectionists advance neural networks as a replacement for the Turing-style formalism. They deny that mental computation consists in rule-governed symbol manipulation. Implementationist connectionists allow that, in some cases, mental computation may instantiate rule-governed symbol manipulation. They advance neural networks not to replace classical computations but rather to model how classical computations are implemented in the brain. The hope is that, because neural network computation more closely resembles actual brain activity, it can illuminate the physical realization of rule-governed symbol manipulation.
Building on Aydede’s (2015) discussion, we may reconstruct Fodor and Pylyshyn’s argument like so:
- (i) Representational mental states and processes exist. An explanatorily adequate account of cognition should acknowledge these states and processes.
- (ii) The representational states and processes that figure in high-level cognition have certain fundamental properties: thought is productive and systematic; inferential thinking is systematic. The states and processes have these properties as a matter of nomic necessity: it is a psychological law that they have the properties.
- (iii) A theory of mental computation is explanatorily adequate only if it explains the nomic necessity of systematicity and productivity.
- (iv) The only way to explain the nomic necessity of systematicity and productivity is to postulate that high-level cognition instantiates computation over mental symbols with a compositional semantics. Specifically, we must accept RTT+COMP.
- (v) Either a connectionist theory endorses RTT+COMP or it does not.
- (vi) If it does, then it is a version of implementationist connectionism.
- (vii) If it does not, then it is a version of eliminative connectionism. As per (iv), it does not explain productivity and systematicity. As per (iii), it is not explanatorily adequate.
- (viii) Conclusion: Eliminative connectionist theories are not explanatorily adequate.
The argument does not say that neural networks are unable to model systematicity. One can certainly build a neural network that is systematic. For example, one might build a neural network that can represent that John loves Mary only if it can represent that Mary loves John. The problem is that one might just as well build a neural network that can represent that John loves Mary but cannot represent that Mary loves John. Hence, nothing about the connectionist framework per se guarantees systematicity. For that reason, the framework does not explain the nomic necessity of systematicity. It does not explain why all the minds we find are systematic. In contrast, the classical framework mandates systematicity, and so it explains the nomic necessity of systematicity. The only apparent recourse for connectionists is to adopt the classical explanation, thereby becoming implementationist rather than eliminative connectionists.
Fodor and Pylyshyn’s argument has spawned a massive literature, including too many rebuttals to survey here. The most popular responses fall into five categories:
- Deny (i). Some connectionists deny that cognitive science should posit representational mental states. They believe that mature scientific theorizing about the mind will delineate connectionist models specified in non-representational terms (P.S. Churchland 1986; P.S. Churchland & Sejnowski 1989; P.M. Churchland 1990; P.M. Churchland & P.S. Churchland 1990; Ramsey 2007). If so, then Fodor and Pylyshyn’s argument falters at its first step. There is no need to explain why representational mental states are systematic and productive if one rejects all talk about representational mental states.
- Accept (viii). Some authors, such as Marcus (2001), feel that neural networks are best deployed to illuminate the implementation of Turing-style models, rather than as replacements for Turing-style models.
- Deny (ii). Some authors claim that Fodor and Pylyshyn greatly exaggerate the extent to which thought is productive (Rumelhart & McClelland 1986) or systematic (Dennett 1991; Johnson 2004). Horgan and Tienson (1996: 91–94) question the systematicity of thinking. They contend that we deviate from norms of deductive inference more than one would expect if we were following the rigid mechanical rules postulated by CCTM.
- Deny (iv). Braddon-Mitchell and Fitzpatrick (1990) offer an evolutionary explanation for the systematicity of thought, bypassing any appeal to structured mental representations. In a similar vein, Horgan and Tienson (1996: 90) seek to explain systematicity by emphasizing how our survival depends upon our ability to keep track of objects in the environment and their ever-changing properties. Clark (1991) argues that systematicity follows from the holistic nature of thought ascription.
- Deny (vi). Chalmers (1990, 1993), Smolensky (1991), and van Gelder (1991) claim that one can reject Turing-style models while still postulating mental representations with compositionally and computationally relevant internal structure.
We focus here on (vi).
As discussed in section 1.2, Fodor elucidates constituency structure in terms of part/whole relations. A complex representation’s constituents are literal parts of it. One consequence is that, whenever a complex representation is tokened, so are its constituents. Fodor takes this consequence to be definitive of classical computation. As Fodor and McLaughlin (1990: 186) put it:
for a pair of expression types E1, E2, the first is a Classical constituent of the second only if the first is tokened whenever the second is tokened.
Thus, structured representations have a concatenative structure: each token of a structured representation involves a concatenation of tokens of the constituent representations. Connectionists who deny (vi) espouse a non-concatenative conception of constituency structure, according to which structure is encoded by a suitable distributed representation. Developments of the non-concatenative conception are usually quite technical (Elman 1989; Hinton 1990; Pollack 1990; Smolensky 1990, 1991, 1995; Touretzky 1990). Most models use vector or tensor algebra to define operations over connectionist representations, which are codified by activity vectors across nodes in a neural network. The representations are said to have implicit constituency structure: the constituents are not literal parts of the complex representation, but they can be extracted from the complex representation through suitable computational operations over it.
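The flavor of such non-concatenative encoding can be conveyed with a stripped-down version of tensor-product binding in the spirit of Smolensky: role-filler pairs are superposed into a single summed tensor, no constituent occurs as a literal part of the result, yet each filler can be recovered by "unbinding" with its role vector. The vectors here are toy choices chosen for the illustration:

```python
# Non-concatenative constituency sketch: the structure is one summed
# tensor; constituents are implicit but extractable by unbinding.

def outer(filler, role):
    """Bind a filler vector to a role vector via their outer product."""
    return [[f * r for r in role] for f in filler]

def add(m1, m2):
    """Superpose two bindings by elementwise addition."""
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(m1, m2)]

def unbind(tensor, role):
    """Project the summed tensor onto an (orthonormal) role vector."""
    return [sum(t * r for t, r in zip(row, role)) for row in tensor]

agent, patient = [1.0, 0.0], [0.0, 1.0]        # orthonormal role vectors
john, mary = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]  # toy filler vectors

# "John loves Mary": bind John to agent, Mary to patient, superpose.
rep = add(outer(john, agent), outer(mary, patient))
print(unbind(rep, agent))    # recovers John's vector: [1.0, 2.0, 3.0]
print(unbind(rep, patient))  # recovers Mary's vector: [4.0, 5.0, 6.0]
```

Notice that no row or column of `rep` is a token of `john` or `mary`; the constituents exist only implicitly, which is precisely the point at issue in the debate that follows.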
Fodor and McLaughlin (1990) grant that distributed representations may have constituency structure “in an extended sense”. But they insist that distributed representations are ill-suited to explain systematicity. They focus especially on the systematicity of thinking, the classical explanation for which postulates mechanical operations that respond to constituency structure. Fodor and McLaughlin argue that the non-concatenative conception cannot replicate the classical explanation and offers no satisfactory substitute for it. Chalmers (1993) and Niklasson and van Gelder (1994) disagree. They contend that a neural network can execute structure-sensitive computations over representations that have non-concatenative constituency structure. They conclude that connectionists can explain productivity and systematicity without retreating to implementationist connectionism.
Aydede (1995, 1997a) agrees that there is a legitimate notion of non-concatenative constituency structure, but he questions whether the resulting models are non-classical. He denies that we should regard concatenative structure as integral to LOTH. According to Aydede, concatenative structure is just one possible physical realization of constituency structure. Non-concatenative structure is another possible realization. We can accept RTT+COMP without glossing constituency structure in concatenative terms. On this view, a neural network whose operations are sensitive to non-concatenative constituency structure may still count as broadly classical and in particular as manipulating Mentalese expressions.
The debate between classical and connectionist CTM is still active, although not as active as during the 1990s. Recent anti-connectionist arguments tend to have a more empirical flavor. For example, Gallistel and King (2009) defend CCTM by canvassing a range of non-human empirical case studies. According to Gallistel and King, the case studies manifest a kind of productivity that CCTM can easily explain but eliminative connectionism cannot.
6. Regress Objections to LOTH
LOTH has elicited too many objections to cover in a single encyclopedia entry. We will discuss two objections, both alleging that LOTH generates a vicious regress. The first objection emphasizes language learning. The second emphasizes language understanding.
6.1 Learning a Language
Like many cognitive scientists, Fodor holds that children learn a natural language via hypothesis formation and testing. Children formulate, test, and confirm hypotheses about the denotations of words. For example, a child learning English will confirm the hypothesis that “cat” denotes cats. According to Fodor, denotations are represented in Mentalese. To formulate the hypothesis that “cat” denotes cats, the child uses a Mentalese word cat that denotes cats. It may seem that a regress is now in the offing, sparked by the question: How does the child learn Mentalese? Suppose we extend the hypothesis formation and testing model (henceforth HF) to Mentalese. Then we must posit a meta-language to express hypotheses about denotations of Mentalese words, a meta-meta-language to express hypotheses about denotations of meta-language words, and so on ad infinitum (Atherton and Schwartz 1974: 163).
Fodor responds to the threatened regress by denying that we should apply HF to Mentalese (1975: 65). Children do not test hypotheses about the denotations of Mentalese words. They do not learn Mentalese at all. The mental language is innate.
The doctrine that some concepts are innate was a focal point in the clash between rationalism and empiricism. Rationalists defended the innateness of certain fundamental ideas, such as god and cause, while empiricists held that all ideas derive from sensory experience. A major theme in the 1960s cognitive science revolution was revival of a nativist picture, inspired by the rationalists, on which many key elements of cognition are innate. Most famously, Chomsky (1965) explained language acquisition by positing innate knowledge about possible human languages. Fodor’s innateness thesis was widely perceived as going way beyond all precedent, verging on the preposterous (P.S. Churchland 1986; Putnam 1988). How could we have an innate ability to represent all the denotations we mentally represent? For example, how could we innately possess a Mentalese word carburetor that represents carburetors?
In evaluating these issues, it is vital to distinguish between learning a concept and acquiring a concept. When Fodor says that a concept is innate, he does not mean to deny that we acquire the concept or even that certain kinds of experience are needed to acquire it. Fodor fully grants that we cannot mentally represent carburetors at birth and that we come to represent them only by undergoing appropriate experiences. He agrees that most concepts are acquired. He denies that they are learned. In effect, he uses “innate” as a synonym for “unlearned” (1975: 96). One might reasonably challenge Fodor’s usage. One might resist classifying a concept as innate simply because it is unlearned. However, that is how Fodor uses the word “innate”. Properly understood, then, Fodor’s position is not as far-fetched as it may sound.[7]
Fodor gives a simple but striking argument that concepts are unlearned. The argument begins from the premise that HF is the only potentially viable model of concept learning. Fodor then argues that HF is not a viable model of concept learning, from which he concludes that concepts are unlearned. He offers various formulations and refinements of the argument over his career. Here is a relatively recent rendition (2008: 139):
Now, according to HF, the process by which one learns C must include the inductive evaluation of some such hypothesis as “The C things are the ones that are green or triangular”. But the inductive evaluation of that hypothesis itself requires (inter alia) bringing the property green or triangular before the mind as such… Quite generally, you can’t represent anything as such and such unless you already have the concept such and such. All that being so, it follows, on pain of circularity, that “concept learning” as HF understands it can’t be a way of acquiring concept C… Conclusion: If concept learning is as HF understands it, there can be no such thing. This conclusion is entirely general; it doesn’t matter whether the target concept is primitive (like green) or complex (like green or triangular).
Fodor’s argument does not presuppose RTT, COMP, or CTM. To the extent that the argument works, it applies to any view on which people have concepts.
If concepts are not learned, then how are they acquired? Fodor offers some preliminary remarks (2008: 144–168), but by his own admission the remarks are sketchy and leave numerous questions unanswered (2008: 144–145). Prinz (2011) critiques Fodor’s positive treatment of concept acquisition.
The most common rejoinder to Fodor’s innateness argument is to deny that HF is the only viable model of concept learning. The rejoinder acknowledges that concepts are not learned through hypothesis testing but insists that they are learned through other means. Three examples:
- Margolis (1998) proposes an acquisition model that differs from HF but that allegedly yields concept learning. Fodor (2008: 140–144) retorts that Margolis’s model does not yield genuine concept learning. Margolis and Laurence (2011) insist that it does.
- Carey (2009) maintains that children can “bootstrap” their way to new concepts using induction, analogical reasoning, and other techniques. She develops her view in great detail, supporting it partly through her groundbreaking experimental work with young children. Fodor (2010) and Rey (2014) object that Carey’s bootstrapping theory is circular: it surreptitiously presupposes that children already possess the very concepts whose acquisition it purports to explain. Beck (2017) and Carey (2014) respond to the circularity objection.
- Shea (2016) argues that connectionist modeling can explain concept acquisition in non-HF terms and that the resulting models instantiate genuine learning.
A lot depends here upon what counts as “learning” and what does not, a question that seems difficult to adjudicate. A closely connected question is whether concept acquisition is a rational process or a mere causal process. To the extent that acquiring some concept is a rational achievement, we will want to say that one learned the concept. To the extent that acquiring the concept is a mere causal process (more like catching a cold than confirming a hypothesis), we will feel less inclined to say that genuine learning took place (Fodor 1981: 275).
These issues lie at the frontier of psychological and philosophical research. The key point for present purposes is that there are two options for halting the regress of language learning: we can say that thinkers acquire concepts but do not learn them; or we can say that thinkers learn concepts through some means other than hypothesis testing. Of course, it is not enough just to note that the two options exist. Ultimately, one must develop one’s favored option into a compelling theory. But there is no reason to think that doing so would reinitiate the regress. In any event, explaining concept acquisition is an important task facing any theorist who accepts that we have concepts, whether or not the theorist accepts LOTH. Thus, the learning regress objection is best regarded not as posing a challenge specific to LOTH but rather as highlighting a more widely shared theoretical obligation: the obligation to explain how we acquire concepts.
For further discussion, see the entry on innateness. See also the exchange between Cowie (1999) and Fodor (2001).
6.2 Understanding a Language
What is it to understand a natural language word? On a popular picture, understanding a word requires that you mentally represent the word’s denotation. For example, understanding the word “cat” requires representing that it denotes cats. LOT theorists will say that you use Mentalese words to represent denotations. The question now arises what it is to understand a Mentalese word. If understanding the Mentalese word requires representing that it has a certain denotation, then we face an infinite regress of meta-languages (Blackburn 1984: 43–44).
The standard response is to deny that ordinary thinkers represent Mentalese words as having denotations (Bach 1987; Fodor 1975: 66–79). Mentalese is not an instrument of communication. Thinking is not “talking to oneself” in Mentalese. A typical thinker does not represent, perceive, interpret, or reflect upon Mentalese expressions. Mentalese serves as a medium within which her thought occurs, not an object of interpretation. We should not say that she “understands” Mentalese in the same way that she understands a natural language.
There is perhaps another sense in which the thinker “understands” Mentalese: her mental activity coheres with the meanings of Mentalese words. For example, her deductive reasoning coheres with the truth-tables expressed by Mentalese logical connectives. More generally, her mental activity is semantically coherent. To say that the thinker “understands” Mentalese in this sense is not to say that she represents Mentalese denotations. Nor is there any evident reason to suspect that explaining semantic coherence will ultimately require us to posit mental representation of Mentalese denotations. So there is no regress of understanding.
For further criticism of this regress argument, see the discussions of Knowles (1998) and Laurence and Margolis (1997).[8]
7. Naturalizing the Mind
Naturalism is a movement that seeks to ground philosophical theorizing in the scientific enterprise. As so often in philosophy, different authors use the term “naturalism” in different ways. Usage within philosophy of mind typically connotes an effort to depict mental states and processes as denizens of the physical world, with no irreducibly mental entities or properties allowed. In the modern era, philosophers have often recruited LOTH to advance naturalism. Indeed, LOTH’s supposed contribution to naturalism is frequently cited as a significant consideration in its favor. One example is Fodor’s use of CCTM+FSC to explain semantic coherence. The other main example turns upon the problem of intentionality.
How does intentionality arise? How do mental states come to be about anything, or to have semantic properties? Brentano (1874 [1973: 97]) maintained that intentionality is a hallmark of the mental as opposed to the physical: “The reference to something as an object is a distinguishing characteristic of all mental phenomena. No physical phenomenon exhibits anything similar”. In response, contemporary naturalists seek to naturalize intentionality. They want to explain in naturalistically acceptable terms what makes it the case that mental states have semantic properties. In effect, the goal is to reduce the intentional to the non-intentional. Beginning in the 1980s, philosophers have offered various proposals about how to naturalize intentionality. Most proposals emphasize causal or nomic links between mind and world (Aydede & Güzeldere 2005; Dretske 1981; Fodor 1987, 1990; Stalnaker 1984), sometimes also invoking teleological factors (Millikan 1984, 1993; Neander 2017; Papineau 1987; Dretske 1988) or historical lineages of mental states (Devitt 1995; Field 2001). Another approach, functional role semantics, emphasizes the functional role of a mental state: the cluster of causal or inferential relations that the state bears to other mental states. The idea is that meaning emerges at least partly through these causal and inferential relations. Some functional role theories cite causal relations to the external world (Block 1987; Loar 1982), and others do not (Cummins 1989).
Even the best developed attempts at naturalizing intentionality, such as Fodor’s (1990) version of the nomic strategy, face serious problems that no one knows how to solve (M. Greenberg 2014; Loewer 1997). Partly for that reason, the flurry of naturalizing attempts abated in the 2000s. Burge (2010: 298) reckons that the naturalizing project is not promising and that current proposals are “hopeless”. He agrees that we should try to illuminate representationality by limning its connections to the physical, the causal, the biological, and the teleological. But he insists that illumination need not yield a reduction of the intentional to the non-intentional.
LOTH is neutral as to the naturalization of intentionality. An LOT theorist might attempt to reduce the intentional to the non-intentional. Alternatively, she might dismiss the reductive project as impossible or pointless. Assuming she chooses the reductive route, LOTH provides guidance regarding how she might proceed. According to RTT,
X A’s that p iff there is a mental representation S such that X bears A* to S and S means that p.
The task of elucidating “X A’s that p” in naturalistically acceptable terms factors into two sub-tasks (Field 2001: 33):
- Explain in naturalistically acceptable terms what it is to bear psychological relation A* to mental representation S.
- Explain in naturalistically acceptable terms what it is for mental representation S to mean that p.
As we have seen, functionalism helps with (a). Moreover, COMP provides a blueprint for tackling (b). We can first delineate a compositional semantics describing how S’s meaning depends upon semantic properties of its component words and upon the compositional import of the constituency structure into which those words are arranged. We can then explain in naturalistically acceptable terms why the component words have the semantic properties that they have and why the constituency structure has the compositional import that it has.
How much does LOTH advance the naturalization of intentionality? Our compositional semantics for Mentalese may illuminate how the semantic properties of a complex expression depend upon the semantic properties of primitive expressions, but it says nothing about how primitive expressions get their semantic properties in the first place. Brentano’s challenge (How could intentionality arise from purely physical entities and processes?) remains unanswered. To meet the challenge, we must invoke naturalizing strategies that go well beyond LOTH itself, such as the causal or nomic strategies mentioned above. Those naturalizing strategies are not specifically linked to LOTH and can usually be tailored to semantic properties of neural states rather than semantic properties of Mentalese expressions. Thus, it is debatable how much LOTH ultimately helps us naturalize intentionality. Naturalizing strategies orthogonal to LOTH seem to do the heavy lifting.
8. Individuation of Mentalese Expressions
How are Mentalese expressions individuated? Since Mentalese expressions are types, answering this question requires us to consider the type/token relation for Mentalese. We want to fill in the schema
e and e* are tokens of the same Mentalese type iff R(e, e*).
What should we substitute for R(e, e*)? The literature typically focuses on primitive symbol types, and we will follow suit here.
It is almost universally agreed among contemporary LOT theorists that Mentalese tokens are neurophysiological entities of some sort. One might therefore hope to individuate Mentalese types by citing neural properties of the tokens. Drawing R(e, e*) from the language of neuroscience induces a theory along the following lines:
Neural individuation: e and e* are tokens of the same primitive Mentalese type iff e and e* are tokens of the same neural type.
This schema leaves open how neural types are individuated. We may bypass that question here, because neural individuation of Mentalese types finds no proponents in the contemporary literature. The main reason is that it conflicts with multiple realizability: the doctrine that a single mental state type can be realized by physical systems that are wildly heterogeneous when described in physical, biological, or neuroscientific terms. Putnam (1967) introduced multiple realizability as evidence against the mind/brain identity theory, which asserts that mental state types are brain state types. Fodor (1975: 13–25) further developed the multiple realizability argument, presenting it as foundational to LOTH. Although the multiple realizability argument has subsequently been challenged (Polger 2004), LOT theorists widely agree that we should not individuate Mentalese types in neural terms.
The most popular strategy is to individuate Mentalese types functionally:
Functional individuation: e and e* are tokens of the same primitive Mentalese type iff e and e* have the same functional role.
Field (2001: 56–67), Fodor (1994: 105–109), and Stich (1983: 149–151) pursue functional individuation. They specify functional roles using a Turing-style computational formalism, so that “functional role” becomes something like “computational role”, i.e., role within mental computation.
Functional role theories divide into two categories: molecular and holist. Molecular theories isolate privileged canonical relations that a symbol bears to other symbols. Canonical relations individuate the symbol, but non-canonical relations do not. For example, one might individuate Mentalese conjunction solely through the introduction and elimination rules governing conjunction while ignoring any other computational rules. If we say that a symbol’s “canonical functional role” is constituted by its canonical relations to other symbols, then we can offer the following theory:
Molecular functional individuation: e and e* are tokens of the same primitive Mentalese type iff e and e* have the same canonical functional role.
One problem facing molecular individuation is that, aside from logical connectives and a few other special cases, it is difficult to draw any principled demarcation between canonical and non-canonical relations (Schneider 2011: 106). Which relations are canonical for SOFA?[9] Citing the demarcation problem, Schneider espouses a holist approach that individuates mental symbols through total functional role, i.e., every single aspect of the role that a symbol plays within mental activity:
Holist functional individuation: e and e* are tokens of the same primitive Mentalese type iff e and e* have the same total functional role.
Holist individuation is very fine-grained: the slightest difference in total functional role entails that different types are tokened. Since different thinkers will always differ somewhat in their mental computations, it now looks like two thinkers will never share the same mental language. This consequence is worrisome, for two reasons emphasized by Aydede (1998). First, it violates the plausible publicity constraint that propositional attitudes are in principle shareable. Second, it apparently precludes interpersonal psychological explanations that cite Mentalese expressions. Schneider (2011: 111–158) addresses both concerns, arguing that they are misdirected.
A crucial consideration when individuating mental symbols is what role to assign to semantic properties. Here we may usefully compare Mentalese with natural language. It is widely agreed that natural language words do not have their denotations essentially. The English word “cat” denotes cats, but it could just as well have denoted dogs, or the number 27, or anything else, or nothing at all, if our linguistic conventions had been different. Virtually all contemporary LOT theorists hold that a Mentalese word likewise does not have its denotation essentially. The Mentalese word cat denotes cats, but it could have had a different denotation had it borne different causal relations to the external world or had it occupied a different role in mental activity. In that sense, cat is a piece of formal syntax. Fodor’s early view (1981: 225–253) was that a Mentalese word could have had a different denotation but not an arbitrarily different denotation: cat could not have denoted just anything—it could not have denoted the number 27—but it could have denoted some other animal species had the thinker suitably interacted with that species rather than with cats. Fodor eventually (1994, 2008) embraces the stronger thesis that a Mentalese word bears an arbitrary relation to its denotation: cat could have had any arbitrarily different denotation. Most contemporary theorists agree (Egan 1992: 446; Field 2001: 58; Harnad 1994: 386; Haugeland 1985: 91, 117–123; Pylyshyn 1984: 50).
The historical literature on LOTH suggests an alternative semantically permeated view: Mentalese words are individuated partly through their denotations. The Mentalese word cat is not a piece of formal syntax subject to reinterpretation. It could not have denoted another species, or the number 27, or anything else. It denotes cats by its inherent nature. From a semantically permeated viewpoint, a Mentalese word has its denotation essentially. Thus, there is a profound difference between natural language and mental language. Mental words, unlike natural language words, bring with them one fixed semantic interpretation. The semantically permeated approach is present in Ockham, among other medieval LOT theorists (Normore 2003, 2009). In light of the problems facing neural and functional individuation, Aydede (2005) recommends that we consider taking semantics into account when individuating Mentalese expressions. Rescorla (2012b) concurs, defending a semantically permeated approach as applied to at least some mental representations. He proposes that certain mental computations operate over mental symbols with essential semantic properties, and he argues that the proposal fits well with many sectors of cognitive science.[10]
A recurring complaint about the semantically permeated approach is that inherently meaningful mental representations seem like highly suspect entities (Putnam 1988: 21). How could a mental word have one fixed denotation by its inherent nature? What magic ensures the necessary connection between the word and the denotation? These worries diminish in force if one keeps firmly in mind that Mentalese words are types. Types are abstract entities corresponding to a scheme for classifying, or type-identifying, tokens. To ascribe a type to a token is to type-identify the token as belonging to some category. Semantically permeated types correspond to a classificatory scheme that takes semantics into account when categorizing tokens. As Burge emphasizes (2007: 302), there is nothing magical about semantically-based classification. On the contrary, both folk psychology and cognitive science routinely classify mental events based at least partly upon their semantic properties.
A simplistic implementation of the semantically permeated approach individuates symbol tokens solely through their denotations:
Denotational individuation: e and e* are tokens of the same primitive Mentalese type iff e and e* have the same denotation.
As Aydede (2000) and Schneider (2011) emphasize, denotational individuation is unsatisfying. Co-referring words may play significantly different roles in mental activity. Frege’s (1892 [1997]) famous Hesperus-Phosphorus example illustrates: one can believe that Hesperus is Hesperus without believing that Hesperus is Phosphorus. As Frege put it, one can think about the same denotation “in different ways”, or “under different modes of presentation”. Different modes of presentation have different roles within mental activity, implicating different psychological explanations. Thus, a semantically permeated individuative scheme adequate for psychological explanation must be finer-grained than denotational individuation allows. It must take mode of presentation into account. But what is it to think about a denotation “under the same mode of presentation”? How are “modes of presentation” individuated? Ultimately, semantically permeated theorists must grapple with these questions. Rescorla (forthcoming) offers some suggestions about how to proceed.[11]
Chalmers (2012) complains that semantically permeated individuation sacrifices significant virtues that made LOTH attractive in the first place. LOTH promised to advance naturalism by grounding cognitive science in non-representational computational models. Representationally-specified computational models seem like a significant retrenchment from these naturalistic ambitions. For example, semantically permeated theorists cannot accept the FSC explanation of semantic coherence, because they do not postulate formal syntactic types manipulated during mental computation.
How compelling one finds naturalistic worries about semantically permeated individuation will depend on how impressive one finds the naturalistic contributions made by formal mental syntax. We saw earlier that FSC arguably engenders a worrisome epiphenomenalism. Moreover, the semantically permeated approach in no way precludes a naturalistic reduction of intentionality. It merely precludes invoking formal syntactic Mentalese types while executing such a reduction. For example, proponents of the semantically permeated approach can still pursue the causal or nomic naturalizing strategies discussed in section 7. Nothing about either strategy presupposes formal syntactic Mentalese types. Thus, it is not clear that replacing a formal syntactic individuative scheme with a semantically permeated scheme significantly impedes the naturalistic endeavor.
No one has yet provided an individuative scheme for Mentalese that commands widespread assent. The topic demands continued investigation, because LOTH remains highly schematic until its proponents clarify sameness and difference of Mentalese types.
Bibliography
- Arjo, Dennis, 1996, “Sticking Up for Oedipus: Fodor on Intentional Generalizations and Broad Content”, Mind & Language, 11(3): 231–245. doi:10.1111/j.1468-0017.1996.tb00044.x
- Armstrong, D. M., 1973, Belief, Truth and Knowledge, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511570827
- Atherton, Margaret and Robert Schwartz, 1974, “Linguistic Innateness and Its Evidence”, Journal of Philosophy, 71(6): 155–168. doi:10.2307/2024657
- Aydede, Murat, 1995, “Connectionism and Language of Thought”, CSLI Technical Report 195, Stanford: Center for the Study of Language and Information Publications.
- –––, 1997a, “Language of Thought: The Connectionist Contribution”, Minds and Machines, 7(1): 57–101. doi:10.1023/A:1008203301671
- –––, 1997b, “Has Fodor Really Changed His Mind on Narrow Content?”, Mind & Language, 12(3–4): 422–458. doi:10.1111/j.1468-0017.1997.tb00082.x
- –––, 1998, “Fodor on Concepts and Frege Puzzles”, Pacific Philosophical Quarterly, 79(4): 289–294. doi:10.1111/1468-0114.00063
- –––, 2000, “On the Type/Token Relation of Mental Representations”, Facta Philosophica, 2: 23–49.
- –––, 2005, “Computation and Functionalism: Syntactic Theory of Mind Revisited”, in Turkish Studies in the History and Philosophy of Science, Gürol Irzik and Güven Güzeldere (eds.), (Boston Studies in the History and Philosophy of Science 244), Berlin/Heidelberg: Springer-Verlag, 177–204. doi:10.1007/1-4020-3333-8_13
- –––, 2015, “The Language of Thought Hypothesis”, The Stanford Encyclopedia of Philosophy (Fall 2015 Edition), Edward Zalta (ed.). URL = <https://plato.stanford.edu/archives/fall2015/entries/language-thought/>.
- Aydede, Murat and Güven Güzeldere, 2005, “Cognitive Architecture, Concepts, and Introspection: An Information-Theoretic Solution to the Problem of Phenomenal Consciousness”, Noûs, 39(2): 197–255. doi:10.1111/j.0029-4624.2005.00500.x
- Aydede, Murat and Philip Robbins, 2001, “Are Frege Cases Exceptions to Intentional Generalizations?”, Canadian Journal of Philosophy, 31(1): 1–22. doi:10.1080/00455091.2001.10717558
- Bach, Kent, 1987, “Review: Spreading the Word”, The Philosophical Review, 96(1): 120–123. doi:10.2307/2185336
- Barsalou, Lawrence W., 1999, “Perceptual Symbol Systems”, Behavioral and Brain Sciences, 22(4): 577–660. doi:10.1017/S0140525X99002149
- Bechtel, William and Adele Abrahamsen, 2002, Connectionism and the Mind: Parallel Processing, Dynamics and Evolution in Networks, second edition, Malden, MA: Blackwell.
- Beck, Jacob, 2017, “Can Bootstrapping Explain Concept Learning?”, Cognition, 158: 110–121. doi:10.1016/j.cognition.2016.10.017
- Bermúdez, José Luis, 2010, Cognitive Science: An Introduction to the Science of the Mind, Cambridge: Cambridge University Press.
- Blackburn, Simon, 1984, Spreading the Word, Oxford: Oxford University Press.
- Block, Ned, 1983, “Mental Pictures and Cognitive Science”, The Philosophical Review, 92(4): 499–541. doi:10.2307/2184879
- –––, 1987, “Advertisement for a Semantics for Psychology”, in Midwest Studies in Philosophy, 10: 615–678. doi:10.1111/j.1475-4975.1987.tb00558.x
- –––, 1990, “Can the Mind Change the World?”, in Meaning and Method: Essays in Honor of Hilary Putnam, George Boolos (ed.), Cambridge: Cambridge University Press.
- Blumson, Ben, 2012, “Mental Maps”, Philosophy and Phenomenological Research, 85(2): 413–434. doi:10.1111/j.1933-1592.2011.00499.x
- Braddon-Mitchell, David and John Fitzpatrick, 1990, “Explanation and the Language of Thought”, Synthese, 83(1): 3–29. doi:10.1007/BF00413686
- Braddon-Mitchell, David and Frank Jackson, 2007, Philosophy of Mind and Cognition, second edition, Cambridge: Blackwell.
- Burge, Tyler, 2007, Foundations of Mind, (Philosophical Essays, 2), Oxford: Oxford University Press.
- –––, 2010, Origins of Objectivity, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199581405.001.0001
- –––, 2018, “Iconic Representation: Maps, Pictures, and Perception”, in The Map and the Territory: Exploring the Foundations of Science, Thought, and Reality, Shyam Wuppuluri and Francisco Antonio Doria (eds.), Cham: Springer International Publishing, 79–100. doi:10.1007/978-3-319-72478-2_5
- Brentano, Franz, 1874 [1973], Psychology from an Empirical Standpoint (Psychologie vom empirischen Standpunkt, 1924 edition), Antos C. Rancurello, D.B. Terrell, and Linda McAlister (trans.), London: Routledge and Kegan Paul.
- Camp, Elisabeth, 2009, “A Language of Baboon Thought?”, in Lurz 2009: 108–127. doi:10.1017/CBO9780511819001.007
- –––, 2018, “Why Maps Are Not Propositional”, in Non-Propositional Intentionality, Alex Grzankowski and Michelle Montague (eds.), Oxford: Oxford University Press. doi:10.1093/oso/9780198732570.003.0002
- Carey, Susan, 2009, The Origin of Concepts, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195367638.001.0001
- –––, 2014, “On Learning New Primitives in the Language of Thought: Reply to Rey”, Mind and Language, 29(2): 133–166. doi:10.1111/mila.12045
- Casati, Roberto and Achille C. Varzi, 1999, Parts and Places: The Structures of Spatial Representation, Cambridge, MA: MIT Press.
- Chalmers, David J., 1990, “Syntactic Transformations on Distributed Representations”, Connection Science, 2(1–2): 53–62. doi:10.1080/09540099008915662
- –––, 1993, “Connectionism and Compositionality: Why Fodor and Pylyshyn Were Wrong”, Philosophical Psychology, 6(3): 305–319. doi:10.1080/09515089308573094
- –––, 2012, “The Varieties of Computation: A Reply”, Journal of Cognitive Science, 13(3): 211–248. doi:10.17791/jcs.2012.13.3.211
- Chomsky, Noam, 1965, Aspects of the Theory of Syntax, Cambridge, MA: MIT Press.
- Churchland, Patricia S., 1986, Neurophilosophy: Toward a Unified Science of Mind-Brain, Cambridge, MA: MIT Press.
- Churchland, Patricia S. and Terrence J. Sejnowski, 1989, “Neural Representation and Neural Computation”, in Neural Connections, Neural Computation, Lynn Nadel, Lynn A. Cooper, Peter W. Culicover, and Robert M. Harnish (eds.), Cambridge, MA: MIT Press.
- Churchland, Paul M., 1990, A Neurocomputational Perspective: The Nature of Mind and the Structure of Science, Cambridge, MA: MIT Press.
- Churchland, Paul M. and Patricia S. Churchland, 1990, “Could a Machine Think?”, Scientific American, 262(1): 32–37. doi:10.1038/scientificamerican0190-32
- Clark, Andy, 1991, “Systematicity, Structured Representations and Cognitive Architecture: A Reply to Fodor and Pylyshyn”, in Horgan and Tienson 1991: 198–218. doi:10.1007/978-94-011-3524-5_9
- –––, 2014, Mindware: An Introduction to the Philosophy of Cognitive Science, second edition, Oxford: Oxford University Press.
- Cowie, Fiona, 1999, What’s Within? Nativism Reconsidered, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195159783.001.0001
- Cummins, Robert, 1989, Meaning and Mental Representation, Cambridge, MA: MIT Press.
- Dennett, Daniel C., 1977 [1981], “Critical Notice: Review of The Language of Thought by Jerry Fodor”, Mind, 86(342): 265–280. Reprinted as “A Cure for the Common Code”, in Brainstorms: Philosophical Essays on Mind and Psychology, Cambridge, MA: MIT Press, 1981. doi:10.1093/mind/LXXXVI.342.265
- –––, 1991, “Mother Nature Versus the Walking Encyclopedia: A Western Drama”, in Philosophy and Connectionist Theory, W. Ramsey, S. Stich, and D. Rumelhart (eds.), Hillsdale, NJ: Lawrence Erlbaum Associates. [available online]
- Devitt, Michael, 1995, Coming to Our Senses: A Naturalistic Program for Semantic Localism, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511609190
- Dretske, Fred, 1981, Knowledge and the Flow of Information, Cambridge, MA: MIT Press.
- –––, 1988, Explaining Behavior, Cambridge, MA: MIT Press.
- Egan, Frances, 1992, “Individualism, Computation, and Perceptual Content”, Mind, 101(403): 443–459. doi:10.1093/mind/101.403.443
- Elman, Jeffrey L., 1989, “Structured Representations and Connectionist Models”, in Proceedings of the Eleventh Annual Meeting of the Cognitive Science Society, Mahwah: Lawrence Erlbaum Associates.
- Field, Hartry, 2001, Truth and the Absence of Fact, Oxford: Oxford University Press. doi:10.1093/0199242895.001.0001
- Figdor, Carrie, 2009, “Semantic Externalism and the Mechanics of Thought”, Minds and Machines, 19(1): 1–24. doi:10.1007/s11023-008-9114-6
- Fodor, Jerry A., 1975, The Language of Thought, New York: Thomas Y. Crowell.
- –––, 1981, Representations, Cambridge, MA: MIT Press.
- –––, 1987, Psychosemantics, Cambridge, MA: MIT Press.
- –––, 1990, A Theory of Content and Other Essays, Cambridge, MA: MIT Press.
- –––, 1991, “Replies”, in Meaning in Mind: Fodor and His Critics, Barry M. Loewer and Georges Rey (eds.), Cambridge, MA: MIT Press.
- –––, 1994, The Elm and the Expert, Cambridge, MA: MIT Press.
- –––, 1998, Concepts: Where Cognitive Science Went Wrong, Oxford: Oxford University Press. doi:10.1093/0198236360.001.0001
- –––, 2001, “Doing without What’s within: Fiona Cowie’s Critique of Nativism”, Mind, 110(437): 99–148. doi:10.1093/mind/110.437.99
- –––, 2003, Hume Variations, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199287338.001.0001
- –––, 2008, LOT 2: The Language of Thought Revisited, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199548774.001.0001
- –––, 2010, “Woof, Woof. Review of The Origin of Concepts by Susan Carey”, The Times Literary Supplement, October 8: pp. 7–8.
- Fodor, Jerry and Brian P. McLaughlin, 1990, “Connectionism and the Problem of Systematicity: Why Smolensky’s Solution Doesn’t Work”, Cognition, 35(2): 183–204. doi:10.1016/0010-0277(90)90014-B
- Fodor, Jerry A. and Zenon W. Pylyshyn, 1981, “How Direct Is Visual Perception?: Some Reflections on Gibson’s ‘Ecological Approach’”, Cognition, 9(2): 139–196. doi:10.1016/0010-0277(81)90009-3
- –––, 1988, “Connectionism and Cognitive Architecture: A Critical Analysis”, Cognition, 28(1–2): 3–71. doi:10.1016/0010-0277(88)90031-5
- –––, 2015, Minds Without Meanings, Cambridge, MA: MIT Press.
- Frege, Gottlob, 1879 [1967], Begriffsschrift, eine der Arithmetischen Nachgebildete Formelsprache des Reinen Denkens. Translated as Concept Script, a Formal Language of Pure Thought Modeled upon that of Arithmetic in From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931, J. van Heijenoort (ed.), S. Bauer-Mengelberg (trans.), Cambridge, MA: Harvard University Press.
- –––, 1892 [1997], “On Sinn and Bedeutung”. Reprinted in The Frege Reader, M. Beaney (ed.), M. Black (trans.), Malden, MA: Blackwell.
- –––, 1918 [1997], “Thought”. Reprinted in The Frege Reader, M. Beaney (ed.), P. Geach and R. Stoothof (trans.), Malden, MA: Blackwell.
- Gallistel, Charles R., 1990, The Organization of Learning, Cambridge, MA: MIT Press.
- Gallistel, Charles R. and Adam Philip King, 2009, Memory and the Computational Brain, Malden, MA: Wiley-Blackwell.
- Gallistel, C.R. and Louis D. Matzel, 2013, “The Neuroscience of Learning: Beyond the Hebbian Synapse”, Annual Review of Psychology, 64(1): 169–200. doi:10.1146/annurev-psych-113011-143807
- Gibson, James J., 1979, The Ecological Approach to Visual Perception, Boston, MA: Houghton Mifflin.
- Greenberg, Gabriel, 2013, “Beyond Resemblance”, Philosophical Review, 122(2): 215–287. doi:10.1215/00318108-1963716
- Greenberg, Mark, 2014, “Troubles for Content I”, in Metasemantics: New Essays on the Foundations of Meaning, Alexis Burgess and Brett Sherman (eds.), Oxford: Oxford University Press, 147–168. doi:10.1093/acprof:oso/9780199669592.003.0006
- Harman, Gilbert, 1973, Thought, Princeton, NJ: Princeton University Press.
- Harnad, Stevan, 1994, “Computation Is Just Interpretable Symbol Manipulation; Cognition Isn’t”, Minds and Machines, 4(4): 379–390. doi:10.1007/BF00974165
- Harnish, Robert M., 2002, Minds, Brains, Computers: An Historical Introduction to the Foundations of Cognitive Science, Malden, MA: Blackwell.
- Haugeland, John, 1985, Artificial Intelligence: The Very Idea, Cambridge, MA: MIT Press.
- Helmholtz, Hermann von, 1867 [1925], Treatise on Physiological Optics (Handbuch der physiologischen Optik), James P.C. Southall (trans.), Menasha, WI: George Banta Publishing Company.
- Hinton, G., 1990, “Mapping Part-Whole Hierarchies into Connectionist Networks”, Artificial Intelligence, 46: 47–75.
- Horgan, Terence and John Tienson (eds.), 1991, Connectionism and the Philosophy of Mind, (Studies in Cognitive Systems 9), Dordrecht: Springer Netherlands. doi:10.1007/978-94-011-3524-5
- –––, 1996, Connectionism and the Philosophy of Psychology, Cambridge, MA: MIT Press.
- Hume, David, 1739 [1978], A Treatise of Human Nature, second edition, P. H. Nidditch (ed.), Oxford: Clarendon Press.
- Jacobs, Lucia F. and Randolf Menzel, 2014, “Navigation Outside of the Box: What the Lab Can Learn from the Field and What the Field Can Learn from the Lab”, Movement Ecology, 2(1): 3. doi:10.1186/2051-3933-2-3
- Johnson, Kent, 2004, “On the Systematicity of Language and Thought”, Journal of Philosophy, 101(3): 111–139. doi:10.5840/jphil2004101321
- Johnson-Laird, Philip N., 2004, “The History of Mental Models”, in Psychology of Reasoning: Theoretical and Historical Perspectives, Ken Manktelow and Man Cheung Chung (eds.), New York: Psychology Press.
- Kant, Immanuel, 1781 [1998], The Critique of Pure Reason, P. Guyer and A. Wood (eds), Cambridge: Cambridge University Press.
- Kaplan, David, 1989, “Demonstratives”, in Themes from Kaplan, Joseph Almog, John Perry, and Howard Wettstein (eds.), New York: Oxford University Press.
- Kazez, Jean R., 1994, “Computationalism and the Causal Role of Content”, Philosophical Studies, 75(3): 231–260. doi:10.1007/BF00989583
- King, Peter, 2005, “William of Ockham: Summa Logicae”, in Central Works of Philosophy: Ancient and Medieval, volume 1: Ancient and Medieval Philosophy, John Shand (ed.), Montreal: McGill-Queen’s University Press, 242–270.
- Knill, David C. and Whitman Richards (eds.), 1996, Perception as Bayesian Inference, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511984037
- Knowles, Jonathan, 1998, “The Language of Thought and Natural Language Understanding”, Analysis, 58(4): 264–272. doi:10.1093/analys/58.4.264
- Kosslyn, Stephen, 1980, Image and Mind, Cambridge, MA: Harvard University Press.
- Kulvicki, John, 2015, “Maps, Pictures, and Predication”, Ergo: An Open Access Journal of Philosophy, 2(7): 149–174.
- Laurence, Stephen and Eric Margolis, 1997, “Regress Arguments Against the Language of Thought”, Analysis, 57(1): 60–66.
- Loar, Brian, 1982, Mind and Meaning, Cambridge: Cambridge University Press.
- Loewer, Barry, 1997, “A Guide to Naturalizing Semantics”, in A Companion to the Philosophy of Language, Bob Hale and Crispin Wright (eds.), Oxford: Blackwell.
- Lurz, Robert W. (ed.), 2009, The Philosophy of Animal Minds, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511819001
- Mackintosh, Nicholas John, 2002, “Do Not Ask Whether They Have a Cognitive Map, but How They Find Their Way about”, Psicológica, 23(1): 165–185. [Mackintosh 2002 available online]
- Margolis, Eric, 1998, “How to Acquire a Concept”, Mind & Language, 13(3): 347–369. doi:10.1111/1468-0017.00081
- Margolis, Eric and Stephen Laurence, 2011, “Learning Matters: The Role of Learning in Concept Acquisition”, Mind & Language, 26(5): 507–539. doi:10.1111/j.1468-0017.2011.01429.x
- McDermott, Drew V., 2001, Mind and Mechanism, Cambridge, MA: MIT Press.
- McLaughlin, B. P. and T. A. Warfield, 1994, “The Allure of Connectionism Reexamined”, Synthese, 101(3): 365–400. doi:10.1007/BF01063895
- Marcus, G., 2001, The Algebraic Mind, Cambridge, MA: MIT Press.
- Millikan, Ruth Garrett, 1984, Language, Thought, and Other Biological Categories: New Foundations for Realism, Cambridge, MA: MIT Press.
- –––, 1993, White Queen Psychology and Other Essays for Alice, Cambridge, MA: MIT Press.
- Neander, Karen, 2017, A Mark of the Mental: In Defense of Informational Teleosemantics, Cambridge, MA: MIT Press.
- Niklasson, Lars F. and Tim van Gelder, 1994, “On Being Systematically Connectionist”, Mind & Language, 9(3): 288–302. doi:10.1111/j.1468-0017.1994.tb00227.x
- Normore, Calvin, 1990, “Ockham on Mental Language”, in The Historical Foundations of Cognitive Science, J. Smith (ed.), Dordrecht: Kluwer.
- –––, 2003, “Burge, Descartes, and Us”, in Reflections and Replies: Essays on the Philosophy of Tyler Burge, Martin Hahn and Bjørn Ramberg (eds), Cambridge, MA: MIT Press.
- –––, 2009, “The End of Mental Language”, in Le Langage Mental du Moyen Âge à l’Âge Classique, J. Biard (ed.), Leuven: Peeters.
- O’Brien, Gerard and Jon Opie, 2006, “How Do Connectionist Networks Compute?”, Cognitive Processing, 7(1): 30–41. doi:10.1007/s10339-005-0017-7
- O’Keefe, John and Lynn Nadel, 1978, The Hippocampus as a Cognitive Map, Oxford: Clarendon Press.
- Ockham, William of, c. 1323 [1957], Summa Logicae, translated in his Philosophical Writings: A Selection, Philotheus Boehner (ed. and trans.), London: Nelson, 1957.
- Panaccio, Claude, 1999 [2017], Mental Language: From Plato to William of Ockham (Discours intérieur), Joshua P. Hochschild and Meredith K. Ziebart (trans.), New York: Fordham University Press.
- Papineau, David, 1987, Reality and Representation, Oxford: Basil Blackwell.
- Peacocke, Christopher, 1992, A Study of Concepts, Cambridge, MA: MIT Press.
- –––, 1994, “Content, Computation and Externalism”, Mind & Language, 9(3): 303–335. doi:10.1111/j.1468-0017.1994.tb00228.x
- –––, 1999, “Computation as Involving Content: A Response to Egan”, Mind & Language, 14(2): 195–202. doi:10.1111/1468-0017.00109
- Perry, John, 1998, “Broadening the Mind”, Philosophy and Phenomenological Research, 58(1): 223–231. doi:10.2307/2653644
- Piccinini, Gualtiero, 2008, “Computation without Representation”, Philosophical Studies, 137(2): 205–241. doi:10.1007/s11098-005-5385-4
- Pinker, Steven, 2005, “So How Does the Mind Work?”, Mind & Language, 20(1): 1–24. doi:10.1111/j.0268-1064.2005.00274.x
- Pinker, Steven and Alan Prince, 1988, “On Language and Connectionism: Analysis of a Parallel Distributed Processing Model of Language Acquisition”, Cognition, 28(1–2): 73–193. doi:10.1016/0010-0277(88)90032-7
- Polger, Thomas W., 2004, Natural Minds, Cambridge, MA: MIT Press.
- Pollack, Jordan B., 1990, “Recursive Distributed Representations”, Artificial Intelligence, 46(1–2): 77–105. doi:10.1016/0004-3702(90)90005-K
- Prinz, Jesse, 2002, Furnishing the Mind: Concepts and Their Perceptual Basis, Cambridge, MA: MIT Press.
- –––, 2011, “Has Mentalese Earned Its Keep? On Jerry Fodor’s LOT 2”, Mind, 120(478): 485–501. doi:10.1093/mind/fzr025
- Putnam, Hilary, 1967, “Psychophysical Predicates”, in Art, Mind, and Religion: Proceedings of the 1965 Oberlin Colloquium in Philosophy, W.H. Capitan and D.D. Merrill (eds), Pittsburgh, PA: University of Pittsburgh Press, 37–48.
- –––, 1988, Representation and Reality, Cambridge, MA: MIT Press.
- Pylyshyn, Zenon W., 1984, Computation and Cognition: Toward a Foundation for Cognitive Science, Cambridge, MA: MIT Press.
- –––, 2003, Seeing and Visualizing: It’s Not What You Think, Cambridge, MA: MIT Press.
- Quine, W. V., 1951 [1980], “Two Dogmas of Empiricism”, The Philosophical Review, 60(1): 20–43. Reprinted in his From a Logical Point of View, second edition, Cambridge, MA: Harvard University Press, 1980, 20–46. doi:10.2307/2181906
- Ramsey, William M., 2007, Representation Reconsidered, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511597954
- Rescorla, Michael, 2009a, “Chrysippus’ Dog as a Case Study in Non-Linguistic Cognition”, in Lurz 2009: 52–71. doi:10.1017/CBO9780511819001.004
- –––, 2009b, “Cognitive Maps and the Language of Thought”, The British Journal for the Philosophy of Science, 60(2): 377–407. doi:10.1093/bjps/axp012
- –––, 2009c, “Predication and Cartographic Representation”, Synthese, 169(1): 175–200. doi:10.1007/s11229-008-9343-5
- –––, 2012a, “Are Computational Transitions Sensitive to Semantics?”, Australasian Journal of Philosophy, 90(4): 703–721. doi:10.1080/00048402.2011.615333
- –––, 2012b, “How to Integrate Representation into Computational Modeling, and Why We Should”, Journal of Cognitive Science, 13(1): 1–37. doi:10.17791/jcs.2012.13.1.1
- –––, 2014a, “The Causal Relevance of Content to Computation”, Philosophy and Phenomenological Research, 88(1): 173–208. doi:10.1111/j.1933-1592.2012.00619.x
- –––, 2014b, “A Theory of Computational Implementation”, Synthese, 191(6): 1277–1307. doi:10.1007/s11229-013-0324-y
- –––, 2015, “Bayesian Perceptual Psychology”, in The Oxford Handbook of Philosophy of Perception, Mohan Matthen (ed.), Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199600472.013.010
- –––, 2017a, “From Ockham to Turing—and Back Again”, in Philosophical Explorations of the Legacy of Alan Turing, Juliet Floyd and Alisa Bokulich (eds.), (Boston Studies in the Philosophy and History of Science 324), Cham: Springer International Publishing, 279–304. doi:10.1007/978-3-319-53280-6_12
- –––, 2017b, “Maps in the Head?”, in The Routledge Handbook of Philosophy of Animal Minds, Kristin Andrews and Jacob Beck (eds.), New York: Routledge.
- –––, forthcoming, “Reifying Representations”, in What Are Mental Representations?, Tobias Schlicht, Krzysztof Doulega, and Joulia Smortchkova (eds.), Oxford: Oxford University Press.
- Rey, Georges, 2014, “Innate and Learned: Carey, Mad Dog Nativism, and the Poverty of Stimuli and Analogies (Yet Again)”, Mind & Language, 29(2): 109–132. doi:10.1111/mila.12044
- Rumelhart, David and James L. McClelland, 1986, “PDP Models and General Issues in Cognitive Science”, in Rumelhart et al. 1986: 110–146.
- Rumelhart, David E., James L. McClelland, and the PDP Research Group, 1986, Parallel Distributed Processing, volume 1: Explorations in the Microstructure of Cognition: Foundations, Cambridge, MA: MIT Press.
- Russell, Bertrand, 1918–1919 [1985], “The Philosophy of Logical Atomism: Lectures 1-2”, Monist, 28(4): 495–527, doi:10.5840/monist19182843; 29(1): 32–63, doi:10.5840/monist191929120; 29(2): 190–222, doi:10.5840/monist19192922; 29(3): 345–380, doi:10.5840/monist19192937. Reprinted in The Philosophy of Logical Atomism, David F. Pears (ed.), La Salle, IL: Open Court.
- Rupert, Robert D., 2008, “Frege’s Puzzle and Frege Cases: Defending a Quasi-Syntactic Solution”, Cognitive Systems Research, 9(1–2): 76–91. doi:10.1016/j.cogsys.2007.07.003
- Schiffer, Stephen, 1981, “Truth and the Theory of Content”, in Meaning and Understanding, Herman Parret and Jacques Bouveresse (eds), Berlin: Walter de Gruyter, 204–222.
- Schneider, Susan, 2005, “Direct Reference, Psychological Explanation, and Frege Cases”, Mind & Language, 20(4): 423–447. doi:10.1111/j.0268-1064.2005.00294.x
- –––, 2011, The Language of Thought: A New Philosophical Direction, Cambridge, MA: MIT Press.
- Sellars, Wilfrid, 1975, “The Structure of Knowledge”, in Action, Knowledge and Reality: Studies in Honor of Wilfrid Sellars, Hector-Neri Castañeda (ed.), Indianapolis, IN: Bobbs-Merrill, 295–347.
- Shagrir, Oron, forthcoming, “In Defense of the Semantic View of Computation”, Synthese, first online: 11 October 2018. doi:10.1007/s11229-018-01921-z
- Shea, Nicholas, 2016, “Representational Development Need Not Be Explicable-By-Content”, in Fundamental Issues of Artificial Intelligence, Vincent C. Müller (ed.), Cham: Springer International Publishing, 223–240. doi:10.1007/978-3-319-26485-1_14
- Sloman, Aaron, 1978, The Computer Revolution in Philosophy: Philosophy, Science and Models of the Mind, Hassocks: The Harvester Press.
- Smolensky, Paul, 1990, “Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Systems”, Artificial Intelligence, 46(1–2): 159–216. doi:10.1016/0004-3702(90)90007-M
- –––, 1991, “Connectionism, Constituency, and the Language of Thought”, in Meaning in Mind: Fodor and His Critics, Barry M. Loewer and Georges Rey (eds), Cambridge, MA: Blackwell.
- –––, 1995, “Constituent Structure and Explanation in an Integrated Connectionist/Symbolic Cognitive Architecture”, in Connectionism: Debates on Psychological Explanation, Cynthia Macdonald and Graham Macdonald (eds), Oxford: Basil Blackwell.
- Stalnaker, Robert C., 1984, Inquiry, Cambridge, MA: MIT Press.
- Stich, Stephen P., 1983, From Folk Psychology to Cognitive Science, Cambridge, MA: MIT Press.
- Tarski, Alfred, 1933 [1983], “Pojęcie prawdy w językach nauk dedukcyjnych”, Warsaw: Nakładem Towarzystwa Naukowego Warszawskiego. Translated into German (1935) by L. Blaustein as “Der Wahrheitsbegriff in den formalisierten Sprachen”, Studia Philosophica, 1: 261–405. Translated into English (1983) as “The Concept of Truth in Formalized Languages”, in Logic, Semantics, Metamathematics: Papers from 1923 to 1938, second edition, J.H. Woodger (trans.), John Corcoran (ed.), Indianapolis, IN: Hackett.
- Tolman, Edward C., 1948, “Cognitive Maps in Rats and Men”, Psychological Review, 55(4): 189–208. doi:10.1037/h0061626
- Touretzky, David S., 1990, “BoltzCONS: Dynamic Symbol Structures in a Connectionist Network”, Artificial Intelligence, 46(1–2): 5–46. doi:10.1016/0004-3702(90)90003-I
- Turing, Alan M., 1936, “On Computable Numbers, with an Application to the Entscheidungsproblem”, Proceedings of the London Mathematical Society, s2-42(1): 230–265. doi:10.1112/plms/s2-42.1.230
- van Gelder, Timothy, 1991, “Classical Questions, Radical Answers: Connectionism and the Structure of Mental Representations”, in Horgan and Tienson 1991: 355–381. doi:10.1007/978-94-011-3524-5_16
- Wakefield, Jerome C., 2002, “Broad versus Narrow Content in the Explanation of Action: Fodor on Frege Cases”, Philosophical Psychology, 15(2): 119–133. doi:10.1080/09515080220127099
- Weiner, Jan, Sara Shettleworth, Verner P. Bingman, Ken Cheng, Susan Healy, Lucia F. Jacobs, Kathryn J. Jeffery, Hanspeter A. Mallot, Randolf Menzel, and Nora S. Newcombe, 2011, “Animal Navigation: A Synthesis”, in Animal Thinking, Randolf Menzel and Julia Fischer (eds), Cambridge, MA: MIT Press.
- Wittgenstein, Ludwig, 1921 [1922], Logisch-Philosophische Abhandlung, in W. Ostwald (ed.), Annalen der Naturphilosophie, 14. Translated as Tractatus Logico-Philosophicus, C.K. Ogden (trans.), London: Kegan Paul, 1922.
- –––, 1953, Philosophical Investigations, G.E.M. Anscombe (trans.), Oxford: Blackwell.
Academic Tools
How to cite this entry.
Preview the PDF version of this entry at the Friends of the SEP Society.
Look up this entry topic at the Indiana Philosophy Ontology Project (InPhO).
Enhanced bibliography for this entry at PhilPapers, with links to its database.
Other Internet Resources
- Aydede, Murat, “The Language of Thought Hypothesis,” Stanford Encyclopedia of Philosophy (Spring 2019 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2019/entries/language-thought/>. [This was the previous entry on the language of thought hypothesis in the Stanford Encyclopedia of Philosophy — see the version history.]
- Bibliography on the language of thought, in PhilPapers.org.
- Bibliography on the philosophy of artificial intelligence, curated by Eric Dietrich, in PhilPapers.org.
Related Entries
artificial intelligence | belief | Church-Turing Thesis | cognitive science | computation: in physical systems | concepts | connectionism | consciousness: representational theories of | folk psychology: as a theory | functionalism | intentionality | mental content: causal theories of | mental imagery | mental representation | mind: computational theory of | naturalism | physicalism | propositional attitude reports | qualia | reasoning: automated | Turing, Alan | Turing machines
Acknowledgments
I owe a profound debt to Murat Aydede, author of the previous entry on the same topic. His exposition hugely influenced my work on the entry, figuring indispensably as a springboard, a reference, and a standard of excellence. Some of my formulations in the introduction and in sections 1.1, 2, 3, 4.3, 5, 6.1, and 7 closely track formulations from the previous entry. Section 5’s discussion of connectionism is directly based on the previous entry’s treatment. I also thank Calvin Normore, Melanie Schoenberg, and the Stanford Encyclopedia editors for helpful comments.
Copyright © 2019 by
Michael Rescorla <[email protected]>