Thursday, February 26, 2009
Studio FRST's 16943 HDTV Boasts Dual Aspect Ratio, DVD Player - The Design blog: "Modern home gadgets are no longer just boxy entertainment systems; with contemporary design and technology they have become works of art that enhance the decor of your home. Studio FRST's new TV concept, called '16943,' is the latest addition to the list of technological sculptures. Apart from its unique design, the HDTV boasts a dual aspect ratio, 4:3 and 16:9, so you can enjoy both full-screen and widescreen footage on a single monitor, without annoying black bars or a cropped picture. Moreover, the concept TV comes with a built-in DVD player, so you can enjoy your favorite videos and movies without any additional gadget or extension. With its glossy looks and functionality, the '16943' seems to be an ideal home entertainment system for modern apartments."
ORYX Concept Bike: Standing Out From The Rest! - Auto Motto: "Designer Harald Cramer's ORYX is a time-trial bike with most of its components integrated into the frame. The handlebars, stem and fork are carved out of a single material, and the seat post and saddle are built into the frame. This construction gives the bike a unique shape, making it very different from its counterparts."
Wednesday, February 25, 2009
Burj Dubai - Wikipedia, the free encyclopedia: "Burj Dubai (Arabic: برج دبي 'Dubai Tower') is a supertall skyscraper under construction in the Business Bay district of Dubai, United Arab Emirates, and is the tallest man-made structure ever built, despite being incomplete. Construction began on September 21, 2004 and is expected to be completed and ready for occupancy in September 2009.
The building is part of the 2 km2 (0.8 sq mi) development called 'Downtown Dubai', at the 'First Interchange' (aka 'Defence Roundabout') along Sheikh Zayed Road at Doha Street. The tower's architect is Adrian Smith who worked with Skidmore, Owings and Merrill (SOM) until 2006. The architecture and engineering firm SOM is in charge of the project. The primary builders are Samsung Engineering & Construction and Besix along with Arabtec. Turner Construction Company was chosen as the construction manager.
The total budget for the Burj Dubai project is about US$4.1 billion and for the entire new 'Downtown Dubai', US$20 billion. Mohamed Ali Alabbar, the CEO of Emaar Properties, speaking at the Council on Tall Buildings and Urban Habitat 8th World Congress, said that the price of office space at Burj Dubai had reached $4,000 per sq ft (over $43,000 per sq m) and that the Armani Residences, also in Burj Dubai, were selling for $3,500 per sq ft (over $37,500 per sq m)."
SixNine Performance Car Concept Draws Inspiration From Nature - Auto Motto: "Designer André Lyngra’s SixNine Performance car looks at the entire world of flora and fauna for inspiration, but the ultimate impressions are those of the leopard and the stingrays. The concept is built for performance and speed, as is clear from its shape and its stance. The curves and shape of the concept owe their existence to the stingray, but the speed draws clear parallels with the leopard. An imposing combination for the road!"
A Wooden House That Is Modern, Ecological And Economic - The Design blog: "Inspired by Pierre Koenig's Case Study House, this environmentally friendly building by Module-Home is designed by combining different modules, which can be mounted on stilts, sills, slabs or anything of your choice. Since its inception, the project has integrated strict bioclimatic standards, and the modules incorporate innovations to minimize their impact on the planet. The house has a generous ceiling height, a protected terrace and large sliding windows to enhance the perception of space and lightness. Construction time is short because the modules can be assembled in a day into a house of up to 100 m² (with 3 modules). Optional features include solar panels, rainwater harvesting and heating with renewable energy to ensure minimum impact on the environment. The budget varies according to the interior layout, finishes, type of heating and the nature of the site. The average cost starts at €600/m² for a 'ready for decorating' finish and goes up to €900/m² for a 'ready to live in' finish. Construction techniques are the same as those used for traditional wooden houses, which guarantees durability and low energy consumption."
Tramontana R: Segment-busting Supercar For The Super-riders - The Design blog: "If you are bored with Lamborghinis and Ferraris and they don't magnetize you anymore, the Tramontana Group has come up with a segment-busting supercar named 'Tramontana R,' an evolved version of its standard open-wheel two-seater. Featuring a Mercedes-sourced 5.5-liter V12, available either naturally aspirated with 550 hp or twin-turbocharged with 760 hp and an astonishing 811 lb-ft of torque, the 'R' can hit 0-100 km/h in 3.6 seconds and 200 km/h in 10.15 seconds. The R not only boasts a powerful engine but also has a reduced weight (2,777 pounds) and a shortened wheelbase, which improve handling and aerodynamics. With an ideal 50:50 left-to-right and 42:58 front-to-rear weight distribution, the supercar measures just 192×82×51 inches (L×W×H). The interior, including the chop-top steering wheel, an LCD instrument panel and the six-speed sequential gearbox, is finished in carbon fiber, which also helps protect the riders. Priced at a whopping $495,000, the Tramontana R will officially be released at the end of March at the Top Marques Monaco Show."
Digital Drops » Blog Archive » LG-X120, Um Netbook 3G com Smart-On e Smart-Link: "One of LG's highlights at the GSMA Mobile World Congress was the launch of the LG-X120 netbook, whose main features are 3G HSPA connectivity and the 'Smart-On' interface, which opens your most-used programs in just 5 seconds, eliminating the need to wait for a full boot.
The LG-X120 is also equipped with 'Smart-Link' technology, which can be used to transfer files or install programs from other computers over a USB cable."
Olympus' E-620 raises the bar for entry-level DSLRs - Engadget: "Olympus just joined the pre-PMA pileup with the announcement of its E-620 DSLR for entry-level enthusiasts. The E-620 is a mash-up of Olympus' semi-pro E-30 and entry-level E-520 in a compact body approaching Oly's own E-420 (the world's smallest DSLR when launched). The resulting cam brings a 12.3 megapixel Live MOS image sensor with sensor-shift image stabilization, 7-point AF, TruePic III+ image processor, built-in wireless flash controller, and a fully articulating, 2.7-inch tilt-and-swivel live-view LCD. It also features Olympus' Art Filters which take in-camera image enhancements a bit beyond sepia. Expect the E-620 body to ship in May for about $700; $800 with the 14-42mm f3.5-5.6 lens. Front-side shot after the break.
Read -- Press release
Read -- DP Review preview
Read -- DigitalCameraInfo first impression"
Sleek new Studio XPS 435 materializes on Dell website - Engadget: "Well, what do we have here? Dell's own website has outed a new Studio XPS 435. Here are the specs for its top configuration: a 3.2GHz Intel Core i7 Extreme Edition processor on an X58 chipset, up to 24GB of DDR3 SDRAM and 4.5TB of storage across three hard drive bays, an ATI Radeon HD4870, a Blu-ray disc drive, a 15-in-1 card reader, and eight USB 2.0 ports. Of course, maxing out the settings is certainly going to cost you a pretty penny, and at this point we've got no deets on pricing or availability."
Digital Drops » Blog Archive » Canon PowerShot D10: A Prova d'Água até 10 Metros de Profundidade!: "Canon's PowerShot D10 camera can be used underwater down to a depth of 10 meters, and it also withstands temperatures as low as minus 10 degrees and drops from up to 1.22 meters!"
iPoint 3D brings gesture-based inputs to 3D displays - Engadget: "Just in case you've been parked out under a local stone for the past six months and change, we figured it prudent to let you know that the 3D bandwagon has totally regained momentum. So much momentum, in fact, that the brilliant minds over at Fraunhofer-Gesellschaft have decided to bust out a 3D innovation that actually makes us eager to sink our minds into the elusive third dimension. The iPoint 3D, which we're hoping to get up close and personal with at CeBIT next week, is a technology that enables Earthlings to interact with a 3D display via simple gestures -- all without touching the panel and without those style-smashing 3D glasses. The gurus even go so far as to compare their creation to something you'd see in a science fiction flick, with the heart of it involving a recognition device (usually suspended above the user) and a pair of inbuilt cameras. There's no mention of just how crazy expensive this would be if it were ready for the commercial realm, but we'll try to snag an estimated MSRP for ya next week."
Monday, February 23, 2009
First published Mon Sep 23, 1996; substantive revision Mon Apr 30, 2007
Cognitive science is the interdisciplinary study of mind and intelligence, embracing philosophy, psychology, artificial intelligence, neuroscience, linguistics, and anthropology. Its intellectual origins are in the mid-1950s when researchers in several fields began to develop theories of mind based on complex representations and computational procedures. Its organizational origins are in the mid-1970s when the Cognitive Science Society was formed and the journal Cognitive Science began. Since then, more than sixty universities in North America, Europe, Asia, and Australia have established cognitive science programs, and many others have instituted courses in cognitive science.
Attempts to understand the mind and its operation go back at least to the Ancient Greeks, when philosophers such as Plato and Aristotle tried to explain the nature of human knowledge. The study of mind remained the province of philosophy until the nineteenth century, when experimental psychology developed. Wilhelm Wundt and his students initiated laboratory methods for studying mental operations more systematically. Within a few decades, however, experimental psychology became dominated by behaviorism, a view that virtually denied the existence of mind. According to behaviorists such as J. B. Watson, psychology should restrict itself to examining the relation between observable stimuli and observable behavioral responses. Talk of consciousness and mental representations was banished from respectable scientific discussion. Especially in North America, behaviorism dominated the psychological scene through the 1950s. Around 1956, the intellectual landscape began to change dramatically. George Miller summarized numerous studies which showed that the capacity of human thinking is limited, with short-term memory, for example, limited to around seven items. He proposed that memory limitations can be overcome by recoding information into chunks, mental representations that require mental procedures for encoding and decoding the information. At this time, primitive computers had been around for only a few years, but pioneers such as John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon were founding the field of artificial intelligence. In addition, Noam Chomsky rejected behaviorist assumptions about language as a learned habit and proposed instead to explain language comprehension in terms of mental grammars consisting of rules. The six thinkers mentioned in this paragraph can be viewed as the founders of cognitive science.
Cognitive science has unifying theoretical ideas, but we have to appreciate the diversity of outlooks and methods that researchers in different fields bring to the study of mind and intelligence. Although cognitive psychologists today often engage in theorizing and computational modeling, their primary method is experimentation with human participants. People, usually undergraduates satisfying course requirements, are brought into the laboratory so that different kinds of thinking can be studied under controlled conditions. For example, psychologists have experimentally examined the kinds of mistakes people make in deductive reasoning, the ways that people form and apply concepts, the speed of people thinking with mental images, and the performance of people solving problems using analogies. Our conclusions about how the mind works must be based on more than "common sense" and introspection, since these can give a misleading picture of mental operations, many of which are not consciously accessible. Psychological experiments that carefully approach mental operations from diverse directions are therefore crucial for cognitive science to be scientific.
Although theory without experiment is empty, experiment without theory is blind. To address the crucial questions about the nature of mind, the psychological experiments need to be interpretable within a theoretical framework that postulates mental representations and procedures. One of the best ways of developing theoretical frameworks is by forming and testing computational models intended to be analogous to mental operations. To complement psychological experiments on deductive reasoning, concept formation, mental imagery, and analogical problem solving, researchers have developed computational models that simulate aspects of human performance. Designing, building, and experimenting with computational models is the central method of artificial intelligence (AI), the branch of computer science concerned with intelligent systems. Ideally in cognitive science, computational models and psychological experimentation go hand in hand, but much important work in AI has examined the power of different approaches to knowledge representation in relative isolation from experimental psychology.
While some linguists do psychological experiments or develop computational models, most currently use different methods. For linguists in the Chomskian tradition, the main theoretical task is to identify grammatical principles that provide the basic structure of human languages. Identification takes place by noticing subtle differences between grammatical and ungrammatical utterances. In English, for example, the sentences "She hit the ball" and "What do you like?" are grammatical, but "She the hit ball" and "What does you like?" are not. A grammar of English will explain why the former are acceptable but not the latter.
Like cognitive psychologists, neuroscientists often perform controlled experiments, but their observations are very different, since neuroscientists are concerned directly with the nature of the brain. With nonhuman subjects, researchers can insert electrodes and record the firing of individual neurons. With humans for whom this technique would be too invasive, it has become possible in recent years to use magnetic and positron scanning devices to observe what is happening in different parts of the brain while people are doing various mental tasks. For example, brain scans have identified the regions of the brain involved in mental imagery and word interpretation. Additional evidence about brain functioning is gathered by observing the performance of people whose brains have been damaged in identifiable ways. A stroke, for example, in a part of the brain dedicated to language can produce deficits such as the inability to utter sentences. Like cognitive psychology, neuroscience is often theoretical as well as experimental, and theory development is frequently aided by developing computational models of the behavior of groups of neurons.
Cognitive anthropology expands the examination of human thinking to consider how thought works in different cultural settings. The study of mind should obviously not be restricted to how English speakers think but should consider possible differences in modes of thinking across cultures. Cognitive science is becoming increasingly aware of the need to view the operations of mind in particular physical and social environments. For cultural anthropologists, the main method is ethnography, which requires living and interacting with members of a culture to a sufficient extent that their social and cognitive systems become apparent. Cognitive anthropologists have investigated, for example, the similarities and differences across cultures in words for colors.
With a few exceptions, philosophers generally do not perform systematic empirical observations or construct computational models. But philosophy remains important to cognitive science because it deals with fundamental issues that underlie the experimental and computational approach to mind. Abstract questions such as the nature of representation and computation need not be addressed in the everyday practice of psychology or artificial intelligence, but they inevitably arise when researchers think deeply about what they are doing. Philosophy also deals with general questions such as the relation of mind and body and with methodological questions such as the nature of explanations found in cognitive science. In addition, philosophy concerns itself with normative questions about how people should think as well as with descriptive ones about how they do. In addition to the theoretical goal of understanding human thinking, cognitive science can have the practical goal of improving it, which requires normative reflection on what we want thinking to be. Philosophy of mind does not have a distinct method, but should share with the best theoretical work in other fields a concern with empirical results.
In its weakest form, cognitive science is just the sum of the fields mentioned: psychology, artificial intelligence, linguistics, neuroscience, anthropology, and philosophy. Interdisciplinary work becomes much more interesting when there is theoretical and experimental convergence on conclusions about the nature of mind. For example, psychology and artificial intelligence can be combined through computational models of how people behave in experiments. The best way to grasp the complexity of human thinking is to use multiple methods, especially psychological and neurological experiments and computational models. Theoretically, the most fertile approach has been to understand the mind in terms of representation and computation.
The central hypothesis of cognitive science is that thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures. While there is much disagreement about the nature of the representations and computations that constitute thinking, the central hypothesis is general enough to encompass the current range of thinking in cognitive science, including connectionist theories which model thinking using artificial neural networks.
Most work in cognitive science assumes that the mind has mental representations analogous to computer data structures, and computational procedures similar to computational algorithms. Cognitive theorists have proposed that the mind contains such mental representations as logical propositions, rules, concepts, images, and analogies, and that it uses mental procedures such as deduction, search, matching, rotating, and retrieval. The dominant mind-computer analogy in cognitive science has taken on a novel twist from the use of another analog, the brain.
Connectionists have proposed novel ideas about representation and computation that use neurons and their connections as inspirations for data structures, and neuron firing and spreading activation as inspirations for algorithms. Cognitive science then works with a complex 3-way analogy among the mind, the brain, and computers. Mind, brain, and computation can each be used to suggest new ideas about the others. There is no single computational model of mind, since different kinds of computers and programming approaches suggest different ways in which the mind might work. The computers that most of us work with today are serial processors, performing one instruction at a time, but the brain and some recently developed computers are parallel processors, capable of doing many operations at once.
Here is a schematic summary of current theories about the nature of the representations and computations that explain how the mind works.
Formal logic provides some powerful tools for looking at the nature of representation and computation. Propositional and predicate calculus serve to express many complex kinds of knowledge, and many inferences can be understood in terms of logical deduction with inference rules such as modus ponens. The explanation schema for the logical approach is:
- Why do people make the inferences they do?
- People have mental representations similar to sentences in predicate logic.
- People have deductive and inductive procedures that operate on those sentences.
- The deductive and inductive procedures, applied to the sentences, produce the inferences.
It is not certain, however, that logic provides the core ideas about representation and computation needed for cognitive science, since more efficient and psychologically natural methods of computation may be needed to explain human thinking.
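The logical approach can be made concrete with a minimal sketch, assuming sentences are represented as strings and conditionals as antecedent-consequent pairs; all of the facts and rules below are invented for illustration, not taken from any particular system.

```python
# A minimal sketch of the logical approach: mental representations are
# sentences, and inference is repeated application of modus ponens.
# All facts and conditionals here are invented for illustration.

facts = {"socrates_is_human"}

# Conditionals of the form (antecedent, consequent): "if A then B".
conditionals = [
    ("socrates_is_human", "socrates_is_mortal"),
    ("socrates_is_mortal", "socrates_will_die"),
]

def deduce(facts, conditionals):
    """Apply modus ponens until no new sentences can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in conditionals:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(sorted(deduce(facts, conditionals)))
# → ['socrates_is_human', 'socrates_is_mortal', 'socrates_will_die']
```

Chained deduction falls out of the loop automatically: once a consequent is derived it can serve as the antecedent of a further inference.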
Much of human knowledge is naturally described in terms of rules of the form IF … THEN …, and many kinds of thinking such as planning can be modeled by rule-based systems. The explanation schema used is:
- Why do people have a particular kind of intelligent behavior?
- People have mental rules.
- People have procedures for using these rules to search a space of possible solutions, and procedures for generating new rules.
- Procedures for using and forming rules produce the behavior.
Computational models based on rules have provided detailed simulations of a wide range of psychological experiments, from cryptarithmetic problem solving to skill acquisition to language use. Rule-based systems have also been of practical importance in suggesting how to improve learning and how to develop intelligent machine systems.
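A toy production system can illustrate the rule-based schema: IF ... THEN ... rules are matched against a working memory, and firing a rule adds new elements until no rule applies. The rule contents (a tea-making plan) are invented for the example.

```python
# An illustrative production system: rules fire when their IF-part is
# contained in working memory, adding their THEN-part as a new element.
# The rules and memory contents are invented for illustration.

working_memory = {"goal: make-tea", "have: kettle", "have: teabag"}

rules = [
    ({"goal: make-tea", "have: kettle"}, "subgoal: boil-water"),
    ({"subgoal: boil-water"}, "done: hot-water"),
    ({"done: hot-water", "have: teabag"}, "done: tea"),
]

def run(memory, rules):
    """Fire any rule whose conditions are all satisfied, until quiescence."""
    memory = set(memory)
    fired = True
    while fired:
        fired = False
        for conditions, action in rules:
            if conditions <= memory and action not in memory:
                memory.add(action)
                fired = True
    return memory

print("done: tea" in run(working_memory, rules))  # → True
```

Planning here emerges from chaining: each fired rule changes working memory so that further rules become applicable, which is the sense in which rule-based systems model sequential thinking.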
Concepts, which partly correspond to the words in spoken and written language, are an important kind of mental representation. There are computational and psychological reasons for abandoning the classical view that concepts have strict definitions. Instead, concepts can be viewed as sets of typical features. Concept application is then a matter of getting an approximate match between concepts and the world. Schemas and scripts are more complex than concepts that correspond to words, but they are similar in that they consist of bundles of features that can be matched and applied to new situations. The explanatory schema used in concept-based systems is:
- Why do people have a particular kind of intelligent behavior?
- People have a set of concepts, organized via slots that establish kind and part hierarchies and other associations.
- People have a set of procedures for concept application, including spreading activation, matching, and inheritance.
- The procedures applied to the concepts produce the behavior.
Concepts can be translated into rules, but they bundle information differently than sets of rules, making possible different computational procedures.
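The idea of concepts as sets of typical features, applied by approximate matching rather than strict definition, can be sketched as follows; the concepts, features, and scoring function are hypothetical simplifications.

```python
# Concepts as sets of typical features: application is approximate
# matching, not checking a strict definition. Feature lists and the
# overlap score are invented simplifications.

concepts = {
    "bird": {"has_feathers", "flies", "lays_eggs", "has_beak"},
    "fish": {"has_fins", "swims", "lays_eggs", "has_scales"},
}

def best_match(observed, concepts):
    """Pick the concept sharing the largest fraction of typical features."""
    def score(features):
        return len(observed & features) / len(features)
    return max(concepts, key=lambda name: score(concepts[name]))

# A penguin: feathered, beaked, egg-laying, swims -- but does not fly.
penguin = {"has_feathers", "has_beak", "lays_eggs", "swims"}
print(best_match(penguin, concepts))  # → bird
```

The penguin is classified as a bird despite lacking a typical feature, which is exactly what the classical definitional view of concepts struggles with and the typical-features view handles naturally.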
Analogies play an important role in human thinking, in areas as diverse as problem solving, decision making, explanation, and linguistic communication. Computational models simulate how people retrieve and map source analogs in order to apply them to target situations. The explanation schema for analogies is:
- Why do people have a particular kind of intelligent behavior?
- People have verbal and visual representations of situations that can be used as cases or analogs.
- People have processes of retrieval, mapping, and adaptation that operate on those analogs.
- The analogical processes, applied to the representations of analogs, produce the behavior.
The constraints of similarity, structure, and purpose overcome the difficult problem of how previous experiences can be found and used to help with new problems. Not all thinking is analogical, and using inappropriate analogies can hinder thinking, but analogies can be very effective in applications such as education and design.
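Retrieval and mapping of source analogs can be sketched crudely as picking the stored case with the most relational overlap with the target and reusing its solution. The case base below is a hypothetical, highly simplified rendering of the classic fortress/tumor analogy.

```python
# A toy sketch of analogical retrieval: choose the stored source case
# whose relational structure overlaps most with the target, then reuse
# its solution. Cases and relation labels are invented simplifications.

cases = {
    "fortress-problem": {
        "relations": {"attack(goal)", "too-strong(direct-route)",
                      "divide(forces)"},
        "solution": "send small groups along many roads at once",
    },
    "water-jug": {
        "relations": {"measure(goal)", "pour(jugs)"},
        "solution": "fill, transfer, and empty jugs in sequence",
    },
}

def retrieve(target_relations, cases):
    """Return the (name, case) pair with maximal relational overlap."""
    return max(cases.items(),
               key=lambda item: len(target_relations & item[1]["relations"]))

# A radiation-therapy problem, stated relationally (simplified).
tumor = {"attack(goal)", "too-strong(direct-route)"}
name, case = retrieve(tumor, cases)
print(name, "->", case["solution"])
```

Real models of analogy also perform structural mapping and adaptation of the retrieved solution; this sketch covers only the retrieval step, where similarity of relations does the work.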
Visual and other kinds of images play an important role in human thinking. Pictorial representations capture visual and spatial information in a much more usable form than lengthy verbal descriptions. Computational procedures well suited to visual representations include inspecting, finding, zooming, rotating, and transforming. Such operations can be very useful for generating plans and explanations in domains to which pictorial representations apply. The explanatory schema for visual representation is:
- Why do people have a particular kind of intelligent behavior?
- People have visual images of situations.
- People have processes such as scanning and rotation that operate on those images.
- The processes for constructing and manipulating images produce the intelligent behavior.
Imagery can aid learning, and some metaphorical aspects of language may have their roots in imagery. Psychological experiments suggest that visual procedures such as scanning and rotating employ imagery, and recent neurophysiological results confirm a close physical link between reasoning with mental imagery and perception.
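Imagery operations like scanning and rotating can be modeled as concrete procedures on array-like representations rather than verbal descriptions. The grid image and operations below are an invented miniature example.

```python
# Imagery as computation on pictorial representations: a 2D "image" is a
# list of rows, and scanning and rotation are explicit procedures.
# The image and operations are invented for illustration.

image = [
    "X..",
    "X..",
    "XX.",
]

def rotate_90(img):
    """Rotate a square image 90 degrees clockwise."""
    return ["".join(row[c] for row in reversed(img))
            for c in range(len(img[0]))]

def scan_for(img, symbol):
    """Scan the image, returning (row, col) positions of a symbol."""
    return [(r, c) for r, row in enumerate(img)
                   for c, ch in enumerate(row) if ch == symbol]

print(rotate_90(image))  # → ['XXX', 'X..', '...']
```

The point of the pictorial format is that operations like rotation are cheap and direct here, whereas the same transformation stated over a verbal description of the shape would be cumbersome.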
Connectionist networks consisting of simple nodes and links are very useful for understanding psychological processes that involve parallel constraint satisfaction. Such processes include aspects of vision, decision making, explanation selection, and meaning making in language comprehension. Connectionist models can simulate learning by methods that include Hebbian learning and backpropagation. The explanatory schema for the connectionist approach is:
- Why do people have a particular kind of intelligent behavior?
- People have representations that involve simple processing units linked to each other by excitatory and inhibitory connections.
- People have processes that spread activation between the units via their connections, as well as processes for modifying the connections.
- Applying spreading activation and learning to the units produces the behavior.
Simulations of various psychological experiments have shown the psychological relevance of the connectionist models, which are, however, only very rough approximations to actual neural networks.
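The connectionist schema can be sketched with a tiny invented network: two rival word senses joined by an inhibitory link, with a context unit exciting one of them, settled by parallel spreading of activation. This is a simple form of parallel constraint satisfaction, not any published model.

```python
# A toy connectionist network: units linked by excitatory (positive) and
# inhibitory (negative) symmetric connections, with activation spread in
# parallel until the network settles. Units and weights are invented.

weights = {
    ("context-water", "bank-river"): 0.5,   # excitatory link
    ("bank-river", "bank-money"): -0.4,     # inhibition between rival senses
}

def settle(act, weights, clamped=frozenset(), steps=25, decay=0.1):
    """Synchronously spread activation over symmetric connections."""
    sym = {}
    for (a, b), w in weights.items():       # treat links as bidirectional
        sym.setdefault(a, []).append((b, w))
        sym.setdefault(b, []).append((a, w))
    act = dict(act)
    for _ in range(steps):
        new = {}
        for u, a in act.items():
            if u in clamped:                # input units hold their value
                new[u] = a
                continue
            net = sum(w * act[v] for v, w in sym.get(u, []))
            new[u] = max(-1.0, min(1.0, (1 - decay) * a + net))
        act = new
    return act

final = settle({"context-water": 1.0, "bank-river": 0.0, "bank-money": 0.0},
               weights, clamped={"context-water"})
print(final["bank-river"] > final["bank-money"])  # → True
```

All units update at once on each step, so the disambiguation is settled by many soft constraints acting in parallel rather than by a sequence of rule firings.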
Theoretical neuroscience is the attempt to develop mathematical and computational theories and models of the structures and processes of the brains of humans and other animals. It differs from connectionism in trying to be more biologically accurate by modeling the behavior of large numbers of realistic neurons organized into functionally significant brain areas. In recent years, computational models of the brain have become biologically richer, both with respect to employing more realistic neurons such as ones that spike and have chemical pathways, and with respect to simulating the interactions among different areas of the brain such as the hippocampus and the cortex. These models are not strictly an alternative to computational accounts in terms of logic, rules, concepts, analogies, images, and connections, but should mesh with them and show how mental functioning can be performed at the neural level. The explanatory schema for theoretical neuroscience is:
- How does the brain carry out functions such as cognitive tasks?
- The brain has neurons organized by synaptic connections into populations and brain areas.
- The neural populations have spiking patterns that are transformed via sensory inputs and the spiking patterns of other neural populations.
- Interactions of neural populations carry out functions including cognitive tasks.
From the perspective of theoretical neuroscience, mental representations are patterns of neural activity, and inference is transformation of such patterns.
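In the spirit of theoretical neuroscience, here is a sketch of a leaky integrate-and-fire neuron, one common idealization of a spiking cell; the parameter values are illustrative and not tied to any published model.

```python
# A leaky integrate-and-fire neuron: membrane potential integrates input
# current with a leak, spiking and resetting when it crosses threshold.
# Parameter values are illustrative, not from any published model.

def simulate_lif(input_current, dt=1.0, tau=10.0, threshold=1.0, reset=0.0):
    """Return the time steps at which the neuron spikes."""
    v = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        # Leaky integration: dv/dt = (-v + I) / tau
        v += dt * (-v + current) / tau
        if v >= threshold:
            spikes.append(t)
            v = reset
    return spikes

# Constant drive above threshold produces a regular spike train;
# drive below threshold produces none.
print(simulate_lif([1.5] * 100))
```

A "neural population" in the schema above would be many such units coupled by synaptic weights, with the population's spiking pattern serving as the representation.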
Some philosophy, in particular naturalistic philosophy of mind, is part of cognitive science. But the interdisciplinary field of cognitive science is relevant to philosophy in several ways. First, the psychological, computational, and other results of cognitive science investigations have important potential applications to traditional philosophical problems in epistemology, metaphysics, and ethics. Second, cognitive science can serve as an object of philosophical critique, particularly concerning the central assumption that thinking is representational and computational. Third and more constructively, cognitive science can be taken as an object of investigation in the philosophy of science, generating reflections on the methodology and presuppositions of the enterprise.
Much philosophical research today is naturalistic, treating philosophical investigations as continuous with empirical work in fields such as psychology. From a naturalistic perspective, philosophy of mind is closely allied with theoretical and experimental work in cognitive science. Metaphysical conclusions about the nature of mind are to be reached, not by a priori speculation, but by informed reflection on scientific developments in fields such as computer science and neuroscience. Similarly, epistemology is not a stand-alone conceptual exercise, but depends on and benefits from scientific findings concerning mental structures and learning procedures. Even ethics can benefit by bringing a greater understanding of the psychology of moral thinking to bear on ethical questions such as the nature of deliberations concerning right and wrong. Goldman (1993) provides a concise review of applications of cognitive science to epistemology, philosophy of science, philosophy of mind, metaphysics, and ethics. Here are some philosophical problems to which ongoing developments in cognitive science are highly relevant.
- Innateness. To what extent is knowledge innate or acquired by experience? Is human behavior shaped primarily by nature or nurture?
- Language of thought. Does the human brain operate with a language-like code or with a more general connectionist architecture? What is the relation between symbolic cognitive models using rules and concepts and sub-symbolic models using neural networks?
- Mental imagery. Do human minds think with visual and other kinds of imagery, or only with language-like representations?
- Folk psychology. Does a person's everyday understanding of other people consist of having a theory of mind, or of merely being able to simulate them?
- Meaning. How do mental representations acquire meaning or mental content? To what extent does the meaning of a representation depend on its relation to other representations, its relation to the world, and its relation to a community of thinkers?
- Mind-brain identity. Are mental states brain states? Or can they be multiply realized by other material states? What is the relation between psychology and neuroscience? Is materialism true?
- Free will. Is human action free or merely caused by brain events?
- Moral psychology. How do minds/brains make ethical judgments?
- Emotions. What are emotions, and what role do they play in thinking?
- Appearance and reality. How do minds/brains form and evaluate representations of the external world?
Additional philosophical problems arise from examining the presuppositions of current approaches to cognitive science.
The claim that human minds work by representation and computation is an empirical conjecture and might be wrong. Although the computational-representational approach to cognitive science has been successful in explaining many aspects of human problem solving, learning, and language use, some philosophical critics such as Hubert Dreyfus (1992) and John Searle (1992) have claimed that this approach is fundamentally mistaken. Critics of cognitive science have offered such challenges as:
- The emotion challenge: Cognitive science neglects the important role of emotions in human thinking.
- The consciousness challenge: Cognitive science ignores the importance of consciousness in human thinking.
- The world challenge: Cognitive science disregards the significant role of physical environments in human thinking.
- The body challenge: Cognitive science neglects the contribution of the body to human thought and action.
- The social challenge: Human thought is inherently social in ways that cognitive science ignores.
- The dynamical systems challenge: The mind is a dynamical system, not a computational system.
- The mathematics challenge: Mathematical results show that human thinking cannot be computational in the standard sense, so the brain must operate differently, perhaps as a quantum computer.
Thagard (2005) argues that all these challenges can best be met by expanding and supplementing the computational-representational approach, not by abandoning it.
Cognitive science raises many interesting methodological questions that are worthy of investigation by philosophers of science. What is the nature of representation? What role do computational models play in the development of cognitive theories? What is the relation among apparently competing accounts of mind involving symbolic processing, neural networks, and dynamical systems? What is the relation among the various fields of cognitive science such as psychology, linguistics, and neuroscience? Are psychological phenomena subject to reductionist explanations via neuroscience? Von Eckardt (1993) and Clark (2001) provide discussions of some of the philosophical issues that arise in cognitive science. Bechtel et al. (2001) collect useful articles on the philosophy of neuroscience.
The increasing prominence of neural explanations in cognitive, social, developmental, and clinical psychology raises important philosophical questions about explanation and reduction. Anti-reductionism, according to which psychological explanations are completely independent of neurological ones, is becoming increasingly implausible, but it remains controversial to what extent psychology can be reduced to neuroscience and molecular biology (see McCauley, 2007, for a comprehensive survey). Essential to answering questions about the nature of reduction are answers to questions about the nature of explanation. Explanations in psychology, neuroscience, and biology in general are plausibly viewed as descriptions of mechanisms, which are systems of parts that interact to produce regular changes (Bechtel and Abrahamsen, 2005). In psychological explanations, the parts are mental representations that interact by computational procedures to produce new representations. In neuroscientific explanations, the parts are neural populations that interact by electrochemical processes to produce new activity in neural populations. If progress in theoretical neuroscience continues, it should become possible to tie psychological to neurological explanations by showing how mental representations such as concepts are constituted by activities in neural populations, and how computational procedures such as spreading activation among concepts are carried out by neural processes.
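As an illustration of the kind of computational procedure mentioned here, spreading activation among concepts can be sketched in a few lines. The network, weights, decay rate, and step count below are hypothetical choices for illustration only, not a model drawn from the literature under discussion:

```python
# Minimal sketch of spreading activation over a concept network.
# Graph structure, weights, and parameters are illustrative assumptions.

def spread_activation(graph, activation, decay=0.5, steps=3):
    """Propagate activation along weighted links between concepts."""
    for _ in range(steps):
        new_activation = dict(activation)
        for concept, neighbors in graph.items():
            for neighbor, weight in neighbors.items():
                # Each concept passes a decayed share of its current
                # activation on to its associated concepts.
                new_activation[neighbor] = (new_activation.get(neighbor, 0.0)
                    + decay * weight * activation.get(concept, 0.0))
        activation = new_activation
    return activation

# A toy semantic network: "doctor" is strongly associated with "nurse".
network = {
    "doctor": {"nurse": 0.8, "hospital": 0.6},
    "nurse": {"hospital": 0.5},
    "hospital": {},
}
result = spread_activation(network, {"doctor": 1.0})
```

On this sketch, activating "doctor" raises the activation of associated concepts in proportion to link strength, which is the qualitative behavior a neural implementation of spreading activation would need to reproduce.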
- Bechtel, W., & Abrahamsen, A. A. (2005). Explanation: A Mechanistic Alternative. Studies in History and Philosophy of Biology and Biomedical Sciences, 36, 421-441.
- Bechtel, W., & Graham, G. (Eds.). (1998). A Companion to Cognitive Science. Malden, MA: Blackwell.
- Bechtel, W., Mandik, P., Mundale, J., & Stufflebeam, R. S. (Eds.). (2001). Philosophy and the Neurosciences: A Reader. Malden, MA: Blackwell.
- Clark, A. (2001). Mindware: An Introduction to the Philosophy of Cognitive Science. New York: Oxford University Press.
- Dawson, M. R. W. (1998). Understanding Cognitive Science. Oxford: Blackwell.
- Dreyfus, H. L. (1992). What Computers Still Can't Do. (3rd ed.). Cambridge, MA: MIT Press.
- Eliasmith, C., & Anderson, C. H. (2003). Neural Engineering: Computation, Representation and Dynamics in Neurobiological Systems. Cambridge, MA: MIT Press.
- Friedenberg, J. D., & Silverman, G. (2005). Cognitive Science: An Introduction to the Study of Mind. Thousand Oaks, CA: Sage.
- Goldman, A. (1993). Philosophical Applications of Cognitive Science. Boulder: Westview Press.
- Johnson-Laird, P. (1988). The Computer and the Mind: An Introduction to Cognitive Science. Cambridge, MA: Harvard University Press.
- McCauley, R. N. (2007). Reduction: Models of Cross-scientific Relations and their Implications for the Psychology-neuroscience Interface. In P. Thagard (Ed.), Philosophy of Psychology and Cognitive Science (pp. 105-158). Amsterdam: Elsevier.
- Nadel, L. (Ed.). (2003). Encyclopedia of Cognitive Science. London: Nature Publishing Group.
- Polk, T. A., & Seifert, C. M. (Eds.). (2002). Cognitive Modeling. Cambridge, MA: MIT Press.
- Searle, J. (1992). The Rediscovery of the Mind. Cambridge, MA: MIT Press.
- Sobel, C. P. (2001). The Cognitive Sciences: An Interdisciplinary Approach. Mountain View, CA: Mayfield.
- Stillings, N., et al. (1995). Cognitive Science (2nd ed.). Cambridge, MA: MIT Press.
- Thagard, P. (2005). Mind: Introduction to Cognitive Science (2nd ed.). Cambridge, MA: MIT Press.
- Thagard, P. (Ed.). (2007). Philosophy of Psychology and Cognitive Science. Amsterdam: Elsevier.
- von Eckardt, B. (1993). What is Cognitive Science? Cambridge, MA: MIT Press.
- Wilson, R. A., & Keil, F. C. (Eds.). (1999). The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: MIT Press.
With the kind permission of MIT Press, this page incorporates some material from the first and second editions of P. Thagard, Mind: Introduction to Cognitive Science.
artificial intelligence | behaviorism | concepts | connectionism | consciousness | emotion | folk psychology: as a theory | folk psychology: as mental simulation | identity theory of mind | innate/acquired distinction | innateness: and contemporary theories of cognition | intentionality | language of thought hypothesis | meaning holism | memory | mental content: causal theories of | mental imagery | mental representation | mind: computational theory of | mind: modularity of | neuroscience, philosophy of | propositional attitude reports
First published Thu Mar 30, 2000; substantive revision Wed Jul 7, 2004
The notion of a "mental representation" is, arguably, in the first instance a theoretical construct of cognitive science. As such, it is a basic concept of the Computational Theory of Mind, according to which cognitive states and processes are constituted by the occurrence, transformation and storage (in the mind/brain) of information-bearing structures (representations) of one kind or another.
However, on the assumption that a representation is an object with semantic properties (content, reference, truth-conditions, truth-value, etc.), a mental representation may be more broadly construed as a mental object with semantic properties. As such, mental representations (and the states and processes that involve them) need not be understood only in computational terms. On this broader construal, mental representation is a philosophical topic with roots in antiquity and a rich history and literature predating the recent "cognitive revolution." Though most contemporary philosophers of mind acknowledge the relevance and importance of cognitive science, they vary in their degree of engagement with its literature, methods and results; and there remain, for many, issues concerning the representational properties of the mind that can be addressed independently of the computational hypothesis.
Though the term 'Representational Theory of Mind' is sometimes used almost interchangeably with 'Computational Theory of Mind', I will use it here to refer to any theory that postulates the existence of semantically evaluable mental objects, including philosophy's stock in trade mentalia — thoughts, concepts, percepts, ideas, impressions, notions, rules, schemas, images, phantasms, etc. — as well as the various sorts of "subpersonal" representations postulated by cognitive science. Representational theories may thus be contrasted with theories, such as those of Baker (1995), Collins (1987), Dennett (1987), Gibson (1966, 1979), Reid (1764/1997), Stich (1983) and Thau (2002), which deny the existence of such things.
- 1. The Representational Theory of Mind
- 2. Propositional Attitudes
- 3. Conceptual and Nonconceptual Representation
- 4. Representationalism and Phenomenalism
- 5. Imagery
- 6. Content Determination
- 7. Internalism and Externalism
- 8. The Computational Theory of Mind
- 9. Thought and Language
- Other Internet Resources
- Related Entries
The Representational Theory of Mind (RTM) (which goes back at least to Aristotle) takes as its starting point commonsense mental states, such as thoughts, beliefs, desires, perceptions and images. Such states are said to have "intentionality" — they are about or refer to things, and may be evaluated with respect to properties like consistency, truth, appropriateness and accuracy. (For example, the thought that cousins are not related is inconsistent, the belief that Elvis is dead is true, the desire to eat the moon is inappropriate, a visual experience of a ripe strawberry as red is accurate, an image of George W. Bush with dreadlocks is inaccurate.)
RTM defines such intentional mental states as relations to mental representations, and explains the intentionality of the former in terms of the semantic properties of the latter. For example, to believe that Elvis is dead is to be appropriately related to a mental representation whose propositional content is that Elvis is dead. (The desire that Elvis be dead, the fear that he is dead, the regret that he is dead, etc., involve different relations to the same mental representation.) To perceive a strawberry is to have a sensory experience of some kind which is appropriately related to (e.g., caused by) the strawberry.
RTM also understands mental processes such as thinking, reasoning and imagining as sequences of intentional mental states. For example, to imagine the moon rising over a mountain is to entertain a series of mental images of the moon (and a mountain). To infer a proposition q from the propositions p and if p then q is (among other things) to have a sequence of thoughts of the form p, if p then q, q.
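The modus ponens example can be given a toy computational rendering. The encoding of conditional thoughts as tuples below is an arbitrary illustrative assumption, not a claim about how RTM must be implemented:

```python
# Toy illustration of RTM's picture of inference as a sequence of
# intentional states: a modus ponens step extends a sequence of
# thoughts p, ("if", p, q) with the thought q.
# The tuple encoding of conditionals is an assumption for illustration.

def modus_ponens(thoughts):
    """Extend a sequence of thoughts with conclusions licensed by
    modus ponens over the thoughts entertained so far."""
    derived = list(thoughts)
    for t in thoughts:
        if isinstance(t, tuple) and len(t) == 3 and t[0] == "if":
            _, antecedent, consequent = t
            if antecedent in thoughts and consequent not in derived:
                derived.append(consequent)
    return derived

sequence = ["it is raining", ("if", "it is raining", "the streets are wet")]
```

Calling `modus_ponens(sequence)` yields the extended sequence ending in the thought "the streets are wet", mirroring the sequence p, if p then q, q described above.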
Contemporary philosophers of mind have typically supposed (or at least hoped) that the mind can be naturalized — i.e., that all mental facts have explanations in the terms of natural science. This assumption is shared within cognitive science, which attempts to provide accounts of mental states and processes in terms (ultimately) of features of the brain and central nervous system. In the course of doing so, the various sub-disciplines of cognitive science (including cognitive and computational psychology and cognitive and computational neuroscience) postulate a number of different kinds of structures and processes, many of which are not directly implicated by mental states and processes as commonsensically conceived. There remains, however, a shared commitment to the idea that mental states and processes are to be explained in terms of mental representations.
In philosophy, recent debates about mental representation have centered around the existence of propositional attitudes (beliefs, desires, etc.) and the determination of their contents (how they come to be about what they are about), and the existence of phenomenal properties and their relation to the content of thought and perceptual experience. Within cognitive science itself, the philosophically relevant debates have been focused on the computational architecture of the brain and central nervous system, and the compatibility of scientific and commonsense accounts of mentality.
Intentional Realists such as Dretske (e.g., 1988) and Fodor (e.g., 1987) note that the generalizations we apply in everyday life in predicting and explaining each other's behavior (often collectively referred to as "folk psychology") are both remarkably successful and indispensable. What a person believes, doubts, desires, fears, etc. is a highly reliable indicator of what that person will do; and we have no other way of making sense of each other's behavior than by ascribing such states and applying the relevant generalizations. We are thus committed to the basic truth of commonsense psychology and, hence, to the existence of the states its generalizations refer to. (Some realists, such as Fodor, also hold that commonsense psychology will be vindicated by cognitive science, given that propositional attitudes can be construed as computational relations to mental representations.)
Intentional Eliminativists, such as Churchland, (perhaps) Dennett and (at one time) Stich argue that no such things as propositional attitudes (and their constituent representational states) are implicated by the successful explanation and prediction of our mental lives and behavior. Churchland denies that the generalizations of commonsense propositional-attitude psychology are true. He (1981) argues that folk psychology is a theory of the mind with a long history of failure and decline, and that it resists incorporation into the framework of modern scientific theories (including cognitive psychology). As such, it is comparable to alchemy and phlogiston theory, and ought to suffer a comparable fate. Commonsense psychology is false, and the states (and representations) it postulates simply don't exist. (It should be noted that Churchland is not an eliminativist about mental representation tout court. See, e.g., Churchland 1989.)
Dennett (1987a) grants that the generalizations of commonsense psychology are true and indispensable, but denies that this is sufficient reason to believe in the entities they appear to refer to. He argues that to give an intentional explanation of a system's behavior is merely to adopt the "intentional stance" toward it. If the strategy of assigning contentful states to a system and predicting and explaining its behavior (on the assumption that it is rational — i.e., that it behaves as it should, given the propositional attitudes it should have in its environment) is successful, then the system is intentional, and the propositional-attitude generalizations we apply to it are true. But there is nothing more to having a propositional attitude than this. (See Dennett 1987a: 29.)
Though he has been taken to be thus claiming that intentional explanations should be construed instrumentally, Dennett (1991) insists that he is a "moderate" realist about propositional attitudes, since he believes that the patterns in the behavior and behavioral dispositions of a system on the basis of which we (truly) attribute intentional states to it are objectively real. In the event that there are two or more explanatorily adequate but substantially different systems of intentional ascriptions to an individual, however, Dennett claims there is no fact of the matter about what the system believes (1987b, 1991). This does suggest an irrealism at least with respect to the sorts of things Fodor and Dretske take beliefs to be; though it is not the view that there is simply nothing in the world that makes intentional explanations true.
(Davidson 1973, 1974 and Lewis 1974 also defend the view that what it is to have a propositional attitude is just to be interpretable in a particular way. It is, however, not entirely clear whether they intend their views to imply irrealism about propositional attitudes.)
Stich (1983) argues that cognitive psychology does not (or, in any case, should not) taxonomize mental states by their semantic properties at all, since attribution of psychological states by content is sensitive to factors that render it problematic in the context of a scientific psychology. Cognitive psychology seeks causal explanations of behavior and cognition, and the causal powers of a mental state are determined by its intrinsic "structural" or "syntactic" properties. The semantic properties of a mental state, however, are determined by its extrinsic properties — e.g., its history, environmental or intramental relations. Hence, such properties cannot figure in causal-scientific explanations of behavior. (Fodor 1994 and Dretske 1988 are realist attempts to come to grips with some of these problems.) Stich proposes a syntactic theory of the mind, on which the semantic properties of mental states play no explanatory role. (Stich has changed his views on a number of these issues. See Stich 1996.)
It is a traditional assumption among realists about mental representations that representational states come in two basic varieties (cf. Boghossian 1995). There are those, such as thoughts, which are composed of concepts and have no phenomenal ("what-it's-like") features ("qualia"), and those, such as sensory experiences, which have phenomenal features but no conceptual constituents. (Nonconceptual content is usually defined as a kind of content that states of a creature lacking concepts might nonetheless enjoy.) On this taxonomy, mental states can represent either in a way analogous to expressions of natural languages or in a way analogous to drawings, paintings, maps or photographs. (Perceptual states, such as seeing that something is blue, are sometimes thought of as hybrid states, consisting of, for example, a non-conceptual sensory experience and a thought, or some more integrated compound of sensory and conceptual components.)
Some historical discussions of the representational properties of mind (e.g., Aristotle 1984, Locke 1689/1975, Hume 1739/1978) seem to assume that nonconceptual representations — percepts ("impressions"), images ("ideas") and the like — are the only kinds of mental representations, and that the mind represents the world in virtue of being in states that resemble things in it. On such a view, all representational states have their content in virtue of their phenomenal features. Powerful arguments, however, focusing on the lack of generality (Berkeley 1975), ambiguity (Wittgenstein 1953) and non-compositionality (Fodor 1981c) of sensory and imagistic representations, as well as their unsuitability to function as logical (Frege 1918/1997, Geach 1957) or mathematical (Frege 1884/1953) concepts, and the symmetry of resemblance (Goodman 1976), convinced philosophers that no theory of mind can get by with only nonconceptual representations construed in this way.
Contemporary disagreement over nonconceptual representation concerns the existence and nature of phenomenal properties and the role they play in determining the content of sensory experience. Dennett (1988), for example, denies that there are such things as qualia at all; while Brandom (2002), McDowell (1994), Rey (1991) and Sellars (1956) deny that they are needed to explain the content of sensory experience. Among those who accept that experiences have phenomenal content, some (Dretske, Lycan, Tye) argue that it is reducible to a kind of intentional content, while others (Block, Loar, Peacocke) argue that it is irreducible. (See the discussion in the next section.)
There has also been dissent from the traditional claim that conceptual representations (thoughts, beliefs) lack phenomenology. Chalmers (1996), Flanagan (1992), Goldman (1993), Horgan and Tienson (2003), Jackendoff (1987), Levine (1993, 1995, 2001), McGinn (1991a), Pitt (2004), Searle (1992), Siewert (1998) and Strawson (1994) claim that purely symbolic (conscious) representational states themselves have a (perhaps proprietary) phenomenology. If this claim is correct, the question of what role phenomenology plays in the determination of content rearises for conceptual representation; and the eliminativist ambitions of Sellars, Brandom, Rey, et al. would meet a new obstacle. (It would also raise prima facie problems for reductivist representationalism (see the next section).)
Among realists about phenomenal properties, the central division is between representationalists (also called "representationists" and "intentionalists") — e.g., Dretske (1995), Harman (1990), Leeds (1993), Lycan (1987, 1996), Rey (1991), Thau (2002), Tye (1995, 2000) — and phenomenalists (also called "phenomenists" and "qualia freaks") — e.g., Block (1996, 2003), Chalmers (1996, 2004), Evans (1982), Loar (2003a, 2003b), Peacocke (1983, 1989, 1992, 2001), Raffman (1995), Shoemaker (1990). Representationalists claim that the phenomenal character of a mental state is reducible to a kind of intentional content. Phenomenalists claim that the phenomenal character of a mental state is not so reducible.
The representationalist thesis is often formulated as the claim that phenomenal properties are representational or intentional. However, this formulation is ambiguous between a reductive and a non-reductive claim (though the term 'representationalism' is most often used for the reductive claim). (See Chalmers 2004.) On one hand, it could mean that the phenomenal content of an experience is a kind of intentional content (the properties it represents). On the other, it could mean that the (irreducible) phenomenal properties of an experience determine an intentional content. Representationalists such as Dretske, Lycan and Tye would assent to the former claim, whereas phenomenalists such as Block, Chalmers, Loar and Peacocke would assent to the latter. (Among phenomenalists, there is further disagreement about whether qualia are intrinsically representational (Loar) or not (Block, Peacocke).)
Most (reductive) representationalists are motivated by the conviction that one or another naturalistic explanation of intentionality (see the next section) is, in broad outline, correct, and by the desire to complete the naturalization of the mental by applying such theories to the problem of phenomenality. (Needless to say, most phenomenalists (Chalmers is the major exception) are just as eager to naturalize the phenomenal — though not in the same way.)
The main argument for representationalism appeals to the transparency of experience (cf. Tye 2000: 45-51). The properties that characterize what it's like to have a perceptual experience are presented in experience as properties of objects perceived: in attending to an experience, one seems to "see through it" to the objects and properties it is an experience of. They are not presented as properties of the experience itself. If they were nonetheless properties of the experience, perception would be massively deceptive. But perception is not massively deceptive. According to the representationalist, the phenomenal character of an experience is due to its representing objective, non-experiential properties. (In veridical perception, these properties are locally instantiated; in illusion and hallucination, they are not.) On this view, introspection is indirect perception: one comes to know what phenomenal features one's experience has by coming to know what objective features it represents.
In order to account for the intuitive differences between conceptual and sensory representations, representationalists appeal to their structural or functional differences. Dretske (1995), for example, distinguishes experiences and thoughts on the basis of the origin and nature of their functions: an experience of a property P is a state of a system whose evolved function is to indicate the presence of P in the environment; a thought representing the property P, on the other hand, is a state of a system whose assigned (learned) function is to calibrate the output of the experiential system. Rey (1991) takes both thoughts and experiences to be relations to sentences in the language of thought, and distinguishes them on the basis of (the functional roles of) such sentences' constituent predicates. Lycan (1987, 1996) distinguishes them in terms of their functional-computational profiles. Tye (2000) distinguishes them in terms of their functional roles and the intrinsic structure of their vehicles: thoughts are representations in a language-like medium, whereas experiences are image-like representations consisting of "symbol-filled arrays." (Cf. the account of mental images in Tye 1991.)
Phenomenalists tend to make use of the same sorts of features (function, intrinsic structure) in explaining some of the intuitive differences between thoughts and experiences; but they do not suppose that such features exhaust the differences between phenomenal and non-phenomenal representations. For the phenomenalist, it is the phenomenal properties of experiences — qualia themselves — that constitute the fundamental difference between experience and thought. Peacocke (1992), for example, develops the notion of a perceptual "scenario" (an assignment of phenomenal properties to coordinates of a three-dimensional egocentric space), whose content is "correct" (a semantic property) if in the corresponding "scene" (the portion of the external world represented by the scenario) properties are distributed as their phenomenal analogues are in the scenario.
Another sort of representation championed by phenomenalists (e.g., Block, Chalmers (2003) and Loar (1996)) is the "phenomenal concept" — a conceptual/phenomenal hybrid consisting of a phenomenological "sample" (an image or an occurrent sensation) integrated with (or functioning as) a conceptual component. Phenomenal concepts are postulated to account for the apparent fact (among others) that, as McGinn (1991b) puts it, "you cannot form [introspective] concepts of conscious properties unless you yourself instantiate those properties." One cannot have a phenomenal concept of a phenomenal property P, and, hence, phenomenal beliefs about P, without having experience of P, because P itself is (in some way) constitutive of the concept of P. (Cf. Jackson 1982, 1986 and Nagel 1974.)
Though imagery has played an important role in the history of philosophy of mind, the important contemporary literature on it is primarily psychological. In a series of psychological experiments done in the 1970s (summarized in Kosslyn 1980 and Shepard and Cooper 1982), subjects' response time in tasks involving mental manipulation and examination of presented figures was found to vary in proportion to the spatial properties (size, orientation, etc.) of the figures presented. The question of how these experimental results are to be explained has kindled a lively debate on the nature of imagery and imagination.
Kosslyn (1980) claims that the results suggest that the tasks were accomplished via the examination and manipulation of mental representations that themselves have spatial properties — i.e., pictorial representations, or images. Others, principally Pylyshyn (1979, 1981a, 1981b, 2003), argue that the empirical facts can be explained in terms exclusively of discursive, or propositional representations and cognitive processes defined over them. (Pylyshyn takes such representations to be sentences in a language of thought.)
The idea that pictorial representations are literally pictures in the head is not taken seriously by proponents of the pictorial view of imagery (see, e.g., Kosslyn and Pomerantz 1977). The claim is, rather, that mental images represent in a way that is relevantly like the way pictures represent. (Attention has been focused on visual imagery — hence the designation 'pictorial'; though of course there may be imagery in other modalities — auditory, olfactory, etc. — as well.)
The distinction between pictorial and discursive representation can be characterized in terms of the distinction between analog and digital representation (Goodman 1976). This distinction has itself been variously understood (Fodor & Pylyshyn 1981, Goodman 1976, Haugeland 1981, Lewis 1971, McGinn 1989), though a widely accepted construal is that analog representation is continuous (i.e., in virtue of continuously variable properties of the representation), while digital representation is discrete (i.e., in virtue of properties a representation either has or doesn't have) (Dretske 1981). (An analog/digital distinction may also be made with respect to cognitive processes. (Block 1983.)) On this understanding of the analog/digital distinction, imagistic representations, which represent in virtue of properties that may vary continuously (such as being more or less bright, loud, vivid, etc.), would be analog, while conceptual representations, whose properties do not vary continuously (a thought cannot be more or less about Elvis: either it is or it is not) would be digital.
It might be supposed that the pictorial/discursive distinction is best made in terms of the phenomenal/nonphenomenal distinction, but it is not obvious that this is the case. For one thing, there may be nonphenomenal properties of representations that vary continuously. Moreover, there are ways of understanding pictorial representation that presuppose neither phenomenality nor analogicity. According to Kosslyn (1980, 1982, 1983), a mental representation is "quasi-pictorial" when every part of the representation corresponds to a part of the object represented, and relative distances between parts of the object represented are preserved among the parts of the representation. But distances between parts of a representation can be defined functionally rather than spatially — for example, in terms of the number of discrete computational steps required to combine stored information about them. (Cf. Rey 1981.)
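The functional notion of distance mentioned here can be illustrated with a toy sketch in which the "distance" between two stored parts is the number of discrete retrieval steps between them. The part graph below is a made-up example, not Kosslyn's or Rey's actual model:

```python
# Illustrative sketch: distance between parts of a representation
# defined functionally, as the number of discrete computational steps
# (a breadth-first search over stored links), rather than spatially.
# The part graph is a hypothetical example for illustration.
from collections import deque

def functional_distance(parts, start, goal):
    """Count the retrieval steps between two stored parts."""
    frontier = deque([(start, 0)])
    visited = {start}
    while frontier:
        part, steps = frontier.popleft()
        if part == goal:
            return steps
        for neighbor in parts.get(part, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append((neighbor, steps + 1))
    return None  # goal not reachable from start

# Stored links among parts of an imagined figure (hypothetical).
image_parts = {
    "head": ["torso"],
    "torso": ["head", "left_arm", "right_arm", "legs"],
    "left_arm": ["torso", "left_hand"],
    "left_hand": ["left_arm"],
    "right_arm": ["torso"],
    "legs": ["torso"],
}
```

On this construal, "head" is functionally closer to "torso" (one step) than to "left_hand" (three steps), whether or not anything in the brain is spatially laid out that way.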
Tye (1991) proposes a view of images on which they are hybrid representations, consisting both of pictorial and discursive elements. On Tye's account, images are "(labeled) interpreted symbol-filled arrays." The symbols represent discursively, while their arrangement in arrays has representational significance (the location of each "cell" in the array represents a specific viewer-centered 2-D location on the surface of the imagined object).
The contents of mental representations are typically taken to be abstract objects (properties, relations, propositions, sets, etc.). A pressing question, especially for the naturalist, is how mental representations come to have their contents. Here the issue is not how to naturalize content (abstract objects can't be naturalized), but, rather, how to provide a naturalistic account of the content-determining relations between mental representations and the abstract objects they express. There are two basic types of contemporary naturalistic theories of content-determination, causal-informational and functional.
Causal-informational theories (Dretske 1981, 1988, 1995) hold that the content of a mental representation is grounded in the information it carries about what does (Devitt 1996) or would (Fodor 1987, 1990a) cause it to occur. There is, however, widespread agreement that causal-informational relations are not sufficient to determine the content of mental representations. Such relations are common, but representation is not. Tree trunks, smoke, thermostats and ringing telephones carry information about what they are causally related to, but they do not represent (in the relevant sense) what they carry information about. Further, a representation can be caused by something it does not represent, and can represent something that has not caused it.
The main attempts to specify what makes a causal-informational state a mental representation are Asymmetric Dependency Theories (e.g., Fodor 1987, 1990a, 1994) and Teleological Theories (Fodor 1990b, Millikan 1984, Papineau 1987, Dretske 1988, 1995). The Asymmetric Dependency Theory distinguishes merely informational relations from representational relations on the basis of their higher-order relations to each other: informational relations depend upon representational relations, but not vice-versa. For example, if tokens of a mental state type are reliably caused by horses, cows-on-dark-nights, zebras-in-the-mist and Great Danes, then they carry information about horses, etc. If, however, such tokens are caused by cows-on-dark-nights, etc. because they were caused by horses, but not vice versa, then they represent horses (or the property horse).
According to Teleological Theories, representational relations are those a representation-producing mechanism has the selected (by evolution or learning) function of establishing. For example, zebra-caused horse-representations do not mean zebra, because the mechanism by which such tokens are produced has the selected function of indicating horses, not zebras. The horse-representation-producing mechanism that responds to zebras is malfunctioning.
Functional theories (Block 1986, Harman 1973) hold that the content of a mental representation is grounded in its (causal, computational, inferential) relations to other mental representations. They differ on whether the relata should include all other mental representations or only some of them, and on whether to include external states of affairs. The view that the content of a mental representation is determined by its inferential/computational relations with all other representations is holism; the view that it is determined by its relations to only some other mental states is localism (or molecularism). (The view that the content of a mental state depends on none of its relations to other mental states is atomism.) Functional theories that recognize no content-determining external relata have been called solipsistic (Harman 1987). Some theorists posit distinct roles for internal and external connections, the former determining semantic properties analogous to sense, the latter determining semantic properties analogous to reference (McGinn 1982, Sterelny 1989).
(Reductive) representationalists (Dretske, Lycan, Tye) usually take one or another of these theories to provide an explanation of the (non-conceptual) content of experiential states. They thus tend to be externalists (see the next section) about phenomenological as well as conceptual content. Phenomenalists and non-reductive representationalists (Block, Chalmers, Loar, Peacocke, Siewert), on the other hand, take it that the representational content of such states is (at least in part) determined by their intrinsic phenomenal properties. Further, those who advocate a phenomenology-based approach to conceptual content (Horgan and Tienson, Loar, Pitt, Searle, Siewert) also seem to be committed to internalist individuation of the content (if not the reference) of such states.
Generally, those who, like informational theorists, think relations to one's (natural or social) environment are (at least partially) determinative of the content of mental representations are externalists (e.g., Burge 1979, 1986b, McGinn 1977, Putnam 1975), whereas those who, like some proponents of functional theories, think representational content is determined by an individual's intrinsic properties alone, are internalists (or individualists; cf. Putnam 1975, Fodor 1981b).
This issue is widely taken to be of central importance, since psychological explanation, whether commonsense or scientific, is supposed to be both causal and content-based. (Beliefs and desires cause the behaviors they do because they have the contents they do. For example, the desire that one have a beer and the beliefs that there is beer in the refrigerator and that the refrigerator is in the kitchen may explain one's getting up and going to the kitchen.) If, however, a mental representation's having a particular content is due to factors extrinsic to it, it is unclear how its having that content could determine its causal powers, which, arguably, must be intrinsic (see Stich 1983, Fodor 1982, 1987, 1994). Some who accept the standard arguments for externalism have argued that internal factors determine a component of the content of a mental representation. They say that mental representations have both "narrow" content (determined by intrinsic factors) and "wide" or "broad" content (determined by narrow content plus extrinsic factors). (This distinction may be applied to the sub-personal representations of cognitive science as well as to those of commonsense psychology. See von Eckardt 1993: 189.)
Narrow content has been variously construed. Putnam (1975), Fodor (1982: 114; 1994: 39ff), and Block (1986: 627ff), for example, seem to understand it as something like de dicto content (i.e., Fregean sense, or perhaps character, à la Kaplan 1989). On this construal, narrow content is context-independent and directly expressible. Fodor (1987) and Block (1986), however, have also characterized narrow content as radically inexpressible. On this construal, narrow content is a kind of proto-content, or content-determinant, and can be specified only indirectly, via specifications of context/wide-content pairings. On both construals, narrow contents are characterized as functions from context to (wide) content. The narrow content of a representation is determined by properties intrinsic to it or its possessor such as its syntactic structure or its intramental computational or inferential role (or its phenomenology — see, e.g., Searle 1992, Siewert 1998, Pitt forthcoming).
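The "function from context to (wide) content" construal can be pictured with a toy sketch. This is only an illustration of the idea, not anyone's official theory; the context names and referents ("earth", "twin_earth", "H2O", "XYZ") are the stock Twin-Earth props, and Python is used merely as convenient notation:

```python
# Toy model: narrow content as a function from context to wide content.
# All names here are illustrative, borrowed from the Twin-Earth thought
# experiment; nothing in this sketch is a serious semantic theory.

def water_narrow_content(context):
    """The narrow content of a 'water' thought, modeled as a mapping
    from the thinker's environment to the thought's (wide) referent."""
    if context == "earth":
        return "H2O"        # embedded on Earth, 'water' thoughts are about H2O
    elif context == "twin_earth":
        return "XYZ"        # embedded on Twin Earth, they are about XYZ
    raise ValueError("unknown context")

# Intrinsic duplicates share narrow content (the same function),
# but differ in wide content once a context is fixed:
print(water_narrow_content("earth"))       # H2O
print(water_narrow_content("twin_earth"))  # XYZ
```

On the "radically inexpressible" construal, the joke is that nothing like the body of this function could be directly stated; only the context/wide-content pairings it induces can be specified.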
Burge (1986b) has argued that causation-based worries about externalist individuation of psychological content, and the introduction of the narrow notion, are misguided. Fodor (1994, 1998) has more recently urged that a scientific psychology might not need narrow content in order to supply naturalistic (causal) explanations of human cognition and action, since the sorts of cases it was introduced to handle, viz., Twin-Earth cases and Frege cases, are either nomologically impossible or dismissible as exceptions to non-strict psychological laws.
The leading contemporary version of the Representational Theory of Mind, the Computational Theory of Mind (CTM), claims that the brain is a kind of computer and that mental processes are computations. According to CTM, cognitive states are constituted by computational relations to mental representations of various kinds, and cognitive processes are sequences of such states.
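The CTM picture of cognitive states and processes can be made concrete with a toy sketch. The datatypes and sample states below are invented for illustration (they caricature the practical inference of the earlier beer example); no actual computational model is this simple:

```python
# Illustrative rendering of the CTM picture: a cognitive state is a
# computational relation (e.g. belief, desire) borne to a mental
# representation, and a cognitive process is a sequence of such states.
# The State type and the sample "process" are invented for this example.

from typing import NamedTuple

class State(NamedTuple):
    relation: str         # the computational relation, e.g. "BELIEF"
    representation: str   # stands in for a structured mental symbol

# A (crude) practical inference, rendered as a sequence of states:
process = [
    State("DESIRE", "I have a beer"),
    State("BELIEF", "there is beer in the refrigerator"),
    State("BELIEF", "the refrigerator is in the kitchen"),
    State("INTENTION", "go to the kitchen"),
]

for s in process:
    print(f"{s.relation}: {s.representation}")
```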
CTM develops RTM by attempting to explain all psychological states and processes in terms of mental representation. In the course of constructing detailed empirical theories of human and animal cognition and developing models of cognitive processes implementable in artificial information processing systems, cognitive scientists have proposed a variety of types of mental representations. While some of these may be suited to be mental relata of commonsense psychological states, some — so-called "subpersonal" or "sub-doxastic" representations — are not. Though many philosophers believe that CTM can provide the best scientific explanations of cognition and behavior, there is disagreement over whether such explanations will vindicate the commonsense psychological explanations of prescientific RTM.
According to Stich's (1983) Syntactic Theory of Mind, for example, computational theories of psychological states should concern themselves only with the formal properties of the objects those states are relations to. Commitment to the explanatory relevance of content, however, is for most cognitive scientists fundamental (Fodor 1981a, Pylyshyn 1984, Von Eckardt 1993). That mental processes are computations, that computations are rule-governed sequences of semantically evaluable objects, and that the rules apply to the symbols in virtue of their content, are central tenets of mainstream cognitive science.
Explanations in cognitive science appeal to many different kinds of mental representation, including, for example, the "mental models" of Johnson-Laird 1983, the "retinal arrays," "primal sketches" and "2½-D sketches" of Marr 1982, the "frames" of Minsky 1974, the "sub-symbolic" structures of Smolensky 1989, the "quasi-pictures" of Kosslyn 1980, and the "interpreted symbol-filled arrays" of Tye 1991 — in addition to representations that may be appropriate to the explanation of commonsense psychological states. Computational explanations have been offered of, among other mental phenomena, belief (Fodor 1975, Field 1978), visual perception (Marr 1982, Osherson, et al. 1990), rationality (Newell and Simon 1972, Fodor 1975, Johnson-Laird and Wason 1977), language learning and use (Chomsky 1965, Pinker 1989), and musical comprehension (Lerdahl and Jackendoff 1983).
A fundamental disagreement among proponents of CTM concerns the realization of personal-level representations (e.g., thoughts) and processes (e.g., inferences) in the brain. The central debate here is between proponents of Classical Architectures and proponents of Connectionist Architectures.
The classicists (e.g., Turing 1950, Fodor 1975, Fodor and Pylyshyn 1988, Marr 1982, Newell and Simon 1976) hold that mental representations are symbolic structures, which typically have semantically evaluable constituents, and that mental processes are rule-governed manipulations of them that are sensitive to their constituent structure. The connectionists (e.g., McCulloch & Pitts 1943, Rumelhart 1989, Rumelhart and McClelland 1986, Smolensky 1988) hold that mental representations are realized by patterns of activation in a network of simple processors ("nodes") and that mental processes consist of the spreading activation of such patterns. The nodes themselves are, typically, not taken to be semantically evaluable; nor do the patterns have semantically evaluable constituents. (Though there are versions of Connectionism — "localist" versions — on which individual nodes are taken to have semantic properties (e.g., Ballard 1986, Ballard & Hayes 1984), it is arguable that localist theories are neither definitive nor representative of the connectionist program (Smolensky 1988, 1991, Chalmers 1993).)
Classicists are motivated (in part) by properties thought seems to share with language. Fodor's Language of Thought Hypothesis (LOTH) (Fodor 1975, 1987), according to which the system of mental symbols constituting the neural basis of thought is structured like a language, provides a well-worked-out version of the classical approach as applied to commonsense psychology. (Cf. also Marr 1982 for an application of the classical approach in scientific psychology.) According to the LOTH, the potential infinity of complex representational mental states is generated from a finite stock of primitive representational states, in accordance with recursive formation rules. This combinatorial structure accounts for the properties of productivity and systematicity of the system of mental representations. As in the case of symbolic languages, including natural languages (though Fodor does not suppose either that the LOTH explains only linguistic capacities or that only verbal creatures have this sort of cognitive architecture), these properties of thought are explained by appeal to the content of the representational units and their combinability into contentful complexes. That is, the semantics of both language and thought is compositional: the content of a complex representation is determined by the contents of its constituents and their structural configuration.
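What compositionality, productivity and systematicity amount to can be sketched with a toy symbol system. The representation scheme below is invented for illustration and is far simpler than any serious LOT proposal:

```python
# Toy combinatorial symbol system (illustrative only): a finite stock of
# primitives plus a recursive formation rule yields unboundedly many
# complex representations, and the content of a complex is fixed by the
# contents of its constituents and their arrangement (compositionality).

primitives = {"John": "JOHN", "Mary": "MARY",
              "loves": "LOVES", "believes": "BELIEVES"}

def content(rep):
    """Compositional semantics: atoms look up their content;
    complexes combine the contents of their parts recursively."""
    if isinstance(rep, str):
        return primitives[rep]
    subj, verb, obj = rep
    return (content(verb), content(subj), content(obj))

# Systematicity: whoever can token ("John","loves","Mary") can token
# ("Mary","loves","John") — the same parts, rearranged:
print(content(("John", "loves", "Mary")))  # ('LOVES', 'JOHN', 'MARY')
print(content(("Mary", "loves", "John")))  # ('LOVES', 'MARY', 'JOHN')

# Productivity: the formation rule applies to its own outputs, so
# complexes embed without limit:
print(content(("John", "believes", ("Mary", "loves", "John"))))
```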
Connectionists are motivated mainly by a consideration of the architecture of the brain, which apparently consists of layered networks of interconnected neurons. They argue that this sort of architecture is unsuited to carrying out classical serial computations. For one thing, processing in the brain is typically massively parallel. In addition, the elements whose manipulation drives computation in connectionist networks (principally, the connections between nodes) are neither semantically compositional nor semantically evaluable, as they are on the classical approach. This contrast with classical computationalism is often characterized by saying that representation is, with respect to computation, distributed as opposed to local: representation is local if it is computationally basic, and distributed if it is not. (Another way of putting this is to say that for classicists mental representations are computationally atomic, whereas for connectionists they are not.)
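The local/distributed contrast can be pictured with toy activation vectors. The encodings below are invented for the example and carry no theoretical weight:

```python
# Illustrative contrast between local and distributed representation.
# The three-node vectors and their values are invented for this sketch.

# Local coding: each item gets its own computationally basic unit.
local = {
    "cat": [1, 0, 0],
    "dog": [0, 1, 0],
    "cup": [0, 0, 1],
}

# Distributed coding: each item is a pattern of activation over ALL
# nodes; no single node (and no proper part of a pattern) stands for it.
distributed = {
    "cat": [0.9, 0.2, 0.7],
    "dog": [0.8, 0.3, 0.6],
    "cup": [0.1, 0.9, 0.2],
}

def overlap(u, v):
    """Shared structure between two patterns (dot product)."""
    return sum(a * b for a, b in zip(u, v))

# Local codes make distinct items orthogonal, while distributed codes
# let similar items ("cat", "dog") share structure:
print(overlap(local["cat"], local["dog"]))              # 0
print(overlap(distributed["cat"], distributed["dog"]))  # ≈ 1.2
```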
Moreover, connectionists argue that information processing as it occurs in connectionist networks more closely resembles some features of actual human cognitive functioning. For example, whereas on the classical view learning involves something like hypothesis formation and testing (Fodor 1981c), on the connectionist model it is a matter of the evolving distribution of "weights" (connection strengths) on the connections between nodes, and typically does not involve the formulation of hypotheses regarding the identity conditions for the objects of knowledge. The connectionist network is "trained up" by repeated exposure to the objects it is to learn to distinguish; and, though networks typically require many more exposures to the objects than do humans, this seems to model at least one feature of this type of human learning quite well. (Cf. the sonar example in Churchland 1989.)
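The weight-adjustment style of learning can be sketched with the smallest possible network. A single-unit perceptron learning logical AND is my choice of illustration here, not an example from the text; the training data and learning rule are the textbook ones:

```python
# Minimal connectionist learning sketch: a one-unit network is "trained
# up" by repeated exposure, adjusting connection weights in response to
# error, without formulating any hypothesis about what AND is.
# The task (logical AND) and learning rate are illustrative choices.

inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]            # the distinction to be learned

w, b, rate = [0, 0], 0, 1         # connection weights, bias, step size

def out(x):
    """The unit fires iff its weighted input exceeds threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(10):               # repeated exposures to the objects
    for x, t in zip(inputs, targets):
        err = t - out(x)
        w[0] += rate * err * x[0] # strengthen/weaken connections
        w[1] += rate * err * x[1]
        b    += rate * err

print([out(x) for x in inputs])   # [0, 0, 0, 1]
```

Nothing in the trained state is a stored rule or hypothesis; the "knowledge" is wholly in the final weight distribution.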
Further, degradation in the performance of such networks in response to damage is gradual, not sudden as in the case of a classical information processor, and hence more accurately models the loss of human cognitive function as it typically occurs in response to brain damage. It is also sometimes claimed that connectionist systems show the kind of flexibility in response to novel situations typical of human cognition — situations in which classical systems are relatively "brittle" or "fragile."
Some philosophers have maintained that connectionism entails that there are no propositional attitudes. Ramsey, Stich and Garon (1990) have argued that if connectionist models of cognition are basically correct, then there are no discrete representational states as conceived in ordinary commonsense psychology and classical cognitive science. Others, however (e.g., Smolensky 1989), hold that certain types of higher-level patterns of activity in a neural network may be roughly identified with the representational states of commonsense psychology. Still others (e.g., Fodor & Pylyshyn 1988, Heil 1991, Horgan and Tienson 1996) argue that language-of-thought style representation is both necessary in general and realizable within connectionist architectures. (MacDonald & MacDonald 1995 collects the central contemporary papers in the classicist/connectionist debate, and provides useful introductory material as well. See also Von Eckardt 2004.)
Whereas Stich (1983) accepts that mental processes are computational, but denies that computations are sequences of mental representations, others accept the notion of mental representation, but deny that CTM provides the correct account of mental states and processes.
Van Gelder (1995) denies that psychological processes are computational. He argues that cognitive systems are dynamic, and that cognitive states are not relations to mental symbols, but quantifiable states of a complex system consisting of (in the case of human beings) a nervous system, a body and the environment in which they are embedded. Cognitive processes are not rule-governed sequences of discrete symbolic states, but continuous, evolving total states of dynamic systems determined by continuous, simultaneous and mutually determining states of the systems' components. Representation in a dynamic system is essentially information-theoretic, though the bearers of information are not symbols, but state variables or parameters. (See also Port and Van Gelder 1995; Clark 1997a, 1997b.)
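The contrast with discrete symbolic states can be pictured with a toy dynamical system. The equations and coupling constants below are invented solely for illustration; real dynamicist models (e.g., in Port and Van Gelder 1995) are far richer:

```python
# Toy dynamical-systems sketch: cognition modeled as the continuous,
# mutually determining coevolution of quantifiable state variables
# (here, one "agent" and one "environment" variable), not as
# rule-governed steps over discrete symbols. All constants are invented.

dt = 0.01                 # Euler integration step
agent, env = 1.0, 0.0     # state variables, not symbols

trajectory = []
for step in range(1000):  # simulate 10 time units
    # each variable's rate of change depends on both at once:
    d_agent = -0.5 * agent + 0.3 * env
    d_env   =  0.2 * agent - 0.4 * env
    agent  += d_agent * dt
    env    += d_env * dt
    trajectory.append((agent, env))

# The "cognitive state" at any instant is the total state (agent, env);
# explanation proceeds by analyzing the trajectory (here, a decay
# toward a fixed point), not by parsing symbol structures.
print(trajectory[-1])
```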
Horst (1996), on the other hand, argues that though computational models may be useful in scientific psychology, they are of no help in achieving a philosophical understanding of the intentionality of commonsense mental states. CTM attempts to reduce the intentionality of such states to the intentionality of the mental symbols they are relations to. But, Horst claims, the relevant notion of symbolic content is essentially bound up with the notions of convention and intention. So CTM involves itself in a vicious circularity: the very properties that are supposed to be reduced are (tacitly) appealed to in the reduction.
To say that a mental object has semantic properties is, paradigmatically, to say that it may be about, or be true or false of, an object or objects, or that it may be true or false simpliciter. Suppose I think that ocelots take snuff. I am thinking about ocelots, and if what I think of them (that they take snuff) is true of them, then my thought is true. According to RTM such states are to be explained as relations between agents and mental representations. To think that ocelots take snuff is to token in some way a mental representation whose content is that ocelots take snuff. On this view, the semantic properties of mental states are the semantic properties of the representations they are relations to.
Linguistic acts seem to share such properties with mental states. Suppose I say that ocelots take snuff. I am talking about ocelots, and if what I say of them (that they take snuff) is true of them, then my utterance is true. Now, to say that ocelots take snuff is (in part) to utter a sentence that means that ocelots take snuff. Many philosophers have thought that the semantic properties of linguistic expressions are inherited from the intentional mental states they are conventionally used to express (Grice 1957, Fodor 1978, Schiffer 1972/1988, Searle 1983). On this view, the semantic properties of linguistic expressions are the semantic properties of the representations that are the mental relata of the states they are conventionally used to express.
(Others, however, e.g., Davidson (1975, 1982), have suggested that the kind of thought human beings are capable of is not possible without language, so that the dependency might be reversed, or somehow mutual (see also Sellars 1956). (But see Martin 1987 for a defense of the claim that thought is possible without language. See also Chisholm and Sellars 1958.) Schiffer (1987) subsequently despaired of the success of what he calls "Intention Based Semantics.")
It is also widely held that in addition to having such properties as reference, truth-conditions and truth — so-called extensional properties — expressions of natural languages also have intensional properties, in virtue of expressing properties or propositions — i.e., in virtue of having meanings or senses, where two expressions may have the same reference, truth-conditions or truth value, yet express different properties or propositions (Frege 1892/1997). If the semantic properties of natural-language expressions are inherited from the thoughts and concepts they express (or vice versa, or both), then an analogous distinction may be appropriate for mental representations.
- Almog, J., Perry, J. and Wettstein, H., eds. (1989), Themes from Kaplan, New York: Oxford University Press.
- Aristotle, (1984), De Anima, in The Complete Works of Aristotle: The Revised Oxford Translation, Oxford: Oxford University Press.
- Baker, L. R. (1995), Explaining Attitudes: A Practical Approach to the Mind, Cambridge: Cambridge University Press.
- Ballard, D.H. (1986), "Cortical Connections and Parallel Processing: Structure and Function," The Behavioral and Brain Sciences 9: 67-120.
- Ballard, D.H. and Hayes, P.J. (1984), "Parallel Logical Inference," Proceedings of the Sixth Annual Conference of the Cognitive Science Society, Rochester, NY.
- Beaney, M. ed., (1997), The Frege Reader, Oxford: Blackwell Publishers.
- Berkeley, G. (1975), Principles of Human Knowledge, in M.R. Ayers, ed., Berkeley: Philosophical Writings, London: Dent.
- Block, N. ed., (1981), Readings in Philosophy of Psychology, Vol. 2, Cambridge, Mass.: Harvard University Press.
- Block, N. ed., (1982), Imagery, Cambridge, Mass.: The MIT Press.
- Block, N. (1983), "Mental Pictures and Cognitive Science," Philosophical Review 92: 499-542.
- Block, N. (1986), "Advertisement for a Semantics for Psychology," in P.A. French, T.E. Uehling and H.K. Wettstein, eds., Midwest Studies in Philosophy, Vol. X, Minneapolis: University of Minnesota Press: 615-678.
- Block, N. (1996), "Mental Paint and Mental Latex," in E. Villanueva, ed., Philosophical Issues, 7: Perception: 19-49.
- Block, N. (2003), "Mental Paint," in M. Hahn and B. Ramberg, eds., Reflections and Replies: Essays on the Philosophy of Tyler Burge, Cambridge, Mass.: The MIT Press.
- Boghossian, P. A. (1995), "Content," in J. Kim and E. Sosa, eds., A Companion to Metaphysics, Oxford: Blackwell Publishers Ltd.: 94-96.
- Brandom, R. (2002), "Non-inferential Knowledge, Perceptual Experience, and Secondary Qualities: Placing McDowell's Empiricism," in N.H. Smith, ed., Reading McDowell: On Mind and World, London: Routledge.
- Burge, T. (1979), "Individualism and the Mental," in P.A. French, T.E. Uehling and H.K.Wettstein, eds., Midwest Studies in Philosophy, Vol. IV, Minneapolis: University of Minnesota Press: 73-121.
- Burge, T. (1986a), "Individualism and Psychology," Philosophical Review 95: 3-45.
- Burge, T. (1986b), "Intellectual Norms and Foundations of Mind," The Journal of Philosophy 83: 697-720.
- Chalmers, D. (1993), "Connectionism and Compositionality: Why Fodor and Pylyshyn Were Wrong," Philosophical Psychology 6: 305-319.
- Chalmers, D. (1996), The Conscious Mind, New York: Oxford University Press.
- Chalmers, D. (2003), "The Content and Epistemology of Phenomenal Belief," in Q. Smith & A. Jokic, eds., Consciousness: New Philosophical Perspectives, Oxford: Oxford University Press: 220-272.
- Chalmers, D. (2004), "The Representational Character of Experience," in B. Leiter, ed., The Future for Philosophy, Oxford: Oxford University Press: 153-181.
- Chisholm, R. and Sellars, W. (1958), "The Chisholm-Sellars Correspondence on Intentionality," in H. Feigl, M. Scriven and G. Maxwell, eds., Minnesota Studies in the Philosophy of Science, Vol. II, Minneapolis: University of Minnesota Press: 529-539.
- Chomsky, N. (1965), Aspects of the Theory of Syntax, Cambridge, Mass.: The MIT Press.
- Churchland, P.M. (1981), "Eliminative Materialism and the Propositional Attitudes," Journal of Philosophy 78: 67-90.
- Churchland, P.M. (1989), "On the Nature of Theories: A Neurocomputational Perspective," in W. Savage, ed., Scientific Theories: Minnesota Studies in the Philosophy of Science, Vol. 14, Minneapolis: University of Minnesota Press: 59-101.
- Clark, A. (1997a), "The Dynamical Challenge," Cognitive Science 21: 461-481.
- Clark, A. (1997b), Being There: Putting Brain, Body and World Together Again, Cambridge, MA: The MIT Press.
- Collins, A. (1987), The Nature of Mental Things, Notre Dame: Notre Dame University Press.
- Crane, T. (1995), The Mechanical Mind, London: Penguin Books Ltd.
- Davidson, D. (1973), "Radical Interpretation," Dialectica 27: 313-328.
- Davidson, D. (1974), "Belief and the Basis of Meaning," Synthese 27: 309-323.
- Davidson, D. (1975), "Thought and Talk," in S. Guttenplan, ed., Mind and Language, Oxford: Clarendon Press: 7-23.
- Davidson, D. (1982), "Rational Animals," Dialectica 36: 317-327.
- Dennett, D. (1969), Content and Consciousness, London: Routledge & Kegan Paul.
- Dennett, D. (1981), "The Nature of Images and the Introspective Trap," pages 132-141 of Dennett 1969, reprinted in Block 1981: 128-134.
- Dennett, D. (1987), The Intentional Stance, Cambridge, Mass.: The MIT Press.
- Dennett, D. (1987a), "True Believers: The Intentional Strategy and Why it Works," in Dennett 1987: 13-35.
- Dennett, D. (1987b), "Reflections: Real Patterns, Deeper Facts, and Empty Questions," in Dennett 1987: 37-42.
- Dennett, D. (1988), "Quining Qualia," in A.J. Marcel and E. Bisiach, eds., Consciousness in Contemporary Science, Oxford: Clarendon Press: 42-77.
- Dennett, D. (1991), "Real Patterns," The Journal of Philosophy 88: 27-51.
- Devitt, M. (1996), Coming to Our Senses: A Naturalistic Program for Semantic Localism, Cambridge: Cambridge University Press.
- Dretske, F. (1969), Seeing and Knowing, Chicago: The University of Chicago Press.
- Dretske, F. (1981), Knowledge and the Flow of Information, Cambridge, Mass.: The MIT Press.
- Dretske, F. (1988), Explaining Behavior: Reasons in a World of Causes, Cambridge, Mass.: The MIT Press.
- Dretske, F. (1995), Naturalizing the Mind, Cambridge, Mass.: The MIT Press.
- Dretske, F. (1998), "Minds, Machines, and Money: What Really Explains Behavior," in J. Bransen and S. Cuypers, eds., Human Action, Deliberation and Causation, Philosophical Studies Series 77, Dordrecht: Kluwer Academic Publishers. Reprinted in Dretske 2000.
- Dretske, F. (2000), Perception, Knowledge and Belief, Cambridge: Cambridge University Press.
- Evans, G. (1982), The Varieties of Reference, Oxford: Oxford University Press.
- Field, H. (1978), "Mental representation," Erkenntnis 13: 9-61.
- Flanagan, O. (1992), Consciousness Reconsidered, Cambridge, Mass.: The MIT Press.
- Fodor, J.A. (1975), The Language of Thought, Cambridge, Mass.: Harvard University Press.
- Fodor, J.A. (1978), "Propositional Attitudes," The Monist 61: 501-523.
- Fodor, J.A. (1981), Representations, Cambridge, Mass.: The MIT Press.
- Fodor, J.A. (1981a), "Introduction," in Fodor 1981: 1-31.
- Fodor, J.A. (1981b), "Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology," in Fodor 1981: 225-253.
- Fodor, J.A. (1981c), "The Present Status of the Innateness Controversy," in Fodor 1981: 257-316.
- Fodor, J.A. (1982), "Cognitive Science and the Twin-Earth Problem," Notre Dame Journal of Formal Logic 23: 98-118.
- Fodor, J.A. (1987), Psychosemantics, Cambridge, Mass.: The MIT Press.
- Fodor, J.A. (1990a), A Theory of Content and Other Essays, Cambridge, Mass.: The MIT Press.
- Fodor, J.A. (1990b), "Psychosemantics or: Where Do Truth Conditions Come From?" in W.G. Lycan, ed., Mind and Cognition: A Reader, Oxford: Blackwell Publishers: 312-337.
- Fodor, J.A. (1994), The Elm and the Expert, Cambridge, Mass.: The MIT Press.
- Fodor, J.A. (1998), Concepts: Where Cognitive Science Went Wrong, Oxford: Oxford University Press.
- Fodor, J.A. and Pylyshyn, Z. (1981), "How Direct is Visual Perception?: Some Reflections on Gibson's 'Ecological Approach'," Cognition 9: 207-246.
- Fodor, J.A. and Pylyshyn, Z. (1988), "Connectionism and Cognitive Architecture: A Critical Analysis," Cognition 28: 3-71.
- Frege, G. (1884), The Foundations of Arithmetic, trans. J.L. Austin, New York: Philosophical Library (1954).
- Frege, G. (1892), "On Sinn and Bedeutung," in Beaney 1997: 151-171.
- Frege, G. (1918), "Thought," in Beaney 1997: 325-345.
- Geach, P. (1957), Mental Acts: Their Content and Their Objects, London: Routledge & Kegan Paul.
- Gibson, J.J. (1966), The senses considered as perceptual systems, Boston: Houghton Mifflin.
- Gibson, J.J. (1979), The ecological approach to visual perception, Boston: Houghton Mifflin.
- Goldman, A. (1993), "The Psychology of Folk Psychology," Behavioral and Brain Sciences 16: 15-28.
- Goodman, N. (1976), Languages of Art (2nd ed.), Indianapolis: Hackett.
- Grice, H.P. (1957), "Meaning," Philosophical Review, 66: 377-388; reprinted in Studies in the Way of Words, Cambridge, Mass.: Harvard University Press (1989): 213-223.
- Gunther, Y.H. (ed.) (2003), Essays on Nonconceptual Content, Cambridge, Mass.: The MIT Press.
- Harman, G. (1973), Thought, Princeton: Princeton University Press.
- Harman, G. (1987), "(Non-Solipsistic) Conceptual Role Semantics," in E. Lepore, ed., New Directions in Semantics, London: Academic Press: 55-81.
- Harman, G. (1990), "The Intrinsic Quality of Experience," in J. Tomberlin, ed., Philosophical Perspectives 4: Action Theory and Philosophy of Mind, Atascadero: Ridgeview Publishing Company: 31-52.
- Harnish, R. (2002), Minds, Brains, Computers, Malden, Mass.: Blackwell Publishers Inc.
- Haugeland, J. (1981), "Analog and analog," Philosophical Topics 12: 213-226.
- Heil, J. (1991), "Being Indiscrete," in J. Greenwood, ed., The Future of Folk Psychology, Cambridge: Cambridge University Press: 120-134.
- Horgan, T. and Tienson, J. (1996), Connectionism and the Philosophy of Psychology, Cambridge, Mass: The MIT Press.
- Horgan, T. and Tienson, J. (2003), "The Intentionality of Phenomenology and the Phenomenology of Intentionality," in D.J. Chalmers, ed., Philosophy of Mind, Oxford: Oxford University Press.
- Horst, S. (1996), Symbols, Computation, and Intentionality, Berkeley: University of California Press.
- Hume, D. (1739), A Treatise of Human Nature, ed. L.A. Selby-Bigg, rev. P.H. Nidditch, Oxford: Oxford University Press (1978).
- Jackendoff, R. (1987), Consciousness and the Computational Mind, Cambridge, Mass.: The MIT Press.
- Jackson, F. (1982), "Epiphenomenal Qualia," Philosophical Quarterly 32: 127-136.
- Jackson, F. (1986), "What Mary Didn't Know," Journal of Philosophy 83: 291-295.
- Johnson-Laird, P.N. and Wason, P.C. (1977), Thinking: Readings in Cognitive Science, Cambridge: Cambridge University Press.
- Johnson-Laird, P.N. (1983), Mental Models, Cambridge, Mass.: Harvard University Press.
- Kaplan, D. (1989), "Demonstratives," in Almog, Perry and Wettstein 1989: 481-614.
- Kosslyn, S.M. (1980), Image and Mind, Cambridge, Mass.: Harvard University Press.
- Kosslyn, S.M. (1982), "The Medium and the Message in Mental Imagery," in Block 1982: 207-246.
- Kosslyn, S. (1983), Ghosts in the Mind's Machine, New York: W.W. Norton & Co.
- Kosslyn, S.M. and Pomerantz, J.R. (1977), "Imagery, Propositions, and the Form of Internal Representations," Cognitive Psychology 9: 52-76.
- Leeds, S. (1993), "Qualia, Awareness, Sellars," Noûs XXVII: 303-329.
- Lerdahl, F. and Jackendoff, R. (1983), A Generative Theory of Tonal Music, Cambridge, Mass.: The MIT Press.
- Levine, J. (1993), "On Leaving Out What It's Like," in M. Davies and G. Humphreys, eds., Consciousness, Oxford: Blackwell Publishers: 121-136.
- Levine, J. (1995), "On What It Is Like to Grasp a Concept," in E. Villanueva, ed., Philosophical Issues 6: Content, Atascadero: Ridgeview Publishing Company: 38-43.
- Levine, J. (2001), Purple Haze, Oxford: Oxford University Press.
- Lewis, D. (1971), "Analog and Digital," Noûs 5: 321-328.
- Lewis, D. (1974), "Radical Interpretation," Synthese 27: 331-344.
- Loar, B. (1981), Mind and Meaning, Cambridge: Cambridge University Press.
- Loar, B. (1996), "Phenomenal States" (Revised Version), in N. Block, O. Flanagan and G. Güzeldere, eds., The Nature of Consciousness, Cambridge, Mass.: The MIT Press: 597-616.
- Loar, B. (2003a), "Transparent Experience and the Availability of Qualia," in Q. Smith and A. Jokic, eds., Consciousness: New Philosophical Perspectives, Oxford: Clarendon Press: 77-96.
- Loar, B. (2003b), "Phenomenal Intentionality as the Basis of Mental Content," in M. Hahn and B. Ramberg, eds., Reflections and Replies: Essays on the Philosophy of Tyler Burge, Cambridge, Mass.: The MIT Press.
- Locke, J. (1689), An Essay Concerning Human Understanding, ed. P.H. Nidditch, Oxford: Oxford University Press (1975).
- Lycan, W.G. (1987), Consciousness, Cambridge, Mass.: The MIT Press.
- Lycan, W.G. (1996), Consciousness and Experience, Cambridge, Mass.: The MIT Press.
- MacDonald, C. and MacDonald, G. (1995), Connectionism: Debates on Psychological Explanation, Oxford: Blackwell Publishers.
- Marr, D. (1982), Vision, New York: W.H. Freeman and Company.
- Martin, C.B. (1987), "Proto-Language," Australasian Journal of Philosophy 65: 277-289.
- McCulloch, W.S. and Pitts, W. (1943), "A Logical Calculus of the Ideas Immanent in Nervous Activity," Bulletin of Mathematical Biophysics 5: 115-133.
- McDowell, J. (1994), Mind and World, Cambridge, Mass.: Harvard University Press.
- McGinn, C. (1977), "Charity, Interpretation, and Belief," Journal of Philosophy 74:
- McGinn, C. (1982), "The Structure of Content," in A. Woodfield, ed., Thought and Content, Oxford: Oxford University Press: 207-258.
- McGinn, C. (1989), Mental Content, Oxford: Blackwell Publishers.
- McGinn, C. (1991), The Problem of Consciousness, Oxford: Blackwell Publishers.
- McGinn, C. (1991a), "Content and Consciousness," in McGinn 1991: 23-43.
- McGinn, C. (1991b), "Can We Solve the Mind-Body Problem?" in McGinn 1991: 1-22.
- Millikan, R. (1984), Language, Thought and other Biological Categories, Cambridge, Mass.: The MIT Press.
- Minsky, M. (1974), "A Framework for Representing Knowledge," MIT-AI Laboratory Memo 306, June 1974. (A shorter version appears in J. Haugeland, ed., Mind Design II, Cambridge, Mass.: The MIT Press (1997).)
- Nagel, T. (1974), "What Is It Like to Be a Bat?" Philosophical Review 83: 435-450.
- Newell, A. and Simon, H.A. (1972), Human Problem Solving, Englewood Cliffs, NJ: Prentice-Hall.
- Newell, A. and Simon, H.A. (1976), "Computer Science as Empirical Inquiry: Symbols and Search," Communications of the Association for Computing Machinery 19: 113-126.
- Osherson, D.N., Kosslyn, S.M. and Hollerbach, J.M. (1990), Visual Cognition and Action: An Invitation to Cognitive Science, Vol. 2, Cambridge, Mass.: The MIT Press.
- Papineau, D. (1987), Reality and Representation, Oxford: Blackwell Publishers.
- Peacocke, C. (1983), Sense and Content, Oxford: Clarendon Press.
- Peacocke, C. (1989), "Perceptual Content," in Almog, Perry and Wettstein 1989: 297-329.
- Peacocke, C. (1992), "Scenarios, Concepts and Perception," in T. Crane, ed., The Contents of Experience, Cambridge: Cambridge University Press: 105-35.
- Peacocke, C. (2001), "Does Perception Have a Nonconceptual Content?" Journal of Philosophy 98: 239-264.
- Pinker, S. (1989), Learnability and Cognition, Cambridge, Mass.: The MIT Press.
- Pitt, D. (2004), "The Phenomenology of Cognition, Or, What Is it Like to Think That P?" Philosophy and Phenomenological Research 69: 1-36.
- Port, R., and Van Gelder, T. (1995), Mind as Motion: Explorations in the Dynamics of Cognition, Cambridge, Mass.: The MIT Press.
- Putnam, H. (1975), "The Meaning of 'Meaning'," in Philosophical Papers, Vol. 2, Cambridge: Cambridge University Press: 215-71.
- Pylyshyn, Z. (1979), "The Rate of 'Mental Rotation' of Images: A Test of a Holistic Analogue Hypothesis," Memory and Cognition, 7: 19-28.
- Pylyshyn, Z. (1981a), "Imagery and Artificial Intelligence," in Block 1981: 170-194.
- Pylyshyn, Z. (1981b), "The Imagery Debate: Analog Media versus Tacit Knowledge," Psychological Review 88: 16-45.
- Pylyshyn, Z. (1984), Computation and Cognition, Cambridge, Mass.: The MIT Press.
- Pylyshyn, Z. (2003), Seeing and Visualizing: It's Not What You Think, Cambridge, Mass.: The MIT Press.
- Raffman, D. (1995), "The Persistence of Phenomenology," in T. Metzinger, ed., Conscious Experience, Paderborn: Schöningh/Imprint Academic: 293-308.
- Ramsey, W., Stich, S. and Garon, J. (1990), "Connectionism, Eliminativism and the Future of Folk Psychology," Philosophical Perspectives 4: 499-533.
- Reid, T. (1764), An Inquiry into the Human Mind, D.R. Brookes, ed., Edinburgh: Edinburgh University Press (1997).
- Rey, G. (1981), "Introduction: What Are Mental Images?" in Block 1981: 117-127.
- Rey, G. (1991), "Sensations in a Language of Thought," in E. Villaneuva, ed., Philosophical Issues 1: Consciousness, Atascadero: Ridgeview Publishing Company: 73-112.
- Rumelhart, D.E. (1989), "The Architecture of the Mind: A Connectionist Approach," in M.I. Posner, ed., Foundations of Cognitive Science, Cambridge, Mass.: The MIT Press: 133-159.
- Rumelhart, D.E. and McClelland, J.L. (1986), Parallel Distributed Processing, Vol. I, Cambridge, Mass.: The MIT Press.
- Schiffer, S. (1987), Remnants of Meaning, Cambridge, Mass.: The MIT Press.
- Schiffer, S. (1988), "Introduction to the Paperback Edition," in Meaning, Oxford: Clarendon Press (1972/1988): xi-xxix.
- Searle, J.R. (1980), "Minds, Brains, and Programs," Behavioral and Brain Sciences 3: 417-424.
- Searle, J.R. (1983), Intentionality, Cambridge: Cambridge University Press.
- Searle, J.R. (1984), Minds, Brains, and Science, Cambridge, Mass.: Harvard University Press.
- Searle, J.R. (1992), The Rediscovery of the Mind, Cambridge, Mass.: The MIT Press.
- Sellars, W. (1956), "Empiricism and the Philosophy of Mind," in H. Feigl and M. Scriven, eds., Minnesota Studies in the Philosophy of Science, Vol. I, Minneapolis: University of Minnesota Press: 253-329.
- Shepard, R.N. and Cooper, L. (1982), Mental Images and their Transformations, Cambridge, Mass.: The MIT Press.
- Shoemaker, S. (1990), "Qualities and Qualia: What's in the Mind?" Philosophy and Phenomenological Research 50: 109-31.
- Siewert, C. (1998), The Significance of Consciousness, Princeton: Princeton University Press.
- Smolensky, P. (1988), "On the Proper Treatment of Connectionism," Behavioral and Brain Sciences 11: 1-74.
- Smolensky, P. (1989), "Connectionist Modeling: Neural Computation/Mental Connections," in L. Nadel, L.A. Cooper, P. Culicover and R.M. Harnish, eds., Neural Connections, Mental Computation, Cambridge, Mass.: The MIT Press: 49-67.
- Smolensky, P. (1991), "Connectionism and the Language of Thought," in B. Loewer and G. Rey, eds., Meaning in Mind: Fodor and His Critics, Oxford: Basil Blackwell Ltd.: 201-227.
- Sterelny, K. (1989), "Fodor's Nativism," Philosophical Studies 55: 119-141.
- Stich, S. (1983), From Folk Psychology to Cognitive Science, Cambridge, Mass.: The MIT Press.
- Stich, S. (1996), Deconstructing the Mind, New York: Oxford University Press.
- Strawson, G. (1994), Mental Reality, Cambridge, Mass.: The MIT Press.
- Thau, M. (2002), Consciousness and Cognition, Oxford: Oxford University Press.
- Turing, A. (1950), "Computing Machinery and Intelligence," Mind 59: 433-60.
- Tye, M. (1991), The Imagery Debate, Cambridge, Mass.: The MIT Press.
- Tye, M. (1995), Ten Problems of Consciousness, Cambridge, Mass.: The MIT Press.
- Tye, M. (2000), Consciousness, Color, and Content, Cambridge, Mass.: The MIT Press.
- Van Gelder, T. (1995), "What Might Cognition Be, if not Computation?" Journal of Philosophy 92: 345-381.
- Von Eckardt, B. (1993), What Is Cognitive Science?, Cambridge, Mass.: The MIT Press.
- Von Eckardt, B. (forthcoming, June 2004), "Connectionism and the Propositional Attitudes," in C.E. Erneling and D.M. Johnson, eds., The Mind as a Scientific Object: Between Brain and Culture, Oxford: Oxford University Press.
- Wittgenstein, L. (1953), Philosophical Investigations, trans. G.E.M. Anscombe, Oxford: Blackwell Publishers.
- A Field Guide to the Philosophy of Mind, General editors, Marco Nani and Massimo Marraffa.
- Cogprints, Cognitive Sciences Eprint Archive, University of Southampton.
- Annotated Bibliography of Contemporary Philosophy of Mind, maintained by David Chalmers, Australian National University.
- Dictionary of Philosophy of Mind, Editor, Chris Eliasmith, Washington University in St. Louis.
- DON'T PANIC: Tye's Intentionalist Theory of Consciousness, by Alex Byrne, from A Field Guide to the Philosophy of Mind.
artificial intelligence | cognitive science | concepts | connectionism | consciousness: and intentionality | consciousness: representational theories of | folk psychology: as mental simulation | information: semantic conceptions of | intentionality | language of thought hypothesis | materialism: eliminative | mental content: causal theories of | mental content: externalism about | mental content: narrow | mental content: nonconceptual | mental content: teleological theories of | mental imagery | mental representation: in medieval philosophy | mind: computational theory of | neuroscience, philosophy of | perception: the problem of | qualia | reference
Thanks to Brad Armour-Garb, Mark Balaguer, Dave Chalmers, Jim Garson, John Heil, Jeff Poland, Bill Robinson, Galen Strawson, Adam Vinueza and (especially) Barbara Von Eckardt for comments on earlier versions of this entry.