Wednesday, October 15, 2014

Constructivism and Eliminative Materialism

A few months ago Fred M Beshears and I discussed Constructivism and Materialism. He compiled the conversation and posted it here.

Stephen Downes

We don't reason over perceptions or construct meaning, etc. - there's no mechanism to do that - rather, we gradually become better recognizers

The basic constructivist premise (and I mean constructivists generally, not just those working in education) is that learning and discovery proceed by the creation of models or representations of reality, and then carrying out operations on these representations. Usually these representations are created using a symbol system - language, mathematics, universal grammar, etc - composed of signs and rules for manipulation. We create meaning or sense in these representations by means of a semiotic system - a way of assigning meaning to individual symbols, phrases, groups of symbols, or entire models, sometimes by reference, sometimes by coherence, etc. (note that there are *many* different variations on this common theme). These representations are easy to find in the world - we can see instances of language and mathematics, for example, in any book. But the theory argues that we *also* have these systems in our minds - that we actually reason in our heads by means of these representations, and hence that learning means constructing these representations and assigning meaning to their symbolic entities. Cf. for example the 'physical symbol system' hypothesis. What I am arguing is that this position is wrong. That even *if* we construct representations in our mind, there is no distinct entity over and above the representation that does the constructing, manipulation, or sense-making. Therefore, we do *not* learn in this way.

Fred M Beshears

 It depends on which level of description works best for the problem at hand. To describe the workings of a computer you could pick from the following: logic gates, machine language, assembly language, a high level programming language (e.g. Lisp), or the user interface.
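The analogy can be made concrete with a toy sketch (a hypothetical illustration, not part of the original exchange): the same addition can be described at the "logic gate" level, using only bitwise operations, or at the high-level-language level, using the + operator.

```python
def add_gate_level(a: int, b: int) -> int:
    """Add two non-negative integers using only bitwise 'gates':
    XOR produces the sum bits, AND plus a shift produces the carries."""
    while b != 0:
        carry = (a & b) << 1  # AND gate: positions that generate a carry
        a = a ^ b             # XOR gate: sum without the carries
        b = carry             # ripple the carry until none remains
    return a

# Both descriptions pick out the same underlying process:
assert add_gate_level(19, 23) == 19 + 23
```

Neither description is more "true" than the other; they differ in what they make visible.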

Similarly with humans, you can pick from the following: an individual neuron, a group of neurons (e.g. Kurzweil claims that it takes an average of 100 neurons in the cerebral cortex to form a pattern recognizer), or groups of pattern recognizers (Kurzweil claims there are around 300 million pattern recognizers in the cerebral cortex), or one's 1st person account of one's stream of consciousness (which for many of us comes in the form of a sequence of words).

Of course, animal consciousness is probably very different from human. And, with computers, we know how to map from one level of description to another. But, with biologically evolved brains, we still have a long way to go before we've completely reverse engineered the brain.

As for educators, I don't know if there is one "correct" level of description. Some may prefer folk psychology, others neuroscience.

Stephen Downes

Not all descriptions are simply 'levels of description'. Some are simply wrong and should be eliminated from the discourse. For example, discussion of 'phlogiston' was not just some level of description, it was just wrong. If you say that your computer has a soul, it's not just a level of description, it's wrong. And when educators use 'folk levels of description', they should be aware that their discourse is no more reliable than phrenology or reading the Tarot. I don't think that being an educator is a license to use whatever terminology and ontology they please.

Fred M Beshears

There may be some eliminative materialists out there who have jumped the gun and are now starting to claim that folk psychology has been successfully eliminated by neuroscience! Eliminative materialism may someday turn out to be right, but it would be very wrong for them to claim that they have proved their case as of today - especially if they try to do so by simply making selective references to physics.

Some folk theories of science - such as phlogiston - have been eliminated and have been consigned to the history books. And, some eliminative materialists working in the fields of cognitive psychology and neuroscience selectively refer to these examples to bolster their case. In the case of cognition, the eliminative materialist believes that since this has happened in some cases in physics it will someday happen in cognitive psychology, too. In other words, the eliminative materialist believes that neuroscience will someday eliminate folk psychology; it will not simply match up with folk psychology categories.

But, there are at least two other materialist schools of thought that we should also consider: reductive materialism and functionalism.

Reductive materialists (aka identity theorists) believe that someday folk psychology will be reduced to neuroscience. In other words, they believe that each mental state of folk psychology will be found to be in a 1-to-1 relationship with physical states of the brain. They, too, try to support their theory by making selective references to physics. So, the identity theorist will say that in the case of sound we know that as a train compresses air it creates sound waves, and that high-pitched sounds are the property of having a high frequency of oscillating waves in air. We later learned that light was an electromagnetic wave and that the color of an object is related to the reflective efficiencies of the object, much like a musical chord. But the notes in the case of light are electromagnetic waves. So some reductive materialists in the field of cognitive psychology (like some eliminative materialists) use selective references to physics to bolster their case - i.e. that someday there will be an intertheoretic reduction between folk psychology and neuroscience.

Functionalism is yet another form of materialism. According to Paul Churchland, the functionalist believes the "essential or defining feature of any type of mental state is the set of causal relationships it bears to (1) environmental effects on the body, (2) other types of mental states, and (3) bodily behavior." (p. 63 of Matter and Consciousness 2013) Unlike the behaviorist who wants to define mental states solely in terms of inputs from the environment and behavioral outputs, the functionalist believes that mental states involve an ineliminable reference to a wide variety of other mental states, which makes the behaviorist game plan impossible. Functionalists are at odds with reductive materialists, too. So, the functionalist would argue that a computer or an alien from another planet could have the same mental states that humans do (e.g. pain, fear, hope) even though they implement these mental states in a different physical substrate. According to Paul Churchland: "This provides a rationale for a great deal of work in cognitive psychology and artificial intelligence, where researchers postulate a system of abstract functional states and then test the postulated system, often by way of its computer simulation, against human behavior in similar circumstances."

But there are arguments against functionalism, too. For example, many functionalist AI researchers try to model thought as "an internal dance of sentence-like states, a dance that respects the various inferential relations holding among [propositions]" (p. 80 of Matter and Consciousness). But although humans do have a command of language, and most 1st person accounts of human thought do involve language, there are obviously other creatures with brains that do not.

Churchland provides a very balanced presentation of these three perspectives. And, he doesn't try to make the case that eliminative materialism has triumphed over the other two perspectives by simply making selective reference to the cases in physics that support his view (which is a moderate form of eliminative materialism).


Stephen Downes

Hiya Fred, I am of course familiar with identity theory and functionalism (you can see references in my latest presentation) and I am of course familiar with Paul Churchland. Your overview is quite correct as a broad account of some major recent theories in the philosophy of mind.

Now of course I am not going to claim to have defended eliminative materialism in one or two paragraphs (or even in my recent talk, in which I discuss both identity theory and various forms of functionalism, as well as Thomas Nagel and the cognitivist position of people like Fodor and Pylyshyn).

My response to you was to indicate that folk psychology is not automatically correct, and that something akin to Dennett's 'intentional stance' might not be reasonable if in fact the position I argue for is correct. Indeed, I think that folk psychology is deeply flawed (cf. Stephen Stich, 'From Folk Psychology to Cognitive Science'). In particular, if the claims made by folk psychology (and for that matter constructivism) are literally true, then we descend into nonsense and contradiction.

But what I would also like to be clear about is that in this case, as in all cases, I am explaining my line of reasoning. This is where my thoughts have led me. I'm pretty sure I'm right, but I don't expect anyone to be swayed by my arguments (this makes me quite unlike most theorists in education). I develop learning systems based on my theories, and if they work, that is my argument.


Fred M Beshears

Thank you Stephen. If the eliminative materialists are correct and neuroscience advances to the point where we can replace folk psychology, then the effect on education may be far more radical than even the cMOOC.

In anticipation of this coming revolution, I've started writing a book. It will be called:

What Color Are Your Neural Implants? A Guide for Post-Singularity Job Seekers.

Monday, October 13, 2014

Positivism and Big Data

This is an outcome of a conversation with Rita Kop regarding the article The View from Nowhere.

She writes, "data scientists use their quantitative measure, as positivists do, by putting a large number veneer over their research."

This misrepresents positivism, just as Nathan Jurgenson does in his original article.

The core of positivism is that all knowledge is derived from experience. Its central tenet is that there is a knowable set of observation statements which constitutes the totality of experience. The number of sentences doesn’t particularly matter; in some cases (e.g. in Popper’s falsifiability theory) even one such sentence will be significant.

Quine’s “Two Dogmas of Empiricism” most clearly states the core of positivism in the process of attacking it. The two dogmas are:

  1. Reductionism – that all theoretical statements can be reduced to a set of observation statements (by means of self-evident logical principles);
  2. The analytic-synthetic distinction – this is the idea that observation statements can be clearly and completely distinguished from theory, which, of course, turns out not to be true (because we have ‘theory-laden data’).

We can apply these principles to big data analytics, of course, and we can use the standard criticisms of positivism to do it:

  1. Underdetermination – this is the ‘problem of induction’ or the ‘problem of confirmation’. Theoretical statements cannot be deduced from observation statements, hence, we rely on induction, however, observational data underdetermines theory – for any given set of observation statements, an infinite set of theoretical statements is consistent with each statement equally well confirmed;
  2. Observer bias – the language we employ in order to make observation statements must exist prior to the making of the statement, and hence, adds an element of theory to the statement. This language is typically a product of the culture of the investigator, hence, language introduces cultural bias.

To Quine’s two objections I am inclined to add a third: logicism. This core element of logical positivism in particular generally escapes challenge. It is essentially the idea that the relation between data and theory can be expressed logically, that is, in systems composed of statements and inferences.

It’s not that data scientists put a ‘veneer’ over their research by virtue of large numbers. Rather, it is that, by virtue of following positivism’s basic tenets, they subscribe to one of positivism’s core principles: a difference that does not make (an observable) difference is no difference at all. The contemporary version is “You can’t manage what you can’t measure.”

Flip this, and you get the assertive statement: if there is anything to be found, it will be found in the data. If there is any improvement to process or method that can be made, it will result in a change in the data. It is this belief that lends an air of inevitability to big data.

We can employ Quine’s objections to show how the data scientists’ beliefs are false.
  1. The same data will confirm any number of theories. It is not literally true that you can make statistics say anything you want, but even when subscribing to Carnap’s requirement (that the totality of the evidence be considered; no cherry-picking) you can make statistics say many different things. This will be true no matter how much data there is.
  2. The collection of the data will presuppose the theory (or elements of the theory) it is intended to represent (and, very often, to prove). I’ve often stated, as a way to express this principle, that you only see what you’re looking for.
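Underdetermination is easy to demonstrate in miniature. The sketch below (a hypothetical illustration with invented numbers) shows two "theories" that agree perfectly on every observation collected so far, yet diverge on unobserved cases - so the data alone cannot decide between them.

```python
# All observations collected to date: (x, y) pairs.
data = [(0, 0), (1, 1), (2, 2)]

# Theory A: "the quantity grows linearly."
theory_a = lambda x: x

# Theory B: agrees with Theory A on every data point, but adds a cubic
# term that vanishes exactly at the observed values of x.
theory_b = lambda x: x + x * (x - 1) * (x - 2)

# Both theories fit the evidence perfectly...
assert all(theory_a(x) == y for x, y in data)
assert all(theory_b(x) == y for x, y in data)

# ...but they disagree about the unobserved case x = 3 (3 vs. 9).
assert theory_a(3) != theory_b(3)
```

Adding more data points does not escape the problem: for any finite data set, infinitely many such rival theories remain.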
In my own epistemology, though I remain unreservedly empiricist, I have abandoned Quine’s ‘logical point of view’. In particular, I propose two major things:

  1. Theories (and abstractions in general) are not generated through a process of induction, but rather through a process of subtraction. These are not inferences to be drawn from observations, but rather merely ways of looking at things. For example, you can see a tiger in front of you, but if you wish, you can ignore most of the detail and focus simply on the teeth, in which case we've generalized it to "a thing with teeth".
  2. Neither observations nor theories are neutral (nor indeed is there any meaningful way of distinguishing the two (which is why I don’t care whether connectivism is a theory)). Rather, any observation is experienced in the presence of the already-existing effects of previous observations, which is the basis for the phenomenon of ‘recognition’, which in turn is the basis for knowledge.
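The first point, abstraction as subtraction, can be sketched in a few lines (a toy illustration; the features are invented): the "theory" of the tiger is reached not by inferring anything new, but by discarding most of what is perceived.

```python
# A perception, represented crudely as a bundle of features.
perception = {"stripes": True, "teeth": "sharp", "size": "large",
              "color": "orange", "moving": True}

def subtract(percept, keep):
    """Abstract by discarding every feature except those in `keep`."""
    return {k: v for k, v in percept.items() if k in keep}

# Ignoring everything but the teeth yields "a thing with teeth".
thing_with_teeth = subtract(perception, {"teeth"})
assert thing_with_teeth == {"teeth": "sharp"}
```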
These constitute a consistent empirical epistemology, however, they are in important ways inconsistent with the core tenets of big data analytics (but that said, this depends a lot on how the analytics are carried out).

In particular, it conflicts with the idea that you can take one large set of data, representing any number of individuals, and draw general conclusions from it, because these data are embedded in personal perspectives, which are (typically) elided in big data analytics. Hence, big data is transforming deeply contextual data into context-free data. Any principles derived from such data are thereby compromised.

Monday, October 06, 2014


Responding to Linda Harasim's comment, here:

To Mohsen Saadatmand: there is indeed significant overlap between communities of interest and cMOOCs and the underlying mechanisms are the same. The difference between the communities and the MOOCs is that the former are persistent while the latter are occasional. Thus, the two play different roles: the communities embed knowledge and standardize practice, while the MOOCs disrupt existing patterns of thinking and introduce people to new connections and new ideas.

To Linda Harasim: interesting and engaging comment. I do not think that artificial intelligence will be in any particular way better than human intelligence. But I think that anything that is a network – and this includes networks of machines as well as networks of humans or networks of neurons – can develop intelligence. There is substantial evidence to suggest that this is true, and I think it is no longer sufficient to suggest that the theory is simply an instance of “magical thinking.”

Take one particular point you make, for example. You write: “Siemens and Downes expected that somehow 2,000 participants should self-manage their learning by forming interest groups: how would or could 2000 strangers meet and self-organize into functioning learning groups? How would each individual know how to identify their interests?” The suggestion that this is impossible, that strangers could not self-organize, is refuted by reams of evidence. From tag-based communities to clusters of interest groups on Google Groups to the threads in Metafilter and 4chan, people have shown a remarkable ability to self-organize. And the research on our MOOCs (and even some xMOOCs) shows that this happens in MOOCs as well, and that these self-organizations have a learning focus.

But I also think you misunderstand the role of technology in cMOOCs. You write, “the ultimate organizer and decision-maker in the learning network (whether formal or non-formal) is that artificial intelligence (neural networks) replace the teacher or the moderator or the organizer. Technology replaces the human who is making the decisions and organizing the interactions.” This is simply not true, and nothing I have developed or advocated leans this way.

Technology makes learning networks possible; technology creates the channels through which people can interact, but it is people – each one of them making their own decisions – who choose what to read, what to link to, what to create, what to say. The ultimate organizers and decision makers in the learning network are students.

Can machines learn? Sure they can, of course they can, anything that is networked can learn. Simple stupid neurons, when joined together, can learn. So can simple stupid computers. But the most interesting results happen when you take networks of humans and, instead of telling them what to do, enable them to make decisions for themselves. Now you have networks of learning networks. You get remarkable results, like memes, cat photos, and maybe, global democracy. And it’s not magic. It’s the simple, observable, science of networks.
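The claim that simple connected units can learn is not hand-waving; it can be shown in a few lines of code. The sketch below (a standard single perceptron, with invented training parameters) learns the OR function from examples, with each weight update made purely locally - no central organizer tells the unit the rule.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train one threshold unit with the classic perceptron rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # local error signal only
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
assert [predict(x1, x2) for (x1, x2), _ in or_data] == [0, 1, 1, 1]
```

A single unit can only learn linearly separable patterns like OR; the more interesting behaviour described above emerges when many such units are networked together.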