March 11, 2007

Donald A. Norman, The Invisible Computer.

In the Italian version of the blog I published a synopsis of Donald A. Norman, The Invisible Computer, The MIT Press, Cambridge, Massachusetts, 1998

January 04, 2007

Natural-Born Cyborgs

Andy Clark, Natural-Born Cyborgs. Minds, Technology, and the Future of Human Intelligence, Oxford University Press, 2003

(synopsis and notes on this book)

Introduction
We, human beings, are natural-born cyborgs; we are thinking systems whose minds and selves are spread across biological brain and non-biological circuitry.
This hybridization is not a modern development; it is an aspect of our very nature as human beings.
We need to dispel the old prejudice that what counts as “mind” is solely what goes on inside our own biological skin-bag. Perpetuating this separation between the inside (mental) and the outside (world) prevents us from understanding a distinctive feature of human intelligence: the ability to enter into deep and complex relationships with non-biological props and aids.

The line between biological self and technological world, user and tools, is very flimsy.
We exist, as thinking things, only thanks to the supportive environment that we create and that, in turn, creates us.
Perhaps it is this plasticity of our minds that turns the small biological difference between human beings and other animals into an enormous gap.

Chapter 1 - Cyborgs Unplugged
The term “cyborg” was first used in 1960 by Clynes and Kline in their article “Cyborgs and Space”.

What is important in making a cyborg is
* Not the merger of flesh and wire
* Not the depth of the implant
* But the potential of transformation
* And, for cognitive systems, the fluidity of the flows of information.

In our daily activity our biological brain is already cooperating with a lot of technologies (spoken or written words, drawings, pen, paper, notes, watches, etc.) which are not implanted in our body but nonetheless play a crucial role in our cognitive activities.

What blinds us about our cyborg nature is the ancient western prejudice that the mind is completely separate and different from the rest of the world. If we dispel this illusion we can understand that our mind and our self are problem solving systems constituted by brain, body and technologies.

The idea of human cognition as a hybrid is not new (Vygotsky, Bruner, Merleau-Ponty, Dennett, Norman, Hutchins), but it is underestimated.


Chapter 2 – Technologies to Bond With

Difference between transparent and opaque technologies

Transparent technologies:
* well fitted to, integrated with, the biological capacities
* almost invisible in use.

Opaque technologies:
* Not naturally integrated with the organism,
* Requiring constant attention, remaining in focus during operation
* It’s easy to distinguish user from tool

The distinction is not fixed. It depends on the tool and on the user.

Transparent technologies
* should be easily and constantly available
* are not new (pen, paper, books, watches, etc.)
* often need a long process (the tool must change, but so must the user and his culture) to become such.

Donald Norman, The Invisible Computer, MIT Press, 1999
“Technology centered” – “Human centered” products

A technology, at the same time, adapts to and shapes the cognitive processes of the user.

Differences between
* “Do you know the time?”
* “Do you know the meaning of the word ‘clepsydra’?”

Our sense of self is changeable; it is not tied to fixed biological borders but to our mutable experience of thinking, reasoning, and doing inside a system of strong mental scaffolding.

The danger of technologies that are “too transparent”

Heidegger
“ready-to-hand” – “present-at-hand”
focus on task – focus on tool

The importance of the opportunity to shift, when necessary, from “ready-to-hand” to “present-at-hand”.

[notes :]
  • The general problem and importance of interfaces
  • Plato's aversion to the technology of writing
  • Our greater ability to act on the outer world than on the inner one

Chapter 3 – Plastic Brains, Hybrid Minds

The image of our physical body, despite all its appearance of durability, is highly negotiable. It is a mental construct, open to continual renewal and reconfiguration. Just a few simple tricks can modify it.

Our brains can readily project feeling and sensation beyond the biological shell.

Seeing.
The brain makes very intelligent use of the small high-resolution fovea. Subjects presented with the same image but with different tasks show different patterns of saccades. We tend to overestimate how much we actually see. Dennett (Consciousness Explained).

Change Blindness.

  • J. K. O’Regan, “Solving the ‘Real’ Mysteries of Visual Perception: The World as an Outside Memory”, Canadian Journal of Psychology 46 (1992): 461-88
  • Dan Simons & Dan Levin, “Change Blindness”, Trends in Cognitive Sciences 1 (1997), pp. 261-67. They bring the research into the real world.

The brain prefers meta-knowledge (how to acquire and exploit information) over baseline knowledge (the information itself).

The visual brain is opportunistic; instead of attempting to create, maintain and update a rich inner representation, it deploys a strategy that roboticist Rodney Brooks describes as “letting the world serve as its own best model”.

[note:]
Compression algorithms in Computer Science
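The bracketed note above compares the brain's preference for meta-knowledge over baseline knowledge to compression in computer science: it is often cheaper to keep a compact object from which information can be regenerated on demand than to keep the raw information itself. A minimal sketch of the analogy in Python (the example is mine, not Clark's):

```python
import zlib

# "Baseline knowledge": the raw data itself.
data = b"the world serves as its own best model " * 100

# "Meta-knowledge": a far smaller object from which the
# data can be regenerated whenever it is actually needed.
compressed = zlib.compress(data)

print(len(data), len(compressed))            # the compact form is much smaller
assert zlib.decompress(compressed) == data   # and nothing is lost
```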


It just doesn’t matter whether the data are stored inside the biological body or in the external world.

The importance of language.
When the infant brain encounters language, a variety of cognitive shortcuts becomes available (e.g., second-order relations).

The act of labeling allows the brain to reduce a complex, abstract problem to a simpler, concrete one.

Our brains are good at pattern matching, simple associations, and perceptual processing. For long chains of reasoning we need external scaffolding.

Dennett's “mind-tools” (after Gregory)

One large jump or discontinuity in human cognitive evolution seems to involve the distinctive way human brains repeatedly create and exploit various species of cognitive technology so as to expand and reshape the space of human reason. We – more than any other creature on the planet – deploy non-biological elements to complement our basic modes of processing, creating extended cognitive systems whose computational and problem-solving profiles are quite different from those of the naked brain.

When we freeze a thought or idea in words, we create a new object upon which to direct our critical attention.

  • Dennett, Kinds of Minds.
  • Merlin Donald, Origins of the Modern Mind (1991) and A Mind So Rare: The Evolution of Human Consciousness (2001).

With speech, text, and the tradition of using them as critical tools, humankind entered the first phase of its cyborg existence.

Instead of seeing our words and texts as simply the outward manifestations of our biological reason, we may find whole edifices of thought and reason accreting only courtesy of stable structures provided by words and texts.

It is a mistake to think of a fixed human nature; tools and culture are as much determiners of our human nature as products of it.


Chapter 4 – Where Are We? & Chapter 5 – What Are We?

The sense of our location and our body.

Dennett's “Where Am I?”.

The sense of real telepresence can be generated by quite basic technologies, but only if they are interactive. Passive experiences don’t work.

The notion of our perceptual experience as a passive receipt of information is misleading. Our brains are not like TVs, which simply take incoming signals and display them for… whom? The whole business of seeing and perceiving our world is bound up with the business of acting upon, and intervening in, our worlds.

Aglioti's follow-up experiment on the “Titchener Circles”. Subjects are prone to the illusory scaling effect, but they act according to the real sizes. There are differences between conscious perception and action, as if the human visual system were a hybrid of two different cooperating systems.

The adaptation to new limbs or to relocation (for instance) can be a very long process, but after a while the new limb or the new position becomes second nature; the prosthesis becomes transparent.
The relevant differences between natural or artificial connections are only those affecting timing, flow and density of informational exchange.

Our brains are amazingly adept at learning to exploit new types and channels of input. Our brains, especially those of newborn babies, are extremely plastic.
The relation between a neural signal and a movement is always arbitrary. The infant discovers which signal commands which limb by trial and error.

[note:
Hume: expectation, gentle force, habit.]

Our sense of ‘where we are’ and of ‘what we are’ are always constructed on the basis of the brain’s ongoing registration of correlations.

It is possible to drive a prosthesis by attaching its input directly to the brain or to some place in the nervous system.
It is likewise possible to drive muscles by attaching an output directly to them or to some place in the nervous system.

Dennett: “I am the sum total of the parts I control directly” (Elbow Room)

What we need to reject is the idea that all our various neural, or non-neural, tools need a kind of Privileged User. Instead, we are just tools all the way down. We are just shifting coalitions of tools.


Chapter 6 – Global Swarming

Like slugs or ants, we leave trails of our activities in a global networked environment. These trails can be exploited (though there are dangers) to better suit our relations with our technological extensions.
Interesting:
Collaborative filtering: categorization by cumulative trail-laying, which is unplanned, emergent, and flexible.
The strategy of new search engines like Google, which focus attention not (ultimately) on the content of the pages so much as on the structure of the links between pages.
Jon Kleinberg, Authoritative Sources in a Hyperlinked Environment at http://www.cs.cornell.edu/home/kleinber/auth.pdf
Better search engines make the extensive use of personal bookmarks redundant.
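Kleinberg's paper cited above is the classic statement of this link-structure strategy. A toy version of his hubs-and-authorities (HITS) iteration, run on a small hypothetical four-page web (a sketch of the principle only, not the algorithm any real search engine runs):

```python
# Hypothetical link graph: page -> pages it links to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": [],
    "D": ["C"],
}

hub = {p: 1.0 for p in links}
auth = {p: 1.0 for p in links}

for _ in range(50):  # iterate until the scores stabilize
    # Authority score: sum of hub scores of the pages linking in.
    auth = {p: sum(hub[q] for q in links if p in links[q]) for p in links}
    # Hub score: sum of authority scores of the pages linked to.
    hub = {p: sum(auth[q] for q in links[p]) for p in links}
    # Normalize so the values stay bounded.
    na = sum(auth.values()) or 1.0
    nh = sum(hub.values()) or 1.0
    auth = {p: v / na for p, v in auth.items()}
    hub = {p: v / nh for p, v in hub.items()}

# "C" is the page everyone links to, so it emerges as the top authority,
# even though nothing about its own content was ever examined.
best = max(auth, key=auth.get)
```

The point of the example is Clark's: the ranking emerges from the accumulated trails (links) left by many users, not from any inspection of the pages' content.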


Chapter 7 – Bad Borg?

Some dangers (more or less real):
* Inequality
* Intrusion
* Uncontrollability
* Overload
* Alienation
* Narrowing
* Deceit
* Degradation
* Disembodiment

December 29, 2006

Andy Clark & David J. Chalmers, “The Extended Mind”, Analysis 58:1 (1998), pp. 7-19

Synopsis

Main question: Where does the mind stop and the rest of the world begin?


First case study:
A human agent playing TETRIS (in some future), to assess whether a piece will fit into a slot, could:
  1. mentally rotate the object;
  2. use a key to rotate the object on the computer screen;
  3. use a neural implant able to produce the same rotation as in case 2, but this time activated by his thought and visually reproduced on his retina.
The three cases are very similar: 1) and 3) are clearly internal, 2) is external and distributed between subject and computer, but:

If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process. Cognitive processes ain't (all) in the head!

This is what Andy Clark, in “Memento's Revenge: Objections and Replies to the Extended Mind” (to appear in R. Menary (ed.), Papers on the Extended Mind), calls the Parity Principle.


In this regard, an important distinction is:
epistemic action <--> pragmatic action
Epistemic actions alter the world so as to aid and augment cognitive processes such as recognition and search.
Pragmatic actions, by contrast, alter the world because some physical change is desirable for its own sake.
Kirsh, D. & Maglio, P., “On distinguishing epistemic from pragmatic action”, Cognitive Science 18 (1994), pp. 513-49

So:
In these cases, the human organism is linked with an external entity in a two-way interaction, creating a coupled system that can be seen as a cognitive system in its own right. All the components in the system play an active causal role, and they jointly govern behaviour in the same sort of way that cognition usually does. If we remove the external component the system's behavioural competence will drop, just as it would if we removed part of its brain. Our thesis is that this sort of coupled process counts equally well as a cognitive process, whether or not it is wholly in the head.


Clark and Chalmers call their position Active Externalism.

This externalism differs greatly from the standard variety advocated by Putnam (“The Meaning of ‘Meaning’”, 1975) and Burge (“Individualism and the Mental”, 1979).
In Putnam's example the relevant external features (water = H2O or water = XYZ – Earth/Twin Earth) are distal and passive. This is reflected by the fact that the actions performed by me and my twin are physically indistinguishable, despite our external differences. In the cases we describe, by contrast, the relevant external features are active, playing a crucial role in the here-and-now.

It might be possible to refute this kind of externalism by identifying the cognitive with the conscious, since it seems far from plausible that consciousness extends outside the head. But not every cognitive process is a conscious process. More interestingly, one might argue that what keeps real cognitive processes in the head is the requirement that cognitive processes be portable… the trouble with coupled systems is that they are too easily decoupled. But this is a contingent aspect. The real moral of the portability intuition is that for coupled systems to be relevant to the core of cognition, reliable coupling is required.

Language appears to be a central means by which cognitive processes are extended into the world.

So far we have spoken largely about "cognitive processing", and argued for its extension into the environment. Some might think that the conclusion has been bought too cheaply. Perhaps some processing takes place in the environment, but what of mind? Everything we have said so far is compatible with the view that truly mental states - experiences, beliefs, desires, emotions, and so on - are all determined by states of the brain. Perhaps what is truly mental is internal, after all?

We propose to take things a step further. While some mental states, such as experiences, may be determined internally, there are other cases in which external factors make a significant contribution. In particular, we will argue that beliefs can be constituted partly by features of the environment, when those features play the right sort of role in driving cognitive processes. If so, the mind extends into the world.


Second case study:
(to address the portability issue and to extend the treatment to the more central case of an agent’s beliefs about the world.)
  • Inga hears of an intriguing exhibition at MOMA. She thinks, recalls it's on 53rd St, and sets off.
  • Otto suffers from a mild form of Alzheimer's, and as a result he always carries a thick notebook. When Otto learns useful new information, he always writes it in the notebook. He hears of the exhibition at MOMA, retrieves the address from his trusty notebook and sets off.
Just like Inga, Otto walked to 53rd Street because he wanted to go to the museum and believed that it was on 53rd Street (even before consulting his notebook – unless we count as beliefs only the currently active ones, but then Inga too did not believe that MoMA was on 53rd Street before fetching this information from her memory). Otto's long-term beliefs just weren't all in his head.

To be included in someone's cognitive system, externally stored beliefs must be:
  1. constantly available and used,
  2. easily accessible,
  3. immediately believed (not subject to critical scrutiny)

What about socially extended cognition?
In this case too, there seem to be no valid reasons to exclude from my beliefs someone else's beliefs on which I constantly and reliably rely.
Here language plays a central role. Without language we would be closed in a Cartesian solipsism where everything relies on inner processes. Thanks to language we can partially spread this burden into the world. Language, thus construed, is not a mirror of our inner states but a complement to them. It serves as a tool whose role is to extend cognition in ways that on-board devices cannot. Indeed, it may be that the intellectual explosion in recent evolutionary time is due as much to this linguistically-enabled extension of cognition as to any independent development in our inner cognitive resources.

Finally, our self-image should also be revisited. Just as our self goes beyond our conscious states, so it could go beyond the border of the skin.

A short survey of the extended mind theory

Everyone who accepts or rejects the extended mind model starts from the essay:

The following articles take a stand against some aspects of the view set out by Clark and Chalmers:

  • Paul Loader, Extending the Mind, from http://www.informatics.sussex.ac.uk/users/pl27/Notes.pdf
  • Diego Marconi, “Contro la mente estesa”, Sistemi intelligenti, no. 3, December 2005
  • Alberto Oliverio, “La mente estesa e le neuroscienze”, Sistemi intelligenti, no. 3, December 2005
  • Adams, F. & Aizawa, K., “The Bounds of Cognition”, Philosophical Psychology 14:1 (2001)

Many essays on the extended mind will appear in Richard Menary (ed.), The Extended Mind, forthcoming from Ashgate (late 2006 or early 2007). For example:

  • John Preston, The Extended Mind, the Concept of Belief, and Epistemic Credit
  • Andy Clark, Memento's Revenge: Objections and Replies to the Extended Mind
  • Adams, F. & K. Aizawa, Defending the Bounds of Cognition
  • Clark, A., Coupling, Constitution and the Cognitive Kind: A Reply to Adams and Aizawa
  • Rupert, R., Representation in Extended Cognitive Systems: Does the Scaffolding of Language Extend the Mind?
  • Wilson, R.A., Meaning Making and the Mind of the Externalist

There have been two conferences on the Extended Mind at the University of Hertfordshire, organized by Menary:

Other writings that should bear on the theme of the extended mind, though I don't know how good they are or what positions they take (some with uncertain bibliographic data):

First Bibliography (12/12/2006)

  • Braddon-Mitchell, David & Jackson, Frank, Philosophy of Mind and Cognition: An Introduction
  • Brooks, Rodney, Cambrian Intelligence: The Early History of the New AI, MIT Press, 1999
  • Brooks, Rodney, “Intelligence without representation”, Artificial Intelligence 47 (1991), pp. 139-59
  • Churchland, Patricia & Sejnowski, Terrence, “Neural Representation and Neural Computation”, in L. Nadel et al. (eds.), Neural Connections, Mental Computations, The MIT Press, Cambridge, 1989
  • Clark, Andy, Being There: Putting Brain, Body, and World Together Again, MIT Press, 1997
  • Clark, Andy, Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence, Oxford University Press, 2003
  • Clark, Andy & Chalmers, David J., “The Extended Mind”, Analysis 58 (1998), pp. 7-19
  • Cummins, Robert, Meaning and Mental Representation, MIT Press, Cambridge, 1989
  • Damasio, Antonio, The Feeling of What Happens: Body and Emotion in the Making of Consciousness, Harcourt Brace, 1999
  • Fodor, Jerry & Pylyshyn, Zenon, “Connectionism and cognitive architecture: a critical analysis”, Cognition 28 (1988), pp. 3-71
  • Fodor, Jerry, Psychosemantics, MIT Press, Cambridge, 1987
  • Gibson, James J., The Ecological Approach to Visual Perception, Houghton Mifflin, Boston, 1979
  • Heil, John, Philosophy of Mind, Routledge, 1998
  • Kirsh, David & Maglio, Paul, “On distinguishing epistemic from pragmatic action”, Cognitive Science 18 (1994), pp. 513-49
  • Lowe, E. J., An Introduction to the Philosophy of Mind, Cambridge Introductions to Philosophy, 2000
  • Smolensky, Paul, Il connessionismo: tra simboli e neuroni, edited by Marcello Frizione, Marietti, Genova
  • Thagard, Paul, Mind: Introduction to Cognitive Science, The MIT Press, 1996
  • Varela, Francisco, Rosch, Eleanor & Thompson, Evan, The Embodied Mind: Cognitive Science and Human Experience, The MIT Press, Cambridge, 1991
  • Oliverio, Alberto, “La mente estesa e le neuroscienze”, Sistemi intelligenti, no. 3, December 2005, Il Mulino
  • Marconi, Diego, “Contro la mente estesa”, Sistemi intelligenti, no. 3, December 2005, Il Mulino
  • Sutton, John, “Material memories and extended minds: interdisciplinarity and traces”,
  • Donald, Merlin, Origins of the Modern Mind: Three Stages in the Evolution of Culture and Cognition, Harvard University Press, Cambridge, MA, 1991
  • Haugeland, John, “Mind Embodied and Embedded”, in his Having Thought: Essays in the Metaphysics of Mind, Harvard University Press, Cambridge, MA, 1998, pp. 207-237
  • Dennett, Daniel C., “Making Tools for Thinking”, in D. Sperber (ed.), Metarepresentations: A Multidisciplinary Perspective, Oxford University Press, Oxford, 2000, pp. 17-29
  • Clark, Andy, “On Dennett: minds, brains, and tools”, in H. Clapin (ed.), Philosophy of Mental Representation, 2002

First research draft (12/12/2006)

Intelligence without representations?

With the shift from classical cognitive science (whose paradigmatic model is J. Fodor's Computational-Representational Theory of Mind) to neural networks and, more recently, to the new robotics (R. Brooks), we witness a progressive weakening of the role of mental representations in cognitive processes.

I intend to investigate the scope and meaning of this weakening of a concept that has been central to almost all positions in the philosophy of mind, at least from Descartes to the present day.

Above all, I intend to ask:
  1. what kinds of intelligent action can be performed without internal representations?
  2. given that the “hard cases” for classical cognitive science are the most elementary ones (subject/environment interaction), rather than the most sophisticated and abstract (“purely rational”) ones, doesn't the new cognitive science risk simply turning the problem upside down?
  3. since the models born from the marriage of neural networks and the new robotics are more robust and more realistic from the point of view of biology and developmental psychology, while those of classical cognitive science are more efficient at “high-level”, domain-specific tasks, is it possible to adopt an “ecumenical” attitude that embraces both positions?
  4. is it possible to imagine implementing a ‘representational machine’ on top of a simpler ‘machine without representations’? Under what conditions? And at what level of complexity of the subject?
  5. what role do the “external technologies” or mind scaffolding that the human mind relies on (first and foremost language) play in the transition from intelligence without representations to intelligence with representations?