THREE DEFINITIONS OF
‘EMERGENT’
I.
“If P is a property of w, then P is emergent iff
(1) P supervenes with nomological necessity, but not
with logical necessity, on properties the parts of w have taken separately or
in other combinations; and (2) some of the supervenience
principles linking properties of the parts of w to w’s having P are
fundamental laws.” (McLaughlin, p. 93)
“A-properties supervene on B-properties = df.
Necessarily, for any object x and A-property a, if x has a, then there is a
B-property b such that (i) x has b, and (ii)
necessarily, if anything has b, it also has a.”
(McLaughlin, p. 91, quoting from van Cleve)
“A law L is a fundamental law iff it is not
metaphysically necessitated [and
therefore not nomologically necessitated – my
addition] by any other laws, even together with initial conditions.” (McLaughlin, p. 93).
[McLaughlin on this definition of emergent property:
It captures the spirit of the British Emergentist concept. It is coherent. The claim that chemical and biological properties are emergent in this sense has since been empirically refuted by quantum mechanics. It remains an open question whether or not conscious properties are emergent in this sense, but his bet is that they are not.]
II.
“The
occurrence of a characteristic W in an object w is emergent relative to a
theory T, a part relation Pt, and a class G of attributes if that occurrence
cannot be deduced by means of T from a characterization of the Pt-parts of w
with respect to all the attributes in G….A characteristic
W is emergent relatively to T, Pt, and G if its occurrence in any object is emergent in the sense just
indicated.” (Hempel, pp. 64-5).
[Note: Although Hempel
refers to this as a definition or ‘redefinition’, it is formulated merely as a
sufficient condition. E.g., let T be
physics, and contain only homophonic laws;
Let G encompass certain physical attributes had by the physical parts, Pt,
considered apart from their role in w;
then if the occurrence of W in w cannot be deduced from T, it is
emergent]
III.
Let S be a system made up of constitutive elements a, b, c,… If the occurrence of a ‘system feature’, F, of S must be explained in terms of causal interactions among S’s constitutive elements, then F is an ‘emergent1’ feature of S. If F in addition has, or grounds, causal powers that cannot be explained by the causal interactions of S’s constitutive elements, then F is ‘emergent2’.
[Searle then claims that
consciousness is an emergent1 feature of certain systems of neurons in the same
way that solidity and liquidity are emergent1 features of systems of
molecules. (cf. Searle, pp. 69-70)]
PEPPER ON EMERGENCE
A Taxonomy of Change:
1. chance occurrences
(do not fall under a law)
2. ‘shifts’ (one characteristic replaces another
in a lawlike way,
describable as a functional relation)
3. ‘emergence’ (certain
characteristics supervene on others,
where the occurrence of these others is explainable in a lawlike
way on their own)
Emergence:
1. There are levels of existence defined in
terms of degrees of
integration.
2. There are marks which distinguish these
levels from one
another over and above the degrees of integration.
3. It is impossible to deduce the marks of a
higher level from
those of a lower level.
‘epiphenomenal
occurrences’ (standard definition): occurrences that are caused, but that are
not themselves causally efficacious.
‘Epiphenomenalism’ wrt occurrences of a
certain kind of property: the view that all such occurrences are epiphenomenal.
Pepper’s 1st thesis: All natural regularities are
shifts and cannot be otherwise described, on pain of epiphenomenalism.
Argument: Suppose that there is a shift, a lawlike
relation at level B, between the values of variables q,r,s, and t, correctly expressed as the function f1(q,r,s,t). Suppose
further that at level C, supervening on level B, there are occurrences of new
variables a and b, which are claimed to be emergent, and which satisfy the
function f2(r,s,a,b). “But these new variables either have some
functional relationship with the rest of the lower level variables or they
haven’t. If they haven’t they are sheer
epiphenomena…. If they have, they have
to be included among the total set of variables described by the lower level
functional relation; they have to drop down and take their place among the
lower level variables as elements in a lower level shift.”
Pepper’s 2nd thesis: Either emergent laws are
ineffectual and epiphenomenal, or they are effectual. But they are only effectual (useable to make
reliable predictions) if they are consistent with, don’t conflict with, the
lower level shifts that they are taken to supervene on. And the only way to secure that is if they
are deducible from those lower level shifts, i.e., if they are reducible to
those lower level laws, and thus in effect amount to a lower level shift after
all.
In an (so far as I know) unpublished ms, Richard Boyd argues against emergence in a way
reminiscent of Pepper, invoking the following principle about causation:
“Suppose that under
certain circumstances causal factors f1…fn
are sufficient to change the state of a system S from its initial state S1 to
some subsequent state S2 over a time interval T. Suppose, now, that under otherwise the same
circumstances the causal factors f1…fn act on a
system that is initially in state S1, and it is also true that a causal factor,
call it g, acts on the same system. At
the end of the time interval T the state of the system in question will be
different from S2, unless either g is one of the factors f1…fn,
or g is a constituent of one of those factors; or g is made up of, or
constituted from, aspects of some or all of the factors f1…fn
or their constituents….Some such principle as this is, I think, central to our
conception of causation, and to the way in which we individuate causal
factors.”
Suppose then that the causal factors
f1…fn are all at the lower
level, and the factor ‘g’ is being put forward as an ‘emergent’ factor. According to this principle g must either be
one of the f1…fn, or a
‘constituent’ of one of these, or ‘made up of or constituted from aspects of
some or all’ of these. It is as though g
must be on the same level as the f1…fn
after all; and g will be apparently predictable from f1…fn
together with certain facts of constitution.
But is emergence ruled out? It seems to depend on how we construe
‘constituted from aspects of’. E.g., if
we allow that factors on one explanatory level can be constituted, in this way,
of factors at lower levels, then the principle seems perfectly compatible with
emergent properties and powers, notwithstanding Boyd’s insinuations to the
contrary.
MEEHL AND SELLARS ON PEPPER.
“What the emergentist
says is that there is a region within the fourspace qrst within which f1(q,r,s,t) = 0
holds. This region is the “lower level
of integration” – e.g., physicochemical processes which are not occurring in
protoplasm. On the other hand, there is
another region – the “emergent” region – in which f2(q,r,s,t) = 0 holds,
f1 ≠ f2…. But while the notion of
different regions in the fourspace qrst exhibiting different functional relationships is
mathematically unexceptional, is it emergence?...no emergent variables have
been introduced…and it has not been claimed that there are ‘piggy-back’
regularities…thus to the extent that ‘emergence’ connotes a simultaneous
presence in a single situation of two or more levels, the notion we have been analyzing is not, as
such, a matter of emergence.”
Two senses of emergence.
“But if we add to this the notion
that protoplasm exhibits a constellation of physicochemical variables which
belongs in a region of the n-space defined by those variables that conform to a
different function than do the regions to which belong constellations exhibited
by less complex physicochemical situations, then the use of the term “emergence”
seems not inappropriate. And, indeed
many philosophers [and scientists] who have made use of the concept of levels of integration or levels of causality seem to have had
something like the above in mind. But it
is reasonably clear that most emergentist
philosophers have had something more in mind.
They have spoken of the emergence of properties….[including allegedly basic, simple, non-dispositional
properties such as qualia; and have wanted to claim that occurrences of these
emergent properties, while dependent on the particularities of subvenient properties, are not reducible to them]”.
[Sidebar: ‘methodist’ vs. ‘particularist’ approaches to sorting this out.]
Pepper’s Worry
Suppose the emergent properties to be
a and b.
Suppose also that the occurrence of a
and b depends in a certain functional way on the values of q,r,s, and t:
a = g(q,r)
b = h(s,t)
Then the function which adequately describes the interrelationships of the inclusive set of variables qrstab, call it E(q,r,s,t,a,b), can be written without a and b thus: E[q,r,s,t,g(q,r),h(s,t)] or f3(q,r,s,t).
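A toy numerical instance (my own illustration, not one given by Pepper or by Meehl and Sellars) makes the substitution vivid. Suppose, purely hypothetically, that a = g(q,r) = q + r, that b = h(s,t) = st, and that E(q,r,s,t,a,b) = q + r + s + t + a + b. Substituting g and h for a and b yields E[q,r,s,t,g(q,r),h(s,t)] = 2(q + r) + s + t + st, i.e., a function f3(q,r,s,t) of the lower level variables alone; once a and b are fixed by g and h, E tells us nothing about the system that q, r, s, and t do not already determine.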
1. Unless f3(q,r,s,t) is equivalent to f1(q,r,s,t), they cannot both hold. (True, but only if they are each intended to cover the entire fourspace of these variables.)
2. But for f3 to be equivalent to f1 is for a and b to be epiphenomenal (i.e., in Pepper’s sense of ‘making no difference’).
3. If f3 holds and a and b are not epiphenomenal, then f1 cannot hold.
4. So E(q,r,s,t,a,b) must adequately describe the phenomenon in question both before and after integration.
5. So the supposed emergents a and b have to be included among the total set of lower level variables after all. Q.E.D.
Does this give Pepper the last word?
Meehl and Sellars on the emergence of
‘raw feels’
Suppose that the a
and b in the above function E are the raw feels that some philosophers have
wanted to claim are emergent. “That is,
raw feels depend upon the variables q,r,s,t
which also characterize pre-emergent situations. But raw feels do not occur in the presence of
matter generally; only matter as it is in the living brain. The function f1(q,r,s,t) which fits the behavior of matter everywhere else,
breaks down when applied to brains.”
But must the scientist introduce the variables a and
b? Cannot he either just differentiate
two different regions of qrst space, governed by two
different functions; or, failing that, introduce a single complex function E
involving the constitutive functions g and h but not a and b? (Call that the ‘behaviorist’ or maybe ‘eliminativist’ gambit.)
On the other hand, what prevents the scientist from affirming
his experience of raw feels, introducing variables a and
b for them, and insisting that the functions g and h do not constitute
reductive analyses of a and b? (Call
that the ‘emergent realist’ gambit.)
THEMES IN WIMSATT
1.
We need to acknowledge an important sense of ‘emergence’ on which
emergence is compatible with reduction, where “…a reductive explanation is one
showing it to be mechanistically explicable in terms of the properties of and
interactions among the parts of the system”, and where an emergent property is,
roughly “…a system property which is dependent upon the mode of organization of
the system’s parts”(pp. 99-100).
2.
Wimsatt alludes to a list of “well-understood”
reductive examples of emergence in this sense (p. 101).
[Sidebar: particularism!]
3.
The sorts of property occurrences typically characterized as emergent
tend to be “non-aggregative” wrt the properties
occurring on subvenient structural levels.
4.
To get a fix on what that comes to, we can focus on what it takes for a
property occurrence to be “aggregative” wrt
structurally underlying property occurrences.
Wimsatt’s account of this suggests that non-aggregativity can then come in degrees, and is ubiquitous.
FODOR ON THE SPECIAL
SCIENCES AND REDUCTION
The generality of physics: all events which fall under
the laws of any science are physical events and hence fall under the laws of
physics (p. 395). To that extent,
physics is the most basic science.
Compare:
Token Physicalism: the view that all the
events that the sciences talk about are physical events (p. 397), versus
Reductivism: the view that all of the special
sciences reduce to physics, in the sense that the laws of any special science
at the ideal limit of its development will be derivable from the laws of
physics at ideal limit of its development, via bridge laws establishing a
correspondence between each ‘natural kind’ predicate distinctive of the special
science laws and a respective natural kind predicate of physics (p. 396). Compare:
Type Physicalism: the view that every property mentioned in the
laws of any science is a physical property (p. 397).
Fodor takes Reductivism
(hence Type Physicalism) to be too strong a
constraint on the special sciences (while acknowledging that this is ultimately
an empirical question). But he holds
that there is a weaker notion of reduction that
preserves token physicalism, the generality of
physics and the ‘basic’ position of physics.
Instead of bridge laws linking a natural kind term occurring in a
special science law to a physical natural kind term, there can be bridge laws
linking a natural kind term of a special science law to a disjunction of heterogeneous physical natural kind terms (i.e., a disjunction not itself constituting a physical kind term).
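Schematically (my simplification of Fodor’s notation; the S’s and P’s are placeholder predicates): a special science law has the form S1x → S2x; classical reduction requires bridge laws of the form S1x ↔ P1x and S2x ↔ P2x, with P1 and P2 natural kind predicates of physics; the weaker picture allows bridge laws of the form S1x ↔ (P1x ∨ P2x ∨ … ∨ Pnx), where the P’s are heterogeneous physical predicates and their disjunction is not itself a physical kind predicate.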
What physical explanation one gives
of the occurrence of a special science kind will therefore depend on how it
happens to be physically realized, and that can vary with the context. There will, by the same token, be different
physical explanations of the applicability of a special science law, S1x → S2x, varying with how the natural kind terms S1
and S2 happen to be physically realized (p. 404). So this weaker notion of reduction makes the
special sciences more autonomous than Reductivism
does, and to that extent makes science less unified. The second full paragraph on p. 408 gives a
nice summary formulation of this thought. And here is a feisty Fodor again in 1997 (from
“Special Sciences: Still Autonomous After All These Years”, Philosophical Perspectives, 11, pp.
149-63, see esp. pp. 160-161):
The very existence of the special sciences
testifies to reliable macrolevel regularities that
are realized by mechanisms whose physical substance is quite typically
heterogeneous […] Damn near everything we know about the world suggests that
unimaginably complicated to-ings and fro-ings of bits and pieces at the extreme microlevel manage somehow to
converge on stable macrolevel
properties.
On the other hand, the
‘somehow’ really is entirely mysterious, and my guess is that that is what is
bugging Kim […] [Kim] doesn’t see why there should be (how there could be)
[macro level regularities] unless, at a minimum, macrolevel
kinds are homogeneous in respect of their microlevel
constitution. Which,
however, functionalists in psychology, biology, geology, and elsewhere, keep
claiming that they typically aren’t.
MICHAEL POLANYI, “LIFE’S IRREDUCIBLE STRUCTURE” (1968)
“…the organism is shown to be, like a
machine, a system which works according to two different principles: its
structure serves as a boundary condition harnessing the physical-chemical
processes by which its organs perform their functions. Thus, the system may be called a system under
dual control. Morphogenesis, the process
by which the structure of the living beings develops, can then be likened to
the shaping of a machine which will act as a boundary for the laws of inanimate
nature.”
“A boundary condition is always
extraneous to the process which it delimits.
In Galileo’s experiments on balls rolling down a slope, the angle of the
slope was not derived from the laws of mechanics, but was chosen by Galileo. And as this choice of slopes was extraneous to
the laws of mechanics, so is the shape and manufacture of test tubes extraneous
to the laws of chemistry. The same thing
holds for machine-like boundaries; their structure cannot be defined in terms
of the laws which they harness…. Therefore if the structure of living things is
a set of boundary conditions, this structure is extraneous to the laws of
physics and chemistry which the organism is harnessing. Thus the morphology of living things
transcends the laws of physics and chemistry.”
DNA acts as a blueprint. It functions as a code of instructions. “As the arrangement of a printed page is extraneous to the
chemistry of the printed page, so is the base sequence in a DNA molecule
extraneous to the chemical forces at work in the DNA molecule.”
“…the existence of dual control in
machines and living mechanisms represents a discontinuity between machines and
living things on the one hand and inanimate nature on the other hand, so that
both machines and living mechanism are irreducible to the laws of physics and
chemistry.”
“…the control of a system by
irreducible boundary conditions does not interfere with the laws of physics and
chemistry. A system under dual control
relies, in fact, for the operations of its higher
principle, on the working of principles of the lower level, such as the laws of
physics and chemistry.”
Biological Hierarchies Consist of a
Series of Boundary Conditions.
Living beings comprise a whole
sequence of levels forming such a hierarchy.
“Each level relies for its operations
on all the levels below it. Each reduces
the scope of the one immediately below it by imposing on it a boundary that
harnesses it to the service of the next-higher level, and this control is
transmitted stage by stage, down to the basic inanimate level.”
P.W. ANDERSON; “MORE IS DIFFERENT:
BROKEN SYMMETRY AND THE NATURE OF THE HIERARCHICAL STRUCTURE OF SCIENCE”
(appeared in Science, 1972, based on
a talk given in 1967)
The “reductionist hypothesis” widely
accepted by scientists.
But “[t]he ability to reduce everything
to simple fundamental laws does not imply the ability to start from those laws
and reconstruct the universe. In fact,
the more the elementary particle physicists tell us about the nature of the
fundamental laws, the less relevance they seem to have to the very real
problems of the rest of science, much less to those of society. The constructionist hypothesis breaks down
when confronted with the twin difficulties of scale and complexity.”
While symmetry is of great importance in physics, where symmetry is the existence of different viewpoints from which the system appears the same, “…the internal structure of a piece of matter need not be symmetrical even if the total state is,…and the state of a really big system does not at all have to have the symmetry of the laws which govern it; in fact it usually has less symmetry.”
Important examples of “broken
symmetries” include crystalline structures, superconductivity, antiferromagnets, ferroelectrics, liquid crystals. “…matter will undergo mathematically sharp,
singular “phase transitions” to states in which the microscopic symmetries, and
even the microscopic equations of motion are in a sense violated.”
“In some sense, structure –
functional structure in a teleological sense, as opposed to mere crystalline
shape – must be considered a state, possibly intermediate between crystallinity
and information strings, in the hierarchy of broken symmetries. To pile speculation on speculation, I would
say that the next stage could be hierarchy or specialization of function, or
both. At some point we have to stop
talking about decreasing symmetry and start calling it increasing
complication.”
BACKGROUND TO KAUFFMAN:
Self-organisation: the tendency of complex systems to become organized – to exhibit
particular systemic structural properties or behavioral patterns – where the
explanation of this is in important respects not attributable to factors external to the system.
Emergent self-organisation: the self-organisational properties are not
mere aggregations of properties of the elements of the self-organising
system, but distinct system properties occasioned by non-linear lawfully
constrained interactions among the elements.
Order for free: order explainable without appeal to any new fundamental forces or new
basic laws of nature.
Kauffman’s View of life as emergent
self-organisation, and as order for free: roughly, life as the mathematically expected outcome
inherent in the physico-chemical conditions of the
right chemical ‘soup’, specifically (1) the relative spatio-temporal
placement of the self-organising elements of the
system (i.e., its molecules), (2) salient intrinsic
chemical properties of these elements, and (3) various lawlike
constraints on the local causal interactions of elements having these
properties. The math used to model this: ‘autonomous random Boolean NK networks’, where N = the number of elements of the network, K = the number of input links per element, ‘autonomous’ because none of the inputs comes from outside the network, ‘Boolean’ because a Boolean function is assigned to each element to process its inputs, and ‘random’ because the input links and Boolean functions are assigned randomly. (S. Kauffman, “Antichaos and Adaptation,” Scientific American (August 1991), p. 80)
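A minimal Python sketch of such a network (my own illustration; the sizes N and K, the number of update steps, and synchronous updating are arbitrary assumptions, not values taken from Kauffman):

    import random

    N, K, STEPS = 20, 2, 30                            # illustrative sizes only
    random.seed(0)

    # 'autonomous': every element's K input links come from inside the network
    inputs = [random.sample(range(N), K) for _ in range(N)]
    # 'Boolean' and 'random': each element gets a randomly filled truth table over its 2^K input patterns
    rules = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

    state = [random.randint(0, 1) for _ in range(N)]   # random initial state
    for _ in range(STEPS):
        # synchronous update: each element reads its inputs' current values and looks up its rule
        state = [rules[i][sum(state[src] << bit for bit, src in enumerate(inputs[i]))]
                 for i in range(N)]
        print(''.join(map(str, state)))                # with small K the trajectory often settles onto a short repeating cycle

Watching the printed trajectory settle onto a cycle (an ‘attractor’) is the toy analogue of the ordered behavior Kauffman takes to be ‘order for free’.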
P.W. Anderson, D.L.
Stein: “Broken Symmetry, Emergent Properties, Dissipative Structure, Life; Are They Related?” (1987)
1.
Can properties emerge from a more complex system if they are not present
in the simpler substrate from which the complex system is formed?
Yes.
“In ‘equilibrium’ systems containing large numbers of atoms, new
properties such as rigidity or superconductivity, and new stable entities or
structures such as quantized vortex lines can emerge that are not just
nonexistent but even meaningless at the atomic level.” (p. 446)
2.
Are there emergent properties in dissipative systems driven far from
equilibrium?
Yes.
e.g., dynamic instabilities, like turbulence or convection, which exhibit ‘broken symmetry effects’ (e.g., bifurcations), sometimes called (e.g., by Prigogine) “dissipative structures”
3.
Is there a theory of dissipative structures comparable to that of
equilibrium structures, explaining the existence of new stable properties and
entities in such systems?
No.
Are living structures “stable”?
Life is more stable than
vortexes. “…one has an intuitive feeling
that living systems have an extraordinarily great ability to ignore perturbations
and changes in boundary conditions, i.e., to be autonomous and rigid in some
sense.”(p. 447) It
is relative to time scales, of course.
4.
Can we see our way clear to a physical theory of the origin of life that
follows these general lines?
Not yet.
“…there is no analogy between the
stability, rigidity, and other emergent properties of equilibrium broken
symmetry systems and the properties of dissipative systems driven far from
equilibrium. The latter types of systems
have never been observed to exhibit the rigidity, stability and permanence that
characterize the thermodynamically stable broken symmetry systems, nor has any
mathematical reason been found why they should….[Yet]
it is indeed an obvious fact…that life succeeds in maintaining its stability
and integrity, and the identity of its genetic material, at the cost of
increasing the rate of entropy production of the world as a whole. It is at least in that sense a stable
‘dissipative structure’” (p. 454)
More recently, cf. “Random Boolean network models and the yeast transcriptional network,” by Stuart Kauffman, Carsten Peterson, Bjorn Samuelsson, and Carl Troein, “communicated by Philip Anderson” (Note!), in Proceedings of the National Academy of Sciences, Dec. 9, 2003, vol. 100, no. 25, pp. 14796-14799.
Abstract: “The recently measured yeast transcriptional network is analysed in terms of simplified Boolean network models,
with the aim of determining feasible rule structures, given the requirement of
stable solutions of the generated Boolean networks. We find that, for ensembles of generated
models, those with canalysing Boolean rules are
remarkably stable, whereas those with random Boolean rules are only marginally
stable. Furthermore, substantial parts
of the generated networks are frozen, in the sense that they reach the same
state, regardless of initial state.
Thus, our ensemble approach suggests that the yeast network shows highly
ordered dynamics.”
PAUL HUMPHREYS: “HOW
PROPERTIES EMERGE”
The exclusion argument:
1.
If an event x is causally sufficient for an event y, then no event x*
distinct from x is causally relevant to y (exclusion).
2.
For every physical event y, some
physical event x is causally sufficient
for y (physical determinism).
3.
For every physical event x and mental event x*, x is distinct from x*
(dualism)
4.
So: for every physical event y, no mental event x* is causally relevant
to y (epiphenomenalism). (Yablo, 1992)
The downwards causation argument:
1. But if emergentism (or non-reductive physicalism) is committed to causation from the mental to
the physical (‘downwards causation’), then the exclusion argument rules it out.
2. Emergentism
is committed to downwards causation, because:
3. Emergent mental
properties must have novel causal powers.
4. These powers must
manifest themselves either by causing
physical properties
or other mental properties.
5. If the former, there
is downward causation.
6. If the latter, then
call ‘M*’ the emergent mental property
caused by emergent mental property M.
7. But all emergent
mental properties strongly supervene
on
physical properties. (supervenience)
Let M supervene on
physical property P and M* on P*.
8. How can M* supervene
on P* yet have been caused by M?
9. The
only way is by M causing M* by causing its supervenience
base P*.
10.
So the causal exclusion argument rules out emergence. (Kim, 1992)
Generalizing the above, assume a
hierarchy of levels: L0, L1,…,Ln.
The generalized exclusion argument:
Df. An event z is causally connected to a second event x iff x causes z or z causes x. z is causally
disconnected from x iff
z is not causally connected to x.
1’. If an event x is causally sufficient
for an event y, then no event x* distinct from x and causally disconnected from
x is causally relevant to y. (exclusion)
Assumption: events are dated particulars; so the exact
time of occurrence and manner of occurrence are crucial to an event having the
identity that it does.
2’. For every 0-level event y, some
0-level event x is causally sufficient for y (0-level determinism)
3’. For every 0-level event x and every i-level event x*i (i > 0), x is distinct from
x*i (pluralism)
4’. Therefore for every 0-level event y,
no i-level event x*i (i > 0) that is causally disconnected from every 0-level
event antecedent to y is causally relevant to y.
Note: this argument, claims
Humphreys, allows higher level events to causally affect 0-level events if the
former are part of causal chains that begin and end at the 0-level.
How might this go? Very roughly, let us suppose that occurrences
of emergent properties happen as a result of the fusion of occurrences of lower level properties, where fusion is a
physical operation, an example of which could be causal interaction. Think of causal interactions as taking place
over time, so that the onset of the causal fusion operation at the lower level i will be at t1 and the resulting property occurrence at level
i+1 will be at t2 > t1. We are to
think of fusion as involving the ‘submergence’ (to coin my own phrase ;-) of
the lower level properties. They no
longer exist at the same time as the supervenient property occurrence. But supervenience
is a synchronic relation between
property occurrences. So at t1 we do not
yet have supervenience, and at t2 we have a violation
of supervenience.
At no point do we have supervenience! Similarly, think
of the “defusion” of a property occurring at level i+1
as a causal process whereby it comes not to exist, and the erstwhile submerged
lower level property occurrences whose fusion originally led to its existence
now themselves reappear at level i.
“MAKING SENSE OF
EMERGENCE,” JAEGWON KIM (1999)
Part I (pp. 127-139): re.
Explanation, Prediction, and Reduction
p. 128; “…the fortunes of
reductionism correlate inversely with those of emergentism
(modulo the rejection of substantival dualism).”
Can we make the idea of emergence at
least intelligible?
Two groups of ideas usually
associated with emergence:
Group 1: emergents are
‘novel’ and ‘unpredictable’ from knowledge of their
lower-level bases, and are not ‘explainable’ or ‘mechanistically reducible’ in
terms of their underlying properties.
Group 2: emergent properties bring into the world new
causal powers of their own, and have powers to influence and control the
direction of lower level processes from which they emerge.
Kim takes the Group 1 ideas as
‘constitutive’ of the idea of an emergent property,
and proposes to make this idea coherent in terms of a certain model of
reduction. Then in Part II (pp. 139-150)
he will argue against the ideas of Group 2.
p. 130; ‘Mereological
Supervenience’: systems with an identical total
microstructural property have all other properties in common; i.e., all
properties of a physical system supervene on, or are determined by, its total
microstructural property. Among those
properties supervenient on a system’s total
microstructural property, some may be emergent, while others are (merely)
resultant.
Re. Unpredictability. The emergent properties are novel
and unpredictable from the base properties, while the merely resultant
properties are familiar and readily predictable.
pp. 130-1; “…an emergent property is
new in an epistemological sense because it is unpredictable, and it is new in a
metaphysical sense because the emergent property has new causal powers.”
The predictability that is denied by emergentists
is theoretical predictability, not inductive predictability. We can know all there is to be known
theoretically about the basal conditions M of emergent property E, but this
alone will not suffice to predict E.
Re. Reductive unexplainability.
What would constitute such reductive explainability of an emergent property E?
The functional conception of
reduction.
For emergent property E to be reducible to the reduction base B,
(1) it first must be ‘functionalized’, i.e., construed or reconstrued as a property defined by its causal/nomic relations to other properties, specifically those in B, where this definition would take roughly the following form: Having E =def having some property P in B such that (i) C1,…,Cn cause P to be instantiated, and (ii) P causes F1,…,Fm to be instantiated. (Discussion)
(2) then realizers of E in B must be empirically detected; and finally
(3) how those realizers of E in B perform
the causal tasks constitutive of E must be theoretically explained. (cf. pp. 132-3)
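A hypothetical instance of the schema (my illustration, using the stock functionalist example rather than one of Kim’s own cases from this paper): Having pain =def having some property P in B (say, some neural property) such that (i) tissue damage and the like (the C1,…,Cn) cause P to be instantiated, and (ii) P causes wincing, avoidance behavior, and the like (the F1,…,Fm) to be instantiated. Step (2) would then be the empirical detection of the neural property that in fact plays this role in a given kind of system, and step (3) the theoretical explanation of how it plays it.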
How this type of reduction differs
from classical theoretical reduction: it involves no bridge laws or derivation
of laws. Yet it can be held to meet the
explanatory and predictive requirements of reduction.
The explanatory question: Why do systems exhibit E
whenever they instantiate
Q?
Ans: because E is a functional property defined by causal role C, and Q is a
realizer of E for these systems, and there is a theory that explains how Q
realizes E in these systems.
The predictive question: Will this system exhibit E at time t?
Ans. We can so predict on the basis of
having identified a realizer of E for the system solely on the basis of
knowledge of the causal/nomic relations obtaining in
the base domain. What enables the
prediction of a higher order property is the conceptual connections generated
by the functionalization of the higher properties,
not bridge laws taken as auxiliary premises.
The ontological question: In what sense does the functional model of explanation provide a model
of reduction? That is, if system S
having E has been explained in terms of the functional definition of E together
with the detection of a realizer Q, of E in S and an account of how Q realizes
E, has the occurrence of E in S been reduced to the occurrence of Q, and can
E’s occurrence be said to be nothing over
and above Q’s occurrence? (Discussion.)
Ans.
We must carefully distinguish here between the case of tokens of the property E and the
property type E per se.
A.
Tokens (instances) of the type E obey
‘The causal inheritance principle’: if a functional property E is instantiated on
a given occasion in virtue of one of its realizers, Q, being instantiated, then
the causal powers of this instance of E are identical with the causal powers of
this instance of Q.
But if the causal powers of these
tokens of E and Q are identical, what empirical reason could we have for denying
that the tokens per se are identical?
(p. 137). (Discussion)
B. Turning now to consider the issue
of reduction as it applies to types, the property type E may have multiple
realizers, with different tokens realized by tokens of disparate
realizers. So under what conditions may
we regard E as having been reduced?
(1) Should we (could we) identify the
property type E with a disjunction of
the types of all of its actual and potential realizers? (Discussion.)
(2) Should we deny that E is a genuine
property, as opposed to a concept? (Discussion.)
(3) Or should we embrace the
emergence of E and therefore its irreducibility? (But then, e.g., where does this leave the
proposed account, in A above, of the reduction of each
of its tokens? Discussion.)
If acceptable, (1) would count as a
’conservative’ reduction, while (2) would be an ‘eliminative’ reduction.
(3), says Kim, is the view favored by
many philosophers, but it is, he claims, problematic when Group 2 ideas about
emergence are added. (See Part II)
Kim also suggests that if anything is
going to turn out be emergent, it will be the phenomenal properties of
consciousness. They are the most likely
to resist functionalization entirely.
(So presumably he thinks that this would apply
to both types and tokens of these properties.)
As for what properties may turn out
to be emergent, then, Kim is comparable to McLaughlin in leaving just the
emergence of consciousness as an open question, but unlike McLaughlin,
expressing a bias in favor of it. Where
does this bias come from? McLaughlin
took the issue to be an empirical one, to be settled by science. Kim appears to be appealing a priori to his intuition or sense that the phenomenal
properties of conscious experience are intrinsic and non-relational, and
therefore not subject to functionalization.
Would it follow, then, that if Kim’s
bias were borne out, then, given his line of argument, in Part II (still to
come) against emergent properties having causal powers, Kim would be willing to
bite the bullet and regard the phenomenal properties of consciousness as
epiphenomenal?
Kim also briefly expresses doubts,
citing Chalmers, about self-organizational phenomena being emergent in his
sense. (Discussion.)
PART II: Downward Causation, The Very Idea.
Why posit emergent properties if they
are not going to have causal powers? But
one may distinguish between “horizontal” causal powers, i.e., those that are exercised and have their effects at the level of structure at which
the emergent property occurs, and “vertical” causal powers, i.e., those
exercised by the emergent property and having their effects at higher or lower
levels of structure. The latter
direction would be so-called “downward causation” (p.
139).
Early emergentists
were interested in the history or evolution of order in the world – how
molecules arose from atoms, how life arose from chemistry, and in general how
higher levels of structure arose from lower levels. But contemporary emergentists
accept the layered model of the world, and are interested in the synchronic
structure of the world, or its structure over small (not historical ) time
intervals, how it all hangs together (p. 140).
(Discussion)
Central doctrines of emergentism:
1. the emergence of complex higher-level entities, entities that are new
structural configurations of lower level
entities
2. the emergence of higher level properties, that are not merely
‘resultants’ of the properties of lower constituents of the emergent entity (or
system), but new properties not occurring at the lower level of structure, and
where the relation between the occurrence of the lower and higher level
properties is understood in terms of
causation or in terms of supervenience.
3. Emergent properties are not predictable from exhaustive information about their “basal conditions”
4. Emergent properties are neither explainable nor reducible solely in terms
of their basal conditions.
5. Emergent properties have novel and irreducible causal powers of their own
(pp. 140-141)
But the causal power of emergent
properties can only be understood in terms of downward causation. I.e., either upwards or same level causation
presupposes downwards causation. First,
upwards causation presupposes same level causation; e.g., the beauty of the sculpture, assuming
it to be an emergent property of the sculpture, was brought about by the
physical work of the sculptor on the block of marble, and this latter is an
instance of same level causation.
Second, same level causation presupposes downwards causation, except in
the limiting case of the very bottom level of structure (if there is one),
according to the following ‘principle of downward causation’:
To cause any property
(except those at the very bottom level) to be instantiated, you must cause the
basal conditions from which it arises (either as an emergent or as a
resultant). (p. 143)
So downward causation must be coherent if emergent properties are to have
causal powers.
Now there is nothing
incoherent in general about the idea of downward causation, cf. Kim’s smashing
vase example, p. 144.
The problem arises in
cases of “reflexive downwards causation”, cf. the
discussion of Roger Sperry’s wheel example, and eddies, p. 145. Can one hold both upwards determination of
the emergent property and its downward reflexive causal influence? (p. 146)
Case 1: At a certain time t, a
whole, W, has emergent property M, where M emerges from the following
configuration of conditions: W has a complete decomposition into parts a1,…,an, each ai has property Pi, and
relation R holds for the sequence a1,…,an.
For some aj, W’s having M at t causes aj to have Pj at t.
This case is
incoherent, claims Kim, because it violates ‘the causal power actuality
principle’:
For an object x to exercise at time t the causal/determinative powers it
has in virtue of having property P, x must already possess P at t. When x is caused to acquire P at t, it does
not already possess P at t and is not capable of exercising the
causal/determinative powers inherent in P. (Discussion: Kim’s rejection of a
mutual causal interdependence interpretation of this)
Case 2: As before, W has an
emergent property M at t, and aj has Pj. We now consider
the causal effect of W’s having M at t on aj at a later time t + Δt. Note that this is a case of diachronic
reflexive downwards causation.
This latter type of
downwards causation escapes the problems noted in Case 1. Which brings Kim to his ‘critical question’
on p. 149: If an emergent, M, emerges from basal
condition, P, why can’t P displace M as a cause of any putative effect of M, e.g.,
P*? Why cannot P do all the work in explaining why an alleged effect of M
occurred? He then argues thus:
“…if causation is understood as nomological
(law based) sufficiency, P, as M’s emergence base, is nomologically
sufficient for it, and M, as P*’s cause is nomologically
sufficient for P*. Hence (by
transitivity) P is nomologically sufficient for P*,
and hence qualifies as its cause.” (p. 149).
But, then why couldn’t this be viewed as a causal chain with emergent M as
an intermediate link? And now here is
the revealing clinker. Kim does not suppose that causation can be
understood simply as nomological sufficiency, as per
the antecedent in the quotation. He does
not think that the emergence of M from P can be regarded as a causal
relation. P* has P as its sufficient
cause (by invoking physical causal closure).
M is “otiose and dispensable” as a cause of
P*. M can serve as a cause of anything
only if it isn’t emergent but reducible to lower level properties (p. 150).
DOWNWARD CAUSATION AND
AUTONOMY IN WEAK EMERGENCE
Mark A. Bedau
The problem of emergence.
· Complexity Science as the science of ‘weak’ emergence.
· Two ‘hallmarks’ of emergence explain why emergence is controversial: (1)
emergent phenomena are dependent on underlying processes; (2) emergent
phenomena are autonomous from underlying processes.
· Explaining the way in which both (1) and (2) can hold may require some
‘revisionary metaphysics’ (p. 157).
Three kinds of emergence.
· ‘nominal emergence’: when a macro-property is
the kind of property that cannot be a micro-property. [dependency: nominally emergent properties are properties of macro wholes that
are dependent on their micro constituents; autonomy:
nominally emergent properties do not apply to the underlying entities.]
· ‘strong emergence’: when a nominally emergent
macro property also supervenes on properties at the micro level, and grounds
irreducible causal powers. [dependency: strongly emergent properties supervene on the
properties of their underlying base; autonomy:
strongly emergent properties ground irreducible causal powers.]
· ‘weak emergence’: when emergent causal powers can be explained from full
knowledge of the micro facts, but only in a certain ‘complex’ way [dependency: weakly emergent phenomena
are ontologically dependent on and reducible to their underlying
phenomena; “their existence is nothing more than the coordinated existence of
certain microphenomena”; autonomy: weakly emergent phenomena have explanatory autonomy
and irreducibility given
the complex way that they are generated.] (p. 160)
Weak emergence as underivability
except by simulation.
“Assume that P is a nominally
emergent property possessed by some locally [ontologically] reducible system
S. Then P is weakly emergent iff (Df.) P is derivable from all
of S’s micro facts but only by simulation” (p. 162). Compare this with the following sufficient
condition: “P is weakly emergent if in principle underivable
except by finite feasible simulation”.
That might seem too epistemic.
The definition is meant to express a formal
limitation, not an epistemic one: a Laplacian supercalculator could
not derive weakly emergent properties except by simulation. (p. 163).
“A derivation by simulation involves
the temporal iteration of the spatial aggregation of local causal interactions
among micro elements….Derivation by simulation is the process by which causal
influence typically propagates in nature….Natural
systems compute their future behavior by aggregating the relevant local causal
interactions and iterating these effects in real time. They “simulate” themselves
in a trivial sense…The behavior of weakly emergent systems cannot be determined
by any computation that is essentially simpler than the intrinsic natural computational
processes by which the system’s behavior is generated.” (p. 164. For more on this notion of simulation and of
‘natural computation,’ see the short paper by Stephen Wolfram, “Undecidability and Intractability in Theoretical Physics”
pp. 387-393 of your text.)
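A crude illustration of deriving a macro fact only by stepping through the micro dynamics (my own toy example, not Bedau’s; a one-dimensional cellular automaton stands in for his cases, and the rule number and sizes are arbitrary choices):

    import random

    RULE, WIDTH, T = 110, 80, 100                   # Wolfram rule number and sizes, chosen arbitrarily
    random.seed(1)

    cells = [random.randint(0, 1) for _ in range(WIDTH)]
    for _ in range(T):
        # apply the local rule to every cell's three-cell neighborhood, synchronously
        cells = [(RULE >> (4 * cells[(i - 1) % WIDTH]
                           + 2 * cells[i]
                           + cells[(i + 1) % WIDTH])) & 1
                 for i in range(WIDTH)]

    # the 'macro' fact asked after is reached only by iterating the micro rule T times
    print("density of live cells after", T, "steps:", sum(cells) / WIDTH)

Nothing here goes beyond aggregating and iterating the local interactions, which is the sense in which, on Bedau’s picture, such a system trivially “simulates” itself.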
Downward causation of weak emergence.
1. Ordinary downward causation is
unproblematic (cf. examples p. 177)
2. Weak downward causation is simply
a species of ordinary causation: macro causal powers are constituted by the causal powers of their micro
constituents, and these are typically so complicated that the only way to
derive their effects is by iterating their aggregative, context-dependent
effects – i.e., by simulation.
· Since a weak macro cause is identical with the aggregation and iteration
of micro causes, weak macro causation cannot violate micro causal laws.
· Since a weak macro cause is nothing more than the aggregation of micro
causes, macro and micro causes are not two things that can compete with each
other for causal influence.
· Weak downwards causation is diachronic; it cannot alter the conditions
from which it arose. (cf. pp. 177-8)
The autonomy of weak emergence.
But then, how could the explanations
of weak emergence be sufficiently autonomous?
If the underlying explanation of the macro phenomena is merely the
aggregation of micro-phenomena, how could there be macro level explanatory
autonomy?
Is ‘the autonomy’ merely epistemic?
(p. 179)
Sometimes, but not always. Sometimes, in addition to the
aggregative micro causal history, there
is an overarching, generalizing macro explanation that captures counterfactual
patterns not derivable from the
micro-history, such that if various details of that micro-history
had been different, if the micro explanation had varied in an indefinite
variety of different ways, the macro explanation would still have been
true. The macro-explanation is to that
extent autonomous from the micro-explanation, and contributes something to our
understanding of the weakly emergent phenomenon: why it was to be expected
independently of many of the underlying micro details, why it is a ‘universality’. (cf. transit strike
example, pp. 181-182).
[Compare, e.g., Fodor, and other
defenders of special science autonomy.]
“EMERGENCE AND
EXPLANATION,” CH. 6 OF BEING THERE
(1998)
Andy Clark
“What kinds of tools are required to
make sense of real-time, embodied, embedded cognition?” (p. 103)
3 Styles of Cognitive-Scientific
Explanation:
1.
Componential explanation;
explaining the functioning of the whole by detailing the individual roles and
overall organization of its parts. This is a kind of ‘reduction’, but contrasts
with traditional ‘theoretical reduction’, in
that it does not involve explaining by derivation of higher level laws
from lower level laws plus bridge principles, but involves explaining via the
development of a ‘partial model’. Note: the ‘whole’ can either be, e.g., a
brain, or a brain together with the body that it
is in and the surrounding environment in which the body is embedded (‘extended
mind’). Typical examples of such
explanation may involve the positing of representational functions to various
parts of the whole, where what is being represented is elements in the
environment and is determined by suitable interactions with those elements.
2.
‘Catch and Toss’ explanation;
like componential explanation, but the brain is flagged as central to
explanatory understanding. There is a primacy to ‘inner processing’. The world tosses inputs to the brain, which
catches them, and then processes them, leading to action in the world, while a
firm boundary is maintained between the brain and the rest of the world.
3.
Emergent explanation;
consider, e.g., convection rolls, a ‘self-organizing’ property of a contained
collection of liquid molecules. Once heat is applied and the rolling begins,
the convection feeds and maintains itself.
There is a sense in which the actions of the parts cause the overall
behavior, and simultaneously, the over-all behavior guides the actions of the
parts (‘circular causation’, p. 107).
Convection rolls are an instance of ‘direct
emergence’: the emergent phenomena can be tweaked by tweaking one of the
system’s external ‘control parameters’, in this case temperature. But the convection behavior is not centrally
or internally controlled. So, there is
emergence in this sense whenever interesting non-centrally-controlled behavior
ensues as a result of the interactions of multiple simple, homogeneous
components within a system. (p. 109)
One models convection cells by
introducing ‘collective variables’ (cf. p. 108) that fix on and track ‘higher
level’ features such as the behavior of the convection cell, and do not track
properties of the component molecules in the container. By plotting the values of such collective
variables as a system unfolds over time, we may come to understand important
facts about the actual and potential behavior of the larger system. And by plotting the relation between the
values of the collective variable and the system’s control parameters, we may
come to understand important facts about the circumstances in which the
higher-level patterns will emerge, when one will give way to another (e.g.,
bifurcations, chaos), etc.
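Convection itself would take a fluid simulation, but a standard toy system (my own illustration, not Clark’s) shows the shape of such a plot: sweep a single control parameter r of the logistic map and record the settled values of the state variable x, which here plays the role of a collective variable.

    def settled_values(r, transient=500, samples=80):
        """Iterate x -> r*x*(1-x), discard the transient, and return the distinct settled values."""
        x = 0.5
        for _ in range(transient):
            x = r * x * (1 - x)
        values = set()
        for _ in range(samples):
            x = r * x * (1 - x)
            values.add(round(x, 4))
        return sorted(values)

    for r in (2.8, 3.2, 3.5, 3.9):                  # illustrative control-parameter settings
        vals = settled_values(r)
        print(f"r = {r}: {len(vals)} settled value(s), e.g. {vals[:4]}")
    # one value, then two, then four, then many: the period-doubling route to chaos

Reading off where the number of settled values jumps as r is varied is a miniature of the bifurcation plots Clark has in mind.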
Sometimes functionally valuable side-effects are brought about by the
interactions of heterogeneous components, which ‘foregrounds’ the action
between behavior systems and local environmental structure (p. 109). The example is given of a robot programmed in
such a way that it will back into its recharging station, between two
poles. The charging station is indicated
by a light. The robot is programmed with
a phototaxis system that yields a zig-zag approach towards any light source, and an obstacle
avoidance system that causes the robot to turn away when it hits
something. That is all the programming
required. Clark calls this ‘indirect
emergence’ (because there is something fortuitous or undirected about it?).
Sometimes the emergent phenomenon
comes about as a result of ‘uncontrolled
variables’, where these are variables which track behavior or properties that
arise from the interaction of multiple parameters and hence tend to resist
direct and simple manipulation.
So Clark arrives at the following
general understanding of ‘emergent phenomena’:
a phenomenon is emergent if best
understood by attention to the changing values of a collective variable of a
system, where all uncontrolled variables are collective, and where sometimes we
must think of the system as (at times) extended to include aspects of its environment
(p.112)
There will be degrees of emergence, a la Wimsatt,
correlated with degrees of complexity of the interactions involved. Wimsatt’s
“aggregate systems” are limiting cases, for which componential explanation
alone is perhaps best suited, or sufficient.
The more complex and interesting systems are ones evidencing nonlinear relations among the
explanatorily salient variables, where a nonlinear relation between variables
is one in which “the two quantities do not alter in smooth mutual lockstep. Instead, the value of one quantity may (e.g.)
increase for some time without affecting the other at all, and then suddenly,
when some hidden threshold is reached, cause the other to make a sudden leap or
change.” (n. 8, p. 236)
It is Dynamic Systems Theory which provides the explanatory framework for
emergent explanations as sketched above.
Dynamic Systems Theoretic explanations owe their explanatory status to
their ability to help us find and learn to detect dynamic patterns (cf. p. 125) of behavior and illuminate counterfactual
aspects of the occurrence of those patterns wrt the
behavior of structural elements of the system.
But they do this without showing us how to build such a system from
simple well-understood phenomena.
Rather, DST explanations abstract from the micro-details of the physical
structure of the system to the topological
structure of the system’s dynamics. (pp.117 ff.)
But it is not a choice, insists
Clark, between emergent explanation and, say,
componential explanation or the computational modelling of catch and toss
explanation. We need all three. The system parameters tracked in DST
explanations of an agent’s behavior can be arbitrarily far removed from facts
about their real internal structure and processing (p. 118). To really understand a complex phenomenon, an
uncontrolled collective variable, it is at least necessary that we understand
“at least something” of how its behavior is rooted in the more basic
properties of the system’s biological or physical proper parts. What is really needed for full understanding
of the system is a kind of ‘explanatory interlock’ between all three modes of
explanation, where each mode of explanation constrains the others (pp. 125,
126).
“REAL PATTERNS”: Daniel
Dennett
A question raised by Clark’s account
of emergent explanation is the extent to which it nudges us towards a kind of
epistemological emergence; not Hempel’s kind, of
course, at least not to the extent of embracing Hempel’s
Deductive-Nomological conception of explanation, and
his conception of theoretical reduction to physics as our ontological
scientific goal, with talk of epistemological emergence as hopefully a mere
temporary expedient; but rather explanation as abstract model construction,
whose explanatory force lies centrally in the accuracy of its counterfactual
predictions about the behavior of the system modelled, and not necessarily in
giving us, or contributing to our having, an accurate representation of the
ontological nature of the system’s constitutive parts or even of its system
features.
Enter Daniel Dennett, who published
an influential paper which brought useful focus to the discussion of this very
issue. When we seem to discern patterns
or regularities in the behavior of a system, patterns which seem to help us understand
something about its nature, which can sometimes lead to the discovery of ways
to predict and control its behavior, what is the status of these patterns? Dennett identifies a number of dimensions of
this issue. Are patterns real? In what sense do they exist? Where are they located? If you cannot
discern the pattern, might it still be there -- are there indiscernible patterns?
One contrast offered is between the
idea of pattern and the idea of “utter randomness”. Dennett here invokes Gregory Chaitin’s definition of mathematical randomness of a series
of entities (dots, numbers, functions, whatever): it
is random iff the information required to describe
(transmit) the series accurately is incompressible
in the sense that nothing short of the full bit map will preserve the
series. The series is not random – has a
pattern – iff there is some more efficient way to
describe it (p. 193). Detecting the
pattern is just finding one of these more efficient ways of describing it.
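Chaitin-style incompressibility is not something we can compute exactly, but an off-the-shelf compressor gives a rough feel for the contrast (my own illustration, with zlib standing in, very crudely, for ‘the most efficient description’):

    import random, zlib

    random.seed(0)
    patterned = bytes([0, 1] * 5000)                              # an obvious repeating pattern
    noisy = bytes(random.getrandbits(8) for _ in range(10000))    # pseudo-random, no evident pattern

    for name, series in (("patterned", patterned), ("pseudo-random", noisy)):
        print(name, len(series), "bytes ->", len(zlib.compress(series)), "bytes compressed")
    # the patterned series admits a description far shorter than the verbatim 'bit map';
    # the pseudo-random one does not compress to any appreciable degree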
[As somewhat of an aside, a
question: Is there an interesting
connection between this notion of incompressibility and Bedau’s
notion of derivability of the occurrence of an emergent phenomenon from the
underlying base, but only by simulation?]
But what if the series is taken to be
compressible in a certain way only under the assumption that certain observed
deviancies from this would-be compression are the result of “noise”? So in adopting the compressed, more
efficient, description – in attributing the pattern – we seem then to be
idealizing away from our actual observations, our data, in order to embrace
it. This kind of idealization could
sometimes be thought of as a mere ‘rounding out’ idealization, if, as per
Dennett’s barcode example D, there are just a few stray pixels one has to
ignore in order to see it as a nice neat row of 5 black squares. In D there is just 1% variance from the
proposed compression. But what if it
were 25% variance like in his barcodes A and C?
What if it is 33% or even 50%, as with his barcodes E and F
respectively? Is the pattern really
there in F? We cannot eyeball it. Our only reason for thinking it is there is
that we have been told about the settings on the algorithm that generated
it. Is that enough?
Independent of this, a limitation of
Dennett’s barcode example for our context is that the kind of patterns that are
central to dynamic system theoretic explanations are diachronic patterns of
behavior, where the values of different system variables salient to the pattern
are tracked over time. The relevant
pattern, if there is one, is a pattern in the relations between the values of
the system’s parameters. So it isn’t
something that you can just eyeball in the system, like you can eyeball a
barcode. Maybe it is something you can
in effect detect by eyeballing a piece of paper on which the changing values of
these parameters over time have been recorded.
Dennett gives a definition of
pattern formulated so as to accommodate the above:
A pattern exists in
some data – is real – if there is a description of the data that is more
efficient than the bit map, whether or not anyone can concoct it. (p. 194)
[Questions:
(1) How (if at all) might this idea
of pattern be seen as accommodating or helping with the idea of emergent
phenomena?
(2) Does it provide a compelling
model for the existence of emergent
phenomenon, or just for the objectivity
of explaining something by alluding to a pattern or an emergent phenomenon?]
Now if there are a few apparent
counterexamples to the proposed compression in the data, they could be explained
away in various ways, such as limitations in the ways in which we are measuring
the values of these parameters. We could
suppose that the projected pattern really is there in the behavior of the
system, and that this would be increasingly confirmed with the improvement of
our measuring methods. So, ‘in
principle’ we could verify its presence.
One might think that the presence of idealization in our dynamic models
of the behavior of complex systems will always be like that: in principle
eliminable by the improvement of our empirical methods of observation and
measurement. Once eliminated, if our
hypothesized model has been sustained, we can now be assured that it represents
a feature of the system. But is all such
idealization in our explanatory models eliminable in principle like that?
[Batterman,
in one of your readings, argues that there are idealizations that appear to
make essential contributions to our explanatory understanding of certain
physical phenomena, that are not eliminable in
principle in this way.]
A NEW PROBLEM FOR
ONTOLOGICAL EMERGENCE
Daniel Heard
I.
Intro.
‘Emergent predicates’ are predicable
only of whole systems, and allow predictions that would be difficult or
impossible to derive from the dynamical laws plus boundary conditions alone;
e.g., ‘is convection rolling’. (p. 56)
Ontological emergence says that distinctively emergent predicates are to be explained through reference to distinct kinds of property – emergent properties.  [What is ‘distinct in kind’ about properties
that are said to be emergent?]
Epistemological
emergence denies that there is a distinct
class of emergent properties.  Rather, the distinction between emergent and non-emergent predicates is to be explained through reference to facts
about our epistemic status. (p. 56) [Such as our
inability to derive the emergent ones from the underlying dynamics and boundary
conditions?]
II.
The Old Problem.
Ontological emergentism
is incompatible with ‘ontological minimalism’: the latter is committed to ‘mereological supervenience’ of
all system properties, whereas emergentism is
committed to the denial of this. [Really??] This is ‘the old problem’ of ontological
emergence. [It is?]
Mereological supervenience is the view that the properties
of the system as a whole are fixed by the properties, including relational
properties, of its constituent parts. (p. 57) [Would reference to such fixedness
amount to an explanation of the system properties?]
But this incompatibility seems to amount, au fond, to a ‘clash of intuitions’ (question-begging in both directions) about the role of ontological parsimony in theory construction, which leaves things in a stalemate. (p. 58)
III.
The New Problem
An emergent predicate like ‘is
convection rolling’
(1)
applies only to a
whole system, like a flask of oil, not to its individual lipid molecules.  The latter predication would make no sense (p.
59).
(2)
The usefulness of predicating an emergent predicate of
a system is that it can enable us to make inferences about the behavior of the
system without invoking underlying dynamical laws; e.g., in this case one can
predict the motion of a ‘marker’ dropped into the oil: that it will follow a toroidal path, rising up the center of the flask and
sinking down again at its edges. (p. 56).
And it is for these reasons that the
ontological emergentist wants to reify emergent
properties as the referents of emergent predicates. But the problem with this, ‘the new problem’
of ontological emergence, is that both of these sorts of reasons are also satisfied by abstract mathematical
predicates such as ‘satisfies the central limit theorem’. (The central limit theorem states that a
large number of uncorrelated deviations from a mean yield a normal distribution
about that mean.) Such a predicate
applies only to systems and not to their individual constituents (i.e.,
satisfies (1)), and allows us to make predictions about the system without deriving those predictions from underlying dynamical laws (i.e., satisfies (2)). (p. 60)
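As a concrete gloss on the parenthetical statement of the theorem, here is a minimal sketch in Python (my own illustration, not Heard's); the sample sizes are arbitrary choices.

```python
import random
import statistics

# Illustration (not from Heard): many uncorrelated deviations from a mean, aggregated,
# are approximately normally distributed about that mean.
random.seed(1)
sums = [sum(random.uniform(-1, 1) for _ in range(200)) for _ in range(10_000)]

mu = statistics.mean(sums)
sigma = statistics.stdev(sums)
within_one_sigma = sum(abs(s - mu) <= sigma for s in sums) / len(sums)

print("mean:", round(mu, 2))                            # close to 0
print("stdev:", round(sigma, 2))                        # close to (200 / 3) ** 0.5, about 8.2
print("within one stdev:", round(within_one_sigma, 3))  # about 0.68, as for a normal curve

# Note: 'satisfies the central limit theorem' is predicated of the whole ensemble of
# deviations; it makes no sense as a predicate of any single draw, which is point (1).
```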
So if the ontological emergentist is going to reify the property of being a convection roll, she must by the same token
reify the property of satisfying the
central limit theorem. “But this
will yield what is to many a very implausible ontology indeed”. (p. 60)
IV.
Difficulties For Ontological
Responses to The New Problem.
Ontological emergentists
cannot simply bite this bullet. The
reason is that they are committed to the failure of mereological
supervenience.
[No they are not] “…they think that the emergent properties cannot be
explicated [note the leap from mereological supervenience to explication] purely in terms of the
properties of the constituents of an emergent system….Yet this is clearly false…”
for mathematical predications like ‘satisfies the central limit theorem’, which
are defined as a mathematical
operation on properties of the constituents (in this case on uncorrelated deviations from a
mean). (p. 61) [This
all seems very confused.]
If this bullet cannot be bitten, then
the only other option is to distinguish between causally efficacious [concrete] emergent properties and the others [abstract], and to only
reify the former. Then the property of
being a convection roll will be reified, but not the property of satisfying the
central limit theorem. But this begs the
question against the opponent of ontological emergence, who will argue that all
causation takes place at a more fundamental level. (p. 61)
[Sigh.]
But, concludes Heard, surely the
explanatory work done by real emergent properties could be done without the
need to postulate them, as in various accounts of epistemological emergence,
such as those elaborated by Bedau, Clark, and Batterman.  So ontological emergence should be abandoned (pp. 61-62).
[But are Bedau,
Clark, and Batterman plausibly construed as ‘epistemological emergentists’ as
characterized by Heard? (Cf. n.2 p. 56)
Recall that Bedau
defines weakly emergent properties as derivable from all of the system’s micro
facts, but only by simulation, where this is meant as a formal limitation: a Laplacian supercalculator could not derive weakly emergent properties
except by simulation. That doesn’t
sound like epistemological emergence in Heard’s
sense.
Clark takes emergent phenomena to be
those “…best understood by attention
to the changing values of a collective variable of a system, where all
uncontrolled variables are collective,….” The patterns of dynamic behavior that we
detect using Dynamic Systems Theory are there to be detected in the system, and
the topological structure of the dynamics within which they are embedded gives
us counterfactual understanding of
the behavior of the system. That doesn’t
sound like the emergent phenomena are being explained “through reference to
facts about our epistemic status”.
Batterman, as we shall see (but in a more recent paper than the work cited by
Heard), in discussing the role of mathematical idealizations in the explanation
of empirical phenomena, distinguishes between “Galilean” and “non-Galilean”
idealizations. The Galilean ones are
explanatory stand-ins that we regard as potentially removable through further
work. So the view that all explanatory
idealizations are Galilean is naturally viewable as epistemically
motivated. But Batterman
makes clear that he is committed to the existence of ineliminable
non-Galilean mathematical idealizations in theoretical explanations of
empirical macro-level phenomena. So here
is one place, at least, where he seems to eschew a merely epistemic
perspective. But does that make him an
ontological emergentist? It is not clear. His paper is noted for its studied avoidance
of the issue of the existence of mathematical structures, for instance, and
instead focuses on the mathematical processes of asymptotic reasoning, and of mathematical ‘limit operations’.  But does that make him an epistemic emergentist after all?
Well maybe in some sense, but
not in Heard’s sense. Batterman
is not referring the distinction between emergent and non-emergent predicates
to just facts about our epistemic status.
He seems to be referring it to facts about the nature of ineliminable
logical or inferential relations between our ground level theories and our
explanations of ‘emergent’ phenomena.
The properties of the singularities we reach asymptotically are, for Batterman, objectively
explanatorily revealing.
At the very least this suggests that
in order to correctly classify Batterman’s views on
emergence we would need a richer taxonomy of options than presented by Heard.
“STRONG AND WEAK
EMERGENCE,” DAVID
CHALMERS
I.
Two Concepts of Emergence (1st
approximations)
A
high level phenomenon is strongly
emergent wrt a low-level domain when the high
level phenomenon arises from the low- level domain, but truths concerning that phenomenon are not deducible
even in principle from truths in the low level domain (or: high level truths
are not conceptually or metaphysically necessitated by low level truths – see n. 1).  Note that this is meant to be incompatible with physicalism.  New fundamental laws are required in addition to the fundamental laws of physics.
A high level phenomenon is weakly emergent wrt
a low level domain when it arises from the low level domain, but truths
concerning that phenomenon are unexpected given the principles governing the
low-level domain; e.g., the emergence of high level patterns in cellular
automata. Note that this is compatible
with physicalism.
“If one wants to use emergence to
draw conclusions about the structure of nature at the most fundamental level,
it is only strong emergence that is relevant.”
Weak emergence, on the other hand, can be used to support the physicalist picture of the world.
II.
Strong Emergence
Are there strongly emergent
phenomena? Yes: consciousness. Facts about consciousness – about what
something feels like from the system’s own perspective -- are not deducible from physical facts. Although in our world facts of human
consciousness are strongly correlated with physical facts about human brains, it is logically coherent
to suppose a world physically identical
to this one in every respect yet lacking consciousness.
We must distinguish between
deducibility from low level laws and deducibility from low level facts.
“Medium Emergence” (‘intermediate but
still radical’): high level facts and laws not deducible from low level laws
plus initial conditions. Note that this
implies the “incompleteness” of physical laws, and
with it the failure to deduce some of the low
level facts from low level laws plus initial conditions. Why?
Because if all the low level facts were deducible, then presumably a Laplacian demon
would be able to deduce the high level facts from there. [Would the Laplacian
demon also be able to deduce the high level laws?]
Two concepts of downward
causation:
With strong downward causation, the causal impact of a high level
phenomenon on low level processes is not deducible even in principle from
initial conditions and low-level laws.
With weak downward causation, the causal impact of the high-level phenomenon is deducible
but is nevertheless unexpected.
These are both coherent ideas, and
stand independently of the notions of strong and weak emergence. One can
conceive of strong downward causation without emergence.  An example from quantum mechanics may be the so-called “collapse of the wave packet” due to measurement.
The idea in “medium emergence” about
the incompleteness of
physical laws can be understood in terms of downward causation.  “Such causation
requires the formulation of basic principles which state that when certain high
level configurations occur, certain consequences will follow….These
consequences will themselves either be cast in low
level terms, or will be cast in high level terms that put strong constraints on
low-level facts. Either way, it follows
that low level laws will be incomplete as a guide to both the low level and
the high level evolution of processes in the world.”
III.
Weak Emergence
Examples: The game of life; connectionist networks;
the overloading threshold of an operating system; the evolution of intelligence
(p. 9)
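To illustrate the first of these examples, here is a minimal sketch in Python (my own construction, not Chalmers'): the 'glider' of the Game of Life, a high-level pattern that translates itself diagonally across the grid, is unexpected given only the local birth/survival rules, yet is fully deducible from them by simulation.

```python
from collections import Counter

# One Game of Life step under the standard rules: a cell is live next generation
# iff it has exactly 3 live neighbours, or it is live and has exactly 2.
def step(live):
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# The 'glider': five live cells whose collective behaviour is a moving pattern.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

cells = glider
for _ in range(4):
    cells = step(cells)

# After four steps the same shape reappears, shifted one cell diagonally.
print(cells == {(x + 1, y + 1) for (x, y) in glider})   # True
```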
Re. Clark’s ‘nominal emergence’: too ubiquitous to be interesting.
Re. ‘deducibility
without reducibility’: this would encompass lots of functional properties that
do not seem unexpected or interesting
enough to be weakly ‘emergent’.
Some elaborations of weak emergence,
arriving finally at:
A weakly emergent property of a
system is an interesting property that is unexpected, given the underlying
principles governing the system. E.g.,
maybe raw consciousness was not selected for, but it
somehow emerges as an unexpected by-product (or ‘spandrel’) of selection for
adaptive processes such as intelligence.
Apart from the interesting example,
this seems very close to his initial characterization, except for the
additional qualifier “interesting”.
TERRENCE W. DEACON ON
EMERGENCE
[‘ECE’: “Eliminativism,
Complexity, and Emergence” (with Tyrone Cashman), (2007). ‘HWH’: “Emergence: the Hole at the Wheel’s
Hub” (2006). ‘HLE’: “The Hierarchic
Logic of Emergence: Untangling the Interdependence of Evolution and
Self-Organization” (2003).]
Ontological emergence.
What emerges? “The answer is not some ‘thing’, but rather
something like a form, or pattern, or function.
The concept of emergence seems to apply to phenomena in which
relational properties tend to dominate over constituent properties in
determining aggregate features….it is with respect to the configurations and
topologies, not the specific properties of constituents that we trace processes
of emergence” (HLE, 276).
“Emergence is about the topology of
causality.” (HLE, 281)
“In functionalist accounts, only certain substrate
details are essential; others may vary without functional consequence. But function is defined extrinsically, at
least in computer science and cognitive theories, because function is a
semiotic distinction, not a physical one.
What is the physical analogue? It
is…something like topology: the form, configuration, or distribution of
component features.” (HLE, 282)
“…topology is not just a descriptive
feature of a physical system, it is a constitutive
fact about the spatio-temporal relationships among
component elements and interactions with intrinsic causal consequences.” (HLE,
282)
“What needs to be explained…is not a new form
of causality, but how some systems come to be dominated by their higher order
topological properties so that these appear to “drag along” component
constituent dynamics, even though, at the same time, these higher order
regularities are also constituted by lower-order interactions. In other words, an explanation of how
topologies come to make a difference is required. I believe that the secret to explaining this
lies in what can be called amplification processes. I believe that we can understand emergent
phenomena as all being variant forms of what might be called topological reinforcement or amplification in pattern formation….I
will argue that a kind of compound interest of topologies is the basis
for all forms of emergence. We are
justified in calling something “emergent,” I will suggest,
if it is the result of a recurrent amplification of configuration or
topology. This recurrent architecture is
itself a topological concept, so in some sense emergence is a special case of
topological transformation of topologies.
Wherever it occurs, amplification is accomplished by a kind of repeated
superimposition of similar forms….In all cases, it is a form that is
amplified; repetition either multiplies the number of its appearances in some
physical medium, or else gets it embodied in a progressively larger fraction of
the physical medium (as in sound amplification). Thus amplification is a kind of compound
interest of physical form….amplification occurs
because of iterated superimposition of events sharing the same form
occurring across levels of scale”. (HLE, 283-4)
Other causal topologies contributing
to such amplification: ‘circular causation’ in positive and negative
feedback systems. (cf. HLE 284-5)
“By ‘amplified’ I mean something like
“come to be more coherently expressed over ever more extensive scales of both
space and time.” (HLE, 286)
“The critical role played by ascent in scale in physical emergence is
that it creates the context for causal circularity and amplification. It affords the substrate for structural
influences to recirculate, so to speak.” (HLE, 287)
“The causal architectonic feature I
will use as my central diagnostic feature might be described as trans-scale causal recursion,
that is, circles of deviation amplifying causality that develop up levels of
scale. (HLE, 287)
COMPARE:
“…the key to understanding emergent
phenomena is to understand organization in terms of what is not included, not
realized, not present…a “constitutive absence,” created by a “…constraint:
that which reduces the possible degrees of freedom of a process…. E.g., self-organisation is merely an intrinsically arising asymmetric
change from more to fewer dynamical tendencies, which results when a system is
continually perturbed, or pushed away from thermodynamic equilibrium.” (ECE,
200)
“The principal hypothesis of this
essay is that emergent phenomena grow out of an amplification dynamic that can
spontaneously develop in very large ensembles of interacting elements by virtue
of the continuing circulation of interaction constraints and biases, which
become expressed as system-wide characteristics. In other words, these emergent
forms of causality are due to a curious type of circular connectivity of causal
dynamics, not a special form of causality.
This circularity enables certain distributional and configurational
regularities of constituents to reinforce one another iteratively throughout an
entire system.” (HWH: 124)
‘1st order
emergence’: simple higher order
properties of an aggregate, such as the statistically or stochastically
determined phase states of matter, e.g., liquidity, whose occurrence can,
without distortion, be regarded as synchronically related to the underlying
properties of the liquid molecules. (HLE, 288)
‘2nd order emergence’:
“Whereas micro-configuration can be ignored in 1st order emergent
systems, with minimal loss of descriptive adequacy, this is not the case for
systems exhibiting 2nd order emergence, such as chaotic and self-organised phenomena”.
Also ‘autopoietic’ systems. See discussion of snow crystal formation, and
Kauffman’s ‘autocatalytic’ sets. (HLE,
294-6) What these systems of 2nd
order emergence have in common is “…a kind of tangled hierarchy of causality
where micro-configurational particularities can be
amplified to determine macro-configurational
regularities, and where these in turn further constrain and/or amplify
subsequent micro-configurational regularities.” (HLE,
296)
3rd order emergence’: adds some form of information or memory to 2nd
order emergence, and a developmental or evolutionary (thus temporal,
diachronic) character, adding an “…additional loop of recursive causality,
enclosing the 2nd order recursive causality of self-organised systems.”  E.g., life even in its simplest forms.  (Re. consciousness, see last 2 sentences, HLE, 306)
SELF, SENTIENCE, AND
CONSCIOUSNESS,
from the final three chs.
of
INCOMPLETE NATURE: HOW MIND EMERGED FROM MATTER, by TERRENCE W. DEACON,
W.W. Norton & Co.,
(N.Y., 2012, pp. 463 ff.)
Definitions from the Glossary (pp. 547-53):
Emergence: A term used to designate an apparently discontinuous transition from
one mode of causal properties to another of a higher rank, typically associated
with an increase in scale in which lower–order component interactions
contribute global properties that appear irreducible to the lower order
interactions. The term has a long and
diverse history, but throughout this history it has been used to describe the
way that living and mental processes depend upon chemical and physical
processes, yet exhibit collective properties not exhibited by non-living and
non-mental processes, and in many cases appear to violate the ubiquitous tendencies
exhibited by these component interactions.
Strong Emergentism: The argument that
emergent transitions involve a fundamental discontinuity of physical laws.
Weak Emergentism: The argument that
although in emergent transitions there may be a superficially radical
reorganization, the properties of the higher and lower levels form a continuum,
with no new laws of causality emerging. Often associated with epistemological emergentism
because it is attributed to incomplete knowledge of the critical causality.
Emergent dynamics: A theory developed in this book which explains how homeodynamic
(e.g., thermodynamic) processes can give rise to morphodynamic
(e.g., self-organising) processes, which can give
rise to teleodynamic (e.g., living and mental)
processes. Intended to legitimize
scientific uses of ententional (intentional,
purposeful, normative) concepts by demonstrating the way that processes at a
higher level in this hierarchy emerge from, and are grounded in, simpler
physical processes, but exhibit reversals of the otherwise ubiquitous
tendencies of these lower-level processes.
Constraint: The state of being restricted or confined within prescribed
bounds. Constraints are what is not
there but could have been. The concept
of constraint is, in effect, a complementary concept to order, habit, and
organization because something that is ordered or organized is restricted in
its range and/or dimensions of variation, and consequently tends to exhibit
redundant features or regularities. A
dynamical system is constrained to the extent that it is restricted in degrees
of freedom to change and exhibit attractor tendencies. Constraints can originate intrinsic or
extrinsic to the system that is thereby constrained.
Attractor: An attractor is a “region” within the range of possible states that a
dynamical system is most likely to be found within. The behavior of a dynamical system is
commonly modeled as a complex “trajectory of states leading to states” within a
phase space (typically
depicted as a complex curve in a multidimensional graph). The term is used here to describe one or more
of the quasi-stable regions of dynamics that a dynamical system will
asymmetrically tend toward. Dynamical
attractors include the state of equilibrium of a thermodynamic system, the
self-organized global regularity converged upon by a morphodynamic
process, or the metabolic maintenance and developmental trajectory of an
organism (a teleodynamic system). An attractor does not “attract” in the sense
of a field of force; rather it is the expression of an asymmetric statistical
tendency.
Absential:
The paradoxical intrinsic property of existing with respect to something
missing, separate, and possibly nonexistent.
Although this property is irrelevant when it comes to inanimate things,
it is a defining property of life and mind.
Elsewhere described as a constitutive absence.
Constitutive absence: A particular and precise missing something that is a critical defining
attribute of “ententional” phenomena, such as
functions, thoughts, adaptations, purposes, and subjective experiences.
Homeodynamics: Any dynamic process
that spontaneously reduces a system’s constraints to their minimum and thus
more evenly distributes system properties across space and time. The second law of thermodynamics describes
the paradigm case. [previously ‘1st order
emergence’?]
Morphodynamics: Dynamical organization exhibiting the
tendency to become spontaneously more organized and orderly over time due to
constant perturbation, but without the extrinsic imposition of influences that
specifically impose that regularity. [‘2nd order emergence’]
Teleodynamics: A form of dynamical
organization exhibiting end-directedness and consequence-organized features that
is constituted by the co-creation, complementary constraint, and reciprocal
synergy of two or more strongly coupled morphodynamic
processes. [‘3rd order emergence’]
SELF.
How did we come to be creatures with
selves?
In trying to answer this, don’t start
with the problem of human subjective experience. Start small.
What is the most minimal case where we can feel justified in identifying
something vaguely like self? Build up
from there an account of the dynamical architecture of subjective self.
Proposal: start with a simple living organism (sans brain).  All aspects of its constitution are organized around the maintenance and perpetuation of this form of organization.
“It is the circularity of this consequential architecture – teleodynamics
– that both delineates and creates the individuality that is ‘organism
self’….Understood in this more general sense, self is not a property limited to
organisms with brains like humans.” (pp. 465-6). So subjectivity is not a critical defining
feature of self. “In an organism…each
functional feature embodies a trace of the whole individuated organism,
reflecting the coherent influence of the whole and contributing to its future
coherence. This is the essence of
reflexive individuation: a compositional synergy, functioning to determine its
constituents in a way that both embodies and reinforces their synergistic
relationship. The whole/part hierarchy
thus becomes inextricably tangled.” (p. 469)
The ‘mental self’ is subordinated to
and nested within the more general form of self that is characteristic of all
living things. The ‘subjective self’ is
to be identified with a locus of the mental self’s neurological teleodynamics.  The ‘self-as-agent’: the generation of
interactive constraints which do work to perpetuate the reciprocal maintenance
of the constraints that maintain organism self.
The Extensionless Cogito:
“There is no ghost in the organic
machine and no intender serving as a witness to a Cartesian theater. The locus of self-perspective is a circular
dynamic, where ends and means, observing and observed, are incessantly
transformed from one to the other.” (p. 483-4)
“Thus autonomy and agency, and their
implicit teleology, and even the locus of subjectivity, can be given a concrete
account. Paradoxically, however, by
filling in the physical dynamics of this account, we end up with a non-material
conception of organism and neurological self, and by extension, of subjective
self as well: a self that is embodied by dynamical constraints. But constraints are the present signature of
what is absent. So, surprisingly, this
view of self shows it to be as non-material as
Descartes might have imagined, and yet as physical, extended, and relevant to
the causal scheme of things as the hole at the hub of a wheel.” (p. 484)
SENTIENCE.
Like selves, there are also grades of
sentience.
“Reframing the concept of sentience
in emergent dynamical terms will allow us to address questions that are not
often considered to be subject to empirical neuroscientific
analysis. Contrary to many of my
neuroscience colleagues, I believe that these phenomena are entirely available
to scientific investigation once we discover how they emerge from lower-level teleodynamic, morphodynamic, and
thermodynamic processes. Even the
so-called hard problem of consciousness will turn out to be reconceptualized
in these terms. This is because what
appeared to make it hard was our predisposition to frame it in mechanistic and
computational terms, presuming that its intentional content must be embodied in
some material or energetic substrate.” (p. 489).
“The central claim of this analysis
is that sentience is a typical emergent attribute of any teleodynamic
system. The distinct emergent
higher-order form of sentience that is found in animals with brains is a form
of sentience built upon sentience” (p. 508)
“…sentience is constituted by the
dynamical organization, not the stuff (signals, chemistry) or even the neuronal cellular-level sentience that constitutes the
substrate of that dynamics” (p. 510)
CONSCIOUSNESS
“…we have broken the spell of dualism
by focusing attention on the contributions of both what is present and what is
absent. Surprisingly, this even points the
way to a non-mystical account of the non-materiality of consciousness. The apparent riddle of its non-materiality
turns out not to be a riddle after all, but an accurate reflection of the fact that
the locus of subjective sentience is not in fact a material substrate. The riddle was not the result of any problem
with the concept of consciousness, but of our failure to understand the causal
relevance of constraint. With the realization
that specific absent tendencies – dynamical constraints – are critically
relevant to the causal fabric of the world, and are the crucial mediators of
non-spontaneous change, we are able to stop the search for consciousness “in”
the brain or “made of” neural signals….We are what we are not [so, not what we
eat!]: continually, intrinsically, necessarily incomplete in our very
nature. Our sense of self, our
experience of being the originative locus of agency, our interior subjective
isolation, and the sense of emerging out of nothing and being our own prime
mover – all these core characteristics of conscious experience – are accurate
reflections of the fact that self is literally sui generis, emerging each moment from what is not there. There can be no simple and direct neural
embodiment of subjective experience in this sense. This is not because subjectivity is somehow
other-worldly or non-physical, but rather because neural activity patterns
convey both the interpretation and the contents of experiences in the negative,
so to speak; a bit like the way that the space in a mold represents a potential
statue. The subjectivity is not located
in what is there, but emerges quite precisely from what is not there….I believe
that despite its counterintuitive negative framing, this figure/background
reversal of the way we conceive of living and mental causality promises to
reinstate subjective experience as a legitimate participant in the web of
physical causes and effects, and ultimately reintroduce intentional phenomena
back into the natural sciences. It also
suggests that the subtitle of this book is slightly misleading. Mind didn’t exactly emerge from matter, but
from constraints on matter.” (pp. 534-8)
“A ROLE FOR MATHEMATICS
IN THE PHYSICAL SCIENCES”
By Christopher Pincock (Nous, 2007)
I.
Mathematics can seem theoretically indispensable for physical science,
and yet metaphysically dispensable, contra the Quine/Putnam
argument. Even if mathematics is
theoretically indispensable, it seems to play a different role in physical
theories than the positing of theoretical entities.
Field: argues for the metaphysical dispensability of math by arguing for its theoretical dispensability after all, via a nominalization of physical theory.
Balaguer, Azzouni, and Yablo argue for metaphysical dispensability directly in various ways, without insisting
on a nominalization of the science.
“Impatient Naturalism”: the view that
one can accept both theoretical indispensability and metaphysical
dispensability by simply appealing to standards of scientific practice (Burgess
and Rosen), without having to come up with an explanation.
“Patient Naturalism”: the view that
one should consider various proposed explanations and wait for one that works.
Proposed role of mathematics in
physical science: it allows us to make claims about higher-order or large-scale
features of physical systems, while remaining neutral about the basic or
micro-scale features of such systems.
But this only works on the supposition that the pure math in question
has a large measure of (mathematical) confirmation prior to its application.
II. III. Can we defend theoretical
indispensability by rejecting metaphysical dispensability,
say on the grounds of math’s causal irrelevance, because it only has a
representational, mapping, or indexing function?
This would seem to fall prey to Field’s
nominalistic strategy.
But what about ‘abstract’
explanations in physical theory that do not seem to pivot on a mapping role, or
on a coordinate system for units of measurement, but instead on appealing to
formal relational features of the system being explained? Cf. the bridges of Konigsberg example; also Batterman’s asymptotic explanations which simplify a
fundamental mathematical law by taking one or more of the quantities that it
relates to a limit, such as 0 or infinity.
Apparently on a Field program these will require their own separate
representation theorems (involving a mapping, plus axioms characterizing
salient physical properties of the system).
A defender of theoretical indispensability who accepts such mappings can, contra Balaguer,
block the proof of a representation theorem by finding reasons to doubt or
withhold assent from some of the axioms (say because of their current lack of
empirical support).
Yet even in the face of this,
scientists can and do offer abstract mathematical descriptions of physical
systems and have these descriptions confirmed to a reasonable degree, even when
in ignorance of many of the physical features of the system.  This amounts to an indispensable epistemic
role for mathematics that is consistent with its metaphysical dispensability,
and at the same time undermines the role of indispensability arguments for platonism.
IV. Can we accept the
indispensability of mathematics to empirical explanation without adjusting our
ontological commitments in any way, a la Balaguer, Azzouni and Yablo? Surely we at least have to offer an account
of the subject matter of pure mathematics, assigning truth values to its
statements that accord with mathematical practice – whether these be Platonist or nominalist.
Balaguer’s nominalistic principles:
(NC) Empirical science has a purely nominalistic content that captures its “complete picture”
of the world
(COH) It is coherent and sensible to
maintain that the nominalistic content of empirical
science is true and the platonistic content of
empirical science is fictional.
(TA) Empirical theories use
mathematical-object talk only in order to construct theoretical apparatuses (or
descriptive frameworks) in which to make assertions about the physical world.
TA is supported by NC and COH, which
are in turn supported by a Principle of Causal Isolation for mathematical
objects. ‘Mixed facts’ supervene on
bottom level physical facts and possibly on purely mathematical facts. But those latter are not needed except to
underpin our claims that statements of pure math that we accept are in fact
true.
But what if we do not have a good
understanding of the physical facts responsible for the physical phenomenon we
are investigating? Given such ignorance,
NC and COH are surely in doubt for the nominalist. In science, at least, it is surely not
rational to fix one’s commitments using an indeterminate collection of
claims! Note that the nominalistic content of the bridges of Konigsberg is left
completely unspecified, so how will the nominalist deal with this example?
How about replacing COH with COH*?
(COH*) it is coherent and sensible to maintain that
the actual bottom-level physical facts render the nominalistic
content of empirical science true and the platonistic
content of empirical science fictional.
But what is the empirical evidence
for that claim? This would require
rejecting Quine’s confirmational
holistic criterion for existence, via the existential quantifier.
V.
Yablo’s “figuralism”; Azzouni’s
distinction between ‘quantificational commitment’ and genuine ontological
commitment. (The latter would raise the question why scientists are committed to mathematical entities, via quantificational statements, yet are not thereby ontologically committed.)
VI. Objection: Don’t these worries
apply only to those cases of mathematical application that involve
idealization? Reply: But those cases are central to the use of
math in scientific theories.
Another objection: Can’t these concerns be limited to just the
portion of math that we actually do need and use in physical theory (so not
requiring a full representation theorem)?  Reply: it
remains to be seen.
Wants to conclude from all of this:
that we cannot, at present, make our commitments in our mathematical scientific
theories determinate without taking the mathematics we use seriously. All of this may be consistent
with the eventual removal of mathematics from our scientific theories. But for now, by including it, scientists can
formulate definite claims that zero in on those aspects of the physical
situation that they wish to take a stand on, while remaining neutral about the
aspects they have yet to understand and that are not relevant to the phenomena
that they do understand.
“ON THE EXPLANATORY
ROLE OF MATHEMATICS IN EMPIRICAL SCIENCE”
Robert W. Batterman
Are there genuine mathematical
explanations of physical phenomena?
Assuming so, how do they work?
(p. 1)
If
mathematical objects are abstract and acausal, and if the natural sciences are fundamentally
causal, then it would seem that mathematical objects cannot play a fundamental
explanatory role. (p.2)
But, as Pincock
has argued, there are acausal ‘abstract’ explanations
of empirical phenomena, such as the explanation of the bridges of Konigsberg, abstracting
away from the physical details to their exhibiting the structure of a non-Eulerian graph. (p. 3)
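For concreteness, here is a minimal sketch in Python of what the abstract explanation turns on (my own illustration of the standard Euler argument, not Batterman's or Pincock's text; the vertex labels are mine): only the degree parities of the multigraph matter, not any physical detail of the bridges.

```python
from collections import Counter

# The four land masses (A: the island; B, C: the two river banks; D: the eastern area)
# and the seven bridges of historical Königsberg, treated purely as a multigraph.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd_vertices = [node for node, d in degree.items() if d % 2]

# Euler: a connected multigraph admits a walk crossing every edge exactly once
# iff it has zero or two vertices of odd degree.  Here all four vertices are odd.
print(sorted(odd_vertices))                            # ['A', 'B', 'C', 'D']
print("walk possible:", len(odd_vertices) in (0, 2))   # walk possible: False
```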
Pincock classifies ‘asymptotic’ explanations with his “abstracting”
explanations, but this is wrong, to the extent that
the former do not proceed by focusing on abstract structure realized by the physical system. (p. 3)
For instance, to explain why distinct
fluids behave the same way at their respective critical temperatures (i.e., at
phase transition) one introduces a ‘limiting idealization’ known to be false. And that limiting idealization is
explanatorily essential. (pp. 6-7)
“…one can explain the fact that the spacings and intensities of the bows of rainbows are the
same, despite the fact that the (causal) details of how each rainbow gets
formed will be completely distinct -- for instance the sizes and shapes of
the rain drops will vary from rain shower to rain shower. The explanation in this case also involves
the taking of a limit: In order to
explain the universal pattern we witness in rainbows, we need to examine the
wave theory of light in the limit as the wavelength goes to zero. When the wavelength equals zero, we are in
the domain of ray theory or geometrical optics, and it turns out that stability
properties of ray-theoretic structures explain the universal rainbow pattern
that we witness.” (p. 8).
Mapping/structuralist
accounts of the role of mathematics in explanations of physical phenomena posit
a mapping between some mathematical structure and a physical structure. (p. 8)
That works for the abstract
explanation of the bridges of Konigsberg, but not for asymptotic explanations, whose idealizations
are false but ineliminable. (p. 8)
…various attempts by mapping account
theorists to accommodate the explanatory role of mathematical idealization:
matching models, partial mappings…
These attempts seem to presuppose
that idealizations are perfectly respectable in science, provided they are
“Galilean”, i.e., provided one can give, in principle, a story about how
they might be removed through further work.
(p. 16)
But the interesting cases of
idealization are those for which no de-idealizing story is possible, even in
principle, including the universalities at singularities/phase
transitions.  Call these “non-Galilean”. (p. 17)
There are no possible physical
structures analogous to such mathematical structures that might be pointed to
by the limiting mathematical operations in asymptotic explanations. (p. 19).
Question: Does the physical world dictate
in any way the kind of mathematics that must be used?
Answer: yes.
In order to explain the robustness
(the repeatability) of the patterns we see, our mathematical representations
have to be stable under changes of
detail.  One way to achieve such stability
is through the taking of a variable to its limit. …this can lead us to focus on
mathematical singularities that can emerge in those limits. (p. 20)
“…if one’s interest is in
understanding the robustness of the patterns of behavior that we see, a focus
on regularities and lawlike equations very often
turns out to be the wrong place to look!
We need to understand why we have these regularities and
invariances. We need, that is, to ask for an explanation of those very regularities
and invariances. This is the
fundamental explanatory question…. To explain and understand the robustness of
patterns and regularities, one sometimes needs to focus on places where those
very regularities break down.…this is one way to begin to understand the
effectiveness of mathematics in applied situations. Furthermore, it is an approach that is
completely orthogonal to structuralist/mapping
accounts that take explanations necessarily to involve static representational
maps.” (p. 21)
These non-Galilean idealizations
“…play essential explanatory roles involving operations or mathematical
processes without representing the system(s) in question”. (p. 23)
We must look to the world as the
‘driving influence’ for how mathematics gets applied, rather than to fortuitous
analogies between mathematical structures and physical structures. “…the world itself tells us that a certain
kind of mathematical language is required for genuine understanding” (p. 24).
Discussion question: How does that world tell us this?
How does the world constrain our non-Galilean mathematical modeling?  And what, if anything, does this tell us about the world?  Is
it that these false idealizations “point” to something about the world, about
the system in the world being explained?
Not the something that is literally
expressed by the idealization, by its ‘literal truth conditions’ as it were,
since it is non-Galilean.
“EMERGENT PHYSICS AND
MICRO-ONTOLOGY”
Margaret Morrison
(Highlights):
Main examples considered:
macro-physical universalities of symmetry breaking at critical points/phase
transitions: e.g., superconductivity, ferromagnetism.
Macro-physical universalities are
higher-order organizing principles that cannot be deduced from their
underlying microphysics (cites Laughlin and Pines; cf. p. 149). Quite the opposite: certain aspects of whatever
micro-physics happens to underlie the emergent phenomenon of a system will be
deducible from it. As an explanatory
strategy, then, emergence is the converse of micro-physical reduction, a kind
of macro-physical reduction, if you will.
The macro-physical universalities of
critical point emergence do not supervene on the underlying
microphysics. This is not because of
‘fusion’, a la Humphreys, but because once the system reaches the critical
point, the universal behavior (the would-be supervenient properties) is dominant and
information about the micro-level structure (the subvenient
properties) is simply lost.
So with emergence there is no form
of reduction comparable to cases of ‘multiple realizability,’
and, in particular, no identification of the causal powers of the
macro-phenomena with the causal powers of their micro-realizers.  Emergent phenomena have their own causal
powers.
What is missing from Batterman’s account of physical emergence is an appeal to symmetry
breaking as a high level independent physical organizational constraint. It is such an appeal that is needed to ground
an ontological account of emergence.
Appealing to the mathematics of
‘renormalization’ fails to provide such an account, contra Batterman. The renormalization mathematics does
contribute important details to our explanatory understanding of emergent
phenomena. But the mere fact that we
need not appeal to micro-phenomena to explain emergent macro-phenomena, as RG
explanations demonstrate, at most shows the epistemic independence of emergent
phenomena from microstructural details, it does not show their ontological
independence. But emergence is about
ontological independence, and to demonstrate this we must appeal to higher order
physical principles of symmetry and to the universality of the orderly pattern
by means of which such symmetry may be broken.
Morrison plays down the ‘idealizing’
aspects of the renormalization, characterizing it instead as a kind of
simplification: neutralizing one of the variables in order to make clear the
behavioral pattern of the system at the critical point.
So, it turns out that there is a debate
in physics and philosophy of physics about the nature of emergent
phenomena, paralleling the debate in philosophy about the emergence of consciousness. Exponents of robust physical emergence
include Laughlin and Pines, Anderson, Bangu, and
Morrison.  Detractors include Batterman, Callender, and Earman.
FUNCTIONALISM,
PLASTICITY, AND PSYCHOLOGICAL EXPLANATION
In an early article, “The Mind-Body
Problem,” Scientific American (Jan.,
1981) pp. 114-123, Jerry Fodor characterizes functionalism as involving the
following two theses:
The Causal Role Thesis: “…what determines the psychological type to which a mental particular
belongs is the causal role of the particular in the mental life of the
organism”
The Structural Thesis: “…the psychology of a system
depends not on the stuff it is made of (living cells, metal, or spiritual
energy) but on how the stuff is put together” (op. cit., pp. 114, 118)
Fodor also makes the following claim about functionalism:
The Neutrality Thesis: Functionalism is “….a philosophy of mind…” that “…is neither dualist
nor materialist” (op. cit., p. 114)
The Causal Role Thesis is supposed to
imply The Structural Thesis, and that The Structural Thesis is true of
functionalism is supposed to imply The Neutrality Thesis.
But, looking ahead, both the
Structural and Causal Role theses admit of different interpretations, and for
varying reasons both theses are unacceptable in
interpretations that imply The Neutrality Thesis.
The Structural Thesis invokes a distinction between composition and structure, and suggests
that mental phenomena are ‘merely structural,’ or ‘relational,’ in a way that is independent of the
composition of the relata. But how should we
interpret ‘independent’ here? Independent in what sense?
How can the explanatory power of mental particulars depend on the
special causal explanatory features that they have, and yet not depend on the
sort of stuff involved? For one thing it
seems to presuppose that how stuff is or can be put together is independent of
what sort of stuff it is, and in general this seems to be false. In the case of psychological systems, it is
surely not enough to show that there are several distinct empirically possible
realizations of some psychology, because that is still compatible with other
envisaged realizations being empirically ruled out on grounds having to do with
the sort of stuff envisioned to have been involved.  One would have thought, therefore, that the question of how much the explanatory power of mental particulars depends on the sort of stuff involved ought to be treated as an empirical question.
Plasticity.
By the ‘plasticity’ of a type of
event, state, process (or power), let us mean its capacity to be realized in
more than one way. Following Richard
Boyd, in “Materialism Without Reduction: What Physicalism
Does Not Entail,” in Block, N. (ed.), Readings
in Philosophy of Psychology, Vol. I, Harvard U. Press (Cambridge, Mass,
1980) pp. 67-106, we distinguish two ‘dimensions’ of plasticity: compositional
and configurational.
If a type of state or process has a
high degree of compositional plasticity, that
means that there can be a high degree of variability in the sorts of substances
or causal factors that may constitute realizations of the state or process.
If a type of state or process has a
high degree of configurational plasticity, that
means that there can be a high degree of variability in the structural
arrangements of constituent parts constituting a realization of the state or
process (cf. pp. 87-90).
[Compare, e.g., the state of being an
inscription of a certain English sentence vs the process of smelting iron.]
One way of interpreting The Causal
Role Thesis is as the claim that the only essential characteristic of a mental
particular is its causal role in the mental life of the entity that has
it. Boyd says that mental particulars
“…are like computational states in being entirely configurational,
that is, in possessing maximal
compositional plasticity;” “…in any particular world, only the causal laws
governing that world limit the possible composition of realizations of such
computational states; such states have no essential properties that constrain
the sorts of substances or causal factors that can be constituents of their
realizations” (op. cit. p. 88). This in turn suggests a corresponding
interpretation of the Structural Thesis, according to which mental particulars
are maximally compositionally plastic.
Boyd’s 3 accounts of maximal
compositional plasticity (‘mcp’) of the mental (put
forward as equivalent):
1.
Mental phenomena are mcp
in being ‘entirely configurational’.
[But relations differ widely on how
constraining they are, so being mcp
on this account would be compatible with greater or lesser degrees of realizability both within and across worlds.  Compare, e.g., being a computation of e to the power x for x=9 (see the sketch after this list); being to the left of x; being x’s brother; being the cause of x’s felt sensation of warmth…]
2.
Mental phenomena are mcp in that, in any
particular possible world, only the causal laws governing that world limit the
possible composition of realization of such phenomena.
[But this seems false, since it is
surely also going to depend on the constitution of the world: i.e., what sorts
of stuff, and in what distributions, it contains. But taking that into account, why isn’t every
property mcp?]
3.
Mental phenomena are mcp
in the sense that they have no essential properties that constrain the sorts of
substances or causal factors that may be constituents of their realizations.
[But surely this suggests that mental
phenomena could not essentially be causal roles, for being a causal role might
well constrain the sorts of substance or causal factor that could be
constituents of their realization. At
least, it seems question begging to suppose otherwise. For one thing only concrete entities can realize
causal roles. For another, the ability
to realize a causal role must surely be grounded in the nature of what realizes
it. If there are causal roles that place
no constraints on the nature of what may realize them, then we require a
separate argument for this. And then we
require an additional argument for the thesis that mental phenomena are among
such realizations.
Consider also the evolution of
‘higher functions’. One can think of an
evolved system as consisting of a number of distinct hierarchical levels of
organization, each of whose spatial boundaries coincides at the system’s
interface with its environment – e.g., its skin. Now to say that mental phenomena are mcp seems to imply that being a
mental phenomenon of a certain sort is independent of the sort of underlying
constituents, at hierarchically lower levels of organization, that may happen
to realize that mental phenomenon in a given instance. Whereas some evolutionary theorists have
suggested that part of what explains the relative ‘stability’ of hierarchically
complex evolved systems is that a kind of ‘causal integration’ or ‘coadaptation’
of the various organizational levels has occurred over time, and under
maximizing conditions of natural selection, such that, e.g., mental phenomena qua mental phenomena are subject to
physical, chemical and biological constraints at a number of levels, “…and never completely lose the marks of the
levels of organization they have evolved from, even down to the level of basic
chemical elements of which they are composed” (cf. Bill Wimsatt,
“Complexity and Organization” in Cohen, R.S. and Schaffner,
K.Y., (eds.) Boston Studies in the
Philosophy of Science, Vol. XX, D. Reidel Pub. Co., (Dordrecht-Holland, 1974) pp.
76-78). But if so, then it would seem that
mental phenomena, as we know them, are not maximally compositionally
plastic. They are not just a ‘structural
overlay’ of what realizes them.]
What seems at this point to be the
case is that mental phenomena are mcp
if and only if they are (type) abstract. For then, qua
being mental, they are neither material nor immaterial, and they do indeed have
no essential properties that constrain the sorts of substance or causal factor
that can be constituents of their realization.
But of course, abstract entities are
not part of the causal nexus; that is part of what we mean by ‘abstract’. So,
supposing that mental phenomena are mcp
raises problems for Fodor’s Causal Role and Structural theses.
Another interpretive issue for The
Causal Role Thesis is what is meant by “determines”.
Three Grades of Functionalism (in
order of strength):
First grade: determination of
psychological type by analyticity (e.g., D. Lewis) [What definitions?]
Second grade: determination of
psychological type by a posteriori identity (e.g., early Putnam, late Dennett)
Third grade: determination by
material equivalence
The third grade might make the Causal
Role Thesis true, but not in a sense that
implies the Structural Thesis. A
mere extensional equivalence of mental state types and causal role types does not imply that mental states are ‘merely
structural’, or even that they essentially involve relational properties, let
alone that they are highly, let alone maximally,
compositionally plastic. If so, then the
Causal Role Thesis manages to imply the Structural Thesis only by being at least as strong as on the second grade, by being a
‘Strong (Functional Identity) Theory’.
According to The Strong Theory, there is a level of structure of any
system having a psychology, such that the functioning ‘parts’ of the system at
that level are causally related to each other, and to inputs and outputs, such
that each distinct mental particular attributed to the system is, qua being a mental particular of some
given type, nothing but the
correspondingly distinct causal contribution made to the over-all functioning
of the system by one of these parts. Their causal roles are exhaustively constitutive of their
mentality.
If so, then mental state types turn out to be abstract states of a complex
relational sort, concretely realizable, of course, in various ways. But for any given realization, what is mental
about it is abstract, and therefore
cannot contribute to the state’s particular causal efficacy. So, while epiphenomenalism acknowledges the
special mental nature of the mental but deprives it of causal potency, the
Strong Theory sustains the causal potency of the mental, but only by depriving
the mental qua mental of any special
nature contributing to its potency.
Concluding remarks.
Explanations do not have to be causal; e.g., Pincock’s
‘abstract’ explanations are not causal. Functional explanations of state types
are causal, and, to be causal, must be ‘concrete’. But if the above line of argument holds, it
follows that such ‘structuralist’ explanations must
rest on a notion of structure that is thicker than that of maximal configurational plasticity.
There is something concrete, in the world, that grounds or constrains the specific
(less than maximal) degree of configurational
plasticity of that particular functional type.
Compare Morrison on physical
emergence. She wants macro-physical phenomena to have their own causal powers;
so when she invokes the higher level symmetry principles of physics, and the
emergence of the universal pattern of symmetry breaking captured by
Renormalization Group mathematics, she must be understanding
these as not merely abstract.
Compare Deacon on the role of
constraints, and of particular absences they may enforce, in shaping the causal
topology of a complex living system. The absences -- the hole at the wheel's hub
-- cannot contribute causally if they are taken to be merely abstract. So by
that token causal topology must not be seen as merely mathematical.
As Batterman
shows us, sometimes our mathematical models of emergent behavior of complex
systems can only work when embodying false idealizations that are ineliminable in principle.  To that extent, such mathematical models behave like abstract explanations.  They can help us understand
important aspects of the emergent behavior, but cannot by themselves help us
understand any causal role that we may want or need to attribute to it. The choice we are then left with is either to
rest content with epistemic emergence, or to embrace salient aspects of higher
level structure as fundamental.
Resting content with epistemic
emergence leaves one with pretty much everything that, e.g., Clark invoked in
his elaboration and defense of emergent explanation. At worst it would have to be supplemented
with the prefatory qualifier “It is epistemically ineliminably as if…”.