Untangling Complex Systems
$$\frac{\partial(\delta x)}{\partial t} = [R_x(X)]_s\,\delta x + [R_y(X)]_s\,\delta y + D_X\frac{\partial^2(\delta x)}{\partial r^2}$$
$$\frac{\partial(\delta y)}{\partial t} = [R_x(Y)]_s\,\delta x + [R_y(Y)]_s\,\delta y + D_Y\frac{\partial^2(\delta y)}{\partial r^2} \qquad [9.16]$$

  The solution of equation [9.16], for both x and y, is the product of two functions: one depending on

  the temporal coordinate t and the other on the spatial coordinate r:

$$\delta x = c_1\,e^{\lambda t}\cos(Kr)$$
$$\delta y = c_2\,e^{\lambda t}\cos(Kr) \qquad [9.17]^1$$

In [9.17], λ is the eigenvalue; K is the wavenumber, whose value depends on the boundary conditions. Introducing the solutions of equation [9.17] into the system of equations [9.16], and dividing by $e^{\lambda t}\cos(Kr)$, we obtain:

$$\lambda\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} [R_x(X)]_s - D_X K^2 & [R_y(X)]_s \\ [R_x(Y)]_s & [R_y(Y)]_s - D_Y K^2 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} \qquad [9.18]$$

In [9.18], the Jacobian matrix, J′, has an additional term in each of its diagonal elements compared with J, defined in the absence of diffusion. The trace of the Jacobian is now

$$\mathrm{tr}(J') = [R_x(X)]_s - D_X K^2 + [R_y(Y)]_s - D_Y K^2 \qquad [9.19]$$

  and its determinant is

$$\det(J') = D_X D_Y K^4 - \big(D_X [R_y(Y)]_s + D_Y [R_x(X)]_s\big)K^2 + \big([R_x(X)]_s [R_y(Y)]_s - [R_x(Y)]_s [R_y(X)]_s\big) \qquad [9.20]$$

1 The general solution of the system of partial differential equations [9.16] is $\delta x = c_1 e^{\lambda t} e^{ikr}$, $\delta y = c_2 e^{\lambda t} e^{ikr}$.


Since the trace of the homogeneous system was negative, the new tr(J′) (equation [9.19]) remains negative because the diffusion coefficients are positive. The perturbation will be damped unless det(J′) becomes negative. It is worth noticing that, in equation [9.20] defining det(J′), the terms $D_X D_Y K^4$ and $[R_x(X)]_s[R_y(Y)]_s - [R_x(Y)]_s[R_y(X)]_s$ are both positive; the only possibility for the new det(J′) to be less than zero is that

$$D_X [R_y(Y)]_s + D_Y [R_x(X)]_s > 0 \qquad [9.21]$$

and the discriminant of the equation that determines the roots of the second-order equation in $K^2$ is positive.2
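These two requirements are easy to check numerically. The following sketch (with purely illustrative steady-state derivative values and diffusion coefficients, not taken from the text) evaluates tr(J′) and det(J′) of equations [9.19]–[9.20] as functions of $K^2$ and tests condition [9.21] together with the positivity of the discriminant:

```python
# Illustrative steady-state Jacobian entries (hypothetical numbers):
# X is a self-activator, Y a self-inhibitor.
Rxx, Ryx = 0.8, 1.0    # [R_x(X)]_s = dR_X/dx, [R_y(X)]_s = dR_X/dy
Rxy, Ryy = -1.8, -1.0  # [R_x(Y)]_s = dR_Y/dx, [R_y(Y)]_s = dR_Y/dy
DX, DY = 1.0, 40.0     # the inhibitor diffuses much faster than the activator

def tr_Jp(K2):
    """Trace of J' (equation [9.19]) as a function of K^2."""
    return (Rxx - DX * K2) + (Ryy - DY * K2)

def det_Jp(K2):
    """Determinant of J' (equation [9.20]): a quadratic in K^2."""
    return DX * DY * K2**2 - (DX * Ryy + DY * Rxx) * K2 + (Rxx * Ryy - Rxy * Ryx)

# Condition [9.21]: D_X [R_y(Y)]_s + D_Y [R_x(X)]_s > 0
b = DX * Ryy + DY * Rxx
# Discriminant of a(K^2)^2 + bK^2 + c (see note 2)
a, c = DX * DY, Rxx * Ryy - Rxy * Ryx
delta = b**2 - 4 * a * c

print(b > 0 and delta > 0)      # both conditions hold -> True
print(det_Jp(b / (2 * a)) < 0)  # det(J') dips below zero at its minimum -> True
```

With these numbers, det(J′) is positive at $K^2 = 0$ (where it equals det(J) > 0), dips below zero for an intermediate band of wavenumbers, and grows as $K^4$ for large $K$, exactly as the analysis requires.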

We know that the sum $[R_x(X)]_s + [R_y(Y)]_s = \mathrm{tr}(J)$ is negative. If both terms, $[R_x(X)]_s$ and $[R_y(Y)]_s$, were negative, relation [9.21] could not be verified. Therefore, one of the two terms, $[R_x(X)]_s$ and $[R_y(Y)]_s$, must be negative and the other positive. This situation means that one species enhances the rate of its own production, whereas the other decreases its own rate of production. Let us suppose that the derivative $[R_x(X)]_s$ is positive: species X acts as an activator of itself; in other words, X is an autocatalytic species. On the other hand, $[R_y(Y)]_s$ is negative; hence, Y is an inhibitor of itself. For the sum $[R_x(X)]_s + [R_y(Y)]_s$ to be negative, we must have

$$[R_x(X)]_s < -[R_y(Y)]_s \qquad [9.22]$$

Moreover, $\det(J) = [R_x(X)]_s[R_y(Y)]_s - [R_x(Y)]_s[R_y(X)]_s$ is positive if $[R_x(Y)]_s[R_y(X)]_s < 0$. The product $[R_x(Y)]_s[R_y(X)]_s$ is negative if the two derivatives have opposite signs. For example, it might be that $[R_x(Y)]_s > 0$ and $[R_y(X)]_s < 0$, i.e., X is also an activator of Y, and Y is also an inhibitor of X. We may rearrange equation [9.21] in $D_X[R_y(Y)]_s > -D_Y[R_x(X)]_s$, or in $D_X\big(-[R_y(Y)]_s\big) < D_Y[R_x(X)]_s$. Finally, also considering relation [9.22], we achieve

$$\frac{D_X}{D_Y} < \frac{[R_x(X)]_s}{-[R_y(Y)]_s} < 1 \qquad [9.23]$$

This last expression means that the inhibitor, Y, must diffuse more rapidly than the activator, X, to generate instability. If the conditions expressed by the relations tr(J′) < 0, det(J′) < 0, and [9.23] are all verified, along with the subsidiary condition expressed in note 2, the eigenvalues of the solutions [9.17] of the PDE will be one positive (λ+) and the other negative (λ−). In other words, the original steady state that was stable in the absence of diffusion encounters a bifurcation and becomes a saddle point in the presence of diffusion. The solution having a positive eigenvalue, $c\,e^{\lambda_+ t}\cos(Kr)$, represents the exponential growth of a co-sinusoidal or sinusoidal spatial pattern. An example of what can occur at the saddle point is shown in Figure 9.4. Graph (1) in Figure 9.4 shows the initial situation, where we have a

2 If we think of equation [9.20] as a quadratic equation in $K^2$ of the form $a(K^2)^2 + bK^2 + c$, then, to have real roots for the inequality $a(K^2)^2 + bK^2 + c < 0$, the discriminant must be positive (see the figure in this note). In other words, $\Delta = b^2 - 4ac = \big(D_X[R_y(Y)]_s + D_Y[R_x(X)]_s\big)^2 - 4 D_X D_Y\big([R_x(X)]_s[R_y(Y)]_s - [R_x(Y)]_s[R_y(X)]_s\big) > 0$. Reminding that $[R_x(X)]_s[R_y(Y)]_s - [R_x(Y)]_s[R_y(X)]_s = \det(J) > 0$ and that relation [9.21] must hold, it follows that it is necessary to verify $\big(D_X[R_y(Y)]_s + D_Y[R_x(X)]_s\big) > 2\big(D_X D_Y\big([R_x(X)]_s[R_y(Y)]_s - [R_x(Y)]_s[R_y(X)]_s\big)\big)^{1/2} > 0$.

[Figure in note 2: plots of $\det(J')$ as a parabola in $K^2$ (with $a > 0$) for the three cases $\Delta < 0$, $\Delta = 0$, and $\Delta > 0$; only when $\Delta > 0$ does the parabola become negative, between the two roots $K_1^2$ and $K_2^2$.]

The Emergence of Order in Space

[Figure 9.4: six panels, (1)–(6), each plotting the concentrations of X and Y versus the spatial coordinate; the trigger point is labeled A, the neighboring regions where the inhibitor accumulates B, and the farther regions where X becomes dominant C.]

  FIGURE 9.4 Schematic and qualitative description of the Turing pattern formation. The graphs from (1) to

  (6) are six snapshots in the formation of a Turing pattern.

uniform distribution of the two species, X and Y, over the spatial coordinate. If the concentration of the activator X becomes slightly higher at one point (labeled as A in graph 2) by random fluctuation or after a perturbation, then A can be the trigger point for the emergence of a Turing pattern. Since the activator X self-enhances, its concentration increases in A (see graph 2). This event causes an increase of the inhibitor in the nearby regions labeled as B, because Y has a diffusion rate much larger than that of X (graph 3). The presence of Y depresses the activator, resulting in a depletion of its concentration in B (graph 4). In the regions labeled as C, since the inhibitor concentration is lower, X increases and becomes dominant (graph 5). These local events are enough to generate a chemical structure (graph 6). In the end, the system evolves to a stationary Turing pattern. The wavenumber of the spatial pattern can be found by minimizing det(J′) with respect to K (look at the figure in note 2). The result (solve exercise 9.3) is

   

$$K = \left[\frac{1}{2}\left(\frac{[R_x(X)]_s}{D_X} + \frac{[R_y(Y)]_s}{D_Y}\right)\right]^{1/2} \qquad [9.24]$$


A finite wavenumber exists if $[R_x(X)]_s/D_X > -[R_y(Y)]_s/D_Y$. After noticing that the ratio $\big(D_X/[R_x(X)]_s\big)^{1/2}$ defines the characteristic decay length of the activator, $l_X$, whereas $\big(D_Y/\left|[R_y(Y)]_s\right|\big)^{1/2}$ defines the corresponding decay length of the inhibitor, $l_Y$, the criterion for the existence of a finite wavenumber is $l_X < l_Y$, which means that the inhibitor has a longer range than the activator. This condition is sometimes referred to as “local self-activation and lateral inhibition” (Meinhardt and Gierer 2000), since the activator tends to be confined to the neighborhood of its initiation site, in contrast to the inhibitor, which can spread more rapidly throughout the medium.
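Equation [9.24] and the two decay lengths can be evaluated directly. The sketch below uses the same kind of illustrative (hypothetical) numbers as before, with a strongly self-activating X and a fast-diffusing self-inhibiting Y:

```python
import math

# Hypothetical steady-state values for illustration only
RxX = 0.8          # [R_x(X)]_s > 0: self-activation of X
RyY = -1.0         # [R_y(Y)]_s < 0: self-inhibition of Y
DX, DY = 1.0, 40.0

# Equation [9.24]: wavenumber that minimizes det(J')
K2 = 0.5 * (RxX / DX + RyY / DY)
K = math.sqrt(K2)

# Characteristic decay lengths of activator and inhibitor
lX = math.sqrt(DX / RxX)
lY = math.sqrt(DY / abs(RyY))

print(round(K, 3))  # 0.622
print(lX < lY)      # local self-activation, lateral inhibition -> True
```

For these values the selected wavelength is $2\pi/K \approx 10$ length units, and $l_X \approx 1.1 \ll l_Y \approx 6.3$: the inhibitor's range is indeed much longer than the activator's.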

  TRY EXERCISE 9.4

The possibility of generating patterns from a homogeneous and stable steady state by diffusion-driven instability was inferred for the first time by Alan Turing3 in 1952. Turing was interested in accounting for the phenomena of morphogenesis in biology. He predicted that the crucial diffusion-driven instability in an initially homogeneous state might be triggered by random disturbances, such as Brownian motion. Since Turing’s seminal paper, several Reaction-Diffusion models have been proposed (Maini et al. 1997). Most of them are abstract models of real or imaginary chemical reactions. Some examples are shown in Table 9.1 and exercises 9.5, 9.6, and 9.7.

The Reaction-Diffusion models originating Turing patterns can be grouped into two sets depending on the signs of the cross-terms $[R_x(Y)]_s$ and $[R_y(X)]_s$, reminding that the only restriction that must hold is $[R_x(Y)]_s[R_y(X)]_s < 0$. Therefore, we must have $[R_x(Y)]_s > 0$ and $[R_y(X)]_s < 0$, or the opposite. The combinations of the signs for the terms of the Jacobian matrix

$$J = \begin{pmatrix} [R_x(X)]_s & [R_y(X)]_s \\ [R_x(Y)]_s & [R_y(Y)]_s \end{pmatrix}$$

must be either $\begin{pmatrix} + & - \\ + & - \end{pmatrix}$ or $\begin{pmatrix} + & + \\ - & - \end{pmatrix}$.
In the first case, the self-activator X also acts as an activator of Y, whereas Y inhibits itself and also X. In the second case, X is an activator of itself but an inhibitor of Y, and Y is an inhibitor of itself but an activator of X. The original Turing model along with the Brandeisator, the Gierer-Meinhardt, and the Thomas models are examples of the first case, whereas the Schnakenberg model is an example of the second case (see Table 9.1).
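The two sign patterns above can be recognized mechanically. Here is a small sketch (the function name and the example matrices are hypothetical, chosen only to illustrate the classification) that sorts a steady-state Jacobian into the two Turing-compatible cases:

```python
def turing_case(J):
    """Classify the sign pattern of a 2x2 steady-state Jacobian
    J = [[ [R_x(X)]_s, [R_y(X)]_s ],
         [ [R_x(Y)]_s, [R_y(Y)]_s ]].
    Returns 1 for the (+ -, + -) pattern, 2 for (+ +, - -),
    and None if the matrix is not an activator-inhibitor pair."""
    (a, b), (c, d) = J
    if not (a > 0 and d < 0 and b * c < 0):
        return None
    return 1 if (b < 0 and c > 0) else 2

# First case: X activates Y, Y inhibits X (Gierer-Meinhardt-like signs)
print(turing_case([[0.5, -1.0], [2.0, -2.0]]))   # 1
# Second case: Y activates X, X inhibits Y (Schnakenberg-like signs)
print(turing_case([[0.8, 1.0], [-1.8, -1.0]]))   # 2
```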

The original Turing model involves only linear differential equations (Turing 1952). The Brandeisator (Lengyel and Epstein 1991) is a model of the first example of a Turing pattern obtained in a chemical laboratory (as mentioned in the next paragraph). The Thomas model (1975) is based on a specific reaction involving the substrates oxygen and uric acid in the presence of the enzyme uricase. The Gierer and Meinhardt (1972) model is a hypothetical but biologically plausible mechanism to account for important types of morphogenesis observed in the development of living beings. Finally, the Schnakenberg model (1979) is based on a hypothetical mechanism involving a cubic autocatalytic process.
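To see such a model actually produce a Turing pattern, one can integrate it on a one-dimensional grid. The sketch below uses the Schnakenberg mechanism in a common dimensionless form, $\partial x/\partial t = a - x + x^2 y + D_X\nabla^2 x$, $\partial y/\partial t = b - x^2 y + D_Y\nabla^2 y$; this form, the parameter values, and the explicit Euler scheme are assumptions made for illustration, not taken from Table 9.1:

```python
import numpy as np

# Dimensionless Schnakenberg model (assumed form): x activator, y inhibitor
a, b = 0.1, 0.9
DX, DY = 1.0, 40.0               # inhibitor diffuses much faster than activator
N, dx, dt, steps = 100, 1.0, 0.01, 20000

rng = np.random.default_rng(0)
xs, ys = a + b, b / (a + b) ** 2          # homogeneous steady state
x = xs + 0.01 * rng.standard_normal(N)    # small random perturbation
y = ys + 0.01 * rng.standard_normal(N)

def lap(u):
    """1-D Laplacian with periodic boundaries."""
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2

for _ in range(steps):
    rx = a - x + x**2 * y                 # reaction term of the activator
    ry = b - x**2 * y                     # reaction term of the inhibitor
    x = x + dt * (rx + DX * lap(x))
    y = y + dt * (ry + DY * lap(y))

# The tiny initial perturbation has grown into a stationary spatial pattern:
print(np.std(x) > 0.05, np.all(np.isfinite(x)))
```

With these parameters the homogeneous steady state is stable without diffusion but Turing-unstable with it, so the small random perturbation is amplified into a standing concentration profile, in the spirit of Figure 9.4.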

  TRY EXERCISE 9.8

  3 Alan Turing (1912–1954 AD) was a British mathematician who is widely considered to be the father of theoretical computer science and artificial intelligence. He provided a formalization of the concepts of algorithm and computation with the Turing machine, which is a model of a general-purpose computer (read Chapter 10 for more information). He also devised a test to measure the intelligence of a machine.

  During the Second World War, Turing devised techniques for breaking secret codes of the German army. Winston

  Churchill said that Turing made the single most significant contribution to Allied victory in the war against Nazi

  Germany. In fact, Turing’s pivotal role in cracking intercepted coded messages enabled the Allies to defeat the Nazis in several crucial battles. After the Second World War, Turing became interested in mathematical biology, and he wrote his seminal paper on the chemical basis of morphogenesis (Turing, 1952).


  TABLE 9.1

  Examples of Reaction-Diffusion (RD) Models Wherein x is the Activator, and y is the

  Inhibitor

Original Turing model (Turing 1952):
$$\frac{\partial x}{\partial t} = a_1 x - b_1 y + c_1 - d_1 x + D_X \nabla^2 x$$
$$\frac{\partial y}{\partial t} = a_2 x + b_2 y + c_2 - d_2 y + D_Y \nabla^2 y$$

Brandeisator (Lengyel and Epstein 1991):
$$\frac{\partial x}{\partial t} = k'_1 - k'_2 x - \frac{4 k'_3 x y}{\alpha + x^2} + D_X \nabla^2 x$$
$$\frac{\partial y}{\partial t} = k'_2 x - \frac{k'_3 x y}{\alpha + x^2} + D_Y \nabla^2 y$$

Gierer-Meinhardt (Gierer and Meinhardt 1972):
$$\frac{\partial x}{\partial t} = k_1 - k_2 x + \frac{k_3 x^2}{y} + D_X \nabla^2 x$$
$$\frac{\partial y}{\partial t} = k_3 x^2 - k_4 y + k_5 + D_Y \nabla^2 y$$

Thomas (1975):
$$\frac{\partial x}{\partial t} = k_1 - k_2 x - \frac{k_5 x y}{k_6 + k_7 x + k_8 x^2} + D_X \nabla^2 x$$
$$\frac{\partial y}{\partial t} = k_3 - k_4 y - \frac{k_5 x y}{k_6 + k_7 x + k_8 x^2} + D_Y \nabla^2 y$$

Schnakenberg (1979):
$$\frac{\partial x}{\partial t} = k_1 a - k_2 x + k_3 x^2 y + D_X \nabla^2 x$$

[For each row, the “Schematic Model” column of the original table shows the activator–inhibitor coupling diagram.]
