Untangling Complex Systems
Chaos in Space

[FIGURE 11.28 Determination by the Box Counting method of the dimension of the five fractal-like shapes (a–e). Each panel plots log(N(l)) against log(1/l); the fitted slopes give fractal dimensions such as d = 1.72 (r = 0.9982), d = 1.40 (r = 0.9980), d = 1.72 (r = 0.9977), and d = 1.70 (r = 0.9997).]

11.16. Although the species A has the same instantaneous concentration in the two reactors, the rates of reaction are different. In fact, k = k0/(15)^0.33 in the first reactor, whereas k = k0/(5)^0.33 in the second one. At the time of the observation, the reaction proceeds faster in the second reactor. How is that possible? The two reactors have a different history. The longer time elapsed in the first reactor has "degraded" the rate coefficient k. If these were catalytic reactions, we would say that the catalyst got "poisoned." But there is no "poison." Only when we deal with a homogeneous distribution of the reactants in Euclidean space should we expect that two or more systems, under identical conditions, will have equal reaction rates.

12 Complex Systems

  The whole is something besides the parts.

  Aristotle (384–322 BC)

  As long as a branch of science offers an abundance of problems,

  so long it is alive; a lack of problems foreshadows extinction or

  the cessation of independent development.

  David Hilbert (1862–1943)

  12.1 THE NATURAL COMPLEXITY CHALLENGES

A major goal of science is that of solving problems. Indeed, scientific knowledge has allowed humanity to reach outstanding achievements in technological development and economic growth, especially in the last three hundred years or so (Beinhocker 2007). Despite many efforts, there is still a list of compelling challenges that have yet to be won. First, we are unable to predict catastrophic events on our planet, like earthquakes or volcanic eruptions, which, unfortunately, may cause many casualties and much damage every time they occur. Second, we are still striving to protect our environment and ecosystems from risks such as climate change and the shrinkage of biodiversity. Third, we need to find innovative solutions to guarantee worldwide sustainable economic growth, primarily by focusing on the energy issue. Fourth, we struggle to ensure stability in our societies. Finally, there are still incurable diseases that must be defeated, such as cancer, Alzheimer's, Parkinson's, diabetes, and HIV, to cite just a few.

Why do we find these challenges so difficult to win? Because, whenever we face them, we deal with Complex Systems: the geology and climate of our planet; ecosystems; living beings; the human brain; the immune system; human societies; and the economy. We encounter substantial limitations in describing Complex Systems exhaustively when we start from the fundamental physical laws. Why?

  12.2 THE COMPUTATIONAL COMPLEXITY OF THE NATURAL

  COMPLEX SYSTEMS

The solution of the equations that describe Natural Complex Systems, based on the knowledge of their ultimate constituents, such as atoms, remains an unreasonable task. In fact, it is entirely beyond our reach. The difficulty stems from the exponential growth of the computational


  51
× 13
----
 153
+51
----
 663

FIGURE 12.1 An example of the pencil-and-paper algorithm for solving a multiplication.
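As an aside, the digit-by-digit procedure of Figure 12.1 can be sketched in a few lines of Python; the function name and the step accounting are ours, added for illustration. Each pair of digits contributes one elementary multiplication, so multiplying two n-digit factors costs on the order of n² steps.

```python
def long_multiply(a: int, b: int):
    """Pencil-and-paper multiplication of two positive integers.
    Returns (product, number of single-digit multiplications)."""
    a_digits = [int(d) for d in str(a)]
    b_digits = [int(d) for d in str(b)]
    steps = 0
    total = 0
    for i, db in enumerate(reversed(b_digits)):   # one partial product per digit of b
        partial = 0
        for j, da in enumerate(reversed(a_digits)):
            partial += da * db * 10 ** j
            steps += 1                            # one elementary multiplication
        total += partial * 10 ** i                # shift and accumulate
    return total, steps

print(long_multiply(51, 13))  # (663, 4): four single-digit multiplications
```

For 51 × 13, the count of four multiplications matches the tally given in the text below.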

cost with the number of interacting particles (Whitfield et al. 2013).1 The computational cost, also called the computational complexity of an algorithm, is a measure of the number of steps the algorithm requires to solve a problem, expressed as a function of the size of the problem. In the field of Computational Complexity, a problem is a computational task coupled to a question (Goldreich 2008). An example is the Schrödinger equation, which wants to answer the question: "What is the total energy of a chemical system consisting of many interplaying particles?" Another example is the famous Traveling Salesman Problem (TSP). It asks: "Given a graph with nodes (i.e., a map with cities), edges, and costs associated with the edges (i.e., connections and their costs), which is the least-cost closed tour containing every node (i.e., every city) just once?" An algorithm for solving a problem is a finite set of instructions or steps that guarantees finding the correct solution to any instance of that problem. If we imagine using a Turing machine, the steps of an algorithm can be the addition, subtraction, multiplication, finite-precision division, and the comparison of two numbers. An example of an algorithm is the pencil-and-paper procedure of multiplication learned at school. Let us recall it with the instance shown in Figure 12.1.

The pencil-and-paper method applied to the example reported in Figure 12.1 requires a total of four multiplications and three additions to achieve the final output. It is easy to infer that the total number of steps depends on the size of the numbers. To get an idea of the running time of an algorithm solving a particular problem, it is useful to determine the function that relates the number of computational steps to the size of the problem. Each elementary step is assumed to be performed at the same speed; in other words, each operation requires the same amount of time, which depends on the intrinsic speed of our computational machine. The size of a problem is expressed in terms of the number of bits, N, required to encode an instance of that problem. For example, in the case of the TSP, the running time of an algorithm is expressed as a function of the number of nodes and the number of edges, plus the number of bits required to encode the numerical information of the edge costs. A problem is easy if

1 The equation of conventional nonrelativistic quantum mechanics, which in principle can describe every Complex System we mentioned in paragraph 12.1, is:

$$ i\hbar\,\frac{\partial \Psi}{\partial t} = \hat{H}\,\Psi $$

where the Hamiltonian operator is:

$$ \hat{H} = -\sum_{j=1}^{N_e} \frac{\hbar^2}{2m_e}\nabla_j^2 - \sum_{k=1}^{N_n} \frac{\hbar^2}{2M_k}\nabla_k^2 + \sum_{j<l}^{N_e} \frac{e^2}{\left|\mathbf{r}_j - \mathbf{r}_l\right|} + \sum_{k<m}^{N_n} \frac{Z_k Z_m e^2}{\left|\mathbf{R}_k - \mathbf{R}_m\right|} - \sum_{j=1}^{N_e}\sum_{k=1}^{N_n} \frac{Z_k e^2}{\left|\mathbf{r}_j - \mathbf{R}_k\right|} $$

In the definition of Ĥ (wherein intranuclear interactions and gravity are not included), e and m_e are the electron charge and mass, r_j is the position vector of the j-th electron, and N_e their total number; Z_k, M_k, and R_k are the charge, the mass, and the position of the k-th nucleus, and N_n the number of nuclei; ħ is Planck's constant divided by 2π. The Schrödinger equation can be solved exactly only for the very simple hydrogen atom, involving two particles (the nucleus and one electron). Any problem with three or more particles cannot be solved exactly. This statement also holds for macroscopic bodies interacting through the gravitational force. We need to apply an approximate method, such as the perturbation approach, to achieve a reasonable solution (Kwapień and Drożdż, 2012).


it is polynomial (P), that is, if the number of computational steps increases polynomially with the size of the problem. This idea is usually expressed by the Big-O notation. For two functions f(x) and g(x) of a non-negative parameter x, we say that f(x) = O(g(x)) if there exists a constant c > 0 such that f(x) ≤ c·g(x) for all sufficiently large x. In terms of Big-O notation, a problem, f(N), is tractable if

f(N) = O(N^k)    [12.1]

wherein k is a constant, which could be 1, 2, etc.; the quantity N never appears in the exponent. If k = 1, the number of computational steps grows linearly with the size of the problem; if k = 2, quadratically; and so on. The class of polynomial problems, P, is the class of recognition problems. An example of a recognition problem is: "track down the telephone number of a person in a phone book." If N is the total number of entries in the book, the computational steps are, at most, equal to N.
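A minimal sketch of this recognition problem in Python (the phone book and the names in it are invented for illustration): scanning the book entry by entry performs at most N comparisons, hence O(N) steps.

```python
def find_number(phone_book, name):
    """Linear scan of the book: at most N comparisons for N entries, i.e., O(N)."""
    for entry_name, number in phone_book:
        if entry_name == name:
            return number
    return None  # name not listed

book = [("Ada", "555-0100"), ("Boole", "555-0101"), ("Cantor", "555-0102")]
print(find_number(book, "Boole"))  # 555-0101
```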

However, there also exist exponential problems, for which the number of computational steps grows exponentially with the size of the problem itself:

f(N) = O(N^N), or f(N) = O(2^N), or f(N) = O(N!)    [12.2]

These problems are intractable when N is large. Examples are the Schrödinger equation, for which f(N) = O(2^N) holds (Bolotin 2014), and the TSP, for which f(N) = O(N!) ≈ O(√N (N/e)^N) by Stirling's approximation. It suffices to make a few calculations to understand why exponential problems cannot be solved exactly and in a reasonable time, even if we have the best supercomputers in the world at our disposal.

Let us consider the Schrödinger equation. For ten interacting particles, the maximum number of computational steps needed to determine the energy of the system is 2^10 = 1024; if N = 20, the number of steps is 2^20 ≈ 1×10^6. According to the TOP500 list,2 updated in November 2017, the fastest supercomputer in the world is the Chinese Sunway TaihuLight, which reaches the astonishing computational rate of 93 PFlop/s.3 With TaihuLight at our disposal, we need just about ten femtoseconds and ≈10 picoseconds to solve the Schrödinger equation for a system with 10 and 20 particles, respectively. But if our system consists of 500 particles, the number of computational steps becomes so huge, 2^500 ≈ 3.3×10^150, that even TaihuLight would require an unreasonable amount of time to find the exact solution: ≈1×10^126 years. This amount of time is much, much longer than the age of the Universe, which has been estimated to be 14×10^9 years.
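These estimates can be reproduced with a short back-of-the-envelope script, assuming, as an idealization, that one computational step costs one floating-point operation:

```python
RATE = 93e15              # Sunway TaihuLight: 93 PFlop/s (TOP500, November 2017)
SECONDS_PER_YEAR = 3.15e7

for n in (10, 20, 500):
    steps = 2.0 ** n                  # brute-force cost of an O(2**N) problem
    seconds = steps / RATE
    print(f"N = {n:3d}: {steps:.1e} steps, {seconds:.1e} s, "
          f"{seconds / SECONDS_PER_YEAR:.1e} years")
```

For N = 500 the script gives roughly 10^126 years, in agreement with the figures quoted above.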

  TRY EXERCISE 12.1

We must abandon the idea of finding the exact solutions of large exponential problems if the only possible algorithm is that of brute force.4 This statement holds even if we parallelize the computation (Aaronson 2005). In fact, the benefit of making the calculations with, let us say, 10^20 processors working in parallel can never be appreciable for enormous exponential problems.5 Therefore, we are obliged to transform the original exponential problems into problems of recognizing acceptable solutions, generated non-deterministically and in a reasonable time. In other words, the exponential problems are changed into Non-Deterministic Polynomial problems, or NP-problems. For example, let us imagine facing a TSP. Given a graph G with a certain number of nodes, edges, and costs, we are

2 The TOP500 project (website: https://www.top500.org/) ranks the 500 most powerful non-distributed computer systems in the world. The project started in 1993 and publishes an updated list of the best supercomputers twice a year: in June and November.

3 1 PFlop/s = 1×10^15 floating-point operations per second; the prefix P stands for Peta, which corresponds to 10^15.

4 The brute-force solution of an exponential problem, such as the TSP, involves an exhaustive search that consists in computing the cost of all the permutations of the nodes and then seeing which tour is the cheapest.

5 If one processor requires an amount of time equal to 10^20·Δt to solve a problem, 10^20 processors working in parallel would need a period equal to Δt.


satisfied as soon as we find a complete tour requiring a total cost that is less than or equal to some predefined maximum amount F. Possible solutions might be generated by heuristic algorithms, for instance, by guessing. Then, the problem becomes polynomial, because it reduces to recognizing whether the solutions, generated non-deterministically, verify the imposed conditions or not.
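A sketch of such a verification in Python (the 4-city cost matrix, the proposed tour, and the bound F are invented for illustration): checking a guessed tour takes polynomial time, even though finding the best tour does not.

```python
def verify_tour(costs, tour, F):
    """NP-style verification: does the proposed closed tour visit every city
    exactly once with a total cost no greater than F? Runs in polynomial time."""
    n = len(costs)
    if sorted(tour) != list(range(n)):   # every node exactly once
        return False
    total = sum(costs[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return total <= F

costs = [[0, 2, 9, 10],   # hypothetical symmetric cost matrix
         [2, 0, 6, 4],
         [9, 6, 0, 3],
         [10, 4, 3, 0]]
print(verify_tour(costs, [0, 1, 3, 2], F=25))  # True: cost 2 + 4 + 3 + 9 = 18
```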

In synthesis, the P-class contains computational problems that can be solved efficiently, whereas the NP-class includes problems whose solutions are efficiently verifiable. The NP-class encompasses many scientific problems. What follows is a summary list.

• In scientific experiments regarding the characterization of the time evolution of systems, we gather data in time, which are snapshots of the states of the systems. To comprehend the laws behind the systems' behavior, we must reconstruct the underlying dynamical equations from the snapshots. Such a task, named "system identification," is, in general, an intractable computational problem (Cubitt et al. 2012), especially for the Natural Complex Systems.

• In 1925, the German physicist Ernst Ising proposed a model of phase transitions. Ising's model can be described in graph-theoretic terms (Cipra 2000), in which vertices represent atoms in a lattice and edges represent bonds between adjacent atoms. Each vertex i can be in one of two states, usually indicated as σ_i = ±1. Each edge has an assigned coupling constant, J_ij, where i and j are the two vertices it joins. When neighboring vertices i and j are in states σ_i and σ_j, respectively, the interaction between them contributes an amount −J_ij σ_i σ_j to the total energy of the system. The total energy is given by E_tot = −Σ_(i,j) J_ij σ_i σ_j, where the sum is extended to all the pairs of neighbors, that is, all the edges of the graph. If two vertices, i and j, are in the same state and J_ij is positive, then they decrease the total energy. If all the coupling constants are positive, the overall system has an evident lowest-energy configuration, in which all the atoms are in the same state, either +1 or −1. But if the coupling constants are a mixture of positive and negative numbers, as happens in the case of spin glasses, finding the ground state (i.e., the state with the lowest energy) for a three-dimensional lattice is an NP-complete problem. In fact, with N vertices, each one having two possible states, finding the ground state by brute force means computing all the 2^N possible combinations.

• The protein folding problem, consisting in predicting a protein's three-dimensional structure from the string giving its amino acid sequence, is also NP-complete (Lathrop 1994; Berger and Leighton 1998).

• Many other computational tasks of practical interest, in planning, scheduling, machine learning, financial forecasting, the design of computer hardware with an optimal arrangement of transistors on a silicon chip, and computational biology, belong to the class of NP-complete problems (Monasson et al. 1999).
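The brute-force search for the Ising ground state mentioned in the list above can be sketched as follows; the three-spin "frustrated triangle" couplings are our invented example. Enumerating all 2^N spin configurations is precisely what makes the task exponential in N.

```python
from itertools import product

def ising_ground_state(J):
    """Exhaustive ground-state search for an Ising model.
    J maps each edge (i, j) to its coupling constant J_ij.
    Enumerates all 2**N spin configurations -> exponential cost in N."""
    n = max(max(edge) for edge in J) + 1
    best_energy, best_spins = float("inf"), None
    for spins in product((-1, 1), repeat=n):
        # E_tot = -sum over edges of J_ij * sigma_i * sigma_j
        energy = -sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
        if energy < best_energy:
            best_energy, best_spins = energy, spins
    return best_energy, best_spins

# A frustrated triangle: no spin assignment can satisfy all three bonds at once
J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): -1.0}
print(ising_ground_state(J))  # minimum energy is -1.0
```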

The NP-complete problems cited earlier (other examples being the TSP, the Schrödinger equation, and the satisfiability problems6) form a special subset of the NP-class. This subset embodies the secret of computational intractability, since a polynomial-time algorithm for one of them would immediately imply the tractability of all the problems in NP. This result would mean NP = P (see Figure 12.2). Then, suddenly, Complexity would melt like snow under the sun, and we would become able to understand nature as we never have done, so far, in the history of humanity.

 
