Untangling Complex Systems, page 89
\[ u_{tot} = \frac{4I}{c} = \frac{4(1366\ \mathrm{W\,m^{-2}})}{3\times 10^{8}\ \mathrm{m\,s^{-1}}} = 1.82\times 10^{-5}\ \mathrm{J\,m^{-3}} \]
Then, the pressure is evaluated by using equation [12.12].
\[ P = \frac{u_{tot}}{3} = \frac{1.82\times 10^{-5}}{3} = 6.1\times 10^{-6}\ \mathrm{Pa} \approx 6\times 10^{-11}\ \mathrm{atm} \]
It is evident that the solar pressure is very tiny.
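The two steps above are easy to verify numerically. A minimal Python sketch, using the solar constant I = 1366 W m⁻² from the text (the conversion factor 101325 Pa/atm is standard, not from the text):

```python
# Radiation energy density and pressure of sunlight at the Earth.
I = 1366.0        # solar constant, W/m^2
c = 3.0e8         # speed of light, m/s

u_tot = 4 * I / c          # total energy density, J/m^3
P = u_tot / 3              # radiation pressure, Pa (equation [12.12])
P_atm = P / 101325.0       # convert Pa to atm

print(f"u_tot = {u_tot:.2e} J/m^3")        # ~1.82e-05 J/m^3
print(f"P = {P:.1e} Pa = {P_atm:.0e} atm") # ~6.1e-06 Pa, ~6e-11 atm
```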
12.9. The currently accepted explanation of the phenomenon involved in the Crookes radiometer was first proposed by the English physicist Osborne Reynolds, in 1879. It considers the flow of gas around the edges of the vanes from the warmer black sides to the cooler white sides. The slight tangential viscous friction causes the vanes to move in the direction from black to white. When a cold body is placed near the radiometer, the motion of the gas molecules is reversed, and the rotation occurs in the opposite direction. Radiation pressure is not the force responsible for the movement. In fact, if it were, the vanes would rotate in the opposite direction, since the photons are absorbed by the black surfaces but reflected from the white (or silver) sides, thus transferring twice the momentum. The rotation of the vanes is controlled by the radiation pressure only when a high vacuum is maintained within the sealed glass bulb.
12.10. For the calculation of the total emissive power of the Earth, we can use the Stefan–Boltzmann law, equation [12.9]:

\[ E(T) = \xi T^{4} = 5.67\times 10^{-8}\ \mathrm{\frac{W}{m^{2}\,K^{4}}}\,(288\ \mathrm{K})^{4} = 390\ \mathrm{\frac{W}{m^{2}}} \]
To determine the wavelength of the peak power, we can use Wien's law obtained in exercise 12.6:

\[ \lambda\,(\mathrm{nm}) = 14.3 + \frac{2.9\times 10^{6}}{T\,(\mathrm{K})} = 10084\ \mathrm{nm} \]
The emissive power of the Earth peaks in the IR, at a wavenumber of 992 cm−1.
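Both numbers can be checked with a few lines of Python (the symbol ξ is the Stefan–Boltzmann constant, as in equation [12.9]):

```python
# Stefan-Boltzmann emissive power and Wien peak for the Earth (T = 288 K).
xi = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
T = 288.0         # mean surface temperature of the Earth, K

E = xi * T**4                  # emissive power, W/m^2
lam_peak = 14.3 + 2.9e6 / T    # Wien's law from exercise 12.6, nm
wavenumber = 1e7 / lam_peak    # in cm^-1 (1 cm = 1e7 nm)

print(f"E = {E:.0f} W/m^2")                 # ~390 W/m^2
print(f"peak: {lam_peak:.0f} nm = {wavenumber:.0f} cm^-1")  # ~10084 nm, ~992 cm^-1
```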
12.11. Part I: By increasing the vision range from 1.0 patch to 7.0 patches, the flocking birds
converge more quickly on more massive flocks.
Part II: The most “flock-like” behavior is achieved by fixing the minimum-separation
to 1.0.
13 How to Untangle Complex Systems?
Everything should be made as simple as possible, but not simpler.
Albert Einstein (1879–1955 AD)
Every revolution was first a thought in one man’s mind.
Ralph Waldo Emerson (1803–1882 AD)
I think it’s fair to say that personal computers have become the most empowering tool we’ve
ever created. They’re tools of communication, they’re tools of creativity, and they can be
shaped by their user.
Bill Gates (1955–)
13.1 INTRODUCTION
In Chapter 12, we learned what the Complexity Challenges are and why we encounter so many difficulties in overcoming them. Now, the question is: How can we tackle the Complexity Challenges effectively? Whenever we deal with Complex Systems, we need to collect, handle, and process Big Data, i.e., massive data sets (Marx 2013). Therefore, to meet the Complexity Challenges, we need to contrive smart methods to cope with the vast volume and the fast stream of data we collect, their variety (they come in many formats, such as numbers, text documents, video, audio, etc.), and their variability and complexity (because they are often interlinked). If we want to extract insights from Big Data, it is essential to speed up our computational rate and find new ways to process data, i.e., new algorithms. There are two main strategies to succeed. One consists of improving current electronic computers; the other is the interdisciplinary research line of Natural Computing.
13.2 IMPROVING ELECTRONIC COMPUTERS
Since their first appearance in the 1950s, electronic computers have become ever faster and ever more reliable. Despite this constant progress, current computers have the same architecture as the first ones: a structure worked out by John von Neumann (read Chapter 2 for the details). The principal elements of the so-called von Neumann computer are an active central processing unit (CPU) and a passive memory. The memory stores two fundamental types of information. First are the instructions, which the CPU fetches in sequence and which tell the CPU what to do, whether to perform an operation, compare bits, or fetch other data from memory. Second are the data, which are manipulated according to the instructions of the program. Information is encoded as binary digits through electrical signals, and transistors are the basic switches of the CPU. A transistor is a semiconductor device. There are different types of transistors (Amos and James 2000). A Field-Effect Transistor (FET), broadly used in digital and analog electronics to implement integrated circuits, is made of two negatively charged regions, called source and drain, separated by a positively charged region, called
FIGURE 13.1 The schematic structure of a MOSFET in its OFF and ON states (a), and its frontal view (b), wherein Lch is the length of the channel, whereas Tox is the width of the insulator.
substrate (see Figure 13.1a). The substrate is surmounted by a thin layer of insulator (for instance, SiO2 or HfO2; in this case, the transistor is named a MOSFET, where MOS stands for Metal Oxide Semiconductor) and an electrode, called the gate, which allows applying a voltage to the substrate. A rule of thumb (Cavin et al. 2012) suggests that the width of the insulator, Tox, must be ≈1/30 of the channel length, Lch (see Figure 13.1b). When no voltage is applied to the gate (0 V), the positively charged substrate acts as a barrier that does not allow electrons to flow from the source to the drain. If a positive voltage (+V) is applied to the gate, the positive charges of the substrate are pushed away, and a negatively charged conduction channel is established between the source and the drain. It is evident that a FET acts as a switch that is ON when the gate voltage is applied and OFF when it is not. Transistors are used to construct the fundamental elements
of integrated circuits and CPU, which are physical devices that perform logical operations and are
called logic gates. Devices realized with logic gates can process either combinational or sequential logic. A combinational logic circuit is characterized by the following behavior: in the absence of any external force, it is at equilibrium in its state S0. When an external force F0 is applied, it switches to a new state S1 and remains in that state as long as the force is present. Once the force is removed, it goes back to S0. On the other hand, a sequential logic circuit is characterized by the following behavior: if it is in the state S0, it can be switched to the state S1 by applying an external force F0→1. Once it is in the state S1, it remains in this state even when the force is removed. The transition from S1 to S0 is promoted by applying a new force, F1→0. Once the logic circuit is in the state S0, it remains in this state even when the force F1→0 is removed. In contrast to the combinational logic circuit, the sequential one remembers its state even after removal of the force.
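The distinction between the two behaviors can be made concrete in code. The following toy Python models (the class names and the string encoding of states and forces are illustrative, not from the text) show that the combinational element's output depends only on its present input, while the sequential element latches its state:

```python
# Toy models of combinational vs. sequential logic behavior.
# States are "S0"/"S1"; forces are modeled as input signals.

class CombinationalElement:
    """Output depends only on the present input: S1 while the force is applied."""
    def state(self, force_applied):
        return "S1" if force_applied else "S0"

class SequentialElement:
    """Output depends on history: the state is latched until the opposite
    transition force is applied (like a flip-flop)."""
    def __init__(self):
        self._state = "S0"
    def apply(self, force=None):
        if force == "F0->1":
            self._state = "S1"
        elif force == "F1->0":
            self._state = "S0"
        # force is None: nothing changes, the state is remembered
        return self._state

comb = CombinationalElement()
print(comb.state(True), comb.state(False))   # S1 S0: no memory

seq = SequentialElement()
print(seq.apply("F0->1"), seq.apply(None))   # S1 S1: state persists
print(seq.apply("F1->0"), seq.apply(None))   # S0 S0
```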
Since 1965, the pace of improvement of electronic computers has been described by the law
formulated by Gordon Moore (co-founder of Intel) stating that the number of transistors on a chip
doubles every year. Ten years later, Moore revised his law, stating that the number of transistors
on a chip actually doubles every two years (Moore 1995). By increasing the number of transistors
per chip, the number of computational steps that can be performed at the same cost grows. In fact,
by miniaturizing a transistor, the voltage needed to power the transistor scales downward, too
(Dennard et al. 1999). In integrated circuit manufacture, billions of transistors and wires are packed
in several square centimeters of silicon, with meager defect rates, through precision optics and pho-
tochemical processes (Markov 2014). Nowadays, silicon wafers are etched to the limit of 14 nm, and
the use of X-ray lasers promises to overcome the limit of 10 nm (Markov 2014). The miniaturization
of transistors is limited by their tiniest feature, the width of the gate dielectric, which has reached
the size of several atoms. This situation creates some problems. First, a few missing atoms can alter
transistor performance. Second, no two transistors are exactly alike. Third, electric current tends to
FIGURE 13.2 Lateral view (a) and top view (b) of a tri-gate FinFET.
leak through thin dielectrics, due to quantum-mechanical tunneling, making the ON and OFF states of the transistors unreliable. Electronic engineers are striving to work around these problems. For example, they have devised tri-gate FinFET transistors (see Figure 13.2). A FinFET is a redesigned transistor having wider dielectric layers that surround a fin-shaped channel. Moreover, the presence of three gates allows controlling the flow of current on three sides rather than just one.
One indicator of the CPU performance is the maximum number of binary transitions per unit
of time ( β):
\[ \beta = \nu M \qquad [13.1] \]
where M is the number of switches working with the clock frequency ν of the microprocessor (Cavin
et al. 2012). The computational power of a CPU, measured in the number of instructions per second,
is directly proportional to β. Therefore, it is evident that for larger computational power, it is impor-
tant to increase not only M but also ν . Researchers have found that a silicon CPU can work at most at
a frequency of 4 gigahertz without melting from excessive heat production. To overcome this hurdle,
it is necessary to introduce either an effective cooling system or multi-core CPUs. A multi-core CPU
is a single computing element with two or more processors, called “cores,” which work in parallel.
The speed-up (Sp) of the calculations is described by Amdahl's law (Amdahl 1967):

\[ Sp = \frac{1}{(1-P)+\dfrac{P}{N}} \qquad [13.2] \]
In [13.2], N is the number of cores, P is the fraction of a software code that can be parallelized, and (1 − P) is the portion that remains serial. The speed-up grows by increasing N and P, as shown graphically in Figure 13.3. The challenge becomes the parallelization of the computational problems. One can spend many years getting 95% of code to be parallel, and never achieve a speed-up larger than
twenty times, no matter how many processors are involved (see Figure 13.3). Therefore, it is important to continue to increase the number of transistors per chip ( M in equation [13.1]) by miniaturization.
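Equation [13.2] makes this ceiling concrete: as N grows without bound, Sp approaches 1/(1 − P), which is 20 for P = 0.95. A short Python sketch:

```python
# Amdahl's law: speed-up of a code with parallel fraction P on N cores.
def speedup(P, N):
    return 1.0 / ((1.0 - P) + P / N)

for N in (1, 10, 100, 10**6):
    print(N, round(speedup(0.95, N), 2))   # grows toward, but never reaches, 20

# The asymptotic limit for N -> infinity is 1/(1 - P):
print(1.0 / (1.0 - 0.95))                  # 20.0 for P = 0.95
```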
However, sooner or later, Moore's law will cease to hold, because transistors will be made of only a few atoms.
The twilight of Moore's law does not mean the end of progress (Waldrop 2016). In fact,
chip producers are investing billions of dollars to contrive computing technologies that can go
beyond Moore’s law. One strategy consists of substituting silicon with other computing materials.
For example, graphene: a 2-D hexagonal grid of carbon atoms. Electrons in graphene can reach
relativistic speeds and graphene transistors, which can be manufactured using traditional equipment
and procedures, work faster than the best silicon chips (Schwierz 2010). However, the main disadvantage is that graphene does not have a band gap like silicon. Therefore, it is not suited to processing binary logic but rather analog, or continuous, logic (Bourzac 2012). When sheets of graphene are rolled into hollow cylinders, we obtain carbon nanotubes, which have small band gaps and become useful for
processing digital logic. However, carbon nanotubes are delicate structures. Even a tiny variation
FIGURE 13.3 Graphical representation of Amdahl's law as a function of N for different values of P (P = 0.25, 0.5, 0.75, 0.95).
in their diameter and chirality (depending on the rolling angle) can induce the disappearance of the
band gap. Moreover, engineers must learn how to cast billions of nanotubes in ordered circuits using
the technology available for silicon (Cavin et al. 2012).
It is worth noticing that without a fast, high-capacity memory, a speedy CPU is useless. For this reason, researchers are trying to revolutionize the hierarchical structure of memory (see Figure 13.4). At the bottom, there is the non-volatile flash memory, which is based on transistors and has high capacity but low speed. Then, there is the moderately fast Dynamic Random-Access Memory (DRAM), where bits are stored in capacitors. Since the capacitors slowly discharge, the information is volatile and eventually fades unless the capacitor charge is refreshed periodically. On top, there is the Static Random-Access Memory (SRAM), which uses bi-stable flip-flop circuits to store the most frequently accessed instructions and data (for more details, read Shiva 2008).
The hierarchical structure of the memory can be revolutionized by introducing cells of memristors. A memristor (Chua 1971) is the fourth fundamental circuit element, along with the resistor, capacitor, and inductor. A memristor, short for memory resistor, is an electronic component whose resistance is not constant but depends on how much electric charge has flowed through it, and in what direction, in the past. In other words, the memristor remembers its history. When the electric power supply is turned off, the memristor retains its most recent resistance until it is turned on again (Yang et al. 2013). Cells of memristors can be exploited to devise a RAM that is no longer volatile. To achieve performances similar to those of SRAM, the memristor cells must be close to the CPU. This requirement is not easy to accomplish with the available technology. However, photons may be used to connect the memristor-based memory with the traditional SRAM working close to the CPU.
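The charge-dependent resistance can be illustrated with a toy model. The following Python sketch (a linear memristance between two resistance limits; the class name and all parameter values are made up for illustration, not taken from the text or from any real device) shows how the element "remembers" the net charge that has flowed through it, even when no current flows:

```python
# Toy memristor: resistance depends on the net charge passed through it.
class Memristor:
    def __init__(self, r_on=100.0, r_off=16000.0, q_max=1e-4):
        self.r_on, self.r_off, self.q_max = r_on, r_off, q_max
        self.q = 0.0                       # net charge passed so far, C
    def resistance(self):
        # Interpolate linearly between r_off (q = 0) and r_on (q >= q_max).
        frac = min(max(self.q / self.q_max, 0.0), 1.0)
        return self.r_off - (self.r_off - self.r_on) * frac
    def drive(self, current, dt):
        """Pass `current` (A) for `dt` seconds; charge accumulates with sign."""
        self.q += current * dt

m = Memristor()
print(m.resistance())              # 16000.0: starts in the high-resistance state
m.drive(1e-3, 0.05)                # forward current lowers the resistance
r_after = m.resistance()
print(r_after)                     # 8050.0
m.drive(0.0, 10.0)                 # power off: no current flows
print(m.resistance() == r_after)   # True: the resistance is remembered
```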
FIGURE 13.4 The features of the three types of memory in an electronic computer, ranked by speed and capacity.
Another strategy to reduce data-movement demands and data access bottlenecks between logic
