Emergent behaviour

Introduction

Air transportation systems are facing the challenge of innovating air and ground infrastructures and air traffic management protocols to meet the projected passenger volumes and quality of service expected in future years. The most critical aspect of this challenge is how to overcome the problem of airspace capacity limitation while ensuring high levels of efficiency together with the highest standards of security.

General objective

In recent years there has been a growing awareness that the air transportation system cannot be planned, optimized, monitored and investigated by focusing separately on the different modules building up the system, see Holmes (2004). In fact, although the majority of the interactions among air traffic actors are relatively localized in space and time, the complex interconnections present among all the actors of this socio-technical system make the air transportation system a "system-of-systems" structured as a layered collection of interacting networks, see for example DeLaurentis et al. (2006). For example, Holmes (2004) has identified four main network layers of the air transportation system: (i) a physical layer (with airports as nodes and airways as links), (ii) a transportation layer (with aircraft as nodes linked by air traffic control radar), (iii) an operational layer (with a network of pilots, crew, controllers, etc. linked by a VHF communication system) and (iv) an application layer (with people, goods and travel planners setting the needs of air capacity over the short and medium term).

In each of these network layers the interconnection of a plurality of heterogeneous socio-technical actors produces emergent phenomena impacting multiple spatial regions on multiple time scales. The position paper therefore needs to focus on the investigation and modelling of emergent processes observed in the current setting of air traffic management and in its future scenarios (planned and/or simulated). A key concept in the description of emergent properties of complex systems is that of phase transition. Phase transitions occur in physical, biological and social systems both in the presence and in the absence of tuning parameters. A paradigmatic example of a phase transition is provided by the phenomenon of percolation, which was originally introduced on simple geometrical lattices and has since been used in the modelling of many real systems. The statistical properties of emergent behaviours typically cannot be described by specific scales in space or in time; for this reason, their functional profile is often a power law.

In summary, the complexity of the interconnections and of the procedures present in the air traffic management systems and of their future developments implies that emergent phenomena are natural in this system. These emergent phenomena need to be investigated to assess the degree of efficiency, robustness and resilience of the system at a global scale.

Definitions

We will therefore illustrate hereafter a few emergent phenomena observed in a socio-technical complex system such as the ATM system. Before going further, it is however useful to set up a glossary of terms specifically used in the ATM context. Networks and power laws were defined earlier, in Section 0.2.1.

Phase Transition: the original concept of “phase transition” refers to the fact that a thermodynamic system can change its state of matter. In the complex systems context, “phase transition” refers to the fact that a complex system made of many locally interacting elementary elements may, under certain conditions, present a collective state (called a phase of the system). The properties of this state are controlled by a variable named the “order parameter”. It is also said that the system shows criticality, meaning that there exists a “critical point” that marks the passage from a disordered state, in which the local interactions are predominant, to the collective state. A fingerprint of criticality is the presence of power-law behaviour in variables describing the system.

Percolation: the original concept of “percolation” concerns the movement and filtering of fluids through porous materials. In the complex systems context, “percolation” refers to the fact that given two neighbouring sites, a link between them is present with probability $ p $ and its absence is observed with probability $ (1-p) $. Percolation is relevant in the study of the spreading of information over a network.

Scope

In this chapter we will focus mainly on a specific mechanism of emergent behaviour, namely phase transitions, with a special emphasis on jamming transitions and percolation phase transitions, because we believe they might play an important role in ATM modelling. Moreover, phase transitions are probably the most important example of emergent behaviour in statistical physics. We also discuss power laws, which are functional relations between variables of a complex system that are frequently observed around phase transitions. It is important to stress, however, that a power-law relation is not sufficient to claim that the system is at a critical state. In fact, it has been repeatedly shown that many mechanisms are able to produce power-law behaviour. Therefore the section on power laws focuses on a characteristic that is ubiquitous in complex systems but does not necessarily always overlap with critical phenomena.

For the topics mentioned above, we will present some case studies reported in the existing literature. These include jamming transitions in air traffic, percolation of congestion across sectors, and the relevance of power-law distributions in describing the hub-and-spoke airport network. This list is clearly partial and not exhaustive. We also present some possible future challenges for the application of the concept of emergence to ATM. We only marginally cover self-organized criticality, which is a form of critical behaviour not driven by an external parameter. In particular, we do not describe self-organization mechanisms creating the formation of patterns. This emergent behaviour is observed when an ordered spatial or spatio-temporal pattern emerges as the result of self-organization in systems out of equilibrium. This type of emergent phenomenon has been observed in developmental biology, chemical reactions, the growth of bacterial colonies, etc.

We also briefly consider emergent phenomena induced by the interaction of agents in an agent-based model of a complex system. In many socio-economic systems the interaction of individuals who pursue their own self-interest might lead to collective phenomena that are not expected by looking at the individuals alone. The paradigmatic example is the concept of the “invisible hand” of Adam Smith. More recent examples include systems where agents have bounded rationality.

Finally, we are not considering noise-induced phenomena, a class of emergent behaviour that has received growing attention in recent years. Typically, random fluctuations are considered a source of disorder in complex systems. In noise-induced phenomena, the interaction between noise and nonlinear dynamics may lead to the emergence of a number of ordered behaviours (in time and space) that would not exist in the absence of noise.

Research lines

The analysis of the research theme ‘Emergent behaviour’ is divided into three research lines; they are the following:

  • Phase transitions;
  • Percolation in non-homogeneous media;
  • Power laws in ATM complex systems.

Phase transitions

Problem statement

The concept of phase transition and criticality (Stanley (1971), Binney et al. (1992)) is probably the most important one when a physically oriented modelling of emergent behaviour is undertaken. In the simplest setting, a model (often a highly stylized toy model such as the Ising model (Huang (1987))) of many locally interacting elementary elements presents a collective state (called a phase of the system), whose properties are controlled by the temperature (or another thermodynamic variable) of the system and are characterized by a variable named the order parameter. By changing the temperature of the system (or any variable that can play its role in complex systems) the system presents an abrupt transition between different phases (e.g. a transition from a paramagnetic to a ferromagnetic phase). The nature of the phase of the system cannot be related to the microscopic nature of the basic elements composing the system (a two-state up or down variable in the Ising model). Different phases are separated by a critical state. The physical properties of the system near the critical state can belong to universality classes characterized by the nature of the order parameter of the system, Binney et al. (1992).

Statistical physics has also developed the concept of self-organized criticality (see Bak et al. (1987)). This concept shows that a critical state is not only encountered when a system switches between two distinct macroscopic phases; criticality can also be observed in complex systems that naturally converge to a critical state without external tuning.

A different type of transition is the jamming transition. In some materials, such as granular materials, it is observed that by increasing the density the material becomes rigid. This type of transition has similarities with the glass-liquid transition, which is observed when an amorphous material transforms from a hard and brittle state into a rubber-like state. Technically speaking, it is still debated whether the jamming transition is a phase transition, even if recent results seem to suggest a positive answer to this question (Biroli (2007), Key et al. (2007)), pointing out the difference with other phase transitions, such as the formation of a crystal. In a jamming transition at fixed temperature, the increase of the density limits the possibility that a particle explores the phase space, and the matter behaves as a solid. If one increases the temperature, the system might be able to un-jam. Therefore in jamming transitions there is a critical density, which signals the transition. This critical density depends on the details of the considered system. For example, the shape of the constituents or the nature (attractive versus repulsive) of the interaction between them plays an important role.

Literature review

The literature on phase transitions is so vast that it is quite difficult to summarize it here. Moreover, it is probably more relevant in this case to review some applications of phase transitions to systems that are or might be close to ATM systems and that might therefore be relevant as future research challenges (see also below).

For the purpose of this position paper an important example of emergent behaviour associated with a phase transition is the case of traffic jams. As is well known, in a traffic jam the average velocity of cars on a road may drop sharply when the density of cars is increased. This system is therefore analogous to the flow of grain in a pipe. When the density increases the velocity decreases and the flow can stop. The interaction between the grains represents the interaction between cars, which need to stay at a certain distance from the car in front. Several authors have shown how to build stylized (or toy) models of cars flowing along a road in which traffic jams arise in a way similar to a (statistical mechanics) jamming transition.

One of the first and most popular of such models is the Nagel-Schreckenberg model (Nagel et al. (1992), Eisenblatter et al. (1998); for a review, see Helbing (2001)). The model describes a flow of cars on a freeway and shows how traffic jams can arise as an emergent collective phenomenon due to the interaction between cars. In the model the road is divided into cells aligned in a single row, i.e. it describes a single lane where no passing is allowed; moreover, periodic boundary conditions are often imposed. Each cell describes the space available for a car and at each time it can be in one of two states, either empty (no car) or filled (one car). Each car has a velocity that ranges from zero to a predefined maximum. Time is also discretized and therefore the model can be thought of as a cellular automaton. There are several variants of the model, but each of them prescribes a set of rules sequentially followed by the cars. For example, each car tries to increase its velocity (unless a car in front prevents it), and sometimes randomness is added to the model. The main feature of the model is highlighted by considering the relation between the average car velocity and the density of cars (for a given set of parameters, such as the maximum velocity or the randomization parameter). Numerical simulations show that for small density the average velocity is high and, in the deterministic case, equal to the maximum velocity. Then there is a critical car density where one observes a discontinuity in the slope (i.e. the derivative) due to the sudden appearance of traffic jams. As the density increases further, the average velocity decreases until it reaches zero when the road is 100% occupied. An example of this type of behaviour is shown in Figure 2.1.

When the density is smaller than a threshold (in this case, 0.2) the average car velocity is maximum, indicating that the system is in the flowing state. After the transition, the average velocity decreases to zero when the density of cars is increased.

The effect of randomization is to reduce the average velocity in the low-density phase, but it also lowers the critical density at which traffic jams appear. Moreover, randomization rounds off an otherwise sharp transition from free flow to the jammed state. Note that the model gives a clear example of an emergent phenomenon in traffic systems. In fact, traffic congestion can emerge without external influences, such as accidents or bottlenecks; it emerges just because of crowding on the road.
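To make the mechanism concrete, the update rules described above can be implemented in a few lines. The following is a minimal sketch (not the exact parametrization of Nagel et al. (1992)); road length, maximum velocity and slowdown probability are illustrative values:

```python
import numpy as np

def nasch_step(pos, vel, road_len, v_max, p_slow, rng):
    """One parallel update of the Nagel-Schreckenberg rules on a ring road."""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    # Gap to the car ahead (periodic boundary conditions).
    gaps = (np.roll(pos, -1) - pos - 1) % road_len
    vel = np.minimum(vel + 1, v_max)          # 1) accelerate
    vel = np.minimum(vel, gaps)               # 2) brake to avoid collisions
    slow = rng.random(len(vel)) < p_slow
    vel[slow] = np.maximum(vel[slow] - 1, 0)  # 3) random slowdown
    pos = (pos + vel) % road_len              # 4) move
    return pos, vel

def average_velocity(density, road_len=1000, v_max=5, p_slow=0.3,
                     steps=2000, warmup=500, seed=0):
    rng = np.random.default_rng(seed)
    n_cars = int(density * road_len)
    pos = rng.choice(road_len, size=n_cars, replace=False)
    vel = np.zeros(n_cars, dtype=int)
    v_sum, n_obs = 0.0, 0
    for t in range(steps):
        pos, vel = nasch_step(pos, vel, road_len, v_max, p_slow, rng)
        if t >= warmup:
            v_sum += vel.mean()
            n_obs += 1
    return v_sum / n_obs

# Sweeping the density qualitatively reproduces the flow/jam transition:
for rho in (0.05, 0.1, 0.2, 0.4, 0.8):
    print(rho, average_velocity(rho))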

Another example of phase transitions that can be relevant for ATM is the set of critical phenomena observed in agent-based models. In fact, analytical (when feasible) and numerical investigations of an agent-based model can highlight the presence of an order parameter describing different phases of the system. A paradigmatic example of an agent-based model describing agents' decisions in a framework of inductive reasoning of economic agents, extensively investigated both analytically and numerically, is the so-called El Farol bar problem (Arthur (1994)) and the corresponding formalized version, the minority game (Challet et al. (1997), Challet et al. (2005)). For a review of the statistical mechanics approach to socio-economic systems with heterogeneous agents the interested reader can consult De Martino et al. (2006).
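For readers unfamiliar with the minority game, the following is a minimal sketch of its standard setup (an odd number of agents, each holding a few fixed random strategies and playing the one with the best virtual score); all parameter values are illustrative:

```python
import numpy as np

def minority_game(n_agents=301, memory=3, n_strategies=2,
                  steps=5000, seed=0):
    """Toy minority game: at each round the agents on the minority side win."""
    rng = np.random.default_rng(seed)
    n_hist = 2 ** memory                      # number of possible histories
    # Each strategy maps every possible history to an action in {-1, +1}.
    strategies = rng.choice([-1, 1], size=(n_agents, n_strategies, n_hist))
    scores = np.zeros((n_agents, n_strategies))
    history = int(rng.integers(n_hist))       # encoded string of past winners
    attendance = []
    for _ in range(steps):
        best = scores.argmax(axis=1)                   # best strategy per agent
        actions = strategies[np.arange(n_agents), best, history]
        a = int(actions.sum())                         # aggregate attendance
        attendance.append(a)
        winning = -1 if a > 0 else 1                   # minority side wins
        scores += strategies[:, :, history] * winning  # update virtual scores
        history = ((history << 1) | (winning > 0)) % n_hist
    return np.array(attendance)

# The control parameter is alpha = 2**memory / n_agents; sweeping it reveals
# the phase transition studied in Challet et al. (2005).
a = minority_game()
print("volatility per agent:", a.var() / 301)
```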

The modelling of emergent behaviour in biological and social sciences has pointed out the importance of the hierarchical structure of complex systems. In the classic setting of this concept H.A. Simon stated that a hierarchic system is "a system that is composed of interrelated subsystems, each of the latter being, in turn, hierarchic in structure until we reach some lowest level of elementary subsystem" (Simon (1962)). The presence of a hierarchical organization with different levels makes it natural to observe that intrinsically different scientific descriptions are needed in the modelling of different levels of the system. Anderson pointed out this concept by stating "the behaviour of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of a simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear, and the understanding of the new behaviours requires research which I think is as fundamental in its nature as any other" (Anderson (1972)). In other words, at different hierarchical levels emergent properties appear, and they might need a scientific explanation that cannot be given in terms of the scientific laws describing the constituent parts of the lower hierarchical level. When the hierarchy of the system is of a self-similar nature the concepts of scaling and fractal geometry naturally apply. Scaling (Kadanoff (1990), Stanley (1999)) is a concept that originated in different areas of mathematics and the physical sciences. It is observed in: (i) the absence of a specific scale for some variables of a system at a critical state; (ii) the allometric laws (West et al. (1997)) observed between variables characterizing a system, where deviation from isometric scaling is often due to dimensional constraints, as is observed, for example, in turbulence; (iii) the relationships among observables which are functions of random variables (for example a linear sum, a maximum or minimum value, etc.) and their number.

Research challenges

Identification of phase transitions and emergence of collective phenomena in ATM.

As in many complex systems, it is important to identify the possible phase transitions that can emerge in a model of air traffic management. In fact, the microscopic interaction of many heterogeneous elements can give rise to unexpected emergent phenomena, even in the absence of an external driving event (such as a large strike or volcanic ash). An example, preliminarily explored in the paper reviewed in Case Study #1, is the emergence of air traffic jams. The current structure of airspace, with airways and navigation points, and the bottlenecks represented by airports, suggest the possibility that an increasing density of traffic, combined with a significant amount of randomness, can push the system close to a transition similar to a jamming transition.

Another important challenge arises when one considers the ATM system as a socio-technical system. In this case, agent-based modelling is a natural tool to investigate the emergence of collective phenomena from the interaction of individuals. As mentioned above, it is not uncommon to observe phase transitions in these models, where the phases represent different possible collective states of the system.

The expected rapid increase of traffic in the European and worldwide airspace will challenge the current structure of the system. It is important therefore to know what increase in air traffic density the system can sustain before a significant increase in jams is observed. Clearly, jams in a rigidly controlled system such as air traffic mean an increase in delay frequency, which, in the current structure of indirect connectivity, means in turn significant problems for passengers. In a SESAR scenario, the topology of routes will completely change and this might have a significant effect on the critical density at which the emergent phenomenon of jamming could appear. Carefully empirically calibrated models could help in answering these questions.

A better understanding of the behaviour of air traffic when the density of traffic is increased will clearly have a significant impact on the planning of the future scenario for air traffic management.

Besides the amplification of delay frequency, one should also ask what increase is possible in the rate of safety events and accidents, and how their frequency scales with aircraft density.

All these questions might be tackled by having a better understanding of the different phases in which the system can be found.

Percolation in non-homogeneous media

Problem statement

What is percolation? Percolation is a random process exhibiting a phase transition. In the simplest setting percolation is investigated in simple geometrical systems such as regular lattices covering a 2D surface. Even in this simplest setting there are different variants of the percolation problem. Specifically, one speaks about bond percolation when a link between two neighbouring sites is present with probability $ p $ and absent with probability $ (1-p) $. In this variant all sites are present in the system and links between any pair of neighbouring sites may or may not be present. In the other variant, site percolation, the links of the lattice are always present between two occupied sites, but each site is occupied with probability $ p $ and empty with probability $ (1-p) $.

In spite of the simplicity of the setting, the problem of obtaining the percolation probability, i.e. the probability that there is a continuous path from an arbitrarily selected site of the system to infinity, is not an easy task. The percolation probability switches from 0 to 1 (the probabilities observed in the two extreme cases of an empty lattice and a fully-connected lattice, respectively). Theoretical considerations and numerical simulations show that the percolation probability changes abruptly around a specific value $ p_c $ called the critical probability. Exact solutions have been obtained in a few cases, for example 2D bond percolation on a square lattice, bond and site percolation on a triangular lattice, and bond percolation for d-dimensional lattices with $ d\geq19 $, or with $ d>6 $ when an additional hypothesis on the insertion of links between any two sites within a finite distance is assumed. An exact solution is also known for the case of the Cayley tree (also known as the Bethe lattice). In the large majority of cases the abrupt increase of the percolation probability at $ p\approx p_c $ is investigated by approximate methods and/or numerical simulations. The abrupt transition of the percolation probability between two distinct macroscopic states observed for $ p<p_c $ and $ p>p_c $, together with the presence of many functions characterizing the system which show power-law behaviour when $ p\approx p_c $, are signatures of the universal behaviour characterizing systems at a critical state or undergoing a phase transition.
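A minimal Monte Carlo sketch of site percolation on a square lattice illustrates the abrupt change around $ p_c $; here spanning from the top to the bottom row of a finite lattice is used as a proxy for the existence of an infinite cluster, and finite-size effects smear the transition:

```python
import numpy as np
from scipy.ndimage import label

def spans(p, n=100, rng=None):
    """One realization of site percolation on an n x n square lattice:
    does an occupied cluster connect the top row to the bottom row?"""
    rng = rng or np.random.default_rng()
    occupied = rng.random((n, n)) < p
    labels, _ = label(occupied)          # 4-neighbour clusters
    top = set(labels[0][labels[0] > 0])
    bottom = set(labels[-1][labels[-1] > 0])
    return bool(top & bottom)

def spanning_probability(p, n=100, trials=200, seed=0):
    rng = np.random.default_rng(seed)
    return sum(spans(p, n, rng) for _ in range(trials)) / trials

# The spanning probability rises sharply near the site-percolation
# threshold of the square lattice, p_c ~ 0.5927:
for p in (0.50, 0.55, 0.59, 0.63, 0.70):
    print(p, spanning_probability(p))
```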

The percolation approach has been extended to disordered systems. Examples are studies describing electric transport in random media and invasion percolation, i.e. the problem of one fluid invading a porous medium along a path of least resistance. Percolation has also been investigated in statistical and geometrical fractals. Recently the concept of percolation has been used in the modelling of network properties. In fact, in the investigation of networks describing, for example, the internet, social networks, and the power grid, the resilience of these networks to either random or targeted deletion of network nodes has been empirically investigated and theoretically modelled by using concepts and tools of percolation theory on graphs characterized by both Poisson and scale-free degree distributions at their vertices.

Percolation is also a key concept in the study of the spreading of epidemics on a network.

Literature review

Percolation (Stauffer et al. (1994)) is today a key concept in complexity theory, statistical physics and the mathematical description of random media. The first mathematical formalization of percolation theory was provided by Broadbent and Hammersley (Broadbent (1957)). However, before this mathematical formalization, Flory and Stockmayer used percolation concepts in the modelling of the polymerization process that leads to gelation. In their studies, they developed a description of percolation on the Bethe lattice (or Cayley tree) (Flory (1941), Stockmayer (1943)).

Harry Kesten (Kesten (1980)) obtained the first exact value of the critical probability $ p_c $ for 2D bond percolation on a square lattice ($ p_c=1/2 $). Exact values of the critical probability $ p_c $ are known only for a few other cases of bond and site percolation on a triangular lattice and bond percolation on a honeycomb lattice (Kesten (1982)). However, Scullard (Scullard (2006)) and Scullard and Ziff (Scullard et al. (2006)) proved in 2006 that this exact knowledge can be used to obtain exact values of the critical probability of other 2D lattices obtained by performing certain nonlinear mappings on the triangular lattice.

Percolation theory has also been used in the modelling of random media since Flory's pioneering work. An area of wide application of the percolation concept to random media is the transport of charged carriers. Scott Kirkpatrick (Kirkpatrick (1973)) was the first to investigate the normalized conductance of random resistor networks. Other applications focus on disordered systems, fractals (Havlin et al. (1987)), statistical topography, turbulent diffusion, and heterogeneous media (Isichenko (1992)).

Under standard percolation, criticality is reached at the critical probability. However, there is a variant of percolation that automatically finds the critical points of the system. This variant, named invasion percolation (Wilkinson et al. (1983)), was originally proposed to describe the process of one fluid displacing another from a porous medium under the action of capillary forces. However, the same concept has been applied to many kinds of invasion processes proceeding along paths of least resistance. Percolation concepts are also successfully used in the study of epidemics (Grassberger (1983), Cardy et al. (1985)).

The concept of percolation is also fruitfully used in network theory (Cohen et al. (2000), Callaway et al. (2000)), especially for the evaluation of the resilience of networks of different topologies. Another research area widely using percolation concepts is the one investigating epidemic problems in social systems and in complex networks (Moore et al. (2000), Pastor-Satorras et al. (2000)).

Percolation concepts have also been used in the modelling of socio-technical systems. A percolation model was developed to show how the sales of a new product might penetrate the consumer market (Goldenberg et al. (2000)). Another application concerns the modelling of innovation. Innovations in social systems occur highly clustered in time, rather than as if they were generated purely at random in time. Technological change is often modelled in the economic literature as a process following technological trajectories. It has been proposed that these empirical observations be modelled by considering a complex technology space whose dynamics are described in terms of percolation theory (Silverberg et al. (2005)). The technological space is searched randomly in local neighbourhoods. Within the proposed model, numerical simulations show that by increasing the diameter of the search, the probability of becoming deadlocked declines and the mean rate of innovation increases. The distribution of innovation cluster sizes is highly skewed and may resemble a power-law distribution near the critical percolation probability.

It has been suggested that the modelling of air traffic control can benefit from the use of percolation concepts in the analysis and modelling of the spatio-temporal diffusion of congestion across sectors of the airspace (Conway (2005), Ben Amor et al. (2006)). Preliminary attempts show that percolation concepts can be fruitfully used in the analysis, modelling and simulation of the propagation of capacity saturation to neighbouring sectors triggered by en route modifications of the flight trajectories (Conway (2005), Ben Amor et al. (2006)). Moreover, percolation can be a guiding concept in the analysis of the network structure of future designs for automated conflict detection in the next generation air transportation system. In fact, percolation concepts are useful in the investigation and evaluation of the trade-offs of alternative concepts of collaborative operations, expected conflicts, communications requirements, and vulnerability of the transportation system to targeted attacks. Percolation concepts are indeed qualifying network architectures for air traffic control, enabling in-depth analysis and evaluation of the resulting system (Chen et al. (2011)).

Research challenges

Emergence of Percolation Phenomena in ATM.

The main research challenge to be addressed first in the area of emergence of percolation phenomena in ATM is whether percolation concepts can be fruitfully used in the analysis, modelling and numerical simulation of air traffic management time records and in the modelling of future scenario simulations performed with different procedures such as, for example, agent-based modelling. Key research questions are: Is the process controlled in statistical terms by a parameter describing the probability of congestion of each airspace/sector? Is the topology of the airspace/sector interconnections relevant to the "percolation" of the congested status across wide regions of the airspace? Is a centralized management of Air Traffic Control (ATC) more or less sensitive to "avalanches" of delays than a decentralized allocation of flight trajectories?

En route delay propagation is a non-local problem. Delay originating in a specific airspace/sector can propagate quite far from the originating region. A basic design problem is to evaluate the sensitivity of the air traffic control system to the onset of an "avalanche" of delay as a function of the "avalanche" size. In other words, the empirical and theoretical investigation of the scaling relations of the number of airspaces/sectors affected by congestion or closure will be highly informative about the degree of robustness and resilience of the current and/or designed ATC system. This research challenge implies the solution of several research problems, which are state-of-the-art problems in mathematics, computer science, network theory and statistical physics. In fact, the study of percolation in time-dependent networks of arbitrary topology presents many unsolved problems.

Percolation makes the problem of en route delay non-local. The understanding of the spatio-temporal scaling relations of the delay avalanches observed in the global system of airspaces/sectors will provide theoretical instruments for the quantification of the degree of robustness and resilience of the air traffic control system.

Power laws in ATM complex systems

Problem statement

Power laws were introduced in Section 0.2.1 and have been discussed at various points in this paper. Let us develop the discussion by illustrating the concept further with an example taken from Binney et al. (1992): consider $ f_1=(x/L)^a $ and $ f_2=e^{x/L} $. Both functions involve a scale parameter $ L $. Consider the $ f_2 $ function in the interval $ \left[ L/2, 2L\right] $. The ratio between the largest and the smallest value in this interval is $ r_2=e^{3/2} $, while for the $ f_1 $ function the ratio is $ r_1=4^a $. Now consider the interval $ \left[ 5L, 20L\right] $. For the $ f_2 $ function the ratio between the largest and the smallest value is $ r_2=e^{15} $, while for the $ f_1 $ function the ratio is again $ r_1=4^a $. Finally, consider the interval $ \left[ 50L, 200L\right] $. For the $ f_2 $ function the ratio is $ r_2=e^{150} $, while for the $ f_1 $ function the ratio is still $ r_1=4^a $. These results show that if we draw graphs of the $ f_1 $ function in any of the three intervals they can be superimposed by a simple change of variable. The same is not true for the $ f_2 $ function. In this sense a variable obeying a power law looks the same no matter on what scale one probes it. We refer to such a property by saying that the variable is scale-free (as introduced in Section 0.2.1). The relevance of scale-free behaviour was popularized by Mandelbrot, in Mandelbrot (1977), where the word “fractals” was used to describe the non-differentiable patterns that satisfy power-law scaling when the power-law exponent is not an integer. Fractals have the property that, by using an appropriate magnifying glass, one sees the same behaviour across different scales.
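The interval independence of $ r_1 $ is an instance of the general homogeneity property of power laws, worth stating explicitly: $ f_1(\lambda x) = (\lambda x/L)^a = \lambda^a f_1(x) $, so rescaling $ x $ by a factor $ \lambda $ only multiplies $ f_1 $ by the constant $ \lambda^a $, which amounts to a change of units. By contrast, $ f_2(\lambda x)=e^{\lambda x/L} $ is not proportional to $ f_2(x) $, so no change of units can superimpose its graphs taken at different scales.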

Literature review

Statistical regularities expressed in terms of power-laws can be observed in many natural and social phenomena.

Relevant examples are the allometric relations between biological variables. For example, the metabolic rate B of an entire organism scales like $ M^{3/4} $ with respect to the mass M of the organism, the cross-sectional area of the mammalian aorta scales again like $ M^{3/4} $, while the time of blood circulation scales like $ M^{1/4} $, see West et al. (1997) and West (1998). Another example of a power law is given by Zipf's law. Originally observed in the context of natural language analysis, this law states that after ranking the different words present in a certain text from the most to the least frequent, one observes that the most frequent word occurs approximately twice as often as the second most frequent word, three times as often as the third most frequent word, etc., i.e. the occurrence $ N_k $ of the $ k $-th most frequent word is proportional to $ 1/k $, see Zipf (1949) and Li (1992a). Similar relationships apply to other variables in other social complex systems, such as the population of cities in a given country, Gabaix (1999). Power laws are also observed in the probability distribution of some statistical variables. Examples include research areas as disparate as financial variables such as stock returns, see Mantegna et al. (2000), climate variables such as rainfall, see Peters et al. (2006), and sociological variables such as the number of sexual partnerships, see Liljeros et al. (2001). Finally, power laws can be observed in the auto-covariance function of some stochastic variables such as stock volatility, see Liu et al. (1999), or in the spectrum of non-coding DNA sequences, see Li et al. (1992b) and Li et al. (2002).

In the context of probability distributions, power-law relationships are observed in the tails of the distribution, i.e. $ p(x) $ decays to zero like $ p(x) \propto 1/\vert x \vert^a $ for large values of $ \vert x \vert $. It is usually said that distributions with power-law tails are an example of fat-tailed distributions, meaning that the probability that the variable $ x $ assumes a large value is bigger than expected for a Gaussian distribution, which exhibits exponentially decaying tails (a thin-tailed distribution). It is also worth mentioning that the moments $ \langle x^n \rangle =\int x^n p(x) dx $ are defined only when $ a-n>1 $. The pure power-law distribution, i.e. $ p(x)=1/\vert x \vert^a $ for $ \vert x \vert>x_0 $, is usually referred to as the Pareto distribution, after Vilfredo Pareto, who first introduced it in a paper about wealth distribution, see Pareto (1896). Examples of power-law distributions are the Cauchy, Student's t, Lévy stable, Lorentzian, log-Gamma and Fréchet distributions. It is worth mentioning that power-law distributions play a central role in the theory of extreme events. In fact, given any stochastic variable $ x $ with a well-behaved fat-tailed probability distribution, the maximal values $ M_k(n) $ obtained by performing a number $ k $ of draws, each of length $ n $, follow the Fréchet distribution in the limit $ k \rightarrow \infty $ and $ n \rightarrow \infty $.

The mechanisms that give rise to power-law decaying distributions are numerous. A basic point of view assumes that heterogeneity is the key aspect. In other words, systems with power-law distributions arise as the “weighted sum”, or better the convolution, of many heterogeneous uncorrelated subsystems, each characterized by its own length scale, see Gheorghiu et al. (2004). Other approaches give importance to spatial and temporal correlations of the statistical variable as a source of the power-law tails, see, for example, Bouchaud et al. (1990). Another research line assigns to power-law distributions the same role that the Gaussian distribution plays in classical thermodynamics. This is the case of the thermodynamic theory of non-extensive systems proposed by Tsallis (1998).

Power-law distributions can also be generated by other kinds of mechanisms, not necessarily related to heterogeneity issues, such as the ones based on random multiplicative processes with constraints, see Kesten (1973) and Levy et al. (1996). An extensive review of possible mechanisms able to generate power-law distribution is found in Newman (2005) and Malevergne et al. (2009).

Testing for power laws in probability distributions of empirical data can be very difficult, mainly because a power law is often an asymptotic property, and in a real data set one can never be sure whether or not the asymptotic regime has been reached. One popular estimator of the power-law exponent is the Hill estimator, originally proposed in Hill (1975). Given a variable $ x $ that is supposed to be power-law distributed, one considers the $ k $ elements such that $ x_i>x_0 $, where $ x_0 $ is a pre-determined threshold. The Hill estimator of the power-law exponent is $ \hat{a}=k/\sum_{i=1}^{k} \log (x_i/x_0) $, where the sum is performed over the above $ k $ elements. Naturally this estimate is largely affected by the choice of $ x_0 $, which exactly amounts to deciding where the asymptotic regime begins.
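A minimal sketch of the estimator as reconstructed above; note that the Hill estimator targets the exponent of the survival function $ P(X>x)\sim x^{-a} $ (for a density decaying as $ 1/x^{a+1} $ the two descriptions coincide), and the synthetic data below are only a sanity check:

```python
import numpy as np

def hill_estimator(x, x0):
    """Hill estimator of the tail exponent for samples above threshold x0."""
    x = np.asarray(x)
    tail = x[x > x0]
    k = len(tail)
    if k == 0:
        raise ValueError("no observations above the threshold")
    return k / np.sum(np.log(tail / x0))

# Sanity check on synthetic Pareto data with tail exponent a = 2.5.
rng = np.random.default_rng(1)
a_true = 2.5
# numpy's pareto() draws from a Lomax; adding 1 gives a Pareto with x_min = 1.
samples = rng.pareto(a_true, size=100_000) + 1.0
print(hill_estimator(samples, x0=2.0))   # close to 2.5
```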

When considering stochastic processes, power laws in the auto-covariance function of a stochastic variable indicate that the considered variable is long-range correlated, see Beran (1994). The auto-covariance function $ R(t,t')=\langle x(t)x(t')\rangle $ measures the correlation properties of a variable with respect to time. It gives a proxy of the amount of “memory” that a stochastic variable carries about its past behaviour. When the integral to infinity of the auto-covariance function is finite, the process is referred to as short-range correlated and the integral of the auto-covariance defines the typical time-scale of the process. The paradigmatic stationary short-range correlated variable is given by the Ornstein-Uhlenbeck (OU) stochastic process, whose auto-covariance function is $ R(t,t')=\exp (-\gamma\vert t-t'\vert) $. When the auto-covariance decays as a power law with an exponent $ b $ smaller than unity, the integral of the auto-covariance no longer exists and therefore a typical time-scale cannot be defined.

An alternative, yet equivalent, approach for studying the memory properties of a process is to investigate the mean squared displacement $ \langle\Delta s^2\rangle $ rather than the auto-covariance function. Given a stochastic variable $ x $, the mean squared displacement is nothing but the variance of the integrated process $ s(t)= \int x(t') dt' $. One can show that $ \langle\Delta s^2\rangle=\int \langle x(t') x(t'+t)\rangle dt' $. If the auto-covariance function decays like a power law with exponent $ b $, then the mean squared displacement grows with a power law with exponent $ c=2-b $. Since $ b<1 $, then $ c>1 $, and the variable is therefore said to be superdiffusive. By contrast, short-range correlated variables usually have a mean squared displacement that grows linearly in time. They are therefore said to be diffusive, see Einstein (1905).

The mechanisms generating power laws in the auto-covariance function are again quite numerous. Here we only want to mention that also in this case heterogeneity can play a special role. We illustrate this issue with an example relative to stationary Markovian processes. For this special class of stochastic variables, the auto-covariance function, under quite general constraints, can be written as $ R(t,t')=\int c(E)^2 \exp(- E \vert t-t' \vert ) dE $. In other words, the auto-covariance of the process is seen as the weighted sum of the auto-covariances of infinitely many OU variables, each characterized by a time-scale $ 1/E $, see Risken (1989). The special case of the OU variable is recovered when the weights are given by $ c(E)^2=\delta (E-\gamma) $. In the general case we have a variable characterized by multiple time-scales. Only when the weights $ c(E) $ exhibit a peculiar power-law behaviour does the auto-covariance function also decay as a power law. The example therefore shows that a heterogeneous set of time-scales is a key ingredient for the generation of power laws. However, heterogeneity is not per se enough to generate power laws: there is more to power laws than heterogeneity.
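The superposition mechanism just described is easy to reproduce numerically. The sketch below sums many independent OU components, each simulated exactly as an AR(1) process; with a broad set of relaxation rates the summed autocovariance decays much more slowly than any single exponential (matching a true power law would require tuning the weights $ c(E) $ as discussed above):

```python
import numpy as np

def ou_superposition(rates, steps=20000, dt=0.1, seed=0):
    """Sum of independent Ornstein-Uhlenbeck components, one per rate E.
    Each component is simulated exactly as an AR(1) process with unit
    stationary variance, so the sum's autocovariance is
    R(t) = sum over E of exp(-E t)."""
    rng = np.random.default_rng(seed)
    phi = np.exp(-np.asarray(rates) * dt)     # exact AR(1) coefficients
    x = rng.standard_normal(len(rates))       # stationary initial condition
    out = np.empty(steps)
    for t in range(steps):
        noise = rng.standard_normal(len(rates))
        x = phi * x + np.sqrt(1 - phi**2) * noise
        out[t] = x.sum()
    return out

def autocov(x, max_lag):
    x = x - x.mean()
    return np.array([np.dot(x[:len(x) - l], x[l:]) / len(x)
                     for l in range(max_lag)])

# A broad spectrum of relaxation rates E (many time-scales 1/E) produces a
# slowly decaying autocovariance; a single rate gives a pure exponential.
rates = np.logspace(-3, 1, 50)
R = autocov(ou_superposition(rates), max_lag=500)
print(R[:5] / R[0])
```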

Various mechanisms can be found in the literature to explain the existence of power-laws in the autocorrelation function. Examples include (i) Fractional Brownian motion, see Mandelbrot et al. (1968), (ii) specific nonlinear Langevin equations in the continuous time domain, see, for example, Marksteiner et al. (1996), and (iii) FIARCH processes in the discrete time domain, see, for example Podobnik et al. (2007). A good review of possible mechanisms able to generate long-memory processes is given by Beran (1994).

As with power-law distributions, testing for power laws in the auto-covariance function of empirical stochastic variables can be very difficult, mainly because a power law is often an asymptotic property, and in a real data set one can never be sure whether or not the asymptotic regime has been reached. Since $ \langle\Delta s^2\rangle $ is a cumulative quantity, the power-law exponent $ c $ is usually easier to measure, even by using a nonlinear least-squares fitting algorithm. Other techniques to estimate long memory in the time and frequency domains can be found in Beran (1994).

Research challenges

Network Characterization of the ATM System.

We have seen in the previous section that one of the reasons why power-law distributions are so important is that the variables exhibiting such behaviour are scale-free. We also saw, in Section 0.2.1, that, by analogy, networks whose degree distribution follows a power law, at least asymptotically, are referred to as scale-free networks. Degree distributions are of the utmost importance in network theory as they may give information about the topology of the networks. In Section 0.2.3, we discussed growth and preferential attachment.

An open research challenge is to perform a network characterization of the ATM system, at various levels of aggregation, in order to understand how the network topology is related to the management of the air traffic system, with a special focus on the way controllers’ decisions affect the system.

In particular, a network characterization makes it possible to investigate the role played by the management of en route bottlenecks (airports, flight segments, sectors, etc.) in the propagation of delays throughout the system.

This can be done starting from the information contained in the initial flight plans and in the realized flight plans. The investigation of the appropriate network metrics would give a better view of the mechanisms underlying the construction of both networks. Are both types of networks scale-free? Are both networks characterized by the presence of the same types of clusters? Are both networks characterized by a preferential attachment mechanism?

A comparison of the networks obtained from planned and realized trajectories would allow the quantitative detection of how the modifications of the planned flight trajectories, adopted by controllers in the case of delays, sector congestion or weather alerts, affect the ATM system globally. This would allow for a better selection of the operational strategies to be considered in order to minimize the effects of en route traffic delays, sector congestion and weather alerts.

Case Studies

Designing and Managing Air Traffic in Order to Avoid Jamming Transitions

As we have seen above, traffic systems are susceptible to abrupt transitions from an orderly flowing state to a jammed state. Most studies so far have considered road systems, where a series of constraints (essentially a graph structure) characterizes the structure. In ATM one important problem is the growing traffic volume and therefore the possibility of congestion both en route and at airports. In the first case, the current (pre-SESAR) airway structure and the restrictions due to safety constraints make the problem of the emergence of jamming quite similar to the road case. Therefore the application of statistical mechanics methods to the design of the airway structure and to the management of air traffic in real time seems a promising avenue to identify possible thresholds where jamming states could emerge. Similarly, for the airport case, transitions to jammed states, where queues of aircraft can quickly accumulate, are an important field where models can reveal the possible presence of threshold phenomena. Moreover, if traffic jams naturally emerge when the traffic load exceeds a given threshold, the presence of disturbances, such as adverse weather conditions, operational problems, and restrictions of air traffic, can make these transitions much more probable and can dramatically change the way in which the system relaxes to the normal state. The emergence of jams in perturbed traffic systems is therefore a promising avenue where the application of concepts from statistical mechanics could be useful in many different respects.

In particular, studying emergent phenomena in air traffic systems could be helpful in: (i) designing better airway (and sector) structures in order to keep the system far from the thresholds corresponding to the emergence of jams, in normal or reasonably perturbed states; (ii) designing airport scheduling so that it is more robust to perturbations, avoiding transitions to jammed states; (iii) developing real-time decision support tools to help air traffic controllers avoid rerouting decisions that can lead the system closer to a threshold where jams appear; (iv) helping to design better air traffic structures (post-SESAR) that allow the increase of local and global traffic.

A preliminary case study in this direction is Lacasa et al. (2009), where the authors proposed a network-based model of the air transport system that simulates the effect of traffic dynamics and shows the appearance of jams. Specifically, the authors modelled the topology of the interconnected airports as a random (Erdős-Rényi) network. Moreover, each airport is characterized by an exogenously assigned capacity, which is the maximal number of aircraft per unit of time that the airport can handle in an ideal situation. The ideal capacity can be diminished by a random noise term. Moreover, each link of the network is weighted and the weight measures the number of time steps that an aircraft needs in order to complete the route. A set of simple queuing rules describes the interplay between incoming and outgoing flows of aircraft at an airport. Simplifying a bit, if the input flow is larger than the capacity, the output flow will be equal to the capacity and the other aircraft will remain in a queue. Only when the input flow becomes smaller than the capacity does it become possible to remove the waiting aircraft from the queue. The model is simulated with Monte Carlo methods.
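A stripped-down version of such a queuing model can be sketched as follows; it ignores link travel times and noise on capacities, and the network size, capacities and densities are illustrative, but it reproduces the qualitative behaviour of the order parameter discussed next:

```python
import numpy as np
import networkx as nx
from collections import deque

def simulate(n_airports=100, n_aircraft=500, mean_capacity=3,
             steps=1000, warmup=500, seed=0):
    """Toy queuing model: aircraft hop between airports of an Erdos-Renyi
    network; each airport can process at most `capacity` departures per
    time step, the rest wait in its queue. Returns the mean fraction of
    aircraft moving per step, a crude proxy for the order parameter P."""
    rng = np.random.default_rng(seed)
    g = nx.erdos_renyi_graph(n_airports, 0.1, seed=seed)
    capacity = rng.poisson(mean_capacity, n_airports) + 1
    queues = [deque() for _ in range(n_airports)]
    for a in range(n_aircraft):                 # random initial placement
        queues[rng.integers(n_airports)].append(a)
    p_values = []
    for t in range(steps):
        moved = 0
        arrivals = [[] for _ in range(n_airports)]
        for i in range(n_airports):
            nbrs = list(g[i])
            if not nbrs:                        # skip isolated airports
                continue
            for _ in range(min(capacity[i], len(queues[i]))):
                a = queues[i].popleft()
                arrivals[rng.choice(nbrs)].append(a)
                moved += 1
        for i in range(n_airports):
            queues[i].extend(arrivals[i])
        if t >= warmup:
            p_values.append(moved / n_aircraft)
    return np.mean(p_values)

# P stays near its free-flow value at low aircraft density and drops
# beyond a threshold, signalling the jamming transition:
for n in (100, 500, 2000, 5000):
    print(n, simulate(n_aircraft=n))
```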

The key system indicator, P, is the percentage of aircraft that are not stuck in a node's queue, measured in the steady state. This parameter plays the role of an order parameter of the system. Note that P actually measures the network's efficiency because it compares the diffusing flow rate with the flow rate that is stuck. The key finding in Lacasa et al. (2009) is that by increasing the aircraft density (number of aircraft), the system undergoes a phase transition. This is evidenced by the fact that the expected value of P abruptly deviates from the efficient phase P=1 when the aircraft density is larger than a threshold. Beyond this threshold P declines, as expected. Correspondingly, the variance of P goes abruptly from zero to a high value when the aircraft density exceeds the threshold described above. These and other observations indicate that the system undergoes a jamming transition, similar to what is observed in other traffic systems (Helbing (2001)). One may ask whether network topology plays a role in this observation. The authors considered the topology of the real European air traffic system, composed of 858 airports and 11,170 flights, and found qualitatively the same result, i.e. the emergence of a jamming phase transition at a given value of the aircraft density.

En route Congestion Percolation and Air Traffic Control

The most promising area of application of the concept of percolation in the ATC system concerns the analysis and modelling of the en route propagation of flight delays. During operation, a given airspace (and/or a sector therein) can go beyond capacity for several reasons, such as weather conditions, air traffic congestion triggered by unexpected events (for example, ash emissions from volcanoes, technical emergencies, etc.) or, especially in the future scenarios of self-regulated and/or self-assigned trajectories, self-originated air congestion. In the presence of these events it is important to analyse the determinants that trigger the propagation of the congested status of an airspace/sector to neighbouring airspaces/sectors.

A first attempt to use the percolation concept in the empirical analysis and numerical modelling of the en route delay of flights has been performed in Ben Amor et al. (2006). In the current setting of air traffic control, aircraft follow a planned trajectory between the departure and the arrival airport. Air traffic management is centralized in every phase of the flight. During the flight three main control activities take place: (i) controllers from a control tower manage the landing and taking off of aircraft; (ii) approach control is performed by controllers to guide aircraft into or out of the airport area; and (iii) en route control, where controllers guide aircraft across airspaces and/or sectors.

Percolation concepts are particularly suited to the investigation of en route delays because it is natural to define specific states for the airspaces and/or sectors (such as a normal state, a congested state, or a closed state due to, for example, bad weather conditions); moreover, the topological interrelations among sectors are well defined through the geographical interconnections and route definitions present between airspaces and/or sectors. The system is non-homogeneous because airspaces and sectors can be quite heterogeneous with respect to their dimensions, weather fluctuations, and flight loadings. The scientific problem can therefore be formalized as a problem of percolation in a "porous" (i.e. heterogeneous) medium. In the current setting of air traffic management this "porosity" reflects the flexibility left to controllers during air traffic control to minimize en route traffic delays. A percolation approach might allow investigation of the effects of non-local interactions in the spreading of the congested status over wide regions of airspace. An interesting development of the research could also compare the robustness and resilience of the current setting with simulations of future ATM scenarios based on decentralized air traffic control procedures.

In Ben Amor et al. (2006), the authors performed a preliminary investigation of the usefulness of percolation concepts in the modelling of air traffic control by setting up a simple model where controllers can re-route the trajectory of an aircraft to one of the contiguous sectors when a specific sector is congested. When analysing large simulations they observed that the presence of extensive feedback interactions among the densely connected sectors might induce, under certain conditions, a phase transition from an ordinary state to a jammed state extending over a large number of sectors.

Concerning applications, knowledge of the scaling relations governing, for example, the number of airspaces/sectors affected by congestion or closure will provide guidance for the release of pre-alerting procedures able to counteract the spreading of airspace/sector congestion, therefore minimizing the congestion spreading. This knowledge will also provide a quantitative indication of the degree of robustness and resilience of the current and/or designed Air Traffic Control system with respect to the propagation of en route delay.

Networks Investigation of En route Delay Propagation Management

A useful empirical investigation could analyse data about planned and realized flights over a certain number of days and over a relatively large portion of the ECTL airspace. These data should include detailed information about the flight segments, such as the M1 and M3 types of DDR data maintained by the ECTL CFMU.

Networks in the air traffic system context have been studied in the past, see Bagler (2004), Li (2004), Guimera (2004), Guimera (2005), Colizza (2006a), Colizza (2006b). In these studies, network nodes are mainly airports. It is however possible to consider networks where the nodes are trajectory segments and links between two segments are set whenever there are flights connecting both segments. Analogously, it is possible to construct networks where the nodes are the begin/end points of the segments. The reason for considering segments rather than airports is that they should be better suited to explaining en route delays. In fact, segments can provide more microscopic information about flight trajectories, thus allowing a better investigation of the existence of en route bottlenecks. At a higher level of aggregation one might consider networks where the nodes are airblocks or sectors.

The first step could be to detect the differences between planned (M1 data) and realized (M3 data) flight trajectories for each of the involved flights. Some research questions to be posed are (i) whether or not these differences are persistent over time and (ii) how to statistically characterize en route delays for each flight. What are the distributions of these quantities day by day? What is the autocorrelation function of these quantities? After de-trending for seasonal cycles, do they still bear memory of their past behaviour? Is there any scale-free behaviour emerging? Answering these questions would allow the statistical characterization of the differences between planned and realized flight trajectories at the flight level, i.e. starting from the highest resolution of the system.

The second step could be to produce a network description of the system day by day. For example, networks could be constructed starting from both the planned and realized flight trajectories. Network nodes could be the flight segments. Network links between two nodes would carry the information about the number of flights passing through both segments. Again, some relevant research questions are: What is the degree distribution in both networks? What is the average path length in both networks? What is the clustering coefficient in both networks? What are the communities in both networks? What is their number? What is their size? Are these communities characterized by a certain characteristic delay level? Are these networks scale-free or not? Answering these questions would allow the statistical characterization of the differences between planned and realized flight trajectories at the network level, i.e. starting from global information about the system.
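As a sketch of how such a segment network could be assembled and interrogated (the segment identifiers and the tiny input below are hypothetical placeholders for parsed M1/M3 data):

```python
import networkx as nx
from itertools import combinations

def segment_network(flights):
    """Build the segment network sketched above: nodes are trajectory
    segments, and the weight of a link counts the flights that use both
    segments. `flights` is a list of segment-id sequences, one per flight
    (e.g. parsed from M1 planned or M3 realized data)."""
    g = nx.Graph()
    for segments in flights:
        for s1, s2 in combinations(set(segments), 2):
            if g.has_edge(s1, s2):
                g[s1][s2]["weight"] += 1
            else:
                g.add_edge(s1, s2, weight=1)
    return g

# Hypothetical input: each flight as its ordered list of segment ids.
flights = [["A-B", "B-C", "C-D"], ["B-C", "C-D", "D-E"], ["A-B", "B-C"]]
g_planned = segment_network(flights)

# Some of the metrics asked about in the text:
degrees = [d for _, d in g_planned.degree()]       # degree distribution
print(nx.average_shortest_path_length(g_planned))  # average path length
print(nx.average_clustering(g_planned))            # clustering coefficient
```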

The third step would be to infer from the data the strategies used by controllers in cases of traffic delays, congestion, and weather alerts, at the level of networks. For example, it would be interesting to investigate whether or not there exists any kind of preferential attachment mechanism that might explain how the network built from planned flight trajectories is modified into the network built from realized flight trajectories. This permits the investigation of the impact of the decisions taken by controllers on the spreading of delays. For example, in the case of re-routing, what is the impact of choosing a certain trajectory on the propagation of traffic delay? What is the impact on the network topology and cluster organization? These investigations might be performed over time. Besides identifying seasonal cycles, studying the appropriate network metrics would allow for the detection of possible regularities in the operational strategies, such as whether or not they are persistent over time.

Answering these questions, besides their importance in the context of the ATM system, will also be relevant from a theoretical point of view. In fact, they pose problems that are at the edge of current state-of-the-art knowledge and require skills at the boundary between mathematics, network theory and statistical physics.

Recent Developments

This section is devoted to describing recent research results that are relevant to the research theme of Emergent Behaviour in ATM. If you have any related results that you wish to contribute, please feel free to add them as a subsection below. Also consider linking your results with relevant portions of the main text of the article and/or other articles (e.g. related research lines) to increase their visibility and help give them context within the research theme.

Breakdown in Air Traffic Control Modelled as a Phase Transition

References

  • Albert R. and Barabasi A.-L., 2002. Statistical mechanics of complex networks, Reviews of Modern Physics 74, 47-97.
  • Anderson P.W., 1972. More Is Different, Science 177, 393-396.
  • Arthur W.B., 1994. Inductive Reasoning and Bounded Rationality (The El Farol Problem), American Economic Review 84, 406-411.
  • Bagler G., 2004. Analysis of the Airport Network of India as a complex weighted network, Physica A 387, 2972-2980.
  • Bak P., Tang C. and Wiesenfeld K. 1987, Self-organized criticality: An explanation of the 1/f noise, Physical Review Letters, 59, 381-384.
  • Ben Amor S., Bui M. and Lavallée I., 2006. A Complex Systems Approach in ATM Modeling, Proceedings of the ECCS'06, European Conference on Complex Systems, Oxford.
  • Beran J., 1994. Statistics for Long-Memory Processes, Chapman & Hall/CRC Monographs on Statistics & Applied Probability.
  • Binney J., Dowrick N.J., Fisher A.J. and Newman M.E.J. 1992, The Theory of Critical Phenomena, Oxford University Press.
  • Biroli, G. 2007, Jamming: A new kind of phase transition? Nature Physics 3, 222–223.
  • Bouchaud J.-P. and Georges A., 1990. Anomalous Diffusion In Disordered Media: Statistical Mechanisms, Models And Physical Applications, Physics Reports 195, 127-293.
  • Broadbent S.R. and Hammersley J.M., 1957. Percolation processes I. Crystals and mazes, Proceedings of the Cambridge Philosophical Society 53, 629-641.
  • Callaway D.S., Newman M.E.J., Strogatz S.H., and Watts D.J. 2000, Network Robustness and Fragility: Percolation on Random Graphs, Physical Review Letters 85, 5468–5471.
  • Cardy J.L. and Grassberger P. 1985, Epidemic models and percolation, Journal of Physics A: Math. Gen. 18, L267-L271.
  • Challet D. and Zhang Y.C., 1997. Emergence of cooperation and organization in an evolutionary game, Physica A 246, 407-418.
  • Challet D., Marsili M., Zhang, Y.C. 2005, Minority Games: Interacting Agents in Financial Markets, Oxford University Press.
  • Cohen R., Erez K., ben-Avraham D., and Havlin S. 2000, Resilience of the Internet to Random Breakdowns, Physical Review Letters 85, 4626-4628.
  • Colizza V., Barrat A., Barthélemy M., and Vespignani A., 2006a. The role of the airline transportation network in the prediction and predictability of global epidemics, Proceedings of the National Academy of Sciences USA 103, 2015-2020.
  • Colizza V., Barrat A., Barthélemy M., and Vespignani A., 2006b. Optimal paths in complex networks with correlated weights: The worldwide airport network, Physical Review E 74, 056104.
  • Conway S., 2005. Systemic Analysis Approaches for Air Transportation, Proceedings of the 3rd Annual Conference on Systems Engineering Research, March 23-25, 2005, Stevens Institute of Technology Campus, Hoboken, New Jersey, USA.
  • DeLaurentis D. and Han E.-P. 2006. System-of-systems simulation for analyzing the evolution of air transportation, International Council for Aeronautics and Space, Twenty-Fifth Congress.
  • De Martino A. and Marsili M. 2006, Statistical mechanics of socio-economic systems with heterogeneous agents Journal of Physics A: Mathematical and General, 39, R465-R540.
  • Einstein A., 1905. On the Movement of Small Particles Suspended in Stationary Liquids Required by the Molecular-Kinetic Theory of Heat, Ann. d. Physik 17, 549.
  • Eisenblatter B., Santen L., Schadschneider A. and Schreckenberg M. 1998, Jamming transition in a cellular automaton model for traffic flow, Physical Review E 57, 1309-1314.
  • Flory P.J. 1941, Molecular Size Distribution in Three-Dimensional Polymers: I, Gelation, Journal of the American Chemical Society 63, 3083.
  • Gabaix X., 1999. Zipf's Law for Cities: An Explanation. Quarterly Journal of Economics 114, 739-767.
  • Gheorghiu S. and Coppens M.-O., 2004. Heterogeneity explains features of “anomalous” thermodynamics and statistics, Proceedings of the National Academy of Sciences USA 101, 15852-15856.
  • Goldenberg J., Libai B., Solomon S., Jan N., Stauffer D. 2000, Marketing percolation, Physica A 284, 335-347.
  • Grassberger P. 1983, On the critical behavior of the general epidemic process and dynamical percolation, Mathematical Biosciences 63, 157-172.
  • Guimerà R., and Amaral L.A.N., 2004. Modeling the world-wide airport network, European Physical Journal B 38, 381-385.
  • Guimerà R., Mossa S., Turtschi A., and Amaral L.A.N., 2005. The worldwide air transportation network: Anomalous centrality, community structure, and cities' global roles, Proceedings of the National Academy of Sciences USA 102, 7794-7799.
  • Havlin S., Ben-Avraham D., 1987, Diffusion in disordered media, Advances in Physics 36, 695-798.
  • Helbing D., 2001. Traffic and related self-driven many-particle systems, Reviews of Modern Physics 73, 1067-1141.
  • Hill B.M., 1975. A simple general approach to inference about the tail of a distribution, The Annals of Statistics 3, 1163–1174.
  • Holmes B.J. 2004. Transformation in Air Transportation Systems, International Council for Aeronautics and Space, Twenty-Fourth Congress, Yokohama, Japan.
  • Huang, K. 1987, Statistical Mechanics, J. Wiley & Sons.
  • Isichenko, M.B. 1992, Percolation, statistical topography, and transport in random media, Reviews of Modern Physics 64, 961-1043.
  • Kadanoff L.P., 1990. Scaling and universality in statistical physics, Physica A 163, 1-14.
  • Kesten H., 1973. Random difference equations and renewal theory for products of random matrices, Acta Mathematica 131, 207-248.
  • Kesten H., 1980. The Critical Probability of Bond Percolation on the Square Lattice Equals 1/2, Communications in Mathematical Physics 74, 41-59.
  • Kesten H., 1982. Percolation Theory for Mathematicians, Birkhäuser, Boston, MA.
  • Keys A.S., Abate A.R., Glotzer S.C. and Durian D.J., 2007. Measurement of growing dynamical length scales and prediction of the jamming transition in a granular material, Nature Physics 3, 260-264.
  • Kirkpatrick S., 1973. Percolation and Conduction, Reviews of Modern Physics 45, 574-588.
  • Lacasa L., Cea M., and Zanin M. 2009, Jamming transition in air transportation networks, Physica A 388, 3948-3954.
  • Levy M., and Solomon S., 1996. Power laws are logarithmic Boltzmann laws, International Journal of Modern Physics C 7, 595.
  • Li W., 1992a. Random Texts Exhibit Zipf's-Law-Like Word Frequency Distribution, IEEE Transactions on Information Theory 38, 1842–1845.
  • Li W., Kaneko K., 1992b. Long-Range Correlation and partial 1/f^α spectrum in a Noncoding DNA sequence, Europhysics Letters 17, 655.
  • Li W., Yang Y., 2002. Zipf's law in importance of genes for cancer classification using microarray data, Journal of Theoretical Biology 219, 539-551.
  • Li W., Cai X., 2004. Statistical analysis of the airport network of China, Physical Review E 69, 046106.
  • Liljeros F., Edling C.R., Amaral L.A.N., Stanley H.E., Aberg Y., 2001. The web of human sexual contacts, Nature 411, 907-908.
  • Liu Y., Gopikrishnan P., Cizeau P., Meyer M., Peng C.-K., and Stanley H.E., 1999. The Statistical Properties of the Volatility of Price Fluctuations, Physical Review E 60, 1390-1400.
  • Malevergne Y., Saichev A., and Sornette D., 2009. Theory of Zipf's Law and Beyond, Springer-Verlag, Berlin and Heidelberg.
  • Mandelbrot B., 1977. Fractals: Form, Chance, and Dimension, W. H. Freeman & Co., San Francisco.
  • Mandelbrot B. and van Ness J.W., 1968. Fractional Brownian motions, fractional noises and applications, SIAM Review 10, 422-437.
  • Mantegna R.N., and Stanley H. E., 2000. An Introduction to Econophysics, Correlations and Complexity in Finance, Cambridge, England, Cambridge University Press.
  • Marksteiner S., Ellinger K. and Zoller P., 1996. Anomalous diffusion and Lévy walks in optical lattices, Physical Review A 53, 3409.
  • Moore C. and Newman M.E.J. 2000, Epidemics and percolation in small-world networks, Physical Review E 61, 5678–5682.
  • Nagel K. and Schreckenberg M. 1992, A cellular automaton model for freeway traffic, Journal de Physique I France 2, 2221-2229.
  • Newman, M.E.J., 2005. Power laws, Pareto distributions and Zipf’s law, Contemporary Physics 46, 323-351.
  • Newman M.E.J., Strogatz S. H., and Watts D. J., 2001. Random graphs with arbitrary degree distributions and their applications, Physical Review E 64, 026118.
  • Pareto, V. 1896. “Cours d’Économie Politique”, Nouvelle édition par G.-H. Bousquet et G. Busino, Librairie Droz, Geneva, pages 299–345 (1964).
  • Pastor-Satorras R. and Vespignani A. 2001, Epidemic Spreading in Scale-Free Networks, Physical Review Letters 86, 3200–3203.
  • Peters O. and Neelin J. D., 2006. Critical phenomena in atmospheric precipitation, Nature Physics 2, 393-396.
  • Podobnik, B., Fu D.F., Stanley H.E., Ivanov P.C., 2007. Power-law autocorrelated stochastic processes with long-range cross-correlations, European Physical Journal B 56, 47-52.
  • Risken H., 1989. The Fokker-Planck Equation: Methods of Solutions and Applications, 2nd edition, Springer Series in Synergetics, Springer.
  • Scullard, C.R. 2006, Exact Site Percolation Thresholds Using a Site-to-Bond Transformation and the Star-Triangle Transformation. Physical Review E 73, 016107.
  • Scullard C.R. and Ziff R.M., 2006. Predictions of bond percolation thresholds for the kagomé and Archimedean (3, 12^2) lattices, Physical Review E 73, 045102.
  • Silverberg G. and Verspagen B. 2005, A percolation model of innovation in complex technology spaces, Journal of Economic Dynamics and Control 29, 225-244.
  • Simon H.A. 1962, The Architecture of Complexity, Proceedings of the American Philosophical Society, 106, 467-482.
  • Stanley H.E. 1971, Introduction to Phase Transitions and Critical Phenomena (Oxford University Press, Oxford and New York).
  • Stanley H.E., 1999. Scaling, universality, and renormalization: Three pillars of modern critical phenomena, Reviews of Modern Physics 71, S358.
  • Stauffer D. and Aharony A. 1994, Introduction to Percolation Theory, 2nd edition, Taylor & Francis Ltd.
  • Stockmayer W.H. 1943, Theory of Molecular Size Distribution and Gel Formation in Branched‐Chain Polymers, Journal of Chemical Physics 11, 45-55.
  • Tsallis C., 1988. Possible Generalization of Boltzmann-Gibbs Statistics, Journal of Statistical Physics 52, 479-487.
  • West G.B., Brown J.H. and Enquist B.J., 1997. A General Model for the Origin of Allometric Scaling Laws in Biology, Science 276, 122-126.
  • West G.B. 1998. “Scale and dimension from animals to quarks”. In N.G. Cooper and G.B. West, editors, Particle Physics. Cambridge University Press, Cambridge, UK.
  • Wilkinson D. and Willemsen J.F., 1983. Invasion percolation: a new form of percolation theory, Journal of Physics A: Math. Gen. 16, 3365-3376.
  • Chen X.W., Landry S.J., and Nof S.Y., 2011. A framework of enroute air traffic conflict detection and resolution through complex network analysis, Computers in Industry 62, 787-794.
  • Zipf G.K., 1949. Human Behavior and the Principle of Least Effort, Addison-Wesley.