SOME DEFINITIONS:


CHAOS:

"APERIODIC LONG-TERM BEHAVIOR IN A

DETERMINISTIC SYSTEM THAT EXHIBITS

SENSITIVE DEPENDENCE TO INITIAL CONDITIONS."

 

APERIODIC: Trajectories do not settle down (in the long term) to fixed points, periodic orbits, or quasi-periodic orbits.

DETERMINISTIC: No randomness is involved; the future is absolutely determined by the present (but through non-linear laws, so that the past is not uniquely determined by the present).

SENSITIVE: Nearby trajectories in phase space diverge exponentially fast in time (technically: a "positive Lyapunov exponent"), so that long-term prediction becomes impossible past some time horizon.


ATTRACTOR:

An attractor is a set A in phase space to which all neighboring trajectories converge.

 

Its properties are:

1) A is invariant (if you happen to start in A, you remain there);

2) A has a "basin of attraction" U (A attracts the set U of trajectories that start close enough to A);

3) A is minimal (no smaller subset of A satisfies conditions 1) and 2)).

 

An ATTRACTOR is called STRANGE if it exhibits sensitive dependence on initial conditions; strange attractors usually have a fractal structure (infinite detail).
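To make the idea concrete, here is a minimal sketch (my own illustration, not part of these notes) that integrates the Lorenz weather model mentioned further below, using Lorenz's classic parameter values sigma = 10, rho = 28, beta = 8/3; starting from almost any nearby point, the trajectory ends up tracing the same butterfly-shaped strange attractor.

    # Sketch: integrate the Lorenz system with a simple fourth-order Runge-Kutta
    # scheme; the parameter values are Lorenz's classic chaotic choice.
    import numpy as np

    def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # Right-hand side of the Lorenz equations dx/dt, dy/dt, dz/dt.
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def rk4_step(f, s, dt):
        # One classical Runge-Kutta step of size dt.
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    dt, n_steps = 0.01, 10000
    s = np.array([1.0, 1.0, 1.0])          # an arbitrary starting point
    trajectory = np.empty((n_steps, 3))
    for i in range(n_steps):
        s = rk4_step(lorenz, s, dt)
        trajectory[i] = s
    # After a short transient, the stored points trace out the butterfly-shaped
    # strange attractor (plot trajectory[:, 0] against trajectory[:, 2] to see it).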


The issue is: how can a system that is deterministic (the present absolutely determines the future) arrange itself so that:

a) No two trajectories in phase space can ever cross (a crossing would "kill" determinism by providing "choices" at certain crossroads); for continuous systems (like Lorenz's weather model) this condition implies that three is the minimum number of variables needed for chaotic behaviour (it is topologically impossible to avoid crossings in one- or two-dimensional spaces). Notice that discrete systems, like those described by the logistic equation X_(n+1) = a X_n (1 - X_n), can be chaotic even when a single variable is involved (see the sketch after this list).

b) It has a limited amount of (phase) space to "fill" with trajectories, since only a certain range of values is possible.

c) It will never repeat itself (which would make it periodic), no matter how long it runs.
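As promised in a), here is a minimal sketch (an illustration I added, with arbitrarily chosen starting values) of the logistic equation at a = 4, a value for which the map is chaotic: a single deterministic variable, confined to the interval [0, 1], whose iterates never settle into a repeating cycle and which separates two almost identical starting points very quickly.

    # Sketch: the logistic map X_(n+1) = a X_n (1 - X_n) at a = 4.0, iterated
    # from two initial conditions that differ by only one part in a million.
    a = 4.0
    x, y = 0.300000, 0.300001
    for n in range(1, 31):
        x = a * x * (1.0 - x)
        y = a * y * (1.0 - y)
        print(f"n={n:2d}  x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.2e}")
    # All values stay inside [0, 1] (the "limited space" of item b), no value
    # ever repeats exactly (item c), and within a few dozen iterations the two
    # trajectories bear no resemblance to each other.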

Notice that it is indeed very hard to imagine how you can manage not to "bump" into a previously visited place after an infinite time of running around a limited space (try to imagine yourself successfully "avoiding" another Sewanee student during a full semester, and then "forever", while living on campus...).

Note also that this implies that two extremely close points will diverge from each other (sensitive dependence) as time progresses, but also that two points that start out extremely far apart will eventually become neighbors at some time or other in the future...

The idea is that these "conflicts" are "solved" by the process of folding and stretching. The "trick" is achieved by creating a very complex "interleaving" of trajectories with a fractal structure, in such a way that there are always more trajectories between any two you consider, no matter how much you "zoom in".
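A concrete way to see the stretching and folding (again my own illustration, reusing the logistic map at a = 4): one application of the map stretches the left half of the unit interval over all of [0, 1] and folds the right half back on top of it.

    # Sketch: one application of f(x) = 4 x (1 - x) to evenly spaced points.
    a = 4.0
    samples = [i / 20 for i in range(21)]          # 21 points spanning [0, 1]
    for x in samples:
        print(f"x = {x:.2f}  ->  f(x) = {a * x * (1.0 - x):.2f}")
    # Nearby points near x = 0 are pulled apart (stretching: f(0.05) = 0.19,
    # f(0.10) = 0.36), while far-apart points are brought together (folding:
    # f(0.25) = f(0.75) = 0.75), exactly the two behaviors described above.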


LYAPUNOV EXPONENT:

The difference ("error") E(t) between the respective values of two time series of the same system that start with some small initial difference E(0) in the initial conditions will tend to grow exponentially in time for a chaotic system: E(t) = E(0) exp (L*t) , where L (Lyapunov exponent) will be a positive number (with units of 1/time, or frequency) that can be obtained as the slope of the plot of ln(E(t)/E(0)) vs. t. So the value of L indicates the degree of sensitivity to the initial conditions of the chaotic system under study. Every chaotic system has to have at least one positive Lyapunov exponent L+ ; since the attractor is bounded there must be at least one negative L- (associated with some other variable) to keep the attractor volume bounded.

The fact that the divergence is exponential implies that it is practically impossible to substantially improve your prediction by merely reducing your initial error. For instance, reducing E(0) by a (usually very difficult to achieve) factor of 100 would only "postpone" the prediction problem by a mere ln(100)/L = 4.6/L in the time horizon.
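A quick back-of-the-envelope version of this claim (my own numbers; the tolerance below is an arbitrary choice): if a prediction is considered useless once E(t) reaches some tolerance, the time horizon is t_h = (1/L) ln(tolerance/E(0)), and a 100-fold improvement in E(0) buys only an extra ln(100)/L.

    # Sketch: how much a 100-fold reduction of the initial error buys.
    import math

    L = 1.0          # Lyapunov exponent, units of 1/time
    tol = 1.0        # error level at which the prediction is considered useless
    for e0 in (1e-3, 1e-5):                    # the second E(0) is 100x smaller
        t_h = math.log(tol / e0) / L
        print(f"E(0) = {e0:.0e}  ->  time horizon = {t_h:.1f}")
    # Output: horizons of about 6.9 and 11.5, i.e. a gain of only ln(100)/L = 4.6
    # time units for a hundred-fold better initial measurement.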

In contrast, non-chaotic behavior (= "classical science") is characterized by a "milder" (usually at most linear) growth of E(t), so that it does "pay off" to reduce the initial error E(0) in order to substantially improve your prediction.