Avalanches are physical phenomena of great interest, mainly because they represent a big risk for those who live in or visit areas where this kind of natural disaster can occur. But avalanches are very complex. They arise from an instability in a pile of granular material like sand or snow. Granular materials can be piled up only while the slope of the pile's sides stays below a critical angle. When the slope is above this angle, any extra grain added to the pile can cause a chain reaction and start an avalanche. The point is that you never know exactly when the avalanche will start.
Avalanches are an example of what is called emergent behavior. Complex systems, which are composed of a great number of interacting units, can show characteristics that are not expected from the units alone: strange organization phenomena and surprising collective effects.
Although the problem of modelling granular materials seems easy at first sight, it is a difficult matter, and to this date we do not have a unified theory for it. There are different approaches to attack the problem. One is to model granular materials as a kind of fluid with special properties; this is the hydrodynamical approach. The other is to build discrete toy models and analyze them mathematically.
The second approach is related to the famous Bak-Tang-Wiesenfeld (BTW) model of a sandpile, a two-dimensional cellular automaton modelling a pile where, at each time step, a grain is added at a random site. When a site has a slope above a critical value relative to its neighbours, grains are transferred to those neighbours. This model turns out to have a very special behavior called Self-Organized Criticality (SOC). This behavior is related to the distribution of the sizes of the avalanches in the pile and to the fact that the pile has a set of quiescent states, named metastable states, where the pile is momentarily stable.
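To make the rules concrete, here is a minimal sketch of the BTW sandpile in Python. The lattice size and the number of grains dropped are illustrative choices of mine, though the toppling threshold of 4 is the classic one in two dimensions:

```python
import random

L = 20          # lattice side (illustrative choice)
THRESHOLD = 4   # a site topples when it reaches this height
NEIGHBOURS = ((1, 0), (-1, 0), (0, 1), (0, -1))
grid = [[0] * L for _ in range(L)]

def relax():
    """Topple every unstable site until the pile is quiescent again.
    Returns the avalanche size: the total number of topplings."""
    size = 0
    unstable = [(i, j) for i in range(L) for j in range(L)
                if grid[i][j] >= THRESHOLD]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < THRESHOLD:
            continue
        grid[i][j] -= THRESHOLD          # the site sheds its grains...
        size += 1
        for di, dj in NEIGHBOURS:
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L:
                grid[ni][nj] += 1        # ...one to each neighbour
                unstable.append((ni, nj))
            # grains pushed past the edge simply fall off the pile
        unstable.append((i, j))          # the site itself may still be unstable
    return size

# Drive the pile one grain at a time. After a long transient, the
# avalanche sizes are distributed as a power law -- the SOC signature.
sizes = []
for _ in range(100000):
    i, j = random.randrange(L), random.randrange(L)
    grid[i][j] += 1
    sizes.append(relax())
```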
These models can be made as complicated or as simple as we want, and their study is not an easy matter, since they must be studied out of equilibrium, and for out-of-equilibrium phenomena, once more, we do not have a unified theory either. An interesting example of a simplified model where you can see "avalanches" was sent to me last week by a friend named Marlo, who found it on the internet. It is a game built on a two-dimensional cellular automaton where each site can be in one of four different states. You can change the state of one site, and the interaction between states can trigger an avalanche effect. The aim is to trigger the biggest possible avalanche, although it is also fun just to watch the dynamics and see what the metastable states look like.
My friend got excited about the game and told me that this could have a lot of consequences even in sociology... well, physicists have already thought of this, and he is right. I'll edit this post another day and try to put up some links to show this.
Picture taken from: Milford Road.
I'm a kung fu fighter. After eight years and two knee surgeries, last year I finally got my black belt. My style is Tong Long, or Praying Mantis, one of the several kung fu styles that exist. Most of the styles are inspired by the movements of animals, like Tiger (Hung Gar), Crane, Eagle Claw and Monkey, but there are others that do not follow this pattern, like Tai Chi Chuan, Drunken Style, Wing Chun (the style of Bruce Lee) or Suai Shiao. In fact, most of the styles are completely different martial arts, and kung fu is a common name for all Chinese martial arts. Kung fu is not even the correct name: it means "hard work", and in China it is used for every kind of art that takes great effort to learn and master. The Chinese name for their martial arts is wushu or kuoshu.
My passion for kung fu is well known among my friends, and yesterday one of them sent me a link about the physics of kung fu, a site entitled Kung Fu Science. It is a study of the physics involved in breaking blocks with bare hands, led by a young PhD student of atmospheric physics. The site has a beautiful presentation and design, and the text is quite accessible for non-scientists too. There are links to related studies of the physics of other martial arts at the end of the page. It is worth a visit.
Picture taken from: International Chinese Kung Fu Association Website.
This is a program written by three guys from MIT: Jeremy Stribling, Max Krohn and Dan Aguayo. It generates random papers that look real enough to fool someone who's not a scientist. The incredible part, though, is that these guys submitted some of these papers to real conferences and they were accepted!
I generated a paper with three friends of mine. It even includes references to other papers bearing our names (although we never wrote them...).
You can generate your own papers and read the details and the whole story on their site:
SCIgen - An Automatic CS Paper Generator.
Last Thursday I finally earned my Ph.D. in Physics. The title of my work, done together with my advisor Nestor Caticha, is "Learning on Hidden Markov Models". We studied the performance of learning algorithms for HMMs, a kind of machine learning model that is a special case of a wider class named graphical models. Machine learning is an alternative name for artificial intelligence, although you can also consider it a particular area of it. The difference between the two terms is as fuzzy as you want it to be, but technically the former is preferred.
It may seem strange that a physics thesis is about machine learning, but statistical physics is an area with a lot of interdisciplinary applications. It studies systems composed of a large number of individual interacting units. It has already given a lot of important results when applied to the study of perceptrons, simplified models of artificial neural networks, and our hope was that it could give interesting results for HMMs too. Once I have prepared a suitable English version of my thesis and our papers have been submitted, I will put links to them here.
But coming back to the main point: what exactly does physics have to do with machine learning? Well, one of the first insights of machine learning appeared when two guys, McCulloch and Pitts, introduced a simplified mathematical model of a neuron. The model was inspired by the real neuron in the brain: it is composed of "synapses" through which the neuron receives inputs in the form of numerical values, and a "body", mathematically a function that processes the inputs, turning them into an output numerical value that is transmitted to other units by an output synapse. The model is in the paper: McCulloch, W. and Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5:115-133.
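As an illustration, here is a tiny sketch of such a threshold unit in Python. The weights and threshold are my own illustrative choices, set up so that the unit computes a logical AND:

```python
# A minimal McCulloch-Pitts style threshold neuron (illustrative values).

def neuron(inputs, weights, threshold):
    """Weighted sum of the inputs, passed through a step function:
    fire (output 1) if the sum reaches the threshold, else stay silent."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With these weights and threshold, the unit computes a logical AND:
# it fires only when both input "synapses" are active.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, '->', neuron([x1, x2], [1, 1], threshold=2))
```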
By linking a great number of these units together through their synapses, one can construct networks as complicated as one wants. It can be shown that these networks can be used to store memories and to infer answers to questions. These artificial neural networks (ANNs) can have complex or simplified architectures, the simplest one being the so-called (Rosenblatt) perceptron, trained with a simple mistake-driven learning rule (see the sketch below). Well, our brain is a natural neural network with about 10^11 neurons. It's a huge number! So huge that for practical purposes we can treat it as infinite. This is where physics enters; to be more specific, statistical physics.
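Here is a minimal sketch of the Rosenblatt perceptron with its classic learning rule: update the weights only when the prediction is wrong. The toy data set (again a logical AND), the learning rate and the number of sweeps are illustrative assumptions of mine:

```python
def predict(w, b, x):
    """Sign of the weighted sum: the perceptron's binary decision."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Training set: logical AND with +-1 labels.
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]

w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(100):                      # a few sweeps over the data
    for x, y in data:
        if predict(w, b, x) != y:         # update only on mistakes
            w = [wi + lr * y * xi for wi, xi in zip(w, x)]
            b += lr * y

print([predict(w, b, x) for x, _ in data])   # -> [-1, -1, -1, 1]
```

Since the AND problem is linearly separable, the perceptron convergence theorem guarantees this rule finds a correct set of weights in a finite number of mistakes.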
Statistical physics studies systems with a great number of interacting individual units, trying to predict the typical behavior of the system as a whole. It makes the connection between Newtonian mechanics and thermodynamics. In Newtonian mechanics a system is described by the position and velocity of each particle, but in thermodynamics it is described by "bulk" macroscopic properties such as temperature, pressure and volume. Thermodynamics is recovered from mechanics when we analyse the equations for a large number of particles and take averages; mathematically, this limit, widely known as the thermodynamic limit, is attained only when the size of the system (the number of individual components) goes to infinity. This approach has helped us understand interesting properties of matter, such as the famous phase transitions (like water boiling or ice melting).
It turns out that there is a way to map neural networks onto already-studied physical systems, such that when you apply the methods of statistical physics to them and take the thermodynamic limit, you can calculate their properties! This was done for perceptrons and worked pretty well! Later, other methods of statistical physics were used with the same success; one of the most celebrated, and most controversial among mathematicians, is a mathematical trick named the Replica Trick. But I've already written too much and will leave those matters for another post.
Picture taken from: http://www.nada.kth.se/~asa/Game/BigIdeas/ai.html
Causal Dynamical Triangulations (CDT) is a very recent approach to quantum gravity. Like LQG, it does not use any new principles or symmetries, but tries to quantize gravity using the Feynman path-integral approach to quantum mechanics.
The Feynman path integral in quantum mechanics is a formalism that allows us to calculate the probability of scattering processes in quantum field theory (QFT) in a simple way that can be associated with graphs for easier visualization and calculation. The idea is that there is a quantity named the propagator that can be calculated as a weighted sum over the possible paths a particle can take from one point of spacetime to another. In classical physics some paths are forbidden, namely the ones where the particle would need to travel faster than light, but Feynman's idea was that in quantum mechanics all paths are allowed, and the resulting amplitude is obtained by the interference of all of them. All the scattering probabilities can be calculated using the propagator, and this approach allowed the quantization of the electromagnetic theory (QED). But gravity is a very tricky force and resisted the first attempts to quantize it in this way.
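In symbols, the textbook form of the propagator is a functional integral over all paths x(t) between the two points, each path weighted by a phase given by its classical action:

```latex
K(x_b, t_b;\, x_a, t_a) = \int \mathcal{D}[x(t)]\; e^{\frac{i}{\hbar} S[x(t)]},
\qquad
S[x(t)] = \int_{t_a}^{t_b} L(x, \dot{x})\, dt .
```

Classically, only the path of stationary action matters; quantum mechanically, every path contributes, and the classical path emerges from the interference between them.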
In gravity, the quantity analogous to a path in spacetime is a path in the space of all possible geometries of the universe. But this summation is divergent, i.e., the result is infinite, and a lot of work has been done to try to find a way to make it convergent. The CDT approach consists of approximating spacetime by a mesh of triangles (in this case, their four-dimensional analogues, called 4-simplices), performing the summation, and then taking the limit where the size of the triangles goes to zero. In this continuum limit, the resulting theory is expected to be well behaved, and the first results suggest that it could be.
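Schematically, the CDT prescription replaces the integral over geometries by a sum over causal triangulations T, each weighted by the Regge (discretized) form of the gravitational action, with C_T a symmetry factor of the triangulation. This is only the schematic shape of the formula, not its full technical statement:

```latex
Z = \sum_{T} \frac{1}{C_T}\; e^{\, i\, S_{\mathrm{Regge}}[T]} .
```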
CDT was developed by Renate Loll and Jan Ambjorn, and has the advantage that a lot of simulations can be done, and some interesting results have appeared. One of the most interesting is the possibility that spacetime has different dimensionalities at different scales. This may seem a little strange, but technically what happens is that if you put a particle moving at random in this space (technically, executing a random walk), its behavior is similar to that of a particle walking in a 2-dimensional space for short times (interpreted as short scales) and similar to a particle walking in a 4-dimensional space for longer times. Mathematically, the particle is performing a diffusion in a fractal space, and we have a formula for this that gives the so-called spectral dimension of the diffusion. The spectral dimension can be calculated by fitting a curve to the plot of how the diffusion spreads versus the time spent, and inserting the result into the formula. That is what they did, and these are the results they found.
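To give an idea of the procedure, here is a toy sketch in Python of how a spectral dimension is read off from a random walk. I use an ordinary 2D square lattice instead of a CDT geometry (so the answer should come out near 2), and the number of walkers and step counts are illustrative choices:

```python
import random
from math import log

# On a lattice, the probability of a random walker being back at the
# origin after s steps falls off as P(s) ~ s^(-d_s/2), where d_s is
# the spectral dimension. So d_s is read off from the slope of
# log P against log s.

def return_probability(steps, walkers=200000):
    """Fraction of random walkers back at the origin after `steps` steps."""
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    back = 0
    for _ in range(walkers):
        x = y = 0
        for _ in range(steps):
            dx, dy = random.choice(moves)
            x += dx
            y += dy
        if x == 0 and y == 0:
            back += 1
    return back / walkers

s1, s2 = 10, 40                 # even step counts (parity constraint)
p1, p2 = return_probability(s1), return_probability(s2)
# slope of log P versus log s gives -d_s / 2
d_s = -2 * (log(p2) - log(p1)) / (log(s2) - log(s1))
print("estimated spectral dimension:", d_s)   # roughly 2 here
```

In CDT the same diffusion experiment is run on the simulated triangulated geometries, and that is where the scale-dependent dimension (near 2 at short times, near 4 at long times) shows up.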
As I said, CDT is new and there are not many people working on it around the world, but if you are interested, search for it and for the papers published by Loll and Ambjorn on the arXiv; they always put a preprint of their work there. And take a look at the discussions in the "Strings, Branes and LQG" section of Physics Forums. You can always learn a lot of things there.
Last year I attended a course about cosmology given by Prof. Raul Abramo at the Physics Institute of the University of São Paulo. At some point he was explaining Hubble's law and how we could calculate the Hubble constant by measuring the (cosmological) redshift of distant objects. The explanation for the redshift is that, as the universe expands, it stretches the electromagnetic waves, so that their wavelength is increased. A longer wavelength means a lower frequency, and so the light is redshifted; equivalently, the frequency of the light emitted by the object is pushed in the direction of red light and beyond (into the infrared and further), red light having a lower frequency than blue light.
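In formulas, these are the standard textbook relations (with a(t) the cosmological scale factor): the redshift z measures how much the wavelength was stretched, and for nearby objects Hubble's law relates recession velocity to distance:

```latex
1 + z = \frac{\lambda_{\mathrm{obs}}}{\lambda_{\mathrm{emit}}}
      = \frac{a(t_{\mathrm{obs}})}{a(t_{\mathrm{emit}})},
\qquad
v \approx c\, z \approx H_0\, d .
```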
After the class, I was thinking about the effect and realized that I had been given a classical (not quantum) explanation. The nature of light seems to be quantum, so a full quantum explanation should exist. I first asked a former professor of mine, Prof. Henrique Fleming, and he told me that there was no official explanation, because we do not have a theory of quantum gravity yet, and the redshift is a relativistic, and therefore gravitational, effect.
I thought about the question and arrived at the conclusion that the explanation would probably be given by an interaction between the photon and the graviton. Somehow, the photon should interact with the graviton and give it some part of its energy. As energy is proportional to frequency, less energy means a lower frequency, and hence a redshift. Then, last week, I saw on Physics Forums a comment about a preprint by Michael Ivanov entitled "Low-energy quantum gravity" that presented the idea in detail. I haven't read the paper yet, but it seems to be an interesting one. Although nobody is sure that the graviton really exists (and if you go to Physics Forums you will see a lot of people who will say that it doesn't), maybe as a low-energy approximation the concept could explain the redshift effect in a quantum mechanical way, something that has not been done until now.
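The arithmetic behind that reasoning is just Planck's relation between a photon's energy and its frequency:

```latex
E = h\nu
\quad\Rightarrow\quad
\Delta\nu = \frac{\Delta E}{h},
```

so any energy ΔE handed over to the graviton would show up directly as a drop in frequency, i.e., a redshift.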
Picture: Barred Spiral Galaxy NGC 1300 - NASA
There are two big problems in physics that all the clues given by nature seem to indicate are related, but until now no one has been able to explain exactly what the relationship is, such that both could be solved. They are the arrow of time and the baryon asymmetry.
To be brief (and very imprecise), the arrow of time problem is the problem of why time goes on in only one direction, since the equations of physics are symmetric with respect to the time variable. There are a lot of conjectures, but no one is really certain about it. The baryon asymmetry comes from the fact that we observe much more matter in our universe than antimatter, which is a problem: charge is conserved, and if we suppose that in the beginning there was nothing and that every time a matter particle is created its antiparticle is created along with it, we should have as much antimatter as matter in our universe.
Both problems are related by a symmetry of nature named CPT, which says that if we simultaneously flip the signs of time, charge and parity in a physical system, all the equations of motion stay the same. So charge and time are related somehow. Indeed, as Feynman pointed out, we can FORMALLY consider an antiparticle as a particle going backwards in time.
Now, to the crazy idea I've been thinking about. I stress this point: IT'S JUST AN IDEA. I have to work on it to see if it has some chance of living or if it's just nonsense. Maybe the reason why we see more matter is the same reason why time only goes in one direction. Somehow, particles and antiparticles may have an internally defined direction of time, and as time goes forward, we see much more matter. It seems to me like a symmetry breaking induced by a field or a fluctuation. Anyway, I need to work more...
Edit (04-Nov-2005): I was reading around and found that there are related ideas out there, so it's not so absurd after all.
Picture: A burst of light is emitted as the electron and its antiparticle, the positron, collide. Image credit: NASA - Goddard Space Flight Center Scientific Visualization Studio.