Dec 22, 2011

There's no "Science crisis of Faith"

I don't much like to criticize other people's work, but I have come to learn that it's an important part of making people aware of the wrong things that appear in the press. Not only that, it's also important for people to know that even scientists are not error-proof and free from prejudices. Even Nobel Prize winners.

But I am not going to criticize the work of a Nobel Prize winner, not least because such work is usually technically accurate, at least accurate enough to deserve the prize. I am going to criticize a horrible, horrible popular science article that I read recently. This one:

It's full of logical fallacies, prejudices and misleading sentences, although probably not intentionally so. So, let's analyse it. The title is, of course, intended to draw attention. Anything that relates science to faith is certainly going to attract many readers: the scientists because they always get annoyed by it, the believers because they are always delighted to see that this arrogant bunch of scientists are just as much believers as they are. Let's excuse the title, then, as everyone knows that publishers need to draw attention. As much as I dislike it, I have also learned that marketing IS necessary.

The first paragraph is okay. As usual, it talks about the fact that the Greeks thought of almost everything first, in this case atomic theory. They just did not have enough technology (in every sense) to verify it. So far, so good. The second paragraph is a bit dubious, but I will skip it as I don't want to be too picky at this point. Then comes the next paragraph, which I will quote almost in its entirety here:
(...) Dramatic developments in cosmological findings and thought have led some of the world’s premier physicists to propose that our universe is only one of an enormous number of universes with wildly varying properties, and that some of the most basic features of our particular universe are indeed mere accidents—a random throw of the cosmic dice. In which case, there is no hope of ever explaining our universe’s features in terms of fundamental causes and principles.
You can already notice the beginning of the argument from authority here. Of course, in this case it must be partially excused, as the author is not an expert in the field. I just want to remind the reader that even if ALL of the world's premier physicists believe in something, that doesn't mean it's true. Now, the last sentence is just nonsense. Even if our universe is just one among many with different physical properties, that doesn't mean that we cannot explain its features in terms of fundamental principles. The variety of universes would itself be a principle! But let's continue; I'll say more about that later, at the appropriate place. Let's take an excerpt from the next paragraph.
(...)Alan Guth, a pioneer in cosmological thought, says that “the multiple-universe idea severely limits our hopes to understand the world from fundamental principles.” And the philosophical ethos of science is torn from its roots. As put to me recently by Nobel Prize–winning physicist Steven Weinberg, a man as careful in his words as in his mathematical calculations, “We now find ourselves at a historic fork in the road we travel to understand the laws of nature. If the multiverse idea is correct, the style of fundamental physics will be radically changed.”
Two arguments from authority here again. Guth and Weinberg are truly competent physicists, but that doesn't mean their philosophical views are correct. When Guth says that the multiverse idea "limits our hopes to understand the world from fundamental principles", he's talking about HIS hopes, or rather about what he thinks fundamental principles are, whatever that is. And to be clear, NOTHING in the ethos of science is torn from its roots, because science is based on evidence, and if the evidence says so, that's the right thing to think, independently of our prejudices. Now, Weinberg may be correct in saying that the style of PART of fundamental physics would change, but only because most theoretical physicists are happy to share the same group thinking about philosophy, which, again, might be absolutely wrong. All Weinberg is saying is that they will have to revise their "beliefs" in the face of new evidence, which is actually what science is all about. Hardly a problem from the scientific point of view, but maybe a large one from the HUMAN point of view... Now, things get worse. Let's see some parts of the next paragraph.
Theoretical physics is the deepest and purest branch of science. It is the outpost of science closest to philosophy, and religion.
I can happily accept the compliment to theoretical physics, but the allusion to religion here is absolutely unnecessary, wrong, and only a means of drawing the reader's attention once more. Besides, all science is close to philosophy. Philosophy underlies science and precedes it in the exploration of possibilities. They are, in some sense, interlinked. Religion is another story, and science, no matter what branch, IS NOTHING LIKE RELIGION. Scientists may be religious, though. Many are. Next bit.
Experimental scientists occupy themselves with observing and measuring the cosmos (...). Theoretical physicists, on the other hand, are not satisfied with observing the universe. They want to know why. 
That reveals a huge prejudice against experimental physicists. I'm a theoretical one, but I doubt that my experimental friends are only interested in taking measurements for the fun of it. Some may be, but I doubt they are the majority. They are also driven by the curiosity to understand; they just have other abilities. Just that. Everybody wants to know "why"; the problem is that different people are satisfied by different answers. Moving on.
The underlying hope and belief of this enterprise has always been that these basic principles are so restrictive that only one, self-consistent universe is possible, like a crossword puzzle with only one solution.
The author is talking about the idea that it's possible to find a set of principles that uniquely leads to our universe. That, however, is only an idea. It is not a foundation of science. It's a hypothesis and can be as wrong as any other hypothesis. Again, although many scientists believe in it, it might simply be wrong, and good scientists have no problem admitting that. Now, after talking about string theory, the multiverse and eternal inflation, and calling ALL theoretical physicists "Platonists" without a clear meaning, the author jumps to this conclusion:
 We are living in a universe uncalculable by science.
Of course, that's just a sensationalist sentence. In truth, the correct statement would be that, according to THOSE theories (which might be wrong), many laws of our universe may not be uniquely determined or constrained. We can still pinpoint a set of axioms and constants, though, from which we can calculate things. It's just that they are not the way many of us expected. Well, IF it's correct, get over it. That's life. But we can still calculate things, believe me. There is no risk that the universe around us will suddenly fall into total chaos. A bit further on, there is a sentence from Guth:
“Back in the 1970s and 1980s,” says Alan Guth, “the feeling was that we were so smart, we almost had everything figured out.”
That's actually what Guth thought. Lord Kelvin thought the same around 1900, just before Relativity and Quantum Mechanics. One would hope that 80 years later scientists would know better. In fact, I think most do. Indeed, the next paragraph reveals that Guth was only talking about his OWN expectations:
“The reason I went into theoretical physics,” Guth tells me, “is that I liked the idea that we could understand everything—i.e., the universe—in terms of mathematics and logic.” 
Which is a completely personal opinion that might not be shared by other physicists. Not by me, for instance. But who am I, right? (I love arguments from authority... people fall for them so easily...) Then the next paragraph contains all that old nonsense about how life could not evolve if the laws or constants of the universe were different. I don't need to say that this is based on a largely prejudiced and short-sighted view of what life is. There is no reason to believe that life could not arise with a different set of physical laws. All that discussion is, in a way, trivial, as it says that life is the way it is because those are the numbers, and it would be different if the numbers were different. By the way, let me say it once more: the weak anthropic principle IS TRIVIAL. The strong one is religion and, therefore, not science.

The next two paragraphs reveal something scary about the author: he really seems to consider "Intelligent Design" a solution, just not an appealing one! I hope that was just an impression, but it really looks like it. Then it's said that the multiverse solves the fine-tuning problem. Honestly, there is not really a fine-tuning problem, in the sense I have already argued. The multiverse might, though, be an alternative explanation for why we see the laws and constants we see. It is, at least, a solution if we cannot find a set of laws that restricts the possibilities to the ones we observe. Again, it's a possibility which, to be honest, is not in any sense more philosophically appealing on rational grounds unless the theories that lead to it become more and more plausible with more evidence. As for the example of dark energy as fine-tuning, I don't really get it. The alleged fine-tuning seems more like a consequence of a poor understanding of its nature than anything else.

In fact, at the end of the article it's clear that what the author hopes is that science becomes more like religion, and he says that scientists are not used to taking beliefs on faith as theologians are. Well, you bet we're not! And I hope that doesn't change. As I said, science IS NOT like religion and should NEVER be. As much as the multiverse might be an interesting idea, it must be supported by evidence and NEVER EVER taken for granted just because some people say so.

The most important difference between science and religion is that NOTHING in science is exempt from being questioned. Scientists may take that badly many times, because we're human as well, but that's our fault, not science's.

Dec 20, 2011

Bayes and Particle Physics

I have just read a very nice paper on how confidence levels are badly interpreted as probabilities of events in particle physics:


It explains well how the frequentist hypothesis-test language can be misleading, to say the least. To be honest, it's amazing how, after so many examples of how Bayesian (although it should be called Laplacian) reasoning is the correct way of doing probabilistic inference, people are still stubborn enough to reject it for the silliest reasons.

It's extremely simple to see that the frequentist approach is a particular case of the much more general Bayesian framework (once more, read Jaynes or Sivia). The last arguments I heard against it were related to quantum mechanics. Some people say that, because probabilities are fundamental in QM, the frequentist framework is the correct one. That's nonsense, of course. QM probabilities are still Bayesian: they give the odds of a result given the preparation of an experiment. Obviously, as always, they coincide with the frequentist calculations in the long run, but still, whenever your quantum state is |+>, you can safely say that the probabilities of measuring spin up and spin down are the same, even if you do only one experiment (not an infinite number of them). The meaning is simply that you don't have any reason to prefer either result, up or down, in your next measurement.

The important thing to remember is this: frequentists cannot say that your NEXT coin throw has a probability of 1/2 for tails and 1/2 for heads, because that statement should be meaningless for them (although this can be masked by a lot of tricky justifications). On the other hand, that kind of reasoning is perfectly allowed in Bayesian inference, where it has the simple and intuitively clear meaning that there is no reason to favour either tails or heads.
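
To make that concrete, here is a toy calculation of my own (not from the paper): in the Bayesian framework the probability of heads on the NEXT throw is just the posterior predictive of a Beta-Bernoulli model, a perfectly well-defined number even before you have seen any data.

    def predictive_p_heads(n_heads, n_tails, a=1.0, b=1.0):
        """Posterior predictive P(next throw = heads) under a Beta(a, b) prior."""
        # Beta prior + Bernoulli likelihood -> Beta(a + n_heads, b + n_tails) posterior;
        # its mean is the predictive probability (Laplace's rule of succession for a = b = 1).
        return (a + n_heads) / (a + b + n_heads + n_tails)

    print(predictive_p_heads(0, 0))    # no data yet: 0.5, no reason to favour either side
    print(predictive_p_heads(1, 0))    # after a single head: 2/3
    print(predictive_p_heads(70, 30))  # after 100 throws: 71/102, now dominated by the data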

That should be simple, shouldn't it? In this case, it is.

Oct 5, 2011

Nobel Prize in Physics for the Accelerating Universe

I think it's a little late for this now, but as I had this draft sitting on Blogger, I thought I'd better either publish it or delete it. I decided to publish it.

This year's Nobel Prize in Physics was given for the discovery of the accelerating expansion of the universe by means of supernova measurements. It was awarded jointly to Saul Perlmutter, Brian Schmidt and Adam Riess, the last two being members of the High-z Supernova Search Team.

I'm going to write about the accelerated expansion below, but one of the reasons I decided to publish this post instead of deleting it is that this is the second year in a row that I correctly predicted the prize. Unfortunately, last year's prediction of the Nobel for the discoverers of graphene didn't really count for anyone but me, because I never wrote it down anywhere, so I don't have any proof. However, I do have proof for this year's prize in this post I wrote last year:


Just take a look at the last paragraph.

Now that I've done some marketing, which seems to be the most important thing in science today (sic), I'll get back to explaining the meaning of the discovery. It is actually a big lesson for those who still think that science is made by consensus (just to stress it: they are wrong). Before this measurement, there was not much certainty about the rate of expansion of the universe, except that we had already inferred it was expanding, based on the Doppler shift of the galaxies measured by Hubble. However, there was a "consensus" (sic, again) that the expansion should be decelerating. Many important physicists supported this idea; Stephen Hawking, who is undoubtedly a good physicist, was one of them. The main reason was that this hypothesis was theoretically more elegant.
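
Just for context (this is standard cosmology, not something from the original post): in the Friedmann acceleration equation, ordinary matter and radiation make the expansion decelerate, while a cosmological-constant-like term pushes it to accelerate,

\[\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3},\]

where $a(t)$ is the scale factor. A decelerating universe was therefore the natural expectation if you assumed it contained only matter and radiation.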

Don't get me wrong: it is completely okay to support a preferred hypothesis for its elegance if there is no evidence whatsoever supporting any of its rivals, as long as you make it clear that it's just a personal preference. It works fine as long as you are aware that it is just a choice, or a working hypothesis, that still MUST be confirmed by experiment at some point.

For those who believed religiously in deceleration, the 1998 supernova result was a shock. It is also a fine argument to throw at that friend of yours who thinks that science is a social construct and reality is exactly what people believe it to be (although you never really win against those people). Once the results were checked and confirmed, the good scientists obviously had no problem accepting them (just don't forget that nothing in science is beyond doubt...).

The implications of this discovery are quite interesting. On the one hand, it means that the universe will not end up in a Big Crunch, where everything disappears in an ultimate singularity. On the other hand, it means that in the very long run everything will suffer a kind of death by isolation, as the space between things keeps stretching, although I guess it is not clear whether the expansion will become strong enough to separate electrons from their atoms.

The isolation comes from the fact that, as the expansion makes sufficiently distant objects recede faster than light, the bodies which remain more or less bound together will simply not be able to communicate with the other lumps of matter, as nothing can travel faster than light (forget for a moment about the OPERA neutrinos, not least because I'm far from convinced by that result). This isolation is not only depressing, but can also be the source of several philosophical problems about what is real or not, as one lump is in practice severed forever from the others and there are no measurements one can make of the other.

Nobody really knows what's causing the acceleration, and many ideas have appeared to try to explain it, including the usual scalar field, which is conjured up to fix almost everything in the universe. In this case it was called quintessence, but it has yet to be made to work. In general, the convention is to call whatever is causing the acceleration dark energy, just to have a more general working term. It is a hot topic of research and apparently far from being solved.

I will finish by giving a new Nobel Prize prediction. I think next year's goes to Quantum Computation/Information, probably to Ben Schumacher and Peter Shor. There are other possibilities, like the iron pnictide superconductors or spin glasses (probably including Parisi at least), but I will stick with the QC people this time.

Aug 9, 2011

Never fails...

From Science Jokes



Four stages of acceptance:

      i)    this is worthless nonsense;
      ii)   this is an interesting, but perverse, point of view;
      iii)  this is true, but quite unimportant;
      iv)   I always said so.

 (J.B.S. Haldane, Journal of Genetics 58, 1963, p. 464)

Jun 14, 2011

Solving Mazes on Mathematica


Just a quick post, as I found this very interesting article on the Wolfram blog:

by Jon McLoone

The photo above is from the article. It's an aerial view of the maze at Blenheim Palace, a very beautiful palace here in the UK. The author describes how he used Mathematica (the program) to find a solution to that maze. The most interesting thing is that he used the above photo and nothing else! The whole procedure, with pictures of the intermediate steps, is explained in the article. The final solution, which is what happens to the photo after processing, is this:


This final image is a superposition of the solution, obtained from the image itself, and the original one. Unfortunately, Mathematica is not only not free, but also very expensive. On the other hand, probably most universities have a license for it. Another good outcome is that I discovered that the Wolfram blog is actually quite interesting to follow... I guess I was a bit prejudiced before.
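
Just as an aside from me (this is not the image-processing pipeline of the Wolfram post, and the little text maze is made up): once a photo has been reduced to a grid of walls and corridors, the actual path-finding step is a plain breadth-first search, which you can sketch in a few lines of Python.

    from collections import deque

    MAZE = [
        "#########",
        "#S  #   #",
        "# # # # #",
        "# #   # #",
        "# ##### #",
        "#      E#",
        "#########",
    ]

    def solve(maze):
        rows, cols = len(maze), len(maze[0])
        start = next((r, c) for r in range(rows) for c in range(cols) if maze[r][c] == "S")
        parent = {start: None}          # remembers how each cell was reached
        queue = deque([start])
        while queue:
            r, c = queue.popleft()
            if maze[r][c] == "E":       # reached the exit: walk back to reconstruct the path
                path, cell = [], (r, c)
                while cell is not None:
                    path.append(cell)
                    cell = parent[cell]
                return path[::-1]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols \
                   and maze[nr][nc] != "#" and (nr, nc) not in parent:
                    parent[(nr, nc)] = (r, c)
                    queue.append((nr, nc))
        return None                     # no path exists

    print(solve(MAZE))                  # list of (row, col) cells from S to E

The clever part of the article is, of course, everything before this step: getting from an aerial photograph to such a clean grid.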

Jun 10, 2011

Negative Entropy and Quantum Observers


I've just read this interesting paper from arXiv:
The thermodynamic meaning of negative entropy, del Rio et al.
Landauer's erasure principle exposes an intrinsic relation between thermodynamics and information theory: the erasure of information stored in a system, S, requires an amount of work proportional to the entropy of that system. This entropy, H(S|O), depends on the information that a given observer, O, has about S, and the work necessary to erase a system may therefore vary for different observers. Here, we consider a general setting where the information held by the observer may be quantum-mechanical, and show that an amount of work proportional to H(S|O) is still sufficient to erase S. Since the entropy H(S|O) can now become negative, erasing a system can result in a net gain of work (and a corresponding cooling of the environment).
The main point is an idea of the authors about Landauer's erasure principle applied to quantum memories. There is a less technical description, with a no less catchy title, in this article: Erasing data could keep quantum computers cool.
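
The quantity doing the work here is the conditional entropy H(S|O) = H(SO) - H(O), which can become negative when S and O are entangled. Here is a little numerical illustration of my own (not from the paper) for two qubits:

    import numpy as np

    def entropy_bits(rho):
        """Von Neumann entropy in bits: -sum_i p_i log2 p_i over the eigenvalues of rho."""
        p = np.linalg.eigvalsh(rho)
        p = p[p > 1e-12]
        return float(-np.sum(p * np.log2(p)))

    def conditional_entropy_S_given_O(rho_SO):
        """H(S|O) = H(SO) - H(O) for a two-qubit state of system S and memory O."""
        rho_O = np.einsum('iaib->ab', rho_SO.reshape(2, 2, 2, 2))  # trace out S
        return entropy_bits(rho_SO) - entropy_bits(rho_O)

    # Classical-like case: O holds a perfect classical copy of S -> H(S|O) = 0
    rho_classical = 0.5 * (np.kron(np.diag([1, 0]), np.diag([1, 0])) +
                           np.kron(np.diag([0, 1]), np.diag([0, 1])))
    print(conditional_entropy_S_given_O(rho_classical))   # ~ 0.0

    # Maximally entangled case: H(SO) = 0 but H(O) = 1, so H(S|O) = -1
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
    rho_bell = np.outer(bell, bell)
    print(conditional_entropy_S_given_O(rho_bell))         # ~ -1.0

For the entangled pair the joint state is pure while the memory alone looks maximally mixed, and that negative bit is what the authors translate into work extracted, rather than spent, during erasure.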


The authors suggest that a quantum observer can use Landauer's principle to extract heat from the environment instead of dumping heat into it, effectively cooling instead of heating. I'm not a specialist in quantum computing and I didn't go through the arguments of the paper in much detail, but I remain a little skeptical about that, and I will explain why. As far as I understood, in order to derive the result the paper assumes the existence of a "quantum observer" and gives the example of a quantum memory. That is not very clear to me. In fact, if the final idea is to use this in real computers, the ultimate observer will have to perform a sharp measurement in the end and will obtain a definite number. The very idea of a quantum observer seems strange to me, in the sense that all observers are obviously quantum, but somehow the measurement of a property will cause the decoherence of the entanglement between the observer and the system.

If anyone wants to share their thoughts about the paper, please feel free. It seems to be interesting work, but I would like to understand it better.

Jun 3, 2011

Stupid World


I've been blogging about the world entering a New Dark Age for some time, but news like this makes me think that instead of Dark, I should call it the Dumb Age:


Apart from the fact that most politicians are not even capable of predicting the consequences of their own actions, any moderately educated person knows that it is not yet possible to predict this kind of natural event. I would add more: with high probability, this is partly because governments (and I'm not saying most, I'm saying all of them) don't give a damn about research in this area until something really bad happens.

What really happened is that politicians and government officials wanted to blame someone for the casualties of the 2009 earthquake in Italy, when of course they themselves were the guilty ones. As always, they found a scapegoat in the weakest link of the chain. The scientists are facing trial for something only barely lighter than genocide. The judge in this case is Giuseppe Romano Gargarella and, in his own words, the seven defendants supplied "imprecise, incomplete and contradictory information," and in doing so "thwarted the activities designed to protect the public."

Note that a judge is someone who is supposed to have attended a university course in law, but even if the subject of his degree was only law, I would think that any graduate should have some understanding of how science works. The judge clearly doesn't, which is a severe failure of whatever educational system he graduated from.

This kind of judgement is so stupid and so clearly abusive and misleading that it should itself be considered criminal in the present world. For those interested in a much more detailed analysis and much harsher comments than mine, I leave you with Lubos:


One last word. If you're a scientist, please react! These kinds of things keep happening only because most of the time we are afraid of reacting properly. Spread the news, complain, do something to draw attention to it. At least people will know how dangerous those who are supposed to be our leaders are.

PS: The photo is from a protest that has nothing to do with the news, but I thought that the message carried by the protesters was appropriate for the moment...

May 27, 2011

The Dark Side of Scientists: A Tale of Complex Networks

I'm not citing names, first because they would be meaningless for most people, and second because I'm not stupid and my future job may one way or another depend on those involved some day. Besides, it wouldn't be elegant or very professional. The tale I'm going to tell is nonetheless quite funny and clarifies a bit of human nature.

I've just had accepted for publication in PLoS ONE this nice paper, which I wrote with Joerg Reichardt and David Saad. Joerg, by the way, deserves all the credit as the computer expert behind some of the best tricks involved in the algorithm which, as I'm going to explain, is the main point of the paper.

The paper is this:

The interplay of microscopic and mesoscopic structure in complex networks, J. Reichardt, R. Alamino, D. Saad - arXiv:1012.4524v1 [cond-mat.stat-mech]

The link above is to the preprint on the arXiv; the final version is a bit different, but not very far from it. It's not traditional physics; it's a very interdisciplinary paper mixing ideas from physics and information theory, with applications to sociology and biology. Okay... it's fundamentally a paper on Bayesian inference and, being a mathematical paper, it's naturally interdisciplinary. But it's always nice to use that word. :)

Let me talk a bit about the paper. This may take a while, so if you are more interested in the tale itself, I suggest you scroll down. I will start by explaining what we mean by a complex network. A complex network is basically a bunch of things, which may be identical or different, interacting with each other in any kind of way. Sounds like everything is a complex network, right? Well, it's very much like that. But it is much easier to visualize with a picture, so I'm putting one I found on the internet here.


Each dot, or vertex, or node, in the above network is an author of a scientific paper. Each connection, link, or edge, means that two authors co-authored a paper. And that is a complex network. Mathematically, it's a graph. Now, it's not easy to see in this picture, but if you pay attention you will notice that there is some structure in this graph: some groups of nodes are more interconnected among themselves than with other nodes. Finding out the rules by which this happens is called community detection.

It's interesting to note that this group structure is what we call a mesoscopic characteristic of the network, meaning that it lives at an intermediate level between the macroscopic and the microscopic phenomena in the graph. By macroscopic you can imagine things like paths, cycles and cliques. By microscopic, you can think of characteristics of individual nodes or of small groups of nodes, usually two or three.

The interesting thing is that community detection is usually done by trying to infer how one group connects to another, completely ignoring any node-specific characteristics. What we did was to include this microscopic information in the inference and, voilà, our algorithm was capable of modelling the network structure better than the others!

Our algorithm is a Bayesian one, full of tricks I must admit, but it works anyway. What it effectively does is define a general model for a network that depends on two kinds of hyperparameters: the group structure and the tendency of each node to link to others. Then we feed the algorithm the observed adjacency matrix of the graph representing the network, and it gives back a classification of each node into a group, how the groups link to each other and the propensity of each node to link to others!
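
To give a flavour of what "group structure plus node propensities" means, here is a toy sketch of my own (NOT the actual model or algorithm of the paper), a Bernoulli likelihood in the spirit of degree-corrected block models, where the natural community assignment scores higher than a scrambled one:

    import numpy as np

    def edge_prob(theta, omega, g, i, j):
        """Probability of an edge between nodes i and j: node propensities times group affinity."""
        return min(theta[i] * theta[j] * omega[g[i], g[j]], 1.0 - 1e-9)

    def log_likelihood(A, theta, omega, g):
        """Bernoulli log-likelihood of an undirected adjacency matrix A."""
        n = len(A)
        ll = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                p = edge_prob(theta, omega, g, i, j)
                ll += np.log(p) if A[i, j] else np.log(1.0 - p)
        return ll

    # Tiny example: two triangles joined by a single edge.
    A = np.array([[0, 1, 1, 0, 0, 0],
                  [1, 0, 1, 0, 0, 0],
                  [1, 1, 0, 1, 0, 0],
                  [0, 0, 1, 0, 1, 1],
                  [0, 0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 1, 0]])
    theta = np.ones(6)                      # flat node propensities, for simplicity
    omega = np.array([[0.9, 0.1],
                      [0.1, 0.9]])          # strong within-group, weak between-group affinity
    good = np.array([0, 0, 0, 1, 1, 1])     # the "natural" community assignment
    bad = np.array([0, 1, 0, 1, 0, 1])      # a scrambled one
    print(log_likelihood(A, theta, omega, good) > log_likelihood(A, theta, omega, bad))  # True

In the real problem, of course, the assignments, the affinities and the propensities all have to be inferred from the adjacency matrix at the same time, which is where the tricks come in.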

You may say: of course, give me enough parameters and I can fit an elephant! But first, we did not add as many parameters as we could; we chose them by thinking about the best structure, and each one of our parameters has a "physical" interpretation. Second, we compared it with an algorithm with more degrees of freedom (more parameters to adjust) and we still performed better.

How do we know we're better? Well, there are some networks that were studied by specialists in their area, who inferred and studied the group structure. In our paper we give one example from sociology and two from biology. In these cases, we had what we called the expert classification. So we ran our algorithm and others on the networks and compared the results with this classification. As I said, our algorithm agreed with it much better in ALL three cases.

Now, let me ask you something. Isn't it clear that, in order to know whether the algorithm is good, we had to compare it with a case where the classification is known? Isn't it obvious that there can be cases where the expert classification may be difficult, unavailable, or may take a long, long time to obtain? And after all, even if we always had the expert, isn't it interesting to have a program that is as good as the expert? That would certainly tell us something about how the expert works and, as I have been writing on this blog for a long time now, pure knowledge is also a good thing.

And finally we come to the climax of the tale. I explained all of that in a talk, except for the last paragraph above, simply because I thought it was too obvious for an audience of scientists. After I finished, there were few questions from the audience, which is a sign that either no one understood what I said or nobody liked it. The second turned out to be the case, as one member of the audience asked, in a sarcastic tone of voice:

"Why don't you always ask the classification for the expert?"

The audience was pleased with the question and many smiled and nodded in agreement. I answered that it was a test, and the guy asked me for an example where the expert could not give the classification. I'm a terrible debater, so it took me some time to think of a specific example, and the one I came up with was not very convincing. But I guess it's clear that the more complex the network, the more difficult it is for a human expert to analyse it. Note that these people were not stupid. But they were nonetheless arrogant enough to think that if they could not see the importance of what you're doing, it's not important at all. In fact, I could say that many of the talks from those who were smiling were themselves quite devoid of any short-term practical application, which for me is irrelevant as, I'm saying it again, knowledge for the sake of knowledge IS IMPORTANT no matter what politicians and businessmen say.

This kind of attitude is unfortunately very common in the scientific community. Not everyone is like that, but a lot are. Be it because we are competing for funding or for awards, or because we want to be the brilliant rising star, that's not what science is all about. I guess this kind of disunity just makes us more vulnerable to the attacks we have been suffering from governments around the world. The utilitarian philosophy, which is just a means of mass control, is already rooted in our community. On the other hand, maybe we were always like that. Newton seems to have been like that. Others as well. But it is, anyway, regrettable.

Feb 22, 2011

Talking about Time, Mach and Information


Two weeks ago, I had the pleasure of receiving Julian Barbour here at Aston for a seminar. Julian is a singular physicist and I really admire him, especially for the path he chose after getting his Ph.D. at the University of Cologne, Germany. Instead of finding an academic position, he decided to support himself by translating Russian texts part-time. He told me that he knew he could not fit into the publish-or-perish academic environment. Well, his first published paper took him many years, but it was published in Nature. On top of that, he's a very nice and very polite person.

When I invited him to Aston, he promptly and kindly agreed to give us a seminar about his work on Mach's Principle. As most of you probably know, Mach's Principle is the idea, first expressed by Ernst Mach, that all motion should be relative. Newtonian physics is based on the assumption that non-accelerated motion is relative, but whenever acceleration comes into play there must be a sense in which we can talk about absolute motion. For instance, circular motion should be absolutely accelerated, no matter the frame of reference. Mach, and many philosophers including Poincaré, did not like that. They thought, as indeed I do as well, that all motion, accelerated or not, should be relative. The problem is that the universe doesn't seem to agree with this point of view.

Well, actually that's what I thought was the content of Mach's Principle before Julian gave his talk. What Julian taught us was that this is only part of it. It seems that the relational point of view has problems that appear even before acceleration comes into play.

Another thing Julian showed was how, using Mach's Principle, you can deduce very interesting physics, like gauge theories. You should note, however, that his work so far concerns only classical physics, but Julian also works on quantum gravity and one of his objectives is to attack the problem from that perspective.

The details of all of this can be found in this paper, freely available from the arXiv:
Julian has a book on his ideas about time as well, which I should have read before he came. That would have made our discussions much more fun. Yes... I am buying it now and I am eager to read it. I'll try to post some comments when I finish. Julian's idea is that of the block universe, and he also shares David Deutsch's enthusiasm for the multiverse idea.

After the seminar we had, as usual, lunch at the Business School and afterwards we spent the rest of the afternoon discussing his new interest in information theory. He was writing his essay for the Foundational Questions Institute contest, whose theme was "Is reality digital or analogue?". His essay is quite interesting and can be read here:

Basically, he argues against the currently fashionable position that information is a concept more fundamental than others, for instance fields. "It from bit" is the famous aphorism created by Wheeler in his phase when he thought that everything should originate from information. It's quite an enjoyable paper and we here are already planning to discuss it. I recommend reading it, as the mathematics is not very difficult. You can also vote for him in the contest. ;)


Feb 17, 2011

Fortuna Imperatrix Mundi


This very illuminating paragraph is a comment on the blog post Bloodbath for Science, posted on the Cosmic Variance blog, which talks about the huge cuts in science funding in the USA:
Understand, John: The people proposing these cuts believe that research scientists and staff are not doing any real work *by definition* unless they are employed by the private sector. If your work doesn’t contribute tangibly to some company’s bottom line, and ultimately to the profit of the CEO and shareholders, then your work produces nothing of actual value. In this view, any job that exists as a result of federal funding is, as a matter of principle, disposable and can be cut with no real loss of productivity. If your work is valuable to the private sector, you’ll be hired by some company anyway. If not, it has no value and you shouldn’t be getting paid to do it.
This is probably the most concise and clear explanation of how the minds of politicians, not only in the USA but also in the rest of the world, work. Very depressing. Even more depressing is that many people will find it absolutely logical.

Jan 30, 2011

Molecular Random Tilings


I am still organising our group's seminars every Friday. Last Friday we had a very interesting one given by Prof. Juan Garrahan, from the University of Nottingham. The title was the same as this post's: Molecular Random Tilings. Although I don't have the version of the talk he gave, you can access a very similar previous version on his webpage through this link.

The idea is a very interesting and beautiful one. The chemical problem is related to an organic molecule which is called TPTC or p-terphenyl-3,5,3’,5’-tetracarboxylic acid. It has the form below:


These molecules are adsorbed onto a graphite substrate and bind together by means of hydrogen bonds in one of two possible relative configurations. After the deposition process, they cover the substrate forming a hexagonal molecular lattice. In fact, you can associate a rhombus to each of these molecules and, in doing so, each configuration of the molecular lattice can be associated with a tiling of the plane (also known as a tessellation) by these polygons, which is a classic mathematical problem. It's also equivalent to another well-known statistical mechanics problem, the covering of a lattice by dimers, the simplest case being that on the regular lattice.

The way the rhombus is associated with the molecule is quite interesting. There are three possible orientations of the rhombus in the plane, and a colour is associated with each one (guess which...): red, green and blue. The picture at the top of this post, taken from the paper Molecular Random Tilings as Glasses by Garrahan et al., shows what the model looks like, and the figure below, taken from an article on the AMS site (and property of Peter Beton), shows on the left an image of the molecular lattice taken with a scanning tunnelling microscope and on the right the associated tiling.


The interesting thing in statistical mechanics is always to analyse phase transitions. In models like these, what is interesting is to study how the system passes, as we vary the temperature, from a phase dominated by random tilings, meaning tilings which are not ordered in the obvious way, to an ordered phase where the tiling is regular. The basic quantity we need to calculate turns out to be the free energy, which allows us to calculate everything else we want to know about the system. The beautiful thing about this model is that the free energy is proportional to the integral of the squared gradient of a field, called the height field, that can be defined at each point of the tiling. The cool thing is that this field can be seen, in some sense, as the height of the pile of 3-dimensional cubes you will certainly see when you look at the tiling!
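
Schematically, and sweeping all constants and conventions under the rug (the precise form, with the stiffness and its temperature dependence, is in the paper), the free energy described above is that of a Gaussian height field,

\[F[h] \propto \int d^2r \, \left|\nabla h(\mathbf{r})\right|^2 .\]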

Another very interesting aspect of this model is that it supports fractional excitations, which are very much like the anyons we have already discussed in some previous posts. While anyons have fractional statistics, these defects in the tiling are triangles which result from imperfect matches. Two triangles form a rhombus, but then they can split and run free through the tiling. This amounts to a fractionalisation of the degrees of freedom of the model and, as you can imagine, charges can be associated with these defects.

The details of the model are in the paper I linked to at the beginning of the post. It's worth taking a look at it, as there are a lot of beautiful images and much more information about the phase diagram of the model. After the seminar, I took Prof. Garrahan to lunch at our Business School (one of the advantages of giving a seminar in our group). A friend also called Juan, who is also Argentinian like Prof. Garrahan, accompanied us. It was a nice lunch and I would like to thank Prof. Garrahan for an excellent talk and a pleasant conversation afterwards.

Jan 24, 2011

A Note about Footnotes

I know this seems completely off-topic and unnecessary, but one of the advantages of having a blog is being able to make your complaints available to a wider audience. However, I believe this will not be as useless as it seems, and I would offer it as advice when writing documents, especially reports and theses. It's simple: do not overuse footnotes!

Footnotes are devices that should be used with care, which in many books (some very famous) and articles they are not. I don't mind when the author uses footnotes to hold the references, for instance. In fact, in some journals this is part of the standard article format. I prefer the references at the end of the paper, but that is just a biased opinion and there is not much difference. The biggest and most annoying misuse of footnotes is to add "extra information". I have an opinion about that: if the information is relevant, just put it in the main text. If it's not relevant, it's almost always better to keep it out of the document. There are occasions where a footnote is okay, but they are really rare.

When should you consider information worthy of a footnote? Well, you must use your own common sense, but there are some tips to check that you are not abusing them. For instance, if every page of your thesis has a footnote, you actually have more than one thesis. Also, if your footnotes are longer than two lines, maybe the information should be written, in slightly larger characters, in the main text. Believe me, I have seen books where a page had just a few lines of main text and the whole rest of it was filled with footnotes!

Another thing: there is nothing more distracting for the reader than a long sequence of footnotes that keeps interrupting the flow of the text. It's absolutely disruptive, and I have given up reading some books because the footnotes made them look like a jigsaw puzzle. To give just one example of a brilliant person who abused footnotes, think of the Landau & Lifshitz books (a famous series of physics books, for those who are not physicists). Beyond all the other issues that make those books difficult to follow, the footnotes keep interrupting the reading over and over again. And Landau is surely in the pantheon of physics gods.

When I wrote my Ph.D. thesis, I used just one footnote. I kept it because, in fact, I wanted to look smart about a topic, and I regret it. The rest of the 150 pages has no footnotes, except for the references, but those were at the end of the document, not of the pages. In the end, I received many compliments on the clarity of the text.

So, my advice is: include every relevant piece of information in the main text. Use parentheses, commas or whatever other trick you may need, but don't force the reader to make a detour to the bottom of the page unless you really, really, really think there is no other way. Your readers (maybe me, one day) will thank you. (Of course, that's only MY taste...)

Jan 23, 2011

Anthropic Principle


All definitions of the Anthropic Principle can be classified into two groups: the trivial and the wrong.

I know that the above assertion can be criticised for being too strong and too careless, and in some sense I must admit there is a sort of radicalism in it. However, given that the probability of it being accurate is high, it's worth the risk. I would expect such an issue to have been settled long ago, but over and over again I end up reading about the Anthropic Principle as if it were a really great and brilliant idea. In fact, I only decided to write about it because I was reading Richard Dawkins's The God Delusion and he talks about it at some point. So let me explain the reasoning behind my point of view.

The detailed definition of the Anthropic Principle, with all the technical terms and such, can be found in summarised form in the Wikipedia article on the topic. Technically, there are basically two versions, which can then be subdivided according to extra details: the strong and the weak versions.

The strong version says that the laws of the universe are such that at some point conscious observers must appear. The "must" is what makes the version strong. I have very little to say beyond the fact that this is a highly non-falsifiable argument. It claims that conscious beings are somehow an objective to be reached by a universe and that the laws of the universe must be such that they allow them to appear, or that without these beings the universe somehow cannot exist. It's actually quite easy to smell a bit of deism in this kind of argument. You may argue that it has something to do with some kind of natural selection principle where universes with conscious beings are fitter, but in fact I do not know of any convincing argument apart from sheer speculation. That it is not falsifiable should be clear. How would we falsify it? Well, we could if the universe did not have conscious observers from beginning to end. Too bad it does. We could construct such a universe... oh, but wait... if we are constructing them, then the larger universe that includes ours, and that one as well, also contains conscious observers. This is the version I call wrong. I know it's too strong to call it wrong, especially for a philosopher, but it's basically true.

The fact is that, in principle, there is nothing that prevents a version of our universe that is simply unable to sustain any kind of life, conscious or not, from existing. Mathematically, for instance, I see no problem. The issue is even deeper, because we don't really have a detailed understanding of the phenomenon of consciousness, or even of life itself. The only example of life we have is the one we can observe on Earth, which is hardly a fair sample of the whole universe. It's true that, according to some calculations, a slight deviation from the known values of the physical constants would have a huge impact on life as we know it, to the point that it would not be able to exist, but we are not really sure that some other kind of life could not. That brings me to the second group of definitions.

The second group are collectively known as weak versions. These are the ones I am calling trivial. Again, I am exaggerating on purpose. They all say that the constants of physics must be such that they allow (conscious) life to develop. For example, based on the fact that humans exist, you can get a good estimate of some physical constants, and the allowed range of the estimate falls very close to the real value. I hardly see the point of calling such an observation a "Principle". I tend to think that everyone who works with some kind of inference procedure, which obviously includes science, should see that if you assume life and try to estimate a physical constant, a correct answer just shows that your model is correct, not much else. The fact that humans exist is data. It's given evidence. If you do your estimate and reach a wrong value, it means you should work on a better model for your physics. Now, you can take any piece of evidence in the world and associate some kind of principle with it. For instance, let's talk about the 'Bread Principle'. It says that the physical constants must be such that bread can exist. Now, bread requires yeast, among other things. So the 'Bread Principle' says that microscopic life must exist, and that the chemical reactions that take place in yeast must occur in such a way that bread can rise (!). You probably see where I am going with this.

In the end, my point is in fact very simple. Any idea trying to justify the laws of the universe by requiring consciousness relies on a phenomenon that we are not even close to understanding at the moment and, to be honest, is nothing more than a sort of religious argument disguised in scientific clothes. On the other hand, the fact that the laws of the universe are compatible with our existence, and that given the correct model we can calculate things backwards, is just a statement of the obvious: the model must agree with the experimental evidence. I am not aware of any breakthrough provided by the so-called Anthropic Principle, and I am willing to bet that none will ever come from it, besides of course the usual ones provided by probabilistic inference.

Jan 18, 2011

Critical Care


For those who are not Star Trek fans, it is probably not very clear what a geeky sci-fi show that is not even being broadcast anymore has to do with anything remotely real. Fans, however, know better. When Gene Roddenberry created Star Trek, his idea was to discuss the problems of society in a disguised language. By placing them in the distant future and on distant planets, he could be excused from criticizing his own country. Just to give an example, it was on Star Trek (the Original Series) that the first interracial kiss on US television took place, with William Shatner being largely responsible for it not being cut from the original script. But that's another episode. The episode I really want to talk about is the one I watched last weekend.

By showing my wife the social side of Star Trek, I was able to convince her, a lawyer, to watch all five Star Trek series with me. She's actually enjoying it so much that she's a fan now too. Okay, from now on I will spoil the episode, so if you like surprises, stop reading now. The episode is from the last season of Voyager and it is called Critical Care. The ship's doctor, who is a hologram, is stolen and sold on a planet where the health system has some similarities with the real (not the idealised) terrestrial one. The story goes like this. The planet's economy was crashing (any similarity here?) and then an alien race appeared to help. They ended up leaving behind a health system where people are treated according to an index called the treatment coefficient, TC for short. A person's TC is calculated by an advanced computer, left by the nice aliens, which carefully takes into consideration the person's impact upon society, i.e., how much the person in question contributes to the well-being of all.

As in all societies, it turns out that the ones with the highest TC receive the best treatment, while the others, who are less relevant, receive just an annual quota of treatment. You can read other synopses on Wikipedia and IMDB. Alternatively, you can watch the whole episode, although I am not sure for how long, on YouTube by following the links starting with this:
There are very interesting dialogues and scenes. For example, the higher-TC patients are treated in a Blue Zone (or something like that) where everything is nice and clean. The computer allows the doctors a certain quota of medicines to treat the patients, but if the doctors do not use all of it, the computer decreases the quota the next month. At first sight it seems okay, but if you think about it more deeply, it is just absurd. Try. Of course the episode was meant as a direct critique of the US health system, but if you change the time to today, the country to the UK, the term Treatment Coefficient to Research Impact and patient to scientific project, you have an isomorphism.

As a friend of mine said, the messenger changes, but the message is always the same. In fact, what happens to people in that episode is presently happening to science and education in the UK. And not just metaphorically: the methods are literally, and I really mean LITERALLY, the same as in the Star Trek episode! Is it possible that science fiction writers can see profound and important things about how to make a better society while politicians cannot? If so, aren't we giving each of them the wrong job?

I am not a person who thinks that politicians do not know what they are doing (well, maybe some...). They are clever people. They know exactly what they are doing. Our duty is not to call them stupid; that's actually just helping them. What we need to think is: 'They are not stupid, so they are doing this for some reason. What is the reason?'. The answer to that question is what matters most.

Jan 9, 2011

The Holographic Way




Those of you who have been following me on Twitter (and have had the patience to read what I post there) have probably noticed the huge number of tweets with the tag #holography attached. The reason is, naturally, that I am trying to learn it. But before I go into details, I need to explain what it is all about. If you already know what it is, I will hardly say anything new.

The term "holography" has two meanings in modern physics, and they are obviously related. The first and most popular one is the technique used to create holograms, those three dimensional images embedded in a two dimensional sheet of paper or plastic. The second one is derived from an analogy with this property of storing the information for a three dimensional environment into a two dimensional one. The story starts with Jacob Bekenstein, a theoretical physicist that was thinking about thermodynamics and black holes. Although I will cut the story a lot, the main point is that he discovered that the entropy of black holes should be proportional to the area of their even horizon, the surface after which nothing can come back. That's what we call, in statistical mechanics language, non-extensive. We call a property extensive when it's proportional to the volume of the object.

The story actually mixes a lot of things, but I will try not to rush. Back to black holes: they are in fact the most entropic "objects" in the universe. The argument is simple enough and works, as in many situations, by invoking the Second Law of Thermodynamics. Suppose that in a region of space of radius R there is more entropy than in a black hole the size of that region. Then, by adding matter to the region, you can increase its mass. If you do that with no care at all, you can always increase the entropy by creating disorder, which is actually very easy, as anyone knows. It's easy to see where this ends. With enough matter, you create a black hole the size of the original region. If the black hole has less entropy, then you have decreased the TOTAL entropy of the universe and broken the Second Law.

Enter statistical mechanics. In the late 19th century, Boltzmann discovered that entropy can be understood microscopically in terms of the number of states accessible to a system. And it was by using this concept that two other physicists, 't Hooft and Susskind, suggested what became known as the Holographic Principle. Consider a region of space. The entropy of that region is bounded by the area of the event horizon of a black hole the size of that region, which means that the maximum entropy of that region is given by this area. Therefore, the number of possible states in which the entire region can be is controlled not by the volume of the region, but by its area!

Now it's easy to see why it is called the Holographic Principle. The possible configurations of the whole three-dimensional region are in fact limited by the two-dimensional area of its boundary. Like a hologram. Well, the Holographic Principle actually goes one step further by suggesting that the boundary actually ENCODES the degrees of freedom (in some sense, the equivalent of the possible configurations) inside the region. That's a bit more difficult to accept, but around 1997 a string theorist named Juan Maldacena, based on his work on strings, proposed something called the AdS/CFT conjecture. In a few words, the conjecture says that the degrees of freedom of a quantum gravity theory in anti-de Sitter space are encoded in a strongly coupled conformal field theory that lives on its boundary.

The importance of this is that in some limit the quantum gravity theory becomes classical gravity, which means general relativity. In fact, it means a classical field theory with a dynamical metric, where the metric is the mathematical way of encoding the distance between two points in any kind of space. I am not sure I understand this point precisely, but I believe this classical limit is the limit where the conformal field theory becomes strongly coupled. A conformal field theory is a special kind of field theory with an additional scaling symmetry. The good thing is that, although we don't know how to deal with strongly coupled field theories, we can more or less calculate things in the gravity sector of the AdS/CFT duality.

To finish, let me finally explain why I am interested in it. Recently, there has been some work in which the CFT side of the duality displays a phenomenology very similar to some strongly coupled systems in condensed matter. Now, these systems are quite important and very difficult to handle with traditional methods like statistical physics or perturbation theory. One of the most famous examples is the high-temperature superconductor. These superconductors were discovered in 1986 and we still do not have a good understanding of them. It seems that AdS/CFT can shed some light on this. Another problem is that of non-Fermi liquids, which are also strongly coupled systems of fermions in condensed matter.

Well, this was just an introduction to the topic. I will try to write more about it as I read. It's a selfish endeavour, as it's meant to help me think more clearly and understand this subject better. If anyone has comments, suggestions, or wants to correct the probably numerous mistakes I have written, or the ones I will write, feel free. That's the aim, after all. :) Oh, and by the way, the video really has nothing to do with the text. I just thought of it as a nice example of a hologram. :)

Jan 1, 2011

Intuition and Neural Networks

I had an interesting discussion with a friend over Christmas. It started because one of my presents, which I chose, was Richard Dawkins's The God Delusion. At some point the discussion became one about spirituality. He was arguing in favour of its existence and I was trying to understand what exactly he meant by the word spirituality. The details of the conversation are not really important, but at some point he argued that spirituality is related to intuition, and that intuition is something that cannot be logically understood. Of course I disagreed, because to me that is a very funny remark: among all cognitive phenomena, intuition is the one I would say has been most illuminated by the study of artificial neural networks and machine learning in general.

To many, the above statement may seem not only surprising but highly unbelievable and extremely exaggerated. It's not. To argue for it, let me start by explaining what I understand by intuition, which is probably also the concept everyone shares. Most people have been in a situation where you have to make a decision and, although you cannot explain why, and it may even sound counterintuitive, something inside you tells you the correct answer. I will not use intuition in the sense of premonition or anything like that. I will concentrate on this sort of "I know this is the correct answer but I can't explain it" thing.

You may think that the fact that you cannot explain the decision makes it something beyond logic and therefore impossible to understand. Actually, it's the complete opposite. The explanation is in fact the simplest one: the feeling of what the correct decision is comes from our brain's experience with similar situations. Too simplistic, you might say. Okay, but why should it not be so? And this is not just a guess: we can actually reproduce it in a computer. That is exactly how machine learning algorithms work.

Let me start by describing the simplest machine learning model, the perceptron. The perceptron is a mathematical model inspired by a real neuron. It has N inputs, which are usually taken to be N binary numbers, and computes what is called a Boolean function of them, giving as a result another binary number. The simplest rule is this

\[\sigma(\mathbf{x})=\mbox{sign}\left(\sum_i x_i w_i\right),\]
where $\mathbf{x}=(x_i)_{i=1,...,N}$ are the N Boolean inputs and the real numbers $w_i$ are what enables this simple model to do some kind of very basic learning. The trick is that, if we change these numbers, we can change (to some extent, which is already a technical issue) the Boolean function implemented by $\sigma$. The idea is that we have what is called a dataset of pairs $(\sigma_\mu,\mathbf{x}_\mu)$, with the index $\mu$ labelling the datapoints. We usually call these datapoints by the suggestive name of examples, as they indicate to the perceptron the pattern it must follow. We then use a computer algorithm to modify the $w_i$ so that the perceptron tries to match the correct answer $\sigma_\mu$ for every corresponding $\mathbf{x}_\mu$. The simplest algorithm that works is the so-called Hebb algorithm, based on the work of the psychologist Donald Hebb, which amounts to reinforcing the connections (by which I mean the numbers $w_i$) when the answer is correct and weakening them when it's wrong.
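
Here is a minimal sketch of my own of such a training loop, using the classic error-driven variant of this idea (the weights are only nudged when the perceptron gets an example wrong); the "teacher" vector is just a convenient way of generating a learnable pattern:

    import numpy as np

    rng = np.random.default_rng(0)

    N, P = 20, 100
    teacher = rng.standard_normal(N)              # hidden rule that generates the examples
    X = rng.choice([-1, 1], size=(P, N))          # the inputs x_mu
    y = np.sign(X @ teacher)                      # the correct answers sigma_mu

    w = np.zeros(N)                               # the perceptron's connections w_i
    for sweep in range(100):                      # repeatedly sweep through the examples
        mistakes = 0
        for x_mu, y_mu in zip(X, y):
            if np.sign(x_mu @ w) != y_mu:         # wrong (or undecided) answer:
                w += y_mu * x_mu                  # push w towards the correct one
                mistakes += 1
        if mistakes == 0:                         # every example answered correctly
            break

    print(np.mean(np.sign(X @ w) == y))           # 1.0: the pattern has been absorbed into w
    print(w[:5])                                  # ...but staring at these numbers explains nothing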

As I said, in simple situations this algorithm really works. Of course, there are more complex situations where the perceptron does not work, but then there are more sophisticated machine learning models, as well as algorithms. I will not discuss these details now, as they are not important to our discussion. The important thing is that, after learning, the perceptron can infer the correct answer to a question based simply on the adjusted numbers $w_i$. Now, notice that the perceptron does not really know the pattern it's learning; it is too simple a model to have any kind of awareness. The perceptron also does not perform any kind of logical reasoning to answer the questions; it just knows the correct answer as soon as the question is presented. Even after learning, it never really knows the pattern it's following. Basically, it gives an intuitive answer. But what is even more remarkable is that, even if we look at the numbers $w_i$, we also cannot explain what pattern the perceptron has learned. It's just a bunch of numbers, and if N is large it becomes even more difficult for us to "understand" it.

It looks too simplistic, but this is exactly what we called intuition above. In the end, making a decision based on intuition happens when your brain tells you that the question you are faced with follows some kind of pattern that you cannot really explain, but that just seems right. You learned it somehow, although you cannot explain what you've learned. As you can see, intuition is in fact one of the first things we were able to understand with machine learning, and the myth that it cannot be understood is just that: a myth.