ellipsix informatics
January 25, 2015

Time to kick off a new year of blog posts! For my first post of 2015, I'm continuing a series I've had on hold since nearly the same time last year, about the research I work on for my job. This is based on a paper my group published in Physical Review Letters and an answer I posted at Physics Stack Exchange.

In the first post of the series, I wrote about how particle physicists characterize collisions between protons. A quark or gluon from one proton (the "probe"), carrying a fraction $x_p$ of that proton's momentum, smacks into a quark or gluon from the other proton (the "target"), carrying a fraction $x_t$ of that proton's momentum, and they bounce off each other with transverse momentum $Q$. The target proton acts as if it has different compositions depending on the values of $x_t$ and $Q$: in collisions with smaller values of $x_t$, the target appears to contain more partons.

At the end of the last post, I pointed out that something funny happens at the top left of this diagram. Maybe you can already see it: in these collisions with small $x_t$ and small $Q$, the proton acts like a collection of many partons, each of which is relatively large. Smaller $x_t$ means more partons, and smaller $Q$ means larger partons. What happens when there are so many partons, each so large, that they can't all fit?

Admittedly, that may not seem like a problem at first. In the model I've been using so far, a proton is a collection of particles. And it seems totally reasonable that when you're looking at a proton from one side, some of the particles will look like they're in front of other particles. But this is one of those situations where the particle model falls short. Remember, protons are really made of quantum fields. Analyzing the proton's behavior using quantum field theory is not an easy task, but it's been done, and it turns out an analogous, but very serious, problem shows up in the field model: if you extrapolate the behavior of these quantum fields to smaller and smaller values of $x_t$, you reach a point where the results don't make physical sense. Essentially it corresponds to certain probabilities becoming greater than 1. So clearly, something unexpected and interesting has to happen at small $x_t$ to keep the fields under control.

Parton branching and the BFKL equation

To explain how we know this, I have to go all the way back to 1977. Quantum chromodynamics (QCD), the model we use to describe the behavior of quarks and gluons, was only about 10 years old, and physicists at the time were playing around with it, poking and prodding, trying to figure out just how well it explained the known behavior of protons in collisions.

Most of this tinkering with QCD centered around the parton distributions $f_i(x, Q^2)$, which I mentioned in my last post. Parton distributions themselves actually predate QCD. They first emerged out of something called the "parton model," invented in 1969, which is exactly what it sounds like: a mathematical version of the statement "protons are made of partons." So by the time QCD arrived on the scene, the parton distributions had already been measured, and the task that fell to the physicists of the 1970s was to try to reproduce the measurements of $f_i(x, Q^2)$ using QCD.

When you're testing a model of particle behavior, like QCD, you do it by calculating something called a scattering cross section, which is like the effective cross-sectional area of the target particle. If the target were a sphere of radius $r$, for example, its cross section would be $\pi r^2$. But unlike a plain old solid sphere, the scattering cross section for a subatomic particle depends on things like how much energy is involved in the collision (which you may remember as $\sqrt{s}$ from the last post) and what kinds of particles are colliding. The information about what kinds of particles are colliding is represented mathematically by the parton distributions $f_i(x, Q^2)$. So naturally, in order to make a prediction using the theory, you need to know the parton distributions.

The thing is, we actually can't calculate the parton distributions themselves! Believe me, people are trying, but there's a fairly fundamental problem: parton distributions are nonperturbative, meaning they are inextricably linked to the behavior of the strong interaction when it is too strong for standard methods to handle. Physicists already knew this in the 1970s. However, that didn't stop them from trying to calculate something about the parton distributions which could be linked to experimental results.

It turns out that even though the exact forms of the parton distributions can't be calculated from quantum field theory, you can calculate their behavior at small values of $x$, the green part on the left of the preceding diagram. In 1977, four Russian physicists — Ian Balitsky, Victor Fadin, Eduard Kuraev and Lev Lipatov — derived from QCD an equation for the rate of change of parton distributions with respect to $x$, in collisions with energy $\sqrt{s}$ much larger than either the masses of the particles involved or the amount of energy transferred between them ($Q$, roughly). In modern notation, the equation (which I will explain later) is written

$\pd{N(x, Q^2, \vec{r}_{01})}{\ln\frac{1}{x}} = \frac{\alpha_s}{2\pi}\int\uddc\vec{r}_2\frac{r_{01}^2}{r_{02}^2r_{12}^2} [N(x, Q^2, \vec{r}_{02}) + N(x, Q^2, \vec{r}_{12}) - N(x, Q^2, \vec{r}_{01})]$

$N$ is something called the color dipole cross section, which is related to $f$ from before via an equation roughly like this:

$f(x, Q^2) = \int^{Q^2}\iint N(x, k^2, \vec{r})\uddc\vec{r}\udc k^2$

That's why $f$ is often called an integrated parton distribution and $N$ an unintegrated parton distribution. I won't go into the details of the difference between $N$ and $f$, since both of them show the behavior I'm going to talk about in the rest of this post.
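To get a feel for that relationship, here's a minimal numerical sketch. The unintegrated shape `phi` below is entirely invented for illustration (and I've collapsed the $\vec{r}$ integral down to one dimension in $k^2$); only the structure of the integral, accumulating contributions up to $Q^2$, mirrors the equation above.

```python
import math

def phi(x, k2):
    """Toy unintegrated distribution (invented shape, for
    illustration only): rises at small x, falls off in k^2."""
    return x ** -0.3 * math.exp(-k2 / 10.0)

def f_integrated(x, Q2, n=1000):
    """Mimic f(x, Q^2) = integral of phi over k^2 up to Q^2,
    using a simple trapezoid rule."""
    k2min = 0.1
    h = (Q2 - k2min) / n
    total = 0.5 * (phi(x, k2min) + phi(x, Q2))
    for i in range(1, n):
        total += phi(x, k2min + i * h)
    return total * h

# The integrated distribution grows with Q^2 and with decreasing x:
print(f_integrated(0.01, 4.0), f_integrated(0.01, 25.0))
print(f_integrated(0.001, 25.0))
```

The point is just that $f$ inherits its $x$ dependence from $N$, which is why both show the same small-$x$ behavior.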

Anyway, the behavior that Balitsky, Fadin, Kuraev, and Lipatov analyzed comes from processes like these:

At each vertex, one parton with a certain fraction $x$ of the proton's momentum splits into other partons with smaller values of $x$. You can see this reflected in the equation: the term $-N(x, Q^2, \vec{r}_{01})$ represents the disappearance of the original parton, and $N(x, Q^2, \vec{r}_{02}) + N(x, Q^2, \vec{r}_{12})$ represents the creation of two new partons with smaller momentum fractions $x$. When this happens repeatedly, it leads to a cascade of lower-and-lower momentum particles as the branching process goes on. This explains why the number of partons, and thus the parton distribution functions, increase as you go to smaller and smaller values of $x$.
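You can mimic this branching picture with a toy Monte Carlo. To be clear, this is not the BFKL kernel, just the bookkeeping: evolve in $Y = \ln\frac{1}{x}$ and let each parton split with some small probability per step. The splitting rate `lam` is an arbitrary made-up number.

```python
import random

random.seed(1)

def cascade_size(Y, lam=0.5, dY=0.01):
    """Toy parton cascade (not the real BFKL kernel): evolve in
    small steps of Y = ln(1/x); at each step, every parton splits
    into two with probability lam * dY."""
    n = 1  # start from a single parton
    for _ in range(int(Y / dY)):
        splits = sum(1 for _ in range(n) if random.random() < lam * dY)
        n += splits  # each split turns one parton into two
    return n

# Averaged over many cascades, the parton count grows roughly
# like exp(lam * Y), i.e. like a power of 1/x:
for Y in (2, 4, 6):
    avg = sum(cascade_size(Y) for _ in range(200)) / 200
    print(Y, avg)
```

Each step multiplies the expected parton count by $(1 + \lambda\,\mathrm{d}Y)$, which compounds into exponential growth in $Y$, exactly the qualitative behavior described above.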

This BFKL model has been tested in experiment after experiment for many years, and it works quite well. For example, in the plot below, from this paper by Anatoly Kotikov, you can see that the predictions from the BFKL equation (solid lines) generally match the experimental data (dots with error bars) quite closely.

The plot shows the structure function $F_2$, which is a quantity related to the integrated parton distribution.

Parton recombination

However, there is one big problem with the BFKL prediction: it never stops growing! After all, if the partons keep splitting over and over again, you keep getting more and more of them as you go to lower momentum fractions $x$. Mathematically, this corresponds to exponential growth in the parton distributions:

$\mathcal{F}(x, Q^2) = \ud{f}{Q^2} \sim \frac{x^{-4\bar{\alpha}_s\ln 2}}{\sqrt{\ln\frac{1}{x}}}$

which is roughly the solution to the BFKL equation.
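Plugging numbers in makes the growth concrete. Here's a quick evaluation of that asymptotic form, using $\bar{\alpha}_s \approx 0.2$ as a typical rough value (not a fitted one):

```python
import math

def bfkl_growth(x, alpha_bar=0.2):
    """Leading BFKL asymptotics: x^(-4*alpha_bar*ln2) / sqrt(ln(1/x)).
    alpha_bar ~ 0.2 is a typical rough value, not a fit."""
    L = math.log(1.0 / x)
    return x ** (-4.0 * alpha_bar * math.log(2.0)) / math.sqrt(L)

for x in (1e-2, 1e-3, 1e-4, 1e-5):
    print(f"x = {x:.0e}  growth factor ~ {bfkl_growth(x):.1f}")
```

With these numbers the exponent $4\bar{\alpha}_s\ln 2$ is about 0.55, so each factor of 10 decrease in $x$ multiplies the distribution by roughly a factor of 3, growth with no end in sight.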

If the parton distributions get too large, when you try to calculate the scattering cross section, the result "breaks unitarity," which effectively means the probability of two partons interacting becomes greater than 1. Obviously, that doesn't make sense! So this exponential growth that we see as we look at collisions with smaller and smaller $x_t$ can't continue unchecked. Some new kind of physics has to kick in and slow it down. That new kind of physics is called saturation.

The physical motivation for saturation was proposed by two physicists, Balitsky (the same one from BFKL) and Yuri Kovchegov, in a series of papers starting in 1995. Their idea is that, when there are many partons, they actually interact with each other — in addition to the branching described above, you also have the reverse process, recombination, where two partons with smaller momentum fractions combine to create one parton with a larger momentum fraction.

At large values of $x$, when the number of partons is small, it makes sense that not many of them will merge, so this recombination doesn't make much of a difference in the proton's structure. But as you move to smaller and smaller $x$ and the number of partons grows, more and more of them recombine, making the parton distribution deviate more and more from the exponential growth predicted by the BFKL equation. Mathematically, this recombination adds a negative term, proportional to the square of the parton distribution, to the equation.

$\pd{N(x, Q^2, \vec{r}_{01})}{\ln\frac{1}{x}} = \frac{\alpha_s}{2\pi}\int\uddc\vec{r}_2\frac{r_{01}^2}{r_{02}^2r_{12}^2} [N(x, Q^2, \vec{r}_{02}) + N(x, Q^2, \vec{r}_{12}) - N(x, Q^2, \vec{r}_{01}) - N(x, Q^2, \vec{r}_{02})N(x, Q^2, \vec{r}_{12})]$

When the parton density is low, $N$ is small and this nonlinear term is pretty small. But at high parton densities, the nonlinear term has a value close to 1, which cancels out the other terms in the equation. That makes the rate of change $\pd{N}{\ln\frac{1}{x}}$ approach zero as you go to smaller and smaller values of $x$, which keeps $N$ from blowing up and ruining physics.
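If you ignore the spatial structure, the competition between the linear and quadratic terms behaves like a logistic equation, $\pd{N}{Y} \approx \lambda(N - N^2)$ with $Y = \ln\frac{1}{x}$. A few lines of Euler integration (with an arbitrary growth rate `lam`) show the characteristic behavior, exponential growth at first, then saturation as $N$ approaches 1:

```python
def evolve(N0=1e-3, lam=0.5, Ymax=40.0, dY=0.01):
    """Toy stand-in for BK evolution in Y = ln(1/x): the linear
    (BFKL-like) term drives exponential growth, and the quadratic
    (recombination) term shuts it off as N approaches 1."""
    N = N0
    history = []
    for i in range(int(Ymax / dY)):
        N += lam * (N - N * N) * dY  # dN/dY = lam * (N - N^2)
        if i % int(10 / dY) == 0:
            history.append(N)  # record N roughly every 10 units of Y
    return N, history

Nfinal, hist = evolve()
print(hist)    # early values grow rapidly ...
print(Nfinal)  # ... then the growth stalls just below 1
```

This is only a caricature, since the real BK equation couples different dipole sizes $\vec{r}$, but it captures the essential mechanism: the nonlinear term tames the exponential growth.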

By way of example, the following plot, from this paper (my PhD advisor is an author), shows how the integrated gluon distribution grows more slowly when you include the nonlinear term (solid lines) than when you don't (dashed lines):

So where does that leave us? Well, we have a great model that works when the parton density is low, but we don't know if it works when the density is high. That's right: saturation has never really been experimentally confirmed, although it's getting very close. In the third and final post in this series (not counting any unplanned sequels), I'll explain how physicists are now trying to do just that, and how my group's research fits into the effort.

February 24, 2014

What's in a proton?

Hooray, it's time for science! For my long-overdue first science post of 2014, I'm starting a three-part series explaining the research paper my group recently published in Physical Review Letters. Our research concerns the structure of protons and atomic nuclei, so this post is going to be all about the framework physicists use to describe that structure. It's partially based on an answer of mine at Physics Stack Exchange.

Fundamentally, a proton is really made of quantum fields. Remember that. Any time you hear any other description of the composition of a proton, it's just some approximation of the behavior of quantum fields in terms of something people are likely to be more familiar with. We need to do this because quantum fields behave in very nonintuitive ways, so if you're not working with the full mathematical machinery of QCD (which is hard), you have to make some kind of simplified model to use as an analogy.

If you're not familiar with the term, fields in physics are things which can be represented by a value associated with every point in space and time. In the simplest kind of field, a scalar field, the value is just a number. Think of it like this:

More complicated kinds of fields exist as well, where the value is something else. You could, in principle, have a fruit-valued field, that associates a fruit with every point in spacetime. In physics, you'd be more likely to encounter a vector-, spinor-, or tensor-valued field, but the details aren't important. Just keep in mind that the value associated with a field at a certain point can be "strong," meaning that the value differs from the "background" value by a lot, or "weak," meaning that the value is close to the "background" value. When you have multiple fields, they can interact with each other, so that the different kinds of fields tend to be strong in the same place.

The tricky thing about quantum fields specifically (as opposed to non-quantum, or classical, fields) is that we can't directly measure them, the way you would directly measure something like air temperature. You can't stick a field-o-meter at some point inside a proton and see what the values of the fields are there. The only way to get any information about a quantum field is to expose it to some sort of external "influence" and see how it reacts — this is what physicists call "observing" the particle. For a proton, this means hitting it with another high-energy particle, called a probe, in a particle accelerator and seeing what comes out. Each collision acts something like an X-ray, exposing a cross-sectional view of the innards of the proton.

Because these are quantum fields, though, the outcome you get from each collision is actually random. Sometimes nothing happens. Sometimes you get a low-energy electron coming out, sometimes you get a high-energy pion, sometimes you get several things together, and so on. In order to make a coherent picture of the structure of a proton, you have to subject a large number of them to these collisions, find some way of organizing collisions according to how the proton behaves in each one, and accumulate a distribution of the results.

Classification of collisions

Imagine a slow collision between two protons, each of which has relatively little energy. They just deflect each other due to mutual electrical repulsion. (This is called elastic scattering.)

If we give the protons more energy, though, we can force them to actually hit each other, and then the individual particles within them, called partons, start interacting.

At higher energies, a proton-proton collision entails one of the partons in one proton interacting with one of the partons in the other proton. We characterize the collision by two variables — well, really three — which can be calculated from measurements made on the stuff that comes out:

• $x_p$ is the fraction of the probe proton's forward momentum that is carried by the probe parton
• $x_t$ is the same, but for the target proton
• $Q^2$ is roughly the square of the amount of transverse (sideways) momentum transferred between the two partons.

With only a small amount of total energy available, $x_p$ and $x_t$ can't be particularly small. If they were, the interacting partons would have a small fraction of a small amount of energy, and the interaction products just wouldn't be able to go anywhere after they hit. Also, $Q^2$ tends to be small, because there's not enough energy to give the interacting particles much of a transverse "kick." You can actually write a mathematical relationship for this:

$(\text{total energy})^2 = s = \frac{Q^2}{x_p x_t}$
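That relationship is easy to play with numerically. As a hypothetical round-number example, consider a $\sqrt{s} = 7\text{ TeV}$ collision with $Q = 10\text{ GeV}$ of transverse momentum transfer:

```python
def parton_x_product(sqrt_s_GeV, Q_GeV):
    """Rearranging s = Q^2 / (x_p * x_t): the product of momentum
    fractions probed at transverse momentum transfer Q."""
    s = sqrt_s_GeV ** 2
    return Q_GeV ** 2 / s

# Hypothetical round numbers: sqrt(s) = 7 TeV, Q = 10 GeV
print(parton_x_product(7000.0, 10.0))  # roughly 2e-6
```

At fixed $Q^2$, raising the collision energy $s$ pushes the product $x_p x_t$ down, which is exactly why high-energy colliders can probe such small momentum fractions.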

Collisions that occur in modern particle accelerators involve much more energy. There's enough to allow partons with very small values of $x_p$ (in the probe) or $x_t$ (in the target) to participate in the collision and easily make it out to be detected. Or alternatively, there's enough energy to allow the interacting partons to produce something with a large amount of transverse momentum. Accordingly, in these high-energy collisions we get a random distribution of all the combinations of $x_p$, $x_t$, and $Q^2$ that satisfy the relationship above.

Proton structure

Over many years of operating particle accelerators, physicists have found that the behavior of the target proton depends only on $x_t$ and $Q^2$. In other words, targets in different collisions with the same values of $x_t$ and $Q^2$ behave pretty much the same way. While there are some subtle details, the results of these decades of experiments can be summarized like this: at smaller values of $x_t$, the proton behaves like it has more constituent partons, and at larger values of $Q^2$, it behaves like it has smaller constituents.

This diagram shows how a proton might appear in different kinds of collisions. The contents of each circle represents, roughly, a "snapshot" of how the proton might behave in a collision at the corresponding values of $x$ and $Q^2$.

Physicists describe this apparently-changing composition using parton distribution functions, denoted $f_i(x, Q^2)$, where $i$ is the type of parton: up quark, antidown quark, gluon, etc. Mathematically inclined readers can roughly interpret the value of a parton distribution for a particular type of parton as the probability per unit $x$ and per unit $Q^2$ that the probe interacts with that type of parton with that amount of momentum.
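As a concrete (and entirely invented) illustration, here's a toy parametrization with the gluon-like shape $xf(x) = A\,x^{-\lambda}(1-x)^\beta$, a common functional form for such fits; the parameter values below are made up, but the rise toward small $x$ is the qualitative feature real parton distributions show.

```python
def toy_pdf(x, A=1.0, lam=0.3, beta=5.0):
    """Toy parton distribution shape xf(x) = A * x^(-lam) * (1-x)^beta.
    The parameter values are invented for illustration, not a fit."""
    return A * x ** (-lam) * (1.0 - x) ** beta

# The distribution grows as x decreases: more partons at small x.
for x in (0.3, 0.03, 0.003):
    print(x, toy_pdf(x))
```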

This diagram shows how the parton distributions relate to the "snapshots" in my last picture:

The general field I work in is dedicated to determining these parton distribution functions as accurately as possible, over as wide a range of $x$ and $Q^2$ as possible.

As particle accelerators get more and more powerful, we get to use them to explore more and more of this diagram. In particular, proton-ion collisions (which work more or less the same way) at the LHC cover a pretty large region of the diagram, as shown in this figure from a paper by Carlos Salgado:

I rotated it 90 degrees so the orientation would match that of my earlier diagrams. The small-$x$, low-$Q^2$ region at the upper left is particularly interesting, because we expect the parton distributions to start behaving considerably differently in those kinds of collisions. New effects, called saturation effects, come into play that don't occur anywhere else on the diagram. In my next post, I'll explain what saturation is and why we expect it to happen. Stay tuned!