2013
May
16

Careful, the rope is out to get you!

I know, I know, this blog post is very late! If I'm going to post about a Mythbusters episode, I usually try to do it before the next one airs. But this topic — calculating the circumstances under which a rope will pull a person's leg — turned out to be pretty complicated. Which of course made it impossible to give up on.

And after days of toil, I think I finally figured it out! If you're the kind of person who finds complicated physical interactions fascinating, you're going to love this post. The math isn't too complicated; if you know what a differential equation is, you'll be fine, but the physical reasoning is something you could probably spend a while wrapping your head around.

The setup

On the Deadliest Catch-themed Mythbusters episode which came out last week, one of the myths being tested was that if a person steps in a coil of rope, and that rope is attached to a crab pot (trap) that gets dropped over the side of the ship, the rope will be pulled tight enough around the person's leg that it will drag them overboard, and down to the ocean floor, along with the trap. Now, whether this works can depend on how the rope is coiled, which direction it's pulled, whether it gets knotted, and so on, but at its core, the myth is about friction. The frictional force between the rope and the leg has to be high enough to keep the rope from slipping while it pulls the person overboard. And friction doesn't need knots. I've managed to develop a model for how friction affects the behavior of a rope coiled around a cylindrical object.

Here's how it works: imagine an infinitesimally short segment of the rope. It's subject to four forces: tension $T_L$ pulling to the left, tension $T_R$ pulling to the right, the normal force $F_N$, and the frictional force $F_f$, which opposes the motion of the rope.

Let's decompose these forces in cylindrical coordinates, using the unit vectors $\unitr$ and $\unittheta$, perpendicular and parallel to the rope respectively. The normal force and friction will be in the radial and tangential directions, respectively, but the tension at each end points slightly off-axis. I've marked out the angles in the diagram to show how to decompose the tensions; since $\udc\theta$ is infinitesimal, I'll use the small-angle approximations $\sin\frac{\udc\theta}{2} \approx \frac{\udc\theta}{2}$ and $\cos\frac{\udc\theta}{2} \approx 1$:

\begin{align}\vec{T}_L &= T_L\biggl(-\unitr\sin\frac{\udc\theta}{2} - \unittheta\cos\frac{\udc\theta}{2}\biggr) = T_L\biggl(-\unitr\frac{\udc\theta}{2} - \unittheta\biggr) \\ \vec{T}_R &= T_R\biggl(-\unitr\sin\frac{\udc\theta}{2} + \unittheta\cos\frac{\udc\theta}{2}\biggr) = T_R\biggl(-\unitr\frac{\udc\theta}{2} + \unittheta\biggr)\end{align}

Time to move on to Newton's second law. Here's the radial component of $\sum\vec{F} = m\vec{a}$:

$\unitr\cdot\sum\vec{F} = -T_L\frac{\udc\theta}{2} - T_R\frac{\udc\theta}{2} + F_N = m\unitr\cdot\vec{a}$

and the tangential component:

$\unittheta\cdot\sum\vec{F} = -T_L + T_R + F_f = m\unittheta\cdot\vec{a}$

From here, there are two cases to consider. If the rope slips, we have $F_f = \mu_k F_N$, and if not, we have an inequality, $F_f \le \mu_s F_N$.

Slipping rope

For simplicity, let's take the equality first, assuming the rope is slipping. We can put all these equations together by solving for $F_N$ and $F_f$ and substituting. That gives

$T_L - T_R + m a_\theta = \mu_k\biggl((T_L + T_R)\frac{\udc\theta}{2} + m a_r\biggr)$

Since $T_L$ and $T_R$ are the tensions at infinitesimally close positions, their difference will be an infinitesimal quantity, $T_R - T_L = \udc T$. Also, the mass of the segment of rope will be an infinitesimal $\udc m$, which is equal to $R\lambda\udc\theta$ where $R$ is the radius of the coil and $\lambda$ is the rope's mass per unit length. Putting all this in, we get

$-\udc T + a_\theta R\lambda\udc\theta = \mu_k T\udc\theta + \mu_k a_r R\lambda\udc\theta$

I've set $T_L + T_R = 2T + \udc T$ and then ignored the second-order term proportional to $\udc T\udc\theta$. This leads to the differential equation

$\ud{T}{\theta} = R\lambda(a_\theta - \mu_k a_r) - \mu_k T$

which is the final formulation of the model. It's a general model, which can apply to several different situations; to get a specific solution, we just need to characterize the rope's acceleration by postulating some form for the components of the acceleration, $a_\theta$ and $a_r$.

Example solution: the fixed cylinder

To get a feel for this model, I'm going to investigate a simple situation where the cylinder stays in place and the rope slips around it. The rope's acceleration will be in the tangential direction, $\vec{a} = -a\unittheta$, setting $a_\theta = -a$ and $a_r = 0$.

With these conditions, the solution to the differential equation is

$T = -\frac{R\lambda a}{\mu_k} + \biggl(T_0 + \frac{R\lambda a}{\mu_k}\biggr)e^{-\mu_k\Delta\theta}$

where $\Delta\theta$ is the total angle by which the rope is wrapped around the leg — $2\pi$ for each complete loop. This solution has the form

$T = T_\infty + (T_0 - T_\infty)e^{-\mu_k\Delta\theta}$

where $T_0$ is the tension at $\theta = 0$ — where the rope just starts wrapping around the cylinder — and $T_\infty = -\frac{R\lambda a}{\mu_k}$ is a constant, the asymptotic value of the tension after it's wound around the cylinder many times. Here's a sample plot:

Wait a minute — a negative asymptotic value? Does it even make sense for tension to be negative? Of course not, and here's where a bit of physical intuition comes into play. We know our function $T(\Delta\theta)$ starts out positive, $T(0) > 0$, and, if the rope is wound enough times, it ends up negative, $T(\infty) < 0$. That means that for some intermediate angle, the function hits zero. Physically, that's the point at which the friction between the cylinder and the rope has "accumulated" enough to cancel out the external force applied to the rope. Any further along than that critical angle, the rope is slack, and the function $T(\theta)$ ceases to be applicable.
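By the way, the closed-form solution is easy to double-check by integrating the differential equation numerically. Here's a minimal sketch in Python, with made-up values for the constants (these are not the estimates I'll use later in the post):

```python
import math

# Made-up illustrative constants (not the estimates used later in the post)
R, lam, a = 0.06, 0.05, 10.0   # coil radius (m), rope mass density (kg/m), acceleration (m/s^2)
mu_k, T0 = 0.2, 5.0            # kinetic friction coefficient, initial tension (N)

def dT_dtheta(T):
    """Right-hand side for the fixed cylinder: a_theta = -a, a_r = 0."""
    return -R * lam * a - mu_k * T

def rk4_step(T, h):
    """One fourth-order Runge-Kutta step (the RHS has no explicit theta dependence)."""
    k1 = dT_dtheta(T)
    k2 = dT_dtheta(T + h * k1 / 2)
    k3 = dT_dtheta(T + h * k2 / 2)
    k4 = dT_dtheta(T + h * k3)
    return T + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def T_closed(theta):
    """Closed-form solution: T_inf + (T0 - T_inf) exp(-mu_k * theta)."""
    T_inf = -R * lam * a / mu_k
    return T_inf + (T0 - T_inf) * math.exp(-mu_k * theta)

h, steps = 0.01, 1000          # integrate out to theta = 10 radians
T = T0
for _ in range(steps):
    T = rk4_step(T, h)

print(T, T_closed(h * steps))  # the two values should agree closely
```

The numerical and closed-form values agree to many digits, which is reassuring since the equation is linear and easy to get wrong only in the signs.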

But wait! We can actually solve for the critical angle at which the tension hits zero.

$\Delta\theta_\text{critical} = \frac{1}{\mu_k}\ln\frac{T_\infty - T_0}{T_\infty} = \frac{1}{\mu_k}\ln\biggl(1 + \frac{T_0\mu_k}{R\lambda a}\biggr)$

Let's do a few sanity checks on this formula:

• Greater initial tension $T_0$ corresponds to a greater critical angle. Which makes sense because the harder you pull, the more of the rope you'll be able to pull taut.
• The critical angle decreases as $\mu$ gets larger — which, again, makes sense, because with a rougher pole, your pull won't be felt as far along the rope.
• In the opposite limit, as $\mu$ goes to zero (a frictionless pole), the critical angle becomes $\frac{T_0}{R\lambda a}$. Now, if you take this criterion $\Delta\theta = \frac{T_0}{R\lambda a}$ and rearrange it, you get $T_0 = (R\lambda\Delta\theta)a$, which is just $\sum F = ma$ for a rope on a frictionless surface! As it should be. If this complicated model that handles friction gave the wrong answer when there is no friction, it wouldn't be much good at all!
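That frictionless limit can also be checked numerically, by evaluating the critical-angle formula at ever smaller $\mu$. A quick sketch, again with made-up values for the constants:

```python
import math

# Hypothetical values, chosen only to exercise the formula
R, lam, a, T0 = 0.06, 0.05, 10.0, 100.0   # so R*lam*a = 0.03 N

def critical_angle(mu):
    """Wrap angle at which the tension first reaches zero (slipping rope)."""
    return math.log1p(T0 * mu / (R * lam * a)) / mu

frictionless_limit = T0 / (R * lam * a)   # the mu -> 0 prediction

for mu in (1e-2, 1e-4, 1e-6, 1e-8):
    print(mu, critical_angle(mu) / frictionless_limit)
# the ratio creeps up toward 1 as mu -> 0
```

Note the use of `log1p`, which keeps the computation accurate when $\frac{T_0\mu}{R\lambda a}$ gets tiny.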

That last item in particular also suggests a physical interpretation of this formula: it's effectively an equation of motion, filling the same role as $\sum F = ma$. Whereas we'd ordinarily use Newton's second law to calculate the motion of a rope subject to a given force, for the rope sliding around a pole (or leg) we can use this formula instead, and it will take care of the friction and the changing tension over the length of the rope. This works because at the end of the rope, where it "stops" winding around the cylinder, its tension is zero — it's not pulling anything. (Except perhaps some additional length of rope, but I'm going to assume that's negligible for simplicity.)

Non-slipping rope

The analysis when the rope doesn't slip is nearly the same, except that the coefficient of friction is $\mu_s$ instead of $\mu_k$, and instead of an equation we get an inequality from $F_f \le \mu_s F_N$.

$T_L - T_R + m a_\theta \le \mu_s\biggl((T_L + T_R)\frac{\udc\theta}{2} + m a_r\biggr)$

Following the same procedure from before, this becomes

$\ud{T}{\theta} \ge R\lambda(a_\theta - \mu_s a_r) - \mu_s T$

OK, this is new. Everybody (well, everybody who has a geeky streak) knows about differential equations, but when have you ever seen a differential inequality? They're not that common, but they're not that hard to understand either. In order for the inequality to be satisfied, the rate at which tension changes as you move along the rope (which, as in the previous section, is negative) has to be greater than the particular combination of acceleration and tension that appears on the right. Physically, the left side is related to the strength of friction, the right side is related to the other forces acting on one segment of rope, and for the rope not to slip, friction has to be strong enough to cancel out the other forces at every point along the contact surface.

Example solution: the moving cylinder

It'll be easiest to make sense of this differential inequality by choosing a specific type of acceleration, as we did with the slipping rope. Let's say the rope just pulls the cylinder in the direction of the initial tension, namely the $-\unitx$ direction, without rotating it.

This is not really realistic, since there will actually be some torque on the cylinder that will also rotate it, but it's close enough for demonstration purposes.

The components of acceleration in this case will be $a_r = \unitr\cdot\vec{a} = -a\sin\theta$, and $a_\theta = \unittheta\cdot\vec{a} = -a\cos\theta$.

Substituting these in, we get

$\ud{T}{\theta} \ge R\lambda(\mu_s a \sin\theta - a\cos\theta) - \mu_s T$

There's still the little problem that this is an inequality, not an equation. But I have a plan. First, let's solve the corresponding differential equation; it gives

$T = \frac{R\lambda a}{1 + \mu_s^2}\biggl((\mu_s^2 - 1)\sin\Delta\theta - 2\mu_s\cos\Delta\theta\biggr) + \biggl(T_0 + \frac{2R\lambda a\mu_s}{1 + \mu_s^2}\biggr)e^{-\mu_s\Delta\theta}$

or equivalently

$T = T_\infty\biggl(\cos\Delta\theta - \frac{\mu_s^2 - 1}{2\mu_s}\sin\Delta\theta\biggr) + \biggl(T_0 - T_\infty\biggr)e^{-\mu_s\Delta\theta}$

with $T_\infty = -\frac{2R\lambda a\mu_s}{1 + \mu_s^2}$. This function describes the case where, at every point along the rope, the tension changes at the minimum rate allowed by the differential inequality.
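As a quick check that this function really does solve the corresponding differential equation, we can compare a numerical derivative of $T$ against the right-hand side. A minimal sketch in Python, with arbitrary made-up values for $R$, $\lambda$, $a$, $\mu_s$, and $T_0$:

```python
import math

# Hypothetical values, chosen only to exercise the formulas
R, lam, a, mu_s, T0 = 0.06, 0.05, 10.0, 0.3, 5.0

T_inf = -2 * R * lam * a * mu_s / (1 + mu_s**2)

def T(theta):
    """Tension from the solution of the corresponding differential equation."""
    particular = T_inf * (math.cos(theta) - (mu_s**2 - 1) / (2 * mu_s) * math.sin(theta))
    return particular + (T0 - T_inf) * math.exp(-mu_s * theta)

def rhs(theta):
    """Right-hand side: R*lam*(mu_s*a*sin(theta) - a*cos(theta)) - mu_s*T."""
    return R * lam * (mu_s * a * math.sin(theta) - a * math.cos(theta)) - mu_s * T(theta)

h = 1e-6
for theta in (0.5, 2.0, 7.0):
    dT = (T(theta + h) - T(theta - h)) / (2 * h)   # central-difference derivative
    print(theta, dT - rhs(theta))  # each difference should be essentially zero
```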

Now, given that the starting value is fixed at $T_0$, there's no way the actual tension can dip below the value specified by this function without the rate of change $\ud{T}{\theta}$ being less than the minimum specified by the differential equation. This follows from the mean value theorem. Evidently the tension in the rope can be no less than the function we found from the differential equation; that is,

$T \ge T_\infty\biggl(\cos\Delta\theta - \frac{\mu_s^2 - 1}{2\mu_s}\sin\Delta\theta\biggr) + \biggl(T_0 - T_\infty\biggr)e^{-\mu_s\Delta\theta}$

This is the condition that must be satisfied for the rope not to slip. It corresponds to the shaded region in this plot:

Now that that's cleared up, let's look at the properties of this solution. Again, the asymptotic value (actually, an asymptotic amplitude of sorts) is negative, which means this function crosses zero somewhere. But now there are those pesky sine and cosine factors, and unfortunately they make it impossible to find a symbolic formula for the critical angle where the tension drops to zero. We can still calculate $\Delta\theta_\text{critical}$ numerically, we'll just need actual values for all the constants involved. As in the last section, the equation for the critical angle where $T = 0$ defines some relationship between $a$, $\mu_s$, $R$, $\lambda$, $\Delta\theta$, and $T_0$, which we can use as a "smart" stand-in for $\sum F = ma$.

Will the cylinder move?

Now the key question is, if you have a rope wrapped around a cylinder (like someone's leg) and you pull it, will the rope pull the person, or will it slip? It all comes down to whether the force of friction is strong enough to resist the other forces applied to the rope. Mathematically, that corresponds to whether the differential inequality in the last section is satisfied (at least up to the critical angle where the tension goes to zero). If the inequality is violated at any point along the rope, it'll slip, but otherwise the rope stays pressed against the cylinder and pulls it.

To figure out whether that happens, we'll need values of $a$, $\mu_s$, $R$, $\lambda$, $\Delta\theta$, and $T_0$. We plug them into the formula, then set $T = 0$ on the left side (which should be true at the end of the rope, in this simplified model) and see whether the inequality is satisfied. If it is, that puts us in the shaded region of the graph, where friction is strong enough to keep the rope from slipping. Otherwise, we're in the unshaded region, where friction is just not strong enough and the rope slips.

There's just one problem: what's the acceleration? The values of $\mu_s$, $R$, and $\lambda$ are readily available (they would be measured), and the experimenter can control the initial tension $T_0$, but we don't know $a$. Or do we? Well, look at this from an outside perspective. The initial tension exerted on the rope, $T_0$, has to pull the rope itself and the person. If you look at the rope and person as one physical system, a "black box" if you will,

you can ignore the effect of friction and just use Newton's second law, $T_0 = (m_\text{person} + m_\text{rope})a$, or equivalently

$a = \frac{T_0}{m_\text{person} + m_\text{rope}}$

which gives us

$T_\infty = -\frac{2R\lambda\mu_s T_0}{(1 + \mu_s^2)(m_\text{person} + m_\text{rope})}$

Numbers: do they work?

Even though this model is unrealistic, chiefly because it doesn't take into account the rotation of the cylinder due to the tension being off-axis, I still want to try plugging some numbers in. Here are my estimates:

• Radius of the leg: $R = \SI{6}{cm}$
• Density of the rope: $\lambda = \SI{50}{g/m}$
• Static friction coefficient: $\mu_s = 0.2$ (maybe a little low)
• Total mass of the rope: $m_\text{rope} = \SI{1}{kg}$
• Total mass of the person: $m_\text{person} = \SI{80}{kg}$
• External force exerted on the rope: $T_0 = \SI{800}{lbf.} \approx \SI{3500}{N}$, I figure it's roughly the weight of the crab pot, which was stated to be 800 pounds in the show — though if you ask me, they did not look that heavy. (It turns out not to matter anyway.)

Now yes, I could have used these values for the graph in the earlier section, but I didn't want to give the answer away! Here's what the plot actually looks like:

It's hard to tell there, so let's have a closer look:

The tension hits zero for the first time just past 8 turns, which means that with these numbers, it takes about 8 turns of rope around your leg to pull you in. Any less, and the rope will just slip off.

That's kind of a lot. But what if we up the coefficient of friction? $\mu_s = 0.4$ is still probably reasonable, maybe even low, for a rope (they're made somewhat rough, after all, so that they don't slip through your fingers). Doubling the friction coefficient gives this plot (already zoomed in):

Now it only takes four turns for the rope to grab you. I wouldn't form a conclusion on this myth based on the math alone, but compared to what Adam and Jamie found, it seems not too far off from reality!
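For what it's worth, the numbers read off the plots can be reproduced directly from the solution, by stepping along the tension curve until it first crosses zero. A rough sketch using the estimates listed above (the `turns_to_grab` helper and the fixed step size are my own scaffolding, not anything from the show):

```python
import math

# Estimates from the list above
R, lam = 0.06, 0.05                  # leg radius (m), rope density (kg/m)
m_person, m_rope = 80.0, 1.0         # masses (kg)
T0 = 3500.0                          # external force on the rope (N)
a = T0 / (m_person + m_rope)         # "black box" acceleration

def turns_to_grab(mu_s):
    """Wrap angle at which the tension first drops to zero, in full turns."""
    T_inf = -2 * R * lam * a * mu_s / (1 + mu_s**2)
    def T(th):
        return (T_inf * (math.cos(th) - (mu_s**2 - 1) / (2 * mu_s) * math.sin(th))
                + (T0 - T_inf) * math.exp(-mu_s * th))
    th = 0.0
    while T(th) > 0:                 # step along the rope until the tension crosses zero
        th += 1e-3
    return th / (2 * math.pi)

print(turns_to_grab(0.2))  # roughly 8 turns
print(turns_to_grab(0.4))  # roughly 4 turns
```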

2013
May
15

A quick technical test

shhh, the computers are thinking :-P

2013
May
07

Putting the JATO rocket car to rest

It's that time again: Mythbusters is back! And they sure know how to kick things off with a bang — or better yet, a prolonged burn!

For the 10th anniversary of the show, the Mythbusters revisited the very first myth they ever tested, the JATO rocket car. Wikipedia has the story in what appears to be its most common form:

The Arizona Highway Patrol came upon a pile of smoldering metal embedded into the side of a cliff rising above the road at the apex of a curve. The wreckage resembled the site of an airplane crash, but it was a car. The type of car was unidentifiable at the scene. The lab finally figured out what it was and what had happened.

It seems that a guy had somehow gotten hold of a JATO unit (Jet Assisted Take Off - actually a solid fuel rocket) that is used to give heavy military transport planes an extra 'push' for taking off from short airfields. He had driven his Chevy Impala out into the desert and found a long, straight stretch of road. Then he attached the JATO unit to his car, jumped in, got up some speed and fired off the JATO!

The facts, as best could be determined, are that the operator of the 1967 Impala hit JATO ignition at a distance of approximately 3.0 miles from the crash site. This was established by the prominent scorched and melted asphalt at that location. The JATO, if operating properly, would have reached maximum thrust within five seconds, causing the Chevy to reach speeds well in excess of 350 MPH, continuing at full power for an additional 20-25 seconds. The driver, soon to be pilot, most likely would have experienced G-forces usually reserved for dog-fighting F-14 jocks under full afterburners, basically causing him to become insignificant for the remainder of the event. However, the automobile remained on the straight highway for about 2.5 miles (15-20 seconds) before the driver applied and completely melted the brakes, blowing the tires and leaving thick rubber marks on the road surface, then becoming airborne for an additional 1.4 miles and impacting the cliff face at a height of 125 feet, leaving a blackened crater 3 feet deep in the rock.

Most of the driver's remains were not recoverable; however, small fragments of bone, teeth and hair were extracted from the crater, and fingernail and bone shards were removed from a piece of debris believed to be a portion of the steering wheel.

It's a fascinating story and all, but there's plenty of evidence to suggest that this didn't happen, and in fact that it can't happen as described. Not only does the Arizona Highway Patrol have no record of ever investigating a case like this, but it's been tested no less than three times on Mythbusters.

• The first time, on the pilot episode in 2003, Adam and Jamie found that a Chevy Impala with three hobby rockets on top, providing equivalent thrust to a JATO unit, wouldn't get anywhere close to $\SI{350}{mph}$, though it did exceed the $\SI{130}{mph}$ top speed of the chase helicopter.
• The second time, in 2007, they found that... rockets explode sometimes. Hey, it's Mythbusters, you can expect nothing less. :-P
• The third time was the 10th anniversary special that aired last week. With more power than the equivalent of one JATO unit, the car still didn't get much faster than about $\SI{200}{mph}$. In this iteration, the Mythbusters also tested the part of the myth in which the car supposedly took off and flew through the air — which also failed spectacularly (in every sense of the word), with their test car running off a ramp and nosediving into the desert.

There's a lot of juicy physics in this myth. But it breaks down into a couple of key parts: first, can a rocket-powered Impala even make it up to $\SI{350}{mph}$? And second, if it did, could it fly a mile and a half through the air? I'm going to handle the speed issue here, and save the fable of the flying car for a later post.

Before it becomes airborne, a JATO car is just an object subject to four forces: the thrust of the rocket and the engine force pointing forward, and air resistance and tire friction (a lesser influence) pointing backward. In the simplified model I'm going to use, three of these forces are constant — the thrust and engine force forward, and tire friction backward — and air resistance is the one velocity-dependent force. For convenience, I'll group all the constant forces under the name drive force, $F_\text{drive}$.

With the forces as described, a car is pretty similar to a falling object, which is also subject to a constant force in one direction and air resistance in the other. Like, say, an airplane pilot who fell out of his plane at $\SI{22000}{ft.}$, which I've already worked through the math for in an earlier blog post. Here's how that same math applies to the JATO car: first write Newton's second law $\sum F = ma$, including the drive force $F_\text{drive}$ and the air resistance $F_\text{drag}$,

$F_\text{drive} - F_\text{drag} = F_\text{drive} - \frac{1}{2}CA\rho v^2 = m\ud{v}{t}$

The car's terminal speed is the speed at which its acceleration, $\ud{v}{t}$, is zero. Plugging that in, we get

$v_T = \sqrt{\frac{2F_\text{drive}}{CA\rho}}$

Now we can rewrite Newton's second law like so:

$F_\text{drive}\biggl(1 - \frac{v^2}{v_T^2}\biggr) = m\ud{v}{t}$

This makes it easy to see that if an object is moving faster than its terminal speed at any point, that is $v > v_T$, it will tend to slow down (because $\ud{v}{t} < 0$), and if it's moving slower than its terminal speed, $v < v_T$, it will speed up. It'll only stay at a steady speed if $\ud{v}{t}\approx 0$, and that requires $v \approx v_T$, i.e. that the object is traveling roughly at its terminal speed.

The story from Wikipedia has the car barreling down the road at more than $\SI{350}{mph}$ for several seconds, which suggests that for a JATO Impala, $v_T \gtrsim \SI{350}{mph}$. Is that realistic? Well, we do have some of the information necessary to figure it out. As reported in the Mythbusters pilot, the thrust of a JATO is about $\SI{1000}{lbf.}$, or $\SI{4400}{N}$, and the density of air is $\rho = \SI{1.21}{kg/m^3}$. But we don't know the cross-sectional area $A$ and drag coefficient $C$ of the car.

Hmm. That could be a problem.

Fortunately, there's a way around that. Ever heard of a drag race? That's where you set a car (or two) at the beginning of a track, typically a quarter mile long, and just floor it to see how fast it makes it to the end. Besides being good for a movie or two... or six (come on, seriously?), the quarter-mile run is a pretty common way to test a car's performance, and the results of some of these drag tests are available online. For the '67 Impala, the site gives

1967 Chevrolet Impala SS427 (CL)
427ci/385hp, 3spd auto, 3.07, 0-60 - 8.4, 1/4 mile - 15.75 @ 86.5mph

This means the car's acceleration is sufficient to take it from rest to $\SI{60}{mph}$ in $\SI{8.4}{s}$, and that it completed the quarter mile in $\SI{15.75}{s}$, traveling $\SI{86.5}{mph}$ over the final 66 feet.

Let's now go back to the rewritten version of Newton's second law for a normal car,

$F_\text{drive}\biggl(1 - \frac{v^2}{v_T^2}\biggr) = m\ud{v}{t}$

You can solve this by rearranging and integrating it, but I'm lazy: I just plugged it into Mathematica. The solution for speed as a function of time is

$v(t) = v_T\tanh\biggl(\frac{F_\text{drive}}{m v_T}t + \atanh\frac{v_0}{v_T}\biggr)$

where $v_0$ is the initial speed. Now, a '67 Impala has a mass of $\SI{1969}{kg}$ (it varies from car to car, of course, but that's a representative value), and in a drag test, the initial velocity $v_0 = 0$. That leaves two variables still unknown: $F_\text{drive}$, the net forward force which moves the car (engine minus friction), and $v_T$, its terminal velocity. Luckily, we have two data points we can use to solve for them: the 0-60 benchmark $v(\SI{8.4}{s}) = \SI{60}{mph}$, and the quarter mile time $v(\SI{15.75}{s}) = \SI{86.5}{mph}$. Those two velocity-time coordinates should be enough to determine $F_\text{drive}$ and $v_T$. Probably the easiest way to do it is to make a contour plot showing the combinations of $v_T$ and $F_\text{drive}$ that satisfy each condition, like this:

The blue curve shows the points at which

$v_T\tanh\frac{F_\text{drive}(\SI{8.4}{s})}{(\SI{1969}{kg})v_T} = \SI{60}{mph}$

and the red curve shows points where

$v_T\tanh\frac{F_\text{drive}(\SI{15.75}{s})}{(\SI{1969}{kg})v_T} = \SI{86.5}{mph}$

Their intersection is the one set of parameters that satisfies both conditions, namely $v_T = \SI{100.8}{mph}$ and $F_\text{drive} = \SI{7250}{N} = \SI{1628}{lbf.}$. Bingo! Now we've got everything we need to calculate the behavior of the rocket car.
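If you'd rather not read the intersection off a contour plot, the same pair of conditions can be solved numerically. Here's a sketch using bisection (the helper names are mine; it relies on the fact that, once the 0-60 condition fixes $F_\text{drive}$ for each $v_T$, the predicted trap speed grows with $v_T$):

```python
import math

MPH = 0.44704                        # meters per second per mph
m = 1969.0                           # '67 Impala mass (kg)
t1, v1 = 8.4, 60 * MPH               # 0-60 benchmark
t2, v2 = 15.75, 86.5 * MPH           # quarter-mile time and trap speed

def drive_force(vT):
    """F_drive implied by the 0-60 condition for a given terminal speed."""
    return m * vT * math.atanh(v1 / vT) / t1

def mismatch(vT):
    """Predicted minus measured trap speed at the quarter-mile time."""
    return vT * math.tanh(drive_force(vT) * t2 / (m * vT)) - v2

# Bisect on the terminal speed; mismatch is negative just above 60 mph
# and positive for very large vT, so a root lies in between.
lo, hi = 61 * MPH, 300 * MPH
for _ in range(100):
    mid = (lo + hi) / 2
    if mismatch(lo) * mismatch(mid) <= 0:
        hi = mid
    else:
        lo = mid

vT = (lo + hi) / 2
print(vT / MPH, drive_force(vT))  # about 100.8 mph and 7250 N
```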

Well, wait a minute. We said — actually, Mythbusters said (and what I can find online seems to confirm) that a JATO provides a thousand pounds of thrust. But the value we found for $F_\text{drive}$ is even larger. Surely a standard car engine can't be more powerful than a rocket, can it?

I think we have to conclude that it is. Though this isn't quite what you'd call a standard car engine. The results of the drag test we used to calculate $F_\text{drive}$ were for the '67 Impala SS (Super Sport) 427, a special high-performance version of the car whose engine could crank out $\SI{385}{hp}$. A standard version of the car would have a less powerful engine, ranging down to $\SI{155}{hp}$, and thus could have as little as half the engine force — roughly, $F \propto P^{2/3}$, because $F \sim v^2$ and $P = Fv$, and $(155/385)^{2/3} \approx 0.5$.

Note that if you use $F_\text{drive}$ as obtained from the drag test to calculate the power at the car's top speed of $\SI{86.5}{mph}$, you get $\SI{376}{hp}$, which is quite close to the engine's reported horsepower. But I think that's just a coincidence. The same method applied to the car's eventual top speed of $\SI{100.8}{mph}$ gives $\SI{437}{hp}$, which is more power than the engine is even able to generate! This reflects the fact that the way we derived $v(t)$ makes a lot of simplifying assumptions. For example, it assumes that the air resistance is proportional to $v^2$. In reality, a car is a complicated shape that induces some amount of turbulence, making the drag force difficult to characterize. More importantly, we've assumed the drive force is constant, which is not at all true for a real car. The engine force changes as the car shifts gears and as parts warm up, and there are other assorted forces at work, like rolling friction.

As a check of sorts on how close this model comes, I'm going to integrate the formula again, to get a formula for distance traveled over time:

$x(t) = \frac{m v_T^2}{F_\text{drive}}\ln\cosh\frac{F_\text{drive} t}{m v_T}$

In theory, we should be able to plug in the values we found — $m = \SI{1969}{kg}$, $v_T = \SI{100.8}{mph}$, and $F_\text{drive} = \SI{7250}{N}$ — along with the quarter mile time, $t = \SI{15.75}{s}$, and the distance $\SI{0.25}{mile}$ will pop out. When I actually plug the values in, I get $\SI{0.229}{mile}$, which is within 10%, so not bad. That suggests that the different inaccuracies cancel out to some extent.

OK, so where does that leave us? We have a formula,

$v(t) = v_T\tanh\frac{F_\text{drive} t}{m v_T}$

which seems to describe the motion of a car under its own power in a drag test reasonably accurately. We also have best-fit values for the parameters of this formula: $v_T = \SI{100.8}{mph}$, with $F_\text{drive} = \SI{7250}{N}$ for the SS427 and $F_\text{drive} \approx \SI{3600}{N}$ for a standard Impala.
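The quarter-mile distance check takes only a couple of lines to reproduce:

```python
import math

MPH, MILE = 0.44704, 1609.344            # unit conversions to m/s and m
m, vT, F = 1969.0, 100.8 * MPH, 7250.0   # best-fit values from the drag test
t = 15.75                                # quarter-mile time (s)

# x(t) = (m vT^2 / F_drive) * ln cosh(F_drive t / (m vT))
x = (m * vT**2 / F) * math.log(math.cosh(F * t / (m * vT)))
print(x / MILE)  # about 0.229 mile, vs. the actual 0.25
```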
Time to ramp it up to the JATO rocket car! You may remember from earlier in this post that the terminal velocity is proportional to the square root of the net constant force $F_\text{drive}$ applied to move the car. If a terminal speed of $\SI{100.8}{mph}$ corresponds to a drive force of $\SI{7250}{N}$, then adding on a JATO's thrust of an additional $\SI{1000}{lbf.}$ would naively give a terminal speed of

$\SI{100.8}{mph}\times \sqrt{\frac{\SI{7250}{N} + \SI{1000}{lbf.}}{\SI{7250}{N}}} = \SI{128}{mph}$

Doing the same calculation for the $\SI{1500}{lbf.}$ hobby rockets the Mythbusters used gives

$\SI{100.8}{mph}\times \sqrt{\frac{\SI{7250}{N} + \SI{1500}{lbf.}}{\SI{7250}{N}}} = \SI{140}{mph}$

which is considerably less than the estimate of a top speed around $\SI{190}{mph}$ for the car in the pilot episode. Huh. OK, so what could be wrong with the model?

• Could the drag coefficient — actually the product $CA\rho$ — of the car (with attached rocket) be much smaller than we thought, making the car's terminal speed higher? It's hard to see how. After all, the terminal speed we calculated was for a stock Impala, a shape which is reasonably aerodynamic, but the Mythbusters went and put a rocket pack on it, breaking the airflow over the roof. If anything, it should go slower with the rockets on top.
• Could the car's engine be more powerful than we think? Again, probably not. I'm already using a value which corresponds to one of the most powerful engines available at the time — it'd still be a pretty strong engine in the modern market. To increase the car's terminal speed up to $\SI{190}{mph}$ by increasing engine force would require an engine almost three times as strong as what I'm using in the model.
• What about power losses in the drivetrain? Well, of course those make the car's terminal speed slower, not faster.

There are plenty of reasons to imagine that the terminal speed should be less than what we calculated, but not higher.
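The naive terminal-speed scaling above is easy to reproduce (the helper name is mine):

```python
import math

LBF = 4.44822                 # newtons per pound of force
vT0 = 100.8                   # stock SS427 terminal speed (mph)
F0 = 7250.0                   # stock SS427 drive force (N)

def boosted_terminal_speed(thrust_lbf):
    """Terminal speed scales as the square root of the total constant drive force."""
    return vT0 * math.sqrt((F0 + thrust_lbf * LBF) / F0)

print(boosted_terminal_speed(1000))  # about 128 mph with one JATO
print(boosted_terminal_speed(1500))  # about 140 mph with the pilot's hobby rockets
```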
I'm actually doubtful that whatever the error is has anything to do with the car itself. Here's why: suppose we take that equation for $v(t)$ from earlier and plot it, starting at $\SI{80}{mph}$, for several different configurations.

The lower edge of each band represents a car with a standard $\SI{155}{hp}$ motor, and the higher edge represents a car with an enhanced performance $\SI{385}{hp}$ motor. And the shaded areas represent the results we actually saw on Mythbusters (or the circumstances described in the myth, for the gray box). Notice that the orange band, representing the car with 5 rockets simultaneously firing as in the anniversary special, goes right through the pink box representing the speed we saw on that show. So the model works in that case!

On the other hand, the light green band, the one for the hobby rockets used in the pilot, only barely breaks the $\SI{130}{mph}$ speed of the chase helicopter (the dotted line) and doesn't ever get up to the speed estimate of $\SI{190}{mph}$. The only way I can come up with to reconcile this math with the results we saw on the show is that the rockets they used might have been more powerful than claimed. If you assume that the rockets give off a little more than twice as much thrust as was said on the show, i.e. around $\SI{3500}{lbf.}$, you get the dark green curve, which seems to come close to the observed results. If anyone knows a better explanation for that difference, I'd be interested to hear it, but for now I just have to conclude that something was off about those reported rocket numbers.

Of course, let's not forget the real point of the graph. None of these curves come anywhere close to the circumstances claimed in the myth! So despite the discrepancies in the details, the conclusion remains solidly the same: physics says this myth is absolutely busted.

2013
Apr
22

A close look at the new CISPA

Justice League of the Internet, unite!
So went the call from the Elders of the Internet to make a last stand against the long-feared reawakening of the... uh, legislative process. (No, there are no Elders of the Internet. I just couldn't resist linking to that clip.)

Internet privacy advocates are up in arms these days over the Cyber Intelligence Sharing and Protection Act, a bill which modifies the guidelines by which information, including personal and/or private information, may be shared among technology companies and the federal government. CISPA was first introduced last year as House Resolution 3523, and passed by the House of Representatives, but it stalled and died out in the Senate, perhaps partially in response to strong public opposition.

Now, CISPA is back, in the form of House Resolution 624. This was passed by the House last week, and is headed to the Senate for discussion. The text of the bill is quite similar to last year's version, so most of what I wrote about it last year is still applicable, but there are a few things I want to update in light of new information, plus some new provisions in the bill to look at. So what I'm going to do is repost more or less the same thing I wrote last year, with additions and updates to cover the new information.

Since this is a really long post, though, I'll jump straight to the punch line: I don't think CISPA is as bad as some people say it is.
Here is the short version of what I would like to see changed before the bill passes:

• Require judicial intervention to allow the sharing of personal information without the express approval of the subject, except when necessary to prevent an imminent threat
• Require that the subject of any shared personal information be notified immediately of what information was shared, except when necessary to protect national security
• Allow shared information to be subject to the Freedom of Information Act
• Allow the entity sharing information to specify which department(s) of the government it may be shared with (maybe this is already in there)
• Allow legal action to proceed against an entity believed to be sharing information improperly, even if the entity asserts it was acting in good faith
• Create a more flexible process for maintaining a list of classes of information that cannot be used by the government (and perhaps also by private entities)
• Give an independent or semi-independent watchdog the authority to implement (not just recommend) policies to protect privacy and civil liberties when personal information is shared

As usual, this post comes with the standard disclaimer that I am not a lawyer and this is not legal advice. I make no guarantees about the correctness of this information. If you're concerned about specific effects that CISPA could have on you personally, check with a lawyer.

Amendments to National Security Act

The main body of CISPA consists of an addition to title 50 of the United States Code, which deals with national security. The proposed addition starts out as follows:

Sharing information with private entities

Sec. 1104.
(a) Intelligence Community Sharing of Cyber Threat Intelligence With Private Sector and Utilities-

(1) IN GENERAL- The Director of National Intelligence shall establish procedures to allow elements of the intelligence community to share cyber threat intelligence with private-sector entities and utilities and to encourage the sharing of such intelligence.

This basically sums up a large part of what people consider to be the problem with CISPA. It allows the government, or more precisely the national intelligence community (FBI, CIA, NSA, and other such organizations), to share information they have collected with private-sector entities, like businesses. Now, I don't know exactly what information our intelligence agencies collect on U.S. residents, but it stands to reason that if they wanted it, they could have access to phone records and the content of phone calls, emails, personal information like your address history and phone number history, your employment history and credit history, all your financial information, most of your shopping preferences, large parts of your web browsing history, and assorted other information. Obviously, government agencies can get far more information on your life and habits than private businesses or random people can. If a channel is opened up by which businesses can get a share of that information, they'd have a field day — and who knows what kinds of nefarious tricks they could pull with it?

But let's hold on a minute. The capacity for information sharing that CISPA introduces comes with restrictions, which are spelled out by the next paragraph of the bill.
(2) SHARING AND USE OF CLASSIFIED INTELLIGENCE- The procedures established under paragraph (1) shall provide that classified cyber threat intelligence may only be--

(A) shared by an element of the intelligence community with--

(i) certified entities; or

(ii) a person with an appropriate security clearance to receive such cyber threat intelligence;

(B) shared consistent with the need to protect the national security of the United States; and

(C) used by a certified entity in a manner which protects such cyber threat intelligence from unauthorized disclosure; and

(D) used, retained, or further disclosed by a certified entity for cybersecurity purposes.

A "certified entity" is defined in subsection (g) of the bill as follows:

(1) CERTIFIED ENTITY- The term `certified entity' means a protected entity, self-protected entity, or cybersecurity provider that--

(A) possesses or is eligible to obtain a security clearance, as determined by the Director of National Intelligence; and

(B) is able to demonstrate to the Director of National Intelligence that such provider or such entity can appropriately protect classified cyber threat intelligence.

and in turn, "protected entity," "self-protected entity," and "cybersecurity provider," and the related term "cybersecurity purpose," are defined as

(4) CYBERSECURITY PROVIDER- The term `cybersecurity provider' means a non-governmental entity that provides goods or services intended to be used for cybersecurity purposes.

(5) CYBERSECURITY PURPOSE- The term `cybersecurity purpose' means the purpose of ensuring the integrity, confidentiality, or availability of, or safeguarding, a system or network, including protecting a system or network from--

(A) efforts to degrade, disrupt, or destroy such system or network; or

(B) theft or misappropriation of private or government information, intellectual property, or personally identifiable information.
(7) PROTECTED ENTITY- The term `protected entity' means an entity, other than an individual, that contracts with a cybersecurity provider for goods or services to be used for cybersecurity purposes.

(8) SELF-PROTECTED ENTITY- The term `self-protected entity' means an entity, other than an individual, that provides goods or services for cybersecurity purposes to itself.'

OK, so... if I'm getting this right, certified entities are basically businesses or organizations that either produce or use (or both) computer security technology, that either have or are eligible for a certain level of security clearance, and that confirm they are capable of protecting whatever information they receive from unauthorized use. However, simply being capable of obtaining a security clearance, and being capable of protecting information, is not saying much. That's where subparagraphs (C) and (D) come in; they actually require these certified entities to protect the information they're given, and not to use it for any purpose other than cybersecurity.

In essence, the apparent intent of the bill is to set up a framework to ensure that, once privileged information leaves the intelligence community, it doesn't go any further and isn't used for any purpose other than the one it was explicitly shared for. But the relevant definitions don't seem specific enough to make that happen. "Cybersecurity purposes" encompasses any activity intended to prevent theft or misuse of various types of information, as well as any sort of technological attack.

Consider this situation: let's say you wind up on some low-level CIA watchlist for a perfectly innocent reason, such as making multiple business trips to China over the course of a few months. Ordinarily, they would probably watch you for a little while longer, see nothing of interest, and file the whole matter away.
But under CISPA, the CIA could share their interest in you with your email provider, who could then start keeping a very close eye on your emails. And, of greater concern, anything suspicious-looking (even if it's actually innocent, these things can be misinterpreted) that your email provider finds, they can then share back with the CIA. Yes, CISPA doesn't require this information to be shared, but how well do you trust your email provider to stick up for your right to privacy?

Here's another issue: how much information is the intelligence community allowed to share, anyway? That is loosely addressed by subparagraph (B), which says that the government can only share information as necessary to protect national security. There are a couple of problems I have with this statement. First of all, it's really vague on what exactly is necessary to protect national security. I understand that intelligence services need flexible tools to deal with problems they haven't anticipated, and it would hinder their work to specify a complete list of circumstances under which information could be shared outside the government, but I really feel like some restrictions could be put in place here — for example, sharing information might only be allowed

1. when necessary to get access to additional information for which the private entity is the only source, or
2. when necessary to facilitate the cooperation of the private entity in an ongoing investigation; and
3. in the face of an imminent threat to national security such that the delay required to go through legal proceedings in a court (i.e. getting a warrant) could lead to property damage or loss of life.

It might be necessary to create some additional procedure by which a court could approve a request to share information with the private sector, since warrants are usually used to take things, not to give them out (as far as I know), but certainly that could be part of the bill as well.
Honestly, I'm not sure exactly what sorts of situations prompted this bill to be written, and so I'm not sure what sorts of restrictions would be appropriate. But if history is any indication, intelligence agencies will try pretty hard to pass all sorts of things off as being required in the name of national security, and the current wording gives them free rein to do just that. And as with any organization, there are almost certainly going to be a few people in the intelligence community who would abuse that power.

The other thing that bothers me about this is that there is not much accountability for what information gets shared and why it had to be shared. There is a provision in an earlier part of the bill (section 2, which I'll discuss in more detail later) that specifies that any sharing of information with the federal government under this act must be described in an annual report to Congress. But it says nothing explicit about information shared by the federal government, and it also leaves a lot of leeway for the details of the information shared to be kept in the dark.
(c) Reports on Information Sharing-

(1) INSPECTOR GENERAL OF THE DEPARTMENT OF HOMELAND SECURITY REPORT- The Inspector General of the Department of Homeland Security, in consultation with the Inspector General of the Department of Justice, the Inspector General of the Intelligence Community, the Inspector General of the Department of Defense, and the Privacy and Civil Liberties Oversight Board, shall annually submit to the appropriate congressional committees a report containing a review of the use of information shared with the Federal Government under subsection (b) of section 1104 of the National Security Act of 1947, as added by section 3(a) of this Act, including--

(A) a review of the use by the Federal Government of such information for a purpose other than a cybersecurity purpose;

(B) a review of the type of information shared with the Federal Government under such subsection;

(C) a review of the actions taken by the Federal Government based on such information;

(D) appropriate metrics to determine the impact of the sharing of such information with the Federal Government on privacy and civil liberties, if any;

(E) a list of the departments or agencies receiving such information;

(F) a review of the sharing of such information within the Federal Government to identify inappropriate [stovepiping](http://en.wikipedia.org/wiki/Stovepiping) of shared information; and

(G) any recommendations of the Inspector General for improvements or modifications to the authorities under such section.

...

(3) FORM- Each report required under paragraph (1) or (2) shall be submitted in unclassified form, but may include a classified annex.

A new subsection has been added since last year's version of CISPA, describing a report to be prepared and submitted by the privacy officers of the intelligence community on the privacy implications of the government's actions under CISPA.
(2) PRIVACY AND CIVIL LIBERTIES OFFICERS REPORT- The Officer for Civil Rights and Civil Liberties of the Department of Homeland Security, in consultation with the Privacy and Civil Liberties Oversight Board, the Inspector General of the Intelligence Community, and the senior privacy and civil liberties officer of each department or agency of the Federal Government that receives cyber threat information shared with the Federal Government under such subsection (b), shall annually and jointly submit to Congress a report assessing the privacy and civil liberties impact of the activities conducted by the Federal Government under such section 1104. Such report shall include any recommendations the Civil Liberties Protection Officer and Chief Privacy and Civil Liberties Officer consider appropriate to minimize or mitigate the privacy and civil liberties impact of the sharing of cyber threat information under such section 1104.

This does partially address the concerns I had about last year's version of this section, in that there is some sort of oversight over the information sharing process. But it seems rather weakly defined. All that this subsection allows is the submission of a report and recommendations, which could very well be ignored. I'd much rather have some assurance built into the bill that the recommendations will actually be followed, when doing so doesn't directly hinder national security (and even then, there should be a requirement for an explanation).

Sharing information with the government

Whew. OK. Let's move on to the next part of the bill, subsections (b) and (c), which deal with the reverse process, namely when private-sector entities share information with federal intelligence services.
(b) Use of Cybersecurity Systems and Sharing of Cyber Threat Information-

(1) IN GENERAL-

(A) CYBERSECURITY PROVIDERS- Notwithstanding any other provision of law, a cybersecurity provider, with the express consent of a protected entity for which such cybersecurity provider is providing goods or services for cybersecurity purposes, may, for cybersecurity purposes--

(i) use cybersecurity systems to identify and obtain cyber threat information to protect the rights and property of such protected entity; and

(ii) share such cyber threat information with any other entity designated by such protected entity, including, if specifically designated, the entities of the Department of Homeland Security and the Department of Justice designated under paragraphs (1) and (2) of section 2(b) of the Cyber Intelligence Sharing and Protection Act.

(B) SELF-PROTECTED ENTITIES- Notwithstanding any other provision of law, a self-protected entity may, for cybersecurity purposes--

(i) use cybersecurity systems to identify and obtain cyber threat information to protect the rights and property of such self-protected entity; and

(ii) share such cyber threat information with any other entity, including the entities of the Department of Homeland Security and the Department of Justice designated under paragraphs (1) and (2) of section 2(b) of the Cyber Intelligence Sharing and Protection Act.

This part seems straightforward enough; it's basically saying that a technology security company can share with the government (or anyone else) information about threats to its systems or its clients' resources, with the explicit permission of the client, when doing so is necessary for the company to do its job of protecting the client.
(2) USE AND PROTECTION OF INFORMATION- Cyber threat information shared in accordance with paragraph (1)--

(A) shall only be shared in accordance with any restrictions placed on the sharing of such information by the protected entity or self-protected entity authorizing such sharing, including appropriate anonymization or minimization of such information and excluding limiting a department or agency of the Federal Government from sharing such information with another department or agency of the Federal Government in accordance with this section;

(B) may not be used by an entity to gain an unfair competitive advantage to the detriment of the protected entity or the self-protected entity authorizing the sharing of information; and

(C) may only be used by a non-Federal recipient of such information for a cybersecurity purpose;

This part, somewhat expanded since last year's CISPA, specifies conditions on when and how that information can be shared: basically that it has to be done in accordance with the company's own privacy policy, and that it can't be used for inappropriate purposes (though I doubt that "unfair competitive advantage" covers all the inappropriate purposes one could come up with).

(D) if shared with the Federal Government--

(i) shall be exempt from disclosure under section 552 of title 5, United States Code (commonly known as the "Freedom of Information Act");

(ii) shall be considered proprietary information and shall not be disclosed to an entity outside of the Federal Government except as authorized by the entity sharing such information;

This says that information shared with the government is exempt from Freedom of Information Act requests. Now, I used to think this was a good thing, because the type of information shared will often be personally identifying information like names, addresses, phone numbers, email addresses, perhaps credit card numbers, accounts with various services, and so on.
But the Freedom of Information Act, 5 USC § 552, already includes provisions to omit personally identifying information:

To the extent required to prevent a clearly unwarranted invasion of personal privacy, an agency may delete identifying details when it makes available or publishes an opinion, statement of policy, interpretation, staff manual, instruction, or copies of records referred to in subparagraph (D). However, in each case the justification for the deletion shall be explained fully in writing, and the extent of such deletion shall be indicated on the portion of the record which is made available or published, unless including that indication would harm an interest protected by the exemption in subsection (b) under which the deletion is made. If technically feasible, the extent of the deletion shall be indicated at the place in the record where the deletion was made.

With that in place, there seems to be little justification for exempting this information from FOIA entirely. It seems only fair that, if your information is being shared between private entities and the government, you should be able to know what is being shared, when there isn't a pressing need to keep it secret. I would like to see the FOIA exemption removed from the bill. But anyway, back to CISPA:

(iii) shall not be used by the Federal Government for regulatory purposes; and

OK, so information shared under CISPA can't be used to create or enforce regulations. That's good, I guess. I'm not sure exactly how this would be relevant.
(iv) shall not be provided to another department or agency of the Federal Government under paragraph (2)(A) if--

(I) the entity providing such information determines that the provision of such information will undermine the purpose for which such information is shared; or

(II) unless otherwise directed by the President, the head of the department or agency of the Federal Government receiving such cyber threat information determines that the provision of such information will undermine the purpose for which such information is shared; and

(v) shall be handled by the Federal Government consistent with the need to protect sources and methods and the national security of the United States; and

Honestly, I can't quite tell what is being said here. Paragraph (2)(A) (scroll up a bit) seems to be saying that different agencies of the federal government can share information among each other, but this says that the entity providing that information, or the head of the government agency receiving it, can block that intragovernmental sharing by saying that it would undermine the purpose for which the information is shared. Again, I really can't imagine what sort of situation this would be relevant in. But I think it would just be better to limit the sharing of information between governmental agencies. Let the company sharing the information specify where it's going, and that's it.

(3) EXEMPTION FROM LIABILITY-

(A) EXEMPTION- No civil or criminal cause of action shall lie or be maintained in Federal or State court against a protected entity, self-protected entity, cybersecurity provider, or an officer, employee, or agent of a protected entity, self-protected entity, or cybersecurity provider, acting in good faith--

(i) for using cybersecurity systems or sharing information in accordance with this section; or

(ii) for not acting on information obtained or shared in accordance with this section.
(B) LACK OF GOOD FAITH- For purposes of the exemption from liability under subparagraph (A), a lack of good faith includes any act or omission taken with intent to injure, defraud, or otherwise endanger any individual, government entity, private entity, or utility.

This paragraph is an interesting inclusion in part because of (3)(A)(ii), which provides immunity from prosecution for declining to act on any of this cybersecurity information. I like this clause because it means that, if you're ever not sure about the legal status of some information shared pursuant to this act, the safe "default" course of action is to just leave it alone, and that way there will be no legal consequences. This is much better than the alternative of providing immunity from prosecution for people who went ahead and used the information under the belief that they were doing so legally, but who actually weren't.

However, the condition of "acting in good faith" is kind of worrying because, as subparagraph (3)(B) says, it's based on intent, and it's very difficult to prove intent in court. This means that even if you think a company is illegally sharing your personal information with the government, all they have to do is claim that they are acting in good faith, and any legal action you take against them will be dismissed. That just goes too far. If you suspect a company of improper information sharing, there really should be some sort of process by which you can satisfy yourself that they're not doing it, and a proper court proceeding (one that at least goes far enough for the shared information to be revealed) should be one such method.
(5) RULE OF CONSTRUCTION- Nothing in this subsection shall be construed to provide new authority to--

(A) a cybersecurity provider to use a cybersecurity system to identify or obtain cyber threat information from a system or network other than a system or network owned or operated by a protected entity for which such cybersecurity provider is providing goods or services for cybersecurity purposes; or

(B) a self-protected entity to use a cybersecurity system to identify or obtain cyber threat information from a system or network other than a system or network owned or operated by such self-protected entity.

There's actually one more thing I don't get about this entire subsection: why is it even necessary? After all, most companies already have privacy policies, and most of those already say that they may share information with the government in accordance with a court order or when necessary to protect their business, in some cases even without explicit approval by the client. Now, granted, this is coming from the perspective of an individual, and subsection (b) does not apply to individuals (it talks about "protected entities," which are organizations, not people). But I would imagine that businesses have similar agreements in place when they deal with each other. So everything that this piece of CISPA allows was already perfectly legal? Maybe it just needed to be made explicit, but I just don't see the point.

Let's continue on to subsection (c), which governs how the federal government (in particular, the intelligence community) may use any information it receives from private-sector entities.
(c) Federal Government Use of Information-

(1) LIMITATION- The Federal Government may use cyber threat information shared with the Federal Government in accordance with subsection (b)--

(A) for cybersecurity purposes;

(B) for the investigation and prosecution of cybersecurity crimes;

(C) for the protection of individuals from the danger of death or serious bodily harm and the investigation and prosecution of crimes involving such danger of death or serious bodily harm; or

(D) for the protection of minors from child pornography, any risk of sexual exploitation, and serious threats to the physical safety of minors, including kidnapping and trafficking and the investigation and prosecution of crimes involving child pornography, any risk of sexual exploitation, and serious threats to the physical safety of minors, including kidnapping and trafficking, and any crime referred to in section 2258A(a)(2) of title 18, United States Code.

(2) AFFIRMATIVE SEARCH RESTRICTION- The Federal Government may not affirmatively search cyber threat information shared with the Federal Government under subsection (b) for a purpose other than a purpose referred to in paragraph (1)(B).

...

(6) RETENTION AND USE OF CYBER THREAT INFORMATION- No department or agency of the Federal Government shall retain or use information shared pursuant to subsection (b)(1) for any use other than a use permitted under subsection (c)(1).

This piece, considerably reworked from the original bill, allows the government to use information collected under CISPA for cybersecurity purposes and for certain kinds of serious crime prevention, which seem like acceptable additions, but not to search through it to find evidence of other crimes. This is probably better than last year's CISPA was when I wrote about it, but it still suffers from the same vagueness in the definition of "cybersecurity purposes" that I brought up earlier.
(3) ANTI-TASKING RESTRICTION- Nothing in this section shall be construed to permit the Federal Government to--

(A) require a private-sector entity to share information with the Federal Government; or

(B) condition the sharing of cyber threat intelligence with a private-sector entity on the provision of cyber threat information to the Federal Government.

This bit says that the bill does not give the government the authority to demand information from a private company, at least not in any way that isn't already permitted by existing laws (namely, with a search warrant). It's definitely a good thing to make clear that intelligence agencies are still not allowed to bypass the judicial process; CISPA does not enable warrantless wiretapping and the like. A lot of people are getting this point wrong.

(4) PROTECTION OF SENSITIVE PERSONAL DOCUMENTS- The Federal Government may not use the following information, containing information that identifies a person, shared with the Federal Government in accordance with subsection (b):

(A) Library circulation records.
(B) Library patron lists.
(C) Book sales records.
(D) Book customer lists.
(E) Firearms sales records.
(F) Tax return records.
(G) Educational records.
(H) Medical records.

This identifies selected pieces of information that the government can't use, even if it is shared. The point of this is presumably that, if it gets back to the government that you checked out a couple of books on nuclear engineering, for example, that shouldn't mark you as a terrorist. Good for that, I guess, but I have to wonder where this list came from. I think there should be a more flexible process for marking certain classes of information as protected from government use, probably resulting in a slightly longer list.
At this point I want to point out one section that existed in the earlier version of CISPA but was removed:

(7) PROTECTION OF INDIVIDUAL INFORMATION- The Federal Government may, consistent with the need to protect Federal systems and critical information infrastructure from cybersecurity threats and to mitigate such threats, undertake reasonable efforts to limit the impact on privacy and civil liberties of the sharing of cyber threat information with the Federal Government pursuant to this subsection.

I think this is meant to be replaced by the section on privacy and civil liberties in section 2 of the act (discussed below under "Federal Government Coordination"), so maybe it isn't a big deal that it was removed, but it's something to be aware of.

One final piece of the USC amendment (that I'm going to talk about) that is present in the new CISPA:

(d) Federal Government Liability for Violations of Restrictions on the Disclosure, Use, and Protection of Voluntarily Shared Information-

(1) IN GENERAL- If a department or agency of the Federal Government intentionally or willfully violates subsection (b)(3)(D) or subsection (c) with respect to the disclosure, use, or protection of voluntarily shared cyber threat information shared under this section, the United States shall be liable to a person adversely affected by such violation in an amount equal to the sum of--

(A) the actual damages sustained by the person as a result of the violation or $1,000, whichever is greater; and

(B) the costs of the action together with reasonable attorney fees as determined by the court.

This provides for penalties that the government must bear if it violates the restrictions on how shared information can be used or reshared. In principle, this is a pretty useful section of the bill, but it's hampered by two issues:

• The restrictions that the bill does include still allow for a wide variety of uses of shared information, and it's not even clear in many cases which uses are allowed and which ones aren't
• More importantly, there's practically no way for a "person adversely affected by such violation" to find out about it! Remember, information shared under CISPA is exempt from FOIA requests, and there's no requirement to notify the subjects of the shared information.

Federal Government Coordination

This year's version of CISPA includes an entirely new section describing how information is to be shared within different branches of the federal government. I just want to go over one section here, the one relating to privacy and civil liberties:

(b) Coordinated Information Sharing-

(5) PRIVACY AND CIVIL LIBERTIES-

(A) POLICIES AND PROCEDURES- The Secretary of Homeland Security, the Attorney General, the Director of National Intelligence, and the Secretary of Defense shall jointly establish and periodically review policies and procedures governing the receipt, retention, use, and disclosure of non-publicly available cyber threat information shared with the Federal Government in accordance with section 1104(b) of the National Security Act of 1947, as added by section 3(a) of this Act. Such policies and procedures shall, consistent with the need to protect systems and networks from cyber threats and mitigate cyber threats in a timely manner--

(i) minimize the impact on privacy and civil liberties;

(ii) reasonably limit the receipt, retention, use, and disclosure of cyber threat information associated with specific persons that is not necessary to protect systems or networks from cyber threats or mitigate cyber threats in a timely manner;

(iii) include requirements to safeguard non-publicly available cyber threat information that may be used to identify specific persons from unauthorized access or acquisition;

(iv) protect the confidentiality of cyber threat information associated with specific persons to the greatest extent practicable; and

(v) not delay or impede the flow of cyber threat information necessary to defend against or mitigate a cyber threat.

(B) SUBMISSION TO CONGRESS- The Secretary of Homeland Security, the Attorney General, the Director of National Intelligence, and the Secretary of Defense shall, consistent with the need to protect sources and methods, jointly submit to Congress the policies and procedures required under subparagraph (A) and any updates to such policies and procedures.

(C) IMPLEMENTATION- The head of each department or agency of the Federal Government receiving cyber threat information shared with the Federal Government under such section 1104(b) shall--

(i) implement the policies and procedures established under subparagraph (A); and

(ii) promptly notify the Secretary of Homeland Security, the Attorney General, the Director of National Intelligence, the Secretary of Defense, and the appropriate congressional committees of any significant violations of such policies and procedures.

(D) OVERSIGHT- The Secretary of Homeland Security, the Attorney General, the Director of National Intelligence, and the Secretary of Defense shall jointly establish a program to monitor and oversee compliance with the policies and procedures established under subparagraph (A).

Pretty wordy, but the gist is that high-level officials in the Department of Homeland Security, Department of Justice, and Department of Defense are tasked with limiting privacy violations and infringements of civil liberties as much as possible. It's definitely a good thing that the law includes some provision for this, but I wonder if it couldn't be a little more specific about what these policies should entail. Besides, the heads of the DHS, DOJ, and DOD are not exactly the people I want watching their own organizations for privacy violations. That's understandable, since their job is to prevent crime and maintain security rather than to safeguard privacy, but I'd like to see more of this responsibility delegated to an independent or semi-independent watchdog, like the Inspectors General.

Conclusion

Bottom line, I think this bill is somewhat improved over last year's version of CISPA, and I definitely think it's not as bad as some hardcore privacy activists would have you believe (or maybe they're just people dead set against anything bearing the name CISPA). Honestly, I can't really get that worked up about the bill in its current form. Sure, there are some changes I'd like to see in it — to repeat myself from the introduction:

• Require judicial intervention to allow the sharing of personal information without the express approval of the subject, except when necessary to prevent an imminent threat
• Require that the subject of any shared personal information be notified immediately of what information was shared, except when necessary to protect national security
• Allow shared information to be subject to the Freedom of Information Act
• Allow the entity sharing information to specify which department(s) of the government it may be shared with (maybe this is already in there)
• Allow legal action to proceed against an entity believed to be sharing information improperly, even if the entity asserts they were acting in good faith
• Create a more flexible process for maintaining a list of classes of information that cannot be used by the government (and perhaps also by private entities)
• Give an independent or semi-independent watchdog the authority to implement (not just recommend) policies to protect privacy and civil liberties when personal information is shared

I wouldn't go so far as to say I support the bill as is, but it's not the kind of egregious violation of civil liberties that, say, PIPA was. This bill really just doesn't seem to do a whole lot.

If you'd like to weigh in on the legislative process, contact your Senator to voice your opinion! With CISPA having just passed the House, Senators will be particularly receptive to feedback on the bill in the upcoming days and weeks. I'd also suggest reading the bill itself, of course (in its entirety, not just the sections I've quoted here), and other resources, such as the EFF CISPA FAQ and any number of threads on Reddit, if you're into that sort of thing. Just be wary — there's a lot of misinformation out there, so use your judgment!

2013
Apr
09

Last week, I wrote about the announcement of the first results from the Alpha Magnetic Spectrometer: a measurement of the positron fraction in cosmic rays. Although AMS-02 wasn't the first to make this measurement, it was nevertheless a fairly exciting announcement because it confirms a drastic deviation from the theoretical prediction based on known astrophysical sources.

Unfortunately, most of what you can read about it is pretty light on details. News articles and blog posts alike tend to go (1) Here's what AMS measured, (2) DARK MATTER!!!1!1!! All the attention has been focused on the experimental results and the vague possibility that it could have come from dark matter, but there's precious little real discussion of the underlying theories. What's a poor theoretical physics enthusiast to do?

Well, we're in luck, because on Friday I attended a very detailed presentation on the AMS results by Stephane Coutu, author of the APS Viewpoint about the announcement. He was kind enough to point me to some references on the topic, and even to share his plots comparing the theoretical models to AMS (and other) data, several of which appear below. I never would have been able to put this together without his help, so thanks Stephane!

Time to talk positrons.

The Cosmic Background

When people talk about "known astrophysical sources" of positrons, they're mostly talking about cosmic rays. Not primary cosmic rays, though, which are the particles that come directly from pulsars, accretion discs, or whatever other sources are out there. Primary cosmic rays are generally protons or atomic nuclei. As they travel through space, they collide with particles in the interstellar medium and produce other particles, secondary cosmic rays, through processes like this:

\begin{align}\prbr + \text{particle} &\to \pipm + X \\ \pipm &\to \ualp\unu \\ \ualp &\to \ealp\enu\uanu\end{align}

Positrons in the energy range AMS can detect, below $\SI{1}{TeV}$ or so, mostly come from galactic primary cosmic rays (protons). We can determine the production spectrum of these cosmic ray protons (how quickly they are produced at various energies) using astronomical measurements like the ratio of boron to carbon nuclei and the detected flux of electrons — but that's a whole other project that I won't get into here.

Once the proton spectrum is set, we can combine it with the density of the interstellar medium to determine how often reactions like the one above will occur, again as a function of energy. That gives us a spectrum for positron production. But to actually match this model to what we detect in Earth orbit, we need to account for various energy loss mechanisms that affect cosmic rays as they travel. Both primary (protons) and secondary (positrons) cosmic rays lose energy to processes like synchrotron radiation (energy losses as charged particles change direction in a magnetic field), bremsstrahlung (energy losses from charged particles slowing down in other particles' electric field), and inverse Compton scattering (charged particles "bouncing" off photons). These dissipative mechanisms tend to reduce the positron spectrum at high energies.

Doing all this accurately involves accounting for the distribution of matter in the galactic disk, and accordingly it takes a rather sophisticated computer program to get it right. The "industry standard" is a program called GALPROP, which breaks down the galaxy and its halo (a slightly larger region surrounding the disk, which contains globular clusters and dark matter) into small regions, tracks the spectra of various kinds of particles in each region, and models how the spectra change over time as cosmic rays move from one region to another. There are various models with different levels of detail, most of which are described in this paper and improved in e.g. this one and this one:

• The class of theories known as leaky box models (or homogeneous models) assume that cosmic rays are partially confined within the galaxy — a few leak out into intergalactic space, but mostly they stay within the galactic disk and halo. Both the distribution of where secondary cosmic rays are produced and the interstellar medium they travel through are effectively uniform. Accordingly, the times (or distances) they travel before running into something follow an exponential distribution with an energy-dependent average value $\expect{t}$ (or $\lambda_e = \rho v\expect{t}$).
• The diffusive halo model assumes that the galaxy consists of two regions, a disk and a halo. Within these two regions, cosmic rays diffuse outward from their sources, and those that reach the edge of the halo escape from the galaxy, never to return. The diffusion coefficient is taken to be twice as large in the disk as in the halo due to the increased density of matter.
• The dynamical halo model is exactly like the diffusive halo model with the addition of a "galactic wind" that pushes all cosmic rays in the halo outward at some fixed velocity $V$.
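To put a number on the leaky box picture, here's a quick Python sketch of the exponential path-length distribution described above. The mean grammage $\lambda_e$ below is a made-up illustrative value, not one inferred from boron-to-carbon data; the point is just that the fraction of cosmic rays surviving past a grammage $x$ falls off as $e^{-x/\lambda_e}$.

```python
import numpy as np

# Illustrative leaky-box parameter (made up for this sketch, not a fit value):
lambda_e = 7.0  # mean grammage traversed before escape/interaction, g/cm^2

rng = np.random.default_rng(0)
paths = rng.exponential(lambda_e, size=1_000_000)  # sampled path lengths

# The sample mean should recover lambda_e, and the survival fraction
# beyond a grammage x should match exp(-x / lambda_e).
x = 10.0
survival_mc = np.mean(paths > x)
survival_th = np.exp(-x / lambda_e)
print(survival_mc, survival_th)  # agree to ~1e-3
```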

There are others, less commonly used, but all these models have one significant thing in common: they give a positron fraction that decreases with increasing energy. And the first really precise measurements of cosmic ray positrons, performed by the HEAT and CAPRICE experiments, confirmed that conclusion, as shown in this plot.

But new data from PAMELA, Fermi-LAT, and now AMS-02 show something entirely different! Above $\SI{10}{GeV}$, the positron fraction actually increases with energy, showing that something must be producing additional positrons at those higher energies.

The spectrum of the positron fraction excess, i.e. the difference between secondary emission predictions and the data, suggests that this unknown source produces roughly equal numbers of positrons and electrons at the energies AMS has been able to measure, with a power-law spectrum for each:

$\phi_{\mathrm{e}^\pm} \propto E^{\gamma_s},\quad E \lesssim \SI{300}{GeV}$

As an example model, the AMS-02 paper postulated

\begin{align}\Phi_{\ealp} &= C_{\ealp}E^{-\gamma_{\ealp}} + C_s E^{-\gamma_s} e^{-E/E_s} \\ \Phi_{\elp} &= C_{\elp}E^{-\gamma_{\elp}} + C_s E^{-\gamma_s} e^{-E/E_s}\end{align}

with $E_s = \SI{760}{GeV}$ based on a fit to their data. But regardless of whether this specific formula works, the point is that secondary emission tends to produce more positrons than electrons (because most primary cosmic rays are protons, which generally decay into positrons due to charge conservation). That doesn't fit the profile. This unexplained excess is probably something else.
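To see how a common source term makes the fraction rise, here's a minimal Python sketch of the two-component model above. All the normalizations and spectral indices are illustrative guesses of mine, not the AMS-02 fit values; only the cutoff $E_s = \SI{760}{GeV}$ comes from the paper.

```python
import numpy as np

E = np.array([1.0, 10.0, 100.0, 300.0])  # energies in GeV

# Illustrative parameters (NOT the AMS-02 fit values, except E_s):
C_pos, gamma_pos = 1.0, 3.6   # "diffuse" positron term
C_ele, gamma_ele = 10.0, 3.2  # "diffuse" electron term
C_s, gamma_s = 0.8, 2.5       # common source term, shared by e+ and e-
E_s = 760.0                   # cutoff energy from the AMS-02 fit, GeV

source = C_s * E**-gamma_s * np.exp(-E / E_s)
phi_pos = C_pos * E**-gamma_pos + source
phi_ele = C_ele * E**-gamma_ele + source

frac = phi_pos / (phi_pos + phi_ele)
print(frac)  # rises with energy as the common source term takes over
```

Because the source term is identical in both fluxes, it pushes the positron fraction toward 1/2 at high energies, which is exactly the qualitative behavior in the data.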

Neutralinos

Naturally, physicists are going to be most excited if the positron excess turns out to come from some previously unknown particle. The most likely candidate is the neutralino, denoted $\tilde{\chi}^0$, a type of particle predicted by most supersymmetric theories. Neutralinos are mixtures of the superpartners of the photon, the Z boson, and the neutral Higgs boson(s).

According to the theories, reactions involving supersymmetric particles tend to produce other supersymmetric particles. The neutralino, as the lightest of these particles, sits at the end of the supersymmetric decay chain, which makes it a good candidate to constitute the mysterious dark matter. But occasionally, neutralinos will annihilate with each other to produce normal particles like positrons and electrons. If dark matter is actually made of large clouds of neutralinos, it's natural to wonder whether the positrons produced from their annihilation could make up the difference between the prediction from secondary cosmic rays and the AMS observations.

Here's how the calculation goes. Using the mass of dark matter we know to exist from galaxy rotation curves and gravitational lensing, and assuming some particular mass $m_{\chi}$ for the neutralino, we can calculate how many neutralinos are in our galaxy's dark matter halo. Multiplying that by the annihilation rate predicted by the supersymmetric theory gives the rate of positron production from neutralino annihilation. That rate gets plugged into cosmic ray propagation models like those described in the last section, leading to predictions for the positron flux measured on Earth.
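A back-of-the-envelope version of the first step, in Python. Both input masses are rough assumptions of mine (a halo containing around $10^{12}$ solar masses of dark matter, and a $\SI{500}{GeV}$ neutralino), so treat the result as an order of magnitude only.

```python
# Order-of-magnitude count of neutralinos in the galactic dark matter halo.
# Both input masses are rough assumptions for illustration only.
M_SUN = 1.989e30          # solar mass, kg
GEV_PER_C2 = 1.783e-27    # 1 GeV/c^2 in kg

M_halo_dm = 1e12 * M_SUN     # assumed dark matter mass of the halo, kg
m_chi = 500 * GEV_PER_C2     # assumed neutralino mass (500 GeV/c^2), kg

N_chi = M_halo_dm / m_chi
print(f"{N_chi:.1e}")  # ~2e66 particles
```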

Several teams have run through the calculations and found that... well, it kind of works, but only if you fudge the numbers a bit. Neutralino annihilation predicts a roughly power-law contribution to the positron fraction up to the mass of the neutralino; that is,

$\phi_{\tilde{\chi}^0\to \mathrm{e}^{\pm}} \sim \begin{cases}C E^{\gamma_\chi},& E \lesssim m_\chi c^2 \\ \text{small},& E \gtrsim m_\chi c^2\end{cases}$

As long as $m_\chi \gtrsim \SI{500}{GeV}$ or so, this is exactly the kind of spectrum needed to explain the discrepancy between the PAMELA/Fermi/AMS results and the secondary emission spectrum. The problem lies in the overall constant $C$, which you would calculate from the dark matter density and the theoretical annihilation rate. It's orders of magnitude too small. So the papers multiply this by an additional "boost" factor, $B$, and examine how large $B$ needs to be to match the experimental results. Depending on the model, $B$ ranges from about 30 (Baltz et al., $m_\chi = \SI{160}{GeV}$) to over 7000 (Cholis et al., $m_\chi = \SI{4000}{GeV}$).

Alternatively, you can assume that something is wrong with the propagation models, and that positrons lose more energy than expected on their way through the interstellar medium. This is the approach taken in this paper, which finds that increasing the energy loss rate by a factor of 5 can kind of match the positron fraction data.

But that much of an adjustment to the energy loss leads to conflicts with other measurements. It winds up being an even more unrealistic model.

Even if the parameters of some supersymmetric theory can be tweaked to match the data without a boost factor, there's one more problem: neutralino annihilation produces antiprotons and photons too. If the positron excess is caused by neutralinos, there should be corresponding excesses of antiprotons and gamma rays, but we don't see those. It's going to be quite tricky to tune a dark matter model so that it gives us the needed flux of positrons without overshooting the measurements of other particles. There is only a small range of masses and interaction strengths that would be consistent with all the measurements. So as much as dark matter looks like an interesting direction for future research, it's not a realistic model for the positron excess just yet.

Astrophysical sources

With the dark matter explanation looking only moderately plausible at best, let's turn to other (less exotic) astrophysical sources. There's a fair amount of uncertainty about just how many cosmic rays are produced even by known sources. They could be emitting enough electrons and positrons to make up the difference between the new data and the theoretical predictions.

Pulsars in particular, in addition to being sources of primary cosmic rays (protons), are often surrounded by nebulae that emit electrons and positrons from their outer regions. The pulsar's wind of charged particles interacts with the nebula to accelerate light particles to high energies, giving these systems the name of pulsar wind nebulae (PWNs). Simply by virtue of being a PWN, such a system is expected to emit a certain "baseline" positron and electron flux, which is included in secondary emission models, but the pulsar could have been much more active in the past, emitting a lot more positrons and electrons. These would have become "trapped" in the surrounding nebula and continued to leak out over time, which means we would be seeing more positrons and electrons than we'd expect based on the pulsar's current activity. There are a few nearby PWNs which seem like excellent candidates for this effect, going by the (rather snazzy, if you ask me) names of Geminga and Monogem. A number of papers (Yüksel et al., and recently Linden and Profumo) have crunched the numbers on these pulsars, and they find that the positron/electron flux from enhanced pulsar activity can match up quite well with the positron fraction excess detected by PAMELA, Fermi-LAT, and AMS-02.

The "smoking gun" that would definitely (well, almost definitely) identify a pulsar as the source of the excess would be an anisotropy in the flux: we'd see more positrons coming from the direction of the pulsar than from other directions in the sky. Now, AMS-02 (and Fermi-LAT before it) looked for an effect of this sort, and they didn't find it — but according to Linden and Profumo, it's entirely possible that the anisotropy could be very slight, less than what either experiment was able to detect. We'll have to wait for future experimental results to check that hypothesis.

Modified secondary emission

Of course, it's important to remember (again) that all these analyses are based on the propagation models that tell us how cosmic rays are produced and move through the galaxy. It's entirely possible that adjusting the propagation models alone, without involving any extra source of positrons, would bring the predictions from secondary emission in line with the experimental data.

A paper by Burch and Cowsik looked at this possibility, and it turns out that something called the nested leaky-box model can fix the positron fraction discrepancy fairly well. As I wrote back in the first section, the leaky box model gets its name because cosmic rays are considered to be partially confined within the galaxy. Well, the nested leaky box model adds the assumption that cosmic rays are also partially confined in small regions around the sources that produce them. That means that, rather than being produced uniformly throughout the galaxy, secondary cosmic rays come preferentially from certain regions of space. This is actually similar to the hypothesis from the last section, of extra positrons coming from PWNs, so it shouldn't be too surprising that using the nested leaky box model can account for the data about as well as the pulsars can.

Looking to the future

All the media outlets reporting on the AMS results have been talking about the dark matter hypothesis, even going so far as to say AMS found evidence of dark matter — but clearly, that's not the case. There's no reason to say we have evidence of dark matter when there are perfectly valid, simpler, maybe even better explanations for the positron fraction excess at high energies! There's just not enough data yet to tell which explanation is right.

As AMS-02 continues to make measurements over the next decade or so, there are two main things to look for that will help distinguish between these models. First, does the positron fraction stop rising? And if so, where on the energy spectrum does it peak? As we've seen, this can happen in any model, but if neutralino annihilation is the right explanation, that peak will have to occur at an energy compatible with other constraints on the neutralino mass. Perhaps more importantly, is there any anisotropy in the direction from which these positrons are coming? If there is, it would pretty strongly disfavor the dark matter hypothesis. The anisotropy itself could actually point us toward the source of the extra positrons. So even if we don't wind up discovering a new particle from this series of experiments, there's probably something pretty interesting to be found.

2013
Apr
03

Positrons in space!!

A fair amount of what I write about here is about accelerator physics, done at facilities like the Large Hadron Collider. But you can also do particle physics in space, which is filled with fast-moving particles from natural sources. "All" you need to do is build a particle detector, analogous to ATLAS or CMS, and put it in Earth orbit. That's exactly what the Alpha Magnetic Spectrometer (AMS) is. Since 2011, when it was installed on the International Space Station, AMS has been detecting cosmic electrons and positrons, looking for anomalous patterns, and today they presented their first data release.

Let's jump straight to the results:

This plot shows the number of positrons with a given energy as a fraction of the total number of electrons and positrons with that energy, $\frac{N_\text{positrons}}{N_\text{electrons} + N_\text{positrons}}$. The key interesting feature, which confirms a result from the previous experiments PAMELA and Fermi-LAT, is that the plot rises at energies higher than about $\SI{10}{GeV}$. That's not what you'd normally expect, because most physical processes produce fewer particles at higher energies. (Think about it: it's less likely that you'll accumulate a lot of energy in one particle.) So there must be some process, not completely understood, which is producing positrons.

As part of their data analysis, AMS has tested a model which describes the flux (total number) of positrons as the sum of two contributions:

• A power law (the first term in the below equations), representing known, "typical" sources, and
• A "source spectrum" with an exponential cutoff, representing something new

\begin{align}\Phi_{\ealp} &= C_{\ealp}E^{-\gamma_{\ealp}} + C_s E^{-\gamma_s} e^{-E/E_s} \\ \Phi_{\elp} &= C_{\elp}E^{-\gamma_{\elp}} + C_s E^{-\gamma_s} e^{-E/E_s}\end{align}

The model fits pretty well to the data so far:

This means that the AMS data are consistent with the existence of a new massive particle, one that might make up the universe's dark matter. But a new particle is not the only explanation. You'll see a lot of news articles, blog posts, comments, etc. saying that AMS has detected evidence of dark matter, but that's just not true. For example, there are known astrophysical sources, such as pulsars, which could conceivably be making these high-energy positrons. The results found by AMS so far are not precise enough, and don't go up to high enough energies, to allow us to tell the difference with any confidence.

There are a couple of signs we'll be looking for that could help identify this unknown source of positrons:

• The main one is that, if the positrons are coming from the decays of some as-yet-undiscovered particle, they can't be produced at energies higher than that particle's mass. Now yes, the new particle could be moving fast when it decays, and that would produce fast-moving positrons with high energies — but for a variety of reasons, we don't expect that to be the case. In the standard model of cosmology, the dark matter is "cold," which means that it's not moving very fast relative to other things in the universe.
• The other main sign to look for is any anisotropy in the positrons' direction — that would mean that there are more positrons coming from some directions in the sky than others. If an anisotropy is detected, that indicates that these positrons are coming from specific, localized sources, and not from something that exists pretty much everywhere, as dark matter would. AMS has checked for this effect in the data they've collected so far, and they see a pretty isotropic distribution; positrons are coming from every direction in the sky with roughly the same frequency. That is consistent with the idea that they come from dark matter particle decays, but again, it is not actual evidence. Even if the positrons are coming from something like pulsars, they could be bent around to approach in different directions by the turbulent magnetic fields of the Sun, Earth, and galaxy.

AMS will continue to collect data for a long time to come, so we can look forward to ever more precise data releases in the future, data which will hopefully put to rest the mystery of the not-missing positrons. In the meantime, you may want to check out the PRL viewpoint — a not-too-technical explanation — of this research, or even read the original paper, which can be downloaded for free from the gray citation on the viewpoint page.

2013
Apr
01

An April Fool's Planck, for science

Oh, I kid. Despite the name, nothing about this post is a prank (except perhaps for the title).

It's been a week and a half since the Planck collaboration released their measurements of the cosmic microwave background. At the time, I wrote about some of the many other places you can read about what those measurements mean for cosmology and particle physics. But it's a little harder to find information on how we come to those conclusions. So I thought I'd dig into the science behind the cosmic microwave background: how we measure it and how we manipulate those measurements to come up with numbers.

Measuring the CMB

With that in mind, what did Planck actually measure? Well, the satellite is basically a spectrometer attached to a telescope. It has 74 individual detectors, each of which detects photons in one of 9 separate frequency ranges. As the telescope points in a certain direction, each detector records how much energy carried by photons in its frequency range hit it from that direction. The data collected would look something like the points in this plot:

From any one of these data points, given the frequency and the measured power, you can calculate the temperature of the blackbody that produced it by starting with Planck's law,

$B(\nu, T) = \frac{2h\nu^3}{c^2}\frac{1}{\exp(h\nu/kT) - 1}$

($\nu$ is the frequency), combining it with the definition of spectral radiance,

$B = \frac{\udc^4 P}{\uddc\Omega \uddc A\cos\theta}$

($A$ is the area of the satellite's mirror, $\Omega$ is the solid angle it sees at one time), and solving for temperature to get

$T = \frac{h\nu}{k\ln\bigl(\frac{2h \Omega A \nu^3}{Pc^2} + 1\bigr)}$
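Here's a quick numerical sanity check of that inversion in Python. The solid angle and mirror area below are made-up instrument numbers, not Planck's real optics (though 143 GHz is one of Planck's actual bands): generate the power a detector would see from a $\SI{2.725}{K}$ blackbody using Planck's law, then invert with the formula above and confirm we recover the temperature.

```python
import math

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def planck_B(nu, T):
    """Spectral radiance B(nu, T), in W / (m^2 sr Hz)."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

def temperature(nu, P, Omega, A):
    """Invert Planck's law: recover T from power per unit frequency."""
    return h * nu / (k * math.log(2 * h * Omega * A * nu**3 / (P * c**2) + 1))

# Made-up instrument parameters for this sketch (not Planck's real optics):
Omega = 1e-5   # solid angle seen by the detector, sr
A = 0.1        # collecting area, m^2
nu = 143e9     # Planck does observe near 143 GHz

T_true = 2.725
P = planck_B(nu, T_true) * Omega * A   # power per unit frequency at detector
print(temperature(nu, P, Omega, A))    # recovers 2.725 K
```

The round trip works because the instrument factors $\Omega A$ cancel: the logarithm's argument reduces to $e^{h\nu/kT}$, so the algebra inverts exactly.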

With 74 separate simultaneous measurements, you can imagine that Planck is able to constrain the temperature of the CMB very precisely!

We've known for quite some time, since the COBE data presentation in 1990, that the CMB has an essentially perfect blackbody spectrum, with a temperature of $\SI{2.72548+-0.00057}{K}$.

But we've also known for some time that the CMB isn't exactly the same temperature in every direction. It varies by only a few parts in 100,000 — tens of millionths of a kelvin — from one spot in the sky to another, so depending on which way you point the telescope, you'll find a slightly different result. The objective of the Planck mission, like WMAP before it, is to measure these slight variations in temperature as precisely as possible.

The CMB power spectrum

One way to represent the temperature variations, or anisotropies, measured by Planck is a straightforward visualization, like this:

Every direction in space maps to a point in this image. Red areas indicate the directions in which Planck found the CMB to be slightly warmer than average (after accounting for radiation received from our galaxy and other known astronomical sources), and blue areas indicate the directions in which it was slightly cooler than average.

But for the scientific analysis of the CMB, it's not actually that important to know exactly where in the sky the hot spots and cold spots are. These little anisotropies were generated by quantum fluctuations in the structure of spacetime in the very early universe, and quantum fluctuations are random. No theory can actually predict that you'll have a hot spot in this particular direction and a cold spot in that particular direction. What you can predict is how the energy in the CMB should be distributed among various "modes." Each mode is basically a pattern of hot and cold spots of a characteristic size.

Interlude: modes of a one-dimensional function

Modes are a lot easier to understand once you can visualize them, so let me explain the concept with a simple example. Here's a plot of a function:

Actually, that graph is kind of boring. Here's a more colorful way of visualizing the same function: a density map, which is red in the regions where the function is large and blue where it's small:

If you're familiar with Fourier analysis, you know that any function (more precisely: any reasonably well-behaved function on an interval of length $L$ that vanishes at the endpoints) can be expressed as a sum of sinusoidal functions with different wavelengths.

$f(x) = \sum_{n = 0}^{\infty} f_n \sin\frac{n\pi x}{L}$

For this function, those are the 12 sine waves shown here:

Each of these sine waves corresponds to a mode. We typically label them by the number of peaks and valleys in the wave, shown as $n$ on each plot. If you look at the density maps, you can see that the modes with higher numbers have smaller hot (red) and cold (blue) spots, as I mentioned earlier.

Having broken down the original function into these sine waves, you can talk about how much energy is in each mode. The energy is related to the square of the sine wave's amplitude. For example, the $n = 3$ mode of this function has the most energy; $n = 6$ and $n = 8$ have relatively little. You can tell because the graph of the $n = 3$ sine wave has the largest amplitude, and the $n = 6$ and $n = 8$ sine waves have small amplitudes. (The density maps don't show the amplitudes.)

What a cosmological theory predicts is the amount of energy (per unit time) in each mode: the numbers $C_n = \abs{f_n}^2$. This is called the power spectrum.
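The whole interlude can be condensed into a few lines of Python: build a function from a few sine modes, recover the amplitudes $f_n$ by numerical integration, and read off the power $C_n = \abs{f_n}^2$. The mode amplitudes are arbitrary choices of mine, picked so that $n = 3$ dominates while $n = 6$ and $n = 8$ are small, echoing the example in the text.

```python
import numpy as np

L = 1.0
x = np.linspace(0, L, 4001)

# Build a test function from three sine modes with arbitrary amplitudes
true_amps = {3: 2.0, 6: 0.3, 8: 0.2}
f = sum(a * np.sin(n * np.pi * x / L) for n, a in true_amps.items())

# Recover each amplitude: f_n = (2/L) * integral of f(x) sin(n pi x / L) dx,
# using the trapezoid rule for the integral.
def amplitude(n):
    y = f * np.sin(n * np.pi * x / L)
    return (2 / L) * np.sum((y[:-1] + y[1:]) / 2) * (x[1] - x[0])

power = {n: amplitude(n)**2 for n in range(1, 13)}  # C_n = |f_n|^2
print(power[3], power[6], power[8])  # ~4.0, ~0.09, ~0.04; other modes ~0
```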

Modes on a sphere

We can do the same thing with the CMB that we did with the one-dimensional function in that example: break it down into individual modes, each with some amplitude, and determine how much energy is in each mode. Of course, the sky isn't a line; it's a sphere, which means the modes of the CMB are more complicated than just sine waves. Modes on a sphere are called spherical harmonics, and their density maps look like this:

Because a sphere is a two-dimensional surface, it takes two numbers to index these modes, $\ell$ and $m$.

Any real-valued function on a sphere — like, say, the function which gives the temperature of the CMB in a given direction — can be expressed as a sum of real spherical harmonics, each scaled by some amplitude.

$T(\theta, \phi) = \sum_{\ell = 0}^{\infty}\sum_{m = -\ell}^{\ell} T_{\ell m} Y_{\ell m}(\theta, \phi)$

We can then go on to compute the power spectrum, just as in the 1D case. But it's conventional to combine the power in all the modes with the same value of $\ell$:

$C_{\ell} = \sum_{m = -\ell}^{\ell} \abs{T_{\ell m}}^2$

The numbers $C_{\ell}$ constitute the power spectrum, analogous to $C_n$ for the 1D function.
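As a small sanity check on this machinery, here's a Python simulation: draw the $2\ell + 1$ amplitudes $T_{\ell m}$ for each $\ell$ from a made-up "theory" spectrum, then combine the power over $m$ to recover it. The input spectrum is purely illustrative, and note that conventions differ on whether to sum or average over $m$; the average is used here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate mode amplitudes T_lm for l = 2..50: for each l, draw the
# 2l+1 real amplitudes from a Gaussian whose variance is a made-up
# "theory" spectrum, here C(l) ~ 1/(l(l+1)) just for illustration.
ells = np.arange(2, 51)
C_theory = 1.0 / (ells * (ells + 1.0))

C_measured = []
for l, C in zip(ells, C_theory):
    T_lm = rng.normal(0.0, np.sqrt(C), size=2 * l + 1)  # m = -l .. l
    C_measured.append(np.mean(T_lm**2))  # average power over the 2l+1 modes
C_measured = np.array(C_measured)

# With only 2l+1 samples per l, the recovered spectrum scatters around
# the input, more so at low l where there are fewer modes to combine.
rel_err = np.abs(C_measured / C_theory - 1.0)
print(rel_err[:3], rel_err[-3:])
```

This scatter is unavoidable even with a perfect instrument (there's only one sky to measure), which is part of why the low-$\ell$ points on real CMB power spectra carry large error bars.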

To the data!

Here's the actual power spectrum measured by Planck, shown as red dots:

The quantity on the vertical axis is $\frac{1}{2\pi}\ell(\ell + 1) C_\ell$. Beyond $\ell = 10$ or so, each point represents an average over a few different values of $\ell$. You don't see points for $\ell = 0$ or $\ell = 1$ on the plot because the amplitude of the $\ell = 0$ mode is just the average CMB temperature over the entire sky, which is probably skewed by sources of radiation local to our little corner of the universe, and the amplitude of the $\ell = 1$ mode is primarily due to our motion relative to the CMB. So the meaningful physics starts at $\ell = 2$.

At this point, I would love to dig into the models, and explain where some of those features in the power spectrum come from. But that will have to wait for another day. Stay tuned for the sequel to this post, coming soon, where I'll talk about the physics that makes the CMB power spectrum what it is!

2013
Mar
22

Planck exposes the universe

Yesterday, the team behind the European Space Agency's Planck satellite released their first set of data. This was a seriously exciting moment in the world of cosmology, in the same way as the previous weeks' Higgs updates were an exciting moment in the world of particle physics. And I have the perfect way to explain it to you:

Go read this, this, this, this, and this.

OK, seriously though. The preceding five blog posts do a fantastic job (individually, and even more so together) of explaining, at a reasonably abstract level, why the Planck data release is important and what it means. Now, I do plan to do my usual act of digging into the science and explaining some of the details, but in this case, there's a lot of science. The Planck collaboration released thirty papers, and I just haven't had time to comb through them yet. So a proper Planck post will have to wait for some time later this weekend. Until then, you can get a good, just-technical-enough overview of the results from Ethan Siegel's summary post, the last link in that last paragraph.

2013
Mar
14

Sit back, close your eyes, and think all the way back to... last week, when physicists from the LHC experiments presented their latest results on the Higgs search at the Rencontres de Moriond Electroweak session. Yes, I know, we barely had time to digest those results. But digest we must, because this week there are even more new results coming out, from the Moriond session on QCD and High Energy Interactions. And what the experiments have presented today is, rightly or wrongly, turning a lot of heads.

The key update from today's presentations is a measurement by ATLAS of the cross section for the Higgs decaying to two W bosons, which each then decay to a lepton and a neutrino: the $H\to WW\to ll\nu\nu$ channel. It comes on the heels of a similar measurement presented by CMS last week. Both detectors are now reporting that they measure a strong signal for $\ell\bar\ell\nu\bar\nu$ detection beyond the standard model (without a Higgs boson) at $\SI{125}{GeV}$, with a significance of $4.0\sigma$ at CMS and $3.8\sigma$ at ATLAS. In other words, if the particles of the standard model, not including the newly discovered Higgs candidate, were all the particles there are, the probability that each detector would measure what it did is less than a hundredth of a percent.
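The translation from "sigmas" to a probability is just the tail area of a normal distribution. A quick one-sided sanity check, using only the standard library (the significance values are the ones quoted above):

```python
from math import erfc, sqrt

def one_sided_p_value(z):
    """Probability that a background-only fluctuation reaches at least
    z sigma, i.e. the upper-tail area of a standard normal distribution."""
    return 0.5 * erfc(z / sqrt(2))

# Significances quoted for the H -> WW -> l l nu nu channel:
p_cms = one_sided_p_value(4.0)    # ~3.2e-5, about 0.003%
p_atlas = one_sided_p_value(3.8)  # ~7.2e-5, about 0.007%
```

Both values come out below a hundredth of a percent, matching the statement above.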

Compared to the last batch of Higgs search results, when ATLAS was only detecting a $2.6\sigma$ signal and CMS a $3.1\sigma$ signal, this is a significant improvement indeed. (Pun intended, if you got it!) As far as I know, nobody has combined the new results from ATLAS and CMS to see just what statistical significance they get when put together, but if they did, it would likely be above the "mythical" $5\sigma$ threshold in this channel alone. Plainly put, that means we are now effectively certain the newly discovered particle decays to W bosons.
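A real combination requires the experiments' full likelihoods, but a common back-of-the-envelope estimate treats the two measurements as independent and adds the significances in quadrature. A sketch of that rough estimate, which is why "above $5\sigma$" is a plausible guess:

```python
from math import sqrt

def naive_combined_significance(*sigmas):
    """Rough estimate only: add independent significances in quadrature.
    A proper combination uses the full likelihoods from each experiment,
    not this shortcut."""
    return sqrt(sum(s * s for s in sigmas))

combined = naive_combined_significance(4.0, 3.8)  # ~5.5 sigma
```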

OK, so why is that so important? Well, the whole reason the Higgs boson was predicted was to allow the W and Z bosons, the carriers of the weak force, to have mass. It does so via the Higgs mechanism, which also predicts that the Higgs boson should interact with those bosons. If this newly discovered particle didn't interact with the W, it couldn't be the Higgs! Simple as that. But now, we know that if that were the case, it would be extremely unlikely that ATLAS and CMS would be seeing the results that they are. So that's probably not what's happening.

However, just discovering that the new particle interacts with W bosons doesn't automatically mean that it is the standard model Higgs. There are plenty of theories that predict Higgs bosons, any one of which (or none of which) could be correct. It'll take many more years of data collection and analysis before we can rule out all but one of the proposed theories.

I leave you with this animation of the signal appearing in the $H\to WW$ channel at ATLAS, showing how it differs from the expectation based on a standard model without the Higgs, and how including a standard model Higgs boson with a mass of $\SI{125}{GeV}$ very neatly fills in the gap.

2013
Mar
07

This week sees a major physics conference in Italy, the Rencontres de Moriond 2013 Electroweak session. It's notable because the LHC experimentalists involved in the search for the Higgs boson are presenting their latest results. (There are also many other things being presented — less high-profile, but no less important!) I won't give too many details of what has been presented, since there are plenty of other places on the web you can read about it, but certainly a quick overview is in order.

When last we left the Higgs search, it was November, and the experimentalists had just presented the results of analyzing the data the LHC had collected in the latter half of summer 2012, combined in some cases with earlier data.

Of the various ways (channels) the standard model Higgs boson can decay, the experiments are looking most closely at these five:

• $H\to\gamma\gamma$ (two photons)
• $H\to WW\to ll\nu\nu$ (two leptons and two neutrinos)
• $H\to ZZ\to 4l$ (four leptons)
• $H\to \tlp\talp$ (two tau leptons)
• $H\to \btq\btaq$ (bottom and antibottom quark)

Remember that if the particle discovered is really the standard model Higgs boson, it should decay via each of these channels exactly as often as predicted. As of last November, there were some hints that it might not be doing that: ATLAS and CMS had seen slightly more diphoton decays than predicted, and slightly fewer tau and bottom decays than predicted. It wasn't conclusive evidence of anything, but physicists were holding out a little bit of hope that these slight discrepancies would become stronger as more data was collected, and would indicate something new to be discovered.

Unfortunately, with the new results presented yesterday, it looks more and more likely that the particle we've discovered is the plain old standard model Higgs boson, with no surprises. Both ATLAS and CMS are starting to release plots which show the number of observed events in a given channel compared to what is expected from the standard model with a Higgs boson with a mass of $\SI{125}{GeV}$, and they match quite closely. For example, here are the plots from CMS [PDF] showing the numbers of $\tlp\talp$ pairs detected,

and the numbers of $\btq\btaq$ pairs detected,

In both cases, you can see that the lines for the observed counts are coming out above the green and yellow bands showing what would be expected if there were no Higgs boson, and are getting closer to the red lines showing what is expected with a $\SI{125}{GeV}$ Higgs. There is still some excess in the $\btq\btaq$ channel above $\SI{125}{GeV}$, but that may well be a statistical fluke and has a decent chance of disappearing with time.

ATLAS is still seeing somewhat more events than expected in the diphoton channel, but again, it's not enough to strongly indicate that something unexpected is happening.

In addition to just looking at the different decay channels, physicists are also checking the spin and parity of the new particle, to check whether it matches the expectation for the standard model Higgs boson: zero spin and even parity. Back in November, that set of properties seemed most likely, but it wasn't anywhere near conclusive. This week, both LHC experiments released more detailed analyses of the different possibilities for spin and parity the new particle could have. Here are some plots from CMS, for example:

Each graph shows a comparison of the standard model Higgs hypothesis, spin zero and even (+) parity (yellow), with another hypothesis (blue). The way to read these is to look at the height of each curve at the position of the red arrow. If one curve is much higher than the other, then one hypothesis is correspondingly much more likely than the other. You can see this in the center column, for example, where the height of the blue curve at the position of the arrow is essentially zero. That means it's exceedingly unlikely the new boson has spin one. (We know that it can't have spin one if it's a single particle, because it decays into two photons which each have spin one themselves, but if there are multiple particles with the same mass, it's possible one of them has spin one, which is what those center graphs are exploring.) In some of the other plots, it's clearly more likely that the new boson has spin zero and even parity, but not likely enough to eliminate the alternative outright. Quantitatively, all hypotheses other than $0^+$ are excluded at a $2\sigma$ level, which is short of the $5\sigma$ exclusion that physicists would require to consider the matter closed.

New results aren't the only things to come out of this conference, though. ATLAS has released some excellent animated GIFs showing how peaks emerge in the diphoton and four-lepton channels as data accumulates over time. It's rare to see a really good use of animation, and it's also disappointingly rare to see a truly effective way of visualizing any scientific data, so this is a real treat. Observe and enjoy!