1. 2013

    Fun fact about prime numbers

    Today I learned from Reddit that the squares of all prime numbers greater than three are one more than a multiple of 24. It’s an easy enough fact to prove: first, all primes greater than 3 can be written as \(6n \pm 1\) for some positive integer \(n\). The square of this number is

    $$(6n\pm 1)^2 = 36n^2 \pm 12 n + 1 = 12n(3n \pm 1) + 1$$

    For any integer \(n\), either \(n\) or \(3n \pm 1\) is even (if \(n\) is odd, then \(3n \pm 1\) is even), so \(12n(3n \pm 1)\) will be a multiple of 24, and the prime number squared is one more than that.
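    The claim is easy to verify by brute force; here’s a quick sketch of my own (not part of the original post):

    ```python
    # Check that p^2 is one more than a multiple of 24 for every prime p > 3.
    def is_prime(n):
        if n < 2:
            return False
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return False
        return True

    primes = [p for p in range(5, 1000) if is_prime(p)]
    assert all(p * p % 24 == 1 for p in primes)
    ```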

    Okay, this isn’t a particularly practical fact (unless you consider number theory practical), but it’s kind of a cute proof.

  2. 2012

    The Gini coefficient for distribution inequality

    As we go out shopping for gifts this holiday season, given the state of the economy, a lot of people will be thinking about how to get the best value from their gift budget. A lot more people than usual, in fact, because as you’ll hear on TV or read online from time to time, the income gap in this country is exceptionally large.

    It’s probably common knowledge that a large income gap means roughly a large difference between the richest and poorest income levels. But that’s not a very precise statement by itself. Suppose you have two tiny countries of six people each, and their incomes are distributed like this:

    [Table: incomes of the six residents of Omnomnomia and Lolistan]

    The difference between richest and poorest is the same in both countries, but the other values are significantly higher in Lolistan. We need to calculate something that takes into account everyone’s income, not just the extremes.

    OK, how about the standard deviation? That’s the usual way to characterize how widely a bunch of numbers are distributed.

     [Table of values for Omnomnomia and Lolistan] …
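    The Gini coefficient the title refers to can be computed as the mean absolute difference between all pairs of incomes, divided by twice the mean income. Here’s a sketch using made-up six-person income lists (the post’s actual table isn’t reproduced here, so these numbers are my own, chosen so that the richest-poorest gap is the same in both countries):

    ```python
    def gini(incomes):
        # Mean absolute difference over all ordered pairs,
        # normalized by twice the mean income.
        n = len(incomes)
        mean = sum(incomes) / n
        mad = sum(abs(a - b) for a in incomes for b in incomes) / (n * n)
        return mad / (2 * mean)

    # Hypothetical distributions: same extremes, different middles.
    omnomnomia = [10, 15, 20, 80, 85, 90]
    lolistan = [10, 70, 75, 80, 85, 90]

    print(gini(omnomnomia), gini(lolistan))
    ```

    With these numbers Omnomnomia comes out more unequal than Lolistan even though the extremes match, which is exactly the kind of distinction the extremes alone can’t capture.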
  3. 2012

    A simple regularization example

    Earlier this month I promised to show you a simple example of regularization. Well, here it is. This is a particular combination of integrals I’ve been working with a lot lately:

    $$\iint_{\mathbb{R}^2} \uddc\mathbf{x}\frac{e^{-x^2}}{x^2}e^{-i\mathbf{k}\cdot\mathbf{x}} - \iint_{\mathbb{R}^2} \frac{\uddc\mathbf{y}}{y^2}e^{-i\xi\mathbf{k}\cdot\mathbf{y}}$$

    A quick look at the formulas shows you that both integrands have singularities at \(\mathbf{x} = 0\) and \(\mathbf{y} = 0\).

    OK, well, that’s why we have two integrands. We can change variables \(\xi\mathbf{y}\to\mathbf{x}\) in the second integral, subtract, and the singularities will cancel out, right? You can do this integral by hand in polar coordinates, or just pop it into Mathematica:

    $$\iint_{\mathbb{R}^2} \uddc\mathbf{x}\frac{e^{-x^2} - 1}{x^2}e^{-i\mathbf{k}\cdot\mathbf{x}} = -\pi\Gamma\biggl(0, \frac{k^2}{4}\biggr)$$

    Just one problem: that’s not the right answer! You can tell because the value of this integral had better depend on \(\xi\), but this expression doesn’t. This isn’t anything so mundane …
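    For what it’s worth, the closed form quoted above can be checked numerically (this only verifies the evaluation of the combined integral, not that it’s the right regularization, which is the point of the post). Doing the angular integral turns the 2D Fourier integral into a radial one with a Bessel function \(J_0\), and \(\Gamma(0, z)\) is the exponential integral \(E_1(z)\); the cutoff at \(r = 200\) and \(k = 2\) below are my own choices:

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.special import j0, exp1

    k = 2.0
    # Angular part of the 2D integral of f(r) e^{-i k.x} gives 2*pi*J0(kr),
    # leaving a one-dimensional radial integral.
    radial, _ = quad(lambda r: (np.exp(-r**2) - 1.0) / r * j0(k * r),
                     0.0, 200.0, limit=2000)
    lhs = 2.0 * np.pi * radial
    rhs = -np.pi * exp1(k**2 / 4.0)  # Gamma(0, z) = E1(z)
    print(lhs, rhs)
    ```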

  4. 2012

    Math: Painful? Apparently so, for real

    This Wired science article is the latest in a series of reports I’ve seen going around the web lately, saying that for some people, the anticipation of doing even simple math activates the same regions of the brain that are responsible for physical pain. So math anxiety isn’t just something that people make up to make themselves feel better (not that I ever really thought it was); it has an actual neurological basis.

    I’ll be honest, I just don’t get math anxiety. I know I have a bit of a tendency to act like it doesn’t exist; for example, this entire blog is meant for people who actually look forward to digging into the math (or other source material) behind an interesting result. I make no apologies for the fact that I use a ton of math here. But this is all just because I would actually like math anxiety not to exist. Whether it’s a matter of education, or cultural bias, or some sort of mental condition that could be treated with drugs or therapy (I sort of doubt that last case, but who knows), I hope that someday we can live in …

  5. 2012

    On the magnitudes of vectors

    Just a quick mathematical observation for today (actually yesterday): suppose you have three points. Any function of the three points that doesn’t depend on their absolute position or orientation depends only on the magnitudes of the vectors joining them.

    Suppose the three points are \(\mathbf{x}\), \(\mathbf{y}\), and \(\mathbf{z}\). A function that doesn’t depend on the absolute location of the points will only depend on the displacements between them, \(\mathbf{r} = \mathbf{z} - \mathbf{y}\), \(\mathbf{s} = \mathbf{z} - \mathbf{x}\), and \(\mathbf{t} = \mathbf{y} - \mathbf{x}\). And a function that doesn’t depend on the orientations will only depend on the scalar quantities we can form out of these displacements: \(r^2\), \(\mathbf{r}\cdot\mathbf{s}\), \(s^2\), \(\mathbf{s}\cdot\mathbf{t}\), \(t^2\), and \(\mathbf{t}\cdot\mathbf{r}\). But the dot products are actually not independent, because for example

    $$r^2 = (\mathbf{s} - \mathbf{t})^2 = s^2 + t^2 - 2\mathbf{s}\cdot\mathbf{t}$$

    and therefore

    $$\mathbf{s}\cdot\mathbf{t} = \frac{1}{2}(s^2 + t^2 - r^2)$$

    Then you can write \(\mathbf{r}\cdot\mathbf{s} = s^2 - \mathbf{s}\cdot\mathbf{t}\) and …
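    These relations are easy to sanity-check numerically with random points (a quick sketch of my own, not from the original post):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x, y, z = rng.standard_normal((3, 3))  # three random points

    r = z - y
    s = z - x
    t = y - x

    # r = s - t, so every dot product follows from the squared magnitudes alone.
    assert np.allclose(r, s - t)
    assert np.isclose(s @ t, 0.5 * (s @ s + t @ t - r @ r))
    assert np.isclose(r @ s, s @ s - s @ t)
    ```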

  6. 2012

    Not really a simple regularization analogy

    Last year I posted about an infinite sum involving the mean of the harmonic numbers,

    $$\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}\frac{1}{k} = \lim_{n\to\infty}\frac{\ln n}{n}$$

    The method I used to evaluate this was to approximate the sum by an integral. This is a technique that is used in various places in physics, such as in the computation of the Fermi surface in metals.

    In this particular case, we only cared about the large-\(n\) limiting behavior of the sum, namely that it grows sublinearly for large \(n\). But suppose you wanted to know whether there was, say, a constant term as well. Here’s one way to figure that out. Instead of converting the entire sum to an integral, you choose some cutoff value \(a\), and keep the first \(a\) terms in the sum explicitly.

    $$\sum_{k=1}^{n}\frac{1}{k} \approx \sum_{k=1}^{a}\frac{1}{k} + \sum_{k=a+1}^{n}\frac{1}{k} = \sum_{k=1}^{a}\frac{1}{k} + \ln\frac{n}{a+1}$$

    Now you have a value, \(a\), that represents a “break” in your sequence, but it’s an arbitrary …
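    As a numerical illustration of the cutoff idea (the values of \(a\) below are my own, hypothetical choices): the explicit piece \(\sum_{k\le a} 1/k - \ln(a+1)\) settles down to a constant as \(a\) grows — the Euler–Mascheroni constant \(\gamma \approx 0.5772\) — which is exactly the constant term the pure-integral approximation misses.

    ```python
    import math

    def harmonic(a):
        # Partial sum of the harmonic series, H_a.
        return sum(1.0 / k for k in range(1, a + 1))

    n = 10**6
    exact = harmonic(n)

    # Larger cutoffs a give better approximations to the full sum.
    for a in (1, 10, 100, 1000):
        approx = harmonic(a) + math.log(n / (a + 1))
        print(a, approx - exact)

    # The constant term the cutoff exposes approaches gamma:
    print(harmonic(1000) - math.log(1001))  # ≈ 0.577
    ```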

  7. 2011

    Mean of the harmonic numbers

    A while ago, somebody posed an interesting problem on Physics Forums: how to evaluate the infinite sum

    $$\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}\frac{k-1}{k}$$
    It’s not hard to start: convert it to

    $$\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}\frac{k}{k} - \lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}\frac{1}{k}$$

    The first term is obviously equal to 1, and in the second term, the partial sums \(\sum_{k=1}^{n}\frac{1}{k}\) are well known as the harmonic numbers. It’s easy to look up the behavior of this sum as \(n\to\infty\) and thereby determine the answer, but I’m not interested in the answer. I’m interested in the method that you could use if you didn’t have the world’s mathematical references at your fingertips.

    A common way to evaluate a sum with large numbers of terms like this is to approximate it by an integral. You may know that the Riemann integral, which is the first definition of an integral that students in intro calculus classes usually learn, is nothing more than the infinite limit of …
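    A quick numerical sketch of where this goes (my own, since the excerpt cuts off): the integral \(\int_1^n dx/x = \ln n\) tracks the harmonic sum up to a bounded difference, so the second term above vanishes like \(\ln n / n\) and the limit is 1.

    ```python
    import math

    # Compare the harmonic sum with the integral that approximates it.
    n = 10**5
    h = sum(1.0 / k for k in range(1, n + 1))
    integral = math.log(n)

    print(h - integral)  # difference stays bounded as n grows
    print(h / n)         # the mean of the terms -> 0, so the limit above is 1
    ```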

  8. 2011

    Second derivative in polar coordinates

    Here’s an interesting, and perhaps occasionally useful, identity: suppose you have a function defined on a two-dimensional space, \(f(\vec{r})\). And suppose that this function is independent of the angle of \(\vec{r}\) but rather is only a function of the magnitude \(r = \norm{\vec{r}}\). Then

    $$\lapl f(r) = \frac{1}{r}\pd{}{r}r\pd{}{r}f(r) = \frac{1}{r^2}\, r\pd{}{r}r\pd{}{r}f(r) = \frac{1}{r^2}\pdd{}{(\ln r)}f(r)$$

    (Here \(\ln r\) is shorthand for \(\ln\frac{r}{r_0}\), where \(r_0\) is some constant.) In other words, for rotationally invariant functions, the radial part of the polar Laplacian is equal to the second logarithmic derivative divided by \(r^2\).
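    The identity is easy to confirm symbolically; here’s a sketch using SymPy, with \(f(r) = e^{-r^2}\) as an arbitrary test function of my own choosing:

    ```python
    import sympy as sp

    r, u = sp.symbols('r u', positive=True)
    f = sp.exp(-r**2)  # arbitrary rotationally invariant test function

    # Radial part of the polar Laplacian: (1/r) d/dr (r df/dr)
    radial = sp.simplify(sp.diff(r * sp.diff(f, r), r) / r)

    # Second logarithmic derivative over r^2: substitute r = e^u,
    # differentiate twice with respect to u, then return to r.
    log_form = sp.diff(f.subs(r, sp.exp(u)), u, 2) * sp.exp(-2 * u)
    log_form = sp.simplify(log_form.subs(u, sp.log(r)))

    assert sp.simplify(radial - log_form) == 0
    ```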

    At this point you might be wondering, why the heck is this interesting? Well, for me, it’s because my current research project happens to involve logarithmic derivatives of rotationally invariant functions in 2D space. But for (almost) everyone else, this is related to the way electric potential drops off with distance in 2D space.

    Consider, for example, a wire that carries a charge density \(\lambda\) (but no current). Since this system is translationally symmetric (nothing changes if you …