Hey, we’re doing line integrals over vector field right now, but it’s kind of the same idea. Congrats on the image of the day!
Thanks! By the way, I also made a vector field version:
A scalar field has a value associated with each point in space. Examples of scalar fields are height, temperature or pressure maps. In a two-dimensional field, the value at each point can be thought of as the height of a surface embedded in three dimensions. The line integral along a curve over this scalar field is equivalent to the area under the curve traced over the surface defined by the field.
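As a sketch of the idea, the scalar line integral ∫ f ds along a curve can be approximated numerically; here is a minimal Python version (the function names are my own, for illustration only):

```python
import math

# A hedged numerical sketch of a scalar line integral: sample the curve
# r(t) = (x(t), y(t)), estimate the speed |r'(t)| by central differences,
# and sum f · speed · dt with the midpoint rule.
def line_integral(f, x, y, t0, t1, n=10000):
    dt = (t1 - t0) / n
    total = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * dt
        h = dt / 2
        dx = (x(t + h) - x(t - h)) / (2 * h)
        dy = (y(t + h) - y(t - h)) / (2 * h)
        total += f(x(t), y(t)) * math.hypot(dx, dy) * dt
    return total

# Sanity check: f = 1 over the unit circle gives the arc-length, 2π.
circ = line_integral(lambda x, y: 1.0, math.cos, math.sin, 0.0, 2 * math.pi)
```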
In this animation, all these processes are represented step-by-step, directly linking the concept of the line integral over a scalar field to the representation of integrals familiar to students, as the area under a simpler curve. A breakdown of the steps:
Standing waves are an interesting physical phenomenon that shows up in several places in nature. They are waves that oscillate “in place”.
One of the ways a standing wave can be created is by the interference of two waves travelling in opposite directions (like in the second image). By the superposition principle, the resulting wave (in black) is the sum of both waves (red and blue).
This standing wave has points that remain fixed (called nodes, in red), where destructive interference always occurs, and points that oscillate the most (called antinodes), where constructive interference occurs.
Standing waves are behind the sound of virtually every acoustic musical instrument, whether it is a drum, a flute or a violin. The musician operates the instrument in a manner to generate a vibration, and the vibration is propagated and reflected throughout the instrument. The interference between all of the reflected waves generates standing waves, which ultimately produce the bulk of the sound we hear.
The waves shown here are one-dimensional, but this phenomenon occurs in two and three dimensions as well.
By studying how waves interfere and reflect, and how these generate standing waves, one can estimate the vibration and density inside a spherical body (such as the Sun or the Earth — read those links!) from measurements of oscillation on the surface, a very powerful tool for studying the inner workings of such structures.
In the third animation, for reference, we see the wave generated when opposing waves of different frequencies interfere.
In a previous post, I introduced an animation explaining radians. It used π (pi), the standard constant used to measure circles.
However, as I mentioned in that post, there’s an ongoing movement to promote the constant τ (tau) to be used instead. When dealing with radians, tau undeniably makes a lot more sense than pi: a quarter tau radians represents, precisely, a quarter of a full rotation around the circle. With pi, a quarter rotation is π/2. That’s just nonsense!
Quite a few people demanded a version of the animation with tau, though they didn’t even have to ask. I was already planning on making one!
Just as before, Tumblr forced me to get rid of a lot of the frames, so the animation here isn’t as smooth as it could be. Here’s the proper animation (click to go to the Wikimedia Commons details page).
Based on the same principle as the polygonal trigonometric functions.
This was requested a few times, but I had to figure out how to draw polar stars first. Finally got around to it.
I won’t be updating the sound generator. Sorry.
Another one for Wikipedia. Tumblr forced me to cut the amount of frames in half. Here it is in its full, smooth glory.
However, only one of these angle units earns a special place in mathematics: the radian.
This animation illustrates what the radian is: it’s the angle associated with a section of a circle that has the same length as the circle’s own radius.
For a unit circle, with radius 1, an angle in radians has the same numerical value as the length of the arc around the circle that is associated with the angle.
In the animation, the radius line segment r (in red) is used to generate a circle. The same radius is then “bent”—without changing its length—around the circle it just generated. The angle (in yellow) that’s associated with this bent arc of length r is exactly 1 radian.
Making 3 copies of this arc gets you 3 radians, just a bit under half of a circle. This is because half of a circle is π radians. So that missing piece accounts for π - 3 ≈ 0.14159265… radians.
Our π radians arc is then copied once again, revealing the full circle, with 2π radians all around.
There are several great reasons to use radians instead of degrees in mathematics and physics. Everything seems to suggest this is the most natural system of measuring angles.
Radians look complicated to most people due to their reliance on the irrational number π to express relations to the circle, and the fact that a full circle contains 2π radians, which may seem arbitrary.
In order to simplify things, some people have been proposing a new constant τ (tau), with τ = 2π. When using τ with radians, fractions of τ correspond to the same fractions of a circle: a fourth of a tau is a fourth of a circle, and so on.
Tau does seem to make more sense than pi when dealing with radians, but pi shows up elsewhere too, with plenty of merits of its own.
I, for one, do enjoy the idea of tau being used, exclusively, as an angle constant, so that it immediately implies the use of radians. If such were the case, a student seeing Euler’s identity for the first time, but in terms of tau, would be immediately compelled to think in terms of rotations: e^(τi) = 1 would instantly convey the idea of a full rotation, bringing you back where you started. That seems like a good thing.
So happy Pi day!
(or half-tau day, if you prefer!)
This is the second of three animations I’ll be posting today (here’s the first). Be sure to check them out later if you miss them!
This simplified things a lot, and created some interesting uses for the functions. However, since I could only have one value of radius for each angle (they were based on polar equations), I could not draw arbitrary shapes with a continuous line based on the [0,2π] interval.
The solution is to extend the idea to general closed curves, by using the position along the curve to define the sine and cosine analogues. In other words, we want “path trigonometric functions” for which the input parameter is the position along the path, and whose periods are the curve’s total arc-length.
But the concept of “sine” and “cosine”, as well as “trigonometric”, completely lose their meaning at this point. It has nothing to do with triangles or angles.
We’re now dealing with the functions x(s) (in blue) and y(s) (in red) that together describe the curve, by being used in the parametric equation r(s) = ( x(s) , y(s) ), where r(s) is a vector function and s is the arc-length. This is very standard stuff, so it isn’t incredibly exciting anymore.
Notice that if the green curve was a unit circle, the functions would become the usual sine and cosine.
But we do get to see what these functions look like and what they are doing. So here are the coordinate functions for the arc-length parametrization of a pi curve!
Happy Pi day!
This is just the first post for today. There’ll be two more, so be sure to check them out later if you miss them!
Here’s an arc-length parametrization of a closed curve for the Greek lowercase letter pi, famously used for the circle constant, π = 3.1415926535897932384626… (that’s what I bothered memorizing!)
Arc-length parametrizations are also called unit-speed parametrizations, because a point moving along the path will move with speed 1: the point moves 1 unit of arc-length per 1 unit of time.
It is generally very hard, if not impossible, to find this parametrization in closed form. But it always exists for nice continuous curves, and since it has some pretty cool uses, just knowing it exists is a powerful enough tool for mathematicians to build other cool theorems on.
Using computers, we can usually approximate it numerically to any degree of accuracy we desire. The basic algorithm is pretty simple: just make a table of arc-length for each value of t. Then, the unit parametrization is just reading the table in reverse: find t given arc-length. Some interpolation is usually necessary.
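A minimal sketch of that table-based algorithm in Python (names and details are my own, not the exact code behind the animation):

```python
import bisect
import math

# Build a table of accumulated arc-length for each sampled value of t.
def arc_length_table(x, y, t0, t1, n=10000):
    ts = [t0 + (t1 - t0) * i / n for i in range(n + 1)]
    lengths = [0.0]
    for i in range(n):
        dx = x(ts[i + 1]) - x(ts[i])
        dy = y(ts[i + 1]) - y(ts[i])
        lengths.append(lengths[-1] + math.hypot(dx, dy))
    return ts, lengths

# Read the table "in reverse": find t for a given arc-length s,
# with linear interpolation between table entries.
def t_for_length(ts, lengths, s):
    i = bisect.bisect_left(lengths, s)
    if i == 0:
        return ts[0]
    frac = (s - lengths[i - 1]) / (lengths[i] - lengths[i - 1])
    return ts[i - 1] + frac * (ts[i] - ts[i - 1])

# Example: on a unit circle, arc-length equals the angle, so arc-length
# π should map back to t ≈ π.
ts, lengths = arc_length_table(math.cos, math.sin, 0.0, 2 * math.pi)
t_half = t_for_length(ts, lengths, math.pi)
```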
Whoops. I had queued that Pi post for the wrong date, and couldn’t fix it in time. Sorry. :D
I feel like the guy who prints the obituary before the person has died. Oh well. I do have a couple of things planned for pi day, I’m just not sure if I’ll be able to finish them in time.
But I do have a new Wikipedia animation (pi-related, coincidentally) almost ready.
I’ve always been fond of the idea of drawing mathematical objects as physical things, with depth, thickness and mass. Here’s one of the instances I played with the concept, using several sine-like curves in 3D space.
This is a very old thing from over 6 years ago. I was trying to generate random 3D fibers in such a way that they could be tile-able both horizontally and vertically (sadly, this scaled version breaks it). It was intended for a background of a website I was working on back then.
Of course, it was going to be much more subtle and non-animated. This animation was just a test of how the code was performing, and how it looked with different amplitudes. The results are pretty cool.
Originally in monochrome, added some color overlay to it to spice things up. Here’s another render, this time static and a bit more aggressive with the angles. Still tile-able in both directions.
Some anonymous person asked me to do this with a linear movement from the starting position to the ending position of each point, instead of along the spiral’s curve, as I did before.
Since it would require an incredibly tiny change to the code, I decided to give it a shot.
On the left, the colors are based on the angle in the original parametrization. On the right, the colors are based on the number of turns. While the transformation is continuous, it is not smooth: this transition creates “kinks” in the curve partway through.
Then I got curious: exactly how does the re-parametrization redistribute the points along this curve? In the original parametrization, the points are bunched up in the middle of the spiral, and more spaced on the outside. The arc-length parametrization makes them equally spaced along the whole path. So how do they compare?
First, I tried this with black points, but it was too confusing. The same went for highlighting just a few dots. So I decided to color them all based on the angle in the original parametrization. This is the result.
It is really interesting how the colors are bent around. It seems that the distribution is quite non-uniform, even though the spiral is rather uniform in growth.
I originally rendered this with four times as many frames, but due to the amount of colors and dimensions of the GIF, Tumblr wouldn’t accept it. It was too large. Below is the animation with twice as many frames.
Hint: try squinting! It blurs the colors and it looks really trippy!
Easing functions are an immensely useful tool for animators. They are very handy when we want to spice up an animation and give it an extra cool or polished look, and are incredibly simple to implement in code.
The main idea is that you have a starting point A and an ending point B, and you want something to move from A to B along a (not necessarily straight) path connecting both points.
However, the path between the points is not the only thing to consider: there’s also how the object will traverse this path, how fast it’ll move at each point, how it will accelerate, etc.
What we are looking for is a uniform “speed” parameterization of the path, that is, we want a function f(t) that returns a point in space along the path. The function is built so f(t=0) gives us the starting point and f(t=1) gives us the ending point. Additionally, for equally spaced values of t in the unit interval [0,1], we want equally spaced points along the path.
Unit speed parameterization of curves is not a trivial thing, but for a straight line path using linear interpolation — which is by far the most common — it is very straightforward: we don’t have to do anything. It is already uniform in speed!
This is where easing functions come in. The easing function e(t) takes an input value t, from 0 to 1, and returns a new value, not necessarily from 0 to 1 (to account for overshoots). The only constraint is that e(0) = 0 and e(1) = 1. The value returned by the easing function is what we use to get the current position along the path.
In math terms, if our path is given by f(t) and our easing function is e(t), we’ll use f(e(t)) in our animation code.
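A minimal sketch of that composition, using a hypothetical linear path and the classic smoothstep easing as examples (names are my own, not from any particular library):

```python
# Linear interpolation between points A and B: the path f(t).
def lerp(a, b, t):
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

# A classic easing function: e(0) = 0, e(1) = 1, zero slope at both ends.
def smoothstep(t):
    return t * t * (3 - 2 * t)

A, B = (0.0, 0.0), (10.0, 5.0)
path = lambda t: lerp(A, B, t)
position = lambda t: path(smoothstep(t))  # f(e(t)) in the notation above
```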
In the animation above, you see the result of using several different easing functions on a simple linear path.
The horizontal value of each graph is the time parameter t, and the vertical value is the value returned by e(t). The box delimits the interval [0,1] in both directions.
Shown to the right of each graph is the movement you get with this easing function. You can see that even the slightest variation from the super-lame straight line (top left) is already much nicer to look at.
The functions shown here were all custom made, and are part of my personal animation library. Linear, power and sine are found everywhere, and are the most basic ones.
Most libraries also include “elastic” and “bounce”, among others, but these are always fixed Bézier curve or polynomial approximations, which are pretty bad since you can’t fine-tune them to your needs. So I wrote my own.
The trade-off for being totally tunable is that they are not optimized for real time, but that isn’t an issue for me.
You’ll also notice that I haven’t included ease-in and ease-out separately. I find it mostly useless. I’ve never seen anyone using “elastic/bounce ease in”, for instance, and I hope it has never been used by anyone. It looks like garbage, as you can see when the animations run backwards.
In any case, creating mixed functions from these is very easy, just a matter of acting in reverse on half the interval, and subtracting the function from 1 for the ease-in parts.
This is usually found in three flavors out there: quad(ratic), cubic and quart(ic). I decided to just wrap them all in the same function, as it’s the same construction, just with different powers.
The idea is to use a variation of t^p and its reflection to create the ease-in and ease-out bits.
In particular, you have (2t)^p/2 for t in [0,0.5] and 1 - (2(1-t))^p/2 (non-expanded for clarity) for t in (0.5,1]. All values p > 0 are well-behaved in the unit interval.
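As a sketch, that construction in code (my own phrasing of it, not the library’s actual implementation):

```python
# Power ease-in-out with tunable exponent p:
# p = 2 quadratic, p = 3 cubic, p = 4 quartic; any p > 0 works.
def power_ease(t, p=2.0):
    if t <= 0.5:
        return (2 * t) ** p / 2          # ease-in half
    return 1 - (2 * (1 - t)) ** p / 2    # reflected ease-out half
```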
This one is simply sin(t·π/2)^2. You can easily get rid of that power using the familiar identity, but it looks cleaner this way.
The bounce one is based on the actual physics of parabolic motion. It is tuned by two parameters: decay power and number of times it hits the ground. This means you can set, precisely, how many times you want it to bounce around, and you can fine tune how sharply it will lose energy after each bounce.
I usually avoid using exponential decay on its own because it doesn’t reach zero exactly at the end of the interval, and ending at exactly zero is usually more desirable than a physically accurate decay rate. So I tend to use a factor of (1-t)^p for decays in general. It offers more freedom anyway.
Most libraries include “elastic” and “back” (which overshoots a bit). They look all right, but are not accurate models of physical motion, and you can’t fine tune them much.
My “physical” easing function replaces both with a solution for damped harmonic oscillation, where you can manually set the decay rate and frequency of oscillation. This means you can have exactly as many back-and-forth motions as you want. The exponential decay rate was also replaced by the more malleable (1-t)^p expression.
Using frequencies like 1 or 0.5 gives you a replacement for the “back” easing in other libraries, with the benefit of tuning. Frequencies that are not multiples of 1/2 tend to look bad, but thanks to the decay function they still end up at 1 no matter what.
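The post doesn’t give the exact formula, but a damped-oscillation easing along these lines can be sketched as follows (my own guess at the construction, using the (1-t)^p envelope mentioned above, not the author’s code):

```python
import math

# Damped oscillation around the target: (1 - t)^p is the decay envelope,
# freq counts the back-and-forth cycles. The envelope guarantees
# e(0) = 0 and e(1) = 1 for any p > 0 and any freq.
def physical_ease(t, p=3.0, freq=2.0):
    return 1 - (1 - t) ** p * math.cos(2 * math.pi * freq * t)
```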
This is one of the most useful ones, and something like it is lacking everywhere I looked. In a lot of situations, it is desirable to have a “mostly linear” movement, with a steady speed in the middle of it. The biggest problem with linear interpolation is the ending points. Having the object static and suddenly starting to move looks jarring and unrealistic.
The “uniform” easing I came up with is a way of keeping the best of both worlds: you can tune how much of the path will be linear, and how much of the remaining will be used by acceleration/deceleration. You can also tune how aggressive acceleration/deceleration will be.
Due to its almost-linear nature, it works exceptionally well with other easing functions. This is shown in the last one (bottom right), where I used it along with the bounce function to give it an extra anticipation in both ends. It makes the bounce feel heavier. Looks pretty good!
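The exact construction isn’t given, but here is one way to sketch such a “uniform” easing, assuming a power-law speed ramp at each end (my own reconstruction, not the author’s code):

```python
# Speed ramps up as a power law over [0, a], is constant over [a, 1-a],
# and ramps down symmetrically over [1-a, 1]. Integrating that speed
# profile and normalizing gives a [0,1] -> [0,1] easing. The parameter a
# (0 < a <= 0.5) sets how much of each end is used for acceleration, and
# p sets how aggressive the ramp is.
def uniform_ease(t, a=0.2, p=2.0):
    total = 2 * a / p + (1 - 2 * a)  # unnormalized total distance
    if t < a:
        d = (a / p) * (t / a) ** p
    elif t <= 1 - a:
        d = a / p + (t - a)
    else:
        d = total - (a / p) * ((1 - t) / a) ** p
    return d / total
```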
I will write a detailed post about each of them along with pseudocode if there’s enough interest. Since these functions aren’t meant for real-time use, they are not ready for contexts with a lot of moving objects. It would be pretty easy to cache them and make them super fast at run time, though.
However, most people seem to be happy enough with their easing libraries, so I’m not sure if it’s worth the trouble, nor if Tumblr is the best way to go about it.
So if you are interested, please drop me a request so I know I won’t be wasting time posting them here.
The continuous Fourier transform takes an input function f(x) in the time domain and turns it into a new function, ƒ̂(x), in the frequency domain. (These can represent other things too, but that’s beside the point.)
(Tumblr kept rejecting the proper sized GIFs, so they may look blurry, pixelated or compressed to you. There’s also HD video.)
In the second animation, the transform is reapplied to the normalized sinc function, and we get our original rect function back.
It takes four iterations of the Fourier transform to get back to the original function. We say it is a 4-periodic automorphism.
However, in this particular example, and with this particular definition of the Fourier transform, the rect function and the sinc function are exact Fourier transforms of each other. Using other definitions would require all four applications, as we would get distorted rect and sinc functions in the intermediate steps.
For simplicity, I opted for this definition so I don’t have very tall and very wide intermediate functions, or the need for a very long animation. Those don’t really work visually, and the details can be easily extrapolated once the main idea gets across, I think.
In this example, it also happens that there are no imaginary/sine components, so you’re looking at the real/cosine components only.
Shown at left, overlaid on the red time domain curve, you’ll notice a changing yellow curve. This is the approximation using the components extracted from the frequency domain “found” so far (the blue cosines sweeping the surface). The approximation is calculated by adding all the components, by integrating along the entire surface (this is continuous, remember?)
As we add more and more of the components, the approximation improves. In some special cases, it is exact. For the rect function, it isn’t, and you get some wavy artifacts in some places (the sudden jumps, aka discontinuities). These are due to the Gibbs phenomenon, and are the main cause of ringing artifacts. As you’ll probably notice, the approximation is pretty much dead on for the sinc function, as shown in the second animation.
The illustration shows the domains in the interval [-5,5], but the Fourier transform extends infinitely to all directions, of course.
The surface illustrated here isn’t too far off from the approach used in Fourier operators. If you consider the surfaces defined by z = cos(xy) and z = sin(xy), you get the cosine and sine Fourier operators. Having complex values lets you mix both into one thing.
The surface you see in the first animation is just z = cos(2πxy)sinc(πy). The Fourier transform can be thought of as multiplying a function by these continuous operators, and integrating the result. This can be very neatly performed using matrix multiplication in the discrete cases. (New drinking game: take a shot every time linear algebra shows up in any mathematical discussion.)
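As a sketch of that matrix/operator view in the discrete case, the unitary discrete Fourier transform really is 4-periodic (my own minimal example, not the code behind the animation):

```python
import cmath
import math

N = 8

# Unitary discrete Fourier transform (1/sqrt(N) normalization), written
# as an explicit sum; equivalently, multiplication by the DFT matrix.
def dft(xs):
    return [sum(xm * cmath.exp(-2j * cmath.pi * k * m / N)
                for m, xm in enumerate(xs)) / math.sqrt(N)
            for k in range(N)]

# Four applications of the transform return the original signal.
x = [complex(math.cos(2 * math.pi * k / N)) for k in range(N)]
y = x
for _ in range(4):
    y = dft(y)
```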
This also explains why the Fourier transform is cyclic after 4 iterations: rotating 90° four times gets you back to your original position. By using different rotation angles, you get fractional Fourier transforms. Awesome stuff.
NOTE: This animation is a follow-up to the previous one on time/frequency domains, showing discrete frequency components. Check that one out too, as it may help with understanding this one.
Sadly, I had to reduce the images to 400 pixels wide instead of 500. Tumblr wouldn’t accept it otherwise. However, an HD video is also available:
This animation would probably look better with a different way of rendering that surface. Sorry, I don’t have anything better available at the moment, but I’ll work on it. If I do come up with something, I’ll post an update.