diff --git a/chapters/aligning/content.en-GB.md b/chapters/aligning/content.en-GB.md
index c8a312a7..69629d4c 100644
--- a/chapters/aligning/content.en-GB.md
+++ b/chapters/aligning/content.en-GB.md
@@ -40,5 +40,5 @@ If we drop all the zero-terms, this gives us:
We can see that our original curve definition has been simplified considerably. The following graphics illustrate the result of aligning our example curves to the x-axis, with the cubic case using the coordinates that were just used in the example formulae:
- Finding the solution for "where is this line 0" should be trivial:
+ The derivative of a quadratic Bézier curve is a linear Bézier curve, interpolating between just two terms, which means finding the solution for "where is this line 0" is effectively trivial by rewriting it to a function of t and solving. First we turn our quadratic Bézier function into a linear one, by following the rule mentioned at the end of the derivatives section:
- Done. And quadratic curves have no meaningful second derivative, so we're really done.
+ And then we turn this into our solution for t using basic arithmetic:
+
Done.
+
+ Although with the caveat that if b-a is zero, there is no solution and we probably shouldn't try to perform that division.
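
To make that concrete, here is a minimal sketch in code (the function name and the [0,1] filtering are mine, not the chapter's), assuming we work on one aligned coordinate component at a time:

```js
// Derivative of a quadratic Bézier component with aligned values p1, p2, p3:
// it is the line from a = 2(p2-p1) to b = 2(p3-p2), so a + t·(b-a) = 0
// gives t = a / (a-b).
function linearDerivativeRoot(p1, p2, p3) {
  const a = 2 * (p2 - p1);
  const b = 2 * (p3 - p2);
  if (b - a === 0) return []; // the caveat: nothing to divide by, no solution
  const t = a / (a - b);
  return t >= 0 && t <= 1 ? [t] : []; // only roots on the curve itself count
}
```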
- The derivative of a cubic curve is a quadratic curve, and finding the roots for a quadratic Bézier curve means we can apply the
+ The derivative of a cubic Bézier curve is a quadratic Bézier curve, and finding the roots for a quadratic polynomial means we can apply the
Quadratic formula. If you've seen it before, you'll remember it, and if you haven't,
@@ -2944,7 +2966,7 @@ function drawCurve(points[], t):
height="40px" />
- So, if we can express a Bézier component function as a plain
+ So, if we can rewrite the Bézier component function as a plain
polynomial, we're done: we just plug in the values into the quadratic formula, check if that square root is negative or not (if it is, there are no roots) and then just compute the two values that
@@ -2975,11 +2997,10 @@ function drawCurve(points[], t):
height="119px" />
- This gives us thee coefficients a, b, and c that are expressed in terms of v values, where the v values are just convenient expressions of our original p values, so we can do some trivial substitution to get:
+ This gives us three coefficients {a, b, c} that are expressed in terms of v values, where the v values are expressions of our original coordinate values, so we can do some substitution to get:
Easy-peasy. We can now almost trivially find the roots by plugging
- those values into the quadratic formula. We also note that the second derivative of a cubic curve means computing the first derivative of a quadratic curve, and we just saw how to do that in the section above.
+ those values into the quadratic formula.
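
As a sketch of that whole procedure, under the same assumptions as before (one aligned component at a time; the names are mine rather than the chapter's):

```js
// Find where the derivative of an aligned cubic Bézier component is zero,
// by writing that (quadratic) derivative as a·t² + b·t + c and applying
// the quadratic formula.
function cubicDerivativeRoots(p1, p2, p3, p4) {
  // derivative control values
  const v1 = 3 * (p2 - p1), v2 = 3 * (p3 - p2), v3 = 3 * (p4 - p3);
  // plain polynomial coefficients
  const a = v1 - 2 * v2 + v3, b = 2 * (v2 - v1), c = v1;
  if (a === 0) {
    // derivative is actually linear
    return b === 0 ? [] : [-c / b].filter(t => t >= 0 && t <= 1);
  }
  const discriminant = b * b - 4 * a * c;
  if (discriminant < 0) return []; // negative square root: no real roots
  const q = Math.sqrt(discriminant);
  return [(q - b) / (2 * a), (-q - b) / (2 * a)].filter(t => t >= 0 && t <= 1);
}
```

Calling this once with the x values and once with the y values of the aligned curve gives every candidate t value for the extremities.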
- Quartic—fourth degree—curves have a cubic function as derivative. Now, cubic functions are a bit of a problem because they're really hard to solve. But, way back in the 16th century,
+ We haven't really looked at them before now, but the next step up would be a Quartic curve, a fourth degree Bézier curve. As expected, these have a derivative that is a cubic function, and now things get much harder. Cubic functions don't have a "simple" rule to find their roots, like the quadratic formula, and instead require quite a bit of rewriting to a form that we can even start to try to solve.
+
+ Back in the 16th century, before Bézier curves were a thing, and even before calculus itself was a thing,
Gerolamo Cardano figured out that even if the general cubic function is really hard to solve, it can be rewritten to a form for which finding the roots
- is "easy", and then the only hard part is figuring out how to go from that form to the generic form. So:
+ is "easier" (even if not "easy"):
- This is easier because for the "easier formula" we can use
+ We can see that the easier formula only has two constants, rather than four, and only two expressions involving t, rather than three: this makes things considerably easier to solve because it lets us use
regular calculus
- to find the roots. (As a cubic function, however, it can have up to three roots, but two of those can be complex. For the purpose of Bézier curve extremities, we can completely ignore those complex roots, since our t is a plain real number from 0 to 1.)
+ to find the values that satisfy the equation.
- So, the trick is to figure out how to turn the first formula into - the second formula, and to then work out the maths that gives us the - roots. This is explained in detail over at + Now, there is one small hitch: as a cubic function, the solutions + may be + complex numbers + rather than plain numbers... And Cardona realised this, centuries + befor complex numbers were a well-understood and established part of + number theory. His interpretation of them was "these numbers are + impossible but that's okay because they disappear again in later + steps", allowing him to not think about them too much, but we have + it even easier: as we're trying to find the roots for display + purposes, we don't even care about complex numbers: we're + going to simplify Cardano's approach just that tiny bit further by + throwing away any solution that's not a plain number. +
++ So, how do we rewrite the hard formula into the easier formula? This + is explained in detail over at Ken J. Ward's page for solving the cubic equation, so instead of showing the maths, I'm simply going to show the programming code for solving the cubic - equation, with the complex roots getting totally ignored. + equation, with the complex roots getting totally ignored, but if + you're interested you should definitely head over to Ken's page and + give the procedure a read-through.
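
The chapter's own code isn't captured in this diff, but to give a flavour of what such a root finder looks like, here is a hedged sketch. It is not the chapter's getCubicRoots (which starts from the four aligned control values pa, pb, pc, pd); it takes plain polynomial coefficients instead, and uses the standard depressed-cubic and trigonometric forms, discarding complex solutions along the way:

```js
// Real roots in [0,1] of a·t³ + b·t² + c·t + d = 0, complex roots ignored.
function cuberoot(v) {
  return v < 0 ? -Math.pow(-v, 1 / 3) : Math.pow(v, 1 / 3);
}

function realCubicRoots(a, b, c, d) {
  const inUnitInterval = (t) => t >= 0 && t <= 1;

  if (Math.abs(a) < 1e-12) {
    // Not actually a cubic: fall back to the quadratic (or linear) case.
    if (Math.abs(b) < 1e-12) {
      return Math.abs(c) < 1e-12 ? [] : [-d / c].filter(inUnitInterval);
    }
    const disc = c * c - 4 * b * d;
    if (disc < 0) return [];
    const s = Math.sqrt(disc);
    return [(s - c) / (2 * b), (-s - c) / (2 * b)].filter(inUnitInterval);
  }

  // Normalise to t³ + A·t² + B·t + C, then substitute t = u - A/3 so we only
  // have to solve the "easier" two-constant form u³ + p·u + q = 0.
  const A = b / a, B = c / a, C = d / a;
  const p = B - (A * A) / 3;
  const q = (2 * A * A * A) / 27 - (A * B) / 3 + C;
  const offset = -A / 3;
  const discriminant = (q * q) / 4 + (p * p * p) / 27;

  let roots;
  if (discriminant > 0) {
    // One real root; the other two are complex, and we throw those away.
    const sd = Math.sqrt(discriminant);
    roots = [cuberoot(-q / 2 + sd) + cuberoot(-q / 2 - sd) + offset];
  } else if (discriminant === 0) {
    // All roots real, at least two of them equal.
    const u = cuberoot(-q / 2);
    roots = [2 * u + offset, -u + offset];
  } else {
    // Three distinct real roots, found via the trigonometric method.
    const r = 2 * Math.sqrt(-p / 3);
    const phi = Math.acos(((3 * q) / (2 * p)) * Math.sqrt(-3 / p));
    roots = [0, 1, 2].map((k) => r * Math.cos(phi / 3 - (2 * Math.PI * k) / 3) + offset);
  }
  return roots.filter(inUnitInterval);
}
```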
And that's it. The maths is complicated, but the code is pretty much just "follow the maths, while caching as many values as we can to
- reduce recomputing things as much as possible" and now we have a way to find all roots for a cubic function and can just move on with
+ prevent recomputing things as much as possible" and now we have a way to find all roots for a cubic function and can just move on with
using that to find extremities of our curves.
- The problem with this is that as the order of the curve goes up, we can't actually solve those equations the normal way. We can't take the function, and then work out what the solutions are. Not to mention that even solving a third order derivative (for a fourth order curve) is already a royal pain in the backside. We need a better solution. We need numerical approaches.
+ And this is where things stop, because we cannot find the roots for polynomials of degree 5 or higher using algebra (a fact known as the Abel–Ruffini theorem). Instead, for occasions like these, where algebra simply cannot yield an answer, we turn to numerical analysis.
- That's a fancy word for saying "rather than solve the function, treat the problem as a sequence of identical operations, the performing of which gets us closer and closer to the real answer". As it turns out, there is a really nice numerical root-finding
+ That's a fancy term for saying "rather than trying to find exact answers by manipulating symbols, find approximate answers by describing the underlying process as a combination of steps, each of which can be assigned a number via symbolic manipulation". For example, trying to mathematically compute how much water fits in a completely crazy three dimensional shape is very hard, even if it got you the perfect, precise answer. A much easier approach, which would be less perfect but still entirely useful, would be to just grab a bucket and start filling the shape until it was full: just count the number of buckets of water you used. And if we want a more precise answer, we can use smaller buckets.
+
+ So that's what we're going to do here, too: we're going to treat the problem as a sequence of steps, and the smaller we can make each step, the closer we'll get to that "perfect, precise" answer. And as it turns out, there is a really nice numerical root-finding
algorithm, called the Newton-Raphson root finding method (yes, that
- Newton), which we can make use of.
+ Newton), which we can make use of. The Newton-Raphson approach consists of taking our impossible-to-solve function f(x), picking some initial value x (literally any value will do), and calculating f(x). We can think of that value as the "height" of the function at x. If that height is zero, we're done, we have found a root. If it isn't, we calculate the tangent line at f(x) and calculate at which x value its height is zero (which we've already seen is very easy). That will give us a new x and we repeat the process until we find a root.
- The Newton-Raphson approach consists of picking a value t (any value will do), and getting the corresponding value of the function at that t value. For normal functions, we can treat that value as a height. If the height is zero, we're done, we have found a root. If it's not, we take the tangent of the curve at that point, and extend it until it passes the x-axis, which will be at some new point t. We then repeat the procedure with this new value, and we keep doing this until we find our root.
-
- Mathematically, this means that for some t, at step n=1, we perform the following calculation until fy(t) is zero, so that the next t is the same as the one we already have:
+ Mathematically, this means that for some x, at step n=1, we perform the following calculation until fy(x) is zero, so that the next x is the same as the one we already have:
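
The calculation itself is rendered as a graphic in the chapter and isn't captured in this diff, but it is the standard Newton update step, written here with the fy notation from the text:

```latex
x_{n+1} = x_n - \frac{f_y(x_n)}{f_y'(x_n)}
```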
- (The Wikipedia article has a decent animation for this process, so I'm not adding a sketch for that here unless there are requests for it)
+ (The Wikipedia article has a decent animation for this process, so I will not add a graphic for that here)
Now, this works well only if we can pick good starting points, and
- our curve is continuously differentiable and doesn't have oscillations. Glossing over the exact meaning of those terms, the curves we're dealing with conform to those constraints, so as long as we pick good starting points, this will work. So the question is: which starting points do we pick?
+ our curve is continuously differentiable and doesn't have oscillations. Glossing over the exact meaning of those terms, the curves we're dealing with conform to those constraints, so as long as we pick good starting points, this will work. So the question is: which starting points do we pick?
- As it turns out, Newton-Raphson is so blindingly fast, so we could
+ As it turns out, Newton-Raphson is so blindingly fast that we could
get away with just not picking: we simply run the algorithm from t=0 to t=1 at small steps (say, 1/200th) and the result will be all the roots we want. Of course, this may pose problems for high order Bézier curves: 200 steps for a 200th order Bézier curve is going to go
- wrong, but that's okay: there is no reason, ever, to use Bézier curves of crazy high orders. You might use a fifth order curve to get the "nicest still remotely workable" approximation of a full circle with a single Bézier curve, that's pretty much as high as you'll ever need to go.
+ wrong, but that's okay: there is no reason (at least, none that I know of) to ever use Bézier curves of crazy high orders. You might use a fifth order curve to get the "nicest still remotely workable" approximation of a full circle with a single Bézier curve, but that's pretty much as high as you'll ever need to go.
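
As a sketch of that "just step through t and let Newton-Raphson do the work" idea (f and fPrime are placeholders for the aligned component function and its derivative; neither name comes from the chapter):

```js
// Sweep t from 0 to 1 in small steps, running a Newton-Raphson iteration from
// each starting point, and collect the distinct roots that were found.
function newtonRaphsonRoots(f, fPrime, steps = 200, epsilon = 1e-6) {
  const roots = [];
  for (let i = 0; i <= steps; i++) {
    let x = i / steps;
    for (let j = 0; j < 20; j++) {
      const height = f(x);
      if (Math.abs(height) < epsilon) break; // height is ~zero: root found
      const slope = fPrime(x);
      if (slope === 0) break; // flat tangent: this starting point is a dud
      x = x - height / slope; // the Newton step
    }
    // Only record roots on the curve that we haven't already seen.
    if (x >= 0 && x <= 1 && Math.abs(f(x)) < epsilon) {
      if (!roots.some((r) => Math.abs(r - x) < epsilon)) roots.push(x);
    }
  }
  return roots;
}
```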