
Edits to reordering section (#185)

* Edits to reordering section

* Just spotted a typo

* Update content.en-GB.md

* Update content.en-GB.md
Simon Cozens 2019-06-11 01:09:08 +01:00 committed by Pomax
parent 284cc28770
commit 3de2ee1cbf


# Lowering and elevating curve order
One interesting property of Bézier curves is that an *n<sup>th</sup>* order curve can always be perfectly represented by an *(n+1)<sup>th</sup>* order curve, by giving the higher-order curve specific control points.
If we have a curve with three points, then we can create a curve with four points that exactly reproduces the original curve. First, we give it the same start and end points, and for its two control points we pick "1/3<sup>rd</sup> start + 2/3<sup>rd</sup> control" and "2/3<sup>rd</sup> control + 1/3<sup>rd</sup> end". Now we have exactly the same curve as before, except represented as a cubic curve rather than a quadratic curve.
The general rule for raising an *n<sup>th</sup>* order curve to an *(n+1)<sup>th</sup>* order curve is as follows (observing that the start and end weights are the same as the start and end weights for the old curve):
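In code, that rule is straightforward to apply directly to a list of coordinates. Here is a minimal sketch in Python (the function name and the plain `(x, y)` tuple format are illustrative choices, not anything from the article's own codebase):

```python
def elevate_once(points):
    """Raise a Bézier curve from order n to order n+1.

    `points` holds the n+1 control points of an nth order curve as (x, y)
    tuples; the result is a list of n+2 control points that describe the
    exact same curve, one order higher.
    """
    n = len(points) - 1          # current curve order
    k = n + 1                    # new curve order
    raised = [points[0]]         # the start point is reused as-is
    for i in range(1, k):
        # new point i = i/(n+1) * old point (i-1)  +  (1 - i/(n+1)) * old point i
        x = (i / k) * points[i - 1][0] + (1 - i / k) * points[i][0]
        y = (i / k) * points[i - 1][1] + (1 - i / k) * points[i][1]
        raised.append((x, y))
    raised.append(points[-1])    # the end point is reused as-is
    return raised

# Quadratic (three points) to cubic (four points): the two new control
# points come out as the "1/3rd start + 2/3rd control" and
# "2/3rd control + 1/3rd end" mixes described above.
print(elevate_once([(0, 0), (3, 6), (9, 0)]))
# [(0, 0), (2.0, 4.0), (5.0, 4.0), (9, 0)]
```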
We start by taking the standard Bézier function:

\[
Bézier(n,t) = \sum_{i=0}^{n} b_i B^n_i(t)
\]
And then, we apply one of those silly (actually, super useful) calculus tricks: since our `t` value is always between zero and one (inclusive), we know that `(1-t)` plus `t` always sums to 1. As such, we can express any value as a sum of `t` and `1-t`:
\[
x = 1 x = \left ( (1-t) + t \right ) x = (1-t) x + t x = x (1-t) + x t
\]
So far so good. Now, to see why we did this, let's write out the `(1-t)` and `t` parts:

\[
\begin{aligned}
(1-t) \cdot B^n_i(t) &= (1-t) \cdot \binom{n}{i} \cdot t^i \cdot (1-t)^{n-i} \\
&= \binom{n}{i} \cdot t^i \cdot (1-t)^{(n+1)-i} \\
&= \frac{n+1-i}{n+1} \cdot \binom{n+1}{i} \cdot t^i \cdot (1-t)^{(n+1)-i} \\
&= \frac{n+1-i}{n+1} \cdot B^{n+1}_i(t)
\end{aligned}
\]
So by using this seemingly silly trick, we can suddenly express part of our n<sup>th</sup> order Bézier function in terms of an (n+1)<sup>th</sup> order Bézier function. And that sounds a lot like raising the curve order! Of course we need to be able to repeat that trick for the `t` part, but that's not a problem:
\[
\begin{aligned}
t \cdot B^n_i(t) &= t \cdot \binom{n}{i} \cdot t^i \cdot (1-t)^{n-i} \\
&= \binom{n}{i} \cdot t^{i+1} \cdot (1-t)^{n-i} \\
&= \frac{i+1}{n+1} \cdot \binom{n+1}{i+1} \cdot t^{i+1} \cdot (1-t)^{n-i} \\
&= \frac{i+1}{n+1} \cdot B^{n+1}_{i+1}(t)
\end{aligned}
\]
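If you'd like to confirm those two rewrites without redoing the algebra, a quick numerical spot check will do. This is just a sketch, with Python's `math.comb` standing in for the binomial coefficient:

```python
from math import comb

def B(n, i, t):
    """The ith Bernstein basis function of degree n, evaluated at t."""
    return comb(n, i) * t ** i * (1 - t) ** (n - i)

n, i, t = 5, 2, 0.37
print((1 - t) * B(n, i, t), (n + 1 - i) / (n + 1) * B(n + 1, i, t))      # equal
print(t * B(n, i, t),       (i + 1) / (n + 1) * B(n + 1, i + 1, t))      # equal
```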
So, with both of those changed from an order `n` expression to an order `(n+1)` expression, we can put them back together again. Now, where the order `n` function had a summation from 0 to `n`, the order `n+1` function uses a summation from 0 to `n+1`, but this shouldn't be a problem as long as we can add some new terms that "contribute nothing". If you read the section on derivatives, you may remember that "higher terms than there is a binomial for" and "lower than zero terms" both "contribute nothing". So as long as we can add terms that have the same form as the terms we need, we can just include them in the summation, they'll sit there and do nothing, and the resulting function stays identical to the lower order curve.
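Here's a small numerical sanity check of that claim (a sketch, not the article's code): sample a quadratic curve and its elevated cubic at a range of `t` values and confirm they trace the same points.

```python
from math import comb

def bezier_point(points, t):
    """Evaluate an nth order Bézier curve (n+1 control points) at t."""
    n = len(points) - 1
    x = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * px for i, (px, py) in enumerate(points))
    y = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * py for i, (px, py) in enumerate(points))
    return (x, y)

quadratic = [(0, 0), (3, 6), (9, 0)]
# the elevated cubic, using the "1/3rd + 2/3rd" mixes from earlier
cubic = [(0, 0), (2, 4), (5, 4), (9, 0)]

worst = max(
    abs(bezier_point(quadratic, t / 100)[0] - bezier_point(cubic, t / 100)[0]) +
    abs(bezier_point(quadratic, t / 100)[1] - bezier_point(cubic, t / 100)[1])
    for t in range(101)
)
print(worst)   # effectively zero: only floating point noise remains
```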
Let's do this:
And this is where we switch over from calculus to linear algebra, and matrices:

\[
M B_n = B_k
\]
where the matrix **M** is an `n+1` by `n` matrix, and looks like:
\[
M =
\left [
\begin{matrix}
1 & 0 & 0 & \cdots & 0 & 0 \\
\frac{1}{n+1} & \frac{n}{n+1} & 0 & \cdots & 0 & 0 \\
0 & \frac{2}{n+1} & \frac{n-1}{n+1} & \cdots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & \frac{n}{n+1} & \frac{1}{n+1} \\
0 & 0 & 0 & \cdots & 0 & 1
\end{matrix}
\right ]
\]
That might look unwieldy, but it's really just a mostly-zeroes matrix, with a very simple fraction on the diagonal, and an even simpler fraction to the left of it. Multiplying a list of coordinates with this matrix means we can plug the resulting transformed coordinates into the one-order-higher function and get an identical-looking curve.
Not too bad!
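To make that concrete, here's a sketch of that matrix using numpy (not the article's own code; the sizing here maps the `n+1` control points of an `n`<sup>th</sup> order curve onto `n+2` new points, so the exact row/column counts depend on whether you count points or curve order):

```python
import numpy as np

def elevation_matrix(n):
    """Matrix that maps the control points of an nth order curve onto the
    control points of the equivalent (n+1)th order curve."""
    k = n + 1
    M = np.zeros((k + 1, k))
    M[0, 0] = 1.0                    # start point passes through unchanged
    M[k, k - 1] = 1.0                # end point passes through unchanged
    for i in range(1, k):
        M[i, i - 1] = i / k          # the "simpler fraction" left of the diagonal
        M[i, i] = (k - i) / k        # the fraction on the diagonal
    return M

# quadratic control points, one row per coordinate pair
B2 = np.array([[0.0, 0.0], [3.0, 6.0], [9.0, 0.0]])
M = elevation_matrix(2)
print(M)
print(M @ B2)   # the four cubic control points: (0,0), (2,4), (5,4), (9,0)
```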
Equally interesting, though, is that with this matrix operation established, we can now use an incredibly powerful and ridiculously simple way to find a "best fit" way to reverse the operation, called [the normal equation](http://mathworld.wolfram.com/NormalEquation.html). What it does is minimize the sum of the square differences between one set of values and another set of values. Specifically, if we can express that as some function **A x = b**, we can use it. And as it so happens, that's exactly what we're dealing with, so:
\[
\begin{aligned}
M B_n &= B_k \\
M^T M B_n &= M^T B_k \\
\left ( M^T M \right )^{-1} M^T M B_n &= \left ( M^T M \right )^{-1} M^T B_k \\
I B_n &= \left ( M^T M \right )^{-1} M^T B_k \\
B_n &= \left ( M^T M \right )^{-1} M^T B_k
\end{aligned}
\]
The steps taken here are:
1. We have a function in a form that the normal equation can be used with, so
2. apply the normal equation!
3. Then, we want to end up with just B<sub>n</sub> on the left, so we start by left-multiplying both sides such that we'll end up with lots of stuff on the left that simplifies to "a factor 1", which in matrix maths is the [identity matrix](https://en.wikipedia.org/wiki/Identity_matrix).
4. In fact, by left-multiplying with the inverse of what was already there, we've effectively "nullified" (but really, one-inified) that big, unwieldly block into the identity matrix **I**, so we make that substitution, and then
4. In fact, by left-multiplying with the inverse of what was already there, we've effectively "nullified" (but really, one-inified) that big, unwieldy block into the identity matrix **I**. So we substitute the mess with **I**, and then
5. because multiplication with the identity matrix does nothing (like multiplying by 1 does nothing in regular algebra), we just drop it.
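Numerically, those five steps collapse into the single expression B<sub>n</sub> = (M<sup>T</sup>M)<sup>-1</sup>M<sup>T</sup>B<sub>k</sub>, which is easy to evaluate with numpy. The sketch below is an illustration only (the helper names are made up, and it reuses the same elevation matrix construction as above), not the article's implementation:

```python
import numpy as np

def elevation_matrix(n):
    """Matrix that raises an nth order curve's control points by one order."""
    k = n + 1
    M = np.zeros((k + 1, k))
    M[0, 0] = 1.0
    M[k, k - 1] = 1.0
    for i in range(1, k):
        M[i, i - 1] = i / k
        M[i, i] = (k - i) / k
    return M

def lower_order(points):
    """Best-fit approximation of an (n+1)th order curve by an nth order one,
    via the normal equation: B_n = (M^T M)^-1 M^T B_k."""
    B_k = np.asarray(points, dtype=float)    # higher-order control points, one per row
    M = elevation_matrix(B_k.shape[0] - 2)   # the matrix that would raise the result again
    return np.linalg.inv(M.T @ M) @ M.T @ B_k

# A cubic that was a quadratic in disguise comes back out exactly;
# a "genuinely cubic" curve comes back as its closest quadratic fit.
print(lower_order([(0, 0), (2, 4), (5, 4), (9, 0)]))    # ~ (0,0), (3,6), (9,0)
print(lower_order([(0, 0), (1, 5), (6, 5), (9, 0)]))    # a best-fit quadratic
```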
And we're done: we now have an expression that lets us approximate an `n+1`<sup>th</sup> order curve with a lower `n`<sup>th</sup> order curve. It won't be an exact fit, but it's definitely a best approximation. So, let's apply these rules for raising and lowering curve order to a (semi) random curve, using the following graphic. Select the sketch, which has movable control points, and press your up and down arrow keys to raise or lower the curve order.