CC 2.srt
1
00:00:00,080 --> 00:00:05,779
Hello and welcome, dear viewer, to the second
video in the three-part FLAC series.
2
00:00:05,779 --> 00:00:11,880
Today, I’m actually concerning myself with
what makes FLAC tick, and how it does its
3
00:00:11,880 --> 00:00:12,980
compression.
4
00:00:12,980 --> 00:00:18,270
Note that the first video, you can watch that
up here, was just an introduction to digital
5
00:00:18,270 --> 00:00:26,710
audio, so if you haven’t seen it but you already
know all of these terms, you should be good.
6
00:00:26,710 --> 00:00:32,430
Last time, we saw how we can take a sound
wave and represent it literally with numbers,
7
00:00:32,430 --> 00:00:35,450
which is called Pulse Code Modulation.
8
00:00:35,450 --> 00:00:40,590
If you want to go the short route for storing
the audio data, you can of course take these
9
00:00:40,590 --> 00:00:44,790
PCM samples and throw them into your file
as-is.
10
00:00:44,790 --> 00:00:48,340
If you did that, you would get the WAV format.
11
00:00:48,340 --> 00:00:54,690
But as I mentioned in the first video, the
file sizes here get large fast.
12
00:00:54,690 --> 00:01:00,410
Simple math tells us that there are about
1.4 megabits of sample data for each second.
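That “simple math” is just multiplying the CD-audio parameters together; a quick sketch in Python, assuming the 44.1 kHz, 16-bit, stereo parameters from the first video:

```python
# Raw PCM bitrate for CD audio: samples/s x bits/sample x channels.
sample_rate = 44_100   # Hz, per channel
bit_depth = 16         # bits per sample
channels = 2           # stereo

bits_per_second = sample_rate * bit_depth * channels
print(bits_per_second)  # 1411200, i.e. about 1.4 megabits per second
```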
13
00:01:00,410 --> 00:01:06,890
However, FLAC can commonly compress this to
about half the size, 600 kilobits per second
14
00:01:06,890 --> 00:01:08,460
or even less.
15
00:01:08,460 --> 00:01:15,440
We can go further and reach for 150 kilobits
with MP3, but we’re here for lossless compression.
16
00:01:15,440 --> 00:01:20,170
Today I want to dig deep, and it’s easy to
lose track of what we’re trying to do, and
17
00:01:20,170 --> 00:01:21,760
what we have done so far.
18
00:01:21,760 --> 00:01:27,500
Therefore, let’s look at this sidebar once
in a while, just to keep the big picture in
19
00:01:27,500 --> 00:01:29,170
mind.
20
00:01:29,170 --> 00:01:44,130
Linear Predictive Coding
The
21
00:01:44,130 --> 00:01:47,610
key to all kinds of compression is patterns.
22
00:01:47,610 --> 00:01:53,540
Last time, I demonstrated how repeats in data
can be easily removed with basic compression.
23
00:01:53,540 --> 00:02:00,210
Now, if you were to look at this audio data,
there’s no obvious repetition here.
24
00:02:00,210 --> 00:02:05,890
But remember that no matter how random this
looks, we’re still dealing with sound.
25
00:02:05,890 --> 00:02:11,380
And sound is made up of waves, whose defining
feature is that they repeat.
26
00:02:11,380 --> 00:02:13,860
But what are these waves, actually?
27
00:02:13,860 --> 00:02:18,360
I didn’t mention it in the last episode
because it wasn’t relevant, but the most
28
00:02:18,360 --> 00:02:21,740
basic kind of wave is a sine wave.
29
00:02:21,740 --> 00:02:27,660
The mathematical function takes parameters
that modify the amplitude, frequency, phase
30
00:02:27,660 --> 00:02:29,270
and offset of the wave.
31
00:02:29,270 --> 00:02:34,830
In fact, it’s the case that any sound can
be described just by the basic sine waves
32
00:02:34,830 --> 00:02:36,900
it is made out of.
33
00:02:36,900 --> 00:02:42,630
Now you might have a first idea of how to
store the samples: Check what sine waves constitute
34
00:02:42,630 --> 00:02:46,750
this signal, then just store those sine waves
and their parameters.
35
00:02:46,750 --> 00:02:52,940
That’s a great idea, in fact, you’re about
to reinvent MP3 and the Discrete Cosine Transform.
36
00:02:52,940 --> 00:02:56,590
But for FLAC, we won’t see much of a benefit.
37
00:02:56,590 --> 00:03:01,050
The number of frequencies we need to store
is extremely large.
38
00:03:01,050 --> 00:03:06,060
It’s so large that we need just as much
data as for the samples themselves.
39
00:03:06,060 --> 00:03:10,390
And, sine waves are rather expensive to compute.
40
00:03:10,390 --> 00:03:13,240
So let’s use something simpler.
41
00:03:13,240 --> 00:03:16,970
How about a polynomial function?
42
00:03:16,970 --> 00:03:22,090
A quick refresher on high school algebra:
A monomial is any function like these, where
43
00:03:22,090 --> 00:03:27,860
the parameter x is taken to an integer power
and multiplied by a coefficient, and a polynomial
44
00:03:27,860 --> 00:03:30,459
is just adding a bunch of monomials.
45
00:03:30,459 --> 00:03:36,710
Linear functions, constant functions or quadratic
functions, they’re all polynomials.
46
00:03:36,710 --> 00:03:41,980
If we consider the coefficients again, there’s
only one coefficient per x term.
47
00:03:41,980 --> 00:03:49,750
So a function of degree 5, meaning 5 is the
highest power of x, has at most 6 coefficients.
48
00:03:49,750 --> 00:03:54,620
Don’t forget x^0, the constant term!
49
00:03:54,620 --> 00:03:57,209
But we need to take a step back.
50
00:03:57,209 --> 00:04:01,270
We want to approximate waves, which are sine
functions.
51
00:04:01,270 --> 00:04:05,349
Polynomials are nothing like sine functions,
at least not out of the box.
52
00:04:05,349 --> 00:04:09,849
Some of you are already guessing where I’m
going with this, so let me tell you about
53
00:04:09,849 --> 00:04:13,850
one of the most amazing things in calculus.
54
00:04:13,850 --> 00:04:19,069
On the one hand, we have a lot of weird
functions in math, like sine itself.
55
00:04:19,069 --> 00:04:23,249
On the other hand, we have the super-simple
polynomial functions.
56
00:04:23,249 --> 00:04:29,330
So how about we approximate a complicated
function, any function, with a polynomial?
57
00:04:29,330 --> 00:04:35,830
For starters, we pick a point x0 on the function
where the approximation is “centered”,
58
00:04:35,830 --> 00:04:37,080
so to speak.
59
00:04:37,080 --> 00:04:42,279
Then, our approximated function T should of
course have the same value as the original
60
00:04:42,279 --> 00:04:44,690
function at this point.
61
00:04:44,690 --> 00:04:50,080
And then, T should probably have the same
derivative at x0 as f does, so that its slope
62
00:04:50,080 --> 00:04:51,750
is the same.
63
00:04:51,750 --> 00:04:58,089
And then, T should probably have the same
second derivative at x0 as f does, so that
64
00:04:58,089 --> 00:05:00,630
its curve is the same.
65
00:05:00,630 --> 00:05:03,369
And so on, as long as you want to.
66
00:05:03,369 --> 00:05:07,129
If you’re curious about how we get this
formula, watch the linked video, I don’t
67
00:05:07,129 --> 00:05:08,250
have time right now.
68
00:05:08,250 --> 00:05:13,580
The gist is that especially for functions
that can be differentiated many times, the approximation
69
00:05:13,580 --> 00:05:16,759
with a polynomial is really good.
70
00:05:16,759 --> 00:05:21,020
In the case of sine and cosine, in fact, arbitrarily
good!
71
00:05:21,020 --> 00:05:26,800
These formulas here are just what we get when
we put in the well-known values and derivatives
72
00:05:26,800 --> 00:05:29,099
of sine and cosine.
73
00:05:29,099 --> 00:05:34,650
The function T we just created is called the
Taylor Series of f, and it’s one of the
74
00:05:34,650 --> 00:05:38,810
most powerful numeric tools in existence.
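As a small illustration of how good the sine approximation gets, here is a sketch of the Taylor series of sine around 0 (the `terms` count is my own knob, chosen just for this example):

```python
from math import factorial, sin

def taylor_sin(x, terms):
    # Taylor series of sine around 0: x - x^3/3! + x^5/5! - ...
    return sum((-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
               for k in range(terms))

# Five terms already match the library sine to six decimal places at x = 1:
print(round(taylor_sin(1.0, 5), 6), round(sin(1.0), 6))  # 0.841471 0.841471
```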
75
00:05:38,810 --> 00:05:44,229
In our case, we just need to know this: It’s
very possible to approximate an audio wave
76
00:05:44,229 --> 00:05:45,520
closely with a polynomial.
77
00:05:45,520 --> 00:05:50,160
[sidebar: idea 2]
But, I have again kind of lied to you.
78
00:05:50,160 --> 00:05:55,039
Sure, we can approximate the audio wave with
a polynomial, and we can tune how close we
79
00:05:55,039 --> 00:05:58,259
get by choosing what degree the polynomial
is.
80
00:05:58,259 --> 00:06:04,449
But this kind of approximation breaks down
very fast with more than a handful of samples.
81
00:06:04,449 --> 00:06:11,139
The required polynomial degrees would be extremely
large, which is expensive and unstable on
82
00:06:11,139 --> 00:06:12,349
all fronts.
83
00:06:12,349 --> 00:06:14,909
Let’s try to improve things.
84
00:06:14,909 --> 00:06:20,419
I kind of skipped over it, but we’re of
course not in continuous real number math land.
85
00:06:20,419 --> 00:06:25,419
All our numbers are discrete and finite, both
in x and y.
86
00:06:25,419 --> 00:06:30,689
This means that we’re not dealing with approximating
a function with a function, we’re approximating
87
00:06:30,689 --> 00:06:36,529
a series with a series, or rather, a digital
signal with a digital signal.
88
00:06:36,529 --> 00:06:40,940
As long as we keep our signal defined like
this, however, we’re just switching up the
89
00:06:40,940 --> 00:06:44,520
notation, nothing actually changes, practically
speaking.
90
00:06:44,520 --> 00:06:48,460
The magic only happens once we use a recursive
definition [sidebar: idea 3].
91
00:06:48,460 --> 00:06:52,020
You might have heard about recursion before
[google joke], and here we just mean that
92
00:06:52,020 --> 00:06:58,289
we define a signal’s value as a combination
of previous signal values.
93
00:06:58,289 --> 00:07:02,319
Recursion is not super common in the kind
of calculus you learn in school, but it turns
94
00:07:02,319 --> 00:07:05,409
out to be a powerful tool for data compression.
95
00:07:05,409 --> 00:07:11,639
Now, you have to briefly take my word and
believe me that these specific recursive signal
96
00:07:11,639 --> 00:07:13,970
definitions are really great.
97
00:07:13,970 --> 00:07:16,159
What does this notation mean?
98
00:07:16,159 --> 00:07:23,110
The sample at position t is specified to be
two times the sample at position t-1 minus
99
00:07:23,110 --> 00:07:25,749
the sample at position t-2.
100
00:07:25,749 --> 00:07:31,240
Therefore, if we already know the previous
samples, we can compute the next sample very
101
00:07:31,240 --> 00:07:32,280
easily.
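In code, that recursion is just a couple of lines (the warm-up values 3 and 5 here are arbitrary examples, not anything from a real stream):

```python
def predict_order2(samples, t):
    # s(t) = 2*s(t-1) - s(t-2): continue the straight line
    # through the two previous samples.
    return 2 * samples[t - 1] - samples[t - 2]

samples = [3, 5]  # the non-recursive starting point (warm-up samples)
for t in range(2, 6):
    samples.append(predict_order2(samples, t))
print(samples)  # [3, 5, 7, 9, 11, 13]
```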
102
00:07:32,280 --> 00:07:35,529
But what if we don’t yet know the previous
samples?
103
00:07:35,529 --> 00:07:41,099
That’s a problem all recursive definitions
need to solve: there has to be a non-recursive
104
00:07:41,099 --> 00:07:42,840
starting point.
105
00:07:42,840 --> 00:07:48,139
More formally, there is an infinite number
of non-recursive functions that fulfil this
106
00:07:48,139 --> 00:07:50,169
recursive formula.
107
00:07:50,169 --> 00:07:55,800
We take the easiest way out and say that the
first couple of samples have some constant
108
00:07:55,800 --> 00:07:56,800
values.
109
00:07:56,800 --> 00:08:02,110
How many constant samples we need simply depends
on how many previous samples our formula asks
110
00:08:02,110 --> 00:08:03,139
for.
111
00:08:03,139 --> 00:08:07,279
So let’s revisit the procedure and introduce
some terminology.
112
00:08:07,279 --> 00:08:16,699
For each of the four different orders, we are
given one predictor as well as the warm-up samples.
113
00:08:16,699 --> 00:08:22,089
The order specifies how many samples we need
to look back, so how many constant warm-up
114
00:08:22,089 --> 00:08:23,649
samples we need.
115
00:08:23,649 --> 00:08:29,270
For the first couple of samples, we don’t
use the recursive predictor formula, only
116
00:08:29,270 --> 00:08:30,270
afterwards.
117
00:08:30,270 --> 00:08:35,870
Not only is this extremely cheap to compute,
it’s also super space-efficient.
118
00:08:35,870 --> 00:08:41,210
Because there’s only one predictor per order,
we just store the order instead of the coefficients
119
00:08:41,210 --> 00:08:42,530
themselves.
120
00:08:42,530 --> 00:08:47,460
And at the same time, the order tells us how
many warm-up samples there are.
121
00:08:47,460 --> 00:08:52,820
What we’re doing here is encoding samples
by predicting what the next sample will be,
122
00:08:52,820 --> 00:08:56,490
based on a linear combination of the previous
samples.
123
00:08:56,490 --> 00:09:02,400
That’s why we call it Linear Predictive
Coding.
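To make “encoding samples by predicting” concrete, here is a sketch of how an encoder could use these predictors. The coefficient table matches FLAC’s fixed predictors of orders 0 through 4; the residual function is my own simplified illustration, not the actual FLAC bitstream format:

```python
# Fixed predictor coefficients, applied to samples t-1, t-2, ...
FIXED_PREDICTORS = {
    0: [],            # predict zero
    1: [1],           # repeat the previous sample
    2: [2, -1],       # continue the line
    3: [3, -3, 1],
    4: [4, -6, 4, -1],
}

def residuals(samples, order):
    # Store prediction errors instead of raw samples; a good predictor
    # makes these numbers small, and small numbers compress well.
    coeffs = FIXED_PREDICTORS[order]
    return [
        samples[t] - sum(c * samples[t - 1 - i] for i, c in enumerate(coeffs))
        for t in range(order, len(samples))
    ]

# A perfectly linear signal leaves nothing but zeros at order 2:
print(residuals([3, 5, 7, 9, 11], 2))  # [0, 0, 0]
```

The decoder reverses this: it reads the warm-up samples verbatim, then adds each residual back onto its prediction.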
124
00:09:02,400 --> 00:09:06,660
You remember how you took my word a minute
ago when I said that you have to believe in
125
00:09:06,660 --> 00:09:09,290
the greatness of these specific predictors?
126
00:09:09,290 --> 00:09:12,000
Well, let’s get to why that is.
127
00:09:12,000 --> 00:09:18,460
LPC does not always result in polynomial functions,
in fact, most of the time it doesn’t.
128
00:09:18,460 --> 00:09:22,710
And even though it doesn’t look like it
from the seemingly arbitrary coefficients,
129
00:09:22,710 --> 00:09:28,340
these predictors here are the only ones of
their order that do actually correspond to
130
00:09:28,340 --> 00:09:30,210
a polynomial function.
131
00:09:30,210 --> 00:09:32,800
It’s not magic, it’s math.
132
00:09:32,800 --> 00:09:35,410
Let’s look at this from another angle.
133
00:09:35,410 --> 00:09:40,440
We have a number of samples that were already
decoded, and we now want to predict what the
134
00:09:40,440 --> 00:09:42,120
next sample will be.
135
00:09:42,120 --> 00:09:44,110
How can we do that?
136
00:09:44,110 --> 00:09:47,570
Let’s take the simplest approach first and
try to use a line.
137
00:09:47,570 --> 00:09:53,820
In math, of course, we call that a linear
function, or a polynomial of degree 1.
138
00:09:53,820 --> 00:09:59,300
So we want to insert a line somewhere here,
and the prediction we make is where the line
139
00:09:59,300 --> 00:10:03,570
intersects the sample’s point in time, which
we called t.
140
00:10:03,570 --> 00:10:10,360
A linear function consists of only two parameters:
the slope and the vertical position; but we
141
00:10:10,360 --> 00:10:14,320
want to recursively define them depending
on the previous samples.
142
00:10:14,320 --> 00:10:16,680
First, let’s ignore the slope.
143
00:10:16,680 --> 00:10:19,910
I challenge you to come up with a position
for the sample t.
144
00:10:19,910 --> 00:10:24,060
[pause] Your solution is probably too complicated.
145
00:10:24,060 --> 00:10:29,020
Let’s do this as simple as possible and
use the previous sample.
146
00:10:29,020 --> 00:10:34,680
We can think of the previous sample as our
starting point, just something to go off from.
147
00:10:34,680 --> 00:10:36,450
Now what about the slope?
148
00:10:36,450 --> 00:10:40,340
Think about what we just did with the position,
we simply copied it.
149
00:10:40,340 --> 00:10:43,880
So how about we copy the slope too?
150
00:10:43,880 --> 00:10:49,380
The best slope to use of course is the slope
between the previous two samples.
151
00:10:49,380 --> 00:10:54,900
If you took any amount of calculus, you know
that this slope can be calculated by dividing
152
00:10:54,900 --> 00:10:58,120
the y difference by the x difference.
153
00:10:58,120 --> 00:11:04,010
Lucky for us, the x difference is 1 and the
y difference is the difference between these
154
00:11:04,010 --> 00:11:05,070
two samples.
155
00:11:05,070 --> 00:11:10,800
Now, shifting this slope triangle to the right
so that it starts at the previous sample shows
156
00:11:10,800 --> 00:11:14,120
us where we predict the next sample to be.
157
00:11:14,120 --> 00:11:18,940
In terms of our formula, we add the slope
to the position.
158
00:11:18,940 --> 00:11:22,300
And that’s already the second order predictor!
159
00:11:22,300 --> 00:11:27,940
Note that the very last step only works because
all the samples have the same distance, so
160
00:11:27,940 --> 00:11:31,930
the x portion of the slope triangle is always
1.
161
00:11:31,930 --> 00:11:36,550
This assumption of a distance of 1 is one
we generally make because it doesn’t really
162
00:11:36,550 --> 00:11:40,580
change anything but it simplifies the math.
163
00:11:40,580 --> 00:11:44,621
Now that you have seen how we can come up
with a formula for the simple case, let me
164
00:11:44,621 --> 00:11:47,890
show you a more formal mathematical method.
165
00:11:47,890 --> 00:11:53,980
This is less intuitive, but more robust, as
we can use it for even the very highest predictor
166
00:11:53,980 --> 00:11:55,170
orders.
167
00:11:55,170 --> 00:11:58,710
Our starting point again are the Taylor polynomials.
168
00:11:58,710 --> 00:12:04,850
Remember, the fundamental assumption behind
Taylor is that we can approximate some unknown
169
00:12:04,850 --> 00:12:10,790
input by adding up the function and its derivatives
at a known input.
170
00:12:10,790 --> 00:12:16,320
For the best accuracy, let’s pick the last
decoded sample as our known input.
171
00:12:16,320 --> 00:12:21,320
For the function’s value itself, that’s
of course just this sample, but we don’t
172
00:12:21,320 --> 00:12:23,620
even know the first derivative!
173
00:12:23,620 --> 00:12:29,950
After all, we don’t actually know the underlying
function, we can’t derive it with calculus.
174
00:12:29,950 --> 00:12:35,260
Instead, we have to approximate the derivative
as well, and we’re gonna use Taylor once
175
00:12:35,260 --> 00:12:36,260
more.
176
00:12:36,260 --> 00:12:42,460
First, we need to decide how many samples
we want to use to approximate the derivative.
177
00:12:42,460 --> 00:12:47,810
You can use arbitrarily many, and for reasons
outside the scope here, that will give you
178
00:12:47,810 --> 00:12:50,200
better and better approximations.
179
00:12:50,200 --> 00:12:56,470
However, just know that we need at least two
samples for the first derivative, at least
180
00:12:56,470 --> 00:13:00,480
three samples for the second derivative, and
so on.
181
00:13:00,480 --> 00:13:05,440
So for us, let’s just choose the sample
at t-1 which we’re dealing with anyways,
182
00:13:05,440 --> 00:13:08,410
and the one before that, t-2.
183
00:13:08,410 --> 00:13:13,470
To make it simple, our assumption will be
that we can calculate the discrete derivative
184
00:13:13,470 --> 00:13:21,000
at t-1 by some linear combination of the known
samples at t-1 and t-2.
185
00:13:21,000 --> 00:13:27,030
Alright, and while this might seem arbitrary at
first, let’s write both of those samples not
186
00:13:27,030 --> 00:13:34,040
as the actual value they are, but how we could
in theory calculate them by a Taylor approximation
187
00:13:34,040 --> 00:13:37,100
from t-1.
188
00:13:37,100 --> 00:13:41,570
And then let’s plug that into the right
side of the linear combination.
189
00:13:41,570 --> 00:13:43,890
Do you see now?
190
00:13:43,890 --> 00:13:48,940
If I write the left-hand side more explicitly
and rearrange the right side, it might be
191
00:13:48,940 --> 00:13:51,380
a bit more obvious.
192
00:13:51,380 --> 00:13:56,970
We have coefficients for both the sample and
its derivative on both sides.
193
00:13:56,970 --> 00:14:02,960
That means in order for this formula to be
correct, all the corresponding coefficients
194
00:14:02,960 --> 00:14:08,990
need to be equal, and that gives us a linear
system of equations containing a and b as
195
00:14:08,990 --> 00:14:10,590
the unknowns.
196
00:14:10,590 --> 00:14:18,100
Luckily, this system is super easy to solve,
and we get a=1 and b=-1.
197
00:14:18,100 --> 00:14:23,850
Plug that back into the very first equation
and we have an approximation for the derivative.
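The little system and the resulting approximation can be checked numerically; this sketch solves it by hand, and the test signal t² is just an example I picked:

```python
# Matching coefficients gave:  a + b = 0  and  -b = 1.
b = -1
a = -b                      # hence a = 1
print(a, b)                 # 1 -1

# Try the resulting derivative approximation s'(t-1) = s(t-1) - s(t-2)
# on a sampled parabola s(t) = t*t (the true slope at t-1 = 3 is 6):
samples = [t * t for t in range(6)]
approx = a * samples[3] + b * samples[2]
print(approx)               # 5 -- close, off by the higher-order Taylor terms
```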
198
00:14:23,850 --> 00:14:25,500
Not too bad, was it?
199
00:14:25,500 --> 00:14:31,160
Of course, the steps quickly get out of hand
once we try the same thing for higher-order
200
00:14:31,160 --> 00:14:32,160
derivatives.
201
00:14:32,160 --> 00:14:37,750
However, the procedure is exactly the same,
you just have longer expansions and more equations
202
00:14:37,750 --> 00:14:38,840
in the system.
203
00:14:38,840 --> 00:14:42,810
I encourage you to try doing the second derivative
yourself.
204
00:14:42,810 --> 00:14:47,490
Remember, we need the previous three samples,
and if you know a thing or two about linear
205
00:14:47,490 --> 00:14:51,870
systems of equations, you should be able to
tell why that is.
206
00:14:51,870 --> 00:14:57,290
Finally, we need to invoke Taylor once more
and just combine these derivatives into one
207
00:14:57,290 --> 00:15:00,400
formula for the sample we actually care about.
208
00:15:00,400 --> 00:15:06,210
There’s a tradeoff here: Using more derivatives
gives us a better approximation, that’s
209
00:15:06,210 --> 00:15:09,400
just a basic property of Taylor polynomials.
210
00:15:09,400 --> 00:15:15,230
On the other hand, higher derivatives require
more samples and more time to compute.
211
00:15:15,230 --> 00:15:22,590
So depending on how much accuracy we need,
we can select zero to three derivatives, and
212
00:15:22,590 --> 00:15:28,980
simplifying these equations gives us exactly
the five predictors we know from before.
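These predictors also have a neat closed form: the order-k predictor is what you get by assuming the k-th finite difference of the signal is zero, which yields alternating binomial coefficients. A sketch of that shortcut (my own derivation aid, consistent with the predictors shown before):

```python
from math import comb

def fixed_predictor(order):
    # Coefficients for s(t-1), s(t-2), ...: alternating binomials
    # from "the order-th finite difference is zero".
    return [(-1) ** (i + 1) * comb(order, i) for i in range(1, order + 1)]

for k in range(5):
    print(k, fixed_predictor(k))
# 0 []
# 1 [1]
# 2 [2, -1]
# 3 [3, -3, 1]
# 4 [4, -6, 4, -1]
```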
213
00:15:28,980 --> 00:15:34,279
Of course, if you’ve been calculating the
Taylor expansions along at home, you’ll notice these formulas
214
00:15:34,279 --> 00:15:38,380
aren’t exactly correct: some of the higher derivatives
are missing a factor.
215
00:15:38,380 --> 00:15:42,820
That is indeed intentional, we’re not doing
exact math, we’re just trying to do some