src/algebra/fft.md (1 addition & 1 deletion)
@@ -97,7 +97,7 @@ It is easy to see that
$$A(x) = A_0(x^2) + x A_1(x^2).$$
- The polynomials $A_0$ and $A_1$ are only half as much coefficients as the polynomial $A$.
+ The polynomials $A_0$ and $A_1$ have only half as many coefficients as the polynomial $A$.
If we can compute the $\text{DFT}(A)$ in linear time using $\text{DFT}(A_0)$ and $\text{DFT}(A_1)$, then we get the recurrence $T_{\text{DFT}}(n) = 2 T_{\text{DFT}}\left(\frac{n}{2}\right) + O(n)$ for the time complexity, which results in $T_{\text{DFT}}(n) = O(n \log n)$ by the **master theorem**.
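This divide-and-conquer step can be sketched directly in code. Below is a minimal recursive DFT over `std::complex<double>` that mirrors the recurrence above; it is a sketch, not the article's full optimized implementation (which also handles the inverse transform):

```cpp
#include <cmath>
#include <complex>
#include <vector>

using cd = std::complex<double>;
const double PI = acos(-1);

// Recursive DFT: split into even/odd coefficients (A0, A1) and
// combine the two half-size transforms with the n-th roots of unity.
void fft(std::vector<cd>& a) {
    int n = a.size();              // n must be a power of two
    if (n == 1) return;
    std::vector<cd> a0(n / 2), a1(n / 2);
    for (int i = 0; 2 * i < n; i++) {
        a0[i] = a[2 * i];          // even-indexed coefficients
        a1[i] = a[2 * i + 1];      // odd-indexed coefficients
    }
    fft(a0);
    fft(a1);
    double ang = 2 * PI / n;
    cd w(1), wn(cos(ang), sin(ang));
    for (int i = 0; 2 * i < n; i++) {
        a[i] = a0[i] + w * a1[i];
        a[i + n / 2] = a0[i] - w * a1[i];
        w *= wn;
    }
}
```

Each recursion level does $O(n)$ combine work, which is exactly the recurrence $T(n) = 2T(n/2) + O(n)$ discussed above.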
src/algebra/phi-function.md (1 addition & 1 deletion)
@@ -73,7 +73,7 @@ int phi(int n) {
## Euler totient function from $1$ to $n$ in $O(n \log\log{n})$ { #etf_1_to_n data-toc-label="Euler totient function from 1 to n in <script type=\"math/tex\">O(n log log n)</script>" }
- If we need all all the totient of all numbers between $1$ and $n$, then factorizing all $n$ numbers is not efficient.
+ If we need the totient of all numbers between $1$ and $n$, then factorizing all $n$ numbers is not efficient.
We can use the same idea as the [Sieve of Eratosthenes](sieve-of-eratosthenes.md).
It is still based on the property shown above, but instead of updating the temporary result for each prime factor for each number, we find all prime numbers and for each one update the temporary results of all numbers that are divisible by that prime number.
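A sketch of that sieve-style computation (the function name here is ours): whenever `phi[i]` still equals `i`, the number `i` is prime, and we apply the factor $\left(1 - \frac{1}{p}\right)$ to every multiple of it.

```cpp
#include <vector>

// Totients of all numbers 1..n in O(n log log n), sieve-style:
// start with phi[i] = i; for each prime p, update every multiple j
// via phi[j] -= phi[j] / p, i.e. multiply by (1 - 1/p).
std::vector<int> phi_1_to_n(int n) {
    std::vector<int> phi(n + 1);
    for (int i = 0; i <= n; i++)
        phi[i] = i;
    for (int i = 2; i <= n; i++) {
        if (phi[i] == i) {                 // i is prime (never touched yet)
            for (int j = i; j <= n; j += i)
                phi[j] -= phi[j] / i;
        }
    }
    return phi;
}
```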
src/data_structures/sqrt_decomposition.md (1 addition & 1 deletion)
@@ -120,7 +120,7 @@ But in a lot of situations this method has advantages.
During a normal sqrt decomposition, we have to precompute the answers for each block, and merge them during answering queries.
In some problems this merging step can be quite problematic.
E.g. when each queries asks to find the **mode** of its range (the number that appears the most often).
- For this each block would have to store the count of each number in it in some sort of data structure, and we cannot longer perform the merge step fast enough any more.
+ For this each block would have to store the count of each number in it in some sort of data structure, and we can no longer perform the merge step fast enough any more.
**Mo's algorithm** uses a completely different approach, that can answer these kind of queries fast, because it only keeps track of one data structure, and the only operations with it are easy and fast.
The idea is to answer the queries in a special order based on the indices.
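As a sketch of that ordering, here is Mo's algorithm for a simpler statistic, the number of distinct values in a range, instead of the mode; the names `Query` and `mo_distinct` are ours. Queries are sorted by the block of their left endpoint, then by right endpoint, and two pointers add/remove one element at a time:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Query { int l, r, idx; };   // inclusive range [l, r], original position idx

// Mo's algorithm sketch: answers "how many distinct values in [l, r]?"
// The only data structure is a count array, updated by single-element
// add/remove operations as the pointers move.
std::vector<int> mo_distinct(const std::vector<int>& a, std::vector<Query> qs) {
    int n = a.size();
    int block = std::max(1, (int)std::sqrt(n));
    std::sort(qs.begin(), qs.end(), [&](const Query& x, const Query& y) {
        if (x.l / block != y.l / block) return x.l / block < y.l / block;
        return x.r < y.r;
    });
    std::vector<int> cnt(*std::max_element(a.begin(), a.end()) + 1, 0);
    std::vector<int> ans(qs.size());
    int distinct = 0, curL = 0, curR = -1;   // current window [curL, curR]
    auto add = [&](int i) { if (cnt[a[i]]++ == 0) distinct++; };
    auto del = [&](int i) { if (--cnt[a[i]] == 0) distinct--; };
    for (const Query& q : qs) {
        while (curL > q.l) add(--curL);
        while (curR < q.r) add(++curR);
        while (curL < q.l) del(curL++);
        while (curR > q.r) del(curR--);
        ans[q.idx] = distinct;
    }
    return ans;
}
```

For the mode, `cnt` would be accompanied by bookkeeping of the current maximum count, which is exactly the "easy and fast" single-structure update the text refers to.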
src/dynamic_programming/intro-to-dp.md (4 additions & 4 deletions)
@@ -7,7 +7,7 @@ tags:
The essence of dynamic programming is to avoid repeated calculation. Often, dynamic programming problems are naturally solvable by recursion. In such cases, it's easiest to write the recursive solution, then save repeated states in a lookup table. This process is known as top-down dynamic programming with memoization. That's read "memoization" (like we are writing in a memo pad) not memorization.
- One of the most basic, classic examples of this process is the fibonacci sequence. It's recursive formulation is $f(n) = f(n-1) + f(n-2)$ where $n \ge 2$ and $f(0)=0$ and $f(1)=1$. In C++, this would be expressed as:
+ One of the most basic, classic examples of this process is the fibonacci sequence. Its recursive formulation is $f(n) = f(n-1) + f(n-2)$ where $n \ge 2$ and $f(0)=0$ and $f(1)=1$. In C++, this would be expressed as:

```cpp
int f(int n) {
```
@@ -25,7 +25,7 @@ Our recursive function currently solves fibonacci in exponential time. This mean
To increase the speed, we recognize that the number of subproblems is only $O(n)$. That is, in order to calculate $f(n)$ we only need to know $f(n-1),f(n-2), \dots ,f(0)$. Therefore, instead of recalculating these subproblems, we solve them once and then save the result in a lookup table. Subsequent calls will use this lookup table and immediately return a result, thus eliminating exponential work!
- Each recursive call will check against a lookup table to see if the value has been calculated. This is done in $O(1)$ time. If we have previously calcuated it, return the result, otherwise, we calculate the function normally. The overall runtime is $O(n)$. This is an enormous improvement over our previous exponential time algorithm!
+ Each recursive call will check against a lookup table to see if the value has been calculated. This is done in $O(1)$ time. If we have previously calculated it, return the result, otherwise, we calculate the function normally. The overall runtime is $O(n)$. This is an enormous improvement over our previous exponential time algorithm!

```cpp
const int MAXN = 100;
```
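The hunk above cuts off after the first line of the code block; a self-contained sketch of the memoized version the text describes (the array names are ours) might be:

```cpp
const int MAXN = 100;
long long memo[MAXN];   // saved results, one slot per state n
bool known[MAXN];       // whether memo[n] has been filled in yet

// Top-down fibonacci with memoization: each state n is computed
// once, then every later call returns the stored value in O(1).
long long f(int n) {
    if (n <= 1) return n;              // base cases f(0)=0, f(1)=1
    if (known[n]) return memo[n];      // lookup-table hit
    known[n] = true;
    return memo[n] = f(n - 1) + f(n - 2);
}
```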
@@ -88,7 +88,7 @@ This approach is called top-down, as we can call the function with a query value
Until now you've only seen top-down dynamic programming with memoization. However, we can also solve problems with bottom-up dynamic programming.
Bottom-up is exactly the opposite of top-down, you start at the bottom (base cases of the recursion), and extend it to more and more values.
- To create a bottom-up approach for fibonacci numbers, we initilize the base cases in an array. Then, we simply use the recursive definition on array:
+ To create a bottom-up approach for fibonacci numbers, we initialize the base cases in an array. Then, we simply use the recursive definition on array:

```cpp
const int MAXN = 100;
```
@@ -107,7 +107,7 @@ Of course, as written, this is a bit silly for two reasons:
Firstly, we do repeated work if we call the function more than once.
Secondly, we only need to use the two previous values to calculate the current element. Therefore, we can reduce our memory from $O(n)$ to $O(1)$.
- An example of a bottom-up dynamic programming solution for fibonacci which uses $O(1)$ might be:
+ An example of a bottom-up dynamic programming solution for fibonacci which uses $O(1)$ memory might be:
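The diff ends before that example; one possible sketch of such an $O(1)$-memory solution:

```cpp
// Bottom-up fibonacci keeping only the last two values:
// O(n) time, O(1) memory instead of a full array.
long long fib(int n) {
    if (n <= 1) return n;              // f(0)=0, f(1)=1
    long long prev = 0, cur = 1;
    for (int i = 2; i <= n; i++) {
        long long next = prev + cur;   // the recursive definition f(i)=f(i-1)+f(i-2)
        prev = cur;
        cur = next;
    }
    return cur;
}
```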
src/graph/edge_vertex_connectivity.md (2 additions & 2 deletions)
@@ -39,7 +39,7 @@ It is clear, that the vertex connectivity of a graph is equal to the minimal siz
### The Whitney inequalities
- The **Whitney inequalities** (1932) gives a relation between the edge connectivity $\lambda$, the vertex connectivity $\kappa$ and the smallest degree of the vertices $\delta$:
+ The **Whitney inequalities** (1932) gives a relation between the edge connectivity $\lambda$, the vertex connectivity $\kappa$, and the minimum degree of any vertex in the graph $\delta$:

$$\kappa \le \lambda \le \delta$$
@@ -77,7 +77,7 @@ Especially the algorithm will run pretty fast for random graphs.
### Special algorithm for edge connectivity
- The task of finding the edge connectivity if equal to the task of finding the **global minimum cut**.
+ The task of finding the edge connectivity is equal to the task of finding the **global minimum cut**.

Special algorithms have been developed for this task.
One of them is the Stoer-Wagner algorithm, which works in $O(V^3)$ or $O(V E)$ time.
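As an illustration (a sketch assuming an undirected graph given as an adjacency matrix, not the article's code), the $O(V^3)$ variant of Stoer-Wagner repeatedly runs a maximum-adjacency search, records the cut-of-the-phase, and merges the last two vertices added. With unit edge weights the returned minimum-cut weight is the edge connectivity $\lambda$:

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// Stoer-Wagner global minimum cut, O(V^3), adjacency-matrix sketch.
int stoer_wagner(std::vector<std::vector<int>> w) {
    int n = w.size();
    int best = std::numeric_limits<int>::max();
    std::vector<bool> merged(n, false);
    for (int phase = 0; phase < n - 1; phase++) {
        std::vector<bool> in_a(n, false);
        std::vector<int> wsum(n, 0);   // total edge weight to the growing set A
        int prev = -1, last = -1;
        for (int it = 0; it < n - phase; it++) {
            // add the most tightly connected remaining vertex to A
            int sel = -1;
            for (int v = 0; v < n; v++)
                if (!merged[v] && !in_a[v] && (sel == -1 || wsum[v] > wsum[sel]))
                    sel = v;
            in_a[sel] = true;
            prev = last;
            last = sel;
            for (int v = 0; v < n; v++)
                if (!merged[v] && !in_a[v])
                    wsum[v] += w[sel][v];
        }
        // the cut-of-the-phase separates `last` from everything else
        best = std::min(best, wsum[last]);
        // contract `last` into `prev` and continue with one fewer vertex
        for (int v = 0; v < n; v++) {
            w[prev][v] += w[last][v];
            w[v][prev] += w[v][last];
        }
        merged[last] = true;
    }
    return best;
}
```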