- Question 2.2-1: Express the function $$\frac{n^3}{1000} - 100n^2 - 100n + 3$$ in terms of $\Theta$ notation.
- A function $g(n)$ is said to be in the set of all functions $\Theta(f(n))$ if and only if $g(n)$ is also in the set of all functions $\Omega(f(n))$ and in the set of all functions $O(f(n))$.
Symbolically: $$g(n) \in \Theta(f(n)) \iff g(n) \in O(f(n)) \And g(n) \in \Omega(f(n))$$ A function $g(n)$ is in the set of all functions $O(f(n))$ if and only if after some constant $c$ it is always true that, for some constant $C$, $g(n) \le Cf(n)$.
A function $g(n)$ is in the set of all functions $\Omega(f(n))$ if and only if after some constant $c$ it is always true that, for some constant $C$, $g(n) \ge Cf(n)$.
With our function we could choose practically any $f$ to satisfy one of these conditions on its own. However, we need to satisfy both of them. One thing that makes this easier is that the inequalities only have to hold after some constant $c$. This allows us to throw away the "trivial" lower-order terms ($-100n^2 - 100n + 3$), which are eventually overwhelmed by the fastest-growing term. We are therefore left with $\frac{n^3}{1000}$, and since the constant factor can be absorbed into $C$, the answer is $\Theta(n^3)$.
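To make this concrete, here is one explicit choice of constants that witnesses both conditions (my own pick; any sufficiently generous constants work):
$$\frac{1}{2000}n^3 \le \frac{n^3}{1000} - 100n^2 - 100n + 3 \le n^3 \quad \text{for all } n \ge 200001,$$
so $c = 200001$ with $C = \frac{1}{2000}$ satisfies the $\Omega(n^3)$ condition, and the same $c$ with $C = 1$ satisfies the $O(n^3)$ condition.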
- Question 2.2-2: Consider sorting $n$ numbers stored in an array A by first finding the smallest element and exchanging it with the element in A[1], then finding the second smallest element and exchanging it with A[2], and continuing in this manner for the first $n-1$ elements of A. Write the pseudocode for this algorithm, which is known as Selection Sort. What loop invariant does this algorithm maintain? Why does it need to run only for the first $n-1$ elements and not for all $n$? Give the best-case and worst-case running times in $\Theta$ notation.
- This question is asking us to analyze selection sort in a variety of ways. I will start by writing out the pseudocode:
for $j \leftarrow 1$ to $n-1$
    min $\leftarrow$ j
    for $i \leftarrow j+1$ to $n$
        if $A[i] \lt A[min]$ then min $\leftarrow$ i
    if min $\neq$ j then swap A[min] and A[j]
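As a sanity check, here is the same algorithm as runnable Python, a direct translation of the pseudocode above (0-based indexing, so the pseudocode's A[1..n] becomes a[0..n-1]):

```python
def selection_sort(a):
    n = len(a)
    for j in range(n - 1):           # pseudocode's "for j <- 1 to n-1"
        smallest = j
        for i in range(j + 1, n):    # scan A[j+1..n] for the minimum
            if a[i] < a[smallest]:
                smallest = i
        if smallest != j:            # swap A[min] and A[j]
            a[j], a[smallest] = a[smallest], a[j]
    return a

print(selection_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```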
A loop invariant that this algorithm maintains is that the subarray A[1..j-1] is sorted and every element in it is less than or equal to every element in the subarray A[j..n]. I do not believe a stronger loop invariant is provable. The algorithm only needs to run to n-1 because of the second part of the invariant: once the iteration $j = n-1$ has finished, every element before A[n] is less than or equal to A[n], so the last element is already in place and no further pass is needed. In the best case (an already sorted array) and in the worst case (a reverse sorted array) the running time is the same, $\Theta(n^2)$, because the inner loop always scans the entire remaining subarray regardless of the input.
- Question 2.2-3: Consider linear search again. How many elements of the input sequence need to be checked on average, assuming that the element being searched for is equally likely to be any element in the array? How about in the worst case? What are the average-case and worst-case running times of linear search in $\Theta$ notation?
- The best case for linear search is when the searched-for element is in the first location, so only one element is checked. In the worst case all $n$ locations must be searched. In the average case $\frac{n+1}{2}$ locations have to be checked, as shown below. Both the average-case and worst-case running times are therefore $\Theta(n)$.
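The average follows from a one-line expected-value calculation, assuming the target is equally likely to be at each of the $n$ positions:
$$E[\text{checks}] = \sum_{i=1}^{n} i \cdot \frac{1}{n} = \frac{1}{n} \cdot \frac{n(n+1)}{2} = \frac{n+1}{2}$$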
- Question 2.2-4: How can we modify almost any algorithm to have a good best-case running time?
- I guess the intended trick is to check for the optimal case up front (as in a pre-sorted array for a sorting algorithm) and then skip the rest of the procedure when the check passes. The best case then costs only as much as the check itself.
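A minimal sketch of that idea in Python (my own illustration, wrapping an arbitrary sort with a linear pre-check):

```python
def maybe_sort(a):
    # Best case: the input is already sorted, so we only pay for the O(n) check.
    if all(a[i] <= a[i + 1] for i in range(len(a) - 1)):
        return a
    return sorted(a)  # otherwise fall back to a full sort
```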
Wednesday, June 29, 2011
Blogging my way through CLRS [part 4]
Part 3 here
This set is a bit easier than last time.
Sunday, June 26, 2011
Blogging my way through CLRS [3/?]
part 2 here
According to Wikipedia, Introduction to Algorithms is also known as CLRS, which is shorter (and more fair to the other authors), so I'll use that name from now on.
Question 2.1-1 asks me to redraw a previous diagram, but with different numbers. I am not going to post that here.
- Question 2.1-2 Rewrite the insertion sort procedure to sort into nonincreasing instead of nondecreasing order:
- Here is the pseudocode of the nonincreasing version of insertion sort:
for $j \leftarrow 2$ to length[A]
    do key $\leftarrow A[j]$
        $\rhd$ Insert A[j] into the sorted sequence A[1..j-1]
        $i \leftarrow j - 1$
        while $i \gt 0$ AND $A[i] \lt key$
            do $A[i+1] \leftarrow A[i]$
                $i \leftarrow i - 1$
        $A[i+1] \leftarrow key$
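For reference, here is the same procedure as runnable Python (0-based indexing; the comparison is $\lt$ rather than $\gt$, which is the only change from the nondecreasing version):

```python
def insertion_sort_nonincreasing(a):
    for j in range(1, len(a)):   # pseudocode's "for j <- 2 to length[A]"
        key = a[j]
        i = j - 1
        # Shift smaller elements right so larger ones stay in front.
        while i >= 0 and a[i] < key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

print(insertion_sort_nonincreasing([5, 2, 4, 6, 1, 3]))  # [6, 5, 4, 3, 2, 1]
```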
Now we prove, to about the same level of formality as the book's proof of the original, that this loop terminates with a nonincreasingly sorted array.
Initialization: At the first iteration, when $j=2$, the subarray A[1..j-1] is trivially sorted (as it has only one element).
Maintenance: In order to prove maintenance we need to show that the inner loop correctly terminates with an array with "space" for the correct element. As CLRS did not prove this property, I will also skip this proof.
Termination: this loop terminates when $j \gt$ length[A], that is, when $j = $ length[A]$+1$. Since we have "proven" (to some level) the maintenance of the loop invariant (that at each point during the loop the subarray A[1..j-1] is sorted), we can substitute length[A]$+1$ for $j$, which gives A[1..length[A]], the entire array.
This shows that the loop terminates with a correctly sorted array.
- Question 2.1-3:
Input: A sequence of $n$ numbers $A = \{a_1, a_2, \ldots, a_n\}$ and a value $v$.
Output: An index i such that $v = A[i]$ or a special value $\varnothing$ (NIL) if $v \notin A$
Write the pseudocode for Linear Search, which scans through the sequence looking for $v$. Using a loop invariant, prove that your algorithm is correct.
- The first part, writing the pseudocode, seems fairly easy:
$r \leftarrow \varnothing$
for $j \leftarrow 1$ to length[A]
    if $v = A[j]$ $\rhd$ optionally also check that $r = \varnothing$, so the first match wins
        $r \leftarrow j$
return $r$
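A runnable Python version (my own translation; it applies the optional check so the first match wins):

```python
NIL = None  # stand-in for the pseudocode's empty value

def linear_search(a, v):
    r = NIL
    for j, x in enumerate(a):
        if x == v and r is NIL:  # keep only the first match
            r = j
    return r

print(linear_search([31, 41, 59, 26], 59))  # 2
print(linear_search([31, 41, 59, 26], 7))   # None
```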
The second part, proving that this is correct, is harder than before because we don't have a trivially true initialization of our loop invariant.
Initialization: $j = 1\ \And\ r = \varnothing$ at the start of our loop. At this point there are no elements prior to A[j] and we have yet to find $v$ in A. As such our invariant (that r will contain the correct value) is true.
Maintenance: At every point in the loop, the subarray A[1..j] has either contained $v$, in which case an index of $v$ has been assigned to $r$, or has not contained $v$, in which case $r$ remains $\varnothing$. This means that the loop invariant holds for every subarray A[1..j].
Termination: At the end of the loop $j = $ length[A]. We know from our maintenance that $r$ is correct for every subarray A[1..j], so at termination $r$ contains the correct value for the entire array.
- Question 2.1-4: Consider the problem of adding two $l$-bit binary integers, stored in two $l$-element arrays $A$ and $B$. The sum of the two integers should be stored in binary form in an $(l+1)$-element array $C$. State the problem formally and write pseudocode for adding the integers.
- Stating the problem formally looks something like:
Input: Two $l$-bit integers $A$ and $B$ stored as arrays of length $l$ with the most significant bit stored last
Output: An $(l+1)$-bit integer $C$ stored as an array of length $l+1$ with the most significant bit stored last
Here is the pseudocode:
$\rhd$ X is an $(l+1)$-element array of bits, initialized to all zeros, that stores the carries
for $j \leftarrow 1$ to $l$
    $C[j] \leftarrow copyC \leftarrow A[j] \oplus B[j]$
    $X[j+1] \leftarrow A[j] \And B[j]$
    $C[j] \leftarrow C[j] \oplus X[j]$
    $X[j+1] \leftarrow (copyC \And X[j]) \lor X[j+1]$
$C[l+1] \leftarrow X[l+1]$
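To convince myself the carry logic is right, here is a runnable Python translation (0-based indexing, least significant bit first, matching "most significant bit stored last"):

```python
def add_binary(a, b):
    l = len(a)
    c = [0] * (l + 1)
    x = [0] * (l + 1)  # carries; x[0] (the pseudocode's X[1]) starts at 0
    for j in range(l):
        s = a[j] ^ b[j]
        c[j] = s ^ x[j]                        # sum bit
        x[j + 1] = (a[j] & b[j]) | (s & x[j])  # carry out
    c[l] = x[l]  # the pseudocode's C[l+1] <- X[l+1]
    return c

# 6 + 7 = 13: with the least significant bit first, [0,1,1] + [1,1,1] = [1,0,1,1]
print(add_binary([0, 1, 1], [1, 1, 1]))  # [1, 0, 1, 1]
```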
Thursday, June 23, 2011
Blogging my way through Cormen [2/?]
As I said in part 1 I am reading a book on algorithms and have decided to blog my way through. My primary goal in doing so is to improve my writing skills. A secondary goal is to force myself to actually answer the questions.
Updated 2012-09-05: I had a brain lapse the day I originally published this and accidentally used the natural logarithm instead of the base 2 log for question 1.2-2. How I ever managed to do that I will not know, but I've fixed it.
- 1.2-2 Suppose we are comparing implementations of insertion sort and merge sort on the same machine. For inputs of size n, insertion sort runs in $8n^2$ steps, while merge sort runs in $64n \lg n$ steps. For which values of $n$ does insertion sort beat merge sort?
- The question is essentially asking for which values of $n$ is $8n^{2} \lt 64n \lg n$. We can simplify by dividing both sides by $8n$ (valid since $n \gt 0$), which gives $n \lt 8 \lg n$.
Unfortunately this inequality is not solvable using elementary operations. Luckily we are being asked for an integer solution (as computers operate in discrete steps), so we can use the underutilized guess-and-check method.
| $n$ | $8 \lg n$ |
| --- | --- |
| 14 | 30.46 |
| 41 | 42.86 |
| 43 | 43.41 |
| 44 | 43.675 |

Since $8 \lg 43 \approx 43.41 \gt 43$ while $8 \lg 44 \approx 43.675 \lt 44$, insertion sort beats merge sort for $2 \le n \le 43$.
- 1.2-3 What is the smallest value of $n$ such that an algorithm whose running time is $100n^2$ runs faster than an algorithm whose running time is $2^n$ on the same machine?
- This question is asking us to find the smallest positive integer $n$ that satisfies $100n^{2} \lt 2^n$. This could be solved by doing the math, by looking at a plot of the curves, or by using the above method again: $$2^{14} = 16384 \lt 100 \times 14^{2} = 19600$$ $$2^{15} = 32768 \gt 100 \times 15^{2} = 22500$$ So the answer is $n = 15$. An exponential algorithm becomes far worse than a polynomial algorithm when we have only 15 items to worry about. In other words: avoid exponential algorithms!
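Both crossovers are easy to confirm by brute force (a quick sketch of my own, not from the book):

```python
import math

# 1.2-2: the largest n with 8n^2 < 64 n lg n
wins = [n for n in range(2, 100) if 8 * n**2 < 64 * n * math.log2(n)]
print(wins[-1])  # 43, so insertion sort wins for 2 <= n <= 43

# 1.2-3: the smallest n with 100n^2 < 2^n
n = 1
while 100 * n**2 >= 2**n:
    n += 1
print(n)  # 15
```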
Tuesday, June 21, 2011
Cormen on Algorithms: Blogging my way through [1/?]
Two of my good friends recently started reading Introduction to Algorithms by Thomas H. Cormen et al. Being unable to resist peer pressure, I decided to follow suit and read along.
I plan on blogging my way through the chapters, writing my answers to the questions as I go through the book. Like most of my plans this one may not work out, but one can try.
Here it goes!
1.1-1: Give a real-world example in which each of the following computational problems appears: (a) sorting, (b) determining the best order for multiplying matrices, (c) finding the convex hull of a set of points.
Sorting - Sorting comes up in virtually every algorithm one could think of. Everything from optimizing monetary investments to efficient compression algorithms has to sort data at some point or another. A harder question might be: Name one non-trivial algorithm that doesn't require sorting.
Multiplying Matrices - graphics and scientific problems frequently require matrix operations.
Convex Hull - Collision detection for use in games, modeling biological systems, or other related work could make use of this.
1.1-2: Other than speed what other measures of efficiency might one use in a real-world setting?
It is possible to optimize for (and against) every limited resource. For example, minimizing memory usage is important for embedded applications (and desktop ones too). Reducing total disk I/O is important for increasing the longevity of hard drives. On a less technical note, optimizing for monetary cost or man-hours expended is important too.
1.1-3: Select a data structure you have seen previously and discuss its strengths and limitations
One of the most interesting data structures I know is the Bloom Filter. It is a probabilistic data structure that can determine definitively that an element is NOT in a set, but can't determine definitively that an element IS in a set. It works by hashing each added element to a fixed-size bit array and ORing that hash into the filter (which starts at all zeros). One can test whether an element is in the set by generating its hash and checking that every bit set to 1 in the hash is also set to 1 in the filter. If so, then one has some degree of confidence that the element is in the set; any negative means that what one is querying for has never been added.
While most probabilistic structures have certain properties in common, bloom filters have a number of interesting pros and cons (a sketch in Python follows the list):
- A negative result is definitive: if a query returns that an element has not been added, then one knows this to be 100% true.
- Since hashes are fixed size, the amount of memory a Bloom Filter uses is known and bounded.
- Bloom filters can quickly become useless with large amounts of data. It is possible that every bit will be set to 1, which effectively makes every query a NOP.
- It is impossible to remove data from a bloom filter. One can't just set all the bits of the hash to zero because that might remove other elements as well.
- Without a second set of data there is no way to deterministically list all elements (unlike other probabilistic data structures such as skip lists).
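A minimal sketch in Python (purely illustrative: the salted-SHA-256 hashing and the sizes are my own choices, not from any particular library):

```python
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # an integer used as a fixed-size bit array, all zeros

    def _positions(self, item):
        # Simulate independent hash functions by salting one hash.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos  # OR the element's hash into the filter

    def might_contain(self, item):
        # False is definitive; True may be a false positive.
        return all(self.bits & (1 << pos) for pos in self._positions(item))

bf = BloomFilter()
bf.add("alice")
print(bf.might_contain("alice"))  # True
print(bf.might_contain("bob"))    # False (almost certainly; never a false negative)
```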
1.1-4: How are the shortest path and traveling salesman problems similar? How are they different?
The shortest path problem is: given a weighted (undirected) graph $G = (V, E)$, a start vertex $V_0$, and an end vertex $V_e$, find a path between $V_0$ and $V_e$ such that the sum of the edge weights is minimized. This could be expanded to: given a weighted graph $G = (V, E)$, find a path between every pair of vertices such that the sum of the weights for each path is minimized.
The traveling salesman problem is: given a weighted, undirected graph $G = (V, E)$ and a start vertex $V_0$, find a path starting and ending at $V_0$ such that it passes through every other vertex exactly once and the sum of the weights is minimized.
Both, then, ask for a minimum-weight path through a weighted graph, and the traveling salesman problem might make use of the shortest path problem repeatedly in order to come up with the correct solution. They differ in difficulty: shortest paths can be found efficiently, while no efficient algorithm is known for the traveling salesman problem.
1.1-5: Come up with a real-world problem in which only the best solution will do. Then come up with a problem in which a solution that is "approximately" the best is good enough.
There are very few problems where one needs the objectively optimal solution; mathematical questions are the only ones I can think of that demand that level of accuracy. Virtually every other problem only needs a good-enough solution. Some examples include finding a fast route for packets on the internet or locating a piece of data in a database.
update 2011-06-30: modified text of answers 1.1-3 and 1.1-5 to be more clear.