Homework solutions are posted; I will give out the password.

This is hard to type in HTML. The book is fine and I will write the formulas on the board.

**Definition**:
The sigma notation: ∑_{i=a}^{b} f(i), the sum of f(i) for i from a to b.

**Theorem**:
Assume a > 0 and a ≠ 1. Then ∑_{i=0}^{n} a^{i} =
(a^{n+1}-1)/(a-1).

**Proof**:
Cute trick: let S be the sum, multiply by a, and subtract. Then aS − S = a^{n+1} − 1, and dividing by a−1 gives the result.

**Theorem**:
∑_{i=1}^{n} i = n(n+1)/2.
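Both closed forms are easy to sanity-check numerically. A sketch in Python (the choices a = 3 and n = 10 are arbitrary):

```python
# Check the geometric-sum and arithmetic-sum formulas for one choice of a and n.
a, n = 3, 10

geometric = sum(a**i for i in range(n + 1))   # a^0 + a^1 + ... + a^n
assert geometric == (a**(n + 1) - 1) // (a - 1)

arithmetic = sum(range(1, n + 1))             # 1 + 2 + ... + n
assert arithmetic == n * (n + 1) // 2
```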

Recall that log_{b}a = c means that b^{c}=a.
b is called the **base** and c is called the
**exponent**.

What is meant by log(n) when we don't specify the base?

- Some people use base 10 by default.
- Mathematicians use base e.
- We will use base 2 (common in computer science).

I assume you know what a^{b} is. (Actually this is not so
obvious. Whatever 2 raised to the square root of 3 means it is
**not** writing 2 down the square root of 3 times and
multiplying.) So you also know that
a^{x+y}=a^{x}a^{y}.

**Theorem**:
Let a, b, and c be positive real numbers. To ease writing, I will use
base 2 often. This is not needed. Any base would do.

- log(ac) = log(a) + log(c)
- log(a/c) = log(a) - log(c)
- log(a^{c}) = c·log(a)
- log_{c}(a) = log(a)/log(c): consider a = c^{log_{c}a} and take log of both sides.
- c^{log(a)} = a^{log(c)}: take log of both sides.
- (b^{a})^{c} = b^{ac}
- b^{a}b^{c} = b^{a+c}
- b^{a}/b^{c} = b^{a-c}

- log(2n·log(n)) = 1 + log(n) + log(log(n)), which is Θ(log(n))
- log(log(sqrt(n))) = log(.5·log(n)) = log(.5) + log(log(n)) = -1 + log(log(n)) = Θ(log(log(n)))
- log(2^{n}) = n·log(2) = n = 2^{log(n)}
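The identities above can be checked numerically. A sketch in Python using base 2 as in these notes (the values a = 8, b = 2, c = 4 are arbitrary; math.isclose handles floating-point rounding):

```python
import math

a, b, c = 8.0, 2.0, 4.0
log = math.log2  # base 2, as used in these notes

assert math.isclose(log(a * c), log(a) + log(c))
assert math.isclose(log(a / c), log(a) - log(c))
assert math.isclose(log(a**c), c * log(a))
assert math.isclose(math.log(a, c), log(a) / log(c))  # change of base
assert math.isclose(c**log(a), a**log(c))             # both are 64 here
assert math.isclose((b**a)**c, b**(a * c))
assert math.isclose(b**a * b**c, b**(a + c))
assert math.isclose(b**a / b**c, b**(a - c))
```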

**Homework:** C-1.12

⌊x⌋ is the greatest integer not greater than x. ⌈x⌉ is the least integer not less than x.

⌊5⌋ = ⌈5⌉ = 5

⌊5.2⌋ = 5 and ⌈5.2⌉ = 6

⌊-5.2⌋ = -6 and ⌈-5.2⌉ = -5
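Python's math module provides both functions, so the examples above can be checked directly:

```python
import math

# floor: greatest integer not greater than x; ceil: least integer not less than x
assert math.floor(5) == math.ceil(5) == 5
assert math.floor(5.2) == 5 and math.ceil(5.2) == 6
assert math.floor(-5.2) == -6 and math.ceil(-5.2) == -5
```

Note that for negative numbers floor moves away from zero, not toward it.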

To prove the claim that **there is a** positive n
satisfying n^{n}>n+n, we merely have to note that
3^{3}>3+3.

To refute the claim that **all** positive n
satisfy n^{n}>n+n, we merely have to note that
1^{1}<1+1.

"P implies Q" is the same as "not Q implies not P". So to show
that in the world of positive integers "a^{2}≥b^{2}
implies that a≥b" we can show instead that
"NOT(a≥b) implies NOT(a^{2}≥b^{2})", i.e., that
"a<b implies a^{2}<b^{2}",
which is clear.

Assume what you want to prove is **false** and derive
a contradiction.

**Theorem**:
There are an infinite number of primes.

**Proof**:
Assume not. Let the primes be p_{1} up to p_{k} and
consider the number
A=p_{1}p_{2}…p_{k}+1.
A has remainder 1 when divided by any p_{i} so cannot have any
p_{i} as a factor. Factor A into primes. None can be
p_{i} (A may or may not be prime). But we assumed that all
the primes were p_{i}. Contradiction. Hence our assumption
that we could list all the primes was false.
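The construction in the proof can be played with numerically. A sketch: take the first k primes, form A, and check that no p_{i} divides A. (A itself need not be prime; for the first six primes, A = 30031 = 59 · 509.)

```python
from math import prod

primes = [2, 3, 5, 7, 11, 13]   # the first six primes
A = prod(primes) + 1            # A = p1*p2*...*pk + 1 = 30031

# A leaves remainder 1 on division by each listed prime ...
assert all(A % p == 1 for p in primes)

# ... yet A is not prime here: its prime factors are 59 and 509,
# neither of which is in our list.
assert A == 59 * 509
```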

The goal is to show the truth of some statement for all integers n≥1. It is enough to show two things.

- The statement is true for n=1.
- **IF** the statement is true for all k<n, then it is true for n.

**Theorem**:
A complete binary tree of height h has 2^{h}-1 nodes.

**Proof**:
We write NN(h) to mean the number of nodes in a complete binary tree
of height h.
A complete binary tree of height 1 is just a root so NN(1)=1 and
2^{1}-1 = 1.
Now we assume NN(k)=2^{k}-1 nodes for all k<h
and consider a complete
binary tree of height h.
It is just two complete binary trees of height
h-1 with a new root connecting them.

So NN(h) = 2NN(h-1)+1 = 2(2^{h-1}-1)+1 = 2^{h}-1,
as desired.
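The recurrence NN(h) = 2·NN(h-1) + 1 can be checked against the closed form 2^{h} − 1. A sketch, using the same convention as the proof (a lone root has height 1):

```python
def nn(h):
    """Number of nodes in a complete binary tree of height h (root alone has height 1)."""
    if h == 1:
        return 1                  # base case: just the root
    return 2 * nn(h - 1) + 1      # two subtrees of height h-1 plus a new root

# Closed form agrees with the recurrence for small heights.
for h in range(1, 15):
    assert nn(h) == 2**h - 1
```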

**Homework:** R-1.9

Very similar to induction. Assume we have a loop with controlling
variable i. For example a "`for i←0 to n-1`". We then
associate with the loop a statement S(j) depending on j such that

- S(0) is true (just) before the loop begins.
- **IF** S(j-1) holds before iteration j begins, then S(j) will hold when iteration j ends.

I favor having array and loop indexes starting at zero. However, here it causes us some grief. We must remember that iteration j occurs when i=j-1.

**Example**:
Recall the countPositives algorithm

    Algorithm countPositives
        Input: Non-negative integer n and an integer array A of size n.
        Output: The number of positive elements in A
        pos ← 0
        for i ← 0 to n-1 do
            if A[i] > 0 then
                pos ← pos + 1
        return pos

Let S(j) be "pos equals the number of positive values in the first j elements of A".

Just before the loop starts S(0) is true
**vacuously**. Indeed that is the purpose of the first
statement in the algorithm.

Assume S(j-1) is true before iteration j. Then iteration j (i.e., i=j-1) checks A[j-1], which is the jth element, and updates pos accordingly. Hence S(j) is true after iteration j finishes.

Hence we conclude that S(n) is true when iteration n concludes, i.e. when the loop terminates. Thus pos is the correct value to return.
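A sketch of countPositives in Python, with the invariant S(j) checked by an assert at the end of each iteration (the assert is there only to illustrate the invariant; real code would drop it):

```python
def count_positives(A):
    """Return the number of positive elements in A, checking the loop invariant."""
    n = len(A)
    pos = 0                       # S(0): pos equals the number of positives in A[0:0]
    for i in range(n):            # iteration j runs with i = j-1
        if A[i] > 0:
            pos += 1
        # S(i+1): pos equals the number of positives among the first i+1 elements
        assert pos == sum(1 for x in A[: i + 1] if x > 0)
    return pos                    # S(n) holds, so pos is correct
```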

Skipped for now.

We trivially improved innerProduct (same asymptotic complexity before and after). Now we will see a real improvement. For simplicity I do a slightly simpler algorithm than the book does, namely prefix sums.

    Algorithm partialSumsSlow
        Input: Positive integer n and a real array A of size n
        Output: A real array B of size n with B[i]=A[0]+…+A[i]
        for i ← 0 to n-1 do
            s ← 0
            for j ← 0 to i do
                s ← s + A[j]
            B[i] ← s
        return B

The update of s is performed 1+2+…+n times. Hence the
running time is Ω(1+2+…+n) = Ω(n^{2}).
In fact it is easy to see that the time is Θ(n^{2}).

    Algorithm partialSumsFast
        Input: Positive integer n and a real array A of size n
        Output: A real array B of size n with B[i]=A[0]+…+A[i]
        s ← 0
        for i ← 0 to n-1 do
            s ← s + A[i]
            B[i] ← s
        return B

We just have a single loop and each statement inside is O(1), so the algorithm is O(n) (in fact Θ(n)).
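Both versions are easy to transcribe into Python. A sketch comparing them on a small input: the Θ(n²) version recomputes each prefix from scratch, while the Θ(n) version carries the running sum s across iterations.

```python
def partial_sums_slow(A):
    # Theta(n^2): recompute each prefix sum from scratch.
    B = []
    for i in range(len(A)):
        s = 0
        for j in range(i + 1):
            s += A[j]
        B.append(s)
    return B

def partial_sums_fast(A):
    # Theta(n): carry the running sum s across iterations.
    B, s = [], 0
    for x in A:
        s += x
        B.append(s)
    return B

A = [2.0, -1.0, 4.0, 0.5]
assert partial_sums_slow(A) == partial_sums_fast(A) == [2.0, 1.0, 5.0, 5.5]
```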

**Homework:** Write partialSumsFastNoTemps, which is also
Θ(n) time but avoids the use of s (it still uses i so my name is not
great).