# Basic Algorithms: Lecture 3

================ Start Lecture #3 ================

Problem Set #1, Problem 1.
The problem set will be officially assigned a little later, but the first problem in the set is:
Describe in pseudo-code a recursive algorithm

```
sumAndMax(n, A)
```
that computes both the sum of the elements of A and the maximum element. You may assume n is a positive integer giving the number of elements in A. You should return a pair (s,m) where s is the sum and m is the max (in Java this would be an object; in C it would be a struct). ALSO compute the (worst case) running time for your algorithm. Give the exact answer AND also Θ notation. So if the exact answer were 4X^7+3X^2-2, your answers would be 4X^7+3X^2-2 and Θ(X^7).
End of Problem Set #1, Problem 1
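A minimal Python sketch of one possible recursive solution (the recursion structure and the name `sum_and_max` are my own; the assignment itself asks for pseudocode, so treat this only as an illustration of the idea):

```python
# Hypothetical sketch: recurse on the first n-1 elements, then fold in A[n-1].
def sum_and_max(n, A):
    if n == 1:                        # base case: a single element
        return (A[0], A[0])
    s, m = sum_and_max(n - 1, A)      # solve the smaller problem
    return (s + A[n - 1], max(m, A[n - 1]))
```

Each call does a constant amount of work and there are n calls, which is the kind of count the exact-running-time part of the problem asks you to make precise.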

Example: Let's do problem R-1.10. Consider the following simple loop, which computes the sum of the first n positive integers; calculate its running time using big-Oh notation.

```
Algorithm Loop1(n)
    s ← 0
    for i ← 1 to n do
        s ← s + i
```
With big-Oh we don't have to worry about multiplicative or additive constants, so we see right away that the running time is just the number of iterations of the loop. The answer is O(n).
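The pseudocode above transcribes directly into Python (a sketch of mine, just to make the single pass over 1..n concrete):

```python
# Loop1: n iterations of constant-time work, hence O(n).
def loop1(n):
    s = 0
    for i in range(1, n + 1):
        s = s + i
    return s
```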

Homework: R-1.11 and R-1.12

Definitions: (Common names)

1. If a function is O(log(n)), we call it logarithmic.
2. If a function is O(n), we call it linear.
3. If a function is O(n^2), we call it quadratic.
4. If a function is O(n^k) with k≥1, we call it polynomial.
5. If a function is O(a^n) with a>1, we call it exponential.
Remark: The last definitions would be better stated with a relative of big-Oh, namely big-Theta (defined below), since, for example, 3log(n) is O(n^2), but we do not call 3log(n) quadratic.

Homework: R-1.10 and R-1.12.

Example: R-1.13. What is the running time of the following loop, using big-Oh notation?

```
Algorithm Loop4(n)
    s ← 0
    for i ← 1 to 2n do
        for j ← 1 to i do
            s ← s + 1
```
Clearly the time is determined by the number of executions of the innermost statement. This looks hard since the inner loop is executed a different number of times for each iteration of the outer loop, but it is not so bad. For iteration i of the outer loop, the inner loop has i iterations, so the total number of executions of the innermost statement is 1+2+...+2n, which is 2n(2n+1)/2 (we will learn this formula soon). So the answer is O(n^2).
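A transcription with a counter (my own check, not part of the text) confirms the count 1+2+...+2n = 2n(2n+1)/2:

```python
# Loop4 with the innermost statement replaced by a counter increment.
def loop4_count(n):
    count = 0
    for i in range(1, 2 * n + 1):   # outer loop: 2n iterations
        for j in range(1, i + 1):   # inner loop: i iterations on pass i
            count += 1
    return count
```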

Homework: R-1.14


### 1.2.2 Relatives of the Big-Oh

#### Big-Omega and Big-Theta

Recall that f(n) is O(g(n)) if, for large n, f is not much bigger than g. That is, g is some sort of upper bound on f. How about a definition for the case when g is (in the same sense) a lower bound for f?

Definition: Let f(n) and g(n) be real valued functions of an integer variable. Then f(n) is Ω(g(n)) if g(n) is O(f(n)).

Remarks:

1. We pronounce f(n) is Ω(g(n)) as "f(n) is big-Omega of g(n)".
2. What the last definition says is that we say f(n) is not much smaller than g(n) if g(n) is not much bigger than f(n), which sounds reasonable to me.
3. What if f(n) and g(n) are about equal, i.e., neither is much bigger than the other?

Definition: We write f(n) is Θ(g(n)) if both f(n) is O(g(n)) and f(n) is Ω(g(n)).

Remark: We pronounce f(n) is Θ(g(n)) as "f(n) is big-Theta of g(n)".

Examples to do on the board.

1. 2x^2+3x is Θ(x^2).
2. 2x^3+3x is not Θ(x^2).
3. 2x^3+3x is Ω(x^2).
4. innerProductRecursive is Θ(n).
5. binarySearch is Θ(log(n)). Unofficial for now.
6. If f(n) is Θ(g(n)), then f(n) is Ω(g(n)).
7. If f(n) is Θ(g(n)), then f(n) is O(g(n)).
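A small numeric illustration of the first example (my own check, not a proof): the ratio (2x^2+3x)/x^2 settles near a positive constant, which is the behavior Θ(x^2) captures.

```python
# Ratio of f(x) = 2x^2 + 3x to g(x) = x^2; it equals 2 + 3/x, so for
# large x it is squeezed between positive constants (e.g. 2 and 3).
def ratio(x):
    return (2 * x**2 + 3 * x) / x**2
```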

Homework: R-1.6

#### Little-Oh and Little-Omega

Recall that big-Oh captures the idea that for large n, f(n) is not much bigger than g(n). Now we want to capture the idea that, for large n, f(n) is tiny compared to g(n).

If you remember limits from calculus, what we want is that f(n)/g(n)→0 as n→∞. However, the definition we give does not use the word limit (it essentially has the definition of a limit built in).

Definition: Let f(n) and g(n) be real valued functions of an integer variable. We say f(n) is o(g(n)) if for any c>0, there is an n0 such that f(n)≤cg(n) for all n>n0. This is pronounced as "f(n) is little-oh of g(n)".

Definition: Let f(n) and g(n) be real valued functions of an integer variable. We say f(n) is ω(g(n)) if g(n) is o(f(n)). This is pronounced as "f(n) is little-omega of g(n)".

Examples: log(n) is o(n) and n^2 is ω(n log(n)).
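Both examples can be checked numerically (an illustration of mine, mirroring the limit intuition f(n)/g(n) → 0 for little-oh):

```python
import math

# For little-oh: log(n)/n should shrink toward 0.
# For little-omega: n^2/(n*log(n)) should grow without bound.
def ratios(n):
    return (math.log(n) / n, n**2 / (n * math.log(n)))
```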

What should we say if f(n) is o(g(n)) and f(n) is ω(g(n))? Perhaps we should say f(n) is θ(g(n)), i.e., little theta.

NO! Why?
Because, for large n, we cannot have both f(n) < 0.5·g(n) (which f(n) is o(g(n)) requires, taking c=0.5) and g(n) < 0.5·f(n) (which f(n) is ω(g(n)) requires).

Homework: R-1.4. R-1.22

#### What is "fast" or "efficient"?

If the asymptotic time complexity is bad, say Ω(n^8), or horrendous, say Ω(2^n), then for large n, the algorithm will definitely be slow. Indeed for exponential algorithms even modest n's (say n=50) are hopeless.

Algorithms that are o(n) (i.e., faster than linear, a.k.a. sub-linear), e.g. logarithmic algorithms, are very fast and quite rare. Note that such algorithms do not even inspect most of the input data once. Binary search has this property. When you look up a name in the phone book you do not even glance at a majority of the names present.

Linear algorithms (i.e., Θ(n)) are also fast. Indeed, if the time complexity is O(n log(n)), we are normally quite happy.

Low degree polynomial algorithms (e.g., Θ(n^2), Θ(n^3), Θ(n^4)) are interesting. They are certainly not fast, but speeding up a computer system by a factor of 1000 (feasible today with parallelism) means that a Θ(n^3) algorithm can solve a problem 10 times larger. Many science/engineering problems are in this range.
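The arithmetic behind "10 times larger" is worth a line of code (my own illustration): a 1000x speedup buys a factor of 1000^(1/3) = 10 in input size for a cubic algorithm, since (10n)^3 = 1000·n^3.

```python
# With a machine 1000x faster, the same wall-clock budget covers inputs
# speedup**(1/3) times larger for a Theta(n^3) algorithm.
speedup = 1000
size_growth = speedup ** (1 / 3)   # ~10 for a cubic algorithm
```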

### 1.2.3 The Importance of Asymptotics

It really is true that if the running time of algorithm A is o(the running time of algorithm B), then for large problems A will take much less time than B.

Definition: If (the number of operations in) algorithm A is o(algorithm B), we call A asymptotically faster than B.

Example: The following sequence of functions is ordered by growth rate, i.e., each function is little-oh of the subsequent function.
log(log(n)), log(n), (log(n))^2, n^(1/3), n^(1/2), n, n log(n), n^2/log(n), n^2, n^3, 2^n.
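Evaluating each function at one large n illustrates (though of course does not prove) the ordering; this check is my own addition. 2^n is omitted because at this n it overflows a float, which itself illustrates how extreme exponential growth is.

```python
import math

n = 2**40
fs = [
    math.log(math.log(n)),   # log(log(n))
    math.log(n),             # log(n)
    math.log(n) ** 2,        # (log(n))^2
    n ** (1 / 3),            # n^(1/3)
    n ** (1 / 2),            # n^(1/2)
    n,                       # n
    n * math.log(n),         # n log(n)
    n**2 / math.log(n),      # n^2/log(n)
    n**2,                    # n^2
    n**3,                    # n^3
]
```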

#### What about those constants that we have swept under the rug?

Modest multiplicative constants (as well as immodest additive constants) don't cause too much trouble. But there are algorithms (e.g., the AKS sorting network, which sorts in logarithmic parallel time) in which the multiplicative constants are astronomical and hence, despite their wonderful asymptotic complexity, they are not used in practice.

#### A Great Table

See table 1.10 on page 20.

Homework: R-1.7

Allan Gottlieb