Homework 6. First draft due Mar 30. Final submission due Apr 6. There is no class Mar 16 or Mar 23, but I will read email while I am away.

This homework is long, but it should be manageable, as long as you are following what is happening in class and becoming reasonably familiar with Matlab. If you find you are stuck and wasting too much time on it, PLEASE SEND ME EMAIL. We can arrange a time to talk when I return.

The original Matlab program simplx.m uses the "\" operation to solve the necessary systems of linear equations from scratch at every iteration, using Gauss elimination (which is equivalent to LU factorization). This does not take advantage of the fact that the basis matrix B is changing only by one column at each iteration. Modify simplx.m to implement three EFFICIENT versions of the primal simplex method, as follows. In all cases assume that b > 0 so a primal feasible point is available (original variables x=0, slack variables w=b) and that the initial basis matrix B is therefore the identity matrix.

  • The FIRST method to be implemented is explicit updating of the inverse of the basis matrix, using the Sherman-Morrison-Woodbury (SMW) formula. All you have to do is change the code so that the "\" operations, used to get either xB and yN or Delta-xB and Delta-yN (depending on which version you are using), are replaced by multiplications by the inverse, say "Binv", and by its transpose. Do NOT compute all of "Binv*N", which is not needed. The "Binv" matrix is then updated using the SMW formula, as explained in class (see also p.127 of the text); a sketch of the update follows.
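
    Here is a minimal sketch of the rank-one update, assuming the entering column of [A I] is stored in "a" and the leaving variable occupies position "jleave" of the basis (these two names are mine, not required):

      d = Binv*a;                          % transformed entering column
      pivrow = Binv(jleave,:)/d(jleave);   % new pivot row of the inverse
      Binv = Binv - d*pivrow;              % rank-one (SMW) correction
      Binv(jleave,:) = pivrow;             % overwrite the pivot row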

  • The SECOND method to be implemented is the "product form of the inverse", avoiding both explicit computation of the inverse and LU factorizations. In this method, we simply keep track of all the E matrices (the "eta file") accumulated since the beginning of the simplex method, which means storing the "Delta-xB" vector and an associated integer (the pivot position) for each one (see p.130 of the text). Each "solve" operation to get xB and yN or Delta-xB and Delta-yN requires a sequence of "solves" with the E matrices, each of which is trivially done using the formula on p.129, as also explained in class; a sketch follows.
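
    As a sketch, suppose the k-th eta matrix is stored as the vector etaD{k} (the Delta-xB column) and the integer etaJ(k) (its pivot position); these names are my own. Each E differs from the identity matrix in only one column, so each solve costs only O(m) operations:

      % Solve E*x = rhs, where E = I except column j, which holds d:
      xj = rhs(j)/d(j);        % pivot component first
      x = rhs - d*xj;          % then update the other components
      x(j) = xj;

      % Solve E'*y = rhs, where E' = I except row j, which holds d':
      y = rhs;                                      % other components copy over
      y(j) = (rhs(j) - d'*rhs + d(j)*rhs(j))/d(j);  % pivot component last

    With the convention B = E_1*E_2*...*E_s (each new eta appended on the right), solving B*x = b uses the eta matrices in forward order, while solving B'*y = c uses their transposes in reverse order.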

  • The THIRD method to be implemented uses the LU factorization of the basis matrix. The initial basis matrix is B=I, which has L=I, U=I, and the method starts by doing exactly the same as the second method, namely just keeping track of the "eta file". The difference is that after a certain number of iterations, say K, the "eta file" is abandoned and the LU factorization of the current basis matrix is computed from scratch by simply calling the Matlab "lu" function: use "[L,U] = lu(B)". This produces factors L and U such that B = LU, with U upper triangular and L lower triangular with row permutations. We don't want K to be too small, because it's too expensive to compute the L and U factors frequently, so a larger K may pay off; but if K is too large, the eta file grows long and the solves become slower (and, on ill-conditioned problems, possibly less accurate).

    When the LU factorization has been computed, the equation for xB (or Delta-xB) requires only solving equations with L and U, like this: xB = U\(L\b). Since L and U are triangular (or triangular with permutations), these systems are solved very efficiently by Matlab, as long as it recognizes the structure: to test this, experiment with "flops" (type "help flops"); in my experience both on workstations and PC's, Matlab does recognize that systems with L and U can be solved cheaply and so does NOT do Gauss elimination all over again, as it would with x = B\b where B is not triangular. Likewise the system for yN involves solves with the transposed matrices L' and U'. Subsequent iterations require repeated solves with these triangular matrices together with the eta matrix operations, using the "eta file" saved since the most recent LU factorization was done. A new LU factorization is then computed after K more iterations, etc. This can be controlled in the main loop by a statement beginning "if mod(iter,K) == 0"; a sketch of the logic follows.
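
    Here is a minimal sketch of how the pieces might fit together, reusing the eta solves from the second method, with the convention B = Lfac*Ufac*E_1*...*E_s; the names Lfac, Ufac, etaD, etaJ, s and rhs are mine, not required:

      if mod(iter,K) == 0
          [Lfac,Ufac] = lu(B);     % refactor the current basis from scratch
          s = 0;                   % and empty the eta file
      end
      % Solve B*xB = b: triangular solves, then eta solves in forward order.
      xB = Ufac\(Lfac\b);
      for k = 1:s
          j = etaJ(k); d = etaD{k};
          xj = xB(j)/d(j); xB = xB - d*xj; xB(j) = xj;
      end
      % Solve B'*y = rhs (rhs being the appropriate right-hand side, e.g. the
      % basic costs): transposed eta solves in reverse order, then L' and U'.
      v = rhs;
      for k = s:-1:1
          j = etaJ(k); d = etaD{k};
          v(j) = (v(j) - d'*v + d(j)*v(j))/d(j);
      end
      y = Lfac'\(Ufac'\v);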

    There is another important aspect to the THIRD method: it must work efficiently when the input matrix A is stored using Matlab's sparse data structures. Type "help sparse" for more information on this. Most Matlab operations, including both the LU factorization and the solve operations using "\", work with both dense and sparse data. As far as I can see, the only addition you have to make to this program to accommodate sparse data is that, if A is sparse, you must use speye(m) (sparse identity) instead of eye(m) when initializing the basis matrix B. You can check whether A is sparse by "if issparse(A)..." (type "help issparse").
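
    A minimal sketch of that initialization:

      if issparse(A)
          B = speye(m);    % sparse identity keeps the basis (and its LU factors) sparse
      else
          B = eye(m);      % dense identity for dense data
      end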

    When you are writing your programs remember to use PLENTY OF COMMENTS, and use other good programming habits, such as indenting loops sensibly, using meaningful variable names, and generally writing readable code. The grader will be reading the source code. Start by debugging your programs on the small examples you have used before. Continue until you are sure the programs are working correctly. If you are using a version of Matlab with the optimization toolbox installed (which is available at Courant but probably not outside), you can check your answers by calling an interface for the Matlab LP solver: lpmatlab.m. Otherwise you will have to check them against simplx.m, remembering that my original version produces a permuted solution without keeping track of the permutations (you were supposed to have fixed this).

    Once you are sure your programs are working correctly, you should suppress all output by adding semicolons and turn your script files into function subprograms (type "help function" for information on how to do this). These functions should have the calling sequence [x,y] = LPname(A,b,c,maxit), and they should all return the optimal original vectors x and y (often called x_tilde and y_tilde in class) for the primal-dual pair
    max c'x subject to Ax <= b, x >= 0
    min b'y subject to A'y >= c, y >= 0
    with the zero variables set accordingly and all vectors in correctly permuted order. You don't need to return the slack variables w and z, since these are easily computed by w = b - Ax and z = A'y - c if they are needed. Your functions should generate user-friendly error messages if any of the following errors occur: the data dimensions don't match, the iteration count exceeds maxit, the initial b has a negative entry, or the problem is found to be unbounded. See "help error". A sketch of such checks follows.
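
    A sketch of the kind of checks intended at the top of each function (the wording of the messages is my own suggestion, not a requirement):

      [m,n] = size(A);
      if length(b) ~= m | length(c) ~= n
          error('dimensions of A, b and c do not match')
      end
      if any(b < 0)
          error('b has a negative entry: no primal feasible starting point')
      end

    The other two errors belong inside the main loop: call error('iteration count exceeds maxit') when the loop counter passes maxit, and error('problem is unbounded') when the ratio test finds no blocking variable.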

    Once everything is working correctly, test your programs on the following examples:

  • rand('state',0); m = 40; n = 50;
    A = full(sprand(m,n,0.2)) + 10*eye(m,n); b = rand(m,1); c = rand(n,1);
    This example generates a sparse random matrix with about 20% nonzero entries, adds 10 to the diagonal entries, and passes the result to your codes as a full matrix, i.e. NOT taking advantage of the sparsity. Do "help sprand" and "help full" for more information. The first statement initializes the random number generator so that we are all using the same example (assuming you are using Matlab 5).

  • rand('state',0); m = 40; n = 50;
    A = sprand(m,n,0.2) + 10*speye(m,n); b = rand(m,1); c = rand(n,1);
    This generates the SAME example except that now A is passed to your codes as a sparse matrix. This should make the third method far MORE efficient, since the LU factorization will take advantage of the sparsity in the matrix, as will the "solves" using the \ operator. Don't forget that your code must use speye, not eye, if the data is sparse. You can also try passing the sparse data to the codes for the first and second methods. What happens?

  • rand('state',0); m = 100; n = 200;
    A = sprand(m,n,0.2) + 10*speye(m,n); b = rand(m,1); c = rand(n,1);
    This is the same construction as the previous example, but with larger dimensions.

  • A = hilb(8); b = A*ones(8,1); c = b;
    This example has been chosen to be very difficult numerically. The Hilbert matrix A (type "help hilb") is notoriously ill-conditioned, meaning that the systems Ax=b and Ap=q can have very different solutions x and p even if b is nearly equal to q. Another way of saying this is that the norm of the inverse of A is huge, or that A is nearly singular. The LP has been set up so that the solution is x = y = A\b = A'\c = ones(8,1). Be sure to use "format long" so you can see all the digits of your computed solution. How do your codes do on this difficult problem?

    Compare the results of your codes with each other and with either lpmatlab.m or the fixed-up simplx.m on these problems, in a nice table showing the number of ITERATIONS (which should be the same for all methods), the FLOP count (do "help flops"; remember that "flops(0)" resets the flop count; see the sketch below), and the ACCURACY (comparing with lpmatlab.m or the fixed-up simplx.m for all problems but the last, and with the exact solution of all ones in the last case).
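
    For example, one entry of the table might be measured like this ("LPlu" stands for whatever you have named your third function, and "xcheck" for the reference solution):

      flops(0);                      % reset the flop counter
      [x,y] = LPlu(A,b,c,maxit);     % run one method on one example
      nf = flops                     % flop count for the run
      acc = norm(x - xcheck, inf)    % accuracy against the reference solution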

    EXPERIMENT with various choices of K, which should affect BOTH the efficiency AND the accuracy (in the case of the Hilbert LP) of the third method. Try values of K in the range 5 to m for the random problems and 1 to 8 for the Hilbert LP. You should also experiment with some other dimensions for the examples, including results for at least one other size in your table.

    Finally, PROVE that the solution to the Hilbert LP is indeed x and y all ones. Hint: show that the optimality conditions hold for these x and y, by showing feasibility and complementarity.