# doc-cache created by Octave 4.4.1
# name: cache
# type: cell
# rows: 3
# columns: 17
# name: <cell-element>
# type: sq_string
# elements: 1
# length: 8
cartprod


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 762
 -- Function File: cartprod (VARARGIN)

     Computes the Cartesian product of the given column vectors (or row
     vectors).  The vector elements are assumed to be numbers.

     Alternatively, the vectors can be specified as the columns of a
     matrix.

     The Cartesian product P = A x B x C x D ...  requires A, B, C, D
     to be column vectors.  The algorithm iteratively calculates the
     products, ( ( (A x B) x C ) x D ) x etc.

            cartprod(1:2,3:4,0:1)
            ans =   1   3   0
                    2   3   0
                    1   4   0
                    2   4   0
                    1   3   1
                    2   3   1
                    1   4   1
                    2   4   1

See also: kron.


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 71
Computes the Cartesian product of the given column vectors (or row vectors).



# name: <cell-element>
# type: sq_string
# elements: 1
# length: 13
circulant_eig


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 698
 -- Function File: LAMBDA = circulant_eig (V)
 -- Function File: [VS, LAMBDA] = circulant_eig (V)

     Fast, compact calculation of the eigenvalues and eigenvectors of a
     circulant matrix.  Given an N*1 vector V, return the eigenvalues
     LAMBDA and optionally the eigenvectors VS of the N*N circulant
     matrix C that has V as its first column.

     Theoretically the same as 'eig(circulant_make_matrix(v))', but uses
     many fewer computations and does not form C explicitly.
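
     A brief check of that equivalence ('gallery ("circul", v)'' builds
     the full circulant matrix, as noted in 'circulant_make_matrix', and
     'real' discards roundoff-induced imaginary parts):

          v = [4 1 0 1]';                        % symmetric circulant, real spectrum
          lambda = circulant_eig (v);
          C = gallery ("circul", v)';            % full circulant matrix with v as first column
          sort (real (lambda)) - sort (eig (C))  % differences of order eps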

     Reference: Robert M. Gray, Toeplitz and Circulant Matrices: A
     Review, Now Publishers, http://ee.stanford.edu/~gray/toeplitz.pdf,
     Chapter 3

     See also: gallery, circulant_matrix_vector_product, circulant_inv.


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 80
Fast, compact calculation of eigenvalues and eigenvectors of a circulant
matrix




# name: <cell-element>
# type: sq_string
# elements: 1
# length: 13
circulant_inv


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 855
 -- Function File: C = circulant_inv (V)

     Fast, compact calculation of the inverse of a circulant matrix.
     Given an N*1 vector V, return the inverse C of the N*N circulant
     matrix that has V as its first column.  The returned C is the first
     column of the inverse, which is also circulant; to get the full
     matrix, use 'circulant_make_matrix(c)'.

     Theoretically the same as 'inv(circulant_make_matrix(v))(:, 1)',
     but requires many fewer computations and does not form matrices
     explicitly.

     Roundoff may induce a small imaginary component in C even if V is
     real; use 'real(c)' to remedy this.
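
     A quick check along these lines ('gallery ("circul", v)'' builds
     the full circulant matrix, as noted in 'circulant_make_matrix'):

          v = [4 1 0 1]';
          c = real (circulant_inv (v));    % first column of inv (C)
          C = gallery ("circul", v)';      % full circulant matrix with v as first column
          norm (C * c - [1; 0; 0; 0])      % ~eps, since C * inv(C)(:,1) equals eye(4)(:,1)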

     Reference: Robert M. Gray, Toeplitz and Circulant Matrices: A
     Review, Now Publishers, http://ee.stanford.edu/~gray/toeplitz.pdf,
     Chapter 3

     See also: gallery, circulant_matrix_vector_product, circulant_eig.


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 80
Fast, compact calculation of inverse of a circulant matrix
Given an N*1 vector V



# name: <cell-element>
# type: sq_string
# elements: 1
# length: 21
circulant_make_matrix


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 872
 -- Function File: C = circulant_make_matrix (V)
     Produce a full circulant matrix given the first column.

     _Note:_ this function has been deprecated and will be removed in
     the future.  Instead, use 'gallery' with the 'circul' option.  To
     obtain exactly the same matrix, transpose the result, i.e., replace
     'circulant_make_matrix (V)' with 'gallery ("circul", V)''.

     Given an N*1 vector V, returns the N*N circulant matrix C where V
     is the left column and all other columns are downshifted versions
     of V.

     Note: If the first row R of a circulant matrix is given, the first
     column V can be obtained as 'v = r([1 end:-1:2])'.
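
     For example, a short sketch using 'gallery' as suggested above:

          r = [1 2 3 4];                 % desired first row
          v = r([1 end:-1:2]);           % corresponding first column: [1 4 3 2]
          C = gallery ("circul", v)';    % same matrix as circulant_make_matrix (v)
          C(1,:)                         % equals r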

     Reference: Gene H. Golub and Charles F. Van Loan, Matrix
     Computations, 3rd Ed., Section 4.7.7

     See also: gallery, circulant_matrix_vector_product, circulant_eig,
     circulant_inv.


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 55
Produce a full circulant matrix given the first column.



# name: <cell-element>
# type: sq_string
# elements: 1
# length: 31
circulant_matrix_vector_product


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 731
 -- Function File: Y = circulant_matrix_vector_product (V, X)

     Fast, compact calculation of the product of a circulant matrix with
     a vector.  Given N*1 vectors V and X, return the matrix-vector
     product Y = CX, where C is the N*N circulant matrix that has V as
     its first column.

     Theoretically the same as 'circulant_make_matrix(v) * x', but does
     not form C explicitly; uses the discrete Fourier transform.

     Because of roundoff, the returned Y may have a small imaginary
     component even if V and X are real (use 'real(y)' to remedy this).
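
     A small sketch of that equivalence ('gallery ("circul", v)'' builds
     the full circulant matrix, as noted in 'circulant_make_matrix'):

          v = [1 2 3]';  x = [4 5 6]';
          y = real (circulant_matrix_vector_product (v, x));
          C = gallery ("circul", v)';    % circulant matrix with v as first column
          norm (y - C * x)               % should be of order eps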

     Reference: Gene H. Golub and Charles F. Van Loan, Matrix
     Computations, 3rd Ed., Section 4.7.7

     See also: gallery, circulant_eig, circulant_inv.


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 80
Fast, compact calculation of the product of a circulant matrix with a
vector
Giv



# name: <cell-element>
# type: sq_string
# elements: 1
# length: 3
cod


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 887
 -- Function File: [Q, R, Z] = cod (A)
 -- Function File: [Q, R, Z, P] = cod (A)
 -- Function File: [...] = cod (A, '0')
     Computes the complete orthogonal decomposition (COD) of the matrix
     A:
            A = Q*R*Z'
     Let A be an M-by-N matrix, and let 'K = min(M, N)'.  Then Q is
     M-by-M orthogonal, Z is N-by-N orthogonal, and R is M-by-N such
     that 'R(:,1:K)' is upper trapezoidal and 'R(:,K+1:N)' is zero.  The
     additional P output argument specifies that pivoting should be used
     in the first step (QR decomposition).  In this case,
            A*P = Q*R*Z'
     If a second argument of '0' is given, an economy-sized
     factorization is returned so that R is K-by-K.
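
     For example, a brief sketch of the full decomposition (the matrix
     below is only illustrative):

          A = rand (5, 3);
          [Q, R, Z] = cod (A);
          norm (A - Q*R*Z', "fro")    % should be of order eps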

     _NOTE_: This is currently implemented by double QR factorization
     plus some tricky manipulations, and is not as efficient as using
     xTZRZF from LAPACK.

     See also: qr.


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 80
Computes the complete orthogonal decomposition (COD) of the matrix A:
       A =



# name: <cell-element>
# type: sq_string
# elements: 1
# length: 4
funm


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 1164
 -- Function File: B = funm (A, F)
     Compute matrix equivalent of function F; F can be a function name
     or a function handle and A must be a square matrix.

     For trigonometric and hyperbolic functions, 'thfm' is automatically
     invoked, as it is based on 'expm' and avoids diagonalization.  For
     other functions diagonalization is used, which implies that,
     depending on the properties of the input matrix A, the results can
     be very inaccurate _without any warning_.  For easily
     diagonalizable and stable matrices the results of funm will be
     sufficiently accurate.

     Note that you should not use funm for 'sqrt', 'log' or 'exp';
     instead use sqrtm, logm and expm, as these are more robust.

     Examples:

            B = funm (A, @sin);
            % Compute the matrix equivalent of sin()

            function bk1 = besselk1 (x)
               bk1 = besselk (1, x);
            endfunction
            B = funm (A, @besselk1);
            % Compute the matrix equivalent of the Bessel function K1();
            % a helper function is needed here to convey the extra
            % argument for besselk()

     See also: thfm, expm, logm, sqrtm.


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 80
Compute matrix equivalent of function F; F can be a function name or a
function 



# name: <cell-element>
# type: sq_string
# elements: 1
# length: 6
lobpcg


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 9795
 -- Function File: [BLOCKVECTORX, LAMBDA] = lobpcg (BLOCKVECTORX,
          OPERATORA)
 -- Function File: [BLOCKVECTORX, LAMBDA, FAILUREFLAG] = lobpcg
          (BLOCKVECTORX, OPERATORA)
 -- Function File: [BLOCKVECTORX, LAMBDA, FAILUREFLAG, LAMBDAHISTORY,
          RESIDUALNORMSHISTORY] = lobpcg (BLOCKVECTORX, OPERATORA,
          OPERATORB, OPERATORT, BLOCKVECTORY, RESIDUALTOLERANCE,
          MAXITERATIONS, VERBOSITYLEVEL)
     Solves Hermitian partial eigenproblems using preconditioning.

     The first form outputs the array of algebraic smallest eigenvalues
     LAMBDA and corresponding matrix of orthonormalized eigenvectors
     BLOCKVECTORX of the Hermitian (full or sparse) operator OPERATORA
     using input matrix BLOCKVECTORX as an initial guess, without
     preconditioning, somewhat similar to:

          # for real symmetric operator operatorA
          opts.issym  = 1; opts.isreal = 1; K = size (blockVectorX, 2);
          [blockVectorX, lambda] = eigs (operatorA, K, 'SR', opts);

          # for Hermitian operator operatorA
          K = size (blockVectorX, 2);
          [blockVectorX, lambda] = eigs (operatorA, K, 'SR');

     The second form returns a convergence flag.  If FAILUREFLAG is 0
     then all the eigenvalues converged; otherwise not all converged.

     The third form computes smallest eigenvalues LAMBDA and
     corresponding eigenvectors BLOCKVECTORX of the generalized
     eigenproblem Ax=lambda Bx, where Hermitian operators OPERATORA and
     OPERATORB are given as functions, as well as a preconditioner,
     OPERATORT.  The operators OPERATORB and OPERATORT must be in
     addition _positive definite_.  To compute the largest eigenpairs of
     OPERATORA, simply apply the code to OPERATORA multiplied by -1.
     The code does not involve _any_ matrix factorizations of OPERATORA
     and OPERATORB, thus, e.g., it preserves the sparsity and the
     structure of OPERATORA and OPERATORB.

     RESIDUALTOLERANCE and MAXITERATIONS control tolerance and max
     number of steps, and VERBOSITYLEVEL = 0, 1, or 2 controls the
     amount of printed info.  LAMBDAHISTORY is a matrix with all
     iterative lambdas, and RESIDUALNORMSHISTORY is a matrix with the
     history of the 2-norms of the residuals.

     Required input:
        * BLOCKVECTORX (class numeric) - initial approximation to
          eigenvectors, full or sparse matrix n-by-blockSize.
          BLOCKVECTORX must be full rank.
        * OPERATORA (class numeric, char, or function_handle) - the main
          operator of the eigenproblem, can be a matrix, a function
          name, or handle

     Optional function input:
        * OPERATORB (class numeric, char, or function_handle) - the
          second operator, if solving a generalized eigenproblem, can be
          a matrix, a function name, or handle; by default if empty,
          'operatorB = I'.
        * OPERATORT (class char or function_handle) - the
          preconditioner, by default 'operatorT(blockVectorX) =
          blockVectorX'.

     Optional constraints input:
        * BLOCKVECTORY (class numeric) - a full or sparse n-by-sizeY
          matrix of constraints, where sizeY < n.  BLOCKVECTORY must be
          full rank.  The iterations will be performed in the
          (operatorB-) orthogonal complement of the column-space of
          BLOCKVECTORY.

     Optional scalar input parameters:
        * RESIDUALTOLERANCE (class numeric) - tolerance, by default,
          'residualTolerance = n * sqrt (eps)'
        * MAXITERATIONS - max number of iterations, by default,
          'maxIterations = min (n, 20)'
        * VERBOSITYLEVEL - either 0 (no info), 1, or 2 (with pictures);
          by default, 'verbosityLevel = 0'.

     Required output:
        * BLOCKVECTORX and LAMBDA (class numeric) both are computed
          blockSize eigenpairs, where 'blockSize = size (blockVectorX,
          2)' for the initial guess BLOCKVECTORX if it is full rank.

     Optional output:
        * FAILUREFLAG (class integer) as described above.
        * LAMBDAHISTORY (class numeric) as described above.
        * RESIDUALNORMSHISTORY (class numeric) as described above.

     Functions 'operatorA(blockVectorX)', 'operatorB(blockVectorX)' and
     'operatorT(blockVectorX)' must support BLOCKVECTORX being a matrix,
     not just a column vector.

     Every iteration involves one application of OPERATORA and
     OPERATORB, and one of OPERATORT.

     Main memory requirements: 6 (9 if 'isempty(operatorB)=0') matrices
     of the same size as BLOCKVECTORX, 2 matrices of the same size as
     BLOCKVECTORY (if present), and two square matrices of the size
     3*blockSize.

     In all examples below, we use the Laplacian operator in a 20x20
     square with the mesh size 1 which can be generated in MATLAB by
     running:
          A = delsq (numgrid ('S', 21));
          n = size (A, 1);

     or in MATLAB and Octave by:
          [~,~,A] = laplacian ([19, 19]);
          n = size (A, 1);

     Note that 'laplacian' is a function of the specfun octave-forge
     package.

     The following example:
          [blockVectorX, lambda, failureFlag] = lobpcg (randn (n, 8), A, 1e-5, 50, 2);

     attempts to compute the first 8 eigenpairs without preconditioning,
     but not all eigenpairs converge after 50 steps, so failureFlag=1.

     The next example:
          blockVectorY = [];
          lambda_all = [];
          for j = 1:4
            [blockVectorX, lambda] = lobpcg (randn (n, 2), A, blockVectorY, 1e-5, 200, 2);
            blockVectorY           = [blockVectorY, blockVectorX];
            lambda_all             = [lambda_all' lambda']';
            pause;
          end

     attempts to compute the same 8 eigenpairs by calling the code 4
     times with blockSize=2, using orthogonalization against the
     previously found eigenvectors.

     The following example:
          R       = ichol (A, struct ('michol', 'on'));  % lower triangular factor, R*R' approx A
          precfun = @(x) R'\(R\x);
          [blockVectorX, lambda, failureFlag] = lobpcg (randn (n, 8), A, [], @(x)precfun(x), 1e-5, 60, 2);

     computes the same eigenpairs in fewer than 25 steps, so that
     failureFlag=0, using the preconditioner function 'precfun' defined
     inline.  If 'precfun' is defined as an Octave function in a file,
     the function handle '@(x)precfun(x)' can equivalently be replaced
     by the function name 'precfun'.  Running:

          [blockVectorX, lambda, failureFlag] = lobpcg (randn (n, 8), A, speye (n), @(x)precfun(x), 1e-5, 50, 2);

     produces similar answers, but is somewhat slower and needs more
     memory as technically a generalized eigenproblem with B=I is solved
     here.

     The following example for a mostly diagonally dominant sparse
     matrix A demonstrates different types of preconditioning, compared
     to the standard use of the main diagonal of A:

          clear all; close all;
          n       = 1000;
          M       = spdiags ([1:n]', 0, n, n);
          precfun = @(x)M\x;
          A       = M + sprandsym (n, .1);
          Xini    = randn (n, 5);
          maxiter = 15;
          tol     = 1e-5;
          [~,~,~,~,rnp] = lobpcg (Xini, A, tol, maxiter, 1);
          [~,~,~,~,r]   = lobpcg (Xini, A, [], @(x)precfun(x), tol, maxiter, 1);
          subplot (2,2,1), semilogy (r'); hold on;
          semilogy (rnp', ':>');
          title ('No preconditioning (top)'); axis tight;
          M(1,2)  = 2;
          precfun = @(x)M\x; % M is no longer symmetric
          [~,~,~,~,rns] = lobpcg (Xini, A, [], @(x)precfun(x), tol, maxiter, 1);
          subplot (2,2,2), semilogy (r'); hold on;
          semilogy (rns', '--s');
          title ('Nonsymmetric preconditioning (square)'); axis tight;
          M(1,2)  = 0;
          precfun = @(x)M\(x+10*sin(x)); % nonlinear preconditioning
          [~,~,~,~,rnl] = lobpcg (Xini, A, [], @(x)precfun(x), tol, maxiter, 1);
          subplot (2,2,3),  semilogy (r'); hold on;
          semilogy (rnl', '-.*');
          title ('Nonlinear preconditioning (star)'); axis tight;
          M       = abs (M - 3.5 * speye (n, n));
          precfun = @(x)M\x;
          [~,~,~,~,rs] = lobpcg (Xini, A, [], @(x)precfun(x), tol, maxiter, 1);
          subplot (2,2,4),  semilogy (r'); hold on;
          semilogy (rs', '-d');
          title ('Selective preconditioning (diamond)'); axis tight;

     References
     ==========

     This main function 'lobpcg' is a version of the preconditioned
     conjugate gradient method (Algorithm 5.1) described in A. V.
     Knyazev, Toward the Optimal Preconditioned Eigensolver: Locally
     Optimal Block Preconditioned Conjugate Gradient Method, SIAM
     Journal on Scientific Computing 23 (2001), no.  2, pp.  517-541.
     <http://dx.doi.org/10.1137/S1064827500366124>

     Known bugs/features
     ===================

        * an excessively small requested tolerance may result in
          frequent restarts and instability.  The code is not written to
          produce eps-level accuracy; use common sense.
        * the code may be very sensitive to the number of eigenpairs
          computed, if there is a cluster of eigenvalues not completely
          included, cf.
               operatorA = diag ([1 1.99 2:99]);
               [blockVectorX, lambda] = lobpcg (randn (100, 1),operatorA, 1e-10, 80, 2);
               [blockVectorX, lambda] = lobpcg (randn (100, 2),operatorA, 1e-10, 80, 2);
               [blockVectorX, lambda] = lobpcg (randn (100, 3),operatorA, 1e-10, 80, 2);

     Distribution
     ============

     The main distribution site: <http://math.ucdenver.edu/~aknyazev/>

     A C-version of this code is a part of the
     <http://code.google.com/p/blopex/> package and is directly
     available, e.g., in PETSc and HYPRE.


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 61
Solves Hermitian partial eigenproblems using preconditioning.



# name: <cell-element>
# type: sq_string
# elements: 1
# length: 7
ndcovlt


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 771
 -- Function File: Y = ndcovlt (X, T1, T2, ...)
     Computes an n-dimensional covariant linear transform of an n-d
     tensor, given a transformation matrix for each dimension.  The
     number of columns of each transformation matrix must match the
     corresponding extent of X, and the number of rows determines the
     corresponding extent of Y.  For example:

            size (X, 2) == columns (T2)
            size (Y, 2) == rows (T2)

     The element 'Y(i1, i2, ...)' is defined as a sum of

            X(j1, j2, ...) * T1(i1, j1) * T2(i2, j2) * ...

     over all j1, j2, ....  For two dimensions, this reduces to
            Y = T1 * X * T2.'

     An empty matrix [] passed as a transformation matrix is converted
     to the identity matrix for the corresponding dimension.
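
     For instance, the two-dimensional case above can be checked with a
     small sketch (random matrices, with sizes chosen to match the rules
     stated above):

          X  = rand (3, 4);
          T1 = rand (2, 3);                 % rows (T1) -> size (Y, 1)
          T2 = rand (5, 4);                 % rows (T2) -> size (Y, 2)
          Y  = ndcovlt (X, T1, T2);
          norm (Y - T1 * X * T2.', "fro")   % should be of order eps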


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 80
Computes an n-dimensional covariant linear transform of an n-d tensor,
given a t



# name: <cell-element>
# type: sq_string
# elements: 1
# length: 6
ndmult


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 1730
 -- Function File: C = ndmult (A,B,DIM)
     Multidimensional scalar product.

     Given multidimensional arrays A and B with entries A(i1,i2,...,in)
     and B(j1,j2,...,jm), and the 1-by-2 dimension array DIM with
     entries [N,K], assume that

          size (A, N) == size (B, K)

     Then the function calculates the product


          C (i1,...,iN-1,iN+1,...,in,j1,...,jK-1,jK+1,...,jm) =
           = sum_over_s A(i1,...,iN-1,s,iN+1,...,in)*B(j1,...,jK-1,s,jK+1,...,jm)


     For example, if 'size(A) == [2,3,4]' and 'size(B) == [5,3]', then
     'C = ndmult(A,B,[2,2])' produces 'size(C) == [2,4,5]'.

     This function is useful, for example, when calculating Gramian
     matrices of a set of signals produced from different experiments.
            nT      = 100;
            t       = 2 * pi * linspace (0,1,nT).';
            signals = zeros (nT,3,2); % 2 experiments measuring 3 signals at nT timestamps

            signals(:,:,1) = [sin(2*t) cos(2*t) sin(4*t).^2];
            signals(:,:,2) = [sin(2*t+pi/4) cos(2*t+pi/4) sin(4*t+pi/6).^2];

            sT(:,:,1) = signals(:,:,1).';
            sT(:,:,2) = signals(:,:,2).';
            G = ndmult (signals, sT, [1 2]);

     In the example G contains the scalar product of all the signals
     against each other.  This can be verified in the following way:
            s1 = 1; e1 = 1; % first signal in the first experiment
            s2 = 1; e2 = 2; % first signal in the second experiment
            [G(s1,e1,s2,e2)  signals(:,s1,e1)'*signals(:,s2,e2)]
     You may want to re-order the scalar products into a 2-by-2
     arrangement (representing pairs of experiments) of Gramian
     matrices.  The command 'G = permute(G,[1 3 2 4])' does this.


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 31
Multidimensional scalar product



# name: <cell-element>
# type: sq_string
# elements: 1
# length: 8
nmf_bpas


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 3864
 -- Function File: [W, H, ITER, HIS] = nmf_bpas (A, K)
     Nonnegative Matrix Factorization by Alternating Nonnegativity
     Constrained Least Squares using Block Principal Pivoting/Active Set
     method.

     This function solves one of the following problems: given A and K,
     find W and H such that

     (1) minimize 1/2 * || A-WH ||_F^2

     (2) minimize 1/2 * ( || A-WH ||_F^2 + alpha * || W ||_F^2 + beta *
     || H ||_F^2 )

     (3) minimize 1/2 * ( || A-WH ||_F^2 + alpha * || W ||_F^2 + beta *
     (sum_(i=1)^n || H(:,i) ||_1^2 ) )

     where W>=0 and H>=0 elementwise.  The input arguments are A, the
     input data matrix (m x n), and K, the target low rank.

     *Optional Inputs*
     'Type'
          Default is 'regularized', which is recommended for quick
          application testing unless 'sparse' or 'plain' is explicitly
          needed.  If sparsity is needed for the 'W' factor, then apply
          this function to the transpose of 'A' with formulation (3).
          Then, exchange 'W' and 'H' and take the transpose of each.
          Imposing sparsity for both factors is not recommended and thus
          not included in this software.
          'plain'
               to use formulation (1)
          'regularized'
               to use formulation (2)
          'sparse'
               to use formulation (3)

     'NNLSSolver'
          Default is 'bp', which is in general faster.
          'bp'
               to use the algorithm in [1]
          'as'
               to use the algorithm in [2]

     'Alpha'
          Parameter alpha in formulation (2) or (3).  Default is the
          average of all elements in A.  There is no good justification
          for this default value, and you might want to try other
          values.
     'Beta'
          Parameter beta in formulation (2) or (3).  Default is the
          average of all elements in A.  There is no good justification
          for this default value, and you might want to try other
          values.
     'MaxIter'
          Maximum number of iterations.  Default is 100.
     'MinIter'
          Minimum number of iterations.  Default is 20.
     'MaxTime'
          Maximum amount of time in seconds.  Default is 100,000.
     'Winit'
          (m x k) initial value for W.
     'Hinit'
          (k x n) initial value for H.
     'Tol'
          Stopping tolerance.  Default is 1e-3.  If you want to obtain a
          more accurate solution, decrease 'Tol' and increase 'MaxIter'
          at the same time.
     'Verbose'
          If present the function will show information during the
          calculations.

     *Outputs*
     'W'
          Obtained basis matrix (m x k)
     'H'
          Obtained coefficients matrix (k x n)
     'iter'
          Number of iterations
     'HIS'
          If present the history of computation is returned.

     Usage Examples:
           nmf_bpas (A,10)
           nmf_bpas (A,20,'verbose')
           nmf_bpas (A,30,'verbose','nnlssolver','as')
           nmf_bpas (A,5,'verbose','type','sparse')
           nmf_bpas (A,60,'verbose','type','plain','Winit',rand(size(A,1),60))
           nmf_bpas (A,70,'verbose','type','sparse','nnlssolver','bp','alpha',1.1,'beta',1.3)

     References: [1] For using this software, please cite:
     Jingu Kim and Haesun Park, Toward Faster Nonnegative Matrix
     Factorization: A New Algorithm and Comparisons,
     In Proceedings of the 2008 Eighth IEEE International Conference on
     Data Mining (ICDM'08), 353-362, 2008
     [2] If you use 'NNLSSolver' = 'as' (see above), please cite:
     Hyunsoo Kim and Haesun Park, Nonnegative Matrix Factorization Based
     on Alternating Nonnegativity Constrained Least Squares and Active
     Set Method, SIAM Journal on Matrix Analysis and Applications, 2008,
     30, 713-730
     Check original code at <http://www.cc.gatech.edu/~jingu>

     See also: nmf_pg.


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 80
Nonnegative Matrix Factorization by Alternating Nonnegativity
Constrained Least 



# name: <cell-element>
# type: sq_string
# elements: 1
# length: 6
nmf_pg


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 910
 -- Function File: [W, H] = nmf_pg (V, WINIT, HINIT, TOL, TIMELIMIT,
          MAXITER)

     Non-negative matrix factorization by alternating non-negative least
     squares using projected gradients.

     The matrix V is factorized into two non-negative matrices W and H
     such that 'V = W*H + U', where U is a matrix of residuals that can
     be negative or positive.  When the matrix V is positive, the order
     of magnitude of the elements in U is bounded by the optional named
     argument TOL (default value '1e-9').

     The factorization is not unique and depends on the initial guess
     for the matrices W and H.  You can pass these initializations using
     the optional named arguments WINIT and HINIT.

     TIMELIMIT and MAXITER limit the computation time and the maximum
     number of iterations.

     Examples:

            A      = rand (10, 5);
            [W, H] = nmf_pg (A, tol = 1e-3);
            U      = W*H - A;
            disp (max (abs (U(:))));


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 80
Non-negative matrix factorization by alternative non-negative least
squares usin



# name: <cell-element>
# type: sq_string
# elements: 1
# length: 9
rotparams


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 447
 -- Function File: [VSTACKED, ASTACKED] = rotparams (RSTACKED)
     Inverse of rotv(): W = rotparams (R) is such that rotv(w)*r' ==
     eye(3).

     If called as [V, A] = rotparams (R), then V is 1x3 and w == a*v.

     0 <= norm(w) == a <= pi

     Note: does not check whether 'r' is actually a rotation matrix.

     Matrices with zero rows or with NaNs are ignored (0 is returned for
     them).
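
     A small sketch of the relation above (the axis is only
     illustrative):

          R = rotv ([0.3 -0.2 0.5]);       % some rotation matrix
          w = rotparams (R);
          norm (rotv (w) * R' - eye (3))   % ~eps, since rotv (w) reproduces R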

     See also: rotv.


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 51
The function w = rotparams (r) - Inverse to rotv().



# name: <cell-element>
# type: sq_string
# elements: 1
# length: 4
rotv


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 498
 -- Function File: R = rotv ( v, ang )
     The function rotv calculates the matrix of rotation about V with
     angle |v|: R = rotv (V [, ANG]).

     Returns the rotation matrix with axis v and angle, in radians,
     norm(v) or ang (if present).

     rotv(v) == w'*w + cos(a) * (eye(3)-w'*w) - sin(a) * crossmat(w)

     where a = norm (v) and w = v/a.

     v and ang may be vertically stacked: if 'v' is 2x3, then rotv(v) ==
     [rotv(v(1,:)); rotv(v(2,:))].
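
     A small usage sketch (only generic rotation-matrix properties are
     checked here, so it is independent of the sign convention of
     crossmat):

          v = [0 0 pi/2];                        % rotation of pi/2 about the z axis
          R = rotv (v);
          norm (R * R' - eye (3))                % ~eps: R is orthogonal
          norm (rotv (v) * rotv (-v) - eye (3))  % ~eps: -v gives the inverse rotation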



     See also: rotparams, rota, rot.


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 80
The function rotv calculates the matrix of rotation about V with angle
|v|: R = rotv (V [, ANG]).



# name: <cell-element>
# type: sq_string
# elements: 1
# length: 8
smwsolve


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 612
 -- Function File: X = smwsolve (A, U, V, B)
 -- Function File: smwsolve (SOLVER, U, V, B)
     Solves the square system '(A + U*V')*X == B', where U and V are
     matrices with several columns, using the Sherman-Morrison-Woodbury
     formula, so that a system with A as left-hand side is actually
     solved.  This is especially advantageous if A is diagonal, sparse,
     triangular or positive definite.  A can be sparse or full; the
     other matrices are expected to be full.  Instead of a matrix A, a
     user may alternatively provide a function SOLVER that performs the
     left division operation.
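
     A minimal usage sketch of the first form (the matrices below are
     only illustrative):

          n = 100;  k = 2;
          A = spdiags (rand (n, 1) + 1, 0, n, n);   % sparse diagonal A
          U = rand (n, k);  V = rand (n, k);
          B = rand (n, 1);
          X = smwsolve (A, U, V, B);
          norm ((A + U*V') * X - B)                 % should be of order eps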


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 80
Solves the square system '(A + U*V')*X == B', where U and V are matrices
with se



# name: <cell-element>
# type: sq_string
# elements: 1
# length: 4
thfm


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 692
 -- Function File: Y = thfm (X, MODE)
     Trigonometric/hyperbolic functions of square matrix X.

     MODE must be the name of a function.  Valid functions are 'sin',
     'cos', 'tan', 'sec', 'csc', 'cot' and all their inverses and/or
     hyperbolic variants, and 'sqrt', 'log' and 'exp'.

     The code 'thfm (x, 'cos')' calculates the matrix cosine _even if_
     the input matrix X is _not_ diagonalizable.

     _Important note_: This algorithm does _not_ use an eigensystem
     similarity transformation.  It maps the MODE functions to functions
     of 'expm', 'logm' and 'sqrtm', which are known to be robust with
     respect to non-diagonalizable ('defective') X.
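
     For example, a small sketch with a defective (non-diagonalizable)
     matrix, using the identity sin(X)^2 + cos(X)^2 == eye, which also
     holds for matrix functions:

          X = [1 2; 0 1];               % Jordan block, not diagonalizable
          C = thfm (X, 'cos');
          S = thfm (X, 'sin');
          norm (S^2 + C^2 - eye (2))    % should be of order eps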

     See also: funm.


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 54
Trigonometric/hyperbolic functions of square matrix X.



# name: <cell-element>
# type: sq_string
# elements: 1
# length: 14
vec_projection


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 455
 -- Function File: OUT = vec_projection (X, Y)
     Compute the vector projection of a 3-vector onto another.  X and Y
     have size 1 x 3; TOL has size 1 x 1.

               vec_projection ([1,0,0], [0.5,0.5,0])
               => 0.70711

     Vector projection of X onto Y, where both are 3-vectors, returning
     the value of X along Y.  The function uses the dot product, the
     Euclidean norm, and the angle between the vectors to compute the
     proper length along Y.


# name: <cell-element>
# type: sq_string
# elements: 1
# length: 57
Compute the vector projection of a 3-vector onto another.





