(last updated: 08 August 2007)

Permanent codes (download)

This code estimates the permanent of a nonnegative matrix using the Huber-Law methodology, and also calculates the permanent exactly for small matrices using Ryser's method.


Calculates the Huber-Law (hl) factors for a vector of row sums


hl = hl_factor(x,hl)


Minc's inequality (conjectured by Minc, proved by Bregman) states that the permanent of a matrix with entries either 0 or 1 is bounded above by the product of the Minc factors of its row sums, where the factor for a row sum a is (a!)^(1/a). Soules extended this bound to entries in [0,1] using a different factor. Huber-Law also handles entries in [0,1], but in such a way that the upper bound given by the factors can be used to drive an acceptance/rejection algorithm for drawing weighted permutations from the matrix and estimating the permanent. This function evaluates these Huber-Law factors, which grow asymptotically like the original Minc factors.
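The exact form of the Huber-Law factors is not reproduced on this page, but the classical Minc factors they generalize are fully specified above. As a minimal sketch (in Python rather than the package's MATLAB), here is the Bregman-Minc upper bound for a 0-1 matrix; the function names are illustrative, not part of the package.

```python
from math import factorial

def minc_factor(a):
    """Minc factor for a row sum a: (a!)^(1/a); taken as 1 when a == 0."""
    return factorial(a) ** (1.0 / a) if a > 0 else 1.0

def minc_upper_bound(A):
    """Bregman-Minc upper bound on the permanent of a 0-1 matrix:
    the product of the Minc factors of the row sums."""
    bound = 1.0
    for row in A:
        bound *= minc_factor(sum(row))
    return bound

# For the 3x3 all-ones matrix the permanent is 3! = 6 and the bound
# ((3!)^(1/3))^3 = 6 is tight.
A = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
print(minc_upper_bound(A))
```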


Next subset


[ a, more ] = ksub_next ( n, k, a, more )


Using a Gray code, finds the next subset of size k from n elements.
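ksub_next steps through this ordering one call at a time. As an illustrative Python sketch (generating the whole sequence at once rather than incrementally), here is the standard revolving-door construction of such a Gray code, in which consecutive k-subsets differ by swapping a single element; this is a plausible analogue of the ordering, not the package's own routine.

```python
from itertools import combinations  # used only to sanity-check the output

def revolving_door(n, k):
    """All k-subsets of {1,...,n} in revolving-door (Gray code) order:
    consecutive subsets differ by removing one element and adding another."""
    if k == 0:
        return [[]]
    if k == n:
        return [list(range(1, n + 1))]
    # Subsets avoiding n, then subsets containing n in reverse order,
    # so the seam between the two halves is also a single swap.
    return (revolving_door(n - 1, k)
            + [s + [n] for s in reversed(revolving_door(n - 1, k - 1))])

for s in revolving_door(4, 2):
    print(s)
```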


Estimate of the permanent of a nonnegative matrix


est = permanent_estimate(A,iterations)


Estimates the permanent of a matrix A by running an acceptance/rejection procedure iterations times. The upper bound on the permanent comes from multiplying together the Huber-Law factors of the rows (in a fashion similar to Minc's inequality). The final estimate est is this upper bound times the fraction of accepted samples, that is, the number of successes divided by iterations.
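Since the Huber-Law factors themselves are not specified on this page, the bound-based sampler cannot be reproduced exactly. The following Python sketch instead uses a simpler sequential importance sampler (in the style of Rasmussen's estimator) that shares the same overall shape: permutations are grown row by row, dead ends play the role of rejections, and the estimate is an average over iterations. All names here are illustrative.

```python
import random

def permanent_estimate_sis(A, iterations, rng=random.Random(0)):
    """Unbiased sequential importance-sampling estimate of per(A) for a
    nonnegative square matrix.  Each trial builds a permutation row by
    row, picking a still-free column with probability proportional to
    its entry; trials that reach a row with no remaining mass
    contribute weight 0 (the analogue of a rejected sample)."""
    n = len(A)
    total = 0.0
    for _ in range(iterations):
        free = list(range(n))          # columns not yet used
        weight = 1.0
        for i in range(n):
            mass = sum(A[i][j] for j in free)
            if mass == 0.0:            # dead end: "rejection"
                weight = 0.0
                break
            weight *= mass
            # Choose a free column j with probability A[i][j] / mass.
            r = rng.random() * mass
            chosen = free[-1]          # fallback guards float round-off
            for j in free:
                r -= A[i][j]
                if r <= 0:
                    chosen = j
                    break
            free.remove(chosen)
        total += weight
    return total / iterations

A = [[1.0, 1.0], [1.0, 1.0]]           # per(A) = 2
print(permanent_estimate_sis(A, 1000))
```

For this all-ones matrix every trial returns exactly 2, so the estimate is exact; in general the variance depends on the matrix.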


Ryser's method for small matrices


[p] = ryser_smalldim(A)


Calculates the permanent of a matrix exactly using Ryser's method. Use is limited to matrices of small dimension, since the method sums over all 2^n column subsets, and to matrices with small entries, as this procedure does not use arbitrary precision integers.
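As a sketch of what such a routine computes, here is Ryser's inclusion-exclusion formula in Python (where integer arithmetic is arbitrary precision, so the small-entry restriction does not apply); the function name is illustrative, not the package's.

```python
from itertools import combinations

def ryser_permanent(A):
    """Permanent of an n x n matrix via Ryser's formula:
    per(A) = (-1)^n * sum over nonempty column subsets S of
    (-1)^|S| * prod_i sum_{j in S} A[i][j].
    Runs in O(2^n * n^2) time, versus n! for the naive expansion."""
    n = len(A)
    total = 0
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in S)
            total += (-1) ** size * prod
    return (-1) ** n * total

print(ryser_permanent([[1, 2], [3, 4]]))   # 1*4 + 2*3 = 10
```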


Sinkhorn balancing of a matrix


[B,x,y] = sinkhorn(A,epsilon)


Sinkhorn balancing scales the rows and columns of a matrix A so that the row sums and column sums approach 1. The procedure is straightforward: first each row is divided by its row sum, making the new row sums equal to 1. Then each column is divided by its column sum. This is one iteration of the Sinkhorn procedure. This back and forth continues until the maximum difference between a row sum and 1 is at most epsilon. The scaled matrix is returned in B, the scaling for each row in the vector x, and the scaling for each column in the vector y. Note that the permanent of B is just the permanent of A times the product of the entries of x and y, so finding or approximating the permanent of B becomes the new goal after balancing.
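A minimal Python sketch of this iteration, assuming a square nonnegative matrix with no zero rows or columns so the normalizations are well defined (a max_iter guard is added here as a safety net, not part of the interface described above):

```python
def sinkhorn(A, epsilon, max_iter=10000):
    """Sinkhorn balancing: alternately normalize rows then columns of
    a square nonnegative matrix until every row sum is within epsilon
    of 1.  Returns (B, x, y) with B[i][j] = x[i] * A[i][j] * y[j],
    so per(B) = per(A) * prod(x) * prod(y)."""
    n = len(A)
    x = [1.0] * n
    y = [1.0] * n
    B = [row[:] for row in A]
    for _ in range(max_iter):
        for i in range(n):                  # divide each row by its sum
            r = sum(B[i])
            x[i] /= r
            B[i] = [b / r for b in B[i]]
        for j in range(n):                  # divide each column by its sum
            c = sum(B[i][j] for i in range(n))
            y[j] /= c
            for i in range(n):
                B[i][j] /= c
        if max(abs(sum(row) - 1.0) for row in B) <= epsilon:
            break
    return B, x, y
```

Since columns are normalized last, the column sums of B are 1 to machine precision at exit, while the row sums are within epsilon of 1.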
