Dear all,
The A{k} are square, non-symmetric, large (NxN with N in 1E5-5E5), sparse (typical density < 1E-4), and strictly column diagonally dominant matrices. They all have the same (block, see EDIT 1 below) structure/fill pattern:
and b is Nx1, with nnz(b) typically between 1 and 1E4.
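For concreteness, a synthetic setup with the same shape would look something like this (sizes, number of matrices, and values below are made up just to fix ideas; my real A{k} and b come from a simulation):
N  = 1e5 ;                                   % real cases: N in 1E5-5E5
S  = sprand( N, N, 1e-4 ) ;                  % shared sparsity pattern (illustrative only)
nK = 10 ;                                    % made-up number of matrices
A  = cell( 1, nK ) ;
for k = 1 : nK
    Ak       = S ;
    Ak(S~=0) = randn( nnz(S), 1 ) ;          % same pattern, different values
    Ak       = Ak + spdiags( full(sum(abs(Ak),1)).' + 1, 0, N, N ) ;  % force strict column diagonal dominance
    A{k}     = Ak ;
end
b = sparse( randi(N,100,1), 1, randn(100,1), N, 1 ) ;   % sparse RHS, nnz(b) ~ 1E2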
---- Digression start ----
I already discussed the case A*X+B=0 in another context, for B NxM with M large, where I can take advantage of a single LU factorization of a single A and solve in parallel:
[L, U, P, Q, R] = lu( A ) ;                                 % single factorization of A
parfor k = 1 : numel( B_cell )
    X{k} = -( Q * (U \ (L \ (P * (R \ B_cell{k})))) ) ;    % A*X + B = 0  =>  X = -A\B
end
where B_cell splits B into blocks of appropriate size, as discussed in this thread. ---- Digression end ----
My question in the current context is: is there a transformation/factorization that exploits the fact that all A{k} have exactly the same structure/fill pattern (i.e. the non-zero elements are always in the same places), which I could compute e.g. for A{1} and then re-use to accelerate the solution of A{k}*x+b=0 for all the other A{k} (e.g. by providing MLDIVIDE (or LINSOLVE with smaller, dense blocks) with matrices that shortcut part of the factorization)?
Also, could the block-structure and the partial filling of b be used in any manner?
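To make the first question concrete, the kind of shortcut I have in mind would look something like this (just a sketch; colamd is only one candidate for a re-usable ordering, and I don't know whether this actually saves much compared to letting LU/MLDIVIDE analyse each A{k} from scratch):
q = colamd( A{1} ) ;                          % fill-reducing column ordering, computed once
x = cell( size(A) ) ;
for k = 1 : numel( A )
    [L, U, P] = lu( A{k}(:,q) ) ;             % numeric factorization with the fixed ordering
    z         = full( U \ (L \ (P * (-b))) ); % A{k}*x + b = 0, so solve A{k}(:,q)*z = -b
    x{k}      = zeros( size(A{k},1), 1 ) ;
    x{k}(q)   = z ;                           % undo the column permutation
end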
EDIT 2, 2017/12/19 @ 17:32 UTC - Along this line: if I perform an LU decomposition as shown in the digression above, for example, I get matrices P (row permutation) and Q (column reordering) that will (or should) be the same for all A{k}. Noting that:
[L, U, P, Q, R] = lu( A{k} ) ;                    % factorization recomputed for every k
x{k} = -( Q * (U \ (L \ (P * (R \ b)))) ) ;       % A{k}*x + b = 0  =>  x = -A{k}\b
is always more efficient than a plain
x{k} = -( A{k} \ b ) ;
because we avoid the analysis of A{k}: is there a way to exploit the fact that P and Q are known after the first iteration (i.e. P and Q computed for k=1 will remain the same for all k)?
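Concretely, I am wondering whether the ordering could simply be pulled out of Q after the first factorization and re-used for the remaining A{k}. A sketch of what I mean (I am not sure this is numerically safe, nor how much of the analysis it actually bypasses):
[L, U, P, Q, R] = lu( A{1} ) ;                    % full analysis + factorization, done once
x{1}   = -( Q * (U \ (L \ (P * (R \ b)))) ) ;
[q, ~] = find( Q ) ;                              % permutation vector such that A{1}*Q == A{1}(:,q)
% ... then, for k = 2, 3, ..., re-use q as in the colamd sketch above,
% i.e. [L, U, P] = lu( A{k}(:,q) ) instead of recomputing an ordering for each A{k}.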
Cheers,
Cedric
EDIT 1, 2017/12/14 @ 19:39 UTC
Here is another sparse matrix where I delineated the block structure. It is associated with another simulation, hence the different size, but in this simulation all A{k} are 99149x99149 with the same structure: