Sparse matrix multiplication complexity and CPU time

I am multiplying two sparse matrices $A$ and $A^T$ such that I have $A^T*A$. From what I know, the complexity of this operation depends on nnz(A). Yet when I generate a random matrix with fixed nnz(A) but an increasing number of rows and compute the product $A^T*A$, I see that the CPU time it takes to perform this operation increases. Am I missing something?

Accepted Answer

Dana
9 September 2020

Assuming that by A^T you mean the transpose of A, and assuming you already have both A and A^T stored, then yes, the complexity of A^T*A should depend only on nnz(A) and on the number of rows of A^T (which equals the number of columns of A). So if you increase the number of rows m of A but keep the number of columns fixed, computing time should eventually stop increasing with m. There are some important caveats here, though:

- This again assumes that you have already performed the transpose of A. If you include that step in your timing, then the computation is no longer independent of m, because the transpose itself has complexity linear in m.
- Complexity is a limiting characteristic: it describes how the computational burden grows with m once m is already large. It need not hold for small m.
- On top of that, complexity describes a sort of "average" growth rate of computation time. Actual run times are subject to randomness from several sources, so we don't expect them to be exactly the same as m increases, even when m is large to begin with.
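To make these caveats concrete, here is a minimal timing sketch (not from the original thread; the column count, nonzero target, and variable names are illustrative assumptions). It keeps nnz(A) roughly fixed while growing m, times the product alone with the transpose precomputed, and also times an expression that recomputes the transpose each call, using timeit to average out run-to-run noise.

n  = 200;                          % fixed number of columns (assumed value)
nz = 5e4;                          % target nnz(A), held roughly fixed (assumed value)
for m = [1e4 1e5 1e6 1e7]
    A  = sprand(m, n, nz/(m*n));   % random sparse m-by-n matrix with ~nz nonzeros
    At = A';                       % transpose computed once, outside the timing
    tProduct = timeit(@() At*A);   % product alone: should level off as m grows
    tWithT   = timeit(@() (A')*A); % includes the transpose, whose cost grows with m
    fprintf('m = %8d, nnz = %6d, At*A: %.4f s, (A'')*A: %.4f s\n', ...
        m, nnz(A), tProduct, tWithT);
end

If the reasoning above holds, the At*A column should roughly stop growing once m is large, while the (A')*A column keeps increasing because of the transpose; individual timings will still fluctuate somewhat from run to run.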