$\begingroup$

I have a fixed $n \times n$ matrix $M$ whose entries are all either 0 or 1. I want to compute the product $Mv$ for various vectors $v \in \mathbb{R}^n$ (or over other fields/rings).

Since $M$ is fixed and does not change, is it possible to design a preprocessing algorithm that examines $M$ (possibly taking exponential time in $n$) and subsequently allows each matrix-vector multiplication $Mv$ to be computed in sub-quadratic (e.g., $O(n^{2-\epsilon})$) time?

Are there known characterizations of matrices (e.g., based on rank, sparsity pattern, or combinatorial structure) for which such fast multiplication is possible?

Note: any preprocessing must be confined to operations over the ring of integers. Furthermore, the complexity is measured strictly by the number of additions. Crucially, computing a scalar multiple such as $k \cdot a$ (for an integer $k > 1$) would itself require $\sim \lfloor \log_2 k \rfloor$ additions through repeated doubling, and this cost must be accountedted for in the complexity analysis.
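For concreteness, here is a small sketch of the intended cost model (a hypothetical helper, not part of any proposed algorithm), showing how $k \cdot a$ is realized by repeated doubling using on the order of $\log_2 k$ additions:

```python
def scalar_multiple(k, a):
    """Compute k*a using only additions (binary repeated doubling).

    Illustrates the addition-only cost model: the addition count is
    between floor(log2 k) and about 2*floor(log2 k).
    Returns (result, number_of_additions)."""
    assert k >= 1
    adds = 0
    result = None
    power = a  # holds a * 2^i at step i
    while k:
        if k & 1:
            if result is None:
                result = power
            else:
                result = result + power  # one addition
                adds += 1
        k >>= 1
        if k:
            power = power + power  # doubling costs one addition
            adds += 1
    return result, adds
```

For example, `scalar_multiple(5, 3)` returns `(15, 3)`: two doublings plus one combining addition.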

$\endgroup$
  • $\begingroup$ How many ones are in $M$? If it's already $O(n^{2-\epsilon})$ you are done; if instead almost all entries are ones, say at least $n^2 - O(n^{2-\epsilon})$ of them, you can use $Mv = \mathbf{1}v - (\mathbf{1}-M)v$ (with $\mathbf{1}$ the all-ones matrix), which again runs in $O(n^{2-\epsilon})$. $\endgroup$ Commented 10 hours ago
  • $\begingroup$ @Dirk there can be $\frac{n^2}2$ ones, for instance, and then neither $M$ nor $1 - M$ have $O(n^{2-\varepsilon})$ ones $\endgroup$ Commented 10 hours ago
  • $\begingroup$ Another idea (way away from my area of expertise): Build a tree to construct the supports of each row to save computation. $\endgroup$ Commented 10 hours ago
  • $\begingroup$ @DanielWeber Yes, that's why I wrote it like this. $\endgroup$ Commented 10 hours ago
  • $\begingroup$ So, strictly additions only, and not even subtractions allowed? Even though the vector elements are in a ring? $\endgroup$ Commented 1 hour ago

2 Answers

$\begingroup$

I don't know what kinds of matrix patterns you are interested in, but there are some.

Suppose that the $\{0,1\}$-valued matrix $M$ represents a partial order $\le$, i.e. $M_{ij} = 1$ if and only if $i \le j$ (in the partial order). Then, if the partial order has a suitable form, there are fast algorithms for $Mv$. The elementary operations are additions and subtractions, and the time requirement is the number of these operations.

If the order is a lattice with $n$ elements and $j$ join-irreducible elements (elements that cover exactly one element), it can be done in $O(nj)$ time (Björklund et al.), which could be much less than $O(n^2)$.
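To make the flavor of this concrete: in the special case where the lattice is the Boolean lattice of subsets of an $m$-element set (so $n = 2^m$ and the $j = m$ singletons are the join-irreducibles), the $O(nj)$ bound is achieved by the classical fast zeta transform. A minimal Python sketch of that special case (not the general algorithm of the paper):

```python
def fast_zeta_subsets(v):
    """Fast zeta transform on the subset lattice of an m-element set.

    Input v is indexed by bitmask subsets (len(v) == 2**m).
    Output w with w[S] = sum of v[T] over all subsets T of S,
    using O(n*m) additions instead of the naive O(n^2)."""
    w = list(v)
    n = len(w)
    m = n.bit_length() - 1
    for i in range(m):          # one pass per join-irreducible (singleton {i})
        bit = 1 << i
        for S in range(n):
            if S & bit:
                w[S] = w[S] + w[S ^ bit]  # additions only
    return w
```

For $m = 2$, `fast_zeta_subsets([1, 2, 3, 4])` (indices read as bitmasks 00, 01, 10, 11) gives `[1, 3, 4, 10]`, matching the subset sums.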

If the order is an ER-labelable or semimodular lattice, with $e$ edges in its covering relation, it can be done in $O(e)$ time (Kaski et al.).


Björklund, Andreas; Husfeldt, Thore; Kaski, Petteri; Koivisto, Mikko; Nederlof, Jesper; Parviainen, Pekka, Fast zeta transforms for lattices with few irreducibles, ACM Trans. Algorithms 12, No. 1, Article No. 4, 19 p. (2016). ZBL1398.68694.

Kaski, Petteri; Kohonen, Jukka; Westerbäck, Thomas, Fast Möbius inversion in semimodular lattices and ER-labelable posets, Electron. J. Comb. 23, No. 3, Research Paper P3.26, 13 p. (2016). ZBL1377.06006.

$\endgroup$
$\begingroup$

You are looking for a small linear circuit computing the transformation $Mv$. In the general case there might not be a sub-quadratic circuit, but for a rank-$k$ matrix $M$ you can use a factorization $M = AB$ with $A \in \mathbb{Q}^{n \times k}$, $B \in \mathbb{Q}^{k \times n}$ to compute $Mv = A(Bv)$ in $O(nk)$ time.
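A minimal sketch of this trick, assuming the factorization $M = AB$ has already been found during preprocessing (plain Python, hypothetical helper name):

```python
def matvec_lowrank(A, B, v):
    """Given a rank-k factorization M = A @ B (A: n x k, B: k x n),
    compute M v as A (B v): O(nk) multiply-adds instead of O(n^2).
    A, B are lists of rows; v is a list of length n."""
    k = len(B)
    # u = B v : k inner products of length n
    u = [sum(B[r][j] * v[j] for j in range(len(v))) for r in range(k)]
    # result = A u : n inner products of length k
    return [sum(A[i][r] * u[r] for r in range(k)) for i in range(len(A))]
```

For the rank-1 all-ones matrix, `matvec_lowrank([[1], [1]], [[1, 1]], [2, 3])` returns `[5, 5]`, i.e. each output entry is the vector sum computed once.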

For a sparse matrix $M$ with $|M|$ ones you can compute $Mv$ in time $O(|M|)$.
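A sketch of the sparse case, storing the 0/1 matrix as per-row lists of column indices (a hypothetical helper; the addition count is the number of ones minus the number of nonzero rows):

```python
def matvec_sparse01(rows, v):
    """Multiply a 0/1 matrix by v using additions only.

    rows[i] lists the columns j where M[i][j] == 1, so each output
    entry is a sum of selected vector entries; total work is O(|M|)."""
    return [sum(v[j] for j in cols) for cols in rows]
```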

These can both be generalized to decomposing the matrix $M$ into a sum of low-rank or sparse matrices. However, since both rank and sparsity are subadditive, this reduces to decomposing $M$ into the sum of a single low-rank matrix and a single sparse matrix.
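The combined decomposition can be sketched the same way: if $M = AB + S$ with $S$ sparse, evaluate the two parts separately and add. A self-contained toy version (hypothetical helper; $S$ stored row-wise as `(column, value)` pairs):

```python
def matvec_sparse_plus_lowrank(A, B, sparse_rows, v):
    """Compute (A @ B + S) v in O(nk) + O(|S|) scalar operations.

    A: n x k, B: k x n (lists of rows); sparse_rows[i] lists the
    (j, value) pairs of the nonzeros in row i of S."""
    k = len(B)
    u = [sum(B[r][j] * v[j] for j in range(len(v))) for r in range(k)]
    return [sum(A[i][r] * u[r] for r in range(k))
            + sum(val * v[j] for j, val in sparse_rows[i])
            for i in range(len(A))]
```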

This problem is studied in operations research and machine learning, see for example Dimitris Bertsimas, Ryan Cory-Wright, Nicholas A. G. Johnson, Sparse Plus Low Rank Matrix Decomposition: A Discrete Optimization Approach (2023).

In case the matrix has some more specific combinatorial structure it's definitely possible that you can construct better circuits manually, and if $n$ is small you can also try general circuit minimization. I'm not aware of tools for linear circuits specifically, but it's likely that some general approaches can be adapted.

$\endgroup$
  • $\begingroup$ Your proposed approach is not acceptable within the intended computational model. A decomposition involving rational coefficients is not permissible; any preprocessing must be confined to operations over the ring of integers. Furthermore, the complexity is measured strictly by the number of additions. Crucially, computing a scalar multiple such as $k \cdot a$ (for an integer $k > 1$) would itself require $\sim \lfloor \log_2 k \rfloor$ additions through repeated doubling, and this cost must be accounted for in the complexity analysis. $\endgroup$ Commented 9 hours ago
  • $\begingroup$ @max_herman You should add these constraints to the question. $\endgroup$ Commented 8 hours ago
