When a task builds more than just a few binary packages, it makes sense to parallelize gb-task-check-install-arch, because the only thing that can be shared between install checks is the cache.
I have a rough implementation of parallel install checks that currently works on the mipsel (secondary) and riscv64 girar instances. It uses the same build node and user, and runs install checks in parallel using several hasher instances (with different --number). For a 4-core Loongson3A I run 5 install check threads, which gives a 1.6x to 3x speedup, depending on the task. On the other hand, for trivial tasks where only one binary package is built, the parallel version may be slower by a second or so.

I'm not posting links to the code here, mainly because I believe it will be more effective to write something from scratch than to forward-port and clean up my implementation on top of the current girar (I'm still using an ancient pre-ga version). Still, I can write something up on what I did and how, if you think it would be useful.
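As a hedged sketch of the scheme described above (one build node, several hasher instances selected via --number): the function below splits the binary-package list into one chunk per instance and runs one sequential worker per chunk, so no two concurrent checks ever share a hasher instance. run_checks, CHECK, and the list format are illustrative stand-ins, not the actual code.

```shell
# run_checks NPROC PKGLIST
#   NPROC   - number of hasher instances / concurrent workers
#   PKGLIST - file with one binary package per line
# CHECK names the per-package check command and is called as
#   "$CHECK" <instance-number> <package>
# In a real setup it would wrap something like
# "hsh-install --number=$1 -- $2" (an assumption based on the
# comment's mention of --number, not the actual girar invocation).
run_checks() {
    nproc="$1" pkglist="$2"
    pids=
    for num in $(seq 1 "$nproc"); do
        (
            set -e
            # GNU split: print the num-th of nproc line-based chunks.
            # Worker $num owns hasher instance $num, so concurrent
            # checks never touch the same chroot.
            split -n l/"$num"/"$nproc" "$pkglist" |
            while read -r pkg; do
                "$CHECK" "$num" "$pkg"
            done
        ) &
        pids="$pids $!"
    done
    # Propagate the first failure, if any.
    rc=0
    for pid in $pids; do
        wait "$pid" || rc=1
    done
    return "$rc"
}
```

A real implementation would additionally need a per-instance workdir and would cap the worker count at roughly cores + 1 (the 5 threads on a 4-core machine mentioned above).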
(In reply to Dmitry V. Levin from comment #0)
> When a task builds more than just a few binary packages, it makes sense to
> parallelize gb-task-check-install-arch, because the only thing that can be
> shared between install checks is the cache.

Since the cache is so important for install checks, it would make sense to generate the cache once and then have the parallelized install checks reuse it.
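That generate-once, reuse-many ordering can be sketched the same way: run the first install check serially so that it alone pays the cost of generating the cache, then fan the remaining checks out across the other instances against the already-warm cache. Again, check_warm and CHECK are hypothetical stand-ins; how girar would actually prime and share the cache is not shown in this thread.

```shell
# check_warm NINST PKGLIST
#   NINST   - total number of hasher instances (>= 2)
#   PKGLIST - file with one binary package per line
# CHECK is the per-package check command, called as
#   "$CHECK" <instance-number> <package>
# (a hypothetical wrapper around the real hsh invocation).
check_warm() {
    ninst="$1" pkglist="$2"
    # Serial phase: the first check runs alone and generates the
    # cache, so the parallel workers below find it already populated.
    "$CHECK" 1 "$(head -n 1 "$pkglist")"
    # Parallel phase: split the rest across the remaining instances.
    rest=$(mktemp)
    tail -n +2 "$pkglist" > "$rest"
    pids=
    for num in $(seq 2 "$ninst"); do
        (
            set -e
            # GNU split: chunk (num-1) of (ninst-1) for this worker.
            split -n l/"$((num - 1))"/"$((ninst - 1))" "$rest" |
            while read -r pkg; do
                "$CHECK" "$num" "$pkg"
            done
        ) &
        pids="$pids $!"
    done
    rc=0
    for pid in $pids; do
        wait "$pid" || rc=1
    done
    rm -f "$rest"
    return "$rc"
}
```

The serial phase costs one check's worth of wall-clock time, which should be quickly recovered on any task with more than a handful of binary packages.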