diff --git "a/-NE5T4oBgHgl3EQfRg5P/content/tmp_files/2301.05521v1.pdf.txt" "b/-NE5T4oBgHgl3EQfRg5P/content/tmp_files/2301.05521v1.pdf.txt" new file mode 100644--- /dev/null +++ "b/-NE5T4oBgHgl3EQfRg5P/content/tmp_files/2301.05521v1.pdf.txt" @@ -0,0 +1,2448 @@ +AIR multigrid with GMRES polynomials (AIRG) and additive preconditioners +for Boltzmann transport⋆ +S. Dargavillea, R.P. Smedley-Stevensonb,a, P.N. Smithc,a, C.C. Paina +aApplied Modelling and Computation Group, Imperial College London, SW7 2AZ, UK +bAWE, Aldermaston, Reading, RG7 4PR, UK +cANSWERS Software Service, Jacobs, Kimmeridge House, Dorset Green Technology Park, Dorchester, DT2 8ZB, UK +Abstract +We develop a reduction multigrid based on approximate ideal restriction (AIR) for use with asymmetric linear systems. +We use fixed-order GMRES polynomials to approximate A−1 +ff and we use these polynomials to build grid transfer +operators and perform F-point smoothing. We can also apply a fixed sparsity to these polynomials to prevent fill-in. +When applied in the streaming limit of the Boltzmann Transport Equation (BTE), with a P0 angular discretisation +and a low-memory spatial discretisation, this “AIRG” multigrid used as a preconditioner to an outer GMRES iteration +outperforms the lAIR implementation in hypre, with two to three times less work. AIRG is very close to scalable; we +find either fixed work in the solve with slight growth in the setup, or slight growth in the solve with fixed work in the +setup when using fixed sparsity. Using fixed sparsity we see less than 20% growth in the work of the solve with either +6 levels of spatial refinement or 3 levels of angular refinement. In problems with scattering AIRG performs as well as +lAIR, but using the full matrix with scattering is not scalable. +We then present an iterative method designed for use with scattering which uses the additive combination of two +fixed-sparsity preconditioners applied to the angular flux; a single AIRG V-cycle on the streaming/removal operator +and a DSA method with a CG FEM. We find with space or angle refinement our iterative method is very close to +scalable with fixed memory use. +Keywords: Asymmetric multigrid, Advection, Radiation transport, Boltzmann, AIR, GMRES polynomials +1. Introduction +The Boltzmann transport equation (BTE) describes the distribution of particles moving through an interacting +medium and is used to model radiation transport, along with spectral-waves and fluid problems through kinetic and +lattice-Boltzmann methods. The mono-energetic steady-state form of the Boltzmann Transport Equation (BTE), with +linear scattering and straight-line propagation is written in (1) as +Ω · ∇rψ(r, Ω) + σtψ(r, Ω) − +� +Ω′ σs(r, Ω′ → Ω)ψ(r, Ω′)dΩ′ = S e(r, Ω). +(1) +Equation (1) is a 5-dimensional linear PDE, with three spatial dimensions and two angular dimensions; we neglect +the energy and time dimensions. The angular flux, ψ(r, Ω), describes the number of particles moving in direction +Ω, at spatial position r. The macroscopic total cross section of the material that the particles are moving through is +given by σt, which describes particles removed either through absorption or scattering in the material. The source of +particles coming from scattering in many radiative processes is given by an integral term, where σs is the macroscopic +scatter cross-sections for this process that describes how particles scatter from direction Ω′ into direction Ω. Finally +any external sources of particles are given by S e. 
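To make the angular terms in (1) concrete, the short sketch below evaluates the isotropic in-scatter source at a single spatial point once the angular variable has been discretised with constant-per-element (P0) angular elements of the kind described later in Section 2.1; the 8-element octant mesh, the cross-section value and the flux values are illustrative assumptions, not taken from the paper.

import numpy as np

# Level 1 angular mesh of Section 2.1: one constant (P0) angular element per
# octant, so 8 equal-area elements covering the 4*pi steradians of the sphere.
n_angles = 8
weights = np.full(n_angles, 4.0 * np.pi / n_angles)
psi = np.random.rand(n_angles)      # angular flux at one spatial point (illustrative)
sigma_s = 0.5                       # isotropic scatter cross-section (illustrative)

# Scalar flux (0th angular moment) and the isotropic in-scatter source of (1),
# i.e. the integral of sigma_s/(4*pi) * psi over all incoming directions, which
# is the same for every outgoing direction Omega.
phi = np.dot(weights, psi)
scatter_source = sigma_s * phi / (4.0 * np.pi)
print(phi, scatter_source)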
+⋆UK Ministry of Defence © Crown owned copyright 2023/AWE +Email address: dargaville.steven@gmail.com (S. Dargaville) +Preprint submitted to Elsevier +January 16, 2023 +arXiv:2301.05521v1 [physics.comp-ph] 13 Jan 2023 + +One of the main challenges in solving (1) is that when the scattering cross-sections are large (i.e., the parti- +cles are interacting with the material), the BTE tends to a diffusion equation, whereas when the scattering and total +cross-sections are zero (i.e., particles are moving through a vacuum), the BTE is purely hyperbolic and hence stable +discretisations must be used and the resulting linear systems are asymmetric and non-normal. +In (1), the scattering cross-section, σs, for any given angle-to-angle scattering event is often described by a Leg- +endre expansion whose coefficients we denote as σs. If we discretise in space/angle (with a discretisation like Sn, or +FEM), we introduce, ϕ, which is the angular flux, ψ, in Legendre space. We then write (1) as a 2 × 2 block system, +namely +� +I +Dm +−MmΣs +L +� �ϕ +ψ +� += +� 0 +ˆSe +� +, +(2) +where L is the streaming/removal operator, Σs is a matrix with the scattering cross-sections for each spatial node, +Dm and Mm are formed from the tensor product of the spatial mass matrices and the mapping operators which map +between our angular discretisation and the moments of the Legendre space and ˆSe is the discretised source term from +(1). When discretised in space with an upwind discretisation and an appropriate ordering of ψ is applied, L is a block- +diagonal matrix, where each of the blocks is lower triangular and corresponds to the spatial coupling for each direction +(i.e., each direction in L is not coupled to the others; L represents a set of advection equations for each direction). +Equation (2) is typically solved by forming the Schur complement of block L, namely +(I + DmL−1MmΣs)ϕ = −DmL−1 ˆSe, +(3) +and then a preconditioned Richardson iteration (known as a source iteration in the transport community), with pre- +conditioner M−1 is applied to recover +ϕn+1 = ϕn − M−1(DmL−1(MmΣsϕn + ˆSe) + ϕn). +(4) +Computing the solution to (2) therefore only requires the Legendre representation of the angular flux (the angular flux +can easily be formed, either at a single spatial point or across the domain if needed), as each of the components of +(4) are block-diagonal. This allows for a very low memory iterative method. If there is scattering in the problem, +then typically an additive preconditioner like M−1 = I + D−1 +diff is used; this is known as diffusion-synthetic acceleration +(DSA), where Ddiff is a diffusion operator, with diffusion/sink/boundary coefficients taken from an asymptotic analysis +of the BTE in the diffusion limit. +The spatial discretisation applied to the diffusion operator in DSA can govern its effectiveness in accelerating +convergence; research into discretisations of the diffusion operator that are effective and/or “consistent” with transport +are extensive. The use of a Krylov method to solve (3) instead of a Richardson method can reduce the dependence on +this consistency [1] and hence “inconsistent” discretisations can be applied to the diffusion operator as part of a DSA +resulting in SPD systems that can be solved efficiently. Transport synthetic acceleration (TSA) [2] has also been used +to accelerate convergence in the scattering limit, where lower order transport solutions are used instead of diffusion; +see [3] for a review of both DSA and TSA method in transport. 
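As a concrete illustration of the source iteration (4), the sketch below runs the preconditioned Richardson iteration on an infinite-medium toy problem (one spatial cell, isotropic scatter), so that the block operators of (2) reduce to small dense matrices; the sizes, cross-sections and the unaccelerated choice M = I are assumptions made purely for illustration, with a comment marking where a DSA preconditioner would act.

import numpy as np

n_ang, four_pi = 8, 4.0 * np.pi
w = np.full(n_ang, four_pi / n_ang)          # P0 angular quadrature weights
sigma_t, sigma_s, S = 1.0, 0.9, 1.0          # illustrative cross-sections/source

L = sigma_t * np.eye(n_ang)                  # streaming/removal (no streaming here)
Dm = -w.reshape(1, -1)                       # block row 1 of (2): phi + Dm psi = 0
Mm = np.full((n_ang, 1), 1.0 / four_pi)      # moment -> angle mapping
Sig_s = np.array([[sigma_s]])
S_e = np.full(n_ang, S / four_pi)            # isotropic external source

Linv = np.linalg.inv(L)
phi = np.zeros(1)
for it in range(500):                        # Richardson/source iteration (4), M = I
    residual = Dm @ Linv @ (Mm @ Sig_s @ phi + S_e) + phi
    phi = phi - residual                     # with DSA, apply M^-1 = I + Ddiff^-1 here
    if np.linalg.norm(residual) < 1e-12:
        break
print(it, phi[0], S / (sigma_t - sigma_s))   # converges to the infinite-medium solution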
+The solution of (3) relies on the exact inversion of L; if as mentioned above the spatial and angular discretisations +result in L with lower-triangular structure then L can be inverted exactly with a single (matrix-free) Gauss-Seidel +(GS) iteration, also known as a sweep in the transport community. If unstructured spatial grids are used, inverting L +can become more difficult, particularly in parallel; finding an ideal sweep ordering is then NP-complete. Furthermore, +the introduction of different spatial or angular discretisations, time dependence or additional physics that cannot be +mapped to Legendre space compounds this problem. Similarly, if we wish to use angular adaptivity where the angular +resolution differs throughout the spatial grid, L is no longer block-diagonal. +Our goal in the AMCG has been to investigate different discretisations, adaptive and iterative methods for solving +the BTE, that have the potential to overcome these difficulties while remaining scalable (i.e., perform a fixed amount of +work with a fixed memory consumption, even with space/angle refinement). We instead form the Schur complement +of block I from (2) and solve the system formed with the angular flux, namely +(L + MmΣsDm)ψ = ˆSe. +(5) +There are several disadvantages to solving (5) instead of (3), the most important of which is the considerable increase +in memory required. The angular flux, ψ, is much bigger than ϕ and the matrix L + MmΣsDm is not block diagonal; +2 + +instead the scattering operator MmΣsDm couples different angles together and results in dense angle-angle blocks +where the nnzs scale like O(n2) with angle size. These disadvantages would typically preclude the development of a +practical transport algorithm. Previously we have tackled those problems through the combination of: using a stable +spatial discretisation based on static condensation that has the stencil of a CG discretisation (reducing the size of ψ +but at the cost of making the blocks in L no longer lower triangular), using angular adaptivity to only focus angular +resolution where required (reducing the size of ψ), and using a matrix-free multigrid to solve (5) that does not rely on +the explicit construction of MmΣsDm or on the lower triangular structure in L. +We showed previously [4, 5] that these techniques perform well on many transport problems, allowing the practical +use of both angular adaptivity with high levels of refinement and unstructured spatial grids. These methods do not +use GS/sweep smoothers however and do not scale well in the streaming limit where σs tends to zero. Similarly, +most multigrid methods in the literature that achieve good performance for the BTE have relied on either block- +based smoothers, often on a cell/element, which do not scale with increasing angle size but which perform well in +the scattering limit [6, 7, 8, 9], or GS/sweeps as smoothers which perform well in the streaming limit [10, 11, 12, +13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25] but as mentioned can be difficult to parallelise on unstructured +grids. Developing multigrid methods which perform well with local smoothers is difficult, particularly as hyperbolic +problems have long proved difficult to solve with multigrid due to the lack of developed theory for non-SPD matrices. +Recently multigrid methods based on approximate ideal restrictors (AIR) have been developed [26, 27, 28, 29] that +show good convergence in asymmetric problems, and in particular with the BTE when using point-based smoothers +[29]. 
+The aim of this work is hence to build an iterative method that solves (5) with the following features: +1. Compatible with the space/angle discretisations we have developed previously [4, 5] and hence does not rely +on L having block diagonal and/or lower triangular structure +2. Does not require the explicit construction of MmΣsDm +3. Does not require the use of GS/sweeps +4. Scalable in both the streaming and scattering limits with space/angle refinement +5. Compatible with angular adaptivity +6. Good strong and weak scaling in parallel with unstructured grids +We examine the first four of these points in this paper, and leave investigation of adaptivity and the parallel per- +formance of our method to future work. Here we present two contributions: the first is an algaebraic multigrid +method based on combining AIR with low-order GMRES polynomials, which we call AIRG; the second is an iter- +ative method where we apply additive preconditioners to an outer GMRES iteration on the angular flux, based on +the streaming/removal operator and a DSA diffusion operator. Both these contributions can be used independently of +the other; for example we could use AIRG to invert L and/or the DSA operator as part of a typical DG FEM/source +iteration. +We then compare our iterative method to the hypre implementation of lAIR and find performance advantages. +We therefore have an iterative method on unstructured grids which never requires the assembly of the full matrix in +(5), that has both good performance and fixed memory consumption across all parameter regimes with space/angle +refinement for Boltzmann transport problems. +2. Discretisations +We begin by presenting the spatial and angular discretisations used in this work; they are based on those presented +in [30, 31, 32, 33, 4, 5] and hence we only discuss their key features. +3 + +2.1. Angular discretisation +We use a P0 DG FEM in angle (or equivalently a cell-centred FVM) with constant area azi/polar elements that +we normalise so that the angular mass matrix is the identity. The first level of our angular discretisation is denoted +level 1, with one constant basis function per octant. Each subsequent level of refinement comes from splitting an +angular element into four at the midpoint of the azi/cosine polar bounds; this is structured, nested refinement. This is +equivalent to the Haar wavelet discretisations discussed in [4]; indeed as part of the matrix-free iterative method in [4] +an O(n) mapping to/from this P0 space to Haar space was performed during every matvec. We instead choose to solve +in P0 space as we can form an assembled copy of the streaming/removal matrix that has fixed sparsity with angular +refinement; this property was not needed for the matrix-free methods used in our previous work, all we required was a +scalable mapping between P0 and Haar spaces. We can also adapt in this P0 space in the same manner as our wavelet +space [4], given the equivalence. We investigate adapting in P0 space in future work. +2.2. Spatial discretisation +Our spatial discretisation is a sub-grid scale FEM, which represents the angular flux as ψ = φ + θ, where φ is the +solution on a “coarse” scale and θ is the solution on a “fine” scale. The finite element expansions for both the fine and +coarse scales can be written as +φ(r, Ω) ≈ +ηN +� +i=1 +Ni(r)˜φi(Ω); +θ(r, Ω) ≈ +ηQ +� +i=1 +Qi(r)˜θi(Ω), +(6) +with ηN continuous basis functions, Ni, and ηQ discontinuous basis functions, Qi, with ˜φi and ˜θi the expansion co- +efficients, respectively. 
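The nested P0 angular refinement described above can be sketched as follows, representing each angular element by its azimuthal and cosine-polar bounds and splitting it into four at the midpoints; the tuple-based layout is an assumed illustration, not the data structure used in this work.

import numpy as np

# Each angular element is a box in (azimuthal angle, cosine of polar angle).
# Level 1 is one element per octant; each refinement level splits an element
# into four at the midpoints of its azi/cosine-polar bounds.
def level1_octants():
    elems = []
    for a in range(4):                       # 4 azimuthal quadrants
        for m in range(2):                   # 2 polar halves (mu < 0, mu >= 0)
            elems.append((a * np.pi / 2, (a + 1) * np.pi / 2,
                          -1.0 + m, 0.0 + m))
    return elems

def refine(elems):
    out = []
    for (a0, a1, m0, m1) in elems:
        am, mm = 0.5 * (a0 + a1), 0.5 * (m0 + m1)
        out += [(a0, am, m0, mm), (am, a1, m0, mm),
                (a0, am, mm, m1), (am, a1, mm, m1)]
    return out

def weight(e):                               # element area on the unit sphere
    a0, a1, m0, m1 = e
    return (a1 - a0) * (m1 - m0)

angles = level1_octants()                    # 8 elements, 1 per octant
for level in range(2, 4):
    angles = refine(angles)                  # 32, then 128 elements
print(len(angles), sum(weight(e) for e in angles))   # total weight is 4*pi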
In this work we use linear basis functions for both the continuous and discontinuous spatial +expansions. +As described in Section 2.1, we use a P0 discretisation, with (constant) basis functions G j(Ω), with a spatially +varying number of angular elements ηi +A and ηi +D on the coarse and fine scales respectively (we enforce that DG nodes +have the same angular expansion as their CG counterparts). The expansion coefficients ˜φi and ˜θi in (6) can then be +written as space/angle expansion coefficients ˜φi, j and ˜θi,j via the expansion +˜φi(Ω) ≈ +ηi +A +� +j=1 +G j(Ω)˜φi, j; +˜θi(Ω) ≈ +ηi +D +� +j=1 +G j(Ω)˜θi,j. +(7) +Following standard FEM theory the discretised form of (1) can then be written as +�A +B +C +D +� �Φ +Θ +� += +�SΦ +SΘ +� +, +(8) +where Φ and Θ are vectors containing the coarse and fine scale expansion coefficients, we denote the number of +unknowns in Φ as NCDOFs and in Θ as NDDOFs. The discretised source terms for both scales are SΦ and SΘ. We +should note the matrices A and D are the standard CG and DG FEM matrices that result from discretising (1). +We can then form a Schur complement of block D and recover +(A − BD−1C)Φ = SΦ − BD−1SΘ. +(9) +The fine solution Θ can then be computed +Θ = D−1(SΘ − CΦ), +(10) +and our discrete solution is the addition of both the coarse and fine solutions, namely Ψ = Φ + Θ (where the coarse +solution Φ has been projected onto the fine space). In order to solve (9) and (10) efficiently/scalably, we must +sparsify D and this sparsification is dependent on the angular discretisation used, see [33, 34, 35, 36, 37, 38, 4, 39] for +examples. In this work we replace D−1 in (9) and (10) with ˆD +−1, which is the streaming operator with removal and +self-scatter only, and vacuum conditions applied on each DG element (as this removes the jump terms that couple the +DG elements, resulting in element blocks). We can then invert this matrix element-wise and store the result, with a +4 + +constant nnzs with space/angle refinement, as it has the same stencil as the streaming operator. Now if we consider +the streaming/removal (denoted with a subscript Ω) and scattering contributions (denoted with a subscript S) in (9) +separately, along with our sparsified ˆD +−1 we can rearrange (9) and write +� +AΩ − BΩ ˆD +−1CΩ +� +Φ + +� +(AS + BS(y + ˆD +−1CΩ) + BΩy +� +Φ = SΦ − (BΩ + BS) ˆD +−1SΘ. +(11) +where y = ˆD +−1CS and our fine component is Θ = ˆD +−1(SΘ−(CΩ+CS)Φ). We can see that (11) is now written similarly +to (5), where the left term is (very close to) the sub-grid scale streaming/removal operator, with the remainder being +the contribution from scattering. It is not exactly the streaming/removal operator as ˆD +−1 contains self-scatter, but it has +the same stencil as the streaming/removal operator and we call it such below for simplicity; practically we can modify +our stabilisation such that ˆD +−1 does not contain self-scatter without any substantial differences to either the stability +of our system or the preconditioning described in Section 3. As mentioned in Section 1, we cannot explicitly form the +scattering contribution as it has dense blocks, but given that ˆD +−1 has fixed sparsity and the scatter contributions from +AS, BS and CS can be formed in Legendre space and mapped back, we can scalably compute a matrix-free matvec +with angular refinement in P0 space (with a fixed scatter order). 
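A minimal sketch of the element-wise elimination behind (9)-(11) is given below: a block-diagonal sparsified D is inverted one element block at a time and used to form the coarse-scale operator; all matrices and sizes here are random placeholders rather than the FEM operators of this section.

import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n_cg, n_ele, blk = 40, 30, 3           # coarse (CG) DOFs, elements, DG block size
n_dg = n_ele * blk

# Random placeholders for the coarse/fine coupling blocks of (8).
A = sp.random(n_cg, n_cg, density=0.1, random_state=0) + 10 * sp.eye(n_cg)
B = sp.random(n_cg, n_dg, density=0.05, random_state=1)
C = sp.random(n_dg, n_cg, density=0.05, random_state=2)

# Dhat: one small dense block per element (vacuum conditions per element remove
# the DG jump terms), so its inverse is computed and stored block by block.
blocks = [rng.standard_normal((blk, blk)) + 5 * np.eye(blk) for _ in range(n_ele)]
Dhat_inv = sp.block_diag([np.linalg.inv(b) for b in blocks], format="csr")

coarse_op = (A - B @ Dhat_inv @ C).tocsr()   # the operator that appears in (12)
print(coarse_op.shape, coarse_op.nnz)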
+Practically, there are many ways to implement a matvec for (11) (these involve further rearrangement of (11)), +depending on the amount of memory available. Some key points include that away from boundaries, the element +matrices in AΩ, BΩ, CΩ and ˆD +−1 all share the same (block) sparsity; in fact given matching angular resolution and the +same order of basis functions on the continuous and discontinuous spatial meshes (we enforce both conditions) two of +the element matrices are identical, with BΩ and CΩ related by the transpose of the spatial table. A matvec with these +components can therefore consist of performing several matvecs on small (max 3 × 3 or 4 × 4) angular blocks which +can be kept in cache (similar to the work performed during a DG sweep, but without the Gauss-Seidel dependency on +ordering). We can also combine a number of the maps to/from Legendre space. We describe some of the choices we +make in Section 7, but note that a FLOP count shows our sub-grid scale matvec with scattering can be computed with +1.8x the FLOPs of an equivalent DG matvec. +With our sub-grid scale discretisation, we are therefore choosing to increase the number of (local) FLOPs in order +to significantly decrease our memory consumption. We can build iterative methods that only depend on the the coarse- +scale solution, Φ, which is on a CG stencil, which in 3D has approximately 20× fewer unknowns than a DG method; +e.g., a GMRES(20) space built on Φ would take approximately the same space as a single copy of Ψ in 3D. One +additional benefit is as noted in [4], is that with either a wavelet or non-wavelet angular discretisation, our sub-grid +scale discretisation does not require interpolation between areas of different angular resolution (e.g., across faces), +due to the lack of DG jump terms. +3. Additively preconditioned iterative method +This section details the iterative method we use to solve (11). For simplicity we begin by writing the unseparated +form (9) with our sparsified ˆD +−1 and introduce a preconditioner M−1 on the right to give +(A − B ˆD +−1C)M−1u = SΦ − BD−1SΘ, +u = Mψ. +(12) +We solve (12) with GMRES and use a matrix-free matvec as described above to compute the action of (A − B ˆD +−1C) +(and to compute the source and Θ). The preconditioner we apply is based on the additive combination of a two-level +method in angle and the streaming/removal operator, which we denote as +M−1 = M−1 +angle + M−1 +Ω . +(13) +The first of these uses a DSA type operator; both DSA and TSA can be thought of as additive angular multigrid +preconditioners, with DSA forming a two-level multigrid with the coarse level represented by a diffusion equation, +whereas TSA can form multiple coarse grids if desired through lower angular resolution transport discretisations (e.g., +see [6, 14, 16, 17, 18, 20, 40, 41, 24, 25]). We found both were very effective, but in this work we use a DSA method +which we write as +M−1 +angle = RangleD−1 +diffPangle, +(14) +5 + +where the angular restrict/prolong are simply the mappings between the constant moment (i.e., the 0th order scatter +term) and our P0 angular space. Ddiff is a standard DSA diffusion operator with diffusion coefficient of κ = 1/3σs and +Robin conditions on vacuum boundaries that we discretise with a CG FEM; this makes our DSA “inconsistent” but +we find this performs well with a Krylov method as the outer iteration, as in [1]. For further discussions we appeal to +the wealth of literature on different acceleration methods. 
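The sketch below shows how the additive preconditioner (13) can be wired around the outer GMRES iteration of (12), with the DSA component applied as in (14) and the streaming/removal component (defined next in (15)) stood in for by a direct solve; the operators, the averaging/injection angular maps and the LU solves are all placeholder assumptions, not the AIRG V-cycle or CG FEM diffusion solve used in practice.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(1)
n_space, n_ang = 50, 8
n = n_space * n_ang

# Placeholder operators: a diagonally dominant stand-in for A - B Dhat^-1 C, a
# lower-triangular stand-in for the streaming/removal part, and a stand-in
# diffusion operator on the coarse (moment) space.
A_full = sp.random(n, n, density=0.02, random_state=3) + 4 * sp.eye(n)
M_omega = sp.csr_matrix(sp.tril(A_full))
D_diff = sp.random(n_space, n_space, density=0.1, random_state=4) + 4 * sp.eye(n_space)

# Simple averaging/injection maps between the constant angular moment and the
# P0 angular space (assuming angle varies fastest in the DOF ordering).
P_angle = sp.kron(sp.eye(n_space), np.ones((n_ang, 1)) / n_ang)
R_angle = P_angle.T

lu_omega = spla.splu(M_omega.tocsc())
lu_diff = spla.splu(sp.csc_matrix(D_diff))

def apply_prec(r):
    # M^-1 r = M_angle^-1 r + M_omega^-1 r; both would be applied inexactly in
    # practice (AIRG V-cycle, AMG on the diffusion operator); exact LU here.
    angle_part = P_angle @ lu_diff.solve(R_angle @ r)
    return angle_part + lu_omega.solve(r)

M = spla.LinearOperator((n, n), matvec=apply_prec)
b = rng.standard_normal(n)
x, info = spla.gmres(A_full, b, M=M, restart=30, maxiter=1000)
print(info, np.linalg.norm(A_full @ x - b))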
+The second component of our additive preconditioner is based on the sub-grid scale streaming/removal operator +in (11), namely +M−1 +Ω = +� +AΩ − BΩ ˆD +−1CΩ +�−1 +(15) +The additive combination of these two preconditioners is designed to achieve good performance in both the stream- +ing and scattering limits. For problems with both streaming/scattering regions we would apply each of the additive +preconditioners only in regions which require it [42]. It is trivial to form the grid-transfer operators Rangle and Pangle +regardless of angular refinement, but for a scalable iterative method we must be able to apply the inverses of both +the diffusion matrix with a fixed amount of work given spatial refinement, and the streaming/removal operator with +fixed work given space/angle refinement. We can do so inexactly given these are applied as preconditioners. This is +in contrast to a source iteration like (4), where [29] notes that inexact application of L−1 changes the discrete solution +and they show that increasingly accurate solves are required with grid refinement. +The next section describes a novel reduction multigrid that we can use to apply the inverses of our stream- +ing/removal operator, AΩ − BΩ ˆD +−1CΩ (or the streaming operator in the limit of zero total cross-section), and a CG +diffusion operator, Ddiff if desired. For comparison purposes we also test AIRG on the full matrix, A − B ˆD +−1C, in +the scattering limit; we cannot form this matrix scalably but it helps demonstrate that AIRG is applicable in both +advective and diffuse problems. +4. AIRG multigrid +We begin this section with a summary of a reduction multigrid [43, 44, 45, 28, 26, 46]; we should note that +reduction multigrids, block LDU factorisations/preconditioners and multi-level ILU methods all use similar building +blocks (we discuss this further below). If we consider a general linear system Ax = b we can form a block-system +due to a coarse/fine (CF) splitting as +�Aff +Afc +Acf +Acc +� �xf +xc +� += +�bf +bc +� +. +(16) +If we write the prolongator and restrictor as +P = +�W +I +� +, +R = +� +Z +I +� +, +(17) +then we can form a coarse grid matrix as Acoarse = RAP and a multgrid hierarchy can be built by applying the same +technique to Acoarse. To see how the operators R and P are constructed we consider an exact two-grid method, where +down smoothing (“pre”) occurs before restriction and up smoothing occurs after prolongation (“post”). We can write +the error at the ith step as ei = ¯x − xi, where ¯x is the exact solution to our linear system. Following [26], if we have +error on the top grid after the down smooth, we can form our coarse grid residual by computing RAei and hence the +error after a coarse grid solve is A−1 +coarseRAei. The error on the top grid after coarse grid correction, denoted as ei+1 is +hence +ei+1 = (I − PA−1 +coarseRA)ei. +(18) +We can consider the error, ei, to be made up of two components, one of which is in the range of interpolation and a +remainder, namely +ei = +�ef +ec +� += +�Wec +ec +� ++ +�δei +f +0 +� +. +(19) +We can then write (18) as +ei+1 = (I − PA−1 +coarseRA) +�δei +f +0 +� +. +(20) +6 + +If δei +f = 0 then F-point error is in the range of interpolation and the coarse-grid correction is exact, as ei+1 = 0. +Methods based on ideal restriction however choose Z to ensure that +RA +�δei +f +0 +� += 0. +(21) +This enforces that any error on the top grid that is not in the range of interpolation does not make it down to the coarse +grid. 
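The ideal restriction property (21) is easy to verify numerically on a small dense example, using the closed forms Z = -Acf Aff^{-1} and W = -Aff^{-1} Afc that are derived in the following paragraphs; the matrix and the CF split below are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(2)
n, nf = 10, 6                                 # the first nf DOFs are the F-points
A = rng.standard_normal((n, n)) + 5 * np.eye(n)
Aff, Afc = A[:nf, :nf], A[:nf, nf:]
Acf, Acc = A[nf:, :nf], A[nf:, nf:]

Z = -Acf @ np.linalg.inv(Aff)                 # ideal restrictor block
W = -np.linalg.inv(Aff) @ Afc                 # ideal prolongator block
R = np.hstack([Z, np.eye(n - nf)])
P = np.vstack([W, np.eye(n - nf)])

delta_ef = np.concatenate([rng.standard_normal(nf), np.zeros(n - nf)])
S = Acc - Acf @ np.linalg.inv(Aff) @ Afc      # Schur complement of Aff

print(np.linalg.norm(R @ A @ delta_ef))       # ~0: F-only error never reaches the coarse grid
print(np.linalg.norm(R @ A @ P - S))          # ~0: the coarse matrix is the Schur complement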
A local form of (21) is used explicitly to form the restrictor in lAIR. Expanding (21) gives the condition +(ZAff + Acf)δei +f = 0, +and hence Z = −AcfA−1 +ff is known as the “ideal” restrictor that satisfies this condition. The error after coarse-grid +correction (20) then becomes +ei+1 = +�ei +f − Wei +c +0 +� +. +(22) +Post F-point smoothing to convergence then gives an exact two-grid method. This is one of the defining characteristics +of a reduction multigrid, that ideal restriction ensures that after coarse-grid correction, the C-point error is zero and +hence F-point smoothing is appropriate; a similar statement can be used to construct an “ideal” prolongator given +by W = −A−1 +ff Afc. Both [26, 28] build approximations to the ideal restrictor and then build a classical one-point +prolongator that interpolates each F-point from it’s strongest C-point connection with injection; by definition this +preserves the constant. [26] show that this is sufficient to ensure good convergence in advection problems, and [27] +prove more general criteria on the required operators. +With the ideal operators the near-nullspace vectors are naturally preserved and we don’t have to explicitly provide +(or determine through adaptive methods [47, 48]) the near-nullspace vectors; in fact any error modes (near-nullspace or +otherwise) that are not in the range of interpolation stay on the top grid and are smoothed, with the rest being accurately +restricted to the coarse grid by construction; if we consider near-nullspace vectors this can be easily demonstrated by +considering that if A[nf, nc]T ≈ 0, where [nf, nc] is a (partitioned) near-nullspace vector, then with the ideal operators +Acoarsenc ≈ 0. This is in contrast to traditional multigrid methods, which smooth high-frequency modes preferentially +on the top grid, with interpolation specifically designed to transfer low-frequency modes (i.e., near-nullspace vectors) +to the coarse grid, where they become high-frequency and hence can be smoothed easily. +[26] show that the error propagation matrix, ϵ of a reduction multigrid with F point up smooths (and no down +smooths) is given by +ϵ = I − M−1 +LDUA, +(23) +where MLDU is a block LDU factorisation of A given by +MLDU = +� +I +0 +AcfA−1 +ff +I +� �Aff +0 +0 +S +� � +I +A−1 +ff Afc +0 +I +� +, +(24) +where S = Acc − AcfA−1 +ff Afc is the Schur complement of block Aff. The inverse of MLDU is given by +M−1 +LDU = +�I +W +0 +I +� �A−1 +ff +0 +0 +S−1 +� �I +0 +Z +I +� +. +(25) +The ideal operators give that Acoarse = S. Block LDU factorisations formed with approximate ideal operators have a +long history of being used as preconditioners. The same factorisation can be applied to S to recover a multilevel LDU +method. The benefit to using a block LDU method, instead of a reduction multigrid is that up F-point smoothing and +coarse grid correction occur additively (as written in (25)), rather than F-point smoothing occuring after coarse grid +correction with a reduction multigrid (as written in (22)). This is an attractive property in parallel. +If we wish to form a reduction multigrid (or an LDU method) we must approximate A−1 +ff ; we denote the approxi- +mation as ˆA +−1 +ff ≈ A−1 +ff and hence our approximate ideal restrictor and prolongator are given by +P = +� +− ˆA +−1 +ff Afc +I +� +, +R = +� +−Acf ˆA +−1 +ff +I +� +. 
+(26) +7 + +The strength of our approximate ideal operators allows the use of zero “down” smoothing iterations, with F-point +smoothing only on the “up” cycle after coarse-grid correction. The authors in [26] however do perform some C-point +smoothing on their “up” cycle (with F-F-C Jacobi) for robustness, but we did not find that necessary in this work. +Equation (22) suggests a simple choice for the prolongator would be W = 0, but in a multi-level setting where we +are approximating A−1 +ff [28, 26] show this would require an increasingly accurate approximation with grid refinement. +Rather than build a classical prolongator like [26, 28], as we have an approximation of A−1 +ff it only costs one extra +matmatmult per level to compute the ideal prolongator, P. We then keep only the largest (in magnitude) entry, and +hence we form a one-point ideal prolongator. This is like using AIR in conjunction with AIP which [27] suggest +could form a scalable method for non-symmetric systems. This is different to lAIR [28], where Z is computed +directly; computing the ideal prolongator would therefore require an equivalent calculation for W. We find this is a +robust choice for our prolongator (as it satisfies the approximation properties required by [27]) while also being very +simple as it doesn’t require knowledge of the near-nullspace. +To perform up F-point smoothing on each level, we use a Richardson iteration to apply ˆA +−1 +ff . On each level if we +are smoothing Ae = r, where r is the residual computed after the down smooth then +en+1 +f += en +f + ˆA +−1 +ff (rf − Afcen +c − Affen +f ). +(27) +The coarse-grid error does not change during this process, so we can cache the result of Afcen +c during multiple F-point +smooths. The next section details how we construct our approximation ˆA +−1 +ff . +4.1. GMRES polynomials +Forming good, but sparse approximations to A−1 +ff is the key to a reduction multigrid (and LDU methods as men- +tioned); this is achieved in-part by ensuring a “good” CF splitting that results in a well-conditioned Aff. In particular +Aff is often better conditioned (or more diagonally dominant) than A. In this section we assume a suitable CF splitting +has been performed; see Section 5 for more details. +The original AMGr work [44] approximated A−1 +ff using the inverse diagonal of Aff. The nAIR method presented +by [26] on advection-diffusion equations used a matrix-polynomial approximation of A−1 +ff generated from a truncated +Neumann series; unfortunately this does not converge well in the diffuse limit (i.e., when Aff is not lower-triangular). +The work in [28] however solved dense, local linear systems, which enforce that RA = 0 within a certain F-point +sparsity pattern to compute Z directly; this was denoted as lAIR. For advection-diffusion equations, this showed good +performance in both the advection and diffuse limit. The work of [29] specifically examined the performance of lAIR +in parallel for the BTE, but used on L in (3). [46] showed the use of sparse approximate inverses (SAIs) [49, 50] on +diffusion equations to approximate A−1 +ff (they also approximated both W and A−1 +cc for C-point smoothing). The variant +of SAI method used in [46] solved the minimisation problem, argminM||I − AM||F (which can be written as a separate +least-squares problem for each column), to generate an approximate inverse M, while enforcing the fixed sparsity of +Aff on M. 
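For reference, a minimal sketch of a sparse approximate inverse of that flavour is given below, solving one small least-squares problem per column with the column sparsity of Aff enforced; the placeholder matrix and the dense per-column solves are assumptions made purely for illustration.

import numpy as np
import scipy.sparse as sp

def sai_fixed_sparsity(Aff):
    # One least-squares problem per column: min || e_j - Aff m_j ||_2 with m_j
    # restricted to the nonzero rows of column j of Aff (the fixed sparsity).
    A = Aff.tocsc()
    n = A.shape[0]
    cols = []
    for j in range(n):
        idx = A[:, j].indices
        Asub = A[:, idx].toarray()
        e_j = np.zeros(n); e_j[j] = 1.0
        m_j, *_ = np.linalg.lstsq(Asub, e_j, rcond=None)
        cols.append(sp.csc_matrix((m_j, (idx, np.zeros(len(idx), dtype=int))), shape=(n, 1)))
    return sp.hstack(cols, format="csc")

Aff = sp.random(60, 60, density=0.08, random_state=6, format="csc") + 4 * sp.eye(60, format="csc")
M = sai_fixed_sparsity(Aff)
print(np.linalg.norm((sp.eye(60) - Aff @ M).toarray()))   # Frobenius norm of I - Aff M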
For LDU methods, [51] outlined many of the techniques used; these include using diagonal approximations, +or incomplete LU (often referred to as multi-level ILU [52]) and Cholesky factorisations. +Rather than explicitly approximate A−1 +ff , many multigrid methods such as smoothed aggregation [53, 54, 55] or +root-node [56] methods build their prolongation operators by minimising the energy (in an appropriate norm) of their +prolongator. This is equivalent to solving AP = 0 (either column-wise or in a global sense) and given the equivalent +construction of (21) for the prolongator, these methods can therefore converge to the ideal prolongator. One of the +benefits of forming the ideal operators through a minimisation process is that constraints on the sparsity of the resulting +operator can be applied at all points through this process. In particular, if this sparsity affects the ability to preserve +near-nullspace vectors, their preservation can be enforced at each step as the minimisation converges (e.g., see both +[57] for an overview and [58] for a general application to asymmetric systems). +In this work we compute ˆA +−1 +ff with GMRES polynomials [59, 60, 61, 62, 63]. Polynomial methods have been +used in multigrid for many years. Examples include, as mentioned, in nAIR [26] to approximate A−1 +ff , in AMLI +to approximate the inverse of the coarse-grid matrix [64, 65] and commonly with Chebychev polynomials used as +smoothers for SPD matrices, among others. The aim of using GMRES polynomials in this work is to ensure good +approximations to ˆA +−1 +ff that don’t depend on particular orderings of the unknowns, diagonal, lower-triangular or block +structure, while still allowing good performance in parallel. As such AIRG should be applicable to many common +8 + +discretisations of advection-diffusion problems, in both limits, or any operators where a suitable splitting can produce +a well-conditioned Aff that a GMRES polynomial can approximate. A key difference between this work and previous +work with AIR is that we also reuse these polynomials as our F-point smoothers. This allows us to build a very simple +and strong multigrid method for asymmetric problems. +We begin by summarising the GMRES method. For a general linear system Ax = b, where A is n × n, we +specify an initial guess x0 = 0 and hence an initial residual r0 = b. +The Krylov subspace of dimension m: +span{b, Ab, A2b, . . . , Am−1b} is used in GMRES to build an approximate solution. This solution at step m can be +written as xm = qm−1(A)b, where qm−1(A) is a matrix polynomial of degree m − 1 known as the GMRES polynomial. +This is the polynomial that minimises the residual ||rm|| = ||p(A)r0||2 subject to p(0) = 1, where p(A) = 1 − Aqm−1(A) +is known as the residual polynomial. +The coefficients of qm−1 correspond to the required linear combinations of the Krylov vectors and hence a typical +GMRES algorithm can be modified to generate them. [61, 62] discuss the generation and application of these polyno- +mials in detail and care must be taken if high order polynomials are desired. Thankfully we are only concerned with +low-order polynomials for use within our multigrid hierarchy and as such consider two different bases. +As part of a typical GMRES algorithm, we consider a set of orthonormal vectors which form a basis for our +subspace, stored in the columns of the matrix Vm (i.e., the Arnoldi basis). Similarly the Krylov vectors b, Ab, . . . make +up the columns of the matrix Km (i.e., the Krylov basis). 
Given the vectors in Vm and Km span the same space we can +write them as linear combinations of each other and hence Vm = KmCm, where Cm is of size m×m. Typically GMRES +doesn’t store Cm but a small modification can be made to store these values at each GMRES step. The approximate +solution produced by GMRES is given by xm = x0 + Vmym, where ym comes from the solution of the least-squares +problem. The coefficients for the polynomial qm−1 can then be computed through (α0, . . . , αm−1)T = Cmym (as in [60]). +Equivalently (in exact arithmetic) a QR factorisation of Km+1 = QR can be computed. If we form the submatrix ˜R +from R but without the first column, and note that β = R1,1 then the polynomial coefficients come from the solution of +the least-squares problem (α0, . . . , αm−1)T = argminym||βe1 − ˜Rym||2, where e1 is the first column of the m + 1 identity. +GMRES methods normally don’t use the Krylov basis directly as it is poorly conditioned when m → ∞, although this +is typically not a concern at low-order; for example [61] found using the Krylov basis was stable up to 10th order. In +either case our GMRES polynomial of degree m − 1 is given by +qm−1(A) = α0 + α1A + α2A2 + . . . + αm−1Am−1. +(28) +At each step of a GMRES method, the subspace size, m, grows and hence the GMRES polynomial changes. Polyno- +mial preconditioning methods typically freeze the size of the subspace and perform a precompute step to generate a +GMRES polynomial of a fixed order. This polynomial is then used as a stationary preconditioner, often for GMRES +itself. In this work we want to approximate A−1 +ff on each multigrid level; we use GMRES polynomials with a fixed +polynomial order. +To generate the coefficients for our GMRES polynomials, during our multigrid setup, on each level of our multigrid +hierarchy we set m to a fixed value, assign an initial guess of zero and use a random rhs (see [66, 67, 62]). We can +then chose to use either the Arnoldi or Krylov basis with Aff to form our coefficients. The Arnoldi basis requires m +steps of the modified GMRES described above; this costs m matvecs along with a number of dot products and norms +(i.e., reductions) on each level. +Instead we could use the Krylov basis, Km+1, which still requires m matvecs (the setup of a matrix-power kernel +may not be worth it given we only need to compute our low-order polynomial coefficients once), but in parallel a tall- +skinny QR (TSQR) factorisation can be used to decrease the number of reductions. This relies on QR factorisations +of small local blocks along with a single all-reduce to generate R; we don’t require Q to compute our polynomial +coefficients and hence the small local Q blocks can be discarded. This is equivalent to modifying a communication- +avoiding GMRES with s = 1 [68, 67] to generate the coefficients. We tested using both the Arnoldi and Krylov basis +in this work and they generate the same polynomial coefficients to near round-off error which shows that stability is +not a concern at such low orders. Hence we can use the Krylov basis in parallel and the generation of the polynomial +coefficients can be considered as communication-avoiding. +Rather than use a random vector, we could form a GMRES polynomial by using a block method and solving +ZAff = −Acf. That polynomial would be tailored to computing Z which is not desirable given we also use our +polynomial to perform F-point smoothing and hence want to probe all the modes of Aff, as discussed above. 
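The coefficient generation from the Krylov basis described above can be sketched as follows: build K_{m+1}, take a QR factorisation, set beta = R_{1,1} and solve the small least-squares problem with R minus its first column; the matrix standing in for Aff and the Horner-style application are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(4)
n, m = 200, 4                                  # m = 4 gives a cubic polynomial q_3
A = rng.standard_normal((n, n)) / np.sqrt(n) + 2 * np.eye(n)
b = rng.standard_normal(n)                     # random rhs, zero initial guess

K = np.empty((n, m + 1))
K[:, 0] = b
for j in range(m):
    K[:, j + 1] = A @ K[:, j]                  # Krylov basis columns b, Ab, ..., A^m b

R = np.linalg.qr(K, mode="r")                  # only R is needed for the coefficients
beta = R[0, 0]
rhs = np.zeros(m + 1); rhs[0] = beta
alpha, *_ = np.linalg.lstsq(R[:, 1:], rhs, rcond=None)

# Apply q_{m-1}(A) to a vector: x = (alpha_0 I + alpha_1 A + ...) v, Horner-style.
def apply_poly(A, alpha, v):
    x = alpha[-1] * v
    for a in alpha[-2::-1]:
        x = A @ x + a * v
    return x

v = rng.standard_normal(n)
print(np.linalg.norm(v - A @ apply_poly(A, alpha, v)) / np.linalg.norm(v))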
+9 + +If we wish to use low-order GMRES polynomials to approximate A−1 +ff , we risk not having converged our approxi- +mate ideal operators sufficiently. Similarly, the introduction of fixed sparsity, drop tolerances or equivalent in ˆA +−1 +ff , P, +R or Acoarse may exacerbate this. We examine the impact of both varying m and sparsifying our operators below in +Section 4.2. +4.1.1. Fixed sparsity polynomials +In the limit m → n, the GMRES polynomials converge to A−1 +ff exactly. Practically this is impossible to achieve +given the storage requirements. Even at low-order however, we would like to avoid the fill-in that comes from explic- +itly forming our polynomials with m > 2. To do this, we can construct our polynomials by enforcing a fixed sparsity +on each of the matrix powers; for simplicity we chose the sparsity of Aff. This is particularly suited to convection +operators given that the fill-in should be small. For example, if we consider a third-order polynomial (i.e., m = 4), and +denoting the sparsity pattern of Aff as S ⊂ {(i, j) | (Aff)i, j � 0}, we enforce that +( ˜A2 +ff)i,j = (AffAff)i,j, +( ˜A3 +ff)i, j = ( ˜A2 +ffAff)i, j +(i, j) ∈ S, +(29) +and for (i, j) not in S , the entries are zero. This is simply computing A2 +ff with no fill-in, using this approximation when +computing subsequent matrix-powers and again enforcing no fill-in on the result. Our fixed-sparsity approximation to +A−1 +ff with m = 4 would then be given by +ˆA +−1 +ff = α0 + α1Aff + α2 ˜A2 +ff + α3 ˜A3 +ff ≈ q3(Aff). +(30) +Fixing the sparsity of the matrix powers reduces the memory consumption of our hierarchy and also allows us to +optimise the construction of our polynomial. For example, with m = 4 it costs two matrix-matrix additions and two +matmatmults to explicitly construct our polynomial, but given the shared sparsity, the additions can be performed +quickly and the matmatmults with m > 2 can share the same row data required when computing A2 +ff. In parallel this +means we only require the communication of off-processor row data once with m > 2. Computing low-order GMRES +polynomials with fixed sparsity with the Krylov basis therefore only requires the communication associated with m +matvecs, a single all-reduce and if m > 2 the matmatmult which produces A2 +ff, regardless of the polynomial order. +This has the potential to scale well, in contrast to some of the other methods which approximate A−1 +ff described +above. We cannot use nAIR with truncated Neumann series due to the lack of lower triangular structure in our spatial +discretisation, while with lAIR we found we require greater than distance 2 neighbours for scalability (we exam- +ine this in Section 7), but the communication required for this becomes prohibitive. Using ILU factorisations make +parallelisation difficult given the sequential nature of the underlying Gaussian elimination; approximate ILU factori- +sations have more parallelism [69, 70] but they still require triangle solves (which could then also be approximated +with truncated Neumann series for better performance in parallel, for example). The SAIs used by [46] should scale +well, given that if the fixed sparsity of Aff is used, then the formation of an approximate inverse only requires the +same communication as computing A2 +ff; we must however also consider the local cost of computing a SAI and the +effectiveness of its approximation; we examine this in Section 7. 
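A minimal sketch of the fixed-sparsity construction (29)-(30) is given below: each matrix power is masked back onto the sparsity pattern of Aff so no fill-in occurs; the placeholder matrix and the polynomial coefficients are assumptions, as in practice the coefficients come from the GMRES polynomial setup of Section 4.1.

import numpy as np
import scipy.sparse as sp

def mask_to_pattern(M, pattern):
    # Keep only the entries of M that lie inside the given sparsity pattern
    # (a 0/1 CSR matrix); everything else is dropped, i.e. no fill-in is allowed.
    out = M.multiply(pattern).tocsr()
    out.eliminate_zeros()
    return out

def fixed_sparsity_polynomial(Aff, alpha):
    pattern = (Aff != 0).astype(float).tocsr()
    n = Aff.shape[0]
    Ahat = alpha[0] * sp.eye(n, format="csr")
    power = sp.eye(n, format="csr")
    for a in alpha[1:]:
        power = mask_to_pattern(power @ Aff, pattern)   # A^k restricted to S, as in (29)
        Ahat = Ahat + a * power
    return Ahat.tocsr()

Aff = sp.random(100, 100, density=0.05, random_state=5, format="csr") + 3 * sp.eye(100, format="csr")
alpha = np.array([0.4, -0.05, 0.01, -0.001])            # placeholder q_3 coefficients
Ahat_inv = fixed_sparsity_polynomial(Aff, alpha)
print(Aff.nnz, Ahat_inv.nnz)                            # same stencil: no fill-in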
Typically with AIR the approximations $\hat{A}_{ff}^{-1}$ on each level are thrown away once the grid-transfer operators have been built and different F-point smoothers are used; as mentioned we save them to use as smoothers instead. The GMRES polynomials are very strong smoothers (which don't require the calculation of any extra dampening parameters) but storing them explicitly takes extra memory. We examine the total memory consumption of the AIRG hierarchy in Section 7, but note that given the fixed sparsity of $\hat{A}_{ff}^{-1}$, our experiments with pure streaming show it has approximately 65% as many non-zeros as R. We could instead throw away $\hat{A}_{ff}^{-1}$ after our grid transfer operators are computed on each level and perform F-point smoothing by applying $q_{m-1}(A_{ff})$ matrix-free. With m ≤ 2 the number of matvecs required is the same and hence $\hat{A}_{ff}^{-1}$ could be discarded. With m > 2 however, applying $q_{m-1}(A_{ff})$ matrix-free would require more matvecs in order to apply the matrix-powers. As such we believe the (constant sized) extra memory required is easily justified. We also investigated using Chebyshev polynomials as smoothers. We precomputed the required eigenvalue estimates with GMRES in order to build the bounding circle/ellipse given our asymmetric linear systems. We found that smoothing with these polynomials was far less efficient/robust (and practically more difficult) than using the GMRES polynomials; indeed using the GMRES polynomials directly is one of the key messages of [60].

4.1.2. Drop tolerance on Aff
If our original linear system Ax = b is not sparse, neglecting the fill-in with the fixed sparsity polynomials in Section 4.1.1 may not be sufficient by itself to ensure a practical multigrid method. We can also introduce a drop tolerance to Aff that is applied prior to constructing our polynomial approximations $\hat{A}_{ff}^{-1}$. In general this is not necessary in the streaming limit, but with scattering we find it helps keep the complexity low. This is similar to the strong R threshold used in the hypre implementation of lAIR, which determines strong neighbours prior to the construction of Z, and in Section 7 we denote it as such. We can either apply the dropping after the polynomial coefficients are computed (in a similar fashion to how the fixed sparsity in Section 4.1.1 is applied), or before such that we are forming a polynomial approximation to a sparsified Aff. With scattering, we did not find much of a difference in convergence so we chose to apply the dropping before, as this helps reduce the cost of the matvec used to compute the polynomial coefficients.

4.2. Approximations of $A_{ff}^{-1}$
[Figure 1, panels (a)-(c): eigenvalue scatter plots in the complex plane; real axis approximately 0.8 to 1.4, imaginary axis approximately -0.2 to 0.2.]
(a) Operators formed from GMRES polynomials without fixed sparsity.
(b) Operators formed from fixed sparsity GMRES polynomials and relative drop tolerances applied to the resulting R and P.
(c) Operators formed from fixed sparsity GMRES polynomials, relative drop tolerances applied to the resulting R and P and relative drop tolerances applied to the resulting Acoarse.
Figure 1: Eigenvalue distribution of $A_{coarse}^{-1}S$ on the second level of a pure streaming problem, where S is the exact Schur complement formed from the ideal restrictor and prolongator (i.e., the exact coarse-grid matrix) and Acoarse is the approximate coarse grid matrix formed from our approximate ideal restrictor and prolongator. The colours correspond to different GMRES polynomial orders for approximating $A_{ff}^{-1}$, with m = 1, m = 2, m = 3 and m = 4.

As mentioned in Section 4.1, the low-order polynomials we use and the fixed sparsity described in Section 4.1.1 may impact the operators in our multigrid hierarchy. Given this we examine the impact of our approximations to $A_{ff}^{-1}$ and the resulting operators by considering the spectrum of $A_{coarse}^{-1}S$ and Acoarse, where Acoarse is the coarse matrix formed from our approximate operators and S is the exact coarse grid matrix.

As an example, we use AIRG on a pure streaming problem with an unstructured grid, with two levels of uniform refinement in angle (giving 16 angles). Fig. 1 plots the spectrum of $A_{coarse}^{-1}S$, which ideally should approach one. Fig. 1a shows the result of constructing our coarse grid matrix with increasing orders of our GMRES polynomial, but without fixing the sparsity of the matrix powers, so we are using q0(Aff), q1(Aff), q2(Aff) and q3(Aff) exactly. We can see increasing the order increases the accuracy of our resulting coarse grid matrix, as would be expected, with the eigenvalues converging to one. The radius of a circle that bounds the eigenvalues reduces from 0.1760 to 0.0072 with m = 1 to m = 4, respectively.

Fig. 1b shows the result of introducing both the fixed sparsity matrix powers discussed in Section 4.1.1 and relative drop tolerances applied to the resulting R and P operators. For the restrictor we drop any entry in a row that is less than 0.1 times the maximum absolute row entry, and for the prolongator we keep only the largest entry in each row. We can see these approximations are detrimental to Acoarse compared with Fig. 1a, with the eigenvalues further from one. The bounding circle at all orders has greater radius, with 0.2533 for m = 1 and 0.0913 for m = 4.

Finally, Fig. 1c shows the results from using the same fixed sparsity matrix powers and drop tolerances on R and P while also introducing a relative drop tolerance on the resulting Acoarse, where we drop any entry in a row that is less than 0.1 times the maximum absolute row entry. We again see this further degrades our approximate coarse grid matrix, with the effect of increasing the order of our GMRES polynomials diminished; the bounding circle goes from 0.3539 to 0.3532 with m = 1 to m = 4, respectively.

It is clear that introducing additional sparsity into our GMRES polynomials and resulting operators degrades our coarse matrix. We examine this further by plotting the smallest eigenvalues of Acoarse and S in Fig. 2. In the limit of ideal operators we know the near-nullspace vectors are preserved, but we would like to verify that this is still the case with an approximate ideal restrictor and approximate ideal prolongator. We can see in Fig. 2a that the GMRES polynomial with m = 4 and fixed sparsity does an excellent job of capturing the smallest eigenvalues. Furthermore, introducing both fixed sparsity to the GMRES polynomial and drop tolerances on R and P results in reasonable approximations. We can see in Fig.
2b that introducing the drop tolerances on the resulting Acoarse results in small eigenvalues that do not match the exact coarse grid matrix.

These results indicate that our low-order GMRES polynomials with fixed sparsity, combined with drop tolerances on our approximate ideal R and P, result in an excellent approximation of the coarse grid streaming operator. It is well known that introducing additional sparsity to multigrid operators can harm the resulting operators, and it is clear from these results that introducing a drop tolerance to our coarse grid matrix has the biggest impact. In a multilevel setting however, we find that doing this often results in the best performance, with the slight increase in iterations balanced by the reduced complexity; care must be taken to not make the drop tolerance too high.

5. CF splitting
All multigrid/multilevel methods require the formation of a hierarchy of “grids”; LDU methods and reduction multigrids like in this work require the selection of a subset of DOFs defined as “fine” and “coarse”. For asymmetric linear systems, CF splitting algorithms often result in coarse grids with directionality (i.e., they result in a semi-coarsening), typically through heuristic methods that identify strong connections in matrix entries (e.g., see [71]), with algorithms like CLJP, PMIS, HMIS, etc. [72], or through compatible relaxation [73, 74, 75]. We would like the CF splitting to produce a well-conditioned Aff on each level without giving a large grid or operator complexity across the hierarchy. The effectiveness of some of the approximations used in the literature (described in Section 4) for $A_{ff}^{-1}$ also clearly depends on the sparsity of Aff produced by a CF splitting.

Previous works have used various CF splittings, including those that produce a maximally-independent set, giving a diagonal Aff that is easily inverted [76]; or if a block-independent set is generated then Aff is block-diagonal and the blocks can be inverted directly [77, 78]. With a more general CF splitting, [52] used ILU factorisations to approximate $A_{ff}^{-1}$. [79] produced CF splittings specifically for reduction multigrids and LDU methods that are targeted at producing a diagonally dominant Aff.

[Figure 2, panels (a)-(b): eigenvalue scatter plots in the complex plane; real axis approximately 0.03 to 0.042, imaginary axis approximately -4e-3 to 4e-3.]
(a) The × are eigenvalues of Acoarse with fixed sparsity GMRES polynomial, the ∗ are with fixed sparsity GMRES polynomial and relative drop tolerances applied to the resulting R and P.
(b) The · are eigenvalues of Acoarse with fixed sparsity GMRES polynomial, relative drop tolerances applied to the resulting R and P and relative drop tolerances applied to the resulting Acoarse.
+In this work we use traditional CF splitting algorithms (like in [28]) as we find they perform well enough and +parallel implementations are readily available. Section 7 presents the results from using the lAIR implementation in +hypre and we found that using the Falgout-CLJP algorithm in hypre resulted in good CF splittings. In order to make +fair comparisons, we show results from using AIRG with the same algorithm. +6. Work estimates +One of the key metrics we use to quantify the performance of the iterative methods tested is the number of Work +Units (WUs) required to solve our linear systems; this is a FLOP count scaled by the number of FLOPs required to +compute a matvec. We present several different WU calculations, each of which is scaled by a different matvec FLOP +count. This is in an attempt to show fair comparisons against other multigrid methods, along with source iteration. +To begin, we must first establish a FLOP count for all the different components of our iterative methods. We begin +with our definition of the Cycle Complexity (CC) of AIRG. The CC is the amount of work performed during a single +V-cycle, scaled by the number of nnzs in the top-grid matrix. Our calculation of the CC includes the work performed +during smoothing and grid-transfer operators; we use our definition of CC and WUs in all the results below. We define +the work required to compute a matvec with our matrices on each level as {.}l. For an assembled matrix we set this +as the nnzs. This assumes fused-multiply-add (FMA) instructions are available and hence the cost of multiplying by +Aff for example is nnzs rather than 2×nnzs (this cost scales out of the CC anyway). If we consider a general linear +system, Ax = b, the FLOP count for performing a single V-cycle with lmax levels of AIRG is given by +FLOPAIRG +V += { ˆA +−1}lmax + +l=lmax−1 +� +l=1 +vup{ ˆA +−1 +ff }l + vup{Aff}l + {Afc}l + {R}l + {P}l, +(31) +where vup = 2 is the number of up F-point smooths and we perform one application of a GMRES polynomial approx- +imation of ˆA +−1 as a coarse grid solve on l = lmax. +In Section 7, we also show the results from using lAIR in hypre with FCF-Jacobi smoothing; by default the CC +output by hypre doesn’t include all work associated with smoothing, residual calculation, etc, and hence we recompute +13 + +it. Due to the use of FCF-Jacobi, the result of Afcxc during the F smooths cannot be cached (similarly for the C-point +smooths). We therefore compute +FLOPhypre +V += { ˆA +−1}lmax + +l=lmax−1 +� +l=1 +vup(2nl +F + nl +C) + vup +� +2 +� +{Aff}l + {Afc}l� ++ {Acf}l + {Acc}l� ++ {R}l + {P}l, +(32) +where nl +F and nl +C are the number of F and C-points on level l, respectively and given we only use one FCF-Jacobi as +our smoother on each level vup = 1. The cycle complexity (for either hypre or AIRG) is then given by +CC = FLOPV +{A}1 . +(33) +Any matvec that involves scatter should be computed matrix-free and we denote that with an “mf” subscript. +Given our sub-grid scale discretisation, we need to account for the cost of computing the source on the rhs of (11), the +fine-scale solution Θ and the addition of the coarse and fine scale solutions to form ψ. These are given by +FLOPsource = {B}mf + { ˆD +−1}, +FLOPSGS = {C}mf + { ˆD +−1}, +FLOPψ = NDDOFs +(34) +The FLOP count of one iteration of our angular preconditioner (14) is given by +FLOPangle = 2 × NCDOFs + 4.5 × {Ddiff}. 
+(35) +We investigated using AIRG to invert the diffusion operator, but found it difficult to beat the default boomerAMG +implementation in hypre (i.e., not lAIR), which is unsurprising given hypre has been heavily optimised for such +elliptic operators. The factor of 4.5 comes from the cycle complexity of running boomerAMG on a heavily refined +spatial grid and as might be expected we see the cycle complexity plateaus to around this value (hence we have an +upper bound on work on less refined grids). +We must now quantify the cost of our matrix-free matvecs. As mentioned in Section 2.2 there are numerous ways +we could compute such a matvec, depending on how much memory we have available; Section 3 discussed that we +store the streaming/removal operator, MΩ to precondition with and hence we use that in our matvec. The additional +cost therefore comes with scattering (which we assume is isotropic in this work) and hence we have +{A − B ˆD +−1C}mf = {MΩ} + +� +i∈cg nodes +2 × δ(i) × NCDOFs(i)+ +� +i∈dg nodes +δ(i) × (2 × NDDOFs(i) + 2 × nnodes + 1) + +� +e∈eles +3 × δ(i) × { ˆD +−1 +e } +(36) +where nnodes is the number of spatial nodes on our DG elements (3 in 2D or 4 in 3D with tri/tets), { ˆD +−1 +e } is the number +of non-zeros in our stored block approximation on a given element (the factor of 3 comes from one application of ˆD +−1 +and one of BΩ and CΩ which have the same sparsity); this sparsity depends on the angles present on each DG node, +but is at a maximum when uniform angle is used and for each angle present on all nodes of an element we have a +nnodes × nnodes block. N*DOFS(i) is the number of DOFs on an individual (CG or DG) spatial node i and δ(i) is 1 or 0 +depending on the presence of scatter; for a CG node this is zero if every element connected to node i has a zero scatter +cross-section, and one otherwise, for a DG node this is zero if the element containing node i has a zero cross-section, +one otherwise. The calculation in (36) includes the work required to map to/from Legendre space on both our coarse +(CG) and fine (DG) spatial meshes in order to apply the scatter component in A, B and C. Similar expressions are +used for the individual {B}mf and {C}mf in (34). +We now have all the components required to calculate our WUs. We begin with a simple definition, where we use +AIRG as a preconditioner on the assembled matrix, A − B ˆD +−1C, that could include scatter and is hence non-scalable. +If nits is the number of outer GMRES iterations performed then the total FLOPs are +FLOPfull = nits +� +{A − B ˆD +−1C} + FLOPV +� ++ FLOPsource + FLOPSGS + FLOPψ, +(37) +14 + +and hence the WUs +WUsfull = +FLOPfull +{A − B ˆD +−1C} +. +(38) +If instead we use the additively preconditioned iterative method defined in Section 3 along with our matrix-free +matvec, our total FLOPs are +FLOPmf = nits +� +{A − B ˆD +−1C}mf + FLOPV + FLOPangle +� ++ FLOPsource + FLOPSGS + FLOPψ, +(39) +We then scale these FLOP counts in several different ways. Firstly there is the WUs required to compute a matrix-free +matvec of our sub-grid scale discretisation, namely +WUsmf = +FLOPmf +{A − B ˆD +−1C}mf +. +(40) +In order to make rough comparisons with a traditional DG FEM source iteration method, we can scale by the work +required to compute a matrix-free matvec with a DG FEM. 
In order to make rough comparisons with a traditional DG FEM source iteration method, we can also scale by the work required to compute a matrix-free matvec with a DG FEM. So as not to unfairly disadvantage a DG discretisation, we assume the DG streaming operator is stored in memory, and hence the FLOP count required to compute a single DG matvec is
\[
\mathrm{FLOP}_{\mathrm{DG}} = \frac{5}{3}\{\hat{D}^{-1}\} + \sum_{i \in \mathrm{dg\ nodes}} \delta(i) \times \left(2 \times \mathrm{NDDOFs}(i) + n_{\mathrm{nodes}}\right). \qquad (41)
\]
The factor of 5/3 comes from the jump terms (in 2D) that are not included in the nnzs of our sparsified DG matrix $\hat{D}^{-1}$. The work units scaled by this quantity are therefore
\[
\mathrm{WUs}_{\mathrm{DG}} = \frac{\mathrm{FLOP}_{\mathrm{full}}}{\mathrm{FLOP}_{\mathrm{DG}}} \quad \mathrm{or} \quad \frac{\mathrm{FLOP}_{\mathrm{mf}}}{\mathrm{FLOP}_{\mathrm{DG}}}. \qquad (42)
\]
Again we note that with scattering we have $\{A - B\hat{D}^{-1}C\}_{\mathrm{mf}} \approx 1.8 \times \mathrm{FLOP}_{\mathrm{DG}}$.

7. Results

Below we outline several example problems in both the streaming and scattering limits, designed to test the performance of AIRG and our additively preconditioned iterative method. We solve our linear systems with GMRES(30) to a relative tolerance of 1 × 10^-10, with an absolute tolerance of 1 × 10^-50, and use an initial guess of zero unless otherwise stated. We should note that we use AIRG (and lAIR) on our matrices without relying on any (potential) block structure; for example, in the streaming limit with uniform angle we could use our multigrid on each of the angle blocks separately. We also do not scale our matrices; for example, we could view a diagonal scaling as preconditioning the outer GMRES iteration and/or the GMRES polynomials in our multigrid, but we did not find it necessary.

When using AIRG we perform zero down smooths and two up F-point smooths (our C-points remain unchanged). On the bottom level of our multigrid we use one Richardson iteration to apply a GMRES polynomial approximation of the coarse matrix. We use a one-point prolongator which computes W and then keeps the biggest absolute entry per row. Unless otherwise noted we use third-order (m = 4) GMRES polynomials with fixed sparsity as described in Section 4.1.1. We only use isotropic scatter in this work. For both AIRG and lAIR, we use the row-wise infinity norm to define any drop tolerances. When using the lAIR implementation in hypre, we use zero down smooths and one iteration of FCF-Jacobi for up smooths, while on the bottom level we use a direct solve, as we found these options resulted in the lowest cost and best scaling. We searched the parameter space to try and find the best values for drop tolerances, strength of connection, etc., when using lAIR and AIRG; these searches were not exhaustive, however, and it is possible there are more optimal values.

All timing results are taken from compiling our code, PETSc 3.15 and hypre with the "-O3" optimisation flag. We compare timing results between our PETSc implementation of AIRG and the hypre implementation of lAIR, and as such we try to limit the impact of different implementation details. Given this, when calculating the setup time we exclude the CF splitting time, the time required to drop entries from matrices (as the PETSc interfaces require us to take copies of matrices), and the time to extract the submatrices Aff, Afc, etc. These should all be relatively low-cost parts of the setup, are shared by both AIRG and lAIR and should scale with the nnzs. The setup time we compare is therefore that required to form the restrictors, prolongators and coarse matrices. We should also note that with AIRG our setup time is an upper bound, as we have not built an optimised matmatmult for building our fixed-sparsity GMRES polynomials (we know the sparsity of the matrix powers is the same as that of the two input matrices).
As such we compute a standard matmatmult +in PETSc and then drop entries (although we do provide a flop count for a matmatmult with fixed sparsity). When +timing lAIR, we use the PETSc interface to hypre and hence we run two solves (and set the initial condition to zero in +both). The hypre setup occurs on the 0th iteration of the first solve, so a second solve allows us to correctly measure +just the solve time. We only show solve times for problems with pure streaming, as our implementation of the P0 +matrix-free matvec with scattering is not well optimised. +All tabulations of memory used are scaled by the total NDOFs in ψ in (8), i.e., NDOFs=NCDOFs + NDDOFs. +Included in this figure is the memory required to store the GMRES space, both additive preconditioners (if required) +and hence the AIRG hierarchy, ˆA +−1 +ff , separate copies of Aff, Afc, Acf and Acc, the diffusion operator and temporary +storage. We do not report the memory use of hypre, although given the operator complexities it is similar to AIRG. +7.1. AIRG on the matrix A − B ˆD +−1C +In this section, we build the matrix A − B ˆD +−1C and use this matrix as a preconditioner, applied with 1 V-cycle of +either AIRG or lAIR. As such we do not use the iterative method described in Section 3, nor do we use the matrix-free +matvec described in Section 2.2. As mentioned using an iterative method that relies on the full operator is not practical +with scattering given the nonlinear increase in nnzs with angular refinement, but we wish to show our methods are +still convergent in the scattering (diffuse) limit. Instead, Section 7.2 shows the results from using the iterative method +in Section 3. +Our test problem is a 3× 3 box with a source of strength 1 and size 0.2 × 0.2 in the centre of the domain. We apply +vacuum conditions on the boundaries and discretise this problem with unstructured triangles and ensure that our grids +are not semi-structured (e.g., we don’t refine coarse grids by splitting elements). We use uniform level 1 refinement +in angle, with 1 angle per octant (similar to S2). +7.1.1. Pure streaming problem +For the pure streaming problem we set the total and scatter cross-sections to zero. To begin, we examine the +performance of AIRG and lAIR with spatial refinement and a fixed uniform level 1 angular discretisation (i.e, with +4 angles in 2D). Table 1 shows that using distance 1 lAIR in this problem, with Falgout-CLJP CF splitting results in +growth in both the iteration count and work. We could not find a combination of parameters that results in scalability +with lAIR; increasing the number of FCF smooths to 3 results in an iteration count with similar growth, namely 15, +14, 14, 17, 19 and 22, but with a cycle complexity at the finest spatial refinement of 11.2 and hence 271 WUs. Even +using distance 2 lAIR didn’t result in scalability, as shown in Table 2 where we can see a slightly decreased iteration +count, with higher cycle and operator complexities, resulting in a similar number of WUs. Using both distance 2 lAIR +and 3 FCF smooths still results in growth, with iteration counts of 15, 13, 14, 16, 17 and 19, with a cycle complexity +of 15.9 and hence 323 WUs at the finest spatial refinement. 
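Before turning to the individual results, the outer solver configuration described above (GMRES(30) to a relative tolerance of 1 × 10^-10 with a single multigrid V-cycle as the preconditioner) can be wired up as in the following sketch. SciPy stands in for our PETSc/hypre setup and `apply_vcycle` is a hypothetical callable that applies one V-cycle of AIRG or lAIR to a vector.

```python
import numpy as np
import scipy.sparse.linalg as spla

# Sketch of the outer solve used throughout these results: GMRES(30), relative
# tolerance 1e-10, absolute tolerance 1e-50, zero initial guess, preconditioned by
# a single V-cycle supplied as a callable. This is a SciPy stand-in, not our
# PETSc implementation.

def outer_solve(A, b, apply_vcycle):
    n = A.shape[0]
    M = spla.LinearOperator((n, n), matvec=apply_vcycle, dtype=float)
    # Note: the relative-tolerance keyword is rtol in recent SciPy (tol in older releases).
    x, info = spla.gmres(A, b, x0=np.zeros(n), M=M, restart=30,
                         rtol=1e-10, atol=1e-50)
    return x, info
```

In the actual tests this role is played by PETSc's GMRES with the AIRG or lAIR hierarchy applied as the preconditioner.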
+CG nodes NDOFs +nits +CC Op Complx WUsfull WUsDG Memory +97 +2.4× 103 26 2.96 +1.5 +106 +26.3 +- +591 +1.6× 104 25 +3.5 +1.77 +116 +27.8 +- +2313 +6.3× 104 28 +3.8 +1.9 +138 +32.6 +- +9166 +2.5× 105 31 +4.1 +2.0 +159 +37.4 +- +35784 +9.9× 105 36 +4.2 +2.1 +189 +44.5 +- +150063 +4.2× 106 42 +4.3 +2.16 +225 +52.6 +- +Table 1: Results from using distance 1 lAIR in hypre on a pure streaming problem in 2D with CF splitting by the hypre implementation of +Falgout-CLJP with a strong threshold of 0.2, drop tolerance on A of 0.0075 and R of 0.025 and a strong R threshold of 0.25. +16 + +CG nodes NDOFs +nits CC Op Complx WUsfull WUsDG Memory +97 +2.4× 103 26 3.2 +1.58 +112 +27.9 +- +591 +1.6× 104 24 3.7 +1.85 +116 +28 +- +2313 +6.3× 104 27 4.1 +2.04 +141 +33.5 +- +9166 +2.5× 105 30 4.4 +2.16 +164 +38.6 +- +35784 +9.9× 105 34 4.6 +2.26 +192 +44.9 +- +150063 +4.2× 106 38 4.7 +2.32 +219 +51.1 +- +Table 2: Results from using distance 2 lAIR in hypre on a pure streaming problem in 2D with CF splitting by the hypre implementation of +Falgout-CLJP with a strong threshold of 0.2, drop tolerance on A of 0.0075 and R of 0.025 and a strong R threshold 0.25. +We also tried decreasing the strong R threshold to 1 × 10-7 in case some neighbours were being excluded, but this +resulted in very little change in the iteration count, while the number of nnzs in the restrictor (and hence the setup +time) grew considerably. Preliminary investigation suggests we would need to include neighbours at greater distance +than two (along with only F-point smoothing, rather than FCF), but this increases the setup cost considerably. We also +note that using nAIR (at several different orders) in this case results in a similar iteration count (even with diagonal +scaling of our operators); this is likely due to the lack of lower triangular structure in our discretisation. +CG nodes NDOFs +nits +CC Op. Complx WUsfull WUsDG Memory +97 +2.4× 103 11 5.58 +1.96 +83 +20.4 +11.9 +591 +1.6× 104 10 5.85 +2.48 +79 +18.9 +11.7 +2313 +6.3× 104 +8 +6.4 +2.88 +70 +16.6 +12.2 +9166 +2.5× 105 +8 +6.7 +3.16 +73 +17.1 +12.4 +35784 +9.9× 105 +9 +6.9 +3.36 +82 +19.3 +12.5 +150063 +4.2× 106 +9 +7.07 +3.48 +84 +19.6 +12.7 +Table 3: Results from using AIRG with m = 4 and without fixed sparsity on a pure streaming problem in 2D with CF splitting by the hypre +implementation of Falgout-CLJP with a strong threshold of 0.2, drop tolerance on A of 0.0075 and R of 0.025. +Tables 3 & 4 however shows that using AIRG with a third-order GMRES polynomial and Falgout-CLJP CF +splitting results in less work with smaller growth. Using GMRES polynomials without fixed sparsity requires 84 WUs +at the highest level of refinement, compared to 75 with fixed sparsity. The work in Table 3 has plateaued, however +the work with fixed sparsity is growing slightly with spatial refinement. Compared to lAIR, we see that AIRG with +sparsity control needs three times less work to solve our pure streaming problem. Rather than use our one-point +ideal prolongator, we also investigated using W with the same drop tolerances as applied to Z. With fixed-sparsity +this resulted in 11, 10, 9, 9, 9 and 10 iterations, with 67, 68, 66, 68, 70, and 78 WUs. The slight increase in cycle +complexity is compensated by using one fewer iterations at the finest level, but overall we see similar work. 
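To make the prolongator choices being compared here concrete, the sketch below builds the one-point operator by keeping only the largest-magnitude entry in each row of W, so every F-point interpolates from a single C-point; building W itself and the identity block over the C-points are omitted, and the function name is ours.

```python
import numpy as np
import scipy.sparse as sp

# Sketch of the one-point sparsification of W: keep the largest-magnitude entry in
# each row (and its value, in contrast to a classical one-point prolongator which
# injects a value of 1 from the strongest C-point neighbour).

def one_point_prolongator(W):
    W = W.tocsr()
    rows, cols, vals = [], [], []
    for i in range(W.shape[0]):
        start, end = W.indptr[i], W.indptr[i + 1]
        if start == end:
            continue                                  # F-point with no C-point connections
        local = np.argmax(np.abs(W.data[start:end]))  # largest absolute entry in row i
        rows.append(i)
        cols.append(W.indices[start + local])
        vals.append(W.data[start + local])
    return sp.csr_matrix((vals, (rows, cols)), shape=W.shape)
```

The classical one-point prolongator discussed next differs only in that each F-point is injected from its strongest C-point neighbour rather than taking the corresponding value of W.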
In an +attempt to ascertain the effect of using our one-point approximation to the ideal prolongator, we also constructed a +classical one-point prolongator as in [28, 26] where each F-point is injected from its strongest C-point neighbour. +With fixed-sparsity this results in the same number of iterations and WUs as in Table 4, which confirms that the +classical prolongator is not responsible for the poor performance of lAIR. The benefit of using a classical one-point +prolongator in this fashion is we require one fewer matmatmults on each level of our setup. For the rest of this paper +we use the one-point approximation to the ideal operator, but it is clear there is a range of practical operators (with +different sparsification strategies) we could use with AIRG. +Table 4 shows we can solve our streaming problem with the equivalent of approximately 18 DG matvecs. We +can also see that the plateau in the operator complexity results in almost constant memory use, at 10 copies of the +angular flux. We should also note that we can use AIRG as a solver, rather than as a preconditioner. We see the +same iteration count in pure streaming problems and the lack of the GMRES space means we only need memory +equivalent to approx. 5 copies of the angular flux; this is the amount of memory that would be required to store just +a DG streaming operator in 2D. This shows that our discretisation and iterative method are low-memory in streaming +17 + +problems. +CG nodes NDOFs +nits +CC Op. Complx WUsfull WUsDG Memory +97 +2.4× 103 12 +3.7 +1.97 +67 +16.5 +10 +591 +1.6× 104 10 +4.1 +2.5 +62 +14.7 +10 +2313 +6.3× 104 +9 +4.4 +2.87 +60 +14.1 +10.2 +9166 +2.5× 105 10 +4.6 +3.17 +67 +15.8 +10.3 +35784 +9.9× 105 10 4.74 +3.36 +69 +16 +10.4 +150063 +4.2× 106 11 4.84 +3.5 +75 +17.6 +10.4 +Table 4: Results from using AIRG with m = 4 and fixed sparsity on a pure streaming problem in 2D with CF splitting by the hypre implementation +of Falgout-CLJP with a strong threshold of 0.2, drop tolerance on A of 0.0075 and R of 0.025. +The cost of a solve must also be balanced by the cost of the setup of our multigrid. Our setup involves computing +our GMRES polynomial approximations, followed by standard AMG operations, namely computing matrix-matrix +products to form our restrictors/prolongators and coarse matrices on each level. Fig. 3 begins by showing the relative +amount of work required to compute Z for AIRG and distance 1 lAIR. For AIRG this is the sum of FLOPs required +to compute ˆA +−1 +ff and those to compute −Acf ˆA +−1 +ff . For lAIR, it is more difficult to calculate a FLOP count for the small +dense solves which locally enforce RA = 0, as it is dependent on the BLAS implementation; instead we take the +size of each of the n × n dense systems and compute the sum of n3. As such we do not try and compare an absolute +measure of the work required by both (we have timing results below). Fig. 3 instead shows that the growth in work for +both AIRG with fixed sparsity and lAIR begins to plateau with grid refinement; AIRG without fixed sparsity however +grows with refinement, showing the necessity of the sparsity control on the matrix powers. +Fig. 4a shows that the cost of computing our GMRES polynomial approximations to ˆA +−1 +ff with fixed sparsity is +constant, at around 8 FLOPs per DOF. The plateauing growth seen in Fig. 3 therefore comes from the matmatmult +required to compute −Acf ˆA +−1 +ff . 
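To make the cost of this polynomial setup concrete, the sketch below shows one way the coefficients of an order m−1 GMRES (minimum-residual) polynomial can be formed from a least-squares fit over a Krylov basis. It is a simplified dense-algebra illustration under our own naming (`gmres_poly_coeffs`, `apply_poly`, a random seed vector), not the implementation used here.

```python
import numpy as np

# Form the coefficients of an order m-1 GMRES (minimum-residual) polynomial
# q(A) ~ A^{-1}: minimise ||b - A q(A) b|| over the power basis {b, Ab, ..., A^(m-1) b}
# with a single least-squares solve of a tall-skinny system (a QR factorisation works
# equally well). Building the basis takes m matvecs in total; for the low orders used
# here the power basis is adequate, and b is an arbitrary seed vector.

def gmres_poly_coeffs(matvec, n, m, seed=0):
    rng = np.random.default_rng(seed)
    V = np.empty((n, m + 1))
    V[:, 0] = rng.standard_normal(n)
    for j in range(m):
        V[:, j + 1] = matvec(V[:, j])        # b, Ab, ..., A^m b (m matvecs)
    b, AK = V[:, 0], V[:, 1:]                # columns of AK are A^(j+1) b
    coeffs, *_ = np.linalg.lstsq(AK, b, rcond=None)
    return coeffs                            # q(A) = sum_j coeffs[j] A^j

def apply_poly(matvec, coeffs, x):
    y, v = coeffs[0] * x, x
    for c in coeffs[1:]:
        v = matvec(v)                        # apply q(A) x with m-1 further matvecs
        y = y + c * v
    return y

# Tiny example: m = 4 (third order, as in Table 4) on a well-conditioned matrix
A = np.diag(np.linspace(1.0, 2.0, 50))
c = gmres_poly_coeffs(lambda v: A @ v, 50, m=4)
x = np.ones(50)
print(np.linalg.norm(A @ apply_poly(lambda v: A @ v, c, x) - x))  # modest residual
```

The tall-skinny least-squares solve is the only reduction this setup needs, which is what keeps the coefficient computation cheap relative to the matrix-power products discussed next.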
The FLOP count with fixed sparsity relies on a custom matmatmult algorithm that assumes the same sparsity in each component of the matmatmult, as in (29). Computing the matrix powers with a standard matmatmult and then dropping any fill-in results in an almost constant number of FLOPs per DOF, at around 9.6, which is convenient as it makes a custom matmatmult implementation less necessary. If we do not fix the sparsity of our polynomial approximations we see growth in the cost, due to the increasingly dense matrix powers, as might be expected.

Fig. 4b shows that the cost of our setup with fixed sparsity plateaus, while the total cost of our setup plus solve grows, due to the increase in work during the solve shown in Table 4, with 29% growth from the lowest to the highest refinement. Without fixed sparsity we see less growth in the solve (Table 3), but higher growth in the setup. This results in similar growth in the total work, but the fixed-sparsity case uses approximately 30% fewer FLOPs.

Fig. 5 shows the solve, Z, setup and total times taken to solve our system with AIRG and lAIR. We can see in Fig. 5a that the results match those in Tables 1-5, with AIRG with Falgout-CLJP coarsening and fixed sparsity giving the lowest solve times. We found a difference in the efficiency of our PETSc implementation vs the hypre implementation, however; if we modify the parameters used with AIRG and increase the work required to solve to roughly match that of distance 1 lAIR in this problem, we found a factor of almost 2 difference in the solve times.

Fig. 5b shows the total cost of computing the approximate ideal restrictor, Z. For AIRG we also show the time for computing the GMRES polynomial approximation to $A_{\mathrm{ff}}^{-1}$ separately. We can see for AIRG with fixed sparsity there is slight growth in the time to compute our polynomials, with much greater growth without fixed sparsity. Given the FLOP count in Fig. 4a we would expect the time with fixed sparsity to be constant. The further growth we see in the time to compute Z comes from the matmatmult to compute $-A_{\mathrm{cf}}\hat{A}_{\mathrm{ff}}^{-1}$, which Fig. 4b suggests should plateau. We also see growth in the time taken to compute Z with lAIR, with higher growth for distance 2; again, Fig. 3 shows the work estimate plateauing with spatial refinement. At the higher spatial refinements, the Z with AIRG, Falgout-CLJP coarsening and fixed sparsity costs more to compute than distance 1 lAIR, but less than distance 2. An AIRG implementation that takes advantage of the shared sparsity of the matrices when forming $\hat{A}_{\mathrm{ff}}^{-1}$ could reduce this setup further, with a reduction in the FLOPs required (by approx. 20% as shown in Fig. 4a) and also in the cost of any symbolic computation.

[Figure 3: Sum of FLOPs required across all levels to compute Z, scaled to the NDOFs, and then relative to the work required on the least refined spatial grid, for AIRG with m = 4 (see Table 4) and distance 1 lAIR (see Table 1) with Falgout-CLJP in a 2D pure streaming problem. The relative scaling is done separately for each line; they do not all cost the same at the coarsest resolution, and we provide timings in Section 7.1.1. Axes: CG nodes vs. relative work; the × is for AIRG with fixed sparsity, × is for AIRG without fixed sparsity and ⊗ is for lAIR.]
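A sketch of the fixed-sparsity matmatmult referred to above is given below: a matrix-matrix product that only evaluates the entries lying in a prescribed sparsity pattern (here that of Aff), in contrast to the compute-then-drop route we currently use in PETSc. It is an unoptimised SciPy illustration with our own function name, not the kernel whose FLOP count is quoted above.

```python
import scipy.sparse as sp

# Fixed-sparsity matrix-matrix product: compute C = A @ B, but only at the nonzero
# positions of a prescribed pattern S. Each requested entry is a merge-style sparse
# dot product of a row of A with a column of B.

def fixed_sparsity_matmat(A, B, S):
    A = A.tocsr(); A.sort_indices()
    B = B.tocsc(); B.sort_indices()
    S = S.tocsr()
    rows, cols, vals = [], [], []
    for i in range(S.shape[0]):
        for jj in range(S.indptr[i], S.indptr[i + 1]):
            j = S.indices[jj]
            ak, a_end = A.indptr[i], A.indptr[i + 1]
            bk, b_end = B.indptr[j], B.indptr[j + 1]
            val = 0.0
            while ak < a_end and bk < b_end:      # walk the sorted index lists together
                if A.indices[ak] == B.indices[bk]:
                    val += A.data[ak] * B.data[bk]
                    ak += 1; bk += 1
                elif A.indices[ak] < B.indices[bk]:
                    ak += 1
                else:
                    bk += 1
            rows.append(i); cols.append(j); vals.append(val)
    return sp.csr_matrix((vals, (rows, cols)), shape=(A.shape[0], B.shape[1]))

# Example: the square of a matrix restricted to its own sparsity pattern
A = sp.random(200, 200, density=0.02, format="csr", random_state=0) + sp.eye(200, format="csr")
A2_fixed = fixed_sparsity_matmat(A, A, A)
```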
Given the substantially improved convergence shown in Table 4 when compared to distance 2 +lAIR in Table 2, this shows AIRG is an effective and relatively cheap way to compute approximate ideal operators in +convection problems. +Fig. 5c shows the setup times (which include the times to compute Z from Fig. 5b) and we can see that lAIR has +the cheapest overall setup, with fixed sparsity AIRG coming in the middle, and AIRG without fixed sparsity giving the +highest setup times. We expect AIRG without fixed sparsity to be increasing given the increased cost of computing +the GMRES polynomial with spatial refinement, as shown in Fig. 4a, but we see increased setup time with AIRG with +fixed sparsity and distance 1 lAIR, whereas the work estimates in Figures 3 and 4b again suggests the setup times +should plateau. We found that the time taken to compute the matmatmults used to build the transfer operators and +coarse matrices is increasing more than the FLOP count would imply. Of course a FLOP count is not necessarily +perfectly indicative of the time required in a matmatmult given the symbolic compute and memory accesses. It is +possible more efficient matmatmult implementations are available. +Fig. 5d shows the total time taken to setup and solve with our multigrid. Two of our AIRG results, namely +using GMRES polynomials without and without fixed sparsity manage to beat lAIR. This helps show the promise of +GMRES polynomials as part of an AIR-style multigrid; even without fixed sparsity they can be competitive. The fixed +sparsity AIRG however results in a considerably decrease in total time, taking approx. 3× less time than distance 1 +lAIR. Even if we try and remove the effect of implementation differences between lAIR and AIRG discussed above, +by equating the solve time of lAIR with an AIRG result that requires similar WUs, fixed sparsity AIRG still has a +total time of 0.65 × that of distance 1 lAIR. +We also examined using SAIs to build an approximation of A−1 +ff with the fixed sparsity of Aff, like in [46]. We +found approximations of A−1 +ff computed with SAI comparable to our fixed sparsity GMRES polynomials with m = 4, +with identical convergence behavior compared to the fixed sparsity AIRG shown in Table 4 (except with the first +spatial refinement). With spatial refinement using SAI had 11, 10, 9, 10, 10 and 11 iterations and hence the same +work units and solve times as AIRG given the matching fixed sparsity. However we found it was more expensive to +setup SAIs on each level when compared to our GMRES polynomials, by approximately 2×; we used the ParaSails +implementation in hypre for comparisons. In particular the fixed sparsity GMRES polynomials require fewer FLOPs +as the nnzs in Aff grow compared to fixed sparsity SAI. If nF is the number of F-points, we require m matvecs which +scale linearly with the nnzs and a single QR factorisation of size nF × (m + 1), which doesn’t depend on the nnzs. +If m > 2, we must also compute m − 2 fixed sparsity matrix powers at cost (m − 2)srscnF, where sr and sc are the +average number of nnzs per row and column of Aff, respectively. 
If we assume s = sr ≈ sc then the cost of computing +19 + +102 +103 +104 +105 +8 +10 +12 +14 +CG Nodes +FLOPs per DOF +(a) The × is the cost of computing ˆA−1 +ff for AIRG with fixed spar- +sity, the × is with fixed sparsity computed with a standard mat- +matmult followed by dropping entries and the × is without fixed +sparsity +102 +103 +104 +105 +50 +100 +150 +CG Nodes +FLOPs per DOF +(b) The dashed × is the cost of the setup for AIRG with fixed +sparsity, the × is without fixed sparsity, the × is the setup plus +solve with fixed sparsity, the × is without fixed sparsity. +Figure 4: Sum of FLOPs required across all levels during the setup and solve, scaled to the NDOFs, for AIRG with m = 4 and Falgout-CLJP in a +2D pure streaming problem (see Tables 4 and 3). +the matrix-powers can be written as s(m − 2) × nnzs(Aff). In comparison, the SAI algorithm with the fixed sparsity +of Aff requires solving nF (local) least-squares problems. If we just consider the required nF QR factorisations of +size s × s, this requires a FLOP count of s2 × nnzs(Aff) (we have dropped the constants). For many problems a low +polynomial order is acceptable and hence (m − 2) ≪ s; this is particularly true on lower multigrid levels where the +average row or column sparsity can grow. There may be a scale, however, at which the extra local cost of setting up +SAIs is balanced by the extra communication required by our fixed sparsity GMRES polynomials. With m = 4 we +require the communication associated with four matvecs, a single all-reduce and computing A2 +ff, whereas SAI only +requires that of A2 +ff. One of the benefits of using our GMRES polynomials however is that we can decrease the amount +of communication required by decreasing the polynomial order below 2. +Given this, we examine the role of changing the GMRES polynomial order with AIRG and fixed sparsity, from +zero to four (m = 1 to m = 5; the zeroth and first order polynomials implicitly have fixed sparsity) in Fig. 6. We +can see the zeroth order polynomial is very cheap to construct, but results in the highest total time. This is because +the iteration count is higher. The increasingly higher order polynomials take longer to setup, although the difference +between successive orders decreases, due to the shared sparsity. With increasing polynomial order, on the most refined +spatial grid we have 40, 16, 12, 11 and 11 iterations to solve, with cycle complexities of 3.25, 4.67, 4.81, 4.84 and +4.85, respectively. We can see that all the polynomials between first and third order result in similar total times; the +decreased cost of setup for the lower orders is balanced by the increase in iterations. We see however very similar +growth in iteration count with spatial refinement with the first through fourth order polynomials; for example with +first order GMRES polynomials (m = 2) in Fig. 6 we have 15, 14, 12, 14, 14 and 16 iterations. This gives a higher +overall amount of work than with m = 4 (up to approximately 100 WUs at the highest refinement), but given the +similar growth in iteration count and reduced communication in parallel during the setup, requiring only two matvecs +and a single all-reduce, this may be a good choice in parallel. +Given the results with spatial refinement above, we chose to examine the role of angular refinement with AIRG +with fixed sparsity and m = 4. 
Table 5 shows that with 3 levels of angular refinement on the third refined spatial +grid, the iteration count increases slightly from 9 to 11, but the cycle and operator complexity are almost identical and +hence we have fixed memory consumption. We do not show the timings as they scale as would be expected. Table 6 +shows that distance 1 lAIR requires a fixed amount of work with angular refinement, but this is roughly twice that of +AIRG. +20 + +CG nodes Angle lvl. NDOFs +nits CC Op. Complx WUsfull WUsDG Memory +2313 +1 +6.3× 104 +9 +4.4 +2.87 +60 +14.1 +10.2 +2313 +2 +2.5× 105 10 4.4 +2.88 +65 +15.4 +10.4 +2313 +3 +1× 106 +11 4.4 +2.87 +70 +16.7 +10.4 +Table 5: Results from using AIRG with m = 4 and fixed sparsity on a pure streaming problem in 2D with CF splitting by the hypre implementation +of Falgout-CLJP with a strong threshold of 0.2, drop tolerance on A of 0.0075 and R of 0.025 with different levels of angular refinement. +CG nodes Angle lvl. NDOFs +nits CC Op Complx WUsfull WUsDG Memory +2313 +1 +6.3× 104 28 3.8 +1.9 +138 +32.6 +- +2313 +2 +2.5× 105 27 3.7 +2.36 +133 +31.7 +- +2313 +3 +1× 106 +28 3.8 +2.35 +133 +31.7 +- +Table 6: Results from using distance 1 lAIR in hypre on a pure streaming problem in 2D with CF splitting by the hypre implementation of Falgout- +CLJP with a strong threshold of 0.2, drop tolerance on A of 0.0075 and R of 0.025 and a strong R threshold of 0.25 with different levels of angular +refinement. +7.1.2. Scattering problem +To test the performance of AIRG with diffusion, we set the total and scattering cross-section to 10.0. Tables 7 and +8 show that both distance 1 and distance 2 lAIR, respectively, perform similarly, with approximately 3× growth in +WUs from the least to most refined spatial grid. Both the cycle and operator complexities have plateaued though. +CG nodes NDOFs +nits CC Op Complx WUsfull WUsDG Memory +97 +2.4× 103 23 2.2 +1.3 +77 +49 +- +591 +1.6× 104 27 3.0 +1.9 +112 +69 +- +2313 +6.3× 104 30 3.3 +2.2 +134 +82 +- +9166 +2.5× 105 34 3.4 +2.2 +153 +93 +- +35784 +9.9× 105 41 3.4 +2.2 +183 +110 +- +150063 +4.2× 106 54 3.3 +2.2 +238 +144 +- +Table 7: Results from using distance 1 lAIR in hypre on a pure scattering problem in 2D with CF splitting by the hypre implementation of +Falgout-CLJP with a strong threshold of 0.9, drop tolerance on A of 1× 10-4, R of 1× 10-2 and strong R threshold of 0.4. +AIRG performs simliarly to lAIR in this problem, as shown in Tables 9 without fixed sparsity and 10 with fixed +sparsity, with around 3× growth and similar number of WUs. AIRG with fixed sparsity results in slightly lower +operator complexities, but both methods result in memory use of around 20 copies of the angular flux; this is higher +than that in the streaming limit as the full matrix with scattering and a uniform angular discretisation at one level of +refinement has 4× the nnzs as that in the streaming limit. There is not a great deal of difference in the cycle complexity +between AIRG with and without fixed sparsity; this is because the strong R threshold of 0.4 results in a very sparse +version of Aff being used to construct the GMRES polynomials (and hence very little fill-in relative to the top grid +matrix). We can decrease the strong R tolerance to decrease the iteration count (in both AIRG and lAIR), but the +nnzs in (or equivalently the number of neighbours used to construct) Z grows considerably, as might be expected with +scattering. +Fig. 7 shows the timing results from AIRG and lAIR in this problem. 
Again we see a curious implementation +difference in solve times in Fig. 7a, as both lAIR and AIRG require simliar number of WUs, but the hypre imple- +mentation of lAIR requires roughly twice the time to solve. Fig. 7b shows that the time to compute the GMRES +polynomial for AIRG is largely constant, with the time to compute Z again between distance 1 and distance 2 lAIR; +this is also true for the total setup time in Fig. 7c. Fig. 7d shows that our AIRG implementation is the cheapest method +overall, taking around 0.4× the amount of time as lAIR to solve this problem. If we again equate the solve time be- +tween lAIR and AIRG given the similar amount of work required, lAIR and AIRG perform similarly, each requiring +21 + +CG nodes NDOFs +nits CC Op Complx WUsfull WUsDG Memory +97 +2.4× 103 22 2.5 +1.5 +80 +51 +- +591 +1.6× 104 25 3.5 +2.2 +117 +72 +- +2313 +6.3× 104 27 3.8 +2.4 +133 +81 +- +9166 +2.5× 105 31 3.7 +2.4 +150 +91 +- +35784 +9.9× 105 37 3.7 +2.5 +178 +108 +- +150063 +4.2× 106 47 3.7 +2.5 +225 +136 +- +Table 8: Results from using distance 2 lAIR in hypre on a pure scattering problem in 2D with CF splitting by the hypre implementation of +Falgout-CLJP with a strong threshold of 0.9, drop tolerance on A of 1× 10-4, R of 1× 10-2 and strong R threshold of 0.4. +CG nodes NDOFs +nits CC Op. Complx WUsfull WUsDG Memory +97 +2.4× 103 22 1.9 +1.3 +68 +43 +17.3 +591 +1.6× 104 26 2.2 +2.1 +88 +54 +18.0 +2313 +6.3× 104 34 2.5 +2.4 +122 +74 +18.7 +9166 +2.5× 105 41 2.7 +2.7 +155 +94 +19.5 +35784 +9.9× 105 45 2.8 +2.8 +175 +106 +19.9 +150063 +4.2× 106 61 2.7 +2.9 +232 +140 +19.5 +Table 9: Results from using AIRG with m = 4 and without fixed sparsity on a pure scattering problem in 2D with CF splitting by the hypre +implementation of Falgout-CLJP with a strong threshold of 0.9, drop tolerance on A of 1× 10-4, R of 1× 10-2 and strong R threshold of 0.4. +approximately 2.9 µs total time per DOF. +Fig. 8 shows the results from changing the GMRES polynomial order and similar trends to that in the streaming +limit can be seen, namely the 0th order polynomial is very cheap to setup, but results in the highest total time. Indeed +the 0th order polynomial did not converge at the two highest spatial refinements. The first through fourth order +polynomials all result in similar total times, with 62, 61, 60 and 64 iterations, respectively. This result indicates that +the higher polynomial order does not necessarily help decrease the iteration count with scattering. This is because the +strong R threshold is so high; decreasing this makes the effect of the polynomial order (and the fixed sparsity) much +more pronounced, but we found the lowest overall total times by allowing heavy dropping. Given using the full matrix +is not scalable with angular refinement, we don’t show convergence results in that case; instead the next section uses +the iterative method defined in Section 3 on this scattering problem. +7.2. Additively preconditioned iterative method +In this section we show the performance of the iterative method from Section 3 in the scattering limit. The results +in this section are not designed to show the performance of a standard DSA method with different scattering ratios, +optical cell lengths, etc; we appeal to the wealth of literature on the topic. Instead we wish to show that the additive +combination of our preconditioners is effective and that multigrid methods can be used to invert these operators +scalably. 
As discussed, this means forming the streaming/removal operator MΩ, a CG diffusion operator Ddiff and the +streaming/removal components BΩ and/or CΩ, all of which can be done scalably. We use 1 V-cycle of AIRG with the +same drop/strong tolerances as in Section 7.1.1 to apply M−1 +Ω and 1 V-cycle of boomerAMG (with default options) to +apply D−1 +diff per outer GMRES iteration. +For our iterative method to be effective, 1 V-cycle of both methods must reduce the error by a fixed amount with +space/angle refinement (which is equivalent to a solve with a fixed tolerance taking a fixed amount of work). We +can assume that multigrid methods such as boomerAMG can invert the diffusion operator with fixed work (we also +scale the diffusion operator by its inverse diagonal prior to use), but in Section 7.1.1 we only showed that AIRG can +invert the streaming operator with fixed work in the solve, rather than the streaming/removal operator. Thankfully the +removal term results in a better conditioned matrix given extra term on the (block) diagonals; the streaming limit is +the most difficult to solve. Fig. 9 shows (part of) the spectrum of the streaming operator vs the streaming/removal +22 + +CG nodes NDOFs +nits CC Op. Complx WUsfull WUsDG Memory +97 +2.4× 103 22 1.9 +1.3 +67 +43 +17.3 +591 +1.6× 104 26 2.2 +2.0 +88 +54 +18.0 +2313 +6.3× 104 34 2.5 +2.4 +122 +74 +18.7 +9166 +2.5× 105 38 2.7 +2.7 +144 +87 +19.4 +35784 +9.9× 105 51 2.8 +2.8 +196 +118 +19.7 +150063 +4.2× 106 60 2.7 +2.8 +224 +135 +19.3 +Table 10: Results from using AIRG with m = 4 and fixed sparsity on a pure scattering problem in 2D with CF splitting by the hypre implementation +of Falgout-CLJP with a strong threshold of 0.9, drop tolerance on A of 1× 10-4, R of 1× 10-2 and strong R threshold of 0.4. +operator for the 2D source problem with the third refined grid, level one angular refinement and total cross-section of +10.0. We can see that the smallest eigenvalues of the streaming/removal operator are (slightly) further from the origin. +The convergence of AIRG relies on the convergence of our GMRES polynomial approximations to A−1 +ff . We can +also see in Fig. 9 that the eigenvalues of Aff are more compact than that of the full operators, confirming that the CF +splitting is helping produce a better conditioned Aff in both cases. We know that our operators are non-normal, so the +spectrum does not completely determine the convergence of our GMRES polynomials [63]. Given this, Fig. 9 also +plots the field of values (a.k.a., the numerical range) of our operators, given by +F (A) = {x∗Ax | xx∗ = 1, x ∈ Cn}, +(43) +which is a convex set that contains the eigenvalues. To give some insight into the convergence of the GMRES +polynomials, we define µ to be the distance from the origin, or +µ = min +z∈F (A) |z|. +(44) +[80] show that (see also [63]) if the field of values doesn’t contain the origin, β ∈ (0, π/2) such that cos(β) = µ/||A|| +and the Hermitian part of A, namely (A + A∗)/2 is positive definite then the residual at step m is bounded by +||rm|| ≤ ||r0|| +� +2 + 2√ +3 +� +(2 + γβ)γm +β , +(45) +where +γβ = 2 sin +� +β +4 − 2β/π +� +. +(46) +We confirmed numerically that none of our operators or their fine-fine sub-matrices touch the origin (and hence the +field of values are all in the right-half of the complex plane) and that their Hermitian parts are positive definite. +For the Aff component of the streaming operator on the top grid, pictured in Fig. 
9 we found that with m = 4, (45) +gives ||rm|| ≤ 9.41||r0||, while for the Aff component of the streaming/removal operator we have ||rm|| ≤ 9.40||r0|| (the +disk bound in [81] gives a similar conclusion). These bounds are not particularly tight, but they do indicate that a 3rd +order GMRES polynomial should result in a smaller residual for the streaming/removal operator and hence we would +expect AIRG to perform better. +We should note that as µ → 0, β gets closer to π/2 and the asymptotic convergence factor, γm +β → 1. In general this +means the further the minimum field of values is from the origin, the better the convergence; this is also demonstrated +by considering the disk bound in [81], given by |δ/c| = (1−cos(β))/(1+cos(β)) < γβ, where δ and c are the radius and +centre of a disk, respectively, that covers F (A). This helps explain why using GMRES polynomials to approximate +Aff can be effective even when GMRES polynomial preconditioning of the full operators may not be; Fig. 9a shows +that the field of values for the streaming operator almost touches the origin, with µ ≈ 3.2 × 10-5 and hence single- +level GMRES polynomial preconditioning would likely not be effective in this problem (this is backed by numerical +experiments; we find considerable growth in the iteration count with refinement). This hints at the importance of +combining GMRES polynomials with a reduction multigrid. +23 + +CG nodes NDOFs +nits CC Op. Complx WUsmf WUsDG Memory +97 +2.4× 103 23 3.0 +1.6 +33 +61 +16.5 +591 +1.6× 104 24 4.0 +1.0 +36 +67 +17.2 +2313 +6.3× 104 25 4.1 +1.7 +38 +69 +17.2 +9166 +2.5× 105 26 4.2 +2.4 +39 +72 +17.2 +35784 +9.9× 105 26 4.4 +2.8 +39 +73 +17.3 +150063 +4.2× 106 26 4.6 +3.2 +39 +73 +17.5 +Table 11: Results from using additive preconditioning on a pure scattering problem with total and scattering cross-section of 10.0 in 2D. The cycle +and operator complexity listed are for AIRG on MΩ with CF splitting by Falgout-CLJP. +CG nodes Angle lvl. NDOFs +nits CC Op. Complx WUsmf WUsDG Memory +2313 +1 +6.3× 104 25 4.1 +1.7 +38 +69 +17.2 +2313 +2 +2.5× 105 27 4.0 +1.4 +40 +71 +16.5 +2313 +3 +1× 106 +28 4.1 +1.4 +41 +73 +16.2 +Table 12: Results from using additive preconditioning on a pure scattering problem with total and scattering cross-section of 10.0 in 2D with angle +refinement. The cycle and operator complexity listed are for AIRG on MΩ with CF splitting by Falgout-CLJP. +We see in Table 11 our additively preconditioned iterative method is effective with a total and scatter cross-section +of 10, with the iteration count growing from 23 to a plateau of 26 with spatial refinement. The work is very close +to constant, with fixed iteration count and slight growth in the cycle complexity, even though we used AIRG with +fixed sparsity. As such we didn’t investigate using AIRG without fixed sparsity, as the fixed sparsity was sufficient +to give plateauing work. This helps confirm the observations above, namely that AIRG is more effective on the +streaming/removal operator. +Comparing to the results in Section 7.1.2, it uses roughly 73 DG WUs, compared to around 135 when using either +AIRG or lAIR as a preconditioner on the full matrix. It also uses less memory at approximately 18 copies of the +angular flux, even though we have to store the diffusion operator and several extra temporary vectors. 
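For clarity, the additive application of the two preconditioners used in these results can be sketched as follows. The V-cycle callables and the mappings `R_diff`/`P_diff` between the angular-flux space and the CG diffusion space are hypothetical stand-ins rather than our implementation, and the exact mappings and signs follow our discretisation rather than this sketch.

```python
# Schematic additive preconditioner in the spirit of Section 3: one AIRG V-cycle on
# the streaming/removal operator plus a DSA-style correction from one AMG V-cycle on
# the (diagonally scaled) CG diffusion operator, both applied to the same vector and
# summed. airg_vcycle_streaming_removal and amg_vcycle_diffusion are hypothetical
# callables; R_diff/P_diff map to and from the CG diffusion space.

def additive_preconditioner(r, airg_vcycle_streaming_removal,
                            amg_vcycle_diffusion, R_diff, P_diff):
    z_transport = airg_vcycle_streaming_removal(r)            # V-cycle on M_Omega
    z_diffusion = P_diff @ amg_vcycle_diffusion(R_diff @ r)   # DSA-style correction
    return z_transport + z_diffusion
```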
Compared to +the pure streaming problem, this method requires approximately 4.1× more work; we can see in Table 11 that this +work largely comes from computing the matrix-free matvec with scattering, with 26 iterations at the highest spatial +refinement requiring 39 WUsmf. The split of work is 26 WUsmf in the matvec required by the outer GMRES, 11 +WUsmf to apply the additive preconditioners and around 2 WUsmf to compute the source and Θ. Similarly, Table 13 +shows a (lower) constant iteration count with a lower total and scatter cross-section of 1.0. +Given the streaming/removal operator is easier to solve, lAIR performs better when used additively to invert MΩ, +compared with just the streaming operator. With a total and scatter cross-section of 10.0, we find both distance 1 and +2 lAIR give 30, 33, 31, 34, 36 and 39 iterations with spatial refinement, with similar grid and operator complexities to +AIRG. Increasing the number of FCF smooths from 1 to 3 results in an iteration count with less growth, namely 30, +25, 25, 26, 26 and 27 iterations, but the cycle complexity at the finest level of refinement is large at 10.7, compared +with AIRG at 4.6. We also know from Section 7.1.1 that the iteration count of lAIR grows in the streaming limit, so +we do not test lAIR any further as part of our additive method. +Tables 12 and 14 show that the additive method with AIRG and angular refinement perform well, as the iteration +count and the work required is very close to constant. Importantly we can also see the memory use is fixed. These +results show that as might be expected, using an (inconsistent) CG DSA can form an effective preconditioner in +scattering problems when used with an outer GMRES iteration. Importantly the combination of a single V-cycle of +AIRG used to apply the streaming/removal operator and a single V-cycle of a traditional multigrid on the diffusion +operator can be used additively and results in almost constant work with spatial and angular refinement on unstructured +grids. +24 + +CG nodes NDOFs +nits CC Op. Complx WUsmf WUsDG Memory +97 +2.4× 103 18 3.6 +1.9 +27 +50 +17.1 +591 +1.6× 104 18 4.0 +2.4 +27 +50 +17.2 +2313 +6.3× 104 18 4.3 +2.8 +28 +51 +17.4 +9166 +2.5× 105 19 4.5 +3.1 +29 +54 +17.5 +35784 +9.9× 105 19 4.7 +3.3 +30 +55 +17.6 +150063 +4.2× 106 19 4.8 +3.5 +30 +55 +17.7 +Table 13: Results from using additive preconditioning on a pure scattering problem with total and scattering cross-section of 1.0 in 2D. The cycle +and operator complexity listed are for AIRG on MΩ with CF splitting by Falgout-CLJP. +CG nodes Angle lvl. NDOFs +nits CC Op. Complx WUsmf WUsDG Memory +2313 +1 +6.3× 104 18 4.3 +2.8 +28 +51 +17.4 +2313 +2 +2.5× 105 18 4.3 +2.8 +27 +49 +16.7 +2313 +3 +1× 106 +19 4.3 +2.8 +29 +51 +16.5 +Table 14: Results from using additive preconditioning on a pure scattering problem with total and scattering cross-section of 1.0 in 2D with angle +refinement. The cycle and operator complexity listed are for AIRG on MΩ with CF splitting by Falgout-CLJP. +8. Conclusions +This paper presented a new reduction multigrid based on approximate ideal restrictors (AIR) combined with +GMRES polynomials (AIRG) with excellent performance in advection-type problems. Matrix polynomial methods +have been used for many years in multilevel methods but we believe we are the first to use GMRES polynomials in +this fashion. 
Reduction multigrids and LDU methods in particular benefit from using GMRES polynomials, as the +improved conditioning of Aff, when compared to A, can allow the formation of good approximate inverses with low +polynomial orders. This allowed us to easily build both approximate ideal restrictors, approximate ideal prolongators +(without the need to compute near-nullspace vectors) and perform F-point smoothing (without the need to compute +additional dampening parameters). +GMRES polynomials share many advantages with other polynomial methods; in particular their coefficients can +be computed very simply; low-order polynomials don’t require additional work to ensure stability (like in [61, 62]); +explicitly forming approximate matrix inverses is simple and only involves matrix-matrix products or if desired; the +polynomials can be applied matrix-free; their application is highly parallel with their setup able to use communication- +avoiding techniques; and they also work well across a range of symmetric and asymmetric problems. +When applied to the time independent Boltzmann Transport Equation (BTE) we could solve pure streaming prob- +lems (i.e., in the pure advection limit) on unstructured spatial grids with space/angle refinement with fixed memory +use. The time-independent streaming limit is the most challenging to solve and we found we could either get fixed +work in the solve and growth in the setup, or by introducing fixed sparsity into the matrix-powers of our GMRES +polynomials, we found fixed work in the setup with growth in the solve. We found good performance from using +between first to fourth order GMRES polynomials on each level of our multigrid. Fixing the sparsity of our third- +order (m = 4) GMRES polynomials resulted in a fixed FLOP count in the setup, and building an implementation of a +matmatmult A = BC where the three matrices share the same sparsity would reduce the implementation costs of our +setup for second order polynomials and higher. With fixed sparsity we found at most 20% growth in the work to solve +with either 6 levels of spatial refinement or three levels of uniform angular refinement. +A balance must be struck between the scalability of the solve vs expense of the setup, but we believe this is the +first method to show scalable solves with a stable spatial discretisation that doesn’t feature lower-triangular structure +in the streaming limit of the BTE. We did not spend much effort tweaking parameters and we have found that that we +can get better performance in these problems with standard AMG tweaks such as level specific drop tolerances. +We also compared AIRG to two different reduction multigrids and found performance advantages; one where +sparse approximation inverses (SAIs) are used to approximate A−1 +ff , and the lAIR implementation in hypre. We used +ParaSails in hypre to form SAIs with the fixed sparsity of Aff and found almost identical convergence behavior +25 + +to fixed sparsity AIRG (and hence the same solve time) in the streaming limit. We found however that the setup +of the SAIs took twice as long as our GMRES polynomials with m = 4. For lAIR, we could not find a set of +parameters that resulted in fixed work in the solve. Our further investigations suggest the combination of distance 3 +or 4 lAIR plus (only) F-point smooths are required to get scalable results with lAIR, but this is not practical given the +setup/communication costs. +In comparison to distance 1 or 2 lAIR, AIRG took roughly two to three times less work to solve. 
Timing the setup +showed that computing Z with our third-order GMRES polynomial approximations cost between that of computing +Z with distance 1 and 2 lAIR. The total time of our setup at the highest level of spatial refinement matched that of +distance 2 lAIR. The total time (setup plus solve) for AIRG was roughly 3× less than lAIR, though implementation +differences make this comparison difficult. We then investigated using AIRG and lAIR on the full matrix formed +with scattering. Forming this matrix cannot be done scalably with angular refinement, but we showed that AIRG is +applicable in the diffuse limit, performing about as well as lAIR. +We then built an iterative method that used the additive combination of two preconditioners applied to the angular +flux; 1 V-cycle of AIRG was used to invert the streaming/removal operator and 1 V-cycle of boomerAMG was used +to invert a CG diffusion operator. The streaming/removal operator is easier to solve than the streaming operator +and hence we found the work in the solve plateaued with fixed sparsity AIRG. Using distance 1 or 2 lAIR to invert +the streaming/removal operator resulted in cycle complexities over twice that of fixed sparsity AIRG. Given the +performance shown here it would be worth investigating the use of AIRG as part of a standard DG FEM source +iteration; preliminary work reveals AIRG performs similarly when used with DG streaming or streaming/removal +operators. +The only remaining consideration is how we can apply this method with our previously developed angular adap- +tivity and the parallel performance, which we will investigate in future work. AIRG should be performant in parallel, +as the entire multigrid hierarchy can be applied with only matrix-vector products (i.e., no reductions). The CF split- +ting algorithm used has a parallel implementation available in hypre. The GMRES polynomial coefficients on each +level must be computed once during the setup and can be trivially stored for multiple solves. Furthermore given our +use of low-order GMRES polynomials in AIRG, we found a single step method based on a QR factorisation of the +Krylov basis could be used to generate these coefficients stably. In parallel we could therefore use a tall-skinny QR +and generate the coefficients of a polynomial of order m − 1 with m matvecs and a single all-reduce on each level. +For zero and first order polynomials, there is no other communication required. For second order and higher, the +remainder of the GMRES polynomial setup uses m − 2 matmatmults and matmatadds to compute matrix-powers. If +we impose the aforementioned fixed sparsity we only need to communicate the required off-processor rows of Aff in +the matmatmults once in order to compute those matrix powers, regardless of the order of the polynomial. +Given the results in this paper, we believe the combination of our low-memory sub-grid scale discretisation, +AIRG with low-order GMRES polynomials, and an iterative method that additively preconditions with the stream- +ing/removal operator and an inconsistent CG DSA forms an excellent method for solving transport problems on +unstructured grids in both the streaming and scattering limit. +Acknowledgments +The authors would like to acknowledge the support of the EPSRC through the funding of the EPSRC grants +EP/R029423/1 and EP/T000414/1. +References +References +[1] J. S. Warsa, T. A. Wareing, J. E. 
Morel, Krylov Iterative Methods and the Degraded Effectiveness of Diffusion Synthetic Acceleration for +Multidimensional SN Calculations in Problems with Material Discontinuities, Nuclear Science and Engineering 147 (2004) 218–248. +[2] G. L. Ramone, M. L. Adams, P. F. Nowak, A Transport Synthetic Acceleration Method for Transport Iterations, Nuclear Science and +Engineering 125 (1997) 257–283. Publisher: Taylor & Francis eprint: https://doi.org/10.13182/NSE97-A24274. +[3] M. L. Adams, E. W. Larsen, Fast iterative methods for discrete-ordinates particle transport calculations, Progress in Nuclear Energy 40 +(2002) 3–159. +26 + +[4] S. Dargaville, A. G. Buchan, R. P. Smedley-Stevenson, P. N. Smith, C. C. Pain, Scalable angular adaptivity for Boltzmann transport, Journal +of Computational Physics 397 (2020). +[5] S. Dargaville, R. P. Smedley-Stevenson, P. N. Smith, C. C. Pain, Goal-based angular adaptivity for Boltzmann transport in the presence of +ray-effects, Journal of Computational Physics 421 (2020). +[6] T. Manteuffel, S. McCormick, J. Morel, S. Oliveira, G. Yang, A parallel version of a multigrid algorithm for isotropic transport equations, +SIAM Journal on Scientific Computing 15 (1994) 474–493. +[7] B. D. Lansrud, A spatial multigrid iterative method for two-dimensional discrete-ordinates transport problems, Book, Texas A&M University, +2005. Accepted: 2005-08-29T14:39:54Z Artwork Medium: electronic Interview Medium: electronic. +[8] G. Kanschat, J. Ragusa, A Robust Multigrid Preconditioner for $S n$DG Approximation of Monochromatic, Isotropic Radiation Transport +Problems, SIAM Journal on Scientific Computing 36 (2014) A2326–A2345. +[9] J. D. Densmore, D. F. Gill, J. M. Pounders, Cellwise Block Iteration as a Multigrid Smoother for Discrete-Ordinates Radiation-Transport +Calculations, Journal of Computational and Theoretical Transport 0 (2016) 1–26. +[10] P. F. Nowak, A Coupled Synthetic and Multigrid Acceleration Method for Two-Dimensional Transport Calculations., Ph.D. thesis, University +of Michigan, 1988. +[11] J. E. Morel, T. A. Manteuffel, An Angular Multigrid Acceleration Technique for Sn Equations with Highly Forward-Peaked Scattering, +Nuclear Science and Engineering 107 (1991) 330–342. +[12] T. Manteuffel, S. Mccormick, J. Morel, S. Oliveira, G. Yang, A fast multigrid algorithm for isotropic transport problems i: Pure scattering, +SIAM J. Sci. Comp 16 (1995) 601–635. +[13] T. Manteuffel, S. McCormick, J. Morel, G. Yang, A fast multigrid algorithm for isotropic transport problems. II: With absorption, SIAM +Journal on Scientific Computing 17 (1996) 1449–1474. +[14] S. D. Pautz, J. E. Morel, M. L. Adams, An angular multigrid acceleration method for SN equations with highly forward-peaked scattering, +in: Proc. of Int. Conf. Mathematics and Computation, Reactor Physics and Environmental Analysis in Nuclear Application, Madrid, Spain, +volume 1, pp. 647–656. +[15] B. Chang, T. Manteuffel, S. McCormick, J. Ruge, B. Sheehan, Spatial multigrid for isotropic neutron transport, SIAM Journal on Scientific +Computing 29 (2007) 1900–1917. +[16] B. Lee, A novel multigrid method for sn discretizations of the mono-energetic boltzmann transport equation in the optically thick and thin +regimes with anisotropic scattering, part i, SIAM Journal on Scientific Computing 31 (2010) 4744–4773. +[17] B. Lee, Improved multiple-coarsening methods for sn discretizations of the boltzmann equation, SIAM Journal on Scientific Computing 32 +(2010) 2497–2522. +[18] B. 
Figure 5: Timings per DOF (µs) against the number of CG nodes for AIRG with m = 4 and lAIR in a 2D pure streaming problem; panels show (a) the solve time, (b) the time to compute $\hat{A}_{ff}^{-1}$ (dashed) and Z (solid) for AIRG, (c) the setup time and (d) the total time. Markers distinguish AIRG with fixed sparsity, AIRG without fixed sparsity, distance 1 lAIR and distance 2 lAIR, all with Falgout-CLJP.

Figure 6: Timings per DOF (µs) against the number of CG nodes for AIRG with fixed sparsity, Falgout-CLJP and varying GMRES polynomial order (m = 1 to 5) in a 2D pure streaming problem; panels show (a) the setup time and (b) the total time.

Figure 7: Timings per DOF (µs) against the number of CG nodes for AIRG with m = 4 and lAIR in a 2D pure scattering problem; panels show (a) the solve time, (b) the time to compute $\hat{A}_{ff}^{-1}$ (dashed) and Z (solid) for AIRG, (c) the setup time and (d) the total time. Markers distinguish AIRG with fixed sparsity, AIRG without fixed sparsity, distance 1 lAIR and distance 2 lAIR, all with Falgout-CLJP.

Figure 8: Timings per DOF (µs) against the number of CG nodes for AIRG with fixed sparsity, Falgout-CLJP and varying GMRES polynomial order (m = 1 to 5) in a 2D pure scattering problem; panels show (a) the setup time and (b) the total time.

Figure 9: The 10 biggest and smallest eigenvalues (by real part, imaginary part and magnitude; dots) and the field of values (solid lines) of (a) the streaming operator and (b) the streaming/removal operator with a total cross-section of 10.0, with the equivalent for $A_{ff}$ under Falgout-CLJP CF splitting shown in red. Computed on the third refined spatial grid with level one angular refinement.