Gather/scatter (vector addressing)

Gather/scatter is a type of memory addressing that in a single operation collects (gathers) data from, or stores (scatters) data to, multiple arbitrary indices. Examples of its use include sparse linear algebra operations,[1] sorting algorithms, fast Fourier transforms,[2] and some computational graph theory problems.[3] It is the vector equivalent of register indirect addressing, with gather involving indexed reads and scatter indexed writes. Vector processors (and some SIMD units in CPUs) have hardware support for gather and scatter operations, as do many input/output systems, allowing large data sets to be transferred to main memory more rapidly.

The concept is somewhat similar to vectored I/O, which is sometimes also referred to as scatter-gather I/O. This system differs in that it is used to map multiple sources of data from contiguous structures into a single stream for reading or writing. A common example is writing out a series of strings, which in most programming languages would be stored in separate memory locations.
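
For example, the POSIX vectored-write call writev() collects several separately stored buffers into one contiguous output stream (a minimal sketch; the three string fragments are illustrative):

#include <string.h>
#include <sys/uio.h>    /* struct iovec, writev() */
#include <unistd.h>     /* STDOUT_FILENO */

int main(void)
{
    /* Three strings held in separate memory locations. */
    const char *parts[3] = { "gather", "/", "scatter\n" };
    struct iovec iov[3];
    for (int i = 0; i < 3; ++i) {
        iov[i].iov_base = (void *)parts[i];   /* start of each buffer  */
        iov[i].iov_len  = strlen(parts[i]);   /* length of each buffer */
    }
    /* writev() gathers all three buffers into a single write. */
    writev(STDOUT_FILENO, iov, 3);
    return 0;
}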

Definitions

Gather

A sparsely populated vector $y$ holding $N$ non-empty elements can be represented by two densely populated vectors of length $N$: $x$, containing the non-empty elements of $y$, and $\mathrm{idx}$, giving the index in $y$ where the corresponding element of $x$ is located. The gather of $y$ into $x$, denoted $x \leftarrow y|_{\mathrm{idx}}$, assigns $x(i) = y(\mathrm{idx}(i))$, with $\mathrm{idx}$ having already been calculated.[4] Assuming no pointer aliasing between x[], y[] and idx[], a C implementation is

/* Gather: read y at the positions listed in idx and pack into dense x. */
for (int i = 0; i < N; ++i)
    x[i] = y[idx[i]];

Scatter

The sparse scatter, denoted $y|_{\mathrm{idx}} \leftarrow x$, is the reverse operation. It copies the values of $x$ into the corresponding locations in the sparsely populated vector $y$, i.e. $y(\mathrm{idx}(i)) = x(i)$.

/* Scatter: write the dense values in x back to y at the positions listed in idx. */
for (int i = 0; i < N; ++i)
    y[idx[i]] = x[i];

Support

Scatter/gather units were a part of most vector computers, notably the Cray-1. In this case, the purpose was to make efficient use of the limited resource of the vector registers. For instance, the Cray-1 had eight 64-word vector registers, so data containing values that had no effect on the outcome, like zeros in an addition, took up valuable register space. By gathering the non-zero values into the registers, and scattering the results back out, the registers could be used much more efficiently, leading to higher performance. Such machines generally implemented two access models, scatter/gather and "stride", the latter designed to quickly load contiguous data.[5] This basic layout was widely copied in later supercomputer designs, especially in the many vector models from Japan.
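
As an illustration of that pack/compute/unpack pattern, the following plain-C sketch (the function name sparse_axpy and the NNZ_MAX buffer size are illustrative, not a Cray interface) gathers the affected elements of a vector into a dense buffer, updates them there, and scatters the results back:

#define NNZ_MAX 64                        /* illustrative upper bound on non-zeros (nnz <= NNZ_MAX) */

void sparse_axpy(double a, const double *xval, const int *idx, int nnz, double *y)
{
    double dense[NNZ_MAX];                /* dense working buffer, standing in for a vector register */
    for (int i = 0; i < nnz; ++i)         /* gather: pack the addressed elements of y */
        dense[i] = y[idx[i]];
    for (int i = 0; i < nnz; ++i)         /* dense arithmetic on the packed values    */
        dense[i] += a * xval[i];
    for (int i = 0; i < nnz; ++i)         /* scatter: write the results back to y     */
        y[idx[i]] = dense[i];
}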

As microprocessor design improved during the 1990s, commodity CPUs began to add vector processing units. At first these tended to be simple, sometimes overlaying the CPU's general purpose registers, but over time these evolved into increasingly powerful systems that met and then surpassed the units in high-end supercomputers. By this time, scatter/gather instructions had been added to many of these designs.

x86-64 CPUs that support the AVX2 instruction set can gather 32-bit and 64-bit elements using per-element memory offsets from a base address. A second register determines whether each particular element is loaded, and faults arising from invalid memory accesses by masked-out elements are suppressed.[6]: 503–4  The AVX-512 instruction set also contains (potentially masked) scatter operations.[6]: 539 [7] The Arm architecture's Scalable Vector Extension (SVE) includes gather and scatter operations on 8-, 16-, 32- and 64-bit elements.[8][9] InfiniBand has hardware support for gather/scatter.[10]
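
As a concrete illustration, a minimal sketch using the AVX2 gather intrinsic _mm256_i32gather_epi32 (assuming a compiler that provides <immintrin.h> and an AVX2-capable CPU; the function name gather8 is illustrative) loads eight 32-bit elements from arbitrary indices in one instruction:

#include <immintrin.h>   /* AVX2 intrinsics; compile with e.g. -mavx2 */

/* Equivalent to x[i] = y[idx[i]] for i = 0..7. */
void gather8(int *x, const int *y, const int *idx)
{
    __m256i vindex   = _mm256_loadu_si256((const __m256i *)idx); /* load the 8 indices        */
    __m256i gathered = _mm256_i32gather_epi32(y, vindex, 4);     /* scale 4 = sizeof(int)     */
    _mm256_storeu_si256((__m256i *)x, gathered);                 /* store the gathered values */
}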

Without instruction-level gather/scatter support, efficient implementations may need to be tuned for optimal performance, for example with software prefetching; libraries such as Open MPI may provide such primitives.[2][8]
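
For instance, a scalar gather loop can prefetch the indexed elements a fixed distance ahead using the GCC/Clang builtin __builtin_prefetch (a sketch; the distance PF_DIST is an illustrative tuning parameter whose best value depends on the machine and access pattern):

#define PF_DIST 16   /* illustrative prefetch distance, in elements */

void gather_prefetch(double *x, const double *y, const int *idx, int n)
{
    for (int i = 0; i < n; ++i) {
        if (i + PF_DIST < n)
            __builtin_prefetch(&y[idx[i + PF_DIST]], 0, 1); /* hint an upcoming read */
        x[i] = y[idx[i]];
    }
}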

References

  1. ^ Lewis, John G.; Simon, Horst D. (1 March 1988). "The Impact of Hardware Gather/Scatter on Sparse Gaussian Elimination". SIAM Journal on Scientific and Statistical Computing. 9 (2): 304–311. doi:10.1137/0909019.
  2. ^ a b He, Bingsheng; Govindaraju, Naga K.; Luo, Qiong; Smith, Burton (2007). "Efficient gather and scatter operations on graphics processors". Proceedings of the 2007 ACM/IEEE conference on Supercomputing (PDF). pp. 1–12. doi:10.1145/1362622.1362684. ISBN 9781595937643. S2CID 2928233.
  3. ^ Kumar, Manoj; Serrano, Mauricio; Moreira, Jose; Pattnaik, Pratap; Horn, W P; Jann, Joefon; Tanase, Gabriel (September 2016). "Efficient implementation of scatter-gather operations for large scale graph analytics". 2016 IEEE High Performance Extreme Computing Conference (HPEC). pp. 1–7. doi:10.1109/HPEC.2016.7761578. ISBN 978-1-5090-3525-0. S2CID 10566760.
  4. ^ BLAS Technical Forum standard, Chapter 3: Sparse BLAS.
  5. ^ Bell, Gordon (25 January 1998). A Seymour Cray Perspective (Technical report).
  6. ^ a b Kusswurm, Daniel (2022). Modern parallel programming with C++ and Assembly language : X86 SIMD development using AVX, AVX2, and AVX-512. Apress Media. ISBN 978-1-4842-7917-5.
  7. ^ Hossain, Md Maruf; Saule, Erik (9 August 2021). "Impact of AVX-512 Instructions on Graph Partitioning Problems". 50th International Conference on Parallel Processing Workshop. pp. 1–9. doi:10.1145/3458744.3473362. ISBN 9781450384414. S2CID 237350994.
  8. ^ a b Zhong, Dong; Shamis, Pavel; Cao, Qinglei; Bosilca, George; Sumimoto, Shinji; Miura, Kenichi; Dongarra, Jack (May 2020). "Using Arm Scalable Vector Extension to Optimize OPEN MPI" (PDF). 2020 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGRID). pp. 222–231. doi:10.1109/CCGrid49817.2020.00-71. ISBN 978-1-7281-6095-5. S2CID 220604878.
  9. ^ "What is the Scalable Vector Extension?". ARM Developer. Retrieved 19 November 2022.
  10. ^ Gainaru, Ana; Graham, Richard L.; Polyakov, Artem; Shainer, Gilad (25 September 2016). "Using InfiniBand Hardware Gather-Scatter Capabilities to Optimize MPI All-to-All". Proceedings of the 23rd European MPI Users' Group Meeting. pp. 167–179. doi:10.1145/2966884.2966918. ISBN 9781450342346. S2CID 15880901.