Many modern data processing and HPC workloads are heavily memory-latency bound. A tempting solution is software prefetching, where special non-blocking loads are used to bring data into the cache hierarchy just before it is required. However, such prefetches are difficult to insert effectively, and techniques for automatic insertion are currently limited. This paper develops a novel compiler pass to automatically generate software prefetches for indirect memory accesses, a special class of irregular memory accesses often seen in high-performance workloads. We evaluate the technique across a wide set of systems, all of which benefit from it. We then evaluate the extent to which good prefetch instructions are architecture dependent, and which classes of programs are particularly amenable. Across a set of memory-bound benchmarks, our automated pass achieves average speedups of 1.3x on an Intel Haswell processor, 1.1x on both an ARM Cortex-A57 and a Qualcomm Kryo, 1.2x on a Cortex-A72 and an Intel Kaby Lake, and 1.35x on an Intel Xeon Phi Knights Landing, each of which is an out-of-order core, and performance improvements of 2.1x and 2.7x on the in-order ARM Cortex-A53 and the first-generation Intel Xeon Phi.
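To make the target pattern concrete, the following is a minimal sketch of the kind of indirect-access loop such a pass transforms, written by hand using the GCC/Clang `__builtin_prefetch` intrinsic. The function name and the look-ahead distance `PREFETCH_DIST` are illustrative assumptions, not values or code from the paper:

```c
#include <stddef.h>

/* Illustrative look-ahead distance; real passes tune this per architecture. */
#define PREFETCH_DIST 16

/* Indirect access pattern: data[index[i]]. Because index[] is traversed
   sequentially, index[i + PREFETCH_DIST] can be read early and used to
   issue a non-blocking prefetch for the data element needed later. */
long sum_indirect(const long *data, const int *index, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_DIST < n) {
            /* Hint only: fetch data[index[i + PREFETCH_DIST]] into cache;
               0 = read access, 3 = high temporal locality. */
            __builtin_prefetch(&data[index[i + PREFETCH_DIST]], 0, 3);
        }
        sum += data[index[i]];
    }
    return sum;
}
```

The prefetch changes no program result; it only overlaps the latency of the dependent `data[index[i]]` load with useful work, which is why the gap between in-order and out-of-order cores in the reported speedups is so large.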
The Arm Triple Core Lock-Step (TCLS) processor is the natural evolution of Arm Cortex-R Dual Core Lock-Step (DCLS) processors to increase reliability, predictability, and availability in safety-critical and ultra-reliable applications. TCLS is simple, scalable, and easy to deploy in applications where Arm DCLS processors are widely used (e.g., automotive), as well as in new applications where the presence of Arm technology is incipient (e.g., enterprise) or almost non-existent (e.g., space). This article discusses the fundamentals of the Arm TCLS processor, providing key functional and implementation details. The article also describes a TRL6 proof-of-concept TCLS-based System-on-Chip (SoC) that has been prototyped and tested in an Airbus Defence and Space telecom satellite on-board computer. The article provides implementation results for this SoC in both commercial and rad-hard process technologies.
ML techniques are conventionally executed on general-purpose processors, which are usually not energy-efficient since they devote excessive hardware resources to flexibility. Consequently, application-specific hardware accelerators have been proposed to improve energy efficiency. However, such accelerators were designed for a small set of ML techniques sharing similar computational patterns, and they adopt complex and informative instructions (control signals) directly corresponding to high-level functional blocks (such as layers in neural networks). The lack of agility in the instruction set prevents such accelerator designs from supporting a variety of different ML techniques with sufficient flexibility and efficiency. We first propose a novel domain-specific ISA for NN accelerators, called Cambricon, which is a load-store architecture that integrates scalar, vector, matrix, logical, data transfer, and control instructions, based on a comprehensive analysis of existing NN techniques. We then extend the application scope of Cambricon from NN to broader ML techniques. We also propose an assembly language, an assembler, and a runtime to support programming with Cambricon. Our evaluation over a total of 16 representative yet distinct ML techniques has demonstrated that Cambricon exhibits strong descriptive capacity over a broad range of ML techniques, and provides higher code density than general-purpose ISAs such as x86, MIPS, and GPGPU.