No instruction set computing (NISC) is a computing architecture and compiler technology for designing highly efficient custom processors and hardware accelerators by allowing a compiler to have low-level control of hardware resources.
Overview
NISC is a statically scheduled horizontal nanocoded architecture (SSHNA). The term "statically scheduled" means that operation scheduling and hazard handling are done by a compiler. The term "horizontal nanocoded" means that NISC does not have any predefined instruction set or microcode; the compiler generates nanocode that directly controls the functional units, registers and multiplexers of a given datapath. A minimal sketch of such a control word follows the list below. Giving low-level control to the compiler enables better utilization of datapath resources, which ultimately results in better performance. The benefits of NISC technology are:
- Simpler controller: no hardware scheduler, no instruction decoder
- Better performance: more flexible architecture, better resource utilization
- Easier to design: no need for designing instruction-sets
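The control word sketched below is a minimal, hypothetical illustration of this idea, not the format of any actual NISC compiler: the field names (mux_a_sel, alu_op, and so on), the field widths, and the two-word "program" are assumptions chosen only to show how compiler-emitted nanocode can drive multiplexers, a functional unit and a register file directly, with no instruction decoder in between.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical NISC-style nanocode word: every field drives one
 * datapath resource directly, so no instruction decoder is needed.
 * Field names and widths are illustrative, not from any real tool. */
typedef struct {
    unsigned mux_a_sel : 2;   /* selects left ALU input (register index)  */
    unsigned mux_b_sel : 2;   /* selects right ALU input (register index) */
    unsigned alu_op    : 2;   /* 0 = add, 1 = sub, 2 = pass-through A     */
    unsigned dest_reg  : 2;   /* register that latches the ALU result     */
    unsigned reg_write : 1;   /* register-file write enable               */
} nanocode_t;

int main(void) {
    int32_t regs[4] = {5, 7, 0, 0};

    /* "Program" as the compiler would emit it: a statically scheduled
     * sequence of control words, one per cycle. */
    nanocode_t program[] = {
        {0, 1, 0, 2, 1},   /* r2 = r0 + r1 */
        {2, 0, 1, 3, 1},   /* r3 = r2 - r0 */
    };

    for (unsigned i = 0; i < sizeof program / sizeof program[0]; i++) {
        nanocode_t cw = program[i];
        int32_t a = regs[cw.mux_a_sel];        /* input multiplexers */
        int32_t b = regs[cw.mux_b_sel];
        int32_t result = (cw.alu_op == 0) ? a + b :
                         (cw.alu_op == 1) ? a - b : a;
        if (cw.reg_write)                      /* write-back, gated by enable */
            regs[cw.dest_reg] = result;
    }

    printf("r2 = %d, r3 = %d\n", regs[2], regs[3]);  /* r2 = 12, r3 = 7 */
    return 0;
}
```

Each array element corresponds to one cycle of the statically scheduled program; in real hardware the fields would fan out directly to the datapath's control inputs rather than being interpreted by a loop.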
The instruction set and controller of a processor are the most tedious and time-consuming parts to design. By eliminating these two, the design of custom processing elements becomes significantly easier.
Furthermore, the datapath of a NISC processor can even be generated automatically for a given application, which significantly improves designer productivity.
Since NISC datapaths are very efficient and can be generated automatically, NISC technology is comparable to high-level synthesis (HLS) or C-to-HDL synthesis approaches. In fact, one benefit of this architecture style is its ability to bridge these two technologies (custom processor design and HLS).
Zero instruction set computer
In computer science, zero instruction set computer (ZISC) refers to a computer architecture based solely on pattern matching and the absence of (micro-)instructions in the classical sense. These chips are often compared to neural networks and are marketed by the number of "synapses" and "neurons" they provide.[1] The acronym ZISC alludes to reduced instruction set computer (RISC).
ZISC is a hardware implementation of Kohonen networks (artificial neural networks) that allows massively parallel processing of very simple data (0 or 1). It was invented by Guy Paillet[2] and Pascal Tannhof (IBM),[3][2] developed in cooperation with IBM's chip plant in Essonnes, France, and commercialized by IBM.
The ZISC architecture alleviates the memory bottleneck by combining pattern memory with pattern learning and recognition logic. Its massively parallel design addresses the winner-take-all problem in action selection by giving each "neuron" its own memory and letting all neurons evaluate a problem simultaneously, after which their results are arbitrated against one another.[4]
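As an illustration only, the sketch below mimics in software the nearest-prototype, winner-take-all matching that ZISC-style neurons perform in parallel in hardware. The vector length, the L1 (Manhattan) distance metric, the prototype values and the category labels are all assumptions for this example and are not taken from the ZISC chip's documentation.

```c
#include <limits.h>
#include <stdio.h>

/* Illustrative sketch of winner-take-all matching: each "neuron" is a
 * stored prototype vector, and the loop stands in for the parallel
 * comparison network that the hardware provides. */
#define N_NEURONS  4
#define VEC_LEN    8

static int l1_distance(const unsigned char *a, const unsigned char *b) {
    int d = 0;
    for (int i = 0; i < VEC_LEN; i++)
        d += (a[i] > b[i]) ? a[i] - b[i] : b[i] - a[i];
    return d;
}

int main(void) {
    /* Each "neuron" holds one learned prototype and a category label. */
    unsigned char prototypes[N_NEURONS][VEC_LEN] = {
        {0, 0, 0, 0, 0, 0, 0, 0},
        {255, 255, 255, 255, 255, 255, 255, 255},
        {10, 20, 30, 40, 50, 60, 70, 80},
        {80, 70, 60, 50, 40, 30, 20, 10},
    };
    int category[N_NEURONS] = {1, 2, 3, 4};

    unsigned char input[VEC_LEN] = {12, 18, 33, 41, 47, 62, 69, 77};

    /* Winner-take-all: every neuron computes its distance to the input
     * (simultaneously in hardware); the smallest distance wins. */
    int best = -1, best_dist = INT_MAX;
    for (int n = 0; n < N_NEURONS; n++) {
        int d = l1_distance(input, prototypes[n]);
        if (d < best_dist) {
            best_dist = d;
            best = n;
        }
    }

    printf("winner: neuron %d (category %d), distance %d\n",
           best, category[best], best_dist);
    return 0;
}
```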
Applications and controversy
According to TechCrunch, software emulations of these types of chips are currently used for image recognition by many large tech companies, such as Facebook and Google. When applied to other pattern-detection tasks, such as text, results are said to be produced in microseconds, even with chips released in 2007.[1]
Junko Yoshida of EE Times compared the NeuroMem chip with "The Machine", the system from the television series Person of Interest that predicts crimes by scanning people's faces, describing it as "the heart of big data" and "foreshadow[ing] a real-life escalation in the era of massive data collection".[5]
History
Historically, microprocessor design evolved from the complex instruction set computer (CISC) to the reduced instruction set computer (RISC). In the early days of the computer industry, compiler technology did not exist and programming was done in assembly language. To make programming easier, computer architects created complex instructions that were direct representations of high-level functions of high-level programming languages. Another force that encouraged instruction complexity was the lack of large memory blocks.
As compiler and memory technologies advanced, RISC architectures were introduced. RISC architectures need more instruction memory and require a compiler to translate high-level languages to RISC assembly code. Further advances in compiler and memory technologies led to the emergence of very long instruction word (VLIW) processors, in which the compiler schedules the instructions and handles data hazards.
NISC is a successor to VLIW processors. In NISC, the compiler has both horizontal and vertical control of the operations in the datapath, so the hardware is much simpler. However, the control memory is larger than in previous generations; to address this, low-overhead compression techniques can be used.
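The article does not specify which compression scheme is used. As one hedged example, the sketch below shows a simple dictionary (look-up table) approach, a common low-overhead technique: each distinct wide control word is stored once, and the control memory holds only narrow indices. The 32-bit word width and the sample program are made up for illustration and are not taken from any NISC implementation.

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of dictionary-based control-memory compression: distinct wide
 * control words are stored once in a small dictionary, and the program
 * itself becomes a sequence of narrow indices. */
int main(void) {
    /* Wide nanocode words as the compiler would emit them; note the
     * repetition typical of unrolled or pipelined schedules. */
    uint32_t program[] = {0xA1B2C3D4, 0x00000011, 0xA1B2C3D4,
                          0x00000011, 0xDEADBEEF, 0xA1B2C3D4};
    unsigned n = sizeof program / sizeof program[0];

    uint32_t dict[16];           /* dictionary of distinct control words    */
    uint8_t  indices[16];        /* compressed program: one index per cycle */
    unsigned dict_size = 0;

    for (unsigned i = 0; i < n; i++) {
        unsigned j = 0;
        while (j < dict_size && dict[j] != program[i])
            j++;
        if (j == dict_size)
            dict[dict_size++] = program[i];   /* new distinct word */
        indices[i] = (uint8_t)j;
    }

    /* At run time the controller fetches the narrow index and expands it
     * through the dictionary; here we just verify the round trip. */
    for (unsigned i = 0; i < n; i++)
        printf("cycle %u: index %u -> %08X\n",
               i, (unsigned)indices[i], (unsigned)dict[indices[i]]);

    printf("control memory: %u x 32-bit words -> %u x 32-bit dictionary "
           "+ %u x 4-bit indices\n", n, dict_size, n);
    return 0;
}
```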
References
1. Lambinet, Philippe (31 January 2015). "The Ongoing Quest For The 'Brain' Chip". TechCrunch.
2. "Neuron circuit".
3. "Profile: Pascal Tannhof". ResearchGate.
4. Higginbotham, Stacey (14 November 2011). "Make way for more brain-based chips". Gigaom.
5. Yoshida, Junko. "NeuroMem IC Matches Patterns, Sees All, Knows All". EE Times.
Further reading
- Henkel, Jörg; Parameswaran, Sri (11 July 2007). Designing Embedded Processors: A Low Power Perspective. Springer. ISBN 978-1402058684. Chapter 2.
External links
- US Patent for ZISC hardware, issued to IBM/G. Paillet on April 15, 1997
- Image Processing Using RBF like Neural Networks: A ZISC-036 Based Fully Parallel Implementation Solving Real World and Real Complexity Industrial Problems by K. Madani, G. de Trémiolles, and P. Tannhof
- From CISC to RISC to ZISC by S. Liebman on lsmarketing.com
- Neural Networks on Silicon at aboutAI.net
- French patent application: NISC for a purely applicative engine whose sole operation is application (as opposed to lambda calculus, which is a particular case of quasi-applicative systems with two operations, application and abstraction; Curry 1958, p. 31)