Task Description: Search the literature and discuss in your own words (~300 words) what could be seen as a breakthrough in parallel computer architecture technology that has led recently (in the last ten years), or could lead in the near future, to improvement in computer performance. List the references that you used in IEEE style (only books, journal and conference papers are accepted; no web sites will be accepted). You may choose the topic number corresponding to your 2 ID digits, or that corresponding to adding the 2 ID digits, or preferably choose your own Parallel Computing topic.
Solution:
Dataflow Computing
In the field of high-performance computing (HPC), the introduction of multicore processors created a new challenge: the parallelization of computation. Making productive use of multicore systems requires parallel programs, and writing such programs is not a simple task. Dataflow computing has proved to be a successful approach to parallel as well as big data problems. It is a data-driven approach: an operation executes only when all of its operands are available. Dataflow computing is thus an alternative to control flow for extracting parallelism from programs, and the HPC community has embraced dataflow ideas to speed up the execution of scientific applications. Dataflow is a favored choice for handling large amounts of data because it hides the underlying details of distributed processing, coordination, and data management, and because it can parallelize tasks while tracking the dependencies between them. In the past, the HPC community and application programmers did not find dataflow machines attractive: they were not commercially available, existed only as low-density academic prototypes, were inefficient and slow, and produced disappointing results that did not meet application programmers' requirements [1]. Now, however, the dataflow computing model is becoming the de facto standard in big data and its applications [2].
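As a purely illustrative sketch of the firing rule described above (all names here are invented, not taken from any real dataflow machine), the following Python fragment executes a small graph for (a + b) * (c - d). A node fires as soon as all of its operands have arrived; the add and subtract nodes have no mutual dependency, so a real dataflow machine could fire them in parallel.

```python
# Minimal sketch of the dataflow firing rule: a node "fires" only when
# every input operand has arrived, so independent nodes may run in any
# order (or in parallel on real hardware).

class Node:
    def __init__(self, name, op, num_inputs):
        self.name = name
        self.op = op                     # function applied when the node fires
        self.inputs = [None] * num_inputs
        self.consumers = []              # (node, port) pairs fed by our result

    def ready(self):
        return all(v is not None for v in self.inputs)

def run(initial_tokens):
    # initial_tokens: list of (node, port, value) operand deliveries
    pending = list(initial_tokens)
    while pending:
        node, port, value = pending.pop()
        node.inputs[port] = value
        if node.ready():                 # firing rule: all operands present
            result = node.op(*node.inputs)
            print(f"{node.name} fires -> {result}")
            for consumer, cport in node.consumers:
                pending.append((consumer, cport, result))

# Graph for (a + b) * (c - d): add and sub are independent of each other.
add = Node("add", lambda x, y: x + y, 2)
sub = Node("sub", lambda x, y: x - y, 2)
mul = Node("mul", lambda x, y: x * y, 2)
add.consumers.append((mul, 0))
sub.consumers.append((mul, 1))
run([(add, 0, 2), (add, 1, 3), (sub, 0, 10), (sub, 1, 4)])
```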
The dataflow computing model is a fundamentally different way of performing computation from computing with conventional CPUs. Dataflow computers concentrate on optimizing the movement of data through an application and exploit large-scale parallelism among thousands of small 'dataflow cores' to deliver order-of-magnitude benefits in performance, space, and power consumption. Dataflow computers are very good with applications that have a high degree of operation repetition and some degree of reuse of the processed data; in such applications they can save time, space, and power.
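To make the "movement of data" idea concrete, here is a hedged sketch (the stage names are invented for illustration) of the streaming style such machines favor: data flows through a fixed pipeline of small operators, and the same few operations repeat for every element. Python generators only model the pipeline structure; on real dataflow hardware the stages operate concurrently on different elements of the stream.

```python
# Illustrative streaming pipeline: each stage is a tiny operator that the
# data stream passes through, mirroring how a dataflow engine pipelines
# repeated operations over a stream of values.

def scale(stream, factor):
    for x in stream:
        yield x * factor        # stage 1: one multiply per data item

def offset(stream, delta):
    for x in stream:
        yield x + delta         # stage 2: one add per data item

def accumulate(stream):
    total = 0
    for x in stream:
        total += x              # stage 3: running reduction
    return total

# The same two operations repeat for every element (high operation
# repetition), the workload shape that suits dataflow computers well.
data = range(1_000)
print(accumulate(offset(scale(data, 3), 1)))   # sum of 3*x + 1
```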
References
[1] L. Verdoscia and R. Giorgi, "A Data-Flow Soft-Core Processor for Accelerating Scientific Calculation on FPGAs," Mathematical Problems in Engineering, vol. 2016, 21 pages, 2016.
[2] P. Wickramasinghe et al., "Twister2: TSet High-Performance Iterative Dataflow," in Proc. 2019 International Conference on High Performance Big Data and Intelligent Systems (HPBD&IS), 2019.