Comparative Analysis of Parallel Algorithm’s Optimization Methods Taking into Consideration or Ignoring the Execution Time of Operations
Keywords:
optimization, algorithm, information graph, sequence list, execution time, operation, process, processor, information dependence, unit of time
Abstract
In this paper, we analyze the methods we have developed for optimizing parallel algorithms, both with and without taking the execution time of each operation into account. These methods can be applied to sequential algorithms to obtain their parallel analogues, as well as to existing parallel algorithms to improve their quality. The proposed optimization methods can reduce the amount of communication between processors and, accordingly, the execution time of the entire algorithm.
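The abstract refers to scheduling the operations of an algorithm's information graph (a DAG of data dependencies) with and without operation execution times. The following minimal Python sketch is not taken from the paper; the graph, names, and durations are purely illustrative. It contrasts a unit-time tiering of such a graph with an earliest-finish computation that weights each operation by an assumed duration.

```python
# Hypothetical sketch (not the authors' method): tiering an information graph
# assuming unit-time operations, versus accounting for per-operation durations.

def tiers_unit_time(deps):
    """Assign each operation to the earliest tier, assuming every operation
    takes one unit of time. `deps` maps an operation to the operations it
    depends on."""
    tier = {}
    def level(op):
        if op not in tier:
            tier[op] = 0 if not deps[op] else 1 + max(level(p) for p in deps[op])
        return tier[op]
    for op in deps:
        level(op)
    return tier

def earliest_finish_weighted(deps, time):
    """Earliest finish time of each operation when execution times differ.
    `time` maps an operation to its assumed duration; communication costs
    are ignored in this toy model."""
    finish = {}
    def fin(op):
        if op not in finish:
            start = max((fin(p) for p in deps[op]), default=0)
            finish[op] = start + time[op]
        return finish[op]
    for op in deps:
        fin(op)
    return finish

if __name__ == "__main__":
    # Toy information graph: d depends on b and c, which both depend on a.
    deps = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
    time = {"a": 1, "b": 5, "c": 2, "d": 1}       # assumed, illustrative durations
    print(tiers_unit_time(deps))                  # {'a': 0, 'b': 1, 'c': 1, 'd': 2}
    print(earliest_finish_weighted(deps, time))   # {'a': 1, 'b': 6, 'c': 3, 'd': 7}
```

In the unit-time view, operations b and c sit in the same tier; once durations are considered, the long operation b dominates the finish time of d, which is the kind of distinction the time-aware optimization method is meant to exploit.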
Published: 2018-06-29
How to Cite: Al-Mardi, M. H. A. (2018). Comparative Analysis of Parallel Algorithm’s Optimization Methods Taking into Consideration or Ignoring the Execution Time of Operations. Computer Tools in Education, (3), 38–48. https://doi.org/10.32603/2071-2340-3-38-48
Section: Software Engineering
This work is licensed under a Creative Commons Attribution 4.0 International License.