Parallel task scheduling problem

In theoretical computer science, the parallel task scheduling problem is an NP-hard optimization problem. A given set of parallel tasks, also called jobs, needs to be scheduled on identical machines, sometimes called processors, minimizing the latest completion time. In Veltman et al.[1] and Drozdowski,[2] this problem is denoted by P|size_j|C_max in the three-field notation introduced by Graham et al.[3] The origins of this problem formulation can be traced back to 1960.[4] For this problem, there exists no polynomial-time approximation algorithm with a ratio smaller than 3/2 unless P = NP.

Definition

In this problem, we are given a set J of n jobs as well as m identical machines. Each job j in J has a processing time p_j and requires the simultaneous use of q_j machines during its execution. A schedule assigns each job j a starting time t_j and a set of q_j machines to be processed on. A schedule is feasible if each machine executes at most one job at any given time. The objective of the problem, denoted by P|size_j|C_max, is to find a schedule with minimum length C_max = max_j (t_j + p_j), also called the makespan of the schedule. A sufficient condition for the feasibility of a schedule is the following:

Σ_{j' : t_{j'} ≤ t_j < t_{j'} + p_{j'}} q_{j'} ≤ m   for every starting time t_j.

If this property is satisfied for all starting times, a feasible schedule can be generated by assigning free machines to the jobs at each starting time, beginning with time t = 0.[5][6] Furthermore, the number of machine intervals used by jobs and idle intervals at each time step can be bounded by 3n.[5] Here, a machine interval is a set of consecutive machines of maximal cardinality such that all machines in this set are processing the same job. A machine interval is completely specified by the indices of its first and last machine. Therefore, it is possible to obtain a compact encoding of the output with polynomial size.
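The sufficient condition and the greedy machine assignment described above can be sketched as follows. This is an illustrative Python sketch, not taken from the cited sources; jobs are assumed to be given as triples of starting time, processing time, and machine requirement, and the helper names are hypothetical.

```python
def is_feasible(jobs, m):
    """jobs: list of (start, processing_time, size) triples.
    Returns True if, at every starting time, the machine requirements of
    all jobs running at that moment sum to at most m. Since machine usage
    can only increase at a starting time, checking start times suffices."""
    for s, _, _ in jobs:
        in_use = sum(q for t, p, q in jobs if t <= s < t + p)
        if in_use > m:
            return False
    return True

def assign_machines(jobs, m):
    """Greedily hand each job a set of free machine indices at its start
    time, processing jobs in order of increasing start time. Assumes
    is_feasible(jobs, m) holds, so enough free machines always exist.
    Returns one set of machine indices per job."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    assignment = [None] * len(jobs)
    for j in order:
        s, _, q = jobs[j]
        # Machines busy at time s belong to already-assigned running jobs.
        busy = set()
        for k in order:
            if assignment[k] is not None:
                t, pk, _ = jobs[k]
                if t <= s < t + pk:
                    busy |= assignment[k]
        free = [i for i in range(m) if i not in busy]
        assignment[j] = set(free[:q])
    return assignment
```

For example, three jobs (0, 2, 2), (0, 2, 1), (2, 1, 3) are feasible on m = 3 machines but not on m = 2, and the greedy assignment gives the two simultaneous jobs disjoint machine sets.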

Hardness

This problem is NP-hard even for a constant number of machines, since already the case of two machines contains the partition problem. Furthermore, Du and Leung[7] showed that this problem is strongly NP-hard when the number of machines is at least 5, and that there exists a pseudo-polynomial time algorithm which solves the problem exactly if the number of machines is at most 3. Later, Henning et al.[8] showed that the problem is also strongly NP-hard when the number of machines is 4. If the number of machines is not bounded by a constant, there can be no approximation algorithm with an approximation ratio smaller than 3/2 unless P = NP, because this problem contains the bin packing problem as a subcase, i.e., when all the jobs have processing time 1.
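The connection to the partition problem can be made concrete with a small brute-force sketch (illustrative only; the helper names are hypothetical): each number a_i becomes a job with processing time a_i that needs one machine, and a perfect partition exists exactly when the two machines can finish simultaneously, i.e., when the optimal makespan equals half the total sum.

```python
from itertools import combinations

def has_partition(numbers):
    """True if the multiset can be split into two halves of equal sum."""
    total = sum(numbers)
    if total % 2:
        return False
    return any(sum(c) == total // 2
               for r in range(len(numbers) + 1)
               for c in combinations(numbers, r))

def optimal_makespan_two_machines(times):
    """Brute-force optimum of the two-machine case with single-machine
    jobs: try every subset as the load of the first machine."""
    total = sum(times)
    best = total
    for r in range(len(times) + 1):
        for c in combinations(times, r):
            best = min(best, max(sum(c), total - sum(c)))
    return best
```

For [3, 1, 1, 2, 2, 1] a perfect partition exists and the optimal makespan is 5 = 10/2; for [1, 2, 4] no partition exists and the makespan stays above 7/2.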

Variants

There are several variants of this problem that have been studied.[2] The following variants also have been considered in combination with each other.

Contiguous jobs: In this variant, the machines have a fixed order M_1, …, M_m. Instead of assigning the jobs to an arbitrary subset of the machines, each job has to be assigned to an interval of consecutive machines. This problem corresponds to the problem formulation of the strip packing problem.

Moldable jobs: In this variant, each job j has a set D_j of feasible machine counts, and for each count d in D_j it has a processing time p_{j,d}. To schedule a job j, an algorithm has to choose a machine count d in D_j and assign j to a starting time t_j and to d machines during the time interval [t_j, t_j + p_{j,d}). A usual assumption for this kind of problem is that the processing time p_{j,d} is non-increasing, while the work of a job, which is defined as d · p_{j,d}, is non-decreasing for an increasing number of machines.
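Under the representation assumed here (a moldable job as a mapping from feasible machine counts to processing times, which is not prescribed by the sources), the monotony assumption can be checked directly:

```python
def satisfies_monotony(times_by_machines):
    """times_by_machines: dict {d: p_d} over the feasible machine counts
    of one moldable job. Checks the usual monotony assumption: the
    processing time p_d is non-increasing in d, while the work d * p_d
    is non-decreasing in d."""
    ds = sorted(times_by_machines)
    for d1, d2 in zip(ds, ds[1:]):
        p1, p2 = times_by_machines[d1], times_by_machines[d2]
        if p2 > p1:            # processing time must not grow with d
            return False
        if d2 * p2 < d1 * p1:  # work must not shrink with d
            return False
    return True
```

For instance, {1: 10, 2: 6, 4: 4} satisfies the assumption (work 10, 12, 16), whereas {1: 10, 2: 4} violates it because the work drops from 10 to 8, which would mean superlinear speedup.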

Multiple platforms: In this variant, the set of machines is partitioned into independent platforms. A scheduled job can only use the machines of one platform and is not allowed to span over multiple platforms when processed.

Algorithms

The list scheduling algorithm by Garey and Graham[9] has an absolute ratio of 2, as pointed out by Turek et al.[10] and Ludwig and Tiwari.[11] Feldmann, Sgall and Teng[12] observed that the length of a non-preemptive schedule produced by the list scheduling algorithm is actually at most (2 − 1/m) times the optimum preemptive makespan. A polynomial-time approximation scheme (PTAS) for the case when the number of processors is constant, denoted by Pm|size_j|C_max, was presented by Amoura et al.[13] and Jansen et al.[14] Later, Jansen and Thöle[6] found a PTAS for the case where the number of processors is polynomially bounded in the number of jobs. In this algorithm, the number of machines appears polynomially in the time complexity of the algorithm. Since, in general, the number of machines appears only logarithmically in the size of the instance, this algorithm is a pseudo-polynomial time approximation scheme as well. A (3/2 + ε)-approximation was given by Jansen,[15] which closes the gap to the lower bound of 3/2 except for an arbitrarily small ε.
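A minimal sketch of greedy list scheduling for rigid parallel tasks follows. This is one concrete variant under assumptions of this sketch, not necessarily the exact procedure of Garey and Graham: jobs are considered in list order, and whenever machines free up, any not-yet-started job whose machine requirement fits the currently free machines is started.

```python
import heapq

def list_schedule(jobs, m):
    """jobs: list of (processing_time, size) pairs in list order, with
    every size at most m. Returns (makespan, start_times)."""
    assert all(q <= m for _, q in jobs)
    free = m
    t = 0
    starts = [None] * len(jobs)
    running = []  # min-heap of (finish_time, size)
    pending = list(range(len(jobs)))
    while pending or running:
        # Start every pending job (scanning in list order) that fits now.
        progressed = True
        while progressed:
            progressed = False
            for j in pending:
                p, q = jobs[j]
                if q <= free:
                    starts[j] = t
                    free -= q
                    heapq.heappush(running, (t + p, q))
                    pending.remove(j)
                    progressed = True
                    break
        # Advance to the next completion and release its machines.
        if running:
            t, q = heapq.heappop(running)
            free += q
    return t, starts
```

On jobs [(2, 2), (3, 1), (2, 3)] with m = 3, the first two jobs start at time 0; the third needs all three machines, so it waits until time 3, giving makespan 5.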

Differences between contiguous and non-contiguous jobs

Given an instance of the parallel task scheduling problem, the optimal makespan can differ depending on whether the machines assigned to each job are required to be contiguous. If the jobs can be scheduled on non-contiguous machines, the optimal makespan can be smaller than in the case that they have to be scheduled on contiguous ones. This difference was first demonstrated in 1992[16] on a concrete instance. Błądek et al.[17] studied these so-called c/nc-differences and proved the following points:

  • For a c/nc-difference to arise, there must be at least three tasks with p_j ≥ 2.
  • For a c/nc-difference to arise, there must be at least three tasks with q_j ≥ 2.
  • For a c/nc-difference to arise, at least m ≥ 4 processors are required (and there exists an instance with a c/nc-difference with m = 4).
  • For a c/nc-difference to arise, the non-contiguous schedule length must be at least
  • The maximal c/nc-difference, i.e., the ratio of the optimal contiguous to the optimal non-contiguous makespan, is at least 5/4 and at most 2.
  • Deciding whether there is a c/nc-difference in a given instance is NP-complete.

Furthermore, they proposed conjectures that remain unproven, among them:

  • For a c/nc-difference to arise, at least n ≥ 7 tasks are required.
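The contiguity constraint that separates the two schedule classes can be stated programmatically (an illustrative helper, assuming each job's machines are given as a set of indices): a schedule is contiguous when every job's machine set forms an interval of consecutive machines.

```python
def is_contiguous(assignment):
    """assignment: list of sets of machine indices, one per job.
    A set of indices is an interval exactly when its span
    (max - min + 1) equals its cardinality."""
    return all(max(s) - min(s) + 1 == len(s) for s in assignment)
```

For example, the assignment [{0, 1}, {3, 4, 5}] is contiguous, while [{0, 2}, {1}] is not, because the first job's machines 0 and 2 are separated by machine 1.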

References

  1. Veltman, B; Lageweg, B. J; Lenstra, J. K (1990-12-01). "Multiprocessor scheduling with communication delays". Parallel Computing. 16 (2): 173–182. doi:10.1016/0167-8191(90)90056-F. ISSN 0167-8191.
  2. Drozdowski, Maciej (2009). "Scheduling for Parallel Processing". Computer Communications and Networks. doi:10.1007/978-1-84882-310-5. ISBN 978-1-84882-309-9. ISSN 1617-7975.
  3. Graham, R. L.; Lawler, E. L.; Lenstra, J.K.; Rinnooy Kan, A.H.G. (1979). "Optimization and Approximation in Deterministic Sequencing and Scheduling: a Survey" (PDF). Proceedings of the Advanced Research Institute on Discrete Optimization and Systems Applications of the Systems Science Panel of NATO and of the Discrete Optimization Symposium. Elsevier. pp. (5) 287–326.
  4. Codd, E. F. (1960-06-01). "Multiprogram scheduling". Communications of the ACM. 3 (6): 347–350. doi:10.1145/367297.367317. S2CID 14701351.
  5. Johannes, Berit (2006-10-01). "Scheduling parallel jobs to minimize the makespan". Journal of Scheduling. 9 (5): 433–452. doi:10.1007/s10951-006-8497-6. hdl:20.500.11850/36804. ISSN 1099-1425. S2CID 18819458.
  6. Jansen, Klaus.; Thöle, Ralf. (2010-01-01). "Approximation Algorithms for Scheduling Parallel Jobs". SIAM Journal on Computing. 39 (8): 3571–3615. doi:10.1137/080736491. ISSN 0097-5397.
  7. Du, Jianzhong.; Leung, Joseph Y.-T. (1 November 1989). "Complexity of Scheduling Parallel Task Systems". SIAM Journal on Discrete Mathematics. 2 (4): 473–487. doi:10.1137/0402042. ISSN 0895-4801.
  8. Henning, Sören; Jansen, Klaus; Rau, Malin; Schmarje, Lars (1 January 2020). "Complexity and Inapproximability Results for Parallel Task Scheduling and Strip Packing". Theory of Computing Systems. 64 (1): 120–140. arXiv:1705.04587. doi:10.1007/s00224-019-09910-6. ISSN 1433-0490. S2CID 67168004.
  9. Garey, M. R.; Graham, R. L. (1 June 1975). "Bounds for Multiprocessor Scheduling with Resource Constraints". SIAM Journal on Computing. 4 (2): 187–200. doi:10.1137/0204015. ISSN 0097-5397.
  10. Turek, John; Wolf, Joel L.; Yu, Philip S. (1992). "Approximate algorithms scheduling parallelizable tasks". Proceedings of the Fourth Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA '92). doi:10.1145/140901.141909. S2CID 15607549.
  11. Ludwig, Walter; Tiwari, Prasoon (1994). "Scheduling malleable and nonmalleable parallel tasks". Proceedings of the Fifth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '94): 167–176.
  12. Feldmann, Anja; Sgall, Jiří; Teng, Shang-Hua (1 August 1994). "Dynamic scheduling on parallel machines". Theoretical Computer Science. 130 (1): 49–72. doi:10.1016/0304-3975(94)90152-X. ISSN 0304-3975.
  13. Amoura, Abdel Krim; Bampis, Evripidis; Kenyon, Claire; Manoussakis, Yannis (1 February 2002). "Scheduling Independent Multiprocessor Tasks". Algorithmica. 32 (2): 247–261. doi:10.1007/s00453-001-0076-9. ISSN 1432-0541. S2CID 17256951.
  14. Jansen, Klaus; Porkolab, Lorant (1 March 2002). "Linear-Time Approximation Schemes for Scheduling Malleable Parallel Tasks". Algorithmica. 32 (3): 507–520. doi:10.1007/s00453-001-0085-8. hdl:11858/00-001M-0000-0014-7B6C-D. ISSN 1432-0541. S2CID 2019475.
  15. Jansen, Klaus (2012). "A (3/2+ε) approximation algorithm for scheduling moldable and non-moldable parallel tasks". Proceedings of the 24th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA '12). doi:10.1145/2312005.2312048. S2CID 6586439.
  16. Turek, John; Wolf, Joel L.; Yu, Philip S. (1992). "Approximate algorithms scheduling parallelizable tasks". Proceedings of the Fourth Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA '92). doi:10.1145/140901.141909. S2CID 15607549.
  17. Błądek, Iwo; Drozdowski, Maciej; Guinand, Frédéric; Schepler, Xavier (1 October 2015). "On contiguous and non-contiguous parallel task scheduling". Journal of Scheduling. 18 (5): 487–495. doi:10.1007/s10951-015-0427-z. ISSN 1099-1425.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.