
Improved Approximation Algorithms for Non-Preemptive Throughput Maximization

Published 31 Mar 2026 in cs.DS | (2603.29451v1)

Abstract: The (Non-Preemptive) Throughput Maximization problem is a natural and fundamental scheduling problem. We are given $n$ jobs, where each job $j$ is characterized by a processing time and a time window, contained in a global interval $[0,T)$, during which~$j$ can be scheduled. Our goal is to schedule the maximum possible number of jobs non-preemptively on a single machine, so that no two scheduled jobs are processed at the same time. This problem is known to be strongly NP-hard. The best-known approximation algorithm for it has an approximation ratio of $1/0.6448 + \varepsilon \approx 1.551 + \varepsilon$ [Im, Li, Moseley IPCO'17], improving on an earlier result in [Chuzhoy, Ostrovsky, Rabani FOCS'01]. In this paper we substantially improve the approximation factor for the problem to $4/3+\varepsilon$ for any constant~$\varepsilon>0$. Using pseudo-polynomial time $(nT)^{O(1)}$, we improve the factor even further to $5/4+\varepsilon$. Our results extend to the setting in which we are given an arbitrary number of (identical) machines.

Summary

  • The paper presents a randomized (4/3+ε)-approximation algorithm that overcomes previous e/(e-1) integrality barriers in non-preemptive scheduling.
  • It employs novel block and superblock decompositions with a configuration LP to optimize job assignments and ensure scheduling feasibility.
  • The techniques extend efficiently to multiple machines and pseudo-polynomial regimes, offering robust trade-offs between performance and computation.

Improved Approximation Algorithms for Non-Preemptive Throughput Maximization

Problem Overview

The Non-Preemptive Throughput Maximization problem (also known as Job Interval Scheduling) asks to schedule the maximum number of jobs on one or more identical machines, such that each job is executed non-preemptively within its time window and no two jobs overlap on the same machine. The problem is strongly NP-hard, even in restricted variants. Historically, algorithmic progress has come in iterative improvements, with the best-known polynomial-time approximation ratio at approximately $1.551$ [Im, Li, Moseley 2017], improving on the earlier $e/(e-1) \approx 1.582$ bound of [Chuzhoy, Ostrovsky, Rabani 2001].
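For intuition about the input format, a naive baseline (not the paper's algorithm, and carrying no approximation guarantee in general) is an earliest-deadline greedy that packs each job as early as its release time and the machine allow:

```python
# A minimal greedy baseline for single-machine throughput maximization.
# Each job is (p, r, d): processing time p, window [r, d); a job fits
# only if it can start at some t >= r with t + p <= d.

def greedy_throughput(jobs):
    """jobs: list of (p, r, d). Returns the number of jobs scheduled."""
    scheduled = 0
    free_at = 0  # time at which the machine becomes idle
    for p, r, d in sorted(jobs, key=lambda j: j[2]):  # earliest deadline first
        start = max(free_at, r)
        if start + p <= d:       # job fits entirely inside its window
            free_at = start + p
            scheduled += 1
    return scheduled
```

For example, three identical jobs of length 2 with window $[0,4)$ yield a throughput of 2, since only two of them fit back to back.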

Main Contributions

The paper introduces substantial algorithmic advances, closing the approximation gap significantly:

  • A randomized polynomial-time $(4/3+\varepsilon)$-approximation algorithm for any fixed $\varepsilon > 0$.
  • A randomized $(5/4+\varepsilon)$-approximation in pseudo-polynomial time $(nT)^{O(1)}$, leveraging the problem's discrete structure.
  • Both techniques generalize seamlessly to $m$ identical machines, providing matching guarantees independent of the machine count in the polynomial regime and optimal up to resource-augmentation barriers.

These results constitute the first breach of the $e/(e-1)$-type integrality barrier for this problem without resource augmentation or special assumptions.

Algorithmic Framework

Partitioning via Block and Superblock Decompositions

A fundamental mechanism is the partition of the time horizon $[0, T)$ into blocks and superblocks. Such decompositions localize the scheduling decisions, reducing global interference and enabling LP formulations with polynomially many configuration variables:

  • Blocks are intervals where a bounded (constant or logarithmic, depending on the algorithm) number of jobs from a near-optimal solution are scheduled.
  • Superblocks aggregate consecutive blocks, ensuring each job is either scheduled in a boundary block or in a block contained entirely within a superblock it spans.

This partitioning aligns with standard shifting techniques, which guarantee a near-optimal alignment between the structure of the solution and the decomposition.
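The shifted partition idea can be sketched as follows; the block length `B` and the uniform random offset are illustrative parameters, not the paper's exact choices:

```python
import random

# Shifted block partition of [0, T): cut at a uniformly random offset in
# [0, B) and then every B units. Any fixed short interval crosses a cut
# with probability proportional to its length / B, which is what shifting
# arguments exploit.

def shifted_blocks(T, B, rng=random):
    offset = rng.randrange(B)                 # uniform random shift
    cuts = sorted(set([0] + list(range(offset, T, B)) + [T]))
    return list(zip(cuts, cuts[1:]))          # half-open blocks [a, b)
```

The blocks returned always tile $[0, T)$ exactly, and each has length at most `B`.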

Configuration LP and Tight Rounding

Both core algorithms hinge on a configuration LP that uses, for each block, all sub-collections of jobs that fit feasibly within the block. This LP admits integral solutions covering all near-optimal schedules (up to small $\varepsilon$ losses) aligned with the partition.
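Schematically, a configuration LP of this flavor (in our notation, not necessarily the paper's exact formulation) maximizes the number of covered jobs, subject to picking at most one configuration per block and using each job at most once:

```latex
\begin{aligned}
\max \quad & \sum_{B} \sum_{C \in \mathcal{C}(B)} |C|\, x_{B,C} \\
\text{s.t.} \quad & \sum_{C \in \mathcal{C}(B)} x_{B,C} \le 1 && \forall \text{ blocks } B \\
& \sum_{B} \sum_{C \in \mathcal{C}(B) :\, j \in C} x_{B,C} \le 1 && \forall \text{ jobs } j \\
& x_{B,C} \ge 0
\end{aligned}
```

Here $\mathcal{C}(B)$ denotes the feasible job sub-collections for block $B$, and $x_{B,C}$ is the fractional extent to which configuration $C$ is used in $B$.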

$(4/3+\varepsilon)$-Approximation in Polynomial Time

  • Sampling: For each block, independently sample a configuration according to the fractional LP solution.
  • Assignment via Bipartite Matching: Instead of a direct assignment, the method forms slots (time intervals) from each sampled configuration, and matches jobs (potentially different from the sampled ones) that fit into those slots via bipartite matching. In contrast to prior work, where jobs replicated across blocks were "lost," the matching recaptures these opportunities.
  • Analysis: A nuanced analysis shows that the expected matching size is at least a $3/4$ fraction of the LP value for global jobs (those spanning superblocks), and captures nearly all of the mass for local jobs (fully contained in a block). Harmonic grouping and concentration techniques ensure that the randomly instantiated slot-job assignments closely track the expected LP values, even in the presence of dependencies, via tail bounds for read-$k$ families [gavinsky2015tail].
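A minimal sketch of the sample-then-match step, with the LP distribution, slot formation, and matching all heavily simplified (names and data layout are ours, not the paper's):

```python
import random

# Sample one configuration per block from the fractional LP solution,
# turn sampled jobs into slots, then match actual jobs to slots they fit
# in, via a simple augmenting-path bipartite matching (Kuhn's algorithm).

def sample_configs(lp_sol, rng=random):
    """lp_sol: {block: [(config, prob), ...]}, per-block probs sum to <= 1.
    Returns a list of (block, slot_length) slots."""
    slots = []
    for block, dist in lp_sol.items():
        r, acc = rng.random(), 0.0
        for config, prob in dist:
            acc += prob
            if r <= acc:                     # this configuration is sampled
                slots.extend((block, length) for length in config)
                break                        # leftover mass: no config picked
    return slots

def max_matching(fits, n_jobs, n_slots):
    """fits[j]: slot indices job j fits in. Returns the matching size."""
    match = [-1] * n_slots                   # match[s] = job using slot s
    def augment(j, seen):
        for s in fits[j]:
            if s not in seen:
                seen.add(s)
                if match[s] == -1 or augment(match[s], seen):
                    match[s] = j
                    return True
        return False
    return sum(augment(j, set()) for j in range(n_jobs))
```

For instance, with two slots and three jobs where job 0 fits both slots and jobs 1, 2 fit one slot each, the matching has size 2.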

$(5/4+\varepsilon)$-Approximation in Pseudo-Polynomial Time

  • Block Size Scaling: Increase the block size (affordable within the pseudo-polynomial $(nT)^{O(1)}$ budget), which permits more flexible job assignment and stronger concentration.
  • Alternative Randomized Rounding: Assign each job independently to a block according to marginal LP probabilities, with alterations (removing sufficiently long jobs or resolving conflicts by discarding a vanishing fraction) so that assignment feasibility holds with high probability.
  • Ellipsoid and Color Coding: Due to the increased configuration count, the LP is solved via the ellipsoid method with a color-coding-based separation oracle, making use of the small support and dynamic programming under color constraints.
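The independent rounding with conflict resolution can be sketched as follows; the capacity model and the shortest-first discarding rule are our simplifications for illustration:

```python
import random

# Each job independently picks a block according to its marginal LP
# probabilities; per block, jobs are then kept greedily (shortest first)
# while they fit the block's time budget, discarding the overflow.

def round_by_marginals(jobs, capacity, rng=random):
    """jobs: {j: (p, {block: prob})}, per-job probs sum to <= 1;
    capacity: {block: time budget}. Returns {block: [kept job ids]}."""
    assigned = {b: [] for b in capacity}
    for j, (p, dist) in jobs.items():
        r, acc = rng.random(), 0.0
        for b, prob in dist.items():
            acc += prob
            if r <= acc:                    # job j lands in block b
                assigned[b].append((p, j))
                break                       # leftover mass: job is dropped
    kept = {}
    for b, lst in assigned.items():
        lst.sort()                          # resolve conflicts: shortest first
        load, keep = 0, []
        for p, j in lst:
            if load + p <= capacity[b]:
                load += p
                keep.append(j)
        kept[b] = keep
    return kept
```

With enlarged blocks, the analysis argues that only a vanishing fraction of jobs is discarded in this alteration step, with high probability.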

Multiple Machines Extension

The combinatorial and LP tools lift naturally to $m$ machines by encoding the machine assignment into the configuration space. The running time remains polynomial as long as $m$ is a constant; for arbitrary $m$, combining with techniques from [im2020breaking] yields polynomial-time $(4/3+\varepsilon)$- and pseudo-polynomial-time $(5/4+\varepsilon)$-approximation algorithms.

Technical Insights and Structural Innovations

Key algorithmic and analysis techniques:

  • Harmonic Grouping: Jobs are grouped by processing time such that each group contains a nearly equal fractional LP mass, ensuring that rounding errors do not concentrate in any single group.
  • Advanced Concentration for Dependent Random Variables: Limiting dependencies to a bounded read-$k$ structure enables the use of nontrivial concentration inequalities; this is critical for the expected performance guarantee.
  • Matching Structure: By modeling the assignment of jobs to slots as a matching problem, the loss from slot over-allocation is recaptured, improving on the $(1-1/e)$ fraction retained by classic independent LP rounding.
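Harmonic grouping by roughly equal LP mass might be sketched as follows; the group count `k` and the cutting rule are our illustrative choices:

```python
# Sort jobs by processing time and cut them into k consecutive groups,
# each carrying roughly a 1/k share of the total fractional LP mass, so
# rounding errors cannot concentrate in any single group.

def harmonic_groups(jobs, k):
    """jobs: list of (processing_time, lp_mass). Returns k groups."""
    jobs = sorted(jobs)                       # group by processing time
    target = sum(m for _, m in jobs) / k      # equal mass share per group
    groups, cur, mass = [], [], 0.0
    for p, m in jobs:
        cur.append((p, m))
        mass += m
        if mass >= target and len(groups) < k - 1:
            groups.append(cur)                # close this group
            cur, mass = [], 0.0
    groups.append(cur)                        # last group takes the rest
    return groups
```

For four unit-mass jobs and `k = 2`, the two shortest jobs form the first group and the two longest the second.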

Numerical Guarantees and Comparison

The polynomial-time algorithm achieves a strict $4/3+\varepsilon$ approximation, breaking the long-standing "natural" integrality barrier; the $5/4+\varepsilon$ factor in pseudo-polynomial time is tighter still, and aligns with the best-known approaches for higher-complexity scheduling and packing problems. These factors match those for weaker or special-case variants, showing that the problem, in the unweighted, single-resource, non-preemptive case, admits near-optimal approximation on general instances.

Theoretical and Practical Implications

The findings clarify the true approximability landscape of non-preemptive throughput maximization, suggesting that, contrary to previously slow progress, PTAS-level results may be achievable with further combinatorial innovations. The block/superblock and configuration frameworks, along with harmonic analysis, potentially extend to more general scheduling and packing settings, including weighted and multiple-resource (e.g., time window with demands) generalizations.

Practically, for large-scale interval scheduling in manufacturing, mission planning, and computational resource allocation where non-preemptive constraints dominate, these algorithms yield robust guarantees with feasible computational demands in both polynomial and pseudo-polynomial regimes.

Future Directions

Follow-up work may focus on:

  • Extension to the weighted case, where job profits are non-uniform. This setting is substantially more challenging.
  • Closing the gap to a true PTAS, or proving APX-hardness, especially for settings with more general constraints.
  • Generalizing the concentration analysis and grouping techniques for other combinatorial and geometric constraint satisfaction problems.

Conclusion

This work decisively refines the approximability of the non-preemptive throughput maximization problem, delivering the first polynomial-time algorithm to surpass the $e/(e-1)$-type integrality barrier and nearly matching lower bounds in pseudo-polynomial time. The methodologies (block/superblock partitioning, configuration LPs, harmonic group rounding, and advanced concentration analysis) constitute significant structural contributions, likely to influence the design and analysis of complex scheduling algorithms broadly.

Reference:

"Improved Approximation Algorithms for Non-Preemptive Throughput Maximization" (2603.29451)
