- The paper presents a randomized (4/3+ε)-approximation algorithm that overcomes previous e/(e-1) integrality barriers in non-preemptive scheduling.
- It employs novel block and superblock decompositions with a configuration LP to optimize job assignments and ensure scheduling feasibility.
- The techniques extend efficiently to multiple machines and pseudo-polynomial regimes, offering robust trade-offs between performance and computation.
Improved Approximation Algorithms for Non-Preemptive Throughput Maximization
Problem Overview
The Non-Preemptive Throughput Maximization problem (also known as Job Interval Scheduling) asks to schedule the maximum number of jobs on one or more identical machines, such that each job is executed non-preemptively within its time window and no two jobs overlap on the same machine. The problem is strongly NP-hard even in restricted variants. Historically, algorithmic progress has come in small increments: the best previously known polynomial-time approximation ratio is approximately $1.551$ [Im, Li, Moseley 2017], a slight improvement on the earlier e/(e−1)≈1.582 barrier [Chuzhoy, Ostrovsky, Rabani 2001].
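To make the problem statement concrete, here is a minimal baseline heuristic, not the paper's algorithm: scan jobs in deadline order and schedule each as early as possible on a single machine. The job tuples (release, deadline, processing time) are hypothetical toy data.

```python
def greedy_throughput(jobs):
    """jobs: list of (release, deadline, processing). Returns the number
    of jobs scheduled by a simple earliest-deadline-first greedy."""
    t = 0  # time at which the machine becomes free
    scheduled = 0
    for r, d, p in sorted(jobs, key=lambda j: j[1]):  # deadline order
        start = max(t, r)
        if start + p <= d:       # job fits entirely inside its window
            t = start + p
            scheduled += 1
    return scheduled

jobs = [(0, 4, 2), (1, 3, 2), (0, 10, 5), (2, 6, 1)]
print(greedy_throughput(jobs))  # schedules 3 of the 4 jobs
```

This greedy can be far from optimal in general; the point of the paper is to get provable constant-factor guarantees well below e/(e−1).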
Main Contributions
The paper introduces substantial algorithmic advances, closing the approximation gap significantly:
- A randomized polynomial-time (4/3+ε)-approximation algorithm for any fixed ε>0.
- A randomized (5/4+ε)-approximation in pseudo-polynomial time (nT)^{O(1)}, leveraging the problem's discrete structure.
- Both techniques generalize seamlessly to m identical machines, providing matching guarantees independent of the machine count in the polynomial regime and optimal up to resource-augmentation barriers.
These are the first results to break the e/(e−1)-type integrality barrier without resource augmentation or special assumptions.
Algorithmic Framework
Partitioning via Block and Superblock Decompositions
A fundamental mechanism is the partition of the time horizon [0,T) into blocks and superblocks. Such decompositions allow for localizing the scheduling decision—reducing global interference and enabling LP formulations with polynomially many configuration variables:
- Blocks are intervals where a bounded (constant or logarithmic, depending on the algorithm) number of jobs from a near-optimal solution are scheduled.
- Superblocks aggregate consecutive blocks, ensuring each job is either scheduled in a boundary block or in a block contained entirely within a superblock it spans.
This partitioning is combined with a standard shifting argument, which guarantees that some near-optimal solution aligns with the decomposition at the cost of only a small ε-fraction of the jobs.
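The block/superblock partition can be sketched as follows. The block length B, superblock size S, and random shift are hypothetical parameters; the paper's shifting argument chooses them to align a near-optimal solution with the partition.

```python
def partition(T, B, S, shift=0):
    """Partition [0, T) into blocks of length <= B (offset by `shift`),
    then group every S consecutive blocks into a superblock."""
    edges = [0] + list(range(shift % B or B, T, B)) + [T]
    edges = sorted(set(e for e in edges if 0 <= e <= T))
    blocks = list(zip(edges, edges[1:]))                  # [lo, hi) pairs
    superblocks = [blocks[i:i + S] for i in range(0, len(blocks), S)]
    return blocks, superblocks

blocks, supers = partition(T=20, B=4, S=2, shift=1)
print(blocks)  # [(0, 1), (1, 5), (5, 9), (9, 13), (13, 17), (17, 20)]
```

With this decomposition in hand, each scheduling decision can be localized to a block, and jobs crossing superblock boundaries are handled separately.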
Configuration LP and Tight Rounding
Both core algorithms hinge on a configuration LP with one variable, per block, for each subcollection of jobs that fits feasibly within that block. This LP fractionally covers all schedules aligned with the partition, losing only a small ε-fraction relative to a near-optimal solution.
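The configuration variables can be illustrated by brute-force enumeration of feasible job subsets for one block. This sketch uses a greedy deadline-order placement as a sufficient (not exact) feasibility check, since exact non-preemptive sequencing with release times and deadlines is itself NP-hard; the job data and the size cap are hypothetical. The paper keeps the configuration count polynomial by bounding how many jobs a block may contain.

```python
from itertools import combinations

def fits(block, subset):
    """Sufficient check: place jobs of `subset` back to back inside
    `block` in deadline order, each as early as its release allows."""
    lo, hi = block
    t = lo
    for r, d, p in sorted(subset, key=lambda j: j[1]):
        start = max(t, r, lo)
        if start + p > min(d, hi):
            return False
        t = start + p
    return True

def configurations(block, jobs, max_size):
    """All job subsets of size <= max_size that fit in the block."""
    return [c for k in range(max_size + 1)
            for c in combinations(jobs, k) if fits(block, c)]

jobs = [(0, 4, 2), (1, 6, 2), (0, 8, 3)]   # (release, deadline, processing)
confs = configurations((0, 8), jobs, max_size=2)
print(len(confs))
```

In the LP, each such configuration gets a fractional variable, with constraints that each block picks one configuration and each job is covered at most once.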
(4/3+ε)-Approximation in Polynomial Time
- Sampling: For each block, independently sample a configuration according to the fractional LP solution.
- Assignment via Bipartite Matching: Instead of a direct assignment, the method forms slots (time intervals) for each sampled job, and matches jobs (potentially different from the configuration choice) that fit into the slots using bipartite graph matching. In contrast to prior work—where jobs replicated across blocks were "lost"—this assignment captures these opportunities via matching.
- Analysis: A careful analysis shows that the expected matching size captures a sufficiently large constant fraction of the LP value for global jobs (those spanning superblocks), and nearly all of the mass for local jobs (fully contained in a block). Harmonic grouping and concentration arguments ensure that the randomly instantiated slot-job assignments closely track their expected LP values even under dependencies, via tail bounds for read-k families [gavinsky2015tail].
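The sampling-and-matching rounding above can be sketched as follows: sample one configuration per block from the LP marginals, treat each sampled job as a time slot, and then compute a maximum bipartite matching between actual jobs and the slots they fit into (so a job can claim a slot created for another copy of it in a different block). The LP distribution and slot-fitting data below are hypothetical toy values, and the matching uses a simple augmenting-path routine rather than an optimized algorithm.

```python
import random

def sample_configs(lp, rng):
    """lp: {block: [(config, fraction), ...]}, fractions summing to 1.
    Independently sample one configuration per block."""
    chosen = {}
    for block, dist in lp.items():
        x = rng.random()
        for config, frac in dist:
            x -= frac
            if x <= 0:
                chosen[block] = config
                break
    return chosen

def max_matching(edges, n_jobs, n_slots):
    """edges[j] = slots job j fits into; returns max matching size."""
    match = [-1] * n_slots            # slot -> matched job (or -1)
    def augment(j, seen):
        for s in edges[j]:
            if s not in seen:
                seen.add(s)
                if match[s] == -1 or augment(match[s], seen):
                    match[s] = j
                    return True
        return False
    return sum(augment(j, set()) for j in range(n_jobs))

lp = {"B0": [(("a",), 0.6), ((), 0.4)]}
chosen = sample_configs(lp, random.Random(0))

edges = [[0, 1], [0], [1, 2]]        # which slots each job fits into
print(max_matching(edges, n_jobs=3, n_slots=3))  # all 3 jobs placed
```

The matching step is exactly where the method recaptures jobs that earlier sampling-only analyses would have discarded as duplicates.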
(5/4+ε)-Approximation in Pseudo-Polynomial Time
- Block Size Scaling: Blocks are made larger (affordable in the pseudo-polynomial regime), which permits more flexible job assignment and sharper concentration.
- Alternative Randomized Rounding: Assign each job independently to a block according to marginal LP probabilities, with alterations (removing sufficiently long jobs or resolving conflicts by discarding a vanishing fraction) so that assignment feasibility holds with high probability.
- Ellipsoid and Color Coding: Due to the increased configuration count, the ellipsoid method with a color coding-based separation oracle is applied for efficient LP solution, making use of the small support and dynamic programming under color constraints.
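The alternative rounding in the steps above can be sketched as independent assignment followed by an alteration step. The marginals and the per-block capacity below are hypothetical; the paper's alteration is more careful (it also removes overly long jobs before resolving conflicts), and only a vanishing fraction of jobs is discarded with high probability.

```python
import random

def round_independently(marginals, capacity, rng):
    """marginals[j] = {block: prob} (summing to at most 1 per job).
    Assign each job to a block independently, then discard overflow
    beyond `capacity` jobs per block."""
    assigned = {}
    for job, dist in marginals.items():
        x, choice = rng.random(), None
        for block, p in dist.items():
            x -= p
            if x <= 0:
                choice = block
                break
        if choice is not None:          # a job may remain unassigned
            assigned.setdefault(choice, []).append(job)
    # alteration: keep only `capacity` jobs per block, drop the rest
    return {b: js[:capacity] for b, js in assigned.items()}

rng = random.Random(7)
marginals = {f"j{i}": {"B0": 0.5, "B1": 0.5} for i in range(6)}
result = round_independently(marginals, capacity=2, rng=rng)
print(sum(len(js) for js in result.values()))
```

The larger blocks of the pseudo-polynomial regime are what make the discarded overflow a vanishing fraction of the expectation.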
Multiple Machines Extension
The combinatorial and LP tools lift naturally to m identical machines by encoding machine assignment into the configuration space. Complexity remains polynomial as long as m is a constant; for arbitrary m, combining with the machinery of [im2020breaking] yields polynomial-time (4/3+ε)- and pseudo-polynomial-time (5/4+ε)-approximation algorithms.
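The lift to m machines can be sketched as taking, per block, an m-tuple of single-machine configurations, one per machine, and discarding tuples that schedule the same job twice. `single_machine_configs` is assumed to come from the single-machine construction; the toy data is hypothetical. For constant m the configuration count grows only polynomially.

```python
from itertools import product

def multi_machine_configs(single_machine_configs, m):
    """Cartesian product of single-machine configurations over m
    machines, keeping only tuples where no job appears twice."""
    out = []
    for combo in product(single_machine_configs, repeat=m):
        jobs = [j for conf in combo for j in conf]
        if len(jobs) == len(set(jobs)):   # no job on two machines
            out.append(combo)
    return out

single = [(), ("a",), ("b",)]             # toy single-machine configs
print(len(multi_machine_configs(single, m=2)))  # 9 tuples, 2 rejected
```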
Technical Insights and Structural Innovations
Key algorithmic and analysis techniques:
- Harmonic Grouping: Jobs are grouped by processing time such that each group contains a nearly equal fractional LP mass, ensuring that rounding errors do not concentrate in any single group.
- Advanced Concentration for Dependent Random Variables: Limiting dependencies to a bounded read-k structure enables the use of nontrivial concentration inequalities; this is critical for the expected performance guarantee.
- Matching Structure: By modeling the assignment of jobs to slots as a matching problem, the loss from slot over-allocation is recaptured, improving on the (1−1/e)-fraction classically recovered by independent randomized rounding.
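Harmonic grouping, the first technique above, can be sketched as cutting the jobs, sorted by processing time, into groups of (nearly) equal fractional LP mass, so that no single group absorbs a disproportionate share of the rounding error. The LP mass values below are hypothetical.

```python
def harmonic_groups(jobs_with_mass, num_groups):
    """jobs_with_mass: list of (processing_time, lp_mass).
    Returns num_groups groups, contiguous in processing time, each
    carrying roughly total_mass / num_groups of the LP mass."""
    jobs = sorted(jobs_with_mass)            # ascending processing time
    target = sum(m for _, m in jobs) / num_groups
    groups, cur, cur_mass = [], [], 0.0
    for job in jobs:
        cur.append(job)
        cur_mass += job[1]
        if cur_mass >= target and len(groups) < num_groups - 1:
            groups.append(cur)               # close this group
            cur, cur_mass = [], 0.0
    groups.append(cur)                       # last group takes the rest
    return groups

jobs = [(1, 0.5), (2, 0.5), (3, 0.5), (5, 0.5), (8, 0.5), (13, 0.5)]
groups = harmonic_groups(jobs, num_groups=3)
print([len(g) for g in groups])  # [2, 2, 2]
```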
Numerical Guarantees and Comparison
The polynomial-time algorithm achieves a (4/3+ε)-approximation, breaking the long-standing e/(e−1)≈1.582 barrier; the (5/4+ε) factor in pseudo-polynomial time is tighter still and aligns with the best-known approaches for related scheduling and packing problems. These factors match or approach those known for weaker or special-case variants, showing that the unweighted, single-resource, non-preemptive problem admits near-optimal approximation on general instances.
Theoretical and Practical Implications
The findings clarify the true approximability landscape of non-preemptive throughput maximization, suggesting that, contrary to previously slow progress, PTAS-level results may be achievable with further combinatorial innovations. The block/superblock and configuration frameworks, along with harmonic analysis, potentially extend to more general scheduling and packing settings, including weighted and multiple-resource (e.g., time window with demands) generalizations.
Practically, for large-scale interval scheduling in manufacturing, mission planning, and computational resource allocation where non-preemptive constraints dominate, these algorithms yield robust guarantees with feasible computational demands in both polynomial and pseudo-polynomial regimes.
Future Directions
Follow-up work may focus on:
- Extension to the weighted case, where job profits are non-uniform. This setting is substantially more challenging.
- Closing the gap to a true PTAS, or proving APX-hardness, especially for settings with more general constraints.
- Generalizing the concentration analysis and grouping techniques for other combinatorial and geometric constraint satisfaction problems.
Conclusion
This work substantially refines the approximability of non-preemptive throughput maximization, delivering the first polynomial-time algorithm to surpass the e/(e−1) barrier, with an even stronger guarantee in pseudo-polynomial time. The methodologies (block/superblock partitioning, configuration LPs, harmonic group rounding, and concentration analysis for dependent rounding) constitute significant structural contributions, likely to influence the design and analysis of complex scheduling algorithms broadly.
Reference:
"Improved Approximation Algorithms for Non-Preemptive Throughput Maximization" (2603.29451)