
Learning policies for resource allocation in business processes

Published 19 Apr 2023 in cs.AI (arXiv:2304.09970v3)

Abstract: Efficient allocation of resources to activities is pivotal in executing business processes but remains challenging. While resource allocation methodologies are well-established in domains like manufacturing, their application within business process management remains limited. Existing methods often do not scale well to large processes with numerous activities, and do not optimize across multiple cases. This paper aims to address this gap by proposing two learning-based methods for resource allocation in business processes that minimize the average cycle time of cases. The first method leverages Deep Reinforcement Learning (DRL) to learn policies that allocate resources to activities. The second method is a score-based value function approximation approach, which learns the weights of a set of curated features to prioritize resource assignments. We evaluated the proposed approaches on six distinct business processes with archetypal process flows, referred to as scenarios, and three realistically sized business processes, referred to as composite business processes, which combine the scenarios. We benchmarked our methods against traditional heuristics and existing resource allocation methods. The results show that our methods learn adaptive resource allocation policies that outperform or are competitive with the benchmarks in five out of six scenarios. The DRL approach outperforms all benchmarks in all three composite business processes and finds a policy that is, on average, 12.7% better than the best-performing benchmark.
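The second method described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it scores each feasible (resource, activity) assignment as a weighted sum of curated features and executes the highest-scoring one first. The feature names and weight values below are illustrative assumptions, not taken from the paper.

```python
# Score-based value function approximation (illustrative sketch).
# Each candidate assignment carries a dict of hand-curated features;
# a linear model with learned weights ranks the candidates.

def score(features, weights):
    """Linear value approximation: dot product of features and learned weights."""
    return sum(weights[name] * value for name, value in features.items())

def choose_assignment(candidates, weights):
    """Return the feasible (resource, activity) pair with the highest score."""
    return max(candidates, key=lambda c: score(c["features"], weights))

# Hypothetical learned weights over hypothetical features.
weights = {
    "queue_length": -0.8,       # penalize assignments feeding long queues
    "expected_duration": -1.2,  # prefer fast resource-activity matches
    "waiting_time": 0.5,        # prioritize cases that have waited longest
}

candidates = [
    {"resource": "r1", "activity": "a1",
     "features": {"queue_length": 3, "expected_duration": 2.0, "waiting_time": 4.0}},
    {"resource": "r2", "activity": "a2",
     "features": {"queue_length": 1, "expected_duration": 5.0, "waiting_time": 1.0}},
]

best = choose_assignment(candidates, weights)
print(best["resource"], best["activity"])
```

In the paper's setting the weights would be learned from simulated process executions so as to minimize average cycle time; here they are fixed constants purely to make the ranking mechanics concrete.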
