
MockLLM: A Multi-Agent Behavior Collaboration Framework for Online Job Seeking and Recruiting

Published 28 May 2024 in cs.CL and cs.AI | arXiv:2405.18113v2

Abstract: Online recruitment platforms have reshaped job-seeking and recruiting processes, driving increased demand for applications that enhance person-job matching. Traditional methods generally rely on analyzing textual data from resumes and job descriptions, limiting the dynamic, interactive aspects crucial to effective recruitment. Recent advances in LLMs have revealed remarkable potential in simulating adaptive, role-based dialogues, making them well-suited for recruitment scenarios. In this paper, we propose MockLLM, a novel framework to generate and evaluate mock interview interactions. The system consists of two key components: mock interview generation and two-sided evaluation in a handshake protocol. By simulating both interviewer and candidate roles, MockLLM enables consistent and collaborative interactions for real-time, two-sided matching. To further improve matching quality, MockLLM incorporates reflection memory generation and dynamic strategy modification, refining behaviors based on previous experience. We evaluate MockLLM on real-world data from Boss Zhipin, a major Chinese recruitment platform. The experimental results indicate that MockLLM outperforms existing methods in matching accuracy, scalability, and adaptability across job domains, highlighting its potential to advance candidate assessment and online recruitment.


Summary

  • The paper introduces MockLLM, a framework that enhances candidate-job matching by simulating realistic mock interviews using LLMs.
  • It employs a two-sided handshake protocol where both interviewer and candidate evaluate each other using dialogue history and traditional metrics.
  • The framework’s reflection memory mechanism refines prompts for future interactions, leading to improved precision, recall, and F1 scores in recruitment assessments.


Introduction

The paper introduces MockLLM, a framework designed to enhance online job seeking and recruiting processes through the use of LLMs as role-playing interviewers and candidates. MockLLM divides the person-job matching process into two modules: mock interview generation and two-sided evaluation in a handshake protocol. This method augments traditional person-job fitting by leveraging LLMs to simulate interview situations, providing additional data points for evaluating candidates.

Framework Overview

MockLLM is structured into three main components:

  1. Mock Interview Generation: This module employs LLMs to generate mock interviews through role-playing as both interviewers and candidates. The multi-turn conversational interactions serve as a supplementary data source, augmenting traditional evaluations based solely on resumes and job descriptions.

    Figure 1: The framework overview of MockLLM, highlighting the interaction flow among its modules.

  2. Two-Sided Evaluation in Handshake Protocol: Both parties in the interview evaluate each other using dialogue history, resumes, and job descriptions. The handshake protocol ensures mutual agreement before declaring a match, supporting a more robust person-job fit.
  3. Reflection Memory Generation: Successfully matched cases are stored to refine future interactions, enabling continuous prompt optimization.
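
Taken together, the three components form a single matching loop. The sketch below is illustrative only: `interview`, `evaluate`, and the 0.5 threshold are hypothetical stand-ins, not the paper's implementation.

```python
def handshake_match(resume, jd, interview, evaluate, memory, threshold=0.5):
    """Run one mock interview, score it from both sides, and declare a
    match only on mutual agreement (the handshake). Illustrative sketch."""
    dialogue = interview(resume, jd)                # 1. mock interview generation
    interviewer_score = evaluate("interviewer", resume, jd, dialogue)
    candidate_score = evaluate("candidate", resume, jd, dialogue)
    matched = (interviewer_score >= threshold and   # 2. two-sided evaluation:
               candidate_score >= threshold)        #    both parties must agree
    if matched:
        memory.append(dialogue)                     # 3. reflection memory
    return matched
```

The key property is that a one-sided positive score is never enough: a match requires both agents to accept, mirroring how real offers require acceptance by both employer and candidate.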

Implementation Details

Implementation of MockLLM involves the following key stages:

  • Role Initialization: LLMs are prompted to play roles of interviewers and candidates. Functions for interview question generation and response formulation are driven by module-specific prompts.

interviewer = f_role(job_description)
candidate = g_role(resume)
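
One way to realize `f_role` and `g_role` is as plain prompt templates; the wording below is a hypothetical sketch, not the paper's actual prompts.

```python
def f_role(job_description):
    """Build the interviewer's system prompt from a job description
    (hypothetical template wording)."""
    return ("You are an interviewer hiring for the position below. "
            "Ask questions that probe the required skills.\n\n" + job_description)

def g_role(resume):
    """Build the candidate's system prompt from a resume
    (hypothetical template wording)."""
    return ("You are a job candidate with the resume below. "
            "Answer interview questions concretely and truthfully.\n\n" + resume)
```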

  • Mock Interview Process: The interview is conducted through standardized multi-turn interactions, ensuring coherence and relevance to the candidate's resume and the job description.

question = f_ques(interview_history, resume, job_description, question_prompt)
answer = g_resp(updated_history, resume, job_description, answer_prompt)
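
The two calls above alternate inside a turn loop that grows a shared dialogue history. A minimal sketch, with the LLM-backed `f_ques`/`g_resp` replaced by injected callables:

```python
def mock_interview(resume, job_description, ask, respond, turns=3):
    """Alternate interviewer questions and candidate answers, feeding the
    accumulated history back into each call so turns stay coherent."""
    history = []
    for _ in range(turns):
        question = ask(history, resume, job_description)
        history.append(("interviewer", question))
        answer = respond(history, resume, job_description)
        history.append(("candidate", answer))
    return history
```

Passing the full history to both sides is what lets follow-up questions reference earlier answers.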

  • Evaluation Mechanism: Two-sided evaluation aggregates performance data from both descriptive content and dialogic history.

score_interviewer = f_eval(resume, job_description, interview_dialogue)
score_candidate = g_eval(resume, job_description, interview_dialogue)

  • Reflection Memory: Positive interactions are stored, dynamically modifying prompts to improve upcoming interview rounds.

prompt_mod = f_mod(existing_memory, new_data)
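
A minimal sketch of what `f_mod` might do: store the newly matched case and fold recent examples back into the questioning prompt. The storage format and prompt wording are assumptions, not the paper's method.

```python
def f_mod(existing_memory, new_data, keep=3):
    """Append a successful interaction to memory and rebuild the prompt
    suffix from the most recent stored examples (illustrative only)."""
    memory = existing_memory + [new_data]
    examples = "\n".join(f"- {item}" for item in memory[-keep:])
    prompt_mod = ("When asking questions, draw on these past successful "
                  "exchanges:\n" + examples)
    return memory, prompt_mod
```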

Performance and Evaluation

Extensive experiments highlight the superior performance of MockLLM. The framework demonstrates improved precision, recall, and F1 scores across traditional and mock interview-based assessments.

Figure 2: Comparison results before and after questioning prompt modification showing increased specificity post-reflection adjustments.

The evaluation process leverages both machine learning metrics and human evaluations, highlighting MockLLM's capacity to generate coherent, relevant, and diverse interview content. Automatic and human assessments confirm MockLLM's efficacy in generating high-quality mock interviews, leading to more accurate person-job matches.
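
Precision, recall, and F1 in this setting can be computed over sets of predicted vs. ground-truth matched (candidate, job) pairs. The definitions below are standard; the example pairs are made up for illustration.

```python
def match_prf1(predicted, actual):
    """Precision/recall/F1 for sets of matched (candidate, job) pairs."""
    tp = len(predicted & actual)                 # correctly predicted matches
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```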

Conclusion

MockLLM significantly contributes to advancing methodologies in online job recruitment through innovative application of LLMs. By utilizing mock interviews for person-job evaluation, the framework provides a unique approach to enhance the efficiency and accuracy of candidate selection processes. Future research could explore the adaptation of MockLLM to different cultural or language-specific contexts, broadening its applicability in global recruitment scenarios.
