Article
DeConNet: Deep Neural Network Model to Solve the Multi-Job
Assignment Problem in the Multi-Agent System
Jungwoo Lee 1,2, Youngho Choi 1 and Jinho Suh 2,*

1 Smart Mobility Research Center, Korea Institute of Robotics and Technology Convergence (KIRO), Pohang 37666, Korea; ricow@kiro.re.kr (J.L.); rockboy@kiro.re.kr (Y.C.)
2 Department of Mechanical System Engineering, Pukyong National University, Busan 48513, Korea
* Correspondence: suhgang@pknu.ac.kr
Abstract: In a multi-agent system, multi-job assignment is an optimization problem that seeks to minimize the total cost. It can be generalized as a complex problem that combines several variations of the vehicle routing problem and is NP-hard. The parameters considered include the number of agents and jobs, the loading capacity and speed of the agents, and the sequence of consecutive job positions. In this study, a deep neural network (DNN) model was developed to solve the job assignment problem in constant time regardless of the state of the parameters. To generate a large training dataset for the DNN, the planning domain definition language (PDDL) was used to describe the problem, and the optimal solution obtained with the PDDL solver was preprocessed into a sample of the dataset. The DNN was constructed by concatenating fully connected layers. The assignment solution obtained via DNN inference increased the average traveling time by at most 13% compared with the ground cost. Moreover, whereas computing the ground cost required hundreds of seconds, the DNN execution time was constant at approximately 20 ms regardless of the number of agents and jobs.
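As a rough illustration of the architecture summarized above, the following is a minimal sketch assuming a PyTorch-style implementation; the layer widths, input encoding, and output decoding are illustrative assumptions and not the configuration used in this paper.

# Minimal sketch of a DNN built by concatenating fully connected layers (PyTorch assumed).
# Layer widths, input encoding, and output shape are illustrative assumptions,
# not the authors' configuration.
import torch
import torch.nn as nn

class AssignmentDNN(nn.Module):
    def __init__(self, state_dim: int, num_agents: int, num_jobs: int, hidden: int = 256):
        super().__init__()
        # Stack of fully connected layers; the output is one score per (agent, job) pair.
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_agents * num_jobs),
        )
        self.num_agents, self.num_jobs = num_agents, num_jobs

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # state: (batch, state_dim) flattened encoding of agent/job positions,
        # speeds, and capacities; returns (batch, num_agents, num_jobs) scores.
        return self.net(state).view(-1, self.num_agents, self.num_jobs)

# Inference is a single forward pass, so its runtime is essentially constant
# for a fixed maximum number of agents and jobs.
model = AssignmentDNN(state_dim=64, num_agents=4, num_jobs=8)
scores = model(torch.randn(1, 64))
assignment = scores.argmax(dim=1)  # best agent per job (illustrative decoding)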
Keywords: multi-agent system; assignment problem; vehicle routing problem; planning domain definition language; deep neural network
1. Introduction
In a multi-agent system, the job assignment problem between multiple agents and jobs is a combinatorial optimization problem that minimizes the total cost of the traveling time or distance.
There are multiple agents and jobs in the job assignment problem, and each agent and job occupies a specific two-dimensional position. Each agent has a different movement speed and loading capacity for the allocated jobs. A job is an ordered sequence of movements from a starting position to a target position. An agent can select several jobs within its loading capacity; a robot, for example, may move multiple objects, or a worker may accept multiple delivery requests. Once all jobs have been moved to their target locations and completed, the allocation between agents and jobs aims to minimize either the sum of the traveling times, calculated from each agent's moving speed, or the time at which the last job is completed.
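As a rough formalization of this objective, the following is a minimal sketch under simplifying assumptions; the symbols (x, t, q, c) are introduced here for illustration and are not the notation of this paper, and the traveling times are treated as fixed per agent–job pair, ignoring the routing order within an agent's job set.

% Illustrative formulation (assumed notation, not from the source):
% x_{ij} = 1 if job j is assigned to agent i, t_{ij} = traveling time of agent i for job j,
% q_j = load of job j, c_i = loading capacity of agent i (m agents, n jobs).
\begin{align}
\min_{x}\; & \sum_{i=1}^{m}\sum_{j=1}^{n} t_{ij}\,x_{ij}
  \quad\text{or}\quad
  \min_{x}\; \max_{i} \sum_{j=1}^{n} t_{ij}\,x_{ij} \\
\text{s.t.}\; & \sum_{i=1}^{m} x_{ij} = 1, \qquad j = 1,\dots,n, \\
& \sum_{j=1}^{n} q_{j}\,x_{ij} \le c_{i}, \qquad i = 1,\dots,m, \\
& x_{ij} \in \{0,1\},
\end{align}
where the first objective is the total traveling time and the second is the completion time of the last job.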
This multi-parameter allocation problem can be generalized as the vehicle routing problem (VRP). Since the VRP was first proposed [1], VRP with pickup and delivery (VRPPD) [2,3], capacitated VRP (CVRP) [4], open VRP (OVRP) [5], multi-depot VRP (MDVRP) [6,7], and many other variants have been investigated.
An agent in VRPPD must handle multiple sequential jobs from specific start (pickup)
locations to other destination (delivery) locations. In the CVRP, each agent with the
same capacity is assigned a set of jobs to achieve minimum-cost routes that originate and
terminate at a depot. The OVRP is similar to the CVRP, but unlike the CVRP, an agent does