Citation: de Curtò, J.; de Zarzà, I.; Calafate, C.T. Semantic Scene Understanding with Large Language Models on Unmanned Aerial Vehicles. Drones 2023, 7, 114. https://doi.org/10.3390/drones7020114
Academic Editors: Diego González-Aguilera and Federico Tombari
Received: 16 December 2022
Revised: 31 January 2023
Accepted: 6 February 2023
Published: 8 February 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
Semantic Scene Understanding with Large Language Models on
Unmanned Aerial Vehicles
J. de Curtò 1,2,3,4,*, I. de Zarzà 1,2,3,4 and Carlos T. Calafate 2
1 Centre for Intelligent Multidimensional Data Analysis, HK Science Park, Shatin, Hong Kong
2 Departamento de Informática de Sistemas y Computadores, Universitat Politècnica de València, 46022 València, Spain
3 Informatik und Mathematik, GOETHE-University Frankfurt am Main, 60323 Frankfurt am Main, Germany
4 Estudis d'Informàtica, Multimèdia i Telecomunicació, Universitat Oberta de Catalunya, 08018 Barcelona, Spain
* Correspondence: decurto@em.uni-frankfurt.de
Abstract: Unmanned Aerial Vehicles (UAVs) provide instantaneous visual cues and high data throughput that can be further leveraged to address complex tasks, such as semantically rich scene understanding. In this work, we built on Large Language Models (LLMs) and Visual Language Models (VLMs), together with a state-of-the-art detection pipeline, to provide thorough zero-shot literary text descriptions of UAV scenes. The generated texts achieve a Gunning Fog median grade level in the range of 7–12. Applications of this framework can be found in the filming industry, and it could enhance the user experience in theme parks or in the advertisement sector. We demonstrate a low-cost, highly efficient, state-of-the-art practical implementation of microdrones in a well-controlled yet challenging setting, and we propose the use of standardized readability metrics to assess LLM-enhanced descriptions.
Keywords: scene understanding; large language models; visual language models; CLIP; GPT-3; YOLOv7; UAV
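For reference, the Gunning Fog grade level cited in the abstract follows the standard formula 0.4 × (average words per sentence + 100 × complex-word ratio). The sketch below is a minimal illustration of that computation, not the exact tooling used in this work; the vowel-group syllable counter is a rough heuristic, and dedicated libraries such as textstat provide more careful estimates.

```python
import re

def gunning_fog(text: str) -> float:
    """Gunning Fog index: 0.4 * (words/sentences + 100 * complex_words/words).
    'Complex' words are approximated as words with >= 3 syllables, estimated
    by counting vowel groups (a rough heuristic, not a full syllabifier)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0

    def syllables(word: str) -> int:
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100.0 * len(complex_words) / len(words))

# Toy usage on a hypothetical generated description.
sample = ("The quadcopter hovered above the plaza. Spectators gathered while "
          "the onboard camera transmitted a continuous panoramic description.")
print(round(gunning_fog(sample), 1))
```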
1. Introduction and Motivation
Unmanned Aerial Vehicles (UAVs) have proven to be an essential asset for practically addressing many challenges in vision and robotics. From surveillance and disaster response to the monitoring of satellite communications, UAVs perform well in situations where seamless mobility and high-definition visual capture are necessary. In this work, we focus on tasks that require a semantic understanding of visual cues and that can guide initial estimates toward an adequate characterization of a given environment. Problems of interest include semi-adaptive filming [1] and automatic literary text description. In this setting, we propose a complete pipeline that provides real-time original text descriptions of incoming frames, or a general scene description for pre-recorded videos. The descriptions are well-suited to creating an automatic storytelling framework that can be used in theme parks or on family trips alike.
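To make this frame-to-text loop concrete, the sketch below is a rough illustration under stated assumptions rather than the exact pipeline of this work: CLIP scores a handful of hypothetical candidate scene labels for a frame, and the best match is handed to a GPT-3-style completion endpoint to draft a literary description. The label list, prompt wording, and model names are placeholders; the snippet assumes OpenAI's clip package and the openai Python SDK prior to v1.0 (with OPENAI_API_KEY set), and the detection stage is omitted here.

```python
# Hypothetical sketch: CLIP ranks candidate scene labels for a frame, then an
# LLM turns the top label into a literary description. Labels, prompt, and
# model names are illustrative placeholders, not the paper's exact choices.
import clip          # OpenAI CLIP package
import openai        # assumes openai < 1.0 and OPENAI_API_KEY in the environment
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

CANDIDATE_LABELS = ["a crowded plaza", "a forest trail", "a theme-park ride",
                    "a building facade", "an empty parking lot"]

def describe_frame(path: str) -> str:
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    text = clip.tokenize(CANDIDATE_LABELS).to(device)
    with torch.no_grad():
        logits_per_image, _ = model(image, text)      # image-text similarity
        probs = logits_per_image.softmax(dim=-1)[0]
    top = CANDIDATE_LABELS[int(probs.argmax())]
    prompt = (f"Write a short literary description of an aerial shot of {top}, "
              "as seen from a small drone.")
    completion = openai.Completion.create(model="text-davinci-003",
                                          prompt=prompt, max_tokens=120)
    return completion.choices[0].text.strip()
```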
Foundation models are neural networks trained on large amounts of data that exhibit good generalization capabilities across tasks. In particular, Natural Language Processing (NLP) has seen a dramatic improvement with the appearance of GPT-2 [2] and its successors (GPT-3 [3]). Indeed, Large Language Models (LLMs) and Visual Language Models (VLMs) have recently arisen as a resource for addressing widespread problems in disciplines ranging from robotic manipulation and navigation to literary text description, completion, and question answering. We attempt to introduce these techniques into the field of UAVs by providing the vehicle with enhanced semantic understanding. Our approach uses a captioning technique based on CLIP [4,5],