Drone-vs-Bird Detection Challenge

in conjunction with the “5th International Workshop on Small-Drone Surveillance, Detection and Counteraction Techniques” (WOSDETC) at ICIAP, May 23-27, Lecce, Italy (hybrid, both virtual and in person)

Motivation and description

Small drones are a rising threat due to their possible misuse for illegal activities such as drug smuggling, as well as for terrorist attacks with explosives or chemical weapons. Several surveillance and detection technologies are currently under investigation, with different trade-offs in complexity, range, and capabilities. The “International Workshop on Small-Drone Surveillance, Detection and Counteraction Techniques” (WOSDETC) aims to bring together researchers from both academia and industry to share recent advances in this field. In conjunction with it, the Drone-vs-Bird Detection Challenge is proposed. Indeed, given their characteristics, drones can easily be confused with birds, which makes surveillance tasks even more challenging, especially in maritime areas where bird populations may be massive. Video analytics can address this issue, but effective algorithms are needed that can operate also under unfavorable conditions, namely weak contrast, long range, reduced visibility, etc. Furthermore, practical systems require drones to be recognized at far distances in order to allow time for reaction. Thus, very small objects must be detected and differentiated against structured backgrounds and other challenging image content.

The challenge aims to attract research efforts toward novel solutions to the problem outlined above, i.e., discrimination between birds and drones at far distances, by providing a video dataset that would otherwise be difficult to obtain (flying drones requires special conditions and permissions, and shore areas are needed for the considered problem). The challenge goal is to detect a drone appearing at some point in a short video sequence in which birds are also present: the algorithm should raise an alarm and provide a position estimate only when a drone is present, while not issuing alarms on birds. The dataset is continually extended over consecutive installments of the challenge and made available to the community afterwards.

Participation and joint workshop

The challenge is organized in conjunction with the WOSDETC workshop. All challenge participants must submit score files with their results, as explained below. The winning team will be awarded a Jetson Nano!

The best teams will be invited to submit an ICIAP paper describing their approach, which will be published (after peer review) in the conference proceedings and may also be presented during the workshop session.

Publication in the main conference of a paper summarizing the overall challenge results, with all challenge participants included as authors, will be considered.

Participation in the workshop is of course possible independently of the challenge, following the standard submission and peer-review process.

Challenge organization and dataset

An extended training dataset is made available to support the development of methods, and can be requested at any time (signing a data user agreement is required). The dataset comprises a collection of videos in which one or more drones enter the scene at some point. Annotations are provided in separate files, giving the frame number and the bounding box of the target ([top_x top_y width height]) for each frame in which a drone is present.

Please send your request to wosdetc@googlegroups.com
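As an illustration, a minimal Python sketch for reading a per-video annotation file is shown below. It assumes one whitespace-separated record per line, of the form frame top_x top_y width height; the exact layout of the released files may differ, so treat this only as a starting point.

    from collections import defaultdict

    def load_annotations(path):
        """Parse a per-video annotation file into {frame: [boxes]}.

        Assumes one whitespace-separated record per line:
            frame top_x top_y width height
        (hypothetical layout; check the released files for the exact format).
        """
        boxes_per_frame = defaultdict(list)
        with open(path) as f:
            for line in f:
                fields = line.split()
                if len(fields) < 5:
                    continue  # skip blank or malformed lines
                frame = int(fields[0])
                top_x, top_y, width, height = map(float, fields[1:5])
                boxes_per_frame[frame].append((top_x, top_y, width, height))
        return boxes_per_frame

Frames absent from the returned dictionary contain no annotated drone.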

Submission and evaluation procedure

Five days before the challenge deadline, a set of video sequences without annotations will be provided for testing. By the deadline, teams should submit one file for each test video, in a format similar to the annotation files. Submission files must provide the frame numbers and estimated drone bounding boxes ([top_x top_y width height]) together with detection confidence scores. Multiple detections can be reported for each frame, using the same frame number (but different bounding boxes). For frames not reported in the files, no detection is assumed.
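A minimal sketch of how such a submission file could be written in Python is given below; the column order (frame number, box, then confidence score) is an assumption for illustration, so follow the exact format announced with the test data.

    def write_submission(path, detections):
        """Write the detections for one test video, one line per detection.

        detections: iterable of (frame, top_x, top_y, width, height, score).
        Frames with no detections are simply omitted from the file.
        (Assumed column order; verify against the official instructions.)
        """
        with open(path, "w") as f:
            for frame, x, y, w, h, score in detections:
                f.write(f"{frame} {x:.1f} {y:.1f} {w:.1f} {h:.1f} {score:.4f}\n")

Multiple detections for the same frame are written as separate lines sharing the frame number.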

Developed algorithms should aim to localize drones accurately and generate bounding boxes as close as possible to the targets. For evaluation, the Average Precision (AP) metric will be employed. The metric is well established in the field of object detection and well known from the COCO object detection challenge. It is based on the Intersection over Union (IoU) criterion for matching ground-truth and detected object boxes.

Typically, a detection is counted as correct when its IoU with a ground-truth box is above 0.5. The AP summarizes a whole precision-recall curve in a single metric, and thus encompasses the various precision-recall trade-offs of a detector. While the final ranking will be based on overall AP, a more detailed analysis of AP for various object sizes will be carried out in the challenge summary paper, in order to identify the different strengths and weaknesses of the submitted approaches.
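For reference, the IoU of two boxes in the [top_x top_y width height] format used here can be computed as in the following sketch; this is the standard definition, not challenge-specific code.

    def iou(box_a, box_b):
        """Intersection over Union of two (top_x, top_y, width, height) boxes."""
        ax, ay, aw, ah = box_a
        bx, by, bw, bh = box_b
        # Width and height of the intersection rectangle (0 if the boxes are disjoint)
        iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
        ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))
        inter = iw * ih
        union = aw * ah + bw * bh - inter
        return inter / union if union > 0 else 0.0

Under the typical criterion above, a detection matching a ground-truth box with IoU exceeding 0.5 counts as a true positive; in practice, AP itself is usually computed with established tools such as pycocotools rather than reimplemented.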

Each participating team must submit a summary description of their method together with their results. The winning teams will be invited to extend their summary into a full paper, and submissions with particularly interesting methodologies may be considered as well. The summary must include references to any public codebases used, a detailed specification of the applied model, and the training parameters. The use of additional training data is permitted; however, the amount and nature of such data must be described in detail. Furthermore, teams relying on added data will be asked to submit an additional result achieved using only the provided training data. Nevertheless, the best overall score will count towards the final challenge ranking.

Results and paper submission

Results must be submitted through the CMT website activated for the workshop. See the submission page for all details and important dates.

Organizers

Angelo Coluccia, University of Salento, Lecce, Italy

Alessio Fascista, University of Salento, Lecce, Italy

Arne Schumann, Fraunhofer Institute, Karlsruhe, Germany

Lars Sommer, Fraunhofer Institute, Karlsruhe, Germany

Anastasios Dimou, CERTH, Greece

Dimitrios Zarpalas, CERTH, Greece

Nabin Sharma, University of Technology Sydney, Australia

Mrunalini Nalamati, University of Technology Sydney, Australia
