Special Issue "Cooperative Camera Networks"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: 15 March 2020.

Special Issue Editors

Prof. Dr. Francesco De Natale
Guest Editor
Dept. of Information & Communication Technologies, University of Trento, Via Sommarive, 14 – 38050 Trento, Italy
Interests: multimedia signal processing; image retrieval
Prof. Dr. Nicola Conci
Guest Editor
Department of Information Engineering and Computer Science, University of Trento, Via Sommarive, 5 - 38123 Povo, Italy
Interests: multimedia signal processing; computer vision
Dr. Lucio Marcenaro
Guest Editor
Via all'Opera Pia 11, 16145 Genova, Italy
Interests: video processing for event recognition; detection and localization of objects in complex scenes; distributed heterogeneous sensors ambient awareness systems; ambient intelligence and bio-inspired cognitive systems
Dr. Jungong Han
Guest Editor
WMG Data Science, University of Warwick, CV4 7AL, Coventry, UK
Interests: computer vision; video analysis; machine learning
Dr. Caifeng Shan
Guest Editor
Philips Research, High Tech Campus 34, 5656AE Eindhoven, The Netherlands
Interests: computer vision; pattern recognition; image and video analysis

Special Issue Information

Dear Colleagues,

Video acquisition devices are everywhere around us. They are used in private homes to provide monitoring and assistive services; placed in buildings and indoor public spaces for surveillance purposes; and spread across urban areas to monitor traffic and people, reveal possible danger or security issues, or trigger intelligent systems such as adaptive lighting or smart crossings. Besides fixed cameras, mobile cameras are also increasingly being used to collect user-centered information in applications such as life-logging, augmented reality, and location-based services. A further increase in the diffusion of visual sensors is expected in the years to come, due to the availability of high-bandwidth/low-latency networks such as 5G on the one hand, and the spread of cost-effective advanced visual sensors (smart, light-field, and 360° cameras) on the other.

Although the current situation presents a largely unstructured scenario, in which the various devices operate without coordination and are not designed to exchange data among themselves, a current research trend is the study of systems able to jointly exploit the large amount of inter-related information acquired by visual sensors within large cooperative visual sensor networks. However, the great potential of these technologies is still hindered by many open challenges: efficient communication protocols among sensors that guarantee the necessary coordination for acquisition and processing, calibration and reconstruction of multiple views, distributed processing of the acquired visual information, and fusion of the resulting information flows.

Prof. Dr. Nicola Conci
Prof. Dr. Francesco De Natale
Dr. Lucio Marcenaro
Dr. Jungong Han
Dr. Caifeng Shan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords
  • Distributed smart cameras
  • Cooperative camera networks
  • Self-aware camera networks
  • Massive data analysis and information fusion
  • Mobile and ego-vision
  • Autonomous vehicles and autonomous driving
  • Deep learning for distributed and mobile vision
  • Cloud and edge computing for video analysis
  • Ambient-aware robotic systems
  • Immersive reality
  • Networking and 5G for video connectivity

Published Papers (1 paper)



Open Access Article
Automatic Multi-Camera Extrinsic Parameter Calibration Based on Pedestrian Torsors
Sensors 2019, 19(22), 4989; 15 Nov 2019
Extrinsic camera calibration is essential for any computer vision task in a camera network. Typically, researchers place a calibration object in the scene to calibrate all the cameras in a camera network. However, when installing cameras in the field, this approach can be costly and impractical, especially when recalibration is needed. This paper proposes a novel, accurate, and fully automatic extrinsic calibration framework for camera networks with partially overlapping views. The proposed method considers the pedestrians in the observed scene as the calibration objects and analyzes the pedestrian tracks to obtain the extrinsic parameters. Compared to the state of the art, the new method is fully automatic and robust in various environments. Our method detects human poses in the camera images and then models walking persons as vertical sticks. We apply a brute-force method to determine the correspondence between persons in multiple camera images. This information, along with the estimated 3D locations of the top and the bottom of each pedestrian, is then used to compute the extrinsic calibration matrices. We also propose a novel method to calibrate the camera network using only the top and centerline of the person when the bottom of the person is not visible in heavily occluded scenes. We verified the robustness of the method in different camera setups and for both single and multiple walking people. The results show that a triangulation error of a few centimeters can be obtained. Typically, less than one minute of observing walking people is required to reach this accuracy in controlled environments, and only a few minutes are needed to collect enough data in uncontrolled environments. Our proposed method performs well in various situations, including multi-person scenes, occlusions, and real street intersections.
(This article belongs to the Special Issue Cooperative Camera Networks)
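The triangulation accuracy mentioned in the abstract rests on a standard multi-view operation: recovering a 3D point from its observations in two calibrated cameras. The sketch below shows the classic linear (DLT) triangulation step; the camera matrices and point values are illustrative toy data, not taken from the paper, and the paper's full pipeline (pose detection, stick modeling, correspondence search) is not reproduced here.

```python
# Minimal sketch of linear (DLT) triangulation from two calibrated views,
# using only NumPy. Toy cameras and point values are assumptions for
# illustration, not data from the paper.
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Triangulate one 3D point from its normalized image observations
    x1, x2 in two cameras with 3x4 projection matrices P1, P2."""
    # Each observation contributes two rows of the homogeneous system A X = 0.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solution: right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two toy cameras: a reference camera and one translated 1 m along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 4.0])  # ground-truth point (metres)

# Project the point into both cameras to get the observations.
h1 = P1 @ np.append(X_true, 1.0); x1 = h1[:2] / h1[2]
h2 = P2 @ np.append(X_true, 1.0); x2 = h2[:2] / h2[2]

X_est = triangulate_dlt(P1, P2, x1, x2)
print(np.round(X_est, 6))  # noise-free case: recovers X_true
```

With noise-free observations the linear solution is exact; in practice, pixel noise makes the residual of A X = 0 nonzero, which is the source of the centimeter-level triangulation error the paper reports.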
