Search Results (15)

Search Parameters:
Keywords = file size preserving

20 pages, 69905 KiB  
Article
Lossless Thumbnail Preservation Encryption Based on Reversible Information Hiding
by Junlin Ouyang, Tingjian Shi and Ruijie Wang
Electronics 2025, 14(10), 2060; https://doi.org/10.3390/electronics14102060 - 19 May 2025
Viewed by 342
Abstract
With the proliferation of multi-camera smartphones, the volume of images being generated has surged, and cloud storage services have become the primary tool for storing and sharing photos. However, this also poses privacy and security risks. Traditional image encryption techniques protect privacy but destroy the usability of images in the cloud. To balance security and usability, Thumbnail-Preserving Encryption (TPE) has been proposed. However, the decryption quality of existing schemes is generally unsatisfactory, and many cannot achieve perfect restoration in practice, while the few fully reversible schemes are constrained by their underlying algorithms and are difficult to extend to a wider range of applications. To resolve this tension, this paper proposes a TPE scheme based on reversible information hiding. Specifically, the scheme preserves the DC coefficients of the image during encryption and encrypts the AC coefficients to enhance security, yielding an intermediate encrypted image. The intermediate encrypted image is then pre-decrypted, and the subtle error between the original image and the pre-decrypted result is used as compensation information. To achieve lossless decryption, we introduce reversible information hiding to embed the compensation information into the intermediate image, finally obtaining the encrypted image. This approach also applies to other high-quality TPE schemes and suggests a direction for their optimization. Experimental results show that the scheme not only achieves lossless decryption but also outperforms other TPE schemes in visual quality, while keeping file-size expansion low. This work offers new ideas for balancing image privacy protection and usability, with both theoretical and practical significance.
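
A minimal numpy sketch of the split described above, assuming the image is already given as 8×8 DCT-coefficient blocks; the keyed AC scrambling and all names are illustrative, not the paper's exact design.

```python
import numpy as np

def encrypt_blocks(coeffs: np.ndarray, key: int) -> np.ndarray:
    """Keep DC coefficients, scramble AC coefficients with a keyed permutation.
    coeffs: (n_blocks, 8, 8) quantized DCT coefficients."""
    rng = np.random.default_rng(key)
    flat = coeffs.reshape(len(coeffs), 64).copy()
    perm = rng.permutation(63) + 1           # AC positions 1..63; DC (index 0) untouched
    signs = rng.choice([-1, 1], size=63)     # keyed sign flips add diffusion
    flat[:, 1:] = flat[:, perm] * signs
    return flat.reshape(-1, 8, 8)

def compensation(original: np.ndarray, pre_decrypted: np.ndarray) -> np.ndarray:
    """Subtle pixel-domain error between the original image and its pre-decrypted
    version; a reversible-hiding step (e.g. histogram shifting) would embed this
    into the intermediate encrypted image so decryption can be made lossless."""
    return original.astype(np.int16) - pre_decrypted.astype(np.int16)
```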

20 pages, 12422 KiB  
Article
LHSDNet: A Lightweight and High-Accuracy SAR Ship Object Detection Algorithm
by Dahai Dai, Hao Wu, Yue Wang and Penghui Ji
Remote Sens. 2024, 16(23), 4527; https://doi.org/10.3390/rs16234527 - 3 Dec 2024
Cited by 6 | Viewed by 1166
Abstract
At present, most deep learning-based ship object detection algorithms concentrate on improving recognition accuracy while overlooking algorithmic complexity. Such complex algorithms demand significant computational resources, making them unsuitable for deployment on resource-constrained edge devices such as airborne and spaceborne platforms, which limits their practicality. To alleviate this problem, a lightweight, high-accuracy synthetic aperture radar (SAR) ship detection network (LHSDNet) is proposed. First, GhostHGNetV2 is used as the feature extraction network, with GhostConv reducing its computational cost. Next, a lightweight feature fusion network combines shallow and deep features through lightweight convolutions, preserving more information while minimizing computation. Finally, the feature extraction module is integrated through parameter sharing, and the detection head is made lightweight to further save computing resources. Our experiments demonstrate that LHSDNet increases mAP50 by 0.7% compared with the baseline model while reducing parameter count, computational demand, and model file size by 48.33%, 51.85%, and 41.26%, respectively. LHSDNet thus balances precision against computing resources, making it well suited to edge-device deployment.
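
The GhostConv idea the abstract leans on replaces part of an ordinary convolution with cheap depthwise operations. A hedged PyTorch sketch of such a block follows; layer sizes are illustrative, not LHSDNet's actual configuration.

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Ghost-style convolution: a slim primary conv produces half the channels,
    cheap depthwise ops generate the remaining 'ghost' channels."""
    def __init__(self, c_in: int, c_out: int, k: int = 1, ratio: int = 2, dw_k: int = 5):
        super().__init__()
        c_primary = c_out // ratio                # channels from the normal conv
        c_cheap = c_out - c_primary               # channels from cheap depthwise ops
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_primary, k, padding=k // 2, bias=False),
            nn.BatchNorm2d(c_primary), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(c_primary, c_cheap, dw_k, padding=dw_k // 2,
                      groups=c_primary, bias=False),  # depthwise: very few FLOPs
            nn.BatchNorm2d(c_cheap), nn.ReLU(inplace=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)
```

A block like `GhostConv(64, 128)` would then stand in for an ordinary convolution in the backbone, which is where savings of the kind the abstract reports would largely come from.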

13 pages, 10695 KiB  
Article
Optimising Floor Plan Extraction: Applying DBSCAN and K-Means in Point Cloud Analysis of Valencia Cathedral
by Pablo Ariel Escudero, María Concepción López González and Jorge L. García Valldecabres
Heritage 2024, 7(10), 5787-5799; https://doi.org/10.3390/heritage7100272 - 16 Oct 2024
Cited by 1 | Viewed by 1778
Abstract
Accurately documenting the geometry of historical buildings presents a considerable challenge, especially for complex structures like the Metropolitan Cathedral of Valencia. Advanced technologies such as 3D laser scanning have enabled detailed spatial data capture, yet handling these data efficiently remains difficult because of their volume and complexity. This study explores clustering techniques based on machine learning algorithms, such as DBSCAN and K-means, to automate point cloud analysis and modelling, focusing on identifying and extracting floor plans. The proposed methodology includes geo-referencing the data, culling points to reduce file size, and automatically extracting floor plans through filtering and segmentation. This approach aims to streamline the documentation and modelling of historical buildings and to enhance the accuracy of architectural surveys, contributing to the preservation of cultural heritage through a more efficient and accurate method of data analysis.
(This article belongs to the Section Architectural Heritage)
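
A small sketch of the slice-and-cluster step under stated assumptions: slab height, `eps`, and `min_samples` are invented, since the authors' parameters are not given in the abstract.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def floor_plan_clusters(points: np.ndarray, z0: float = 1.2, dz: float = 0.05,
                        eps: float = 0.08, min_samples: int = 30):
    """points: (N, 3) point cloud in metres. Cut a thin horizontal slab at
    height z0 and cluster its 2D footprint to isolate walls/pillars."""
    slab = points[np.abs(points[:, 2] - z0) < dz]      # thin horizontal slice
    xy = slab[:, :2]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(xy)
    keep = labels >= 0                                  # drop DBSCAN noise (-1)
    return xy[keep], labels[keep]
```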

23 pages, 5348 KiB  
Article
Efficient Runtime Firmware Update Mechanism for LoRaWAN Class A Devices
by Bernardino Pinto Neves, António Valente and Victor D. N. Santos
Eng 2024, 5(4), 2610-2632; https://doi.org/10.3390/eng5040137 - 14 Oct 2024
Cited by 1 | Viewed by 2195
Abstract
This paper presents an efficient and secure method for updating firmware in IoT devices using LoRaWAN network resources and communication protocols. The proposed method divides the firmware into fragments, stores them in the application server's database, and transmits them to remote IoT devices via downlink messages, without requiring any change to the device's class. The approach can be replicated on any LoRaWAN IoT device, offering a robust and scalable solution for large-scale firmware updates while ensuring data security and integrity, and it significantly reduces device downtime and improves the energy efficiency of the update process. The method was validated by updating a block of program memory associated with a specific functionality of the IoT end device: the corresponding Intel Hex file was segmented into 17 LoRaWAN downlink frames with an average size of 46 bytes. Upon receiving the complete firmware update, the microcontroller employs self-programming techniques that restrict the update to specific rows of program memory, avoiding interruptions or reboots. The update completed successfully in 51.33 ms, with a downtime of 16.88 ms. The method demonstrates improved energy efficiency over existing solutions while preserving the communication network's capacity, making it well suited to remote devices in LoRaWAN networks.
(This article belongs to the Section Electrical and Electronic Engineering)
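
The fragmentation step reduces to chunking the firmware image into downlink-sized payloads. A minimal sketch, assuming a simple [index][payload] frame layout; the paper's actual header format is not specified.

```python
import struct

def fragment_firmware(firmware: bytes, chunk_size: int = 44):
    """Yield frames of [index:u16][payload], sized for LoRaWAN downlinks
    (2-byte header + 44-byte payload ~ the 46-byte average in the abstract)."""
    for i in range(0, len(firmware), chunk_size):
        yield struct.pack(">H", i // chunk_size) + firmware[i:i + chunk_size]

def reassemble(frames) -> bytes:
    """Receiver side: rebuild the image once every index has arrived; the
    device would then self-program only the affected rows of program memory."""
    parts = {struct.unpack(">H", f[:2])[0]: f[2:] for f in frames}
    assert sorted(parts) == list(range(len(parts))), "missing fragment"
    return b"".join(parts[i] for i in sorted(parts))
```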

15 pages, 1202 KiB  
Article
Semantic Hierarchical Classification Applied to Anomaly Detection Using System Logs with a BERT Model
by Clara Corbelle, Victor Carneiro and Fidel Cacheda
Appl. Sci. 2024, 14(13), 5388; https://doi.org/10.3390/app14135388 - 21 Jun 2024
Viewed by 1813
Abstract
The compaction and structuring of system logs facilitate and expedite anomaly and cyberattack detection with machine-learning techniques, while reducing the alert fatigue caused by false positives. In this work, we implement an innovative algorithm that assigns hierarchical codes based on the semantics of natural language, enabling the generation of a significantly reduced log that preserves the semantics of the original. The method uses codes that reflect the specificity of each topic and its position within a higher-level hierarchical structure. Applying this catalog to logs from the Hadoop Distributed File System (HDFS), we obtained a concise summary with non-repetitive themes, significantly speeding up log analysis and substantially reducing log size while maintaining high semantic similarity. The reduced log was validated for anomaly detection using the "bert-base-uncased" model and compared with six other methods: PCA, IM, LogCluster, SVM, DeepLog, and LogRobust. It achieved very similar precision, recall, and F1-score values while drastically reducing processing time.
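
A toy sketch of the compaction idea: map each log line to a hierarchical semantic code and keep each theme once. The catalog below is invented for illustration; the paper derives its codes from natural-language semantics.

```python
# Hypothetical catalog: substring pattern -> hierarchical topic code.
CATALOG = {
    "Receiving block": "hdfs.block.receive",
    "Served block": "hdfs.block.serve",
    "PacketResponder": "hdfs.block.ack",
    "Exception": "hdfs.error.exception",
}

def compact(log_lines):
    """Return an ordered, de-duplicated list of topic codes: a reduced log
    that keeps one entry per theme while preserving first-seen order."""
    seen, summary = set(), []
    for line in log_lines:
        code = next((c for k, c in CATALOG.items() if k in line), "hdfs.other")
        if code not in seen:            # non-repetitive themes only
            seen.add(code)
            summary.append(code)
    return summary
```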

17 pages, 15053 KiB  
Article
Encryption Method for JPEG Bitstreams for Partially Disclosing Visual Information
by Mare Hirose, Shoko Imaizumi and Hitoshi Kiya
Electronics 2024, 13(11), 2016; https://doi.org/10.3390/electronics13112016 - 22 May 2024
Cited by 1 | Viewed by 1558
Abstract
In this paper, we propose a novel encryption method for JPEG bitstreams in which the encrypted data preserve the JPEG file format at the same size as the unencrypted data. Accordingly, data encrypted with the method can be decoded by a standard JPEG decoder without any modification of the header information. In addition, the method offers two capabilities that conventional bitstream-level encryption methods do not: spatially partial encryption and block-permutation-based encryption. To achieve this, we propose, for the first time, using the restart marker code, which can be inserted at regular intervals between minimum coded units (MCUs) for encryption. Restart markers let us define extended blocks separated by the markers, which is what makes both capabilities possible. In experiments, the effectiveness of the method is verified in terms of file size preservation and the visibility of encrypted images.
(This article belongs to the Special Issue Modern Computer Vision and Image Analysis)
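
Restart markers are the two-byte codes 0xFFD0-0xFFD7 inserted between MCUs. A minimal sketch of cutting the entropy-coded scan into the "extended blocks" they delimit; the per-block encryption itself is omitted.

```python
def split_at_restart_markers(scan: bytes):
    """scan: entropy-coded bytes between SOS and EOI. Returns the segments
    separated by RSTn markers (0xFFD0..0xFFD7); stuffed 0xFF00 bytes are
    correctly ignored because 0x00 is outside the 0xD0..0xD7 range."""
    segments, start, i = [], 0, 0
    while i < len(scan) - 1:
        if scan[i] == 0xFF and 0xD0 <= scan[i + 1] <= 0xD7:
            segments.append(scan[start:i])   # extended block before RSTn
            start = i + 2                    # skip the two marker bytes
            i += 2
        else:
            i += 1
    segments.append(scan[start:])
    return segments                          # these can be permuted/encrypted per key
```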

10 pages, 2020 KiB  
Article
Human Middle Ear Anatomy Based on Micro-Computed Tomography and Reconstruction: An Immersive Virtual Reality Development
by Kai Cheng, Ian Curthoys, Hamish MacDougall, Jonathan Robert Clark and Payal Mukherjee
Osteology 2023, 3(2), 61-70; https://doi.org/10.3390/osteology3020007 - 23 May 2023
Cited by 3 | Viewed by 3449
Abstract
Background: For almost a decade, virtual reality (VR) has been employed in otology simulation. The realism and accuracy of traditional three-dimensional (3D) mesh models of the middle ear built from clinical CT have suffered from low resolution. Micro-computed tomography (micro-CT) imaging overcomes the resolution issue, but its use on VR platforms has been limited by high computational requirements. The aim of this study was to optimise a high-resolution 3D human middle ear mesh model for viewing and manipulation in an immersive VR environment on an HTC VIVE headset (HTC and Valve Corporation, USA), enabling a seamless anatomical visualisation experience while preserving anatomical accuracy. Methods: A high-resolution 3D mesh model of the human middle ear was reconstructed from micro-CT data with 28 μm voxel resolution. The model was optimised by tailoring surface-model polygon counts, file size, loading time, and frame rate. Results: The optimised middle ear model and its surrounding structures (reduced from 21 million to 2.5 million polygons) could be uploaded and visualised in immersive VR at 82 frames per second, with no VR-related motion sickness reported. Conclusion: High-resolution micro-CT data can be visualised in an immersive VR environment after optimisation. To our knowledge, this is the first report to overcome this translational hurdle for middle ear applications of VR.
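
One common way to hit such a polygon budget is quadric decimation; a hedged sketch with Open3D follows (the authors' actual toolchain and the file names here are assumptions).

```python
import open3d as o3d

# Hypothetical input: the reconstructed micro-CT surface mesh (~21M triangles).
mesh = o3d.io.read_triangle_mesh("middle_ear_microct.ply")

# Collapse edges until the VR-friendly budget from the abstract is reached.
simplified = mesh.simplify_quadric_decimation(
    target_number_of_triangles=2_500_000)
simplified.compute_vertex_normals()          # needed for shaded VR rendering

o3d.io.write_triangle_mesh("middle_ear_vr.ply", simplified)
```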

16 pages, 1983 KiB  
Article
Selective Power-Loss-Protection Method for Write Buffer in ZNS SSDs
by Junseok Yang, Seokjun Lee and Sungyong Ahn
Electronics 2022, 11(7), 1086; https://doi.org/10.3390/electronics11071086 - 30 Mar 2022
Cited by 2 | Viewed by 3967
Abstract
Most SSDs (solid-state drives) use internal DRAM to improve I/O performance and extend SSD lifespan by absorbing write requests. However, this volatile memory does not guarantee the persistence of buffered data in the event of sudden power-off, so highly reliable enterprise SSDs employ power-loss-protection (PLP) logic that uses the back-up power of capacitors to ensure the durability of buffered data. The SSD must provide capacitance for PLP in proportion to the size of the volatile buffer. Meanwhile, emerging ZNS (Zoned Namespace) SSDs are attracting attention because they can support the many I/O streams that are useful in multi-tenant systems. Although ZNS SSDs, unlike conventional block-interface SSDs, do not keep an internal mapping table, a large write buffer is still required to support many I/O streams: because the host can allocate separate zones to different I/O streams, each stream needs its own write buffer. Moreover, the larger the capacity and the more I/O streams a ZNS SSD supports, the larger the write buffer required. The buffer size, however, is bounded by the available capacitance, which is limited not only by the SSD's internal space but also by cost. In this paper, we therefore present a set of techniques that significantly reduce the capacitance required in ZNS SSDs while still ensuring the durability of buffered data during sudden power-off. First, noting that modern file systems and databases have their own recovery mechanisms, such as the WAL (write-ahead log) and journal, we propose a selective power-loss-protection method that ensures durability only for the WAL or journal required for data recovery, not for the entire buffered data. Second, to minimize the time taken by PLP, we propose a balanced flush method that temporarily writes buffered data across multiple zones to maximize parallelism and restores the data to its original location when power returns. The proposed methods were implemented and evaluated by modifying FEMU (a QEMU-based flash emulator) and RocksDB. Experimental results show that selective PLP reduces the required capacitance by 50-90% while retaining the reliability of ZNS SSDs, and the balanced flush method reduces PLP latency by up to 96%.
(This article belongs to the Special Issue Emerging Memory Technologies for Next-Generation Applications)
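
A minimal sketch of the two ideas combined, selective protection plus a balanced flush, with an invented buffer/zone interface; the real firmware-level API is not described in the abstract.

```python
from collections import defaultdict

def on_power_loss(write_buffer, zones, is_recovery_critical):
    """write_buffer: list of (stream_id, data) entries; zones: open zones.
    Flush only recovery-critical entries (WAL/journal streams), striped
    round-robin across zones so the capacitor-backed window is short."""
    critical = [(s, d) for s, d in write_buffer if is_recovery_critical(s)]
    stripes = defaultdict(list)
    for i, entry in enumerate(critical):
        stripes[i % len(zones)].append(entry)      # balanced flush: max parallelism
    for zone_idx, entries in stripes.items():
        zones[zone_idx].append_all(entries)        # hypothetical zone-append call
    # Non-critical data is dropped; the file system's WAL/journal replays it.
```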

26 pages, 959 KiB  
Article
SAGMAD—A Signature Agnostic Malware Detection System Based on Binary Visualisation and Fuzzy Sets
by Betty Saridou, Joseph Ryan Rose, Stavros Shiaeles and Basil Papadopoulos
Electronics 2022, 11(7), 1044; https://doi.org/10.3390/electronics11071044 - 26 Mar 2022
Cited by 12 | Viewed by 4447
Abstract
Image conversion of byte-level data, or binary visualisation, is a relevant approach for security applications concerned with detecting malicious activity. In practice, however, binary visualisation has shown serious limitations when dealing with large volumes of data, making it an unlikely candidate for the core building block of an intrusion detection system (IDS): converting a flow of byte data into image format is computationally expensive, and machine intelligence solutions based on colour tone variations for pattern recognition would overtax the process. In this paper, we address this issue by proposing a fast binary visualisation method that uses fuzzy set theory and the H-indexing space filling curve. Our model can assign different colour tones to a byte, allowing it to be influenced by neighbouring byte values while preserving optimal locality in the indexing. With this work, we wish to establish the first steps towards a signature-free IDS. For our experiments, we used 5000 malicious and benign files of different sizes, and the methodology was tested on various platforms, including GRNET's High-Performance Computing services. Improvements in computation time allowed larger files to be converted in roughly 0.5 s in a desktop environment. Performance was also compared with existing machine learning-based detection applications that use traditional binary visualisation. Despite the lack of optimal tuning, SAGMAD achieved 91.94% accuracy, 90.63% precision, 92.7% recall, and an F-score of 91.61% on average when tested within previous binary visualisation applications following their parameterisation scheme. The results exceeded those of malware file-based experiments and were on par with network intrusion applications. Overall, these results show our method to be a promising mechanism for a fast, AI-based, signature-agnostic IDS.
(This article belongs to the Special Issue Next Generation Networks and Systems Security)
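
A hedged sketch of the visualisation step, using a standard Hilbert curve as a stand-in for the paper's H-indexing curve and a moving average as a crude proxy for the fuzzy neighbour influence on each byte's tone.

```python
import numpy as np

def d2xy(n: int, d: int):
    """Hilbert-curve distance d -> (x, y) on an n x n grid (n a power of 2)."""
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate quadrant to keep locality
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

def visualise(data: bytes, n: int = 256) -> np.ndarray:
    """Lay file bytes along the curve; each pixel's tone is smoothed over
    neighbouring bytes before being mapped to a colour downstream."""
    img = np.zeros((n, n), dtype=np.float32)
    vals = np.frombuffer(data[:n * n], dtype=np.uint8).astype(np.float32)
    smooth = np.convolve(vals, np.ones(5) / 5, mode="same")  # neighbour influence
    for d, v in enumerate(smooth):
        x, y = d2xy(n, d)
        img[y, x] = v
    return img
```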

12 pages, 948 KiB  
Article
Principal Component Analysis and Factor Analysis for an Atanassov IF Data Set
by Viliam Ďuriš, Renáta Bartková and Anna Tirpáková
Mathematics 2021, 9(17), 2067; https://doi.org/10.3390/math9172067 - 26 Aug 2021
Cited by 7 | Viewed by 3062
Abstract
The present contribution is devoted to the theory of fuzzy sets, especially Atanassov intuitionistic fuzzy sets (IF sets), and their use in practice. We define the correlation between IF sets and the correlation coefficient, and we bring a new perspective to the problem of data file reduction in cases where the input data come from IF sets. We present specific applications of the two best-known methods used to reduce the size of a data file, Principal Component Analysis and Factor Analysis. We examine the input data from three perspectives: through the membership function, the non-membership function, and the hesitation margin. This examination better reflects the character of the input data and better captures and preserves the information they carry. We also present and solve a practical example showing the behaviour of these methods on data from IF sets. The example is solved in the R programming language, which is well suited to statistical analysis of data and its graphical representation.
(This article belongs to the Special Issue Fuzzy Systems and Optimization)
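
A minimal sketch of the three-view analysis on synthetic data (the paper itself works in R): membership mu, non-membership nu, and the hesitation margin pi = 1 − mu − nu are each reduced separately.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
mu = rng.uniform(0, 0.7, size=(100, 8))            # membership degrees
nu = rng.uniform(0, 1, size=(100, 8)) * (1 - mu)   # keeps mu + nu <= 1
pi = 1 - mu - nu                                   # hesitation margin

for name, view in [("membership", mu), ("non-membership", nu),
                   ("hesitation", pi)]:
    var = PCA(n_components=2).fit(view).explained_variance_ratio_
    print(f"{name}: first two PCs explain {var.sum():.1%} of the variance")
```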

17 pages, 5468 KiB  
Article
Maximizing Image Information Using Multi-Chimera Transform Applied on Face Biometric Modality
by Ahmad Saeed Mohammad, Dhafer Zaghar and Walaa Khalaf
Information 2021, 12(3), 115; https://doi.org/10.3390/info12030115 - 8 Mar 2021
Cited by 2 | Viewed by 2269
Abstract
With the development of mobile technology, the use of media data has increased dramatically, making data reduction an active research field aimed at retaining valuable information. In this paper, a new scheme called the Multi-Chimera Transform (MCT) is proposed, which achieves strong data reduction with high information preservation and improves reconstruction by producing three parameters from each 16×16 block of data. MCT is a 2D transform built on a codebook of 256 blocks picked from selected images with low mutual similarity. The transform was applied to solid and soft biometric modalities of the AR database, giving high information preservation with a small resulting file size. The proposed method substantially outperformed KLT and WT in terms of SSIM and PSNR: the highest SSIM was 0.87 for MCT on the full images of the AR database, versus 0.81 and 0.68 for KLT and WT, respectively, and the highest PSNR was 27.23 dB for MCT on the warp facial images, versus 24.70 dB and 21.79 dB for KLT and WT.
(This article belongs to the Section Information Applications)
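
The abstract does not spell out the three per-block parameters; the sketch below assumes, purely for illustration, that they are a codebook index plus a least-squares gain and offset.

```python
import numpy as np

def encode_block(block: np.ndarray, codebook: np.ndarray):
    """block: (16, 16); codebook: (256, 16, 16). Returns the assumed three
    parameters (index, gain, offset) of the best-fitting codebook atom."""
    b = block.ravel().astype(np.float64)
    best = None
    for k, atom in enumerate(codebook.reshape(256, -1)):
        A = np.stack([atom, np.ones_like(atom)], axis=1)
        (a, c), res, *_ = np.linalg.lstsq(A, b, rcond=None)
        err = res[0] if res.size else np.sum((A @ np.array([a, c]) - b) ** 2)
        if best is None or err < best[0]:
            best = (err, k, a, c)
    _, k, a, c = best
    return k, a, c          # reconstruction: a * codebook[k] + c
```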

12 pages, 1951 KiB  
Article
Content and Privacy Protection in JPEG Images by Reversible Visual Transformation
by Xin Cao, Yuxuan Huang, Hao-Tian Wu and Yiu-ming Cheung
Appl. Sci. 2020, 10(19), 6776; https://doi.org/10.3390/app10196776 - 27 Sep 2020
Cited by 12 | Viewed by 2894
Abstract
With the popularity of cloud computing and social networks, more and more JPEG images are stored and distributed, so protecting the privacy and content of JPEG images has become an important issue. Although traditional encryption schemes can be employed, they change the file format of JPEG images, which may affect their usage. In this paper, a reversible visual transformation algorithm is proposed to protect the content of JPEG images. Specifically, the DC coefficient in each user-selected block is modified, while the information required to recover it is reversibly hidden in the AC coefficients. The signs of the AC coefficients in the selected blocks are then flipped, and the blocks are further scrambled with a secret key. By embedding the location information of the selected blocks in the transformed image, the original image can be exactly recovered when needed. Moreover, the regions to be protected can be chosen arbitrarily without substantially affecting the rest of the image. Experimental results on a set of JPEG images validate the efficacy and reversibility of the proposed algorithm, with good performance in terms of invisibility of the protected content, image quality, file size preservation, and security.
(This article belongs to the Special Issue Digital Transformation in Manufacturing Industry)
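
A minimal numpy sketch of the two exactly invertible steps named above, AC sign flipping and keyed block scrambling; the DC modification and the in-image bookkeeping that hides the recovery information are omitted.

```python
import numpy as np

def transform(blocks: np.ndarray, key: int) -> np.ndarray:
    """blocks: (n, 8, 8) DCT coefficients of the user-selected regions."""
    rng = np.random.default_rng(key)
    out = -blocks.copy()                 # flip every coefficient's sign...
    out[:, 0, 0] = blocks[:, 0, 0]       # ...then restore DC: AC-only flip
    return out[rng.permutation(len(out))]

def recover(scrambled: np.ndarray, key: int) -> np.ndarray:
    rng = np.random.default_rng(key)
    perm = rng.permutation(len(scrambled))
    unscrambled = np.empty_like(scrambled)
    unscrambled[perm] = scrambled        # invert the keyed permutation
    out = -unscrambled
    out[:, 0, 0] = unscrambled[:, 0, 0]  # undo the AC sign flip, keep DC
    return out
```

Both steps are lossless, which is what makes the overall transformation reversible and file-size friendly.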

37 pages, 646 KiB  
Article
An Improved Bytewise Approximate Matching Algorithm Suitable for Files of Dissimilar Sizes
by Víctor Gayoso Martínez, Fernando Hernández-Álvarez and Luis Hernández Encinas
Mathematics 2020, 8(4), 503; https://doi.org/10.3390/math8040503 - 2 Apr 2020
Cited by 3 | Viewed by 3428
Abstract
The goal of digital forensics is to recover and investigate pieces of data found on digital devices, analysing in the process their relationship with other fragments of data from the same device or from different ones. Approximate matching functions, also called similarity-preserving or fuzzy hashing functions, pursue that goal by comparing files and determining their resemblance. In this regard, ssdeep, sdhash, and LZJD are some of the best-known functions addressing this problem. However, useful and trustworthy as those applications are, they also have important limitations: mainly, the inability of ssdeep and LZJD to compare files of very different sizes, the excessive signature size of sdhash and LZJD, and the occasionally weak relationship between the comparison score and the actual content of the files in all three. In this article, we propose a new signature generation procedure and an algorithm for comparing two files through their digital signatures. Although our design is based on ssdeep, it improves on several of its limitations and satisfies the requirements that approximate matching applications should fulfil. Through a set of ad hoc and standard tests based on the FRASH framework, we show that the proposed algorithm offers strong overall detection performance and is suitable for comparing files of very different sizes. A full description of the multi-thread implementation of the algorithm is included, along with all the tests used to compare this proposal with ssdeep, sdhash, and LZJD.
(This article belongs to the Special Issue Evolutionary Computation & Swarm Intelligence)
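
The ssdeep-style core the proposal builds on is context-triggered piecewise hashing: a rolling hash decides block boundaries and each block contributes one signature character. A toy sketch, with an illustrative window size and trigger value:

```python
def piecewise_signature(data: bytes, trigger: int = 64, window: int = 7) -> str:
    """Emit one hex character per context-triggered block; signature length
    adapts to file size, which is what lets dissimilar sizes be compared."""
    sig, win, h, acc = [], [], 0, 0
    for byte in data:
        win.append(byte)
        h += byte
        if len(win) > window:
            h -= win.pop(0)                    # rolling sum over a short window
        acc = (acc * 31 + byte) & 0xFFFFFFFF   # per-block accumulator
        if h % trigger == trigger - 1:         # context-triggered boundary
            sig.append("0123456789abcdef"[acc & 0xF])
            acc = 0
    return "".join(sig)
```

Two files' signatures are then compared with an edit-distance-style score; raising `trigger` coarsens the blocks, which is how such schemes trade signature size against sensitivity.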

14 pages, 1674 KiB  
Article
Reversible Data Hiding in JPEG Images Using Quantized DC
by Suah Kim, Fangjun Huang and Hyoung Joong Kim
Entropy 2019, 21(9), 835; https://doi.org/10.3390/e21090835 - 26 Aug 2019
Cited by 13 | Viewed by 4611
Abstract
Reversible data hiding in JPEG images has become an important topic due to the prevalence and overwhelming support of the JPEG image format. Much of the existing work embeds in the quantized AC (alternating current) coefficients to maximize embedding capacity while minimizing distortion and file size increase. Traditionally, the quantized DC (direct current) coefficients are not used for embedding, on the assumption that embedding in the DCs causes more distortion than embedding in the ACs. However, for data analytics that extract fine details as features, distortion in the ACs is unacceptable, because they represent the fine details of the image. In this paper, we propose a novel reversible data hiding method that embeds efficiently in the DC coefficients, using a novel DC prediction method to decrease the entropy of the prediction-error histogram. The embedded image has a higher PSNR and embedding capacity with a smaller file size increase, and the proposed method preserves all the fine details of the image.
(This article belongs to the Special Issue Entropy Based Data Hiding)
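
A minimal sketch of prediction-error expansion on DC values, the reversible mechanism the abstract describes; the paper's actual predictor is more elaborate than the left-neighbour one used here.

```python
def embed_dc(dcs, bits):
    """dcs: quantized DC values; bits: iterable of 0/1 payload bits.
    Errors 0 and 1 are expanded to carry a bit; errors >= 2 are shifted."""
    out, it = [dcs[0]], iter(bits)
    for i in range(1, len(dcs)):
        e = dcs[i] - dcs[i - 1]           # predict from original left neighbour
        if e in (0, 1):
            e = 2 * e + next(it, 0)       # expand: embeds one payload bit
        elif e >= 2:
            e += 2                        # shift out of the expansion range
        out.append(dcs[i - 1] + e)        # negative errors pass through
    return out

def extract_dc(marked):
    """Invert embed_dc exactly: recover the original DCs and the payload."""
    dcs, bits = [marked[0]], []
    for i in range(1, len(marked)):
        e = marked[i] - dcs[i - 1]        # left neighbour already restored
        if 0 <= e <= 3:
            bits.append(e & 1)
            e //= 2
        elif e >= 4:
            e -= 2
        dcs.append(dcs[i - 1] + e)
    return dcs, bits
```

The flatter the prediction-error histogram's peak at 0/1, the more bits fit per shift, which is why a better DC predictor directly buys capacity.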

13 pages, 189 KiB  
Article
Plant Species Restoration: Effects of Different Founding Patterns on Sustaining Future Population Size and Genetic Diversity
by Steven H. Rogstad and Stephan Pelikan
Sustainability 2013, 5(3), 1304-1316; https://doi.org/10.3390/su5031304 - 20 Mar 2013
Cited by 5 | Viewed by 6094
Abstract
Efforts to sustain the earth's biodiversity will include the establishment and manipulation of isolated rescue populations, derived either via in situ fragmentation or under ex situ circumstances. For target species, especially those with limited propagation resources, major goals of such projects include both optimizing population size and preserving genetic diversity. Rescue populations will be founded in a variety of ways, but little is known about how the geometric patterning of the founders affects population growth and the retention of genetic diversity. We have developed a computer program, NEWGARDEN, to investigate this issue for plant species that vary in life history characteristics. To use NEWGARDEN, input files are created that specify the size and structure of the preserve, the positioning and genetic diversity of the founders, and the life history characteristics of the species (e.g., age-specific reproduction and mortality, gene dispersal distances, rates of selfing). The program conducts matings with consequent offspring establishment, so that the virtual population develops through generations as constrained by the input, and output statistics allow population development to be compared across populations that differ in one or more input conditions. Here, using NEWGARDEN analyses modelling a triennial species, we show that rescue-project managers will often have to consider the geometric placement of founders carefully to minimize effort while maximizing population growth and the conservation of genetic diversity, such considerations being heavily dependent on the life history characteristics of the particular species.
(This article belongs to the Special Issue Terrestrial Ecosystem Restoration)
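
A toy sketch of the kind of experiment NEWGARDEN automates, comparing founder placements by how many founder alleles survive limited-distance mating; all rates and distances here are invented, where NEWGARDEN reads them from its input files.

```python
import numpy as np

def simulate(founders_xy, n_gen=30, disperse=5.0, capacity=200, seed=1):
    """Return the count of distinct founder alleles retained after n_gen
    generations of proximity-limited mating with local seed dispersal."""
    rng = np.random.default_rng(seed)
    xy = np.array(founders_xy, float)
    alleles = [{2 * i, 2 * i + 1} for i in range(len(xy))]  # unique per founder
    for _ in range(n_gen):
        new_xy, new_al = [], []
        for _ in range(capacity):
            mom = rng.integers(len(xy))
            dist = np.linalg.norm(xy - xy[mom], axis=1)
            dad = rng.choice(np.flatnonzero(dist < disperse))  # pollen range
            new_xy.append(xy[mom] + rng.normal(0, disperse / 2, 2))
            new_al.append({rng.choice(list(alleles[mom])),
                           rng.choice(list(alleles[dad]))})
        xy, alleles = np.array(new_xy), new_al
    return len(set().union(*alleles))

clustered = [(i % 4, i // 4) for i in range(16)]          # tight 4x4 grid
spread = [(5 * (i % 4), 5 * (i // 4)) for i in range(16)]  # spaced at dispersal range
print(simulate(clustered), simulate(spread))  # retention differs with geometry
```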
