Computers, Volume 7, Issue 2 (June 2018) – 17 articles

Cover Story: Performance requirements of applications continue to increase, and manycore architectures are being built to meet them. General-purpose architectures are less efficient than specialized ones, and this holds at the manycore scale as well. However, exploring the design space of manycore architectures, especially heterogeneous ones, is a real challenge. We therefore developed an automated design method that builds manycore architectures from cores specialized for the applications within a domain. The resulting architecture targets not a single application but a whole domain of applications. We believe that future manycore architectures will most likely contain specialized components, and the proposed design method will facilitate the design of such architectures.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
17 pages, 510 KiB  
Article
Improving Efficiency of Edge Computing Infrastructures through Orchestration Models
by Raffaele Bolla, Alessandro Carrega, Matteo Repetto and Giorgio Robino
Computers 2018, 7(2), 36; https://doi.org/10.3390/computers7020036 - 20 Jun 2018
Cited by 4 | Viewed by 7163
Abstract
Edge computing is an effective paradigm for proximity in computation, but must inexorably face mobility issues and traffic fluctuations. While software orchestration may provide effective service handover between different edge infrastructures, seamless operation with negligible service disruption necessarily requires pre-provisioning and the need to leave some network functions idle for most of the time, which eventually results in large energy waste and poor efficiency. Existing consolidation algorithms are largely ineffective in these conditions because they lack context, i.e., the knowledge of which resources are effectively used and which ones are just provisioned for other purposes (e.g., redundancy, resilience, scaling, migration). Though the concept is rather straightforward, its feasibility in real environments must be demonstrated. Motivated by the lack of energy-efficiency mechanisms in cloud management software, we have developed a set of extensions to OpenStack for power management and Quality of Service, explicitly targeting the introduction of more context for applications. In this paper, we briefly describe the overall architecture and evaluate its efficiency and effectiveness. We analyze performance metrics and their relationship with power consumption, hence extending the analysis to specific aspects that cannot be investigated by software simulations. We also show how the usage of context information can greatly improve the effectiveness of workload consolidation in terms of energy saving. Full article
(This article belongs to the Special Issue Mobile Edge Computing)
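The role of context in consolidation can be illustrated with a small sketch. This is not the authors' OpenStack extension; the instance tags, the single-resource capacity model, and all names are assumptions for illustration. Active instances are packed onto as few hosts as possible, while pre-provisioned standbys contribute no load.

```python
# Illustrative context-aware consolidation: pack only *active* load
# (first-fit decreasing); standby/pre-provisioned instances add no load,
# so they do not force extra hosts to stay powered on.

def consolidate(instances, host_capacity):
    """instances: list of (name, cpu_demand, is_active) tuples."""
    active = sorted((i for i in instances if i[2]), key=lambda i: -i[1])
    hosts = []  # each host: {"free": remaining capacity, "vms": [names]}
    for name, demand, _ in active:
        for h in hosts:
            if h["free"] >= demand:
                h["free"] -= demand
                h["vms"].append(name)
                break
        else:
            hosts.append({"free": host_capacity - demand, "vms": [name]})
    # Standby instances piggyback on the first powered host.
    for name, _, _ in (i for i in instances if not i[2]):
        if not hosts:
            hosts.append({"free": host_capacity, "vms": []})
        hosts[0]["vms"].append(name)
    return hosts
```

A context-blind consolidator would count the standby instances as real load and keep more hosts powered on; here they ride along on hosts that are already running.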
5 pages, 1025 KiB  
Correction
Correction: Mahmood et al. Hard Real-Time Task Scheduling in Cloud Computing Using an Adaptive Genetic Algorithm. Computers 2017, 6, 15
by Amjad Mahmood, Salman A. Khan and Rashed A. Bahlool
Computers 2018, 7(2), 35; https://doi.org/10.3390/computers7020035 - 15 Jun 2018
Cited by 4 | Viewed by 5245
Abstract
In the article by Mahmood et al. [1], the results for a genetic algorithm (GA), adaptive genetic algorithm (AGA), and greedy algorithm were not correctly reported in Section 5 due to a programming error [...] Full article
21 pages, 1837 KiB  
Review
Recommendations for Integrating a P300-Based Brain Computer Interface in Virtual Reality Environments for Gaming
by Grégoire Cattan, Cesar Mendoza, Anton Andreev and Marco Congedo
Computers 2018, 7(2), 34; https://doi.org/10.3390/computers7020034 - 28 May 2018
Cited by 21 | Viewed by 9192
Abstract
The integration of a P300-based brain–computer interface (BCI) into virtual reality (VR) environments is promising for the video games industry. However, it faces several limitations, mainly due to hardware constraints and the stimulation required by the BCI. The main limitation is still the low transfer rate that can be achieved with current BCI technology. The goal of this paper is to review current limitations and to provide application creators with design recommendations to overcome them. We also review current VR and BCI commercial products in relation to the design of video games. An essential recommendation is to use the BCI only for non-complex and non-critical tasks in the game. Also, the BCI should be used to control actions that are naturally integrated into the virtual world. Finally, adventure and simulation games, especially cooperative (multi-user) ones, appear to be the best candidates for designing an effective VR game enriched by BCI technology. Full article
(This article belongs to the Special Issue Advances in Mobile Augmented Reality)
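The "low transfer rate" limitation discussed above is conventionally quantified with the Wolpaw information transfer rate; a minimal sketch (the function name and parameters are illustrative, not from the paper):

```python
from math import log2

def itr_bits_per_min(n_classes, accuracy, trial_seconds):
    """Wolpaw information transfer rate for an n-class selection task.

    accuracy is the probability of a correct selection; one trial takes
    trial_seconds. Returns bits per minute.
    """
    p = accuracy
    if p <= 1.0 / n_classes:
        return 0.0                       # no better than chance
    if p == 1.0:
        return log2(n_classes) * 60.0 / trial_seconds
    bits = (log2(n_classes) + p * log2(p)
            + (1 - p) * log2((1 - p) / (n_classes - 1)))
    return bits * 60.0 / trial_seconds
```

For example, a 36-symbol P300 speller at 90% accuracy and 10 s per selection yields roughly 25 bits/min, which is why the recommendation above is to reserve the BCI for non-critical, low-bandwidth actions.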
18 pages, 2645 KiB  
Article
User Experience in Mobile Augmented Reality: Emotions, Challenges, Opportunities and Best Practices
by Amir Dirin and Teemu H. Laine
Computers 2018, 7(2), 33; https://doi.org/10.3390/computers7020033 - 21 May 2018
Cited by 55 | Viewed by 15859
Abstract
Mobile Augmented Reality (MAR) is gaining strong momentum to become a major interactive technology that can be applied across domains and purposes. The rapid proliferation of MAR applications in global mobile application markets has been fueled by a range of freely available MAR software development kits and content development tools, some of which enable the creation of MAR applications even without programming skills. Despite the recent advances of MAR technology and tools, there are still many challenges associated with MAR from the User Experience (UX) design perspective. In this study, we first define UX as the emotions that the user encounters while using a service, a product or an application and then explore the recent research on the topic. We present two case studies, a commercial MAR experience and our own Virtual Campus Tour MAR application, and evaluate them from the UX perspective, with a focus on emotions. Next, we synthesize the findings from previous research and the results of the case study evaluations to form sets of challenges, opportunities and best practices related to UX design of MAR applications. Based on the identified best practices, we finally present an updated version of the Virtual Campus Tour. The results can be used for improving UX design of future MAR applications, thus making them emotionally engaging. Full article
(This article belongs to the Special Issue Advances in Mobile Augmented Reality)
14 pages, 5626 KiB  
Article
Air Condition’s PID Controller Fine-Tuning Using Artificial Neural Networks and Genetic Algorithms
by Maryam Malekabadi, Majid Haghparast and Fatemeh Nasiri
Computers 2018, 7(2), 32; https://doi.org/10.3390/computers7020032 - 21 May 2018
Cited by 9 | Viewed by 6487
Abstract
In this paper, a Proportional–Integral–Derivative (PID) controller is fine-tuned using artificial neural networks and evolutionary algorithms. In particular, the PID coefficients are adjusted online using a multi-layer neural network: a feed-forward multi-layer perceptron with one hidden layer and sigmoid activation functions, whose weights are optimized using a genetic algorithm, one type of evolutionary algorithm. The validation data were derived from the desired system response. The proposed methodology was evaluated against other well-known techniques of PID parameter tuning. Full article
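A reduced sketch of the idea follows. Note the simplification: the paper evolves the weights of a perceptron that adapts the gains online, whereas here, for brevity, the genetic algorithm tunes fixed PID gains directly against a first-order plant; all constants and names are illustrative assumptions.

```python
import random

def simulate(gains, steps=200, dt=0.05, tau=1.0):
    """Unit-step response of a first-order plant dy/dt = (u - y)/tau
    under PID control; returns the integral of absolute error (IAE)."""
    kp, ki, kd = gains
    y = integ = prev_e = 0.0
    iae = 0.0
    for _ in range(steps):
        e = 1.0 - y                      # setpoint = 1
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv
        y += (u - y) / tau * dt          # plant update
        prev_e = e
        iae += abs(e) * dt
        if abs(y) > 1e6:                 # penalise unstable gain sets
            return 1e9
    return iae

def ga_tune(pop=30, gens=40, seed=1):
    """Elitist GA over (Kp, Ki, Kd) with averaging crossover and
    Gaussian mutation; lower IAE is fitter."""
    rng = random.Random(seed)
    popn = [[rng.uniform(0.0, 10.0) for _ in range(3)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=simulate)
        elite = popn[: pop // 3]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            children.append([max(0.0, (x + y) / 2 + rng.gauss(0.0, 0.3))
                             for x, y in zip(a, b)])
        popn = elite + children
    return min(popn, key=simulate)
```

The evolved gains should track the step markedly better than a plain proportional controller, which settles with a persistent steady-state error on this plant.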
22 pages, 5208 KiB  
Article
Hardware-Assisted Secure Communication in Embedded and Multi-Core Computing Systems
by Ahmed Saeed, Ali Ahmadinia and Mike Just
Computers 2018, 7(2), 31; https://doi.org/10.3390/computers7020031 - 15 May 2018
Viewed by 6781
Abstract
With the sharp rise of functionalities and connectivities in multi-core embedded systems, these systems have become notably vulnerable to security attacks. Conventional software security mechanisms fail to deliver full safety and also affect the system performance significantly. In this paper, a hardware-based security procedure is proposed to handle critical information in real-time through comprehensive separation without needing any help from the software. To evaluate the proposed system, an authentication system based on an image processing solution has been implemented on a reconfigurable device. In addition, the proposed security mechanism is evaluated for networks-on-chip, where minimal area, power consumption, and performance overheads are achieved. Full article
(This article belongs to the Special Issue Multi-Core Systems-On-Chips Design and Optimization)
13 pages, 1657 KiB  
Article
A New Strategy for Energy Saving in Spectrum-Sliced Elastic Optical Networks
by Igor M. Queiroz and Karcius D. R. Assis
Computers 2018, 7(2), 30; https://doi.org/10.3390/computers7020030 - 11 May 2018
Cited by 1 | Viewed by 5431
Abstract
In this paper, we propose a new approach for energy saving in Elastic Optical Networks (EONs) under physical impairments, based on solving MILP instances. First, we seek to maximize the traffic served on the network while the blocking probability is maintained below a defined limit. The next step is then to minimize the power consumption of the network. The proposed MILP-based algorithm models the RMLSA problem and considers transponders, optical cross-connects (OXCs), and optical amplifiers as the physical components influencing network optimization. The results show that our approach offers, on average, a reduction of up to 7.7% in the power consumed on the four moderate-sized networks analyzed. Full article
22 pages, 3771 KiB  
Article
Mixed Cryptography Constrained Optimization for Heterogeneous, Multicore, and Distributed Embedded Systems
by Hyunsuk Nam and Roman Lysecky
Computers 2018, 7(2), 29; https://doi.org/10.3390/computers7020029 - 24 Apr 2018
Cited by 7 | Viewed by 5299
Abstract
Embedded systems continue to execute computational- and memory-intensive applications with vast data sets, dynamic workloads, and dynamic execution characteristics. Adaptive distributed and heterogeneous embedded systems are increasingly critical in supporting dynamic execution requirements. With pervasive network access within these systems, security is a critical design concern that must be considered and optimized within such dynamically adaptive systems. This paper presents a modeling and optimization framework for distributed, heterogeneous embedded systems. A dataflow-based modeling framework for adaptive streaming applications integrates models for computational latency, mixed cryptographic implementations for inter-task and intra-task communication, security levels, communication latency, and power consumption. For the security model, we present a level-based modeling of cryptographic algorithms using mixed cryptographic implementations. This level-based security model enables the development of an efficient, multi-objective genetic optimization algorithm to optimize security and energy consumption subject to current application requirements and security policy constraints. The presented methodology is evaluated using a video-based object detection and tracking application and several synthetic benchmarks representing various application types and dynamic execution characteristics. Experimental results demonstrate the benefits of a mixed cryptographic algorithm security model compared to using a single, fixed cryptographic algorithm. Results also highlight how security policy constraints can yield increased security strength and cryptographic diversity for the same energy constraint. Full article
(This article belongs to the Special Issue Multi-Core Systems-On-Chips Design and Optimization)
28 pages, 8680 KiB  
Article
Comparing the Cost of Protecting Selected Lightweight Block Ciphers against Differential Power Analysis in Low-Cost FPGAs
by William Diehl, Abubakr Abdulgadir, Jens-Peter Kaps and Kris Gaj
Computers 2018, 7(2), 28; https://doi.org/10.3390/computers7020028 - 23 Apr 2018
Cited by 8 | Viewed by 7640
Abstract
Lightweight block ciphers are an important topic in the Internet of Things (IoT) since they provide moderate security while requiring fewer resources than the Advanced Encryption Standard (AES). Ongoing cryptographic contests and standardization efforts evaluate lightweight block ciphers on their resistance to power analysis side channel attack (SCA), and the ability to apply countermeasures. While some ciphers have been individually evaluated, a large-scale comparison of resistance to side channel attack and the formulation of absolute and relative costs of implementing countermeasures is difficult, since researchers typically use varied architectures, optimization strategies, technologies, and evaluation techniques. In this research, we leverage the Test Vector Leakage Assessment (TVLA) methodology and the FOBOS SCA framework to compare FPGA implementations of AES, SIMON, SPECK, PRESENT, LED, and TWINE, using a choice of architecture targeted to optimize throughput-to-area (TP/A) ratio and suitable for introducing countermeasures to Differential Power Analysis (DPA). We then apply an equivalent level of protection to the above ciphers using 3-share threshold implementations (TI) and verify the improved resistance to DPA. We find that SIMON has the highest absolute TP/A ratio of protected versions, as well as the lowest relative cost of protection in terms of TP/A ratio. Additionally, PRESENT uses the least energy per bit (E/bit) of all protected implementations, while AES has the lowest relative cost of protection in terms of increased E/bit. Full article
(This article belongs to the Special Issue Reconfigurable Computing Technologies and Applications)
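At first order, the TVLA methodology used above reduces to Welch's t-test between fixed-input and random-input power traces, with |t| > 4.5 at any sample flagging leakage. A minimal sketch (the trace layout, rows of aligned samples, is an assumption for illustration):

```python
from statistics import mean, variance

def welch_t(fixed, random_):
    """Welch's t-statistic between two sets of power samples taken at
    the same time point of the two trace groups."""
    m1, m2 = mean(fixed), mean(random_)
    v1, v2 = variance(fixed), variance(random_)
    n1, n2 = len(fixed), len(random_)
    return (m1 - m2) / ((v1 / n1 + v2 / n2) ** 0.5)

def leaks(fixed_traces, random_traces, threshold=4.5):
    """First-order non-specific TVLA test: one boolean per sample point,
    True where |t| exceeds the 4.5 threshold."""
    return [abs(welch_t(f_col, r_col)) > threshold
            for f_col, r_col in zip(zip(*fixed_traces),
                                    zip(*random_traces))]
```

A protected (e.g., 3-share threshold) implementation is expected to keep every sample point below the threshold at the evaluated number of traces.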
28 pages, 1372 KiB  
Article
Designing Domain-Specific Heterogeneous Architectures from Dataflow Programs
by Süleyman Savas, Zain Ul-Abdin and Tomas Nordström
Computers 2018, 7(2), 27; https://doi.org/10.3390/computers7020027 - 22 Apr 2018
Cited by 6 | Viewed by 8648
Abstract
The last ten years have seen performance and power requirements pushing computer architectures using only a single core towards so-called manycore systems with hundreds of cores on a single chip. To further increase performance and energy efficiency, we are now seeing the development of heterogeneous architectures with specialized and accelerated cores. However, designing these heterogeneous systems is a challenging task due to their inherent complexity. We proposed an approach for designing domain-specific heterogeneous architectures based on instruction augmentation through the integration of hardware accelerators into simple cores. These hardware accelerators were determined based on their common use among applications within a certain domain. The objective was to generate heterogeneous architectures by integrating many of these accelerated cores and connecting them with a network-on-chip. The proposed approach aimed to ease the design of heterogeneous manycore architectures—and, consequently, exploration of the design space—by automating the design steps. To evaluate our approach, we enhanced our software tool chain with a tool that can generate accelerated cores from dataflow programs. This new tool chain was evaluated with the aid of two use cases: radar signal processing and mobile baseband processing. We could achieve an approximately 4× improvement in performance, while executing complete applications on the augmented cores with a small impact (2.5–13%) on area usage. The generated accelerators are competitive, achieving more than 90% of the performance of hand-written implementations. Full article
(This article belongs to the Special Issue Multi-Core Systems-On-Chips Design and Optimization)
19 pages, 2377 KiB  
Article
Feedback-Based Admission Control for Firm Real-Time Task Allocation with Dynamic Voltage and Frequency Scaling
by Piotr Dziurzanski and Amit Kumar Singh
Computers 2018, 7(2), 26; https://doi.org/10.3390/computers7020026 - 16 Apr 2018
Cited by 5 | Viewed by 5473
Abstract
Feedback-based mechanisms can be employed to monitor the performance of Multiprocessor Systems-on-Chips (MPSoCs) and steer the task execution even if the exact knowledge of the workload is unknown a priori. In particular, traditional proportional-integral controllers can be used with firm real-time tasks to either admit them to the processing cores or reject them so as not to violate the timeliness of the already admitted tasks. During periods with a lower computational power demand, dynamic voltage and frequency scaling (DVFS) can be used to reduce the dissipation of energy in the cores while still not violating the tasks' time constraints. Depending on the workload pattern and weight, the platform size, and the granularity of DVFS, energy savings can reach up to 60% at the cost of a slight performance degradation. Full article
(This article belongs to the Special Issue Multi-Core Systems-On-Chips Design and Optimization)
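A minimal sketch of feedback-based admission follows (this is not the authors' controller; the set-point, gains, and utilisation model are illustrative assumptions): a PI controller tracks core utilisation against a set-point, and a firm real-time task is rejected when the controller output turns negative.

```python
def make_pi_admission(setpoint=0.9, kp=0.5, ki=0.1):
    """Build a PI admission controller closed over its integral state.

    admit(task_util, current_util) -> bool: the error is the headroom
    left if the task were admitted; the PI output decides admission.
    """
    integ = 0.0

    def admit(task_util, current_util):
        nonlocal integ
        error = setpoint - (current_util + task_util)
        integ += error                   # integral of the error
        output = kp * error + ki * integ
        return output >= 0.0

    return admit
```

When utilisation stays low the same mechanism signals headroom, which is the cue for DVFS to drop the core frequency and save energy without endangering admitted deadlines.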
21 pages, 2181 KiB  
Article
Scheduling and Tuning for Low Energy in Heterogeneous and Configurable Multicore Systems
by Mohamad Hammam Alsafrjalani and Ann Gordon-Ross
Computers 2018, 7(2), 25; https://doi.org/10.3390/computers7020025 - 14 Apr 2018
Cited by 3 | Viewed by 5282
Abstract
Heterogeneous and configurable multicore systems provide hardware specialization to meet disparate application hardware requirements. However, effective multicore system specialization can require a priori knowledge of the applications, application profiling information, and/or dynamic hardware tuning to schedule and execute applications on the most energy efficient cores. Furthermore, even though highly disparate core heterogeneity and/or highly configurable parameters with numerous potential parameter values result in more fine-grained specialization and higher energy savings potential, these large design spaces are challenging to efficiently explore. To address these challenges, we propose a novel configuration-subsetted heterogeneous and configurable multicore system, wherein each core offers a small subset of the design space, and propose a novel scheduling and tuning (SaT) algorithm to efficiently exploit the energy savings potential of this system. Our proposed architecture and algorithm require no a priori application knowledge or profiling, and incur minimal runtime overhead. Results reveal energy savings potential and insights on energy trade-offs in heterogeneous, configurable systems. Full article
(This article belongs to the Special Issue Multi-Core Systems-On-Chips Design and Optimization)
15 pages, 2038 KiB  
Article
Bridging the Gap between ABM and MAS: A Disaster-Rescue Simulation Using Jason and NetLogo
by Wulfrano Arturo Luna-Ramirez and Maria Fasli
Computers 2018, 7(2), 24; https://doi.org/10.3390/computers7020024 - 11 Apr 2018
Cited by 8 | Viewed by 7562
Abstract
An agent is an autonomous computer system situated in an environment to fulfill a design objective. Multi-Agent Systems aim to solve problems in a flexible and robust way by assembling sets of agents interacting in cooperative or competitive ways for the sake of possibly common objectives. Multi-Agent Systems have been applied to several domains, ranging from industrial sectors and e-commerce to health and even entertainment. Agent-Based Modeling, a form of Multi-Agent System, is a technique used to study complex systems in a wide range of domains. A natural or social system can be represented, modeled and explained through a simulation based on agents and interactions. Such a simulation can comprise a variety of agent architectures, such as reactive and cognitive agents. Although cognitive agents are highly relevant for simulating social systems, due to their capability of modelling aspects of human behaviour ranging from individuals to crowds, they still have not been applied extensively. Disaster-Rescue simulation is a challenging and socially relevant domain that can benefit from using cognitive agents to develop a realistic simulation. In this paper, a Multi-Agent System applied to the Disaster-Rescue domain involving cognitive agents based on the Belief–Desire–Intention architecture is presented. The system aims to bridge the gap in combining Agent-Based Modelling and Multi-Agent Systems approaches by integrating two major platforms in the fields of Agent-Based Modeling and Belief–Desire–Intention multi-agent systems, namely, NetLogo and Jason. Full article
18 pages, 5564 KiB  
Article
Failure Detection of Composites with Control System Corrective Response in Drone System Applications
by Mark Bowkett, Kary Thanapalan and Ewen Constant
Computers 2018, 7(2), 23; https://doi.org/10.3390/computers7020023 - 09 Apr 2018
Cited by 8 | Viewed by 6855
Abstract
The paper describes a novel method for the detection of damage in carbon composites as used in drone frames. When damage is detected, a further novel corrective response is initiated in the quadcopter flight controller to switch from a four-arm control system to a three-arm control system. This is made possible by a symmetrical frame, which allows for a balanced weight distribution in both the undamaged quadcopter and the fallback tri-copter layout. The resulting work allows for continued flight where this was not previously possible. Further development work includes improved flight stability with the aid of an underslung load model. This is beneficial to the quadcopter because a damaged arm attached to the main body by the motor wires behaves as an underslung load. The underslung load work also transfers to a dual master-and-slave drone system in which the master drone transports a smaller slave drone by a tether, which acts as an underslung load. Full article
16 pages, 3241 KiB  
Article
Levels for Hotline Miami 2: Wrong Number Using Procedural Content Generation
by Joseph Alexander Brown, Bulat Lutfullin, Pavel Oreshin and Ilya Pyatkin
Computers 2018, 7(2), 22; https://doi.org/10.3390/computers7020022 - 04 Apr 2018
Cited by 1 | Viewed by 11724
Abstract
Procedural Content Generation is the automatic generation of game content, allowing for a decrease in developer resources while adding to the replayability of a digital game. It has been found to be a highly effective method in roguelike games, with which Hotline Miami 2: Wrong Number shares a number of characteristics. Search-based procedural content generation, in this case a genetic algorithm, allows for the creation of levels that meet a number of designer-set requirements. The proposed generator provides automatic creation of game content for a commercially available game: the level design, object placement, and enemy placement. Full article
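A search-based generator of this kind can be sketched in a few lines. This is purely illustrative: the tile set, the fitness terms, and the designer targets are assumptions, not those of the actual generator.

```python
import random

def fitness(level, target_enemies=5, target_walls=0.3):
    """Designer-set requirements encoded as penalties: a desired enemy
    count and wall density. Lower is better; 0 meets both targets."""
    cells = [c for row in level for c in row]
    enemies = cells.count("E")
    walls = cells.count("#") / len(cells)
    return abs(enemies - target_enemies) + abs(walls - target_walls) * 10

def evolve(width=8, height=8, pop=40, gens=60, seed=7):
    """Truncation-selection GA over tile grids with point mutation."""
    rng = random.Random(seed)

    def rand_level():
        return [[rng.choice(["#", ".", ".", "E"]) for _ in range(width)]
                for _ in range(height)]

    popn = [rand_level() for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness)
        popn = popn[: pop // 2]          # keep the fitter half
        while len(popn) < pop:
            parent = rng.choice(popn[:10])
            child = [row[:] for row in parent]
            y, x = rng.randrange(height), rng.randrange(width)
            child[y][x] = rng.choice(["#", ".", "E"])  # point mutation
            popn.append(child)
    return min(popn, key=fitness)
```

Real designer requirements would add structural terms, e.g., a reachability check from entrance to exit, as extra penalty components in the same fitness function.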
23 pages, 3949 KiB  
Article
Low Effort Design Space Exploration Methodology for Configurable Caches
by Mohamad Hammam Alsafrjalani and Ann Gordon-Ross
Computers 2018, 7(2), 21; https://doi.org/10.3390/computers7020021 - 27 Mar 2018
Cited by 2 | Viewed by 5386
Abstract
Designers can reduce design space exploration time and effort using the design space subsetting method, which removes energy-redundant configurations. However, the subsetting method requires a priori knowledge of all applications. We analyze the impact of a priori application knowledge on subset quality by varying the amount of a priori application information available to designers at design time, from no information to a general knowledge of the application domain. The results show that only a small set of applications representative of the anticipated applications' general domains alleviates the design effort and is sufficient to provide energy savings within 5.6% of the complete, unsubsetted design space. Furthermore, since using a small set of applications is likely to reduce the design space exploration time, we analyze and quantify the impact of a priori application knowledge on the speedup in the execution time to select the desired configurations. The results reveal that a basic knowledge of the anticipated applications reduces the subset design space exploration time by up to 6.6×. Full article
(This article belongs to the Special Issue Multi-Core Systems-On-Chips Design and Optimization)
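The subsetting idea can be sketched as a greedy selection (illustrative only; the `energy` table and the greedy criterion are assumptions, not the paper's method): keep the k configurations that minimise total energy when each application falls back to its best retained configuration.

```python
def subset_configs(energy, k):
    """Greedy design-space subsetting.

    energy[app][cfg] is the energy of running `app` under cache
    configuration `cfg`. Returns k configurations chosen so that the
    sum over apps of each app's best retained configuration is minimal.
    """
    apps = list(energy)
    configs = list(next(iter(energy.values())))
    chosen = []
    for _ in range(k):
        best = min(
            (c for c in configs if c not in chosen),
            key=lambda c: sum(min(energy[a][x] for x in chosen + [c])
                              for a in apps))
        chosen.append(best)
    return chosen
```

With only a representative sample of the anticipated domain in `energy`, the same greedy pass runs over a much smaller table, which is the source of the exploration-time savings discussed above.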
23 pages, 1489 KiB  
Article
Battery Modelling and Simulation Using a Programmable Testing Equipment
by Elena Vergori, Francesco Mocera and Aurelio Somà
Computers 2018, 7(2), 20; https://doi.org/10.3390/computers7020020 - 26 Mar 2018
Cited by 29 | Viewed by 9658
Abstract
In this paper, the study and modelling of a lithium-ion battery cell is presented. To test the considered cell, a battery testing system was built using two programmable power units: an electronic load and a power supply. To communicate with them, a software/hardware interface was implemented within the National Instruments (NI) LabVIEW environment. This dedicated laboratory equipment can be used to apply charging/discharging cycles according to user-defined load profiles. The battery modelling and the parameter identification procedure are described. The model was used to estimate the State Of Charge (SOC) under dynamic loading conditions. The most widespread techniques in the field of battery modelling and SOC estimation are implemented and compared. Full article
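One of the standard SOC estimation techniques compared in such studies is Coulomb counting; a minimal sketch (the sign convention, positive current meaning discharge, and the clamping to [0, 1] are assumptions for illustration):

```python
def soc_coulomb(soc0, currents, dt, capacity_ah):
    """Coulomb-counting SOC estimate.

    soc0: initial state of charge in [0, 1]; currents: samples in amps
    (positive = discharge); dt: sample period in seconds; capacity_ah:
    cell capacity in amp-hours. Returns the SOC after each sample.
    """
    soc = soc0
    history = []
    for i in currents:
        soc -= i * dt / 3600.0 / capacity_ah  # integrate charge removed
        soc = min(1.0, max(0.0, soc))         # keep within physical bounds
        history.append(soc)
    return history
```

Coulomb counting drifts with current-sensor bias and an uncertain initial SOC, which is why it is typically compared against, or fused with, model-based estimators under dynamic load profiles.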