# Methodology, Algorithms, and Emerging Tool for Automated Design of Intelligent Integrated Multi-Sensor Systems


## Abstract


## 1. Introduction

This paper presents a methodology for the automated design of intelligent multi-sensor systems (ADIMSS) based on well-established as well as newly evolved signal processing and computational intelligence operators to build an application-specific system. The proposed methodology is implemented in a constantly growing toolbox based on Matlab. Our ADIMSS approach provides both rapid-prototyping properties and the adaptation or reconfiguration properties required when deploying the designed system to a larger volume of hardware instances, i.e., sensors and electronics, and when facing the time-dependent influence of environmental changes and aging. The aim of our emerging tool is to provide flexible and computationally effective solutions, rapid prototyping under constraints, and robustness and fault tolerance at low effort, low cost, and short design time. Such self-x features, e.g., for self-monitoring, -calibrating, -trimming, and -repairing/-healing systems [60], can be achieved at various levels of abstraction, from system and algorithm adaptation down to self-x sensor and system electronics. Our proposed architecture for intelligent sensor system design can be applied in a broad variety of fields. Currently, we are focusing on ambient intelligence, home automation, MEMS (Micro-Electro-Mechanical Systems) based measurement systems, wireless sensor networks, and automotive applications.

## 2. Concepts and Architecture of Multi-Sensor Signal Processing

**Figure 1.** Block diagram of a typical intelligent multi-sensor system. The design in each step or block depends on human observation and assistance.

#### 2.1. Global Optimization

#### 2.2. Local Optimization

#### 2.3. Multi-objective Design of Intelligent Multi-Sensor Systems

**Figure 4.** Objectives and constraints to be considered in the design of intelligent multi-sensor systems.

**Figure 7.** Enhanced design methodology for intelligent multi-sensor systems based on intrinsic and extrinsic optimization.

#### 2.4. Implementation of ADIMSS

## 3. Evolutionary Techniques for Intelligent Sensor Systems Design

#### 3.1. Genetic Algorithm

Recombination is applied with a crossover rate $P_c$, which is typically in the range [0.5, 1.0]. Usually, two parents are selected and a random value is drawn from [0, 1) and compared to $P_c$. If the random value is lower than the crossover rate $P_c$, two offspring are created via recombination of the two parents; otherwise, the offspring are copies of their parents. Recombination operators can be distinguished into two categories, namely, discrete recombination and arithmetic recombination [48].

In arithmetic recombination, an offspring gene is formed as the weighted combination $z_i = \alpha x_i + (1 - \alpha) y_i$, where $x_i$ and $y_i$ are the genes from the first and second parents, respectively, and the parameter $\alpha$ is in the range [0, 1]. The types of arithmetic recombination can be distinguished by how they select the genes for the recombination process: simple arithmetic recombination, single arithmetic recombination, and whole arithmetic recombination. Figure 12 illustrates the recombination process of all arithmetic recombination operators.
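
As a concrete illustration, the whole arithmetic recombination described above can be sketched in a few lines of Python; the function names and default parameter values below are ours, not part of the toolbox:

```python
import random

def whole_arithmetic_recombination(parent1, parent2, alpha=0.5):
    """Every offspring gene is the weighted combination
    alpha*x_i + (1 - alpha)*y_i of the corresponding parent genes."""
    child1 = [alpha * x + (1 - alpha) * y for x, y in zip(parent1, parent2)]
    child2 = [alpha * y + (1 - alpha) * x for x, y in zip(parent1, parent2)]
    return child1, child2

def crossover(parent1, parent2, p_c=0.8, alpha=0.5):
    """Recombine with probability p_c (the crossover rate); otherwise
    the offspring are plain copies of the parents."""
    if random.random() < p_c:
        return whole_arithmetic_recombination(parent1, parent2, alpha)
    return list(parent1), list(parent2)
```

With alpha = 0.5, both offspring coincide at the parents' midpoint; other values of alpha bias the offspring toward one parent.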

Mutation for binary representations operates with a mutation rate $P_m$ and allows each bit to flip (from 1 to 0 or 0 to 1). It is usually suggested to set a very small value for the mutation rate, from 0.001 to 0.01. For integer encodings, bit-flipping mutation is extended to random resetting, so that a new value is chosen at random from the set of permissible values at each position with mutation rate $P_m$. For floating-point representations, a uniform mutation is used, where the value of a selected gene $x_i$ in the offspring is drawn uniformly at random from its domain, given by an interval between a lower bound $L_i$ and an upper bound $U_i$. Table 1 summarizes the most commonly used operators with regard to the representation of individuals.

**Table 1.** Common recombination and mutation operators applied for binary, integer, and floating-point representations.

Representation of solutions | Recombination | Mutation |
---|---|---|
Binary | Discrete | Bit-flipping |
Integer | Discrete | Random resetting |
Floating-point | Discrete, Arithmetic | Uniform |
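
The three mutation operators of Table 1 can be sketched as follows; this is an illustrative Python version with names and defaults of our own choosing:

```python
import random

def bit_flip_mutation(genome, p_m=0.01):
    """Binary representation: flip each bit independently with rate p_m."""
    return [1 - g if random.random() < p_m else g for g in genome]

def random_resetting(genome, permissible, p_m=0.01):
    """Integer representation: with rate p_m, replace a gene by a value
    drawn at random from the set of permissible values."""
    return [random.choice(permissible) if random.random() < p_m else g
            for g in genome]

def uniform_mutation(genome, lower, upper, p_m=0.01):
    """Floating-point representation: with rate p_m, redraw gene x_i
    uniformly from its domain [L_i, U_i]."""
    return [random.uniform(lo, up) if random.random() < p_m else g
            for g, lo, up in zip(genome, lower, upper)]
```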

#### 3.2. Particle Swarm Optimization

The velocity and position of each particle are updated as $v_i(t+1) = w\,v_i(t) + c_1 r_1 (p_i - x_i(t)) + c_2 r_2 (g - x_i(t))$ and $x_i(t+1) = x_i(t) + v_i(t+1)$, where $p_i$ denotes the best position found so far by particle $i$ and $g$ the best position found by the whole swarm. The constants $c_1$ and $c_2$ are positive and referred to as cognitive and social parameters, respectively. They control how far a particle will move in a single iteration and are both typically set to a value of two [37,43], although assigning different values to $c_1$ and $c_2$ sometimes leads to improved performance. ${r}_{1}$ and ${r}_{2} \sim U[0,1]$ are values that introduce randomness into the search process, while $w$ is the so-called inertia weight, whose goal is to control the impact of a particle's past velocity on its current one. This value is typically set to vary linearly from 0.9 to 0.4 during the course of a training run [37,43]. Larger values of $w$ at the start of the optimization allow the particles to explore a large area, while smaller inertia weight coefficients later on refine the search around a local optimum. The general optimization process of PSO is depicted in Figure 13.
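
The update rule described above can be sketched in a few lines of Python; this is a simplified illustration without velocity clamping or boundary handling:

```python
import random

def inertia(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decrease the inertia weight w from w_start to w_end."""
    return w_start - (w_start - w_end) * t / t_max

def pso_step(positions, velocities, pbest, gbest, w, c1=2.0, c2=2.0):
    """One PSO iteration: combine inertia, cognitive, and social terms
    into a new velocity, then move every particle."""
    for i, (x, v) in enumerate(zip(positions, velocities)):
        r1, r2 = random.random(), random.random()
        velocities[i] = [w * vd + c1 * r1 * (pb - xd) + c2 * r2 * (gb - xd)
                         for xd, vd, pb, gb in zip(x, v, pbest[i], gbest)]
        positions[i] = [xd + vd for xd, vd in zip(x, velocities[i])]
    return positions, velocities
```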

#### 3.3. Representation of Individuals

#### 3.4. Parameter Settings and Knowledge Base

#### 3.5. Assessment Functions and Multi-objective Problem

where $(\omega_1, \omega_2, \ldots, \omega_L)$ gives the class affiliations of the patterns and $q_\omega$ denotes the classification accuracy of the set of patterns with the same class affiliation $\omega$.

where $w_j$ computes the weighting factor for the position of the $j$-th nearest neighbor, $D_{i,j}$ is the Euclidean distance between the $i$-th pattern and its $j$-th nearest neighbor, and $R_{i,j}$ denotes the measured contribution of the $i$-th pattern with regard to its $j$-th nearest neighbor. $\omega_i$ denotes the class affiliation of the $i$-th sample, $L$ is the number of classes, and $N_c$ is the number of patterns in the $c$-th class. Typically, the parameter $k$ of this quality measure is set in the range of 5 to 10 [34].

where $D_{i,j}$ is the Euclidean distance between the $i$-th and the $j$-th samples. Thus, the normalized NPCM is computed as follows:

where $\delta(\omega_i, \omega_j)$ is the Kronecker delta, i.e., $\delta(\omega_i, \omega_j) = 1$ for $\omega_i = \omega_j$ (both patterns have the same class affiliation) and 0 otherwise, and $N$ is the number of all patterns. The extended compactness assessment is an aggregation of two assessment functions, i.e., class compactness ($q_{intra}$) and separation ($q_{inter}$), and can be considered a multi-objective function based on the weighting method. The user defines the weighting factor $w$, whose default setting is 0.5.

where the $w_i$ are called weighting factors and the $f_i$ denote assessment values, aggregated as $q = \sum_i w_i f_i$. As with the GA or PSO parameters, these weighting factors $w_i$ can be determined in two ways, i.e., based on the knowledge of the designer (as a lucky setting) or based on a systematic search method.
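
Such a weighted-sum aggregation is a one-liner; the sketch below (our illustration, not toolbox code) shows both the general form and the compactness/separation special case with the default weight w = 0.5:

```python
def aggregate_fitness(assessments, weights):
    """General weighted sum q = sum_i w_i * f_i of assessment values."""
    assert len(assessments) == len(weights)
    return sum(w * f for w, f in zip(weights, assessments))

def extended_compactness(q_intra, q_inter, w=0.5):
    """Two-objective special case: aggregate class compactness q_intra
    and separation q_inter with a single user-defined weight w."""
    return w * q_intra + (1.0 - w) * q_inter
```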

## 4. Sensor Selection and Sensor Parameters

**Figure 16.** A snapshot of response curves of a gas sensor for H_2 (7 ppm), CH_4 (1,000 ppm), ethanol (2.4 ppm), and CO (400 ppm) at 70% relative humidity.

where each sensor $i$ is represented by a selection bit and a parameter value $h_i$, with $i = 1, 2, \ldots, m$. The fitness of each candidate solution in the iterations is evaluated by the classification rate. The designer may define a standard model of an intelligent system to evaluate each of the candidate solutions created by the optimization algorithms (GA or PSO) with regard to the classification rate. Instead of using such a standard model, other assessment functions based on nearest neighbor methods can also be employed directly to evaluate the candidate solutions. The sensor selection and parameter setting require intrinsic optimization, which in this particular case is a resource-consuming method due to the physical stimuli presentations and data acquisition.

**Figure 17.** An intrinsic method of local optimization to obtain the optimum multi-sensor setup and its parameters.

## 5. Signal Processing and Feature Computation

ID_{SPE} = 1 is stated as 'None', which means that no SPE operation will be applied.

**Table 2.** List of signal pre-processing and enhancement (SPE) methods for gas sensor systems included in the ADIMSS design tool.

ID_{SPE} | Method | Equation |
---|---|---|
1 | None | --- |
2 | Differential | $h(t)=\left(s(t)+{\delta}_{a}\right)-\left(s(0)+{\delta}_{a}\right)=s(t)-s(0)$ |
3 | Relative | $h(t)=\frac{s(t)\left(1+{\delta}_{m}\right)}{s(0)\left(1+{\delta}_{m}\right)}=\frac{s(t)}{s(0)}$ |
4 | Fractional | $h(t)=\frac{s(t)-s(0)}{s(0)}$ |
5 | Derivation | $h(t)=s(t)-s(t-1)$ |
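
The pre-processing variants of Table 2 reduce to simple per-sample arithmetic; a compact Python sketch (function and method names are ours):

```python
def preprocess(signal, method):
    """Apply one of the SPE methods of Table 2 to a sampled sensor
    signal s(0), s(1), ..., given as a list of floats."""
    s0 = signal[0]
    if method == "none":
        return list(signal)
    if method == "differential":   # cancels an additive offset delta_a
        return [s - s0 for s in signal]
    if method == "relative":       # cancels a multiplicative drift delta_m
        return [s / s0 for s in signal]
    if method == "fractional":
        return [(s - s0) / s0 for s in signal]
    if method == "derivation":     # first difference s(t) - s(t-1)
        return [s - sp for s, sp in zip(signal[1:], signal[:-1])]
    raise ValueError("unknown SPE method: " + method)
```

Note that the relative and fractional modes assume a non-zero baseline s(0).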

**Table 3.** List of operators for extracting features in sensor signal processing (e.g., gas detection).

ID_{FC} | Method | Parameter |
---|---|---|
1 | Steady-state | none |
2 | Transient integral | none |
3 | Histogram | range of bins, down_bnd, up_bnd |
4 | MLT | thresholds (T_{L}); L = 1, 2, …, n |
5 | GWF | μ_{k}, σ_{k}, M_{k}; k = 1, 2, …, n |
6 | Spectral peaks (FFT) | none |

**Figure 18.** Multi-level thresholding (MLT) used for extracting features from the slope curves of gas sensors. MLT is modified from histogram and amplitude distribution computation by a non-equal range of bins. The three modes of MLT are differential, cumulative (up), and cumulative (down).

The GWF kernels are parameterized by means $\mu_i$, standard deviations $\sigma_i$, and magnitudes $M_i$, where $i = 1, 2, \ldots, k$. These three parameters represent the position, width, and height of the kernels (see Figure 19). The extracted features of GWF and the Gaussian kernel function [26,37] are defined as follows:

where the measurement value of the sensor signal is taken at sampled time index $s = 1, 2, \ldots, N$. The magnitudes of the kernels are in the range from 0 to 1. The optimization strategy of GWF differs from MLT optimization in that the number of kernels evolves according to the values of $M_i$: if the value of $M_i$ is zero, the corresponding kernel is discarded. The maximum number of kernels is defined by the designer.
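
A plausible Python reading of this feature computation, under the assumption that each feature is the magnitude-scaled, Gaussian-weighted sum of the samples (one common form of GWF; the authoritative definition is the equation above):

```python
import math

def gwf_features(signal, kernels):
    """Extract one feature per Gaussian window. `kernels` is a list of
    (mu, sigma, M) triples; kernels with magnitude M = 0 are discarded,
    so the effective number of kernels evolves with the M values."""
    features = []
    for mu, sigma, M in kernels:
        if M == 0.0:
            continue  # zero-magnitude kernels are dropped
        features.append(sum(
            x * M * math.exp(-((s - mu) ** 2) / (2.0 * sigma ** 2))
            for s, x in enumerate(signal, start=1)))
    return features
```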

MLT-DM-GA / MLT-CM-GA | MLT-DM-PSO / MLT-CM-PSO | GWF-GA |
---|---|---|
Population = 20; Selection = roulette wheel; Recombination = discrete, P_{c} = 0.8; Mutation = uniform, P_{m} = 0.01; Elitism = 10%; Maximum generation = 100; Assessment fcn = NPOM (k = 5) | Population = 20; w_{start} = 0.9; w_{end} = 0.4; c_{1} = 2; c_{2} = 2; Update fcn = floating-point; Maximum generation = 100; Assessment fcn = NPOM (k = 5) | Population = 20; Selection = tournament; Recombination = discrete, P_{c} = 0.85; Mutation = uniform, P_{m} = 0.1; Elitism = 10%; Maximum generation = 100; Assessment fcn = k-NN (k = 5) |

**Table 5.** Results of MLT-DM and MLT-CM configured by a human expert (manual) and by GA and PSO (automated), and results of GWF configured by GA (automated).

Method | q_{o} | k-NN (%) with k = 5 | Thresholds or Kernels |
---|---|---|---|
MLT-DM | 0.982 | 99.17 | 13 |
MLT-DM – GA | 0.995 | 99.67 | 9 |
MLT-DM – PSO | 1.00 | 100 | 9 |
MLT-CM | 0.956 | 97.17 | 5 |
MLT-CM – GA | 0.988 | 99.50 | 5 |
MLT-CM – PSO | 0.995 | 99.92 | 5 |
GWF – GA | 0.991 | 98.46 | 3 |

**Figure 20.** Visual inspection of the four-gases data: (a) raw sensor signals and (b) features extracted by evolved Gaussian kernels.

## 6. Dimensionality Reduction

**Figure 21.** Process of intelligent multi-sensor system design focused on the structure optimization by elimination of redundant feature computation and sensors.

#### 6.1. AFS with Acquisition Cost

where $C_s$ denotes the sum of the costs of the selected features, $C_t$ denotes the total cost of all features, $f_s$ is the number of selected features, and $f_t$ is the number of all features. Table 6 shows an example of the cost values defined by the designer for each mathematical operation used to extract features in the feature computation methods.
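
One way to fold acquisition cost into the optimization is a weighted fitness that rewards accuracy while penalizing the normalized cost $C_s/C_t$ and the feature ratio $f_s/f_t$. The sketch below is hypothetical: the aggregation form and the weight values are illustrative, not the ones used in the paper.

```python
def afsc_fitness(accuracy, selected_costs, all_costs,
                 w_acc=0.7, w_cost=0.2, w_feat=0.1):
    """Hypothetical AFSC fitness: classification accuracy in [0, 1] traded
    off against normalized acquisition cost C_s/C_t and the fraction of
    selected features f_s/f_t. Weights are illustrative."""
    c_s, c_t = sum(selected_costs), sum(all_costs)
    f_s, f_t = len(selected_costs), len(all_costs)
    return (w_acc * accuracy
            + w_cost * (1.0 - c_s / c_t)
            + w_feat * (1.0 - f_s / f_t))
```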

**Table 6.** Cost values for the basic operations most commonly used to evaluate the computational effort of feature computation methods.

No. | Operation | Cost |
---|---|---|
1 | Addition (+) | 1 |
2 | Subtraction (-) | 1 |
3 | Multiplication (*) | 4 |
4 | Division (/) | 4 |
5 | Comparison (>, ≥, ≤, <, =, ≠) | 2 |
6 | Square root | 6 |
7 | Exponential (e^{x}) | 8 |
8 | Logarithm | 8 |

AFS - GA | AFS - PSO | Eye image data |
---|---|---|
Population = 20; Selection = roulette wheel; Recombination = discrete, P_{c} = 0.8; Mutation = uniform, P_{m} = 0.01; Elitism = 10%; Maximum generation = 100; Assessment fcn = NPOM (k = 5) | Population = 20; w_{start} = 0.9; w_{end} = 0.4; c_{1} = 2; c_{2} = 2; Update fcn = binary; Maximum generation = 100; Assessment fcn = NPOM (k = 5) | Gabor filter = 12 features; ELAC = 13 features; LOC = 33 features; Cost: Gabor filter = 6358 per feature; ELAC = 3179 per feature; LOC = 1445 per feature |

**Table 8.** The AFS and AFSC results for eye-image data [19].

Method | Cost | Features | 9-NN (%) | RNN (%) | NNs (%) |
---|---|---|---|---|---|
without AFS | 165308 | 58 | 96.72 | 80.33 | 97.87 |
AFS-GA | 53176 | 16 | 98.36 | 95.08 | 96.81 |
AFSC-GA | 13872 | 6 | 96.72 | 95.08 | 96.81 |
AFS-PSO | 45951 | 18 | 100 | 95.08 | 98.94 |
AFSC-PSO | 10404 | 6 | 96.72 | 98.36 | 98.94 |

#### 6.2. Effective Unbiased Automatic Feature Selection

with $\rho_i$ for the selected features, or $\rho_{ij}$ for selected feature pairs, respectively. The first- and second-order statistics of the features are normalized by $N$. Three methods have been introduced in [50], namely, Highest Average Frequency (HAF), Elimination Low Rank Feature (ELRF), and Neighborhood-Linkage Feature (NLF). The first two approaches (HAF and ELRF) determine the unbiased features based on first-order statistics, while NLF is based on first- and second-order statistics.

Let $F_n = (f_{n,1}, f_{n,2}, \ldots, f_{n,M})$ be a solution consisting of $M$ features, represented as a binary vector, where $n = 1, 2, \ldots, N$. The average frequency is defined as:
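
The first- and second-order selection statistics can be computed directly from the N binary selection vectors; a straightforward sketch (variable names are ours):

```python
def selection_frequencies(runs):
    """First- and second-order selection statistics over N runs.
    `runs` holds N binary vectors F_n of length M; rho[i] is the fraction
    of runs that select feature i, and rho2[i][j] the fraction that select
    features i and j together."""
    N, M = len(runs), len(runs[0])
    rho = [sum(run[i] for run in runs) / N for i in range(M)]
    rho2 = [[sum(run[i] * run[j] for run in runs) / N for j in range(M)]
            for i in range(M)]
    return rho, rho2
```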

where the second-order frequency counts selections of the $j$-th feature when the $i$-th feature is selected, and $S_f$ denotes the selection stability measurement [16]. Here, the selection stability function is modified to give a proportional assessment value for all possible cases. The selection stability measurement is defined by the following equation:

where $S_f$ is in the range between 0 (the worst case) and 1 (the best stability). $U$ denotes the accumulation of all frequencies that are larger than half of the maximum frequency value, and $B$ denotes the accumulation of all frequencies that are lower than half of the maximum frequency value. $f_z$ is the number of features whose frequency values equal zero, and $f_h$ is the number of features whose frequency values are larger than half of the maximum frequency value. This selection stability criterion indicates the aptness of feature selection for the regarded task and data. In these experiments, we applied benchmark datasets from a repository and a real application, i.e., wine, rock, and eye-tracking image data, to give a perspective on using these approaches in sensor system design. Details of the benchmark datasets are given in Table 9.

**Table 9.** Summary of the benchmark datasets [16].

Dataset | Features | Classes | Samples per class |
---|---|---|---|
Wine | 13 | 3 | 59 / 71 / 48 |
Rock | 18 | 3 | 31 / 51 / 28 |
Eye image | 58 | 2 | 105 / 28 |

**Table 10.**Results of selected features by HAF, ELRF, and NLF methods using k–NN voting classifier with k = 5.

Dataset | S_{f} | HAF | ELRF | NLF | Class (%) | ||||
---|---|---|---|---|---|---|---|---|---|

Class. (%) | Feature | Class. (%) | Feature | Class. (%) | Feature | Mean | Median | ||

Wine | 0.57 | 96.06 | 4 | 96.62 | 6 | 96.06 | 4 | 95.41 | 96.06 |

Rock | 0.59 | 99.09 | 3 | 95.45 | 6 | 99.09 | 3 | 97.65 | 98.18 |

Eye image | 0.09 | 96.24 | 7 | 99.24 | 22 | 97.74 | 17 | 96.77 | 96.99 |

## 7. Efficient Classifiers Optimization

**Figure 24.** Sensitivity investigation of k-NN using features extracted by Gaussian kernels (GWF) from the gas benchmark data. The parameter k is set to 1, 3, 5, 7, and 9.

where each particle encodes the $i$-th prototype and $\omega_i$ denotes the class information of the prototype. This class information does not evolve but remains fixed for each particle; this is marked as 'none' in the INFO variable. Each particle movement is evaluated by a local objective function, which measures its contribution to classifying new patterns with regard to the statistical value. Two additional operators included in the PSO algorithm of ARAPNN are the reproduction and reduction of prototypes. These two operators are used to increase or decrease the number of prototypes. The whole swarm of particles is evaluated by a global function, which is the classification rate of the PNN. If the current global fitness is larger than the previous one, the current best swarm is saved, replacing the old one. A more detailed description of this optimization procedure of ARAPNN can be found in [38]. Similar to the concept of the ARAPNN or AMPSO algorithms, the basic algorithms of PSO and GA can be expanded or modified to deal with other classifiers.

Population = 3 patterns per class randomly selected as individuals; w_{start} = 0.9; w_{end} = 0.4; c_{1} = 0.35; c_{2} = 0.35; c_{3} = 0.1; Update fcn = floating-point; Maximum generation = 50; Fitness fcn = local and global; Data splitting = 60% training and 40% validation; Repeat = 20 runs

**Table 12.** Comparison of the averaged number of prototypes selected by RNN, SVM, and ARAPNN [38].

Method | Bupa | Diabetes | Wine | Thyroid | Glass |
---|---|---|---|---|---|
RNN | 123.75 | 233.20 | 19.05 | 20.55 | 62.25 |
SVM | 140.20 | 237.65 | 35.75 | 27.30 | 136.85 |
ARAPNN | 41.05 | 166.45 | 18.85 | 12.40 | 28.24 |

**Table 13.** Comparison of the averaged classification rates of RNN, SVM, standard PNN, and ARAPNN [38].

Method | Bupa | Diabetes | Wine | Thyroid | Glass |
---|---|---|---|---|---|
RNN | 59.06 | 65.64 | 93.24 | 94.65 | 64.71 |
SVM | 66.67 | 75.91 | 96.90 | 96.57 | 68.13 |
PNN | 62.93 | 73.92 | 95.14 | 95.87 | 66.10 |
ARAPNN | 64.49 | 75.57 | 96.41 | 96.40 | 67.70 |

**Figure 25.** Layout and chip photograph of the reconfigurable mixed-signal classifier chip and hardware model of the nearest neighbor classifier.

**Figure 27.** Classification accuracies on the test set of eye image data, where $h_i$ and $R_f$ are perturbation factors in the computational hardware model [31].

## 8. Conclusions

## References

- Traenkler, H.R.; Kanoun, O. Some Contributions to Sensor Technologies. In Proceedings of the International Conference on Sensors and Systems, St. Petersburg, Russia, June 2002; pp. 24–27.
- Luo, R.C.; Yih, C.C.; Su, L.K. Multisensor Fusion and Integration: Approaches, Applications, and Future Research Directions. IEEE. Sens. J.
**2002**, 2, 107–119. [Google Scholar] [CrossRef] - Ankara, Z.; Kammerer, T.; Gramm, A.; Schütze, A. Low Power Virtual Sensor Array Based on A Micromachined Gas Sensor for Fast Discrimination between H_2, CO and Relative Humidity. Sens. Actuat. B **2003**, 100, 240–245. [Google Scholar] [CrossRef] - Hauptmann, P.R. Selected Examples of Intelligent (Micro) Sensor Systems: State-of-The-Art and Tendencies. Meas. Sci. Technol.
**2006**, 17, 459–466. [Google Scholar] [CrossRef] - Chandrasekhar, V.; Seah, W.K.G.; Choo, Y.S.; Ee, H.V. Localization in Underwater Sensor Networks ― Survey and Challenges. In Proceedings of the 1st ACM International Workshop on Underwater Networks, Los Angeles, CA, USA, September 25; 2006; pp. 33–40. [Google Scholar]
- Ou, C.H.; Ssu, K.F. Sensor Position Determination with Flying Anchors in Three-Dimensional Wireless Sensor Networks. IEEE Trans. Mobile. Comp.
**2008**, 7, 1084–1097. [Google Scholar] - Akyildiz, I.F.; Su, W.L.; Sankarasubramaniam, Y.; Cayirci, E. A Survey on Sensor Networks. IEEE Commun. Mag.
**2002**, 40, 102–114. [Google Scholar] [CrossRef] - Rhodes, M. Electromagnetic Propagation in Sea Water and Its Value in Military Systems. In Proceedings of 2nd Annual Conference Systems Engineering for Autonomous Systems Defence Technology Centre, Edinburgh, UK; July 2007. [Google Scholar]
- Toumpis, S.; Toumpakaris, D. Wireless Ad Hoc Networks and Related Topologies: Applications and Research Challenges. Electrotech. Informationstech.
**2006**, 123, 232–241. [Google Scholar] [CrossRef] - Wen, X.; Sandler, M. Composite Spectrogram Using Multiple Fourier Transforms. IET. Signal. Process.
**2009**, 3, 51–63. [Google Scholar] [CrossRef] - Johnson, S.G.; Frigo, M. A Modified Split-Radix FFT with Fewer Arithmetic Operations. IEEE. Trans. Signal. Process.
**2007**, 55, 111–119. [Google Scholar] [CrossRef] - Courte, D.E.; Rizki, M.M.; Tamburino, L.A.; Gutierrez-Osuna, R. Evolutionary Optimization of Gaussian Windowing Functions for Data Preprocessing. Int. J. Artif. Intell. Tools
**2003**, 12, 17–35. [Google Scholar] [CrossRef] - Llobet, E.; Ionescu, R.; Al-Khalifa, S.; Brezmes, J.; Vilanova, X.; Correig, X.; Barsan, N.; Gardner, J.W. Multicomponent Gas Mixture Analysis Using A Single Tin Oxide Sensor and Dynamic Pattern Recognition. IEEE. Sens. J.
**2001**, 1, 207–213. [Google Scholar] [CrossRef] - Raymer, M.L.; Punch, W.F.; Goodman, E.D.; Kuhn, L.A.; Jain, A.K. Dimensionality Reduction Using Genetic Algorithms. IEEE. Trans. Evol. Comput.
**2000**, 4, 164–171. [Google Scholar] [CrossRef] - Iswandy, K.; König, A.; Fricke, T.; Baumbach, M.; Schütze, A. Towards Automated Configuration of Multi-Sensor Systems Using Evolutionary Computation ― A Method and A Case Study. J. Comput. Theor. Nanosci.
**2005**, 2, 574–582. [Google Scholar] [CrossRef] - Iswandy, K.; König, A. Feature-Level Fusion by Multi-Objective Binary Particle Swarm Based Unbiased Feature Selection for Optimized Sensor System Design. In Proceedings of IEEE International Conference on Multisensor Fusion for Intelligent Systems, Heidelberg, Germany; 2006; pp. 365–370. [Google Scholar]
- Artursson, T.; Holmberg, M. Wavelet Transform of Electronic Tongue Data. Sens. Actuat. B
**2002**, 87, 379–391. [Google Scholar] [CrossRef] - Jain, A.K.; Duin, R.P.W.; Mao, J.C. Statistical Pattern Recognition: A Review. IEEE Trans. Pattern. Anal. Mach. Intell.
**2000**, 22, 4–37. [Google Scholar] [CrossRef] - Iswandy, K.; König, A. Feature Selection with Acquisition Cost for Optimizing Sensor System Design. Adv. Rad. Sci.
**2006**, 4, 131–141. [Google Scholar] [CrossRef] - Emmanouilidis, C. Evolutionary Multi-Objective Feature Selection and ROC Analysis with Application to Industrial Machinery Fault Diagnosis. In Proceedings of Evolutionary Methods for Design, Optimization and Control (CIMNE), Barcelona, Spain; 2002. [Google Scholar]
- Zitzler, E. Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications; Shaker Verlag: Aachen, Germany, 1999. [Google Scholar]
- Bäck, T.; Hammel, U.; Schwefel, H.–P. Evolutionary Computation: Comments on The History and Current State. IEEE. Trans. Evol. Comput.
**1997**, 1, 3–17. [Google Scholar] [CrossRef] - Goldberg, D.A. Genetic Algorithms in Search, Optimization and Machine Learning; Addison Wesley: Indianapolis, IN, USA, 1989. [Google Scholar]
- Kennedy, J.; Eberhart, R.C. Particle Swarm Optimization. In Proceedings of IEEE International Conference on Neural Networks, Perth, Australia; 1995; Vol. 4, pp. 1942–1948. [Google Scholar]
- Kennedy, J.; Eberhart, R.C. A Discrete Binary Version of The Particle Swarm Algorithm. In Proceedings of IEEE Conference Systems, Man, and Cybernetics, Orlando, FL, USA, 12–15 October 1997; pp. 4104–4109.
- Iswandy, K.; König, A. Fully Evolved Kernel Method Employing SVM Assessment for Feature Computation from Multisensor Signals. Int. J. Comput. Intell. Appl.
**2009**, 8, 1–15. [Google Scholar] [CrossRef] - Kahn, J.M.; Katz, R.H.; Pister, K.S.J. Next Century Challenges: Mobile Networking for Smart Dust. In Proceedings of ACM / IEEE International Conference On Mobile Computing and Networking (MobiCom 99), Seattle, WA, USA, August 1999; pp. 271–278.
- Corcoran, P.; Anglesea, J.; Elshaw, M. The Application of Genetic Algorithms to Sensor Parameter Selection for Multisensor Array Configuration. Sens. Actuat.
**1999**, 76, 57–66. [Google Scholar] [CrossRef] - Korsten, M.J.; van der Vet, P.E.; Retien, P.P.L. A System for The Automatic Selection of Sensors. In Proceedings of International Measurement Confederation (IMEKO XVI), Vienna, Austria, 25–28 September 2000; pp. 211–216.
- Escalante, H.J.; Montes, M.; Sucar, L.E. Particle Swarm Model Selection. J. Mach. Learn. Res
**2009**, 10, 405–440. [Google Scholar] - Iswandy, K.; König, A. PSO for Fault-Tolerant Nearest Neighbor Classification Employing Reconfigurable, Analog Hardware Implementation in Low Power Intelligent Sensor Systems. In Proceedings of International Conference on Hybrid Intelligent Systems, Barcelona, Spain; 2008; pp. 380–385. [Google Scholar]
- Gutierrez-Osuna, R.; Nagle, H.T. A Method for Evaluating Data-Preprocessing Techniques for Odor Classification with An Array of Gas Sensors. IEEE. Trans. Syst. Man. Cybern. C
**1999**, 29, 626–632. [Google Scholar] [CrossRef] [PubMed] - Kammerer, T.; Ankara, Z.; Schütze, A. A Selective Gas Sensor System Based on Temperature Cycling and Comprehensible Pattern Classification: A Systematic Approach. In Proceedings of Eurosensors XVII, Guimarães, Portugal; 2003; pp. 22–24. [Google Scholar]
- Iswandy, K.; König, A. Comparison of Effective Assessment Functions for Optimized Sensor System Design. In Application of Soft Computing, ASC 52; Springer: Berlin, Germany, 2009; pp. 34–42. [Google Scholar]
- Gardner, J.W.; Craven, M.; Dow, C.; Hines, E.L. The Prediction of Bacteria Type and Culture Growth Phase by An Electronic Nose with A Multi-Layer Perceptron Network. Meas. Sci. Technol.
**1998**, 9, 120–127. [Google Scholar] [CrossRef] - Baumbach, M.; Kammerer, T.; Sossong, A.; Schütze, A. A New Method for Fast Identification of Gases and Gas Mixtures after Sensor Power Up. In Proceedings of IEEE Sensors, Vienna, Austria, October 2004; pp. 1388–1391.
- Iswandy, K.; König, A. Comparison of PSO-Based Optimized Feature Computation for Automated Configuration of Multi-Sensor Systems. In Soft Computing in Industrial Applications, ASC 39; Springer: Berlin, Germany, 2007; pp. 240–245. [Google Scholar]
- Iswandy, K.; König, A. A Novel Adaptive Resource-Aware PNN Algorithm Based on Michigan-Nested Pittsburgh PSO. In ICONIP; Springer, 2009; Part II, LNCS 5507; pp. 477–484. [Google Scholar]
- Cervantes, A.; Galván, I.; Isasi, P. AMPSO: A New Particle Swarm Method for Nearest Neighbor Classification. IEEE. Trans. Syst. Man. Cybern. C
**2009**, 39, 1082–1091. [Google Scholar] [CrossRef] [PubMed] - Tawdross, P.M. Bio-Inspired Circuit Sizing and Trimming Methods for Dynamically Reconfigurable Sensor Electronics in Industrial Embedded System. Ph.D. Thesis, Institute of Integrated Sensor Systems. TU Kaiserslautern, Kaiserslautern, Germany, 2007. [Google Scholar]
- Lakshmanan, S.K. Towards Dynamically Reconfigurable Analog and Mixed-Signal Electronics for Embedded and Intelligent Sensor Systems. Ph.D. Thesis, Inst. of Integrated Sensor Systems. TU Kaiserslautern, Kaiserslautern, Germany, 2008. [Google Scholar]
- Shi, Y.; Eberhart, R.C. A Modified Particle Swarm Optimizer. In Proceedings International Conference on Evolutionary Computation, Anchorage, AK, USA; 1998; pp. 69–73. [Google Scholar]
- Shi, Y.; Eberhart, R.C. Parameter Selection in Particle Swarm Optimization. In Proceedings of the 7th International Conference on Evolutionary Programming, San Diego, CA, USA; 1998; pp. 591–600. [Google Scholar]
- Eberhardt, M.; Roth, S.; König, A. Industrial Application of Machine-in-The-Loop-Learning for A Medical Robot Vision System – Concept and Comprehensive Field Study. Comput. Electrical. Eng
**2008**, 34, 111–126. [Google Scholar] [CrossRef] - COGNEX Home Page. Available online: www.cognex.com (accessed 23 August 2009).
- Prudencio, R.B.C.; Ludermir, T.B. Active Selection of Training Examples for Meta-Learning. In Proceedings of International Conference on Hybrid Intelligent Systems, Kaiserslautern, Germany; 2007; pp. 126–131. [Google Scholar]
- Stoica, A. Reconfigurable Transistor Array for Evolvable Hardware; Caltech/JPL Novel Technology Report; Caltech/JPL: Pasadena, CA, USA, July 1996. [Google Scholar]
- Dumitrescu, D.; Lazzerini, B.; Jain, L.C.; Dumitrescu, A. Evolutionary Computation; CRC: Boca Raton, FL, USA, 2000. [Google Scholar]
- Hereford, J.M.; Gerlach, H. Integer-Valued Particle Swarm Optimisation Applied to Sudoku Puzzles. In Proceedings of the IEEE Swarm Intelligence Symposium, St. Louis, MO, USA, September 2008.
- Iswandy, K.; König, A. Towards Effective Unbiased Automated Feature Selection. In Proceedings of International Conference on Hybrid Intelligent Systems, Auckland, New Zealand; 2006; pp. 380–385. [Google Scholar]
- Liu, H; Motoda, H. Feature Selection for Knowledge Discovery and Data Mining; Kluwer Academic: Norwell, MA, USA, 1998. [Google Scholar]
- Tam, S.M.; Gupta, B.; Castro, H.A.; Holler, M. Learning on An Analog VLSI Neural Network Chip. In Proceedings of IEEE International Conference Systems, Man and Cybernetics, Los Angeles, CA, USA, November 4–7; 1990; pp. 701–703. [Google Scholar]
- Schenkel, F.; Pronath, M.; Zizala, S.; Schwencker, R.; Graeb, H.E.; Antreich, K. Mismatch Analysis and Direct Yield Optimization by Spec-Wise Linearization and Feasibility-Guided Search. In Proceedings of the 38th Design Automation Conference, Las Vegas, NV, USA, June 2001; pp. 858–863.
- Gates, W. The Reduced Nearest Neighbor Rule. IEEE Trans. Inf. Theory
**1972**, 18, 431–433. [Google Scholar] [CrossRef] - Hart, P.E. The Condensed Nearest Neighbor Rule. IEEE Trans. Inf. Theory
**1968**, 14, 515–516. [Google Scholar] [CrossRef] - Platt, J.A. Resource-Allocating Network for Function Interpolation. Neural Comput.
**1991**, 3, 213–225. [Google Scholar] [CrossRef] - Haykin, S. Neural Networks: A Comprehensive Foundation,, 2nd Ed. ed; Prentice Hall International: Englewood Cliffs, NJ, USA, 1999. [Google Scholar]
- König, A. Dimensionality Reduction Techniques for Interactive Visualization, Exploratory Data Analysis, and Classification. In Pattern Recognition in Soft Computing Paradigm; World Scientific: Hackensack, NJ, USA, 2001; pp. 1–37. [Google Scholar]
- Peters, S.; König, A. A Contribution to Automatic Design of Image Analysis Systems ― Segmentation of Thin and Thick Fibers in 2D and 3D Images. In Proceedings of International Conference on Instrumentation Communication and Information Technology (ICICI), Bandung, Indonesia; 2005; pp. 488–493. [Google Scholar]
- Müller-Schloer, C.; von der Malsburg, C.; Würtz, R.P. Organic Computing. Informatik Spektrum
**2004**, 27, 332–336. [Google Scholar] [CrossRef] - Köppen, M.; Franke, K.; Vicente-Garcia, R. Tiny GAs for Image Processing Applications. IEEE Comput. Intell. Mag.
**2006**, 1, 17–26. [Google Scholar] [CrossRef] - Peters, S.; König, A. A Hybrid Texture Analysis Systems Based on Non-Linear and Oriented Kernels, Particle Swarm Optimization, and kNN Vs. Support Vector Maschines. In Proceedings of the 7th International Conference on Hybrid Intelligent Systems, Kaiserslautern, Germany, September 17–19; 2007; pp. 507–527. [Google Scholar]

© 2009 by the authors; licensee Molecular Diversity Preservation International, Basel, Switzerland. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

## Share and Cite

**MDPI and ACS Style**

Iswandy, K.; König, A. Methodology, Algorithms, and Emerging Tool for Automated Design of Intelligent Integrated Multi-Sensor Systems. *Algorithms* **2009**, *2*, 1368-1409.
https://doi.org/10.3390/a2041368
