
J. Low Power Electron. Appl. 2013, 3(2), 114-158; doi:10.3390/jlpea3020114

Synergistic Sensory Platform: Robotic Nurse
Igor Peshko 1,*, Romuald Pawluczyk 2 and Dale Wick 3
1 Physics and Computer Science, Wilfrid Laurier University, 75 University Ave West, Waterloo, ON N2L 3C5, Canada
2 P&P Optica, Inc., 680A Davenport Road, Waterloo, ON N2V 2C3, Canada
3 CrossWing, Inc., 2800 John Street, Unit 21, Markham, ON L3R 0E2, Canada
* Author to whom correspondence should be addressed; Tel.: +1-519-884-0710; Fax: +1-519-746-0677.
Received: 8 January 2013; in revised form: 25 April 2013 / Accepted: 26 April 2013 / Published: 24 May 2013


Abstract: This paper presents the concept, structural design and implementation of components of a multifunctional sensory network, consisting of a Mobile Robotic Platform (MRP) and stationary multifunctional sensors that communicate wirelessly with the MRP. Each section reviews the principles of operation and the practical implementation of the network components. The analysis is focused on the structure of the robotic platform, sensory network and electronics and on the methods of environment monitoring and the data processing algorithms that provide maximal reliability, flexibility and stable operability of the system. The main aim of this project is the development of the Robotic Nurse (RN): a 24/7 robotic helper for hospital nursing personnel. To support long-lasting autonomous operation of the platform, all mechanical, electronic and photonic components were designed to provide minimal weight, size and power consumption, while still providing high operational efficiency, accuracy of measurements and adequateness of the sensor response. The stationary sensors serve as the remote “eyes, ears and noses” of the main MRP. After data acquisition, processing and analysis, the robot activates the mobile platform or specific sensors and cameras. The cross-use of data received from sensors of different types provides high reliability of the system. The key RN capabilities are simultaneous monitoring of the physical conditions of a large number of patients and raising an alarm in case of an emergency. The robotic platform Nav-2 exploits innovative principles of any-direction motion with omni-wheels, navigation and environment analysis. It includes an innovative mini-laser, an absorption spectrum analyser and a portable, extremely high signal-to-noise-ratio spectrometer with a two-dimensional detector array.
Keywords: multi-functional sensor; spectroscopic sensor; robotic nurse; sensory network; omni-wheel robotic platform

1. Introduction

1.1. Problems and Trends

This paper presents the concept and practical implementation of some components of a sensory network that includes a multifunctional scientific instrument located on a mobile robot, along with instruments incorporated in multiple stationary units. The main task of the network is to take over some of the more tedious responsibilities of a human nurse and to monitor a large number of patients in different places simultaneously. It is referred to here as a Robotic Nurse (RN) Network. This paper was initially prepared as a research article; however, as the subject is considered very important, the reviewers and editors proposed expanding it to the “review” level. It should be noted that preparing a review in the Robotic Nurse area is not a simple task. The difficulty lies in the “multi-vector” character of the final technology: a robotic platform capable of performing human health care functions accumulates so many different technologies that describing them all would require a series of books, not just a journal paper. Within the framework of a journal article, we were unavoidably limited in total length, in the details of each technology involved and in the capacity to analyse precisely why one technology or another was chosen. To create a paper that is maximally informative yet compact, quickly readable and useful for different specialists, we used the following principles: (1) wherever possible, we reference open-access publications and web pages, which can be found and accessed without obstacles and typically contain many next-level references; (2) we focused on technologies that have been confirmed or are potentially ready to be used on a movable platform in “field” conditions; (3) we describe in greater detail the results of our own scientific-engineering group, which formed the basis of the initial research paper.
This paper covers a wide range of technologies and is addressed to anyone interested in robotic sensory platform applications. Where an electronics specialist may be reading about robotics or optics applications, or vice versa, we often refer to on-line sources, which present material in very plain language and in condensed form (see, for example, [1,2,3,4,5,6,7,8,9,10,11,12]). Our understanding is that a review-type paper usually attracts a wide range of scientists and engineers who are not “narrow” specialists in the discussed areas, and some general information will be very useful for such readers.

The problems and solutions discussed in this paper can be grouped in five categories:


Robotic movable platforms: Section 2, Section 3;


Network of robots, instruments and humans: Section 4;


Sensors and instruments: Section 5;


Electronics: Section 6;


Algorithms and software: in all sections.

In principle, two approaches are possible when creating a robotic sensory platform. One is the well-known “humanoid” robot, or one that maximally resembles a human: an “android”. This idea is quite old; the artificially created Frankenstein's monster (also called the monster or Frankenstein's creature) is a fictional character that first appeared in Mary Shelley's 1818 novel, Frankenstein; or, The Modern Prometheus [1]. The creature was built in a lab through the methods of science. In the end, the monster was rejected by its creator and not accepted by human society, because it was not sufficiently similar to regular humans. This novel, written almost two hundred years ago, sends us a signal: the “robot-human” interface is an even more significant problem than the technical one.

The second approach is a machine that performs a specific function and does not mimic any human organs or processes. Most industrial robots are built without regard to any similarity to human beings. In this case, they are usually installed in a fixed location and are connected to a high-voltage power supply. The exception is military robots, which may or may not be similar to humans, but should move with on-board power sources. The internal organization of autonomous systems is much more similar to that of human beings, as power economy, optimal algorithms of action and “smart” low-power electronics are crucial elements of the robot.

1.2. Multifunctional Systems

An extremely important characteristic of a robotic platform with multifunctional sensors and instruments on board is reliability. The simple duplication of the same elements (e.g., sensors, engines, cameras) is not appropriate in this case, since it increases power consumption without providing new information or capabilities. Mother Nature gives us some tips on how to solve this problem: the multi-functionality of each element helps to minimize power losses while still yielding new information or supporting more actions. For example, our arms serve to pick up objects and to support body motion (balance function), but can also be used to estimate the properties of a surface, the temperature of an object, to understand the shape and hardness of a thing, to generate sounds and to act as a weapon or an instrument to create something. In all these cases, the information from sensors is transmitted to the brain and, after analysis, commands go back to the muscles to produce an action. The question is how to build a system of highly reliable switches that transmits different signals to the robot processor and returns commands to specific motors, lasers and other devices through the same, or a minimal number of, wires at a maximal rate. Probably the most effective way is to use multi-channel parallel wireless communications inside the robot system, the same as is used for outside communication. Such an architecture gives the robotic platform a second advantage: synchronous operation of two or more robots. Two “synchronized” Robotic Nurses can transport a patient without difficulty.

The next idea proposed in this paper is to distribute the presence of the robot over a large number of patients, or over a large monitored area, by the use of multiple stationary sensory nodes. In this case, the RN permanently monitors hundreds of patients and moves to the area where its presence is most necessary. The all-in-one instrument provides an unbeatable capability: the same element, for example, the laser, can be used to initiate different processes or to evaluate different parameters. In this way, any single process, considered from different points of view, can be investigated more precisely, more adequately and in more detail.

1.3. Background and General Requirements

The best Robotic Nurse technology is currently demonstrated in Japan [2]. However, most engineers are focused on developing an ideal copy of a human, capable of talking, making complex motions with “head and hands” and communicating with a patient. Yet at the current stage of technology development, the most important thing of all is the functionality of the system. A general concept of such a “smart” robotic platform has been proposed, discussed and partially realized in [3,4,5,6]. The basic concept of a mobile robot with certain functions has been tested as a possible prototype for a Mars rover: a reconnaissance robot with a scientific instrument [3,4]. A machine for interplanetary missions must meet extremely strict requirements for weight, size and power consumption. Once developed for a reconnaissance robot, the hardware can certainly be used for a Robotic Nurse application.

During the last decade, a new concept in robotic medical applications has emerged: a system that provides the tele-presence of a doctor in any hospital. An example of such a platform is RP-7®, a mobile robotic platform that enables the physician to be remotely present [7]. The RP-7 and RP-7i are the first and only FDA-cleared Remote Presence devices that allow direct connection to Class II medical devices. Devices such as electronic stethoscopes, otoscopes and ultrasound can be connected to the Expansion Bay of the robot to transmit medical data to the remote physician. The RP-7i also includes enhanced audio capabilities, which allow the user to focus on a specific conversation, similar to using two ears.

The main goal of our work is to design a real helper for operation in a hospital or home environment that can use a distributed network of stationary multisensory nodes. The mobile platform is capable of serving as a Robotic Nurse, hospital security guard, home babysitter or a senior’s helper. The RN should:


monitor general environmental conditions for security purposes;


remotely monitor patient temperature, pulse and breath intensity and content;


deliver scheduled medication;


carry medical instrumentation, such as a vein visualizer, defibrillator, emergency medications in a pen syringe, etc.;


be capable of self-charging from any electrical outlet in the hospital building;


communicate with doctors, the nursing station, hospital administration, security guards, patient’s relatives, etc.;


be easily integrated into the Robotic Nurse network;


be easily upgradable;


supply environmental, medical and security data in formats compatible with current standards, already existing systems, networks or medical instruments;


be fabricated with materials and techniques that are approved for healthcare areas.

1.4. Robotic Nurse Network

To play the RN role, a mobile robot should be able to precisely monitor its position in space, recognize the environment, identify the patient, detect the patient's medical condition remotely, provide on-time scheduled medication delivery and, in the case of a poor patient condition, supply the doctor with complete information about the patient's state. Ideally, the system will also be able to perform some physical tasks around the patient and, finally, communicate clearly with the patient. The main tasks in developing the RN are a smart movable platform; systems for navigation, communication and environment estimation; and a friendly robot-patient communication interface.

To provide flexibility, the system should be easily reconfigurable to adapt the RN to specific working conditions. To achieve this, all mechanical, electronic, sensory, laser and optical units should be easily modifiable; additionally, the weight, size and power consumption should be minimized. In this context, the development of safe and effective technology for precise orientation and navigation of the robot in a hospital and in near-patient space becomes the primary task. Two types of networks are used to provide successful system operation. The first is an internal (on-board) set of diverse sensors (including optical rangefinders) that together form an inhomogeneous network. Such a network should provide cross-use of information received from different sensors to describe the environmental conditions and to choose economical algorithms of motion, data processing and communication. The system can use the same elements, e.g., lasers or detectors, to support the operation of different optical devices. The on-board data processing unit should be capable of analysing environmental parameters in different domains: optical and radio bands, chemical, mechanical and gravitational. The system includes an innovative low-weight single-frequency laser (with eight times better power-consumption efficiency than comparable commercial devices), a miniature absorption spectrum analyser and a portable spectrometer containing a two-dimensional detector array. Equipped with new data processing software, it provides an extremely high signal-to-noise ratio. The robotic platform exploits innovative principles of any-direction motion, navigation and environment analysis. The next version of the RN will use arms of different strengths and positioning accuracies to provide actions of a different nature: research, material processing and relocation of objects of interest.
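The cross-use of information from different sensors can be illustrated with a simple weighted-voting scheme; the sensor names, weights and threshold below are purely illustrative assumptions, not part of the described system:

```python
def fused_alarm(readings, weights, threshold=0.5):
    """Cross-sensor alarm decision by weighted voting (illustrative sketch).

    readings  -- dict of sensor name -> confidence in [0, 1] that an event
                 (e.g. patient distress) is occurring; sensors that are
                 offline are simply absent from the dict
    weights   -- dict of sensor name -> reliability weight (assumed values)
    threshold -- fused score above which the alarm is raised
    """
    total = sum(weights[name] for name in readings)
    if total == 0:
        return False  # no live sensors: never alarm on no data
    score = sum(readings[name] * weights[name] for name in readings) / total
    return score > threshold

# Hypothetical sensor set; the pulse sensor is offline and simply omitted.
weights = {"ir_camera": 0.4, "microphone": 0.3, "pulse_sensor": 0.3}
print(fused_alarm({"ir_camera": 0.9, "microphone": 0.8}, weights))  # -> True
```

Because the decision is normalized over whichever sensors are currently reporting, no single faulty or missing sensor can trigger or suppress an alarm on its own, which is the reliability benefit the text describes.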

At present, the various components of the system are at different stages of development. For example, the low-weight single-frequency laser is at the assembly stage, while a number of high-performance spectrometers have been developed, designed, produced and tested. The robotic system is quite advanced and is currently at the stage of field-testing.

1.5. Other Applications

The final system will provide: (1) immediate monitoring of the area (house, room, site, etc.) to watch patients (hospital), children (home, school) and/or the presence of unauthorized persons in protected zones (bank, military station, nuclear power plant, etc.); (2) estimation of atmospheric content and conditions, the presence of unwanted substances, etc.; (3) delivery of scheduled medications or emergency instrumentation; and (4) transmission of the multi-parametric information to an authorized centre (police, security unit) or to a cell-phone-like device (the owner of a house).

In some cases, such as a private house, the robot will be supported by several stationary sensory units, at most one per room. In a hospital, the total network can comprise hundreds of stationary sensors and several tens of mobile robots. To protect a relatively unpopulated area (like a bank), the stationary units could be installed only at critical observation points. Just as humans can perform different functions, the robotic movable sensory platform can operate in different environments and provide different actions. The sensory platform, operating according to the same principles, can be used as: (1) a multifunctional scientific instrument for terrestrial and extra-terrestrial research; (2) a robot on-board chemical identifier; (3) a portable field-lab for monitoring environmental contamination; (4) a reconnaissance lab for exploration of oil and gas sites; (5) a multifunctional, portable medical device for cancer diagnostics that includes a non-linear microscope, an optical coherence tomograph and a laser/ultrasound scanner; and (6) mobile and stationary sensory terminals for operating within security networks at high-risk zones, such as nuclear power or chemical plants, military stations and mining sites.

Each of the above-mentioned systems is, de facto, a multi-instrument lab and can be assembled from existing commercial devices. However, the key problem is developing an all-in-one multi-instrument lab with economical weight, size and power-consumption characteristics. Moreover, the system should not only monitor several required parameters, but also analyse specific conditions, synergistically analyse the acquired parameters and inform operators about the most probable events to follow, or activate some robotic functions to deal with the events at hand. In other words, the system has elements of artificial intelligence [3]. Lastly, the instruments should be immune to mechanical vibrations and resistant to thermal and radiation influences. Very often, well-developed measurement technologies cannot be used in “in-field” conditions.

2. From Human Being to Robot

2.1. Moving Platform

Historically, industrial robots were designed and fabricated as stationary systems aimed at completing narrow, specific tasks. The arm is the main moving part of these systems. However, service and reconnaissance robots or building-construction helpers should move easily and follow operator commands. There are several technologies that provide the motion of a robotic platform. Most of them are based on technological solutions from other transport systems: cars and trucks, tanks and tractors, etc. This is the industrial segment of moving technologies. The second group is based on biologically motivated principles: animals and insects.

2.1.1. Tracked and Wheeled

On relatively flat surfaces (roads), wheeled machines are the most popular [8]. They provide the maximal possible speed of motion and can support fairly heavy mechanisms. However, vehicles with a relatively long base need space to turn around. On irregular and softer surfaces, tracked vehicles are used [9]. This type is able to move at steep angles, over staircases and on very wet or sandy soils. Typically, this is the most widely used platform for reconnaissance robots.

2.1.2. With Legs and Snake-Like

Multi-leg (horse- or spider-like) systems [10] are used when motion conditions are extremely challenging: stones, irregular field surfaces, water obstacles, etc. A snake robot is an exotic system used for military reconnaissance purposes because of the robot body's low position and its capability to use grass and complicated ground relief for hidden motion [11]. However, other elements of a robotic platform can also be fabricated in a snake shape; for example, an elephant-trunk-like arm supplied with sensors and laser heads could be very useful for medical and security applications as well.

Among the above-mentioned technologies, wheeled systems are the most economical and fast, have minimal operational noise and are easily repairable. The question is how to provide maximal manoeuvrability and stability of the system. Let us first discuss in more detail the list of requirements for an RN platform. Working in a very specific environment, this robot needs particular attention and innovative solutions to support the successful operation of the system and the capability to be incorporated into the hospital network of robots, medical staff, equipment and the specific office/lab/surgery/care-room structure.

2.1.3. Specific Requirements

To serve in the role of the RN, the robot should have certain properties. First, it should be able to move in enclosed environments, recognize possible obstacles, identify personnel and recognize patients who need to be served. To achieve these goals, the robot should have the capability to climb stairs and go over doorsteps. At the same time, the robot's means of interaction with a patient should be on such a level that it can interact with patients lying on the floor, standing up or lying in bed. It should be strong enough to support the patient in some cases, such as moving from the bed to a chair. It should also serve the patient's medicine at the proper time and help with other physiological activities. It has to be stable enough that it cannot be easily tipped over; in the case that it is tipped over, the RN should be able to re-establish its standing position. The RN should contain some storage space in which medicine, utensils and food trays can be stored and brought to the patient.

Most humanoid robots more or less mimic human beings. However, the human profile developed to survive in a relatively “empty” environment. To move without collisions and to estimate well a safe path of motion within a populated area, the profile and sensing system should meet several requirements:


the robot “body” should be column-like: relatively smooth and with a narrow profile;


to move stably, it must have a low centre of gravity and a relatively large pedestal;


to see “over a crowd”, it must have a camera tower of changeable height and angle of viewing;


to estimate the scene adequately, the distance to obstacles and to find the optimal route, the robot should possess a stereo-camera and rangefinders;


the screen position should be slightly higher than the head of a patient in bed;


it is preferable that the cameras operate in different spectral ranges;


the robot must navigate and move autonomously without the close presence of hospital personnel.

All these criteria were used to design the Robotic Nurse movable platform shown in Figure 1 [12]. The “communicating” version of the robot is very lightweight and is easily transportable to any necessary location. The RN shown in Figure 1 represents Stage 1 of the movable medical platform: the Robotic Nurse. The aim of this project is to develop a movable platform, to provide navigation and safe motion in the hospital environment and to perform some sensing functions with further data processing and transmission. Therefore, the current version does not need arms to meet these requirements. However, the next model, the Robotic Technician, will be supplied with several arms for different purposes (see Section 3).

Figure 1. The Robotic Nurse modular prototypes. A robot without the cover demonstrates a movable platform system.

2.1.4. RN Mechanics and Algorithms of Motion

The screen-equipped RN developed at CrossWing, Inc. [12] (Figure 1) demonstrates a very useful capability: a doctor, nurse, relatives of the patient and, in specific cases, some officials (police or social service officers, lawyers, etc.) can communicate with the patient while being outside of the hospital room (Figure 2). Moreover, it is possible to transmit medical lectures or specially prescribed music or movies and thus to support good psychological conditions for the patient.

The RN is located on the compact movable platform, Nav-2, which uses three omni-directional wheels (Figure 3). The Nav-2 platform is able to move in any planar direction. It is controlled over the in-office IEEE 802.11 wireless network. Another system is the Mark 5 robot (see Section 3), a 17-degree-of-freedom wheel-based robot.
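The any-direction motion of a three-omni-wheel platform such as Nav-2 follows from a simple inverse-kinematics relation: each wheel's tangential speed is the projection of the desired body velocity onto that wheel's drive direction, plus a rotational term. A minimal sketch of that relation; the 120° wheel spacing and the 0.15 m wheel-centre radius are illustrative assumptions, not Nav-2 specifications:

```python
import math

def wheel_speeds(vx, vy, omega, radius=0.15):
    """Inverse kinematics for a three-omni-wheel platform (sketch).

    vx, vy -- desired body velocity (m/s) in the robot frame
    omega  -- desired rotation rate (rad/s)
    radius -- distance from platform centre to each wheel (assumed value)

    Returns the tangential speed each of the three wheels must produce,
    assuming the wheels are evenly spaced at 120-degree intervals.
    """
    speeds = []
    for i in range(3):
        theta = 2.0 * math.pi * i / 3.0  # wheel mounting angle
        v = -math.sin(theta) * vx + math.cos(theta) * vy + radius * omega
        speeds.append(v)
    return speeds

# Pure rotation: all three wheels run at the same speed.
print(wheel_speeds(0.0, 0.0, 1.0))
```

For pure translation the three wheel speeds sum to zero, which is why such a platform can move in any planar direction without first turning, the property the text attributes to Nav-2.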

Figure 2. Artist's impression of the robot communicating with a patient in a hospital room scene.

Figure 3. Three-omni-wheel movable platform.

The Mark 5 implements angular position feedback at each joint. A newly revised arm is incorporated into the Mark 5 design. Having a lower profile and being made of ABS plastic, the new design allows for more complex part features through rapid prototyping procedures. The previous robot version (Mark 4) consists of two arms, a head with vision capabilities, an upper body that can rotate about a waist joint, a lower body that houses the spooling system necessary for manipulating all of the robot's limb joints, and a wheel assembly to allow the robot to move around. Directly above the wheel assembly is the battery compartment, which houses one golf-cart battery. One unit is sufficient for the robot to perform its tasks on a single charge, with either eight hours of active running or 32 hours of communications, data acquisition and processing. Due to the bulky nature of the batteries, they are placed as close to the floor as possible to provide counter-balancing weight for the upper body of the robot.
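The quoted single-charge figures (eight hours of active running or 32 hours of communications and processing) fix the ratio of the two power draws, so the runtime for any mixed duty cycle can be estimated without knowing the battery capacity, which cancels out. A small sketch of that arithmetic:

```python
def mixed_runtime(active_fraction, active_hours=8.0, idle_hours=32.0):
    """Estimated hours on one charge for a mixed duty cycle (sketch).

    active_fraction -- fraction of time spent actively running;
                       the rest is communications/data processing.
    Defaults use the figures quoted in the text: 8 h fully active
    or 32 h of communications and processing per charge.
    """
    # Fraction of full battery capacity consumed per hour in each mode.
    drain = active_fraction / active_hours + (1.0 - active_fraction) / idle_hours
    return 1.0 / drain

print(mixed_runtime(0.5))  # 50% active duty cycle -> 12.8 hours
```

The harmonic combination reflects that each hour of mixed operation drains a weighted average of the two per-hour consumption rates.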

2.2. Arms/Hands (Manipulators)

The robot manipulators, typically referred to as “arms”, are a very important part of the robotic platform. The same is true for humans: scientists researching human genesis often proclaim that human beings became intellectual creatures as a result of the perfectly developed hand. Current robotic arm design principles are very advanced. However, as robotic systems acquire a more refined appearance and become more complicated and multifunctional, the requirements for arm design become more and more sophisticated. The human arm/hand system evolved mainly according to the requirements of survivability in Earth conditions. However, to realize modern technological possibilities, the robot arm/hand should be simultaneously “softer and harder” than the human one. It should be precisely driven to operate high-accuracy laser scanners, or stable enough to achieve a sharp image with a microscope. The next version of the RN is supposed to use on-board medical instrumentation, such as a microscope, optical coherence tomograph, vein visualizer and other techniques that require high-precision positioning, stability and mechanical strength, but also flexibility and such “non-technical” properties as sterility and compatibility with bio-tissue.

In this paper, we give a short overview of robotic arm achievements that are related to, or can be used in, the healthcare area. First of all, we note several recent books [13,14,15] by InTech Open Access Publishing, some chapters of which are analysed below.

The most “delicate” parts of a hand are well-operating fingers. To provide many Robotic Nurse functions, such as scheduled pill supply, injections, meal service, etc., perfect finger functionality is very important. In [16], the researchers focused on grasp planning for a humanoid multi-fingered hand attached at the tip of a humanoid robot's arm. To grasp an object, the robot first measures the object's position/orientation using a vision sensor. Then, the planner plans the body motion to complete the grasping task based on the vision sensor information. Using the degrees of freedom of the full body, the planner develops a trajectory for reaching the object with several motions, such as twisting the waist, bending the waist and squatting down. This algorithm is very important even for an arm without fingers, for example, for a manipulator with a syringe.

Robotic hand design is very difficult because of the complicated structures and functions of hands. Most robotic hands in industrial applications have 1 D.O.F. (degree of freedom) or 2 D.O.F. grippers, and they are designed for precise, repetitive operations. Human-like robotic hands [17] are being designed for robots operating in human-friendly environments, such as the home, office, hospital, school, and so on. They have up to 16 D.O.F. The authors note that most humans feel friendly and comfortable with robots of a similar appearance to themselves, so the appearance of service robots should resemble humans, and their hands should imitate human hands, too. This example shows how psychological factors influence technological ones. The authors of [17] mention that the hand size is very important, because it must be matched in proportion to the whole body. It can be hard to make a small-sized hand, due to its complexity, but if the proportions are contrary to those of the human body, it can look ugly. The same problem was noted in Shelley's novel about Frankenstein's creature. We cite this paper to show how multi-faceted the task of creating Robotic Nurse technology is.

Each further step in approximating the human hand requires the introduction of additional sensors, functions and algorithms of operation [18]. In the cited paper, a robot hand system is described that has tactile sensors, joint torque sensors, joint angle sensors and a structure similar to human hands. Universal Robot Hand II has actuators, transmission gears, reduction gears and torque-limiter mechanisms in the fingers. Using the torque-limiter mechanisms, the fingers can sustain overloads not through the gears, but through the structure. This imitates the behaviour of a human finger.

Descriptions of robotic arm construction, algorithms of motion, principles of control, calibration, feedback, sensors supporting the arm operation, and so on, can be found in [14]. All these aspects are more or less related to the problem of Robotic Technician design and implementation. This book is mainly oriented toward the solution of industrial robot problems. However, we would like to note one of the book chapters, which discusses a “highly intelligent system”: a human-robot cooperative manipulator [19]. This paper suggests an MFR (Multi-purpose Field Robot). From the viewpoint of operational characteristics, the MFR can be used as a construction robot designed to perform automatic grinding and cleaning of concrete surfaces, or as a field robot designed for a particular environment and used in various industries, such as agriculture, construction, engineering, space exploration and deep-sea diving, given the inherent dangers and costs associated with these fields. This is an example of a multifunctional system whose principles of design and operation should be used when developing multifunctional movable platforms.

When the RN provides some near-patient delivery or diagnostic actions, the stability and repeatability of the arm motions are extremely important. In practice, robotic manipulators exhibit some degree of unwanted vibration. The advent of lightweight arm manipulators, mainly in the aerospace industry, where weight is an important issue, leads to the problem of intense vibrations [20]. On the other hand, robots interacting with the environment often generate impacts that propagate through the mechanical structure and also produce vibrations. In order to analyse these phenomena, a robot signal-acquisition system was developed. The instrumentation system acquires signals from several sensors that capture the joint positions, mass accelerations, forces and moments and electrical currents in the motors.

The application fields of robot arms now extend well beyond their traditional use. These fields include physical interaction with humans (e.g., robot toys) and even emotional support (e.g., medical and elderly services) [21]. In the referenced paper, a novel motion-control approach to robotic design, inspired by studies of the animal world, was demonstrated. This approach combines the robot’s manipulability aspects with its motion to enable robots to physically interact with their users while adapting to changing conditions triggered by the user or the environment. These theoretical developments were then tested in robot-child interaction activities.

2.3. Navigation and Safety Operation

2.3.1. Concepts and Solutions

The fundamentals of robot navigation can be found in [22]. One of the most serious problems for the RN is orientation and navigation in a changeable hospital environment while patrolling without assistance. Beds can be moved to new positions, life-supporting equipment can be brought in or taken out at any moment and a new patient can be placed in a monitored room on a new bed of a new configuration. In principle, each room can be supplied with a radio marker to identify the robot position within a hospital map, which solves the problem of general navigation. However, the problem of fast motion within the changeable room scene still remains.

These problems have attracted the attention of many researchers and concern the development of technical and computational means: navigation, range finding, visual path control, recognition of a scene and of a patient, and algorithms and software for data processing, analysis and decision making. It is important, as well, to develop path-optimization algorithms that minimize power consumption in patrolling mode. 3D configuration and terrain sensing are very important functions for a tracked-vehicle robot, providing operators with information that is as precise as possible and allowing the robot to move through the work field efficiently. A Laser Range Finder (LRF) is widely used for 3D sensing, because it can cover a wide area at high frequency and can obtain 3D information easily. A system using an LRF installed at the end of an arm-type movable unit was proposed in [23].

Robotics and intelligent machines need sensory information to behave autonomously in dynamic environments. Visual information is particularly suited to recognizing unknown surroundings [24]. Vision-based control of robotic systems involves the fusion of robot kinematics, dynamics and computer vision to control the motion of the robot in an efficient manner. The combination of mechanical control with visual information, so-called visual feedback control or visual servoing, is important for a mechanical system working in dynamic environments. In the method proposed in [24], not only the position, but also the orientation of the robot hand with a contact force was controlled in the visual force feedback system. Both the passivity of the manipulator dynamics and the passivity of the visual feedback system are preserved in the 3D visual force feedback system.

Under unknown environments, robots cannot perform as planned, and they may fall or collide with obstacles. Even in that case, it is expected that the robots should help human daily life as much as possible. In [25], the authors developed an approach for adapting designed motions to a changed structure without model identification. Even if the robot has unobservable changes in its mechanical structure, it generates new motions that achieve trajectories matching the desired ones as much as possible.

One of the central issues in robotics and animal motor control is the problem of trajectory generation and modulation [26]. Since, in many cases, trajectories have to be modified on-line when goals are changed, obstacles are encountered or when external perturbations occur, the notions of trajectory generation and trajectory modulation are tightly coupled. This chapter addresses some of the issues related to trajectory generation and modulation, including the supervised learning of periodic trajectories. Other addressed issues include robust movement execution despite external perturbations, modulation of the trajectory to reuse it under modified conditions and adaptation of the learned trajectory based on measured force information. The systems are designed such that after having learned the trajectory, simple changes of parameters allow modulations in terms of, for instance, frequency, amplitude and oscillation offset, while keeping the general features of the original trajectory or maintaining synchronization with an external signal.

Standing and walking are very important activities for daily living, and so, their absence or any abnormality in their performance causes difficulties in doing regular tasks independently. Analysis of human motion has traditionally been accomplished subjectively through visual observations [27]. By combining advanced measurement technology and biomechanical modelling, the human gait is today objectively quantified in what is known as gait analysis. To validate the theoretical results, the authors used the humanoid robot “HOAP-3” of Fujitsu.

2.3.2. Rangefinders

In our project, we partially used the proposed technology by acquiring visual information from the stationary node. For estimating scene changes, information obtained from the cameras and the optical and ultrasound rangefinders of the stationary node and of the movable RN will be used.

The two-level navigation system is designed to support reliable robot operation in the complicated indoor conditions of concrete/steel-wall buildings (Figure 4). Near-field navigation is supported with optical and ultrasonic rangefinders. Infrared sensors point at the ground in six directions and can be used as cliff sensors or to remotely detect obstacles. The sensors are positioned 0.5 m above the ground and point at the ground 0.5 m away from the robot. Relatively high-power solid-state single-frequency lasers will be installed on top of the robot for overviewing the scene around the RN at distances up to several tens of meters (Figure 5).
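The cliff-sensing geometry above admits a simple check: with the mounting height and look-ahead distance both 0.5 m, the sensor is tilted 45° downward, and a range reading noticeably longer than the nominal floor distance indicates a drop-off. A minimal sketch (the function names and tolerance are illustrative, not taken from the platform firmware):

```python
import math

def sensor_geometry(mount_height_m: float, lookahead_m: float):
    """Downward tilt angle and nominal floor range for a cliff sensor."""
    tilt_deg = math.degrees(math.atan2(mount_height_m, lookahead_m))
    nominal_range_m = math.hypot(mount_height_m, lookahead_m)
    return tilt_deg, nominal_range_m

def is_cliff(measured_range_m: float, nominal_range_m: float,
             tolerance_m: float = 0.1) -> bool:
    """A reading much longer than the nominal floor range means the
    floor has dropped away in front of the robot."""
    return measured_range_m > nominal_range_m + tolerance_m
```

For the 0.5 m / 0.5 m geometry, the nominal range is about 0.71 m, so a reading of, say, 1 m would be flagged as a cliff.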

Figure 4. Robotic Nurse navigation ports: 1—infrared cliff/obstacle sensor, 2—ultrasonic sensor.

Figure 5. Bench-top prototype of the diode-pumped Nd:YVO4 laser with nano-selector.

The stereo cameras (infrared and visible) can help in 3D imaging of the scene. Far-field navigation and communication are supported through the stationary sensory nodes installed at the turning points of corridors, tunnels, etc. The robotic platform with sensors should be supplied with rangefinders capable of estimating an open path length for remote gas sensing [3], navigating the robot through complicated environmental relief and helping to protect the robot arm from occasional collisions with obstacles. The rangefinders should also provide microscope or telescope focusing. In this project, practically all on-board lasers are planned to serve as rangefinders. The signal generated by a chirped-frequency laser contains very important information for the spectroscopic sensor. The interference beat signal can be used for self-calibration of the laser’s tunability range. If the wavelength tuning is ideally linear, the interference pattern demonstrates the same modulation frequency of the beat signal along both slopes of the triangularly modulated laser output. However, if the tuning rate is additionally modulated, the frequency of the beat signal varies. A detailed description of the system is given in Section 4.4 and in reference [3].
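The relation between the beat frequency and the open path length can be sketched as follows: for a linear chirp with sweep rate S, the round-trip delay τ = 2d/c offsets the returned light in frequency by f_beat = S·τ, so d = c·f_beat/(2S). The numbers below are illustrative, not the parameters of the on-board laser:

```python
C = 299_792_458.0  # speed of light, m/s

def sweep_rate(freq_excursion_hz: float, sweep_time_s: float) -> float:
    """Chirp rate S of one slope of the triangular frequency modulation."""
    return freq_excursion_hz / sweep_time_s

def distance_from_beat(f_beat_hz: float, sweep_rate_hz_per_s: float) -> float:
    """Round-trip delay tau = 2d/c shifts the return by f_beat = S * tau,
    so the open path length is d = c * f_beat / (2 * S)."""
    return C * f_beat_hz / (2.0 * sweep_rate_hz_per_s)

# Illustrative numbers: a 1 GHz excursion swept in 1 ms gives S = 1e12 Hz/s;
# a reflector 10 m away then produces a beat signal near 67 kHz.
S = sweep_rate(1e9, 1e-3)
f_beat = 2.0 * S * 10.0 / C  # beat expected from a 10 m path
```

Deviations of the beat frequency from this constant value along the sweep reveal nonlinearity of the tuning rate, which is what makes the same signal useful for self-calibration.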


Currently, the robot is supplied with two different types of cameras: 360°-panoramic viewing and narrow-angle viewing. In Figure 6, one of the RN versions is shown: the panoramic camera is installed in the bottom part of the robot body and looks up at the spherical reflector that provides panoramic viewing. The next version of the RN will have a panoramic viewer located on a telescopic tower above the monitor.

Figure 6. Robotic Nurse top devices: 1—360°-panoramic viewing; 2—stereo pair; 3—speaker.

This tower will include the IR camera for night vision and temperature monitoring purposes. Additionally, a stereo-pair camera is installed on the upper pipe-frame that is supposed to protect the monitor in case the robot falls down. Besides the panoramic camera, all other cameras will be equipped with zooming, scanning and targeting options. To remotely measure patient temperature, the infrared camera will be used.

2.4. Communication

Modern service-robot communication means are developing in two main directions. The first is the classical model of a head that imitates human gesticulation, speech and shape. However, an on-screen simulated image of a human head, or a video of a real head, has become a very popular alternative, because it is much cheaper and offers new capabilities. On our RN platform, the image of a doctor, nurse or any remote person or object can be transmitted. A session with movies, lectures, instructions or TV programs could be activated, as well. Similar technology was used in [28], but that system was designed as a helper for a human nurse rather than as an autonomous platform like ours.

In [29], an articulated head was demonstrated: a computer animation of a synthesized human head capable of speaking with human beings (Figure 7). On the hardware side, the articulated head consists of a Fanuc LR Mate 200iC robot arm with an LCD monitor as its end effector. By deforming its underlying 3D mesh structure and blending the associated texture maps, a set of emotional face expressions and facial speech movements is created. A text-to-speech engine produces the acoustic speech output, to which the face motions are synchronized.

Figure 7. Articulated head installed on a Fanuc LR Mate 200iC robot arm with an LCD monitor [29].

In [30], the robot TAIZO was developed as a demonstrator of human health exercises. The robot and a human demonstrator stood in front of the audience and demonstrated together. Furthermore, in a human-robot collaborative demonstration, the method of communication between the human and robot can be used to affect the audience. This chapter presents the extension-by-unification method in order to push forward the behaviour-based scripting approach to developing a communication robot.

The goal of human-robot interaction research is to define a general human model that could lead to principles and algorithms allowing more natural and effective interaction between humans and robots [31]. A Symbiotic Information System (SIS) is an information system that includes human beings as an element, blends into human daily life and is designed on the concept of symbiosis. Research on SIS covers a broad area, including intelligent human-machine interaction with gesture, gaze, speech, text command, etc. The objective of SIS is to allow non-expert users, who might not even be able to operate a computer keyboard, to control robots. The problem of robot-human interaction is extremely important for a robotic nurse and similar service robots: child-robot baby-sitters and senior-home service robots. The “human-robot friendly interface” is very popular terminology in the robotics literature. However, de facto, this interface supposes the development of robot behaviour algorithms that create the illusion that the patient deals with another human. If the behaviour is as expected according to human standards, the human disregards the “non-biological” nature of the robotic nurse. This aspect is actively discussed in the literature; an example can be found in [32]. Besides navigation, motion, interaction with the patient and providing service functions, the RN should recognize the patient, doctor or human nurse. This is another complicated problem on the way to RN creation. While the permanent hospital staff can be imaged once and recorded in the robot computer memory, each new patient must be recorded as a new object every time. However, once the information is stored, the robot never makes a mistake in patient recognition, as a new nurse might; at the least, it will set off an alarm if the face is not recognizable.

2.5. Face Recognition

A facial recognition system is a computer application for automatically identifying a person from a digital image [33,34,35]. Typically, it works by comparing facial features from the image with those in a database. Several software packages include a face recognition option, such as Picasa, Picture Motion Browser, OpenBR, Windows Live, etc. Face recognition is not perfect; it works well with full frontal faces and at angles up to 20 degrees off. Other conditions where face recognition does not work well include poor lighting, sunglasses, long hair or other objects partially covering the subject’s face and low-resolution images. Another serious disadvantage is that many systems are less effective if facial expressions vary; even a big smile can render the system less effective. However, all these problems are mostly related to facial recognition of uncontrollably moving unknown persons. In the conditions of a hospital, on the other hand, multiple images of a patient or staff member can be created without any difficulty. For the RN, facial recognition software available from photo-camera or computer manufacturers can be successfully applied [34,35].

The first step of any face recognition or visual person identification system is to locate the face in the images [33]. Visual detection of faces has been studied extensively over the last decade. Many commercially available photo cameras have a facial recognition subsystem [34,35].
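Once a face is located, the match-against-a-database step can be sketched as a nearest-neighbour search over face feature vectors (embeddings produced by whatever detector/recognizer is in use; the names, threshold and toy vectors below are illustrative, not from the RN software):

```python
import math

def euclidean(a, b):
    """Distance between two face feature vectors of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, database, threshold=0.6):
    """Return the enrolled identity closest to the probe embedding,
    or None if no stored face is close enough (i.e., raise an alarm)."""
    best_name, best_dist = None, float("inf")
    for name, embedding in database.items():
        d = euclidean(probe, embedding)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# Toy database: staff imaged once, patients enrolled on admission.
staff = {"nurse_a": [0.10, 0.20, 0.30], "patient_b": [0.90, 0.80, 0.70]}
```

A probe close to a stored vector returns the enrolled identity, while a probe far from every stored vector returns `None`, which maps onto the alarm behaviour described above for unrecognizable faces.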

The detailed analysis of the technical and computational problems and their solutions for face and scene recognition is outside the scope of this paper. The most recent information can be found in three cited books [13,14,15] and in the International Journal of Advanced Robotic Systems.

3. Next Step RN Evolution

During the last decade, there have been many innovations involving the miniaturization of different medical instruments [36]. Spectrometers, microscopes, optical coherence tomographs, ultrasound, X-ray, optical and THz devices, lasers, sensors, and so on, are becoming smaller, with more efficient power consumption and more precise measurements. Tremendous progress has been achieved in biomedical imaging and other data processing. All these achievements make it possible to use such devices on a movable robotic platform. These robots are still much less capable than trained humans. However, equipped with devices like a microscope, spectrometer or X-ray monitor, they can “see the invisible”. The next tremendous task is to teach the robots to “interpret and understand” what they can see and a human cannot. The other advantage of a robot is the speed of analysis: a doctor can overview a microscopic field of view, say 50 × 50 microns, and make a conclusion about cancer cell presence within several seconds. The robot can do a similar job over a million-times larger area (50 × 50 mm) in approximately the same time.

This paper describes the first basic robotic platform aimed at performing the nurse’s functions. However, CrossWing has already started the conceptual design of the next, more advanced version of the Robotic Nurse—the “Medical Technician” robot (see photo in Figure 8). This robotic platform will be supplied with different diagnostic tools: ultrasound, X-ray, optical and THz devices. It is being designed to operate with a non-linear microscope, coherent tomograph, laser-ultrasound scanner and polarimeter. In all these technologies, we plan to use fiber laser-based systems. Such fiber devices ideally match the requirements of operating within robotic arms, being immune to mechanical vibrations and being the most effective, lowest-power-consumption systems. The robot will retain all capabilities of the initial “communicator/sensor” RN version, with the additional options of a mobile diagnostic lab and medical technician.

Figure 8. The Robotic Nurse “medical technician” version will be supplied with different diagnostic instruments: X-ray, ultrasound, optical and THz devices.

In total, the final version of the robotic technician will have six arms: two mechanical arms of high strength, a high-precision and high-stability arm for different lasers and medical instruments, a manipulator with an endoscope, an arm for extracting coins and other objects from the gullet and an arm with a vein visualizer and automatic injector system. Therefore, the best name for this machine is probably “Shiva”.

4. Distributed Robotic Nurse

4.1. General Structure of the Robotic Nurse Network

Besides the robot advantages described in Section 3, the most significant one may be that a single robot can be present at any point of the hospital, communicating with remote sensors through the telecommunication network. To minimize the cost of the system and to increase mobility, reliability and accuracy, a mobile robot with multiple additional stationary “eyes, ears and noses” has been proposed. These “organs of sensing” are boxed into special units: stationary nodes that are installed in different hospital locations and are the elements of the network that monitors the hospital (security functions) and patient conditions (healthcare functions). The structure and internal organization of this system follow the principles of a two-level network.

4.2. Homogeneous Sensory Network

To provide extremely reliable communication, multichannel transmitters and receivers (radio- and optical-band) will be used on the robotic platform. Simultaneously, these devices will be used for robot navigation and environment evaluation. In the current version of the RN, the main efforts are focused on the development of two-way video-chat communication and semi-automatic navigation systems; the specification of the protocols and technologies for data acquisition, processing and transmission; and the development of a new “Movable Robot–Stationary Sensors Network” architecture for the sensory network.

The proposed RN Network is similar, but “inverse”, to the classical cell-phone network. The difference is that in a cell-phone system, the central station is a high-power immovable tower and the cell phones are movable low-power units, all communicating with the tower. In the Nurse Robotic Network, the central station (the robot) is movable and the nodes (the analogue of cell phones) are immovable (Figure 9). The nodes can communicate through the robot or directly between themselves. Besides the communication functions, the robotic network collects information about the environment and patients, analyses the data and transmits them to the doctor or to the nurse station. Based on this data analysis, the robot can make a decision and take actions: raise an alarm, move to the critical patient and send a message through paging, telephone, local security and/or internet systems. Such a network provides the unprecedented capability of simultaneous and permanent monitoring of a large number of patients located in different places. A very important function of the network is to support communication through the stationary sensory nodes (for example, in hospitals or other multi-floor concrete-metal buildings). Such a system provides guaranteed robot wireless connection and control. An innovative system of robot navigation is based on a combination of information obtained from optical rangefinders (near-field) and radio signals from stationary sensory nodes (far-field). At the same time, the nurse station plays the role of a data accumulation and storage server.

Figure 9. Diagram of the Robotic Nurse sensory network. 1a–1d—stationary sensory nodes; 2a,2b—movable robots; 3—nurse station; 4—patient room.

4.3. Stationary Sensory Nodes

A network of stationary nodes (sensory terminals) serves for continuous visual monitoring of the scene, atmospheric conditions, remote measurement of patient temperature and control of breathing-gas content. These data are transmitted to the nurse station and to the movable robot. In the case of an emergency, if no nurse or doctor is available, the robot moves to the critical patient and tries to contact a doctor through telephone, paging or a local sound system. The stationary nodes are also used for in-hospital robot navigation by triangulation of the signals from several nodes. In total, the sensory node contains: (1) a telecom unit; (2) a multi-gas sensor unit; (3) an environmental parameters sensor unit; (4) a visible and IR-camera unit; (5) a data processing unit; and (6) radio/sound alarm devices. An identical set of instruments is located on the movable platform. However, the stationary nodes can contain more sensors and can operate at a higher rate.
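Node-based position fixing can be sketched as 2D trilateration: the distances to three fixed nodes define three circles, and subtracting one circle equation from the other two yields a linear system for the robot coordinates. The node positions and ranges in the usage example are illustrative:

```python
def trilaterate(p1, r1, p2, r2, p3, r3):
    """2D position from distances to three fixed nodes (linearized)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the first circle equation from the other two gives
    # two linear equations A [x, y]^T = b.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        raise ValueError("nodes are collinear; position is ambiguous")
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

With nodes at (0, 0), (10, 0) and (0, 10) and ranges measured from a robot standing at (3, 4), the solver recovers (3, 4); in practice, range noise would be handled by least squares over more than three nodes.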

4.4. Inhomogeneous Network

An inhomogeneous sensory network is a set of sensors based on different physical phenomena [3,5,6]. The sensors are capable of data cross-exchange and use information received from other sensors for improving the accuracy and reliability of measurements. All information can be used for a synergistic description of the monitored phenomenon or event.

The human nervous system is a great example of an inhomogeneous network. We use our senses of smell, touch, taste, hearing, balance and vision daily. These sensations are registered by different sensors, and the information is sent to the brain, where it is processed, recognized and interpreted. After data processing, the brain activates the muscles. It is interesting to note that the human body runs under two “operating systems”: the brain and the spinal cord. The brain analyses the environment and decides what to do. Once the decision is made, the spinal cord sends signals to the muscles, executing the decision. Usually, the decisions are based on information extracted from sensors specializing in different domains, i.e., analyses of electromagnetic fields, mechanical vibrations, chemical reactions, pressure, gravity, etc.

Let us consider the example of the multi-gas sensor, as described in [37]. At first, the environmental sensors measure ambient temperature, pressure, humidity and background radiation in different spectral ranges. Then, the temperature and pressure data are used to estimate the number of molecules per cubic centimetre (at the current atmospheric conditions) and to choose the appropriate algorithm for the gas-concentration calculation. At the same time, the laser rangefinder estimates the beam path length. Moreover, the same laser radiation is used to precisely calibrate the laser tuning rate. This information is cross-checked against a dynamic measurement of the diode laser temperature. After the spectral calibration is done, the data processing unit compares the acquired spectrum with the library and identifies the gas. The final relative concentration of the monitored gas can then be precisely estimated, taking into account the measured specific molecule density and the data from the water-vapour, ambient-pressure, ambient-temperature and background-radiation sensors. This example explains how an inhomogeneous network of different sensors precisely estimates specific parameters. The principles of operation of such instruments can be summarized as “all-in-one” and “one-for-all”.
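The first two processing steps (the molecule number density from the ideal-gas law, then the relative concentration from the measured absorption over the rangefinder-determined path) can be sketched as follows; the cross-section and path length in the comments are illustrative values, not taken from [37]:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def number_density_cm3(pressure_pa: float, temp_k: float) -> float:
    """Molecules per cm^3 at the measured ambient conditions (ideal gas)."""
    return pressure_pa / (K_B * temp_k) * 1e-6  # convert m^-3 to cm^-3

def relative_concentration(transmittance: float, cross_section_cm2: float,
                           path_cm: float, pressure_pa: float,
                           temp_k: float) -> float:
    """Beer-Lambert law: I/I0 = exp(-sigma * N * L) gives the absorber
    density N, which is normalized by the total molecule density."""
    absorber_cm3 = -math.log(transmittance) / (cross_section_cm2 * path_cm)
    return absorber_cm3 / number_density_cm3(pressure_pa, temp_k)
```

At 0 °C and 1 atm, `number_density_cm3` returns the Loschmidt value of about 2.69 × 10¹⁹ molecules/cm³, which is the baseline against which the absorber density is expressed as a relative concentration.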

5. Instruments and Technologies

5.1. General Principles

Any multifunctional inhomogeneous network located on a movable robotic platform contains a set of different sensors aimed at simultaneously monitoring a significant number of parameters—external (environmental) and internal (of the robot and network itself). Here, by “sensor”, we mean a device capable of measuring a specific parameter or a set of parameters. In this sense, the spectrometer is a multifunctional sensor. For example, in the case of a gaseous-mixture investigation, the recorded optical absorption spectrum (spectral line parameters) contains information that can help identify the absorbing substances, evaluate the relative concentrations of the detected substances and estimate the environmental temperature, pressure and atmospheric transparency. In some cases, the reflected laser light contains information about the density of dust, the scattering particle size and the light path length (from the laser to a reflecting object and back to the detector).

At the same time, the accuracy of the measurement of all the above-mentioned parameters is significantly improved when an independent sensor measures the same parameter using other physical processes. For example, an independent thermometer (e.g., a thermocouple) confirms the spectrometer’s temperature estimation and simultaneously improves the accuracy of the gas-concentration measurements; analogously, an independent rangefinder significantly improves the accuracy of the remote measurement of gas concentration. This principle was used when the Mars rover prototype was being developed [3]. Now, we follow this concept in designing the Robotic Nurse platform.
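One standard way to realize this improvement, assuming the independent sensors have uncorrelated errors, is inverse-variance weighting: the fused estimate always has a smaller standard deviation than either input. A minimal sketch (the temperature values are illustrative):

```python
def fuse(measurements):
    """Inverse-variance weighted average of independent measurements
    of the same quantity; each entry is (value, standard_deviation)."""
    weights = [1.0 / sigma ** 2 for _, sigma in measurements]
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / sum(weights)
    sigma = (1.0 / sum(weights)) ** 0.5
    return value, sigma

# e.g., a spectrometer temperature estimate fused with a thermocouple reading
fused_t, fused_sigma = fuse([(26.4, 1.0), (25.8, 0.5)])
```

Here the fused uncertainty falls below that of the better sensor alone, which is the quantitative content of the “independent sensor improves the accuracy” statement.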

5.2. Innovations

The principle of a synergistically operating device can be formulated as “1 + 1 = 3”. In other words, the combination of two devices results in the capability to measure a new parameter that each separate device cannot. This concept yields innovative properties, even with the use of well-known devices or technologies. Four main innovations have been developed in this work: (1) interlinked sensors—similar to the human sensing/nervous system—capable of analysing certain processes dynamically and predicting some events; (2) new spectroscopy data-processing algorithms; (3) new lasers and sensors aimed at operating on the robotic platform in “non-friendly” environmental conditions; and (4) an extremely high signal-to-noise-ratio spectrometer based on a volume holographic grating.

To illustrate issue (1), let us consider a simple example. If your home thermometer, barometer and humidity meter show values of 26 °C, 750 torr and 75%, respectively, then, considering these devices individually, one can conclude that the weather is beautiful. However, if during the previous hour the pressure was falling from 770 to 750 torr, the humidity was increasing from 55% to 75% and the temperature was rising by 3 °C, a hurricane is approaching. The dynamics of the pressure, humidity and temperature readings tell you what happens next: the beautiful weather is just the beginning of a critical event. Some commercially available home weather stations use simple logic: if the atmospheric pressure falls, rainy weather will follow. Sometimes this is true, but in general, it is not. To analyse a complicated process, it is necessary to monitor many parameters and, preferably, dynamically. Existing fire-alarm systems based on the analysis of gas content or smoke detection cannot distinguish a “stove-in-fire” from a “well-done BBQ” being prepared. Additional sensors of light and temperature do not help, either. Only an analysis of the light spectrum and the character of the light’s modulation can confirm what is going on.
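The hurricane example boils down to examining trends rather than instantaneous readings. A sketch, with illustrative thresholds of our own choosing, raises the flag only when all three quantities drift together over the logged hour:

```python
def trend_per_hour(samples):
    """Least-squares slope of a log of (time_h, value) pairs."""
    n = len(samples)
    mt = sum(t for t, _ in samples) / n
    mv = sum(v for _, v in samples) / n
    num = sum((t - mt) * (v - mv) for t, v in samples)
    den = sum((t - mt) ** 2 for t, _ in samples)
    return num / den

def storm_warning(pressure_log, humidity_log, temp_log):
    """Current readings may look benign; it is the joint dynamics of
    all three logs that signal the approaching critical event."""
    return (trend_per_hour(pressure_log) < -10.0     # torr/h, falling fast
            and trend_per_hour(humidity_log) > 10.0  # %/h, rising
            and trend_per_hour(temp_log) > 1.0)      # deg C/h, rising
```

Feeding in the numbers from the example (pressure 770 to 750 torr, humidity 55% to 75% and temperature rising 3 °C over one hour) triggers the warning, whereas a flat hour at 750 torr, 75% and 26 °C does not.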

A typical public opinion is that a network of different sensors does not contain any new idea—each sensor is available commercially, and combining them is just an easily solvable technical problem. However, simply collected together, the sensors require a lot of space and power and many interfaces that provide the sensor-robot communication and “translate” the data into a unified language to be transmitted to, and understood at, the control station. To synergistically use numerous different sensors capable of cross-exchanging and cross-using information, it is necessary to know in detail the principle of operation of each of them and to design a joint system that can generate and control parameters that no separate sensor can detect in principle. The next section demonstrates the sensory hardware and operational diagram in more detail.

5.3. Sensor Block Diagram

This section shows, in more detail, the general structure of the multifunctional sensor. A block diagram of the sensor is shown in Figure 10. The sensory system includes: A—a unit with sensors for temperature (external and diode laser), pressure, humidity and background environmental radiation; B—a data acquisition unit with photodiodes and amplifiers for the wanted signal, the laser and rangefinder reference signals and an internally scattered radiation signal; C—lasers with interfaces and controllers; D—a spectrometer with camera and laser absorption spectrographs; E—power supplies and electronics; F—computer and electronics; and G—a communication and navigation unit with transmitters, receivers and rangefinders (if necessary).

For monitoring atmospheric conditions, we used miniature sensors of pressure, temperature and humidity placed on the multi-gas sensor optical board. However, commercially available weather stations can also be used for weather-condition monitoring. Among them are professional portable systems that include GPS as well, for example, the NM150 WS [38]. The New Mountain Innovations NM150 Ultrasonic Weather Station is intended for hazmat and fire mobile command centres, storm chasers, chemical spray operations, remote severe environments, maritime operations and more. This station can be adapted for use on reconnaissance robotic platforms working in outdoor conditions. For the RN platform, simpler versions [39] are quite sufficient.

Figure 10. Block diagram of the sensory system.

5.4. Gas Sensor

5.4.1. What for?

The gas sensor located on the RN platform is intended to perform several functions, among them:

  • monitoring of the regular atmospheric gaseous conditions;

  • alarming in case of emergency (fire, chemical agents, etc.);

  • monitoring of the content of patient exhaled gases for disease-diagnostic purposes.

In the third case, the presence of certain molecules in human breath can serve as an indicator of a specific disease [40]. The concentrations of these markers are at the level of a few ppm and are hardly detectable in non-lab conditions. However, the measurement of the relative concentrations of the regular atmospheric gases present in human breath (oxygen, carbon dioxide, water vapour, nitrogen) can also provide some information about patient conditions. These gases can be easily estimated, as their partial ratios range from a few to several tens of percent. There are different types of sensors that can be applied to perform the functions mentioned above.

5.4.2. Ultrasound, Semiconductor, Electrochemical and Optical

Among all types of commercially available gas sensors, the most advanced are those that accept different solid-state, electrochemical and/or catalytic bead sensors that can be inserted into the same reading and data-processing unit [41,42,43,44]. With such a set of exchangeable sensors, one device can monitor up to 100 gases. These types of sensors are continuously being improved, both at the hardware level and at the data processing and noise suppression levels [45,46].

However, the use of these sensors on an autonomous robotic platform is questionable: every time a gas or group of gases needs to be monitored, a specific sensor must be installed. Different gases can interact with a sensor in a similar manner, and the response to new gases or vapours is unpredictable. Typically, the system needs periodic calibration and must take a gas sample, so through-the-sensor gas pumping is used. Finally, the size, weight and power consumption of the gas sensor alone are of the same order of magnitude as those targeted for the entire multifunctional system.

Hollow-core photonic bandgap fibers (HC-PBFs) have emerged as a novel technology in the field of gas sensing [47]. The long interaction path lengths achievable with these fibers are especially advantageous for the detection of weakly absorbing gases. In [47], the good performance of an HC-PBF in the detection of the ν2 + 2ν3 band of methane, at 1.3 μm, was demonstrated; the Q-branch manifold, at 1,331.55 nm, was targeted for concentration monitoring purposes.
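The benefit of a long interaction path follows directly from the Beer-Lambert law: the absorbed fraction of light grows with path length, so a long fiber coil makes a weak line measurable. The sketch below is an illustration with a made-up absorption coefficient, not measured HC-PBF or HITRAN data.

```python
import math

def absorbed_fraction(alpha_per_m: float, path_m: float) -> float:
    """Beer-Lambert law: the fraction of light absorbed over path L
    is 1 - I/I0 = 1 - exp(-alpha * L)."""
    return 1.0 - math.exp(-alpha_per_m * path_m)

# Illustrative absorption coefficient for a weak line (hypothetical value).
alpha = 1e-3  # 1/m

# A 10 cm free-space cell versus a 10 m hollow-core fiber coil:
weak_signal = absorbed_fraction(alpha, 0.1)    # ~1e-4, easily lost in noise
fiber_signal = absorbed_fraction(alpha, 10.0)  # ~1e-2, roughly 100x larger
```

For weak absorption, the signal scales almost linearly with path length, which is why a several-meter fiber outperforms a compact free-space cell.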

A new real-time human respiration analysis method using a high-sampling-rate gas concentration sensor based on ultrasound has been demonstrated in [48]. The sensor provides a 1 kHz gas concentration sampling rate. The author notes that such a rate could not be attained by previously proposed gas concentration measurement methods, such as infrared or semiconductor gas sensors, whose response times were at best a few hundred milliseconds. To analyse human respiration gas variation patterns, a newly developed gas-mask-type respiration sensor was used; it recorded medical symptoms in subjects suffering from asthma, hyperventilation and bronchial asthma.

Firstly, this technology cannot be used for remote gas concentration measurement. Secondly, a critique of the IR technology is based on the capabilities of a commercially available device, the capnograph [49]. Capnography is the monitoring of the concentration or partial pressure of carbon dioxide (CO2) in respiratory gases, usually presented as a graph of expiratory CO2 plotted against time. Capnographs usually work on the principle that CO2 absorbs infrared radiation. The analysis is rapid and accurate, but the presence of nitrous oxide in the gas mix changes the infrared absorption via the phenomenon of collision broadening, which must be corrected for.

The technology used in our research avoids all these problems. Detection of gases with tunable diode lasers in the range of 1.55–1.65 microns provides enough resolution to determine CO, CO2, N2O and other gases absorbing in this range; the data acquisition rate can reach MHz values and higher; and a single laser is capable of detecting CO2 along with several other gases that have absorption spectra in the same spectral range [3].

To support RN security functions, commercially available chemical warfare and toxic industrial chemical sensors can be applied, for example, the RAID-M 100 and the SABRE 5000. The RAID-M 100 [50] is a hand-held chemical agent detector based on the principle of Ion Mobility Spectrometry (IMS). It is able to detect, classify, identify, quantify and continuously monitor concentration levels of the substances profiled in its on-board libraries; hazard levels are indicated by an incremental bar display with eight segments. The SABRE 5000 [51] is a portable trace detector that can detect threats from explosives, chemical warfare agents, toxic industrial chemicals or narcotics in approximately 20 seconds. Proper sample collection is key to the success of any trace detector; the SABRE 5000 can analyse either trace particle or vapour samples, allowing the operator to apply the sampling technique best suited to the suspected substance.

5.4.3. Spectroscopic Sensors

Spectrometers make up another group of analysing instruments. The tremendous body of data accumulated in absorption spectroscopy research was used to create the high-resolution transmission molecular absorption (HITRAN) database, a simulator of the absorption spectra of important atmospheric and industrial gases [52]. The latest version (2008) was used in this work. The NIST Atomic Spectra Database should be noted as a very useful tool, as well [53]. The basis of absorption spectroscopy is well developed [54,55,56,57,58,59]; however, to successfully use this technique in quantitative sensory measurements with a miniature multi-gas sensor, the theory should be re-analysed, taking into account capabilities of modern instruments that were not previously available. Lasers with super-fine spectral linewidths or femtosecond pulse durations and spectrometers with ultra-high signal-to-noise ratios bring new capabilities to classical spectrometry.

Classical high-resolution spectrometers are large and heavy devices (several tens of kilograms) that cannot be used as portable systems [60,61,62]. Here, we consider only spectroscopy devices that are small and relatively lightweight. Even so, most of them cannot be installed directly on the robot: they must be re-designed with new lightweight, but robust, bodies, with completely new fast, low-power electronics and with new data processing algorithms that provide high accuracy and reliability in “field” operational conditions. As an example of a good compromise between size/weight and sensitivity/resolution, the portable P&P Optica spectrometers [63] were chosen for use in the current project. Laser spectrometers are prospective solutions for mobile robot applications. One example is a fast-response, high-precision, tunable diode-laser spectrometer developed for field measurements of methane and nitrous oxide fluxes [64]. That instrument uses a multiple-pass absorption cell to provide a long absorption path length (36 m); the problems, though, are its size and its lack of immunity to mechanical vibrations. Another good example of a laser spectrometer demonstrates measurements of water-vapour concentration, temperature and line-shape parameters using a tunable diode laser [65].

5.5. Lasers for Remote Sensing

There are two main types of lasers applicable for absorption spectroscopy: semiconductor (diode) and solid-state (crystal and fiber).

5.5.1. Single-Frequency Laser with Nano-Selector

A highly efficient, 1 W-class, single-frequency, solid-state laser has been developed to serve as an excitation source for spectroscopic measurements, for remote ultrasound detection, as a rangefinder and for remote navigation of the robot arm. Initially, this laser was developed to operate as a master oscillator for satellite control systems [66,67]. It is small and lightweight: with a 1 mm-long Nd:YVO4 crystal and a 1 cm-long optical cavity, the laser provided 0.6 W of continuous-wave (CW) single-frequency output.

This laser is based on a unique principle of operation. An absorbing nano-structure with a thickness significantly smaller than the standing-wave period is placed in the linear laser cavity. If the position of the thin absorbing film coincides with a node of some longitudinal mode, the losses for that mode approach zero, and this single mode starts to operate; other modes cannot overcome the lasing threshold. The laser system is highly effective, as up to 85% of the total multimode power generated without a selector is accumulated in a single mode. At standard atmospheric conditions, gas absorption lines in the near-IR range have an approximately 1 GHz linewidth, with spacings between lines on the order of 25 GHz. The proposed laser demonstrated up to a 150 GHz total tuning range with an approximately 12 GHz hop-free sweeping range [66]. Hence, 1–2 lines could be scanned in the smooth tuning mode and around six lines could be detected in the hop tuning mode. This is three to four times fewer lines than for vertical-cavity surface-emitting laser (VCSEL) diodes, but the output power is more than a hundred times higher.
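The node-selection mechanism can be illustrated numerically. In the simplified model below (with hypothetical mode numbers and gain dispersion ignored), the loss a thin film introduces for longitudinal mode m is taken as proportional to the standing-wave intensity at the film position; the mode whose node coincides with the film sees near-zero loss and wins the mode competition.

```python
import math

def film_loss(m: int, z: float, cavity_len: float) -> float:
    """Relative loss for longitudinal mode m caused by a thin absorbing
    film at position z: proportional to the standing-wave intensity
    sin^2(pi * m * z / L), which vanishes at the nodes of mode m."""
    return math.sin(math.pi * m * z / cavity_len) ** 2

def lasing_mode(z: float, cavity_len: float, modes) -> int:
    """The mode with the lowest film loss reaches threshold first."""
    return min(modes, key=lambda m: film_loss(m, z, cavity_len))

L = 0.01       # 1 cm cavity, as in the laser described above
z = L / 100.0  # film placed at a node of (hypothetical) mode m = 100
```

Moving the film along the cavity axis selects a different mode, which is the basis for the tuning behaviour described above.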

This system demonstrated excellent parameters in the lab and on a stationary robot. However, in harsh conditions the thin absorbing films burn out quickly: in the presence of mechanical vibrations, the film passes through multiple standing-wave maxima, is strongly heated and evaporates. In the current project, a scattering nano-selector has been proposed instead. This selector does not absorb the laser radiation and is thus protected from thermal destruction (see the prototype in Figure 5).

5.5.2. Fiber Lasers

For on-robot applications, fiber lasers can be considered the best choice, especially for mounting on the robot arm. In the small- and mid-power ranges, fiber lasers are the most effective, simple and robust laser systems. An important benefit is their “all-in-one” architecture: holders, bulky mirrors, gratings, modulators and other heavy optical elements are not necessary, which dramatically improves robustness, reliability and efficiency. The most impressive advantage of the fiber laser is the capability to use a cavity several meters long that is practically immune to mechanical vibrations. Finally, this system is significantly less sensitive to temperature fluctuations than diode lasers [68].

5.5.3. Diode Tunable Lasers

Tunable diode lasers have been considered because of their small size and relatively low power consumption. Narrow-linewidth tunable diode lasers provide specific gas-line monitoring without interference from other molecular species. However, the disadvantage of these lasers is their relatively small output power (several mW) in single-mode operation. There are two highly developed types of diode lasers applicable in absorption spectroscopy: vertical-cavity surface-emitting lasers (VCSELs) [57,58,59] and distributed feedback (DFB) lasers [69,70]. Recently emerging VECSELs (vertical external cavity surface emitting lasers) combine advantages of both of the above-mentioned systems, but suffer from strong internal overheating and cannot be routinely used for in-field applications [71].

5.6. Multi-Gas Spectroscopic Sensor

5.6.1. Spectroscopic Sensor on the Robot

To monitor populated areas, such as houses, offices, educational and public institutions and hospitals, the alarm sensor should analyse, first of all, the atmospheric content of oxygen, carbon monoxide and dioxide, methane, nitrous oxide (agricultural zones or surgery rooms) and water vapour. A general concept and detailed description of the multi-gas sensor was given in [3,37] (Figure 11). Here, we demonstrate some benefits of the robot platform for gas measurement technology.

Figure 11. Photo of multigas sensor prototype. The top optical board is shifted out to demonstrate the electronic board.

Several parameters were specified that the RN should monitor in real time and remotely. Among them: patient breathing gas content and intensity, temperature, pressure (desirable) and atmospheric/room conditions. The next set of parameters that should be continuously monitored comprises poisonous or dangerous gases that could appear in a hospital in case of a technological or natural emergency: carbon monoxide and dioxide, methane, nitrous oxide (laughing gas), some sulphur compounds and hydrogen cyanide. Typically, the list of dangerous gases that appear in case of fire depends on the building construction materials and the professional activity of the company, hospital, etc., and could be very long. However, we decided to include the gases mentioned above as the priority list for hospital conditions; all others can be connected later through the electronic hubs or motherboard. It is logical to suppose that the “fire alarm” gases should be continuously monitored from the stationary nodes, while the “patient atmosphere” should be monitored from both the RN platform and the stationary nodes.

The laser sensor provides accuracy better than 99% with the new multi-line absorption spectroscopy technology [3,37]. To achieve such accuracy, it uses the atmospheric parameters acquired by the weather station. The multi-component alarm sensor has been designed to recognize gases and to measure gas concentration (O2, CO2, CO, CH4, N2O, C2H2, HI, OH radicals and H2O vapour, including semi-heavy water), temperature, pressure, humidity and background radiation from the environment.

The combined oxygen absorption line scan is shown in Figure 12. The signal was acquired after the laser beam passed a 2 m path in air. The data of each separate scan were recorded in the computer memory and then processed to obtain a smooth absorption line profile.

Figure 12. Oscillogram of consecutive scans of the same oxygen absorption line in the range of approximately 763 nm. After summation of 20 scans at the middle point (see [3]) and averaging, this spectrum (red, bell-like) has been used to measure the oxygen concentration around the robot location.

First, the parameters of each absorption line were found: maximum intensity, width at half maximum and spectral position. Every time a new scan is recorded, the central point is found and the currently scanned line profile is added to the previous ones, all linked at this central point. Next, the oxygen concentration is calculated, taking into account the environmental parameters: temperature, pressure and humidity. With this algorithm, fluctuations in the laser diode intensity or repetition rate, false signals and noise do not distort the line's real profile. By scanning up to nine oxygen absorption lines in the range of 761–763 nm, the oxygen concentration was found with an error below 1% [37]. The oscillogram in Figure 12 shows four consecutive scans of the same oxygen absorption line and the result of the summation of 20 scans.
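The scan-accumulation step described above can be sketched as follows. This is a simplified illustration that assumes the line centre is simply the sample of maximum absorption; the helper name is ours, not from the original implementation.

```python
def align_and_average(scans):
    """Link each recorded scan at its line centre (index of the peak
    absorption sample) and average the overlapping region, so that
    jitter in the trigger or repetition rate does not distort the
    accumulated line profile."""
    centers = [max(range(len(s)), key=s.__getitem__) for s in scans]
    left = min(centers)                                      # samples left of centre
    right = min(len(s) - c for s, c in zip(scans, centers))  # and to the right
    return [sum(s[c + off] for s, c in zip(scans, centers)) / len(scans)
            for off in range(-left, right)]

# Three identical line profiles recorded with different trigger offsets:
scans = [[0, 1, 2, 3, 2, 1, 0],
         [1, 2, 3, 2, 1, 0, 0],
         [0, 0, 1, 2, 3, 2, 1]]
profile = align_and_average(scans)  # -> [1.0, 2.0, 3.0, 2.0, 1.0]
```

Because each scan is re-anchored at its own centre before averaging, a drift of the scan trigger shifts the window but leaves the accumulated profile unchanged, which is the property exploited in the text.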

5.6.2. Appropriate Spectral Range

To measure any gas concentration, the gas must first be recognized: the absorption lines belonging to the specific gas must be identified among all other absorption spectra. While designing the sensor, the appropriate spectral ranges for simultaneous monitoring of several gases should be specified. To estimate how many gases can be monitored with one laser, a Sacher Lasertechnik DFB (distributed feedback) diode laser [69], tuneable in the range of roughly 1570–1670 nm, has been used.

The combined picture of all gases visible in this range (through the HITRAN database [52]) is shown in Figure 13. Potentially, up to eight gases may be monitored in this range: CH4, CO2, N2O, HI, CO, H2O vapour, OH radicals and C2H2. Oxygen was monitored in the 761–763 nm range with a separate diode laser.
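Selecting a spectral window for a given laser amounts to filtering a line list against the tuning range. A minimal sketch follows, with placeholder gas names and line positions chosen purely for illustration (they are not HITRAN values):

```python
def lines_in_range(line_list, lo_nm, hi_nm):
    """Return, per gas, the absorption lines that fall inside the laser
    tuning range [lo_nm, hi_nm]; gases with no reachable line are dropped."""
    hits = {gas: [w for w in lines if lo_nm <= w <= hi_nm]
            for gas, lines in line_list.items()}
    return {gas: lines for gas, lines in hits.items() if lines}

# Placeholder line positions (nm) for illustration only -- not HITRAN data.
demo_lines = {"gas_A": [1580.2, 1650.9], "gas_B": [1602.4], "gas_C": [1700.1]}
reachable = lines_in_range(demo_lines, 1570.0, 1670.0)
# gas_C has no line inside the 1570-1670 nm tuning range and is dropped.
```

Running this kind of filter over a real line database is how one estimates, as in the text, how many gases a single tunable laser can cover.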

Figure 13. General view of the absorption spectra of gases (high-resolution transmission molecular absorption (HITRAN) simulation). The arrows indicate several absorption lines of each specific gas, which are appropriate for detection.

5.6.3. Identification of Operational Spectral Range

In [3], the principles of self-calibration of an optical spectroscopic sensor were described. Here, we demonstrate how this technology may be implemented practically with high reliability and at low cost. Usually, a laser reference beam is used to monitor possible intensity variations of the laser output: a small portion of radiation is split from the laser beam and directed to a reference detector, a separate photodiode located inside the sensor body. However, this signal does not contain any information about frequency variations of the output, which may or may not be connected to the intensity modulations.

The VCSEL lasers that are used in portable sensors can be tuned through approximately five to 50 gas absorption lines. This is quite enough to visualize a characteristic spectrum that identifies the gas. In a portable commercial sensor, the measurement technique should exhibit economical power consumption and minimal cost. In some cases, two separate low-power lasers with relatively small tuning ranges may be used to recognize a gas “fingerprint”. In this case, diode lasers with slightly different output wavelengths can be used to cover the total range of interest (see Figure 14). As an example, the methane absorption spectrum in the range of 6130 cm−1 has been taken. Let one of the lasers scan lines 1–2 and the second one scan lines 4–5 (bars marked “i” in Figure 14a). In this case, there will be a sequence of signals with individual ratios of amplitudes and wavelength spacings, as shown in Figure 14b. If the central wavelengths shift for some reason (for example, because of ambient temperature changes), the lasers will scan in regions “ii” and a new joint oscillogram will be generated (Figure 14c). This new signal has specific amplitudes and spectral positions, demonstrating the unique structure of the methane absorption spectrum.

Figure 14. Diagram (a) demonstrates several lines of the methane absorption spectrum. Diagrams (b) and (c) explain the dual-stage process of two-laser tuning for line identification purposes. Line strengths are not to scale.

For each scanned area, a specific “fingerprint” of the investigated gas should be recorded. It is very important that the signs of the signal intensity variations are tied to the laser wavelength tuning direction. Even if there are problems with the signal amplifier, the absolute values of the detected signals will change synchronously, but the amplitude ratios will remain the same. After the lines are identified, the gas concentration may be calculated from any line.

5.7. Spectrometer—The Key Element

The current RN platform (Figure 1) has no arms or any “touching” probes. However, it will be supplied with a basic set of sensors. The multifunctional portable spectrometer will be a part of the smart sensory system, with further development into the Robotic Technician platform. In the future, the RN will perform not only a nurse's functions, but will also be the main element of the hospital security system. It should monitor biological agents (bacteria/virus presence) and dangerous gases and remotely detect explosive materials. The Robotic Nurse should be able to work in epidemic conditions or during military actions. To provide all these functions, the RN will be supplied with a specially designed spectrometer. The bench-top prototype of this innovative spectral sensor has been developed by P&P Optica, Inc. [63]. Three samples of the argon lamp spectrum obtained with different spectrometers are compared in Figure 15; the source and conditions of spectra recording were the same. The P&P Optica Inc. spectrometer (referred to here as PPO) has been compared with two commercially available spectrometers from other manufacturers. The PPO model has approximately the same dimensions as that of competitor 1 (C1), but provides a better signal-to-noise ratio and a significantly higher resolution. In the case of C2, which is several times larger than PPO, the noise background is still dramatically higher (note that the scale is logarithmic). The spectral intensities were normalized at the line near 765 nm. With an unprecedented signal-to-noise ratio, reaching up to 100,000, the PPO spectrometer can, in some cases, detect spectral lines theoretically predicted by the NIST spectral line database [53], but not experimentally observed before. All these factors influenced our choice to use the PPO spectrometer on the robotic platform as a bio-agent, toxic substance and gas identifier.

Figure 15. Normalized dark-subtracted spectrum of argon lamp collected by three different spectrometers.

Let us consider in more detail what the RN supplied with a multifunctional spectrometer can potentially detect or continuously monitor.

5.8. Applications

5.8.1. Spectroscopy Biomedical Applications

There are four main types of optical spectroscopy that have been successfully used in reconnaissance activity: (1) absorption [72]; (2) fluorescence [73]; (3) Raman [74]; and (4) hyper-spectral [75]. Several applications combine spectroscopy with other techniques, such as microscopes or telescopes. Especially important for biomedical applications is nonlinear microscopy [76], which is typically non-invasive for bio-tissue, demonstrates high resolution and can investigate very fast processes of fluorescence or energy migration.

The main advantage of spectroscopy in on-robot applications is the capability of remote sensing. Firstly, light propagates faster than any other known carrier that can be used for sensing. Secondly, the spectrometer can detect absorption, fluorescence or Raman spectra and identify several substances simultaneously. Thirdly, a specific spectrum is a “fingerprint” of a substance; it does not depend on the presence of other substances, changes in atmospheric conditions, etc. Finally, absorption and fluorescence spectra can be obtained from very remote objects: astronomical spectroscopy is the most powerful instrument for studying the Universe.

From the on-robot point of view, absorption spectroscopy can be successfully applied for the detection of environmental gases, whose absorption spectra are located mainly in the near-IR to mid-IR ranges. This technology is used for sensing atmospheric gases and the patient's exhaled gases.

When invasive monitoring is acceptable, standoff laser-induced breakdown spectroscopy (LIBS) is used [77]. The laser beam generates a plasma plume on the surface of the investigated substance, and the ionic fluorescence spectrum is detected remotely. This technology is the main remote sensing technique of the Mars rover lab [78].

5.8.2. Raman Spectroscopy Medical Applications

For non-invasive investigation of molecules, a more “delicate” spectroscopy, such as Raman, should be used. The Raman effect arises from the inelastic interaction of the incident laser light with the vibrational modes of molecules, which can be exploited to detect and identify chemicals in various environments. It can be used for the detection of hazards in the field with no contact with the substance. A review of the most popular Raman spectroscopy methods and applications can be found in [79]. The authors note that Raman spectroscopy is a valuable contributor to various fields of science, primarily due to the extraordinary versatility of its sampling methods. That article reviews recent advances in Raman spectroscopy and its new trends of application, ranging from ancient archaeology to advanced nanotechnology, covering the analysis of various substances in distinct application areas, such as biotechnology, mineralogy, environmental monitoring, food and beverages, forensic science, medical and clinical chemistry, diagnostics, pharmaceuticals, material science and surface analysis.

Different variants of this technology have been successfully applied with emphasis on Homeland Security applications [80]. The authors analyse novel methods being developed to enhance the sensitivity of the Raman signal and to reduce the masking effects of fluorescence. Basic Raman techniques applicable to Homeland Security include conventional (off-resonance) Raman spectroscopy, surface-enhanced Raman spectroscopy (SERS), resonance Raman spectroscopy and spatially or temporally offset Raman spectroscopy (SORS and TORS). Additional emerging techniques, including remote Raman detection, Raman imaging and heterodyne imaging, are being developed to further enhance the Raman signal, mitigate fluorescence effects and monitor hazards at a distance for Homeland Security and defence applications.

From a medical applications point of view, the tutorial review in [81] can be noted. This paper examines emerging Raman spectroscopy techniques for deep, non-invasive probing of diffusely scattering media, such as living tissue and powders. As generic analytical tools, these methods pave the way for a range of new applications of Raman spectroscopy, including disease diagnosis, non-invasive probing of pharmaceutical products in quality control, drug authentication and security screening for the presence of harmful substances. Such chemically specific information about the composition of deep layers in turbid media is important, for example, in medical diagnosis. This aspect was considered in our work as well: the RN is capable of analysing and finding differences in the content of medications from different manufacturers (see Section 5.8.5).

As for robot biomedical applications, [82] can be recommended. That paper discusses the ability of portable Raman spectroscopy and bench-top spatially offset Raman spectroscopy (SORS) techniques to rapidly identify real and fake ivory samples. In contrast to conventional Raman spectroscopy, SORS was additionally able to identify ivory concealed by plastics, paints, varnishes and cloth. The SORS technique allows the interrogation of biomaterial samples through materials that conventional Raman spectroscopic instrumentation cannot penetrate.

The achievements of surface-enhanced Raman spectroscopy (SERS) are discussed in [83]. SERS has been demonstrated to be a powerful analytical tool for the sensitive and selective detection of molecules adsorbed on nanostructured (i.e., roughened) coinage metal surfaces. This type of Raman spectroscopy has been used for biological agent identification [84]: a rapid detection protocol suitable for use by first-responders to detect anthrax spores with a low-cost, battery-powered, portable Raman spectrometer has been developed. Bacillus subtilis spores, harmless simulants for Bacillus anthracis, were studied using SERS on silver film over nanosphere (AgFON) substrates.

These references show how Raman spectroscopy can be used for biological agent detection and identification. At the same time, this technique is a powerful tool for remote detection of explosives [85]. It should be noted that each of the technologies mentioned above is especially successful in some specific niche of sensing. It is logical to suppose that the combination of several techniques in an “all-in-one” spectrometer can cover an unprecedented range of capabilities.

The greatest advantage of using Raman spectroscopy in bio-analysis is the wealth of information contained in each spectrum. Raman micro-spectroscopy can provide useful biochemical information on live cells without the need for fixatives, markers or stains [86]. This information can relate to interactions with toxic agents or drugs, disease, cell death and differentiation. The Raman spectrum of a cell produces a “fingerprint” of its biochemical composition, so if any toxic agent causes biochemical changes, they appear in the Raman spectra [87].

Medical diagnostics and screening are becoming increasingly demanding applications for spectroscopy [88]. The analysis of complex biological samples has created a need for instruments capable of detecting small differences between samples. One such application is the measurement of the absorbance of broad-spectrum illumination by breast tissue, in order to quantify breast tissue density. Studies have shown that breast cancer risk is closely associated with radiographic breast density. Using signal attenuation in transillumination spectroscopy in the 550–1100 nm spectral range to measure breast density has the potential to reduce the use of ionizing radiation, make the test accessible to younger women, lower the cost and make the procedure more comfortable for the patient.

5.8.3. Universal Spectrometer

To provide express analysis of patient breathing gases, blood, quality of medications, optical and chemical properties of tissue and so on, the only practical technical solution seems to be a spectroscopic analyser. It recognizes the chemical composition of tested samples by analysing the spectrum of electromagnetic radiation interacting with the samples. The sensitivity of such an instrument is directly correlated with the signal-to-noise ratio of the registered signals, which is usually limited by light scattered on the spectrometer elements. Detailed analysis of this problem leads to the conclusion that scattering can be reduced by replacing traditional reflective optics (mirrors and reflective diffraction gratings) with refractive ones (lenses and volume transmittance diffraction gratings) [63,89] (Figure 16). This type of spectrometer matches well the requirements of space missions, military field labs and portable contamination control labs.

Figure 16. Multi-channel P&P Optica Inc. (PPO) spectrometer.

The proposed commercial versions are capable of operating in the 250–2500 nm range with a signal-to-noise ratio of around 100,000. The current technical problem is providing successful spectrometer operation in outdoor conditions under intense and changeable irradiation, which is especially important for Raman spectra acquisition. It was found that the Raman probe developed at Fiber Tech Optica, Inc. [90] meets the majority of the technical requirements for this kind of application. The probe transmits the laser radiation to excite the Raman scattering process, collects the back-scattered part of the Raman signal and delivers it to a spectrometer equipped with a 2D photodiode array for spectral analysis. The spectrometer has already demonstrated great usability under laboratory conditions and has passed its first tests outside the laboratory in application to the Mars rover model [3] and to sky monitoring using a 144-channel spectrometer. The best proof of the versatility of this spectrometer is its use for risk prediction of breast cancer development in women, as a part of an optical computer tomography system for human eye examination, for the creation of hyperspectral images of microscopic samples, for the testing of solar panel chips and many others [63].

5.8.4. Raman Spectroscopy Applications

It should be noted that medical and forensic applications are very close in their data acquisition technology, as the basic requirement in both is to monitor and analyse the chemical or structural content of the investigated materials. With the PPO Raman/fluorescence spectrometer, practically all substances of interest, such as drugs, biological agents, explosive materials and toxic agents, can be detected; the main problems are data acquisition and processing. Some relevant references are discussed below. Recent technological advancements in Raman spectrometers have motivated their use in forensic science: explosives [91,92], drugs [92] and other materials have been analysed successfully using Raman techniques.

Raman spectroscopy has proven to be a good complementary method for the detection of drugs of abuse in fingerprints. One such study examined five drugs of abuse: codeine phosphate, cocaine hydrochloride, amphetamine sulphate, barbital and nitrazepam. These drugs were successfully detected and clearly distinguished using Raman spectra obtained from cyanoacrylate-fumed fingerprints (a polymer deposited on the fingerprint material to enhance visibility).

Likewise, quantitative determination of caffeine in different energy drinks has been achieved by FT-Raman spectroscopy, which provides a fast alternative to the chromatographic method with a higher sampling frequency. To quantify caffeine, spectra were obtained directly between 3500 and 70 cm−1, using the Raman bands between 573 and 542 cm−1, with a limit of detection of 18 mg/L [93].
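A limit of detection such as the 18 mg/L quoted above is conventionally estimated from a linear calibration curve as 3.3·σ/slope, where σ is the standard deviation of the fit residuals. A minimal sketch of that estimate follows; the calibration points are illustrative numbers, not the measured data of [93]:

```python
import numpy as np

def limit_of_detection(concentrations, band_areas):
    """Estimate the limit of detection from a linear calibration curve,
    using the common 3.3 * sigma / slope convention."""
    slope, intercept = np.polyfit(concentrations, band_areas, 1)
    predicted = slope * np.asarray(concentrations) + intercept
    residuals = np.asarray(band_areas) - predicted
    sigma = np.std(residuals, ddof=2)  # two fitted parameters
    return 3.3 * sigma / slope

# Illustrative (not measured) calibration: caffeine in mg/L vs.
# integrated intensity of the 573-542 cm^-1 Raman band.
conc = [0.0, 50.0, 100.0, 150.0, 200.0]
area = [0.02, 0.51, 1.01, 1.48, 2.03]
lod = limit_of_detection(conc, area)
```

With noise-free data the residuals vanish and the estimate goes to zero; the scatter of real replicate measurements sets the practical floor.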

Raman spectroscopy can also offer good flexibility in the analysis of hazardous environmental samples. Detection of cyclotrimethylenetrinitramine (RDX) has been achieved by surface-enhanced Raman spectroscopy (SERS), and the approach can be extended to the examination of other explosives as well. Analysis of RDX was performed with gold (Au) nanoparticles (90–100 nm in diameter) as SERS substrates, with a detection level of 0.15 mg/L in a contaminated groundwater sample. SERS can thus potentially serve as a valuable tool for rapid screening and characterization of energetics in the environment, such as tri-nitro-toluene (TNT), perchlorate, pertechnetate and uranium in groundwater at low concentrations [94]. SERS has also been used to detect and distinguish explosives in solution using azo dyes [95].

5.8.5. Medications Control

Here, we note a very interesting spectrometer application that can be successfully used on the robotic platform. An example of the results demonstrates the ability of the PPO spectrometer to identify medical drugs purchased from different suppliers. With other spectrometers, the samples had been found indistinguishable; indeed, simple observation of the obtained spectra makes it impossible to recognize any difference between them. The two graphs in Figure 17 are nearly identical, differing only slightly in total background level.

However, a more detailed analysis of the collected data, namely a comparison of the signal ratio Iint/Iext (the signal from the internal part (Iint) of the pill to that from the external part (Iext)), reveals substantial differences. The spectral distributions of the ratio for both pills are shown in Figure 18: the left axis relates to drug 1 and the right one to drug 2. The signal strengths show opposite behaviour. Moreover, the drug 2 curve clearly shows additional lines near the sensor pixels numbered 300, 450 and 600. Such 3D analysis may be the key to understanding why treatment with the “same” drugs can vary in its outcome.
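The Iint/Iext comparison can be sketched as a simple pixel-wise ratio followed by a deviation check; the spectra and the deviation threshold below are synthetic illustrations, not the measured data of Figure 18:

```python
import numpy as np

def ratio_spectrum(internal, external, eps=1e-9):
    """Pixel-wise ratio of the Raman signal from the internal part of a
    pill (Iint) to that from its external part (Iext)."""
    return np.asarray(internal, float) / (np.asarray(external, float) + eps)

def new_line_pixels(ratio_a, ratio_b, threshold=1.5):
    """Pixels where drug B's ratio deviates from drug A's by more than
    `threshold` in relative terms -- candidate new spectral lines."""
    rel = np.abs(ratio_b - ratio_a) / (np.abs(ratio_a) + 1e-9)
    return np.flatnonzero(rel > threshold)

# Synthetic 1024-pixel ratio spectra (illustrative only).
pixels = np.arange(1024)
base = 1.0 + 0.1 * np.sin(pixels / 80.0)
drug1 = ratio_spectrum(base, np.ones(1024))
drug2 = drug1.copy()
drug2[[300, 450, 600]] += 5.0  # injected "new lines" for the demo
candidates = new_line_pixels(drug1, drug2)
```

Using the ratio rather than the raw signals cancels common-mode variations (laser power, background level), which is what makes the nominally identical pills distinguishable.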

Figure 17. Back-scattered Raman signal: two identical pills from different manufacturers.

Figure 18. Comparison of the signals from internal and external parts of the pills.

According to feedback from doctors, these pills had a different influence on patients, despite having the same declared content (according to the instructions). It is clear that such a spectrometer would be an invaluable instrument for drug identification. The Robotic Nurse will be capable of providing a new type of rapid drug analysis and verification of prescribed medications directly in the hospital room, near the patient’s bed. A wide-band spectral overview will be performed with the spectrometer; when very high-resolution analysis is required, tunable laser spectroscopy will be used to scan a single spectral line or a small group of lines.

To monitor the fluorescence, which contains information about the atomic and molecular content of the tissue, the hyperspectral spectrometer will be used. Figure 19 shows an example of a hyperspectral image of soil (3 mm × 3 mm) acquired with the PPO spectrometer; different colours identify different oil components contaminating the soil. This system was developed for an environmental contamination monitoring lab and will now be redesigned for tissue monitoring. For remote spectroscopic measurement, a special fiber optic probe has been developed (Figure 20). The probe can acquire the Raman signal from objects located several tens of centimetres away from the robot (with the currently available low-power laser).
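One common way to produce a colour-coded component map like that in Figure 19 is the spectral angle mapper, which assigns each pixel to the most similar reference spectrum. A minimal sketch with synthetic spectra follows; the actual PPO processing pipeline is not described in the paper, so this is only an illustration of the general technique:

```python
import numpy as np

def spectral_angle(pixel, ref):
    """Angle (radians) between a pixel spectrum and a reference
    endmember spectrum; smaller means more similar, and the measure
    is insensitive to overall intensity scaling."""
    cos = np.dot(pixel, ref) / (np.linalg.norm(pixel) * np.linalg.norm(ref))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify(cube, endmembers):
    """Assign each pixel of an (H, W, B) hyperspectral cube to the
    nearest endmember, producing a component map to be colour-coded."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    angles = np.array([[spectral_angle(p, e) for e in endmembers] for p in flat])
    return angles.argmin(axis=1).reshape(h, w)

# Toy 2x2 cube with 3 bands and two illustrative "oil component" spectra.
oil_a = np.array([1.0, 0.2, 0.1])
oil_b = np.array([0.1, 0.3, 1.0])
cube = np.array([[oil_a * 2.0, oil_b * 0.5],
                 [oil_b * 3.0, oil_a * 0.8]])
comp_map = classify(cube, [oil_a, oil_b])
```

Because the angle ignores intensity scaling, pixels of the same material at different brightness still map to the same component.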

Figure 19. Example of the hyperspectral image of soil (3 mm × 3 mm) achieved with the PPO spectrometer. Different colours identify different oil components contaminating the soil.

Figure 20. Spectrometer (1); optical probe (2); controller (3) and camera (4).

6. Electronics

One of the “hottest” problems of the robotic sensory platform is the development of low-power, economical, yet fast electronics. The human organism does not use high-voltage or high-current processes to operate the body or to support the thinking process. One of the most sophisticated capabilities of humans is finding probabilistic solutions based on a limited volume of information; a computer solves the same problem by trying all possible variants. To achieve high operational efficiency, several approaches can be proposed: (1) a computer with a high computational rate; (2) minimization of the number of interface elements, i.e., development of “through-the-system” electronics and operational software; (3) appropriate switching of hardware elements so that some functions run in parallel; (4) multi-functional operation of the hardware and software.

Currently existing computers already provide the “Energy Star” mode of operation [96], entering “sleep” mode immediately when CPU activity is not necessary. During data acquisition and processing, the on-board computer should deliver maximal computational speed at relatively high power consumption; at those moments, all unnecessary actions should be stopped. In other words, when possible, the RN will either walk or think, not both at once. In the case of a fire or chemical alarm, the atmospheric gas monitoring rate will be a few measurements per minute; in regular patrol mode, just one measurement every several minutes. The most sophisticated part of the RN behaviour algorithms is developing priority levels for robot actions under changing environmental conditions. The question is: what should the robot do first if it receives a signal that one of the patients is having a heart attack while, at the same moment, a fire starts somewhere in the hospital? Such events are rare; however, the most tragic man-made catastrophes happen precisely as a result of the coincidence of events of extremely low probability.
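The adaptive sampling rate and event prioritization described above can be sketched as a priority queue plus a duty-cycle lookup. The priority ordering and the intervals below are illustrative assumptions, not the RN's actual (still undeveloped) policy:

```python
import heapq
import itertools

# Hypothetical priority levels (lower = more urgent) and gas-sensor
# duty cycles; both are assumptions for illustration.
PRIORITY = {"heart_attack": 0, "fire": 1, "chemical_alarm": 2, "patrol": 9}
SAMPLE_INTERVAL_S = {"alarm": 20, "patrol": 300}

class EventQueue:
    """Priority queue for RN events; ties broken by arrival order."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def post(self, event):
        heapq.heappush(self._heap, (PRIORITY[event], next(self._counter), event))

    def next_action(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

def gas_sampling_interval(alarm_active):
    """A few measurements per minute under alarm, one per several
    minutes on regular patrol, as described in the text."""
    return SAMPLE_INTERVAL_S["alarm" if alarm_active else "patrol"]

q = EventQueue()
q.post("patrol")
q.post("fire")
q.post("heart_attack")
first = q.next_action()  # the heart attack outranks the fire
```

A real policy would of course be richer (e.g. delegating the fire alarm to stationary sensors while the mobile platform attends the patient), but the queue makes the ordering decision explicit and testable.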

We have already noted that a successfully operating robotic platform cannot be just a combination of separate well-known technologies. The general “skeleton” of the hardware and software should be developed from scratch, and each junction should be designed with the other connections, processes and technologies in mind. An example of such an electronic architecture is described in [97]. That paper reviews the direct connection of sensors to microcontrollers without any analogue circuit (such as an amplifier or analogue-to-digital converter) in the signal path, resulting in a low-cost, low-power sensor electronic interface. It discusses how resistive and capacitive sensors with different topologies (single, differential and bridge type) can be directly connected to a microcontroller to build the so-called direct interface circuit, shows some applications of the proposed circuits using commercial devices, discusses their performance and, finally, proposes design guidelines to reduce the current consumption of such circuits in active mode.
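In a typical direct interface circuit of this kind, the resistive sensor discharges a reference capacitor into a digital input, and the microcontroller counts timer ticks until the voltage crosses the logic threshold, so resistance maps to time with no amplifier or ADC. A minimal simulation of that measurement principle follows; all component values are illustrative assumptions, not parameters from [97]:

```python
import math

# Illustrative component values for the simulated direct interface.
VDD = 3.3    # supply voltage, V
V_TH = 0.8   # digital-input low threshold, V
C = 100e-9   # reference capacitor, F
TICK = 1e-7  # timer resolution, s (10 MHz timer)

def discharge_ticks(r_sensor):
    """Timer ticks for the RC node to fall from VDD to V_TH:
    t = R * C * ln(VDD / V_TH), quantized to the timer tick."""
    t = r_sensor * C * math.log(VDD / V_TH)
    return int(round(t / TICK))

def resistance_from_ticks(n):
    """Invert the measurement: the MCU recovers R from the tick
    count alone, with no analogue circuitry in the signal path."""
    return n * TICK / (C * math.log(VDD / V_TH))

n = discharge_ticks(10_000.0)   # e.g. a 10 kOhm resistive sensor
r_est = resistance_from_ticks(n)
```

The achievable resolution is set by the timer tick relative to the RC time constant, which is why [97] pays attention to the choice of C and the timer clock.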

One of the strategic lines of the RN design is to develop a system that uses the same hardware element to perform different functions. For example, the tunable laser will be used for gas detection and concentration measurement, for range finding, as the excitation source for the Raman spectrometer, for estimating atmospheric transparency and for calibration purposes. To perform these functions, the laser radiation must in some cases be switched to another optical channel, partially redirected or split into several channels. This means that controllable optical switches must be specially designed to operate within the robot. Such elements are commercially available [98]; however, to work not on a lab table but within the robot’s mechanical assembly, they must be adapted to the conditions of the robotic platform: mechanical stresses, thermal variations and vibrations are unacceptable. This example confirms once again that, to develop the Robotic Nurse, the scientific-engineering team must include specialists in physics, chemistry, computer science, electronics, mechanics and the properties of materials, robotics, biology, psychology and medicine.

This paper is expected to be published in the Journal of Low Power Electronics and Applications. It describes the Robotic Nurse Network with a large number of different interacting devices. The electronics for the RNN as a joint “nervous” system have not yet been developed. This paper is an invitation to electronics specialists to join our team on the way to creating a smart medical robot.

7. Conclusions

This paper presents a review of research on and the practical implementation of robotic systems capable of working in healthcare and housekeeping services. The conceptual, scientific, engineering, computational and psychological problems arising in the creation of a robotic nurse helper were analysed. The paper describes the prototype of the Robotic Nurse Network, which includes mobile and stationary platforms to monitor a large hospital area and to provide some healthcare functions. In addition, the RNN is capable of performing security functions. The RNN incorporates several innovations in mobile platform mechanics, robotics, laser technology, spectroscopy hardware, data acquisition and processing, multifunctional sensory devices and the principles of navigation and communication network organization. Overall, the system is based on the multi-functional instrument concept and can support different actions with the same equipment.

The Robotic Nurse is applicable or easily modifiable for use in:

kindergartens, schools, colleges and universities;

private homes and public places;

banks, government buildings, railway stations, bus terminals and other populated areas.
Acknowledgments

We would like to acknowledge the Ontario Centres of Excellence, P&P Optica, Inc. and Wilfrid Laurier University for their partial financial support of this project. We also appreciate fruitful discussions with Stephen Sutherland, President of CrossWing, Inc., who helped to clarify the requirements for the robotic platform in different applications and working conditions. Some basic principles of operation of the optical gas sensor were developed by Igor Peshko during his work at the University of Toronto, Department of Mechanical and Industrial Engineering, in cooperation with Engineering Services, Inc., under the lead of Andrew Goldenberg.

Conflict of Interest

The authors declare no conflict of interest.

References and Notes

  1. Frankenstein. Available online: (accessed on 2 January 2013).
  2. Japanese Nurse Robot (Actroid-F) 2010. Available online: (accessed on 28 November 2012).
  3. Matharoo, I.; Peshko, I.; Goldenberg, A. Robotic reconnaissance platform. I. Spectroscopic instruments with rangefinders. Rev. Sci. Instrum. 2011, 82, 113107:1–113107:15. [Google Scholar]
  4. Engineering Services, Inc. Homepage. Available online: (accessed on 2 January 2013).
  5. Peshko, I. Smart Synergistic Security Sensory Network for Harsh Environment: Net4S. In Nuclear Power: Control, Reliability and Human Factors; Tsvetkov, P., Ed.; InTech: Rijeka, Croatia, 2011; pp. 85–100. [Google Scholar]
  6. Peshko, I. New-Generation Security Network with Synergistic IP-Sensors. In Proceedings of Optics East: Advanced Environmental, Chemical, and Biological Sensing Technologies V, Boston, MA, USA, 9–12 September 2007; pp. 34–46.
  7. RP–7i ROBOT. Available online: (accessed on 2 January 2013).
  8. Robotics/Types of Robots/Wheeled. Available online: (accessed on 2 January 2013).
  9. Foster-Miller TALON. Available online: (accessed on 2 January 2013).
  10. Hexapod (robotics). Available online: (accessed on 2 January 2013).
  11. Snakebot. Available online: (accessed on 2 January 2013).
  12. CrossWing Homepage. Available online: (accessed on 2 January 2013).
  13. The Future of Humanoid Robots—Research and Applications; Zaier, R., Ed.; InTech: Rijeka, Croatia, 2012.
  14. Robot Manipulators, New Achievements; Lazinica, A., Kawai, H., Eds.; InTech: Rijeka, Croatia, 2010.
  15. Robot Arms; Goto, S., Ed.; InTech: Rijeka, Croatia, 2011.
  16. Tsuji, T.; Harada, K.; Kaneko, K.; Kanehiro, F.; Maruyama, K. Grasp Planning for a Humanoid Hand. In The Future of Humanoid Robots—Research and Applications; Zaier, R., Ed.; InTech: Rijeka, Croatia, 2012; pp. 63–80. [Google Scholar]
  17. Choi, D.; Lee, D.-W.; Shon, W.; Lee, H.-G. Design of 5 D.O.F Robot Hand with an Artificial Skin for an Android Robot. In The Future of Humanoid Robots—Research and Applications; Zaier, R., Ed.; InTech: Rijeka, Croatia, 2012; pp. 81–96. [Google Scholar]
  18. Fukui, W.; Kobayashi, F.; Kojima, F. Development of Multi-Fingered Universal Robot Hand with Torque Limiter Mechanism. In The Future of Humanoid Robots—Research and Applications; Zaier, R., Ed.; InTech: Rijeka, Croatia, 2012; pp. 97–108. [Google Scholar]
  19. Lee, S. MFR (Multi-purpose Field Robot) based on Human-Robot Cooperative Manipulation for Handling Building Materials. In Robot Manipulators, New Achievements; Lazinica, A., Kawai, H., Eds.; InTech: Rijeka, Croatia, 2010; pp. 289–313. [Google Scholar]
  20. Lima, M.F.M.; Machado, J.A.T.; Ferrolho, A. A Sensor Classification Strategy for Robotic Manipulators. In Robot Manipulators, New Achievements; Lazinica, A., Kawai, H., Eds.; InTech: Rijeka, Croatia, 2010; pp. 315–328. [Google Scholar]
  21. Beran, T.N.; Ramirez-Serrano, A. Robot Arm-Child Interactions: A Novel Application Using Bio-Inspired Motion Control. In Robot Arms; Goto, S., Ed.; InTech: Rijeka, Croatia, 2011; pp. 241–262. [Google Scholar]
  22. Advances in Robot Navigation; Barrera, A., Ed.; InTech: Rijeka, Croatia, 2011.
  23. Fujita, T.; Kondo, Y. 3D Terrain Sensing System using Laser Range Finder with Arm-Type Movable Unit. In Robot Arms; Goto, S., Ed.; InTech: Rijeka, Croatia, 2011; pp. 159–174. [Google Scholar]
  24. Kawai, H.; Murao, T.; Fujita, M. Passivity-Based Visual Force Feedback Control for Eye-to-Hand Systems. In Robot Manipulators, New Achievements; Lazinica, A., Kawai, H., Eds.; InTech: Rijeka, Croatia, 2010; pp. 329–342. [Google Scholar]
  25. Funabora, Y.; Yano, Y.; Doki, S.; Okuma, S. Autonomous Motion Adaptation Against Structure Changes Without Model Identification. In The Future of Humanoid Robots—Research and Applications; Zaier, R., Ed.; InTech: Rijeka, Croatia, 2012; pp. 29–40. [Google Scholar]
  26. Gams, A.; Petric, T.; Ude, A.; Žlajpah, L. Performing Periodic Tasks: On-Line Learning, Adaptation and Synchronization with External Signals. In The Future of Humanoid Robots—Research and Applications; Zaier, R., Ed.; InTech: Rijeka, Croatia, 2012; pp. 3–28. [Google Scholar]
  27. Design of Oscillatory Neural Network for Locomotion Control of Humanoid Robots. In The Future of Humanoid Robots—Research and Applications; Zaier, R., Ed.; InTech: Rijeka, Croatia, 2012; pp. 41–60.
  28. R2D2, MD—Will Your Next Doctor Be a Robot? Available online: (accessed on 02 January 2013).
  29. Kroos, C.; Herath, D.C.; Stelarc. From Robot Arm to Intentional Agent: The Articulated Head. In Robot Arms; Goto, S., Ed.; InTech: Rijeka, Croatia, 2011; pp. 215–240. [Google Scholar]
  30. Matsusaka, Y. Speech Communication with Humanoids: How People React and How We Can Build the System. In The Future of Humanoid Robots—Research and Applications; Zaier, R., Ed.; InTech: Rijeka, Croatia, 2012; pp. 165–188. [Google Scholar]
  31. Hasanuzzaman, M.; Ueno, H. User, Gesture and Robot Behaviour Adaptation for Human-Robot Interaction. In The Future of Humanoid Robots—Research and Applications; Zaier, R., Ed.; InTech: Rijeka, Croatia, 2012; pp. 229–256. [Google Scholar]
  32. Infantino, I. Affective Human-Humanoid Interaction Through Cognitive Architecture. In The Future of Humanoid Robots—Research and Applications; Zaier, R., Ed.; InTech: Rijeka, Croatia, 2012; pp. 147–164. [Google Scholar]
  33. Facial Recognition System. Available online: (accessed on 2 January 2013).
  34. TOSHIBA Face Recognition. Available online: (accessed on 2 January 2013).
  35. FUJIFILM Canada. Available online: (accessed on 2 January 2013).
  36. Nanomedicine. Available online: (accessed on 02 January 2013).
  37. Matharoo, I.; Peshko, I. Smart spectroscopy sensors: II. Narrow-band laser systems. Opt. Laser Eng. 2013, 51, 270–277. [Google Scholar] [CrossRef]
  38. New Mountain NM150WX. Available online: (accessed on 02 January 2013).
  39. La Crosse Technology. Available online: (accessed on 2 January 2013).
  40. Wojtas, J.; Bielecki, Z.; Stacewicz, T.; Mikolajczyk, J.; Nowakowski, M. Ultrasensitive laser spectroscopy for breath analysis. Opto. Electron. Rev. 2012, 20, 26–39. [Google Scholar] [CrossRef]
  41. Wang, C.; Mandelis, A.; Garcia, J.A. Detectivity comparison between thin-film Pd/PVDF photopyroelectric interferometric and optical reflectance hydrogen sensors. Rev. Sci. Instrum. 1999, 70, 4370–4376. [Google Scholar] [CrossRef]
  42. RKI Instruments. Gas Detection for Life. Available online: (accessed on 2 January 2013).
  43. MultiRAE Pro Wireless Portable Multi-Threat Monitor for Radiation and Chemical Detection. Available online: (accessed on 8 May 2013).
  44. Moseley, P.T. Solid state gas sensors. Meas. Sci. Technol. 1997, 8, 223–237. [Google Scholar] [CrossRef]
  45. Model IQ-1000: 100+ Gas Portable. Available online: (accessed on 2 January 2013).
  46. Wang, C.; Mandelis, A. Instrumental noise and detectivity analysis of photopyroelectric destructive thermal-wave interferometry. Rev. Sci. Instrum. 2000, 71, 1961–1970. [Google Scholar] [CrossRef]
  47. Cubillas, A.M.; Lazaro, J.M.; Conde, O.M.; Petrovich, M.N.; Lopez-Higuera, J.M. Multi-line fit model for the detection of methane at ν2 + 2ν3 band using hollow-core photonic bandgap Fibres. Sensors 2009, 9, 490–502. [Google Scholar]
  48. Toda, H. The precise mechanisms of a high-speed ultrasound gas sensor and detecting human-specific lung gas exchange. Int. J. Adv. Robotic Syst. 2012, 9, 249:1–249:9. [Google Scholar]
  49. Capnography. Available online: (accessed on 2 January 2013).
  50. RAID M100: Extensive Portable Capability. Available online: (accessed on 8 May 2013).
  51. SABRE 5000. Available online: (accessed on 2 January 2013).
  52. Rothman, L.S.; Gordon, I.E.; Barbe, A.; ChrisBenner, D.; Bernath, P.F.; Birk, M.; Boudon, V.; Brown, L.R.; Campargue, A.; Champion, J.-P.; et al. The HITRAN 2008 molecular spectroscopic database. J. Quant. Spectrosc. Radiat. Transf. 2009, 110, 533–572. [Google Scholar] [CrossRef]
  53. NIST Atomic Spectra Database Lines Form. Available online: (accessed on 2 January 2013).
  54. Platt, U.; Stutz, J. Differential Optical Absorption Spectroscopy: Principles and Applications; Springer-Verlag: Berlin, Germany, 2008. [Google Scholar]
  55. Heard, D.E. Analytical Techniques for Atmospheric Measurement; Blackwell Publishing: Oxford, UK, 2006. [Google Scholar]
  56. Werle, P.; Slemr, F.; Maurer, K.; Kormann, R.; Mücke, R.; Jänker, B. Near- and mid-infrared laser-optical sensors for gas analysis. Opt. Lasers Eng. 2002, 37, 101–114. [Google Scholar] [CrossRef]
  57. Totschnig, G.; Lackner, M.; Shau, R.; Ortsiefer, M.; Rosskopf, J.; Amann, M.-C.; Winter, F. 1.8 μm vertical-cavity surface-emitting laser absorption measurements of HCl, H2O and CH4. Meas. Sci. Technol. 2003, 14, 472–478. [Google Scholar] [CrossRef]
  58. Lackner, M.; Totschnig, G.; Winter, F.; Ortsiefer, M.; Amann, M.-C.; Shau, R.; Rosskopf, J. Demonstration of methane spectroscopy using a vertical-cavity surface-emitting laser at 1.68 μm with up to 5 MHz repetition rate. Meas. Sci. Technol. 2003, 14, 101–106. [Google Scholar] [CrossRef]
  59. Hofmann, W.; Amann, M.-C. Long-wavelength vertical-cavity surface-emitting lasers for high-speed applications and gas sensing. IET Optoelectron. 2008, 2, 134–142. [Google Scholar] [CrossRef]
  60. LaserTechnik Berlin. Available online: (accessed on 02 January 2013).
  61. HORIBA Scientific. Available online: (accessed on 02 January 2013).
  62. HR4000 High-Resolution Spectrometer. Available online: (accessed on 2 January 2013).
  63. P&P Optica. Available online: (accessed on 2 January 2013).
  64. Zahniser, M.; Nelson, D.; McManus, J.; Kebabian, P.; Lloyd, D. Measurement of trace gas flexes using tunable diode laser spectroscopy. Philos. Trans. R. Soc. London Phys. Sci. Eng. 1995, 351, 371–382. [Google Scholar] [CrossRef]
  65. Arroyo, M.P.; Hanson, R.K. Absorption measurements of water-vapor concentration, temperature, and line-shape parameters using a tunable InGaAsP diode laser. Appl. Opt. 1993, 32, 6104–6116. [Google Scholar] [CrossRef]
  66. Jabczyński, J.; Firak, J.; Peshko, I. Single-frequency, thin-film tuned, 0.6 W diode-pumped Nd:YVO4 laser. Appl. Opt. 1997, 36, 2484–2490. [Google Scholar] [CrossRef]
  67. Peshko, I.; Jabczyński, J.; Firak, J. Tunable single- and double-frequency diode-pumped Nd:YAG laser. IEEE J. Quant. Electron. 1997, 33, 1417–1423. [Google Scholar] [CrossRef]
  68. Digonnet, M.J.F. Rare-Earth-Doped Fiber Lasers and Amplifiers; Marcel Dekker, Inc.: New York, NY, USA, 2001. [Google Scholar]
  69. Sacher Lasertechnik Group. Available online: (accessed on 28 November 2012).
  70. DFB—Distributed Feedback Diodes. Available online: (accessed on 8 May 2013).
  71. Maclean, A.J.; Kemp, A.J.; Calves, S.; Kim, J.-Y.; Kim, T.; Dawson, M.D.; Burns, D. Continuous tuning and efficient intracavity second-harmonic generation in a semiconductor disk laser with an intracavity diamond heatspreader. IEEE J. Quantum Electron. 2008, 41, 216–225. [Google Scholar]
  72. Absorption Spectroscopy. Available online: (accessed on 2 January 2013).
  73. Laser-induces Fluorescence. Available online: (accessed on 2 January 2013).
  74. Raman Spectroscopy. Available online: (accessed on 2 January 2013).
  75. Imaging Spectroscopy. Available online: (accessed on 2 January 2013).
  76. Barzda, V. Non-Linear Contrast Mechanisms for Optical Microscopy. In Biophysical Techniques in Photosynthesis II; Aartsma, T., Matysik, J., Eds.; Springer: Dordrecht, The Netherlands, 2008; Volume 2, pp. 35–54. [Google Scholar]
  77. Weisberg, A.; Craparo, J.; de Saro, R.; Pawluczyk, R. Comparison of transmission grating spectrometer to a reflective grating spectrometer for standoff laser-induced breakdown spectroscopy measurements. Appl. Opt. 2010, 49, C200–C210. [Google Scholar] [CrossRef]
  78. Mars Science Laboratory. Available online: (accessed on 2 January 2013).
  79. Das, R.S.; Agrawal, Y.K. Raman spectroscopy: Recent advancements, techniques and applications. Vib. Spectrosc. 2011, 57, 163–176. [Google Scholar] [CrossRef]
  80. Mogilevsky, G.; Borland, L.; Brickhouse, M.; Fountain, A.W., III. Raman spectroscopy for homeland security applications. Int. J. Spectrosc. 2012, 2012. [Google Scholar] [CrossRef]
  81. Matousek, P. Deep non-invasive Raman spectroscopy of living tissue and powders. Chem. Soc. Rev. 2007, 36, 1292–1304. [Google Scholar] [CrossRef]
  82. Hargreaves, M.D.; Macleod, N.A.; Brewster, V.L.; Munshi, T.; Edwards, H.G.M.; Matousek, P. Application of portable Raman spectroscopy and bench-top spatially offset Raman spectroscopy to interrogate concealed biomaterials. J. Raman Spectrosc. 2009, 40, 1875–1880. [Google Scholar] [CrossRef]
  83. Dieringer, J.A.; McFarland, A.D.; Shah, N.C.; Stuart, D.A.; Whitney, A.V.; Yonzon, C.R.; Young, M.A.; Zhang, X.; van Duyne, R.P. Surface enhanced Raman spectroscopy: New materials, concepts, characterization tools, and applications. Faraday Discuss. 2006, 132, 9–26. [Google Scholar] [CrossRef]
  84. Zhang, X.; Young, M.A.; Lyandres, O.; van Duyne, R.P. Rapid detection of an anthrax biomarker by surface-enhanced raman spectroscopy. J. Am. Chem. Soc. 2005, 127, 4484–4489. [Google Scholar]
  85. Östmark, H.; Nordberg, M.; Carlsson, T.E. Stand-off detection of explosives particles by multispectral imaging Raman spectroscopy. Appl. Opt. 2011, 50, 5592–5599. [Google Scholar] [CrossRef]
  86. Krafft, C.; Knetschke, T.; Siegner, A.; Funk, R.H.W.; Salzer, R. Mapping of single cells by near infrared Raman microspectroscopy. Vib. Spectrosc. 2003, 32, 75–83. [Google Scholar] [CrossRef]
  87. Notingher, I. Raman spectroscopy cell-based biosensors. Sensors 2007, 7, 1343–1358. [Google Scholar] [CrossRef]
  88. Pawluczyk, O.; Blackmore, K.; Dick, S.; Lilge, L. High-Performance Broad-Band Spectroscopy for Breast Cancer Risk Assessment. In Proceedings of the SPIE: Photonic Applications in Biosensing and Imaging, Toronto, Canada, 30 September 2005; Wilson, B.C., Hornsey, R.I., Krull, U.J., Chan, W., Yu, K., Weersink, R.A., Eds.; Volume 5969, pp. 369–378.
  89. 2012 R & D 100 Award Winners. Available online: (accessed on 28 November 2012).
  90. FiberTech Optica. Available online: (accessed on 28 November 2012).
  91. Harvey, S.D.; Vucelick, M.E.; Lee, R.N.; Wright, B.W. Blind field test evaluation of Raman spectroscopy as a forensic tool. Forensic Sci. Int. 2002, 125, 12–21. [Google Scholar] [CrossRef]
  92. Hodges, C.M.; Akhavan, J. The use of Fourier Transform Raman spectroscopy in the forensic identification of illicit drugs and explosives. Spectrochim. Acta A Mol. Biomol. Spectrosc. 1990, 46, 303–307. [Google Scholar] [CrossRef]
  93. Armenta, S.; Garrigues, S.; de la Guardia, M. Solid-phase FT-Raman determination of caffeine in energy drinks. Anal. Chim. Acta 2005, 547, 197–203. [Google Scholar] [CrossRef]
  94. Hatab, N.A.; Eres, G.; Hatzinger, P.B.; Baohua, G. Detection and analysis of cyclotrimethylenetrinitramine (RDX) in environmental samples by surface-enhanced Raman spectroscopy. J. Raman Spectrosc. 2010, 41, 1131–1136. [Google Scholar] [CrossRef]
  95. Docherty, F.T.; Monaghan, P.B.; McHugh, C.J.; Graham, D.; Smith, W.E.; Cooper, J.M. Simultaneous multianalyte identification of molecular species involved in terrorism using raman spectroscopy. IEEE Sens. J. 2005, 5, 632–638. [Google Scholar]
  96. About ENERGY STAR. Available online: (accessed on 2 January 2013).
  97. Reverter, F. The art of directly interfacing sensors to microcontrollers. J. Low Power Electron. Appl. 2012, 2, 265–281. [Google Scholar] [CrossRef]
  98. Electro-Optic Modulator. Available online: (accessed on 2 January 2013).