Journal of Network and Computer Applications 35 (2012) 382–393
NeuralSens: A neural network based framework to allow dynamic adaptation
in wireless sensor and actor networks
Eduardo Cañete*, Jaime Chen, R. Marcos Luque, Bartolomé Rubio
University of Málaga, Dpto. Lenguajes y Ciencias de la Computación, Calle de Bulevard Louis Pasteur, s/n. 29071 Málaga, Spain
Article history:
Received 31 January 2011
Received in revised form 29 July 2011
Accepted 16 August 2011
Available online 27 August 2011

Abstract
Wireless Sensor and Actor Networks (WSANs) constitute a new paradigm of distributed computing and are steadily gaining importance due to the wide variety of applications that can be implemented with them. As a result, they are increasingly present everywhere (industry, farming, buildings, etc.). However, there are still many important areas in which WSANs can be improved. One of the most important is giving sensor networks the capability of being wirelessly reprogrammed, so that developers do not have to physically interact with the sensor nodes. Many proposals deal with this issue, but most of them are heavily dependent on the operating system and incur high energy consumption, even if only a small change has been made in the code. In this work, we propose a new approach to wireless reprogramming based on the concept of neural networks. Unlike most existing approaches, our proposal is independent of the operating system and allows small pieces of code to be reprogrammed with low energy consumption. The architecture developed to achieve this is described, and case studies are presented that show the use of our proposal by means of practical examples.
© 2011 Elsevier Ltd. All rights reserved.
Keywords:
Wireless Sensor and Actor Networks
Dynamic adaptation
Neural networks
Reprogramming
1. Introduction
WSANs are considered to be one of the top ten most important
technologies that will have an impact on both our future lifestyle
and on many industrial products (Akyildiz and Kasimoglu, 2004;
Jawhar et al., 2011). As a result, intense research is being carried
out in the field of WSANs to develop and simplify the use of this
kind of technology, which can then be applied to many different
fields such as health monitoring, battlefield surveillance, volcano
monitoring or greenhouse monitoring (García-Hernández et al., 2007; Chen et al., 2011). WSANs are composed of small self-powered devices that have sensors attached to them and wireless communication capabilities that allow them to communicate with each other. These devices are deployed in order to monitor and/or detect certain conditions in the environment. In addition, WSANs can also include actors. Actors are usually resource-rich devices, with greater processing capability and longer battery life than the sensors, and they can interact with the environment based on the information received from the sensors.
WSANs are especially suitable for monitoring dangerous or inaccessible places as they can be easily deployed (for example, dropped from a helicopter).

* Corresponding author. Tel.: +34 952 13 7147; fax: +34 952 13 1397.
E-mail addresses: ecc@lcc.uma.es (E. Cañete), hfc@lcc.uma.es (J. Chen), rmluque@lcc.uma.es (R.M. Luque), tolo@lcc.uma.es (B. Rubio).
1084-8045/$ - see front matter © 2011 Elsevier Ltd. All rights reserved.
doi:10.1016/j.jnca.2011.08.006

Nevertheless, once the network has
been deployed it is usually a hard task to change the application
behavior. This process usually involves an operator manually
transferring the binary image of the new application to every
device in the environment. Bearing in mind that the network may
have hundreds of nodes or that the environment may be hostile
or inaccessible, this solution is not always feasible.
Reprogramming techniques (Brown and Sreenan, 2006; Wang
et al., 2006) have been proposed as a way of dynamically
changing the behavior of WSAN applications without having to
manually reprogram them. Traditional reprogramming techniques send the binary image of the new application over the network. Nodes receiving it replace their code with the new version and continue executing the application. Although
several attempts have been made to modify only certain components of an application (Mottola et al., 2008; Horre et al., 2008), in
most cases the binary image sent across the network corresponds
to the whole application because changing only specific parts of
the application is not allowed (Hui and Culler, 2004). This means
that a great deal of code will be sent over the network, so energy
consumption is considerable. Finally, the mote operating system
must allow reprogramming, that is, dynamic code loading.
As an alternative to traditional reprogramming approaches,
this paper proposes the combined use of WSANs and neural
networks as a way to achieve dynamic and adaptive behavior in
WSAN applications. Although neural networks have already been
E. Cañete et al. / Journal of Network and Computer Applications 35 (2012) 382–393
used in combination with WSANs, to the best of our knowledge,
there are no existing approaches where neural networks are used
in order to achieve dynamic adaptation in WSAN applications.
Therefore, this work is a novel approach which makes WSANs
more powerful because tasks which could previously only be
carried out by reprogramming can now be done in an easy and
platform-independent way. The use of artificial neural networks
in the context of our proposal offers three main advantages:
1. Adaptation to new kinds of sensors: WSANs are able to detect
and process the data provided by new sensors.
2. Adaptation to new kinds of situations: Because neural networks are learning algorithms, the sensor networks can adapt to new situations using the information learned. That learning is carried out automatically by the neural network from a set of examples called the training set.
3. Modification of logical decisions: Parts of the code where logical expressions like if (X != a) then … are used can be modified at runtime, as long as they are modeled using a neural network.
As mentioned before, these advantages can also be achieved by
using traditional reprogramming, but such a solution has certain
drawbacks:
1. For some reprogramming approaches the whole code or a
large part of it has to be sent to each node even though only a
small part is required.
2. Traditional reprogramming is platform-dependent.
3. Traditional reprogramming may involve some security risks.
4. It uses too much energy.
5. It may even be necessary to interrupt the execution of the
application for a considerable period of time.
This paper presents NeuralSens, a framework for WSANs that
allows reprogramming at runtime by using neural networks.
The rest of the paper is organized as follows. Section 2 presents
the concepts related to artificial neural networks. Section 3
surveys previous work where neural networks are used in
combination with WSANs. Section 4 describes the architecture
and functionality our proposal offers, as well as some implementation details. In Section 5, two meaningful case studies where our approach can be applied are presented. An evaluation is carried out in Section 6. Finally, some conclusions are presented in Section 7.
2. ANNs background
Several pattern recognition techniques can be used, given some examples of input signals and the corresponding decisions for them, to make decisions automatically for a set of future examples. The initial set of patterns together with their corresponding output labels is termed the training set, and the resulting learning strategy is characterized as supervised learning.
A wide range of supervised learning algorithms can be applied to pattern recognition, from simple naive Bayes classifiers and neural networks to support vector machines. The use of artificial neural networks (ANNs) is justified here by their ability to act as an arbitrary function approximation mechanism that "learns" from observed data, and by the simplicity of the resulting model, which is easy to transmit to the WSAN motes of the application.
Concretely, artificial neural networks try to mimic the behavior of biological neurons and their interconnections, and they have been widely used in different areas, together with other technologies, to solve artificial intelligence problems such as function approximation, modeling, classification tasks or data processing.
Fig. 1. Single perceptron with two inputs (inputs X1 and X2, weights W11 and W21, output Y1).
Basically, an artificial neural network (ANN) consists of small
computing units, called neurons, arranged in different layers and
interconnected with each other. Simple mathematical computations are performed in each neuron. The most widely used neural
network for classification tasks is the Multi-Layer Perceptron
(MLP) (Hornik et al., 1989), which is able to cope with non-linearly separable problems.
The multilayer perceptron is a feedforward network which comprises a layer of input units (which are associated with the sensors), a layer of output units (which in our system correspond to the number of classes) and a number of intermediate layers of processing units, also called hidden layers because they have no connections with the outside and their outputs are not directly accessible. This model evolved from the single perceptron, whose major drawback is that it is not capable of solving non-linear tasks (Minsky and Papert, 1969). In order to illustrate more clearly how this kind of neural network operates, Fig. 1 shows the topology of a single perceptron, with two neurons in the input layer and one neuron in the output layer; there is no hidden layer. The information on how the inputs are related to the outputs is stored, for each layer, in a vector called the synaptic weight vector, w, where each element w_{ij} represents the strength of the link between neurons i and j. These vectors are acquired through the training process, given the input and output sets to learn. There is also a function called the transfer function, g, which partitions the input space R^n into a set of classes R^m. Among the most frequently used are the sigmoid function, the logistic function and the hyperbolic tangent.
In a simple case, given an input pattern x and two classes, a
possible transfer function could be described as:
g(x) = \begin{cases} 1, & w_1 x_1 + w_2 x_2 + \cdots + w_n x_n \ge \theta \\ 0, & w_1 x_1 + w_2 x_2 + \cdots + w_n x_n < \theta \end{cases}   (1)

where \theta is a threshold, and the values 1 and 0 correspond to each one of the target classes.
The topology of a multilayer perceptron is rather similar to that of the perceptron, although with slight differences. In our proposal, each sensor is connected to the units of the second layer; each processing unit of the second layer is connected to the first-layer units and to the units of the third layer, and so on. The output units are connected only with the units of the last hidden layer, as shown in Fig. 2. This network manages to match the set of inputs and the set of desired outputs as follows:

(x_1, x_2, \ldots, x_N) \in R^N \rightarrow (y_1, y_2, \ldots, y_M) \in R^M   (2)
Therefore, a set of p training patterns is provided, so that we know with certainty that the input pattern (x_1^k, x_2^k, \ldots, x_N^k) corresponds to the output (y_1^k, y_2^k, \ldots, y_M^k), k = 1, 2, \ldots, p. That is, we know the matching of p patterns. Thus, our training set is

\{(x_1^k, x_2^k, \ldots, x_N^k) \rightarrow (y_1^k, y_2^k, \ldots, y_M^k) : k = 1, 2, \ldots, p\}   (3)
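A one-hidden-layer MLP implementing a mapping of this kind can be sketched as follows (a minimal illustration in plain Python; the function and variable names are our own, not part of the framework):

```python
import math

def logistic(a):
    """Logistic transfer function, one of those mentioned above."""
    return 1.0 / (1.0 + math.exp(-a))

def mlp_forward(x, T, W):
    """Forward pass of a one-hidden-layer MLP.
    x: input pattern (length N)
    T: hidden-layer weights; T[j][r] connects hidden unit j to input r
    W: output-layer weights; W[i][j] connects output unit i to hidden unit j
    Returns the output pattern (length M)."""
    s = [logistic(sum(tj[r] * x[r] for r in range(len(x)))) for tj in T]
    return [logistic(sum(wi[j] * s[j] for j in range(len(s)))) for wi in W]

# Example: N = 2 inputs, L = 2 hidden units, M = 1 output.
y = mlp_forward([0.3, 0.8],
                T=[[1.0, -1.0], [-0.5, 2.0]],
                W=[[1.5, -1.5]])
print(y)  # a single value in (0, 1)
```

Note that evaluating the network is just scalar products and sums, which is why the decision process is cheap enough to run on a mote.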
In order to implement this relation, the first layer has as many units as the input pattern has components, i.e. N; the output layer will have as many processing units as the desired output has components, i.e. M; and the number of hidden layers and their size will depend on the difficulty of the matching to be implemented.

Fig. 2. Multi-layer perceptron topology.

The following equation sets out the computational dynamics of the MLP. As the inputs to the processing units of a layer are the outputs of the processing units of the preceding layer, the multilayer perceptron with only one hidden layer implements:

y_i = g_1\left(\sum_{j=1}^{L} w_{ij} s_j\right) = g_1\left(\sum_{j=1}^{L} w_{ij}\, g_2\left(\sum_{r=1}^{N} t_{jr} x_r\right)\right)   (4)

where w_{ij} is the synaptic weight which connects the output unit i and the processing unit j in the hidden layer; L is the number of processing units in the hidden layer; g_1 is the transfer function of the output units, which is usually the logistic function; t_{jr} is the synaptic weight which connects the processing unit j in the hidden layer and the input sensor r; and g_2 is the transfer function of the hidden-layer units, which is of the same type as the previous one.

Once we have established the network topology and its computational dynamics, the assignment of the synaptic weights w_{ij} leads to the complete design of the network. A training process is followed, whereby each pattern is introduced and the error between the output obtained by the network and the desired output is evaluated. The synaptic weights are then modified according to this error by using the backpropagation algorithm (Rumelhart et al., 1986), which is a supervised learning method and an implementation of the Delta rule.

From an input pattern, an output is obtained representing the associated class. One of the major advantages of this model is its ability to generalize (Sietsma and Dow, 1991). This means that a trained network can correctly classify new data as long as it is from the same classes as the learning data. To reach the best generalization, the dataset should be split into two parts, the training set and the test set. The training set is used to train the neural network, and the error on this dataset is minimized during training. The test set is used to determine the performance of the neural network on patterns that have not been used during training. This process is known as cross-validation (Krogh and Vedelsby, 1995; Liu, 2006).

In our approach, the supervised training process is executed on a conventional computer at the base station because of its high computational requirements, which does not affect the level of complexity of each node. After that, a trained neural network model is obtained and transmitted to each WSAN node to make a decision over new input data obtained from the sensors. It should be noted that this model consists of a set of weight vectors (the number of layers minus one) whose size is minimal and insignificant in terms of data transmission. The decision process is carried out by evaluating the neural network on new input data in each node, which involves the computation of Eq. (4) given the weight vectors previously transmitted. This process consists only of scalar products of several vectors and sums. As a consequence, the complexity of the model lies in the training process, which is executed outside the WSAN, and not in the decision process, which is performed in the nodes.

3. Related work

Learning algorithms are increasingly being used to improve the
behavior of other kinds of technology. In the case of Wireless Sensor and Actor Networks, neural networks are a type of learning algorithm which has been shown to be very useful for improving many aspects of WSANs, such as intelligent data aggregation, routing or increasing network lifetime. To the best of our knowledge, our proposal is the first in which neural networks are used to achieve WSAN dynamic adaptation.
Some work has been done on merging technologies in order to
achieve lower communication costs, energy saving or fault
detection in a WSAN (Kulakov et al., 2005a). Zhao et al. (2007)
present a routing algorithm for WSANs based on neural networks.
This approach uses data such as node position, number of hops
between nodes, and energy remaining in the node to train the
neural network stored in each one of the nodes, which is used to
carry out the routing process. In comparison with the GEAR
(Yu et al., 2001) routing algorithm, this proposal has been shown
to be faster and less expensive in terms of energy.
Most neural network-based approaches are applied to improve
the routing algorithms. However, other proposals exist for dealing
with unusual problems. For example, Yu et al. (2005) applied a neural network method in the cluster heads of a wireless sensor network in order to carry out real-time forest fire detection. In this
approach, sensor nodes collect measured data and send it to their
respective cluster nodes that collaboratively process the data by
constructing a neural network. The neural network takes the
measured data as input to produce a weather index, which
measures the likelihood that the weather will cause a fire.
On the other hand, Von Pless et al. (2005) present a novel approach where neural networks (modified time-based multilayer perceptrons) act as a powerful function predictor which is able to reliably and accurately predict a wide variety of functions, including sinusoids, after only a very short training period. In the case of a wireless sensor network application, this approach is very useful for predicting the behavior over time of the signal (such as light, temperature, etc.) sensed by the sensor.
Kulakov et al. (2005b) used neural networks to classify data
obtained by the sensor nodes in a WSAN. Instead of reporting
raw data to the cluster head, each sensor node only reports the sensor pattern that has been classified locally by means of a neural network, thus saving communication time and energy.
Shareef et al. (2007) tested the viability of using neural networks to solve localization problems in the context of a WSAN.
The accuracy, robustness, and computational and memory requirements of different types of neural networks for solving localization problems, where the distance measurements are assumed to be noisy, are studied. Mobile nodes determine their position in a coordinate system by using the signals received from beacons (devices deployed at known locations that emit either radio or acoustic signals). The different neural networks are fed with
the distances from each beacon to the mobile node. The output
value calculated by the neural networks corresponds to the
location of the mobile node. The results obtained show that, for
the set of tests carried out, the multilayer perceptron neural
network (the one used in our framework) offers the best trade-off
between accuracy and resource requirements.
Over-the-air reprogramming is an interesting field of research which allows motes to be reprogrammed without having to interact with them physically. As mentioned previously, the approaches most similar to ours are those based on traditional reprogramming. This technique is more powerful in the sense that it allows any kind of program to be updated or modified, by using complex and costly protocols. For example, Hagedorn et al. (2008) used two different and independent protocols (rateless Deluge, an improved version of Deluge, and ACKless) to allow nodes based
on TinyOS to be reprogrammed. On the other hand, there are
other kinds of approaches which allow us to reprogram the nodes
taking into account component dependencies and versions
(Mottola et al., 2008).
Hu et al. (2009) propose a protocol called Reprogramming with Minimal Transferred Data (RMTD). It finds common segments between the old and the new code image, and computes the minimum number of bytes that need to be sent to the sensor node in order to construct the new code image. This proposal, however, is operating system dependent.
HERMES (Panta and Bagchi, 2009) is able to generate the
minimum delta between two programs (difference between the
old and the new software) in order to update only the needed
code part. The problem is that this approach needs the operating system to support dynamic linking of software components on a node and changes to its kernel modules.
Maia et al. (2009) proposed a new in-network algorithm called OPA-SW that combines the use of shortcut paths with over-the-air programming to improve the Deluge reprogramming protocol.
Basically, the protocol is based on the creation of short paths
between the sensor nodes and the sink in order to reduce
communication overhead.
The protocol presented by Tsiftes et al. (2008) reduces dissemination time and energy consumption by compressing the new code that is going to be sent. After testing seven different compression algorithms, they found that the popular GZIP algorithm offers the most favorable trade-off between dissemination time and energy consumption.
Dong et al. (2010) propose Elon, one of the most recent approaches to long-term reprogramming for wireless sensor networks. Elon does not need to reboot the hardware after the new code is sent and does not use the flash memory during the reprogramming process. Elon uses the concept of a replaceable component as the minimum piece of code that can be reprogrammed. Elon is built on top of TinyOS, which means that it is operating system dependent.
Law et al. (2011) propose Sreluge, an improvement of the popular Rateless Deluge protocol that prevents attackers from installing either corrupt or malicious code via "polluters" (nodes used to install the malicious code). Sreluge employs a neighbor classification system and a time series forecasting technique to isolate polluters, and a combinatorial technique to decode data packets in the presence of polluters before the isolation is complete.
A multihop reprogramming service, called MNP, is presented by Kulkarni and Wang (2009). MNP uses a greedy routing algorithm to disseminate the new program so that in a given neighborhood at most one source is transmitting it. In each neighborhood, the node elected to transmit the program is the one which has the most neighbor nodes requesting it. Nodes not transmitting and not interested in receiving the program go to sleep to save energy. In the same way, a node goes to sleep if its neighbors are not interested in the program segment it is advertising.
Fok et al. (2009) propose a totally different approach to carrying out reprogramming tasks within a WSN. In their work, they present Agilla, a middleware layer that runs on TinyOS and supports mobile agents for WSNs. The agents communicate via remote access to a local tuple space on each node, and can migrate via move and clone instructions. In our opinion, the main drawback of this approach is that only assembly language programming is supported, so the development of complex WSN applications can be very difficult.
Most of the time, it is only necessary to modify or update small parts of the code. Most traditional approaches, such as the ones surveyed here, require whole modules or even the whole application to be reprogrammed every time a change in the code needs to be made. Our approach is able to reprogram only the required parts of the code without having to use complex protocols, whose use in large WSNs can be demanding in terms of energy and computation. Also, most of the traditional approaches, unlike our proposal, rely on functionality provided by the operating system, which makes them operating system dependent.
4. NeuralSens framework
A framework, called NeuralSens, that allows dynamic reprogramming of sensor network applications by means of neural networks is presented in this section. Normally, the code of a sensor network application contains many points where decisions are made by means of control structures (if, while, etc.), especially when the nodes are actors, since they are mainly used to make decisions on the basis of data collected from other sensors. NeuralSens allows developers to manipulate these control structures by using neural networks, that is, to change their behavior at runtime. In other words, developers are able to carry out over-the-air programming to modify a number of control structures in the application code. The number of neural networks in a sensor node is related to the number of decisions which may be changed or modified over time. They make decisions based on information taken from their own sensors or from sensors in remote nodes. As neural networks can have a varied number of inputs and the number of neural networks in a sensor node is not limited to one, many different configurations can be achieved. In addition, sensor node behavior reprogramming can be carried out by training the neural networks with a different training set.
Fig. 3 depicts an example of how the neural networks are organized in a node. Each node holds information about the architecture (number of layers, number of inputs, number of outputs, kind of transfer functions and so on) of each neural network it has. It also stores the kind of data which is associated with each of the neural network inputs. All this information can be saved in the static memory of the node in order to ensure that the information about the neural networks in the node is not lost even if the node is reset.
On the other hand, in the application code the neural networks contained in the node can be used to make decisions. For example, in Fig. 3 the node receives two different kinds of data (height and weight) which are used by a neural network to control whether a person with particular features has access to the room. To make a decision (processNeuralNetwork method) no input parameters are needed, since the sensor registration module stores the information about the sensor nodes that provide each neural network input. In the example, height is provided by node 1 and weight by node 2. The analyzeDecision method represents
Fig. 3. NeuralSens deployment diagram.
the semantic interpretation that the developer gives to the result of the neural network.
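As a hedged sketch of this flow (the method names processNeuralNetwork and analyzeDecision come from the paper's Fig. 3; everything else, including the class shape and the sensor-registry dictionary, is our own illustrative assumption), a decision-making node might look like:

```python
class DecisionNode:
    """Illustrative node holding one neural network whose inputs are bound
    to registered sensors (e.g. height from node 1, weight from node 2,
    as in the paper's access-control example)."""

    def __init__(self, weights, threshold, registered_inputs):
        self.weights = weights                      # synaptic weights received over the air
        self.threshold = threshold
        self.registered_inputs = registered_inputs  # input name -> latest normalized reading

    def processNeuralNetwork(self):
        # No parameters: the inputs come from the sensor registration module.
        x = list(self.registered_inputs.values())
        s = sum(w * xi for w, xi in zip(self.weights, x))
        return 1 if s >= self.threshold else 0

    def analyzeDecision(self):
        # Semantic interpretation of the network output, defined by the developer.
        return "grant access" if self.processNeuralNetwork() == 1 else "deny access"

node = DecisionNode(weights=[0.7, 0.5], threshold=0.6,
                    registered_inputs={"height": 0.8, "weight": 0.2})
print(node.analyzeDecision())  # grant access (0.7*0.8 + 0.5*0.2 = 0.66 >= 0.6)
```

Reprogramming the node then amounts to overwriting `weights` and `threshold` with newly trained values, with no change to the code itself.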
The architecture of the NeuralSens Framework is quite simple.
It is composed of four modules (sensor query, sensor registration,
neural network and application modules) which are explained in
detail in Section 4.2. They will allow us to carry out the following
actions:
- To replace the code parts where decisions are made.
- To associate each of the neural network inputs with the data sources that are going to provide the values. These values come either from local data or from data sent by other nodes.
- To modify the architecture of an already existing neural network.
- To change the behavior of the neural networks which are already running in the motes with new information (neural network architecture and synaptic weights).
4.1. NeuralSens features
The following subsections describe the functionality provided
by the NeuralSens framework and the type of scenarios where it
can be applied.
Fig. 4. Greenhouse deployment.
4.1.1. Adaptation to new kinds of sensors
There are many scenarios in which WSANs are deployed to monitor a set of parameters within an environment and to act on the information received. However, over time, it may be necessary to add new parameters in order to improve the decisions taken by a particular node in response to a particular situation. It would be impossible to take these new parameters into account using traditional programming; it would only be possible by modifying the program and uploading it to the motes. For example, let us imagine an automobile company builds a new car which incorporates a small wireless sensor network. This WSN has sensors to check the status of the road and to detect the environmental conditions (fog, snow and ice layers, brightness, etc.). Thanks to these kinds of sensors, the car is able to make decisions and help the driver drive more safely. A year later, the automobile company creates a powerful new sensor which is not only able to detect objects in the street but is also able to recognize traffic signals. As this feature improves driver safety, the company's goal is to incorporate the new sensor into all existing vehicles. To do so, the code of the wireless device (node) already installed in the car must be updated in order to allow it to analyze the information received from the new sensors. Using our approach, no operators are needed to upload the program in charge of making decisions, since the new behavior of the program can be modeled with a neural network which can be transferred wirelessly to the car. In addition, if the program needs to be further updated in the future, this updating can also be carried out wirelessly.
4.1.2. Adaptation to new kinds of situations
Another interesting topic is to provide sensor network applications with the capacity to adapt to new situations dynamically. This means that although the number of parameters on which a decision is based does not change, the decision itself must be made taking different criteria into account. For instance, let us imagine a WSAN application which controls the temperature and humidity levels, provided by two different sensors, inside commercial greenhouses (see Fig. 4). When the temperature and humidity drop below specific levels, the greenhouse manager must be notified via e-mail or cell phone text message. A type of vegetable is planted for a specific time period in greenhouses which are formed by a large number of modules. Initially, the application was designed to set off an alarm when the levels associated with a specific plant were not within a specific range. Now, however, the cultivated plant has been substituted by another more profitable one, which requires different temperature and humidity levels. This means a change in the environmental conditions, so the previously mentioned training process must be performed in order to determine the new structure of the neural network. The new information will be transmitted over the air to the greenhouse actor nodes in each module, warning them when the levels being monitored are not correct. This kind of situation can also be controlled by means of traditional sensor networks, but it would be necessary to upload the complete program each time the decision-making module has to be changed due to a new situation in the environment.
Fig. 5. Software architecture of the framework (each sensor node contains a sensor query module and a sensor registration module; the decision-making node additionally contains a neural network module and an application module).
4.1.3. Modification of logical decisions
It is very common in WSAN applications to make decisions depending on data sensed by different kinds of sensors. When the developers are programming, these decisions are expressed in the code by means of logical expressions such as if (X > K) then …, while ((X == 3) and (Y < N)) …, and so on. Once the code has been uploaded to the nodes, the only way to modify these expressions is by reprogramming. In contrast, neural networks allow such modifications to be performed easily. For example, if we want to execute a particular task when a logical expression is satisfied, it is possible to do so by training a neural network so that the output neuron will have the value 1 when the logical expression is satisfied and 0 otherwise. It should be noted that this logical expression will be intrinsically represented in the synaptic weights of the network after the training phase. If, after some time, the logical expression has to be changed for a new one, the neural network will only have to be retrained, so that the output neuron will have the value 1 when the new logical expression is satisfied.
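As a toy illustration (our own sketch, not part of NeuralSens), an expression like if (X > K) can be represented by a single threshold unit whose parameters are later replaced over the air to encode a different expression, with no code upload:

```python
def decision(x, w, theta):
    """Threshold unit: fires (returns 1) when w*x >= theta."""
    return 1 if w * x >= theta else 0

# Encode "if (X > 0.5)": weight 1.0, threshold just above 0.5.
w, theta = 1.0, 0.500001
print(decision(0.7, w, theta))  # 1: 0.7 > 0.5
print(decision(0.3, w, theta))  # 0

# Later, the node receives new parameters over the air encoding
# "if (X > 0.8)" instead; only the weights change, not the code.
w, theta = 1.0, 0.800001
print(decision(0.7, w, theta))  # 0: the decision boundary has moved
```

In the real framework the parameters would come from training rather than being set by hand; this sketch only shows that the logic lives in the weights.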
4.2. Software architecture
The software architecture of the framework can be seen in
Fig. 5. An application that uses our framework is composed of four
different parts that are described in the following sections. The
framework can only be accessed through the API provided by the
system and shown in Fig. 6. It includes all the methods necessary
to gather sensor information and to use it to feed neural network
inputs. The final goal is to execute the neural network in the
context of an application and to use its result to make decisions.
The framework is composed of the following modules.
4.2.1. Sensor query module
The sensors offered by a node of the network must be registered with the sensor registration module by means of the addOfferedSensor method. All registered sensors need to provide a certain interface so that homogeneous and automatic access to the sensor readings is achieved. This means that developers using this framework must provide sensor access by implementing a specific class for every sensor offered. This class must inherit from a GenericSensor interface and implement the methods the interface defines to query the sensor. As explained in Section 4.2.2, the readings provided by the methods of this class need to be normalized into the [0,1] interval. In the architecture proposed, a node can offer more than one type of sensor.
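As a sketch of what such a class might look like (the GenericSensor method names and the temperature range used here are assumptions made for illustration, not the framework's actual API):

```java
// Sketch of the per-sensor class the framework expects. The method names
// of GenericSensor and the temperature range are assumptions made for
// illustration; the real interface is defined by the NeuralSens API.
interface GenericSensor {
    String getType();              // sensor type used during registration
    double getNormalizedReading(); // reading scaled into [0, 1]
}

class TemperatureSensor implements GenericSensor {
    // Assumed operating range of the physical sensor, needed to normalize.
    static final double MIN_C = -20.0, MAX_C = 60.0;
    private final double rawCelsius;

    TemperatureSensor(double rawCelsius) { this.rawCelsius = rawCelsius; }

    public String getType() { return "temperature"; }

    // Normalization happens at the sensor side because only the node that
    // owns the sensor knows the parameter range (see Section 4.2.3).
    public double getNormalizedReading() {
        double v = (rawCelsius - MIN_C) / (MAX_C - MIN_C);
        return Math.min(1.0, Math.max(0.0, v)); // clamp into [0, 1]
    }
}
```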
4.2.2. Sensor registration module
The decisions made by the neural networks in the context of our application are based primarily on the information received from the different sensors deployed in the environment, although direct specification of the inputs is also possible. Since the number of sensors in the network can vary over time, certain mechanisms must be implemented to: (1) detect new sensors deployed in the network and use them to feed the designated neural networks, (2) associate different neural network inputs with different sensor types and instances, and (3) deliver sensor readings from the remote nodes to the nodes with neural networks that use them. All these tasks are carried out by the sensor registration module.

Fig. 6. Framework API.
To keep track of the different sensors in our application, a sensor registration approach has been used. All sensors in the network must go through a registration process in which nodes hosting neural networks that need sensor data as input are linked to the sensors providing the readings.
The communication in the registration protocol is initiated by the sensors providing readings (sensor nodes), which broadcast a message informing about the sensor types they provide. This message is only answered by nodes (user nodes) interested in getting information from the sensor. Once the two-message registration protocol has been executed, the user node is able to get readings from the sensor node when necessary by means of the API method getRemoteNormalizedInput. The destination sensor node that will attend to the call is chosen based on its type rather than on a unique node identifier. This means, for example, that if a neural network input expects a temperature reading, the getRemoteNormalizedInput call will query temperature sensors already registered in the network to get the data. If there is more than one sensor of the same type, different approaches can be taken depending on the application requirements (getting the mean value, the maximum value, only one value, etc.).
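These aggregation policies can be sketched as follows (an illustrative helper, not part of the framework API):

```java
import java.util.List;

// Sketch of possible aggregation policies when several sensors of the
// requested type are registered. Class and method names are illustrative,
// not part of the framework API.
class ReadingAggregator {
    // Mean of all normalized readings of the same sensor type.
    static double mean(List<Double> readings) {
        double sum = 0.0;
        for (double r : readings) sum += r;
        return sum / readings.size();
    }

    // Maximum reading, useful for "worst case" decisions such as danger levels.
    static double max(List<Double> readings) {
        double best = readings.get(0);
        for (double r : readings) best = Math.max(best, r);
        return best;
    }
}
```

Which policy is appropriate depends on the application: a fire-danger network would typically use the maximum, while an environmental average would use the mean.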
4.2.3. Neural network module
Any number of neural networks can be created in each node
using the NeuralNetwork constructor. The behavior of the neural
network is specified by a set of synaptic weights and transfer
functions. It is also necessary to indicate the sensor type for each of the neural network inputs. The sensor registration module will link each input with sensors of the indicated type. This way, sensor reading acquisition is simplified and hidden from the user, who only needs to know the sensor type in order to get the sensor readings.

A neural network can only be executed once all the sensor readings it requires are registered in the application. To automatically execute a neural network with the corresponding registered sensors, the processNeuralNetwork() method is used. If the programmer wishes to indicate the inputs of the network manually, the processNeuralNetwork(input:float[]) method needs to be used instead. Both methods return the output neuron values of the neural network, which express a decision made on the basis of the input values. Both the input and output values of the neural networks are normalized so that they fall into the [0,1] interval. The sensor readings are also normalized before being received by the neural network module. This means that the normalization is done by each remote sensor node providing readings rather than by the neural network module receiving them, since the range of each parameter is needed to carry out the normalization and this information is not known by the latter. The way the neural network module is accessed from the application module is explained in the following section.
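The internals of processNeuralNetwork are not detailed here; the following sketch shows the plain feed-forward pass with sigmoid transfer functions that such a method could perform. The weight layout (weights[layer][neuron][input], with the bias stored after each neuron's input weights) is an assumption for illustration:

```java
// Sketch of the computation processNeuralNetwork could perform internally:
// a plain feed-forward pass with sigmoid transfer functions. The weight
// layout (weights[layer][neuron][input], last index = bias) is an
// assumption for illustration, not the framework's actual representation.
class MlpForward {
    static double sigmoid(double v) { return 1.0 / (1.0 + Math.exp(-v)); }

    static double[] process(double[][][] weights, double[] inputs) {
        double[] activations = inputs;
        for (double[][] layer : weights) {
            double[] next = new double[layer.length];
            for (int n = 0; n < layer.length; n++) {
                double sum = layer[n][activations.length]; // bias term
                for (int i = 0; i < activations.length; i++)
                    sum += layer[n][i] * activations[i];
                next[n] = sigmoid(sum); // keeps every value in [0, 1]
            }
            activations = next;
        }
        return activations; // output neuron values, i.e. the decision
    }
}
```

Because the transfer function is a sigmoid, the outputs automatically fall into the [0,1] interval, matching the normalization convention described above.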
Additional functionality is provided to reprogram the behavior
of a neural network. To update the behavior of an existing neural
network a remote message can be sent to specify its new synaptic
weights by means of the updateNeuralNetwork method. This
method provides a way to change the behavior of nodes remotely.
The remote neural network is identified by the identifier specified in the first parameter of the NeuralNetwork constructor when the neural network is created.
On the other hand, if it is necessary to add new inputs, outputs or layers to an existing network, the NeuralNetwork constructor provides a way to do so: the identifier of the existing network is indicated in the first parameter. Since the network already exists, the constructor will update the neural network rather than create a new one.
The process of updating a neural network is usually initiated by the base station, which decides when it is necessary to carry out updates. However, it is also possible to carry out an update from any other kind of node. When the base station initiates the process, it first creates and transmits the packet with the new neural network to the destination nodes. Once a node receives the request, it installs the neural network contained in the packet. Sometimes, especially if the neural network sent over the network is complex, fragmentation needs to be used in order to send the packet to the destination nodes. This occurs when the size of the neural network is larger than the MTU (Maximum Transmission Unit). All neural network installations or updates are confirmed by means of ack packets sent from the destination nodes to the source nodes.
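The fragmentation step can be sketched as follows; the Fragmenter class and the idea of returning raw byte chunks without protocol headers are illustrative, not the framework's actual wire format:

```java
import java.util.Arrays;

// Sketch of the fragmentation step: splitting a serialized neural network
// into MTU-sized chunks before transmission. The class name and the idea
// of returning raw byte chunks (without protocol headers) are illustrative.
class Fragmenter {
    static byte[][] fragment(byte[] payload, int mtu) {
        int count = (payload.length + mtu - 1) / mtu; // ceiling division
        byte[][] packets = new byte[count][];
        for (int p = 0; p < count; p++) {
            int from = p * mtu;
            int to = Math.min(from + mtu, payload.length);
            packets[p] = Arrays.copyOfRange(payload, from, to);
        }
        return packets;
    }
}
```

For example, an 800-byte neural network sent over a link with a 127-byte MTU would be split into seven packets, the last one carrying the remaining 38 bytes.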
4.2.4. Application module
The neural network module executes local neural networks using the information received from the specified sensors. In order to make use of the decisions made by the neural network module, the developer has to implement the main application logic. The main application consists of code written in the sensor platform programming language that uses the neural network module to execute neural networks and uses their results to make decisions. Developers are assumed to know in advance how each neural network behaves, because its output needs to be interpreted by them. This is inherent to all neural networks, which only return real numbers, usually in the [0,1] interval, whose semantics need to be known in order to understand the decisions they make. Finally, some additional methods are provided to obtain neural networks (getNeuralNetwork) and to query the state of the application.
5. Case studies
In order to show both the adaptation capacity of WSANs using artificial neural networks and the ease of integrating them within sensor networks, two different case studies have been considered using the framework explained in Section 4. In the first case study, our approach is applied to detect falls in elderly people and to improve the detection of these kinds of events once the application has been deployed in the sensor network. The second case study shows how it is possible for an application that has already been deployed to process new kinds of data and to take these into account to make decisions.
5.1. Case study 1: Detection of falls in the elderly
In the case of elderly people living on their own, there is a particular need to monitor events such as a fall or a long period of inactivity. There are different ways of addressing this problem, for example by using video surveillance systems.
Systems of this kind have several drawbacks. The motion detection algorithms needed to detect anomalous activities are very sensitive to light conditions; in particular, they are affected by the presence of shadows and by sudden changes due to light switches, which negatively affects the feature identification process. Moreover, it is practically impossible to monitor all the elderly people living in a particular old people's home, as this would require the installation of many cameras. Such a solution is also expensive and requires proper room lighting.
Let us suppose that we would like to detect when an elderly person living in a residence falls, either in their room or in a communal area. A much more efficient way to detect when a person falls, using wireless sensor networks, is presented in this paper. The first step consists of attaching to each person a small sensor node with a 3-axis accelerometer, which allows the system to monitor tilt angle and motion (see Fig. 7). This configuration detects any anomalous movement in the individual wearing it, such as a sharp downward movement or indeed any
sudden or violent movement. In addition, the sensor node can be used to sense other kinds of parameters such as heart rate, temperature, blood pressure, etc. The nodes worn by other people, together with other sensors fixed in the walls of the residence, form a wireless sensor network that relays the data from the location where the event occurs to the base station, where the information is processed.
Extracting falling patterns from the accelerometer readings (3-axis data) gathered by the sensor is quite difficult, because developers would have to implement the program logic that decides whether the person has fallen down or not based on the obtained readings. This process is arduous and time-consuming, and the final result would not be entirely reliable. However, artificial neural networks offer a simple and versatile way to solve this problem. Specifically, a Multi-Layer Perceptron (MLP) is trained with real data composed of the accelerometer readings and a variable indicating whether that particular set of data corresponds to a fall or not.
The neural network training process manages to divide the input space into two classes, separating the patterns associated with a fall from those which are not. Classification problems of this kind tend to be addressed with nonlinear models like the MLP, because they solve a greater number of (and more complex) problems than linear models can. In this neural model, the number of hidden layers, together with the number of neurons, varies depending on the complexity of the problem to be solved. The purpose of the hidden layers is to project the input patterns into another space in which the classification can be solved linearly.
This means that the developer does not need to know the relationship between the gathered data and a fall; it is learned automatically by the neural network training process. After building the trained neural network model, it is tested using the remaining patterns to check the reliability of the classification results. If this verification is satisfactory, the neural model will correctly interpret the data provided by the sensors and will decide whether the data corresponds to a fall or not. Moreover, since other parameters are controlled by the WSAN, this information can be correlated with the fall in order to try to establish the reason(s) for it, such as low blood pressure.
Fig. 7. Deployment to monitor falls.
Fig. 8. Monitoring of a hotel room.
5.2. Case study 2: treatment of new sensors at runtime
There are many cases where WSANs are deployed to monitor a
set of parameters within an environment and to act accordingly. It
is also common that other parameters must be taken into account
in the future once the network has been deployed in order to
improve the decision on the basis of which an actor carries out a
specific action. With traditional programming techniques it is not possible to modify the program behavior to take these new parameters into account; reprogramming techniques such as the one presented in this paper are therefore necessary to deal with these kinds of situations, making it possible to modify the program and upload it to the motes. Let us assume that a WSAN has been
deployed within a hotel with 100 rooms so that each room has a
small wireless sensor network to control and act when the smoke
and temperature levels are higher than a danger threshold (see
Fig. 8). The WSANs in the rooms can communicate with each other so that all the data can be transmitted to the base station, where everything is monitored. After some time, a new sensor is considered useful for the hotel and is deployed in each room in order to improve its security. Because the programs uploaded to each actor were not originally designed to deal with the new sensor, it is necessary to modify the program of the 100 actor nodes deployed in the rooms by physically locating the nodes and uploading the new application. Considering as an example that on average 4 min are needed to update a node, this
means that 400 min (almost 7 h) are needed each time that all
the actor nodes have to be modified. On the other hand, our
approach is able to modify all nodes in only a few minutes, since
the neural network reprogramming can be done at runtime
without affecting the main program where they are used.
6. Evaluation
6.1. Environment setup
In order to test the NeuralSens framework and to verify the
feasibility of our proposal a prototype has been implemented
with the basic architecture functionality.
The implemented prototype uses new-generation motes called SunSPOTs. SunSPOT motes have a 180 MHz 32-bit ARM920T core processor with 512 KB RAM and 4 MB Flash. This device implements a Java Platform, Micro Edition (Java ME) Virtual Machine that runs directly on the processor without any OS. The Java libraries available on the devices, however, are relatively limited due to the device resource constraints. The platform only incorporates basic libraries and some utility libraries to ease programming tasks. GUI and other more advanced libraries, for example, are not available when programming SunSPOTs.
6.2. NeuralSens performance evaluation
First, the impact of the framework in terms of memory space is analyzed. Every sensor node with reprogramming requirements needs to allocate memory for two things: (1) the four framework modules used to carry out the reprogramming and (2) the memory needed for each of the neural networks used.
Table 1 shows the memory footprint of each of the framework modules, and the following expression gives the memory footprint of a particular neural network:

NN_Footprint = l (L_1 X + Σ_{i=1}^{n-1} L_i L_{i+1} + L_n Y)    (5)

where X and Y are the number of input and output neurons, respectively, and L_i is the number of neurons in the i-th hidden layer, i = 1, 2, ..., n. l is the number of bytes needed to represent each synaptic weight. In our approach, we use the float data type, which requires 4 bytes (l = 4).
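Expression (5) can be checked against Table 2 mechanically. The following sketch (a hypothetical helper, not part of the framework) computes the number of synaptic weights and the resulting footprint; for example, configuration 2 (3 inputs, hidden layers of 5 and 6 neurons, 4 outputs) yields 69 weights and 276 bytes with l = 4:

```java
// Sketch implementing expression (5): number of synaptic weights and
// memory footprint of a network with x inputs, hidden layer sizes
// hidden[0..n-1] and y outputs, using bytesPerWeight bytes per weight.
// The helper class is hypothetical; it only mechanizes the formula.
class Footprint {
    static int weights(int x, int[] hidden, int y) {
        int total = x * hidden[0];                    // input -> first hidden
        for (int i = 0; i + 1 < hidden.length; i++)
            total += hidden[i] * hidden[i + 1];       // hidden -> hidden
        return total + hidden[hidden.length - 1] * y; // last hidden -> output
    }

    static int bytes(int x, int[] hidden, int y, int bytesPerWeight) {
        return bytesPerWeight * weights(x, hidden, y);
    }
}
```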
In short, the memory needed by a sensor node to carry out reprogramming tasks through the framework can be calculated by using the following expression:

Total_Footprint = Σ_{i=1}^{n} NN_Footprint_i + NNM + RM + QM + AM    (6)

where:

n = number of neural networks used.
NNM = memory footprint of the neural network module.
RM = memory footprint of the sensor registration module.
QM = memory footprint of the query module.
AM = memory footprint of the application module.

Table 1
Memory footprints of the framework modules (bytes).

Neural network module    Sensor registration module    Query module    Application module
10 340                   4308                          1723            861

On the other hand, in order to evaluate the performance of the NeuralSens framework, two different tests have been carried out on the SunSPOT motes. The first one measures the time it takes the framework to send a neural network over the network to a destination node; this is the first step when installing a new neural network or updating an existing one in a remote node. The second test measures the efficiency of locally installing or updating a neural network in a node, assuming that it has been received successfully. To test the framework, five different neural network configurations are used. They have been chosen with different numbers of input/output neurons and of neurons in the hidden layers in order to test the framework performance under different circumstances. Table 2 shows the different neural networks used in the tests and the memory space needed by each of them. For example, the first configuration has three layers (an input layer, a hidden layer and an output layer). The input layer has three neurons, the hidden layer has three and the output layer four.

Table 2
Neural network configurations used for the tests.

Neural network id    Inputs    Neurons per hidden layer    Outputs    Number of synaptic weights    Memory footprint (bytes)
1                    3         3                           4          21                            84
2                    3         5,6                         4          69                            276
3                    2         2,4                         4          28                            112
4                    3         10,10,5                     4          200                           800
5                    3         5,5,3                       4          67                            268

The size in bytes of the different neural networks is shown in Fig. 9(a). As expected, the size of the neural network is closely related to the number of synaptic weights it has. Fig. 9(b) measures the time it takes the framework to transmit the neural network to a remote node, whereas Fig. 9(c) depicts the time it takes the destination node to update the neural networks once the data has been received. The most expensive step in our update process is sending the neural network across the network to the destination node; the update process itself is relatively quick. This means that the application does not need to be stopped for a long time to update the application logic. Finally, the results obtained when updating a neural network show that this operation can be done in a small amount of time, as shown in Fig. 9(c). Thus, we believe that our framework is suitable for scenarios in which a lot of low-complexity or medium-complexity decisions need to be made, primarily based on information from remote sensors, and in cases where the criteria by which these decisions are made change over time as the application executes.

6.3. Case study 1: detection of falls
In order to generate the neural network that detects falls, it is necessary to obtain a data set to train the network. To obtain this training data set, in this case study we attached a sensor node to the waist. The sensor is placed on the waist because it is the part of the body where typical movements (such as walking, sitting down, etc.) do not significantly affect the readings of the accelerometer sensor. Moreover, when the sensor is attached to the waist, falls have a unique, recognizable pattern.
After the sensor is attached to the waist, we reproduce common movements such as sitting down, walking and stopping. The sensor periodically gathers accelerometer data every 100 ms. This set of data was identified with the pattern [x y z 0], where x, y and z represent the accelerometer readings for each axis and 0 indicates that the pattern does not correspond to a fall.
Fig. 9. Performance evaluation results: (a) neural network size, (b) neural network transmission time, (c) neural network update time.
Fig. 10. First case study classification process using MLP: (a) the input data; (b) the final classification using two classes (fall or no fall), in which the dot and circle marks correspond to the training and testing patterns, respectively.
After that, we simulated falls to obtain patterns related to them. This set of data was represented as the pattern [x y z 1]. Once the neural network was trained, some tests were carried out in order to see how well the combination of sensors and neural networks performed in detecting falls. The results obtained are satisfactory (Fig. 10), since all simulated falls were detected correctly by the sensor node. However, the sensor was only programmed (trained) to detect forward falls; in other words, backward falls could not be detected. If, after the deployment of the network, the application needs to be improved to take into consideration new data or new behavior, traditional reprogramming requires reprogramming the whole program, or a large part of it, in all the nodes attached to each person. Following our approach, however, this task is considerably simplified.
To reprogram the neural network behavior, new training must be carried out. The advantage of this approach is that the training can be done offline. This means that the training process can be done on a conventional PC with the new training data, such as data taking backward falls into consideration. This new data is needed in order to obtain the new synaptic weights (float numbers) in which the behavior of the neural network allowing the detection of the new kinds of falls is implicit. Once the synaptic weights are calculated, they need to be sent over the network. All nodes receiving the new synaptic weights will update their neural network, which will make the application change its behavior.
6.4. Case study 2: treatment of new sensors at runtime
In order to show how our approach allows the WSAN to deal
with new sensors at runtime, a small WSAN has been deployed in our lab. This network is composed of three nodes: two of them measure temperature and smoke levels, and the third one (an actor) is in charge of taking decisions on the basis of the data received from the sensors. Therefore, an artificial neural network has been implemented in the actor node with two inputs (one per sensor) and one output, which provides a value indicating the fire danger level depending on the inputs. To train the neural network, patterns with the format [levelTemperature levelSmoke levelDanger] have been used. The detailed criteria followed to choose the patterns are depicted in Fig. 11(a).

Fig. 11. Training pattern rules for case study 2. (a) Training pattern rules—two inputs; (b) training pattern rules—three inputs.
Fig. 12. Second case study classification process using MLP: (a) the classification task with four classes and (b) the classification task with eight classes, shown in different colors. The classes correspond to the rules defined in the case study. Testing data are marked with a bigger mark. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Four danger levels have been identified. The most dangerous
situation is when the temperature and smoke level are very high.
This case would be identified by the neural network providing the
value 1 as output. The next less dangerous situation would be
when the temperature level is low and the smoke level is high; the next one would be the opposite case. These last two cases would be detected by the neural network providing the values 0.7 and 0.3, respectively. It has been considered that a high smoke level is more dangerous than a high temperature level. Finally, the normal situation is when both the temperature and smoke levels are low; this scenario has been associated with the value 0. Unlike the patterns used to detect falls, this new sort of pattern can be generated automatically; that is, it is not necessary to deploy a WSAN in a real scenario to obtain a set of real patterns, since we not only know the value range of the parameters but also when their levels are dangerous. The set of patterns has consequently been generated in the following way: for each kind of pattern, 30 different training samples have been generated randomly.
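This generation procedure can be sketched as follows; the 0.5 low/high threshold and the sampling ranges are assumptions for illustration, since the exact rule boundaries used are those given in Fig. 11(a):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of the automatic pattern generation described above. The 0.5
// threshold separating "low" from "high" levels and the sampling ranges
// are assumptions for illustration; the exact rule boundaries are those
// of Fig. 11(a).
class PatternGenerator {
    // Danger level associated with each of the four situations.
    static double danger(double temp, double smoke) {
        boolean highT = temp > 0.5, highS = smoke > 0.5;
        if (highT && highS) return 1.0; // most dangerous situation
        if (highS) return 0.7;          // smoke weighs more than heat
        if (highT) return 0.3;
        return 0.0;                     // normal situation
    }

    // 30 samples per class, each [levelTemperature, levelSmoke, levelDanger].
    static List<double[]> generate(long seed) {
        Random rnd = new Random(seed);
        List<double[]> patterns = new ArrayList<double[]>();
        double[][] classes = { {0, 0}, {0, 1}, {1, 0}, {1, 1} }; // low/high mix
        for (double[] c : classes)
            for (int i = 0; i < 30; i++) {
                double t = c[0] * 0.5 + rnd.nextDouble() * 0.5;
                double s = c[1] * 0.5 + rnd.nextDouble() * 0.5;
                patterns.add(new double[]{ t, s, danger(t, s) });
            }
        return patterns; // 4 classes x 30 samples = 120 patterns
    }
}
```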
Once the whole system is deployed and executing, we add a new sensor that will be used in conjunction with the others already deployed to make decisions. With traditional programming it would be necessary to modify the program logic, that is, the part of the code where the temperature and smoke data are analyzed using control syntax like if ... then. However, to change the behavior of the actor node using our approach, we only had to retrain the neural network with the new patterns [levelTemperature levelSmoke levelGas levelDanger] in order to obtain its new synaptic weights and configuration. The criteria presented in Fig. 11(b) have been used to reprogram the network.
Simply adding a new sensor to our system has increased the complexity of analyzing dangerous situations, since eight dangerous situations are now possible. The pattern set has been generated in the same way as in the previous case (with only two sensors). The information obtained after the training is then sent to the actor node, which is able to replace the old neural network with the new one at runtime. After the update, the actor node is able to deal with and process the data provided by the new sensor.
Figure 12 shows the results from the neural networks obtained
for the second case study. Figure 12(a) shows the training and
testing data for the scenario with temperature and smoke sensors
whereas Fig. 12(b) shows the neural network results once an
additional gas sensor has been added. Both figures show that, after the training, all testing data has been successfully classified into one of the classes defined in the rules (four rules based on two input sensors and eight rules based on three input sensors, as defined in Fig. 11(a) and (b), respectively).
7. Conclusions
In this paper, neural networks and wireless sensor and actor
networks have been combined to provide an alternative to
traditional reprogramming. On the one hand, this reprogramming process is not dependent on the operating system executed on the sensor nodes. On the other hand, in contrast to many traditional reprogramming proposals, small code changes can be carried out without having to reprogram the whole application. Neural networks are used to make decisions in the program logic based on different inputs. If the behavior needs to be changed, new offline training of the neural network can be carried out. The resulting synaptic weights and some additional reprogramming information can then be sent over the network to reprogram the nodes. A possible architecture to deal with reprogramming using neural networks, called NeuralSens, has been presented. This architecture allows a program to ask sensors for information, and the received data is used to feed a neural network. Two meaningful case studies have been presented and implemented with our proposal, and an evaluation has been described. Finally, it is worth mentioning that this article proposes a new way of dealing with changing behavior in applications and that, although neural networks are used as the reprogramming technique, other alternatives such as Bayesian networks, decision trees or support vector machines could also be used instead.
Funding
This work was partially supported by Spanish Projects P07-TIC-03184, TIN2008-03107 and TIC-03085.
References
Akyildiz IF, Kasimoglu IH. Wireless sensor and actor networks: research challenges.
Ad Hoc Networks 2004;2(4):351–67. URL /http://www.sciencedirect.com/science/
article/B7576-4CDWMMM-1/2/8c25ac6ebd5ab2e50ce172498c2add9aS.
Brown Y, Sreenan CJ. Updating software in wireless sensor networks: A survey.
Technical report; 2006.
Chen J, Dı́az M, Llopis L, Rubio B, Troya JM. A survey on quality of service support
in wireless sensor and actor networks: Requirements and challenges in the
context of critical infrastructure protection. Journal of Network and Computer
Applications 2011;34(4):1225–39.
E. Cañete et al. / Journal of Network and Computer Applications 35 (2012) 382–393
Dong W, Liu Y, Wu X, Gu L, Chen C. Elon: Enabling efficient and long-term
reprogramming for wireless sensor networks. In: SIGMETRICS 10 Proceedings
of the ACM SIGMETRICS international conference on Measurement and
modeling of computer systems, vol. 38(1); 2010. p. 49–60. URL /http://
citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.162.2763S.
Fok C-L, Roman G-C, Lu C. Agilla: A mobile agent middleware for self-adaptive
wireless sensor networks. ACM Transactions on Autonomous Adaptive and
Systems 2009;4(16):1–16.
Garcı́a-Hernández C, Ibarguengoytia-González P, Garcı́a-Hernández J, Pérez-Dı́az J.
Wireless sensor networks and applications: a survey. International Journal of
Computer Science and Network Security 2007;7(3):264–73.
Hagedorn A, Starobinski D, Trachtenberg A. Rateless deluge: Over-the-air programming of wireless sensor networks using random linear codes. In: IPSN
’08: Proceedings of the 7th international conference on Information processing
in sensor networks. Washington, DC, USA: IEEE Computer Society; 2008.
p. 457–66.
Hornik K, Stinchcombe M, White H. Multilayer feedforward networks are universal approximators. Neural Networks 1989;2(5):359–66.
Horre W, Michiels S, Joosen W, Verbaeten P. Davim: Adaptable middleware for
sensor networks. IEEE Distributed Systems Online 2008:9.
Hu J, Xue C, He Y, Sha E-M. Reprogramming with minimal transferred data on
wireless sensor network. In: IEEE 6th international conference on mobile
Adhoc and sensor systems, October 2009. MASS ’09; 2009. p. 160–7.
Hui JW, Culler D. The dynamic behavior of a data dissemination protocol for
network programming at scale. In: SenSys ’04: Proceedings of the 2nd
international conference on Embedded networked sensor systems. New York,
NY, USA: ACM; 2004. p. 81–94.
Jawhar I, Mohamed N, Agrawal DP. Linear wireless sensor networks: classification
and applications. Journal of Network and Computer Applications
2011;34(5):1671–82 (Dependable Multimedia Communications: Systems,
Services, and Applications).
Krogh A, Vedelsby J. Neural network ensembles, cross validation, and active
learning. In: Advances in neural information processing systems. MIT Press;
1995. p. 231–8.
Kulakov A, Davcev D, Trajkovski G. Application of wavelet neural-networks in wireless
sensor networks. In: International conference on software engineering, artificial
intelligence, networking and parallel/distributed computing & International workshop on self-assembling wireless networks; 2005a. p. 262–7.
Kulakov A, Davcev D, Trajkovski G. Implementing artificial neural-networks in
wireless sensor networks. In: IEEE/Sarnoff symposium on advances in wired
and wireless communication; 2005b. p. 94–7. (April).
Kulkarni S, Wang L. Energy-efficient multihop reprogramming for sensor networks. ACM Transactions on Sensor Networks 2009;5(April):16.1–16.4. URL: http://doi.acm.org/10.1145/1498915.1498922.
Law Y, Zhang Y, Jin J, Palaniswami M, Havinga P. Secure rateless Deluge: pollution-resistant reprogramming and data dissemination for wireless sensor networks. EURASIP Journal on Wireless Communications and Networking 2011:5.
Liu Y. Create stable neural networks by cross-validation. In: International joint
conference on neural networks, 2006. IJCNN ’06; 2006. p. 3925–8.
Maia G, Guidoni DL, Aquino AL, Loureiro AA. Improving an over-the-air programming protocol for wireless sensor networks based on small world concepts. In: Proceedings of the 12th ACM international conference on modeling, analysis and simulation of wireless and mobile systems, MSWiM '09. New York, NY, USA: ACM; 2009. p. 261–7. URL: http://doi.acm.org/10.1145/1641804.1641848.
Minsky M, Papert SA. Perceptrons. MIT Press; 1969.
Mottola L, Picco GP, Sheikh AA. Figaro: fine-grained software reconfiguration for
wireless sensor networks. In: EWSN; 2008. p. 286–304.
Panta RK, Bagchi S. Hermes: fast and energy efficient incremental code updates for wireless sensor networks. IEEE; 2009. p. 639–47. URL: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5061971.
Rumelhart DE, Hinton GE, Williams RJ. Learning internal representations by error propagation. In: Parallel distributed processing: explorations in the microstructure of cognition, vol. 1: Foundations; 1986. p. 318–62.
Shareef A, Zhu Y, Musavi M. Localization using neural networks in wireless sensor
networks. In: MOBILWARE’08: Proceedings of the 1st international conference
on MOBILe Wireless MiddleWARE, operating systems, and applications.
Brussels, Belgium: ICST (Institute for Computer Sciences, Social-Informatics
and Telecommunications Engineering); 2007. p. 1–7.
Sietsma J, Dow RJF. Creating artificial neural networks that generalize. Neural
Networks 1991;4(1):67–79.
Tsiftes N, Dunkels A, Voigt T. Efficient sensor network reprogramming through compression of executable modules. In: Fifth annual IEEE communications society conference on sensor, mesh and ad hoc communications and networks, SECON '08, June 2008. p. 359–67.
Von Pless G, Al Karim T, Reznik L. Modified time-based multilayer perceptron for sensor networks and image processing applications. In: Proceedings of the IEEE international joint conference on neural networks, IJCNN '05, vol. 4; 31 August 2005. p. 2201–6.
Wang Q, Zhu Y, Cheng L. Reprogramming wireless sensor networks: challenges and approaches. IEEE Network 2006;20(3):48–55. URL: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1637932.
Yu L, Wang N, Meng X. Real-time forest fire detection with wireless sensor networks. In: Proceedings of the international conference on wireless communications, networking and mobile computing, vol. 2; September 2005. p. 1214–7.
Yu Y, Govindan R, Estrin D. Geographical and energy aware routing: a recursive
data dissemination protocol for wireless sensor networks. Technical report;
2001.
Zhao W, Liu D, Jiang Y. Application of neural networks in wireless sensor network routing algorithm. In: CISW '07: Proceedings of the 2007 international conference on computational intelligence and security workshops. Washington, DC, USA: IEEE Computer Society; 2007. p. 55–8.