NSF NeTS Project

Ultra-Low Power Internet of Things through Asymmetric Radio Design


Abstract:

We are currently witnessing a boom in IoT systems and similar extreme-scale sensing systems; examples include smart cities, smart buildings, smart health, and many others. In fact, it is forecasted that by 2020 there will be more than 50 billion embedded devices serving as building blocks for such systems. If the forecast comes true, in a few years we will be surrounded by a vast number of embedded devices, the majority of which will be connected to the cloud through wireless technology. Given that these devices, which may engage in frequent radio activity, will share the space in which we live and work – some may even be attached to our bodies – it is imperative to aim for a “green IoT,” especially in terms of making the communication “green,” so that power consumption is minimized and bandwidth utilization is maximized. However, existing medium access control (MAC) protocols are far from green – they usually lead to short device lifetimes and under-utilized bandwidth because their design focus is on providing reliable communication. As a result of this design objective, traditional MAC protocols spend a large fraction of their time sensing the channel, waiting for acknowledgements, or performing other control activities, instead of communicating data relevant to applications.


Some deliberation reveals that the “greenest” communication protocol is the one that incurs the fewest joules per bit successfully decoded by the application. If we assume an equal amount of information needs to be delivered to the application, then the protocol that consumes the fewest joules is naturally one in which the only allowed radio activity is transmitting application data. All other activities – sensing the channel, transmitting control packets, listening for acknowledgements, forwarding packets, etc. – should be removed from these wireless devices. We call such devices Transmit-Only devices, and such a protocol Transmit-Only, or TO for short. In this proposal, we strongly advocate the adoption of TO as the green communication protocol for emerging IoT systems.



Proposed Research
In this project, we aim to systematically maximize the throughput of TO through network planning and deployment (which can in turn control the channel matrix H, as discussed in Section 2.1). Before discussing the proposed research, we first define an important metric for TO throughput: the transmitter contention level. In a TO system, a transmitter having a contention level L means there are L other transmitters which, should a collision occur, would prevent this transmitter’s packets from being captured by any receiver. In this case, we say this transmitter is contending with those L transmitters. We use A ← B to denote that A is contending with B: should A’s and B’s packets overlap, A’s packet will not be captured by any receiver (without extra decoding help). Note that A ← B does not imply that B is also contending with A; “contending with” is in general an asymmetric relation between two transmitters. The objective of the proposed research is to minimize contention among transmitters as well as to minimize collisions among contending transmitters. In a TO network where transmitters receive no feedback from other transmitters or receivers, we intend to achieve this optimization through proactive deployment planning (e.g., receiver/transmitter placement) as well as transmission scheduling. Note that this planning and scheduling phase takes place before the actual network is deployed:

  • Minimizing contention of the TO system. The main difference between TO and other MAC protocols is that TO avoids packet loss due to collisions by reducing the contention among transmitters through the radio’s inherent capture capability rather than through protocol mechanisms. Thus, the first set of planning strategies aims to bring down contention. Specifically, we propose to minimize transmitter contention by optimizing receiver placement and transmitter placement before deployment.
  • Minimizing collisions between contending transmitters. Even after this effort, some contention among transmitters will remain unless we have as many receivers as transmitters. Therefore, the second part of proactive planning is to create a strategic transmission schedule that minimizes the overlap between contending transmissions. Our scheduling algorithm also takes potential transmitter mobility into account to minimize its impact on network throughput.


Progress Made in the First Year: Optimal Receiver Placement

We first formally define the optimal receiver placement problem. Consider two transmitters located at t1, t2 ∈ R2 and a receiver located at r ∈ R2. In case of a packet overlap between t1 and t2, the signal from t1 can be captured by r if and only if the distance between r and t1 is at most a certain fraction b of the distance between r and t2, i.e., |r - t1| <= b|r - t2|. In this case, we say that t1 is not contending with t2, or that the ordered transmitter pair (t1, t2) is not contending. Our goal is to find m receiver locations that minimize the average contention level. Formally, our receiver embedding problem is defined as follows:

Given: locations t1,t2,...,tn ∈ R2 of n transmitters and the number of receivers, m.

Find: m receiver locations r1, r2, ..., rm ∈ R2 such that the number of non-contending ordered transmitter pairs is maximized.
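
For concreteness, the sketch below evaluates this objective for a candidate receiver placement by brute force. The capture ratio b and the example coordinates are hypothetical, and this checker is not the 2-approximation placement algorithm itself; a placement algorithm could, however, use such a counter as its objective function when comparing candidate receiver locations.

    import itertools
    import numpy as np

    # Brute-force evaluation of the receiver-embedding objective defined above:
    # the ordered pair (t1, t2) is non-contending if some receiver r satisfies
    # |r - t1| <= b * |r - t2|.  The capture ratio b and the coordinates below
    # are hypothetical.

    B = 0.5  # assumed capture ratio

    def non_contending_pairs(transmitters, receivers):
        """Count ordered transmitter pairs (i, j), i != j, captured by some receiver."""
        count = 0
        for i, j in itertools.permutations(range(len(transmitters)), 2):
            ti, tj = transmitters[i], transmitters[j]
            for r in receivers:
                if np.linalg.norm(r - ti) <= B * np.linalg.norm(r - tj):
                    count += 1
                    break
        return count

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        tx = rng.uniform(0, 10, size=(20, 2))   # 20 transmitters in a 10 m square
        rx = rng.uniform(0, 10, size=(3, 2))    # 3 candidate receiver locations
        print("non-contending ordered pairs:", non_contending_pairs(tx, rx), "/", 20 * 19)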

In the first year, we proposed a 2-approximation algorithm for the receiver embedding problem, which is guaranteed to return a solution whose value is at least half of the optimum. The pictures below show the receiver placement strategy derived from an expert's intuition and the one produced by our proposed algorithm; the latter performs better than the former. In addition, through experiments, we have shown that when 500 transmitters each transmit one packet per second, the proposed 2-approximation receiver placement algorithm achieves a throughput higher than 90%.


Progress Made in the Second Year: Optimal Relay Placement

In the first year of the project (9/2014-9/2015), we focused our investigation on optimizing the receiver placement under the assumption that each receiver can hear all the TO transmitters, i.e., the sensors are densely deployed within a relatively small area. We also assumed that the receivers can be placed at any position within the deployment area, so that their placement can be optimized to maximize throughput. In the second year of the project (9/2015-9/2016), we broadened the scope of the target system and considered more realistic settings characterized by the following two assumptions: (1) transmitters cannot be heard by all the receivers due to larger deployment areas, and (2) receivers can only be deployed at a limited set of locations due to infrastructure availability. We believe these assumptions represent a more realistic setting for rapidly growing IoT systems. In order to guarantee connectivity and high throughput in this setting, we need to introduce a third type of network device, relay nodes (also called forwarders), to bridge the connection between transmitters and receivers. As a result, in this stage of the investigation, in addition to receiver placement, it is also important to consider the placement of relay nodes.

In the second problem we consider, we are given n transmitters and m receivers, each at fixed locations, and mq forwarders (q per receiver, as explained below). As in the first problem, we let ti and ri denote the i-th transmitter and receiver, respectively. We embed the mq forwarders under the following assumptions:

(1) Both q and m are bounded by a constant, while n can be arbitrarily large.

(2) The receivers r1, r2, ..., rm partition the space into m Voronoi cells, V1, V2, ..., Vm (assuming receiver ri is in Vi). We try to place q forwarders in each Vi.

(3) There are m+1 independent communication channels. The i-th channel is used for communications from forwarders in Vi to ri, while the (m+1)-th channel is used for communications from transmitters to forwarders. A transmitter thus sends a packet to a forwarder in Vi on the (m+1)-th channel, and the forwarder relays the packet to ri on the i-th channel.

(4) Suppose that a forwarder f and two transmitters ti and ti' satisfy |ti - f| > b|ti' - f| and |ti' - f| > b|ti - f|. Then there is no capture effect, and f forwards no packet to any receiver in case of a collision between ti and ti'.

(5) A circular transmission range centered at ti, denoted by Ci, is given as part of the problem input. We assume that receiver ri can hear a forwarder f located anywhere in Vi; if the forwarder-to-receiver link is itself limited to a circular transmission range C, we shrink Vi so that Vi fits inside C.

Given the above assumptions, an ordered transmitter pair (ti, ti') is said to be captured if there exist a forwarder f and a receiver rj that satisfy the following two conditions: (1) |ti - f| <= b|ti' - f|, and (2) |f - rj| <= b|f' - rj| for every other forwarder f' such that (a) f' is in Ci - Vj, or (b) f' is in Ci' - Vj and |ti' - f'| <= b|ti - f'|, or (c) f' is in R2 - Ci.

In other words, the (ti, ti') pair is captured if ti can deliver a packet to a receiver even when a collision occurs on either of the two necessary communication hops.

The forwarder placement problem is to find the locations of the mq forwarders, for the given n transmitters and m receivers, such that the number of captured ordered transmitter pairs is maximized. Towards this end, we have formulated it as an optimization problem and derived an optimal algorithm as well as heuristic algorithms.
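
To make the objective concrete, the sketch below evaluates a candidate forwarder placement. It uses a hypothetical capture ratio b, assigns forwarders to receivers by the nearest-receiver (Voronoi) rule, ignores the transmission ranges Ci, and treats every other forwarder as a potential second-hop interferer, so it is a simplified illustrative checker rather than our optimal or heuristic placement algorithms.

    import itertools
    import numpy as np

    # Simplified checker for the forwarder-placement objective described above.
    # Simplifications: forwarders are assigned to their nearest receiver, the
    # ranges Ci are ignored, and all other forwarders are treated as second-hop
    # interferers.  The capture ratio b and the layout below are hypothetical.

    B = 0.5

    def dist(a, b):
        return float(np.linalg.norm(a - b))

    def captured_pairs(tx, rx, fw):
        """Count ordered transmitter pairs (i, j) whose packet from ti survives
        a collision with tj on both hops (transmitter->forwarder->receiver)."""
        owner = [int(np.argmin([dist(f, r) for r in rx])) for f in fw]  # Voronoi cell of each forwarder
        count = 0
        for i, j in itertools.permutations(range(len(tx)), 2):
            for fi, f in enumerate(fw):
                # Hop 1: ti captured at forwarder f despite tj.
                if dist(tx[i], f) > B * dist(tx[j], f):
                    continue
                # Hop 2: f captured at its receiver despite the other forwarders.
                r = rx[owner[fi]]
                if all(dist(f, r) <= B * dist(fw[k], r) for k in range(len(fw)) if k != fi):
                    count += 1
                    break
        return count

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        tx = rng.uniform(0, 100, size=(30, 2))        # n = 30 transmitters
        rx = np.array([[25.0, 50.0], [75.0, 50.0]])   # m = 2 fixed receivers
        fw = rng.uniform(0, 100, size=(6, 2))         # mq = 6 candidate forwarders
        print("captured ordered pairs:", captured_pairs(tx, rx, fw))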

Currently, we are building a large-scale event-driven simulator to compare the performance of the heuristic algorithms against the optimal algorithm and to quantify their execution overhead. Given the complexity of the simulator, we expect to complete this task within two months.



Progress Made in the Third Year: Validation and Prototyping

In the third year of the project, we mainly focus on (1) small-scale experimental validation, and (2) TO network deployment.

Experimental Validation. The radio devices used in our experiments contain a Chipcon CC1100 radio transceiver and an 8-bit Silicon Laboratories C8051F321 microcontroller, and are powered by a 20 mm diameter lithium coin cell battery (CR2032). The receivers have attached USB hardware for loss-free data collection but are otherwise identical to the transmitters. The radio link operates at 902.1 MHz. Transmitters use MSK modulation, a 250 kbps data rate, and a programmed output power of 0 dBm. Each packet contains 32 bits of preamble, 32 bits of sync word, and 16 bits of whitened data.

In our system, each transmitter periodically sends a 10-byte packet (8 bytes of preamble and sync word plus 2 bytes of payload) once every 0.1 seconds. The receivers forward received packets over a USB connection to the host PC for analysis. At 250 kbps, the 10-byte (80-bit) packets used in our system have an over-the-air duration of 320 microseconds.
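
This airtime, together with the beaconing rate, determines the offered channel load. The sketch below works the numbers for an assumed population of 80 transmitters (matching the topology described next); it is a back-of-the-envelope illustration, not a capacity analysis.

    # Back-of-the-envelope airtime and channel-load calculation for the TO
    # beaconing setup described above.  The 80-transmitter figure matches the
    # testbed topology below; everything else follows from the radio settings.

    PACKET_BITS = 80          # 32-bit preamble + 32-bit sync word + 16 data bits
    DATA_RATE_BPS = 250_000   # 250 kbps MSK link
    BEACON_PERIOD_S = 0.1     # one packet every 0.1 s per transmitter

    airtime_s = PACKET_BITS / DATA_RATE_BPS        # 0.00032 s = 320 us per packet
    per_node_duty = airtime_s / BEACON_PERIOD_S    # 0.32% of the channel per node

    def channel_load(num_transmitters):
        """Approximate fraction of time the channel carries at least one packet,
        assuming independent, unslotted (ALOHA-like) transmissions."""
        return 1.0 - (1.0 - per_node_duty) ** num_transmitters

    if __name__ == "__main__":
        print(f"airtime per packet: {airtime_s * 1e6:.0f} us")
        print(f"per-node duty cycle: {per_node_duty:.4f}")
        print(f"approx. channel load with 80 transmitters: {channel_load(80):.3f}")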

We test a dense, short-range topology in a 7-meter-square area (shown below). Eighty transmitters are placed following a uniformly random distribution, and three receivers are placed using the F-EMBED algorithm.

We measured an average packet reception rate of 99.1%. We also found that our approach of over-provisioning the beacon rate worked well in practice, as it allowed us to quickly build several applications on top of the network. Additionally, the presence of multiple receivers provided redundancy even when one receiver experienced poor performance.

Use Case I: Asset Tracking. Asset tracking is an important application domain for wireless sensor networks. However, continuous tracking of a large number of items at the individual item level over a significant period of time is still not feasible. There are two main obstacles. The first is the need for efficient, low-power communication protocols. Many current protocols employ energy-expensive methods to achieve reliable communication for arbitrary traffic situations. Such protocols are not suitable for continuous asset tracking applications. The second challenge is the lack of a robust presence detection algorithm that can differentiate packet losses caused by a missing item from packet losses caused by the ambient radio environment. Here, our results show that TO can be used to create a heartbeat protocol that supports two robust detection algorithms yielding low false alarm rates while achieving timely loss notification.

Use Case II: Monitoring and Notification. The TO architecture is inherently unidirectional, from the transmitters to a receiver. This is a natural fit for sensing and monitoring applications, where data is collected from many sensors before being disseminated to a user. There are some situations, such as Smart Grid applications or vehicular sensing, where sub-second reporting latency and very high reliability are required. However, the majority of existing and anticipated applications for wireless sensors do not require this kind of low latency. Using TO in smart homes, habitat monitoring, or data center monitoring is perfectly reasonable, because trading off a few seconds of latency in reported data (caused by missed packets) is well worth the ease of maintenance it affords the users of these systems.

There is a movement that promotes running higher-level networking stacks, such as IPv6, on wireless sensors, allowing users to directly query each individual sensor. We believe, though, that this approach is poorly matched to the idea of a sensor network. Users want information in one place, so the information from the individually addressable sensors would need to be aggregated in any case. If a user wanted alerts when certain conditions occurred at the sensors, the sensors would need to communicate with one another to determine whether a given state had been reached, and would then need to use a higher-level communication channel, such as SMS or e-mail, to actually contact the user. This would most likely be handled by the same system that aggregates the data, so why not just have an aggregation system and simplify the sensors? This is the rationale behind TO.

An example of a system using this approach is the Owl Platform, which we use to monitor and track various IoT devices and locations in Winlab. A screenshot of the current status of the sensors in the system appears below.

The sensors used in the system run TO, but the details of the sensors are hidden from (and unimportant to) users of the system. Historical data is easily stored in the system's back-end, shown below. Packet loss in the system may introduce a latency, and hence an inaccuracy, of less than a second per missed beacon.



Progress Made in the Fourth Year: Achieving PHY-Layer Transmit Only Using Batteryless Nodes

In the fourth year of the project (under a no-cost extension), we investigated a way of implementing TO networks using battery-less IoT nodes through precise wireless energy delivery techniques.

Specifically, we designed a new wireless power transfer system that can focus the energy around the target and minimize the energy density in other areas. Towards this goal, we arrange our transmitters in a fully distributed fashion, placing them around the target receiver, as shown below.

A salient property of this arrangement is that, when the transmitters' phases are aligned at the receiver, the energy level at the target receiver is higher than at any other spot in the charging area. In effect, a small energy ball is formed around the receiver, hence the system name Energy-Ball. The figure above shows the energy density distribution obtained from simulation. In designing Energy-Ball, we drew inspiration from surround sound systems, in which multiple speakers are arranged around the audience for a better audio experience.
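
The concentration of energy follows directly from constructive interference. The sketch below (with hypothetical geometry, carrier frequency, and unit transmit amplitudes; it is not our simulator) sums the contributions of transmitters placed on a circle whose phases are pre-compensated so that they add coherently at the target.

    import numpy as np

    # Toy illustration of why phase alignment concentrates energy at the target.
    # Geometry, frequency, and amplitudes are hypothetical; this is not the
    # Energy-Ball simulator, just a superposition-of-sinusoids sketch.

    FREQ_HZ = 915e6                      # assumed carrier in the 900 MHz ISM band
    WAVELEN = 3e8 / FREQ_HZ              # ~0.33 m
    K = 2 * np.pi / WAVELEN              # wavenumber

    # Transmitters on a 3 m radius circle, target receiver slightly off-center.
    num_tx = 16
    angles = np.linspace(0, 2 * np.pi, num_tx, endpoint=False)
    tx_pos = np.stack([3.0 * np.cos(angles), 3.0 * np.sin(angles)], axis=1)
    target = np.array([0.5, 0.2])

    # Pre-compensate each transmitter's phase so all contributions arrive in
    # phase at the target (compensation = +K * distance-to-target).
    comp_phase = K * np.linalg.norm(tx_pos - target, axis=1)

    def field_magnitude(point):
        """|sum_i (1/d_i) * exp(j(comp_phase_i - K*d_i))| at a given point."""
        d = np.linalg.norm(tx_pos - point, axis=1)
        return float(np.abs(np.sum(np.exp(1j * (comp_phase - K * d)) / d)))

    print("at the target :", field_magnitude(target))
    print("0.5 m away    :", field_magnitude(target + np.array([0.5, 0.0])))
    print("room center   :", field_magnitude(np.array([0.0, 0.0])))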

Energy-Ball has two main components: first, arranging the transmitters around the target receiver, and second, aligning their phases at the receiver.

There are various approaches to aligning the transmitter phases. In our implementation, we use a simple heuristic. We partition time into rounds of equal duration; within each round, every transmitter transmits energy to the receiver using several randomly chosen phases, and at the end of the round the receiver sends a feedback beacon indicating whether any of the phase combinations delivered more energy than in the previous round. After receiving the feedback, the transmitters keep the phase combination that produced the highest energy level at the receiver and then perform the next round of random phase adjustments around that combination. Repeating this process round by round, the receiver guides the transmitters towards the phase combination that maximizes the energy delivered to it. This algorithm does not need complex channel state estimation, and it naturally accounts for multipath in the environment. Although it is a heuristic, it consistently converged quickly in the experiments we have conducted on our testbed, largely because our transmitters emit sine waves, which have rather smooth slopes around the peak region.
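
A minimal sketch of this round-based search is shown below. The perturbation size, the number of trials per round, and the synthetic received-power model are illustrative assumptions; on the testbed, the feedback is a measured received-power value rather than a model.

    import numpy as np

    # Sketch of the round-based random phase search described above.  The
    # received-power model and the perturbation size are assumptions made for
    # illustration; the real system measures power at the receiver.

    rng = np.random.default_rng(0)
    NUM_TX = 8
    TRIALS_PER_ROUND = 10
    PERTURB_RAD = 0.3        # how far each round explores around the best phases

    # Unknown per-link phase offsets (the channel); the algorithm never reads
    # these directly, it only observes the resulting received power.
    true_offsets = rng.uniform(0, 2 * np.pi, NUM_TX)

    def received_power(phases):
        """Coherent sum at the receiver for a given transmit-phase vector."""
        return float(np.abs(np.sum(np.exp(1j * (phases + true_offsets)))) ** 2)

    best_phases = rng.uniform(0, 2 * np.pi, NUM_TX)
    best_power = received_power(best_phases)

    for rnd in range(50):
        # Each round: try several random perturbations around the current best
        # combination, then keep whichever one the receiver reports as strongest.
        candidates = best_phases + rng.uniform(-PERTURB_RAD, PERTURB_RAD,
                                               (TRIALS_PER_ROUND, NUM_TX))
        powers = [received_power(c) for c in candidates]
        if max(powers) > best_power:
            best_power = max(powers)
            best_phases = candidates[int(np.argmax(powers))]

    print(f"achieved power {best_power:.1f} of a possible {NUM_TX ** 2} (fully aligned)")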

We developed an actual Energy-Ball testbed consisting of 17 USRP N210 and 4 USRP B210 nodes. In the testbed, we use the delivered energy to power our TO sensor. After the energy is precisely focused on the target TO node, that node can communicate (by sending packets to the receiver) while the other nodes remain silent. As a result, we can realize PHY-layer transmit-only by powering one node at a time.

We placed TO sensors at 13 randomly chosen locations in the charging area. At all 13 locations, Energy-Ball delivers over 0.6 mW of power, which is enough to sense and transmit data continuously. The measured minimum, average, and maximum received power across the room are 0.61 mW, 0.67 mW, and 0.79 mW, respectively. We observed no dropped packets during the entire experiment.

With the transmitters' phases locked, we then moved the harvester and TO sensor to other locations. We observe that when the TO sensor moves away from the focal point, it does not receive enough power to function continuously. Once the sensor moves away by more than a wavelength (about 30 cm in this case), it does not receive enough power to either sense or communicate.

To summarize, our Energy-Ball testbed can precisely charge low-power sensors for continuous operation. In this particular setup, we can selectively power specific sensor nodes with a 30 cm granularity, which provides a good foundation for PHY-layer Transmit-Only.



Progress Made in the Fifth Year: Jointly Optimizing Communication Energy and Sensing Energy

This year, we explore how to use machine learning techniques to optimize the sensing energy in a TO network. Specifically, we focus on an automatic ammonia monitoring system based on a metal oxide sensor. Thanks to our low-power design, the ammonia monitoring system is compact and can be easily put into a regular rodent cage. The system automatically measures the ammonia concentration inside the cage without any additional human effort. Below we highlight the salient features of the proposed system.

Low Power Ammonia Measurement: The metal oxide sensor must be held at a high temperature for a few minutes in order to trigger and sustain the reduction reaction, which consumes a significant amount of power. In fact, heating alone accounts for more than 99% of the total energy of a 100-second heating period on average. This is precisely why current metal oxide ammonia measurement tools run on large batteries and are usually hand-held devices.

In this work, we address this challenge by significantly reducing the amount of energy required for each measurement. One of our main contributions is a prediction model that greatly shortens the time required per measurement. Our approach takes the transient ADC samples collected in the first 0.2 s and accurately predicts the ADC reading that would otherwise be obtained only after a few minutes. We thus refer to the proposed approach as Transient-Predict. Because heating dominates the energy budget, shortening the heated interval from minutes to a fraction of a second cuts the per-measurement energy by orders of magnitude; Transient-Predict therefore consumes much less power and requires a much smaller battery, so it can be made compact enough to fit into a standard cage and provide continuous wireless monitoring for years.

Accurate Prediction of the Equilibrium Resistance: Our approach needs to predict the ADC value in the equilibrium state (which usually takes a few minutes to reach) from the first few transient ADC samples collected in less than a second. This challenge is made even harder by the fact that each metal oxide sensor has drastically different characteristics. In fact, metal oxide sensors rely on quantum tunneling, and the growth of the metal oxide on the sensing layer is hard to control. As such, the sensitivity of the sensors varies, sometimes by a factor of 10.

Further, the process of reaching the chemical equilibrium is impacted by several factors: the initial state of the sensing layer (such as the percentage of metal in the form of metal oxide and the amount of ammonia stuck to the surface), ammonia concentration in the air, oxygen concentration, humidity level, heating temperature, ambient temperature, etc. Considering these factors one by one in a prediction model can be an onerous task as each factor is non-linear with respect to the ammonia concentration level. In this work, we solve this challenge by developing suitable LSTM neural networks to learn the relationship between transient ADC values and the final equilibrium state value.
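
A minimal sketch of such a model is shown below (in PyTorch). The layer sizes, the assumed 0.2 s / 1 kHz sampling, and the training loop are illustrative placeholders, not the architecture or hyperparameters of the deployed system.

    import torch
    import torch.nn as nn

    # Sketch of an LSTM regressor that maps a short transient ADC sequence to
    # the predicted equilibrium ADC value.  Layer sizes, sampling assumptions,
    # and the training loop are illustrative, not the deployed model.

    class TransientPredict(nn.Module):
        def __init__(self, hidden_size=32, num_layers=2):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                                num_layers=num_layers, batch_first=True)
            self.head = nn.Linear(hidden_size, 1)

        def forward(self, x):
            # x: (batch, seq_len, 1) transient ADC samples from the first ~0.2 s
            _, (h_n, _) = self.lstm(x)
            return self.head(h_n[-1])    # (batch, 1) predicted equilibrium ADC

    if __name__ == "__main__":
        model = TransientPredict()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        # Placeholder data: 64 traces of 200 samples (0.2 s at an assumed 1 kHz).
        # Real training pairs a measured transient with the ADC value observed
        # once the sensor reaches chemical equilibrium.
        x = torch.randn(64, 200, 1)
        y = torch.randn(64, 1)

        for epoch in range(10):
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()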

Below are our prototype system and our results.
