
Towards self‐aware smart camera collectives

Introducing self‐awareness allows smart cameras to apply what they have learnt in order to improve their performance over time, enabling quicker deployment for security applications.

Cameras have been used for crime prevention, monitoring and surveillance tasks for many years. Traditionally, images are captured by static cameras and can then be analysed by either a central controller or a human. Smart cameras, on the other hand, combine an image sensor, a processing unit and a communication interface into a single embedded device, thereby enabling the on‐board processing of gathered information. Compared to traditional camera networks, this approach mitigates resource bottlenecks and increases the flexibility, scalability and robustness of the collective.[1]

Setting up a collective of smart cameras to perform a certain task typically requires an operator to introduce a priori knowledge to the cameras (e.g., information about the scene, the network layout or the environment). Our work towards self‐aware collectives of smart cameras, however, could enable quicker deployment without requiring such a priori knowledge, and thus demands less effort from the operator.


Figure 1. A simple smart-camera collective. Red lines indicate the neighbourhood of a camera (C2), comprising two neighbours (C1 and C3). Dashed arrows indicate the trajectory of corresponding objects O1 and O2. Green dots illustrate previous exchanges of tracking responsibilities.

We consider an initial level of self‐awareness to have been achieved by the collective when it meets two conditions: first, the collective can perform a specific task autonomously without explicit human interaction; and second, the collective is able to improve its own performance based on decisions made during runtime.[2] To meet these requirements with such resource‐constrained devices, each individual camera must decide what information to share, with whom to share it and when to share it. To demonstrate such self‐aware decision making, we use a distributed object-tracking application as an example.

In the multi‐camera tracking of objects, individual cameras must coordinate with one another. To achieve this, we model the entire collective as a virtual market in which the cameras can generate virtual currency by tracking an object. They are then able to use this currency to buy information about objects from other cameras. To conserve resources, the cameras use a simple auction mechanism to trade object-tracking responsibilities. Whenever a camera initiates an auction, a description of the object to be sold is transmitted to other cameras. In return, the receiving cameras search for the object within their own fields of view and bid according to how well they can see it.
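The Python sketch below illustrates one plausible form of this mechanism as a single-round sealed-bid auction in which each invited camera bids its detection confidence for the advertised object. The names (Camera, run_auction) and the bidding rule are simplified illustrative assumptions, not our actual implementation.

```python
# Illustrative sketch of a market-based tracking handover
# (hypothetical names and bidding rule; simplified for exposition).
from dataclasses import dataclass, field

@dataclass
class Camera:
    name: str
    balance: float = 0.0  # virtual currency earned by tracking objects
    # object id -> detection confidence in [0, 1] within this camera's view
    visibility: dict = field(default_factory=dict)

    def bid(self, obj_id: str) -> float:
        """Value the advertised object by how confidently this camera sees it."""
        return self.visibility.get(obj_id, 0.0)

def run_auction(seller: Camera, invitees: list, obj_id: str):
    """Single-round sealed-bid auction: the best-seeing invitee buys the
    tracking responsibility and pays the seller in virtual currency."""
    bids = [(cam.bid(obj_id), cam) for cam in invitees]
    if not bids:
        return None
    price, winner = max(bids, key=lambda b: b[0])
    if price <= 0.0:
        return None  # no invitee sees the object; the seller keeps tracking it
    winner.balance -= price
    seller.balance += price
    return winner
```

In this sketch, a camera that cannot see the object simply abstains, so a failed auction leaves the tracking responsibility with the seller, which can retry later or with different invitees.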

Because only neighbouring cameras can trade with each other, a camera learns about its neighbourhood by keeping track of its previous trading behaviour. Inspired by the foraging behaviour of ants, each camera builds up a local graph by depositing artificial pheromones for every successful exchange of a tracking responsibility. The amount of pheromone on a link represents the probability of successfully trading with the corresponding camera. By exploiting this information, each individual camera is able to focus its marketing efforts, making the collective as a whole more efficient over time with respect to its communication and processing overheads. Each camera also remembers the locations at which tracking responsibilities were previously exchanged (the green dots in Figure 1), and we can develop different approaches that use these exchange points to trigger handovers.[3],[4] Based on the strength of a link and the distance between the tracked object and the corresponding exchange point, a camera can calculate the probability of initiating an auction with that neighbour. The combination of artificial pheromones and exchange points thus enables each individual camera not only to optimise its communication when initiating an auction but also to learn the near‐optimal time at which to send auction invitations. A simple smart-camera collective is shown in Figure 1.
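The sketch below illustrates such a pheromone-maintained neighbourhood graph, assuming simple deposit-and-evaporate dynamics; the deposit amount and evaporation rate are illustrative parameters rather than tuned values.

```python
# Illustrative pheromone-based neighbourhood graph
# (hypothetical parameter values; deposit/evaporation dynamics assumed).
DEPOSIT = 1.0  # pheromone added per successful responsibility exchange
RHO = 0.1      # fraction of pheromone that evaporates each time step

class NeighbourhoodGraph:
    def __init__(self):
        self.pheromone = {}  # neighbour name -> pheromone level

    def record_trade(self, neighbour: str) -> None:
        """Strengthen the link after a successful exchange."""
        self.pheromone[neighbour] = self.pheromone.get(neighbour, 0.0) + DEPOSIT

    def evaporate(self) -> None:
        """Decay all links so stale trading partners are gradually forgotten."""
        for n in self.pheromone:
            self.pheromone[n] *= 1.0 - RHO

    def invitation_probabilities(self) -> dict:
        """Normalise link strengths into probabilities of inviting each
        neighbour to the next auction."""
        total = sum(self.pheromone.values())
        return {n: p / total for n, p in self.pheromone.items()} if total else {}
```

Evaporation lets links to neighbours that no longer receive handovers fade away, so the learnt topology adapts when cameras are added, moved or removed.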

We developed an additional layer of reasoning to enable the cameras to decide during runtime how to exploit this knowledge. A simple machine-learning approach enables individual cameras to explore different behavioural strategies and to exploit those that have proven efficient in the past.[5] We have also recently investigated the effects of different zoom factors on collective performance.[6] In future work, we intend to integrate this new factor with our concept of self-awareness.
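The explore-versus-exploit trade-off behind this reasoning layer can be sketched as a simple bandit-style selector; the epsilon-greedy rule, strategy names and reward signal below are illustrative assumptions rather than the exact scheme used in our experiments.

```python
# Illustrative epsilon-greedy selection over behavioural strategies
# (hypothetical strategy names and reward signal).
import random

EPSILON = 0.1  # fraction of decisions spent exploring alternatives

class StrategySelector:
    def __init__(self, strategies):
        self.value = {s: 0.0 for s in strategies}  # running mean utility
        self.count = {s: 0 for s in strategies}

    def choose(self) -> str:
        """Mostly exploit the best-known strategy, occasionally explore."""
        if random.random() < EPSILON:
            return random.choice(list(self.value))
        return max(self.value, key=self.value.get)

    def update(self, strategy: str, reward: float) -> None:
        """Fold the observed reward into the strategy's running mean."""
        self.count[strategy] += 1
        self.value[strategy] += (reward - self.value[strategy]) / self.count[strategy]

# Example: reward could combine tracking utility and communication cost.
selector = StrategySelector(["broadcast", "smooth", "step"])
chosen = selector.choose()
selector.update(chosen, reward=0.7)  # e.g., utility minus message cost
```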

Self‐awareness can serve as a fundamental concept for developing camera networks that are capable of learning and maintaining their topology, distributing the tasks involved in object tracking and autonomously selecting their zoom level. We are confident that the application of self‐awareness is not limited to camera networks and could also serve as an enabling technology for future systems and networks, meeting a multitude of requirements with respect to functionality, flexibility, performance, resource usage, costs, reliability and safety.

Authors

Lukas Esterle and Bernhard Rinner
Alpen-Adria-Universität Klagenfurt, Austria


References

  1. B. Rinner and W. Wolf, An introduction to distributed smart cameras, Proc. IEEE 96, pp. 1565–1575, 2008.
  2. P. R. Lewis, M. Platzner, B. Rinner, J. Torresen and X. Yao (eds.), Self-aware Computing Systems: An Engineering Approach, 2015.
  3. L. Esterle, P. R. Lewis, X. Yao and B. Rinner, Socio-economic vision graph generation and handover in distributed smart camera networks, ACM Trans. Sens. Netw. 10, pp. 20:1–20:24, 2014.
  4. B. Rinner, L. Esterle, J. Simonjan, G. Nebehay, R. Pflugfelder, G. F. Dominguez and P. R. Lewis, Self-aware and self-expressive camera networks, IEEE Comp. 48, pp. 21–28, 2015.
  5. P. R. Lewis, L. Esterle, A. Chandra, B. Rinner, J. Torresen and X. Yao, Static, dynamic, and adaptive heterogeneity in distributed smart camera networks, ACM Trans. Auton. Adap. Systems 10, pp. 8:1–8:30, 2015.
  6. L. Esterle, B. Rinner and P. R. Lewis, Self-organising zooms for decentralised redundancy management in visual sensor networks, Proc. IEEE Int'l Conf. Self-Adaptive and Self-Organizing Systems (SASO), 2015.
