
Controlling negative emergent behaviour with norms

The use of norms enables a higher-level observer to guide self-organisation in open distributed systems with selfish autonomous elements, thereby reducing the impact of negative emergent behaviour and optimizing system performance.

Open distributed systems can host numerous distributable workloads used in a variety of applications (e.g., the distributed rendering of films). Such systems are considered open because they lack a central controlling entity: all communication is performed peer-to-peer, agents are free to join, and benevolent behaviour cannot be assumed. Nodes participate voluntarily by submitting work, thereby gaining an advantage from the system. A successful system, however, relies on reciprocity, meaning that agents must also compute work units for other submitters.

We have introduced a trust metric to overcome the problems inherent to an open system in which no particular behaviour can be assumed. Agents receive ratings for all of their actions (i.e., accepting or rejecting a job) from their interaction partners, allowing others to estimate the future behaviour of an agent based on its previous actions. Using this trust metric, a series of ratings for a particular agent can be accumulated and used to calculate a single reputation value. Agents are then able to make decisions based on trust values in our trusted desktop grid (TDG).[1] An agent will prefer to cooperate with more trustworthy agents because the chance of being exploited is reduced and the probability of the other agent cooperating when asked is increased.
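As a minimal sketch, accumulating ratings into a single reputation value might look as follows. The `Agent` class, the [0, 1] rating scale, the sliding window, and the trust threshold are illustrative assumptions, not the TDG's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Hypothetical agent that accumulates trust ratings from partners."""
    name: str
    ratings: list = field(default_factory=list)  # ratings in [0, 1]

    def rate(self, value: float) -> None:
        """Record a rating (e.g. 1.0 for accepting a job, 0.0 for rejecting)."""
        self.ratings.append(value)

    def reputation(self, recent: int = 10) -> float:
        """Aggregate the most recent ratings into one reputation value."""
        window = self.ratings[-recent:]
        return sum(window) / len(window) if window else 0.5  # neutral default

def prefers(partner: Agent, threshold: float = 0.6) -> bool:
    """An agent prefers partners whose reputation clears a trust threshold."""
    return partner.reputation() >= threshold
```

A newcomer with no ratings starts at a neutral reputation, so the threshold choice determines how cautiously established agents treat unknown partners.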

Figure 1: An open distributed system—the trusted desktop grid, TDG—is monitored and controlled by a higher-level norm manager. A system under observation and control (SuOC) consists of multiple agents that interact and perform actions. The observer on the top left monitors the interactions in the SuOC, creating a situation description. The controller uses this description and changes norms, which are passed to the SuOC.

Emergent behaviour that arises as a consequence of self-organized interactions, which are themselves based on trust among distributed agents, can result in both positive and negative effects. Establishing implicit trusted communities via increased cooperation with other well-trusted agents enables malicious agents to be isolated to a certain degree, thereby leading to what could be considered positive emergent behaviour. In contrast, negative emergent behaviour (NEB) typically impacts the overall system performance and must therefore be countered.

Such a situation occurs, for example, when a potentially large group of malicious agents joins the system simultaneously. These agents load the system with additional work packages while rejecting the work of others, leading to a trust breakdown: benevolent agents, in turn, reject work packages issued by the attackers. As a result, numerous bad ratings accumulate for benevolent and uncooperative agents alike, drastically reducing trust levels and producing a system state in which agents no longer trust each other (i.e., NEB).

To maintain a good utility (i.e., a high speedup) for well-behaving agents in our TDG, we have implemented a variety of countermeasures and security mechanisms. Most of these measures, however, come with attached costs. Although they provide no benefit under normal conditions, they are essential under attack and can lead to significantly faster recovery times. There is no globally optimal value for most scenarios; the ideal value or setting generally depends on the current situation.

To obtain the best overall performance, these parameters and settings must therefore adapt to the current situation during runtime. It is not possible, however, to detect global system states (such as trust breakdown or overload situations) from the local viewpoint of an agent. Additionally, it is not possible to influence agents directly due to their autonomy. To overcome these issues, we have introduced a higher-level instance that is able to detect the current system state and consequently guide the agents’ behaviour using indirect influences. Our concept for the norm manager (NM), which uses the common observer-controller pattern, is presented in Figure 1.[2]
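The observer/controller loop described above can be sketched in a few lines. The situation description, the trust-breakdown threshold, and the example norm key are assumptions chosen for illustration, not the NM's real interface:

```python
def observe(agents):
    """Observer: condense monitored interactions into a situation description."""
    ratings = [r for a in agents for r in a["ratings"]]
    avg_trust = sum(ratings) / len(ratings) if ratings else 0.5
    return {"avg_trust": avg_trust}

def control(situation, norms):
    """Controller: adapt the norm set when a trust breakdown is detected."""
    # Hypothetical norm: oblige agents to accept jobs from low-trust
    # submitters, so the system can rebuild trust after a breakdown.
    norms["must_accept_low_trust_jobs"] = situation["avg_trust"] < 0.3
    return norms

# One iteration of the system-wide control loop over a (toy) SuOC.
agents = [{"ratings": [0.1, 0.2]}, {"ratings": [0.0, 0.3]}]
norms = control(observe(agents), {})
```

Because the loop only reads aggregate observations and writes norms, the agents' autonomy is preserved: the NM influences them indirectly, never commanding individual actions.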

To detect the current system state, the controller monitors the work relations of all agents by creating a work graph in which agents are nodes. In this graph, edges connect agents that have cooperated during the monitored period. The intensity of cooperation between two agents determines the weight of the edge connecting them. The controller then applies graph metrics, enabling groups or clusters of similar agents to be identified. Afterwards, it runs statistics on every cluster found and compares them to historic or threshold values. These clusters are tracked over time to detect tendencies and predict future values.
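A simplified version of the work graph and its cluster detection could look like this. Connected-components grouping over a weight threshold stands in for the real graph metrics; the function names and the `min_weight` parameter are assumptions:

```python
from collections import defaultdict

def build_work_graph(cooperations):
    """Nodes are agents; edge weights count cooperations in the period.
    `cooperations` is an iterable of (agent_a, agent_b) pairs."""
    graph = defaultdict(lambda: defaultdict(int))
    for a, b in cooperations:
        graph[a][b] += 1
        graph[b][a] += 1
    return graph

def clusters(graph, min_weight=2):
    """Group agents linked by edges at or above min_weight — a simple
    connected-components stand-in for the controller's graph metrics."""
    seen, result = set(), []
    for start in graph:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(n for n, w in graph[node].items()
                         if w >= min_weight and n not in comp)
        seen |= comp
        result.append(comp)
    return result
```

Running statistics per cluster and comparing them against historic values would then be a matter of iterating over the returned components.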

The controller is responsible for guiding the overall system behaviour by applying norms. A norm contains a rule and a sanction or an incentive.[3] Agents remain autonomous and can violate norms, but risk being sanctioned. A sanction usually results in a bad rating and therefore a worse reputation for the agent, reducing its chances of success in the system. If the NM fails, the system itself remains operational and can continue to run.[4] Once the NM recovers, it can begin to optimize the system again.
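One way to model a norm as a rule paired with a sanction is sketched below. The `Norm` structure, the reputation penalty, and the example rule are illustrative assumptions rather than the paper's concrete representation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Norm:
    """A norm couples a rule with a sanction (a negative rating on violation)."""
    rule: Callable[[dict], bool]  # returns True if the action complies
    sanction: float               # reputation penalty applied on violation

def evaluate(norm: Norm, action: dict, reputation: float) -> float:
    """Agents stay autonomous: they may violate the rule, but doing so
    costs reputation, which lowers their chances of future cooperation."""
    if norm.rule(action):
        return reputation
    return max(0.0, reputation + norm.sanction)

# Hypothetical norm: agents should accept submitted work units.
accept_work = Norm(rule=lambda a: a.get("accepted", False), sanction=-0.2)
```

The key design point carried over from the article is that the sanction acts through the existing trust mechanism, so the NM never has to override an agent's decision directly.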

We have developed a system-wide control loop to guide self-organized behaviour in distributed systems using desktop-grid computing systems as an application scenario. Open systems that allow autonomous and heterogeneous participants to join freely tend to suffer due to uncooperative or even malicious behaviour. This can be countered by applying technical trust. In certain situations, NEB can disturb the appropriate functioning of the system (e.g., its efficiency and fairness). To overcome this issue, we intend to establish an observer/controller loop that issues norms as a response to the currently observed conditions.


This research is funded by the OC-Trust research unit (FOR 1085) of the German Research Foundation (DFG).


  1. L. Klejnowski, Trusted community: a novel multiagent organisation for open distributed systems, PhD Thesis, Leibniz Universität, Hannover, 2014.
  2. S. Tomforde, J. Hähner and C. Müller-Schloer, The multi-level observer/controller framework for learning and self-optimising systems, 2012.
  3. A. Urzică and C. Gratie, Policy-based instantiation of norms in MAS, SCI 446, pp. 287–296, 2013.
  4. H. Schmeck, C. Müller-Schloer, E. Çakar, M. Mnif and U. Richter, Adaptivity and self-organization in organic computing systems, ACM Trans. Auton. Adapt. Syst. 5, p. 10, 2010.
