Multi-Agent Systems

With current advances in technology, agent-based applications are becoming standard in a wide variety of domains such as e-commerce, logistics, supply chain management, telecommunications, healthcare, and manufacturing. Another reason for the widespread interest in multi-agent systems is that they serve as a technology and a tool for analyzing and developing new models and theories of large-scale distributed systems and of human-centered systems. The latter aspect is currently of great interest due to the need to democratize technology, allowing people without technical training to interact with devices in a simple and coherent way. In this Special Issue, several interesting approaches that advance this research discipline have been selected and presented.

In the original paper, Pref is called Goal; some authors call it Choice. It is meant to be a "chosen desire", which must in particular be consistent.

Properties
For Bel_i, all properties of KD45 operators hold.
For Pref_i, all properties of KD operators hold.
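Spelled out (a sketch using the standard modal axiom names; the lecture's numbering may differ), the KD45 principles for Bel_i are:

```latex
\begin{align*}
\textbf{K:}\;\; & \mathrm{Bel}_i(\varphi \to \psi) \to (\mathrm{Bel}_i\,\varphi \to \mathrm{Bel}_i\,\psi) \\
\textbf{D:}\;\; & \mathrm{Bel}_i\,\varphi \to \neg\mathrm{Bel}_i\,\neg\varphi
  && \text{(consistency)} \\
\textbf{4:}\;\; & \mathrm{Bel}_i\,\varphi \to \mathrm{Bel}_i\,\mathrm{Bel}_i\,\varphi
  && \text{(positive introspection)} \\
\textbf{5:}\;\; & \neg\mathrm{Bel}_i\,\varphi \to \mathrm{Bel}_i\,\neg\mathrm{Bel}_i\,\varphi
  && \text{(negative introspection)}
\end{align*}
```

For Pref_i only K and D hold: preferences must be consistent, but no introspection principles are assumed for them.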

Achievement Goal
Agent i has the achievement goal that ϕ iff i prefers that ϕ is eventually true and believes that ϕ is currently false:

AGoal_i ϕ ≡ Pref_i F ϕ ∧ Bel_i ¬ϕ

(Slide example: the Netflix-vs.-Lecture dilemma.)

Achievement Goal: Properties
Check that AGoal_i ¬ϕ ∧ AGoal_i ϕ is unsatisfiable: the achievement goal that ¬ϕ implies believing ϕ, and the achievement goal that ϕ implies believing ¬ϕ. Together these contradict axiom D (Bel_i ϕ → ¬Bel_i ¬ϕ).
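The unsatisfiability argument can be laid out step by step (a sketch based on the definition in words above):

```latex
\begin{align*}
\mathrm{AGoal}_i\,\varphi \;&\to\; \mathrm{Bel}_i\,\neg\varphi
  && \text{(belief conjunct of the definition)} \\
\mathrm{AGoal}_i\,\neg\varphi \;&\to\; \mathrm{Bel}_i\,\neg\neg\varphi
  \;\leftrightarrow\; \mathrm{Bel}_i\,\varphi
  && \text{(same conjunct, instantiated with $\neg\varphi$)} \\
\mathrm{Bel}_i\,\varphi \;&\to\; \neg\mathrm{Bel}_i\,\neg\varphi
  && \text{(axiom D)}
\end{align*}
```

Hence AGoal_i ϕ ∧ AGoal_i ¬ϕ yields Bel_i ¬ϕ ∧ ¬Bel_i ¬ϕ, a contradiction.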
Nebel, Lindner, Engesser - MAS

Examples
"Lisa has the goal to listen to the lecture and she has the goal to have dinner" vs. "Lisa has the goal to listen to the lecture and to have dinner."
"Paul asks Lisa whether she likes him." (Paul does not prefer either of the two possible answers.)

The Nell Problem
Say a problem solver is confronted with the classic situation of a heroine, called Nell, having been tied to the tracks while a train approaches. The problem solver, called Dudley, knows that "If Nell is going to be mashed, I must remove her from the tracks." When Dudley deduces that he must do something, he looks for, and eventually executes, a plan for doing it. This involves finding out where Nell is and making a navigation plan to get to her location. Assume that he knows where she is and that he is not too far away; then the fact that the plan will be carried out is added to Dudley's world model. Dudley must have some kind of database consistency maintainer to ensure that the plan is deleted once it is no longer necessary. Unfortunately, as soon as an apparently successful plan is added to the world model, the consistency maintainer notices that "Nell is going to be mashed" is no longer true. But that removes any justification for the plan, so it goes too. But then "Nell is going to be mashed" is no longer contradicted, so it comes back in. And so forth.
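The oscillation Dudley runs into can be simulated directly. The following Python sketch is illustrative only: the predicate names and the naive maintenance loop are assumptions for this example, not part of any real planner.

```python
# Hypothetical sketch of the Nell problem: a naive consistency maintainer
# that deletes a plan as soon as its justification disappears, and
# re-derives the threat as soon as the plan is gone.

def consistency_cycle(steps):
    """Run the naive maintenance loop, recording the world model after each step."""
    world = {"nell_will_be_mashed"}   # Dudley's initial belief
    history = []
    for _ in range(steps):
        if "nell_will_be_mashed" in world:
            # The threat justifies a rescue plan; once the plan is in the
            # world model, the threat is considered averted and removed.
            world.add("rescue_plan")
            world.discard("nell_will_be_mashed")
        else:
            # Without the threat, the plan loses its justification and is
            # deleted -- which brings the threat right back.
            world.discard("rescue_plan")
            world.add("nell_will_be_mashed")
        history.append(frozenset(world))
    return history

states = consistency_cycle(4)
# The world model oscillates forever between the two states:
print(states[0] == states[2] and states[1] == states[3])  # True: a 2-cycle
```

The point of the example is that a purely deductive world model with justification-based retraction never stabilizes; goals must be represented separately from beliefs about what will happen.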

Persistent Goal
Agent i has the persistent goal that ϕ iff i has the achievement goal that ϕ and keeps that goal until it is either fulfilled or believed to be out of reach.

Intention
Agent i has the intention that ϕ iff i has the persistent goal that ϕ and believes that (s)he can achieve ϕ by an action.
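Using an until-style temporal operator U, one way to spell out these two definitions (a sketch in the spirit of Cohen and Levesque; the exact notation in the lecture may differ, and the "achievable by an action" conjunct is only paraphrased here):

```latex
\begin{align*}
\mathrm{PGoal}_i\,\varphi \;\equiv\;& \mathrm{AGoal}_i\,\varphi \;\wedge\;
  \big(\mathrm{AGoal}_i\,\varphi \;\mathbf{U}\;
       (\mathrm{Bel}_i\,\varphi \,\vee\, \mathrm{Bel}_i\,\mathbf{G}\neg\varphi)\big) \\
\mathrm{Intend}_i\,\varphi \;\equiv\;& \mathrm{PGoal}_i\,\varphi \;\wedge\;
  \mathrm{Bel}_i\,(\text{some action of $i$ achieves } \varphi)
\end{align*}
```

Here Bel_i ϕ covers "fulfilled" and Bel_i G¬ϕ covers "believed to be out of reach".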
Intending is acting! Agent 1 cannot intend that some other agent 2 does something; however, agent 1 may intend to make agent 2 do something.

Proof
We provide a model satisfying Intend_i ϕ ∧ Bel_i G(ϕ → ψ) ∧ ¬Intend_i ψ: John intends to go to the dentist (ϕ). He believes that going to the dentist always implies pain (ψ). Yet he does not intend to be in pain: at the dentist, John gets some painkiller. Hence intentions are not closed under believed consequences.
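A sketch of how the three conjuncts can be satisfied at once (an illustrative reading; the lecture's model may be spelled out differently):

```latex
\begin{align*}
&\mathrm{Intend}_j\,\varphi: && \text{John persistently prefers eventually being at the dentist ($\varphi$)} \\
& && \text{and believes he can get there by an action.} \\
&\mathrm{Bel}_j\,\mathbf{G}(\varphi \to \psi): && \text{every belief-accessible world satisfies ``dentist implies pain ($\psi$)''.} \\
&\neg\mathrm{Intend}_j\,\psi: && \text{John does not prefer that $\psi$ ever holds, so already } \mathrm{AGoal}_j\,\psi \\
& && \text{fails; his preferred worlds are dentist-without-pain worlds.} \\
&\text{Actual world:} && \varphi \wedge \neg\psi \text{ (the painkiller works): KD45 beliefs need not be} \\
& && \text{veridical, so } \mathrm{Bel}_j\,\mathbf{G}(\varphi \to \psi) \text{ may be false at the actual world.}
\end{align*}
```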