germinationx_companion: revisions 2010-07-30 07:22 (created) and 2010-07-30 07:52, davegriffiths
=====Lirec AgentMind details=====
The AgentMind consists of multiple Java processes: a server providing the world and its contents, and a client for each companion. The player is represented as another agent. All an agent can perceive of the other agents is their actions and some external symbolic signals (such as an expression).
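The perception model described above can be sketched roughly as follows. This is a minimal illustration, assuming simple string identifiers; the `Percept` class and its fields are assumptions for this sketch, not the actual AgentMind API.

```java
import java.util.List;

// what one agent can perceive of another: only its observable action
// and an external symbolic signal (such as an expression)
class Percept {
    final String agentId;  // which agent was observed
    final String action;   // the observable action
    final String signal;   // external symbolic signal, e.g. an expression

    Percept(String agentId, String action, String signal) {
        this.agentId = agentId;
        this.action = action;
        this.signal = signal;
    }

    public static void main(String[] args) {
        // the player is represented as just another agent
        List<Percept> perceived = List.of(
            new Percept("player-1", "plant-seed", "neutral"),
            new Percept("companion-2", "water-plant", "happy"));
        for (Percept p : perceived)
            System.out.println(p.agentId + ": " + p.action + " / " + p.signal);
    }
}
```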
In terms of scalability,
=====Possible scenarios=====
+ | |||
+ | ====Single agent==== | ||
+ | |||
+ | The simplest scenario is a single agent dispensing information and helping the player. This agent would have to be located in a single place in the game, where players would have to travel or ask for access to. They would be able to gain access if the number of existing interactions were low enough. Access to the helper/ | ||
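The access rule above could be sketched as a simple capacity check. This is an illustration only: the limit of three concurrent interactions and the method names are assumed values, not taken from the actual game.

```java
import java.util.HashSet;
import java.util.Set;

// a player may interact with the helper agent only while the number of
// existing interactions is below an assumed capacity limit
class HelperAccess {
    static final int MAX_INTERACTIONS = 3;  // assumed capacity
    final Set<String> active = new HashSet<>();

    // returns true if the player gained access
    boolean requestAccess(String playerId) {
        if (active.size() >= MAX_INTERACTIONS) return false;
        active.add(playerId);
        return true;
    }

    void endInteraction(String playerId) {
        active.remove(playerId);
    }

    public static void main(String[] args) {
        HelperAccess helper = new HelperAccess();
        helper.requestAccess("a");
        helper.requestAccess("b");
        helper.requestAccess("c");
        System.out.println(helper.requestAccess("d"));  // prints "false": helper is full
    }
}
```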
+ | |||
+ | The agent would be affected by the interactions with the players, and the state of the world. This could be used to aid the gameplay. For instance, perhaps extreme weather conditions would cause many players to ask for help - the agent would switch to a high level of activity to cope with all their requests, after which it would need rest - and become irritable if disturbed. | ||
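Those mood dynamics could be sketched as a small state machine. This is a sketch only, not the actual AgentMind implementation: the state names, the request threshold, and the response strings are all assumptions for illustration.

```java
class HelperAgent {
    enum State { IDLE, BUSY, RESTING }

    State state = State.IDLE;
    int pendingRequests = 0;

    // a player asks for help; too many pending requests at once
    // (e.g. during extreme weather) push the agent into high activity
    void requestHelp() {
        pendingRequests++;
        if (pendingRequests > 3) state = State.BUSY;
    }

    // the agent's response depends on its current state
    String respond() {
        switch (state) {
            case RESTING:
                return "irritable";   // disturbed while resting
            case BUSY:
                pendingRequests--;
                if (pendingRequests == 0) state = State.RESTING;  // worn out
                return "hurried-help";
            default:
                if (pendingRequests > 0) pendingRequests--;
                return "friendly-help";
        }
    }

    public static void main(String[] args) {
        HelperAgent agent = new HelperAgent();
        for (int i = 0; i < 4; i++) agent.requestHelp();  // rush of requests
        for (int i = 0; i < 4; i++) agent.respond();      // works through them all
        System.out.println(agent.respond());              // now resting: prints "irritable"
    }
}
```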
+ | |||
+ | It could recognise players who are being particularly needy, and start to ignore them after some time, and conversely recognise players who were doing well to reward them somehow. Note - need to look into the STM/LTM storage constraints, | ||
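Recognising needy players could be sketched with a plain per-player counter, standing in for the real STM/LTM storage (whose constraints, as noted, still need investigating). The neediness threshold is an assumed value.

```java
import java.util.HashMap;
import java.util.Map;

// simple per-player interaction memory: a counter in place of real STM/LTM
class PlayerMemory {
    static final int NEEDY_THRESHOLD = 5;  // assumed cutoff for "needy"
    final Map<String, Integer> helpCounts = new HashMap<>();

    void recordHelpRequest(String playerId) {
        helpCounts.merge(playerId, 1, Integer::sum);
    }

    // needy players get ignored once they have asked too often
    boolean shouldIgnore(String playerId) {
        return helpCounts.getOrDefault(playerId, 0) > NEEDY_THRESHOLD;
    }

    public static void main(String[] args) {
        PlayerMemory memory = new PlayerMemory();
        for (int i = 0; i < 6; i++) memory.recordHelpRequest("needy-player");
        System.out.println(memory.shouldIgnore("needy-player"));  // prints "true"
    }
}
```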
+ | |||
+ | Much of this type of interaction has been researched by INESC-ID in their iCat scenarios. Particularly interesting is where the companion is passively watching two people playing a game and is biased to one player, only congratulating them and displaying resentment when the other is winning. | ||
+ | |||
+ | ====Multiple agents==== | ||
+ | |||
+ | It may be possible to experiment with multiple agents. It seems like this could lead to some interesting experiences. It also makes sense in terms of scalability, | ||
+ | |||
+ | Each agent could be given a different personality. Relationships could be designed between them, and the players would take part in a more dynamic situation. For example, some agents would be helpful, others less so, and possibly some not at all. The player would choose to " | ||
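Per-agent personalities and designed relationships could be sketched as follows. The helpfulness scale, the relationship values, and the `willHelp` rule are all illustrative assumptions, not part of the actual AgentMind model.

```java
import java.util.HashMap;
import java.util.Map;

// each agent gets its own personality and designed relationships to others
class CompanionAgent {
    final String name;
    final double helpfulness;  // 1.0 = always helpful, 0.0 = never
    final Map<String, Double> relationships = new HashMap<>();  // -1 dislike .. +1 like

    CompanionAgent(String name, double helpfulness) {
        this.name = name;
        this.helpfulness = helpfulness;
    }

    void setRelationship(CompanionAgent other, double value) {
        relationships.put(other.name, value);
    }

    // whether this agent helps, combining its personality with its
    // standing towards the asking player
    boolean willHelp(double playerStanding) {
        return (helpfulness + playerStanding) / 2.0 > 0.5;
    }

    public static void main(String[] args) {
        CompanionAgent kind = new CompanionAgent("kind", 0.9);
        CompanionAgent grumpy = new CompanionAgent("grumpy", 0.1);
        kind.setRelationship(grumpy, -0.5);  // a designed rivalry between agents
        System.out.println(kind.willHelp(0.5) + " " + grumpy.willHelp(0.5));
        // prints "true false"
    }
}
```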
+ | |||