Split all AI functions into a separate process

Registered by Erik Ogenvik

Currently Cyphesis handles all AI in the same process as the main world simulation. There is support for using Cyphesis as a pure AI client, but this is not the default. By default, when a new mind is created, its scripts and related state are all set up in the same process as the main simulation. Since this process is single-threaded, this both limits how many AI agents can exist in the world and creates a risk that a misbehaving AI agent will affect the server.
This architecture also doesn't scale across the many cores that are the norm on modern machines.

Instead, world simulation and AI handling should be separated, with the two concerns running in separate processes. At server startup a separate "AI client manager" process should be spawned, connecting to the main server through the local socket. Once connected through a privileged account, this process should send an operation signalling that it's ready to handle new AI agents. Whenever a new NPC is created, the main server will produce a "possession token" and send it to the "AI client manager" process. The manager will in turn spawn and manage one or more "AI worker processes", which are separate processes dedicated to handling AI. Such a worker connects through the local socket using a regular account and uses the "possession token" to take control of the newly created NPC.
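
To make the flow concrete, below is a minimal C++ sketch of the possession-token hand-off. All names (PossessionToken, Server, AiWorker, AiClientManager) are hypothetical and do not reflect existing Cyphesis classes, and the hand-off here is a direct function call; in the real implementation it would be Atlas operations sent over the local socket.

    #include <cstddef>
    #include <iostream>
    #include <random>
    #include <string>
    #include <vector>

    // Issued by the simulation server whenever a new NPC is created.
    struct PossessionToken {
        std::string entityId;  // the NPC to take control of
        std::string secret;    // one-time secret proving the right to possess it
    };

    // Stand-in for the main world simulation server.
    class Server {
    public:
        // Called when a new NPC entity comes into existence.
        PossessionToken createNpc(const std::string& entityId) {
            static std::mt19937_64 rng{std::random_device{}()};
            return PossessionToken{entityId, std::to_string(rng())};
        }

        // Called by an AI worker, logged in with a regular account, presenting
        // the token to take control of the NPC's mind.
        bool possess(const PossessionToken& token) {
            std::cout << "NPC " << token.entityId
                      << " is now controlled by an external AI process\n";
            return true;
        }
    };

    // Stand-in for one AI worker process.
    struct AiWorker {
        void takeControl(Server& server, const PossessionToken& token) {
            server.possess(token);
        }
    };

    // Stand-in for the AI client manager: it never runs AI itself, it only
    // relays possession tokens to its workers (round-robin in this sketch).
    struct AiClientManager {
        std::vector<AiWorker> workers;
        std::size_t next = 0;

        explicit AiClientManager(std::size_t workerCount) : workers(workerCount) {}

        void onPossessionToken(Server& server, const PossessionToken& token) {
            workers[next++ % workers.size()].takeControl(server, token);
        }
    };

    int main() {
        Server server;
        AiClientManager manager(2);
        // A new NPC appears: the server issues a token, sends it to the manager,
        // and the manager hands it to a worker, which then possesses the NPC.
        manager.onPossessionToken(server, server.createNpc("npc-1"));
    }
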
Thus, the end result is NPCs controlled by separate AI processes. The "AI client manager" only acts as a manager for the AI worker processes, relaying possession tokens to them whenever a new AI client is needed. It's probably a good idea to match the number of worker processes to the number of cores.
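
A sketch of how the AI client manager might size its worker pool to the machine, assuming a POSIX system; runAiWorker is a hypothetical entry point, and a real manager would also supervise and restart workers rather than just wait for them.

    #include <cstdio>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <thread>
    #include <unistd.h>
    #include <vector>

    // Hypothetical entry point of an AI worker: it would connect to the local
    // socket with a regular account and wait for possession tokens.
    void runAiWorker(unsigned index) {
        std::printf("AI worker %u started (pid %d)\n", index,
                    static_cast<int>(getpid()));
        // ... connect, authenticate, handle possession tokens ...
    }

    int main() {
        // One worker process per core is a reasonable default.
        unsigned cores = std::thread::hardware_concurrency();
        if (cores == 0) {
            cores = 1;
        }

        std::vector<pid_t> children;
        for (unsigned i = 0; i < cores; ++i) {
            pid_t pid = fork();
            if (pid == 0) {            // child: become an AI worker
                runAiWorker(i);
                _exit(0);
            }
            children.push_back(pid);   // parent: remember it for supervision
        }

        // A real manager would supervise and restart workers; here we just wait.
        for (pid_t pid : children) {
            waitpid(pid, nullptr, 0);
        }
    }
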
Note that the use of processes rather than threads fits the model well, since there should be no shared data between two different AI agents. Each AI agent holds a unique view of the world, with even the terrain differing between agents (depending on which entities they have discovered).
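
As an illustration of that isolation, this is roughly the shape of per-agent state an AI worker could hold; the types are hypothetical and not taken from the Cyphesis code base.

    #include <map>
    #include <string>
    #include <utility>

    // Last known information about an entity this agent has discovered.
    struct KnownEntity {
        std::string type;
        double x, y, z;  // last observed position
    };

    // Per-agent state held inside an AI worker process; nothing here is shared
    // with the simulation server or with other agents.
    struct AgentWorldView {
        std::map<std::string, KnownEntity> discoveredEntities;     // by entity id
        std::map<std::pair<int, int>, float> knownTerrainHeights;  // sparse, only
                                                                   // where explored
    };

    int main() {
        AgentWorldView view;
        view.discoveredEntities["tree-17"] = KnownEntity{"tree", 10.0, 0.0, 4.5};
        view.knownTerrainHeights[{0, 0}] = 1.25f;
    }
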

This architecture will also scale very well to multiple machines, as the connection to the AI clients doesn't necessarily have to be a local socket one (as long as it's a privileged connection).

Blueprint information

Status: Started
Approver: Erik Ogenvik
Priority: High
Drafter: Erik Ogenvik
Direction: Needs approval
Assignee: Erik Ogenvik
Definition: Approved
Series goal: None
Implementation: Deployment
Milestone target: None
Started by: Erik Ogenvik
