Intelligent agents in our environment are currently capable of two types of perception: interoception and the perception of linguistic input. Responses to interoceptive input include remembering a sensation (typically a symptom) and deciding whether or not to act on it at the given time. Responses to language input include learning (augmenting the agent’s ontology, when appropriate, as a result of understanding user input), responding to a question or suggestion, generating a question based on information or advice just provided, and generating advice based on newly provided information and prior knowledge.
MVP uses agenda-style control and goal- and plan-based simulation. The physiological and cognitive agents of the VP share the same underlying organization and knowledge representation.
A core architectural property of this agent environment is that all interoceptive and all linguistic input is automatically translated into expressions in the same metalanguage of memory and reasoning used by the intelligent agents. As a result, all cognitive processes operate over formal, unambiguous knowledge structures grounded in the ontological model of the world.
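The unified-metalanguage idea can be illustrated with a minimal sketch. The frame names (`SYMPTOM-EVENT`, `SPEECH-ACT`), the `MeaningRepresentation` class, and the interpreter functions below are hypothetical stand-ins, not the actual MVP representation; the point is only that both perception channels feed a single entry point that yields expressions in one shared formalism before any reasoning occurs.

```python
from dataclasses import dataclass, field
from typing import Union

@dataclass
class MeaningRepresentation:
    """A frame in the agent's metalanguage of memory and reasoning.

    `head` names an ontological concept; `properties` holds its slot fillers.
    (Illustrative structure only; the real metalanguage is richer.)
    """
    head: str
    properties: dict = field(default_factory=dict)

def interpret_interoception(signal: dict) -> MeaningRepresentation:
    # An interoceptive percept, e.g. {"sensation": "chest-pain", "intensity": 0.7},
    # is remembered as a symptom frame in the same formalism as language input.
    return MeaningRepresentation(
        head="SYMPTOM-EVENT",
        properties={"type": signal["sensation"], "intensity": signal["intensity"]},
    )

def interpret_language(utterance: str) -> MeaningRepresentation:
    # Stand-in for semantic analysis: a real system would analyze the utterance
    # against the ontology; here we simply wrap the raw text in a frame.
    return MeaningRepresentation(head="SPEECH-ACT", properties={"text": utterance})

def perceive(percept: Union[dict, str]) -> MeaningRepresentation:
    """Single perception entry point: every input, regardless of channel,
    becomes an expression in the shared metalanguage."""
    if isinstance(percept, dict):
        return interpret_interoception(percept)
    return interpret_language(percept)

# Both channels produce frames that downstream reasoning treats uniformly.
memory = [
    perceive({"sensation": "chest-pain", "intensity": 0.7}),
    perceive("Does the pain worsen after eating?"),
]
for frame in memory:
    print(frame.head, frame.properties)
```

Because both channels converge on one representation, downstream processes such as remembering, question answering, and advice generation need no channel-specific logic.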