In recent years, animation has rapidly become an important element of how a game's artificial intelligence is presented to the player. Without high-fidelity movement, it can be difficult to make artificial intelligence agents appear convincing and intelligent. While much progress has been made in animating humanoid bodies, there is still work to be done on the more subtle and powerful portrayal of emotions.
As early as the 1940s, it was observed that people tend to attribute emotions to animated shapes based purely on their motion and spatial interaction. Until fairly recently, very few games could provide the visual fidelity needed to do much more than this in terms of conveying emotion and intent.
Animation technology is already capable of incredible feats when it comes to facial details and expressions. Building a sufficient model of emotional state is also clearly well within reach. The real source of difficulty lies in the realm of audio. Without the ability to modulate voices in real time, games often fall back on reusing voice assets in situations where they sound jarringly out of place.
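To make the "model of emotional state" idea concrete, here is a minimal sketch of one common approach: a two-dimensional valence/arousal state that gameplay events nudge and that decays back toward neutral, then feeds animation blend weights. All names (the `EmotionalState` class, the parameter keys) are illustrative assumptions, not an API from any particular engine.

```python
from dataclasses import dataclass


@dataclass
class EmotionalState:
    """Minimal valence/arousal model: events nudge the state,
    which then decays back toward a neutral resting point."""
    valence: float = 0.0   # -1 (negative) .. +1 (positive)
    arousal: float = 0.0   #  0 (calm)     .. +1 (excited)
    decay: float = 0.5     # fraction of displacement lost per second

    def apply_event(self, d_valence: float, d_arousal: float) -> None:
        # Clamp so repeated stimuli saturate rather than overflow.
        self.valence = max(-1.0, min(1.0, self.valence + d_valence))
        self.arousal = max(0.0, min(1.0, self.arousal + d_arousal))

    def update(self, dt: float) -> None:
        # Exponential decay toward neutral, frame-rate independent.
        factor = (1.0 - self.decay) ** dt
        self.valence *= factor
        self.arousal *= factor

    def animation_params(self) -> dict:
        # Hypothetical mapping into blend weights an animation
        # system might consume (key names are made up for this sketch).
        return {
            "smile_weight": max(0.0, self.valence),
            "frown_weight": max(0.0, -self.valence),
            "gesture_speed": 0.5 + 0.5 * self.arousal,
        }


npc = EmotionalState()
npc.apply_event(d_valence=0.8, d_arousal=0.6)  # e.g. the player helps the NPC
npc.update(dt=1.0)                             # one second later, half decayed
print(npc.animation_params())
```

The same state could drive audio selection as easily as blend weights, which is why a shared emotional model is attractive: one simulation, many presentation channels.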
Consider any moment when the immersion of a game experience has been broken by a character seeming stilted, robotic, or otherwise emotionally unreal. While carefully sculpted narrative moments and cutscenes can generally portray emotional state very well, the bulk of gameplay is often fraught with such moments of surreal absurdity. Modeling and displaying emotional richness will be useful for mitigating this in future games.
Progress in this area will unlock a vast amount of potential for artificial intelligence characters and liberate game developers from the constraints of the fairly limited emotional palette currently available. Combined with dynamically generated dialogue and artificial-intelligence-assisted storytelling, emotional modeling represents a tremendous opportunity for creating games that are more immersive and convincing than ever before.