When I was a teenager, we had these mandatory sort-of-military training classes, and due to their mind-numbing nature, my mind tended to wander elsewhere.

That's when I started theorizing about an emotional coordinate system, eventually settling on two axes: `fear/protection` and `pleasure/displeasure`. On top of that sat some sort of "internal conflict" measure: when multiple stimuli pull in opposite directions along an axis, they still sum to the net value, but the internal conflict is the sum of their absolute values, or some variant that only takes positive values (e.g. 1 + (-1) = 0, but the internal conflict is |1| + |-1| = 2).
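
A minimal sketch of what that bookkeeping could look like, assuming stimuli are just dicts of per-axis contributions (the axis names and stimulus values here are purely illustrative, not anything from my actual notes):

```python
# Illustrative sketch: stimuli are dicts mapping an axis name to a
# signed contribution; names and values here are made up.

def evaluate(stimuli: list[dict[str, float]]) -> dict[str, dict[str, float]]:
    """For each axis, sum stimuli into a net value and an
    internal-conflict value (the sum of absolute contributions)."""
    axes: dict[str, dict[str, float]] = {}
    for stimulus in stimuli:
        for axis, value in stimulus.items():
            entry = axes.setdefault(axis, {"net": 0.0, "conflict": 0.0})
            entry["net"] += value          # signed sum: opposites cancel
            entry["conflict"] += abs(value)  # absolute sum: opposites add up
    return axes

# Two stimuli pulling in opposite directions on the pleasure axis:
# the net feeling cancels out, but the internal conflict does not.
print(evaluate([
    {"pleasure": 1.0, "fear": 0.2},
    {"pleasure": -1.0},
]))
# {'pleasure': {'net': 0.0, 'conflict': 2.0}, 'fear': {'net': 0.2, 'conflict': 0.2}}
```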

With the advent of LLMs, and revisiting those old notes, it's interesting to consider how this might provide some kind of reward signal for the whole "language system", potentially even giving it self-learning feedback from real-world use.

But the thing is, this primitive system of mine can't explain the various emotions popularized by Inside Out, namely Joy, Anger, Fear, Disgust, and Sadness. So... with only the knowledge of cognition picked up from my teenage habit of (for some reason) reading cognitive science books, and lacking any formal training in the field, I had to rely on a source more versed in these matters.

Of course, in this day and age, it's an LLM.

That led me to the concept of Valence/Arousal/Dominance (VAD) models, where an emotion is a point along three axes: valence (how pleasant), arousal (how activated), and dominance (how in control). This seems like a promising step in the right direction for future modeling of computing systems. Some clever folks have even tried it with vision language models already (as seen here, an interesting read).
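
As a toy illustration of how the Inside Out cast could live in that space, here's a sketch that places each emotion at rough, hand-picked VAD coordinates (the exact numbers vary between studies; these are assumptions for illustration only) and labels an arbitrary point with its nearest named emotion:

```python
import math

# Rough, illustrative VAD coordinates (valence, arousal, dominance),
# each in [-1, 1]. These are hand-picked guesses, not published values.
EMOTIONS = {
    "joy":     ( 0.8,  0.5,  0.4),
    "anger":   (-0.6,  0.7,  0.3),
    "fear":    (-0.7,  0.6, -0.5),
    "disgust": (-0.6,  0.3,  0.1),
    "sadness": (-0.6, -0.4, -0.3),
}

def nearest_emotion(vad: tuple[float, float, float]) -> str:
    """Label a VAD point with the closest named emotion (Euclidean distance)."""
    return min(EMOTIONS, key=lambda name: math.dist(vad, EMOTIONS[name]))

# A low-valence, high-arousal, low-dominance state should read as fear.
print(nearest_emotion((-0.5, 0.5, -0.4)))  # fear
```

The nice part is that the old two-axis idea still fits: net and conflict values could simply be computed over three VAD axes instead of my original two.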

Will toy with this when I finally have the time.

Definitely an interesting time to be alive. The future we only dreamed of, or even thought would never arrive in our lifetime, suddenly seems within our grasp.
