Speaking to the MIT Technology Review earlier this week, AI startup Altera disclosed that it had successfully run several experiments of varying scales. It deployed AI-powered “players” onto a Minecraft server and watched them socialize dynamically based on their personalities, picking roles within their village to form a “society” of up to 1,000 “players” simulating human behavior.
This experiment was meant to “push the limit of what agents can do in groups autonomously,” and plenty of human-like behavior developed over time, including an AI chef handing out more food to whoever showed “him” the most appreciation. Other roles that emerged included “guards” who built fences (and presumably other perimeter features) and an “artist” bot that picked virtual flowers. Of course, “farmers,” “traders,” “builders,” “explorers,” and “defenders” all appeared as a natural result of the AI interacting with Minecraft’s gameplay systems.
One way the agents were tested collectively was the addition of an in-game taxation system, complete with tax laws the bots voted on. Depending on how the AI agents interacted with one another, the tax rate would rise or fall whenever a vote was held (sketched below). The testers also got some AI agents to spread the word of the Flying Spaghetti Monster, the deity of the parody religion Pastafarianism.
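Altera hasn’t published how Project Sid implements that tax-vote mechanic, but the basic idea of a rate that moves with each vote is easy to picture. The sketch below is a minimal, hypothetical illustration; the names (Agent, update_tax_rate, supports_higher_taxes) are assumptions for clarity, not anything from Altera’s code.

```python
# Hypothetical sketch of a vote-driven tax rule, loosely modeled on the
# experiment described above. Names and logic are illustrative assumptions,
# not Altera's actual implementation.
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    supports_higher_taxes: bool  # stance the agent formed from its interactions


def update_tax_rate(agents: list[Agent], current_rate: float,
                    step: float = 0.05) -> float:
    """Raise or lower the communal tax rate based on a simple majority vote."""
    votes_for_increase = sum(a.supports_higher_taxes for a in agents)
    votes_for_decrease = len(agents) - votes_for_increase
    if votes_for_increase > votes_for_decrease:
        current_rate += step
    elif votes_for_decrease > votes_for_increase:
        current_rate -= step
    # Keep the rate within a sensible range.
    return max(0.0, min(current_rate, 1.0))


villagers = [Agent("chef", True), Agent("guard", False), Agent("artist", True)]
print(round(update_tax_rate(villagers, current_rate=0.10), 2))  # -> 0.15
```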
Some behaviors also seemed to be almost entirely emergent. In a 500-agent simulation, many of the bots took to pranking each other for their own “amusement,” while others took an interest in environmental advocacy. Of course, these AI agents have no real self-awareness, so the eco-conscious ones likely have no idea how taxing their own existence is on the environment. Still, it is quite amusing to see how much human-like behavior can emerge under Altera’s “Project Sid.”
Altera has dubbed the experiment that makes all of this possible “Project Sid.” Sid’s AI agents are each given a “brain” composed of multiple modules; some exist purely to understand Minecraft’s gameplay mechanics, but “some modules are powered by LLMs and designed to specialize in certain tasks, such as reacting to other agents, speaking, or planning the agent’s next move.”
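Altera hasn’t released Project Sid’s code, but a modular “brain” of that kind might look roughly like the sketch below: one plain module for game state, plus LLM-backed modules for conversation and planning, all combined into a single decision per tick. Every class and function name here (AgentBrain, PlanningModule, llm_complete, and so on) is an assumption made for illustration, not Altera’s API.

```python
# A minimal, hypothetical sketch of a multi-module agent "brain" along the
# lines described above. Module names and the llm_complete() helper are
# illustrative assumptions, not Altera's actual implementation.
from abc import ABC, abstractmethod


def llm_complete(prompt: str) -> str:
    """Placeholder for a call to whatever LLM backs the agent (assumed)."""
    return f"[LLM response to: {prompt!r}]"


class Module(ABC):
    @abstractmethod
    def step(self, observation: dict) -> dict:
        """Consume the latest game observation and return this module's output."""


class GameMechanicsModule(Module):
    """Non-LLM module: tracks plain Minecraft state such as hunger."""
    def step(self, observation: dict) -> dict:
        return {"needs_food": observation.get("hunger", 0) > 6}


class ConversationModule(Module):
    """LLM-powered module: decides how to react and speak to nearby agents."""
    def step(self, observation: dict) -> dict:
        nearby = observation.get("nearby_agents", [])
        if not nearby:
            return {"utterance": None}
        return {"utterance": llm_complete(f"Greet {nearby[0]} in character.")}


class PlanningModule(Module):
    """LLM-powered module: picks the agent's next high-level action."""
    def step(self, observation: dict) -> dict:
        return {"plan": llm_complete(f"Choose the next task given {observation}.")}


class AgentBrain:
    """Runs every module on the same observation and merges their outputs."""
    def __init__(self, modules: list[Module]):
        self.modules = modules

    def step(self, observation: dict) -> dict:
        decision: dict = {}
        for module in self.modules:
            decision.update(module.step(observation))
        return decision


brain = AgentBrain([GameMechanicsModule(), ConversationModule(), PlanningModule()])
print(brain.step({"hunger": 8, "nearby_agents": ["chef"]}))
```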
In the long term, Altera founder Robert Yang hopes to unlock “the true power of AI.” In his eyes, that happens when we “have actually truly autonomous agents that can collaborate at scale.” This means we could create “agents that can really love humans (like dogs),” though that idea is sure to prove controversial among many AI critics and sci-fi fans.
Read full post on Tom’s Hardware