Artificial intelligence can now play Minecraft at human level


OpenAI researchers have trained a neural network that plays Minecraft to a standard as high as human players. The network was trained on 70,000 hours of in-game video, supplemented by a small dataset of contractor recordings in which keyboard and mouse inputs were captured alongside the footage while the contractors performed specific in-game tasks.

After fine-tuning, OpenAI found that the model was able to perform all sorts of complex skills, from swimming to hunting animals and eating their meat. It also grasped "column climbing," a move in which the player repeatedly jumps and places a block beneath themselves in order to rise.

Perhaps most impressively, the model was able to craft diamond tools, a task that requires a long series of actions to be executed in sequence, which OpenAI describes as an unprecedented feat for a computer agent.

The Minecraft project matters because it demonstrates that a new technique OpenAI used to train the model, called Video PreTraining (VPT), works.

Historically, the biggest obstacle to using raw video for training AI models has been that while it is easy enough to see what happened on screen, the footage does not show how those results were produced. The model absorbs the desired outcomes but never sees the combinations of keyboard and mouse inputs needed to achieve them.

With VPT, OpenAI pairs a large video dataset scraped from public web sources with a smaller, hand-curated pool of videos labeled with the corresponding keyboard and mouse actions to create the base model.
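The core idea can be sketched in miniature. In this hypothetical simplification, a small "inverse dynamics model" (IDM) is fit on the few clips whose actions are known, then used to pseudo-label the much larger pool of unlabeled video so it can serve as behavioral-cloning data. The frame and action names below are invented for illustration; the real system works on pixels with a large neural network.

```python
# Toy sketch of the VPT pipeline (hypothetical simplification):
# learn a frame-transition -> action lookup from labeled clips,
# then pseudo-label unlabeled video for behavioral cloning.

from collections import Counter

def train_idm(labeled_clips):
    """labeled_clips: list of (frame_before, frame_after, action) tuples."""
    votes = {}
    for before, after, action in labeled_clips:
        votes.setdefault((before, after), Counter())[action] += 1
    # Pick the most common action observed for each transition.
    return {trans: c.most_common(1)[0][0] for trans, c in votes.items()}

def pseudo_label(idm, frames):
    """Label an unlabeled frame sequence with inferred actions."""
    labeled = []
    for before, after in zip(frames, frames[1:]):
        labeled.append((before, idm.get((before, after), "unknown")))
    return labeled

# A tiny hand-labeled set: transitions between abstract "frames".
labeled = [
    ("standing", "jumping", "press_space"),
    ("jumping", "standing", "release_space"),
    ("standing", "walking", "press_w"),
]
idm = train_idm(labeled)

# Unlabeled web video now becomes labeled training data.
video = ["standing", "jumping", "standing", "walking"]
print(pseudo_label(idm, video))
# → [('standing', 'press_space'), ('jumping', 'release_space'), ('standing', 'press_w')]
```

The point of the design is leverage: a small amount of expensive labeled data trains the IDM, which then converts cheap unlabeled video into supervised training data at scale.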

To fine-tune the base model, the team adds smaller datasets designed to teach specific tasks. Here, OpenAI used footage of players performing early-game actions such as felling trees and building crafting tables, which is said to provide a "major improvement" in the reliability with which the model handles these tasks.
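As a loose, hypothetical illustration of this stage, the sketch below treats a policy as a simple action-frequency table: a base policy estimated from broad web video is blended toward a small task-specific dataset, with the new data weighted more heavily. All names and the blending weight are assumptions for illustration, not OpenAI's method.

```python
# Toy sketch (hypothetical) of fine-tuning: nudge a base policy's
# action distribution toward a small task-specific dataset.

from collections import Counter

def fit_policy(actions):
    """Estimate action probabilities from a list of observed actions."""
    counts = Counter(actions)
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

def fine_tune(base, task_actions, weight=0.8):
    """Blend the base policy toward the task-specific distribution."""
    task = fit_policy(task_actions)
    acts = set(base) | set(task)
    return {a: (1 - weight) * base.get(a, 0.0) + weight * task.get(a, 0.0)
            for a in acts}

base = fit_policy(["walk", "walk", "jump", "mine"])
tuned = fine_tune(base, ["chop_tree", "chop_tree", "craft_table"])
# Early-game actions now dominate the tuned policy.
```

In the real system the fine-tuning is continued gradient training of the neural network on the new clips, but the effect is the same: the targeted data reshapes the behavior the base model already learned.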

Another technique involves "rewarding" the AI model for accomplishing each step in a sequence of tasks, a practice known as reinforcement learning. This process allowed the neural network to gather all the materials needed to craft a diamond pickaxe with a human-level success rate.
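The per-step reward idea can be sketched as milestone-based reward shaping. The milestone list below follows Minecraft's actual tech tree toward a diamond pickaxe, but the reward values and function are illustrative assumptions, not OpenAI's reward design.

```python
# Hypothetical sketch of milestone rewards for the diamond-pickaxe
# task: the agent earns a one-time bonus for each intermediate item
# it obtains along the tech tree.

MILESTONES = ["log", "planks", "crafting_table", "wooden_pickaxe",
              "cobblestone", "stone_pickaxe", "iron_ore", "furnace",
              "iron_ingot", "iron_pickaxe", "diamond", "diamond_pickaxe"]

def milestone_reward(inventory_events, rewarded=None):
    """Return total reward for newly reached milestones in this episode."""
    rewarded = set() if rewarded is None else rewarded
    total = 0.0
    for item in inventory_events:
        if item in MILESTONES and item not in rewarded:
            rewarded.add(item)
            total += 1.0  # one-time bonus per milestone
    return total

print(milestone_reward(["log", "log", "planks"]))  # → 2.0
```

Shaping the reward this way gives the learning algorithm a signal at every stage of a very long task, instead of a single reward only when the final pickaxe appears.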

In a blog post, OpenAI writes: "VPT offers the exciting possibility of directly learning large-scale behavioral priors in more domains than just language. Although we only experimented in Minecraft, the game is very open-ended and the native human interface (mouse and keyboard) is very generic, so we believe our results bode well for other similar domains, such as computer usage."
