Google used YouTube videos from the Mannequin Challenge to train its AI

In a recent blog post, Google detailed how it has been working on depth perception in videos where both the camera and the subject are moving. The study needed a vast amount of data to train the AI, and the first logical step was teaching it to detect people in scenes where the camera moves but the people stay still. As it turns out, Google had the perfect resource for this in the form of roughly 2,000 YouTube videos filmed for the Mannequin Challenge, which it used as a starting point for training. The results could eventually be used in augmented reality or for adding effects to videos, such as portrait mode.

What you need to know

Google is training its AI to create depth maps that isolate human subjects in a scene using only a single camera. In the Mannequin Challenge, a person or group of people would stand completely still while the camera moved through the scene.