Merge pull request #193 from Unity-Technologies/HPE_tutorial
Added the Human Pose Estimation tutorial
sleal-unity authored Feb 6, 2021
2 parents da611ae + 4b3a08f commit 52b03e0
Showing 22 changed files with 380 additions and 27 deletions.
3 changes: 3 additions & 0 deletions README.md
@@ -21,6 +21,9 @@ Get your local Perception workspace up and running quickly. Recommended for user
**[Perception Tutorial](com.unity.perception/Documentation~/Tutorial/TUTORIAL.md)**
Detailed instructions covering all the important steps, from installing Unity Editor to creating your first computer vision data generation project, building a randomized Scene, and generating large-scale synthetic datasets by leveraging the power of Unity Simulation. No prior Unity experience required.

**[Human Pose Estimation Tutorial](com.unity.perception/Documentation~/HPETutorial/TUTORIAL.md)**
Step-by-step instructions for using the Key Point and Human Pose Estimation tools included in the Perception package. It is recommended that you finish Phase 1 of the Perception Tutorial above before starting this tutorial.
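For reference, keypoint ground truth from these tools ends up in the dataset's JSON files. The short Python sketch below shows one way such an annotation might be consumed. The field names (`keypoints`, `x`, `y`, `state`) and the state convention (0 = not present, 1 = labeled but occluded, 2 = visible) are assumptions for illustration and may differ from the exact schema your package version emits.

```python
# Illustrative sketch of consuming a keypoint annotation like those the
# Perception package writes to its JSON dataset files. The field names and
# state codes below are assumptions, not the authoritative schema.

def visible_keypoints(annotation):
    """Return (x, y) pairs for keypoints marked visible (state == 2)."""
    return [(kp["x"], kp["y"]) for kp in annotation["keypoints"] if kp["state"] == 2]

sample = {
    "label": "person",
    "keypoints": [
        {"index": 0, "x": 120.5, "y": 64.0, "state": 2},  # visible
        {"index": 1, "x": 118.2, "y": 90.3, "state": 1},  # labeled but occluded
        {"index": 2, "x": 0.0,   "y": 0.0,  "state": 0},  # not present
    ],
}

print(visible_keypoints(sample))  # [(120.5, 64.0)]
```

A filter like this is a common first step before computing per-joint statistics on a generated dataset.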

## Documentation
In-depth documentation on individual components of the package.

348 changes: 348 additions & 0 deletions com.unity.perception/Documentation~/HPETutorial/TUTORIAL.md

Large diffs are not rendered by default.

33 changes: 17 additions & 16 deletions com.unity.perception/Documentation~/Tutorial/Phase1.md
@@ -4,17 +4,18 @@

In this phase of the Perception tutorial, you will start by downloading and installing Unity Editor and the Perception package. You will then use our sample assets and provided components to easily generate a synthetic dataset for training an object-detection model.

Throughout the tutorial, lines starting with bullet points followed by **":green_circle: Action:"** denote the individual actions you will need to perform in order to progress through the tutorial, while the rest of the text provides additional context and explanation around those actions. If in a hurry, you can just follow the actions!

Steps included in this phase of the tutorial:

* [Step 1: Download Unity Editor and Create a New Project](#step-1)
* [Step 2: Download the Perception Package and Import Samples](#step-2)
* [Step 3: Set Up a Scene for Your Perception Simulation](#step-3)
* [Step 4: Specify Ground-Truth and Set Up Object Labels](#step-4)
* [Step 5: Set Up Background Randomizers](#step-5)
* [Step 6: Set Up Foreground Randomizers](#step-6)
* [Step 7: Inspect Generated Synthetic Data](#step-7)
* [Step 8: Verify Data Using Dataset Insights](#step-8)

### <a name="step-1">Step 1: Download Unity Editor and Create a New Project</a>
* **:green_circle: Action**: Navigate to [this](https://unity3d.com/get-unity/download/archive) page to download and install the latest version of **Unity Editor 2019.4.x**. (The tutorial has not yet been fully tested on newer versions.)
@@ -65,7 +66,7 @@ Once the sample files are imported, they will be placed inside the `Assets/Sampl
* **:green_circle: Action**: **(For URP projects only)** The _**Project**_ tab contains a search bar; use it to find the file named `ForwardRenderer.asset`, as shown below:

<p align="center">
<img src="Images/forward_renderer.png" width="800"/>
</p>

* **:green_circle: Action**: **(For URP projects only)** Click on the found file to select it. Then, from the _**Inspector**_ tab of the editor, click on the _**Add Renderer Feature**_ button, and select _**Ground Truth Renderer Feature**_ from the dropdown menu:
@@ -91,7 +92,7 @@ As seen above, the new Scene already contains a camera (`Main Camera`) and a lig
* **:green_circle: Action**: Click on `Main Camera` and in the _**Inspector**_ tab, modify the camera's `Position`, `Rotation`, `Projection` and `Size` to match the screenshot below. (Note that `Size` only becomes available once you set `Projection` to `Orthographic`.)

<p align="center">
<img src="Images/camera_prep.png" width="900"/>
</p>


@@ -177,7 +178,7 @@ In Unity, Prefabs are essentially reusable GameObjects that are stored to disk,
When you open the Prefab asset, you will see the object shown in the Scene tab and its components shown on the right side of the editor, in the _**Inspector**_ tab:

<p align="center">
<img src="Images/exampleprefab.png" width="900"/>
</p>

The Prefab contains a number of components, including a `Transform`, a `Mesh Filter`, a `Mesh Renderer` and a `Labeling` component (highlighted in the image above). While the first three are common Unity components, the fourth is specific to the Perception package and is used for assigning labels to objects. You can see here that the Prefab has one label already added, displayed in the list of `Added Labels`. The UI provides a multitude of ways to assign labels to the object. You can either have the asset labeled automatically (by enabling `Use Automatic Labeling`) or add labels manually. In the case of automatic labeling, you can choose from a number of labeling schemes, e.g. the asset's name or folder name. If you go the manual route, you can type in labels, add labels from any of the label configurations included in the project, or add from lists of suggested labels based on the Prefab's name and path.
@@ -412,15 +413,15 @@ This will download a Docker image from Unity. If you get an error regarding the
* **:green_circle: Action**: The image is now running on your computer. Open a web browser and navigate to `http://localhost:8888` to open the Jupyter notebook:

<p align="center">
<img src="Images/jupyter1.png" width="800"/>
</p>

* **:green_circle: Action**: To make sure your data is properly mounted, navigate to the `data` folder. If you see the dataset's folders there, we are good to go.
* **:green_circle: Action**: Navigate to the `datasetinsights/notebooks` folder and open `Perception_Statistics.ipynb`.
* **:green_circle: Action**: Once in the notebook, remove the `/<GUID>` part of the `data_root = /data/<GUID>` path. Since the dataset root is already mapped to `/data`, you can use this path directly.

<p align="center">
<img src="Images/jupyter2.png" width="800"/>
</p>

This notebook contains a variety of functions for generating plots, tables, and bounding box images that help you analyze your generated dataset. Certain parts of this notebook are currently not of use to us, such as the code meant for downloading data generated through Unity Simulation (coming later in this tutorial).
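As a complement to the notebook, quick dataset statistics can also be computed with a few lines of standalone Python. The sketch below is not part of Dataset Insights; it tallies bounding-box label names across `captures_*.json` files. The schema fields used (`captures`, `annotations`, `values`, `label_name`) follow the Perception output format but may vary between package versions, and the sample labels in the demo are made up.

```python
# Minimal, self-contained sketch (not the Dataset Insights API) of the kind of
# summary the notebook computes: counting bounding-box labels across the
# captures_*.json files under a dataset root. Field names are assumptions
# based on the Perception output format and may differ by package version.
import glob
import json
import os
import tempfile
from collections import Counter

def count_labels(data_root):
    """Tally bounding-box label names across all captures files."""
    counts = Counter()
    pattern = os.path.join(data_root, "**", "captures_*.json")
    for path in glob.glob(pattern, recursive=True):
        with open(path) as f:
            captures = json.load(f).get("captures", [])
        for capture in captures:
            for annotation in capture.get("annotations", []):
                for value in annotation.get("values", []):
                    if "label_name" in value:
                        counts[value["label_name"]] += 1
    return counts

# Demo with a synthetic dataset written to a temporary directory:
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "Dataset"))
    sample = {"captures": [{"annotations": [{"values": [
        {"label_name": "drink_whippingcream_lucerne"},
        {"label_name": "candy_minipralines_lindt"},
        {"label_name": "drink_whippingcream_lucerne"},
    ]}]}]}
    with open(os.path.join(root, "Dataset", "captures_000.json"), "w") as f:
        json.dump(sample, f)
    print(count_labels(root))
    # Counter({'drink_whippingcream_lucerne': 2, 'candy_minipralines_lindt': 1})
```

In the Docker setup above, `data_root` would simply be `/data`, since the dataset folder is mounted there.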
2 changes: 1 addition & 1 deletion com.unity.perception/Documentation~/Tutorial/Phase2.md
@@ -3,7 +3,7 @@

In Phase 1 of the tutorial, we learned how to use the Randomizers that are bundled with the Perception package to spawn background and foreground objects, and randomize their position, rotation, texture, and hue offset (color). In this phase, we will build a custom light Randomizer for the `Directional Light` object, affecting the light's intensity and color on each Iteration of the Scenario. We will also learn how to include certain data or logic inside a randomized object (such as the light) in order to more explicitly define and restrict its randomization behaviors.

Steps included in this phase of the tutorial:
- [Step 1: Build a Lighting Randomizer](#step-1)
- [Step 2: Bundle Data and Logic Inside RandomizerTags](#step-2)
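
The Randomizer built in Step 1 is C# code inside Unity; as a language-agnostic illustration of the idea, the Python sketch below draws a light's intensity and color from uniform ranges once per Iteration. The ranges and the dictionary structure are illustrative assumptions only, not the Perception API.

```python
# Illustration (not Unity/Perception code) of per-Iteration light
# randomization: sample intensity and an RGB color from uniform ranges.
# The ranges below are arbitrary example values.
import random

def randomize_light(rng):
    """Sample one Iteration's light settings."""
    intensity = rng.uniform(0.5, 3.0)
    color = tuple(rng.uniform(0.4, 1.0) for _ in range(3))  # RGB channels
    return {"intensity": intensity, "color": color}

rng = random.Random(0)  # seeded, like a Scenario's deterministic random state
for iteration in range(3):
    print(iteration, randomize_light(rng))
```

Seeding the generator mirrors how Scenarios keep dataset generation reproducible across runs.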

21 changes: 11 additions & 10 deletions com.unity.perception/Documentation~/Tutorial/Phase3.md
@@ -3,11 +3,12 @@

In this phase of the tutorial, we will learn how to run our Scene on _**Unity Simulation**_ and analyze the generated dataset using _**Dataset Insights**_. Unity Simulation will allow us to generate a much larger dataset than what is typically plausible on a workstation computer.

Steps included in this phase of the tutorial:

* [Step 1: Set Up Unity Account, Unity Simulation, and Cloud Project](#step-1)
* [Step 2: Run Project on Unity Simulation](#step-2)
* [Step 3: Keep Track of Your Runs Using the Unity Simulation Command-Line Interface](#step-3)
* [Step 4: Analyze the Dataset using Dataset Insights](#step-4)

### <a name="step-1">Step 1: Set Up Unity Account, Unity Simulation, and Cloud Project</a>

@@ -58,7 +59,7 @@ In order to make sure our builds are compatible with Unity Simulation, we need t
* **:green_circle: Action**: In the window that opens, navigate to the _**Player**_ tab, find the _**Scripting Backend**_ setting (under _**Other Settings**_), and change it to _**Mono**_:

<p align="center">
<img src="Images/mono.png" width="800"/>
</p>

* **:green_circle: Action**: Change _**Fullscreen Mode**_ to _**Windowed**_ and set a width and height of 800 by 600.
@@ -262,7 +263,7 @@ Once the Docker image is running, the rest of the workflow is quite similar to w
* **:green_circle: Action**: In the `data_root = /data/<GUID>` line, the `<GUID>` part will be the location inside your `<download path>` where the data will be downloaded. You can simply remove it so that the data is downloaded directly to the path you previously specified:

<p align="center">
<img src="Images/di_usim_1.png" width="900"/>
</p>

The next few lines of code pertain to setting up your notebook for downloading data from Unity Simulation.
@@ -296,7 +297,7 @@ The `access_token` you need for your Dataset Insights notebook is the access tok
Once you have entered all the information, the block of code should look like the screenshot below (the actual values you input will be different):

<p align="center">
<img src="Images/di_usim_2.png" width="800"/>
</p>


@@ -305,7 +306,7 @@
You will see a progress bar while the data downloads:

<p align="center">
<img src="Images/di_usim_3.png" width="800"/>
</p>


@@ -314,7 +315,7 @@ The next couple of code blocks (under "Load dataset metadata") analyze the downl
* **:green_circle: Action**: Once you reach the code block titled "Built-in Statistics", make sure the value assigned to the field `rendered_object_info_definition_id` matches the id displayed for this metric in the table output by the code block immediately before it. The screenshot below demonstrates this (note that your ids might differ from the ones here):

<p align="center">
<img src="Images/di_usim_4.png" width="800"/>
</p>

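The id check above can also be done programmatically by looking the definition up by name rather than copying it from the table by hand. In the Python sketch below, the definition-list shape, the metric name, and the ids are made-up illustrations; consult the table your own notebook prints for the real values.

```python
# Sketch of matching a metric definition id by name instead of eyeballing the
# notebook's table. The list shape, name, and GUIDs are illustrative only.
def find_definition_id(definitions, name):
    """Return the id of the first metric definition whose name matches."""
    for definition in definitions:
        if definition.get("name") == name:
            return definition["id"]
    raise KeyError(f"no metric definition named {name!r}")

defs = [
    {"id": "5ba92024-b3b7-41a7-9d3f-c03a6a8ddd01", "name": "object count"},
    {"id": "659c6e36-f9f8-4dd6-9651-4a80e51eabc4", "name": "rendered object info"},
]
rendered_object_info_definition_id = find_definition_id(defs, "rendered object info")
print(rendered_object_info_definition_id)  # 659c6e36-f9f8-4dd6-9651-4a80e51eabc4
```

A lookup like this avoids the mismatch the action warns about when ids differ between generated datasets.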
Follow the rest of the steps inside the notebook to generate a variety of plots and stats. Keep in mind that this notebook is provided just as an example, and you can modify and extend it according to your own needs using the tools provided by the [Dataset Insights framework](https://datasetinsights.readthedocs.io/en/latest/).
