Hello,
Thank you for sharing your excellent work on humanoid whole-body control using reinforcement learning.
I have two questions and a request regarding the acquisition of the Human Videos dataset.
- When collecting Human Videos, should the environment be configured to closely match the task environment used in simulation?
For example, in the carry_and_place_bread_box task shown in Fig. 2 of the paper, should the positions of support 0 and support 1 in the Human Videos be nearly identical to their positions in the simulation environment?
- Would it be possible to share the original RGB videos for the carry_and_place_bread_box task?
If sharing is possible, I would appreciate it if you could send them via email or reply through GitHub.
(Email address: sinanju06@hanyang.ac.kr)
I understand that you may be busy with follow-up research, but I would greatly appreciate it if you could reply at your convenience.
Thank you very much!