visualization #1
Comments
Hello, did you run demo.py first and see the result, or did you just run "python Visualizations/simple.py"?
Thanks for your reply. I also tried running demo.py for the dance sample first and then running Visualizations/simple.py, but I can't get a result with the ground plane like in your YouTube video. By the way, how can I get the floor position for my own data, like "sample_data/sample_floor_position.npy"?
Hi, I also ran the code with my own data, and I found that there is very little bend in the right elbow in all test videos. Maybe something is wrong, but I don't know what. Can you help me? Thank you very much!
Aha, it seems I have uploaded the wrong version of the trained models. Let me check on our end. Apologies for the trouble.
If you are recording your own sequences, you can place a checkerboard on the floor and use it as the world frame when calibrating the cameras, i.e., the world frame is where the floor is. For estimating the floor position in a YouTube video, you can, for example, 1) run an off-the-shelf 3D pose estimator (e.g. VNect, http://vcai.mpi-inf.mpg.de/projects/VNect/), 2) obtain the trajectory of the foot 3D positions over the sequence, and 3) fit a plane to that trajectory. This gives you a rough estimate of the floor position.
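As a minimal sketch of step 3 (plane fitting), assuming the foot trajectory has already been exported as an (N, 3) NumPy array; the file names and the saved format here are hypothetical and may not match what the demo expects for sample_floor_position.npy:

```python
import numpy as np

# Hypothetical input: per-frame 3D foot positions from an off-the-shelf
# estimator such as VNect, stacked into an (N, 3) array.
foot_positions = np.load("foot_trajectory.npy")

centroid = foot_positions.mean(axis=0)
# Least-squares plane fit: the right singular vector with the smallest
# singular value of the centered points is the plane normal.
_, _, vh = np.linalg.svd(foot_positions - centroid)
normal = vh[-1]

# Rough floor estimate: a point on the plane (the centroid) plus its normal.
np.save("my_floor_position.npy", np.concatenate([centroid, normal]))
```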
Thanks for your answer!
I am looking forward to the corrected version of the trained models!
@catherineytw Hi, I got the same results. Have you tried to modify the
I downloaded rbdl v2.6.0. When I compiled it like this, something went wrong. I tried some methods, but it did not work. Did you encounter this problem? How did you install this package? Looking forward to your reply.
@catalyster Try rbdl-orb; compile issues should not be discussed here, in my opinion.
I used the example data and didn't change anything. The character moved strangely, and I had no idea what was wrong. Perhaps the pre-trained model provided in the link was not the best model, as he said in the earlier replies.
Hi, can you try the updated pre-trained model? You can download the networks from the link in the readme.md. I believe the non-bending-elbow issue should be resolved.
It seems your system failed to locate NumPy. Perhaps you can try adding the path to NumPy explicitly in Makefile.txt; the path can be obtained from "np.get_include()". This is a quick but hacky solution. If it does not solve the issue, please open an issue in the rbdl repository (https://github.com/rbdl/rbdl), not here. I can't do anything about rbdl.
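For illustration, a quick way to find the NumPy header directory (the build-file details are an assumption, not part of rbdl's documented build):

```python
# Run in the same Python environment used to build the rbdl Python bindings.
import numpy as np

# Prints the directory containing NumPy's C headers; add it as an include
# path (e.g. an extra -I flag) in the Makefile / build configuration so the
# compiler can find them.
print(np.get_include())
```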
@soshishimada Thanks for the new model. The right-elbow issue seems resolved, but the results still show foot-slide and body-bend issues. Will these be addressed in the future? I forgot that the results are influenced by the floor position; I will add the floor and test it when I have spare time.
I have added the camera and floor parameters, and the results seem better.
@catherineytw Hi, the new pre-trained models were updated, but I found a foot-floor penetration problem. Can you share how you transform the floor to get a correct visualization like your gif?
The result was generated by the latest version of the pre-trained model with the default floor transform parameters; I didn't change anything. By the way, I am very curious how to get a front-facing character: I used the default URDF and got a back-facing character.
You need to modify the RT in simple.py (https://github.com/soshishimada/Neural_Physcap_Demo/blob/master/Visualizations/simple.py#L23), but the result with sample_floor_position.npy is still wrong.
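One way to turn the back-facing character around is to compose the transform with a 180-degree rotation about the vertical axis. This is a hypothetical sketch only: it assumes RT in Visualizations/simple.py is a 4x4 homogeneous transform and that y is the up axis, which may not match the actual script.

```python
import numpy as np

# Hypothetical stand-in for the RT defined around simple.py#L23.
RT = np.eye(4)

# 180-degree rotation about the y (up) axis: negates x and z.
flip_y = np.diag([-1.0, 1.0, -1.0, 1.0])

# Composing the flip with the original transform turns the character
# around so it faces the camera instead of away from it.
RT = flip_y @ RT
```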
Hello, I tested the model on my own videos, but even when I change the floor values (by estimating the floor parameters from the best-fit plane of the toes' 3D coordinates over the video), the results still show foot-slide and heavy body-bend issues. On the other hand, changing the camera values did not change the results at all. @ykk648 could you explain how you changed the floor and camera parameters, and how much this improved your results? Your help would be much appreciated. Thank you very much!
Hi, at first I tested a front-facing dancing video and the result was bad. Then I updated the code and tested on a new indoor video with calibration obtained from a ChArUco board, using the board location as the floor position. Here are some results; when I said it was better, I meant better than the dancing video. The body bend may be caused by camera distortion, and the foot slide may be caused by jittery keypoints. I did no more tests after that. simplescreenrecorder-2022-08-02_18.03.16.mp4 (Note: the camera is seriously distorted.)
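For anyone trying the same approach, here is a hedged sketch of how a calibration-board pose could be turned into a floor estimate. The rvec/tvec values are placeholders, and the board detection step (which is OpenCV-version dependent) is omitted; this is not code from the repo.

```python
import cv2
import numpy as np

# Placeholder board pose in camera coordinates, e.g. from ChArUco calibration:
# rvec is a Rodrigues rotation vector, tvec a translation.
rvec = np.array([0.1, -0.2, 0.05], dtype=np.float64)  # hypothetical values
tvec = np.array([0.0, 0.8, 2.5], dtype=np.float64)    # hypothetical values

R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation mapping board axes into the camera frame
floor_point = tvec           # the board origin lies on the floor
floor_normal = R[:, 2]       # the board's Z axis gives the floor normal in camera coords
```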
Thank you very much for your answer.
I'm not sure what you really mean by "because the author's work is to obtain the pose and trans of the model through 2D points." Regarding the manipulation of the character, as written in the paper, we apply a corrective force at the root joint to prevent the character from falling down. Unfortunately, I'm not able to update the repository at the moment due to a reason on our end, but I'm happy to have a discussion, and I believe it's much more efficient to chat if you have questions regarding the theory. Please email me (the address is on our project page) if you wish to talk :)
Hi, I have run your visualization code, but I can't get excellent results like those in your YouTube video (https://www.youtube.com/watch?v=8JhUjzFAMJI&t=327s). Can you share some samples? Thank you very much!