Offer related to the Action/Network: TIDS/– — –
Laboratory/Company: ICube Laboratory, Strasbourg
Duration: 6 months
Contact: seo@unistra.fr
Publication deadline: 2025-03-31
Context:
Human motion generation is a key task in computer graphics, crucial for applications involving virtual characters, such as film production and virtual reality experiences. Recent deep learning methods, particularly generative models, have started to make significant contributions in this domain. While early neural methods focused on the unconditional generation of vivid and realistic human motion sequences, more recent methods guide the motion generation with various conditioning signals, including action class, text, and audio. Among them, diffusion-based models have proven especially successful and currently dominate the research frontier.
Subject:
Motivated by these recent successes, we will develop an action-conditioned human motion generator based on a diffusion model. In particular, we will aim at the generation of daily actions in residential settings, with a view to augmenting training data for action recognition models. To achieve this goal, we will deploy a diffusion-based motion generation model, building on our previous work. To condition the generation on an action class or a text description, we will adopt CLIP as a text encoder to embed the text prompt and use trainable tensors as embeddings for the different action classes.
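As a rough illustration of the conditioning scheme described above, the sketch below (in plain NumPy, with hypothetical names and dimensions) shows how a trainable embedding table for action classes could be combined with the diffusion timestep embedding to form the conditioning vector fed to the denoiser; in the actual project, a CLIP text encoder would supply the embedding when conditioning on free-text prompts instead.

```python
import numpy as np

NUM_ACTIONS = 12   # hypothetical number of daily-action classes
EMBED_DIM = 64     # hypothetical conditioning dimension

# Trainable lookup table: one embedding vector per action class
# (stands in for the "trainable tensor" mentioned in the subject).
action_embeddings = np.random.default_rng(0).normal(size=(NUM_ACTIONS, EMBED_DIM))

def timestep_embedding(t, dim=EMBED_DIM):
    """Standard sinusoidal embedding of the diffusion timestep t."""
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)
    angles = t * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])

def conditioning_vector(action_id, t):
    """Sum the action-class embedding and the timestep embedding,
    a common way to condition a diffusion denoiser."""
    return action_embeddings[action_id] + timestep_embedding(t)

c = conditioning_vector(action_id=3, t=250)
print(c.shape)  # (64,)
```

This is only a sketch of the conditioning pathway, not of the full denoising network; in practice the combined vector would be injected into each block of the motion denoiser.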
Candidate profile:
− Solid programming skills in Python
− Working knowledge of Blender for 3D modeling and animation
− Experience in deep learning (diffusion models)
− Good communication skills
Required education and skills:
Work address:
2 Rue Marie Hamm
67000 Strasbourg
Attached document: 202411071348_Stage-3D Human Motion Diffusion Model.pdf