Job Description
EPFL, the Swiss Federal Institute of Technology in Lausanne, is one of the most dynamic university campuses in Europe and ranks among the top 20 universities worldwide. EPFL employs more than 6,500 people supporting the three main missions of the institution: education, research and innovation. The EPFL campus offers an exceptional working environment at the heart of a community of more than 17,000 people, including over 12,500 students and 4,000 researchers from more than 120 different countries.

Research Engineer - NLP & Large Language Models

About the Role

We are seeking a Research Engineer in Natural Language Processing (NLP) and Large Language Models (LLMs) to contribute to the design, training, and evaluation of next-generation foundation models. The role sits at the intersection of research and production-grade engineering, with a strong emphasis on post-training, multimodality, and advanced generative modeling techniques, including diffusion-based approaches. You will work closely with researchers and applied scientists to translate novel ideas into scalable, reproducible systems and to push the state of the art in open, responsible, and multilingual AI.

Key Responsibilities

- Design, implement, and maintain training and post-training pipelines for large language and multimodal models (e.g., instruction tuning, alignment, preference optimization)
- Conduct research and engineering on post-training methods
- Contribute to multimodal modeling, integrating text with modalities such as vision, speech, or audio
- Explore and apply diffusion-based models and hybrid generative approaches for language and multimodal representation learning
- Optimize large-scale training and inference
- Develop evaluation pipelines and benchmarks for language understanding, reasoning, alignment, and multimodal performance
- Collaborate with researchers to prototype new ideas, reproduce results from the literature, and contribute to publications or technical reports
- Ensure code quality, reproducibility, and documentation suitable for long-term research and open-source release

Required Qualifications

- MSc or PhD in Computer Science, Machine Learning, AI, or a related field (or equivalent practical experience)
- Strong background in NLP and deep learning, with hands-on experience working with large language models
- Solid programming skills in Python, with experience using modern ML frameworks (e.g., PyTorch)
- Experience working with open-weight or open-data models, including releasing models, datasets, or benchmarks
- Familiarity with post-training techniques for LLMs (e.g., instruction tuning, preference optimization, alignment)
- Strong experimental rigor: ability to design controlled experiments, analyze results, and iterate efficiently

Desired / Bonus Qualifications

- Experience with diffusion models (e.g., text diffusion, latent diffusion, or multimodal diffusion)
- Hands-on work on multimodal models (e.g., text-image, text-audio, speech-language systems)
- Exposure to LLM alignment, safety, or evaluation beyond standard language modeling metrics
- Experience with distributed training and large-scale model experimentation
- Familiarity with multilingual or low-resource language settings
- Contributions to open-source ML or published research in NLP, multimodality, or generative modeling

What We Offer

- A research-driven environment with access to large-scale compute and modern ML infrastructure
- Close collaboration with leading researchers in NLP, multimodality, and generative modeling
- The opportunity to work on foundational, open, and socially responsible AI systems
- Support for publishing research, contributing to open-source projects, and engaging with the broader research community
- Competitive compensation and benefits, commensurate with experience

Information

- Contract Start Date: to be agreed upon
- Activity Rate: 100%
- Duration: 1 year, renewable
- Contract Type: Fixed-term contract (CDD)