Job Details

ID #53734089
State Washington
City Seattle-Tacoma
Full-time
Salary USD TBD
Source Meta
Posted 2025-03-28
Date 2025-03-28
Deadline 2025-05-27
Category Other

Fundamental AI Research Scientist, Multimodal Audio (Speech, Sound and Music) - FAIR

Seattle-Tacoma, Washington 98101, USA

Summary: Meta is seeking Research Scientists to join its Fundamental AI Research (FAIR) organization, focused on making significant advances in AI. We publish groundbreaking papers and release frameworks and libraries that are widely used in the open-source community. The team conducts industry-leading research on building foundation models for audio understanding and audio generation, and works closely with vision research teams to push the frontier of multimodal (audio, video, language) research. Our team's research focuses on audio and multimodality. Individuals in this role are expected to be recognized experts in identified research areas such as artificial intelligence, speech and audio generation, and audio-visual learning. Researchers will drive impact by: (1) publishing state-of-the-art research papers, (2) open-sourcing high-quality code and reproducible results for the community, and (3) bringing the latest research to Meta products that connect billions of users. They will work with an interdisciplinary team of scientists, engineers, and cross-functional partners, and will have access to cutting-edge technology, resources, and research facilities.

Fundamental AI Research Scientist, Multimodal Audio (Speech, Sound and Music) - FAIR Responsibilities:

Develop algorithms based on state-of-the-art machine learning and neural network methodologies

Perform research to advance the science and technology of intelligent machines.

Conduct research that enables learning the semantics of data across multiple modalities (audio, speech, images, video, text, and other modalities).

Work towards long-term ambitious research goals, while identifying intermediate milestones.

Design and implement models and algorithms

Work with large datasets; train, tune, and scale models; create benchmarks to evaluate performance; open-source and publish the results.

Minimum Qualifications:

Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience.

PhD degree in AI, computer science, data science, or related technical fields, or equivalent practical experience.

2+ years of experience holding an industry, faculty, academic, or government researcher position.

Research publications reflecting experience in related research fields: audio (speech, sound, or music) generation, text-to-speech (TTS) synthesis, text-to-music generation, text-to-sound generation, speech recognition, speech / audio representation learning, vision perception, image / video generation, video-to-audio generation, audio-visual learning, audio language models, lip sync, lip movement generation / correction, lip reading, etc.

Familiarity with one or more deep learning frameworks (e.g., PyTorch, TensorFlow, …)

Experience with the Python programming language.

Must obtain work authorization in the country of employment at the time of hire, and maintain ongoing work authorization during employment.

Preferred Qualifications:

First-authored publications at peer-reviewed conferences, such as ICML, NeurIPS, ICLR, ICASSP, Interspeech, ACL, EMNLP, CVPR, and other similar venues.

Research and engineering experience demonstrated via publications, grants, fellowships, patents, internships, work experience, open source code, and / or coding competitions.

Experience solving complex problems and comparing alternative solutions, trade-offs, and diverse points of view.

Experience working and communicating cross functionally in a team environment.

Experience communicating research findings to public audiences of peers.

Public Compensation: $147,000/year to $208,000/year + bonus + equity + benefits

Industry: Internet

Equal Opportunity: Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment. Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at [email protected].
