Job Details

ID: #53964227
State: New Hampshire
City: Sydney
Job type: Full-time
Source: Ignition
Posted: 2025-06-04
Date: 2025-06-04
Deadline: 2025-08-03
Category: Etcetera

Large Language Model (LLM) Engineer

New Hampshire, Sydney

About the role:

As a Prompt Engineer and LLM Specialist at Ignition, you'll play a critical role in leveraging Large Language Models (LLMs) to drive solutions that matter to customers. You'll collaborate closely with Product Managers, Engineers, and Data Scientists to optimize and refine prompt designs, evaluate models, and ensure our AI-driven initiatives deliver meaningful outcomes efficiently.

Reporting to the Head of Data and Analytics, your work will balance technical expertise with practical business acumen, empowering stakeholders to understand when and how to best utilize LLMs and related technologies. You'll foster an environment of continuous learning, experimentation, and pragmatism in adopting AI solutions.

What your day to day will look like:

- Collaborate with cross-functional teams, including Product Managers and Engineers, to identify opportunities where LLMs provide significant value compared to traditional methods.
- Iterate, test, and refine prompts for Large Language Models, focusing on maximizing accuracy, relevance, and efficiency.
- Assess various LLM providers and models, balancing factors such as accuracy, cost-effectiveness, latency, and ease of integration.
- Implement advanced AI techniques such as function calling and agent orchestration (e.g., coordinating multiple agents to complete tasks like data enrichment or summarisation); a minimal sketch follows this list.
- Apply retrieval and response enhancements such as hybrid retrieval, query rewriting, reranking, and guardrails to improve accuracy and control (see the retrieval sketch after this list).
- Utilize low-code tools (e.g., N8N, Zapier) to integrate and automate LLM workflows efficiently across the business.
- Prototype and deliver internal LLM-powered tools (e.g., knowledge assistants, automated code snippets, ticket routing agents) to boost efficiency across support, ops, and engineering workflows.
- Contribute to the stability and cost-efficiency of LLM pipelines by implementing caching, monitoring, and performance optimization strategies (see the caching sketch after this list).
- Provide guidance and best practices on AI ethics, transparency, and model governance, proactively addressing risks related to bias, compliance, and security.
- Communicate insights, progress, and technical decisions clearly to both technical and non-technical stakeholders, ensuring alignment and understanding across the business.
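To make the function-calling item concrete, here is a minimal sketch. It assumes the OpenAI Python SDK (openai>=1.0), the gpt-4o-mini model, and a hypothetical enrich_contact tool; other providers expose comparable tool-calling interfaces, so treat the names as placeholders rather than a prescribed stack.

# Minimal function-calling sketch (assumed: OpenAI SDK, hypothetical enrich_contact tool).
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "enrich_contact",  # hypothetical internal tool
        "description": "Look up firmographic details for a customer domain.",
        "parameters": {
            "type": "object",
            "properties": {"domain": {"type": "string"}},
            "required": ["domain"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; swap for whichever provider/model is selected
    messages=[{"role": "user", "content": "Enrich the account for acme.com"}],
    tools=tools,
)

# If the model chose to call the tool, read out the structured arguments it produced.
for call in response.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    print(call.function.name, args)  # e.g. enrich_contact {'domain': 'acme.com'}

In an agent-orchestration setting, the dispatched tool result would be appended to the message history and the model called again, repeating until no further tool calls are requested.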
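The hybrid-retrieval item can be illustrated with one common building block, reciprocal rank fusion, which merges a keyword (e.g. BM25) ranking with a vector ranking before any reranking or guardrail step. This is a self-contained sketch with placeholder document IDs, not a full pipeline.

# Reciprocal rank fusion: merge several ranked lists of document IDs into one ranking.
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Score each doc by summing 1/(k + rank) across every ranking it appears in."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc_7", "doc_2", "doc_9"]   # e.g. from a BM25 index
vector_hits  = ["doc_2", "doc_5", "doc_7"]   # e.g. from an embedding index
print(reciprocal_rank_fusion([keyword_hits, vector_hits]))
# ['doc_2', 'doc_7', 'doc_5', 'doc_9']  -- docs appearing in both lists rise to the top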
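The caching item might look like the sketch below: a content-hash keyed cache in front of the model call so repeated prompts do not incur fresh API cost or latency. The call_model argument and the in-memory dict are stand-ins; a real deployment would wrap the provider SDK and use a shared store such as Redis.

# Minimal response-caching sketch (assumed: a call_model(prompt) callable as the LLM backend).
import hashlib

_cache = {}

def cached_completion(prompt, call_model, model="gpt-4o-mini"):
    """Return a cached response when the exact prompt/model pair has been seen before."""
    key = hashlib.sha256(f"{model}:{prompt}".encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)  # only pay for a model call on a cache miss
    return _cache[key]

# Usage: the second identical prompt is served from the cache, not the model.
fake_model = lambda p: f"answer to: {p}"
print(cached_completion("Summarise this ticket", fake_model))
print(cached_completion("Summarise this ticket", fake_model))  # cache hit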
