You’ll work across Python, LLM tooling, cloud infrastructure, and Kubernetes to deliver production-ready AI workflows embedded into core platforms.
What you’ll do:
- Build and deploy end-to-end AI solutions used in real-world workflows
- Design and implement LLM orchestration, RAG pipelines, and AI agents
- Develop autonomous workflows using Python, Langflow, and modern AI tools
- Embed AI components into existing software systems
- Work closely with Product, SRE, and Platform teams to ship scalable solutions
- Own AI components in production — from design to monitoring
- Continuously evaluate new AI tools and patterns
What you’ll bring:
- Strong experience building applications in Python
- Hands-on experience developing and deploying LLM-based solutions (RAG, agents, orchestration)
- Solid cloud experience, ideally AWS with Terraform
- Experience deploying workloads in Docker & Kubernetes
- Ability to turn loosely defined business problems into clean technical solutions
- Working understanding of ML concepts (focus is engineering & integration, not research)
Why join:
- Build AI that actually ships, not POCs that die in notebooks
- Work with a modern AI stack and real production workloads
- High-ownership role with strong technical impact
- Backed by a collaborative engineering culture that values speed and quality
Apply below or email your CV directly to josh.kitchin@profusiongroup.com