AI Engineer
Own our AI models end-to-end: dataset strategy, training/fine-tuning, evaluation, optimization, and deployment to real hardware in the field. You'll turn messy real-world streams into robust, low-latency inference pipelines.
What you'll do
Design, train, and fine-tune text/audio/vision models (e.g., DistilRoBERTa, wav2vec2, YOLO) for threat and aggression detection.
Build reproducible training pipelines (Hugging Face / PyTorch / SpeechBrain), including PEFT/LoRA, adapters, and transfer learning.
Optimize for real-time: quantization, pruning, ONNX/TensorRT, mixed precision, batching, caching.
Ship models to edge & cloud with CI/CD, versioning, and rollback; instrument latency and accuracy SLAs.
Create data pipelines: collection, labeling, augmentation/synthesis, dataset versioning (DVC).
Collaborate with backend/infra on streaming (RTMP/TCP), Pub/Sub, autoscaling, and observability (a sketch of this kind of work follows below).
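To give candidates a taste of the day-to-day, here is a minimal sketch, assuming Hugging Face Transformers and PEFT, of the kind of LoRA fine-tuning and ONNX export described above. The model name, label count, hyperparameters, and file names are illustrative placeholders, not our production setup.

# Illustrative sketch only: LoRA fine-tuning of a DistilRoBERTa classifier
# with PEFT, then merging the adapters and exporting to ONNX for low-latency
# inference. All names and hyperparameters below are placeholders.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "distilroberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Wrap the base model with LoRA adapters on the attention projections.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# ... fine-tune on a labeled dataset here (Trainer or a custom loop) ...

# Merge the adapter weights back into the base model and export to ONNX.
merged = model.merge_and_unload()
merged.eval()
merged.config.return_dict = False  # export plain tuple outputs

dummy = tokenizer("example utterance", return_tensors="pt")
torch.onnx.export(
    merged,
    (dummy["input_ids"], dummy["attention_mask"]),
    "classifier.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "seq"},
        "attention_mask": {0: "batch", 1: "seq"},
        "logits": {0: "batch"},
    },
    opset_version=17,
)

The exported classifier.onnx can then be served with ONNX Runtime or converted to a TensorRT engine for edge deployment.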
Must haves
Python wizardry and strong PyTorch.
Hands-on experience with audio/video AI: Hugging Face Transformers, SpeechBrain or torchaudio, and at least one detection/pose stack (YOLO, MediaPipe).
Production MLOps: experiment tracking, model registries, CI/CD, model monitoring.
Comfortable with GCP, containers, and GPUs.
Located in London or ready to relocate.
Nice to haves
Real-time inference experience: streaming systems, low-latency audio (VAD/Whisper/wav2vec2), and CV pipelines.
Security/privacy by design (PII handling, encryption, GDPR).
Experience tuning multi-objective metrics (precision/recall trade-offs for safety-critical apps).
The deal:
Full-time
London-based (or ready to relocate)
One-month probation to confirm you have what it takes
Career-defining opportunity
Competitive London salary + share options