Machine Learning Engineer

I design and ship applied ML systems for multimodal search, NLP, and computer vision.

Experience across Adobe, BMW, and Aptiv, spanning data pipelines, model development, and production deployment on AWS.

Currently focused on:

Production-ready ML systems

Building retrieval systems, vision models, and reliable data pipelines with measurable impact.

4+ Years of ML delivery
4 Industry domains
AWS Cloud deployments

Past Experience

Machine Learning Engineer Intern

Mar 2025 — Feb 2026

BMW

Built multimodal retrieval models integrating text + structured data and developed scalable data pipelines on AWS Glue.

Multimodal retrieval · Custom BERT pre-training · Spark/Polars pipelines

Data Science Working Student

Nov 2024 — Feb 2025

Barmer

Prototyped NLP models for extracting data from insurance PDFs with privacy constraints.

Information extraction · Privacy-aware NLP

Machine Learning Engineer Intern

Mar 2024 — Aug 2024

Adobe

Built image classification models for Lightroom and optimized deployment for mobile.

Image classification · Mobile optimization · Vision-language

Machine Learning Engineer Working Student

Oct 2022 — Mar 2024

Aptiv

Implemented semantic segmentation for radar perception and evaluated neural architectures.

Radar segmentation · Neural architecture search

Research Assistant (Econometrics)

Oct 2020 — Oct 2022

University of Wuppertal

Developed forecasting models for financial time series and built interactive learning tools.

Time series forecasting · Educational tooling

Research Assistant (Operations Management)

Jul 2019 — Oct 2022

University of Wuppertal

Analyzed hospital ER operations and simulated complex healthcare systems in Python.

Operational analytics · System simulation

Technical Expertise

Tools I use to move from data to deployed models.

PyTorch

Built end-to-end pipelines from research to production—custom architectures, self-supervised learning, and mobile deployment. Comfortable with the full stack: debugging training dynamics, optimizing compute graphs, and shipping models that actually run on phones.
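A minimal sketch of what "research to phone" means in practice: a toy classifier (stand-in architecture, not one of the models above) compiled with TorchScript so it can be loaded from mobile or C++ runtimes.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a production model.
class TinyClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(8, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyClassifier().eval()
# TorchScript decouples the model from the Python runtime,
# which is what makes on-device deployment possible.
scripted = torch.jit.script(model)
out = scripted(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 10])
```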

Transformers & BERT

Pre-trained my own models on proprietary datasets and fine-tuned for domain-specific tasks. Know the difference between catastrophic forgetting and actually achieving transfer learning. Built retrieval systems that handle the scaling challenges nobody talks about (context length, vector DB bottlenecks, tokenizer mismatches).
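The retrieval core is simple once embeddings exist; the hard parts are everything around it. A toy sketch of cosine-similarity top-k over precomputed vectors, where the embeddings are random stand-ins for the output of a BERT-style encoder:

```python
import numpy as np

def top_k(query_vec: np.ndarray, doc_matrix: np.ndarray, k: int = 3) -> np.ndarray:
    """Indices of the k rows of doc_matrix most cosine-similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_matrix / np.linalg.norm(doc_matrix, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(-scores)[:k]

# Stand-in embeddings; in practice these come from an encoder + vector DB.
rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 64))
query = docs[42] + 0.01 * rng.normal(size=64)  # near-duplicate of doc 42
print(top_k(query, docs, k=3)[0])  # 42
```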

Polars & Spark

Single-machine speed with Polars when it matters, Spark for problems that actually need distribution. Spent enough time optimizing ETL to recognize when you're fighting partitioning schemes instead of solving the real bottleneck. Handle terabyte-scale datasets comfortably.

MLflow & Experiment Tracking

Managed collaborative ML projects where you can't just train in notebooks. Know how to structure experiments so insights are reproducible, not scattered across Slack messages and lost runs.

SQL & DuckDB

Advanced query work—window functions, CTEs, recursive queries for real problems. DuckDB for when you need OLAP speed without the infrastructure tax. Comfortable writing SQL that's both correct and efficient at scale.
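A runnable sketch of the two query patterns above, using Python's built-in sqlite3 so it runs anywhere; the same SQL works unchanged in DuckDB.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales (region TEXT, amount INTEGER);
INSERT INTO sales VALUES ('north', 10), ('north', 30), ('south', 20);
""")

# Window function: per-region running total without collapsing rows.
rows = con.execute("""
    SELECT region, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY amount) AS running
    FROM sales
""").fetchall()
print(rows)

# Recursive CTE: generate 1..5 without a numbers table.
nums = con.execute("""
    WITH RECURSIVE n(x) AS (
        SELECT 1
        UNION ALL
        SELECT x + 1 FROM n WHERE x < 5
    )
    SELECT x FROM n
""").fetchall()
print([x for (x,) in nums])  # [1, 2, 3, 4, 5]
```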

AWS Cloud

Production ML infrastructure: data lakes with S3, serverless ETL through Glue, compute on EC2. Infrastructure-as-code with Terraform so deployments are repeatable, not heroic.

Docker & MLOps

Containerized pipelines that actually work when someone else runs them.

Computer Vision & Mobile ML

Image classification, semantic segmentation, and the hard part—optimization. Reduced latency and memory footprint to make models viable for on-device inference.
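The memory side of that optimization mostly comes down to quantization. A NumPy sketch of the underlying affine int8 arithmetic, independent of any specific mobile toolchain:

```python
import numpy as np

def quantize_uint8(w: np.ndarray):
    """Affine (asymmetric) 8-bit quantization of a weight tensor."""
    scale = (w.max() - w.min()) / 255.0
    zero_point = np.round(-w.min() / scale)
    q = np.clip(np.round(w / scale + zero_point), 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: float) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.default_rng(1).normal(size=(64, 64)).astype(np.float32)
q, scale, zp = quantize_uint8(w)
err = np.abs(dequantize(q, scale, zp) - w).max()
print(w.nbytes, "->", q.nbytes)  # 4x smaller than float32
print(err)                       # bounded by the quantization step
```

The 4x size reduction is the easy win; keeping the worst-case error within one quantization step is what makes the model still usable after compression.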