Job Specification
Before you apply, please get familiar with Luxoft
- Luxoft locations:
- Logeek Magazine:
- Luxoft Alumni Club:
Responsibilities:
- Design and implement scalable data pipelines using Databricks and Kafka
- Build and maintain real-time streaming solutions for high-volume data
- Collaborate with cross-functional teams to integrate data flows into broader systems
- Optimize performance and reliability of data processing workflows
- Ensure data quality, lineage, and compliance across streaming and batch pipelines
- Participate in agile development processes and contribute to technical documentation
Mandatory Skills Description:
- 5+ years of experience in data engineering roles
- Proven expertise with Databricks (Spark, Delta Lake, notebooks, performance tuning)
- Strong hands-on experience with Apache Kafka (topics, producers/consumers, schema registry)
- Solid understanding of streaming frameworks (e.g., Spark Structured Streaming, Flink, or similar)
- Experience with cloud platforms (AWS, Azure, or GCP)
- Proficiency in Python or Scala for data pipeline development
- Familiarity with CI/CD pipelines (GitLab, Jenkins) and agile tools (Jira)
- Exposure to data lakehouse architectures and best practices
- Knowledge of data governance, security, and observability
Project Description:
- We are seeking a skilled and hands-on Data Engineer with proven experience in Databricks, Apache Kafka, and real-time data streaming solutions.
- Start Date: 12.12.2025
- Contact person: Bernd Kraft
- Company: Luxoft Germany, Ludwig-Erhard-Strasse 14
- Telephone:
- Job email:
