Data Engineer – Spark, Scala/Java
Key Responsibilities:
- Developing, managing, and optimizing data pipelines.
- Engaging with team developers and architects to shape the process implementation framework.
- Estimating workload and regularly updating progress on solution development.
- Cooperating closely with stakeholders to refine solution details and validate requirements.
- Supporting business teams in understanding technical possibilities, offering optimal solutions, and clarifying constraints.
- Establishing efficient workflows and proposing and executing process improvements.
Desired Qualifications:
- Strong proficiency in Scala or Java.
- Minimum 4–5 years of relevant experience.
- Hands-on expertise with Apache Spark.
- Familiarity with CI/CD methodologies and tools such as GitHub Actions.
- Knowledge of cloud platforms (GCP) and infrastructure as code (Terraform) is a plus.
- Experience with Airflow is an added advantage.
- Background in data pipeline testing is beneficial.
- A business- and product-centric approach with direct experience in stakeholder collaboration.
- Banking sector experience is a plus.
- Prior work experience in Scrum and agile methodologies.
- Strong analytical skills with the ability to address intricate technical challenges.
- Passion for tackling challenges that foster both personal and professional development.
- A collaborative mindset with a willingness to learn and grow.
- Proficiency in English is essential.