Please note:
Please address each requirement and wish individually, answering YES with a brief explanation, in addition to your motivation letter and CV.
We would like to receive a CV that clearly demonstrates that you meet the mandatory requirements. Please also make this explicit in your motivation letter.
Although we work with a deadline, the project may close earlier. We therefore advise you to respond as soon as possible.
We are looking for an experienced Cloud Data Engineer with a strong focus on Google Cloud Platform (GCP) to join a team dedicated to designing and maintaining advanced data solutions. In this role, you will use GCP services to build and optimize data pipelines, ensuring the scalability, performance, and reliability of our data platforms. Working with cross-functional product teams, you will help translate complex business needs into innovative, data-driven solutions.
Responsibilities:
Develop and maintain efficient, scalable data pipelines using Google Cloud Platform (GCP).
Design and implement solutions leveraging core GCP services such as BigQuery, Dataproc, and Pub/Sub.
Ensure effective data modeling and storage using columnar warehouses such as BigQuery and NoSQL databases such as MongoDB.
Collaborate with product teams to deliver solutions tailored to business requirements.
Work with event-driven architectures, using tools like Apache Kafka and GCP's streaming capabilities.
Optimize costs and efficiency in cloud environments (FinOps).
Ensure stability, performance, and continuity of GCP-based BI and Big Data tools.
Requirements:
A Bachelor's degree (or equivalent work experience) in Computer Science, Software Engineering, or a related field.
At least 4 years of hands-on experience building production-grade data systems, with a focus on Google Cloud Platform.
Expertise in:
GCP services: BigQuery, Dataproc, Pub/Sub, and Cloud Storage.
Data processing tools: Apache Spark, especially on Dataproc.
CI/CD practices: using Git and GitHub Actions for deployment pipelines.
Data modeling and warehousing in a cloud environment.
Event streaming tools such as Apache Kafka or GCP Pub/Sub.
Infrastructure-as-code tools like Terraform and containerization using Docker.
Knowledge of programming languages such as Python, Scala, or Java.
Familiarity with the Hadoop ecosystem and related tools.
Strong analytical and problem-solving skills, with a solid understanding of algorithms and data structures.
Experience with data management tools (e.g., Collibra) is a plus.
Nice to have:
Experience with Spark Structured Streaming and Kafka Connect.
Affinity with Machine Learning or Operations Research concepts.
Knowledge of Agile methodologies.
If you are passionate about working on Google Cloud Platform and are eager to develop cutting-edge data solutions in a collaborative and innovative environment, we encourage you to apply.