Our Global Data Platform (GDP) is built on Azure Cloud, and both the platform and the squads are growing fast. We strive towards high-performing, self-managing DevOps squads, where squad members take responsibility and are willing to learn. Within the squad, where everyone works closely together, each member shares expertise and has their own specialism. Besides creating new functionality, the DevOps squads are also responsible for maintenance, configuration, security, and processes and procedures.
Within Tribe Data & Analytics you will work in the centre of the data driven enterprise. The tribe contains 6 areas:
- Global Data Platform
- Analytics Platform
- Data & Factory Services
- Data Science
- Business Intelligence
- Customer Analytics
As a Software Engineer you can make a difference
At this moment, several DevOps squads share the responsibility for the set-up, development and maintenance of the GDP for Rabobank. You work together with the squads to contribute to the objectives of the GDP, such as ensuring reliable data. We are currently looking for a Software Engineer to strengthen the Data Storage Governance squad, which is specialized in storing and providing metadata of the platform to facilitate governance controls. As a Software Engineer in the Data Storage Governance squad, you define and develop solutions together with the team to provide self-service data governance capabilities that enable the platform and its users to comply with laws and regulations, such as the GDPR.
For the Data Reliability squad, we are looking for a Software / Data Engineer with experience in cloud-based projects.
You will be responsible for:
- Bringing in your experience and expertise in data intensive projects. You feel comfortable applying your development and soft skills in a DevOps squad. You are motivated to learn and improve in a flexible and dynamic environment.
- Developing and maintaining a new challenging project for data observability and reliability.
- Understanding the architectural design already in place and implementing it in Python accordingly.
- Challenging, advising, and aligning possible solutions with solution architects and squad members when needed.
- You bring people together to get things done.
- Building a highly scalable, governed and secured solution in a data heavy environment.
Experience:
- Data engineering with the PySpark framework
- Proficiency in the Python programming language
- Excellent debugging skills
- Knowledge of frameworks
- Core Python concepts (data structures, exception handling, object-oriented programming, multi-threading, packages, functions, upgrading versions, generators, iterators)
- Writing readable code with proper documentation
- Using the Python shell
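To illustrate the generator/iterator concepts listed above, here is a minimal, hypothetical sketch (the function name and scenario are illustrative, not part of the role):

```python
def batched(items, size):
    """Generator that yields lists of at most `size` items.

    Illustrates generators/iterators: values are produced lazily,
    so arbitrarily large inputs can be processed in constant memory.
    """
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch  # hand a full batch to the caller, then resume here
            batch = []
    if batch:
        yield batch  # emit any remaining partial batch

# Usage: consume the generator with a for-loop or list().
chunks = list(batched(range(7), 3))  # [[0, 1, 2], [3, 4, 5], [6]]
```

Because `batched` yields rather than returns, iteration can start before the input is fully read, which is the same pattern PySpark and other data-intensive frameworks rely on for streaming data.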
Knowledge of Azure services:
- Azure Resource Manager (ARM) templates & Bicep
- Azure Cosmos DB
- Azure Databricks
- Security (Virtual Networks, Firewalls and IAM)
- Azure Event Hub
- Azure Event Grid
- Building and maintaining Python APIs
- Frameworks/libraries: FastAPI, SQLAlchemy
- Unit/integration testing: Pytest
- Source and target: Azure Cosmos DB
- Hosting services: Azure Web Apps, Kong API Gateway, Container Apps, Function Apps
- Experience with CI/CD in Azure DevOps: you are capable of using Azure DevOps to deploy your code through the Development, Test, Acceptance and Production (DTAP) environments.
- YAML pipelines
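As an illustration of the YAML pipelines mentioned above, a minimal, hypothetical Azure DevOps pipeline sketch (stage, environment and file names are assumptions, not this team's actual pipeline):

```yaml
# Hypothetical sketch: build/test, then deploy through DTAP-style stages.
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: Test
        steps:
          - task: UsePythonVersion@0
            inputs:
              versionSpec: '3.11'
          - script: |
              pip install -r requirements.txt
              pytest
            displayName: Run unit tests

  # Test environment; Acceptance and Production stages would follow the same pattern.
  - stage: DeployTest
    dependsOn: Build
    jobs:
      - deployment: Deploy
        environment: test  # assumed environment name
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deployment step placeholder"
```

A real pipeline would add approvals on the Acceptance and Production environments and deployment tasks for the hosting services listed above.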
Nice to have:
- DP-200 - Implementing an Azure Data Solution
- DP-900 - Azure Data Fundamentals
- DP-203 - Azure Data Engineer Associate
- AZ-900 - Azure Fundamentals
- Azure Data Factory knowledge
- Azure Databricks knowledge
- K-SQL knowledge
Competences:
- Strong communication skills
- Critical thinker
- Proactive
- Working together
- Providing feedback
- Willing to develop further in Azure
- Strong information/data analysis skills
- A customer-focused mindset and a structured way of working
- Quick learner
- Curiosity
- We will hold the interviews through a video call.
- A security check is part of the process.
- We respect your privacy.
The salary ranges from €4,292 to €6,131.