Job description
Digifloat is looking for a passionate, motivated, eager-to-learn data enthusiast.
Responsibilities:
- Turn raw data into trustworthy analytical datasets that shorten the time to market for insights and reduce the cost of data consumption
- Manage these datasets (and the related data pipelines) as a product with all the responsibilities that come with it:
- Contribute to the data product roadmap, which we prioritize as a team on a quarterly basis to fuel the overall company roadmap and initiatives with analytics
- Align our data products with the overall data architecture, governance, standards, and guidelines
- Design, build, and test our data products:
- Meeting the functional & non-functional requirements
- Meeting the high standards for data ingestion, processing & storage
- Applying the right level of trust (quality, privacy, security) for their purpose
- Communicate regularly with our stakeholders to increase data product awareness & usage
- Strive continuously to reduce manual or repetitive efforts through data innovation & automation.
- Co-create our high-value transformation cases together with data & analytics experts (like insights & reporting analysts, data scientists, application owners, functional/business analysts, data architects…)
- Work in a young and dynamic team of data analysts and engineers, operating in a self-organizing, company-scale agile context
- The variety of data-related work is broad, ranging from purely technical to more functional or a combination of both, so you can shape a role that fits your interests and evolve it over time.
- You work with large volumes of network and viewing data.
Requirements:
- + years of experience in a Data Engineering role
- Master's or bachelor's degree in Computer Science (or equivalent through experience)
- You have proven experience in:
- Existing and upcoming data technologies (Hadoop, Scala, Spark, Kafka, SQL, Flink, Cassandra, ELK, …), with a drive to stay on top of new evolutions
- Designing and building cloud-based data solutions, specifically on AWS (IAM, S3, Glue, EMR, EKS, SageMaker, …)
- Knowledge of or experience with the following technologies is a plus. If you don’t master these yet, don’t worry! We can provide tailored in-house training and coaching when required:
- System administration on Unix/Linux, plus knowledge of Docker or Kubernetes and containerization
- Knowledge of proven software development best practices (coding standards, CI/CD, infrastructure as code)
- You are eager to learn and enjoy variety in your job, working across different scopes
- You have a knack for spotting optimization potential and know how to realize it with robust engineering
- You possess good communication skills and feel comfortable challenging others
- You get things done and take end-to-end ownership
- You are pragmatic in your approach, using the right techniques to deliver the needed quality and detail while avoiding unnecessary complexity