
Data Software Engineer

Date: 13 Jan 2022

Location: Madrid, ES

Company: Vodafone

The Data Software Engineer delivers data artifacts at industrial scale, bringing local data from 20+ countries into a central “Data Ocean” hosted in the cloud. The role involves designing highly efficient plugins for performance and reliability. Our goal is to create a self-healing solution that reduces operational support by over 70%. Custom components will need to be developed to ensure the privacy and integrity of the data. The Data Software Engineer will determine operational feasibility by evaluating analysis, problem definition, requirements, solution development, and proposed solutions.


Key focus areas for the engineer will be:

  • Work with Business Analysts, Test Engineers and Product Owners in an Agile squad to create artifacts.
  • Design and develop software components based on business requirements, following the software development lifecycle.
  • Maintain an end-to-end vision of the solutions, identifying all interactions between data services and detecting any possible gaps.
  • Build data pipelines that process large volumes of data and generate outputs that enable commercial and business actions delivering incremental value.
  • Deliver and implement core capabilities (frameworks, platform, development infrastructure, documentation, guidelines and support) to speed up Local Markets delivery in the BI and Analytics space.


Core competencies, knowledge and experience:

  • Expert level experience in designing, building and managing data pipelines to process large amounts of data in a Big Data ecosystem.
  • Expert level experience in developing information systems by designing, developing, and installing software solutions.
  • Experience building data pipelines and data layers on Google Cloud Platform.
  • Experience troubleshooting and debugging artifacts.


Must have technical/professional qualifications: 

  • Expert level experience with software development
  • Expert level experience with Hadoop ecosystem (Spark, Hive/Impala, HBase, Yarn)
  • Strong experience in the development of distributed/scalable software components
  • Strong software development experience in Java, Python, and Node.js programming languages; other functional languages desirable
  • Experience with Google Cloud Platform
  • Experience with Unix-based systems, including bash programming
  • Experience in SQL, preferably with knowledge of BigQuery
  • Experience working in an Agile environment