
Big Data DevOps Engineer


Date: 24 Jul 2021

Location: Istanbul, TR

Company: Vodafone


Join our journey as we connect for a better future. Ready?

We are looking for a Big Data DevOps Engineer


#Vodafonespirit

 
Our purpose at Vodafone is to connect for a better future. As a Global Communications Technology company, we put the customer at the heart of everything we do. We are forever challenging, pushing boundaries and discovering innovative ways to connect our customers with their digital societies.

We connect people, businesses, and communities across the globe to create the future. We earn customer loyalty, experiment, learn fast and get it done, together. As you can imagine, this means that we have a vibrant and diverse mix of skills and people making Vodafone a great place to work.


Role Purpose

Big Data DevOps Engineer at Vodafone Turkey

-    We are looking for a Big Data DevOps Engineer with experience in designing infrastructure and installing, managing, and operating applications/flows that process large amounts of data in a Hadoop/Spark ecosystem. You’ll provide expert guidance to source and integrate structured and unstructured data from dozens of local data sources, including streaming data feeds, into a data lake.
-    As a Big Data DevOps Engineer, you will work closely with the team and our stakeholders to manage, operate, and deliver the desired data with our Hadoop-based solutions for a next-generation Big Data Analytics platform.
-    You will focus on monitoring, management, the CI/CD pipeline, and capacity planning for real-time data streaming, data management, data quality, and data security, and you will maintain the systems that process huge volumes of data.
-    At Vodafone, we don’t just produce creative products, we develop amazing people too. We are driven to empower people. We are committed to helping our people perform at their best and achieve their full potential.


Your Place in the Team


-    You’ll be the 24/7 responsible service owner for data flows, ELTs, real-time and batch data applications, and data analytics models on Cloudera Big Data platforms and open-source applications; a sketch of such a streaming application follows this list.
-    You’ll manage, operate, performance-tune, monitor, and plan capacity for high-performing, stable Big Data applications that perform complex processing of petabyte-scale data in a Hadoop-based environment.
-    You’ll be responsible for the CI/CD pipeline and for increasing the level of automation across the Big Data platform and the batch/real-time applications running on it.
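
For illustration, here is a minimal sketch of the kind of real-time application described above: a Spark Structured Streaming job that reads events from a Kafka topic and lands them in the data lake as Parquet. The broker address, topic, schema fields, and paths are hypothetical placeholders, not details from this posting, and the job assumes the spark-sql-kafka connector package is available.

    # Minimal PySpark Structured Streaming sketch: Kafka -> data lake (Parquet).
    # Broker, topic, schema, and paths below are hypothetical examples.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import LongType, StringType, StructField, StructType

    spark = SparkSession.builder.appName("network-events-stream").getOrCreate()

    # Example event schema (hypothetical fields)
    schema = StructType([
        StructField("msisdn", StringType()),
        StructField("event_ts", LongType()),
        StructField("cell_id", StringType()),
    ])

    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
        .option("subscribe", "network-events")              # placeholder topic
        .load()
        .select(from_json(col("value").cast("string"), schema).alias("e"))
        .select("e.*")
    )

    # Append to the lake as Parquet; the checkpoint makes the stream restartable.
    (events.writeStream
        .format("parquet")
        .option("path", "/data/lake/network_events")        # placeholder path
        .option("checkpointLocation", "/data/ckpt/network_events")
        .outputMode("append")
        .start()
        .awaitTermination())

In a Cloudera environment, a job like this would typically be submitted with spark-submit and monitored through YARN.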

We are looking for you if you have


-    BS degree or higher in a technology-related field (e.g. Computer Science, Math, Information Systems, Industrial Engineering, or another quantitative field)
-    You have a DevOps engineering mindset. You may even be a software engineer with a focus on, or passion for, data-driven products.
-    A minimum of 2 years of experience in designing infrastructure and installing, managing, and operating applications/flows that process large amounts of data in a (Cloudera) Hadoop ecosystem (HDFS, YARN, Hive, Impala)
-    Good command of English
-    Good experience with SQL and PL/SQL, Oracle ODI, and Oracle GoldenGate
-    Experience with data pipeline and workflow management tools like Apache Airflow and UC4 (a minimal Airflow sketch follows this list)
-    You’ll manage/operate robust data pipelines that deliver very high data quality at scale using a combination of Apache Spark, Spark Streaming, Apache Kafka, NiFi, or Flink
-    Experience with Linux systems and shell scripting
-    Experience with NoSQL distributed databases like Apache HBase, Apache Phoenix, and Apache Kudu
-    Experience with Python/Scala and Python scripting is a plus
-    Experience with Jenkins for CI/CD pipelines is a plus
-    Experience with other distributed technologies such as Cassandra, Solr/Elasticsearch, and MongoDB is a plus
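
To give a flavor of the workflow-management experience mentioned above, here is a minimal sketch of an Apache Airflow (2.x) DAG that chains an ingest job and a data quality check. The DAG id, schedule, and job paths are hypothetical placeholders, not details from this posting.

    # Minimal Airflow 2.x DAG sketch; all ids and paths are hypothetical.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_cdr_load",          # placeholder DAG name
        start_date=datetime(2021, 7, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        ingest = BashOperator(
            task_id="ingest",
            bash_command="spark-submit /opt/jobs/ingest.py",    # placeholder job
        )
        quality_check = BashOperator(
            task_id="quality_check",
            bash_command="spark-submit /opt/jobs/dq_check.py",  # placeholder job
        )
        # Run the quality check only after ingestion succeeds.
        ingest >> quality_check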