We are looking for talented, passionate, results-oriented individuals to join our team building the data foundations and tools that craft the future of commerce and Apple Pay. Collaborating with the head of Data Engineering for IS&S Payments & Commerce Analytics, you will create scalable, extensible, highly available, high-performance data pipelines that power the insights used to measure performance and drive strategy.
You will collaborate with data analysts, instrumentation experts, and engineering teams to identify the requirements that drive the creation of data pipelines. You will work closely with the application server engineering team to understand the architecture and internal APIs involved in upcoming and ongoing Apple Pay projects.

Our culture is about getting things done iteratively and rapidly, with open feedback and debate along the way; we believe analytics is a team sport, but we strive for independent decision-making and taking smart risks. Our team collaborates deeply with partners across product and design, engineering, and business teams. Our mission is to drive innovation by providing our business and data scientist partners with best-in-class systems and tools to make decisions that improve the customer experience of using our services. This includes working with large and complex data sources, helping derive actionable insights, delivering dynamic and intuitive decision tools, and bringing our data to life through amazing visualizations.

You are a self-motivated teammate, skilled in a broad set of Big Data processing techniques, with the ability to adapt and learn quickly, deliver results with limited direction, and choose the best data processing solution for the task at hand.
5+ years of professional experience with Big Data systems, data pipelines, and data processing
Practical hands-on experience with technologies such as Apache Hadoop, Apache Pig, Apache Hive, and Apache Sqoop
Ability to understand server API specs, identify the corresponding server events, extract data from those events, and define and derive actionable data pipelines
Understanding of distributed file formats such as Apache Avro and Apache Parquet, as well as data structures and common methods in data transformation
Expertise in Python scripting
Expertise in Unix shell scripting and dependency-driven job schedulers
Expertise in Oracle Database and ANSI SQL
Proficiency in core Java
Knowledge of Scala
Familiarity with Apache Oozie, Apache Spark, and PySpark
Familiarity with data visualization tools such as Tableau
Familiarity with rule-based multi-stage data correlation on large data sets is a plus
Excellent time management skills, with the ability to deliver work to tight deadlines and handle the pressure of executive requests and product launches
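As an illustration of the dependency-driven job scheduling concept listed above, here is a minimal sketch using only the Python standard library; the job names and pipeline shape are hypothetical, not part of the role:

```python
# Minimal sketch of a dependency-driven job ordering, using the
# standard-library TopologicalSorter. Job names are illustrative only.
from graphlib import TopologicalSorter

# Each job maps to the set of upstream jobs it depends on.
pipeline = {
    "extract_events": set(),
    "normalize": {"extract_events"},
    "aggregate": {"normalize"},
    "load_warehouse": {"aggregate"},
}

def run_order(jobs):
    """Return an execution order that respects all declared dependencies."""
    return list(TopologicalSorter(jobs).static_order())

order = run_order(pipeline)
```

Production schedulers such as Apache Oozie express the same idea at workflow scale, with each action gated on the completion of its upstream dependencies.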
Bachelor's degree, preferably in Computer Science, Information Technology, or EE, or relevant industry experience