Publicis Media is one of the four solutions hubs of Publicis Groupe, alongside Publicis Communications, Publicis.Sapient and Publicis Healthcare. Led by Steve King, CEO, Publicis Media comprises five global brands: Starcom, Zenith, Spark Foundry, Blue 449 and Performics, powered by digital-first, data-driven global practices that together deliver client value and business transformation. Publicis Media is committed to helping its clients navigate the modern media landscape and is present in more than 100 countries with over 17,500 employees worldwide.
Data Sciences is a group within Publicis Media that is accelerating client business transformation through data and technology. Our products serve the media agencies' users (e.g., planners, buyers, and analysts) by centralizing data, automating and standardizing processes, and ensuring data accuracy for all users.
The Solutions Engineer provides complete application lifecycle development, deployment, and operations support for Big Data solutions and infrastructure. You will partner with product owners, data scientists, and business analysts to facilitate the development, automation, and seamless delivery of analytics solutions into Big Data clusters. You will implement and enhance complex big data solutions with a focus on collecting, parsing, managing, analyzing, and visualizing large data sets that produce valuable business insights and discoveries. As such, a commitment to collaborative problem solving and open communication is extremely important.
The day-to-day includes:
Determine the infrastructure, services, and software required to build advanced analytics solutions in the cloud
Partner with agency stakeholders to develop prototypes and proofs of concept for specified business challenges
Assist data scientists with exploration and analysis activities, understand advanced algorithms, and apply problem-solving experience to build high-performance, parallel, and distributed solutions
Perform code and solution review activities, then recommend enhancements that improve efficiency, performance, and stability while lowering support costs
Configure and conduct tuning exercises on Hadoop environments
Support the big data platform, including incident and problem management
Debug and triage incidents or problems and deploy fixes to restore services
Document requirements and configurations and clarify ambiguous specifications
Bachelor's degree in Computer Science, Mathematics, or Engineering.
3+ years of enterprise software engineering experience with object-oriented design, coding, and testing patterns, as well as experience engineering (commercial or open source) software platforms and large-scale data infrastructures.
2+ years of enterprise Big Data engineering experience within any Hadoop environment.
1+ years of enterprise support experience with Hadoop, Hive, Presto, Spark, and Sqoop services and clients.
Languages and tools: Python, SQL, Java, and Scala; notebooks such as Jupyter or Zeppelin; Hue; Spark RDDs and DataFrames; machine learning algorithms.
1+ years of experience with Talend or Pentaho ETL tools.
Basic knowledge of continuous integration tools (Jenkins or similar).
Basic knowledge of visualization tools such as Tableau or Qlik.
Experience with enterprise big data security and management operations using Apache Ranger is preferred.
Professional Hadoop training and Hortonworks or Cloudera certifications are a plus.
Experience with Qubole is a plus.
All your information will be kept confidential according to EEO guidelines.