In a rapidly changing world with long-embedded inequalities, education is increasingly important as a means of enabling opportunities, increasing mobility, and unlocking human potential. However, access to education is itself very unequal, with many college students experiencing barriers of time, money, preparatory resources, and access to support.
As a 200-year-old company focused on human knowledge, Wiley is not new to thinking about education in terms of long-term trends and opportunities. Right now, Wiley is building the enabling educational tools of the future, leveraging data and AI to provide students and instructors with affordable, personalized support for their learning journeys.
We are looking for a Senior Data Engineer to join Wiley's Adaptive Learning team. This team manages large amounts of data on learning objects and student learning interactions, performs or enables analyses of these data sets, and develops algorithms to power real-time adaptive learning and educational AI.
As a provider of adaptive learning services both within Wiley and to third-party products for over 8 years, we have a large and well-functioning set of data, pipelines, and applications, and many existing users of data and analytics across the organization. However, there is ample opportunity to modernize and improve our data systems, responding to evolving needs, new technologies, and opportunities to empower users across the organization. In this role, you will:
Care for and evolve our data pipelines and data representation within our Snowflake data warehouse as our source systems and applications change
Model and integrate new data sources as they come online
Help integrate existing data sources as needs arise
Ensure our data stores are reliable and secure, and that their current status is easily observable
Triage and resolve data incidents responsively based on severity of impact
Work with end users to translate needs and specifications into technical requirements and designs
Investigate, prioritize, plan, and deliver on modernization and enhancement opportunities, such as:
Moving to a new data workflow manager
Expanding the use of Spark and PySpark
Standing up new open source tools to enable data scientists, analysts, and other end-users to learn from our data
Partner with other data professionals to define and implement data management best practices
Mentor data engineers and other technical data users to build maturity in data usage
Work with great and technically savvy software engineers, systems engineers, data engineers, and data scientists
You'll be joining a team that prioritizes inclusion and mutual trust, and that creates space for creativity and autonomy by earning the confidence of our stakeholders. Whatever your background, we look forward to seeing how you approach the team's goals and needs. A good contributor to the team's success will:
Be able to work mostly autonomously on a wide range of projects, from data ingestion through data modeling to designing for end-user data needs
Work collaboratively with a diverse range of stakeholders, both technical and nontechnical
Work towards agility by understanding and prioritizing delivery goals, and responding to changes and new information by updating and communicating plans
Stay current on emerging technologies and techniques in cloud computing and data lake / data warehouse management
We believe these qualifications will help you succeed in this role:
Training in Computer Science or a related degree, or equivalent professional experience
Hands-on experience managing a data warehouse, ideally Snowflake and/or Redshift
4+ years' experience developing, scheduling, and maintaining ETL and ELT pipelines
6+ years' experience modeling data in databases and data warehouses, including strong knowledge of dimensional modeling concepts
6+ years' experience developing SQL scripts and procedures to process data, ideally Postgres or Snowflake variants
3+ years' experience developing in Python and familiarity with software engineering best practices (test-driven development, object-oriented programming)
Experience with batch processing data pipelines
Experience with distributed data processing frameworks, especially Spark
Attention to detail and organization, planning, and project management skills
Strong verbal and written communication skills with both technical and nontechnical colleagues
Wiley is an equal opportunity employer and does not discriminate on the basis of race, color, creed, national origin, sex, sexual orientation, religion, age, disability, or other legally protected status. Employment is contingent upon the successful completion of a background check and employment review.
John Wiley & Sons, Inc.