Req ID: 31195
Experience Level: Professional
Other Location(s): Irvine (CA); Los Angeles (CA); Los Angeles West (CA)
Come grow with us
At Capital Group, how we work is defined by shared values that include absolute integrity, respect and collaboration. But it's more than that. It's smart and highly driven people united in purpose to serve our investors and one another.
Bring your energy and unique perspective to Capital and you'll have the opportunity to grow with us professionally, personally, and financially. You'll be part of a team that genuinely cares about helping you succeed. You'll work alongside talented colleagues, many of whom build long careers while progressing through multiple roles, establishing lifelong friendships and making a difference in our communities. In return for your contributions, you'll receive premier compensation and benefits, and a company-funded retirement plan that ranks among the most generous.
The Data Engineer will be an important member of Investment Group Technologies, a new and rapidly growing team within Capital Group. As a Data Engineer, you will lead the design, implementation, and successful delivery of large-scale, critical and complex data architecture, storage and pipelines for the Investment Group. You will build data stores, pipelines, tools, scheduled jobs, and reports that enable software engineers, product managers, and executives to improve Analysts' and Portfolio Managers' investment decision-making process. In this highly visible role, you will work across teams to gather requirements for data architecture, storage, tagging, temporality, lineage, security, fault tolerance, transformation and reporting, and will build scalable, highly available, cloud-based solutions in a fast-paced environment. You will need to gain in-depth knowledge of many existing data initiatives, think carefully about which current or future state of those initiatives you plug your pipelines into, and weigh that against the time and resource cost of building things from scratch. These efforts may result in new software or a refactoring of existing software.
Design, implement and automate data pipelines that source data from internal and external systems, transform it to meet the needs of various systems and data governance tenets and initiatives, store it in the most fitting technology and architecture, and serve it through highly performant, read-optimized, secure and fault-tolerant means.
Design data schema and operate cloud-based data warehouses and SQL/NoSQL/temporal database systems.
Write Extract-Transform-Load (ETL) jobs and Spark/Hadoop jobs.
Own the design, development and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.
Monitor and troubleshoot operational or data issues in the data pipelines.
Drive architectural plans and implementation for future data storage, ETL, reporting, and analytic solutions.
Influence your team's technical and business strategy by making insightful contributions to team priorities and approach. Help in identifying and solving ambiguous problems, architecture deficiencies, or areas where your team's software bottlenecks the innovation of other teams. You make software simpler.
Provide insightful code reviews, receive code reviews constructively, and take ownership of outcomes ("you ship it, you own it"), working efficiently and routinely delivering the right things.
BS in Computer Science or Technical field
3+ years of relevant work experience in analytics, data engineering, complex ETL, business intelligence or related field, and 3+ years professional experience
2+ years of experience in implementing big data processing technology: Hadoop, Apache Spark, etc.
Experience writing and optimizing advanced SQL queries in a business environment with large-scale, complex datasets.
Experience with read-only optimization of data storage and queries to enable <100ms RESTful micro web services design and DaaS architecture, ideally without the need for a cache.
Strong experience with cloud-first design, preferably AWS (VPC, Serverless databases and functions, dynamic autoscaling, container orchestration, ECS, Redshift, RDS, S3, EMR, etc.).
Detailed knowledge of data warehouse technical architecture, infrastructure components, ETL and reporting/analytic tools and environments.
Experience with data visualization software (Tableau/Qlikview) or open-source alternatives.
Coding proficiency in at least one modern programming language (Python, Ruby, Java, etc.).
Experience with Computer Science fundamentals including data structures, algorithms and complexity analysis.
Experience translating business requirements into stable, performant operational systems.
Willingness and ability to own all stages of development process: design, testing, implementation, operational support.
Willingness and ability to work in an agile team development environment, with knowledge of the agile design process; experience developing software in an agile environment is highly preferred.
Excellent communication skills.
Founded in 1931, Capital Group is one of the world's largest and most trusted investment management companies and home to the American Funds. We manage more than US$1.7 trillion in assets, and our 7,500 associates make our clients their first priority every day. When we do our job right, millions of investors around the world fulfill their dreams and financial goals, from home ownership and higher education, to a comfortable retirement. Our long-term investment results and outstanding service set us apart from our competitors, while our workplace sets us apart from other employers.
The Capital Group Companies Inc