Senior Data Engineer, Infrastructure & Operations

Metromile, San Francisco, CA 94118

Posted 3 months ago

About Us

On the off chance you've thought about insurance, it's likely because you've insured something you love, not because you loved your insurance company. Metromile is out to change that. As an insurtech powered by data science and customer-centric design, we're building a community of drivers who come for the savings and stay for the experience.

With technology at its core, Metromile is reimagining insurance to make it fairer and actually delightful. We're obsessed with savings, service, and features -- street sweeping alerts, monthly mileage summaries, fuel trackers and more -- that engage a customer all along their journey. We're on the forefront of disrupting a $250 billion auto insurance category that has gone unchanged for over 80 years.

Metromile's diverse team combines the best of Silicon Valley technologists with veterans from Fortune 500 insurers and financial services giants. This mix of backgrounds keeps the business focused on growth, customer experience, and technology innovation while also balancing unit economics and profitability. The team is growing quickly across its San Francisco, Tempe, and Boston offices. Our customer service, claims, and sales teams are all based in-house in the US.

Metromile has been named a Glassdoor Best Place to Work two years in a row; our CEO consistently holds a 95+ percent approval rating, and nearly 90% of employees say they'd recommend Metromile to a friend.

About the role:

Metromile is on the cusp of implementing a new data warehousing solution to help build the next generation of its Data & Analytics platform. This is a fantastic opportunity to learn a lot and grow your career while working in the cutting-edge space of using telematics data to drive insurance decisions.

Metromile is seeking a Data Engineer with proven experience designing data ingestion pipelines to join our Data & Analytics team. This role contributes to the vision for our data infrastructure and business intelligence tools: architecting table schemas and data storage, upgrading our existing data infrastructure, implementing new data engineering solutions, and gathering and storing new data from across the company to power our decision-making processes.

This role works predominantly with the analysts, SMEs, and other data engineers on the team, but also offers opportunities to architect how data is managed across the entire company, giving qualified candidates the chance to be involved in a wide variety of projects. Because data is central to decision making within the company, this role sits at the critical intersection of driving the future of our business while ensuring that day-to-day operations continue to be successful.

You will:

  • Design data models and data ingestion pipelines, and implement scalable ETL / ELT processes.

  • Collaborate with partners across business functions to define, implement and maintain vital business metrics.

  • Architect databases, data integration processes, and table optimizations.

  • Engage in self-driven investigation into new and upcoming technologies/techniques for data management and retrieval.

  • Support design and configuration of touchpoint/experimentation systems (Segment, Mixpanel, Optimizely, etc.).

  • Take ownership of our data replication strategy (MySQL, Postgres, etc.) using solutions like StitchData or Fivetran.

  • Own implementation of enterprise container platforms such as Docker.

  • Own configuration-driven orchestration platforms (e.g. Airflow); a minimal sketch follows this list.
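
For illustration, here is a minimal sketch of the kind of configuration-driven Airflow pipeline this role would own: one DAG that fans out a load task per source table from a config list. The table names, schedule, and the extract_and_load helper are illustrative assumptions, not Metromile's actual setup.

    # Minimal configuration-driven Airflow DAG sketch (Airflow 2.x style).
    # Table names, connection details, and the load logic are placeholders.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Hypothetical config: each entry describes one source table to replicate.
    TABLES = [
        {"source": "policies", "target": "analytics.policies"},
        {"source": "trips", "target": "analytics.trips"},
    ]

    def extract_and_load(source: str, target: str, **context) -> None:
        # Placeholder extract/load step; a real task would pull rows from the
        # source database and write them to the warehouse via an Airflow hook.
        print(f"Copying {source} -> {target} for {context['ds']}")

    with DAG(
        dag_id="example_table_replication",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    ) as dag:
        for table in TABLES:
            PythonOperator(
                task_id=f"load_{table['source']}",
                python_callable=extract_and_load,
                op_kwargs=table,
            )

With this pattern, onboarding a new table means adding one config entry rather than writing a new task by hand.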

About you:

  • 3-5+ years of professional experience.

  • Experience with cloud toolsets (AWS, Azure, GCP, etc.) for data storage and ingestion.

  • Expertise in designing data ingestion pipelines.

  • Strong software development fundamentals in Java and Python for building and shipping production data pipelines.

  • Expert knowledge of databases, specifically relational (SQL) and column-store (e.g. Redshift/Snowflake). Experience with non-relational stores (HBase/Cassandra) is a nice-to-have.

Nice to have:

  • Experience with storage and processing of sensor data (for example GPS time series) from external systems.

  • Some familiarity with data serialization formats such as JSON, Avro, and Protobuf (a brief sketch follows this list).
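
For illustration, a brief sketch of writing and reading Avro-encoded GPS sensor records in Python, assuming the fastavro library; the schema fields below are illustrative assumptions, not Metromile's actual telematics format.

    # Sketch: serialize a GPS time-series record with Avro using fastavro.
    # The schema below is a hypothetical example, not a real telematics schema.
    import io

    from fastavro import parse_schema, reader, writer

    GPS_POINT_SCHEMA = {
        "type": "record",
        "name": "GpsPoint",
        "namespace": "example.telematics",
        "fields": [
            {"name": "device_id", "type": "string"},
            {"name": "timestamp_ms", "type": "long"},
            {"name": "latitude", "type": "double"},
            {"name": "longitude", "type": "double"},
            {"name": "speed_mps", "type": ["null", "float"], "default": None},
        ],
    }

    parsed = parse_schema(GPS_POINT_SCHEMA)
    records = [
        {"device_id": "dev-123", "timestamp_ms": 1700000000000,
         "latitude": 37.7749, "longitude": -122.4194, "speed_mps": 12.5},
    ]

    buf = io.BytesIO()
    writer(buf, parsed, records)  # write the Avro object container format
    buf.seek(0)
    print(list(reader(buf)))      # round-trip: read the records back as dicts

Schema-aware formats like Avro make it easier to evolve sensor payloads over time and are supported as load formats by warehouses such as Redshift and Snowflake.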

What's in it for you:

  • Competitive salary plus equity

  • Robust benefit options (health, dental, vision, 401K)

  • Commuter and well-being benefits

  • Generous parental leave

  • Catered lunches and a fully stocked kitchen

  • Monthly social events (movies, game nights, park days, etc.)

  • Mac equipment and adjustable workstations

Metromile is an Equal Opportunity / Affirmative Action Employer: Minority / Female / Disability / Veteran / Gender Identity / Sexual Orientation.

Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.

