Software Engineer III (Data)

Accolade, Inc. Seattle, WA 98113

Posted 3 weeks ago

Accolade provides personalized health and benefits solutions designed to empower every person to live their healthiest life. We help millions of people and their employers navigate the complexities of the healthcare system with empathy, expertise, and exceptional service, while supporting them in lowering the cost of care and improving health outcomes. Accolade blends technology-enabled health and benefits solutions, specialized support from Accolade Health Assistants and Clinicians, and access to expert medical opinion services for high-cost treatment decisions. We consistently receive consumer satisfaction ratings over 90 percent and have been recognized by Inc. Magazine as a Top Place to Work 2020 and by Business Intelligence for Excellence in Customer Service 2020. Please visit us on LinkedIn, Twitter, Instagram and Facebook and at accolade.com.

Role overview

The Software Engineer III (Data) is responsible for building and supporting our next generation of ETLT, big data, and machine learning-based anomaly detection tools. As a developer on our data ingestion team, you will create and enhance products that give Accolade teams and customers a fast, convenient way to onboard customer data into the Accolade Cloud Platform securely, at scale, and with high data quality.
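
To make the pipeline work concrete, here is a minimal PySpark sketch of an ETLT step of the kind this role builds: extract a raw file, standardize it, load it to a curated zone, then run a post-load quality gate. The bucket paths, column names, and the 5 percent error budget are hypothetical illustrations, not Accolade's actual pipeline.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("customer-onboarding-sketch").getOrCreate()

    # Extract: read a raw customer eligibility file (hypothetical S3 path).
    raw = spark.read.option("header", True).csv("s3://example-bucket/inbound/eligibility.csv")
    total = raw.count()

    # Transform: normalize types and drop obviously malformed rows.
    clean = (
        raw.withColumn("member_id", F.trim(F.col("member_id")))
           .withColumn("dob", F.to_date("dob", "yyyy-MM-dd"))
           .filter(F.col("member_id").isNotNull() & F.col("dob").isNotNull())
    )

    # Load: land the standardized data in the platform's curated zone.
    clean.write.mode("overwrite").parquet("s3://example-bucket/curated/eligibility/")

    # Transform again: a post-load quality gate, so a badly formed file
    # fails loudly instead of reaching downstream consumers.
    rejected = total - clean.count()
    if total > 0 and rejected / total > 0.05:
        raise RuntimeError(f"Rejected {rejected} of {total} rows; exceeds 5% error budget")

On EMR, a job like this would typically be submitted with spark-submit as an EMR step; the configuration tooling this team builds wraps that setup so internal users do not write Spark by hand.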

A day in the life

  • Work with a small agile team to design and develop ETLT, Big Data, and machine learning implementations.

  • Develop tools that allow teams to configure high-performance, highly scalable ETLT pipelines using Apache Spark, Python, and AWS EMR.

  • Use machine learning to implement near-real-time anomaly detection against data as it flows through our platform (see the sketch after this list).

  • Develop Java- and Node-based microservices and lambdas as part of the Accolade Cloud Platform.

  • Contribute to engineering best practices and help shape the future of our big data tools and technologies.

  • Support internal customers of our ETLT and customer configuration tooling, including researching and validating detected anomalies, failed pipelines, and monitoring alarms.

  • Interface with our internal users and product team to gather requirements and feedback.
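
As a concrete illustration of the anomaly-detection work above, here is a minimal Python sketch that flags a batch whose row count deviates sharply from recent history. The sliding window size, the 3-sigma threshold, and the RowCountMonitor name are illustrative assumptions, not Accolade's detection model, which this posting does not specify.

    from collections import deque
    from statistics import mean, stdev

    class RowCountMonitor:
        """Keeps a sliding window of recent batch sizes and flags outliers."""

        def __init__(self, window: int = 30, threshold: float = 3.0):
            self.history = deque(maxlen=window)  # most recent batch sizes
            self.threshold = threshold

        def check(self, row_count: int) -> bool:
            """Return True if this batch looks anomalous versus the window."""
            is_anomaly = False
            if len(self.history) >= 5:  # need a few samples before judging
                mu, sigma = mean(self.history), stdev(self.history)
                if sigma > 0 and abs(row_count - mu) / sigma > self.threshold:
                    is_anomaly = True
            if not is_anomaly:
                self.history.append(row_count)  # keep outliers out of the baseline
            return is_anomaly

    monitor = RowCountMonitor()
    for count in (1000, 1020, 990, 1005, 1010, 55):  # last batch is suspect
        if monitor.check(count):
            print(f"ALERT: batch of {count} rows deviates from recent history")

In production, a check like this would typically run inside the pipeline and publish to an alerting channel (for example SNS, which the qualifications below mention) rather than print.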

What we are looking for

  • Minimum of 4 years of experience designing and building Big Data and ETLT products using cloud-native solutions.

  • You are a Python, Apache Spark, and SQL expert.

  • Production experience building out scalable, secure, and performant ETLT pipelines.

  • Production experience with cloud-native Big Data technologies.

  • Experience with AWS and AWS EMR is a plus.

  • Preferred experience with Java, machine learning, anomaly detection, and AWS services such as Lambda, S3, SQS, SNS, and DynamoDB.

  • Desire and willingness to work in an Agile, collaborative, innovative, flexible, and team-oriented environment.

  • Strong written and oral communication skills.

Where permitted by applicable law, candidates must have received or be willing to receive the COVID-19 vaccine by date of hire to be considered, if not currently employed by Accolade, Inc. The Company will provide reasonable accommodations to qualified employees with disabilities or for a sincerely held religious belief.

Please note that a request for exemption due to a personal preference not to receive a vaccine is not protected by law. All requests for exemptions from this mandate shall be directed to the Company recruiter who shall route the request to the Company's human resources department.

All your information will be kept confidential according to EEO guidelines.


