CLEAR transforms what is uniquely you (your fingerprints, your face, your eyes) into a secure, biometric key to frictionless experiences. We are creating a world where travel is effortless, where accessing your office building is as simple as walking in, and where shopping is as easy as walking in and out of a store, without ever showing an ID or credit card. CLEAR currently powers secure, frictionless customer experiences in 30 U.S. airports and venues. With over 3 million members so far, CLEAR is the identity platform of the future, today.
Through cutting-edge biometrics and advanced, Homeland Security-certified data algorithms, CLEAR products verify identity and protect travelers and sports fans while speeding them through security.
CLEAR is continuously extending its platform with new innovations, products, and IP. The Science and Analytics team delivers valuable insights to both internal and external partners via its next-generation data platform. Leveraging its capabilities, we work alongside partners to understand their most pressing data needs, then build trusted solutions to support them. Sometimes that solution takes the form of an internal BI dashboard, sometimes it is delivered as a mathematical model in a product feature, and in other cases we create solutions that give CLEAR a differentiated competitive advantage in the marketplace.
We're seeking an interdisciplinary data engineer to build the data platform that feeds our mathematical models and algorithmic research. As a critical member of our research and development team, you will have a prominent voice in the future of our company. You're a deep thinker who enjoys solving critical problems and can own a solution from end to end. You will build a highly scalable, high-throughput model-processing pipeline to support our large-scale, real-time predictive risk platform, and you will have considerable autonomy in setting and executing a plan to build it out.
What You Will Do:
Thrive in a highly collaborative, team-oriented, intellectually curious environment.
Collaborate with our project manager and the VP of Data Science to distill ambiguous business requirements into detailed platform architectures and designs.
Build a pioneering platform based on those business requirements, including setting up virtual private clouds (VPCs) and storage volumes in AWS.
Install and scale data warehouse (DWH) instances in the VPC.
Work with the VP of Data Science to identify data sources to ingest into the Experimentation Platform.
Build and scale the star/snowflake data model.
Build ETL code to ingest data from disparate sources into the Experimentation Platform.
Interface with the Data Science team to install and configure the appropriate analytics warehouse and data science tools.
Ingest new data sources as requested.
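To give candidates a concrete sense of the star/snowflake modeling and ETL work described above, here is a minimal sketch. The table and column names (dim_airport, fact_verification) are hypothetical examples, and sqlite3 stands in for a production warehouse such as Snowflake; the pattern (dimension table, fact table keyed to it, aggregate query joining the two) is what carries over.

```python
import sqlite3

# Minimal star schema: one fact table referencing one dimension table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("""
CREATE TABLE dim_airport (
    airport_key INTEGER PRIMARY KEY,
    iata_code   TEXT NOT NULL
)""")
cur.execute("""
CREATE TABLE fact_verification (
    fact_key    INTEGER PRIMARY KEY,
    airport_key INTEGER REFERENCES dim_airport(airport_key),
    duration_ms INTEGER
)""")

# Toy "ETL": load a dimension row, then a fact row keyed to it.
cur.execute("INSERT INTO dim_airport (airport_key, iata_code) VALUES (1, 'JFK')")
cur.execute(
    "INSERT INTO fact_verification (fact_key, airport_key, duration_ms) "
    "VALUES (1, 1, 850)"
)
conn.commit()

# A typical star-schema query: join the fact to its dimension and aggregate.
cur.execute("""
SELECT d.iata_code, AVG(f.duration_ms)
FROM fact_verification f
JOIN dim_airport d ON d.airport_key = f.airport_key
GROUP BY d.iata_code
""")
print(cur.fetchall())  # [('JFK', 850.0)]
```

In practice the fact table would reference several dimensions (time, lane, member segment), and the ETL code would ingest from the disparate sources named above rather than inline inserts.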
Who You Are:
Bachelor's degree in computer science or 3+ years of experience working with databases.
Experience designing and coding Big Data solutions utilizing Spark, Snowflake, Kafka, or similar technologies.
Experience with service-oriented architectures, web services, and cloud technologies in AWS.
A demonstrated, results-oriented track record of delivering creative data solutions.
Strong knowledge of data structures, algorithms, enterprise systems, and asynchronous architectures.