Sr. Scala Engineer, Database Engineering

Experfy Inc, Miami, FL 33124

Posted 2 weeks ago

As a Sr. Software Engineer on our Data Platform Engineering team, you will join skilled Scala engineers and core database developers responsible for developing hosted cloud analytics infrastructure (Apache Spark-based), distributed SQL processing frameworks, proprietary data science platforms, and core database optimization. The team builds the automated, intelligent, and highly performant query planner and execution engines, RPC calls between data warehouse clusters, shared secondary cold storage, and more. This includes building new SQL features and customer-facing functionality, developing novel query optimization techniques for industry-leading performance, and building a database system that is highly parallel, efficient, and fault-tolerant. This is a vital role reporting to executive and senior engineering leadership.


Responsibilities
  • Writing Scala code with tools such as Apache Spark and Apache Arrow to build a hosted, multi-cluster data warehouse for Web3
  • Developing database optimizers, query planners, query and data routing mechanisms, cluster-to-cluster communication, and workload management techniques
  • Scaling up from proof of concept to cluster scale (and eventually hundreds of clusters with hundreds of terabytes each), in terms of both infrastructure/architecture and problem structure
  • Codifying best practices for future reuse in the form of accessible, reusable patterns, templates, and code bases to facilitate metadata capture and management
  • Managing a team of software engineers writing new code to build a bigger, better, faster, more optimized HTAP database (using Apache Spark, Apache Arrow and a wealth of other open source data tools)
  • Interacting with the executive team and senior engineering leadership to define and prioritize work and ensure smooth deployments alongside other operational components
  • Staying highly engaged with industry trends in the analytics domain from a data acquisition, processing, engineering, and management perspective
  • Understanding data and analytics use cases across Web3 / blockchains
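To give a flavor of the query-planner work described above, here is a minimal, purely illustrative sketch of a rule-based rewrite (predicate pushdown) over a toy logical plan in plain Scala. All names are hypothetical; real planners such as Spark's Catalyst use far richer tree types and rule frameworks:

```scala
// Toy logical plan with a single rewrite rule: predicate pushdown.
// Illustrative only -- not the team's actual codebase.
sealed trait Plan
final case class Scan(table: String)                     extends Plan
final case class Filter(pred: String, child: Plan)       extends Plan
final case class Project(cols: Seq[String], child: Plan) extends Plan

object Optimizer {
  // Rewrite Filter(Project(...)) into Project(Filter(...)) so the
  // predicate is evaluated closer to the scan, then recurse.
  def pushDown(plan: Plan): Plan = plan match {
    case Filter(p, Project(cols, child)) => Project(cols, pushDown(Filter(p, child)))
    case Filter(p, child)                => Filter(p, pushDown(child))
    case Project(cols, child)            => Project(cols, pushDown(child))
    case s: Scan                         => s
  }
}
```

For example, `Optimizer.pushDown(Filter("amount > 100", Project(Seq("amount"), Scan("transfers"))))` moves the filter beneath the projection, yielding `Project(List(amount), Filter(amount > 100, Scan(transfers)))`.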
Skills & Qualifications
  • Bachelor's degree in computer science or a related technical field. Master's or PhD a plus.
  • 6+ years of experience engineering software and data platforms / enterprise-scale data warehouses, preferably with knowledge of the open-source Apache stack (especially Apache Spark and Apache Arrow)
  • 3+ years of experience with Scala and Apache Spark
  • A track record of recruiting and leading technical teams in a demanding talent market
  • Rock-solid engineering fundamentals; experience with query planning, optimization, and distributed data warehouse systems is preferred but not required
  • Nice to have: knowledge of blockchain indexing, Web3 compute paradigms, proofs, and consensus mechanisms
  • Experience with rapid development cycles in a web-based environment
  • Strong scripting and test automation knowledge
  • Nice to have: passion for Web3, blockchain, and decentralization, and a basic understanding of how data/analytics plays into this