As a Data Engineer (Level II), you are familiar with data warehousing technical components, infrastructure, and their integration. You'll analyze large amounts of data to discover and solve real-world problems, and you enjoy generating insights as well as presenting them. You are responsible for high-level design and architecture, and you are comfortable fostering relationships with internal business partners and other members of the development team.
Design, develop, and maintain a modular code base to solve real-world problems.
Conduct regular peer code reviews to ensure code quality and compliance with industry best practices.
Work in cross-disciplinary teams to understand client needs and ingest rich data sources.
Research, experiment with, and utilize leading big data technologies in AWS.
Help drive the process for pursuing innovations, target solutions, and extendable platforms for ampliFI's products.
Participate in developing and presenting thought leadership, and help ensure that ampliFI's data source technology stack incorporates, and is optimized for, the appropriate technologies.
Required Skills and Experience
Qualified individuals possess the ampliFI attributes of being smart, curious, committed to the vision, passionate, fun and pleasant to work with, achievement-oriented, and driven by a sense of urgency.
Minimum of three years of big data experience with multiple programming languages and technologies, including three years as a lead or team manager.
Bachelor's or master's degree from an accredited college/university in Computer Science, Computer Engineering, or a related field (e.g., mathematics or physics).
Ability to manage established relationships internally as well as with clients.
Ability to communicate complex technical concepts succinctly to non-technical colleagues, and to understand and manage interdependencies between all facets of a project.
Ability to interface with clients; demonstrated advanced proficiency in complex, mature, and sophisticated design and analysis technologies and solutions.
Demonstrated ability to rapidly ingest, transform, engineer, and visualize data for both ad hoc and product-level (e.g., automated) data and analytics solutions.
Experience with large-scale AWS big data services such as EC2, S3, EMR, Kinesis, DynamoDB, and Redshift.
Ability to work efficiently in a Unix/Linux environment, with experience using source code management systems such as Git.
Strong knowledge of programming practices (version control, testing, QA) and agile development methodologies.
3-5 years of experience.
Frequently required to sit and stand.
Occasionally required to stoop, kneel, and crouch.
Required to use hands to handle or feel objects, tools, or controls.
Duties, responsibilities, and activities are not all-encompassing and may change at any time with or without notice. To perform this job successfully, an individual must be able to perform each essential job duty satisfactorily. Reasonable accommodations may be made to enable qualified individuals with disabilities to perform essential job functions.