
Splunk Admin/Developer


CodeForce 360, Atlanta, GA 30301

Posted 2 months ago

CAREER OPPORTUNITY
Job Title: Splunk Admin/Developer

ABOUT CodeForce 360
CodeForce 360 is an established IT staffing and consultancy services company based in Georgia, powered by a team of seasoned professionals and a proven track record of providing consultants experienced in some of the most advanced technologies. Our greatest strength is our pool of IT consultants, backed by a portfolio that includes assignments with Fortune 500 companies and other blue-chip corporations.

Position Overview
Splunk Admin/Developer


Requirements:
Build Splunk custom apps and add-ons, leverage SDKs, use the REST API for modular inputs, data models, and scripted inputs, and incorporate custom commands (a minimal SDK sketch follows this list).
Create or enhance dashboards, visualizations, statistical reports, scheduled searches, alerts, summary indexes, and knowledge objects.
Build queries and dashboards to detect and illustrate capacity trends, constraints, and risks.
Analyze onboarding requests to determine fit for the Splunk/monitoring platform.
Knowledge of Splunk admin tasks such as installing, configuring, monitoring, and tuning.
Perform support on Splunk and monitoring platform components.
Partner with other cross-functional teams to identify tasks and drive them to completion on schedule.
Engage and assist other teams with issue identification and resolution using Splunk/monitoring platforms.
Ansible knowledge is preferred.
Python/Perl/Linux shell scripting/regex experience is highly preferable.
Basic SQL knowledge.
Any experience with Splunk premium apps such as Splunk ITSI is a big plus.
Superior communication and presentation skills.
Help the UNIX and Splunk administrators deploy Splunk across the UNIX and Windows environments.
Splunk training and/or certifications are a major plus.
Provide training and user support for the Splunk platform and other monitoring components.
Capable of working independently or in a team with some guidance.
Capable of documenting requirements and designed solutions.
Ability to multitask and solve complex technical problems.
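
For illustration only: the SDK/REST work described in the first requirement above might look like the following minimal Python sketch. It assumes the Splunk Enterprise SDK for Python (the splunk-sdk package, imported as splunklib); the host, credentials, and search string are placeholders, not details from this posting.

import splunklib.client as client
import splunklib.results as results

# Connect to the Splunk management port (8089 by default).
# Host and credentials below are placeholders.
service = client.connect(
    host="localhost",
    port=8089,
    username="admin",
    password="changeme",
)

# Run a one-shot search over the REST API and iterate over the parsed results.
oneshot = service.jobs.oneshot("search index=_internal | head 5", output_mode="json")
for result in results.JSONResultsReader(oneshot):
    if isinstance(result, dict):  # keep events, skip diagnostic messages
        print(result)

The same service object also exposes collections for apps, saved searches, and inputs, which is the usual entry point when packaging custom apps and modular or scripted inputs.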
Functional Knowledge:
Splunk development (dashboards, visualizations, statistical reports, scheduled searches, alerts, summary indexes, knowledge objects, and custom apps/modules) (mandatory).
Regex (mandatory; a short extraction sketch follows this list).
Shell scripting (mandatory).
Ansible (good to have).
Python (good to have).
ITSI (good to have).
SQL (good to have).
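
As a small, hedged illustration of the mandatory regex skill (written in Python, which is also listed above), the sketch below extracts fields from a sample access-log line. The log format and field names are assumptions for demonstration, not part of this posting.

import re

# Illustrative sample line; real onboarding work would target the actual log format.
SAMPLE_LINE = '10.0.0.5 - - [12/Nov/2018:10:15:32 +0000] "GET /health HTTP/1.1" 200 512'

# Named groups mirror the style of Splunk field extractions (rex / props.conf EXTRACT).
ACCESS_PATTERN = re.compile(
    r'(?P<clientip>\d{1,3}(?:\.\d{1,3}){3}) \S+ \S+ '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<uri>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<bytes>\d+)'
)

match = ACCESS_PATTERN.search(SAMPLE_LINE)
if match:
    print(match.groupdict())  # e.g. {'clientip': '10.0.0.5', 'status': '200', ...}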

How to Apply
Please send resumes and cover letters to:
Rajesh, CodeForce 360

Only qualified individuals under consideration will be contacted for an interview.

Skills:
Python, SQL, Shell Scripting
Contract
6+ Months

