DevOps Administrator
The DevOps Administrator works with the latest cloud-native and distributed data platforms. In this hands-on role, you will administer Hadoop, ensure the performance, reliability, and optimization of big data clusters, and recommend the resources needed to deploy and optimize big data technologies. If you have a passion for big data and for using cutting-edge technology to produce tangible business results, we want to hear from you.
Typical Duties and Responsibilities
- Install, manage, and configure big data clusters
- Manage and monitor the performance of distributed systems and middleware applications
- Configure and optimize the Hadoop environment
- Manage, troubleshoot, and optimize Java applications
- Oversee Hadoop security and encryption
- Manage LDAP, Active Directory, and Kerberos (KDC)
- Manage HDFS transparent data encryption, LUKS, and PKI techniques
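The last duty above, HDFS transparent data encryption, follows a small, repeatable workflow. As a minimal sketch only (not part of the role description), the Python script below drives the standard `hadoop key` and `hdfs crypto` CLIs to create an encryption zone, assuming a cluster with the Hadoop KMS already configured; the key name and path are hypothetical.

```python
# Minimal sketch: creating an HDFS transparent data encryption (TDE) zone.
# Assumes the Hadoop KMS is configured and the `hadoop`/`hdfs` CLIs are on PATH.
import subprocess

KEY_NAME = "projects-key"     # hypothetical KMS key name
ZONE_PATH = "/data/projects"  # hypothetical HDFS directory to encrypt

def run(cmd):
    """Run a Hadoop CLI command, echo it, and fail loudly on a non-zero exit."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Create an encryption key in the Hadoop KMS.
run(["hadoop", "key", "create", KEY_NAME])

# 2. Create the (empty) directory that will become the encryption zone.
run(["hdfs", "dfs", "-mkdir", "-p", ZONE_PATH])

# 3. Mark the directory as an encryption zone backed by the key.
run(["hdfs", "crypto", "-createZone", "-keyName", KEY_NAME, "-path", ZONE_PATH])

# 4. Verify by listing the cluster's encryption zones.
run(["hdfs", "crypto", "-listZones"])
```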
Education
- Bachelor’s degree in computer science, information technology, or a related field
Required Skills and Experience
- 5+ years of experience as a Linux systems or Java middleware engineer installing, configuring, and managing Linux and optimizing OS performance, with a focus on distributed computing
- Knowledge of integrating LDAP/Active Directory user authentication backends with the Linux OS
- Expertise in a Hadoop distribution, including cluster installation and configuration
- Expertise in the fundamentals of Hadoop (HDFS, Hive, YARN), as well as one or more ecosystem products and languages, such as HBase, Spark, Impala, Search, and Kudu
- Experience with performance optimization for Java applications
- Experience with cloud-based big data clusters
- Knowledge of infrastructure automation (see the sketch following this list)
- Working knowledge of scoping complex, large-scale technology infrastructure projects
- Demonstrated expertise working with key stakeholders and clients to translate business needs and use cases into Hadoop solutions
- Outstanding ability to manage client relationships, handle project escalations, and participate in executive steering meetings
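Infrastructure automation and day-to-day cluster monitoring, both referenced above, often come down to small scripts run against the NameNode's JMX metrics endpoint. The sketch below is illustrative only: the hostname, port (9870 is the Hadoop 3.x default), and exact metric names are assumptions that vary by distribution and version.

```python
# Minimal sketch: pulling basic HDFS health metrics from the NameNode JMX servlet.
# Hostname, port (9870 is the Hadoop 3.x default), and metric names are assumptions.
import json
from urllib.request import urlopen

NAMENODE_JMX = "http://namenode.example.com:9870/jmx"  # hypothetical host

def fetch_bean(query):
    """Return the first JMX bean matching the given query, or an empty dict."""
    with urlopen(f"{NAMENODE_JMX}?qry={query}") as resp:
        beans = json.load(resp).get("beans", [])
    return beans[0] if beans else {}

fs = fetch_bean("Hadoop:service=NameNode,name=FSNamesystem")

# Report a few headline numbers; .get() keeps the script tolerant of attribute
# names that differ across Hadoop versions.
used = fs.get("CapacityUsed", 0)
total = fs.get("CapacityTotal", 1)
print(f"HDFS capacity used: {used / total:.1%}")
print(f"Missing blocks:     {fs.get('MissingBlocks', 'n/a')}")
print(f"Under-replicated:   {fs.get('UnderReplicatedBlocks', 'n/a')}")
```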
Preferred Qualifications
- Knowledge of Ansible and Git