Delta Technology & Management Services Pvt Ltd is an ethical company with a strong global presence, renowned for providing cost-effective, innovative IT solutions and business services around the world. From SAP to Microsoft, Java, and networking, we offer numerous disciplines, solutions, and packages to suit your requirements.
Our stringent processes and excellent working practices have earned us widely recognized accreditations such as ACCI, STPI, and ISO 9001, to mention a few.
The Administrator will be responsible for maintaining the Hadoop environment.
Responsible for maintaining the base operating systems [Unix/Linux, specifically CentOS and Red Hat], including full support for bash scripting, local repository configuration, OS-level proxy and network issue resolution, and file and user access control.
Responsible for the implementation and ongoing administration of the Hadoop infrastructure.
Propose and deploy new hardware and software environments required for Hadoop, and expand existing environments.
Design and install Hadoop environments.
Work with delivery teams to set up and test Impala, HBase, Hive, and Pig access for new users.
Responsible for security and access control in Hue and other related end-user access points [including operating knowledge of Sentry].
Cluster maintenance, including the creation and removal of nodes, using tools such as Cloudera Manager, Ambari, or MapR Control System.
Troubleshoot ongoing Hadoop issues as part of support.
Drive automation of Hadoop deployments, cluster expansion, and maintenance operations.
Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades as required.
Act as the point of contact for vendor escalations.
Transfer data between Hadoop and other data stores, including relational databases [administrative knowledge of Sqoop].
Job scheduling, monitoring, debugging, and troubleshooting [operational knowledge of Oozie].
Document ongoing operational activities using collaboration tools.
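Much of the OS maintenance and bash scripting work described above comes down to small shell utilities. As a minimal sketch (the threshold value and the choice of `df -P` are illustrative assumptions, not tools mandated by this role), a script like the following flags filesystems that are close to full:

```shell
#!/usr/bin/env bash
# Minimal sketch: list mount points whose usage meets or exceeds a threshold.
# The default threshold of 80% is a hypothetical value for illustration.
check_disk_usage() {
  local limit="${1:-80}"   # percent-full threshold (default 80)
  # POSIX-format df: column 5 is Use%, column 6 is the mount point
  df -P | awk -v limit="$limit" 'NR > 1 { gsub(/%/, "", $5); if ($5 + 0 >= limit) print $6 }'
}

# Example: report anything at or above 90% full
check_disk_usage 90
```

A cron entry would typically call such a function and mail or log the result, which is one common shape for the "ongoing operation activities" documented above.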
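The Sqoop data-transfer and Oozie scheduling duties above are usually driven from wrapper scripts. The sketch below only assembles the commands rather than executing them against a cluster; the JDBC URL, table name, paths, and Oozie endpoint are all placeholder assumptions:

```shell
#!/usr/bin/env bash
# Sketch: assemble (but do not run) typical Sqoop import and Oozie job commands.
# All connection details, table names, and paths are hypothetical placeholders.

build_sqoop_import() {
  local jdbc_url="$1" table="$2" target_dir="$3"
  printf '%s ' sqoop import \
    --connect "$jdbc_url" \
    --table "$table" \
    --target-dir "$target_dir" \
    --num-mappers 4
}

build_oozie_run() {
  local oozie_url="$1" props="$2"
  printf '%s ' oozie job -oozie "$oozie_url" -config "$props" -run
}

# In a real wrapper these strings would be executed (or logged) with error handling.
build_sqoop_import "jdbc:mysql://dbhost:3306/sales" "orders" "/data/raw/orders"; echo
build_oozie_run "http://oozie-host:11000/oozie" "job.properties"; echo
```

Separating command assembly from execution like this makes the wrapper easy to dry-run and to log, which helps when troubleshooting scheduled jobs.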
Qualifications / Additional Requirements
At least 2 years of experience in Hadoop administration and at least 1 year of experience in a Linux environment.
General operational expertise, including good troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networking.
Working knowledge of Hadoop ecosystem tools such as Hue, Hive, HBase, Oozie, Sentry, YARN, Spark, Kafka, Flume, Sqoop, Impala, and the Cloudera distribution.
Able to deploy a Hadoop cluster, add and remove nodes, keep track of jobs, monitor critical parts of the cluster, configure NameNode high availability, schedule and configure jobs, and take backups.
Good working knowledge of shell scripting.
Strong organizational and communication skills
Proven analytical and problem-solving skills
Demonstrated ability to learn new skills quickly
Ability to work and contribute to a team environment