Location: CN-Shenzhen
Shift: Standard – 40 Hours (China)
Scheduled Weekly Hours: 40
Worker Type: Permanent
Job Summary: AVP, Information Technology
Job Duties:
- Build and maintain cloud-based Big Data and Analytics Platforms, including Enterprise Data Lake, Data Governance and Management Platforms, Self-service BI and Augmented Analytics Platform, etc.
- Proactively manage production services and data pipelines to ensure service availability and overall system health (a flavour of such a check is sketched after this list)
- Create and own build and test infrastructure and processes from scratch
- Minimise operational effort by automating processes wherever possible so they are repeatable and reliable
- Engage with Data Architects and Product Engineers to design and continuously refine platform offerings and architecture
- Work closely with Data Engineers on product and service launches, including release planning, capacity management and operational review
- Monitor the production environment and apply security protections in a timely manner
- Build and maintain a knowledge base as part of daily work
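To give a flavour of the proactive monitoring and automation described above, below is a minimal, illustrative Python sketch of a pipeline health check. It is not part of the role's actual tooling: it assumes boto3 is installed and AWS credentials are configured, and the "data-pipeline-" alarm-name prefix is hypothetical.

import boto3

def report_pipeline_alarms(prefix="data-pipeline-"):
    # Hypothetical check: list CloudWatch alarms currently firing
    # whose names start with the given pipeline prefix.
    cloudwatch = boto3.client("cloudwatch")
    paginator = cloudwatch.get_paginator("describe_alarms")
    firing = []
    for page in paginator.paginate(AlarmNamePrefix=prefix, StateValue="ALARM"):
        for alarm in page["MetricAlarms"]:
            firing.append((alarm["AlarmName"], alarm["StateReason"]))
    for name, reason in firing:
        print(f"{name}: {reason}")
    return firing

if __name__ == "__main__":
    report_pipeline_alarms()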
Requirements:
The ideal candidate should have extensive experience (8+ years) in DevOps or SRE and should be a complex problem solver. You should be able to build and maintain cloud-native platforms on AWS, Azure or GCP and have a strong interest in the latest data technologies. You should be hands-on in scripting (preferably Python) and in developing infrastructure code (preferably Terraform); a brief Python sketch of this kind of work follows the list below.
- Data-driven thinking
- Experience in DevOps toolchains (Jenkins, Git, Ansible, etc.)
- Experience in test automation framework (Robot, Selenium, etc.)
- Experience in ELK, CloudWatch or other application monitoring tools
- Experience in container technologies (OpenShift, Docker or k8s)
- Experience in configuring and managing routers, switches and firewalls
- Knowledge of building and operating data infrastructure on AWS, Azure or GCP
- Knowledge of network and protocol security mechanisms (TCP/IP and related protocol management)
- Knowledge of data classification and sensitivity labelling
- Knowledge of access control models (MAC/RBAC/ABAC or similar), their life cycle, and the steps to implement access control
- Excellent communication skills and experience working in cross-functional teams
- Experience in Apache Hadoop and Spark is a plus
- Experience in time-series DB and analysis is a plus
- Experience in in-memory data processing / databases is a plus
- Experience in Data Security and protection tools is a plus
- Solid grounding in the Financial Services industry is a plus
- Solid knowledge of SQL is a plus
- Understanding of statistical models, machine learning and graph analysis is a plus
- Good command of written and spoken English and Chinese. Proficiency in Mandarin is an advantage.
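As an illustration of the hands-on Python scripting mentioned above, the sketch below applies a sensitivity tag to an S3 object, in the spirit of the data classification and labelling requirements. It is a minimal, hypothetical example: it assumes boto3 and configured AWS credentials, and the bucket, key and label values are invented.

import boto3

SENSITIVITY_LEVELS = {"public", "internal", "confidential", "restricted"}

def label_object(bucket, key, level):
    # Reject labels outside the hypothetical classification scheme.
    if level not in SENSITIVITY_LEVELS:
        raise ValueError(f"unknown sensitivity level: {level}")
    s3 = boto3.client("s3")
    # Replaces the object's entire tag set with a single classification tag.
    s3.put_object_tagging(
        Bucket=bucket,
        Key=key,
        Tagging={"TagSet": [{"Key": "sensitivity", "Value": level}]},
    )

if __name__ == "__main__":
    label_object("example-data-lake-bucket", "raw/events/part-0000.parquet", "confidential")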
Company Introduction: