Group Ventures & Partnership - Data Engineer

CIMB Malaysia
in Kuala Lumpur, Kuala Lumpur, Malaysia
Permanent, Full time
Competitive
Group Ventures & Partnership - Data Engineer
The Data Engineer runs the code, pipelines, and infrastructure that extract, process and prepare every piece of data generated or consumed by CIMB's systems. They do this by developing, maintaining, and testing the infrastructure for data generation. Data engineers work closely with data scientists and are largely in charge of architecting the solutions that enable data scientists to do their jobs. The job responsibilities are as follows:
  • Designing, evaluating, and implementing a framework that can adequately handle the needs of a rapidly growing data-driven company.
  • Architecting and scaling data analytics infrastructure on big data platforms; finding opportunities to improve and optimize workloads and processes so that performance levels support the continuous, accurate, reliable and timely delivery of key insights.
  • Building, designing and deploying ETL pipelines.
  • Managing continuous uptime of data services by implementing high-availability tools and best practices.
  • Managing the continuous testing and deployment of data pipelines, new data services, and analytical reporting dashboards.
  • Spearheading the development of systems, architectures, and platforms that can scale to the three Vs of big data (volume, velocity, variety).
  • Partnering with data scientists and engineers by leading the movement, cleaning and normalizing of subsets of data of interest, preparing rich data for deeper analysis into how to improve the user experience for CIMB customers around the region.
  • Working with data and analytics experts to strive for greater functionality in our data systems.


Qualifications

  • Candidate should have at least 3 years of hands-on experience, preferably in data infrastructure.
  • Experience in container management and orchestration tools like ECS, Kubernetes, and Mesos is compulsory.
  • Proficiency with Hadoop, Kafka, and Spark in a large-scale environment.
  • Well versed in setting up continuous integration and deployment for big data or other projects.
  • Real passion for data, new data technologies, and discovering new and interesting solutions to the company's data needs.
  • Comfortable with Linux Systems Administration.
  • Deep understanding of databases and engineering best practices, including error handling and logging, system monitoring, building human-fault-tolerant pipelines, knowing how to scale up, continuous integration, database administration, data cleaning, and maintaining a deterministic pipeline.
  • A degree or higher in Computer Science, Electronics or Electrical Engineering, Software Engineering, Information Technology or other related technical disciplines.
