Overview

In KPMG's Management Consulting practice, we do not limit ourselves to either strategy or implementation. Instead, we deliver both.
Our team in Hong Kong is the fastest-growing division within KPMG China: a young, enthusiastic team that consistently pushes itself to succeed. Since our creation, we have developed in-depth knowledge across a remarkably broad range of sectors and services.
Our Data & Analytics team helps clients develop and execute their strategies by optimising the use of information. We enable businesses to realise their objectives through data-driven insights, delivering the right data to the right people at the right time.
As a Data Engineer, you will build enterprise-grade data solutions that execute clients' vision:
- Understand clients' current-state processes and bring them to future-state improvements and solutions
- Develop, construct and test architectures such as databases and large-scale processing solutions
- Create ETL/ELT pipelines to handle data from a variety of sources and formats, covering both batch and streaming ETL
- Advise on the selection of data solutions and development tools
- Develop APIs and employ a variety of languages and tools, e.g. scripting languages, to integrate systems
- Build transformation and validation code that applies complex data aggregations and calculations, in programming languages suited to each project's scope and requirements
- Write technical specifications documenting the solution's requirements
- Plan, design and lead the implementation of a large-scale data platform
- Work with the technical architects and application designers to understand the data conversion requirements fully and to design the conversion procedures and applications
- Analyse new data sources and work with stakeholders to integrate new data into existing pipelines and models
- Develop go-to-market strategies with team leaders and expand team capabilities
You will have
- A Bachelor's degree (or higher) in mathematics, statistics, computer science, engineering or a related field
- At least 3 years of working experience with data pipelines, e.g. Azure Data Factory / Airflow / Informatica / Databricks
- Hands-on programming skills, e.g. SQL / Java / Python / C++ / Scala / SAS / Kafka
- Practical experience implementing large-scale enterprise data solutions, e.g. a data lake, data warehouse or data mesh
- Deep understanding of cloud computing and data technologies, business drivers, emerging computing trends, and deployment options, e.g. Azure / AWS / Google Cloud / Alibaba Cloud / Tencent Cloud
- Experience with Agile & DevOps methodologies
- Knowledge of market trends and emerging technologies such as Big Data, Blockchain, the Internet of Things and Artificial Intelligence would be a bonus
- Strong problem-solving skills and logical thinking
- Excellent multitasking skills and task management strategies
- Ability to complete milestones and work toward multiple deadlines simultaneously
- Confident in decision-making, with the ability to explain processes or choices as needed
- Excellent communication and interpersonal skills
- Proficient in both spoken and written English and Chinese