

Big Data Engineer (Remote)

  • Location: Remote
  • Sector: Technology
  • Job type: Permanent
  • Salary: Competitive
  • Contact: Michelle Lam
  • Contact email: m.lam@hamlynwilliams.com


  • Job ref: CC1-001
  • Published: about 2 months ago
  • Expiry date: 2022-04-22
  • Start date: ASAP

My client is the global blockchain company behind the world’s largest digital asset exchange by trading volume and users, serving a greater mission to accelerate cryptocurrency adoption and increase the freedom of money.

Are you looking to be a part of the most influential company in the blockchain industry and contribute to the cryptocurrency revolution that is changing the world?

Responsibilities:
• Responsible for data warehouse construction, including data access, data modeling, and data services
• Participate in the architecture design, development, release, operation, and maintenance of the company's real-time computing platform
• Responsible for data collection, metadata extraction, data cleaning, data modeling, and API development, and for building data pipelines
• Responsible for the development iteration and code quality of the data management SDK and tool set, reducing the difficulty of data management and improving the degree of automation in the data pipeline
• Responsible for annotation operations analysis, event tracking analysis, data set analysis, and other data analysis work, and for building a unified data warehouse
• Responsible for working with the research team to meet their data cleaning, data modeling, and data analysis requirements
 
Qualifications:
• Bachelor's degree or above in computer science; 3+ years of working experience is preferred; grading is based on ability and experience
• Solid computer science fundamentals, with a systematic understanding of operating systems, databases, data structures, etc.
• Proficient in more than one programming language, such as Java, Golang, or Python
• Mastery of the big data technology stack (HDFS, Hive, Elasticsearch, HBase, Impala, Spark/Flink, Kafka, Airflow, Sqoop, etc.), with rich experience applying and developing big data tools such as Hadoop, HBase, Hive, and Flink
• Experience in troubleshooting and tuning; experience studying component source code is preferred
• Solid SQL skills, an understanding of how SQL execution works under different frameworks, familiarity with structured and unstructured big data analysis tools, and rich hands-on experience
• Rich experience in big data development, including but not limited to data acquisition systems, data cleaning, real-time analysis systems, and multi-service data warehouses
• Strong learning and problem-solving ability, able to quickly grasp business knowledge and resolve technical problems
• Fluent in English
 
If this role is of interest to you, or if you are interested in exploring other opportunities within the market, you are welcome to reach out to Michelle Lam directly at m.lam@hamlynwilliams.com or +852 27776467 for further discussion.
 
#LI-ML2