• Build data pipelines to ingest, clean, and consolidate data from multiple sources
  • Collaborate with external data partners to ensure we can ingest their data in an automated and efficient manner
  • Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies
  • Work closely with business, product, and engineering teams to strengthen data-driven thinking in the culture
  • Bachelor’s degree (or higher) in computer science, computer engineering, information systems or equivalent practical experience
  • 4+ years of experience in a data engineer role, including familiarity with the following software/tools:
  • Familiarity with at least one of these systems:
    - Airflow
    - Hadoop
    - Spark
    - Kafka
  • Experience with both SQL and NoSQL database technologies
  • Experience with cloud platforms such as AWS and/or Google Cloud
  • Experience developing RESTful web services
  • Expertise in Python and SQL
  • Knowledge of Go is a plus
  • Working-level knowledge of data modelling and machine learning is a plus
  • Good understanding of system design, data structures and algorithms, and data access and storage
  • Highly accountable with a strong sense of ownership; collaborative attitude and works well in a team
  • Experience with smart contract languages such as Solidity or Scilla is a plus