What You’ll Do
Own Data Pipelines End-to-End
Design, build, and maintain scalable, high-performance data pipelines that support critical analytics and machine-learning workloads.
Architect Modern Data Infrastructure
Develop robust, cloud-native data platforms using tools like Spark, Airflow, dbt, or similar technologies.
Enable Advanced Analytics
Ensure data availability, quality, and modelling consistency for analysts, data scientists, and business stakeholders.
Drive Best Practices
Champion data governance, observability, testing, and documentation across the data ecosystem.
Collaborate Cross-Functionally
Work closely with engineering, product, and business teams to understand data needs and deliver impactful solutions.
Optimise for Scale & Performance
Continuously refine pipelines and architecture to support growth, new data sources, and demanding workloads.
Who You Are
- You’re a builder who loves creating reliable, scalable systems from the ground up.
- You have a strong understanding of data modelling, workflows, and distributed systems.
- You care deeply about data quality, reproducibility, and long-term maintainability.
- You thrive in fast-moving environments with autonomy, ownership, and accountability.
- You communicate clearly and work well with both technical and non-technical stakeholders.
Tech Stack You May Work With
Data Processing & Orchestration
- Spark, Flink, Beam
- Airflow, Prefect, Dagster
Languages
- Python, SQL, Scala, Java
Cloud & Infrastructure
- AWS / GCP / Azure
- S3, BigQuery, Redshift, Snowflake, Delta Lake
- Terraform, Docker, Kubernetes
Data Modelling & Transformation
- dbt, Data Vault, star schemas, event-driven designs
Observability & Tooling
- Prometheus, Datadog, Grafana
- CI/CD pipelines
Qualifications
Must-Have
- 4+ years of experience in data engineering or backend engineering with strong data exposure
- Production experience building and maintaining data pipelines
- Strong proficiency in SQL and Python
- Hands-on experience with cloud data platforms
- Solid understanding of data modelling, ETL/ELT workflows, and distributed systems
- Ability to design scalable data architectures from scratch
Nice-to-Have
- Exposure to ML pipelines or MLOps
- Experience with real-time/streaming systems (Kafka, Kinesis, Pub/Sub)
- Familiarity with data governance, lineage, and quality frameworks
- Experience in high-growth or product-led environments
What We Offer
At Globawise, we don’t just offer jobs – we provide launchpads for your career:
- Top-Tier Compensation – Competitive salary + benefits tailored to your experience.
- Remote-First – Work from anywhere with a team that values outcomes over hours.
- Accelerated Growth – Direct access to technical leadership and opportunities for rapid advancement.
- Meaningful Work – Build the data foundations powering next-generation products and decision making.
Ready to Shape the Future of Data?
If you think boldly, build cleanly, and want to work where your impact is felt across the entire organisation, we’d love to meet you.
Apply now and let’s build something exceptional – together.