Senior Data Architect / Engineer
Job Reference: JOB-69cbe3117600e
Date Posted: 2026-03-31T15:06:57+00:00
Employer: Systems For Development (S4D) Consulting LLC
Employer Logo: https://cdn.greatzambiajobs.com/jsjobsdata/data/employer/comp_10403/logo/Systems%20For%20Development%20(S4D)%20Consulting%20LLC.png
Website: https://s4dconsulting.com/
Job Type: FULL_TIME
Duty Station: Lusaka, Lusaka 10101, Zambia
Industry: Professional Services
Category: Science & Engineering, Computer & IT
Application Deadline: 2026-04-08T17:00:00+00:00
Work Hours: 8
Base Salary: Not Disclosed
Role Overview
S4D Consulting LLC is seeking a Senior Data Architect / Engineer to lead the design and delivery of the Predictive Stock Intelligence Engine under the Zambia National Digital Health Intelligence Hub. The role focuses on data architecture, data modelling, and the engineering of robust data pipelines and storage infrastructure to support commodity stock-out prediction and supply chain decision-making across Ministry of Health (MoH) facilities. The incumbent will work within S4D's delivery team and operate in the MoH technical environment.
Key Responsibilities
Data Architecture & Platform Design
- Design and document the end-to-end data platform architecture, covering ingestion, storage, transformation, and data serving layers.
- Define and enforce a medallion data layering strategy (Bronze, Silver, Gold) within a SQL-based data warehouse or lakehouse environment.
- Select and configure core platform technologies (e.g. PostgreSQL, Delta Lake, Apache Airflow) in line with MoH ICT standards and project requirements.
- Develop and maintain data models, entity-relationship diagrams, and data dictionaries for all platform datasets.
- Establish and govern data contracts between source systems and the central warehouse to ensure consistency and reliability.
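For orientation only, and not as a prescribed implementation: the medallion layering described above could be bootstrapped along the lines of the minimal sketch below, assuming PostgreSQL as the warehouse engine and psycopg2 as the client library. All schema, table, and column names are hypothetical placeholders.

```python
# Minimal illustrative sketch: bootstrap Bronze/Silver/Gold schemas in PostgreSQL.
# Assumes psycopg2 as the client library; names are hypothetical placeholders.
import psycopg2

MEDALLION_DDL = """
CREATE SCHEMA IF NOT EXISTS bronze;  -- raw records, stored as received from source systems
CREATE SCHEMA IF NOT EXISTS silver;  -- cleaned, conformed, deduplicated records
CREATE SCHEMA IF NOT EXISTS gold;    -- aggregated, analytics-ready serving tables

CREATE TABLE IF NOT EXISTS bronze.stock_levels_raw (
    ingested_at   timestamptz NOT NULL DEFAULT now(),
    source_system text        NOT NULL,
    payload       jsonb       NOT NULL
);

CREATE TABLE IF NOT EXISTS silver.stock_levels (
    facility_id   text    NOT NULL,
    product_code  text    NOT NULL,
    reported_date date    NOT NULL,
    stock_on_hand integer NOT NULL CHECK (stock_on_hand >= 0),
    PRIMARY KEY (facility_id, product_code, reported_date)
);
"""

def apply_medallion_ddl(dsn: str) -> None:
    """Apply the layer schemas and example tables; idempotent thanks to IF NOT EXISTS."""
    conn = psycopg2.connect(dsn)
    try:
        with conn, conn.cursor() as cur:  # commits on success, rolls back on error
            cur.execute(MEDALLION_DDL)
    finally:
        conn.close()
```

A Gold layer would typically then hold aggregated, analytics-ready tables or views (for example, facility-level stock-out risk summaries) derived from the Silver tables.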
Data Engineering & Pipeline Development
- Architect and build ETL/ELT pipelines to ingest MoH supply chain datasets (stock levels, consumption records, and delivery data) into the central warehouse.
- Develop and schedule pipeline workflows using Apache Airflow, Prefect, or equivalent orchestration tooling with Python 3.10+.
- Implement data quality checks, lineage tracking, and error-handling across all ingestion and transformation processes.
- Optimise query performance and storage efficiency across the data warehouse, with attention to scalability as data volumes grow.
- Version-control all pipeline and transformation code using Git, with documented deployment and rollback procedures.
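Purely as an illustration of the orchestration pattern named above, a pipeline of this kind might be expressed with the Apache Airflow TaskFlow API roughly as in the sketch below (assuming Airflow 2.4+ and Python 3.10+); the task bodies, field names, and quality rule are hypothetical placeholders.

```python
# Illustrative sketch only (Airflow 2.4+ TaskFlow API, Python 3.10+).
# Dataset fields and the validation rule are hypothetical placeholders.
from datetime import datetime
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2026, 1, 1), catchup=False,
     tags=["supply-chain", "bronze"])
def stock_levels_ingest():
    @task
    def extract() -> list[dict]:
        # Placeholder: pull raw stock-level records from a source-system export.
        return [{"facility_id": "F001", "product_code": "P123",
                 "reported_date": "2026-03-30", "stock_on_hand": 40}]

    @task
    def validate(rows: list[dict]) -> list[dict]:
        # Basic data quality gate: fail the run if required keys are missing.
        required = {"facility_id", "product_code", "reported_date", "stock_on_hand"}
        bad = [r for r in rows if not required <= r.keys()]
        if bad:
            raise ValueError(f"{len(bad)} record(s) failed the completeness check")
        return rows

    @task
    def load_bronze(rows: list[dict]) -> None:
        # Placeholder: write validated rows to the Bronze layer of the warehouse.
        print(f"Loaded {len(rows)} rows into bronze.stock_levels_raw")

    load_bronze(validate(extract()))


stock_levels_ingest()
```

In practice the extract and load tasks would call whichever source-system connectors and warehouse loaders the project adopts, and failed quality checks would surface through Airflow's standard retry and alerting mechanisms.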
Data Governance & Documentation
- Produce and maintain comprehensive technical documentation for the platform architecture, data models, pipeline specifications, and data lineage.
- Apply data governance practices including data classification, metadata management, and compliance with MoH data policies.
- Conduct architecture reviews and contribute to technical governance forums within the project.
- Provide technical input to project reporting and stakeholder briefings as required.
Qualifications & Experience
Essential
- Bachelor of Science in Computer Science, Software Engineering, or a closely related field.
- Minimum 6 years of experience in cloud data engineering and data architecture at production scale.
- Demonstrated experience designing and implementing data warehouse or lakehouse solutions, including medallion-layer data modelling.
- Strong proficiency in Python (3.10+) and SQL for data engineering and transformation work.
- Solid experience with cloud data platforms (e.g. AWS, Azure, or GCP) including managed database, storage, and compute services.
- Hands-on experience with pipeline orchestration tools such as Apache Airflow or Prefect.
- Strong knowledge of relational databases (PostgreSQL, MySQL, or SQL Server) in production environments.
- Proficiency with Git for version control of data platform code and configuration.
Desirable
- Experience working within public health, government, or NGO data environments in sub-Saharan Africa.
- Familiarity with health commodity management or supply chain systems (e.g. eLMIS, DHIS2).
- Knowledge of Delta Lake or open-table formats.
- Experience with PySpark for large-scale data transformation.
- Exposure to data catalogue or metadata management tools.
Competencies & Personal Attributes
- Strong systems-thinking ability, with a track record of translating business requirements into scalable data architecture.
- Detail-oriented approach to data modelling, documentation, and code quality.
- Ability to work independently and manage delivery in resource-constrained environments.
- Clear written and verbal communication skills, with the ability to convey technical concepts to non-technical stakeholders.
- Collaborative and constructive in cross-functional teams alongside analysts, engineers, and programme staff.
- Commitment to data quality, platform reliability, and sound engineering practice.
Performance Indicators
- Data platform architecture documented and approved by technical leads within the agreed onboarding period.
- ETL/ELT pipelines operational and reliably loading MoH supply chain data into the central warehouse with documented quality checks.
- Medallion-layer data models (Bronze, Silver, Gold) implemented and maintained with current data dictionaries.
- All pipeline and architecture code version-controlled and documented in accordance with project standards.
- Technical documentation and data governance artefacts produced and kept up to date throughout the engagement.
Key Skills
- Python (3.10+)
- SQL
- Cloud data platforms (AWS, Azure, or GCP)
- Pipeline orchestration tools (Apache Airflow, Prefect)
- Relational databases (PostgreSQL, MySQL, or SQL Server)
- Git
- Delta Lake (desirable)
- PySpark (desirable)
- Data catalogue or metadata management tools (desirable)
Experience in Months: 72
Level of Education: Bachelor's degree
Job application procedure
Candidates should submit a CV and cover letter. Closing date: April 8, 2026. Applications should reference specific projects or platforms the candidate has designed or built. Submissions that do not evidence the required technical background will not be progressed.
Shortlisted candidates will complete a technical assessment before the interview stage.