Software Developer – ETL (RQ08913)

  • Contract
  • Toronto
  • Applications have closed

Ministry of Health

Description

NOTE

Assignment Type: This position is currently listed as “Hybrid”; consultants will be required to work onsite at the work location 3 days a week and from home 2 days a week. The details of this arrangement will be confirmed with the Hiring Manager.

 

Extension/Amendment Attestation: Extension(s) are only allowed using unused days/funds left on the contract. No additional funds will be added beyond the maximum contract value. The Statement of Work (SOW) shall expire on March 31, 2026. HSC may exercise its option(s) to extend a SOW beyond March 31, 2026 using unused days/funds left on the contract. Such extension(s) will be allowable only if the Master Service Agreement is extended beyond April 5, 2026, and will be upon the same terms, conditions, and covenants contained in the SOW.

=======================================

Responsibilities:

  • Design technical solutions for data acquisition and storage into our centralized data repository.
  • Develop ELT scripts, design data-driven logic, and conduct unit testing.
  • Conduct database modeling and design to improve overall performance.
  • Produce design artifacts and documentation that will allow future support of the implemented solutions.
  • Investigate and resolve incidents, identifying whether the problem is caused by the data loading code or by bad data received from the data provider.
  • Execute service requests related to routine and ad-hoc data loads.
  • Perform data quality checks and report on data quality issues.

Skills and Experience Requirements

Technical Skills

10+ years of experience in:

·       Designing and developing scalable Medallion Data Lakehouse architectures.

·       Expertise in data ingestion, transformation, and curation using Delta Lake and Databricks.

·       Experience integrating structured and unstructured data sources into star/snowflake schemas.

·       Building, automating, and optimizing complex ETL/ELT pipelines using Azure Data Factory (ADF), Databricks (PySpark, SQL, Delta Live Tables), and dbt.

·       Implementing orchestrated workflows and job scheduling in Azure environments.

·       Strong knowledge of relational (SQL Server, Synapse, PostgreSQL) and dimensional modeling.

·       Advanced SQL query optimization, indexing, partitioning, and data replication strategies.

·       Experience with Apache Spark, Delta Lake, and distributed computing frameworks in Azure Databricks.

·       Working with Parquet, ORC, and JSON formats for optimized storage and retrieval.

·       Deep expertise in Azure Data Lake Storage (ADLS), Azure Synapse Analytics, Azure SQL, Event Hubs, and Azure Functions.

·       Strong understanding of cloud security, RBAC, and data governance.

·       Proficiency in Python (PySpark), SQL, and PowerShell for data engineering workflows.

·       Experience with CI/CD automation (Azure DevOps, GitHub Actions) for data pipelines.

·       Implementing data lineage, cataloging, metadata management, and data quality frameworks.

·       Experience with Unity Catalog for managing permissions in Databricks environments.

·       Expertise in Power BI (DAX, data modeling, performance tuning).

·       Experience in integrating Power BI with Azure Synapse and Databricks SQL Warehouses.

·       Familiarity with MLflow, AutoML, and embedding AI-driven insights into data pipelines.

50 points

Core Skills and Experience

·       10+ years of experience with technical systems specifications and translating them into working, tested applications for large, complex, mission-critical applications.

·       10+ years of experience in technical analysis, program code, detailed programming and report specifications, program design, writing and/or generating code, and conducting unit tests.

·       10+ years of experience with software on various computing platforms, operating systems, database technologies, communication protocols, middleware, and gateways.

·       10+ years of experience in developing and maintaining system design models, technical documentation, and specifications.

·       5+ years of experience in conducting technical evaluation and assessment of options for technical design issues, application configuration aspects and integration capabilities, related tools and utilities, and gap analysis of integration components against technical requirements/specifications/documentation.

·       End-to-end SDLC experience.

30 points

General Skills

·       Demonstrated
strong leadership and people management skills.

·       Proven
technical leadership skills with ability to identify areas for improvement,
and recommend solutions.

·       Exceptional
analytical, problem solving and decision-making skills.

·       Demonstrated
strong interpersonal, verbal and written communication, and presentation
skills.

·       Proven
troubleshooting and critical thinking experience

·       Demonstrated
ability to apply strong listening skills to facilitate issue resolution.

·       Effective
consulting skills to engage with all stakeholders with proven track record
for building strong working relationships.

·       Strong
interpersonal, facilitation and negotiation skills with ability to build
rapport with stakeholders and drive negotiations to a successful outcome.

·      
Excellent customer service skills, including tact and
diplomacy to ensure client needs are managed effectively.

·      
A motivated, flexible, detail-oriented and creative team
player with perseverance, excellent organization and multi-tasking abilities,
and a proven track record for meeting strict deadlines.

15 points

Public Sector/Healthcare Experience

·       Knowledge of Public Sector Enterprise Architecture artifacts (or similar), processes, and practices, and the ability to produce technical documentation that complies with industry-standard practices.

·       Knowledge of Project Management Institute (PMI) and Public Sector I&IT project management methodologies.

·       Knowledge and understanding of Ministry policy and IT project approval processes and requirements.

·       Experience with large, complex IT health-related projects.

5 points

Supplier Comments

Closing Date – 2025-03-14, 4:30 p.m.

Maximum Number of Submissions – one (1)

Hybrid – Candidate MUST be able to work 3 days onsite and 2 days remote

MUST HAVES:

10+ years of experience in:

·       Designing and developing scalable Medallion Data Lakehouse architectures.

·       Expertise in data ingestion, transformation, and curation using Delta Lake and Databricks.

·       Experience integrating structured and unstructured data sources into star/snowflake schemas.

·       Building, automating, and optimizing complex ETL/ELT pipelines using Azure Data Factory (ADF), Databricks (PySpark, SQL, Delta Live Tables), and dbt.

·       Implementing orchestrated workflows and job scheduling in Azure environments.

·       Strong knowledge of relational (SQL Server, Synapse, PostgreSQL) and dimensional modeling.

·       Advanced SQL query optimization, indexing, partitioning, and data replication strategies.

·       Experience with Apache Spark, Delta Lake, and distributed computing frameworks in Azure Databricks.

·       Working with Parquet, ORC, and JSON formats for optimized storage and retrieval.

·       Deep expertise in Azure Data Lake Storage (ADLS), Azure Synapse Analytics, Azure SQL, Event Hubs, and Azure Functions.

·       Strong understanding of cloud security, RBAC, and data governance.

·       Proficiency in Python (PySpark), SQL, and PowerShell for data engineering workflows.

·       Experience with CI/CD automation (Azure DevOps, GitHub Actions) for data pipelines.

·       Implementing data lineage, cataloging, metadata management, and data quality frameworks.

·       Experience with Unity Catalog for managing permissions in Databricks environments.

·       Expertise in Power BI (DAX, data modeling, performance tuning).

·       Experience in integrating Power BI with Azure Synapse and Databricks SQL Warehouses.

·       Familiarity with MLflow, AutoML, and embedding AI-driven insights into data pipelines.
