Data Engineer
As a Data Engineer you can expect to…
- Design and develop data pipelines and ETL processes to ingest, transform, and load data from various sources into data warehouses or data lakes
- Build and maintain data models and data structures to support analytical and reporting needs
- Work with data scientists and analytics teams to help deploy data solutions and machine learning workflows
- Collaborate with cross-functional teams to integrate data engineering solutions into client applications and systems
- Support optimization and tuning of data pipelines and systems for performance and scalability
- Help ensure data quality and integrity throughout the data lifecycle
- Stay current with industry trends and best practices in data engineering and apply them to improve data processing efficiency
- Participate in project and client meetings and support successful delivery of solutions
- Follow best practices, security guidelines, and compliance requirements

You’re great at…
- Managing multiple projects, tasks, and priorities simultaneously in a highly collaborative team environment
- Building efficient pipelines to move data from source systems such as D365 F&O, CE, Salesforce, Workday, or SAP
- Creating reusable data extraction, transformation, and loading processes using Azure Data Factory or Synapse
- Supporting improvements to data flows to increase throughput and reliability for downstream systems
- Working with Azure Data Factory (ADF) or Synapse to build pipelines from cloud and on-prem data sources for full and incremental loads into ADLS
- Using PowerShell scripting and ARM templates (or learning them quickly)
- Understanding core cloud concepts such as IaaS, PaaS, and SaaS
- Supporting CI/CD and automation workflows in cloud environments
- Ensuring data quality and integrity throughout the data lifecycle, and helping maintain data infrastructure for analytics and reporting
- Prioritizing the development and implementation of data pipelines, ETL processes, and data infrastructure to support data analytics and reporting needs
- Ensuring project deliverables meet quality standards
- Solving complex problems methodically and creatively, and learning new concepts quickly and thoroughly
- Promoting the mission and Shared Values of our company

Sound interesting? If so, you’ll have…
- Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience)
- 2–4 years of experience with data platforms, data warehouses, data lakes, and ETL
- Hands-on experience with some of the following: ADF, Spark, Databricks, Python, Azure Synapse, ADLS, Azure Functions
- Exposure to performance tuning and optimization in Databricks or Synapse (preferred, not required)
- Familiarity with Delta Lake concepts such as delta tables, schema evolution, or SCD (nice to have)
- Understanding of data concepts such as data governance, metadata, and data quality
- Experience working with SQL / T-SQL
- Familiarity with Git/GitHub and modern development practices
- Knowledge of data modeling and data integration patterns
- Strong analytical and problem-solving skills
- Good communication and collaboration skills
- Ability to work with clients and internal teams to deliver solutions

The Perks
We offer competitive pay and a performance-based bonus. Our employees also enjoy generous paid time off and a flexible, affordable benefits program designed to help you be and stay well, including medical, dental & vision coverage, flexible spending accounts, a health reimbursement account, and a 401(k) plan with a company match. Additionally, you’ll have the benefit of working alongside enthusiastic and energetic teammates in a dynamic and thriving environment. HSO is an Equal Opportunity Employer.







