Aaron Rodriguez
Senior Data Engineer
Toney, AL · 901.607.6553 · ajrodr1982@gmail.com · github.com/ajrodr82

Senior Data Engineer with 10+ years of experience designing end-to-end analytics platforms across cloud and on-premises environments. Deep expertise in medallion architecture, dimensional modeling, and pipeline orchestration on Azure and AWS. Proven ability to deliver high-trust, analytics-ready data products by integrating complex, multi-source pipelines — from raw ingestion through reporting — using tools like Databricks, Azure Data Factory, Fivetran, and Power BI. Known for strong technical ownership, data quality practices, and close collaboration with analysts, data scientists, and business stakeholders.

Experience
Senior Data Engineer
May 2022 – Present
National CineMedia (NCM)
  • Architect a centralized enterprise analytics platform integrating various data sources via Fivetran, Azure Data Factory, and custom pipelines, delivering analytics-ready datasets that support experimentation, measurement, and strategic decision-making across the organization.
  • Design and implement medallion architecture patterns (Bronze → Silver → Gold) in Databricks with robust data modeling at each layer, ensuring analytical flexibility, performance optimization, and long-term maintainability.
  • Build and orchestrate automated ingestion pipelines using Azure Data Factory, REST APIs, incremental sync patterns, and distributed processing in Databricks and Azure Data Lake Storage, with MERGE-based upsert logic and Z-order optimization applied at scale (see the sketch below this role).
  • Establish strong data quality and governance practices through schema change detection, automated validation, and CI/CD workflows via Azure DevOps — preventing silent data corruption and ensuring high trust across critical datasets.
  • Design analytics-focused dimensional models in SSAS and Azure SQL, optimized for downstream reporting, self-service analysis, and query performance at scale.
  • Build and maintain Power BI reports and semantic models with DAX measures and calculated columns, delivering intuitive, reliable reporting experiences for business stakeholders.
  • Deliver self-serve analytics products end-to-end, from pipeline orchestration through interactive dashboarding, enabling analysts and non-technical stakeholders to explore insights without engineering dependencies.
Tools: Python, SQL, Spark SQL, DAX, M, Databricks, Azure Data Factory, Azure Data Lake Storage, Azure DevOps, Fivetran, Power BI, SSAS, SSMS, SQL Server, MySQL, AWS (Glue, Athena, Lambda, Redshift, S3), SSIS
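A minimal sketch of the MERGE-based upsert and Z-order pattern named above, assuming a Delta table silver.orders keyed on order_id; all table, column, and path names are hypothetical, not NCM's actual schema:

```python
from delta.tables import DeltaTable

# `spark` is the ambient SparkSession inside a Databricks notebook or job.
# Read the latest incremental batch from the Bronze layer (path is illustrative).
updates = spark.read.format("delta").load("/mnt/bronze/orders_increment")

# MERGE-based upsert into the Silver table: update rows whose key already
# exists, insert the rest.
(
    DeltaTable.forName(spark, "silver.orders")
    .alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Compact small files and co-locate rows on a frequent filter column
# so downstream scans prune more data.
spark.sql("OPTIMIZE silver.orders ZORDER BY (customer_id)")
```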
Senior BI Developer
March 2022 – May 2022
Insight Global (AT&T Contractor)
  • Developed analytics-ready data products supporting executive decision-making, strategic performance reviews, and operational planning for a large enterprise client.
  • Modeled complex datasets using SQL and transformation logic to ensure metric correctness, consistency, and long-term scalability across analytical use cases.
  • Built end-to-end ETL pipelines ingesting data from relational databases, flat files, and APIs into centralized reporting and warehouse layers, supporting both full and incremental load patterns (see the incremental-load sketch below this role).
  • Developed SSIS packages to automate extraction, transformation, and loading workflows, incorporating business rule logic, data cleansing, type casting, and deduplication to ensure analytics-ready outputs.
  • Implemented error handling, audit logging, and email alerting across pipeline workflows to ensure reliable nightly processing and rapid identification of failures.
  • Developed and maintained stored procedures and SQL Agent jobs to automate data processing, scheduling, and pipeline workflows.
  • Optimized semantic data models and query patterns, improving downstream analytics performance and reducing query latency by approximately 40%.
Tools: Python, T-SQL, SQL Server, Power BI, SSAS, SSRS, SSIS, DAX, M, SSMS, ALM Toolkit, Tabular Editor, DAX Studio, Azure DevOps
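A minimal sketch of the watermark-style incremental load referenced above, written as Python over pyodbc; the etl.watermark control table and the source/staging names are illustrative assumptions, not the client's actual schema:

```python
import pyodbc

# Connection string is illustrative; real jobs would pull it from config.
conn = pyodbc.connect("DSN=warehouse;Trusted_Connection=yes")
cur = conn.cursor()

# Read the high-water mark recorded after the previous successful run.
cur.execute("SELECT last_loaded_at FROM etl.watermark WHERE table_name = ?",
            "sales.orders")
watermark = cur.fetchone()[0]

# Stage only rows changed since the last run (the incremental pattern);
# a full load simply omits the WHERE clause.
cur.execute(
    """
    INSERT INTO stg.orders (order_id, amount, modified_at)
    SELECT order_id, amount, modified_at
    FROM sales.orders
    WHERE modified_at > ?
    """,
    watermark,
)

# Advance the watermark from the staged rows themselves, not the wall clock,
# so rows modified mid-extract are not skipped on the next run.
cur.execute(
    """
    UPDATE etl.watermark
    SET last_loaded_at = (SELECT MAX(modified_at) FROM stg.orders)
    WHERE table_name = ?
    """,
    "sales.orders",
)
conn.commit()
```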
Senior Data Engineer
September 2021 – March 2022
Provisions Group
  • Engineered ETL pipelines consolidating relational, flat-file, and API sources into centralized reporting and warehouse layers, handling both full and incremental loads.
  • Automated extraction, transformation, and loading workflows with SSIS packages applying business rule logic, data cleansing, type casting, and deduplication for analytics-ready outputs.
  • Added error handling, audit logging, and email alerting across pipeline workflows so nightly processing ran reliably and failures surfaced quickly.
  • Modeled analytics-focused dimensional schemas in SSAS and Azure SQL, tuned for downstream reporting, self-service analysis, and query performance at scale.
  • Delivered Power BI reports and semantic models with DAX measures and calculated columns, giving business stakeholders intuitive, reliable reporting experiences.
  • Improved dataset reliability and usability through performance tuning, validation checks, and clear documentation — ensuring consistent, trustworthy outputs across reporting and semantic layers.
Tools: Python, T-SQL, SQL Server, Power BI, SSAS, SSRS, SSIS, DAX, M, SSMS, ALM Toolkit, Tabular Editor, DAX Studio, Azure DevOps
Data Engineer / BI Developer
2018 – 2021
  • Constructed ETL pipelines ingesting relational, flat-file, and API data into centralized data marts and warehouse layers under both full and incremental load patterns.
  • Developed SSIS packages and stored procedures to automate ETL workflows with business rule logic, cleansing, deduplication, and error handling.
  • Designed and maintained SSAS tabular and OLAP models with DAX measures, calculated columns, and row-level security to support self-service and governed analytics.
  • Built Power BI dashboards and reports using star schema data models, DAX, M, and Power Query — deploying to Power BI Premium capacity via deployment pipelines.
  • Converted legacy Tableau reports to Power BI and created SSRS paginated reports with parameterization, scheduled delivery, and export to PDF and Excel.
  • Analyzed and optimized slow-running DAX queries and reports using DAX Studio and Performance Analyzer, improving reporting performance across high-volume datasets.
  • Collaborated cross-functionally to define metrics, enforce governance standards, and document data assets to improve trust and adoption.
Tools: Python, T-SQL, PL/SQL, DAX, M, SQL Server, Oracle, MySQL, Redshift, Power BI, SSAS, SSRS, SSIS, SSMS, DAX Studio, Tabular Editor, Power BI Documenter, Tableau, Google Analytics, Google Data Studio, Power Automate, Azure Data Factory, Azure DevOps
Data Engineer / BI Developer / Application Developer
2013 – 2018
  • Built and maintained ETL pipelines extracting data from Progress OpenEdge and other source systems into SQL Server staging tables, performing transformations and loading into data warehouse layers using SSIS.
  • Designed and deployed parameterized SSRS reports and SSAS reporting cubes using SQL Server views, complex stored procedures, and DAX — delivering operational and financial insights to leadership.
  • Designed dimensional data models in Power BI and SSAS with DAX, calculated columns, and row-level security, producing rich dashboards and mobile solutions for self-service analytics.
  • Developed ABL-based dashboards and reporting pipelines using inmydata and Phocas Software for live operational data insights across logistics and manufacturing environments.
  • Partnered directly with business stakeholders, consultants, and end users to translate complex operational requirements into durable, production-grade data pipelines and reporting solutions.
  • Supported development lifecycle activities including technical specifications, work estimates, testing, and go-live actions to ensure reliable delivery of data and application projects.
Tools: T-SQL, PL/SQL, DAX, M, SQL Server, Oracle, MySQL, Power BI, SSAS, SSRS, SSIS, SSMS, DAX Studio, Tabular Editor, Tableau, Progress OpenEdge, OpenEdge ABL, inmydata, Phocas, Azure DevOps
Database & Reporting Engineer
2002 – 2013
Tec-Masters Inc.
  • Led the design and maintenance of database-driven systems supporting enterprise logistics, operations, and reporting.
  • Built automated SQL-based data pipelines and scheduled jobs to ensure data integrity and reliable nightly processing.
  • Developed high-performance reporting solutions providing operational visibility for leadership and program stakeholders.
  • Established early version control, automation, and documentation practices for long-running production data workflows.
Skills
Languages
Python, SQL, Spark SQL, DAX, M, JavaScript (D3.js)
Architecture & Modeling
Medallion architecture, dimensional modeling, schema evolution, schema change detection, multitenant design
Data Engineering
ETL, batch and near-real-time pipelines, API ingestion, incremental sync patterns, MERGE-based upserts, distributed processing
Platforms
Databricks, Azure Data Factory, Azure Data Lake Storage, Fivetran, Power BI, SSAS, Snowflake, AWS (Glue, Athena, Lambda, Redshift, S3), SQL Server, MySQL, DuckDB, MotherDuck, Supabase, Streamlit, Vercel
DevOps & Governance
Azure DevOps, GitHub Actions, CI/CD, automated validation, access controls, documentation
Orchestration
dbt, SSIS, Dagster, Apache Airflow
Storage
Cloudflare R2, Azure Data Lake Storage, AWS S3, Supabase
Projects
Box Office Analytics Pipeline
End-to-end data pipeline using web scraping, cloud orchestration, DuckDB/MotherDuck, dbt transformations, and an interactive Streamlit dashboard for box office trend analysis. Built as a portfolio project demonstrating modern data stack tooling.
Tools: Python, DuckDB, MotherDuck, dbt, Apache Airflow, Streamlit
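A condensed sketch of this pipeline's load-and-query step, assuming scraped grosses land in a local CSV; the file, table, and column names are illustrative, and the actual project layers dbt models and MotherDuck on top:

```python
import duckdb

# Local file for development; a MotherDuck URL ("md:boxoffice") would point
# the same code at the cloud warehouse.
con = duckdb.connect("boxoffice.duckdb")

# Land the scraped CSV as a raw table; DuckDB infers the schema.
con.execute(
    "CREATE OR REPLACE TABLE raw_grosses AS "
    "SELECT * FROM read_csv_auto('daily_grosses.csv')"
)

# A simple trend rollup of the kind the dbt models formalize; the Streamlit
# dashboard reads the modeled tables to render charts.
trend = con.execute(
    """
    SELECT release_date, SUM(gross) AS total_gross
    FROM raw_grosses
    GROUP BY release_date
    ORDER BY release_date
    """
).df()
print(trend.head())
```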
Weather Data Pipeline
End-to-end pipeline pulling live data from a public weather API, staging raw CSV files in Cloudflare R2 storage, and orchestrating ingestion into Snowflake via Apache Airflow DAGs. Demonstrates cloud storage integration, workflow orchestration, and data warehouse loading patterns.
Tools: Python, Apache Airflow, Cloudflare R2, Snowflake
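A minimal Airflow 2.x TaskFlow sketch of the flow described above; the API endpoint, bucket, stage, and table names are assumptions for illustration, and credentials would come from Airflow connections rather than literals:

```python
import json
from datetime import datetime

import boto3
import requests
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def weather_pipeline():
    @task
    def fetch_to_r2() -> str:
        # Pull a live observation (endpoint and params are illustrative).
        cw = requests.get(
            "https://api.open-meteo.com/v1/forecast",
            params={"latitude": 34.7, "longitude": -86.6, "current_weather": True},
            timeout=30,
        ).json()["current_weather"]

        # R2 is S3-compatible, so boto3 works against a custom endpoint.
        s3 = boto3.client("s3", endpoint_url="https://<account-id>.r2.cloudflarestorage.com")
        key = f"raw/weather_{datetime.utcnow():%Y%m%dT%H%M%S}.csv"
        body = "temperature,windspeed,time\n{temperature},{windspeed},{time}\n".format(**cw)
        s3.put_object(Bucket="weather-raw", Key=key, Body=body)
        return key

    @task
    def load_to_snowflake(key: str) -> None:
        import snowflake.connector

        # Placeholder credentials; real DAGs resolve these from connections.
        conn = snowflake.connector.connect(account="<acct>", user="<user>", password="<pw>")
        # COPY INTO pulls the staged file through an external stage over R2.
        conn.cursor().execute(
            f"COPY INTO raw.weather FROM @r2_stage/{key} "
            "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
        )

    load_to_snowflake(fetch_to_r2())

weather_pipeline()
```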
YNAB Personal Finance Dashboard
Incremental sync pipeline from the YNAB API to Supabase PostgreSQL, scheduled via GitHub Actions, with a D3.js frontend deployed on Vercel. Demonstrates full-stack data engineering from API ingestion through interactive visualization.
Tools: Python, Supabase, PostgreSQL, GitHub Actions, D3.js, Vercel
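A minimal sketch of the incremental sync step, built on YNAB's delta-request mechanism (last_knowledge_of_server in, server_knowledge out); the Postgres table names and environment variables are illustrative assumptions:

```python
import os

import psycopg2
import requests

def sync() -> None:
    # Supabase exposes a standard Postgres connection string.
    conn = psycopg2.connect(os.environ["SUPABASE_DB_URL"])
    cur = conn.cursor()

    # Last server_knowledge we stored; YNAB then returns only the delta.
    cur.execute("SELECT COALESCE(MAX(server_knowledge), 0) FROM sync_state")
    since = cur.fetchone()[0]

    data = requests.get(
        f"https://api.ynab.com/v1/budgets/{os.environ['YNAB_BUDGET_ID']}/transactions",
        headers={"Authorization": f"Bearer {os.environ['YNAB_TOKEN']}"},
        params={"last_knowledge_of_server": since},
        timeout=30,
    ).json()["data"]

    # Idempotent upsert keyed on the YNAB transaction id, so re-runs are safe.
    for t in data["transactions"]:
        cur.execute(
            """
            INSERT INTO transactions (id, date, amount, payee_name)
            VALUES (%s, %s, %s, %s)
            ON CONFLICT (id) DO UPDATE SET
                date = EXCLUDED.date,
                amount = EXCLUDED.amount,
                payee_name = EXCLUDED.payee_name
            """,
            (t["id"], t["date"], t["amount"], t.get("payee_name")),
        )

    # Record the new checkpoint for the next scheduled run.
    cur.execute("INSERT INTO sync_state (server_knowledge) VALUES (%s)",
                (data["server_knowledge"],))
    conn.commit()

if __name__ == "__main__":
    sync()  # a GitHub Actions cron workflow invokes this on a schedule
```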