Senior Data Engineer


Job Details

Published: 10-07-2025
Salary: $135,000.00 - $195,000.00 Annual
Location: Remote (USA) | Hybrid NYC option
Category: Permanent
Sector: Data
Reference: 4732
Work Model: On-Site

Description

Senior Data Engineer – Cloud & Modern Data Stack
Location: Remote (USA) | Hybrid NYC option
Salary Range: $135,000–$195,000 base

Are you a senior-level data engineer who thrives in hands-on environments and enjoys solving complex data challenges at scale? We're hiring on behalf of a modern, engineering-first consultancy that delivers high-impact solutions across data, cloud, and AI for industry-leading clients in finance, tech, and beyond.

This team is built for engineers, by engineers. They favor curiosity over credentials, prioritize clean architecture over complexity, and give their people the freedom to lead from the keyboard. If you're seeking a collaborative, innovation-driven environment with zero bureaucracy and high ownership—this is the place.

What You'll Be Doing:

  • Architect & Build: Design, develop, and maintain scalable data pipelines, warehouses, and lakes across cloud platforms—primarily AWS.

  • Optimize & Automate: Build robust, production-grade ETL/ELT workflows using orchestration tools like Airflow, Databricks, DBT, or AWS Glue.

  • Lead Strategically: Help define the vision for data systems while remaining deeply hands-on with implementation.

  • Code Like a Pro: Use Python, Spark (PySpark), and SQL to engineer workflows that transform, load, and serve data reliably and at scale (see the sketch after this list).

  • Mentor & Collaborate: Partner with stakeholders across engineering and business teams to turn data requirements into clean, usable, and governed solutions.

  • Drive Thought Leadership: Contribute to blog posts, share knowledge internally, and bring best practices to life in your team and community.
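
To make the day-to-day concrete, here is a minimal sketch of the kind of orchestrated pipeline described above: an Airflow 2.x TaskFlow DAG with placeholder extract/transform/load steps. The DAG name, schedule, and task logic are illustrative assumptions, not the client's actual pipeline.

    from datetime import datetime

    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
    def daily_sales_etl():
        # Hypothetical pipeline: names and logic are placeholders.

        @task
        def extract() -> list[dict]:
            # In practice this would read from S3, an API, or a source database.
            return [{"order_id": 1, "amount": 42.0}]

        @task
        def transform(rows: list[dict]) -> list[dict]:
            # Placeholder business rule; real transforms would typically run in PySpark or DBT.
            return [r for r in rows if r["amount"] > 0]

        @task
        def load(rows: list[dict]) -> None:
            # Placeholder load step, e.g. a COPY into Redshift or a Delta Lake write.
            print(f"Loaded {len(rows)} rows")

        load(transform(extract()))

    daily_sales_etl()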

What You Bring:

  • 8+ years in data engineering or related fields, with a proven track record building and maintaining complex data systems

  • Deep experience in cloud-native architectures, especially AWS (Redshift, S3, Lambda, DynamoDB, etc.)

  • Mastery of Python for data workflows and comfort working with Pandas, NumPy, Dask, and PySpark

  • Hands-on with Databricks, Delta Lake, and distributed computing frameworks like Apache Spark

  • Experience with orchestration tools like Airflow, DBT, or AWS Glue

  • Strong understanding of SQL and NoSQL systems (PostgreSQL, Redshift, Iceberg, DynamoDB)

  • Comfort with Infrastructure as Code tools like Terraform, AWS CDK, or CloudFormation (a CDK sketch follows this list)

  • Excellent communication and collaboration skills—able to explain complex data concepts to both technical and non-technical audiences

  • A natural curiosity and the ability to ramp up quickly on new tools and problems
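
On the Infrastructure as Code point above, a hedged sketch of what provisioning could look like with the AWS CDK in Python. The stack and bucket names are hypothetical, not the client's actual setup.

    from aws_cdk import App, RemovalPolicy, Stack
    from aws_cdk import aws_s3 as s3
    from constructs import Construct

    class DataLakeStack(Stack):
        # Hypothetical stack: provisions a versioned S3 bucket for a raw landing zone.
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            s3.Bucket(
                self,
                "RawZone",
                versioned=True,                       # keep object history for reprocessing
                removal_policy=RemovalPolicy.RETAIN,  # never delete data on stack teardown
            )

    app = App()
    DataLakeStack(app, "data-lake-dev")  # hypothetical stack name
    app.synth()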

Tech Stack Snapshot:

  • Languages & Libraries: Python, SQL, PySpark, Pandas, NumPy (see the sketch after this list)

  • Platforms & Tools: Databricks, Airflow, DBT, Terraform, AWS CLI

  • Cloud & Storage: AWS (Redshift, S3, Glue, Lambda), Delta Lake, Iceberg

  • Mindset: DataOps, clean architecture, automation-first
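
And for flavor, a minimal PySpark transform in the stack's own terms. The S3 paths and column names are invented, and the Delta write assumes delta-spark is configured on the cluster (as it is on Databricks).

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily-orders").getOrCreate()

    # Hypothetical raw input; a real pipeline would point at actual S3 prefixes.
    orders = spark.read.parquet("s3://example-bucket/raw/orders/")

    # Aggregate order amounts by calendar day.
    daily = (
        orders
        .withColumn("order_date", F.to_date("created_at"))
        .groupBy("order_date")
        .agg(F.sum("amount").alias("total_amount"))
    )

    # The Delta write needs delta-spark on the cluster; swap in plain Parquet otherwise.
    daily.write.format("delta").mode("overwrite").save("s3://example-bucket/curated/daily_orders/")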


Apply Now