Lead Data Engineer

Job Details

Published: 10-07-2025
Salary: $195,000.00 - $220,000.00 Annual
Location: New York
Category: Permanent
Sector: Data
Reference: 4731
Work Model: Remote

Description
Lead Data Engineer – Databricks / Cloud Data Architect
Location: Remote (USA-based) | Hybrid NYC (optional)
Salary: $195,000–$220,000 base + potential NYC bonus

We’re working with a fast-growing, engineering-first consultancy that’s redefining how modern data solutions are designed and delivered. They partner with some of the biggest names in finance, media, and tech to build intelligent, scalable, cloud-native data platforms. Now they’re looking to bring on a Lead Data Engineer: a hands-on leader fluent in both the architecture and the execution of cutting-edge data systems.

This isn’t a manager-in-a-meeting kind of role. They’re looking for a "keyboard-first" engineer: someone who can set the strategy and implement it, mentor teams, and drive results with modern tools like Databricks, Delta Lake, Spark, and Airflow.

What You’ll Do

  • Architect & Lead: Design and implement scalable, high-performance data solutions using Databricks, Delta Lake, and modern ETL/ELT tooling.

  • Collaborate Cross-Functionally: Work closely with engineers, analysts, and business teams to deeply understand requirements and deliver the right data infrastructure.

  • Mentor & Guide: Provide technical leadership and mentoring for data engineers, helping raise the bar on the team’s output and capability.

  • Drive Best Practices: Implement a DataOps mindset; build resilient, automated data pipelines that adhere to best practices in security, governance, and performance.

  • Share Knowledge: Lead from the front—not just in projects, but by contributing to blog posts, internal wikis, and other engineering-wide initiatives.

What You Bring

  • 10+ years of experience in data engineering, architecture, or related fields

  • Expertise in Databricks and large-scale data lakes (Delta Lake experience preferred)

  • Strong programming skills (e.g. Python, Scala) and comfort with Spark

  • Deep understanding of ETL/ELT, data modeling, and cloud-native data solutions (e.g. AWS Redshift, Azure Synapse, Google BigQuery)

  • Hands-on experience with data orchestration tools like Apache Airflow

  • Strategic mindset and analytical skills—able to design and defend solutions at a high level, then get in the weeds to build them

  • Excellent communication skills—capable of interacting with both engineers and non-technical stakeholders

Tech & Tooling You’ll Use

  • Languages: Python, SQL, Scala

  • Frameworks: Databricks, Delta Lake, Apache Spark

  • Cloud & Storage: AWS (preferred), Azure, GCP; Redshift, BigQuery, Synapse

  • Tooling: Airflow, Git, Terraform (bonus)

  • Mindset: DataOps, CI/CD, reusable architecture, scalable systems

Why Join?

  • Collaborate with elite engineers and innovators across industries like finance and tech

  • Influence enterprise-scale systems at companies you already know

  • Be part of a community, not a hierarchy—where ideas win, not titles

  • Enjoy flexibility, growth, and hands-on learning with people who take craft seriously

Interview Process

  • Technical assessment (GitHub-submitted take-home)

  • Technical & architecture interviews with engineering leads

  • Culture interview (optional for fast-tracked candidates)

  • Timeline: The process can move quickly; most offers go out within 7–10 days


Apply Now