Senior Data Engineer - AWS + AtScale
Quantiphi
Bengaluru, India | Full Time
Job Description
Job Role: Senior Data Engineer
Experience: 4+ Years
Location: Bangalore/Mumbai
Must-have Skills:
- 4+ years of experience in designing and implementing data warehouses and data lakes/lakehouses on AWS.
- Hands-on experience with AtScale or similar semantic layer tools to enable governed, business-friendly data access across BI platforms.
- Proven success working with globally distributed teams in collaborative delivery environments.
- Deep working knowledge across key AWS Data & Analytics services, including:
  - Building large-scale data lake architectures on Amazon S3 and open table formats.
  - Implementing governance and cataloging through AWS Lake Formation.
  - Developing ETL and metadata frameworks using AWS Glue.
  - Leveraging AWS Lambda for serverless data processing.
  - Running distributed data workloads on Amazon EMR.
  - Enabling real-time data pipelines with Amazon Kinesis (Data Streams and Firehose).
  - Orchestrating pipelines using AWS Step Functions, Amazon MWAA, or similar services.
  - Designing and optimizing schemas and query performance on Amazon Redshift, including Spectrum and Serverless features.
  - Querying large datasets interactively using Amazon Athena.
  - Managing operational databases using Amazon RDS across engines such as PostgreSQL, MySQL, and Aurora.
  - Integrating and migrating data using AWS DMS, Glue Connectors, EventBridge, SNS, and SQS.
- Strong understanding of semantic modeling, including logical data models, virtual cubes, and centralized metric definitions (single source of truth).
- Experience optimizing performance using query pushdown, caching, and aggregate awareness over platforms like Redshift and Athena.
- Ability to integrate semantic layers with BI tools (QuickSight, Tableau, Power BI) and enforce row/column-level security aligned with governance frameworks.
- Strong programming capability in Python and PySpark for large-scale data processing.
- Proficiency in writing complex SQL queries, analytical functions, and performance tuning for large datasets.
- Familiarity with NoSQL databases such as Amazon DynamoDB, MongoDB, or DocumentDB.
- Strong understanding of partitioning, indexing, scaling approaches, and query optimization techniques.
- Proven experience in architecting and implementing data pipelines using native AWS services in a modular and resilient manner.
- Solid understanding of data modeling concepts, including dimensional, normalized, and lakehouse patterns.
- Good experience with dbt.
- AWS Solutions Architect - Associate or Data Engineer - Associate certification.
Good-to-have Skills:
- Experience working with customers/workloads in one of the FSI, Retail, or CPG domains.
- Exposure to Infrastructure as Code (IaC) tools such as Terraform, and to CI/CD tools.
- Familiarity with data visualization tools (e.g., Amazon QuickSight, Power BI, Tableau) and data governance tools (e.g., Collibra).
- Managing security, monitoring, and compliance with AWS IAM, Secrets Manager, CloudWatch, CloudTrail, and KMS.
- Experience with AI-assisted development tools such as GitHub Copilot or Amazon Kiro (or similar GenAI IDEs) for improving developer productivity, code generation, and pipeline acceleration.
- Exposure to Data Mesh architecture and Data Governance frameworks.
Posted April 12, 2026