Job Category: Tech Jobs
Hiring For: LTIMindtree
Job Location: Bangalore, Chennai, Hyderabad, Mumbai, Pune
Experience: 6 to 12 Years

Job Summary:
We are seeking a skilled AWS Data Engineer with a strong background in building scalable data pipelines using Databricks, PySpark, and SQL. The ideal candidate will have hands-on experience with cloud platforms (especially AWS) and S3-based data lakes, along with a strong understanding of big data processing frameworks.

Key Responsibilities:
Design, develop, and maintain scalable and efficient data pipelines using Databricks, PySpark, and Spark SQL.
Ingest, clean, transform, and process large-scale structured and unstructured datasets from various sources including AWS S3 and databases.
Work with AWS cloud services such as S3, EMR, IAM, and Glue to build end-to-end data workflows.
Optimize data pipelines and ETL processes for performance and scalability on AWS infrastructure.
Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions.
Ensure data governance, security, and quality compliance throughout the pipeline.
Troubleshoot and resolve issues related to data ingestion, transformation, and storage.
Contribute to data architecture decisions and best practices.
Manage multiple priorities in a fast-paced environment with a focus on timely delivery and high-quality output.

Must-Have Skills:
Hands-on experience with Databricks and Apache Spark.
Strong programming experience in Python, PySpark, and Spark SQL.
Proficiency with AWS cloud services, especially S3, EMR, and IAM.
Experience in building and optimizing large-scale data pipelines and ETL workflows.
Solid understanding of data lake architecture and working with S3 as the storage tier.
Strong SQL skills and experience with relational databases and data warehouse concepts.
Excellent problem-solving and analytical abilities.
Strong communication and collaboration skills.

Nice-to-Have / Preferred Skills:
Experience with Snowflake, Cloudera, or other data warehouse and big data platforms.
Working knowledge of Talend, AWS Glue, or other ETL tools.
Familiarity with shell scripting and basic DevOps practices.
Exposure to Java and Scala for big data development.
Experience migrating relational and dimensional databases to AWS.
Prior experience in managing dimensional data models and building data marts.

Please fill in the details below and attach your resume. We will contact you shortly after you submit your application.