This role is for one of Weekday's clients.
Minimum Experience: 7 years
Job Type: Full-time
Requirements
Key Responsibilities
- Design, develop, and maintain scalable, high-performance data pipelines using AWS technologies.
- Build and orchestrate ETL workflows leveraging AWS Glue, Lambda, Step Functions, and other services.
- Work with both structured and unstructured data to transform raw inputs into clean, analytics-ready datasets.
- Implement data lake and data warehouse architectures utilizing AWS S3, Redshift, Athena, and related tools.
- Collaborate with cross-functional teams to gather data requirements and ensure consistency, accuracy, and reliability.
- Optimize performance of large-scale data systems, SQL queries, and data workflows.
- Uphold data governance, privacy, and compliance standards within the AWS environment.
🌐 Required Skills & Qualifications
- 7+ years of experience in data engineering or similar roles.
- Deep expertise in the AWS ecosystem, including S3, Glue, Lambda, Redshift, Athena, EMR, Kinesis, IAM, and Step Functions.
- Proficient in Python and SQL for data processing and transformation.
- Solid foundation in data modeling, data warehousing, and big data processing.
- Hands-on experience with orchestration tools like Apache Airflow or AWS Step Functions.
- Familiarity with CI/CD pipelines, DevOps practices, and version control systems like Git.
- Experience working in Agile development environments.
✅ Preferred Skills
- Experience with Spark or PySpark for large-scale data processing.
- Understanding of data lakehouse architectures and best practices.
- Exposure to other cloud platforms such as GCP or Azure.
- Strong problem-solving skills and the ability to clearly communicate complex data concepts.