Responsibilities:
• Develop and maintain data pipelines using Databricks and Apache Spark.
• Collaborate with data scientists and analysts to gather data requirements and deliver solutions that meet them.
• Optimize existing data processing workflows for performance, cost, and reliability.
• Design and implement data models and ETL processes.
• Ensure data quality and consistency across various data sources.
Qualifications:
• Bachelor’s degree in Computer Science, Data Science, or a related field.
• 3+ years of experience in data engineering with a focus on Databricks and Apache Spark.
• Proficiency in SQL, Python, and ETL tools.
• Experience with cloud platforms like AWS, Azure, or GCP.
• Strong problem-solving skills and attention to detail.
• Excellent communication and teamwork skills.