ROLE SUMMARY: As a Data Engineer, you will work with the Data team at Sisal Italy, consisting of Data Platform and DWH. You will use cutting-edge technologies such as Apache Spark, Delta Lake and Oracle Exadata to design, build and maintain a complex infrastructure in a cloud environment.
JOB RESPONSIBILITIES: Your data engineering tasks include designing, building, installing and maintaining the systems that ensure the smooth flow, availability and reliability of data. You will develop ETL processes that read from and write to a growing number of heterogeneous systems inside and outside Sisal. You will also provide a stable, reliable service on top of which Data Scientists and Data Analysts can work seamlessly, leveraging the most cutting-edge technologies on the market.
KEY RESPONSIBILITIES:
- Design, develop and maintain data warehouse and lakehouse solutions using Oracle PL/SQL, Oracle Exadata and PySpark
- Write complex SQL queries, stored procedures, functions and triggers to support data integration and transformation processes
- Optimize database and lakehouse performance while ensuring data integrity and security
- Develop and maintain ETL processes to load data from various sources
- Monitor and solve performance issues
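To make the ETL responsibilities above concrete, here is a minimal, hypothetical sketch of an extract-transform-load step. It uses Python's built-in sqlite3 for portability; the table names, columns and currency conversion are illustrative assumptions only (a pipeline at this scale would more realistically use PySpark and Delta Lake):

```python
import sqlite3

# Hypothetical ETL step: extract raw bet records, transform amounts from
# cents to euros, and load the result into a reporting table. All names
# (raw_bets, bets_report) are illustrative, not an actual Sisal schema.

def run_etl(conn: sqlite3.Connection) -> int:
    cur = conn.cursor()
    # Extract: read raw rows from the (hypothetical) source table
    rows = cur.execute("SELECT bet_id, amount_cents FROM raw_bets").fetchall()
    # Transform: convert cents to euros, dropping non-positive amounts
    cleaned = [(bet_id, cents / 100.0) for bet_id, cents in rows if cents > 0]
    # Load: write into the reporting table inside a single transaction
    cur.executemany(
        "INSERT INTO bets_report (bet_id, amount_eur) VALUES (?, ?)", cleaned
    )
    conn.commit()
    return len(cleaned)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_bets (bet_id INTEGER, amount_cents INTEGER)")
    conn.execute("CREATE TABLE bets_report (bet_id INTEGER, amount_eur REAL)")
    conn.executemany("INSERT INTO raw_bets VALUES (?, ?)",
                     [(1, 250), (2, 0), (3, 1000)])
    print(run_etl(conn))  # prints 2: only the positive-amount rows are loaded
```

The same extract/transform/load shape carries over to PySpark, where the list comprehension becomes a DataFrame transformation and the target table a Delta table.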
MUST-HAVE SKILLS:
- Bachelor’s degree in Computer Science, Information Technology or a related field
- Experience in Data Engineering or a related role
- Experience developing software in Python (PySpark / SparkSQL / Databricks experience is highly desirable)
- Strong SQL programming skills and experience with database performance tuning
- Familiarity with Delta Lake and, more generally, lakehouse architectures
- Excellent problem-solving and analytical skills
- Strong communication and interpersonal skills
- Ability to work independently and as part of a team
- Good critical thinking
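As an illustration of the SQL performance tuning mentioned above, the sketch below shows how adding an index changes a query plan from a full table scan to an index search. SQLite is used purely for portability, and the table and index names are assumptions, not part of the actual role:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plays (player_id INTEGER, game TEXT, stake REAL)")
conn.executemany("INSERT INTO plays VALUES (?, ?, ?)",
                 [(i % 100, "lotto", 1.0) for i in range(1000)])

# Before indexing: the planner must scan the whole table
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(stake) FROM plays WHERE player_id = 42"
).fetchall()
print(plan_before)  # the plan detail shows a SCAN over plays

# Tuning step: add an index on the filter column
conn.execute("CREATE INDEX idx_plays_player ON plays (player_id)")

# After indexing: the planner can search the index instead
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(stake) FROM plays WHERE player_id = 42"
).fetchall()
print(plan_after)  # the plan detail now uses idx_plays_player
```

The same habit of inspecting execution plans before and after a change applies equally to Oracle (`EXPLAIN PLAN`) and Spark (`df.explain()`).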
NICE-TO-HAVE SKILLS:
- Familiarity with cloud computing concepts, ideally Oracle Cloud
- Experience in Oracle PL/SQL development and Oracle Exadata
- Experience with Apache Kafka
- Experience with Qlik or other BI tools