strong experience in Big Data or Cloud projects involving the processing and visualization of large and unstructured datasets, across different phases of the Software Development Life Cycle;
practical knowledge of the Azure cloud in the Storage, Compute (including Serverless), Networking, and DevOps areas, backed by commercial project experience;
very good knowledge of Python and SQL;
practical Azure cloud knowledge (for example from MS Learn courses) supported by certifications (for example DP-900, DP-200/201, AZ-204, AZ-400);
experience with several of the following technologies: Azure Data Lake Storage Gen2, Event Hubs, Data Factory, Databricks, Azure DWH, Azure API, Azure Functions, Power BI;
very good command of English.
Your responsibilities
designing and implementing Azure data solutions for handling large and/or unstructured datasets;
utilizing Azure Databricks and other services for advanced data processing, analytics, and machine learning tasks, optimizing performance and scalability;
collaborating with solution architects to establish and promote data engineering best practices and standards within the Azure ecosystem;
implementing, optimizing, and testing modern DWH/Big Data solutions on the Azure cloud platform within a Continuous Integration / Continuous Delivery environment;
improving data processing efficiency and migrating workloads from on-premises to public cloud platforms;
mentoring and guiding junior data engineers in Azure-specific technologies, fostering a culture of continuous learning.