Data Engineer (Remote)

emagine Polska

Warszawa, Domaniewska +3 more locations
190 PLN
Remote
B2B
AWS
Google Cloud Platform (GCP)
Hadoop
Cloudera
Azure
Big Data
Java
PySpark
Python
Scala
Full-time
Data Engineer
SQL
Databricks
ETL
CI/CD
Data Engineering
Apache Airflow or Hadoop
Tableau or Looker
Data modeling
Data pipelines
Cloud
Data mapping
Spark
Delta Lake
Azure Data Factory
ADLS (Azure Data Lake Storage) / Blob Storage / S3-compatible storage (e.g. MinIO, Ceph)
Unix
Git
Jenkins
Ansible
SDLC
GitHub
Linux
RESTful API

Job description

Information about the project:

Rate: depending on expectations

Location: Cracow - hybrid/remote

Industry: banking

We are seeking a Data Engineer to join our PSM Engineering team. The ideal candidate will be innovative and possess a strong desire for continuous improvement in engineering best practices. You will have deep technical expertise in various technologies and a passion for learning. Your experience in delivering software/technology projects using Agile methodologies is crucial. Candidates should demonstrate their contributions to critical business applications, ideally customer-facing, and effectively communicate complex ideas to non-expert audiences. Additionally, familiarity with emerging technologies in finance will be highly regarded.

Main Responsibilities: You will be responsible for creating data pipelines and supporting the data engineering lifecycle effectively. Key responsibilities include:
• Develop and maintain robust data pipelines for data ingestion, transformation, and serving.
• Apply modern software engineering principles to deliver clean, tested applications.
• Collaborate with cross-functional teams to identify and solve engineering problems.
• Migrate on-premise solutions to cloud ecosystems as required.
• Utilize strong programming skills in Python and related technologies.
• Ensure effective data modeling and schema design practices.
• Manage CI/CD pipelines using tools like Jenkins and GitHub Actions.
• Experiment with emerging technologies and methodologies in a fast-paced environment.

Key Requirements:
• Extensive experience across the Data Engineering Lifecycle.
• Strong proficiency in Hadoop and Cloudera.
• Solid experience with AWS, Azure, or GCP, with a preference for GCP.
• Proficiency in Python and PySpark, along with other languages such as Scala or Java.
• Familiarity with Big Data technologies (Hadoop, HDFS, Hive, Spark, etc.).
• Knowledge of data lake formation and data warehousing principles.
• Understanding of file formats such as Parquet, ORC, and Avro.
• Experience with SQL and building data analytics.
• Proven ability to use version control systems such as Git.
• Understanding of CI/CD principles.

Nice to Have:
• Experience developing near-real-time event streaming pipelines using Kafka or similar tools.
• Familiarity with MLOps and maintaining ML models.
• Understanding of NoSQL databases and their trade-offs compared to SQL.

Views: 4
Published: 7 days ago
Expires: in 23 days
Contract type: B2B
Work mode: Remote
