Job Description
To apply for this job, you need to complete both steps below:
STEP 1:
Please click the link to submit your application directly to the company:
https://www.linkedin.com/jobs/view/4367168864
Your application will only be received by the recruiter if it is submitted via the link above.
STEP 2:
Kindly scroll to the bottom of this page and complete the short VinUni Tracking Form.
Filling out this form alone does not count as applying. Please note that this form is not part of the company’s application process. It only helps the Careers, Alumni, Industry and Development (CAID) Department discover more opportunities and follow up in case of system issues.
JOB RESPONSIBILITIES:
- Support the development and maintenance of data pipelines using Azure Data Factory, Databricks (PySpark), and SQL Server.
- Assist in data ingestion from multiple structured and semi-structured sources across different countries.
- Collaborate with senior data engineers to ensure data quality, reliability, and governance.
- Help troubleshoot data pipeline issues and contribute to performance optimization.
- Participate in daily Agile rituals such as stand-ups and sprint reviews.
- Contribute to technical documentation and best practices.
JOB REQUIREMENTS:
- Quick learner with a strong can-do attitude.
- Good command of English (verbal and written).
- Familiarity with SQL and at least one programming language (Python preferred).
- Basic understanding of data pipelines, ETL, or cloud platforms (Azure preferred).
- Strong problem-solving mindset and willingness to work in a collaborative environment.
Nice to have:
- Exposure to Azure, Databricks, or Big Data tools.
- Experience working on academic or personal data engineering projects.
- Understanding of Agile development methodology.

