Location: Fully remote. Open to candidates residing in Argentina, Chile, or Brazil, who will join our inclusive and globally collaborative virtual working environment.
Role Overview
Globant is seeking a talented and experienced Senior+ Data Engineer with an innovative mindset and expertise in Azure and Databricks. This role offers a unique opportunity to work on high-impact projects across diverse industries while maintaining a flexible remote work arrangement. Our ideal candidate is passionate about data engineering, dedicated to optimizing data-related processes, and enjoys collaborating in a dynamic and inclusive team.
Key Responsibilities
Design and develop robust data pipelines and architectures using Azure Data services and Databricks (an illustrative sketch follows this list).
Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
Implement data models and facilitate data integration, transformation, and migration to enhance organization-wide analytics capabilities.
Optimize data processing workflows and ensure data quality, reliability, and accessibility.
Maintain and innovate on our data infrastructure and recommend best practices for data governance and security.
Provide technical leadership and mentorship to junior data engineers, promoting best practices in data engineering.
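To make the pipeline responsibility concrete, here is a minimal, illustrative PySpark sketch of the kind of Azure Databricks workload described above: reading raw files from Azure Data Lake Storage, applying a simple cleansing step, and writing a curated Delta table. The storage account, container names, and the orders schema are hypothetical placeholders, not details from this posting.

```python
# Minimal illustrative sketch of an Azure Databricks pipeline.
# All paths and column names below are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Read raw CSV files from an ADLS Gen2 container (hypothetical path).
raw = (
    spark.read
    .option("header", "true")
    .csv("abfss://raw@examplestorage.dfs.core.windows.net/orders/")
)

# Basic cleansing: deduplicate on a key and cast an amount column
# (assumed schema for illustration).
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
)

# Persist as a Delta table, the default table format on Databricks,
# for downstream analytics (hypothetical curated path).
clean.write.format("delta").mode("overwrite").save(
    "abfss://curated@examplestorage.dfs.core.windows.net/orders/"
)
```

In practice a production pipeline would add schema enforcement, incremental loads, and data-quality checks, but the read-transform-write shape above is the core pattern the role centers on.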
Skills Required
Minimum of 7 years of experience in data engineering with strong expertise in cloud-based data platforms.
Hands-on experience with Microsoft Azure, specifically Azure Databricks for data processing.
Strong proficiency in data modeling, ETL processes, and data warehousing concepts.
Solid experience with programming languages such as Python, Scala, or SQL.
Proven ability to work with large datasets and a variety of storage solutions, including Azure Data Lake, Azure SQL, etc.
Excellent problem-solving skills and the ability to work collaboratively in a team-focused environment.
English proficiency at a B2 level or higher, enabling effective communication with global teams.
Preferred Qualifications
Bachelor’s or Master’s degree in Computer Science, Information Systems, or related field.
Experience with big data tools and frameworks, such as Spark, Hadoop, or Kafka.
Background in Agile development environments and familiarity with CI/CD pipelines and version control systems.
Knowledge of deploying machine learning models in data-rich environments.
Adept at identifying opportunities for data-driven improvements and implementing innovative solutions.
Why Us?
Our company is at the forefront of technological innovation, providing employees with a vibrant and growth-oriented work culture. We emphasize continuous learning, collaboration, and creative problem-solving. As an integral part of our team, you will have the chance to work on transformative projects and advance your career while making a significant impact in the tech industry.
Application Process
Are you ready to embrace a new challenge and elevate your data engineering career? Follow the steps below to apply:
Submit your application through our website.
Complete your initial interview at www.talentconnect.ai.
We eagerly welcome talented individuals and invite you to be part of Globant's success story. Apply now to join our diverse and highly skilled team!
Seniority level
Mid-level
Employment type
Full-time
Job function
Information Technology
Industries
Technology, Information and Internet