TM Philippines - Data Platforms and Delivery

PH - Data Operations Engineer, Data Engineering Track

Remote
Work Type: Full Time

Working at Thinking Machines

Thinking Machines is a technology consultancy building AI & data platforms to solve high-impact problems for our clients. Our vision is for Southeast Asia to become a global hub for data science. To do that, we create data cultures, one organization at a time.


We’re a company made up of intellectually curious, civic-minded, forever-learning individuals. We believe that great data science products are built with care for people, and that the best way to drive inclusive innovation is to start with a diverse team.


Our field of work is incredibly dynamic, so we want to work with people who are committed to growing with us. We want to hire people who can demonstrate an ability to learn, then provide them with personalized coaching, growth opportunities, and a great working environment to help them become world-class.


Role Description

In a world of continuous integration, rapid development, and data growth, reliability and high availability are more essential than ever. Our Data Operations Team is integral to ensuring that the platforms and services clients use continue to be reliable and relevant to their business needs. We do this by proactively detecting and resolving incidents across our solutions while working with our clients to continuously enhance their platforms.


As a Data Operations Engineer in the Data Engineering track, you can expect to:

  • Work directly with our clients to ensure a successful customer journey by sharing responsibility for system maintenance, incident management, feature development, and knowledge sharing.

  • Uncover issues in, or implement enhancements to, existing data ingestion pipelines, machine learning models, or web applications.

  • Keep our platforms across the organization robust by collaborating with our product developers on improved systems such as pipeline monitoring, infrastructure automation, and error logging (a brief sketch of this kind of work follows this list).
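
As a small, hypothetical illustration of the error-logging part of that work, the sketch below (in Python, our preferred language) wraps a pipeline step in a decorator that logs its start, success, or failure with a full traceback before re-raising, so the orchestrator or an alerting layer can still react. The step name and function bodies are placeholders, not an actual TM system.

import functools
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("pipeline")


def monitored(step_name):
    """Decorator that logs the outcome of a single pipeline step."""

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            logger.info("step=%s status=started", step_name)
            try:
                result = func(*args, **kwargs)
            except Exception:
                # Log the full traceback, then re-raise so the scheduler or a
                # paging hook (assumed to exist elsewhere) still sees the failure.
                logger.exception("step=%s status=failed", step_name)
                raise
            logger.info("step=%s status=succeeded", step_name)
            return result

        return wrapper

    return decorator


@monitored("ingest_daily_sales")  # hypothetical step name
def ingest_daily_sales():
    # Placeholder body for an ingestion step.
    return 42


if __name__ == "__main__":
    ingest_daily_sales()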


If you are a continuous-improvement aficionado, this role gives you insight into the whole production environment, expands your knowledge, and lets you contribute high-value work that the rest of the organization can build on. You will also be making a constant impact on our customers and their user experience. In fact, if you're looking for the role designed to help customers the most, then Data Operations is for you!



Requirements

We are looking for someone who meets the following profile:

  • Enjoys coding - You must be very comfortable with writing and explaining code.

  • Customer-centric and proactive about client needs - You are motivated to collaborate with clients in building and running sustainable production systems that can evolve and adapt to changes in a global business environment.

  • Improvement-oriented with low tolerance for repetitive work - You have an innate drive to improve existing systems and processes: you proactively spot problems, areas for improvement, and performance bottlenecks, and you take charge of driving efficiencies that have not been tried before.

  • Big-picture thinking with a keen eye for quality assurance - Knowing how your work fits into a wider system helps you create better systems that prevent foreseeable problems and solve evolving challenges.

  • Learning agility - You're not expected to know everything, but we do expect you to fill in the gaps quickly. Be resourceful by researching and asking questions!

  • Strong sense of initiative - Sometimes, the team won't know that they need you. It helps to stay aware of the projects people are working on and to offer support where applicable.




Qualifications and competencies

We are open to engineers with strong, language-agnostic fundamentals. We believe that great engineers do not necessarily match our tech stack perfectly at the start, but they will be able to pick it up quickly.

  • Proficiency with a programming language (preferably Python), Git, and modern terminals

  • Working knowledge of ETL pipelines (Airflow or Dagster), web services, and web scraping (a short pipeline sketch appears below)

  • Experience as a software engineer, or related projects you have built

Bonus points if you have experience working on Google Cloud Platform, AWS, or similar cloud providers, and can demonstrate reasonable knowledge of their architecture and moving components.
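
To make the ETL expectation above concrete, here is a minimal sketch of a daily extract-transform-load DAG in Airflow, one of the orchestrators named in the list. The DAG id, task names, schedule, and the extract/transform/load bodies are illustrative placeholders written against the Airflow 2.x API, not an actual client pipeline.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: fetch raw records from a source API or database.
    return [{"id": 1, "value": 10}, {"id": 2, "value": 20}]


def transform(ti, **context):
    # Placeholder: clean and reshape the extracted records.
    rows = ti.xcom_pull(task_ids="extract")
    return [{**row, "value_doubled": row["value"] * 2} for row in rows]


def load(ti, **context):
    # Placeholder: write the transformed rows to a warehouse table.
    rows = ti.xcom_pull(task_ids="transform")
    print(f"Loading {len(rows)} rows")


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run extract, then transform, then load, once per day.
    extract_task >> transform_task >> load_task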



High preference for candidates who:

  • Have a working knowledge of DevOps concepts such as continuous integration and deployment, automated testing, Docker, and Kubernetes

  • Have experience leading a large or complex engineering project

Benefits and Perks

We offer the following compensation and benefits:

  • Competitive salary — compensation scales with the difficulty of the role, relevant experience, fit, and skill.

  • Remote First — due to the global pandemic, we have shifted to being a remote-first company for the foreseeable future while we monitor the situation.

  • Individual professional development budget — an annual budget for conferences, training courses, books, and software is available to sharpen your skills and build new ones to help you grow in your role.

  • Full health benefits — generous health insurance package upon hiring.

  • Regular 1:1 meetings with the leadership team to discuss career and personal goals, job progress, and any questions or concerns.

Submit Your Application
