Plato is transforming the global trade infrastructure. Imagine a modern UI/UX overlay on traditional ERP systems in the wholesale sector. Instead of relying on outdated interfaces and sales-driven processes, Plato integrates AI to forecast product demand.
Our platform enhances the user experience and delivers data-driven product recommendations. Plato secured $2.5 million in pre-seed funding from Cherry Ventures, Sequoia Capital, and S16VC.
At Plato we are building a real-time data platform, and as one of our team's founding members, you will lead the development of our data warehouse and the infrastructure for our suite of data products. Our clients' data is central to everything we do at Plato, so this role involves close collaboration with our data science, product, design, and engineering teams.
The ideal candidate is a natural problem-solver who is not afraid to get hands-on and take the initiative in finding new ways to ingest data from different source systems, transform that data into our unified schema, and deploy a system of cutting-edge machine learning models.
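To illustrate the kind of work "transform this data into our unified schema" implies, here is a minimal sketch in plain Python. The schema, field names, and source formats are hypothetical assumptions for illustration only, not Plato's actual data model:

```python
from dataclasses import dataclass

# Hypothetical unified schema; field names are assumptions for illustration.
@dataclass
class Product:
    sku: str
    name: str
    price_eur: float

def from_erp_a(row: dict) -> Product:
    # Imagined source system A: uses "article_no" and prices in cents.
    return Product(
        sku=row["article_no"],
        name=row["label"],
        price_eur=row["price_cents"] / 100,
    )

def from_erp_b(row: dict) -> Product:
    # Imagined source system B: nests the identifier and reports euros as strings.
    return Product(
        sku=row["id"]["sku"],
        name=row["title"],
        price_eur=float(row["price"]),
    )

# Rows from two different source systems land in one unified shape.
records = [
    from_erp_a({"article_no": "A-1", "label": "Widget", "price_cents": 1250}),
    from_erp_b({"id": {"sku": "B-7"}, "title": "Gadget", "price": "9.99"}),
]
```

In practice these per-source mappings would live in Spark/dbt transformations rather than plain functions, but the core task is the same: reconcile each source's quirks into one schema downstream consumers can rely on.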
This is an incredible opportunity to become an integral part of a high-growth startup with a groundbreaking product during its early stages.
What you'd be working on
Carry out data modelling that powers real-time customer-facing analytics and a portfolio of data products consumed in our app
Design flexible schemas that store data from many different sources
Develop and maintain data pipelines for machine learning algorithms and real-time dashboards
Partner with data consumers across Plato to understand consumption patterns and design intuitive data models
Write and deploy high-performance real-time transformations within our data warehouse
Own and configure our data platform (Databricks)
Collaborate with our product and engineering teams to build a data product that our end-users love
What you bring along
3+ years of experience in data engineering; high-growth tech company experience is a plus
Experience building ETL/ELT pipelines, ideally with PySpark and Airflow
Experience in working with SQL, data warehouses (e.g. Databricks or Snowflake) and data transformation workflows (e.g. dbt)
An interest in data engineering practices and product knowledge, specifically in how analytics can power customer-facing applications
A solid understanding of software engineering practices and data engineering principles; MLOps knowledge is appreciated
Mentality: An experimental, launch-fast-and-iterate mindset. A strong statistics/mathematics background is a plus
Previous experience building high-performance real-time streaming infrastructure with Kafka is a plus
The tools you will be using
Python
Spark/PySpark
Data Modeling
SQL
Databricks (Delta Lake)
dbt
AWS
Hiring Process (2-4 weeks)
Step 1: Intro call. A call with one of the founders to get to know each other and introduce Plato.
Step 2: Personal fit. A personal interview with the other two founders, ideally in person.
Step 3: Technical fit. A technical challenge (take-home exercise), followed by a call in which you present and elaborate on your solution.
Step 4: Reference checks. Before sending an offer, we ask you to provide 2-3 contacts for reference checks.
Step 5: Offer. Congratulations! We are convinced by your skills and personality and see a great fit. You will receive an offer from us, including an appropriate share package.