Company Description
Who are we?
Become a Zider member and join this amazing company that is on top of the
e-commerce game! Join a company that is not only growing but having fun while
doing it. We are a human-centric organization with huge growth plans and a
purpose: to help more and more people with little or no digital experience
start their online business (e-commerce), move their offline business online,
or grow their offline business even further through an online presence.
Where are we coming from and where are we going?
Zid has seen tremendous growth over the last 5 years, from 5 people to 300+
at present and counting. Our revenues have more than tripled year on year,
and we have had great success. We are an e-commerce SaaS platform, a fintech
startup, and a logistics and shipping consolidator, and we understand and
believe that technology and business go hand in hand.
Job Description
* Architect and implement optimal data pipelines.
* Redesign and improve current processes to eliminate manual work and ensure timely delivery of data.
* Prepare scalability plans for smoother expansion when needed.
* Build data models that give a holistic view of the analytics the company needs.
* Work with Data Analysts to prepare ad-hoc or permanent batch/real-time data pipelines that support further analysis and machine learning requirements.
* Perform basic database administration tasks (e.g. data masking, access management).
* Work with different business stakeholders and technical teams to gather requirements.
* Implement quality checks and monitoring schemes to ensure data quality and successful completion of data pipelines.
Qualifications
* 5+ years of experience in a Business Intelligence / Data Engineering role.
* Strong problem-solving and root cause analysis skills.
* Advanced knowledge of a variety of large-scale relational database engines.
* Ability to write advanced SQL queries.
* Enterprise-level knowledge of designing and implementing data pipelines.
* Ability to work with scripting languages such as Python for data manipulation and data pipeline orchestration.
* Ability to build complex data models for reporting or functional purposes.
* Ability to solve performance issues in data ingestion or data retrieval.
* Experience and familiarity with the following tools:
  * Relational database management systems, such as Postgres, MySQL, SQL Server.
  * Real-time OLAP data stores, such as ClickHouse.
  * Distributed query engines, such as Presto.
  * Stream processing tools, such as Apache Kafka, Apache Spark, Apache Druid.
  * NoSQL databases.
  * Dataflow management tools, such as NiFi, Prefect, Airbyte, Fivetran, Alteryx, Apache Airflow, Stitch.
  * Reverse ETL tools, such as Census.
  * API integration with external sources.
  * AWS cloud services, such as S3, Redshift, DMS, RDS, CloudWatch, EMR, Athena.
  * Object-oriented/functional programming in Python, Java, Scala, etc.
  * Data transformation tools, such as dbt.
  * Data visualization tools, such as Tableau, Metabase, Apache Superset.
Additional Information
What are we offering?
* Competitive Salary
* 21 days of holiday + additional days given regularly
* Training (we kicked off learning internally and recently completed an AWS Architecting course)
* A clearly defined career path!
* ZEA - Zid Entertainment - Fun Thursdays!