Company Overview
Opsfleet is a DevOps agency that helps tech startups build, scale, and optimize their cloud infrastructure through top-tier consulting, tailored solutions, and fractional DevOps services. Opsfleet was founded by former DevOps engineers to help engineering teams reduce infrastructure bottlenecks and deliver projects faster by giving them immediate access to top DevOps talent. With offerings ranging from general consulting to DevOps as a service to tailored cloud implementations, Opsfleet is the ideal partner to cover any DevOps or infrastructure gaps a company may have. With Opsfleet, startups achieve a faster time to market, and every implementation is held to the same standard we would apply to our own infrastructure.
About the job
We are seeking a talented and experienced Data Engineer to join our professional services team of 40+ engineers on a part-time basis. This remote-first position requires in-depth expertise in data engineering, with a preference for experience on cloud platforms such as AWS, Google Cloud, or Azure. You will play a vital role in ensuring the performance, efficiency, and integrity of our customers' data pipelines while contributing to insightful data analysis and utilization.
Key Responsibilities
Data Management:
- Design, implement, and manage high-performing data pipelines, focusing on scalability, efficiency, and stability.
- Continuously optimize data processing performance through proactive issue identification, query tuning, and best practice implementation.
- Automate routine tasks like ETL processes, data integration, and reporting to improve efficiency and minimize errors.
- Troubleshoot and resolve data pipeline issues swiftly and effectively, ensuring minimal downtime.
Data Expertise:
- Collaborate with stakeholders to understand data needs and translate them into efficient data models and solutions.
- Develop and execute data extraction, transformation, and loading (ETL) pipelines for various data sources and destinations.
- Perform data analysis tasks using SQL and Python to identify trends, patterns, and insights.
- Participate in data governance initiatives, ensuring data quality, consistency, and accessibility.
- Stay current with the latest trends and technologies in data engineering and analysis.
Data Migrations:
- Lead or participate in complex data migrations from one platform to another.
- Develop and execute well-defined migration plans that minimize downtime and prevent data loss.
- Utilize specialized tools and techniques for seamless data transfer and schema conversion.
- Perform extensive testing and validation to ensure data integrity and functionality after migration.
Preferred Qualifications:
- 5+ years of experience as a Data Engineer, with a proven track record of success in complex data environments.
- Expertise in a major data processing framework like Apache Spark, Flink, or similar.
- Strong understanding of data engineering principles and best practices.
- Experience with data warehousing concepts and tools (e.g., Redshift, Snowflake) is a plus.
- Proficiency in SQL and Python for data manipulation and analysis.
- Experience leading or participating in successful data migrations is highly desirable.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and manage multiple priorities effectively.
- Self-motivated and a continuous learner with a passion for data.
Cloud Experience:
- Experience with cloud-based data platforms (AWS, Azure, GCP) is preferred.
- Candidates with relevant cloud experience are encouraged to highlight it in their application.
Benefits of Working with Us:
- Competitive hourly rate and flexible work schedule.
- Opportunity to contribute to meaningful projects and drive data-driven decision-making.
- Collaborative environment with talented and supportive team members.
- Expand your expertise in data engineering and data analytics.