
Senior Data Engineer

Filevine is a Legal AI company delivering Legal Operating Intelligence for the future of legal work. Grounded in a singular system of truth, Filevine brings together data, documents, workflows, and teams into one unified platform—where modern legal work happens with clarity and consistency.

Powered by LOIS, the Legal Operating Intelligence System, Filevine connects context across every matter to transform legal operations from reactive to proactive. LOIS reads, understands, and reasons across your data to surface insight, automate complexity, and give professionals the clarity and confidence to see more, know more, and do more. Fueled by a team of exceptional collaborators and innovators, Filevine’s rapid growth has earned AI awards and recognition from Deloitte and Inc. as one of the most innovative and fastest-growing technology companies in the country.

We’re a team of driven, enthusiastic problem solvers with strong backgrounds in data engineering, machine learning, product management, legal, and operations, all working to help attorneys resolve cases faster, for better outcomes. With one of the largest proprietary datasets in the legal industry, spanning documents, notes, communications, billing, deadlines, and calendar information, we’re now focused on building robust data infrastructure to support our growing AI, analytics, and product intelligence initiatives.

Key Responsibilities

  • Develop and manage data ingestion from diverse structured and unstructured sources (documents, communications, billing, operational data, etc.)
  • Ensure high data quality, integrity, and consistency across Snowflake and other data systems
  • Collaborate with ML engineers and data scientists to enable efficient model training, evaluation, and monitoring workflows
  • Design and optimize Snowflake data models, schemas, and transformation pipelines using modern ELT practices
  • Implement robust data monitoring, logging, and alerting to ensure reliability and visibility
  • Support cost optimization, security, and governance initiatives within Filevine’s cloud data infrastructure
  • Contribute to internal data tooling that improves developer experience, observability, and overall data reliability
  • Design, build, and maintain scalable data pipelines and ETL processes to support machine learning, analytics, and business intelligence use cases

Requirements

  • 3+ years of experience in data engineering or related roles
  • Strong proficiency in Python and SQL
  • Experience with Snowflake or similar data warehouse platforms, including expertise in query performance tuning, data modeling, and data transformation workflows
  • Familiarity with modern data streaming and orchestration services such as AWS Kinesis, Firehose, and EventBridge
  • Experience with AWS services for data storage and computation
  • Solid understanding of ETL/ELT best practices, data quality management, and CI/CD integration for data pipelines
  • Experience working with vector databases is a plus
  • Excellent communication skills and ability to collaborate across ML, engineering, and product teams
  • Experience with large or sensitive datasets (legal, financial, or medical) is a plus
  • Experience with CI/CD deployment processes; experience with Terraform is a plus

This opportunity is for independent contractors and is not intended to establish an employment relationship. Contractors are responsible for their own taxes and compliance with applicable laws in their jurisdiction.

This role is open exclusively to candidates currently residing in, or willing to relocate to, Czechia or Slovakia with the legal right to work in these countries. Unfortunately, we are unable to consider applications from candidates based outside of these locations due to operational and regulatory requirements.

Privacy Policy Notice
Filevine will handle your personal information according to what’s outlined in our Privacy Policy.