Barton Technologies is a certified woman-owned, results-oriented recruiting and staffing company. We are the representative of choice for top professionals because we understand what motivates great people. We take an active role in your career, and our focus is long-term. We have an extensive support system and are committed to making the right match between you and a company. We are proud of our retention rate: over 90% of our consultants choose to work with Barton Technologies again.
Barton Technologies was founded on the core values of accountability, family, passion, trust, and value. We offer all consultants comprehensive medical, dental, and vision programs, as well as direct deposit and a $1,500/year education and professional certification fund.
Barton Technologies does not discriminate in employment on the basis of race, color, religion, sex, pregnancy, gender identity, national origin, sexual orientation, disability, age, veteran or military status, retaliation, or any other characteristic protected by law.
Only qualified individuals being considered will be contacted.
We are seeking a highly experienced Senior Databricks/Data Engineer with a proven track record of designing and implementing large-scale, enterprise-grade data solutions. The ideal candidate will possess deep expertise in modern data engineering tools, cloud platforms (AWS, Azure, or GCP), data warehousing, ETL pipeline development, and performance optimization. The role involves direct collaboration with stakeholders to translate business requirements into scalable and efficient data architectures.
Responsibilities:
• Design, develop, and maintain complex data pipelines and ETL frameworks to support advanced analytics and reporting solutions.
• Work with large datasets from multiple structured and unstructured sources.
• Implement best practices for data governance, data security, and compliance.
• Optimize data workflows for performance, scalability, and reliability.
• Collaborate cross-functionally with analytics, engineering, and product teams to ensure data integrity and availability.
Qualifications:
• Proficiency in SQL, Python, or Scala.
• Hands-on experience with cloud-native data services (e.g., AWS Glue, Redshift, Azure Data Factory, BigQuery).
• Strong understanding of distributed data processing and streaming frameworks such as Apache Spark or Apache Kafka.
• Familiarity with data modeling, warehousing concepts, and orchestration tools like Airflow.
• Exposure to CI/CD in data environments.