The ideal candidate for the Data Engineer role should have a solid background in SQL Server development, coupled with data engineering experience using Python, PySpark, and Databricks. Proficiency in PowerShell scripting for automation is also essential.
- Database Development: Demonstrated proficiency (2-5+ years) in SQL Server, including database design, writing SQL queries, and developing stored procedures.
- Data Engineering: Significant experience (2-5+ years) with object-oriented Python programming, as well as 1-2+ years of hands-on experience with Databricks and Spark (PySpark).
- ETL and Automation: Experience (1-2+ years) using SSIS for ETL solutions and PowerShell scripting for automation, ensuring efficient data processing and workflow automation.
- Data Handling and Infrastructure: Skilled in assembling large, complex datasets that meet functional and non-functional business requirements, and in building robust data pipelines for efficient extraction, transformation, and loading from diverse data sources.
- Support and Administration: Assist in supporting SQL Server databases and Databricks environments, developing necessary automation processes (scripts/procedures) to ensure system compliance and operational efficiency.
- Project Ownership: Lead design review sessions, contribute to testing and validation feedback, and take ownership of projects from requirement gathering to documentation for internal and end-user use.
- Process Improvement: Identify, design, and implement process improvements, including automating manual processes and optimizing data delivery and infrastructure for scalability.
- Data Integration: Build infrastructure to facilitate optimal data extraction, transformation, and loading across SQL, Databricks, and other platforms.
- Collaboration: Work collaboratively with cross-functional teams in dynamic environments to achieve project goals and operational objectives.
- Education: Bachelor’s degree in a relevant field.
- Experience: Minimum of 1-2 years in a data-related role, encompassing both database development and data engineering responsibilities.
- Technical Skills: Strong proficiency in SQL Server, Python, PySpark, Databricks, SSIS, and PowerShell scripting. Familiarity with cloud platforms such as AWS, Azure, or GCP is desirable.
- Skill Set: Exposure to data models and data mining techniques, plus an understanding of data pipelines built on interfaces and protocols such as REST, SOAP, FTP, HTTP, and ODBC.
Additional Considerations:
- This position is remote, operating on Eastern Time (ET).
- Candidates must be U.S. citizens, permanent residents (Green Card holders), or hold GC-EAD status; OPT, CPT, or H4 EAD holders are not eligible.
- Contract-to-hire opportunity with a competitive hourly rate and potential conversion salary ranging from $90,000 to $120,000 per year based on experience.