Join a high-performing, tight-knit team at a fast-growing company that is using the Internet of Things (IoT) to transform how organizations sense, monitor, and make decisions. Founded out of MIT in 2005, SmartSense is trusted by more than 2,000 organizations, including Walmart, SpaceX, Apple, CVS Health, Coca-Cola, and the US State Department, to help them make sensor-driven decisions. Our customers rely on our solution every day to make mission-critical decisions; we are looking for team-oriented change agents to help shape the future of IoT.
LOCATION: Hybrid to SmartSense headquarters in Boston, MA
Position
Data Services team members are passionate about data products, engineering data flows, storage, and enabling predictive analytics. We are inspired by building and delivering the tools, infrastructure, and frameworks that turn our business data into insights and increase the value of our data to our customers. As good stewards of our data, we contribute to every aspect of how data is handled, whether it comes from monitoring data flows, our field sensors, PII (Personally Identifiable Information), or internal processes such as our supply chain.
What We Offer
In this Sr. Data Engineer role, you will contribute to strategic data engineering solutions that move data from raw to cold storage, through ETL (Extract, Transform, Load) pipelines, and into data sets used to train ML (Machine Learning) models. You will collaborate with our Data Scientists, Business Analysts, and Machine Learning Engineers to produce quality data flows, transformations, and cleansing that improve data products for the customer. You will help democratize data so that data scientists can experiment and train machine learning models and business analysts can support the enterprise. You bring enthusiasm and drive to deliver data products that exceed expectations, a passion for data engineering, and an eagerness to learn. This is an exciting opportunity for an engineer ready to move this enterprise forward on our data maturity path toward predictive analytics. Join us on our data journey.
What You Will Do
- Join a tightly knit team solving hard problems the right way
- Understand the various sensors and environments critical to our customers’ success
- Know the data flows and technology that are currently in use to transform raw data into analytic products
- Build relationships with the awesome team members across other functional groups
- Learn our code practice, work in our code base, write tests, and collaborate with us in our workflows
- Contribute to onboarding processes and make recommendations to improve them
- Demonstrate your capabilities defining solutions, implementing, and delivering data products for your user stories and tasks
- Contribute to systems and processes that implement and automate quality on data pipeline deliverables
- Implement data quality tests, improve inefficient tooling, and adopt new transformative technologies while maintaining operational continuity
- Contribute to the quality of the data and the pipeline by working closely with the product team and stakeholders to understand how our products are used
- Identify opportunities to improve our infrastructure, operational performance, and data pipeline deliverables
- Evaluate new technologies and build proof-of-concept systems to enhance Data Engineering capabilities and data products
- Contribute to improving the efficiency of our pipeline scripts, automation, and general data operations
- Demonstrate command and accountability for the design and implementation of new features
- Develop and support data operations and efficiencies in production
- Demonstrate competency in data modeling for new and existing capabilities while advancing the maturity of our data
- Influence your peers through your excellence in delivering high quality data products and code reviews
- Deliver operational data from the data platform to software and analytic teams that produce aggregate metrics from real-time data streams
- Establish a reputation for reliability in data contextualization and troubleshooting with the team
- Improve the velocity of development of data ingestion, orchestration, fusion, transformation, and data analysis
- Deliver infrastructure required for optimal extraction, transformation, and data loading in predictive analytic contexts
- Transform ETL development with optimizations for efficient storage, retention policies, access, and computation while accounting for cost
- Contribute to the strategic maturity of all our operations and the delivery of product requests
- Define orchestrations of data transformations that distill information to highly valuable signals for ML models
- Collaborate with your teammates to deliver a data analytics and AI platform for advanced analytic data product development
Who You Are and What You Bring
- BA/BS degree in a technical/quantitative field and 5+ years of experience working in Data Engineering.
- Proven SQL and Python skills.
- Ability to work independently, solve problems, and learn quickly as part of a larger agile team
- Experience delivering and articulating data models to support enterprise and data product needs.
- Experience in the design and delivery of at least three Data Engineering areas (Data Pipelines/ETL, Relational and Non-Relational Databases, Data Warehousing, Data Lake, Business Intelligence Platforms)
- Able to demonstrate sound coding practices (comments, style, consistency, efficiency, reuse, etc.) and experience applying them in code reviews.
- Must have experience with managed services in AWS, Azure, or GCP (Google Cloud Platform)
- Must have experience validating data quality, preferably with test automation (pytest, DBT, etc.)
- Core technologies: SQL, Python, JavaScript, Git, ODBC, HTTP, AWS
Desired But Not Required
- Proven experience building data pipelines with orchestration tools such as Luigi or Airflow.
- Experience with data governance, PII, and data access paradigms.
- Proven experience in handling time series telemetry data.
- Practical experience with slowly changing dimension implementations
- Experience with Snowflake or another Cloud Data Platform and/or Lakehouse architecture.
- Experience working with Kubernetes or other container-orchestration system.
- Experience deploying data pipelines and data models to a production environment.
- Experience operating and monitoring production data pipelines.
Digi International offers a distinctive Total Rewards package including a short-term incentive program, new hire stock award, paid parental leave, open (uncapped) PTO, and hybrid work environment in addition to our competitive medical, health & wellbeing and compensation offerings.
The anticipated base pay range for this position is $103,000 - $161,500. Pay ranges are determined by role, job level, and primary job location. The range displayed reflects the reasonable range we anticipate paying for this position and reflects the cost of labor within several U.S. geographic markets. The specific salary offered within the range will depend on various factors including, but not limited to, the candidate’s relevant and prior experience, education, skills, and primary work location. It is not typical for an individual to be hired at or near the top of the range for their role, and compensation decisions are dependent on the facts and circumstances of each position. Pay ranges are typically reviewed and updated annually.
At Digi, we embrace diversity and inclusion among our teammates. It is critical to our success as a global company, and we seek to recruit, develop and retain the most talented people from a diverse candidate pool. We are committed to providing an environment of respect where equal employment opportunities are available to all applicants and teammates.
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Engineer
Industries: Software Development