Week 15 - Post launch reflection

Jon (Product lead):

It’s been three weeks since we launched Weave. What’s been going on?

It’s always hard to define whether a product launch was a success or not. I guess it depends on the lens you look at it through.

In terms of generating excitement and getting some signal as to whether what we are working on is important, it was a pretty resounding yes.

Neither of us are regular LinkedIn posters, and a long way from Top Voices (is that what LinkedIn call their influencers?). I’m not sure we’d ever received double-figure likes on a post. So with that context, the response to launching Weave was incredible.

Across our personal LinkedIn accounts and the Centre for AI and Climate company account, the launch post had 25,000 impressions and 380 engagements/likes.

We must be on to something.

Public support for what we are doing is great, but what we wanted was users. The next step in the flow was getting people onto the website. Traffic has been better than expected, although it followed the classic post-launch pattern: a sharp peak and an equally steep fall not long afterwards. Having said that, we’re still getting decent daily visitors.

In total, 2,100 people have visited weave.energy, and this has all come via the LinkedIn launch posts.

What really matters is whether people made it through to the data itself and downloaded it. To date we’ve had just shy of 100 individual users experiment with the data. We managed to speak to a couple of these to learn more about their experience. One described our solution as a quantum leap improvement in accessibility. We’ll take that all day!

What surprised us the most was the geographic spread. We were focussed on getting UK energy data, so expected interest to be mostly UK based. But having reviewed the data, a lot of the traffic and downloads were European, North American and even further afield.

Having thought about it, perhaps we were naive to think the data would only appeal to UK based data scientists. In areas where energy data is even harder to access, perhaps UK energy data could still provide valuable insights. One to keep an eye on for sure.

The plan right now is to address the issue of unclear real-world applications. To be honest, having gone on about it week after week, I’m considering whether it’s actually what we should be chasing above all else.

We’re using a new approach to serve new data to users in a relatively new field. Perhaps it’s going to be a slow burn and we should just focus on pushing the limit of what’s possible, on the assumption that the use cases will come with time.

(I need to double check that I’m not just giving myself an easy way out though!)

Something that keeps coming up is some sort of competition to generate interesting avenues to explore that could lead to real world use cases. We’re going to get the full smart meter data set together and start working on a plan for this in a month or so. Our current thinking is to start with a data visualisation competition that challenges data scientists to find something interesting within the dataset.

We’re also exploring additional datasets to provide in the same fashion. If there’s one thing I’m sure of, it’s that the format and method Steve landed on is where a lot of the value lies, so can we apply the same approach to more in-demand datasets? EPCs and installed low-carbon technology are top of mind at the moment.

The next milestone is to get all the smart meter data for the full time period that’s been published, and have it backed up by a solid data pipeline that brings in new data as and when the DNOs release it.


Steve (Engineering lead):

We’re clearly entering that phase of the project where the work overtakes the weeknotes a bit! Since the last weeknotes a fortnight ago, I’ve mainly been heads down getting myself up to speed with data pipelines. It’s been quite a learning curve, with a few wrong steps, but I’m starting to see the results and feel more positive that we’ll be able to start shipping new data more quickly soon.

Now that we’re through the initial “figuring stuff out” phase (I hope), I’m happy to share the code I’ve been working on publicly too: https://github.com/centre-for-ai-and-climate/weave is our repository housing the data pipeline for Weave. As I said before, it’s based on the tool Dagster, which lets us describe our pipeline as a series of software-defined “assets”.

So far, I’ve been using SSEN as the “guinea pig” DNO whose data I’m working with, and so our pipeline contains one asset: their raw data. What it also contains (and what’s taken up most of my time so far) is all the surrounding machinery: “sensors” to keep that asset up to date with new files from SSEN, tests and linting tools for continuous integration, and GitHub Actions for continuous deployment.

One thing that might be particularly interesting to Python nerds is that I’ve been using the new uv package manager to manage our dependencies, Python versions, virtual environments, etc.

It’s been absolutely fantastic for local development, but it does come with the downside that integration with the existing ecosystem of Python tools can take a bit of yak-shaving because it’s so new. If you’re thinking about using it with Dagster in particular, check out our pyproject.toml and GitHub Actions workflows for an example of how to do that.
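To illustrate the CI side of that yak-shaving, here’s a sketch of a GitHub Actions workflow that installs uv before running anything. It assumes the astral-sh/setup-uv action; the action versions and step names are illustrative, and the real workflow in our repository may differ.

```yaml
# Illustrative CI workflow using uv; not the exact Weave configuration.
name: CI
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5   # installs uv on the runner
      - run: uv sync                  # create the venv from the lockfile
      - run: uv run pytest            # run tests inside that venv
```

The key habit is prefixing commands with `uv run` so everything executes inside the environment uv manages, rather than whatever Python the runner ships with.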
