Measuring Live Events and Huge Crowds
It is a truism of people measurement that the more people you have in a space, the harder measurement becomes. I often see clients testing sensor technologies by tracking one or two people in a big empty space. Not a good idea! Technologies that work perfectly in open, uncrowded spaces can utterly fail when tasked with counting people in airport security lines or – even worse – packed into a live event. Most of the technologies we use are geared to tracking full customer journeys – tracking each individual – and while they aren’t optimized for dense crowds, they can generally handle all but the most crowded environments. Still, within that “all but” there can be a world of hurt.
That’s especially true for lidar-based measurement. We love lidar, and it’s our default choice for most people-measurement applications. But lidar has two big issues when it comes to people measurement in REALLY crowded environments: it has a hard time separating individuals (figuring out that a blob in the point cloud represents two people, not one), and it tends to lose people into the environment when they stop moving for an extended period of time.
These problems are both artifacts of the perception software used with lidar, not necessarily of the sensor data itself. In theory, if lidar has decent line of sight and sufficient beam density, it can accurately count people no matter how tightly you press them together (though fortunately there appears to be very little demand for flow analytics at sex parties). But perception systems have been built to optimize for the detection of moving objects in space. For automotive and robotic applications, it doesn’t really matter how many people are on a crosswalk – it only matters that they are there. Similarly, people sitting on a park bench just don’t matter. So perception software design is mostly built around point-clustering to identify objects and aggressive background removal of unmoving objects to maximize performance. Both tactics reduce performance in crowded environments, especially when people aren’t moving around.
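To make that clustering failure mode concrete, here’s a minimal sketch – not any vendor’s actual algorithm – of the single-link point-clustering that perception pipelines typically rely on. Once two people stand closer together than the merge radius, their lidar returns chain into a single blob and the count drops by one:

```python
import math

def cluster(points, radius):
    """Greedy single-link clustering: points closer than `radius`
    (directly, or via a chain of neighbors) end up in one cluster.
    This mimics the point-clustering step perception software uses
    to turn a lidar point cloud into discrete "objects"."""
    labels = [-1] * len(points)
    current = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i] = current
        stack = [i]                      # flood-fill from point i
        while stack:
            j = stack.pop()
            for k in range(len(points)):
                if labels[k] == -1 and math.dist(points[j], points[k]) <= radius:
                    labels[k] = current
                    stack.append(k)
        current += 1
    return current                       # number of distinct "people"

# Two people standing ~0.4 m apart, each returning a few lidar hits
# (illustrative coordinates in metres).
person_a = [(0.00, 0.00), (0.05, 0.10), (-0.05, 0.05)]
person_b = [(0.40, 0.00), (0.45, 0.08), (0.35, 0.05)]
scene = person_a + person_b

print(cluster(scene, radius=0.5))  # merge radius too wide: 1 "person"
print(cluster(scene, radius=0.2))  # tighter radius: 2 people
```

Tightening the radius isn’t a free fix, of course – set it too small and a single person’s returns fragment into multiple objects, which is why dense-crowd separation is genuinely hard for these pipelines.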
There’s also a performance-based limit on how many distinct objects the perception software will track in real time – usually between 500 and 1,000 people. Again, that’s all about software performance, not sensor capabilities.
But limitations are limitations no matter their source. With existing lidar systems and commercially available perception software, we can track every single shopper in even the largest and busiest retail store. We can track every single person in an airport queue. But if you want to know how many people are seated in a Formula 1 grandstand or (as we were once asked about) are around the Ka’bah during Hajj, we’d have no effective means of answering your question.
That’s a problem that PMY has solved – and solved in a way that is cost-effective, highly portable, and already(!) integrated into our platform. It’s no surprise that a company whose slogan is “Empowering Live” chose to focus on this part of people measurement. If there is one thing that sporting and live events have, it’s dense crowds. When I was out at the US Open earlier this year, I experienced first-hand the kind of density that can utterly defeat most people measurement. The main area where sponsors and F&B vendors live gets incredibly packed. The area outside Arthur Ashe is just a sea of people – mostly sitting down. And, of course, every court has grandstands filled with people packed together and largely unmoving for hours.
If you’re managing operations at a live event, you need to understand crowds and density and you need that information in near real-time. Historical uses of the data exist, but the overriding imperative on this kind of data is to deliver it immediately when it can do the most good.
Of course, this being people measurement, there are challenges with both environment and technology. Like many live events, the US Open happens once a year. No matter how much IT and camera infrastructure is there to support normal operation, it can’t possibly accommodate the vast influx of people that comes with a major sporting event. I’m sure you’ve had the incredibly annoying cell-phone experience of having 4 bars of 5G service and being unable to load Google because the bandwidth is all taken! The challenges don’t end with bandwidth. Events often run day and night and must cope with highly variable lighting; mounting points are scarce or non-existent for many critical areas; and because of the ephemeral nature of events, it just doesn’t make sense to sink capex into a lot of sensor equipment.
PMY’s OPTIC crowd-solution deals with all of that. It’s a camera-based ML system specifically designed for dense crowd counting. In fact, that’s all it does. If you can point a decent camera at a bleacher, a grandstand, a grassy field, or a packed concourse, you can get a pretty accurate crowd count in near real-time. You can do that with any digital camera – even using secondary feeds from existing CCTV systems. And you can do it even when you are mounting a camera in a place (like the roof of a stadium) from which people counting would normally be impossible.
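Camera-based crowd counters of this kind typically work by predicting a density map rather than detecting individual people – each cell holds the expected fraction of a person at that location, and the scene count is simply the sum over the map. OPTIC’s internals aren’t public, so the map below is purely illustrative of the general technique:

```python
# Hypothetical density map, shaped like what a CNN-style crowd
# counter emits: per-cell expected person density. (Illustrative
# values only -- not output from any real model.)
density_map = [
    [0.0, 0.2, 0.3, 0.0],
    [0.1, 0.9, 1.1, 0.2],
    [0.0, 0.6, 0.5, 0.1],
]

def crowd_count(density):
    """The crowd count is the integral (sum) over the density map."""
    return sum(sum(row) for row in density)

print(round(crowd_count(density_map)))  # 4 people in this frame
```

This is why the approach copes with grandstands and packed concourses: it never needs to segment one person from the next, which is exactly the step that fails in dense crowds.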
The use of existing CCTV cameras can dramatically reduce the infrastructure cost. The ability to use inexpensive digital cameras lets us supplement existing infrastructure for key spots with very little overhead. And because PMY just rotates kit from event to event, people don’t have to buy the necessary additional sensors. The video streams flow into a single processor and the system works by snapshotting the scene and producing a count. Snapshotting removes the need to generate a count with every frame (even 1 fps is total overkill for this kind of counting) and makes it possible for a single processor to handle dozens of video streams.
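The arithmetic behind snapshotting is straightforward. With illustrative figures (48 streams from 30 fps cameras – not PMY’s actual numbers), sampling one frame per stream per second cuts the inference load thirty-fold, which is what lets one processor serve dozens of feeds:

```python
# Back-of-envelope sketch of why snapshotting works. All numbers
# here are illustrative assumptions, not PMY's actual figures.
STREAMS = 48       # CCTV feeds plus portable cameras into one box
NATIVE_FPS = 30    # typical camera frame rate
SNAPSHOT_FPS = 1   # one count per stream per second is plenty

# Inferences per second if every frame were counted:
frames_full_rate = STREAMS * NATIVE_FPS
# Inferences per second with snapshotting:
frames_snapshot = STREAMS * SNAPSHOT_FPS

print(frames_full_rate)  # 1440
print(frames_snapshot)   # 48
```

Since crowd occupancy changes over seconds and minutes, not milliseconds, nothing meaningful is lost by dropping the intermediate frames.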
It's the most practical approach I’ve seen to generating counts in densely packed areas – the kind of areas that dominate live events.
This isn’t by any means a general-purpose flow-analytics solution. Right now, dense crowd measurement is about the only supported use-case. We’re mapping out ways to enhance this basic video ML capability to deliver more value: from adding x,y coords to every individual (heatmapping!) to camera-based re-identification. But even without anything new, it gives us a dynamic measurement capability to monitor crowding in areas where it can matter most.
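If per-person x,y coordinates do get added, heatmapping falls out almost for free – it’s just a 2D histogram of detections. The cell size and the sample detections below are made-up values, purely to illustrate the binning step:

```python
from collections import Counter

CELL = 5.0  # hypothetical heatmap cell size, in metres

# Made-up per-person (x, y) detections for one snapshot.
detections = [(1.2, 3.4), (2.8, 4.1), (7.5, 3.9), (12.0, 14.2), (2.1, 3.3)]

# Bin each detection into its grid cell; the Counter IS the heatmap.
heatmap = Counter((int(x // CELL), int(y // CELL)) for x, y in detections)

print(heatmap[(0, 0)])  # 3 people in the first 5 x 5 m cell
```

Accumulate those counters across snapshots and you have a dwell-weighted heatmap of the venue.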
Integrating that data has been a snap, and we’ve already got data flowing from PMY’s existing ML vision processing into our people-measurement platform. That means you can measure a merch store, track real-time queue lines, AND monitor occupancy in any event area, all with the same single pane of glass or real-time feed into your systems.
It’s the one pre-existing people-measurement capability that PMY has, and it’s uniquely suited to the kinds of clients and events that PMY specializes in. PMY has a robust ML and AI team (much more than I anticipated), and we expect to continue enhancing these camera-based capabilities – making it easier to tie data between lidar and camera systems, doing re-identification, and adding customer metadata to get full use out of the CCTV infrastructure many, many sites have already deployed. I’m particularly enthusiastic about the ways CCTV cameras can enhance and extend continuous-flow lidar measurement. To me, that combination is the future of people measurement.
But even in its most basic stage, camera-based crowd counting delivers a unique solution in the marketplace. It’s an arrow in our quiver that can hit a target we would never otherwise attempt – because there are just too many people!