Machine learning & Augmented Reality driving immersive experiences
Imagine a world where machines can assemble content and deliver interactive experiences with little or no human creative input or interaction. While the creative side of machine learning is still a bit of a stretch, the reality is that machines can already follow design rules and apply recipes and templates: pulling social content into editorial, synchronizing to cameras that double as servers, and talking to sensors and other data points (statistics, human biometrics, scoring, and so on). These elements layer in as the templates direct, synchronized by time, location, and frame-accurate timing, to create immersive experiences that not long ago could only be achieved with production trucks and facilities.
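To make the recipe-and-template idea concrete, here is a minimal Python sketch of how an automated assembler might layer timestamp-aligned sources into one composition. Every class, source name, and parameter here is hypothetical, for illustration only, and not drawn from any specific product.

```python
from dataclasses import dataclass

# Hypothetical sketch: every source (camera feed, sensor stream, social
# post) is tagged with a capture timestamp on a shared clock, and the
# recipe layers whatever falls inside its alignment window.

@dataclass
class SourceSample:
    source_id: str      # e.g. "cam-3", "heart-rate-7", "scoreboard"
    timestamp: float    # capture time in seconds (shared clock)
    payload: dict       # frame reference, sensor reading, post text, ...

@dataclass
class Recipe:
    layers: list        # source_ids, ordered bottom to top
    window: float       # max drift (seconds) for a sample to still align

def assemble(recipe: Recipe, samples: list, t: float) -> list:
    """Pick, for each layer, the sample closest to time t within the window."""
    composed = []
    for source_id in recipe.layers:
        candidates = [s for s in samples
                      if s.source_id == source_id
                      and abs(s.timestamp - t) <= recipe.window]
        if candidates:
            composed.append(min(candidates, key=lambda s: abs(s.timestamp - t)))
    return composed

# Example: layer a camera frame, a biometric reading, and a score overlay.
recipe = Recipe(layers=["cam-3", "heart-rate-7", "scoreboard"], window=0.02)
samples = [
    SourceSample("cam-3", 12.000, {"frame": "f_12000"}),
    SourceSample("heart-rate-7", 12.008, {"bpm": 164}),
    SourceSample("scoreboard", 11.995, {"home": 2, "away": 1}),
]
print([s.source_id for s in assemble(recipe, samples, t=12.0)])
# -> ['cam-3', 'heart-rate-7', 'scoreboard']
```

The point of the sketch is that once every contributor is stamped against a common clock, layering becomes a rules-driven lookup rather than a manual edit.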
Tier One events and productions will continue to rely on expensive, resource-intensive processes and people for the foreseeable future; the money involved is enough to sustain the existing inefficiencies. For everyone else, the playing field is about to level: when the costs are machine- and cloud-based, you can deliver richer production features in an automated environment managed like IT (a utility) rather than like a systems integrator or agency (billable hours).
As consumers hunger for richer content, they are becoming less interested in best-effort, raw, socially shared video. Brands are certainly looking for better representation through produced content, so an intersection is coming where expectations will keep rising on both sides. Because the old-school providers are so entrenched in hours of creative labor, the immediate innovations in efficiency are unlikely to come from those production- and people-heavy corners of the industry. The IT industry is focused on Everything as a Service, except creative, so it takes a unique set of capabilities and partnerships to develop this hybrid approach and move us to a new level of creative automation.
We have been focused on the first step: transforming the camera from a stranded lens into an intelligent contributor to the creative workflow and network. With microprocessing and sensor advances driven by smartphone and action-camera development, combined with IoT communication protocols, we can now network these microprocessors into a mesh of synchronized video and data contributors (servers with lenses) feeding harmonized experiences. It is going to be an amazing road ahead as advancements in AI applied to software-defined media workflows let the market move beyond the bookends of expensive Tier One broadcast and social video to market-wide branded, interactive experiences accelerated by machines and automation.
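As one way to picture the "servers with lenses" mesh, here is a minimal sketch of a camera node publishing frame metadata over MQTT, a common IoT messaging protocol. The broker address, topic layout, and payload fields are illustrative assumptions under this sketch, not a description of any actual system.

```python
import json
import time

import paho.mqtt.client as mqtt  # third-party IoT messaging library

BROKER = "mesh-broker.local"  # hypothetical broker on the local network
NODE_ID = "cam-3"             # illustrative camera node name

# paho-mqtt 1.x style constructor; 2.x additionally requires a
# CallbackAPIVersion argument.
client = mqtt.Client(client_id=NODE_ID)
client.connect(BROKER, 1883)
client.loop_start()  # handle network traffic on a background thread

# Publish lightweight frame metadata (not the video itself) so the rest
# of the mesh can align this camera's frames with other contributors.
for frame_number in range(300):  # ~10 seconds at 30 fps
    metadata = {
        "node": NODE_ID,
        "frame": frame_number,
        "timestamp": time.time(),    # ideally an NTP/PTP-disciplined clock
        "gps": [40.7580, -73.9855],  # example location tag
    }
    client.publish(f"mesh/{NODE_ID}/frames", json.dumps(metadata))
    time.sleep(1 / 30)

client.loop_stop()
client.disconnect()
```

Publishing metadata rather than video keeps each node's contribution light enough for IoT-class links, while still giving downstream automation what it needs to synchronize by time and location.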
What could this all look like? Take a peek into the future by clicking the link below.
Click here for an immersive, multi-camera, data-synchronized experience