ViiVids extend the potential of video much as moving images once extended the static photograph, their less engaging and less complete predecessor. By leveraging the widely available mobile phone, they deliver an experience previously achievable only with specialist, expensive AV hardware and software, democratising content creation through intuitively simple finger gestures and innovative augmented reality. Given the commercial reach of the incumbent video format, the disruptive potential of ViiVid technology is game-changing.
The mesh network (the recording network) over which data is transmitted and received can be expanded via routers/hubs, cellular networks, and other IP uplinks. Recording devices, mobile or static, can be any device capable of transmitting over a network, including professional broadcasting systems, phones, cameras, etc. During a live recording, footage is transmitted to remote viewers via a content delivery network, where it is transcoded for live consumption.
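To make the device-to-device traffic concrete, the sketch below defines a hypothetical per-device status message and serialises it for transmission as a UDP datagram. The field names are illustrative assumptions, not part of any published ViiVid protocol.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Heartbeat:
    """Hypothetical per-device status message; field names are assumptions."""
    device_id: str    # unique identifier of the recording device
    timestamp: float  # sender's clock, seconds since the epoch
    lat: float        # GPS latitude in degrees
    lon: float        # GPS longitude in degrees
    alt: float        # altitude in metres
    recording: bool   # whether the device is currently capturing

def encode(hb: Heartbeat) -> bytes:
    """Serialise a heartbeat to JSON bytes, ready for a UDP datagram."""
    return json.dumps(asdict(hb)).encode("utf-8")

def decode(payload: bytes) -> Heartbeat:
    """Reconstruct a heartbeat received from the network."""
    return Heartbeat(**json.loads(payload.decode("utf-8")))

hb = Heartbeat("cam-01", time.time(), 51.5074, -0.1278, 11.0, True)
restored = decode(encode(hb))
```

In practice each device would broadcast such a datagram every few hundred milliseconds over whichever links are available, letting peers track who is nearby and still recording.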
When captured, ViiVid data streams (including audio/visual output, time signatures, locational/positional information, metadata, status heartbeats, etc.) are continuously generated and broadcast over a decentralised, P2P mesh network of mobile and static recording devices. The pertinent data is synchronised and logged by each device within the network. Employing blockchain-style principles, the network protocol has no single point of failure or corruptibility. It comprises direct (e.g. Bluetooth), local (e.g. Wi-Fi), and cellular links simultaneously, allowing co-located individuals (including those with no cellular data) and remote viewers (served via a content delivery network) to dynamically pan in a given direction between vantages within the recording network, synchronised in space and time. Unintrusive AR markers indicate the relative location and position (distance, altitude, and/or direction) of other available vantage points within the current field of view, with periphery markers indicating those out of frame.
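The geometry behind such AR markers can be sketched from standard great-circle formulas. Assuming each device shares its GPS position and altitude, the function below (an illustrative sketch, not the platform's actual implementation) computes a peer vantage point's distance, bearing, and altitude delta, and decides whether it falls inside the viewer's horizontal field of view or belongs on a periphery marker.

```python
import math

def marker_for(own_lat, own_lon, own_alt, own_heading_deg, fov_deg,
               peer_lat, peer_lon, peer_alt):
    """Return (distance_m, bearing_deg, alt_delta_m, in_frame) for a peer vantage.

    Uses the haversine formula for ground distance and the standard
    great-circle initial bearing; in_frame is True when the bearing falls
    inside the viewer's horizontal field of view.
    """
    R = 6_371_000.0  # mean Earth radius, metres
    p1, p2 = math.radians(own_lat), math.radians(peer_lat)
    dlat = math.radians(peer_lat - own_lat)
    dlon = math.radians(peer_lon - own_lon)

    # Haversine ground distance between the two devices
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    distance = 2 * R * math.asin(math.sqrt(a))

    # Initial great-circle bearing from own position to the peer
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = math.degrees(math.atan2(y, x)) % 360

    # Angle between camera heading and peer, wrapped to [-180, 180)
    off_axis = (bearing - own_heading_deg + 180) % 360 - 180
    in_frame = abs(off_axis) <= fov_deg / 2
    return distance, bearing, peer_alt - own_alt, in_frame
```

A peer 0.001° due north (roughly 111 m away) would render as an in-frame marker for a camera facing north with a 60° field of view, and as a periphery marker for a camera facing south.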
Once the recording concludes, the media artifacts, coupled with checksums and a timeline of pertinent synchronisation data, are sent for server-side, post-production processing to corroborate, merge, and prepare the vantages for on-demand retrospective ViiVid playback. Alongside traditional audio/visual enhancement and transcoding, more sophisticated methods (audio isolation, ML-enabled visual object recognition, temporal parallax, depth perception, and trigonometric synchronisation) can then be applied to better calculate the relative positions between vantage points, triangulating between multiple points of interest within a shared field of vision where possible.
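Two of the corroboration steps above can be illustrated simply: verifying an uploaded artifact against the checksum logged at capture, and estimating the clock offset between two devices from timestamps they both logged for the same synchronisation events, so their footage can be placed on a common timeline. This is a minimal sketch under those assumptions, not the platform's actual pipeline.

```python
import hashlib
from statistics import median

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Confirm an uploaded media artifact matches its capture-time checksum."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

def clock_offset(pairs):
    """Estimate device B's clock offset relative to device A.

    `pairs` is a sequence of (t_a, t_b) timestamps that both devices logged
    for the same synchronisation events; taking the median difference keeps
    the estimate robust to a few delayed or dropped packets.
    """
    return median(t_b - t_a for t_a, t_b in pairs)

clip = b"\x00\x01\x02"
digest = hashlib.sha256(clip).hexdigest()
```

Subtracting the estimated offset from device B's timestamps aligns its frames with device A's before the vantages are merged.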
The diagram below shows the data flows through the ViiVid platform: how content moves between creators and viewers, and how the server infrastructure mediates between the multiple content streams and their audiences. These flows are split by content type, as live recording, post-production processing, and retrospective playback each carry different data loads and permissions.
This will be refined during the project to include secure APIs that allow Happaning to run natively within existing social media apps such as Instagram, TikTok, Facebook, Twitch, and YouTube.
Like Google Street View but with video, Happaning utilises ViiVids by letting users broadcast and watch an event from multiple perspectives in one immersive, navigable experience – switching perspectives as if they were there!