The combination of Navicam™ and NAD provides the following:
• The capture of previously impossible camera angles
• Special “zoom tethering” feature where the system adjusts its zoom to maintain a constant perceived distance to the target, giving viewers the feeling of actually “travelling” with it
• Special “crash zoom” feature where the system energetically zooms in or out from the target, maintaining perfect focus throughout
• The ability to switch between moving targets and fixed positions such as a geographical feature, a position in the grandstands, the scoreboard, etc.
• The supply of quality close-up footage suitable for display on small screens, e.g. for online and mobile channels, currently only available with significant post-production cropping
• Reliable capture of incidents with complex dynamics, e.g. spins and crashes
• Super-human reaction times to assist with production decisions
• All footage is automatically indexed in real time and is searchable by the action captured
NAD MODULES USED WITH TRADITIONALLY CONTROLLED CAMERAS
ADRS will also be partially useful when only the dynamics of the targets are recorded, and those of the cameras are not. This is already the case at a number of sports events today, since target dynamics are used for various statistical purposes. ADRS will be able to reference all the scenes involving the targets without restriction. What will be missing is a positive connection with the camera filming the event, and therefore metadata such as size and position on screen and quality of tracking and focus. A connection between a scene and the correct camera may, however, be deduced with an acceptable degree of certainty simply from the relative positions of the camera and the scene. ADRS would therefore still be able to reference the footage and make it searchable as ADDF, with a certain margin for error.
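The camera-to-scene deduction described above can be illustrated with a minimal sketch: pick the camera closest to the recorded scene position, and refuse to guess when even the nearest camera is implausibly far away. The function name, coordinate layout and distance threshold below are hypothetical illustrations, not part of ADRS.

```python
import math

def nearest_camera(scene_pos, cameras, max_range=500.0):
    """Deduce which camera most likely filmed a scene by picking the
    closest one within a plausible filming range (hypothetical threshold,
    in the same units as the coordinates)."""
    best, best_dist = None, float("inf")
    for name, (cx, cy) in cameras.items():
        dist = math.hypot(scene_pos[0] - cx, scene_pos[1] - cy)
        if dist < best_dist:
            best, best_dist = name, dist
    # Reflects the "margin for error": no match if nothing is in range.
    return best if best_dist <= max_range else None

cameras = {"cam_turn1": (0.0, 0.0), "cam_pits": (800.0, 50.0)}
print(nearest_camera((30.0, 40.0), cameras))  # cam_turn1
```

A real deployment would also account for camera orientation and field of view, which is why the deduction only reaches "an acceptable degree of certainty" rather than a positive connection.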
NAD USED IN CONJUNCTION WITH NAVICAM
To exploit its potential to the fullest, NAD should be used in conjunction with the Navicam™ camera control system when filming events in which the dynamics of both the target (such as the participants in a sports event) and the camera are recorded.
• All footage is effectively referenced by ADRS and therefore becomes searchable by ADSE in real time, eliminating the need for a human operator to reference all material visually, with the corresponding gains in time, reliability and repeatability
• ADRS will define search handles on the basis of binary metadata, removing the dependency on the language and judgment of the reference person; treated footage will be referred to as ADDF.
• ADDF containing none of the search handles defined by the settings selected by the director during the ADRS process can be ignored, entirely bypassing visual referencing
• Vastly reduced production time and costs
• Ability to rapidly process vast amounts of footage
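The bypass of visual referencing described in the list above can be sketched as a plain metadata filter: footage carrying none of the director-selected search handles is dropped without any human review. The handle names and clip records below are invented for illustration; the source does not specify how ADRS encodes its binary metadata.

```python
def filter_addf(clips, selected_handles):
    """Keep only ADDF clips whose metadata contains at least one of the
    director-selected search handles; all other clips are skipped
    entirely, with no visual referencing step."""
    wanted = set(selected_handles)
    return [clip for clip in clips if wanted & set(clip["handles"])]

clips = [
    {"id": 1, "handles": {"overtake", "close_follow"}},
    {"id": 2, "handles": {"pit_stop"}},
    {"id": 3, "handles": set()},  # no matching handles: bypassed
]
hits = filter_addf(clips, {"overtake"})
print([clip["id"] for clip in hits])  # [1]
```

Because the handles are binary metadata rather than free-text descriptions, the same filter gives identical results regardless of the operator's language or judgment.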
Action Dynamics Search Engine
Software that allows the user to search for specific scenes in ADDF, identified and ranked through ADRS. As a rough analogy, ADSE is a search engine for sports-event footage: it would, for example, allow a post-production company to search footage automatically. To identify and rank the acquired scenes, ADSE uses the metadata attached to the footage. Using racing cars as an example, a passing manoeuvre is easily defined by the x, y and z position data describing the cars' real positions relative to each other and to the track. This dynamic data allows vastly more sophisticated searches than those based on a human operator's verbal scene descriptions.
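The passing-manoeuvre definition above can be sketched from position data alone: an overtake is the moment the gap between two cars' positions along the track changes sign. The sketch assumes both cars are sampled at the same instants along a single track coordinate; the function name and data layout are hypothetical, not the ADSE format.

```python
def detect_pass(pos_a, pos_b):
    """Return the sample index at which car A overtakes car B, defined
    as the gap (pos_a - pos_b) going from negative to non-negative.
    Positions are distances along the track at matching sample times.
    Returns None if no pass occurs."""
    for i in range(1, len(pos_a)):
        prev_gap = pos_a[i - 1] - pos_b[i - 1]
        gap = pos_a[i] - pos_b[i]
        if prev_gap < 0 <= gap:
            return i
    return None

# Car A starts 10 m behind car B and completes the pass at sample 3.
a = [100.0, 110.0, 119.0, 131.0]
b = [110.0, 118.0, 125.0, 130.0]
print(detect_pass(a, b))  # 3
```

A production search would add the x/y/z detail the text mentions, such as lateral offset and track-relative position, but the principle is the same: the query is evaluated against recorded dynamics, not against anyone's verbal description of the scene.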
Action Dynamics Defined Footage
Footage containing metadata produced by ADRS.