
Dolby Atmos metadata loss with 3rd party re-encoding tools

Hi, I have been given a Dolby Vision test multimedia file that has a DD+ JOC 768 kbps Atmos 5.1 soundtrack, and I decided to evaluate only the Atmos part of it, since my monitor doesn't support the HEVC codec or the Dolby Vision extensions. So I decided to re-encode the given file into an SDR copy without altering the embedded Atmos soundtrack, using a third-party Blu-ray/DVD ripping tool. Here is my doubt: if I opt for a tool that can only re-encode the audio to DD+ 640 kbps, or even if I set the ripper's audio setting to "same as original", is the Dolby Atmos metadata lost in both cases? If yes, can the Dolby Media Producer software sort out these problems? The point is that I don't want to convert the audio during the re-encoding process.

Alternatively, can the Dolby Atmos Production Suite itself decode DD+ JOC? If I load this .mp4 Dolby Vision file into Pro Tools and connect it to the Renderer, will it work? Then there would be no need for a third-party tool. I'm afraid that most media converters available these days, say Wondershare UniConverter, which advertise that they can re-encode DD+, cannot preserve the embedded Atmos metadata even when we only want to use them for the HDR-to-SDR conversion, right? The result is that when we load such files into a consumer home-theatre Dolby Atmos decoder, the AVR doesn't recognise the Dolby Atmos signal and just upmixes it.

Most MP4 muxers will not preserve the JOC metadata and/or the proper codec signaling to a downstream decoder, so you will just be left with 5.1.


If you need to demux/remux an MP4 with DD+ JOC, you will need to use the open-source tools that Dolby has posted on GitHub. Links to them and their usage are described in this article:

https://developerkb.dolby.com/support/solutions/articles/16000100317-can-i-put-my-own-video-into-the-mp4-file-instead-of-black-
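
As a rough, generic illustration (not the Dolby GitHub tools linked above), here is a minimal Python sketch that uses the common ffprobe/ffmpeg command-line tools to check what a ripper actually produced and to stream-copy the audio instead of re-encoding it. The file names are placeholders and ffmpeg/ffprobe are assumed to be installed; note that, as mentioned above, even a bit-for-bit audio copy through a generic MP4 muxer can still lose the JOC signaling to a downstream decoder.

```python
# Sketch: inspect the audio stream of a ripped file and stream-copy it during a remux.
# This only illustrates the generic FFmpeg/ffprobe tools; it is NOT the Dolby GitHub
# utilities linked above, and the file names are placeholders.
import json
import subprocess

SOURCE = "dolby_vision_test.mp4"   # placeholder input file
OUTPUT = "sdr_remux.mp4"           # placeholder output file

# Ask ffprobe for the stream list as JSON and look at the first audio stream.
probe = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_streams", SOURCE],
    capture_output=True, text=True, check=True,
)
streams = json.loads(probe.stdout)["streams"]
audio = next(s for s in streams if s["codec_type"] == "audio")

# Dolby Digital Plus is reported as "eac3"; a re-encode to plain Dolby Digital
# would show "ac3", which carries no JOC/Atmos metadata at all.
print("audio codec:", audio["codec_name"], "| bitrate:", audio.get("bit_rate"))

# Remux with the audio copied bit-for-bit (-c:a copy) instead of re-encoded.
# The video is simply re-encoded to H.264 here; HDR-to-SDR tone mapping is out
# of scope for this sketch. Even with a bit-for-bit audio copy, a generic MP4
# muxer may drop the JOC signaling, which is why the Dolby tools are recommended.
subprocess.run(
    ["ffmpeg", "-i", SOURCE, "-map", "0:v:0", "-map", "0:a:0",
     "-c:v", "libx264", "-c:a", "copy", OUTPUT],
    check=True,
)
```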


However, if this is Dolby Vision test/marketing material, I discourage you from remuxing it with SDR video, as this will create misleading material. If your platform is unable to decode HEVC, I suggest you contact the Dolby representative you obtained this from and ask whether an H.264 version is available and/or there is alternative material that does not reference Dolby Vision.


Best,

Adam


Thanks a lot for clarifying this. No, this is not for any marketing purpose. I am an independent artist working from home and have no connection with sales. I am from an engineering background, a student, and I recently started making music in Dolby Atmos during lockdown as a hobby. I am currently pursuing a course in object-based immersive audio mixing here in India. I am learning your new set of tools and it's going great so far, and I love it. At this beginner stage I have been testing these Dolby Vision and Dolby Atmos materials and trying to understand them, watching some of your webinars, Behind the Mix, and so on, at home. Thanks for the link; I will incorporate it into my workflow.

You can add some more suggestions if you would like, with regard to the spatial coding process in DAPS. I am curious about these concepts too. How can we efficiently configure spatial coding to preserve the maximum number of objects in the final 768 kbps JOC fold-down generated by the Renderer? I read in the manual that it groups certain nearby objects into one, which can lead to a loss of precision, right? And how does this spatial coding process impact binaural deliverables?

Spatial Coding is always part of encoding to Dolby Digital Plus JOC. It is not used at all in binaural rendering. The number of elements used in Spatial Coding to Dolby Digital Plus JOC depends on the datarate: encoding at 384 kbps uses 12 elements, and encoding at 448 kbps and higher uses 16 elements. 16 elements provide sufficient resolution for Spatial Coding even with a full track count of 128 beds and objects reproduced on speaker layouts of 9.1.6 and higher.

The efficiency of Spatial Coding works in part because, even with high track counts, not all inputs are usually active at once. It is remarkably transparent, but not entirely transparent with some content. That is why Spatial Coding emulation is available in the Renderer: you can A/B the results. Spatial Coding emulation should be left on during mixing to audition the effect of Spatial Coding and, if needed, make any mix adjustments necessary to ensure you are satisfied with how it will sound when encoded and streamed on OTT services or on Blu-ray.
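
Purely to put numbers on the figures above, here is a toy sketch of the datarate-to-element mapping as described in this post (it is only an illustration of those figures, not any actual Dolby API; the function name is made up for the example):

```python
# Toy illustration of the Spatial Coding element counts quoted above.
# Mirrors the figures in this post only; this is not a Dolby API.
def spatial_coding_elements(ddp_joc_kbps: int) -> int:
    """Number of Spatial Coding elements at a given DD+ JOC datarate."""
    if ddp_joc_kbps >= 448:
        return 16   # 448 kbps and above: 16 elements
    if ddp_joc_kbps >= 384:
        return 12   # 384 kbps: 12 elements
    raise ValueError("Datarate below the range discussed here")

# The 768 kbps DD+ JOC stream discussed in this thread therefore uses 16 elements,
# into which up to 128 beds and objects are grouped during encoding.
print(spatial_coding_elements(768))  # -> 16
print(spatial_coding_elements(384))  # -> 12
```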


Hope this helps.


Best,

Adam


Great, I got it, sir. Thanks. Lastly, I would like to know whether it is possible to use my 5.1.4 home-theatre AVR as the output of my Renderer, and if so, how. For now I am using binaural monitoring inside DAPS, and I have observed the following: the headphone environment in general is not that effective when panning objects from front to back, and the effect is nearly static in whatever mode I choose (off, near, mid, or far). For example, with slap delays, when I pan those echoes from front to back overhead, I hear them only as left-to-right ping-pongs above my head on my headphones. On the plus side, I have noticed far better imaging and clarity on headphones with any objects panned directly above my head, slightly offset about 2 degrees left or right. Your virtualisation technology works great on headphones when things are panned either directly overhead or, as usual, when they are placed at the rear corners.

For this reason I prefer to use binaural renders instead of loudspeaker renders, because this much depth may or may not be achieved with actual loudspeakers; working with loudspeakers depends largely on room acoustics, and precision can be lost with an Atmos rig in a poorly calibrated, reverberant room that produces resonances. Again, your speaker decorrelation technology works best, and the effects are most evident, when addressing individual speakers for special effects. With headphones, though, we always get consistent precision and depth (as said, this works best when objects are panned in a tetrahedral fashion).

Now, coming back to front-to-back panning: to address this correctly I might need a well-calibrated, Dolby Atmos certified studio, but since I am working from home at the moment, that is not possible. Fortunately, I have an Onkyo HT-S5915 Dolby Atmos 5.1.4 home theatre, and its room response has been acoustically calibrated. I have therefore decided to use my home theatre as my primary monitoring system. If this is possible, please let me know how to configure DAPS to send signals out to a 5.1.4 HT, and the relevant routings I have to do without MADI; I have only HDMI and HDMI eARC ports. I also need to stop any further processing, such as upscaling, done by my HT. I am currently on a MacBook Pro and have the required Thunderbolt-to-HDMI adapter cable.