Revisiting rules for playable tracks


https://growmane.blogspot.com/2024/03/big-bug-emerged-whereby-cinelerra-cant.html

 

 So the playable track rule change on Mar 29 came back to haunt the lion kingdom.


This now requires an alpha channel when it didn't before.  It's hardly important for lions because, with OpenGL, there's no penalty in using an alpha channel, but it does change the output of any previous project & it's extra work to manage alpha.

It now doesn't seem necessary to always play a track if nothing at that position has any effect.  The problem that day was a compressor reading from a transition which doesn't exist at the current position.  A track should really only be playable if a plugin or edit exists at the current position.  There's no way for something to read it unless it at least has a shared module at the current position.

The thing which no longer works is skipping the track when the current plugin isn't a synthesizer.  A plugin which doesn't synthesize can still process the track if it's random access.
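
Roughly, the test being described looks like this sketch.  None of these names are the real Cinelerra API; get_edit_at, exists_at, is_random_access & so on are made up to show the shape of the rule.

// Hypothetical sketch of the playable rule, not the real PlayableTracks code.
// A track only needs rendering if something actually exists at the current
// position: a non-silent edit, a transition, or a plugin.  A non-synthesizing
// plugin still counts if it's random access or shared, because something else
// can read from it.
bool is_playable(Track *track, int64_t position)
{
    Edit *edit = track->get_edit_at(position);
    if(edit && !edit->is_silence()) return true;
    // silence with a transition still has to be rendered
    if(edit && edit->transition) return true;

    for(Plugin *plugin = track->first_plugin; plugin; plugin = plugin->next)
    {
        if(!plugin->exists_at(position)) continue;
        if(plugin->is_synthesis()) return true;
        if(plugin->is_random_access() || plugin->is_shared()) return true;
    }
    return false;
}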

Some things which have disappeared from the playable rules are checking if the projector is in view, checking if fade is off, & checking mute.  Those broke with the addition of nested EDLs.  There's no way of knowing whether a nested EDL contains a plugin which accumulates data from previous frames.

Those could still work if it tests for a nested EDL at the current time.  The problem is lions have no use case for it & there's no other user base.  The lion kingdom's only use case for mute is with a nested EDL with plugins. It would require descending into the nested EDL & knowing the plugins don't use random access or accumulation.  It would be quite laborious to add that flag to every plugin.
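
If it ever got revived, the old shortcuts would have to be gated on the absence of a nested EDL at the current position, something like this sketch.  Every name here is made up.

// Hypothetical: the old shortcuts are only safe when no nested EDL exists
// at the current position, since a nested EDL may contain plugins that
// accumulate data from previous frames & would break if frames were skipped.
bool can_skip_track(Track *track, int64_t position)
{
    if(track->has_nested_edl_at(position)) return false;
    if(track->is_muted(position)) return true;
    if(track->fade_value(position) == 0) return true;
    if(!track->projector_in_view(position)) return true;
    return false;
}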

There are fond memories of the fine granularity possible before nested EDLs.  Lions used to drag a projector out of frame or fade to 0 during playback & see the framerate instantly change.

The current kaboom case is when the track configuration changes in the middle of an audio buffer.  It basically has to test playable status for every sample.  It tries to optimize this test in the hideous functions plugin_change_duration & edit_change_duration.

There's been much head banging about whether it should fragment the audio buffer at the boundaries between edits & silence.  If playable status is going to change at the boundary, it has to fragment when crossing between silence & an edit, or at any plugin change.  It doesn't need to fragment when crossing between 2 playable edits.  Silence with a transition has to be treated as playable.
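
The rule for a boundary between 2 edits boils down to something like this sketch.  Again the names are made up; the real logic lives in edit_change_duration, & plugin changes always force a fragment, which plugin_change_duration handles separately.

// Hypothetical test for whether a boundary between 2 consecutive edits
// forces the audio buffer to be fragmented.  Silence with a transition
// counts as playable, so crossing into it doesn't change playable status.
bool boundary_needs_fragment(Edit *prev, Edit *next)
{
    bool prev_playable = !prev->is_silence() || prev->transition != 0;
    bool next_playable = !next->is_silence() || next->transition != 0;
    // 2 playable edits or 2 stretches of bare silence: no fragment needed
    // silence on one side & an edit on the other: playable status changes
    return prev_playable != next_playable;
}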

It was previously never testing for transitions when fragmenting the audio buffer.  This caused overfragmentation when it didn't need to fragment.  It was masked by PlayableTracks testing the tracks & arriving at the same table.  Overfragmentation doesn't lead to failures, but it slows things down.

Basically, VirtualConsole::test_reconfigure needs to truncate the audio buffer to the next configuration change & return whether the previous PlayableTracks matches the current PlayableTracks.  PlayableTracks determines the configuration at the current position.  It might be possible to combine the 2 functions.  PlayableTracks could calculate the fragment length, the current table, & whether the previous table matches the current table.  Then the table could be passed along until it was needed in build_virtual_console.
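
The combined version might take a shape like this.  It's only a sketch of the proposed structure, reusing the hypothetical is_playable from above & a made-up change_duration, not the current code.

#include <vector>
#include <algorithm>
#include <cstdint>

// Hypothetical combined PlayableTracks: compute the playable table at the
// current position, clamp the fragment to the next configuration change,
// & report whether the table differs from the previous one.  The table
// would then be carried along to build_virtual_console instead of being
// recomputed there.
struct PlayableResult
{
    std::vector<Track*> table;   // tracks playable at this position
    int64_t fragment_len;        // samples until the next configuration change
    bool changed;                // table differs from the previous table
};

PlayableResult get_playable_tracks(EDL *edl,
    int64_t position,
    int64_t max_len,
    const std::vector<Track*> &prev_table)
{
    PlayableResult result;
    result.fragment_len = max_len;
    for(Track *track = edl->first_track; track; track = track->next)
    {
        if(is_playable(track, position))
            result.table.push_back(track);
        // clamp the fragment to the next edit or plugin boundary
        result.fragment_len = std::min(result.fragment_len,
            track->change_duration(position, max_len));
    }
    result.changed = (result.table != prev_table);
    return result;
}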

All this code only exists because of audio.  If every audio track was always playable, it could go without edit_change_duration.  plugin_change_duration is always needed because plugins do change inside audio tracks.  They would have to be static plugins for all time to truly do without plugin_change_duration.  There have been some desires to do that, since graph-based compositing doesn't allow changing the graph over time.

The processing involved in audio is so small compared to video that edit_change_duration could easily be dropped.  This logic only exists from a long-past desire to routinely do 24 track audio editing on 150MHz computers which were too slow to even do reverb.  There have never been that many audio tracks or any processing that intense.  Maybe it would be used in a theatrical production involving hundreds of audio tracks, but those guys use Pro Tools.




The 2 kaboom cases got covered in 2 new test files.  There's switching to silence with transitions, silence without transitions, & switching to different tracks.  There's a random access effect reading from transitions & silence.  Note the resampler doesn't exist where the silence is, but it reads from the silence.  If the resampler exists where the silence is, it has to be rendered even though it's not a synthesizer.

This requires the playable tracks to exist when the play head is over silence.  Then the transitions need to be created as needed in the read function instead of based on the location of the play head.  The virtual console is created at 1:11 but the transition is at 2:00.
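
A sketch of the idea with made-up names: the transition gets looked up & created at the position actually being read, not at the position of the play head.

// Hypothetical: the transition is looked up at the position being read,
// so a virtual console built over silence at 1:11 can still reach the
// transition sitting at 2:00.  The transition's engine is created lazily
// the 1st time a read actually lands on it.
void read_track(Track *track, double *buffer, int64_t position, int64_t len)
{
    Edit *edit = track->get_edit_at(position);
    read_edit(edit, buffer, position, len);

    if(edit && edit->transition)
    {
        if(!edit->transition->server)
            edit->transition->server = new_transition_server(edit->transition);
        apply_transition(edit->transition, buffer, position, len);
    }
}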

----------------------------------------------------------------------------------------------------------------------

It came to pass that the abc_fall1999 fix

https://growmane.blogspot.com/2024/03/ny-cinelerra_14.html

caused 1 megasample of silence to appear at the beginning of the table of contents for mp3 files.  mpeg3audio_decode_audio long ago got a new bit of code which padded the requested buffer size with 0 if it couldn't get enough data from the demuxer.  In the case of building a TOC, the 1st buffer always ended up empty for some reason while later buffers didn't.  It always filled the 1st buffer with 0 but not the later buffers.

It had a check to escape if the demuxer had below 4kb of data & this was preventing all the later buffers from getting padded, but this check didn't fire for the 1st buffer.  file->seekable has always been used to change behavior when building a TOC.
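
The shape of the fix is roughly this kind of guard, sketched from memory rather than pasted from the actual libmpeg3 source; building_toc stands in for whatever the seekable flag ends up driving.

// Hypothetical sketch of the padding guard.  When building a TOC, skip
// the zero padding instead of writing a megasample of silence into the
// 1st buffer; when playing back, pad only if the demuxer actually has
// enough data left to be worth padding.
if(samples_decoded < samples_requested)
{
    if(!building_toc && demuxer_bytes >= 4096)
    {
        // pad the remainder with 0 so the caller gets a full buffer
        memset(output + samples_decoded, 0,
            (samples_requested - samples_decoded) * sizeof(*output));
    }
    // otherwise return only what was decoded
}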






