Leon Fleisher had extreme bruxism. His teeth looked like cigarettes.



Mont Blanc

https://www.youtube.com/user/Shiunbird/videos

A few videos about 1990s UNIX workstations that are more detailed than the rest.  Interesting that CAD modeling isn't mentioned anywhere.  Besides Pro Engineer being used by all the Dougsters on HP 9000s, CAD modeling just wasn't done by normal animals 30 years ago.  Today, CAD modeling is as basic as word processing.



Hardware rendering has emerged as a big goal for Cinelerra.  The problem is that after video compression was migrated to hardware, everything else felt extremely slow.  Lions experience the most pain when defishing video.  That effect alone is basically the reason for hardware rendering.  The rest of the world has been doing all its rendering in hardware for years.  They might be using a bigger framework like CUDA, but lowly OpenGL would be a big win.
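For reference, defishing on a GPU is just a per-pixel coordinate remap in a fragment shader.  A minimal sketch, assuming an equidistant lens model & made-up uniform names rather than anything from the actual codebase:

static const char *defish_frag =
    "uniform sampler2D tex;\n"
    "uniform float fov;\n"    // fisheye field of view in radians
    "void main()\n"
    "{\n"
    // center the output coordinate, compute the view angle a rectilinear
    // projection would see, then sample the fisheye source at the
    // equidistant radius for that angle
    "    vec2 c = gl_TexCoord[0].st - vec2(0.5, 0.5);\n"
    "    float r = length(c);\n"
    "    float theta = atan(2.0 * r * tan(fov * 0.5));\n"
    "    float r_src = theta / fov;\n"
    "    vec2 src = (r > 0.0001 ? c * (r_src / r) : c) + vec2(0.5, 0.5);\n"
    "    gl_FragColor = texture2D(tex, src);\n"
    "}\n";

Every output pixel costs 1 texture fetch, which is why the GPU wins so big over looping on the CPU.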


Migrating everything from OpenGL to CUDA would be like rewriting it from scratch, & how long is CUDA going to last with no GPUs being made anymore?  It's only practical to have 1 software framework & 1 hardware framework.  Getting rid of software entirely is still not desirable.

Rendering begins in the PackageRenderer class.  It creates a VideoDevice for showing preview frames.  This VideoDevice has a custom VideoOutConfig for software mode X11.  The next step would be creating a 2nd VideoDevice with a custom VideoOutConfig for OpenGL or just copying the VDeviceX11 internals.  
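A sketch of what the 2nd device might look like.  Only the class names above came from the code; the header path, the driver constant, & the open_output signature are guesses:

#include "videodevice.h"    // VideoDevice & VideoOutConfig (assumed header)

VideoDevice* open_gl_device(double frame_rate, int out_w, int out_h)
{
    VideoOutConfig *config = new VideoOutConfig;
    config->driver = PLAYBACK_X11_GL;    // hypothetical OpenGL driver constant

    VideoDevice *device = new VideoDevice;
    // the same call the software mode X11 device gets, pointed at the GL config
    device->open_output(config, frame_rate, out_w, out_h);
    return device;
}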


The OpenGL device seems to just create an ordinary VFrame.  The switch to hardware mode happens in the routines which use the VFrame rather than in VDeviceX11.  It has to use the DISPLAY environment variable since it could be running without a GUI.
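A minimal sketch of the gate, with the function name & error message invented here:

#include <cstdio>
#include <cstdlib>

// Returns 1 if hardware rendering can be attempted.  Without a DISPLAY there
// is no X server to create a GL context on, so rendering has to fail instead
// of silently falling back to software.
int can_use_opengl()
{
    const char *display = std::getenv("DISPLAY");
    if(!display || !*display)
    {
        std::fprintf(stderr, "Rendering requires OpenGL but DISPLAY isn't set.\n");
        return 0;
    }
    return 1;
}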

Another change is moving alpha blending to the shading language, since GL_ONE_MINUS_SRC_ALPHA has proven to be a disaster.  It obviously wasn't intended for content creators.  The alpha blending modes were implemented once in Playback3D::overlay_sync & again in Overlay::handle_opengl.
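A minimal sketch of just the NORMAL mode as a fragment shader, with made-up uniform names standing in for however the real code binds its textures:

static const char *blend_normal_frag =
    "uniform sampler2D src_tex;\n"    // the track being composited
    "uniform sampler2D dst_tex;\n"    // what's already in the framebuffer
    "void main()\n"
    "{\n"
    "    vec4 src = texture2D(src_tex, gl_TexCoord[0].st);\n"
    "    vec4 dst = texture2D(dst_tex, gl_TexCoord[0].st);\n"
    // classic 'over' compositing, done where the math is visible instead of
    // behind the fixed function GL_ONE_MINUS_SRC_ALPHA path
    "    gl_FragColor.rgb = src.rgb * src.a + dst.rgb * (1.0 - src.a);\n"
    "    gl_FragColor.a = src.a + dst.a * (1.0 - src.a);\n"
    "}\n";

Every other blend mode becomes another variation of the same program, so it would only have to be written once instead of twice.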

The trick is that there are subtle differences in quality between software & OpenGL output, & renderfarm mode definitely wouldn't support it.  Rendering would have to fail if it couldn't use OpenGL & renderfarm mode would have to be exclusive of OpenGL mode.


Sadly, renderfarm mode has fallen into disuse & disrepair.  Hardware has gotten fast enough to ignore it.  Even in software mode, it's going to be a lot easier on a modern confuser to have many cores chew up frames than to configure multiple rendering nodes.


The mane things which could still use it are operations which only work on 1 core, like gaussian blur or bayer pattern interpolation.  Those could use a renderfarm of multiple nodes on multiple cores & it would definitely not be using OpenGL.
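A sketch of nodes-on-cores, with render_range standing in for the single threaded effect pass:

#include <algorithm>
#include <cstdio>
#include <sys/wait.h>
#include <unistd.h>

static void render_range(int start, int end)
{
    // stand-in for the real 1 core effect pass
    printf("node %d: frames %d-%d\n", (int)getpid(), start, end - 1);
}

int main()
{
    const int total_frames = 1000;
    const int nodes = (int)sysconf(_SC_NPROCESSORS_ONLN);
    const int chunk = (total_frames + nodes - 1) / nodes;

    for(int i = 0; i < nodes; i++)
    {
        if(fork() == 0)
        {
            // each child is a node owning a contiguous frame range
            render_range(i * chunk, std::min((i + 1) * chunk, total_frames));
            _exit(0);
        }
    }
    while(wait(0) > 0);    // collect every node
    return 0;
}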

A render farm on multiple cores with a common GPU would be super slow.  There's never been any decent allocation of GPU cores among multiple processes.

Lions no longer have access to a nest of multiple confusers of equal speed.  The traditional network of inexpensive confusers has been replaced by a single confuser with many cores.

The process of combining multiple output files by directly copying frames is no longer supported natively.  It would have to be done with ffmpeg.
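A sketch of the ffmpeg version.  The concat demuxer flags are real ffmpeg; the wrapper around them is just illustrative:

#include <cstdlib>
#include <fstream>
#include <string>
#include <vector>

int concat_segments(const std::vector<std::string> &segments,
    const std::string &output)
{
    // the concat demuxer wants a list file with 1 "file 'path'" line per segment
    std::ofstream list("segments.txt");
    for(const std::string &path : segments)
        list << "file '" << path << "'\n";
    list.close();

    // -c copy copies the compressed frames directly with no re-encode
    std::string command = "ffmpeg -f concat -safe 0 -i segments.txt -c copy " +
        output;
    return std::system(command.c_str());
}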

The VideoDevice VideoOutConfig is a disaster which originated with a desire to individually address tiled displays, possibly on different confusers.  There was no way a lion could afford to ever use it, but there was a vision of it happening.  Nowadays, confusers push so many pixels that any size display can be addressed by a single video device.





