The last molar recently escalated in sensitivity. There's no obvious change from 2 years ago, but a crack might have widened. There's no doubt the last molar is the culprit. It's the most sensitive that area has ever been. Lions can't chew on that side anymore & had to stop eating bread.
Dentist just ground a side of the chip flat & said she'll be right. That was a lot of pain. Can't imagine what male breadwinners with 5 crowns go through.
Toothgate has definitely reduced appetite. Intermittent fasting seems to have lost favor in recent years. Lions ate 3 meals/day until grad school, then 2 until 2001, then 3 again for a while when waking up early, then back to 2 shortly after 2001. It's been manely 2 ever since the early 2000s.
More awake time before lunch has required eating a 3rd meal. Less awake time has resulted in a bigger dinner. There's a case for always eating 3 & having a smaller dinner. Lions weighed less when eating 3, so it's plausible eating 2 causes more fat generation. Eating more frequent, smaller meals was the doctrine 40 years ago.
-------------------------------------------------------------------------------------------------------
The lion kingdom's 1st & last significant foray into free AI tools entailed constructing a voiceover with F5-TTS, pasting stonk footage over some of it, then filling the rest with a talking lion. Mouthed the talking lion segments on camera with many retakes, then concatenated all the good retakes on the timeline without synchronizing them. Synchronizing the good retakes with the voiceover came last. The camera's sound track was essential in aligning the lip sync track.
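The alignment step can be automated by cross-correlating the camera sound track with the voiceover. A minimal sketch, assuming both tracks were exported as mono WAV files at the same sample rate (the filenames are hypothetical):

# align_audio.py - estimate the lag between 2 audio tracks
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate_cam, cam = wavfile.read('camera_audio.wav')   # camera sound track
rate_tts, tts = wavfile.read('voiceover.wav')      # F5-TTS voiceover
assert rate_cam == rate_tts, 'resample to a common rate 1st'

# correlate amplitude envelopes so polarity & timbre matter less
cam_env = np.abs(cam.astype(np.float32))
tts_env = np.abs(tts.astype(np.float32))

# full cross-correlation via FFT convolution with 1 input reversed
corr = fftconvolve(cam_env, tts_env[::-1])
lag = np.argmax(corr) - (len(tts_env) - 1)
print('voiceover starts %.3f seconds into the camera track' % (lag / rate_cam))

A positive lag means the voiceover content begins that many seconds into the camera track, which gives the offset to slide the lip sync segment by.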
Then resized the project to 512x512 & rendered all the lip sync segments at 512x512. The lip sync segments needed a few extra frames on the end, since LivePortrait garbles the last 2 frames (a trimming sketch follows the commands below).
Used a 512x512 section of the lion image as the LivePortrait source.
# enter the X-Pose virtual environment
source X-Pose/bin/activate
cd LivePortrait/
# -s: source image, -d: driving video; disable cropping & stitching
# so the output stays aligned with the 512x512 source
python inference_animals.py -s lioness_source.jpg -d lioness_driver1.mp4 --no_flag_do_crop --no_flag_stitching
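Since the renders include a few extra frames, the garbled tail has to be cut off each segment. A minimal sketch using ffprobe & ffmpeg to drop the last 2 frames (the filenames are hypothetical & LivePortrait's actual output naming may differ):

# trim_tail.py - drop the garbled last 2 frames of a render
import subprocess

SRC = 'lipsync_segment.mp4'
DST = 'lipsync_trimmed.mp4'

# count the video frames with ffprobe
frames = int(subprocess.check_output([
    'ffprobe', '-v', 'error', '-count_frames',
    '-select_streams', 'v:0',
    '-show_entries', 'stream=nb_read_frames',
    '-of', 'csv=p=0', SRC]))

# re-encode everything except the last 2 frames, no audio
subprocess.check_call([
    'ffmpeg', '-y', '-i', SRC,
    '-vf', 'trim=end_frame=%d' % (frames - 2),
    '-an', DST])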
Resized the project back to 1920x1080 to insert all the lion segments.
Noted that LivePortrait uses the 1st frame to determine the keypoints, & from then on uses relative optical flow to detect facial expressions & try to keep the face stationary. Over time the optical flow drifts, so you need a motion curve or stabilization to keep the head aligned. LivePortrait might also lose track of the keypoints as it drifts, which limits the maximum segment length. It's another point in favor of synthetic lip syncing.
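One hedged workaround is to measure the drift & counteract it in the editor. A minimal sketch that estimates each frame's translation relative to the 1st frame with OpenCV phase correlation, printing a motion curve that could be inverted to re-align the head (the filename is hypothetical):

# drift_curve.py - measure cumulative translation drift in a render
import cv2
import numpy as np

cap = cv2.VideoCapture('lipsync_trimmed.mp4')
ok, first = cap.read()
ref = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY).astype(np.float32)

frame_num = 1
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cur = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # (dx, dy) shift of this frame relative to the 1st frame
    (dx, dy), _ = cv2.phaseCorrelate(ref, cur)
    print('%d %.2f %.2f' % (frame_num, dx, dy))
    frame_num += 1
cap.release()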
F5-TTS can pitch & time shift by 1st adjusting the speed parameter, then resampling the output. Lions most often do that.
Further testing showed this pitch trick doesn't work at all below 1x speed. Cinelerra's time stretcher does a better job there.
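The trick is the usual speed & resample swap: generate the speech faster than normal, then stretch it back by resampling, which restores the duration & lowers the pitch by the same factor. A minimal sketch, assuming speed > 1 makes F5-TTS output faster & the result was saved as a WAV file (filename & factor are hypothetical):

# pitch_shift.py - lower pitch without changing duration
import soundfile as sf
from scipy.signal import resample

FACTOR = 1.2   # must match the F5-TTS speed parameter

data, rate = sf.read('voiceover_fast.wav')
# stretching the waveform by FACTOR slows it down & lowers the
# pitch; the prior speed=FACTOR pass cancels out the slowdown
stretched = resample(data, int(len(data) * FACTOR))
sf.write('voiceover_shifted.wav', stretched, rate)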
There is another model which synthesizes lip syncing & facial expressions directly from audio.
https://github.com/Tencent-Hunyuan/HunyuanVideo-Avatar
Alas, it states 10GB minimum VRAM.
-----------------------------------------------------------------------------------------------------------------------
That was about all for this episode of AI model experiments. There might be a Keyush stunt dog animation or a mane animation. There isn't much potential with a lion budget. The only way to get good results is to pay.
The only way to design effective models is to get hired by a top tier company & have access to their computing resources. The mane path into that is going to be academia, much as it was 50 years ago. The capital costs are too high for an individual to do it alone.
The modern age is analogous to using millions of PDP-11's in parallel. There's no progress in transistor density, so they just keep making bigger & bigger data centers. The computing power for useful AI may never fit in a single room.
Lions don't see any model designers online. They're all spectators.




