Diffstat (limited to 'localvideowebencode')
-rwxr-xr-x  localvideowebencode  91
1 file changed, 61 insertions, 30 deletions
diff --git a/localvideowebencode b/localvideowebencode
index 624e895..75dfcdb 100755
--- a/localvideowebencode
+++ b/localvideowebencode
@@ -19,36 +19,67 @@
# Recommends: moreutils vainfo vpx-tools vorbis-tools opus-tools
#
# TODO:
-# * Offer to skip rendering again if an output file exist already.
-# * Support --width and --height, resolving the other part from input
-# or forced aspect ratio.
-# * Check and fail if all needed tools are not available.
-# * Test if beneficial to apply real_time=-2.
-# * Normalize each infile separately when xml fed as infile keeps sync.
-# Maybe as workaround re-feed audio separately from xml, as done at
-# <http://bernaerts.dyndns.org/linux/74-ubuntu/214-ubuntu-stabilize-video-melt>.
-# * Resolve flash player to use.
-# * Make choice of encoders configurable.
-# * Figure out how to apply application option when using opusenc.
-# * Handle channels per-codec for low-bitrate joint stereo Opus speech.
-# * Double-check audio bandwidth algorithms:
-# http://trac.ffmpeg.org/wiki/Encode/HighQualityAudio
-# * Change VP8/VP9 bandwidth algorithm:
-# Use https://developers.google.com/media/vp9/settings/vod/
-# * Tune VP8 parameters:
-# http://www.webmproject.org/docs/encoder-parameters/
-# * Tune VP9 parameters:
-# https://sites.google.com/a/webmproject.org/wiki/ffmpeg/vp9-encoding-guide
-# * Support DASH:
-# https://sites.google.com/a/webmproject.org/wiki/adaptive-streaming/instructions-to-playback-adaptive-webm-using-dash
-# * Support watermark
-# * Support live streaming - i.e. from (maybe faked) endless source
-# + Lipsync may require single stream of equally chunked tracks:
-# https://sites.google.com/a/webmproject.org/wiki/adaptive-streaming/instructions-to-do-webm-live-streaming-via-dash
-# * Support realtime streaming - i.e. with less than 200ms latency
-# * Compute (or hardcode) html5 codec types:
-# + Use mp4v2-utils
-# + http://stackoverflow.com/a/16365526
+# * improve workflow
+#   + check up front and fail if any needed tool is unavailable
+#   + offer to skip rendering again if an output file exists already
+#   + maybe support extracting keyframes to aid in picking a cover image (sketch below)
+#     <https://video.stackexchange.com/questions/26066/dump-keyframes-of-a-livestream>
+#     <https://video.stackexchange.com/questions/21970/speed-up-a-video-30x-using-only-key-frames-with-ffmpeg>
+# + offer to produce a full ladder at once
+# <https://developers.google.com/media/vp9/settings/vod>
+# <https://developer.apple.com/documentation/http_live_streaming/hls_authoring_specification_for_apple_devices>
+# * improve configurability
+# + support --width and --height,
+# resolving other dimension from (explicit or source) aspect ratio
+# + make choice of encoders configurable
+# * tune defaults
+# + adapt VP8/VP9 quantizer based on frame size
+# <https://developers.google.com/media/vp9/settings/vod#quality>
+# + maybe pass option real_time=-2 to melt
+# <https://www.reddit.com/r/kdenlive/comments/ka0aak/kdenlive_gpucpu_use_threads_mlt_and_ffmpeg_tips/>
+# + maybe reduce threads to "spend" total available cores only once
+# + apply application option when using opusenc
+# + use joint stereo for low-bitrate Opus speech
+# + maybe refine audio bandwidth algorithms
+# <http://trac.ffmpeg.org/wiki/Encode/HighQualityAudio>
+# + change VP8/VP9 bandwidth algorithm
+# <https://developers.google.com/media/vp9/settings/vod/>
+#   + tune VP8 parameters
+#     <http://www.webmproject.org/docs/encoder-parameters/>
+# + tune VP9 parameters
+# <https://sites.google.com/a/webmproject.org/wiki/ffmpeg/vp9-encoding-guide>
+# + compute (or hardcode) html5 codec types, e.g. using mp4v2-utils
+# <http://stackoverflow.com/a/16365526>
+#   + generalize GOP handling (sketch below), with keyframes every ~8s
+#     by default, which fits 48kHz audio at 25fps (200f) and 30fps (240f),
+#     plus a "low-latency" option of 48f (1.92s at 25fps, 1.6s at 30fps)
+#     <https://stackoverflow.com/questions/30979714/how-to-change-keyframe-interval-in-ffmpeg/41735741#41735741>
+#     <http://anton.lindstrom.io/gop-size-calculator/>
+#     <https://docs.unified-streaming.com/best-practice/content-preparation.html#recommended-fragment-boundaries-are-aligned-across-all-tracks-audio-video-text>
+# * players
+# + disable flash player by default
+# + support HLS player, when HLS is supported
+# <https://github.com/video-dev/hls.js>
+# * normalize each infile separately, keeping sync when xml is fed as infile;
+#   maybe as a workaround re-feed audio separately from the xml, as done at
+#   <http://bernaerts.dyndns.org/linux/74-ubuntu/214-ubuntu-stabilize-video-melt>
+# * support DASH
+# <https://sites.google.com/a/webmproject.org/wiki/adaptive-streaming/instructions-to-playback-adaptive-webm-using-dash>
+# * support watermark
+# * support superstable live streaming, e.g. from a (maybe faked) endless source
+# + lipsync may require single stream of equally chunked tracks
+# <https://sites.google.com/a/webmproject.org/wiki/adaptive-streaming/instructions-to-do-webm-live-streaming-via-dash>
+# * support adaptive streaming
+# + compute ladder from mathematical model
+# as in "Optimal design of encoding profiles for ABR streaming"
+# <http://www.reznik.org/publications.html>
+# * support low-latency streaming a.k.a. "LL", i.e. 3-5s latency
+# + LL-HLS
+# <https://stackoverflow.com/questions/56700705/how-to-enable-lhls-in-ffmpeg-4-1>
+#   + CMAF, using either ffmpeg or (when released) GPAC
+# <https://github.com/FFmpeg/FFmpeg/commit/cc929ce>
+# <https://github.com/gpac/gpac/commit/4529c60>
+# * support realtime streaming a.k.a. "ULL", i.e. less than 0.5s latency
set -e
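
As a minimal sketch for the keyframe-extraction TODO above: dump only intra frames as
numbered stills and pick a cover image from them. The file names are hypothetical, and
-vsync vfr is the older spelling of the newer -fps_mode vfr.

    # keep only keyframes (intra frames), one still per keyframe
    ffmpeg -i input.webm -vf "select='eq(pict_type,I)'" -vsync vfr keyframe_%04d.png

And for the GOP-handling TODO, the arithmetic behind the proposed defaults, again only a
sketch: the frame rate and file names are assumptions, and only the generic -g option is
shown, since exact keyframe-placement flags differ per encoder.

    fps=25              # assumed source frame rate
    gop=$(( fps * 8 ))  # "normal": keyframe every ~8s -> 200f at 25fps, 240f at 30fps
    gop_ll=48           # "low-latency": 1.92s at 25fps, 1.6s at 30fps
    ffmpeg -i input.webm -c:v libvpx-vp9 -g "$gop" -c:a libopus output.webm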