Massive memory footprint from rtmpdump
dinkypumpkin
dinkypumpkin at gmail.com
Thu Feb 13 07:22:09 EST 2014
On 13/02/2014 01:41, Peter S Kirk wrote:
> However, if the last copy fails to resume with the file size error, earlier
> copies invariably fail too. Very strange if all the copies at, say, 100MB
> intervals are broken at a keyframe on some downloads, but not others.
In that case, I definitely have no idea. I can't think of why such a
pattern would hold, unless there is something peculiar to individual CDNs.
> download history as I clear that on reboot and when present. If I can make
> it work 99% of the time it should help others too.
It should have no relation to download history. From my reading of the
code, there are basically three parts to resuming a download:
1. Find the last keyframe in the partially-downloaded file and seek to
the correct position.
2. Instruct the server to stream from the timestamp associated with that
last keyframe.
3. Check the first section of the resumed stream to make sure it's in the
right place and then start appending to the file.
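For reference, step #1 boils down to walking the FLV tag stream forward and
remembering the last complete video keyframe. This is a minimal Python sketch
of that idea, not rtmpdump's actual C code; the field offsets follow the FLV
tag layout (11-byte tag header plus a 4-byte PreviousTagSize trailer), and
anything after an incomplete final tag is treated as junk:

```python
FLV_HEADER_SIZE = 9      # "FLV", version, flags, header length
TAG_HEADER_SIZE = 11     # type(1) + size(3) + ts(3) + ts_ext(1) + stream_id(3)
PREV_TAG_FIELD = 4       # PreviousTagSize trailer after every tag
VIDEO_TAG = 9
KEYFRAME = 1             # high nibble of the first video-data byte

def last_keyframe(data: bytes):
    """Walk FLV tags forward; return (offset, timestamp_ms) of the last
    complete video keyframe, or None. Truncated trailing data is ignored."""
    pos = FLV_HEADER_SIZE + PREV_TAG_FIELD   # skip header + PreviousTagSize0
    best = None
    while pos + TAG_HEADER_SIZE <= len(data):
        tag_type = data[pos]
        size = int.from_bytes(data[pos+1:pos+4], "big")
        ts = int.from_bytes(data[pos+4:pos+7], "big") | (data[pos+7] << 24)
        end = pos + TAG_HEADER_SIZE + size + PREV_TAG_FIELD
        if end > len(data):
            break                            # incomplete final tag: junk at EOF
        if (tag_type == VIDEO_TAG and size > 0
                and (data[pos + TAG_HEADER_SIZE] >> 4) == KEYFRAME):
            best = (pos, ts)
        pos = end
    return best
```

The "standard" rtmpdump instead seeks backward from EOF using the trailing
PreviousTagSize field, which is exactly what blows up when junk was written at
the end of the file; a forward scan like the above is closer to what KSV does.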
The "standard" version of rtmpdump usually croaks at #1, presumably
because incomplete or junk data was written at the end of the file
("Last tag size must be greater/equal zero..."). The "KSV" version does
a better job of finding the last keyframe, but it falls down at #3
because the BBC streams don't conform to the structure it expects. So,
I would say the place to start is to tackle #3 in the KSV version and
see if it can be made to work for BBC streams. If you decide to work on
this, you can clone from my copy, where you'll find a "ksv" branch to
start from:
git clone https://github.com/dinkypumpkin/rtmpdump.git
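To illustrate where it falls down, step #3 is essentially a sanity check on
the first tag the server sends back: it should be a video keyframe at the
timestamp we asked to resume from. A hedged Python sketch of that check
(KSV's real check inspects more than this, and the structure it expects is
what the BBC streams don't conform to):

```python
TAG_HEADER_SIZE = 11   # type(1) + size(3) + ts(3) + ts_ext(1) + stream_id(3)
VIDEO_TAG = 9
KEYFRAME = 1           # high nibble of the first video-data byte

def resume_point_matches(first_tag: bytes, expected_ts_ms: int) -> bool:
    """Return True if the first tag of the resumed stream is a video
    keyframe at the timestamp the server was told to seek to."""
    if len(first_tag) < TAG_HEADER_SIZE + 1 or first_tag[0] != VIDEO_TAG:
        return False
    ts = int.from_bytes(first_tag[4:7], "big") | (first_tag[7] << 24)
    is_keyframe = (first_tag[TAG_HEADER_SIZE] >> 4) == KEYFRAME
    return is_keyframe and ts == expected_ts_ms
```

Making this tolerant of whatever the BBC streams actually send back (e.g.
leading metadata tags or a slightly different resume timestamp) is roughly
the work I'm suggesting above.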