In most cases, when fetch(1) is invoked (or libfetch is called into), it is to download either sources for a port or binaries for a package. In all of those cases, multiple servers hosting the same files are known to the caller. Unfortunately, the caller must use the servers /in sequence/, because fetch does not allow downloading different ranges of the same file from different sources /in parallel/. This is, of course, suboptimal, as some servers (such as ftp.freebsd.org) are hit by much higher loads. A user may also have a (much) wider download bandwidth than the upload bandwidth a particular server has (or would allow a single client to use).

It should be possible to implement parallel downloads (over different protocols -- http, ftp, etc.). The implementation should also be adaptive: if the download from one of the servers finishes faster, the program should automatically proceed to fetch a not-yet-downloaded sub-range of the range being downloaded from another server. Various "p2p" downloading programs (BitTorrent, Gnutella) implement this feature already -- out of necessity. We should do it, because we can.

No, I don't have a patch. Not yet, anyway... Somebody looking for a cool project is most welcome to this idea.
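The adaptive part of the request -- an idle connection "stealing" a not-yet-downloaded sub-range from a busy one -- can be sketched independently of any network code. The following is a minimal Python sketch of just that scheduling decision; the function name and range representation are invented for illustration and do not correspond to anything in fetch(1) or libfetch:

```python
def split_largest(ranges):
    """Given a mutable list of (start, end) half-open byte ranges that
    are still being downloaded, split the largest one in half so an
    idle connection can take over its second half.

    Mutates `ranges` in place (the donor keeps the first half) and
    returns the new (start, end) range for the idle worker, or None
    if no range is worth splitting."""
    if not ranges:
        return None
    # Find the range with the most bytes left to download.
    i = max(range(len(ranges)), key=lambda k: ranges[k][1] - ranges[k][0])
    start, end = ranges[i]
    if end - start < 2:          # nothing worth splitting
        return None
    mid = start + (end - start) // 2
    ranges[i] = (start, mid)     # donor keeps downloading the first half
    return (mid, end)            # idle worker fetches the second half
```

A real implementation would call something like this whenever a connection finishes its assigned range, then issue an HTTP Range (or FTP REST) request for the returned sub-range on the idle connection.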
Responsible Changed From-To: freebsd-bugs->des Assign to fetch maintainer
Unfortunately, this is not possible within the framework of libfetch as it stands today. Libfetch was designed and implemented with very specific requirements, namely that the API should provide a drop-in replacement for fopen(3) (fetchGetURL(3), which pkg_add(1) uses) and that it should cache server connections between invocations. As a consequence, libfetch must store a lot of state in global variables, and is not parallelizable.

DES -- Dag-Erling Smørgrav - des@des.no
On Wednesday, 02 January 2008 01:28, Dag-Erling Smørgrav wrote: > Unfortunately, this is not possible within the framework of libfetch as > it stands today. Libfetch was designed and implemented with very > specific requirements, namely that the API should provide a drop-in > replacement for fopen(3) (fetchGetURL(3), which pkg_add(1) uses) and > that it should cache server connections between invocations. As a > consequence, libfetch must store a lot of state in global variables, and > is not parallelizable. Well, you wrote it -- can you try to reduce the number of places where the global variables are used (implicitly), in favor of an explicitly passed context pointer or some such? Not necessarily to the full extent of implementing the wish, but simply to make some other would-be implementer's job easier... Or should we take libcurl as the basis instead? I don't know their API, but the license claims to be MIT/X-derivative: http://64.233.169.104/search?q=cache:GGdoH55jw08J:curl.haxx.se/docs/copyright.html so it may be Ok to include it into our tree some day... -mi
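The refactoring being asked for here -- implicit globals replaced by an explicitly passed context -- is easy to illustrate in miniature. This is a hedged Python sketch of the pattern only; the class name and `connect` callback are invented and are not part of libfetch, whose connection cache actually lives in C globals:

```python
class FetchContext:
    """Per-caller state that in libfetch currently lives in globals.

    Because each context owns its own connection cache, two contexts
    can be used from two threads (or for two ranges of one file)
    without interfering with each other -- which is exactly what the
    global-variable design prevents."""

    def __init__(self):
        # (scheme, host, port) -> cached connection object
        self.cached_connections = {}

    def get_connection(self, scheme, host, port, connect):
        """Return a cached connection for (scheme, host, port),
        calling connect(scheme, host, port) only on a cache miss."""
        key = (scheme, host, port)
        if key not in self.cached_connections:
            self.cached_connections[key] = connect(scheme, host, port)
        return self.cached_connections[key]
```

In C the equivalent change would be threading a `struct fetch_ctx *` argument through the internal functions instead of touching file-scope statics; the connection-caching behaviour DES describes is preserved, it just becomes per-context rather than per-process.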
State Changed From-To: open->suspended It does not sound as though this is being actively worked on.
batch change: For bugs that match the following - Status Is In progress AND - Untouched since 2018-01-01. AND - Affects Base System OR Documentation DO: Reset to open status. Note: I did a quick pass but if you are getting this email it might be worthwhile to double check to see if this bug ought to be closed.