resolved reqq question

reqq question
=============

reqq: An integer, the number of outstanding request messages this client
supports without dropping any. The default in libtorrent is 250.

    "handshake message" @ "Extension Protocol" @ http://www.bittorrent.org/beps/bep_0010.html

TODO: maybe by requesting all pieces at once we are exceeding this limit?
Maybe we should request as we receive pieces?

answer
======

Almost every single peer I encountered (over a brief 10 minutes, which I
think is enough) had 255 as its reqq value, and the number of metadata
pieces we requested very rarely exceeded 20. I think it is fair to assume
that exceeding that limit will never be an issue, and requesting the next
piece only as we receive the previous one might increase latency
unnecessarily.
This commit is contained in:
parent 4b9b354171
commit 85fb2f5ea9
@@ -150,15 +150,6 @@ func (l *Leech) doExHandshake() error {
 }
 
 func (l *Leech) requestAllPieces() error {
-	// reqq
-	// An integer, the number of outstanding request messages this client supports without
-	// dropping any. The default in in libtorrent is 250.
-	//
-	// "handshake message" @ "Extension Protocol" @ http://www.bittorrent.org/beps/bep_0010.html
-	//
-	// TODO: maybe by requesting all pieces at once we are exceeding this limit? maybe we should
-	// request as we receive pieces?
-
 	// Request all the pieces of metadata
 	nPieces := int(math.Ceil(float64(l.metadataSize) / math.Pow(2, 14)))
 	for piece := 0; piece < nPieces; piece++ {
@@ -318,7 +309,7 @@ func (l *Leech) Do(deadline time.Time) {
 		l.OnError(errors.Wrap(err, "doExHandshake"))
 		return
 	}
 
 	err = l.requestAllPieces()
 	if err != nil {
 		l.OnError(errors.Wrap(err, "requestAllPieces"))
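The reasoning behind the resolution can be checked with a quick sketch: BEP 9 fixes the metadata piece size at 16 KiB (2^14 bytes), so the piece count from `requestAllPieces` stays far below a reqq of 255 for any realistic info-dictionary size. The standalone `nPieces` helper below mirrors the diff's formula; the sample sizes and the `typicalReqq` constant are illustrative assumptions, not values from the source.

```go
package main

import (
	"fmt"
	"math"
)

// metadataPieceSize is the fixed metadata piece size from BEP 9: 2^14 bytes.
const metadataPieceSize = 1 << 14

// nPieces mirrors the calculation in requestAllPieces: the metadata is
// split into 16 KiB pieces, the last of which may be shorter.
func nPieces(metadataSize int) int {
	return int(math.Ceil(float64(metadataSize) / metadataPieceSize))
}

func main() {
	// 255 is the reqq value most observed peers advertised. The metadata
	// sizes below are made-up examples; even an unusually large 1 MiB
	// info dictionary yields only 64 outstanding requests.
	const typicalReqq = 255
	for _, size := range []int{50 * 1024, 320 * 1024, 1 << 20} {
		n := nPieces(size)
		fmt.Printf("metadata %7d B -> %2d pieces (within reqq %d: %v)\n",
			size, n, typicalReqq, n <= typicalReqq)
	}
}
```

This supports the commit's conclusion: requesting every metadata piece up front cannot plausibly exhaust a peer's request queue, so serializing the requests would only add round-trip latency.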