
lb-rs: size-based request timeouts#4108

Closed
tvanderstad wants to merge 2 commits into master from lb-request-timeouts

Conversation

@tvanderstad (Contributor)

Fixes #3463 (or at least tries to)

  • Adds a generous minimum timeout for all network requests (15s)
  • Adds a generous size-based timeout for document push/pull (10KB/s)
  • Populates missing fields on File so that lockbook debug info reports file size
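The timeout scheme described in the bullets above can be sketched as follows. The constant and function names here are assumptions for illustration, not the PR's actual identifiers; only the 15s minimum and 10KB/s rate come from the description:

```rust
use std::time::Duration;

// Assumed names; values taken from the PR description.
const MIN_REQUEST_TIMEOUT: Duration = Duration::from_secs(15); // generous floor for all requests
const TIMEOUT_BYTES_PER_SEC: u64 = 10 * 1024; // 10KB/s budget for document push/pull

/// Compute a per-request timeout that grows with payload size but never
/// drops below the minimum; passing a zero size is safe.
fn timeout_for_size(size_bytes: u64) -> Duration {
    let size_based = Duration::from_secs(size_bytes / TIMEOUT_BYTES_PER_SEC);
    MIN_REQUEST_TIMEOUT.max(size_based)
}

fn main() {
    // a zero-size request (e.g. metadata only) falls back to the minimum
    assert_eq!(timeout_for_size(0), Duration::from_secs(15));
    // a 1MiB document at 10KiB/s gets 102s
    assert_eq!(timeout_for_size(1024 * 1024), Duration::from_secs(102));
}
```

With a 15s floor and a 10KB/s rate, the size-based term only dominates for payloads larger than about 150KB.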

@tvanderstad tvanderstad changed the title lb: size-based request timeouts lb-rs: size-based request timeouts Jan 15, 2026
use crate::model::errors::LbErr;
use crate::model::pubkey;

// intended for small requests like metadata transfers and small file uploads
Member

Suggested change:
- // intended for small requests like metadata transfers and small file uploads
+ /// intended for small requests like metadata transfers and small file uploads

and to the one below

/// size. Timeout will never be less than the default/minimum timeout; it is
/// safe to pass a zero size.
pub async fn request_with_size<T: Request>(
&self, account: &Account, request: T, size: usize,
Member

any reason we're not just using the serialized size?

Member

never mind, I see why below

Contributor Author

this approach also works for pulling files

Comment on lines 47 to +50
pub last_modified_by: Username,
pub owner: Username,
pub shares: Vec<Share>,
pub size_bytes: u64,
Member

good additions

Member

I think the size_bytes calculation needs to match the one on the server, though; it needs to take the metadata fee into account as well. There's a tiny function somewhere that should be trivial to move around.
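What the reviewer suggests might look roughly like this. The fee constant, its value, and the helper name are all hypothetical; the real calculation lives somewhere in the server code:

```rust
// Hypothetical constant: a flat per-file overhead the server adds on top of
// document content when accounting for size (the "metadata fee" mentioned above).
const METADATA_FEE_BYTES: u64 = 512; // assumed value, for illustration only

/// Hypothetical shared helper: billed size = document bytes + flat metadata fee,
/// so the client-reported size_bytes matches the server's accounting.
fn billed_size(doc_size_bytes: u64) -> u64 {
    doc_size_bytes + METADATA_FEE_BYTES
}

fn main() {
    // even an empty document is billed for its metadata
    assert_eq!(billed_size(0), METADATA_FEE_BYTES);
    assert_eq!(billed_size(1000), 1000 + METADATA_FEE_BYTES);
}
```

Moving one helper like this into shared code keeps the client and server calculations from drifting apart.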

.value
.doc_size()
.unwrap_or_default();
docs_to_pull.push((id, remote_hmac, size));
Member

I see why it's not based on the serialized size now

Contributor Author

nice

@Parth (Member)

Parth commented Jan 15, 2026

I think this is a good move. I would be down to let you just dogfood this and tell us if it actually fixes it before merging, though.

@Parth (Member)

Parth commented Jan 18, 2026

I'm curious to hear how this has been working out for you. I don't experience this as much as you do, but I just hit it recently while using my iPad: I toggled on airplane mode, which I imagine would kill in-flight requests, but the app never recovered even after I turned airplane mode back off.

@Parth (Member)

Parth commented Feb 11, 2026

Our investigation, detailed in the issue above, determined that this is likely not a network-timeout-related issue.

@Parth Parth closed this Feb 11, 2026

Labels

None yet

Development

Successfully merging this pull request may close these issues.

macOS stuck unable to sync with no error message (fixed by restarting)

2 participants