openFATE - openSUSE feature tracking > #306379

Use rsync when refreshing repositories

Feature state

openSUSE-11.2: Rejected
openSUSE-11.3: Rejected

Description

It would be a great idea to use rsync when refreshing repositories. One of the bad things about zypper is the refresh speed, and it gets worse when people have many repositories. I'm not sure, but it probably already compares whether something has changed in the repo; still, to speed things up it would be great to use rsync. Big repositories like Packman, for example, are downloaded again every time the default 10 minutes (in the zypp settings) have passed, even though nothing much changes there.
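
(For reference, the "default 10 minutes" corresponds to the repo.refresh.delay setting in /etc/zypp/zypp.conf; the snippet below is a sketch from memory, so check your own zypp.conf for the exact wording.)

    ## /etc/zypp/zypp.conf
    ## Amount of time (in minutes) that must pass before a repository
    ## is considered out of date and refreshed again.
    ## Default value: 10
    repo.refresh.delay = 10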

Discussion


R. M. wrote: (5 years ago)

To the best of my knowledge, the best way to incrementally download only the diff of a binary file is the GDIFF protocol, which was submitted to the W3C ten years ago:
http://www.w3.org/TR/NOTE-gdiff-19970901
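
A minimal Python sketch of the patch-apply side, following the opcode table and magic bytes in that note (big-endian fields; error handling omitted) — an illustration, not a reference implementation:

    import struct

    def gdiff_apply(source: bytes, delta: bytes) -> bytes:
        """Apply a GDIFF delta (W3C NOTE-gdiff-19970901) to source."""
        if delta[:5] != b"\xd1\xff\xd1\xff\x04":
            raise ValueError("not a GDIFF version-4 delta")
        out, pos = bytearray(), 5

        def take(fmt):
            nonlocal pos
            (val,) = struct.unpack_from(fmt, delta, pos)
            pos += struct.calcsize(fmt)
            return val

        while True:
            op = delta[pos]; pos += 1
            if op == 0:                        # EOF: output is complete
                return bytes(out)
            if 1 <= op <= 246:                 # append <op> literal bytes
                out += delta[pos:pos + op]; pos += op
            elif op == 247:                    # DATA, ushort length
                n = take(">H"); out += delta[pos:pos + n]; pos += n
            elif op == 248:                    # DATA, int length
                n = take(">i"); out += delta[pos:pos + n]; pos += n
            else:                              # COPY <offset, length> from source
                ofs_fmt, len_fmt = {249: (">H", ">B"), 250: (">H", ">H"),
                                    251: (">H", ">i"), 252: (">i", ">B"),
                                    253: (">i", ">H"), 254: (">i", ">i"),
                                    255: (">q", ">i")}[op]
                ofs = take(ofs_fmt); n = take(len_fmt)
                out += source[ofs:ofs + n]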

I know for sure that a commercial configuration management product (Marimba, now bought by BMC - see
http://www.marimba.com ) uses it, implemented in Java: it is very useful on low-bandwidth networks, for example when downloading a service pack. I don't know whether that Java implementation of the algorithm could be reused, though, since it is a commercial application.

There are other implementations in Perl and Ruby:
http://search.cpan.org/~geoffr/Algorithm-GDiffDelta-0.01/GDiffDelta.pm
http://webscripts.softpedia.com/script/Development-Scripts-js/gdiff-gpatch-18695.html

An open-source .NET (C#) implementation, under the MPL license:
http://gdiff.codeplex.com/

I cannot understand why that algorithm is not widely used, given its quality; it would be useful to have it available when downloading large files like ISOs or VM images, or repository metadata.

R. M. wrote: (5 years ago)

In your use case, the repository could provide a GDIFF file of the metadata changes: the delta between two known "versions" of it over time.

P. J. wrote: (5 years ago)

Hmm, I guess Packman wouldn't implement that just for me ;) Still, it would speed things up, as it is widely known that refreshing repositories in openSUSE is slow.

L. d. wrote: (5 years ago)

Why not rsync? Because it does not work over HTTP(S). This matters because in many companies the only way to get data from the internet is via an HTTP proxy.

The GDIFF approach sounds promising.

R. M. wrote: (5 years ago)

For a "GDIFF on HTTP" implementation, see http://www.w3.org/TR/NOTE-drp-19970825

J. E. wrote: (5 years ago)

Making use of rsync would bring zypper checksumming and automatic download resuming/repairing at no cost ;-)
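
As a sketch of what a refresher that shells out to rsync could look like (the mirror URL and cache path are made up; the rsync options themselves are standard):

    import subprocess

    # Hypothetical mirror module and local metadata cache, for illustration.
    MIRROR = "rsync://mirror.example.org/packman/repodata/"
    CACHE = "/var/cache/zypp/raw/packman/repodata/"

    # --checksum verifies content instead of trusting timestamps,
    # --partial keeps interrupted transfers around for resuming, and
    # -z compresses on the wire: the "free" extras mentioned above.
    subprocess.run(
        ["rsync", "-rt", "-z", "--checksum", "--partial", MIRROR, CACHE],
        check=True,
    )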

R. M. wrote: (4 years ago)

"delouw" says that rsync does not support HTTP. This is a real blocking problem.

J. E. wrote: (4 years ago)

Even if the rsync *program* could speak HTTP, it would not help you much, because HTTP does not implement the rolling checksum and all the other fluffy things rsync does.
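
To illustrate, a small Python sketch of rsync's weak rolling checksum (simplified from the rsync technical report):

    M = 1 << 16

    def weak_checksum(block: bytes):
        # a = plain byte sum, b = position-weighted byte sum, mod 2^16;
        # rsync combines the pair as a + (b << 16).
        a = sum(block) % M
        b = sum((len(block) - i) * x for i, x in enumerate(block)) % M
        return a, b

    def roll(a, b, old, new, blocklen):
        # Slide the window one byte: drop `old`, append `new`, in O(1).
        a = (a - old + new) % M
        b = (b - blocklen * old + a) % M
        return a, b

The receiver checksums its blocks once; the sender slides this window over its own copy byte by byte, confirming weak matches with a strong hash. Plain HTTP offers nothing equivalent.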

Also, it seems obvious to me that use of rsync would be an optional extra feature that you can choose to ignore when refreshing your repositories.

R. D. wrote: (4 years ago)

There would be a cost, but it would be borne by mirrors and their admins. Currently mirrors can offer HTTP and traditional FTP; MirrorBrain distributes downloads transparently and re-uses general caching infrastructure. Gentoo has used a separate infrastructure for rsync-ed Portage data, rather than the usual high-bandwidth/high-storage traditional mirrors, because rsync support was niche. It would be rare to enable rsync checksums for daemon access on a public server, because of the high CPU load of that feature and the potential for DoS attacks.

Implementing it optionally risks new ways for refresh to be slow, e.g. rsync protocol requests being tried and then silently dropped by uncooperative firewalls.

If checksums & deltas are desirable additions to the repo format, then a more general solution that works over HTTP would be better: it would benefit more users and automatically re-use local proxy caches.

R. D. wrote: (4 years ago)

Why not make the transferred refresh file delta-based by definition? First time through, you have "current vs. empty file": the repo can publish a delta against the empty file, plus monthly and weekly snapshots with deltas for the changes made against those. A refresh can then check for updates from the last week if it has a current weekly file (downloading only if one exists), fall back to the monthly delta if the weekly one is out of date, and fall back to the delta against the empty file if the local monthly and weekly files are both out of date. Some sanity check based on the server's idea of the date can prevent clients from getting things too horribly wrong.

Then most of the time it is a small file that can easily be cached for a short time; the monthly delta can be cached for longer with a predictable TTL, and the current-vs-empty delta could be cached for a day, say.
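
A client-side sketch of that fallback order (the delta file names are hypothetical; a real repo layout would have to define them):

    import urllib.error
    import urllib.request

    # Freshest/smallest delta first; "delta-full" is the delta against the
    # empty file, i.e. effectively a full download.
    CANDIDATES = ["delta-weekly.gdiff", "delta-monthly.gdiff", "delta-full.gdiff"]

    def fetch_freshest_delta(base_url: str) -> bytes:
        for name in CANDIDATES:
            try:
                with urllib.request.urlopen(base_url + name) as resp:
                    return resp.read()
            except urllib.error.HTTPError as err:
                if err.code != 404:       # only fall through on "not there"
                    raise
        raise RuntimeError("no delta file published at " + base_url)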

I'm not sure about the repo format, but if delta handling involved compression, wouldn't a simple text format for the transferred repo contents be natural and efficient? Any binary file should probably be a cache generated locally.

Though this sounds much more complicated, presumably there are tools for generating the repo contents file and for processing the downloaded repo file, so it ought not to be so difficult in principle.

Last change: 4 years ago
Voting
Score: 9
  • Negative: 2
  • Neutral: 1
  • Positive: 11