Subject: Re: Checking out src with Mercurial

At Fri, 19 Jun 2020 21:51:35 +0200, Johnny Billquist <bqt%update.uu.se@localhost> wrote:
>
> On 2020-06-19 20:19, matthew sporleder wrote:
> > git clone with --depth 1, over http (instead of ssh), and with a few
> > simple settings changes will make it work inside of 128M.
>
> Well, the whole point of virtual memory and demand paging is that you
> don't have to have enough physical memory. I would hope that still
> applies...

Well, one should still prefer to keep the working set of pages well
within the capacity of physical memory, but, yes, VM should allow
simpler, but reasonable, programming models to work with larger data
sets in smaller amounts of physical memory.

Secondary storage vs. main storage is like molasses (even on a hot
day) vs. jet streams.

> My comment about having 128M (which, by the way, can be considered a
> lot when we talk about VAXen) was just about the potential speed I
> could possibly expect. If git really requires that people have at
> least 128M of physical memory to work, then I'd first ask when NetBSD
> broke so badly that the amount of physical memory becomes a
> limitation in this way,

(a) History -- there's a _lot_ of it!

(b) Size -- since NetBSD supports so many platforms, and is such a
usefully complete system, the sheer number and size of its source
files eventually add up to quite a sizeable collection!

> and second, why would a tool like this require that much memory in
> the first place?

(c) Modern change-tracking tools try to track changes to whole sets of
files at once, so when you have lots of files and lots of history,
this combinatorial problem can sometimes bite at a bad time for the
user of the tool trying to manage it all.

--
					Greg A. Woods <gwoods%acm.org@localhost>

Kelowna, BC     +1 250 762-7675           RoboHack <woods%robohack.ca@localhost>
Planix, Inc. <woods%planix.com@localhost>     Avoncote Farms <woods%avoncote.ca@localhost>
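Concretely, the low-memory shallow clone matthew mentions would look
something like the following; the mirror URL and the particular
memory-limiting settings here are illustrative guesses, not a tested
recipe for a 128M machine:

    # keep git's pack machinery single-threaded and bound the memory
    # it uses for delta windows and mmap'd pack data, so peak usage
    # stays small during fetch and checkout
    git config --global pack.threads 1
    git config --global pack.windowMemory 32m
    git config --global core.packedGitWindowSize 16m
    git config --global core.packedGitLimit 64m

    # fetch only the newest revision, with no history, over HTTP
    git clone --depth 1 https://github.com/NetBSD/src.git

The history-less clone sidesteps downloading and delta-resolving the
whole combinatorial history described in (c) above.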