Everything old is new again: memory optimization

(nibblestew.blogspot.com)

94 points | by ibobev 3 days ago

16 comments

  • muskstinks 2 hours ago
    I'm always confused as hell by how little insight we have into memory consumption.

    I look at memory profiles of normal apps and often think "what is burning all that memory?"

    Modern compression works so well, so what's happening? Open your task manager and look through the apps and you might ask yourself the same.

    For example (let's ignore Chrome, MS Teams and all the other bloat): Sublime consumes 200 MB. I have 4 text files open. What is it doing?

    Chrome alone took YEARS to implement tab suspend, despite everyone being aware of the issue. And addons existed which were able to do this.

    I bought more RAM just for Chrome...

    • pjc50 50 minutes ago
      https://learn.microsoft.com/en-us/sysinternals/downloads/vmm... for an empty Sublime Text window gives me:

      - 100MB 'image' (i.e. executable code: the executable itself plus all the OS libraries loaded)

      - 40MB heap

      - 50MB "mapped file", mostly fonts opened with mmap() or the windows equivalent

      - 45MB stack (each thread gets 2MB)

      - 40MB "shareable" (no idea)

      - 5MB "unusable" (appears to be address space that's not usable because of fragmentation, not actual RAM)

      Generally if something's using a lot of RAM, the answer will be bitmaps of various sorts: draw buffers, decompressed textures, fonts, other graphical assets, and so on. In this case it's just allocated but not yet used heap+stacks, plus 100MB for the code.
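
      (For scale, a single uncompressed 4K RGBA draw buffer is already ~32 MB; a quick back-of-the-envelope check:)

          # one 3840x2160 framebuffer at 4 bytes (RGBA) per pixel
          width, height, bytes_per_pixel = 3840, 2160, 4
          print(width * height * bytes_per_pixel / 2**20)  # -> ~31.6 MiB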

      Edit: I may be underestimating the role of binary code size. Visual Studio "devenv.exe" is sitting at 2GB of 'image'. Zoom is 500MB. VSCode is 300MB. Much of that is app-specific, not just Windows DLLs.

      • muskstinks 16 minutes ago
        Thanks for the breakdown. I'll play around with it later on my Windows machine.

        But isn't it crazy how we throw away so much memory just because of random buffers? It feels wrong to me.

      • Capricorn2481 27 minutes ago
        But I have Sublime Text open with a hundred files and it's using 12 MB.
    • gwbas1c 1 hour ago
      Basically, the short answer is that most memory managers allocate more memory than a process needs, and then reuse it.

      I.e., in a JVM (Java) or .NET (C#) process, the garbage collector allocates some memory from the operating system and keeps reusing it as it finds free memory and the program needs it.

      These systems are built with the assumption that RAM is cheap and CPU cycles aren't, so they are highly optimized CPU-wise, but otherwise are RAM inefficient.

    • senfiaj 1 hour ago
      It's partly because there are layers of abstractions (frameworks, libraries / runtimes / VM, etc). Also, today's software often has other pressures, like development time, maintainability, security, robustness, accessibility, portability (OS / CPU architecture), etc. It's partly because the complexity / demand has increased.

      https://waspdev.com/articles/2025-11-04/some-software-bloat-...

    • veunes 1 hour ago
      Part of the problem is that modern apps aren't really "one thing" anymore
    • Orygin 1 hour ago
      200 MB for Sublime does not seem so bad when compared to Postman using 4 GB on my machine...
    • Capricorn2481 30 minutes ago
      > Sublime consumes 200 MB. I have 4 text files open. What is it doing?

      Huh? Sublime Text? I have like 100 files open and it uses 12 MB. Sublime is extremely lean.

      Do you have plugins installed?

      • muskstinks 19 minutes ago
        I do not have plugins installed, and I have only a handful of files open on macOS.

        Memory statistics say 200 MB, with a past peak of 750 MB (for whatever reason).

  • canpan 3 hours ago
    String views were a solid addition to C++. Still underutilized. It doesn't matter which language you're using: thousands of tiny memory allocations during parsing hurt everywhere. https://en.cppreference.com/w/cpp/string/basic_string_view.h...
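
    (The Python snippets further down the thread use memoryview for the same idea; a minimal sketch of the view-vs-copy distinction, for illustration only:)

        data = b"the quick brown fox"

        copy = data[4:9]              # bytes slice: allocates a new bytes object
        view = memoryview(data)[4:9]  # view: references the original buffer, no copy

        assert copy == b"quick" and view == b"quick"
        assert view.obj is data       # the view keeps `data` alive instead of copying it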
    • VorpalWay 2 hours ago
      The issue with retrofitting things onto an existing, well-established language is that those new features will likely be underutilized, especially in other existing parts of the standard library, since changing those would break backwards compatibility. std::optional is another example of this: it's barely used in the C++ standard library itself, but would be much more useful if used across the board.

      Contrast this with Rust, which had the benefit of being developed several decades later. There, Option and str (string views) were in the standard library from the beginning, and every library and application uses them as fundamental vocabulary types. This is combined with good support for chaining and working with these types (e.g. Option has map() to transform the content if present and pass None along otherwise).

      Retrofitting is hard, and I have no doubt there will be new ideas that can't really be retrofitted well into Rust in another decade or two as well. Hopefully at that point something new will come along that learned from the mistakes of the past.

      • menaerus 51 minutes ago
        Retrofitted patterns or ideas stay underutilized only when they're not worth the change. string_view is a trivial example: anyone who cared enough about the extra allocations (no copy-elision taking place) had already rolled their own version of string_view or simply used the char+len pattern. Those folks don't wait for a new standard to come along when they can already have the solution now.

        std::optional, OTOH, is also a bad example, because it is heavily opinionated; baking it into the API across the standard library would be a really wrong choice.

    • pjc50 2 hours ago
      C# gained similar benefits with Span<>/ReadOnlySpan<>. Essential for any kind of fast parser.
    • groundzeros2015 1 hour ago
      In C you have char*
      • rcxdude 50 minutes ago
        Which isn't very good for substrings due to the null-termination requirement.
      • kccqzy 1 hour ago
        And the type system does not tell you if you need to call free on this char* when you’re done with it.
      • pjc50 1 hour ago
        In C you only have char*.
  • tombert 19 minutes ago
    I've been rewriting a lot of my stuff in Rust to save memory.

    Rust is high-level enough to still be fun for me (tokio gives me most of the concurrency goodies I like), but the memory usage is often like 1/10th or less compared to what I would write in Clojure.

    Even though I love me some lisp, pretty much all my Clojure utilities are in Rust land now.

  • fix4fun 3 hours ago
    Digression: nowadays, when RAM is expensive, good old zram is gaining popularity ;) Check it on trends.google.com: since 2025-09, searches for it have doubled ;)
  • gwbas1c 59 minutes ago
    A lot of frameworks that use variants of "mark and sweep" garbage collection instead of automatic reference counting are built with the assumption that RAM is cheap and CPU cycles aren't, so they are highly optimized CPU-wise, but otherwise are RAM inefficient.

    I wonder if frameworks like .NET or the JVM will introduce reference counting as a way to lower the RAM footprint?

    • pjc50 47 minutes ago
      Reference counting in multithreaded systems is much more expensive than it sounds because of the synchronization overhead. I don't see it coming back. I don't think it saves massive amounts of memory, either, especially given my observation with vmmap upthread that in many cases the code itself is a dominant part of the (virtual) memory usage.
      • zozbot234 31 minutes ago
        If you use an ownership/lifetime system under the hood you only pay that synchronization overhead when ownership truly changes, i.e. when a reference is added or removed that might actually impact the object's lifecycle. That's a rare case with most uses of reference counting; most of the time you're creating a "sub"-reference and its lifetime is strictly bounded by some existing owning reference.
    • vaylian 53 minutes ago
      Unlikely. Maybe I'm overly optimistic, but I think it's fairly likely that the RAM situation will have sorted itself out in a few years. Adding reference counting to the JVM and .NET would also take considerable time.

      It makes more sense for application developers to think about the unnecessary complexity that they add to software.

    • xyzzy_plugh 52 minutes ago
      That's not strictly true. Mark and sweep is tunable in ways ARC is not. You can increase frequency, reducing memory at the cost of increased compute, for example.
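
      For instance, CPython's cycle collector exposes exactly that kind of frequency knob (a sketch; the thresholds are allocation counts that trigger a collection):

          import gc

          print(gc.get_threshold())    # defaults: (700, 10, 10)
          # Lower thresholds -> collections run more often -> cyclic garbage
          # is reclaimed sooner: lower peak memory, more CPU spent collecting.
          gc.set_threshold(100, 5, 5)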
  • tzot 3 hours ago
    Well, we can use memoryview for the dict generation, avoiding the creation of string objects until it's time to produce the output:

        import re, operator

        def count_words(filename):
            # Read the file once; the memoryview lets every "word" below be a
            # zero-copy slice into this single buffer.
            with open(filename, 'rb') as fp:
                data = memoryview(fp.read())
            word_counts = {}
            for match in re.finditer(br'\S+', data):
                # Slicing a memoryview yields another view, not a new bytes object.
                word = data[match.start():match.end()]
                try:
                    word_counts[word] += 1
                except KeyError:
                    word_counts[word] = 1
            word_counts = sorted(word_counts.items(), key=operator.itemgetter(1), reverse=True)
            for word, count in word_counts:
                # Real str objects are only materialized at output time.
                print(word.tobytes().decode(), count)
    
    We could also use `mmap.mmap`.
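
    A sketch of that variant (assuming a non-empty file, which mmap requires; untested):

        import mmap, operator, re

        def count_words_mmap(filename):
            with open(filename, 'rb') as fp:
                # Map the file instead of read()ing it: the kernel pages data in
                # on demand and may evict it again under memory pressure.
                mm = mmap.mmap(fp.fileno(), 0, access=mmap.ACCESS_READ)
            data = memoryview(mm)
            word_counts = {}
            for match in re.finditer(br'\S+', data):
                word = data[match.start():match.end()]
                word_counts[word] = word_counts.get(word, 0) + 1
            for word, count in sorted(word_counts.items(),
                                      key=operator.itemgetter(1), reverse=True):
                print(word.tobytes().decode(), count)
            # The views in word_counts still reference the mapping; it is
            # unmapped once they are garbage collected.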
    • akx 1 hour ago
      This doesn't do the same thing, though, since it's not Unicode-aware:

          >>> 'x\u2009   a'.split()
          ['x', 'a']
          # incorrect; in bytes mode, `\S` doesn't know about unicode whitespace
          >>> list(re.finditer(br'\S+', 'x\u2009   a'.encode()))
          [<re.Match object; span=(0, 4), match=b'x\xe2\x80\x89'>, <re.Match object; span=(7, 8), match=b'a'>]
          # correct, in unicode mode
          >>> list(re.finditer(r'\S+', 'x\u2009   a'))
          [<re.Match object; span=(0, 1), match='x'>, <re.Match object; span=(5, 6), match='a'>]
      • contravariant 1 hour ago
        There's bound to be a way to turn a stream of bytes into a stream of Unicode code points (at least I think that's what Python is doing for strings). Though I'm explicitly not volunteering to write the code for it.
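
        A minimal sketch with codecs.iterdecode, which at least handles multi-byte characters split across chunk boundaries (carrying words across chunk boundaries is the fiddly part; untested):

            import codecs

            def iter_words(fp, encoding='utf-8'):
                # Incrementally decode a binary file and yield words with
                # Unicode-aware str.split() semantics, one chunk at a time.
                chunks = iter(lambda: fp.read(1 << 16), b'')
                tail = ''
                for text in codecs.iterdecode(chunks, encoding):
                    combined = tail + text
                    words = combined.split()
                    if combined and not combined[-1].isspace():
                        tail = words.pop()  # word may continue in the next chunk
                    else:
                        tail = ''
                    yield from words
                if tail:
                    yield tail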
    • contravariant 1 hour ago
      For reasons I never quite understood, Python has collections.Counter for exactly this purpose of counting things. It's a bit cleaner.
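
      E.g. the try/except in the snippet upthread collapses to something like this (same memoryview-keys idea; a sketch):

          import re, sys
          from collections import Counter

          with open(sys.argv[1], 'rb') as fp:
              data = memoryview(fp.read())
          word_counts = Counter(data[m.start():m.end()]
                                for m in re.finditer(br'\S+', data))
          for word, count in word_counts.most_common():  # sorted, descending
              print(word.tobytes().decode(), count)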
  • griffindor 4 hours ago
    Nice!

    > Peak memory consumption is 1.3 MB. At this point you might want to stop reading and make a guess on how much memory a native code version of the same functionality would use.

    I wish I knew the input size when attempting to estimate, but I suppose part of the challenge is also estimating the runtime's startup memory usage.

    > Compute the result into a hash table whose keys are string views, not strings

    If the file is mmap'd, and the string view points into that, presumably decent performance depends on the page cache having those strings in RAM. Is that included in the memory usage figures?

    Nonetheless, it's a nice optimization that the kernel chooses which hash table keys to keep hot.

    The other perspective on this is that we sought out languages like Python/Ruby because development cost was high relative to the hardware. Hardware is now more expensive, but development is cheaper too.

    The takeaway: expect more push towards efficiency!

    • zozbot234 25 minutes ago
      > If the file is mmap'd, and the string view points into that, presumably decent performance depends on the page cache having those strings in RAM.

      Not so much, because you only need some fraction of that memory while the program is actually running; the OS is free to evict it as soon as it needs the RAM for something else. Non-file-backed memory can only be evicted by swapping it out, and that's way more expensive.

    • pjc50 3 hours ago
      >> Peak memory consumption is 1.3 MB. At this point you might want to stop reading and make a guess on how much memory a native code version of the same functionality would use.

      At this point I'd make two observations:

      - how big is the text file? I bet it's a megabyte, isn't it? Because the "naive" way to do it is to read the whole thing into memory.

      - all these numbers are way too small to make meaningful distinctions. Come back when you have a gigabyte. It gets more interesting when the file doesn't fit into RAM at all.

      The state of the art here is : https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times... , wherein our hero finds the terrible combination of putting the whole file in a single string and then running strlen() on it for every character.

      • dgb23 3 hours ago
        > all these numbers are way too small to make meaningful distinctions. Come back when you have a gigabyte.

        I have to disagree. Bad performance is often the result of death by a thousand cuts. This function might be one among countless similarly inefficient library calls, programs and so on.

        • rcxdude 47 minutes ago
          If you're not putting a representative amount of data through the test, you have no idea whether the resource usage you're seeing scales with the amount of data or is just a fixed overhead of the runtime.
      • kloop 1 hour ago
        > how big is the text file? I bet it's a megabyte, isn't it?

        The edit in the article says ~1.5 kB

        • pjc50 1 hour ago
          Single page on many systems, which makes using mmap() for it even funnier.
          • Filligree 19 minutes ago
            Not to mention inefficient in memory use. I would have expected a mention of interning; using string views is fine, but having each one pin 4 kB cache pages is not, really.

            Though I believe the “naive” streaming read could very well be superior here.

    • veunes 1 hour ago
      I suspect it'll be selective
  • dgb23 2 hours ago
    I'm not a C++ programmer, but I think the solution is neat.

    It's just not necessarily an apples-to-apples comparison. It isn't unfair to Python because of the runtime overhead; it's unfair because it's a different algorithm with fundamentally different memory characteristics.

    A fairer comparison would be to stream the file in C++ as well and maintain internal state for the count. For most people that would also be the first/naive approach when programming something like this, I think. And it would showcase the actual overhead of the Python version.

    • VorpalWay 2 hours ago
      > A fairer comparison would be to stream the file in C++ as well and maintain internal state for the count.

      Wouldn't memory mapping the data in Python be the fairer comparison? If the language doesn't support that, then this seems to absolutely be a fair comparison.

      > For most people that would be the first/naive approach as well when they programmed something like this I think.

      I disagree; my mind immediately goes to mmap when I have to deal with a single file that I have to read in its entirety. I think the non-obvious solution here is rather io_uring (which I would expect to be faster when dealing with lots of small files, as you can load them concurrently and asynchronously from the file system).

      • dgb23 40 minutes ago
        I'd bet that "most people" (who can program) would not think of mmap, but of streaming, or would even just load the whole thing into memory.

        Ask a bunch of coding agents and they will give you these two versions, which means the LLMs have likely seen them far more often than the mmap version. Both Opus and GPT even pushed back when I asked for mmap; both said it would "add complexity".

        • Filligree 11 minutes ago
          It does add complexity, and the optimal solution is probably not to use it. Consider what happens if a 4 kB page contains only a single unique word: you'd still need to load the page into memory to read the string, it just isn't accounted against your process (maybe).

          I would have expected something like this:

          - Scan the file serially.

          - For each word, find and increment a hash table entry.

          - Sort and print.

          In theory, technically, this does require slightly more memory, but only a tiny amount more: just a copy of each unique word, and if this is natural language then there aren't very many. Meanwhile, OOP's approach massively pressures the page cache once you get to the "print" step, which is going to be the bulk of the runtime.

          It’s not even a full copy of each unique word, actually, because you’re trading it off against the size of the string pointers. That’s… sixteen bytes minimum. A lot of words are smaller than that.
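
          A minimal sketch of that outline (assuming whitespace tokenization and UTF-8 input):

              import sys
              from collections import Counter

              counts = Counter()
              with open(sys.argv[1], encoding='utf-8') as fp:
                  for line in fp:                   # scan the file serially
                      counts.update(line.split())   # the table keeps one copy per unique word
              for word, n in counts.most_common():  # sort and print
                  print(word, n)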

  • veunes 1 hour ago
    Not "C++ everywhere again" but maybe "understanding memory again"
  • callamdelaney 2 hours ago
    I shove everything in memory; it's a design decision. Memory is still relatively cheap.
  • 90d 2 hours ago
    Speaking of optimization: is Windows just too far gone at this point? The amount of resources it uses at "idle" is comical.
  • est 4 hours ago
    I think the Python version can be shortened to:

        import sys
        from collections import Counter

        stats = Counter(w for line in open(sys.argv[1]) for w in line.split())

    • voidUpdate 4 hours ago
      Would that decrease memory usage though?
  • amelius 2 hours ago
    > AI sociopaths have purchased all the world's RAM in order to run their copyright infringement factories at full blast

    The ultimate bittersweet revenge would be to run our algorithms inside the RAM owned by these cloud companies. Should be possible using free accounts.

  • gostsamo 3 hours ago
    > how much memory a native code version of the same functionality would use.

    Native to what? How is C++ more native than Python?

    • VorpalWay 2 hours ago
      Native code usually refers to code which is compiled to machine code (for the CPU it will run on) ahead of time, as opposed to code running in a byte code VM (possibly with JIT).

      I would consider C, C++, Zig, Rust, Fortran, etc. to all produce native binaries. While things like Cython exist, that isn't what was used here (and for various reasons it would likely still have more overhead than those I mentioned).

    • fluoridation 50 minutes ago
      Native to the hardware platform.
  • biorach 4 hours ago
    "copyright infringement factories"
    • maipen 3 hours ago
      Tells you right away where this is coming from.
      • Dylan16807 53 minutes ago
        Do you mean something specific? Because that sounds like a criticism, but with some blanks that need to be filled in.

        If you just mean they come across as annoyed by AI, that's true, but that's also way too wide a category to infer basically anything else about them.

      • muskstinks 2 hours ago
        The criticism is valid. The problem is how you weigh this criticism.

        I agree they are stealing it, but I also see the benefit of it for society and for myself.

        Suckerberg downloaded terabytes of books for training, while people around me got sued to hell 20 years ago for downloading one MP3 file.

        • yieldcrv 2 hours ago
          They got sued for uploading, actually.

          And Zuck isn't being sued for downloading either; he is being sued over the AI's reproductions not being derivative enough, but so far all branches of government support that.

        • anthk 2 hours ago
          Anna's Archive. Aaron Swartz.

          FB and co. are CIA fronts, and they can do anything they please, at least until they run up against Disney and the lobbying giants. If some CIA idiot tries to sue/bribe/blackmail those, they can order Hollywood to rip their image to pieces over all the wars they promoted in the Middle East and Latin America just to fill the wallets of CEOs. That, plus some social-critique movie about FB harvesting illegal user data all over the world to deny insurance and whatnot. And of course with a clear mention of the Epstein case and the people tied to it, just in case the Americans forgot about it.

          Then the US industrial and military complex would collapse in months, with brainwashed kids running away from the army. Not to mention the Call of Duty franchise and the like. It would be the end of Boeing and several more, of course. To hell with profit-driven wars for nothing.

          Ah, yes, the AIPAC lobbies and the like. Good luck taming right-wing wackos who hate the MAGA cult even more than the 'woke' people do. They will be the first ones against you after the US image has been sunk for decades, even more than by the illegal Iraq war with no WMDs and the Bush/Cheney mafia.

          The outcome of this? Proper and serious engineering a la Airbus. Instant profit-driven MBA and war sickos kicked out of the spot. The AI snake-oil sellers too, of course, except for classical AI/NN applied to concrete cases (image detection and the like); those will survive fine, even thrive, because those jobs are highly specific and they are not statistical text parrots. They can deliver guaranteed results, unlike LLMs, which are prone to degrade because the feed of human-generated content needs to be continuous, while for tumour detection a big enough sample can cover 99% of the cases.

          R&D on electric vehicles/energy and nuclear power like nowhere else. And, for sure, the EV equivalent of the Ford Model T for Americans: a cheap and reliable car, good enough for the common Joe/Mary without being a luxury item. A new Golden Age would rise, for sure. But the oil mafia would fight it like crazy.

    • MrBuddyCasino 2 hours ago
      I don't know how anyone can call the most amazing invention in computer science of the last 20 years a "copyright infringement factory". We went from the ST:TNG ship computer being futuristic tech to "we kinda have this now". It's like calling cars "air pollution factories", as if that were their only purpose and use.

      A fundamentally anti-civilisational mindset.

      • muskstinks 23 minutes ago
        You can see both sides, criticize how it's done, and still want the result of it.

        It's a little bit hypocritical, which often enough ends in realism, aka "okay, we clearly can't fight their copyright infringements because they are too powerful and too rich, but at least we can use the good side of it".

        Nothing, btw, forces all of this to happen THAT fast besides capitalism. We could slow down; we could do it better, or more right.

      • saintfire 27 minutes ago
        The people pushing this technology, which accelerates climate change, have lobbied the government to circumvent the typical roadblocks society has created to limit sensationalist development. Incidentally, they are the same people who talk about how dangerous AI will be for society, but don't worry, they're going to be the ones to deliver it safely.

        Now, I don't believe AI will ever amount to enough to be a critical threat to human life, you know, beyond the immense amounts of wasted energy they propose to convert into something more useful, like a market crash or heat and noise, or both.

        Not sure how you can matter-of-factly call someone opposed to any of that "anti-civilisational".

      • vor_ 2 hours ago
        I'm sorry, but you're acting obtuse if you pretend you don't know why they're being called that.
  • yieldcrv 2 hours ago
    as long as you know what architecture questions to ask, agentic coding can help with this next phase of optimization really quickly

    delaying comp sci differentiation for a few months

    I wonder if assembly-based solutions will come into vogue