Something is deeply broken in OS X memory management (workstuff.tumblr.com)
389 points by fields on April 23, 2012 | 258 comments



Not this again, we already went through this a few weeks ago.

Back then, I thought the conclusion was that there is nothing broken about OS X memory management, and that with every 'fix' you come up with, you will just introduce another degenerate corner case. The same holds for any OS, trade-offs are made that may have some negative effect in some cases, to the benefit of the general cases.

I don't recognize any of his symptoms anyway, and my OS X computers get pretty RAM-heavy use, with almost always a Linux VM open, Xcode, Safari with ~10 tabs, iTunes with a few thousand songs, etc.

Edit: Just to be sure I read through some of the links he provides that are supposed to explain what is going on and why the fix would be of any help, but nowhere do I see any hard facts that demonstrate what is going on. Only that he 'saw in vm_stat that OS X was swapping out used memory for unused memory'. I'd like to see some actual evidence supporting this statement.
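(If you want to check your own machine: vm_stat ships with OS X, and the counter to watch is "Pageouts" - a steadily climbing pageout count while plenty of memory sits free or inactive would be the kind of hard evidence I mean.)

  # One-shot summary of the VM counters:
  vm_stat
  # Or sample continuously, every 5 seconds:
  vm_stat 5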


The thing is, Lion runs like crap without an SSD or insane amounts of RAM for many people. It doesn't help if it works for some, or even the majority. If anything, we need more people digging in OS X internals. I am sorry to use a tired analogy here, but Vista did work great for many too.

I have two Macs, one brand new, one migrated from SL, and on both, Safari under 10.7 was almost unusable until I installed SSDs. If that isn't a negative effect in every possible use case, I don't know what is. I had actually guessed that Lion had just unofficially dropped support for HDDs (by removing all caches or some such).


I upgraded to an SSD while on Leopard, and was amazed at the speed. Snow Leopard continued to impress. Lion slowed things down terribly (even with the SSD!), but I have good news: ever since Mountain Lion DP2, it's been fast again.


Yes, I am a fan of ML so far too.

Given that Apple has fixed none of my reported bugs in 10.7, but I can't reproduce many of them in 10.8, I wonder if it even makes sense to analyze 10.7 anymore - seems it's a done deal for Apple.


This is very good news indeed. I'm okay that they don't backport bugfixes as long as ML comes out in a reasonable timeframe.


ML will be a paid upgrade right? So people who paid for Lion are worth nothing?


I've been trying to get people I know at Apple to start referring to it internally as Mountain Goat.

It's working.


Are you implying that Mountain Lion is slow?


I have 12GB of RAM and Lion runs like crap on my iMac 2.93GHz i7.

It's like the system discards a program's pages just because the app has been inactive for an hour or so. So when I come back and use the same app, the f*cking rotating HD I have sounds like a bird's nest for far too long.

Edit: Disabled the pager, and the system now seems much quieter in terms of disk seek noise when I start apps. Feels like a new machine! :-D
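(For reference, what "disabling the pager" meant in practice on Lion - at your own risk, since with paging off the machine has nowhere to go when RAM truly runs out - was unloading the dynamic pager's launchd job:)

  # Stop launchd from starting the dynamic pager, then remove stale swapfiles:
  sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.dynamic_pager.plist
  sudo rm /private/var/vm/swapfile*
  # Reboot, then check `sysctl vm.swapusage`; re-enable later with `launchctl load -w`.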


> It's like the system discards pages of programs just because the app has been inactive for an hour or so.

From what I've read, Windows memory manager does the same thing - after a while, it swaps out unused pages, even if plenty of free memory is available.

I wonder what the logic behind this is - did the engineers assume that the speedup from having more free memory available for disk cache is worth the hassle of waiting for a swapped-out page (when it's actually needed)?


I was under the impression (from my FreeBSD years) that pages that are not used for a while are swapped out and marked as inactive. The pages are then fast to throw away for other use, or fast to bring back to the active state.

In Lion I get the impression they are just swapped out and thrown away / reused for something else even though there is no real pressure on the VM.


The logic is for things like daemons or infrequently used processes that wake up maybe once a day. They don't need to sit in RAM all the time needlessly. So swapping out memory of theirs that hasn't been used for N hours isn't a bad thing.

And yes, the logic is sound: it's better to use a bit of swap for an infrequent daemon and have 4-5 megs of memory at the ready if needed than to leave it in place all the time. The "speedup" is not a speedup for your use; it's to allow for better memory management, which is what the VM subsystem is there for. Second-guessing it all the time just makes its job harder.

Swap use when there is free memory isn't a bad thing. The fetish people have about their OS using swap at times borders on the ridiculous. My iMac at home has 16g of memory and 400g of swap used right now (8g active, lots of file cache that'll get purged). Most of the swapped pages belong to things like my ruby+pry REPL, a Clojure REPL I haven't touched for 2 days, and other random things I don't use often enough to warrant keeping them in active RAM. Why SHOULDN'T that memory be reclaimed and at the ready for a new program or some other request? Otherwise the OS is just going to page it out at allocation time, which will likely take longer. The only time it's "wrong" is when I start using those processes again, which takes all of 1-2 seconds.

It's a hard problem, and both OS X and Windows choose about the best solution you can arrive at heuristically.


So why not lazy page? Swap to disk, but don't unlink the memory until it's needed by another process. If the paged app comes back to life first, it's still in memory and the paged copy on disk can be dropped.


> The logic is for things like daemons or infrequently used processes that wake up maybe once a day. They don't need to be in ram all the time needlessly. So swapping out their memory that hasn't been used for N hours isn't a bad thing.

It is, actually. I find such things unacceptable, whether in desktop or server use cases. I put as much or far more RAM in my systems than they will need, and I expect nothing to be swapped until it's actually full. Many other people do as well, which is why the Linux kernel devs finally started fixing this stupidity several years ago. Time for OS X to catch up.


> From what I've read, Windows memory manager does the same thing - after a while, it swaps out unused pages, even if plenty of free memory is available.

Windows XP does that. It was a common source of grief. I remember it being mentioned as early as 2004. Since Windows Vista the memory manager doesn't have that problem.


> The thing is, Lion runs like crap without an SSD or insane amounts of RAM for many people.

YES. I have a MacBook Pro Core i7 from a little while back with an old-style spinning-rust drive, and an 11" MacBook Air Core 2 Duo.

For purely CPU-bound things, sure, the Core i7 beats the pants off the Core 2. Same for videogames. For day-to-day use, though, switching between Eclipse, Xcode, Chrome, etc., the Air provides a much more uniform experience. At its best it's far slower than the Pro at its best, but at its worst it's much faster and more responsive. I rarely see beachballs on the Air. I used to see them all the time on the Pro (the Pro has been sitting on a shelf for the past eight months as I switched to working exclusively on my Air, partially for this reason).

So my experience is that something may not be broken, but something definitely isn't set up optimally for users with poor disk performance and high memory/CPU performance.


Why don't you spend $140 and add an SSD to the pro? You can keep the HDD for an extra $80 with a cd drive bay HDD caddy.


I got this one for $16 (no enclosure for your old CD-ROM like the $80 one): http://www.amazon.com/gp/product/B0058AH2US/ref=oh_details_o...

Left some big files on the old HD, and symlinked them. The disk stays idle in the CD bay until I need it, then spins up.


Doesn't the latter void the warranty?


Only if you try to claim the warranty for something you broke; cf. https://en.wikipedia.org/wiki/Magnuson-Moss_Warranty_Act


It doesn't appear to, however if you ever have to get your machine serviced you have to painstakingly undo and redo the installation, as Apple doesn't guarantee the machine that comes back from the factory will have your extra drive in it.

If I were to do it over again I'd just get a larger SSD and leave the optical drive alone.


Maybe technically, but unless you strip every screw on your way into the machine, they won't know you ever had the cd drive disconnected. Just put it back when you take it in for service.


Not in my experience. I did exactly that (install SSD in cd drive bay, and use orig hd for a second drive), and when I had a fan issue it was not a problem.


Yes, but you can un-void it by plugging your CD drive back in. Note that replacing the hard drive voids the warranty as well.


It only voids the warranty on the hard drive; in actuality they don't care... They will simply call up and ask about it, you tell them you replaced it due to corporate policy, and all is well.


As I said, it was only part of the reason I shelved it.

It also turns out I rather like working on an 11" screen. Keeps me focussed.


This

I haven't tried ML yet. My MBP is brand new (bought in January 2012), factory configuration (4GB RAM, Lion)

If you use it 'lightly' (that is, only Safari open) it's a breeze. But of course, it's never only that if you want to do any work.

Frankly, 4 GB should be enough! My past machine (with 3GB - and Linux) would rarely swap (in fact, I could keep swap off and use a Windows 7 VM). But you can only go so far with an aging CPU.

Some of the slowness can be attributed to Safari/Firefox, sure

But something really does seem to be wrong. Maybe they really did neglect people with spinning disks.

(Yes, I considered buying a MacBook Air, but 128GB was not enough for me and the other options were above my budget.)


I'm running on a 160GB HDD (nearly full), 2GB ram, on a 4 year old MacBook. I always have iTunes, Xcode, Safari/Chrome running with multiple tabs, and a mail client (Sparrow). The system originally ran Leopard and I upgraded to SL and then Lion. I wouldn't say it's slow. It's definitely getting slower but that's expected since it's such an old system. Could it be that you're just expecting too much from your computer?


"It's definitely getting slower but that's expected since it's such an old system."

I see this sentiment a lot, but I disagree with it. What are "iTunes, Xcode, Safari/Chrome, and a mail client" doing now that they weren't doing four years ago? Is it enough to justify their latest versions feeling less responsive than their versions from four years ago?


I think that as developers develop on better hardware, they pay less attention to performance. For example, you wouldn't expect the latest build of iTunes, or of OS X, to run on hardware from 10 years ago. As system specs improve, developers seem to pay less attention to performance & file size. It seems crazy to me that modern web browsers are over 50MB in size. But as hard drive size isn't really a constraint anymore, developers don't optimize that aspect as much.


I would, actually. iTunes is just an MP3 player, remember, and WinAmp did just fine on '90s machines.


Safari/Chrome: websites are more complex, more image-heavy, and more JavaScript-heavy

iTunes: layers of software for dealing with Wi-Fi sync, Ping social network, iTunes Match, and so on

Mail Client: totally agree with you there...

But I still pretty much agree with you overall...


I don't. Is that iTunes library really the same size it was when the machine was bought? Does it play songs with the same encoding size? Is the version of Xcode still the same, with the same feature set? Does the browser play HD video from the web as often now as it did years ago?

Things change. My old 2006 iMac core 2 duo feels a bit clunky sometimes these days, but it runs a lot of stuff fine and is actually just as good a machine as it ever was.


>I see this sentiment a lot, but I disagree with it. What are "iTunes, Xcode, Safari/Chrome, and a mail client" doing now that they weren't doing four years ago?

Lots of things. Xcode was rewritten and does live AST syntax completion, background compiles, etc.

Safari/Chrome have several more features --did Chrome even exist 4 years ago?


"nearly full"

Expectations are certainly a big part of the perceptual speed equation. But with OS X, don't underestimate the benefits of keeping your disk less than 90% full. With all the caches, iPhone and iPad backups (over 40 GB in my case), Xcode, sleepimages and swapfiles, installers (Adobe!), SyncServices, etc., a 160GB SSD fills up in no time. When things get slow, getting back below 90% works wonders.


My issue with Lion was that it broke WiFi on my 2009 iMac. Yet the 2011 iMac is just fine with it. There has been a near-constantly running thread on the Apple support forums since the day Lion dropped: all sorts of WiFi connections just going out, sometimes even with the icon being nice enough to gray out.

Currently at 145 pages and growing https://discussions.apple.com/thread/3191630?start=2160&...

Yeah, the 2009 iMac started working the day I put Snow Leopard back on it. I then sold it and warned the owner that upgrading to Lion was at his own risk. At one point I replaced my router with an AirPort Extreme (a useful excuse to buy a new toy) to see if that resolved it. I even moved the iMac NEXT to the router one day.


My late 2006 MBP had all sorts of wifi issues with 10.5 the first year or so it was out. Like routine kernel panic-bad level stuff.


AFAIK every odd-numbered version release of OS X breaks the networking on a significant number of machines.


I have the same experience: my MBP with 8GB RAM and SSD is pretty fast running Lion. But my two-year old Mac Mini at work with 4GB and platter disk runs terribly slow ever since I did a clean Lion install.


Two things I have found that help the most with everyday usage of OS X Lion without an SSD: 1. Disable Spotlight on the boot volume; this keeps the mds process from reindexing things so frequently. 2. Use Time Machine Scheduler (a free utility) to back up at 5am or something, so it's not interfering with daytime tasks.
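(For #1, the stock command-line tool is mdutil; a quick sketch:)

  # Turn off Spotlight indexing on the boot volume:
  sudo mdutil -i off /
  # Turn it back on later; mdutil -E / erases and rebuilds the index:
  sudo mdutil -i on /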


Insane amounts? Like 16G? That I can buy for like $80?


16GB is an insane amount, regardless of the price.


Make that an extra $750 if you buy it built into a new Mac Pro...


Also, because of all this swapping, the life of an SSD under OS X Lion is shorter than it would be under Linux or Windows.


Not this again, we already went through this a few weeks ago.

In-fricken-deed...

We have been through this problem again and again and again, in different OSes, at different times and with different things triggering the various problems.

It usually ends with a "neck-beard" saying with enough authority "look, really, they are doing it right even if it seems totally illogical to you and any brokenness is just your configuration, little man". Which is to say "you might not like senseless disk-thrashing but would you rather have your machine randomly freeze when it got out of memory?" And scratching a little more, it comes down to admitting that memory-allocation is a hard problem in its full generality and they don't teach you that in application-programmer-school, and further that the solutions to it that any of these OSes have are tuned-black-magic-split-the-difference-haphazard affairs.

Consider: either the machine keeps all your information in memory, or it keeps it split between disk and memory, and either way the machine hasn't a clue what information is important to you, my friend. It's just data to it. It's not like the computer is intelligent or anything. Why do you think they call it "random-access memory"? The problem of dividing up chunks of memory for application programs to use is as hard as dividing up that hard disk for large and small files to live in, EXCEPT that application programs expect to be handed a chunk of contiguous memory when they call malloc. A hard problem, even with the powerful tools that have evolved for solving it over the years. So when a given memory management scheme works, it isn't really "fixed"; it has just been tuned for the corner cases that are shouting loudest on the help lines.

And yes indeed, it is "funny" how just getting the "simple stuff" to work is a hard problem. I.e., you can find lots of simple examples where the standard solution seems to fail terribly.

Angry how your 100 GB memory machine isn't faster? Look under the hood and you'll find Scotty from Star Trek shouting "Captain, I'm allocating your memory as fast as I can Sir..."


If you're not oversubscribed, it shouldn't thrash. Isn't that the complaint? Then it's a legitimate issue.


I don't recognize any of his symptoms anyway, and my OS X computers get pretty RAM-heavy use, with almost always a Linux VM open, Xcode, Safari with ~10 tabs, iTunes with a few thousand songs, etc.

I think you missed a key ingredient to this problem, which is heavy disk reads caused by either spotlight or time machine.

Reading a massive amount of data (that you'll probably not use again anytime soon) has the unfortunate side effect of polluting the disk cache with junk. Now, if OS X is anything like Linux in this regard, it is loath to toss out old disk cache (in response to all the incoming junk it's being asked to cache) and will instead start swapping to free up more memory for disk cache.

Linux has /proc/sys/vm/swappiness to control how aggressively it will swap things out to preserve precious buffer cache, but I don't think OS X has any such mechanism.
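(For comparison, this is all the tuning takes on Linux; lower values make the kernel hold on to anonymous pages at the expense of the page cache:)

  # Check the current value (60 is the usual default):
  cat /proc/sys/vm/swappiness
  # Lower it until the next reboot:
  sudo sysctl vm.swappiness=10
  # Persist it:
  echo 'vm.swappiness = 10' | sudo tee -a /etc/sysctl.conf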


Isn't this what POSIX_FADV_DONTNEED is for?

http://linux.die.net/man/2/posix_fadvise
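A minimal Linux sketch of the call (OS X doesn't ship posix_fadvise; its closest analogue there is the F_NOCACHE fcntl mentioned elsewhere in this thread):

# Stream through a big file, then advise the kernel to drop our cached pages.
cat > fadvise_demo.c <<'EOF'
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0)
        return 1;
    char buf[1 << 16];
    while (read(fd, buf, sizeof buf) > 0)
        ;                               /* bulk read: this pollutes the page cache */
    /* Whole file (offset 0, len 0): tell the kernel we won't need these pages. */
    posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    close(fd);
    return 0;
}
EOF
cc -O2 -o fadvise_demo fadvise_demo.c
./fadvise_demo /some/big/file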


>I don't recognize any of his symptoms anyway, and my OS X computers get pretty RAM-heavy use, with almost always a Linux VM open, Xcode, Safari with ~10 tabs, iTunes with a few thousand songs, etc.

Oh boy, I wish I could say the same. Admittedly I don't shut down on a daily basis, but this didn't use to be a problem in SL. FWIW I've got an old tank of a tower with a video card on its last legs... shutting down invariably leads to ~30 minutes of downtime while the card heats up and reconnects whatever needs reconnecting for both monitors to work.

Since Lion, I've noticed frequent hangs and beachballs when doing even menial tasks: Transmit, Terminal, TextMate, a few tabs in Chrome. If Time Machine starts backing up, I can forget about a smooth Preview launch or switching to a largish open TextMate file without beachballing. If I want to use a Win7 Parallels VM, I can't do anything else. Even now as I type this, I have an Ubuntu VM sitting at the login screen and it makes the machine shake off the cobwebs between almost everything.

It's certainly not a bad machine--8GB RAM, tons of disk space, a good processor. In fact, on SL I would go multiple months of heavy use without a reboot and with hardly any problems at all.

Then there is the new i5 MBP. Cool trick you can do: hook up an external monitor via Thunderbolt and watch as the [left side] Dock becomes a mangled mess, with icons mispositioned and triggering the wrong apps--it's like playing a game of whack-a-mole trying to open Terminal to kill -KILL the Dock :)

The i5 has also been less than stellar, performance-wise, compared to the older MBP I sold to buy it.


For what it's worth, I'm running Snow Leopard (10.6.8) and I see the same sort of terrible performance the article describes.

Though I did improve matters drastically by telling mds not to index my (Linux) MP3 server and my Time Machine drive. That made it go from nearly unusable to just frequently annoying.

The other crazy thing I've been seeing is that it routinely takes Chrome minutes to shut down -- in fact, pretty much every time I try to reboot my MBP without shutting down Chrome first, the shutdown process times out trying to exit Chrome.


Ah yes. The dreaded Chrome renderer never enderer.

I too have experienced the Chrome issue enough that I don't even try to close it normally anymore. Force quit is the only way I exit Chrome. Thankfully, the restore tabs functionality works well.


I'm glad to see I'm not the only one. This drives me crazy every time.


Firefox 12 does this too, on both my MB Pro and iMac. I think it has to flush so many caches that disk I/O just takes forever. This is purely conjecture, though.


Not sure if it will help, but have you tried running Firefox's hidden profile manager and creating a new profile? Firefox was horrible to run on OS X for me - it would drag the whole system down - but since changing to a new profile it has run like a dream and not regressed.


Creating a new FF profile made a significant difference for me, too, when I did it recently (for the first time in a couple years). I suspect that will be true on any OS, if you do a fair amount of installing and removing extensions, user scripts, etc.


"Vacuuming" your Firefox profile's .sqlite files also helps.


I had to shut down on a daily basis because of this problem with my system having only 4 GB of ram. There is definitely something broken in there.

http://i.imgur.com/ohBEF.png


And not to change the subject, but why does my instance of Dashboard need 340M of RAM? I have maybe 5-6 widgets. Is this thing spawning WebKit for each one?


I don't see anything wrong in this picture. You still have free memory. Even if it didn't show free memory you would probably still be OK, because it would just be reserved for use.


In that case it should be marked as inactive memory so it can be reused. The memory that was taken up by the end of the day was never reclaimed, so I was paging a lot.


>In that case it should be marked as inactive memory so it can be reused

It would be the memory manager's job to decide when to do that. Memory could be kept in the non-free state for longer than it is actually needed, but still be marked internally as available when needed.

OTOH, if you have paging, as you say, then something is wrong, true.

But I don't think that the screenshot shows something wrong.


>we already went through this a few weeks ago. . . . Back then, I thought the conclusion was that there is nothing broken about OS X memory management

I read through the comments from a few weeks ago, and I did not see anything conclusive, or even anything that would outweigh my subjective impression that something about Lion on my (stock) 2011 Mac mini is causing an unnecessary lack of responsiveness.

You might be uncomfortable with subjective impressions and many pieces of weak evidence, but given the popularity of OS X on this site, not all of us want to wait for what you refer to as "hard facts" before engaging in a discussion of the issue.


Do you have a link? I tried to search hn but, alas, in vain.


Yes, I had it bookmarked. Actually, I had bookmarked a particular subtree of the comments tree, but repeated following of the "parent" link should get you the whole tree. The particular subtree is at http://news.ycombinator.com/item?id=3585181


I see the same issues the guy in the article has seen. Basically, I have a total of 8GB of ram in this here mac mini, and when my desktop session gets heavy things start going south.

For example: Let's take a large Chrome session (~150 tabs spread over several windows), an IDE open somewhere, Spotify, Steam and some background apps, and a small Windows VM. Generally, Activity Monitor would show that Chrome in this instance would be eating 2-3GB of RAM, the VM would be eating 1GB + change of unpageable ('wired') memory, the random utils & spotify & steam & ide du jour & crap would eat another 1.5GB or so, and, long story short, there's very little 'free' memory left (think <100MB) on this 8GB system but a good >1GB of 'Inactive' memory.

Everyone agrees that Inactive memory should be freed when more memory is required by the system and we're out of Free. My own limited testing shows instead that opening more tabs during normal use to the point where the Free memory is consumed instead results in massive delays across the system as memory is paged out to disk, and the Inactive memory doesn't seem to noticeably change in size.

I don't really know what it's doing, but it will consistently make the system very unpleasant to use for a good 30 seconds until the hard disk stops clicking. (It's unpleasant enough that I try to limit my browser session sizes now, and only run the VM when I need to.)


What you are describing sounds like plain old memory starvation, not really a problem with the memory management.

You don't have to fill up every last bit of RAM before the OS starts swapping, as there could be pinned memory pages or processes that want to do larger allocations that are only held for a short time. If you have less than a few hundred MB of unused RAM with all the stuff you mentioned going on, it only takes some kind of scheduled OS background job to push the OS over the line where it decides it needs to swap in/out.

That said, from the comments some people posted here, it does appear that at least in some situations there seems to be something going on in some versions of OS X Lion. If Snow Leopard and the Mountain Lion preview are unaffected with the exact same usage pattern, maybe there actually is some kind of bug in the OS X memory management. But I'd still like to see some kind of evidence, facts or statistics, as I have never experienced anything like it myself, not even on my MacBook when it still had only 2 GB of RAM.


>I see the same issues the guy in the article has seen. Basically, I have a total of 8GB of ram in this here mac mini, and when my desktop session gets heavy things start going south. For example: Let's take a large Chrome session (~150 tabs spread over several windows), an IDE open somewhere, Spotify, Steam and some background apps, and a small Windows VM.

150 tabs? A VM? An IDE?

That's just A LOT. Of course things will go south; what did you expect, a magic machine that can run everything and whistle away happily at 0% load?


I bought a brand new MacBook Pro last year (2011 model), 15" with upgraded CPU, etc., with 4GB of RAM. I didn't want to upgrade to an SSD yet; I wanted to do that later. After I upgraded to Lion (through the Mac App Store), the machine is only as speedy as my old MacBook Pro that I bought in 2006, which is running Snow Leopard. It's horrible, it's so annoying, and I'm pretty much really angry. If Apple does not fix this I will NEVER buy a machine from them again. The only reason I'm buying this overpriced hardware is that I like working on OS X and I don't want to struggle setting up a "Hackintosh". You probably won't define this as a bug because you're not experiencing it yourself. Maybe it's not an implementation bug, but it surely is a bug from the user's perspective.

edit: Just for the record. I'm not doing some kind of heavy processing work. Most of the time I've got one Chrome window open with some tabs, email client, macvim and iTerm2. It's not like I'm doing some heavy work. I'm not even running a VM.


Use something like fs_usage to see which of your programs is churning the disk. This is not the normal state of Lion; there are two system services (Spotlight and Time Machine) which can generate tons of background I/O through normal operation - I usually disable TM outright and make sure Spotlight ignores my development folder.
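(A sketch of how that looks in practice; fs_usage needs root and is extremely chatty, so filter it down:)

  # Watch filesystem calls system-wide:
  sudo fs_usage -w -f filesys
  # Or narrow it to the usual suspects:
  sudo fs_usage -w -f filesys mdworker backupd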


I'll check that out. Thank you for your suggestion. One thing I find strange though, this started happening after I upgraded to Lion. It wasn't like that when I was using Snow Leopard and both Time Machine and Spotlight exist in Snow Leopard.


It's hard to tell for sure, but it seems like this problem may only be present if you're running certain kinds of applications. For example - if it's related to garbage collection, anything that uses ARC instead of GC isn't going to be affected.

I thought it was pretty clear that this isn't a "fix", but there's definitely something wrong here.


MacBook Pro, 8GB, leave it running for weeks: No performance problems at all.

People say "Get and SSD", well, I've had 2 SSDs and 4 SSD failures (one drive failed three times, the other once and its replacement is still going.)

So, I'm all spinning rust here. 1.5 Terabytes of rust in my Macbook Pro and the only time I have a beach ball is trying to launch Team Fortress (but I blame valve for that).

I have massive, MASSIVE Final Cut and Aperture libraries. I leave the machine up for weeks. I leave Time Machine running all the time- there isn't even a slowdown when time machine is backing up.

My hard drives are encrypted with full disk encryption which means not only am I running spinning rust but its encrypted rust which means every read has to be decrypted.

No slowdowns or beach balls. Sure the occasional poorly written program will have a beach ball, and rendering video takes awhile, but that's to be expected.

Yet people constantly say that Lion sucks? Really? And they have these more beefy machines with more RAM?

Something doesn't add up here.


3-year-old (non-'Pro') MacBook here, 5GB RAM & a 7200rpm drive; avg uptime is 30+ days, or whatever the time between service packs is. No issues at all.

Came with Leopard, upgraded to Snow Leopard (not a fresh install), and then an App Store upgrade to Lion.

The machine is as snappy as can be, unlike my coworker's brand new quad-core i7 w/ 256GB SSD that runs Lion like a dog. No idea why, but my humble old MacBook is faster in every way than his shiny new Mac mini.


No actual data, barely any technical discussion at all, mention of "the garbage collection algorithm" which most likely isn't even being used by most of the apps running, capped by a total cargo-cult solution... and this is #1 on the front page?


The post is (too) light on information, but I don't dismiss it immediately. Perry Metzger was very much involved with NetBSD (maybe he still is) as a developer. He might know a thing or two about memory management ;).


Metzger's post is much more subdued and interesting. IMO it would have made a much better submission. Link for the curious (it's linked from this article):

https://plus.google.com/116685507294337280246/posts/camYp28M...


I still don't quite grasp it. He is talking about page outs, but how is disabling the dyn-pager helping with that? Shouldn't page outs only happen when RAM is full?

On my machine, with 8 GB RAM and an uptime of 4 days, I have page outs of only 2 megabytes, and page ins of 2 gigabytes.

P.S. I subscribe to your blog! * starstruck


RAM serves (at least) two purposes: holding application data, and caching disk data. Sometimes it can be useful to swap application data out to disk to make more room for disk caching. Imagine if you have an app taking up a whole lot of memory that isn't actively using most of it, and another app reading a lot of data from the disk. In this case, you'll perform better if you swap out all that unused data, and use that RAM to cache disk access for the other app.

What he's saying is happening is that the OS is doing this too aggressively, and that it ends up swapping out data that's actually in use in favor of disk data which doesn't really need to be cached, which hurts performance.

By disabling the pager, you make it impossible to move application data to disk at all. This limits the amount of RAM available for disk caching, but if the OS really is caching too aggressively, that will ensure that it can never page out useful application data by mistake.

My experience mirrors yours, in that it really doesn't seem to be a problem on the computers I've used, but that's what he says he's seeing.

Glad you like the blog, but I'm just a regular guy. I put my pants on with a high speed pants installation robot just like everybody else.


Anecdotal data point (against the article): I have almost the same setup as this guy (2008 Mac Pro with 24GB and a 2011 MacBook Pro with 4GB), and even though I torture those machines with many hungry apps running concurrently, I haven't run into the same issues. Furthermore, I can't reproduce the high Inactive RAM count nor the high page-out activity, even after weeks of uptime.


Garbage collection does not have to mean GC as we know it from Java. It's not that the objects created by the app are garbage collected. The whole memory management in modern systems resembles a GC (it pretty much is a GC). Just instead of managing liveness of the objects, you manage the block mapping. Sometimes you have to write them back to the disk, sometimes you have no memory left and you have to swap them out...

That's pretty much the GC algorithm. There's nothing wrong with that mention.


When he says "the garbage collection algorithm may require that all of a program’s data be in physical RAM before collection can happen," it sure sounds like he's talking about in-app heap collection. Does "may require that all of a program's data be in physical RAM" really apply to the kernel level of memory management? It makes no sense to me when applied that way.


I don't see why this was downvoted. There are probably still many applications in the wild that use Objective-C 2 garbage collection. The garbage collector scans the stack and global memory for references, and recursively follows strong references to find out which objects are in the reachable set. This may touch a lot of memory pages that were paged out. So, even if the garbage collection algorithm does not require the application to be in physical RAM, many pages will probably be faulted back into physical RAM as part of garbage collection.


Yes, I agree. Way better than most HN articles.


Probably a lot of people hitting the up arrow to save the link to try out this evening. I plead guilty to doing that. I wish I'd read the comments first.


I do that too sometimes. Wtb separate save button.




The core reason that this happens is that OS X uses a memory management mechanism called Unified Buffer Cache (http://kerneltrap.org/node/315 is the only reference I can find on this).

Unifying paging and disk-cache memory seems like a good idea, but it actually isn't. It means that if you do a lot of I/O, resident pages (i.e. your programs) can actually get pushed out of memory to free up RAM for the disk cache. This degenerates pretty badly in scenarios like running VMs, since you're also using large sections of mmap'd memory.

This doesn't happen on NT or Linux, because disk cache can only be given up to the memory manager (i.e., the disk cache shrinks), not the other way around; the policy is "disk cache gets whatever's left over; the memory manager has priority".

Unfortunately, the only thing you can really do about it, is have a machine with a huge amount of RAM, which will kind of help.


> This doesn't happen on NT or Linux,

No, NT and Linux also have unified VM. What BSD had pre-UVM was pretty antiquated.


What really needs to happen is that Spotlight and Time Machine need to use direct I/O (F_NOCACHE) when they read data from the filesystem; that way they won't pollute the disk cache with their reads, and OS X won't swap out a bunch of pages in response.

I think you could probably hack something together with DYLD_INSERT_LIBRARIES (OS X's LD_PRELOAD) that would hook the open system call and fcntl F_NOCACHE on the file descriptor before handing it back to the application.
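To make that concrete, a rough sketch of such a shim - my own illustration, untested, with made-up names; note that launchd-spawned daemons like mds won't simply inherit the environment variable, so actually applying it to Spotlight is the hard part:

# Build a dylib that interposes open() and sets F_NOCACHE on every descriptor.
cat > nocache_shim.c <<'EOF'
#include <sys/types.h>
#include <fcntl.h>
#include <stdarg.h>

/* Replacement for open(): same behavior, plus F_NOCACHE on success. */
static int nocache_open(const char *path, int flags, ...)
{
    mode_t mode = 0;
    if (flags & O_CREAT) {          /* open() is variadic; mode only with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = (mode_t)va_arg(ap, int);
        va_end(ap);
    }
    int fd = open(path, flags, mode);   /* calls the real open; dyld doesn't
                                           rewrite references inside this image */
    if (fd >= 0)
        fcntl(fd, F_NOCACHE, 1);    /* ask the UBC not to cache this file's pages */
    return fd;
}

/* dyld interposing: entries in __DATA,__interpose swap original for replacement. */
__attribute__((used)) static const struct { const void *replacement, *original; }
interposers[] __attribute__((section("__DATA,__interpose"))) = {
    { (const void *)nocache_open, (const void *)open },
};
EOF
clang -dynamiclib -o nocache_shim.dylib nocache_shim.c

# Try it on some disk-heavy command:
DYLD_INSERT_LIBRARIES=$PWD/nocache_shim.dylib du -sh ~/Library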


This is the correct solution. Time Machine and Spotlight should not pollute the OS cache.


I bet the odds are damn good that neither of them IS polluting the OS cache.

That they are is just speculation from someone who's taken his experience and projected it onto everybody.

Since he's a person who mucks around with random system settings (like the one in his article) there's no telling what previous damage he's done to cause this problem.


You might be right. In my testing, mdworker and friends behave like you'd want them to, I don't see them polluting the cache on my Lion machine. Haven't tried time machine yet.

EDIT: I can't get either spotlight or time machine to show any cache polluting behavior at all, at least not in the way that I run them. I used "mdutil -E /" to force a re-index of my disk, and I kicked off an initial time machine backup on a secondary drive I had lying around. I see both backupd and mdworker doing a lot of disk reads using iotop, but top shows my inactive memory not really changing as drastically as I'd expect, like if I were to cat a giant file to /dev/null.


For Linux, there is a program called nocache, which does something similar to this:

https://github.com/Feh/nocache


This is an excellent suggestion that might actually solve whole lot of these issues.

I hope Apple engineers are looking at this thread.


I did this for a while and ran into an interesting (if fatal) edge case:

I use a 1.5tb external drive formatted in exFAT to minimize cross-platform headaches, and whenever the drive is marked dirty (improper shutdown, eject, etc), OSX will run fsck_exfat on it before I can use it.

fsck_exfat isn't a huge deal -- or wouldn't be, if it didn't have a nasty tendency to leak RAM... the moment you plug in, fsck_exfat's footprint climbs up and up and up... never stopping! Pretty soon it's eaten up 8GB out of my 8GB of RAM and poor ol' lappy is unusable.

I can say with authority what happens when you run out of physical RAM in OSX: it hard locks. Nothing works -- no keyboard, no mouse, nothing.

So, if you plug in your large, dirty (you dirty drive you!) exFAT-formatted external drive with dynamic paging switched off, and let fsck_exfat do its thing, your laptop freezes! Leaving the drive dirty, only to be re-scanned on boot-up... freezing the laptop, leaving the drive dirty, only to be re-scanned on boot-up...

EDIT: this is with Snow Leopard...


exFAT is Microsoft proprietary shit.


The problem isn't with exFAT, it is with fsck_exfat, which I'm fairly certain Microsoft didn't write.

chkdsk on Windows manages to clean exFAT volumes just fine without using up 8gb+ of memory.


yeah sometimes I'll boot into Windows just to clean the drives :-(


Yeah, but it's the only real alternative for R/W support of large volumes in both OS X and Windows. I've had corruption issues with NTFS and HFS+ drivers :-(


I've experienced exactly the same thing that was described in this article. All the way though to installing more ram and disabling the dynamic pager (this is a late 2011 mbp).

Like the author, I was shocked at how accustomed I had become to waiting for an app to become responsive again. I was trained to wait for the OS to do its business before I could do my work. Now things happen as quickly as I can think of them; this is how computing should be.


I've been a Mac user since the beginning, and by far my biggest frustration is the perpetual running-out-of-RAM, even when I close basically everything. I have 4GB of RAM, and frequently catch kernel_task using at least half of it.


As another Mac user since "the beginning", I could smell that rotten smell; slightly at first with SL, then thick and rancid with Lion. So my Macs are now back on 10.5 Leopard, and I'm not missing a thing, the lousy performance in particular. Unless ML turns out to be hugely superior, I might be stuck on 10.5 until I get a machine that simply can't run it anymore.

In the meantime, I'm also hedging my bets, and I've gotten very comfortable with Windows 7 for productivity (ok, it's really for gaming) and Ubuntu Linux for web/LAN serving.


Why shouldn't the kernel be using as much memory as possible? It's not like big disk caches or what have you cause your memory to go bad. As long as you get it back when you need it, who cares?


This. Memory that is not used is wasted. If you spend a bunch of money on a high-memory setup, you should be furious if the OS doesn't use it all.


The problem is it then swaps to disk when you use an application. I'm fine with 100% memory being used at all times, but it needs to actually be used and preferably by whatever needs it the most.


The issue is just transparency. You want to know how much memory you actually have available for use if you need it. How would you like it if your car's gas gauge was close to empty all the time because the car was caching gas for long trips?

It would seem there's a simple solution -- another number on the system monitor displaying how much memory is available for use if needed.


Flawed analogy: you use up the gas and your car stops. You use up memory and … the kernel swaps pages around. Now, if the kernel isn't giving you back memory, that's a problem, but the OP doesn't actually show that this is happening.


No, it captures what I want to worry about. I do not want to be in the situation where processes that I am interacting with in real time are paging stuff out to disk. This is really bad for my user experience.

So if my memory is "full" with a bunch of just-in-case stuff, I'll gladly swap it out for real data that a real running process is using. But if it's "full" of data in use by running processes, then I want to think twice about opening a new application. And I want my memory manager to tell me the difference between those two "full" cases.


               total       used       free     shared    buffers     cached
  Mem:       2042520    1816496     226024          0     294344     486908
  -/+ buffers/cache:    1035244    1007276
  Swap:      4194300       8172    4186128
I don't know what you are using to view your memory usage, but any decent tool should give you at least the information that `free` does (see above).


Because he's not getting it back when he needs it?


Recent Windows NT kernels, recent Linux kernels and recent Darwin kernels will drop disk cache pages the moment something more important needs them. Memory management in modern kernels can be very complicated and just because it appears that a process does a lot of paging it doesn't necessarily mean that it needs more physical memory.


The notion that disk cache is so ungodly important that the OS will SWAP MY APPLICATIONS OUT TO DISK TO PRESERVE IT boggles my mind a bit.

Users of desktop systems clearly don't like this behavior, in fact they'll do crazy things like purging disk cache via cron every minute to try to stop this from happening.


Ok, consider the following scenario:

Process A (let's call it Safari) allocated 600MB of memory. Out of this 600MB, it hasn't used 400MB for quite a while (because, for example, it contains data for tabs you haven't looked at for hours). Now I'm not sure how Darwin does this, but I know for a fact that Windows NT kernels will try to write the contents of in-memory pages to the disk at the first good opportunity; this way they save time when the pages in question really do get paged out to disk. I assume that there's a similar mechanism in Darwin. So it's very likely that the 400MB in question is already on the disk. Now the user starts process B (let's call it Final Cut Pro) that reads and writes to the disk very heavily, and typically the same things. It's not an unreasonable thing to do on the kernel's part to just drop Safari's 400MB from physical memory and use it for disk caching Final Cut Pro. Throw in a few mmaps and suddenly it's not obvious at all which pages should be in memory and which pages should be on the disk for the best user experience.


>It's not an unreasonable thing to do on the kernel's part to just drop Safari's 400MB from the physical memory and use it for disk caching Final Cut Pro.

The problem with this line of reasoning is that a large amount of cache will often not give you much more benefit than a small amount. Indeed, that's the nature of caching: you get most of the benefit from the first bit of cache, but the level of added benefit drops dramatically with more cache.

What if using 400MB of cache for FCP only gave 5% of a net performance advantage over using 40MB of cache? Would it still be worth it to take away that extra 360MB from Safari?

And there's the issue of human psychology: people deal much more easily with a little slowdown spread evenly than with a full-on stop for a short amount of time (even if the full-on stop scenario gives you greater average performance). I'd prefer Aperture run 5% more slowly than it might otherwise, if that meant I never saw a beachball when running Safari.


This is a very good point and I think it illustrates well how difficult it is to write a paging / caching system that does the right thing most of the time.


He is not talking about disk cache, disk cache is accounted for separately, he's talking about actual memory being allocated by the kernel_task process. It's been obvious since Lion came out that there's a problem, and so far Apple hasn't fixed it.


Isn't there anything to be done about kernel_task using up so much memory? Since it's non-free software, is Apple planning on doing anything about this?


Seriously, I have the same problem


I also have major memory problems. 8GB RAM total in the system and I have 4GB sitting in inactive and it's paging out? http://imgur.com/VE4GB At this point the system pretty much thrashes until I start closing apps or perform a manual `purge`


I have the same issue. This is unacceptable.


I had a similar problem with my Mac slowing down to a crawl during certain kinds of disk access.

I tried turning off spotlight (which was taking a very long time to complete) but it did not help.

For me, the problem turned out to be a failing hard drive. After replacing my system hard drive, things returned to normal speed.

I'm just posting this in case it might help someone else.


I concur. I've seen this exact behavior on two different systems when the hard drive was beginning to fail. Swapping the drive fixed the problem.

My friend had installed a new larger drive that was causing the problem, whereas there were no beach balls while booting off the original drive via USB.


There's a nice script[1] for tweaking OS X's dynamic pager settings to reduce the system's swappiness that helps a bit. Incidentally, if you have both an SSD and HDD installed, you can use it to move the swapfiles to the HDD to reduce wear.

[1]: http://dropsafe.crypticide.com/article/3848
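(The swapfile location is just the -F argument to dynamic_pager in its launchd job, so the move boils down to something like this sketch - the HDD path is invented for illustration:)

  # In /System/Library/LaunchDaemons/com.apple.dynamic_pager.plist, change the
  # ProgramArguments so the pager is launched as:
  #   /sbin/dynamic_pager -F /Volumes/HDD/vm/swapfile
  sudo mkdir -p /Volumes/HDD/vm
  sudo nano /System/Library/LaunchDaemons/com.apple.dynamic_pager.plist
  # Reboot, then confirm new swapfiles appear under /Volumes/HDD/vm/.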


Have they not improved the dynamic pager parameters in the last two years?


Thank god for this article. My wife is a photographer making heavy use of Lightroom on her 17" MBP and has been experiencing these exact problems for a year or two. We've tried everything to fix it, rebuilding the system from scratch, to no avail.

She had 4 gigs of RAM, which we recently upped to 8 gigs; that reduced the severity of the problem.

I really, really hope this is something that gets fixed in Mountain Lion. Tasks that should take 20 seconds take 10 minutes or more.

It's good to know she's not crazy.


She's not crazy, but she is running Adobe software on a machine without sufficient RAM. Adobe installs god's own cache of really crappy stuff that starts up at boot, and who knows what kind of kexts they shove in there to make your machine unstable.

I won't run any adobe software after I saw the abuse they did to my machine.

Apple basically gets a free pass if you're running Adobe. This is a company that ships crap.

Also, you're probably starving it of sufficient memory. If Lightroom is up, you're probably out of memory, even with 8GB.

I'd recommend getting rid of Lightroom and going to Aperture, or, given Aperture is a bit behind the curve, upgrading to 16GB of RAM and seeing what Adobe-installed processes and kexts you can get rid of.


Moving from Lightroom to Aperture's not a possibility, given the workflow, experience, and catalog data she's built up in Lightroom over the years.

Upgrading from 4 to 8 gigs last week helped a lot. I'd go to 16 except her MBP won't support it.

I'd love to get her on an SSD, but she's on a 1TB drive now and it would be hard for her to fit into a 512GB SSD (especially now that she's on the D800, with huge video files and 72MB raw photo files).

It's frustrating that it will work fine some of the time and not others, implying that the problem could be fixed with better memory management. I do hope that a serious Adobe competitor arises to force Adobe to make its apps faster and more resource-efficient.


>Moving from Lightroom to Aperture's not a possibility, given the workflow, experience, and catalog data she's built up in Lightroom over the years.

Forget the advice. Lightroom is faster, as noted in every review of both programs. Try Aperture yourself with the demo to find out.

Working with 10+ megapixel images is always going to be slow, and with camera advances, it will get worse every time your wife gets a higher-resolution camera -- so comparing it with how it used to be when you had 6MP files is not exactly fair.

More memory and an SSD will definitely help.


>Apple basically gets a free pass if you're running Adobe. This is a company that ships crap.

Yes, millions of professional designers using Adobe software are idiots. You are just making BS claims with no support whatsoever. Try opening a huge image in Photoshop and any other editor and see which behaves better and faster.

The only "crap" stuff Adobe does is mostly whatever it acquired from Macromedia.

>I'd recommend getting rid of Lightroom and going to Aperture, or given aperture is a bit behind the curve, upgrading to 16GB of RAM and seeing what adobe-installed processes and KEXTS you can get rid of.

And I'd recommend not listening to BS anecdotal suggestions on the internets. Read a couple of professionally done reviews and benchmarks; all state that Lightroom is faster and more efficient than Aperture. Aperture got a little better in the last version, but it's still no match for Lightroom.

(I'm not bashing Apple, I like both. Things are what they are, though, and yes, I've tried both of them.)

Thing is: working with freaking huge images, like hundreds of 16 megapixel RAW files, will be slow, whatever you use.


It occurs to me that HFS might be the real culprit. A lot of the bad behaviors described here involve heavy disk use. John Siracusa has a nice round-up of all the HFS faults:

http://arstechnica.com/apple/reviews/2011/07/mac-os-x-10-7.a...


You can run `purge` in a terminal to free your 'inactive' ram.

I've set up a cron job to purge frequently; keeps things humming.
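(For the curious, the whole "setup" is one crontab line; the interval here is my invention, pick your own:)

  # crontab -e, then:
  */30 * * * * /usr/bin/purge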


That's a terrible idea. `purge` is not a tool for freeing inactive ram. It's a tool for deleting the filesystem cache in order to simulate cold boot conditions for performance analysis. Yes, it has the side effect of reducing memory usage, inasmuch as it's throwing away stuff that was kept in memory. But that doesn't make it an appropriate tool to use.


What's the downside? It worked for him.


The downside is it slows filesystem access down across the board until the caches get repopulated. So sure, go ahead and use it if you like having a slow filesystem.

Caches exist for a reason. Deleting them willy-nilly tends to be a bad idea.


Yeah, I've had zero problems doing this for over a year. I used to get into swap hell every other day and now my life is beautiful and awesome.


From my days of optimizing Firefox startup on the Mac, I remember that this doesn't work 100%. The only way to ensure that caches are 100% purged is to unmount the filesystem.


Can you explain a bit more how this works?

The `purge` command has a pretty short man page:

> purge -- force disk cache to be purged (flushed and emptied)

> Purge can be used to approximate initial boot conditions with a cold disk buffer cache for performance analysis. It does not affect anonymous memory that has been allocated through malloc, vm_allocate, etc.

[This seems to talk about it more: http://workstuff.tumblr.com/post/19036310553/two-things-that...

On my 10.5 laptop it didn't seem to dramatically decrease the memory marked "inactive".]


yeah, it's not directly attacking your 'inactive' memory. Just the disk cache, which shows up in the chart as 'inactive'. I'm not familiar with other cases that fit into the 'inactive' piece of the pie.

I have regularly seen 1-3GB of RAM get freed up on a 'purge'.


Cmd line syntax, for the lazy, please?


I made a bash script for it and I'll share it, hacker 2 hacker. Goes a little something like this:

#!/bin/bash

purge


post it to GitHub so I can fork it to support zsh


What license are you releasing your software under?


Proprietary, nanananana


...sudo purge? ;)

> purge // wait ~10 seconds Done.


purge


Does anyone else notice extreme time-based slowdowns using multiple monitors? I've looked through forums and system logs and I can't find an immediate explanation for it. The system tends to hang when using multiple monitors for any extended period of time.


Yes, ever since Lion I need to use the 9600M in my MBP instead of relying on the cooler-running and less power-hungry 9400M. Never noticed the RAM problem, but there are other issues with Lion that I never encountered with Snow Leopard. For instance, sleep seems to take forever now, and I can't use my external monitor without the fridge magnet hack. I might get downvoted for this, but my MBP seems faster and runs better on Windows 7 than on Lion at the moment. I hope ML improves the experience.


Yes, but I'm using weaker hardware (2009 Mini w/ 2.2GHz Core 2 Duo) on two Samsung panels @ 2048x1152 each. It's improved and doesn't completely hang since I bumped it to 8Gigs of RAM, but there is still a noticeable amount of lag at times. Goes away completely on one panel. Not sure which chunk of the hardware or OS I'd blame first, as I'm hitting the limits of most everything. Little box wasn't bought (or intended) for what I'm using it for.


http://wagerlabs.com/blog/2008/03/04/hacking-the-mac-osx-uni...

It's called the Unified Buffer Cache (UBC).


The OS X kernel is open source, so why aren't people reading it to figure out where this bug is?


Yeah, I'm sure if you just read the Darwin kernel there's a '#define USE_LOTS_OF_MEMORY 1' in there that you can change.

The class of problems described in the original post is not the sort of thing you 'just find' by glancing at kernel source code. The problems described sound like they could be an issue of poorly tuned heuristics/thresholds, or necessitate some extra machinery inside the OS X memory manager that isn't there currently. It's not like you can send Apple a pull request on GitHub.


But given enough eyes all bugs are shallow! Are you implying that ESR is full of shit?


yes I know, that's the point of my question ;)

everyone is all positive about "open source" until they have to dive into a few million lines of complicated system-level C code, and then...


> yes I know, that's the point of my question ;)

It's a facile point.

> everyone is all positive about "open source" until they have to dive into a few millions lines of complicated system-level C code and then...

Does anyone doubt that 99% of open source users never read a line of the source code which they are using? The point is, they have the opportunity to, and more importantly, the 1% (or whatever) with the skills and resources are able to actually do something about it.

If you don't have the ability to change or examine the source code, then there is little incentive to do any runtime analysis which might illuminate the problem.


> and then...

become immensely employable.


What are you gonna do about it when you find and fix the bug? You can't run a custom kernel on OS X.



None of these posts detail how to actually replace the kernel (only how to build it), nor what happens when Apple ships an update that changes the kernel.


You install the kernel by running make install, then copying the stuff in BUILD/dst to /. (If you haven't changed any APIs accessible to kexts, you can get away with just copying mach_kernel.) When Apple ships an update, you compile the new kernel.


Do what normal people do. Tell Apple:

http://bugreporter.apple.com


I've gotten a lot of his symptoms, and there might be another cause:

http://reviews.cnet.com/8301-13727_7-20064489-263.html

Bad blocks in the disk, causing the system to beachball frequently due to disk I/O failures when swapping out to disk.

The solution for me was to back up, reformat the disk and zero it out (which causes bad sectors to get remapped), and then restore.


I have to say, I was experiencing a shitload of disk thrashing on my iMac, and eventually decided to replace the drive. So I did the whole ridiculous suction-cup routine (yeah, that's Apple "elegance") and replaced the drive.

Problem resolved. Not that I don't still get inexplicable pinwheels, but nothing like before.


Something is quite wrong indeed. I disabled the dynamic pager, and now my system is working as it's supposed to. Snappy and responsive.

I opened all of my apps, expecting it to crash miserably: instead, the system started paging as it should, stayed responsive (though slower), and promptly returned to normal once it regained memory.

I don't know what's going on, but I can definitely say that this is how I want my computer to work.
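(For anyone wanting to try the same thing, these are the steps usually passed around for Lion. I'm quoting them from memory, so verify the plist path before running, and understand you're trading beachballs for possible out-of-memory kills:)

  $ sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.dynamic_pager.plist
  $ sudo rm /private/var/vm/swapfile*   # reclaim the now-unused swapfiles
  $ sudo reboot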


I think this is the penalty we pay for the advice from two years ago:

"Buy all your developers SSDs. It makes them more productive."


Try "Free Memory"... I looked for a solution to this problem a few months ago. My computer runs better if I free the memory every few days. For example, resuming a parallels virtual machine drops from 30 seconds down to 3-4. Note, this is a 4GB Core 2 Duo with SSD.


My completely non-scientific observations have found that OS X needs plenty of RAM, like any modern OS. However, any disk I/O task has a huge performance impact on the rest of the system, as described by this article. For example, something like unRARing a file will affect the entire system detrimentally, even if CPU usage is nominal. By affect I mean even the cursor can get jittery, which is normally unheard of on OS X.

This typically affects me in low memory situations, such as less than 100MB of free memory. The effect is most pronounced when switching between browser tabs, which causes a lot of disk usage, pulling all of that data in and out of the on-disk cache.


I don't have any tests to prove this, but switching from a 64-bit kernel to a 32-bit one and forcing apps to run in 32-bit mode helps a lot with memory usage on OS X. You can use this app to switch apps to 32-bit: http://www.macupdate.com/app/mac/40405/sixtyfour
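(On machines that support both architectures, the kernel half of this can be done without a third-party app, if I remember the command right; the per-app half is the linked app or the "Open in 32-bit mode" checkbox in Finder's Get Info:)

  $ sudo systemsetup -setkernelbootarchitecture i386   # revert later with x86_64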

If you look at Windows 7 memory consumption with the same set of software you use in OS X, you'll notice memory usage is 1/2 or 1/3 on Windows compared to OS X. Maybe someone knows why that is?


Anyone have any nice articles with tips on how to generally optimize OS X? Especially to better handle this paging/memory management problem that the OP is talking about.

I have an MBP with 4GB of RAM and leave programs open all the time. After a few days, it feels very sluggish.

Aside from doubling my memory and changing my habits (i.e. shutting down every night), how do I fix this?


I'm (fairly) uninformed on this, but from an initial read of the post it seems like memory management may have been optimized for SSD-equipped systems, at the expense of hard disk performance?

Whether this is unintentional, part of a calculated tradeoff, or a cynical business/tactical decision is another thing.


1.5TB of spinning rust in my macbook pro, no observations of this problem. Did see something like it when I was using an SSD that was about to fail.


The only winners are those "Mac cleaner, keeper" apps whose Google Ads know we are all watching beach balls.


Apple should really sell ad space on those beach balls.


Ah, so it's some kind of hourglass/throbber display. As a linux user with a whopping 1 gb of ram on my laptop, I was puzzled by what that term meant. :)


Pinwheels.


Damn.. I turned swap off, and now I have 3 VMs running concurrently on my 2009 MBP with 8GB RAM, and it's smooth! Before this, even one VM would cause the system to periodically become unresponsive. OK, this is my _subjective_ experience, and you can ignore it, but hey, it works for me.


I fucking hate OS X Lion - it's Apple's Vista. I really miss the stability of Snow Leopard. But I can't easily go back.


When Time Machine starts on my MBA (backing up to an external HD), it almost freezes the Mac. Is this related?


Disabling dynamic paging as suggested in the fine article seems to have given some extra speed to apps that constantly showed spinning wheels before (Firefox, Time Machine backups).

However, it's still early days. It might just be a "washed car effect."

(Mac Pro 1,1 / 7GB RAM / WD Caviar Black)


I too have noticed huge issues, particularly when using Photoshop or Final Cut Pro. I figured it was the applications, but if it's the OS, that's definitely a much bigger issue. I regularly restart every 2-3 hours when using those two programs heavily.


In your case, it is the applications. Final Cut Pro X still has some instability, and Photoshop is crap.


If Photoshop is considered crap, the rest of Adobe software is coded by blindfolded bongo-drum players.


Interesting tangent: Plan 9 never had "dynamic paging" (swapping to disk). It supports virtual memory, but not swap. This information is accurate as of about 6 years ago (when I stopped following Plan 9).


Anyone have a radar number? This entire discussion is incredibly vague.


I've had problems with mtmd (Mobile Time Machine) really slowing down writes (and all disk stuff) since the Lion pre-releases. With that off (sudo tmutil disablelocal) things are pretty smooth.
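(And tmutil can turn the local snapshots back on later if you change your mind:)

  $ sudo tmutil disablelocal   # stop mtmd's local snapshots
  $ sudo tmutil enablelocal    # re-enable them later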


Awesome. The thread is more informative and entertaining than the article! This is Hacker News!


I don't understand why anyone who cares about performance doesn't at least max out the RAM, let alone use an SSD as their boot disk. Sure, the SSD is expensive, but the RAM? Dirt cheap. I only wish the MBP could take more.


Please:

font-size: 16px; line-height: 1.5em;


I have an 8-way Xeon Mac Pro w/ 20GB of RAM, almost half of which is 'free' at any point during the day unless I'm doing something really out of the ordinary.

Yet it still swaps to disk ALL THE TIME and a new Terminal.app window can take up to 5 seconds to open.

I really don't give a shit how it's not "technically" broken - that's broken from an experience point of view. And I haven't re-installed the OS (this was an App Store upgrade from Snow Leopard) because that's a major pain in the ass, as this is an actual workstation used to do actual work.

I can't believe this is actually advice, either - that's what Windows users used to say in the 90s. Anyway, I guess I'm just ranting. OS X is wonderful except for the fact that it sucks at managing memory to keep a system snappy.


Yet it still swaps to disk ALL THE TIME and a new Terminal.app window can take up to 5 seconds to open.

That's not swapping. That delay is /usr/bin/login searching the system logs so that it can display the date and time of your last login.

Create a .hushlogin file in your home directory to prevent that.
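(That is, just an empty file in your home directory:)

  $ touch ~/.hushlogin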


Two other things that can make OSX terminal launching slow:

1. The default use of /usr/libexec/path_helper to manage your $PATH.[1]

2. An accumulation of log files in /var/log/asl.[2]

For (1), I just edit /etc/profile and disable path_helper altogether. I set the PATH manually. (This also allows me to put /usr/local/bin before /usr/bin, which is my preference. I've never understood Apple's default settings for $PATH. They put /usr/local/bin later - which defeats the whole point of installing, say, a newer vim in there.) For (2), a cron or launchd job can take care of it.

[1]: http://mjtsai.com/blog/2009/04/01/slow-opening-terminal-wind...

[2]: http://osxdaily.com/2010/05/06/speed-up-a-slow-terminal-by-c...
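(For (2), I believe something like the following as that periodic job works; BSD find on OS X supports -delete, but verify against a scratch directory first:)

  # prune ASL logs more than a week old
  $ sudo find /var/log/asl -name '*.asl' -mtime +7 -delete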


  $ time /usr/libexec/path_helper
  PATH="/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/local/share/python:/usr/local/sbin:/Users/stefan/bin"; export PATH;

  real	0m0.004s
  user	0m0.001s
  sys	0m0.002s

Really? Are you sure path_helper slows things down?


> Are you sure path_helper slows things down?

I'm sure that it did, but not sure that it does. The code you ran isn't quite what /etc/profile does. Here's a run of that on an older machine where I work (see below on versions):

    $ time eval `/usr/libexec/path_helper -s`

    real	0m0.106s
    user	0m0.084s
    sys  	0m0.021s
However, looking at my current machine, I realize that /usr/libexec/path_helper is no longer a shell script at all. It's compiled. Running that same test on a newer machine, I get this:

    $ time eval `/usr/libexec/path_helper -s`

    real	0m0.004s
    user	0m0.001s
    sys  	0m0.003s
So, in a nutshell, I think you're right for current machines: The path_helper advice looks to be out of date. I can't edit my original answer any more, but thanks for making me rethink this (I've been annoyed by slow-opening terminals in OSX for years. Apparently, they worked on this part of the problem.)


You're not crazy -- path_helper used to be godawful slow. It used to be a shell script with exponential (IIRC) time complexity in the number of components in your path. It could very easily add a huge delay to your launch time.


No. The sequence of events concerning this in login:

  0) `login -pf`
  1) quietlog = 0
  2) if ("-q" in argv) quietlog = 1
  3) if (!quietlog) getlastlogxbyname(&lastlog)
  4) if (!quietlog) quietlog = access(".hushlogin") == 0
  5) dolastlog(quietlog) ->
  6)   if (!quietlog) printf(lastlog)

You can see from this that the "searching the system logs" (which, to be clear, is going to be really really fast: /var/run/utmpx is a small file with fixed length fields) happens in step #3, before .hushlogin is checked in step #4.

If you wish to verify, you can read the code at the following URL. Note that __APPLE__ and USE_PAM are defined for the OS X distribution of this code, while LOGIN_CAP is not.

http://opensource.apple.com/source/system_cmds/system_cmds-5...


It does not use /var/run/utmpx anymore.

Look at the code for getlastlogxbyname(). It does an ASL query for last login, and that's the source of the delay.

http://www.opensource.apple.com/source/Libc/Libc-763.12/gen/...


As I stated, that cannot be the source of the delay, because getlastlogxbyname is called based on a check of quietlog before quietlog is updated to take into account .hushlogin. With the exception of step #7, all of this code is inside of a single function (main), which makes it very easy to verify that the sequence of events I'm describing is correct. (I will happily believe you, however, that getlastlogxbyname is internally now using something horrendously slow to look up that information.)

(edit: I have gone ahead and verified your statements regarding getlastlogxbyname now being based on ASL. Using that knowledge, and based on st3fan's comments about the output of dtrace, I then used dtruss to verify my own assertion regarding the order of events. The result: .hushlogin in fact only affects the output of "last login"; it does not keep login from getting that information in the first place with ASL. To keep it from doing so you must pass -q, something Terminal does not do.)


You're right. ASL is the source of the slowdown, but .hushlogin isn't actually doing anything to solve the problem.

The correct way to bypass the ASL query is to set Terminal to open shells with /bin/bash (or your shell of choice) instead of the default login shell. Terminal will still use /usr/bin/login to launch the shell, but it passes the -q switch to prevent the ASL query.

When I dug into the source code a couple of months ago, I inadvertently made both changes (Terminal settings and .hushlogin). Clearly it's the Terminal settings that solved the problem and not .hushlogin. Thanks for clearing it up.


Saurik, using dtrace I do see something touching about 50 logs in /var/log/asl every time login runs. I wonder where that comes from. I don't think it is getlastlogxbyname() or dolastlog().


As thought_alarm states, getlastlogxbyname may not be accessing utmpx anymore (I have not myself checked); however, the behavior of that function cannot be affected by .hushlogin, as it is called before .hushlogin is checked. All of this logic (excepting step #7) happens within a single function (main), so it is very simple to see the flow.

(edit: I have gone ahead and checked: thought_alarm is correct, in that getlastlogxbyname is now using ASL instead of utmpx; however, I have also verified my sequencing assertion with dtrace: .hushlogin has no effect on the usage of ASL, but manually passing -q to login does: it thereby cannot be the source of a .hushlogin-mediated delay.)


But why does .hushlogin make Terminal.app come alive faster?


As I cannot replicate this behavior, I am not certain. On my OS X 10.6.8 11" Air, terminal sessions are only being delayed by my 2.5MB .bash_history; if I clear that file (something I sadly wouldn't want to always do ;P), terminals come up with almost no delay.

However, assuming that is the case for some people, we have to look elsewhere than the last login lookup. There are only a few other usages of quietlog: motd (open file, read it), mail (check environment, stat file), and pam_silent.

The first two are not going to cause any kind of performance issue, so we have to look at pam_silent. This variable is particularly interesting, as it is only set to 0 if -q is not passed (and it is not) and there is no .hushlogin (it is not directly controlled by quietlog).

If it is not 0, then it is left at a default value, which is PAM_SILENT, and is passed to almost every single PAM function. It could very well be that there is some crazy-slow logic in PAM that is activated if you do not set PAM_SILENT.

Given this, someone experiencing this issue might look through the code for PAM to see if anything looks juicy (and this is something that will best be done by someone with this problem, as it could be that they have some shared trait, such as "is using LDAP authentication").

(edit: FWIW, I looked through OpenPAM, and I am not certain I see any actual checks against PAM_SILENT at all; the only mentions of it are for parameter verification: the library makes certain you don't pass unknown flag bits to anything.)


Wow, I'd forgotten how long ago I'd done this:

  kore:~$ ls -l .hushlogin
  -rw-r--r--  1 jay  staff  0 Aug 15  2002 .hushlogin


> "That delay is /usr/bin/login searching the system logs so that it can display the date and time of your last login."

Am I missing something? `w` on my linux system takes well below one second:

  [burgerbrain@eeepc] ~ % time w   
  14:11:18 up 19:41,  6 users,  load average: 0.27, 0.10, 0.14
  USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
  REDACTED
  w  0.01s user 0.05s system 61% cpu 0.096 total
I can't imagine why anything like that would ever take anywhere near 5 seconds.


That shows your currently active login, which is of course in main memory. Your most recent login may well no longer be active, and so you have to search the log files.


Ah yeah, had a brain fart there. Even so, `last` is similarly fast.


You sir are a gentleman and scholar. As a laptop user with only 8 GB of RAM and a slow non-SSD hard disk, this one trick just made my day! Thanks.


Only 8GB? Ha. I have 4GB, and having any kind of Flash player open in Chrome means I have no RAM free.


Really? Because I have 2GB too and I can have several flash videos open and get along just fine with other programs.


If you are interested what happens when you open a new terminal window, run 'sudo opensnoop' in one window and then open a new window.

Here the majority of files being opened are in /var/log or Homebrew related.

Also interesting ... creating a .hushlogin did not change much. It still opens about 50 files in /var/log/asl/
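(If I remember opensnoop's flags right, you can also narrow the trace to a single process name:)

  $ sudo opensnoop -n login   # only files opened by login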


Awesome! Thanks for pointing out opensnoop!


It's painful guessing how many hours of my life I could have back if only I had known about this before. Thanks for the pain, thought_alarm.


When the logs are read once, they should stay in the VFS cache as long as there is enough RAM, shouldn't they?

Anyway, instead of running /usr/bin/login, I just use /bin/zsh as my Terminal.app startup command, which is much faster.

However, every time I access the file system, even hitting tab for autocompletion, it takes a few seconds. Even cd'ing into some directory can sometimes take a few seconds; "sometimes" here roughly means it's been more than an hour since I last accessed that directory.

Edit: Maybe it's Time Machine or Spotlight that destroys the effectiveness of the VFS cache?


Changing directory is instant for me. (no SSD here, all spinning rust.) Tab completion is instant too. Run Time Machine every hour, never made a change to the spotlight config.


Anything that hits the disk heavily, including Time Machine, kills my Snow Leopard machine: lack of UI responsiveness, beachballs, et al.


I know I shouldn't say it (a dozen have already done so), but thank you. Just wonderful. My terminal just got 10 times better. I'd make it 'best comment of the month' if I could.


> That delay is /usr/bin/login searching the system logs so that it can display the date and time of your last login.

Normally Unixes (Linux included) use a pretty efficient binary file called wtmp for that; I'm surprised if OS X doesn't. Reading the last disk block of that file would give you the last login with overwhelming probability.

There would have to be a lot of seeks, even on a slow rotating laptop HD, to get a 5-second delay: with a 15 ms seek time you get 333 seeks in 5 seconds.


OS X does: /var/run/utmpx.


So it takes 5 seconds to search the system logs for the date of the last login? How is this any less broken?


It isn't broken from a memory management point of view, which is what the post being discussed is talking about, I think is the point.


Wow, I'm really surprised this isn't default behavior. Knowing when you last logged in isn't nearly useful enough to justify the delay it so routinely causes.


It's worth it when you realize someone else has been logging on as you over the weekend.


Well, I suppose it seems sillier on a laptop than on a desktop.


or a server...


Well sure, but I wasn't saying that it should be default behavior for Mac OS X server or server-aimed n*x distros. I don't think that most people are using non-server Mac OS X as a remote server, and those who are could override the default.


But then you can't trust the logs.


I want to reach across the internet and hug you right now.


Wow, thank you for this. The delay caused by this behavior has been driving me crazy for a really long time.


Hard drives can read data sequentially very, very quickly. On whatever generic SATA hard drive I have in my workstation, I get:

  eklitzke@gnut:~ $ time sudo head -c 1073741824 /dev/sda > /dev/null
  
  real	0m8.267s
  user	0m0.220s
  sys	0m0.810s
So I can read 1G off the drive in about 8 seconds.

Even if the login command does need to sequentially read through logs to find the last login time (and I'm skeptical of that, because that would be a stupid way to implement login), I don't see how that would explain multiple seconds of waiting.


Log files are not stored sequentially on disk because they are constantly being appended to.

http://en.wikipedia.org/wiki/File_system_fragmentation


Is it possible that power-saver is spinning down the hard-disk?


Fantastic. Thanks for that.


Wow - that does make a difference in launching a terminal!


Relatedly, I call "`eval resize`" in my .zshrc. Any idea why that takes ~5s if it's more than a minute or two since I last did it, but is instantaneous otherwise? (Yes, I'm certain it's that line taking the time: if I start typing during the delay before the prompt shows up, I get a resize error about unrecognized characters and then a couple bits of random ANSI on my input.)


Thanks for this. I love learning something simple, new, useful, and that I probably should have already known after decades writing software.


Thanks a ton! This has been really bothering me lately. I could not figure out what was causing such a delay.


Doesn't OSX rotate logs? Seems weird that it should take so long.


I'd send you a dollar if I could for this post. Awesome.


Does this apply to iTerm as well?


You can change your profile to not launch a new login shell for each window/terminal, but that might cause other issues.


Yes


HALLELUJAH. Thanks.


> I haven't re-installed the OS (this was an App Store upgrade from Snow leopard) because that's a major pain in the ass as this is an actual workstation used to do actual work.

This has never been hard to do on OS X since 10.0 because they followed the Unix convention of separating user data from system files. It is, of course, rarely necessary unless you've used superuser access to seriously muck with things under /System.

I've done this repeatedly over the years when dealing with beta releases & system migrations and it's never taken much longer than the time needed to copy the files.


"Yet it still swaps to disk ALL THE TIME "

You should run some dtrace magic to find out what 'it' is. Might be the OS, might be a badly behaving application. Who knows.

I find it too easy to blame the OS for all of this. One poorly written app can cause a lot of performance damage.
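(iosnoop, which ships with OS X, is a decent start; it prints each disk I/O along with the PID and name of the process responsible:)

  $ sudo iosnoop   # watch who is actually hitting the disk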


Putting an SSD in for me solved all the problems you mentioned. Haven't seen a beach ball since.

Lion has probably been optimized for SSDs, since Apple is quickly getting rid of spinning disks in their entire lineup.


> Yet it still swaps to disk ALL THE TIME

Is it actively paging to disk at times when there is plenty of free RAM? A common complaint on Linux is "I've closed a lot of stuff but it is still using swap" because even if the pages are read back into RAM when next used they are kept in the swap area in case they need to go out again (that way they don't need to be written unless changed, saving some I/O).

Under Linux you can see how much is found in both RAM and on disk with the "SwapCached" entry in /proc/meminfo - it won't stop counting those pages as present in the swap areas until either it runs out of never-used swap space and needs to overwrite them to page out other pages, or the page changes in memory (at which point the copy on disk is stale, so it cannot be reused without being updated anyway).
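(Illustratively, with made-up numbers:)

  $ grep -E 'SwapTotal|SwapFree|SwapCached' /proc/meminfo
  SwapTotal:       4194300 kB
  SwapFree:        4000000 kB
  SwapCached:        51200 kB   # pages present both in RAM and in swap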

> and a new Terminal.app window can take up to 5 seconds to open.

Have you monitored system activity at such times to see where the delay is? While it could be due to unnecessary disk I/O, it could also be elsewhere, such as delayed DNS lookups, if anything in your profile scripts does such a thing and there is an issue with your connectivity or DNS configuration.

(I'm not an OS X user and never have been so sorry if these thoughts are irrelevant - but I'm guessing memory management in OS X is similar enough for knowledge of how Linux plays the game not to be completely useless)


I disable swap on all my machines with over 4GB... Windows, Linux, or OS X. There are people who will advise against this, but in 5 years I've not had any problems. I think once or twice Firefox leaked uncontrollably and was therefore killed.


Not sure what my settings are now, but I used to keep swap off on my Windows machine (especially while playing games, since swapping kills gaming performance).


I can't believe this is actually advice, either - that's what Windows users used to say in the 90s.

At least Windows, having gone through this particular growing pain, is nowadays fairly painless to reinstall. Did OSX ever improve that aspect of their product?


1) It's never been "hard" to reinstall OS X. What was difficult last time you tried?

2) The OS reinstall path is identical to the OS upgrade path, making it very well tested. This has been the case since (IIRC) Snow Leopard.

3) The latest few generations of hardware can even (re)install the OS over the internet, meaning you don't even need to carry around media to reinstall. (Assuming you're on a fast connection or are willing to wait.)

It's entirely painless to reinstall OS X.


Yes. I reinstalled OS X on a couple machines due to various issues. I was extremely surprised at how nice the experience is. When I logged back in after the reinstall, Chrome even reopened with my old tabs in it.


Not sure if that is a good or a bad thing. It would really, really annoy me, though. If I reinstall an OS, I do it because I want a clean slate, everything back to "standard". Opening my old tabs in my browser tells me that it kept various config/temp files, which is probably what I wanted to get rid of.


If you want to lose all of your user settings, create a new user account and nuke the old one.

If you want to replace various bits of the system, reinstall the OS.

These are different scenarios with different use-cases, and I'd argue it's a much worse idea to conflate the behaviours, as in certain other OSes, than OS X's fault for properly treating them as separate operations.


User and System files are separate in Unix.

It's a good thing.

