My computer gets very slow and crashes when I access folders with many files. It hadn't been a problem for a while, but today I've had three crashes.
I can reproduce the problem by opening any folder with a lot of content and waiting, or by clicking deeper and deeper into subfolders and trying to open things. Generally speaking, trying to open a file that hasn't loaded a thumbnail yet will cause issues. It also seems that if I visit any one folder enough the problem goes away, but if I suddenly start using another I'll get the problem again.
(pic unrelated outside of being computery)
>>155406
I should note, CPU and memory usage appear to be perfectly normal while this is happening, but the machine freezes up completely for minutes at a time and eventually crashes.
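If anyone wants to put numbers on it, here's a rough Python sketch that times the metadata reads a file manager does when it opens a big folder (the folder and file count are made up for the demo; point it at a real folder to measure yours):

```python
import os
import tempfile
import time

def measure_folder_scan(path):
    """Time how long it takes to stat every entry in a folder,
    roughly what a file manager does while building its view."""
    start = time.perf_counter()
    count = 0
    worst = 0.0
    for entry in os.scandir(path):
        t0 = time.perf_counter()
        entry.stat()  # one metadata read per file
        dt = time.perf_counter() - t0
        worst = max(worst, dt)
        count += 1
    total = time.perf_counter() - start
    return count, total, worst

# demo against a throwaway folder full of small files
with tempfile.TemporaryDirectory() as d:
    for i in range(2000):
        with open(os.path.join(d, f"file_{i}.txt"), "w") as f:
            f.write("x")
    count, total, worst = measure_folder_scan(d)
    print(f"{count} files, total {total:.3f}s, worst single stat {worst * 1000:.2f}ms")
```

If one stat call occasionally takes whole seconds while CPU sits idle, that points at the disk rather than the OS.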
try defragging your hard drive, it's probably super fragmented
>>155450
That seems to be the number-one recommendation from several sources. What risks are there to defragging? I've seen advice saying you should back up everything beforehand; is that overly cautious?
>>155454
>is that overly cautious?
Yes.
>>155454
it just reorganizes the disk so that the head can read data more efficiently. it won't do anything to your files
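to picture why that helps, here's a toy model (block numbers are made up): total head travel to read one file whose blocks are scattered across the platter vs. laid out in a row

```python
import random

def head_travel(block_positions):
    """Sum of distances the head moves visiting blocks in file order."""
    pos = 0
    travel = 0
    for b in block_positions:
        travel += abs(b - pos)
        pos = b
    return travel

random.seed(1)
fragmented = random.sample(range(1_000_000), 100)  # blocks scattered everywhere
contiguous = list(range(500_000, 500_100))         # same 100 blocks in a row

print("fragmented travel:", head_travel(fragmented))
print("defragged travel: ", head_travel(contiguous))
```

the fragmented layout makes the head zigzag the whole platter; after defrag it's one seek plus a straight run. that's the entire trick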
>>155454
they tell you to back up your stuff because it's good practice. any drive can crash at any moment, especially when it's undergoing intensive work
>>155406
If it's an SSD, don't defrag it though; you will have to find another solution. Defragging an SSD is a bad idea.
>>155498
>Defragging an SSD is a bad idea.
It's not a terrible idea.
The minus side is that it needlessly drives up the SSD's write count, which isn't really the issue it was back when SSDs were new and experimental. SSDs nowadays can put up with thousands of full-device writes, meaning you could defragment your disk every single day for years without killing it. Here's a guy who ran a bunch of SSDs at full-speed write non-stop for over a year: http://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead
The plus side is that it keeps related data contiguous, which spreads it across the drive's flash channels and means your reads and writes can exploit the SSD's full internal parallelism. SSDs *are* faster at sequential I/O, primarily because sequential data is striped evenly across every bank of chips (much like with multi-channel RAM).
Even if the FTL isn't putting data where it says it's putting data, the way every defragmenter writes a single contiguous file at a time means that the FTL will lay the file down contiguously *somewhere*. In addition, it leaves the free space contiguous, meaning that files the OS writes later will also be laid down contiguously.
The cost is minuscule, and there is a benefit. If you do defrag an SSD, certainly don't worry that you've ruined it.