Are hardlinks common?

A simple question: how often do programs create hardlinks? Rarely? Often? Very often?

I had this 'genius' idea to move every cache I could think of to /tmp (a ramdisk backed by tmpfs) by softlinking (e.g. ~/.cache/to -> /tmp/from), and today I asked myself: am I doing something stupid? Because if a program attempted to create a hardlink into a softlinked directory on another filesystem, I could shoot myself in the foot, right?

Hard links share the same inode, so they only work within the same filesystem.
If you delete the original file, the hard link will still work; in that sense it behaves more like an actual "copy" than a soft link does.
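A quick way to see both behaviours for yourself (a sketch; the file names are just examples, and the exact error wording varies by coreutils version):

$ echo hello > original
$ ln original hardlink        # a second name for the same inode
$ ls -i original hardlink     # both names show the same inode number
$ rm original
$ cat hardlink                # the data is still reachable
hello
$ ln hardlink /tmp/elsewhere  # /tmp is tmpfs, i.e. another filesystem
ln: failed to create hard link '/tmp/elsewhere' => 'hardlink': Invalid cross-device link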

See this picture from AskUbuntu:

[diagram illustrating hard links vs. soft links]

(Correct me if I'm wrong, please.)

2 Likes

I do not exactly understand what you are trying to do. The subject talks about hardlinks, but in the body you talk about softlinking something. So I am only replying here to your last sentence: "Because if a program attempted to create a hardlink into a softlinked directory on another filesystem, I could shoot myself in the foot, right?"

Since hardlinks only work within the same filesystem, a program trying to create a hardlink through a softlinked directory on a different filesystem will have to fail.

But I do not know of any programs that use hardlinks other than backup programs; dirvish, for example, uses hardlinks.

I don't know what you are actually trying to achieve, but I think it is worthwhile to also look into bind mounts.
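For example (hypothetical paths; note that a bind mount only changes where a directory appears, so a hardlink from a different underlying filesystem into it would still fail):

# make /tmp/cache also appear at ~/.cache, transparently to programs
sudo mount --bind /tmp/cache ~/.cache

# or permanently via /etc/fstab:
# /tmp/cache  /home/user/.cache  none  bind  0  0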

3 Likes

To ease up on writes to my SSD, I have created a collection of softlinks in ~/.cache/.

For example, ~/.cache/mozilla points to /tmp/mozilla (it is a softlink).
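For reference, such a link can be set up like this (a sketch using the paths above; it assumes the existing cache is disposable):

mkdir -p /tmp/mozilla                # tmpfs target, gone after reboot
rm -rf ~/.cache/mozilla              # drop the on-disk cache first
ln -s /tmp/mozilla ~/.cache/mozilla  # ~/.cache/mozilla is now a softlink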

My fear is that some old and dangerous program would, for example, write a file somewhere under ~/.local/share/... and then decide "oh, I need that in the cache, so I'll just hardlink it", failing in the process because the cache directory is softlinked to /tmp/ and that's a different filesystem.

lrwxrwxrwx   ~/.cache/qutebrowser -> /tmp/qutebrowser/

I just don't know what happens if a hardlink is attempted across filesystems.

That's the general concern I have; I don't want programs to fail because of my "IDEA".

I had a similar concern with my PC. I ended up just linking the ~/.cache/spotify directory to a spinning disk. Everything else is not worth thinking about from a write-volume point of view.

Matthias

1 Like

I am in the mood to write a small script that checks a subset of subdirs after login and relinks them if they are not yet linked.
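A minimal sketch of such a script, assuming the caches are disposable and with placeholder subdir names:

#!/bin/sh
# Recreate the tmpfs targets and relink selected ~/.cache subdirs at login.
for d in mozilla qutebrowser; do           # adjust this subset to taste
    mkdir -p "/tmp/$d"
    if [ ! -L "$HOME/.cache/$d" ]; then    # not yet a softlink?
        rm -rf "$HOME/.cache/$d"           # the cache is assumed disposable
        ln -s "/tmp/$d" "$HOME/.cache/$d"
    fi
done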

The Steam browser cache is in an idiotic place, ~/.local/share/Steam/config/htmlcache (why? why not :smiley: but I say no: off to RAM with you).

And I have found so many caches outside ~/.cache that I just said no! They grow and never shrink, filled with unneeded stuff.

Modern SSDs can take LOTS of writes without a problem, while still maintaining a > 10 year lifetime even if you write 100 GB per day.

On topic: I also symlinked a lot of stuff from my home to the HDD, but never anything from .cache or .config.
Browser profiles can easily be relocated to the HDD; in conjunction with profile-sync-daemon it gets even better.
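For reference, profile-sync-daemon boils down to a user service plus a config file (a sketch; package and unit names may vary by distro):

# browsers to manage are listed in ~/.config/psd/psd.conf
systemctl --user enable --now psd.service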

1 Like

I noticed that many Electron apps have their cache in the ~/.config/$app/cache folder. Annoying. :stuck_out_tongue:

1 Like

Sounds more like server-grade units; I don't have a server-grade SSD.

Only an Intel SSDSC2BW240H6 ("The SSD will have a minimum of five years under client workload" --manufacturer). Yeah, that could mean 2.5 years under a power-user workload and even less for crazy psycho users :smiley: :P. The lower the load, the longer it will survive, and I want my stuff to work for at least 7 years.

No, not necessarily. I don't have server-grade SSDs either.

The German computer magazine Heise/c't tested the lifetime of a bunch of SSDs recently.
The best of them (a Samsung 850 Pro) took a whopping 10 petabytes of writes before it died (*).

My oldest SSD is nearly 10 years old, still going.

I wouldn't worry about lifetime unless you really do A LOT of writes (tens of GB per day, every single day).

Intel drives are especially reliable in my experience (I have two Postvilles v2 and one v3, a.k.a. Intel 320).

(*) 10 million gigabytes, i.e. if you write 100 GB per day, every day, it could last for 100,000 days ≈ 273 years (!!!). You will, however, lose some speed over the years, and it's obviously a best-case scenario.

1 Like

I question how they tested that. Did they buy an SSD and wait 10 years for results? Is it a continuous test? How many units did they try?

I just think writes play a bigger role in SSD life than you maybe do :slight_smile: so to be safe I moved the caches to RAM, which I rarely use up anyway (16 GB of RAM, and I rarely hit 50% even with my approach included). Plus I now have more space in my /home :slight_smile: so it's a win-win.

They used their own tool called H2Bench (on Windows) to bombard the SSD.

Sure, it's always best to reduce writes to a minimum.
But I'd rather rely on the test results of a renowned computer magazine than on pure feelings or assumptions :wink:

Don't get me wrong, I have plenty of stuff moved to /tmp or to my HDD for the same reason.
I just think that many people underestimate the lifetime of SSDs, with the exception being the very cheap ones, which only got around ~200 TB in the aforementioned test (which is still quite a lot).

PS: check the SMART output for your SSD; it will tell you how many bytes have already been written. From that, you can extrapolate usage and lifetime.
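For example (made-up numbers; the attribute name differs by vendor, e.g. Total_LBAs_Written on Samsung, Host_Writes_32MiB on Intel):

$ sudo smartctl -A /dev/sda | grep -i written
241 Total_LBAs_Written  ...  5000000000
# Samsung counts 512-byte LBAs: 5e9 x 512 B = 2.56 TB total;
# over, say, 500 days of use that is about 5 GB per day.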

Can you blame them? The first-generation SSDs were such horse excrement by design.

Standard HDD developers went the way of "let's make it bigger and correct all the errors along the way", relying solely on software correction (no, thank you).

SSD developers could do the same: shrink the cells and let the hardware controller correct the errors (surely that will end well).

That's why I never buy an SSD larger than 256 GB.

It was not really lifetime that was the problem, but rather firmware bugs in the controller causing data corruption. :wink:

Purely on lifetime: for the same amount of "user writes", a bigger disk will have a longer life, as there are more cells to distribute the erase cycles that limit the life of flash.
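Rough arithmetic to illustrate (ballpark vendor figures, not from the test above): a typical consumer series is rated around 150 TBW for the 250 GB model but around 600 TBW for the 1 TB model of the same line; four times the cells to spread erase cycles over gives roughly four times the endurance.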

2 Likes

The documentation I found says:

Minimum Useful Life/Endurance Rating: 5 years
The SSD will have a minimum of five years of useful life under client workloads with up to 40 GB of host writes per day.

I doubt that you reach this as an average daily volume over five years.

1 Like

My NVMe SSD is now 14 months old, and it has

Data Units Read:                    10.939.080 [5,60 TB]
Data Units Written:                 13.269.024 [6,79 TB]

under "normal" usage, so roughly 16 GB written per day average.
Maybe this gives you an idea. You can check with smartctl -A (from the smartmontools package).
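For reference, NVMe "Data Units" are counted in blocks of 1000 x 512 bytes, so the numbers above work out as:

13269024 units x 512000 B ≈ 6.79 TB written
6.79 TB over ~14 months (~420 days) ≈ 16 GB per day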

Ah, my SMART data is not as smart as yours, apparently:

smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.17.0-1-MANJARO] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0032   100   100   000    Old_age   Always       -       0
  9 Power_On_Hours_and_Msec 0x0032   100   100   000    Old_age   Always       -       4389h+00m+00.000s
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       1236
170 Available_Reservd_Space 0x0033   097   100   010    Pre-fail  Always       -       0
171 Program_Fail_Count      0x0032   100   100   000    Old_age   Always       -       0
172 Erase_Fail_Count        0x0032   100   100   000    Old_age   Always       -       0
174 Unexpect_Power_Loss_Ct  0x0032   100   100   000    Old_age   Always       -       29
183 SATA_Downshift_Count    0x0032   100   100   000    Old_age   Always       -       2
184 End-to-End_Error        0x0033   100   100   090    Pre-fail  Always       -       0
187 Uncorrectable_Error_Cnt 0x0032   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0032   029   100   000    Old_age   Always       -       29 (Min/Max 15/49)
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       29
199 UDMA_CRC_Error_Count    0x0032   100   100   000    Old_age   Always       -       0
225 Host_Writes_32MiB       0x0032   100   100   000    Old_age   Always       -       150393
226 Workld_Media_Wear_Indic 0x0032   100   100   000    Old_age   Always       -       65535
227 Workld_Host_Reads_Perc  0x0032   100   100   000    Old_age   Always       -       37
228 Workload_Minutes        0x0032   100   100   000    Old_age   Always       -       65535
232 Available_Reservd_Space 0x0033   097   100   010    Pre-fail  Always       -       0
233 Media_Wearout_Indicator 0x0032   091   100   000    Old_age   Always       -       0
241 Host_Writes_32MiB       0x0032   100   100   000    Old_age   Always       -       150393
242 Host_Reads_32MiB        0x0032   100   100   000    Old_age   Always       -       92003
249 NAND_Writes_1GiB        0x0032   100   100   000    Old_age   Always       -       45265

So I don't know how to get the "Units Written" field.
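For what it's worth, on this drive the attribute name itself gives the unit: Host_Writes_32MiB counts blocks of 32 MiB, so:

150393 x 32 MiB = 4812576 MiB ≈ 4700 GiB written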

2 Likes

Mine looks like yours. :stuck_out_tongue:

Please have a look at this: https://serversupportforum.de/forum/serverhardware/55283-smart-werte-ssd-lebensdauer.html

Your SSD has about 4700 GiB written and a "remaining life" of 91%.

PS: I just checked my SSD and it is in good shape: Wear_Leveling_Count is at 99% with 15 TB written.

1 Like

Yep, the output for SATA SSDs is different; the NVMe output is easier to read.

mbod did the work for you :wink:
