Discussion:
A 2025 NewYear present: make dpkg --force-unsafe-io the default?
Ángel
2025-01-13 00:20:01 UTC
Resending without the attachments, since the mailing list seems to have
fully eaten the message rather than just holding it for moderator
approval, as I originally thought.

-------- Forwarded Message --------
From: Ángel
To: debian-devel
Subject: Re: A 2025 NewYear present: make dpkg --force-unsafe-io the
default?
Date: Sat, 04 Jan 2025 15:26:33 +0100

I have been using eatmydata apt-get for many years.

In fact, it often pays off to do an initial apt-get install -y
eatmydata so that you can run the actual command as eatmydata apt-get.

This is especially noticeable when running pipelines or other similar
processes, where you install almost everything many times. On a
relatively up-to-date system, not so much (but see below).


Of course, this makes sense when I'm working in a VM or a container,
where I know the system will not crash. If it actually did, the machine
would be rebuilt rather than needing the interrupted system to be
consistent.
On the other hand, if this machine were a pet on physical hardware, I
would probably keep the safe defaults.


I seem to remember a big speedup from adding eatmydata to a process
that was creating multiple images: from what used to be *hours* to
something _reasonable_ (whatever it was).


In order to do some benchmarks, I got a bullseye docker image that
happened to have a few old packages (e2fsprogs libcom-err2 libext2fs2
libsepol1 libss2 libssl1.1 logsave perl-base tzdata).


normal dist-upgrade: 1m6.561s

eatmydata: 0m1.911s

force-unsafe-io: 0m9.096s
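As a sanity check on the speedups implied by these times, here is a small illustrative script (mine, not part of the original benchmark):

```python
# Speedup ratios implied by the benchmark-1 wall-clock times above.
# Times are the quoted "XmY.YYYs" values converted to plain seconds.

def to_seconds(minutes: int, seconds: float) -> float:
    """Convert a minutes/seconds pair to plain seconds."""
    return minutes * 60 + seconds

normal = to_seconds(1, 6.561)      # 1m6.561s
eatmydata = to_seconds(0, 1.911)   # 0m1.911s
unsafe_io = to_seconds(0, 9.096)   # 0m9.096s

print(f"eatmydata speedup:       {normal / eatmydata:.1f}x")  # ~34.8x
print(f"force-unsafe-io speedup: {normal / unsafe_io:.1f}x")  # ~7.3x
```

So on this small upgrade, eatmydata is roughly 35x faster than the default, and --force-unsafe-io roughly 7x.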



I am attaching the full logs as benchmark-1.

The packages still had to be downloaded, but they were all fetched from
an apt proxy that had already cached them, so the network factor is
basically nonexistent.


I then tried to stress it a bit more and install apache2 along with a
large set of additional packages:

0 upgraded, 3835 newly installed, 0 to remove and 0 not upgraded.
Need to get 6367 MB of archives.
After this operation, 19.4 GB of additional disk space will be used.

This actually required multiple attempts to prime the cache, since
deb.debian.org seemed to be throttling me with 502 errors.


This longer install took:

normal: 245m57.148s = 4h 5m 57s

eatmydata: 36m56.748s

force-unsafe-io: 83m40.860s
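The same ratio check for the longer run; the `parse_time` helper is mine, written for the `XmY.YYYs` format the shell's `time` builtin prints:

```python
import re

def parse_time(s: str) -> float:
    """Parse a time string like '245m57.148s' into seconds."""
    m = re.fullmatch(r"(\d+)m([\d.]+)s", s)
    return int(m.group(1)) * 60 + float(m.group(2))

normal = parse_time("245m57.148s")   # 14757.148 s, i.e. ~4h 5m 57s
eatmydata = parse_time("36m56.748s")
unsafe_io = parse_time("83m40.860s")

print(f"eatmydata speedup:       {normal / eatmydata:.2f}x")  # ~6.66x
print(f"force-unsafe-io speedup: {normal / unsafe_io:.2f}x")  # ~2.94x
```

The relative gains shrink compared to benchmark-1, consistent with the observation below that the longer install spends a lot of time in postinsts rather than in dpkg's fsync()s.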


Logs attached as benchmark-2.


Admittedly, this longer install does lots of other things, from mandb
builds to creation of ssh keys, with very diverse postinsts, which
eatmydata would affect as well.
Still, those additional steps are the same across the three runs (for
example, in benchmark-2 package fetching rises to about 4½ minutes,
but the difference between configurations is negligible¹), and I think
apt/dpkg would still be the main fsync() user, so this seems a
realistic scenario of what an end user experiences.
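The fsync() cost being discussed is easy to reproduce in miniature. A self-contained sketch (mine, not from the original benchmarks) that times writing many small files with and without an fsync per file, roughly what dpkg does when unpacking:

```python
import os
import tempfile
import time

def write_files(directory: str, count: int, sync: bool) -> float:
    """Write `count` small files, optionally fsync()ing each one,
    and return the elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    for i in range(count):
        path = os.path.join(directory, f"file-{i}")
        with open(path, "wb") as f:
            f.write(b"x" * 4096)
            f.flush()
            if sync:
                os.fsync(f.fileno())  # force the data to stable storage
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as d:
    synced = write_files(d, 200, sync=True)
with tempfile.TemporaryDirectory() as d:
    unsynced = write_files(d, 200, sync=False)

print(f"with fsync:    {synced:.3f}s")
print(f"without fsync: {unsynced:.3f}s")
```

On rotational storage with ext4 in data=ordered mode the synced run is typically one to two orders of magnitude slower; on fast NVMe the gap shrinks, which is part of why these benchmark numbers vary so much between setups.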





¹ $ grep ^Fetch benchmark-2.txt
Fetched 6367 MB in 4min 31s (23.5 MB/s)
Fetched 6367 MB in 4min 29s (23.7 MB/s)
Fetched 6367 MB in 4min 41s (22.6 MB/s)


Versions used were:
ii apt 2.2.4 amd64 commandline package manager
ii dpkg 1.20.13 amd64 Debian package management system



Happy New Year everyone
Julien Plissonneau Duquène
2025-01-13 10:20:01 UTC
Hi,
Post by Ángel
Resending without the attachments,
I would suggest using paste.debian.net or snippets on Salsa for
attachments.
Post by Ángel
normal dist-upgrade: 1m6.561s
eatmydata: 0m1.911s
force-unsafe-io: 0m9.096s
Thanks for these interesting figures. Could you please also provide
details about the underlying filesystem and storage stack, and the
effective mount options (cat /proc/fs/.../options)?

Cheers,
--
Julien Plissonneau Duquène
Ángel
2025-01-14 02:00:02 UTC
(it seems the forwarding broke the thread 😕)
Post by Julien Plissonneau Duquène
Post by Ángel
normal dist-upgrade: 1m6.561s
eatmydata: 0m1.911s
force-unsafe-io: 0m9.096s
Thanks for these interesting figures. Could you please also provide
details about the underlying filesystem and storage stack, and the
effective mount options (cat /proc/fs/.../options)?
Cheers,
Sure.

This server has two (mechanical) disks, which are joined in a software
RAID 1, on top of which lies LUKS, which has an ext4 filesystem,
mounted with defaults,usrquota (i.e. rw,relatime,quota,usrquota,
data=ordered).
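For reference, an option string like that can be split mechanically; a small helper (mine, for illustration) that turns a mount(8)/fstab-style option string into a dict:

```python
def parse_mount_options(opts: str) -> dict:
    """Parse a comma-separated mount option string (as shown by
    mount(8) or /proc/mounts) into a dict; bare flags map to True."""
    result = {}
    for item in opts.split(","):
        key, sep, value = item.partition("=")
        result[key] = value if sep else True
    return result

opts = parse_mount_options("rw,relatime,quota,usrquota,data=ordered")
print(opts["data"])      # ordered
print(opts["relatime"])  # True
```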

Then, docker is using aufs for the containers, which adds yet another
layer.

I'm afraid that if any of those layers is slowing things down more
than "normal", it might be difficult to identify which one.

The effective ext4 options are:
rw
delalloc
barrier
user_xattr
acl
quota
usrquota
resuid=0
resgid=0
errors=continue
commit=5
min_batch_time=0
max_batch_time=15000
stripe=0
data=ordered
inode_readahead_blks=32
init_itable=10
max_dir_size_kb=0
Cheers
Julien Plissonneau Duquène
2025-01-14 08:20:01 UTC
Thank you.

It appears that these options lack auto_da_alloc, which may (this is
still only a hypothesis at this point) explain the much better
performance of --force-unsafe-io in your case.
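That check is easy to script against the option list quoted earlier in the thread (one option per line, as in the /proc/fs/ext4 options file; subset shown here for brevity):

```python
# ext4 options as reported earlier in the thread, one per line.
reported = """\
rw
delalloc
barrier
user_xattr
acl
quota
usrquota
resuid=0
resgid=0
errors=continue
commit=5
data=ordered
"""

# Keep only the option names, dropping any "=value" part.
options = {line.partition("=")[0] for line in reported.splitlines()}
print("auto_da_alloc" in options)  # False: the option is indeed absent
```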

Cheers,
--
Julien Plissonneau Duquène