I replaced -resize with -thumbnail as you suggested above, but in this case the speed
dropped: the same set of 100 images took 14 seconds to process with -thumbnail.
Depending on the number of CPU cores you have active, -resize may be
faster than -thumbnail in terms of wall-clock time, but -thumbnail
will usually use less total CPU. Since the JPEG library is already
returning a fairly small image, there is much less benefit from
-thumbnail's extra shortcuts.
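For reference, a minimal sketch of the decode-time shortcut being described here: the JPEG -size hint, given before the input file, lets the JPEG decoder return an already reduced image, so -thumbnail (or -resize) starts from far fewer pixels. The file names and geometries are placeholders of mine, and the command is only printed so it can be inspected before running it on a real file.

```shell
# Placeholder names (input.jpg, thumb.jpg) and geometries; the
# "-size" hint must appear BEFORE the input file so it reaches the
# JPEG decoder rather than acting on the decoded image.
size_hint_cmd="gm convert -size 120x120 input.jpg -thumbnail 120x80 thumb.jpg"

# Printed rather than executed here; run it with: eval "$size_hint_cmd"
echo "$size_hint_cmd"
```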
Looking at the directory options, I don't think this is going to be helpful for my case. The file
layout I have is one full-size image and a number of derivative sizes per directory. For example,
a single directory might contain files like 12345_full.jpg, 12345_small.jpg,
12345_medium.jpg, and 12345_large.jpg. The new sizes I am batch processing will be derivatives of
the *_full.jpg size but must live in the same directory. Also, these are new files, not
resized replacements of the original. Correct me if I'm wrong here.
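A batch over that layout could be sketched along the following lines (my assumption of your tree, not something you posted): walk each directory's *_full.jpg master and write the new sizes beside it with a single gm pipeline. The _newsizeN suffixes and geometries are placeholders; by default the function only prints the commands, so nothing is written until you set DRY_RUN=0.

```shell
# make_derivatives ROOT: for each */*_full.jpg under ROOT, build one
# gm pipeline that writes the new derivative sizes into the same
# directory as the master. The geometries and "_newsizeN" suffixes
# are placeholders. DRY_RUN=1 (the default) prints instead of runs.
make_derivatives() {
  root=$1
  for full in "$root"/*/*_full.jpg; do
    [ -e "$full" ] || continue          # glob matched nothing
    base=${full%_full.jpg}              # e.g. .../12345
    cmd="gm convert -quality 80 +profile '*' '$full' \
-thumbnail 800x800 -write '${base}_newsize1.jpg' \
-thumbnail 400x400 '${base}_newsize2.jpg'"
    if [ "${DRY_RUN:-1}" = 1 ]; then
      echo "$cmd"
    else
      eval "$cmd"
    fi
  done
}
```

Run `make_derivatives /path/to/photos` to preview the commands, then set DRY_RUN=0 and run it again to execute them.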
In this case, be sure to investigate the '-write filename' option,
which writes the current image to the specified filename and then
continues processing. With this approach you might not be able to
take advantage of the JPEG -size option, but you can use -thumbnail
(or -resize, or -scale) in a pyramid fashion to produce the
various smaller sizes while reading the original image just once.
% time gm convert -verbose 100_4994.jpg -quality 80 +profile '*' \
-write original.jpg -thumbnail 60% -write large.jpg -thumbnail 60% \
-write medium.jpg -thumbnail 60% -write small.jpg -thumbnail 120x80 \
thumb.jpg
100_4994.jpg JPEG 1728x2304+0+0 DirectClass 8-bit 1.3M 0.210u 0:01 (17.3M pixels/s)
100_4994.jpg=>original.jpg JPG 1728x2304+0+0 DirectClass 8-bit 798.5K 0.290u 0:01 (13.6M pixels/s)
100_4994.jpg=>large.jpg JPG 1728x2304=>1037x1382+0+0 DirectClass 8-bit 292.5K 0.100u 0:01 (38.0M pixels/s)
100_4994.jpg=>medium.jpg JPG 1728x2304=>622x829+0+0 DirectClass 8-bit 102.8K 0.040u 0:01
100_4994.jpg=>small.jpg JPG 1728x2304=>373x497+0+0 DirectClass 8-bit 31.6K 0.020u 0:01
100_4994.jpg JPEG 1728x2304=>60x80+0+0 DirectClass 8-bit 1.780u 0:01 (4.6M pixels/s)
100_4994.jpg=>thumb.jpg JPG 1728x2304=>60x80+0+0 DirectClass 8-bit 0.000u 0:01
gm convert -verbose 100_4994.jpg -quality 80 +profile '*' -write original.jpg 1.90s user 0.09s system 184% cpu 1.078 total
Just keep in mind that there could be a small bit of additional
quality loss due to resizing an already resized image.
Regardless, when using JPEG format, the bottleneck is likely to be the
JPEG decode/encode rather than GraphicsMagick.
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/