I routinely have to copy the contents of a folder on a network file system to my local computer. The remote folder holds many files (thousands), all relatively small, but due to network overhead a regular copy

cp remote_folder/* ~/local_folder/

takes a very long time (around 10 minutes).
I believe this is because the files are copied sequentially – each file waits until the previous one has finished before its copy begins.
What's the simplest way to increase the speed of this copy? (I assume it is to perform the copy in parallel.)
Zipping the files before copying will not necessarily speed things up, because they may all be saved on different disks on different servers.
As long as you limit the number of copy commands you run at once, you could use a script like the one posted by Scrutinizer:
#!/bin/bash
SOURCEDIR="$1"
TARGETDIR="$2"
MAX_PARALLEL=4

# Split the file list into MAX_PARALLEL roughly equal sets
nroffiles=$(ls "$SOURCEDIR" | wc -l)
setsize=$(( nroffiles / MAX_PARALLEL + 1 ))

# Launch one background cp per set (note: breaks on filenames containing spaces)
ls -1 "$SOURCEDIR"/* | xargs -n "$setsize" | {
    while read -r workset; do
        cp -p $workset "$TARGETDIR" &   # unquoted so each path becomes its own argument
    done
    wait   # wait inside the same subshell that launched the background jobs
}
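For reference, this is roughly how the script might be invoked, assuming it is saved as parallel_cp.sh and made executable (the file name and paths are just placeholders matching the question):

chmod +x parallel_cp.sh
./parallel_cp.sh remote_folder ~/local_folder

An alternative to splitting the file list up front is to let xargs limit the concurrency directly. This is only a sketch, not part of the original answer, and it assumes GNU xargs (for -P) and GNU cp (for -t); like the script above, it breaks on filenames containing whitespace:

ls -1 remote_folder/* | xargs -n 16 -P 4 cp -p -t ~/local_folder/

Here -P 4 keeps at most four cp processes running at a time, and -n 16 hands each process a batch of 16 files.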