Nice! I used to do something like this, which avoids xargs altogether:
cat urls.txt | while read url; do echo download $url; done
You should use the -r option with read to preserve backslashes. I was using while loops before too, but wanted a compact single-command replacement. Doing it with a while loop as an alias (or function) didn't work well, because the command has to be interpreted. xargs does exactly that, as it is designed for this kind of stuff. Other than having less to type, I wonder if there are benefits of one over the other, while vs xargs. In a script, I prefer writing full while loops instead.
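For reference, a compact xargs equivalent of that loop might look like this (a sketch; urls.txt and the echo stand-in come from the thread, not a real downloader):

# -I{} names the substitution point and implies one invocation per input line
xargs -I{} echo download {} < urls.txt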
You can also avoid cat since you aren't actually concatenating files (depending on file size this can be much faster):
while read -r url; do echo "download $url"; done < urls.txt
Pro tip: you can also put the < urls.txt at the start for readability. The arrow doesn't have to point at the command.
Out of curiosity, how?
< urls.txt while read -r url; ...
Is a syntax error.
while read -r url < urls.txt; ...
Results in an infinite loop: the redirection belongs to the read in the loop condition, so the file is reopened on every iteration and read keeps getting the first line.
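For what it's worth, bash does accept a leading redirection, just only on simple commands, never before a compound-command keyword like while (a quick demonstration, assuming bash):

# Valid: a redirection may precede a simple command
< urls.txt head -n 1

# Invalid: a redirection cannot precede the while keyword
# < urls.txt while read -r url; do ...; done    # syntax error in bash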
Usually this is the way. Once you enter xargs’ world, you lose access to your shell aliases, functions, and un-exported variables, which will often bite you in the ass.
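If you do need a shell function under xargs, one common workaround (a sketch, assuming bash; my_download is a hypothetical function name) is to export the function and start a shell explicitly:

# Hypothetical function we want xargs to run
my_download() { echo "download $1"; }
export -f my_download    # make it visible to child shells (bash-specific)

# Each invocation starts a bash that sees the exported function;
# the trailing _ fills $0, so the URL lands in $1
xargs -I{} bash -c 'my_download "$@"' _ {} < urls.txt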
I recommend GNU parallel. It does similar things, but runs the commands in parallel. And it's way easier to pipe to than xargs. If you really need it to run one command at a time, you can set the number of jobs to 1. It also has progress bars, colors to differentiate the stdout of different commands, etc.
Basic example: to echo each line
parallel echo < somefile.txt
To download all the links with 4 parallel jobs and a progress bar:
parallel -j 4 --bar 'curl -O' < links.txt
You can do a lot more with the input placeholders like {}: numbered ones ({1} is the first argument) that allow multiple input sources per job, and replacement strings that remove the extension, strip the parent path, etc. Worth learning.
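A quick illustration of those replacement strings (a sketch using GNU parallel's documented {.} and {/} forms; the file names are made up):

# {} = input unchanged, {.} = extension removed, {/} = parent path stripped
parallel echo 'in={} noext={.} base={/}' ::: /tmp/song.mp3 /tmp/clip.mp4

# {1} and {2} draw from separate input sources (all combinations by default)
parallel echo '{1}-{2}' ::: red blue ::: small large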
I am actually aware of parallel and use it for a different tool/script I built. The purpose of parallel is different from xargs, right? I mean, xargs works on each line piped to it, which is what I was using it for. I never thought of parallel as an alternative to xargs and need to look into this idea more. Thanks.
Be careful.
Because it only formats stdin streams into string(s), xargs can be very dangerous, depending on the command to which the arguments are being passed.
xargs used to be a practical way to get around bash globbing issues and parenthetical clause behavior, but most commands now have alternate and safer ways of handling passed arguments.
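A concrete illustration of that hazard (a sketch; the file name is invented, and printf stands in for a destructive command like rm):

# A file name containing a space...
printf '%s\n' 'my file.txt' > list.txt

# ...is split into two arguments by xargs' default whitespace splitting:
xargs printf 'arg: [%s]\n' < list.txt
#   arg: [my]
#   arg: [file.txt]

# GNU xargs -d '\n' treats each line as a single argument:
xargs -d '\n' printf 'arg: [%s]\n' < list.txt
#   arg: [my file.txt]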
find -exec is preferable to xargs to avoid file expansion "bombs", plus find doesn't involve the shell, so it doesn't care about whitespace problems.
I almost never use xargs. The most common case for it is find, but it is easier to use find's -exec option. Also, your find example is incorrect: you forgot that file names can contain special characters, the newline character in particular. That's why you need to pass the -print0 option to find and the -0 option to xargs.
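Both safe patterns side by side (a sketch; the *.log pattern and rm are illustrative only):

# NUL-delimited hand-off survives any character a file name can contain
find . -name '*.log' -print0 | xargs -0 rm --

# Or let find run the command itself; the + terminator batches arguments
# into as few invocations as possible, much like xargs does
find . -name '*.log' -exec rm -- {} +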