If your web browser formats this page incorrectly, try viewing the page source. In netscape 4, click the view menu, then click page source.

Things which are not in the man pages, but should be; simple explanations of things which are explained badly in the man pages; and examples. Everything refers to whichever version I was using at the time; this may not be the same version as you are using, or even the same version as I am using now. In other words, this may all be wrong.

To find information about a specific program, use the search feature of the pager or editor you are using to read this file, and search for '**foo', where foo is the name of the program you want information about.

**afio

To make an archive with afio:

afio -oZ -G 9 -M 4m -c 1048576 -e 1 archive < list

Z means to compress with gzip. afio does not use a checksum, but gzip adds a checksum, so this gives better protection against data errors in the archive, besides making the archive smaller. Note the checksum probably only protects the contents of the file, not the file name or other directory information. G 9 means maximum compression. M 4m means use 4M of memory for holding a file while it is being compressed. c means to use 1M of memory for buffering tape reads and writes; c is not needed if the archive is a file on the hard drive. e 1 stops afio from adding 00h bytes to the end of the archive until the size is an exact multiple of the blocksize. Usually the device driver will adjust the blocksize as needed, which is better than having afio worry about it. archive is the name of the archive file, or the name of a device, or user@hostname:file(device), or - for standard output. list is the name of a file which contains a list of the files to be put into the archive. Note afio reads the list from standard input, so you can pipe the output of another program to afio to use as the list.

To unarchive files with afio:

afio -ikvZ -w list -c 1048576 archive

k means to try to read the archive anyway even if the first part seems wrong. k is hardly ever needed, but it does no harm, so why not? v means to display file names as the files are unarchived. If a file is damaged, afio displays:

gzip: stdin: invalid compressed data--crc error
afio: "inentry xwait()": Exit 1

but afio does not say which file is damaged! afio unarchives the file and continues to the next file and gives an exit code of 0 as if no error had occurred. By using -v, you know that the file name which is displayed AFTER the error message is the damaged file. Probably the filenames are written to standard output, but the error messages are written to standard error, so if you redirect standard output, you should redirect standard error as well. list is the name of a file which contains a list of files to unarchive. If files in the list are not in the archive, they are ignored; this is not an error. If '-w list' is left out, all files are unarchived. list can NOT be '-' for standard input. Z should always be used when unarchiving files; if it is not needed it will be ignored. c is the same as above. archive is the same as above.

To compare an archive with the original files:

afio -rkZ -w list -c 1048576 archive

k, Z, w list, c, and archive are the same as above. If everything matches, there is no output and the exit code is 0. If anything does not match, the exit code will be 1.
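Putting the archive and compare commands together, here is a minimal sketch of a backup-and-verify round trip; the names /home/me, list, and backup.afio are my own examples, not from anything above. -w is left out of the compare so that every file in the archive is compared, which also avoids the dot z bug described later in this section.

find /home/me > list
afio -oZ -G 9 -M 4m -e 1 backup.afio < list # make the archive; no -c because the archive is on the hard drive
afio -rkZ backup.afio # compare the archive with the original files
echo exit code of compare is $? # 0 means everything matched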
If a file has been deleted, the following message is displayed:

afio: "file": No such file or directory

If a file has changed since it was archived, the following message is displayed:

afio: "file": Difference in archive data and file
afio: "file": Archive data and file cannot be aligned
afio: "incheckentry xwait() gunzip": Exit 1
afio: "file": uncompressing failure
afio: "file": Corrupt archive data

THE ARCHIVE DATA IS NOT CORRUPT!! The message is misleading. If the archive was corrupt and the file was NOT changed, the following message would be displayed:

afio: "file": Difference in archive data and file
gzip: stdin: invalid compressed data--crc error
afio: "incheckentry xwait() gunzip": Exit 1
afio: "file": uncompressing failure
afio: "file": Corrupt archive data

If the archive was corrupt and the file WAS changed:

afio: "file": Difference in archive data and file
afio: "file": Archive data and file cannot be aligned
afio: "incheckentry xwait() gunzip": Exit 1
afio: "file": uncompressing failure
afio: "file": Corrupt archive data

Unfortunately, these error messages are too similar. Note the last message is the same as if the archive was not corrupt; this message probably means that the size or data or something has changed. A line beginning 'gzip:' indicates the archive is corrupt, and it is impossible to say if the file has changed. Probably 'Archive data and file cannot be aligned' means there is a change in size or date. I suspect that there is a minor bug in afio such that if the size of the data does not match, afio sends nothing to gzip to be uncompressed, gzip returns 1 because it cannot uncompress nothing, and afio interprets this as corrupt archive data. I am not sure what error message would be displayed if the file was actually corrupt, that is, if the contents of the file had changed but the size and date had not.

If you use -v when comparing, you get a lot more output, and it is even more confusing. The information given by -z and -L is useless, unless you are working on the source code.

afio dot z bug: When afio compresses a file, it adds '.z' to the file name. Unfortunately, sometimes afio forgets that it has added '.z' to the file name. If you are unarchiving or comparing, and if you are using -w and a list file, and if one of the files in the list is compressed; then afio will not be able to find that file in the archive, unless you add '.z' to the name of the file in the list. Remember files which are in the list but are not in the archive are ignored. If you add '.z' to the file name and afio finds the file in the archive, afio will remember that it added the '.z', and afio will unarchive or compare the file normally.

If you are trying to add '.z's to the list, but you cannot remember which files are compressed, give each file name twice, once with '.z' and once without. You can do this by piping the list through the following program:

perl -e 'while(<>){chomp($_);print("$_\n$_.z\n");}'

This usually works because afio ignores files in the list which are not in the archive, but you might have problems if you have two files in the same archive with similar names, where one ends in '.z' and one does not, like 'foo' and 'foo.z'; then afio might get confused about which one you want. You can also work around the bug by adding '*' to the file name.

The afio dot z bug also affects -W. It does not affect -y or -Y. There is no problem if you do not use -W or -w. The afio dot z bug is in afio version 2.4.1.
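Here is a minimal sketch of that workaround; list and archive are the same names used above, and list2 is my own name for the doubled list. The doubled list goes into a temporary file because list can not be '-' for standard input.

perl -e 'while(<>){chomp($_);print("$_\n$_.z\n");}' < list > list2 # each name, then the name with .z
afio -ikvZ -w list2 -c 1048576 archive
rm list2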
**ash

ash is like bash, except:

ash is faster than bash
'~' is not recognized as equivalent to $HOME
ash does not recognize the command "source " as the same as ". ".
ash does not recognize ! at the beginning of a command as meaning to reverse the exit code after running the command.
These commands are not recognized: function, pushd, popd.
Functions are not allowed in ash.
In bash, 'read' is the same as 'read REPLY'. In ash, 'read' does not work, but 'read REPLY' does work.
ash does not set the following environment parameters: PWD,
ash does not recognize the special meanings of backslashes in the prompt.
ash does not set -o, -C,
ash test does not recognize -e

In the following shell script:

#!/bin/ash
if test "a" = "a"
then
read REPLY
true &
fi

the line 'read REPLY' is ignored. It is not ignored if 'true &' is changed to 'true'. It is not ignored if '#!/bin/ash' is changed to '#!/bin/bash'. I guess this is a bug in ash. If there are more commands between 'read REPLY' and 'true &', that does not make any difference. If there is more than one 'read REPLY' before 'true &', all are ignored. If 'read REPLY' is after 'true &', it executes normally.

**bittorrent

using dellp3, fc8, trying to download knoppix dvd iso image

bittorrent-console http://torrent.unix-ag.uni-kl.de/torrents/KNOPPIX_V6.4.3DVD-2010-12-20-EN.torrent

many incomprehensible error messages, pressed control-c to exit. It is possible that bittorrent-console needs a local file instead of a URL, but the error message does not say that.

bittorrent-curses http://torrent.unix-ag.uni-kl.de/torrents/KNOPPIX_V6.4.3DVD-2010-12-20-EN.torrent

python returned ERR, exit code 1. It is possible that bittorrent-curses needs a local file instead of a URL, but the error message does not say that.

transmissioncli http://torrent.unix-ag.uni-kl.de/torrents/KNOPPIX_V6.4.3DVD-2010-12-20-EN.torrent

Couldn't get information for file. Maybe transmissioncli needs a local file instead of a URL.

rtorrent http://torrent.unix-ag.uni-kl.de/torrents/KNOPPIX_V6.4.3DVD-2010-12-20-EN.torrent

This worked, but uploads were slow. I tried increasing the upload speed with -o upload_rate=200, but both uploads and downloads were much slower. So rtorrent seems to work better without options. But that was a different torrent, so maybe the torrent was slow; maybe it wasn't the fault of the options.

rtorrent keeps running after finishing downloading the file, until we stop it. This allows it to keep sending parts of the file to other computers which are trying to download the file, which is the polite thing to do. If we are going to leave it running, we should put it in the background, but what if it keeps writing to standard output and disrupts the program in the foreground? Maybe we should give it its own terminal.

**bzip2

Bzip2 gives better compression than gzip with large text files like source code, but there is very little difference between bzip2 and gzip with binary files like compiled programs. If you have a compressed file, and you compress the file again with bzip2, the file will probably become about one percent larger. But if you uncompress the file first, then recompress it with bzip2, the file will probably be smaller than the original compressed file. If you are trying to make an archive of files which are already compressed, use zip (or afio) instead of tar.bzip2 or tar.gzip (and instead of cpio.gz or cpio.bz2).
zip tries to compress every file, but if the output is larger than the original file, zip stores the file uncompressed; other compressors compress every file even if the result is larger than the original file. Also zip archives are less susceptible to corruption: if a tar.gz archive is corrupted, you lose the whole archive; if a tar.bz2 archive is corrupted, you lose the whole corrupted block, which could be several files; if a zip archive is corrupted, you lose the corrupted files, but other files are unaffected.

**chroot

chroot NEWROOT [COMMAND...]

chroot changes the root directory and runs programs on the new root directory. The new root directory becomes the current directory. The program you run with chroot is relative to the new root directory. For example:

chroot /foo/bar /glup/glog datafile

will run /foo/bar/glup/glog /foo/bar/datafile. So you may need to copy the program you want to run to $NEWROOT or a subdirectory of $NEWROOT before you run chroot. If the program is dynamically linked, then the new root directory will need /lib/ld-linux (which is really $NEWROOT/lib/ld-linux) and libs. ld will look for libs in directories given in /etc/ld.so.cache (which is really $NEWROOT/etc/ld.so.cache). If /etc/ld.so.cache does not exist, ld will look for libs in /lib (which is really $NEWROOT/lib). So you may need to put ld and libs in $NEWROOT/lib before you run chroot.

chroot does not change the environment, but various paths in the environment may be incorrect for the new root directory. So you may need to change the environment before running chroot. Symbolic links which are relative to the root directory will be relative to $NEWROOT. References to the parent directory of $NEWROOT will refer to $NEWROOT.

Some programs interface with the operating system or with other programs through /var or /proc. These programs will probably not work with chroot because /var and /proc will be inaccessible. However, you might be able to copy files from /var and /proc to $NEWROOT/var and $NEWROOT/proc. /proc can be mounted in more than one place at the same time, so you can mount /proc to $NEWROOT/proc.

The new root directory applies to the program you run with chroot, every child process created by that program, and every library function called by that program. However, the new root directory does not apply to operating system calls. Programs which you run with chroot can use pipes and sockets to communicate with other processes. If these other processes were created before you ran chroot, then the new root directory does not apply to these other processes, and the programs which you run with chroot may be able to access files which are not in the new root directory by communicating with these other processes.

Do not put the command in quotes:

chroot /foo/bar "/glup/glog datafile"

chroot will think that the space in '/glup/glog datafile' is part of the filename, and will fail because there is no file with that name.

**diff
**patch

Make a simple patch with:

diff originalfile changedfile > patchfile

Then apply the patch with:

patch originalfile < patchfile

and originalfile will be changed so that originalfile is the same as changedfile. This changes the file contents. This does not change the file name. patch does not care what name the file has. You can copy the original file to a new file with a completely different name, and you can patch the new file; patch will change the file contents and not change the file name.
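For example, here is a minimal round trip; the names notes, notes.new, copy, and notes.patch are all hypothetical.

cp notes notes.new
echo 'one more line' >> notes.new # change the copy
diff notes notes.new > notes.patch # make the patch
cp notes copy
patch copy < notes.patch # now copy has the same contents as notes.new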
To make one patch for a directory of files:

diff -Pru directorycontainingoriginalversions directorycontainingchangedversions > patchfile

Then apply the patch with:

cd directorycontainingoriginalversions
patch < patchfile

diff creates patches by default, and generates no output if the two files are the same. If you do not want a patch file, if you just want to know if two files are the same or not, then use diff -qs; -q means to not create a diff patch and to say if the files differ, -s means to say if the files are the same. If you are using -q, then it does not matter which filename you give first.

To compare two directories, use 'diff -r'. This will compare all files in all subdirectories of the two directories given on the command line. Note that this does not compare all files in directory 1, skipping files in directory 2 which do not exist in directory 1; it compares ALL files in BOTH directories.

If you try to patch a file which has already been patched, patch says a reversed or previously applied patch was found, and it asks you if you want to use -R. If you say yes, it applies the patch in reverse, which has the effect of removing the patch.

**ed

echo -e '/\. \/etc\/sysconfig\/network/ a'"\n NETWORKING=yes\n.\n\nwq" \
| ed /root/t/rc.sysinit

That searches the file /root/t/rc.sysinit for '. /etc/sysconfig/network' and adds a new line ' NETWORKING=yes' after the first matching line, then saves the file and exits. Note that '.' and '/' in the search pattern are preceded with backslash. Note that echo uses -e to enable backslash substitutions. Note the first part of echo is in single quotes to prevent backslash substitutions, because those backslashes are for ed; the second part of echo is in double quotes to enable backslash substitutions, because we are using \n to insert newlines, and the newlines are for ed. After the ed command 'a', we send a newline, then ed enters insert mode, then we send the line of text we want inserted, then newline dot newline to leave insert mode and return to command mode, then we send the ed command 'wq' to write the file and quit.

If the ed command 'a' was replaced with 'i', then it would do the same thing except it would add the new line before the matching line instead of after the matching line. If we changed 'network/ a' to 'network/-1 a', then it would add the new line after the line before the matching line; in other words it would add the new line before the matching line; in other words it would be the same as using i instead of a.

The following creates an ed script and uses it to add some stuff to a file:

mv xcport.c xcport.c.orig
cp xcport.c.orig xcport.c
ED_SCRIPT=ed_script.tmp # any temporary file name will do
echo '/p = port/ i' > $ED_SCRIPT
echo ' /*' >> $ED_SCRIPT
echo '.' >> $ED_SCRIPT
echo '/p = toupper/ a' >> $ED_SCRIPT
echo ' This causes xc to change the last character of the modem device name' >> $ED_SCRIPT
echo ' to uppercase, so that if you tell xc to use /dev/modem, it will use' >> $ED_SCRIPT
echo ' /dev/modeM instead. I think this is correct for some unixes, where you' >> $ED_SCRIPT
echo ' are supposed to use /dev/ttyA instead of /dev/ttya, but it is not' >> $ED_SCRIPT
echo ' correct for Linux; so this should be skipped for Linux. */' >> $ED_SCRIPT
echo 'wq' >> $ED_SCRIPT
cat $ED_SCRIPT | ed xcport.c
rm $ED_SCRIPT

**find

To find files with certain characters in the name:

find -name '*qw*'

That displays a list of all files, directories, devices, pipes, etc with a q followed by a w in the name of the file.
find searches the current directory and all subdirectories of the current directory. If you want to ignore case, use -iname instead of -name. A file named './qw/foo' will not be found, because the qw is in the directory name, not in the file name. If you want to find both files with matching names and files with matching directory names, use -path instead of -name. '*qw*' is the pattern to search for. To find all files, search for '*'. The output is like this:

.
./kxarc
./dir
./dir/file
./dev
./s~
./s

To eliminate the leading './', pipe the output of find through sed. To eliminate the file '.', pipe the output of find through grep.

The following example finds files named *.c or *.h:

find -name '*.c' -or -name '*.h'

A complicated search may require parentheses. The parentheses must be in quotes, because without quotes bash would interpret the parentheses as control characters.

find '(' -path '*project2*' -or -path '*project5*' ')' -and \
'(' -name '*.c' -or -name '*.h' ')' -and -ls

**ftp

If you use ftp to cd to a directory or get a file with spaces in the name, put a backslash before each space; and you might want to rename files to eliminate the spaces in the file names. For example, to get a file named 'x42 manual.doc':

get x42\ manual.doc x42_manual.doc

**grep

Use grep like this:

grep -e '[pattern to search for]' [name of file to search]

To find which files contain a pattern:

grep -l -e '[pattern to search for]' *

grep will display the names of all files in the current directory which contain the pattern. This is useful when you have a set of several files of source code, and you want to know which files use or contain the definition of some function. DO NOT FORGET THE '*' AT THE END. If you forget the '*' at the end, grep will do nothing, and will keep doing nothing until the universe explodes or until you press control-c, whichever comes first.

With gnu grep version 2.1, the man page says we can use --files-with-matches instead of -l, but when I tried it, grep said unrecognized option; -l worked correctly. Use -lr if you want to search subdirectories too. The man page says that with -l, searching stops on the first match. That means grep stops searching the current file, and grep starts searching the next file. Also note that if you skip -l, grep will still display the names of files which contain matches; for each match grep will display the name of the file, a colon, and the line which matched.

To find something which matches either of two patterns:

grep -e 'pattern1' -e 'pattern2'

or

grep -e 'pattern1\|pattern2'

To use grep as a filter:

cat foo | grep -e 'bar' >> output_file

Any line which contains 'bar' will pass through the filter to the output. Use grep -v -e 'bar' to pass lines which do NOT contain 'bar'.

To use grep to check a string:

echo foo | grep --quiet -e '\.gz$'

The exit code will be 0 if foo matched the pattern and 1 if it did not match. ('foo' does not match the pattern '\.gz$'; 'foo.gz' does match.) (Or you could skip the --quiet option, and instead redirect the output to /dev/null.) If you really want to test a string, you probably would combine grep with if as follows:

if echo foo | grep --quiet -e '\.gz$'
then echo string matches
else echo string does not match
fi

But note that you can do the same thing with bash, and bash is probably a little faster:

if [[ foo == *.gz ]]
then echo string matches
else echo string does not match
fi

If you want to test for an exact match instead of a pattern match, it is easier to use test instead of grep.
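For example, here is a minimal sketch of an exact match with test; NAME is a hypothetical variable.

if test "$NAME" = 'foo.gz'
then echo exact match
else echo no match
fi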
Note that grep patterns are not the same as sh patterns; grep patterns are like Perl patterns. I was looking for a pattern that would match both *.tar.gz and *.tgz, but '\.tar\.gz|\.tgz', '(\.tar\.gz)|(\.tgz)', '\.t(ar\.g|g)z$', and '(\.tar\.gz|\.tgz)' did not work. I guess grep does not accept | and () like Perl. (In grep's basic patterns, | and () are ordinary characters; you have to write \| and \( \), or use grep -E, to get alternation and grouping.)

grep removes the newline at the end of each line before checking if the line matches. If you are looking for a pattern at the end of a line, use 'foo$', not 'foo\n$'.

If you have a command like:

grep foo

then you might be confused about whether foo is a pattern, or if foo is a file to be searched. In this case grep would assume that foo is a pattern, and grep would search standard input for 'foo'. So you do not have to use -e to tell grep that this is the pattern, but you might prefer to always use -e, because you might think it is less confusing. Also, you do not have to put the pattern in single quotes, but sometimes you have to use quotes to prevent the shell from altering the pattern. I think it is simpler to always use quotes, because then you do not have to worry about whether or not the quotes are needed. If you have a command like:

grep foo bar

then grep assumes that foo is the pattern to search for and bar is the file to search.

The following finds the names of files in a zip archive; it throws away the header lines and keeps lines with filenames from the output of unzip -l:

unzip -l something.zip | grep -e '^....... ..-..-.. ..:.. '

The following finds all lines which begin with '#DOS> '. I thought it would require backslashes before '#' and '>', but it does not work with backslashes, and it does work without backslashes.

grep '^#DOS> '

**for bash internal command

A for loop is like this:

for A in *
do echo $A
done

Or all on one line like this:

for A in *; do echo $A; done

If you have:

for A in ' a b'

then that is one datum, four characters long, for the for loop. But if you have:

t=' a b'
for A in $t

then that is two data, each one character long, for the for loop.

There is another version of for with arithmetic, like this:

set -x # echo all commands
for (( x=1 ; $x<4 ; x=$x+1 ))
do echo $x
done

The output is like this:

+ (( x=1 ))
+ (( 1<4 ))
+ echo 1
1
+ (( x=1+1 ))
+ (( 2<4 ))
+ echo 2
2
+ (( x=2+1 ))
+ (( 3<4 ))
+ echo 3
3
+ (( x=3+1 ))
+ (( 4<4 ))

Note that x=1 is executed only once, the do ... done commands are executed if $x<4, and x=$x+1 is executed AFTER the do ... done commands.

**killall5

killall5 sends signals to all processes except itself, its parent (the shell or script from which you ran killall5), process 1 (process 1 is usually init), and some kernel processes like kjournald. I think there is no way to send the signal to selected processes only; use kill for that. I think the default signal is 15, otherwise known as TERM. To send all processes the KILL signal:

killall5 -9

I do not know if you can use process names, as in 'killall5 TERM' or 'killall5 -TERM'.

**ln

Suppose that one directory is the current directory. You want to create a link in a second directory to a file in a third directory. Should you give the name of the file relative to the current directory, or relative to the second directory? The second directory.

ln --symbolic /root/t/d1/d2/s /root/t/d1/s

that made s -> /root/t/d1/d2/s in /root/t/d1

ln --symbolic d2/s /root/t/d1/s

that made s -> d2/s in /root/t/d1

In other words, the first option is what you put into the link pseudofile you are creating; the second option is the name of the link pseudofile you are creating.
In other words, first you say what it links to, then you give the name of the link:

ln -s 'what_it_links_to' 'name_of_link'

If the name of the link is the name of a file or pseudofile which already exists, you will get an error message. ln does not display an error message if the file you are linking to does not exist.

**lynx

If you tell lynx to display a *.html.gz file, lynx automatically uncompresses the file and formats the html code normally, so what you see is the same as what you would have seen if the file was in *.html format. Lynx also displays gzipped text files the same as regular text files.

**memtest

memtest does not have an option for testing all memory. When you run memtest, memtest tries to malloc() the amount of memory you specified. If malloc() fails, memtest tries to malloc() less memory, maybe 10 percent less. So maybe you could try memtest with a very large amount of memory, and memtest would reduce the amount of memory until it worked. Note that memtest can test most of the memory in your computer, but not all the memory in your computer.

I thought that memtest could test more memory if I shut down as many services as possible, so I rebooted with kernel option init=/bin/bash and no daemons running. But memtest was not able to malloc() any more memory than it could in my usual runlevel 7, and control-c failed to kill memtest. So we might as well run memtest with a normal configuration.

I ran memtest with output logged, and after a while pressed control-c to abort memtest. But I was running with kernel option init=/bin/bash, so I had to reboot to kill memtest. When I rebooted, the log file was empty. I think it is better to run memtest without the log option, and use my log output program to capture the output of memtest.

I have run memtest with a large amount of memory, and memtest successfully malloc()ed that amount of memory, but when memtest began the first test, the kernel displayed an error message saying out of memory, and memtest displayed a message saying Received signal 15 (Terminated), and memtest exited. This error occurred every time I ran memtest with that amount of memory, and did not occur if I ran memtest with 1 megabyte less than that amount of memory.

**minicom

man page says default exit code of runscript is 1. man page is wrong. default exit code is 0.

minicom runscript uses log with log command, does not log data received from modem.

minicom runscript timeout really aborts script even if script is doing something. timeout 0 is syntax error. There are no error messages if global timeout is very large, but I doubt that runscript uses more than 32 bits to store the timeout, and the timeout may be stored as a signed integer, so I would not assume that timeouts of more than 2 billion will work. Frequently resetting global timeout is ok. global timeout aborts script if script is sleeping or shelled out to bash.

minicom runscript send sends carriage return for both \n and \r.

to exit minicom: control-a q or control-a x
online help: control-a z
configure: control-a o
dial directory: control-a d

dialing directory is $HOME/.dialdir

modem serial port, script directory, download directory, etc are set in online configuration.

if a script does exit 1, does minicom hangup and/or exit?
are scripts logged?
is autozmodem on in scripts? no
$ in strings ok? yes
quotes with ! ok? no
do backslash substitutions work with !?

control-a j jumps to a shell.
it does not spawn a new shell; it puts minicom into the background and returns to the previous shell, so you type fg, not exit, to return to minicom from the shell.

**mv

mv does not have an option to not overwrite preexisting files. If the preexisting files have newer dates, you can use --update. Otherwise, you can use --interactive, or you can use --backup and restore the overwritten files later.

**rpm

The redhat web site says if you have a version of redhat earlier than 5.2, you need to upgrade to a new version of rpm to install the new *.rpm packages. If you have a linux distribution which you installed from the new *.rpm packages with the new rpm, and you want to install some packages which are in the old *.rpm format, then sometimes the new rpm fails to unpack the old *.rpm packages. You can still install the old *.rpm packages by using the old rpm, but the old rpm will think that no dependencies have been installed, so you will have to use --nodeps. After you have installed packages with the old rpm, both old and new rpm will not recognize dependencies provided by the packages; so if you want to install any packages which depend on packages which you installed with the old rpm, then you will have to use --nodeps with either the new or the old rpm.

If you have an *.rpm file and you want to know more about it, run the following command:

rpm --query -i -p ical-2.1-1.i386.rpm

But change 'ical-2.1-1.i386.rpm' to the name of your *.rpm file. This will display various information like the name and version of the program in the *.rpm file, and a description. Hopefully the description will tell you what the *.rpm file is for. (If you skip the -i, rpm displays nothing. -iv is the same as -i. -ivv is like -i, except rpm also displays some useless header and checksum data.)

If you run a command like 'rpm --query -f /usr/bin/foo', then rpm displays the name of the package which owns the file. If there is a file conflict, then the file may be owned by more than one package; in this case rpm lists all packages which claim to own the file. If rpm says the file is not owned by any package, you may have misspelled the file name; perhaps you forgot the leading slash, which is required. When you install an rpm package, rpm does not add each file to the rpm database until the file has been installed; if the file was not completely installed, then the file is not added to the database; therefore if rpm says a file does not belong to any package, it may be that the file was not completely installed; in this case you need to reinstall the package; good luck guessing which package the file is supposed to belong to.

If you have an *.rpm file and you want to know if you can install it, run the following command:

rpm --install --test ical-2.1-1.i386.rpm

but change 'ical-2.1-1.i386.rpm' to the name of your *.rpm file. This will list failed dependencies and file conflicts, and will return an exit code of 1 if there were any failed dependencies or file conflicts. (If you add -v, and there are no failed dependencies or file conflicts, then rpm displays an incorrect message saying that it is installing the package; otherwise -v does nothing. If you add -vv, rpm lists all the dependencies it checks for, and lists files shared with other packages.) If the package is already installed, rpm writes 'package ical-2.1-1 is already installed' to standard error, writes nothing to standard output, and the exit code is 1.
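Here is a minimal sketch of using that exit code in a script; ical-2.1-1.i386.rpm is the same example package as above, and 'problems' is my own file name. Standard error is redirected because --test writes its messages there, as described next.

if rpm --install --test ical-2.1-1.i386.rpm 2> problems
then rpm --install ical-2.1-1.i386.rpm
else cat problems # the failed dependencies and file conflicts
fi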
--test writes failed dependencies and file conflicts to standard error, not to standard output, and returns a nonzero exit code if there were any failed dependencies or file conflicts. --test checks for dependencies first. If the dependencies are ok, then it checks for file conflicts. If there are no file conflicts, it says the *.rpm file can be installed. If there are failed dependencies, it never checks for file conflicts.

But note that dependencies are usually libraries. If the libraries are not there, you cannot run the program, but you can probably still install the program if you want, and you can install the libraries later, and then you can run the program. rpm will say you cannot install an *.rpm file if there are failed dependencies, but usually you can install it, you just cannot run it (but if you do install an *.rpm file with failed dependencies, you have to use --nodeps). Also note that sometimes there will be failed dependencies for an *.rpm file because the *.rpm file thinks it requires a specific version of a library, and you have a different version of that library. Usually this is not important, because more than one version of a library will work, but rpm is not able to determine which library versions are ok and which are not. The only way to find out if it will work with the wrong library version is to install it with --nodeps and try running it.

If some of the files in your *.rpm file already exist, that is a file conflict. But if the version of the file which already exists is exactly the same as the version which is in your *.rpm file, then that is a shared file, not a file conflict.

--test only checks the rpm database; it does not actually check your filesystem. So if --test says there are failed dependencies, then that means the required dependencies are not listed in the rpm database; the dependencies may actually be there. And --test will not detect failed dependencies if the dependency is missing, but is listed in the rpm database. Likewise --test does not detect file conflicts if the file exists, but is not listed in the rpm database.

The man page included with rpm 4.0.3 says to use --nobuild to test if a package can be installed, but that is wrong; use --test.

Many *.rpm files include an install script, and you need to have all the utilities used in the install script to successfully install an *.rpm file. --test does not check to see if you have all these utilities. So --test is not always correct when it says whether or not you can install an *.rpm file. The only way to find out for sure is to try to install it.

If you have an *.rpm file which you want to install or use, but it has failed dependencies, and you have some other *.rpm files, and you want to know which, if any, provides the missing dependencies, then you have to do something like this:

for A in *.rpm
do echo -e "\n\ndependencies provided by $A" >> output
rpm --query --provides -p $A >> output
done

Then you search the file 'output' for the missing dependency. If you want to know which, if any, of your *.rpm files contains a certain file, then you do the same thing, except change '--provides' to '-l'.

To install an *.rpm file:

rpm --install ical-2.1-1.i386.rpm

but change 'ical-2.1-1.i386.rpm' to the name of your *.rpm file. Or you can install many *.rpm files at once with:

rpm --install *.rpm

If any of the packages cannot be installed, none of the packages will be installed. If you install with the --relocate option, and the package is not relocatable, the package will not be installed.
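Since one bad package stops all of them, you may prefer to install the packages one at a time, so that the good packages still install; here is a minimal sketch, where 'failed' is my own file name.

for A in *.rpm
do rpm --install $A || echo "$A was not installed" >> failed
done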
Usually an exit code of 0 means the rpm package was probably completely and successfully installed, and a nonzero exit code means the rpm package was not installed or not completely installed, but not always. Many rpm packages include install scripts, and the install scripts sometimes fail; I think rpm always checks the exit code of the install scripts, but some install scripts do not return a nonzero exit code when they fail, so rpm is sometimes wrong about whether the install script succeeded or failed. Sometimes if rpm thinks an install script failed, it does not install the package; other times when an install script fails, rpm does install the package. I have installed rpm packages, and there were no error messages, and the exit code was 0; but the package was not correctly installed; one of the files was truncated and other files were not installed. A nonzero exit code may mean that none, some, or all files were installed; that the install scripts succeeded or failed; that the package was listed as installed in the rpm database or not listed. A nonzero exit code could mean anything. There is no way to verify that an rpm package was completely and successfully installed. The best you can do is to check the exit code, which is usually correct but not always.

Sometimes rpm displays incomprehensible error messages. For example: 'error -2 reading header: Success' means the rpm package is corrupted. 'unpacking of archive failed on file ...' also means the rpm package is corrupted, unless the full message is 'unpacking of archive failed on file ...: -9: Operation not permitted', which means that there is already a different kind of file with the same name; perhaps rpm is trying to create a link, but there is already a directory with that name.

Sometimes when you install an rpm package, you get error messages saying user ldt does not exist, using root, or group pvm does not exist, using root. These error messages do not cause rpm to return a nonzero exit code; I guess they are warnings, not errors. I usually ignore such messages, but it might be better to do useradd ldt or groupadd pvm before installing the package. Why don't they put the useradd and groupadd commands into the install scripts?

If the *.rpm file was already installed, it will not install again. But if you add the --replacepkgs option, then the version which is already installed will be uninstalled, and then your *.rpm file will install.

If there are failed dependencies, your *.rpm file will not install. But if you add the --nodeps option, then your *.rpm file will probably install even if there are failed dependencies. However, your *.rpm file may include an install script, and the failed dependencies may be something required by the install script, and if the install script fails, then the installation will fail. So it is better to resolve the dependencies, rather than using --nodeps. Besides, if there are any failed dependencies, you will not be able to run the program until the dependencies are resolved, so you might as well resolve the dependencies before you install the *.rpm file.

Suppose your *.rpm file includes a file named '/usr/bin/foo', and there is already a file named '/usr/bin/foo' listed in the rpm database. If the checksum and other file information in the rpm database match the information in your *.rpm file, then it is a shared file, and your *.rpm file will install. But if the information does not match, then it is a file conflict, and your *.rpm file will not install.
But if you use the --replacefiles option, then your *.rpm file will install, and when it installs, it will install its version of /usr/bin/foo, overwriting the other version of /usr/bin/foo.

If you have two *.rpm files which have a file conflict with each other, and you want to install both, decide which version of the conflicted file you want. Install the *.rpm file with the version you do NOT want first, then install the *.rpm file with the version you do want, using the --replacefiles option. But note that the *.rpm file which you installed first will not know that its file has been overwritten; if you try to verify that package, it will say that that file has been corrupted; and if you uninstall that package, it will remove that file with no error or warning messages, even though that file really belongs to the other package. I think that when you use --replacefiles, rpm should adjust the rpm database so that the package whose files have been overwritten no longer thinks it owns the overwritten files. It would be more accurate to say --ignorefileconflicts or --allowoverwrites instead of --replacefiles.

Also note that --replacefiles works for replacing files with other files, directories with other directories, etc; but --replacefiles does not work for replacing files with directories, directories with files, etc. If you have two *.rpm files, one contains a file named foo and one contains a directory named foo, and you try to install both, then the second will fail to install with an error message about file conflicts. If you try to install it with --replacefiles, it will fail to install with a message about operation not permitted. You have to install the first, delete foo, then install the second with --replacefiles. There are actually two file conflicts: the real file conflict and the file conflict in the rpm database; you have to delete the file/directory/etc to fix the real file conflict, and you have to use --replacefiles to fix the file conflict in the rpm database.

Suppose your *.rpm file includes a file named /usr/bin/foo, and there is already a file named /usr/bin/foo, but /usr/bin/foo is NOT listed in the rpm database. This is not a file conflict; your *.rpm file will install. /usr/bin/foo will be renamed to /usr/bin/foo.rpmorig, and the version from your *.rpm file will be installed as /usr/bin/foo. If there is already a file named /usr/bin/foo.rpmorig, then the old version of /usr/bin/foo.rpmorig will be overwritten.

If your *.rpm file includes an install script, then when you install it the install script will run. If there are errors running the install script, then you will probably be stuck with an incomplete installation. This could occur when you are trying to install linux onto a new hard drive. The install scripts use various utilities, and they use the utilities on the hard drive you are installing to, not the utilities on your boot disk. And if you have not installed the utilities to the hard drive yet, then the install script will fail.

The best way to handle an incomplete installation is to figure out why the installation failed, fix the problem, and then install the *.rpm file again. Sometimes an incomplete installation is listed in the rpm database as installed, and sometimes it is not listed in the rpm database. If the rpm database says your *.rpm file is already installed, you will have to use --replacepkgs when you install it again. If the rpm database says it is not installed, you will get a lot of *.rpmorig files when you install it again.
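Here is a minimal sketch of installing again after an incomplete installation; ical is the same example package as above. rpm --query asks the rpm database whether it thinks the package is installed.

if rpm --query ical > /dev/null
then rpm --install --replacepkgs ical-2.1-1.i386.rpm # the database says it is installed
else rpm --install ical-2.1-1.i386.rpm # not listed in the database
fi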
I have had the experience of installing an *.rpm file, and the exit code was 0, so it apparently was installed successfully, but it was not; because input/output errors occurred while reading the cdrom, the *.rpm file data was wrong, and rpm did not detect the error. I did not realize there was a problem until I noticed that some other programs did not work; they complained about a certain file being corrupted. I used rpm to query the rpm database to find out which package owned that file, and it said no package owned that file. I had to search every package on the cdrom to find out which package owned that file, then I fixed the problem by reinstalling that package. I think that sort of error is rare. But that shows that rpm is not always able to tell when an *.rpm file has been corrupted.

If you want to install just one file from an rpm package, use rpm2cpio to convert the rpm package into a cpio archive, and use cpio to unarchive the one file you want.

If you have some *.rpm files installed, you might want to occasionally run the following command:

rpm --verify -a

That tells rpm to check every file listed in the rpm database, and make sure each file actually exists, has the correct checksum, etc. Since it verifies the checksum for each file, it should detect if a file is corrupted. Then it displays a list of files which do not match. This takes a long time. The output will tell you if any of the files are corrupted. However, interpreting the output is tricky; see man rpm. I have changed a lot of configuration files and documentation files; rpm --verify -a reports all these "errors", and it is hard to see the real errors for all the bogus errors. rpm --verify also reports many errors even though the files were installed that way, and it reports failed dependencies which are not a problem; and if you have installed rpm packages with --replacefiles, then I think the replaced files will be reported as changed. I think many of these bogus errors occur because the rpm packages were not put together correctly in the first place; perhaps the install script changes the time/owner/mode of a file, but rpm thinks the time/owner/mode should never change. rpm --verify reports so many bogus errors that it is hardly worth bothering with.

If you run a command like:

rpm --verify -f /usr/bin/foo

then rpm looks up which package /usr/bin/foo belongs to, and verifies all files belonging to that package. There is no command to verify a single file. If rpm says the file does not belong to any package, then you may have spelled the file name wrong, or you may have forgotten the leading slash, which is required, or the file may not have been completely installed and so was never added to the rpm database, as described above under --query; in that case you need to reinstall the package, and good luck guessing which package the file is supposed to belong to.

rpm --verify lists files which are not ok; therefore if rpm --verify displays no output, then all files are ok.

If you remove an rpm package, and you have changed any of the files, it deletes the files; it does not skip the files you have changed.
If you remove an rpm package, and you have added files to any of the directories created by the rpm package, it displays a warning message saying that it is not deleting the directory because the directory is not empty; it deletes its files but not your files.

If you remove an rpm package, but the uninstall script fails, then the remove is aborted; the package is not removed. However, the uninstall script may have been partially run, so the package may have been partially removed.

If you run mc (Midnight Commander), and you press enter while an *.rpm file is selected, then mc shows what files are in the package, and you can view any of the files. mc pretends that the *.rpm file contains a directory named 'INFO'; package information including scripts can be viewed by viewing the files in INFO. This may be easier than using rpm -q to view package information.

**rsync

rsync creates temporary files in the local directory which rsync is copying files to or from. (Or are the temporary files created in the current directory?) One temporary file is created for each file. The temporary files have the same names as the original files, with random characters added. So if the local directory is readonly, or has no free disk space, or if adding extra characters to the filenames will create illegal filenames, then rsync will fail. In these cases use rsync --temp-dir=/tmp to put the temporary files in /tmp instead of in the local directory.

rsync uses ssh. rsync will not work if sshd is not running on the remote computer. rsync prompts for the password for logging in to the remote computer. If the remote computer has no password, rsync still prompts for the password. If you press enter without entering a password, the remote login fails. If you press space and then enter, the remote login succeeds.

**sed

Also see ed, because ed commands are very similar to sed commands.

The following changes the colons in the PATH to spaces:

echo $PATH | sed -e 's/:/ /g'

This would be useful for creating a space-separated list of directories for a for command in a shell script.

The following uses sed to change two lines in a makefile:

cat Makefile.orig | sed \
-e 's/^LIBS = -lcurses -ltermcap -lbsd$/LIBS = -lncurses -ltermcap -lbsd/' \
-e 's|^HOME = \\"/usr/local/biocomp/hybrow\\"$'\
'|HOME = \\"/usr/etc/hybrow\\"|' > Makefile

-e means a sed command follows; this example uses -e twice because it uses two sed commands. Both commands are s; s means substitute. s is followed by /, then by the text to find, then another /, then the text to substitute, then another /. The text to find begins with ^, beginning of line, and ends with $, end of line. The first command changes curses to ncurses. The second command changes the configuration directory from /usr/local/biocomp/hybrow to /usr/etc/hybrow. Note that the second command includes many slashes; I used s|old|new| instead of s/old/new/ so that I would not have to put backslashes before every slash; you can use any character in place of /. Note the line does not include a newline; sed removes the newline before editing the line, and adds a newline to the end after editing. Also note that this example changes two lines in a file; actually it changes every line which matches; if another line were exactly the same as the line to be changed, that line would be changed too.
This same thing could have been done with the c command instead of the s command, except I could not make the c command work, and using the s command seems a little safer, because with c you could give the wrong line number and change the wrong line. It also could have been done with ed instead of sed. The same thing could also have been done with:

cat Makefile.orig | sed -e 's/curses/ncurses/' \
-e 's|/usr/local/biocomp/hybrow|/usr/etc/hybrow|' > Makefile

I think this example is a little less precise and a little slower than the first example, but it is also easier to understand.

Here is a sed command to change newlines to spaces:

sed -n -e H -e '$g' -e '$s/\n/ /g' -e '$p'

However, that will insert one space at the beginning, and there will be one newline at the end. If you do not want the extra space at the beginning, do this instead:

sed -n -e 1h -e '2,$H' -e '$g' -e '$s/\n/ /g' -e '$p'

There is no way to tell sed to not put a newline at the end. But bash can get rid of it for you like this:

DATA=`cat foo | sed -n -e H -e '$g' -e '$s/\n/ /g' -e '$p'`

and $DATA will be the output of sed, without the trailing newline.

Actually, if you really want to convert newlines to spaces, I think it is better to use bash, like this:

DATA=$(echo $DATA)

or

mv foo foo.orig; echo $(cat foo.orig) > foo; rm foo.orig

However, bash also converts multiple spaces to single spaces; if you do not want that, you will have to use sed, as described above.

You cannot convert newlines to spaces with something like:

sed -e 's/\n/ /'

The substitution will not find any newlines to replace with spaces. Some sed commands, like s, do not see the newline at the end of the line. Other commands, like H, do see the newline at the end of the line. Apparently sed removes the newline from the end of the line for some commands, and then adds the newline back after the command is done. The previous examples of how to convert newlines to spaces used H after each line, and then g, s, and p at the end of the file; after using H more than once and then using g, the current line had newlines in the middle, and s was able to find the newlines in the middle, but not the newline at the end.

The following uses sed to convert the output of unzip -l into the format of dialog. sed picks lines which match a pattern, and does a set of commands for those lines. sed adds some quote marks to the beginnings of the lines, adds some quote marks to the ends of the lines, and adds the lines to the hold space. Then when it is done, it takes the data from the hold space, converts newlines to spaces, and writes the data to standard output.

ZD_list=$(unzip -l $1 | sed -n -e '/^....... ..-..-.. ..:.. /{' \
-e 's/^/"/' -e 's/$/" ""/' -e H -e '}' -e '$g' -e '$s/\n/ /g' -e '$p')

The following converts unix text to DOS text by adding a control-m, carriage return, to the end of every line. I tried using '\r' instead of an actual control-m, but it did not work with '\r' and it does work with an actual control-m. (The control-m is shown here as '^M'; to type a literal control-m in bash, press control-v and then control-m.)

sed -e 's/$/^M/'

The following removes '#DOS> ' from the beginning of each line. I thought it might require a backslash before '#' and '>', but it does not work with backslashes, and it does work without backslashes.

sed -e 's/^#DOS> //'

Usually if you do not know whether or not backslashes are required, you can use the backslashes anyway, and it will work because unneeded backslashes are ignored.
But the previous example did not work with backslashes, because sed does something different when you have a backslash before '>': '\>' matches the end of a word, not '>'.

echo 'aa>aa' | sed -e 's/\>/b/' # displays aab>aa
echo 'aa>aa' | sed -e 's/>/b/' # displays aabaa
echo 'aa+aa' | sed -e 's/\>/b/' # displays aab+aa
echo 'aa aa' | sed -e 's/\>/b/' # displays aab aa

Characters like '.' and '*' have special meanings in a sed pattern; if you want to ignore the special meaning and search for the exact character, then you need to put a backslash before it; the leading backslash disables the special meaning. But other characters like '>', '<', and '?' also have special meanings, only the special meaning is ignored unless you put a backslash before it; the backslash enables the special meaning. So sometimes a backslash disables the special meaning, and sometimes a backslash enables the special meaning. Am I the only one who thinks this is confusing, that this is a stupid way to do things?

A period matches any character; if you want to find a period, you need to put a backslash before the period.

echo 'aa.aa' | sed -e 's/./b/' # displays ba.aa
echo 'aa.aa' | sed -e 's/\./b/' # displays aabaa

Adding a new line of text is tricky. The following does not work:

echo 'foo' | sed -e 's/foo/foo\nbar/' # displays foonbar

In order to add a new line of text, you must insert a newline into the command line with a backslash before the newline; this is tricky because bash is reluctant to put newlines in command lines. First create environment parameters for backslash and newline like this:

BackSlash=\\
NewLine='
'

Note that the command to create the environment parameter NewLine is two lines; bash usually assumes that a newline is the end of the command, but in this case bash does not assume that the newline is the end of the command, because the newline is within single quotes. Then you can do either of the following:

echo 'foo' | sed -e "s/foo/foo${BackSlash}${NewLine}bar/"
echo 'foo' | sed -e "/foo/a${BackSlash}${NewLine}bar"

Both of those display:

foo
bar

Or you can do:

echo 'foo' | sed -e "/foo/i${BackSlash}${NewLine}bar"

which displays:

bar
foo

You can insert more than one line. You must have ${BackSlash}${NewLine} at the beginning of each new line you insert. For example:

echo -e "old line 1\nold line 2" | sed -e "/1/a${BackSlash}${NewLine}"\
"new line 2${BackSlash}${NewLine}new line 3"

sed normally prints every line to standard output after making edits. The -n option suppresses autoprinting, so lines are only output if you specifically print the line, probably with the p command. This makes sed act sort of like grep. If you ever want to do something like:

grep something | sed something

you can probably do without grep by using sed -n. The following:

pacmd list-sinks | sed -n -e '/^ \* index: / { s/^ \* index: // p }'

displays the pulseaudio default sink. The sed command starts with a pattern, so only lines which match the pattern are processed. The curly brackets enclose a set of two commands to run on matching lines. The first command is s, and deletes most of the line. The second command is p, which prints all of the line which is left. Thus this filters out all but part of one line.
The following does the same thing, though it does not follow the rules of man sed:

pacmd list-sinks | sed -n -e 's/^ \* index: // p'

If you want to do sed -e 'something', you can skip the -e and abbreviate it to sed 'something'.

**sz
**lsz

If you want to test if sz and rz are working, and if you try to run 'sz file <> named_pipe' and 'rz <> named_pipe', running the two commands on different virtual terminals and from different directories, but using the same pipe, then it does not work. It seems like it should work. I do not know why it does not work. This was tested with sz (lrzsz) 0.12.20, fedora 2 linux, kernel 2.6.5.

The source for sz and rz suggests that sz and rz can be tested by running the command:

sz file < named_pipe | (cd tempdir ; rz > ../named_pipe )

That does work. If you want to check the exit codes of both sz and rz, do this:

( sz file < named_pipe ; echo exit code of sz is $? >> exit_codes ) | ( cd tempdir ; rz > ../named_pipe ; echo exit code of rz is $? >> ../exit_codes )

If you use sz and rz to transfer one file and the size of the file is 0, then the exit code of sz will be 139 and the exit code of rz will be 141. The destination file will be created and will have a size of 0. The file is transferred correctly, but sz and rz say the file transfer failed. I say this is a bug. This was tested with sz (lrzsz) 0.12.20, fedora 2 linux, kernel 2.6.5.

**shift bash internal command

shift changes the command line parameters. $1 is lost, $2 becomes $1, $3 becomes $2, etc. $0 remains unchanged.

**tar

By default, when tar creates an archive it writes the archive to /dev/rmt0; and when extracting, listing contents, or comparing, tar reads from /dev/rmt0. The '--file' option tells tar to write the archive to a different file or device pseudofile; or to extract, list, or compare from a different file or device pseudofile. For example:

tar --create --file stuff.tar file1 file2 # write to a file
tar --create --file /dev/nqft0 file1 file2 # write to a device pseudofile
tar --create --file - file1 file2 # write to standard output
tar --create --file=stuff.tar --files-from=- # get list of files to put into archive from standard input, and put archive into stuff.tar.

'-' means standard input or standard output, depending on context. Note the equals sign after '--file' and '--files-from'; this could be either an equals sign or a space, whichever you think looks better.

tar --list --file stuff.tar # read from file
tar --list --file /dev/nqft0 # read from device pseudofile
tar --list --file - # read from standard input

The output of list is one filename per line. Directories have a '/' on the end of the name of the directory. I am not sure if the trailing '/' is part of the name stored in the tar archive, or if --list adds it to the name.

tar --create --exclude file1 file2 file3 # makes a tar of file2 and file3.

The first file/directory/etc after '--exclude' is the file/directory/etc to be excluded, and the rest are the list of files to make into the tar. file1 is excluded, but since file1 is not on the list of files to make into the tar, '--exclude file1' is ignored; the command would be the same if that part was left out.

tar --create --exclude file2 * # If '*' expands to 'file1 file2 file3', then file2 will NOT be excluded!

Bash expands the command to:

tar --create --exclude file2 file1 file2 file3

'file1 file2 file3' is the list of things to put in the tar. So 'file2' is being excluded, and it is being included. tar resolves this conflict by including it.
Apparently --exclude can only be used to exclude a file or subdirectory from a directory which is being tar'ed. For example:
tar --create --exclude file2 . # If there are three files in the current directory named 'file1', 'file2', and 'file3'; then file2 will be excluded; only file1 and file3 will be put into the tar.
tar --extract # unarchive all files
tar --extract file1 # unarchive only file1
If you tell tar which files to unarchive, you need to give the file name exactly as it is in the archive. Usually you do --list, and then copy the name exactly as it was displayed by --list. If you want to unarchive a directory, --list will show a '/' at the end of the directory name; but when you give the name to --extract, you can include the trailing '/' or skip it; --extract works the same either way.
tar --compare # compares all files
tar --compare file1 # compares only file1
'tar --compare' has output like the following:
etc/hosts: File does not exist
etc/exports: Mod time differs
etc/group: Mod time differs
etc/group: Size differs
usr/lib/autofs/lookup_program.so: Data differs
So 'tar --compare' first checks to see if the file which exists in the tar also exists in the filesystem. If the file does exist, it checks the time and size. If the time and size match, it checks the data. If the files match, then tar displays nothing. But if the --verbose option was used, then tar displays the name of each file before comparing it. So if the files match, just the name would be displayed. If the files did not match, then the output would look like the following:
etc/hosts
etc/hosts: File does not exist
If you want to tar everything in the current directory, you could use
tar --create *
But that would skip files whose names begin with '.', such as '.bashrc', so only use '*' if you are sure there are no files whose names begin with '.'. You could use
tar --create * .*
But that would include '.' and '..', which you do not want to include. You could use
tar --create --exclude . --exclude .. * .*
But that will not work because you can not exclude something which is given in the list of things to include. Also note that if no files match the wildcards, then bash will pass the wildcards to tar, unless bash has been configured otherwise, and I am not sure what tar would do then. I am not sure what would happen if you put . and .. in an exclude file. You could cd to the parent directory and give tar the name of the current directory. The best way to tar everything in the current directory is to use
tar --create .
'.' is the current directory. This uses a feature of tar, in which if you tar a directory, tar assumes you want to tar everything in the directory except . and ..
If you tar a directory named 'stuff' with
tar --create stuff
then the names of files in stuff/ will be saved in the tar file as 'stuff/filename1', 'stuff/filename2', etc. If you tar the current directory with
tar --create .
then you might expect that the file names would be saved in the tar file as './filename1', './filename2', etc. Wrong! The filenames are saved as 'filename1', 'filename2', etc. If you tar the current directory with tar --create . and then use 'tar --list' to display the contents, you will see that the first thing in the tar file is './'. This is meaningless, because you can not extract the current directory from a tar file, because it already exists, or else it would not be the current directory.
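If you want to check how the names are stored, you can list both forms side by side (a sketch; 'stuff' and the archive names are made up):
tar --create --file=/tmp/a.tar stuff
tar --list --file=/tmp/a.tar # names begin with 'stuff/'
cd stuff
tar --create --file=/tmp/b.tar .
tar --list --file=/tmp/b.tar # './' first, then the names as stored
What --list shows may depend on the tar version; some versions store and display the names with a './' prefix instead of bare names.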
If you give the name of a directory to be archived, compared, or unarchived, then tar will include all files in the directory. If you have an archive named foo.tar with a directory named 'd' and a file in the directory named 'f', and if you enter the following command:
tar --extract --file=foo.tar d d/f
then tar will unarchive directory d, then tar will unarchive all the files in d, which will include d/f, then tar will try to unarchive d/f, but it will not be able to find d/f in the archive because tar has already unarchived it and moved past it, and you will get an error message saying there is no d/f in the archive; even though there is a file named d/f and it was successfully unarchived. If you do not want tar to automatically include the contents of directories, you might think you could use the --no-recursion option. However, --no-recursion has no effect when used with --extract; perhaps --no-recursion is only for use with --create. The only way to unarchive a directory from a tar archive without automatically unarchiving all files in the directory is to use cpio instead of tar.
If you are user root and you unarchive files with tar, tar gives the files the UID and GID which the files had when they were archived. If you are not user root, tar gives the files your UID and GID. There is no way to change this. There is an option --same-owner, but that is automatic when you are root and not allowed when you are not root, so it is useless. There are options for changing the UID and GID when files are processed, but these only work when making an archive; these are ignored when unarchiving. When you are making an archive for distribution to other computers, you probably should change the UID and GID of all files to 0 with --owner=root and --group=root, because the UID and GID numbers from your computer may be wrong on other computers. --owner=0 is an error; you have to use --owner=root.
If you list the contents of a tar file on a tape, or if you extract one file from a tar file on a tape: the tape needs to be positioned at the beginning of the correct file/volume/backup before you give the tar command, and afterwards the tape will be positioned at the end of the same file/volume/backup. If you write data to a tape with 'tar --create', when it is done the tape will be positioned at eom, end of media, the beginning of the next file/volume/backup, the place to make a new file/volume/backup. The tape should have been positioned at eom before you used 'tar --create'; if it was not, then you just erased the tape, because ftape is supposed to automatically delete everything past the current position when it writes data to the tape.
If you tar some things which begin with '/', then tar strips away the initial '/' when it saves the name. For example, a file named '/root/notes' would be saved as 'root/notes'. But if you use the --absolute-paths option, then the file would be saved as '/root/notes'.
If you exclude a file from a tar file with tar --create --exclude, then the name of the file you exclude must be given in the same form as it will have in the tar file. If you do not know what form the file name will have in the tar file, make the tar file and display the contents. For example:
tar --create --exclude ./junk . # wrong!
is wrong. The file 'junk' will be saved as 'junk', which is different from './junk', even though it means the same thing. It should be:
tar --create --exclude junk .
What if you tar some files beginning with '/', and tar throws away the initial '/'?
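Presumably you would then have to give the excluded name without the initial '/' too. A quick way to find out (a sketch; the names are made up):
tar --create --file=/tmp/abs.tar --exclude root/junk /root
tar --list --file=/tmp/abs.tar
If 'root/junk' does not appear in the listing, then --exclude wants the name in the stripped form, without the initial '/'.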
Probably the same thing applies if you use the --exclude-from option. Maybe you can only exclude one thing with --exclude; maybe if you have more than one --exclude only the first one counts.
The tar option --block-size does two things. First, tar will make the archive a size which is a multiple of the block size. If the size of the archive is not a multiple of the block size, tar will add nul to the end of the file as many times as necessary to make the size of the archive a multiple of the block size. If the size of the archive is 1001 bytes and the block size is 512, tar will add 23 nuls to the end of the archive, and then the size of the archive will be 1024. nul is ASCII character 0. The second thing that block size does is whenever tar writes to or reads from the archive, it will always transfer block size bytes of data, or maybe it sometimes uses a multiple of block size. I guess that has something to do with the way that tar calls the kernel services to read from or write to a filehandle. My tar says that --block-size is obsolete and has been replaced with --blocking-factor.
When you unarchive files with tar, tar tells the operating system to create the files, and the operating system creates the files with you as the owner. After each file is created, tar may tell the operating system to change the UID and GID of the file to match the UID and GID numbers stored in the tar archive. If you are not user root, tar does not change the UID and GID numbers of the files. This is usually a good idea, because if tar changed the UID and GID numbers, then you would no longer be the owner of the files, and you might not have permission to do anything with the files; if the files were in your way and you wanted to delete the files, you would have to become user root or ask user root to do it for you. If you really want tar to change the UID and GID numbers of the files, you could use the option --same-owner; but that usually does not work because you are usually not allowed to do it. If you really want tar to change the UID and GID numbers of the files, you may have to become user root or ask user root to do it for you.
But if you are user root, tar does change the UID and GID numbers of the files. This is good for backups. If you are user root and are restoring files from a backup, you probably want to reset the UID and GID of the files you are restoring to whatever the UID and GID were when the files were backed up, so the files will have the same owners as when the files were backed up. But this is a problem if you are installing some new program you downloaded from the internet. When user root installs some new program, user root usually wants the files of the new program to be owned by user root. But when tar unarchives the files, tar changes the UID and GID of the files to what the UID and GID were when the files were archived; the files were probably archived on a different computer, and the UID and GID numbers from the other computer are probably wrong for your computer. We need an option to tell tar to not change the UID and GID. Unfortunately, there does not seem to be any such option. (Newer versions of GNU tar have a --no-same-owner option for exactly this; the tar I was using did not seem to.) If you are lucky, whoever archived the files may have set the UID and GID to 0 before archiving the files; UID 0 means root owns the file, so if tar changes the UID and GID to 0, that is good, that is what you want.
But many programs you download do not have the UID and GID set to 0, so after tar has unarchived the files and changed the UID and GID to whatever, you will have to change the UID and GID back to 0, so that root will be the owner of the files. The easy way to do this is to enter the command 'chown -R 0:0 *'. However, that is not very specific about what files to change; it could easily miss some files or change some files it was not supposed to change. You could tell tar to list all the files in the archive and change the files one at a time, but that is very tedious. You could change to being a nonroot user before unarchiving the files; then tar will not reset the UID and GID to match the archive, but that will not make the files owned by root, and the nonroot user may not have write permission for the directories where the files are to be installed. We could use cpio to unarchive the *.tar file, but cpio is missing some tar options like --keep-old-files. Or you could make a bash function or shell script to list all the files in the tar archive, reformat the list to change newline to space, remove files from the list if the files already exist, unarchive the files in the list, and change the owner of the files in the list.
**cp
cp does not have an option to not overwrite preexisting files. (Newer versions of GNU cp have a --no-clobber option; the cp I was using did not.) If the preexisting files have newer dates, you can use --update. Otherwise, you can use --interactive, or you can use --backup and restore the overwritten files later.
**cpio
The cpio option --io-size does two things. First, cpio will make the archive a size which is a multiple of the block size. If the size of the archive is not a multiple of the block size, cpio will add nul to the end of the file as many times as necessary to make the size of the archive a multiple of the block size. If the size of the archive is 1001 bytes and the block size is 512, cpio will add 23 nuls to the end of the archive, and then the size of the archive will be 1024. nul is ASCII character 0. The second thing that --io-size does is whenever cpio writes to or reads from the archive, it will always transfer block size bytes of data, or maybe it sometimes uses a multiple of block size. I guess that has something to do with the way that cpio calls the kernel services to read from or write to a filehandle. I think the option --block-size is the same as --io-size, except you give the block size in a different format.
To unarchive some files from a cpio archive:
cpio --extract 'kde/share/config/*' < kdebase-B.cpio
But if the directory kde/share/config does not exist, then you will get an error message saying:
cpio: kde/share/config/desktop: No such file or directory
You have to make the directory before unarchiving the file, or else use the --make-directories option.
When unarchiving files, cpio sets the file date to the current date. If you want the file date to be the same as when the file was archived, use the option --preserve-modification-time. --rename tells cpio to prompt for new names for all files, not only files which conflict with preexisting files. There is no default file name; if you press enter without entering a file name, the file is skipped.
If you are unarchiving files, and if a file from the archive conflicts with a file which already exists, then: if the file which already exists is older than the file from the archive, cpio overwrites the file which already exists with the file from the archive; cpio does NOT display a message telling you that cpio is overwriting the file.
If the file which already exists has a time which is the same as or newer than the file from the archive, cpio skips unarchiving that file, and cpio displays a message telling you which file is being skipped and why. cpio decides whether or not to overwrite a pre-existing file by comparing the times of the files. File size, permissions, owners, etc have no effect on whether or not cpio overwrites pre-existing files. There is a rename option, which tells cpio to prompt you for new names for every file which is being unarchived, but there is no interactive option to tell cpio to prompt about file conflicts, but not prompt if there is no file conflict. There is an unconditional option, to tell cpio to always overwrite files which already exist, but there is no keep old files option to tell cpio to never overwrite files which already exist.
**gzip
By default gzip removes the original file when compressing or uncompressing. If you do not want to remove the original file:
gzip --to-stdout t > t.gz
gzip --uncompress --to-stdout t.gz > t
If a compressed file has additional data appended to the end of it, gzip will uncompress the file, and will warn that it is ignoring trailing garbage. If a compressed file is truncated, then gzip will uncompress none of it. It should be possible to uncompress part of it, but gzip has no options for doing that. If a compressed file is corrupted, gzip will uncompress it up to where the error occurs, or maybe gzip will uncompress none of it unless you use --to-stdout or --stdout or -c.
**bash
At login, bash runs /etc/profile and then .bash_profile in the user's home directory. Some redhat linux computers are configured to include /etc/profile.d/*.sh from /etc/profile. If you enter 'bash' at the prompt, bash runs $HOME/.bashrc. Most linux computers are configured to include $HOME/.bashrc from $HOME/.bash_profile, so that .bashrc runs at login. Some linux computers set BASH_ENV to $HOME/.bashrc so that $HOME/.bashrc runs at the beginning of all shell scripts. Some linux computers are configured so that /etc/bashrc is included from $HOME/.bashrc. Note that when you shell out to bash by entering 'bash', the new bash has the environment from the old bash, but not the aliases and functions. I think that a normal user would want aliases to be set up at login and preserved when shelling out to a new bash process, but bash does not work that way. If aliases are put in profile, then the aliases are lost when you shell out to a new bash process. If the aliases are put in .bashrc, then the aliases are not available when you first log in. Thus the usual scheme: put the aliases in .bashrc, and put the command to run .bashrc and the environment parameters in /etc/profile.
You define environment parameters like this:
R2=/cdrom/RedHat/RPMS/
PATH=.:/root/path:$PATH
Then if you have a command like
echo $R2setup-1.9-2.noarch.rpm
then bash finds no R2setup=..., so the command is the same as
echo -1.9-2.noarch.rpm
So there are certain characters which bash looks for when looking for the end of the name of an environment parameter. These characters are space, -, /, $. If you want to mark the end of the name yourself, put the name in curly brackets: echo ${R2}setup-1.9-2.noarch.rpm does what was wanted here.
If you define an environment parameter with
ENV_PAR=testing
then it is available in the current shell, but not in the parent shell, not in any child shells which have already been created, and not in any child shells which may be created in the future.
Then, if you export it:
export ENV_PAR
it will be in any child shells which are created in the future, but it will still not be in any child shells which have already been created, and not in the parent shell either.
If you define an environment parameter with
T=`echo x` # backticks, not single quotes
then bash runs the command 'echo x', the standard output of 'echo x' is two characters, x and newline; then bash substitutes the standard output for the part in backticks, so you might think that environment datum T is two characters, x and newline; but bash throws the newline away, so environment datum T is just 'x', one character.
If you try to define an environment parameter which includes spaces like this:
t=1 2 3 4 5
then bash will display an error message like this:
bash: 2: command not found
This is because bash assumes the first space in a definition of an environment parameter is a command separator; thus bash assumes you meant:
t=1
2 3 4 5
I do not know why bash assumes the first space in a definition of an environment parameter is different from other spaces. If you want to create an environment parameter with spaces, use quotes like this:
t='1 2 3 4 5'
or like this:
LIST_OF_FILES="/fdrewa/fdosadfsa/fdsoad/fdsao/dss
/dsdgsfdg/gfihgfd/fdsfa/gfds/gf
/ghfds/fdsfgsd/gfdsfg/gfdsf/gfasdsa/gfdsa/gfdsa
/gfdsadsa/ghfdsfg/gfdsfg/gfasd/ffdh/jtyreytr/fdgbxc"
Newlines within the quotes are the same as spaces, and multiple spaces are the same as single spaces. Since the newlines are the same as spaces, we do not need to put backslashes before the newlines to tell bash to ignore the newlines.
To redirect standard output, use '>'. To append standard output to a file, use '>>'. To redirect standard error, use '2>'. To append standard error to a file, use '2>>'. You can use the 'set' command to make sure bash does not overwrite a file when redirecting output; use the command 'help set' to display the details. For example:
gzip --to-stdout filename 2>> errors > filename.gz
That compresses a file named 'filename' while appending all error messages to a file named 'errors'.
ProgramName >> output 2>&1
That writes all normal output and error messages from the program named 'ProgramName' to a file named 'output', combining standard output and standard error in a single file. Note that standard output is redirected before the error messages are redirected; this is important. '2>&1' redirects the error messages to standard output. Note that normal output is redirected with '>>', so it will be appended to the file. Error messages go to the same place, so they are appended as well; it is not necessary (and not legal syntax) to write '2>>&1'.
ProgramName > output 2>&1
In this case, the previous contents of the file would be overwritten, because '>' truncates the file when it is opened.
If you run the command
set -o noclobber
then bash will not allow files to be overwritten by redirection. The following shell script
set -o noclobber
dsack=dsick > t || echo nonzero exit code
echo test > t || echo nonzero exit code
echo xx $dsack
will have the following output
./s: t: Cannot clobber existing file
nonzero exit code
./s: t: Cannot clobber existing file
nonzero exit code
xx dsick
Note that the commands with redirection were completed successfully, but the exit codes were set to nonzero, as if the commands had not been successful. Also note that the output of the commands which was redirected vanished; it was not displayed and it was not written to the file.
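If noclobber is set and you really do want to overwrite one particular file, bash lets you force the redirection by using '>|' in place of '>' (this is standard bash behavior, though I have not tested it against every version):
set -o noclobber
echo test > t # fails if t already exists
echo test >| t # overwrites t anyway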
If you redirect the output of a program to a file, and then abort the program before it finishes, then nothing will be written to the file, although the file will be created. Apparently the output is buffered: the file is opened before the program starts, but the program collects its output in a memory buffer (most programs buffer output through the C library when writing to a file), and the buffer is not written to the file until it fills up or the program ends normally; if the program does not end normally, the buffered output is lost. If the program generates a lot of output, the buffer will fill up and the first part of the output will be written to the file.
If you have a shell script or other program named s, and you want to redirect the output to a file and to have the output be displayed at the same time:
s 2>&1 | tee -a output
'2>&1' sends standard error output to standard output. Note that piping '|' is done before redirection, so that when standard error output is redirected to standard output, standard output has already been redirected to the pipe. 'tee' is a program which displays its input and writes it to a file. 'output' is the name of the file which tee writes to. '-a' is an option for tee which tells tee to append to the end of the file if it already exists. If you were crazy, you could have tee write to a FIFO or named pipe, and then redirect the output of tee to a normal pipe. Thus you could pipe something to two different pipes simultaneously.
If you pipe the output of a shell script or other program to tee or some other program, then you can not check the exit code of the first program, because the last exit code is the exit code of tee or whatever you piped the output to. If you need to check the exit code of the first program, create a shell script which runs the program and checks the exit code, and pipe the output of the shell script to whatever. (Newer versions of bash also record the exit codes of all the programs in a pipe in the array PIPESTATUS; ${PIPESTATUS[0]} is the exit code of the first program.)
If the output of a shell script is redirected, and then the shell script runs another shell script, then the output of the other shell script will be redirected too. In other words, if a process is subject to redirection, then any child process created by that process will be subject to the same redirection.
If you use # to mark a comment at the end of a line...
ls # comment # works
ls# comment # does not work, can not find program 'ls#'
so you need to have a space or something before the #, to separate the comment from the command. This applies to shell scripts and to commands typed in at the prompt.
If you want to comment out many lines in a shell script, it is tedious to put # at the beginning of each line. It might be easier to put exit at the end of the shell script, then move the lines to after exit; this will also speed up the shell script because it takes time to read a line, decide it is a comment, and ignore it, but bash does not read lines after exit. Or you could make a fake if block, like:
if ! true; then # begin comment block
this is a comment
more comment
fi
The if condition is impossible, so the if block is never executed. And bash does not check syntax until it executes commands, so you will not get error messages if the lines between then and fi are plain text instead of real bash commands. Or you could make a fake function like this:
not_a_function____this_block_of_text_is_commented_out() {
this is a comment
more comment
}
Most text which would be an error elsewhere is accepted in the function body (though something truly malformed, like an unmatched quote, will still be an error), and you can have many functions with the same function name. But do not blame me if you are so stupid that you try to execute the function.
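Another way to comment out a block, which should work in any Bourne-style shell (a sketch): feed the block to the null command ':' as a here-document. Quoting the delimiter stops bash from expanding anything inside the block, and ':' ignores its input.
: <<'END_OF_COMMENT'
this is a comment
more comment
END_OF_COMMENT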
If you put a phrase in reverse single quotes, bash will assume the phrase is a command, and bash will run that command, and bash will substitute the output of that phrase/command for the phrase in the original command, and then run the original command. For example:
`echo ls`
The output of 'echo ls' is 'ls', so the previous command is the same as:
ls
However, the second version is affected by aliases, while the first is not; so if 'ls' is an alias for 'ls --color-tty' or something, then the two commands are not quite the same. So the first version could be used as a way to evade an alias. If there are no aliases, then the two commands are exactly the same. Note that this is an example of reverse single quotes; normal single quotes are different.
In the following examples:
command1 && command2 ; command3
command2 will be run only if command1 succeeds, but command3 will be run no matter what.
command1 ; command2 &
command1 runs in the foreground. command2 runs in the background.
command1 && command2 &
command1 and command2 run in the background.
If a line ends with a backslash, then the next line is a continuation of the previous line (unless the backslash comes after a #). The lines are combined and executed as if the backslash and newline did not exist. For example:
l\
s
is the same as:
ls
Note that the l and the s were put together without a space between them. If you want a space, remember to put a space after the l and before the backslash, or before the s. In the following commands:
tar -xf \
   foo
there is one space after -xf and three spaces before foo, so that is equivalent to 'tar -xf    foo' with four spaces between -xf and foo. Four spaces is the same as one space, but indenting the second line makes the script more readable. This is usually used in shell scripts when lines are longer than the width of the display.
If you have a long quoted string which you want to split into two lines, you can use a backslash at the end of the first line, but you must end the quote before the backslash and requote at the beginning of the next line, and you must not put any spaces before the backslash or at the beginning of the next line. For example:
sh -c 'echo 1;echo 2;echo 3;echo 4;echo 5;echo 6;echo 7;'\
'echo 8;echo 9;echo 10;echo 11;echo 12'
Bash removes the backslash and newline; then there are no spaces or other separators between the two quotes; two quotes with no separators between them are joined into one quote.
If you have a command like:
foo ; bar &
then foo runs in the foreground and bar runs in the background. If you want to run both foo and bar in the background, try one of these:
( foo ; bar ) &
foo & bar &
sh -c 'foo ; bar' &
If you have a command like:
foo && bar &
then both foo and bar run in the background.
In the following command:
tar 2> f &
the tar error message is written to the file, not displayed. But in the following command:
tar & 2> f
the tar error message is displayed; the file is created but nothing is written to the file. '&' separates commands, like ';' or newline, so the previous command is the same as:
tar &
2> f
If you run
set -e
then the shell script will abort if a command returns a nonzero exit code:
false
echo did not exit
false returns a nonzero exit code, so the script aborts, so the echo command does not run.
false || true
echo did not exit
false returns a nonzero exit code, so true runs. true returns an exit code of zero, so the exit code of 'false||true' is zero, so the script does not abort, and the echo command runs.
! true
echo did not exit
true returns an exit code of zero; ! inverts the exit code. So the exit code of '! true' is nonzero, so you might expect that the script would abort. Wrong! The script does not abort; the echo command runs.
if false ; then echo foobar; fi
false returns nonzero, but bash does not abort the script because the nonzero exit code occurred within an if test. The same is true of while tests. But some minimal versions of sh like ash may abort scripts for nonzero exit codes within if or while tests.
If you run a command like:
bash -c 'name_of_script_file'
then bash will display an error message if you do not have execute permission for the script. If you really want to run a script without execute permission, do this:
bash -c '. name_of_script_file'
Bash can test if a string matches a pattern. For example, to check if a filename ends in '~':
if [[ $filename == *~ ]]
then
something
fi
In fedora core 8 from 2007, this did not work if the four spaces inside the brackets were skipped. So the spaces are needed. It also did not work if there were quotation marks inside the brackets, even though man bash says you can use quotation marks.
**if bash internal command
An if command has the following format:
if insmod /root/modules/2.0.32/cm206.o cm206_base=0x300 cm206_irq=10
then
echo
elif insmod /root/modules/2.0.32/cm206.o cm206_base=0x300 cm206_irq=10
then
echo
else
echo unable to load module cm206.o
rmmod cdrom
exit 1
fi
Or you can put it all on one line:
if test -f $FILE; then echo $FILE exists; fi
The 'elif ... then ...' section is not required; you can have as many 'elif ... then ...' sections as you want. The 'else ...' section is not required. There can be any number of commands between 'if' and 'then', but there is usually only one, and that one is usually test. If you want to test if a file name matches a pattern, use grep between 'if' and 'then'. I am not sure what would happen if there were no commands between 'if' and 'then'. There can be as many commands as you want after 'then', but there must be at least one. If there are no commands after 'then', try adding something harmless like 'echo', like in the long example above, or 'echo > /dev/null', or 'true'. It must be a real command; a comment does not work. The long example above loads a device driver module, which sometimes fails to initialize the controller card; in which case it tries again. The short example above checks to see if a file exists, and if it is a normal file, not a directory, link, device, etc.
If you want to do if not, put an exclamation point and a space at the beginning of the last command between if and then. This works because if a command begins with '! ', then bash reverses the exit code, so that an exit code of 0 becomes 1, and an exit code of 1 or more becomes 0. For example:
if ! test -f $FILE
then
echo $FILE does not exist
fi
Or put an exclamation point as an option to test:
if test ! -f $FILE
then
echo $FILE does not exist
fi
You can put an if command inside another if command.
**test bash internal command, and external command too.
I think the program test and the bash internal command test are identical. You can use test anywhere in a shell script, and you can run test at the prompt, but test is usually used with 'if', 'while', and 'until' commands. Test sets the exit code, and does not do anything else. For example:
test "$REPLY" = "HELP!"
The exit code is zero if the two strings are the same; otherwise the exit code is one.
The double quotes are not needed, but are a good idea in case $REPLY is a string of zero characters. The space before the equals sign and the space after the equals sign are needed; if these spaces are left out then there will not be an error, and the exit code of test will be zero.
test -f filename
The exit code is zero if there is a normal file named 'filename' in the current directory; otherwise the exit code is nonzero. bash uses square brackets as an alternate form of test. The following two commands are the same:
test -f filename
[ -f filename ]
The version with square brackets is used more frequently; I do not know why. The version with square brackets is one character shorter, but I think it is more difficult to understand for someone who is just starting out in Unix. If you are using the external program test instead of the bash internal command test, you may still be able to use square brackets in place of test, because '[' is usually set up as a link to 'test'. If you have a symbolic link to a file which does not exist, test -e will say the link does not exist.
**while bash internal command
**until bash internal command
A while or until loop is like this:
while read
do
echo $REPLY
done
Or you can put it all on one line like this:
while read; do echo $REPLY; done
The part between 'while' and 'do' is run; then if the exit code is zero, it runs the part between 'do' and 'done' and then runs the part between 'while' and 'do' again, and checks the exit code again, and so on. If the exit code is not zero, it goes on to whatever commands come after 'done'. If you put 'until' in place of 'while', then when the exit code is not zero it runs the part between 'do' and 'done' and then starts the loop again, and when the exit code is zero it goes on to whatever comes after 'done'. So the part between 'while' and 'do' is always run one more time than the part between 'do' and 'done'. The part between 'while' and 'do' is always run at least once; the part between 'do' and 'done' might not be run at all. The 'do' marks the place where the exit code is tested. There can be any number of commands between 'while' and 'do', but there is usually only one, and it is usually test. There can be any number of commands between 'do' and 'done'. The example above first runs the command 'read', which waits for the user to type something and press enter, and then sets 'REPLY' to whatever was typed. Then the exit code will be zero, so it will echo whatever was typed, and wait for something else to be typed. The user will have to press control-c to escape from this loop! But if another file was redirected into being the input for this shell script, then 'read' will return a nonzero exit code when it gets to the end of the file. In that case, the above example would display the file, like 'cat'.
**echo bash internal command
Sometimes backslash substitutions are performed, and sometimes backslash substitutions are not performed. Sometimes backslashes are ignored.
echo a\bc\z # displays abcz
echo a\\bc\\z # displays a\bc\z
echo a\\\\bc\\\\z # displays a\\bc\\z
echo "a\bc\z" # displays a\bc\z
echo "a\\bc\\z" # displays a\bc\z
echo "a\\\\bc\\\\z" # displays a\\bc\\z
echo -e a\bc\z # displays abcz
echo -e a\\bc\\z # displays c\z
echo -e a\\\\bc\\\\z # displays a\bc\z
echo -e "a\bc\z" # displays c\z
echo -e "a\\bc\\z" # displays c\z
echo -e "a\\\\bc\\\\z" # displays a\bc\z
B='\'
echo a${B}bc${B}z # displays a\bc\z
echo a${B}${B}bc${B}${B}z # displays a\\bc\\z
echo a${B}${B}${B}${B}bc${B}${B}${B}${B}z # displays a\\\\bc\\\\z
echo "a${B}bc${B}z" # displays a\bc\z
echo "a${B}${B}bc${B}${B}z" # displays a\\bc\\z
echo "a${B}${B}${B}${B}bc${B}${B}${B}${B}z" # displays a\\\\bc\\\\z
echo -e a${B}bc${B}z # displays c\z
echo -e a${B}${B}bc${B}${B}z # displays a\bc\z
echo -e a${B}${B}${B}${B}bc${B}${B}${B}${B}z # displays a\\bc\\z
echo -e "a${B}bc${B}z" # displays c\z
echo -e "a${B}${B}bc${B}${B}z" # displays a\bc\z
echo -e "a${B}${B}${B}${B}bc${B}${B}${B}${B}z" # displays a\\bc\\z
'\b' is backspace, which deletes the previous character. '\z' has no special meaning. '\\' seems to follow different rules than the other backslash substitutions. If you want to use backslash substitutions with echo, you probably want to use -e and use quotes. But backslash substitutions are apparently interpreted twice! This means that if you want to echo a backslash, you have to use four backslashes!
The following commands:
foo="line1
line2
"
echo $foo
echo "$foo"
echo -n $foo
echo -n "$foo"
produce the following output:
line1 line2
line1
line2

line1 line2line1
line2
Environment parameter foo contains two newlines. For the command 'echo $foo', the newlines are converted to spaces, leading and trailing spaces are stripped, and a newline is added to the end; the net result is that the newline in the middle is converted to a space. For the command 'echo "$foo"', the newlines are not converted to spaces, but a newline is added to the end; thus two newlines are displayed at the end of the string. The next two commands with 'echo -n' are similar to the previous two commands, except that newlines are not added to the end; thus there is one less newline displayed at the end of each string. This behavior of converting newlines to spaces really happens because bash splits the unquoted $foo into words at the whitespace, and echo joins its arguments back together with single spaces; it is not echo alone. If you do 'bar=$foo', bar will still contain the newlines from foo, because bash does not split words in an assignment. You can take advantage of this on those occasions where you want to convert newlines to spaces:
foo=`echo $foo`
or
t=`cat foo`;echo $t > foo
However, while bash is converting newlines to spaces, it is also removing leading and trailing spaces, and it is also converting multiple spaces to single spaces. If you want to convert newlines to spaces, but do not want the other conversions, use sed or perl.
**read bash internal command
echo test | read
echo $REPLY # does not work
echo test > delete.me
read < delete.me
echo $REPLY
rm delete.me # works
Apparently you can not pipe to read. Maybe you can not pipe to any internal command. Maybe when you pipe to an internal command, the internal command executes in a child process, and the results are lost when it returns to the parent process. You can use read to pause a shell script until you press enter. But if you run the shell script in the background, the shell script will not pause. Apparently it ignores the read command, or maybe a process in the background automatically gets a carriage return if it requests input, and there is no other input.
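The reason you can not pipe to read seems to be exactly that: each command in a pipe runs in its own child process, so the REPLY which read sets dies with the child. If you want to read from a pipe, put the whole loop on the right side of the pipe, so that everything which uses REPLY runs in the same child process. A sketch:
echo test | while read
do
echo $REPLY # works here, inside the loop
done
echo $REPLY # but REPLY is gone again out here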
You can mimic the basic command mid$ with
echo $STRING | cut --bytes=5-18 - > string.tmp
read < string.tmp
PART_OF_STRING=$REPLY
rm string.tmp
But a shell script which does a lot of that will be very slow. Would it be better to use $()? Probably; PART_OF_STRING=$(echo $STRING | cut --bytes=5-18 -) does the same thing without the temporary file. read removes leading spaces.
**kill bash internal command
kill sends a signal to another process. The name is misleading; kill can send the signal SIGKILL, but it can also send any other signal. Signals are a kernel function, not a bash function. There are 32 signals, signal 0 to signal 31. Signals also have names, like SIGTERM and SIGKILL, and they may be referred to by their names. The names are given in the kernel source code, in include/asm/signal.h, or something like that; but it does not explain what the signals mean. A signal is like a flag; it either occurs or it does not occur; it does not transmit any other data. I created a shell script which trapped (see trap) every signal; then I read some data from a CD. There were some errors reading the CD, but the shell script did not receive any signals, so the kernel does not signal a program when there has been an IO error. A program which writes data to a pipe receives SIGPIPE if it tries to write to the pipe after the program on the other end of the pipe has closed the pipe; programs usually exit when they receive SIGPIPE. Note that the program might not know it is writing data to a pipe; the program might think it is writing data to a file or to standard output. A program which creates a child process receives SIGCHLD when the child exits. A program receives SIGINT when the user presses control-c; programs usually exit when they receive SIGINT.
**su
As root, I did:
su --command='set > /c' a
EUID, HOME, LOGNAME, USER, and UID were reset to user a correctly. But MAIL and PATH were still set to root. I tried again with -l, and then PATH was set to user a correctly, but MAIL was not set. So su resets some environment variables but not all. So probably we should usually use -l with su, because -l sets up the environment more completely. But what if bashrc starts a background process, maybe a mail checker, or X windows; will su start that background process also? Can we work around this by specifying sh or ash instead of bash as the shell? If you have become a different user with su, and you want to go back to being whatever user you used to be, type 'exit'.
Once I wanted to change users in the middle of a shell script and change back to the original user at the end. I tried something like this:
echo now original user
su k
echo now user k
echo still user k
exit
echo now original user again
When it gets to the command 'su k', it changes to user k and runs an interactive shell, and waits for you to type commands. When you type 'exit' or kill the shell, it returns to the original user and continues executing the shell script with the command after 'su k'. When it gets to the command 'exit', it does not return to the original user, it has already returned to the original user; it exits the shell script. So that does not work. I guess that if you want to change users in the middle of a shell script, you have to do something like this:
echo now original user
su --command='echo now user k' k
su --command='echo still user k' k
echo now original user again
or you could do:
su --command='echo now user k;echo still user k' k
or you could combine the commands for the other user into a second shell script, and do:
su --command='second_shell_script' k
**trap bash internal command
trap is sort of like an interrupt handler.
It tells bash to do something when something happens. For example:
trap "echo signal 0 has been received" "0"
tells bash to display a message when it receives signal 0. (In bash, signal 0 is a special case: the trap runs when the shell script exits.) Note that traps are set for signals, not for interrupts or IO events. For more about signals, see kill.
**minicom
The minicom man page says nothing about macros, but there are some sample macros for minicom in the doc directory. zmodem autodownloads are automatically enabled. When you download a file with zmodem autodownload, the file is given the current date. I think minicom has zmodem built in; I think it does not use the external program rz.
**mt
mt sends commands to the tape driver. mt is short for 'magnetic tape', though since most of its commands move the tape it is easy to remember as 'move tape'. But mt actually sends commands to the tape driver. There are many different tape drivers, and one tape driver may do one thing in response to a command, and some other tape driver may do something different in response to the same command. The tape driver I use is ftape 4, so this information about mt is what ftape 4 does in response to commands from mt. Other tape drivers are probably the same, but may not be.
Data on a tape drive is divided into volumes. These volumes are also called files. The place where one volume/file ends and the next begins is called the file mark. The file mark is imaginary. It does not exist on the tape. It does not take up any space on the tape. It is not possible to read a file mark because there is nothing there to read. The file mark is not a place on the tape; it is a boundary between two parts of the tape. The locations of the file marks are recorded in the volume table at the beginning of the tape. The tape driver is supposed to keep the volume table in memory, and count the bytes you have read, and stop or something when you get to a file mark. When you move the tape, it is not possible to stop at a file mark. You can move the tape to just before a file mark, or you can move the tape to just after a file mark, but you can not move the tape to a file mark. Usually you move the tape to just after a file mark, to the beginning of the volume/file after the file mark. (Actually, rumor has it that tar writes 512 nuls to the end of every archive as a file mark; if true, it does take up space on the tape and could be read from the tape. But that is a different kind of file mark; it is an older kind which is not used anymore. I repeat, that is a different kind of file mark. I do not think it is true anyway.)
If the tape is positioned in one volume/file, and you move the tape backwards to the previous volume/file, then some people would call that the "next" volume/file, because it is the next one you come to; BUT I SAY IT IS THE PREVIOUS VOLUME/FILE BECAUSE THE TAPE IS BEING MOVED BACKWARDS. Previous is towards the beginning of the tape; next is towards the end of the tape. If you move the tape forward, you move to the next, and you move further from the previous. If you move the tape backward, you move to the previous, and you move further from the next.
When you write a volume/file to the tape, a file mark is automatically put at the end of the volume/file, and the tape is left in the position just after the file mark, which is at the beginning of the next volume/file, so you can write another volume/file without moving the tape. When you use tar to read a volume/file from the tape, the tape is left in the position just before the file mark.
In order to read the next volume/file, you have to move the tape past the file mark with mt fsf 1; or you could use tar to read the tape again, tar will read 0 bytes and the tape will be moved past the file mark. When you use dd to read a volume/file, reading until end of file, like 'dd < /dev/tape', then when dd is done the tape is positioned after the file mark at the beginning of the next volume/file; you do not have to move the tape to read the next volume/file. If you use dd to read a volume/file, and you use count to read part of the volume/file instead of reading until end of file, like 'dd count=1 < /dev/tape', then when dd is done the tape will be positioned wherever dd stopped; you can run dd again and it will continue reading where it left off. When dd reaches end of file, it will stop just before the file mark; to read the next volume/file you have to move the tape with mt fsf 1, or run dd again; if you run dd again it will read 0 bytes and move the tape to the beginning of the next volume/file. If you write a program to open the tape device, read until end of file, and close the tape device, when done the tape will be positioned at the beginning of the next volume/file, and you do not have to move the tape to read the next volume/file. When you read a tape, sometimes the tape is left at the end of the volume/file you just read, and sometimes the tape is left at the beginning of the next volume/file. This is complicated; read this section again.
Also note that most archive programs will assume that they have reached the end of the archive if they encounter a lot of nuls. If so, they may stop reading before the end of the volume/file. Probably this is why tar reads stop before the file mark; a read which stops in the middle does not pass the file mark, and if some reads passed the file mark and some did not, you would never know where the tape was positioned; therefore all reads do not pass the file mark. But if the archive program is reading with a block size, it will read to the end of the block; if it encounters nuls it will throw away the rest of the block and not read any more blocks; the archive program thinks it has read to the nuls, but the tape device driver thinks it has read to the end of the block. If you read a volume/file with the same block size as when you wrote it, the archive program will always read to the end of the volume/file even if it quits when it finds nuls.
mt -f /dev/tape retension # rewind the tape, move all the way to the end of the tape, and rewind again. This takes about 4 minutes for a TR-3 EX. Some people say you should do this before doing anything else if you have not used the tape for a month or more, because tapes stretch and/or shrink.
mt -f /dev/tape rewind # rewind the tape; moves the tape to just after the first file mark on the tape, positions the tape at the beginning of the first volume/file on the tape. It is probably a good idea to rewind the tape before removing it from the tape drive; and ftape 4 does not update the volume table until you rewind the tape, so you definitely should rewind the tape after you write new volume/files to the tape.
mt -f /dev/tape seod # streamer end of data; moves the tape to just after the last file mark on the tape. This positions the tape at the beginning of the blank space at the end of the tape. Do this just before you write a new volume/file to the tape. If the tape was completely blank, seod would be the same as rewind.
mt -f /dev/tape fsf 1 # move the tape to the beginning of the next volume/file.
mt -f /dev/tape fsf 2 # move the tape to the beginning of the volume/file after the next volume/file.
mt -f /dev/tape bsf 1 # move to the beginning of the current volume/file. Use this just to go back to the beginning of the volume/file you just READ.
mt -f /dev/tape bsf 2 # move to the beginning of the previous volume/file. Use this to go back to the beginning of the volume/file you just WROTE.
mt -f /dev/tape bsf 3 # move to the beginning of the volume/file before the previous volume/file.
Note that doing 'fsf 1' twice is the same as doing 'fsf 2' once. But doing 'bsf 1' any number of times is the same as doing it once. Doing 'bsf 2' twice is the same as 'bsf 3'. Doing 'fsf 2' twice is the same as 'fsf 4'. With fsf, the number is the number of file marks crossed. With bsf, the number is one more than the number of file marks crossed. You might be thinking that this is illogical and inconsistent, that bsf and fsf use the numbers differently. But there is a logic to this. fsf means to move the tape FORWARD to the next file mark, and then move the tape FORWARD to the beginning of the next volume/file. bsf means to move the tape BACKWARD to the next file mark, and then move the tape FORWARD to the beginning of the next volume/file. I think that both forward 0 and backward 0 should move the tape to the beginning of the current volume/file, forward 1 should move the tape to the beginning of the next volume/file, backward 1 should move to the beginning of the previous volume/file, etc; but whoever invented this did not ask me, and there is nothing I can do about it.
The man page for mt says to use bsfm, not bsf. But bsfm does not work for me; bsf does work. I think that the person who wrote ftape 4 decided to switch bsfm and bsf. So if you are not using ftape 4, try bsfm instead of bsf. If you try to move past file marks when there are no more file marks, then mt gives an exit code of 2 and the tape does not move. Using mt with ftape 4 often appears to do nothing, because ftape 4 often does not actually move the tape until you read or write some data. Apparently ftape 4 is lazy, and it does not move until it has to; it keeps hoping you will change your mind and tell it to go back to where it was before, and then it will not have to do anything at all.
Some tape drivers may allow you to move the tape to the end of a volume/file, just before the file mark, and append to the volume/file. I did not try it. Some tape drivers may allow you to move to the middle of a volume/file and write data, overwriting the rest of the volume/file, perhaps making the volume/file longer or shorter than it used to be. I did not try it. Some tape drivers may allow you to move to the beginning of the first volume/file (the beginning of the tape) and write a new volume/file, automatically erasing the tape. I did not try it. Some tape drivers may allow you to move the tape to the beginning of a volume/file other than the first one, and write a new volume/file, automatically deleting the currently existing volume/file and all the following volume/files. ftape 4 does not allow that.
mt usually returns an exit code of 0. mt returns an exit code of 1 if it can not send the command to the tape device, like if the tape device does not exist or you used a command which does not exist.
mt returns an exit code of 2 if it successfully sent the command to the tape driver, but the tape driver reported an error, like if you rewind but there is no tape in the drive (thus you can use rewind to test if there is a tape in the drive), or if you fsf when it is already at the end of the data (thus you can use fsf 1 after rewind to check if the tape is blank). setblk succeeds even if there is no tape in the drive.
**ftmt
ftmt is like mt, except: The man page for ftmt has switched the meanings of bsf and bsfm, compared to the man page for mt. But I think that it is ftape 4 which switched the commands; if you are using ftape 4, then bsf and bsfm are as described in the ftmt man page; but if you are not using ftape 4, then bsf and bsfm are as described in the mt man page; it does not matter if you are using ftmt or mt; it depends on whether or not you are using ftape 4. ftmt has additional commands like reset.
# rewind the tape/go to BOT, Beginning Of Tape; and write anything that
# needs to be written to the tape volume table/directory/index
# The tape will be positioned at the beginning of the first volume/file/
# backup, or is it actually positioned at the beginning of the
# volume table/directory/index?
# rewind the tape and update the volume table/directory/index
ftmt -f /dev/nqft0 rewind
# move tape to End Of Media.
# move tape to the place to make a new backup.
ftmt -f /dev/nqft0 eom
# move back 1 volume/file/backup.
# go back to the beginning of the backup you just did.
ftmt -f /dev/nqft0 bsf 2
# go to the beginning of the current volume/file/backup
ftmt -f /dev/nqft0 bsf 1
# go to the beginning of the next volume/file/backup
ftmt -f /dev/nqft0 fsf 1
**rm
If you execute 'rm --interactive' and instead of pressing y [enter], you press n [enter] or just [enter], then the exit code of rm will be 1. And of course the file will not be deleted. Probably the same thing would happen if you pressed any other key.
#!/bin/sh
rm -fr /root/.ssh
cd /dellp3
echo vEni42 | scp -p -o CheckHostIP=no -o StrictHostKeyChecking=no * 192.168.1.143:/hpx2
exit
echo 'yes vEni42' | scp -p -o CheckHostIP=no * 192.168.1.143:/hpx2
exit
rm -fr /root/.ssh
cd /hpx2
echo vEni42 | scp -p -o CheckHostIP=no * 192.168.1.102:/dellp3
**scp
Option -p means the new copy of the file has the same permissions and times as the original file. But the new copy of the file will have the UID and GID set to the defaults for the destination, which might not be the same as the original file. There is no option to give the new copy of the file the same UID and GID as the original. Option -B means batch mode and is supposed to eliminate prompts so scp can operate noninteractively. But it actually causes scp to abort at all prompts. There is no option to automatically continue at prompts. But some prompts can be eliminated with
scp -o CheckHostIP=no -o StrictHostKeyChecking=no
If matching public and private keys are found in the config files on both computers, there is no password prompt. Otherwise, the password prompt cannot be eliminated.
echo password | scp
bash -c 'sleep 5 ; echo password' | scp
Both of the previous still prompt for the password. Option -r means recursive. If we copy a directory recursively, hidden files in the directory are included. If we do
scp /tmp/file1 fred@server:
note that we did not specify the destination directory. The file will be copied to fred's home directory, not to /tmp, so the new copy of the file will be /home/fred/file1. If scp copies a soft or hard link, the new copy will be an ordinary file.
Sometimes when we copy directories recursively, we want to preserve links, so the copies are also links, but scp has no option to do this.
**set bash internal command
'set -e' tells bash to exit if an error occurs. If a command or program exits with an exit code of anything other than zero, the current bash process will die, and you will be dumped back in the parent process. If the current bash process is the login shell, you will be logged out and dumped back to the login prompt. Someone told me that set -e does not cause bash to exit if a nonzero exit code occurs between an 'if' and a 'then'. I made a script to test this, and it is true for bash, sh, and ash. But I vaguely recall once writing a script with set -e, and the script exited when an if failed. I do not recall the details. So I am not sure about this. I am not sure what happens if you have more than one command between if and then. I think set -e treats while and until blocks like if blocks.
**setserial
The man page for setserial says it can display or set the serial port speed. But it always displays 115200 for the speed, which is usually not the correct speed. If I try to set the speed to 57600 with the option 'baud_base 57600', then the serial port stops working. So use stty to display or set the serial port speed.
**seyon
seyon requires X windows. seyon will not run unless the X server is already running.
**stty
stty has no options to choose which terminal or device to use. It always figures out which terminal or device is its standard input, and uses that terminal or device. So to choose /dev/tty4, you do 'stty < /dev/tty4', and stty will see that its standard input is /dev/tty4, and will use /dev/tty4. stty does not actually read any data from standard input.
**umount
If you want to use 'no' with the '-t' option, put 'no' before the first filesystem type in the list, do not put a comma after the 'no', and do not put 'no' before any other filesystem types in the list.
umount -ar -t nomsdos,ext
The previous line will unmount all filesystems except msdos and ext filesystems. If a filesystem cannot be unmounted, the filesystem will be remounted readonly. proc filesystems will not be unmounted.
**useradd
If you want to create a user named bart, you run the command 'useradd bart'. But it does not automatically create the user's home directory. useradd gets its default options from the file /etc/login.defs. At the end of this file I have a line which says 'CREATE_HOME yes'. This is supposed to cause useradd to create the new user's home directory. But instead useradd displays a message saying:
configuration error - unknown item 'CREATE_HOME' (notify administrator)
So I have to use -m to tell useradd to create the home directory. Also, I cannot figure out what useradd assigns as the initial password. So to create a new user, run the command:
useradd -m -p initial_password bart
The system administrator's guide says you are supposed to create the user with useradd, then set the password with passwd (root can change the password for any user, and does not need to know the old password). But I think it is easier to use useradd to set the password when creating the user. (Note the useradd man page says -p wants the password already encrypted, so giving a plain word with -p may create a password which does not work for logging in.)
**vtblc
vtblc probably requires ftape 4. It probably does not work with earlier versions of ftape, or with other tape drivers. After you write something to the tape, be sure to rewind the tape before using vtblc. ftape holds changes to the volume table in memory, and does not write the changes to the volume table until you rewind the tape.
# display the volume table
vtblc --file=/dev/rawft0 --print
# delete the last volume/file
vtblc --file=/dev/rawft0 --truncate
# delete all volumes/files; this erases the tape
vtblc --file=/dev/rawft0 --truncate=0
# change the name/label of the last volume/file to 'test 1'
vtblc --file=/dev/rawft0 --modify=label='test 1'
# set the date and time of the last volume/file to the current date and time
vtblc --file=/dev/rawft0 --modify=date
# change the name/label of volume/file 2 to 'Paranoid Backup'.
# The first volume/file is 0, so 2 is the third volume/file.
vtblc --file=/dev/rawft0 --vtbl-entry=2 --modify=label='Paranoid Backup'
However, vtblc does not change the date correctly. When I change the date, it always changes it to 1974. I thought it used to work before year 2000, so maybe it has the Y2K bug.
**wget
To download one or more http, https, or ftp files with wget:
wget URL1 URL2 URL3 ...
If you want to download a web page so you can display the page locally with a web browser, you probably want --html-extension, so that wget adds .html to the file name if the file name does not already end in html; --page-requisites, so that wget will also download any graphics, stylesheets, etc which the browser needs to display the web page; and --convert-links, so that wget will attempt to fix the links in the web page if wget thinks that downloading the web page will break the links:
wget --html-extension --page-requisites --convert-links URL
(The wget on sovernet granite (BSD4) does not have --html-extension or --page-requisites.)
If you want to download a web page but skip everything except the text, try:
wget --html-extension --page-requisites --convert-links \
--reject=.img,.gif,.css,.cs,.js,.png,.jpeg,.jpg URL
If you want to download a complete website, use --recursive, so that wget will also get web pages which the first page links to; and --level=, to specify how many levels to recurse. --level=1 means to download this page plus every page it links to; --level=2 means to download this page plus every page it links to plus every page those pages link to; --level=0 means to follow links forever, which you probably do not want to do because you do not want to download the whole internet:
wget --html-extension --page-requisites --convert-links \
--recursive --level=3 URL
However, it is often difficult to determine what is part of the current website, and what is a different website which the current website links to. wget assumes that every file on the same computer as the starting URL is part of the same website, and every file on a different computer is a link to a different website. wget excludes files on other computers when you use --recursive. If you want to include files from other computers, use --span-hosts. But do not combine --span-hosts with --level=0, or you really will download the whole internet. wget has other options to include files from some computers but not other computers, or from some directories but not other directories.
**xc
If you use command line option -l or shell environment parameter MODEM to select device /dev/modem, then xc tries to use device /dev/modeM, and fails. So you need to select device /dev/cua0 (or cua1, etc). (Or you can make /dev/modeM a link to /dev/modem.) (This is probably a feature, not a bug. I heard a rumor that some versions of unix, perhaps BSD or SCO, use /dev/ttya for dialin serial ports, and /dev/ttyA for dialout serial ports. Thus xc might think the last letter of the device name should be capitalized. Except it really is a bug, because that feature should be disabled when compiling for linux.)
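A sketch of the link workaround (assuming the modem really is on /dev/cua0):
# the usual link, so programs can refer to /dev/modem
ln -s /dev/cua0 /dev/modem
# an extra link, so the capitalized name which xc generates also works
ln -s /dev/modem /dev/modeM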
xc can run a script from the command line, and dialing commands can be embedded in a script; thus it is possible to bypass the dialing directory. You can give yourself execute permission for a script and make the first line '#!/usr/local/bin/xc -s', and run the script by typing the script name.
A script can set key bindings, but key bindings have no effect while the script is running; key bindings are only used in terminal mode.
xc has a lot of set commands; to see the defaults, create a blank xc.init and start xc; in command mode it displays the current settings; those are the defaults.
If a script runs the command quit, then the modem hangs up. Probably it drops the DTR line, and the modem resets. Probably it closes logs, etc. If a script runs the command exit, it does not hang up, but it does close the log. So you cannot put the command to log the data in a script; you have to put the command in xc.init to start logging when entering terminal mode; this will start logging as soon as the script is done. If you want to log data received during a script and after the script is done, you probably have to have a capture on command in both the script and in xc.init.
The man page says you have to use quotes in scripts, like:
set rtscts "on"
And xc displays an error message if you leave out the quotes. The same command could go in xc.init, but the default version of xc.init does not have quotes, and no error messages are displayed. I think the quotes are required in scripts, and optional in xc.init. The file xc.init can have commands like a script, and it can have commands like you would type in command mode. It is sort of a combination of script mode and command mode.
In a script, you can have a command like:
if foo eq "bar"
but you cannot have a command like:
if "foo" eq bar
The word before the 'eq' must be the name of a variable; it cannot be a string. The assign command also uses 'eq', and assign is meaningless if the word before the 'eq' is not the name of a variable. Thus, I suspect that xc treats if as a variation of assign.
If there are errors in xc.init, xc will display error messages when xc starts, but the error messages will be immediately overwritten with the main display, so you probably will not see the error messages. For checking xc.init, you should create a script which quits and does nothing else (see the sketch below). Run the script with xc -s, and you will see the error messages from xc.init.
In scripts, call sometimes works without quotes, and sometimes does weird things if the quotes are missing. For example:
call "foobarf"
call foobarf
call "foobarffoobarffoobarf"
call foobarffoobarffoobarf
Files foobarf and foobarffoobarffoobarf exist and have a size of 0. The first three lines work, but the fourth line fails with "variable name too long" and "abnormal script termination", and then xc switches to terminal mode. It would be nice if xc would report missing quotes as a syntax error, but I guess that is too much to hope for. So we need to try to remember to always use quotes even if xc seems to work without quotes, and remember that weird error messages might mean missing quotes.
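A minimal sketch of such a do-nothing checking script (the script can be named anything):
#!/usr/local/bin/xc -s
# quit immediately; xc still reads xc.init first, so the error
# messages from xc.init stay visible on the terminal
quit
Give the script execute permission and run it by name, or run it with xc -s scriptname.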
In scripts, pause stops logging to the capture file. The following commands write nothing to the capture file:
transmit "AT Z ^M"
pause 3
#waitfor "nothing" 3
quit
but if the waitfor line is uncommented, then AT Z and OK are written to the capture file. So after pause is done, data which was received during the pause is processed. There is probably a limit to how much data can be held during pause. The limit may be the size of the FIFO buffer in the serial port, which is probably 16 bytes. So if you are capturing text, you probably should use waitfor "nothing" instead of pause.
The man page says that if you use an environment variable in a script, and the environment variable is not set, then it is treated as an empty string, a string of zero characters. This is not quite true. If a script contains a command like:
if $ENVIRONMENT_VARIABLE
and there is no environment variable named ENVIRONMENT_VARIABLE, then the if fails as expected, but xc also displays an error message saying there is no such environment variable. The script does not abort. I do not think there should be an error message.
If a script has a command like:
if $TERM eq "linux"
then that is an error: xc displays a message saying 'Abnormal script termination', and the script aborts, dumping you into terminal mode. So it appears that you cannot use environment variables in if ... eq commands. If the environment variable had not been set, the same thing happens, except you also get TWO error messages saying there is no such environment variable.
If a script has commands like:
assign v_a eq $TERM
if v_a eq $TERM
then xc exits with a message saying 'Segmentation fault (core dumped)'. So do not use environment variables in if ... eq commands. Instead, use assign to set an xc variable to the environment variable, and use the xc variable in the if ... eq command. For example:
assign v_term eq $TERM
if v_term eq "linux"
It is safe to use:
if $TERM
but note that in the following:
assign v_modem_type eq "unknown"
if $MODEM_TYPE then
assign v_modem_type eq $MODEM_TYPE
endif
if there is no environment parameter named MODEM_TYPE, the previous commands cause xc to display two error messages saying there is no such environment variable. But the following does the same thing:
assign v_modem_type eq $MODEM_TYPE
if v_modem_type eq "" then
assign v_modem_type eq "unknown"
endif
and you only get one error message if there is no environment parameter named MODEM_TYPE. So I think the second version is better. Probably any time you want to use an environment parameter in xc, you should assign the environment parameter to an xc variable, and use the xc variable. In other words, you should only use environment parameters in assign commands.
If a script has a command like:
echo "finished something, starting something else"
then no newline is added to the message. You cannot add a newline by including '\n' or '^M' in the string. Substitution of control characters like ^M only occurs in strings for transmit commands. There is no substitution of control characters in other strings.
If you press control-c while a script is running, the script aborts and you are dumped into terminal mode.
The script commands echo and shell allow multiple strings. transmit does not allow multiple strings:
echo "name is " NAME "; number is " NUMBER # ok
transmit "ATDT" PHONENUMBER "^M" # wrong!
transmit "ATDT"; transmit PHONENUMBER; transmit "^M" # ok
If you download a file with zmodem, the file date is not preserved; your copy of the file is given the current date. Maybe that means I am using the wrong zmodem parameters.
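Putting the scripting notes above together, here is a sketch of a small dialing script. I have not tested exactly this; the phone number is made up, and waitfor "OK" is my guess at waiting for the modem response:
assign v_term eq $TERM
if v_term eq "linux" then
echo "running on the linux console"
endif
capture on
transmit "ATZ^M"
waitfor "OK" 3
transmit "ATDT5551212^M"
Remember capture on in the script only logs data received during the script; as noted above, xc.init needs its own capture on command to keep logging after the script ends and xc drops into terminal mode.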
The following commands failed in xc.init:
echo running xc.init
! xcext
set rtscts on
set rtscts "on"
The following commands worked in xc.init:
echo "running xc.init"
shell "xcext"
In a script command which takes a string (like transmit, echo, and shell), anything not in quotes is a variable name. The dollar sign is optional. The quotes are required for anything which is not a variable name. Variables can be either environment variables or xc variables. The following two commands are the same:
echo "current modem is '" $MODEM "'"
echo "current modem is '" MODEM "'"
If a word which begins with a dollar sign is in quotes, xc treats the word as a string, not as a variable name. For example, the following command is not the same as the previous two commands:
echo "current modem is '$MODEM'"
I tried using the following command:
set rtscts on
and xc said:
Invalid SET keyword: rtscts
The problem was that I compiled xc under redhat 7.2, and the ifdefs in the xc source did not find the definition of CRTSCTS in the redhat 7.2 includes, so the rtscts code was excluded. CRTSCTS is defined in kernel 2.4.19 source file include/asm-i386/termbits.h.
xc is an old program which has not been updated recently. xc expects to use callout devices like /dev/cua0, and cannot set the modem speed faster than 38400. But versions of linux from after 2000 expect programs to use callin devices like /dev/ttyS0, and to be able to set the modem speed to 57600 or 115200 or even faster. xc does not configure the serial ports correctly with modern linux. We can work around this by using xc shell to run stty to configure the serial port after xc starts. I have a modem_init script which I call from other scripts to initialize the serial port and modem. My modem_init script begins like this:
shell "stty cread cs8 -cstopb hupcl -parenb -ixon -ixoff -echo < " $MODEM
shell "stty 115200 -clocal crtscts < " $MODEM
transmit "ATZ^M"
We could configure the serial port with stty before running xc, or we could put the serial port configuration commands in xc.init. However, if -clocal is in effect when xc opens the modem device, then the kernel will halt xc, and xc will not work; stty -clocal must be run after xc opens the modem device. So running stty -clocal before running xc will not work. Also, xc configures the serial port after running xc.init; so if we run stty 115200 before starting xc or from xc.init, then after running xc.init, xc will set the serial port speed to 2400. So running stty 115200 before running xc or from xc.init will result in a serial port speed of 2400.
There are three ways to use xc with a modern linux. If you always use xc with scripts like I do, then you can put the stty commands at the beginning of every script like I do. Or you can use setserial to make 38400 mean 115200, and in xc.init set the modem speed to 38400 and run stty -clocal. Or you can start a background script which waits a few seconds and then reconfigures the serial port, and start xc; hopefully the background script will reconfigure the serial port after xc starts (see the sketch below). You do not need stty -clocal if you are using a callout device like /dev/cua0. If you are using a callin device like /dev/ttyS0, you probably should run stty clocal immediately before or after exiting xc, or else immediately before starting xc.
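A sketch of the background script approach; this is untested, and the five second delay and the device name are guesses:
#!/bin/sh
# in the background: wait until xc has (hopefully) opened the device,
# then set the speed and flow control behind xc's back
( sleep 5
stty 115200 -clocal crtscts < /dev/ttyS0 ) &
xc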
**yum
man yum says that yum install can use simple patterns like a bourne shell. So we can install everything with
yum --skip-broken install '*'
The single quotes around the * are needed so the shell will not interpret the *. --skip-broken is needed because some packages conflict, or have file conflicts. Packages which are already installed are skipped automatically.
The above command takes many hours to complete. I tried it with fedora 13, with the everything and rpmfusion repositories, free and nonfree, and updates enabled, about 20,000 packages, using 5mbps dsl internet. It took about 12 hours to download and about 8 hours to install. Some kernel module packages attempted to replace the existing kernel with their kernel, and some xwindows drivers attempted to replace the existing xwindows drivers. So it is not a good idea to install kernel modules or xdrivers unless they are appropriate for your hardware, and it is not a good idea to install everything unless you can exclude some packages.
man yum says if the pattern begins with -, the pattern excludes instead of includes. But if we do something like
yum install 'foo*' '-foobar*'
it does not work, because yum thinks we are using option -f. We have to use --, which means that everything after -- is not an option, like this:
yum install -- 'foo*' '-foobar*'
But that does not work for -c, -d, and -e; I guess that is a bug in yum. By trial and error, I discovered we can work around this by putting c, d, or e in [], like this:
yum install -- 'doo*' '-[d]oodoo*'
Since yum patterns can apparently use [], I tried
yum install '[def]oo*'
but it did not work. It may be possible to use --exclude=package instead of -package, but the documentation says --exclude= is for updates, so I don't know if --exclude= works for install. I didn't try it.
Sometimes, but not always, if we try to install some packages with yum and yum fails, yum remembers the incomplete transaction. I don't know why it sometimes remembers and sometimes does not. Any time we try to do anything with yum, yum reminds us there are incomplete transactions, and some yum commands fail if there are incomplete transactions. We can try to complete the transaction with
yum-complete-transaction
but yum will probably fail for the same reason yum failed the first time. I tried to delete the incomplete transactions with
yum clean all
but that did not purge the incomplete transaction. I tried
yum remove package
for the package which failed to install, but yum said the package was not installed, and the incomplete transaction to install that package remained. I eventually removed the incomplete transaction with
rm /var/lib/yum/transaction-*
The next time I ran yum there were no incomplete transactions and no errors, so this purged the incomplete transactions without corrupting yum's data. Maybe it would have been better to remove only /var/lib/yum/transaction-all.*, or to rename /var/lib/yum/transaction-all.* to /var/lib/yum/transaction-done.*. This was tested with fedora 13.
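Combining the workarounds above, something like the following should install everything except kernel module packages ('kmod*' and 'emacs*' are only example patterns):
yum --skip-broken install -- '*' '-kmod*'
# if an exclude pattern starts with c, d, or e, wrap the first
# letter in brackets so yum does not mistake it for an option:
yum --skip-broken install -- '*' '-[e]macs*'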