Man on the moon

My reminder place

Archive for February 2006

Linux Shell One-Liners


Found an interesting article written by BooRadley (olbooradley@aol.com).

To send the output of one command to another, put a | (pipe) between them. This means that if you run the command ls | more, the output of ls is paged through more. Using text manipulation commands like grep and sed, this can become very handy. Here are some examples of commands to demonstrate the principle.

eric on /dev/pts/0 [/home/eric]
$ date | awk -F : '{ print $1 ":" $2 }' | awk '{ print $1 " " $2 " " $3 }'
Sat Jan 3

The date command prints this:

Sat Jan  3 05:16:06 EST 2004

But here the output is piped to awk with -F (field separator) set to ":", which prints only the first and second fields, "Sat Jan 3 05:16", and that in turn is piped to another awk statement that prints just the first three fields, or Sat Jan 3. Normally awk uses whitespace as the field separator, and you use $1, $2, etc. to reference the fields, but for my use here ":" made a better field separator for the first pass, so that's what I used. Then I awked it again to take just the parts of that output that I wanted. I'd say that -F is the coolest switch that awk has.
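As an aside, GNU date can produce that format directly with a format string, skipping both awk passes (the %-d no-padding flag is a GNU extension):

$ date '+%a %b %-d'
Sat Jan 3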

ping -w2 opentechsupport.net | grep "packets transmitted," | awk '{print $6}' | awk -F% '{print $1 " percent lost packets, duder"}'

This would ping ots for two seconds (-w sets a deadline in seconds), pick out the summary line that counts, then print the sixth column, which is the lost percent, as so:

eric on /dev/pts/0 [/home/eric]
$ ping -w2 opentechsupport.net | grep "packets transmitted," | awk '{print $6}' | awk -F% '{print $1 " percent lost packets, duder"}'
0 percent lost packets, duder

To kill off a hung ppp connection, I make use of xargs, which reads its standard input and passes it as arguments to the command of your choosing.

ps -ef | grep ppp | awk '{ print $2 }' | xargs kill -9

ps -ef shows me every process on the system in full format, but it's piped to grep so it only prints lines containing "ppp", and then awk takes that and only prints the second column, which happens to be the process ID. Then it politely asks xargs to run kill -9 on each PID.
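One quirk worth knowing: that grep also matches its own process, since "ppp" appears on grep's command line, so kill usually grumbles about one PID that has already exited. A common workaround is to bracket one character of the pattern so the pattern no longer matches itself, or to let pgrep (part of procps on most Linux systems) do the searching:

ps -ef | grep '[p]pp' | awk '{ print $2 }' | xargs kill -9

pgrep ppp | xargs kill -9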

ls -1 /bin | xargs -l1 whatis 2>/dev/null | grep -v "nothing appropriate"

Using xargs again, this relatively simple command would ls -1 (ls in one-column output) the contents of /bin, then run each entry through whatis, but redirect any errors to /dev/null, and then grep -v (-v prints all lines that DON'T match) the lines containing "nothing appropriate", so we don't get output from things that have no manpage entries. You get a nice line-by-line listing of everything (that matters) in /bin, plus a short explanation of its use. Not a bad little reference. You could take the same command and make a text file with the alphabetical listing of /bin plus each entry's meaning by adding > binhelp.txt, or whatever filename you want. Like so:

ls -1 /bin | xargs -l1 whatis 2>/dev/null | grep -v "nothing appropriate" > binhelp.txt

This would create a text file that gives a brief explanation of all the commands in /bin. Note that using ">" to redirect will overwrite any existing file. If you also wanted a listing of all the commands in /usr/bin in the same file, you could run the command again, but use ">>" to redirect instead of ">". ">>" appends to the bottom of the file instead of overwriting it.
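So the follow-up command for /usr/bin would look like this:

ls -1 /usr/bin | xargs -l1 whatis 2>/dev/null | grep -v "nothing appropriate" >> binhelp.txt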

Taking that a bit further and making it a bit more useful: I had saved the index for a bash scripting manual from faqs.org as "Bash.html", but I wanted a local copy of the whole manual. Rather than saving every single page from the index list, I used a command to extract the relative URLs and run them through wget after prepending the full base URL. As seen here:

grep -i href Bash.html | grep -v '#' | awk -F\" '{ print "http://www.faqs.org/docs/abs/HTML/" $2 }' | xargs -l1 wget -U mozilla

This greps all lines containing "href" from Bash.html (-i ignores case), and pipes them to grep -v, which throws away all lines with a "#" in them, so pages don't get downloaded twice just because there's an in-page anchor link in the middle of one. That goes to awk with the field separator set to a quotation mark (using a backslash to escape the quote's special meaning), so it prints only the bit between the two quotes, the second field, which is the relative URL in the href attribute, with the full base URL added in front. Finally, xargs -l1 runs "wget -U mozilla" once for each line returned. Worked like a charm, and I now have a local copy. I needed to pretend to be using Mozilla for reasons that I can't imagine.
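Spelled out one stage per line, the same pipeline reads like this (bash keeps reading past a line that ends in a pipe):

grep -i href Bash.html |   # keep lines containing href, any case
  grep -v '#' |            # drop in-page anchor links
  awk -F\" '{ print "http://www.faqs.org/docs/abs/HTML/" $2 }' |
  xargs -l1 wget -U mozilla  # fetch each resulting URL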

You can also put several logical lines of code on the command line to allow for decision making. On my login:

eric on /dev/pts/0 [/home/eric]
$ if [[ -e ~/html ]]; then echo "boom"; else echo "bust"; fi
boom

Since I have a directory called html in my home directory ("~" means $HOME, or your home directory), the file test "if [[ -e ~/html ]]" returns true. "-e" checks to see if a file exists. The ";" separates commands on one line, so execution goes to the then statement, which echoes "boom". Had the file not existed, it would have gone to the else statement and done something else (echo "bust" in this case). Lastly, we end the "if" with "fi".

I could have made the file test more complex with "if [[ -e ~/html && ! -d ~/html ]];", and it would have tested to see if the file both existed ("-e") and if it was NOT a directory ("! -d"). "!" means "not", "&&" means "and" in a test expression, and "-d" means "is the file a directory". Since my html file is actually a directory, I would get this output:

eric on /dev/pts/0 [/home/eric]
$ if [[ -e ~/html && ! -d ~/html ]]; then echo "boom"; else echo "bust"; fi
bust

Directories in *nix are just special files. That’s why we test to see if the file is a directory (or in this case, to see if it’s NOT a directory).

You can also use variables on the command line. Here are two examples:

i=$RANDOM; i=$(echo "$i/10" | bc); sed -n ${i}p ~/bin/data/wordlist.txt

This assigns a random value to $i, then divides it by ten (because $RANDOM goes up to 32767, about ten times higher than I want) using the syntax variable=$(command): it echoes the variable in an expression to bc, a basic calculator command, to get the new value. Then sed prints the corresponding line of a wordlist file. The sed command to print the tenth line of a file would be sed -n 10p file; I used a variable instead of a number, so I had to use the {}s to separate the variable from whatever followed, since there isn't any whitespace. To understand the syntax: echo "$fooandahalf" would look for a variable called 'fooandahalf', but if you wanted the variable 'foo' echoed, followed by the string 'andahalf', that's not what you'd get. Since there's no whitespace to separate them, you do it like this: echo ${foo}andahalf, so the shell knows where the variable ends and the new data begins.
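To see that in action:

$ foo=two
$ echo "$fooandahalf"

$ echo "${foo}andahalf"
twoandahalf

The first echo prints an empty line, because the shell went looking for a variable named fooandahalf that doesn't exist.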

Another variable example is in a one line for loop:

for beamer in *.jpg; do echo "one $beamer"; done

$beamer is my variable, just because I like the sound of it. It loops through all *.jpg files in the working directory, and echoes "one" then the filename for each. Why? Hell, I dunno. I just thought it would be a good example.

Posted by: BooRadley

Here are a couple of real-world examples from today. This was to rename all the files in a directory: a bunch of pictures of the sky during a great sunset. I needed them named something unique, other than the default names the camera gives them, so I could burn them on the same CD as other directories without several files ending up with the same name.

x=100;for filez in *;do let "x+=1"; cp $filez clouds1${x}.jpg;done

That worked well, and gave them names incrementing from clouds1101.jpg on up.

x=100;for filez in *;do let "x+=1"; mv $filez clouds2${x}.jpg;done

That was for the next directory I was burning on the same CD, and so on, so every file had a unique name.
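One small hardening worth a mention: quoting the loop variable, as in cp "$filez", keeps filenames that contain spaces from being split into separate arguments:

x=100;for filez in *;do let "x+=1"; cp "$filez" clouds1${x}.jpg;done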

I had some trouble with the burns and had to try a few times on a multi-session burn, so I wanted to verify that there were the same number of files in the original directory as in the target directory. I couldn't remember how to count a total like that, so I just used a long command line instead.

x=0;for beaner in $(ls -1 /home/eric/a/*); do let x+=1; echo $x; done

To verify, I did this, too:

for beaner in $(ls -1 /mnt/cdrom/*); do let x-=1; echo $x; done

When it counted down to zero, I knew it was the same number of entries.

Come to think of it, I could have used:

ls /mnt/cdrom/* | grep -c .

I can’t think of a more legit way of doing that, though. Same with

ls | cat -n
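For what it's worth, wc -l is the usual idiom for counting entries like that; if these two numbers match, the burn got everything:

ls -1 /home/eric/a | wc -l
ls -1 /mnt/cdrom | wc -l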

Written by dinh

February 22, 2006 at 10:17 am

Posted in Work

MySQL trigger examples


I've found a few examples of triggers. Check these sources:
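As a quick taste of the syntax, a minimal AFTER INSERT trigger looks something like this (the tables and columns here are made up for illustration):

-- log every new row of `accounts` into an audit table
DELIMITER //
CREATE TRIGGER accounts_audit
AFTER INSERT ON accounts
FOR EACH ROW
BEGIN
    INSERT INTO accounts_log (account_id, logged_at)
    VALUES (NEW.id, NOW());
END//
DELIMITER ;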

Written by dinh

February 16, 2006 at 1:37 pm

MySQL stored procedure example


Frank Mash has some examples of stored procedures. Check it out:

mysql> DROP PROCEDURE IF EXISTS build_table;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 8976
Current database: odp

Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> DELIMITER '/';
mysql> CREATE PROCEDURE build_table()
    -> BEGIN
    -> DECLARE i INTEGER;
    -> DECLARE v INTEGER;
    -> SET i = 1;
    -> SET v = 100;
    -> WHILE i <= 125 DO
    -> INSERT into mytable VALUES (i, v);
    -> SET i = i + 1;
    -> SET v = v + 2;
    -> END WHILE;
    -> END/
Query OK, 0 rows affected (0.01 sec)

mysql> DELIMITER ';'/
mysql> DROP TABLE IF EXISTS mytable;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> CREATE TABLE mytable (id INTEGER, value INTEGER);
Query OK, 0 rows affected (0.04 sec)

mysql> CALL build_table();
Query OK, 1 row affected (0.01 sec)

mysql> SELECT * from mytable LIMIT 0,1;
+------+-------+
| id   | value |
+------+-------+
|    1 |   100 |
+------+-------+
1 row in set (0.00 sec)


[Frank Mash blog]

Written by dinh

February 16, 2006 at 1:32 pm