Tuesday, December 8, 2009

accessing history of commands in bash

1. the most common is using !! , CTRL+P or !-1 to recall the previous command typed. use !$ to reuse the last argument of the previous command, and !* to reuse all of its arguments
2. CTRL+R on the terminal allows you to search the history for a command by keyword; you can do something similar with !string, which re-runs the most recent command starting with that string
3. there are several shell variables controlling the history file, its size, name, handling of duplicates etc.
HISTSIZE - number of commands kept in the shell's in-memory history
HISTFILESIZE - maximum number of lines kept in the history file
HISTFILE - path of the actual history file used
HISTCONTROL - controls how duplicates are handled. the ignoredups option skips consecutive repeated commands, erasedups removes all earlier duplicates of a command so only unique entries remain, and ignorespace keeps any command started with a space out of the history
4. you can use a positive index (!n) to execute a command counting from the top of the history, or a negative index (!-n) to count from the bottom. history -c clears the history of your current shell
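a minimal sketch of wiring these variables together in ~/.bashrc (the specific values are arbitrary examples, not recommendations):

```shell
# example history settings for ~/.bashrc (values are arbitrary)
export HISTSIZE=5000                   # commands kept in memory
export HISTFILESIZE=10000              # lines kept in the history file
export HISTFILE="$HOME/.bash_history"  # where the history is saved
export HISTCONTROL=ignoredups:ignorespace  # skip consecutive dups, hide " cmd"
```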

Friday, November 20, 2009

More Xargs

just learned the coolest xargs trick today. normally xargs can only substitute its input verbatim using -I, but you can transform the input with shell parameter expansion by combining xargs with "sh -c". example:

ls --format=single-column | xargs -n1 sh -c 'g++ -g "$1" -o "${1%.cpp}"' -

this takes all files in the current directory, one file name per line, and passes them to xargs, which runs sh -c 'g++ -g $1 -o ${1%.cpp}' - once per name. the file name becomes the $1 variable for the shell, - is used as a placeholder for $0, and ${1%.cpp} strips a trailing .cpp from the name if it has one. sh -c executes the command string given after -c

This way we can access multiple variables with xargs, each placed at a different location in the command we want to execute.
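here is a runnable variation on the same trick; the demo directory and file names below are made up for illustration, and _ fills the $0 slot just like the - above:

```shell
# the same sh -c pattern, here renaming .txt files to .bak
mkdir -p /tmp/xargs_demo && cd /tmp/xargs_demo
touch a.txt b.txt

# each line becomes $1 of the tiny script; ${1%.txt} strips the suffix
printf '%s\n' a.txt b.txt | xargs -n1 sh -c 'mv "$1" "${1%.txt}.bak"' _

ls -1   # a.bak and b.bak
```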

Wednesday, November 18, 2009

xargs xargs xargs.. ??

I was learning about how to use xargs today.. quite an interesting unix tool. What it lets you do is trigger a command on each input symbol it receives from stdin. for example:

ls -l | awk 'NR > 1 { print $NF }' | xargs -n1 -I{} gcc -o {}.o -g {}

long-list the current directory with detail, take the last column with awk (which contains the file names, skipping the "total" line), and execute "gcc -o NAME.o -g NAME" on each name

xargs can print each command it generates using the -t option, or use the -p option to confirm each command before it is executed. -I{} declares {} as a special symbol marking where the input should be substituted, instead of the default position at the end of the command. -n indicates how many input items to use for each command, and -L indicates how many input lines to use for each command

xargs called without any command will act the same as an echo command.

It is essentially an easy replacement for a for loop in a shell script
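a tiny runnable sketch of the batching behaviour described above, with echo standing in for a real command:

```shell
# -n2 feeds the input to the command two items at a time
printf '%s\n' a b c d | xargs -n2 echo pair:
# prints:
#   pair: a b
#   pair: c d
```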

Saturday, November 14, 2009

Shell Scripting

For the past few weeks, I have been working at Broadcom, developing a shell script for their automated nightly build emails. It's been a struggle, and here is what I learned about shell scripts and their limits:

Things I learned:
1. the "echo" command always appends a newline to its output; use -n to suppress it, and -e to let echo interpret backslash escape characters, which is turned off by default
2. "read" is the command used to read values from the user, or from a file. but read interprets backslash escapes by default, which is the opposite of echo. you need to use -r to disable that behaviour
3. some general review of "sed":
1) you can give sed multiple expressions by putting -e before each replacement string
2) sed reads from a file (or stdin) and produces output on its stdout; GNU sed's -i option can edit the file in place
3) the format of an expression: [command] / [search pattern] / [replacement pattern] / [flags]
4) the command can be s for substitute, d for delete, and so on
5) use \( \) in the search pattern to group a sub-expression into a single unit; you can then output those grouped patterns in the replacement pattern using \1 to \9
6) . * [ ] ^ $ and \ are all special characters in the search pattern; you need to prefix them with \ to match them literally
7) you can use & in the replacement pattern to stand for the whole text matched by your search pattern
8) among the flags, g means global, which replaces every match on a line rather than just the first; p prints the resulting line; w writes the resulting line to the file indicated
9) sed has an -n option, which suppresses the automatic printing of each line. normally sed puts every line, matched or not, onto stdout; -n prevents that. using p in the flags is the exception to the no-output rule: it prints only the lines where a replacement was made
4. be careful with eval: it concatenates its arguments and re-parses them, so an unquoted variable is word-split on spaces before the second evaluation
5. IFS is the variable holding the characters the shell uses to split input into words (space, tab and newline by default), which is useful for parsing symbols with a for loop
6. be careful with spacing in test conditions: [ $var=value ] is a single word and is always true, while [ $var = value ] actually performs the comparison!
7. test options: -a = and, -o = or, -z means the string is empty, -f means a regular file exists, -d means a directory exists
8. functions in shell scripts are weird. first of all, the definition does not declare any parameters, but functions can take as many as they want. $# tells how many arguments were passed to a shell function, and $1 to $9 (and ${10} onward) refer to each parameter. calling a function looks just like calling a shell command, with no brackets needed around the parameters
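a couple of runnable one-liners covering the sed points above: \( \) grouping with \1/\2, the & back-reference, and the g flag (the sample strings are made up):

```shell
# & stands for the whole match; g replaces every match on the line
echo "foo bar foo" | sed 's/foo/[&]/g'
# prints: [foo] bar [foo]

# \( \) groups two words, \2 \1 swaps them in the replacement
echo "john smith" | sed 's/\([a-z]*\) \([a-z]*\)/\2 \1/'
# prints: smith john
```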

Limitation:
right now I am still stuck on this: given two variables, one holding the name of the other, I want to use echo and eval to show the value held by the named variable. the problem is that eval re-parses its arguments, so if the value in the second variable contains spaces, it gets word-split and everything after the first space is lost..

I wonder how to fix this
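one possible fix (a sketch, assuming bash; the variable names are made up): quote the expansion so the second parse sees a single word, or sidestep eval entirely with bash's ${!name} indirect expansion:

```shell
value="hello world again"
name=value

# eval approach: escape the inner $ so it survives the first parse,
# and quote the whole expansion so the value is not word-split
eval "echo \"\$$name\""    # prints: hello world again

# bash-only indirect expansion does the same without eval
echo "${!name}"            # prints: hello world again
```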

Something to try out:
I wonder how the variable environments inside shell functions are generated...

Thursday, August 27, 2009

tricks with bash

some miscellaneous stuff learned while working at Safe Software:

in bash, you can use !! to refer to the last command typed in the shell. This is useful when you want to re-run or fix the previous command you have just typed.

the &> directive redirects both standard out and standard error into a file

2> redirects only standard error to a file, and 2>&1 merges standard error into standard out

$var expands to the value of the variable, $(command) expands to the output of the command
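a runnable sketch of these redirection forms; the demo function is made up for illustration:

```shell
# demo writes one line to stdout and one to stderr
demo() { echo "to stdout"; echo "to stderr" >&2; }

demo > out.txt 2> err.txt    # split the two streams into separate files
demo &> both.txt             # bash shorthand: both streams into one file
demo > both2.txt 2>&1        # POSIX spelling of the same redirection

msg=$(demo 2>/dev/null)      # $(...) captures stdout only
echo "$msg"                  # prints: to stdout
```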

^cat^less will scratch out the first occurrence of cat in the previous command, replace it with less, and run the result. this is useful for editing a long command line

$@ in a shell script expands to all of the positional parameters passed to the script

if you are gdb-ing a project with multiple files, you can use "break FILE:LINE" to specify which file at which line to break

ack-grep is a grep-like search tool that is more convenient than grep for searching source trees

scp is the secure copy over network program! useful

the g++ compiler is stricter than the compiler invoked by Windows nmake, and it puts every library being compiled into the same namespace, whereas the nmake build puts each library into its own separate namespace

Saturday, July 11, 2009

More Linux useful commands

whenever you have entered a command on the command line and want to do something else at the same time, you can pause the current process using Ctrl+Z

you can then send the paused process to the background of the terminal using bg, and you can bring a background process into the foreground using fg
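Ctrl+Z, bg and fg only work in an interactive shell; in a script, the & operator plays a similar role. a small runnable sketch:

```shell
sleep 1 &        # start a command in the background
pid=$!           # $! holds the PID of the last background job
wait "$pid"      # block until it finishes
echo "done"      # prints: done
```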

Tuesday, May 26, 2009

GDB quick note

si - step one assembly instruction
s - step, descending into function calls (step in)
n - step to the next line, without entering calls
run - runs the program being debugged with the supplied arguments
break and delete manage breakpoints; it is also possible to break on a condition with "break LOCATION if CONDITION"
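a hypothetical session tying these together (the program name, file name, line number and variable are all made up):

```
$ gdb ./a.out
(gdb) break main.cpp:42 if count == 10   # conditional breakpoint
(gdb) run input.txt                      # run with an argument
(gdb) n                                  # next line, skipping over calls
(gdb) s                                  # step into a call
(gdb) si                                 # single assembly instruction
(gdb) delete 1                           # remove breakpoint number 1
```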

Wednesday, March 25, 2009

Yet another simple intro to Linux Kernel and history of X11 window managers

the Linux kernel core is installed as /boot/vmlinuz-KERNEL-VERSION
additional kernel modules are installed under /lib/modules/KERNEL-VERSION

all loaded kernel modules can be viewed using the "lsmod" command
kernel modules can be added and removed using "modprobe"; you may not remove a module that is currently in use.
modules have parameters that you can pass when they are loaded. "modinfo" shows those parameters. the information can also be found if you have the kernel source code installed under /usr/src
/proc/sys is a directory that exposes the kernel's tunable policies. one example: "echo 1 > /proc/sys/net/ipv4/ip_forward" sets the ip forwarding policy to 1, so your computer now acts as a gateway, which is useful for many man-in-the-middle attacks for stealing private information.
since the linux kernel is open source, you can also modify these policies freely. performance tuning is useful for demanding programs, such as databases; oracle and IBM both provide kernel tuning pages for running their databases faster on linux. "sysctl -a" is a command that lists all of the kernel's tunable parameters and their current values.
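a small sketch of reading the same policy both ways (Linux only; the write, left commented out, needs root):

```shell
cat /proc/sys/net/ipv4/ip_forward          # read the forwarding policy directly
sysctl net.ipv4.ip_forward                 # the same value via sysctl
# echo 1 > /proc/sys/net/ipv4/ip_forward   # enable forwarding (as root)
```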

it is very important to know that performance tuning has a great many parameters; it is best to know what you're doing before changing any values.

X11 is a network-transparent display protocol that separates the display server from the window manager. this makes it possible for linux to support many different kinds of window manager without being stuck with the one that came with the distribution. the most hardcore window manager for linux users is FVWM. it makes it possible for linux users to effectively design their own window manager; you can make it look like anything you want, which is also welcomed by hackers who want to have their own unique desktop style.

Saturday, March 21, 2009

New Laptop, New Linux




finally, linux has been completely restored. someone mentioned that my linux desktop is pretty, so here are some pictures of my linux desktop. enjoy.