Unfortunately, my entire Linux continuum is going to shut down for a while, as I just lost my laptop to a backpack snatcher yesterday.
As I sit here thinking about this unfortunate event, I don't really know what to feel. I do blame myself, but for how long can you keep your senses on alert? You can't be vigilant forever; no one can. Everyone is bound to have an opening at some point. Not having my new laptop feels a bit like giving up smoking for me: I just needed that speed my old Mac can't really provide anymore.
And I'm also confused about what I really want at this point: to say no to "smoking", or to get myself a new laptop, which could also get stolen. I don't really know what brand to choose either, and my reason for having a new laptop is being challenged too. Why do I really need a new one?
... I think I'm just going to lay low and ponder this for a while... I need some time... to be reborn
Friday, December 12, 2008
Saturday, November 15, 2008
Java Hack: creating multiple classes within one file
I know there are plenty of people out there who don't really like Java, and also plenty who do. Those who do say Java is easier to program in, but the strongest point is that Java compilers/interpreters must follow a very strict implementation specification, so Java code can run on any machine without a problem. That sounds good, but Java is also a very strict programming language, and one of the things I have had a problem with is that it only allows one public class per source file, the thinking being that with each file a separate class, everything looks clearer. While that is true, it also becomes a pain when you're dealing with a huge program with thousands of classes, which implies thousands of files, every single one of them listed in your Package Explorer, some of them so small you wonder why they have to be separate files at all. So I figured out a trick to put multiple classes into one Java file, bending that rule.
The idea of putting multiple classes inside one file is to turn a Java class into what C++ calls a namespace: a live package that exists both as a class and as an instance of itself, with all the other classes you want to conceal placed in the file as private member classes of the "namespace". This is the key idea for breaking out of the one-file-one-class rule. Here is the procedure:
1. copy/create each class you want to hide as a private member class inside your "namespace" class
2. write a public generator function for each private class in the namespace class. The generator function takes all the arguments for the constructor, passes them to the private class's constructor, and returns the instance once it is generated
3. create a public singleton instance of the namespace class (note that a static instance does not work here)
And voilà! Whenever you want an instance of a class declared in the namespace class, just call the generator function on the singleton instance. And my friends, you have just turned a Java class into a C++ namespace.
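The three steps above can be sketched in a single file like this. The class and method names (Geometry, Point, newPoint) are illustrative, not from the post, and the singleton is implemented here as a plain static final field, which is one workable reading of step 3:

```java
// One source file concealing an extra class behind a "namespace" class.
class Geometry {
    // step 3: a public singleton instance of the namespace class
    public static final Geometry NS = new Geometry();

    private Geometry() {}  // nobody else can construct the namespace

    // the only type callers ever see
    public interface Point {
        int getX();
        int getY();
    }

    // step 1: the concealed class lives here as a private member class
    private static class PointImpl implements Point {
        private final int x, y;
        PointImpl(int x, int y) { this.x = x; this.y = y; }
        public int getX() { return x; }
        public int getY() { return y; }
    }

    // step 2: a public generator that forwards its arguments
    // to the private class's constructor and returns the instance
    public Point newPoint(int x, int y) {
        return new PointImpl(x, y);
    }
}
```

Callers write `Geometry.NS.newPoint(3, 4)` and get a `Point` back; `PointImpl` never appears outside the file.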
Labels:
class,
hack,
Java,
private class,
programming,
static function
Friday, September 26, 2008
5 searching commands in linux
1. find
- walks the directory tree in real time and tests every file against the given criteria
2. locate
- uses a database of file locations instead of searching the real file system; the database is typically only updated once a day
3. whereis [-bsm]
- returns the locations for a specific command-line command: -b gives the location of the binary, -s the source, and -m the man pages
4. which
- searches the PATH environment variable to locate the binary and returns its full path; great if there are multiple versions of a command in PATH, since with -a it returns the full path of each version
5. type [-a]
- reports how the shell interprets a command name by default: a shell builtin, an alias, a function, or an external command. -a reports this for every command with that name
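As a quick illustration of a few of these (the paths below are invented for the demo):

```shell
# set up a scratch directory with one file to look for
mkdir -p /tmp/searchdemo/sub
touch /tmp/searchdemo/sub/notes.txt

# 1. find walks the real directory tree at query time
find /tmp/searchdemo -name 'notes.txt'   # -> /tmp/searchdemo/sub/notes.txt

# 4. which resolves a command name through the PATH variable
which ls                                 # e.g. /bin/ls

# 5. type reports how the shell itself interprets a name
type cd                                  # reports that cd is a shell builtin
```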
Sunday, September 14, 2008
Cloud Computing Digest
Cloud Computing is a fairly new internet service architecture. It provides ways to virtually map physical storage and internet access, together with web-based applications, to remote users. The advantage of cloud computing is the reduction in the number of servers and the amount of energy needed to run a large-scale service.
The idea of cloud computing is to think of the whole collection of servers running different services as one cloud, instead of as independently running machines. It virtualizes all the physical machines into several layers: the infrastructure service layer, the platform service layer, and the software service layer.
The infrastructure service layer manages network throughput and storage space for remote servers, giving each remote client fixed network capacity and storage space without tying those services to any one service machine. It also provides ways to run an operating platform on top of the infrastructure layer, dedicating storage and services to the specific platforms that use them.
The platform service layer provides users a selection of application platforms to build on; an example of a platform service is the Google App Engine.
The software service layer contains web applications the user may use directly; one example would be Google Apps.
information digested from here
Monday, September 8, 2008
some more UNIX tips
1. auto-complete features in all shells:
bash: TAB or double TAB
csh: escape
korn: escape \ or double escape (depends on EDITOR var setting, either vi or emacs)
2. accessing previous command arguments from the shell
!$ expands to the last argument of the previous command
!:1 (or any other number) expands to the previous command's argument at that position, counting from 1
3. pushd and popd maintain a stack of directory locations that can be revisited without using the cd command. The stack can also be manipulated, rotating its order with +i or -i, where i is the number of entries to rotate: + moves the frontmost entries to the back, - moves the backmost to the front
4. the curl command can be used to retrieve web resources; use -s to silence the progress output, and -o to save any file downloaded from the internet
5. pattern matching special characters:
^ -> anchors the match to the start of the line, as in ^A
$ -> anchors the match to the end of the line, as in A$
[] -> matches any one character within the brackets; use - to indicate a range
[^] -> matches any one character except those within the brackets
. -> matches a single character of any value except newline
* -> matches 0 or more occurrences of the preceding expression
\{x,y\} -> matches x to y occurrences of the preceding expression
\{x\} -> matches exactly x occurrences of the preceding expression
\{x,\} -> matches x or more occurrences of the preceding expression
6. some awk example (more examples)
$ cat text
testing the awk command
$ awk '{ i = length($0); print i }' text
23
$ awk '{ i = index($0,"ing"); print i}' text
5
$ awk 'BEGIN { i = 1 } { n = split($0,a," "); while (i <= n) {print a[i]; i++;} }' text
(summarized from here)
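A few of the matching characters from point 5 can be tried directly with grep (the sample file is invented for the demo):

```shell
printf 'apple\nApe\nbanana\n' > /tmp/regexdemo.txt

# ^ anchors the match to the start of the line
grep '^a' /tmp/regexdemo.txt           # apple

# [] matches one character from a set; - marks a range
grep '^[A-Z]' /tmp/regexdemo.txt       # Ape

# \{x,y\} bounds repetition of the preceding expression
grep 'an\{1,2\}a' /tmp/regexdemo.txt   # banana
```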
Wednesday, September 3, 2008
Google Chrome, so much faster and so much more simple to use
it looks like Google's new browser, Chrome, is reportedly some 5 times faster than Firefox and other browsers in general. It is also very small and very simple to use.
you may download it here
Thursday, August 28, 2008
another good reminder that lots of fun google search techniques are here
1. "define:" gives you the definition of the words you write after it
2. "time" followed by a location gives you the local time there
3. google search can be used as a calculator: type in an equation and end with '='
4. currency conversion with "in", as in "100 USD in EUR"
5. get the map of a city by searching its name with "map"
6. get related search results using the "related:" keyword
7. use + and - in the search term to require or exclude a meaning
8. type in a 3-digit number and get the area where it is used as a telephone area code
9. quote search terms to get the exact phrase you want
Monday, August 25, 2008
good linux habits
taken from IBM Linux help site
1. the mkdir -p option allows multiple directories at different depths to be created all at once
e.g. mkdir -p good/{fun,happy/photos}
will create a directory good containing 2 directories, fun and happy, and inside happy a new directory photos
2. tar xvf with -C unarchives without having to move the tar file into the destination directory, giving the user the option to specify the directory where the archive will be extracted
e.g. tar xvf newarc.tar.gz -C /temp/a
3. the && and || operators on the command line are more advanced replacements for ;, the command separator in the console. && checks whether the previous command executed and returned 0, and only then runs the second command. || checks whether the previous command returned non-zero, and only runs the second command if a non-zero exit status was returned
e.g. cd /temp/a && mkdir b
e.g. cd /temp/a || mkdir -p /temp/a
4. It is generally a good idea to enclose variable calls in double quotation marks, unless you have a good reason not to. Similarly, if you are directly following a variable name with alphanumeric text, be sure also to enclose the variable name in curly braces ({}) to distinguish it from the surrounding text.
5. the escape character \ at the end of a line splits long commands across lines, making them clearer
6. it's a good habit to group commands using () (run in a subshell) or {} (run in the current shell). This way all the commands inside () or {} have their output grouped together for further use. Make sure there is a space between the commands and the {} (and that the last command before } ends with ;)
7. xargs is a powerful output format tool that can do many types of filtering
e.g. ls -l | xargs
combines all listed files into one line
e.g. ls | xargs file
lists all files and its file type
a caution: xargs treats its end-of-file string (historically '_') specially; if it appears on a line by itself, xargs ignores everything after it. xargs -e with no argument turns off the end-of-file string feature
8. grep -c does the same thing as grep | wc -l and is faster, but grep -c only counts lines containing matches. To count all matches, even when a line contains more than one, use grep -o | wc -l (grep -o cannot be used together with -c)
9. use awk instead of grep when possible. awk captures a line if the field at a given index matches the key; note the comparison operator is ==, not the assignment =
e.g. ls -l | awk '$6 == "Dec"'
captures all lines whose 6th field is Dec
10. grep doesn't have to work with cat because grep can take file names as arguments
e.g. time grep and tmp/a/longfile.txt
does the same as time cat tmp/a/longfile.txt | grep and
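Point 3 in particular is worth internalizing. A small sketch (directories invented for the demo):

```shell
# && runs the right side only when the left side exits with status 0
mkdir -p /tmp/habitdemo && cd /tmp/habitdemo

# || runs the right side only when the left side exits non-zero:
# the cd fails because the directory doesn't exist yet, so mkdir runs
cd /tmp/habitdemo/logs 2>/dev/null || mkdir -p /tmp/habitdemo/logs

# combined: enter the directory, creating it first if it was missing
cd /tmp/habitdemo/cache 2>/dev/null || mkdir -p /tmp/habitdemo/cache && cd /tmp/habitdemo/cache
```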
Wednesday, August 20, 2008
Turn Gmail into a to do list with Superstar
you can turn Gmail into a to-do list. First turn on the new Gmail Labs feature Superstars and enable 3 of the stars: red, orange, and green. Then enter
l:^ss_sr OR l:^ss_so OR l:^ss_sg
into the search field, and any emails with red, orange, or green stars will be selected; the selection is updated on every search
keep that search and add a Quick Link for it on the left side, calling it "to do list"
---------------
on the other hand, VMware has released Fusion 2.0 for the Mac, a virtual machine that can run Linux. Should check it out
Labels:
Gmail,
Google Gmail Lab,
Mac Virtual Machine,
Superstar,
To do List,
Virtual Machine,
VMWare
Tuesday, August 19, 2008
A brief intro to Linux Cache Structure
there are many algorithms for memory allocation, but none is a net win; every one makes a trade-off. The basic algorithm is heap allocation, in which the kernel divides memory into blocks that fit the requested size and deallocates them after use. The algorithm is efficient in memory usage but causes significant fragmentation.
buddy allocation merges blocks back together after deallocation when neighboring blocks are also free, and it allocates memory with a best-fit approach. This fragments less than heap allocation but requires more processing.
the linux kernel uses the cache allocation algorithm from SunOS, "slab allocation". Its basic structure looks like this:
the kernel memory cache is kept as a chain; it is subdivided into slabs, categorized as full, partial, and empty, and each slab contains a page of memory, with each page composed of the objects being allocated.
the idea is that the kernel takes much longer to initialize objects than to allocate and deallocate memory; with slab caching, the kernel can reuse previously allocated, already-initialized objects instead of initializing them again.
full tutorial can be found here.
Tuesday, August 5, 2008
windows file system NTFS: bad bad baby
the windows NTFS file system supports a feature that lets one file carry multiple data streams (it was originally added to support the Macintosh file system), and it is a bad idea. You can use this feature to attach hidden files to other files that will never show up in Windows Explorer. You can even hide executable files this way, attach them to Windows internals, and do very bad things.
an example to show you how:
1. create a file with any name, like example.txt
2. press Win+R and type in, for example, "notepad C:\example.txt:hidden.exe"
3. a new stream attached to example.txt has been created, and if you try to find example.txt:hidden.exe in your Explorer, you can't. The hidden stream's size won't even appear in example.txt's size, so you could put a 1 GB file into example.txt and never notice. In fact, about the only way to notice the hidden data is to move the file from NTFS to FAT, which doesn't support the feature
this multiple-stream feature is meant for saving a file's custom information, such as the author name, but nothing blocks a file from carrying additional streams that aren't being used legitimately.
and for all those people who think Windows Vista is better, you had better read the 10-page-long list of features deprecated between Windows XP and Windows Vista. Some of those removals are nice for security, but some are stupid.
Friday, August 1, 2008
linux booting process summary
just a summary of how linux boots up from the ground:
5 steps
1. the PC loads the BIOS from flash memory. The BIOS sets up the hardware configuration and reads from a special register which device to boot from, usually the hard disk. If it is a hard disk, it loads the disk's first sector of 512 bytes into memory.
2. those 512 bytes contain the first stage of the boot loader: roughly the first 446 bytes are executable code, followed by 64 bytes describing the 4 primary partitions (starting sector, ending sector, size, etc.), and then the 2-byte magic number 0xAA55 that validates the boot sector. It picks the partition marked bootable and loads the second part of the boot loader.
3. the second part of the boot loader reads the partition, loads the file system of the volume, reads in the rest of the bootloader, and prompts the user to choose a kernel. It puts the kernel into memory and transfers control to the kernel
4. the kernel executes, mounts the real root file system, loads modules to control the hardware, etc., and starts the init process, the ancestor of all user processes
5. init starts the user processes; this is where the terminals, or the login screen, start up
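The 0xAA55 signature from step 2 is easy to see for yourself. A sketch on a fake boot sector (so no real disk access is needed); the two signature bytes are stored little-endian at offset 510:

```shell
# build a zeroed 512-byte "boot sector"
dd if=/dev/zero of=/tmp/bootsector.img bs=512 count=1 2>/dev/null

# write the magic number 0xAA55 at offset 510 (on disk: 0x55 then 0xAA)
printf '\125\252' | dd of=/tmp/bootsector.img bs=1 seek=510 conv=notrunc 2>/dev/null

# inspect the last two bytes -- the same validity check the BIOS performs
od -A d -t x1 -j 510 /tmp/bootsector.img   # the two bytes read 55 aa
```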
Labels:
BIOS,
boot loader,
booting,
booting up,
grub,
Linux
Thursday, July 31, 2008
Wednesday, July 30, 2008
Web spiders and some basic concepts in flash drive file system
web spiders: programs that automatically fetch information from websites. An email harvester used for spam is one (bad) example of a web spider/crawler, but there are good ones too, such as RSS feed readers, HTTP web spiders that feed the user news.
Ubuntu tools that can act as web spider:
wget - web downloading command line tool that automatically fetch the file indicated
snarf - simple web resource fetcher, just like wget
Linux Flash Drive File system
unlike disk file systems, flash drives are made of NAND memory blocks and have some different characteristics:
1. although faster at reading and writing, flash memory tolerates far fewer rewrites than a conventional hard disk, so a wear-leveling algorithm is needed to lengthen the drive's life span, steering programs away from constantly writing to one frequently used part of the flash drive.
2. writing to a flash drive is different: bits can only be programmed in one direction, so writing a block requires first erasing the entire block back to its empty state and then programming the bits. This also means that whenever even a single bit changes in a block, the entire block must be rewritten, further decreasing the drive's life span
3. charge dissipation after frequent reading also happens to flash memory, so after a certain number of reads of a block, the block must be rewritten to recharge the NAND transistors and keep the data, further increasing the frequency of rewrites
linux uses JFFS (journalling flash file system) and YAFFS for flash drives. Both employ algorithms that limit rewrite frequency and spread writes out.
Labels:
flash drive file system,
JFFS,
snarf,
web crawler,
wget,
YAFFS
Thursday, July 24, 2008
mental note...
I need to write a script that downloads all the software I need with just one click... and it would possibly be nice if it compiled each program from source code on my machine and then installed it
Wednesday, July 23, 2008
Useful Commands for Linux Administration
(copied from IBM developer networks)
1. fuser
checks who is accessing a mounted volume. fuser -k kills the processes that are accessing the mounted volume
2. eject
ejects cdrom
3. mount /media/cdrom
mounts the cd manually
4. reset
resets the current console without having to restart the shell
5. su -
become another user, granting access privilege of that user
6. screen -S <name>
starts a named screen session that can be shared with another person on one computer; the other user connects via ssh and attaches to the same session with screen -x <name> (this only works if both log in as the same user). screen can also split screens, etc. You can detach from the session with ctrl-A d, and come back to it later by attaching again (screen -x <name>)
7. iperf - the linux ethernet speed test program
can get it from
http://dast.nlanr.net/Projects/Iperf2.0/iperf-2.0.2.tar.gz
to run iperf as a server that other machines can test against, use
iperf -s -f M
to connect to an iperf server in order to test the ethernet speed, use
iperf -c <server> -P 4 -f M -w 256k -t 60
which tests with 4 parallel streams and a 256k TCP window for 60 seconds
8. bash scripting using for loops, while loops, seq, awk, sort, uniq. Some examples:
1)
# P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1); done >> /etc/hosts
appends entries mapping every local machine from 192.168.99.1 to 192.168.99.200 to the names n001 to n200 in the /etc/hosts file
2)
# for num in $(seq -w 200); do ssh n$num free -tm | grep Mem | awk '{print $2}'; done | sort | uniq
connects to every machine n001 to n200 via ssh, pulls each machine's memory size (the second column of the Mem line of free) with awk, then pipes through sort and uniq to list the distinct values
9. view processor information
cat /proc/cpuinfo
10. check the number of processors, e.g. with grep -c ^processor /proc/cpuinfo
Additionals:
GRUB boot option: pressing E in the GRUB boot menu opens the boot command for editing; adding 1 after the kernel option causes booting into single-user mode. This is useful for admins who have lost their root password: once logged in as the single user, use passwd to change the root password
SSH tunneling: you can tunnel through a firewall with ssh to give outside networks access to a machine inside it, using an intermediate machine. It takes 4 steps:
1) the machine inside the firewall sshes to the intermediate with ssh -R <port>:localhost:22 <user>@<intermediate>
2) while sshed into the intermediate, keep the connection alive with a console script:
while [ 1 ]; do date; sleep 300; done
3) the other machine connects to the intermediate using
ssh <user>@<intermediate>
4) from the intermediate, it then sshes into the machine inside the firewall using
ssh -p <port> root@localhost
this assumes you have root privileges on the machine inside the firewall
VNC (virtual network computing) tunneling gives the remote user a graphical interface instead of a console. Setting it up takes 5 steps:
1) start vnc server in machine inside firewall
vncserver -geometry 1024x768 -depth 24 :99
vncserver displays start at port 5900, so :99 will open the vncserver on port 5999
2) machine inside firewall allows vnc forwarding to intermediate machine
ssh -R 5999:localhost:5999 <user>@<intermediate>
at this time, the intermediate machine can view the machine inside firewall by
vncviewer localhost:99
3) keep the ssh open using
while [ 1 ]; do date; sleep 300; done
4) on the other machine that need to access the machine inside firewall, use this to connect to the intermediate.
ssh -L 5999:localhost:5999 <user>@<intermediate>
the -L flag forwards a local port out to the remote side (pulling the service toward this machine), while -R forwards a remote port back (pushing the service out)
5) view the machine inside firewall by
vncviewer localhost:99
on sidenote, Putty in Windows can set the vnc port using user interface instead of command line in linux
Viewing error messages from programs during ssh: ssh doesn't report program errors when it is running. to view the program errors, you need to cat /dev/vcsl (or vcs1??)
1. fuser
checks who is accessing a mounted volume; fuser -k kills the processes that are accessing the mounted volume
2. eject
ejects cdrom
3. mount /media/cdrom
mounts the cd manually
4. reset
resets the current console without having to restart the shell
5. su -
becomes another user, granting you that user's access privileges and environment
6. screen -S <name>
shares one terminal session between two logins of the same user, one of them typically connected over ssh: start a named session with screen -S <name>, then attach to it from the second login with screen -x <name>. It only works if both logins are the same user. screen can also split screens, etc.; detach with Ctrl-A D and come back to the session later with screen -r <name>
7. iperf - the linux ethernet speed test program
can get it from
http://dast.nlanr.net/Projects/Iperf2.0/iperf-2.0.2.tar.gz
to run iperf as server for other machine to detect ethernet speed, use
iperf -s -f M
to connect to a iperf server in order to test ethernet speed, use
iperf -c <server> -P 4 -f M -w 256k -t 60
connects to the iperf server with 4 parallel streams (-P 4) and a 256 KB TCP window (-w 256k), reports in MBytes (-f M), and runs the test for 60 seconds (-t 60)
8. bash scripting with for loops, while loops, seq, awk, sort, uniq: some examples
1)
# P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1);
done >>/etc/hosts
generate host entries mapping 192.168.99.1 through 192.168.99.200 to machine names n001 through n200 and append them to the /etc/hosts file
# for num in $(seq -w 200); do ssh n$num free -tm | grep Mem | awk '{print $2}';
done | sort | uniq
ssh into every machine n001 through n200, grab the memory figure from the second column of free's Mem line using awk, then pipe through sort and uniq to list only the distinct values
9. view processor information
cat /proc/cpuinfo
10. check number of processors
cat /proc/cpuinfo | grep processor | wc -l
11. grab BIOS information
dmidecode | less
note that dmidecode is difficult to grep
12. check driver for ethernet
ethtool -i eth0
Additionals:
GRUB boot option: pressing E in the GRUB boot menu opens the boot command for editing; adding 1 after the kernel line boots into single-user mode. This is useful for admins who have lost the root password: once logged in as the single user, use passwd to change the root password
SSH tunneling: you can tunnel through a firewall using ssh to give outside networks access to a machine behind it, via an intermediate machine. It takes 4 steps:
1) the machine inside the firewall opens a reverse tunnel to the intermediate with ssh -R <port>:localhost:22 <user>@<intermediate>
2) while sshed into the intermediate, keep the connection alive with a console script:
while [ 1 ]; do date; sleep 300; done
3) another machine connects to the intermediate using
ssh <user>@<intermediate>
4) that machine then sshes into the machine inside the firewall using
ssh -p <port> root@localhost
this assumes you have root access on the machine inside the firewall
VNC tunneling (Virtual Network Computing): VNC tunneling gives the remote user a graphical interface instead of a console. Setting it up takes 5 steps:
1) start a vnc server on the machine inside the firewall:
vncserver -geometry 1024x768 -depth 24 :99
vncserver's base port is 5900, so display :99 puts the server on port 5999
2) the machine inside the firewall forwards the VNC port to the intermediate machine:
ssh -R 5999:localhost:5999 <user>@<intermediate>
at this time, the intermediate machine can view the machine inside firewall by
vncviewer localhost:99
3) keep the ssh open using
while [ 1 ]; do date; sleep 300; done
4) on the other machine that needs to reach the machine inside the firewall, connect to the intermediate with
ssh -L 5999:localhost:5999 <user>@<intermediate>
-L forwards a local port out to the remote host (pulling the service toward you), while -R forwards a remote port back to the local machine (pushing it out)
5) view the machine inside firewall by
vncviewer localhost:99
On a side note, PuTTY on Windows can set the tunneled VNC port through its user interface instead of on the command line as in Linux.
Viewing error messages from programs during ssh: ssh doesn't show a program's errors while it is running; to view them, cat /dev/vcs1
Tuesday, July 8, 2008
Some useful network linux/unix commands
1. ping
used for checking connectivity and round-trip latency to a host
2. nmap -A
used for detecting ports and services of the IPs. a very good network scanning tool
3. netcat
used to connect to any IP and port and send/receive raw data over the connection
4. snort
packet logging and network traffic analysis tool
5. tcpdump
display tcp packet received by this computer from the network it has been attached to
6. kismet
network detector, packet sniffer, and intrusion detection system; works with wireless networks
7. wireshark
network monitor tool
8. traceroute
traces the routers that packets pass through on the way to and from the destination
9. telnet
telnet connection tool to connect to another computer through TCP
10. nslookup
domain name look up tool
11. john the ripper
password cracking software for unix
Wednesday, June 25, 2008
C++ switch statement
just found out that VC++ does not let you declare a new variable with an initializer inside a case of a switch statement: a later case label could jump past the initialization, so the declaration has to be wrapped in its own braces
Tuesday, June 24, 2008
wxWidgets wxGrid is bad
some of wxGrid's selection getter functions aren't working correctly; in fact, I don't think they work even in the latest version: wxGrid::GetSelectedRows, wxGrid::GetSelectedCols, wxGrid::GetSelectedCells. All of them are supposed to return a wxArray subtype containing the selected cells, but none of them does.
the only functions that can be used for checking selection are IsSelection, which checks whether any cells are selected, and IsInSelection, which checks whether a given cell is selected; you may build additional functions on top of those.
Sunday, June 22, 2008
Suspend problem still exist on Ubuntu
I was taking my laptop out of my bag today and suddenly realized it was really hot all by itself, around 50 degrees Celsius. It had been suspended under Ubuntu, and I guess suspend on Ubuntu is still problematic: it keeps draining a lot of energy from the battery, and because the fan no longer runs, the whole laptop gets really hot, potentially hot enough to deform the plastic (the battery drained about 50% in just 50 minutes of travel).
it could also be that closing the lid didn't actually suspend my Ubuntu; I felt heat while carrying it but assumed it was the weather. I'll need to keep a close watch on this issue
Friday, June 20, 2008
wxWidget bug
I found a weird bug in the wxString::Format function: if I substitute more than 64 variables into the string, the substitutions past that point stop working and are rendered as the letter of their conversion type. Example:
char c = 'i';
wxString::Format("%c %c %c <... repeat 61 more times> %c %c", c, c, c,<... repeat 61 more times> c, c);
the output of the above call will be "i i i i <... repeat 60 more times> c c", where each trailing c is the literal %c left over from the format string; it is not replaced by the variable c even though the call is perfectly valid and c is assigned.
wxString::Format is an inline function that basically wraps printf. I don't know whether that's the maximum number of insertions printf itself can handle or whether wxString::Format imposes its own limit; I should probably test that.
Sunday, June 8, 2008
things to ask before next Ubuntu question/answer session:
-cannot get disk usage from root. partition reports root has some error warning.
-after suspend wake up, it shows a white screen. don't know any way to get rid of it
-flash player no longer working with firefox. somehow there is no sound but the sound system works
UPDATE:
looks like all the problems listed above got fixed before I had the chance to visit Free Geek.
openSUSE is still not running. I think it's because gdm and kdm cannot run on top of X11 due to driver incompatibility with the NVidia video card. I don't know exactly how to fix it, but I heard openSUSE 11 fixed that problem, so I'll check that one out instead.
Thursday, June 5, 2008
Visit to Free Geek
I went to the Free Geek linux help session today for help with Fedora. Unfortunately, I did not solve my problem with ksynaptics, but I did learn some cool stuff:
mknod -> used for making a device file in /dev. the expert said it's something related to "udev". i should look into it
dpkg-reconfigure xserver-xorg -> used for debian xorg.conf reconfiguration. it doesn't work in fedora, however. it is very useful when your xorg or any other conf file is messed up
locate -> a program that rebuilds its file database at midnight and does fast indexed searches of file names
also, in bash, you can double tab in command prompt to list out all files in the directory that contain matching strings you have typed. this is tested and working in bash
------------------
btw, I did some research into the Mac file system. Apparently it is much cooler than Linux's: the reason the Mac has such a good file searching utility is that it uses a special autofs device under /dev that can track all current changes to directories and feed them to the search indexing system. This is much better than the locate program unix has, because the update is instant.
also, Time Machine on the Mac uses a very advanced technique that allows hard links to directories, which is forbidden in unix because multiple hard links to directories can create bad loops. Mac solved that problem and uses it extensively in Time Machine, so there is only one copy of an entire folder of files if none of the files in it has been modified since the previous backup. The only problem is that even if you modify just one byte of a big file, the whole file is saved separately from the previous copy; this causes problems if you use many virtual machines, or MS Entourage, which stores all emails in a single database file that is both large and frequently changed.
Thursday, May 29, 2008
File Browsing Tip
the unix command cd is a shell builtin, so sudo usually cannot run it. To open folders that aren't viewable because they belong to another user, you can sudo gnome-open the folder to view it in a GNOME file browser.
Wednesday, May 28, 2008
Lenovo, just how good is it
I just saw a blog post regarding the quality of the Lenovo T61p at xealcom.co.uk
I always thought that Lenovo had good laptop build quality, but it turns out that might not be correct. A friend of mine's Lenovo T41p just broke after 3 years of use. So I would say that even though the Lenovo ThinkPad series is engineered to withstand theft and physical abuse, the ordinary computer parts the company uses in its hardware are not especially durable.
I was talking to my co-worker today, and he said that most power supplies do not provide the voltage level they are rated for, which greatly reduces the lifetime of the capacitors and inductors in a computer. A capacitor is rated to last a certain time at a given voltage, say 10 volts; if the power supply feeds it 15 volts, it will only reach half of its guaranteed lifetime. A good power supply therefore lengthens the lifetime of a computer considerably.
.. okay, that may not be related to the topic i was saying.
my point is that Lenovo ThinkPads truly have some nicely engineered features that no other brand offers: a shock-mounted hard drive, magnesium and carbon fibre monitor casing, steel hinges, a low-power-consumption motherboard, replaceable compartments, and, last but not least, keyboard drain holes in case of a spill. Its ordinary components are average: my friend's LCD screen died after 3 years of use, the motherboard is not especially durable, the power supply is normal, and so on. So its key components aren't really any better than other brands'. A ThinkPad may keep you from destroying the laptop in an accident, but it doesn't guarantee a longer normal lifespan; and if you take really good care of your laptop, any PC brand can be just as good as Lenovo.
Things to do when installing Ubuntu on Lenovo T61p with Nvidia graphic card
Just found out that for some weird reason my Ubuntu does not use my swap partition after installation. After some digging, I found that /etc/fstab lists the swap partition under the wrong UUID, so the system never enables swap. So I think I'll have to keep a reminder list of things to do after installing Ubuntu:
Things to do after installing Ubuntu on Lenovo T61p with Discrete graphic card:
1. Install the Nvidia graphics driver
2. setup program update source and update software
3. edit /etc/fstab and enter all other partitions into designated folders
4. edit /boot/grub/menu.lst to include other OS if there is any
5. set ~/Downloads folder as the default firefox downloads folder
6. enable/download Compiz Fusion and set it up
7. download and install Kiba-Dock
8. download the latest JDK and install into /usr/lib/jvm
9. using update manager to get latest Java
10. download the latest version of Eclipse and NetBeans and install them into /usr/lib; create a link in /usr/bin to those executables; edit /usr/share/Applications to include those programs as application in category development
11. install latest Apache into /usr/lib and setup Apache
12. install administration applications: boot manager, firestarter, partition editor
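Since the swap problem above came from a wrong UUID in /etc/fstab, step 3 boils down to checking that the swap line matches the real partition. A sketch of the relevant fstab line (the UUID is a placeholder: find the real one with ls -l /dev/disk/by-uuid, then enable swap with sudo swapon -a):

```
# /etc/fstab swap entry; replace the placeholder with the real UUID
UUID=<swap-partition-uuid>  none  swap  sw  0  0
```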
Thursday, May 22, 2008
C++ Namespace buzz
namespaces have proven trickier than I thought. I had always thought of them like the environments created in more expressive languages such as DrScheme and Haskell, but there are some major differences, and namespace syntaxes that differ only in minor details can behave drastically differently. Here is what I learned:
1. a namespace is much like the way Scheme represents functional environments with the let syntax, except a namespace lets you give the environment a name so other files can refer to it (let by itself in Scheme doesn't do this; you must use the define syntax to define a variable and then combine it with the let)
2. to refer to a namespace env, you can use either the using syntax (using namespace name;) or the scope-resolution directive ::. The two mean totally different things, however:
using namespace name;
the using syntax basically loads all the env variable symbols into the current env, but does not bind them automatically! Those symbols are bound to their definitions during linking. It is less powerful than the :: directive. It probably works like this:
load into the current env the symbol binding pair (symbol, <resolved at link time>)
the :: directive
the directive is more powerful: it binds the env symbol to the definition immediately. Using the directive does not create any symbol binding in the env; it just links namespace_symbol::symbol directly to the definition inside the namespace
alternatively, you are also allowed to write a using-declaration for a single symbol (note: without the namespace keyword):
using name::symbol;
this declaration works like the directive, binding the env symbol to its definition right away, and also makes the rest of the env alias the bare symbol to the one defined inside the namespace. i.e.
using std::string;
string st = "string";
this code basically says: load the name string from namespace std into the current env with the binding pair (string, std::string)
-----------------------------------------------------
so, with the compiler behavior defined, here comes the interesting part: classes and data in a namespace are compiled as part of the namespace. i.e., if you think of a namespace as a function with its own scope, the local variables are physically part of the function, whereas a function declared inside it is not physically inside it; after compilation it lives in another part of the program that is merely labeled as belonging to this function's local env.
this leads to a problem: if you declare a function inside a namespace in a header, and in a .cpp file you rely on the using directive while writing the function's definition, the definition is not linked to the header's namespace function. The using directive only imports names; since .cpp files can also have free functions of their own, the linker assumes the definition belongs to the file, not the namespace.
as a result, a linking error pops up whenever you use the function somewhere else, saying the function is not defined.
however, this will not happen to namespace objects and data under the using directive, because the linker assumes those symbols are taken first by the namespace.
------------------------------------------------------------
on a side note, a namespace can be declared without a name, creating an anonymous environment, but it behaves like this:
namespace uniquely_generated_label{
data...
}
using namespace uniquely_generated_label;
which actually looks quite like DrScheme's define + let combination.
note, though, that such unnamed namespaces can only be used inside the file that declares them.
TODO: try what happens to an env symbol that lives in an unnamed namespace nested inside another unnamed namespace, when it is accessed outside both namespaces later in the file
UPDATE 1:
typing
using namespace nspace::func1;
will generate an error stating that func1 is not valid there: using namespace expects a namespace name, not a function. This fits with C++ treating functions almost as primitively as C does, where all functions basically live in one environment; a function inside a namespace just gets its name extended with the namespace name by the compiler. Functions are still not treated as objects, just name tags.
in practice this leaves one way to define a namespace's function in a .cpp file: qualify it with the :: directive (void nspace::func1() { ... }), or reopen the namespace block around the definition.
more on namespaces without names later
Wednesday, May 21, 2008
Adding another linux os into linux grub boot list
finally fixed my ubuntu grub to include an option for booting Fedora 9 today. Apparently the earlier failure was because I was tired when I wrote the Fedora boot info into grub's menu.lst. First I mounted Fedora's partition, went to its /boot/grub, retrieved the menu item that boots it, and copied it into Ubuntu's grub. It didn't work, and the reason is that there are two initrd images and I used the wrong one. The correct initrd to use is
/boot/initrd-2.6.25-14.fc9.i686.img
it was not because of the root= value, which was actually correct (even though during the failed Fedora boot it claimed it could not find the partition that root= describes)
the correct value for root= should be the UUID of the partition, which can be retrieved using the command:
ls -l /dev/disk/by-uuid
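Putting the pieces together, the Fedora stanza in Ubuntu's menu.lst would look roughly like this. The kernel file name and the (hd0,2) partition are guesses for illustration, and the UUID placeholder must be replaced with the value from ls -l /dev/disk/by-uuid:

```
# hypothetical menu.lst entry; adjust the partition, UUID, and kernel name
title  Fedora 9
root   (hd0,2)
kernel /boot/vmlinuz-2.6.25-14.fc9.i686 ro root=UUID=<fedora-root-uuid>
initrd /boot/initrd-2.6.25-14.fc9.i686.img
```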
Tuesday, May 20, 2008
wxWidget 2.4.2 library wxHashMap
spent 3 hours trying to get wxHashMap to work. The tutorial for using wxHashMap is completely wrong; even copying the example out and building it in the VC++ IDE does not work (linking error).
I searched the samples in wxWidgets 2.4.2; there is only 1 sample that uses wxHashMap, and ironically it only tests declaring the hash map, NOT using it... way to go, sample writers
TODO: test out wxHashMap in newest wxWidget 2.8.7
update:
wxHashMap works just like the one in the tutorial in version 2.8.7