## Sep 25, 2008

### large makefiles with variables

Our example makefile didn't use any variables. Let's include some, to see if they help us out:

```make
CC = gcc
CFLAGS = -g -O2
OBJECTS = main.o foo.o

main.exe : $(OBJECTS)
	$(CC) $(CFLAGS) $(OBJECTS) -o main.exe

main.o : main.c
	$(CC) $(CFLAGS) -c main.c

foo.o : foo.c
	$(CC) $(CFLAGS) -c foo.c
```

This makefile looks a lot like the old one, except that many of the commands have been replaced with variable substitutions. What make does is replace each variable with its value in the target, dependency, and command sections of the rules. That lets you specify things in one place, making the file easier to maintain. In our example, we use $(CC) to specify the compiler, so we could set it to something else if we wanted to without having to change the whole makefile.

Here's another trick that GNU make lets you do. In the above makefile, we had to include the rule for compiling sources into objects twice - once for each source file. That could get tiresome when we have dozens of sources, so let's define a pattern instead. This pattern will be used whenever make needs to compile any source:

```make
%.o : %.c
	$(CC) $(CFLAGS) -c $<
```

Here, we have used the percent (%) character to denote the part of the target and dependency that matches whatever the pattern is used for, and $< is a special variable (imagine it as $(<)) that means "whatever the dependencies are". Another useful variable is $@, which means "the target". Our makefile now looks like this:

```make
CC = gcc
CFLAGS = -g -O2
OBJECTS = main.o foo.o

main.exe : $(OBJECTS)
	$(CC) $(CFLAGS) $(OBJECTS) -o main.exe

%.o : %.c
	$(CC) $(CFLAGS) -c $<
```

Now, if we need to add more source files, we only have to update the line that defines the OBJECTS variable!

Note that make is already pretty smart about a lot of things, like how to build object files. In our example above, we could have left out the last rule completely! The fact that we chose CC and CFLAGS as names was no coincidence, since those are the variables that make's built-in rules use. To see a list of all the built-in rules, consult the make documentation or run "make -p".

The reference manual for make (run "info make") contains many more examples and nifty tricks for building makefiles, but this section covered the bare minimum that you'll need to know to manage your projects with make.

## Sep 24, 2008

### uppercase any filenames with lowercase chars

```sh
#!/bin/sh
# uppercase any filenames with lowercase chars
for file in $*
do
    if [ -f $file ]
    then
        ucfile=`echo $file | tr [:lower:] [:upper:]`
        if [ $file != $ucfile ]
        then
            mv -i $file $ucfile
        fi
    fi
done
```

## Jul 18, 2008

### Search and replace all files in dir

Method I:

```sh
#!/bin/sh
for file in $(grep -il "Hello" *.txt)
do
    sed -e "s/Hello/Goodbye/ig" $file > /tmp/tempfile.tmp
    mv /tmp/tempfile.tmp $file
done
```

Method II:

```sh
find /path/to/start/from/ -type f | xargs perl -pi -e 's/applicationX/applicationY/g'
```

Method III:

```sh
#!/bin/sh
myname="/tmp/`whoami``date +%d%m%H%M%S`"
if test -f $myname
then
    echo "$0: Cannot make directory $myname (already exists)" 1>&2
    exit 0
fi
mkdir "$myname"
for FILE in $@; do
    sed 's/old_string/new_string/g' $FILE > "$myname"/"$FILE"new_tmp
    mv "$myname"/"$FILE"new_tmp $FILE
done
rmdir $myname
```

Replace old_string with the string you want to replace and new_string with the replacement string.

Note: This script makes use of the /tmp directory.

The sed command uses the same syntax as Perl to search for and replace strings. Once you have created the script, enter the following at the Unix command line prompt: sh script_name file_pattern Replace script_name with the filename of the script, and file_pattern with the file or files you want to modify. You can specify the files that you want to modify by using a shell wildcard, such as *.html.
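The three methods above all do a replace-and-rename dance by hand. As a quick sanity check of the same idea, here is a minimal sketch using a throwaway file; it assumes GNU sed, whose -i flag does the temp-file-and-rename step for you:

```shell
# Demonstration with a disposable file; assumes GNU sed for the -i (in-place) flag.
printf 'Hello world\n' > /tmp/sr_demo.txt
sed -i 's/Hello/Goodbye/g' /tmp/sr_demo.txt   # edit in place, no temp-file dance
cat /tmp/sr_demo.txt                          # prints "Goodbye world"
```

Remove /tmp/sr_demo.txt afterwards. On systems without GNU sed (e.g. BSD), -i requires a suffix argument, so the portable temp-file approach above is still worth knowing.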

## May 14, 2008

### The process table and the nice command


The kernel maintains a list of all the current processes in a "process table"; you can use the ps command to view the contents of this table.

Each process can also be assigned a priority, or "niceness" level; a value which ranges from -20 to 19. A priority of "-20" means that the process will be given access to the CPU more often, whereas a priority of "19" means that the process will only be given CPU time when the system is idle.

You can use the nice and renice commands to specify and alter these values for specific processes.
#### Process creation

From your shell prompt (bash), you will usually instruct the system to run an application for you; for example, "vi". This will then cause your "bash" process to "fork" off a new process. The initial process is referred to as the "parent process", and the process which it forked as the "child process".

The process table contains the parent PID (PPID), and uses this to track which processes spawned which other ones.
#### System Processes

As well as the standard user processes that you would expect to find running, such as your shell and perhaps your editor, there are also several system processes that you would expect to find running. Examples of these include the cron daemon (crond), which handles job scheduling, and the system log daemon (syslogd), which handles the logging of system messages.
#### Scheduling command execution with batch (cron) jobs

There are two methods of scheduling jobs on a Unix system. One is called at, which is used for once-off batch jobs. The other is called cron, which is used for regularly run tasks.

The at jobs are serviced by the "at daemon (atd)".
#### at

SYNTAX:
at [-f script] TIME

This command is used to schedule batch jobs.

You can either give it a script to run with the "-f" parameter, or you can specify it after you've typed the command.

The "TIME" parameter can be in the form of HH:MM, or "now + n minutes". There are several other complicated methods of specifying the time, which you should look up in the man page for at(1).

debian:~# at now + 5 minutes
warning: commands will be executed using /bin/sh
at> echo hello!
at> <EOT>
job 1 at 2004-03-12 13:27

We have now scheduled a job to run in 5 minutes time; that job will simply display (echo) the string "hello!" to stdout.

To tell at that you're finished typing commands to be executed, press Ctrl-d; that will display the marker that you can see above.
#### atq

SYNTAX:
atq

This command displays the current batch jobs that are queued:

debian:~# atq
1 2004-03-12 13:27 a root
debian:~#

This is the job that we queued earlier.

The first number is the "job id", followed by the date and time that the job will be executed, followed by the user who the job belongs to.
#### atrm

SYNTAX:
atrm

This command simply removes jobs from the queue.

debian:~# atrm 1
debian:~# atq
debian:~#

We've now removed our scheduled job from the queue, so it won't run.

Let's add another one, and see what happens when it is executed:

debian:~# at now + 1 minute
warning: commands will be executed using /bin/sh
at> touch /tmp/at.job.finished
at> <EOT>
job 3 at 2004-03-12 13:27
debian:~# atq
3 2004-03-12 13:27 a root
debian:~# date
Fri Mar 12 13:26:57 SAST 2004
debian:~# date
Fri Mar 12 13:27:04 SAST 2004
debian:~# atq
debian:~# ls -l /tmp/at.job.finished
-rw-r--r-- 1 root root 0 Mar 12 13:27 /tmp/at.job.finished

As you can see, we scheduled a job to execute one minute from now, and then waited for a minute to pass. You'll notice how it was removed from the queue once it was executed.
#### cron

The cron jobs are serviced by the "cron daemon (crond)".
#### crontab

SYNTAX:
crontab [ -u user ] { -l | -r | -e }
crontab [ -u user ] filename

You can use the crontab command to edit, display and delete existing cron tables.

The "-u" switch lets the root user specify another user's crontab to perform the operation on.

Table 7.1. crontab options
-l lists current crontab
-r removes current crontab
-e edits current crontab

If a filename is specified instead, that file is made the new crontab.

The syntax for a crontab is as follows:

# minute hour day month weekday command

Example:

# minute hour day month weekday command
0 1 * * * backup.sh

This cron job will execute the backup.sh script, at 01:00 every day of the year.

A more complicated example:

# minute hour day month weekday command
5 2 * * 5 backup-fri.sh

This cron job will execute the backup-fri.sh script, at 02:05 every Friday.

Weekdays are as follows:

01 - Monday
02 - Tuesday
etc.
07 - Sunday
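Cron fields also accept ranges and step values; a hedged sketch of both (the script names here are made up for illustration):

```
# minute hour day month weekday command
30 22 * * 1-5   backup-weekday.sh   # 22:30 every Monday through Friday
*/15 * * * *    poll.sh             # every 15 minutes, all week
```

Note that both 0 and 7 are accepted for Sunday in most cron implementations.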

Note:

There is also a "system crontab", which differs slightly from the user crontabs explained above. You can find the system crontab in a file called /etc/crontab.

You can edit this file with vi; you must not use the crontab command to edit it.

You'll also notice that this file has an additional field, which specifies the username under which the job should run.

debian:~# cat /etc/crontab
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file.
# This file also has a username field, that none of the other crontabs do.

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# m h dom mon dow user command
25 6 * * * root test -e /usr/sbin/anacron || run-parts --report /etc/cron.daily
47 6 * * 7 root test -e /usr/sbin/anacron || run-parts --report /etc/cron.weekly
52 6 1 * * root test -e /usr/sbin/anacron || run-parts --report /etc/cron.monthly
#

Some of the daily system-wide jobs that run are:

1. logrotate - this checks to see that the files in /var/log don't grow too large.
2. find - this builds the locate database, used by the locate command.
3. man-db - this builds the "whatis" database, used by the whatis command.
4. standard - this makes a backup of critical system files from the /etc directory, namely your passwd, shadow and group files; the backups are given a .bak extension.

Monitoring system resources

The following commands are vital for monitoring system resources:

#### ps and kill

The ps command displays the process table.

SYNTAX:
ps [auxwww]

a -- select all with a tty except session leaders
u -- select by effective user ID - shows username associated with each process
x -- select processes without controlling ttys (daemon or background processes)
w -- wide format

debian:~# ps
PID TTY TIME CMD
1013 pts/0 00:00:00 bash
1218 pts/0 00:00:00 ps

debian:~# ps auxwww
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.4 0.1 1276 496 ? S 13:46 0:05 init
root 2 0.0 0.0 0 0 ? SW 13:46 0:00 [kflushd]
root 3 0.0 0.0 0 0 ? SW 13:46 0:00 [kupdate]
root 4 0.0 0.0 0 0 ? SW 13:46 0:00 [kswapd]
root 5 0.0 0.0 0 0 ? SW 13:46 0:00 [keventd]
root 140 0.0 0.2 1344 596 ? S 13:46 0:00 /sbin/syslogd
root 143 0.0 0.3 1652 836 ? S 13:46 0:00 /sbin/klogd
root 151 0.0 0.1 1292 508 ? S 13:46 0:00 /usr/sbin/inetd
daemon 180 0.0 0.2 1388 584 ? S 13:46 0:00 /usr/sbin/atd
root 183 0.0 0.2 1652 684 ? S 13:46 0:00 /usr/sbin/cron
root 682 0.0 0.4 2208 1256 tty1 S 13:48 0:00 -bash
root 1007 0.0 0.4 2784 1208 ? S 13:51 0:00 /usr/sbin/sshd
root 1011 0.0 0.6 5720 1780 ? S 13:52 0:00 /usr/sbin/sshd
root 1013 0.0 0.4 2208 1236 pts/0 S 13:52 0:00 -bash
root 1220 0.0 0.4 2944 1096 pts/0 R 14:06 0:00 ps auxwww

The USER column is the user to whom that particular process belongs; the PID is that process's unique Process ID. You can use this PID to send signals to a process using the kill command.

For example, you can signal the "sshd" process (PID = 1007) to quit, by sending it the terminate (TERM) signal:

debian:~# ps auxwww | grep 1007
root 1007 0.0 0.4 2784 1208 ? S 13:51 0:00 /usr/sbin/sshd
debian:~# kill -SIGTERM 1007
debian:~# ps auxwww | grep 1007

The "TERM" signal is the default that the kill command sends, so you can leave the signal parameter out usually.

If a process refuses to exit gracefully, you can send it a KILL signal, e.g. "kill -SIGKILL <pid>".
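The signal-then-check cycle can be scripted. A minimal sketch using a throwaway background sleep as the victim process:

```shell
# Start a disposable background process, then terminate it by PID.
sleep 300 &
pid=$!                             # PID of the most recent background job
kill -TERM "$pid"                  # TERM is the default, so `kill $pid` is equivalent
wait "$pid" 2>/dev/null || true    # reap the child so the PID leaves the process table
kill -0 "$pid" 2>/dev/null && echo "still running" || echo "terminated"
# prints "terminated"
```

`kill -0` sends no signal at all; it only checks whether the process still exists, which makes it handy for this kind of verification.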
#### top

The top command will display a running process table of the top CPU processes:

14:15:34 up 29 min, 2 users, load average: 0.00, 0.00, 0.00
20 processes: 19 sleeping, 1 running, 0 zombie, 0 stopped
CPU states: 1.4% user, 0.9% system, 0.0% nice, 97.7% idle
Mem: 257664K total, 45104K used, 212560K free, 13748K buffers
Swap: 64224K total, 0K used, 64224K free, 21336K cached

PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
1 root 0 0 496 496 428 S 0.0 0.1 0:05 init
2 root 0 0 0 0 0 SW 0.0 0.0 0:00 kflushd
3 root 0 0 0 0 0 SW 0.0 0.0 0:00 kupdate
4 root 0 0 0 0 0 SW 0.0 0.0 0:00 kswapd
[ ... ]

#### nice and renice

SYNTAX:
nice -<adjustment> <command>

Example:

To run the sleep command with a niceness of "-10":

debian:~# nice --10 sleep 50

If you then run the top command in a different terminal, you should see that the sleep's NI column has been altered:

PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
2708 root -9 -10 508 508 428 S ...

#### renice

SYNTAX:
renice <priority> [ -p <pid> ] [ -u <user> ]

The "-p pid" parameter specifies the PID of a specific process, and the "-u user" parameter specifies a specific user, all of whose currently running processes will have their niceness value changed.
Example:

To renice all of user "student"'s processes to a value of "-10":

debian:~# renice -10 -u student
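You can also watch nice's effect without opening top: run with no arguments, the nice command prints the current niceness, so nesting it shows the adjustment. A sketch, assuming the usual rule that unprivileged users may only increase niceness:

```shell
base=$(nice)               # niceness of the current shell (usually 0)
lowered=$(nice -n 5 nice)  # run `nice` itself at +5 relative to the shell
echo "$base -> $lowered"
```

The two printed values should differ by exactly 5, whatever the shell's starting niceness was.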

#### vmstat

The vmstat command gives you statistics on the virtual memory system.

SYNTAX:
vmstat [delay [count]]

debian:~# vmstat 1 5
procs memory swap io system cpu
r b w swpd free buff cache si so bi bo in cs us sy id
0 0 0 0 212636 13748 21348 0 0 4 4 156 18 1 1 98
0 0 0 0 212636 13748 21348 0 0 0 0 104 12 0 0 100
0 0 0 0 212636 13748 21348 0 0 0 0 104 8 0 0 100
0 0 0 0 212636 13748 21348 0 0 0 0 104 10 0 0 100
0 0 0 0 212636 13748 21348 0 0 0 0 104 8 0 0 100
debian:~#

Field descriptions:

Table 7.2. procs
r processes waiting for run time
b processes in uninterruptible sleep
w processes swapped out but otherwise runnable

Table 7.3. memory
swpd virtual memory used (Kb)
free idle memory (Kb)
buff memory used as buffers (Kb)

Table 7.4. swap
si memory swapped in from disk (kB/s)
so memory swapped out to disk (kB/s)

Table 7.5. io
bi blocks received from a block device (blocks/s)
bo blocks sent to a block device (blocks/s)

Table 7.6. system
in interrupts per second (including the clock)
cs context switches per second

Table 7.7. cpu
us user time as a percentage of total CPU time
sy system time as a percentage of total CPU time
id idle time as a percentage of total CPU time
#### system monitoring tools

It is often useful to be able to keep a historical record of system activity and resource usage. This is useful to spot possible problems before they occur (such as running out of disk space), as well as for future capacity planning.

Usually, these tools are built by using system commands (such as vmstat, ps and df), coupled together with rrdtool or mrtg, which store the data and generate graphs.

rrdtool: http://people.ee.ethz.ch/~oetiker/webtools/rrdtool/

mrtg: http://people.ee.ethz.ch/~oetiker/webtools/mrtg/

Some of the more complex monitoring systems that have been built using these tools include the following:

Cacti: http://www.raxnet.net/products/cacti/

Zabbix: http://www.zabbix.com/

All of the above tools are open source and free to use.
#### ulimit

You may use the bash shell built-in command "ulimit" to limit the system resources that your processes are allowed to consume.

The following is an excerpt from the bash man page:

SYNTAX:
ulimit [-SHacdflmnpstuv [limit]]

Provides control over the resources available to the shell and to
processes started by it, on systems that allow such control.

The -H and -S options specify that the hard or soft limit is set for
the given resource. A hard limit cannot be increased once it is set;
a soft limit may be increased up to the value of the hard limit. If
neither -H nor -S is specified, both the soft and hard limits are set.

The value of limit can be a number in the unit specified for the
resource or one of the special values hard, soft, or unlimited, which
stand for the current hard limit, the current soft limit, and no
limit, respectively. If limit is omitted, the current value of the
soft limit of the resource is printed, unless the -H option is given.
When more than one resource is specified, the limit name and unit are
printed before the value.

Other options are interpreted as follows:

-a All current limits are reported
-c The maximum size of core files created
-d The maximum size of a process's data segment
-f The maximum size of files created by the shell
-l The maximum size that may be locked into memory
-m The maximum resident set size
-n The maximum number of open file descriptors
(most systems do not allow this value to be set)
-p The pipe size in 512-byte blocks (this may not be set)
-s The maximum stack size
-t The maximum amount of cpu time in seconds
-u The maximum number of processes available to a single user
-v The maximum amount of virtual memory available to the shell

If limit is given, it is the new value of the specified resource (the
-a option is display only). If no option is given, then -f
is assumed. Values are in 1024-byte increments, except for -t, which
is in seconds, -p, which is in units of 512-byte blocks, and -n and
-u, which are unscaled values. The return status is 0 unless an
invalid option or argument is supplied, or an error occurs while
setting a new limit.

On a Debian system, the default ulimit settings should appear as follows:

debian:~# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 256
virtual memory (kbytes, -v) unlimited

The common use of this command is to prevent long-running processes, such as web servers (e.g., Apache) and CGI scripts, from leaking memory and consuming all available system resources. Using the ulimit command to reduce the locked-memory and memory-size limits before starting up your web server would mitigate this problem.

Another common use is on shell servers where users may not "tidy up" after themselves; you can then set the cpu time limit in /etc/profile, thus having the system automatically terminate long running processes.

Another example specifically relates to core files. You'll notice that the default core file size is set to 0. This means that when an application crashes (called a "segmentation fault"), it does not leave behind a core file (a file containing the memory contents of the application at the time that it crashed). This core file can prove invaluable for debugging, but can obviously be quite large as it will be the same size as the amount of memory the application was consuming at the time! Hence the default "0" value.

However, to enable core dumps, you can specify the "-c" switch:

debian:~# ulimit -c
0
debian:~# ulimit -c 1024
debian:~# ulimit -c
1024

Any applications subsequently launched from this shell that crash will now generate core dump files of up to 1024 blocks in size.

The core files are normally named "core" or sometimes processname.core, and will be written to the current working directory of the specific application that crashed.
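Because ulimit is a shell built-in, the change is scoped to the shell that makes it (and its children). A quick sketch using a subshell, assuming the hard limit allows a soft limit of 1024 blocks:

```shell
# Set the core-file limit in a child shell and read it back there;
# the parent shell's own limit is untouched.
sh -c 'ulimit -c 1024; ulimit -c'   # prints 1024
ulimit -c                           # parent's limit is unchanged
```

This scoping is why init scripts set ulimit values immediately before exec'ing the daemon they manage.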
#### Working with log files

On a Linux system, you should find all the system log files are in the /var/log directory.

The first place you should look if you were experiencing problems with a running system is the system "messages" logfile.

You can use the tail command to see the last few entries:

```
$ tail /var/log/messages
```

It's sometimes useful to keep the log scrolling in a window as entries are added; you can use tail's -f (follow) flag to achieve this:

```
$ tail -f /var/log/messages
```

Other files of interest in /var/log:

1. auth.log -- log messages relating to system authentication
2. daemon.log -- log messages relating to daemons running on the system
3. debug -- debug-level messages
4. syslog -- system-level log messages
5. kern.log -- kernel messages

The process which writes to these logfiles is called "syslogd", and its behavior is configured by /etc/syslog.conf.

Each log message has a facility (kern, mail, news, daemon) and a severity (debug, info, warn, err, crit). The syslog.conf file uses these to determine where to send the messages.
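Those facility.severity pairs map to destinations, one rule per line. A hedged sketch of what /etc/syslog.conf entries look like (the file paths are typical but vary by distribution):

```
# facility.severity   destination
kern.*                /var/log/kern.log
mail.err              /var/log/mail.err
*.emerg               *        # emergencies are written to all logged-in users
```

A severity selects that level and everything more severe, so mail.err also catches crit and emerg messages from the mail facility.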

As machines continue to run over time, these files can obviously become quite large. Rather than the system administrator having to manually trim them, there is a utility called "logrotate".

This utility can be configured to rotate (backup and compress) old log files and make way for new ones. It can also be configured to only store a certain amount of log files, making sure that you keep your disk space free.

The file which controls this behavior is /etc/logrotate.conf. See the logrotate(8) man page for details on the syntax of this file.
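A logrotate stanza implementing the behavior described above might look like this sketch (the real defaults live in /etc/logrotate.conf and per-package files under /etc/logrotate.d/):

```
/var/log/messages {
    weekly          # rotate once a week
    rotate 4        # keep four old copies, then delete the oldest
    compress        # gzip the rotated files
}
```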

## May 9, 2008

### GetOptions

http://www.perl.com/doc/manual/html/lib/Getopt/Long.html#EXAMPLES

GetOptions is called with a list of option-descriptions, each of which consists of two elements: the option specifier and the option linkage. The option specifier defines the name of the option and, optionally, the value it can take. The option linkage is usually a reference to a variable that will be set when the option is used. For example, the following call to GetOptions:

```perl
GetOptions("size=i" => \$offset);
```

will accept a command line option "size" that must have an integer value. With a command line of "--size 24" this will cause the variable $offset to get the value 24.

Alternatively, the first argument to GetOptions may be a reference to a HASH describing the linkage for the options, or an object whose class is based on a HASH. The following call is equivalent to the example above:

```perl
%optctl = ("size" => \$offset);
GetOptions(\%optctl, "size=i");
```

For the other options, the values for argument specifiers are:

!   Option does not take an argument and may be negated, i.e. prefixed by "no". E.g. "foo!" will allow --foo (with value 1) and -nofoo (with value 0). The option variable will be set to 1, or 0 if negated.

+   Option does not take an argument and will be incremented by 1 every time it appears on the command line. E.g. "more+", when used with --more --more --more, will set the option variable to 3 (provided it was 0 or undefined at first). The + specifier is ignored if the option destination is not a SCALAR.

=s  Option takes a mandatory string argument. This string will be assigned to the option variable. Note that even if the string argument starts with - or --, it will not be considered an option on itself.

:s  Option takes an optional string argument. This string will be assigned to the option variable. If omitted, it will be assigned '' (an empty string). If the string argument starts with - or --, it will be considered an option on itself.

=i  Option takes a mandatory integer argument. This value will be assigned to the option variable. Note that the value may start with - to indicate a negative value.

:i  Option takes an optional integer argument. This value will be assigned to the option variable. If omitted, the value 0 will be assigned. Note that the value may start with - to indicate a negative value.

=f  Option takes a mandatory real number argument. This value will be assigned to the option variable. Note that the value may start with - to indicate a negative value.

:f  Option takes an optional real number argument. This value will be assigned to the option variable. If omitted, the value 0 will be assigned.

A lone dash - is considered an option; the corresponding option name is the empty string. A double dash on itself -- signals end of the options list.
## May 6, 2008

### Cat examples

http://www.uwm.edu/cgi-bin/IMT/wwwman?topic=cat(1)&msection=

EXAMPLES

1. To display the file notes, enter:

       cat notes

   If the file is longer than one screenful, it scrolls by too quickly to read. To display a file one page at a time, use the more command.

2. To concatenate several files, enter:

       cat section1.1 section1.2 section1.3 > section1

   This creates a file named section1 that is a copy of section1.1 followed by section1.2 and section1.3.

3. To suppress error messages about files that do not exist, enter:

       cat -s section2.1 section2.2 section2.3 > section2

   If section2.1 does not exist, this command concatenates section2.2 and section2.3. Note that the message goes to standard error, so it does not appear in the output file. The result is the same if you do not use the -s option, except that cat displays the error message:

       cat: cannot open section2.1

   You may want to suppress this message with the -s option when you use the cat command in shell procedures.

4. To append one file to the end of another, enter:

       cat section1.4 >> section1

   The >> in this command specifies that a copy of section1.4 be added to the end of section1. If you want to replace the file, use a single > symbol.

5. To add text to the end of a file, enter:

       cat >> notes
       Get milk on the way home

   "Get milk on the way home" is added to the end of notes. With this syntax, the cat command does not display a prompt; it waits for you to enter text. Press the End-of-File key sequence (Ctrl-d above) to indicate you are finished.

6. To concatenate several files with text entered from the keyboard, enter:

       cat section3.1 - section3.3 > section3

   This concatenates section3.1, text from the keyboard, and section3.3 to create the file section3.

7. To concatenate several files with output from another command, enter:

       ls | cat section4.1 - > section4

   This copies section4.1, and then the output of the ls command, to the file section4.

8. To get two pieces of input from the terminal (when standard input is a terminal) with a single command invocation, enter:

       cat start - middle - end > file1

   If standard input is a regular file, however, the preceding command is equivalent to the following:

       cat start - middle /dev/null end > file1

   This is because the entire contents of the file would be consumed by cat the first time it saw - (dash) as a file argument. An End-of-File condition would then be detected immediately when - (dash) appeared the second time.

## Apr 29, 2008

### why snort content doesn't work

Posted by PanamaJax on November 17, 2005 20:47:23

win32 based... running in packet mode works fine.

In IDS mode:

    alert tcp any any -> any any (msg:"TCP traffic";)

works fine, but:

    alert tcp any any -> any any (flow: to_server, established; content: "test"; msg: "Saw Test";)

does absolutely nothing when 'test' is sent by any method: Messenger service, http, telnet, ftp, etc. Based on the default virus.rules it should alert on an email being sent with a .vbs attachment, but it doesn't, through pop or exchange. Everything seems to work fine, just not with a content statement. I've tried every permutation I can come up with (in/outbound, stateless, etc.) and I don't get it.

If a user were to type 'google' in the web browser and there was an active rule:

    alert tcp any any -> any any (flow: to_server, established; content: "google"; nocase; msg: "Saw google";)

shouldn't that trigger the alert?

Posted by brevizniak on November 26, 2005 18:36:50

Yes, it should work fine. There are some things to check:

- you actually see "test" in the traffic when running with -dve
- you can see both sides of the connection
- the flow preprocessor is enabled
- the stream4 preprocessor is enabled
- you are not testing on the machine you are snorting from: a lot of systems have cards that compute a checksum in hardware, so if you test and sniff on the same machine, snort may ignore the traffic because of a bad checksum.
### Bro supported notices

Builtin Policy Files

A Bro policy script is the basic analyzer used by Bro to determine what network events are alarm-worthy. A policy can also specify what actions to take and how to report activities, as well as determine what activities to scrutinize. Bro uses policies to determine what activities to classify as "hot", or questionable in intent. These hot network sessions can then be flagged, watched, or responded to via other policies or applications as necessary, such as calling rst to reset a connection on the local side, or adding an IP address block to a main router's ACL (Access Control List).

The policy files use the Bro scripting language, which is discussed in great detail in the Reference Manual. Policy files are loaded using an @load command. The semantics of @load are "load in this script if it hasn't already been loaded", so there is no harm in loading something in multiple policy scripts.

The following policy scripts are included with Bro. The first set are all on by default, and the second group can be added by adding them to your site/brohost.bro policy file. Bro analyzers are described in detail in the Reference Manual.
These policy files are loaded by default:

Policy Description
site defines local and neighbor networks from static config
alarm open logging file for alarm events
tcp initialize BPF filter for SYN/FIN/RST TCP packets
login rlogin/telnet analyzer (or to ensure they are disabled)
weird initialize generic mechanism for detecting unusual events
conn access and record connection events
hot defines certain forms of sensitive access
frag process TCP fragments
print-resources on exit, print resource usage information, useful for tuning
signatures the signature policy engine
scan generic scan detection mechanism
trw additional, more sensitive scan detection
http general http analyzer, low level of detail
http-request detailed analysis of http requests
http-reply detailed analysis of http replies
ftp FTP analysis
portmapper record and analyze RPC portmapper requests
smtp record and analyze email traffic
tftp identify and log TFTP sessions
worm flag HTTP-based worm sources such as Code Red
software track software versions; required for some signature matching
blaster looks for blaster worm
synflood looks for synflood attacks
stepping used to detect when someone logs into your site from an external net, and then soon logs into another site
reduce-memory sets shorter timeouts for saving state, thus saving memory; if your Bro is using < 50% of your RAM, try not loading this

These are not loaded by default:

Policy Description (why off by default)
drop include if site has ability to drop hostile remotes (turn on if needed)
icmp icmp analysis (CPU intensive and low payoff)
dns DNS analysis (CPU intensive and low payoff)
ident ident program analyzer (historical, no longer interesting)
gnutella looks for hosts running Gnutella (turn this on if you want to know about this)
ssl ssl analyzer (still experimental)
ssh-stepping detects stepping stones where both incoming and outgoing connections are ssh (possibly too CPU intensive; needs more testing)
analy performs statistical analysis (only used in off-line analysis)
backdoor looks for backdoors (only effective when also capturing bulk traffic)
passwords looks for clear-text passwords (may want to turn on if your site does not allow clear-text passwords)
file-flush causes all log files to be flushed every N seconds (may want to turn on if you are doing "real time" analysis)

To modify which analyzers are loaded, edit or create a file in $BROHOME/site. If you write your own new custom analyzer, it goes in this directory too. To disable an analyzer, add "@unload policy.bro" to the beginning of the file $BROHOME/site/brohost.bro, before the line "@load brolite.bro". To add additional analyzers, @load them in $BROHOME/site/brohost.bro.
#### Notices

The primary output facility in Bro is called a Notice. The Bro distribution includes a number of standard Notices, listed below. The table contains the name of the Notice, the Bro policy file that generates it, and a short description of what the Notice is about.
Notice Policy Description
AckAboveHole weird Could mean packet drop; could also be a faulty TCP implementation
AddressDropIgnored scan A request to drop connectivity has been ignored (scan detected, but one of these flags is true: !can_drop_connectivity, never_shut_down, or never_drop_nets)
BackscatterSeen scan Apparent flooding backscatter seen from source
ClearToEncrypted_SS stepping A stepping stone was seen in which the first part of the chain is a clear-text connection but the second part is encrypted. This often means that a password or passphrase has been exposed in the clear, and may also mean that the user has an incomplete notion that their connection is protected from eavesdropping.
ContentGap weird Data has sequence hole; perhaps due to filtering
CountSignature signatures Signature has triggered multiple times for a destination
DNS::DNS_MappingChanged dns Some sort of change with respect to a previous Bro lookup
DNS::DNS_PTR_Scan dns Summary of a set of PTR lookups (automatically generated once/day when dns policy is loaded)
DroppedPackets netstats Number of packets dropped as reported by the packet filter
FTP::FTP_ExcessiveFilename ftp Very long filename seen
FTP::FTP_PrivPort ftp Privileged port used in PORT/PASV
FTP::FTP_Sensitive ftp Sensitive connection (as defined in hot)
FTP::FTP_UnexpectedConn ftp FTP data transfer from unexpected src
HotEmailRecipient smtp FIXME: need example; default = NULL
ICMP::ICMPConnectionPair icmp Too many ICMPs between hosts (default = 200)
IdentSensitiveID ident Sensitive username in Ident lookup
LocalWorm worm Worm seen in local host (searches for code red 1, code red 2, nimda, slammer)
MultipleSigResponders signatures host has triggered the same signature on multiple responders
MultipleSignatures signatures host has triggered many signatures
OutboundTFTP tftp outbound TFTP seen
PortScan scan the source has scanned a number of ports
RemoteWorm worm worm seen in remote host
ResolverInconsistency dns the answer returned by a DNS server differs from one previously returned
ResourceSummary print-resources prints Bro resource usage
RetransmissionInconsistency weird possible evasion; usually just bad TCP implementation
SSL_SessConIncon ssl session data not consistent with connection
SSL_X509Violation ssl blanket X509 error
ScanSummary scan a summary of scanning activity, output once / day
SensitiveConnection conn connection marked "hot"; see the Reference Manual section on hot IDs for more information
SensitiveDNS_Lookup dns DNS lookup of sensitive hostname/addr; default list of sensitive hosts = NULL
SensitivePortmapperAccess portmapper the given combination of the service looked up via the portmapper, the host requesting the lookup, and the host from which it's requesting it is deemed sensitive
SensitiveSignature signatures generic notice for an alarm-worthy signature match
SignatureSummary signatures summarize number of times a host triggered a signature (default = 1/day)
SynFloodEnd synflood end of SYN flood against a certain victim. A SYN flood is defined as more than SYNFLOOD_THRESHOLD (default = 15000) new connections reported within the last SYNFLOOD_INTERVAL (default = 60 seconds) for a certain IP.
SynFloodStart synflood start of syn-flood against a certain victim
SynFloodStatus synflood report of ongoing syn-flood
TRWAddressScan trw source flagged as scanner by TRW algorithm
TRWScanSummary trw summary of scanning activities reported by TRW
TerminatingConnection conn "rst" command sent to connection origin, connection terminated; triggered in the following policies: ftp and login (forbidden user id), hot (connection from host with spoofed IP address)
W32B_SourceLocal blaster report a local W32.Blaster-infected host
W32B_SourceRemote blaster report a remote W32.Blaster-infected host
WeirdActivity weird generic unusual, alarm-worthy activity

Note that some of the Notice names start with "ModuleName::" (e.g., FTP::FTP_BadPort) and some do not. This is because not all of the Bro analyzers have been converted to use the Modules facility yet. Eventually all Notices will start with "ModuleName::".

To get a list of all Notices that your particular Bro configuration might generate, you can type:

sh . $BROHOME/etc/bro.cfg; bro -z notice $BRO_HOSTNAME.bro

## Apr 28, 2008

### Using a 'OR' condition in Signature payloads


On Tue, Oct 31, 2006 at 00:32 -0800, Vern Paxson wrote:

> I believe what's going on is that "payload" is matching the TCP *byte-stream*
> rather than individual packets. As such, there's just one match to the
> pattern, since the .*'s eat up everything else in the byte-stream.

That's right.

> There's an option to just match packet payloads, but I don't recall what
> it is.

No, there is no option (UDP is matched packet-wise but even for UDP
Bro reports each signature-match only once per UDP flow).
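For reference, the usual way to get 'OR' behavior in a payload condition is regular-expression alternation inside a single pattern. A sketch of a signature file entry (the signature name and the matched strings are made up for illustration):

```
signature or-example {
  ip-proto == tcp
  dst-port == 80
  # one regexp with alternation; matched against the reassembled byte-stream
  payload /.*(cmd\.exe|root\.exe).*/
  event "payload matched one of the alternatives"
}
```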

Robin

### How does Bro capture the traffic of ftp data connection

On Thu, Mar 15, 2007 at 12:01 +0800, you wrote:

> So how does it dynamically add the filter string to capture the
> temporary traffic?

It doesn't. Dynamically changing the BPF filter is too expensive as
it would need to be recompiled every time (and the filter would
quickly get huge).

If you want Bro to analyze the content of ftp-data sessions, you
need to manually override the pcap filter to include all packets,
e.g., by running with "-f tcp".

Robin

--
Robin Sommer * Phone +1 (510) 931-5555 * robin at icir.org
LBNL/ICSI * Fax +1 (510) 666-2956 * www.icir.org

### Find Man

List all files that belong to the user Simon:

find . -user Simon

List all the directory and sub-directory names:

find . -type d

List all files in those sub-directories (but not the directory names):

find . -type f

List all symbolic links:

find . -type l

List all files (and subdirectories) in your home directory:

find ~

### End-Of-File (EOF) when reading from cin


When a program is reading from a disk file, the system "knows" when it gets to the end. This condition is called End-Of-File (EOF). All systems also provide some way of indicating an EOF when reading from the keyboard. This varies from system to system.

Dev-C++
Type: Enter Control-z Enter
MS Visual C++
Type: Enter Control-z Enter Enter
Reportedly there is a Microsoft patch that can be applied so that only one Enter is required after the Control-z. I wouldn't bother.
Other systems
Some may use other characters: control-D then Enter, or control-D followed by a control-Z, or ... .

You can just provide bad data to make cin fail in many cases. A student once claimed that typing "EOF" was the way to indicate an end-of-file from the console. Yes, it stops reading (because of an error) if you're reading numbers, but not when reading characters or strings!
Resetting after EOF

Although it doesn't make sense to read after an EOF on a file, it is reasonable to read again from the console after an EOF has been entered. The clear function allows this.

while (cin >> x) {
    // process x; the loop ends at EOF or on bad input
}
cin.clear();   // reset the stream's fail state so reading can resume

cin >> n;
...
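The reset can be demonstrated without a keyboard by substituting a std::istringstream for cin (the helper name readIntsWithReset is just for illustration):

```cpp
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// Read ints until extraction fails (EOF or a non-numeric token),
// then clear() the fail state and read the offending token as a string.
std::vector<int> readIntsWithReset(std::istream& in, std::string& leftover) {
    std::vector<int> nums;
    int x;
    while (in >> x)      // stops at the first token that is not an int
        nums.push_back(x);
    in.clear();          // reset the fail state, exactly as with cin
    in >> leftover;      // reading can now continue
    return nums;
}
```

With the input "1 2 3 end", the function returns {1, 2, 3} and leftover becomes "end"; with cin the same clear-then-read pattern resumes reading after the user's EOF.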

### Find the number of set bits in a given integer

Q: Find the number of set bits in a given integer

Sol: Parallel Counting: MIT HAKMEM Count

HAKMEM (Hacks Memo) is a legendary collection of neat mathematical and programming hacks contributed mostly by people at MIT and some elsewhere. This one comes from the MIT AI Lab, and this brilliant piece of code, originally in assembly, was probably conceived in the late 70's.

int BitCount(unsigned int u)
{
    unsigned int uCount;

    uCount = u
             - ((u >> 1) & 033333333333)
             - ((u >> 2) & 011111111111);
    return ((uCount + (uCount >> 3))
            & 030707070707) % 63;
}

Let's take a look at the theory behind this idea.

Take a 32-bit number n: n = a31*2^31 + a30*2^30 + ... + ak*2^k + ... + a1*2 + a0.

Here a0 through a31 are the values of the bits (0 or 1) in a 32-bit number. Since the problem at hand is to count the number of set bits, simply summing up these coefficients yields the solution: (a0 + a1 + ... + a31).

How do we do this programmatically?

Take the original number n and store it in the count variable: count = n;

Shift the original number 1 bit to the right and subtract it from the original: count = n - (n>>1);

Now shift the original number 2 bits to the right and subtract it from count: count = n - (n>>1) - (n>>2);

Keep doing this until you reach the end: count = n - (n>>1) - (n>>2) - ... - (n>>31);

Let's analyze what count holds now:

n       = a31*2^31 + a30*2^30 + ... + ak*2^k + ... + a1*2 + a0
n >> 1  = a31*2^30 + a30*2^29 + ... + ak*2^(k-1) + ... + a1
n >> 2  = a31*2^29 + a30*2^28 + ... + ak*2^(k-2) + ... + a2
...
n >> k  = a31*2^(31-k) + a30*2^(30-k) + ... + ak
...
n >> 31 = a31

You can quickly see that (hint: 2^k - 2^(k-1) = 2^(k-1)):

count = n - (n>>1) - (n>>2) - ... - (n>>31) = a31 + a30 + ... + a0

which is what we are looking for.

int BitCount(unsigned int u)
{
    unsigned int uCount = u;
    do
    {
        u = u >> 1;
        uCount -= u;
    } while (u);
    return uCount;
}

This certainly is an interesting way to solve the problem. But how do you make it brilliant? Run it in constant time with constant memory!

int BitCount(unsigned int u)
{
    unsigned int uCount;

    uCount = u
             - ((u >> 1) & 033333333333)
             - ((u >> 2) & 011111111111);
    return ((uCount + (uCount >> 3))
            & 030707070707) % 63;
}

For those of you who are still wondering what's going on: it uses the same idea, but instead of looping over the entire number, it sums the bits in blocks of 3 (octal digits) and counts the blocks in parallel.

After the statement uCount = u - ((u >> 1) & 033333333333) - ((u >> 2) & 011111111111); uCount holds the sum of the bits of each octal block, spread out through itself.

So if you take a block of 3 bits:

u = a2*2^2 + a1*2 + a0; u>>1 = a2*2 + a1; u>>2 = a2;

then u - (u>>1) - (u>>2) = a2 + a1 + a0, which is the sum of the bits in that block of 3.

The next step is to gather all these block sums and add them up:

(uCount + (uCount >> 3)) re-arranges them into blocks of 6 bits, so that every other block of 3 holds the sum of its own bits plus those of the preceding block of 3 (the only exception is the first block). The idea is not to spill bits over into adjacent blocks while summing. How is that possible? The maximum number of set bits in a block of 3 is 3, and in a block of 6 is 6, while 3 bits can represent values up to 7, so the sums never overflow. To mask out the junk left over from the uCount >> 3, AND with 030707070707; again the first block is the exception just mentioned.

What does ((uCount + (uCount >> 3)) & 030707070707) hold now? It is sum0 + sum1*2^6 + sum2*2^12 + sum3*2^18 + sum4*2^24 + sum5*2^30, where sumk is the number of set bits in the k-th block of 6 bits, counting from the low end. What we need is sum0 + sum1 + sum2 + sum3 + sum4 + sum5. Since 2^6 = 64 ≡ 1 (mod 63), taking the whole thing modulo 63 collapses the blocks into exactly that sum, which is why the code ends with % 63.
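As a quick sanity check (a sketch, not part of the original memo), the HAKMEM version can be compared against the obvious one-bit-at-a-time loop:

```cpp
// HAKMEM 169 parallel popcount for 32-bit integers, as above
int BitCount(unsigned int u) {
    unsigned int uCount = u
                        - ((u >> 1) & 033333333333)
                        - ((u >> 2) & 011111111111);
    return ((uCount + (uCount >> 3)) & 030707070707) % 63;
}

// Naive popcount: inspect one bit per iteration
int NaiveCount(unsigned int u) {
    int n = 0;
    for (; u != 0; u >>= 1)
        n += (int)(u & 1u);
    return n;
}
```

The two functions agree for every 32-bit input; for example, BitCount(0xFFFFFFFF) is 32.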

## Feb 9, 2008

### The difference between the heap and the stack

Posted by solost on October 9, 2004

1. Stack area (stack) — allocated and freed automatically by the compiler; holds function parameter values, local variables, and so on. It operates like the stack data structure.
2. Heap area (heap) — generally allocated and freed by the programmer; if the programmer does not free it, it may be reclaimed by the OS when the program ends. Note that this has nothing to do with the heap data structure; its allocation actually works more like a linked list.
3. Global (static) area (static) — global and static variables are stored together: initialized globals and statics in one region, and uninitialized globals and uninitialized statics in an adjacent region. Freed by the system when the program ends.
4. Literal constant area — constant strings are stored here. Freed by the system when the program ends.
5. Program code area — holds the binary code of the function bodies.

//main.cpp
int a = 0;                 // global, initialized area
char *p1;                  // global, uninitialized area
main()
{
    int b;                 // stack
    char s[] = "abc";      // stack
    char *p2;              // stack
    char *p3 = "123456";   // "123456\0" is in the constant area; p3 itself is on the stack
    static int c = 0;      // global (static) initialized area
    p1 = (char *)malloc(10);
    p2 = (char *)malloc(20);

    strcpy(p1, "123456");  // "123456\0" is in the constant area; the compiler may merge it
                           // with the "123456" that p3 points to
}

2.1 How they are allocated
stack:

heap:

2.2 How the system responds to an allocation

2.3 Limits on allocation size

2.4 Comparison of allocation efficiency:

2.5 What the heap and the stack store

2.6 Comparison of access efficiency
char s1[] = "aaaaaaaaaaaaaaa";
char *s2 = "bbbbbbbbbbbbbbbbb";
The aaaaaaaaaaa string is assigned (copied onto the stack) at run time;

#include
void main()
{
char a = 1;
char c[] = "1234567890";
char *p ="1234567890";
a = c[1];
a = p[1];
return;
}

10: a = c[1];
00401067 8A 4D F1 mov cl,byte ptr [ebp-0Fh]
0040106A 88 4D FC mov byte ptr [ebp-4],cl
11: a = p[1];
0040106D 8B 55 EC mov edx,dword ptr [ebp-14h]
00401070 8A 42 01 mov al,byte ptr [edx+1]
00401073 88 45 FC mov byte ptr [ebp-4],al

2.7 Summary:

## Feb 5, 2008

### Differences between Java and C++

Java and C++ are both object-oriented languages; that is, both can express the object-oriented ideas of encapsulation, inheritance, and polymorphism. C++, however, carries extra baggage because it had to accommodate the large number of existing C users.

Java and C++ have more similarities than differences, but a few major differences between the two languages make Java easier to learn and its programming environment simpler.

1. Pointers

Java gives the programmer no way to use pointers to access memory directly, and it adds automatic memory management, effectively preventing the pointer mistakes of C/C++, such as wild pointers crashing the system. That is not to say Java has no pointers: the virtual machine still uses them internally; they are simply off-limits to the outside, which benefits the safety of Java programs.

2. Multiple inheritance

C++ supports multiple inheritance, a feature that lets a class be derived from several parent classes. Although multiple inheritance is powerful, it is complicated to use, causes plenty of trouble, and is not easy for a compiler to implement. Java does not support multiple inheritance, but it lets a class implement several interfaces (extends + implements), achieving what C++'s multiple inheritance does while avoiding the many inconveniences of the C++ approach.

3. Data types and classes

Java is a fully object-oriented language: all functions and variables must be part of a class. Apart from the primitive data types, everything else is treated as a class object, including arrays. Objects combine data and methods, encapsulating them in classes so that each object can realize its own state and behavior. C++, by contrast, allows functions and variables to be defined globally. Java also drops C/C++'s struct and union, eliminating unnecessary complications.

4. Automatic memory management

5. Operator overloading

Java does not support operator overloading. Operator overloading is considered one of C++'s standout features; although Java classes can mostly achieve the same functionality, much of the convenience of operator overloading is lost. Java omits it to keep the language as simple as possible.

6. Preprocessing

Java has no preprocessor. Compiling C/C++ includes a preprocessing stage: the well-known preprocessor. It is convenient for developers but adds complexity to compilation. The JVM has no preprocessor; Java's import statement plays a role similar to the C++ preprocessor's include mechanism.

7. Java does not support default function arguments, while C++ does

Java has no free-standing functions. As a purer object-oriented language than C++, Java forces developers to put every routine inside a class; in fact, implementing routines as methods encourages developers to organize their code better.

8. Strings

C and C++ have no string variables; in C and C++ programs a null terminator marks the end of a string. In Java, strings are implemented as class objects (String and StringBuffer) that are core to the language. Implementing strings as class objects has several advantages:

(1) The way strings are created and their elements accessed is consistent across the whole system;

(2) The Java string classes are defined as part of the Java language proper, not as an add-on extension;

(3) Java strings are checked at run time, which helps eliminate some run-time errors;

(4) Strings can be concatenated with the "+" operator.

9. The goto statement

The "dreaded" goto statement is a relic of C and C++. Although it is technically a legal part of those languages, it makes program structure confusing and hard to understand; goto is mainly used for unconditional jumps and multi-way branches. For these reasons Java provides no goto statement: goto is reserved as a keyword, but its use is not supported, keeping programs concise and readable.

10. Type conversion

11. Exceptions

Java's exception mechanism catches exceptional events and strengthens the system's fault tolerance:

try { /* code that may raise an exception */ } catch (ExceptionType name) { /* handle it */ }

# J2EE

J2EE (Java 2 Enterprise Edition) is a solution for enterprise-level applications built on the Java 2 platform. The foundation of J2EE is the Java 2 platform: it includes all the functionality of the J2SE platform and adds full support for technologies such as EJB, Servlet, JSP, and XML. Its ultimate goal is to be an architecture that supports enterprise application development and simplifies the complex problems of developing, deploying, and managing enterprise solutions. In fact, J2EE has become the industry standard and the platform of choice for enterprise development.

J2EE is not a product but a set of standards. Many products on the market implement J2EE, such as BEA WebLogic, IBM WebSphere, and the open-source JBoss.

J2EE is a standard proposed by Sun, and a product that conforms to it is called an "implementation". Sun's own J2EE development kit contains one such implementation, and JBoss, WebLogic, and WebSphere are likewise implementations of the J2EE standard. Because JBoss, WebLogic, and WebSphere ship with the J2EE APIs themselves, Sun's J2EE implementation is not required to use them.

J2EE is an architecture that uses the Java 2 platform to simplify the complex problems of developing, deploying, and managing enterprise solutions. Its basis is the core Java platform, the Java 2 Standard Edition. J2EE consolidates many of the Standard Edition's strengths, such as the "write once, run anywhere" property, the JDBC API for convenient database access, CORBA technology, and a security model that protects data in Internet applications, and it adds full support for EJB (Enterprise JavaBeans), the Java Servlet API, JSP (Java Server Pages), and XML technologies. Its ultimate purpose is to be an architecture that lets enterprise developers sharply reduce time to market.

The J2EE architecture provides a middle-tier integration framework for applications that need high availability, high reliability, and scalability at modest cost. By providing a unified development platform, J2EE lowers the cost and complexity of developing multi-tier applications while strongly supporting the integration of existing applications: it fully supports Enterprise JavaBeans, has good wizard support for packaging and deploying applications, adds directory support, strengthens the security mechanisms, and improves performance.

J2EE provides a sound mechanism for building business systems that are scalable, flexible, and easy to maintain:

J2EE uses a multi-tier distributed application model. Application logic is divided into components by function, and the components are distributed across different machines according to the tier they belong to. In fact, Sun designed J2EE precisely to fix the drawbacks of the two-tier (client/server) model: there, the client takes on too many roles and becomes bloated; the first deployment is easy, but the system is hard to upgrade or improve, scales poorly, and is usually tied to a proprietary protocol, typically some database protocol, which makes reusing business logic and presentation logic very difficult. J2EE's multi-tier enterprise application model splits those two tiers into many, so that each kind of service can get its own independent tier. The following is J2EE's typical four-tier structure:

J2EE application components
J2EE applications are built from components. A J2EE component is a self-contained functional software unit; components are assembled, together with their related files, into a J2EE application and interact with other components. The J2EE specification defines the following J2EE components:

Java Servlets and JavaServer Pages (JSP) are web-tier components.
Enterprise JavaBeans (EJB) are business-tier components.

A J2EE application can be web-based or traditional.
Web-tier components: J2EE web-tier components can be JSP pages or servlets. Under the J2EE specification, static HTML pages and applets are not web-tier components.

· Servlet

A servlet is the Java platform's CGI technology. Servlets run on the server side and generate web pages dynamically. Compared with traditional CGI and the many CGI-like technologies, Java servlets are more efficient and easier to use. With servlets, repeated requests do not cause the same program to be loaded again and again; concurrency is supported through threads.

· JSP

JSP (Java Server Pages) is a technology for mixing ordinary static HTML with dynamically generated output, which makes it quite similar to Microsoft ASP, PHP, and the like. By formally separating content from presentation, the work of building web pages can conveniently be split between page designers and programmers and then combined through JSP. At run time a JSP is first converted into a servlet and is compiled and run in servlet form, so in efficiency and capability it is no different from a servlet: it is just as efficient.

· EJB

EJB defines a set of reusable components, Enterprise Beans, from which developers can assemble distributed applications like building blocks. When components are assembled, all the Enterprise Beans must be deployed into an EJB server (mainstream J2EE application servers such as WebLogic and WebSphere are EJB servers). The EJB server bridges the container and the underlying platform: it manages the EJB container and gives the container access to system services. All EJB instances run inside an EJB container. The container provides system-level services and controls the EJB life cycle; it handles technical matters such as security, remote connectivity, life-cycle management, and transaction management on the developer's behalf, simplifying the development of business logic. EJB defines three kinds of Enterprise Beans:

◆ Session Beans

◆ Entity Beans

◆ Message-driven Beans

· JDBC

The JDBC (Java Database Connectivity) API is a standard SQL (Structured Query Language) database access interface that lets database developers write database applications using a standard Java API. The JDBC API is mainly used to connect to databases and execute SQL directly; with it you can run ordinary SQL statements, dynamic SQL, and stored procedures with IN and OUT parameters. JDBC in Java corresponds to ODBC (Open Database Connectivity) on the Microsoft platform.

· JMS

JMS (Java Message Service) is a set of Java APIs that provide services for creating, sending, receiving, and reading messages. The JMS API defines a common set of interfaces and corresponding semantics that let Java applications communicate with various message-oriented middleware products, including IBM MQSeries, Microsoft MSMQ, and the pure-Java SonicMQ. Using the JMS API, developers can drive all of these messaging products through one uniform interface without learning each product separately, maximizing the portability of messaging applications. JMS supports both point-to-point and publish/subscribe messaging.

· JNDI

· JTA

JTA (Java Transaction API) provides the standard interface for handling transactions in J2EE; it supports beginning, rolling back, and committing a transaction. A typical J2EE platform also provides a JTS (Java Transaction Service) as the standard transaction-processing service; developers use JTA to drive JTS.

· JCA

JCA (J2EE Connector Architecture) is a part of the J2EE architecture that gives developers an architecture for connecting to all kinds of enterprise information systems (EIS, including ERP, SCM, CRM, and so on). An EIS vendor only needs to build one JCA-based EIS connection adapter, and developers can then connect to and use that EIS from any J2EE application server. Implementing a JCA-based adapter involves J2EE's transaction management, security management, and connection management service components.

· JMX

JMX (Java Management Extensions), formerly JMAPI, tackles the problem of managing distributed systems. JMX is a collection of application programming interfaces, extensible objects, and methods for developing seamlessly integrated system, network, and service management applications across heterogeneous operating systems, system architectures, and network transport protocols. It is a complete development environment for network management applications, providing the full feature list vendors need for data collection, resource inventory tables, a graphical user interface, network APIs for SNMP access, remote procedure calls between hosts, database access methods, and more.

· JAAS

JAAS (Java Authentication and Authorization Service) implements a Java version of the standard Pluggable Authentication Module (PAM) framework. JAAS can authenticate users, reliably and securely determining who is executing Java code, and by authorizing users it can also implement user-based access control.

· JACC

JACC (Java Authorization Service Provider Contract for Containers) defines a connection contract between a J2EE application server and a particular authorization server, so that various authorization servers can be plugged into J2EE products.

· JAX-RPC

· JAXR

JAXR (Java API for XML Registries) provides an API for interacting with several kinds of registry services. JAXR lets clients access Web Services that conform to the JAXR specification, the Web Services here being registry services; generally a registry service itself runs as a Web Service. JAXR supports three registry provider types: JAXR Pluggable Provider, Registry-specific JAXR Provider, and JAXR Bridge Provider (supporting UDDI registries and ebXML Registry/Repository, among others).

· SAAJ

SAAJ (SOAP with Attachments API for Java) is an enhancement of JAX-RPC that provides support for low-level manipulation of SOAP messages.

The J2EE security model lets you configure a web component or enterprise bean so that only authorized users can access system resources. Each client belongs to a particular role, and each role is only allowed to invoke particular methods. You declare the roles and the methods each role may invoke in the enterprise bean's deployment descriptor. Because of this declarative approach, you do not have to write rules that enforce security yourself.

The J2EE transaction management model lets you specify the relationships among the methods that make up a single transaction, so that all the methods in one transaction are treated as a single unit. When a client invokes a method on an enterprise bean, the container steps in to manage the transaction. Because the container manages the transaction, you do not have to code transaction boundaries in the enterprise bean; code that controls distributed transactions can be very complex. Instead of writing and debugging complex code, you simply declare the enterprise bean's transaction attributes in the deployment descriptor file; the container reads that file and handles the bean's transactions for you.

The JNDI lookup service provides a unified interface to the many naming and directory services within an enterprise so that application components can access them.

The J2EE remote client connectivity model manages the low-level interaction between the client and an enterprise bean. Once an enterprise bean has been created, a client can invoke its methods as if the bean lived in the same virtual machine as the client.