LINUX GAZETTE

September 2003, Issue 94       Published by Linux Journal



Linux Gazette Staff and The Answer Gang

TAG Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Ben Okopnik, Dan Wilder, Don Marti

TWDT 1 (gzipped text file)
TWDT 2 (HTML file)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
Linux Gazette[tm], http://www.linuxgazette.com/
This page maintained by the Webmaster of Linux Gazette, webmaster@linuxgazette.com

Copyright © 1996-2003 Specialized Systems Consultants, Inc.

LINUX GAZETTE
...making Linux just a little more fun!
News Bytes
By Michael Conry


Selected and formatted by Michael Conry

Submitters, send your News Bytes items in PLAIN TEXT format. Other formats may be rejected without reading. You have been warned! A one- or two-paragraph summary plus URL gets you a better announcement than an entire press release. Submit items to bytes@linuxgazette.com


 September 2003 Linux Journal

[issue 113 cover image] The September issue of Linux Journal is on newsstands now. This issue focuses on Community Networks. The table of contents and subscription information are available on the Linux Journal website.

All articles in issues 1-102 are available for public reading at http://www.linuxjournal.com/magazine.php. Recent articles are available on-line for subscribers only at http://interactive.linuxjournal.com/.


Legislation and More Legislation


 European Software Patents

On the 1st of September 2003, the European Parliament will hold a vote which may have far-reaching and long-lasting effects on the software industry and community within the European Union. The central issue being addressed in this vote is the patentability of software. In the past, there has been some vagueness in the attitude of the European Patent Office towards the patenting of software. Although official regulations appeared to make software, mathematics, algorithms and business methods essentially unpatentable, working practice in the EPO has been to bypass the legal framework intended to constrain it and to allow such innovations to be patented. The new directive on the patentability of computer-implemented inventions is supposed to be a measure aimed at resolving this confusion by regularising the rules regarding patentability. However, what the EU blurb glosses over is that the provisions in the new directive significantly alter the legislation currently governing software patentability. Rather than vindicating the existing legal situation, the legislation is being recast in the image of the current operations of the EPO. This is strikingly borne out by some research performed by the FFII. The FFII intended to show that the infamous "one click" Amazon.com patent would be acceptable under the proposed new regulations. During the course of these investigations, it emerged that Amazon.com had already been granted a closely related patent covering computerised methods of gift delivery.

Of course, when considering these changes we must ask ourselves whether they might be desirable. Though there are naturally those who support the initiative, there is a very broad constituency that strongly opposes this move towards European software patents. An unscientific measure of the opposition to the software patent proposals can be obtained by doing a search on Google News for the terms "european software patents". The vast majority of headlines are hostile or gloomy regarding the proposal. There is a striking absence of outright support, all the more striking given that this is a search of news outlets rather than personal or lobby-group websites. This scepticism is shared by many economists who fear that the legal changes will lead to a reduction in innovation and cutbacks in R&D expenditure. These fears are felt very acutely among small and medium-sized software companies, who have perhaps the most to lose. Equally, open source developers may be left in a vulnerable position by these proposed changes. As has been seen in the operation of software patents in the United States, the patent system tends to work best for parties with large financial resources, such as multinational corporations. Such deep pockets allow an organisation to acquire a stock of patents, and then to defend the patents through the courts. A well-resourced holder of even a very spurious patent can thus intimidate would-be competitors out of the market simply by virtue of the differences in scale. The only group which will benefit to a greater degree than large corporations is the legal fraternity.

It remains to be seen whether the protests and lobbying organised by anti-patent groups will prove to be effective. Though actions such as closing down websites make an impact online, the real-world effect can be quite small. As was pointed out by the Register, even though open source produces great code, it does not necessarily produce great lobbying. The key for open-source groups elsewhere and in the future is to share information about what works and does not work in the political sphere, and to apply this information in future struggles.


 SCO

Writing an article on the SCO lawsuit(s) is getting steadily more difficult as the volume of material on the subject mounts up. Much of it is simply noise, and it will not be until the case is dropped or reaches court that we will have a chance to properly judge the true nature of SCO's plans. This is especially true given SCO's reluctance to release any of the source code they claim is infringing their "intellectual" property (the words "SCO" and "intellectual" seem more mutually exclusive to me each day). Perhaps to impress investors, SCO did deign to display a couple of samples at their annual reseller show. This was very nice of them and illustrates why they should perhaps release more of the "disputed" code. Analysis done by Linux Weekly News and by Bruce Perens indicated that the code's origins were entirely legal and that it did not infringe on SCO's property. SCO spokesman Blake Stowell's rather pointless response was to show a typically SCO-like disdain for facts and to assert that "at this point it's going to be his [Perens'] word against ours". Unfortunately for Blake, Perens' word is backed up by verifiable documentation and the historical record, not to mention the fact that people who worked on and remember the code are still alive. Meanwhile, SCO's assertions are, at least at this stage, no more than random bleatings.

Reaction to the SCO case has been mostly muted, though it is likely that some more-cautious corporate types are somewhat reluctant to engage further with Open Source and Free Software under the shadow of the court case. Few, though, are likely to be so nervous as to stump up the licence fees requested by SCO. The advice of Australian lawyer John Collins sounds about right:

"If you don't know whether or not you have a valid license because there is uncertainty as to the providence of the software and who actually owns the copyright, then to walk up and drop your pants to the person who is likely to sue you sounds a little counter-intuitive and a bit uncommercial,"

Some have speculated that the true purpose of SCO's actions may be connected to the (mostly positive) effect on its share price these developments have had. An example of these arguments can be found in the writings of Tim Rushing, though ultimately everybody is still speculating. Further analyses can be found at GrokLaw and at sco.iwethey.org, though keeping up with the twists and turns, not to mention the irrational behaviour of SCO execs, is rather taxing on the grey matter.


Linux Links

ActiveState has made freely available the ActiveState Field Guide to Spam. This is a living compilation of advanced tricks used by spammers to hide their messages from spam filters.

Some interesting links from the O'Reilly stable of websites:

Ernie Ball guitar string company dumps Microsoft for Linux after BPA audit.

Linus says SCO is smoking crack.

The Register reported on the launch of Open Groupware.org, an application which claims to complete the OpenOffice productivity software set.

Some links of interest from Linux Today:

Bruce Perens analyzes SCO's code samples in detail.

Debian Weekly News highlighted an article by Ian Murdock arguing that Linux is a process, not a product.


Upcoming conferences and events

Listings courtesy Linux Journal. See LJ's Events page for the latest goings-on.

LinuxWorld UK
September 3-4, 2003
Birmingham, United Kingdom
http://www.linuxworld2003.co.uk

Linux Lunacy
Brought to you by Linux Journal and Geek Cruises!
September 13-20, 2003
Alaska's Inside Passage
http://www.geekcruises.com/home/ll3_home.html

Software Development Conference & Expo
September 15-18, 2003
Boston, MA
http://www.sdexpo.com

PC Expo
September 16-18, 2003
New York, NY
http://www.techxny.com/pcexpo_techxny.cfm

COMDEX Canada
September 16-18, 2003
Toronto, Ontario
http://www.comdex.com/canada/

IDUG 2003 - Europe
October 7-10, 2003
Nice, France
http://www.idug.org

Linux Clusters Institute Workshops
October 13-18, 2003
Montpellier, France
http://www.linuxclustersinstitute.org

Coast Open Source Software Technology (COSST) Symposium
October 18, 2003
Newport Beach, CA
http://cosst.ieee-occs.org

LISA (17th USENIX Systems Administration Conference)
October 26-30, 2003
San Diego, CA
http://www.usenix.org/events/lisa03/

HiverCon 2003
November 6-7, 2003
Dublin, Ireland
http://www.hivercon.com/

COMDEX Fall
November 17-21, 2003
Las Vegas, NV
http://www.comdex.com/fall2003/

Southern California Linux Expo (SCALE)
November 22, 2003
Los Angeles, CA
http://socallinuxexpo.com/

Linux Clusters Institute Workshops
December 8-12, 2003
Albuquerque, NM
http://www.linuxclustersinstitute.org

Storage Expo 2003, co-located with Infosecurity 2003
December 9-11, 2003
New York, NY
http://www.infosecurityevent.com/


News in General


 GNU Server breach

It emerged over the past month that the main file servers of the GNU project were compromised by a malicious cracker in mid-March. Although the breach was only noticed in July, it appears that no source code was tampered with. Nonetheless, it is important that individuals and organisations who may have downloaded from the compromised server verify for themselves that the code they received was intact and untainted. This incident should also bring home to users the importance of keeping up to date with patches and software updates, and also the necessity of having established security procedures and backups in place.


 Alan Cox Sabbatical

Kerneltrap reported that Alan Cox is to take a one-year sabbatical. He plans to spend his year studying for an MBA and learning Welsh.


 GNU/Linux Security Certification

Slashdot recently highlighted the story that IBM has succeeded in getting Linux certified under the Common Criteria specification. This has implications for government bodies considering Linux when making purchasing decisions. The Inquirer reports that this has been a bit of a black eye for Red Hat, whose certification effort is stalled, held up indefinitely by the UK-based testing laboratory Red Hat selected to do the work.


Distro News


 Ark

Tux Reports have taken a look at Ark Linux. This RPM-based distribution aims particularly to provide a comprehensive and useful desktop environment.


 Debian

Debian Weekly News linked to Jan Ivar Pladsen's document which describes how to install Debian GNU/Linux on Indy.


On August 16th, the Debian Project celebrated its 10th birthday. Linux Planet published a Debian 10-year retrospective to mark the occasion.


 Knoppix

Klaus Knopper describes the Philosophy behind Knoppix.


 Libranet

Linuxiran has reviewed Libranet GNU/Linux 2.8. Evidently they were impressed: "Only one word can describe Libranet's installer: 'awesome...'" (Courtesy Linux Today).


 Mepis

As highlighted by DWN, Mepis Linux is a LiveCD distribution derived from Debian GNU/Linux. LinuxOnline has some articles on this distribution: an overview, a full review, and an interview with Mepis creator Warren Woodford.


 SuSE

SGI and SuSE Linux today announced plans to extend the Linux OS to new levels of scalability and performance by offering a fully supported 64-processor system running a fully supported, enterprise-grade Linux operating system. Expected to be available in October, SGI will bundle SuSE Linux Enterprise Server 8 on SGI Altix 3000 servers and superclusters.


Siemens Business Services has decided to use SuSE Linux Enterprise Server 8 to underpin its mySAP HR management system, processing payrolls for more than 170,000 employees worldwide. The open source operating system and the platform independence of the SAP R/3 software enable an easy migration to an open, powerful, and efficient Intel architecture. Linux-based application servers can be operated independently alongside existing Unix-based servers. Thus, the RM systems can continue to run until they are amortized, and can be gradually replaced by Linux servers.


Software and Product News


 Biscom Announces Linux FAXCOM Server

Biscom, a provider of enterprise fax management solutions, has announced the market release of its Linux FAXCOM Server. The new product integrates the reliability and efficiency of the Windows FAXCOM Server with the stability and security of the Linux operating system. Linux FAXCOM Server has been thoroughly tested and is currently available. Linux FAXCOM Server features support for multiple diverse document attachments via on-the-fly document conversion, and up to 96 ports on one fax server. Expanded fax routing destination options for inbound faxes include: fax port, dialed digits, sender's Transmitting Station Identifier (TSID) and Caller ID. Furthermore, if appropriate, the same fax may be routed to multiple destinations, including one or more printers.


 GNU Scientific Library 1.4 released

Version 1.4 of the GNU Scientific Library is now available at:

ftp://ftp.gnu.org/gnu/gsl/gsl-1.4.tar.gz
and from mirrors worldwide (see http://www.gnu.org/order/ftp.html).

The GNU Scientific Library (GSL) is a collection of routines for numerical computing in C. This release is backwards compatible with previous 1.x releases. GSL now includes support for cumulative distribution functions (CDFs) contributed by Jason H. Stover.
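As a quick taste of the new CDF support, here is a minimal sketch in C (the function is declared in GSL's gsl_cdf.h header; link with -lgsl -lgslcblas -lm):

/* cdf-demo.c: lower-tail CDF of the unit Gaussian at x = 1.96 */
#include <stdio.h>
#include <gsl/gsl_cdf.h>

int main(void)
{
	double p = gsl_cdf_ugaussian_P(1.96);	/* P(Z <= 1.96) */
	printf("P(Z <= 1.96) = %g\n", p);	/* roughly 0.975 */
	return 0;
}

Build it with cc cdf-demo.c -lgsl -lgslcblas -lm.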


 Mod_python 3.1.0 Alpha

The Apache Software Foundation and The Apache HTTP Server Project have announced the 3.1.0 ALPHA release of mod_python.

Mod_python 3.1.0a is available for download from: http://httpd.apache.org/modules/python-download.cgi


 Samba

Linux Today has carried the news that Samba-3.0.0 RC2 is now available for download.

 

Mick is LG's News Bytes Editor.

[Picture] Born some time ago in Ireland, Michael is currently working on a PhD thesis in the Department of Mechanical Engineering, University College Dublin. The topic of this work is the use of Lamb waves in nondestructive testing. GNU/Linux has been very useful in this work, and Michael has a strong interest in applying free software solutions to other problems in engineering. When his thesis is completed, Michael plans to take a long walk.


Copyright © 2003, Michael Conry. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 94 of Linux Gazette, September 2003

LINUX GAZETTE
...making Linux just a little more fun!
Ecol
By Javier Malonda

The Ecol comic strip is written for escomposlinux.org (ECOL), the web site that supports es.comp.os.linux, the Spanish USENET newsgroup for Linux. The strips are drawn in Spanish and then translated to English by the author.

These images are scaled down to minimize horizontal scrolling. To see a panel in all its clarity, click on it.

[cartoon]
[cartoon]
[cartoon]

All Ecol cartoons are at tira.escomposlinux.org (Spanish), comic.escomposlinux.org (English) and http://tira.puntbarra.com/ (Catalan). The Catalan version is translated by the people who run the site; only a few episodes are currently available.

These cartoons are copyright Javier Malonda. They may be copied, linked or distributed by any means. However, you may not distribute modifications. If you link to a cartoon, please notify Javier, who would appreciate hearing from you.

 


Copyright © 2003, Javier Malonda. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 94 of Linux Gazette, September 2003

LINUX GAZETTE
...making Linux just a little more fun!
From C To Assembly Language
By Hiran Ramankutty

1. Overview

What is a microcomputer system made up of? A microcomputer system is made up of a microprocessor unit (MPU), a bus system, a memory subsystem, an I/O subsystem and an interface among all these components. That is the typical answer one can expect.

This is only the hardware side. Every microcomputer system requires software to direct each of the hardware components while they perform their respective tasks. Computer software can be divided into the system side (system software) and the user side (user software).

The user software may include some built-in libraries and user-created libraries in the form of subroutines which may be needed in preparing programs for execution.

The system software may encompass a variety of high-level language translators, an assembler, a text editor, and several other programs for aiding in the preparation of other programs. We already know that there are three levels of programming: machine language, assembly language and high-level language.

Machine language programs are programs that the computer can understand and execute directly (think of programming in any microprocessor kit). Assembly language instructions match machine language instructions on a more or less one-for-one basis, but are written using character strings so that they are more easily understood. High-level language instructions are much closer to the English language and are structured so that they naturally correspond to the way programmers think. Ultimately, an assembly language or high-level language program must be converted into machine language by programs called translators, referred to as assemblers and compilers or interpreters respectively.

Compilers for high-level languages like C/C++ have the ability to translate high-level language into assembly code. The -S option of the GNU C and C++ compiler will generate assembly code equivalent to that of the corresponding source program. Knowing how the most rudimentary constructs like loops, function calls and variable declarations are mapped into assembly language is one way to achieve the goal of mastering C internals. Before proceeding further, make sure you are familiar with computer architecture and Intel x86 assembly language, as that will help you follow the material presented here.

2. Getting Started

To begin with, write a small program in C to print hello world and compile it with the -S option. The output is assembler code for the input file specified. By default, GCC makes the assembler file name by replacing the suffix `.c' with `.s'. Try to interpret the few lines at the end of the assembler file.
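For instance, starting from a minimal hello world (any small C file will do):

/* hello.c */
#include <stdio.h>

int main(void)
{
	printf("hello world\n");
	return 0;
}

the command cc -S hello.c produces hello.s, which you can open in any text editor.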

The 80386 and above family of processors have myriad registers, instructions and addressing modes. A basic knowledge of only a few simple instructions is sufficient to understand the code generated by the GNU compiler.

Generally, any assembly language instruction includes a label, a mnemonic, and operands. An operand's notation is sufficient to decipher the operand's addressing mode. The mnemonics operate on the information contained in the operands. In fact, assembly language instructions operate on registers and memory locations. The 80386 family has general purpose registers (32 bit) called eax, ebx, ecx etc. Two registers, ebp and esp are used for manipulating the stack. A typical instruction, written in GNU Assembler (GAS) syntax, would look like this:

movl $10, %eax

This instruction stores the value 10 in the eax register. The prefix `%' to the register name and `$' to the immediate value are essential assembler syntax. It is to be noted that not all assemblers follow the same syntax.

Our first assembly language program, stored in a file named first.s is shown in Listing 1.

#Listing 1
.globl main
main:
  movl $20, %eax
  ret

This file can be assembled and linked to generate an a.out by giving the command cc first.s. The `.s' extension is identified by the GNU compiler front end cc as an assembly language file; cc then invokes the assembler and linker, skipping the compilation phase.

The first line of the program is a comment. The .globl assembler directive serves to make the symbol main visible to the linker. This is vital as your program will be linked with the C startup library which will contain a call to main. The linker will complain about 'undefined reference to symbol main' if that line is omitted (try it). The program simply stores the value 20 in register eax and returns to the caller.

3. Arithmetic, Comparison, Looping

Our next program is Listing 2 which computes the factorial of a number stored in eax. The factorial is stored in ebx.

#Listing 2
.globl main
main: 
	movl $5, %eax
	movl $1, %ebx
L1:	cmpl $0, %eax		//compare 0 with value in eax
	je L2			//jump to L2 if 0==eax (je - jump if equal)
	imull %eax, %ebx	// ebx = ebx*eax
	decl %eax		//decrement eax
	jmp L1			// unconditional jump to L1
L2: 	ret

L1 and L2 are labels. When control flow reaches L2, ebx will contain the factorial of the number initially stored in eax.
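For reference, a rough C equivalent of Listing 2 (my own sketch, not compiler output; the variable names mirror the registers) is:

int main(void)
{
	int eax = 5;			/* the number */
	int ebx = 1;			/* the factorial */
	while (eax != 0) {		/* cmpl / je */
		ebx = ebx * eax;	/* imull */
		eax = eax - 1;		/* decl */
	}				/* jmp */
	return 0;
}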

4. Subroutines

When implementing complicated programs, we split the tasks to be solved in a systematic order. We write subroutines and functions for each of the tasks, which are called whenever required. Listing 3 illustrates subroutine call and return in assembly language programs.

#Listing 3
.globl main
main:
	movl $10, %eax
	call foo
	ret
foo:
	addl $5, %eax
	ret

The instruction call transfers control to subroutine foo. The ret instruction in foo transfers control back to the instruction after the call in main.

Generally, each function defines the scope of variables it uses in each call of the routine. To maintain the scopes of variables you need space. The stack can be used to maintain values of the variables in each call of the routine. It is important to know the basics of how the activation records can be maintained for repeated, recursive calls or any other possible calls in the execution of the program. Knowing how to manipulate registers like esp and ebp and making use of instructions like push and pop which operate on the stack are central to understanding the subroutine call and return mechanism.

5. Using The Stack

A section of your program's memory is reserved for use as a stack. The Intel 80386 and above microprocessors contain a register called the stack pointer, esp, which stores the address of the top of stack. Figure 1 below shows three integer values, 49, 30 and 72, stored on the stack (each integer occupying four bytes) with the esp register holding the address of the top of stack.

Figure 1

Unlike a pile of bricks, which grows upwards, the stack on Intel machines grows downwards. Figure 2 shows the stack layout after the execution of the instruction pushl $15.

Figure 2

The stack pointer register is decremented by four and the number 15 is stored as four bytes at locations 1988, 1989, 1990 and 1991.

The instruction popl %eax copies the value at top of stack (four bytes) to the eax register and increments esp by four. What if you do not want to copy the value at top of stack to any register? You just execute the instruction addl $4, %esp which simply increments the stack pointer.

In Listing 3, the instruction call foo pushes the address of the instruction after the call in the calling program on to the stack and branches to foo. The subroutine ends with ret which transfers control to the instruction whose address is taken from the top of stack. Obviously, the top of stack must contain a valid return address.

6. Allocating Space for Local Variables

It is possible to have a C program manipulating hundreds or thousands of variables. The assembly code for the corresponding C program will give you an idea of how the variables are accommodated and how the registers are used to manipulate the variables without causing any conflicts in the final result.

The registers are few in number and cannot be used for holding all the variables in a program. Local variables are allotted space within the stack. Listing 4 shows how it is done.

#Listing 4
.globl main
main:
	call foo
	ret
foo:
	pushl %ebp
	movl %esp, %ebp
	subl $4, %esp
	movl $10, -4(%ebp)
	movl %ebp, %esp
	popl %ebp
	ret

First, the value of the stack pointer is copied to ebp, the base pointer register. The base pointer is used as a fixed reference to access other locations on the stack. In the program, ebp may be used by the caller of foo also, and hence its value is copied to the stack before it is overwritten with the value of esp. The instruction subl $4, %esp creates enough space (four bytes) to hold an integer by decrementing the stack pointer. In the next line, the value 10 is copied to the four bytes whose address is obtained by subtracting four from the contents of ebp. The instruction movl %ebp, %esp restores the stack pointer to the value it had after executing the first line of foo and popl %ebp restores the base pointer register. The stack pointer now has the same value which it had before executing the first line of foo. The table below displays the contents of registers ebp, esp and stack locations from 3988 to 3999 at the point of entry into main and after the execution of every instruction in Listing 4 (except the return from main). We assume that ebp and esp have values 7000 and 4000 stored in them and stack locations 3988 to 3999 contain some arbitrary values 219986, 1265789 and 86 before the first instruction in main is executed. It is also assumed that the address of the instruction after call foo in main is 30000.

Table 1

7. Parameter Passing and Value Return

The stack can be used for passing parameters to functions. We will follow a convention (which is used by our C compiler) that the value stored by a function in the eax register is taken to be the return value of the function. The calling program passes a parameter to the callee by pushing its value on the stack. Listing 5 demonstrates this with a simple function called sqr.

#Listing 5
.globl main
main:
	movl $12, %ebx
	pushl %ebx
	call sqr
	addl $4, %esp       //adjust esp to its value before the push
	ret
sqr:
	movl 4(%esp), %eax
	imull %eax, %eax    //compute eax * eax, store result in eax 
	ret

Read the first line of sqr carefully. The calling function pushes the content of ebx on the stack and then executes a call instruction. The call will push the return address on the stack. So inside sqr, the parameter is accessible at an offset of four bytes from the top of stack.

8. Mixing C and Assembler

Listing 6 shows a C program and an assembly language function. The C function is defined in a file called main.c and the assembly language function in sqr.s. You compile and link the files together by typing cc main.c sqr.s.

The reverse is also pretty simple. Listing 7 demonstrates a C function print and its assembly language caller.

#Listing 6
//main.c
main()
{
	int i = sqr(11);
	printf("%d\n",i);
}

//sqr.s
.globl sqr
sqr:
	movl 4(%esp), %eax
	imull %eax, %eax
	ret

#Listing 7
//print.c
print(int i)
{
	printf("%d\n",i);
}

//main.s
.globl main
main:
	movl $123, %eax
	pushl %eax
	call print
	addl $4, %esp
	ret

9. Assembler Output Generated by GNU C

I guess this much reading is sufficient for understanding the assembler output produced by gcc. Listing 8 shows the file add.s generated by gcc -S add.c. Note that add.s has been edited to remove many assembler directives (mostly for alignments and other things of that sort).

#Listing 8
//add.c
int add(int i,int j)
{
	int p = i + j;
	return p;
}

//add.s
.globl add
add:
	pushl %ebp
	movl %esp, %ebp
	subl $4, %esp		//create space for integer p
	movl 8(%ebp),%edx	//8(%ebp) refers to i
	addl 12(%ebp), %edx	//12(%ebp) refers to j
	movl %edx, -4(%ebp)	//-4(%ebp) refers to p
	movl -4(%ebp), %eax	//store return value in eax
	leave			//equivalent to: movl %ebp, %esp; popl %ebp
	ret

The program will make sense upon realizing that the C statement add(10,20) gets translated into the following assembler code:

pushl $20
pushl $10
call add

Note that the second parameter is passed first.

10. Global Variables

Space is created for local variables on the stack by decrementing the stack pointer and the allotted space is reclaimed by simply incrementing the stack pointer. So what is the equivalent GNU C generated code for global variables? Listing 9 provides the answer.

#Listing 9
//glob.c
int foo = 10;
main()
{
	int p = foo;
}

//glob.s
.globl foo
foo:
	.long 10
.globl main
main:
	pushl %ebp
	movl %esp,%ebp
	subl $4,%esp
	movl foo,%eax
	movl %eax,-4(%ebp)
	leave
	ret

The statement foo: .long 10 defines a block of 4 bytes named foo and initializes the block with the value 10. The .globl foo directive makes foo accessible from other files. Now try this out. Change the declaration int foo = 10 to static int foo = 10. See how it is represented in the assembly code. You will notice that the assembler directive .globl is missing. Try this out for different types and qualifiers (double, long, short, const etc.).

11. System Calls

Unless a program just implements some math algorithms in assembly, it will deal with such things as getting input, producing output, and exiting. For this it will need to call on OS services. In fact, programming in assembly language is much the same across different OSes, except where OS services are touched.

There are two common ways of performing a system call in Linux: through the C library (libc) wrapper, or directly.

Libc wrappers are made to protect programs from possible system call convention changes, and to provide a POSIX-compatible interface if the kernel lacks it for some call. However, the UNIX kernel is usually more-or-less POSIX compliant: this means that the syntax of most libc "system calls" exactly matches the syntax of the real kernel system calls (and vice versa). The main drawback of throwing libc away is that one loses several functions that are not just syscall wrappers, such as printf(), malloc() and similar.

System calls in Linux are done through int 0x80. Linux differs from the usual Unix calling convention, and features a "fastcall" convention for system calls. The system function number is passed in eax, and arguments are passed through registers, not the stack. There can be up to six arguments, in ebx, ecx, edx, esi, edi and ebp respectively. If there are more arguments, they are simply passed through a structure in memory, with its address in a register. The result is returned in eax, and the stack is not touched at all.
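As a minimal sketch of this convention (my own example; on i386, syscall number 4 is write and 1 is exit):

#hello-syscall.s - write(1, msg, 6), then _exit(0), with no libc help
.data
msg:	.ascii "Hello\n"
.text
.globl main
main:
	movl $4, %eax		#syscall number for write
	movl $1, %ebx		#file descriptor 1, stdout
	movl $msg, %ecx		#address of the buffer
	movl $6, %edx		#number of bytes
	int $0x80
	movl $1, %eax		#syscall number for exit
	movl $0, %ebx		#exit status
	int $0x80

Assemble and link it with cc hello-syscall.s, as before.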

Consider Listing 10 given below.

#Listing 10
#fork.c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main()
{
	fork();
	printf("Hello\n");
	return 0;
}

Compile this program with the command cc -g fork.c -static. Use the gdb tool and type the command disassemble fork. You can see the assembly code used for fork in the program. The -static is the static linker option of GCC (see man page). You can test this for other system calls and see how the actual functions work.

There have been several attempts to write an up-to-date documentation of the Linux system calls and I am not making this another of them.

12. Inline Assembly Programming

GNU C supports the x86 architecture quite well, and includes the ability to insert assembly code within C programs, such that register allocation can be either specified or left to GCC. Of course, the assembly instructions are architecture-dependent.

The asm instruction allows you to insert assembly instructions into your C or C++ programs. For example the instruction:

asm ("fsin" : "=t" (answer) : "0" (angle));

is an x86-specific way of coding this C statement:

answer = sin(angle);

Notice that, unlike ordinary assembly code instructions, asm statements permit you to specify input and output operands using C syntax. asm statements should not be used indiscriminately, so when should we use them? The two listings below compare a plain C loop with an inline-assembly version of the same bit-position search.

#Listing 11
#Name : bit-pos-loop.c 
#Description : Find bit position using a loop

#include <stdio.h>
#include <stdlib.h>

int main (int argc, char *argv[])
{
	long max = atoi (argv[1]);
	long number;
	long i;
	unsigned position;
	volatile unsigned result;

	for (number = 1; number <= max; ++number) {
		for (i=(number>>1), position=0; i!=0; ++position)
			i >>= 1;
		result = position;
	}
	return 0;
}

#Listing 12
#Name : bit-pos-asm.c
#Description : Find bit position using bsrl

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
	long max = atoi(argv[1]);
	long number;
	unsigned position;
	volatile unsigned result;

	for (number = 1; number <= max; ++number) {
		asm("bsrl %1, %0" : "=r" (position) : "r" (number));
		result = position;
	}
	return 0;
}

Compile the two versions with full optimizations as given below:

$ cc -O2 -o bit-pos-loop bit-pos-loop.c
$ cc -O2 -o bit-pos-asm bit-pos-asm.c

Measure the running time for each version by using the time command and specifying a large value as the command-line argument, to make sure that each version takes at least a few seconds to run.

$ time ./bit-pos-loop 250000000

and

$ time ./bit-pos-asm 250000000

The results will vary on different machines. However, you will notice that the version that uses the inline assembly executes a great deal faster.

GCC's optimizer attempts to rearrange and rewrite a program's code to minimize execution time even in the presence of asm expressions. If the optimizer determines that an asm's output values are not used, the instruction will be omitted unless the keyword volatile occurs between asm and its arguments. (As a special case, GCC will not move an asm without any output operands outside a loop.) Any asm can be moved in ways that are difficult to predict, even across jumps. The only way to guarantee a particular assembly instruction ordering is to include all the instructions in the same asm.

Using asm's can restrict the optimizer's effectiveness because the compiler does not know the asms' semantics. GCC is forced to make conservative guesses that may prevent some optimizations.

12. Exercises

  1. Interpret the assembly code for the C program in Listing 6. Modify it to eliminate the warnings that are obtained when generating assembly code with the -Wall option. Compare the two assembly codes. What changes do you observe?
  2. Compile several small C programs with and without optimization options (like -O2). Read the resulting assembly codes and find out some common optimization tricks used by the compiler.
  3. Interpret assembly code for switch statement.
  4. Compile several small C programs with inline asm statements. What differences do you observe in the assembly codes for such programs?
  5. A nested function is defined inside another function (the "enclosing function").

    Nested functions can be useful because they help control the visibility of a function.

    Consider Listing 13 given below:

    #Listing 13
    /* myprint.c */
    #include <stdio.h>
    #include <stdlib.h>

    int main()
    {
    	int i;
    	void my_print(int k)
    	{
    		printf("%d\n", k);
    	}
    	scanf("%d", &i);
    	my_print(i);
    	return 0;
    }

    Compile this program with cc -S myprint.c and interpret the assembly code. Also try compiling the program with the command cc -pedantic myprint.c. What do you observe?

 

[BIO] I have just taken my final-year B.Tech examinations in Computer Science and Engineering, and I am a native of Kerala, India.


Copyright © 2003, Hiran Ramankutty. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 94 of Linux Gazette, September 2003

LINUX GAZETTE
...making Linux just a little more fun!
Play Encoded DVDs in Xine
By LeaAnne Kolp

First of all, you'll need to download the plugins.

xine_d4d_plugin-0.3.2.tar.gz

xine-d5d-0.2.7.tgz

xine-lib-0.9.12.tar

xine-ui-0.9.12.tar

These plugins will ONLY work with the xine-lib and xine-ui-0.9.12. If you get 0.9.13 it will NOT work.

After you download those, switch to root:

[tux@linux tux]$ su

Password: *****

Then you'll have to move the files that you just downloaded to your /root/ directory. Do this by typing in the following at the command prompt.

mv *.tar.gz /root/

If that doesn't work, then just type out the following:

mv xine-lib-0.9.12.tar.gz /root/

Do this for each of the files. After you get that done, then switch to your /root/ directory by typing in the following:

cd /root/

Then type:

ls

And you'll get a listing of all the files in your /root/ directory. Now for the good part :)

Now to gunzip and untar it :)

To do this, type in the following:

gunzip -d xine-lib-0.9.12.tar.gz

Then untar it:

tar -xvf xine-lib-0.9.12.tar

Switch to that directory by typing the following:

cd xine-lib-0.9.12

Now type in:

ls

Now that you're in the directory, you'll have a README file and an INSTALL file. ALWAYS read the README file. No matter how many times you've done this before, something might have changed. If the README doesn't tell you anything, read the INSTALL file.

To do this, type in:

more README 
(type it just as it appears in the directory; if you don't type it identically, it won't be found)

more INSTALL

Normally, a typical installation is done by typing in these commands:

./configure 
make 
make check 
make install

Again, always read the README. Each distribution of Linux is different and therefore the installation
instructions could be different.

Keep repeating the above steps until all 4 files are installed.
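Putting it all together, the whole cycle for one package looks something like this (a sketch using the xine-lib tarball names; substitute the matching names for the other three archives):

gunzip -d xine-lib-0.9.12.tar.gz
tar -xvf xine-lib-0.9.12.tar
cd xine-lib-0.9.12
more README
./configure
make
make check
make install
cd ..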


Then type in the following to update your drive:

updatedb

That could take a while, depending on your drive. When it's done, you'll have to locate xine.

To do so, type in the following:

locate xine

It usually puts it in /usr/local/bin/ but to be on the safe side, locate it. :)

Once you have located it, until you add it to your menus, you will need to type the full path to where it was. So if it was in /usr/local/bin/xine you would type in: /usr/local/bin/xine

That would start the program running if that's where it was located.

Now here's the tricky part that you'll have to play with and figure out on your own. When xine comes up, you'll see the d4d and d5d buttons at the bottom. When you put a DVD into the DVD-ROM drive, you'll have to click on either the d4d or d5d button to get it to play the encoded DVD.

Unfortunately, I don't know which one will work with the DVD you put in.

Some DVDs take the d5d, others take the d4d; you'll just have to play around with it and experiment to find the one that's right. What I've started doing is writing down which plugin works (i.e. d4d or d5d) when I put a DVD in, so I know and don't have to play games with it to figure it out! :)

Congratulations! You've just gotten the plugins to work and now you can sit back and enjoy the movie!

 

[BIO] Hi, my name is LeaAnne and I've been Windows Free since March 2003.


Copyright © 2003, LeaAnne Kolp. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 94 of Linux Gazette, September 2003

LINUX GAZETTE
...making Linux just a little more fun!
Under /etc (A Simple Guide)
By AmirAli Lalji

Abstract:

This article is aimed at Linux newbies, providing them with a basic understanding of the /etc directory.

Introduction

Newcomers to Linux, especially those coming from a Windows background, often find the files in the /etc directory difficult to understand. In this article I will give a brief explanation of some of these files and their use. But before we dive into the /etc directory, I would like to point out that changes to some of these files can render your system unstable or, in some circumstances, unbootable. I cannot emphasize enough that you should make a backup of the file(s) before making any changes.

Let's Dive In....

/etc/exports

This file contains the configuration for NFS (Network File System) exports. It tells which partitions are shared with other Linux/UNIX systems, and how.
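A typical entry (host and options invented for illustration) looks like this, exporting /home read-write to one subnet:

/home   192.168.1.0/255.255.255.0(rw,sync)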

/etc/ftpusers

This file contains the login names of users who are not allowed to log in via FTP. It is recommended to add the user root to this file for security reasons.

/etc/fstab

This file lists the filesystems, which may be spread across multiple drives or separate partitions, that are to be mounted automatically. This file is checked when the system boots, and the filesystems listed in it are mounted.
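For example, a line such as the following (the device name is illustrative) mounts a partition on /home at boot:

/dev/hda3   /home   ext3    defaults        1 2

The fields are the device, the mount point, the filesystem type, the mount options, and the dump and fsck-order flags.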

/etc/hosts.[allow, deny]

You can control access to your network by using these files. You can add hosts to the hosts.allow file if you want to grant them access to your network, or add them to hosts.deny if you don't.
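For instance, a cautious setup (addresses invented for illustration) denies everything by default, then allows specific services:

# /etc/hosts.deny
ALL: ALL

# /etc/hosts.allow
sshd: 192.168.1.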

/etc/inetd.conf or /etc/xinetd.conf

The inetd file can be called the father of networking services. This file is responsible for starting services like FTP, TELNET etc. Some Linux distributions come instead with xinetd.conf, for the Extended Internet Services Daemon (xinetd), which provides all the functionality and capabilities of inetd but extends them further.

It is advisable to comment out services which you do not use.

/etc/inittab

This file describes what takes place, or which processes are started, at bootup or at different runlevels. A runlevel is defined as the state the Linux box is currently in. Linux has 7 runlevels, numbered 0-6.
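For example, the line that sets the default runlevel on many systems looks like:

id:3:initdefault:

Here 3 is full multi-user mode without X; changing it to 5 would boot into a graphical login on most distributions.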

/etc/motd

This file, whose name stands for "message of the day", has its contents displayed after a successful login.

/etc/passwd

This file contains user information. Whenever a new user is added, an entry is added to this file containing the login name, password etc. This file is readable by everyone on the system. If the password field contains "x", then the encrypted passwords are stored in the /etc/shadow file, which is only accessible by the user root.
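A typical entry (the user is invented for illustration) looks like:

amirali:x:500:500:AmirAli Lalji:/home/amirali:/bin/bash

The colon-separated fields are the login name, the password field, the user ID, the group ID, the comment (GECOS) field, the home directory, and the login shell.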

/etc/profile

When a user logs in, a number of configuration files are executed, including /etc/profile. This file contains settings and global startup information for the bash shell.

/etc/services

This file works in conjunction with the /etc/inetd.conf or /etc/xinetd.conf files (see above). It determines the port a service mentioned in inetd.conf is to use, e.g. FTP/21, TELNET/23 etc.

/etc/securetty

This file lists ttys from which root is allowed to login. For security reasons it is recommended to keep just tty1 for root login.

/etc/shells

This file contains names of all shells installed in the system with their full path names.

In the end....

I hope you enjoyed this article and that it helped in understanding the /etc directory. You might find other subdirectories beneath /etc which are application-specific, e.g. /etc/httpd and /etc/sendmail are for Apache and Sendmail respectively.

If you have any comments or suggestions, please feel free to email me at aalalji@bcs.org.uk

 

[BIO] AmirAli Lalji is a System Administrator/DBA and lives and works in UK and Portugal.


Copyright © 2003, AmirAli Lalji. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 94 of Linux Gazette, September 2003

LINUX GAZETTE
...making Linux just a little more fun!
Linux based Radio Timeshifting
By Yan-Fa Li

1.0 Introduction

Like a lot of gadget freaks I have a Tivo in the living room. Now while one could argue that thanks to an infestation of Clear Channel there really isn't much of interest to listen to anymore, there is still public radio. I listen to a lot of NPR while driving in the car, and I often find I miss programs I find interesting, or worse, I arrive at my destination and have to stop listening. So naturally I'd been thinking for some months now, why not invest some time and effort and look at how to build a PRR (Personal Radio Recorder).

Obviously I'm not the only one. There are now some commercial offerings like this, and quite a few people appear to have done projects to timeshift radio. There's even a how-to, and slashdot had a big thread about it recently.

These notes are all based on using RedHat Linux 7.3, so your mileage may vary if you're using something like SuSE or Mandrake. I believe they already come with Alsa, for example, so you can skip the parts that involve installing it if your system comes with it pre-installed.

2.0 The Basics

Pretty much all the projects I've seen out there have the same things in common. They use one of three kinds of setup.

Naturally I picked the USB radio, since I wanted the flexibility of being able to replace the radio easily, and knew that drivers already existed in the Linux kernel for this device under the Video For Linux APIs.

3.0 Architecture

I had a few requirements:
  1. Output straight to mp3, and avoid creating any intermediate wave files. Two hours of Prairie Home Companion, for example, would be 1.2GB as 44KHz wave files, so this is definitely something to be avoided.
  2. Decent ID3 tags so the files had some useful info contained within them rather than just in the filename.
  3. An automated system of reaping the files at regular intervals to avoid eating up huge amounts of disk. NPR is highly topical and current and generally not something I want to archive long term.

The basic capture system ended up looking like this:

Alsa HW Interface -> [ ecasound ] -> < Wav Stream > -> [ lame encoder ] -> < mp3 >
Nice and simple. It just requires a little syntactic sugar to hold it all together. Since I wanted to encode VBR mp3s on the fly I had a few worries about CPU usage. I also planned installing it on my main file server since it's always on and therefore an ideal candidate. Using an 850MHz celeron, my tests showed the load to be about 40-50% while capturing and encoding. This still left plenty of CPU for other tasks like serving files, ssh and http.

Your mileage will of course vary if you're running X for example, which notoriously causes skipping with sound cards. However, since my system is a dedicated file server I've long stopped running a GUI on it. It never became an issue, but keep it in mind for your target system.

Encoding FM in real-time as MP3

A little googling got me a wealth of excellent information about FM mp3 encoding. Firstly, FM has a hard cut-off for all frequencies above 15KHz at the transmitter, so any information above that can be safely ignored while encoding. Secondly, the optimal bitrates for FM appear to be somewhere between 96 and 112kbits, since it's mostly voice with only a modicum of music, and we're already saving 5KHz in the frequency range anyway; we therefore have more than enough bits remaining to get a faithful encoding. Thirdly, since I was planning on using the Lame encoder, joint stereo was a must. Not only is the joint stereo mode of Lame excellent, but it also leaves more bits for the encoding.
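Putting those choices together, the capture pipeline can be sketched as a single command. Note that this is only a sketch: the ALSA device name, the raw-PCM handoff between the two programs and the -V 4 quality setting are assumptions for my setup, so check the ecasound and lame man pages before relying on it.

ecasound -q -f:16,2,44100 -i alsa,default -o:stdout | \
    lame -r -s 44.1 -m j --lowpass 15 -V 4 - show.mp3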

Hardware Constraints

At a bare minimum, I wouldn't recommend anything slower than a 450MHz Pentium III for this task, though if you are willing to switch to Average Bit Rate and a slightly lower encoding rate from the one I've used, you could possibly get away with a well tuned 350MHz PII. Having a disk on the system itself is also optional, as it only needs to keep up with ~112kbps of data plus network overhead, so one could create a completely diskless system that booted off the network or a flash card which dumped all its recordings back to the network.

Check out compgeeks or CSO for low end systems which would be suitable for this duty.

4.0 Recording Audio Under Linux

This is actually a bigger challenge than it seems. While a large number of audio cards are supported by Linux for playback, not very many are all that great for recording. A great source of information can be found at the Alsa Project web site. Having already used Alsa on a number of systems and projects previously, I decided to use it for the PRR project. And don't forget, it's the future of Linux audio, as it's already in the 2.6 kernels.

The next problem is figuring out what to use to do the actual recording. After looking at a variety of solutions such as sox and alsarecord, I settled on using Ecasound. While it's probably overkill for this project, some of the features I liked were the built-in audio conversion routines, the ability to specify realtime scheduling under Linux if run by root, and support for writing data to stdout. Audio is really a real-time sort of task with hard requirements on meeting certain scheduling goals, so being able to specify this was a big plus for Ecasound. Unix pipes also avoid the creation of large temporary data files, keeping disk requirements down to manageable levels.

Preparing to Set Up Alsa Sound System

First things first, identify what kind of sound card you have and figure out whether it's supported by Alsa. As I said earlier, not all audio cards are very good for recording. Interestingly enough, the sound card I ended up using is one of the most inexpensive available and yet it works quite well.

I ended up using a Cirrus Logic 46XX (Hercules Fortissimo) series card. I picked one of these up in the sfbay area for ~35USD at retail. I'm pretty sure they're not much more than that elsewhere. Alsa has pretty good support for them, and for FM recording they work just fine. I had started out with a CMI8738, an even cheaper card at around ~20USD, but I could not make it record audio without a horrible whine and very poor input gain. It was just fine for playback, but pretty much useless as a recording device.

Detailed instructions for setting up Alsa are on their website listed card by card. But here are a few notes before you start. Firstly, Alsa really needs the kernel source you are compiling against and your running kernel to be the same. So for example, on a typical RedHat installation, say 2.4.18-26.7, you would need the same kernel sources installed in your /usr/src/linux-2.4/ directory. By default, this is not the case, because RedHat specifically renames the kernel sources to use the extra version string "custom".

To avoid this problem just re-compile, re-install and boot your new redhat kernel before attempting to install Alsa. If you don't know how to do this there are lot of good instructions on the net and I suggest you look some up before proceeding. If you want the default RedHat options, simply use something like:

cd /usr/src/linux-2.4
make mrproper && \
make oldconfig && \
make dep clean bzImage modules MAKE='make -j2'
This will rebuild your kernel sources. You will need to install this and reboot using it. How you do that depends on whether you're using lilo or grub, and is beyond the scope of these instructions.

Additional Build Notes For RedHat Kernel Users

There's also a problem which seems to happen on RedHat 2.4.20 kernels, which is quite infuriating. I suspect it's because of all the scheduler patches they've been backporting from the 2.5 series. Anyway, whenever you run configure it deletes the file include/linux/workqueue.h from the working directory. This has the unfortunate side effect of letting the compile proceed cleanly, while the modules refuse to load because of unknown symbols. Most annoying. The fix is simple. Run configure, and before running make once again to compile, un-tar the workqueue.h file from the tarball again. Something like:
tar jxvf alsa-driver-0.9.5.tar.bz2 alsa-driver-0.9.5/include/linux/workqueue.h
works for the 0.9.5 release. This will prevent the dreaded complaints about being unable to install modules due to failed dependencies.

Building and Installing Alsa

Assuming you've got your new kernel running, download the Alsa drivers package, and compile and install it as root. Next, download the Alsa library package, compile and install this. Finally, at a bare minimum you'll need the Alsa Utils, otherwise you won't be able to do anything useful like unmute the audio. Tools and OSS compat are nice to have but not required at this point.

As an additional note, recent versions of Alsa install themselves in the default system paths. Older versions installed themselves in /usr/local, and on RedHat systems this would cause problems because the dynamic loader wasn't configured to look there for libraries. This would cause configure scripts to fail to find libraries and generally not compile. This is actually pretty easy to fix: just add the /usr/local/lib path to /etc/ld.so.conf and re-run ldconfig. This will update your ld cache and allow dynamic libs that have been installed there to be found at run time. FYI, this is also one of the main reasons why a lot of open source packages fail to compile on RedHat systems.

The new drivers will install in your currently running kernel directory. You will need to reconfigure your modules.conf to reflect the new sound card set up. This normally involves removing the entries added by kudzu, or disabling them using a '#' sign and then adding the new driver entries. Here's the one from my CS46XX setup:

# ALSA portion
alias char-major-116 snd cards_limit=1 device_mode=0660
post-install snd alsactl restore
alias snd-card-0 snd-cs46xx
# module options
# OSS/Free portion
alias char-major-14 soundcore
alias sound-slot-0 snd-card-0
# card #1
alias sound-service-0-0 snd-mixer-oss
alias sound-service-0-1 snd-seq-oss
alias sound-service-0-3 snd-pcm-oss
alias sound-service-0-8 snd-seq-oss
alias sound-service-0-12 snd-pcm-oss
Notice the post-install directive. This lets you restore audio settings on reboots as soon as the driver loads. You can also achieve this by modifying /etc/rc.local, but I like this way better in case I need to unload the driver. You can also add a pre-remove directive if you like, to save any settings you may have changed before unloading the sound modules. I prefer to restore to known defaults.

Next we need to add an entry to the rc.local file. For whatever reason, the OSS emulation drivers don't load automatically. KDE, for example, complains when starting artsd, because the sound system hasn't been initialized by the time artsd tries to load. You can force OSS emulation to pre-load by adding:

modprobe snd-pcm-oss
setpci -s 01:09 latency_timer=60
to the end of rc.local. The first entry loads the pcm OSS driver and keeps apps which depend on OSS being there none the wiser. The second entry adjusts the PCI timers for the sound card to give it a little more time on the bus; that part is optional. I find tweaking the PCI bus helps avoid pops and clicks in the audio. If you do choose to tweak your PCI latency this way, remember to use lspci to find the correct device number for your card. The one listed here is for my system bus and will likely be different on your system.

It's probably easiest at this point just to reboot one more time; however, if you're confident about what you're doing, manually remove the old OSS audio drivers using rmmod and do a

modprobe snd-pcm-oss
Follow up with lsmod and you should see a lot of bright and shiny new Alsa drivers loaded:
Module                  Size  Used by    Not tainted
snd-pcm-oss            45668   0
snd-mixer-oss          16536   0  [snd-pcm-oss]
snd-cs46xx             79156   0  (autoclean)
snd-rawmidi            18656   0  (autoclean) [snd-cs46xx]
snd-seq-device          6316   0  (autoclean) [snd-rawmidi]
snd-ac97-codec         44640   0  (autoclean) [snd-cs46xx]
snd-pcm                83264   0  (autoclean) [snd-pcm-oss snd-cs46xx]
snd-timer              19560   0  (autoclean) [snd-pcm]
snd-page-alloc          8520   0  (autoclean) [snd-cs46xx snd-pcm]
gameport                3412   0  (autoclean) [snd-cs46xx]
snd                    43140   0  [snd-pcm-oss snd-mixer-oss snd-cs46xx snd-rawmidi snd-seq-device snd-ac97-codec snd-pcm snd-timer]
soundcore               6532   6  [snd]
This is an excerpt from a working system. Test it by playing some audio; anything you have handy will do, using mpg123 for example. Though on a RedHat system, you'll actually have to download, compile and install it yourself, since they no longer ship mpeg decoders due to patent issues with Fraunhofer and Philips.

5.0 Additional Packages to Download

Download and Compile ECASOUND

Since you now have an installed and working Alsa sound system (you do have it working, don't you?), it is now a good time to get ecasound downloaded and running. I did my initial implementation using 2.2.1, but I recently upgraded to 2.2.3 as it seems to have a lot of bugfixes specifically for use with Alsa. Building it should be fairly straightforward, since it's GNU autoconf based. Just follow the INSTALL instructions.

If you want a slightly more optimized build, and you're using a version of GCC that supports more advanced x86 optimizations (gcc 3.2 or higher), I would recommend the following configure line on a PII or better system. This especially includes the newer FC-PGA2 Celerons.

CXXFLAGS='-O2 -march=i686' CFLAGS='-O2 -march=i686' ./configure
Or the even more aggressive:
CXXFLAGS='-O2 -march=i686 -msse -mmmx' CFLAGS='-O2 -march=i686 -mmmx -msse' ./configure
However, be warned that this second version may not compile correctly and could cause more problems than it's worth. On the other hand, it probably wouldn't hurt to at least give it a try and see whether it makes a difference on your system. In my particular case I see gains in the 2-4% range with these optimizations.
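Whichever configure line you pick, the rest of the build is the usual autoconf routine (do the install step as root):

make
make install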

Download and Compile Lame

Now download Lame 3.93.1 and compile that. It's also GNU configure based, so you can use the same flags as above. Install it. I'd also recommend downloading and installing your favorite MP3 player, such as xmms or mpg321, as it will be useful while testing the installation.
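The whole Lame build looks something like this; the tarball name is an assumption, so adjust it to whatever your mirror serves, and do the install step as root:

tar xzf lame-3.93.1.tar.gz
cd lame-3.93.1
CFLAGS='-O2 -march=i686' ./configure
make
make install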

6.0 Configuring The Recording Devices

This is probably the hardest part of getting all this running. I found the easiest thing to do was to set the volumes before recording, using the alsamixer tool. It's a curses-based program that lets you adjust sliders for the various audio devices, so you can take a trial and error approach to your particular sound card. The basic trick is to put the devices you are interested in capturing from into capture mode. If anyone has better information on how to configure this from the command line, please drop me a line; a starting point is sketched below.
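The amixer tool that ships with alsa-utils looks like it can script the same settings. This is only a sketch; the control names 'Line' and 'Capture' are assumptions and vary from driver to driver (running amixer with no arguments lists what your card calls them):

amixer set Line 70% cap       # set line-in volume and flag it as a capture source
amixer set Capture 70%        # set the overall capture gain, if the card has one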

If you go into the alsamixer interface, you'll see a group of sliders with values from 0 to 100. Bars which have six hyphens above them are potential capture sources. Because each sound card appears to have a slightly different mixer layout, it's not always clear which ones to activate to enable recording.

Using a combination of trial and error and ecasound, I was able to test which devices were capable of recording. Having a pair of speakers hooked up to the speaker-out is very useful at this point; headphones work just as well. Typically you want the line-in device. Crank the volume up to around 70%. If you're using a Video4Linux radio, you can use the 'radio' util to tune in and turn on the radio device, giving you a source of audio. The D-Link radio is a line-level device, meaning it has a fixed output volume, so how loud it sounds is directly related to how loud you set the line-in volume.
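For the record, here is how the tuning looks from the shell. The flags are the same ones my recording script below uses (-q for quiet, -f to tune a frequency, -m to mute); consult radio's documentation if your version differs:

radio -qf 88.5    # tune to 88.5 MHz
radio -qm         # mute, i.e. turn the radio back off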

If you've hooked it up correctly to the line-in of your sound card, you should now hear some audio. Adjust the volume until it doesn't sound distorted. Distorted audio will a) sound horrible and b) make your MP3s sound really crappy. Just play it by ear (sic) and get it to where it sounds reasonably clear and undistorted. Remember, it's FM, so it's already lost about 5KHz of fidelity in the conversion; don't expect miracles. You may need to look for sources of noise and reposition your antenna. This will vary from installation to installation.

Notes on Noise

Because the source is analog, you really need to pay close attention to sources of electrical noise; devices such as other computers or monitors are electrically very noisy. For example, I recently discovered that I was getting an annoying pulsing static sound from my setup. It turns out the routing of the audio cable was a little too close to some power cords. A simple re-route solved the problem, but as with a lot of audio work, you have to do regular sound checks to make sure you haven't introduced a new source of interference.

7.0 Gluing It All Together

Now that we have the basic infrastructure going, we need some simple glue scripting to make it all work together. I wrote a couple of simple scripts to do the functions I needed. The first one does the actual recording. It's started via cron jobs, and it simply invokes all the programs in one big fat pipe, with ecasound at the head and lame at the tail. It works quite well. The name of the file to be created and any pertinent parameters are passed in via cron. It's written in bash and is quite easy to understand.

Recordshow2: the capture script

#!/bin/bash
echo "Recordshow2 (c)2003 Yan-Fa Li (yanfali@best.com) under GNU LGPL"
# FREQUENCY TIMEINMINS "PROGRAM NAME"
#set -x

tune_channel()
{
	echo -n "Tuning to FM Channel $1..."


	# Reset and Turn on and Tune Radio
	$RADIO -qm 2>/dev/null && sleep 1 && $RADIO -qf $1
}

record_program()
{
	echo "Recording $TITLE for $1 Minutes ($TIME seconds) to:"
	echo -e "\t$FILENAME"

	# Record and Pipe to Lame
	TITLE2=${TITLE#*/}
	$ARECORD $APARMS | $LAME $LPARMS - "$FILENAME" \
		--tt "$TITLE2 on $DATE" \
		--ta "KQED/NPR" \
		--ty `date +"%Y"` \
		--tg 101 \
		--tc "$COMMENT"

	if [ $? -ne 0 ]
	then
		echo -n "Error Recording - Check the Soundcard Isn't Recording"
		echo " Already"
		turn_radio_off
		exit 1
	fi
}

fix_permissions()
{
	# Correct Permissions
	if [ -f "$FILENAME" ]
	then
		chown $OWNER "$FILENAME"
   		chmod 664 "$FILENAME"
	fi
}

turn_radio_off()
{
	echo -n "Turning off Radio..."
	# Turn off Radio
	$RADIO -qm
}


#
# Main Program
#

# Arg Check
if [ $# -ne 3 ]
then
	echo "usage: `basename $0` FREQUENCY TIME_IN_MINS \"NAME_OF_PROGRAM\""
	exit -1
fi

DEST=/mnt/music/radio
declare -i TIME=$2
TIME=TIME*60
OWNER="yan:music"

RADIO=/usr/bin/radio

ARECORD=/usr/local/bin/ecasound
APARMS="-b 512 -i alsahw,default -o:stdout -t $TIME"

LAME=/usr/local/bin/lame
LPARMS="-r -x -mj -s44.1"	# required for ecasound
# -r raw pcm input
# -x swap bytes
# -mj joint stereo mode
# -s incoming sample rate

LPARMS=$LPARMS" -V5 --vbr-new -q0 -b112 --lowpass 15 --cwlimit 10"
# Thanks to: http://www.jthz.com/mp3/ for the settings
# -V5 VBR quality level (0 = highest quality, 9 = smallest file)
# --vbr-new use the newer, faster VBR routine
# -q0 highest quality noise shaping and psychoacoustics
# -b112 minimum bitrate of 112Kbps
# --lowpass 15 filter all frequencies above 15KHz (FM cutoff)
# --cwlimit 10 compute the acoustic model up to 10KHz

DATE=`date +"%a %b %d %Y (%k:%M)"`
SIMPLEDATE=`date +"%Y-%m-%d-%a"`

FILENAME="$DEST/$3-$SIMPLEDATE.mp3"
TITLE="$3"
COMMENT="$1 MHz"

echo "`basename $0`: Recording Started on "$DATE
# Call it twice to avoid radio coming up mute
tune_channel $1
tune_channel $1

record_program $2

fix_permissions 

turn_radio_off

echo "`basename $0`: Recording Ended on "`date +"%a %b %e, %Y  %k:%M"`

Example Crontab

# usage: recordshow2 FREQUENCY TIME_IN_MINS "NAME_OF_PROGRAM"
# leave a minute at the end otherwise you'll overlap the audio
# device and fail to record
#
# Weekdays
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/etc/radio
MAILTO=root
HOME=/

# Daily
0 9 * * Mon-Fri 	recordshow2 88.5 59 "Forum/Forum H1"
0 10 * * Mon-Fri 	recordshow2 88.5 59 "Forum/Forum H2"
0 11 * * Mon-Fri 	recordshow2 88.5 59 "Talk of the Nation/Talk Of The Nation H1"
0 12 * * Mon-Fri 	recordshow2 88.5 59 "Talk of the Nation/Talk Of The Nation H2"
0 13 * * Mon-Fri 	recordshow2 88.5 59 "Fresh Air/Fresh Air"
0 16 * * Mon-Fri 	recordshow2 88.5 29 "Market Place/Marketplace"
30 16 * * Mon-Fri 	recordshow2 88.5 119 "All Things Considered/All Things Considered"
0 21 * * Mon-Fri 	recordshow2 88.5 59 "BBC World Service/BBC World Service"

# Weekly Recordings
# Thursday
30 18 * * Thu	 	recordshow2 88.5 29 "Pacific Time/Pacific Time"
# Saturday
0 11 * * Sat	 	recordshow2 88.5 59 "WaitWait/Wait Wait Don't Tell Me"
0 12 * * Sat	 	recordshow2 88.5 59 "This American Life/This American Life"
0 18 * * Sat	 	recordshow2 88.5 119 "Prairie Home Companion/Prairie Home Companion"

# Sunday - in case you missed Saturday's recording
0 11 * * Sun	 	recordshow2 88.5 119 "Prairie Home Companion/Prairie Home Companion"
# Maintenance
0 1 * * Sun		weekly_file_cleanup.rb

File Administration

The second script is more of a utility script. Remember my requirement about automatically removing files after a certain time? What this script does is scan each file's modification time and, after a predetermined interval, two weeks in my case, delete it. I used Ruby because I was learning it, and it actually made writing the program quite easy. Go figure. Feel free to re-write it in bash, but it really was much easier to write in Ruby. See for yourself.

Because there are some recordings I don't ever want to delete, usually weekly programs, I added the ability to ignore a directory by simply writing a file named .donotreap into that directory. The script will bail on any directory where it finds one. As a secondary safeguard, it will also only delete mp3 files; everything else is ignored. It's not fancy, but it works quite well.
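Protecting a directory is just a matter of touching the flag file; using the WaitWait directory from the crontab above as an example:

touch /mnt/raid5/music/radio/WaitWait/.donotreap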

#!/usr/bin/ruby -w
=begin
Simple script to clean up the Timeshifted Radio Directories
Looks for files more than two weeks old and removes them
=end

puts "Timeshifted Radio File Cleaner v0.1"
puts "(c) 2003 Yan-Fa Li (yanfali@best.com) under GNU LGPL"

Dir.chdir("/mnt/raid5/music/radio")
TWOWEEKS = 60 * 60 * 24 * 7 * 2

file_list=Dir["**"]

# Find All Directories
dir_list = []
file_list.each { |x| dir_list << x if File.ftype(x) == "directory" }

topdir = Dir.pwd

# Recurse through all directories
dir_list.each do |x|
	Dir.chdir(topdir + "/" + x)

	# Do Not Reap Flagged Directories (skip whenever a .donotreap file exists;
	# File.zero? would only skip when the file exists and is empty)
	next if File.exist?(".donotreap")

	puts "Entering Directory: #{x}"

	# Build File List and Filter on name mp3
	file_list=Dir["**"]
	puts "\tFound #{file_list.length} Files Total"
	# filter in place with reject!; deleting from an array while
	# iterating over it with each skips entries
	file_list.reject! { |y| not y.include?("mp3") }
	puts "\tFound #{file_list.length} MP3 Files"

	# Find Files Older than 2 Weeks
	del_list = []
	file_list.each { |y|
		del_list << y if (Time.now - File.stat(y).mtime) > TWOWEEKS }

	puts "\t#{del_list.length} Files Scheduled For Deletion"

	next if del_list.length == 0
	del_list.each { |z| File.delete(z) }
end

8.0 Bugs and Things To Do

One recurring problem which I have not been able to fix is that occasionally the recording will be skewed and sound slightly off. I haven't been able to figure out what's causing it. Most of the time the recordings are pristine, but every so often one will be off. You can still listen to it, but it sounds like it's slightly off frequency, or tinny. Since I'm streaming the audio straight into the recording software and then compressing, it could be any one of the elements in the chain; keeping large wave files around is not an option. Upgrading to the latest and greatest versions has not helped. It could also be the sound card itself. It doesn't bug me enough to want to fix it, though I do lose some recordings that way. If anyone out there knows what it might be, I'd appreciate a heads up.

Since it pretty much all just works, I haven't messed with it much. I recently used a lot of the recordings on a long road trip: iPods rule. But I do have a few ideas on things that would be nice to have. First, it'd be great to scrape NPR program listings and get the details for each recording, attaching a reference file to each mp3 or changing the id3 comment to match the program listing.

Second, a dedicated scheduler would also be great. Right now, if two jobs clash over the recording device, due to overruns, the second recording fails because the audio device is busy. A dedicated, recording-centric scheduler would pretty much fix this problem. I know TiVo has something similar, so it's obviously a known problem with known solutions. Cron is the wrong tool for the job, so the workaround is, of course, to deliberately stop recording a minute sooner than necessary. In general this works very well.

Third, it would be great to have a web based interface for interacting with the recordings, changing programs to be recorded and listening, say via shoutcast, to stuff that's already been recorded. I'm far too lazy to write it myself, so I leave it as a challenge to all you out there :D

9.0 Summary

As you can see, it's a little bit of work to set up Linux to timeshift radio, but it's really not that difficult. I've been using it now for about six months, and it's a real pleasure not having to miss any shows that catch my attention while driving. Public radio is a great source of material, and I strongly encourage you to support your local station. When combined with a portable music player like an iPod, long car rides become much more enjoyable.

The good news is that building one will get much easier in the 2.6 timeframe, thanks to the integration of the ALSA sound system into the mainline kernel. While the files this setup generates by default are quite large, you can reduce the footprint by choosing a lower encoding bitrate than my default of 112kbps. As low as 64kbps should sound fine for voice alone, though music will sound pretty horrible at that bitrate. I haven't experimented with Ogg or any other formats, as I don't have a portable player that supports them, but adding such support should be a simple matter of modifying the backend a bit.

Any feedback or comments are appreciated, and if you have a solution to my occasional bad recordings drop me a line.

 

[BIO]


Copyright © 2003, Yan-Fa Li. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 94 of Linux Gazette, September 2003

LINUX GAZETTE
...making Linux just a little more fun!
USENET groups, email, and ssh tunnels over dial-up connection
By Nikolay Zhuravlev

When not at work, I have to use a dial-up modem for all my network needs. However, I still want to have all the power and flexibility that Linux provides. More importantly, I want to be able to use the same tools at home that I am used to at work. Namely, slrn for USENET news and fetchmail for downloading emails. In this article, I am going to discuss the use of ssh tunnels and compression for efficient and secure delivery of news and mail over a dial-up connection.

As was previously discussed, a combination of slrnpull and 'slrn --spool' can be used to fetch USENET news and read them offline. This is especially useful when there is only one user, and she is stuck with a pay-per-hour dial-up connection. Let's look more closely into this issue.

First, one should avoid working as root as much as possible, and use sudo instead. Use visudo to edit /etc/sudoers and add the following lines at the bottom:

# we want to be specific here
jane   localhost=/usr/bin/slrnpull -h news.server.com
where jane is the username authorized to run the command slrnpull -h news.server.com from localhost. Whenever Jane needs to fetch the news, she runs sudo:
jane@localhost ~$ sudo slrnpull -h news.server.com
Password:
jane@localhost ~$ slrn --spool

Fetching a large number of articles from a wide variety of USENET groups can take quite some time. Let us consider a scenario where Jane has ssh access to a machine with a fast Internet connection. This could be a machine at work, at school, or even abroad. Assuming that the other machine can access news.server.com, and provided that there are no other obstacles (e.g., strict firewalling), an ssh tunnel with compression can be used to significantly speed up news fetching over a dial-up connection. A tunnel is established like this:

jane@localhost ~$ ssh -C -N -f -L 8081:news.server.com:119 janedoe@work.some.com
jane@localhost ~$
Here -C turns on compression, -N and -f tell ssh not to execute a remote command and to go into the background, and -L sets up the local port forwarding. Now, the lines in /etc/sudoers need to be adjusted to make use of the tunnel:
# we want to be specific here
#jane   localhost=/usr/bin/slrnpull -h news.server.com
# notice the use of backslash
jane   localhost=/usr/bin/slrnpull -h localhost\:8081
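Before involving sudo at all, it's worth checking that something is actually listening on the forwarded port; the same netstat invocation mentioned below does the job:

jane@localhost ~$ netstat -tupan | grep 8081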

Jane can now run slrnpull. Instead of trying to connect to news.server.com directly, slrnpull will connect to local port 8081, and the traffic will travel through the ssh tunnel between localhost and work.some.com.

jane@localhost ~$ sudo slrnpull -h localhost:8081
Password:
jane@localhost ~$ slrn --spool

The two machines, i.e. the news server news.server.com and work.some.com, are on the fast network. The connection between them is clear-text and is not compressed. The localhost, however, is connected to work.some.com via dial-up, and the traffic between these latter two is encrypted and compressed. The compression is the same as that used by gzip. Compressing the ASCII traffic greatly decreases download times, which is especially useful if one likes to subscribe to a lot of USENET groups. The proposed scheme also provides some privacy for Jane, since the traffic between her machine and work.some.com is encrypted.

Finally, to avoid typing long ssh commands to establish a tunnel, Jane could have something like this in her .ssh/config file:

Host work
HostName work.some.com
LocalForward 8081 news.server.com:119
IdentityFile /home/janedoe/.ssh/id_dsa
Protocol 2
User janedoe
CompressionLevel 6
Notice that there is only one colon in the LocalForward line above. Now the tunnel can be established with just:
jane@localhost ~$ ssh -C -N -f work

Just don't forget to kill the old ssh tunnel before establishing a new one. If in doubt, use netstat -tupan | grep LIS to see what is going on. The exact syntax of the commands may depend on the particular flavor of SSH that you have. The above works for me (RH 9, openssh-3.5p1-1). Also check out the article on ssh-agent, which makes dealing with ssh even less painful.
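One quick way to kill the old tunnel, assuming the pkill utility from procps is installed, is to match on the command line that started it:

jane@localhost ~$ pkill -f 'ssh -C -N -f work'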

In a similar fashion, ssh tunnels can be used in combination with fetchmail to retrieve email from the server. Just add a new LocalForward entry to the .ssh/config file:

Host work
HostName work.some.com
LocalForward 8081 news.server.com:119
LocalForward 8082 pop3.some.com:110
IdentityFile /home/janedoe/.ssh/id_dsa
Protocol 2
User janedoe
CompressionLevel 6
and edit .fetchmailrc accordingly:
poll localhost with proto POP3 port 8082
user 'Doe0001' there with password "blah" is 'jane' here options fetchall
So now the command ssh -C -N -f work will establish two tunnels, one for the news and one for the POP3 mail. Fire up fetchmail to see how it works:
fetchmail -e 50 -m "/usr/sbin/sendmail -oem -f %F %T"
To learn more about fetchmail and setting up an email system, check the recent issue of LG. My experience was that, on average, mail and news download at least twice as fast compared to the conventional methods. Over a modem line, that is. To summarize, the use of ssh tunnels with compression provides both efficiency and security for your everyday communication. Use it, love it, and pass the knowledge along ;)

 

[BIO] Born in Moscow, Russia, in 1976. I have been coding and/or messing with computers in one way or another since I was 12. I entered the realm of *nix in 1995, and I have never regretted it. Currently, I am a Ph.D. student in the Department of Chemistry at the University of Minnesota, MN.


Copyright © 2003, Nikolay Zhuravlev. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 94 of Linux Gazette, September 2003

LINUX GAZETTE
...making Linux just a little more fun!
Python Weather Station
By Phil Hughes

This program is a simple interface that lets you build a web page from the Metar data reported by weather stations around the world. I wouldn't call it exciting, but it does work.

Rather than try to describe what you will see, go here and take a look. You should see at least two and possibly as many as five weather reports from around Costa Rica. That is, the program has a list of five weather stations to check and displays the information from all that report.

System Structure

The heart of this system is the Pymetar package, available here. This is a Python program which fetches the Metar data described here. Pymetar is a command-line tool, but it does all the dirty work.

My goal is to make this information available on a web page. I didn't want to turn this into a huge programming project, but I did want the implementation to make sense. The most basic approach would have been to run the program as a CGI script for each request. However, this is potentially very inefficient, because it would require the program to grab all the data each time a CGI request came in. More importantly, it would mean the user would have to wait for all those requests to complete.

I decided the best compromise was to set up a cron job to fetch the data and build a weather page. Then, each page request would just be displaying a static page. As the weather data does not change all that often, this actually offers pretty much current information.

Implementation

First, here is the code:


#!/usr/bin/env python

import sys
import time
sys.path.insert(0, "/home/fyl/pymetar-0.5")
import pymetar

def stations(args):
    for arg in map(lambda x: x.strip(), args):
        try:
            weather = pymetar.MetarReport(arg)
        except IOError, msg:
            # uncomment the following and remove pass line to see the errors
            # sys.stderr.write("Problem accessing the weather server: %s\n" % msg)
            pass
        else:
            if weather.valid:
		print "<h3>"
                print weather.getStationName()
                print " ( Lat: %s, Long: %s, Alt: %s m)" % \
		  weather.getStationPosition()
		print "</h3>"
		print "<table border=\"2\">"
                print "<tr><td>Updated</td><td> %s</td></tr>" % \
		  weather.getTime()
                if weather.getWindDirection() is not None:
		    print "<tr><td>Wind direction</td><td> %s�</td></tr>" % \
		      weather.getWindDirection()
                if weather.getWindSpeed() is not None:
                    print "<tr><td>Wind speed</td><td> %6.1f m/s</td></tr>" % \
		      weather.getWindSpeed()
                if weather.getTemperatureCelsius() is not None:
                    print "<tr><td>Temperature</td><td> %.1f�C (%.1f�F)</td></tr>" % \
		      (weather.getTemperatureCelsius(), \
		      weather.getTemperatureFahrenheit())
                if weather.getDewPointCelsius() is not None:
                    print "<tr><td>Dew point</td><td> %.1f�C (%.1f�F)</td></tr>" % \
		      (weather.getDewPointCelsius(), \
		      weather.getDewPointFahrenheit())
                if weather.getHumidity() is not None: 
                    print "<tr><td>Humidity</td><td> %.0f%%</td></tr>" % \
		      weather.getHumidity()
                if weather.getVisibilityKilometers() is not None:
                    print "<tr><td>Visibility</td><td> %.1f Km</td></tr>" % \
		      weather.getVisibilityKilometers()
                if weather.getPressure() is not None:
                    print "<tr><td>Pressure</td><td> %.0f hPa</td></tr>" % \
		      weather.getPressure()
                if weather.getWeather() is not None: 
			print "<tr><td>Weather</td><td> %s</td></tr>" % \
			  weather.getWeather()
                if weather.getSkyConditions() is not None: 
			print "<tr><td>Sky conditions</td><td> %s</td></tr>" % \
		          weather.getSkyConditions()
		print "</table>"
            else:
                print "Either %s is not a valid station ID, " % arg
		print "the NOAA server is down or parsing is severely broken."


print "<html>"
print "<head>"
print "<title>Costa Rica weather from PlazaCR.com</title>"
print "</head>"
print "<body>"
print "<h1>Costa Rica weather from PlazaCR.com</h1>"
print "<p>Latest reports as of %s CST" % time.ctime()
gm = time.gmtime()
print "(%d.%02d.%02d %02d%02d UTC)" % (gm[0], gm[1], gm[2], gm[3], gm[4])
print '<p><a href="images/costa_rica.gif" target="_blank">Costa Rica map</a>'

stations(["MROC", "MRLM", "MRCH", "MRLB", "MRPV"])

print "</body>"
print "</html>"

I chose to just import the pymetar.py code into the wrapper that generates the HTML page. To do this, I added the Pymetar directory to the path searched by Python.

Next I define stations, a function that queries the weather stations using the Pymetar code and then formats the output into HTML. It looks pretty ugly because it is just some long print statements building HTML strings with some if statements tossed in to see if we actually got the data. The important point is that you pass it a list of the station names and you get the body of the web page back.

Finally, the last fifteen or so lines of code just build the HTML boilerplate and call stations to produce the guts.

Testing and Installation

Because of the design, testing is very easy. There are no web-server dependencies, so you can just run the program from the command line.

In my case, I called the program wcr, so just typing ./wcr will run the program and display the HTML on standard output. If all goes well, run the program again, redirecting the output to a file. For example,

./wcr > /tmp/weather.html

You can now point a web browser at the file and see if it renders the page the way you want. If not, now is the time to make changes in wcr and continue testing.

Once you are happy with the output, upload the code to your web server and set up a cron job to run it. Normally, crontab -e will allow you to edit your crontab entry.

I elected to run the program twice an hour, at 5 and 35 minutes past. The crontab entry must execute the program and write the output file to a location the web server can get to. I used:

5,35 * * * * /home/fyl/pymetar-0.5/bin/wcr > /var/www/htdocs/weather.html

The four asterisks tell cron that the 5 and 35 minute times apply to every hour of every day. The next field is the name of the program to run. Finally, the redirect operator (>) is followed by the location where the HTML file is to be stored.

Assuming you set all the permissions right (that is, the program can write to the file and the web server can read it), you are all done. Just point to this file and you have a weather page.

Conclusion

For the perfectionist, you probably need a fancier solution. Why? Well, there will be a point in time when the contents of the HTML file are not valid. When cron fires off the job, the contents of the output file are truncated. Then the program runs and builds a new file.

Because of the way the program works, this window is not just the short execution time of some Python code: the program queries the various weather stations and has to wait for each response. With the five stations I poll, I see elapsed times between one and ten seconds. If having bad data on the site for a maximum of ten seconds every thirty minutes is acceptable to you, all is well. If not, write the output to a temporary file and then move it to the real file when all is done. Still not perfect, but really close.
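As a sketch, the crontab entry from above becomes the following (writing the temporary file into the same directory keeps the final mv an atomic rename on the same filesystem):

5,35 * * * * /home/fyl/pymetar-0.5/bin/wcr > /var/www/htdocs/.weather.new && mv /var/www/htdocs/.weather.new /var/www/htdocs/weather.html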

Now, for us mortals, we have a quick and dirty weather page. Have fun.

 

Phil Hughes is the publisher of Linux Journal, and thereby Linux Gazette. He dreams of permanently tele-commuting from his home on the Pacific coast of the Olympic Peninsula. As an employer, he is "Vicious, Evil, Mean, & Nasty, but kind of mellow" as a boss should be.


Copyright © 2003, Phil Hughes. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 94 of Linux Gazette, September 2003

LINUX GAZETTE
...making Linux just a little more fun!
SCO Interview
By Anonymous

This is Bob Chumps, President of Chumps News Network. We're here today for our Washington Insider Interview Luncheon with Baryl O'Hubris, President of the Sexy Condom Operation Corp.

CNN: Baryl, your controversial suit has been in the news a lot lately. Can you give us a little background? Where did you first get the idea for all this?

Baryl: Well, we'd been talking one day in the board room about how we didn't have a lawyer's chance in hell of making any money over the next five years. All these new glow-in-the-dark and French tickler designs have really screwed up our market, so to speak; we just don't have the R&D funding to get any of this done by our group, and we couldn't compete if we had to license them from anyone else. It looked bleak. I was so bummed, I went home and tried to take my mind off the whole situation. I had just picked up a copy of that new Open Source biography, "Pulling the Wool: Adventures With Our Own Bush". Well, I couldn't relax and concentrate, so I took a viagra, two peyote buttons, and did a couple of lines. I felt better immediately and went back to reading. After the chapter on doing up the election, a small idea was forming. When I got to the part about 9/11, I could really feel it taking hold. By the time I got to the chapter called "Bombing the Browns & Marketing the Oil War", I couldn't contain it any longer. I could see it all; people in the US have been prepared for this type of thing for years. We could do it!

CNN: What happened then?

Baryl: I immediately called Pukinda Djellow, our Chairman of the Board. "Puke," I said. "Get the lawyers in here tomorrow. I've got an idea." Puke listened, told me it was brilliant, that I'm a genius, emphasized just how amazing I am; he's great, a real team player. We met the next day and formed our basic plan.

We had the lawyers take apart the licensing agreement on our condom packs. It couldn't be clearer, Bob. The way it's written, if you've used one of our prophylactics, you've implicitly agreed that we have a license on your having sex for the rest of your life. We had been sitting on a gold mine without even knowing it!

Basically, the way the lawyers have it, anytime you slide an SCO scumbag on your schlong, you owe us more money. So the lawyers worked feverishly, came up with the idea for a *big* suit. Now, we knew we couldn't identify individual purchasers of our condoms, the Total Information Awareness thing isn't quite up yet. So, we just sued everybody. It's the Kill 'em All, Let Moroni Sort 'em Out approach. Eventually everyone's going to either pay up or pack their peter away for good.

CNN: Didn't anyone object?

Baryl: Sure, one guy, Uphinder Bowwow didn't like it and quit. We don't need him, it's no big deal.

CNN: Yes, but how are you really going to make money? Eventually people are going to figure this out!

Baryl: That's the brilliant part. We knew it would take years for the whole thing to get to court. In the meantime, we could be reduced to drinking T-Bird. We had to find a way to cash in faster. Well, I remembered we'd hired this guy named Veg Roughage, VP of Worldwide Intercourse, which of course left him with not a whole lot to do. We brought him into the meeting. Turns out he's had some past experience in this stuff, knows how to do the stock and option thing. He said it was really simple, came up with this plan how we could issue options to all of us who were "in the know". Got him off his duff and on to something he knows about. Ya know, it worked; as soon as we made the announcement, our stock took off. We're all rolling in dough now, no thoughts of having to drink the cheap stuff anymore!

CNN: All of this seems a little far-fetched.

Baryl: After I came down the first time, I thought so too, Bob. I was frankly worried. But I just did the same mix again: viagra, peyote, and toot, and soon realized everything was OK. And it's working! The main thing is keeping up the hype, keeping the general public and the employees confused but motivated.

CNN: How did you do that?

Baryl: Well, it turns out to be pretty simple. First of all, we hired professional script writers so we all know what to say all the time; sort of synchronizes our processes so to speak. Then, we call regular press and phone conferences. Krisp Blaughjob, Senior VP, has been a leader in this area, bought himself a couple of nice Armanis and some new ties. When he wears that outfit, everyone believes anything he says, I'm sure you've seen him on TV. Of course, the other employees were still a problem, but I figured that one out myself. Since everyone eats at the SCO cafeteria, we just put up some cool subliminal motivational posters and spiked the food with my mix. Of course, we couldn't afford coke for everyone, so we knocked it out of my original recipe and substituted 2,000 milligrams of caffeine. Does a pretty good job, costs about 98% less.

CNN: I guess you can't argue with success. What do you think your biggest problem is going to be in all this?

Baryl: So far, it's been clear sailing inside the US, so we're concentrating on that market. We're realistic, we know we can't control the world yet, they just haven't been properly prepared. Domestically the media has really made our job easy.

Our biggest concern is that most people will just switch to masturbation. Right now, we don't know how to handle that one for sure. But we've got the script writers doing research. Turns out with the right ads, we can probably convince most people that it really does grow hair on your palms and eventually cause blindness.

CNN: Wow, the whole thing is amazing. Any last comments?

Baryl: Well, I'd just like to thank God people are stupid. Oh yeah, the lawyers think it's really funny: all those people who bought our stock (symbol SCOC) are going to be known as SCOC suckers.

 

[BIO] Anonymous,...


Copyright © 2003, Anonymous. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 94 of Linux Gazette, September 2003