Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Ben Okopnik, Dan Wilder, Don Marti
...making Linux just a little more fun!
From The Readers of Linux Gazette
Hello!
I just switched from Gnome 2.0 to KDE 3.1 and I notice that the settings for the devices created by devfsd aren't saved between reboots. So I read through the docs and saw that I have to create a dev-state dir. Well, I already have that dir in /lib, and devfsd is set to save the settings (in /etc/devfsd.conf). And if I change the permissions on some devices (/dev/dsp for example), the change is also visible in the /lib/dev-state directory. However, after I reboot, the same problem: I don't have permissions. And this is really annoying me.
So any suggestions are greatly appreciated!
P.S. I am using Mandrake 9.0 with the default kernel.
Thanks!
Regards,
Stelian I.
Hi!
I've got my home LAN behind a cable modem, masqueraded to the outside world. The masquerading machine runs RedHat 7.3. What I'm trying to achieve is to share the bandwidth equally between the machines (about 7), following this algorithm: if only one host is making a connection at a given time, it gets the whole bandwidth; when a second connection from a second masqueraded machine arrives at the gateway, the bandwidth is divided equally between the two machines; if a third machine makes a connection, the bandwidth is split in three equal shares, and so on. Now if one of the machines that has already opened a connection makes a second one, I would want this connection to be allocated inside the machine's share, not as a separate member participating in the bandwidth division. Following this idea, if someone has 4 open downloads, someone else 7, and a third machine only 1, then bandwidth should be divided only by three and not by 12.
I've already read about SFQ, qdiscs and tc filter from the 'Advanced routing HOW-TO' but I couldn't find any info on how to shape/police traffic dynamically and based on ip source addresses. I do not want to split the bandwidth into seven slices from the beginning since not everybody is online all the time and this would waste available bandwidth for the others. I'd rather have the traffic shaped depending on how many internal hosts wish to access the internet at a given time.
I'm not really interested in providing differentiated traffic based on content (interactive, bulk, etc.) just a fair sharing of bandwidth, ignorant of how many download managers/ftp's each and everyone is running, and not allowing anyone to suffocate the shared internet connection with his/her requests.
Thank you very much in advance for the time taken to
answer this,
Radu Negut
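A minimal sketch of one approach, for readers wrestling with the same itch: the "flow" classifier used below rehashes SFQ per internal host instead of per connection, which is exactly the per-machine fairness being asked for. It assumes a kernel and iproute2 recent enough to ship the flow classifier (newer than the stock kernels of this era; the contemporary out-of-tree "esfq" patch did much the same job), eth1 as the LAN-facing interface, and 1mbit as the downstream rate - all illustrative:

# Cap downstream traffic just below the modem's real rate so the queue
# builds up here, where we can schedule it, rather than at the ISP.
tc qdisc add dev eth1 root handle 1: htb default 10
tc class add dev eth1 parent 1: classid 1:10 htb rate 1mbit
# Fair-queue within the cap; the flow filter makes SFQ hash on the
# internal host's address (the destination, on the LAN-facing side)
# rather than on each individual connection.
tc qdisc add dev eth1 parent 1:10 handle 10: sfq perturb 10
tc filter add dev eth1 parent 10: handle 1 protocol ip prio 1 \
    flow hash keys dst divisor 1024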
Hi everyone.
When Linux shuts down with halt -p, my PC will turn off, but Linux won't switch off the power to my PS/2 port. It is turned on when X starts, but when X shuts down, or the PC is shut down, the port remains on - and my optical mouse stays on, light glowing, etc.
.... However, Windows 98SE will shut this down properly every time. I have kernel 2.4.20 and have tried enabling ACPI and APM. And of course I have an ATX PSU, and nothing weird enabled either in cmos or jumpered.
I know that some boards just have power going through PS/2 ports after soft shutdown as a feature/bug, but Win98SE manages to shut down this one ok.
If someone knows how to fix this, I would really appreciate your help.
Thanks in advance.
D.Radel.
PS. Sorry for mentioning that other OS in this email.
Greetings. I installed Red Hat Linux 8.0 on my desktop computer. I used my PS/2 keyboard and mouse to install the software from CD images downloaded from Red Hat. After the software installation completed, my computer rebooted to the KDE login screen. My PS/2 keyboard and mouse do not work; only a USB keyboard and mouse work. When I boot my system into run level 3, my PS/2 keyboard works. How do I configure my system so that I can use my PS/2 mouse and keyboard with KDE?
Any information is appreciated. Thanks.
Hi all,
are there any reputable statistics available on the web comparing Linux,
other *nixes and Windows in the enterprise server market? Can somebody give
some pointers or links? Any reputable articles would also be welcome. I've
been rummaging around the web the whole day but couldn't find anything useful.
Thanks.
Dear Mike,
After my first article was published, about thirty people downloaded my console interface library. In the few days since you published my second, over ninety people have come for it. If only ten percent of those try to write an editor like I described, you will have turned my dream into a reality.
When I cycled into the city to log on at the daycentre this morning, I had been in the countryside for a week. I had no idea I had been published because I expected it would be in the March edition. I agreed with your comments about C++ not being the universal language I made it out to be and was going to rewrite it with your suggestions in mind.
Unless the author says he plans to do a revision, I assume the article is finished when I receive it. -- Mike
Now I realise it's gone out and I've seen the response, I don't care how bigoted people think I am.
I cannot thank you enough.
Yours faithfully, Stephen Bint
We have encouraged Stephen to write or be involved in more articles; you'll see some of the results when they're ready for publication. -- Heather
Thanks for the encouragement. It was good to hear what the article is doing for you. -- Mike
Mike,
Thank you for pointing out that I gave the misleading impression that C++ is the first language of all Linux users in my article, The Ultimate Editor (LG#87). Obviously Linux users vary widely in their choice of first language.
It would be a boon to the users of any language, especially beginners, to have an editor which is extensible in their own language. C++ users seem to be the only group who do not have one yet.
Stephen Bint
Dear Editor,
I cannot fully understand the article "The Ultimate Editor" in the Feb. LG. Having migrated from DOS to Linux without passing through MSWindooze, I have to ask what is wrong with the Linux text editors such as joe, xedit, gedit, gxedit, xeditplus, kedit, kwrite, kate, vim, gvim, cooledit - any more? Yes, I am sure there are.
I have seen the text editor in Windooze and thought it a joke compared with some of the Linux text editors mentioned.
Maybe Stephen Bint should try them all first before picking up more cigarette butts in the gutter, thus damaging his lungs and consequently his brain.
Regards
Peter Heiss
Well, I can understand the article. I can also disagree with it, but first I have to understand it. The title seems destined to invite flames (perhaps he's asking for a light for those soggy gutter butts).
He doesn't like the Linux text/console editors he's tried. He doesn't bother to lay out the criteria against which he's rating them. Other than that it's simply an announcement of a library which is built over the top of SLang which, of course, is built over the top of ncurses.
It would be easy to cast aspersions, even to question my fellow editors on the merits of including this article. However, I'll just let the article speak for itself. I'll ask, why doesn't xemacs support mouse on the console or within some form of xterm (xemacs does support ncurses color, and menus)? How about vim?
Personally I mostly use vim or xemacs in viper (vi emulation) mode. There are about 100 other text editors for Linux and UNIX text mode (and more for X --- nedit being the one I suggest for new users who don't want to learn vi --- or who decide they hate it even after they learn it).
-- Jim Dennis
I hope that Stephen's comment in the previous portion clarifies what he was really thinking. On the cigarette analogy, he has roll-your-own papers in his pocket, of a C++ variety, but needs someone to share loose tobacco. Then everyone sharing this particular vice can enjoy having a smoke together... downwind of folk who already like their text-editors :D Yes, folk who are used to seeing their brand down at the liquor store are likely to think making your own cigarettes is either quaint or nutty. But it's a big world out here, and the open source world is built by folk who like to roll their own... -- Heather
Let's remember that when Stephen complains, he doesn't just whine and expect others to do things his way. Rather, he takes it upon himself to contribute code that does whatever it is he's complaining about. See I Broke the Console Barrier in issue 86. That was the main reason I published The Ultimate Editor, even though I strongly objected to his assumptions that (1) C/C++ are the only worthwhile languages and (2) emacs should be flogged over the head for not using menus and keystrokes à la DOS edit. The second bothered me enough to insert an Editor's note saying there are other issues involved. The first didn't bother me quite as much, so I sent the author a private e-mail listing the C/C++ objections and asked him to consider a follow-up article or Mailbag letter that took them into account. And it worked: we had a great discussion between Stephen and the Editors' list about C/C++ vs scripting languages, and that led to some excellent article ideas.

Also remember that Stephen is homeless, and his Internet access is limited to an hour here, an hour there on public-access terminals. A far cry from simply sitting in front of your computer that happens to be already on. So he is putting a high level of commitment into writing these articles and programs, higher than many people would be willing to do. It's unfortunate that his limited Internet access prevented me from knowing at press time that he had decided on a last-minute revision to tone down the article and make it more balanced, but c'est la vie. -- Iron
In Linux Gazette (a most excellent ongoing effort, btw):
On behalf of the staff and the Gang, thanks! -- Heather
http://www.linuxgazette.com/issue87/bint.html
there's an editorial aside:
The Ultimate Editor would be what emacs should have been: an extensible
editor with an intuitive mouse-and-menu interface. [Editor's note: emacs
was born before mice and pulldown menus were invented.]
AFAIK, nope. Or at least, not exactly! This would be better:
............... [Editor's note: emacs was born before mice and pulldown menus were *widely known outside research institutes*.] ...............
Though of course, RMS was at a research institute, so may have known of mice by then.
For mouse references, see (amongst many other possibilities):
http://www.digibarn.com/friends/butler-lampson/index.html
or any of the Engelbart stuff. Mice were pretty well known by '72; Emacs dates from '76. TECO (Emacs' predecessor) does, however, date back almost to the invention of the mouse - I haven't found out exactly when TECO was initiated; around '64, I guess (but see
http://www.ibiblio.org/pub/academic/computer-science/history/pdp-11/teco/doc/tecolore.txt
if the question is really of interest).
I think, strictly speaking, that the editor macros were by their nature trapped in the environment of the editor they were macros for: TECO. So it isn't precisely right to say that TECO was emacs' predecessor; "parent" or "original environment" maybe, but I don't believe TECO was intended to be a general purpose editor ... much less the incredible power beyond that which the emacs environment grew into after taking off on its own.
Not all menus are pull-down, nor should a mouse be required to reach pull-down menus... a matter of style and usability. For my own opinion, I feel that emacs does have menus; they just don't always look the part. -- Heather
This is all, I agree, excessively pedantic - I've also offered my services as occasional proofreader.
JR
Thanks to everybody who offered to proofread. We now have some twenty volunteers. -- Iron
Dear Ben,
This is with reference to "Perl One-Liner of the Month: The Case of the Evil Spambots" which was published in LG#86. I especially enjoyed your definition of Gibberish.
Here is something I found in my fortune files. I am pretty sure wordsmithing in the Marketroid language is done using this procedure. Please keep up the good work of giving underhand blows to the Marketroid.
...............

Column 1           Column 2             Column 3
0. integrated      0. management        0. options
1. total           1. organizational    1. flexibility
2. systematized    2. monitored         2. capability
3. parallel        3. reciprocal        3. mobility
4. functional      4. digital           4. programming
5. responsive      5. logistical        5. concept
6. optional        6. transitional      6. time-phase
7. synchronized    7. incremental       7. projection
8. compatible      8. third-generation  8. hardware
9. balanced        9. policy            9. contingency

The procedure is simple. Think of any three-digit number, then select the corresponding buzzword from each column. For instance, number 257 produces "systematized logistical projection," a phrase that can be dropped into virtually any report with that ring of decisive, knowledgeable authority. "No one will have the remotest idea of what you're talking about," says Broughton, "but the important thing is that they're not about to admit it." - Philip Broughton, "How to Win at Wordsmanship"

...............
Cheers Raj Shekhar
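Broughton's procedure mechanizes nicely, for anyone who needs decisive, knowledgeable authority on tap. A quick bash sketch of the table above:

#!/bin/bash
# Pick a random three-digit number and read off one buzzword per column.
c1=(integrated total systematized parallel functional responsive optional synchronized compatible balanced)
c2=(management organizational monitored reciprocal digital logistical transitional incremental third-generation policy)
c3=(options flexibility capability mobility programming concept time-phase projection hardware contingency)
n=$((RANDOM % 1000))
printf "%03d: %s %s %s\n" "$n" "${c1[n/100]}" "${c2[n%100/10]}" "${c3[n%10]}"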
Gene's HTML-only email barely escaped the spam trap when Mike recognized that it was a followup to Issue 87, Mailbag #2.
Folks, while our main publication form is HTML, we have our own style guidelines and pre-processing to do; if you're not submitting a full article, we greatly prefer plain text. -- Heather
There's always the real thing.
ViewTouch is a genuine killer app. My life's work has resulted in the sales of millions of computers in the 26 years since I first started writing and using POS software. I invented many of the concepts in use today worldwide in retail software, including virtual touchscreen graphics to represent the universe of retail business operations. Much of what we are doing today will become standard in the future. ViewTouch is the original and longest-lived. Thanks for your comments.
Gene Mosher
Hello, Gene - I remember talking to you when I wanted to install VT for a client in Florida a few years back (they backed out of the deal by trying to rip me off, but, erm, I had the root password.) We parted ways, and they're still without a POS, last I heard. As I'd mentioned, I really like the look and feel of your app; however, good as it is, not being Open Source limits its applicability in the Linux world. If I remember correctly, that was the upshot of our discussion here.
Just for the record, folks - Gene was very friendly and very helpful despite the fact that the client had not yet bought a license from him; given his help, the setup (at least the part that I got done before the blow-up) was nicely painless.
Ben Okopnik
We also got a request for aid finding a POS from a fellow with a pizza parlor; luckily, Linux folk have already dealt with Pizza, although it's worth following the old articles over at LJ and seeing how that project moved along. We're still looking for news or articles from people using or developing open source Point of Sale, and I re-emphasize, we mean physical cash registers, not just e-commerce. E-commerce apps we've got by the boatload, on sale and in "AS IS" condition. -- Heather
I will be out of town March 18 - April 3 at the Python conference and Webware sprint (and visiting New York, Chicago, and Columbus [Ohio]), Heather will be busy the week before Memorial Day (May 26), and I'll be gone Memorial Day weekend.
This means I'll need to finalize the April issue by March 14, so the article deadline is March 10. I've let the recent authors know.
May's issue will be normal.
For June, the article deadline will be May 19 (a week early).
...making Linux just a little more fun!
By The Readers of Linux Gazette
Hi,
I am subbu and I encountered this problem when I ran
make - filename.
How do I fix this problem? Can you help me?
make: *** Warning: File `makefile.machine' has modification time in the future (2003-01-28 07:07:00 > 2003-01-28 00:09:19)
make: Nothing to be done for `all'.
make: warning: Clock skew detected. Your build may be incomplete.
I guess that my real-time clock is set incorrectly. How do I correct it?
I appreciate your time.
thanks,
subbu
Ugly HTML had to be beaten up and reformatted. Please send messages to The Answer Gang in text format. -- Heather
[Mike] The message means what it says: 'make' found a file that "was" modified in the future. That may or may not be a problem, and if it is, it may or may not be significant. Do you know by other means whether 'makefile.machine' should have been updated? I.e., did you modify any file related to it?
How did that file get on your machine in the first place? Did you copy or untar it from another computer in a way that would have preserved the foreign timestamp? If so, then the clock on the other computer may be wrong.
To check your own computer's clock, see the 'date' and 'hwclock' commands. 'date' shows and sets Linux's time; 'hwclock' shows and sets the real-time clock. First set Linux's time correctly, then use 'hwclock --utc --systohc' to reset the hardware clock.
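For example (the timestamp is illustrative, and the --utc flag assumes you keep the hardware clock in UTC):

date -s "2003-01-28 12:00:00"    # set the system (Linux) time
hwclock --utc --systohc          # copy the system time into the hardware clock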
If your hardware clock is pretty unreliable (as many are), you can use 'hwclock --adjust' periodically (see "man hwclock"), run ntp or chrony to synchronize your time with an Internet time server, or put the kernel in "eleven-minute mode" where it resets the hardware clock every eleven minutes. (Answer Gang, how do you activate eleven-minute mode anyway?)
[Ben] In the "hwclock" man page:
This mode (we'll call it "11 minute mode") is off until something turns it on. The ntp daemon xntpd is one thing that turns it on. You can turn it off by running anything, including hwclock --hctosys, that sets the System Time the oldfashioned way.
Also, see the "kernel" option under "man ntpd".
In reference to: Issue87, help wanted #1 -- Heather
You could try installing libdetect, and then running /usr/sbin/detect (detect is also used by Mandrake). Aside from that, the only thing I can suggest is filing bugs with Debian.
In reference to: Issue87, help wanted #2 -- Heather
The problem is the authentication on the Win2K side. Check out http://msdn.microsoft.com/library/default.asp?url=/library/en-us/apcguide/htm/appdevisv_8.asp Basically, since I assume the RAS server is running etc, you just need to enter this command on NT:
netsh ras set authmode NODCC
Last month Linux Magazine (UK - http://www.linux-magazine.com/issue/26/index_html) ran an article on setting up Direct Cable Connections with NT. I'll send on the details when I find where I left the magazine. You may try searching http://linux-magazin.de since Linux Magazine is a translated version of that.
http://www.tldp.org/HOWTO/Modem-Dialup-NT-HOWTO-9.html
This page may also be of use.
In reference to: Issue87, help wanted #1 -- Heather
Solution: stop using Red Hat, Debian or Mandrake kernels; download a fresh kernel from kernel.org and build with that.
The other answer is to look in your Makefile and check the line beginning with "EXTRAVERSION=". If you add your own name to that line and run make, you brand the kernel and modules with that name. Hope that fixes your problem.
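For example, the top of a 2.4.20 Makefile might end up looking like this (the "-custom" tag is whatever you choose); the kernel and its modules are then branded 2.4.20-custom, and the modules land in /lib/modules/2.4.20-custom:

VERSION = 2
PATCHLEVEL = 4
SUBLEVEL = 20
EXTRAVERSION = -custom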
"Sean Shannon" <sean@dolphins.org>
Tue, 4 Feb 2003 10:48:20 -0500
The hardest part in compiling a kernel is making the ".config" file. Some things to check:
[Thomas Adam] Yep -- good idea.
[Thomas Adam]
Well, I usually do something like:
alias beep='echo -e "\a"'
make modules && for i in $(seq 10); do beep; done && make bzImage && for i in $(seq 10); do beep; done
[Thomas Adam] or /dev/sda if s/he has a SCSI drive
To install the new kernel:
Copy the new kernel and system map to the /boot directory:

cp /usr/src/linux/arch/i386/boot/bzImage /boot/vmlinuz-2.2.16-22-custom
cp /usr/src/linux/System.map /boot/System.map-2.2.16-22-custom

Edit the file /etc/lilo.conf: add a new "image" section (add everything below).
See attached customkernel.lilo.conf.txt
[Thomas Adam] Often called a "stanza". Be careful though. I'd be more inclined to "label" this as "linux-test" so that it doesn't infringe on the "old" version of the kernel. Remember that up until this point, you're still testing (a trial run) the new kernel.
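Such a stanza might look like this sketch - the root device is an assumption (use your own), and the label follows Thomas's cautious "linux-test" suggestion:

image=/boot/vmlinuz-2.2.16-22-custom
    label=linux-test
    root=/dev/hda1
    read-only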
Activate the change as of next re-boot
/sbin/lilo
Install new System.map
rm /boot/System.map
ln -s /boot/System.map-2.2.16-22-custom /boot/System.map
Reboot the system to build module.dep file
shutdown -r now
[Thomas Adam] Hmmm, deprecated. "init 6" is a better way.
Reboot the system after the login prompt appears: enter the "alt-ctrl-del" key combination.
Reboot performed because modules.dep is created on first boot (if not, try running the "depmod" command manually then reboot)
[Thomas Adam] Not necessary. "depmod" is run through all of the init levels on a modern Linux system......
Good luck. Sean Shannon
[Jim Dennis] Most of this can be automated down to just two lines:
make menuconfig
make clean dep bzImage modules modules_install install
... note the list of multiple targets all on one line. Make install will look for an executable (usually a shell script) named /sbin/installkernel (or even ~/bin/installkernel) and call that with a set of arguments as documented in ... (/usr/src/linux) arch/i386/boot/install.sh
Here's a relevant excerpt:
# Copyright (C) 1995 by Linus Torvalds
# Adapted from code in arch/i386/boot/Makefile by H. Peter Anvin
#
# "make install" script for i386 architecture
#
# Arguments:
#   $1 - kernel version
#   $2 - kernel image file
#   $3 - kernel map file
#   $4 - default install path (blank if root directory)
#
# User may have a custom install script

if [ -x ~/bin/installkernel ]; then exec ~/bin/installkernel "$@"; fi
if [ -x /sbin/installkernel ]; then exec /sbin/installkernel "$@"; fi
So this can put the appropriate files into the appropriate places and run /sbin/lilo or whatever is necessary on your system.
I like to copy .config into /boot/config-$KERNELVERSION. Also, in my case the script has to mount -o remount,rw /boot since I normally keep /boot mounted in read-only mode. The script remounts it back to ro mode after running /sbin/lilo.
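A minimal sketch of such a script, assuming the read-only /boot arrangement just described and a lilo-based setup (the arguments arrive in the order documented in the excerpt above):

#!/bin/sh
# /sbin/installkernel: $1 = version, $2 = kernel image, $3 = System.map
mount -o remount,rw /boot
cp "$2" /boot/vmlinuz-"$1"
cp "$3" /boot/System.map-"$1"
cp /usr/src/linux/.config /boot/config-"$1"
/sbin/lilo
mount -o remount,ro /boot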
For new kernels you can save some time in menuconfig by preceding that make with:
cp /boot/config-$RECENTKERNELVERSION ./.config
make oldconfig
... which will set all the new config options to match any corresponding settings in the old config. Then you can focus on the new stuff in menuconfig.
Another useful tweak for some people is to edit ... (/usr/src/linux) .../scripts/Menuconfig and find the single_menu_mode variable:
# Change this to TRUE if you prefer all kernel options listed
# in a single menu rather than the standard menu hierarchy.
#
single_menu_mode=
... for those that don't like to have to expend extra keystrokes popping in and out of subsections of the menuconfig dialogs.
Sadly this particular feature has changed (at least by 2.5.59) with the inclusion of a new kconfig system (instead of menuconfig).
You can get a collapsible tree of menu options in the new system using "make menuconfig MENUCONFIG_MODE=single_menu". (However, it starts with all branches collapsed. <grump!>)
In reference to: Issue87, help wanted #6 -- Heather
If you use ipchains, then you should look at masquerading and port-forwarding.
The following command
ipmasqadm portfw -a -P tcp -L $4 4662 -R 192.168.1.100 4662
should do the trick.
rgds Patrick De Groote
Bruce Ferrell <bferrell@baywinds.org>
Sat, 22 Feb 2003 17:30:34 -0800
If you're using ipchains, you need something like this:
/usr/sbin/ipmasqadm portfw -a -P tcp -L <EXTERNAL ADDRESS> 11900 -R <INTERNAL ADDRESS> 11900
The point is, whether you use a variable or hardwire in an address, you need to specify both sides of the forwarding connection. Also note that the two examples selected a different port to play on, but the principle is the same. I hope that leaving both examples in makes it all clearer to readers. -- Heather
Jim Kielman <jimk@midbc.com>
05 Feb 2003 23:30:27 -0800
I ran into a similar problem with a client that had to have PCAnywhere access to one of the computers on his network. My solution was to use "ipmasqadm portfw" to forward the ports PCAnywhere needed to access. The server is running Debian potato with a stock 2.2.20 kernel. Here is what I use:
ipmasqadm portfw -a -P tcp -L <internet IP> 4162 -R <mldonkey IP> 4162
ipmasqadm portfw -a -P udp -L <internet IP> 4162 -R <mldonkey IP> 4162
ipmasqadm portfw -a -P tcp -L <internet IP> 4161 -R <mldonkey IP> 4161
ipmasqadm portfw -a -P udp -L <internet IP> 4161 -R <mldonkey IP> 4161
internet IP = the IP address of the computer connected to the internet.
mldonkey IP = the IP address of the computer running mldonkey.
I don't know if you need both udp and tcp, but it works for me. Hope this helps.
Regards
Jim Kielman
In reference to: Issue 86, 2c Tips #3 -- Heather
John Karns:
Cool - thanks for the pointer. I think I'll check it out. I knew that
some IDEs exist for Linux, but never really took the time to look at one.
Note we're pointing to his gathered list of numerous "integrated development environments" - the previous entry pointed to his description answering that (1) yes we have them, lots and lots; and (2) that if you think you're seeking one, you should make sure you are solving the right problem first. -- Heather
Is it possible to remap the <tab> key to another key on the keyboard? One of my co-workers has a broken left pinky and is going insane not being able to use the tab key to complete commands.
I've done a fair amount of searching to no avail... any help would be greatly appreciated.
[Mike] Grr, I just read yesterday about somebody turning Scroll Lock into another Escape key, now where was it...?
You can remap any key using the "loadkeys", "showkey" (singular) and "dumpkeys" commands. That's on the console. You have to do additional steps for X. See the Keyboard and Console HOWTO:
http://www.tldp.org/HOWTO/Keyboard-and-Console-HOWTO.html
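As a sketch, turning Caps Lock into a second Tab might look like the following; keycodes vary between keyboards (check with showkey on the console and xev under X), so 58 and 66 below are only the usual PC values:

# Console: kernel keycode 58 is Caps Lock on most PC keyboards
echo "keycode 58 = Tab" | loadkeys
# Under X, the same idea via xmodmap (X keycode 66 is typically Caps Lock)
xmodmap -e "keycode 66 = Tab"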
Thanks for the quick reply. Helps a lot.
James
I was desperately trying to use my Palm with the Evolution mailer. I recompiled everything but the kitchen sink to get Gnome2, and got the Gnome2 and Gnome 1.4 capplets and applets totally mixed up. In the end it was working, but Gnome was broken, so now I'm repairing Gnome2, and then I'll try to write the appropriate spells for my Palm connection.
Halb uses the Sorceror distro, which refers to compiling its scripts and packages as "casting spells". -- Heather
[Ben] I've found the "appropriate spells" for the Palm - for my M-125 with the USB cable, at least - to be "jpilot" and "coldsync". "jpilot" is really well done, except for the selection interface in the "Install" menu (select a file, click "Add". Select next file, click "Add". And so on for, say, 50 files.) "coldsync" works at a lower level - it's great for reinitializing user info, a quick install with or without synching, and generally tweaking Palm comms. As an example, among the files that I carry on the Palm, I have The Moby Shakespeare collection (all of The Bard in one file) and Gibbon's "Decline and Fall of the Roman Empire", volumes 1-6; both rather large (~5MB). "jpilot" refused to load them (segfaulted). So did my brother's Wind*ws Palm desktop. "coldsync", however, when used with the "slow sync" option, managed it just fine. KDE's palm app, though, is severely broken (to its credit, it mentions that in the initial screens); it hosed my Palm so hard that I had to do a hard reset, and re-init the user (another thing that "jpilot" couldn't handle.)
Yes, well, thanks for the info. Jpilot and stuff works like a charm (Palm M105, the small one), but I wanted to sync my mail addresses in Evolution... which is based upon Gnome 1.4 (c)applets, which are horrible to get to play nice with the Gnome 2.0 install.
Good to know about the big files though...
For those of you who don't know, ratpoison is a light (very light) window manager. (http://ratpoison.sourceforge.net). The basic scheme is to have all apps fullscreen, using screen-like key bindings to switch between windows. I've been using it for about an hour or so now (Hint: Look at the sample.ratpoisonrc in the doc directory. Don't end up hacking the source code to change the prefix key like I did.), and I'm liking it. The best thing, of course, is the tons of screen real estate you get without any window title bars, borders, etc.
If you like doing everything with the keyboard or you want tons of screen real estate, give ratpoison a whirl.
Also see this article on freshmeat: http://freshmeat.net/articles/view/581
If you aren't using RHL, simply edit /etc/rc.d/rc.local
Atul
But there is no such file in Debian. What file should I edit in Debian?
Thanks in advance.
The Linux Oracle has pondered your question deeply.
And in response, thus spake the Oracle:
echo '#!/bin/sh' > /etc/rc.local
chmod 744 /etc/rc.local
RL=`grep ':initdefault:' /etc/inittab | cut -d: -f2`
echo "LO:$RL:once:/etc/rc.local" >> /etc/inittab
killall -HUP init
You owe the Oracle a better understanding of why subverting the SysVInit architecture is fundamentally a bad idea in the first place.
Hi! I'm rayho. I would like to ask how to receive sound from the microphone and then transmit the sound from the Linux OS to the Windows OS system. Also, I don't understand where the sound source is stored (in which file) in the Linux OS, and what hardware and software I need to do this transmission. Thank you for your help!!
[Halb] Hi there,
This may sound a bit simple, but I would do it like this (a concrete sketch follows the lists below):
- record your sound with anything that works (grecord or something)
- save as any file format you like (wav, mp3, ogg)
- copy this file over to the windoze box (samba)
- play file on windows (media-player, realplayer,..)
needed Hardware:
- 2 pc with networking cards (rj45, Wlan,..)
- microphone
- loudspeakers (? I looked this one up in dict.leo.org)
needed software:
- Linux (any flavour you like)
- Windoze
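To make the record-and-copy route concrete, a sketch (arecord is ALSA's command-line recorder, in the same spirit as grecord; the share and user names are made up):

arecord -d 10 -f cd sound.wav                        # record 10 seconds from the mic
smbclient //winbox/share -U user -c 'put sound.wav'  # push it to the Windows box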
On the other hand, you might not want to transport single files, but want to do some kind of Internet audio broadcasting or something. You might want to look into
- http://www.shoutcast.com
- http://www.peercast.org (p2p radio based on the gnutella protocol; I don't know about the license, but source is available)
- http://streamerp2p.com/streamer.htm ( p2p radio GPLed)
What did you have in mind?
Neil Belsky wrote:
NTCR is another name for NTLM, which is supported by fetchmail.
I received this tip for inclusion in my HOWTO
http://geocities.com/lunatech3007/doing-things-howto.html
However, as it is a bit advanced for a newbie's HOWTO, I did not include it. I am forwarding it to you.
Regards
Raj
[C.R. Bryan III] Subject: Doing Things in GNU/Linux
Good stuff. Something I can put on a firewall machine when I put it onsite (since I leave Apache in for a status.cgi page anyway).
In the section "Terminating Misbehaving Programs":
If the afflicted machine is on a network with another Linux machine, or a Windows machine with PuTTY, there are additional steps that can be taken before hitting the Big Red Two-by-Four switch. (My network runs RHL 6.2 on older boxes, old as in P133, so I get practice in this every time Netscape walks into a Java site and freezes.)
- Shell into the afflicted machine. Use ssh if you've got it, telnet otherwise. If VNC is installed at both ends, maybe you can use that. Just because the local desktop is frozen doesn't always mean that all desktop functioning is frozen. If the machine won't log you in, obviously it's game-over, so at that point you have to reset the box. Often, though, especially on older boxen, it's just X that's either frozen or in a really deep thrashing session, and you can get a shell prompt. Root-to-root ssh is most convenient.
- Get root on the afflicted box with su.
- Try to kill off just the program that's freezing things, and try to do it nicely.
a. If you can get X apps to forward, or you can get a VNC window open, you can bring up kpm (the KDE process manager), which, with all the information presented, allows you to pinpoint just the app to kill with a right-click. Try several times to get it to go away, starting with Hangup, then Terminate, then Kill. The more of a chance you give the program to clean up its exit, the less garbage you'll leave lying around in the system.
b. If you know the name of the program that has gotten hung, and only one instance of it is running, use killall. Let's assume for example that it's netscape:
# killall -HUP netscape
# killall -TERM netscape
# killall -KILL netscape
Killall does just that, kills off every instance of a program that it finds. That's appropriate for netscape, since it has a session-manager core which is usually the part that's locked up. If you've got a dozen xterms open, and ytree running in half of them, though, killing off every ytree might not be what you want; often, it's the helper-app that ytree launched that's frozen up (lynx, for instance) and you can killall that.
c. Use top and other shell tools to zero in on which process to kill, then use kill. (Here I don't have that much experience: when I need to use top and kill, it's on a firewall without X, where all the running processes fit in an xterm/ssh window, so it's simple to fish out the pid to kill.)
- If it won't kill, or you can't figure out who to kill, or things just seem hosed at the X level, as long as you can get root on a shell command-line, you can tell it:
# init 3;init 5
...and that'll do what ctrl-alt-bs would do: restart X to a graphic login. Your underlying filesystem will have cores and DEADJOEs left lying around from the X-level programs that had to abort, but you won't have to fsck everything on a dirty boot.
- If you think you might have stuck ports and locks from the killed X-level processes, and the machine doesn't have duties that would prevent it, or if X won't come back up, you can do a clean reboot to put things back in order, probably in less time than it'd take to find and free the stuck resources...
# shutdown -r now
That'll take down the X level, giving the X programs a chance to clean up after themselves, then the rest of the machine, and your filesystem will be unmounted and rebooted cleanly.
Bottom line: if you can shell or VNC into the frozen machine, there are things you can do to avoid losing data in the innocent processes you're running in X or corrupting your filesystem. You can even do some of these things from Windows if you have the right tools (telnet, ssh, PuTTY, VNC), as long as you have two or more machines on the same network.
How much of this you think might be appropriate to a newbie-help, I don't know, but that's my experience, anyway.
In reference to: Issue 87, 2c Tips #1 -- Heather
Hello,
It's great how you tackled this problem. I have a simple SoundBlaster 16 card. This card (with this chipset) appears to be multichannel.
I play online games on the internet (Tribes2) and we use a voice communication program (Teamspeak2) to talk to each other. I also want to hear the sound of the game. Teamspeak2 is able to use a different channel (dsp0/dsp1).
So I address the game sound to /dev/dsp1 and the voice communication to /dev/dsp0. I couldn't get it working with the ALSA drivers, but others with different soundcards can, so I used the OSS driver. It works great with only one soundcard.
If a program only wants to address the default /dev/dsp (dsp0) and you want to let it use /dev/dsp1, you can change the link /dev/dsp --> /dev/dsp1.
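In command form (as root; this assumes a conventional /dev where dsp is an ordinary device node or symlink):

rm /dev/dsp
ln -s dsp1 /dev/dsp    # the default audio device now points at the second DSP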
More information on http://www.teamspeak.org
Linux is a very stable platform for games, and there is now a (free) voice communication program too.
Whew! One thing I can say, there was a lot of good stuff this month. There's so many good things to say and I just can't edit them all.
But don't you worry. We've got something for everyone this month. Newbies can enjoy a list of a bunch of apps designed to make setup a little more fun (or at minimum, a little less headache). The intelligentsia can see what the Gang thinks of some academic notions for the future of kernels. And everyone hungering for more about routing has something keen to get their teeth into. Experimenters... there's a nice trick with two monitors here.
In the world of Linux there's more to politicking than just the DMCA guys trying to get us to stop ever looking at "their" precious copyrighted works again. Among the Linux kernel folk there are snatches here and there of an ongoing debate about source code control systems. You see, BitKeeper has the power to do grand things... but for people who have not decided that they hate CVS, it's a bit of a pain to pull out small patches. For people who don't qualify to use BitKeeper under its only-almost-free license (you can't use it if you work for someone who produces a competing source code control system, if I read things right... thus anyone who works for RH shouldn't, et al.), this is a bad thing.
For that matter I'm a bit of a programmer myself, but if I'm going to even glance in the kernel's direction, I need much smaller pieces to chew on, and I really didn't want to spend the better part of a month learning yet another source control system. (Not being paid for doing so being a guiding factor in this case.) I had to thrash around the net quite a bit to get a look at a much smaller portion of the whole.
So some of the kernel gang wrote some scripts to help them with using the somewhat friendly web interface (folks, these definitions of "friendly" still need a lot of work) and Larry threatened to close down bkweb if that bandwidth hit got too high. In my opinion, just about the worst thing he could have said at that moment - it highlights why people are trying to escape proprietary protocols - they want features, but Linux folk, having tasted the clean air of freedom, don't want to be locked indoors just because a roof over their code's head is good to have at times.
Don't get me wrong. Giant public mirrors of giant public projects are great things, and as far as I can tell BitKeeper is still committed to a friendly hosting of the 2.5.x kernel source tree, among a huge number of other projects. Likewise SourceForge. But we also need ways to be sure that the projects themselves can outlast the birth and death of companies, friendships, or the interest of any given individual to be a part of the project. The immortality of software depends on the right to copy it as much as you need to and store it anywhere or in any form you like. If the software you are using isn't immortal in this sense then neither are the documents, plans, hopes, or dreams that you store in it. More than the "viral freedom" clauses in the GPL or the "use it anywhere, just indemnify us for your dumb mistakes" nature of the MIT and BSDish licenses, this is the nature of the current free software movement. And you can quote me on that.
Readers, if you have any tales of your own escapes from proprietary environments into Linux native software, especially any where it has made your life a little more fun, then by all means, we'd love to see your articles and comments. Thank you, and have a great springtime.
From Chris Gibbs
Answered By Jimmy O'Regan, Jim Dennis
Hi ya,
I have a dual headed system. I am not really happy with xinerama because having a different resolution on each monitor does not make sense for me, and having two separate desktops for a single X session seems limiting. Neither solution works well for apps like kwintv.
But this is Linux! I don't just want to have cake and eat it, I want the factory that makes it! What I really want is to have a PS/2 mouse and keyboard associated with one monitor, and a USB mouse and keyboard associated with the other monitor, and the ability not just to run X from each, but to have text mode available also.
The idea also being that I could have a text mode session and an X session at the same time; that way I can have kwintv fullscreen and play advmame in svga mode full screen at the same time.
So how do I initialise the second video card (one pci, one agp) so I can make it tty2 monitor or similar?
[Jimmy] Google
http://www.google.com/linux?hl=en&lr=&ie=UTF-8&oe=utf-8&q=two+keyboards+two+mice+two+keyboards&btnG=Google+Search
came up with these links: http://www.ssc.com/pipermail/linux-list/1999-November/028191.html http://www.linuxplanet.com/linuxplanet/tutorials/3100/1
Am I greedy or wot?
[Jimmy] Nah, cost effective. "Able to maximise the potential of sparse resources". Some good CV-grade B.S.
These links are to articles about X; I already know I can have X however I want it across the monitors. That's easy...
What I want is separate text mode consoles. So, at the risk of repeating myself: how do I initialise the second video card for text mode (not for X), and how do I associate it with specific ttys?
[Jimmy] Well, you could set up the first set for the console and use the second for X. Okay, not what you asked. So, to your actual question.
The device should be /dev/fb1, or /dev/vcs1 and /dev/vcsa1 on older kernels. You should have better luck with a kernel with Framebuffer support - according to the Linux Console Project (http://linuxconsole.sourceforge.net) there's hotplug support & multiple monitor support. The Framebuffer HOWTO has a section on setting up two consoles (http://www.tldp.org/HOWTO/Framebuffer-HOWTO-14.html). The example focuses on setting up dual headed X again, but it should contain what you need - "an example command would be "con2fb /dev/fb1 /dev/tty6" to move virtual console number six over to the second monitor. Use Ctrl-Alt-F6 to move over to that console and see that it does indeed show up on the second monitor."
[JimD] It's serendipitous that you should ask this question, since I just came across a slightly dated article on how to do this:
http://www.linuxplanet.com/linuxplanet/tutorials/3100/1
Some of the steps in this process might be unnecessary in newer versions of XFree86 and the kernel. I can't tell you for sure as I haven't tried this. Heck, I haven't even gotten around to configuring a dual headed Xinerama system, yet.
From Joydeep Bakshi
Answered By Rick Moen, Dave Bechtel, Heather Stern
[Heather] All this is in response to last month's Help Wanted #1
1) kudzu is the DEFAULT H/W detection tool in RH, and harddrake in MDK. Is there anything in Debian?
[Rick] As usual, the Debian answer is "Sure, which ones do you want?"
- discover
- Hardware identification system (thank you, Progeny Systems, Inc.), for various PCI, PCMCIA, and USB devices.
[Dave]
apt-get update; apt-get install discover

('apt-cache search discover':)
discover - hardware identification system
discover-data - hardware lists for libdiscover1
libdiscover-dev - hardware identification library development files
libdiscover1 - hardware identification library
[Heather] Worthwhile to also search on the words "detect" and "config" and "cfg" since many of the configurators or their helper apps have those words in their package names.
discover only detects the h/w, but kudzu does one task extra: it also configures the h/w. Do you have any info on whether the latest version of discover does this auto-config? (I am in Debian 3.0.)
[Rick] I'm unclear on what you mean by "configure the hardware". Discover scans the PCI, USB, IDE, PCMCIA, and SCSI buses. (Optionally, it scans ISA devices, and the parallel and serial ports.) It looks (by default) for all of these hardware types at boot time: bridge cdrom disk ethernet ide scsi sound usb video. Based on those probes, it does appropriate insmods and resetting of some device symlinks.
What problem are you trying to solve?
[Heather] For many people there's a bit of a difference between "the machine notices the hardware" and "my apps which want to use a given piece of hardware work without me having to touch them." In fact, finishing up the magic that makes the second part happen is the province of various apps that help configure XFree86 (SaX2/SuSE, Xconfigurator/RedHat, XF86Setup and their kindred) - some of which are better at having that magical "just works" feeling than others. Others are surely called on by the fancier installation systems too. Thus Rick has a considerable list below.
For ide, scsi, cdrom it all seems rather simple; either the drives work, or they don't. I haven't seen any distros auto-detect that I have a cd burner and do any extra work for that, though.
PCMCIA and USB are both environments that are well aware of the hot swapping uses they're put to - generally once your cardbus bridge and usb hub types are detected, everything else goes well - or your device is too new to have a driver for its part of the puzzle. You must load up (or have automatically loaded by runlevels) the userland half of the support, though (package names: pcmcia-cs, usbmgr).
There are apps to configure X and one can hope that svgalib "just works" on its own since it has some effort to video detection built-in. If you don't like what you get, try using a framebuffer enabled kernel, then tell X to use the framebuffer device - slower, but darn near guaranteed to work. svgalib will spot your framebuffer and use it. My favorite svgalib app is zgv, and there are some games that use it too.
I know of no app which is sufficiently telepathic to decide what your network addresses should be, the first time through. However, if you're a mobile user, there are a number of apps that you can train to look for your few favorite hosting gateways and configure the rest magically from there, using data you gave them ahead of time. PCMCIA schemes can also be used to handle this.
[Rick]
- kudzu, kudzu-vesa
- Hardware-probing tool (thank you, Red Hat Software, Inc.) intended to be run at boot time. Requires hwdata package. kudzu-vesa is the VBE/DDC stuff for autodetecting monitor characteristics.
- mdetect
- Mouse device autodetection tool. If present, it will be used to aid XFree86 configuration tools.
- printtool
- Autodetection of printers and PPD support, via an enhanced version of Red Hat Software's Tk-based printtool. Requires the pconf-detect command-line utility for detecting parallel-port, USB, and network-connected printers (which can be installed separately as package pconf-detect).
- read-edid
- Hardware information-gathering tool for VESA PnP monitors. If present, it will be used to aid XFree86 configuration tools.
[Heather] Used alone, it's an extremely weird way to ask the monitor what its preferred modelines are. Provided your monitor is bright enough to respond with an EDID block, the results can then be used to prepare an optimum X configuration. I say "be used" for this purpose because the results are very raw and you really want one of the apps that configure X to deal with this headache for you. Trust me - I've used it directly a few times.
[Rick]
- sndconfig
- Sound configuration (thank you, Red Hat Software, Inc.), using isapnp detection. Requires kernel with OSS sound modules. Uses kudzu, aumix, and sox.
[Dave] BTW, Knoppix also has excellent detection, and is also free and Debian-based: ftp://ftp.uni-kl.de/pub/linux/knoppix
[Heather] Personally I found his sound configuration to be the best I've encountered; SuSE does a pretty good job if your card is supported under ALSA.
When you decide to roll your own kernel, it's critical to doublecheck which of the three available methods for sound setup you're using, so that you can compile the right modules in - ALSA, OSS, or kernel-native drivers. Debian's make-kpkg facility makes it easy to keep extra packages that depend directly on kernel parts - like pcmcia and alsa - in sync with your customizations, by preparing a modules .deb file to go with your new kernel.
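A sketch of that workflow - the revision string, kernel version, and the exact .deb names it produces are illustrative:

cd /usr/src/linux
make-kpkg --revision custom.1 kernel_image modules_image
dpkg -i ../kernel-image-2.4.20_custom.1_i386.deb ../alsa-modules-*.deb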
[Rick]
- hotplug
- USB/PCI device hotplugging support, and network autoconfig.
- nictools-nopci
- Diagnostic and setup tools for many non-PCI ethernet cards
- nictools-pci
- Diagnostic and setup tools for many PCI ethernet cards.
- mii-diag
- "A little tool to manipulate network cards" (examines and sets the MII registers of network cards).
2) I have installed kudzu in Debian 3.0, but it is not running as a service; I need to execute the kudzu command manually.
[Rick] No, pretty much the same thing in both cases. You're just used to seeing it run automatically via a System V init script in Red Hat. If you'd like it to be done likewise in Debian, copy /etc/init.d/skeleton to /etc/init.d/kudzu and modify it to do kudzu stuff. Then, use update-rc.d to populate the /etc/rc?.d/ runlevel directories.
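Concretely, something like this ("defaults" populates the usual runlevels; the edit is where you make the skeleton's start/stop actions actually run kudzu):

cp /etc/init.d/skeleton /etc/init.d/kudzu
editor /etc/init.d/kudzu     # fill in the start/stop actions
chmod +x /etc/init.d/kudzu
update-rc.d kudzu defaults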
Finally, the exact solution. I was searching for this for a looong time. Rick, I can't figure out how to thank you enough. Take care.
Moreover, it couldn't detect my Epson C21SX printer, but under MDK 9.0 kudzu detected the printer.
[Heather] Perhaps it helpfully informed you what it used to get the printer going? Many of the rpm based systems are using CUPS as their print spooler; it's a little smoother under cups than some of its competitors, to have it auto-configure printers by determining what weird driver they need under the hood. My own fancy Epson color printer needed gimp-print, which I used the linuxprinting.org "foomatic" entries to link into my boring little lpd environment happily. Some printers are supported directly by ghostscript... which you will need anyway, since many GUI apps produce postscript within their "print" or "print to file" features.
[Rick] Would that be an Epson Stylus C21SX? I can't find anything quite like that name listed at:
http://www.linuxprinting.org/printer_list.cgi?make=Epson
I would guess this must be a really new, low-end inkjet printer.
The version of kudzu (and hwdata) you have in Debian's stable branch (3.0) is probably a bit old. That's an inherent part of what you always get on the stable branch. If you want versions that are a bit closer to the cutting edge, you might want to switch to the "testing" branch, which is currently the one named "sarge". To do that, edit /etc/apt/sources.list like this:
deb http://http.us.debian.org/debian testing main contrib non-free
deb http://non-us.debian.org/debian-non-US testing/non-US main contrib non-free
deb http://security.debian.org testing/updates main contrib non-free
deb http://security.debian.org stable/updates main contrib non-free
Then, do "apt-get update && apt-get dist-upgrade". Hilarity ensues.
(OK, I'll be nice: This takes you off Debian-stable and onto a branch with a lower commitment on behalf the Debian project to keep everything rock-solid, let alone security-updated. But you might like it.)
A nice discussion. Thanks a lot.
[Rick] All of those information items are now in my cumulative Debian Tips collection, http://linuxmafia.com/debian/tips . (Pardon the dust.)
OK, thanks a lot. You have clarified everything very well. Now I shouldn't have any problems regarding auto-detection in Debian.
Great site !
[Heather] For anyone looking at this and thinking "Oy, I don't already have Debian installed, can I avoid this headache?" - Yes, you probably can, for a price. While Debian users from both commercial and homegrown computing environments alike get the great upgrade system, this is where getting one of the commercial variants of Debian can be worth the bucks for some people. Note that commercial distros usually come with a bunch of software which is definitely not free - and not legal to copy for your pals. How easy they make it to separate out what you could freely tweak, rewrite, or give away varies widely.
- Libranet
- http://www.libranet.com
Canadian company; text-based installer, based on but just a little more tuned up than the generic Debian one. Installs about a 600 MB "base" that's very usable, then offers to add some worthwhile software kits on your first boot.
- Xandros
- http://www.xandros.com
The current bearer of the torch that Corel Linux first lit. Reviews about it sing its newbie-friendly praises.
- Lindows
- http://www.lindows.com
Mostly arriving pre-installed in really cheap Linux machines near you in stores that you just wouldn't think of as computer shops. But it runs MSwin software out of the box too.
- Progeny
- http://www.progenylinux.com
More into offering professional services for your corporate or perhaps even industrial Linux needs than particularly a distribution anymore, they committed their installer program to the auspices of the Debian project. So it should be possible for someone to whip up install discs that use that instead of the usual geek-friendly textmenu installer.
If you find any old Corel Linux or Stormix discs lying around, they'll make an okay installer, provided your video card setup is old enough for them to deal with. After they succeed you'll want to poke around, see what they autodetected, take some notes, then upgrade the poor beasties to current Debian.
In a slightly less commercial vein,
- Knoppix
- http://www.knopper.net/knoppix
[Base page in German, multiple languages available] While not strictly designed as a distro for people to install, it has great hardware detection of its own accord, and a crude installer program available. At minimum, you can boot from its CD, play around a bit, and take notes once it has detected and configured itself. A runs-from-CD distribution. If you can't take the hit from downloading a 700 MB CD all at once - it takes hours and hours on my link, and I'm faster than most modems - he lists a few places that will sell you a recent disc and ship it to you.
- Good-Day GNU-Linux
- http://ggl.good-day.net
LWN's pointer went stale, but this is where it moved to; the company produced sylpheed and has some interesting things bundled in this. It also looks like they preload notebooks, but I can't read Japanese to tell you more.
And of course the usual Debian installer discs.
Anytime you can ask a manufacturer to preload linux - even if you plan to replace it with another flavor - let them. You will tell them that you're a Linux and not a Windows user, and you'll get to look at the preconfiguration they put in. If they had to write any custom drivers, you can preserve them for your new installation. Likewise whatever time they put into the config files.
There's a stack more at the LWN Distributions page (http://old.lwn.net/Distributions) if you search on the word Debian, although many are localized, some are specialty distros, and a few are based on older forms of the distro.
From Beth Richardson
Answered By Jim Dennis, Jason Creigton, Benjamin A. Okopnik, Kapil Hari Paranjape, Dan Wilder, Pradeep Padala, Heather Stern
Hello,
I am a Linux fan and user (although a newbie). Recently I read the paper entitled "Maintainability of the Linux Kernel" (http://www.isse.gmu.edu/faculty/ofut/rsrch/papers/linux-maint.pdf) in a course I am enrolled in at Colorado State University. The paper is essentially saying that the Linux kernel is growing linearly, but that common coupling (if you are like me and cannot remember which kind of coupling is which - think global variables here) is increasing at an exponential rate. Side note, for what it is worth: the paper was published in what I have been told is one of the "most respected" software journals.
I have searched around on the web and have been unable to find any kind of a reply to this paper from a knowledgeable Linux supporter. I would be very interested in hearing the viewpoint from the "other side" of this issue!
Thanks for your time, Beth Richardson
[JimD] Basically it sounds like they're trying to prove that bees can't fly.
(Traditional aerodynamic theories and the Bernoulli principle can't be used to explain how bees and houseflies can remain aloft; this is actually proof of some limitations in those theories. In reality, the weight of a bee or a fly relative to air density means the insect can do something a little closer to "swimming" through the air --- their mass makes air relatively viscous to them. Traditional aerodynamic formulae are written to cover the case where the mass of the aircraft is so high vs. air density that some factors can be ignored.)
I glanced at the article, which is written in typically opaque academic style. In other words, it's hard to read. I'll admit that I didn't have the time to analyze (decipher) it; and I don't have the stature of any of these researchers. However, you've asked me, so I'll give my unqualified opinion.
Basically they're predicting that maintenance of the Linux kernel will grow increasingly difficult over time because a large number of new developments (modules, device drivers, etc.) are "coupled" to (depend on) a core set of global variables.
[Jason] Wouldn't this affect any OS? I view modules/device drivers depending on a core as a good thing, when compared to the alternative, which is depending on a wide range of variables. (Or perhaps the writers have a different idea in mind. But what other alternative to depending on a core would there be, other than depending on a lot of things?)
[Ben] You said it yourself further down; "micro-kernel". It seems to be the favorite rant of the ivory-tower CS academic (by their maunderings shall ye know them...), although proof of this world-shattering marvel seems to be long in coming. Hurd's Mach kernel's been out, what, a year and more?
[Kapil] Here comes a Hurd of skeletons out of my closet! Being a very marginal Hurd hacker myself, I couldn't let some of the remarks about the Hurd pass. Most of the things below have been written about better elsewhere by more competent people (the Hurd Wiki, for example: http://hurd.gnufans.org), but here goes anyway...
The Mach micro-kernel is what the Hurd runs on top of. In some ways Hurd/Mach is more like Apache/Linux. Mach is not a part of the Hurd. The newer Hurd runs on top of a version of Mach built using Utah's "oskit". Others have run the "Hurd" over "L4" and other micro-kernels.
The lack of hardware and other support in the existing micro-kernels is certainly one of the things holding back common acceptance of the Hurd. (For example, neither "mach" nor "oskit" has support for my video card--i810--for which Linux support came late in the 2.2 series.)
Now, if only Linux were written in a sufficiently "de-coupled" way to allow the stripping away of the file-system and execution system, we would have a good micro-kernel already! The way things are, the "oskit" guys are perennially playing catch-up to incorporate Linux kernel drivers. Since these drivers are not sufficiently de-coupled, they are harder to incorporate.
[JimD] This suggests that the programming models are too divergent in some ways. For each class of device there are a small number of operations (fops, bops, ioctl()s) that have to be supported (open, seek, close, read, write, etc). There are relatively few interactions with the rest of the kernel for most of this (which is why simple device driver coding is in a different class from other forms of kernel hacking).
The hardest part of device driver coding is getting enough information from a vendor to actually implement each required operation. In some cases there are significant complications for some very complex devices (particularly in the case of video drivers; which, under Linux sans framebuffer drivers, are often implemented in user space by XFree86.)
It's hard to imagine that any one device driver would be that difficult to port from Linux to any other reasonable OS. Of course, the fact that there are THOUSANDS of device drivers and variants within each device driver does make it more difficult. It suggests the HURD needs thousands (or at least hundreds) of people working on the porting. Obviously, if five hundred HURD hackers could crank out a device driver every 2 months for about a year --- they'd probably be caught up with Linux device driver support.
Of course, I've only written one device driver for Linux (and that was a dirt-simple watchdog driver for a NAS system motherboard) and helped on a couple more (MTD/flash drivers, same hardware). It's not so much "writing a driver" as plugging a few new values into someone else's driver, and reworking a few bits here or there.
One wonders if many device drivers could be consolidated into some form of very clever table-driven code. (Undoubtedly what the UDI movement of a few years ago was trying to foist on everyone.)
[Kapil] On the other side, Linux "interferes too much" with user processes, making Hurd/Linux quite hard and probably impossible---but one can dream...
[JimD] Linux was running on Mach (mkLinux) about 5 years ago. I seem to recall that someone was running a port of Linux (or mkLinux) on an L4 microkernel about 4 years ago (on a PA RISC system if I recall correctly).
It's still not HURD/Linux --- but, as you say, it could (eventually) be.
Linux isn't really monolithic, but it certainly isn't a microkernel. This bothers purists; but it works.
Future releases of Linux might focus considerably more on restructuring the code, providing greater modularity and massively increasing the number of build-time configuration options. Normal users (server and workstation) don't want more kernel configuration options. However, embedded systems and hardware engineers (especially for the big NUMA machines and clustering systems) need them. So the toolchain and build environment for the Linux kernel will have to be refined.
As for features we don't have yet (in the mainstream Linux kernel): translucent/overlay/union filesystems, transparent process checkpoint and restore, true swapping (in addition to paging; it might come naturally out of checkpointing), network console, SSI (single system image) HA clustering (something like VAX clusters would be nice, from what I hear), and the crashdump, interactive debuggers, trace toolkit, dprobes and other stuff that was "left out" of 2.5 in the later stages before the feature freeze last year.
I'm sure there are things I'm forgetting and others that I've never even thought of. With all the journaling, EAs and ACLs, and the LSM hooks and various MAC (mandatory access control) mechanisms in LIDS, SELinux, LOMAC, RSBAC and other patches, we aren't missing much that was ever available in other forms of UNIX or other server operating systems. (The new IPSec and crypto code will also need considerable refinement.)
After that, maybe Linux really will settle down to maintenance: to optimization, restructuring, consolidation, and dead code removal. Linus might find that stuff terminally boring and move on to some new project.
[JimD] What else is there to add to the kernel?
[Pradeep] As my advisor says: everything that has never been thought of before. A lot of people feel the same way about systems research. I am planning to specialize in systems. What do you guys think about systems research? Is it as pessimistic as Rob Pike makes it sound? http://www.cs.bell-labs.com/who/rob/utah2000.pdf
[Dan] Some would say, "streams". (he ducks!)
[JimD] LiS is there for those that really need it. It'll probably never be in the mainstream kernel. However, I envision something like a cross between the Debian APT system and the FreeBSD ports system (or LNX-BBC's Gar or Gentoo's source/package systems) for the kernel.
In this case some niche, non-mainstream kernel patches would not be included in Linus' tarball, but hooks would be found in a vendor augmented kbuild (and/or Makefile collection) that could present options for many additional patches (like the FOLK/WOLK {Fully,Working} OverLoaded Kernel). If you selected any of these enhancements then the appropriate set of patches would be automatically fetched and applied, and any submenus to the configuration dialog would appear.
Such a system would have the benefit of allowing Linus to keep working exactly as he does now, keeping pristine kernels, while making it vastly easier for sysadmins and developers to incorporate those patches that they want to try.
If it was done right it would be part of UnitedLinux, Red Hat, and Debian. There would be a small independent group that would maintain the augmented build system.
The biggest technical hurdle would be patch ordering. In some cases portions of some patches might have to be consolidated into one or more patches that exist solely to prevent unintended dependency loops. We see this among Debian GNU/Linux patches fairly often --- though those are binary package dependencies rather than source code patch dependencies. We'd never want a case where you had to include LiS patches because the patch maintainer applied it first in his/her sequence and one of its changes became the context for another patch --- some patch that didn't functionally depend on LiS but only seemed to for context.
I think something like this was part of Eric S. Raymond's vision for his ill-fated CML2. However, ESR's work wasn't in vain; a kbuild system in C was written and will be refined over time. Eventually it may develop into something with the same features that he wanted to see (though it will take longer).
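To make the idea concrete: the fetch-and-apply step of such a system might amount to little more than a loop like the one below. Everything in it - the archive URL, the series file, the patch names - is hypothetical, purely for illustration:

#!/bin/sh
# Hypothetical sketch: apply an ordered series of feature patches on top
# of a pristine Linus tarball. The ordering recorded in the "series" file
# is what keeps one patch from becoming accidental context for another.
set -e
BASE=http://patches.example.org/2.5         # made-up patch archive
cd /usr/src/linux
while read p; do
    case "$p" in ""|"#"*) continue ;; esac  # skip blank lines and comments
    wget -q "$BASE/$p.diff.gz"
    gzip -dc "$p.diff.gz" | patch -p1       # apply in series order
done < /etc/kernel-patches/series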
As examples of more radical changes that some niches might need or want in their kernels: there used to be a suite of 'ps' utilities that worked without needing /proc. The traditional ps utils worked by walking through /dev/kmem, traversing a couple of data structures there. I even remember seeing another "devps" suite, which provided a simple device interface alternative to proc. The purpose of this was to allow deeply embedded, tightly memory-constrained kernels to work in a smaller footprint. These run applications that have little or no need for the introspection that is provided by /proc trees, and have only the most minimal process control needs. It may be that /proc has become so interwoven into the Linux internals that a kernel simply can't be built without it (and that the build option simply affects whether /proc is visible from userspace). These embedded systems engineers might still want to introduce a fair number of #defines to optionally trim out large parts of /proc.

Another example is the patch I read about that effectively redefines the printk macro as a C comment, thus making a megabyte (uncompressed) of printk() calls disappear in the pre-processor pass.
These are things that normal users (general purpose servers and workstations) should NOT mess with. Things that would break a variety of general purpose programs. However, they can be vital to some niches. I doubt we'll ever see Linux compete with eCOS on the low end; but having a healthy overlap would be good.
[JimD] Are there any major 32 or 64 bit processors to which Linux has not been ported?
[Ben] I don't mean to denigrate the effort of the folks that wrote Hurd, but... so what? Linux serenely rolls on (though how something as horrible, antiquated, and useless as a monolithic kernel can hold its head up given the existence of The One True Kernel is a puzzle), and cooked spaghetti still sticks to the ceiling. All is (amazingly) still right with the world.
[Jason] You know, every time I get to thinking about what the Linux kernel should have, I find out it's in 2.5. Really. I was thinking, Linux is great but it needs better security, more than just the standard Linux permissions. Then I look at 2.5: Linux Security Modules. Well, we need a generic way to assign attributes to files, other than the permission bits. 2.5 has extended attributes (name:value pairs at the inode level) and extended POSIX ACLs.
[Ben] That's the key, AFAIC; a 99% solution that's being worked on by thousands of people is miles better than a 100% solution that's still under development. It's one of the things I love most about Linux; the amazing roiling, boiling cauldron of creative ideas I see implemented in each new kernel and presented on Freshmeat. The damn thing's alive, I tell ya.
[Kapil] What you are saying is true and is (according to me) the reason why people will be running the Hurd a few years from now!
The point is that many features of micro-kernels (such as a user process running its own filesystem and execution system, a la user-mode-linux) are becoming features of Linux. At some point folks will say "Wait a minute! I'm only using the (micro) kernel part of Linux as root. Why don't I move all the other stuff into user space?" At that point they will be running Hurd/Linux or something like it.
Think of the situation in '89-'91, when folks on DOS or Minix were jumping through hoops in order to make their boxes run gcc and emacs. Suddenly, the hoops could be removed because of Linux. In the same way, the "coupled" parts of Linux are preventing some people from doing things they would like to do with their system. As more people are obstructed by those parts---voila, Linux becomes (or gives way to) a micro-kernel based system.
Didn't someone say "... and the state shall wither away".
[Heather] Yes, but it's been said:
"Do not confuse the assignment of blame with the solution to the problem. In space, it is far more vital to fix your air leak than to find the man with the pin." - Fiona L. Zimmer
Problems as experienced by sysadmins and users are not solely the fault of designs or languages selected to write our code in.
...and also:
"Established technology tends to persist in the face of new technology." - G. Blaauw, one of the designers of System 360
...and, not coincidentally, at least in our world established technology is likely to persist inside "new" technology as well, possibly in the form of "intuitive" keystrokes and "standard" protocols that would not be the result if designs were started fresh. Of course, truly fresh implementations take a while to complete, which brings us back to the case of the partially completed Hurd environment very neatly.
[JimD] Thus any change to the core requires an explosion of changes to all the modules which depended upon it. They are correct (to a degree). However they gloss over a few points (lying with statistics).
First point: no one said that maintaining and developing kernels should be easy. It is recognized as one of the most difficult undertakings in programming (whether it's an operating system kernel or an RDBMS "engine" --- kernel). "Difficult" is subjective. It falls far short of "impossible."
Second point: They accept it as already proven that "common" coupling leads to increasing numbers of regression faults (giving references to other documents that allege to prove this) and then they provide metrics about what they are calling common coupling. Nowhere do they give an example of one variable that is "common coupled" and explain how different things are coupled to it. Nor do they show an example of how the kernel might be "restructured with common coupling reduced to a bare minimum" (p.13).
So, it's a research paper that was funded by the NSF (National Science Foundation). I'm sure the authors got good grades on it. However, like too much academic "work" it is of little consequence to the rest of us. They fail to show a practical alternative and fail to enlighten us.
Mostly this paper sounds like the periodic whining that used to come up on the kernel mailing list: "Linux should be re-written in C++ and should be based on an object-oriented design." The usual response amounts to: go to it; come back when you want to show us a working prototype.
[Jason] Couldn't parts of the kernel be written in C, and others in C++? (okay, technically it would probably all be C++ if such a shift did occur, but you can write C in a C++ compiler just fine. Right? Or maybe I just don't know what I'm talking about.)
[Pradeep] There are many view points to this. But why would you want to rewrite parts of it in C++?
The popular answer is: C++ is object-oriented; it has polymorphism, inheritance, etc. Umm, I can do all that in C, and kernel folks have used those methods extensively. Function pointers and gotos may not be as clean as real virtual functions and exception handling, but those C++ features come with a price: the compilers haven't progressed enough to deliver performance equivalent to hand-written C code.
[Dan] At one point, oh, maybe it was in the 1.3 kernel days, Linus proposed moving kernel development to C++.
The developer community roundly shot down the idea. What you say about C++ compilers was true in spades with respect to the g++ of those days.
[Pradeep] What is the status of g++ today? I still see a performance hit when I compile my C programs with g++. Compilation time is also a major factor: g++ takes a lot of time to compile, especially with templates.
[JimD] I'm sure the authors would argue that "better programming and design techniques" (undoubtedly on the agenda for their next NSF grant proposal) would result in less of this "common" coupling and more of the "elite" coupling. (Personally, I have no problem coupling with commoners --- just do it safely!)
As for writing "parts" of Linux in C++ --- there is the rather major issue of identifier mangling. In order to support polymorphism, and especially function overloading and over-riding, C++ compilers have to modify the identifiers in their symbol tables in ways that C compilers never do. As a consequence, it is very difficult to link C and C++ modules. Remember, loadable modules in Linux are linkable .o files. It just so happens that they are dynamically loaded (a little like some .so files in user space, through the dlopen() API --- but different, because this is kernel space and you can't use dlopen() or anything like it).
I can only guess at how bad this issue would be, but a quick perusal of the first C++ FAQ I could find on the topic:
http://users.utu.fi/sisasa/oasis/cppfaq/mixing-c-and-cpp.html
... doesn't sound promising.
[JimD] I'm also disappointed that the only quotations in this paper were the ones of Ken Thompson claiming that Linux will "not be very successful in the long run" (repeated TWICE in their 15 page paper) and that Linux is less reliable (in his experience) than MS Windows.
[Jason] I'm reminded of a quote: "Linux is obsolete" -- Andrew Tanenbaum. He said this in the (now) famous flame-war between himself and Linus Torvalds. His main argument was that micro-kernels are better than monolithic kernels and thus Linux was terribly outdated. (His other point was that Linux wasn't portable.) BTW, I plan to get my hands on some Debian/Hurd (or is that "GNU/Hurd"?) CDs so I can see for myself what the fuss over micro-kernels is all about.
[JimD] Run MacOS X --- it's a BSD 4.4 personality over a Mach microkernel.
(And is more mature than HURD --- in part because a significant amount of the underpinnings of MacOS X are NeXT Step which was first released in the late '80s even before Linux).
[Ben] To quote Debian's Hurd page,
"The Hurd is under active development, but does not provide the performance and stability you would expect from a production system. Also, only about every second Debian package has been ported to the GNU/Hurd. There is a lot of work to do before we can make a release."
Do toss out a few bytes of info if you do download and install it. I'm not against micro-kernels at all; I'm just slightly annoyed by people whose credentials don't include the Hard Knocks U. screaming "Your kernel sucks! You should stab yourself with a plastic fork!" My approach is sorta like the one reported in c.o.l.: "Let's see the significant benefits."
[JimD] These were anecdotal comments in a press interview --- they were not intended to be delivered with scholastic rigor. I think it weakens the paper considerably (for reasons quite apart from my disagreement with the statements themselves).
What is "the Long run?" Unix is a hair over 30 years old. The entire field of electronic computing is about 50 or 60 years old. Linux is about 12 years old. Linux is still growing rapidly and probably won't peak in marketshare for at last 5 to 10 years. Thus Linux could easily last longer than proprietary forms of UNIX did. (This is not to say that Linux is the ultimate operating system. In 5 to 10 years there is likely to be an emerging contender like EROS (http://www.eros-os.org ) or something I've never heard of. In 15 to 20 years we might be discussing a paper that quotes Linus Torvalds as saying: "I've read some of the EROS code, and it's not going to be a success in the long run."
(We won't even get into the criteria for "success" in Ken Thompson's comment --- because I think that Linux's current status is already a huge success by the standards of its advocates, and to the chagrin of its detractors. By many accounts Linux is already more "successful" than UNIX --- having been installed on more systems than all its UNIX predecessors combined --- an installation base that has only recently been rivaled by MacOS X in the UNIX world.)
From Jose Avalis
Answered By Faber Fedor, Jason Creighton, Benjamin A. Okopnik, John Karns
Hi guys, and thanks in advance for your time. I'm Joe from Toronto.
I have this scenario at home.
3 WS with Winxx
1 Linux redhat 7.3
1 DSL Connection (Bell / Sympatico)
I would like to use the Linux machine as a router for the internal PCs. Could you help me with that, please???
[Ben] OK, I'll give it a shot. You have read and followed the advice in the IP-Masquerade HOWTO, right? If not, it's always available at the Linux Documentation Project <http://www.linuxdoc.org>, or possibly on your own system under /usr/doc/HOWTO or /usr/share/doc/HOWTO.
The Linux machine has 2 NICs: eth0 (10.15.1.10/16) is connected to the internal net (hub), while the other, eth1 (10.16.1.10/16), is connected to the DSL modem.
[Ben] You have private IPs on both interfaces. Given a DSL modem on one of them, it would usually have an Internet-valid address, either one that you automatically get via DHCP or a static one that you get from your ISP (that's become unusual for non-commercial accounts.) Looks like you have a PPPoE setup - so you're not actually going to be hooking eth0 to eth1, but eth0 to ppp0.
As you can see in the following text, everything is up and running, and I can access the internet from the Linux machine.
[Jason] This may seem like a stupid question, but do the internal PCs have valid internet addresses? (i.e., ones outside the 10.*.*.*, 172.16.*.*-172.31.*.* or 192.168.*.* ranges) If they don't, you need to do IP masquerading. This is not all that hard; I could give a quick & dirty answer as to how to do it (or you could look at the IP-Masquerading-HOWTO for the long answer), but I'd like to know if that's your situation first. Yes, I am that lazy.
ifconfig says
See attached jose.ifconfig-before.txt
See attached jose.ping-before.txt
The problem is that when I try to access the internet from the internal LAN, I can't.
[Ben] Yep, that's what it is. That MTU of 1492 is a good hint: that's the correct setting for PPPoE, and that's your only interface with a Net-valid IP.
[John] The adjusted MTU for PPPoE (from the usual 1500 to 1492) is necessary, but can cause problems with the other machines on the LAN unless they too are adjusted for MTU.
[Ben] Right - although not quite as bad as the gateway's MTU (that one can chase its own tail forever - looks like there's no connection!)
[John] I've been stuck with using PPPoE for about a month now, and have found the Roaring Penguin pkg (http://www.roaringpenguin.com) to work quite well, once it's configured. I seem to remember reading that it does the MTU adjustment internally, and alleviates the headache of having to adjust the rest of the machines on the LAN to use the PPPoE gateway (see the ifconfig output below).
[Ben] Oh, _sweet._ I'm not sure how you'd do that "internally", but I'm no network-programming guru, and that would save a bunch of headaches.
[John] Especially nice if one of the LAN nodes is a laptop that gets carried around to different LAN environments - would be a real PITA to have to reset the MTU all the time.
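For the record, the usual way a gateway spares its LAN clients from that MTU surgery is TCP MSS clamping: rewriting the maximum segment size on forwarded TCP connections so a full 1500-byte segment is never negotiated across the 1492-byte link. On a 2.4 kernel with iptables, a single rule on the gateway does it - a sketch, assuming ppp0 is the PPPoE interface:

# Clamp the MSS of forwarded TCP connections to the path MTU, so LAN
# hosts never try to push full 1500-byte segments through the PPPoE link.
iptables -A FORWARD -o ppp0 -p tcp --tcp-flags SYN,RST SYN \
         -j TCPMSS --clamp-mss-to-pmtu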
# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 00:40:F4:6D:AA:3F
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:21257 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14201 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:4568502 (4.3 Mb)  TX bytes:1093173 (1.0 Mb)
          Interrupt:11 Base address:0xcc00
Then I just tacked on the firewall / masq script I've been using right along, with the only change being the external interface from eth0 to ppp0. PPPoE is also a freak in that the NIC that connects to the modem doesn't get an assigned IP.
[Ben] Yep, that's what got me thinking "PPPoE" in the first place. Two RFC-1918 addresses - huh? An MTU of 1492 for ppp0 and reasonably short ping times to the Net - oh.
All the PCs in the net have 10.15.1.10 (the Linux box's internal NIC) as their default gateway.
[Ben] That part is OK.
Linux's default gateway is the ppp0 adapter
[root@linuxrh root]# netstat -nr
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
64.229.190.1    0.0.0.0         255.255.255.255 UH       40 0          0 ppp0
10.16.0.0       0.0.0.0         255.255.0.0     U        40 0          0 eth1
10.15.0.0       0.0.0.0         255.255.0.0     U        40 0          0 eth0
127.0.0.0       0.0.0.0         255.0.0.0       U        40 0          0 lo
0.0.0.0         64.229.190.1    0.0.0.0         UG       40 0          0 ppp0
[root@linuxrh root]#
[Ben] Yep, that's what "netstat" says. I've never done masquerading with PPP-to-Ethernet, but it should work just fine, provided you do the masquerading correctly.
Can you guys give me some clues as to what my problem is???
I don't have any firewall installed.
Thanks a lot. JOE
[Ben] That's probably the problem. Seriously - a firewall is nothing more than a set of routing rules; in order to do masquerading, you need - guess what? - some routing rules (as well as having it enabled in the kernel.) Here are the steps in brief - detailed in the Masquerading HOWTO:
- Make sure that your kernel supports masquerading; reconfigure and recompile it if necessary.
- Load the "ip_masq" module if necessary.
- Enable IP forwarding (ensure that /proc/sys/net/ipv4/ip_forward is set to 1.)
- Set up the rule set (the HOWTO has good examples.)
That's the whole story. If you're missing any part of it, go thou and fix it until it cries "Lo, I surrender!" If you run into problems while following the advice in the HOWTO, feel free to ask here.
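On a 2.4 kernel (stock on Red Hat 7.3, which uses iptables rather than the older ipchains/ip_masq machinery), the minimal form of that rule set can be as small as this sketch, with the interface names from your setup:

# Enable forwarding, load NAT support, and masquerade everything leaving
# via the PPPoE link. A bare-bones sketch, not a complete firewall.
echo 1 > /proc/sys/net/ipv4/ip_forward
modprobe iptable_nat
iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE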
[Faber] One thing you didn't mention doing is turning on forwarding between the NICs; you have to tell Linux to forward packets from one NIC to the other. To see if it is turned on, do this:
cat /proc/sys/net/ipv4/ip_forward
If it says "0", then it's not turned on. To turn it on, type
echo "1" > /proc/sys/net/ipv4/ip_forward
And see if your Win boxen can see the internet.
If that is your problem, once you reboot the Linux box you'll lose the setting. There are two ways not to lose the setting. One is to put the echo command above into your /etc/rc.local file. The second and Approved Red Hat Way is to put the line
net.ipv4.ip_forward = 1
in your /etc/sysctl.conf file. I don't have any Red Hat 7.3 boxes lying around, so I don't know if Red Hat changed the syntax between 7.x and 8.x. One way to check is to run
/sbin/sysctl -a | grep forward
and see which one looks most like what I have.
Hey Faber in NJ... thanks for your clues. In fact it was set to 0; I changed it to 1, restarted the box, and it is 1 now - but it is still not working.
[Faber] Well, that's a start. There's no way it would have worked with it being 0!
First of all, am I right with this setup method? I mean, using Linux as a router only??? Or should I set up masquerading and use the NAT facility to get all my internal addresses out onto the Internet?
[Faber] Whoops! Forgot that piece! Yes, you'll have to do masquerading/NAT (I can never keep the two distinct in my head).
[Jason] It seems to me that you would want the DSL modem's interface (eth1) to be the default route to the internet, not the PPP link (ppp0).
Because maybe the problem is that I'm trying to route my internal net to the DSL net and the Internet, and maybe that is not a valid procedure.
[Faber] Well, it can be done, that's for sure. We just have to get all the t's dotted and the i's crossed.
[Jason] IP-Masquerading. Here's the HOWTO:
http://www.tldp.org/HOWTO/IP-Masquerade-HOWTO
And here's a script that's supposed (I've never used it) to just be a "fill in the blanks and go":
http://www.tldp.org/HOWTO/IP-Masquerade-HOWTO/firewall-examples.html#RC.FIREWALL-2.4.X
Note this is in the HOWTO, it's just later on after explaining all the gory details of NATing.
Hey, thanks for your mail, the thing is working now. I didn't know that the NAT functions in Linux are called Masquerading.
[Ben] Yeah, that's an odd one.
Masquerading is only a specific case (one-to-many) of NAT. As an example of other stuff that NAT can do, IBM had an ad for the Olympics a while back (their equipment handled all the traffic for the website); they did "many-to-many" NAT to split up the load.
Thanks again for your help; since I'm new to Linux, it took me a while to learn the terminology on this platform.
Too many NOSes in my head.
I have everything working now, including the firewall. I had to compile the kernel again, but it was OK.
C U.
[Ben] You're welcome! Glad we could help.
...making Linux just a little more fun! |
By Michael Conry |
Submitters, send your News Bytes items in PLAIN TEXT format. Other formats may be rejected without reading. You have been warned! A one- or two-paragraph summary plus URL gets you a better announcement than an entire press release. Submit items to gazette@ssc.com
All articles older than three months are available for public reading at http://www.linuxjournal.com/magazine.php. Recent articles are available on-line for subscribers only at http://interactive.linuxjournal.com/.
Fox News report on the US Congress's tech agenda.
ITworld.com reported that the European Commission has presented a draft directive that punishes copyright infringement for commercial purposes, but spares home music downloaders, irritating music industry lobby groups (from Slashdot). Also reported at The Register.
Washington Post reports that Howard Schmidt, the new cybersecurity czar, is a former Microsoft security chief. (from Slashdot).
The Register reports how US gov reps "defanged" pro open source declaration.
EFF's comments on the "German DMCA", as part of its ongoing effort to prevent the worldwide export of overbroad DMCA-type legislation. The German judicial commission is currently holding hearings on draft German legislation to implement the 2001 European Union Copyright Directive (EUCD).
After winning the initial judgement in a trademark suit, the German software project MobiliX nonetheless had to give up its name to the publishers of a similarly-named comic book character. MobiliX.org is now TuxMobil.org. (NewsForge announcement.)
The EFF has announced the release of "Winning (DMCA) Exemptions, The Next Round", a succinct guide to the comment-making process written by Seth Finkelstein, who proposed one of the only two exemptions granted in the last Library of Congress Rule-making.
The Register reports that the DMCA has been invoked in a DirecTV hack case; 17 people have been charged.
Slashdot Interview with Prof. Eben Moglen who has been the FSF's pro bono general counsel since 1993.
The Happypenguin Awards for the 25 Best Linux Games (courtesy NewsVac).
Lindows launches $329 mini PC, ubiquity beckons.
ZDNet UK reports on how KDE's new version has responded to government needs.
NewsForge report: The rise of the $99 'consumer' Linux distribution
A look at installing Bayesian filtering using Bogofilter and Sylpheed Claws.
Doc Searls at Linux Journal writes on the value shifts underpinning the spread of Linux and free software.
PC-To-Phone calls available for GNU/Linux.
Wired reports on getting an iPod to run on Linux.
Jay Beale (of Bastille Linux) with an article on computer security, and how to tell if you've been hacked (from NewsVac).
Monthly Monster Machines back in action: LinuxLookup.com has announced a monthly feature, Monthly Monster Machines, a regularly updated spec of budget, workstation, and dream Linux machines.
It has been reported that starting this year, the Swiss State of Geneva will mail all tax forms with a CD which includes OpenOffice and Mozilla. This replaces an Excel sheet.
Recent NewsForge IRC chat with OpenOffice.org publicist/activist Sam Hiser.
Open Source security manual. There is a report on this at NewsForge.
Possible data write corruption problem on Linux.
NewsForge report on the launch of the new Linux in Education portal.
Slashdot highlighted a recent Business Week feature on Linux comprising 9 articles.
Slashdotters respond to an article about the Microsoft Home of Tomorrow by speculating what an AppleHouse, SunHouse and LinuxHouse would look like.
Interview with Dennis Ritchie, a founding father of Unix and C.
Some Links from Linux Weekly News:
For the Chinese readers among you, a Chinese translation of the Peruvian refutation of Microsoft FUD.
Listings courtesy Linux Journal. See LBJ's Events page for the latest goings-on.
Game Developers Conference | March 4-8, 2003, San Jose, CA | http://www.gdconf.com/
SXSW | March 7-11, 2003, Austin, TX | http://www.sxsw.com/interactive
CeBIT | March 12-19, 2003, Hannover, Germany | http://www.cebit.de/
City Open Source Community Workshop | March 22, 2003, Thessaloniki, Greece | http://www.city.academic.gr/cosc
Software Development Conference & Expo | March 24-28, 2003, Santa Clara, CA | http://www.sdexpo.com/
Linux Clusters Institute (LCI) Workshop | March 24-28, 2003, Urbana-Champaign, IL | http://www.linuxclustersinstitute.org/
4th USENIX Symposium on Internet Technologies and Systems | March 26-28, 2003, Seattle, WA | http://www.usenix.org/events/
PyCon DC 2003 | March 26-28, 2003, Washington, DC | http://www.python.org/pycon/
Linux on Wall Street Show & Conference | April 7, 2003, New York, NY | http://www.linuxonwallstreet.com
AIIM | April 7-9, 2003, New York, NY | http://www.advanstar.com/
FOSE | April 8-10, 2003, Washington, DC | http://www.fose.com/
MySQL Users Conference & Expo 2003 | April 8-10, 2003, San Jose, CA | http://www.mysql.com/events/uc2003/
LinuxFest Northwest 2003 | April 26, 2003, Bellingham, WA | http://www.linuxnorthwest.org/
Real World Linux Conference and Expo | April 28-30, 2003, Toronto, Ontario | http://www.realworldlinux.com
USENIX First International Conference on Mobile Systems, Applications, and Services (MobiSys) | May 5-8, 2003, San Francisco, CA | http://www.usenix.org/events/
USENIX Annual Technical Conference | June 9-14, 2003, San Antonio, TX | http://www.usenix.org/events/
CeBIT America | June 18-20, 2003, New York, NY | http://www.cebit-america.com/
ClusterWorld Conference and Expo | June 24-26, 2003, San Jose, CA | http://www.linuxclustersinstitute.org/Linux-HPC-Revolution
O'Reilly Open Source Convention | July 7-11, 2003, Portland, OR | http://conferences.oreilly.com/
12th USENIX Security Symposium | August 4-8, 2003, Washington, DC | http://www.usenix.org/events/
LinuxWorld Conference & Expo | August 5-7, 2003, San Francisco, CA | http://www.linuxworldexpo.com
Linux Lunacy (brought to you by Linux Journal and Geek Cruises!) | September 13-20, 2003, Alaska's Inside Passage | http://www.geekcruises.com/home/ll3_home.html
Software Development Conference & Expo | September 15-19, 2003, Boston, MA | http://www.sdexpo.com
PC Expo | September 16-18, 2003, New York, NY | http://www.techxny.com/pcexpo_techxny.cfm
COMDEX Canada | September 16-18, 2003, Toronto, Ontario | http://www.comdex.com/canada/
LISA (17th USENIX Systems Administration Conference) | October 26-30, 2003, San Diego, CA | http://www.usenix.org/events/lisa03/
HiverCon 2003 | November 6-7, 2003, Dublin, Ireland | http://www.hivercon.com/
COMDEX Fall | November 17-21, 2003, Las Vegas, NV | http://www.comdex.com/fall2003/
IBM, United Devices and Accelrys have announced a project supporting a global research effort that is focused on the development of new drugs that could potentially combat the smallpox virus post infection. The Smallpox Research Grid Project is powered by an IBM infrastructure, which includes IBM eServer[tm] p690 systems and IBM's Shark Enterprise Storage Server running DB2[r] database software using AIX and Linux.
Opera Software has released a very special Bork edition of its Opera 7 for Windows browser. The Bork edition behaves differently on one Web site: MSN. Users accessing the MSN site http://www.msn.com/ will see the page transformed into the language of the famous Swedish Chef from the Muppet Show: Bork, Bork, Bork! This is retaliation for apparent targeting of Opera users by MSN: Opera users were served a different stylesheet than MSIE users, which made the site display in a less appealing way.
Debian Weekly News reported the announcement of the new archive key for 2003. This is used to sign the Release file for the main, non-US and security archives, and can be used with apt-check-sigs to improve security when using mirrors.
Also from DWN, and of use to many Debian users, is Adrian Bunk's announcement of the backport of OpenOffice.org 1.0.2 to woody. Packages are available online.
Debian powers PRISMIQ MediaPlayer home entertainment gateway device.
IBM developerWorks has published a recent article on Knoppix.
Part 4 of DistroWatch's review of Mandrake 9.1 is online.
Open For Business has published a review of SuSE Linux 8.1
NewsForge has reviewed SuSE Linux Office Desktop.
LGP is pleased to announce that Candy Cruncher has arrived from the replicators and is available immediately.
CourseForum Technologies today introduced CourseForum 1.3, its web-based software for e-learning content creation, sharing and discussion. CourseForum can be hosted on MacOS X, Windows 98/ME/NT/2000/XP, Linux or other Unixes, while users need only a standard web browser.
CourseForum Technologies today introduced ProjectForum 1.3, web-based software for flexible workgroup collaboration and coordination of projects and teams. ProjectForum can be hosted on MacOS X, Windows 98/ME/NT/2000/XP, Linux or other Unixes, while users need only a standard web browser. Licenses start at US$199, and a free version is also available.
AquaFold, Inc have announced the latest version of Aqua Data Studio, a universal database tool for building, managing and maintaining enterprise relational databases. Aqua Data Studio includes support for all major database platforms such as Oracle 8i/9i, IBM DB2, Sybase Adaptive Server, Microsoft SQL Server and the open source databases MySQL and PostgreSQL. Developed with the Java programming language, Aqua Data Studio supports all major operating systems, including Linux, Microsoft Windows, Mac OSX, and Solaris. Screenshots and downloads available online.
...making Linux just a little more fun! |
By Shane Collinge |
These cartoons are scaled down to minimize horizontal scrolling. To see a panel in all its clarity, click on it.
Tux continues his career as an Eminem wannabe.
All HelpDex cartoons are at Shane's web site, www.shanecollinge.com.
Part computer programmer, part cartoonist, part Mars Bar. At night, he runs
around in a pair of colorful tights fighting criminals. During the day... well,
he just runs around. He eats when he's hungry and sleeps when he's sleepy.
...making Linux just a little more fun! |
By Javier Malonda |
The Ecol comic strip is written for escomposlinux.org (ECOL), the web site that supports es.comp.os.linux, the Spanish USENET newsgroup for Linux. The strips are drawn in Spanish and then translated to English by the author. Text commentary on this page is by LG Editor Iron. Your browser has shrunk the images to conform to the horizontal size limit for LG articles. For better picture quality, click on each cartoon to see it full size.
Your Editor couldn't resist getting Javier to do the same cartoon in Esperanto.
These cartoons are copyright Javier Malonda. They may be copied, linked or distributed by any means. However, you may not distribute modifications. If you link to a cartoon, please notify Javier, who would appreciate hearing from you.
...making Linux just a little more fun! |
By Graham Jenkins |
Actually, it's not. The Common Desktop Environment (CDE) has been around for as long as some of those who will be reading this article. It was jointly developed by IBM, Sun, HP and Novell so as to provide a unified "look and feel" to users of their systems. It was also adopted by other companies (notably Digital Equipment). You can find further details at "IBM AIX: What is CDE?".
[Screenshot: a typical CDE screen]
The early versions of KDE appear to have been based on CDE, and the more recent releases of XFce have a look-and-feel which is very similar to that of CDE. A key difference here is that both KDE and XFce are Open Source developments.
One of the most-used CDE applications is probably its desktop terminal 'dtterm' which was based on 'xterm' with some extra menu-bar capabilities; its look is not unlike that of 'gnome-terminal'. There are also image-viewer, performance-monitoring, mail-reader and other useful tools.
I work in an environment where I am required to access and manage a number of Solaris and HP-UX servers. Most of my work is done at a NetBSD-based Xterminal, managed by a remote Solaris machine so that I have a CDE desktop. There are times it is managed instead by a remote Linux machine so that I have a Gnome desktop. And there are times (too many of them!) when I work from home, using a Linux machine with a locally-managed Gnome desktop.
It matters little where I am working; as soon as I open up a CDE utility such as 'dtterm', my Xserver starts looking for CDE-specific fonts. It seems that a number of vendor-supplied backup and other utilities also make use of these fonts.
In the case of 'dtterm' the end-result is that an attempt to select a different-sized font produces a selection list containing eight fonts, and seven of these can't be found. It is actually possible to get around this by redefining on the Solaris or HP host the names of the fonts which are used for the 'dtterm' application. This can be done at either a system-wide or a user-specific level; either way, it's hardly an elegant solution.
In the case of a splash-screen produced at CDE-login time, the result can be quite dire: the user is unable to read the login prompts or error messages! More recent versions of both Solaris and HP-UX get around this by attempting to append an entry like 'tcp/hpmachine:7100' to the font-path at login time. That's fine unless your site security policy prohibits the activation of font service on your Solaris and HP servers.
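(Done by hand from a running X session, that append amounts to a couple of 'xset' calls - 'hpmachine' being the example font-server host from above:)

xset fp+ tcp/hpmachine:7100   # append the font server to the font path
xset fp rehash                # make the X server re-read the font path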
You can designate a couple of machines as font-servers for your site. These can be small dedicated machines, or they can offer other services (such as DHCP, NTP, etc.) as well. That's actually the way that it's done with 'thin' Xterminals from companies like IBM, NCD and HP.
There are several issues. First up, you have to actually install the CDE-fonts on the font-server machines; there may be some copyright issues here if you are installing (for instance) HP CDE fonts on Linux machines.
Something we noticed in practice is that the Xserver software we are using doesn't seem smart enough to do a transparent fail-over in the event of a single server disconnection. So what happens is that a user suddenly finds himself presented with a blank screen.
If you are working from home with a modem connection to the LAN on which your font-servers reside, it can take some time for required fonts to arrive when you start a 'dtterm' application.
This is certainly a possibility, and if you can live with the copyright issues, it will solve most of the problems outlined above. But it will require an extra 10Mb of filespace on each system.
The good news is that you don't have to lose sleep over the copyright issues. And you don't have to install strange fonts all over your font directories.
All you need do is identify some commonly-available fonts which closely match the CDE-specific fonts, and create one 'fonts.alias' file. Place it in an appropriate directory (e.g. '/usr/X11R6/lib/X11/fonts/local'), and run 'mkfontdir' in that directory. Then ensure that the directory name is included in your font-server configuration file (e.g. '/usr/X11R6/lib/X11/fs/config'). If your version of Linux (or NetBSD, or FreeBSD ..) doesn't include a term like 'unix/:7100' in its 'XF86Config' (or similar) server configuration file, you should place the name of your selected font directory in that configuration file.
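Strung together, and using the example paths above, the whole procedure comes down to a handful of commands:

# Build a directory that holds nothing but the alias file.
mkdir -p /usr/X11R6/lib/X11/fonts/local
cd /usr/X11R6/lib/X11/fonts/local
vi fonts.alias    # add the CDE alias lines shown below
mkfontdir         # writes fonts.dir; fonts.alias sits beside it
# Then list this directory on the "catalogue" line of the font-server
# config (/usr/X11R6/lib/X11/fs/config), or as a FontPath entry in
# XF86Config, and restart xfs or the X server.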
Here's what the 'fonts.alias' file looks like. For clarity, I've shown just the first two and the last alias hereunder, and I've broken each line at the whitespace between the alias name and its corresponding real font. Not a great deal of science went into the development of this file, although I did use a couple of simple scripts to assemble it. It was just a matter of finding, for each alias, a font having similar characteristics and size.
! XFree86-cdefonts-1.0-2
! Font Aliases for Common Desktop Environment using XFree86 fonts.
! Graham Jenkins <grahjenk@au1.ibm.com> October 2001.
-dt-application-bold-i-normal-serif-11-80-100-100-m-60-iso8859-1
  "-adobe-courier-bold-o-normal--11-80-100-100-m-60-iso8859-1"
-dt-application-bold-i-normal-serif-14-100-100-100-m-90-iso8859-1
  "-adobe-courier-bold-o-normal--14-100-100-100-m-90-iso8859-1"
...
"-dt-interface user-medium-r-normal-xxl serif-21-210-72-72-m-140-hp-roman8"
  "-b&h-lucidatypewriter-medium-r-normal-sans-24-240-75-75-m-140-iso8859-1"
OK, so you've read this far, and you're still asking "Why Should I Care?". My guess is that eighty percent of you have never used CDE and are unlikely to use it in the future.
But what I can guarantee is that most of you are going to run an application one day, and wonder why its fonts don't display or scale properly. My hope is that when that happens, you'll recall what you've read here - and apply it to the creation of an appropriate 'fonts.alias' file as outlined above.
Graham is a Unix Specialist at IBM Global Services, Australia. He lives in Melbourne and has built and managed many flavors of proprietary and open systems on several hardware platforms.
...making Linux just a little more fun! |
By Janine M Lodato |
Because the baby-boom generation will soon be the senior population, the market for voice-activated telephone services will be tremendous. An open-minded company such as IBM or Hewlett-Packard will surely find a way to meet the market demand. What is needed by this aging population is a unified messaging system -- preferably voice-activated -- that lets the user check for caller ID, receive short messages, check for incoming and outgoing e-mail, access address books for both telephone numbers and e-mail addresses, and place telephone calls.
Everything that is now done by typing and text will be more quickly and easily performed with voice recognition. That is, a voice interface will identify a caller, read short messages aloud, provide e-mail service in both directions (text-to-voice reading of incoming e-mail, voice-to-text composition of outgoing e-mail), give voice access to address books, and place phone calls (and end them when you're done). Once users are able to answer, make and end a call using just their voices, working with the telephone will be a breeze and seniors will not feel isolated and lonely. What a boon to society voice-activated telephone services will be. Whether or not users are at all computer-savvy, e-mail will also be an option on the telephone; it is, after all, a form of communication, just as the telephone is. What is described here is a Linux-based unified communication system.
Of great value to the user would be e-mail and its corresponding address book. As e-mail comes in, messages could be read by way of a text-to-voice method. Also of great value would be a telephone system with its corresponding address book and numbers. Short messaging could be read through text-to-voice technology and short messages can be left using voice-to-text methodology.
One of the most advanced and productive uses of such simple Linux-based communication devices is to search the web without going on-line to a search engine. Instead, one can just send an e-mail to Agora in Japan and do multiple Google searches with a single message; you do not even need a browser. For example, suppose we are interested in how Linux has been doing recently in the press in connection with the life sciences and medical applications. Just send a single e-mail to a free service such as Agora at dna.affrc.go.jp. In the body of that one e-mail you can put a number of searches, and of course you can modify the search terms:
Send http://www.google.com/search?q=Linux+press+2003+life*sciences&num=50
Send http://www.google.com/search?q=Linux+press+2003+medical*devices&num=50
Send http://www.google.com/search?q=Linux+press+2003+telemedicine&num=50

Within thirty minutes or so, depending on the time of day and the load the Agora server is under, you get a number of e-mails back, one for each Send command in your e-mail. Each e-mail lists the URLs, accompanied by a one-paragraph review of the corresponding web site, which fits the keywords you specified in the Send command. Then simply select the reference number next to each URL you are interested in and list them in a reply e-mail back to Agora, and they will send the web pages you have selected. Or you can use the "deep" command to get the entire web site for the URL. To learn more, send a Help e-mail to the Agora server for details.
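If your machine has a command-line mailer, the whole thing can be done in one go. A sketch using mailx, assuming the server's address is written agora@dna.affrc.go.jp (the article's "Agora at dna.affrc.go.jp" spelled out):

# One message, three searches; each Send line comes back as its own
# reply e-mail full of URLs and one-paragraph summaries.
mailx -s "google searches" agora@dna.affrc.go.jp <<'EOF'
Send http://www.google.com/search?q=Linux+press+2003+life*sciences&num=50
Send http://www.google.com/search?q=Linux+press+2003+medical*devices&num=50
Send http://www.google.com/search?q=Linux+press+2003+telemedicine&num=50
EOF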
How productive one can get! But do not abuse these fine services, since they are meant for researchers. Use them when it's nighttime in Japan: after 7pm on the US west coast, after 4pm on the US east coast, and after 11am in western Europe.
Anything that allows independence for the user is bound to be helpful to every aspect of society.
With the attractive price of a Linux-based unified communication device encompassing all the applications mentioned above, users can be connected and productive without the need for an expensive Windows system.
There's a list of Agora and www4mail servers at http://www.expita.com/servers.html. Two other (less reliable) Agora servers are agora at kamakura.mss.co.jp (Japan) and agora at www.eng.dmu.ac.uk (UK).
Www4mail is a very modern type of server that works similarly to Agora. Two servers the author has tested are www4mail at kabissa.org and www4mail at web.bellanet.org. Send an e-mail with the words "SEND HELP" in the body for instructions.
...making Linux just a little more fun! |
By Janine M Lodato |
The most important capital of an alliance: people, successfully collaborating via the Internet.

Hope springs eternal at the World Internet Center (The Center) in Palo Alto, California. Located in an upstairs suite at the historic Stanford Barn, The Center hosts a weekly social event on Thursdays from 5 to 7 PM called "the Pub". Aside from sushi and wine provided by The Center at nominal cost to those who attend, the networking that takes place at the Pub offers hope to millions of people.
The Center brings hope by connecting Silicon Valley entrepreneurs, corporate executives and technologists, all wanting to forge a start-up company that will make a mark on today's economy using info-tech such as the Internet. From the business opportunities that develop, the beneficiaries of such opportunities are not just the businessmen putting together the deal. In the long run, the beneficiaries may also include people around the world afflicted with physical malfunctions and illnesses.
The Pub allows people to put together start-up firms of varying interests. Small, narrowly-focused companies such as those concentrating on life sciences, soon to be headquartered in Singapore, rely on larger businesses to disseminate their services and capabilities. These larger businesses are called systems integrators.
Visitors to The Center come from as far away as Russia, Australia, Iran, Europe, China, Japan, Chile, Brazil and, of course, from Silicon Valley. California's Silicon Valley is the Mecca of high technology: telecom, multimedia telecom, computers, Internet and e-commerce, attracting countries wanting to ride the high-tech wave of the future because of its potential for financial gain.
Because people forming a team and working well together as a group make for the success of a new company, the elbow-rubbing they do socially at the Pub is an indication of how things will work out in the long run. People make the deal work, not technology, not ideas, not money, but people with those things. If new businesses can speed along medical help for people with all sorts of physical malfunctions, The Center will have achieved a major milestone: lowering the cost of medicine and improving the lives of the needy.
The main theme of The Center is to connect its current and past large corporate sponsors such as Amdocs, Deutsche Telekom, HP, IBM, SAP and Sun with small high-tech companies and expert individuals in the form of a series of focused think-tanks.
Because my husband, Laszlo Rakoczi, a Hungarian revolutionary who emigrated to the USA after the revolution in Hungary was crushed by the Evil Empire (the Soviet Union), is a member of the Advisory Board of The Center, many small companies seek him out to discuss the potential of collaborative, strategic-alliance-type business arrangements. One such high-tech company that recently approached him is Sensitron.net. Dr. Rajiv Jaluria, founder and CEO, met Laszlo through The Center. Sensitron is a small high-tech firm which built an end-to-end system to connect medical instruments to monitoring stations and databases, thus improving the productivity of medical professionals and increasing the quality of medical care. Of course, the question of what type of platform the application should run on came up. Laszlo immediately introduced the idea of embedded Linux-based systems for the medical instruments as well as for the professionals' PDAs and tablets, and even the potential of Linux-based servers and databases. Laszlo suggested these since Linux would allow...
Laszlo could not resist pointing out that the real Evil Empire - the one holding down and fighting the real revolution, the simple and low-cost collaboration of all peoples via the Internet, not just the ones who can pay the high cost of a Windows-based PC - is Microsoft, with their monopolistic pressure tactics. One such evil practice of Microsoft is the campaign under which they embrace a small company like Sensitron, enhance its application, then extinguish the original team. Embrace, enhance, extinguish. The Soviets were never that good and imaginative in their tyrannical approach. Maybe that is the reason they failed.
As the biotech and IT arenas converge, IT enables life sciences companies to accelerate the development, testing and marketing of their intellectual properties, products and services. Life sciences encompass the fields of biotechnology, medical equipment and pharmaceutical products and services. Such companies include many small, as well as large entities like Pfizer, Chiron, Philips and Agilent.
It is hard to believe such a sophisticated, practical idea could come from people socializing over wine and sushi, but that is indeed the case. Many future start-up companies in the Silicon Valley will have the World Internet Center and its weekly Pub to thank for their conception.
One such important think tank, currently in its formation stages and looking for corporate sponsors, is an NIH-funded project for the disabled, aging and ailing. This proposed think tank plans to investigate the potential of collaborative telemedicine. For example, due to the shortage of medical professionals, China must use telemedicine to connect the small clinics in 260,000 communities to the 100 large teaching hospitals via VSAT-type Internet linkage. NeuSoft of China is putting together such a system, and of course they do not want to fall prey to Microsoft's overpriced systems. In fact, Linux is the major platform China wants for all their applications, supported by the Red Flag project.
Telemed systems of this type apply to a very large group, including disabled, aging and ailing people as well as the professionals supporting them. The sum of these people account for half the population of the world and very few of them can afford the artificially high cost of Windows-based systems. Telemed can lower the cost of medicine, improve the capabilities of the medical professionals and at the same time improve the quality of life of the patients.
Sensitron, with the support of NeuSoft, will propose that NIH should provide a grant to their strategic alliance, under which a disabled female investigator will do a clinical study of the potential for significantly improved health via Internet-based, collaborative, virtual-community-style involvement. This significant upgrade of self-supported health improvement can be achieved using assistive technologies (AT) connected via the web. However, such AT technologies must be upgraded to allow collaboration between the health service professionals and their patients linked via the virtual community. The AT-based virtual community needs functions such as...
Melbourne, Singapore, Dalian, Shanghai, Hong Kong, Kuala Lumpur, Munich, Budapest, Vienna, Lund, Bern, Helsinki, Shenyang, Dublin, London, Stuttgart, Hawaii, Vancouver, Toronto, etc., would all love to come to Silicon Valley in this virtual-community manner, through a club equipped with a standard wireless local area network (WLAN) connected to a virtual private network (VPN). This cross-oceanic virtual private network will have kiosk-based unified messaging (UM) between the clubs. It would also include very low-cost voice over the Internet protocol (VoIP), connecting all major cities of APEC (the Asia-Pacific economic community) with VoIP and UM over IP, as well as, through carrier allies with IP backbones, 120 of the important cities in the USA/Canada and many in the EU.
Those of us with neurological dysfunctions such as MS, ALS, ALD, Parkinson's, Alzheimer's and myriad others have a very special personal stake in the networking that goes on over sushi and wine. Life sciences and information technology working together can aid these patients in a very effective way. For example, techniques like neuroprosthetics -- interaction with devices using voice and eye signals -- can develop.
As I sit in the only quiet spot at The Center during its weekly, after-hours social event, I notice the networking that takes place. The Center provides a great opportunity for people to share ideas for business. Everyone from the original architects of the Valley to new entrepreneurs is there. Investors look for good investment opportunities, and start-up companies look for anyone wanting to put money into their new venture. Basically, it's a people-to-people scene and is exciting to observe.
Then there are those who find the allure of the event as a singles bar irresistible. Where else can they find stimulating company, fresh sushi and good wine at such a fair price? Personally, having attended the weekly occasion for so many months now because my husband, Laszlo, is a member of The Center's Advisory Board, I could care less if I ever see sushi again in my life!
By now I have my own circle of friends at this gathering. And I find those wanting to do business with my important husband very courteous and attentive to me. In general, the entire encounter is an "upper" for me, a technology midget among giants.
Nibbling on the cheese set before me, my taste for sushi having long since expired, I fulfill my role as a mouse in the boardroom to the max. I overhear conversations of businessmen from the already-mentioned countries exchanging e-mail addresses to further negotiate via the Internet. The Center has achieved its goal.
I smile a little inward smile, realizing medical researchers around the world have been sharing ideas and breakthroughs on the Internet for years. A medical Manhattan Project has been globalized thanks to the Internet. I know a lot of afflicted people who were ready for medical help yesterday.
What can we do besides raise money to hurry things along? Hope the convergence of biotechnology and IT accelerates treatments for physical malfunctions worldwide and promotes the free exchange of intellectual property among biotechnology companies and research institutions, that's what. And keep that sushi and wine readily available for the Thursday night Pub at the World Internet Center.
...making Linux just a little more fun! |
By Ben Okopnik |
When Woomert Foonly answered the door in response to an insistent knocking, he found himself confronted by two refrigerator-sized (and shaped) men in dark coats who wore scowling expressions. He noted that they were both reaching into their coats, and his years of training in the martial arts and razor-sharp attention to detail resulted in an instant reaction.
- "Hello - you're obviously with the government, and you're here to help me, even if didn't call you. May I see those IDs?... Ah. That agency. Do come in, gentlemen. Feel free to remove your professional scowls along with your coats, you won't need them. Pardon me while I call your superiors just to make sure everything is all right; I need to be sure of your credentials. Please have a seat."
Some moments later, he put down the phone.
- "Very well; everything seems right. How may I help you, or, more to the point, help your associates who have a programming problem? I realize that security is very tight these days, and your organization prefers face-to-face meetings in a secure environment, so I'm mystified as to your purpose here; I don't normally judge people by appearances, but you're clearly not programmers."
The men glanced at each other, got up without a word, and began a minute security survey of Woomert's living room - and Woomert himself - using a variety of expensive-looking tools. When they finished a few minutes later, they once again looked at each other, and nodded in unison. Then, each of them reached into the depths of their coat and extracted a rumpled-looking programmer apiece, both of whom they carefully placed in front of Woomert. The look-and-nod ritual was repeated, after which they each retired to the opposite corners of the room to lurk like very large shadows.
Woomert blinked.
- "Well. The requirements of security shall be served... no matter what it takes. Have a seat, gentlemen; I'll brew some tea."
A few minutes later, after introductions and hot tea - the names of the human cargo turned out to be Ceedee Tilde and Artie Effem - they got down to business. Artie turned out to be the spokesman for the pair.
- "Mr. Foonly, our big challenge these days is image processing. As you can imagine, we get a lot of surveillance data... well, it comes to us in a standardized format that contains quite a lot of information besides the image: the IP of the originating site, a comment field, position information, etc. The problem is, both of us are very familiar with text processing under Perl but have no idea how to approach extracting a set of non-text records - or, for that matter, how to avoid reading in a 200MB image file when all we need is the header info... I'll admit, we're rather stuck. Our resident C++ guru keeps trying to convince us that this can only be done in his language of choice - it wouldn't take him more than a week, or so he says, but we've heard that story before." After an enthusiastic nod of agreement from Ceedee he went on. "Anyway, we thought we'd consult you - there just has to be something you can do!"
Woomert nodded.
- "There is. One thing, though: since we're not dealing with actual classified data or anything that requires a clearance - I assume you've brought me a carefully-vetted specification sheet, yes? - I want my assistant, Frink Ooblick, to be in on the discussion. This is, in fact, similar to the kind of work he's been trying to do lately, so he should find it useful as well."
Frink was brought in and debugged by the pair Woomert had dubbed Strong and Silent, although "perl -d" [1] was nowhere in evidence. After introductions all around, he settled into his favorite easy chair from which he could see Woomert's screen.
- "All right, let's look at the spec sheet. Hmmm... the header is 1024 bytes; four bytes worth of raw IP address, a forty-byte comment field, latitude and longitude of top left and bottom right, each of the four measurements preceded by a length-definition character... well, that'll be enough for a start; you should be able to extrapolate from the solution for the above."
"What do you think, Frink? Any ideas on how to approach this one?"
Frink was already sitting back in his chair, eyes narrowed in thought.
- "Yes, actually - at least the first part. Since they're reading a binary file, ``read'' seems like the right answer. As for the second... well, ``substr'', maybe..."
- "Close, but not quite. ``read'' is correct: we want to get a fixed-length chunk of the file. However, "substr" isn't particularly great for non-text strings - and hopeless when we don't know what the field length is ahead of time, as is the case with the four lat/long measurements. However, we do have a much bigger gun... whoa, boys, calm down!" he added as Strong and Silent stepped out of their corners, "it's just a figure of speech!"
"Anyway," he continued, with a twinkle in his eye that hinted at the "slip" being not-so-accidental, "we have a better tool we can use for the job, one that's got plenty of pep and some to spare: ``unpack''. Here, try this:
# Code fragment only; actually processing the retrieved data is left
# as an exercise, etc. :)
...
$A = "file.img"; open A or die "$A: $!"; read A, $b, 1024;
@c = unpack "C4A40(A/A)4", $b;
...

The moment of silence stretched until Ceedee cleared his throat.
- "Ah... Mr. Foonly... what the heck is that? I can understand the ``open'' function, even though it looks sort of odd... ``read'' looks reasonable too... but what's that ``unpack'' syntax? It looks as weird as snake suspenders."
Woomert glanced around. Artie was nodding in agreement, and even Frink looked slightly bewildered. He smiled and took another sip of tea.
- "Nothing to worry about, gentlemen; it's just an ``unpack'' template, a pattern which tells it how to handle the data. Here, I'll walk through it for you. First, though, let's expand this one-liner into something a bit more readable, maybe add a few comments:
$A = "file.img";               # Set $A equal to the file name
open A or die "$A: $!";        # Open using the "new" syntax
read A, $b, 1024;              # Read 1kB from 'A' into $b
@c = unpack "C4A40(A/A)4", $b; # Unpack $b into @c per template

The new syntax of "open" (starting with Perl 5.6.0) allows us to "combine" the filehandle name and the filename, as I did in the first two lines; the name of the variable (without the '$' sigil) is used as the filehandle. If you take a look at ``perldoc -f pack'', it contains a longish list of template specifications, pretty much anything you might want for conversions; you can convert various types of data, move forward, back up, and in general dance a merry jig. The above was rather simple, really:

C4      An unsigned "char" value, repeated 4 times
A40     An ASCII string 40 characters long
(A/A)4  An ASCII string preceded by a "length" argument which is itself
        a single ASCII character, repeated 4 times

The resulting output was assigned to @c, which now contains something like this:

$c[0]   The first octet of the IP quad
$c[1]     "  second  "   "  "   "
$c[2]     "  third   "   "  "   "
$c[3]     "  fourth  "   "  "   "
$c[4]   The comment field
$c[5]   The latitude of the upper left corner
$c[6]     "  longitude  "  "   "    "    "
$c[7]   The latitude of the lower right corner
$c[8]     "  longitude  "  "   "    "    "

Obviously, you can extend this process to your entire data layout. What do you think, gentlemen - does this fit your requirements?"
After the now-enthusiastic Artie and Ceedee had been bundled off by their hulking keepers and the place was once again as roomy as it had been before their arrival, Woomert opened a bottle of Hennessy's "Paradise" cognac and brought out a pair of small but fragrant cigars which proved to be top-grade Cohibas.
- "Well, Flink - that's another case solved; something that never fails
to make me feel cheery and upbeat. As for you - hit those books, young
man! - at least when we get done with this little treat. ``perldoc perlopentut''
will make a good introduction to the various ways to open a file, duplicate
a filehandle, etc.; ``perldoc -f pack'' and ``perldoc -f unpack'' will
explain those functions in detail. When you think you've got it, find a
documented binary file format and write a parser that will pull out the
data for examination. By this time tomorrow, you should be quite expert
in the use of these tools..."
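For readers who'd like to follow that advice without first hunting down a real binary format, here is a minimal round-trip sketch runnable from a shell prompt. It fabricates a header by hand, then parses it back with the template from the story; it needs a reasonably recent Perl (the parenthesized template groups arrived around 5.8), all the field values are invented, and the lat/long strings must stay under ten characters since the length prefix is a single ASCII character:

perl -we '
    # build a fake header by hand: 4 raw IP octets, a 40-byte comment,
    # then four length-prefixed lat/long strings (lengths must be 1-9)
    my $hdr = pack("C4", 192, 168, 0, 1) . pack("A40", "test image");
    $hdr .= length($_) . $_ for ("38.89", "-77.03", "38.88", "-77.00");
    # now parse it back with the template from the story
    my @f = unpack "C4A40(A/A)4", $hdr;
    print "IP:      $f[0].$f[1].$f[2].$f[3]\n";
    print "Comment: $f[4]\n";
    print "Corners: ($f[5], $f[6]) to ($f[7], $f[8])\n";
'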
Ben is a Contributing Editor for Linux Gazette and a member of The Answer Gang.
Ben was born in Moscow, Russia in 1962. He became interested in
electricity at age six--promptly demonstrating it by sticking a fork into
a socket and starting a fire--and has been falling down technological mineshafts
ever since. He has been working with computers since the Elder Days, when
they had to be built by soldering parts onto printed circuit boards and
programs had to fit into 4k of memory. He would gladly pay good money to any
psychologist who can cure him of the resulting nightmares.
Ben's subsequent experiences include creating software in nearly a dozen
languages, network and database maintenance during the approach of a hurricane,
and writing articles for publications ranging from sailing magazines to
technological journals. Having recently completed a seven-year
Atlantic/Caribbean cruise under sail, he is currently docked in Baltimore, MD,
where he works as a technical instructor for Sun Microsystems.
Ben has been working with Linux since 1997, and credits it with his complete
loss of interest in waging nuclear warfare on parts of the Pacific Northwest.
...making Linux just a little more fun! |
By Justin Piszcz |
I have a Pentium 3 866MHz CPU. Reading the freshmeat article on optimizing GCC a few days ago got me thinking. So I posed the following question: how much faster would gcc compile the kernel if gcc itself was optimized? I chose to benchmark kernel compilation times because I think it is a good benchmark, and many other people also use it to benchmark system performance. Also, at one point or another, most Linux users will have to take the step and compile the kernel, so I thought I'd benchmark something that is useful and that people have a general idea of how long it takes to compile without optimizations. So my test consists of the following:
With a non-optimized compiler (configure; make; make install):
Average of 10 'make bzImage':
TIME: 12:42 (762 seconds)
With an optimized compiler, I specifically used:
-O3 -pipe -fomit-frame-pointer -funroll-loops -march=pentium3 -mcpu=pentium3 -mfpmath=sse -mmmx -msse
In case you are wondering how to do this, it is in the FAQ of the gcc tarball. The following line is what I used:
./configure ; make BOOT_CFLAGS="optimization flags" bootstrap ; make install
Average of 10 'make bzImage':
TIME: 9:31 (571 seconds)
I compile almost everything I run on my Linux box. I use a package manager called relink to manage all of my installed packages.
Optimizing the compiler alone offers a speed increase of 33% (3:11, or 191 seconds, faster). This may not seem like a lot, but for compiling big programs it will significantly reduce compile time, making those QT & Mozilla builds that much faster :) The actual test consisted of this:
cd /usr/src/Linux
for i in `seq 1 10`
do
    make dep
    make clean
    /usr/bin/time make bzImage 2>> /home/war/log
done

In case you're wondering about the time elapsed per build and how much the CPU was utilized, here they are:
No Optimization (Standard GCC-3.2.2):
720.88user 34.54system 12:43.97elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
719.06user 35.69system 12:42.09elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
719.14user 34.37system 12:39.64elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
720.52user 36.42system 12:46.68elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
721.07user 33.86system 12:41.59elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
718.95user 35.65system 12:41.31elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
721.83user 36.26system 12:51.54elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
720.29user 34.18system 12:40.63elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
719.14user 34.80system 12:39.19elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
721.16user 33.88system 12:41.93elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k

Optimized Compiler (GCC-3.2.2 w/ "-O3 -pipe -fomit-frame-pointer -funroll-loops -march=pentium3 -mcpu=pentium3 -mfpmath=sse -mmmx -msse"):
532.09user 33.62system 9:32.76elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
531.57user 32.92system 9:29.25elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
532.99user 33.12system 9:31.18elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
532.58user 33.16system 9:30.57elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
533.18user 32.96system 9:31.34elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
534.01user 32.21system 9:32.50elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
532.59user 33.41system 9:31.56elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
532.76user 33.68system 9:32.01elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
534.19user 32.54system 9:31.92elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
534.11user 32.76system 9:32.40elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k

Note: I realize some of the optimizations, most specifically -fomit-frame-pointer, may not be a good optimization feature, especially for debugging. However, my goal is to increase compiler performance and not worry about debugging.
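Incidentally, if you collect /usr/bin/time output in a log file as above, a short awk sketch can compute the average elapsed time for you. This assumes every line carries an "MM:SS.sselapsed" field exactly as in the output shown here; the log path is the one used in my test:

awk '{
    for (i = 1; i <= NF; i++)
        if ($i ~ /elapsed/) {
            sub(/elapsed.*/, "", $i)   # strip the label, keeping "MM:SS.ss"
            split($i, t, ":")          # t[1] = minutes, t[2] = seconds
            sum += t[1] * 60 + t[2]
            n++
        }
}
END { if (n) printf "%d runs, average %.1f seconds\n", n, sum / n }' /home/war/log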
...making Linux just a little more fun! |
By Pramode C.E |
In last month's article (Fun with Simputer and Embedded Linux), I had described the process of developing programs for the Simputer, a StrongArm based handheld device. The Simputer can be used as a platform for learning microprocessor and embedded systems programming. This article describes my attempts at programming the watchdog timer unit attached to the SA1110 CPU which powers the Simputer. The experiments should work on any Linux based handheld which uses the same CPU.
Due to obscure bugs, your computer system is going to lock up once in a while - the only way out would be to reset the unit. But what if you are not there to press the switch? You need to have some form of `automatic reset'. The watchdog timer presents such a solution.
Imagine that your microprocessor contains two registers - one which gets incremented every time there is a low-to-high (or high-to-low) transition of a clock signal (generated internal to the microprocessor or coming from some external source) and another which simply stores a number. Let's assume that the first register starts out at zero and is incremented at a rate of 4,000,000 counts per second, and that the second register contains the number 40,000,000. The microprocessor hardware compares these two registers every time the first register is incremented and issues a reset signal (which has the result of rebooting the system) when the values of these registers match. Now, if we do not modify the value in the second register, our system is sure to reboot in 10 seconds - the time required for the values in both registers to become equal.
The trick is this - we do not allow the values in these registers to become equal. We run a program (either as part of the OS kernel or in user space) which keeps on moving the value in the second register forward before the values of both become equal. If this program does not execute (because of a system freeze), then the unit would be automatically rebooted the moment the value of the two registers match. Hopefully, the system will start functioning normally after the reboot.
The Intel StrongArm manual specifies that a software reset is invoked when the Software Reset (SWR) bit of a register called RSRR (Reset Controller Software Register) is set. The SWR bit is bit D0 of this 32 bit register. My first experiment was to try resetting the Simputer by setting this bit. I was able to do so by compiling a simple module whose `init_module' contained only one line:
RSRR = RSRR | 0x1;
The StrongArm CPU contains a 32 bit timer that is clocked by a 3.6864MHz oscillator. The timer contains an OSCR (operating system count register) which is an up counter and four 32 bit match registers (OSMR0 to OSMR3). Of special interest to us is the OSMR3.
If bit D0 of the OS Timer Watchdog Match Enable Register (OWER) is set, a reset is issued by the hardware when the value in OSMR3 becomes equal to the value in OSCR. It seems that bit D3 of the OS Timer Interrupt Enable Register (OIER) should also be set for the reset to occur.
Using these ideas, it is easy to write a simple character driver with only one method - `write'. A write will delay the reset by a period defined by the constant `TIMEOUT'.
[Text version of this listing]
/*
 * A watchdog timer.
 */
#include <linux/module.h>
#include <linux/ioport.h>
#include <linux/sched.h>
#include <asm-arm/irq.h>
#include <asm/io.h>

#define WME 1
#define OSCLK 3686400   /* The OS counter gets incremented
                         * at this rate every second */
#define TIMEOUT 20      /* 20 seconds timeout */

static int major;
static char *name = "watchdog";

void enable_watchdog(void)
{
        OWER = OWER | WME;
}

void enable_interrupt(void)
{
        OIER = OIER | 0x8;
}

ssize_t watchdog_write(struct file *filp, const char *buf,
                       size_t count, loff_t *offp)
{
        OSMR3 = OSCR + TIMEOUT*OSCLK;
        printk("OSMR3 updated...\n");
        return count;
}

static struct file_operations fops = {write: watchdog_write};

int init_module(void)
{
        major = register_chrdev(0, name, &fops);
        if (major < 0) {
                printk("error in init_module...\n");
                return major;
        }
        printk("Major = %d\n", major);
        OSMR3 = OSCR + TIMEOUT*OSCLK;
        enable_watchdog();
        enable_interrupt();
        return 0;
}

void cleanup_module(void)
{
        unregister_chrdev(major, name);
}
It would be nice to add an `ioctl' method which can be used at least for getting and setting the timeout period.
Once the module is loaded, we can think of running the following program in the background (of course, we have to first create a device file called `watchdog' with the major number which `init_module' had printed). As long as this program keeps running, the system will not reboot.
[Text version of this listing]
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>      /* perror */
#include <stdlib.h>     /* exit */
#include <unistd.h>     /* write, sleep */

#define TIMEOUT 20

int main(void)
{
        int fd, buf;

        fd = open("watchdog", O_WRONLY);
        if (fd < 0) {
                perror("Error in open");
                exit(1);
        }
        while (1) {
                if (write(fd, &buf, sizeof(buf)) < 0) {
                        perror("Error in write, System may reboot any moment...\n");
                        exit(1);
                }
                sleep(TIMEOUT/2);
        }
}
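To tie it all together, a session might look like this. This is only a sketch: the module object name, the major number and the source file name are all examples - use whatever major number your `init_module' actually prints:

insmod watchdog.o                 # load the module; note the printed major number
mknod watchdog c 254 0            # create the device file (254 is an example)
gcc -o keepalive keepalive.c      # build the keep-alive program shown above
./keepalive &                     # keep moving OSMR3 forward in the background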
If you are not bored to death reading this, you may be interested in knowing more about Linux on handheld devices (and in general, embedded applications). So, till next time, Bye!
I am an instructor working for IC Software in Kerala, India. I would have loved
becoming an organic chemist, but I do the second best thing possible, which is
play with Linux and teach programming!
...making Linux just a little more fun! |
By Dustin Puryear |
Title: Unix Storage Management
Authors: Ray A. Kampa, Lydia V. Bell
Publisher: Apress
Year: 2003
There are some rather complex--and dare I say it--arcane issues involved in managing storage in a Unix environment. Indeed, Unix storage management can be a complicated affair. This is especially true when you consider the many demands business places on storage systems, such as fault tolerance, redundancy, speed, and capacity. Apress has published a book that it promotes as being written specifically about this topic; it weighs in at a comfortable 302 pages of actual material.
In general, I find Unix Storage Management to be a good primer on storage management. However, I am a little disappointed in the lack of focus on actually administering storage. When requesting the book I assumed that I would learn how to pull up my sleeves and tune and tweak file system performance, optimize access to network-based storage, and in general get a real feel for managing storage in a Unix environment. But alas, that isn't the case. Unix Storage Management deals mostly with the higher-level details of understanding how storage works, determining what kind you need, and then working to integrate that storage into your network.
This isn't to say that the book doesn't do a good job of introducing the reader to the major components of modern Unix storage systems. Indeed, technologies covered include RAID, SANs, NAS, and backups, to name just a few. Kampa and Bell actually do a good job of introducing this material, but they do not treat the subject matter in great depth. Essentially, after reading the text, readers will have enough knowledge to do more research and know what they are looking for, but they doubtless would not be in a position to actually implement a solution in a demanding environment.
The target audience for this book, whether intentional or not, is IT managers and those who want a broad overview of how storage systems work. Administrators who are in the trenches would also enjoy skimming this book, if for no other reason than to remind themselves of the technologies available to them. Also, most administrators will look favorably on the chapter "Performance Analysis", which does a rather good job of detailing the process of collecting and analyzing performance information on storage systems. All in all, this is not a bad book as long as you aren't expecting to walk away with guru-like powers over Unix storage systems.
Dustin Puryear, a respected authority on Windows
and UNIX systems, is founder and Principal Consultant of Puryear
Information Technology. In addition to consulting in the information
technology industry, Dustin is a conference speaker; has written
articles about numerous technology issues; and authored "Integrate
Linux Solutions into Your Windows Network," which focuses on
integrating Linux-based solutions in Windows environments.
...making Linux just a little more fun! |
By Raj Shekhar, Anirban Biswas, Jason P Barto and John Murray |
talk was the first chat program, developed on UNIX long ago when there was no MS trying to capture the Internet. The computing world was a free land then: you could share any program with anyone, and change it to suit your needs - much like what Free Software is trying to do today. talk is still available with UNIX & GNU/Linux.
From talk, other chat concepts developed. IRC came first; then other companies followed, and the ICQ, Yahoo, MSN, Jabber, AIM etc. chat systems were developed.
I shall try to touch on each of the chat systems here.
To chat with a friend, all you have to do is the following:
[anirban@anirban anirban]$ talk <username>@host <tty>
i.e. if the user name is raj (it will be the same as his login name on the system) and his host computer is www.anyhost.com, then it will be
the system) and his host computer is www.anyhost.com then it will be
[anirban@anirban anirban]$ talk raj@www.anyhost.com
You may be wondering what tty is. Suppose your friend has opened many terminals - the terminal to which you want to send the message is specified by the tty number. Numbers start from 0 and only integers are allowed.
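When you and your friend are logged into the same machine, the who command shows which terminals are in use, so you can pick the right one. For example (the user name and session details here are invented):

[anirban@anirban anirban]$ who
raj      ttyp0    Apr  6 10:02
raj      ttyp1    Apr  6 10:15
[anirban@anirban anirban]$ talk raj ttyp1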
You can do the above with write too.
[anirban@anirban anirban]$ write <username@host> <tty>
If you do not want to receive any chat invitations or messages, give the command:
[anirban@anirban anirban]$ mesg n
To remove the blocking, do:
[anirban@anirban anirban]$ mesg y
If you are a GUI lover and a heavy Yahoo or MSN chatter, you may not like this kind of chatting, but for many of us who like GNU/Linux this old system is still gold.
The main difference is that you do not have to sign up to get an ID or password. So what do you do instead? Choose a nickname and a host (IRC server) to connect to. Since IRC is not run by a single company, you have to know the host address, just as you have to know a URL to visit a page on the Internet. You can get the addresses of different hosts from the internet, along with the topics they are dedicated to; for example, irc.openprojects.net is dedicated to the betterment of open source projects and open source developers.
So you have to provide your nickname and the host you want to connect to. If the nickname you pick is already taken then you'll have to provide another nickname.
IRC newbies should check out the IRC Primer before using IRC for the first time.
The first window of Xchat will appear. Provide the nickname you would like; you can provide more than one. In case a nickname is already taken in a room, xchat will use the next nickname you provided, else it will pick the first nickname in the list. You can also provide your real name and the user name you want to appear as (generally you do not have to provide these; the system guesses them from your system login name and real name).
Now choose a host from the list of hosts and double-click it, or click on `Connect' at the bottom. A new window will open with some text flowing in it. It will take a little time to connect; after connecting, it will show the rules you should follow to chat on this host. Since IRC is generally a volunteer effort by good-at-heart people and not run by any company, please try to follow them or you may get banned. Maintainers of IRC chat rooms are very strict about the rules. (That is why the chatting experience is much better here than on Yahoo or MSN.)
Now you will see a single-line text box where you can write both what you want to say and commands to navigate. Commands all start with a / (i.e. a slash). To get the list of rooms (or channels) on the host, type /list . You will see all the rooms; choose the one which suits you, then type /join #<roomname> and press `Enter'. Please note that you always have to give the number sign (#) before a room name.
Now you will enter that room and can start chatting. At the extreme right there will be a list of all the users/chatters in that room; selecting any one will get you info about him/her. You will find many buttons at the right side of your chat window; by selecting a user and clicking the buttons you can ban or block that user, get info about him/her, invite him/her to a personal chat or even transfer files over IRC.
So I think you will now be able to chat on IRC. Some day you may even meet me there. I generally live on the host irc.openprojects.net, in the room linux (remember the number sign: /join #linux).
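To sum up, a first session needs only a handful of commands typed into that text box; /part and /quit are two more standard IRC commands worth knowing (#linux is just an example):

/list              (get the list of channels on this host)
/join #linux       (join the channel named linux)
/part              (leave the current channel)
/quit              (disconnect from the host)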
Since it is similar to the Windows version, you will find `Add Friends', `Your Status', `Ban' etc. buttons in their usual places, generally as part of a menu at the top of Yahoo Messenger. Currently you can send files, invite people to group chat, and get email notifications.
You can find the list of printers that are supported by GNU/Linux at linuxprinting.org. I have used RH 7.3 and an HP 810c printer here as an example.
Just remember that generally speaking, you'll need to be running as root to install, upgrade or delete packages. However any user can run queries.
To remove a package from the command line, use rpm -e followed by the package name: rpm -e mozilla, for example. Alternatively, start your graphical tool, either from the menu or a terminal. You'll
see a list of all the installed packages. Click on the package you want
to remove, then click the `Uninstall' button. Note that if there
are other packages installed that require files from the one you are deleting,
a warning will appear and the uninstall won't go ahead. You can override
this by using
rpm -e --nodeps mozilla (command line),
or selecting "Ignore Dependencies" (GUI tool), but be aware that this
will break the other programs.
GNU/Linux is subject to the same problems, except that RPM will advise you of the problem before the program is installed. Many problems can be avoided when you install Linux - selecting Gnome and KDE for installation will help, even if you don't intend to run them, as many other programs use the same libraries.
So what do you do when RPM complains that a package can't be installed because of missing packages or files? Write down the missing package/file names, and check your installation CD-ROMs for packages with similar names to the ones required. You can use the rpm -qpl command to view the files supplied by a not-yet-installed package. Often it is just a matter of installing these packages to resolve the problem. Sometimes, though, it leads to even more dependencies, so it can be a rather lengthy process.
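For example, suppose an install failed because libexample.so.2 was missing (the package and file names here are invented); the hunt might go like this:

rpm -qpl /mnt/cdrom/RedHat/RPMS/example-libs-1.0-1.i386.rpm | grep libexample
rpm -i /mnt/cdrom/RedHat/RPMS/example-libs-1.0-1.i386.rpm

The first command lists what a candidate package on the CD would install; once you find the package supplying the missing file, install it, then retry the original package.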
*Warning*
Breach of copyright is taken very seriously in most parts of the world
- this article in no way encourages users to break the law.
Ordinary audio CDs like the ones you'd play in your home stereo differ from data CDs in that the music is recorded onto the disk as raw data, that is, there is no file system on the disk. That's why if you put an ordinary audio CD into your CD drive and try to read the contents in a file manager, you won't find anything. Your computer is looking for a file system where there is none. An audio CD doesn't need to be mounted to be read or burnt - unlike data disks.
Data CDs on the other hand use a file system to organize the way in which the data is written to and read from the disk, similar to the file system on a hard disk. Music files in formats such as .mp3, wav, or ogg are written onto data CDs using a file system just like any other CDROM. These CDs can be opened in a file manager or from the command line, and the music played using the appropriate program.
To rip a track from an audio CD with cdparanoia, type:
cdparanoia n
`n` specifies the track number to record. By default the track will be recorded to a file named cdda.wav. If cdda.wav already exists it will be overwritten, so be careful if you are recording several tracks! You can specify your own file name like this:
cdparanoia n filename.wav
To record the entire CD type: cdparanoia -B
The -B in the above command simply ensures that the tracks are put into separate files (track1.wav, track2.wav etc.). Cdparanoia has many more options and an easy to understand manual page; type man cdparanoia to read it.
To encode a .wav file to mp3 with bladeenc, type:
bladeenc filename.wav
This will produce a file with the same name as the source file, but with the .mp3 suffix. If you want to specify a destination filename you can add it to the end like this:
bladeenc filename.wav filename.mp3
By default, bladeenc will encode the file at 128kbits/sec; this is the most commonly used bitrate and results in a very compact file of reasonable quality. Higher rates can be specified, giving better sound quality at the expense of a slightly bigger file, though it's hard to detect any improvement in sound quality at bitrates above 160kbits/sec. To convert a file at 160kbits/sec use:
bladeenc -160 filename.wav
To encode to Ogg Vorbis format instead, use oggenc:
oggenc filename.wav
As with bladeenc, the encoding quality (and hence sound quality) can be specified. This is done with the following command:
oggenc -q n filename.wav (where n is the quality level)
The default level is 3, but can be any number between 1 and 10. Level 5 seems to be roughly equivalent to an mp3 encoded at 160kbits/sec.
To convert an mp3 file back to a .wav, use mpg123:
mpg123 -w filename.wav filename.mp3 (note - the destination filename comes first)
Note also that there is some slight loss of sound quality when a .wav file is converted to mp3 format, and this isn't regained when converting back to .wav - so if possible, you should try to use .wav files that have been ripped from an audio CD rather than converting back from mp3s.
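If you have a whole directory of mp3s to convert, a small shell loop saves typing; run it in the directory containing the files:

for f in *.mp3
do
    mpg123 -w "${f%.mp3}.wav" "$f"
done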
To even out the volume levels across your .wav files before burning, use normalize:
normalize -m /path/to/files/*.wav
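Once the .wav files are ready, cdrecord's -audio mode burns them to an audio CD. A typical invocation looks like this (the speed, dev and track names are examples; -pad rounds each track up to a whole sector):

cdrecord -v speed=4 dev=0,0,0 -audio -pad track01.wav track02.wav track03.wav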
Of course, your speed and device numbers might be different - you can use cdrecord -scanbus to find the device address, and the speed setting will depend on your CD burner's speed rating. In general, burning will be more reliable at slower speeds, especially on older machines.
mkisofs -R /path/to/folder_to_record/ | cdrecord -v speed=4 dev=0,0,0 -
Don't forget the hyphen at the end! As in the example for burning audio CDs, you might have to use different speed and dev numbers. Older or slower computers might have difficulties running both mkisofs and cdrecord at once - if so you can do it as two separate operations like this:
mkisofs -R -o cdimage.raw /path/to/folder_to_record/
This creates an image named cdimage.raw. Then burn the disk:
cdrecord -v speed=4 dev=0,0,0 cdimage.raw (using suitable speed and device settings..)
Xcalc is a scientific calculator desktop accessory that can emulate a TI-30 or an HP-10C. Xcalc can be started from a terminal emulator or from the Run dialog box by typing xcalc. It takes the following command-line argument (among others): -rpn, which selects the HP-10C emulation with Reverse Polish Notation (the default is the TI-30 style).
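For example, to start each of the two emulations:

xcalc           (starts the TI-30 emulation - the default)
xcalc -rpn      (starts the HP-10C emulation, using Reverse Polish Notation)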
KCalc can be started by typing kcalc at the command prompt or in the Run Program dialog box.
You can get Adobe Acrobat Reader from: http://www.adobe.com/products/acrobat/readstep2.html
The Linux-Office Site is
a very useful resource for Linux office apps.
The KOffice website
The Gnome-Office
website
Codeweavers Crossover
Office can run Windows apps like MS Office, Lotus Notes and others
under Linux.
Setting up 3D graphics with Linux used to be a bit tricky, but now many modern distros will set up the appropriate drivers during installation, giving accelerated 3D out of the box. When you are setting up your machine, keep in mind that it isn't the brand of graphics card you have that is important, but rather the brand of chipset it uses. In other words, you would use ATI drivers for a card with an ATI chipset, regardless of its brand. Currently, most Linux gamers seem to prefer nVidia based cards, and with good reason. NVidia write their own (closed source) drivers for Linux; these are easy to install and set up and their performance is generally on a par with their Windows counterparts. ATI based cards are also popular, and ATI have recently released unified drivers for Linux users with their higher end cards. Check out this site to see what cards are supported. As well as suitable hardware, you'll also want to use a recent version (>4.0) of XFree86. Later versions have much better 3D support, so if you are having problems an XFree86 upgrade should be one of your first steps.
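Once the drivers are in place, two quick checks will confirm that hardware acceleration is actually being used (both utilities ship with most distributions' GL packages):

glxinfo | grep "direct rendering"     (should report "direct rendering: Yes")
glxgears                              (gives a rough frame-rate figure)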
Once you've downloaded the packages, you should exit X (not strictly
necessary, but it makes recovery easier if things go wrong..) and install
the kernel package and then the GLX package. If you are upgrading rather
than installing, nVidia recommend removing the old GLX package first instead
of upgrading over it. Now all you need to do is edit a couple of lines
in your XF86 configuration file (usually this will be /etc/X11/XF86Config-4).
Assuming you already have an XF86Config file working with a different driver
(such as the 'nv' or 'vesa' driver that is installed by default), then
all you need to do is find the relevant Device section and replace the
line:
Driver "nv" (or Driver "vesa")
with
Driver "nvidia".
In the Module section, make sure you have:
Load "glx"
You should also remove the following lines:
Load "dri"
Load "GLcore"
if they exist. Now restart X to use the new drivers. If you have any
problems, check the `XF86' log file (named `/var/log/XFree86.0.log'
or similar) for clues. Also read the documentation on the nVidia website
and in the README file included in the NVIDIA_GLX package.
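When everything is in place, the relevant fragments of XF86Config-4 should look something like this (the Identifier string is just an example - keep whatever yours already says):

Section "Device"
    Identifier "NVIDIA GeForce"
    Driver     "nvidia"
EndSection

Section "Module"
    Load "glx"
    # the Load "dri" and Load "GLcore" lines have been removed
EndSection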
Another way to run Windows games is to use an emulator like Wine, or WineX. The list of programs that will run well under Wine is growing steadily, though for gaming you'll probably be more interested in WineX by Transgaming. WineX is a commercial offshoot of the Wine project, and while Wine aims to enable Windows programs in general to be run under Linux, WineX focusses exclusively on games. Many Windows games install and play perfectly with WineX, including Max Payne, Warcraft III, Diablo II, The Sims etc. There is a list of games at the TransGaming website; however, I have found that there are some games not listed that will still play under WineX. Try searching Google for the name of the game plus "winex" for help on unlisted games. You can download the WineX source from the CVS tree for free, but compiling and configuring can be confusing for a newbie. Much better are the precompiled packages that are available to subscribers. Subscriptions cost US$5 per month, with a 3 month minimum. There are some other benefits for subscribers, though I think the binaries alone are worth the price.
The
Linux Gamers HOWTO - I can't recommend this one highly enough; if you
are serious about gaming with Linux, read this doc!
Linux for Kids - This site
has lots of links and info about games and educational apps. You don't
have to be a kid to enjoy this stuff - adults will probably find some good
stuff here too.
The Linux Game FAQ - A
comprehensive list of Frequently Asked Questions about Linux gaming.
The Linux Game Tome
- Definitely worth a look!
New Breed Software
- Bill Kendrick and co. have written some good games, mainly for kids.
Racer - a promising race car
game with extremely good graphics and physics. Not finished yet, but still
playable, and it makes a nice change from the shooters.
Transgaming's WineX Homepage
LinuxGamers is another interesting
game site.
However, by doing some or all of these things you should end up with a system that boots more quickly, has more disk space and slightly more free memory, and shows a small but noticeable improvement in performance. The one thing you can do that will have a profound effect on performance is run lightweight, efficient software, so you should make that your first priority when building a fast Linux desktop.
I'll assume you'll be running a SysV-type system, as this is the most common and what you'll have if you are running a Red Hat-type distribution. SysV simply refers to the way services etc. are started at boot time. If you are running some other system, you can still clean up the boot process; check your distribution's documentation for details. It's a good idea to browse through any documentation that came with your distribution anyway. This might be in the form of HTML files or a printed manual, and with many modern distributions is very comprehensive. The documentation should be able to provide you with details of any variations to the boot process used by your particular distribution, though I think the common distributions are pretty much all the same in this regard.
If this is a fresh installation, you should make sure all your hardware is properly configured first. Linux has really come a long way as far as hardware recognition goes, and chances are you won't have to do anything, though things like sound cards sometimes have to be setup manually. Once you are sure everything is going to work, you can continue with the tuning ....
After the kernel is kicked into life by GRUB or LILO or whatever, the following steps occur (with possible minor variations):
init starts and reads /etc/inittab to find the default runlevel;
the system initialization script (/etc/rc.d/rc.sysinit on Red Hat-type systems) runs;
the scripts in the /etc/rc.d/rcN.d directory for that runlevel run in order, starting the various services;
finally, you get a login prompt (or graphical login screen).
The fewer services you start, the sooner you reach that login prompt, so disable any you don't need.
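On a Red Hat-type system, the chkconfig tool is the easiest way to prune services from the boot sequence (sendmail here is only an example - disable only what you are sure you don't need):

chkconfig --list | grep ":on"      (show the services that are enabled)
chkconfig sendmail off             (stop one from starting at boot)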
Do you really need six editors, four file managers, five shells, three ftp clients etc.? Don't be surprised if you get rid of a hundred megs or more of stuff. Packages like the TeX-related ones, Emacs/XEmacs, and the various emulators are never used by many people, yet they occupy lots of space. If you are doubtful about removing some packages, keep notes so you can reinstall them later if you have to.
Many distributions also install lots of documentation (check out /usr/doc or /usr/share/doc ). You'll probably find that there are only a few files in there worth keeping, and remember most of this stuff is available on the Web anyway. The du tool is invaluable for finding disk hogs. Also look for core files left over from crashes; these are only really useful to debuggers and can be deleted.
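For example, two commands that quickly locate the hogs (adjust the paths to suit):

du -x /usr | sort -n | tail -20       (the twenty biggest directories under /usr)
find / -xdev -name core -type f       (leftover core files from crashes)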
Running hdparm without any flags (or with the -v flag) will display the current settings. To see the current settings for my first hard disk (/dev/hda) for example, I would use: hdparm /dev/hda. To do a basic check of the speed of the first hard disk I would use: hdparm -Tt /dev/hda. Some more commonly used flags:
-c   query/set 32-bit I/O support
-d   get/set the using-DMA flag
-m   get/set the sector count for multiple-sector I/O
-u   get/set the interrupt-unmask flag
-X   set the IDE transfer mode
I guess the logical way to use hdparm would be to find out what your disk supports, then set hdparm accordingly. More commonly though, trial and error is used, changing one setting at a time and measuring the performance after each change. Don't use settings recommended by someone else; while they may have worked perfectly on that person's disk, your disk might be completely different and the results may not be good. There are several tools available for testing disk performance; one of the better known ones is bonnie. And remember the changes will be lost when you reboot, so if you want to make them permanent, you'll have to add them to a boot script like `/etc/rc.d/rc.local'.
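A cautious session might look like this (/dev/hda and the flags are only examples - re-measure after every single change):

hdparm -Tt /dev/hda                        (baseline speed test)
hdparm -d1 -c1 /dev/hda                    (try enabling DMA and 32-bit I/O)
hdparm -Tt /dev/hda                        (measure again)
echo '/sbin/hdparm -d1 -c1 /dev/hda' >> /etc/rc.d/rc.local   (make it permanent)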
If you are serious about tuning your Linux box, you'll need some benchmarking tools. To get started, take a look at this site: The Linux Benchmarking Project.
Obviously you'll be aiming to conserve memory as much as possible. Use the free command from a terminal emulator to see memory usage details. Ideally, you'll be able to balance usage against available memory so that swap isn't used.
You can save some memory by using a plain background on your desktop, rather than an image file.
Other useful tools are ps -aux (shows details of running processes) and top (similar to ps, but continually updating).
Help reduce the time it takes X to update the screen on low-end machines by not using a greater colour depth than necessary, e.g. use 16-bit instead of 32-bit. You can check X's performance with x11bench, which is often installed by default.
I have completed my Bachelor of Information Technology at the University
of Delhi. I have been a Linux fan since the time I read "Unix Network
Programming" by Richard Stevens and started programming in GNU/Linux in my seventh
semester. I have been trying to convert people right, left and center ever
since.
I am Anirban Biswas from Calcutta, India. I have been using Linux for 4 years
(from RH 6.1 to RH 8.0, then to MDK 9.0). Currently I'm in the
final year of computer engineering.
I am from Pittsburgh, Pennsylvania and have been using Linux for 7 years. My
first distro was Redhat 3 or something like that, back when configuring the X
server was a real adventure. I'm currently an avid Slackware fan, and have been
working in software development for Lockheed Martin Corporation for three
years.
John is a part-time geek from Orange, Australia. He has been using
Linux for four years and has written several Linux related
articles.