Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Ben Okopnik, Dan Wilder, Don Marti
...making Linux just a little more fun!
From The Readers of Linux Gazette
Hi
I have tried lots and lots and lots of things to get DNS set up properly, but can only get it to work intermittently. What I am trying to achieve is a fairly small, fairly simple, multi-domain DNS host for a few domains that my company owns, and then a few that we are/will be hosting in the near(ish) future. I have some Linux boxes (RH 6.2/RH 7.x) configured for various tasks like email, database servers, Apache, Samba etc. Some of the DNS authorities are with register.com, some with various ISPs. I can modify the register.com ones once I have the thing working, and will ask the ISPs to "hand over" the SOAs as well, but first I have to get the confounded thing to work properly.
I shall describe what I would like, using thumbsuck names and ip's, and would be VERY happy and appreciative if you could tell me how the config files should look.
Let's say I own/have howdoesdnswork.com as my main domain, and host some others like:
whywontthiswork.net imconfused.co.za plshelp.org.za
I plan to run (I think I have it working, but can't test properly till DNS works) a virtual-domain mailhost (qmail-based) system. I have a fixed IP/permanent connection for my main (own) domain, which is on IP subnet 99.8.77.0.
DNS server (primary) is/will_be 99.8.77.13 (using BIND8)
DNS server (secondary) is/will_be 99.8.77.12 (also using BIND8)
Mail server for howdoesdnswork.com (and in fact all above) is 99.8.77.2
Web server (for all above using apache virtualhosting) is on 99.8.77.4
You're probably thinking "Why is someone so clueless even attempting something like this", but I've gotta start somewhere if I'm ever gonna learn. Pls pardon my ignorance (& I'll pardon your sniggering ).
From what I have read, tried and struggled with, I need zone files for each domain, each of which contains host info etc. Here are my attempts, comments etc. still included ..... followed by a desperate request (on my knees, tears running down my face etc.) for assistance/guidance/criticism.
See attached Murgatroyd.dns-configuration-files.txt
I would also like to get some info on "mail server splitting" - as in having a local mail server (proxy) on a DSL-connected LAN which forwards internet email to a main server (mailhost) on a permanent connection, but delivers local mail locally, and which then downloads mail from the "mailhost" to the local "mail proxy" at a polled interval. But I'm probably pushing my luck here, so I'll post this one another time..... unless of course...??? I have a working system using micro%$#* but would like to get rid of ALL M$ products as soon as humanly possible.
Thanx a stack Trent
For DNS questions there's a great resource called "Ask Mr. DNS" - but he won't answer generic requests; the domains would have to be reachable from the net. Still, his archives are catalogued by category at http://www.acmebw.com/cats.htm
This certainly looks like a solvable problem; if your patience wears too thin with Micro[snip] and your time too short, you might want to dig into the Consultants Howto. While by title you'd think it was "Howto become a consultant" it's really "Howto find a consultant". There's lots of 'em, but you may recognize a few names in those pages... Check out http://www.tldp.org -- The Scissors
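[Scissors] Meanwhile, here's roughly the shape a forward zone file takes, using Trent's example names and addresses. This is an untested sketch from the clippings desk - the serial number, TTLs and hostnames are placeholders, so season to taste and check it against the DNS HOWTO before serving:

$TTL 86400
@       IN SOA  ns1.howdoesdnswork.com. hostmaster.howdoesdnswork.com. (
                2003031001      ; serial, YYYYMMDDnn
                10800           ; refresh
                3600            ; retry
                604800          ; expire
                86400 )         ; minimum
        IN NS   ns1.howdoesdnswork.com.
        IN NS   ns2.howdoesdnswork.com.
        IN MX   10 mail.howdoesdnswork.com.
ns1     IN A    99.8.77.13
ns2     IN A    99.8.77.12
mail    IN A    99.8.77.2
www     IN A    99.8.77.4

The hosted domains (whywontthiswork.net and friends) get near-identical files whose NS, MX and A records point back at these same four addresses.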
Hi
I hope this is a legitimate question: at faked.org a person subscribes to Info@fake.nl with his e-mail address special.person@faked.org. Okay, fetchmail collects the e-mails and sendmail distributes them on a Linux server. What is weird: the sender sends to himself... in pine one sees: From: Info@fake.nl To: Info@fake.nl
Now I would not deliver such an e-mail.... But Linux is more polite and sends it to the local person of last resort.
I cannot but forward the e-mail to special-person@faked.org. I do not want to mess with the sendmail and fetchmail configurations, as they do perfectly what they should do with normal e-mails.
For your information I have put here-under what Outlook Express shows. One blames me for not delivering the e-mail in the normal way... (Help!)
In outlook express one sees:
See attached Chris-de-Boer.headers-oe.txt
This is a multi-part message in MIME format. I had extra fun snipping the bulky HTML attachment into shreds small enough to wheelbarrow off in a tilted over greater-than symbol. Clipping the equals marks off the line ends was gravy. -- The Scissors
Hi,
Excellent material on this site!!!
I have a small problem. I have two subnets: 192.149.1.0/24 and 192.168.52.0/24. I want all the hosts on one subnet to see the hosts on the other. In other words, I want NO BLOCKING of any service from either side.
I have been able to make the 168 network hosts see and access the 149 hosts. From the 149 subnet I can ping a host on the other subnet, but I can't, for example, see a PC's shared directories. Security is not an issue as they are both internal networks.
I am running Coyote Linux on a floppy.
Any help will be greatly appreciated and thanks in advance.
Sesh
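[Scissors] No Gang answer made it into my clippings box for this one, so a small hint from the editorial desk: when ping crosses the router but Windows browse lists don't, the usual culprit is that NetBIOS name resolution works by broadcast, and broadcasts stop at the subnet boundary. Connecting to a share by IP address usually still works, and a static name mapping is an easy workaround. For example (address and name invented for illustration), a line like this in the Windows client's lmhosts file, or in /etc/samba/lmhosts for smbclient:

192.168.52.10   FILESERVER

A WINS server is the tidier fix if the networks grow.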
Hi...
I had a similar problem, the other way around... The BIOS has a function for PS/2 keyboard and mouse power-up; check that it is set for your needs... The problem is that it doesn't work with all OS shutdowns. Don't know why that is...
Ruben Hansen alias GbyTe
I assume you mean the APM features in the BIOS. I wonder if perhaps this is BIOS-specific somehow. I know that almost all BIOSes are "standard" in terms of options, but you never know......
Thanks, Ruben!
Hi
Hello,
I got a lot of useful answers from this group for my previous question, [TAG] RPM - Installing Packages. Thanks a lot to Thomas, Ben, Breen, David and Rick.
<blushing>...you're welcome.
Actually I'm trying to develop a Java Packager Tool for handling *.rpm files on a Solaris box (to simulate the functionality of the RPM tool, which is implemented in C). It should support packaging operations like Install, Query, Verify, Erase etc.
Sounds good.
As a first task, I'm trying to simulate the RPM Query option, to query installed packages. If we query an uninstalled package, it should say it's uninstalled. It should generate package info as we get from: rpm -qi <file>. Also to query the list of files, pre & post installation scripts, list of dependencies, list of dependencies covered and not covered. I don't know how to use the <rpmdb.h> file for generating this information.
Hmmm, from my limited knowledge I think that you can just query the RPM database directly without going via <rpmdb.h>.
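[Scissors] For reference, the command-line tool already exposes most of what's being described, so a first cut of the Java tool could shell out to it while the library calls get sorted out. Package names here are just examples:

rpm -qi bash                        # the per-package info block
rpm -ql bash                        # list of files
rpm -q --scripts bash               # pre/post (un)installation scripts
rpm -qR bash                        # dependencies required
rpm -q --whatprovides libc.so.6     # which installed package covers a dependency

Querying an uninstalled package simply reports "package ... is not installed", which maps neatly onto the requirement above.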
Can anybody (in this group) help me in this regard?
Which part are you implementing in Java? (I got 93/100 for that last semester...)
Any other suggestions that would help me to proceed in this project?
Sounds like an interesting project. Readers, if any of you have done the same, or seen a project with this in progress, let us know! -- The Scissors
Hi all,
I know that this list is called "linux-questions-only" and therefore I have to prove that it is indeed a Linux question! So: I want to travel to the USA with my Linux laptop and need some reliable internet access via modem or ISDN (is ISDN available there, and if so, which standard is used?).
In Germany we have something called "Internet by call" where you call a number and pay via telephone bill, you don't have to register and don't have to pay in advance.
What are the options to get a Linux laptop on the net in the USA? Since some of you guys live there, you may be able to answer the question. A quick glance at Google results has only shown me some calling-card providers where you buy "points" and then use the web until they are empty. AOL (since it's not Linux-compatible) and T-Online global access (since I'm not a customer) are not options.
"If liberty means anything at all, it means the right to tell people what they do not want to hear."
George Orwell
Heh, normally I snip sig blocks, but this seems particularly apt to the BitKeeper related mail a little later on this page. -- The Scissors
[Tux] You can do what I do, since I travel a lot: use AT&T. It's $20/month, and they have dial-up numbers pretty much everywhere; AFAIK, more so than any other service. The only caveat is, no outgoing SMTP - you have to use their mail servers to push your stuff out. For most people, that's not a problem; I just find it to be an annoyance. Easy fix: set up one conffile for a smarthost and one for a local MTA and swap them as necessary. For extra ease of use, don't run in daemon mode - just invoke the MTA per-message (and an occasional cron job in case anything gets stuck in the pipe.)
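[Scissors] A sketch of the sort of thing Tux describes, assuming a sendmail-compatible MTA and invented file names: keep two config files around, say /etc/mail/sendmail.cf.local and /etc/mail/sendmail.cf.smarthost, copy whichever one applies over /etc/mail/sendmail.cf before you dial, and let cron flush anything stuck in the queue:

0 * * * *    /usr/sbin/sendmail -q

With no daemon running, each message still gets handled when it's submitted; the hourly -q run just retries whatever couldn't go out the first time.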
[Swirl] You know, if I faced that situation, I'd rsync or scp my outgoing mail over to my own MTA. That's what Andrew Tridgell does. (For that matter, I usually just ssh over to my MTA box anyway, where I have mutt perpetually running under GNU screen, interacting directly with the local mail spool.) They don't block port 22 (ssh), do they? That would be a deal-breaker.
[Tux] You know, I just got a shell account with <freeshell.org>; that is an excellent idea. I'd really like the ability to keep 'Fcc's on my laptop, but I can always pull down the files.
[Swirl] Entrusting outbound mail to AT&T's smarthost seems an unjustifiable compromise, in any event. Not acceptable.
[Tux] Parallels my own attitude; however, I didn't have the means to support it until now.
These are interesting thoughts but, quite untrue to their nature, the Gang didn't answer the question as offered. If you know of a purely by-the-call internet provider in the United States, or you happen to be one, chime in and we'll see that you get noticed. At the moment the closest things I can think of are internet coffee shops, which have taken to selling day passes to their wireless hookups or hourly time while you enjoy the coffee, and hotels in urban areas, which are starting to offer high-speed access, also paid by the day. You'd need a wireless or ethernet card respectively. Europeans please note that the phone system in the US rarely offers direct-plug ISDN - that's considered a business-class data line around here. -- The Scissors
I read your 'Greeting' in the latest Linux Gazette online and found it interesting, and correct. The last company I worked for actually switched from CVS to BitKeeper, for all the cool features. Very long story short: after one year of us debugging their product, paying tens of thousands of dollars for the privilege, and never having Larry McVoy stop being a pain in the ass, we dumped them and went back to CVS, and all was well (after they threatened to sue us, etc... at the time we definitely had superior lawyers, and they knew they had no case -- but why even threaten? Leaves a bad taste.) Afterward, we missed changesets a bit, but not as much as you might think. And we got so much better performance for "simple things" that it made up for it, in our minds. Oh, and we could save the tens of thousands of dollars, which was in line with our whole philosophy, anyhow.
My. I've no idea why large enough corporations think being a poor sport would keep the customers who are on the verge of flying the coop. One would think the opposite happens more often - attempts to lure one back and all.
"Large enough corporations" I suppose -- BitMover is/was only a handful of folks. I felt they were very deceptive about the quality of their code (which did improve during the year we spent with it, but should have been that way to start with), and were obviously trying to exploit Linux (by convincing Linus to use BK), as a marketing tool. I don't believe LM has ever contributed to an opensource project, if that tells you something.
My own experiences of these systems have been with the bits only, and not so much with the personalities that drive them. I'm all for people having pride in their work... but a little respect around the neighborhood here pays us all back best.
(Search Linux Gazette back archives on the title "The Coin of the Realm" for an interesting editorial on that. Issue 65, I think.)
It was issue 64 actually. OpenProjects has become http://www.freenode.net and if you enjoyed the concept, you might be interested in reading ESR's two papers that came after "The Cathedral and the Bazaar" ... "Homesteading the Noosphere" and "The Magic Cauldron" ... since he explores the anthropological concepts of "gift culture" and other modes of economics in more academic detail. -- The Scissors
It's too bad BitMover, the company, isn't nearly as cool as BitKeeper, the product, is. I told Linus in email (not sure if he ever even read it) way back when to be careful, that Larry is not really a "plays well with others" kind of guy. (This is probably the understatement of the week, but I'm in a charitable mood.)
Heh. Linus has been known to declare himself "not a nice guy" on occasion too. I've always found him gentlemanly, but I wasn't toe to toe with him on the right or wrong ways to implement a deeply integral kernel function, either.
Your comments were right-on, and interesting to read. I don't usually visit the site but maybe I'll try to do so more in the future. (And if you're curious, my current shop is all-windows, that's what I inherited... trying to slowly turn things towards opensource solutions, but it's quite the effort. I thought switching from SourceSafe would be hard, but it turns out they don't even use that!! Wow. We're going to use cvs hosted on linux, with the excellent and free TortoiseCVS windows clients.)
Ben Margolin
I'm glad you enjoyed my mangled thoughts on it all. Your response goes to show that one of the lessons of open source remains the ability to vote with our feet, ultimately enforced by the right to just plain re-do it ourselves.
Let us know if there's any good stuff you'd like to see in our pages!
This is with reference to "Perl One-Liner of the Month: The Case of the Evil Spambots", which was published in LG #86. I especially enjoyed your definition of Gibberish.
Here is something I found in my fortune files. I am pretty sure wordsmithing in the Marketroid language is done using this procedure.
I wouldn't be surprised at all... Of course now I've just got to turn it into a Perl script.
See attached gibberish.pl.txt
There's something to convince your boss that Perl is the language of choice...
Thanks for writing, Raj - hope you're enjoying the articles! -- Tux
-- Jimmy O'Regan
Extra cool. I loved the reader's comments. -- Tux
This looked interesting enough to toss the clipping in. Maybe we should have stuffed it in News Bytes, but the air compressor wasn't in at press time. The "groups of linux users everywhere" is a list of LUGs, a service also hosted at SSC. -- The Scissors
Some of you may have seen the recent story by ESR on NewsForge about SCO suing IBM for billions over IBM's "disclosure" of SCO intellectual property to the "free software" community.
In a nutshell, SCO bought the Unix source and related IP from Novell in 1995. Caldera (which was never much of an "open source" company) recently became SCO, and since then they have been looking high and low for someone to sue over their IP. A rumour surfaced a while back that they had retained David Boies to sue people, which they promptly denied. Now, guess who's leading the charge against IBM? Yep. SCO has become openly hostile to the Open Source community, and this looks like the desperate effort of a dying company to grab money by suing people rather than making a better product. IBM has the deepest pockets, so they get sued first.
Anyway, the whole point of this is that I recently received a package of SCO Linux software for distribution to my LUG. You may have received such a package as well. If so, I would encourage you to send it back to SCO with a note explaining (lucidly) why. I don't know that it will ultimately do any good, but maybe it will get their attention.
Paul M. Foster
President
Suncoast Linux Users Group (SLUG)
I am sorry if this message is in HTML format; Hotmail doesn't give a plain text option, so I don't know what it's doing.
I believe you need to see: http://expita.com/nomime.html#hotmail -- Swirl
I enjoyed reading the letters page. It reflected the range of responses I received quite well. Just for your information, over 500 of your readers downloaded my library during February.
Stephen
On behalf of our missing Editor Gal, thanks, Stephen! It's good to know we snipped it just right. Loyal readers, I've also snipped the ensuing fragmented discussion about the nature of languages that sprung up among TAG ... you'll probably see something of that in a later issue. -- The Scissors.
What a week. I looked high and low and all over the offices. Those editors have gone and left me here to deal with everything on my own. Luckily I have quite a bit of practice at the grindstone and keeping my wits sharp.
I thought I was being a shear genius when it occurred to me to check the back room and see if the Answer Gang was in there. This is their column, after all. I'm sure they'd lend me a hand. I can snip right through this thing.
Imagine my surprise when there just aren't any people to be found around here at all. They must be off at a conference or something. I can't even find Ben's dark glasses.
OK, fine. I can do it. I'm the Editor's Scissors and I've seen all of the good stuff that has ended up in /dev/cuttingroom as the great stuff makes it to print. I raided the loose bits on the desk (I made nice use of this old SCSI adaptor for that!) and I've rounded up a few buddies to make up for the missing Answer Gang. I do believe you'll recognize a few of these characters. To introduce any that you don't recognize, I've (as per the editing guidelines) included their bios.
For extra credit, if you can solve the mystery of who all these figures are standing in for, feel free to send our staff a note. I'm sure when they've stopped fooling around they'll see I've done a fair job.
Hoping to be back in the editor's hands next month... have a great one, readers!
Snippings Provided By The Wizard's Hat
From Billy a.k.a. CustomerMarket
Hi All
I am trying to configure my two computers, with Linux and Windows 2000, into a network. I am using a DSL modem and router. I would really appreciate it if somebody could spare a few ideas, because I am on the verge of breaking my head. (Not literally, though.)
Thank you all
Billy
[Wizard Hat] Okay. You install and configure Linux and connect it to your network. Then you install MS Windows 2000 on the other computer and connect it to your network.
I'm going to make a wild-ass guess that your DSL modem/router is doing IP masquerading (a particular form of NAT, network address translation) and it probably offers DHCP services on its "inner" (or LAN, local area network) interface --- leasing out a set of RFC 1918 "reserved" addresses (192.168.x.*, 10.*, or 172.16.*.* through 172.31.*.*). So, you can probably configure both computers to just get their networking information from the router dynamically (automatically).
The exact details of configuring your router, and W2K for this are beyond our purview. Talk to your ISP or refer to the router's documentation for the former. Call Microsoft or find a Microsoft-centric support forum for the latter.
The precise details of configuring Linux to use DHCP depend on which distribution you use. In general the installation programs for mainstream distributions will offer this option in some sort of dialog box or at some sort of prompt. That's the easiest way of doing it (easiest meaning: "requiring the least explanation in this e-mail"). You haven't said which distribution you're running, so I couldn't offer more specific suggestions without having to write a book.
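[Scissors] Once the system is installed, you can also just test DHCP by hand from a root prompt (interface name assumed):

dhclient eth0

(or, on older Red Hat boxes, pump -i eth0). If that gets you an address, the router side is fine and only the boot-time configuration remains.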
This all seems pretty obvious. I suspect that you have some other needs in mind. However, we haven't installed the telepathy protocol daemons in our little brains yet. So we can't hazard a guess as to what you mean by 'configure.'
I might guess that you want to do file sharing between the two: read a book on Samba to let Linux export/share some of its disk space (filesystems and directories) to the MS Win2K system, and perhaps look for a chapter or so on smbfs for Linux to "mount" (access) shares from the W2K system (i.e. to go the other way).
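[Scissors] As a taste of what the Samba book will show you - a minimal share stanza for smb.conf, and an smbfs mount going the other way. Paths, share and host names here are invented for the example:

[shared]
    path = /home/shared
    read only = no

smbmount //w2kbox/stuff /mnt/stuff -o username=billy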
I might guess that you want to access your Linux system, particularly its command line interface, from your Windows desktop system. In that case download and install PuTTY (the best free ssh client for MS Windows; I would say the best ssh client all around). That will allow you to "ssh" into your Linux system (open command prompt windows to administer it and run programs from there). You might even want to remotely access graphical Linux programs from the Windows box (or vice versa). In that case you'd probably want to look into VNC (virtual network computing --- actually a rather silly name). VNC clients and servers run under Linux (and other forms of UNIX) and MS Windows, and there is a Java client that can even run from a web browser.
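[Scissors] The commands themselves are pleasantly dull (host and account names assumed): from PuTTY or any ssh client, connect to billy@linuxbox. For the graphical route, on the Linux box run

vncserver :1

and then point a VNC viewer (or the Java client in a browser) at linuxbox:1.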
There are numerous other ways to do each of these, BTW. You could install NFS clients on the Windows side for file sharing (those were all commercial, last I heard). You could use the MS Windows telnet client and install and configure the deprecated (as in "insecure, use at your own peril") telnet service (daemon) on the Linux side for character-mode (terminal and command line) access. And you could get X servers for MS Windows --- most are commercial --- and/or you could run rdesktop for Linux to access the MS Windows "Terminal Server" features (however, the Terminal Services are an expensive add-on for Windows, as far as I know). In other words, Samba/smbfs, PuTTY/ssh, and VNC represent a set of services that provide file, command, and remote graphical support between the two systems using only free, well-known software at both ends.
I might provide more details on how these packages could be used. However, each of these is just a shot in the dark at what you might be looking for, so I've spent enough time on the question.
Here are a few URLs you can use to read more about these packages:
Please note: anything I say about MS Windows is likely to be wrong. I haven't used MS Windows regularly for almost 10 years. At the last couple of places where I worked or contracted that put MS Windows systems on my desk (to access Exchange for their e-mail and groupware/scheduler functions), I found that I barely used them --- e-mail, browser, and PuTTY were as much as I ever used on any of them. I'm almost exclusively a UNIX/Linux administrator and programmer, so I've deeply lost touch with the whole Microsoft-based universe.
- Samba: http://www.samba.org/
- OpenSSH: http://www.openssh.org/ (Both of the preceding packages are included with every major mainstream Linux distribution by default. SSH is often installed and configured automatically these days --- just check the appropriate box during your Linux installation, Samba may require somewhat more manual configuration).
- Putty: http://people.nl.linux.org/~bjs/putty/download.html (SSH client for MS Windows can be installed by just dropping one .EXE file into any directory -- optionally on your PATH. --- other optional components are similarly easy to install)
- Cygwin: http://cygwin.com/ (Environment to support UNIX and Linux software, compiled and running natively under MS Windows. I mention it here primarily because they have a list of packages that have already been ported --- ssh clients and servers in particular. Note: the level of integration and interoperation between the Cygwin environment and the rest of MS Windows can be frustratingly rudimentary. It can be confusing and the Cygwin environment can feel like an isolated subsystem of the Windows box; almost like being on a different machine at times).
- VNC: http://www.realvnc.com/ (Included with many distributions, but usually not installed by default. You have to install and configure it manually).
- TightVNC: http://www.tightvnc.com/ (An enhanced version of VNC, also free under the GPL. Might be better on the MS Windows side as client and server for the Win2K box)
- rdesktop: http://www.rdesktop.org/ (A client for the MS Windows RDP (remote desktop protocol), which is apparently derived from the Citrix ICA protocol. The client runs on Linux or UNIX. Might require special MS Windows software or licensing on the server side).
This was posted in the open forums attached to "Langa Letter" -- one of the InformationWeek regular columns. The Answer Guy's actual reply is what's sitting here in my clippings-box; the column he is replying to was:
Fred Langa / Langa Letter: Linux Has Bugs: Get Over It / January 23, 2003
Fred's comment about "severity" is, as he points out, inherently subjective. His numerical analysis is also subject to further issues that he simply ignores.
For example, the 157+ bug count for RH 7.2 or 7.3 includes fixes for many overlapping products, and many which are rarely installed by Linux users -- RH simply includes a lot of optional stuff. Meanwhile the count for Microsoft may still be artificially low, since MS is known to deliberately minimize the number and severity of their bug reports. Many of their 30+ reported patches might include multiple fixes and descriptions which downplay their significance.
Fred also, inexcusably, argues that "first availability" of a fix (in source form, sometimes in focused, though public, mailing lists and venues) "doesn't count" as faster. That is simply jury rigging the semantics to support a prejudiced hypothesis.
Another approach to looking at the severity of bugs is to view the effect of exploits on the 'net as a whole.
In the history of Linux there have only been a few widespread worms (episodes where a bug's exploit was automated in a self-propagating fashion); Ramen, Lion and Adore are the three which come to mind. Subjectively, the impact of these was minimal. The aggregate traffic generated by them was imperceptible on the global Internet scale. Note that the number of Linux web, DNS and mail servers had already surpassed MS Windows servers by this time --- so the comparison is not numerically outrageous.
Compare these to Code Red, Nimda, and the recent MS SQL Server worm. The number of hosts compromised, and the effect on the global Internet, have been significant.
I simply don't have the raw data available to make any quantitative assertions about this. However, the qualitative evidence is obvious and irrefutable. The bugs in MS systems seem to be more severe than comparable bugs on Linux systems.
If a researcher were really interested in a rigorous comparison, one could gather the statistics from various perspectives --- concurrently trying to support and refute this hypothesis.
Fred is right, of course, that Linux has many bugs --- far too many. However, he then extends this argument too far. He uses some fairly shoddy anecdotal numbers, performs trivial arithmetic on them, and tries to pass this off as analysis to conclude that there is no difference between the security of MS XP (and that of their other OSes) and that of Linux (Red Hat).
I won't pass my comments off as anything but anecdotal. I won't look up some "Google" numbers to assign to them and try to pass them off as statistical analysis.
I will assert that Linux is different. That bugs in core Linux system components are fewer, less severe, fixed faster, and are (for the skilled professional) easier to apply across an enterprise (and more robust) than security issues in Microsoft based systems.
The fact that numerous differences in these two OSes make statistical comparison non-trivial doesn't justify the claim that there is no difference.
Further anecdotal observations show that the various Linux distributions and open source programming teams have done more than simply patch bugs as they were found. Many of the CERT advisories in Linux and elsewhere (on the LWN pages, for example: http://www.lwn.net/ ) are the result of proactive code auditing by Conectiva, Gentoo, S.u.S.E., IBM and the MetaL group at Stanford, among many others. In addition, many of these projects are significantly restructuring their code, their whole subsystems, in order to eliminate whole classes of bugs and to minimize the impact of many others. For instance, the classic problems of BIND (named, the DNS server) running as root and having access to the server's whole filesystem used to be mitigated by gurus patching and reconfiguring it to run "chroot" (locked into a subdirectory tree) and with root privileges dropped after initial TCP/port binding (before interacting with foreign data). These mitigations are now part of the default design and installation of BIND 9.x. Linux and other UNIX installations used to enable a large number of services (including rsh/rlogin and telnet) by default. These services are now deprecated, and mainstream distributions disable most or all network services by default and present dire warnings in their various enabling dialog boxes and UIs before allowing users to enable them.
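[Scissors] For the curious, the chroot-and-drop-privileges trick the Answer Guy mentions boils down to a pair of command line switches these days (user name and directory assumed; your distribution may set this up for you):

named -u named -t /var/named/chroot

-u drops root privileges to the named user once the sockets are bound, and -t locks the daemon into the given subdirectory tree.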
These changes are no panacea. However, they are significant in that they hold out the promise of reducing the number and severity of future bugs, and they artificially inflate recent statistics (since the majority of this work has been over the last two or three years).
Fred will undoubtedly dismiss these comments as being more "rabid advocation" by a self-admitted Linux enthusiast. He may even point to MS' own widely touted "trustworthy computing" PR campaign as evidence of a parallel effort on "the other side of the Gates." However this message isn't really written to him.
It's written to those who want to make things better.
The real difference between security in MS and in Linux is qualitative rather than quantitative. With Linux every user and administrator is empowered to help themselves. Every one of us can, and many more of us should, accept a greater responsibility for our systems and their integrity and security. Linux users (including corporations, governments and other organizations) can find and fix bugs and can participate in a global community effort to eliminate them and improve these systems for everyone.
Let's not get wrapped up in blind enthusiasm and open source patriotism. But let us not fall prey to the claim that there is no difference. There is a difference, and each one of us can be a part of making that difference.
From Licht Bülb
Answered By Dolavimus the platypus, Pretzel, Virtual Beer, Konqi, Tuxedo T. Herring, Swirl, corncob Pipe, the Scissors, Amanda the Panda
Hi Gang.
I'm fiddling with my laptop again (almost all things are working), trying to get S-Video out working. In the course of this fiddling I'm poking wildly around in the BIOS, which sometimes screws up the display (though I am still able to log in via ssh), or the machine freezes. Then I have to "push the button" and - since I was forced to use ext2 (ext3 accesses the hd every 5 secs -> spindown, and therefore power saving, impossible) - I have to wait quite some time for the fsck to finish with my 10 GB root filesystem (yeah, now I know why to have multiple partitions..). So,
does any of you know a journaling fs which plays nice with laptops? I googled a bit, read stuff, but didn't find anything about it. I think I remember someone here in TAG saying that reiserfs had some patch to play nice? Can someone confirm that?
Cheers and TIA
[Dolavimus] I've been using reiserfs on my laptop since it was new two years ago, with good results. Although I haven't done anything in particular to address hd spin-down for power economy.
[Amanda] I have actually used ext3 with a recently installed Debian "testing" distribution. The hdd access can be "reduced" by installing "noflushd".
noflushd doesn't work with any j-fs. It says so on the web page.
[Amanda] However, I confess that I haven't really had a chance to fully examine this issue. As regards partitioning the disk, the following works well: /, /usr, /boot as ext2 mounted read-only. Of /tmp, /var and /home, which need to be writable, only /home is usually large enough to require journalling (and sometimes /var).
I have tried several things over time: only one root and a small boot, the full monty with all dirs on separate partitions, and some things in between. The prob with several partitions: when you need some large space, (naturally) none is there on a single partition. Across two or more there would be enough... Disadvantage of only one /: you can't unmount anything beforehand if you know you're gonna crash the system now... And a crypto-fs is hard to make then, too.
[Pretzel] I think the very idea of a journaling filesystem makes "play[ing] nice" impossible. Journaling filesystems have to access the hard drive on every write. More accurately, they have to access the journal device on each filesystem write.
Well, it might be necessary for the j-fs to write to its journal on every fs access, but ext3 writes to the hdd every 5 secs regardless of whether there is fs access or not.
[Pretzel] I think most journaling filesystems in Linux have an option for the journaling device, which is normally on-disk but can be on any block device, at least with ext3 and reiserfs. Some non-volatile memory would do nicely, but on a laptop I think the chances of being able to do this are almost nil.
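[Scissors] For ext3, the plumbing for an external journal does exist in e2fsprogs, for whatever you can talk the kernel into treating as a block device. Roughly, and entirely untested here (device names assumed):

mke2fs -O journal_dev /dev/sdb1
tune2fs -J device=/dev/sdb1 /dev/hda3

The catch for a laptop: the journal device has to be present every time the filesystem is mounted, which rather argues against anything removable.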
I (as a not-knower) would have two ideas: a compact flash card in the PCMCIA slot and - RAM! In the case of RAM the journal would then be written to hdd only when a normal hdd access takes place. True, this would be bad if a system outage occurred without the journal having been rewritten to disk, but I would take those chances... In the case of CF, dunno if you can plug them in straight away or if you need an adapter, but 16 MB are really cheap and if you can save some power (-> time) with it...
[Pretzel] Doing it in RAM would effectively make a journaling fs useless. What would be the point then? That's the same effect as using a non-journaled filesystem.
Well, the journal would get written to disk with the data. If you use noflushd then writes by the system (logging etc.) get postponed or written to RAM, and then get written to hdd when a normal (user-initiated) write occurs. So, I dunno exactly how noflushd does this, but when it redirects the writes to RAM, the journal entries (if they need to be made in that case) should be written to RAM too.
[Scissors] This isn't so much "written to RAM" like scribbled in a ramdisk - it's more like being hidden in the RAM of a caching controller. That's all noflushd does: allow some buffering at the filesystem-driver level. So if something really does have content for the disk - and yes, that includes its journals - it's either got to hit the disk eventually, or you get to bear the risk that something might fail before it does.
But the whole point of having a journal is to have it still be present after a reboot event made something which normally isn't volatile space lose its cookies. Having a journal that isn't allowed to do its job just complicates matters. Ergo, it shouldn't be put on volatile RAM.
[Pretzel] Another possibility is compiling a kernel with magic SysRq support; if the machine isn't totally frozen, you could do an emergency Sync/Unmount/Reboot.
OK, now I have to admit it: HOW THE HECK DOES THIS WORK? I read the stuff in /usr/src/linux/docs/ but as far as I gathered, Alt-Print d should do something? It doesn't in my case. I compiled this in (if it's only the "Magic SysRq key" under "Kernel hacking") some time ago (not in my current kernel, I see now) but then (I tested it) it didn't work.
[Pretzel] Worked for me. (I say "worked" because at the moment I don't have it compiled in.) Try Alt-SysRq-<magic sysrq command> (all three at the same time.) I don't remember if it needed all three at once or not. SysRq probably only says "Print Screen" on some keyboards.
Yeah, the prob was all at the same time... Now it works.
[Scissors] At least part of the confusion is with SysRq - on some keyboards the SysRq lives as a subfunction on another key. Thus for such keyboards you'd also need the extra key that invokes the secondary keycode. Fn maybe.
When it works then you should be able to (for instance) press Z and get a little help list. In fact any character that doesn't do something is supposed to show the little list. What I'm not clear on is "get outta sysrq mode"...
I've sometimes seen a console get into a state where it would respond to Magic SysRq, but it couldn't get out of that mode anymore. So I hope you have some spare virtual consoles, if you are just using it to settle something simpler than "telinit 6 doesn't work".
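[Scissors] For the record, once the kernel has Magic SysRq compiled in, you can confirm it's switched on (or switch it on) with:

echo 1 > /proc/sys/kernel/sysrq

and the classic emergency sequence before hitting the button is Alt-SysRq-S (sync), Alt-SysRq-U (remount read-only), then Alt-SysRq-B (reboot). That alone spares you most of the fsck pain, journal or no journal.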
[Pretzel] So, in summary: No.
A No? <shout> I DON'T ACCEPT NO "NO" ! </shout> (yes, I've been in the army..)
[Beer] Why not plug in one of these USB memory sticks and keep the journal there? That way the immediate access is only required for the USB device and not for the actual HD.
Cool idea, hadn't thought of this one, although I recently bought one! <me stupid>
[Beer] Which one? I've seen those Sony ads -- but they tell a lot about some strange cruft with user management and crypto software, which suggests custom WinXX drivers to my suspicious ears.
The cheapest one I could find. It was 49 Euro, super slim and works like a charm. I think I will use gpg to encrypt files on the thing and simply put a win gpg version on the stick too.
[Beer] If I ask for a drive, give me a drive, and not an encryption device whose encryption I can't trust anyway....
True, true...
[Beer] Might still need patches, but if the journal keeping actually needs access only to the journal file, and is not accessing the HD where the data are finally going anyway -- it might even work with existing journal/kernel code.
Then on the other hand, you only want journaling for some testing period -- why is power saving during that time so important? You could always switch back to unjournaled if power saving is important. ext2/3 in that case would be easiest, as a cleanly unmounted ext3 is mountable as ext2 without problems. The switching between ext3 and ext2 might be done by boot options, I guess. (Can you ask in an init.d script for boot options?)
Well, the case is pressing just now, in this testing period. I just rebooted and had a full fsck (with Ctrl-D at the end), and I really need a j-fs NOW. But I like the idea of j-fs's generally, so I would like to keep it after the fiddling too. You are right that ext2|3 would work now, but I am a little bit burnt with that, since I had ext3 in the very beginning, installed the system (a lot) and then wanted to switch to ext2 (with removing the journal) according to the manual. But the ext-tools weren't current at that time in testing|sid, so this broke (!) the fs, with a FULL reinstall (some bad things occurred at the same time, leading to this). But ext2|3 would be an option.
[Beer] I never tried to completely switch back to ext2 and delete the journal, but I did mount ext3 as ext2 and nothing bad happened. I made sure they were clean, though.
On a laptop, power saving is only a real issue on battery -- and then the sudden crashes should not be that frequent, unless you insist on fiddling around in kernel space on the train.... So for normal usage running on battery, ext2 might be sufficient, and when you are on mains power you can switch back to ext3. There must be a way to make this decision in lilo by some option which is evaluated in init.d/boot*
Or check if you are on battery and make the decision based on that. Frequent sleep/resume in between is then not that practical, though.
[Konqi] For those living in Germany: Aldi has a 128 MB USB memory stick for 49.99 Euro next week, I read in a Heise announcement.
And for those who want to know something about mine: http://www.computer-cash-carry.de/ccc/index-ns.html - see "Festplatten IDE", there at the bottom (all German, just for the curious).
[Tux] Any of you guys want to write up an article on what it takes to get one working, or is it simple enough to describe in a 2-cent tip? I'd be very curious.
modprobe sd_mod
modprobe usb-storage
mount /dev/sda1 /mnt/usbstick
[Tux] Wow. Mondo cool. I love Linux.
I think that there's a USB memory stick in my near future...
[Swirl] I wonder if what I have on my website is sufficient for a 2-cent tip? I can re-write it, if it's considered too half-assed. Here's the relevant excerpt:
USB on Linux seems a bit... different, especially since I'm not really used to devfs-type things.
[Beer] The only thing I ever tried was a digital camera (Olympus C100 Zoom), and it needed a kernel patch (usb-driver) to be properly recognised. Apart from that, the camera works as a disk, as expected. I mount /dev/sda1; it will be rw right on the first mount, no need for remounting.
So if you really have to go through all this remounting, you can't fdisk the thing into partitions, etc. It really seems strange.
I do remember that zip-drive "problem" -- the Windows programs that access it partition it to use sda4 instead of sda1. So this might all be very specific to the hardware and drivers used (or used the first time the disk was partitioned).
But basically these memory sticks seem to work, so I'll keep my eyes open for one.
[Swirl] I've been figuring out how best to deal with it on my laptop, since my wife gave me a 32 MB Easy Disk "memory stick", a cute little plastic thing on a keychain fob.
I'm still working it out, but it looks like I need the usb-uhci and usb-storage drivers (and implicitly usbcore), at which point I can do:
# mount -o rw,uid=1000,gid=1000 -t vfat /dev/sda /mnt/fob/
(where 1000 is my own login account's UID and GID).[1]
The bizarre thing is: That command (or anything like it) always returns:
mount: block device /dev/sda is write-protected, mounting read-only
[corncob Pipe] I believe that this error is more of a "fail-safe" than it is an annoyance...
[Swirl] I banged my head up against that problem for a couple of days, and "fdisk /dev/sda" kept insisting...
Um, my USB stick has a small switch on its side to make it write-protected. You haven't by chance missed that one?
[Swirl] That was part of what I spent those couple of days looking for. No, there's no hardware switch. Moreover, doing a remount entirely via software _does succeed in making it writeable, following the initial mount. If the obstacle were a hardware switch, that would not have happened.
# fdisk /dev/sda
You will not be able to write the partition table.
...until it finally occurred to me to do...
# mount -o rw,remount /mnt/fob
[Pipe] Does it? To me, the mount command references "/mnt/fob" (unless of course, "/mnt/fob" points to "/dev/sda"???)
[Swirl] The point is that the _original mount command, which gets re-mounted here, uses /dev/sda, _not /dev/sda1. I'm trying to stress that at least _some memory sticks must be addressed as if they were SCSI-based floppy drives. No partition numbers, you see? This was non-obvious point #1.
[Pipe] What you have done here it to re-mount in in "rw" mode, yay!
[Swirl] Um, yes. That's exactly my point. The necessity to do so was non-obvious point #2.
[Swirl] Notice that the mount command references /dev/sda, instead of /dev/sda1. That's the other odd thing: I had no luck until I happened to try /dev/sda.
[Pipe] I think this is a result of the way in which device files are referenced under Linux
[Swirl] Well, where were you when I was trying to mount the device, the first time?
I'm saying _the point is non-obvious_ because many other instructions one finds for mounting similar devices tell you otherwise, because intuition suggests that a memory stick ought to be addressed like a hard drive, such that it can have multiple partitions, and because it's the first time in a decade of addressing SCSI devices on Linux that I've not put a number after /dev/sda.
[Pipe] Oh, now I see what you meant. I mis-interpreted you the first time.
[Swirl] My guess is that the Easy Disk is being addressed as if it were a floppy disk instead of a hard drive partition, which is why it's sda instead of sda1.
[Pipe] Hmm, I don't quite follow you here. Why would a HD partition not utilise "sda1" if told to do so?
[Swirl] I'm not sure what's the nature of your confusion: Are you willing to simply take my word for it that addressing /dev/sda1 on the memory stick absolutely does not work, and that addressing /dev/sda does? That would make this conversation a great deal easier.
[Pipe] Oh, I'm not doubting you one bit, I just mis-read your sentence, I'm sorry, Rick.
[Swirl] And I believe you're missing my point: Because the Easy Disk is (it seems) classed as functionally the same as a floppy disk, there is no concept of partitioning, and thus no partition numbers.
[Pipe] And if Easy Disk were being addressed as a floppy then it ought to use "/dev/fdx" in this case, surely?
[Swirl] I beg your pardon, but no. That wouldn't be SCSI then, would it?
[Pipe] It would, it would.
[Scissors] I seem to recall that normal floppy disks used in an IDE-chain LS-120 bay are still accessed by the fdX mechanism. However, the one around here lost a pin, so I can't check.
[Swirl] I hope you're not under the impression that all Linux floppy disks must be addressed as /dev/fdX? SCSI ones are /dev/sdX, and parallel-port ones are /dev/pfN. Neither supports partitioning.
[Pipe] !
[Swirl] I have no idea why Linux always mounts it read-only, though.
[Pipe] Me neither.
I have no clue what it gets recognised or attached as, but in my case I have to use /dev/sda1, else it makes trouble.
[Swirl] Yes, I've heard such reports. That is why I have concluded that some USB memory sticks are recognised as quasi-floppies, and some as quasi-hard disks.
I guess there are two (or more) controller producers, and that's where this trouble comes from.
[Swirl] If by "controller" you mean the USB chipset on the motherboard, I very much doubt that, because the difference seems specific to the add-on USB storage device, not the USB host.
No, I didn't mean the host, I meant the USB controller in the stick. These need some controller too, don't they?
[Swirl] They need some sort of USB circuitry, and, to be sure, differences in such circuitry seem to have caused these differences in mode of operation between memory sticks. Whether it's common to call that circuitry a "USB controller", I am unsure, but such was not my impression.
Maybe there are even patents involved, so that the other producer had to take a different approach (and maybe this also helped in the case of "booting off floppy" -> "booting off USB stick"). Anybody know anything about the subject?
But there is a strange thing: sometimes an error message comes up saying that something (can't remember what) is wrong; then I try to mount /dev/sda, this will fail, and then I can mount /dev/sda1 - so much for determinism.
[Pipe] Unless of course this is a result of the Kernel's divine intervention??? Go figure??
[Swirl] I'm ultimately less interested in kernel psychoanalysis than in making it do what I want, thanks.
From Michael Havens
Answered By Feather, Tuxedo T. Herring, Wizard's Hat, Swirl, Digital Surfboard, and the Scissors
Could someone bring kpppconfig to Bandersnatch? It won't unpack when I put Mandrake 9.0 back on my box. If someone would be willing to bring me everything to do with kppp, it would be even better. kpppconfig is what I know is missing (via Hans) but there may be more. It would be terrific if that someone could give me a call and I could get it before Bandersnatch, but if no one is willing to do that then Bandersnatch will work for me.
[Tux] Is it a message from Mars? Is it an attempt to mix Klingon with Medu Neter?
[Feather] Actually Mars is in San Francisco, the beer there is pretty good, the food edible and unlike the Bandersnatch below there is no connectivity.
[Scissors] Mars also has an alternate transport landing zone in New York. No connectivity there either, but pretty good food at an underground colony beneath the red sands.
[Tux] Is it a misdirected message?
[Feather] I can only think that Michael misaddressed this. He is a member of the Phoenix, AZ LUG, and the Bandersnatch is a local bar that part of the LUG meets at monthly.
Not quite sure how he managed to get this so misaddressed.
[Tux] Tune in next time when our querent
- turns off the weird MIME characters by reading the pointer to expita.com in our knowledgebase, and
- (possibly) explains what the heck he needs in understandable terms.
(The only "Bandersnatch" I know of is a 58' ferrocement boat about 200 yards away from mine ...
[WizHat] A couple years ago there used to be a Bandersnatch Pub in Tempe Arizona, right near the University. It was a microbrewery with a free Internet drop. I used to go there frequently during the six months or so that I was "wastin' away in Motorolaville" (working on a Linuxcare contract at the Motorola Computer Products division helping them with their HA-Linux project).
Wow! Of all the gin joints in the world --- who'da thunk that three people on this list would all be aware of the same Bandersnatch pub in a place like Arizona! (BTW: Tempe and Phoenix are essentially one urban area --- I thought Bandersnatch was in Tempe, but that point is moot).
[Feather] Right you are, Jim. It is in Tempe. The LUG here is known as PLUG (Phoenix LUG) but they really cover the metro area, including Tempe, Scottsdale, Glendale, etc.
As a gin joint they have good beer, the food is edible, and the connectivity pretty good.
[Tux] ... (they've even got a cat named Frumious).
[Swirl] Does it get the bowsprit mixed with the rudder sometimes?
[Tux] <sadly> It wouldn't matter if it did. "The Man at the Helm shall speak to no one," so remonstrance is impossible.
(To those of you who are confused, see Lewis Carroll's "The Hunting of the Snark" (An Agony, in Eight Fits). Hint: you're _supposed_ to be confused.)
[Surfboard] Is that where "Fit the First" comes from? People look at me so weird...
[Tux] It was a bit startling to see someone talking about "bringing $X to Bandersnatch"; I got Norm converted to Linux a while back, and was wondering if he was asking for help here.
[Scissors] So in conclusion: Frumious had no problem teleporting a copy of kppp wherever it needed to go, the Mandrake was advised to avoid the jub jub bird while reconfiguring, and no boojums were harmed in the drawing up of this column. Lewis Carroll is welcome to join the Answer Gang at any time, provided he can get away from his mathemagical textbooks. Other ppp configurators include: xisp, wvdial, debian's "pppconfig" and a few dockapps. I believe gkrellm also has simple plugins for this feature.
The Wizard's Hat is part of the ensemble worn by the wizard cartoon adorning Linux Gazette's pages and representing this column up until around issue 56 or so.
The Robes and Orb have been enjoying their retirement since we got the new template for the site settled in, but the Hat has stayed pretty active, even if working in the background. You can bet he knows a lot of stuff - every time someone says "keep it under your hat" it's just another tidbit to file away for later.
Taking up the slack, the Pipe has long been a symbol of introspection and study. You'll most often find the Pipe indulging in detangling shell commands for less patient souls, and messing around under the hood of more complicated system scripts. During the off time, Pipes have been known to be taken with a bit of music...
The web surf has been up since ages ago, and this Surfboard has been here to enjoy it all. Once just considered a beach bum, now I get paid to roam the digital seas, design apps, tame the wild radio waves and dig the Suncoast.
When the scissors popped into the break room looking for everyone, this brewski was just enjoying some ginger snaps and some free time. Knowing that it'll just be a short while until everyone's returned and it's back to the case, the Beer eagerly chimed in to help out.
Beer turned on to Linux around the same time Tux got involved; they'd virtually been drinking buddies.
Dolavimus the platypus adorned t-shirts in the heyday of Linux 1.x, plugging "Guerilla Linux Development" and delving, or maybe dolaving, deep in the code. Overseeing the TCP/IP stack revamp and seeing a new view on the whole modular subsystem are among his finer programming efforts.
Tux's family resemblance can definitely be seen in the chin.
Inheriting a cheerful red scribble nature from Dolavimus, the Swirl is a dyed-in-it Debian advocate. (What a surprise.) He can be found at installfests just about anywhere, and knows quite enough about enough architectures, packages, and generally curious stuff about Linux to make you dizzy.
Tux was drawn by Larry Ewing using the GIMP. Satisfying himself with herring (pickled or otherwise) and sailing around the world to star in photo galleries (see Linux Weekly News' Penguin Gallery among others) have given him a broad perspective, lots of experience, and an amazing number of cousins. Tux has also been involved in a large number of menu setups... you just have to know where to start, and keep control in your corner. It's rumored he is quite popular with the lady penguins.
The Magic Wand has been using Linux since the days when it was considered magic to get Linux working. He took up with modern day distros in the age of Infomagic and is pretty well known these days by Mandrake users everywhere. As handy with the 'file' command as he ever was, nowadays the Wand prefers programming in an object oriented style.
I'd say I'm pretty bright. My name's a mouthful (Licht Bülb) so most folk just call me Idea which is easier to say. After I get my own darkness lit up, I enjoy passing the clue on to others. I've been known to pick up the laser once in a while to point something out, and of course I'm fond of movies.
The Scissors normally hang out around the Editor's desks trying to make sure the goodies that make it in aren't too snippy to Make Linux A Little More Fun. This time, they're cutting in on behalf of the readers. Scissors know that Linux is great stuff, but it's not always a software problem that really needs to be solved.
Pretzel perked up at the chance to escape the break room for a while, chime in with a few salty comments, and generally show a crisp new look on things. Not everyone understands his twisted point of view, but that's okay. Like Pretzel, Linux itself comes in all shapes and sizes.
Konqi certainly knows his graphical environments. A genuinely helpful sort who is not inclined to waste words, he can usually be found when you need him to debug something.
Amanda the Panda enjoys a rather different look on the desktop, and a different take on life than most of us here in the Gang. It was another AMANDA that went to MIT, but she still agrees that it's important to keep good backups.
This Feather had mastered the ways of web servers before they really started painting up the graphics. Now there's a feather in every cap and the Feather has emigrated to the sunny lands of the southwest. In his spare time he has been known to write rather eloquently in the hands of the Answer Gang.
...making Linux just a little more fun!
By Michael Conry
Contents:
Submitters, send your News Bytes items in PLAIN TEXT format. Other formats may be rejected without reading. You have been warned! A one- or two-paragraph summary plus URL gets you a better announcement than an entire press release. Submit items to gazette@ssc.com
All articles older than three months are available for public reading at http://www.linuxjournal.com/magazine.php. Recent articles are available on-line for subscribers only at http://interactive.linuxjournal.com/.
It has been reported [The Register] that Jon Johansen will have to return to court once more in connection with his involvement in the creation and distribution of the DeCSS computer code. As is well known, the DeCSS code is used to circumvent a primitive technique used by DVD producers to obstruct users in their enjoyment of the DVD movies they have purchased. Norwegian prosecutors argued that to play a DVD in this manner was a form of theft, similar to cracking somebody's web server. Though the ruling was in Johansen's favour, leave for an appeal has been granted, so the whole business will have to be repeated once more, probably in the summer.
Such is the duration of this case that one wonders if prosecutors are hoping that when Jon gets older he will lose public sympathy. As Jon is now 19, headlines like "Norwegian Teen Harassed by DVD STASI" have a limited shelf-life.
Lexmark's attempt to ban the recycling of toner cartridges received an unsettling boost, as an injunction has been issued against the company producing the environmentally beneficial products. SCC has hit back by bringing an anti-trust case against Lexmark. Though the case is far from over, it highlights some disturbing potential outcomes of the DMCA. Wired asked (and not rhetorically; this is a real possibility) how much more it would cost to maintain your car if manufacturers put computer chips in all the important components.
Don't mention it to anybody, but at Linux Gazette we have long suspected that some users print out articles from time to time: circumventing the CRT/LCD encryption we use to manage access to our content. Though our lawyers are still preparing the brief, it would appear Lexmark printers are often used in this circumvention scheme... let's get us some of those DMCA dollars.
Hilary Rosen of the RIAA has been commenting on the issue of P2P networks again. It seems her son Tristram managed to track down a very rare recording of Edith Piaf singing Knees Up Mother Brown, a recording that Hilary had been hoping to find for a very long time. Though Hilary was initially ecstatic to hear the tune she had for so long sought, her ecstasy soon turned to agony as it emerged that Tristram had not taken care to clear up the thorny copyright issues before downloading and playing the track.
It was with a heavy heart that Hilary phoned the cops to have Tristram arrested.
"Though it hurts to see my son doing hard time, it is nothing compared to the pain he was causing through his criminal actions. I want to set an example for all the parents of America, and the world. What my son did was despicable. Even though I love him, he has to be made to pay for terrorising the music industry."When pressed further, Ms. Rosen replied by quoting another lyric from her favourite songstress "Je ne regrette rien". It is unclear at time of press whether or not she had arranged permission to use the lyric.
If you read any internet Linux-news sites, you must be aware that SCO, the company formerly known as Caldera (more or less), has sued IBM. This followed several weeks of speculation regarding SCO's recent hiring of some top legal talent. There has been much analysis and discussion of this development, so I am not going to go through the details of the case. Linux Weekly News has a good overview and discussion of the substance of the claim. A major plank in SCO's argument is that IBM used knowledge of UNIX technology it gained from intellectual property owned by SCO to benefit Linux development. This was allegedly an attempt by IBM to kill commercial UNIX and build up Linux, though given that there is some debate about many of the "facts" on which these claims are based, a healthy dose of scepticism may be in order. Doc Searls at Linux Journal has also cogently analysed the story, and says that with this action SCO has finally made clear on which side of the open-source/closed-source divide it truly stands. I have to say that this does not entirely surprise me, having seen a "partners presentation" given by staff from SCO a few months ago in Dublin. Further discussion can be read in the Slashdot thread.
Senator Ernest "on the Fritz" Hollings has proposed an interesting new initiative to improve the working environment for his fellow politicians. In Fritz's vision, members of congress would have the opportunity to attend in any one of a range of Disney inspired costumes. Capitol buildings will also be re-worked to bring them more into line with the spirit-lifting aesthetic popularised at Disneyland. Hollings is unapologetically evangelical regarding the proposal:
"Nothing in the world is as magical as a holiday in Disneyland, it is the pinnacle of our great civilisation and must surely be the finest place to work. What we are doing is bringing that magic into the every day lives of our Senators and Congress-folk. Now, even a Senator can live every day in the joyous glow of Disney magic."This initiative is sure to gladden the hearts of the hard working men and women of the government. Fritz was quick to point out that this endeavour would have the additional advantage of raising the profile of the much loved entertainment conglomerate.
To facilitate the promotion and introduction of this measure, Ernest is sub-contracting his job to Michael Eisner, Disney CEO. Eisner declined to comment at length. "It will be business as usual", he purred.
This is seen as a very progressive manoeuvre by Hollings, and is typical of the visionary and principled leadership of the man who brought us the CBDTPA. Indeed, the CBDTPA itself had an origin rooted in Disney culture. Taking inspiration from the Disney slogan "Where the magic comes to you" this much-misunderstood act was intended to put a little bit of Disney "magic" into every commonly used electronic device.
Listings courtesy Linux Journal. See LJ's Events page for the latest goings-on.
Linux on Wall Street Show & Conference | April 7, 2003 New York, NY http://www.linuxonwallstreet.com |
AIIM | April 7-9, 2003 New York, NY http://www.advanstar.com/ |
FOSE | April 8-10, 2003 Washington, DC http://www.fose.com/ |
MySQL Users Conference & Expo 2003 | April 8-10, 2003 San Jose, CA http://www.mysql.com/events/uc2003/ |
LinuxFest Northwest 2003 | April 26, 2003 Bellingham, WA http://www.linuxnorthwest.com/ |
Real World Linux Conference and Expo | April 28-30, 2003 Toronto, Ontario http://www.realworldlinux.com |
USENIX First International Conference on Mobile Systems, Applications, and Services (MobiSys) | May 5-8, 2003 San Francisco, CA http://www.usenix.org/events/ |
USENIX Annual Technical Conference | June 9-14, 2003 San Antonio, TX http://www.usenix.org/events/ |
CeBIT America | June 18-20, 2003 New York, NY http://www.cebit-america.com/ |
ClusterWorld Conference and Expo | June 24-26, 2003 San Jose, CA http://www.linuxclustersinstitute.org/Linux-HPC-Revolution |
O'Reilly Open Source Convention | July 7-11, 2003 Portland, OR http://conferences.oreilly.com/ |
12th USENIX Security Symposium | August 4-8, 2003 Washington, DC http://www.usenix.org/events/ |
LinuxWorld Conference & Expo | August 5-7, 2003 San Francisco, CA http://www.linuxworldexpo.com |
LinuxWorld UK | September 3-4, 2003 Birmingham, United Kingdom http://www.linuxworld2003.co.uk |
Linux Lunacy Brought to you by Linux Journal and Geek Cruises! | September 13-20, 2003 Alaska's Inside Passage http://www.geekcruises.com/home/ll3_home.html |
Software Development Conference & Expo | September 15-19, 2003 Boston, MA http://www.sdexpo.com |
PC Expo | September 16-18, 2003 New York, NY http://www.techxny.com/pcexpo_techxny.cfm |
COMDEX Canada | September 16-18, 2003 Toronto, Ontario http://www.comdex.com/canada/ |
LISA (17th USENIX Systems Administration Conference) | October 26-30, 2003 San Diego, CA http://www.usenix.org/events/lisa03/ |
HiverCon 2003 | November 6-7, 2003 Dublin, Ireland http://www.hivercon.com/ |
COMDEX Fall | November 17-21, 2003 Las Vegas, NV http://www.comdex.com/fall2003/ |
Sony, IBM and Grid pioneer Butterfly.net announced the activation of a Linux-based computing grid that makes it easier and cheaper to run Sony PlayStation 2 games on the Internet. It is claimed that the massive "Butterfly Grid" can support millions of concurrent PlayStation-online users around the world, with no limit to the number of players who can be on the Grid at one time.
The Film Gimp team has changed the project name to Cinepaint. The decision was made during the recent Film Gimp panel discussion in Los Angeles during the Linux Movies conference track.
The Appro HyperBlade B221X cluster is designed for the High-Performance Computing (HPC) market and provides a fully integrated Linux cluster solution for large-scale complex computations. It achieves a high-density architecture by using commodity x86 components in a single cluster. With support for up to 80 compute blades, the Appro HyperBlade architecture doubles the rack density achievable with current 1U servers.
DesktopLinux.com interviews Prof. David Costa, Dean of Robert Kennedy College in Switzerland, about his school's recently released GNU/Linux offering: CollegeLinux.
The Debian project has announced new licencing terms. Beginning April 1, 2003, software upgrades will now be on a pay-as-you-go scheme. This will initially affect users of Debian mirrors who will have to buy prepaid blocks of access via the new Debian online store iDebian. Starting with the next stable release there will also be a pay-per-install scheme which is expected to be very well subscribed. Each installation will have an administrator program DebiWin which will continuously audit the system and make sure that the user has not innocently installed twice from a single installation medium. Uniquely in the default Debian install, DebiWin is a closed-source application. This is to maintain the "unique qualities" of the application.
Though initial reactions have been mixed, many users are non-plussed:
"...We already pay in blood, sweat and tears installing Debian, so what's a few dollars. You only install it once you know! What's that, you have to pay for upgrades... d'oh!"
A verbose guide to updating and compiling Debian kernels.
Anthony Towns is looking for volunteers to help with Release Manager work for the forthcoming release of Sarge.
Guardian Digital has announced the availability of the Guardian Digital Secure Mail Suite, which is available with Guardian Digital's EnGarde Secure Linux v1.5, also released today.
SCO has announced the availability of a new education program, updated Linux, UnixWare and OpenServer courseware, and a new UnitedLinux certification program.
In a surprise announcement, the Slackware project has announced the removal of any console-based install tools in the upcoming version 9.0. Daringly, installation will now be completed using installation tools running on the 3DWM platform. Minimum graphics requirements have not been announced yet, but if you were saving up for a new card to play Doom 3 with, you're on the right track.
The Slackware Linux Project has a brief announcement on their site about their latest release, Slackware 9.0-rc1.
Ximian, Inc., a provider of desktop and server Linux solutions, and SuSE Linux have announced a partnership concerning SuSE's corporate business. As part of the agreement, SuSE will resell Red Carpet Enterprise from Ximian - enabling customers to centrally deploy and manage software on servers and desktops running SuSE Linux and SuSE Linux Enterprise Server. The companies will also offer Ximian Connector software to integrate the popular Ximian Evolution groupware suite for Linux with the SuSE Linux Openexchange server. In addition, the companies will pursue opportunities to collaborate on future Linux desktop offerings incorporating Ximian Desktop technology.
UnitedLinux has announced that it has completed certification of UnitedLinux Version 1.0 with both Oracle9i Database and its database clustering technology, Oracle9i Real Application Clusters.
Yoper (Your Operating System) has released version 1.0 of their distribution.
XFree86 version 4.3.0 has been released. The new release includes many new graphics card driver improvements, automatic PS2 mouse protocol detection, run-time root window resizing, support for alpha-blended and animated cursors, and an improved font server.
TextMaker for Linux beta is now available. TextMaker for Linux is a word processor that is claimed to read and write Microsoft Word 6/95/97/2000/XP files without losing formatting or content.
McObject, developer of the eXtremeDB small footprint, in-memory database, has joined with MontaVista Software to offer a GNU/Linux-based technology combination for intelligent devices ranging from consumer electronics to carrier-grade communications gear.
With eXtremeDB, McObject offers a new type of database to meet the unique performance requirements and resource constraints of intelligent, connected devices. With a footprint of 100K or less, eXtremeDB spares RAM and CPU resources while delivering critical data management features. eXtremeDB's ultra-small footprint and exceptional performance enables device manufacturers to deploy less expensive hardware, providing a key economic benefit through reduction of BOM (Bill of Material) costs. Similarly, by choosing MontaVista Linux tools and platforms, companies gain significant savings over proprietary real-time operating systems (RTOSes). Together, McObject and MontaVista lower development and deployment costs, enhancing customers' positioning in the marketplace.
...making Linux just a little more fun! |
By Stephen Bint |
... featuring those Lovable Dolts, Stan Laurel and Oliver Hardy.
(Official L&H site) (UK tribute page)
This article is a follow-up to the author's The Ultimate Editor in January and the Mailbag letters (three of them) it received in February.
Ollie is sitting in front of a terminal. Stan enters, carrying a book.
Ollie: Where have you been?
Stan: I went to the bookstore to get a book like you said.
Ollie: Well, you took your sweet time. We have to get this CGI script finished by tomorrow morning. Let's see what you've got.
Stan hands Ollie the book.
Ollie: "A Guide to Programming in C". What is this?
Stan: It's a guide to programming in C, Ollie.
Ollie looks at camera - not amused.
Ollie: I can see that, you idiot. I thought I told you we were going to write it in Perl.
Stan: But Mr. Bint said...
Ollie: Mr. Bint?
Stan: The man at the bookstore. Mr. Bint.
Ollie: [Impatiently] What did "Mr. Bint" say?
Stan: He said he was all sold out of books about Perl, but he had a whole shelf full of books about C. He said I was lucky.
Ollie: How did he make that out?
Stan: He said C is a better choice for CGI programs. He said C is a professional programming language, but Perl is a toy language. He said Perl is a jumped up scripting language that's gotten ahead of itself.
Ollie: Oh, he did, did he?
Stan: [Nods] Mr. Bint says that though the learning curve with C is initially steeper, ultimately there is less to learn, because C has fewer rules. He said Perl does most things for you and you need to learn exactly what it does in every case, to understand what your program is doing. He said that's why books about C are so thin and books about Perl are so fat. He says the reason he is sold out is that Perl books are so fat he can only fit four on a shelf. Not only that, but the trucks that deliver them often fail to arrive because their tyres keep exploding.
Ollie: Is that a fact?
Stan: [Nods] That's what Mr. Bint says. He says that Perl has confusing syntax and fails to define function interfaces correctly, which invites sloppiness.
Ollie: And what else does he say?
Stan: He says that C is often made out to be full of danger and potential for disaster compared to Perl, but in fact, Perl is only proof against a couple of common bugs and its lack of readability makes other bugs more likely. He says the gcc compiler gives good warnings about badly written C, there are tools which can check for memory leaks and Lint to check for other common errors.
Ollie: Mmmph. Mr. Bint recommends Lint, does he?
Stan: Yes.
Ollie looks impatiently at the camera, then at Stan.
Ollie: Well, I suppose we will just have to write it in "C".... "Mr. Bint". Mmmpph!
Ollie faces keyboard and prepares to type.
Ollie: OK, you read the book out to me and I'll type in the program.
while( ollie_waits ) { Stan_looks_at_book(); Stan_looks_at_Ollie(); Stan_looks_panic_stricken(); if( ollie_looks_around() ) ollie_waits = false; }
Ollie: Well, what is it now Stanley?
Stan: [Blubbering] I'm sorry Ollie... blubber... I don't think we can just start straight away... snivel...I think we have to read the book and learn the language first...
Ollie: Will you stop sniveling? How hard can it be? We'll simply learn the language, then we'll write the program.
Ollie turns to give Stan his full attention.
Ollie: OK, tell me what we need to learn about C before we can start.
Stan: According to this table of contents... variable types, user-defined types, typedefs, static variables, initialisation vs. assignment, constants, statements, binary operators, unary operators, arithmetic operators, logical operators, bitwise operators, operator precedence, if, for, switch, while, continue, break, arrays...
Ollie looks at camera - not amused.
Stan: multi-dimensional arrays, pointers, pointer arithmetic, function pointers, function declaration and definition, preprocessor directives and macros, printf formats, automatic and allocated memory, command-line arguments, recursion...
Ollie removes Stan's hat, slaps him round the chops with the mouse mat (both sides), carefully replaces hat. Stan stops reading.
Ollie: Well, congratulations Stanley. This is another fine mess you've gotten me into.
while( camera_is_running() ) Stan_blubbers();
Fin.
Stephen is a homeless Englishman who lives in a tent in the woods. He eats out
of bins and smokes cigarette butts he finds on the road. Though he once worked
for a short time as a C programmer, he prefers to describe himself as a "keen
amateur".
...making Linux just a little more fun! |
By Shane Collinge |
These cartoons are scaled down to minimize horizontal scrolling. To see a panel in all its clarity, click on it.
All HelpDex cartoons are at Shane's web site, www.shanecollinge.com.
Part computer programmer, part cartoonist, part Mars Bar. At night, he runs
around in a pair of colorful tights fighting criminals. During the day... well,
he just runs around. He eats when he's hungry and sleeps when he's sleepy.
...making Linux just a little more fun! |
By Daniel Guerrero |
The eXtensible Stylesheet Language Transformations (XSLT) language is mostly used to transform XML data into HTML data, but with XSLT we can transform XML (or anything which uses XML namespaces, like RDF) into almost anything we need, from XML to plain text.
The W3C defines XSL (eXtensible Stylesheet Language) as consisting of three parts: XSLT; XPath, an expression language used by XSLT to access or refer to parts of an XML document; and XSL Formatting Objects, an XML vocabulary for specifying formatting semantics.
First of all, we need to specify that our XML document will be an XSL stylesheet, and declare the XSL namespace:
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> ... </xsl:stylesheet>
After that, the principal element we will use is xsl:template match, which is called whenever the name of an XML node matches the value of its match attribute:
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <!-- '/' is taken from XPath and will match with the root element --> <!-- do something with the attributes of the node --> </xsl:template> </xsl:stylesheet>
Inside the xsl:template match, we can get the value of a node or of its attributes with the xsl:value-of select element. Let's first make an example XML file with some information:
<!-- hello.xml --> <hello> <text>Hello World!</text> </hello>
And this is the XSLT which will extract the text of the root element (hello):
<!-- hello.xsl --> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <html> <head> <title>Extracting <xsl:value-of select="//text"/> </title> <!-- in this case '//text' is: 'hello/text' but because I'm a lazy person... I will short it with XPath --> </head> <body> <p> The <b>text</b> of the root element is: <b><xsl:value-of select="//text"/></b> </p> </body> </html> </xsl:template> </xsl:stylesheet>
The HTML output is:
<!-- hello.html --> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <title>Extracting Hello World! </title> </head> <body> <p> The <b>text</b> of the root element is: <b>Hello World!</b> </p> </body> </html>
In XPath, @att matches the attribute att. For example:
<!-- hello_style.xml --> <hello> <text color="red">Hello World!</text> </hello>
And the XSLT:
<!-- hello_style.xsl --> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <html> <head> <title>Extracting <xsl:value-of select="//text"/> </title> </head> <body> <p> The <b>text</b> of the root element is: <b><xsl:value-of select="//text"/></b> and its <b>color</b> attribute is: <xsl:value-of select="//text/@color"/> </p> </body> </html> </xsl:template> </xsl:stylesheet>
The HTML output will be:
<html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <title>Extracting Hello World! </title> </head> <body> <p> The <b>text</b> of the root element is: <b>Hello World!</b> and its <b>color</b> attribute is: red </p> </body> </html>
If you are thinking of using this information to, in this case, colour the text Hello World! red: yes, it's possible, in two ways. One is to make variables and use them in the attributes of a font element, for example; the other is to use the xsl:attribute element, as sketched below.
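Here is a minimal sketch of that second form, following the hello_style example above (standard xsl:attribute usage, though I have not shipped this particular file with the examples): the value of the color attribute is computed from the source document and attached to a font element in the output:

<xsl:template match="/">
  <html>
    <body>
      <p>
        <font>
          <!-- copy the color attribute of <text> into the output -->
          <xsl:attribute name="color">
            <xsl:value-of select="//text/@color"/>
          </xsl:attribute>
          <xsl:value-of select="//text"/>
        </font>
      </p>
    </body>
  </html>
</xsl:template>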
Variables can be used to hold constants or the value of an element. Assigning a constant is simple:
<!-- variables.xsl --> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <!-- definition of the variable --> <xsl:variable name="path">http://somedomain/tmp/xslt</xsl:variable> <html> <head> <title>Examples of Variables</title> </head> <body> <p> <a href="{$path}/photo.jpg">Photo of my latest travel</a> </p> </body> </html> </xsl:template> </xsl:stylesheet>
The HTML output:
<html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <title>Examples of Variables</title> </head> <body> <p><a href="http://somedomain/tmp/xslt/photo.jpg">Photo of my latest travel</a></p> </body> </html>
You can also set a variable's value by selecting it from the values or attributes of the nodes:
<!-- variables_select.xsl --> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <html> <head> <title>Examples of Variables</title> </head> <body> <xsl:apply-templates select="//photo"/> </body> </html> </xsl:template> <xsl:template match="photo"> <!-- definition of the variables --> <xsl:variable name="path">http://somedomain/tmp/xslt</xsl:variable> <xsl:variable name="photo" select="file"/> <p> <a href="{$path}/{$photo}"><xsl:value-of select="description"/></a> </p> </xsl:template> </xsl:stylesheet>
And the XML source (I don't include photos of myself, because I don't want to scare you :-) ):
<!-- variables_select.xml --> <album> <photo> <file>mountains.jpg</file> <description>me at the mountains</description> </photo> <photo> <file>congress.jpg</file> <description>me at the congress</description> </photo> <photo> <file>school.jpg</file> <description>me at the school</description> </photo> </album>
And the html output:
<html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <title>Examples of Variables</title> </head> <body> <p><a href="http://somedomain/tmp/xslt/mountains.jpg">me at the mountains</a></p> <p><a href="http://somedomain/tmp/xslt/congress.jpg">me at the congress</a></p> <p><a href="http://somedomain/tmp/xslt/school.jpg">me at the school</a></p> </body> </html>
If you look closely, you will see that the photo template is called three times because of the xsl:apply-templates: every time the XSLT processor finds an element that matches, the corresponding xsl:template match is called.
OK, so you are impatient to try making the text of hello_style.xml red? Try to do it with variables; if you can't manage it, open this page: misc/danguer/hello_style_variables.xsl
XSLT can sort the order in which XML tags are processed with <xsl:sort select="sort_by_this_attribute">. This element must be placed inside an xsl:apply-templates element. You can sort by an XML element or attribute, in ascending or descending order, and you can also specify the case order (whether lower case sorts before upper case, or vice versa).
I will use the album example, adding only the sort element:
<xsl:apply-templates select="//photo"> <xsl:sort select="file" order="descending"/> </xsl:apply-templates>
This only alters the order in which the photos are put into the HTML. In fact, XSLT will first sort all the photo elements of our XML, and then send them to the matching template in that order; that's why the xsl:sort element must go inside the xsl:apply-templates.
The XSL and HTML files are in the examples; you can get them from these links:
There will be some cases when you need to output some text if a certain XML element (or attribute) appears, or different text if it doesn't. The xsl:if element will do this for you. To show you what it can do, let's imagine you have a page listing documents (this example is taken from my 'tests' at the TLDP-ES project), and for these documents you know whether the sources were converted to PDF, PS or HTML format. This information is in your XML, so you can test whether the PDF file was generated, and put in a link to it:
<xsl:if test="format/@pdf = 'yes'"> <a href="{$doc_path}/{$doc_subpath}/{$doc_subpath}.pdf">PDF</a> </xsl:if>
If the pdf attribute of the document is yes, as in this example:
<document> <title>Bellatrix Library and Semantic Web</title> <author>Daniel Guerrero</author> <module>bellatrix</module> <format pdf="yes" ps="yes" html="yes"/> </document>
Then it will put in a link to the PDF format of the document. If the attribute is 'no', or whatever other value the XML's DTD allows you, then no link will be generated. If you want to check the complete XSL and XML documents, they are in:
If you check the XML document of the example below, you will see that the first document has three authors separated by commas; obviously, a better way to separate the authors would be to put them in separate <author> tags:
<document> <title>Donantonio: bibliographic system for automatic distribuited publication. Specifications of Software Requeriments</title> <author>Ismael Olea</author> <author>Juan Jose Amor</author> <author>David Escorial</author> <module>donantonio</module> <format pdf="yes" ps="no" html="yes"/> </document>
You might think of writing an xsl:apply-templates and an xsl:template match to put every name in a separate row, for example. That could be done, but you can also use the xsl:for-each statement.
<xsl:for-each select="author"> <tr> <td> Author: <xsl:apply-templates /> </td> </tr> </xsl:for-each>
In this case, the processor will go through all the authors the document has. If you are wondering what template I made to process the authors, the answer is: there is no template. With nothing else to match, the processor treats the apply-templates element like a 'print' of the text of the element selected by the for-each element.
The last XSLT element I will show you is the choose element, which works like the popular switch statement of languages like C. First you must declare an xsl:choose element and then put all the options in xsl:when elements; if no when clause is satisfied, you can put in an xsl:otherwise element:
<xsl:variable name="even" select="position() mod 2"/> <xsl:choose> <xsl:when test="$even = 1"> <![CDATA[<table width="100%" bgcolor="#cccccc">]]> </xsl:when> <xsl:when test="$even = 0"> <![CDATA[<table width="100%" bgcolor="#99b0bf">]]> </xsl:when> <xsl:otherwise> <![CDATA[<table width="100%" bgcolor="#ffffff">]]> </xsl:otherwise> </xsl:choose>
The position() function returns the number of the element being processed; in the case of the documents, the number increments with each document processed. Here we only want to know whether a document is even or odd, so we can give the table one colour for the even numbers and another for the odd numbers. I put in the xsl:otherwise only to illustrate its use; actually, I think there will never be a table with a blank background in our library.
If you ask why I put in a CDATA section: because if I didn't, the processor would demand the matching termination tag (</table>), which appears further down; so the termination tag needs its own CDATA section as well.
Once again, I have shortened the code; if you want to see all of it, see these documents:
Saxon is an XSLT processor written in Java. I'm using version 6.5.2, and the following instructions are for that version; for other versions you will have to check the appropriate documentation for running Saxon.
After you have downloaded the saxon zip, you must unzip it:
[danguer@perseo xslt]$ unzip saxon6_5_2.zip
After this, you must include the saxon.jar file in your class path; you can pass the path of the jar to Java with the -cp path option.
I will put saxon.jar under the xslt directory. You must tell Java which class to use; in the case of my Saxon version (6.5.2) the class is com.icl.saxon.StyleSheet. You also pass as arguments the XML document and the XSLT stylesheet you want to apply. For example:
[danguer@perseo xslt]$ java -cp saxon.jar com.icl.saxon.StyleSheet document.xml transformation.xsl
This will send the output of the transformation to standard output; you can send it to a file instead with:
[danguer@perseo xslt]$ java -cp saxon.jar com.icl.saxon.StyleSheet document.xml transformation.xsl > file_processed.html
For example, a typical invocation looks like this:
[danguer@perseo xslt]$ java -cp saxon.jar com.icl.saxon.StyleSheet cards.xml cards.xsl > cards.html
And here is the processing of our first hello example with XSLT:
[danguer@perseo xslt]$ java -cp saxon.jar com.icl.saxon.StyleSheet hello.xml hello.xsl > hello.html
xsltproc comes with all the major distributions. Its syntax is like Saxon's, except that the stylesheet is given before the document:
[danguer@perseo xslt]$ xsltproc hello.xsl hello.xml > hello.html
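If you prefer, xsltproc can also write the result straight to a file with its -o option, instead of shell redirection:

[danguer@perseo xslt]$ xsltproc -o hello.html hello.xsl hello.xml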
I know there are other XSLT processors, like Sablotron, but I haven't used them, so I can't comment on them ;-).
I'm trying to finish my bachelor's degree at BUAP in Puebla, Mexico. I'm involved with the TLDP-ES project, which has made me learn all about these technologies; now I'm learning about the Semantic Web.
...making Linux just a little more fun! |
By Javier Malonda |
The Ecol comic strip is written for escomposlinux.org (ECOL), the web site that supports, es.comp.os.linux, the Spanish USENET newsgroup for Linux. The strips are drawn in Spanish and then translated to English by the author. Text commentary on this page is by LG Editor Iron. Your browser has shrunk the images to conform to the horizontal size limit for LG articles. For better picture quality, click on each cartoon to see it full size.
These cartoons are copyright Javier Malonda. They may be copied, linked or distributed by any means. However, you may not distribute modifications. If you link to a cartoon, please notify Javier, who would appreciate hearing from you.
...making Linux just a little more fun! |
By Jose Salvador Gonzalez Rivera |
Debian has a package manager (dpkg, with the apt front end) that resolves dependency problems automatically. It helps us keep programs up to date by looking for new versions on the Internet and resolving the file and library dependencies each package requires, making system administration easy and keeping us current with security changes. The distribution also has some important and substantial security strengths: it has no commercial goals and obeys no market deadlines, it has good bug tracking, problems are usually fixed in less than 48 hours, and its priority is to develop a complete and reliable operating system.
Before Installing
From a security and reliability standpoint, it's better to have separate hard disk partitions for directories that are large, and especially to separate those which are frequently changing (/tmp and /var) from those that can be mounted read-only except when installing software (/usr). Some people also make separate partitions for /home and /usr/local. Separate partitions mean that if one gets corrupted, the others won't be affected. They also mean you can mount some partitions (especially /usr and /boot) read-only except when doing system administration, which dramatically decreases the likelihood of corruption or mistakes. Don't accept the distribution default, which is usually to put everything in one partition. Of course, you can go overboard if you use too many partitions, and if you don't anticipate your sizes correctly you may end up with wasted space in some partitions and not enough space in others. In that case you'll either have to back up the files and repartition, or use symbolic links to steal space from another partition. Both strategies are undesirable, so think beforehand about how many partitions are appropriate for this machine and which directories contain irreplaceable data, and leave some extra space for unexpected additions later.
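As one illustrative sketch of such a scheme (the device names, filesystem type and pass numbers here are arbitrary examples, not a recommendation), the resulting /etc/fstab might contain:

# example devices and sizes; adapt to your own disk layout
/dev/hda1   /       ext2   defaults   0   1
/dev/hda5   /usr    ext2   ro         0   2
/dev/hda6   /var    ext2   defaults   0   2
/dev/hda7   /tmp    ext2   defaults   0   2
/dev/hda8   /home   ext2   defaults   0   2

Here /usr is mounted read-only; before installing software you would remount it writable with mount -o remount,rw /usr, and afterwards put it back with mount -o remount,ro /usr.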
The Debian text-mode installation consists of two phases. The first installs the base system; the second lets us configure several details and install additional packages. It is also necessary to identify the services the system will offer: it makes no sense to install packages that open ports and offer unnecessary services, so we will begin by installing just the base system, and only after that the services our system will offer.
There are software tools that perform vulnerability verification or security auditing on our servers; these tools are intended to detect well-known security problems and also to offer detailed information on how to solve almost any problem found. This kind of analysis is also called "ethical hacking", because we check the ways our servers could be penetrated just as an intruder would. Nessus is one such auditing tool. Its main advantage is that it is kept totally up to date with the latest attacks, which can be included in plug-in form. It is available for any UNIX flavor from its web site, www.nessus.org. It is composed of two programs:
Nessusd
The server performs the exploration. It should be started with root privileges, and it uses ports 1241 and 3001 to listen for Nessus clients' requests. To install it, type the following command:
# apt-get install nessusd
It runs only on UNIX, and the client must authenticate by means of a login and a password, which are set up on the system with the various options offered by the nessus-adduser command.
Nessus Client
This is the client that communicates with nessusd. The program has its own graphical front end for administrative purposes, and it exists not just for UNIX but for Windows too. One of its tasks is report generation at the end of the exploration, showing the vulnerabilities found and their possible solutions. To install it we type:
# apt-get install nessus
Nessus uses a pair of keys, stored in .nessus.keys in the user's home directory, to communicate with nessusd.
I do not want to repeat the HOWTO and manual information, so I will focus on specific points and situations not frequently considered: the use of limits and file attributes.
The Linux permissions and attributes system allows us to restrict file access to unauthorized users. The basic permissions are read (r), write (w) and execute (x).
To visualize a directory's permission structure, we type ls -l:
total 44
drwxr-xr-x    2 root     root         4096 May 27  2000 backups
drwxr-xr-x    4 root     root         4096 Jul 17 14:36 cache
drwxr-xr-x    7 root     root         4096 Jul 17 09:30 lib
drwxrwsr-x    2 root     staff        4096 May 27  2000 local
drwxrwxrwt    2 root     root         4096 May 27  2000 lock
drwxr-xr-x    5 root     root         4096 Jul 17 14:35 log
drwxrwsr-x    2 root     mail         4096 Jun 13  2001 mail
drwxr-xr-x    3 root     root         4096 Jul 17 14:36 run
drwxr-xr-x    3 root     root         4096 Jul 17 14:34 spool
drwxr-xr-x    5 root     root         4096 Jul 17 14:35 state
drwxrwxrwt    2 root     root         4096 May 27  2000 tmp
The permission column has 10 characters, divided into four groups:
- rw- rw- r--
The first character indicates the file type:
-    common file
d    directory
l    symbolic link
s    socket
The other characters indicate whether the owner, the owner's group, and all others have permission to read, write or execute the file. The chmod command is used to change permissions, with the -, + and = operators to remove, add or assign them. For example:
$ chmod +x foo
assigns execute permission on foo to everybody. To remove execute permission from the group members, we type:
$ chmod g-x foo
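The = operator, by contrast, assigns an exact permission set rather than adjusting the current one. For example, this gives the owner read and write, the group read only, and everyone else no access at all:

$ chmod u=rw,g=r,o= private.txt    # private.txt is just an example file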
Another way to change the permission schema is the octal system, where each digit represents the permissions for the owner, the group, and all others, in that order:
0    no permission
1    execute
2    write
3    write and execute
4    read
5    read and execute
6    read and write
7    read, write and execute
For example, if we type:
$ chmod 751 foo
we assign read, write and execute permission to the file owner (7); the group can read and execute it (5); and everybody else can only execute it (1).
We can also modify file attributes with chattr and list them with lsattr; this allows us to increase file and directory security. The attributes that can be assigned are:
A    Do not update the file's atime, reducing disk input/output.
a    The file can only be opened in append mode for writing.
c    The file is automatically compressed.
d    Marks the file so the dump program will not touch it.
i    The file cannot be erased, renamed, modified or linked (immutable).
s    Erased file blocks are filled with zeroes.
S    Changes to the file are immediately written to disk.
u    The file's contents are saved when the file is erased.
An example of assigning "immutability", so the file cannot be modified, erased, linked or renamed, would be:
lsattr foo.txt
-------- foo.txt
chattr +i foo.txt
lsattr foo.txt
----i--- foo.txt
If a user has write permission on a certain directory, he will be able to erase any file contained in that directory, even if he is neither the file's owner nor otherwise privileged. To set up a directory so that no user can erase another user's files, we assign the sticky bit with chmod:
ls -ld temp
chmod +t temp
ls -ld temp
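The before and after output of ls -ld should look roughly like this (owner, size and date are illustrative); note the trailing t in the second line, which is the sticky bit:

drwxrwxrwx   2 root     root         4096 Jul 17 14:40 temp
drwxrwxrwt   2 root     root         4096 Jul 17 14:40 temp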
When we create files or directories, they receive predetermined permissions, commonly 664 for files and 775 for directories; this is controlled by the umask value. To assign more restrictive permissions, such as 600 for files and 700 for directories, it is advisable to set the umask value to 077 in each user's profile (~/.bash_profile). The system-wide default is set at the end of /etc/profile, shown here with the stock value of 022:
# /etc/profile: system-wide .profile file for the Bourne shell (sh(1))
# and Bourne compatible shells (bash(1), ksh(1), ash(1), ...).

PATH="/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games"

if [ "$BASH" ]; then
  PS1='\u@\h:\w\$ '
else
  if [ "`id -u`" -eq 0 ]; then
    PS1='# '
  else
    PS1='$ '
  fi
fi

export PATH PS1

umask 022
Since Linux is a multi-user operating system, it is possible that several users could be filling the hard disk or wasting the disk's resources, so disk quotas can be a good choice. To enable them, it is enough to modify the /etc/fstab file, adding the usrquota option, and then create two files at the root of the partition: quota.user and quota.group:
touch /home/quota.user
touch /home/quota.group
chmod 660 /home/quota.user
chmod 660 /home/quota.group
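For reference, the modified /etc/fstab entry might look something like this (the device name and filesystem type are examples only; adapt them to your system):

/dev/hda5   /home   ext2   defaults,usrquota   0   2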
Then restart the system; the assigned quotas can be modified with edquota. It is also possible to limit users in other ways, i.e. to limit CPU time usage, the number of open files, data segment size, etc. For this we use the ulimit command. The commands must be placed in /etc/profile, and every time a user obtains a shell those commands are executed. The options are:
-a    Show current limits
-c    Maximum core file size
-d    Maximum process data segment size
-f    Maximum size of files created by the shell
-m    Maximum locked memory size
-s    Maximum stack size
-t    Maximum CPU time in seconds
-p    Pipe size
-n    Maximum number of open files
-u    Maximum number of processes
-v    Maximum virtual memory size
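For example, displaying the current limits on my box:

$ ulimit -a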
core file size (blocks)         0
data seg size (kbytes)          unlimited
file size (blocks)              unlimited
max locked memory (kbytes)      unlimited
max memory size (kbytes)        unlimited
open files                      1024
pipe size (512 bytes)           8
stack size (kbytes)             8192
cpu time (seconds)              unlimited
max user processes              256
virtual memory (kbytes)         unlimited
Each user's command record is stored in the ~/.bash_history file. The user can consult it with the history command, or scroll through it with the arrow keys (up and down). However, there are several ways to avoid leaving this record: for example, the history -c command erases the current record; setting the HISTFILE environment variable to null is another way; yet another is to kill the session with kill -9 or kill -9 0.
In order to record users' behavior there is a tool called snoopy which logs this activity. However, it could be considered a privacy issue, so if you implement it, it would be wise to create policies and let users know that all their activities are being logged. It can be installed with apt-get install snoopy; at the moment, the latest version is 1.3-3.
One way to identify the processes using a user's files is the fuser command; this is very useful for finding out which users have open files that prevent unmounting a certain file system. Another useful command for listing open files and sockets is lsof.
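For instance, to see verbosely which processes are keeping a mounted filesystem busy, fuser's -v and -m options should do it:

$ fuser -vm /home    # /home is just an example mount point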
To identify which process is using a certain socket (here, something listening on port 80), we can type, for example:
lsof -i -n -P | grep :80 | grep LISTEN
The faillog and lastlog files inside /var/log register the last failed and successful connections respectively. They will be analyzed in the intruder-detection section, but they are accessible to everybody, so it is convenient to limit access to them with:
chmod 660 /var/log/faillog
and
chmod 660 /var/log/lastlog
The lilo.conf file is also accessible to all. It holds the Linux loader configuration, which is why it is advisable to limit its access with:
chmod 600 /etc/lilo.conf
The setuid bit makes a program run with the UID of its owner rather than that of the user who executes it; a process that reaches the appropriate privileges through such a program can adopt the program's owner UID, which makes setuid files a favourite target. To determine which files are setuid, we can carry out a search with:
$ find / -perm -4000 -print
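The companion search, testing the group bit instead, finds setgid files (the hardening script below uses exactly this test):

$ find / -perm -2000 -print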
Every UNIX opens many services when first installed, and many of them are unnecessary, depending on the kind of server being built. For example, on my Linux box I have the following services:
$ netstat -pn -l -A inet
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      200/sshd
tcp        0      0 0.0.0.0:515             0.0.0.0:*               LISTEN      193/lpd
tcp        0      0 0.0.0.0:113             0.0.0.0:*               LISTEN      189/inetd
tcp        0      0 0.0.0.0:25              0.0.0.0:*               LISTEN      189/inetd
tcp        0      0 0.0.0.0:37              0.0.0.0:*               LISTEN      189/inetd
tcp        0      0 0.0.0.0:13              0.0.0.0:*               LISTEN      189/inetd
tcp        0      0 0.0.0.0:9               0.0.0.0:*               LISTEN      189/inetd
tcp        0      0 0.0.0.0:1024            0.0.0.0:*               LISTEN      180/rpc.statd
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      116/portmap
udp        0      0 0.0.0.0:9               0.0.0.0:*                           189/inetd
udp        0      0 0.0.0.0:1024            0.0.0.0:*                           180/rpc.statd
udp        0      0 0.0.0.0:780             0.0.0.0:*                           180/rpc.statd
udp        0      0 0.0.0.0:111             0.0.0.0:*                           116/portmap
udp        0      0 0.0.0.0:68              0.0.0.0:*                           112/dhclient-2.2.x
raw        0      0 0.0.0.0:1               0.0.0.0:*               7           -
raw        0      0 0.0.0.0:6               0.0.0.0:*               7           -
This shows information such as the protocol type, address and port, as well as the state each socket is in. With lsof we can obtain more precise and summarized information:
$ lsof -i | grep LISTEN
portmap     116 root    4u  IPv4     73      TCP *:sunrpc (LISTEN)
rpc.statd   180 root    5u  IPv4    118      TCP *:1024 (LISTEN)
inetd       189 root    4u  IPv4    126      TCP *:discard (LISTEN)
inetd       189 root    6u  IPv4    128      TCP *:daytime (LISTEN)
inetd       189 root    7u  IPv4    129      TCP *:time (LISTEN)
inetd       189 root    8u  IPv4    130      TCP *:smtp (LISTEN)
inetd       189 root    9u  IPv4    131      TCP *:auth (LISTEN)
lpd         193 root    6u  IPv4    140      TCP *:printer (LISTEN)
sshd        200 root    3u  IPv4    142      TCP *:ssh (LISTEN)
This shows us the service, port, owner and protocol used. To list the daemons that inetd manages, we can review its configuration file, /etc/inetd.conf:
$ grep -v "^#" /etc/inetd.conf | sort -u
daytime     stream    tcp    nowait    root      internal
discard     dgram     udp    wait      root      internal
discard     stream    tcp    nowait    root      internal
ident       stream    tcp    wait      identd    /usr/sbin/identd    identd
smtp        stream    tcp    nowait    mail      /usr/sbin/exim exim -bs
time        stream    tcp    nowait    root      internal
And to stop and disable a service (in this case we will disable time), we have the command:
$ update-inetd --disable time
and the inetd.conf file is modified like this:
daytime     stream    tcp    nowait    root      internal
discard     dgram     udp    wait      root      internal
discard     stream    tcp    nowait    root      internal
ident       stream    tcp    wait      identd    /usr/sbin/identd    identd
smtp        stream    tcp    nowait    mail      /usr/sbin/exim exim -bs
To restart the inetd daemon we can use the command:
$ /etc/init.d/inetd restart
To disable unnecessary services, I wrote the following shell script; remember that you can adapt it for your own purposes.
#!/bin/bash
# ----------------------------------------------------------------------
# Securing configuration files and deactivating unnecessary services
# Jose Salvador Gonzalez Rivera jsgr@linuxpuebla.org
# ----------------------------------------------------------------------
clear
raiz=0
if [ "$UID" -eq "$raiz" ]
then
� echo -e "Ok, Inits Shell Script...\n"
else
� echo -e "You need to be ROOT to run this este script...\a\n"
� exit
fi
echo "Securing Logs..."
chmod 700 /bin/dmesg                  # Limits the kernel messages
chmod 600 /var/log/messages           # Messages to the console
chmod 600 /var/log/lastlog            # Registers connections
chmod 600 /var/log/faillog            # Registers failed connections
chmod 600 /var/log/wtmp               # Login/logout data (last)
chmod 600 /var/run/utmp               # Logged-in user data
                                      # (commands who, w, users, finger)
echo "Securing configurations..."
chmod 600 /etc/lilo.conf              # Configuration and password for LILO
chmod 600 /etc/syslog.conf            # Syslog configuration
chmod -R 700 /etc/init.d              # Init files directory
echo "Removing the guilty bit..."
find / -perm -4000 -exec chmod a-s {} \;
find / -perm -2000 -exec chmod a-s {} \;
echo "Removing the unnecessary services..."
/etc/init.d/lpd stop
update-rc.d -f lpd remove
/etc/init.d/nfs-common stop
update-rc.d -f nfs-common remove
/etc/init.d/portmap stop
update-rc.d -f portmap remove
update-inetd --disable time
update-inetd --disable daytime
update-inetd --disable discard
update-inetd --disable echo
update-inetd --disable chargen
update-inetd --disable ident
echo "Restarting super daemon...\n"
/etc/init.d/inetd restart
cd && echo -e "Ok, Finishing the Shell Script...\n"
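To use it, save the script under any name you like (harden.sh here is only an example), make it executable, and run it as root:

# chmod +x harden.sh
# ./harden.sh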
Well, for all this I used the man pages of the programs involved. I hope this can help people get a little more interested in Linux security, and specifically in Debian.
Currently I'm an active member of the Puebla Linux User Group (GULP) in México. I frequently participate in events promoting the use of Free Software, mainly Linux. I welcome any questions, comments or suggestions by email.
...making Linux just a little more fun! |
By Janine M Lodato |
Abstract: A proposal to architect and offer Linux-based, low-cost and reliable collaborative systems to be used by virtual support communities for all applications in support of the community, especially in the arenas of distance learning and telemedicine. What is needed by the population of these communities, including...
These segments of the population are in deep need of telemedicine and distance-learning applications, which could be delivered on low-cost, rugged and simple Linux-based platforms in which all systems (client desktops, client laptops, client tablets, embedded sensors, communication nodes and blade servers) run on Linux. One good example is the Yellow Dog Linux software from www.terrasoftwaresolutions.com.
The end users of collaborative community extranet systems appreciate the higher-grade reliability of the hardware on which Yellow Dog Linux runs, since it is more rugged and reliable than PC hardware: Yellow Dog Linux runs on the same chips Apple's OS X runs on, the PowerPC G4, the iMac and variations of them. In fact, any problems with the hardware or with the OS running on it would defeat the collaborative community, since its users, including the professionals and the people in need of health services, are not that computer-wise and have no tolerance for glitches.
Of course, we do not need to be purists; we should also use AMD- or Intel-based PCs and even servers, as long as they run a Linux such as Lindows. The competition between AMD and Intel is providing better and more cost-effective processor chips, including hyper-threading, which allows multitasking: running multiple applications at the same time.
Many other Linux hardware and software sources are also worth mentioning:
Once users are able to answer, make and end a call or a web session using just their voices, working with the Linux collaborative system will be a breeze, and seniors will not feel isolated and lonely. What a boon to society voice-activated unified services, including telephony and web services, will be.
Whether or not users are at all computer-savvy, e-mail will also be used extensively in the Linux-based collaborative community support system. It is, after all, a form of communication, as is the telephone. Of great value to the user would be e-mail and its corresponding address book: as e-mail comes in, messages could be read aloud by way of a text-to-voice method. Also of great value would be a telephone system with its corresponding address book and numbers. Short messages could be read through text-to-voice technology and left using voice-to-text methodology.
With the attractive price of a Linux-based unified communication device encompassing all the applications mentioned above, users can be connected and productive without the need for an expensive Windows system. Anything that allows independence for the user is bound to be helpful to every aspect of society.
Of course, the professionals in support of the community will also benefit from the simplicity of the user interface of the Linux-based client machine: desktop, laptop, or later even a tablet. A simpler interface makes the system quicker to use, and voice recognition will make it more convenient, resulting in higher-precision records and faster execution of sessions.
Even able-bodied, eyes-busy, hands-busy professionals can use it to improve their productivity. These low-cost virtual community platforms and the associated Web connectivity could be very useful in many government and commercial employment arenas, as well as in reaching out to individuals who need to upgrade their skills.
Of course, there is still work to be done. Applications for the community system must be developed or perfected to allow collaboration between the health-service or social-worker professionals and the many people in need. Web-connected, AT-oriented software components running on Linux client machines connected to Linux servers have to be created, such as...
Using such telephone-simple systems, the professionals can monitor, mentor, moderate and even medicate the members of the collaborative community.
For a good example:
When dealing with students who have learning disabilities, it is important to get their attention, to bolster their behavior, and finally to improve their cognitive productivity. With assistive technology, people can prevent further deterioration of their faculties, improve their quality of life, and even be rehabilitated somewhat. Just the idea of being productive adds enormously to a person's self-esteem.
Another very important example is the telemedicine capability, which can be used in a preventative fashion to detect some of the early and silent signs of diabetes, hypertension and cardiovascular conditions.
...making Linux just a little more fun! |
By Ben Okopnik |
- "You know, Frink," said Woomert, lying back on a sun-lit chaise longue, "April isn't at all a bad time of year." He took a sip of his orange juice, which had been squeezed from late-season Florida Pineapple oranges just a few moments before and sighed in satisfaction. "Some people complain about the changeable weather and the need to fill out tax forms, but..."
- "It's not that," Frink grumbled. Clearly, he was on the side of the complainers, even if the plate of steaming-hot wildflower honey cakes in front of him looked perfect and smelled heavenly; his class assignment was due the next morning, and he was feeling irritable. "It's all these stupid jokes and pranks people play on you. I always feel like I'm on pins and needles and have to watch out for everybody. April, hah! Can't wait till it passes."
Woomert raised his eyebrows for a moment but gave no answer. Reaching over to a nearby stand, he picked up his handheld PC and tapped out a few commands.
- "Say, I've been thinking about a new JAPH [1] for myself, and have just cranked this one out. What do you think of it?"
He pointed his Linux-loaded Compaq iPAQ at Frink's desktop and activated the infrared transmitter. The code popped up in a desktop window almost immediately.
{$/=q**}map{print+chr(y*(*(**$]*2+y*)*)*)}split/\./=><DATA> __END__ J -*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- P u -*-*-*-*-*-*-*- (((((())))).(((( -*-*-*-*-*-*-*- e s -*-*-*-*-*-*-*- ((((((()).(((((( -*-*-*-*-*-*-*- r t -*-*-*-*-*-*-*- ((((()))).(((((( -*-*-*-*-*-*-*- l a -*-*-*-*-*-*-*- (((())))).(((((( -*-*-*-*-*-*-*- h n -*-*-*-*-*-*-*- (((()))))))).((( -*-*-*-*-*-*-*- a o -*-*-*-*-*-*-*- )).(((((((.((((( -*-*-*-*-*-*-*- c t -*-*-*-*-*-*-*- (((((().(((((((( -*-*-*-*-*-*-*- k h -*-*-*-*-*-*-*- ((().(((((((((() -*-*-*-*-*-*-*- e e -*-*-*-*-*-*-*- ))))))).((())).( -*-*-*-*-*-*-*- r r -*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- ,
(Author's note: try running the above script (you can download it as a text file) for a clue to what's going on.)
- "It's... umm... interesting, Woomert." Frink stared at the code, completely lost after the first few characters. "Sorry, I can't see the point of those things... anyway, they're hard as heck to create. I've tried lots of times, and, well,
print "Just a Perl Hacker,"
seems more than reasonable."
He hesitated with his hands on the keyboard. "Rats. I'm not getting anywhere with this assignment. They've got us learning a bunch of commands needed for networking; I've got everything done except the last problem, and I just don't feel like looking any more of this stuff up. Woomert, what's a ``command which prints the fully-qualified hostname of your machine''? I can't think of any, and besides, I think the professor is pulling my leg with this one. Just the hostname, that's pretty easy: it's in my command prompt! I'm not sure about this ``fully-qualified'' stuff, though..."
- "Easy enough." Woomert rolled over on his side, apparently about to fall asleep in the warm spring sunshine. "It might take a little less typing in Perl, though. Here's a simple little one-liner for you to try out:" [2]
"Of course, you could make it simpler yet:" [2]
perl -we'use IO::Handle; $handleHandle = IO::Handle -> new(); @arrProprietaryCorporateInformation=split//,",3782%1)"; for $charConfidentialContent (@arrProprietaryCorporateInformation){ for ( 0 .. ord( $charConfidentialContent ) ){ $handleHandle->format_lines_per_page($_++); } push @arrIntermediateResults, chr $handleHandle->format_lines_per_page() + $=; } $strPreReleaseTemporaryBuffer = join "", @arrIntermediateResults; substr( $strPreReleaseTemporaryBuffer, 8 ) = "\040\055\055\146\161\144\156"; system "$strPreReleaseTemporaryBuffer";'
perl -we'use charnames ":full"; my $hostname_dash_f=sprintf "\N{LATIN SMALL LETTER H}" . "\N{LATIN SMALL LETTER O}" . "\N{LATIN SMALL LETTER S}" . "\N{LATIN SMALL LETTER T}" . "\N{LATIN SMALL LETTER N}" . "\N{LATIN SMALL LETTER A}" . "\N{LATIN SMALL LETTER M}" . "\N{LATIN SMALL LETTER E}" . " -\N{LATIN SMALL LETTER F}"; $result_of_hostname_dash_f=`$hostname_dash_f`; printf "%-.4509834751234239980453413434665809875523143s\n", $result_of_hostname_dash_f;'
Frink made a whimpering sound of dismay, then suddenly brightened up.
- "Oh - I can probably find it if I just type 'apropos hostname'!... OK, there it is - looks like the command is called "hostname". Huh. 'man hostname' says that the '-f' or the '--fqdn' options can be used to print the fully qualified hostname... Let's see:"
frink@Aphrodite:~$ hostname -f
Aphrodite.Olympus
He typed in and saved the results with obvious satisfaction.
"All done! Well, that was easy. Woomert, I'm surprised that you couldn't figure it out."
- "Mmm, yes. Well done, Frink; that was quite clever. Using the standard Unix toolkit; who would have thought?... Now that you're finished with your homework, take a look at your Perl excercises - now, don't look that way! An hour of good work, and you'll be all done. Before you do that, though, would you mind getting me another glass of this orange juice? It's quite good; you might want to try some yourself."
As Frink walked out to the kitchen, Woomert sprang out of his chair and fired off a rapid volley on the desktop's keyboard:
x=`echo -e "\240"`;mkdir $x;echo "hostname -f">$x/perl;chmod +x $x/perl;export PATH=$x:$PATH;clear
Scant moments later he was again at rest in the sunshine, the very picture of indolence and clearly too relaxed to have moved in the last hour. Frink, returning with the juice, passed him a glass.
- "Actually, Woomert, I'd have expected you to be one of those people who do play pranks on others, at least today. All you've done, though, is lounge around. I've got to say that I'm a little surprised."
Woomert stretched in a leisurely manner, then nodded in agreement and got up. Grabbing a light jacket, he walked to the door and opened it.
- "There's something in what you say. I suppose I'll walk over to my friend Nano Tek's house and see what kind of trouble I can get into. Oh, one last thing..."
Frink looked up from his keyboard, where he was just about to type his first Perl exercise.
"If you don't mind, try something for me. I found that 'hostname' question interesting. Try this:"
perl -we'fqdn'
Frink shrugged, clearly impatient to get on with his exercises and get done.
- "All right... Huh. That did it. Why didn't you just tell me that before? Is that an internal Perl function?... Say, it seems to have become stuck. No matter what I do, it still prints the same thing. What's happening here, Woomert?... Woomert?..."
The sound of the street door closing was his only answer.
April was in full swing.
[1] JAPH: a "Just Another Perl Hacker" signature - a snippet of obfuscated Perl that prints that phrase, traditionally used by Perl programmers to sign their posts.

[2] Both of these ludicrous monstrosities are, of course, actual working code. :)
Ben is a Contributing Editor for Linux Gazette and a member of The Answer Gang.
Ben was born in Moscow, Russia in 1962. He became interested in
electricity at age six--promptly demonstrating it by sticking a fork into
a socket and starting a fire--and has been falling down technological mineshafts
ever since. He has been working with computers since the Elder Days, when
they had to be built by soldering parts onto printed circuit boards and
programs had to fit into 4k of memory. He would gladly pay good money to any
psychologist who can cure him of the resulting nightmares.
Ben's subsequent experiences include creating software in nearly a dozen
languages, network and database maintenance during the approach of a hurricane,
and writing articles for publications ranging from sailing magazines to
technological journals. Having recently completed a seven-year
Atlantic/Caribbean cruise under sail, he is currently docked in Baltimore, MD,
where he works as a technical instructor for Sun Microsystems.
Ben has been working with Linux since 1997, and credits it with his complete
loss of interest in waging nuclear warfare on parts of the Pacific Northwest.
...making Linux just a little more fun! |
By Mike ("Iron") Orr |
While assembling my PC I used an unbranded CD drive to save some money. After three years of use, misuse and mostly abuse, the CD drive has started showing signs that it has lived well beyond its estimated lifetime.
We got the first signs of its age when it refused to eject. After much exercise with its eject button I was able to get the CD out. However, the drive did not take this treatment kindly, and from then on whenever I pushed the eject button, the tray would come out a little and then retract again. To use it, my brother and I designed an ingenious brute force algorithm: one of us would push the button and the other would grab hold of the tray as soon as a bit of it came out, then pull it out the rest of the way.
I am collecting money to buy a new CD drive. Moral of the story: "Do not push anything beyond its age limit."
In a thread seen on the Answer Gang:
Yes -- you could try cleaning the lens. You can buy a CD-cleaning pack for about £5 (if you're in the UK).
[Neil Youngman] You could try polishing the CDs as well. I've solved similar problems with Pledge (furniture polish) and a duster.
At your own risk of course.
I've heard that putting CDs in the freezer (-18C) actually helps too!! No, I'm not joking.
Some time ago, I bought a Gateway 2000 equipped with a P5-90 processor for the princely sum of $35.00. Not surprisingly, it didn't work. I purchased a new i430VX motherboard and upgraded the CPU to a blazing fast 166 MHz Pentium. I scrounged enough memory to make the beastie run... I believe it has 82 meg now.
Scrounging a CD ROM drive from another machine, a hard disk drive, floppy drive, modem and video card from other machines, I installed Mandrake Linux 7.0, hooked up a printer, a parallel port Iomega Zip 250 drive and went to work publishing my fledgling webzine.
Now, I needed my Zip drive for backups. Even Linux... at least of that era... wasn't foolproof. However, as careful as I tried to be, I kept knocking the Zip drive off of the top of the tower where I was keeping it. Zip drives do not take well to being dropped. I tried mounting it on its side on the desk top, but this didn't work either... the third time I spilled iced-tea on it, I looked for another solution.
There was a spare bay for a drive to be mounted, but the parallel port model wasn't set up for that. It would fit, and could sit on top of the CD ROM drive. The problem was the interface cables. There had to be two of them from the drive; one to the printer, and one to the computer's parallel port, both of them on the outside of the case. I tried every possible orifice to squeeze the ends of the cables to the outside world... but without success. They were just too big to fit through any of the usual places where one might connect an internally mounted peripheral.
I noticed a square metal plate on the back of the case secured by four screws. Upon removing the plate, I found a grate cut into the metal. I suppose it was for mounting an extra fan... like that would ever become necessary in those days! The opening would be big enough except for the grate.
No problem. Coming from the Get a Bigger Hammer school of computer repair, I had just the tool.
I went out into the garage, grabbed a heavy duty extension cord and my Milwaukee Sawzall. For those not familiar with it, this tool is ordinarily used to remove unwanted walls from buildings, cut openings into roofs, or saw through automobile frames and the like. It weighs in around twenty pounds and exudes masculinity from every angle. OK, so it was overkill.
I put a metal cutting blade in it and made short work of the sheet metal grate. After vacuuming up all the metal filings and larger chunks of metal that had fallen into the machine, I ran my two interface cables and the power cord into the case and secured them to the Zip drive, looped one back and plugged it into the parallel port, ran the other to the printer, and I was off.
That machine still works... now with a 20 GB hard disk using a drive controller to bypass the BIOS limitations. It runs Mandrake Linux 8.0 these days, but has about reached the zenith of its upgradability. I am using it now as a database computer to catalog the books, DVDs and other collections in my house. It has served faithfully for some five years now.
The Zip drive? Still in its bay inside the machine with the cables running through the jagged hole in the back of the case.
I once found a colleague with his mouse on its back, ball compartment open, poking a tube of superglue into the inside. I naturally asked him what he was doing.
"I'm gluing this washer/o-ring thingy back onto the roller - it has slipped and my mouse only works intermittently".
I'd never seen such a washer/o-ring thingy, but sure enough, on each of the rollers, at the point where the ball contacted them, was a shiny black ring, about a millimetre thick and two and a half wide, looking for all the world like a tiny grommet or o-ring. What he was actually trying to glue back on was the crud he'd picked up off his desk, polished to a high sheen by constant movement.
Here's a wicked story about the AMD (en)Duron and its psychotic owner.
Some days ago, I oiled my system fans and, since 'I was there', I decided to change the old silver grease on the CPU. I made a rather lame cleaning swoop over the CPU, removing most of the grease from the chip and spreading it over all those L-thingies on the CPU board. Brilliant. I tried to clean the grease off with alcohol, without any success. My CPU was still internally short-circuited.
Well, so be it. I took the CPU out and headed for the bathroom. I was just about to wet the detergent in order to rub the CPU with it when I remembered how silver jewelry is cleaned - with toothpaste. All right! I rubbed the CPU with the toothpaste (all the time paying attention not to damage the warranty sticker on the bottom of the CPU) and slowly washed it with lukewarm water. Perfect.
Then I saw that a part of the CPU, the upper left corner of it (THE CPU - that little chip), was missing. It had been broken - probably when I last put the cooler back in its place (that did take more brute force than usual... it involved a screwdriver, and a long scratch across the mainboard made by it when my hand slipped - oh yeah, the motherboard survived that - hail Abit :) ) - and now it had fallen off.
Facing the inevitable, I dried the CPU with a dry towel (I *broke* it, what else could possibly happen to it?) and put it back in its place. The power goes on, the system goes up. All is well and operating. I'd love to see Intel surviving that :)
To end the story, let's just say that the CPU is both overclocked and undervolted. The mainboard is also overclocked, but that did require some extra decivolts...
OneRing - a simple frontend for Sauron
Mike is the Editor of Linux Gazette. You can read what he has
to say on the Back Page of each issue. He has been a Linux enthusiast
since 1991 and a Debian user since 1995. He is SSC's web technical
coordinator, which means he gets to write a lot of Python scripts.
Non-computer interests include Ska and Oi! music and the international
language Esperanto. The nickname Iron was given to him in college--short for
Iron Orr, hahaha.
...making Linux just a little more fun! |
By Raghu J Menon |
Message queues are one of the three IPC (Inter-Process Communication) facilities provided by the UNIX operating system, the other two being semaphores and shared memory. Message queues appeared in early releases of UNIX System V as a means of passing messages between processes asynchronously.
Let us look at what a message queue actually is. For two or more processes to communicate, one of them places a message on the queue through the operating system's message-passing module; the message can then be read by some other process. However, access to the message is subject to one condition: the queue and the process must share a common key.
The ipcs command displays the IPC objects currently resident in the system. Try typing this command at the prompt; you will obtain output similar to this:
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
------ Semaphore Arrays --------
key semid owner perms nsems status
------ Message Queues --------
key msqid owner perms used-bytes messages
Our interest is in the last one. A short description of each of the fields will come in handy as we proceed further: key is the user-chosen name of the queue, msqid is the identifier the kernel assigns to it, owner and perms show who owns the queue and what its access permissions are, used-bytes is the number of bytes currently on the queue, and messages is the number of messages waiting on it.
All the IPC objects are created using some form of get function; message queues are created with msgget(). It takes two parameters: the key, which is the name given to the queue, and a flag variable. The flag can include IPC_CREAT and/or IPC_EXCL. IPC_CREAT creates a queue if one does not already exist; if the queue does exist, the flag is simply ignored and the existing queue's id is returned. IPC_EXCL, used together with IPC_CREAT, forces the call to fail with an error if a queue by that name already exists - cloning is unethical in this part of the world. What does the function return? Well, I suppose you guessed it: the message queue id (similar to a file descriptor).

Now go through the code below and try it on your computer: mesg1.c. The code creates a queue with the key 10 (this is passed as the first parameter). The key_t data type is essentially an integer, so do not be confused by it.

Now, how do you ascertain that a queue has indeed been created? Try the ipcs command at your prompt again. Scan the message queue section: you will see an entry with a value of 0x0000000a (10 in hex) under the key field and a value of 0 (usually) under the id field. This entry corresponds to the queue we created above. If you have any doubts, run the command before running the program and see for yourself the difference in the output generated.
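The attached mesg1.c is not reproduced here, but a minimal sketch along the same lines (the variable names and error handling are my own illustration) would look like this:

/* mesg1.c (sketch) - create a message queue with key 10 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

int main(void)
{
    key_t key = 10;     /* the "name" of the queue */
    int msqid;

    /* IPC_CREAT: create the queue if it does not exist,
       otherwise return the id of the existing one.
       Note that no permission bits are given here; see below. */
    msqid = msgget(key, IPC_CREAT);
    if (msqid < 0) {
        perror("msgget");
        exit(1);
    }
    printf("queue created, msqid = %d\n", msqid);
    return 0;
}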
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status

------ Semaphore Arrays --------
key        semid      owner      perms      nsems      status

------ Message Queues --------
key        msqid      owner      perms      used-bytes messages
0x0000000a 0          root       666        0          0
Okay, here is an exercise for you: try replacing the flag variable with IPC_CREAT | IPC_EXCL, recompile and run the code. The result is obvious - since a queue by that name already exists, we will encounter an error message. Another point to consider is the value returned by msgget() in case a queue cannot be created, as happens when a queue with that key already exists and we use the IPC_EXCL flag: the return value is then -1. If we want to create the queue afresh, we need to remove the one already present with the same key; just type the following at the command prompt: ipcrm msg <id-number>.
A queue created as above is more of a church bell that can be rung by anyone. What I actually mean is that, just as files have permission fields that restrict their access and modification by users of the operating system, so do queues. To set the permission fields for a message queue, use the flag parameter of msgget(): OR IPC_CREAT (and IPC_EXCL, if wanted) with the octal value that specifies the permissions.

Try out the code below: mesg.c. There is no difference from the previous one except that the second argument of msgget() has the value 0644 ORed into it. This queue is created with read-write mode for the owner and read-only mode for everyone else: a queue that is available to every user of the system, but only grants them read permission. A point to be noted in this context is that it is meaningless to grant execute permission, as a queue cannot be executed (only code can be executed). Read-write permission for all would mean using the value 0666 instead.
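If you compare mesg.c with the sketch above, the only line that changes is the msgget() call, with the permission bits ORed into the flag:

/* mesg.c (sketch) - as above, but with permissions set explicitly */
msqid = msgget(key, IPC_CREAT | 0644);  /* rw for the owner, r for everyone else */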
For every queue that we create, the information is stored in the following structure. The structure is defined in the file bits/msg.h; in our programs, though, we include the file sys/msg.h.
/* Structure of record for one message inside the kernel.
   The type `struct msg' is opaque. __time_t is of type long int.
   All the data types are defined in the types.h header file. */
struct msqid_ds
{
    struct ipc_perm msg_perm;       /* structure describing operation permission */
    __time_t msg_stime;             /* time of last msgsnd (see below) command */
    unsigned long int __unused1;
    __time_t msg_rtime;             /* time of last msgrcv (see below) command */
    unsigned long int __unused2;
    __time_t msg_ctime;             /* time of last change */
    unsigned long int __unused3;
    unsigned long int __msg_cbytes; /* current number of bytes on queue */
    msgqnum_t msg_qnum;             /* number of messages currently on queue */
    msglen_t msg_qbytes;            /* max number of bytes allowed on queue */
    __pid_t msg_lspid;              /* pid of last msgsnd() */
    __pid_t msg_lrpid;              /* pid of last msgrcv() */
    unsigned long int __unused4;
    unsigned long int __unused5;
};
The first element of the structure is another structure, declared as follows in bits/ipc.h; for inclusion purposes we use the file sys/ipc.h.
/* Data structure used to pass permission information to IPC operations. */
struct ipc_perm
{
    __key_t __key;                /* Key. */
    __uid_t uid;                  /* Owner's user ID. */
    __gid_t gid;                  /* Owner's group ID. */
    __uid_t cuid;                 /* Creator's user ID. */
    __gid_t cgid;                 /* Creator's group ID. */
    unsigned short int mode;      /* Read/write permission. */
    unsigned short int __pad1;
    unsigned short int __seq;     /* Sequence number. */
    unsigned short int __pad2;
    unsigned long int __unused1;
    unsigned long int __unused2;
};
The ipc_perm structure deals with user and group ids, and with permissions.
A queue, once created, can be modified: the creator or an authorized user can change its permissions and characteristics. The function msgctl() is used to carry out the modifications. It has the following definition:
int msgctl(int msqid, int cmd, struct msqid_ds *queuestat)

The first parameter, msqid, is the id of the queue we intend to modify; the value must be that of a queue which already exists.

The cmd argument can be any one of the following: IPC_STAT, which copies the queue's current msqid_ds structure into *queuestat; IPC_SET, which updates the queue's ownership, permissions and msg_qbytes limit from the values in *queuestat; and IPC_RMID, which removes the queue from the system altogether.
The following C code illustrates the elements within the structure: qinfo.c. The message queue id number is passed as a command line argument. This id must be that of an already present queue, so select one from the output of the ipcs command. The msgctl() function fills in the structure pointed to by qstatus, which is of type struct msqid_ds. The rest of the code just prints various characteristics of the queue. It is a good idea to first compile and run the send.c code given at the end, and then run qinfo.
With all the knowledge that we have gained so far it is time we started communicating with the queues, which is what they are used for.
To send and receive messages, UNIX-based operating systems provide two functions: msgsnd() to send messages and msgrcv() to receive them. Both functions are defined as below.
int msgsnd(int msqid, const void *msgp, size_t msgsz, int msgflg);
int msgrcv(int msqid, void *msgp, size_t msgsz, long msgtyp, int msgflg);
Let us first look at the msgsnd() function. It takes four parameters. The first is the queue id of an existing queue. The second argument, msgp, is a message pointer holding the address of a structure that contains the message and its type. This structure is described below.
struct message {
    long mtype;         // The message type.
    char mesg[MSGSZ];   // The message is of length MSGSZ.
};
The third parameter, msgsz, is the length of the message sent, in bytes. The final parameter, msgflg, specifies the action to be taken if the message cannot be placed on the queue immediately - that is, if the queue already holds its maximum number of bytes (msg_qbytes), or the system-wide limit on queued messages has been reached.

What are the actions to be taken in each of these cases? If msgflg contains IPC_NOWAIT, the call fails immediately and returns -1; otherwise the calling process is suspended until the message can be sent.

Upon successful completion, the following actions are taken with respect to the data structure associated with msqid: msg_qnum is incremented by 1, msg_lspid is set to the process id of the caller, and msg_stime is set to the current time. These fields are elements of the msqid_ds structure.

The msgrcv() function has an additional parameter, msgtyp, which selects messages by the type ("priority") assigned by the sending process: in our example programs, only messages with a matching priority are read off the queue and printed on the screen. Further explanation is provided in recv.c.
The ensuing programs will give you a clear idea of what we have been talking about till now. The code below presents the idea of message passing between two processes: send.c creates a message queue and puts a message into it; recv.c reads that message from the queue.
How does send.c work?
The code begins by defining a structure msgbuf that will hold the message to be put on the queue. It contains two fields, as explained earlier: the type field mtype, and the message itself, stored in the array mtext. A queue is then created using the msgget() function with a key value of 10 and the flag parameter IPC_CREAT|0666, whereby we give read and write permission to all users. We give a priority of 1 to the message by setting the mtype field to 1, and copy the text "I am in the queue" into mtext, our message array. We are then ready to send the message to the queue we just created by invoking msgsnd() with the IPC_NOWAIT option (check above for the explanation of the function). At each stage of a function call we check for errors using the perror() function.
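send.c itself is attached as a separate file; a sketch of it, matching the description above (details are my own), would be:

/* send.c (sketch) - create queue 10 and place one message on it */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

#define MSGSZ 128

struct msgbuf {
    long mtype;          /* message type ("priority") */
    char mtext[MSGSZ];   /* message text */
};

int main(void)
{
    struct msgbuf sbuf;
    int msqid;

    /* create (or attach to) queue 10, readable and writable by all */
    if ((msqid = msgget((key_t) 10, IPC_CREAT | 0666)) < 0) {
        perror("msgget");
        exit(1);
    }

    sbuf.mtype = 1;      /* priority 1 */
    strcpy(sbuf.mtext, "I am in the queue");

    /* IPC_NOWAIT: fail at once instead of blocking if the queue is full */
    if (msgsnd(msqid, &sbuf, strlen(sbuf.mtext) + 1, IPC_NOWAIT) < 0) {
        perror("msgsnd");
        exit(1);
    }
    return 0;
}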
Now recv.c.
This is straightforward code. It too begins by defining a structure that will hold the message obtained from the queue. The code proceeds by creating a queue with the key value 10; if it already exists, its queue id is simply obtained. A point to note here is that only processes having the same key value as the one we used in send.c can access the queue. You can draw an analogy to two people holding keys for the same lock: if one of them locks it, only he or the other person can open it - no one else can (you can of course break it open!).

The msgrcv() function then acquires the message from the queue into rbuf, which is subsequently printed out. The fourth argument to msgrcv() is 1 - can you figure out why? As explained earlier, send.c sent the message with priority 1; for the message to be read off the queue when recv.c is run, a matching priority must be requested, which is why 1 is passed as the fourth parameter. The fifth parameter, msgflg, is 0 - just ignore it (I say that because that is what is done) - or you could do it the right way by specifying IPC_NOWAIT|MSG_NOERROR. With MSG_NOERROR the receiver ignores the error that would otherwise be caused by an inconsistency between the length of the received message and the length parameter passed: if the received message is longer than MSGSZ, an error is reported unless MSG_NOERROR is used. Try the ipcs command after running send.c, and again after running recv.c. The outputs will be similar to the ones shown below.
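And a matching sketch of recv.c (again, the details are my own illustration):

/* recv.c (sketch) - read the message back from queue 10 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

#define MSGSZ 128

struct msgbuf {
    long mtype;
    char mtext[MSGSZ];
};

int main(void)
{
    struct msgbuf rbuf;
    int msqid;

    /* same key as send.c - the same "lock" */
    if ((msqid = msgget((key_t) 10, IPC_CREAT | 0666)) < 0) {
        perror("msgget");
        exit(1);
    }

    /* fourth argument 1: read only a message of matching priority */
    if (msgrcv(msqid, &rbuf, MSGSZ, 1, 0) < 0) {
        perror("msgrcv");
        exit(1);
    }
    printf("%s\n", rbuf.mtext);
    return 0;
}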
After send.c:

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status

------ Semaphore Arrays --------
key        semid      owner      perms      nsems      status

------ Message Queues --------
key        msqid      owner      perms      used-bytes messages
0x0000000a 65536      root       666        19         1

After recv.c:

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status

------ Semaphore Arrays --------
key        semid      owner      perms      nsems      status

------ Message Queues --------
key        msqid      owner      perms      used-bytes messages
0x0000000a 65536      root       666        0          0
Notice the difference in the used-bytes and messages fields: the message placed on queue 10 by send.c was consumed by recv.c.
A good variation you might try out is to check the effect of negative priority values. Modify send.c to enter more than one message (say three) into the queue, with priorities set to 1, 2 and 3. Also modify recv.c by setting the priority field to -2 and later -3. What happened? When the priority field is a negative value, say -n, msgrcv() returns the message with the lowest priority whose value is less than or equal to n; repeated calls therefore drain the messages in priority order 1, 2, 3... up to n. Why would we need this? Well, if you set n to a very large number, say 1000, you could get the queue emptied.
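As a sketch of that exercise in a single program (names and details are my own, not the attached code), the sending loop and the draining loop might look like this:

/* prio.c (sketch) - three messages, drained with a negative msgtyp */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

#define MSGSZ 128

struct msgbuf {
    long mtype;
    char mtext[MSGSZ];
};

int main(void)
{
    struct msgbuf buf;
    int msqid, i;

    if ((msqid = msgget((key_t) 10, IPC_CREAT | 0666)) < 0) {
        perror("msgget");
        exit(1);
    }

    /* queue three messages with priorities 1, 2 and 3 */
    for (i = 1; i <= 3; i++) {
        buf.mtype = i;
        sprintf(buf.mtext, "message with priority %d", i);
        if (msgsnd(msqid, &buf, strlen(buf.mtext) + 1, IPC_NOWAIT) < 0)
            perror("msgsnd");
    }

    /* msgtyp -2: each call returns the lowest type <= 2 still queued,
       so this loop prints priorities 1 and 2, and leaves 3 behind */
    while (msgrcv(msqid, &buf, MSGSZ, -2, IPC_NOWAIT | MSG_NOERROR) >= 0)
        printf("%s\n", buf.mtext);
    return 0;
}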
In future issues we will explore more complex applications of message queues.
I am a final year student doing my Btech in Computer Science
and Engineering at Government Engineering College Trichur, Kerala, India. For me
knowledge is a ceaseless quest for truth.
...making Linux just a little more fun! |
By Vinayak Hegde |
"Easter eggs" are small tricks or "hidden features" that are embedded in software by the developer. They get activated when a certain sequence of keys are pressed or some settings are changed. You may have heard of chip designers embedding graffiti and cartoons onto the chips. Software developers embed easter eggs into software so that users can have fun finding them and playing around with them. Also in most proprietary companies, the software is owned by the company with the software developer getting little or no credit. Many easter eggs contain a scrolling list of the developers who developed the software. Other easter eggs are embedded just for fun, such as a flight simulator in a popular spreadsheet software.
Most programmers find it a creative way of communicating with the users of their software. It can also be seen as a reward for ardent users, who obviously take pride in knowing the subtle nuances of the software. The joy comes from the sense of discovery (after you have found a hidden easter egg) and from making the program do what it wasn't intended to do. Another view is that easter eggs can be used (by small companies) as a marketing tool: a user discovers an easter egg and asks another to check out the program; that user downloads the software, finds it really useful for his daily work, and ends up buying it.
Some people are of the view that easter eggs owe their origins to backdoors in software and are harmful to the security of the program. This is also the view taken by most big corporations and software quality assurance departments, who believe that easter eggs waste memory and CPU time. Avid gamers might find the concept of easter eggs similar to cheat-codes in popular games; cheat-codes are such a rage that most popular games have backdoors to help the user cheat and get an unfair advantage. There are far fewer easter eggs in open source software than in closed source software. In the article that follows I present some easter eggs which can be found in open source software.
If Mozilla (or Galeon) is your browser, type about:mozilla into the location bar for a surprise. You might get a different effect if you try this out in another popular browser.
Mozilla is a strange name, you may wonder. Actually, Mozilla is a combination of two words: Mosaic and Godzilla. Back in the early days of the world wide web, NCSA's Mosaic was the dominant browser. It was at this time that Netscape Inc. came up with the Mozilla browser to compete with Mosaic - hence its developers dubbed it the "Mosaic killer". The above easter egg should work even with Galeon, since Mozilla and Galeon use a common rendering engine called Gecko.
If you are not reading this using Mozilla (or Galeon), select text from here... (Red Letter Edition) ...till here to see the hidden text.
Use the ddate command to get some weird information about the date in the Discordian calendar:
$ ddate 1 4 2003
Sweetmorn, Discord 18, 3169 YOLD

$ ddate 1 1 0000
Sweetmorn, Chaos 1, 1166 YOLD

$ ddate 13 2 2003
Prickle-Prickle, Chaos 44, 3169 YOLD

$ ddate 14 7 1980
Setting Orange, Confusion 49, 3146 YOLD

$ ddate 18 11 1969
Boomtime, The Aftermath 30, 3135 YOLD

You can have a lot of fun with this command. Also check out your birth date and what it says ;).
This is an easter egg I recently discovered in the popular editor VIM. Follow the steps and you are in for a surprise.
This easter egg is embedded in the spreadsheet software Calc (from the OpenOffice suite); in case you don't have it, you can download it from here. The easter egg is a beautiful flight simulator embedded in the software. To see it, follow the steps given below.
This easter egg is a cartoon animation in the latest version of the Anjuta IDE. To get the animation, do the following.
I am not responsible if, after trying out the easter eggs, your dog bites your mother-in-law or your sound card catches fire while you try out the key sequences. What is more likely is that you have sprained your hand trying key sequences while waiting for something to pop up on screen. By the way, these were not easter eggs planted by programmers :). The first two easter eggs were real, to con you into believing that all the others listed in this article existed. So how was it to be sent on a wild goose chase? I know you feel like a complete moron ;). Happy April Fools Day!!!
These are for real :) ...
Vinayak is currently pursuing the APGDST course
at NCST. His areas of interest are networking, parallel
computing systems and programming languages. He
believes that Linux will do to the software industry
what the invention of printing press did to the world
of science and literature. In his non-existent free
time he likes listening to music and reading books. He
is currently working on Project LIberatioN-UX where he
makes affordable computing on Linux accessible for
academia/corporates by configuring remote boot stations
(Thin Clients).
...making Linux just a little more fun! |
By Vinayak Hegde |
Two of the most critical parts of a kernel are the memory subsystem and the scheduler, because they influence the design and affect the performance of almost every other part of the kernel and the OS. That is also why you would want to get them absolutely right and optimize their performance. The Linux kernel is used on everything from small embedded devices up to large mainframes. Designing a scheduler is at best a black art: no matter how good the design is, some people will always feel that some categories of processes have got a raw deal.
In the article that follows, I have purposely tried to skip quoting any reference code, because one can easily get it from the net (see the references). The article also looks at the challenges developers faced when they redesigned the scheduler, how those challenges were met, and the future direction the scheduler is likely to take. Having said that, there's nothing like reading the source code to get an understanding of what's going on under the hood. You will find the implementation of the scheduler in kernel/sched.c if you have the kernel source installed.
The Linux scheduler strives to meet several objectives:
The scheduler should give a fair amount of CPU share to every process. Quite a fair amount of work has been done in the new kernel to ensure fair sharing of CPU time among processes.
The scheduler should try to maximize both throughput and CPU utilization. The usual method of increasing CPU utilization is by increasing the amount of multi-programming. But this is only beneficial up to a point after which it becomes counterproductive and thrashing starts.
The scheduler itself should run for as little time as possible; scheduler latency should be minimal. But this is the tricky part. Scheduling itself is, strictly speaking, not useful work, yet if the scheduling decision is good, the extra time spent making it may be worth the effort. How do we decide the optimal point? Most schedulers solve this problem by using heuristic policies.
Priority scheduling means that some processes get more preference over others. At the very least, the scheduler must differentiate between I/O-bound and CPU-bound processes. Moreover, some kind of aging must be implemented so that starvation of processes does not take place. Linux enforces priorities and differentiates between categories of processes: the kernel distinguishes batch-scheduled jobs from interactive ones, and they get a share of the CPU according to their priorities. You have probably used the nice command to change the priority of a process.
Turnaround time is the sum of the service time and the amount of time wasted waiting in the ready queue. The scheduler tries to reduce both.
The response from a program should be as fast as possible. Another important factor, often ignored, is the variance between response times: it is not acceptable if the average response time is low but some user occasionally experiences, say, a 10-second delay from an interactive program.
The scheduler also tries to meet some other goals, such as predictability: the behavior of the scheduler should be predictable for a given set of processes with assigned priorities. The scheduler's performance must also degrade gracefully under heavy loads. This is particularly important because Linux is very popular in the server market, and servers tend to get overloaded during peak traffic hours.
What's O(1), you may ask? Well, I am going to skim the surface of what the O (known as Big-Oh) notation means; you will find references to it in any good algorithms book. What the Big-Oh notation essentially does is estimate the running time of an algorithm independent of machine-related implementation issues. It places an upper bound on the running time of the algorithm - that is, on the algorithm's worst case. It is an excellent technique for comparing the efficiency of algorithms with respect to running time.
Take, for example, an algorithm with two nested loops, both of which range from 1 to n-1 (where n is the number of inputs for the algorithm): the upper bound on its running time is denoted O(n^2). Similarly, consider an algorithm that searches for an element in an unordered linked list. The worst case is that we have to traverse the list to the last element to get a match - or, worse still, to find out that the element is not in the list at all. Such an algorithm is said to have O(N) complexity, as its running time is directly proportional to N, the number of elements in the list.
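As a toy illustration (this is not kernel code), the linked-list search just described looks like this in C, and its worst case plainly grows linearly with N:

/* O(N): in the worst case every node is visited before giving up */
struct node {
    int value;
    struct node *next;
};

struct node *find(struct node *head, int wanted)
{
    struct node *p;

    for (p = head; p != NULL; p = p->next)
        if (p->value == wanted)
            return p;
    return NULL;   /* not in the list: N comparisons were made */
}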
The Linux scheduler takes constant time to schedule the processes in the ready queue; hence it is said to have O(1) complexity. No matter how many processes are active on the system, the scheduler always takes constant time to schedule them. All the "parts" of the current scheduler (2.5.49 is the reference here - see the resources for details) - wakeup, selection of the process to run next, context switching and timer interrupt overhead - have O(1) complexity. Hence the new scheduler is O(1) in its entirety.
As mentioned in the introduction, the Linux kernel runs on almost anything from wristwatches to supercomputers. The earlier schedulers had some scalability problems. Some of these were solved by modifying the kernel for the application and the target architecture; the core design of the scheduler, however, was not very scalable. The new scheduler is much more scalable and SMP (Symmetric Multi-Processing) aware, and its performance on SMP systems is much better. One of the goals stated by Ingo Molnar, who wrote the O(1) scheduler, is that under SMP, processors should not be idle when there is work to do. Care is also taken that processes do not hop between processors from time to time, to avoid the overhead of refilling the cache with the required data after every move.
This is not exactly a new feature; there are patches that can be applied to support batch scheduling, and earlier kernels also had some support for it. As of now, batch scheduling of tasks is done using priority levels. The Linux kernel uses about 40 nice levels (though they all map to about 5 levels of priority). Batch-scheduled processes generally get the CPU when there are not many interactive and CPU-bound processes around, which have more priority. Batch-scheduled processes get bigger time-slices than normal processes, which also minimizes the effect of swapping data in and out of the cache frequently, thus improving the performance of batch jobs.
One of the major improvements in the new scheduler is the detection and boosting of interactive tasks. Tweaking the old scheduler code was a bit cumbersome; in the new scheduler, detection of interactive jobs is decoupled from other duties such as time-slice management. Interactive jobs are detected with the help of usage-related statistics. This means interactive tasks get good response times under heavy loads, and CPU-hogging processes cannot monopolize CPU time. The new scheduler actively detects interactive tasks and gives them precedence over other tasks; even then, an interactive task is scheduled alongside other interactive tasks using round-robin scheduling. This makes a lot of sense for desktop users, who will not see response times increase when they start a CPU-intensive job such as encoding songs into the ogg format. There are plans to merge the O(1) and pre-emption code to give even better response times for interactive tasks.
Because of the redesign, the new scheduler scales more easily to other architectures such as NUMA (Non-Uniform Memory Access) and SMT (Symmetric Multi-Threading, better known as HyperThreading). NUMA is used on some high-end servers and supercomputers. One reason for the better scaling is that every CPU now has its own run queue; only the load-balancing sub-part has a "global" view of the system, so changes for a particular architecture are made mainly in that sub-part. A patch for NUMA was released recently, and has in fact been incorporated into Linux 2.5.59. SMT processors implement two (or more) virtual CPUs on the same physical processor: one "logical" processor can be running while the other waits for memory access. SMT can certainly be seen as a sort of NUMA system, since the sibling processors share a cache and thus have faster access to memory that either one has accessed recently. A patch for SMT was also released recently, though the new O(1) scheduler handles SMT pretty well even without changes. And although the NUMA architecture bears some resemblance to SMT, the Linux scheduler handles the two differently.
The scheduler gives more priority to fork()ed children than to the parent itself. This can be useful for servers which fork to service requests, and also for GUI applications. Some real-time (priority-based) scheduling is available in the kernel as well.
Vinayak is currently pursuing the APGDST course
at NCST. His areas of interest are networking, parallel
computing systems and programming languages. He
believes that Linux will do to the software industry
what the invention of printing press did to the world
of science and literature. In his non-existent free
time he likes listening to music and reading books. He
is currently working on Project LIberatioN-UX where he
makes affordable computing on Linux accessible for
academia/corporates by configuring remote boot stations
(Thin Clients).
...making Linux just a little more fun! |
By Alan Ward |
Anybody who has had to install a park of 10-100 workstations with exactly the same operating system and programs will have wondered if there is a neater - and faster - way of doing it than carrying the CDs around from box to box. Cloning consists of installing - once - a model workstation setup, and then copying it to all the others.
The purpose of this text is to explore several of the many ways of cloning a workstation hard disk configuration. In the cloning process, we will use the native possibilities of Linux to produce more or less the same effect as the well-known Norton Ghost of the Windows world.
Though we will be booting the workstations under Linux, the final operating system they will be running may or may not be Linux. Actually, I use this system for a park of Windows ME workstations that get to be reformatted at least once a year - for evident reasons.
Hard disk switching
The oldest way of cloning a hard disk requires two workstations (A is the model, B is the clone), and another computer C. Only C needs to run Linux.
1. We take the hard drives out of the two workstations, and add them to C. Take care to leave C's original hard disk in the first IDE position. For example:
IDE bus 0, master => C's hard disk => /dev/hda
IDE bus 0, slave  => A's hard disk => /dev/hdb
IDE bus 1, master => B's hard disk => /dev/hdc
2. We then copy the contents of /dev/hdb to /dev/hdc. If both disks are the exact same model, we can get by with a plain byte-by-byte copy:
dd if=/dev/hdb of=/dev/hdc
or even:
cp /dev/hdb /dev/hdc
These are the easiest ways of doing the copy; however, you should be aware of the following points:
This way can be the best for people using bootloaders such as lilo or grub, as the bootsector is copied along with the rest.
The second, slightly more complicated way of copying A to B consists of two steps:
In this case, copying means mounting:
mkdir /mount/A ; mkdir /mount/B
mount /dev/hdb /mount/A
mount /dev/hdc /mount/B
cp -dpR /mount/A/* /mount/B
umount /dev/hdb ; umount /dev/hdc
This can be a bit of a pain if there are a lot of workstations to clone, but takes less time than a complete install ... and you are sure they have the same configuration.
Important: if you are using a bootloader such as lilo or grub to boot a Linux workstation, you then need to write a personalized bootloader configuration file and install it on disk B's boot sector.
Basically, you need to tell the bootloader:
Be careful: you may end up having to use your rescue disks if you do this wrong! Been there, done that. You've been warned. Before starting, take a close look at your current /etc/lilo.conf or /boot/grub/menu.lst files, and at their man pages.
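As a sketch only - the device names are illustrative, and this assumes B's disk sits on C as /dev/hdc but will boot in B as the first disk, /dev/hda - a lilo.conf for disk B might take this general shape:

# /mount/B/etc/lilo.conf (illustrative sketch)
boot=/dev/hdc         # where the boot sector gets written *now*, on machine C
disk=/dev/hdc
    bios=0x80         # tell lilo this disk will be the first BIOS disk on B
image=/boot/vmlinuz   # kernel path as seen on B's root filesystem
    label=linux
    root=/dev/hda1    # the root partition as it will be named once back in B
    read-only

You would then install it with something like lilo -r /mount/B, so that lilo reads the configuration relative to B's mounted filesystem.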
Alternatively, if you are just booting Linux you can:
This second way can be much easier for people with less flying time on Linux systems. :-)
Another version of the same setup, if C's disk is large enough, is to copy once from A to C, and then copy many times from C to B1, B2, B3... etc., as sketched below. If your IDE setup has enough busses (or you are using SCSI) you can copy 5 or more disks at a time.
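For example, assuming C has a directory /images with enough free space (the paths here are my own), the round trip could look like this:

# copy once: A's disk (here /dev/hdb) into an image file on C
dd if=/dev/hdb of=/images/model.img

# copy many times: one clone disk after another (or several in parallel)
dd if=/images/model.img of=/dev/hdc
dd if=/images/model.img of=/dev/hdd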
Needless to say, we use this only if we have no networking set up - a rather uncommon situation these days. However, speed can be rather high, as we are working directly at IDE-interface speeds.
Copying over a network
Copying over a network consists of booting workstation B with a diskette or CD into an operating system that can drive the network (let's see now... here Linux is in, Windows is out) and getting the hard disk image either directly from station A or, more commonly, from a file server C. In our examples, I will use workstation B as the computer to be configured and suppose we have the image files from workstation A copied to a directory on server C.
There are several "tiny" Linux-on-a-diskette distributions available out there. MicroLinux (muLinux) is my favourite, but they all work in similar ways.
The idea is to boot from the diskette, and set up networking.
You can then either:
An example of the first way, over NFS:
mkdir /mount/C
mount server:/exported.directory /mount/C
dd if=/mount/C/my.image of=/dev/hda
umount /mount/C
An example of the second (supposing you already have set up and formatted the partitions on local hard disk /dev/hda):
mkdir /mount/B ; mkdir /mount/C
mount /dev/hda /mount/B
mount server:/exported.directory /mount/C
cp -dpR /mount/C/* /mount/B
umount /mount/C
umount /dev/hda
In the second case, if you use a bootloader, remember to install it either immediately after copying the files, or after rebooting workstation B from a rescue diskette.
The nice thing about Linux is that in essence, copying an image or separate files from a network is exactly the same as from another hard disk on your computer.
NFS is naturally not the only way of downloading the file or files from server C. There are actually as many suitable protocols as you have available clients on your bootable diskette. I would suggest you use whatever server you already have installed on your network. Some choices:
NFS (Network File System) - This is the native way Un*x systems use to share files; robust and easy to set up. My favourite.

HTTP (as in Web server) - Easy to set up on the server side, but it can be difficult to find a suitable client. Used mainly with automated install scripts. You may already have one of these running.

FTP - Less easy on the server side, but very easy to find clients. You may already have one of these running.

TFTP (trivial FTP) - Very easy to set up on the server, very easy to use the client. Many routers (e.g. Cisco) use tftp to store their configuration files.

SMB (or Netbios) - Yes, this works. Your server can run either Linux + Samba or any version of WinXX. The client Linux system on workstation B can mount volumes using smbmount. Why you would ever want to is your business, though.

rcp or scp - (scp is preferable for security)

rsync - Another favourite of mine. Used normally to synchronize a back-up file or web server to the main server. This can be a bit of a security hole if server C is accessible from outside your network, so take care to block this on your firewall. Performs compression.
There is a recent on-a-CD distribution called Knoppix that boots you directly into a KDE desktop. From here, you can use all your regular graphics-based file tools if you are so inclined.
Booting from the network
A final twist is to boot workstation B directly from the network, without using a boot diskette. The idea is to have the BIOS load a minimal network driver from an EPROM. Control is then passed to this driver, which goes out on the network looking for a DHCP server from which it can get an IP address and a kernel image. It then boots the kernel, which in turn gets its root filesystem from an NFS server.
By this time, workstation B is up and running with a Linux system. You can then format its local hard disk and copy files from the server.
Needless to say, this is rather more complicated to set up than a diskette or CD Linux system. However, the process can be completely automated, and it suits large networks with many workstations that must be reconfigured often.
Another twist of the same idea is to dispense completely with the local hard disks on workstations B1, B2, B3... and have them boot each time from the network. Users' files are stored on the central NFS file server.
Further reading
Another program used by many scientific cluster administrators is dolly. I've heard a lot of good things about it, but have not tried it out yet.
On booting from a network, look up etherboot or, if your hardware supports it, PXE.
PS. Should anybody want to translate this article: I wrote it in the spirit of the GPL software licence, i.e. you are free (and indeed encouraged) to copy, post and translate it -- but please, PLEASE, send me notice by email! I like to keep track of translations -- it's good for the curriculum :-)
Alan teaches CS in Andorra at high-school and university levels. His hobbies
include science photography (both digital and traditional), trekking, rock and
processor collecting.