Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Ben Okopnik, Dan Wilder, Don Marti
...making Linux just a little more fun!
From The Readers of Linux Gazette
Is it possible? Another forum told me that it was! They told me it was, yet it will not work. Whenever a sync is attempted the PDA tells me that a connection "could not be established". What could I download in its stead that I don't have to fiddle-faddle with? Perhaps it isn't reading the USB correctly. If I open the hardware browser and click 'USB Devices' it gives me a manufacturer and a driver (usb-uhci) but no device. Any help you can give me would be very much appreciated. Once I learn more, how would I go about joining your team? I want to help others later who are in the same predicament that I am in now.
Cool. Anybody who wants to be helpful is welcome to join; see http://www.linuxgazette.com/tag/members-faq.html. I'll add that a cheery sense of humor is a plus. -- Heather
One last thing, do I have to uninstall anything like in Windows? If I remember correctly the answer is 'no' but I best make sure before I erase any programs.
I'm running RedHat 7.3 with a KDE desktop (did I phrase that correctly?).
~Mike~ (-:
Yes, you did. It might be handy to know a kernel version, but we can guess you have the stock one that came with Red Hat 7.3. -- Heather
Oops, one more question. When do you release the monthly editions of your web magazine? If you already covered these issues in previous editions just refering me to the edition's URL would work.
Linux Gazette is published on the first of the month at midnight (UTC-0800). Sometimes it's a few hours late (as one smart alec in Australia noticed at 12:15am on the millennium New Year in 2000), but that's the goal. -- Mike
Hi,
My computer runs Red Hat 7.1. It has two Xeon processors inside (8*512GHz). I am using GNOME as my window manager.
It freezes randomly (like once a week, or twice or thrice) and I can not do anything other than use the power switch to reboot it; the mouse pointer moves, but nothing responds on the screen. I ran a hardware diagnostic test from a CD, and it says there is no error.
I thought it might be a temperature problem, but in the UK the temperature is not so hot and there are six fans inside the case. I put in an extra fan as well; it does not help.
I have used the CPU, memory and temperature monitor in Linux to watch the temperature changes, but it reveals normal temperatures. I have not got any clue why. I consulted somebody, but most of the people are unaware of the OS and its problems.
So it would be helpful if you could give me suggestions and ideas.
regards
rajaraman
I have a Linux server, and for various reasons I have processes telnetting in. I need to identify the IP address of the client from within a C program running in the telnet session
- so I can tell the client his IP address from the application
- so I can limit what that node can do.
Any thoughts? Thanks in advance
Hi,
My manager wants me to setup the network so that based on userid and IP address (more so userid) you can print anywhere in the building, or just to the printer in the room. I am doing this at a school. Any ideas as to how that can be accomplished?
TIA, -Tony
[David Mandala] Really need more information in order to answer your question. What types of computers are on the network, what types of print servers, etc.
Cheers
The network consists of a server (RH 7.3) with about 50 ThinkNICs (diskless workstations) booting via PXE into Linux. The printers consist of HP DeskJets in the classroom hooked to JetDirect boxes, a LJ 4100 DTN with JD built in, and a Xerox Document Centre 425.
Does anyone know if it is possible to compile against a specific glibc version?
To be clearer, I have glibc-2.2.93 installed, which contains versions up to and including 2.3.
What I am trying to do is set up a build system for producing RPMs that will work on RH 7.3 setups (which is 2.2.5).
Hi Linux gang, I am a fairly recent convert to Linux; I am currently running a Win98 (boo hiss) and Red Hat 7.2 dual boot system.
I wonder if you could help me? After delving through your back issues I came to number 75 and part one of a very interesting article about Alan Turing. What happened to part 2? Regards and thank you for the magazine. Shane Doveton. (Scarborough, England).
The author, G James Jones, has health problems and was unable to complete the series. However, the good news is his health is now better and he's started working again on the second part. I for one really appreciate his articles because they are so readable and make the history come alive, and readers have sent in a significant amount of positive feedback too.
If anybody else would like to write some articles about the giants in computer science history, we'd be interested in publishing them. -- Mike
I really enjoy finding new ways to code something with examples that actually work!
This notion came to me after I found the article on "Adding Plugin Capabilities To Your Code" by Tom Bradley. Except for an implicit cast and some missing header file includes, the code worked like a charm.
I usually find it difficult to find code that does what it says it does and is written in a simple and understandable fashion. I have been impressed. I expect (read: hope) to see more of this in the rest of your issues!
Thanks.
Hi Heather,
The use of daemon/demon in operating systems goes back to the early 1960s. I did some further checking on the web and found that it was used by the team at Project MAC around 1963 (see http://ei.cs.vt.edu/~history/Daemon.html). On that web page Fernando Corbato attributes the inspiration to Maxwell's daemon. He says "Maxwell's daemon was an imaginary agent which helped sort molecules of different speeds and worked tirelessly in the background. We fancifully began to use the word daemon to describe background processes which worked tirelessly to perform system chores." There is also a notion of "demon" in Artificial Intelligence; that was where I heard about the etymology, from Selfridge's paper from 1958. I thought that Selfridge's work inspired their use in operating systems (since his paper was so early), but I should have done some more checking. In any case the concept of "daemon" in operating systems predates BSD by some time.
Bob
Thanks for the extra effort to chase that down. It's cool to learn about these things! Forwarding to the Answer Gang so they get to see it, and so I can get it added into The Mailbag for this month.
Have a great day
Hi folks! This letter has some feature requests, some tips and lots of virtual beer.
Heather & Mike
LG #84 was great, awesome, cool! Keep up the good work.
Heather
Your list of Do's and Don'ts was really in the spirit of Linux. Enjoyed it, and have copied it.
Ashwin
Thanks for the tip on using Konqueror for reading info pages.
Ben
Thanks for the tips on whatis and whereis. It seems you have something against info; I find it (info) good.
Michael Conry
Your News Byte "Venezuela and Other Government News" in LG#83 helped me a lot in writing a paper on using Free Software in egovernance in India. Your selection of sites for News Byte is always wonderful.
And now a "Feature Request" I use a cyber cafe to download TWDT(HTML) for LG. Earlier you included author bio with the article itself.
Can it be possible to append the author bio to the TWDT file. Or maybe make a TWDT for the author bio itself for each issue. I really enjoyed reading the bios .
I have sent my tip to TAG
May the great gnu have mercy on your soul!
Raj Shekhar
We've shared the kudos around to everybody, and I restocked the TAG fridge with your v-beer. Glad you're enjoying the 'zine. -- Heather
(regarding bios in TWDT) We'll think about this. One of the purposes of the Author pages is to have the latest contact information and bio; the articles and TWDT would not be changed after publication.
Perhaps I can put the entire bio page (minus the links to previous articles, and minus the large type in the header) at the bottom of the TWDT article, with a note that this information may be old and another link to the Author page. -- Mike
An email thread occurred which was not linux, but about rescuing documents in some oddball word processing format. A few of the Gang gave it a shot. -- Heather
To all who replied, "THANK YOU!"
With the information you provided, I was able to find a local professional who had administered Xenix systems in years past and was able to use "strings" to recover the data. I still do not understand exactly what he did, but I am elated and very grateful to your group for your assistance. If this is the kind of help I can get for Linux, maybe it's time to learn it and switch.
[Jay R. Ashworth] Probably.
Outstanding; glad to hear you got your data back. Now you understand why Unix people (and especially Linux people) are fond of textual configuration and data files whenever possible...
What he did was to use the Unix strings(1) program, which sifts through a [random] file looking for strings of characters that appear to be ASCII text, extracting them from the surrounding (binary) data, and printing them on its output. Once you do that, it's usually just a cleanup pass.
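For anyone facing the same thing, a minimal sketch of the sort of command involved (the filenames here are hypothetical):

strings -n 6 mystery-document.wp > recovered.txt    # keep runs of 6 or more printable characters

You would then edit recovered.txt by hand to strip whatever junk remains.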
[Thomas Adam] You're welcome!!!
I'm glad that people such as Jay, and myself, were of some use. Makes a change actually!!
He he....
Once upon an email, a good question came in. Too bad it had one of those automatic confidentiality notes attached. Darn. The Answer Gang (I don't recall who at the moment) sent the fellow a little note, suggesting that we can help him if he attaches counter-disclaimers, or gives us permission. We could make him anonymous, of course.
He replied with a short, brusque note saying he found the answer elsewhere. Whose exact text, of course, we can't repeat -- Heather
[Rick Moen] Don't worry, we know what you really meant by that rather graceless, if not arrogant, comment: You meant "Er, sorry about failing to compensate for a dumb disclaimer that defeats the purpose of your group entirely, and if deliberate would have suggested that I don't value what you do. I'll make sure I don't do it in the future."
We understand that sometimes you just don't say what you mean, and we hear the intended message, anyway.
[Robos] Hi Rick. I normally don't post on /. but I read this there quite often and somehow this also applies in your case:
How about in school, teaching the kids to have some manners and we all might get along more nicely...
[Richard Meyer] Hi Heather,
Just a minor correction on the advice you gave the laddie asking about Net2Phone. The .za is South Africa's TLD. In case you're interested (and I admit that you may not be), in the 19th century the Afrikaners used to call South Africa, Zuid Afrika in the Dutch-descended Afrikaans. So that's where SA becomes ZA, leaving SA for Saudi Arabia? (I think).
Funny, I thought we did publish a correction about that in the same Mailbag item. It must have been a letter that came in after publication. -- Mike
Keep up the good work with the Gazette.
Thanks . Mike's right, of course: -- Heather
[Chris Duncombe Rae] First off, ZA is South Africa's country code; Zambia is ZM.
...but the corrector had more important news than that I forgot to look up the ISO codes before going to press. -- Heather
[Chris] The http://www.linux.org.za/LDP URL leads nowhere. Hunting and pecking around from http://www.linux.org.za leads to some HOWTOs and more dead links. Speaking as one who also suffers bandwidth limitations I'd prefer to be pointed directly at the Linux Documentation Project than have to scratch around a supposedly closer site fruitlessly.
Second, I've had a look at your mirror sites in South Africa and a lot of them are very stale.
Of the ones he tried, two led to mirrors that are more than 2 years stale, one may be alive but having connection problems, and others were dead. -- Heather
[Chris] Time to update your mirror site list? Or maybe everyone turned off their sites as well as their mirrors while you were upgrading yours?
I wrote to www.linux.org.za to see if they plan to reinstate their mirror. For the others, I'll check again in a couple weeks and if they're still down I'll delete the listings.
We don't get feedback when mirrors go down unless somebody tells us, and we don't have the time to check 210 mirrors manually. I have looked into writing an automatic mirror checker or finding one off the shelf, but haven't found anything satisfactory yet: nothing that can deal with timeout errors on 200 sites, do retries, and report problems back to a program in a way it can take action. -- Mike
Folks, if you are running one of our listed mirrors and decide you can't handle the bandwidth anymore, take it private, or otherwise aren't going to mirror visibly... Please, take a spare moment, and let us know that you're leaving the mirror system; we'll be glad to take all the extra visitors back off your shoulders. Our blessings to you for what you could provide aren't any less when you can't any longer.
Also, new mirrors are always welcome -- Heather
...making Linux just a little more fun!
By The Readers of Linux Gazette
Is there a way to share the users in a Linux mail server for Outlook clients? We will connect our Outlook clients via POP3/SMTP to the Linux email server, but wonder how to share the global address list (like Exchange)...
What you need to do is set up a shared address book using the OpenLDAP server, an open-source facility for serving up Lightweight Directory Access Protocol information to networks, that is routinely included in Linux distributions. This needs to be done with some care on the OpenLDAP end of things, because Micros*ft Outlook is unusually picky about the LDAP schema. One hands-on guide to configuring the schema is here:
http://www.dclug.org.uk/archive-Nov00-May01/msg00253.html
You can find one general guide to setting up LDAP (server end) software, in the form of a set of lecture notes I wrote about LDAP, a year or so ago:
http://linuxmafia.com/~rick/lecture-notes/ldap
An example of how to set up the client (MS-Outlook) end of the problem (at a university site) is here:
http://www.cae.wisc.edu/fsg/info/mail/ldap_outlook.html
Note that appending to the address book from MS-Outlook is not supported (or desirable, actually).
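Once the server is up, it's worth sanity-checking the directory from the Linux side before pointing Outlook at it. A hedged sketch, assuming an anonymous-readable directory with the made-up suffix dc=example,dc=com:

ldapsearch -x -h ldap.example.com -b "dc=example,dc=com" "(objectClass=inetOrgPerson)" cn mail

If that returns the entries you expect, the Outlook side becomes mostly a matter of entering the same host, base DN and attribute names in its LDAP account settings.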
Good luck with the project. Expect it to take a while, to work out all the details.
We now keep an MD5 sum of the body of every message submitted to The Answer Gang. If another identical message body shows up, it gets sidelined.
As usual, this is run over procmail, with two stanzas in the list's procmailrc that look like:
See attached dupekiller.procmail.txt
The first stanza says "filter this message through a program".
The second says "sideline if you see an X-Duplicate header in the result".
The duplicate elimination script being used on this list has been upgraded to use Python's library md5 routines rather than an external pipeline, and to employ locking on the db.
By popular request, we're now filtering other lists here with this, and one local user who often receives duplicate emails that are not always spams has asked for the script, too.
The upgraded script, which the procmail recipe calls upon:
See attached dup.py.txt
I have a Linux server that functions as a mail relay in my system. All I want to do is to change its IP address. How should I do it? Which files should I change, and how?
I would be very thankful for some help.
eyal
This depends quite a bit on the precise distribution of Linux you have installed. Is it RedHat, Debian, SuSE, Mandrake,...
It also depends on how your network is configured. By static addresses entered in some file under /etc or via DHCP.
At the very least you should do:
grep -ril "your_current_ip_address_here" /etc
to find out which files refer to your IP address.
In addition if you use SSL and/or SSH you must go through the configuration of these services and check that the new IP address is reflected.
Having gone through this procedure more than once, I must warn you that if you have a free machine that can take the place of your mail server, then the easiest solution is to set up that machine as the new mail server and switch off the old machine.
Regards,
Kapil.
You might also want to check that reverse-resolution of DNS is updated to reflect that your new host is attached to this IP address; it's normally handled by the ISP who owns the IP block, so it's not stored locally unless you have made special arrangements, and even if you have, best to make sure they went through safely for both the old and new address. -- Heather
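A quick sketch of how you might check that, once the change is live (the address and hostname below are placeholders):

dig -x 192.0.2.10 +short         # reverse lookup for the new address
host mail.example.com            # forward lookup should give the same address back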
http://www.linuxgazette.com/issue35/tag/magickeys.html
James,
Further to your technical article quoted above;
You explain that I can use the /other/ alt key for ttys 13-24, but in my case, I only want to use both alt keys to switch between the same 12 ttys. Is it possible to configure this? Would making tty24 a symbolic link to tty12 accomplish it? I realise it's been over 4 years since you wrote the original article, but if you can still help, I would greatly appreciate it.
Yours, Gavin McDonald.
You DON'T want to try symlinking those device files around.
Just use the 'loadkeys' utility to change your Linux console's keymaps around to suit your tastes. You can start by reading the following man pages: loadkeys(1), keymaps(5), dumpkeys(1), and possibly showkey(1).
Then use 'dumpkeys' to dump a set of all the current key bindings. Edit that (delete all the stuff you don't want to change) and look for the section that looks something like this:
See attached jimd.console-keymap-fragment-1.txt
... and another section like:
See attached jimd.console-keymap-fragment-2.txt
Now simply change those to read:
See attached jimd.console-keymap-fragment-otheralt.txt
Notice that all I'm doing is changing the Console_13 to Console_1 etc. (at the end of each line that begins with the word keycode).
Then simply pass that through the loadkeys command. In fact you could take that last excerpt (as shown between the " and " quotes above), save it to a file --- /usr/local/etc/mykeymap.def for example --- and add a line to your rc.local file to perform a simple:
loadkeys < /usr/local/etc/mykeymap.def
... command.
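As a concrete (but hedged) sketch of the kind of lines that end up in such a file --- the keycode numbers below are the usual PC values for F1-F3, but confirm yours with dumpkeys or showkey first:

# make AltGr+F1..F3 select consoles 1..3 instead of 13..15
altgr keycode 59 = Console_1
altgr keycode 60 = Console_2
altgr keycode 61 = Console_3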
Please excuse me for asking questions without your permission.
Now my question is...
This group (The Answer Gang) is willing to answer questions related to the Linux operating system, so if you ask a question according to this little guide on what to ask and how to ask it:
http://www.linuxgazette.com/tag/ask-the-gang.html
you won't have to apologise for asking.
"can we delete a file of a particular version ?" if so how , if not what is the alternate for that
Now this question is somewhat... broad. Yes, certainly Linux has version management systems; my preferred one is CVS. But unless you tell us which one you use (if you use one at all), we will have trouble guessing what might be appropriate in your case.
file name is test
test 1.1---1.2--1.3----1.4---1.5
I want to delete version 1.3. What is the command for that, and what happens to 1.4?
For cvs this would be the command "admin" with flag "-o" for outdate.
khh > cvs -H admin
Usage: cvs admin [options] files...
[.......]
        -o range        Delete (outdate) specified range of revisions:
           rev1::rev2   Between rev1 and rev2, excluding rev1 and rev2.
           rev::        After rev on the same branch.
           ::rev        Before rev on the same branch.
           rev          Just rev.
           rev1:rev2    Between rev1 and rev2, including rev1 and rev2.
           rev:         rev and following revisions on the same branch.
           :rev         rev and previous revisions on the same branch.
Information on a particular version can be had from cvs status or cvs log on the file, with an additional "-r revnumber" if you really are interested only in that particular version.
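So, assuming the file really is named test and lives in a checked-out CVS working copy, the concrete commands would look something like this (best tried on a scratch copy of the repository first):

cvs admin -o 1.3 test    # delete (outdate) revision 1.3 only
cvs log test             # confirm it is gone; 1.4 and later keep their numbers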
Hi, love your mag, and you're doing a great job here.
[Thomas] I know I love the magazine too
My MDK 8.1, kernel 2.4.8.26-mdk system stops at
running DevFs daemon
invalid operand: 0000
CPU: 0
EIP: .........
EFLAGS: .........
eax: .........
esi: .........
ds: .........
Process devfs (pid 123)
Stack: .........
Call Trace: .....
Code: (lots of letters and numbers)
Is this a hardware problem ?
[Thomas] Oh, it most certainly would suggest a hardware problem. As I am sure you are aware the "dev fs" sets up those hardware devices contained within "/dev" such as soundcard, etc.
I have no problems in SuSE or Win (SuSE and Win on hda, MDK and some vfat partitions on hdb), and I can mount MDK's partitions (in rescue) OK.
I've had problems when booting with devfs twice. The first time (some weeks ago) I put it back to the old dev system; 10 to 15 boots back, I put it back to the devfs system.
[Thomas] I'm not certain but is the new way ("devfs") actually a kernel module rather than it being "built-in" to the kernel???
I tried rescue mode to rebuild devfs but, not knowing/finding any commands (no man pages), I got nowhere. reiserfsck and e2fsck found no problems; I commented out pts from fstab but it made no difference. I tried booting with devfs=nomount but LILO would not recognize it; it's not in lilo.conf, I guess.
[Thomas] hmmm...the script "/dev/MAKEDEV" does some things, but not what you're trying to do.
I had no luck with your DB or google.
Neither did I
Sorry for being slow getting back to you, only got it going late last night and read your email (and 450 others).
[Thomas] Oh, that's ok. You actually read 450 consecutive e-mails? Gosh -- hope you haven't got eye-strain
I changed the "devfs=mount" to "devfs=nomount" in lilo.conf but it made no difference,
[Thomas] Hmm, that would suggest that your filesystem type for the particular partition is abnormal in someway.
then out of desperation I tried reiserfsck again on /, but this time I did reiserfsck --rebuild-tree and it fixed it; dmesg says "Mounted devfs on /dev".
[Thomas] Ah.... that's interesting, and something that Mandrake should have tested and/or covered in both the kernel and their documentation. I'm sure there are others like you running MDK 8.1 with the same problem.
I'll see if devfs and reiserfs have an update for MDK 8.1.
[Thomas] Unlikely -- you'll probably have to re-compile your kernel as a result. But it's not as hard as you might think....honest. Last I heard Eric Raymond was working on a graphical "maze" frontend for compilation!!! So much for the tcl/tk interface
[ashwin] Linus rejected that for kernel 2.5. Instead, a Qt interface was chosen, so that's what will be in 2.6 (or it may even be called 3.0).
Thanks Thomas for your reply.
[Thomas] As I said -- it's what we're here for Anytime. If you have any other problems, let us know!
Gentle readers, it's also worth mentioning that journaled fs' will still be fsck'd when the volumes reach their maximum mount count. Journals make them robust, so a crash (which marks normal filesystems "dirty", forcing fsck) simply results in a journal replay. So now we know one thing that can happen if the journal itself gets an ouchie. -- Heather
My name is Deviyanti, and I want to ask a question. I have FoxPro 2.6 for DOS running on Windows NT. Now I want to migrate from Windows NT to Red Hat Linux 7.2. The question is: will my application in FoxPro 2.6 run in Linux? If it can, what additional software should I install first, before I move my application in FoxPro 2.6 to Linux?
Something called "Recital Linux Developer" runs FoxPro 2.6 applications unchanged on Linux:
http://www.recital.com/solutions_foxpro.htm
Additionally, this question did sort of come up once before, a few years back, when Answer Gang founder Jim Dennis was The Answer Guy, all by his lonesome:
http://www.linuxgazette.com/issue30/tag_database.html
Some of that will no doubt still be relevant.
Hi, I could sure use some help with this problem. I've followed the "Linux from Scratch" guides to building a Linux system. Their instructions and guides were very good, and everything seems to have compiled correctly. Also, I have posted this question on their support mail list, and received several suggestions, but none helped. When I boot into the new Linux system, the process hangs and the last three lines displayed are:
Freeing unused kernel memory: 140k freed
Warning: unable to open an initial console
Kernel panic: Attempted to kill init
Entering "lfs root=/dev/hda9 init=/bin/sh" at the LILO prompt still hangs.
I'm pretty sure (since I had the same thing when I first switched from a 2.2.x kernel to 2.4.x) that the console driver is not in the kernel. My config seems to have that as "y", not as a module.
See attached k-h.kernel-dot-config-fragment.txt
I'm not using devfs.
The inittab file appears correct, and was reviewed by the LFS folks.
The fstab file appears correct, and was reviewed by the LFS folks.
The configuration (.config) for the Kernel build appears to be correct. It was reviewed by the LFS folks and I compared it to the distribution that loads.
Maybe or maybe not -- make sure the above mentioned character devices are there.
The new Linux system is on its own partition and the root and boot are on the same partition.
My original Linux distribution, which is on its own partition, still boots and can mount the partition with the new Linux system.
Any suggestion as to what else I can check or change would really help.
Thanks
Lawrence
I have a Red Hat Linux 8.0 machine with kernel 2.4.18-14. One of the network cards (eth0, e.g. 192.168.10.1) is connected to my private network (consisting of an FTP server and 2 PCs). Another network card (eth1, e.g. 201.1.1.*) is connected to the Internet. How do I make my FTP server accessible from other PCs on the Internet, and make PCs in my private network able to access the Internet?
Thanks
Chris Hong
Well, I haven't played with Red Hat 8.0 yet. However, the key to your question lies in two steps. First you have to enable the kernel's packet forwarding feature. Manually this can be done via a command like:
echo 1 > /proc/sys/net/ipv4/ip_forward
However, that would not persist beyond a reboot. Under Red Hat there is an /etc/sysctl.conf file which needs to have an entry like:
net.ipv4.ip_forward = 1
This allows the kernel to route packets (from your internal network to the outside world).
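If you want the sysctl.conf entry to take effect without waiting for a reboot, you can (as root) reload it; a small sketch:

sysctl -p                             # re-read /etc/sysctl.conf
cat /proc/sys/net/ipv4/ip_forward     # should now print 1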
However, that obviously won't do much good by itself. Packets from your network that "leaked" out to the Internet would be useless since no responses could get back to your RFC1918 non-routable addresses (192.168.*.*, 10.*.*.*, and 172.16.*.* through 172.31.*.*).
So, the other requisite step is to enable IP masquerading. Over the years the Linux IP packet filtering features have changed radically with each major kernel release. So old versions of Linux used the 'ipfw', then the 'ipfwadm', and then the 'ipchains' commands to manage the kernel's packet filtering tables and configure its behavior. Red Hat version 8.0 uses a 2.4 kernel with the netfilter subsystem and the 'iptables' command to manage it.
modprobe iptable_nat
# In the NAT table (-t nat), Append a rule (-A) after routing
# (POSTROUTING) for all packets going out eth1 (-o eth1) which says to
# MASQUERADE the connection (-j MASQUERADE).
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
- Example slightly modified from http://www.netfilter.org/documentation/HOWTO//NAT-HOWTO-4.html#ss4.1
You may have to hunt around in the Red Hat /etc/ directory tree to figure out the best place to put this command. I think they have an /etc/rc.d/init.d/iptables script which you can enable with their 'chkconfig' command. If you read that, I think you'll find some file like /etc/sysconfig/network-scripts/iptables.dat or something like that. If I recall correctly from Red Hat 7.x, you could put just the arguments for this iptables command (from the -t to the end of the line) into that file.
The reason I'm tossing in so many qualifiers in this last paragraph is because I mostly use Debian and haven't actually installed or managed a Red Hat 8.0 system, yet. In addition some of the details change with every major release. The differences are minor --- easy to adapt to if you can read simple shell scripts.
There is probably also a way to do all of this using some GUI tool. However, I still avoid graphical system administration tools. I'm firmly of the opinion that the most important systems administration tool is your favorite text editor!
What is the best book to learn RH's 8.0? Or will the books I have on learning 5 or 6 and maybe 7 be good enough to learn the basics, or anything except the fine points?
Stuff about the bash shell will be pretty much the same.
Learning how to use a text editor will be pretty much the same.
Chances are that in a modern one the screen may look a little different but it will likely be a little easier to read.
Anything showing screen shots walking you through the install will show pictures only good for that exact version. You can read the chapter anyway, as the basic steps of partitioning and answering network questions will still be asked, but the screens will look different.
Pretty much, you can follow along in an older book, and look at man pages or --help output from a program to catch up on some things that may be new. If you also connect to the internet and surf to the home pages of some software you are trying to learn, there may be discussion forums and more things to read there.
And of course there's the Linux Documentation Project (www.tldp.org)
Many of these things will be equally valid for red hat, or for other linux distributions.
I tried to use the e-mail program that came with it and I set it up wrong somehow, so that I couldn't send e-mails. I was able to use Mozzarella or Netscape's e-mail program.
You have to connect to an internet provider before you can read emails. Your system usually has to have an SMTP program (sendmail, or one of its competitors) in order to send emails.
Mozarella, yum. You probably meant "mozilla" - the browser's firebreathing dinosaur-like mascot.
Mozilla and netscape use the same code under the hood; they compose SMTP messages and transmissions directly, rather than needing a local server. Think of this as driving the mail up to the post office yourself all the time instead of leaving it at your door for the postman to pick up when he comes by every day for the mail.
Thank you for your time.
Jim
You're welcome.
Are there any additional sources for manpages [we've checked kernel-doc package, http://kernelnewbies.org/documents, Kernel* HOWTO's and so on, but without success].
Linux source is the authoritative documentation for kernel functions. I guess you already know about http://lxr.linux.no. That's the right place to look for documentation.
Apart from that Alessandro Rubini's book on device drivers has some information on this. Information regarding poll is here in that book:
http://www.xml.com/ldd/chapter/book/ch05.html#t3
This should give a fair idea of what needs to be done to support poll on a device.
Also try to follow any driver's code which implements 'select' or 'poll' for the device.
I just downloaded 3 Mandrake CDs via FTP and read after doing that that I should have set the download mode to binary not ASCII. I didn't do that, but when I run MD5 on all the .iso files they are all fine....is it possible that even though the MD5 checksums are all matching, the files still aren't correct, or is MD5 an infallible test of the downloaded ISOs?
MD5 should be a good enough test of validity. It has got some weaknesses which have recently come to light, but it's extremely unlikely that you've come across three separate examples.
It's probably the case that your FTP/download software switched to binary by itself, without you having to explicitly do it.
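For reference, a quick way to do the check yourself, assuming the mirror publishes a checksum file (often named MD5SUM or similar) alongside the images:

md5sum -c MD5SUM      # compares every listed ISO against its published checksum
md5sum *.iso          # or print the sums yourself and compare them by eye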
Hi Folks
I did look into the mgp with embedded mplayer issue today again and got a little further: after looking into the man-page of xwininfo I found -name. If I call:
xwininfo -name MagicPoint
(always the same, I hope). I get the win-id like this:
xwininfo -name MagicPoint |grep Window |awk '{print $4}' >/tmp/wid
and then:
mplayer /home/robos/movies/play* -vo x11 -wid `cat /tmp/wid`
OK, I actually put the whole set of calls into a bash script, since mgp does some strange things if I call mplayer from within mgp with %system. So, in the mgp text I do a
%system "/home/robos/mplayer.sh"
and call the whole thing like this:
mgp mplayer.mgp -x vflib -U
The -U is the important one, since forking is prohibited otherwise... This sorta works, but the display stays a little garbled afterwards (I put a %system "killall mplayer" on the next page) and on the page that displays the vid nothing else is shown (no text). But I would say it's something to improve upon. If you use -o with mgp it doesn't go fullscreen, and then the vid is also centered in my case (I use Enlightenment, btw).
OK
I'll toy a little more
Robos
In the last issue (LG 84), help wanted #3: http://www.linuxgazette.com/issue84/lg_mail.html#wanted/3 it was asked if Linux has Net2Phone support. -- Heather
I see that this request is a month or more old. Has this problem been solved?
Many times, people do get their solutions, but don't pass them back along to us. So I cannot really say. -- Heather
I have a Linux firewall (ipchains) at home, and run Net2Phone on a Windows 98 box that goes through the firewall. If you are still having problems, I may be able to help with some of the settings.
Okay, I'm at home now and can check the settings. On the Net2Phone client, choose menu->preferences->network. Make note of the "doorman" URLs and port numbers (mine are call1.net2phone.com and call2.net2phone.com, both on port 6801). In the client box, choose a number for your ports (I use the same for both TCP and UDP). Valid numbers are greater than 1024 and less than 65000.
My firewall uses masquerading, and is not a proxy. I don't know what your setup is, so this may or may not work for you. In my previous message I said I use ipchains. Sorry, that should have been iptables. I got it set up a while ago, and really haven't touched it since.
Here are the variables I use in my script:
${ISP} is the network card connected to my ISP,
${LAN} is the network card connected to my home network.
${PHINIT} is the port used by the doorman (6801)
${PHCTL} and ${PHVCE} are the TCP and UDP port numbers I picked
Here are the iptables commands I added to my script to start my firewall:
iptables -A INPUT -p udp -i ${ISP} -s call1.net2phone.com -m state ! --state INVALID --source-port ${PHINIT} -j ACCEPT
iptables -A INPUT -p udp -i ${ISP} -s call2.net2phone.com -m state ! --state INVALID --source-port ${PHINIT} -j ACCEPT
iptables -A INPUT -p udp -i ${ISP} --source-port ${PHVCE} -j ACCEPT
iptables -A INPUT -p tcp -i ${ISP} --source-port ${PHCTL} -j ACCEPT
Hope this isn't too late to be helpful....
Dear Answer Gang - my problem is an inaccessible C: drive holding my Win95 system and all my data - much of it not backed up, naturally.
Here is how I think it happened.
I started with a standard Win95 set up, with a 5G C: drive, a bootable 48x cd drive and a standard floppy a: drive.
I then added a 20G Western Digital secondary drive. This came with the Phoenix bios overlay ez-bios, which took control of both internal drives (despite the fact that c: was within the old bios limit).
With both drives running a single DOS partition, the system ran without problems, until I tried to partition the d: drive to load Linux (SuSE 6.3). Neither Partition Magic nor fips would repartition the disk.
I then downloaded the latest data life guard (DLG) (=ez-bios) installation utility from the web, and used it to partition the d: drive. I also made a floppy win95 boot disk.
At this point the win95 system was operating correctly, but with a reduced disk size visible on d:.
I then started to load linux by booting from the cd. It ran through the initial screens without problem, but when it came to assigning the partition to mount the system, the second partition on d: was not visible. There was no escape route, so I powered off.
Now the system would not boot from c:.
Nor would it boot from the system disc in a:, or, rather, when I did, the c: drive was not accessible (nor the d: drive!).
I tried fdisk /mbr, and restoring the mbr "before installing ez bios" and " after installing ez-bios" (options in the downloaded DLG utility). The DLG utility also told me that the c: drive had a "non dos partition".
I assume that I have inadvertently created a linux partition on the c: drive.
How can I recover from this? Or is there some other explanation? Is this a
diy job, or should I consider a data recovery service (my marriage may be at stake here!).
Very grateful for any help you could give. I'm keen to join the penguins, but this is off-putting!
John Hodgson
[mike] First off, can you boot into linux? If so check the data as follows
mount the c: partition
type ls /mnt to see if a mount point has been setup by your distro
if you see something like /mnt/dos_c do ls <this dir> to see if there are any files
if there is no /mnt/dos etc directory do the following
mkdir /mnt/c
mount /dev/hda1 /mnt/c
then type df to see what partitions are mounted
then type ls /mnt/c to see if your files are still there
Thanks, Mike...
To avoid the possibility of further over-writing on the old C: drive, I used DemoLinux running from the CD drive. By default this loads the KDE desktop.
This showed two internal drive icons, but clicking on hda1 gave an error:
"Unable to run the command specified. The file or directory file:/mnt/hda1 does not exist"
Moving to console mode:
ls /mnt gave the response
cdrom floppy hdb1
Apparently the old C: drive is not being recognised
mkdir /mnt/c gave the error message:
mkdir: cannot create directory '/mnt/c': Permission denied
While DemoLinux was loading I spotted a line that I think related to the old C: drive, giving it the following properties: win98 FAT-32 LBA-matched partition
[Heather] Sorry to come a bit late to the game. Anyways it looks to me as if your initial diagnosis is correct - the partition table has gotten somehow mismatched with what is really on the drive.
The Linux utility to deal with this problem is gpart - it will physically look at the bits on the drive, and guess a partition table for you. If your drive electronics do not agree with what your BIOS reads for cylinder/head/sector values, it might actually be wrong, but if you see something that looks like the layout you remember, it's probably right, and you can write the result into the MBR-tail with a commandline switch.
(I say "tail" because strictly speaking the first 446 bytes are the boot loader and the 64 bits at the end are the partition table, and some techies refer to only the loader as the MBR, while others call the whole 512byte cluster this. But we digress.)
The DOS analogue to solve this problem - bearing in mind that I've not had to use it for years, so I cannot vouch for the current edition one way or another... is Symantec's Norton Disk Doctor... NDD /REBUILD. As a few repartitioning utilities are on the market, they might also have some sort of "reset to whatever the disk has on it" feature - possibly as a last-ditch rescue against their own failure modes. The same caveat against the BIOS mismatch problem applies. Also, if it isn't new enough, a DOS tool may not recognize any Linux bits you've managed to get on there.
Anyways, I have used gpart recently myself and can assure you that it works. The real fun is getting a cd-boot or floppy-boot distro that has it in there. I don't recall if I used Knoppix, or if I host-mounted one of my laptop drives temporarily (so /dev/hda was a known good system). DemoLinux, if it has a copy of gpart on it, can help you solve that quite quickly, and if it doesn't have it, you may be able to fetch a binary of the program into your ramdisk.
Pretty much, all the live-CD discs use a ramdisk or two.
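For reference, a hedged sketch of a typical gpart session, assuming the troubled disk is /dev/hda (read the man page before touching -W; it writes to the disk):

gpart /dev/hda               # scan and print the guessed partition table (read-only)
gpart -W /dev/hda /dev/hda   # if the guess matches the layout you remember, write it back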
Hey there Answer Gang,
You've helped me in the past, I'm hoping you can help me again.
I'm having difficulties setting up sendmail and friends on a small home network. I can't seem to get mail to work between hosts. I feel fairly competent in Linux in general, but this continues to baffle me.
I'm using Red Hat 8.0 on two systems, my main desktop, and our firewall/dns/nat/etc box. My roommate is using WinXP. But basically, I'm looking for a good howto doc on setting up email between the gateway box and my desktop, so I can forward the root mail from the gateway to an arbitrary account on my desktop. Y'know, for getting alerts, logwatch info, etc.
And just to learn a bit more about the workings of email in general.
At present, I can't get ANY kind of email to move between the two boxes.
Mostly, I'm looking for a really good writeup on how to configure things to my liking. I mean, I don't want to have to buy a book on it, it's just for home use, but I want a good understanding.
If you people can point me towards a good resource, I'd really appreciate it.
[John Karns] Well I suppose the best resource is the O'Reilly book on sendmail - but since you mentioned that you don't want to buy a book, I do recall stumbling across a helpful sendmail web site about 3 yrs ago. So a web search would probably turn up a few sources of info. There are also some fairly comprehensive FAQ's etc available...
[Heather]
- try the faq's and other helpful notes at sendmail.org, then the community forums at sendmail.net.
- each of sendmail's major competitors also have websites; since some of their FAQs are in the form of "under sendmail I would... how do I do that in this mail transport?" then reading the documentation of all the major mailers should help considerably toward learning about email in general.
- for your NT box to get mail from your linux server, either your linux server needs to run POP or IMAP daemons... or your NT system has to run an SMTP daemon and be listed as an MX for itself. The first one is much easier.
Thanks Heather, I'll have a look at these resources. Luckily, I've managed to muddle through a bit of it on my own, the mail is moving, just need to fine-tune things a bit. I now understand why the sendmail.cf file is so infamous
rewrite rules, UGH...
[John Karns] Finally, I can provide a quick hint about (one method of) setting up mail between hosts. For my purposes I just added the host names in /etc/mail/mailertable in the form of
machine1.my.psuedo.dom    smtp:machine1.my.psuedo.dom
machine2.my.psuedo.dom    smtp:machine2.my.psuedo.dom
In the comments in that file:
...............
And from /etc/mail/README:
...............
sendmail.cf supports some more external database files. The default configuration uses /etc/aliases, /etc/mail/mailertable, /etc/mail/genericstable and /etc/mail/virtusertable. These files are normal text files that are converted with "makemap" to the real database files (ending in .db). For all outgoing email, sendmail will use the destination hostname and look into /etc/mail/mailertable to see how this email should be transported to the next destination. Please read that file for some examples on email-routing.
...............
Note 1: There is a Makefile in that dir to enable running 'make' after adding the host names to the text file. That will create the .db file which sendmail actually uses (see the sketch after these notes).
Note 2: I'm not sure how much of this structure is from the generic sendmail and how much may be contributed by SuSE, but my guess is that it is mostly generic. This seems to be borne out by the above reference to sendmail.cf pointing to those files.
Note 3: This setup works for me. I don't have a name server set up, just use a hosts file. YMMV.
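As promised in Note 1, rebuilding the map after editing the text file can be as simple as this sketch (paths follow the stock sendmail layout; adjust for your distribution):

cd /etc/mail
make                                      # uses the distribution's Makefile, or:
makemap hash mailertable < mailertable    # builds mailertable.db directly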
Can I use a Zoom/Modem USB Model 3090 with Red Hat 7.2?
The best place to research USB-hardware support problems in Linux is http://www.linux-usb.org. You might want to make a note of that, for the future. Selecting "Working devices list" on the front page takes you to the Overview page. From there, we select Devices, since we're looking up support for a particular hardware device, rather than any of the other information categories. We're now shown the dozen or so USB device categories, and pick "Comm: Communications devices (Modems)". This brings us to a long multipage list of modems by manufacturer. Moving through that to the Zs, we eventually find the line item for "Zoom Telephonics, Inc. 3090". Finally, selecting that item brings us to http://www.qbik.ch/usb/devices/showdev.php?id=660.
And it's bad news:
Zoom sales claims this is "a winmodem and will not work with Linux". Shame.
There's more, but that about sums it up: This is undoubtedly a unit designed to achieve the lowest possible retail price by omitting key circuitry normally integral to all modems (the ROM or "controller" chip implementing required communications protocols, and/or the UART chip to control and buffer serial communications). The omitted functionality is then emulated in software by MS-Windows-only proprietary "engine" software.
If/when you go shopping for a better modem, you might want to consult Rob Clark's modem database, at http://www.idir.net/~gromitkc/winmodem.html.
The real tip here, for newbies and old hands alike: we can no longer assume that being external or internal, or which interface a modem is plugged into, indicates whether it has an incomplete chipset and needs a booster shot from specialized driver software. Some manufacturers offer fully-capable internal modems, and some external ones are duds like this one. Use the net resources at http://www.linmodems.org, and if you decide to use a supported or partially supported winmodem, don't expect too much out of it when you have your system under a heavy CPU load. -- Heather
...making Linux just a little more fun!
By Jim Dennis, Ben Okopnik, Dan Wilder, Breen, Chris, and... (meet the Gang) ... the Editors of Linux Gazette... and You!
We have guidelines for asking and answering questions. Linux questions only, please.
We make no guarantees about answers, but you can be anonymous on request.
See also: The Answer Gang's
Knowledge Base
and the LG
Search Engine
Welcome once more folks, to the world of The Answer Gang. We haven't decided where to hang the stockings; Tux goes out on Geek Cruises all the time, so he's rarely found at the South Pole anymore. Perhaps I should create a /hearth in my home directory, and give it a /chimney, some /stockings, and what the heck, /menorah, /presents, and /peace.on.earth. Top things off with a /var/log/yule we can burn in January, and...
Oh, you didn't want to hear all this silliness. You wanted to get to the presents. Well, I can tell you this little nerdette is still looking for LCD monitor prices to come down. I guess my New Year's Resolution will have to be running past a nice scanner.
If you can't think of anything for the geek in your life, I recommend a good uninterruptible power supply (UPS). We can always use more...
For those who are wondering, the top reason for anyone not getting answered this month is: insufficient detail! Folks, we're pretty smart, and might even be accused of telepathy, but we are not there in the room with you, so we can't see that machine. We really need those error messages, any bleeps it's making, how it worked before and what you were expecting of it. With these things, we can provide answers you probably had no idea were available ... beyond just how to do the thing you think will work. Without these hints, we're as blind and as stumped as you are about what's going on.
To all the tiny elves, Kris Kringles and Gnomes in our computers, enjoy your extra trons and blinkylights this season.
From Raj Shekhar
Comments By Mike Orr, Heather Stern, Rick Moen
In response to LG 84, Tips 25: http://www.linuxgazette.com/issue84/lg_tips.html#25 -- Heather
Muthukumar Kalimani wanted to install three operating systems on his box. I had helped my friend do the same, and here are some hard-won lessons.
[Mike] The Large Disk HOWTO http://www.tldp.org/HOWTO/Large-Disk-HOWTO.html claims this is mostly not a problem any more.
(Paranoid people like myself continue to place /boot partitions and C: partitions below 1024.)
[Rick] In theory, it went away in 1994.
That was the year that motherboard manufacturers rolled out Yet Another BIOS Extension, providing a new method by which boot-time software could get extended BIOS routine 13h information to directly address logical cylinders numbered 1024 and above. A new version of LILO immediately came out, that requested and could process that BIOS information.
So, in theory, the only people who need put /boot below the 1024th logical cylinder are
- using really antique booting software (a very bad idea) or
- contending with very old motherboard BIOSes, usually on 486es. I'm unclear on whether any early Pentium motherboards used the older-version int 13h call, or whether it's a 486-only issue.
A lot of us old-timers retain the /boot-filesystem-first habit just from long usage, but also because people sometimes come in the door with antique BIOSes and fail to mention that fact. Better to put /boot near the outer tracks than risk spending considerable effort building a system and then find it unbootable.
[Heather] Rick and I both do installfests; I especially help people with laptops, which have all sorts of oddball things in their BIOS. It's far easier to obey this rule of thumb than have to do things over on the limited time available at such install parties (usually only about 4 or 5 hours, but people arrive late, and would rather spend time learning the diffs between K and Gnome, set up their mailer, etc).
I think it's important to note that /boot doesn't care about being first, only about being early on the disk (if it cares at all). I usually give it partition 2; that satisfies some MSwin setups that want the first entry, and avoids the 4th entry, which some hibernation setups like to take. Make the third an extended partition, put a D: in there if you were planning a more even split on a large disk, a swap, and at least one more volume for / (though I refer you to past articles for The Gang's recommendations about partition layout beyond that).
If you are really trying for maximum space "under the bar" assigned elsewhere instead ... it can be as small as 7 or 8 MB. I wouldn't go smaller for fear that monolithic kernels might get pretty big at some point. You always want room for three things: the bootloader parts themselves, a known good kernel, and whichever one you are recently trying out. If you're triple-or-more booting and more than one are Linux, you might want to lean the other direction and make room for lots of modules to go with them (symlink /lib/modules to /boot/modules in all distros and share the goodies).
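Put together, a layout along those lines might look like this (a sketch only; the device names and filesystem choices are illustrative, not a recommendation for any particular disk):

/dev/hda1   primary    FAT32   MSwin C:
/dev/hda2   primary    ext2    /boot  (small, kept early on the disk)
/dev/hda3   extended
/dev/hda5   logical    FAT32   D: (shared data)
/dev/hda6   logical    swap
/dev/hda7   logical    ext2    /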
If you want to know more about Partitioning using "fdisk" refer to:
Linux Partition HOWTO
by Tony Harris and Kristian Koehntopp
(it is a mini HOWTO)
in particular see section [5] [Partitioning with fdisk]
http://www.tldp.org/HOWTO/mini/Partition/index.html
[Heather] If you create the FAT filesystem for it ahead of time, MSwin's SETUP.EXE usually won't gratuitously fill the entire disk for you, which saves digging up a resizer later.
Hope you find this relevant.
[Mike] Are you specifically excluding LILO and GRUB? Why?
I had written that the querent needs to install a loader _if he has trouble_ booting into GNU/Linux (using either GRUB/LILO). I had installed RH 7.1 with Win2k and I had trouble booting into GNU/Linux. RH 7.1 came with LILO version 21.4-4
Thankfully this problem was well documented in the Linux+NT-Loader mini-HOWTO. It advised using Boot Part to solve this problem. I am still using Boot Part to boot into my GNU/Linux (RH 7.1) OS. I think newer versions of RH do not show this problem, but I am not sure, as I have only RH 7.1 (thinking of shifting to Debian). LILO does not need any special hacking to detect and boot up Win98.
One of my friends had discovered XOSL, and even though he was a newbie, he had three OSs up and running in no time. (Win98, WinXP, Linux and maybe Win2k too!)
[Mike] GRUB is more user-friendly than LILO. I wish I could use it on my computer but the "linear" option doesn't work. I had to switch back to LILO because my computer won't work with the "lba32" setting.
Talking about loaders, three years back I experimented with BeOS. It was really cool and really sparing of machine resources. I had Win98 installed on my box. It installed quite easily on the FAT filesystem and it placed an icon on my Win98 desktop. On clicking it, Be would boot up. And I think it did not take much time to start up. I removed it because it did not come bundled with many apps. What I wanted to know is: do we have this sort of funky loader in GNU/Linux?
[Heather] Yes. The canonical way to launch Linux from a running DOS or MSwin system is a program called LOADLIN.EXE. I understand there is a mildly different version of it for NT, and you should prepare a PIF for it that tells MSwin it's okay to give it all the resources it needs - go ahead and take over the CPU - then you'll have a happy one way trip to whatever kernel you told it to load. Oh yeah, and the linux kernel you use has to be visible in your DOS filesystem. I usually suggest keeping such parts in C:\LINUX so it's obvious.
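A sketch of the usual invocation from a plain DOS prompt (the kernel path and root device here are examples only; substitute your own):

C:\LINUX\LOADLIN.EXE C:\LINUX\VMLINUZ root=/dev/hda3 ro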
I have not experimented with GRUB but LILO can be tough for a newbie (IMVVHO). Again I am talking about the older versions and I have no experience with newer versions.
[Rick] A lot of people never learned the Zen of LILO:
- /sbin/lilo (the "map installer") is best thought of as a compiler, and /etc/lilo.conf as its source code.
- Therefore, if you change /etc/lilo.conf or any of the files it points to, you must run /sbin/lilo before rebooting, to "recompile".
- You should always have a "safeboot" stanza in /etc/lilo.conf, pointing to a known-good kernel image that you never fool with, as a fallback. This ensures that if, e.g., you compile a new kernel but accidentally omit console support, you can easily recover.
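A minimal sketch of such a fallback stanza (the image name and root device are placeholders, not a recommendation):

image = /boot/vmlinuz-known-good
    label = safeboot
    root = /dev/hda2
    read-only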
GRUB is a capable and flexible bootloader, but practically all of the reasons commonly cited for it being preferable to LILO boil down to "I once messed with my boot files before reading LILO documentation, shot myself in the foot, and therefore blame LILO."
From Jose Nazario
Comments By Mike Orr, Ben Okopnik, Steve Kemp, Tom Bradley
i was looking through the november issue of linux gazette and something caught my eye. overall the issue had a few things i was pretty happy to see: a piece on mono, elf kernel execution, and adding loadable plugins to code. it's this last piece i have a problem with.
tom bradley's code, while demonstration code, is a perfect example of unreliable code and illustrates why this kind of thing should be avoided. in main.c (truncated to save space):
#define PATH_LENGTH 256
...
char path[PATH_LENGTH], * msg = NULL;
...
/* build the pathname for the module */
getcwd(path, PATH_LENGTH);
strcat(path, "/");
strcat(path, argv[1]);
it's quite trivial to overflow path[PATH_LENGTH], even inadvertently. before you say "look, this isn't setuid root, this isn't anything but demonstration code, don't rush off to bugtraq" i want to say this: for precisely the reason that it is demonstration code it should do bounds checking.
[Ben Okopnik] Agreed, 100%. One of the many security-related sites I read on a regular basis had a "ha-ha-only-serious" quote that's worth paying attention to:
<ironic> Security hint of the day:
find . -name '*.[ch]' | xargs egrep -l 'sprintf|strcat|strcpy' | xargs rm
</ironic> -- Pavel Kankovsky aka Peak
Funny, but...
[Steve Kemp] There are a few decent scanning tools available, like 'flawfinder', 'rats', and 'its4', which are worth using if you want to be scared!
Steve
---
# Debian Security Audit Project
http://www.steve.org.uk/Debian
lots of people are going to code their apps with this as a start and not think twice about the reliability of the foundation of this code. the fact is someone can easily hit this upper limit inadvertently (think of a well organized person who has a deep directory structure ... suddenly path[] has a lot less headroom).
secondly, bounded string manipulation should just be a habit, and habits develop after repeated application of the effort. crappy, unchecked runtime errors are the bane of software quality, there's no reason you shouldn't always do sanity checks, even in demo code. one reason alone to do it is that you'll get so annoyed you may want to improve the interface to error checked code, benefitting us all.
anyhow, thanks for the november issue.
Forwarding to the author, Tom Bradley <tojabr@tojabr.com>. This message will be in December's LG . Feel free to write a response or a follow-up article if you wish. -- Mike
thanks mike. tom, in all seriousness that article was really cool and timely, and i will definitely be referring to it to make use of it. i just take issue with unchecked errors in code ...
thanks for an otherwise well written piece.
[Tom Bradley] I agree that was setting a bad example on my part, below is a corrected version.
(truncated to only the changed portion) ...
char * path, * msg = NULL;
int (*entry)();
void * module;

if(argc < 2) {
    printf("No module given.\n");
    return;
}

path = (char*)get_current_dir_name();
path = (char*)realloc(path, strlen(path) + strlen(argv[1]) + 2);
strcat(path, "/");
strcat(path, argv[1]);
...
the #define has been removed.
Tom
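As an editorial aside: for readers who would rather keep the original fixed-size buffer than switch to dynamic allocation, here is a sketch of a bounds-checked variant. It is illustrative only - the names mirror the demo code above, the rest of the program is omitted, and snprintf simply reports truncation instead of silently overflowing:

#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    char path[PATH_MAX];
    size_t len;
    int n;

    if (argc < 2) {
        fprintf(stderr, "No module given.\n");
        return 1;
    }

    if (getcwd(path, sizeof path) == NULL) {
        perror("getcwd");
        return 1;
    }

    /* snprintf never writes past the end of the buffer; a return value of
       (sizeof path - len) or more means the result would have been truncated. */
    len = strlen(path);
    n = snprintf(path + len, sizeof path - len, "/%s", argv[1]);
    if (n < 0 || (size_t)n >= sizeof path - len) {
        fprintf(stderr, "Module path too long.\n");
        return 1;
    }

    printf("Module path: %s\n", path);
    return 0;
}

Either approach works; the point both writers make stands - check the bounds (or the return values) every time, even in demo code.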
From Reilly Burke
Comments By Thomas Adams, Mike "Iron" Orr, Heather Stern
[Heather] Reilly Burke is Technical Advisor for a company called Aero Training Products, Inc. (http://www.aerotraining.com)
To Derek Holliday
We have copies of PC-MOS and LanLink available. We also produce LanLink drivers for PC-MOS.
PC-MOS is required to run POS systems with DOS applications. DOSEMU is not good enough to run many (most) of the apps. PC-MOS is file-compatible with DOS systems, but only the November 93 kernel (of PC-MOS) can access 3.5" floppies.
I'd love to replace our PC-MOS applications, but nothing quite measures up yet. Linux is nowhere near being able to do the job (it's way too big, complex, & geeky)! Possibly DR-DOS 8 (coming out in spring 2003 with FAT32) might do the job.
[Thomas] How would you know until you tried? Just because Linux is too big and "geeky" in your eyes does not mean it couldn't do the job! It's not really logical to say that.
[Iron] DOS programs, however, often access the hardware directly, so it's not surprising DOSEMU can't emulate the environment quite well enough.
[Heather] Thanks for this tip on an old thread; it's not Linux, but since we seem to be the only place that talks about it...
I'm curious about what the problems under DOSEMU + (say) MS-DOS 5.0 are, but unless this is a problem you're trying to solve for yourself, you may not want to bother delving any further.
The buzzword "point of sale" typed into the Freshmeat search index (http://freshmeat.net) yields 7 direct hits, and a category for point of sale containing 42 projects. Well over a year ago I saw one written up in a magazine article (I think it was Linux Journal actually) about a POS system optimized for a pizza place. That's geeky; but the pizzas he was selling are real.
Some of these projects will really be "e-business" (aimed at web based stores, not one where a high school student has to run the register, nor where the machine has a real register to pop the change out of) and a few of them are optimized for a specific kind of shop. But they may do for some people.
Of course we're still trying to move our PC-MOS apps to Linux, but so far, after years of experimenting and coding, we're still running the PC-MOS systems because there's still nothing like them for Point-Of-Sale utility. It's fast and small and entirely bug-free. The last PC-MOS kernel released was November 93 (9 years ago). But it's designed for old hardware (ISA slots, NE2000 ethernet cards, Wyse terminals, and serial printers), and the systems are becoming increasingly difficult to maintain. There are probably still 100,000 PC-MOS users looking for an answer, but the closest thing is probably DR-DOS. Linux is not being maintained by POS geeks, so there's a real shortage of Linux POS tools and solutions.
We've tried disassembling the drivers (we succeeded in cloning the Lan client drivers with new serial numbers!), but disassembling the entire OS is far too complicated. We've also tried rewriting the DOS apps (in particular, the Shark database). We have its horribly complicated monolithic Microsoft C source code, with chunks of assembler mixed in, but it's still a giant task. The only feasible direction looks like rewriting the Shark compiler in Kylix, but even that is a horrendous prospect. So far, PC-MOS still works (and it's paid for), and the Shark database is still fast and flexible.
We'd really like to hear from any other POS types who are trying to move to Linux.
Reilly Burke
From Mustafa C. Kuscu
Answered By Jay R. Ashworth, Rick Moen, Robos, Heather Stern, Kapil Hari Paranjape
Hi, James. When a remote X-forwarding ssh connection is broken, all the windows at my local server get lost. Is there a way to prevent the remote processes from shutting down, so as to resume the processes and have the windows re-sent to the local X-server when I relogin?
Thanks Mustafa
[jra] Not per se, but investigate VNC. I'm in the midst of writing an article on it as it happens, but it can be used to do what you need.
[Rick] Jay, just to help: I know of these VNC implementations (also known as "RFB" = Remote Frame Buffer):
- RealVNC, formerly AT&T Cambridge's reference implementation, http://www.realvnc.com
- TridiaVNC, http://www.tridiavnc.com
- TightVNC, http://www.tightvnc.com
- x0rfbserver (great name, eh?), http://www.hexonet.de/software/x0rfbserver, optionally with krfb, http://www.tjansen.de/krfb or x0rfb from the rfb package, http://hexonet.de/software/rfb
You'll find a number of resources about VNC over SSH in my ssh-clients file, http://linuxmafia.com/pub/linux/security/ssh-clients
Also worth looking into:
- MLView DXPC, http://www.medialogic.it/projects/mlview : Compressed and proxied X11 -- sort of an update of the LBX idea. Much faster than VNC.
- rdesktop, http://www.rdesktop.org, an RDP client for Windows Terminal Services. Likewise much faster than VNC; also, fully multiuser, unlike VNC.
[Robos] Well, not entirely true IIRC, since I had some thoughts about this lately too, and shortly after that a friend of mine told me that there exists something like screen for X server connections. And now guess what, he and I have both forgotten it again. Great. So, it exists, but somewhere, and I can't tell where...
[Rick] Possibly, you're trying to think of xnest?
[Heather] I suspect not; xnest handles issues about color depth, not being able to set processes to sleep and waken them up from another console.
[Kapil] Actually you may mean "ratpoison" but that is only a window manager which has a "screen"-ish look and feel.
The following setup works well for me from home and work.
At work:
start ratpoison
get ratpoison to start rfb (or to give its full name x0rfbserver).
get ratpoison to start a screen session.
Do some real work via screen.
(All programs that invoke graphics work via ratpoison).
At Home:
run ssh -L 5900:localhost:5900 to the work machine.
on the remote machine run "screen -D -R"
start xvncviewer on the local machine.
Do some real work via screen!
Thus text-based applications work via ssh and screen and so are reasonably fast. Meanwhile any remote program that invokes graphics creates a window within the xvncviewer.
Needless to say ratpoison runs at home too!
I was quite pleased when I cooked up this config as you can see!
As long as the machine at work continues to run, none of the sessions is ever exited or lost. VNC and screen passwords provide some security as well.
Hope this helps,
Kapil.
[Robos] Nope, I found it! I actually mean - xmove! Look here:
ftp://ftp.cs.columbia.edu/pub/xmove
That's also the thing the original querent might wanna have.
[Kapil] I tried out "xmove". Er, ... just one problem. It uses TCP connections to connect with the xserver which means that X with "-nolisten tcp" does not work.
In the modern security conscious world this is essentially all X servers!
[Robos] Well, that's true. But you can either remove the call in /etc/X11/xinit/xserverrc and maybe /etc/gdm/gdm.conf (dunno for kdm or xdm), or ssh -X should be permitted, if I gather some comment I read correctly. What say the others?
[Heather] As far as X is concerned ssh -X merely yields a valid display at a higher number than usual - :10 rather than :0 is typical - so localhost:10 would send all processes down the ssh pipe back to where you are sitting.
Even if a direct connection at the TCP level doesn't work, anything that speaks normal TCP/IP packets can surely be tunneled. But you can try playing games with ssh at the transport layer first. There are stacks of examples for POP over SSH out there; that's how they work, so it's worth a look too.
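As a concrete sketch of that pattern (the host names and the local port are placeholders; 110 is the standard POP3 port):

$ ssh -f -N -L 1110:localhost:110 you@mailhost.example.com

After that, the mail reader is pointed at localhost port 1110 instead of mailhost's port 110, and the connection rides inside ssh. Any other plain-TCP service - an X server listening on port 6000+display, a VNC server on 5900+display - can be carried the same way.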
...making Linux just a little more fun! |
By Michael Conry |
Contents: |
Submitters, send your News Bytes items in PLAIN TEXT format. Other formats may be rejected without reading. You have been warned! A one- or two-paragraph summary plus URL gets you a better announcement than an entire press release. Submit items to gazette@ssc.com
All articles older than three months are available for public reading at http://www.linuxjournal.com/magazine.php. Recent articles are available on-line for subscribers only at http://interactive.linuxjournal.com/.
Finding bizarre implications of the DMCA with which to ridicule it is as easy as shooting fish in a barrel. The Register recently reported that four major retailers (WalMart, Target, Best Buy and Staples) had invoked the DMCA to prevent FatWallet.com from disseminating information about sales and price comparisons. The argument runs that sale prices are copyrightable information (and not just simple facts). FatWallet has had to comply to avoid the risk of a very costly legal battle.
In other DMCA-related matters, Security Focus reported that hardware manufacturers producing games console mod chips have found themselves under pressure, applied through the device of the DMCA, to cease such production. The argument hinges on whether the non-infringing uses legitimise the chips, and it's a criterion which can vary substantially from country to country.
Finally, there is a very interesting DMCA article by Adam C. Engst at TidBITS. It provides a good overview of the issues arising from the law, and the stakes the "content industry" is playing for in its long-term strategy. Related to this is another article at TidBITS (by Cory Doctorow) entitled Can the Digital Hub Survive Hollywood?. It does a fine job of highlighting the tensions between the content/media industries and the interests of the technology industry at large (as opposed to the welfare of sectoral interests within the tech industry who might do very well if their technology is used for protecting "content"). Particular attention is paid to the BPDG (Broadcast Protection Discussion Group).
In a reversal of a lower court's decision, a German court has ruled that the name Mobilix is sufficiently close in sound and appearance to Obelix to cause confusion. Mobilix is a website dealing with the area of Unix on mobile devices. Obelix is a character from a French comic book. The final implications of this decision are not clear. You can follow the entire story on the Mobilix website.
Slashdot recently reported on the ACM Digital Rights Management Workshop. Among those present was Ed Felten whose brief commentary can be read here. It was reported that there was some scepticism that DRM was truly a panacea for the copy-protection worries of Hollywood info-hoarders. In a not unrelated story, The Register reported on the future of Microsoft's Palladium and the Trusted Computing Platform Alliance (TCPA). Even proponents of the system admit it is not totally secure and potentially vulnerable to intelligent hardware attacks.
Some links from Linux Weekly News:
Some links from the O'Reilly websites:
DesktopLinux.com have an article about Film Gimp. Film Gimp is a motion picture frame-editing tool. The article has some technical details and reports on its use in the film industry.
Linux Journal have an article explaining how to train mutt to catch spam using ESR's bogofilter.
News.com have reported on IBM's plans to build two new machines which would be the fastest supercomputers to date. The Blue Gene/L, the faster of the two, will be Linux powered, and 10 times faster than the current title-holder, NEC's Earth Simulator.
A survey of some open source multimedia projects which might be of interest.
Some links from Linux Today
The Inquirer has run a series of articles describing the Linux install process in a way designed to help beginners. Parts 1, 2, 3, 4, 5.
IBM Developerworks have an article on open source scientific software, being used increasingly by those involved in scientific research.
A report at Linux and Main on the path to the next major kernel release.
From Debian Weekly News, a link to an interview with Klaus Knopper of Knoppix. Particular comments on hardware detection implementation.
Some links of interest from The Register:
Some links from Slashdot which may interest you:
Listings courtesy Linux Journal. See LJ's Events page for the latest goings-on.
Linux-Bangalore/2002 | December 3-5, 2002 Bangalore, India http://linux-bangalore.org/2002/ |
USENIX 5th Symposium on Operating Systems Design and Implementation (OSDI) | December 9-11, 2002 Boston, MA http://www.usenix.org/ |
Consumer Electronics Show | January 9-12, 2003 Las Vegas, NV http://www.cesweb.org/ |
LinuxWorld Conference & Expo | January 21-24, 2003 New York, NY http://www.linuxworldexpo.com/ |
O'Reilly Bioinformatics Technology Conference | February 3-6, 2003 San Diego, CA http://conferences.oreilly.com/ |
Game Developers Conference | March 4-8, 2003 San Jose, CA http://www.gdconf.com/ |
SXSW | March 7-11, 2003 Austin, TX http://www.sxsw.com/interactive |
COMDEX Canada | March 11-13, 2003 Vancouver, BC http://www.comdex.com/vancouver/ |
CeBIT | March 12-19, 2003 Hannover, Germany http://www.cebit.de/ |
4th USENIX Symposium on Internet Technologies and Systems | March 26-28, 2003 Seattle, WA http://www.usenix.org/events/ |
PyCon. Note: This is the first "low budget" Python conference, so if you've been avoiding Python conferences due to the cost, this one is for you! Another conference, the main International Python Conference, will be held in July as part of O'Reilly's OSCON (Open Source Convention). | March 26-28, 2003 Washington, DC http://www.python.org/pycon/ |
AIIM | April 7-9, 2003 New York, NY http://www.advanstar.com/ |
SD West | April 8-10, 2003 Santa Clara, CA http://www.sdexpo.com/ |
COMDEX Chicago | April 15-17, 2003 Chicago, IL http://www.comdex.com/chicago/ |
Real World Linux Conference and Expo | April 29-30, 2003 Toronto, Ontario http://www.realworldlinux.com |
USENIX First International Conference on Mobile Systems, Applications, and Services (MobiSys) | May 5-8, 2003 San Francisco, CA http://www.usenix.org/events/ |
USENIX Annual Technical Conference | June 9-14, 2003 San Antonio, TX http://www.usenix.org/events/ |
CeBIT America | June 18-20, 2003 New York, NY http://www.cebit-america.com/ |
O'Reilly Open Source Convention | July 7-11, 2003 Location: TBD http://conferences.oreilly.com/ |
12th USENIX Security Symposium | August 4-8, 2003 Washington, DC http://www.usenix.org/events/ |
LinuxWorld Conference & Expo | August 5-7, 2003 San Francisco, CA http://www.linuxworldexpo.com |
MySQL AB has settled its dispute with NuSphere Corporation. MySQL AB had claimed that NuSphere violated the GPL and misused the MySQL trademark. (NuSphere includes MySQL with NuSphere's enhancements in its product.) NuSphere has assigned to MySQL AB the copyrights for its contributions to MySQL. Not so coincidentally, MySQL AB has just donated $25,000 to the Free Software Foundation's GPL Compliance Lab, which helps companies offering GPL'd software follow up on GPL violations. According to the FSF's Executive Director Bradley Kuhn, almost all GPL violations are mistakes rather than wilful infringement.
Debian Weekly News highlighted a LinuxOrbit HOWTO on installing and configuring ALSA. The piece describes the correct "Debian way" to perform the task.
Debian Weekly News reported that people from the Debian-Med subproject have started a Knoppix-Med project. The aim is to include particular pieces of medical software in the Debian-based Knoppix distribution. Details of the procedure are available online.
SuSE Linux has announced a multi-stage product campaign for the corporate desktop deployment of SuSE Linux. Starting January 2003, small and medium-scale enterprises will be able to migrate to Linux on desktops using the "SuSE Linux Office Desktop". "SuSE Linux Enterprise Desktop", a Linux version optimised for desktop deployment in large-scale enterprises, is expected to be released in the first quarter of 2003.
SuSE Linux has also announced that the SuSE Linux Enterprise Server (SLES) has proved itself as a powerful Linux platform for IBM's DB2 Version 8 database software with SLES's latest certification for DB2. SuSE Linux Enterprise Server is the first distribution to be validated on all hardware platforms supported by DB2 for Linux (including IBM zSeries mainframes) and validated to run DB2 Enterprise Server Edition. More information on IBM's DB2 for Linux Validation Program is available online.
The UnitedLinux group has announced the release of Version 1.0 of its UnitedLinux product, a standards-based Linux operating system targeted at the business user. UnitedLinux is the result of an industry initiative to streamline Linux development and certification around a global, uniform distribution of Linux. Founding companies of UnitedLinux are Linux industry leaders Conectiva S.A., The SCO Group, SuSE Linux AG, and Turbolinux, Inc.
A new stable kernel 2.4.20 has been released. A new ancient kernel 2.2.23 has also been released if you're still living in the medieval ages. Get your update at a kernel mirror near you.
Wolfram Research and NEC have collaborated to port Mathematica to NEC's Itanium Linux platform for the upcoming release of Mathematica 4.2.
Appligent has unveiled a new Alliance Program designed to help integrators and consultants develop more powerful electronic document management applications for their clients. Appligent's main product is a range of PDF-related software which supports, among other operating systems, Red Hat Linux.
Opera Software have released Opera 6.1 for Linux for Intel and PowerPC users. The PowerPC version is the first released on this platform since the tentative Opera 5 for Linux Beta in May 2001. In addition to several bugfixes, this release features better font support, with anti-aliasing enabled by default and improved support for Chinese, Japanese and Korean characters. The changelog documents all the new developments in this release.
Cylant has ported the management console for its Linux Intrusion Prevention system to Windows. The Windows console allows administrators to manage CylantSecure server agents from their Windows workstations.
...making Linux just a little more fun! |
By Shane Collinge |
These cartoons are scaled down to minimize horizontal scrolling. To see a panel in all its clarity, click on it.
More adventures of VI-Agra, the vi paperclip assistant, are in the Qubism column this issue, and in the back issues under both HelpDex and Qubism.
Recent HelpDex cartoons are at Shane's new web site, www.shanecollinge.com, on the Linux page. Cartoons during his Asia trip this year are at the CORE CORE web site.
What's this? Shane found it in the Los Amigos hostel in Madrid. I kid you not, it's true.
...making Linux just a little more fun! |
By Javier Malonda |
These cartoons were made for es.comp.os.linux (ECOL), the Spanish USENET newsgroup for Linux. The strips are drawn in Spanish and then translated to English by the author. Text commentary on this page is by LG Editor Iron. Your browser has shrunk the images to conform to the horizontal size limit for LG articles. For better picture quality, click on each cartoon to see it full size.
One creative translation here. "Ecol brand" is really whiskey and Fanta. The Spanish phrase "con naranjita" means "with a little orange", but in practice it means "mixed with Fanta". Fanta is a carbonated orange drink. "A favorite in Europe since the 1940s, Fanta was acquired by the Coca-Cola Company in 1960," says the Coca-Cola web site. (I would link to it but it's a browser-crashing site.) Whiskey and Fanta is popular in Spain, so I'm told. Since neither the words nor the concept translate to English very well, the author changed it to "Ecol brand", haha.
That last browser is Opera. He's being a Valkyrie from Wagner's The Ring. Webster's defines valkyrie as "any of the maidens of Odin who choose the heroes to be slain in battle and conduct them to Valhalla".
Regarding the two main characters, Bilo and Nano, Javier writes, "Bilo and Nano are two students who share a flat. Although their personalities are completely different, they get along good enough. Bilo tries to keep a calm perspective on life, but Nano is pure concentrated bad milk. I don't know much more about them." (Spanish version: http://bilo.homeip.net/ceferino/bilo-nano/bn_index.html).
Javier says the Ecol (the comic strip) started as a joke, "but people liked it and now we have 1000 visits daily and 10 mirrors". Ecol (the organization) -- or escomposlinux.org as it is officially known -- is an all-volunteer organization run on Linux boxen. The staff pay the DSL fee out of their own pockets. Javier is preparing an article for next month about Ecol the organization.
These cartoons are copyright Javier Malonda. They may be copied, linked or distributed by any means. However, you may not distribute modifications. If you link to a cartoon, please notify Javier.
...making Linux just a little more fun! |
By Richard Johnson |
Linux possesses a much-vaunted ability to run on just about any machine you care to throw at it. Well, I've not tried running my washing machine on it yet but I suspect that one day that time will come. In an effort to open up the joys of Linux to a wider audience, the trend in recent Linux distributions has been to wrap up the installation process with a nice user-friendly GUI. SuSE Linux is a case in point. Their installation is run through the YaST installer which sports an attractive interface and makes installing a complex operating system like Linux an almost pleasant experience. There is one drawback however - these graphical installers tend to need a certain amount of, for want of a better word, oomph in your computer. SuSE suggest, for example, that the YaST installer requires a minimum of 64MB RAM on your machine.
Once the operating system is actually installed, however, Linux is capable of running on machines that do not meet this more demanding spec. You can tailor and tweak an installation such that you will be able to get some use out of that old 486 you might have stuck in the attic, gathering dust. It could be used as a spare machine, a router, even a small web server.
Nonetheless, if you try and install a recent distribution of Linux on such a machine you may well find that the attractive and user-friendly installer simply refuses to run.
Recently I myself had occasion to install a modern Linux distro on a reasonably old machine. I had to surmount a few problems in the course of this so I've written up my experiences - in this document - in the hope that it may prove of use to others out there who also wish to put older, otherwise redundant, machines to good use.
For a variety of reasons it had become clear to me that the small company for whom I work needed an intranet site available on our local area network. Nothing too complex was required, just a simple site on which to host various company documents and other information; making them readily available to everyone on the network.
I also realised that this would provide me with a perfect excuse to bring Linux into our otherwise Microsoft-only network.
In the retail business margins are always tight and we simply could not justify the cost of a new powerful machine such as would be required to run, for example, Microsoft's IIS. We did, however, have a spare PC that was currently unused. This PC was an older model which was unable to run our relatively new accounting system and so had found itself replaced and relegated to a corner of my office. I duly designated this PC as our future intranet server.
The spec of this box was relatively low by modern standards - in fact the operating system installed on it was MS-DOS 6.22 with Windows for Workgroups 3.11! The PC had a 200MHz Pentium MMX processor, 16MB of RAM and a relatively generous 2GB of hard disk space. I felt this was ample for the somewhat limited demands that our intranet would place on the box - at least to begin with. I knew that I would probably need to increase the amount of RAM in the box; but RAM is cheap and it would be easy to upgrade once I'd installed Linux, if need be. I decided to crack on with my installation.
I needed only a relatively minimal Linux installation as the box would operate purely as a web server on the local network - installation of an X server was not required. The distribution that I decided to install was SuSE Linux 8. This was because it is the Linux distro that I use on my own PC at home and I'm familiar and comfortable with SuSE's way of doing things. I've tried a few different distros in my time and have pretty much settled on SuSE as my favourite. It was with a certain amount of relish that I set about installing a new operating system from my own set of CDs without having to worry about any visits from the licensing gestapo.
I booted up the PC and hit the F2 key to get into the BIOS. A quick check revealed that I could instruct the PC to boot from the cd-rom drive first so I duly set it to behave like this. I inserted disk 1 from my set of SuSE Linux disks and rebooted the PC with the new bios settings. After successfully booting from the CD I chose to do a 'standard installation' from the SuSE menu that appeared.
The SuSE Linux installation process begins by loading a copy of Linux into your system memory, making use of a ramdisk to provide an initial filesystem rather than using the hard disk. Or at least - it tries to. The system appeared to lock up when it tried to uncompress the ram disk into memory. After waiting a while, mindful of the low spec of the system, just to check that I wasn't simply being impatient, I hit ctrl-alt-del and the system shut itself down gracefully.
I tried again - booting the computer once more from the cd-rom, however this time I tried SuSE's 'safe installation'. Unfortunately the same problem manifested itself just as before.
I suspected that the problem was being caused by the limited physical memory on my box. A quick root around on SuSE's website revealed that the minimum memory suggested for running their setup program, as I mentioned earlier, is 64MB; which is rather more than the 16MB my poor box was blessed with.
Not being one to give up without a fight I booted once more from the cd-rom and hit the F2 key at the initial SuSE screen to start up the text based installation - in the hope that this would require less memory than the fancy framebuffer GUI-based installation that SuSE normally provides. It worked! The initial copy of Linux, ramdisk and all, successfully loaded into the system memory and the text-based YaST installation began. I was asked a couple of questions regarding such matters as my preferred language and then... a message popped up telling me that I did not have sufficient memory to run YaST. The installation had halted once more.
At this stage YaST actually gave me the option of activating a swap partition to provide some virtual memory in lieu of physical memory. Unfortunately I didn't have a swap partition on this box - just one huge 2GB FAT16 DOS partition. It did, however, point up a possible solution to my problem. I realised that if I manually repartitioned my disk before actually running the YaST - providing myself with a genuine Linux swap partition - then I might actually be able to get somewhere.
Not having access to any sophisticated partitioning software, I decided to obtain a Linux boot disk with the Linux version of FDISK - so that I could roll my own partitions.
To this end I downloaded the truly wonderful Tom's Root Boot Disk. This provides a DOS executable that will format an ordinary 1.44MB floppy disk with a complete bootable Linux system; including a small filesystem and all the handy utilities you could need. It even includes FDISK. Tom's Root Boot Disk can be downloaded from http://www.toms.net/rb/ and I cannot recommend it highly enough.
The DOS executable provided by Tom will not run in a command prompt on Windows 2000 - the OS on my usual desktop PC at work. Instead it requires an actual DOS operating system, so I copied the downloaded zip file onto the box that I was trying to install Linux onto - if you recall, it had MS-DOS 6.22 installed on it - and unzipped it. Rebooting a Win95/98 PC in MS-DOS mode would also provide you with a suitable environment. I stuck a floppy in the appropriate drive and let Tom's program create my boot disk for me. A painless procedure. Finally I re-booted the computer using my newly created Linux boot disk.
After pausing to marvel for a moment at how darned clever Tom's Root Boot Disk is, I got to work. I should mention at this point that FDISK misbehaved the first time I tried it. I thought I'd zapped the original DOS partition and created for myself a sparkling new Linux swap partition but it turned out that FDISK had misreported the number of heads, cylinders and sectors on my disk. Thus, when it wrote the new partition table it made a right pig's ear of it. I didn't realise this until YaST started throwing up bizarre errors about my disk. I rebooted with Tom's root boot and tried again. The second time around FDISK behaved itself and all was well. Though - left somewhat paranoid by FDISK's behaviour - to make sure that it was now correctly detecting the details of my hard disk I actually took the PC apart so that I could check the label on the disk itself!
FDISK is often cited as being a scary bit of software to use, but I've always found it to be quite straightforward myself. To launch FDISK you type:
#> fdisk /dev/hda
Assuming, of course, that the hard disk to be partitioned is the first IDE disk. You have to tell FDISK which device you want to partition, in this case /dev/hda. If you have any doubts about FDISK's syntax you can check out the man page - yes, Tom's root boot disk even provides man pages for your edification and delight!
Once you've started up FDISK you control its behaviour with single letter commands. Type 'm' (without the quotes) for a list of the available commands. 'p' prints out the current partition details on screen for reference.
First you need to delete the existing partition by typing 'd' and specifying, when prompted, which partition number you wish to delete. The partition numbers are revealed when you print the partition details on screen. Bear in mind that FDISK doesn't actually make any changes to your disk until you use the 'w' command to write your changes. If you screw up you can just type 'q' to quit without saving your changes. Once you have written them, however, there is no going back so be careful. New partitions are added to the disk with the 'n' command. New partitions will be normal Linux partitions by default so you'll need to use the 't' command to change the new partition's type to Linux swap. You need to know the hex code for swap partitions when you change the partition type and you can get this by using the 'l' command to list all the different partition types supported by FDISK. Linux swap is type 82. (A sample session is sketched below.)
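For example, replacing a single existing DOS partition with a swap partition plus a main Linux partition goes roughly like this (heavily abridged - real FDISK also asks for the first and last cylinder of each new partition, and the exact prompts vary a little between versions):

#> fdisk /dev/hda

Command (m for help): p                  (print the existing table)
Command (m for help): d                  (delete the old DOS partition)
Partition number (1-4): 1
Command (m for help): n                  (new partition, for swap)
Command (m for help): t                  (change its type)
Partition number (1-4): 1
Hex code (type L to list codes): 82      (82 = Linux swap)
Command (m for help): n                  (new partition, for Linux)
Command (m for help): w                  (write the table and exit)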
In my case I created two separate primary partitions on the disk. The first was a Linux swap partition; 128MB in size. The second partition was a standard Linux partition taking up the rest of the disk. I then formatted my partitions. For the swap partition I used the following command:
#> mkswap -c /dev/hda1
This sets up a Linux swap area on partition 1 of device hda. The -c flag tells mkswap to check the partition for bad blocks. The second Linux partition I formatted as a Linux Second Extended Filesystem with the command:
#> mke2fs -c /dev/hda2
The syntax, as you can see, is rather similar to the mkswap command.
Once I'd created my partitions I booted once more from the SuSE Linux CD - again pressing F2 to opt for the text based installation. This time, when asked if I wanted to activate a swap partition, I could specify the partition that I had just created which was located at /dev/hda1. YaST then proceeded without a hitch - if a little slowly.
From this point onwards the installation was relatively straightforward. SuSE's YaST is a very good setup tool, striking just the right balance between sophistication and user-friendliness, IMHO. The only problem that I needed to watch out for was with YaST's own disk partitioner. This recognised my existing partitions and suggested reformatting the second partition with the ReiserFS journalling filesystem - which I was more than happy to do - however, it also suggested reformatting the swap partition. I thought it best not to let it do this as I suspected it might cause problems if I tried to format a swap partition that was in active use...
YaST is good but memory hungry and it certainly made extensive use of the swap partition that I'd created. Once Linux was installed and running I was able to tune my installation to ensure that only the necessary services were running and it ran tolerably well with just the 16MB of RAM. Notwithstanding a lot of activity on the swap partition of course.
In the end I did install more memory in my box. This made the system more responsive and better able to cope with the demands placed upon it by multiple users over the network. Yet I was nonetheless impressed by how well Linux - which is a powerful modern operating system after all - ran on a system with such limited resources.
...making Linux just a little more fun! |
By Janine M Lodato |
This article explores Linux's potential role in assistive technology (AT). AT allows those living with multiple sclerosis, other handicaps, or the effects of aging to take greater control in maintaining their health and living independently.
One of the most criminal and immoral aspects of the monopolistic practices of Microsoft, which for all practical purposes eliminated or curtailed competition, is the fact that PCs today are
These negative attributes of the Windows world make the PCs of today useless for the truly needy:
Now that Linux is available, it is feasible to approach this very large market using a low-cost, rugged and simple client system. Linux-based client systems connected to Linux servers are perfect for such end-to-end AT offerings. The reliability and simplicity of Linux, coupled with low-cost Linux-based hardware, platforms and applications, make it the only solution for these end users who need AT capabilities.
A very significant improvement in self-supported health can be achieved using assistive technologies (AT) connected via the Web. Recent scientific studies by major universities in the field of behavioral medicine, including psychoneuroimmunology (PNI), indicate that getting involved with collaborative group activities has significant rehabilitation potential. In fact, behavioral medicine can prevent disease, improve quality of life, and aid rehabilitation. Of course it does not replace pharmaceuticals, but it does improve their effectiveness.
It is suggested that the collaborative virtual community systems, based on Web-connected AT clients and servers, supporting the disabled and the aging can also be used for the able-bodied eyes-busy, hands-busy professionals to improve their productivity. Also learning-disabled children can make very good use of AT. This low cost set of AT platforms and associated Web connectivity could be very useful in many government and commercial employment arenas. This dual-use type approach will significantly lower the cost of the needed technologies for all groups.
Of course there is still work to be done. Applications for AT technologies must be developed or perfected to allow collaboration between the health service professionals or social worker professionals and the many people in need. Web connected AT oriented software components running on Linux client machines connected to Linux servers have to be created such as
Through such systems the professionals can monitor, mentor and moderate and even medicate the members of the collaborative community. For a good example: when dealing with students with learning disabilities, it is important to get their attention, to bolster their behavior and finally to improve their cognitive productivity. With assistive technology people can prevent further destruction of their faculties, improve their quality of life and can even be rehabilitated somewhat. Just the idea of being productive adds to a person's self-esteem enormously.
I have many years of personal experience using AT and found it very helpful in SPMS (secondary progressive multiple sclerosis) conditions as described below in a brief review of my personal experiences.
In addition to my extensive experience with AT I also have related graduate credentials from both California State Univ at Northridge (the center for AT corporate interactions) as well as CSU in Sacramento and UOP in Stockton.
In spite of my handicap, I find it gratifying and fulfilling to concentrate my efforts on projects worthwhile to a very deserving community. Involvement such as this has proved to have healing powers for me. I am living proof of the powers of PNI based on personal involvement.
Having relied on AT in order to survive my wheelchair imprisonment, specifically voice recognition for writing, I see dual value: one for the hands-busy, eyes-busy professionals increasing their productivity through ease of use, and the other, of course, for use by the physically disabled.
Being disabled with MS, I use IBM ViaVoice on a Mac to write. It allows me to verbally communicate by email with my friends as well as giving me the opportunity to express myself and get involved with worthwhile projects in the AT arena.
Typically voice recognition systems spell very well but now and then some of them do make typos which really take the cake:
I receive enduring fulfillment from developing my intellectual strengths and putting them to positive use. I learn from my negative experiences which have been many in my 54 years of existence and I savor my positive experiences to learn optimism.
The best way to use these intellectual strengths is to get involved with collaborative teamwork and personal communications within the disabled community and with companies who provide assistive technologies for this community.
It is important for me to maintain what little health I have and to become involved in something I hold great faith in. So I have decided to become involved in the latest AT systems available to people with disabilities. I am especially interested in technologies that help the disabled express themselves, such as voice recognition for writing and voice-activated telephone service for talking.
There are many AT type technologies that focus on, and make good use of the physical abilities a disabled person may still have such as voice, lip movement, eye motion and brain waves. These capabilities can be used with brain-actuated computer systems and voice recognition software, to name a few. Integrating these already-existing technologies into something usable by disabled clients so they can express themselves will offer them freedom in spite of their handicap.
Understanding that there are companies already seeking to address this market makes my involvement in the area that much easier and completely natural. Finding companies geared toward brain-actuated computer control systems is my next assignment.
As a handicapped woman who still has control of her mental faculties and voice, I have something to offer by connecting the right people so that I can integrate systems through the Internet to develop a mutually beneficial virtual community.
Personal communications and collaborative teamwork need assistive technologies to further the self-esteem of the disabled. Linux, due to its low cost, open architecture and international development, provides an ideal platform for building these technologies. Those living with handicaps (and their relatives and friends) can make a unique contribution to this effort because they know firsthand what benefits AT can provide.
Involvement in AT projects can help disabled people in another way too. Not only does it provide a distraction from their problems, but it's also a constructive way to spend their time while furthering a cause they believe in.
The positive rehabilitative effects of behavioral medicine are my method of surviving and thriving until a final cure for MS is developed.
[LG would like to see additional articles and Mailbag letters about Linux's applicability in assistive technology. If you have any ideas, let us know. -Ed.]
...making Linux just a little more fun! |
By Patrick Mahoney |
Several articles related to this topic appeared in the last few issues of the Linux Gazette. I plan to approach it in a much less programming oriented manner, only presenting to the reader the tools and tips he will need to begin the development of his own OS. Once done with this article, the interested reader should be all set to start browsing the resources available to him and start designing and coding.
You might not be aware of it, but operating system development doesn't start at the beginning. (!!) Writing a solid bootloader is a whole project in itself, and I would not advise one to begin an OS development project by writing a bootloader. Many reliable ones are available for free (Grub, lilo, ppcboot, etc...). If you plan on writing your own, I suggest you delay this task to a later stage of the project. In this article, I will be using GNU Grub, the Grand Unified Bootloader.
This article will present one of many possible development environments which meet these requirements. It will consist of a development machine and a testbed machine that both lie on a common network.
A tool I found more useful than I initially thought it would be is an emulator. Such a tool will help debug your kernel and will allow you to rapidly test your newly added line of code. Don't be fooled, though. An emulator never replaces a good ol' testbed machine.
Next, you need a TFTP server. This tool will allow your testbed machine's tftp enabled bootloader to acquire a kernel from the development machine via the network connection.
Bochs version 1.4.1 is the chosen x86 emulator. Special care should be taken to compile it with debugger mode enabled. These commands should do the job:
$ ./configure --enable-x86-debugger
$ make

In order to properly use Bochs, you need to create a disk image. This image needs to have both a bootloader and a filesystem. This can be done using the mkbimage script. If you're too lazy to do it yourself, grab this gzipped 10MB disk image and add
diskc: file=c.img, cyl=24, heads=16, spt=63

to your .bochsrc file.
As for the TFTP server, I chose to use atftpd. It's an easy-to-use, Linux-based TFTP server implementation.
$ ./configure --enable-ne --enable-ne-scan=0x220
$ make

Note that a PnP PCI card would be easier to configure. Now, you can either install the Grub images on the testbed machine's MBR or on a floppy which your testbed machine will boot from. I prefer the latter, since my testbed machine is also used for other purposes, and therefore, I'd rather not play with its HD.
$ cat ./stage1/stage1 ./stage2/stage2 > /dev/fd0

Now just insert your floppy in your testbed machine to see if your network card gets recognized. You can either configure it by hand or use a dhcp server, if any.
grub> dhcp
Probing... [NE*000] NE2000 base 0x220, addr 00:C0:A8:4E:5A:76
Address: 192.168.22.14   Netmask: 255.255.255.0
Server: 192.168.22.1     Gateway: 192.168.22.1

Note that you won't have to configure these parameters by hand each time you boot. See the GNU Grub documentation and the 'grub-install' script for details.
That's it! You're all set to test your setup!
The kernel is built from three source files: boot.S, kernel.c and multiboot.h. You can build the kernel by doing:
$ gcc -I. -c ./boot.S
$ gcc -I. -c ./kernel.c
$ ld ./kernel.o ./boot.o -o kernel -Ttext 100000

Here's a quick and incomplete explanation. Multiboot is a standard that defines a way for the bootloader to pass information to the kernel it tries to load. boot.S accepts this information, sets up a stack, and calls 'cmain'. This function sets up the VGA display, reads the information passed to it, prints some stuff and leaves. Then, boot.S gets control back, prints the string 'Halted.', and enters an infinite loop. Pretty simple stuff, right? The reader is invited to dig into the code to get more details.
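To make the handshake a bit more concrete, here is a sketch of the Multiboot constants involved. The numeric values come from the Multiboot specification; the names and the C rendering are only illustrative - in the real demo kernel the header itself is emitted by assembler directives in boot.S:

/* Header that must sit near the start of the kernel image so the
 * bootloader recognizes it as a Multiboot kernel. */
#define MULTIBOOT_HEADER_MAGIC      0x1BADB002
#define MULTIBOOT_HEADER_FLAGS      0x00000003   /* page-align modules, pass memory info */
#define MULTIBOOT_CHECKSUM        -(MULTIBOOT_HEADER_MAGIC + MULTIBOOT_HEADER_FLAGS)

/* What a compliant bootloader leaves behind: this value in EAX, and a
 * pointer to its information structure in EBX. boot.S pushes both and
 * calls cmain(), which can then verify that it was loaded correctly. */
#define MULTIBOOT_BOOTLOADER_MAGIC  0x2BADB002

void cmain(unsigned long magic, unsigned long addr)
{
    if (magic != MULTIBOOT_BOOTLOADER_MAGIC)
        return;                     /* not loaded by a Multiboot loader */
    /* ... otherwise parse the info structure that starts at 'addr' ... */
}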
# /sbin/losetup -o 32256 /dev/loop1 ./c.img
# /bin/mount -t ext2 /dev/loop1 /mnt/osdev/
# cp .../docs/kernel /mnt/osdev
# umount /mnt/osdev/
# /sbin/losetup /dev/loop1 -d
$ bochs

Of course, that can be automated by your Makefile. Once in Grub, simply do:
grub> kernel (hd0,0)/kernel grub> boot
# /usr/sbin/atftpd --daemon /home/bono/src/grub-0.92/docs

Fire off your testbed machine. Configure your network connection as shown above. Next, specify your devel machine's ip address as the TFTP server address and the location of the kernel image. Note that this option can be set by the dhcp server. Finally, start the boot process.
(...)
grub> tftpserver 192.168.22.36
Address: 192.168.22.14   Netmask: 255.255.255.0
Server: 192.168.22.36    Gateway: 192.168.22.1
grub> kernel (nd)/kernel
   [Multiboot-elf, <0x100000:0x807:0x0>, <0x101808:0x0:0x4018>, shtab=0x106190, entry=0x100568]
grub> boot

A screen similar to that of Bochs should appear on your testbed machine's display.
If your debugging needs come to outgrow both the emulator and your kernel's printk's, one setup you could add to your OS is a serial debugger. This can range from some bytes thrown on the serial port, to a gdb-compatible remote-debugging extension. This information could be retrieved and processed by your development machine through a null-modem serial cable. It's a handy common practice in OS development.
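As a taste of the simplest variant - a few bytes thrown at the serial port - here is a minimal polled-output sketch for x86, assuming a standard UART at COM1 (I/O base 0x3f8). It skips UART initialization (baud rate, line format), which a real kernel would want to do first:

/* Raw port I/O, as usually done in a freestanding x86 kernel. */
static inline void outb(unsigned short port, unsigned char val)
{
    __asm__ __volatile__ ("outb %0, %1" : : "a"(val), "Nd"(port));
}

static inline unsigned char inb(unsigned short port)
{
    unsigned char ret;
    __asm__ __volatile__ ("inb %1, %0" : "=a"(ret) : "Nd"(port));
    return ret;
}

#define COM1 0x3f8

static void serial_putchar(char c)
{
    /* Bit 5 of the Line Status Register: transmit holding register empty. */
    while (!(inb(COM1 + 5) & 0x20))
        ;
    outb(COM1, c);
}

static void serial_puts(const char *s)
{
    while (*s)
        serial_putchar(*s++);
}

On the development machine, a null-modem cable and any terminal program then show whatever the kernel prints.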
...making Linux just a little more fun! |
By Mark Nielsen |
For my setup, I was using efax, which is not that easy to get along with. For any sane person, I recommend HylaFax or some other alternative (mgetty has some hope).
Please read my other efax article at Linux Focus.
I have a directory, /usr/local/apache2/htdocs/fax, where I put in my Perl script and .htaccess files.
Underneath this directory, I have these directories:
AuthName Test
AuthType Basic
AuthUserFile /usr/local/apache2/passwords/Passwords
order deny,allow
require user mark ted
You can change/add passwords with htpasswd.
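For example, assuming the AuthUserFile path from the .htaccess above (the -c flag creates the password file and is only needed the first time):

$ htpasswd -c /usr/local/apache2/passwords/Passwords mark
$ htpasswd /usr/local/apache2/passwords/Passwords ted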
Next, the last thing is to create a perl script. Here is my very crude Perl script. If I ever do anything else with it, I will convert it to a Python script first as Python is the next wave for programming (I hope). Python, Zope, Apache, Linux, and PostgreSQL are the top choices for my programming environment. Save it as "fax.pl" and perform a "chmod 755 fax.pl" after saving it.
You can download it or just view it below.
#!/usr/bin/perl

use CGI;

print "Content-type: text/html\n\n\n";

my $Home = "/usr/local/apache2/htdocs/fax";
my $Source = "$Home/source";
my $Archives = "$Home/archives";
my $AB_Archives = "$Home/ab";
my $Display = "$Home/display";
my $Home_Archives = "$Home/home";

`mkdir -p $Source`;
`mkdir -p $Archives`;
`mkdir -p $Display`;
`rsync -av /var/spool/fax/incoming/fax* $Source`;
`mkdir -p $AB_Archives`;

#------------------------------------
my @Files = <$Source/fax*>;
foreach my $File (@Files)
  {
  # print "$File\n";
  my (@Temp) = split(/\//, $File);
  my $File_Name = pop @Temp;
  if (!(-e "$Archives/$File_Name\.pdf"))
    {
    print "<br>Processing new fax: $File\n";
    my $Command = "tiff2ps $File > $Archives/$File_Name\.ps";
    # print "$Command\n";
    `$Command`;
    my $Command = "/usr/bin/ps2pdf $Archives/$File_Name\.ps $Archives/$File_Name\.pdf";
    # print "$Command\n";
    `$Command`;
    `cp $Archives/$File_Name\.pdf $Display/$File_Name\.pdf`;
    }
  }

#---------------------------------------
my $query = new CGI;
my $Action = $query->param('action');
my $File = $query->param('file');
$File =~ s/[^a-zA-Z0-9\_\.]//g;

if (!(-e "$Display/$File")) {}
elsif ($Action eq "archive")
  {
  print "<br>Archiving $File\n";
  `rm -f $Display/$File`;
  }
elsif ($Action eq "archive2")
  {
  print "<br>Archiving $File\n";
  `cp $Display/$File $AB_Archives/`;
  `rm -f $Display/$File`;
  }
elsif ($Action eq "archive_home")
  {
  print "<br>Archiving $File\n";
  `cp $Display/$File $Home_Archives/`;
  `rm -f $Display/$File`;
  }

print qq(<hr><a href="archives/">Archives</a> -- might be password protected.
<br><a href="home/">Home Archives</a> -- might be password protected.
<br><a href="ab/">Audioboomerang Archives</a>\n);

my $Table_Entries = "";
my @Files = <$Display/fax*>;
foreach my $File (sort @Files)
  {
  my (@Temp) = split(/\//, $File);
  my $File_Name = pop @Temp;
  my $Link = "<a href='display/$File_Name'>$File_Name</a>";
  my $Delete = "<a href='fax.pl?action=archive&file=$File_Name'>archive file</a>";
  my $AB = "<a href='fax.pl?action=archive2&file=$File_Name'>archive to AB</a>";
  my $Home = "<a href='fax.pl?action=archive_home&file=$File_Name'>archive for Home</a>";
  $Table_Entries .= qq(<tr><td>$Link</td><td>$Delete</td><td>$Home</td><td>$AB</td></tr>\n);
  }

print "<table border=1><tr><th>View Fax</th><th>Archive the Fax</th>
 <th>Archive to AudioBoomerang</th></tr>\n";
print $Table_Entries;
print "</table>\n";

if (@Files < 1) {print "<h1> No faxes or they are all archived.</h1>\n";}
I am not sure what other fax setups utilize the web, but from my perspective, I always want to have access to my faxes over the web or to send a fax over the web.
...making Linux just a little more fun! |
By Ben Okopnik |
The e-mail was short, succinct, and got right to the point.
Woomert - I'll be short, succinct, and get right to the point. Three-company merger. Nervous sysadmin. 3000+ users. /etc/passwd. UIDs. Regards, Frink Ooblick
Woomert Foonly, the Hard-Nosed Computer Detective, chuckled to himself. The client had been rather loud and incoherent on the phone, with "It doesn't work!" and "I need help!" being the chief features of his conversation. Woomert had sent Frink to the site to reconnoiter, and the above was the highly satisfactory result. All that remained was to come up with the solution; given that only a few short hours remained before the client shut down for the day, Woomert decided to use his time productively. Let's see - where was his favorite pillow?...
Refreshed and ready, Woomert appeared at the site, and immediately encountered a rather excited Frink.
- "Woomert, it's terrible! The file is far too long to search manually, and the UIDs are all over the map. The sysadmin is contrite, frantic, and panicked by turns, and his hair is almost all gone. What can we do?"
- "No worries, mate... oh, sorry. I was just in Canberra a few hours ago, and some of the influence is still with me. I can tell you from horrible experience that tomorrow will be even worse: I've got to be in Dallas in the morning, New York in the afternoon, and Tel Aviv in the evening. I would advise you to wear earplugs, or absent yourself from my environs until the accents fade. Ah, the perils of travel..."
Frink was becoming visibly upset.
- "Woomert - you're not taking this seriously. Can't you see that this is a major problem?"
- "Oh, this? Relax, take it easy. It's not nearly as bad as it looks, Frink; in fact..."
Woomert deftly extracted his favorite typing gloves from his pocket and slipped them on.
- "...Perl makes it rather trivial. What we'll do is give the sysadmin a couple of command-line tools that he can use to resolve this problem, and - since he's using 'bash' - he'll be able to pull them up with the 'up-arrow' key as he needs them. Here we go!"
perl -F: -walne'$h{$F[2]}.="$F[0] ";END{$h{$_}=~/ ./&&print"$_: $h{$_}"for keys%h}' /etc/passwd

A list of duplicate UIDs, along with their related usernames, scrolled down the screen after Woomert pressed the "Enter" key. Both Woomert and Frink noted with interest that there was a triple entry for UID 0 -
0: root sashroot kill3r
- "Well, well. Looks like somebody managed to break in and give themselves a UID0 (root) account. 'sashroot' is OK - that's the 'standalone shell' for those rough repair jobs - but 'kill3r'? Well, we'll let the client know; meanwhile, on with the current problem. The sysadmin will now have a list of all the duplicates - there don't seem to be all that many - but searching for the next available UID could be a pain. So, here's a second tool -"
- "That should give him a good start on getting it all straightened out. As for us - we're homeward bound!"
perl -wle'{getpwuid++$n&&redo;print$n}'
When they had returned to Woomert's house and were seated in front of the fireplace - the night had been a cold one, and the wind whistled outside the window - Frink looked expectantly at Woomert. Noting the look, Woomert laughed.
- "I know, I know. I should explain, shouldn't I? The air of mystery is a sharp, pleasant thing, but it is as nothing compared to the pleasure of learning. Here, let's start with the first one:
"First, take a look at the command-line switches I used:"
perl -F: -walne'$h{$F[2]}.="$F[0] ";END{$h{$_}=~/ ./&&print"$_: $h{$_}"for keys%h}' /etc/passwd
-w   Enable warnings
-a   Autosplit (see "-F")
-l   Enable line-end processing
-n   Implicit non-printing loop
-e   Execute the following commands
-F:  Use ':' as the separator for the '-a' autosplit

"If you remember our last adventure, all of the above except '-a' and '-F' are already familiar to you. Autosplitting splits the lines read in by '-n' or '-p', using whitespace as a default separator and saving the result in the '@F' array. '-F' optionally redefines the separator by which to split."
"Since we're reading in '/etc/passwd', let's look at the format of the individual lines in it:"
borg:x:1026:127:All your base are belong to us!:/home/borg:/bin/bash

"There are seven standard fields, laid out as 'name - passwd - UID - GID - GECOS - dir - shell'. The only things we're interested in for the moment are name and UID; what I'm going to do is build a hash - a very important data structure in Perl, one of the three basic ones - that contains the UID (3rd field) as the key, and the name (1st field), followed by a space, as the value, for all the entries in '/etc/passwd':

$h{$F[2]}.="$F[0] "

"Since usernames can't have spaces in them, it makes a convenient separator. Once that's done, I'll loop over the hash and print out any value which contains a space followed by any character:"

$h{$_}=~/ ./&&print"$_: $h{$_}"for keys%h}

"I see you still look puzzled. Here, let me write out the above in a more readable form:"
for ( keys %h ){
# Loop over the "%h" hash
if ( $h{$_} =~ / ./ ){
# Does the value contain a space followed by anything?
print "$_: $h{$_}\n";
# If so, print the UID, a colon, a space, and the value
}
}
"If you think about it, you'll see that the only thing that will match the above regex is a value with more than one name in it - meaning a duplicate UID."
- "All right - now I can see how you got the results. What about the second expression, the 'next available UID' tool?"
- "Ah, you mean this one:"
"It's nothing but a short loop in which I check if the UID specified by '$n' exists. If that test succeeds - meaning that there is a UID equal to '$n' in use - 'redo' gets invoked, '$n' is incremented, and the test happens again. If it fails, however, '$n' is printed to STDOUT and the program exits. Useful, and not too complicated. Just a bit of work, and they should have it all done. The security breach is something else, but at least now they know about it..."
...making Linux just a little more fun! |
By Mike ("Iron") Orr |
Here's what happened to me a few years ago when computers were not so cheap and a group of 5 very old machines were worth saving from a flood.
I was working in a laboratory at the time. We had a room with 2 big microscopes and 5 old Macs used for image analysis. The room ended up flooded during the night after an autoclave (a kind of big pressure cooker that biologically-inclined geeks use to sterilize things) broke down. Although the microscopes were safe, the table with the Macs got hit. All the machines ended up covered with muddy, rusty water.
The next morning, I decided to bring the Macs to the lab to dismantle them. By chance the drives and power supply were dry but the motherboards were in really bad shape.
I washed all the cards in distilled water and then in alcohol. I then put them in an oven at 40 degrees Celsius for a day.
Everyone was smiling at me until I rebuilt the Macs and got them running again. At the time, I did not know that this is in fact how electronic boards are washed, and I was not really sure of the result until it came.
In the meantime, the machines got reimbursed by the insurance company, which did not consider it worth getting the old ones back, so we doubled our investment in the computers.
Once upon a time (maybe 4-5 years ago), I had an 80286 case for old times' sake. But its floppy drive wasn't working, so I decided to use my Pentium II's floppy with it. I was trying to install a DOS 6.22 system on it, I think. But I wasn't able to take the floppy out of its original case. A bit acrobatically, I put the cases side by side and connected the floppy to the 286 with a long cable. Everything was OK.
But there was something wrong. (Did I mention the Pentium was where my father did his civil-engineering work?) The floppy's LED was on continuously. I was in a hurry and didn't suspect that the cable was plugged in wrong. The PC didn't boot. The system was down. It seemed like doom. I got angry and started to hit the floppy drive with a hammer. Only after that did I think of the cable. HIT! Everything seemed to be OK then. But I ended up with a damaged HDD - it was right below the floppy. The real doom :) The good thing was that I had that drive backed up.
So never work without backups and hammers when working inside the case.
Probably the most expensive learning experience in my history was hooking up a second drive, a used 20MB Miniscribe SCSI 3.5", as the second in a chain to my Amiga 2000 years ago. I didn't know about SCSI termination, and back then it was really important. I watched in dazed amazement as a single wire on the cable smoked and burnt down toward the first drive, like the black powder burning toward the weapons room in a Looney Tunes cartoon featuring Bugs Bunny and Yosemite Sam as the pirate. At probably the last possible second, I broke the trance and lunged forward, groping around the back of the machine for the power switch. I got it just in time. I only lost the 20MB SCSI drive.
About a year later, I got a job at a used-computer retail store and found a dead Miniscribe 20MB among the waste products. I removed its controller card, swapped it with the controller on my drive, and brought my drive back to life. That evening it was resting on the corner of a desk, a co-worker bumped it, and it fell onto the concrete floor. That was the end of that drive.
I think I paid $200 for the drive, used. Needless to say, I shortly became an expert in proper termination of SCSI chains. :)
Years ago now (about 1989 or so) I was the grateful recipient of an old XT that no one wanted. I hadn't had much to do with computers up until then on the hardware side - but this one came in pieces, so it was a matter of getting my sleeves rolled up an' putting it all together.
It was great - a complete change from the ol' Commodore 64 and Plus 4 that I'd played with before. But I kept getting this weird problem. The hard drive (a monster and a half) -- all of twenty megabytes, and in a double-height casing (so it weighed a ton) -- was connected to the IDE controller card, which in turn was seated in the motherboard. When switching on the computer everything was fine. The old XT booted up with its old OS (DOS 3, I think) and worked fine. But whenever I tried to format or delete any of the old stuff on it, it seemed fine until the next reboot, when everything was still there. Weird.
So I took it along to a computer shop with a workshop and admitted to being completely baffled by the phenomenon. The techie took one look at the ribbon cable connecting the hard drive to the IDE controller, unplugged it, and plugged it back in so that it was seated over BOTH rows of pins. I had plugged the cable in so that one whole row of pins had been missed.
Needless to say, I was one really embarrassed teenager! Needless to say as well that it has never happened again - some mistakes are just too stupid to repeat!
[If you have a story about something foolish or ingenious you did to your computer, send it to gazette@ssc.com. -Iron.]
...making Linux just a little more fun!
By Ariel Ortiz Ramirez
In my previous article, I introduced the C# programming language and explained how it works in the context of the Mono environment, an open-source implementation of Microsoft's .NET framework. I will now go on to some details on the data types supported by the C# programming language.
In the subsequent discussion I will use the following diagrammatic notation to represent variables and objects:
The variable diagram is a cubic figure that depicts three traits (name, value and type) relevant during the compilation and execution of a program. In the von Neumann architecture tradition, we will consider a variable as a chunk of memory in which we hold a value that can be read or overwritten. The object diagram is a rounded-edge rectangle that denotes an object created at runtime and allocated in a garbage-collectable heap. For any object at a certain point in time, we know what type (class) it is and the current values of its instance variables.
In the C# programming language, types are divided into three categories: value types, reference types, and pointer types.
In a variable that holds a value type, the data itself is directly contained within the memory allotted to the variable. For example, the following code
int x = 5;
declares a 32-bit signed integer variable, called x, initialized with a value of 5. The following figure represents the corresponding variable diagram:
Note how the value 5 is contained within the variable itself.
On the other hand, a variable that holds a reference type contains the address of an object stored in the heap. The following code declares a variable called y of type object which gets initialized, thanks to the new operator, so that it refers to a new heap-allocated object instance (object is the base class of all C# types, but more on this later).
object y = new object();
The corresponding variable/object diagram would be:
In this case, we can observe that the "value" part of the variable diagram contains the start of an arrow that points to the referred object. This arrow represents the address of the object inside the memory heap.
Now, let us analyze what happens when we introduce two new variables and do some copying from the original variables. Assume we have the following code:
int a = x; object b = y;
The result is displayed below:
As can be observed, a has a copy of the value of x. If we modify the value of one of these variables, the other variable would remain unchanged. In the case of y and b, both variables refer to the same object. If we alter the state of the object using variable y, then the resulting changes will be observable using variable b, and vice versa.
Aside from references into the heap, a reference type variable may also contain the special value null, which denotes a nonexistent object.
Continuing with the last example, if we have the statements
y = null; b = null;
then variables y and b no longer refer to any specific object, as shown below:
As can be seen, all references to the object instance have been lost. The object has now turned into "garbage", because no live reference to it exists. As noted before, in C# the heap is garbage collected, which means that the memory occupied by these "dead" objects is at some point automatically reclaimed and recycled by the runtime system. Other languages, such as C++ and Pascal, do not have this kind of automatic memory management scheme; programmers in these languages must explicitly free any heap-allocated memory chunks that the program no longer requires. Failing to do so gives rise to memory leaks, in which certain portions of a program's memory are wasted because they haven't been marked for reuse. Experience has shown that explicit memory de-allocation is cumbersome and error prone. This is why many modern programming languages (such as Java, Python, Scheme and Smalltalk, just to name a few) also incorporate garbage collection as part of their runtime environment.
Finally, a pointer type gives you capabilities similar to those found with pointers in languages like C and C++. It is important to understand that both pointers and references actually represent memory addresses, but that's where their similarities end. References are tracked by the garbage collector; pointers are not. You can perform pointer arithmetic on pointers, but not on references. Because of the unwieldy nature associated with pointers, they can only be used in C# within code marked as unsafe. This is an advanced topic and I won't go deeper into it at this time.
C# has a rich set of predefined data types which you can use in your programs. The following figure illustrates the hierarchy of the predefined data types found in C#:
Here is a brief summary of each of these types:
| Type | Size in Bytes | Description |
|---|---|---|
| bool | 1 | Boolean value. The only valid literals are true and false. |
| sbyte | 1 | Signed byte integer. |
| byte | 1 | Unsigned byte integer. |
| short | 2 | Signed short integer. |
| ushort | 2 | Unsigned short integer. |
| int | 4 | Signed integer. Literals may be in decimal (default) or hexadecimal notation (with an 0x prefix). Examples: 26, 0x1A |
| uint | 4 | Unsigned integer. Examples: 26U, 0x1AU (mandatory U suffix) |
| long | 8 | Signed long integer. Examples: 26L, 0x1AL (mandatory L suffix) |
| ulong | 8 | Unsigned long integer. Examples: 26UL, 0x1AUL (mandatory UL suffix) |
| char | 2 | Unicode character. Example: 'A' (contained within single quotes) |
| float | 4 | IEEE 754 single precision floating point number. Examples: 1.2F, 1E10F (mandatory F suffix) |
| double | 8 | IEEE 754 double precision floating point number. Examples: 1.2, 1E10, 1D (optional D suffix) |
| decimal | 16 | Numeric data type suitable for financial and monetary calculations, exact to the 28th decimal place. Example: 123.45M (mandatory M suffix) |
| object | 8+ | Ultimate base type for both value and reference types. Has no literal representation. |
| string | 20+ | Immutable sequence of Unicode characters. Example: "hello world!\n" (contained within double quotes) |
C# has a unified type system, such that a value of any type can be treated as an object. Every type in C# derives, directly or indirectly, from the object class. Reference types are treated as objects simply by viewing them as object types. Value types are treated as objects by performing boxing and unboxing operations. I will go deeper into these concepts in my next article.
C# allows you to define new reference and value types. Reference types are defined using the class construct, while value types are defined using struct. Let's see them both in action in the following program:
struct ValType {
    public int i;
    public double d;

    public ValType(int i, double d) {
        this.i = i;
        this.d = d;
    }

    public override string ToString() {
        return "(" + i + ", " + d + ")";
    }
}

class RefType {
    public int i;
    public double d;

    public RefType(int i, double d) {
        this.i = i;
        this.d = d;
    }

    public override string ToString() {
        return "(" + i + ", " + d + ")";
    }
}

public class Test {
    public static void Main (string[] args) {

        // PART 1
        ValType v1;
        RefType r1;
        v1 = new ValType(3, 4.2);
        r1 = new RefType(4, 5.1);
        System.Console.WriteLine("PART 1");
        System.Console.WriteLine("v1 = " + v1);
        System.Console.WriteLine("r1 = " + r1);

        // PART 2
        ValType v2;
        RefType r2;
        v2 = v1;
        r2 = r1;
        v2.i++;
        v2.d++;
        r2.i++;
        r2.d++;
        System.Console.WriteLine("PART 2");
        System.Console.WriteLine("v1 = " + v1);
        System.Console.WriteLine("r1 = " + r1);
    }
}
First we have the structure ValType. It defines two instance variables, i and d, of type int and double, respectively. They are declared as public, which means they can be accessed from any part of the program where this structure is visible. The structure defines a constructor, which has the same name as the structure itself and, contrary to method definitions, has no return type. Our constructor is in charge of the initialization of the two instance variables. The keyword this is used here to obtain a reference to the instance being created; it has to be used explicitly in order to avoid the ambiguity generated when a parameter name clashes with an instance variable name. The structure also defines a method called ToString, which returns the external representation of a structure instance as a string of characters. This method overrides the ToString method (thus the use of the override modifier) defined in this structure's base type (the object class). The body of this method uses the string concatenation operator (+) to generate a string of the form "(i, d)", where i and d represent the current values of those instance variables, and finally returns the expected result.
As can be observed, the RefType class has basically the same code as ValType. Let us examine the runtime behavior of variables declared using both types so we can further understand their differences. The Test class has a Main method that establishes the program entry point. In the first part of the program (marked with the "PART 1" comment) we have one value type variable and one reference type variable. This is how they look after the assignments:
The value type variable, v1, has its instance variables contained within the variable itself. The new operator used in the assignment

v1 = new ValType(3, 4.2);

does not allocate any memory in the heap, contrary to what we may have learned from other languages. Because ValType is a value type, the new operator is only used in this context to call its constructor and thus initialize the instance variables. Because v1 is a local variable, it's actually stored as part of the method's activation record (stack frame), and it exists just because it's declared.
Objects referred to by reference type variables have to be created explicitly at some point in the program. In the assignment

r1 = new RefType(4, 5.1);

the new operator does the expected dynamic memory allocation, because in this case RefType is a reference type. The corresponding constructor gets called immediately afterwards. Variable r1 is also stored in the method's activation record (because it's also a local variable), but it's just big enough to hold the reference (address) of the newly created instance. All the instance's data is in fact stored in the heap.
Now let's check what happens when the second part of the program (marked with the "PART 2" comment) is executed. Two new variables are introduced and they are assigned the values of the two original ones. Then each of the instance variables of the new variables is incremented by one (using the ++ operator).
When v1 is copied into v2, each individual instance variable of the source is copied into the destination, thus producing totally independent values. So any modification done to v2 doesn't affect v1 at all. This is not so with r1 and r2, in which only the reference (address) is copied. Any change to the object referred to by r2 is immediately seen by r1, because they both refer in fact to the same object.
If you check the type hierarchy diagram above, you will notice that simple data types such as int, bool and char are actually struct value types, while object and string are class reference types.
If you want to compile and run the source code of the above example, type at the Linux shell prompt:
mcs varsexample.cs
mono varsexample.exe
The output should be:
PART 1
v1 = (3, 4.2)
r1 = (4, 5.1)
PART 2
v1 = (3, 4.2)
r1 = (5, 6.1)
...making Linux just a little more fun!
By Jon "Sir Flakey" Harsem
These cartoons are scaled down to minimize horizontal scrolling. To see a panel in all its clarity, click on it.
More adventures of VI-Agra are in the HelpDex column this issue, and in the back issues under both HelpDex and Qubism.
All Qubism cartoons are here at the CORE web site.
...making Linux just a little more fun!
By Sandeep S
The basic features of ptrace were explained in Part I. In Part II we saw a small program which accessed the registers of a process and modified them so as to change the output of that process by injecting some extra code. This time we are going to access the memory of a process. The purpose of this article is to introduce a method for infecting binaries at runtime. There are many possible areas of use for this technique.
We are familiar with ptrace and know the techniques of attaching to a process, tracing it, and finally freeing it. We also have an idea about the structure of the Linux binary format, ELF.
Our plan is to fetch/modify a running binary, so we have to locate the symbols inside the binary. For that we need link_map. link_map is the dynamic linker's internal structure with which it keeps track of loaded libraries and the symbols within those libraries.
The format of link_map is (from /usr/include/link.h):
struct link_map
{
ElfW(Addr) l_addr; /* Base address shared object is loaded at. */
char *l_name; /* Absolute file name object was found in. */
ElfW(Dyn) *l_ld; /* Dynamic section of the shared object. */
struct link_map *l_next, *l_prev; /* Chain of loaded objects. */
};
A small explanation of the fields: link_map is a linked list, each item on the list having a pointer to a loaded library. What we have to do is follow this chain, go through every library, and find our symbol. Now we have a question: where can we find this link_map?
For every object file there is a global offset table (GOT), which contains many details of the binary. In the GOT, the second entry is dedicated to the link_map. So we get the address of the link_map from GOT[1] and go on to search for our symbol.
Now we have collected the basic information needed to access the memory, so let's start. First of all we attach to the process 'pid' for tracing. Then we go about finding the link_map we require. In the code you will find functions read_data, read_str, etc.; these are helper functions that make working with ptrace easier, and they are fairly self-explanatory.
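The helpers are not reproduced in the article, but as a rough idea of the approach, here is a sketch of what a read_data could look like (a sketch only - the exact signature and error handling in Ptrace.c may differ): reading the tracee's memory is just a loop of PTRACE_PEEKTEXT calls, one word at a time.

#include <sys/ptrace.h>
#include <string.h>

/* Sketch: copy 'len' bytes of the traced process 'pid', starting at
   address 'addr', into 'buf'.  Error checking is omitted for brevity. */
void read_data(int pid, unsigned long addr, void *buf, int len)
{
        long word;
        int i, chunk;

        for (i = 0; i < len; i += sizeof(long)) {
                /* each PTRACE_PEEKTEXT returns one word of the tracee's memory */
                word = ptrace(PTRACE_PEEKTEXT, pid, (void *) (addr + i), NULL);
                chunk = (len - i < (int) sizeof(long)) ? (len - i) : (int) sizeof(long);
                memcpy((char *) buf + i, &word, chunk);
        }
}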
The function for locating the link_map is:
struct link_map *locate_linkmap(int pid)
{
Elf32_Ehdr *ehdr = malloc(sizeof(Elf32_Ehdr));
Elf32_Phdr *phdr = malloc(sizeof(Elf32_Phdr));
Elf32_Dyn *dyn = malloc(sizeof(Elf32_Dyn));
Elf32_Word got;
struct link_map *l = malloc(sizeof(struct link_map));
unsigned long phdr_addr, dyn_addr, map_addr;
read_data(pid, 0x08048000, ehdr, sizeof(Elf32_Ehdr));
phdr_addr = 0x08048000 + ehdr->e_phoff;
printf("program header at %p\n", phdr_addr);
read_data(pid, phdr_addr, phdr, sizeof(Elf32_Phdr));
while (phdr->p_type != PT_DYNAMIC) {
read_data(pid, phdr_addr += sizeof(Elf32_Phdr), phdr,
sizeof(Elf32_Phdr));
}
read_data(pid, phdr->p_vaddr, dyn, sizeof(Elf32_Dyn));
dyn_addr = phdr->p_vaddr;
while (dyn->d_tag != DT_PLTGOT) {
read_data(pid, dyn_addr += sizeof(Elf32_Dyn), dyn, sizeof(Elf32_Dyn));
}
got = (Elf32_Word) dyn->d_un.d_ptr;
got += 4; /* second GOT entry, remember? */
read_data(pid, (unsigned long) got, &map_addr, 4);
read_data(pid, map_addr, l, sizeof(struct link_map));
free(phdr);
free(ehdr);
free(dyn);
return l;
}
We start from the location 0x08048000 to get the ELF header of the process we are tracing. From the ELF header's fields we can get to the program headers. (The fields of these headers were discussed in Part II.) Once we have the program headers, we go on checking for the header with dynamic linking information, and from that header we fetch the location of the dynamic linking information itself. We keep searching until we get the base address of the global offset table.
Now we have the address of the GOT, and we take its second entry (which holds the link_map). From there we get the address of the link_map we require and return it.
We have the struct link_map, and now we have to get symtab and strtab. For this, we move to the l_ld field of link_map and traverse the dynamic sections until DT_SYMTAB and DT_STRTAB have been found; finally we can seek our symbol in DT_SYMTAB. DT_SYMTAB and DT_STRTAB hold the addresses of the symbol table and string table respectively.
The function resolv_tables is:
void resolv_tables(int pid, struct link_map *map)
{
Elf32_Dyn *dyn = malloc(sizeof(Elf32_Dyn));
unsigned long addr;
addr = (unsigned long) map->l_ld;
read_data(pid, addr, dyn, sizeof(Elf32_Dyn));
while (dyn->d_tag) {
switch (dyn->d_tag) {
case DT_HASH:
read_data(pid, dyn->d_un.d_ptr + map->l_addr + 4,
&nchains, sizeof(nchains));
break;
case DT_STRTAB:
strtab = dyn->d_un.d_ptr;
break;
case DT_SYMTAB:
symtab = dyn->d_un.d_ptr;
break;
default:
break;
}
addr += sizeof(Elf32_Dyn);
read_data(pid, addr, dyn, sizeof(Elf32_Dyn));
}
free(dyn);
}
What we actually do here is just read the dynamic sections one by one and check whether the tag is DT_STRTAB or DT_SYMTAB. If it is, we take the respective pointer and assign it to strtab or symtab. Once the dynamic sections are over, we can stop.
Our next step is getting the value of our symbol from the symbol table. For this we take each symbol table entry in turn and check whether it is a function name (we are interested in finding the value of a library function). If it is, it is compared with the function name given by us; if they match, the value of the symbol is returned. A sketch of this step is shown below.
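The lookup function itself lives in Ptrace.c; the following is only an illustration of the idea. The name find_sym, the fixed-size name buffer, the assumption that symtab, strtab and nchains are the globals filled in by resolv_tables(), and the assumed read_str(pid, addr, buf, size) helper are mine, not necessarily identical to the real code.

#include <elf.h>
#include <link.h>
#include <string.h>

/* Sketch: walk the tracee's symbol table and return the address of
   the function named 'sym_name', or 0 if it is not found. */
unsigned long find_sym(int pid, struct link_map *map, const char *sym_name)
{
        Elf32_Sym sym;
        char name[128];
        int i;

        for (i = 0; i < nchains; i++) {
                /* symtab, strtab and nchains were set up by resolv_tables() */
                read_data(pid, symtab + i * sizeof(Elf32_Sym), &sym, sizeof(sym));

                /* we only care about function symbols */
                if (ELF32_ST_TYPE(sym.st_info) != STT_FUNC)
                        continue;

                /* the symbol's name lives in the string table at offset st_name */
                read_str(pid, strtab + sym.st_name, name, sizeof(name));
                if (strcmp(name, sym_name) == 0)
                        return map->l_addr + sym.st_value;
        }
        return 0;
}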
Now we have the value of the symbol that we actually required. What good will the value do us? The answer depends upon the reader. As I have already stated, we may use this for both good and evil purposes.
You might be thinking that everything is over, but we have forgotten a step that we shouldn't forget - detaching the traced process. Skipping it may leave the process in a stopped state forever, and the consequences of that were already discussed in Part I. So our last and final step is to detach the traced process.
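In code, that is a single call (pid being the process we attached to earlier):

/* let the traced process continue running on its own */
ptrace(PTRACE_DETACH, pid, NULL, NULL);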
The program may be obtained here: Ptrace.c. Almost all of the code is self-explanatory.
Compile it by typing
#cc Ptrace.c -o symtrace
Now we want to test the program. Run some process in another console, then come back and type the command below. (Here my test program is emacs and the symbol I look up is strcpy; you may trace any traceable program instead of emacs, and inspect any symbol you want.)
#./symtrace `ps ax | grep 'emacs' | cut -f 2 -d " "` strcpy
and watch what is going on.
So we come to the end of a series of three articles which has gone through the basics of programming with ptrace. Once you have understood the basic concept, it is not difficult to take further steps on your own. More details on ptrace and ELF are available at www.phrack.org. One more thing I have to mention is that we got here without touching a major topic: one of ptrace's main features is its handling of system calls, and this feature is used on a large scale in User Mode Linux. I am busy with my classes and final year project, but I promise, if time permits, we will continue this series and have a look at those features of ptrace.
All Suggestions, Criticisms, Contributions etc. are welcome. You can contact me at busybox@sancharnet.in
...making Linux just a little more fun!
By Juraj Sipos
I noticed that the issue of making a multiboot CD is not covered very much on the Internet, and where it is, only sparsely. Commercial Windows vendors include some ability to create bootable CDs in their software, but I haven't yet seen an option to create a multiboot CD in their packages. For me, creating a bootable CD in Linux is much easier than in Windows. There are also many free utilities that help you create a Linux bootable CD, but having a multiple-boot CD is a delicacy. You can have several versions of Linux boot images on the CD - versions with support for journaling file systems, repair utilities, various breeds of Linux or BSD, or even QNX, Plan9 and more.
Why do I think this may be good for you? Imagine you use Linux and FreeBSD simultaneously and have several Linux distributions installed on your hard disk, but something has happened to your system - there is no way to access the data anymore. Either you use a bootable diskette (but there may be many obstacles if you work with something specific like the XFS journaling file system or an encrypted file system, and you find that you need at least 5 Linux bootable diskettes to suit you), or you create a multiboot CD on which you put various breeds of Linux kernels and utilities. A little CD with 10 operating systems on it is redemption from the illusion of this world that makes you believe that something is always wrong.
I want this article to be easy, practical and intelligible for beginners too, so I'd like to avoid overly technical language that many of us would not understand. This should help attract readers of various sorts.
A bootable CD is based upon the so-called El Torito standard - other sites explain this in detail. Visit, for example, http://www.cdpage.com/Compact_Disc_Variations/danaboot.html
An important piece of information for us is that we may have up to 10 bootable operating systems on a CD, which we may boot on any machine whose BIOS supports booting from CD. The bootable ISO image file may be created with 1.44MB diskette emulation, 2.88MB diskette emulation, or hard disk emulation.
Now follows the practical guide on how to prepare a multiboot CD
First, you must have a bootable DOS or Linux diskette image file. An image is a file that contains the contents of a disk or diskette. There may be many types of image files - if you dd (disk dump) your Linux partition with a command like the following (let's suppose that your Linux partition is /dev/hda1):
dd if=/dev/hda1 of=/my_image.file
a file my_image.file will appear in your file system. Not every image file is bootable - it depends on its contents, so a good idea would be to prepare some Linux or BSD diskette image files. The simplest way would be to download such image files from the Internet. Here is the link:
http://www.ibiblio.org/pub/Linux/system/recovery/
The Ibiblio archive is very good. The image files you may download from the above URL are prepared in such a way that they are bootable, so you don't need to care much about building your own image. However, if you want to make your own image, at the above URL you may also find some utilities like Bootkit, CatRescue, SAR, disc-recovery-utils, etc., which will help you create your own bootable diskettes (or bootable image files).
The files we will need for our work, in order to make a multiboot CD, are fbsd-flp-1.0.3.bin (a bootable FreeBSD 2.88MB diskette image), tomsrtbt, or your own images made from diskettes you already have. To make your own, put your DOS or Linux diskette in the diskette drive and type the following command:
dd if=/dev/fd0 of=boot.img bs=512 count=2880
A good idea would also be to visit http://freshmeat.net and search for a keyword "mini", so you will find even some esoteric mini Linux distributions you normally don't hear about.
The site http://www.ibiblio.org/pub/Linux/system/recovery/ contains (I deleted some stuff):
Some other good sites where you can download bootable diskette images:
LIAP (http://www.liap.eu.org/): LIAP is a Linux in a Pill - the site contains many 1.44MB diskette images with various utilities and kernel breeds suitable for recovery of various types of disasters.
LEKA RESCUE FLOPPY (http://leka.muumilaakso.org/): Leka Rescue Floppy is a small 1.44Mb distribution.
TOMSRTBT (http://www.toms.net/rb/): Tomsrtbt (Tom's Root Boot) is a rescue utility, a very good one. You may also download the 2.88MB image file from the above site.
You can also download bootable DOS images. Visit, for example, http://www.bootdisk.com and download DOS images if you do not have them available. The site contains DOS 5.00 to 6.22, Win 95/98/Me Bootdisks, DOS/Windows 9X/2000/XP bootdisks, Win 95/98/ME - NT4/NT5 bootdisks, DrDOS 7.X disk for Bios Flashing Basic, etc. You may also create a FreeDOS boot diskette.
First, some terms. Let's see the difference between a bootable image file of a diskette or disk, and an ISO image file to be burned onto a CD. What we must have are bootable diskette image files, from which we will create one ISO image file.
1) You may prepare your bootable diskette images from diskettes you already have with the command:
dd if=/dev/fd0 of=/my_image.img
or you may download some bootable diskette image files from the Internet (see the links). Make a directory on your Linux box, for example /CD, and copy the images to this directory (remember, you may have no more than ten bootable images). Make sure you keep the 8.3 format for file names - 8 characters for the file name and 3 characters for its suffix; this limit is only for compatibility with the DOS makebt.exe program we will use later.
2) If you want to make use of the space on the CD (ten images of bootable diskettes would only require about 14MB), place some other utilities in a subdirectory, for example /CD/Soft; see the example layout below. Information on how to access the CD is included at the bottom of this article.
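Assuming the example image names used later on the makebt.exe screen (your names and sizes will differ), the /CD directory might look roughly like this before building the ISO:

/CD
    fbsd.img       (1.44MB FreeBSD boot image)
    linux.img      (2.88MB Linux boot image)
    plan9.img      (1.44MB)
    qnx.img        (1.44MB)
    openbsd.img    (2.88MB)
    Soft/          (optional extra utilities, not bootable)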
3) Run the following command from the /CD directory:
mkisofs -b image.img -c boot.cat -J -l -R -r -o /cd.iso /CD
The "boot.cat" or "boot.catalog" file will be automatically created, so you don't have to have it in your /CD directory - just type the command as you see it - you can type the name of any image file, as long as its name corresponds with the names of image files placed in the /CD directory. The image file included in the above command will be the one you will boot your CD from. The image files must have the size of 1.44MB or 2.8MB.
4) A cd.iso file will be created in your / directory (/cd.iso). When you check this file and mount it (mount /cd.iso /mnt -o loop), the contents of the ISO file should be visible in the directory where you mounted it. If we burn a CD from this ISO image, it will be bootable, but only one image to boot from will be available.
5) So we must edit the ISO image to make a multiple-boot CD; that way the other images get included in the menu we will see when we boot the CD. We will be welcomed by a multiple-boot menu with options 0 through 10, and by pressing the chosen number we will boot the desired operating system.
6) After editing it, we may now burn the CD.
Since I don't have the time and energy to create a Perl script that would edit the ISO image for me, and because editing the ISO image file by hand may appear complicated to some (I want this article to be as simple as possible), a good idea is to use one of the free programs available on the Internet. One such free program is makebt.exe. Some time ago I found this free program on several sites, but now I was unable to find it on the net, so I put it on my website at http://www.tankred.sk/~juro/freebsd/makebt.zip where you can download it.
You may run makebt.exe in DOSEMU, in the BOCHS emulator (http://bochs.sourceforge.net), from a DOS system diskette made with the images available at http://www.bootdisk.com, or from a FreeDOS bootable diskette used to boot your PC in order to run the makebt.exe utility. If you don't have a DOS partition, the best idea is to use the DOSEMU emulator - DOSEMU can also access Linux partitions, where you may have your cd.iso file waiting to be "grasped in your clever hands".
When you run MAKEBT.EXE at the DOS prompt, it will ask for the full path and filename of the ISO file to be modified. Type the name of the ISO file that contains the multiple boot diskette images, for example CD.ISO, and you will see the following screen:
-------------------------------------------------------------------------
Make Multiple Boot CD-ISO Image Modifier ver 1.02

ISO File path and name: cd.iso

       Bootable Disk Image     Boot media type     Default     LBA
       -------------------     ---------------     -------     ---
BC )   BOOT.CAT
1  )   FBSD.IMG                1.44M Floppy        Y
2  )   LINUX.IMG               2.88M Floppy        -
3  )   PLAN9.IMG               1.44M Floppy        -
4  )   QNX.IMG                 1.44M Floppy        -
5  )   OPENBSD.IMG             2.88M Floppy        -
6  )
7  )
8  )
9  )
10 )

<TAB> = move between fields, up/down arrows = move between rows, F1 = Confirm
Press 'y' key to make this image as default boot
-------------------------------------------------------------------------
BC stands for Boot Catalog. Just type boot.cat there and don't worry about it anymore, as you already used this name in the mkisofs command above (it is, however, important that the ISO image file contains the string "boot.cat" in it). Now carefully type the names of the images. You have to type the names in the DOS 8.3 format (a DOS restriction on file names - at most 8 characters for the name and 3 characters for the suffix).
In the middle of the screen you choose between 1.44MB floppy emulation, 2.88MB floppy emulation, hard disk emulation, or no emulation. We will only use 1.44MB and 2.88MB emulation. (If you want to try hard disk emulation, make a 650MB Linux partition and copy into it the filesystem of the Linux system you booted your hard disk from - experiment...) Use the right arrow key to select between the types of emulation. On the right of the screen you have to choose one bootable image as the default one by pressing "Y".
When you are finished, press F1 (you may have to try this several times, as the program does not respond every time). The program is intelligent - if you typed an image file name incorrectly, you will receive a warning message after pressing F1. Do not include any descriptions for the boot images in the menu that follows after pressing F1, as this feature is mostly usable with SCSI CD-ROMs and I haven't studied it very much.
That's it. Now you may burn your CD.
cdrecord -v speed=8 dev=0,0,0 /cd.iso
When you boot the CD, you will not see descriptions of the operating systems, only numbers. The first and second entries (0 and 1) will usually stand for the same operating system. I have not had much time to experiment with this, but a good idea is to write the numbers down, so that you know which operating system you are going to boot.
We deal here with diskette images and emulation, so if you boot your images from the multiple-boot CD you just created, you may access the CD-ROM by typing "mount /dev/hdc /mnt", for example, and so also reach your /Soft directory, where you may have other utilities you plan to work with later. In the case of a DOS system disk, you should include drivers to access the CD-ROM.
If you want to study the format or write a Linux program to patch the ISO file, you can compare an ordinary ISO image file with only one boot option against the same ISO file patched by the makebt.exe utility (both ISO files must otherwise be identical). A good binary patcher is the bdiff utility by Giuliano Pochini - a simple and small program that does for binary files what the very common "diff" and "patch" utilities do for text files. It may be downloaded from http://space.virgilio.it/g_pochini@virgilio.it/. The comparison will show you the places (offsets) where the multiboot information was written: sector 17 (the Boot Volume Descriptor) and the Boot Catalog sector.
I have created many multiboot CDs with the above information and have never experienced a problem. But at first, in order to avoid burning unusable CD-Rs - I had some problems making my own OS/2 images - burn the ISO image onto rewritable CD-RW disks. Enjoy!
...making Linux just a little more fun!
By Vinayak Hegde
If you came to this site to read an article about Tux the penguin - the lucky mascot of Linux - you might be disappointed. But don't go away just yet: read on to find out what TUX the webserver can do for you in terms of performance, and you will be delighted. You might just discover something to hack on and tweak. This is an article about TUX - the webserver embedded within the Linux kernel.
The name TUX comes from 'Threaded linUX webserver'. TUX was written by Red Hat and is based on the 2.4 kernel series. It is a kernel-space HTTP subsystem. As you may have guessed by now, TUX is released under the GNU GPL, so in the free software tradition you are free to tweak and modify it to meet your own specific needs. One of the ways of adapting TUX to our needs is by writing TUX modules, which can be user-space or kernel-space modules. The main goal behind writing TUX was to enable high-performance webserving on Linux. This was especially important as Linux is extremely popular in the webserver market.
TUX is not as feature-filled as Apache and has some limitations. But nevertheless, TUX is a complete HTTP/1.1 compliant webserver supporting HTTP/1.1 persistent (keep-alive) connections, pipelining, CGI execution, logging, virtual hosting, various forms of modules, and many other webserver features. TUX is now officially known as the Red Hat Content Accelerator (RHCA).
Though quite an amount of today's webcontent is dynamically generated, most webcontent is still static - take, for example, static webpages and images. Serving it from user space carries quite an overhead, as user-space webservers such as Apache have to use system calls to actually serve the content, and the frequent context switches between kernel space and user space are quite a performance hit. TUX is a saviour here. TUX can be built into the monolithic kernel or dynamically loaded as a module. The first approach is preferable for servers which are dedicated to webserving. When built as a loadable module, it can be dynamically inserted and removed as the service is started or stopped; this approach affords some flexibility.
TUX is used primarily for serving static content, leaving generation and serving of dynamic content to backend webservers such as Apache. Now, newer versions of TUX have the capability to cache dynamic content as well. TUX modules can create "objects" which are stored using the page cache. To respond to a request for dynamic data, a TUX module can send a mix of dynamically-generated data and cached pre-generated objects. Thus, most of the requests which are just "network-copy" operations can be handled efficiently by TUX. The new version of TUX uses zero copy block IO instead of a temporary buffer as in TUX 1.0. Also virtual hosting support has been enhanced for TUX and the number of virtual hosts that can be supported is only limited by disk space and RAM.
Now that we know what TUX is capable of, we can move to installing and configuring TUX. All the information that follows has been tested on Red Hat 7.2 with TUX-2.1.0-2. Due to ease of use and familiarity Apache has been used as the user-space webserving daemon.
Check whether you have TUX installed using the command:
# rpm -q tux
You may get messages similar to the ones below. If the package is not installed, install the binary RPM:

# rpm -ivh tux-2.1.0-2.i386.rpm

Alternatively, patch the kernel and rebuild it with TUX enabled:

# patch -p0 < tux2-full-2.4.10
# make oldconfig
(enable TUX here, then recompile and install the kernel)

Then install the user-space utilities:

# tar xzvf tux-2.1.0.tar.gz
# cd tux-2.0.25
# make
# make install
Create the directory /var/www/html (or some other directory of your choice) and make it the root directory of TUX by changing the value of DOCROOT in /etc/sysconfig/tux. You can also give the path where your CGI scripts are stored in CGIROOT, and set the TUXTHREADS variable to an appropriate number. Finally, create an index.html page in the root directory; this will be used for testing later.
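For instance, a minimal /etc/sysconfig/tux might contain something like the following (the variable names come from the file itself; the values here are just plausible examples for this setup, not defaults):

# /etc/sysconfig/tux - example values only
DOCROOT=/var/www/html
CGIROOT=/var/www/cgi-bin
TUXTHREADS=2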
TUX can be started (as superuser) by using the command:
# service tux start    (on RH systems)
# ./tux.init start     (on non-RH systems)

If TUX is loaded as a module, lsmod should now show it:

# lsmod
Module                  Size  Used by
tux                    75568   0
....
....

Now point your favorite browser at localhost and you should see the index.html page we created earlier. If not, something has gone wrong or the configuration is not right; check step 8 for details.
# lynx localhost
By default, logging is disabled. To enable logging and referrer logging, give the following commands.
# echo 1 > /proc/sys/net/tux/logging
# echo 1 > /proc/sys/net/tux/referer_logging
# cat /proc/sys/net/tux/logfile
/var/log/tux      (this is the default logfile)
For each request, TUX logs the address of the requester, a date and time stamp accurate to at least one second, specification of the file requested, size of the file transferred, and the final status of the request. The log files for TUX are stored in /var/log/tux (as seen above) in binary format. In this binary format, the log files are approximately 50% smaller than standard ASCII text log files. To view log files issue the following command
# tux2w3c /var/log/tux
127.0.0.1 - - Wed Nov 20 00:22:24 2002 "GET /manual/sections.html HTTP/1.1" - 5523 200
127.0.0.1 - - Thu Nov 21 01:36:55 2002 "GET / HTTP/1.0" - 2890 200
127.0.0.1 - - Thu Nov 21 01:37:20 2002 "GET /manual/index.html HTTP/1.0" - 5557 200
127.0.0.1 - - Thu Nov 21 01:37:24 2002 "GET /manual/mod/index-bytype.html HTTP/1.0" - 6186 200

The tux2w3c program converts the binary log files into standard W3C-conforming HTTPD log files.
As we already know, TUX is all about speeding up response time. Using gzip compression, it is also possible to reduce download time as well as save some bandwidth, but for this feature to work the client must support gzip compression. By default, this compression is disabled. To enable it, do the following:

# echo 1 > /proc/sys/net/tux/compression

To enable it at startup, add the following line to /etc/sysctl.conf:

net.tux.compression=1

Also, a gzipped file with the extension .gz must be present in the same directory as the uncompressed version of each page you wish to serve compressed.
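For example, to create the compressed twin of a page, a plain gzip invocation is enough (adjust the path to your own DOCROOT):

# gzip -9 -c /var/www/html/index.html > /var/www/html/index.html.gz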
We are not finished with configuration yet. There are some more interesting features/tweaks which you can use. (Some of these are available only in RHCA v2.2)
As mentioned before, the recommended configuration is to use TUX as a front-end web server listening on port 80 (the default HTTP port) and to use a back-end web server (Apache is used here as an example) on port 8080 for answering requests that TUX does not understand (generally dynamically generated content, e.g. PHP pages). For this configuration, some changes have to be made to the httpd.conf file of the Apache webserver.
Replace the line Port 80 with Port 8080 (the port on which Apache will listen). Also, to prevent users from bypassing TUX and accessing Apache directly, make the following change; this may be necessary for security reasons.
Replace the line BindAddress * with BindAddress 127.0.0.1 (the loopback address). Finally, restart httpd using:
# service httpd restart
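For reference, after these edits the relevant httpd.conf lines would read as follows (Apache 1.3-style directives, as used by Red Hat 7.2; your file may differ):

Port 8080
BindAddress 127.0.0.1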
You can stop/restart TUX using the following commands:
# service tux stop       (for RH systems)
OR
# ./tux-init stop        (for non-RH systems)

# service tux restart
OR
# ./tux-init restart

For debugging purposes you can use the gettuxconfig script in the /usr/share/doc/tux-version/ directory. If you have an SMP system, you can check whether all the interfaces have been set up properly using the checkbindings script, which is also present in the same directory.
As we have seen above, TUX helps a lot to improve the efficiency of webservers by shifting some of the operations from user-space to kernel-space. This results in better performance and better use of server resources. TUX is very configurable and has a number of interesting features. Hope you enjoyed the article. Happy Hacking!!
...making Linux just a little more fun!
Steve Cody sent in a few more additions to If Operating Systems Ran The Airlines.
Happy Linuxing!
Mike ("Iron") Orr
Editor, Linux Gazette, gazette@ssc.com