The SimpleSoundServer interface is the interface that the KDE soundserver artsd provides when it is running. Connecting to it takes only a few lines of code, as the example below shows. Make sure not to call any functions of the server once you have found out that isNull() returns true.

So what does it offer? First, it offers the most basic command: playing a file (which may be a .wav file or any other format aRts can understand) with the simple play() method. Therefore, in a few lines you can write a client that plays wave files; if you already have a SimpleSoundServer called server, it's just a single play() call, as in the example below.

Note that it is necessary to pass a full path here, because it is very likely that your program doesn't have the same working directory as artsd, so calling play() with an unqualified name will mostly fail. For instance, if your working directory is /var/share/sounds and artsd's is /home/kde2, then play("asound.wav") makes the server try to play /home/kde2/asound.wav.
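Here is a minimal sketch of such a client. It assumes the MCOP-generated C++ bindings (the Arts namespace and the dispatcher.h/soundserver.h headers) and uses the example path from the note above:

    #include <dispatcher.h>
    #include <soundserver.h>
    #include <iostream>

    int main()
    {
        // Every MCOP client needs a dispatcher before it can talk to artsd.
        Arts::Dispatcher dispatcher;

        // Look up the SimpleSoundServer object that artsd registers globally.
        Arts::SimpleSoundServer server
            = Arts::Reference("global:Arts_SimpleSoundServer");

        if (server.isNull())
        {
            std::cerr << "artsd does not seem to be running" << std::endl;
            return 1;
        }

        // Pass a full path (the file name here is only an example).
        server.play("/var/share/sounds/asound.wav");
        return 0;
    }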
The play() method returns a long value containing an ID. If playing succeeded, you can use this ID to stop the sound again by calling stop() with it; if not, the ID is 0.

Then there is another pair of methods for attaching and detaching streaming sound sources, such as games that do their own sound mixing (Quake, for instance): attach() and detach(). If you want to use these, the way to go is to implement a ByteSoundProducer object. This has an outgoing asynchronous byte stream, which can be used to send the data as signed 16-bit little-endian stereo. Then, simply create such an object inside your process. For adapting Quake, the ByteSoundProducer object would be created inside the Quake process, and all audio output would be put into the data packets sent via the asynchronous streaming mechanism. Finally, a call to attach() with the object is enough to start streaming; when you're done, call detach(). An example showing how to implement a ByteSoundProducer is in the kdelibs/arts/examples directory.

But in most cases, a simpler way is possible. For porting games such as Quake, there is also the C API, which encapsulates the aRts functionality. It offers routines similar to those needed to access the operating system's audio drivers, such as OSS (the Open Sound System, the Linux sound drivers). These are called arts_open(), arts_write(), arts_close(), and so on, and they, in turn, take care of everything that needs to happen in the background. Whether a layer will be written to simplify the usage of the streaming API for KDE 2.0 applications remains to be seen. If there is time to do a KAudioStream, which handles all the attach/detach work and packet production, it will go into some KDE library.

Finally, two functions are left. One is createObject(), which can be used to create an arbitrary object on the soundserver. Therefore, if you need an Example_ADD for some reason, and it should run not inside your process but inside the soundserver process, a call like the one sketched at the end of this section should do the trick. As you will see there, you can easily cast the returned Object to Example_ADD using DynamicCast.

Just a few words explaining why you may want to create something on the server. Imagine that you want to develop a 3D game, but aRts is missing the 3D capabilities you need, such as moving sound sources and things like that. Of course, you can render all that locally (inside the game process) and transfer the result via streaming to the soundserver. However, a latency penalty and a performance penalty are associated with that.

The latency penalty is this: you need to do the streaming in packets of a certain size. If you want no dropouts when your game doesn't get the CPU for a few milliseconds, you need to dimension them generously, say four packets of 2048 bytes each. At 16-bit stereo and a 44100 Hz sample rate, those four packets hold 2048 frames, so replaying them all takes about 47 milliseconds. That protects you from dropouts, but it also means that after a player shoots, there is a delay of up to 47 milliseconds until the 3D sound system reacts. On the other hand, if your 3D sound system runs inside the server, the time to tell it "player shoots now" would normally be around 1 millisecond, because it is a single oneway remote invocation. Thus, you can cut the latency from about 47 milliseconds to about 1 by creating things server side.

The performance penalty, on the other hand, is obvious: putting all that data into packets and taking it out again costs CPU time. The smaller you make the latency (that is, the smaller the packets), the more packets per second you need, and thus the performance penalty increases.
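For instance, stopping a sound that was started earlier might look like this minimal sketch, reusing the server object and example path from above:

    // id holds the return value of a server.play(...) call;
    // an ID of 0 means that playing the file failed in the first place.
    long id = server.play("/var/share/sounds/asound.wav");
    if (id != 0)
        server.stop(id);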
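As a rough usage sketch of the streaming methods, assuming producer already refers to an object implementing the ByteSoundProducer interface (a complete implementation is in kdelibs/arts/examples):

    #include <soundserver.h>

    void streamGameAudio(Arts::SimpleSoundServer server,
                         Arts::ByteSoundProducer producer)
    {
        // Start streaming: the server now receives the asynchronous byte
        // stream (signed 16-bit little-endian stereo) from the producer.
        server.attach(producer);

        // ... the game runs and fills the outgoing data packets here ...

        // Stop streaming again when the game is done with audio output.
        server.detach(producer);
    }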
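Coming back to createObject(), a sketch of creating an Example_ADD inside the soundserver could look like the following; the header name for the generated Example_ADD wrapper is an assumption, and artsd must of course be able to load the corresponding module:

    #include <soundserver.h>
    #include "example_add.h"   // assumed name of the generated Example_ADD header

    Example_ADD createServerSideAdder(Arts::SimpleSoundServer server)
    {
        // Ask the soundserver to create the object inside its own process.
        Arts::Object obj = server.createObject("Example_ADD");

        // DynamicCast turns the generic Object reference into the specific
        // Example_ADD wrapper; the result is a null reference if the server
        // could not create the object.
        Example_ADD adder = Arts::DynamicCast(obj);
        return adder;
    }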
So for real-time applications such as games, running things server side is the most important option.

Last but not least, let's take a look at effects. The server allows inserting effects between the downmixed signal of all clients and the output. That is possible with an attribute of the server: as the sketch below shows, you get a StereoEffectStack, whose interface will be described soon. It can be used to add effects to the chain.
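A minimal sketch of reading that attribute, assuming it is called outstack as in the soundserver IDL:

    // Fetch the effect stack that sits between the downmixed client signal
    // and the audio output ("outstack" is the assumed attribute name).
    Arts::StereoEffectStack stack = server.outstack();

    // The StereoEffectStack interface (for inserting and removing effects
    // in this chain) is described in the next section.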