What is the easiest way to mix 8-bit 8000 Hz PCM_ULAW samples?

Hi!
I want to mix multiple ULAW samples together. Is there a way to do so without converting to PCM? I'm writing an 8000 Hz, 8-bit VoIP app. I'm testing with AU files (created in GoldWave), since they use ULAW encoding. My current implementation is:
        AudioFormat f = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, 8000, 16, 1, 2, 8000, false);
        SourceDataLine sdl = AudioSystem.getSourceDataLine(f);
        sdl.open(f);
        sdl.start();
        File file1 = new File("C:\\Scream3.au");
        AudioInputStream ais1 = AudioSystem.getAudioInputStream(file1);
        AudioInputStream aisTarget1 = AudioSystem.getAudioInputStream(f, ais1);
        File file2 = new File("C:\\Blackout3.au");
        AudioInputStream ais2 = AudioSystem.getAudioInputStream(file2);
        AudioInputStream aisTarget2 = AudioSystem.getAudioInputStream(f, ais2);
        byte[] data = new byte[10000];
        int[] calc = new int[5000];
        AudioInputStream[] streams = {aisTarget1, aisTarget2};
        int count = streams.length + 1; // averaging divisor (+1 leaves extra headroom)
        while (true) {
            int r = -1;
            java.util.Arrays.fill(calc, 0); // reset the accumulator for each buffer
            for (int i = 0; i < streams.length; i++) {
                r = streams[i].read(data, 0, data.length); // read from stream i
                if (r == -1) break;
                // accumulate this stream's 16-bit little-endian samples
                for (int j = 0; j < calc.length; j++) {
                    int tempVal = (data[j * 2 + 1] << 8) | (data[j * 2] & 0xFF);
                    calc[j] += tempVal;
                }
            }
            if (r == -1) break;
            // average, then write the mix back as little-endian bytes
            for (int i = 0; i < calc.length; i++) {
                calc[i] /= count;
                data[i * 2] = (byte) (calc[i] & 0xFF);
                data[i * 2 + 1] = (byte) (calc[i] >> 8);
            }
            sdl.write(data, 0, data.length);
        }
If it's not possible to mix the ULAW samples directly and I have to convert to PCM, how do I convert from the PCM format (AudioFormat.Encoding.PCM_SIGNED, 8000 Hz, 16 bits, 1 channel, 2-byte frame size, 8000 frame rate, little endian) to ULAW (8-bit, 8000 Hz)?
Do I do something like the following (see the sketch after the list)?
1) Write a WAVE header to a Byte stream
2) Write the PCM data to the byte stream
3) Get PCM AIS with AudioSystem.getAudioInputStream(byte stream)
4) Get ULAW Target AIS with AudioSystem.getAudioInputStream(ulawFormat, PCM AIS)
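A minimal sketch of steps 3-4 (in fact, JavaSound can wrap raw PCM bytes in an AudioInputStream directly, given the format and frame count, so the WAVE header of steps 1-2 may not be needed; pcmBytes is a hypothetical buffer holding the mixed little-endian 16-bit samples):

        AudioFormat pcmFormat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, 8000, 16, 1, 2, 8000, false);
        AudioFormat ulawFormat = new AudioFormat(AudioFormat.Encoding.ULAW, 8000, 8, 1, 1, 8000, false);
        // Wrap the raw PCM bytes directly -- no WAVE header required.
        AudioInputStream pcmAis = new AudioInputStream(
                new java.io.ByteArrayInputStream(pcmBytes), pcmFormat,
                pcmBytes.length / pcmFormat.getFrameSize());
        // Let the installed ULAW codec perform the conversion.
        AudioInputStream ulawAis = AudioSystem.getAudioInputStream(ulawFormat, pcmAis);
        // ulawAis now yields 8-bit 8000 Hz ULAW frames, ready to send over the wire.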
Any help is appreciated.
Edited by: Xiphias3 on Oct 24, 2010 6:09 PM

captfoss wrote:
You can do it like this with JMF
http://web.archive.org/web/20080119140742/java.sun.com/products/java-media/jmf/2.1.1/solutions/Merge.html
or like this with JavaSound
http://www.jsresources.org/examples/AudioConcat.html
Edited by: captfoss on Oct 25, 2010 9:56 AM
great... the solutions are back :)

Similar Messages

  • What is the easiest way to mix and match drum samples from Logic software instruments

    I have been using Logic software instruments (EXS24 drum kits) for a while. I know how to edit individual zones in a drum kit, but I would like to know how to copy and paste multiple zones all at once from one kit to another, so that I can create a hybrid EXS24 drum kit with my favorite snares, kicks, toms, etc. Is there a good tutorial video showing how to do this?

    so that I can create a hybrid EXS24 drum kit with my favorite snares, kicks, toms, etc.
    In my opinion, the Environment "Mapped Instrument" object is designed exactly for what you want. You can even load different software (drum) instrument kits into, say, EXS, Addictive Drums, UB, etc. Patch multiple cables to each software drum instrument instance, open the Mapped Instrument, and assign each pad's output in the Cable column - this is one of the most elegant ways in Logic...
    A.G
    www.audiogrocery.com
    Author of: Logic GUI Deluxe(Free),
    Vox De Bulgaria - s.a.g.e vocal pack for RMX,
    Logic Snapshot Console,
    RMX Power CTRL - Logic Environment Midi editor for Stylus etc.

  • What is the best way to deal with different audio sample rates on the same timeline ?

    what is the best way to deal with different audio sample rates on the same timeline ?

    You don't have to do anything special. If possible, start your project with a clip that has the desired target frame rate and audio sample rate, and your project parameters will be set automatically. Other sample rates will be converted under the covers.
    For example, if your video is shot at 48 kHz, you can add music files at 44.1 kHz with no problem.
    If you are recording audio that you want to sync with video (multicam), you will get the best results if everything is 48 kHz, but you can use 44.1 if that is all you have. Once I forgot to reset my Zoom to 48,000 and it still worked.

  • What's the proper way of mixing voices on top of loud music?

    Hello!
    I'm working on a little something that contains very loud music and very loud voices mixed together.
    It sounds great on my Logitech G35 headphones, but when playing it through my studio monitors in the living room, the voices seem to get a little drowned out by the music. (The music is at a pleasingly loud level that I wish to keep, as it doesn't sound as powerful as soon as I bring its gain down a little.)
    So I was wondering what is the proper way of keeping the great loudness of the music and the voices at the same time, but make the voices a little more understandable?
    The music and the voices are on separate tracks, of course, in my Audition file.
    Here is an example of what it sounds like currently: http://soundcloud.com/stefanpanic/gameshow-trailer
    Thanks in advance!

    Okay, with my acoustician's hat on:
    Headphone mixing - especially when it comes to voices in the centre of a stereo field - has always been a disaster area, and you can almost always spot when a mix has been made to play on headphones, because the vocal is invariably too quiet.
    I suppose that the easiest way to explain this is to consider what happens to the central voice with loudspeakers. That voice is essentially a virtual image; there isn't a loudspeaker there to support it. So it's been created out of the off-axis responses of your loudspeakers, and is in a space that's quite a distance from your ears, compared to where it would be on headphones. So, the level of this voice depends as much on the angle of your speakers as anything, and whilst this varies considerably between different setups, it's still considerably different to the relationship that headphones have with your ears!
    Traditionally, it's been recommended that you should sit in an equilateral triangle with your speakers, which should be pointing towards you, which means an included angle of 120 degrees between them. Despite this information being repeated all over the place since the 1950s, there's no physical basis for it at all, and most people don't have setups like that anyway. And I have to say that this really isn't good positioning for establishing a central image - that angle is too great, and you are relying on an extremely good off-axis response to achieve any level at all there.
    In this day and age, what you really need is a monitoring compromise that will let you create a mix that sounds not so bad on both headphones and loudspeakers, and there are a couple of things you can do to improve the situation considerably, and get a generally better result. And FWIW, it's what I do in this situation...
    The first is to alter the angle of your monitors so that the included angle is 90 degrees (a right angle) and sit so that both of them are pointing directly at your ears. This gets you a lot closer to the monitors, admittedly, but is far more realistic as far as a compromise mix is concerned. If you do your whole mix like this, you'll find that it's a lot easier to position things in it too. And don't put anything like soft furnishings between them either - that will definitely make things worse.
    The second thing - one of the important things you should always do with a mix like this, to finally establish vocal levels - is to listen to it really quietly. No, really quietly! Almost at vanishing point. What you should hear is the whole mix, but if anything is standing out (like the vocal), it will become obvious like this in a way that it simply won't when it's louder. You want it to be there, certainly - but it shouldn't be either missing or standing out too much.

  • What's the best way to mix AIR 14 and the Flex 3.5 SDK and use new AIR features?

    I am returning to development of a popular desktop AIR app, after about 4 years of no code changes. Both AIR and Flex have actively moved forward during my coding absence, and it is time to play catch up.
    When last built, the app was using Flex 3.5 and AIR 2.6.
    End Goal - I want my app to look good on high density displays
    I'd like to keep Flex version at 3.5, but use the newer version of AIR, to render more clearly on high density displays (Retina on OSX and hiDPI on Win8).
    The pixel doubling performed by the "compatibility" modes of OSX.Retina or Win8.hiDPI make my app look pretty gross, and the customer base is starting to complain.
    While I may eventually switch over to the Apache Flex SDK to bring the application design into the current state, my customer base just doesn't care right now. They like the current app, but want it to work, out-of-the-box, on high density displays.
    So I need to limit my scope to changing only the AIR SDK, not the Flex SDK at this time.
    Step 1 - Overlaying AIR14 SDK on Flex 3.5
    I followed the official generic overlay instructions, and that worked well enough. I named my hybrid SDK folder "3.5.0.AIR14". I have been able to recompile, run, and verify my app using the hybrid SDK. (My app is compiled and packaged from an Ant script, using the Antennae framework. I had already switched SDKs a number of times over the initial course of development, so pointing my project to a new SDK was simple enough.)
    Step 2 - Updating the app.xml descriptor
    This part was also easy. I used the templates\air\descriptor-template.xml as a starting point, customizing the name, app id, and folders. Now my app descriptor is correctly based on the <application xmlns="http://ns.adobe.com/air/application/14.0"> namespace.
    Step 3 - Enabling Retina/hiDPI support - Help??
    I added <requestedDisplayResolution>high</requestedDisplayResolution> to the <initialWindow> tag of the app descriptor, but that made no difference. The app compiles, installs, and runs, but pixel doubling is still occurring and the app looks gross.
    I also tried setting the SWF version to 25, per the official overlay guide. This proved to be more difficult. The guide suggests setting the -swf-version=25 compiler option, but that option is not supported by the Flex 3.5 compiler. So all I could try was the legacy -target-player=25 compiler option. That setting was accepted by the compiler, and it produced a SWF with byte offset 0x3 == 0x19 (25 decimal), so that appears to be right.
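    A quick way to double-check that byte, for what it's worth: offsets 0x0-0x2 of a SWF hold the signature (e.g. FWS or CWS), and offset 0x3 holds the version. A small Java sketch, with a hypothetical file name:

        // Print the SWF signature and the version byte at offset 0x3.
        try (java.io.FileInputStream in = new java.io.FileInputStream("MyApp.swf")) {
            byte[] header = new byte[4];
            if (in.read(header) == 4) {
                System.out.printf("Signature %c%c%c, SWF version %d%n",
                        (char) header[0], (char) header[1], (char) header[2], header[3]);
            }
        }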
    But -target-player=25 didn't have any effect either.
    Is setting the SWF version even required? Isn't using the AIR 14 namespace in the app descriptor already the way of telling the compiler "I want to use all features of the AIR 14 release"? Why do I need to tell the compiler multiple times to use all the features of the SDK I'm compiling with? It just seems weird to me.
    Have I missed a secret setting somewhere?
    How can I tell the AIR runtime to simply run as pixel-dense as possible? When the workarounds listed below are performed, my app looks fantastic on high-density displays. It's the pixel scaling that is making everything look bad, and I desperately want to get this fixed.
    Workarounds?
    On Windows 8+, we are asking our users to enable the "Disable display scaling on high DPI settings" checkbox on the AIR application shortcut. This works, but is a confusing setting for average users to discover. Most just give up in frustration.
    On OSX, we can't even disable Retina mode on a per-application basis; it's all or nothing, so that's even worse. SwitchResX will automatically switch resolutions based on the selected app, but that's a pretty clunky (and non-free) workaround too.
    Any other workaround ideas are appreciated too.
    Cheers,
    Doug

    It took me a while to figure out (without much help from Adobe, grrr!), since some internet writeups were terse and somehow implied that AIR's Retina support (setting your app descriptor's <initialWindow>/<requestedDisplayResolution> to high) would also work on Windows. It doesn't.
    On OSX, the steps to disable pixel-doubling are:
    update your app descriptor to AIR 14
    set initialWindow/requestedDisplayResolution = high (see the descriptor fragment after this list)
    compile with SWF version 25 or greater
    vector assets, including text, will scale automatically
    you'll need to replace your bitmap assets with Retina-quality bitmaps as appropriate
    when running on a Retina display, you will see stage.contentsScaleFactor=2. It will be 1 for non-Retina displays.
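    For reference, a minimal descriptor fragment combining the namespace and tag above (other required descriptor elements such as id, filename, and versionNumber omitted):

        <application xmlns="http://ns.adobe.com/air/application/14.0">
            <initialWindow>
                <requestedDisplayResolution>high</requestedDisplayResolution>
            </initialWindow>
        </application>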
    On Windows, the pixel-doubling kicks in when you have a HiDPI scaling set to about 150% or greater (hiDPI scaling was introduced in Win7). There is no way to detect from within an AIR app when Windows is doing its HiDPI scaling. stage.contentsScaleFactor is always 1, under all configurations.
    The only thing you can do for AIR apps on Windows is explicitly disable display scaling (like you have done) and update your app to manually scale all UI elements at runtime (that's really gross and hard, and it's what I'm working on right now).
    For my app, I updated our Windows installer to set the registry to disable hiDPI scaling, for all users, just for our app. Here's how you do that:
    Key = HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers
    Name = <fullPathToYourExe>
    Type = REG_SZ
    Value = "~ HIDPIAWARE" (without the quotes, tilde space HIDPIAWARE)
    That should be set in the full 64-bit registry, not the Wow6432Node registry, even if your app is a 32-bit app (which all AIR apps are). If your installer is a 32-bit app (mine was), you may need to jump through some hoops to have it affect the 64-bit registry hive from a 32-bit process.
    If you only want to change the setting for the current user (not all users), the KEY root is HKEY_CURRENT_USER instead of HKEY_LOCAL_MACHINE.
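    As a sketch, the same per-machine setting expressed as a .reg file (the .exe path is hypothetical; note that .reg syntax requires doubled backslashes inside quoted value names):

        Windows Registry Editor Version 5.00

        ; Disable hiDPI scaling for one specific executable (path is hypothetical).
        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers]
        "C:\\Program Files (x86)\\MyAirApp\\MyAirApp.exe"="~ HIDPIAWARE"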
    If you don't have an explicit installer for your AIR app (i.e. if you are deploying from the web via a badge installer), then you're even more messed up, and you will need to tell your users to disable the scaling manually.
    I know, it's a total pain. I hope this helps.
    Cheers,
    Doug
    PS: Adobe devs, if you are listening ...

  • What is the best way to work with mixed media in 1080 timeline?

    Hi there,
    I have a project shot mostly in 1080 24p but a bit of 720 24p and 4x3 30p footage.
    What is the best way to work with this mixed media?
    Thanks!
    Steven

    Hi Shane,
    Ok just to recap (thanks for being patient by the way)...I have put some questions in here...feel free to write in caps to respond and we'll put this baby to rest!
    1) I will work in a 1080p FCP sequence, correct?
    2) HD 1080p footage captured as Apple ProRes HQ I will leave as is and work with?
    3) HD 720p footage, which was captured as Apple ProRes HQ - can I just drop it in the 1080 timeline and let FCP do its work? Or should I run it through Compressor, and if so, what setting should I use?
    4) I don't have the budget for external hardware to convert SD to HD... should I capture my SD Beta through Final Cut, leave it as is, and let FCP do its work? Or should I run it through Compressor? If so, what setting should I use?
    5) For my master sequence in FCP, am I not already using an I-frame format (Apple ProRes) if my Apple ProRes HQ 1080 footage sets the sequence format? I am not sure what you mean by GOP.
    6) I also found some 1080p 60p XDCAM footage shot with the same XDCAM camera. I put it in the 1080p 24p timeline but it was pretty choppy... any ideas on this conform?
    Thanks very much for all your help, it has gone a long way,
    Steven
    Saying what CODECS you are working with was my question. 1080, 720, 4:3...really says nothing. There are a dozen 1080 codecs, another dozen 720 codecs, and nearly 100 4:3 formats.
    Best to capture all the footage to one uniform codec.
    Second best is to work with one format and let FCP conform the rest to that...IF and only IF that format is an I-Frame format like ProRes. GOP formats as master sequence formats cause TONS of issues.
    What I'd do is work in 1080 ProRes, and just add the 720p footage (use Compressor to convert it if you don't have a lot; if you have a lot, just add it), but I'd capture all the SD 4:3 footage via a Kona 3 or Matrox MXO2 as 1080 ProRes. Hardware conversion of SD to HD is much better than anything FCP can do. AE might do a good job as well. But then use Compressor to convert the captured 29.97 footage to 23.98 (you can't convert 29.97 to 23.98 when you capture).

  • What is the best way to plug my iPhone 5 into a mixer for live music performances?

    I'm thinking of performing live with my iPhone 5, or maybe an iPad that I haven't bought yet. What is the best way to get the sound from the device to a mixer or PA? I'm wondering if anyone has any experience with wireless options, and whether they are stable enough for live performance. There don't seem to be any audio interface options yet.

    There is always the 3.5mm minijack out connector. Sure, it's analogue, but a cable with a male minijack on one end and two RCA plugs on the other would work with most mixers and/or PAs.

  • What is the best way to set up a Premiere sequence for export when mixing 24fps and 23.976fps for final output

    What is the best way to set up a Premiere sequence for export when mixing 24fps (CG) and 23.976fps (live action) for final output at 23.976 (Vimeo and such)?
    Right now my sequence is at 24fps, and when I export to 23.976, my dialogues seem to shift.
    Should I set my sequence at 23.976fps instead of 24fps?
    Thanks in advance for the help, and sincere apologies if this is a noob question.
    David

    That makes it easy.  Remove the CGs from the project.  Go to Edit>Preferences>Media... and set the Indeterminate Media Timebase to 23.976.  Then when you import the image sequence, PP will assign it the correct frame rate.
    Edit in a 23.976 sequence.

  • What is the best way to add volume to my mix?

    Hola,
    I have a track which is very, very low in volume, and there's a ton of automation data on a ton of tracks... and I am just wondering what is the best way to compensate for this? I read somewhere that Logic's gain plugin may actually be degrading sound quality due to it being restricted to an (unspecified) bit depth... Is there another gain plug-in or method to do this?
    Or... is there a way to tell Logic to raise each and every point of automation on each and every track by a specified dB?
    -patrick

    Yeah. I think that's probably a much better way to work.
    Is there any problem adding a Free G plugin to OUT 1-2, and cranking that up until I have a good signal?
    -patrick
    Hi Patrick,
    Is the stereo mix the one that is not loud enough? When you play it through a different system it's too soft, but when you play it through your mix monitors it sounds loud enough?
    I think if this is the case, you need to turn down your monitors and turn up everything inside Logic. I had a friend who did not understand monitoring and had reeeeeeally expensive speakers, but no way to control their level, so all his mixes ended up being -30 dB from maximum. They could not be mastered, because when you add that much gain to a mix, you also bring the noise floor up by that much, and you end up with a very hissy, messy, crappy, disgusting mix.
    So, if I were you, I'd check my gain structure and make sure you can control levels at every stage of the signal path, from mic to speakers.
    Set your monitors to play back at about 80 dB SPL. Go to Radio Shack and invest in an SPL meter; they are not expensive and are well worth it.
    Cheers

  • What is the best way to retrive data from a Global Variable?

    Here is what I want to do,
    I have several PCs that run different types of tests. I want to use a global variable, running on a single PC, that acts like a server and can be accessed by the other PCs in my lab. This global variable will store the hostnames of the PCs that are currently running each test, along with a description of the test. Then a user can read the different values from this global variable, select a PC, and connect to its desktop using Remote Desktop in Windows.
    Is it possible to write data to the Global variable that is running on the single PC?
    What is the best way to do this? Does anyone have a sample VI?
    What is the best way to then read the data from the Global variable?
    (I will probably use an array/cluster to store the hostnames.)

    Another pre-LV8 idea...
    A functional global can be accessed using VI Server and called using "call by reference".
    This approach harnesses the TCP functionality built into VI Server to manage the connection.
    This can be pretty quick and (if the functional global is written correctly) will support buffered, mixed data types. (Try to do that with the Shared Variable.)
    Just another idea,
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • [Flex 4.5.1] What is the best way of adopting a flex built website for mobile "as is"?

    What is the best way of adopting a flex 4.5.1 built website for mobile "as is"?
    I have a website built with Flex 4.5.1 which I want to work on mobile as it should - meaning touch-style interactions should work, not mouse ones.
    The question is: do I HAVE TO recode my whole website from Application into MobileApplication in order to make a more lightweight version of the original - which would mean removing some functionality, pages, and animations, building a separate backend and CMS to control it, etc.?
    OR
    Can I just define mobile skins for my existing components to use when in mobile mode, in which I could do some tweaking by removing some unnecessary parts and changing the components' interactionMode to Touch?
    Are both options which I see possible and if yes - which one would you recommend?
    Thanks in advance!

    Thanks for the answer. Yeah, I bet you figured out what I want the answer to be, but is it? That's why I am asking here. I can see my website on my HTC Desire HD and it's working pretty well. If I reskin the parts that need a bigger hit area, make the scrollers work with touch so I don't have to drag the actual scrollbars with my finger, and remove some complex animations - everything will work just fine... The problem is that I am not sure if that is the right way to do it, or if it is possible at all.
    I do get your point about things not designed for mobile, but while with HTML5 you get the job done for web and mobile with little difference, in Flash you need double the time because you'll have to build two projects. And since the framework is one for both web and mobile... why not...
    I want to see the pros and cons of both approaches I suggested. In a complex modular architecture I could easily swap one web module for a mobile module, for example, or change skins or components if I need more lightweight ones. But do I have to copy all that logic into a MobileApplication - can't I do it with Application and use the mobile components (which are simpler and less functional) if the project actually needs them?
    I do hope you get why I am asking these questions, and I can't believe I haven't found a guide or something like that ("why this and not that") when it comes to mixing web with mobile development in Flex... If you have, please share the link; I would be happy to read it. Or if someone with experience in what I am talking about shares some knowledge, that would be great!
    Thanks.

  • What is the best way to work on multiple songs in one long recording?

    I often record the local open-mic nights and generally just leave Logic recording the entire night, so I end up with lots of songs/bands in one big recording.
    What is the best way to work with such a recording? Is there a better way than just using the event marker when a new band plays?
    Also, once I get home I'd like to save each band to a separate project, but I can't seem to find an easy way to do this.
    Obviously each band needs a different mix, hence needing separate projects - or am I missing a much better working method?
    any help greatly appreciated!
    many thanks

    As for your second point, just do a "Save As" with a new file name and cut and erase any unwanted audio from the Arrange window (i.e. the bits that aren't in the song you want to keep separate). You can do this for as many songs as there are and create a different mix for each one, no problem. Multiple-select and drag all the regions to the beginning of the Arrange window (make sure you drag them at the same time so they don't go out of sync). All the separate songs and mixes can be saved in the same project folder the original audio files are stored in (the folder structure should be there already if you started a new Logic project, so no need to change that).
    I would think using markers during the show so you know where each song begins, plus the above method afterwards, would be a very good way of organizing things. You may want to stop recording between bands too, though I expect you are already doing this.
    It's really as easy as that. Sometimes things just are.

  • What's the best way ?

    Hey, I know there are a lot of guys out there smarter than me, so if any of you are there, please advise. I have to create 4 stereo tracks (submixes) for a live show, but in order to play them on the external device (a Tascam Portastudio) I have to convert the tracks to 8 x mono 16-bit, 44.1 kHz WAV files and then export them to the Tascam. What's the best way to do this?
    I know I can bounce the files internally in Logic, but how do I create the mono files?

    Smart? Noooo... Just familiar. I have spent many, many, many hours using Logic. : ^)
    Best,
    J
    Is that reasonable? : ^)

  • What's the Best Way to do Different Takes?

    Hello there ...
    I'm slowly making my way around GarageBand... I did a recording the other day of a combo... electric piano, bass, drums, sax, and a vocalist...
    It was the first recording I've ever done in my life... and I am amazed at the results... it sounds pretty darn good. I'm fortunate that I have good mikes and a PreSonus preamp... as I'm sure that helps a lot.
    That said ... there are things I would do differently now ... and I want to ask how I would best do them ...
    For example... I just tracked one huge file... then found out I would have to cycle each region and export them to make separate files as tracks on a CD.
    The cycling thing worked great ... except that all the parameter changes I made on a region to make it sound better (depending on the cut) affected the whole recording.
    So I realize I need to make whole new files for every take for when it comes down to mixing ...
    My question is ... what is the best way to do this? Just do a "save as", rename ... and then delete everything that was recorded from the previous take and start fresh?
    Thanks for any advice .....
    I also noticed that when I adjusted the volume of part of a track using the graph points in the expanded track volume panel ... that I couldn't go back and adjust the original main individual track volume in the track input bar window. It became greyed out ... am I missing something there?

    Now, if a particular take sounds good but you want to change your effect settings, save as a different GarageBand file and delete all the less-great takes. This will be your master for that tune. You will still have your other file with everything on it, but this is the best way to preserve the many and note the exception.
    Hi Schneb ...
    That sounds like a very good solution .... that way I would not waste any time during the session doing all those "save as" between takes .... I would just stop recording ... then start again and make a new region.
    It makes a lot of sense to do the "save as" thing later and then just delete everything but the particular region (take) you want to work on ...
    Thanks ...!
    About the volume thing .... so once you fiddle with expanded volume track settings .... you can no longer manipulate the track slider volume?
    If so... I remember there is some sort of command to restore the track volume to its original setting, is there not? At least that way you can start over...

  • What's the easiest way to do this?

    Hey, my music videos are included in the lists of songs, which usually causes my iPod to freeze. What's the easiest way to get them out of my song lists so that they only show up when I go to Videos? Should I just change the artist names on the music videos so they don't get mixed in with songs, or is there an easier way to do this that I am missing?

    Why restore it? Does this not happen to everyone? I thought it was just my fault for giving the same artist names to songs and videos...
