Improve Performance of Report

I have a report on the Opportunities subject area, but the performance of this report is very slow.
I have included the external ID, but the output always times out.
Is there a way to improve the performance of this report?
Edited by: user636400 on Aug 27, 2008 11:11 PM

Garima,
You mentioned that you used the ID field in the report, but you are interested in the sum of values... do you really need to see each record in the report, or an aggregation of data by user? The ID column forces the report to return every record and then perform the aggregation. Without the ID, the query can let the database server perform the aggregation rather than the report having to calculate it over the entire dataset.
Look for columns in your report that are unique at the record level and remove those first; then, as Alex said, add them back in one at a time and you will quickly find the column that is causing the report to time out.
Also, this message recently went out to all primary contacts:
We have discovered a product defect that enables Oracle CRM On Demand users to submit real-time reports that take longer than 10 minutes to run. When this occurs, there is the potential for performance degradation for all users who are co-located on your Pod.
A fix for this defect will be rolled out across our entire fleet over the next eight weeks. After this fix is deployed to your Pod, any real-time report that fails to complete within 10 minutes will be terminated and will result in the display of the following error message: "The user request exceeded the maximum query governing execution time."
Note that historical reports are unaffected by this fix. Should you have any real-time reports that exceed the 10-minute window, you will need to set them up as historical reports using Analytics. Alternatively, you can continue to run them as real-time reports but must reduce the amount of data selected using filters, column prompts, dashboard prompts, or report filter chains so that the report completes within the 10-minute window.
Regards,
Mike L.

Similar Messages

  • To improve performance for report

    Hi Expert,
    I have generated an open sales order report that fetches data from VBAK, and it is taking a long time even when executed in the foreground.
    It goes into a dump in the foreground, and I have also executed it in the background, but it goes into a dump there too.
    SELECT vbeln
               auart
               submi
               vkorg
               vtweg
               spart
               knumv
               vdatu
               vprgr
               ihrez
               bname
               kunnr
        FROM vbak
        APPENDING TABLE itab_vbak_vbap
        FOR ALL ENTRIES IN l_itab_temp
    *BEGIN OF change 17/Oct/2008.
        WHERE erdat IN s_erdat              AND
             submi = l_itab_temp-submi     AND
    *End of Changes 17/Oct/2008.
              auart = l_itab_temp-auart     AND
    *BEGIN OF change 17/Oct/2008.
              submi = l_itab_temp-submi     AND
    *End of Changes 17/Oct/2008.
              vkorg = l_itab_temp-vkorg     AND
              vtweg = l_itab_temp-vtweg     AND
              spart = l_itab_temp-spart     AND
              vdatu = l_itab_temp-vdatu     AND
              vprgr = l_itab_temp-vprgr     AND
              ihrez = l_itab_temp-ihrez     AND
              bname = l_itab_temp-bname     AND
              kunnr = l_itab_temp-sap_kunnr.
        DELETE itab_temp FROM l_v_from_rec TO l_v_to_rec.
      ENDDO.
    Please give me suggestions for improving the performance of the program.

    Hi,
    Try it like this:
    DATA:BEGIN OF itab1 OCCURS 0,
         vbeln LIKE vbak-vbeln,
         END OF itab1.
    DATA: BEGIN OF itab2 OCCURS 0,
          vbeln LIKE vbap-vbeln,
          posnr LIKE vbap-posnr,
          matnr LIKE vbap-matnr,
          END OF itab2.
    DATA: BEGIN OF itab3 OCCURS 0,
          vbeln TYPE vbeln_va,
          posnr TYPE posnr_va,
          matnr TYPE matnr,
          END OF itab3.
    TABLES: vbak.
    SELECT-OPTIONS: s_vbeln FOR vbak-vbeln.
    START-OF-SELECTION.
      SELECT vbeln FROM vbak INTO TABLE itab1
      WHERE vbeln IN s_vbeln.
      IF itab1[] IS NOT INITIAL.
        SELECT vbeln posnr matnr FROM vbap INTO TABLE itab2
        FOR ALL ENTRIES IN itab1
        WHERE vbeln = itab1-vbeln.
      ENDIF.
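    A small, hedged addition to the reply above (itab1 and vbeln are taken from that example): deduplicating and sorting the driver table before the second SELECT keeps the generated WHERE clause small, and the IF itab1[] IS NOT INITIAL check shown above stays essential, because FOR ALL ENTRIES with an empty driver table ignores the WHERE clause entirely and reads the whole table.
    * Sketch only: run these two lines just before the emptiness check above,
    * so FOR ALL ENTRIES works on a sorted, duplicate-free driver table.
    SORT itab1 BY vbeln.
    DELETE ADJACENT DUPLICATES FROM itab1 COMPARING vbeln.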

  • Split of Cubes to improve the performance of reports

    Hello Friends. We are now implementing the Finance GL line items for global automobile operations at BMW, with services outsourced to Japan, which has increased the data volume to 300 million records over the 2 years since go-live. We have 200 company codes.
    How to improve performance:
    1. Please suggest: I want to split the cubes based on the year and on company codes, which are region-based. That means Europeans would run the report from one cube, and the same report for America would run on another cube.
    But the question here is: if I make 8 cubes (2 for each year: 1 for current-year company code ABC and 1 for current-year DEF), (2 for each year: 1 for previous-year company code ABC and 1 for previous-year DEF), (2 for each year: 1 for archive-year company code ABC and 1 for archive-year DEF),
    then how can I tell the query which cube to read the data from? Since company code is an authorization variable, picking up that value of the company code and building a customer exit variable for the InfoProvider will add a lot of work.
    Is there any good way to do this? Does splitting the cubes make sense based on company code, or should it just be done by year?
    Please suggest an excellent approach, step by step, to split cubes for 60 million records in 2 years; growth will be the same for the next 4 years since more company codes are coming.
    2. Please suggest whether splitting the cube will improve report performance or make it worse, since the query would now need to go through 5-6 different cubes.
    Thanks
    Regards
    Soniya

    Hi Soniya,
    There are two ways in which you can split your cube: either based on year or based on company code (i.e. region). While loading the data, write code in the start routine that filters the data. For example, if you are loading data for three regions, say 1, 2, and 3, your code will be something like
    DELETE SOURCE_PACKAGE WHERE REGION EQ '2' OR
    REGION EQ '3'.
    This will load data to your cube corresponding to region 1.
    You can build your reports either on these cubes, or you can have a MultiProvider above these cubes and build the report on that.
    Thanks..
    Shambhu
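    If the split ends up being by year and company code rather than by region, the same start-routine pattern applies. A minimal sketch, assuming illustrative field names FISCYEAR and COMP_CODE in the source structure (adjust to the actual fields), for the cube that holds the current year and company code ABC:
    * Sketch only: keep the rows this cube owns and drop everything else.
    DELETE SOURCE_PACKAGE WHERE FISCYEAR NE '2008'
                             OR COMP_CODE NE 'ABC'.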

  • Improving Performance of a multidatabase report

    Hi All,
    This is regarding a multiple-database report.
    I am getting a query from MySQL like this:
    SELECT *  FROM OBJSETTING_DATA
    and another query from Oracle like this:
    select country,empno from HO_USERS
    and another, also from Oracle, like this:
    select linemanager,empno from hr_apps
    The parameters are Year, Division, and Status, and I am linking empno using the CR Links tab.
    I will probably have only a few hundred records.
    In my report I need to show country, empno, name, linemanager, grade, and status.
    So country and linemanager come from Oracle and the rest all come from MySQL.
    Please suggest what steps to follow to improve performance.

    Hi Abhilash and Sastry,
    Instead of linking the tables in the Links tab, I did it like this, and somehow I was able to improve performance:
    Created the main report using the MySQL query.
    Created 2 subreports using the Oracle DB with an empno parameter, linked the empno field to the empno parameter using the subreport Links tab, and placed the subreports in the details section of the main report as per my requirement.
    I am getting somewhat better performance compared to earlier.
    Please suggest

  • How to improve query performance when reporting on ods object?

    Hi,
    Can anybody give me the answer: how can I improve query performance when reporting on an ODS object?
    Thanks in advance,
    Ravi Alakuntla.

    Hi Ravi,
    Check these links, which may cater to your requirement:
    Re: performance issues of ODS
    Which criteria to follow to pick InfoObj. as secondary index of ODS?
    PDF on BW performance tuning,
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    Regards,
    Mani.

  • Improving performance while adding groups

    Hello,
    I've been monitoring my Crystal Reports for a week or so and report performance is going for a toss. Let me narrate this in a little detail. I have created 3 groups to select dynamic parameters, and each group has a formula of its own. In my parameters I have added one parameter with 7 entities (which is hard-coded); a user can select any 3 of those seven entities when initially refreshing the document, and each parameter entity is bundled in a conditional formula (listed under formula fields) for that entity. The user may select any entity and get the respective data for that entity.
    For all this I have created 3 groups, and the same formula is pasted under all 3 groups. I have then made the formula group the one selected under the Group Expert. The report works fine and yields correct data. However, when grouping on the formulas, Crystal selects all the database tables from the database fields, as these tables are mentioned under the group formula. Agreed, all fine.
    But when I run the report, the "Show SQL Query" output selects all the database tables in the SELECT clause, which should not be the case. Because of this, even if I have selected an entity with only 48 to 50 records, Crystal tends to select all 1,656,053 records from the database, which is hampering Crystal's performance big time. When I run the same query in SQL it retrieves the data in just 8 seconds, but because Crystal selects all the records it returns data after 90 seconds, which is frustrating for the user.
    Please suggest me a workaround for this. Please help.
    Thank you.

    Hi,
    I suspect the problem isn't necessarily just your grouping but your Record Selection Formula as well. If you do not see a complete WHERE clause, it is because your Record Selection Formula is too complicated for Crystal to translate to SQL.
    The same could be said for your grouping. There are two suggestions I can offer:
    1) Instead of linking the tables in Crystal, use a SQL Command and generate your query in SQL directly. You can use parameters and, at the very least, get a working WHERE clause.
    2) Create a stored procedure or view that contains the logic you need to retrieve the records.
    At the very least you want to be able to streamline the query to improve performance. Grouping may not be possible, but my guess is it's more the selection formula than the grouping.
    Good luck,
    Brian

  • Performance of report & uni

    Hi all
    How do I improve performance in BusinessObjects, in universe design and in report development?

    Hello,
    follow this link to find a white paper that gives some hints on where to work to improve the performance of a BI project using universes:
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/b08b5968-6d64-2b10-b690-bece9462dfff
    There is no magic formula, and there are many places where you can act.
    Hope that helps
    PPaolo

  • Improving performance of zreport

    Hi,
    I have a requirement to improve the performance of a Z report.
    (1) Apart from nested loops, SELECTs within loops, and inner joins, are there any other changes I have to consider?
    (2) There are 4 nested loops, which go up to 1 level. Is it mandatory to avoid nested loops that only go up to 1 level?
    (3) How do I avoid SELECT ... ENDSELECT within LOOP ... ENDLOOP?
    (4) The FM WS_DOWNLOAD, which is obsolete, is called for downloading data into Excel. Which one do I have to choose to replace this FM?
    I will reward if it is useful.

    Hi,
    Downloading Internal Tables to Excel
    Often we face situations where we need to download internal table contents onto an Excel sheet. We are familiar with the function module WS_DOWNLOAD. Though this function module downloads the contents onto the Excel sheet, there cannot be any column headings, and we cannot differentiate the primary keys just by looking at the Excel sheet.
    For this purpose, we can use the function module XXL_FULL_API. The Excel sheet generated by this function module contains the column headings, and the key columns are highlighted in a different color. Other options available with this function module are that we can swap two columns or suppress a field from being displayed on the Excel sheet. Simple code for the usage of this function module is given below.
    Program code:
    REPORT Excel.
    TABLES:
      sflight.
    * Header data
    DATA :
      header1 LIKE gxxlt_p-text VALUE 'Suresh',
      header2 LIKE gxxlt_p-text VALUE 'Excel sheet'.
    * Internal table for holding the SFLIGHT data
    DATA BEGIN OF t_sflight OCCURS 0.
            INCLUDE STRUCTURE sflight.
    DATA END   OF t_sflight.
    * Internal table for holding the horizontal key
    DATA BEGIN OF  t_hkey OCCURS 0.
            INCLUDE STRUCTURE gxxlt_h.
    DATA END   OF t_hkey .
    * Internal table for holding the vertical key
    DATA BEGIN OF t_vkey OCCURS 0.
            INCLUDE STRUCTURE gxxlt_v.
    DATA END   OF t_vkey .
    * Internal table for holding the online text
    DATA BEGIN OF t_online OCCURS 0.
            INCLUDE STRUCTURE gxxlt_o.
    DATA END   OF t_online.
    * Internal table to hold print text
    DATA BEGIN OF t_print OCCURS 0.
            INCLUDE STRUCTURE gxxlt_p.
    DATA END   OF t_print.
    * Internal table to hold SEMA data
    DATA BEGIN OF t_sema OCCURS 0.
            INCLUDE STRUCTURE gxxlt_s.
    DATA END   OF t_sema.
    * Retrieving data from SFLIGHT
    SELECT * FROM sflight
             INTO TABLE t_sflight.
    * Text which will be displayed online is declared here
    t_online-line_no    = '1'.
    t_online-info_name  = 'Created by'.
    t_online-info_value = 'KODANDARAMI REDDY.S'.
    APPEND t_online.
    * Text which will be printed out
    t_print-hf     = 'H'.
    t_print-lcr    = 'L'.
    t_print-line_no = '1'.
    t_print-text   = 'This is the header'.
    APPEND t_print.
    t_print-hf     = 'F'.
    t_print-lcr    = 'C'.
    t_print-line_no = '1'.
    t_print-text   = 'This is the footer'.
    APPEND t_print.
    * Defining the vertical key columns
    t_vkey-col_no   = '1'.
    t_vkey-col_name = 'MANDT'.
    APPEND t_vkey.
    t_vkey-col_no   = '2'.
    t_vkey-col_name = 'CARRID'.
    APPEND t_vkey.
    t_vkey-col_no   = '3'.
    t_vkey-col_name = 'CONNID'.
    APPEND t_vkey.
    t_vkey-col_no   = '4'.
    t_vkey-col_name = 'FLDATE'.
    APPEND t_vkey.
    * Header text for the data columns
    t_hkey-row_no = '1'.
    t_hkey-col_no = 1.
    t_hkey-col_name = 'PRICE'.
    APPEND t_hkey.
    t_hkey-col_no = 2.
    t_hkey-col_name = 'CURRENCY'.
    APPEND t_hkey.
    t_hkey-col_no = 3.
    t_hkey-col_name = 'PLANETYPE'.
    APPEND t_hkey.
    t_hkey-col_no = 4.
    t_hkey-col_name = 'SEATSMAX'.
    APPEND t_hkey.
    t_hkey-col_no = 5.
    t_hkey-col_name = 'SEATSOCC'.
    APPEND t_hkey.
    t_hkey-col_no = 6.
    t_hkey-col_name = 'PAYMENTSUM'.
    APPEND t_hkey.
    * Populating the SEMA data
    t_sema-col_no  = 1.
    t_sema-col_typ = 'STR'.
    t_sema-col_ops = 'DFT'.
    APPEND t_sema.
    t_sema-col_no = 2.
    APPEND t_sema.
    t_sema-col_no = 3.
    APPEND t_sema.
    t_sema-col_no = 4.
    APPEND t_sema.
    t_sema-col_no = 5.
    APPEND t_sema.
    t_sema-col_no = 6.
    APPEND t_sema.
    t_sema-col_no = 7.
    APPEND t_sema.
    t_sema-col_no = 8.
    APPEND t_sema.
    t_sema-col_no = 9.
    APPEND t_sema.
    t_sema-col_no = 10.
    t_sema-col_typ = 'NUM'.
    t_sema-col_ops = 'ADD'.
    APPEND t_sema.
    CALL FUNCTION 'XXL_FULL_API'
      EXPORTING
      DATA_ENDING_AT          = 54
      DATA_STARTING_AT        = 5
       filename                = 'TESTFILE'
       header_1                = header1
       header_2                = header2
       no_dialog               = 'X'
       no_start                = ' '
        n_att_cols              = 6
        n_hrz_keys              = 1
        n_vrt_keys              = 4
       sema_type               = 'X'
      SO_TITLE                = ' '
      TABLES
        data                    = t_sflight
        hkey                    = t_hkey
        online_text             = t_online
        print_text              = t_print
        sema                    = t_sema
        vkey                    = t_vkey
    EXCEPTIONS
       cancelled_by_user       = 1
       data_too_big            = 2
       dim_mismatch_data       = 3
       dim_mismatch_sema       = 4
       dim_mismatch_vkey       = 5
       error_in_hkey           = 6
       error_in_sema           = 7
       file_open_error         = 8
       file_write_error        = 9
       inv_data_range          = 10
       inv_winsys              = 11
       inv_xxl                 = 12
        OTHERS                  = 13.
    IF sy-subrc <> 0.
      MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
              WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.
    reward if helpful
    raam
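    The reply above covers point (4); GUI_DOWNLOAD is the other function module commonly used in place of the obsolete WS_DOWNLOAD. For point (3), avoiding SELECT ... ENDSELECT inside LOOP ... ENDLOOP, here is a minimal sketch of the usual replacement, one array fetch plus READ TABLE ... BINARY SEARCH (the lt_orders / ls_order driver table and work area are illustrative and assumed to contain a vbeln field):
    * Sketch only: fetch once outside the loop instead of a SELECT per row.
    DATA: BEGIN OF ls_vbap,
            vbeln TYPE vbeln_va,
            posnr TYPE posnr_va,
            matnr TYPE matnr,
          END OF ls_vbap,
          lt_vbap LIKE STANDARD TABLE OF ls_vbap.
    IF lt_orders[] IS NOT INITIAL.
      SELECT vbeln posnr matnr FROM vbap INTO TABLE lt_vbap
        FOR ALL ENTRIES IN lt_orders
        WHERE vbeln = lt_orders-vbeln.
      SORT lt_vbap BY vbeln.
    ENDIF.
    LOOP AT lt_orders INTO ls_order.
      READ TABLE lt_vbap INTO ls_vbap
           WITH KEY vbeln = ls_order-vbeln BINARY SEARCH.
      IF sy-subrc = 0.
        " ... work with ls_vbap here instead of a nested SELECT ...
      ENDIF.
    ENDLOOP.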

  • How to improve performance on MacBook Pro 10.5.8?

    I have a MacBook Pro running 10.5.8 that has become way too slow. It hesitates between keystrokes and their response, and it is slow to search and navigate websites.
    I just checked and there are no updates due.
    What can I do to improve performance?

    Please read this whole message before doing anything.
    This procedure is a diagnostic test. It won’t solve your problem. Don’t be disappointed when you find that nothing has changed after you complete it.
    Third-party system modifications are a common cause of usability problems. By a “system modification,” I mean software that affects the operation of other software -- potentially for the worse. The following procedure will help identify which such modifications you've installed. Don’t be alarmed by the complexity of these instructions -- they’re easy to carry out and won’t change anything on your Mac.
    These steps are to be taken while booted in “normal” mode, not in safe mode. If you’re now running in safe mode, reboot as usual before continuing.
    Below are several lines of text in monospaced type, which are UNIX shell commands. They’re harmless, but they must be entered exactly as given in order to work. If you have doubts about the safety of running these commands, search this site for other discussions in which they’ve been used without any report of ill effects.
    Some of the commands will line-wrap in your browser, but each one is really just a single line, all of which must be selected. You can accomplish this easily by triple-clicking anywhere in the line. The whole line will highlight, and you can then either copy or drag it. The headings “Step 1” and so on are not part of the commands.
    Note: If you have more than one user account, Step 2 must be taken as an administrator. Ordinarily that would be the user created automatically when you booted the system for the first time. The other steps should be taken as the user who has the problem, if different. Most personal Macs have only one user, and in that case this paragraph doesn’t apply.
    To begin, launch the Terminal application; e.g., by entering the first few letters of its name in a Spotlight search. A text window will open with a line already in it, ending either in a dollar sign (“$”) or a percent sign (“%”). If you get the percent sign, enter “sh” (without the quotes) and press return. You should then get a new line ending in a dollar sign.
    Step 1
    Copy or drag -- do not type -- the line below into the Terminal window, then press return:
    kextstat -kl | awk '!/com\.apple/ {print $6 $7}'
    Post the lines of output (if any) that appear below what you just entered (the text, please, not a screenshot.)
    Step 2
    Repeat with this line:
    sudo launchctl list | sed 1d | awk '!/0x|com\.apple/ {print $3}'
    This time, you'll be prompted for your login password, which won't be displayed when you type it. You may get a one-time warning not to screw up. You don't need to post the warning.
    Step 3
    launchctl list | sed 1d | awk '!/0x|com\.apple/ {print $3}'
    Step 4
    ls -1A /e*/mach* {,/}L*/{Ad,Compon,Ex,Fram,In,Keyb,La,Mail/Bu,P*P,Priv,Qu,Scripti,Servi,Sta}* L*/Fonts 2> /dev/null
    Important: If you synchronize with a MobileMe account, your me.com email address may appear in the output of the above command. If so, change it to something like “[email protected]” before posting.
    Step 5
    osascript -e 'tell application "System Events" to get the name of every login item'
    Remember, steps 1-5 are all drag-and-drop or copy-and-paste, whichever you prefer -- no typing, except your password.
    You can then quit Terminal.

  • Improve Performance with QaaWS with multiple RefreshButtons??

    Hi,
    I read that a connection can open a maximum of 2 QaaWS calls. I want to improve performance.
    Currently I am trying to refresh 6 connections with one button. Would it improve performance if I split this 1 button with 6 connections into 3 buttons with 2 connections each?
    Thanks,
    BWBW

    Hi
    HTTP 1.1 limits the number of concurrent HTTP requests to a maximum of two, so your dashboard will actually be able to send and receive a maximum of 2 requests simultaneously; the third will stand by until one of the first two is handled.
    QaaWS performance is mostly affected by database performance, so if you plan to move to Live Office (LO) to improve performance, I'd recommend you use LO with WebI report parts; if you use LO to consume a universe query, you will experience similar performance limitations.
    If you actually want to consume WebI report parts and need report filters, you can also consider XI 3.1 SP2 BI Services, where performance is better than QaaWS and interactions are also easier to implement.
    Hope that helps,
    David.

  • Brush Lag? - Here are OpenGL settings that improve performance

    For anyone having horrible Brush lag, please try the following settings and report your findings in this thread.
    After much configuring and comparing CS4 vs CS3, I found that these settings do improve CS4's brush lag significantly. CS3 is still faster, but these settings made CS4 brush strokes a lot more responsive.
    Please try these settings and share your experiences.
    NOTE: these do not improve clone tool performance; the best way to improve clone performance right now seems to be to turn off the clone tool's overlay feature.
    Perhaps Adam or Chris from Adobe could explain what is happening here. The most significant option that improved performance appears to be "Use for Image Display - OFF". I have no idea what this feature does or does not do, but it does seem to be the biggest performance hit. The next most influential setting seems to be "3D Interaction Acceleration - OFF".
    Set the following settings in Photoshop CS4 Preferences:
    OpenGL - ON
    Vsync - OFF
    3D Interaction Acceleration - OFF
    Force Bilinear Interpolation - OFF
    Advanced Drawing - ON
    Use for Image Display - OFF
    Color Matching - ON

    Hi guys,
    As I am having very few problems with my system, I thought I should post my specs and settings for comparison purposes.
    System - Asus p5Q deluxe,
    Intel quad 9650 3ghz,
    16gb pc6400 800Mhz ram,
    loads of drives ( system drive on 10K 74gb Raptor, Vista partition on fast 500gb drive, Ps scratch on an another fast 500gb drive, the rest are storage/bk-ups ),
    Gainward 8800GTS 640mb GPU,
    30ins monitor @2560x1600 and a 24ins 1920x1200,
    Wacom tablet.
    PS CS4 x64bit
    Vista x64
    All latest drivers
    No Antivirus. Index and superfetch is ON, Defender is ON
    No internet connection except for updates
    No faffing around with vista processes
    Wacom Virtual HID and Wacom Mouse Monitor are disabled
    nVidia GPU set to default settings
    On this system I am able to produce massive images; the last major size was 150x600cm @ 200ppi, and the brushes are smooth until they increase to around 700+ pixels, at which point there is a slight lag of around 1 second if I draw fast; if I take it slow, there's no lag. All UI is snappy.
    I have the following settings in Photoshop CS4 Preferences:
    Actual Ram: 14710MB
    Set to use (87%) :12790MB
    Scratch Disk
    on a separate fast 500gb - to become a 80gb ssd soon
    History state: 50
    Cache: 8
    OpenGL - ON
    Vsync - OFF
    3D Interaction Acceleration - OFF
    Force Bilinear Interpolation - OFF
    Advanced Drawing - ON
    Use for Image Display - ON
    Color Matching - ON
    I hope this helps in some way too...
    EDIT: I should also add that I defragment all my drives every night with third-party defrag software, due to the large image files.

  • Improve performance using mass activities -step by step OSS 144461

    Hi everybody,
    A customer report is handling a high volume of data and takes a very long time.
    We would like to execute it using the mass activity principle (OSS note 144461).
    Do you think that can help us improve performance?
    Can anybody explain what we have to do?
    Thanks in advance
    Marie

    If you're trying to tune a custom report, I think the best place to start is by looking at the report code. Can you post it?
    Rob

  • FI-CA events to improve performance

    Hello experts,
    Does anybody use the FI-CA events to improve extraction performance for datasources 0FC_OP_01 and 0FC_CI_01 (open and cleared items)?
    It seems that these specific exits associated with BW events were developed especially to improve performance.
    Any documentation or guide would be appreciated.
    Thanks.
    Thibaud.

    Thanks to all for the replies
    @Sybrand
    Please answer first whether the column is stored in a separate lobsegment.
    No. The table, index, LOB, and LOB index all use the same tablespace. I missed adding this point (moving the LOB to a separate tablespace) as part of the table modifications.
    @Hemant
    There's a famous paper / blog post about CLOBs and Database Flashback. If I find it, I'll post the URL.
    Is this the one you are referring to
    http://laimisnd.wordpress.com/2011/03/25/lobs-and-flashback-database-performance/
    By moving the CLOB column to a different block size, I will test the performance improvement it gives and will share the results.
    We don't need any data from this table. The XML file contains details about fingerprints, and once the application server completes the job, the XML data is deleted from this table.
    So there is no need for backup/recovery operations on this table. The client will be able to replay the transactions if any problem occurs.
    @Billy
    We are not performing XML parsing on the DB side. We get the XML data from the client -> insert into the table -> the client selects from the table -> upon successful completion of the job on the client, the XML data gets deleted.
    Regarding binding of the LOB from the client side, I will check that as well to reduce round trips.
    By changing the block size, I can keep db_32K_cache_size=2G and keep this table in the CACHE. If I put my table directly into the CACHE, it will age out all other operations from the buffer, which makes things worse for us.
    This insert is part of a transaction (registration of a fingerprint) and is the only statement taking time at the moment compared to the other statements in the transaction.
    Thanks,
    Arun

  • How to preload sound into memory to improve performance?

    Hello all
    I have an application that needs to play 4 different short wave files on certain events. The wave files are small (less than 1 sec each), so they can be preloaded into memory, but I don't really know how to do that. This is my current code... Performance is really important here, so the faster users can hear the sounds, the better...
    import java.io.*;
    import javax.sound.sampled.*;
    import javax.swing.*;
    import java.awt.event.*;
    public class PlaySound implements ActionListener {
         private Clip clip = null;

         public void play(String name) {
              if (clip != null) {
                   clip.stop();
                   clip = null;
              }
              loadClip(name);
              clip.start();
         }

         private void loadClip(String fnm) {
              try {
                   AudioInputStream stream = AudioSystem.getAudioInputStream(new File(fnm + ".wav"));
                   AudioFormat format = stream.getFormat();
                   DataLine.Info info = new DataLine.Info(Clip.class, format);
                   if (!AudioSystem.isLineSupported(info)) {
                        JOptionPane.showMessageDialog(null, "Unsupported sound line", "Warning!", JOptionPane.WARNING_MESSAGE);
                   } else {
                        clip = (Clip) AudioSystem.getLine(info);
                        clip.open(stream);
                        stream.close();
                   }
              } catch (Exception e) {
                   JOptionPane.showMessageDialog(null, "loadClip E: " + e.toString(), "Warning!", JOptionPane.WARNING_MESSAGE);
              }
         }

         // Stub needed to compile because the class declares ActionListener;
         // the original post does not show this method.
         public void actionPerformed(ActionEvent e) {
         }

         public static void main(String[] args) {
              new PlaySound().play("a wav file name");
         }
    }
    I would appreciate it if someone can point out how I can preload them to improve performance... Thanks in advance!

    The message above should be:
    OMG, me dumb you smart Florian...
    Thank you for your suggestion... It's not the best or anything close to what I thought it would be, but it's certainly one way to do it and better than what I've got now...
    Thanks again Florian, I really appreciate it!!
    BTW, is there anything that would produce the sound faster than this?
    Message was edited by:
    BuggyVB

  • How to improve performance of MediaPlayer?

    I tried to use the MediaPlayer with a On2 VP6 flv movie.
    Showing a video with a resolution of 1024x768 works.
    Showing a video with a resolution of 1280x720 and an average bitrate of 1700 kb/s leads to the video signal lagging a couple of seconds behind the audio signal. VLC, Media Player Classic, and a couple of other players have no problem with the video; only the FX MediaPlayer shows poor performance.
    Additionally, mouse events in a second stage (the first stage is used for the video) are not processed in 2 of 3 cases. If the MediaPlayer is switched off, the mouse events work reliably.
    Does somebody know a solution for these problems?
    Cheers
    masim

    duplicate thread..
    How to improve performance of attached query
