Help to improve expdp performance and compression

Hi,
We have Oracle Standard Edition, which does not support the parallel and compression features of expdp.
The expdp dump file is around 90 GB and takes 45 minutes to create on the production server. A script then compresses the file with the gzip utility, which takes 80 minutes.
Copying the compressed file from production to the staging server takes another 47 minutes.
We have automated the process, but expdp + compression + copy takes a long time (around 3 hours). On the staging server it then takes more than 4 hours to create the staging DB.
Is there any way I can improve the performance of these three operations?
Can I do compression while the file is being exported? I tried using pipes on Unix, but that doesn't work with expdp.
We don't want to use a network link.
Does expdp write the file sequentially? If so, can I start gzipping in parallel as files are exported?
I also tried compressing with the gzip -1 option, but that increased the compressed file size by 30%, which in turn increased the copy time to the staging server.
Please help
Thanks,
Bharani J

Hi,
Why 'do not support parallel'?
I understand you don't want to use a database link; I had this problem here (I used expdp).
This is what I've done: a script that does
a full logical backup using expdp,
a bzip2 to compress,
and a transfer to the destination machine.
It would have been far easier if I could have used the database link, but I couldn't.
However, I did use parallel in the expdp command.
Hope you find a good solution; one possible shape for such a script is sketched below.
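A minimal sketch of that kind of script (paths, host names, and the FILESIZE split are illustrative assumptions, not taken from the thread). Splitting the dump into fixed-size pieces lets the later steps work per piece, and piping gzip straight into ssh compresses and transfers in one pass without landing a second compressed copy on local disk:

    #!/bin/sh
    # Sketch only: adjust paths, credentials and sizes for your site.
    # 1) Export, split into fixed-size pieces.
    expdp system/****** full=y directory=DATA_PUMP_DIR \
          dumpfile=full_%U.dmp filesize=5000M logfile=full_exp.log

    # 2) Compress each piece and stream it to staging in one step, several
    #    pieces at a time; the remote side decompresses on the fly, so the
    #    staging server ends up with ready-to-import .dmp files.
    cd /u01/app/oracle/dpdump
    for f in full_*.dmp; do
      gzip -c "$f" | ssh oracle@staging "gzip -d > /u01/import/$f" &
    done
    wait

If installing extra software is an option, pigz is a drop-in parallel replacement for gzip and can cut the 80-minute compression step substantially on a multi-core host.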

Similar Messages

  • Help to improve the performance of a procedure.

    Hello everybody,
    First to introduce myself. My name is Ivan and I recently started learning SQL and PL/SQL. So don't go hard on me. :)
    Now let's jump to the problem. We have a table (a big one, but we'll only need a few fields) with some information about calls, called table1. There is another table with exactly the same structure, which is empty, and we have to transfer the records from the first one into it.
    The shorter calls (less than 30 minutes) have segmentID = 'C1'.
    The longer calls (more than 30 minutes) are recorded as more than one record (1 for every 30 minutes). The first record (first 30 minutes of the call) has segmentID = 'C21'. It is the first so we have only one of these for every different call. Then we have the next (middle) parts of the call, which have segmentID = 'C22'. We can have more than 1 middle part and again the maximum minutes in each is 30 minutes. Then we have the last part (again max 30 minutes) with segmentID = 'C23'. As with the first one we can have only one last part.
    So far, so good. Now we need to insert these call records into the second table. The C1 are easy - one record = one call. But the partial ones we need to combine so they become one whole call. This means that we have to take one of the first parts (C21), find if there is a middle part (C22) with the same calling/called numbers and with 30 minutes difference in date/time, then search again if there is another C22 and so on. And last we have to search for the last part of the call (C23). In the course of these searches we sum the duration of each part so we can have the duration of the whole call at the end. Then we are ready to insert it in the new table as a single record, just with new duration.
    But here comes the problem with my code... The table has A LOT of records, and this solution, despite the fact that it works (at least in the tests I've made so far), is REALLY slow.
    As I said I'm new to PL/SQL and I know that this solution is really newbish, but I can't find another way of doing this.
    So I decided to come here and ask you for some tips on how to improve the performance of this.
    I think you are getting confused already, so I'm just going to put some comments in the code.
    I know it's not a procedure as it stands now, but it will be once I write better code. I don't think it matters for now.
    DECLARE
    CURSOR cur_c21 IS
        select * from table1
        where segmentID = 'C21'
        order by start_date_of_call;     -- start_date_of_call holds the beginning of a specific part of the call. It's a DATE.
    CURSOR cur_c22 IS
        select * from table1
        where segmentID = 'C22'
        order by start_date_of_call;
    CURSOR cur_c22_2 IS
        select * from table1
        where segmentID = 'C22'
        order by start_date_of_call;
    CURSOR cur_c23 IS
        select * from table1
        where segmentID = 'C23'
        order by start_date_of_call;
    v_temp_rec_c22 cur_c22%ROWTYPE;
    v_dur table1.duration%TYPE;           -- used to store the duration of the call. It's a NUMBER.
    BEGIN
    insert into table2
    select * from table1 where segmentID = 'C1';     -- inserting the calls which are less than 30 minutes long
    -- and here starts the mess
    FOR rec_c21 IN cur_c21 LOOP        -- taking the first part of the call
       v_dur := rec_c21.duration;      -- recording its duration
       FOR rec_c22 IN cur_c22 LOOP     -- checking whether there is a middle part for the call
          IF rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND
            (rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48)
    /* if the numbers are the same and the date difference is 30 minutes then we have a middle part and we start searching for the next middle. */
          THEN
             v_dur := v_dur + rec_c22.duration;     -- updating the new duration
             v_temp_rec_c22 := rec_c22;             -- keeping the current record in another variable because it is used in the next check
             FOR rec_c22_2 IN cur_c22_2 LOOP
                IF rec_c22_2.callingnumber = v_temp_rec_c22.callingnumber AND rec_c22_2.callednumber = v_temp_rec_c22.callednumber AND
                  (rec_c22_2.start_date_of_call - v_temp_rec_c22.start_date_of_call) = (1/48)
    /* logic is the same as before but comparing with the last value in v_temp_rec_c22.
    And because the data in the cursors is ordered by date in ascending order it's easy to search for further middle parts. */
                THEN
                   v_dur := v_dur + rec_c22_2.duration;
                   v_temp_rec_c22 := rec_c22_2;
                END IF;
             END LOOP;
          END IF;
          EXIT WHEN rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND
                   (rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48);
    /* exiting the loop if we have at least one middle part.
    (I couldn't find a cleaner way to write this, like "exit when (the above IF is true)".) */
       END LOOP;
       FOR rec_c23 IN cur_c23 LOOP
          IF (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
             (rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration
    /* we should always have one last part, so we need this check.
    Without the "v_dur != rec_c21.duration" part it would execute the code inside only if we have no middle parts
    (yes, we can have those situations in calls longer than 30 and shorter than 60 minutes). */
          THEN
             v_dur := v_dur + rec_c23.duration;
             rec_c21.duration := v_dur;             -- updating the duration
             rec_c21.segmentID := 'C1';
             INSERT INTO table2 VALUES rec_c21;     -- inserting the whole call into table2
          END IF;
          EXIT WHEN (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
                    (rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration;
                    -- exit the loop when the last part has been found
       END LOOP;
    END LOOP;
    END;
    I'm using Oracle 11g and version 1.5.5 of SQL Developer.
    It's my first post here so hope this is the right sub-forum.
    I tried to explain everything as thoroughly as possible (sorry if it's too long), and I think the code got somewhat hard to read with all these comments. If you want, I can remove them.
    I know I'm still missing a lot of knowledge so every help is really appreciated.
    Thank you very much in advance!

    Atiel wrote:
    Thanks for the suggestion, but the thing is that segmentID must stay the same for all. The data in this field just tells us whether this is a record of a complete call (C1) or a partial record of a call (C21, C22, C23). So in table2, as every record will be a complete call, the segmentID must be C1 for all.
    Well that's not a problem. You just hard-code 'C1' instead of applying the row number as I was doing:
    SQL> ed
    Wrote file afiedt.buf
      1  select 'C1' as segmentid
      2        ,start_date_of_call, duration, callingnumber, callednumber
      3  from (
      4        select distinct
      5               min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
      6              ,sum(duration) over (partition by callingnumber, callednumber) as duration
      7              ,callingnumber
      8              ,callednumber
      9        from table1
    10*      )
    SQL> /
    SEGMENTID  START_DATE_OF_CALL     DURATION CALLINGNUMBER   CALLEDNUMBER
    C1         11-MAY-2012 12:13:10 8020557824 1982032041      0631432831624
    C1         15-MAR-2012 09:07:26  269352960 5581790386      0113496771567
    C1         31-JUL-2012 23:20:23  134676480 4799842978      0813391427349
    Another thing is that, as I said above, the actual table has 120 fields. Do I have to list them all manually if I use something similar?
    If that's what you need, then yes, you would have to list them. You only get data if you tell it you want it. ;)
    Of course if you are taking the start_date_of_call, callingnumber and callednumber as the 'key' to the record, then you could join the results of the above back to the original table1 and pull out the rest of the columns that way...
    SQL> select * from table1;
    SEGMENTID  START_DATE_OF_CALL     DURATION CALLINGNUMBER   CALLEDNUMBER          COL1       COL2       COL3
    C1         31-JUL-2012 23:20:23  134676480 4799842978      0813391427349          556         40       5.32
    C21        15-MAR-2012 09:07:26  134676480 5581790386      0113496771567          219        100      10.16
    C23        11-MAY-2012 09:37:26  134676480 5581790386      0113496771567          321         73       2.71
    C21        11-MAY-2012 12:13:10 3892379648 1982032041      0631432831624          959         80       2.87
    C22        11-MAY-2012 12:43:10 3892379648 1982032041      0631432831624          375         57       8.91
    C22        11-MAY-2012 13:13:10  117899264 1982032041      0631432831624          778         27       1.42
    C23        11-MAY-2012 13:43:10  117899264 1982032041      0631432831624          308         97       3.26
    7 rows selected.
    SQL> ed
    Wrote file afiedt.buf
      1  with t2 as (
      2  select 'C1' as segmentid
      3        ,start_date_of_call, duration, callingnumber, callednumber
      4  from (
      5        select distinct
      6               min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
      7              ,sum(duration) over (partition by callingnumber, callednumber) as duration
      8              ,callingnumber
      9              ,callednumber
    10        from table1
    11       )
    12  )
    13  --
    14  select t2.segmentid, t2.start_date_of_call, t2.duration, t2.callingnumber, t2.callednumber
    15        ,t1.col1, t1.col2, t1.col3
    16  from   t2
    17         join table1 t1 on (   t1.start_date_of_call = t2.start_date_of_call
    18                           and t1.callingnumber = t2.callingnumber
    19                           and t1.callednumber = t2.callednumber
    20*                          )
    SQL> /
    SEGMENTID  START_DATE_OF_CALL     DURATION CALLINGNUMBER   CALLEDNUMBER          COL1       COL2       COL3
    C1         11-MAY-2012 12:13:10 8020557824 1982032041      0631432831624          959         80       2.87
    C1         15-MAR-2012 09:07:26  269352960 5581790386      0113496771567          219        100      10.16
    C1         31-JUL-2012 23:20:23  134676480 4799842978      0813391427349          556         40       5.32
    SQL>
    Of course this is pulling back the additional columns for the record that matches the start_date_of_call for that calling/called number pair, so if the values differed from row to row within the calling/called number pair you may need to aggregate those (take the minimum/maximum etc. as required) as part of the first query. If the values are known to be the same across all records in the group then you can just pick them up from the join to the original table as I coded in the above example (only in my example the data was different across all rows).
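    One refinement worth noting (my addition, not from the thread): the analytic solution above sums every segment for a calling/called number pair, so if the same pair can place more than one distinct call, the 30-minute chaining needs to come back in. A common trick is gaps-and-islands: subtracting row_number() times 30 minutes from each segment's start date yields a constant per chain, which can serve as the grouping key. A sketch using the column names from the post, assuming consecutive segments of one call are exactly 30 minutes apart:

    select 'C1' as segmentid
          ,min(start_date_of_call) as start_date_of_call
          ,sum(duration) as duration
          ,callingnumber
          ,callednumber
    from (
          select t.*
                ,start_date_of_call
                 - row_number() over (partition by callingnumber, callednumber
                                      order by start_date_of_call) * (1/48) as grp
          from table1 t
         )
    group by callingnumber, callednumber, grp;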

  • Need help in improving the performance for the sql query

    Thanks in advance for helping me.
    I was trying to improve the performance of the below query. I tried the following methods: MERGE instead of UPDATE, BULK COLLECT / FORALL update, an ORDERED hint, and a temp table from which the target table is updated. None of these improved performance. The update touches 2 million rows and the target table has 15 million rows.
    Any suggestions or solutions for improving performance are appreciated
    SQL query:
    update targettable tt
    set mnop = 'G'
    where (x, y, z) in
          (select a.x, a.y, a.z
             from table1 a
            where (a.x, a.y, a.z) not in
                  (select b.x, b.y, b.z
                     from table2 b
                    where 'O' = b.defg))
      and mnop = 'P'
      and hijkl = 'UVW';

    987981 wrote:
    I was trying to improve the performance of the below query. I tried the following methods: MERGE instead of UPDATE, BULK COLLECT / FORALL update, an ORDERED hint, and a temp table from which the target table is updated. None of these improved performance.
    And that meant what? Surely if you spend all that time and effort trying various approaches, it should mean something? Failures are as important teachers as successes. You need to learn from failures too. :-)
    The data count which is updated in the target table is 2 million records and the target table has 15 million records.
    Tables have rows, btw, not records. Database people tend to get upset when rows are called records, as records exist in files and a database is not a mere collection of records and files.
    The failure to find a single faster method among the approaches you tried points to the fact that you do not know what the actual performance problem is. And without knowing the problem, you still went ahead, guns blazing.
    The very first step in dealing with any software engineering problem, is to identify the problem. Seeing the symptoms (slow performance) is still a long way from problem identification.
    Part of identifying the performance problem, is understanding the workload. Just what does the code task the database to do?
    From your comments, it needs to find 2 million rows from 15 million rows. Change these rows. And then write 2 million rows back to disk.
    That is not a small workload. Simple example: say finding each of the 2 million rows costs 1 ms/row, and writing each row back also costs 1 ms/row. That adds up to a 66-minute workload. Because of the row count, any increase in time per row, in either direction, has a potential 2-million-fold impact.
    So where is the performance problem? Time spent finding the 2 million rows (where other tables need to be read, indexes used, etc.)? Time spent writing the 2 million rows (where triggers need to be fired and indexes maintained)? Both?
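    A way to answer that question is to measure rather than guess. A minimal sketch (it assumes the privileges to call DBMS_MONITOR; the trace file name below is illustrative): trace the session that runs the UPDATE, then profile the trace with tkprof to see where the time actually goes.

    ALTER SESSION SET tracefile_identifier = 'upd_trace';
    EXEC dbms_monitor.session_trace_enable(waits => TRUE, binds => FALSE);

    -- run the slow UPDATE here

    EXEC dbms_monitor.session_trace_disable;
    -- then, on the database server:
    --   tkprof <instance>_ora_<pid>_UPD_TRACE.trc upd.prf sort=exeela,fchela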

  • Need help on improving expdp speed

    I just tested an export of one 3.5 GB table; it took almost an hour and a half.
    See logs here:
    Export: Release 11.1.0.7.0 - 64bit Production on Saturday, 28 April, 2012 22:27:53
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With the Partitioning, Real Application Clusters, OLAP, Data Mining
    and Real Application Testing options
    Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/******** parfile=exp_t454.par
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 22.59 GB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "ADMIN"."T454" 3.833 GB 3340156 rows
    Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
    /u01/export/admin_migration/exp_admin_t454_01.dmp
    /u01/export/admin_migration/exp_admin_t454_02.dmp
    Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 23:55:15
    my par file looks like this:
    tables=admin.t454 DIRECTORY=data_pump_dir dumpfile=exp_admin_t454_%U.dmp logfile=exp_admin_t454.log parallel=3 filesize=5000m compression=all
    In the middle of the expdp run, I checked the status of the job and got this:
    admin1 $ expdp system attach=SYS_EXPORT_TABLE_01
    Export: Release 11.1.0.7.0 - 64bit Production on Saturday, 28 April, 2012 22:49:04
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Password:
    Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With the Partitioning, Real Application Clusters, OLAP, Data Mining
    and Real Application Testing options
    Job: SYS_EXPORT_TABLE_01
    Owner: SYSTEM
    Operation: EXPORT
    Creator Privs: TRUE
    GUID: BEC5BBC2966860B0E0430AEC944B60B0
    Start Time: Saturday, 28 April, 2012 22:28:07
    Mode: TABLE
    Instance: admin1
    Max Parallelism: 3
    EXPORT Job Parameters:
    Parameter Name Parameter Value:
    CLIENT_COMMAND system/******** parfile=exp_t454.par
    COMPRESSION ALL
    State: EXECUTING
    Bytes Processed: 0
    Current Parallelism: 3
    Job Error Count: 0
    Dump File: /u01/export/admin_migration/exp_admin_t454_%u.dmp
    size: 5,242,880,000
    Dump File: /u01/export/admin_migration/exp_admin_t454_01.dmp
    size: 5,242,880,000
    bytes written: 4,096
    Dump File: /u01/export/admin_migration/exp_admin_t454_02.dmp
    size: 5,242,880,000
    bytes written: 28,672
    Worker 1 Status:
    Process Name: DW01
    State: WORK WAITING
    Worker 2 Status:
    Process Name: DW02
    State: EXECUTING
    Object Schema: admin
    Object Name: T454
    Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
    Completed Objects: 1
    Total Objects: 1
    Completed Rows: 1,695,732
    Worker Parallelism: 1
    Export>
    The database version is 11.1.0.7, and os is aix.
    I wonder what I can do to speed up the expdp. I have to migrate (via expdp) a 1 TB database soon.
    Thanks in advance.

    Is the table partitioned? Have you tried traditional export to see how long it takes?
    Please see these MOS docs for possible causes:
    Checklist for Slow Performance of Export Data Pump (expdp) and Import DataPump (impdp) [ID 453895.1]
    Bug 12780993 - Poor Datapump EXPDP performance for ESTIMATE phase [ID 12780993.8]     
    Data Pump Export (EXPDP) Runs Very Slow After Upgrade From 11.1.0.6 to 11.1.0.7 [ID 1075468.1]     
    Oracle DataPump Export (EXPDP) Is Slow On Partitioned Tables [ID 1300895.1]     
    Expdp Slow for a Small Table [ID 950995.1]     
    Slow Performance of DataPump Export during Estimate Phase [ID 1354535.1]     
    HTH
    Srini
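    One more variable worth testing, which the notes above do not call out (an assumption on my part, not from the thread): COMPRESSION=ALL trades CPU time for dump-file size, and on a CPU-bound host it can dominate the elapsed time. A comparison run that compresses only metadata isolates that cost; the parameter file would differ in a single line:

    tables=admin.t454
    directory=data_pump_dir
    dumpfile=exp_admin_t454_%U.dmp
    logfile=exp_admin_t454.log
    parallel=3
    filesize=5000m
    compression=metadata_only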

  • I need a clarification: Can I use EJBs instead of helper classes for better performance and less network traffic?

    My application was designed based on the MVC architecture, but I made some changes to it based on my requirements. The servlet invokes helper classes, the helper classes use EJBs to communicate with the database, and JSPs also use EJBs to retrieve results.
    I have two EJBs (stateless), one servlet, nearly 70 helper classes, and nearly 800 JSPs. The servlet acts as the controller and all database transactions are done through EJBs only. The helper classes contain the business logic. Based on the request, the relevant helper class is invoked by the servlet, and all database transactions are done through the EJBs. Session scope is 'Page' only.
    Now I am planning to use EJBs (for the business logic) instead of the helper classes. But before doing that I need some clarification regarding network traffic and better usage of container resources.
    Please suggest which method (helper classes or EJBs) is preferable
    1) to get better performance and.
    2) for less network traffic
    3) for better container resource utilization
    I thought that if I use EJBs, the network traffic will increase, because every call to an EJB would be a remote call.
    Please give a detailed explanation.
    thank you,
    sudheer

    <i>Please suggest which method (helper classes or EJBs) is preferable:
    1) to get better performance</i>
    EJBs have quite a lot of overhead associated with them to support transactions and remoteability. A non-EJB helper class will almost always outperform an EJB, often considerably. If you plan on making your 70 helper classes EJBs, you should expect to see a dramatic decrease in maximum throughput.
    <i>2) for less network traffic</i>
    There should be no difference. Both architectures will probably make the exact same JDBC calls from the RDBMS's perspective. And since the EJBs and JSPs are co-located, there won't be any other additional overhead either. (You are co-locating your JSPs and EJBs, aren't you?)
    <i>3) for better container resource utilization</i>
    Again, the EJB version will consume a lot more container resources.
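    If the helper classes do become session beans, the remote-call overhead can at least be avoided for co-located callers by exposing a local view. A sketch (it assumes an EJB 3.x container; the names are illustrative, not from the post):

    // OrderService.java
    import javax.ejb.Local;

    @Local
    public interface OrderService {
        void placeOrder(String customerId);
    }

    // OrderServiceBean.java
    import javax.ejb.Stateless;

    @Stateless
    public class OrderServiceBean implements OrderService {
        public void placeOrder(String customerId) {
            // business logic that previously lived in a helper class;
            // local calls pass references, so nothing is marshalled
        }
    }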

  • Help us improve the look and feel of our community

    The BTCare Community Team needs your help!
    We want to improve the layout of the forum and would like to get your thoughts and feedback.
    Please complete this short questionnaire (it's very short, promise) to share your ideas on how we can improve the look and feel of our community.
    Thanks,
    Stephanie
    BTCare Community Manager
    If you like a post, or want to say thanks for a helpful answer, please click on the Ratings star on the left-hand side of the post. If someone answers your question correctly please let other members know by clicking on ’Mark as Accepted Solution’.

    BinaryBurnout wrote:
    I am curious how I might go about implementing some of these other L&Fs. Every time I try to use them like the following...
    UIManager.setLookAndFeel("org.jvnet.substance.skin.SubstanceRavenGraphiteGlassLookAndFeel");
    I keep getting a java.lang.ClassNotFoundException: org.jvnet.substance.skin.SubstanceRavenGraphiteGlassLookAndFeel error. How might I go about fixing this?
    Make sure the corresponding classes/jars are on the classpath.
    -Puce
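    A minimal way to verify that (the jar name is an assumption; use whatever the Substance distribution actually ships): launch with the jar on the classpath and guard the call so a missing class falls back to the default look and feel.

    // import javax.swing.UIManager;
    // launch with, e.g.:  java -cp .:substance.jar MyApp
    try {
        UIManager.setLookAndFeel(
            "org.jvnet.substance.skin.SubstanceRavenGraphiteGlassLookAndFeel");
    } catch (Exception e) {
        e.printStackTrace(); // class not found: the default L&F stays active
    }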

  • Required help in improving the performance

    Hi, I am very new to Java. I am working with an API where records are processed in a for loop, and it is slow: processing 10k records takes almost 35 minutes. I have incorporated it in my APEX application, and when multiple users use it at the same time performance drops further, to almost an hour. With the help of online tutors I was able to incorporate oracle.sql.ARRAY, but I have not been able to improve the performance.
    My first question is whether there is any way to process the records in parallel, in batches; if not, how do I increase the performance? I was told that enabling setAutoIndexing and setAutoBuffering can improve performance, but I could not get that to work. Can anyone help me with this?

    Hi
    I apologize for not describing the process in the initial post.
    The task is to pass the records from my table to the API and to update my table with the results the API returns. The steps involved are:
    1) I have created a type of strarray and have assigned it to rec1 and rec2 in my stored procedure.
    2) rec1 is the input, consisting of batch_id (a unique identifier per batch), row_id (a unique identifier within the batch) and the contact address information.
    3) rec2 is the output for rec1, where I get the batch_id, row_id and the formatted address.
    4) I capture the output in a temp table and update the results into the input table.
    5) With this stored procedure I am not able to allow parallel transactions, i.e. multiple users.
    6) Records are processed row by row, which consumes time.
    Here is the code. Please let me know if you need more information.
    PROCESS_INT(REC_IN, REC_OUT), which calls the following Java method:
         // requires: java.io.*, java.net.UnknownHostException, java.sql.Connection,
         //           java.sql.SQLException, oracle.sql.ARRAY, oracle.sql.ArrayDescriptor,
         //           oracle.jdbc.OracleDriver
         public static int process(oracle.sql.ARRAY rec_in, oracle.sql.ARRAY[] rec_out) {
              // If everything has been initialized then we want to write some data
              // to the socket we have opened a connection to
              if (m_clientSocket != null && m_out != null && m_in != null) {
                   try {
                        String[] record = (String[]) rec_in.getArray();
                        // send the nine input fields to the listener
                        for (int i = 0; i < 9; i++) {
                             if (record[i] != null)
                                  m_out.println(record[i]);
                             else
                                  m_out.println("");
                        }
                        m_out.flush();
                        // read the fourteen result fields back
                        // (assumes the array has room for all fourteen)
                        for (int i = 0; i < 14; i++) {
                             record[i] = m_in.readLine();
                        }
                        Connection conn = new OracleDriver().defaultConnection();
                        ArrayDescriptor descriptor = ArrayDescriptor.createDescriptor(rec_in.getSQLTypeName(), conn);
                        rec_out[0] = new ARRAY(descriptor, conn, record);
                   } catch (UnknownHostException e) {
                        System.err.println("Unable to connect to lqtListener: " + e);
                        return -1;
                   } catch (IOException e) {
                        System.err.println("IOException in process: " + e);
                        return -2;
                   } catch (SQLException e) {
                        System.err.println("SQLException in process: " + e);
                        return -4;
                   }
              } else {
                   return -3;
              }
              return 0;
         }
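    On the batching question: much of the per-row cost here is likely the network round trip per record. A sketch of the idea (the class and method names are hypothetical, not part of the API in the post): buffer the socket streams once and flush once per batch instead of once per record.

    import java.io.*;
    import java.net.Socket;

    class ListenerConnection {
        private final PrintWriter out;
        private final BufferedReader in;

        ListenerConnection(String host, int port) throws IOException {
            Socket socket = new Socket(host, port);
            // autoFlush = false: bytes leave only when flush() is called
            out = new PrintWriter(new BufferedWriter(
                    new OutputStreamWriter(socket.getOutputStream())), false);
            in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        }

        void sendBatch(String[][] records) {
            for (String[] record : records)
                for (String field : record)
                    out.println(field == null ? "" : field);
            out.flush(); // one network flush for the whole batch
        }
    }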

  • Looking for help to improve Airport performance over LAN (WAN is fine)

    Ok, I've read through several threads on this forum that address problems people are having with slow performance with Airport. I've also checked out all of the Apple KBs that address Airport, recommended settings. Unfortunately my issue isn't addressed by anything I've read to date.
    The bottom line is that both download and upload performance between any of my devices and the internet is fine, no problems. I am paying for 30 Mbps download from Verizon FIOS, I routinely get 20, and I'm guessing that the delta is Verizon's problem, not my network's. However, streaming from my media server to another device on the wireless LAN is a different story entirely. I get somewhere between 1 and 2 Mbps, tops, and this poses big problems for streaming music and movies.
    My network is comprised of 3 Airport Expresses. One of them is a MC414LL/A model. This one is connected to my Verizon FIOS Actiontec MI424WR router (which I have set to bridge mode according to the instructions provided at http://www.dslreports.com/forum/r17679150-Howto-make-ActionTec-MI424WR-a-network-bridge) via CAT5 ethernet. This Airport Express is set to "create a network" network mode, "802.11 only (5GHz) - 802.11b/g/n" radio mode (although I have also tried "802.11 only (5GHz) - 802.11/n only (2.4 GHz)" radio mode, and this didn't solve the problem). Finally, I have chosen 2.4 and 5GHz channels that have little interference (2 and 161 respectively). My other two Airport Expresses are MB321LL/A models and are set to "Extend a wireless network" network mode, with the 5GHz network chosen as the network that they extend. (I have tried switching over to having them extend the 2.4GHz network, and performance gets worse, not better.)
    I am using a late 2009 Mac Mini as a media server. It is connected to the 5GHz network (though I've tried the 2.4GHz network), and it runs XBMC and JRiver media servers (not simultaneously, either one or the other). I have a PS3 and a Sony Blu-ray player, each plugged into one of the MB321LL/A Airport Expresses via CAT5 ethernet, and I stream media to each of these devices via one or the other media server (both devices are DLNA-enabled). My Mac Mini has a 3TB external hard drive connected via FireWire 800, which is where all of my media resides. In addition to streaming media over the network, I have a TV plugged directly into the Mac Mini. When I play media to this TV, performance is outstanding, so I'm confident my poor performance to the PS3 and Sony BDP is a network issue, not an issue with the external drive.
    Although my building has several other wireless networks, only one of them is 5 GHz, and it isn't using channel 161. The 2.4 GHz band is crowded with several networks, although channel 2 is usually in the clear. I have tried letting AirPort choose a channel automatically, and I haven't noticed a difference. It has occurred to me that the problem could be with how I bridged the Verizon Actiontec router and not with any of the Airports, but I don't get any errors (e.g. double NAT errors, which some people who have bridged improperly get), and I am pleased with my download and upload speeds to the internet. The issue is only on my LAN. Finally, yes, all of my firmware is up to date, version 7.6.3 on all three Airport Expresses.
    Can anyone offer me suggestions for how I can get better performance streaming media from my server to the two playback devices? Since all 3 Airport Expresses support 5GHz, I'd have thought I'd be able to take advantage of 802.11n speeds when streaming between them. (MB321LL/A supports "Draft N", but does this matter?) With the settings that I'm currently using, I can't stream faster than 2 Mbps (and that's on a good day), which is below what I ought to be able to get from 802.11g. This is especially problematic when I try to stream hi-res (96 kHz / 24 bit or higher) music files, whether uncompressed or compressed. I hear awful pulsing sounds through my speakers. If I pause the track and let my streaming device buffer, I might get 10 or 15 seconds of clean playback, but then it starts the pulsing again as soon as the buffered music is finished playing. On occasion when I stream music from my iPhone via AirPlay to one of the Airport Expresses, I get clean playback most of the time, but on occasion the music cuts out. (It's my understanding that AirPlay requires ~800 Kbps, which seems consistent with my LAN speed usually being between 1 and 2 Mbps but sometimes dropping.)
    I have iStumbler and I've used the Apple network diagnostics; these are the tools that led me to choose channels 2 and 161 for 2.4 and 5GHz respectively. I'm sure I could be using these tools to learn more about my network's performance, but I'm not sure what to look for.
    Thanks for your suggestions.

    Ok, cool. I'm really glad that the issue has been isolated. Thanks a ton for your insight!
    Hopefully I can find a spot where the signal strength of the hub is noticeably better but that isn't too inconvenient for an ethernet run. My Sony BDP, which is the device connected to the problem basestation, has wifi capability, so I could always ditch the ethernet cable if the best spot for the basestation doesn't permit a cable run. But I'm aware that ethernet usually offers faster transfer speeds than wifi. Moreover, I'm not sure that the Sony BDP supports 5GHz. It might be a 2.4GHz-only device, in which case I'll have new interference issues to contend with, since like I said in my original post, there are several other 2.4GHz networks in my building.
    Anyhow, now that I understand the problem, I can figure out a solution. Thanks again.

  • Needed help to improve the performance of a select query?

    Hi,
    I have been preparing a report which involves data fetched from 4 or 5 different tables, and calculations also have to be performed on some columns.
    I planned to write a single cursor to populate one temp table. I have used inline views and EXISTS frequently in the select query. Please go through the query and suggest a better way to restructure it.
    cursor c_acc_pickup_incr(p_branch_code varchar2, p_applDate date, p_st_dt date, p_ed_dt date) is
    select sca.branch_code "BRANCH",
    sca.cust_ac_no "ACCOUNT",
    to_char(p_applDate, 'YYYYMM') "YEARMONTH",
    sca.ccy "CURRENCY",
    sca.account_class "PRODUCT",
    sca.cust_no "CUSTOMER",
    sca.ac_desc "DESCRIPTION",
    null "LOW_BAL",
    null "HIGH_BAL",
    null "AVG_CR_BAL",
    null "AVG_DR_BAL",
    null "CR_DAYS",
    null "DR_DAYS",
    --null                                 "CR_TURNOVER",       
    --null                                 "DR_TURNOVER",       
    null "DR_OD_DAYS",
    (select sum(gf.limit_amount * (scal.linkage_percentage / 100)) +
    (case when (p_applDate >= sca.tod_limit_start_date and
    p_applDate <= nvl(sca.tod_limit_end_date, p_applDate)) then
    sca.tod_limit else 0 end) dd
    from getm_facility gf, sttm_cust_account_linkages scal
    where gf.line_code || gf.line_serial = scal.linked_ref_no
    and cust_ac_no = sca.cust_ac_no) "OD_LIMIT",
    --sc.credit_rating                      "CR_GRADE",        
    null "AVG_NET_BAL",
    null "UNAUTH_OD_AMT",
    sca.acy_blocked_amount "AMT_BLOCKED",
    (select sum(amt)
    from ictb_entries_history ieh
    where ieh.acc = sca.cust_ac_no
    and ieh.brn = sca.branch_code
    and ieh.drcr = 'D'
    and ieh.liqn = 'Y'
    and ieh.entry_passed = 'Y'
    and ieh.ent_dt between p_st_dt and p_ed_dt
    and exists (
    select * from ictm_pr_int ipi, ictm_rule_frm irf
    where ipi.product_code = ieh.prod
    and ipi.rule = irf.rule_id
    and irf.book_flag = 'B')) "DR_INTEREST",
    (select sum(amt)
    from ictb_entries_history ieh
    where ieh.acc = sca.cust_ac_no
    and ieh.brn = sca.branch_code
    and ieh.drcr = 'C'
    and ieh.liqn = 'Y'
    and ieh.entry_passed = 'Y'
    and ieh.ent_dt between p_st_dt and p_ed_dt
    and exists (
    select * from ictm_pr_int ipi, ictm_rule_frm irf
    where ipi.product_code = ieh.prod
    and ipi.rule = irf.rule_id
    and irf.book_flag = 'B')) "CR_INTEREST",
    (select sum(amt) from ictb_entries_history ieh
    where ieh.brn = sca.branch_code
    and ieh.acc = sca.cust_ac_no
    and ieh.ent_dt between p_st_dt and p_ed_dt
    and exists (
    select product_code
    from ictm_product_definition ipd
    where ipd.product_code = ieh.prod
    and ipd.product_type = 'C')) "FEE_INCOME",
    sca.record_stat "ACC_STATUS",
    case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
    and not exists (select 1
    from ictm_tdpayin_details itd
    where itd.multimode_payopt = 'Y'
    and itd.brn = sca.branch_code
    and itd.acc = sca.cust_ac_no
    and itd.multimode_offset_brn is not null
    and itd.multimode_tdoffset_acc is not null))
    then 1 else 0 end "NEW_ACC_FOR_THE_MONTH",
    case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
    and trunc(sc.cif_creation_date,'MM') = trunc(p_applDate,'MM')
    and not exists (select 1
    from ictm_tdpayin_details itd
    where itd.multimode_payopt = 'Y'
    and itd.brn = sca.branch_code
    and itd.acc = sca.cust_ac_no
    and itd.multimode_offset_brn is not null
    and itd.multimode_tdoffset_acc is not null))
    then 1 else 0 end "NEW_ACC_FOR_NEW_CUST",
    (select 1 from dual
    where exists (select 1 from ictm_td_closure_renew itcr
    where itcr.brn = sca.branch_code
    and itcr.acc = sca.cust_ac_no
    and itcr.renewal_date = sysdate)
    or exists (select 1 from ictm_tdpayin_details itd
    where itd.multimode_payopt = 'Y'
    and itd.brn = sca.branch_code
    and itd.acc = sca.cust_ac_no
    and itd.multimode_offset_brn is not null
    and itd.multimode_tdoffset_acc is not null)) "RENEWED_OR_ROLLOVER",
    (select maturity_date from ictm_acc ia
    where ia.brn = sca.branch_code
    and ia.acc = sca.cust_ac_no) "MATURITY_DATE",
    sca.ac_stat_no_dr "DR_DISALLOWED",
    sca.ac_stat_no_cr "CR_DISALLOWED",
    sca.ac_stat_block "BLOCKED_ACC", -- Not Reqd
    sca.ac_stat_dormant "DORMANT_ACC",
    sca.ac_stat_stop_pay "STOP_PAY_ACC", --New
    sca.ac_stat_frozen "FROZEN_ACC",
    sca.ac_open_date "ACC_OPENING_DT",
    sca.address1 "ADD_LINE_1",
    sca.address2 "ADD_LINE_2",
    sca.address3 "ADD_LINE_3",
    sca.address4 "ADD_LINE_4",
    sca.joint_ac_indicator "JOINT_ACC",
    sca.acy_avl_bal "CR_BAL",
    0 "DR_BAL",
    0 "CR_BAL_LCY", t
    0 "DR_BAL_LCY",
    null "YTD_CR_MOVEMENT",
    null "YTD_DR_MOVEMENT",
    null "YTD_CR_MOVEMENT_LCY",
    null "YTD_DR_MOVEMENT_LCY",
    null "MTD_CR_MOVEMENT",
    null "MTD_DR_MOVEMENT",
    null "MTD_CR_MOVEMENT_LCY",
    null "MTD_DR_MOVEMENT_LCY",
    'N' "BRANCH_TRFR", --New
    sca.provision_amount "PROVISION_AMT",
    sca.account_type "ACCOUNT_TYPE",
    nvl(sca.tod_limit, 0) "TOD_LIMIT",
    nvl(sca.sublimit, 0) "SUB_LIMIT",
    nvl(sca.tod_limit_start_date, global.min_date) "TOD_START_DATE",
    nvl(sca.tod_limit_end_date, global.max_date) "TOD_END_DATE"
    from sttm_cust_account sca, sttm_customer sc
    where sca.branch_code = p_branch_code
    and sca.cust_no = sc.customer_no
    and ( exists (select 1 from actb_daily_log adl
    where adl.ac_no = sca.cust_ac_no
    and adl.ac_branch = sca.branch_code
    and adl.trn_dt = p_applDate
    and adl.auth_stat = 'A')
    or exists (select 1 from catm_amount_blocks cab
    where cab.account = sca.cust_ac_no
    and cab.branch = sca.branch_code
    and cab.effective_date = p_applDate
    and cab.auth_stat = 'A')
    or exists (select 1 from ictm_td_closure_renew itcr
    where itcr.acc = sca.cust_ac_no
    and itcr.brn = sca.branch_code
    and itcr.renewal_date = p_applDate)
    or exists (select 1 from sttm_ac_stat_change sasc
    where sasc.cust_ac_no = sca.cust_ac_no
    and sasc.branch_code = sca.branch_code
    and sasc.status_change_date = p_applDate
    and sasc.auth_stat = 'A')
    or exists (select 1 from cstb_acc_brn_trfr_log cabtl
    where cabtl.branch_code = sca.branch_code
    and cabtl.cust_ac_no = sca.cust_ac_no
    and cabtl.process_status = 'S'
    and cabtl.process_date = p_applDate)
    or exists (select 1 from sttbs_provision_history sph
    where sph.branch_code = sca.branch_code
    and sph.cust_ac_no = sca.cust_ac_no
    and sph.esn_date = p_applDate)
    or exists (select 1 from sttms_cust_account_dormancy scad
    where scad.branch_code = sca.branch_code
    and scad.cust_ac_no = sca.cust_ac_no
    and scad.dormancy_start_dt = p_applDate)
    or sca.maker_dt_stamp = p_applDate
    or sca.status_since = p_applDate);
    l_tb_acc_det ty_tb_acc_det_int;
    l_brnrec cvpks_utils.rec_brnlcy;
    l_acbr_lcy sttms_branch.branch_lcy%type;
    l_lcy_amount actbs_daily_log.lcy_amount%type;
    l_xrate number;
    l_dt_rec sttm_dates%rowtype;
    l_acc_rec sttm_cust_account%rowtype;
    l_acc_stat_row ty_r_acc_stat;

    I see it more like shown below (possibly with no inline selects).
    Try to get rid of the remaining inline selects (left as an exercise ;) )
    and rewrite the traditional joins as ANSI joins, as problems might arise using mixed syntax. I have to leave, so I don't have time to complete the query.
    select sca.branch_code "BRANCH",
           sca.cust_ac_no "ACCOUNT",
           to_char(p_applDate, 'YYYYMM') "YEARMONTH",
           sca.ccy "CURRENCY",
           sca.account_class "PRODUCT",
           sca.cust_no "CUSTOMER",
           sca.ac_desc "DESCRIPTION",
           null "LOW_BAL",
           null "HIGH_BAL",
           null "AVG_CR_BAL",
           null "AVG_DR_BAL",
           null "CR_DAYS",
           null "DR_DAYS",
    --     null "CR_TURNOVER",
    --     null "DR_TURNOVER",
           null "DR_OD_DAYS",
           w.dd "OD_LIMIT",
    --     sc.credit_rating "CR_GRADE",
           null "AVG_NET_BAL",
           null "UNAUTH_OD_AMT",
           sca.acy_blocked_amount "AMT_BLOCKED",
           x.dr_int "DR_INTEREST",
           x.cr_int "CR_INTEREST",
           y.fee_amt "FEE_INCOME",
           sca.record_stat "ACC_STATUS",
           case when trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
                 and not exists(select 1
                                  from ictm_tdpayin_details itd
                                 where itd.multimode_payopt = 'Y'
                                   and itd.brn = sca.branch_code
                                   and itd.acc = sca.cust_ac_no
                                   and itd.multimode_offset_brn is not null
                                        and itd.multimode_tdoffset_acc is not null)
                then 1
                else 0
           end "NEW_ACC_FOR_THE_MONTH",
           case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
                 and trunc(sc.cif_creation_date,'MM') = trunc(p_applDate,'MM')
                 and not exists(select 1
                                  from ictm_tdpayin_details itd
                                 where itd.multimode_payopt = 'Y'
                                   and itd.brn = sca.branch_code
                                   and itd.acc = sca.cust_ac_no
                                   and itd.multimode_offset_brn is not null
                                        and itd.multimode_tdoffset_acc is not null))
                then 1
                else 0
           end "NEW_ACC_FOR_NEW_CUST",
           (select 1 from dual
             where exists(select 1
                            from ictm_td_closure_renew itcr
                           where itcr.brn = sca.branch_code
                             and itcr.acc = sca.cust_ac_no
                                  and itcr.renewal_date = sysdate)
                or exists(select 1
                            from ictm_tdpayin_details itd
                           where itd.multimode_payopt = 'Y'
                             and itd.brn = sca.branch_code
                             and itd.acc = sca.cust_ac_no
                             and itd.multimode_offset_brn is not null
                                  and itd.multimode_tdoffset_acc is not null)
           ) "RENEWED_OR_ROLLOVER",
           m.maturity_date "MATURITY_DATE",
           sca.ac_stat_no_dr "DR_DISALLOWED",
           sca.ac_stat_no_cr "CR_DISALLOWED",
    --     sca.ac_stat_block "BLOCKED_ACC", --Not Reqd
           sca.ac_stat_dormant "DORMANT_ACC",
           sca.ac_stat_stop_pay "STOP_PAY_ACC", --New
           sca.ac_stat_frozen "FROZEN_ACC",
           sca.ac_open_date "ACC_OPENING_DT",
           sca.address1 "ADD_LINE_1",
           sca.address2 "ADD_LINE_2",
           sca.address3 "ADD_LINE_3",
           sca.address4 "ADD_LINE_4",
           sca.joint_ac_indicator "JOINT_ACC",
           sca.acy_avl_bal "CR_BAL",
           0 "DR_BAL",
           0 "CR_BAL_LCY", t
           0 "DR_BAL_LCY",
           null "YTD_CR_MOVEMENT",
           null "YTD_DR_MOVEMENT",
           null "YTD_CR_MOVEMENT_LCY",
           null "YTD_DR_MOVEMENT_LCY",
           null "MTD_CR_MOVEMENT",
           null "MTD_DR_MOVEMENT",
           null "MTD_CR_MOVEMENT_LCY",
           null "MTD_DR_MOVEMENT_LCY",
           'N' "BRANCH_TRFR", --New
           sca.provision_amount "PROVISION_AMT",
           sca.account_type "ACCOUNT_TYPE",
           nvl(sca.tod_limit, 0) "TOD_LIMIT",
           nvl(sca.sublimit, 0) "SUB_LIMIT",
           nvl(sca.tod_limit_start_date, global.min_date) "TOD_START_DATE",
           nvl(sca.tod_limit_end_date, global.max_date) "TOD_END_DATE"
      from sttm_cust_account sca,
           sttm_customer sc,
           (select sca.cust_ac_no,
                    (sum(gf.limit_amount * (scal.linkage_percentage / 100)) +
                       case when p_applDate >= sca.tod_limit_start_date
                             and p_applDate <= nvl(sca.tod_limit_end_date, p_applDate)
                            then sca.tod_limit else 0
                       end
                      ) dd
               from sttm_cust_account sca,
                   getm_facility gf,
                   sttm_cust_account_linkages scal
             where gf.line_code || gf.line_serial = scal.linked_ref_no
               and cust_ac_no = sca.cust_ac_no
              group by sca.cust_ac_no, sca.tod_limit, sca.tod_limit_start_date, sca.tod_limit_end_date
           ) w,
           (select acc,
                   brn,
                   sum(decode(drcr,'D',amt)) dr_int,
                   sum(decode(drcr,'C',amt)) cr_int
              from ictb_entries_history ieh
             where ent_dt between p_st_dt and p_ed_dt
               and drcr in ('C','D')
               and liqn = 'Y'
               and entry_passed = 'Y'
               and exists(select null
                            from ictm_pr_int ipi,
                                 ictm_rule_frm irf
                           where ipi.rule = irf.rule_id
                             and ipi.product_code = ieh.prod 
                              and irf.book_flag = 'B')
             group by acc,brn
           ) x,
           (select acc,
                   brn,
                   sum(amt) fee_amt
              from ictb_entries_history ieh
             where ieh.ent_dt between p_st_dt and p_ed_dt
               and exists(select product_code
                            from ictm_product_definition ipd
                           where ipd.product_code = ieh.prod
                              and ipd.product_type = 'C')
             group by acc,brn
           ) y,
           ictm_acc m,
           (select sca.cust_ac_no,
                    sca.branch_code,
                   coalesce(nvl2(coalesce(t1.ac_no,t1.ac_branch),'exists',null),
                             nvl2(coalesce(t2.account,t2.branch),'exists',null),
                            nvl2(coalesce(t3.acc,t3.brn),'exists',null),
                            nvl2(coalesce(t4.cust_ac_no,t4.branch_code),'exists',null),
                            nvl2(coalesce(t5.cust_ac_no,t5.branch_code),'exists',null),
                            nvl2(coalesce(t6.cust_ac_no,t6.branch_code),'exists',null),
                            nvl2(coalesce(t7.cust_ac_no,t7.branch_code),'exists',null),
                            decode(sca.maker_dt_stamp,p_applDate,'exists'),
                            decode(sca.status_since,p_applDate,'exists')
                           ) existence
              from sttm_cust_account sca
                   left outer join
                   (select ac_no,ac_branch
                      from actb_daily_log
                     where trn_dt = p_applDate
                       and auth_stat = 'A'
                   ) t1
                on (sca.cust_ac_no = t1.ac_no
                and  sca.branch_code = t1.ac_branch)
                   left outer join
                    (select account,branch
                      from catm_amount_blocks
                     where effective_date = p_applDate
                       and auth_stat = 'A'
                   ) t2
                on (sca.cust_ac_no = t2.account
                and  sca.branch_code = t2.branch)
                   left outer join
                   (select acc,brn
                      from ictm_td_closure_renew itcr
                     where renewal_date = p_applDate
                   ) t3
                on (sca.cust_ac_no = t3.acc
                and  sca.branch_code = t3.brn)
                   left outer join
                   (select cust_ac_no,branch_code
                      from sttm_ac_stat_change
                     where status_change_date = p_applDate
                       and auth_stat = 'A'
                   ) t4
                on (sca.cust_ac_no = t4.cust_ac_no
                and  sca.branch_code = t4.branch_code)
                   left outer join
                   (select cust_ac_no,branch_code
                      from cstb_acc_brn_trfr_log
                     where process_date = p_applDate
                       and process_status = 'S'
                   ) t5
                on (sca.cust_ac_no = t5.cust_ac_no
                and  sca.branch_code = t5.branch_code)
                   left outer join
                   (select cust_ac_no,branch_code
                      from sttbs_provision_history
                     where esn_date = p_applDate
                   ) t6
                on (sca.cust_ac_no = t6.cust_ac_no
                and  sca.branch_code = t6.branch_code)
                   left outer join
                   (select cust_ac_no,branch_code
                      from sttms_cust_account_dormancy
                     where dormancy_start_dt = p_applDate
                   ) t7
                on (sca.cust_ac_no = t7.cust_ac_no
                and  sca.branch_code = t7.branch_code)
           ) z
    where sca.branch_code = p_branch_code
       and sca.cust_no = sc.customer_no
       and sca.cust_ac_no = w.cust_ac_no
       and sca.cust_ac_no = x.acc
       and sca.branch_code = x.brn
       and sca.cust_ac_no = y.acc
       and sca.branch_code = y.brn
       and sca.cust_ac_no = m.acc
       and sca.branch_code = m.brn
       and sca.cust_ac_no = z.cust_ac_no
       and sca.branch_code = z.branch_code
       and z.existence is not null
    Regards
    Etbin

  • Help in improving Query Performance

    Hi,
    I would like to know if there is a way to avoid using so many ORs, as they are causing a performance issue in our application.
    The value C.x1 is dynamic here; it may have values like 'yy', 'zz', 'xx', and the number of ORs depends on the number of different C.x1 values the user selects.
    Select A.x1 from Table1 A , Table2 B where A.x2 = B.x2 AND A.y1 = ( select C.x1 from Table C )
    OR
    Select A.x1 from Table1 A , Table2 B where A.x2 = B.x2 AND A.y1 = ( select C.x1 from Table C )
    OR
    Select A.x1 from Table1 A , Table2 B where A.x2 = B.x2 AND A.y1 = ( select C.x1 from Table C )
    OR
    Select A.x1 from Table1 A , Table2 B where A.x2 = B.x2 AND A.y1 = ( select C.x1 from Table C )
    Suggestions please.
    regards,
    Kar

    Select A.x1 from Table1 A, Table2 B
     where A.x2 = B.x2
       AND (   A.y1 = ( select C.x1 from Table C )
            OR A.y1 = ( select C.x1 from Table C )
            OR A.y1 = ( select C.x1 from Table C )
            OR A.y1 = ( select C.x1 from Table C ) )
    Why use several sub-queries in your WHERE predicate when they are all the same? You can have just one subquery:
       AND A.y1 = ( select C.x1 from Table C )
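    Since the subquery can return several values when the user selects more than one C.x1, a set comparison is safer than the scalar '=' (which raises ORA-01427 on multiple rows). A minimal sketch with the names from the post:

    Select A.x1
      from Table1 A, Table2 B
     where A.x2 = B.x2
       and A.y1 in ( select C.x1 from Table C );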

  • Improve query performance

    Hi,
    I am executing one query and it takes 40-45 minutes. Can anybody tell me where the issue is? I have an index on the SUBSCRIPTION table.
    The query spends its time in a nested loop. Can anybody please help improve the query's performance?
    Select count(unique individual_id)
    from SUBSCRIPTION S ,SOURCE D WHERE S.ORDER_DOCUMENT_KEY_CD=D.FULFILLMENT_KEY_CD AND prod_abbr='TOH'
    and to_char(source_start_dt,'YYMM')>='1010' and mke_mag_source_type_cd='D';
    select count(*) from source; ----------3,425,131
    select count(*) from subscription;---------394,517,271
    Below is the explain plan:
    Plan
    SELECT STATEMENT CHOOSE Cost: 219 Bytes: 38 Cardinality: 1
    13 SORT GROUP BY Bytes: 38 Cardinality: 1                                                   
    12 PX COORDINATOR                                              
         11 PX SEND QC (RANDOM) SYS.:TQ10001 Bytes: 38 Cardinality: 1                                         
         10 SORT GROUP BY Bytes: 38 Cardinality: 1                                    
         9 PX RECEIVE Bytes: 38 Cardinality: 1                               
              8 PX SEND HASH SYS.:TQ10000 Bytes: 38 Cardinality: 1                          
              7 SORT GROUP BY Bytes: 38 Cardinality: 1                     
              6 TABLE ACCESS BY LOCAL INDEX ROWID TABLE SUBSCRIPTION Cost: 21 Bytes: 3,976 Cardinality: 284                
                   5 NESTED LOOPS Cost: 219 Bytes: 604,276 Cardinality: 15,902           
              2 PX BLOCK ITERATOR      
                   1 TABLE ACCESS FULL TABLE SOURCE Cost: 72 Bytes: 1,344 Cardinality: 56
                   4 PARTITION HASH ALL Cost: 2 Cardinality: 284 Partition #: 12 Partitions accessed #1 - #16     
                   3 INDEX RANGE SCAN INDEX XAK1SUBSCRIPTION Cost: 2 Cardinality: 284 Partition #: 12 Partitions accessed #1 - #16
    Please suggest

    Regarding "it eliminates the hidden conversion from char to number": I don't know the indexes/partitions on the TC table, do you?
    drop table test;
    create table test as select level id, sysdate + level/24/60/60 datum from dual connect by level < 10000;
    create index idx1 on test(datum);
    analyze table test compute statistics;
    explain plan for select count(*) from test where to_char(datum,'YYYYMMDD') > '20120516';   
    SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT                                                              
    Plan hash value: 3467505462                                                    
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |    
    |   0 | SELECT STATEMENT   |      |     1 |     7 |     7  (15)| 00:00:01 |    
    |   1 |  SORT AGGREGATE    |      |     1 |     7 |            |          |    
    |*  2 |   TABLE ACCESS FULL| TEST |   500 |  3500 |     7  (15)| 00:00:01 |    
    Predicate Information (identified by operation id):                            
       2 - filter(TO_CHAR(INTERNAL_FUNCTION("DATUM"),'YYYYMMDD')>'20120516')       
    explain plan for select count(*) from test where datum > trunc(sysdate);   
    SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT                                                              
    Plan hash value: 2330213601                                                    
    | Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     | 
    |   0 | SELECT STATEMENT      |      |     1 |     7 |     7  (15)| 00:00:01 | 
    |   1 |  SORT AGGREGATE       |      |     1 |     7 |            |          | 
    |*  2 |   INDEX FAST FULL SCAN| IDX1 |  9999 | 69993 |     7  (15)| 00:00:01 | 
    Predicate Information (identified by operation id):                            
       2 - filter("DATUM">TRUNC(SYSDATE@!))                                        
    drop index idx1;
    create index idx1 on test(to_number(to_char(datum,'YYYYMMDD')));
    analyze table test compute statistics;
    explain plan for select count(*) from test where to_number(to_char(datum,'YYYYMMDD')) > 20120516;   
    SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT                                                              
    Plan hash value: 227046122                                                     
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |     
    |   0 | SELECT STATEMENT  |      |     1 |     5 |     2   (0)| 00:00:01 |     
    |   1 |  SORT AGGREGATE   |      |     1 |     5 |            |          |     
    |*  2 |   INDEX RANGE SCAN| IDX1 |     1 |     5 |     2   (0)| 00:00:01 |     
    Predicate Information (identified by operation id):                            
       2 - access(TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("DATUM"),'YYYYMMDD'))>       
                  20120516)                                                        
    explain plan for select count(*) from test where datum > trunc(sysdate);   
    SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT                                                              
    Plan hash value: 3467505462                                                    
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |    
    |   0 | SELECT STATEMENT   |      |     1 |     7 |     7  (15)| 00:00:01 |    
    |   1 |  SORT AGGREGATE    |      |     1 |     7 |            |          |    
    |*  2 |   TABLE ACCESS FULL| TEST |  9999 | 69993 |     7  (15)| 00:00:01 |    
    Predicate Information (identified by operation id):                            
       2 - filter("DATUM">TRUNC(SYSDATE@!))                                        

  • Improve the Performance of Loops

    Has anyone read "Improve the Performance of Loops" on http://archive.devx.com/free/tips/tipview.asp?content_id=3945 ?
    If so, would you agree that what's written there is absolute b.....t?
    He claims that decreasing the counter improves the performance and tries to prove it with the program:
    int a = 1;
    long startTime = System.currentTimeMillis();
    for (int i = 0, n = Integer.MAX_VALUE; i < n; i++) {
        a = -a;
    }
    // is (he claims) slower than
    long midTime = System.currentTimeMillis();
    for (int i = Integer.MAX_VALUE - 1; i >= 0; i--) {
        a = -a;
    }
    long endTime = System.currentTimeMillis();
    The result is pretty impressive:
    Increasing Loop:4891
    Decreasing Loop:3781
    The only stupid thing is that:
    1. if you run it more times you get
    Increasing Loop:4891
    Decreasing Loop:3781
    Increasing Loop:3782
    Decreasing Loop:3796
    Increasing Loop:3891
    Decreasing Loop:3891
    Increasing Loop:3828
    Decreasing Loop:3937
    Increasing Loop:3891
    Decreasing Loop:3906
    Increasing Loop:3860
    Decreasing Loop:3937
    Increasing Loop:3891
    Decreasing Loop:3906
    So you can see that the performance is worse for decreasing loops after HotSpot has warmed up.
    2. If you run it with -server, you'll even get:
    Increasing Loop:16
    Decreasing Loop:0
    Increasing Loop:0
    Decreasing Loop:0
    Increasing Loop:0
    Decreasing Loop:0
    Increasing Loop:0
    Decreasing Loop:0
    Increasing Loop:0
    Decreasing Loop:0
    Increasing Loop:0
    Decreasing Loop:0
    Increasing Loop:0
    Decreasing Loop:0
    This shows that HotSpot server is much more clever than some programmers.
    Even if you change the code to do something a bit better, like
        public TimeLoop() {
            int a = 2, b = 2;
            long startTime = System.currentTimeMillis();
            for (int i = 0, n = Integer.MAX_VALUE; i < n; i++) {
                a ^= i;
            }
            long midTime = System.currentTimeMillis();
            for (int i = Integer.MAX_VALUE - 1; i >= 0; i--) {
                a ^= i;
            }
            long endTime = System.currentTimeMillis();
            System.out.println("Increasing Loop:" + (midTime - startTime));
            System.out.println("Decreasing Loop:" + (endTime - midTime));
            System.out.println("a=" + a + " b=" + b); // Hotspot must perform _some_ kind of calculation to print this
        }
    you'll find that it doesn't really matter whether you're xoring in increasing or decreasing order.
    For -client:
    Increasing Loop:296
    Decreasing Loop:297
    a=2 b=2
    Increasing Loop:297
    Decreasing Loop:281
    a=2 b=2
    Increasing Loop:297
    Decreasing Loop:297
    a=2 b=2
    For -server:
    Increasing Loop:141
    Decreasing Loop:156
    a=2 b=2
    Increasing Loop:141
    Decreasing Loop:141
    a=2 b=2
    Increasing Loop:140
    Decreasing Loop:156
    a=2 b=2
    (Last three runs for each).
    And I don't believe that accessing array.length is slower than storing the length in an int and comparing against that int!
    Please let's just stop posting silly performance tuning tips!
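    For what it's worth, the array.length claim is just as easy to test. A rough sketch (timings are JVM- and hardware-dependent, and the repeated rounds are there so HotSpot has warmed up before the later numbers mean anything):
        public class ArrayLengthLoop {
            public static void main(String[] args) {
                int[] data = new int[10000000];
                int sink = 0;
                // Run several rounds: the first ones include JIT compilation time.
                for (int round = 0; round < 5; round++) {
                    long t0 = System.currentTimeMillis();
                    for (int i = 0; i < data.length; i++) { // bound re-read each iteration
                        sink ^= data[i];
                    }
                    long t1 = System.currentTimeMillis();
                    int n = data.length; // bound cached in a local
                    for (int i = 0; i < n; i++) {
                        sink ^= data[i];
                    }
                    long t2 = System.currentTimeMillis();
                    System.out.println("array.length: " + (t1 - t0) + "  cached length: " + (t2 - t1));
                }
                System.out.println("sink=" + sink); // keeps the loops from being optimized away
            }
        }
    On any reasonably modern JVM the two loops should come out about the same, since the JIT hoists the bound out of the loop either way.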

    Well, you can always look at the bytecode produced. I wrote two little classes:
    public class t {
        public static void main(String[] args) {
            int a = 0;
            for (int i = 0, n = Integer.MAX_VALUE; i < n; i++) { a = -a; }
        }
    }
    and
    public class t1 {
        public static void main(String[] args) {
            int a = 0;
            for (int i = Integer.MAX_VALUE - 1; i >= 0; i--) { a = -a; }
        }
    }
    And here's the bytecode for their main() methods. (Extra/different bytecodes in "t" are marked):
    t: (incrementing)
    Method void main(java.lang.String[])
       0 iconst_0
       1 istore_1
    ==>2 iconst_0
       3 istore_2
       4 ldc #2 <Integer 2147483647>
       6 istore_3
       7 goto 16
      10 iload_1
      11 ineg
      12 istore_1
      13 iinc 2 1
      16 iload_2
    ==>17 iload_3
    ==>18 if_icmplt 10
      21 return
    t1: (decrementing)
    Method void main(java.lang.String[])
       0 iconst_0
       1 istore_1
       2 ldc #2 <Integer 2147483646>
       4 istore_2
       5 goto 14
       8 iload_1
       9 ineg
      10 istore_1
      11 iinc 2 -1
      14 iload_2
      15 ifge 8
      18 return
    The decrementing code does use fewer bytecodes to do its thing.
    However, as someone pointed out - once Hotspot gets involved, all bets are off. And as someone else pointed out, if the body of the loop does nearly anything at all, the 2-bytecode-difference is going to get completely swamped.
    In general, this is the kind of micro-optimizing that I'd ignore completely...
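    (For anyone who wants to reproduce the listings above: compile the classes with javac, then disassemble with javap -c t and javap -c t1; both tools ship with the JDK.)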
    Grant

  • Options to improve the performance of the Job

    Hi Team,
    As part of the CRM Upgrade requirement, we are planning to use Account Life cycle functionality to reflect the status of an account.
    As per the SAP recommendations (Note 1113330) we are currently executing the program CRM_BUPA_USERSTATUS_CONV2ROLE to convert user status master data to BP roles. We have noticed that this program takes a long time even when we run it for a single business partner. We are trying to explore options to improve the performance of the job. If anyone has done this kind of exercise in a previous assignment or has information on this, please provide your feedback on the points below.
    1) Total volume of customer master data
    2) How many records did we consider for one execution of the conversion program?
    3) How much time did it take for one execution? Did we do any performance tuning?
    4) When we run the program in background mode, we do not get the spool showing the log information. Was there any custom report developed to view the log when the program is executed in background mode? If so, can you share the technical details?
    5) Any information on how many work processes were available for executing the jobs
    Appreciate your help.
    Regards,
    Varun

    Hello Udaya ,
    Could you please try providing a range of BPs as per note 1121015? This can help in improving the performance.
    Thanks & regards,
    Krishnen

  • EP6 sp12 Performance Issue, Need help to improve performance

    We have a Portal development environment with EP6.0 sp12.
    What we are experiencing is a performance issue. It's not extremely slow, but slow compared to normal (compared to our prod box). For example, after entering the username and password and clicking the <Log on> button, it takes more than 10 seconds for the first home page to appear. Also, we currently have the Portal hooked up to 3 xAPPS systems and one BW system. The time taken for a BW query to appear (with its selection screen) is also more than 10 seconds. However, access to one of the other xAPPS systems is comparatively faster.
    Do we have a simple-to-use guide (not a very elaborate one) with step-by-step guidance to immediately improve the performance of the Portal?
    A simple guide, easy to implement, with immediate effect is what we are looking for in the short term.
    Thanks
    Arunabha

    Hi Eric,
      I have searched but didn't find the Portal Tuning and Optimization Guide you suggested. Can you help me find it?
    Subrato,
      That one is good and I will certainly read through it, but the issue is that it covers only the network side.
      Do you know of any other guide, a very basic one (maybe 10 steps), that shows the process step by step? That would be very helpful. I already have some information from the thread Portal Performance - page loads slow, client cache reset/cleared too often
    But I'm really looking for an answer (steps to do it quickly and effectively) instead of a list of various guides.
    It would be very helpful if you or anybody who has actually done some performance tuning could send a basic list of steps that I can apply immediately, instead of reading through these large guides.
    I know I am looking for a shortcut, but this is the need of the hour.
    Thanks
    Arun

  • Important!! Improve the life and performance of the battery.

    Reduce the operating temperature and increase battery life
    The battery in your notebook PC is designed to provide the necessary amount of energy for the processor while maintaining HP high safety standards. As a result, the battery may not charge or may stop providing power to the notebook when the battery temperature exceeds the specified, design safety level.
    If the battery life appears shorter than normal, the battery stops charging before it is 99%-100% full, and the battery feels warmer than usual, then the battery has most likely reached its designed "no charge" safety state. The battery will not charge again until the temperature condition is corrected.
    Try one of the following methods to correct the battery temperature:
    When charging the battery, do not use applications that require large amounts of system resources, such as graphics- or memory-intensive applications, or heavy and extended hard drive usage.
    Turn off your notebook and remove the battery to allow it to return to a safe operating temperature.
    Make sure the notebook PC is operating on a hard surface. Using the Notebook PC on a bed or sofa may block the vents causing the notebook PC to heat up and shut down.
    By taking these steps, the battery will return to its normal operating temperature range and continue to charge and discharge as designed.
    Calibrating the battery while the PC is not in use
    Recalibrating the battery requires a cycle of a complete charge and a complete discharge. To recalibrate the battery while the PC is not in use, complete the following steps.
    The recalibration may take 1-5 hours depending on the age of the battery and the configuration of the notebook PC you own. The PC should not be used while you perform the following steps. Completing all the following steps will also calibrate the battery so that the power meter readings are accurate.
    Shut down the notebook PC
    Connect the AC Adapter to the notebook PC and to an electrical socket.
    Charge the Notebook PC until the Battery Charge light is Green. This indicates the battery is completely charged.
    Press and release the Power Button to start the computer.
    Press the F8 key several times when the HP Logo displays.
    When the Windows Advanced Startup Menu displays, select the Startup in Safe Mode option.
    Remove the AC power adapter from the notebook PC.
    Allow the battery to discharge completely until the notebook PC turns off.
    The battery is now calibrated and the battery level reading on the power meter is now accurate.
    If you are not using the notebook regularly, unplug the AC adapter and shut down the notebook. Following these practices will improve the life and performance of the battery. Here is a quick list of Do's and Don'ts for the care of your Li-Ion batteries:
    Do's
    When you receive a new Notebook or Tablet PC, leave the battery to fully charge overnight.
    Condition a new battery by using it until it is fully discharged, and then re-charge it fully. Doing this once a month will help to accurately calibrate your battery.
    Always ensure the battery is recharged as soon as possible after it becomes fully discharged. A battery will be permanently damaged if left for an extended length of time in a fully discharged state.
    Remember that a Lithium-Ion battery will slowly deteriorate; a new battery will always perform better than one that is 6-months old.
    Remember that the battery half-life is rated for a certain total number of charge/discharge cycles (see your User Manual or Quick Start Guide for the rating). For example, a battery that is rated for 3 hours and 500 charge/discharge cycles, will still be considered as within specification, even if it only lasts for 1 hour 45 minutes after 500 charge/discharge cycles.
    Heat is the worst enemy of a battery. Allow plenty of air to circulate around the Notebook/Tablet PC, so that the battery is kept as cool as possible when charging and also when in use. If provided, use the integrated 'legs' under the Notebook to raise the notebook and improve air circulation.
    Remove the battery if storing for several months (the battery should be at approximately 50% charge or higher).
    If you use a NoteBus or if charging your Notebooks or Tablet PCs in a confined space, allow for adequate ventilation in order to keep the batteries as cool as possible.
    Don'ts
    Do Not - Expose the battery to excessive heat or cold (i.e. outside the range of 10-35 degrees Centigrade ambient).
    Do Not - Store the battery in a fully charged state (store batteries with about 50% charge).
    Do Not - Allow a nearly flat battery to be unused for more than a month or so. The battery will slowly discharge until it becomes fully discharged and this will permanently damage the battery cells.
    Do Not - Charge your Notebook/Tablet PC inside a carry case - the battery may overheat.
    Do Not - Charge your Notebook/Tablet PC when stacked on top of each other - the battery may overheat.
    Remember: Your battery is slowly degrading all the time, even if it is not used. Keeping your battery as cool as possible will slow down this degradation considerably.
    For more information please visit the following links:
    How to Improve the Performance of the Battery
    http://h10025.www1.hp.com/ewfrf/wc/document?docname=c01297640&cc=us&lc=en&dlc=en
    10 Tips to make your Laptop Battery last longer
    http://labnol.blogspot.com/2006/03/10-tips-to-make-your-laptop-battery.html
    Disclaimer: By clicking on the link above, you will be leaving HP.com to visit a web site that is not maintained by HP and where the HP privacy policy does not apply. This link is provided to you for convenience and does not serve as an endorsement by HP of any information or contacts that you may find on this non-HP site.
    ||-Although I am working on behalf of HP, I am speaking for myself and not for HP.-||
    //Click on Kudos if my reply was helpful and answered your question//
    ||-If my answer solved the problem please mark the topic as the accepted solution-||

    I hope the above article helps.
