Performance problem moving from Oracle 9i to Oracle 10g using Crystal XI

We have a Crystal XI report using ODBC drivers, 14 tables, and one sub report. If we execute the report against an Oracle 9i database, it completes in about 12 seconds. If we execute it against an Oracle 10g database, it takes about 35 seconds.
Our technical setup:
Application server: Windows Server 2003, running the Crystal XI SP2 runtime DLLs with Oracle Client 10.01.00.02, .NET Framework 1.1, C# for the Crystal integration, and unmanaged C++ for the app server environment calling into C# through a dynamically loaded mixed-mode C++ DLL.
Database server: Oracle 10g
What we have concluded:
Reducing the number of tables to 1 reduces the execution time of the report from 180s to 13s. With 1 table and the sub report we get 30 seconds.
We have done some database tracing and see that Crystal Reports issues the following query when verifying the database; it takes longer in 10g than in 9i.
We have done some profiling in the application code. Retargeting the first table to the target database takes 20-30 times longer in 10g than in 9i; retargeting the other tables takes about twice as long. The export to a PDF file takes about 4-5 times as long in 10g as in 9i.
Oracle 10g no longer supports the /*+ RULE */ hint.
Verify DB Query:
select /*+ RULE */ *
from (
  select /*+ RULE */
         null table_qualifier,
         o1.owner table_owner,
         o1.object_name table_name,
         decode(o1.owner,
                'SYS',    decode(o1.object_type, 'TABLE', 'SYSTEM TABLE', 'VIEW', 'SYSTEM VIEW', o1.object_type),
                'SYSTEM', decode(o1.object_type, 'TABLE', 'SYSTEM TABLE', 'VIEW', 'SYSTEM VIEW', o1.object_type),
                o1.object_type) table_type,
         null remarks
  from   all_objects o1
  where  o1.object_type in ('TABLE', 'VIEW')
  union
  select /*+ RULE */
         null table_qualifier,
         s.owner table_owner,
         s.synonym_name table_name,
         'SYNONYM' table_type,
         null remarks
  from   all_objects o3, all_synonyms s
  where  o3.object_type in ('TABLE', 'VIEW')
  and    s.table_owner = o3.owner
  and    s.table_name = o3.object_name
  union
  select /*+ RULE */
         null table_qualifier,
         s1.owner table_owner,
         s1.synonym_name table_name,
         'SYNONYM' table_type,
         null remarks
  from   all_synonyms s1
  where  s1.db_link is not null
) tables
WHERE 1=1 AND TABLE_NAME='QCTRL_VESSEL' AND table_owner='QLM'
ORDER BY 4, 2, 3
SQL From Main Report:
SELECT "QCODE_PRODUCT"."PROD_DESCR", "QCTRL_CONTACT"."CONTACT_FIRST_NM", "QCTRL_CONTACT"."CONTACT_LAST_NM", "QCTRL_MEAS_PT"."MP_NM", "QCTRL_ORG"."ORG_NM", "QCTRL_TKT"."SYS_TKT_NO", "QCTRL_TRK_BOL"."START_DT", "QCTRL_TRK_BOL"."END_DT", "QCTRL_TRK_BOL"."DESTINATION", "QCTRL_TRK_BOL"."LOAD_TEMP", "QCTRL_TRK_BOL"."LOAD_PCT", "QCTRL_TRK_BOL"."WEIGHT_OUT", "QCTRL_TRK_BOL"."WEIGHT_IN", "QCTRL_TRK_BOL"."WEIGHT_OUT_UOM_CD", "QCTRL_TRK_BOL"."WEIGHT_IN_UOM_CD", "QCTRL_TRK_BOL"."VAPOR_PRES", "QCTRL_TRK_BOL"."SPECIFIC_GRAV", "QCTRL_TRK_BOL"."PMO_NO", "QCTRL_TRK_BOL"."ODORIZED_VOL", "QARCH_SEC_USER"."SEC_USER_NM", "QCTRL_TKT"."DEM_CTR_NO", "QCTRL_BA_ENTITY"."BA_NM1", "QCTRL_BA_ENTITY_VW"."BA_NM1", "QCTRL_BA_ENTITY"."BA_ID", "QCTRL_TRK_BOL"."VOLUME", "QCTRL_TRK_BOL"."UOM_CD", "QXREF_BOL_PROD"."MOVEMENT_TYPE_CD", "QXREF_BOL_PROD"."BOL_DESCR", "QCTRL_TKT"."VOL", "QCTRL_TKT"."UOM_CD", "QCTRL_PMO"."LINE_UP_BEFORE", "QCTRL_PMO"."LINE_UP_AFTER", "QCODE_UOM"."UOM_DESCR", "QCTRL_ORG_VW"."ORG_NM"
FROM (((((((((((("QLM"."QCTRL_TRK_BOL" "QCTRL_TRK_BOL" INNER JOIN "QLM"."QCTRL_PMO" "QCTRL_PMO" ON "QCTRL_TRK_BOL"."PMO_NO"="QCTRL_PMO"."PMO_NO") INNER JOIN "QLM"."QCTRL_MEAS_PT" "QCTRL_MEAS_PT" ON "QCTRL_TRK_BOL"."SUP_MP_ID"="QCTRL_MEAS_PT"."MP_ID") INNER JOIN "QLM"."QCTRL_TKT" "QCTRL_TKT" ON "QCTRL_TRK_BOL"."PMO_NO"="QCTRL_TKT"."PMO_NO") INNER JOIN "QLM"."QCTRL_CONTACT" "QCTRL_CONTACT" ON "QCTRL_TRK_BOL"."DRIVER_CONTACT_ID"="QCTRL_CONTACT"."CONTACT_ID") INNER JOIN "QFC_QLM"."QARCH_SEC_USER" "QARCH_SEC_USER" ON "QCTRL_TRK_BOL"."USER_ID"="QARCH_SEC_USER"."SEC_USER_ID") LEFT OUTER JOIN "QLM"."QCODE_UOM" "QCODE_UOM" ON "QCTRL_TRK_BOL"."ODORIZED_VOL_UOM_CD"="QCODE_UOM"."UOM_CD") INNER JOIN "QLM"."QCTRL_ORG_VW" "QCTRL_ORG_VW" ON "QCTRL_MEAS_PT"."ORG_ID"="QCTRL_ORG_VW"."ORG_ID") INNER JOIN "QLM"."QCTRL_BA_ENTITY" "QCTRL_BA_ENTITY" ON "QCTRL_TKT"."DEM_BA_ID"="QCTRL_BA_ENTITY"."BA_ID") INNER JOIN "QLM"."QCTRL_CTR_HDR" "QCTRL_CTR_HDR" ON "QCTRL_PMO"."DEM_CTR_NO"="QCTRL_CTR_HDR"."CTR_NO") INNER JOIN "QLM"."QCODE_PRODUCT" "QCODE_PRODUCT" ON "QCTRL_PMO"."PROD_CD"="QCODE_PRODUCT"."PROD_CD") INNER JOIN "QLM"."QCTRL_BA_ENTITY_VW" "QCTRL_BA_ENTITY_VW" ON "QCTRL_PMO"."VESSEL_BA_ID"="QCTRL_BA_ENTITY_VW"."BA_ID") LEFT OUTER JOIN "QLM"."QXREF_BOL_PROD" "QXREF_BOL_PROD" ON "QCTRL_PMO"."PROD_CD"="QXREF_BOL_PROD"."PURITY_PROD_CD") INNER JOIN "QLM"."QCTRL_ORG" "QCTRL_ORG" ON "QCTRL_CTR_HDR"."BUSINESS_UNIT_ORG_ID"="QCTRL_ORG"."ORG_ID"
WHERE "QCTRL_TRK_BOL"."PMO_NO"=12345 AND "QXREF_BOL_PROD"."MOVEMENT_TYPE_CD"='TRK'
SQL From Sub Report:
SELECT "QXREF_BOL_VESSEL"."PMO_NO", "QXREF_BOL_VESSEL"."VESSEL_NO"
FROM "QLM"."QXREF_BOL_VESSEL" "QXREF_BOL_VESSEL"
WHERE "QXREF_BOL_VESSEL"."PMO_NO"=12345
Does anyone have any suggestions on how we can improve the report performance with 10g?

Hi Eric,
Thanks for your response. The optimizer mode in our 9i database is CHOOSE. We changed the optimizer mode from ALL_ROWS to CHOOSE in 10g but it didn't make a difference.
While researching Metalink I came across a couple of documents describing performance problems with certain data-dictionary views in 10g. Apparently the definitions of ALL_OBJECTS, ALL_ARGUMENTS and ALL_SYNONYMS changed in 10g, degrading the performance of queries against these views. These are the same views that Crystal Reports is querying. We'll try the workaround suggested in these documents and see if it resolves the issue.
Here are the Doc Ids, if you are interested:
Note 377037.1
Note 364822.1
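The workaround usually discussed for slow data-dictionary queries in 10g is to give the CBO statistics on the dictionary itself. A minimal sketch of that idea, assuming this matches what the notes recommend (check the notes themselves before running this, as a suitably privileged user):
-- Hedged sketch: gather dictionary and fixed-object statistics (10g DBMS_STATS)
-- so queries against ALL_OBJECTS / ALL_SYNONYMS can be costed sensibly.
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;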
Thanks again for your response.
Venu Boddu.

Similar Messages

  • Performance problem between Oracle.DataAccess v1 and v2

    Hi, I have a serious performance problem with OracleDataReader when I use the GetValues method.
    My server is Oracle 9.2.0.7, and I use ODAC v10.2.0.221.
    I create a dummy table for the benchmark:
    create table test (a varchar2(50), b number);

    begin
      for i in 1..62359 loop
        insert into test values ('Values ' || i, i);
      end loop;
      commit;
    end;
    /
    I use the same code to benchmark Framework v1 and Framework v2.
    Code:
    try {
        OracleConnection c = new OracleConnection("user id=saturne_dbo;password=***;data source=satedfx;");
        c.Open();
        go(c);
        c.Close();
    }
    catch (Exception ex) {
        MessageBox.Show(ex.Message);
    }

    private void go(IDbConnection c) {
        IDbCommand cmd = c.CreateCommand();
        cmd.CommandText = "select * from test";
        cmd.CommandType = CommandType.Text;
        DateTime dt = DateTime.Now;
        IDataReader reader = cmd.ExecuteReader();
        int count = 0;
        while (reader.Read()) {
            object[] fields = new object[reader.FieldCount];
            reader.GetValues(fields);   // fetch all column values of the current row
            count++;
        }
        reader.Close();
        TimeSpan eps = DateTime.Now - dt;
        MessageBox.Show("Time " + count + " : " + eps.TotalSeconds);
    }
    Results are:
    Framework v1 with OracleDataAccess 1.10.2.2.20: "Time 62359 : 0.5"
    Framework v2 with OracleDataAccess 2.10.2.2.20: "Time 62359 : 3.57" (a factor of 6!)
    I notice the same problem with the OleDb provider and the Microsoft Oracle client provider.
    It's a serious problem for my production server; the calculation time explodes.
    What is the explanation? Does anyone know a solution?

    Can you please try out the following:
    1. Create a .NET 1.x DLL with your benchmark code. This will obviously use ODP.NET for .NET 1.x.
    2. Call this assembly routine from a .NET 1.x executable and note the results.
    3. Now call this assembly routine from a .NET 2.0 executable and note the results.
    The idea is to always use "ODP.NET for .NET 1.x" even in .NET 2.0 runtime. This will tell us whether the performance degradation is a runtime issue.

  • Problem connecting (read only) to Sybase Server using Crystal XI

    Hi,
    I'm having a very similar problem to Renuha in the thread 'Problem connecting to Sybase Server using Crystal XI-Version-11.0.0.1994'
    The thread is marked as assumed answered but I suspect not!
    I am experiencing this issue in Crystal XI R1 11.0.0.2495, post SP4 install.
    The issue was exactly the same pre SP4 install, when it was a vanilla install at version 11.0.0.1282.
    I am trying to connect to a Sybase database via Crystal > Start Page > Standard Report Wizard > Standard Report Creation Wizard > Sybase Server > Make New Connection.
    I enter the details of my read-only user account and select my desired database from the (successfully) populated 'Database' drop-down.
    After some time, in the 'Standard Report Creation Wizard' window I get the server listed under the Sybase Server branch, but on expanding the server I only get '...no items found...'. However, if I use the sa account, after selecting a particular database I can see all the available database objects under the Sybase Server > [server] node.
    I am on Windows XP Pro SP3, with Sybase Open Client v12.5.2.
    I assume my Sybase Open Client is correctly installed, as I am able to connect successfully using the sa account.
    I am trying to connect to a Solaris 10 (5.10) system running Sybase @@version= Adaptive Server Enterprise/15.0.3/EBF 16548 ESD#1/P/Sun_svr4/OS 5.8/ase1503/268
    Our database vendor/supplier has said:
    "...Crystal reports is not handling the granularity of the Sybase revoke permissions and assuming we've revoked all access to any table where we have revoked only write access.
    Is anyone able to assist?
    Thanks,
    Matt

    Don,
    Thanks again for the response and my apologies for the delay in reply - they keep giving me other work to do!!
    Anyway.
    CR XI R2 SP6 successfully installed.
    Same outcome on the Sybase connect with the full read/write sa account,
    i.e. successful connect and sight of all database objects within my chosen database.
    Same outcome on my restricted read-only account,
    i.e. I am able to authenticate successfully and choose which database I wish to select, but subsequently 'no items found' is still displayed when I expand my database node.
    I believe it is a problem with the read-only account, as both accounts are able to connect, as shown by the availability of the dropdown listing the available databases within the specified Sybase instance.
    The reasons for going down this path are as you suspect - I've been asked to provide access which is not full!
    As far as testing via test table creation goes:
    I know very little of Sybase (?!) and all our Sybase DBA activities are carried out by our system/dB vendor/supplier.
    To do further testing I would have to go back to our dB vendor/supplier but, as mentioned, (I get the impression) they already believe they have carried out all that is required of them by providing locked-down read-only access.
    I ought to mention that the database being accessed is a restored copy of "the previous day's" live data, on an MIS server. The read-only account comes over with full privileges, and it is a script, run after the database restore, which knocks the account's privileges down to read-only. Given this scenario, what would I have to ask of them re further testing/troubleshooting?
    Thanks,
    Matt

  • Query Performance problem after upgrade from 8i to 10g

    The following query takes longer in 10g.
    SELECT LIC_ID, FSCL_YR, KEY_NME, CRTE_TME_STMP, REMT_AMT, UNASGN_AMT, BAD_CK_IND,
           CSH_RCPT_PARTY_ID, csh_rcpt_id, REC_TYP, XENT_ID, CLNT_CDE, BTCH_CSH_STA,
           file_nbr, lic_nbr, TAX_NBR, ASGN_AMT
    FROM (
      SELECT /*+ FIRST_ROWS */
             cpty.lic_id,
             cpty.clnt_cde,
             cpty.csh_rcpt_party_id,
             cpty.csh_rcpt_id,
             cpty.rec_typ,
             cpty.xent_id,
             cr.fscl_yr,
             cbh.btch_csh_sta,
             nam.key_nme,
             lic.file_nbr,
             lic.lic_nbr,
             cr.crte_tme_stmp,
             cr.remt_amt,
             cr.unasgn_amt,
             ee.tax_nbr,
             cr.asgn_amt,
             cr.bad_ck_ind
      FROM   lic lic,
             csh_rcpt_party cpty,
             name nam,
             xent ee,
             csh_rcpt cr,
             csh_btch_hdr cbh
      WHERE  1 = 1
      AND    ee.xent_id = nam.xent_id
      AND    cbh.btch_id = cr.btch_id
      AND    cr.csh_rcpt_id = cpty.csh_rcpt_id
      AND    ee.xent_id = cpty.xent_id
      AND    cpty.lic_id = lic.lic_id(+)
      AND    (cpty.clnt_cde IN (SELECT clnt_cde
                                FROM   clnt
                                START WITH clnt_cde = '4006'
                                CONNECT BY PRIOR clnt_cde_prnt = clnt_cde)
              OR cpty.clnt_cde IS NULL)
      AND    nam.cur_nme_ind = 'Y'
      AND    nam.ent_nme_typ = 'P'
      AND    nam.key_nme LIKE 'WHITE%')
    ORDER BY lic_id
    Explain Plan in 8i:
    0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=17 Card=1 Bytes=107)
    1   0    FILTER
    2   1      NESTED LOOPS (Cost=17 Card=1 Bytes=107)
    3   2        NESTED LOOPS (Cost=15 Card=1 Bytes=101)
    4   3          NESTED LOOPS (OUTER) (Cost=13 Card=1 Bytes=73)
    5   4            NESTED LOOPS (Cost=11 Card=1 Bytes=60)
    6   5              NESTED LOOPS (Cost=6 Card=1 Bytes=35)
    7   6                INDEX (RANGE SCAN) OF 'NAME_WBSRCH1_I' (NON-UNIQUE) (Cost=4 Card=1 Bytes=26)
    8   6                TABLE ACCESS (BY INDEX ROWID) OF 'XENT' (Cost=2 Card=4649627 Bytes=41846643)
    9   8                  INDEX (UNIQUE SCAN) OF 'EE_PK' (UNIQUE) (Cost=1 Card=4649627)
    10  5              TABLE ACCESS (BY INDEX ROWID) OF 'CSH_RCPT_PARTY' (Cost=5 Card=442076 Bytes=11051900)
    11  10               INDEX (RANGE SCAN) OF 'CPTY_EE_FK_I' (NON-UNIQUE) (Cost=2 Card=442076)
    12  4            TABLE ACCESS (BY INDEX ROWID) OF 'LIC' (Cost=2 Card=3254422 Bytes=42307486)
    13  12             INDEX (UNIQUE SCAN) OF 'LIC_PK' (UNIQUE) (Cost=1 Card=3254422)
    14  3          TABLE ACCESS (BY INDEX ROWID) OF 'CSH_RCPT' (Cost=2 Card=6811443 Bytes=190720404)
    15  14           INDEX (UNIQUE SCAN) OF 'CR_PK' (UNIQUE) (Cost=1 Card=6811443)
    16  2        TABLE ACCESS (BY INDEX ROWID) OF 'CSH_BTCH_HDR' (Cost=2 Card=454314 Bytes=2725884)
    17  16         INDEX (UNIQUE SCAN) OF 'CBH_PK' (UNIQUE) (Cost=1 Card=454314)
    18  1    FILTER
    19  18     CONNECT BY
    20  19       INDEX (UNIQUE SCAN) OF 'CLNT_PK' (UNIQUE) (Cost=1 Card=1 Bytes=4)
    21  19       TABLE ACCESS (BY USER ROWID) OF 'CLNT'
    22  19       TABLE ACCESS (BY INDEX ROWID) OF 'CLNT' (Cost=2 Card=1 Bytes=7)
    23  22         INDEX (UNIQUE SCAN) OF 'CLNT_PK' (UNIQUE) (Cost=1 Card=1)
    Explain Plan in 10g:
    0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=19 Card=1 Bytes=112)
    1   0    SORT (ORDER BY) (Cost=19 Card=1 Bytes=112)
    2   1      FILTER
    3   2        NESTED LOOPS (Cost=18 Card=1 Bytes=112)
    4   3          NESTED LOOPS (Cost=16 Card=1 Bytes=106)
    5   4            NESTED LOOPS (OUTER) (Cost=14 Card=1 Bytes=78)
    6   5              NESTED LOOPS (Cost=12 Card=1 Bytes=65)
    7   6                NESTED LOOPS (Cost=6 Card=1 Bytes=34)
    8   7                  INDEX (RANGE SCAN) OF 'NAME_WBSRCH1_I' (INDEX) (Cost=4 Card=1 Bytes=25)
    9   7                  TABLE ACCESS (BY INDEX ROWID) OF 'XENT' (TABLE) (Cost=2 Card=1 Bytes=9)
    10  9                    INDEX (UNIQUE SCAN) OF 'EE_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
    11  6                TABLE ACCESS (BY INDEX ROWID) OF 'CSH_RCPT_PARTY' (TABLE) (Cost=6 Card=1 Bytes=31)
    12  11                 INDEX (RANGE SCAN) OF 'CPTY_EE_FK_I' (INDEX) (Cost=2 Card=4)
    13  5              TABLE ACCESS (BY INDEX ROWID) OF 'LIC' (TABLE) (Cost=2 Card=1 Bytes=13)
    14  13               INDEX (UNIQUE SCAN) OF 'LIC_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
    15  4            TABLE ACCESS (BY INDEX ROWID) OF 'CSH_RCPT' (TABLE) (Cost=2 Card=1 Bytes=28)
    16  15             INDEX (UNIQUE SCAN) OF 'CR_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
    17  3          TABLE ACCESS (BY INDEX ROWID) OF 'CSH_BTCH_HDR' (TABLE) (Cost=2 Card=1 Bytes=6)
    18  17           INDEX (UNIQUE SCAN) OF 'CBH_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
    19  2      FILTER
    20  19       CONNECT BY (WITH FILTERING)
    21  20         TABLE ACCESS (BY INDEX ROWID) OF 'CLNT' (TABLE) (Cost=2 Card=1 Bytes=15)
    22  21           INDEX (UNIQUE SCAN) OF 'CLNT_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
    23  20         NESTED LOOPS
    24  23           BUFFER (SORT)
    25  24             CONNECT BY PUMP
    26  23           TABLE ACCESS (BY INDEX ROWID) OF 'CLNT' (TABLE) (Cost=2 Card=1 Bytes=7)
    27  26             INDEX (UNIQUE SCAN) OF 'CLNT_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
    28  20         TABLE ACCESS (FULL) OF 'CLNT' (TABLE) (Cost=5 Card=541 Bytes=5951)
    The explain plans differ in steps 19 to 28; I am not sure why 10g has more steps.

    Hi,
    I have no experience with 8i, but I do know that 10g does costing differently from 8i, so the other plan might have been eliminated.
    Normally when I see differences I just collect statistics on the tables and the indexes and remove the hints; hints are not good. This has helped me solve a few problems (a sketch follows below).
    Thanks,
    CT
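    A minimal sketch of the statistics refresh CT describes, using 10g DBMS_STATS; the schema name is a placeholder and the table name is just one of the tables from the query above:
    -- Hedged sketch: gather fresh statistics for a table and its indexes
    -- (CASCADE => TRUE) so the CBO costs plans from current data.
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'YOUR_SCHEMA',      -- placeholder: the owning schema
        tabname => 'CSH_RCPT_PARTY',   -- one of the tables in the query
        cascade => TRUE);              -- also gather index statistics
    END;
    /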

  • Performance problems between dev and prod

    I run the same query with identical data and indexes, but one system takes 0.01 seconds to run while the production system takes 1.0 seconds. TKprof for dev is:
    Rows Row Source Operation
    1 TABLE ACCESS BY INDEX ROWID VAP_BANDVALUE
    3 NESTED LOOPS
    1 NESTED LOOPS
    41 NESTED LOOPS
    41 NESTED LOOPS
    1 TABLE ACCESS BY INDEX ROWID VAP_PACKAGE
    1 INDEX UNIQUE SCAN SYS_C0032600 (object id 51356)
    41 TABLE ACCESS BY INDEX ROWID VAP_BANDELEMENT
    41 AND-EQUAL
    82 INDEX RANGE SCAN IDX_BE2 (object id 53559)
    41 INDEX RANGE SCAN IDX_BE1 (object id 53558)
    41 TABLE ACCESS BY INDEX ROWID VAP_BAND
    41 INDEX UNIQUE SCAN SYS_C0034599 (object id 53556)
    1 INDEX UNIQUE SCAN SYS_C0032549 (object id 51335)
    1 INDEX RANGE SCAN IDX_BV1 (object id 53557)
    Tkprof for Prod is:
    Rows Execution Plan
    0 SELECT STATEMENT MODE: ALL_ROWS
    1 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'VAP_BANDVALUE' (TABLE)
    52001 NESTED LOOPS
    26000 NESTED LOOPS
    26000 NESTED LOOPS
    26000 NESTED LOOPS
    1 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'VAP_PACKAGE' (TABLE)
    1 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'SYS_C0018725' (INDEX (UNIQUE))
    26000 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'VAP_BANDELEMENT' (TABLE)
    26000 INDEX MODE: ANALYZED (RANGE SCAN) OF 'IDX_BE2' (INDEX)
    26000 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'VAP_BAND' (TABLE)
    26000 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'SYS_C0030648' (INDEX (UNIQUE))
    26000 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'SYS_C0018674' (INDEX (UNIQUE))
    26000 INDEX MODE: ANALYZED (RANGE SCAN) OF 'IDX_BV1' (INDEX)
    The row count varies greatly. But it shouldn't, as the data is the same.
    Any ideas?

    From DEV you show the Row Source Operations for the query. The column named "Rows" signifies the actual number of rows processed with each step.
    From PROD you show the Execution Plan for the query; that is, tkprof was executed with the EXPLAIN option which generates the execution plan as of the time when tkprof was run. The "Rows" column in the Explain Plan output comes from the PLAN_TABLE.CARDINALITY, which represents an estimate by the CBO for the number of rows [expected to be] processed with each step.
    So, if by <quote>The row count varies greatly</quote> you meant these "Rows" column outputs, then you are comparing actuals from one database with estimates from another. Get the Row Source Operations from both.
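    A minimal sketch of capturing actual Row Source Operations, assuming you can trace a session on each system and read the resulting trace file (the trace file name below is illustrative):
    -- Hedged sketch: trace the session, run the query, then format the trace.
    ALTER SESSION SET sql_trace = TRUE;
    -- ... run the problem query here ...
    ALTER SESSION SET sql_trace = FALSE;
    -- On the server, format the trace WITHOUT the EXPLAIN option, so tkprof
    -- prints actual row source operations rather than a re-generated plan:
    --   tkprof ora_12345.trc out.txt sys=no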
    "Identical data and indexes":
    1. data may be the same, but it is not necessarily stored physically the same way.
    2. Indexes being the same means their definitions are the same; again, physically they are not necessarily identical.
    In other words, data in PROD (the way it is stored on disk) may have evolved as a result of discrete deletes/updates/inserts; in DEV it could be stored more compactly if, for example, you took a copy of PROD and moved it into DEV. So the number of blocks for your segments will likely differ between PROD and DEV, the clustering factor for your indexes is likely different, etc. ... things which could [and do] influence the CBO. The statistics may be different. (A sketch for comparing this follows.)
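    A minimal sketch for quantifying those physical differences, assuming access to the DBA views and using one table from the trace as an example:
    -- Hypothetical comparison queries; run on both DEV and PROD and diff the output.
    SELECT segment_name, blocks
    FROM   dba_segments
    WHERE  segment_name = 'VAP_BANDELEMENT';

    SELECT index_name, clustering_factor, leaf_blocks
    FROM   dba_indexes
    WHERE  table_name = 'VAP_BANDELEMENT';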
    I guess what I'm saying is ... it is quite hard, if not outright impossible, to get two identical databases/instances/load ... hence, don't expect the executions to be 100% identical, even if you have "identical data and indexes". By all means compare between DEV and PROD (make sure you compare the same thing though) and use the observed differences as an indicator for further investigation ... don't chase the goal of 100% identical behavior.
    Now, by all means look at that query taking 1 second in PROD ... I have only addressed <quote>The row count varies greatly. But it shouldn't as the data is the same.</quote>

  • Create Portal user within Oracle IAS9.0.4 (10g) using APIs

    With the Portal version 3.0 we created our users using the functions:
    1. wwsec_api.add_portal_user(...)
    2. wwsec_api.activate_portal_user(<user>)
    3. WWSSO_API_USER_ADMIN.CREATE_USER(...)
    Now with Portal version 9.0.4 (10g) we can't create the users.
    Any suggestions?

    declare
      l_group     varchar2(240);
      l_member    varchar2(240);
      l_guid      varchar2(32);
      l_id        number;
      l_user_name varchar2(64) := 'user';
      l_pass      varchar2(64) := 'user';
      l_email     varchar2(64) := '[email protected]';
    begin
      -- create the user entry in LDAP (OID)
      l_guid := wwsec_oid.create_user_entry(
        p_base      => wwsec_oid.get_user_search_base,
        p_user_name => l_user_name,
        p_password  => l_pass,
        p_email     => l_email);
      commit;
      -- include the user in the default OID group
      l_group  := 'cn=AUTHENTICATED_USERS,' || wwsec_oid.get_group_search_base;
      l_member := 'cn=' || l_user_name || ',' || wwsec_oid.get_user_search_base;
      wwsec_oid.grant_group_membership(p_group_dn => l_group, p_member_dn => l_member);
      commit;
      -- include the user in the default Portal group
      wwsec_api.add_user_to_list(
        p_person_id   => wwsec_api.id(l_user_name),
        p_to_group_id => wwsec_api.group_id('AUTHENTICATED_USERS'),
        p_is_owner    => wwsec_api.not_owner);
      commit;
      -- add the user to Portal
      l_id := portal.wwsec_api.add_portal_user(
        p_user_name   => l_user_name,
        p_portal_user => 'Y',
        p_email       => l_email);
      commit;
    end;
    /
    lower('LUCK')

  • Sync Problems between Blackberry 8330 and Outlook 2007 using Desktop software v 4.7

    I have a Blackberry Curve 8330 and I have been experiencing some problems syncing it with my HP notebook running Outlook 2007. I am using the most current Desktop Software, version 4.7. I am using the BB Internet Service and all of my email accounts seem to be operating well. When I attempt to sync I receive the following errors, in this sequence:
    Error 1: Intellisync; There are no applications configured for synchronization. From the synchronize screen, go to Configure > Synchronization to configure the applications.
    At this point, I attempt to follow the above instructions and configure the applications I want to sync. When I complete the above step for all applications, or each one separately, I receive the following error:
    Error 2: Runtime Error; Folder is no longer a part of the system data source or the folder can not be found.
    As I stated above, I receive the error for all 4 applications (Address Book, Calendar, Memos and Tasks) whether they are configured separately or all at once. I have uninstalled the Desktop Software v4.7 five times and Outlook 3 times. I am truly at a loss and in dire need of my calendar and contacts on my phone. If anyone has come across this in the past or has any ideas, please feel free to comment or reply. Any info will be greatly appreciated!
    BB:  8330 Curve
    DS:  Desktop Software v 4.7
    PC:  HP TX2500Z Notebook/Tablet
    OS:  Windows Vista Ultimate 64-Bit
    DO:  Microsoft Office Outlook 2007
    Thanks,
    Brian

    Uninstall the Desktop Manager and install "4.7 without Media Manager".
    The search box on top-right of this page is your true friend, and the public Knowledge Base too:

  • Performance problem with CR SDK

    Hi,
    I'm currently on a customer site and I have the following problem:
    The client has a performance problem with a J2EE application which calls a Crystal report via the CR SDK. To reproduce the problem on the local machine (the CR server), I have developed a little JSP page which uses the Crystal SDK to open a Crystal report on the server (this report is based on an XML data source), set the new data source (with a new XML data flow) and refresh the report in PDF format.
    The problem is that the first 2 steps take about 5 seconds each (5 seconds to open the report and 5 seconds to set the data source). The total process takes about 15 seconds to open and refresh the document, which is very long for a little document.
    The document is a 600 KB file; the XML source is an 80 KB file.
    My JSP page is deployed directly on the Tomcat of the Crystal Reports Server (CR XI R2 without Service Pack).
    The Filestore and the MySQL database are on the CR server.
    The server has 4 quad-core processors (16 processors) with 16 GB of RAM and is totally dedicated to Crystal Reports. For the moment there is no activity on the server (it is also used for the test).
    The main JSP calls are the following:
    IEnterpriseSession es = CrystalEnterprise.getSessionMgr().logon("administrator", "", "EDITBI:6400", "secEnterprise");
    IInfoStore infoStore = (IInfoStore) es.getService("", "InfoStore");
    IInfoObjects infoObjects = infoStore.query("SELECT * FROM CI_INFOOBJECTS WHERE SI_NAME='CPA_EV' AND SI_INSTANCE=0 ");
    IInfoObject report = (IInfoObject) infoObjects.get(0);
    IReportAppFactory reportAppFactory = (IReportAppFactory) es.getService("RASReportFactory");
    ReportClientDocument reportClientDoc = reportAppFactory.openDocument(report.getID(), 0, null);
    IXMLDataSet xmlDataSet = new XMLDataSet();
    xmlDataSet.setXMLData(new ByteArray(ligne_data_xml));
    xmlDataSet.setXMLSchema(new ByteArray(ligne_schema_xml));
    DatabaseController db = reportClientDoc.getDatabaseController();
    db.setDataSource(xmlDataSet, "", "");
    ByteArrayInputStream bt = (ByteArrayInputStream) reportClientDoc.getPrintOutputController().export(ReportExportFormat.PDF);
    My question is: is this the right way to do this?
    Thanks in advance for your help.
    Best regards,
    Emmanuel

    Hi,
    My problem is not resolved and I haven't had any news from support.
    If you have any ideas or info, don't forget me.
    Thanks in advance,
    Emmanuel

  • Performance Problem - MS SQL 2K and PreparedStatement

    Hi all
    I am using MS SQL 2k and use PreparedStatement to retrieve data. There is a strange and serious performance problem when the PreparedStatement contains "?" and uses the PreparedStatement.setX() functions to set the values. I have performed the test with the following code.
    for (int i = 0; i < 10; i++) {
        try {
            con = DBConnection.getInstance();
            statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = '" + cardNo + "'");
            // statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = ?");
            // statement.setString(1, cardNo);
            rs = statement.executeQuery();
            if (rs.next()) {
                // row found; nothing more to do for the benchmark
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            try {
                rs.close();
                statement.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
    Iterations   Time (ms)
    1            961
    10           1061
    200          1803
    for (int i = 0; i < 10; i++) {
        try {
            con = DBConnection.getInstance();
            // statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = '" + cardNo + "'");
            statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = ?");
            statement.setString(1, cardNo);
            rs = statement.executeQuery();
            if (rs.next()) {
                // row found
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            try {
                rs.close();
                statement.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
    Iterations   Time (ms)
    1            1171
    10           2754
    100          18817
    200          36443
    The above test was performed with the DataDirect JDBC 3.0 driver. The version that uses ? and setString takes much longer to execute, even though it is supposed to be faster because of precompilation of the statement.
    I have tried different drivers - the one provided by MS, DataDirect and the Sprinta JDBC drivers - but all suffer the same problem to different extents. So I am wondering if MS SQL doesn't support precompiled statements, and whether no matter what JDBC driver I use I will still have the performance problem. If so, many O/R mappers cannot be used, because I believe most of them, if not all, use precompiled statements.
    Best regards
    Edmond

    Edmond,
    Most JDBC drivers for MS SQL (and I think this includes all the drivers you tested) use sp_executesql to execute PreparedStatements. This is a pretty good solution, as the driver doesn't have to keep any information about the PreparedStatement locally; the server takes care of all the precompiling and caching. And if the statement isn't already precompiled, this is also taken care of transparently by SQL Server.
    The problem with this approach is that all names in the query must be fully qualified. This means that the driver has to parse the query you are submitting and make all names fully qualified (by prepending a db name and schema). This is why creating a PreparedStatement takes so long with these drivers (and why it does so every time you create it, even though it's the same PreparedStatement).
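    To make that concrete, here is a rough T-SQL illustration of the shape of what such a driver sends to SQL Server for the parameterised query above; the database/schema prefix, parameter type and value are made up for the example:
    -- Hypothetical sketch of the driver's sp_executesql call; note the
    -- fully qualified table name the driver has to construct.
    EXEC sp_executesql
         N'SELECT * FROM mydb.dbo.cardno WHERE car_no = @p1',
         N'@p1 varchar(20)',
         @p1 = 'CARD001';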
    However, the speed advantage of PreparedStatements only becomes visible if you reuse the statement a lot of times.
    As for why the PreparedStatement with no placeholder is much faster, I think it is because of internal optimisations (maybe the statement is run as a plain statement?).
    As a conclusion, if you can reuse the same PreparedStatement, then the performance hit is not so high; just ignore it. However, if the PreparedStatement is created each time and only used a few times, then you might have a performance issue. In this case I would recommend you try out the jTDS driver ( http://jtds.sourceforge.net ), which uses a completely different approach: temporary stored procedures are created for PreparedStatements. This means that no parsing is done by the driver and PreparedStatement caching is possible (i.e. the next time you prepare the same statement it will take much less time, as the previously submitted procedure will be reused).
    Alin.

  • Oracle 10G performance problems

    Hello,
    we have a lot of performance problems with Oracle 10g. In particular, table scans on DRAW or AEN1 have long response times. It seems that the CBO uses the wrong strategy. The latest merge fix is already installed. Any idea to solve the problem is welcome.
    Best regards
    Juergen Remmert

    We had similar performance issues in our environment, once we upgraded from 9.2.0.2 to 10.2.0.2.
    Oracle: 10.2.0.2
    SAP: 4.7x110
    OS: SOLARIS 9 64bit
    The above-mentioned notes were very helpful. We had to install an Oracle patch as well (found on Marketplace): 6321245.
    We also made the following Oracle parameter changes:
    pga_aggregate_target = 144MB (default = 25MB)
    *.event="10027 trace name context forever, level 1"
    *.event="10028 trace name context forever, level 1"
    *.event="10162 trace name context forever, level 1"
    *.event="10183 trace name context forever, level 1"
    *.event="10191 trace name context forever, level 1"
    *.event="10629 trace name context forever, level 32"
    *.event="38068 trace name context forever, level 100"
    *.event="38043 trace name context forever, level 1"
    *.optimizer_index_caching=50
    *.optimizer_index_cost_adj=20
    *.parallel_execution_message_size=16384
    *._b_tree_bitmap_plans=FALSE
    *._index_join_enabled=FALSE
    *._optim_peek_user_binds=FALSE
    *._optimizer_mjc_enabled=FALSE
    *._sort_elimination_cost_ratio=10
    Remove
    *.optimizer_features_enable='9.2.0'
    HTH

  • Performance Problem After upgrade to oracle 10g

    Hi,
    I have upgraded one of my data warehouse databases from Oracle 9.2.0.8 to Oracle 10.2.0.4 running on Solaris 9.
    After the upgrade, the jobs which were running in the database are taking a very long time.
    The jobs access the views which are used to get the monthly report data from the database.
    What could be the solution, and where should I start in order to get the RCA and resolve this performance issue?
    Please let me know if you require any other information.
    The database is currently running in automatic shared memory management mode, i.e. the SGA_MAX and SGA_TARGET parameters are defined.

    There are a lot of differences between 10g and 9i in this regard, among them:
    - There is a default job that gathers statistics every night, which is not there in 9i. You might have totally different statistics than in 9i due to that job, depending on how, and if at all, you used to collect statistics in 9i.
    - The 10g DBMS_STATS package collects histograms on some columns by default (METHOD_OPT=>'FOR ALL COLUMNS SIZE AUTO' is the 10g default, whereas 9i used 'FOR ALL COLUMNS SIZE 1'), which can have a significant effect on the execution plans (see the sketch after this list).
    - The 10g optimizer has CPU costing enabled by default which can make significant changes to your execution plans due to different costing of table scans and order of predicate evaluation. In addition it uses NOWORKLOAD system statistics if system statistics have not been gathered explicitly
    - 10g checks the min and max values stored for columns in the data dictionary. If your predicates are way off compared to these values then 10g begins to adjust the calculated selectivity of the predicate which can again significantly affect your execution plans
    - 10g introduces the "Cost Based Query Transformation (CBQT)" feature which means that rather than applying heuristic transformation rules transformations are costed and potentially discarded whereas 9i applies transformations unconditionally whenever possible
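    The differences above suggest one diagnostic experiment: re-gather statistics the way 9i would have. A minimal sketch, assuming 10gR2 DBMS_STATS and a placeholder schema name; treat it as a test, not a recommendation:
    -- Hedged sketch: default DBMS_STATS back to 9i-style 'no histograms',
    -- then re-gather one schema's statistics and compare the plans.
    EXEC DBMS_STATS.SET_PARAM('METHOD_OPT', 'FOR ALL COLUMNS SIZE 1');
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname    => 'YOUR_SCHEMA',              -- placeholder schema name
        method_opt => 'FOR ALL COLUMNS SIZE 1',   -- no histograms, as in 9i
        cascade    => TRUE);                      -- include indexes
    END;
    /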
    See also the following note and white paper:
    http://optimizermagic.blogspot.com/2008/02/upgrading-from-oracle-database-9i-to.html
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Integration problem between oracle forms 10g and oracle report 10g

    Hi!
    I get an error message "Unable to connect to the report server "server name"" when an Oracle report is run using RUN_REPORT_OBJECT in an Oracle form under Oracle Forms Developer 10g. Please advise on any settings required in order to run the report. Thank you very much.
    Best Regards
    Pinga

    The report server is running, as the report can be run via URL in the browser. However, the error appears when the report is called from the Oracle form using RUN_REPORT_OBJECT.
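    For reference, a minimal sketch of how the report server name is usually wired up before calling RUN_REPORT_OBJECT in Forms 10g; the report object name and server name below are placeholders, not taken from this thread:
    -- Hypothetical Forms trigger code (e.g. WHEN-BUTTON-PRESSED).
    DECLARE
      v_report_id REPORT_OBJECT;
      v_job_id    VARCHAR2(100);
    BEGIN
      v_report_id := FIND_REPORT_OBJECT('MY_REPORT');  -- placeholder object name
      -- This name must match a report server that is actually running:
      SET_REPORT_OBJECT_PROPERTY(v_report_id, REPORT_SERVER, 'rep_myserver');
      SET_REPORT_OBJECT_PROPERTY(v_report_id, REPORT_DESTYPE, CACHE);
      v_job_id := RUN_REPORT_OBJECT(v_report_id);
    END;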

  • Erratic performance problems in Oracle 8.0.x

    Hi all,
    We are having a performance problem that appeared somewhere between 8.0.6 and 8.1.5 when using embedded SQL and the Pro*C compiler under Linux and Solaris.
    The moment we use client libraries > 8.0.x, things seem to grind to a halt. We are currently using 8.0.6 client against 8, 9 and 10g databases. Using 8.1.5 or 10g clients against Oracle 9 or 10 databases triggers the problem.
    The problem also isn't tied to any specific query. On the latest run we tried, the timings for a specific problem query are as follows: Oracle 10 server with the Oracle 8.0.6 client - 40 seconds; the same Oracle 10 server and database with the Oracle 10 client - 14 hours.
    Explain plan doesn't show anything funny with the query. On occasion, the query does get through quickly. Subsequent runs are then also quick.
    The query also runs fine in SQLPlus.
    What we have noticed is that the server process flatlines it at 100% CPU usage for the entire duration. The client, on the other hand is just sleeping, waiting for data from the server. Stopping the client in the debugger shows that the client is waiting in the sqlcxt() call when opening the cursor, not actually fetching data.
    We are at our wits end as to where look next, and we can't stay on 8.0.6 client libraries for ever as this is starting to cause us other hassles now.
    Did something significant perhaps change between 8.0.x and 8.1.x that we need to cater for in our apps?
    Any help/ideas would be greatly appreciated.
    Regards,
    Gerhard

    Check Metalink:
    "Client / Server / Interoperability Support Between Different Oracle Versions", Doc ID 207303.1.
    It looks like client version 8.1.5 has some problems; it was never designed to support Oracle versions higher than 8.1.7.
    On the other hand, 8.0.6 was supported up to 9.2.
    I would stay with 8.0.6 if I had to use an Oracle 8 client. Client version 8.1.7 seems much better.

  • Performance problems loading an XML file into oracle database

    Hello ODI gurus,
    I am trying to load an XML file into the database after doing simple business validations, but the interface takes hours to complete.
    1. The XML files are large, > 200 MB. We have an XSD file for the schema definition instead of a DTD.
    2. We used the external database feature for loading these files into the database.
    The following configuration was used in the XML Data Server:
    jdbc:snps:xml?f=D:\CustomerMasterData1\CustomerMasterInitialLoad1.xml&d=D:\CustomerMasterData1\CustomerMasterInitialLoad1.xsd&re=initialLoad&s=CM&db_props=oracle&ro=true
    3. We then reverse-engineered the XML files and created models using ODI Designer.
    4. The same was done for the target, i.e. an Oracle database table.
    5. Next we created a simple interface with a one-to-one mapping from the XSD schema to the Oracle database table and executed the interface. This execution takes more than one hour to complete.
    6. We are running the ODI client on Windows XP Professional SP2.
    7. The Oracle database server (Oracle 10g 10.2.0.3) for the target schema, as well as the ODI master and work repositories, are on the same machine.
    8. I tried changing the following properties, but they make no visible difference:
    use_prepared_statements=Y
    use_batch_update=Y
    batch_update_size=510
    commit_periodically=Y
    num_inserts_before_commit=30000
    I have another problem: when I set batch_update_size to a value greater than 510, I get the following error:
    java.sql.SQLException: class org.xml.sax.SAXException
    class java.lang.ArrayIndexOutOfBoundsException said -32413
    at com.sunopsis.jdbc.driver.xml.v.a(v.java)
    The main concern is why the interface takes so long to execute.
    Please send suggestions to resolve the problem.
    Thanks in advance,
    Best Regards,
    Nikunj

    Approximately how many rows are you trying to insert?
    One technique which I found improved performance for this scenario was to extract from the XML to a flat file, then use SQL*Loader or external tables to load the data into Oracle; a sketch follows.
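    A minimal sketch of the external-table half of that approach, assuming a comma-delimited flat file customers.csv extracted from the XML; the directory, file, column and target-table names are made up for the example:
    -- Hypothetical external table over the extracted flat file.
    CREATE OR REPLACE DIRECTORY xml_stage_dir AS 'D:\CustomerMasterData1';

    CREATE TABLE customers_ext (
      customer_id   NUMBER,
      customer_name VARCHAR2(200)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY xml_stage_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('customers.csv')
    );

    -- Then load with a plain set-based insert:
    INSERT /*+ APPEND */ INTO customer_master
    SELECT customer_id, customer_name FROM customers_ext;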

  • Problem of Data Import/Export after migration of Oracle DB 9i to 10g

    We have encountered the following problem after migrating Oracle DB 9i to 10g R1 and ESRI ArcSDE 8.3 to 9.1.
    On our development server, a view was created by joining one feature class (point feature), two attribute tables and one F table. We have to perform a process to export all the features in that particular view from the development server and then import them into the production server. In total, there should be about 60,000 points.
    From our past experience (using Oracle DB 9i and ESRI ArcSDE 8.3), we spent about 15-20 minutes to complete the export-import procedure. However, after the system migration, the import-export procedure is extremely slow: about 2 hours for ONLY EXPORTING 5MB of data.
    We would like to seek advice in solving the above problems. THANKS!

    Try deleting the old stats, then gather new stats on the schema and try the export again; a sketch follows.
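    A minimal sketch of that sequence, using 10g DBMS_STATS and a placeholder schema name:
    -- Hypothetical: drop stale statistics, then gather fresh ones.
    BEGIN
      DBMS_STATS.DELETE_SCHEMA_STATS(ownname => 'YOUR_SCHEMA');  -- placeholder
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname => 'YOUR_SCHEMA',
        cascade => TRUE);  -- include indexes
    END;
    /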
