Sub VI to enter database data taking more time when calling from main VI

Hi All,
I created a sub VI to enter data into a new database after reading from another database. When I executed that sub VI alone, it took only 2 minutes to enter 20,000 records. But when I call that sub VI from my main VI, it takes 35 minutes to enter the same 20,000 records.
My main VI has another while loop running in parallel that performs some other operations. How should I handle memory in this case? I mean, when I call my sub VI from the main VI, I would like to give the sub VI the full processor and memory resources. Is that possible?
Please help. Thanks in advance.
Sooraj

Hi Oliver,
In your example the numeric data is passed from the sub VI to the main VI only when the sub VI terminates; that is the expected behaviour when you fetch the data from the connector-pane output of a sub VI.
You can use the queue functions to exchange data between the sub VI and the main VI; take a look at the example Message Logging with Named Queue.vi.
An alternative method is to pass the sub VI a reference to an indicator (or control) and update its value on the fly.
See example.
Alberto
Attachments:
passing_data_from_sub-vi.llb 35 KB

Similar Messages

  • Query response time takes more time when calling from package

    SELECT
    /* UTILITIES_PKG.GET_COUNTRY_CODE(E.EMP_ID,E.EMP_NO) COUNTRY_ID */
    (SELECT DISTINCT IE.COUNTRY_ID
    FROM DOCUMENT IE
    WHERE IE.EMP_ID =E.EMP_ID
    AND IE.EMP_NO = E.EMP_NO
    AND IE.STATUS = 'OPEN' ) COUNTRY_ID
    FROM EMPLOYEE E
    CREATE OR REPLACE PACKAGE BODY UTILITIES_PKG AS
    FUNCTION GET_COUNTRY_CODE(P_EMP_ID IN VARCHAR2, P_EMP_NO IN VARCHAR2)
    RETURN VARCHAR2 IS
    L_COUNTRY_ID VARCHAR2(25) := '';
    BEGIN
    SELECT DISTINCT IE.COUNTRY_ID
    INTO L_COUNTRY_ID
    FROM DOCUMENT IE
    WHERE IE.EMP_ID = P_EMP_ID
    AND IE.EMP_NO = P_EMP_NO
    AND IE.STATUS = 'OPEN';
    RETURN L_COUNTRY_ID;
    EXCEPTION
    WHEN OTHERS THEN
    RETURN 'CONT';
    END;
    END UTILITIES_PKG;
    When I run the above query with the scalar subquery, it comes back in 1.2 seconds. But when I comment out the subquery and call the packaged function instead, it takes 9 seconds. The query returns more than 2,000 records. I am not able to find the reason why it takes more time when calling from the package.

    Most likely you are getting a different plan when you run it through PL/SQL. Put a comment in your statement:
    SELECT /* your comment here */ ... then find both versions in V$SQL and get their SQL IDs. You can then use DBMS_XPLAN.DISPLAY_CURSOR to see what is actually happening.
    http://www.psoug.org/reference/dbms_xplan.html
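    For example, the lookup could go roughly like this (a sketch; the comment text and the substituted SQL_ID are placeholders to adapt):
    SELECT sql_id, child_number, substr(sql_text, 1, 80) AS sql_text
    FROM   v$sql
    WHERE  sql_text LIKE '%your comment here%'
    AND    sql_text NOT LIKE '%v$sql%';
    -- then, for each SQL_ID found:
    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));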

  • "3.x Analyzer Server" EVENT taking more Time when Executing Query

    Hi All,
    When I execute the query through RSRT it takes more time. I checked the statistics
    and observed that the "3.x Analyzer Server" event is taking most of the time.
    What do I have to do, and how can I reduce this 3.x Analyzer Server event time?
    Please Suggest me.
    Thanks,
    Kiran Manyam

    Hello,
    Check these on query performance:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Query Performance
    Reg,
    Dhanya

  • Recover Database is taking more time for first archived redo log file

    Hai,
    Environment Used :
    Hardware : IBM p570 machine with P6 processor Lpar of .5 CPU and 2.5 GB Ram
    OS : AIX 5.3 ML 07
    Cluster: HACMP 5.4.1.2
    Oracle Version: 9.2.0.4 RAC
    SAN : DS8100 from IBM
    I used the flash copy option to copy the database from production to the test machine, then tried to recover the database to a consistent state using the command "recover automatic database until cancel". The system takes a long time, and from the alert log it was found that, for the first archived log only, it reads all the datafiles, taking about 3 seconds per datafile. Since I have more than 500 datafiles, it takes nearly 25 minutes to apply the first archived redo log file. All other log files are applied immediately without any delay. Any suggestion to improve the speed will be highly appreciated.
    Regards
    Sridhar

    After changing the LPAR settings to 2 CPUs and 5 GB RAM, the problem was solved.

  • One database user taking more time to connect

    Hi All,
    I have a question,
    I have a database of about 500 GB with around 50 schemas in it. But there is one user that takes longer than normal to connect to the database.
    What can be the reason for this, and what should be my approach to find where the problem could possibly be?
    Regards,
    Sphinx

    Yes, check for an existing logon trigger; if there is none, put your own logon trigger on a normal user and on the problem user. Do a 10046 trace and examine the trace files; everything the user does will be in them.
    CREATE OR REPLACE TRIGGER SYS.set_trace
    AFTER LOGON ON DATABASE
    WHEN (USER IN ('GOODUSER', 'BADUSER'))
    DECLARE
    lcommand varchar(200);
    BEGIN
    EXECUTE IMMEDIATE 'alter session set statistics_level=ALL';
    EXECUTE IMMEDIATE 'alter session set max_dump_file_size=UNLIMITED';
    EXECUTE IMMEDIATE 'alter session set events ''10046 trace name context forever, level 12''';
    END set_trace;
    /
    tkprof the trace files and see what the difference is between the good and the bad user. My money's on a trigger though.
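    The raw 10046 trace files land in the directory pointed to by user_dump_dest by default; as a simple check (a sketch, valid on most versions), the location can be read with:
    SELECT value
    FROM   v$parameter
    WHERE  name = 'user_dump_dest';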

  • Database shutdown taking more time, is listener process a problem??

    Dear all,
    It is general practice to stop the listener before shutting down the database for a cold backup, but is it true that if you don't stop the listener before issuing the shutdown immediate command, the shutdown takes longer than the normally expected time?
    As per my understanding, the listener process is used only for establishing connections, and when we give the shutdown command the database automatically rejects any new connections. Your valuable comments are required.

    No version, as usual, and the answer is version specific.
    Why is it so difficult to include those 4 digits?
    > It is general practice to stop the listener before shutting down the database for a cold backup, but is it true that if you don't stop the listener before issuing the shutdown immediate command, the shutdown takes longer than the normally expected time?
    No. Must be a fairy tale without proof.
    You might need to set job_queue_processes to 0 and aq_tm_processes to 0, but that has nothing to do with the listener.
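    If those parameters are in play, they could be set just before the shutdown along these lines (a sketch; both are dynamic parameters, and the SHUTDOWN command is issued from SQL*Plus):
    ALTER SYSTEM SET job_queue_processes = 0 SCOPE = MEMORY;
    ALTER SYSTEM SET aq_tm_processes = 0 SCOPE = MEMORY;
    SHUTDOWN IMMEDIATE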
    > As per my understanding, the listener process is used only for establishing connections, and when we give the shutdown command the database automatically rejects any new connections. Your valuable comments are required.
    --
    Sybrand Bakker
    Senior Oracle DBA

  • Query is taking more time to execute in PROD

    Hi All,
    Can anyone tell me why this query is taking more time? When I run it for a single trx_number record it works fine, but when I try to run it for all records it does not fetch any records and just keeps on running.
    SELECT DISTINCT OOH.HEADER_ID
    ,OOH.ORG_ID
    ,ct.CUSTOMER_TRX_ID
    ,ool.ship_from_org_id
    ,ct.trx_number IDP_SHIPMENT_ID
    ,ctt.type STATUS_CODE
    ,SYSDATE STATUS_DATE
    ,ooh.attribute16 IDP_ORDER_NBR --Change based on testing on 21-JUL-2010 in UAT
    ,lpad(rac_bill.account_number,6,0) IDP_BILL_TO_CUSTOMER_NBR
    ,rac_bill.orig_system_reference
    ,rac_ship_party.party_name SHIP_TO_NAME
    ,raa_ship_loc.address1 SHIP_TO_ADDR1
    ,raa_ship_loc.address2 SHIP_TO_ADDR2
    ,raa_ship_loc.address3 SHIP_TO_ADDR3
    ,raa_ship_loc.address4 SHIP_TO_ADDR4
    ,raa_ship_loc.city SHIP_TO_CITY
    ,NVL(raa_ship_loc.state,raa_ship_loc.province) SHIP_TO_STATE
    ,raa_ship_loc.country SHIP_TO_COUNTRY_NAME
    ,raa_ship_loc.postal_code SHIP_TO_ZIP
    ,ooh.CUST_PO_NUMBER CUSTOMER_ORDER_NBR
    ,ooh.creation_date CUSTOMER_ORDER_DATE
    ,ool.actual_shipment_date DATE_SHIPPED
    ,DECODE(mp.organization_code,'CHP', 'CHESAPEAKE'
    ,'CSB', 'CHESAPEAKE'
    ,'DEP', 'CHESAPEAKE'
    ,'CHESAPEAKE') SHIPPED_FROM_LOCATION --'MEMPHIS' --'HOUSTON'
    ,ooh.freight_carrier_code FREIGHT_CARRIER
    ,NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('FREIGHT',ct.customer_trx_id,ct.org_id),0)
    + NVL(XX_FSG_NA_FASTRAQ_IFACE.get_line_fr_amt ('FREIGHT',ct.customer_trx_id,ct.org_id),0) FREIGHT_CHARGE
    ,ooh.freight_terms_code FREIGHT_TERMS
    ,'' IDP_BILL_OF_LADING
    ,(SELECT WAYBILL
    FROM WSH_DELIVERY_DETAILS_OE_V
    WHERE -1=-1
    AND SOURCE_HEADER_ID = ooh.header_id
    AND SOURCE_LINE_ID = ool.line_id
    AND ROWNUM =1) WAYBILL_CARRIER
    ,'' CONTAINERS
    ,ct.trx_number INVOICE_NBR
    ,ct.trx_date INVOICE_DATE
    ,NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('LINE',ct.customer_trx_id,ct.org_id),0) +
    NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('TAX',ct.customer_trx_id,ct.org_id),0) +
    NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('FREIGHT',ct.customer_trx_id,ct.org_id),0) INVOICE_AMOUNT
    ,NULL IDP_TAX_IDENTIFICATION_NBR
    ,NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('TAX',ct.customer_trx_id,ct.org_id),0) TAX_AMOUNT_1
    ,NULL TAX_DESC_1
    ,NULL TAX_AMOUNT_2
    ,NULL TAX_DESC_2
    ,rt.name PAYMENT_TERMS
    ,NULL RELATED_INVOICE_NBR
    ,'Y' INVOICE_PRINT_FLAG
    FROM ra_customer_trx_all ct
    ,ra_cust_trx_types_all ctt
    ,hz_cust_accounts rac_ship
    ,hz_cust_accounts rac_bill
    ,hz_parties rac_ship_party
    ,hz_locations raa_ship_loc
    ,hz_party_sites raa_ship_ps
    ,hz_cust_acct_sites_all raa_ship
    ,hz_cust_site_uses_all su_ship
    ,ra_customer_trx_lines_all rctl
    ,oe_order_lines_all ool
    ,oe_order_headers_all ooh
    ,mtl_parameters mp
    ,ra_terms rt
    ,OE_ORDER_SOURCES oos
    ,XLA_AR_INV_AEL_SL_V XLA_AEL_SL_V
    WHERE ct.cust_trx_type_id = ctt.cust_trx_type_id
    AND ctt.TYPE <> 'BR'
    AND ct.org_id = ctt.org_id
    AND ct.ship_to_customer_id = rac_ship.cust_account_id
    AND ct.bill_to_customer_id = rac_bill.cust_account_id
    AND rac_ship.party_id = rac_ship_party.party_id
    AND su_ship.cust_acct_site_id = raa_ship.cust_acct_site_id
    AND raa_ship.party_site_id = raa_ship_ps.party_site_id
    AND raa_ship_loc.location_id = raa_ship_ps.location_id
    AND ct.ship_to_site_use_id = su_ship.site_use_id
    AND su_ship.org_id = ct.org_id
    AND raa_ship.org_id = ct.org_id
    AND ct.customer_trx_id = rctl.customer_trx_id
    AND ct.org_id = rctl.org_id
    AND rctl.interface_line_attribute6 = to_char(ool.line_id)
    AND rctl.org_id = ool.org_id
    AND ool.header_id = ooh.header_id
    AND ool.org_id = ooh.org_id
    AND mp.organization_id = ool.ship_from_org_id
    AND ooh.payment_term_id = rt.term_id
    AND xla_ael_sl_v.last_update_date >= NVL(p_last_update_date,xla_ael_sl_v.last_update_date)
    AND ooh.order_source_id = oos.order_source_id --Change based on testing on 19-May-2010
    AND oos.name = 'FASTRAQ' --Change based on testing on 19-May-2010
    AND ooh.org_id = g_org_id --Change based on testing on 19-May-2010
    AND ool.flow_status_code = 'CLOSED'
    AND xla_ael_sl_v.trx_hdr_id = ct.customer_trx_id
    AND trx_hdr_table = 'CT'
    AND xla_ael_sl_v.gl_transfer_status = 'Y'
    AND xla_ael_sl_v.accounted_dr IS NOT NULL
    AND xla_ael_sl_v.org_id = ct.org_id;
    -- AND ct.trx_number = '2000080';

    Hello Friend,
    Your query will definitely take more time, or even fail, in PROD because of the way it is written. Here are a few observations; maybe they can help:
    1. XLA_AR_INV_AEL_SL_V XLA_AEL_SL_V: avoid using a view inside such a long query, because a view is just a window onto the records,
    and when it is joined to other tables, all the tables used to create the view also become part of the join condition.
    First of all, please check whether you really need this view. I guess you are using it to check whether the records have been created as journal entries or not?
    Please check the possibility of finding this through other AR tables.
    2. Remove the _ALL tables and instead use the corresponding org-specific views (if you are on 11i) or the synonyms (in R12).
    For example: for ra_cust_trx_types_all use ra_cust_trx_types.
    This will ensure that the query executes only for those ORG_IDs which are assigned to that responsibility.
    3. Check with the DBA whether GATHER SCHEMA STATS has been run at least for the ONT and RA tables.
    You can also check this yourself (a sample check is sketched after this list) using
    SELECT LAST_ANALYZED FROM ALL_TABLES WHERE TABLE_NAME = 'RA_CUSTOMER_TRX_ALL';
    If the tables are not analyzed, the CBO will not be able to tune your query.
    4. Try to remove the DISTINCT keyword. This is the MAJOR reason for this problem.
    5. If it is a report, try to split the logic into separate queries (using a procedure), populate the data into a custom table, and use that custom table for generating the report.
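    A quick check of when the main tables were last analyzed could look like this (a sketch; on EBS the statistics themselves are normally gathered through the Gather Schema Statistics concurrent request):
    SELECT owner, table_name, last_analyzed, num_rows
    FROM   all_tables
    WHERE  table_name IN ('RA_CUSTOMER_TRX_ALL', 'RA_CUSTOMER_TRX_LINES_ALL', 'OE_ORDER_LINES_ALL')
    ORDER BY owner, table_name;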
    Thanks,
    Neeraj Shrivastava
    [email protected]
    Edited by: user9352949 on Oct 1, 2010 8:02 PM
    Edited by: user9352949 on Oct 1, 2010 8:03 PM

  • Query in timesten taking more time than query in oracle database

    Hi,
    Can anyone please explain why a query in TimesTen is taking more time
    than the same query in the Oracle database?
    Below I describe my settings and what I have done, step by step.
    1. This is the table I created in the Oracle database
    (Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
    CREATE TABLE student (
    id NUMBER(9) PRIMARY KEY,
    first_name VARCHAR2(10),
    last_name VARCHAR2(10)
    );
    2.THIS IS THE ANONYMOUS BLOCK I USE TO
    POPULATE THE STUDENT TABLE(TOTAL 2599999 ROWS)...
    declare
    firstname varchar2(12);
    lastname varchar2(12);
    catt number(9);
    begin
    for cntr in 1..2599999 loop
    firstname:=(cntr+8)||'f';
    lastname:=(cntr+2)||'l';
    if cntr like '%9999' then
    dbms_output.put_line(cntr);
    end if;
    insert into student values(cntr,firstname, lastname);
    end loop;
    end;
    3. MY DSN IS SET THE FOLLOWING WAY:
    DATA STORE PATH- G:\dipesh3repo\db
    LOG DIRECTORY- G:\dipesh3repo\log
    PERM DATA SIZE-1000
    TEMP DATA SIZE-1000
    MY TIMESTEN VERSION-
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    THEN I CONNECT TO THE TIMESTEN DATABASE
    C:\Documents and Settings\dipesh> ttisql
    command>connect "dsn=dipesh3;oraclepwd=tiger";
    4. THEN I START THE AGENT
    call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> CALL ttCacheStart();
    5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
    6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
    I SET THE TIMING..
    command>TIMING 1;
    consider this query now..
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
    another query-
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
    7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
    Can anyone please explain why the query in TimesTen takes more time
    than the query in the Oracle database?
    Message was edited by: Dipesh Majumdar
    user542575
    Message was edited by:
    user542575

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from
    MV_US_DATAMART (
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
    Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE)
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
    The query (taken from Mondrian) I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
    In each run I uncomment one more column and rerun. I have improved the TimesTen results since my first post by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
    No Columns     ORACLE     TimesTen     
    1     1.05     0.94     
    2     1.07     1.47     
    3     2.04     1.8     
    4     2.06     2.08     
    5     2.09     2.4     
    6     3.01     2.67     
    7     4.02     3.06     
    8     4.03     3.37     
    9     4.04     3.62     
    10     4.06     4.02     
    11     4.08     4.31     
    12     4.09     4.61     
    13     5.01     4.76     
    14     5.02     5.06     
    15     5.04     5.25     
    16     5.05     5.48     
    17     5.08     5.84     
    18     6     6.21     
    19     6.02     6.34     
    20     6.04     6.75

  • CDP Performance Issue-- Taking more time fetch data

    Hi,
    I'm working on Stellent 7.5.1.
    For one of the portlets in the portal it is taking more time to fetch data. Can someone please help me solve this issue so that performance can be improved? This is my code for fetching data from the server:
    public void getManager(final HashMap binderMap)
        throws VistaInvalidInputException, VistaDataNotFoundException,
               DataException, ServiceException, VistaTemplateException
    {
        String collectionID =
            getStringLocal(VistaFolderConstants.FOLDER_ID_KEY);
        long firstStartTime = System.currentTimeMillis();
        HashMap resultSetMap = null;
        String isNonRecursive = getStringLocal(VistaFolderConstants
            .ISNONRECURSIVE_KEY);
        if (isNonRecursive != null
            && isNonRecursive.equalsIgnoreCase(
                VistaContentFetchHelperConstants.STRING_TRUE))
        {
            VistaLibraryContentFetchManager libraryContentFetchManager =
                new VistaLibraryContentFetchManager(binderMap);
            SystemUtils.trace(
                VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
                "The input Parameters for Content Fetch = " + binderMap);
            resultSetMap = libraryContentFetchManager
                .getFolderContentItems(m_workspace);
            // used to add the resultset to the binder.
            addResultSetToBinder(resultSetMap, true);
        }
        else
        {
            long startTime = System.currentTimeMillis();
            // isStandard is used to decide whether the call is for Standard
            // or Extended.
            SystemUtils.trace(
                VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
                "The input Parameters for Content Fetch = " + binderMap);
            String isStandard = getTemplateInformation(binderMap);
            long endTimeTemplate = System.currentTimeMillis();
            binderMap.put(VistaFolderConstants.IS_STANDARD, isStandard);
            long endTimebinderMap = System.currentTimeMillis();
            VistaContentFetchManager contentFetchManager
                = new VistaContentFetchManager(binderMap);
            long endTimeFetchManager = System.currentTimeMillis();
            resultSetMap = contentFetchManager
                .getAllFolderContentItems(m_workspace);
            long endTimeresultSetMap = System.currentTimeMillis();
            // used to add the resultset and the total no of content items
            // to the binder.
            addResultSetToBinder(resultSetMap, false);
            long endTime = System.currentTimeMillis();
            if (perfLogEnable.equalsIgnoreCase("true"))
            {
                Log.info("Time taken to execute " +
                         "getTemplateInformation=" +
                         (endTimeTemplate - startTime) +
                         "ms binderMap=" +
                         (endTimebinderMap - startTime) +
                         "ms contentFetchManager=" +
                         (endTimeFetchManager - startTime) +
                         "ms resultSetMap=" +
                         (endTimeresultSetMap - startTime) +
                         "ms getManager:getAllFolderContentItems = " +
                         (endTime - startTime) +
                         "ms overallTime=" +
                         (endTime - firstStartTime) +
                         "ms folderID =" +
                         collectionID);
            }
        }
    }
    Edited by: 838623 on Feb 22, 2011 1:43 AM

    Hi.
    The SELECT statement accessing the MSEG table is often slow.
    To improve the performance on MSEG:
    1. Check for the proper notes in the Service Marketplace if you are working with the CIN version.
    2. Index the MSEG table.
    3. Check and limit the columns in the SELECT statement.
    Possible Way.
    SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
    EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
    FROM MSEG
    INTO CORRESPONDING FIELDS OF TABLE ITAB
    WHERE WERKS EQ P_WERKS AND
    MBLNR IN S_MBLNR AND
    BWART EQ '105' .
    Delete itab where itab EQ '5002361303'
    Delete itab where itab EQ  '5003501080' 
    Delete itab where itab EQ  '5002996300'
    Delete itab where itab EQ '5002996407'
    Delete itab where itab EQ '5003587026'
    Delete itab where itab EQ  '5003587026'
    Delete itab where itab EQ  '5003493186'
    Delete itab where itab EQ  '5002720583'
    Delete itab where itab EQ '5002928122'
    Delete itab where itab EQ '5002628263'.
    Select
    Regards
    Bala.M
    Edited by: Bala Malvatu on Feb 7, 2008 9:18 PM

  • Taking more time for retreving data from nested table

    Hi
    We have two databases, db1 and db2; in database db2 we have a number of nested tables.
    The problem is that we have a database link between the two databases, and whenever we fire any query in db1 it internally goes and accesses the nested tables in db2.
    Fetching records takes much more time even though there are few records in the tables. What could be the reason?
    Please help; we are facing this problem daily.

    Please avoid duplicate thread :
    quaries taking more time
    Nicolas.
    +< mod. action : thread locked>+

  • Deletion of Data Mart Request taking more time

    Hi all,
    One of the Data Mart processes (ODS to InfoCube) has failed.
    I'm not able to delete the request in the InfoCube.
    When the delete option is executed it takes a long time; I am monitoring the job in SM37 and it is not getting completed.
    The details seen in the Job Log is as shown below,
    Job started                                                                  
    Step 001 started (program RSDELPART1, variant &0000000006553, user ID SSRIN90)
    Delete is running: Data target BUS_CONT, from 376,447 to 376,447             
    Please let me know your suggestions.
    Thanks,
    Sowrabh

    Hi,
    How many records are there in that request? Deleting a request usually takes time, and the deletion time will vary depending on the data volume in the request.
    Give it some more time and see if it finishes. To actually know whether the job is doing anything, go to SM50/SM51 and look at what is happening.
    Cheers,
    Kedar

  • Taking more time!

    hi all,
    I am using Forms [32 Bit] Version 6.0.8.24.1 (Production).
    My ultimate task is to read data from Excel and insert it into a table. If any records are duplicated, the process should stop and tell the user which records are repeating.
    I have used the OLE2 package to read the data from Excel.
    I have three PL/SQL tables for further processing:
    the first table holds whatever data comes from Excel, the second table is used to check whether the current record already exists among the previously inserted records, and the third table stores the duplicated records.
    I insert each record as-is into table 1, check in table 2 whether the record already exists, and if the current record matches a previous record I store it in table 3.
    At the end, if the error table (the third table) has a row count of 0 there is no duplication, so I write the first table's data into the form and save; otherwise I display the table 3 (error table) data in the form and tell the user that those records are duplicates.
    But it is taking a lot of time.
    Can anybody tell me a logic that reduces the time consumption (better performance)?
    Thanks..

    hi,
    I don't think it's a SQL issue, because if I enter the data manually (not through the Excel upload) it works just fine. The form is a database block, so I have not written any SQL.
    What I am doing is uploading data from Excel to a PL/SQL table and checking for duplication; if any duplication is found I display the error PL/SQL table, otherwise I display the first table as I mentioned earlier. The second table is used to check for the existence of the current record (since I am reading from Excel record by record).
    Thanks.

  • Expdp taking more time to start export

    Hi Gurus
    my database oracle 10.2.0.3 in AIX
    I have started expdp to selectively export around 130 tables, but it is taking a long time to start; almost 20 minutes have passed since it started.
    $ expdp system/*** parfile=parfile.par
    Export: Release 10.2.0.3.0 - 64bit Production on Friday, 22 June, 2012 17:21:26
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    Starting "SYSTEM"."SYS_EXPORT_TABLE_02": system/******** parfile=parfile.par
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Please share any tips and let me know the reason.
    Regards
    Rabi

    user623166 wrote:
    Hi Gurus
    my database oracle 10.2.0.3 in AIX
    I have started expdp to selectively export around 130 tables, but it is taking a long time to start; almost 20 minutes have passed since it started.
    $ expdp system/*** parfile=parfile.par
    Export: Release 10.2.0.3.0 - 64bit Production on Friday, 22 June, 2012 17:21:26
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    Starting "SYSTEM"."SYS_EXPORT_TABLE_02": system/******** parfile=parfile.par
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    I am not sure that this is enough information for us to suggest something. For starters, can you try selecting fewer tables and see what is going on?
    Aman....
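    To see whether the estimate phase is actually making progress, the job can also be checked from another session along these lines (a sketch; both views exist in 10.2):
    SELECT owner_name, job_name, operation, job_mode, state
    FROM   dba_datapump_jobs;
    SELECT sid, opname, sofar, totalwork, time_remaining
    FROM   v$session_longops
    WHERE  opname LIKE '%EXPORT%'
    AND    sofar <> totalwork;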

  • Sql query is taking more time

    Hi all,
    DB: Oracle 9i
    I am facing a problem with the query below.
    The problem is that the query now takes 45 minutes, much longer than it did earlier (10 seconds).
    Can anyone please suggest something?
    SQL> SELECT MAX (tdar1.ID) ID, tdar1.request_id, tdar1.lolm_transaction_id,
                tdar1.transaction_version
         FROM transaction_data_arc tdar1
         WHERE tdar1.transaction_name = 'O96U '
         AND tdar1.transaction_type = 'REQUEST'
         AND tdar1.message_type_code = 'PCN'
         AND NOT EXISTS (
             SELECT NULL
             FROM transaction_data_arc tdar2
             WHERE tdar2.request_id = tdar1.request_id
             AND tdar2.lolm_transaction_id != tdar1.lolm_transaction_id
             AND tdar2.ID > tdar1.ID)
         GROUP BY tdar1.request_id,
                  tdar1.lolm_transaction_id,
                  tdar1.transaction_version;
    Execution Plan
    0      SELECT STATEMENT Optimizer=CHOOSE (Cost=17 Card=1 Bytes=42)
    1    0   SORT (GROUP BY) (Cost=12 Card=1 Bytes=42)
    2    1     FILTER
    3    2       TABLE ACCESS (BY INDEX ROWID) OF 'TRANSACTION_DATA_ARC' (Cost=1 Card=1 Bytes=42)
    4    3         INDEX (RANGE SCAN) OF 'NK_TDAR_2' (NON-UNIQUE) (Cost=3 Card=1)
    5    2       TABLE ACCESS (BY INDEX ROWID) OF 'TRANSACTION_DATA_ARC' (Cost=5 Card=918 Bytes=20196)
    6    5         INDEX (RANGE SCAN) OF 'NK_TDAR_7' (NON-UNIQUE) (Cost=8 Card=4760)

    > The problem is that the query now takes 45 minutes, much longer than it did earlier (10 seconds).
    Then something must have changed (data growth / stale statistics / ...?).
    You should post as many details as possible, as described in the FAQ; see:
    *3. How to improve the performance of my query? / My query is running slow*.
    When your query takes too long...
    How to post a SQL statement tuning request
    SQL and PL/SQL FAQ
    Also, given your database version, using NOT IN instead of NOT EXISTS might make a difference (but they're not the same).
    See: SQL and PL/SQL FAQ
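    As a rough sketch of the NOT IN variant mentioned above (not a drop-in replacement: the two constructs differ when the subquery can return NULLs, so the results should be compared before relying on it):
    SELECT MAX(tdar1.id) id, tdar1.request_id, tdar1.lolm_transaction_id,
           tdar1.transaction_version
    FROM   transaction_data_arc tdar1
    WHERE  tdar1.transaction_name = 'O96U '
    AND    tdar1.transaction_type = 'REQUEST'
    AND    tdar1.message_type_code = 'PCN'
    AND    tdar1.id NOT IN (SELECT tdar2.id
                            FROM   transaction_data_arc tdar2
                            WHERE  tdar2.request_id = tdar1.request_id
                            AND    tdar2.lolm_transaction_id != tdar1.lolm_transaction_id
                            AND    tdar2.id > tdar1.id)
    GROUP BY tdar1.request_id, tdar1.lolm_transaction_id, tdar1.transaction_version;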

  • Taking More Time while inserting into the table (With foriegn key)

    Hi All,
    I am facing a problem while inserting values into the master table.
    The problem,
    Table A -- User Master Table (Reg No, Name, etc)
    Table B -- Transaction Table (Foreign key reference with Table A).
    While inserting the data into Table B, I also need to insert the reg no into Table B, which is mandatory. I followed the logic mentioned in the SRDemo.
    While inserting, we need to query Table A first to have the values in TableABean.java.
    final TableA tableA= (TableA )uow.executeQuery("findUser",TableA .class, regNo);
    Then, we need to create the instance for TableB
    TableB tableB= (TableB)uow.newInstance(TableB.class);
    tableB.setID(bean.getID);
    tableA.addTableB(tableB); -- this is to insert the regNo of TableA into TableB. This line executes the query "select * from TableB where RegNo = <tableA.getRegNo>".
    This query takes too much time if there are many rows in TableB for that particular registration no, and because of this the insert into TableB takes more time.
    For example: TableA regNo 101, which has few entries in TableB, inserts a record in less than 1 second;
    regNo 102, which has more entries in TableB, takes more than 2 seconds to insert a record.
    The delay differs between users when they enter transactions in TableB.
    I need to avoid this, since in future it will take even more time, from 2 seconds to 10 seconds, as the volume of data increases.
    Please help me to resolve this issue...I am facing it now in production.
    Thanks & Regards
    VB

    Hello,
    Looks like you have a 1:M relationship from TableA to TableB, with a 1:1 back pointer from TableB to TableA. If triggering the 1:M relationship is causing you delays that you want to avoid there might be two quick ways I can see:
    1) Don't map it. Leave the TableA->TableB 1:M unmapped, and instead just query for the relationship when you need it. This means you do not need to call tableA.addTableB(tableB), and instead only need to call tableB.setTableA(tableA), so that the TableB->TableA relation gets set. It might not be the best option, but it depends on your application's usage. It does allow you to potentially page the TableB results or add other query performance options when you do need the data, though.
    2) You are currently using lazy loading for the TableA->TableB relationship - if it is untriggered, don't bother calling tableA.addTableB(tableB); instead only call tableB.setTableA(tableA). This of course requires using the TopLink API to a) verify that the collection is an IndirectCollection type, and b) verify that it hasn't been triggered. If it has been triggered, you will still need to call tableA.addTableB(tableB), but it won't result in a query. Check out the oracle.toplink.indirection.IndirectContainer class and its isInstantiated() method. This can cause problems in highly concurrent environments, though, as other threads may have triggered the indirection before you commit your transaction, so that the A->B collection is not up to date - this might require refreshing TableA if so.
    Change tracking would probably be the best option to use here, and is described in the EclipseLink wiki:
    http://wiki.eclipse.org/Introduction_to_EclipseLink_Transactions_(ELUG)#Attribute_Change_Tracking_Policy
    Best Regards,
    Chris
