Performance issue while fetching metadata from Informatica Repository

I'm working with Informatica 8.6 (an ETL tool), which stores its metadata in its own repository. Using its Mapping SDK APIs, I'm developing a Java application that fetches objects from that repository.
Using "mapfwk.jar", I first connect to the repository with the RepositoryConnectionManager class, and then fetch the metadata with the getFolder, getSource and getTarget functions.
Issue: the program takes too much time to fetch the folders. The time taken depends on the number of metadata objects in the repository, i.e. as the number of objects increases, the time increases.
Please advise how I can reduce the time taken to fetch metadata from the repository.
Source Code:
#1 - Code for connecting to repository
protected static String PC_CLIENT_INSTALL_PATH = "E:\\Informatica\\PowerCenter8.6.0\\client\\bin";
protected static String TARGET_REPO_NAME = "test_rep";
protected static String REPO_SERVER_HOST = "blrdxp-nfadate";
protected static String REPO_SERVER_PORT = "6001";
protected static String ADMIN_USERNAME = "Administrator";
protected static String ADMIN_PASSWORD = "Administrator";
protected static String REPO_SERVER_DOMAIN_NAME = "Domain_blrdxp-nfadate";

protected void initializeRepositoryProps(){
    CachedRepositoryConnectionManager rpMgr = new CachedRepositoryConnectionManager(new PmrepRepositoryConnectionManager());
    RepoProperties repoProp = new RepoProperties();
    repoProp.setProperty(RepoPropsConstant.PC_CLIENT_INSTALL_PATH, PC_CLIENT_INSTALL_PATH);
    repoProp.setProperty(RepoPropsConstant.TARGET_REPO_NAME, TARGET_REPO_NAME);
    repoProp.setProperty(RepoPropsConstant.REPO_SERVER_DOMAIN_NAME, REPO_SERVER_DOMAIN_NAME);
    repoProp.setProperty(RepoPropsConstant.REPO_SERVER_HOST, REPO_SERVER_HOST);
    repoProp.setProperty(RepoPropsConstant.REPO_SERVER_PORT, REPO_SERVER_PORT);
    repoProp.setProperty(RepoPropsConstant.ADMIN_USERNAME, ADMIN_USERNAME);
    repoProp.setProperty(RepoPropsConstant.ADMIN_PASSWORD, ADMIN_PASSWORD);
    rep.setProperties(repoProp);
    rep.setRepositoryConnectionManager(rpMgr);
}

#2 - Code for fetching metadata
Vector<Folder> rep_FetchedFolders = new Vector<Folder>();

public void fetchRepositoryFolders(){
    initializeRepositoryProps();
    System.out.println("Repository Properties set");
    //To fetch Folders
    Vector<Folder> folders = (Vector<Folder>) rep.getFolder();
    for(int i = 1; i < folders.size(); i++){
        Folder t_folder = new Folder();
        t_folder.setName(folders.get(i).getName());
        //To fetch Sources from the folder
        Vector listOfSources = folders.get(i).getSource();
        for(int b = 0; b < listOfSources.size(); b++){
            Source src = (Source) listOfSources.get(b);
            t_folder.addSource(src);
        }
        //To fetch Targets from the folder
        Vector listOfTargets = folders.get(i).getTarget();
        for(int b = 0; b < listOfTargets.size(); b++){
            Target trg = (Target) listOfTargets.get(b);
            t_folder.addTarget(trg);
        }
        rep_FetchedFolders.addElement(t_folder);
    }
}

Hi neel007,
Just use a List instead of a Vector; it's more performant, because an ArrayList, unlike a Vector, does not synchronize every call:
List<Folder> rep_FetchedFolders = new ArrayList<Folder>();
If you need to synchronize your list, then:
List<Folder> rep_FetchedFolders = Collections.synchronizedList(new ArrayList<Folder>());
Also, if you're using Java 5 or higher and you're sure listOfTargets contains only Target objects, instead of this
for(int b=0; b<listOfTargets.size();b++){
   Target trg = ((Target)listOfTargets.get(b));
   t_folder.addTarget(trg);
}
you may do this:
for (Target trg : listOfTargets) {
   t_folder.addTarget(trg);
}
Edited by: Chicon on May 18, 2009 7:29 AM
Edited by: Chicon on May 18, 2009 7:35 AM
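Putting the two suggestions together, here is a minimal sketch of the fetch loop (assumption: rep, Folder, Source, Target and the getFolder()/getSource()/getTarget() calls behave exactly as in the snippets above; the surrounding method is illustrative, not the Mapping SDK's prescribed usage):

import java.util.ArrayList;
import java.util.List;
import java.util.Vector;

// Minimal sketch: same traversal as above, but collecting into an ArrayList
// and using the enhanced for loop instead of index-based access.
public List<Folder> fetchRepositoryFoldersSketch() {
    initializeRepositoryProps();
    List<Folder> fetchedFolders = new ArrayList<Folder>();
    Vector<Folder> folders = (Vector<Folder>) rep.getFolder();
    for (Folder folder : folders) {
        Folder t_folder = new Folder();
        t_folder.setName(folder.getName());
        for (Object o : folder.getSource()) {   // sources of this folder
            t_folder.addSource((Source) o);
        }
        for (Object o : folder.getTarget()) {   // targets of this folder
            t_folder.addTarget((Target) o);
        }
        fetchedFolders.add(t_folder);
    }
    return fetchedFolders;
}

Keep in mind that the collection type only affects the in-memory work on the client; most of the elapsed time is likely spent inside the repository calls themselves (getFolder, getSource, getTarget), which is why it grows with the number of objects. The CachedRepositoryConnectionManager you already use should, as its name suggests, at least avoid repeating those calls for objects that are fetched more than once.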

Similar Messages

  • Performance issues while query data from a table having large records

    Hi all,
    I have performance issues with the queries on the mtl_transaction_accounts table, which has around 48,000,000 rows. One of the queries is below.
    SQL ID: 98pqcjwuhf0y6 Plan Hash: 3227911261
    SELECT SUM (B.BASE_TRANSACTION_VALUE)
    FROM
    MTL_TRANSACTION_ACCOUNTS B , MTL_PARAMETERS A  
    WHERE A.ORGANIZATION_ID =    B.ORGANIZATION_ID 
    AND A.ORGANIZATION_ID =  :b1 
    AND B.REFERENCE_ACCOUNT =    A.MATERIAL_ACCOUNT 
    AND B.TRANSACTION_DATE <=  LAST_DAY (TO_DATE (:b2 ,   'MON-YY' )  )  
    AND B.ACCOUNTING_LINE_TYPE !=  15  
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.02       0.05          0          0          0           0
    Fetch        3    134.74     722.82     847951    1003824          0           2
    total        7    134.76     722.87     847951    1003824          0           2
    Misses in library cache during parse: 1
    Misses in library cache during execute: 2
    Optimizer mode: ALL_ROWS
    Parsing user id: 193  (APPS)
    Number of plan statistics captured: 1
    Rows (1st) Rows (avg) Rows (max)  Row Source Operation
             1          1          1  SORT AGGREGATE (cr=469496 pr=397503 pw=0 time=237575841 us)
        788242     788242     788242   NESTED LOOPS  (cr=469496 pr=397503 pw=0 time=337519154 us cost=644 size=5920 card=160)
             1          1          1    TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=2 pr=0 pw=0 time=59 us cost=1 size=10 card=1)
             1          1          1     INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=1 pr=0 pw=0 time=40 us cost=0 size=0 card=1)(object id 181399)
        788242     788242     788242    TABLE ACCESS BY INDEX ROWID MTL_TRANSACTION_ACCOUNTS (cr=469494 pr=397503 pw=0 time=336447304 us cost=643 size=4320 card=160)
       8704356    8704356    8704356     INDEX RANGE SCAN MTL_TRANSACTION_ACCOUNTS_N3 (cr=28826 pr=28826 pw=0 time=27109752 us cost=28 size=0 card=7316)(object id 181802)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (AGGREGATE)
    788242    NESTED LOOPS
          1     TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                    'MTL_PARAMETERS' (TABLE)
          1      INDEX   MODE: ANALYZED (UNIQUE SCAN) OF
                     'MTL_PARAMETERS_U1' (INDEX (UNIQUE))
    788242     TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                    'MTL_TRANSACTION_ACCOUNTS' (TABLE)
    8704356      INDEX   MODE: ANALYZED (RANGE SCAN) OF
                     'MTL_TRANSACTION_ACCOUNTS_N3' (INDEX)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      row cache lock                                 29        0.00          0.02
      SQL*Net message to client                       2        0.00          0.00
      db file sequential read                    847951        0.40        581.90
      latch: object queue header operation            3        0.00          0.00
      latch: gc element                              14        0.00          0.00
      gc cr grant 2-way                               3        0.00          0.00
      latch: gcs resource hash                        1        0.00          0.00
      SQL*Net message from client                     2        0.00          0.00
      gc current block 3-way                          1        0.00          0.00
    ********************************************************************************
    On a 5 node RAC environment the program completes in 15 hours, whereas on a single node environment the program completes in 2 hours.
    Is there any way I can improve the performance of this query?
    Regards
    Edited by: mhosur on Dec 10, 2012 2:41 AM
    Edited by: mhosur on Dec 10, 2012 2:59 AM
    Edited by: mhosur on Dec 11, 2012 10:32 PM

    CREATE INDEX mtl_transaction_accounts_n0
      ON mtl_transaction_accounts (
                                   transaction_date
                                 , organization_id
                                 , reference_account
                                 , accounting_line_type
                                   )
    /

  • Performance Issue while changing Characteristics from CT04

    Hi Experts,
    Just now we have upgraded our system from 4.6C to ECC 6.0. In the new system we have created some characteristics and later I am trying to change these characteristics using transaction CT04 .
    Some characteristics already present in the new system came over from 4.6C. When I open/change one of these existing characteristics using CT04 it takes no time at all, whereas if I open/change a characteristic newly created in ECC 6.0, it takes a lot of time.
    When I run an SQL trace for both scenarios, I find that most of the time is spent on the query on table PLFV.
    Trace Result for Newly Created Characteristic :
    115 PLFV       PREPARE      0     SELECT WHERE "MANDT" = :A0 AND "ATINN"
          = :A1 AND "LOEKZ" = :A2 AND ROWNUM <= :A3
    3 PLFV       OPEN                 0   SELECT WHERE "MANDT" = '070' AND "ATINN" =
       0000000575 AND "LOEKZ" = ' ' AND ROWNUM <= 1
    336681733 PLFV       FETCH       0   1403        
    For this time taken is 336681733 .
    Trace Result for Existing Characteristic :
    2    PLFV       OPEN               0           SELECT WHERE "MANDT" = '070' AND "
                                                                  ATINN" = 0000000575 AND "LOEKZ" = ' ' AND ROWNUM <= 1
    For this time taken is 2.
    Here one difference I see is that for the Newly created characteristic, Prepare, Open and Fetch part is executed where as for the already existing query only Open is executed.
    The program which is used for querying PLFV is  SAPLCMX_TOOLS_CHARACTERISTICS.
    Could you please help me with this.
    Your response is highly appreciated.
    Regards,
    Lalit Kabra

    hi Rajesh,
    Please check the comments below:
    Please install Oracle optimizer patches 6740811 and 6455795 as per
    note 871096. Also ensure you have installed the other patches listed in
    the same note, as these are mandatory when running on Oracle 10.2.
    If this does not solve your problem open an OSS message for this.
    Regards,
    Lalit

  • Performance issue while extracting data from non-APPS schema tables.

    Hi,
    I have developed an ODBC application to extract data from Oracle E-Business Suite 12 tables.
    e.g. I am trying to extract data from the HZ_IMP_PARTIES_INT table of the Receivables application (the table is in the "AR" database schema) using this ODBC application.
    The performance of extraction (i.e. number of rows extracted per second) is very low if the table belongs to non-APPS schema. (e.g. HZ_IMP_PARTIES_INT belongs to "AR" database schema)
    Now if I create same table (HZ_IMP_PARTIES_INT) with same data in "APPS" schema, the performance of extraction improves a lot. (i.e. the number of rows extracted per second increases a lot) with the same ODBC application.
    The "APPS" user is used to connect to the database in both scenarios.
    Also note that my ODBC application creates multiple threads and each thread creates its own connection to the database. Each thread extracts different data using a SQL filter condition.
    So, my question is:
    Is APPS schema optimized for any data extraction?
    I will really appreciate any pointer on this.
    Thanks,
    Rohit.

    Hi,
    Is the APPS schema optimized for data extraction? I would say NO, as data extraction performance should be the same for all schemas. Do you run the "Gather Schema Statistics" concurrent program for ALL schemas on a regular basis?
    Regards,
    Hussein

  • Performance issue while transferring data from one itab to another itab

    hi experts,
    I have stored all the general material details in one internal table, and the description and valuation details of the material are stored in another internal table, which is a standard table. Now I need to transfer all the data from these two internal tables into one final internal table, but it is taking a lot of time as it has to transfer lakhs of records.
    i have declared the output table as shown below
    DATA:
      t_output TYPE standard TABLE
               OF type_output
               INITIAL SIZE 0
               WITH HEADER LINE.
    (According to the standard I have to declare it like this, and the two internal tables are declared similarly to the one above.)
    Could somebody suggest how I should proceed with this?
    Thanks in advance.
    Regards,
    Deepu

    Have a look at the following article which you may find useful:
      <a href="http://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/40729734-3668-2910-deba-fa0e95e2c541">Improving performance in nested loops</a>
    good luck, damian

  • How can we improve the performance while fetching data from RESB table.

    Hi All,
    Can anybody suggest the right way to improve performance while fetching data from the RESB table? Below is the select statement.
    SELECT aufnr posnr roms1 roanz
        INTO (itab-aufnr, itab-pposnr, itab-roms1, itab-roanz)
        FROM resb
        WHERE kdauf  = p_vbeln
        AND   ablad  = itab-sposnr+2.
    Here I am using 'KDAUF' and 'ABLAD' in the WHERE condition. Can we use a secondary index to improve performance in this case?
    Regards,
    Himanshu

    Hi ,
    Declare an internal table with only those four fields
    and try the code below:
    SELECT aufnr posnr roms1 roanz
    INTO TABLE itab
    FROM resb
    WHERE kdauf = p_vbeln
    AND ablad = itab-sposnr+2.
    Yes, you can also use a secondary index to improve performance in this case.
    Regards,
    Anand .
    Reward if it is useful....

  • Performance issue while generating Query

    Hi BI Gurus.
    I am facing performance issue while generating query on 0IC_C03.
    It has a (from & to) variable for generating the report for a particular time duration.
    If the (from & to) variable is filled, then after taking a long time it shows a run-time error.
    If the query is executed without the variable (which is optional), then the data is extracted from the beginning up to today, and this takes less time to execute.
    After that the period has to be selected manually using the "keep filter value" option. Please suggest how I can solve this error.
    Regards
    Ritika

    Hi Ritika,
    Welcome to SDN.
    You have to check the following runtime segments using the ST03N tcode:
    High Database Runtime
    High OLAP Runtime
    High Frontend Runtime
    If it is high database runtime:
    - check the aggregates, or create aggregates on the cube; this will help you.
    If it is high OLAP runtime:
    - check the user exits, if any.
    - check whether hierarchies are used and fetched at a deep level.
    If it is high frontend runtime:
    - check whether a very high number of cells and formattings are transferred to the frontend (use "All data" to get the value "No. of Cells"), which causes high network and frontend (processing) runtime.
    For the from and to date variables, create one more set, use it and try again.
    Regs,
    VACHAN

  • Error while fetching data from OWB Client using External Table.

    Dear All,
    I am using Oracle Warehouse Builder 11g & Oracle 10gR2 as repository database on Windows 2000 Server.
    I am facing an issue fetching data from a flat file using an external table from the OWB Client.
    I have performed all the steps without any error, but when I try to view the data I get the following error.
    ======================================
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    ORA-29400: data cartridge error
    KUP-04040: file expense_categories.csv in SOURCE_LOCATION not found
    ORA-06512: at "SYS.ORACLE_LOADER", line 19
    java.sql.SQLException: ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    ORA-29400: data cartridge error
    KUP-04040: file expense_categories.csv in SOURCE_LOCATION not found
    ORA-06512: at "SYS.ORACLE_LOADER", line 19
         at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
         at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:110)
         at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:171)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
         at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1030)
         at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:183)
         at oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:774)
         at oracle.jdbc.driver.T4CStatement.executeMaybeDescribe(T4CStatement.java:849)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1186)
         at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1377)
         at oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:386)
         at oracle.wh.ui.owbcommon.QueryResult.<init>(QueryResult.java:18)
         at oracle.wh.ui.owbcommon.dataviewer.relational.OracleQueryResult.<init>(OracleDVTableModel.java:48)
         at oracle.wh.ui.owbcommon.dataviewer.relational.OracleDVTableModel.doFetch(OracleDVTableModel.java:20)
         at oracle.wh.ui.owbcommon.dataviewer.RDVTableModel.fetch(RDVTableModel.java:46)
         at oracle.wh.ui.owbcommon.dataviewer.BaseDataViewerPanel$1.actionPerformed(BaseDataViewerPanel.java:218)
         at javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:1849)
         at javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2169)
         at javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:420)
         at javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:258)
         at javax.swing.AbstractButton.doClick(AbstractButton.java:302)
         at javax.swing.AbstractButton.doClick(AbstractButton.java:282)
         at oracle.wh.ui.owbcommon.dataviewer.BaseDataViewerPanel.executeQuery(BaseDataViewerPanel.java:493)
         at oracle.wh.ui.owbcommon.dataviewer.BaseDataViewerEditor.init(BaseDataViewerEditor.java:116)
         at oracle.wh.ui.owbcommon.dataviewer.BaseDataViewerEditor.<init>(BaseDataViewerEditor.java:58)
         at oracle.wh.ui.owbcommon.dataviewer.relational.DataViewerEditor.<init>(DataViewerEditor.java:16)
         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
         at java.lang.reflect.Constructor.newInstance(Constructor.java:494)
         at oracle.wh.ui.owbcommon.IdeUtils._tryLaunchEditorByClass(IdeUtils.java:1412)
         at oracle.wh.ui.owbcommon.IdeUtils._doLaunchEditor(IdeUtils.java:1349)
         at oracle.wh.ui.owbcommon.IdeUtils._doLaunchEditor(IdeUtils.java:1367)
         at oracle.wh.ui.owbcommon.IdeUtils.showDataViewer(IdeUtils.java:869)
         at oracle.wh.ui.owbcommon.IdeUtils.showDataViewer(IdeUtils.java:856)
         at oracle.wh.ui.console.commands.DataViewerCmd.performAction(DataViewerCmd.java:19)
         at oracle.wh.ui.console.commands.TreeMenuHandler$1.run(TreeMenuHandler.java:188)
         at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:209)
         at java.awt.EventQueue.dispatchEvent(EventQueue.java:461)
         at java.awt.EventDispatchThread.pumpOneEventForHierarchy(EventDispatchThread.java:242)
         at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:163)
         at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:157)
         at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:149)
         at java.awt.EventDispatchThread.run(EventDispatchThread.java:110)
    ===========================
    The error says that file expense_categories.csv was not found in SOURCE_LOCATION, but I am 100% sure the file is there.
    Has anybody faced the same issue?
    Do we need to configure something before loading data from a flat file from the OWB Client?
    Any help would be highly appreciated.
    Regards,
    Manmohan Sharma

    Hi Detlef / Gowtham,
    Now I am able to fetch data from flat files from OWB Server as well as OWB Client.
    One way I achieved it, as suggested by you:
    1) Created a location on the OWB Client
    2) Sampled the files at the client
    3) Created & configured the external table
    4) Copied all flat files to the OWB Server
    5) Updated the location which I created at the client.
    The other way:
    1) Created a location on the OWB Client
    2) Sampled the files at the client
    3) Created & configured the external table
    4) Copied the flat files onto the server in the same drive & directory, i.e. if all my flat files are in C:\data at the OWB Client, then I copied them to C:\data on the OWB Server. But this is not feasible for non-Windows servers.
    Hence my problem is solved.
    Thanks a lot.
    Regards,
    Manmohan

  • Eliminate duplicate while fetching data from source

    Hi All,
    CUSTOMER TRANSACTION
    CUST_LOC     CUT_ID          TRANSACTION_DATE     TRANSACTION_TYPE
    100          12345          01-jan-2009          CREDIT
    100          23456          15-jan-2000          CREDIT
    100          12345          01-jan-2010          DEBIT
    100          12345          01-jan-2000          DEBIT
    Now as per my requirement, I need to fetch data from the CUSTOMER_TRANSACTION table for those customers which have a transaction in the last 10 years. In my data above, customer 12345 has transactions in the last 10 years, whereas customer 23456 does not, so we will eliminate it.
    Now, the CUSTOMER_TRANSACTION table has approximately 100 million records, so we are fetching data in batches. The batching is divided into months, 120 months in total. Below is my query.
    select *
    FROM CUSTOMER_TRANSACTION CT left outer join
    (select distinct CUST_LOC, CUT_ID FROM CUSTOMER_TRANSACTION WHERE TRANSACTION_DATE >= ADD_MONTHS(SYSDATE, -120) and TRANSACTION_DATE < ADD_MONTHS(SYSDATE, -119)) CUST
    on CT.CUST_LOC = CUST.CUST_LOC and CT.CUT_ID = CUST.CUT_ID
    Through the shell script, the month numbers will change: -120:-119, -119:-118, ....., -1:0.
    Now the problem is duplication of records.
    While fetching data for jan-2009, it will get cust_id 12345, fetch all 3 records and load them into the target.
    While fetching data for jan-2010, it will again get cust_id 12345, fetch all 3 records and load them into the target.
    So instead of only 3 records, customer 12345 will end up with 6 records. Can someone help me with how I can prevent duplicate records from getting in?
    As of now i have 2 ways in mind.
    1. Fetch all records at once, which is impossible as it will cause space issues.
    2. After each batch, run a procedure which deletes duplicate records based on cust_loc, cut_id and transaction_date. But again this will have a performance problem.
    I want to eliminate the duplicates while fetching data from the source.
    Edited by: ace_friends22 on Apr 6, 2011 10:16 AM

    You can do it this way....
    SELECT DISTINCT cust_loc,
                    cut_id
      FROM customer_transaction
     WHERE transaction_date >= ADD_MONTHS(SYSDATE, -120)
       AND transaction_date < ADD_MONTHS(SYSDATE, -119)
    However, please note that if you want to get the transactions in a given month, like you said earlier (jan-2009, jan-2010 and so on), you might need to use TRUNC.
    Your date comparison could be like this. In this example I am checking if the transaction date is in the month of jan-2009:
    AND transaction_date BETWEEN ADD_MONTHS(TRUNC(SYSDATE,'MONTH'), -27)  AND LAST_DAY(ADD_MONTHS(TRUNC(SYSDATE,'MONTH'), -27))
    Your modified SQL:
    SELECT *
      FROM customer_transaction 
     WHERE transaction_date BETWEEN ADD_MONTHS(TRUNC(SYSDATE,'MONTH'), -27)  AND LAST_DAY(ADD_MONTHS(TRUNC(SYSDATE,'MONTH'), -27))
    Testing..
    --Sample Data
    CREATE TABLE customer_transaction (
    cust_loc number,
    cut_id number,
    transaction_date date,
    transaction_type varchar2(20)
    );
    INSERT INTO customer_transaction VALUES (100,12345,TO_DATE('01-JAN-2009','dd-MON-yyyy'),'CREDIT');
    INSERT INTO customer_transaction VALUES (100,23456,TO_DATE('15-JAN-2000','dd-MON-yyyy'),'CREDIT');
    INSERT INTO customer_transaction VALUES (100,12345,TO_DATE('01-JAN-2010','dd-MON-yyyy'),'DEBIT');
    INSERT INTO customer_transaction VALUES (100,12345,TO_DATE('01-JAN-2000','dd-MON-yyyy'),'DEBIT');
    --To have three records in the month of jan-2009
    UPDATE customer_transaction
       SET transaction_date = TO_DATE('02-JAN-2009','dd-MON-yyyy')
    WHERE cut_id = 12345
       AND transaction_date = TO_DATE('01-JAN-2010','dd-MON-yyyy');
    UPDATE customer_transaction
       SET transaction_date = TO_DATE('03-JAN-2009','dd-MON-yyyy')
    WHERE cut_id = 12345
       AND transaction_date = TO_DATE('01-JAN-2000','dd-MON-yyyy');
    commit;
    --End of sample data
    SELECT *
      FROM customer_transaction 
    WHERE transaction_date BETWEEN ADD_MONTHS(TRUNC(SYSDATE,'MONTH'), -27)  AND LAST_DAY(ADD_MONTHS(TRUNC(SYSDATE,'MONTH'), -27));
    Results....
    CUST_LOC     CUT_ID TRANSACTI TRANSACTION_TYPE
          100      12345 01-JAN-09 CREDIT
          100      12345 02-JAN-09 DEBIT
          100      12345 03-JAN-09 DEBIT
    As you can see, there are only 3 records for 12345.
    Regards,
    Rakesh
    Edited by: Rakesh on Apr 6, 2011 11:48 AM

  • Issue while fetching the file through *.extension by FTP sender file adapter

    Hello Experts,
    I am facing an issue while fetching data through the sender File adapter with a '*.file extension'. I will illustrate the scenario below.
    It is a simple File-to-File inbound scenario. Here the file is picked up from a third-party system
    through an FTP sender channel and stored in a temp folder of PI through an NFS receiver file adapter.
    The problem occurs while picking up the file with the file name "*.exo" (where exo is the file extension).
    But while fetching the file with a particular name like "abcd_10032011*.exo" (the file name is the combination of abcd (always the same) and the current date (which changes each day)), the file is picked up successfully.
    So in the former case the file is not getting picked up, but in the latter case it does.
    Can anyone please let me know what might be the issue?

    Hi Sunit,
    Connect from your PI System to the 3rd Party System (where the source files are placed)
    FTP <PartySystemHostName>
    eg. FTP 10.2.3.456 (then insert Username & Password to Login)
    Go to source directory
    cd \<SourceDirectory>
    eg. cd \donaldduck\directory\
    Execute a File List command
    ls -la *.<extension>
    eg. ls -la *.exo
    In this way you should be able to view all files with this extension (*.exo), the same action that Sap XI perform to pickup the file.
    Then, try to copy that file to your local PI System (to check if there are any permission issues):
    mget <filename>.exo
    eg. mget File1_01012011.exo

  • Performance issue while opening the report

    HI,
    I am working on BO XI R3.1. There is a performance issue while opening the report on the BO Solaris server, but on the Windows server it is comparatively fast.
    We have a few reports which contain 5 fixed prompts and 7 optional prompts.
    Out of the 5 fixed prompts, 3 prompts are static (they contain only 3-4 records) and come from a materialized view.
    We have already used many things to improve report performance, like:
    1) Index Awareness
    2) Aggregate Awareness
    3) Array fetch size - 250
    4) Array bind time - 32767
    5) Login timeout - 600
    The issue is that, before refresh, opening the report itself takes 1.30 min on the BO Solaris server, but the same report takes 45 sec on the BO Windows server. Even when we import it onto another BO Solaris server it takes the same time as on the old Solaris server (1.30 min).
    When we close the trace on the Solaris server it takes 1.15 sec. In this initial phase it is not hitting the database much, so why is it taking that much time while opening the report?
    Could you please guide us on where exactly the problem is and how we can improve the performance of opening the report. In case the problem is related to the Solaris server, what would it be and how can we rectify it?
    In case any further input is required for the same, feel free to ask me.

    Hi Kumar,
    If this is happening with all the reports, then this issue seems to be due to the firewall or security settings of the Solaris OS.
    Please try to lower the security level in Solaris and test for the issue.
    Regards,
    Chaitanya Deshpande

  • I'm facing performance issue while accessing the PLAF Table

    Dear all,
    I'm facing a performance issue while accessing the PLAF table.
    The START-OF-SELECTION of the report starts with the following select query.
        SELECT plnum  pwwrk matnr gsmng psttr FROM plaf
        INTO CORRESPONDING FIELDS OF TABLE it_tab
        WHERE matnr IN s_matnr
          AND pwwrk = p_pwwrk
          AND psttr IN s_psttr
          AND auffx = 'X'
          AND paart = 'LA' .
    While executing the report in the quality system there is no performance issue.
    When it comes to the production system, the above select query itself takes 15-20 minutes before the report moves further.
    Kindly help me to overcome this problem.
    Regards,
    Jessi

    Hi,
    "Just implement its primary Key
    WHERE PLNUM BETWEEN '0000000001' AND '9999999999' " By this you are implementing the Primary Key
    This statement has nothing to do with performance, because system is not able to use primary key or uses every row.
    Jessica, your query uses secondary index created by SAP:
    1     (Material, plant) which uses fields MANDT MATNR and PLWRK.
    but it is not suitable in your case.
    You can consider adding new one, which would containt all fields: MANDT, MATNR, PWWRK, PSTTR AUFFX PAART
    or - but it depends on number of rows meeting and not meeting (auffx = 'X' AND paart = 'LA' ) condition.
    It could speed the performance, if you would create secondary index based on fields MANDT, MATNR, PWWRK, PSTTR
    and do like Ramchander suggested: remove AUFFX and PAART from index and where section, and remove these unwanted rows
    after the query using DELETE statement.
    Regards,
    Przemysław
    Please check how many rows in production system

  • How to use the FOR ALL ENTRIES clause while fetching data from archived tables

    How do I use the FOR ALL ENTRIES clause while fetching data from archived tables using the FM
    '/PBS/SELECT_INTO_TABLE'?
    I need to fetch data from an Archived table for all the entries in an internal table.
    Kindly provide some inputs for the same.
    thanks n Regards
    Ramesh

    Hi Ramesh,
    I have a query regarding accessing archived data through PBS.
    I have archived SAP FI data ( Object FI_DOCUMNT) using SAP standard process through TCODE : SARA.
    Now please tell me, can I access this archived data through the PBS add-on FM '/PBS/SELECT_INTO_TABLE'?
    Do I need to do something else to access data archived through the SAP standard process or not? If yes, then please tell me, as I am not able to get the data using the above FM.
    The call to the above FM is as follows :
    CALL FUNCTION '/PBS/SELECT_INTO_TABLE'
      EXPORTING
        archiv           = 'CFI'
        OPTION           = ''
        tabname          = 'BKPF'
        SCHL1_NAME       = 'BELNR'
        SCHL1_VON        =  belnr-low
        SCHL1_BIS        =  belnr-low
        SCHL2_NAME       = 'GJAHR'
        SCHL2_VON        =  GJAHR-LOW
        SCHL2_BIS        =  GJAHR-LOW
        SCHL3_NAME       =  'BUKRS'
        SCHL3_VON        =  bukrs-low
        SCHL3_BIS        =  bukrs-low
      SCHL4_NAME       =
      SCHL4_VON        =
      SCHL4_BIS        =
        CLR_ITAB         = 'X'
      MAX_ZAHL         =
      tables
        i_tabelle        =  t_bkpf
      SCHL1_IN         =
      SCHL2_IN         =
      SCHL3_IN         =
      SCHL4_IN         =
    EXCEPTIONS
       EOF              = 1
       OTHERS           = 2.
    It gives me the following error :
    Index for table not supported ! BKPF BELNR.
    Please help ASAP.
    Thnaks and Regards
    Gurpreet Singh

  • Fatal error while fetching data from bi

    Hi,
    I am getting the following error while fetching data from BI using a select statement.
    I have written the code this way:
    SELECT  [Measures].[D2GFTNHIOMI7KWV99SD7GPLTU] ON COLUMNS, NON EMPTY { [DEM_STATE].MEMBERS} ON ROWS FROM DEM_CUBE/TEST_F_8
    error description when i click on test
    Fatal Error
    com.lighthammer.webservice.SoapException: The XML for Analysis provider encountered an error

    Thanks for answering, but when I tried writing the statement in transaction 'MDXTEST' and clicked on check, I got the following error:
    Error occurred when starting the parser: timeout during allocate / CPIC-CALL: 'ThSAPCMRCV'
    Message no. BRAINOLAPAPI011
    Diagnosis
    Failed to start the MDX parser.
    System Response
    timeout during allocate / CPIC-CALL: 'ThSAPCMRCV'
    Procedure
    Check the Sys Log in Transaction SM21 and test the TCP-IP connection MDX_PARSER in Transaction SM59.
    So I went into SM59 to check the connection.
    Can you tell me what configuration I need to do to make the select statements work?

  • Special character issue while loading data from SAP HR through VDS

    Hello,
    We have a special character issue, while loading data from SAP HR to IdM, using a VDS and following the standard documentation: http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e09fa547-f7c9-2b10-3d9e-da93fd15dca1?quicklink=index&overridelayout=true
    French accented characters (é, à, è, ù) are correctly loaded, but Turkish special ones (like Ş, İ, ł) are transformed into "#" in IdM.
    The question is: does someone know of any special setting in the VDS or in IdM for special characters that would solve this issue?
    Our SAP HR version is ECC6.0 (ABA/BASIS7.0 SP21, SAP_HR6.0 SP54) and we are using a VDS 7.1 SP5 and SAP NW IdM 7.1 SP5 Patch1 on oracle 10.2.
    Thanks

    We are importing directly to the HR staging area, using the transactions/programs "HRLDAP_MAP", "LDAP" and "/RPLDAP_EXTRACT"; then we have a job which extracts data from the staging area to a CSV file.
    So before the import, the character appears correctly in SAP HR, but by the time it comes through the VDS to the IDM's temporary table, it becomes "#".
    Yes, our data is coming from a Unicode system.
    So, could it be a Java parameter to change/add in the VDS?
    Regards.
