Getting data from table BSEG is taking too long ... any solutions?

Hello people, I am currently trying to get data from table BSEG for one particular G/L account number, with restrictions, using FOR ALL ENTRIES.
The problem is that even with such tight restrictions it makes my report program run way too slowly. I added an option where you don't have to access table BSEG at all, and then it runs just fine. (All of this is on the PRD server.)
My questions are:
1.) Why does BSEG make the report slow, even though I put some tight restrictions on it? I'm using FOR ALL ENTRIES where zuonr EQ i_tab-zuonr and hkont EQ '0020103101' (Customer Deposits), and it seems to work fine in DEV.
2.) Is there a way for me to do the same thing as in #1, but much faster?
Thanks guys, and take care.

Hi,
It's better not to read table BSEG if you don't have the keys BUKRS and BELNR, because the read can take a long time if there are many hits.
If you want to find the records of a G/L account, it's better to read the index tables BSIS (for open items) and BSAS (for cleared items), where the field HKONT is part of the key (and ZUONR too). That way you can improve the performance:
DATA: T_ITEMS LIKE STANDARD TABLE OF BSIS.

* FOR ALL ENTRIES selects everything when the driver table is empty,
* so check it first.
IF NOT I_ITAB[] IS INITIAL.
  SELECT * FROM BSAS INTO TABLE T_ITEMS
    FOR ALL ENTRIES IN I_ITAB
    WHERE BUKRS = <BUKRS>
      AND HKONT = '0020103101'
      AND ZUONR = I_ITAB-ZUONR.

  SELECT * FROM BSIS APPENDING TABLE T_ITEMS
    FOR ALL ENTRIES IN I_ITAB
    WHERE BUKRS = <BUKRS>
      AND HKONT = '0020103101'
      AND ZUONR = I_ITAB-ZUONR.
ENDIF.
Remember that every kind of item has its own index tables:
- BSIS/BSAS for G/L Account
- BSIK/BSAK for Vendor
- BSID/BSAD for Customer
These tables contain the same information you can find in BSEG and BKPF.
Max

Similar Messages

  • Export to xls from table is taking too long (Around 20 min)

    Hi all,
    I am a new member of this forum. I have a table with 80 columns and 20,000 records, and it has a primary key column. When I try to export the table data from the data grid in Toad, it takes 15-20 minutes. When I try to export from my front-end Java application, it times out. When I query the table I get results in 3 seconds, but the export takes far longer.
    Can anyone please provide me a solution for exporting the data within a minute?
    My table structure is:
    COL1     NUMBER
    COL2      VARCHAR2 (150 Byte)
    COL3     DATE
    COL4      DATE
    COL5      NUMBER
    COL6      NUMBER
    COL7      VARCHAR2 (6 Byte)
    COL8      NUMBER
    COL9      NUMBER
    COL10     NUMBER
    COL11     NUMBER
    COL12     NUMBER
    COL13     NUMBER
    COL14     NUMBER
    COL15     CHAR (1 Byte)
    COL16     CHAR (1 Byte)
    COL17     NUMBER
    COL18     NUMBER
    COL19     NUMBER
    COL20     NUMBER
    COL21     NUMBER
    COL22     NUMBER
    COL23     VARCHAR2 (200 Byte)
    COL24     VARCHAR2 (200 Byte)
    COL25     VARCHAR2 (200 Byte)
    COL26     VARCHAR2 (200 Byte)
    COL27     VARCHAR2 (200 Byte)
    COL28     VARCHAR2 (200 Byte)
    COL29     NUMBER
    COL30     VARCHAR2 (200 Byte)
    COL31     NUMBER
    COL32     NUMBER
    COL33     NUMBER
    COL34     NUMBER
    COL35     NUMBER
    COL36     NUMBER
    COL37     NUMBER
    COL38     NUMBER
    COL39     NUMBER
    COL40     NUMBER
    COL41     DATE
    COL42     VARCHAR2 (150 Byte)
    COL43     DATE
    COL44     VARCHAR2 (150 Byte)
    COL45     NUMBER
    COL46     VARCHAR2 (50 Byte)
    COL47     VARCHAR2 (50 Byte)
    COL48     NUMBER
    COL49     NUMBER
    COL50     CHAR (1 Byte)
    COL51     CHAR (1 Byte)
    COL52     VARCHAR2 (150 Byte)
    COL53     CHAR (1 Byte)
    COL54     VARCHAR2 (150 Byte)
    COL55     NUMBER
    COL56     NUMBER
    COL57     VARCHAR2 (250 Byte)
    COL58     CHAR (1 Byte)
    COL59     CHAR (1 Byte)
    COL60     CHAR (1 Byte)
    COL61     CHAR (1 Byte)
    COL62     CHAR (1 Byte)
    COL63     CHAR (1 Byte)
    COL64     CHAR (1 Byte)
    COL65     CHAR (1 Byte)
    COL66     CHAR (1 Byte)
    COL67     CHAR (1 Byte)
    COL68     CHAR (1 Byte)
    COL69     CHAR (1 Byte)
    COL70     CHAR (1 Byte)
    COL71     CHAR (1 Byte)
    COL72     NUMBER
    COL73     NUMBER
    COL74     NUMBER
    COL75     NUMBER
    COL76     NUMBER
    COL77     CHAR (1 Byte)
    COL78     CHAR (1 Byte)
    COL79     CHAR (1 Byte)

    Hi,
    I got it. But this export happens from the front-end Java application, which I am simulating in Toad on the back end. Could you please suggest any architecture-level approach when creating the table? Once again, thanks for your input.
    Regards,
    Pavan.
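    (One common way to speed up such an export, sketched here only as an illustration: bypass the GUI grid and spool a delimited file straight from SQL*Plus. The spool file name and the table name my_table are hypothetical, and only the first three columns are shown; the rest follow the same pattern.)
    -- Minimal SQL*Plus spool sketch; extend the SELECT list to all 79 columns.
    SET HEADING OFF
    SET FEEDBACK OFF
    SET PAGESIZE 0
    SET LINESIZE 32767
    SET TRIMSPOOL ON
    SPOOL /tmp/table_export.csv
    SELECT col1 || ',' || col2 || ',' || TO_CHAR(col3, 'YYYY-MM-DD')
      FROM my_table;
    SPOOL OFF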

  • I am trying to download Adobe X1 Pro but get error message 413 header length too long - any ideas?

    I am trying to download Adobe X1 Pro but get error message 413 header length too long - any ideas?

    Hi Theresa ,
    That's a browser error.
    Clear your browser cache and cookies and try again.
    Or use a different browser.
    You may refer to the following link to download Acrobat 11.
    https://helpx.adobe.com/acrobat/kb/acrobat-downloads.html
    Regards
    Sukrit Dhingra

  • Data Archive Script is taking too long to delete a large table

    Hi All,
    We have data archive scripts that move data for a date range to a different table. Each script has two parts: first it copies data from the original table to the archive table, and then it deletes the copied rows from the original table. The first part executes very fast, but the deletion is taking too long, around 2-3 hours. The customer analysed the delete query and says the script is not using an index and is doing a full table scan, but the predicate is the primary key itself. Please help... More info below.
    CREATE TABLE "APP"."MON_TXNS"
       (    "ID_TXN" NUMBER(12,0) NOT NULL ENABLE,
        "BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
        "ID_PAYER" NUMBER(12,0),
        "ID_PAYER_PI" NUMBER(12,0),
        "ID_PAYEE" NUMBER(12,0),
        "ID_PAYEE_PI" NUMBER(12,0),
        "ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE,
        "STR_TEXT" VARCHAR2(60 CHAR),
        "DAT_MERCHANT_TIMESTAMP" DATE,
        "STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE),
        "DAT_EXPIRATION" DATE,
        "DAT_CREATION" DATE,
        "STR_USER_CREATION" VARCHAR2(30 CHAR),
        "DAT_LAST_UPDATE" DATE,
        "STR_USER_LAST_UPDATE" VARCHAR2(30 CHAR),
        "STR_OTP" CHAR(6 BYTE),
        "ID_AUTH_METHOD_PAYER" NUMBER(1,0),
        "AMNT_AMOUNT" NUMBER(23,0) DEFAULT 0,
        "BOL_IS_AUTOCAPTURE" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
        "ID_USE_CASE" NUMBER(4,0) NOT NULL ENABLE,
        "ID_AUTH_METHOD_PAYEE" NUMBER(2,0),
         CONSTRAINT "CKC_BOL_IS_CANCELLED_MON_TXNS" CHECK (BOL_IS_CANCELLED in ('Y','N')) ENABLE,
         CONSTRAINT "PK_MON_TXNS" PRIMARY KEY ("ID_TXN")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX"  ENABLE,
         CONSTRAINT "FK_MON_TXNS_CURRENCIES" FOREIGN KEY ("ID_CURRENCY")
          REFERENCES "APP"."CURRENCIES" ("ID_CURRENCY") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PAYER" FOREIGN KEY ("ID_PAYER")
          REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PAYEE" FOREIGN KEY ("ID_PAYEE")
          REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
         CONSTRAINT "FK_MON_TXNS_REFERENCE_TXNS" FOREIGN KEY ("ID_TXN")
          REFERENCES "APP"."TXNS" ("ID_TXN") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PI_PAYER" FOREIGN KEY ("ID_PAYER_PI")
          REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PI_PAYEE" FOREIGN KEY ("ID_PAYEE_PI")
          REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_AUTHMETHOD" FOREIGN KEY ("ID_AUTH_METHOD_PAYER")
          REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE,
         CONSTRAINT "FK_MON_TXNS_USE_CASE_ID" FOREIGN KEY ("ID_USE_CASE")
          REFERENCES "APP"."USE_CASES" ("ID_USE_CASE") ENABLE,
         CONSTRAINT "FK_MON_TXN_AUTH_PAYEE" FOREIGN KEY ("ID_AUTH_METHOD_PAYEE")
          REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE
      CREATE INDEX "APP"."IDX_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX" ;
      CREATE INDEX "APP"."IDX_PAYEE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_PYE_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE_PI")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_PYR_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER_PI")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_USE_CASE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_USE_CASE")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE UNIQUE INDEX "APP"."PK_MON_TXNS" ON "APP"."MON_TXNS" ("ID_TXN")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX" ;
    Data is first moved to a table in schema3 (OTW), and then we delete all the rows present in OTW from the original table. Below is the explain plan for the delete:
    SQL> explain plan for
      2  delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 2798378986
    | Id  | Operation              | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | DELETE STATEMENT       |            |  2520 |   233K|    87   (2)| 00:00:02 |
    |   1 |  DELETE                | MON_TXNS   |       |       |            |          |
    |*  2 |   HASH JOIN RIGHT SEMI |            |  2520 |   233K|    87   (2)| 00:00:02 |
    |   3 |    INDEX FAST FULL SCAN| OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
    |   4 |    TABLE ACCESS FULL   | MON_TXNS   | 14260 |  1239K|    83   (0)| 00:00:02 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
    Please help,
    thanks,
    Banka Ravi

    'Best practice' is just what Oracle is already doing, as you have already been told: DELETE FROM myTable WHERE myDate BETWEEN myStart AND myEnd.
    Your use case is why many orgs elect to use partitioning, with that DATE column as the partition key. Then it is VERY FAST and VERY EASY to truncate or drop partitions that contain old data when you no longer need them, as the sketch below shows.
    The other solution is to stop waiting so long to delete data, so that you don't have to delete large amounts at once. Instead of deleting data once a month, delete it once a week or even every night. The number of rows being deleted will then be much smaller and, if the stats are kept current, Oracle may decide to use the index.
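    (To make the partitioning suggestion concrete, here is a minimal sketch; using DAT_CREATION as the partition key is an assumption, most MON_TXNS columns are omitted, and interval partitioning needs 11g or later.)
    -- Sketch only: one partition per month, created automatically.
    CREATE TABLE mon_txns_part (
        id_txn       NUMBER(12,0) NOT NULL,
        dat_creation DATE         NOT NULL
        -- ... remaining MON_TXNS columns and constraints ...
    )
    PARTITION BY RANGE (dat_creation)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    (PARTITION p_initial VALUES LESS THAN (DATE '2013-01-01'));

    -- Removing a month of old data is then a dictionary operation,
    -- not a row-by-row DELETE (assumes a partition exists for that month):
    ALTER TABLE mon_txns_part DROP PARTITION FOR (DATE '2013-06-01');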

  • FM to get archived data from table BSEG

    Hi,
    What is the FM to get archived data from table BSEG?

    Hi Friend,
    Here is the FM you need,
    FAGL_GET_ARCH_ITEMS_BSEG.
    Regards,
    Lakshmanan

  • Query OR Stored Proc to get data from Tables from All Schemas in the d/base

    Hello Experts, (I apologize if I am not using the right way to ask questions.)
    I have a database with around 400 schemas in it. I have designed a query which fetches data from three different tables in Schema_1.
    But it would be a tedious process to enter the 400 schema names and pull the information.
    I would like to know as to what would be the best possible way to;
    1) Look for all the schemas in the database
    2) Look for those specific tables in the schema, which has the data in the tables.
    3) If the tables are not present, then ignore that schema and proceed further.
    4) Load the data into a table
    Any help would be appreciated.
    Thanks!
    The query that I am using is as follows:
    -- Query to select all the Schemas from the database
    select username from all_users
    order by username;
    -- Sample query to see if tables exist in the schema
    SELECT DISTINCT OWNER, OBJECT_NAME
    FROM ALL_OBJECTS
    WHERE OBJECT_TYPE = 'TABLE'
    AND OBJECT_NAME IN ('ENROLLMENT', 'PRDCT', 'L_P_L')
    AND OWNER IN ('Schema_1', 'Schema_2', 'Schema_3', 'Schema_4', 'Schema_5', 'Schema_6')
    ORDER BY OWNER;
    --Query to get the data from the tables in a Schema
    select 'Schema_1@DATABASE_NAME' AS SCHEMA,
    (SELECT MAX(LOAD_DT) FROM Schema_1.LOAD_STATUS) AS MAX_LOAD,
    L_P_L.PROD_LINE,
    COUNT(DISTINCT ENROLLMENT.MEM_NBR) AS MEMBERSHIP
    FROM
    Schema_1.ENROLLMENT,
    Schema_1.PRDCT,
    Schema_1.L_P_L
    WHERE
    ENROLLMENT.PRODUCT_ID = PRDCT.PRODUCT_ID AND
    PRDCT.PROD_LINE_ID = L_P_L.ID
    GROUP BY
    L_P_L.PROD_LINE;

    Hi,
    999355 wrote:
    Hello Experts, (I apologize if I am not using the right way to ask questions.)
    See the forum FAQ {message:id=9360002}
    I have a database, and it has around 400 schemas in it. I have designed a query which will fetch the data from three different table's from Schema1.
    But it will be a tedious process of entering the 400 schemas names and pulling the information.
    I would like to know as to what would be the best possible way to;
    1) Look for all the schemas in the database
    2) Look for those specific tables in the schema, which has the data in the tables.
    3) If the tables are not present, than Ignore that schema and proceed further.
    4) Load the data into a table
    Any help, would appreciate it.
    Thanks!
    The query that i am using is as follows;
    -- Query to select all the Schemas from the database
    select username from all_users
    order by username;
    -- Sample query to see if tables exist in the schema
    SELECT DISTINCT OWNER, OBJECT_NAME
    FROM ALL_OBJECTS
    WHERE OBJECT_TYPE = 'TABLE'
    AND OBJECT_NAME IN ('ENROLLMENT', 'PRDCT', 'L_P_L')
    AND OWNER IN ('Schema_1', 'Schema_2', 'Schema_3', 'Schema_4', 'Schema_5', 'Schema_6')
    ORDER BY OWNER;
    Do you want to give a list of possible schemas (like the 6 above), or do you want to consider all schemas, however many and whatever they are called?
    You can get the right information from ALL_OBJECTS, but, since you know all the objects of interest are tables, ALL_TABLES will be faster and simpler.
    --Query to get the data from the tables in a Schema
    select 'Schema_1@DATABASE_NAME' AS SCHEMA,
    (SELECT MAX(LOAD_DT) FROM Schema_1.LOAD_STATUS) AS MAX_LOAD,
    L_P_L.PROD_LINE,
    COUNT(DISTINCT ENROLLMENT.MEM_NBR) AS MEMBERSHIP
    FROM
    Schema_1.ENROLLMENT,
    Schema_1.PRDCT,
    Schema_1.L_P_L
    WHERE
    ENROLLMENT.PRODUCT_ID = PRDCT.PRODUCT_ID AND
    PRDCT.PROD_LINE_ID = L_P_L.ID
    GROUP BY
    L_P_L.PROD_LINE;
    I take it that the tables in question are ENROLLMENT, PRDCT and L_P_L; they won't have different names in different schemas.
    You can start this way:
    BEGIN
        FOR c IN (
                   SELECT    owner
                   FROM      all_tables
                   WHERE     table_name IN ( 'ENROLLMENT'
                                           , 'PRDCT'
                                           , 'L_P_L'
                                           )
                   GROUP BY  owner
                   HAVING    COUNT (*) = 3
                 )
        LOOP
            ...  -- Now get the results for tables in the c.owner schema
        END LOOP;
    END;
    /
    This will find the schemas that have all 3 of those tables.
    Inside the loop, write another dynamic query; all that changes is the value of c.owner. For example:
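    (A hedged sketch of that dynamic step, completing the loop above: RESULTS_TAB is a hypothetical target table, LOAD_STATUS is left out for brevity, and the calling user needs SELECT privileges on every schema's tables.)
    DECLARE
        v_sql VARCHAR2(4000);
    BEGIN
        FOR c IN (SELECT owner
                    FROM all_tables
                   WHERE table_name IN ('ENROLLMENT', 'PRDCT', 'L_P_L')
                   GROUP BY owner
                  HAVING COUNT (*) = 3)
        LOOP
            -- Same query as in the original post, with the schema name
            -- substituted in; results are collected into RESULTS_TAB.
            v_sql := 'INSERT INTO results_tab (schema_name, prod_line, membership) '
                  || 'SELECT ''' || c.owner || ''', l.prod_line, '
                  || 'COUNT(DISTINCT e.mem_nbr) '
                  || 'FROM ' || c.owner || '.enrollment e, '
                  ||            c.owner || '.prdct p, '
                  ||            c.owner || '.l_p_l l '
                  || 'WHERE e.product_id = p.product_id '
                  || 'AND p.prod_line_id = l.id '
                  || 'GROUP BY l.prod_line';
            EXECUTE IMMEDIATE v_sql;
        END LOOP;
        COMMIT;
    END;
    /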
    Sorry, I'm running out of time now. I hope this helps.

  • Create table query taking too long..

    Hello experts...
    I am taking a backup of table A, which consists of 135 million records.
    For this I am using the query below:
    create table tableA_bkup as select * from tableA;
    It has been running for more than an hour... still running...
    Is there any other way to make the query faster?
    Thanks in advance...

    CTAS is one of the fastest ways to do such a thing.
    Remember that you are duplicating data. This means if your table holds 50 GB of data, then it has to copy those 50 GB.
    A different way would be to use EXPDP to create a backup dump file from the table data. However, I'm not sure there is a big performance difference.
    Both versions can profit from parallel execution, for example:
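    (A minimal sketch of the parallel variant; the degree of 4 is an arbitrary assumption, and NOLOGGING should be checked against your backup strategy first.)
    -- Parallel, minimally-logged CTAS; adjust the degree to your hardware.
    CREATE TABLE tableA_bkup
    PARALLEL 4
    NOLOGGING
    AS SELECT /*+ PARALLEL(tableA 4) */ *
         FROM tableA;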

  • How to get data from table to pass into alvgrid function module

    I want to get some data from the tables below to pass into the ALV grid function module.
    How can I get the data? Please help.
    Thanks in advance.
    form get_data.
      select * into corresponding fields of table itab
        from j_1iexchdr
        inner join j_1iexcdtl
          on j_1iexcdtl~lifnr = j_1iexchdr~lifnr
        where j_1iexchdr~status = 'P'.
    * select ... into table already fills itab, so no separate append is needed.
    endform.

    Pass your final table (the internal table you want to display) along with the field catalog to FM REUSE_ALV_GRID_DISPLAY:
      CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
        EXPORTING
          i_callback_program = i_repid
    *     it_sort            = gt_sort
          it_fieldcat        = lt_fieldcat[]
        TABLES
          t_outtab           = lt_final
        EXCEPTIONS
          program_error      = 1
          OTHERS             = 2.
      IF sy-subrc <> 0.
        MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ENDIF.

  • Getting data from tables

    Hello All,
    I have a scenario with two different software systems (SAP and an XYZ system), where an intermediate, shared table is created between the two. Data is written into this shared table by the XYZ system. Now, my questions regarding this shared table:
    1) Can we write a program or something to get the data from the shared table and update SAP?
    2) If possible, send me your suggestions.
    3) Please also send me sample code to get the data from the shared table.
    Thanks in advance,
      SDN powered

    Isn't this something for XI?
    You can create RFCs which can connect to other systems.
    Or you can just place the data on the application server and schedule a daily job that retrieves data from there to fill an SAP table.

  • SRM - How to get data from tables???

    Hi experts -
    I am new to SRM and I have a requirement to code a program that retrieves purchase orders based on selection criteria, writes an ALV report, and allows the user to select particular lines to close the PO.
    My question is: given the following selection criteria, I see that there are tables such as CRMD_ORDERADM_H and CRMD_ORDERADM_I as well as BBP_PDBEH and BBP_PDBEI, but I am not sure whether I should select from them individually, or whether the records can be collected from a function module or BAPI.
    Selection criteria:
    PO Number - range
    Purchasing group - range
    Vendor number - range
    Buyer ID - range
    Cost Center - range
    GL Account - range
    WBS - range
    Asset - range
    Plant - range
    Company code - range
    Delivery Date - range
    PO Create Date - range
    How can I pull the qualifying records in the easiest manner?
    Thanks in advance!
    Mark

    I'm not certain about this, but I don't think the information you need is in the SRM front end. I think you'll have to use an RFC-enabled FM to retrieve the data from the R/3 back end to the SRM system. Or maybe there is a BAPI that does this for you.
    Rob

  • Data activation in ODS taking too long

    Hi gurus,
    I have loaded data into the PSA, and from there into an ODS. However, the data is not being activated automatically, so I have to activate it manually.
    The problem is that I have to wait a very long time. I started the data activation about an hour ago, and my load has 1.5 million records.
    In SM37 the log shows the following messages:
    05.02.2007 12:13:37 Job started                                                                       00           516          S    
    05.02.2007 12:13:37 Step 001 started (program RSODSACT1, variant &0000000000061, user ID XXXXX)      00           550          S    
    05.02.2007 12:13:49 Activation is running: Data target ZFIGL_O2, from 1,553 to 1,553                  RSM          744          S    
    05.02.2007 12:13:49 Data to be activated successfully checked against archiving objects              RSMPC         153          S    
    05.02.2007 12:13:49 SQL: 05.02.2007 12:13:49 S12940                                                  DBMAN         099          I    
    05.02.2007 12:13:49 ANALYZE TABLE "/BIC/AZFIGL_O240" DELETE                                          DBMAN         099          I    
    05.02.2007 12:13:49 STATISTICS                                                                       DBMAN         099          I    
    05.02.2007 12:13:49 SQL-END: 05.02.2007 12:13:49 00:00:00                                            DBMAN         099          I    
    05.02.2007 12:13:49 SQL: 05.02.2007 12:13:49 S12940                                                  DBMAN         099          I    
    05.02.2007 12:13:49 BEGIN DBMS_STATS.GATHER_TABLE_STATS ( OWNNAME =>                                 DBMAN         099          I    
    05.02.2007 12:13:49 'SAPBWP', TABNAME => '"/BIC/AZFIGL_O240"',                                       DBMAN         099          I    
    05.02.2007 12:13:49 ESTIMATE_PERCENT => 1 , METHOD_OPT => 'FOR ALL                                   DBMAN         099          I    
    05.02.2007 12:13:49 INDEXED COLUMNS SIZE 75', DEGREE => 1 ,                                          DBMAN         099          I    
    05.02.2007 12:13:49 GRANULARITY => 'ALL', CASCADE => TRUE ); END;                                    DBMAN         099          I    
    05.02.2007 12:13:56 SQL-END: 05.02.2007 12:13:56 00:00:07                                            DBMAN         099          I    
    It gets stuck there at SQL-END for some time...
    Is this normal? How can I improve the performance of my data loading & activation?
    Thank you very much.

    Hi Ken.
    Many thanks for your input. I think I will follow what is suggested in the note, quoted below:
    4. Activation of data in an ODS object
    To improve system performance when activating data in the ODS object, you can make the following entries in Customizing under Business Information Warehouse → General BW Settings → ODS Object Settings:
    - the maximum number of parallel processes when activating data in the ODS object, as well as when moving SIDs
    - the minimum number of data records per data package when activating data in the ODS object, meaning you define the size of the data packages that are activated
    - the maximum wait time in seconds when activating data in the ODS object. This is how long the main process (batch process) waits for a split dialog process before it classifies it as having failed.
    However, can someone advise me what would be an optimum/normal value for:
    1. the maximum number of parallel processes
    2. the minimum number of data records for each data package
    3. the maximum wait time in seconds?
    Many thanks.

  • Getting data from table control to the report program.

    Hi,
    I created a table control in a report program and I am trying to enter data into the table control, which I then want to update in the DB table. How can I get the data entered in the table control back into the report program, so that I can update the DB table?
    Please help me find out which variable holds the data entered in the table control (dynamically).

    hi,
    In your table control you give some name to the control, say it_cntrl.
    This name also serves as the internal table used to process the table control data.
    You can write something like this:
    loop at it_cntrl into wa_cntrl.   "wa_cntrl is a work area with the line type of it_cntrl
      ...                             "do your processing here
    endloop.
    Any clarification, get in touch.
    Thanks

  • Join query not getting data from tables

    I have 4 tables in total: zemployee, zemp_comm, zemp_adress and zemp_edu1, all connected through emp_id. When I write a left outer join it returns all rows, but emp_id comes back null. When I insert all of the entries in the respective tables it shows the respective emp_id; when I leave out some of the entries in a table, it does not show the emp_id in the records.
    I am using this select statement:
    select * from zemployee as a
      left outer join zemp_comm as b on b~emp_id = a~emp_id
      left outer join zemp_adress as c on c~emp_id = a~emp_id
      left outer join zemp_education1 as d on d~emp_id = a~emp_id
      into corresponding fields of table emp_itb
      where a~emp_name = zemployee-emp_name.
    I want to show the records with emp_id whether I put entries in all tables or missed some of them.
    please help me in this regard.

    Hi Amir,
                You can use an inner join. Please refer to the code below.
    select * from zemployee as a
      inner join zemp_comm as b on b~emp_id = a~emp_id
      inner join zemp_adress as c on c~emp_id = a~emp_id
      inner join zemp_education1 as d on d~emp_id = a~emp_id
      into corresponding fields of table emp_itb
      where a~emp_name = zemployee-emp_name.
    Regards,
    Thangam.P

  • In Boot Camp Assistant, downloading the latest Windows support software from Apple is taking too long

    I am trying to download the latest Windows support software from Apple in Boot Camp Assistant and it is taking forever. Any suggestions?

    In this thread https://discussions.apple.com/message/16605226#16605226 there's a link to download the Boot Camp 4.0 driver package.

  • Fetch data from table and generate attachment than mail it.

    Hello Experts,
    For a couple of days I have been searching on Google for a database procedure that will help me get data from tables, generate an attachment and mail it, but I have failed.
    My scenario is:
    I have a query that fetches almost 5,000 records from database tables. Each record has almost 75 characters:
    select a.location_code,
                   a.item_code,
                   b.description item_desc,
                   to_char(a.manufact_date,'ddMonyy')mfg,
                   to_char((a.manufact_date + nvl(b.expiry_period,0)),'ddMonyy')expr,
                   to_char((a.manufact_date + nvl(b.qurantine_period,0)),'ddMonyy')qrtn,
                   round(nvl (b.qurantine_period, 0) - (sysdate - a.manufact_date)) days_elapsed,
                   a.closing_balance_posted quantity
              from wms_stock_current_balance_v a, wms_item_setup_mast b
             where a.closing_balance > 0
               and a.item_code = b.item_code
               and a.loc_type in ('RACKING','PICKING','QUICKA','BUNDLED')
               and nvl(b.qurantine_period,0) > 0
               and round(nvl (b.qurantine_period, 0) - (sysdate - a.manufact_date)) <= 0
          order by a.item_code, a.location_code;
    Sample data from the above query:
    LOCATION_CODE  ITEM_CODE             ITEM_DESC                MFG        Expiry     Quarantine  Days Elapse  Quantity
    13DL2          000000000000000F0487  CLEAR COOL BLACK 05ML    20-Feb-10  31-Mar-14  4-Jun-13    -122         160
    14DL0          000000000000000F0487  CLEAR COOL BLACK 05ML    23-Feb-10  3-Apr-14   7-Jun-13    -119         134
    14DL2          000000000000000F0487  CLEAR COOL BLACK 05ML    23-Feb-10  3-Apr-14   7-Jun-13    -119         160
    14DR2          000000000000000F0487  CLEAR COOL BLACK 05ML    23-Feb-10  3-Apr-14   7-Jun-13    -119         20
    14LL2          000000000000000F0487  CLEAR COOL BLACK 05ML    20-Feb-10  31-Mar-14  4-Jun-13    -122         160
    17ER2          000000000000000F0487  CLEAR COOL BLACK 05ML    20-Feb-10  31-Mar-14  4-Jun-13    -122         160
    17GL2          000000000000000F0487  CLEAR COOL BLACK 05ML    20-Feb-10  31-Mar-14  4-Jun-13    -122         160
    17SL0          000000000000000F0487  CLEAR COOL BLACK 05ML    17-Feb-10  28-Mar-14  1-Jun-13    -125         64
    18QL0          000000000000000F0487  CLEAR COOL BLACK 05ML    23-Feb-10  3-Apr-14   7-Jun-13    -119         160
    19AR5          000000000000000F0487  CLEAR COOL BLACK 05ML    17-Feb-10  28-Mar-14  1-Jun-13    -125         160
    19DL1          000000000000000F0487  CLEAR COOL BLACK 05ML    20-Feb-10  31-Mar-14  4-Jun-13    -122         160
    19JR0          000000000000000F0487  CLEAR COOL BLACK 05ML    17-Feb-10  28-Mar-14  1-Jun-13    -125         60
    19TL1          000000000000000F0487  CLEAR COOL BLACK 05ML    20-Feb-10  31-Mar-14  4-Jun-13    -122         160
    20GR2          000000000000000F0487  CLEAR COOL BLACK 05ML    20-Feb-10  31-Mar-14  4-Jun-13    -122         40
    36FL3          000000000000000F0487  CLEAR COOL BLACK 05ML    18-Feb-10  29-Mar-14  2-Jun-13    -124         65
    19UR0          000000000000000F0591  COMFORT WHITE 24ML*300   28-Oct-09  28-Oct-11  1-May-11    -887         1
    12SL1          000000000000000F0593  COMFORT PINK 24ML*300    28-Oct-09  28-Oct-11  1-May-11    -887         42
    12SR1          000000000000000F0593  COMFORT PINK 24ML*300    28-Oct-09  28-Oct-11  1-May-11    -887         42
    14OR1          000000000000000F0593  COMFORT PINK 24ML*300    28-Oct-09  28-Oct-11  1-May-11    -887         8
    36EL4          000000000000000F0594  CLEAR HF DECRASE 5M*360  14-Feb-10  14-Feb-11  12-Oct-10   -1088        14
    13VL1          000000000000000F0595  CLEAR COM SFT CRE 5*360  8-Feb-10   8-Feb-11   6-Oct-10    -1094        160
    14ER0          000000000000000F0595  CLEAR COM SFT CRE 5*360  8-Feb-10   8-Feb-11   6-Oct-10    -1094        105
    Database Info
    Oracle 10g
    Version 10.2.0.1.0

    Look at the sample code for generating a CSV file that I've just posted in response to a similar question:
    Re: How to execute a proc and spool files in a database job
    And then use the search button in this forum to find sample code for sending a CLOB as a plain/text e-mail attachment using UTL_SMTP.
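    (To make the first step concrete, here is a rough sketch that builds the CSV into a CLOB, reusing the query from the post. The UTL_SMTP send itself follows the pattern referenced above; error handling is omitted.)
    DECLARE
        v_csv  CLOB;

        -- Small helper: append one piece of text to the CLOB.
        PROCEDURE put (p_text IN VARCHAR2) IS
        BEGIN
            DBMS_LOB.WRITEAPPEND(v_csv, LENGTH(p_text), p_text);
        END;
    BEGIN
        DBMS_LOB.CREATETEMPORARY(v_csv, TRUE);
        put('LOCATION_CODE,ITEM_CODE,ITEM_DESC,MFG,EXPIRY,QUARANTINE,DAYS_ELAPSED,QUANTITY' || CHR(10));
        FOR r IN (SELECT a.location_code,
                         a.item_code,
                         b.description item_desc,
                         TO_CHAR(a.manufact_date, 'ddMonyy') mfg,
                         TO_CHAR(a.manufact_date + NVL(b.expiry_period, 0), 'ddMonyy') expr,
                         TO_CHAR(a.manufact_date + NVL(b.qurantine_period, 0), 'ddMonyy') qrtn,
                         ROUND(NVL(b.qurantine_period, 0) - (SYSDATE - a.manufact_date)) days_elapsed,
                         a.closing_balance_posted quantity
                    FROM wms_stock_current_balance_v a, wms_item_setup_mast b
                   WHERE a.closing_balance > 0
                     AND a.item_code = b.item_code
                     AND a.loc_type IN ('RACKING', 'PICKING', 'QUICKA', 'BUNDLED')
                     AND NVL(b.qurantine_period, 0) > 0
                     AND ROUND(NVL(b.qurantine_period, 0) - (SYSDATE - a.manufact_date)) <= 0
                   ORDER BY a.item_code, a.location_code)
        LOOP
            put(r.location_code || ',' || r.item_code || ',' || r.item_desc || ','
                || r.mfg || ',' || r.expr || ',' || r.qrtn || ','
                || r.days_elapsed || ',' || r.quantity || CHR(10));
        END LOOP;
        -- At this point v_csv holds the full CSV and would be handed to the
        -- UTL_SMTP attachment routine from the linked thread.
        DBMS_LOB.FREETEMPORARY(v_csv);
    END;
    /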

Maybe you are looking for

  • Cannot see host drives from within guest OS

    My host PC is Win7 64-bit, and guest PC is Win7 32-bit. I've installed and enabled Integration Services and ticked the relevant resource sharing options under "Integration features". Should I expect to see the host's drives appear in the guest's Wind

  • How do I release memory when done with a large Image?

    I've got a sample program here. Enter a filename of a .jpg file, click the button and it will load and display a thumbnail of it. However memory is not released so repeatedly clicking the button will let you watch the memory use grow and grow. What s

  • Can't update or uninstall itunes or quicktime

    Hi, I'm using an old XP unit, and lately I've not been able to update, uninstall, or reinstall iTunes or QuickTime. Any suggestions?

  • Aperture as a finishing tool

    Hello, The Aperture discussions well document the varying usefulness people find for Aperture, and while everyone it seems is united in the potential we can see, there appears to be heavy division on whether Aperture can be integrated successfully in

  • How do I deactivate my acrobat pro X?

    I want to deactivate my Acrobat Pro X so I can use it on another computer. I am looking for Help > Deactivate, but this does not show up? Is it because I did not register the product? I bought it on Feb 4, 2011 and have the order number.