DATA_BUFFER_EXCEEDED error when viewing data on a large table

Hello,
I am getting the error below while fetching data in the Data Services Designer from SAP ECC tables.
Error calling RFC function to get table data: <RFC_ABAP_EXCEPTION-(Exception_Key: DATA_BUFFER_EXCEEDED, SY-MSGTY: E, SY-MSGID:
I followed KBA 1752954 (DATA_BUFFER_EXCEEDED error - Data Services) and found that the RFC Z_AW_RFC_READ_TABLE is available in SAP ECC.
Please help: how do I resolve this kind of error?
Aisurya

Hi Aisurya,
The cause of the exception is a combination of two factors:
The data extracted for a row in an SAP application table source is larger than 512 bytes.
The Data Services Remote Function Call (RFC) /BODS/RFC_STREAM_READ_TABLE is not installed on the SAP application server.
If /BODS/RFC_STREAM_READ_TABLE is not loaded on the SAP application server, Data Services falls back to the SAP-supplied function RFC_READ_TABLE, which limits extracted data to 512 bytes per row.
You can also try /BODS/RFC_READ_TABLE2, an enhanced version of RFC_READ_TABLE.
Regards
Arun Sasi

Similar Messages

  • Retrieve data from a large table in Oracle 10g

    I am working on a Microsoft Visual Studio project that needs to retrieve data from a large table in an Oracle 10g database and export the data to the hard drive.
    The problem here is that I am not able to connect to the database directly because of a licensing issue, but I can use a third-party API to retrieve data from the database. This API has sufficient privileges/license permissions on the database to perform the retrieval. So I am not able to use DTS/SSIS or other tools that import data by connecting to the database directly.
    My approach is to first retrieve the data using the API into a .NET DataTable and then dump the records from it onto the hard drive in a specific format (perhaps an Excel file or another SQL Server database).
    When I try to retrieve the data from a large table having over 1.3 million (13 lakh) records (3-4 GB) into a DataTable in the Visual Studio project, I get an OutOfMemory exception.
    Is there a better way to retrieve the records chunk by chunk and do the export without losing the state of the data in the table?
    Any help on this problem will be highly appreciated.
    Thanks in advance...
    -Jahedur Rahman
    Edited by: Jahedur on May 16, 2010 11:42 PM

    Girish... thanks for your reply... but I am sorry for the confusion. Let me explain:
    1. "export the data into another media into the hard drive."
    What does this line mean, i.e. another media on the hard drive???
    ANS: Sorry... I just want to write the data to a file or to a table in a SQL Server database.
    2. "I am not able to connect to the database directly because of license issue"
    Huh?? I have never heard of a user not being able to connect to the DB because of a license. What error/message are you getting?
    ANS: My company uses a 3rd-party application that runs on Oracle 10g. My company is licensed to use the 3rd-party application (app + database is a package) and did not purchase an Oracle license for direct use. So I cannot connect to the database directly.
    3. I am not sure which API you are talking about, but I am running an application with a Visual Studio data grid or similar controls, in which I can select (SELECT query) as many rows as I need; no issue.
    ANS: This API is provided by the 3rd-party application vendor. I can pass a query to it and it returns a DataTable.
    4. "better way to retrieve the records chunk by chunk and do the export without losing the state of the data in the table?"
    ANS: Since I get a system error (out of memory) when I select all rows into a DataTable at once, I want to retrieve the data in multiple phases (see the sketch after this post).
    E.g.: 1 to 20,000 records in 1st phase
    20,001 to 40,000 records in 2nd phase
    40,001 to ...... records in 3rd phase
    and so on...
    Please let me know if this does not clarify your confusion... :)
    Thanks...
    -Jahedur Rahman
    Edited by: user13114507 on May 12, 2010 11:28 PM
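    One way to implement the phased retrieval described above, given that the third-party API accepts an arbitrary query, is to page on the database side with Oracle's ROWNUM. A minimal sketch, assuming a hypothetical big_table with a deterministic ordering column (all names are placeholders):
    SELECT *
      FROM (SELECT t.*, ROWNUM rn
              FROM (SELECT * FROM big_table ORDER BY id) t
             WHERE ROWNUM <= :hi)
     WHERE rn > :lo;
    Binding :lo/:hi to 0/20000, 20000/40000, and so on yields the phases listed above. The ORDER BY must be deterministic (ideally the primary key), or rows can repeat or vanish between chunks.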

  • Unable to view data in some HR tables

    Since I upgraded to SQL Developer 2.1.1.64 I have not been able to view data in the Data tab for the following tables that I have run across: per_all_people_f, per_all_assignments_f or per_all_positions_f.
    Other tables that I have been using seem to be working fine, but these I need to use all the time. I get the column numbers returned, but no data or column headings.
    Another co-worker who has upgraded is experiencing the same problem.

    I could not get the DDL to show up on the SQL tab in version 2.1.1.64, but I did run it in version 1.5.4 and this is what was returned.
    -- Unable to Render DDL with DBMS_METADATA using internal generator.
    CREATE TABLE HR.PER_ALL_PEOPLE_F (
    PERSON_ID NUMBER(10, 0) NOT NULL,
    EFFECTIVE_START_DATE DATE NOT NULL,
    EFFECTIVE_END_DATE DATE NOT NULL,
    BUSINESS_GROUP_ID NUMBER(15, 0) NOT NULL,
    PERSON_TYPE_ID NUMBER(15, 0) NOT NULL,
    LAST_NAME VARCHAR2(150 BYTE) NOT NULL,
    START_DATE DATE NOT NULL,
    APPLICANT_NUMBER VARCHAR2(30 BYTE),
    BACKGROUND_CHECK_STATUS VARCHAR2(30 BYTE),
    BACKGROUND_DATE_CHECK DATE,
    BLOOD_TYPE VARCHAR2(30 BYTE),
    COMMENT_ID NUMBER(15, 0),
    CORRESPONDENCE_LANGUAGE VARCHAR2(30 BYTE),
    CURRENT_APPLICANT_FLAG VARCHAR2(30 BYTE),
    CURRENT_EMP_OR_APL_FLAG VARCHAR2(30 BYTE),
    CURRENT_EMPLOYEE_FLAG VARCHAR2(30 BYTE),
    DATE_EMPLOYEE_DATA_VERIFIED DATE,
    DATE_OF_BIRTH DATE,
    EMAIL_ADDRESS VARCHAR2(240 BYTE),
    EMPLOYEE_NUMBER VARCHAR2(30 BYTE),
    EXPENSE_CHECK_SEND_TO_ADDRESS VARCHAR2(30 BYTE),
    FAST_PATH_EMPLOYEE VARCHAR2(30 BYTE),
    FIRST_NAME VARCHAR2(150 BYTE),
    FTE_CAPACITY NUMBER(5, 2),
    FULL_NAME VARCHAR2(240 BYTE),
    HOLD_APPLICANT_DATE_UNTIL DATE,
    HONORS VARCHAR2(45 BYTE),
    INTERNAL_LOCATION VARCHAR2(45 BYTE),
    KNOWN_AS VARCHAR2(80 BYTE),
    LAST_MEDICAL_TEST_BY VARCHAR2(60 BYTE),
    LAST_MEDICAL_TEST_DATE DATE,
    MAILSTOP VARCHAR2(45 BYTE),
    MARITAL_STATUS VARCHAR2(30 BYTE),
    MIDDLE_NAMES VARCHAR2(60 BYTE),
    NATIONALITY VARCHAR2(30 BYTE),
    NATIONAL_IDENTIFIER VARCHAR2(30 BYTE),
    OFFICE_NUMBER VARCHAR2(45 BYTE),
    ON_MILITARY_SERVICE VARCHAR2(30 BYTE),
    ORDER_NAME VARCHAR2(240 BYTE),
    PRE_NAME_ADJUNCT VARCHAR2(30 BYTE),
    PREVIOUS_LAST_NAME VARCHAR2(150 BYTE),
    PROJECTED_START_DATE DATE,
    REHIRE_AUTHORIZOR VARCHAR2(30 BYTE),
    REHIRE_REASON VARCHAR2(60 BYTE),
    REHIRE_RECOMMENDATION VARCHAR2(30 BYTE),
    RESUME_EXISTS VARCHAR2(30 BYTE),
    RESUME_LAST_UPDATED DATE,
    REGISTERED_DISABLED_FLAG VARCHAR2(30 BYTE),
    SECOND_PASSPORT_EXISTS VARCHAR2(30 BYTE),
    SEX VARCHAR2(30 BYTE),
    STUDENT_STATUS VARCHAR2(30 BYTE),
    SUFFIX VARCHAR2(30 BYTE),
    TITLE VARCHAR2(30 BYTE),
    VENDOR_ID NUMBER(15, 0),
    WORK_SCHEDULE VARCHAR2(30 BYTE),
    WORK_TELEPHONE VARCHAR2(60 BYTE),
    COORD_BEN_MED_PLN_NO VARCHAR2(30 BYTE),
    COORD_BEN_NO_CVG_FLAG VARCHAR2(30 BYTE),
    DPDNT_ADOPTION_DATE DATE,
    DPDNT_VLNTRY_SVCE_FLAG VARCHAR2(30 BYTE),
    RECEIPT_OF_DEATH_CERT_DATE DATE,
    USES_TOBACCO_FLAG VARCHAR2(30 BYTE),
    BENEFIT_GROUP_ID NUMBER(15, 0),
    REQUEST_ID NUMBER(15, 0),
    PROGRAM_APPLICATION_ID NUMBER(15, 0),
    PROGRAM_ID NUMBER(15, 0),
    PROGRAM_UPDATE_DATE DATE,
    ATTRIBUTE_CATEGORY VARCHAR2(30 BYTE),
    ATTRIBUTE1 VARCHAR2(150 BYTE),
    ATTRIBUTE2 VARCHAR2(150 BYTE),
    ATTRIBUTE3 VARCHAR2(150 BYTE),
    ATTRIBUTE4 VARCHAR2(150 BYTE),
    ATTRIBUTE5 VARCHAR2(150 BYTE),
    ATTRIBUTE6 VARCHAR2(150 BYTE),
    ATTRIBUTE7 VARCHAR2(150 BYTE),
    ATTRIBUTE8 VARCHAR2(150 BYTE),
    ATTRIBUTE9 VARCHAR2(150 BYTE),
    ATTRIBUTE10 VARCHAR2(150 BYTE),
    ATTRIBUTE11 VARCHAR2(150 BYTE),
    ATTRIBUTE12 VARCHAR2(150 BYTE),
    ATTRIBUTE13 VARCHAR2(150 BYTE),
    ATTRIBUTE14 VARCHAR2(150 BYTE),
    ATTRIBUTE15 VARCHAR2(150 BYTE),
    ATTRIBUTE16 VARCHAR2(150 BYTE),
    ATTRIBUTE17 VARCHAR2(150 BYTE),
    ATTRIBUTE18 VARCHAR2(150 BYTE),
    ATTRIBUTE19 VARCHAR2(150 BYTE),
    ATTRIBUTE20 VARCHAR2(150 BYTE),
    ATTRIBUTE21 VARCHAR2(150 BYTE),
    ATTRIBUTE22 VARCHAR2(150 BYTE),
    ATTRIBUTE23 VARCHAR2(150 BYTE),
    ATTRIBUTE24 VARCHAR2(150 BYTE),
    ATTRIBUTE25 VARCHAR2(150 BYTE),
    ATTRIBUTE26 VARCHAR2(150 BYTE),
    ATTRIBUTE27 VARCHAR2(150 BYTE),
    ATTRIBUTE28 VARCHAR2(150 BYTE),
    ATTRIBUTE29 VARCHAR2(150 BYTE),
    ATTRIBUTE30 VARCHAR2(150 BYTE),
    LAST_UPDATE_DATE DATE,
    LAST_UPDATED_BY NUMBER(15, 0),
    LAST_UPDATE_LOGIN NUMBER(15, 0),
    CREATED_BY NUMBER(15, 0),
    CREATION_DATE DATE,
    PER_INFORMATION_CATEGORY VARCHAR2(30 BYTE),
    PER_INFORMATION1 VARCHAR2(150 BYTE),
    PER_INFORMATION2 VARCHAR2(150 BYTE),
    PER_INFORMATION3 VARCHAR2(150 BYTE),
    PER_INFORMATION4 VARCHAR2(150 BYTE),
    PER_INFORMATION5 VARCHAR2(150 BYTE),
    PER_INFORMATION6 VARCHAR2(150 BYTE),
    PER_INFORMATION7 VARCHAR2(150 BYTE),
    PER_INFORMATION8 VARCHAR2(150 BYTE),
    PER_INFORMATION9 VARCHAR2(150 BYTE),
    PER_INFORMATION10 VARCHAR2(150 BYTE),
    PER_INFORMATION11 VARCHAR2(150 BYTE),
    PER_INFORMATION12 VARCHAR2(150 BYTE),
    PER_INFORMATION13 VARCHAR2(150 BYTE),
    PER_INFORMATION14 VARCHAR2(150 BYTE),
    PER_INFORMATION15 VARCHAR2(150 BYTE),
    PER_INFORMATION16 VARCHAR2(150 BYTE),
    PER_INFORMATION17 VARCHAR2(150 BYTE),
    PER_INFORMATION18 VARCHAR2(150 BYTE),
    PER_INFORMATION19 VARCHAR2(150 BYTE),
    PER_INFORMATION20 VARCHAR2(150 BYTE),
    PER_INFORMATION21 VARCHAR2(150 BYTE),
    PER_INFORMATION22 VARCHAR2(150 BYTE),
    PER_INFORMATION23 VARCHAR2(150 BYTE),
    PER_INFORMATION24 VARCHAR2(150 BYTE),
    PER_INFORMATION25 VARCHAR2(150 BYTE),
    PER_INFORMATION26 VARCHAR2(150 BYTE),
    PER_INFORMATION27 VARCHAR2(150 BYTE),
    PER_INFORMATION28 VARCHAR2(150 BYTE),
    PER_INFORMATION29 VARCHAR2(150 BYTE),
    PER_INFORMATION30 VARCHAR2(150 BYTE),
    OBJECT_VERSION_NUMBER NUMBER(9, 0),
    DATE_OF_DEATH DATE,
    ORIGINAL_DATE_OF_HIRE DATE,
    TOWN_OF_BIRTH VARCHAR2(90 BYTE),
    REGION_OF_BIRTH VARCHAR2(90 BYTE),
    COUNTRY_OF_BIRTH VARCHAR2(90 BYTE),
    GLOBAL_PERSON_ID VARCHAR2(30 BYTE),
    COORD_BEN_MED_PL_NAME VARCHAR2(80 BYTE),
    COORD_BEN_MED_INSR_CRR_NAME VARCHAR2(80 BYTE),
    COORD_BEN_MED_INSR_CRR_IDENT VARCHAR2(80 BYTE),
    COORD_BEN_MED_EXT_ER VARCHAR2(80 BYTE),
    COORD_BEN_MED_CVG_STRT_DT DATE,
    COORD_BEN_MED_CVG_END_DT DATE,
    PARTY_ID NUMBER(15, 0),
    NPW_NUMBER VARCHAR2(30 BYTE),
    CURRENT_NPW_FLAG VARCHAR2(30 BYTE),
    GLOBAL_NAME VARCHAR2(240 BYTE),
    LOCAL_NAME VARCHAR2(240 BYTE)
    , CONSTRAINT PER_PEOPLE_F_PK PRIMARY KEY
      (PERSON_ID, EFFECTIVE_START_DATE, EFFECTIVE_END_DATE) ENABLE
    )
    TABLESPACE "HR_DATA_SPACE_01" LOGGING
    PCTFREE 10 PCTUSED 40 INITRANS 16 MAXTRANS 255
    STORAGE (INITIAL 48K NEXT 8000K MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 1 BUFFER_POOL DEFAULT);
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_PEOPLE_F_FK1 FOREIGN KEY (BUSINESS_GROUP_ID)
    REFERENCES HR.HR_ALL_ORGANIZATION_UNITS (ORGANIZATION_ID) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_PEOPLE_F_FK2 FOREIGN KEY (PERSON_TYPE_ID)
    REFERENCES HR.PER_PERSON_TYPES (PERSON_TYPE_ID) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT HR_PER_DATE_OF_DEATH CHECK (DATE_OF_DEATH >= DATE_OF_BIRTH) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_PER_ON_MILITARY_SRV_CHK CHECK (ON_MILITARY_SERVICE IN ('Y','N')) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_DPDNT_VLNTRY_SVCE_FLAG_CHK CHECK (DPDNT_VLNTRY_SVCE_FLAG IN ('Y','N')) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_PER_SECOND_PASSPORT_CHK CHECK (SECOND_PASSPORT_EXISTS IN ('Y','N')) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_PER_FAST_PATH_EMPLOYEE_CHK CHECK (FAST_PATH_EMPLOYEE IN ('Y','N')) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_COORD_BEN_NO_CVG_FLAG CHECK (COORD_BEN_NO_CVG_FLAG IN ('Y','N')) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_PER_RESUME_EXISTS_CHK CHECK (RESUME_EXISTS IN ('Y','N')) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_PER_SEX_CHK CHECK (SEX IN ('M', 'F')) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_PER_EXPENSE_CHECK_SEND_CHK CHECK (EXPENSE_CHECK_SEND_TO_ADDRESS IN ('H', 'O', 'P')) ENABLE;
    CREATE INDEX HR.CSUH_PPF_ATTR12_IDX ON HR.PER_ALL_PEOPLE_F (ATTRIBUTE12 ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE (INITIAL 1M NEXT 104K MINEXTENTS 1 MAXEXTENTS 8192
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.CSUH_PPF_ATTR1_IDX ON HR.PER_ALL_PEOPLE_F (ATTRIBUTE1 ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 0 INITRANS 16 MAXTRANS 255
    STORAGE (INITIAL 560K NEXT 160K MINEXTENTS 1 MAXEXTENTS 8192
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N1 ON HR.PER_ALL_PEOPLE_F (UPPER(FULL_NAME) ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 10 INITRANS 11 MAXTRANS 255
    STORAGE (INITIAL 16K NEXT 256K MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 4 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N2 ON HR.PER_ALL_PEOPLE_F (UPPER(LAST_NAME) ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 10 INITRANS 11 MAXTRANS 255
    STORAGE (INITIAL 16K NEXT 256K MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 4 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N50 ON HR.PER_ALL_PEOPLE_F (LAST_NAME ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 0 INITRANS 16 MAXTRANS 255
    STORAGE (INITIAL 496K NEXT 160K MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 1 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N51 ON HR.PER_ALL_PEOPLE_F (EMPLOYEE_NUMBER ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 0 INITRANS 16 MAXTRANS 255
    STORAGE (INITIAL 328K NEXT 80K MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 1 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N52 ON HR.PER_ALL_PEOPLE_F (APPLICANT_NUMBER ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 0 INITRANS 16 MAXTRANS 255
    STORAGE (INITIAL 16K NEXT 8K MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 1 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N53 ON HR.PER_ALL_PEOPLE_F (NATIONAL_IDENTIFIER ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 0 INITRANS 16 MAXTRANS 255
    STORAGE (INITIAL 496K NEXT 160K MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 1 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N54 ON HR.PER_ALL_PEOPLE_F (FULL_NAME ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 0 INITRANS 16 MAXTRANS 255
    STORAGE (INITIAL 760K NEXT 240K MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 1 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N55 ON HR.PER_ALL_PEOPLE_F (PARTY_ID ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 10 INITRANS 11 MAXTRANS 255
    STORAGE (INITIAL 16K NEXT 4M MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 4 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N56 ON HR.PER_ALL_PEOPLE_F (NPW_NUMBER ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 10 INITRANS 11 MAXTRANS 255
    STORAGE (INITIAL 16K NEXT 4M MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 4 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N57 ON HR.PER_ALL_PEOPLE_F (UPPER(GLOBAL_NAME) ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 10 INITRANS 11 MAXTRANS 255
    STORAGE (INITIAL 16K NEXT 256K MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 4 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N58 ON HR.PER_ALL_PEOPLE_F (UPPER(LOCAL_NAME) ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 10 INITRANS 11 MAXTRANS 255
    STORAGE (INITIAL 16K NEXT 256K MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 4 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N59 ON HR.PER_ALL_PEOPLE_F (EMAIL_ADDRESS ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 10 INITRANS 11 MAXTRANS 255
    STORAGE (INITIAL 16K NEXT 4M MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 4 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N60 ON HR.PER_ALL_PEOPLE_F (GLOBAL_NAME ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 10 INITRANS 11 MAXTRANS 255
    STORAGE (INITIAL 16K NEXT 4M MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 4 BUFFER_POOL DEFAULT);

  • Java.lang.OutOfMemory error while retrieving data from a large table

    Hi,
    I am trying to fetch data using executeQuery() into a ResultSet from the database. But since the data in that table is large, I am receiving a java.lang.OutOfMemory error. To resolve that, I used setMaxRows() on my Statement object. This resolved the error, but I don't receive the entire data. If I call executeQuery() again, I receive the same data. I don't even know a filtering criterion by which I can filter the data for each executeQuery().
    How can I resolve this problem?
    Thanks in advance
    --Chaitanya

    Either use some criteria you develop related to one of the keys on the table, or use some sort of record-limiting method.
    Note that the method of limiting will vary depending on the database you are using. You will have to look at the documentation.
    For example, I am told this will work in MySQL to get 200 records starting after record 100:
    SELECT * FROM myTable ORDER BY whatever ASC LIMIT 100,200
    Because you are running out of memory I assume the table is large.
    I am not sure what impact the above will have on performance, because if the ORDER BY is not based on an index at the server level, all the records will be selected and sorted before the limit is applied.
    I would make sure you have an appropriate index.
    If you use the advanced search over the user forums with "resultset paging" (and possibly the database you are using), you should be able to get some ideas.
    I hope this makes sense to you.
    rykk
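    An alternative to OFFSET-style limits that rykk hints at ("resultset paging") is keyset paging, which resumes each chunk after the last key already seen and so stays index-friendly. A rough sketch, with hypothetical table and column names and MySQL LIMIT syntax (other databases use ROWNUM or FETCH FIRST):
    -- next chunk: bind :last_id to the largest id returned by the previous chunk
    SELECT *
      FROM myTable
     WHERE id > :last_id
     ORDER BY id
     LIMIT 200;
    With an index on id, each call reads only the next 200 rows instead of scanning past everything already fetched.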

  • Create a view that limits a large table, but also allows an outer join ?

    oracle 10.2.0.4
    CREATE TABLE MY_PAY_ITEMS
    ( EMP     VARCHAR2(8) NOT NULL
    , PAY_PRD VARCHAR2(8) NOT NULL
    , KEY1    VARCHAR2(8) NOT NULL
    , KEY2    VARCHAR2(8) NOT NULL
    , LN_ITEM VARCHAR2(4) NOT NULL
    , ITEM_AMT NUMBER(24,2) NOT NULL
    , FILLER  VARCHAR2(100) NOT NULL);
    INSERT INTO MY_PAY_ITEMS
    SELECT A.EMP
    , B.PAY_PRD
    , C.KEY1
    , D.KEY2
    , E.LN_ITEM 
    , F.ITEM_AMT
    , RPAD('x', 100) -- supply a value for FILLER (NOT NULL)
    FROM (SELECT TO_CHAR(ROWNUM, '00000000') "EMP" FROM DUAL  CONNECT BY LEVEL <= 50 ) A
    , (SELECT '2010-' || TO_CHAR(ROWNUM,'00') "PAY_PRD" FROM DUAL CONNECT BY LEVEL <= 52) B
    , (SELECT TO_CHAR(ROWNUM, '000') "KEY1" FROM DUAL CONNECT BY LEVEL <= 8) C
    , (SELECT TO_CHAR(ROWNUM, '000') "KEY2" FROM DUAL CONNECT BY LEVEL <= 5) D
    , (SELECT TO_CHAR(ROWNUM,'000') "LN_ITEM" FROM DUAL CONNECT BY LEVEL <= 20) E
    , (select round(DBMS_RANDOM.VALUE * 400,2)  "ITEM_AMT" from dual) F;
    CREATE UNIQUE INDEX MY_PAY_ITEMS ON MY_PAY_ITEMS (EMP, PAY_PRD, KEY1, KEY2, LN_ITEM);
    CREATE TABLE MY_ITEM_DISPLAY
    ( DISPLAY_CODE VARCHAR2(4) NOT NULL
    , SEQUENCE     NUMBER(2) NOT NULL
    , COLUMN_ITEM1 VARCHAR2(4) not null
    , COLUMN_ITEM2 VARCHAR2(4) not null
    , COLUMN_ITEM3 VARCHAR2(4) not null
    , COLUMN_ITEM4 VARCHAR2(4) not null);
    INSERT INTO MY_ITEM_DISPLAY VALUES ('01',10,'001','003','004','005');
    INSERT INTO MY_ITEM_DISPLAY VALUES ('01',20,'007','013','004','009');
    INSERT INTO MY_ITEM_DISPLAY VALUES ('01',30,'001','004','009','011');
    INSERT INTO MY_ITEM_DISPLAY VALUES ('01',40,'801','304','209','111');
    INSERT INTO MY_ITEM_DISPLAY VALUES ('02',10,'001','003','004','005');
    INSERT INTO MY_ITEM_DISPLAY VALUES ('02',20,'007','013','004','009');
    INSERT INTO MY_ITEM_DISPLAY VALUES ('02',30,'001','004','009','011');
    MY_PAY_ITEMS is a table that stores payslip line items.  It has a total size of 500,000,000 rows.
    EMP is the unique employee id. We have approx 200,000 employees (with approx 50,000 being active today).
    PAY_PRD is a weekly pointer (2010-01, 2010-02 ... 2010-52); we have data from 2004 and are adding a new pay period every week. 2010-01 is defined as the first Monday in 2010 to the first Sunday in 2010, etc.
    KEY1 is an internal key; it tracks the timeline within the pay period.
    KEY2 is a child of KEY1; it tracks the sequence of events within KEY1.
    LN_ITEM is the actual pay item that resulted from the event; on average a person generates 20 rows per event. Note that in this example everybody gets the same LN_ITEM values, but in practice it is 20 selected from 300.
    ITEM_AMT is the net pay for the line item.
    FILLER is an assortment of fields that are irrelevant to this question, but do act as a drag on any row loads.
    MY_ITEM_DISPLAY is a table that describes how certain screens should display items.  The screen itself is a 4 column grid, with the contents of the individual cells being defined as a lookup of LN_ITEMS to retrieve the relevant LN_AMT.
    We have an application that receives a DISPLAY_CODE and an EMP.  It automatically creates a sql statement along the lines of
    SELECT * FROM MY_VIEW WHERE DISPLAY_CODE = :1 AND EMP = :2
    and renders the output for the user.
    My challenge is that I need to rewrite MY_VIEW as follows:
    1) Select the relevant rows from MY_ITEM_DISPLAY where DISPLAY_CODE = :1
    2) Select all rows from MY_PAY_ITEMS that satisfy the criteria
       a) EMP = :2
       b) PAY_PRD = (most recent one for EMP as at sysdate, thus if they last got paid in 2010-04 , return 2010-04)
       c) KEY1 = (highest key1 within EMP and PAY_PRD)
       d) KEY2 = (highest key2 within EMP, PAY_PRD and KEY1)
    3) I then need to cross reference these to create a tabular output
    4) Finally I have to return a line of 0's where no LN_ITEMs exist ( DISPLAY_CODE 01, sequence 40 contains impossible values for this scenario)
    The query below does part of it (but not the PAY_PRD, KEY1, KEY2 part):
    select * from (
    SELECT A.DISPLAY_CODE
    , B.EMP
    , A.SEQUENCE
    , MAX(DECODE(B.LN_ITEM, A.COLUMN_ITEM1, B.ITEM_AMT, 0)) "COL1"
    , MAX(DECODE(B.LN_ITEM, A.COLUMN_ITEM2, B.ITEM_AMT, 0)) "COL2"
    , MAX(DECODE(B.LN_ITEM, A.COLUMN_ITEM3, B.ITEM_AMT, 0)) "COL3"
    , MAX(DECODE(B.LN_ITEM, A.COLUMN_ITEM4, B.ITEM_AMT, 0)) "COL4"
    FROM MY_ITEM_DISPLAY A, MY_PAY_ITEMS B
    WHERE B.PAY_PRD = '2010-03'
    GROUP BY A.DISPLAY_CODE, B.EMP, A.SEQUENCE)
    WHERE DISPLAY_CODE = '01'
    AND EMP = '0000011'
    ORDER BY SEQUENCE;
    My questions
    1) How do I do the PAY_PRD, KEY1, KEY2 constraint, can I use some form of ROW_NUMBER() OVER function ?
    2) How do I handle the fact that none of the 4 column LN_ITEMS may exist (see sequence 40: none of those line items can exist)... Ideally the above SQL should return
    01, 0000011, 10, <some number>, <some number>, <some number>, <some number>
    01, 0000011, 20, <some number>, <some number>, <some number>, <some number>
    01, 0000011, 30, <some number>, <some number>, <some number>, <some number>
    01, 0000011, 40, 0            , 0            , 0            , 0           
    I tried a UNION, but this prevented the view from eliminating the bulk of the MY_PAY_ITEMS rows, as it resolved ALL of MY_PAY_ITEMS instead of just retrieving rows for the one EMP passed to the view. The same seems to be true for any outer joins.

    Hi, if I understood you properly, you need:
    select nvl(q.display_code,lag(q.display_code) over (order by rownum)) display_code,
           nvl(q.emp,lag(q.emp) over (order by rownum)) emp,
           m.s,
           nvl(q.COL1,0) COL1,
           nvl(q.COL2,0) COL2,      
           nvl(q.COL3,0) COL3,
           nvl(q.COL4,0) COL4,
           nvl(PAY_PRD,lag(q.PAY_PRD) over (order by rownum)) PAY_PRD,
           nvl(KEY1,lag(q.KEY1) over (order by rownum)) KEY1,
           nvl(KEY2,lag(q.KEY2) over (order by rownum)) KEY2  
    from(
    select d.display_code,
           t.emp,
           d.sequence,
           max(DECODE(t.LN_ITEM, d.COLUMN_ITEM1, t.ITEM_AMT, 0)) keep (dense_rank first order by to_date(t.pay_prd,'yyyy-mm') desc ) "COL1",
           max(DECODE(t.LN_ITEM, d.COLUMN_ITEM2, t.ITEM_AMT, 0)) keep (dense_rank first order by to_date(t.pay_prd,'yyyy-mm') desc ) "COL2",
           max(DECODE(t.LN_ITEM, d.COLUMN_ITEM3, t.ITEM_AMT, 0)) keep (dense_rank first order by to_date(t.pay_prd,'yyyy-mm') desc ) "COL3",
           max(DECODE(t.LN_ITEM, d.COLUMN_ITEM4, t.ITEM_AMT, 0)) keep (dense_rank first order by to_date(t.pay_prd,'yyyy-mm') desc ) "COL4",
           max(t.PAY_PRD) PAY_PRD,
           max(t.key1) keep (dense_rank first order by to_date(t.pay_prd,'yyyy-mm') desc ) key1,
           max(t.key2) keep (dense_rank first order by to_date(t.pay_prd,'yyyy-mm') desc ) key2
      from MY_PAY_ITEMS t
      join MY_ITEM_DISPLAY d
        on d.display_code = '01'
    where t.emp = '00000011'
    group by d.display_code, t.emp, d.sequence
    ) q
    full outer join (select level*10 s from dual connect by level <= 4) m
    on m.s = q.sequence;
    Result:
    DISPLAY_CODE  EMP       S   COL1   COL2   COL3   COL4   PAY_PRD  KEY1  KEY2
    01            00000011  10  101.1  103.1  104.1  105.1  2010-03  008   005
    01            00000011  20  107.1  113.1  104.1  109.1  2010-03  008   005
    01            00000011  30  101.1  104.1  109.1  111.1  2010-03  008   005
    01            00000011  40  0      0      0      0      2010-03  008   005
    Ramin Hashimzade
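    For question 1 specifically, a DENSE_RANK() variant can isolate the latest (PAY_PRD, KEY1, KEY2) slice for the employee before any pivoting. An untested sketch against the poster's tables:
    SELECT *
      FROM (SELECT p.*,
                   DENSE_RANK() OVER (ORDER BY p.pay_prd DESC,
                                               p.key1 DESC,
                                               p.key2 DESC) dr
              FROM my_pay_items p
             WHERE p.emp = :emp)
     WHERE dr = 1;
    DENSE_RANK = 1 keeps every LN_ITEM row belonging to the most recent pay period/key combination, and that result can then be joined to MY_ITEM_DISPLAY for the 4-column grid. Note this relies on PAY_PRD sorting correctly as a zero-padded string.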

  • Fast data retrieval from large tables

    Hello,
    we have created a table in an Oracle 10g DB. On average, around 20-30 rows are inserted into the table per day, so in one month the record count for the table grows by around 800-900 rows,
    which comes to about 10,000 rows in one year. To speed up data retrieval from this bulk data we need suggestions. Various things that come to my mind are
    indexing the table / partitioning the table / materialized views, etc.,
    but I am not sure what exactly needs to be done here. Please suggest how smooth data retrieval can be achieved whatever the size of the table.
    Thanks,
    Sam

    Here is a simple look at your data progression:
    Year    Rows
    1       10,000
    5       50,000
    10      100,000
    50      500,000
    100     1,000,000
    You need to wait another 100 years to reach a million. And a million is not a big number when it comes to Oracle.
    So, as others have stated, what is the objective behind your question?
    Basically, the choice of index or partition mainly depends on how the data is going to be accessed. If I am going to access all the rows of a table every time, then there is no use for either feature; I would just prefer a full table scan (FTS), and in that situation I would be looking at how to optimize my FTS.
    So you need to set your objective clearly before picking a solution. Just because you have a hammer you can't go around banging the wall ;)
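    As a concrete illustration of matching the structure to the access pattern: if the reports filter on a selective predicate such as a date range, an ordinary index covers it, and no partitioning is needed at this scale. A sketch with hypothetical names:
    CREATE INDEX txn_created_ix ON txn_log (created_date);
    -- a selective range predicate can then use the index;
    -- an unfiltered report should simply full-scan the table
    SELECT *
      FROM txn_log
     WHERE created_date BETWEEN DATE '2010-01-01' AND DATE '2010-01-31';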

  • How to view data on E fact Table?

    Hi Experts,
    I am trying to view the contents of an E fact table (/BIC/E<InfoCube name>). What I have tried so far shows 0 entries since the data has been compressed. I tried SE16, for example, LISTSCHEMA, RSCUBE....
    Any Input Please?

    Hi,
    Can you tell me how many records were there before compression?
    Go to SE11 ---> /BIC/E<IC name> ---> number of entries.
    I think you did not perform the compression step; check the data in the F fact table once.
    Thanks & Regards,
    RaviChandra
    Edited by: Ravichandra.bi on Dec 21, 2011 6:12 PM

  • I need to view data from my user table in my UserForm

    Hi to all,
    I created a user table with fields (id, name, datem, time_from and time_to) and a form for this user table. This form has two edit boxes for the employer code and employer name, and a matrix box with columns (Date, id, name, time_from, time_to). In the columns I want to browse data from the user table, and I want to add data to this table from the form. Is it possible, and how can I do it? I'm writing in C#.
    Thanks for your answer.
    P.S. I'm sorry for my English.

    Hello Vit,
    The easier way to do it is to use a UDO (User Defined Object). You can check the documentation on how to create it using the B1 interface, or you can also do it via the DI API.
    In the following thread you can check how to do it using the DI API.
    SAP Business One SDK
    There's also a sample delivered when installing the SAP Business One SDK.
    HTH,
    Felipe

  • Fetching data from a large table

    hi
    I am trying to fetch data from a table with 100 million rows. There are five AND conditions in the WHERE clause. Of the five columns, two are covered by a composite non-unique index; the remaining three columns hold low-cardinality data. I just want to know what type of index is suggested for my query so that I can fetch the data immediately, i.e. should I go for a composite index on all five columns? Secondly, there is continuous insertion as well; if I create an index on all five columns, how will it affect my insertion? I am using 8i.
    Thanx
    Tarun

    This forum is for posting feedback about the OTN site.
    The best place to get an answer to your question is a Database forum, perhaps the PL/SQL forum.
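    For reference, the conventional answer to the question above is a composite index led by the most selective columns; whether to include all five depends on how much insert overhead is acceptable. A sketch with placeholder names:
    -- lead with the two selective columns; appending the low-cardinality
    -- columns lets the query be answered from the index alone, at the cost
    -- of extra index maintenance on every INSERT
    CREATE INDEX big_tab_ix5
        ON big_tab (sel_col1, sel_col2, low_card1, low_card2, low_card3);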

  • How to extract data from a large table!

    Hi all,
    I am working on a survey application using JSP, where users log in to the application and fill out a survey form. Users' responses are stored in a database table as numeric values ranging from 1-10. Now I want to create some standard reports based on these records. The problem I foresee is that every survey has almost 15,000 user records, so we have millions of records in the table. While creating reports, if I search the database for these millions of records and then do some calculations/computations on the values, this will turn into a mess. Please suggest the best approach to handle the situation.
    Thanks

    That's something for a database forum.
    But basically: use good indexes!
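    Beyond indexing, reports over millions of fixed survey responses usually benefit from precomputed aggregates, so each report reads a handful of summary rows instead of raw records. A hedged sketch, with a hypothetical schema:
    -- refreshed periodically (or maintained as a materialized view)
    CREATE TABLE survey_summary AS
    SELECT survey_id,
           question_id,
           COUNT(*)   AS responses,
           AVG(score) AS avg_score
      FROM survey_response
     GROUP BY survey_id, question_id;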

  • Best ways to view data, total records of an application table ie VBAK

    Hi all,
    What is the best way to view data of an application table in the source system?
    I know about SE16... but are there other ways to learn details, i.e. the total number of records and different field information about an
    application table, i.e. VBAK, in the source R/3 system?
    Also, using SE16, when I checked VBAK and clicked on "number of entries" it showed 0... however,
    when I checked directly from SQL*Plus I found about 5000 records there in VBAK. I am not sure why
    SE16 showed 0. Does anybody have any idea what I missed here?
    Thanks... will give points for your input.
    ak

    I tried "number of entries" in SE16 and it shows 0 entries without any selection criterion... I checked by putting in a relevant time range as well, but it still shows 0...
    As I said, when I checked VBAK separately by logging in to the database directly, I did find 5678 rows there.
    Please note that this is a new demo version... so I thought that I first needed to activate the table, which I did using transaction SE11. The VBAK table is now active, but SE16 still shows 0 entries....
    Can anybody please advise here?
    Thx
    ak

  • IS_Data Insight View Data Function Limitation

    Hi Experts,
    Is there any limitation on viewing the data of a table using the View Data function in the Data Insight module in Information Steward? I have come across a strange issue with this; the details are explained below.
    I am trying to perform data profiling on a table. As part of this, I imported the table into a Data Insight project. When I tried to view the data of the table using the View Data function, it showed blank, like (0 from 998987). I am able to see the data in the database and even in the DS Designer.
    Then I created a view on top of this table selecting all columns and tried to view data; again it showed blank. Then I removed some columns from the view and tried again; now it showed data. The table contains 150 columns; I used around 110 columns in the view.
    My question is: are there any limitations in Data Insight for viewing data apart from the 500-record limit? Does the View Data function consider the number of rows or the size of the data to display? If it considers these two, is there any option available in IS to control these two parameters, i.e., increase/decrease the size or number of rows?
    If anyone has come across this issue, could you please help with any solutions to fix it?
    Thanks,
    Ramakrishna Kamurthy

    Hello Rama,
    In IS 4.2 this limitation is actually documented.
    See IS_421_user_en.pdf, Related Information section, p. 44, which states:
    The software displays only 500 records when you view data from an SAP table.
    More details are available in section 2.5.10.2, "Limit of 500 records when viewing data from SAP tables":
    Views that contain SAP tables have the potential to be quite large, especially when they are joined with other SAP tables. The limit of 500 records when viewing data prevents your computer from hanging or never completing the task because the tables were too large.
    In addition to the 500 records limit, you can take steps to enhance performance in the following ways:
    ● Reduce the size of the file by mapping fields, join conditions, filters, and so on to limit the data in the table to information that you really need.
    ● Use SAP ABAP-supported functions in forming expressions in views. Using non-supported functions is allowed, but doing so may adversely affect performance.
    ● Use the View Data filter tools when you view and export data from SAP tables.
    With the 500 records limit for viewing SAP table data, there is a potential for no records showing up in the View Data window.
    This could happen, for example, when the view contains a child view, the child view contains one or more SAP tables, and a join is set up to join the entire data set.
    A message appears at the top of the View Data window that instructs you to export the data to an external source (text file, CSV, or Excel file) to view all of the records.
    I hope this is helpful.
    Mike

  • Performance during joining large tables

    Hi,
    I have to maintain a report which gets data from many large tables, as below. Currently it uses a join statement to join all 8 tables, causing very slow performance.
    SELECT
        into corresponding fields of table equip
        FROM caufv
                  join afih on afih~aufnr = caufv~aufnr
                  join iloa on iloa~iloan = afih~iloan
                  join iflos  on iflos~tplnr = iloa~tplnr
                  join iflotx on iflos~tplnr = iflotx~tplnr
                  join vbak on vbak~aufnr = caufv~aufnr
                  join equz on equz~equnr = afih~equnr
                  join equi on equi~equnr = equz~equnr
                  join vbap on vbak~vbeln = vbap~vbeln
        WHERE
    Please suggest another way; I'm a newbie in ABAP. I tried using FOR ALL ENTRIES IN but it did not work. I would appreciate it if you could leave me some sample lines of code.
    Thanks,

    Hi,
    I suggest you not use an inner join across so many (eight) tables, especially such huge ones. Instead, use FOR ALL ENTRIES wherever possible. But before using FOR ALL ENTRIES, check that the base table is not initial (an empty base table selects everything), and if it is not possible to avoid inner joins, try to minimise them; use inner joins between header and item tables.
    Hope this helps you solve your problem. Feel free to ask if you have any doubt.
    Regards,
    Vijay

  • Need to delete specific Months Data from SQL Server Table

    Greetings Everyone,
    So I have one table which contains 5 years of data. Now the business wants to keep just one year of data, plus data from the quarter months, i.e. (Jan, Mar, June, Sep and December). I need to do this in a stored procedure. How can I achieve this using a month lookup table?
    Thank you in advance
    R

    Hi Devin,
    In a production environment, you should be doubly cautious about the data. I have no idea why you're about to remove data that is just years old. In one of the applications I used to support, the data retention policy was to keep raw data for the latest month,
    and the older data would get rolled up as max, min, average and so on, and stored in another table. That's a good example of data retention.
    In your case I still suggest you keep the older data in another table. If the data size is so huge that it violates your storage threshold, rolling the data up and storing the aggregates would be a good option.
    Anyway, if you don't care about the older data, you can just delete it with code like the below.
    DELETE
    FROM yourTable
    WHERE YEAR(dateColumn) < YEAR(CURRENT_TIMESTAMP) OR (MONTH(dateColumn) not in (1,3,6,9,12) AND YEAR(dateColumn) = YEAR(CURRENT_TIMESTAMP))
    In some cases, removing data from a very large table with DELETE performs badly. TRUNCATE works faster and would be a better option; read more
    here. In your case, if necessary, you can reference the draft code below.
    SELECT * INTO tableTemp FROM yourTable WHERE YEAR(dateColumn) = YEAR(CURRENT_TIMESTAMP) AND MONTH(dateColumn) IN(1,3,6,9,12)
    TRUNCATE TABLE yourTable;
    INSERT INTO yourTable SELECT * FROM tableTemp
    As you mentioned, you need to do the deletion in Stored Procedure(SP). Can you post your table DDL with sample data and specify your requirement details so that I can help to compose your SP.
    If you have any question, feel free to let me know.
    Best regards,
    Eric Zhang
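    Since the requirement is a stored procedure, Eric's DELETE can be wrapped directly. A minimal SQL Server sketch under one reading of the requirement (keep the latest year in full, keep only quarter months for older data); dbo.yourTable, dateColumn and dbo.MonthLookup are hypothetical names:
    CREATE PROCEDURE dbo.PurgeOldData
    AS
    BEGIN
        SET NOCOUNT ON;
        DELETE FROM dbo.yourTable
        WHERE dateColumn < DATEADD(YEAR, -1, GETDATE())         -- older than one year
          AND MONTH(dateColumn) NOT IN (SELECT MonthNo
                                          FROM dbo.MonthLookup); -- quarter months to keep
    END;
    The lookup table here would hold the month numbers 1, 3, 6, 9 and 12; adjust the WHERE clause if the business actually means calendar-year boundaries rather than a rolling year.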

  • Updating a large table

    Hello,
    We need to update 2 columns in a very large table (20,000,000 records). Every row in the table is to be updated, and the client wants to be able to update the records by year. Below is the procedure that has been developed:
    DECLARE
    l_year VARCHAR2 (4) := '2008';
    CURSOR c_1 (l_year1 VARCHAR2)
    IS
    SELECT ROWID l_rowid, (SELECT tmp.new_code_x
    FROM new_mapping_code_x tmp
    WHERE tmp.old_code_x = l.code_x) code_x,
    (SELECT tmp.new_code_x
    FROM new_mapping_code_x tmp
    WHERE tmp.old_code_x = l.code_x_ori) code_x_ori
    FROM tableX l
    WHERE TO_CHAR (created_date, 'YYYY') = l_year1;
    TYPE typec1 IS TABLE OF c_1%ROWTYPE
    INDEX BY PLS_INTEGER;
    l_c1 typec1;
    BEGIN
    DBMS_OUTPUT.put_line ('Update start - '
    || TO_CHAR (SYSDATE, 'DD/MM/YYYY HH24:MI:SS'));
    OPEN c_1 (l_year);
    LOOP
    FETCH c_1
    BULK COLLECT INTO l_c1 LIMIT 100000;
    EXIT WHEN l_c1.COUNT = 0;
    FOR indx IN 1 .. l_c1.COUNT
    LOOP
    UPDATE tableX
    SET code_x = NVL (l_c1 (indx).code_x, code_x),
    code_x_ori =
    NVL (l_c1 (indx).code_x_ori, code_x_ori)
    WHERE ROWID = l_c1 (indx).l_rowid;
    END LOOP;
    COMMIT;
    END LOOP;
    CLOSE c_1;
    DBMS_OUTPUT.put_line ('Update end - '
    || TO_CHAR (SYSDATE, 'DD/MM/YYYY HH24:MI:SS'));
    END;
    We do not want to do a single update per year, as we fear the update might fail with, for example, a rollback segment error.
    It seems to me the procedure developed above is not the most efficient. Any comments on the above, or does anyone have a better solution?
    Thanks

    Everything is wrong with the sample code and the approach used. This is not how one uses Oracle. This is not how one designs performant and scalable code.
    Transactions must be consistent and logical. A commit in the middle of "doing something" is wrong. Period. (And no, the reasons for committing often and frequently in something like SQL Server do not, and never have, applied to Oracle.)
    Also, as I/O is the slowest and most expensive operation that one can perform in a database, it simply makes sense to reduce I/O as far as possible. This means not doing this:
    WHERE TO_CHAR (created_date, 'YYYY') = l_year1;
    Why? Because an index on created_date is now rendered utterly useless... and in this specific case will result in a full table scan.
    It means using the columns in their native data types. If the column is a date then use it as a date! E.g.
    where created_date between :startDate and :endDate
    The proper approach to this problem is to determine the most effective logical transaction that can be done, given the available resources (redo/undo/etc.).
    This could very likely be daily: dealing with and updating a single day's data at a time. So one then writes a procedure that updates a single day as a single transaction.
    One can also create a process log table - and have this procedure update this table with the day being updated, the time started, the time completed, and the number of rows updated.
    One now has a discrete business process that can be run. This allows one to run 10 or 30 or more of these processes at the same time using DBMS_JOB - thus doing the updates for a month using parallel processing.
    The process log table can be used to manage the entire update. It will also provide basic execution time details allowing one to estimate the average time for updating a day and the total time it will take for all the data in the large table to be updated.
    This is a structured approach. An approach that ensures the integrity of the data (all rows for a single day is treated as a single transaction). One that also provides management data that gives a clear picture of the state of the data in the large table.
    I'm a firm believer that if something is worth doing, it is worth doing well. Using a hacked approach of blindly updating data and committing ad hoc without any management and process controls... that is simply doing something very badly. It may be interesting running into a brick wall the first time around; subsequent encounters with the wall should, however, be avoided.
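    To make that concrete, a hedged PL/SQL sketch of the per-day procedure with process logging; tableX and new_mapping_code_x come from the thread, while update_process_log is an illustrative table you would create yourself:
    CREATE OR REPLACE PROCEDURE update_codes_for_day (p_day IN DATE)
    AS
      l_rows PLS_INTEGER;
    BEGIN
      UPDATE tableX l
         SET l.code_x     = NVL ((SELECT m.new_code_x
                                    FROM new_mapping_code_x m
                                   WHERE m.old_code_x = l.code_x), l.code_x),
             l.code_x_ori = NVL ((SELECT m.new_code_x
                                    FROM new_mapping_code_x m
                                   WHERE m.old_code_x = l.code_x_ori), l.code_x_ori)
       WHERE l.created_date >= p_day
         AND l.created_date <  p_day + 1;  -- native DATE range, so an index stays usable
      l_rows := SQL%ROWCOUNT;              -- rows touched by the UPDATE
      INSERT INTO update_process_log (day_updated, completed_at, rows_updated)
      VALUES (p_day, SYSDATE, l_rows);
      COMMIT;                              -- one day is the logical transaction
    END;
    /
    Each call is then a discrete unit of work that DBMS_JOB can run in parallel, one job per day, exactly as described above.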
