Archive table data
I am new to DBA work.
We have a table containing 227M records. We now want to keep only 24 months of data, so we need to archive data from this table and keep only the data from 2005 to 2007. The table has a partition for each month. What steps exactly do we need to follow? I would very much appreciate any ideas. Thanks.
One approach to what you want to do would be to create a second table to hold the archive data. For example, if your current table is called table_data then you could create a second table called table_data_archive. The second table must be created as an exact copy of the original table... same field names, key structure, etc. And of course you would want to partition the table, too.
Then execute a SQL query to copy the data you wish to archive from the original table into the second table. The query could be something like: INSERT INTO table_data_archive SELECT * FROM table_data WHERE date_field <= TO_DATE('2004-12-31', 'YYYY-MM-DD'). (Note that the cutoff must be a proper date literal; an unquoted 12/31/2004 would be evaluated as integer division, not a date.)
Let me know if you have any additional questions, and good luck!
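Putting that approach together as a sketch (the object names come from the example above; the cutoff date and its format are assumptions):

```sql
-- Empty structural copy of the original table. Note: CTAS does not copy
-- indexes, keys, or partitioning; those must be added separately.
CREATE TABLE table_data_archive AS
SELECT * FROM table_data WHERE 1 = 0;

-- Move the old rows across.
INSERT INTO table_data_archive
SELECT * FROM table_data
WHERE  date_field <= TO_DATE('2004-12-31', 'YYYY-MM-DD');

DELETE FROM table_data
WHERE  date_field <= TO_DATE('2004-12-31', 'YYYY-MM-DD');

COMMIT;
```

Since the original table is partitioned by month, dropping each fully archived partition (ALTER TABLE table_data DROP PARTITION ...) after the copy is far cheaper than the DELETE shown above.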
Similar Messages
-
Approach for archiving table data from an Oracle DB and loading it into an archive server
Hi,
I have a requirement to archive and purge old data in a data warehouse.
The archival strategy will select data from a list of tables and load it into files.
I also need to use SQL*Loader control files to load data from those files into the archival server.
I want to know which is the better approach for unloading the data into files:
should I use UTL_FILE, or spool the table data into files using SELECT statements?
Some of the tables also have CLOB columns.

I did something like this a couple of months ago. After some performance tests: the fastest way is to create the files through UTL_FILE in a procedure using bulk SQL. Another good idea is to create the files with Python, which operates on text files blazingly fast. (Use PL/SQL if you need to create the files on the server, but if you have to create files on a remote machine, use something else, such as Python.)
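A minimal sketch of the UTL_FILE-plus-bulk-SQL approach (the directory object ARCH_DIR, the table big_table, and its columns are assumptions, not objects from this thread):

```sql
DECLARE
  TYPE t_lines IS TABLE OF VARCHAR2(4000);
  l_lines t_lines;
  l_file  UTL_FILE.FILE_TYPE;
  CURSOR c_rows IS
    SELECT id || ',' || TO_CHAR(created_dt, 'YYYY-MM-DD')
    FROM   big_table;
BEGIN
  l_file := UTL_FILE.FOPEN('ARCH_DIR', 'big_table.csv', 'w', 32767);
  OPEN c_rows;
  LOOP
    -- Bulk fetch keeps SQL/PLSQL context switches to one per 1000 rows.
    FETCH c_rows BULK COLLECT INTO l_lines LIMIT 1000;
    FOR i IN 1 .. l_lines.COUNT LOOP
      UTL_FILE.PUT_LINE(l_file, l_lines(i));
    END LOOP;
    EXIT WHEN c_rows%NOTFOUND;
  END LOOP;
  CLOSE c_rows;
  UTL_FILE.FCLOSE(l_file);
END;
/
```

CLOB columns need separate handling (e.g. DBMS_LOB reads in chunks), since a VARCHAR2 line buffer caps each line at 4000 bytes.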
-
Best Solution for Archiving Table data
Hi All,
I have a table with a huge amount of data. It is not a partitioned table.
On average, about 10,000 records are inserted into this table per day. Now I want to manually archive (back up)
each year's data, keep it in a safe location, and then delete the archived rows
from the table. Whenever required, it should be easy to import the data back into this table. All of this happens through
the application.
One approach in my mind right now is transferring the data from the table to a comma-separated flat file,
and whenever required, importing it back into the table from the flat file using the external tables concept.
Can anybody suggest the best solution for this?
Thanks

The best solution would be partitioning.
Any other solution requires DML - running DELETE and INSERT transactions to remove a data set and to add a data set (if need be) again.
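With partitioning, removing a period of data becomes one DDL partition exchange; a hedged sketch (all object names below are hypothetical, and the staging table must match the partitioned table column for column):

```sql
-- Swap the contents of partition p_2004_12 with staging table sales_2004_12.
-- This is a data-dictionary update, not row movement, hence the sub-second time.
ALTER TABLE sales
  EXCHANGE PARTITION p_2004_12 WITH TABLE sales_2004_12
  INCLUDING INDEXES WITHOUT VALIDATION;
```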
With partitioning this is achieved (in sub-seconds) using DDL, by exchanging a partition's contents with that of a table. This means that after the archived data has been loaded (SQL*Loader, Import, etc.) into a table (and indexes created), that table (with its indexes) is "swapped" into the partitioned table as a partition. -
Need Suggestion for Archival of Table Data
Hi guys,
I want to archive one of my large tables. The structure of the table is given below.
Around 40,000 rows are inserted into the table daily.
I need suggestions for this. Will partitioning help, and on what basis?
CREATE TABLE IM_JMS_MESSAGES_CLOB_IN
(
  LOAN_NUMBER     VARCHAR2(10 BYTE),
  LOAN_XML        CLOB,
  LOAN_UPDATE_DT  TIMESTAMP(6),
  JMS_TIMESTAMP   TIMESTAMP(6),
  INSERT_DT       TIMESTAMP(6)
)
TABLESPACE DATA
PCTUSED  0
PCTFREE  10
INITRANS 1
MAXTRANS 255
STORAGE (
  INITIAL     1M
  NEXT        1M
  MINEXTENTS  1
  MAXEXTENTS  2147483645
  PCTINCREASE 0
  BUFFER_POOL DEFAULT
)
LOGGING
LOB (LOAN_XML) STORE AS
( TABLESPACE DATA
  ENABLE STORAGE IN ROW
  CHUNK 8192
  PCTVERSION 10
  NOCACHE
  STORAGE (
    INITIAL     1M
    NEXT        1M
    MINEXTENTS  1
    MAXEXTENTS  2147483645
    PCTINCREASE 0
    BUFFER_POOL DEFAULT
  )
)
NOCACHE
NOPARALLEL;
Do the needful.
regards,
Sandeep

There will not be any updates/deletes on the table.
I have created a partitioned table with the same structure, and I am inserting the records from my original table into this partitioned table, where I will maintain data for 6 months.
After loading the data from the original table into the archive table, I will truncate the original table.
If my original table is partitioned, then what about restoring the data? How will I restore the data of the last month? -
Can we delete the archived RS table data in the unix path?
Hi Experts
As you all know, when we perform RS table archiving we set up a logical file path 'ARCHIVE_GLOBAL_PATH' pointing to the physical path '/usr/sap/<SYSID>/SYS/global/<FILENAME>' in the unix system.
Here we have our RS table data in the unix path, and this logical path makes sure that there are no problems with request statuses.
Now our unix path has grown in size, hence we would like to delete some archived data from this path. Could you please confirm: if we delete the data from this unix path, will there be any problems with the request statuses?
Simply put: can we delete the archived RS table data from the unix file path?
Thanks!!

Hi All
Any idea about this would be appreciated.
Thanks!! -
How to use the FOR ALL ENTRIES clause while fetching data from archived tables
How do I use the FOR ALL ENTRIES clause while fetching data from archived tables using the FM
/PBS/SELECT_INTO_TABLE?
I need to fetch data from an archived table for all the entries in an internal table.
Kindly provide some inputs.
Thanks and Regards,
Ramesh

Hi Ramesh,
I have a query regarding accessing archived data through PBS.
I have archived SAP FI data (object FI_DOCUMNT) using the SAP standard process through TCODE SARA.
Now please tell me: can I access this archived data through the PBS add-on FM '/PBS/SELECT_INTO_TABLE'?
Do I need to do something else to access data archived through the SAP standard process or not? If yes, please tell me, as I am not able to get the data using the above FM.
The call to the above FM is as follows (the empty optional parameters are commented out, and the duplicated OTHERS exception has been removed):
CALL FUNCTION '/PBS/SELECT_INTO_TABLE'
  EXPORTING
    archiv     = 'CFI'
    option     = ''
    tabname    = 'BKPF'
    schl1_name = 'BELNR'
    schl1_von  = belnr-low
    schl1_bis  = belnr-low
    schl2_name = 'GJAHR'
    schl2_von  = gjahr-low
    schl2_bis  = gjahr-low
    schl3_name = 'BUKRS'
    schl3_von  = bukrs-low
    schl3_bis  = bukrs-low
*   schl4_name =
*   schl4_von  =
*   schl4_bis  =
    clr_itab   = 'X'
*   max_zahl   =
  TABLES
    i_tabelle  = t_bkpf
*   schl1_in   =
*   schl2_in   =
*   schl3_in   =
*   schl4_in   =
  EXCEPTIONS
    eof        = 1
    OTHERS     = 2.
It gives me the following error:
Index for table not supported ! BKPF BELNR.
Please help ASAP.
Thanks and Regards,
Gurpreet Singh -
Goldengate Extracts reads slow during Table Data Archiving and Index Rebuilding Operations.
We have configured OGG on a near-DR server. The extracts are configured to work in ALO Mode.
During the day, the extracts work as expected and are in sync. But during any daily maintenance task, the extracts start lagging and read the same archives very slowly.
This usually happens during Table Data Archiving (DELETE from prod tables, INSERT into history tables) and during Index Rebuilding on those tables.
Points to be noted:
1) The Tables on which Archiving is done and whose Indexes are rebuilt are not captured by GoldenGate Extract.
2) The extracts are configured to capture DML operations. Only INSERT and UPDATE operations are captured; DELETEs are ignored by the extracts. Also, DDL extraction is not configured.
3) There is no connection to PROD or DR Database
4) System functions normally all the time, but just during table data archiving and index rebuild it starts lagging.
Q 1. As mentioned above, even though the tables are not part of the capture, the extract lags. What are the possible reasons for the lag?
Q 2. I understand that an index rebuild is a DDL operation, yet it still induces lag in the system. How?
Q 3. We have been trying to find a way to overcome the lag, which ideally shouldn't have arisen. Is there any extract parameter or some workaround for this situation?

Hi Nick.W,
The amount of redo logs generated is huge. Approximately 200-250 GB in 45-60 minutes.
I agree that the extract has to parse the extra object IDs. During the day, there is a redo switch every 2-3 minutes. The source is a 3-node RAC, so approximately 80-90 archives are generated in an hour.
The reason to mention this was that while reading these archives too, the extract would be parsing extra object IDs, as we are capturing data for only 3 tables. The effect of parsing extra object IDs should have been seen during the day as well: the archive size is the same, the amount of data is the same, and the number of records to be scanned is the same.
The extract slows down and reads at half the speed. If it would normally take 45-50 secs to read an archive log from normal daytime operation, it takes approx 90-100 secs to read the archives from the mentioned activities.
Regarding the 3rd point,
a. The extract is a classic extract, the archived logs are on local file system. No ASM, NO SAN/NAS.
b. We have added "TRANLOGOPTIONS BUFSIZE" parameter in our extract. We'll update as soon as we see any kind of improvements. -
How to check the data of an archived table
I have archived a table created by me. I have executed the write program for the archiving object in SARA. Now how can I check the data of my archived table?
Hello Vinod,
One thing to check in the customizing settings is your "Place File in Storage System" option. If you have selected the option to Store before deleting, the archive file will not be available for selection within the delete job until the store job has completed successfully.
As for where your archive file will be stored, there are a number of things to check. The archive write job will place the archive file in whatever filesystem you have set up within the FILE transaction (/nFILE). There is a logical file path (for example, ARCHIVE_GLOBAL_PATH) where you "assign" the physical path (for example, UNIX: /sapmnt/<SYSID>/archivefiles). The logical path is associated with a logical file name (for example, ARCHIVE_DATA_FILE_WITH_ARCHIVE_LINK). This is the file name that is used within the customizing settings of the archive object.
Then the file will be stored using the content repository you defined within the customizing settings as well. Depending on what you are using to store your files (IXOS, IBM CommonStore, SAP Content Server, etc.), that is where the file will be stored.
Hope this helps.
Regards,
Karin Tillotson -
Archiving old data from a partitioned table
Hi,
While sifting through all the options for archiving the old data from a table which is also indexed, I came across a few methods which could be used:
1. Use a CTAS to create new tables in a different tablespace from the partitions of the existing table. Then swap these new tables with the data from the partitions, drop the partitions of the existing table (or truncate them, although I am not sure which one is the recommended method), take this tablespace offline and keep it archived. In case you require it in the future, swap the data from the archived tables back into the partitions. I am not sure if I got the method correctly.
2. Keep an export of all the partitions which need to be archived and keep that .dmp file on storage media. Once they are exported, truncate the corresponding partitions in the original table. If required in the future, import these partitions.
But I have one constraint on my DB: I cannot create a new archive tablespace to hold the tables containing the partitioned data, as I have only 1 tablespace allocated to my application on that DB, since multiple apps reside on it together. Kindly suggest which option is best suited for me now. Should I go with option 2?
Thanks in advance.

Hi,
Thanks a bunch for all your replies. Yeah, I am planning to go ahead with option 2. Below is the method I have decided upon. Kindly verify if my line of understanding is correct:
1. export the partition using the clause:
exp ID/passwd file=abc.dmp log=abc.log tables=schemaname.tablename:partition_name rows=yes indexes=yes
2. Then drop this partition on the original table using the statement:
ALTER TABLE tablename drop PARTITION partition_name UPDATE GLOBAL INDEXES;
If I now want to import this dump file into my original table again, I will first have to create this partition on my table using the statement:
3. ALTER TABLE tablename ADD PARTITION partition_name VALUES LESS THAN ( '<<>>' ) ;
4. Then import the data into that partition using:
imp ID/passwd FILE=abc.dmp log=xyz.log TABLES=schemaname.tablename:partition_name IGNORE=y
Now my query here is that this partitioned table has a global index associated with it. So once I create this partition and import the data into it, do I need to drop and recreate the global indexes on this table, or is there any other method to update the indexes? Kindly suggest.
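What I have in mind for verifying the index afterwards, as a sketch (my_global_idx is a placeholder name; whether a rebuild is needed depends on how the partition DDL maintained the index):

```sql
-- Any global index left UNUSABLE by the partition maintenance shows up here.
SELECT index_name, status
FROM   user_indexes
WHERE  status = 'UNUSABLE';

-- Rebuild only if the index is actually unusable; the UPDATE GLOBAL INDEXES
-- clause on the DROP/ADD PARTITION statements should otherwise keep it valid.
ALTER INDEX my_global_idx REBUILD;
```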
Thanks again!! :) -
Procedure for archiving the data
Hello Experts,
I need to write a procedure to clean up the database. Below is my requirement.
The user should be provided with the option of entering a table name, and should also be given the option to select whether he wants to delete all the data or archive the table.
--> The program should provide the list of tables that are related to the table name given by the user. (This is not required if the table is a staging table, as staging tables don't have constraints associated with them.)
--> If the user wants to archive, then the data in the table and its related tables should be archived (exported) into a flat file and then deleted from each table in sequence. Otherwise, we need to delete the data without archiving.
Can you please let me know the procedure for the above requirement? I am also not sure about archiving the data: if you don't know the table name and the columns, how can you define a cursor record to handle the record?
Can you please send me the complete code for the above requirement to [email protected]?
I appreciate help in this regard.
Thanks & Regards,
Sree.

> Can you please send me the complete code for the above requirement to [email protected]
> I appreciate help in this regard.
The goal of this forum is not to do your job, but to assist you with guidelines, references, and concepts on specific issues. If you want someone to code for you, then you should hire a programmer instead.
~ Madrid -
AFRU table data - taking a long time to access
Hi,
In my report I am accessing the AFRU table, but it takes a lot of time. What should I do? How can I archive the data from this table?
Regards

Hi Khushi,
Your query is simple but quite bad in terms of performance.
Change it to something like:
SELECT [...]
FROM afko
INNER JOIN afvc ON afko~aufpl = afvc~aufpl
INNER JOIN afru ON afvc~rueck = afru~rueck
WHERE afko~aufnr = [...]
For more information you can check OSS note 187906 Performance: Customer developments in PP and PM. -
Accrual Formula Archive Table Issue?
Hello Experts,
I customized PTO_PAYROLL_BALANCE_CALCULATION as per our business requirement. High level of the plan: it is based on the overtime an employee works per pay period; depending on the overtime worked, an employee accrues time and a half. For example, if the employee works 12 hours, he accrues 18 hours of benefit time (the formula calculates this fine). To meet the business requirement we included employees who are hired in the middle of the pay period; the issue arises when the archive process is run.
We are unable to populate the employee's period accrual (accrued in a particular pay period), and if we populate the period accrual, we are unable to process the period accrual for an employee terminated in the middle of the pay period. The requirement is to populate both in the archive table, i.e. to include both the period accrual and mid-pay-period terminated employees.
Below is the customized formula. Thanks much, and I appreciate your time and response in advance.
DEFAULT FOR ACP_START IS 'HD'
DEFAULT FOR ACP_INELIGIBILITY_PERIOD_TYPE IS 'CM'
DEFAULT FOR ACP_INELIGIBILITY_PERIOD_LENGTH IS 0
DEFAULT FOR ACP_CONTINUOUS_SERVICE_DATE IS '4712/12/31 00:00:00' (date)
DEFAULT FOR ACP_ENROLLMENT_END_DATE IS '4712/12/31 00:00:00' (date)
DEFAULT FOR ACP_TERMINATION_DATE IS '4712/12/31 00:00:00' (date)
DEFAULT FOR ACP_ENROLLMENT_START_DATE IS '4712/12/31 00:00:00' (date)
DEFAULT FOR ACP_SERVICE_START_DATE IS '4712/12/31 00:00:00' (date)
default for Accrual_Start_Date is '4712/12/31 00:00:00' (date)
default for Accrual_Latest_Balance is 0
INPUTS ARE
Calculation_Date (date),
Accrual_Start_Date (date),
Accrual_Latest_Balance
/* bug 4047666*/
prm_Accrual_Start_Date (date) = Accrual_Start_Date
prm_Calculation_Date (date) = Calculation_Date
/* bug 4047666*/
E = CALCULATE_PAYROLL_PERIODS()
For the payroll year that spans the Calculation Date
get the first days of the payroll year. If we have a latest balance,
we use the Accrual Start Date.
Calculation_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
Calculation_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
/**XXX CUSTOM **/
/*Calculation_Date = get_date('PAYROLL_PERIOD_END_DATE')*/
Payroll_Year_First_Valid_Date = GET_DATE('PAYROLL_YEAR_FIRST_VALID_DATE')
IF (Calculation_Date <> Calculation_Period_ED) AND
(Calculation_Period_SD > Payroll_Year_First_Valid_Date) THEN
E = GET_PAYROLL_PERIOD(ADD_DAYS(Calculation_Period_SD,-1))
Calculation_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
Calculation_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
ELSE IF (Calculation_Period_SD = Payroll_Year_First_Valid_Date) AND
(Calculation_Date <> Calculation_Period_ED) THEN
Calculation_Period_ED = ADD_DAYS(Calculation_Period_SD,-1)
Set the Calculation_Date to the Termination Date / Enrollment end date if not defaulted
IF NOT (ACP_TERMINATION_DATE WAS DEFAULTED) OR
NOT (ACP_ENROLLMENT_END_DATE WAS DEFAULTED) THEN
Early_End_Date = least(ACP_TERMINATION_DATE, ACP_ENROLLMENT_END_DATE)
IF (Early_End_Date < Calculation_Date) THEN
Calculation_Date = Early_End_Date
Get the last whole payroll period prior to the Calculation Date and ensure that it is within the
Payroll Year (if the Calculation Date is the End of a Period then use that period)
E = GET_PAYROLL_PERIOD(Calculation_Date)
Calculation_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
Calculation_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
/**XXX CUSTOM **/
/*Calculation_Date = get_date('PAYROLL_PERIOD_END_DATE')*/
IF (Calculation_Date <> Calculation_Period_ED) AND
(Calculation_Period_SD > Payroll_Year_First_Valid_Date) THEN
E = GET_PAYROLL_PERIOD(ADD_DAYS(Calculation_Period_SD,-1))
Calculation_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
Calculation_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
ELSE IF (Calculation_Period_SD = Payroll_Year_First_Valid_Date) AND
(Calculation_Date <> Calculation_Period_ED) THEN
Calculation_Period_ED = ADD_DAYS(Calculation_Period_SD,-1)
Set the Continuous Service Global Variable, whilst also
ensuring that the continuous service date is before the Calculation Period
IF (ACP_CONTINUOUS_SERVICE_DATE WAS DEFAULTED) THEN
E = set_date('CONTINUOUS_SERVICE_DATE', ACP_SERVICE_START_DATE)
ELSE IF(ACP_CONTINUOUS_SERVICE_DATE > Calculation_Period_SD) THEN
Total_Accrued_PTO = 0
E = PUT_MESSAGE('HR_52796_PTO_FML_CSD')
E = set_date('CONTINUOUS_SERVICE_DATE', ACP_CONTINUOUS_SERVICE_DATE)
ELSE
E = set_date('CONTINUOUS_SERVICE_DATE', ACP_CONTINUOUS_SERVICE_DATE)
Determine the Accrual Start Rule and modify the start date of the accrual calculation accordingly
N.B. In this calculation the Accrual Start Rule determines the date from which a person may first accrue
PTO. The Ineligibility Rule determines the period of time during which the PTO is not registered.
Once this date has passed the accrual is registered from the date determined by the Accrual Start Rule.
Continuous_Service_Date = get_date('CONTINUOUS_SERVICE_DATE')
IF (ACP_START = 'BOY') THEN
First_Eligible_To_Accrue_Date =
to_date('01/01/'||to_char(add_months(Continuous_Service_Date, 12), 'YYYY'),
'DD/MM/YYYY')
ELSE IF (ACP_START = 'PLUS_SIX_MONTHS') THEN
First_Eligible_To_Accrue_Date = add_months(Continuous_Service_Date,6)
ELSE IF (ACP_START = 'HD') THEN
First_Eligible_To_Accrue_Date = Continuous_Service_Date
Determine the date on which accrued PTo may first be registered, i.e the date on which the
Ineligibility Period expires
Accrual_Ineligibility_Expired_Date = First_Eligible_To_Accrue_Date
IF (ACP_START <> 'PLUS_SIX_MONTHS' AND
ACP_INELIGIBILITY_PERIOD_LENGTH > 0) THEN
IF ACP_INELIGIBILITY_PERIOD_TYPE = 'BM' THEN
Accrual_Ineligibility_Expired_Date = add_months(Continuous_Service_Date,
ACP_INELIGIBILITY_PERIOD_LENGTH*2)
ELSE IF ACP_INELIGIBILITY_PERIOD_TYPE = 'F' THEN
Accrual_Ineligibility_Expired_Date = add_days(Continuous_Service_Date,
ACP_INELIGIBILITY_PERIOD_LENGTH*14)
ELSE IF ACP_INELIGIBILITY_PERIOD_TYPE = 'CM' THEN
Accrual_Ineligibility_Expired_Date = add_months(Continuous_Service_Date,
ACP_INELIGIBILITY_PERIOD_LENGTH)
ELSE IF ACP_INELIGIBILITY_PERIOD_TYPE = 'LM' THEN
Accrual_Ineligibility_Expired_Date = add_days(Continuous_Service_Date,
ACP_INELIGIBILITY_PERIOD_LENGTH*28)
ELSE IF ACP_INELIGIBILITY_PERIOD_TYPE = 'Q' THEN
Accrual_Ineligibility_Expired_Date = add_months(Continuous_Service_Date,
ACP_INELIGIBILITY_PERIOD_LENGTH*3)
ELSE IF ACP_INELIGIBILITY_PERIOD_TYPE = 'SM' THEN
Accrual_Ineligibility_Expired_Date = add_months(Continuous_Service_Date,
ACP_INELIGIBILITY_PERIOD_LENGTH/2)
ELSE IF ACP_INELIGIBILITY_PERIOD_TYPE = 'SY' THEN
Accrual_Ineligibility_Expired_Date = add_months(Continuous_Service_Date,
ACP_INELIGIBILITY_PERIOD_LENGTH*6)
ELSE IF ACP_INELIGIBILITY_PERIOD_TYPE = 'W' THEN
Accrual_Ineligibility_Expired_Date = add_days(Continuous_Service_Date,
ACP_INELIGIBILITY_PERIOD_LENGTH*7)
ELSE IF ACP_INELIGIBILITY_PERIOD_TYPE = 'Y' THEN
Accrual_Ineligibility_Expired_Date = add_months(Continuous_Service_Date,
ACP_INELIGIBILITY_PERIOD_LENGTH*12)
IF Accrual_Ineligibility_Expired_Date > First_Eligible_To_Accrue_Date
AND Calculation_Date < Accrual_Ineligibility_Expired_Date THEN
First_Eligible_To_Accrue_Date = Accrual_Ineligibility_Expired_Date
If the employee is eligible to accrue before the start of this year,
we must get the period dates for the first period of the year.
Otherwise, we do not need these dates, as we will never accrue that
far back.
IF (not Accrual_Start_Date was defaulted) AND
((Calculation_Date < Accrual_Ineligibility_Expired_Date) OR
(Accrual_Start_Date > Accrual_Ineligibility_Expired_Date)) THEN
* This function checks for unprocessed plan element entries, and
* returns the EE effective start date of the earliest it finds. This may
* be useful if we amend the design to process a partial year starting at
* this date.
* At the moment, however, we simply recalculate for the entire plan term
* in these circumstances, so Adjusted_Start_Date is never used
Adjusted_Start_Date = Get_Start_Date(Accrual_Start_Date,
Payroll_Year_First_Valid_Date)
/* Check whether RESET_PTO_ACCRUAL action parameter is defined and set to Y */
/* If yes, then we need to calculate from the beginning */
Reset_Accruals = Reset_PTO_Accruals()
/* Check for retrospective Assignment changes */
/* Return earliest effective date */
Earliest_AsgUpd_Date = Get_Earliest_AsgChange_Date
( 'PTO Event Group',
add_days(Calculation_Period_SD,-1),
Calculation_Period_ED,
Accrual_Start_Date)
New_Adj_Start_Date = LEAST(Adjusted_Start_Date,
Earliest_AsgUpd_Date)
IF ((New_Adj_Start_Date < Accrual_Start_Date) OR
(Reset_Accruals = 'TRUE')) THEN
Process_Full_Term = 'Y'
ELSE
Process_Full_Term = 'N'
ELSE
Process_Full_Term = 'Y'
Latest_Balance = 0
IF (Process_Full_Term = 'Y') THEN
/* Ensure the Payroll Year Start Date gets reset if calculating */
/* from the beginning of the year. */
E = SET_DATE('PAYROLL_YEAR_SD', Payroll_Year_First_Valid_Date)
IF (Process_Full_Term = 'N') AND
(Accrual_Start_Date >= First_Eligible_To_Accrue_Date) THEN
E = GET_PAYROLL_PERIOD(Adjusted_Start_Date)
Payroll_Year_1st_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
Payroll_Year_1st_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
Latest_Balance = Accrual_Latest_Balance
Effective_Start_Date = Adjusted_Start_Date
) /* XXX Custom to include mid pay period hires*/
ELSE IF First_Eligible_To_Accrue_Date <= Payroll_Year_First_Valid_Date THEN
IF (not Accrual_Start_Date was defaulted) THEN
Latest_Balance = Accrual_Latest_Balance
ELSE
Latest_Balance = 0
E = GET_PAYROLL_PERIOD(Payroll_Year_First_Valid_Date)
Payroll_Year_1st_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
Payroll_Year_1st_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
Effective_Start_Date = Payroll_Year_First_Valid_Date
ELSE
Get the first full payroll period following the First_Eligible_To_Accrue_Date
(if it falls on the beginning of the period then use that period)
IF (not Accrual_Start_Date was defaulted) THEN
Latest_Balance = Accrual_Latest_Balance
ELSE
Latest_Balance = 0
E = GET_PAYROLL_PERIOD(First_Eligible_To_Accrue_Date )
First_Eligible_To_Accrue_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
First_Eligible_To_Accrue_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
/* IF First_Eligible_To_Accrue_Date <> First_Eligible_To_Accrue_Period_SD THEN
E = GET_PAYROLL_PERIOD(add_days(First_Eligible_To_Accrue_Period_ED,1))
First_Eligible_To_Accrue_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
First_Eligible_To_Accrue_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
IF (First_Eligible_To_Accrue_Period_SD > Calculation_Period_ED) THEN
Total_Accrued_PTO = 0
E = PUT_MESSAGE('HR_52793_PTO_FML_ASG_INELIG')
) */ /* XXX Custom to include mid pay period hires*/
Payroll_Year_1st_Period_SD = First_Eligible_To_Accrue_Period_SD
Payroll_Year_1st_Period_ED = First_Eligible_To_Accrue_Period_ED
Effective_Start_Date = First_Eligible_To_Accrue_Date
Effective_Start_Date = GREATEST(Effective_Start_Date, ACP_ENROLLMENT_START_DATE)
Output messages based on calculated date
IF (Early_End_Date < Payroll_Year_1st_Period_ED) THEN
Total_Accrued_PTO = 0
E = PUT_MESSAGE('HR_52794_PTO_FML_ASG_TER')
If (Calculation_Period_ED < Payroll_Year_1st_Period_ED) THEN
Total_Accrued_PTO = 0
E = PUT_MESSAGE('HR_52795_PTO_FML_CALC_DATE')
Determine the date on which PTO actually starts accruing based on Hire Date,
Continuous Service Date and plan Enrollment Start Date. Remember, we have
already determined whether to user hire date or CSD earlier in the formula.
If this date is after the 1st period and the first eligible date then
establish the first full payroll period after this date
(if the Actual Start Date falls on the beginning of a payroll period then
use this period)
Enrollment_Start_Date = ACP_ENROLLMENT_START_DATE
Actual_Accrual_Start_Date = GREATEST(Enrollment_Start_Date,
Continuous_Service_Date,
Payroll_Year_1st_Period_SD)
Determine the actual start of the accrual calculation
IF (Actual_Accrual_Start_Date > Payroll_Year_1st_Period_SD AND
Actual_Accrual_Start_Date > First_Eligible_To_Accrue_Date) THEN
E = GET_PAYROLL_PERIOD(Actual_Accrual_Start_Date)
Accrual_Start_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
Accrual_Start_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
IF Actual_Accrual_Start_Date > Accrual_Start_Period_SD THEN
( E = GET_PAYROLL_PERIOD(Actual_Accrual_Start_Date) /* XXX CUSTOM*/
Accrual_Start_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
Accrual_Start_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
E = GET_PAYROLL_PERIOD(add_days(Accrual_Start_Period_ED,1))
Accrual_Start_Period_SD = get_date('PAYROLL_PERIOD_START_DATE')
Accrual_Start_Period_ED = get_date('PAYROLL_PERIOD_END_DATE')
If the Actual Acrual Period is after the Calculation Period then end the processing.
IF (Accrual_Start_Period_SD > Calculation_Period_ED) THEN
Total_Accrued_PTO = 0
E = PUT_MESSAGE('HR_52797_PTO_FML_ACT_ACCRUAL')
ELSE IF (First_Eligible_To_Accrue_Date > Payroll_Year_1st_Period_SD) THEN
Accrual_Start_Period_SD = First_Eligible_To_Accrue_Period_SD
Accrual_Start_Period_ED = First_Eligible_To_Accrue_Period_ED
ELSE
Accrual_Start_Period_SD = Payroll_Year_1st_Period_SD
Accrual_Start_Period_ED = Payroll_Year_1st_Period_ED
Now set up the information that will be used in when looping
through the payroll periods
IF Calculation_Period_ED >= Accrual_Start_Period_ED THEN
E = set_date('PERIOD_SD',Accrual_Start_Period_SD)
E = set_date('PERIOD_ED',Accrual_Start_Period_ED)
E = set_date('LAST_PERIOD_SD',Calculation_Period_SD)
E = set_date('LAST_PERIOD_ED',Calculation_Period_ED)
IF (Process_Full_Term = 'N') THEN
E = set_number('TOTAL_ACCRUED_PTO', Latest_Balance)
ELSE
E = set_number('TOTAL_ACCRUED_PTO', 0)
Initialize Band Information
E = set_number('ANNUAL_RATE', 0)
E = set_number('UPPER_LIMIT', 0)
E = set_number('CEILING', 0)
E = LOOP_CONTROL('PTO_PAYROLL_PERIOD_ACCRUAL')
Total_Accrued_PTO = get_number('TOTAL_ACCRUED_PTO') - Latest_Balance
IF Accrual_Start_Period_SD <= Calculation_Period_SD THEN
Accrual_end_date = Calculation_Period_ED
IF Process_Full_Term = 'Y' AND
Effective_Start_Date > Actual_Accrual_Start_Date THEN
Effective_Start_Date = Actual_Accrual_Start_Date
Effective_End_Date = Calculation_Date
/* bug 4047666*/
IF Process_Full_Term = 'N' AND NOT (Accrual_Start_Date WAS DEFAULTED)
AND NOT (Accrual_Latest_Balance WAS DEFAULTED)
AND prm_Accrual_Start_Date > prm_Calculation_Date THEN
Effective_Start_Date = ADD_DAYS(Effective_End_Date,1)
ELSE
/* bug 4047666*/
IF Effective_Start_Date >= Effective_End_Date THEN
Effective_Start_Date = least(Effective_End_Date, Accrual_Start_Period_SD)
RETURN Total_Accrued_PTO, Effective_start_date, Effective_end_date, Accrual_end_date
Regards
Edited by: user13149420 on Sep 5, 2012 2:50 PM

Issue was in tcode OAC0: the content server path was incorrect.
-
Unable to store PDF, XL, Word documents in SAP archive tables
Hi Experts,
I have created a web interface in WD ABAP which stores an employee's attachment data into SAP ArchiveLink.
For that I have carried out the following activities.
I have completed customizing for a document type under business object PREL; for this I referred to the following link:
SAP ArchiveLink
Now I have used the file upload UI element in WD ABAP, which passes the local file data as an XSTRING to the following function modules for creating an attachment.
DATA: it_out     TYPE toadt,
      it_storage TYPE zdmsstorage.
CALL FUNCTION 'ARCHIV_CREATE_TABLE'
  EXPORTING
    ar_object  = 'HRPDATA'  " object category created for file storage under business object PREL
    object_id  = '10000008'
    sap_object = 'PREL'
    document   = filedata
  IMPORTING
    outdoc     = it_out
  EXCEPTIONS
    error_archiv             = 1
    error_communicationtable = 2
    error_connectiontable    = 3
    error_kernel             = 4
    error_parameter          = 5
    error_user_exit          = 6
    OTHERS                   = 7.
For reading the attached document I am using the following FMs:
CALL FUNCTION 'SCMS_AO_TABLE_GET'
  EXPORTING
    mandt   = sy-mandt
    arc_id  = 'Z1'
    doc_id  = lv_doc_type  " im_doc, e.g. '4D5D8445165220C8E10000000A3C082E'
    comp_id = 'data'
* IMPORTING
*   length  =
  TABLES
    data    = bindata.

* DATA: binary_tab TYPE ...
CALL FUNCTION 'SCMS_BINARY_TO_XSTRING'
  EXPORTING
    input_length = 10000  " note: a fixed length truncates files larger than 10000 bytes
*   first_line   = 0
*   last_line    = 0
  IMPORTING
    buffer       = v_xstring
  TABLES
    binary_tab   = bindata.
Now when I upload any text or image file, it works fine. When I try to upload a PDF, XL, or Word document, it uploads the file into the archive table, but when I read the file back by converting the file data into an XSTRING and passing it to the file download UI element, it says the file is corrupt. Please suggest whether this is an issue with the object category configuration (with the storage class) or a wrong way of reading a PDF, XL, or Word document.
Thanks in advance,
Abhay

Hi,
Please check whether the function module used can handle PDF as well as XL and Word documents; if not, use some other function module. -
What are the cleanup opportunities in terms of Temporary tables, Archive tables?
What are the cleanup opportunities in terms of temporary tables, archive tables, etc.?
Can you provide any scripts which will give storage by environment and by Oracle ID (to check size in terms of GB)?
Example:
=========
APPS : xxx GB

For archiving and purging, take a look at the documents below.
Reducing Your Oracle E-Business Suite Data Footprint using Archiving, Purging, and Information Lifecycle Management [ID 752322.1]
Edited by: Erman Arslan on 31.Ara.2012 03:35
Edited by: Erman Arslan on 31.Ara.2012 03:37 -
Oracle Database table data: load it into Excel
Hello All,
Please, I need your help with this problem:
I need to load Oracle database table data into Excel, saved in xls format.
Example: SELECT * FROM sales, and load the result into Excel.
I would appreciate any sample code to help me do that. Please help me out. This is very urgent.
Thanks a lot and best regards,
anbu

>
I need to load Oracle database table data into Excel, saved in xls format.
Example: SELECT * FROM sales, and load the result into Excel.
I would appreciate any sample code to help me do that. Please help me out. This is very urgent.
>
Nothing in these forums is 'urgent'. If you have an urgent problem you should contact Oracle support or hire a consultant.
You have proven over and over again that you are not a good steward of the forums. You continue to post questions that you say are 'urgent' but rarely take the time to mark your questions ANSWERED when they have been.
Total Questions: 90 (78 unresolved)
Are you willing to make a commitment to revisit your 78 unresolved questions and mark them ANSWERED if they have been?
The easiest way to export Oracle data to Excel is to use sql developer. It is a free download and this article by Jeff Smith shows how easy it is
http://www.thatjeffsmith.com/archive/2012/09/oracle-sql-developer-v3-2-1-now-available/
>
And One Last Thing
Speaking of export, sometimes I want to send data to Excel. And sometimes I want to send multiple objects to Excel - to a single Excel file, that is. In version 3.2.1 you can now do that. Let's export the bulk of the HR schema to Excel, with each table going to its own worksheet in the same workbook.
>
And you have previously been asked to read the FAQ at the top of the thread list. If you had done that, you would have seen that there is a FAQ entry with links to many ways, with code, to export data to Excel.
5. How do I read or write an Excel file?
SQL and PL/SQL FAQ
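If SQL Developer is not available, one alternative is spooling a CSV from SQL*Plus, which Excel opens directly; a sketch (table and column names below are placeholders):

```sql
-- Run in SQL*Plus; produces sales.csv in the current directory.
SET PAGESIZE 0 FEEDBACK OFF HEADING OFF TRIMSPOOL ON
SPOOL sales.csv
SELECT product_name || ',' || TO_CHAR(amount)
FROM   sales;
SPOOL OFF
```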