Too many table logs in DBTABLOG; RSTBPDEL is taking too much time
Hi Experts,
In one of our CRM systems, DBTABLOG is logging one table that now has 1 billion entries. The business doesn't want to switch off logging at this moment, but the table is growing rapidly, about 42 GB per month. The RSTBPDEL program has been running for weeks to delete entries, with no control over the growth.
Can you please suggest a way to delete them quickly at first, so that my housekeeping job can then run daily and finish soon?
Regards,
Mohan.
Hello Mohan,
The DBTABLOG table does get large; the best option is to switch off logging. If that's not possible, increase the frequency of your delete job. Also explore one more alternative: have a look at the archiving object BC_DBLOGS. You could archive old records (in accordance with your customer's data retention policies) to reduce the size of the table.
Also, have a look at the following notes, they will advise you on how to improve the performance of your delete job:
Note 531923 - Audit Trail: Indexes on table DBTABLOG
Note 579980 - Table logs: Performance during access to DBTABLOG
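Before tuning the delete job, it can help to see which logged table and which periods dominate DBTABLOG. A minimal sketch, assuming the standard DBTABLOG layout (the TABNAME and LOGDATE column names are assumed here; verify them on your release before running this at the database level):

```sql
-- Count DBTABLOG records per logged table and month (column names assumed;
-- LOGDATE is stored as a character date in YYYYMMDD form)
SELECT tabname,
       SUBSTR(logdate, 1, 6) AS log_month,
       COUNT(*)              AS entries
FROM   dbtablog
GROUP  BY tabname, SUBSTR(logdate, 1, 6)
ORDER  BY entries DESC;
```

The monthly breakdown also tells you which date ranges are safe to hand to RSTBPDEL or the BC_DBLOGS archiving run first.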
Regards,
Siddhesh
Similar Messages
-
Create procedure is generating too many archive logs
Hi
The following procedure was run on one of our databases and it hung, since too many archive logs were being generated.
What would be the answer? The db must remain in archivelog mode.
I understand the NOLOGGING concept, but as far as I know this applies to creating tables, views, indexes, and tablespaces. This script is creating a procedure.
CREATE OR REPLACE PROCEDURE APPS.Dfc_Payroll_Dw_Prc(Errbuf OUT VARCHAR2, Retcode OUT NUMBER
,P_GRE NUMBER
,P_SDATE VARCHAR2
,P_EDATE VARCHAR2
,P_ssn VARCHAR2
) IS
CURSOR MainCsr IS
SELECT DISTINCT
PPF.NATIONAL_IDENTIFIER SSN
,ppf.full_name FULL_NAME
,ppa.effective_date Pay_date
,ppa.DATE_EARNED period_end
,pet.ELEMENT_NAME
,SUM(TO_NUMBER(prv.result_value)) VALOR
,PET.ELEMENT_INFORMATION_CATEGORY
,PET.CLASSIFICATION_ID
,PET.ELEMENT_INFORMATION1
,pet.ELEMENT_TYPE_ID
,paa.tax_unit_id
,PAf.ASSIGNMENT_ID ASSG_ID
,paf.ORGANIZATION_ID
FROM
pay_element_classifications pec
, pay_element_types_f pet
, pay_input_values_f piv
, pay_run_result_values prv
, pay_run_results prr
, pay_assignment_actions paa
, pay_payroll_actions ppa
, APPS.pay_all_payrolls_f pap
,Per_Assignments_f paf
,per_people_f ppf
WHERE
ppa.effective_date BETWEEN TO_DATE(p_sdate) AND TO_DATE(p_edate)
AND ppa.payroll_id = pap.payroll_id
AND paa.tax_unit_id = NVL(p_GRE, paa.tax_unit_id)
AND ppa.payroll_action_id = paa.payroll_action_id
AND paa.action_status = 'C'
AND ppa.action_type IN ('Q', 'R', 'V', 'B', 'I')
AND ppa.action_status = 'C'
--AND PEC.CLASSIFICATION_NAME IN ('Earnings','Alien/Expat Earnings','Supplemental Earnings','Imputed Earnings','Non-payroll Payments')
AND paa.assignment_action_id = prr.assignment_action_id
AND prr.run_result_id = prv.run_result_id
AND prv.input_value_id = piv.input_value_id
AND piv.name = 'Pay Value'
AND piv.element_type_id = pet.element_type_id
AND pet.element_type_id = prr.element_type_id
AND pet.classification_id = pec.classification_id
AND pec.non_payments_flag = 'N'
AND prv.result_value <> '0'
--AND( PET.ELEMENT_INFORMATION_CATEGORY LIKE '%EARNINGS'
-- OR PET.element_type_id IN (1425, 1428, 1438, 1441, 1444, 1443) )
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN PET.EFFECTIVE_START_DATE AND PET.EFFECTIVE_END_DATE
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN PIV.EFFECTIVE_START_DATE AND PIV.EFFECTIVE_END_DATE --dcc
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN Pap.EFFECTIVE_START_DATE AND Pap.EFFECTIVE_END_DATE --dcc
AND paf.ASSIGNMENT_ID = paa.ASSIGNMENT_ID
AND ppf.NATIONAL_IDENTIFIER = NVL(p_ssn, ppf.NATIONAL_IDENTIFIER)
------------------------------------------------------------------TO get emp.
AND ppf.person_id = paf.person_id
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN ppf.EFFECTIVE_START_DATE AND ppf.EFFECTIVE_END_DATE
------------------------------------------------------------------TO get emp. ASSIGNMENT
--AND paf.assignment_status_type_id NOT IN (7,3)
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN paf.effective_start_date AND paf.effective_end_date
GROUP BY PPF.NATIONAL_IDENTIFIER
,ppf.full_name
,ppa.effective_date
,ppa.DATE_EARNED
,pet.ELEMENT_NAME
,PET.ELEMENT_INFORMATION_CATEGORY
,PET.CLASSIFICATION_ID
,PET.ELEMENT_INFORMATION1
,pet.ELEMENT_TYPE_ID
,paa.tax_unit_id
,PAF.ASSIGNMENT_ID
,paf.ORGANIZATION_ID;
BEGIN
DELETE cust.DFC_PAYROLL_DW
WHERE PAY_DATE BETWEEN TO_DATE(p_sdate) AND TO_DATE(p_edate)
AND tax_unit_id = NVL(p_GRE, tax_unit_id)
AND ssn = NVL(p_ssn, ssn);
COMMIT;
FOR V_REC IN MainCsr LOOP
INSERT INTO cust.DFC_PAYROLL_DW(SSN, FULL_NAME, PAY_DATE, PERIOD_END, ELEMENT_NAME, ELEMENT_INFORMATION_CATEGORY, CLASSIFICATION_ID, ELEMENT_INFORMATION1, VALOR, TAX_UNIT_ID, ASSG_ID,ELEMENT_TYPE_ID,ORGANIZATION_ID)
VALUES(V_REC.SSN,V_REC.FULL_NAME,v_rec.PAY_DATE,V_REC.PERIOD_END,V_REC.ELEMENT_NAME,V_REC.ELEMENT_INFORMATION_CATEGORY, V_REC.CLASSIFICATION_ID, V_REC.ELEMENT_INFORMATION1, V_REC.VALOR,V_REC.TAX_UNIT_ID,V_REC.ASSG_ID, v_rec.ELEMENT_TYPE_ID, v_rec.ORGANIZATION_ID);
COMMIT;
END LOOP;
END ;
So, how could I assist our developer with this, so that she can run it again without generating a ton of logs?
Thanks
Oracle 9.2.0.5
AIX 5.2
The amount of redo generated is a direct function of how much data is changing. If you insert 'x' number of rows, you are going to generate 'y' mbytes of redo. If your procedure is destined to insert 1000 rows, then it is destined to create a certain amount of redo. Period.
I would question the <i>performance</i> of the procedure shown: using a cursor loop with a commit after every row is going to be a slug on performance, but that doesn't change the fact that 'x' inserts will always generate 'y' redo. -
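To sketch that set-based alternative (assuming the cursor's SELECT is correct as written): the loop can be collapsed into a single INSERT ... SELECT, which commits once and avoids the per-row context switches. This does not reduce the redo volume, only the elapsed time:

```sql
-- One set-based statement instead of a row-by-row cursor loop.
-- The redo generated is the same; the per-row overhead disappears.
INSERT INTO cust.DFC_PAYROLL_DW
  (SSN, FULL_NAME, PAY_DATE, PERIOD_END, ELEMENT_NAME,
   ELEMENT_INFORMATION_CATEGORY, CLASSIFICATION_ID, ELEMENT_INFORMATION1,
   VALOR, TAX_UNIT_ID, ASSG_ID, ELEMENT_TYPE_ID, ORGANIZATION_ID)
SELECT ...   -- the same SELECT that currently defines the MainCsr cursor
;
COMMIT;
```

A single commit at the end also avoids the "fetch across commit" pattern the loop currently has.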
System I/O and Too Many Archive Logs
Hi all,
This is frustrating me. Our production database has suddenly begun to produce too many archived redo logs -- again. This happened before: two months ago our database was producing too many archive logs, and just then we began to get async I/O errors. We consulted a DBA and he restarted the database server, telling us it was caused by the system(???).
But after this restart the amount of archive logs decreased drastically. I was deleting the logs by hand (350 GB DB, 300 GB arch area), and after this the archive logs never exceeded 10% of the 300 GB archive area. Right now the logs are increasing by 1% (3 GB) every 7-8 minutes, which is far too much.
I checked from Enterprise Manager: the System I/O graph is continuous, and the details show processes like ARC0, ARC1, and LGWR (log file sequential read and db file parallel write are the most active events). Also, physical reads are very inconsistent and can exceed 30000 KB at times. The undo tablespace is full nearly all of the time, causing ORA-01555.
The above symptoms have all began today. The database is closed at 3:00 am to take offline backup and opened at 6:00 am everyday.
Nothing has changed on the database(9.2.0.8), applications(11.5.10.2) or OS(AIX 5.3).
What is the reason for this senseless behaviour? Please help me.
Thanks in advance.
Regards.
Burak
Hello Burak,
A high number of archive logs is being created because you may have massive redo generation on your database. Do you have an application that updates, deletes, or inserts into any kind of table?
What is written in the alert.log file?
Do you have the undo tablespace with the guarantee retention option, by the way?
Have you ever checked the log file switch frequency map?
Please use the SQL below to determine the switch frequency:
SELECT * FROM (
  SELECT * FROM (
    SELECT TO_CHAR(FIRST_TIME, 'DD/MM') AS "DAY"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '00', 1, 0)), '999') "00:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '01', 1, 0)), '999') "01:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '02', 1, 0)), '999') "02:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '03', 1, 0)), '999') "03:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '04', 1, 0)), '999') "04:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '05', 1, 0)), '999') "05:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '06', 1, 0)), '999') "06:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '07', 1, 0)), '999') "07:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '08', 1, 0)), '999') "08:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '09', 1, 0)), '999') "09:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '10', 1, 0)), '999') "10:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '11', 1, 0)), '999') "11:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '12', 1, 0)), '999') "12:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '13', 1, 0)), '999') "13:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '14', 1, 0)), '999') "14:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '15', 1, 0)), '999') "15:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '16', 1, 0)), '999') "16:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '17', 1, 0)), '999') "17:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '18', 1, 0)), '999') "18:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '19', 1, 0)), '999') "19:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '20', 1, 0)), '999') "20:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '21', 1, 0)), '999') "21:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '22', 1, 0)), '999') "22:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '23', 1, 0)), '999') "23:00"
    FROM V$LOG_HISTORY
    WHERE extract(year FROM FIRST_TIME) = extract(year FROM sysdate)
    GROUP BY TO_CHAR(FIRST_TIME, 'DD/MM')
  ) ORDER BY TO_DATE(extract(year FROM sysdate) || DAY, 'YYYY DD/MM') DESC
) WHERE ROWNUM < 8
Ogan -
Bex Query: Too many table names in the query The maximum allowable is 256
Hi Experts,
I need your help. I'm working on a query using a MultiProvider over 2 DataStores. I need to work with cells to assign specific account values to specific rows and columns, so I was creating a structure with elements from a hierarchy, but I get this error when I'm halfway through the structure:
"Too many table names in the query. The maximum allowable is 256.Incorrect syntax near ')'.Incorrect syntax near 'O1'."
Any idea what is happening? Is it possible to fix it? Do I need to ask for a modification of my InfoProviders? Someone told me it is possible to combine 2 queries, is that true?
Thanks a lot for your time and patience.
Hi,
The maximum allowable limit of 256 holds true. It is the maximum number of characteristics and key figures that can be used on the column side. While creating a structure, you create key figures (restricted or calculated), formulas, etc. The objects that you use to create these should not number more than 256.
http://help.sap.com/saphelp_nw70/helpdata/EN/4d/e2bebb41da1d42917100471b364efa/frameset.htm
I'm not sure if a combination of 2 queries is possible. You can use RRI, or have a workbook with 2 queries.
Hope it helps. -
Select from (too many) tables
Hi all,
I'm a proud Oracle Apex developer. We have developed an Interactive Report that is generated from many joined tables in a remote system. I've read that to improve performance we can do the following:
1) Create a temporary table on our system that stores the app_user id and the columns resulting from the query
2) Create a procedure that does:
declare
  param1 varchar2(4000) := :PXX_item;
  param2 varchar2(4000) := :PXY_item;
  param3 varchar2(4000) := v('APP_USER');
begin
  insert into <our_table>
  select param3, <query from remote system>;
  commit;
end;
3) Redirect to a query page where the IR reads from this temp table
On the "Exit" button there's a procedure that purges that user's data (delete from temp where user = v('APP_USER')), so the temp table is only filled with necessary data.
Do you see any inconvenience? The application will be used by about 500 users, with about 50 concurrent users at a time.
Thank you!
1) We don't have control over the source system, we can only perform queries on it
I was referring to a materialized view on the system where Apex is installed, not on the source database.
2) There are many tables involved
I don't understand why this is a problem. Too much data I can see, but too many tables... not so much.
3) Data has to be in real time, with no delay
This would be a problem for an MV or collections. The collections would store the data as of the initial query; any IRs using the collection after the fact would be using stale data. If you absolutely have to have the data as of right now every time, then the full query must run on the remote system every time. Tuning that query is the only option to make it faster.
4) There are many transactions on the source tables (they are the core of the source system) and so the MV could not be refreshed fast enough
It probably could be with fast refresh enabled, but not necessarily practical to do so. As I indicated in 3, you have painted yourself into a corner here. You have indicated a need for a real-time query, and that eliminates a number of possibilities for query-once use-many performance solutions. -
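For completeness, a fast-refreshable materialized view over a database link would look roughly like the sketch below. All object and link names here are made up for illustration; fast refresh across a link requires a materialized view log on the remote master table, which in this case would need cooperation from the source-system owners:

```sql
-- On the remote (source) system: a MV log, so fast refresh ships only deltas
CREATE MATERIALIZED VIEW LOG ON orders WITH PRIMARY KEY;

-- On the Apex database: the MV itself, refreshed on a short cycle
CREATE MATERIALIZED VIEW mv_orders
  REFRESH FAST
  START WITH SYSDATE NEXT SYSDATE + 1/1440   -- attempt a refresh every minute
AS
SELECT order_id, customer_id, amount
FROM   orders@remote_link;
```

Even a one-minute refresh cycle is still not "real time", which is why the trade-off discussed above remains.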
Too many tables with EXECUTE permision
Hi Gurus,
I found that my live database has too many tables with EXECUTE permission, but I don't know how this happened. My query is as follows:
select table_name
from dba_tab_privs
where owner='SYS' AND
privilege = 'EXECUTE' AND
grantee = 'PUBLIC'
result
TABLE_NAME
/598cc2d9_AWExceptionMessageRe
/24bd47b0_AWExceptionMessageRe
/b99e8561_AWExceptionMessageRe
/968869b8_AWExceptionMessageRe
/f8bf68b3_AWExceptionMessageRe
/9abd5a42_AWExceptionMessageRe
/5e83964b_AWExceptionMessageRe
/f01cb9e5_AWExceptionMessageRe
/380f765f_AWExpressCommandExce
/adef78c4_AWMemberExistsExcept
/5166f5c2_AWObjectExistsExcept
TABLE_NAME
oracle/AWXML/SparseDefinition
oracle/AWXML/ModelDimRef
/9d17934e_AWFunctionNotSupport
/d18d9de8_AWHandlerBaseTest
DBMS_AW_XML
INTERACTIONEXECUTE
CWM2_OLAP_INSTALLER
DBMS_XSOQ_ODBO
OLAPI_MDX_ROWSET_IMPL_T
OLAPI_MDX_ROWSET_TABLE
16444 rows selected.
==========================================================
Then I executed the above query in another database. The result was:
TABLE_NAME
STANDARD
UTL_HTTP
DBMS_PICKLER
DBMS_JAVA_TEST
UTL_FILE
UTL_RAW
UTL_TCP
UTL_INADDR
UTL_SMTP
DBMS_TRANSACTION
DBMS_SESSION
DBMS_DDL
DBMS_UTILITY
DBMS_SPACE
DBMS_ROWID
DBMS_PCLXUTIL
DBMS_APPLICATION_INFO
DBMS_OUTPUT
DBMS_DESCRIBE
DBMS_SQL
DBMS_EXPORT_EXTENSION
DBMS_JOB
DBMS_STATS
DBMS_ZHELP_IR
DBMS_PSP
DBMS_RULE
AQ$_AGENT
AQ$_DEQUEUE_HISTORY
AQ$_SUBSCRIBERS
AQ$_RECIPIENTS
AQ$_HISTORY
AQ$_NOTIFY_MSG
AQ$_DUMMY_T
DBMS_AQ_EXP_QUEUE_TABLES
DBMS_AQ_EXP_INDEX_TABLES
DBMS_AQ_EXP_TIMEMGR_TABLES
DBMS_AQ_EXP_HISTORY_TABLES
DBMS_AQ_EXP_SUBSCRIBER_TABLES
DBMS_AQ_EXP_QUEUES
DBMS_AQ_IMP_INTERNAL
DBMS_RMIN
DBMS_RESOURCE_MANAGER
DBMS_RESOURCE_MANAGER_PRIVS
DBMS_RMGR_PLAN_EXPORT
DBMS_RMGR_GROUP_EXPORT
DBMS_RMGR_PACT_EXPORT
LOW_GROUP
DEFAULT_CONSUMER_GROUP
DBMS_DEBUG_VC2COLL
DBMS_DEBUG
PBSDE
DBMS_SUMMARY
DBMS_SNAPSHOT
DBMS_REFRESH
DBMS_SNAPSHOT_UTL
DBMS_REFRESH_EXP_SITES
DBMS_REFRESH_EXP_LWM
DBMS_TRACE
DBMS_LOB
UTL_REF
UTL_COLL
ODCIPREDINFO
ODCIRIDLIST
ODCIINDEXCTX
ODCIARGDESCLIST
ODCIFUNCINFO
ODCISTATSOPTIONS
ODCICOLINFOLIST
ODCIOBJECT
ODCIOBJECTLIST
ODCIQUERYINFO
ODCICONST
SYSEVENT
DICTIONARY_OBJ_TYPE
DICTIONARY_OBJ_OWNER
DICTIONARY_OBJ_NAME
DATABASE_NAME
INSTANCE_NUM
LOGIN_USER
IS_SERVERERROR
SERVER_ERROR
DES_ENCRYPTED_PASSWORD
IS_ALTER_COLUMN
IS_DROP_COLUMN
GRANTEE
REVOKEE
PRIVILEGE_LIST
WITH_GRANT_OPTION
DICTIONARY_OBJ_OWNER_LIST
DICTIONARY_OBJ_NAME_LIST
IS_CREATING_NESTED_TABLE
CLIENT_IP_ADDRESS
DBMS_REPUTIL
DBMS_REPUTIL2
DBMS_OFFLINE_RGT
DBMS_REPCAT_RGT_EXP
DBMS_REPCAT_INSTANTIATE
DBMS_CRYPTO_TOOLKIT
DBMS_RANDOM
How does this happen? Need help from you all!
You asked why you have such grants in dba_tab_privs. Let's try to find DBMS_RANDOM (listed in your output) in $ORACLE_HOME/rdbms/admin:
cd $ORACLE_HOME/rdbms/admin
grep dbms_random *
dbmsrand.sql:CREATE OR REPLACE PACKAGE dbms_random AS
dbmsrand.sql: -- execute dbms_random.seed(12345678);
dbmsrand.sql: -- execute dbms_random.seed(TO_CHAR(SYSDATE,'MM-DD-YYYY HH24:MI:SS'));
dbmsrand.sql: -- my_random_number := dbms_random.random;
dbmsrand.sql: -- my_random_real := dbms_random.value;
dbmsrand.sql: -- select dbms_random.value from dual;
dbmsrand.sql: -- insert into a values (dbms_random.value);
dbmsrand.sql: -- execute :x := dbms_random.value;
dbmsrand.sql:END dbms_random;
dbmsrand.sql:CREATE OR REPLACE PACKAGE BODY dbms_random AS
dbmsrand.sql:END dbms_random;
dbmsrand.sql:CREATE OR REPLACE PUBLIC SYNONYM dbms_random FOR sys.dbms_random;
dbmsrand.sql:GRANT EXECUTE ON dbms_random TO public;
This is run by the catproc.sql script, which you run while creating the database. -
Hi,
I have a problem on a production server: our support partner told me that we are creating too many archive log files. They are not sure at all what is happening.
Please, do you know of any system view from which we could obtain more information about archive log usage?
Thanks in advance.
Best Regards,
Joan Padilla
Hi Joan,
The sensible number of archive logs is determined by your backup and recovery strategy. You can delete them once you have backed up the database and the archive logs. Further info is in the Backup and Recovery Concepts manual.
You can find information on them in v$archived_log.
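As a starting point, a query like the following (a sketch against v$archived_log; the BLOCKS and BLOCK_SIZE columns are assumed to be populated on your version) shows how much archive volume is produced per day, which is usually the first thing a support partner should have checked:

```sql
-- Daily archived-log count and volume in GB, most recent days first
SELECT TRUNC(completion_time) AS day,
       COUNT(*)               AS log_count,
       ROUND(SUM(blocks * block_size) / 1024 / 1024 / 1024, 2) AS gb
FROM   v$archived_log
GROUP  BY TRUNC(completion_time)
ORDER  BY day DESC;
```

Comparing the daily volume against your backup schedule tells you how much archive destination space you actually need.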
If you decide to delete them without having read the aforementioned manual, I will pray that you don't suffer a hard disk crash.
You may want to review the relationship with your support partner too.
Their advice is not exactly professional.
Sybrand Bakker
Senior Oracle DBA -
Will ANALYZE create too many redo logs?
DB : 10gR2
Hello Folks,
Does the analyze process create too many redo log files? Is there any other valid option other than NOLOGGING? I guess it's also going to create too many log switches, as it's part of DDL.
HUH? Redo log switches result from DML.
If you are obsessed with log switches, then you need to use LogMiner to observe the relative percentage of changed data that comes from DBMS_STATS compared to normal DML activity.
I seriously doubt that DBMS_STATS has any significant impact on redo log switches. -
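One cheap way to settle the question without LogMiner is to snapshot your own session's redo before and after a gather-statistics run. A sketch:

```sql
-- Redo generated by the current session so far, in bytes.
-- Run once, call DBMS_STATS, run again, and compare the two values.
SELECT n.name, m.value
FROM   v$mystat  m
       JOIN v$statname n ON n.statistic# = m.statistic#
WHERE  n.name = 'redo size';
```

The difference between the two readings is the redo your statistics run actually generated, which you can then compare against a normal DML workload.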
Hi,
I have to create a pdf with a table having multiple columns.
But the table has too many columns to fit on an A4 page.
I do not want to change the paper type. I have to use A4 paper only.
Is there a way in which I can show the table records in two consecutive rows,
so that I can split the columns into two rows.
There will be two header rows... and two data rows (only data rows will repeat)
data row1 will have data from say fields 1 to 10 and data row 2 will have data from fields 11 to 20.
Regards
Reema.
As far as I know there's probably no way ID is going to do what you want.
You can place a table across a multiple-page spread, but the odds of being able to do that and still keep the file printable are marginal, at best. You can't, for example, leave blank space at the gutters unless you are able to add a blank column that spans the gutter, and the limit for multipage spreads is 10 pages wide, which doesn't sound like it's enough to hold almost 600 columns.
Perhaps placing the table using named ranges of appropriate widths would work... -
Oracle Instanc has too many tables, way to subset?
Hello,
When I try to bring up an Oracle instance and do a query, there are so many tables that it takes 10 minutes for H. to bring up the list of tables.
Is there a way to subset by Owner like TOAD does, so that I'm not trying to bite off the whole thing at once?
Thanks very much!
BobK
Thanks for the idea. I use the filters all of the time to find tables of interest. Like if I'm looking for tables with product info, I'll put "%prod%", or something like that. But as you suggest, I guess there's no reason why I couldn't key in every one of my 100+ tables of interest so I don't see 300+ tables that are empty and unused.
One thing that worries me is that I would make this huge investment in time (keying into the filter) and I'd hit "clear filter" by accident one day.
I think I'll write a macro in AutoIt and enter the interesting tables in the filter using the (reusable) macro. Thanks again.
-- Dale -- -
Importing a table with a BLOB column is taking too long
I am importing a user schema from a 9i (9.2.0.6) database to a 10g (10.2.1.0) database. One of the large tables (millions of records) with a BLOB column is taking too long to import (more than 24 hours). I have tried all the tricks I know to speed up the import. Here are some of the settings:
1 - set buffer to 500 Mb
2 - pre-created the table and turned off logging
3 - set indexes=N
4 - set constraints=N
5 - I have 10 online redo logs with 200 MB each
6 - Even turned off logging at the database level with disablelogging = true
It is still taking too long loading the table with the BLOB column. The BLOB field contains PDF files.
For your info:
Computer: Sun v490 with 16 CPUs, solaris 10
memory: 10 Gigabytes
SGA: 4 Gigabytes
Legatti,
I have feedback=10000. However, by monitoring the import, I know that it's loading an average of 130 records per minute, which is very slow considering that the table contains close to two million records.
Thanks for your reply. -
401 Unauthorized after too many tables in AXL SQL Query
Hi...
I have an app that sends several AXL calls. All work fine with the exception of one accessing MGCP data via the AXL SQL QUERY command. I have found that if I only do a couple of tables it works fine, but if I add in more than 3 I get a 401 Unauthorized return. Now, I know the commands are built correctly because it's working for every other command in the set, and as I said, if I only do 3 or fewer tables the query works. Also, if I SSH into the CM locally and run sql with the full command, it returns all tables fine, which leads me to believe this is a restriction in the return of the AXL SOAP call...
Help?
I am using CM 6.0. The full query being passed in that returns 401 unauthorized is:
SELECT MGCP.pkid, MGCP.DomainName, TypeProduct.Name AS GatewayProduct, CallManagerGroup.Name AS CallManagerGroup, MGCPSlotConfig.Slot, TypeMGCPSlotModule.Name AS UnitModule, MGCPSlotConfig.Subunit AS SubUnitIndex, TypeMGCPVic.Name AS SubUnitProduct, TypeMGCPVic.MaxNumPorts, MGCP.VersionStamp, MGCP.SpecialLoadInformation FROM TypeMGCPVic RIGHT OUTER JOIN MGCPSlotConfig ON TypeMGCPVic.Enum = MGCPSlotConfig.tkMGCPVic LEFT OUTER JOIN CallManagerGroup RIGHT OUTER JOIN MGCP ON CallManagerGroup.pkid = MGCP.fkCallManagerGroup LEFT OUTER JOIN TypeProduct ON MGCP.tkProduct = TypeProduct.Enum ON MGCPSlotConfig.fkMGCP = MGCP.pkid LEFT OUTER JOIN TypeMGCPSlotModule ON MGCPSlotConfig.tkMGCPSlotModule = TypeMGCPSlotModule.Enum ORDER BY MGCP.DomainName, MGCPSlotConfig.Slot, UnitModule DESC, MGCPSlotConfig.Subunit"
Again, this works locally on the CM machine, and it works if I pull back some of the tables... The AXL trace logs do not have errors; they just stop (with a return SOAP call that isn't passed back because a 401 is received).
Thanks!
Here is the trace log. The return is there! It just never gets received.
2008-05-22 09:50:23,458 INFO [http-8443-Processor23] axl.AXLRouter - <?xml version="1.0" encoding="UTF-8"?>http://schemas.xmlsoap.org/soap/envelope/" SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">http://www.cisco.com/AXL/API/6.0" xmlns:xsi="http://www.cisco.com/AXL/API/6.0" sequence="1">67a5af3c-878b-8dd3-2865-ca3ca007d2c22811-Test-VG1Cisco 2811Sub-Pub0NM-4VWIC-MBRD0VWIC2-1MFT-T1E1-T111209133478-fb3741a3-bb70-4013-90fa-734bea80f77267a5af3c-878b-8dd3-2865-ca3ca007d2c22811-Test-VG1Cisco 2811Sub-Pub0NM-4VWIC-MBRD1VWIC2-1MFT-T1E1-T111209133478-fb3741a3-bb70-4013-90fa-734bea80f77267a5af3c-878b-8dd3-2865-ca3ca007d2c22811-Test-VG1Cisco 2811Sub-Pub0NM-4VWIC-MBRD2VIC2-4FXO41209133478-fb3741a3-bb70-4013-90fa-734bea80f77267a5af3c-878b-8dd3-2865-ca3ca007d2c22811-Test-VG1Cisco 2811Sub-Pub0NM-4VWIC-MBRD3VIC-4FXS41209133478-fb3741a3-bb70-4013-90fa-734bea80f77243187db4-aa08-2141-a7eb-c865f45c5996Router.atrion.internalCisco 2811Pub-Sub0NM-4VWIC-MBRD0VWIC-2MFT-T121207845753-f4497a98-4ee7-42e7-9770-090898a52ca9eb492b1e-f238-ace9-3ffe-0a7b8765f364test-3845Cisco 3845Pub-Sub0NM-4VWIC-MBRD0VIC2-2MFT-T1E1-E121210884106-fa04b172-efe5-4ca0-9e44-1934af3fa7c6eb492b1e-f238-ace9-3ffe-0a7b8765f364test-3845Cisco 3845Pub-Sub0NM-4VWIC-MBRD1VIC2-2MFT-T1E1-E121210884106-fa04b172-efe5-4ca0-9e44-1934af3fa7c6
2008-05-22 09:50:23,461 INFO [http-8443-Processor23] axl.AXLRouter - Request 1211393338927 was process in 30ms -
Too many archive logs getting generated on 11.1.0.7
I can see heavy archive generation on the PROD instance although there is not much activity on PROD.
This is a fresh instance which went live a month ago; archive logging was enabled a few weeks ago, but
I can see about 20 to 23 GB of archive generation daily although the database size is only around 90 GB.
I raised an SR; they told me the database is unable to purge statistics from the SYSAUX tablespace.
They asked me to run some queries and then to run this:
exec dbms_stats.purge_stats(sysdate - 50);
which ran for long hours and just exited because of insufficient space,
although the retention policy is 31 days:
SQL> select DBMS_STATS.GET_STATS_HISTORY_RETENTION from dual;
GET_STATS_HISTORY_RETENTION
31
History is available for more than 90 days:
SQL> select dbms_stats.get_stats_history_availability from dual;
GET_STATS_HISTORY_AVAILABILITY
01-APR-13 11.00.07.250483000 AM +05:30
I was asked to apply patch 12683802, which I applied on the DEV instance,
but I can still see many archives being generated although there is no activity on the DEV instance.
Now when I run this script little by little:
exec dbms_stats.purge_stats(sysdate - 50);
it purges, but it takes ages, and SYSAUX keeps filling up; the current DEV SYSAUX has 3 datafiles of around 4000 MB each.
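One way to keep each purge call small is to walk the history back one day at a time, oldest first, so each DBMS_STATS.PURGE_STATS call deletes a manageable slice instead of 40+ days at once. A minimal sketch (the 50..90 day offsets are illustrative; adjust them to your actual history depth and retention):

```sql
-- Purge optimizer statistics history one day at a time, oldest first,
-- so each call deletes a manageable slice (offsets are illustrative).
BEGIN
  FOR i IN REVERSE 50 .. 90 LOOP      -- i runs 90, 89, ..., 50
    DBMS_STATS.PURGE_STATS(SYSDATE - i);
  END LOOP;
END;
/
```

Smaller slices also reduce the undo and redo each individual call generates, which is exactly the symptom being chased here.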
-- Before applying the patch
SQL> select trunc(first_time) on_date,
2 thread# thread,
3 min(sequence#) min_sequence,
4 max(sequence#) max_sequence,
5 max(sequence#) - min(sequence#) nos_archives,
6 (max(sequence#) - min(sequence#)) * log_avg_mb req_space_mb
7 from v$log_history,
8 (select avg(bytes/1024/1024) log_avg_mb
9 from v$log)
10 group by trunc(first_time), thread#, log_avg_mb
11 order by on_date
12 /
ON_DATE THREAD MIN_SEQUENCE MAX_SEQUENCE NOS_ARCHIVES REQ_SPACE_MB
24-JUN-13 1 1 3 2 2000
25-JUN-13 1 4 17 13 13000
26-JUN-13 1 18 30 12 12000
27-JUN-13 1 31 43 12 12000
28-JUN-13 1 44 51 7 7000
29-JUN-13 1 52 64 12 12000
30-JUN-13 1 65 77 12 12000
01-JUL-13 1 78 88 10 10000
-- After applying the patch
ON_DATE THREAD MIN_SEQUENCE MAX_SEQUENCE NOS_ARCHIVES REQ_SPACE_MB
21-JUN-13 1 1 5 4 4000
22-JUN-13 1 6 20 14 14000
23-JUN-13 1 21 35 14 14000
24-JUN-13 1 36 85 49 49000
25-JUN-13 1 86 111 25 25000
26-JUN-13 1 112 127 15 15000
27-JUN-13 1 128 134 6 6000
28-JUN-13 1 135 143 8 8000
29-JUN-13 1 144 151 7 7000
30-JUN-13 1 152 158 6 6000
01-JUL-13 1 159 163 4 4000
The above results, before and after, are taken from TEST and DEV, which are cloned from the PROD instance; only DEV has the patch applied.
here are env details
EBS:21.1.3
Database:11.1.0.7
OS:RHEL 5.6
I am still not satisfied and wanted to know if any one of you has a solution for this.
Please help.
Zavi
Hi Amogh,
As advised by support, I ran LogMiner as well; here is the output from it.
I followed note ID 1504755.1.
------------------------------log miner output for archs of 02.07.13 on PROD------------------------
-- following logs
Jul 2 10:59 archive_PROD_1_1446_807549584.arc
Jul 2 11:05 archive_PROD_1_1447_807549584.arc
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1446_807549584.arc', -
OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1447_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
SQL> select operation,seg_owner,seg_name,count(*) from v$logmnr_contents group by seg_owner,seg_name,operation;
OPERATION    SEG_OWNER  SEG_NAME                        COUNT(*)
INSERT       APPLSYS    FND_LOGINS                            47
UPDATE       PO         RCV_TRANSACTIONS_INTERFACE             2
INSERT       PO         RCV_TRANSACTIONS                       3
UNSUPPORTED  INV        MTL_SUPPLY                             6
DELETE       PO         RCV_SUPPLY                             3
UPDATE       CSI        CSI_ITEM_INSTANCES                     3
UPDATE       APPLSYS    FND_CONC_RELEASE_CLASSES              17
INSERT       JA         JAI_RTP_POPULATE_T                     3
INSERT       GL         GL_CODE_COMBINATIONS                   1
UNSUPPORTED  JA         JAI_AP_TDS_INV_TAXES                   7
UNSUPPORTED  AP         AP_INVOICE_LINES_ALL                   8
DELETE       ZX         ZX_TRX_HEADERS_GT                      4
INSERT       INV        MTL_ITEM_CATEGORIES                    3
INSERT       XLA        XLA_AE_HEADERS_GT                      3
UPDATE       XLA        XLA_AE_HEADERS_GT                      3
UPDATE       ENI        DR$ENI_DEN_HRCHY_PAR_IM1$R             4
UPDATE       APPLSYS    FND_USER_DESKTOP_OBJECTS              10
INSERT       APPLSYS    FND_APPL_SESSIONS                      3
UPDATE       XLA        XLA_TRANSFER_LOGS                      2
INSERT       GL         GL_JE_HEADERS                          2
DELETE       GL         GL_INTERFACE_CONTROL                   1
DELETE       GL         GL_INTERFACE                           3
INSERT       CE         CE_SECURITY_PROFILES_GT                1
INSERT       PA         PA_PROJECTS_FOR_ACCUM                  8
INSERT       PA         PA_PJM_REQ_COMMITMENTS_TMP          1162
DELETE       PA         PA_PROJECT_ACCUM_COMMITMENTS         162
UPDATE       PA         PA_TXN_ACCUM                        1749
UPDATE       PA         PA_RESOURCE_LIST_ASSIGNMENTS          13
INSERT       ENI        ENI_OLTP_ITEM_STAR                     1
START                                                        561
COMMIT                                                       934
INSERT       ICX        ICX_SESSION_ATTRIBUTES                45
INSERT       INV        MTL_SUPPLY                             8
UPDATE       INV        MTL_MATERIAL_TRANSACTIONS_TEMP        11
INSERT       CSI        CSI_TRANSACTIONS                       3
INSERT       CSI        CSI_I_VERSION_LABELS                   1
INSERT       CSI        CSI_I_VERSION_LABELS_H                 1
INSERT       JA         JAI_RTP_TRANS_T                        3
INSERT       JA         JAI_AP_INVOICE_LINES                   1
UNSUPPORTED  AP         AP_INVOICE_DISTRIBUTIONS_ALL           9
DELETE       BOM        BOM_RESOURCE_CHANGES                   2
ROLLBACK                                                       3
INSERT       XLA        XLA_TRANSACTION_ENTITIES,AP            2
INSERT       ENI        MLOG$_ENI_OLTP_ITEM_STAR               7
UNSUPPORTED  XLA        XLA_TRANSACTION_ENTITIES,AP            4
INSERT       XLA        XLA_DISTRIBUTION_LINKS,AP              6
UPDATE       QA         QA_CHARS                               1
UPDATE       MRP        MRP_SCHEDULE_DATES                     7
UPDATE       ENI        ENI_DENORM_HIERARCHIES                 4
UNSUPPORTED  SYS        SEG$                                   1
INSERT       INV        MTL_TRANSACTION_ACCOUNTS               4
UPDATE       APPLSYS    FND_USER_PREFERENCES                   5
DELETE       SYS        WRI$_OPTSTAT_HISTHEAD_HISTORY     235644
INSERT       BNE        BNE_DOC_USER_PARAMS                    1
INSERT       GL         GL_INTERFACE                           6
INSERT       GL         GL_JE_LINES                            4
INSERT       GL         GL_JE_SEGMENT_VALUES                   2
INSERT       GL         GL_IMPORT_REFERENCES                   4
UPDATE       PA         PA_MAPPABLE_TXNS_TMP                   3
UPDATE       PA         PA_PROJECT_ACCUM_COMMITMENTS          95
INSERT       INV        MLOG$_MTL_SYSTEM_ITEMS_B               1
INSERT       INV        MTL_SYSTEM_ITEMS_TL                    1
UPDATE       APPLSYS    FND_CONFLICTS_DOMAIN                6040
INSERT       APPLSYS    MO_GLOB_ORG_ACCESS_TMP                97
UPDATE       APPLSYS    FND_CONCURRENT_QUEUES                117
UNSUPPORTED  JA         JAI_RCV_LINES                          3
INSERT       INV        MLOG$_MTL_MATERIAL_TRANSAC            21
INSERT       CSI        CSI_ITEM_INSTANCES                     1
INSERT       JA         JAI_AP_TDS_INV_TAXES                   2
INSERT       BOM        BOM_RES_INSTANCE_CHANGES               2
INSERT       PA         PA_TXN_INTERFACE_AUDIT_ALL             4
INSERT       PA         PA_EXPENDITURE_COMMENTS                2
DELETE       PA         PA_TRANSACTION_XFACE_CTRL_ALL          1
INSERT       AP         AP_LINE_TEMP_GT                        3
UPDATE       ENI        ENI_OLTP_ITEM_STAR                     3
INSERT       XLA        XLA_EVENTS_GT                          3
UPDATE       XLA        XLA_EVENTS_GT                          3
UPDATE       XLA        XLA_AE_HEADERS,AP                      8
DELETE       XLA        XLA_VALIDATION_LINES_GT                2
INSERT       INV        MTL_TXN_COST_DET_INTERFACE             2
UPDATE       INV        MTL_CST_TXN_COST_DETAILS               2
DELETE       INV        MTL_TXN_COST_DET_INTERFACE             2
UNSUPPORTED  SYS        HISTGRM$                              16
UPDATE       INV        MTL_MATERIAL_TRANSACTIONS              6
INSERT       XLA        XLA_EVENTS,CST                         3
DELETE       BNE        BNE_DOC_ACTIONS                        1
INSERT       GL         GL_INTERFACE_CONTROL                   2
INSERT       XLA        XLA_TB_WORK_UNITS                      1
UPDATE       ZX         ZX_TRX_HEADERS_GT                     67
DDL          SYS        SYS_TEMP_0FD9D6611_EC264F91            1
DELETE       PA         PA_TXN_ACCUM_DETAILS                1327
UNSUPPORTED  PA         PA_TXN_ACCUM                         523
INSERT       PA         PA_MAPPABLE_TXNS_TMP                   2
DELETE       PA         PA_RESOURCE_LIST_PARENTS_TMP           2
DELETE       PA         PA_PROJECTS_FOR_ACCUM                 17
UPDATE       SYS        SEQ$                                 224
DELETE       APPLSYS    WF_DEFERRED                            5
UPDATE       APPLSYS    FND_CONC_PROG_ONSITE_INFO             51
INSERT       APPLSYS    FND_CONCURRENT_REQUESTS               46
INSERT       PO         PO_SESSION_GT                          7
INSERT       INV        MTL_MATERIAL_TRANSACTIONS_TEMP         5
INSERT       INV        MTL_ONHAND_QUANTITIES_DETAIL           3
UPDATE       SYS        SYS_FBA_BARRIERSCN                     2
UPDATE       MRP        MRP_MESSAGES_TMP                      29
DELETE       MRP        MRP_MESSAGES_TMP                      29
UPDATE       AP         AP_INVOICES_ALL                       15
UPDATE       JA         JAI_AP_TDS_INV_TAXES                  43
INSERT       BOM        MLOG$_BOM_RESOURCE_CHANGES             4
UNSUPPORTED  PA         PA_TRANSACTION_INTERFACE_ALL           2
INSERT       PA         PA_EXPENDITURE_GROUPS_ALL              1
INSERT       PA         PA_EXPENDITURE_ITEMS_ALL               2
INSERT       AP         AP_INVOICE_LINES_ALL                   3
INSERT       APPLSYS    FND_LOG_MESSAGES                      29
INSERT       ZX         ZX_ITM_DISTRIBUTIONS_GT               71
UNSUPPORTED  ZX         ZX_TRX_HEADERS_GT                      5
INSERT       XLA        XLA_EVENTS,AP                          3
UPDATE       AP         AP_PREPAY_HISTORY_ALL                  3
UPDATE       AP         AP_PREPAY_APP_DISTS                    1
INSERT       INV        MLOG$_MTL_ITEM_CATEGORIES              3
INSERT       XLA        XLA_AE_LINES,AP                        6
DELETE       XLA        XLA_AE_HEADERS_GT                      1
UPDATE       SYS        HIST_HEAD$                            76
INSERT       SYS        WRI$_OPTSTAT_IND_HISTORY              10
UNSUPPORTED  ENI        DR$ENI_DEN_HRCHY_PAR_IM1$R             3
UPDATE       INV        MTL_CST_ACTUAL_COST_DETAILS            3
INSERT       XLA        XLA_TRANSACTION_ENTITIES,CST           3
INSERT       ICX        ICX_TRANSACTIONS                       1
UPDATE       GL         GL_INTERFACE                           4
UPDATE       PO         PO_REQ_DISTRIBUTIONS_ALL              66
UNSUPPORTED  PO         PO_REQUISITION_HEADERS_ALL             1
UNSUPPORTED  PO         PO_REQUISITION_LINES_ALL              66
DELETE       PA         PA_COMMITMENT_TXNS                  1461
UNSUPPORTED  PA         PA_MAPPABLE_TXNS_TMP                   3
INSERT       PA         PA_PROJECT_ACCUM_COMMITMENTS         197
INSERT       BOM        CST_ITEM_COSTS                         1
INSERT       JA         JAI_RCV_TRANSACTIONS                   3
UPDATE       PO         RCV_SUPPLY                             3
UPDATE       INV        MTL_SUPPLY                            11
DELETE       INV        MTL_SUPPLY                            11
UPDATE       PO         PO_DISTRIBUTIONS_ALL                   3
UPDATE       PO         PO_LINE_LOCATIONS_ALL                  9
INSERT       INV        MLOG$_MTL_ONHAND_QUANTITIE             3
DELETE       PO         RCV_TRANSACTIONS_INTERFACE             3
INSERT       MRP        MRP_MESSAGES_TMP                      22
INSERT       AP         AP_INVOICE_DISTRIBUTIONS_ALL           3
UPDATE       AP         AP_INVOICE_DISTRIBUTIONS_ALL          30
INSERT       PA         PA_EXPENDITURES_ALL                    1
INSERT       ZX         ZX_TRX_HEADERS_GT                      8
UNSUPPORTED  ZX         ZX_LINES_DET_FACTORS                 566
DELETE       AP         AP_LINE_TEMP_GT                        9
UPDATE       AP         AP_PAYMENT_SCHEDULES_ALL               5
UPDATE       ICX        ICX_SESSIONS                          12
UNSUPPORTED  AP         AP_INVOICES_ALL                        3
INSERT       JA         JAI_RCV_JOURNAL_ENTRIES                4
UNSUPPORTED  INV        MTL_TRANSACTIONS_INTERFACE            24
UPDATE       MRP        MRP_RECOMMENDATIONS                   14
OPERATION SEG_OWNER SEG_NAME COUNT(*)
INSERT ENI MLOG$_ENI_DENORM_HIERARCH 16
I
INSERT SYS WRI$_OPTSTAT_HISTGRM_HIST 8
ORY
INSERT APPLSYS WF_CONTROL 1
UPDATE BNE BNE_DOC_USER_PARAMS 1
DELETE APPLSYS WF_CONTROL 1
UPDATE GL GL_JE_BATCHES 2
INSERT GL GL_POSTING_INTERIM 1
OPERATION SEG_OWNER SEG_NAME COUNT(*)
UNSUPPORTED JA JAI_PO_OSP_LINES 1
INSERT AP AP_INVOICES_ALL 2
INSERT PA PA_COMMITMENT_TXNS_TMP 160
UPDATE PA PA_COMMITMENT_TXNS 1391
DELETE PA PA_MAPPABLE_TXNS_TMP 3
INTERNAL 4906910
UPDATE APPLSYS FND_CONCURRENT_REQUESTS 153
UPDATE JA JAI_RCV_LINES 3
INSERT INV MLOG$_MTL_SUPPLY 30
INSERT CSI CSI_I_PARTIES_H 1
UNSUPPORTED JA JAI_RCV_TRANSACTIONS 7
OPERATION SEG_OWNER SEG_NAME COUNT(*)
UPDATE JA JAI_RCV_TRANSACTIONS 24
INSERT MRP MRP_RECOMMENDATIONS 8
UNSUPPORTED AP AP_PAYMENT_SCHEDULES_ALL 4
UPDATE PA PA_TRANSACTION_INTERFACE_ 4
ALL
INSERT XLA XLA_ACCT_PROG_EVENTS_GT 4
INSERT XLA XLA_AE_LINES_GT 12
UNSUPPORTED XLA XLA_AE_LINES_GT 30
INSERT XLA XLA_AE_HEADERS,AP 3
INSERT XLA XLA_VALIDATION_LINES_GT 3
OPERATION SEG_OWNER SEG_NAME COUNT(*)
DELETE XLA XLA_EVENTS_GT 1
UNSUPPORTED XLA XLA_BAL_CONCURRENCY_CONTR 2
OL
DELETE XLA XLA_BAL_CONCURRENCY_CONTR 2
OL
INSERT APPLSYS FND_CONC_REQUEST_ARGUMENT 2
S
UPDATE QA QA_RESULTS 2
OPERATION SEG_OWNER SEG_NAME COUNT(*)
INSERT MRP MRP_SCHEDULE_CONSUMPTIONS 21
INSERT ENI ENI_DENORM_HRCHY_PARENTS 4
UNSUPPORTED ENI ENI_DENORM_HRCHY_PARENTS 4
UPDATE APPLSYS FND_USER 6
INSERT XLA XLA_TRANSFER_LOGS 2
UPDATE GL GL_JE_LINES 4
UPDATE APPLSYS FND_NODES 3
UNSUPPORTED PO PO_REQ_DISTRIBUTIONS_ALL 66
DELETE ZX ZX_ITM_DISTRIBUTIONS_GT 66
INSERT AP AP_PAYMENT_SCHEDULES_ALL 2
INSERT IBY IBY_DOCS_PAYABLE_GT 2
OPERATION SEG_OWNER SEG_NAME COUNT(*)
INSERT AP AP_DOC_SEQUENCE_AUDIT 1
INSERT PA PA_COMMITMENT_TXNS 1380
INSERT PA PA_RESOURCE_ACCUM_DETAILS 3
INSERT ENI DR$ENI_DEN_HRCHY_PAR_IM1$ 19
I
UPDATE PA PA_PROJECT_ACCUM_ACTUALS 12
INSERT EGO EGO_ITEM_TEXT_TL 1
INSERT INV MTL_ITEM_REVISIONS_TL 1
UNSUPPORTED 1720
INSERT APPLSYS FND_CONC_PP_ACTIONS 47
OPERATION SEG_OWNER SEG_NAME COUNT(*)
UNSUPPORTED APPLSYS FND_CONCURRENT_PROCESSES 97
DELETE APPLSYS MO_GLOB_ORG_ACCESS_TMP 11
INSERT MRP MRP_RELIEF_INTERFACE 16
UPDATE PO PO_REQUISITION_LINES_ALL 1
INSERT INV MTL_MATERIAL_TRANSACTIONS 5
INSERT CSI CSI_I_PARTIES 1
UNSUPPORTED SYS DBMS_LOCK_ALLOCATED 15
UPDATE PO PO_SESSION_GT 4
INSERT MRP MRP_SCHEDULE_DATES 4
INSERT MRP MLOG$_MRP_SCHEDULE_DATES 15
DELETE MRP MRP_SCHEDULE_DATES 4
OPERATION SEG_OWNER SEG_NAME COUNT(*)
DELETE BOM BOM_RES_INSTANCE_CHANGES 2
INSERT PA PA_COST_DISTRIBUTION_LINE 2
S_ALL
INSERT ZX ZX_TRANSACTION_LINES_GT 138
UNSUPPORTED CSI CSI_ITEM_INSTANCES 2
UNSUPPORTED XLA XLA_EVENTS,AP 6
INSERT XLA XLA_AE_SEGMENT_VALUES 13
UNSUPPORTED SYS TAB$ 15
INSERT SYS WRI$_OPTSTAT_HISTHEAD_HIS 81562
TORY
OPERATION SEG_OWNER SEG_NAME COUNT(*)
DDL SYS SYS_TEMP_0FD9D6610_EC264F 1
91
UNSUPPORTED SYS IND$ 10
DELETE APPLSYS FND_CONC_PP_ACTIONS 7
INSERT BNE BNE_DOC_ACTIONS 1
INSERT GL GL_JE_BATCHES 1
INSERT PA PA_TXN_ACCUM_DETAILS 1386
INSERT PA PA_TXN_ACCUM 2
INSERT PA PA_RESOURCE_LIST_PARENTS_ 2
OPERATION SEG_OWNER SEG_NAME COUNT(*)
TMP
UPDATE CTXSYS DR$INDEX 1
INSERT INV MTL_PENDING_ITEM_STATUS 1
DELETE SYS WRI$_OPTSTAT_IND_HISTORY 2130287
UNSUPPORTED APPLSYS FND_CONCURRENT_REQUESTS 49
INSERT PO RCV_TRANSACTIONS_INTERFAC 3
E
UPDATE APPLSYS FND_CONCURRENT_PROCESSES 40
UPDATE PO RCV_TRANSACTIONS 3
OPERATION SEG_OWNER SEG_NAME COUNT(*)
UNSUPPORTED PO PO_HEADERS_ALL 3
INSERT INV MTL_CST_TXN_COST_DETAILS 5
INSERT CSI CSI_ITEM_INSTANCES_H 3
DELETE INV MTL_MATERIAL_TRANSACTIONS 5
_TEMP
UPDATE SYS DBMS_LOCK_ALLOCATED 15
DELETE MRP MRP_RECOMMENDATIONS 8
UPDATE AP AP_INVOICE_LINES_ALL 11
INSERT BOM MLOG$_BOM_RES_INSTANCE_CH 4
A
OPERATION SEG_OWNER SEG_NAME COUNT(*)
INSERT BOM BOM_RESOURCE_CHANGES 2
UPDATE PA PA_TRANSACTION_XFACE_CTRL 1
_ALL
INSERT AP AP_PREPAY_HISTORY_ALL 1
INSERT AP AP_PREPAY_APP_DISTS 1
INSERT XLA XLA_EVT_CLASS_ORDERS_GT 4
UPDATE XLA XLA_EVENTS,AP 6
INSERT ENI ENI_DENORM_HIERARCHIES 6
INSERT SYS WRI$_OPTSTAT_TAB_HISTORY 5
OPERATION SEG_OWNER SEG_NAME COUNT(*)
UNSUPPORTED XLA XLA_AE_HEADERS_GT 3
UPDATE BNE BNE_DOC_ACTIONS 1
UPDATE GL GL_JE_HEADERS 2
DELETE XLA XLA_TRANSFER_LOGS 1
UNSUPPORTED ZX ZX_TRANSACTION_LINES_GT 264
INSERT PA PA_PJM_PO_COMMITMENTS_TMP 378
UNSUPPORTED INV MTL_SYSTEM_ITEMS_B 2
INSERT INV MTL_ITEM_REVISIONS_B 1
266 rows selected.
------------------------------LogMiner output for archives of 03.07.13 on PROD------------------------
--the following log files were added:
Jul 3 10:56 archive_PROD_1_1469_807549584.arc
Jul 3 10:58 archive_PROD_1_1470_807549584.arc
Jul 3 11:01 archive_PROD_1_1471_807549584.arc
Jul 3 11:03 archive_PROD_1_1472_807549584.arc
Jul 3 11:04 archive_PROD_1_1473_807549584.arc
Jul 3 11:05 archive_PROD_1_1474_807549584.arc
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1469_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1470_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1471_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1472_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1473_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1474_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
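Between adding the log files and querying V$LOGMNR_CONTENTS, the LogMiner session also has to be started (and ended afterwards to free resources). A minimal sketch, assuming the online catalog is used as the dictionary (a flat-file or redo-log dictionary would also work):

```sql
-- Start the LogMiner session; DICT_FROM_ONLINE_CATALOG means column names
-- are resolved against the current data dictionary (an assumption here):
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- ...query V$LOGMNR_CONTENTS here, as in the SELECT below...

-- End the session when finished:
EXECUTE DBMS_LOGMNR.END_LOGMNR;
```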
SQL> select operation,seg_owner,seg_name,count(*) from v$logmnr_contents group by seg_owner,seg_name,operation;
OPERATION SEG_OWNER SEG_NAME COUNT(*)
DELETE SYS CCOL$ 6
DELETE SYS SEG$ 3
INSERT APPLSYS FND_LOGINS 108
UPDATE APPLSYS FND_CONC_RELEASE_CLASSES 33
UPDATE PO RCV_TRANSACTIONS_INTERFACE 4
INSERT APPLSYS FND_LOG_TRANSACTION_CONTEXT 1
INSERT PO RCV_TRANSACTIONS 2
UNSUPPORTED INV MTL_SUPPLY 4
INSERT PO RCV_RECEIVING_SUB_LEDGER 4
INSERT GL GL_JE_HEADERS 12
DELETE GL GL_INTERFACE 49
DELETE GL GL_INTERFACE_CONTROL 8
INSERT JA JAI_RTP_POPULATE_T 1
INSERT JA JAI_RGM_TRM_SCHEDULES_T 4
UPDATE JA JAI_RCV_CENVAT_CLAIMS 2
INSERT APPLSYS FND_APPL_SESSIONS 9
UPDATE APPLSYS WF_NOTIFICATIONS 4
UPDATE PO PO_HEADERS_INTERFACE 10
INSERT PO PO_DISTRIBUTIONS_INTERFACE 6
INSERT JA JAI_PO_LINE_LOCATIONS 18
UNSUPPORTED PO PO_LINE_LOCATIONS_ALL 2
UNSUPPORTED PO PO_LINES_ALL 8
DELETE ZX ZX_TRX_HEADERS_GT 14
INSERT PA PA_STRUCTURES_TASKS_TMP 440
INSERT SYS SEQ$ 2
INSERT BNE BNE_DOC_CREATION_PARAMS 9
UPDATE ENI DR$ENI_DEN_HRCHY_PAR_IM1$R 6
INSERT SYS MON_MODS$ 9
UPDATE PA PA_PROJECTS_ALL 4
UPDATE WIP WIP_MOVE_TXN_INTERFACE 1
INSERT CE CE_SECURITY_PROFILES_GT 8
UPDATE WIP WIP_PERIOD_BALANCES 1
INSERT INV MTL_MWB_GTMP 14
UPDATE SYS WRI$_SCH_CONTROL 1
DELETE SYS OBJ$ 51
DDL SYS WRH$_ROWCACHE_SUMMARY 1
DDL SYS WRH$_ACTIVE_SESSION_HISTORY 1
DDL SYS WRH$_SYS_TIME_MODEL 1
INSERT IBY IBY_DOCS_PAYABLE_ALL 6
UPDATE IBY IBY_DOCS_PAYABLE_ALL 36
INSERT GL GL_CODE_COMBINATIONS 3
INSERT XLA XLA_AE_HEADERS_GT 12
UPDATE -
Too many archived logs when trying a backup
Hello all,
I'm having a bit of trouble running a backup script on an Oracle instance (10g Release 1, on Solaris).
As normal DBA practice, I guess backups should be scheduled and run from the very beginning of using a DB. Sometimes, for various reasons, this does not happen. In that case, before running the first (full) backup of the DB, there might be tens or hundreds of archived logs waiting to be backed up, and the flash recovery area might simply not be able to hold all of them (at least that's how I see it; I might be wrong, as I'm still fighting my way through Oracle's backup and recovery issues). In that case, a backup script containing the following RMAN sequence:
run {
allocate channel ch1 type disk;
backup
incremental
level = 0
database;
release channel ch1;
}
fails with the error message ORA-19804 (cannot reclaim disk space from the DB_RECOVERY_FILE_DEST_SIZE limit).
After this, the archived logs that were backed up are marked as obsolete, and I can delete them from RMAN with "delete obsolete". The script I'm using for backup runs fine afterwards. Before attempting a backup, however, no logs are reported as obsolete.
The retention policy is the default "redundancy 1" and the archivelog deletion policy is none.
How could I prevent the backup script from crashing? If I change the archivelog deletion policy, will I still be able to restore the DB properly from my backup set (since earlier logs will be deleted before making a backup)?
Thank you for any suggestions, your help is very much appreciated,
Adrian

I have the impression that you are not using scheduled backups and are letting the client decide when to take a backup. This is not good practice: you won't be able to get rid of this error, because you never know when the next (or even the first) backup is going to occur, and without a backup you can't even think of deleting your logs. If you are not using tape drives, then your archived logs and backups will both sit in the recovery area, and you should have enough available space to hold both of them. I would suggest scheduling your backups and using the DELETE INPUT clause of the RMAN BACKUP command to delete the archived logs after backing them up, and also deleting the obsolete backups according to your recovery window. This is the proper way to manage recovery area space. You really need to tune the recovery area size by measuring redo generation, backup size, retention policy, and so on, and then come up with a figure suitable for your environment to hold all of the required files for the required time (recovery window).
Daljit Singh -
I am developing an HR web application that uses EJB technology. The database for my application has about 73 tables, and about 65% of these are lookup tables. Do I have to map all the tables to entity beans? I think this will kill the performance. I don't know exactly, but I thought I could map all the lookup tables to one entity bean. My question is: can I do this?
Please help, thanks in advance.

"i think this will kill the performance" Maybe not, you know, entity beans can cache data quite well. If you use WebLogic, then map the lookup tables to read-only entity beans.
"i thought that i can map all the lookup tables to one entity bean, can i do this?" No, it's not possible with CMP beans.
Maybe you are looking for
-
How can I print all pages instead of only the first page using Firefox?
How can I set Firefox to print all the pages I want from my Mac to my Lexmark printer? It has only been printing the first page for about 4 days now. This happened every time Firefox opened, starting about 4 days ago. It works OK in Safari!
-
Hi, I made XREF1, XREF2 and XREF3 available (GL account available) in the system in transactions OB32, OB41, OBC4, and also in "Define special fields for line item display". But actually I was trying to display it in transaction FBL3N, changing the
-
I would like to know if there's an extension, or if I need to write a script, so that Firefox will stop a redirect to a trap Yahoo page that Yahoo causes when I try to access my Yahoo mail account. Yahoo redirects from my email page every time I try
-
I don't understand what Camera Raw is
Can anyone explain it to me like I was in 3rd grade? How do I access it?
-
On Material Description: CAUFVD-MATXT
Dear experts, I would like to know whether the following two fields are the same or not: (1) the description on the material master (nomenclature), MAKT-MAKTX for AFPO-MATNR; (2) CAUFVD-MATXT (Material Description). The proposed solution in the spec given to m