NLS Archival of Data
My query concerns the NLS implementation with Sybase IQ.
Suppose we have archived our data based on time slices (we have two options: time-slice or request-based).
During the first archival, the data was archived for the years 2010, 2011 and 2012. Are partitions created based on time in IQ?
The next time I archive data for 2010, will a new partition be created in NLS storage, or will the data be sent to the previously archived partition?
If a new archived partition is created in NLS storage, which partition will the query search if the criterion is 2010? The new or the older partition?
I need to know how the data is stored in NLS archived storage.
Can the archival process from BW on HANA to Sybase IQ be automated on a schedule instead of being run manually?
It would be really helpful, if anyone could provide a link which explains the whole process completely.
Thanks in advance.
Hi Pooja,
My answers below:
Does that mean the query will search all the partitions of the table in IQ? Suppose, for a table:
Archive Request 1 partition : data for years 2010, 2011 , 2012
Archive Request 2 partition: data for year 2010
- The partitions are based on the archive request, but your example is not correct: the year 2010 is already archived and locked by the first request.
- If you make requests based on quarter, you will have the data partitioned by quarter. If by year, the partition will be by year. And if by a range of years, that will be a single partition. So your first archive request would create a single partition with three years of data.
Will the query search both the requests ? Doesn't it affect the performance ?
- If the query is based on a date range, it will only search the partitions holding that date range. It does this by the request ID. If your query does not specify a date and uses other criteria, it will search all partitions.
And, while creating an archiving process, we specify a time characteristic as the primary partition.
So, in the Archive Request 1 partition, will there be separate sub-partitions for the years 2010, 2011 and 2012 respectively, so that when the query searches for 2010 it reads only the 2010 partition?
- This is done by the request. You should plan your requests or your archiving process chains in BW to meet your requirements.
Or does IQ do any indexing?
- IQ does create indices on all of the columns. In addition, the archiving process will create some indices: BW will create a view for the archive and will create indices on the columns used in the view's WHERE clause. Besides the IQ indexing, BW is also aware of which archive request holds which time slice, so performance is optimized on both ends.
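Conceptually, that request-based pruning works like the sketch below (illustrative Python; the class and function names are mine, not SAP APIs). Each archive request maps to a time slice, and a query with a date restriction only needs the requests whose slice overlaps the query range:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ArchiveRequest:
    request_id: int
    start: date   # first day of the archived time slice
    end: date     # last day of the archived time slice

def requests_to_search(requests, query_start=None, query_end=None):
    """Return the archive requests (partitions) a query must read.

    With a date restriction, only requests whose time slice overlaps the
    query range are read; without one, every request must be scanned.
    """
    if query_start is None or query_end is None:
        return list(requests)
    return [r for r in requests
            if r.start <= query_end and query_start <= r.end]

# Archive request 1 covered 2010-2012 as a single partition; a later
# request covered 2013 by itself.
reqs = [ArchiveRequest(1, date(2010, 1, 1), date(2012, 12, 31)),
        ArchiveRequest(2, date(2013, 1, 1), date(2013, 12, 31))]

hits = requests_to_search(reqs, date(2010, 1, 1), date(2010, 12, 31))
# only request 1 overlaps the year 2010
```

This also shows why archiving by quarter or by year gives finer pruning than one request spanning three years: a 2010-only query still has to read the whole 2010-2012 slice if it was archived as a single request.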
Hope that helps,
Eric
Similar Messages
-
NLS Archiving in BI 7.0
Hi Experts,
I have some doubts on NLS Archiving.
1) When I am working with SAP ADK archiving in BI 7.0, the data we are going to archive needs to be compressed first. I ran into this problem, compressed the data, and the archiving then worked fine. Is it the same if I go for any NLS archiving (SAND, PBS, etc.) in BI 7.0: do I need to compress the data before archiving?
2) We can't show archived data in a report using conventional SAP ADK. But when we go for NLS archiving, we have the option to include archived data in reports via the RSRT property "Read Data from Near-Line Storage". My doubt: by checking this flag, can we see the archived data only in RSRT, or also when we execute the report in BEx Analyzer or through the portal? In our case users check reports from the portal only. Do we have any option in the portal or BEx Analyzer to see archived data from near-line storage?
Please Correct me if I am wrong also please share your Comments.
Thanks
Vamsi
Hi Vamsi,
1) Yes, you need to compress the data before archiving it through NLS as well. Otherwise, it will not be available in your data options for the list of records to be archived.
2) Enabling this option lets the user query the archived data outside RSRT too. By the way, you should also test the case where the NLS application is unavailable and check whether your online data is still accessible: based on our experience, the query reads the archived data even if you only queried for online data.
cheers!
CCC -
Dear All,
Could you please let me know how to perform NLS archiving from SAP BW 7.4 powered by HANA to a Hadoop system.
Regards,
Jo
Hi Jyotsna,
I agree with Srinivasan: an NLS license from SAP means Sybase IQ as the database for near-line storage.
SAP BW Near-Line Storage Solution (NLS) Based on SAP Sybase IQ
SAP BW 730: What's New in the SAP BW Near-Line Storage Solution
You can access data from Hadoop by using the Smart Data Access (SDA) technique.
There are also other NLS databases (DB2, Oracle, etc.) that support BW, but the downside is that only authorized NLS implementation partners can implement NLS in the customer namespace. Very few companies can implement a third-party NLS.
Thanks,
Shakthi Raj Natarajan. -
Procedure for archiving the data
Hello Experts,
I need to write a procedure to clean up the database. Below is my requirement.
The user should be given the option of entering a table name, and also the option to select whether to delete all the data or to archive the table.
--> The program should provide the list of tables related to the table name given by the user. (This is not required if the table is a staging table, as staging tables don't have constraints associated with them.)
--> If the user wants to archive, then the data in the table and its related tables should be archived (exported) into a flat file and then deleted from each table in sequence. Otherwise, we need to delete the data without archiving.
Can you please let me know the procedure for the above requirement? Also, I am not sure about archiving the data: if you don't know the table name and the columns in advance, how can you define a cursor record to handle the rows?
Can you please send me the complete code for the above requirement to [email protected]
I appreciate help in this regard.
Thanks & Regards,
Sree.
Can you please send me the complete code for the above requirement to [email protected]
I appreciate help in this regard.
The goal of this forum is not to do your job for you, but to assist you with guidelines, references, and concepts on specific issues. If you want someone to code for you, then you should hire a programmer instead.
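In that spirit, a hint rather than finished code: the generic export-then-delete flow, with columns discovered at runtime so no static record type is needed, can be sketched as below. This is Python with sqlite3 purely for illustration; an actual Oracle procedure would use PL/SQL with dynamic SQL (DBMS_SQL) and UTL_FILE, and would delete rows from child tables first by walking USER_CONSTRAINTS.

```python
import csv
import sqlite3

def archive_then_delete(conn, table, out_path):
    """Export every row of `table` to a CSV flat file, then delete the rows.

    The column list is read from the cursor description at runtime, so an
    arbitrary table name works without declaring a record type up front.
    """
    cur = conn.execute(f"SELECT * FROM {table}")   # table name validated by caller
    columns = [d[0] for d in cur.description]
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(columns)                   # header row for the archive file
        writer.writerows(cur)                      # stream all rows to the flat file
    deleted = conn.execute(f"DELETE FROM {table}").rowcount
    conn.commit()                                  # delete only after the export succeeded
    return deleted

# Demo on an in-memory database with a hypothetical staging table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO staging VALUES (?, ?)", [(1, "alpha"), (2, "beta")])
n = archive_then_delete(conn, "staging", "staging_archive.csv")
```

The key design point for the "unknown columns" question is the same in PL/SQL: describe the statement at runtime instead of declaring a fixed cursor record.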
~ Madrid -
Lock NOT set for: Archiving the data from a data target
Dear Expert,
I tried to archive one of my InfoCubes; when I started to write the archive file, the free space in the archive folder was not enough, and the process ended in error.
Then I changed the archive folder to another path with enough free space. But when I start to write the archive file with a new variant, this error message comes up:
==============================================================
An archiving run locks InfoProvider ZICCPS810 (archivation object BWCZICCP~0)
Lock NOT set for: Archiving the data from a data target
InfoProvider ZICCPS810 could not be locked for an archiving session
Job cancelled after system exception ERROR_MESSAGE
==============================================================
Please Help Me.
Wawan S
Hi Wawan,
If the earlier archive session resulted in an error, please invalidate that session in archive management and try running the archive job again.
Hope this helps,
Naveen -
Help Me: NLS - Multiple Language data in a database
In order to have multilingual data in one Oracle database, I did this:
1. installed Oracle 8i on NT (this NT machine also works as a client to feed and fetch data) with the database character set UTF8 (since any kind of data is going to reside there)
2. set NLS_LANG to "FRANCE_FRENCH.EE8ISO8859P" on the server, as I will be using the server terminal to see Russian character data (this is for an Eastern European character set; apart from English, I am trying to store and retrieve Russian characters in the same table)
3. created a table with NCHAR data type.
create table test( name nchar(25))
4. I have the keyboard drivers installed for English and Russian on my machine.
5. I log into SQL*Plus, and in a separate WordPad window I write an insert statement,
like:
insert into test values (n'@KH');
I copy and paste this insert statement into SQL*Plus; then it looks like
insert into test values (n'???');
When I run
select * from test
I get:
NAME
My questions are:
Is this setup OK, or do I have to make some more changes?
Why can't I type Russian letters in SQL*Plus?
(I see in one of the Oracle NLS docs, topic "Locale Data", heading "Calendar", that the system uses Japanese characters in SQL*Plus directly. So my guess is that I should be able to insert and select these Russian characters as well.)
I would really appreciate some feedback on this; I have already spent 3 days on it. Please help me.
Thanks
Susil
[email protected]
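A side note on the '???' symptom: it is the classic client/database character-set mismatch. When the client code page cannot represent Cyrillic, each Russian character is replaced with '?' before it ever reaches the database, so SELECT can only ever return question marks. A few lines of Python illustrate the effect (illustrative only, not Oracle code; CL8MSWIN1251 is the usual Windows Cyrillic client character set to put in NLS_LANG instead):

```python
# Russian word "дом" ("house") typed at a client whose code page is a
# Western European single-byte charset: every Cyrillic letter is
# unrepresentable and degrades to '?' during conversion.
word = "дом"
degraded = word.encode("latin-1", errors="replace").decode("latin-1")
# `degraded` is now "???" -- this is what gets stored, and no later
# setting can recover the original letters.

# With a Cyrillic-capable client code page the text round-trips intact.
roundtrip = word.encode("cp1251").decode("cp1251")
```

The fix is therefore on the client side: make NLS_LANG match a code page that actually contains Cyrillic, rather than changing anything about the UTF8 database.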
Similar request: if there are several different languages used in a database, but a search word can be typed in by anyone and not necessarily in the default language, I still want to be able to pick up all the records (including text in BLOBs) in any language where my search word equals the equivalent word in the other languages; e.g., searching on "tree" would bring back a document that contained "l'arbre". I then want to set the default language, and allow the user to select a "search in these languages" option to expand the search from the default "tree" to all languages in the database.
I know about the MULTI_LEXER, but does it extend functionality to this level? -
Hi All,
I have an issue with log data.
I have to fetch archived log data by external ID.
Via SLG1, we can view log data.
But I need to implement this scenario programmatically in a program.
We can find out whether a record (external ID) is archived or not with FM BAL_ARCHIVE_SEARCH.
But how do we get the log data of that record?
For example:
My external ID: 123456789
After executing the above FM, it is confirmed as an archived record.
But now I have to fetch the log data of this record (log data meaning the messages).
Please tell me how to fetch these messages (log data).
Is there any FM or sample logic?
Thanks & Regards
Sathish -
Archive - Purge data from F1 and U1 clusters
Hello Experts,
I have been given the task of purging data from the F1 cluster (Remuneration Statement forms) and the U1 cluster (Tax Reporter forms) in PCL4. I was hoping to accomplish this by using an archive function to delete the data without storing the archived files. I have not been successful in finding much information about purging these clusters. I am looking for any advice anyone can provide, or a direction to take to reduce this data. Thank you in advance for your assistance.
Martin,
which would help keep everything intact
I don't know what that means. The whole purpose of archiving is to remove data from the 'ready' database and place it somewhere else. Leaving it intact means not archiving at all.
In the archiving process, data is selected and copied into some sort of storage medium that typically has a lower state of availability than un-archived data. The business decides what level of availability is acceptable, and the archiving policy (e.g., how long should the data remain archived before it is finally physically deleted forever).
So 'intact' is a bit vague. All the bits of data that the business decides are important are replicated 100% in the archive medium and validated, and then the source records that were archived are physically deleted from the ready database. Functionally, all archived data is intact; it may just be in another format.
I have never heard of a major ERP system that did not offer archiving in some form. There are also many third party vendors who offer archiving for the major ERP packages.
Level of success is hard to predict. There are tools available as standard in SAP that monitor critical factors: memory access, disk access, response times, etc etc etc. Here too there are third party tools that measure critical factors. You can run these before and after the archiving process to measure what success you have had.
I have never seen anyone who will stand up and say "if you archive x million records from your ready database, you will see a performance increase of y percent." There are too many variables. As they say in the MPG ads, "your results may vary". You can usually get some quantitative numbers by testing your archiving process in a test or dev system.
Best Regards,
DB49 -
Is there any t code in SAP to display archived shipping data
Hi All
We have an issue with retrieving an archived shipping document. Our Basis team has unzipped the file from the path where it was archived and provided display access. When I cross-check in transaction SARI they are unzipped, yet in SAP this document is still in status 'archived' and I am not able to view it with VT03N.
For archived billing documents, once they are unzipped, the document will not open in VF03, but we can display it in VF07.
Please let us know how to view this shipping data in SAP.
Is there any transaction code in SAP to display archived shipping data (like VF07 for archived billing documents)?
Your kind help would be highly appreciated.
Thank you
Rajendra Prasad
Hello,
Once a shipment document is archived, you can't display it with transaction VT03N. As you have pointed out, transaction SARI or SARE will help in displaying the archived shipment documents from the archive server (you have to select Archiving object = SD_VTTK and Archive Infostructure = select from the display option).
VF07 displays archived billing documents; we call VF07 an archive-enabled transaction.
I have gone through OSS note 590656 mentioned by Eduardo Hinojosa; with this enhancement of VT03N (the respective program) you should be able to display archived shipment documents. This OSS note should help you.
Let me know if you require further clarification on this.
-Thanks,
Ajay
Edited by: Ajay Kumar on Aug 25, 2009 6:16 AM -
Data archiving and data cleansing
hi experts,
Can anyone give me a step-by-step guide for data archiving and data cleansing of SAP IS-U objects?
What is the difference between data archiving and data cleansing?
Thanks & Regards
Data archiving: there are many archiving objects; you can look at some of them:
ISU_BBP IS-U Archiving: Budget Billing Plan
ISU_BCONT Business Partner Contacts (Contract A/R + A/P)
ISU_BILL IS-U Archiving: Billing Document Header
ISU_BILLZ IS-U Archiving: Billing Line Item
ISU_EABL IS-U Archiving: Meter Reading Results
ISU_EORDER IS-U Archiving: Waste Disposal Order
ISU_EUFASS Archiving of Usage Factors
ISU_FACTS Installation Facts
ISU_INSPEC IS-U Archiving: Campaigns for Inspection List
ISU_PPM Prepayment Documents
ISU_PRDOCH IS-U Archiving: Print Document Header
ISU_PRDOCL IS-U Archiving: Print Document Line Item
ISU_PROFV IS-U Archiving: EDM Profile Values
ISU_ROUTE IS-U Archiving: Waste Disposal Route
ISU_SETTLB Settlement Document
ISU_SWTDOC Archive Switch Document
Go to transaction SARA and enter the object CA_BUPA for business partner, then press F6; you will get step-by-step documentation. Please follow the procedure for all the objects.
Regards,
Siva -
Archiving model data in BPC 7.5 NW
Hi expert
Some of our models are starting to get big and we do not need all the data anymore, but the business would like to archive the data in case they need it later on. I am not talking about archiving audit data; I am talking about archiving data in a planning model.
What is the best practice for archiving data in a BPC model?
What is the performance improvements related to archiving data from a model?
Kind regards
Erik
Hi Erik,
Archival is nothing but deletion.
Create a backup cube in BW. Copy the data from your planning cube to the backup cube, and then delete that data region from your planning cube.
Archival will definitely improve the performance of your templates, scripts, etc., since the system will now search a smaller dataset.
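The copy-then-delete pattern can be sketched in plain Python (conceptual only; in BPC/BW this is done with copy packages and selective deletion on the cubes, not code like this). The point is the ordering: the old region is copied to the backup store first, and removed from the planning model only afterwards.

```python
from datetime import date

# Planning "cube" modeled as a list of records keyed by time (illustrative).
planning = [{"time": date(2012, 1, 1), "amount": 100.0},
            {"time": date(2015, 1, 1), "amount": 250.0}]
backup = []   # stands in for the backup BW cube

# 1) Copy the old data region to the backup cube.
cutoff = date(2014, 1, 1)
backup.extend(r for r in planning if r["time"] < cutoff)

# 2) Only after the copy, delete that region from the planning cube.
planning = [r for r in planning if r["time"] >= cutoff]
# Queries against `planning` now scan a smaller dataset, which is where
# the performance gain comes from.
```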
Hope this helps. -
What should i do to implement Data Archiving and Data Reporting
We want to implement data archiving and data reporting in our product. Can someone tell me what techniques or approaches people take in general to implement data archiving and data reporting?
I am currently looking into data warehousing. Is this the right approach? I have no idea where I should start on this. Can someone give me a good direction as a starting point?
thank you,
Puja
-
ORA-00308: cannot open archived log '+DATA'
Hello all,
I created a new physical standby, but I am facing a problem with shipping archived files between the primary and the standby.
Primary : RAC (4 nodes)
Standby : single node with ASM
when I run :
alter database recover managed standby database disconnect from session;
in alert log file :
Managed Standby Recovery not using Real Time Apply
Parallel Media Recovery started with 24 slaves
Waiting for all non-current ORLs to be archived...
All non-current ORLs have been archived.
Media Recovery Waiting for thread 1 sequence 25738
Tue Mar 03 12:21:13 2015
Completed: alter database recover managed standby database disconnect from session
and when I checked the archived files with
select max(sequence#) from v$archived_log;
it was null.
I understood that no shipping was happening between primary and standby up to this point, so I decided to use manual recovery with:
alter database recover automatic standby database;
But i get this error in alert log file :
alter database recover automatic standby database
Media Recovery Start
started logmerger process
Tue Mar 03 12:38:38 2015
Managed Standby Recovery not using Real Time Apply
Parallel Media Recovery started with 24 slaves
Media Recovery Log +DATA
Errors with log +DATA
Errors in file /u01/app/oracle/diag/rdbms/oracledrs/oracledrs/trace/oracledrs_pr00_4989.trc:
ORA-00308: cannot open archived log '+DATA'
ORA-17503: ksfdopn:2 Failed to open file +DATA
ORA-15045: ASM file name '+DATA' is not in reference form
ORA-279 signalled during: alter database recover automatic standby database..
When I opened the oracledrs_pr00_4989.trc file I found:
*** 2015-03-03 12:38:39.478
Media Recovery add redo thread 4
ORA-00308: cannot open archived log '+DATA'
ORA-17503: ksfdopn:2 Failed to open file +DATA
ORA-15045: ASM file name '+DATA' is not in reference form
When I created the standby, I set these parameters in the duplicate command:
set db_file_name_convert='+ASM_ORADATA/oracle','+DATA/oracledrs'
set log_file_name_convert='+ASM_ARCHIVE/oracle','+DATA/oracledrs','+ASM_ORADATA/oracle','+DATA/oracledrs'
set control_files='+DATA'
set db_create_file_dest='+DATA'
set db_recovery_file_dest='+DATA'
What is the mistake here, please?
Thanks in advance.
Yes, I have datafiles under +DATA
ASMCMD> cd +DATA/ORACLEDRS/DATAFILE
ASMCMD> ls
ASD.282.873258045
CATALOG.288.873258217
DEVTS.283.873258091
EXAMPLE.281.873258043
FEED.260.873227069
FEED.279.873257713
INDX.272.873251345
INDX.273.873252239
INDX.278.873257337
SYSAUX.262.873227071
SYSTEM.277.873256531
SYSTEM_2.280.873257849
TB_WEBSITE.284.873258135
TB_WEBSITE.285.873258135
TB_WEBSITE.286.873258181
TB_WEBSITE.287.873258183
UNDOTBS1.275.873253421
UNDOTBS2.276.873255247
UNDOTBS3.261.873227069
UNDOTBS4.271.873245967
USERS.263.873227071
USERS.264.873235507
USERS.265.873235893
USERS.266.873237079
USERS.267.873238225
USERS.268.873243661
USERS.269.873244307
USERS.270.873244931
USERS.274.873252585
asd01.dbf
catalog01
dev01.dbf
example.dbf
feed01.dbf
feed02.dbf
indx01.dbf
indx02.dbf
indx03.dbf
sysaux01.dbf
system01.dbf
system02.dbf
undotbs01.dbf
undotbs02.dbf
undotbs03.dbf
undotbs04.dbf
user1.dbf
users01.dbf
users02.dbf
users03.dbf
users04.dbf
users05.dbf
users06.dbf
users07.dbf
users08.dbf
website01.dbf
website02.dbf
website03.dbf
website04.dbf
ASMCMD>
Standby :
[root@oracledrs ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba)
Primary :
[root@dbn-prod-1 disks]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba)
1- Yes, I have the needed archived files on my primary.
2- select inst_id,thread#,group# from gv$log;
Primary :
INST_ID,THREAD#,GROUP#
1,1,1
1,1,2
1,2,3
1,2,4
1,3,5
1,3,6
1,4,7
1,4,8
1,1,9
1,2,10
1,3,11
1,4,12
3,1,1
3,1,2
3,2,3
3,2,4
3,3,5
3,3,6
3,4,7
3,4,8
3,1,9
3,2,10
3,3,11
3,4,12
2,1,1
2,1,2
2,2,3
2,2,4
2,3,5
2,3,6
2,4,7
2,4,8
2,1,9
2,2,10
2,3,11
2,4,12
4,1,1
4,1,2
4,2,3
4,2,4
4,3,5
4,3,6
4,4,7
4,4,8
4,1,9
4,2,10
4,3,11
4,4,12
Standby :
INST_ID,THREAD#,GROUP#
1,1,9
1,1,2
1,1,1
1,2,3
1,2,4
1,2,10
1,3,5
1,3,6
1,3,11
1,4,12
1,4,7
1,4,8
3- That's a sample from the alert log since I started the standby (for standby and primary):
Standby :
alter database mount standby database
NOTE: Loaded library: /opt/oracle/extapi/64/asm/orcl/1/libasm.so
NOTE: Loaded library: System
SUCCESS: diskgroup DATA was mounted
ERROR: failed to establish dependency between database oracledrs and diskgroup resource ora.DATA.dg
ARCH: STARTING ARCH PROCESSES
Tue Mar 03 18:38:16 2015
ARC0 started with pid=128, OS id=4461
ARC0: Archival started
ARCH: STARTING ARCH PROCESSES COMPLETE
ARC0: STARTING ARCH PROCESSES
Tue Mar 03 18:38:17 2015
Successful mount of redo thread 1, with mount id 1746490068
Physical Standby Database mounted.
Lost write protection disabled
Tue Mar 03 18:38:17 2015
ARC1 started with pid=129, OS id=4464
Tue Mar 03 18:38:17 2015
ARC2 started with pid=130, OS id=4466
Tue Mar 03 18:38:17 2015
ARC3 started with pid=131, OS id=4468
Tue Mar 03 18:38:17 2015
ARC4 started with pid=132, OS id=4470
Tue Mar 03 18:38:17 2015
ARC5 started with pid=133, OS id=4472
Tue Mar 03 18:38:17 2015
ARC6 started with pid=134, OS id=4474
Tue Mar 03 18:38:17 2015
ARC7 started with pid=135, OS id=4476
Completed: alter database mount standby database
Tue Mar 03 18:38:17 2015
ARC8 started with pid=136, OS id=4478
Tue Mar 03 18:38:17 2015
ARC9 started with pid=137, OS id=4480
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
ARC4: Archival started
ARC5: Archival started
ARC6: Archival started
ARC7: Archival started
ARC8: Archival started
ARC8: Becoming the 'no FAL' ARCH
ARC2: Becoming the heartbeat ARCH
ARC2: Becoming the active heartbeat ARCH
Tue Mar 03 18:38:18 2015
Starting Data Guard Broker (DMON)
ARC9: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
Tue Mar 03 18:38:23 2015
INSV started with pid=141, OS id=4494
Tue Mar 03 18:39:11 2015
alter database recover managed standby database disconnect from session
Attempt to start background Managed Standby Recovery process (oracledrs)
Tue Mar 03 18:39:11 2015
MRP0 started with pid=142, OS id=4498
MRP0: Background Managed Standby Recovery process started (oracledrs)
started logmerger process
Tue Mar 03 18:39:16 2015
Managed Standby Recovery not using Real Time Apply
Parallel Media Recovery started with 24 slaves
Waiting for all non-current ORLs to be archived...
All non-current ORLs have been archived.
Media Recovery Waiting for thread 1 sequence 25738
Completed: alter database recover managed standby database disconnect from session
Tue Mar 03 18:41:17 2015
WARN: ARCH: Terminating pid 4476 hung on an I/O operation
Killing 1 processes with pids 4476 (Process by index) in order to remove hung processes. Requested by OS process 4224
ARCH: Detected ARCH process failure
Tue Mar 03 18:45:17 2015
ARC2: STARTING ARCH PROCESSES
Tue Mar 03 18:45:17 2015
ARC7 started with pid=127, OS id=4586
Tue Mar 03 18:45:18 2015
Fatal NI connect error 12170.
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.4.0 - Production
Oracle Bequeath NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Time: 03-MAR-2015 18:45:18
Tracing not turned on.
Tns error struct:
ns main err code: 12535
TNS-12535: TNS:operation timed out
ns secondary err code: 12560
nt main err code: 505
TNS-00505: Operation timed out
nt secondary err code: 0
nt OS err code: 0
Client address: <unknown>
ARC7: Archival started
ARC2: STARTING ARCH PROCESSES COMPLETE
Tue Mar 03 18:47:14 2015
alter database recover managed standby database cancel
Tue Mar 03 18:48:18 2015
Fatal NI connect error 12170.
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.4.0 - Production
Oracle Bequeath NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Time: 03-MAR-2015 18:48:18
Tracing not turned on.
Tns error struct:
ns main err code: 12535
TNS-12535: TNS:operation timed out
ns secondary err code: 12560
nt main err code: 505
TNS-00505: Operation timed out
nt secondary err code: 0
nt OS err code: 0
Client address: <unknown>
Tue Mar 03 18:51:18 2015
Fatal NI connect error 12170.
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.4.0 - Production
Oracle Bequeath NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
Time: 03-MAR-2015 18:51:18
Tracing not turned on.
Tns error struct:
ns main err code: 12535
TNS-12535: TNS:operation timed out
ns secondary err code: 12560
nt main err code: 505
TNS-00505: Operation timed out
nt secondary err code: 0
nt OS err code: 0
Client address: <unknown>
Error 12170 received logging on to the standby
FAL[client, USER]: Error 12170 connecting to oracle for fetching gap sequence
MRP0: Background Media Recovery cancelled with status 16037
Errors in file /u01/app/oracle/diag/rdbms/oracledrs/oracledrs/trace/oracledrs_pr00_4500.trc:
ORA-16037: user requested cancel of managed recovery operation
Recovery interrupted!
Tue Mar 03 18:51:18 2015
MRP0: Background Media Recovery process shutdown (oracledrs)
Tue Mar 03 18:51:19 2015
Managed Standby Recovery Canceled (oracledrs)
Completed: alter database recover managed standby database cancel
Tue Mar 03 18:51:56 2015
alter database recover automatic standby database
Media Recovery Start
started logmerger process
Tue Mar 03 18:51:56 2015
Managed Standby Recovery not using Real Time Apply
Parallel Media Recovery started with 24 slaves
Media Recovery Log +DATA
Errors with log +DATA
Errors in file /u01/app/oracle/diag/rdbms/oracledrs/oracledrs/trace/oracledrs_pr00_4617.trc:
ORA-00308: cannot open archived log '+DATA'
ORA-17503: ksfdopn:2 Failed to open file +DATA
ORA-15045: ASM file name '+DATA' is not in reference form
ORA-279 signalled during: alter database recover automatic standby database...
Tue Mar 03 18:53:06 2015
db_recovery_file_dest_size of 512000 MB is 0.13% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Primary :
Tue Mar 03 17:13:43 2015
Thread 1 advanced to log sequence 26005 (LGWR switch)
Current log# 1 seq# 26005 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
Tue Mar 03 17:13:44 2015
Archived Log entry 87387 added for thread 1 sequence 26004 ID 0x66aa5a0d dest 1:
Tue Mar 03 18:00:18 2015
Thread 1 advanced to log sequence 26006 (LGWR switch)
Current log# 2 seq# 26006 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
Tue Mar 03 18:00:18 2015
Archived Log entry 87392 added for thread 1 sequence 26005 ID 0x66aa5a0d dest 1:
Tue Mar 03 18:55:33 2015
Thread 1 advanced to log sequence 26007 (LGWR switch)
Current log# 9 seq# 26007 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
Current log# 9 seq# 26007 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
Tue Mar 03 18:55:33 2015
Archived Log entry 87395 added for thread 1 sequence 26006 ID 0x66aa5a0d dest 1:
Tue Mar 03 19:14:22 2015
Dumping diagnostic data in directory=[cdmp_20150303191422], requested by (instance=4, osid=10234), summary=[incident=1692472].
Dumping diagnostic data in directory=[cdmp_20150303191425], requested by (instance=4, osid=10234), summary=[incident=1692473].
Tue Mar 03 20:00:06 2015
Thread 1 advanced to log sequence 26008 (LGWR switch)
Current log# 1 seq# 26008 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
Tue Mar 03 20:00:07 2015
Archived Log entry 87401 added for thread 1 sequence 26007 ID 0x66aa5a0d dest 1:
Tue Mar 03 21:00:02 2015
Thread 1 advanced to log sequence 26009 (LGWR switch)
Current log# 2 seq# 26009 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
Tue Mar 03 21:00:03 2015
Archived Log entry 87403 added for thread 1 sequence 26008 ID 0x66aa5a0d dest 1:
Thread 1 advanced to log sequence 26010 (LGWR switch)
Current log# 9 seq# 26010 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
Current log# 9 seq# 26010 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
Tue Mar 03 21:00:06 2015
Archived Log entry 87404 added for thread 1 sequence 26009 ID 0x66aa5a0d dest 1:
Tue Mar 03 22:00:00 2015
Setting Resource Manager plan SCHEDULER[0x32DA]:DEFAULT_MAINTENANCE_PLAN via scheduler window
Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
Tue Mar 03 22:00:00 2015
Starting background process VKRM
Tue Mar 03 22:00:00 2015
VKRM started with pid=184, OS id=4838
Tue Mar 03 22:00:07 2015
Begin automatic SQL Tuning Advisor run for special tuning task "SYS_AUTO_SQL_TUNING_TASK"
Tue Mar 03 22:00:25 2015
Thread 1 advanced to log sequence 26011 (LGWR switch)
Current log# 1 seq# 26011 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
Tue Mar 03 22:00:26 2015
Archived Log entry 87408 added for thread 1 sequence 26010 ID 0x66aa5a0d dest 1:
Tue Mar 03 22:00:58 2015
Thread 1 advanced to log sequence 26012 (LGWR switch)
Current log# 2 seq# 26012 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
Tue Mar 03 22:01:00 2015
Archived Log entry 87412 added for thread 1 sequence 26011 ID 0x66aa5a0d dest 1:
Tue Mar 03 22:02:37 2015
Thread 1 cannot allocate new log, sequence 26013
Checkpoint not complete
Current log# 2 seq# 26012 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
Thread 1 advanced to log sequence 26013 (LGWR switch)
Current log# 9 seq# 26013 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
Current log# 9 seq# 26013 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
Tue Mar 03 22:02:41 2015
Archived Log entry 87415 added for thread 1 sequence 26012 ID 0x66aa5a0d dest 1:
Tue Mar 03 22:03:26 2015
Thread 1 cannot allocate new log, sequence 26014
Checkpoint not complete
Current log# 9 seq# 26013 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
Current log# 9 seq# 26013 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
Thread 1 advanced to log sequence 26014 (LGWR switch)
Current log# 1 seq# 26014 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
Tue Mar 03 22:03:29 2015
Archived Log entry 87416 added for thread 1 sequence 26013 ID 0x66aa5a0d dest 1:
Tue Mar 03 22:05:50 2015
Thread 1 cannot allocate new log, sequence 26015
Checkpoint not complete
Current log# 1 seq# 26014 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
Tue Mar 03 22:05:52 2015
End automatic SQL Tuning Advisor run for special tuning task "SYS_AUTO_SQL_TUNING_TASK"
Thread 1 advanced to log sequence 26015 (LGWR switch)
Current log# 2 seq# 26015 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
Tue Mar 03 22:05:54 2015
Archived Log entry 87418 added for thread 1 sequence 26014 ID 0x66aa5a0d dest 1:
Tue Mar 03 22:07:29 2015
Thread 1 cannot allocate new log, sequence 26016
Checkpoint not complete
Current log# 2 seq# 26015 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
Thread 1 advanced to log sequence 26016 (LGWR switch)
Current log# 9 seq# 26016 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
Current log# 9 seq# 26016 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
Tue Mar 03 22:07:33 2015
Archived Log entry 87421 added for thread 1 sequence 26015 ID 0x66aa5a0d dest 1:
Thread 1 cannot allocate new log, sequence 26017
Checkpoint not complete
Current log# 9 seq# 26016 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
Current log# 9 seq# 26016 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
Thread 1 advanced to log sequence 26017 (LGWR switch)
Current log# 1 seq# 26017 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
Tue Mar 03 22:07:39 2015
Archived Log entry 87422 added for thread 1 sequence 26016 ID 0x66aa5a0d dest 1:
Tue Mar 03 22:16:36 2015
Thread 1 advanced to log sequence 26018 (LGWR switch)
Current log# 2 seq# 26018 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
Tue Mar 03 22:16:37 2015
Archived Log entry 87424 added for thread 1 sequence 26017 ID 0x66aa5a0d dest 1:
Tue Mar 03 22:30:06 2015
Thread 1 advanced to log sequence 26019 (LGWR switch)
Current log# 9 seq# 26019 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
Current log# 9 seq# 26019 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
Tue Mar 03 22:30:07 2015
Archived Log entry 87427 added for thread 1 sequence 26018 ID 0x66aa5a0d dest 1:
Tue Mar 03 22:30:18 2015
Thread 1 advanced to log sequence 26020 (LGWR switch)
Current log# 1 seq# 26020 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
Tue Mar 03 22:30:19 2015
Archived Log entry 87428 added for thread 1 sequence 26019 ID 0x66aa5a0d dest 1:
Tue Mar 03 23:07:27 2015
Dumping diagnostic data in directory=[cdmp_20150303230727], requested by (instance=4, osid=25140), summary=[incident=1692496].
Dumping diagnostic data in directory=[cdmp_20150303230730], requested by (instance=4, osid=25140), summary=[incident=1692497].
Thanks in advance, sir. -
Cannot display archived idoc data records in SARA
Hello,
In our ERP system, we regularly archive idocs older than 6 months. To view these archived idocs I use transaction SARA (with archive object IDOC and infostructure SAP_IDOC_001) to search for the relevant idoc that has been archived. Once the idoc is displayed, I drill down further by clicking the magnifying glass button, which then displays the idoc levels:
EDIDC Control record (IDoc)
EDIDD Data record (IDoc)
EDIDS Status Record (IDoc)
SRL_ARLNK SREL: Archive Structure for Links
When I try to view the Data Records, I get a message saying "You are not authorized to display table EDIDD". According to our Authorizations department, this is not an Auth issue but rather config setup or program issue.
Why can't I view the archived idoc data records? Is there another way to view archived idoc data?
Regards,
Fawaaz
Hi Jurgen,
Thanks for moving my post to the correct space.
Our Auth team is very confident that this is not a user auth issue. This could well be true, because before archiving the idoc data resides in the tables EDIDC, EDID4 and EDIDS, and the idoc can be viewed via transaction WE02 or the Data Browser (SE16). There is no EDIDD table in our ERP system, so there is obviously no authorization object to assign for it.
Once the idoc is archived, the data is removed from the ERP tables and moved to our archive database/server for storage. So when viewing an archived record, the system does not access the ERP tables but rather the archive directory it is mapped to in the settings. I assume the SARA transaction merely displays the data in the same segments/grouping with the table names mentioned above in my first post, except that instead of EDID4 it displays EDIDD.
According to the error long text, "The check performed when data is read from the archive is the same as that of the Data Browser (transaction SE16)". I was not involved in setting up our archiving procedure, but could it be that table EDID4 was incorrectly mapped to table EDIDD in the archive?
Regards,
Fawaaz -
Archiving same data more than once due to overlapping variant values
Hi all,
I accidentally ran 2 archiving jobs on the same data. Job 1, the unwanted job, archived company codes IN00 through ZZ00. The second archive job archived data from IN99 through INZZ (not the whole IN company code range).
Both jobs failed because the log was full (the data volume was too large to be archived), but when I expand the jobs in the failed SARA session, the archive files are up to 100 MB in size.
Below are some of the problems that can occur if the same data is archived more than once (which I found from my online search):
- Some archiving objects require that data exists only once in the archive, so duplicate data can lead to erroneous results in the totals of archived data.
- Archiving the data again affects the checksum. A checksum is normally computed before and after the archiving process, and its purpose is to validate that the newly created archive files contain the same contents as the original data.
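The checksum validation described in the second point can be sketched as follows. This is only a minimal illustration of the idea (comparing file contents by hash), not the actual check SAP's archiving runs perform; the function names are made up:

```python
import hashlib

def file_checksum(path, algo="sha256"):
    """Compute a checksum of a file's contents, reading in fixed-size chunks."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_archive(source_path, archive_path):
    """True only if the archive file holds exactly the same bytes as the source."""
    return file_checksum(source_path) == file_checksum(archive_path)
```

If the same data is written into two different archive files, each file individually may still pass such a byte-for-byte check; the problem is that the archive as a whole now contains the document twice.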
Could anyone advise me on how to recover from this multiple archiving of the same data? Apart from the impact stated above, what other problems can archiving the same data more than once cause?
The failed archive sessions are currently in "Incomplete Archiving Sessions"; in one week's time the archive delete jobs will process them and move them to "Completed Archive Sessions". I would highly appreciate any help.
Sources:
http://help.sap.com/saphelp_nw73/helpdata/en/4d/8c7890910b154ee10000000a42189e/content.htm
http://help.sap.com/saphelp_nwpi71/helpdata/en/8d/3e4fc6462a11d189000000e8323d3a/frameset.htm
http://help.sap.com/saphelp_nw70/helpdata/en/8d/3e4fc6462a11d189000000e8323d3a/content.htm
Hello,
There are several issues here. In this case it seems pretty clear cut that you did not want the first variant to be executed. Hopefully none of the deletions have taken place for this archive run.
In cases where you have overlapping selection criteria and some of the deletions have already been processed, you can be in a very difficult situation. The best advice I have is to check your archive info structure CATALOG definition and make sure that both the archive file and the offset fields are set as DISPLAY fields, not KEY fields.
If the file and offset are key fields, then when you use the archive info structure you will pull up more than one copy of the archived document.
Example: FI document 12345 was archived and deleted in archive run 1 and archive run 2.
A search via the archive info structure when the file and offset are key fields would return two results:
12345 from run 1
12345 from run 2
If the CATALOG has the file and offset as display-only fields, you would get only one result:
12345 from (whichever deletion file was processed first)
The second deletion process would have a warning message in the job log that not all records were inserted.
Please note that any direct access of the data archive file that bypasses the archive info structure and goes directly to the data archiving files would still show two documents and not a single document.
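The key-field versus display-field behavior described above can be modeled in a few lines. This is a toy sketch of an index, not the SAP info structure implementation, and the file names and offsets are invented; it only shows why the choice of key fields changes how many copies a lookup returns:

```python
# Toy model of an archive info structure CATALOG. Each row is
# (document number, archive file, offset); the same FI document
# was archived and deleted in two separate runs.
rows = [
    ("0000012345", "FI_DOCUMNT_001", 1024),  # deletion run 1
    ("0000012345", "FI_DOCUMNT_002", 2048),  # deletion run 2 (duplicate)
]

def build_index(rows, file_offset_are_keys):
    """Insert rows; uniqueness is decided by the key fields only."""
    index, warnings = {}, 0
    for doc, afile, offset in rows:
        key = (doc, afile, offset) if file_offset_are_keys else (doc,)
        if key in index:
            warnings += 1  # analogous to "not all records were inserted"
            continue
        index[key] = (doc, afile, offset)
    return index, warnings

def lookup(index, doc):
    """Return every indexed row for a given document number."""
    return [row for key, row in index.items() if key[0] == doc]
```

With file and offset as key fields, `lookup` returns both copies of document 12345; with them as display-only fields, the second insert collides with the existing key, so only the copy from the first deletion run is returned and a warning is counted.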
Regards,
Kirby Arnold