Get failed with SXI_CACHE
Hello expert,
I executed transaction SXI_CACHE in copied client 100,
and the system threw this error:
com.sap.aii.ib.server.abapcache.CacheRefreshException: Receiver EX1CLNT100 is not an ABAP business system >= 7.10 and therefore has no cache
What is the reason? I have configured SXMB_ADM, added the business system in the SLD, and adjusted the exchange profile.
What else should I do?
Waiting for your reply,
thanks a lot in advance
Kevin
Hi,
Refer to SAP Note 1394710:
Link: https://service.sap.com/sap/support/notes/1394710
Also check the message and the blog given below.
Link: /people/venugopalarao.immadisetty/blog/2007/03/15/adapter-engine-cannot-be-found-in-integration-directory
Link: Cache refresh issue - SLD - adapter engine is missing
Regards
Vijay G
Similar Messages
-
Hi All. Could anyone resolve my issue?
I created a package with a Script Task written in VB.Net.
The package executed successfully in BIDS.
But when I tried to execute it via a SQL Server Agent job, it failed with the error message below:
"Executed as user: Admin. Microsoft (R) SQL Server Execute Package Utility Version 11.0.2100.60 for 32-bit Copyright (C) Microsoft Corporation. All rights reserved. Started: 5:12:27 PM Error: 2013-03-13 17:12:32.33
Code: 0x00000005 Source: Checking Alcon Files Checking Alcon Files Description: Failed to compiled scripts contained in the package. Open the package in SSIS Designer and resolve the compilation errors.
End Error Error: 2013-03-13 17:12:32.33 Code: 0x00000005 Source: Checking Alcon Files Checking Alcon Files Description: BC30179 - enum 'ScriptResults' and enum 'ScriptResults' conflict
in class 'ScriptMain'., ScriptMain.vb, 156, 22 End Error Error: 2013-03-13 17:12:32.36 Code: 0x00000005 Source: Checking Alcon Files Checking Alcon Files Description: The binary
code for the script is not found. Please open the script in the designer by clicking Edit Script button and make sure it builds successfully. End Error Error: 2013-03-13 17:12:34.28 Code: 0x00000005
Source: Formating Excel Sheet Formating Excel Sheet Description: Failed to compiled scripts contained in the package. Open the package in SSIS Designer and resolve the compilation errors. End Error Error: 2013-03-13 17:12:34.28
Code: 0x00000005 Source: Formating Excel Sheet Formating Excel Sheet Description: BC30179 - enum 'ScriptResults' and enum 'ScriptResults' conflict in class 'ScriptMain'., ScriptMain.vb, 191, 22 End Error
Error: 2013-03-13 17:12:34.29 Code: 0x00000005 Source: Formating Excel Sheet Formating Excel Sheet Description: The binary code for the script is not found. Please open the script in the
designer by clicking Edit Script button and make sure it builds successfully. End Error Error: 2013-03-13 17:12:51.56 Code: 0x00000004 Source: Checking Alcon Files Description:
The binary code for the script is not found. Please open the script in the designer by clicking Edit Script button and make sure it builds successfully. End Error Error: 2013-03-13 17:12:51.56 Code: 0xC0024107
Source: Checking Alcon Files Description: There were errors during task validation. End Error DTExec: The package execution returned DTSER_FAILURE (1). Started: 5:12:27 PM Finished: 5:12:51 PM
Elapsed: 24.336 seconds. The package execution failed. The step failed."
Please give some solution to this.
Thanks in advance.
Are you editing this job in SQL 2012? Is it wrapping your paths in \"? For example, does the command-line tab on the step look like this:
/FILE "\"D:\yourPathGoesHere.dtsx\"" /CONFIGFILE "\"D:\yourPathGoesHere.dtsConfig\"" /CHECKPOINTING OFF /REPORTING E
That's the problem I had, and I resolved it by recreating the whole thing via SQL script and getting rid of all those \" sequences. I think they prevent SSIS from looking up the precompiled binaries for your Script Task. So you should edit the command line manually:
/FILE "D:\yourPathGoesHere.dtsx" /CONFIGFILE "D:\yourPathGoesHere.dtsConfig" /CHECKPOINTING OFF /REPORTING E
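To illustrate what the manual edit does, here is a small sketch (not part of the original answer; the replace patterns are my reading of the \" wrapping shown above) that strips the escaped quotes from such a command line:

```python
# Sketch: remove the \" wrappers that SQL 2012 sometimes adds around paths
# in a job step's command line, leaving plain quoted paths DTExec accepts.
def strip_escaped_quotes(cmdline: str) -> str:
    # '"\"' opens a wrapped path, '\""' closes it; both collapse to '"'
    return cmdline.replace('"\\"', '"').replace('\\""', '"')

broken = r'/FILE "\"D:\yourPathGoesHere.dtsx\"" /CONFIGFILE "\"D:\yourPathGoesHere.dtsConfig\""'
print(strip_escaped_quotes(broken))
# → /FILE "D:\yourPathGoesHere.dtsx" /CONFIGFILE "D:\yourPathGoesHere.dtsConfig"
```

The paths are the placeholder ones from the post, not real files.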
Please write back whether that helps you. David Dye's answer didn't help me at all. -
Dear Experts,
Background job SAP_COLLECTOR_FOR_PERFMONITOR is failing with ABAP dump LOAD_PROGRAM_NOT_FOUND.
It is scheduled hourly and finishes 22 times a day, but the other 2 runs fail with the ABAP dump.
Finished Job log
Job started
Step 001 started (program RSCOLL00, variant , user ID Bharath)
Clean_Plan:Cleanup of DB13 Plannings
Clean_Plan:started by RSDBPREV on server
Clean_Plan:Cleaning up jobs of system DEV
Clean_Plan:finished
Job finished
Failed Job Log
Job started
Step 001 started (program RSCOLL00, variant , user ID Bharath)
Internal session terminated with a runtime error (see ST22).
Kindly suggest on this..
Thanks,
Bharath.
Dear Divyanshu,
Our system in ERP 6.0 EHP5 with SP level 10. The ABAP Dump shows below error.
|Short text |
| Program "RSORA811" not found. |
|What happened? |
| There are several possibilities: |
| |
| Error in the ABAP Application Program |
| |
| The current ABAP program "RSCOLL00" had to be terminated because it has |
| come across a statement that unfortunately cannot be executed. |
| or |
| Error in the SAP kernel. |
| |
| The current ABAP "RSCOLL00" program had to be terminated because the |
| ABAP processor detected an internal system error. |
|What can you do? |
| Note down which actions and inputs caused the error. |
| |
| |
|     To process the problem further, contact your SAP system              |
| administrator. |
| |
| Using Transaction ST22 for ABAP Dump Analysis, you can look |
| at and manage termination messages, and you can also |
| keep them for a long time. |
|Error analysis |
| On account of a branch in the program |
| (CALL FUNCTION/DIALOG, external PERFORM, SUBMIT) |
| or a transaction call, another ABAP/4 program |
| is to be loaded, namely "RSORA811". |
| |
| However, program "RSORA811" does not exist in the library. |
| |
| Possible reasons: |
| a) Wrong program name specified in an external PERFORM or |
| SUBMIT or, when defining a new transaction, a new |
| dialog module or a new function module. |
| b) Transport error |
|Information on where terminated |
| Termination occurred in the ABAP program "RSCOLL00" - in |
| "LOOP_AT_SYSTEMS_AND_REPORTS". |
| The main program was "RSCOLL00 ". |
| |
| In the source code you have the termination point in line 535 |
| of the (Include) program "RSCOLL00". |
| The program "RSCOLL00" was started as a background job. |
| Job Name....... "SAP_COLLECTOR_FOR_PERFMONITOR" |
| Job Initiator.. " bharath" |
|     Job Number.....  18243400                                            |
Kindly check and suggest..
Thanks,
Bharath. -
IDoc is failing with status 56 - EDI partner profile not available
Hi,
I am trying to post invoice data as an IDoc on the ECC side.
My scenario is File - XI - ECC (IDoc).
But it is failing with status 56: "EDI: Partner profile not available".
On the control record I am getting this:
Port (blank)
Partner Number CLNTDEC110 (logical system for the client)
Partn.Type LS (logical system)
Function (blank)
Port SAPDPI
Partner number CLNTSAMPLE
Partn.Type LS (logical system)
Partner Role (blank)
My configuration is like this:
On the ECC side my SID is DEC.
On the ECC side I have two logical systems: CLNTDPI100 for PI
and CLNTDEC110 for ECC.
I have a partner profile on the ECC system for logical system CLNTDPI100 (WE20).
I added the message type on the inbound side of the partner profile (INVOIC / INVOIC02).
On the SAP PI/XI system my SID is DPI.
IDX1 has port name SAPDEC.
In the message mapping, EDI_DC40 is mapped with constants to the values given below:
<INVOIC02>
<IDOC BEGIN="">
<EDI_DC40 SEGMENT="">
<TABNAM> </TABNAM>
<DIRECT>2</DIRECT>
<IDOCTYP> </IDOCTYP>
<MESTYP>INVOIC</MESTYP>
<SNDPOR>SAPDPI</SNDPOR>
<SNDPRT>LS</SNDPRT>
<SNDPRN>CLNTDPI100</SNDPRN>
<RCVPOR>SAPDEC</RCVPOR>
<RCVPRT>LS</RCVPRT>
<RCVPRN>CLNTDEC110</RCVPRN>
</EDI_DC40>
Regards
PS
Check the following:
In WE02, check which partner number is displayed in the posted IDoc (2nd column in the IDoc list) and verify that you have the same one in the partner profile. This detail comes from the ECC business system's logical system name, which you give in the SLD.
In WE19, take the error IDoc number and open the IDoc, click on the first line, and check the entries: as you mentioned above, the sender port should be the PI port, not empty. You need to check the partner profiles properly.
Follow these steps:
1. Create an RFC destination of type H for the PI system.
2. Create a port and assign the RFC destination to it.
3. Create a logical system for PI in BD54, say PICLNT001.
4. With the same name, create a partner profile in WE20.
In the partner profile, maintain the inbound message parameters and add the PI port as the receiver port. Give the basic type as well.
Now:
For the sender details (in your case PI): you have the port (defined in PI IDX1), the partner number (the LS defined in ECC, PICLNT001), and partner type LS.
For the receiver: you have the port (defined as above) and, as partner number, the logical system for the ECC system.
In WE19, edit the control record as above, go to the inbound processing tab, and test the inbound posting.
It should work fine. In the adapter-specific attributes for the receiver ECC system, maintain the same LS name; if a wrong entry is there, change the LS in the SLD to point to the correct LS.
Refer to this:
http://www.riyaz.net/blog/xipi-settings-in-r3-partner-system-to-receive-idocs/technology/sap/26/
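The checks in the reply above can be condensed into a small sketch. The EDI_DC40 field names are the standard ones and the values come from this scenario, but the check list itself is only a paraphrase of the advice, not SAP code:

```python
# Sketch: the consistency checks described above for an IDoc EDI_DC40
# control record, expressed as plain assertions.
control = {
    "SNDPOR": "SAPDPI",      # sender port = PI system's port (must not be blank)
    "SNDPRT": "LS",
    "SNDPRN": "CLNTDPI100",  # logical system defined for PI (BD54)
    "RCVPOR": "SAPDEC",      # receiver port (the IDX1 port name)
    "RCVPRT": "LS",
    "RCVPRN": "CLNTDEC110",  # ECC client's logical system
}

def check_control_record(rec):
    """Return a list of problems a WE19/WE20 review would flag."""
    problems = []
    if not rec.get("SNDPOR"):
        problems.append("sender port must not be empty")
    if rec.get("SNDPRT") != "LS" or rec.get("RCVPRT") != "LS":
        problems.append("partner types should be LS for system-to-system IDocs")
    if rec.get("SNDPRN") == rec.get("RCVPRN"):
        problems.append("sender and receiver logical systems must differ")
    return problems

print(check_control_record(control))  # → []
```

Running it against the scenario's control record returns an empty problem list; a blank sender port (as in the failing IDoc) would be flagged.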
Regards,
Srinivas -
Migration request failing with error code 152
Hi Experts,
We are creating a migration request for moving a transport along the path Development - Quality - Production (tcode /POWERCOR/MRCRE). The import into Quality is failing with return code 152 and status code E, while transports without a migration request import normally. Please let us know how we can resolve this issue.
Thanks in advance.
Regards,
Surendra Julury,
+91 9611107275
Try running the exe remotely on a computer using PsExec under the system context. What happens?
See here for PsExec - you need the -s switch for Local System:
http://technet.microsoft.com/en-ie/sysinternals/bb897553.aspx
Gerry Hampson | Blog:
www.gerryhampsoncm.blogspot.ie | LinkedIn:
Gerry Hampson | Twitter:
@gerryhampson
If you do this then you will know. This mimics the behaviour of the ConfigMgr deployment. Use the same .exe -silent etc.
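To try what the reply suggests, the PsExec call can be assembled like this; a minimal sketch with a hypothetical host name and installer path (only the -s switch and the -silent idea come from the thread):

```python
# Sketch: assemble the PsExec invocation described above - run an installer
# remotely under the LocalSystem account, mimicking how the ConfigMgr
# client runs deployments. Host and installer path are made-up examples.
def build_psexec_cmd(host, exe, *exe_args):
    # -s runs the remote process under the System account
    return ["psexec", rf"\\{host}", "-s", exe, *exe_args]

cmd = build_psexec_cmd("TESTPC01", r"C:\temp\setup.exe", "-silent")
print(" ".join(cmd))
```

If the installer fails the same way under System, the problem is the package, not ConfigMgr.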
-
ORA-29786: SIHA attribute GET failed with error [Attribute 'ASM_DISKSTRING'
Hi,
I unregistered the services below to manage them with Solaris SMF (I know Oracle doesn't support SMF yet).
They are running quite well; the only issue we are facing is that RBAL writes error logs to the trace file.
/u01/grid/oracle/product/11.2.0/asm_1/bin/crs_unregister ora.LISTENER.lsnr
/u01/grid/oracle/product/11.2.0/asm_1/bin/crs_unregister ora.DATA.dg
/u01/grid/oracle/product/11.2.0/asm_1/bin/crs_unregister ora.FRA.dg
/u01/grid/oracle/product/11.2.0/asm_1/bin/crs_unregister ora.asm
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_rbal_14028.trc:
ORA-29786: SIHA attribute GET failed with error [Attribute 'ASM_DISKSTRING' sts[200] lsts[0]]
KGGPNP_SIHA: resource 'ora.asm' is not available [200]
KGGPNP_SIHA: attribute 'ASM_DISKSTRING' get failed sts[200] lsts[0]
NOTE: failed to discover disks from gpnp profile asm diskstring
ORA-29786: SIHA attribute GET failed with error [Attribute 'ASM_DISKSTRING' sts[200] lsts[0]]
WARNING::lib=/opt/oracle/extapi/64/asm err:9 rc:opendir location:skgdllOpenDi
errbuf=2
msgbuf=No such file or directory other=Directory does not exist
*** 2011-03-22 11:23:02.281
kfgbRegister: registering group 1/0xB9966B09 (DATA)
kfgbBind: binding kfgpn for group 1/0xB9966B09 (DATA)
kfdp_query(DATA): 7
Edited by: Sachin B on Mar 28, 2011 3:17 AM
Hi, it seems these errors come from the fact that ASM wasn't registered in the cluster,
so you have to recreate the ASM instance and register it in the cluster:
srvctl add asm -p $ORACLE_HOME/dbs/init$+ASM.ora
srvctl config asm
I hope it helps!
Huet Bruno
Senior DBA Brinks France. -
Online backup failing with error BR0278E - Invalid Argument
Hi All,
The online backup of the Production system fails when taken using DB13. Last week the SAP Production server was migrated to SAN, and it has been up and running fine.
SAP System - ECC 6
OS - AIX 5.3
Database - Oracle 10g
Even the Archive log backup has been successful, but Online backup is failing. Following is the error log:
BR0280I BRBACKUP time stamp: 2009-09-10 12.00.46
BR0057I Backup of database: RTP
BR0058I BRBACKUP action ID: bebkzprw
BR0059I BRBACKUP function ID: ant
BR0110I Backup mode: ALL
BR0077I Database file for backup: /oracle/RTP/sapbackup/cntrlRTP.dbf
BR0061I 41 files found for backup, total size 97416.297 MB
BR0143I Backup type: online
BR0112I Files will not be compressed
BR0130I Backup device type: tape
BR0102I Following backup device will be used: /dev/rmt3.1
BR0103I Following backup volume will be used: RTPB01
BR0289I BRARCHIVE will be started at the end of processing
BR0134I Unattended mode with 'force' active - no operator confirmation allowed
BR0208I Volume with name RTPB01 required in device /dev/rmt3.1
BR0280I BRBACKUP time stamp: 2009-09-10 12.00.46
BR0226I Rewinding tape volume in device /dev/rmt3 ...
BR0351I Restoring /oracle/RTP/sapbackup/.tape.hdr0
BR0355I from /dev/rmt3.1 ...
BR0241I Checking label on volume in device /dev/rmt3.1
BR0242I Scratch volume in device /dev/rmt3.1 will be renamed to RTPB01
BR0280I BRBACKUP time stamp: 2009-09-10 12.00.46
BR0226I Rewinding tape volume in device /dev/rmt3 ...
BR0202I Saving /oracle/RTP/sapbackup/.tape.hdr0
BR0203I to /dev/rmt3.1 ...
We have checked with different tapes, restarted SAP, etc., but could not resolve it.
Please suggest further.
Thanks
Vamsi
Hi Vamsi,
Please go through the link below:
Re: onLine backup fails with error BR0278E
There is a similar problem with the exact error you are facing for the online backup, and its solution is given in the above link.
Let us know if this information was useful to you.
with regards,
Parin Hariyani -
"weblogic.Admin GET" fails with NoSuchMethodException
Hello,
Trying to "play" with monitors and listeners, I am trying the sample commands listed
in http://edocs.bea.com/wls/docs70/jmx/basics.html.
Unfortunately, the following command:
===
%BEAHOME%\jdk131_03\bin\java -classpath "CR086552_700sp1.jar;weblogic.jar" weblogic.Admin
-url http://mmm:7001 -username uuuu -password pppp GET -pretty -type Log
===
fails with:
===
java.lang.NoSuchMethodException
at java.lang.Class.getMethod0(Native Method)
at java.lang.Class.getMethod(Class.java:883)
at weblogic.management.tools.OperationInfo.readObject(OperationInfo.java:133)
at java.lang.reflect.Method.invoke(Native Method)
at java.io.ObjectInputStream.invokeObjectReader(ObjectInputStream.java:2209)
at java.io.ObjectInputStream.inputObject(ObjectInputStream.java:1406)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:381)
at java.io.ObjectInputStream.inputArray(ObjectInputStream.java:1137)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:369)
at java.io.ObjectInputStream.inputClassFields(ObjectInputStream.java:2258)
at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:514)
at java.io.ObjectInputStream.inputObject(ObjectInputStream.java:1407)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:381)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:231)
at weblogic.common.internal.ChunkedObjectInputStream.readObject(ChunkedObjectInputStream.java:111)
at weblogic.rjvm.MsgAbbrevInputStream.readObject(MsgAbbrevInputStream.java:91)
at weblogic.rmi.internal.ObjectIO.readObject(ObjectIO.java:56)
at weblogic.rjvm.ResponseImpl.unmarshalReturn(ResponseImpl.java:161)
at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:128)
at weblogic.management.internal.RemoteMBeanServerImpl_WLStub.getAttribute(Unknown
Source)
at weblogic.management.internal.MBeanProxy.getAttribute(MBeanProxy.java:246)
at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:176)
at $Proxy0.getMBeanInfo(Unknown Source)
at weblogic.management.commandline.CommandLine.getAllAttribute(CommandLine.java:323)
at weblogic.management.commandline.CommandLine.doGet(CommandLine.java:246)
at weblogic.management.commandline.CommandLine.doOperation(CommandLine.java:207)
at weblogic.management.commandline.CommandLine.doCommandline(CommandLine.java:192)
at weblogic.management.commandline.CommandLine.<init>(CommandLine.java:104)
at weblogic.Admin.main(Admin.java:998) Unexpected Exception
===
Setting the classpath to weblogic.jar only throws the same exception.
What's going wrong?
thanks
Joseph
Note: posted on weblogic.developer.interest.management.general_and_jmx but got no answer :(
Can't understand why, but a reinstall solved my problem :((
Joseph -
GNS failing with error CRS-2632 during RAC installation
Hello guys, I am new to Oracle RAC and I am trying to configure a two-node Oracle 11g R2 RAC setup on OEL 5.4 using GNS. Everything works great until I execute the
root.sh script on the first node.
It gives me this error:
CRS-2674: Start of 'ora.gns' on 'host01' failed
CRS-2632: There are no more servers to try to place resource 'ora.gns' on that would satisfy its placement policy
start gns ... failed
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... failed
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
When I check the status of the cluster resources I get this output:
[root@host01 ~]# crs_stat -t
Name Type Target State Host
ora.DATA.dg ora....up.type ONLINE ONLINE host01
ora....N1.lsnr ora....er.type OFFLINE OFFLINE
ora....N2.lsnr ora....er.type OFFLINE OFFLINE
ora....N3.lsnr ora....er.type OFFLINE OFFLINE
ora.asm ora.asm.type ONLINE ONLINE host01
ora.eons ora.eons.type ONLINE ONLINE host01
ora.gns ora.gns.type ONLINE OFFLINE
ora.gns.vip ora....ip.type ONLINE OFFLINE
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....SM1.asm application ONLINE ONLINE host01
ora.host01.gsd application OFFLINE OFFLINE
ora.host01.ons application ONLINE ONLINE host01
ora.host01.vip ora....t1.type ONLINE ONLINE host01
ora....network ora....rk.type ONLINE ONLINE host01
ora.oc4j ora.oc4j.type OFFLINE OFFLINE
ora.ons ora.ons.type ONLINE ONLINE host01
ora....ry.acfs ora....fs.type OFFLINE OFFLINE
ora.scan1.vip ora....ip.type OFFLINE OFFLINE
ora.scan2.vip ora....ip.type OFFLINE OFFLINE
ora.scan3.vip ora....ip.type OFFLINE OFFLINE
These are my GNS configuration file entries
vi /var/named/chroot/etc/named.conf
options {
listen-on port 53 { 192.9.201.59; };
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { any; };
allow-query-cache { any; };
};
zone "." IN {
type hint;
file "named.ca";
};
zone "localdomain" IN {
type master;
file "localdomain.zone";
allow-update { none; };
};
zone "localhost" IN {
type master;
file "localhost.zone";
allow-update { none; };
};
zone "0.0.127.in-addr.arpa" IN {
type master;
file "named.local";
allow-update { none; };
};
zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN {
type master;
file "named.ip6.local";
allow-update { none; };
};
zone "255.in-addr.arpa" IN {
type master;
file "named.broadcast";
allow-update { none; };
};
zone "0.in-addr.arpa" IN {
type master;
file "named.zero";
allow-update { none; };
};
zone "example.com" IN {
type master;
file "forward.zone";
allow-transfer { 192.9.201.180; };
};
zone "201.9.192.in-addr.arpa" IN {
type master;
file "reverse.zone";
};
zone "0.0.10.in-addr.arpa" IN {
type master;
file "reverse1.zone";
};
vi /var/named/chroot/var/named/forward.zone
$TTL 86400
@ IN SOA server1.example.com. root.server1.example.com. (
42 ; serial (d. adams)
3H ; refresh
15M ; retry
1W ; expiry
1D ) ; minimum
IN NS server1
IN A 192.9.201.59
server1 IN A 192.9.201.59
host01 IN A 192.9.201.181
host02 IN A 192.9.201.182
host03 IN A 192.9.201.183
openfiler IN A 192.9.201.184
host01-priv IN A 10.0.0.2
host02-priv IN A 10.0.0.3
host03-priv IN A 10.0.0.4
vi /var/named/chroot/var/named/reverse.zone
$ORIGIN cluster01.example.com.
@ IN NS cluster01-gns.cluster01.example.com.
cluster01-gns IN A 192.9.201.180
$TTL 86400
@ IN SOA server1.example.com. root.server1.example.com. (
1997022700 ; Serial
28800 ; Refresh
14400 ; Retry
3600000 ; Expire
86400 ) ; Minimum
IN NS server1.example.com.
59 IN PTR server1.example.com.
184 IN PTR openfiler.example.com.
181 IN PTR host01.example.com.
182 IN PTR host02.example.com.
183 IN PTR host03.example.com.
vi /var/named/chroot/var/named/reverse1.zone
$TTL 86400
@ IN SOA server1.example.com. root.server1.example.com. (
1997022700 ; Serial
28800 ; Refresh
14400 ; Retry
3600000 ; Expire
86400 ) ; Minimum
IN NS server1.example.com.
2 IN PTR host01-priv.example.com.
3 IN PTR host02-priv.example.com.
4 IN PTR host03-priv.example.com.
Please suggest me what i am doing wrong
Edited by: 1001408 on Apr 21, 2013 9:17 AM
Edited by: 1001408 on Apr 21, 2013 9:22 AM
Hello guys, finally I found the mistake I was making:
while configuring the public IPs for the nodes I was not setting a default gateway. I assumed that since all these machines are in the same network with the same IP range they would not need a gateway, but my assumption didn't match Oracle's requirements. Finally happy to see 11g R2 with GNS running on my personal laptop.
cheers
Rahul -
BPM failing with unknown error in the inbound queue
Hi ,
I am using a BPM for simple IDoc collection (time-based collection). It's working fine. However, once it fails we can see the "BPE failed in inbound processing" error in SMQ2, and the status is System Failure.
On debugging we can see an unknown error in the transformation step: error in conversion [NN] to proxy Z_interface name (the proxy which gets created automatically for the transformation step), apparently a data-related issue.
When we check the same input data in the IR, it works fine in the message mapping. We are unable to find any other error either in the BPM or in the input data.
The most surprising thing is that when we deleted the message from the queue and re-triggered the IDoc from the SAP system, it worked fine.
Can anybody kindly suggest the probable reason for this? We are afraid the same thing will happen in production, where we can't just delete the queue and re-trigger IDocs without any explanation.
Regards,
saurabh
One option would be to raise an exception for such steps. In the exception handler you can repeat the same step; in your case it will be the mapping (n:1). Are you sure that the process failed in the transformation step and that the data was correct, i.e. that all 5 IDocs were sent as input to the mapping?
Can you ensure that during this time there was no connection issue between the BPM and the mapping runtime (IE)? Maybe you can check with your BASIS team.
Regards,
Abhishek. -
DB check fails from DB13 with error BR0301E SQL error -1031
Hi,
I am executing a DB check from DB13, but it fails with the error "BR0301E SQL error -1031 at location BrDbdiffRead-1, SQL statement".
Here is the job log:
Job started
Step 001 started (program RSDBAJOB, variant &0000000000424, user ID SVKM_BASIS2)
Execute logical command BRCONNECT On host svkmeccdbci
Parameters: -u / -jid CHECK20110930103211 -c -f check
BR0801I BRCONNECT 7.00 (40)
BR0477I Oracle pfile /oracle/SEP/102_64/dbs/initSEP.ora created from spfile /oracle/SEP/102_64/dbs/spfileSEP.ora
BR0805I Start of BRCONNECT processing: cegwudvg.chk 2011-09-30 10.32.12
BR0484I BRCONNECT log file: /oracle/SEP/sapcheck/cegwudvg.chk
BR0280I BRCONNECT time stamp: 2011-09-30 10.32.12
BR0813I Schema owners found in database SEP:
DBSNMP, DIP, OPS$ORASEP, OPS$SAPSERVICESEP, OPS$SEPADM, ORACLE_OCM, OUTLN, SAPSR3*, SYS, SYSTEM,
TSMSYS
BR0280I BRCONNECT time stamp: 2011-09-30 10.32.20
BR0814I Number of tables in schema of owner SAPSR3: 77207
BR0836I Number of info cube tables found for owner SAPSR3: 49
BR0814I Number of tables/partitions in schema of owner SYS: 625/189
BR0814I Number of tables/partitions in schema of owner SYSTEM: 134/27
BR0280I BRCONNECT time stamp: 2011-09-30 10.32.29
BR0815I Number of indexes in schema of owner SAPSR3: 92099
BR0815I Number of indexes/partitions in schema of owner SYS: 678/199
BR0815I Number of indexes/partitions in schema of owner SYSTEM: 175/32
BR0280I BRCONNECT time stamp: 2011-09-30 10.32.52
BR0816I Number of segments in schema of owner DBSNMP: 25
BR0816I Number of segments in schema of owner OPS$SEPADM: 1
BR0816I Number of segments in schema of owner OUTLN: 9
BR0816I Number of segments/LOBs in schema of owner SAPSR3: 174072/2383
BR0816I Number of segments/LOBs in schema of owner SYS: 1838/87
BR0816I Number of segments/LOBs in schema of owner SYSTEM: 353/22
BR0816I Number of segments in schema of owner TSMSYS: 4
BR0280I BRCONNECT time stamp: 2011-09-30 10.32.52
BR0961I Number of conditions found in DBCHECKORA: 118
BR0280I BRCONNECT time stamp: 2011-09-30 10.32.52
BR0301E SQL error -1031 at location BrDbdiffRead-1, SQL statement:
'PREPARE stmt_5 STATEMENT FROM'
'SELECT OBJNAME FROM "SAPSR3".DBDIFF WHERE DBSYS IN ('ORACLE', ' ') AND OBJTYPE = 'TABL' AND DIFFKIND IN ('02', '61', '99') ORDE
ORA-01031: insufficient privileges
BR0806I End of BRCONNECT processing: cegwudvg.chk 2011-09-30 10.32.52
BR0280I BRCONNECT time stamp: 2011-09-30 10.32.52
BR0804I BRCONNECT terminated with errors
External program terminated with exit code 3
BRCONNECT returned error status E
Job finished
Please help.
Thanks in advanced.
Ocean
Dear,
Thanks for the reply.
I have checked the note you mentioned, but it says it is for SAP Release 6.20 or lower.
We have ECC 6 EHP4, release 700.
So please advise: should I go for that note and perform the required action?
Thanks.
Procurement load fails with socket error in biapps 11.1.1.7.1
Hi friends,
While performing a procurement load via CM, the load is failing in the mapping SDE_ORA_RequisitionLinesCostFact, and below is the error that occurs:
Failed child steps: SDE_ORA_REQUISITIONLINESCOSTFACT (InternalID:17404500)
ODI-1217: Session SDE_ORAR1213_ADAPTOR_SDE_ORA_REQUISITIONLINESCOSTFACT (10930500) fails with return code 17410.
ODI-1226: Step SDE_ORA_RequisitionLinesCostFact.W_RQSTN_LINE_COST_FS fails after 1 attempt(s).
ODI-1240: Flow SDE_ORA_RequisitionLinesCostFact.W_RQSTN_LINE_COST_FS fails while performing a Loading operation. This flow loads target table W_RQSTN_LINE_COST_FS.
ODI-1227: Task SrcSet0 (Loading) fails on the source ORACLE connection OLTP.
Caused By: java.sql.SQLRecoverableException: No more data to read from socket
My target DB is 11.2.0.3.0.
What could be the exact reason behind this issue?
Thanks in advance
REgards,
Saro
It seems to be a bug with the procurement load in BIAPPS 11g, hence we need to contact Oracle Support to solve this issue.
Regards,
Saro -
Quickstart fails with a fresh install of the All-in-One Installer
Hi all,
I did the quick install with the all-in-one installer and started the Endeca Server from the program menu; I can then get to http://localhost:8080 and log in successfully.
Then I start the Integrator and open the quickstart project, but when I try to run Baseline.grf it fails with:
ERROR [WatchDog] - Graph execution finished with error
ERROR [WatchDog] - Node WEB_SERVICE_CLIENT0 finished with status: ERROR caused by: org.apache.axis2.AxisFault: Read timed out
ERROR [WatchDog] - Node WEB_SERVICE_CLIENT0 error details:
com.opensys.cloveretl.component.ws.exception.SendingMessegeException: org.apache.axis2.AxisFault: Read timed out
at com.opensys.cloveretl.component.ws.proxy.b.b(Unknown Source)
at com.opensys.cloveretl.component.ws.proxy.b.a(Unknown Source)
at com.opensys.cloveretl.component.WebServiceClient.a(Unknown Source)
at com.opensys.cloveretl.component.WebServiceClient.execute(Unknown Source)
at org.jetel.graph.Node.run(Node.java:414)
at java.lang.Thread.run(Thread.java:619)
Caused by: org.apache.axis2.AxisFault: Read timed out
at org.apache.axis2.AxisFault.makeFault(AxisFault.java:430)
at org.apache.axis2.transport.http.HTTPSender.sendViaPost(HTTPSender.java:193)
at org.apache.axis2.transport.http.HTTPSender.send(HTTPSender.java:75)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.writeMessageWithCommons(CommonsHTTPTransportSender.java:389)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.invoke(CommonsHTTPTransportSender.java:222)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:435)
at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:402)
at org.apache.axis2.description.OutInAxisOperationClient$NonBlockingInvocationWorker.run(OutInAxisOperation.java:442)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
... 1 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
at org.apache.commons.httpclient.HttpParser.readRawLine(HttpParser.java:78)
at org.apache.commons.httpclient.HttpParser.readLine(HttpParser.java:106)
at org.apache.commons.httpclient.HttpConnection.readLine(HttpConnection.java:1116)
at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.readLine(MultiThreadedHttpConnectionManager.java:1413)
at org.apache.commons.httpclient.HttpMethodBase.readStatusLine(HttpMethodBase.java:1973)
at org.apache.commons.httpclient.HttpMethodBase.readResponse(HttpMethodBase.java:1735)
at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1098)
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:346)
at org.apache.axis2.transport.http.AbstractHTTPSender.executeMethod(AbstractHTTPSender.java:550)
at org.apache.axis2.transport.http.HTTPSender.sendViaPost(HTTPSender.java:189)
... 9 more
INFO [WatchDog] - [Clover] Post-execute phase finalization: 0
INFO [WatchDog] - [Clover] phase: 0 post-execute finalization successfully.
INFO [WatchDog] - Execution of phase [0] finished with error - elapsed time(sec): 600
ERROR [WatchDog] - !!! Phase finished with error - stopping graph run !!!
INFO [WatchDog] - -----------------------** Summary of Phases execution **---------------------
INFO [WatchDog] - Phase# Finished Status RunTime(sec) MemoryAllocation(KB)
INFO [WatchDog] - 0 ERROR 600 16351
INFO [WatchDog] - ------------------------------** End of Summary **---------------------------
INFO [WatchDog] - WatchDog thread finished - total execution time: 600 (sec)
INFO [main] - Freeing graph resources.
ERROR [main] - Execution of graph failed !
and the Start Endeca Server window is stuck on "Starting":
2012-12-14 22:09:48.043:INFO:oejs.AbstractConnector:Started SelectChannelConnector@0.0.0.0:7770 STARTING
Any help is really appreciated.
Thanks,
Jason.
Baseline.grf is not able to connect to the Endeca Server. Make sure the Endeca Server has started correctly and then run the Baseline.grf graph.
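Before kicking off Baseline.grf again, it can help to verify the Endeca Server port is reachable at all; a minimal sketch (7770 is the port from the startup log above, adjust for your install):

```python
import socket

# Sketch: quick TCP reachability probe before running the graph, so a
# "Read timed out" from the WebServiceClient can be separated from a
# server that never came up.
def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("localhost", 7770))
```

If this returns False, fix the Endeca Server startup first; if True but the graph still times out, the server is up but slow to answer the web-service call.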
Regards,
Srikanth -
Filesystem restore failing: "NDMP server reported a general error"
When I perform a filesystem restore to a different location, it fails with the error message "NDMP server reported a general error (name not found?)", whereas restoring
to the same location succeeds without any error.
Please find below the transcript output for the failed job with debug on.
ob>catxcr -fl0 admin/80
2012/09/04.13:17:33 ______________________________________________________________________
2012/09/04.13:17:33
2012/09/04.13:17:33 Transcript for job admin/80 running on backup-server
2012/09/04.13:17:33
2012/09/04.13:17:33 (amh) qdv__automount_in_mh entered
2012/09/04.13:17:33 (amh) qdv__automount_in_mh tape at 2012/09/04.13:17:33, flags 0x100
2012/09/04.13:17:33 (amh) mount volume options list contains:
2012/09/04.13:17:33 (amh) vtype 1 (rd), vid DC-ORCL-MF-000001, vs_create 1346566310, family (null), retain (null), size 0,
mediainfo 2, scratch 0
2012/09/04.13:17:34 (amh) don't preserve previous mh automount state
2012/09/04.13:17:34 (gep) getting reservation for element 0x1 (dte)
2012/09/04.13:17:34 (una) unload_anywhere entered
2012/09/04.13:17:34 (fal) find_and_load entered
2012/09/04.13:17:34 (fal) calling find_vid2 for volume DC-ORCL-MF-000001
2012/09/04.13:17:34 (fal) find_vid2 worked - volume DC-ORCL-MF-000001 in se11 (not in drive)
2012/09/04.13:17:34 (fal) moving volume FL-MF-000001 from se11 to dte1 (tape)
2012/09/04.13:18:12 (fal) load of tape worked; returning to do automount
2012/09/04.13:18:12 (fal) find_and_load exited
2012/09/04.13:18:12 (atv) qdv__automount_this_vol entered
2012/09/04.13:18:12 (atv) calling qdv__mount
2012/09/04.13:18:12 (mt) qdv__read_mount_db() succeeded, found vol_oid 0
2012/09/04.13:18:20 (mt) qdv__read_label() succeeded; read 65536 bytes
2012/09/04.13:18:20 (mt) exp time obtained from label
2012/09/04.13:18:20 (mt) qdb__label_event() returned vol_oid 137
2012/09/04.13:18:20 (mt) setting vol_oid in mount_info to 137
2012/09/04.13:18:20 (mt) updated volume close time from db
2012/09/04.13:18:20 (atv) qdv__mount succeeded
2012/09/04.13:18:20 (atv) automount worked
2012/09/04.13:18:20 (atv) qdv__automount_this_vol exited
2012/09/04.13:18:20 (gep) getting reservation for element 0x1 (dte)
2012/09/04.13:18:20 (amh) 0 automount worked - returning
2012/09/04.13:18:20 (amh) end of automount at 2012/09/04.13:18:20 (0x0)
2012/09/04.13:18:20 (amh) returning from qdv__automount_in_mh
2012/09/04.13:18:20 Info: volume in tape is usable for this operation.
13:18:20 OBTR: obtar version 10.4.0.1.0 (Solaris) -- Fri Sep 23 23:41:16 PDT 2011
Copyright (c) 1992, 2011, Oracle. All rights reserved.
13:18:20 OBTR: obtar -Xjob:admin/80 -Xob:10.4 -xOz -Xbga:admin/80 -JJJJv -y /usr/tmp/[email protected] -Xrdf:admin/80 -e DC-ORCL-
MF-000001 -F3 -f tape -Xrescookie:0xBE1A8F2 -H client01 -u
13:18:20 RRDF: restore "/wdn/file01" as "/restore", pos 000043290003
13:18:20 OBTR: running as root/root
13:18:20 OBTR: record storage set to internal memory
13:18:20 ATAL: reserved drive tape, cookie 0xBE1A8F2
13:18:20 OBTR: obsd=1, is_job=1, is_priv=0, os=3
13:18:20 OBTR: rights established for user admin, class admin
13:18:20 SUUI: user info root/root, ??/??
13:18:21 MAIN: using blocking factor 128 from media defaults/policies
13:18:21 STTY: background terminal I/O or is a tty
13:18:21 MAIN: interactive
13:18:21 DOLM: nop (for tape (raw device "/dev/obt1"))
13:18:21 DOLM: ok
13:18:22 RLE: connecting to volume/archive database host
13:18:22 RLE: device tape (raw device "/dev/obt1")
13:18:22 RLE: mount_info is valid
13:18:22 RLE: qdb__device_spec_se reports vol_oid 0, arch_oid 0
13:18:22 A_O: using max blocking factor 128 from media defaults/policies
13:18:22 A_O: tape device is local
13:18:22 A_O: Devname: HP,Ultrium 4-SCSI,H61W
13:18:22 Info version: 11
13:18:22 WS version: 10.4
13:18:22 Driver version: 10.4
13:18:22 Max DMA: 2097152
13:18:22 Blocksize in use: 65536
13:18:22 Query frequency: 134217728
13:18:22 Rewind on close: false
13:18:22 Can compress: true
13:18:22 Compression enabled: true
13:18:22 Device supports encryption: true
13:18:22 8200 media: false
13:18:22 Remaining tape: 819375104
13:18:22 A_GB: ar_block at 0x100352000, size=2097152
13:18:22 A_GB: ar_block_enc at 0x100554000, size=2097152
13:18:22 ADMS: reset library tape selection state
13:18:22 ADMS: reset complete
13:18:22 GLMT: returning "", code = 0x0
13:18:22 VLBR: from chk_lm_tag: "", code = 0x0
13:18:22 VLBR: tag on label just read: ""
13:18:22 VLBR: master tag now ""
13:18:22 RLE: noticed volume TEST-MF-000001, file 1, section 1, vltime 1346566310, vowner root, voltag
13:18:22 RLE: qdb__noticed_se reports vol_oid 137, arch_oid 369
(alv) backup image label is valid, file 1, section 1
(ial) invalidate backup image label (was valid)
13:18:22 RSMD: rewrote mount db for tape
13:18:22 ULVI: set mh db volume id "DC-ORCL-MF-000001" (retid ""), volume oid 137, code 0
13:18:22 ULTG: set mh db tag "" (retid "DC-ORCL-MF-000001"), volume oid 137, code 0
13:18:22 CALE: created backup section oid list entry for oid 369
13:18:22 PF: here's the label at the current position:
Volume label:
Intro time: Fri May 04 13:35:03 2012
Volume UUID: d40ea6c6-d6c2-102f-bf51-da716418c062
Volume ID: TEST-MF-000001
Volume sequence: 1
Volume set owner: root
Volume set created: Sun Sep 02 11:56:50 2012
Volume set closes: Sat Dec 01 11:56:50 2012 (no writes after this time)
Volume set expires: Sat Mar 02 11:56:50 2013
Media family: TEST-MF
Original UUID: d40ea6c6-d6c2-102f-bf51-da716418c062
Archive label:
File number: 1
File section: 1
Owner: root
Client host: client01
Backup level: 0
S/w compression: no
Archive created: Sun Sep 02 11:56:50 2012
Archive owner UUID: f32ac938-6410-102f-a3d5-b94c4468403b
Owner class UUID: f32a3504-6410-102f-a3d5-b94c4468403b
Encryption: off
Searching tape for requested file. Please wait...
13:18:22 PF: spacing forward 2 FMs
13:18:24 VLBR: not at bot: 0x90000000
13:18:24 VLBR: tag on label just read: ""
13:18:24 VLBR: master tag now ""
13:18:24 RLE: noticed volume TEST-MF-000001, file 3, section 1, vltime 1346566310, vowner root, voltag
13:18:24 RLE: qdb__noticed_se reports vol_oid 137, arch_oid 380
(alv) backup image label is not valid
13:18:24 ULVI: set mh db volume id "TEST-MF-000001" (retid ""), volume oid 137, code 0
13:18:24 ULTG: set mh db tag "" (retid "DC-ORCL-MF-000001"), volume oid 137, code 0
13:18:24 CALE: created backup section oid list entry for oid 380
13:18:24 VLBR: setting last section flag for backup section oid 369
13:18:24 PF: here's the label at the current position:
Volume label:
Intro time: Fri May 04 13:35:03 2012
Volume UUID: d40ea6c6-d6c2-102f-bf51-da716418c062
Volume ID: TEST-MF-000001
Volume sequence: 1
Volume set owner: root
Volume set created: Sun Sep 02 11:56:50 2012
Volume set closes: Sat Dec 01 11:56:50 2012 (no writes after this time)
Volume set expires: Sat Mar 02 11:56:50 2013
Media family: TEST-MF
Original UUID: d40ea6c6-d6c2-102f-bf51-da716418c062
Archive label:
File number: 3
File section: 1
Owner: root
Client host: client01
Backup level: 0
S/w compression: no
Archive created: Tue Sep 04 11:53:17 2012
Archive owner UUID: f32ac938-6410-102f-a3d5-b94c4468403b
Owner class UUID: f32a3504-6410-102f-a3d5-b94c4468403b
Encryption: off
13:18:24 PF: at desired location
13:18:24 ACFD: positioning (SCSI LOCATE) is available for this device
13:18:24 ADMS: reset library tape selection state
13:18:24 ADMS: reset complete
13:18:24 VLBR: not at bot: 0x90000000
13:18:24 VLBR: tag on label just read: ""
13:18:24 VLBR: master tag now ""
13:18:24 RLE: noticed volume DC-ORCL-MF-000001, file 3, section 1, vltime 1346566310, vowner root, voltag
13:18:24 RLE: qdb__noticed_se reports vol_oid 137, arch_oid 380
(alv) backup image label is not valid
13:18:25 ULVI: set mh db volume id "DC-ORCL-MF-000001" (retid ""), volume oid 137, code 0
13:18:25 ULTG: set mh db tag "" (retid "DC-ORCL-MF-000001"), volume oid 137, code 0
13:18:25 CALE: found existing backup section oid list entry for oid 380
13:18:25 ADMS: reset library tape selection state
13:18:25 ADMS: reset complete
13:18:25 RLE: read volume DC-ORCL-MF-000001, file 3, section 1, vltime 1346566310, vowner root, voltag
13:18:25 RLE: qdb__read_se reports vol_oid 137, arch_oid 380
(alv) backup image label is not valid
13:18:25 ULVI: set mh db volume id "DC-ORCL-MF-000001" (retid ""), volume oid 137, code 0
13:18:25 ULTG: set mh db tag "" (retid "DC-ORCL-MF-000001"), volume oid 137, code 0
13:18:25 PTNI: positioning to "/wdn/file01" at 000043290003
13:18:27 CNPC: data host reports this butype_info:
13:18:27 CNPC: tar (attr 0x2C78: B_DIRECT, R_DIRECT, B_INCR, R_INCR, B_FH_DIR)
13:18:27 CNPC: DIRECT = y
13:18:27 CNPC: HISTORY = y
13:18:27 CNPC: LEVEL = 0
13:18:28 SNP: connection types supported by mover -
13:18:28 tcp
13:18:28 SNP: connection types supported by data service -
13:18:28 tcp
13:18:28 SNP: selected data connection type: tcp
13:18:28 SNP: using separate data and tape/mover connections
13:18:28 SNP: the NDMP protocol version for tape/mover is 4, for data is 4
13:18:28 SNP: backup-server's NDMP tape/mover service session id is 7844
13:18:28 RONPC: begin OSB NDMP data restore
13:18:28 RONPC: need to restore from "/wdn/file01" tree:
13:18:28 RONPC: tape position before restore is 000043290004
13:18:28 MGS: ms.record_size 65536, ms.record_num 0x0, ms.bytes_moved 0x0
13:18:28 RONPC: about to start restore; here are the environment variables:
13:18:28 RONPC: env BEGINTREE=1
13:18:28 RONPC: env NAME=/wdn/file01
13:18:28 RONPC: env AS=/restore
13:18:28 RONPC: env POSLEN=6
13:18:28 RONPC: env POS=
13:18:28 RONPC: env BLEVEL=0
13:18:28 RONPC: env FIRSTCH=1
13:18:28 RONPC: env POS_HERE=1
13:18:28 RONPC: env EX2KTYPE=
13:18:28 RONPC: env DATA_BLOCK_SIZE=64
13:18:28 RONPC: env SKIP_RECORDS=3
13:18:28 RONPC: env LABEL_VERSION=0000012
13:18:28 SMW: setting NDMP mover window to offset 0x0, length 0xFFFFFFFFFFFFFFFF
13:18:28 MLIS: mover listen ok for tcp connection; these addresses were reported:
13:18:28 MLIS: 0.0.0.0:58243
13:18:28 MLIS: 192.168.3.109:58243
13:18:28 RONPC: tape fileno/blockno before restore are 0/0
13:18:28 APNI: a preferred network interface does not apply to this connection
13:18:28 DPNI: load balancing is in use, skipping default PNI
13:18:28 RONPC: directing data service to connect to mover
13:18:01 PPVL: obtar option OB_JOB = admin/80
13:18:01 PPVL: obtar option OB_RB = 10.4
13:18:01 PPVL: obtar option OB_EXTR = 1
13:18:01 PPVL: obtar option OB_EXTRACT_ONCE = 1
13:18:01 PPVL: obtar option OB_DEBUG = 1
13:18:01 PPVL: obtar option OB_DEBUG = 1
13:18:01 PPVL: obtar option OB_DEBUG = 1
13:18:01 PPVL: obtar option OB_DEBUG = 1
13:18:01 PPVL: obtar option OB_VERBOSE = 1
13:18:01 PPVL: obtar option OB_CLIENT = client01
13:18:01 PPVL: obtar option OB_HONOR_IN_USE_LOCK = 1
13:18:01 PPVL: obtar option OB_STAT = 1
13:18:01 PPVL: obtar option OB_VOLUME_LABEL = 1
13:18:01 PPVL: obtar option OB_SKIP_CDFS = 1
13:18:01 PPVL: obtar option OB_DEVICE = tape
13:18:01 PPVL: obtar option OB_BLOCKING_FACTOR = 128
13:18:01 PPVL: obtar option OB_VERIFY_ARCHIVE = no
13:18:01 PPVL: obtar option OB_PQT = 134217728
13:18:01 DSIN: 2GB+ files are supported, 2GB+ directories are supported
13:18:01 SETC: identity is already root/root
13:18:28 qtarndmp__ssl_setup: SSL has been disabled via the security policy
13:18:28 RONPC: issuing NDMP_DATA_START_RECOVER
13:18:33 RONPC: started NDMP restore
13:18:33 MNPO: received NDMP_NOTIFY_DATA_READ, offset 0x0, length 0xFFFFFFFFFFFFFFFF
13:18:33 MNPO: sent corresponding NDMP_MOVER_READ
13:18:33 QTOS: received osb_stats message for job admin/80, kbytes 64, nfiles 0
13:18:33 await_ndmp_event: sending progress update
13:18:33 SPU: sending progress update
Error: Could not make file /restore: Is a directory
13:19:27 MNPO: jumped over filemark fence
13:19:27 VLBR: not at bot: 0x90000000
13:19:27 VLBR: tag on label just read: ""
13:19:27 QTOS: received osb_stats message for job admin/80, kbytes 3145856, nfiles 0
13:19:27 VLBR: master tag now ""
13:19:27 RLE: set kb remaining to 819375104
13:19:27 RLE: qdb__set_kb_rem_se reports vol_oid 0, arch_oid 0
13:19:27 RLE: noticed nil label
13:19:27 RLE: qdb__noticed_se reports vol_oid 0, arch_oid 0
13:19:27 VLBR: setting last section flag for backup section oid 380
13:19:27 MNPO: sent successful mover close
13:19:27 MNPO: data service halted with reason=internal error
13:19:27 SNPD: Data Service reported bytes processed 0xC0020000
13:19:27 SNPD: stopping NDMP data service (to transition to idle state)
13:19:27 MNPO: mover halted with reason=connection closed
13:19:27 MGS: ms.record_size 65536, ms.record_num 0xC002, ms.bytes_moved 0xC0020000
Error: NDMP operation failed: unspecified error reported (see above)
13:19:27 RONPC: finished NDMP restore with status 97
13:19:27 RONPC: NDMP read-ahead positioned tape past filemark; backing up
13:19:27 RONPC: We believe this because initial file # 0 isn't end file # 1
13:19:27 RONPC: the section-relative block number at end of restore is 0x1
13:19:27 RONPC: tape position after restore is 0001032B0080
13:19:27 QREX: exit status upon entry is 97
13:19:27 QREX: released reservation on tape drive tape
13:19:27 RDB: reading volume record for oid 137
13:19:27 RDB: reading section record for oid 369
13:19:27 RDB: adding record for oid 369 (file 1, section 1) to section list
13:19:27 RDB: reading section record for oid 378
13:19:27 RDB: adding record for oid 378 (file 2, section 1) to section list
13:19:27 RDB: reading section record for oid 380
13:19:27 RDB: adding record for oid 380 (file 3, section 1) to section list
13:19:27 RDB: file 1 has all 1 required sections; clearing incomplete backup flags
13:19:27 RDB: reading section record for oid 369
13:19:27 RDB: file 2 has all 1 required sections; clearing incomplete backup flags
13:19:27 RDB: reading section record for oid 378
13:19:27 RDB: file 3 has all 1 required sections; clearing incomplete backup flags
13:19:27 RDB: reading section record for oid 380
13:19:27 RDB: 1 volumes in volume list
13:19:27 RDB: volume oid 137 reports first:last files of 1:3
13:19:27 RDB: marking volume oid 137 as authoritative
13:19:27 VMA: reading volume record for oid 137
13:19:27 RLYX: exit status 97; checking allocs...
13:19:27 RLYX: from mm__check_all: 1
ob> catxcr -fl0 admin/81
2012/09/04.13:19:29 ______________________________________________________________________
2012/09/04.13:19:29
2012/09/04.13:19:29 Transcript for job admin/81 running on backup-server
2012/09/04.13:19:29
2012/09/04.13:19:30 Info: mount data verified.
2012/09/04.13:19:30 Info: volume in tape is usable for this operation.
13:19:31 OBTR: obtar version 10.4.0.1.0 (Solaris) -- Fri Sep 23 23:41:16 PDT 2011
Copyright (c) 1992, 2011, Oracle. All rights reserved.
13:19:31 OBTR: obtar -Xjob:admin/81 -Xob:10.4 -xOz -Xbga:admin/81 -JJJJv -y /usr/tmp/[email protected] -Xrdf:admin/81 -e DC-ORCL-
MF-000001 -F1 -f tape -Xrescookie:0xBE1A8F6 -H client01 -u
13:19:31 RRDF: restore "/wdn/testf" as "/restore", pos 000000010003
13:19:31 OBTR: running as root/root
13:19:31 OBTR: record storage set to internal memory
13:19:31 ATAL: reserved drive tape, cookie 0xBE1A8F6
13:19:31 OBTR: obsd=1, is_job=1, is_priv=0, os=3
13:19:31 OBTR: rights established for user admin, class admin
13:19:31 SUUI: user info root/root, ??/??
13:19:31 MAIN: using blocking factor 128 from media defaults/policies
13:19:31 STTY: background terminal I/O or is a tty
13:19:31 MAIN: interactive
13:19:31 DOLM: nop (for tape (raw device "/dev/obt1"))
13:19:31 DOLM: ok
13:19:32 RLE: connecting to volume/archive database host
13:19:32 RLE: device tape (raw device "/dev/obt1")
13:19:32 RLE: mount_info is valid
13:19:32 RLE: qdb__device_spec_se reports vol_oid 0, arch_oid 0
13:19:32 A_O: using max blocking factor 128 from media defaults/policies
13:19:32 A_O: tape device is local
13:19:32 A_O: Devname: HP,Ultrium 4-SCSI,H61W
13:19:32 Info version: 11
13:19:32 WS version: 10.4
13:19:32 Driver version: 10.4
13:19:32 Max DMA: 2097152
13:19:32 Blocksize in use: 65536
13:19:32 Query frequency: 134217728
13:19:32 Rewind on close: false
13:19:32 Can compress: true
13:19:32 Compression enabled: true
13:19:32 Device supports encryption: true
13:19:32 8200 media: false
13:19:32 Remaining tape: 819375104
13:19:32 A_GB: ar_block at 0x100352000, size=2097152
13:19:32 A_GB: ar_block_enc at 0x100554000, size=2097152
13:19:32 ADMS: reset library tape selection state
13:19:32 ADMS: reset complete
13:19:35 ACFD: positioning (SCSI LOCATE) is available for this device
13:19:35 GLMT: returning "", code = 0x0
13:19:35 VLBR: from chk_lm_tag: "", code = 0x0
13:19:35 VLBR: tag on label just read: ""
13:19:35 VLBR: master tag now ""
13:19:35 RLE: noticed volume DC-ORCL-MF-000001, file 1, section 1, vltime 1346566310, vowner root, voltag
13:19:35 RLE: qdb__noticed_se reports vol_oid 137, arch_oid 369
(alv) backup image label is valid, file 4, section 1
(ial) invalidate backup image label (was valid)
13:19:35 RSMD: rewrote mount db for tape
13:19:35 ULVI: set mh db volume id "DC-ORCL-MF-000001" (retid ""), volume oid 137, code 0
13:19:35 ULTG: set mh db tag "" (retid "DC-ORCL-MF-000001"), volume oid 137, code 0
13:19:35 CALE: created backup section oid list entry for oid 369
13:19:35 PF: here's the label at the current position:
Volume label:
Intro time: Fri May 04 13:35:03 2012
Volume UUID: d40ea6c6-d6c2-102f-bf51-da716418c062
Volume ID: DC-ORCL-MF-000001
Volume sequence: 1
Volume set owner: root
Volume set created: Sun Sep 02 11:56:50 2012
Volume set closes: Sat Dec 01 11:56:50 2012 (no writes after this time)
Volume set expires: Sat Mar 02 11:56:50 2013
Media family: DC-ORCL-MF
Original UUID: d40ea6c6-d6c2-102f-bf51-da716418c062
Archive label:
File number: 1
File section: 1
Owner: root
Client host: client01
Backup level: 0
S/w compression: no
Archive created: Sun Sep 02 11:56:50 2012
Archive owner UUID: f32ac938-6410-102f-a3d5-b94c4468403b
Owner class UUID: f32a3504-6410-102f-a3d5-b94c4468403b
Encryption: off
13:19:35 PF: at desired location
13:19:35 BT: resid is 1
13:19:35 ACFD: positioning (SCSI LOCATE) is available for this device
13:19:35 ADMS: reset library tape selection state
13:19:35 ADMS: reset complete
13:19:35 GLMT: returning "", code = 0x0
13:19:35 VLBR: from chk_lm_tag: "", code = 0x0
13:19:35 VLBR: tag on label just read: ""
13:19:35 VLBR: master tag now ""
13:19:35 RLE: noticed volume DC-ORCL-MF-000001, file 1, section 1, vltime 1346566310, vowner root, voltag
13:19:35 RLE: qdb__noticed_se reports vol_oid 137, arch_oid 369
(alv) backup image label is not valid
13:19:35 ULVI: set mh db volume id "DC-ORCL-MF-000001" (retid ""), volume oid 137, code 0
13:19:35 ULTG: set mh db tag "" (retid "DC-ORCL-MF-000001"), volume oid 137, code 0
13:19:35 CALE: found existing backup section oid list entry for oid 369
13:19:35 ADMS: reset library tape selection state
13:19:35 ADMS: reset complete
13:19:35 RLE: read volume DC-ORCL-MF-000001, file 1, section 1, vltime 1346566310, vowner root, voltag
13:19:35 RLE: qdb__read_se reports vol_oid 137, arch_oid 369
(alv) backup image label is not valid
13:19:35 ULVI: set mh db volume id "DC-ORCL-MF-000001" (retid ""), volume oid 137, code 0
13:19:36 ULTG: set mh db tag "" (retid "DC-ORCL-MF-000001"), volume oid 137, code 0
13:19:36 PTNI: positioning to "/wdn/testf" at 000000010003
13:19:37 CNPC: data host reports this butype_info:
13:19:37 CNPC: tar (attr 0x2C78: B_DIRECT, R_DIRECT, B_INCR, R_INCR, B_FH_DIR)
13:19:37 CNPC: DIRECT = y
13:19:37 CNPC: HISTORY = y
13:19:37 CNPC: LEVEL = 0
13:19:38 SNP: connection types supported by mover -
13:19:38 tcp
13:19:38 SNP: connection types supported by data service -
13:19:38 tcp
13:19:38 SNP: selected data connection type: tcp
13:19:38 SNP: using separate data and tape/mover connections
13:19:38 SNP: the NDMP protocol version for tape/mover is 4, for data is 4
13:19:38 SNP: backup-server's NDMP tape/mover service session id is 7935
13:19:38 RONPC: begin OSB NDMP data restore
13:19:38 RONPC: need to restore from "/wdn/testf" tree:
13:19:38 RONPC: tape position before restore is 000000010004
13:19:38 MGS: ms.record_size 65536, ms.record_num 0x0, ms.bytes_moved 0x0
13:19:38 RONPC: about to start restore; here are the environment variables:
13:19:38 RONPC: env BEGINTREE=1
13:19:38 RONPC: env NAME=/wdn/testf
13:19:38 RONPC: env AS=/restore
13:19:38 RONPC: env POSLEN=6
13:19:38 RONPC: env POS=
13:19:38 RONPC: env BLEVEL=0
13:19:38 RONPC: env FIRSTCH=1
13:19:38 RONPC: env POS_HERE=1
13:19:38 RONPC: env EX2KTYPE=
13:19:38 RONPC: env DATA_BLOCK_SIZE=64
13:19:38 RONPC: env SKIP_RECORDS=3
13:19:38 RONPC: env LABEL_VERSION=0000012
13:19:38 SMW: setting NDMP mover window to offset 0x0, length 0xFFFFFFFFFFFFFFFF
13:19:38 MLIS: mover listen ok for tcp connection; these addresses were reported:
13:19:38 MLIS: 192.168.3.109:58303
13:19:38 MLIS: 0.0.0.0:58303
13:19:38 RONPC: tape fileno/blockno before restore are 0/0
13:19:38 APNI: a preferred network interface does not apply to this connection
13:19:38 DPNI: load balancing is in use, skipping default PNI
13:19:38 RONPC: directing data service to connect to mover
13:19:11 PPVL: obtar option OB_JOB = admin/81
13:19:11 PPVL: obtar option OB_RB = 10.4
13:19:11 PPVL: obtar option OB_EXTR = 1
13:19:11 PPVL: obtar option OB_EXTRACT_ONCE = 1
13:19:11 PPVL: obtar option OB_DEBUG = 1
13:19:11 PPVL: obtar option OB_DEBUG = 1
13:19:11 PPVL: obtar option OB_DEBUG = 1
13:19:11 PPVL: obtar option OB_DEBUG = 1
13:19:11 PPVL: obtar option OB_VERBOSE = 1
13:19:11 PPVL: obtar option OB_CLIENT = client01
13:19:11 PPVL: obtar option OB_HONOR_IN_USE_LOCK = 1
13:19:11 PPVL: obtar option OB_STAT = 1
13:19:11 PPVL: obtar option OB_VOLUME_LABEL = 1
13:19:11 PPVL: obtar option OB_SKIP_CDFS = 1
13:19:11 PPVL: obtar option OB_DEVICE = tape
13:19:11 PPVL: obtar option OB_BLOCKING_FACTOR = 128
13:19:11 PPVL: obtar option OB_VERIFY_ARCHIVE = no
13:19:11 PPVL: obtar option OB_PQT = 134217728
13:19:11 DSIN: 2GB+ files are supported, 2GB+ directories are supported
13:19:11 SETC: identity is already root/root
13:19:38 qtarndmp__ssl_setup: SSL has been disabled via the security policy
13:19:38 RONPC: issuing NDMP_DATA_START_RECOVER
13:19:43 RONPC: started NDMP restore
13:19:43 MNPO: received NDMP_NOTIFY_DATA_READ, offset 0x0, length 0xFFFFFFFFFFFFFFFF
13:19:43 MNPO: sent corresponding NDMP_MOVER_READ
13:19:43 QTOS: received osb_stats message for job admin/81, kbytes 64, nfiles 0
13:19:43 await_ndmp_event: sending progress update
13:19:43 SPU: sending progress update
/restore
Error: Could not make file /restore: Is a directory
13:19:44 MNPO: jumped over filemark fence
13:19:44 VLBR: not at bot: 0x90000000
13:19:44 VLBR: tag on label just read: ""
13:19:44 QTOS: received osb_stats message for job admin/81, kbytes 51328, nfiles 0
13:19:44 VLBR: master tag now ""
13:19:44 RLE: noticed volume DC-ORCL-MF-000001, file 2, section 1, vltime 1346566310, vowner root, voltag
13:19:44 RLE: qdb__noticed_se reports vol_oid 137, arch_oid 378
(alv) backup image label is not valid
13:19:45 ULVI: set mh db volume id "DC-ORCL-MF-000001" (retid ""), volume oid 137, code 0
13:19:45 ULTG: set mh db tag "" (retid "DC-ORCL-MF-000001"), volume oid 137, code 0
13:19:45 CALE: created backup section oid list entry for oid 378
13:19:45 VLBR: setting last section flag for backup section oid 369
13:19:45 MNPO: sent successful mover close
13:19:45 MNPO: data service halted with reason=internal error
13:19:45 SNPD: Data Service reported bytes processed 0x3220000
13:19:45 SNPD: stopping NDMP data service (to transition to idle state)
13:19:45 MNPO: mover halted with reason=connection closed
13:19:45 MGS: ms.record_size 65536, ms.record_num 0x322, ms.bytes_moved 0x3220000
Error: NDMP operation failed: unspecified error reported (see above)
13:19:45 RONPC: finished NDMP restore with status 97
13:19:45 RONPC: NDMP read-ahead positioned tape past filemark; backing up
13:19:45 RONPC: We believe this because initial file # 0 isn't end file # 1
13:19:45 RONPC: the section-relative block number at end of restore is 0x1
13:19:45 RONPC: tape position after restore is 000003230080
13:19:45 QREX: exit status upon entry is 97
13:19:45 QREX: released reservation on tape drive tape
13:19:45 RDB: reading volume record for oid 137
13:19:45 RDB: reading section record for oid 369
13:19:45 RDB: adding record for oid 369 (file 1, section 1) to section list
13:19:45 RDB: reading section record for oid 378
13:19:45 RDB: adding record for oid 378 (file 2, section 1) to section list
13:19:45 RDB: reading section record for oid 380
13:19:45 RDB: adding record for oid 380 (file 3, section 1) to section list
13:19:45 RDB: file 1 has all 1 required sections; clearing incomplete backup flags
13:19:45 RDB: reading section record for oid 369
13:19:45 RDB: file 2 has all 1 required sections; clearing incomplete backup flags
13:19:45 RDB: reading section record for oid 378
13:19:45 RDB: file 3 has all 1 required sections; clearing incomplete backup flags
13:19:45 RDB: reading section record for oid 380
13:19:45 RDB: 1 volumes in volume list
13:19:45 RDB: volume oid 137 reports first:last files of 1:3
13:19:45 RDB: marking volume oid 137 as authoritative
13:19:45 VMA: reading volume record for oid 137
13:19:45 RLYX: exit status 97; checking allocs...
13:19:45 RLYX: from mm__check_all: 1
ob>
Please help me to resolve the issue...
Thanks,
Sam
If you're restoring a file, you have to name it explicitly: if you are restoring /wdn/file01, then you specify the alternate path as /restore/file01
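The failure mode Rich describes can be reproduced outside OSB: trying to create a file at a path that is already a directory fails, which is exactly what "Error: Could not make file /restore: Is a directory" in the transcript reports. A small POSIX illustration (the paths and helper are hypothetical, not OSB internals):

```python
import os
import tempfile

def restore_to(path, data=b"payload"):
    """Write restored data to path; fails if path is an existing directory."""
    with open(path, "wb") as f:
        f.write(data)

base = tempfile.mkdtemp()
target = os.path.join(base, "restore")
os.mkdir(target)                 # "/restore" already exists as a directory

try:
    restore_to(target)           # like restoring /wdn/file01 "as" /restore
except OSError:                  # IsADirectoryError on POSIX
    # The fix Rich suggests: include the file name in the alternate
    # path, i.e. /restore/file01 rather than /restore.
    restore_to(os.path.join(target, "file01"))
```

Same-location restores succeed because the original path already names a file, not a directory.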
Thanks
Rich -
Orchestration Process Cleanup Task scheduled job is failing
Hi All,
We have an out-of-the-box scheduled job, "Orchestration Process Cleanup Task". This job is failing with the error below.
"Failed. oracle.iam.platformservice.exception.OrchDataCleanupException: java.sql.SQLSyntaxErrorException: ORA-00913: too many values"
This job takes two parameters, for which we have set the values below:
Batch size:100
Delete Just One Batch: No
I am not able to find out how this job is configured or how it deletes the values.
Please help me out and if anyone know about the links where I can get the info on this job, it will be much helpful.
Thanks
Ishank
As far as the documentation is concerned:
http://docs.oracle.com/cd/E21764_01/doc.1111/e14308/scheduler.htm#r26c1-t8
Batch Size: Use this attribute to specify the number of completed orchestration processes to be deleted in each iteration.
Delete Just One Batch: Use this attribute to specify the value true or false. Only a single batch is deleted if the value is true; if the value is false, all completed events are deleted one batch at a time in a loop.
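Read together, the two parameters describe a simple purge loop. A hypothetical sketch of those semantics (fetch_batch and delete_batch stand in for the actual OIM purge internals, which the documentation does not expose):

```python
def purge_completed(fetch_batch, delete_batch, batch_size=100,
                    delete_just_one_batch=False):
    """Delete completed orchestration processes batch by batch.

    fetch_batch(n) returns up to n completed-process ids (empty when
    none remain); delete_batch(ids) removes them. When
    delete_just_one_batch is True the loop stops after the first batch,
    mirroring the scheduled-job parameter described above.
    """
    total = 0
    while True:
        ids = fetch_batch(batch_size)
        if not ids:
            break
        delete_batch(ids)
        total += len(ids)
        if delete_just_one_batch:
            break
    return total
```

With Batch Size 100 and Delete Just One Batch set to No, the job keeps looping until no completed processes remain, so the ORA-00913 must come from the delete step itself rather than the loop parameters.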
So, let's try:
Batch Size: 1 (just 1)
Delete Just One Batch: yes (radio button)