Async RFC to file - data not passed to PI
Hi all,
I'm facing a peculiar problem in an Async RFC to File scenario.
I have done all the required configurations for it.
I referred to the following blogs:
/people/swaroopa.vishwanath/blog/2006/12/28/send-rfc-to-sap-xi-150-asynchronous
/people/michal.krawczyk2/blog/2005/03/29/configuring-the-sender-rfc-adapter--step-by-step
https://www.sdn.sap.com/irj/scn/advancedsearch?query=rfctofile+scenario
As soon as I trigger the scenario in R/3, I get a message in PI, but the data is not passed.
The payload I get in PI contains only empty tags.
Hi Ramya,
Since you are getting an empty payload, there is no problem in the connection between the R/3 and XI systems.
1. Check the logic you used in the RFC: place a breakpoint and verify that your execution actually fills the data.
2. Sometimes, after a redesign of the interface, the cache needs to be refreshed properly. Check this too.
Regards
Sunil.
Similar Messages
-
File dates not updated in finder window (Mozilla only)
OK, folks - here's a real puzzler for you. I'm using Mozilla 1.7.12 to maintain a simple set of web pages. When I save an HTML file with Mozilla composer, the file's modify date in the Finder window does not update consistently. (The window is in list view.)
The date does not update as long as that finder window remains in the background. Selecting the window sometimes updates one file date (but not others if I've modified multiple files). Switching to icon view sometimes updates the file dates, sometimes not. Only logging out and back in (i.e., starting a new Finder) guarantees that all dates are current. This only happens with Mozilla composer - saving files with other software results in the date in the finder window updating immediately. It seems to me this behavior is new since I updated to OSX 10.4.3, but I can't be positive.
Any bright ideas? - Andy
PowerMac G5, Mac OS X (10.4.3)
Is this causing any problems, or is it just quirky behavior? For example, if you change to list view and sort by date, does it still get the dates wrong?
I'm thinking this is probably a really marginal bug, where Composer isn't telling the Finder to refresh the date as most applications do, but then, Mozilla does a lot of unMaclike things. Eventually the Finder figures out that the files have been updated though. There are probably a host of ways you can force it to update, as you've already discovered. You can also force it to update with a simple AppleScript:
tell application "Finder"
update every item of window 1
end tell
This tells the Finder to update everything in the frontmost window to match what's on the disk. -
Testing RFC (Async RFC to file)
Hi,
Can we test whether a message has reached XI by triggering the FM in the sender SAP system, just by creating an RFC destination in the sender SAP system and a communication channel in XI with an RFC sender adapter?
I think we can't, right?
We tested the FM in the sender SAP system, but we get a short dump (transaction ST22: CALL_FUNCTION_REMOTE_ERROR) in the sender system.
So we just need to do all the steps as we usually do, right?
Is there any way to test whether our async RFC data is reaching the XI system?
thank you.
Babu
Hi Babu,
Have you checked the RFC destination? This could be as basic as an incorrect password for WF-BATCH.
Use SWU3 to test the RFC destination, then run the verification workflow and choose the "[...] in background" option. If this works, your RFC destination is OK.
Refer to this:
http://help.sap.com/saphelp_nw04s/helpdata/en/f9/3f69fd11a80b4e93a5c9230bafc767/frameset.htm
regards,
vasanth -
Sender RFC: async RFC to file scenario
Since I'm new to XI, can you please give an overview of the whole scenario that meets the requirements: wrapping one RFC in another and then passing the final export parameters to a file using XI?
-> I referred to Re: XI : RFC Response Mapping
and also to blogs 1438 and 3865,
but they are quite abstract as to how to trigger the RFCs.
Is calling the first RFC from an ABAP program and calling the second one from the first the only way? How do I achieve this?
-> In what cases do we need this wrapping?
Is there any way to just pass the export parameters of a single RFC to a file using XI?
-> I don't know where to start to get the scenario working; even the mappings are confusing.
For implementing a sender RFC (yet to work on this):
1) We cannot use every BAPI/RFC as-is for an async sender RFC.
2) We should use a wrapper around the BAPI/RFC, and it should again be a custom RFC.
3) All the data that you need in XI for mapping should be passed through EXPORTING or TABLES parameters only.
4) IMPORTING parameters should be commented out.
5) Call this custom RFC IN BACKGROUND TASK with a DESTINATION, and COMMIT WORK after calling it.
6) Create an RFC destination with the same program ID as in your communication channel.
7) Import your custom RFC into the repository.
8) Use the custom RFC (request, not response) as the interface in all your agreements, interface mapping, and message mapping.
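As a rough sketch of steps 5 and 6, the triggering call from an ABAP program looks like this (the function name Z_WRAPPER_RFC and destination XI_DEST are assumed names for illustration, not from the original post):

```abap
* Hypothetical sketch: wrapper RFC called asynchronously toward XI.
* Data leaves only via TABLES/EXPORTING (steps 3-4); the call is queued
* IN BACKGROUND TASK and actually sent when COMMIT WORK is executed.
DATA lt_data TYPE TABLE OF string.

APPEND 'sample record' TO lt_data.

CALL FUNCTION 'Z_WRAPPER_RFC'
  IN BACKGROUND TASK
  DESTINATION 'XI_DEST'   " TCP/IP destination; program ID matches the channel
  TABLES
    it_data = lt_data.

COMMIT WORK.              " triggers the queued transactional RFC
```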
That should be all the steps. You should get your data in receiver adapter. -
Text File Data not Displaying completely
Hi All,
I have one issue. The program correctly extracts the data and exports it to Excel, where all column/field data is displayed. However, when saving this data to a text file in the application server path, all column/field data is extracted but not completely displayed: the last 3 or 4 columns do not appear beyond the 257-character position. (The data is actually extracted; it is just not displayed when I check the text file via AL11.)
What could be the reason that not all column/field values are displayed in the text file, whereas everything is displayed when exporting to Excel? Is there a character length constraint for text files?
Thanks
Debabrata wrote:
> What could be the reason that not all column/field values are displayed in the text file, whereas everything is displayed when exporting to Excel? Is there a character length constraint for text files?
There is no restriction on the size of the file you can upload to the application server. AL11 is nothing but a report program, and there is a limitation on the maximum number of characters it displays per line.
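To confirm the data really is complete, you can read the file directly instead of relying on AL11's display. A minimal sketch (the path is an assumption for illustration):

```abap
* Hypothetical check: read the app-server file and print each record's
* actual length, which can exceed AL11's per-line display limit.
DATA: lv_file TYPE string VALUE '/tmp/output.txt',  " assumed path
      lv_line TYPE string,
      lv_len  TYPE i.

OPEN DATASET lv_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
IF sy-subrc = 0.
  DO.
    READ DATASET lv_file INTO lv_line.
    IF sy-subrc <> 0.
      EXIT.
    ENDIF.
    lv_len = strlen( lv_line ).
    WRITE: / 'Record length:', lv_len.  " full length, even beyond 257 chars
  ENDDO.
  CLOSE DATASET lv_file.
ENDIF.
```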
BR,
Suhas -
Some Get Info File Data Not Showing Up On Ext Hard Drives
I've stored all my videos on my external firewire harddrives. I notice some of the file information in the Get Info window on these videos is missing after they are transferred to the external harddrive. Last Opened, Dimensions and Duration are blank. If I play the video these fields remain blank. However the same file on my internal harddrive shows this data.
Any way to get this information to appear in these files on my external hard drives?
Kelvin
What volume format are your external hard drives?
For example, if they're formatted as Windows FAT32 rather than Mac OS Extended (HFS+), it's possible that that type of extended Finder information isn't supported or can't be preserved when copying the file to the drive.
Hope this helps.... -
HI,
I've developed a BAPI which reads a file from the client machine using FM GUI_UPLOAD. The BAPI is going to be called via RFC. It works fine whenever executed stand-alone. However, whenever it is called via RFC, the file is not read.
Can anyone share his/her experience with a similar requirement? Kindly advise.
Bye
Hello,
In some ways your problem is the same as one that is forever popping up here on SDN: how to download/upload a file to/from the presentation server in the background. In your scenario a remote process is trying to use the GUI to upload a file; in the SDN scenario it is a background process trying to do the same. In both cases the answer is the same: there is no easy way to do this. Your remote process will not be able to execute GUI_UPLOAD directly. One possible suggestion which might help may be found in this document:
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/9831750a-0801-0010-1d9e-f8c64efb2bd2
It is not pretty but it might give you some suggestions.
You can also search the forum for 'downloading to the presentation server in background' (or some variant of this string). You may get an idea or two from the previous posts.
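A common workaround, if the file can be placed on the application server rather than the client PC, is an RFC-enabled function that reads it with OPEN DATASET, which works in RFC/background contexts where GUI_UPLOAD cannot. A sketch (the function name and signature are assumptions, not a tested implementation):

```abap
* Hypothetical RFC-enabled function: server-side read that works when
* called remotely. GUI_UPLOAD requires a SAPGUI session; OPEN DATASET
* runs on the application server and does not.
FUNCTION z_read_server_file.  " assumed name
*"  IMPORTING VALUE(iv_path)  TYPE string
*"  EXPORTING VALUE(et_lines) TYPE string_table
  DATA lv_line TYPE string.

  OPEN DATASET iv_path FOR INPUT IN TEXT MODE ENCODING DEFAULT.
  IF sy-subrc <> 0.
    RETURN.  " file missing or not readable on the app server
  ENDIF.
  DO.
    READ DATASET iv_path INTO lv_line.
    IF sy-subrc <> 0.
      EXIT.
    ENDIF.
    APPEND lv_line TO et_lines.
  ENDDO.
  CLOSE DATASET iv_path.
ENDFUNCTION.
```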
Regards
Greg Kern -
I have one problem with Data Guard. My archive log files are not applied.
I have one problem with Data Guard. My archive log files are not applied. However I have received all archive log files to my physical Standby db
I have created a Physical Standby database on Oracle 10gR2 (Windows XP professional). Primary database is on another computer.
In Enterprise Manager on the primary database it looks OK: I get the message "Data Guard status: Normal".
But as I wrote above, the archive log files are not applied.
After I created the Physical Standby database, I have also done:
1. I connected to the Physical Standby database instance.
CONNECT SYS/SYS@luda AS SYSDBA
2. I started the Oracle instance at the Physical Standby database without mounting the database.
STARTUP NOMOUNT PFILE=C:\oracle\product\10.2.0\db_1\database\initluda.ora
3. I mounted the Physical Standby database:
ALTER DATABASE MOUNT STANDBY DATABASE
4. I started redo apply on Physical Standby database
alter database recover managed standby database disconnect from session
5. I switched the log files on Physical Standby database
alter system switch logfile
6. I verified the redo data was received and archived on Physical Standby database
select sequence#, first_time, next_time from v$archived_log order by sequence#
SEQUENCE# FIRST_TIME NEXT_TIME
3 2006-06-27 2006-06-27
4 2006-06-27 2006-06-27
5 2006-06-27 2006-06-27
6 2006-06-27 2006-06-27
7 2006-06-27 2006-06-27
8 2006-06-27 2006-06-27
7. I verified the archived redo log files were applied on Physical Standby database
select sequence#,applied from v$archived_log;
SEQUENCE# APP
4 NO
3 NO
5 NO
6 NO
7 NO
8 NO
8. on Physical Standby database
select * from v$archive_gap;
No rows
9. on Physical Standby database
SELECT MESSAGE FROM V$DATAGUARD_STATUS;
MESSAGE
ARC0: Archival started
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
ARC4: Archival started
ARC5: Archival started
ARC6: Archival started
ARC7: Archival started
ARC8: Archival started
ARC9: Archival started
ARCa: Archival started
ARCb: Archival started
ARCc: Archival started
ARCd: Archival started
ARCe: Archival started
ARCf: Archival started
ARCg: Archival started
ARCh: Archival started
ARCi: Archival started
ARCj: Archival started
ARCk: Archival started
ARCl: Archival started
ARCm: Archival started
ARCn: Archival started
ARCo: Archival started
ARCp: Archival started
ARCq: Archival started
ARCr: Archival started
ARCs: Archival started
ARCt: Archival started
ARC0: Becoming the 'no FAL' ARCH
ARC0: Becoming the 'no SRL' ARCH
ARC1: Becoming the heartbeat ARCH
Attempt to start background Managed Standby Recovery process
MRP0: Background Managed Standby Recovery process started
Managed Standby Recovery not using Real Time Apply
MRP0: Background Media Recovery terminated with error 1110
MRP0: Background Media Recovery process shutdown
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[1]: Assigned to RFS process 2148
RFS[1]: Identified database type as 'physical standby'
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[2]: Assigned to RFS process 2384
RFS[2]: Identified database type as 'physical standby'
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[3]: Assigned to RFS process 3188
RFS[3]: Identified database type as 'physical standby'
Primary database is in MAXIMUM PERFORMANCE mode
Primary database is in MAXIMUM PERFORMANCE mode
RFS[3]: No standby redo logfiles created
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[4]: Assigned to RFS process 3168
RFS[4]: Identified database type as 'physical standby'
RFS[4]: No standby redo logfiles created
Primary database is in MAXIMUM PERFORMANCE mode
RFS[3]: No standby redo logfiles created
10. on Physical Standby database
SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
RFS IDLE 0 0 0 0
RFS IDLE 0 0 0 0
RFS IDLE 1 9 13664 2
RFS IDLE 0 0 0 0
10) on Primary database:
select message from v$dataguard_status;
MESSAGE
ARC0: Archival started
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
ARC4: Archival started
ARC5: Archival started
ARC6: Archival started
ARC7: Archival started
ARC8: Archival started
ARC9: Archival started
ARCa: Archival started
ARCb: Archival started
ARCc: Archival started
ARCd: Archival started
ARCe: Archival started
ARCf: Archival started
ARCg: Archival started
ARCh: Archival started
ARCi: Archival started
ARCj: Archival started
ARCk: Archival started
ARCl: Archival started
ARCm: Archival started
ARCn: Archival started
ARCo: Archival started
ARCp: Archival started
ARCq: Archival started
ARCr: Archival started
ARCs: Archival started
ARCt: Archival started
ARCm: Becoming the 'no FAL' ARCH
ARCm: Becoming the 'no SRL' ARCH
ARCd: Becoming the heartbeat ARCH
Error 1034 received logging on to the standby
Error 1034 received logging on to the standby
LGWR: Error 1034 creating archivelog file 'luda'
LNS: Failed to archive log 3 thread 1 sequence 7 (1034)
FAL[server, ARCh]: Error 1034 creating remote archivelog file 'luda'
11)on primary db
select name,sequence#,applied from v$archived_log;
NAME SEQUENCE# APP
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00003_0594204176.001 3 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00004_0594204176.001 4 NO
Luda 4 NO
Luda 3 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00005_0594204176.001 5 NO
Luda 5 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00006_0594204176.001 6 NO
Luda 6 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00007_0594204176.001 7 NO
Luda 7 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00008_0594204176.001 8 NO
Luda 8 NO
12) on standby db
select name,sequence#,applied from v$archived_log;
NAME SEQUENCE# APP
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00004_0594204176.001 4 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00003_0594204176.001 3 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00005_0594204176.001 5 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00006_0594204176.001 6 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00007_0594204176.001 7 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00008_0594204176.001 8 NO
13) my init.ora files
On standby db
irina.__db_cache_size=79691776
irina.__java_pool_size=4194304
irina.__large_pool_size=4194304
irina.__shared_pool_size=75497472
irina.__streams_pool_size=0
*.audit_file_dest='C:\oracle\product\10.2.0\admin\luda\adump'
*.background_dump_dest='C:\oracle\product\10.2.0\admin\luda\bdump'
*.compatible='10.2.0.1.0'
*.control_files='C:\oracle\product\10.2.0\oradata\luda\luda.ctl'
*.core_dump_dest='C:\oracle\product\10.2.0\admin\luda\cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='luda','irina'
*.db_name='irina'
*.db_unique_name='luda'
*.db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
*.fal_client='luda'
*.fal_server='irina'
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(irina,luda)'
*.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/luda/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=luda'
*.log_archive_dest_2='SERVICE=irina LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=irina'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_max_processes=30
*.log_file_name_convert='C:/oracle/product/10.2.0/oradata/irina/','C:/oracle/product/10.2.0/oradata/luda/'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=167772160
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='C:\oracle\product\10.2.0\admin\luda\udump'
On primary db
irina.__db_cache_size=79691776
irina.__java_pool_size=4194304
irina.__large_pool_size=4194304
irina.__shared_pool_size=75497472
irina.__streams_pool_size=0
*.audit_file_dest='C:\oracle\product\10.2.0/admin/irina/adump'
*.background_dump_dest='C:\oracle\product\10.2.0/admin/irina/bdump'
*.compatible='10.2.0.1.0'
*.control_files='C:\oracle\product\10.2.0\oradata\irina\control01.ctl','C:\oracle\product\10.2.0\oradata\irina\control02.ctl','C:\oracle\product\10.2.0\oradata\irina\control03.ctl'
*.core_dump_dest='C:\oracle\product\10.2.0/admin/irina/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='luda','irina'
*.db_name='irina'
*.db_recovery_file_dest='C:\oracle\product\10.2.0/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
*.fal_client='irina'
*.fal_server='luda'
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(irina,luda)'
*.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/irina/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=irina'
*.log_archive_dest_2='SERVICE=luda LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=luda'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_max_processes=30
*.log_file_name_convert='C:/oracle/product/10.2.0/oradata/luda/','C:/oracle/product/10.2.0/oradata/irina/'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=167772160
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='C:\oracle\product\10.2.0/admin/irina/udump'
Please help me!
Hi,
After several tries my redo logs are applied now. I think in my case it had to do with tnsnames.ora. At this moment I have both databases in both tnsnames.ora files using the SID and not the SERVICE_NAME.
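For reference, the repeated "RFS: No standby redo logfiles created" messages in the log above usually mean the standby has no standby redo logs. They can be created on the standby roughly as follows (group numbers, paths, and the 50M size are assumptions; they should match your online redo log size, with one extra group per thread):

```sql
-- Hypothetical example: add standby redo logs on the standby database.
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
  ('C:\oracle\product\10.2.0\oradata\luda\srl04.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
  ('C:\oracle\product\10.2.0\oradata\luda\srl05.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6
  ('C:\oracle\product\10.2.0\oradata\luda\srl06.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 7
  ('C:\oracle\product\10.2.0\oradata\luda\srl07.log') SIZE 50M;

-- Verify the standby redo logs exist:
SELECT group#, bytes, status FROM v$standby_log;
```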
Now I want to use DGMGRL. Adding a configuration and a standby database works fine, but when I try to enable the configuration, DGMGRL gives no feedback and looks like it is hanging. The log, though, says that it succeeded.
In another session, 'show configuration' results in the following, confirming that the enable succeeded.
DGMGRL> show configuration
Configuration
Name: avhtest
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
avhtest - Primary database
avhtestls53 - Physical standby database
Current status for "avhtest":
Warning: ORA-16610: command 'ENABLE CONFIGURATION' in progress
Is there anybody who experienced the same problem and/or knows the solution to this?
With kind regards,
Martin Schaap -
SenderAgreement not found in RFC to File
Hi,
In RFC - File scenario, i am facing the following error:
Error in processing caused by: com.sap.aii.adapter.rfc.afcommunication.RfcAFWException: senderAgreement not found: lookup of binding via CPA-cache failed for AdapterType RFC, AdapterNS http://sap.com/xi/XI/System, direction INBOUND, fromParty '', fromService 'DEC', toParty '', toService '', interface 'SLDJAVA_ACCESSOR_REQUEST', NS 'urn:sap-com:document:sap:rfc:functions' for channel 'CC_LOTUSNOTES_OUTBOUND_SND_RFC'
I deleted the sender CC and created it again, and did a CPA cache refresh, but nothing happened.
Could anybody help to solve this issue?
On the ECC side I have coded an RFC which is included in the BAPI; then I call this RFC in an ABAP program. I created a logical system in ECC which has the PI details. The RFC is called like below:
CALL FUNCTION 'Z_HR_OUTBOUND_DATA2'
IN BACKGROUND TASK DESTINATION 'DPICLNT100'
-> this is the PI server
Is this correct, or am I missing some steps?
Thanks
Hi Siva,
I have done the following for RFC to File scenario:
ECC side:
1. Created an RFC and tagged it to a BAPI
2. In the ABAP program, I call the RFC like below:
CALL FUNCTION 'Z_HR_OUTBOUND_DATA2'
  IN BACKGROUND TASK DESTINATION 'Z_HR_OUTBOUND_DATA2'
  EXPORTING
    INTERFACE =
  IMPORTING
    FILENAME = FILENAME
    RETURN   = RETURN
  TABLES
    ITAB10   = ITAB10
    P_STATUS = P_STATUS.
COMMIT WORK.
3. Created a RFC destination using SM59
RFC destination: Z_HR_OUTBOUND_DATA2
Connection type : TCP/IP
Enabled Registered Server Program
Program ID: SAPSLDAPI_DPI
-> not sure whether this is correct, but I took this value from the TCP/IP connection in PI. (As mentioned, if I give any other value, say SAPECC, the connection test gives the error *TP sapecc not registered*.)
Gateway host: ECC's host
Gateway service: ECC's gateway service
In PI side:
I have created :
1 DT for File Rcvr
1 MT for File Rcvr
1 SI for File Rcvr (async)
1 MM (sender RFC and receiver File)
1 OM
1 CC with RFc as sender with the following parameters:
Communication component : ECC's component
application server (gateway) : ecc's gateway
application server service(Gateway) : ecc's service number
Program ID: SAPSLDAPI_DPI
-> not sure whether this is correct; taken from the TCP/IP connection in PI
inital connections 1
max. connections 1
I also gave the ECC details: application server, system number, authentication mode, logon user, password, and ECC client number.
1 CC with File adapter and file content conversion
1 receiver determination
1 interface determination
1 sender agreement
1 receiver agreement
When I execute the ABAP program in R/3, I get the following error:
*Error in processing caused by: com.sap.aii.adapter.rfc.afcommunication.RfcAFWException:
lookup of alternativeServiceIdentifier via CPA-cache failed for channel
'CC_LOTUSNOTES_OUTBOUND_SND_RFC' (f455111112de33bba4c3dc3dbf93ed1b, party '', schema
'TechnicalSystem', identifier 'DEC#120')*
Please let me know if any of my steps are wrong, how to rectify this error, and how to make the scenario work.
Thanks for your help -
Data Guard. My archive log files are not applied.
I have one problem with Data Guard. My archive log files are not applied. However I have received all archive log files to my physical Standby db
I have created a Physical Standby database on Oracle 10gR2 (Windows XP professional). Primary database is on another computer.
In Enterprise Manager on Primary database it looks ok. I get the following message Data Guard status Normal
But as I wrote above the archive log files are not applied
After I created the Physical Standby database, I have also done:
1. I connected to the Physical Standby database instance.
CONNECT SYS/SYS@luda AS SYSDBA
2. I started the Oracle instance at the Physical Standby database without mounting the database.
STARTUP NOMOUNT PFILE=C:\oracle\product\10.2.0\db_1\database\initluda.ora
3. I mounted the Physical Standby database:
ALTER DATABASE MOUNT STANDBY DATABASE
4. I started redo apply on Physical Standby database
alter database recover managed standby database disconnect from session
5. I switched the log files on Physical Standby database
alter system switch logfile
6. I verified the redo data was received and archived on Physical Standby database
select sequence#, first_time, next_time from v$archived_log order by sequence#
SEQUENCE# FIRST_TIME NEXT_TIME
3 2006-06-27 2006-06-27
4 2006-06-27 2006-06-27
5 2006-06-27 2006-06-27
6 2006-06-27 2006-06-27
7 2006-06-27 2006-06-27
8 2006-06-27 2006-06-27
7. I verified the archived redo log files were applied on Physical Standby database
select sequence#,applied from v$archived_log;
SEQUENCE# APP
4 NO
3 NO
5 NO
6 NO
7 NO
8 NO
8. on Physical Standby database
select * from v$archive_gap;
No rows
9. on Physical Standby database
SELECT MESSAGE FROM V$DATAGUARD_STATUS;
MESSAGE
ARC0: Archival started
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
ARC4: Archival started
ARC5: Archival started
ARC6: Archival started
ARC7: Archival started
ARC8: Archival started
ARC9: Archival started
ARCa: Archival started
ARCb: Archival started
ARCc: Archival started
ARCd: Archival started
ARCe: Archival started
ARCf: Archival started
ARCg: Archival started
ARCh: Archival started
ARCi: Archival started
ARCj: Archival started
ARCk: Archival started
ARCl: Archival started
ARCm: Archival started
ARCn: Archival started
ARCo: Archival started
ARCp: Archival started
ARCq: Archival started
ARCr: Archival started
ARCs: Archival started
ARCt: Archival started
ARC0: Becoming the 'no FAL' ARCH
ARC0: Becoming the 'no SRL' ARCH
ARC1: Becoming the heartbeat ARCH
Attempt to start background Managed Standby Recovery process
MRP0: Background Managed Standby Recovery process started
Managed Standby Recovery not using Real Time Apply
MRP0: Background Media Recovery terminated with error 1110
MRP0: Background Media Recovery process shutdown
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[1]: Assigned to RFS process 2148
RFS[1]: Identified database type as 'physical standby'
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[2]: Assigned to RFS process 2384
RFS[2]: Identified database type as 'physical standby'
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[3]: Assigned to RFS process 3188
RFS[3]: Identified database type as 'physical standby'
Primary database is in MAXIMUM PERFORMANCE mode
Primary database is in MAXIMUM PERFORMANCE mode
RFS[3]: No standby redo logfiles created
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[4]: Assigned to RFS process 3168
RFS[4]: Identified database type as 'physical standby'
RFS[4]: No standby redo logfiles created
Primary database is in MAXIMUM PERFORMANCE mode
RFS[3]: No standby redo logfiles created
10. on Physical Standby database
SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
RFS IDLE 0 0 0 0
RFS IDLE 0 0 0 0
RFS IDLE 1 9 13664 2
RFS IDLE 0 0 0 0
10) on Primary database:
select message from v$dataguard_status;
MESSAGE
ARC0: Archival started
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
ARC4: Archival started
ARC5: Archival started
ARC6: Archival started
ARC7: Archival started
ARC8: Archival started
ARC9: Archival started
ARCa: Archival started
ARCb: Archival started
ARCc: Archival started
ARCd: Archival started
ARCe: Archival started
ARCf: Archival started
ARCg: Archival started
ARCh: Archival started
ARCi: Archival started
ARCj: Archival started
ARCk: Archival started
ARCl: Archival started
ARCm: Archival started
ARCn: Archival started
ARCo: Archival started
ARCp: Archival started
ARCq: Archival started
ARCr: Archival started
ARCs: Archival started
ARCt: Archival started
ARCm: Becoming the 'no FAL' ARCH
ARCm: Becoming the 'no SRL' ARCH
ARCd: Becoming the heartbeat ARCH
Error 1034 received logging on to the standby
Error 1034 received logging on to the standby
LGWR: Error 1034 creating archivelog file 'luda'
LNS: Failed to archive log 3 thread 1 sequence 7 (1034)
FAL[server, ARCh]: Error 1034 creating remote archivelog file 'luda'
11)on primary db
select name,sequence#,applied from v$archived_log;
NAME SEQUENCE# APP
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00003_0594204176.001 3 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00004_0594204176.001 4 NO
Luda 4 NO
Luda 3 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00005_0594204176.001 5 NO
Luda 5 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00006_0594204176.001 6 NO
Luda 6 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00007_0594204176.001 7 NO
Luda 7 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00008_0594204176.001 8 NO
Luda 8 NO
12) on standby db
select name,sequence#,applied from v$archived_log;
NAME SEQUENCE# APP
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00004_0594204176.001 4 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00003_0594204176.001 3 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00005_0594204176.001 5 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00006_0594204176.001 6 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00007_0594204176.001 7 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00008_0594204176.001 8 NO
13) my init.ora files
On standby db
irina.__db_cache_size=79691776
irina.__java_pool_size=4194304
irina.__large_pool_size=4194304
irina.__shared_pool_size=75497472
irina.__streams_pool_size=0
*.audit_file_dest='C:\oracle\product\10.2.0\admin\luda\adump'
*.background_dump_dest='C:\oracle\product\10.2.0\admin\luda\bdump'
*.compatible='10.2.0.1.0'
*.control_files='C:\oracle\product\10.2.0\oradata\luda\luda.ctl'
*.core_dump_dest='C:\oracle\product\10.2.0\admin\luda\cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='luda','irina'
*.db_name='irina'
*.db_unique_name='luda'
*.db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
*.fal_client='luda'
*.fal_server='irina'
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(irina,luda)'
*.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/luda/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=luda'
*.log_archive_dest_2='SERVICE=irina LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=irina'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_max_processes=30
*.log_file_name_convert='C:/oracle/product/10.2.0/oradata/irina/','C:/oracle/product/10.2.0/oradata/luda/'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=167772160
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='C:\oracle\product\10.2.0\admin\luda\udump'
On primary db
irina.__db_cache_size=79691776
irina.__java_pool_size=4194304
irina.__large_pool_size=4194304
irina.__shared_pool_size=75497472
irina.__streams_pool_size=0
*.audit_file_dest='C:\oracle\product\10.2.0/admin/irina/adump'
*.background_dump_dest='C:\oracle\product\10.2.0/admin/irina/bdump'
*.compatible='10.2.0.1.0'
*.control_files='C:\oracle\product\10.2.0\oradata\irina\control01.ctl','C:\oracle\product\10.2.0\oradata\irina\control02.ctl','C:\oracle\product\10.2.0\oradata\irina\control03.ctl'
*.core_dump_dest='C:\oracle\product\10.2.0/admin/irina/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='luda','irina'
*.db_name='irina'
*.db_recovery_file_dest='C:\oracle\product\10.2.0/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
*.fal_client='irina'
*.fal_server='luda'
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(irina,luda)'
*.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/irina/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=irina'
*.log_archive_dest_2='SERVICE=luda LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=luda'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_max_processes=30
*.log_file_name_convert='C:/oracle/product/10.2.0/oradata/luda/','C:/oracle/product/10.2.0/oradata/irina/'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=167772160
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='C:\oracle\product\10.2.0/admin/irina/udump'
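Once both parameter files are in place, a generic way to check whether each archive destination is healthy (not specific to this setup) is to query v$archive_dest on either side; dest_id 2 is the remote SERVICE destination in the parameter files above:

```sql
-- Check local (1) and remote (2) archive destinations for errors
SELECT dest_id, status, error
FROM   v$archive_dest
WHERE  dest_id IN (1, 2);
```

A status of VALID with an empty error column indicates redo is being shipped to that destination.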
Please help me!!!!

Hi,
After several tries my redo logs are now being applied. I think in my case it had to do with the tnsnames.ora. At the moment I have both databases in both tnsnames.ora files using the SID and not the SERVICE_NAME.
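For reference, a SID-based tnsnames.ora entry of the kind described looks roughly like this (host and port here are illustrative placeholders, not taken from the thread):

```
LUDA =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    (CONNECT_DATA =
      (SID = luda)
    )
  )
```

Using SID instead of SERVICE_NAME bypasses service registration, which can matter when the standby is mounted and not all services are registered with the listener.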
Now I want to use DGMGRL. Adding a configuration and a standby database works fine, but when I try to enable the configuration, DGMGRL gives no feedback and it looks like it is hanging. The log, however, says that it succeeded.
In another session, 'show configuration' returns the following, confirming that the enable succeeded:
DGMGRL> show configuration
Configuration
  Name:                avhtest
  Enabled:             YES
  Protection Mode:     MaxPerformance
  Fast-Start Failover: DISABLED
  Databases:
    avhtest     - Primary database
    avhtestls53 - Physical standby database
Current status for "avhtest":
Warning: ORA-16610: command 'ENABLE CONFIGURATION' in progress
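(As a generic check while ORA-16610 is showing — these are standard broker commands, not specific to this configuration — the progress can usually be watched with:

```
DGMGRL> show configuration verbose
DGMGRL> show database verbose avhtest
```

and the broker trace file drc<SID>.log in the background_dump_dest directory often shows what the ENABLE is still waiting on.)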
Is there anybody who has experienced the same problem and/or knows the solution to this?
With kind regards,
Martin Schaap -
RFC (Sync) - File (async) always via s/a Bridge?
Hi,
maybe a silly question, but do I always have to use the s/a bridge when I have a sync RFC sender and an async receiver? Is there another possibility without BPM (well, using an async RFC of course)?
Thank you
Thomas

Hey Thomas,
well, the concept of BPM comes in only when you have sync/async communication...
when you have a sync RFC sender, you need a BPM (the sync/async bridge) because the file receiver is async.
also, if you just want to put the data into R/3 (say), then you have the FTP-to-IDoc scenario, wherein you don't get any response and no BPM is required...
hope it clears some of your doubt on BPM...
regards..
vishal -
I have the following XI Scenario:
Async RFC (R/3) -> XML File (Filesystem)
I want to achieve a QoS of EOIO. I know that the RFC Adapter doesn't support this but I would like to see if there is another way of achieving this:
Assumptions:
- XI is SP16
- No ABAP proxies because R/3 is 4.6D.
- XI receives the data in the correct order because we've implemented qRFC outbound queues in R/3.
- The QOS in XI ends up being EO because the RFC Adapter doesn't support qRFC (inbound queues).
Questions:
1) Is there a way to "force" EOIO on these incoming messages without using BPM? They are already being received in the correct order. The issue is that XI can process them in any order because the QOS is EO.
2) Is it possible to use BPM to do this? Here's one idea similar to what I use to collect IDocs.
There's no collecting in this case, but by using a receive step in a loop to handle multiple calls, it guarantees that only one instance of the BPM will be running at a given time.
Loop
- Checks a counter < 100, which allows us to set a maximum # of calls that the BPM will handle. This is so the workflow log doesn't get too big to view.
Inside the Loop is a Block:
- Contains deadline branch - 1 minute from 'Creating the Step'.
- If the call sits in the block for more than a minute (aka if we haven't received anything in a minute) call the deadline branch
- In the deadline branch throw an exception
- In the exception branch quit the BPM
Inside the Block is:
- Receive Step
- Increment Counter
- Send Step with EOIO (SP16)
3) Will the calls be sent to this BPM in the order that they were received or will they be sent to the BPM in any order since they are EO?
Any ideas or comments on this approach will be appreciated. Points will be awarded.
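As a thought experiment, the loop/deadline/counter control flow proposed above can be sketched outside XI. This is only an illustration of the logic (the queue, function name, and timeouts are made up for the sketch), not XI/BPM code:

```python
import queue

def bpm_instance(inbox, max_calls=100, deadline=60.0):
    """One BPM instance: receive messages in arrival order until either
    max_calls is reached (the counter check) or no message arrives within
    the deadline (deadline branch -> exception branch -> quit)."""
    processed = []
    while len(processed) < max_calls:          # Loop: counter < max_calls
        try:
            msg = inbox.get(timeout=deadline)  # Receive step inside the block
        except queue.Empty:                    # Deadline branch fires
            break                              # Exception branch quits the BPM
        processed.append(msg)                  # Send step (EOIO in the real BPM)
    return processed
```

Because a single consumer drains one queue, messages leave in exactly the order they arrived, which is the serialized behavior the BPM is meant to guarantee.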
Thanks,
Jesse

Krishna,
> <i>3) Will the calls be sent to this BPM in the order
> that they were received or will they be sent to the
> BPM in any order since they are EO?</i>
> >>Yes.. this depends on , how and which order XI is
> receiving the message..
The IE receives the messages in the correct order because we make the RFC call asynchronously (tRFC) and use a qRFC outbound queue.
> Now my question is , what is the idea behind to
> achieve the EOIO here? How is the RFC triggered here?
> Is it scheduled ? Whatever, at a time only one
> message will be sent right from R/3.. So what is the
> purpose/possibility of the multiple messages will be
> coming from the R/3..
We need EOIO because the RFC calls contain changes to R/3 records. If the calls are processed out of order, the data will get out-of-synch in the receiving system.
The RFC is triggered nightly and real-time on demand. There are typically multiple RFC calls (each with 1000 records) coming from R/3 within a few seconds (configurable) of each other. Even if there weren't multiple calls, if XI is down or a message fails, the IE could process them out of order when it comes back up because they're EO.
Essentially, I can get the messages to the IE in the correct order. I just need to process them in the correct order from that point forward.
Are you saying that the BPM (with a loop) will definitely pick up (receive) messages in the order that they are sent to the IE? I think this is critical to this approach working.
Alternatively, is there anyway to change (hack) the SOAP header using an adapter module to switch it from EO to EOIO?
Thanks,
Jesse -
Upload and Download Error "File does not contain valid data"
We are on the ECC 6.0 Unicode system.
I downloaded the Role from the QA server and tried to upload it into the DEV box.
I am getting the error "File does not contain valid data".
However, the download of the Role itself completed successfully.
Please let me know if anyone has had this problem.
Thanks,
From
PRANAV

This is what I see here.
DATE 20090716 131756RELEASE 700LOADED_AGRS Z_BI_HR_REPORTING_FUNCTIONALAGR_DEFINE 210Z_BI_HR_REPORTING_FUNCTIONAL PTHAKER 20080523131557000000000000000PTHAKER 20090709142400000000000000000AGR_TCODES 210Z_BI_HR_REPORTING_FUNCTIONAL TRRRMX X 00000AGR_1250 210Z_BI_HR_REPORTING_FUNCTIONAL 000001S_RFC T-BD90003500 U O000004Authorization Check for RFC AccessAGR_1250 210Z_BI_HR_REPORTING_FUNCTIONAL 000002S_RS_AUTH T-BD90003500 U O000013BI Analysis Authorizations in RoleAGR_1250 210Z_BI_HR_REPORTING_FUNCTIONAL 000003S_RS_COMP T-BD90003500 U O000016Business Explorer - ComponentsAGR_1250 210Z_BI_HR_REPORTING_FUNCTIONAL 000004S_RS_COMP1T-BD90003500 U O000023Business Explorer - Components: Enhancements to the OwnerAGR_1250 210Z_BI_HR_REPORTING_FUNCTIONAL 000005S_RS_FOLD T-BD90003500 U O000029Business Explorer - Folder View On/OffAGR_1250 210Z_BI_HR_REPORTING_FUNCTIONAL 000006S_RS_HIER T-BD90003500 U O000032Data Warehousing Workbench - HierarchyAGR_1250 210Z_BI_HR_REPORTING_FUNCTIONAL 000007S_RS_ODSO T-BD90003500 U O000038Data Warehousing Workbench - DataStore ObjectAGR_1250 210Z_BI_HR_REPORTING_FUNCTIONAL 000008S_RS_PARAMT-BD90003500 U O000045Business Explorer - Variants in Variable ScreenAGR_1250 210Z_BI_HR_REPORTING_FUNCTIONAL 000009S_TCODE T-BD90003500 S O000009Transaction Code Check at Transaction StartAGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 000001S_RS_COMP T-BD90003500 RSZCOMPTP CKF U O000021AGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 000002S_RS_COMP T-BD90003500 RSZCOMPID * U O000020AGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 000003S_RS_COMP1T-BD90003500 RSZCOMPID ZHCM* U O000025AGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 000004S_TCODE T-BD90003500 TCD RRMX S O000010AGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 000005S_RS_HIER T-BD90003500 ACTVT 71 U O000033AGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 000006S_RS_HIER T-BD90003500 RSHIENM * U O000034AGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 000007S_RS_HIER T-BD90003500 RSIOBJNM * U O000035AGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 
000008S_RS_HIER T-BD90003500 RSVERSION * U O000036AGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 000009S_RS_PARAMT-BD90003500 PARAMNM * U O000047AGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 000010S_RS_COMP T-BD90003500 RSINFOAREAZPA_ECM* U O000018AGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 000011S_RS_ODSO T-BD90003500 RSINFOAREAZPA_ECM* U O000040AGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 000012S_RS_COMP1T-BD90003500 RSZCOMPID ZPAOS* U O000025AGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 000013S_RS_ODSO T-BD90003500 ACTVT 03 U O000039AGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 000014S_RS_COMP1T-BD90003500 RSZCOMPID ZPAPA* U O000025AGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 000015S_RS_ODSO T-BD90003500 RSODSOBJ * U O000041AGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 000016S_RS_ODSO T-BD90003500 RSODSPART DATA U O000042AGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 000017S_RS_ODSO T-BD90003500 RSODSPART DEFINITION U O000042AGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 000018S_RS_ODSO T-BD90003500 RSODSPART EXPORTISRC U O000042AGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 000019S_RS_ODSO T-BD90003500 RSODSPART UPDATERULE U O000042AGR_1251 210Z_BI_HR_REPORTING_FUNCTIONAL 000020S_RS_COMP T-BD90003500 ACTVT 03 U O000017AGR_1251 -
NO DATA IN OUTPUT FILE FOR RFC TO FILE SCENARIO
In an RFC to File scenario, I am executing the ABAP program in ECC and it generates an empty file on the PI server.
I am getting the error message in SM58 of ECC:
'Bean Z_HR_OUTBOUND_DATA2 not found on host SDNPI1,'
'call FM Z_HR_OUTBOUND_DATA2 to ProgId ECCTOPI_OUTBOUND on host SDNPI1 wit'
RFC source code:
FUNCTION Z_HR_OUTBOUND_DATA2.
""Local Interface:
*" EXPORTING
*" VALUE(FILENAME) TYPE BBP_ACC_DESCRIPTION-ACC_OBJ_NAME
*" VALUE(RETURN) LIKE BAPIRETURN STRUCTURE BAPIRETURN
*" TABLES
*" ITAB10 STRUCTURE YSTRING1 OPTIONAL
*" P_STATUS STRUCTURE ZHRT0031 OPTIONAL
  DATA: wa_status TYPE zhrt0031,
        wa_itab10 LIKE itab10.

  BREAK-POINT.

  LOOP AT p_status INTO wa_status.
    CONCATENATE wa_status-pernr
                wa_status-ename
                wa_status-orgeh
                wa_status-plans
                wa_status-persg
                wa_status-rank
                wa_status-***
                wa_status-icnum
                wa_status-usrid
                wa_status-dept
           INTO wa_itab10-str1.
    APPEND wa_itab10 TO itab10.
  ENDLOOP.

  wa_itab10-str1 = 'test'.
  APPEND wa_itab10 TO itab10.
  APPEND wa_itab10 TO itab10.

  CONCATENATE sy-datum
              'PYO_EMPDAT.TXT'
         INTO filename.
ENDFUNCTION.
And in the ABAP program the RFC is called like:
CALL FUNCTION 'Z_HR_OUTBOUND_DATA2'
  IN BACKGROUND TASK DESTINATION 'ECCTOPI'
  EXPORTING
    INTERFACE =
  IMPORTING
    filename  = filename
    return    = return
  TABLES
    itab10    = itab10
    p_status  = p_status
When I test the standard program STFC_CONNECTION in ECC with the same RFC destination, it works fine and creates an output file with the data in it; but when I execute my own function module, the file doesn't contain any data.
What could be the error? and how to resolve the errors that i am getting in SM58?
Thanks

> Bean Z_HR_OUTBOUND_DATA2_1 not found on host SDNP1, ProgId =ECCTOPI_OUTB

Change the case of your Program ID to lowercase (ecctopi); a similar issue was resolved in this thread:
Bean ZFM_MODULE_OUT not found on host <XI_HOST>
And may be for the same reason even Michal used lower case program ID in his blog.
Regards,
Abhishek. -
Hi all,
I have an iMovie project in iMovie HD 6.0.3. (Mac OS 10.5.8)
It plays, but I cannot copy the file (including saving it to a flash drive). I get either a message
"the finder cannot complete the operation because some data in [name of
iMovie file] could not be read or written (Error Code 26)" or the
message "A project file is missing. The file [name of the QuickTime
movie inside the project.dv] couldn't be opened and is being skipped."
This happened suddenly; until then I was able to save a backup version numerous times while working on the project.
I followed some of Karl Petersen's advice on Apple Discussions. I opened
the package contents, and everything seems to be there. The source file
(original QuickTime movie) is there and can be opened and played. I
replaced the current project file with the backup in the package
contents, but that did not change anything. I was able to export the
movie timeline into a new project as a single clip, but would like to be
able to copy the current project.
Thanks for any ideas.

Make sure all drives are formatted as "Mac OS Extended" (journaling preferred, but not necessary).