Fileadapter filename in archive
Hi,
just a question about the archive functionality in the file adapter.
We are using the processing mode "archive" with the option "add time stamp" in the sender file adapter. Is there a way to get the filename of the archived file in the mapping? I checked the DynamicConfiguration but it's not there.
Bernd
Is there a way to get the filename of the archived file in the mapping?
As mentioned, there is no standard ASMA or dynamic configuration attribute for this.
But you can combine both to achieve your requirement: read the source file name from ASMA via dynamic configuration in your mapping, then append the timestamp to the file name there (standard graphical mapping functions). Does this approach look feasible to you?
One issue would be the time difference between the sender adapter and the mapping runtime.
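As a sketch of that mapping step (in Python rather than a PI UDF; the timestamp pattern and filename handling here are illustrative assumptions, since the adapter's own archive timestamp format may differ):

```python
from datetime import datetime

def archive_style_name(source_filename: str, now: datetime) -> str:
    """Append a timestamp to the filename read from ASMA/DynamicConfiguration.

    Mirrors what a graphical-mapping/UDF step could do; the "%Y%m%d-%H%M%S"
    pattern is illustrative, not necessarily what the file adapter itself uses.
    """
    stamp = now.strftime("%Y%m%d-%H%M%S")
    base, dot, ext = source_filename.rpartition(".")
    if dot:  # keep the extension at the end of the name
        return f"{base}_{stamp}.{ext}"
    return f"{source_filename}_{stamp}"
```

Because `now` is taken at mapping time, the result can differ slightly from the timestamp the adapter itself stamps on the archived file, which is exactly the time-difference caveat mentioned.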
Similar Messages
-
How to modify Fileadapter filename with val from JMS msg using xsl?
In my case I need to name the file based on information received in a JMS message.
I would like to receive an xml document from a JMS queue, the document will contain content as well as the name of a file to be saved off. I found references to something similar in the BPEL and ESB documentation for the file adapter by using "ehdr:setOutboundHeader(" as a means to set the outbound header value but I cannot find the mechanics of how to accomplish getting that set with a value that is selected from an incoming message in XSL code.
The examples shown indicate the creation of a pass through mechanism that would take a message from a request header and pass it along to the outbound header which is great but I need a snippet of code to get me through setting the outbound header with an element value that is sourced from the inbound message.
I also found an incomplete reference in a "Oracle SOA Suite Enterprise Server Bus" powerpoint presentation that indicates JMS custom properties support for "Property[@name="Country"]/@value" but no details were provided to assist further.
I would greatly appreciate any information you can provide.
Thank You,
John
Ok, I was able to get file adapter filenames modified.
First, you cannot do this after a dbadapter query, as that functionality seems to be broken. The best way for me to solve it was to simplify the problem: I started by placing the setOutboundHeader call after reading a simple file, ignoring the file's content altogether. I do the set inside a template match of the XSLT using a constant value, again for simplicity. For the test, just put 'ateststring.txt' in the setOutboundHeader call; there is no need to select data from the document at this point (we will get to that). Now get that to work.
Once you have that working, take a value from the incoming document and select it into a variable named myvariable, then replace 'ateststring.txt' with $myvariable. So the final solution is basically a three-step process: select the data of interest from the document into a variable; add a select statement to make the method call just after that; make the method call inside the select statement, prepending the variable name with the $ symbol.
So far I have only tested this file-to-file and file-to-FTP; I suspect that jmsadapter-to-file will work fine too. The trick for me was to understand that if the incoming document is the result of a query, it just won't work. My requirement was to take an input JMS request, use it to drive a query, take the data from the query, and write it to a filename as defined in the JMS request. No matter what I did to set the frustrating fileadapter/ftpadapter filename in the header, I was unable to do so. I used constant strings etc. in every part of the document (before the select, inside the select, outside the select, etc.). It wasn't until I went file-to-file that I was able to get this to work at all, and even then not in the XSLT header: it MUST be in an area of the document that resulted in a select match, which in hindsight makes sense.
To work around the query-to-file issue, I place the results of the data query into a temporary file named filename%SEQ%.tmp, and along with the query results I add the real filename inside the document. I have a file listener that watches for the filename using the wildcard filename*.tmp; it reads the file in, pulls the permanent filename from the incoming document, and sets that filename in the header using the steps worked out above, by selecting the filename into myvariable.
You cannot (to my knowledge) store variables in ESB, so I found I had to put them inside the documents to move the data around. When I am done with the work through the routing service, I strip out the variable data fields that I needed to carry along with the data.
I hope this saves someone else a bit of frustration!
John -
Need to Pass filename for archived file to FTP adapter using SynchRead
Hi
I am archiving the source file which I am reading using an FTP adapter with operation SynchRead.
In my case the source filename is dynamic (abc_<timestamp>.xml), so before the SynchRead I am using an FTP List adapter to get the filename.
Currently, the archived file is named in the pattern encryptedToken_yyyymmdd_hhmmss (e.g. wQ2c3w7Cj7Y6irWnRKqEu77_jD7kLtj6Uc0QCZIyjC4=_20121122_012453_0305).
I need to pass the source filename (which I am getting from the FTP List adapter) for the archived file as well.
Thanks in advance for the help!
Regards,
Achal
Hi Neeraj,
While trying the above alternative, I am facing an issue when my source file is a .csv file. The file is getting recreated with the original filename and file content, but without the header.
As per the requirement I need the original file to be recreated; the header of the .csv file has the field names.
Please let me know how I should configure my FTP adapter to read the header of the .csv file as well.
Thanks,
Achal -
Can I control filenaming when archiving files using the file adapter?
Hi folks,
Is there any way to control the filename used when the File Adapter writes out to an archive?
Second question, I also need to be able to pass a "filename" to the adapter from an "input file." Is there a way to do this in the file adapter?
Sincerely,
lpac
Hi,
I have done that with the ftp adapter. In the .xsl file I wrote the following after the <xsl:stylesheet version="1.0" ....> tag:
<xsl:variable name="INFILENAME"
              select="ehdr:getRequestHeader('/fhdr:InboundFTPHeaderType/fhdr:fileName','fhdr=http://xmlns.oracle.com/pcbpel/adapter/ftp/;')"/>
<xsl:template match="/">
  <xsl:variable name="OUTFILENAME"
                select="ehdr:setOutboundHeader('/fhdr:OutboundFileHeaderType/fhdr:fileName', $INFILENAME, 'fhdr=http://xmlns.oracle.com/pcbpel/adapter/file/;')"/>
  <opaque:opaqueElement>
    <xsl:value-of select="/opaque:opaqueElement"/>
  </opaque:opaqueElement>
</xsl:template>
</xsl:stylesheet>
To use this with the file adapter, you would have to write "file" wherever "ftp" is written.
Hope this helps,
Zaloa -
[SOLVED]How do I deal with non-unicode filenames inside archives?
I often get archives from Japanese Windows users with file names encoded in one of the myriad of Japanese character sets. Windows is pretty good about converting the filename to unicode before passing it on to any program or computer that expects it. However, it won't touch the filenames of things inside the archive for obvious reasons. Any way to make, say, unzip or 7z convert the filenames to unicode as they're extracted? At the moment I run Windows in Virtualbox and connect back to a shared folder and let Windows do the unzipping.
Last edited by DJQuiteFriendly (2011-09-22 22:13:26)
I don't know if there's any advancement WRT this, but there's a work-around here (under "UnZip Locale Issues") using convmv.
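For reference, here is a Python sketch of the same idea without going through Windows. Python's zipfile module decodes legacy (non-UTF-8-flagged) entry names as cp437, so re-encoding to cp437 recovers the original bytes, which can then be decoded as cp932/Shift-JIS. As with convmv, the source encoding is a guess you have to supply:

```python
import zipfile
from pathlib import Path

def fix_zip_name(name: str, encoding: str = "cp932") -> str:
    """Undo zipfile's cp437 fallback decoding for legacy Japanese archives."""
    try:
        return name.encode("cp437").decode(encoding)
    except (UnicodeEncodeError, UnicodeDecodeError):
        return name  # already proper Unicode, or not the guessed encoding

def extract_with_fixed_names(zip_path: str, dest: str, encoding: str = "cp932") -> list[str]:
    """Extract an archive, renaming entries to their recovered Unicode names."""
    extracted = []
    with zipfile.ZipFile(zip_path) as zf:
        for info in zf.infolist():
            fixed = fix_zip_name(info.filename, encoding)
            out = Path(dest) / fixed
            out.parent.mkdir(parents=True, exist_ok=True)
            if not info.is_dir():
                out.write_bytes(zf.read(info))
            extracted.append(fixed)
    return extracted
```

UTF-8-flagged entries pass through untouched, because their names contain characters cp437 cannot encode, so the round-trip raises and the original name is kept.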
-
FTP Adapter - Filename of archived XML File
Hi All,
We are trying to poll files from remote servers using the FTP Adapter; after a file is polled we keep the archived file in a directory. Files are archived, but under a different name, e.g. XXXXX.XML is archived as XXXXX.XML_HmuUJDA1qKj2btjp_y1JUeLmgZsJcqhyCnrdlgB6sgU=_20110112_220537_0567.
I want the XML file to be archived as-is, with the name XXXXX.XML only.
Thanks,
Ra
Hi Vinit,
Did you look into the logs? What do they say?
How frequently does it happen?
Enable detailed logs by navigating to: right-click soa-infra > Logs > Log Configuration > Log Levels tab > search oracle.soa.adapter > set Trace 32, and see if there's anything of interest.
Regards,
Neeraj Sehgal -
Custom Filename for ESB FileAdapter Write
Hi,
I have an ESB service that monitors and dequeues from the oracle.apps.po.event.xmlpo business event via the Oracle Applications adapter. This works fine, as the WF_EVENT_T schema is routed via an XSL to a custom XML which is used later.
However, I want the file to use a custom filename with the file adapter. In principle I know how to do this using the ehdr:setOutboundHeader functionality and have got this to work when setting a static filename.
However, I want to be able to create the filename based on the Purchase Order number coming from the WF_EVENT_T schema. It seems that the ehdr:setOutboundHeader function has to be set before any of the XSL template matches, which means at this point the Purchase Order number is not available. Below are the main parts of the XSL.
<xsl:variable name="CustomFilename" select="/ns2:PONo"/>
<xsl:variable name="AssignFilename"
select="ehdr:setOutboundHeader('/ns1:OutboundFileHeaderType/ns1:fileName',$CustomFilename,'ns1=http://xmlns.oracle.com/pcbpel/adapter/file/Capture_PO_Event_Data/;')"/>
<xsl:template match="/">
<ns2:HCNPOWFEVENT>
<xsl:for-each select="/imp1:WF_EVENT_T/PARAMETER_LIST/PARAMETER_LIST_ITEM">
<xsl:if test='NAME = "DOCUMENT_NO"'>
<ns2:PONo>
<xsl:value-of select="VALUE"/>
</ns2:PONo>
</xsl:if>
</xsl:for-each>
<xsl:for-each select="/imp1:WF_EVENT_T/PARAMETER_LIST/PARAMETER_LIST_ITEM">
<xsl:if test='NAME = "ECX_TRANSACTION_TYPE"'>
<ns2:POType>
<xsl:value-of select="VALUE"/>
</ns2:POType>
</xsl:if>
</xsl:for-each>
<xsl:for-each select="/imp1:WF_EVENT_T/PARAMETER_LIST/PARAMETER_LIST_ITEM">
<xsl:if test='NAME = "ECX_TRANSACTION_SUBTYPE"'>
<ns2:POSubType>
<xsl:value-of select="VALUE"/>
</ns2:POSubType>
</xsl:if>
</xsl:for-each>
<xsl:for-each select="/imp1:WF_EVENT_T/PARAMETER_LIST/PARAMETER_LIST_ITEM">
<xsl:if test='NAME = "ECX_PARAMETER5"'>
<ns2:OrgId>
<xsl:value-of select="VALUE"/>
</ns2:OrgId>
</xsl:if>
</xsl:for-each>
<xsl:for-each select="/imp1:WF_EVENT_T/PARAMETER_LIST/PARAMETER_LIST_ITEM">
<xsl:if test='NAME = "ECX_PARTY_ID"'>
<ns2:ECXPartyId>
<xsl:value-of select="VALUE"/>
</ns2:ECXPartyId>
</xsl:if>
</xsl:for-each>
<xsl:for-each select="/imp1:WF_EVENT_T/PARAMETER_LIST/PARAMETER_LIST_ITEM">
<xsl:if test='NAME = "ECX_PARTY_SITE_ID"'>
<ns2:ECXPartySiteId>
<xsl:value-of select="VALUE"/>
</ns2:ECXPartySiteId>
</xsl:if>
</xsl:for-each>
</ns2:HCNPOWFEVENT>
</xsl:template>
</xsl:stylesheet>
Any ideas on how this can be achieved?
Ok, I was able to get file adapter filenames modified.
First, you cannot do this after a dbadapter query, as that functionality seems to be broken. The best way for me to solve it was to simplify the problem: I started by placing the setOutboundHeader call after reading a simple file, ignoring the file's content altogether. I do the set inside a template match of the XSLT using a constant value, again for simplicity. For the test, just put 'ateststring.txt' in the setOutboundHeader call; there is no need to select data from the document at this point (we will get to that). Now get that to work.
Once you have that working, take a value from the incoming document and select it into a variable named myvariable, then replace 'ateststring.txt' with $myvariable. So the final solution is basically a three-step process: select the data of interest from the document into a variable; add a select statement to make the method call just after that; make the method call inside the select statement, prepending the variable name with the $ symbol.
So far I have only tested this file-to-file and file-to-FTP; I suspect that jmsadapter-to-file will work fine too. The trick for me was to understand that if the incoming document is the result of a query, it just won't work. My requirement was to take an input JMS request, use it to drive a query, take the data from the query, and write it to a filename as defined in the JMS request. No matter what I did to set the frustrating fileadapter/ftpadapter filename in the header, I was unable to do so. I used constant strings etc. in every part of the document (before the select, inside the select, outside the select, etc.). It wasn't until I went file-to-file that I was able to get this to work at all, and even then not in the XSLT header: it MUST be in an area of the document that resulted in a select match, which in hindsight makes sense.
To work around the query-to-file issue, I place the results of the data query into a temporary file named filename%SEQ%.tmp, and along with the query results I add the real filename inside the document. I have a file listener that watches for the filename using the wildcard filename*.tmp; it reads the file in, pulls the permanent filename from the incoming document, and sets that filename in the header using the steps worked out above, by selecting the filename into myvariable.
You cannot (to my knowledge) store variables in ESB, so I found I had to put them inside the documents to move the data around. When I am done with the work through the routing service, I strip out the variable data fields that I needed to carry along with the data.
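The NAME/VALUE lookup that each for-each/xsl:if pair in the question's XSL performs can be sketched outside XSLT like this (Python; namespaces omitted for brevity, and the PO-based filename shown is just one way the result could feed setOutboundHeader):

```python
import xml.etree.ElementTree as ET

def event_parameter(doc: str, name: str):
    """Return the VALUE of the PARAMETER_LIST_ITEM whose NAME matches, else None."""
    root = ET.fromstring(doc)
    for item in root.findall("./PARAMETER_LIST/PARAMETER_LIST_ITEM"):
        if item.findtext("NAME") == name:
            return item.findtext("VALUE")
    return None

# Minimal WF_EVENT_T-shaped sample document (illustrative values)
sample = """<WF_EVENT_T>
  <PARAMETER_LIST>
    <PARAMETER_LIST_ITEM><NAME>DOCUMENT_NO</NAME><VALUE>4500012345</VALUE></PARAMETER_LIST_ITEM>
    <PARAMETER_LIST_ITEM><NAME>ECX_TRANSACTION_TYPE</NAME><VALUE>PO</VALUE></PARAMETER_LIST_ITEM>
  </PARAMETER_LIST>
</WF_EVENT_T>"""

custom_filename = f"PO_{event_parameter(sample, 'DOCUMENT_NO')}.xml"
```

In XSLT terms this is exactly the "select the data of interest into a variable, then make the setOutboundHeader call inside a template match" step described above.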
Please let me know how you get on with this; I hope it saves someone else a bit of frustration.
Thanks again,
John -
SFTP receiver channel archive filename issue
Hi,
I am working on the SAP SFTP adapter (SAP PI 7.4).
The SFTP receiver channel is placing the file with the correct filename as generated by the dynamic configuration code, but the channel is adding an extra .txt extension to the filename while archiving.
Example: Actual Filename: CCBGI.SSEN_TNG.BATCH0000000000000052.sft
At archive folder : CCBGI.SSEN_TNG.BATCH0000000000000052.sft.txt
Please suggest resolution for this.
Thanks & Regards,
Nida Fatima
There is an SAP Note 1817747 on this which tells you: "It's not a bug, it's a feature".
http://service.sap.com/sap/support/notes/1817747
Symptom
At the sender side of SFTP Adapter, the flag "Archive Files on PI Server" is checked and the field "Archive name" is configured to archive the files on PI Server. During message processing, it is noticed, that, files are being archived in the archive directory with ".txt" extension.
Reason and Prerequisites
This feature has been added as a part of security so that the files will be saved with ".txt" extension in the archive directory. If it is allowed to be saved with original name on the PI server, then an external attack can manipulate the path where the file has been archived and gain access to OS related path and modify the files. Hence, to avoid such security issues, the file extension is always changed to ".txt".
Anyway, you might have a look at SAP-note 1815655 as there is an advanced parameter mentioned to add a "default extension" to a file. http://service.sap.com/sap/support/notes/1815655
Also, an advanced mode parameter "addDefaultFileExtension" has been introduced. If this parameter is set to 'true', then, while archiving the files on SFTP server, the additional check will be performed to check whether the file has ".txt" extension or not. If not, then the ".txt" extension will be added with the file name. Else, normal execution continues i.e. files will be archived on the SFTP server with their original name.
The default value for the parameter "addDefaultFileExtension" is 'false'. -
Error while archiving material documents
Hi
While archiving material documents I am getting the error that they cannot be archived due to "Document not completed".
How can I find which documents are affected, and how can I correct them?
Thanks
Regards,
Dhinesh
Dear,
Please go through the below:
Archiving (transaction SARA)
Data archiving removes bulk data which is no longer required in the system, but which must remain accessible, from the database. Data in the R/3 database can only be archived via archiving objects, which describe the data structure. Financial accounting documents, for example, are archived via the archiving object FI_DOCUMNT, which comprises the document header, company-code-dependent postings, change documents, SAPscript texts and other elements. The application archiving objects are pre-defined in the system. The archiving programs are scheduled as background jobs, but can run during on-line processing; the system need not be shut down.
The archiving procedure comprises two main steps:
Create archive files: the data to be archived are first written sequentially into a newly created file. The data then exist twice in the database. These archive files can, e.g. be passed to an archive system, via ArchiveLink.
Execute delete program: the data in the archive files are removed from the database by the delete program.
You can check how the archive filenames and archive destination are setup in transaction FILE
Always remember to check the settings before any archiving. The settings determine things like whether the delete programs start automatically (it is not advisable to start your delete programs automatically).
You can configure all the customizing settings with transaction AOBJ - Archive Objects.
MM_MATBEL - Archive material document
OMB9 - Archiving - Material document retention period.
The default document life is 200 days. You need to maintain the document life for each plant in your company before you can start archiving. If you do not maintain it, you will get a list with 'Valid.period not maintained for tr./ev. type of doc.'.
MM_ACCTIT - Archive accounting interface tables
The documents stored in these tables are for applications which are to be supplied at a later date with the posting data of MM Inventory Management and MM Invoice Verification.
If you are using 3.x, this archiving object might not be found. You need to import it from sapservX. Read note 89324, 99388, 83076.
Take a look at OSS note 48009 for the detail explanation of which data is updated in the ACCT* tables.
PP_ORDER - Archive production order
OPJH - Production Order retention period. Maintain Retention 1 and 2.
If you set retention 1 and 2 to 10, that means that 10 days after setting the delete flag you can set the deletion indicator, and 10 days after setting the deletion indicator you can start archiving. Therefore, to archive immediately, you can leave retention 1 and 2 blank. Please note that retention 1 and 2 act as a safety net if you happen to archive the wrong record. You will have to decide whether to have the retention time gap or not.
FI_DOCUMNT - Archive Financial Accounting Documents
Maintain the account life - transaction OBR7
Maintain the documents life - transaction OBR8
SD_VBRK - Archive Billing Documents
There is no posting date or fiscal year/period selection; you can only specify the billing document date. To make sure you do not archive the wrong records, you have to write an ABAP query or report to print out the billing document number ranges you want to archive. You can extract the data from table VBRK - Billing: Header Data.
VN01 - Check all the number ranges for Billing Documents.
Click the Overview button, or hold the Shift key and press F7.
RV_LIKP - Archiving Deliveries
VORL - Maintain the retention period: the number of days which must have elapsed since the delivery was created before the delivery can be archived.
VLAL - Archive deliveries
Regards,
Syed Hussain. -
Hello Gurus,
I reviewed all the PP archiving objects/tables for BOMs, work centers, routings, production orders, process orders and backflushing. I couldn't find any information related to ECM archiving.
Could you please let me know how Engineering Change Orders can be archived?
We are currently using 4.6c.
I appreciate your help.
Thanks in advance
Srini
Dear Srini,
Use T-CODE : SARA and Archiving Object :LO_CHANGEM
SOME TIPS:
Archiving (transaction SARA)
Data archiving removes bulk data which is no longer required in the system, but which must remain accessible, from the database. Data in the R/3 database can only be archived via archiving objects, which describe the data structure.
The archiving procedure comprises two main steps:
Create archive files: the data to be archived are first written sequentially into a newly created file. The data then exist twice in the database. These archive files can, e.g. be passed to an archive system, via ArchiveLink.
Execute delete program: the data in the archive files are removed from the database by the delete program.
You can check how the archive filenames and archive destination are setup in transaction FILE
Always remember to check the settings before any archiving. The settings determine things like whether the delete programs start automatically (it is not advisable to start your delete programs automatically).
customizing settings with transaction AOBJ - ECM Object LO_CHANGEM
Regards
Narayana -
"recover database until cancel" asks for archive log file that do not exist
Hello,
Oracle Release : Oracle 10.2.0.2.0
Last week we performed a restore and then an Oracle recovery using the recover database until cancel command (we didn't use a backup control file). It worked fine and we were able to restart the SAP instances. However, I still have questions about Oracle's behaviour with this command.
First we restored, an online backup.
We tried to restart the database, but got ORA-01113,ORA-01110 errors :
sr3usr.data1 needed media recovery.
Then we performed the recovery :
According to the Oracle documentation, "recover database until cancel" proceeds by prompting you with the suggested filenames of archived redo log files.
The problem is that it prompts for an archive log file that does not exist.
As you can see below, it asked for SMAarch1_10420_610186861.dbf, which has never been created. Therefore I cancelled the recovery manually and restarted the database. We never got the message "media recovery complete".
ORA-279 signalled during: ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10417_61018686
Fri Sep 7 14:09:45 2007
ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10418_610186861.dbf'
Fri Sep 7 14:09:45 2007
Media Recovery Log /oracle/SMA/oraarch/SMAarch1_10418_610186861.dbf
ORA-279 signalled during: ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10418_61018686
Fri Sep 7 14:10:03 2007
ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10419_610186861.dbf'
Fri Sep 7 14:10:03 2007
Media Recovery Log /oracle/SMA/oraarch/SMAarch1_10419_610186861.dbf
ORA-279 signalled during: ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10419_61018686
Fri Sep 7 14:10:13 2007
ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10420_610186861.dbf'
Fri Sep 7 14:10:13 2007
Media Recovery Log /oracle/SMA/oraarch/SMAarch1_10420_610186861.dbf
Errors with log /oracle/SMA/oraarch/SMAarch1_10420_610186861.dbf
ORA-308 signalled during: ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10420_61018686
Fri Sep 7 14:15:19 2007
ALTER DATABASE RECOVER CANCEL
Fri Sep 7 14:15:20 2007
ORA-1013 signalled during: ALTER DATABASE RECOVER CANCEL ...
Fri Sep 7 14:15:40 2007
Shutting down instance: further logons disabled
When restarting the database we could see that a recovery of the online redo logs was performed automatically. Is this the normal behaviour of a recovery using the "recover database until cancel" command?
Started redo application at
Thread 1: logseq 10416, block 482
Fri Sep 7 14:24:55 2007
Recovery of Online Redo Log: Thread 1 Group 4 Seq 10416 Reading mem 0
Mem# 0 errs 0: /oracle/SMA/origlogB/log_g14m1.dbf
Mem# 1 errs 0: /oracle/SMA/mirrlogB/log_g14m2.dbf
Fri Sep 7 14:24:55 2007
Completed redo application
Fri Sep 7 14:24:55 2007
Completed crash recovery at
Thread 1: logseq 10416, block 525, scn 105140074
0 data blocks read, 0 data blocks written, 43 redo blocks read
Thank you very much for your help.
Frod.
Hi,
Let me answer your query.
=======================
Your question: while performing the recovery, is it possible to locate which online redo log is needed, and then to apply the changes in these logs?
1. When you have current controlfile and need complete data (no data loss),
then do not go for until cancel recovery.
2. Oracle will apply all the redologs (including current redolog) while recovery
process is on.
3. During the recovery you need to have all the redo logs listed in the view V$RECOVERY_LOG, plus all unarchived and current redo logs. By querying V$RECOVERY_LOG you can find out which redo logs are required.
4. If the required sequence is not in the archive destination and the recovery process asks for it, you can query V$LOG to see whether the requested sequence is part of the online redo logs. If yes, you can supply the path of the online redo log to complete the recovery.
Hope this information helps.
Regards,
Madhukar -
SAP Archiving - STO error on version ECC 6.0
Hi experts. I have spent a long time on this error and I hope you can help me. Let me explain the problem.
We have an archiving project for non-standard objects. These objects run on SAP version 4.6C, but now, on version ECC 6.0, the store program gives an execution error. The log of the STO job in the SARA transaction is the following:
Job started
Step 001 started (program RSARCH_STORE_FILE, variant , user ID ARUIZ)
Archive file 000241-001ZIECI_RF02 is being processed
Archive file 000241-001ZIECI_RF02 does not exist (Message no. BA111)
Error occured when checking the stored archive file 000241-001ZIECI_RF02 (Message no. BA194)
Job cancelled after system exception ERROR_MESSAGE (Message no. 00564)
The Write and Delete programs run correctly.
A standard archiving object like FI_TCJ_DOC runs OK (WRI, DEL and STO programs). The customizing for both objects is nearly identical in the OAC0 and FILE transactions; the differences are in the SAP directories. Here are the most important customizing settings:
Transaction: FILE
ZIECI_RF02 (No Standard)
Logical File Path --> ZZA_ARCHIVE_GLOBAL_PATH
Physical Path --> /usr/sap/CX6/ADK/files/ZA/<FILENAME>
Logical File Name Definition -->
ZZA_ARCHIVE_DATA_FILE_ZIECI_RF02
FI_TCJ_DOC (Standard)
Logical File Path --> ARCHIVE_GLOBAL_PATH
Physical Path --> <P=DIR_GLOBAL>/<FILENAME>
Logical File Name Definition --> ARCHIVE_DATA_FILE
Transaction: AOBJ
Customizing settings:
ZIECI_RF02
Logical File Name --> ZZA_ARCHIVE_DATA_FILE_ZIECI_RF02
FI_TCJ_DOC (Standard)
Logical File Name --> ARCHIVE_DATA_FILE
I also tried assigning the standard logical file name ARCHIVE_DATA_FILE to our own archiving object (ZIECI_RF02), and the error persists.
For the other parameters, the values for both objects are:
Delete Jobs: Start Automatically
Content Repository: ZA (Start Automatically)
Sequence: Delete Before Storing
I looked at the point in the store (STO) program RSARCH_STORE_FILE where our archiving fails. I wrote a program with the same function call, and debugging it I can see the following:
REPORT zarm_06_prueba_http_archivado.

DATA: length TYPE i,
      t_data TYPE TABLE OF tabl1024.

BREAK aruiz.

CALL FUNCTION 'SCMS_HTTP_GET'
  EXPORTING
    mandt   = '100'
    crep_id = 'ZA'
    doc_id  = '47AAF406C02F6C49E1000000805A00A9'
    comp_id = 'data'
    offset  = 0
    length  = 4096
  IMPORTING
    length  = length
  TABLES
    data    = t_data
  EXCEPTIONS
    bad_request           = 1
    unauthorized          = 2
    not_found             = 3
    conflict              = 4
    internal_server_error = 5
    error_http            = 6
    error_url             = 7
    error_signature       = 8
    OTHERS                = 9.

IF sy-subrc <> 0.
  MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
          WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
If I execute the program, SAP returns the following error:
Error in HTTP Access: IF_HTTP_CLIENT -> RECEIVE 1
The sy-subrc variable value is 6 -> error_http
The method call is the following:
call method http_client->receive
  exceptions
    http_communication_failure = 1
    http_invalid_state         = 2
    others                     = 3.
The sy-subrc value is 1 -> http_communication_failure
The strange thing is that the archiving objects work perfectly on SAP 4.6C; furthermore, on version ECC 6.0 the standard and non-standard objects use the SAP Content Server with the same connection (HTTP), IP, port and repository.
I hope someone can help me. Many thanks for your time.
Best regards.
Hi Samanjay,
To answer your questions:
1) I tried the archiving from SARA with the object ZIECI_RF02 with both customizing variants. The store program fails with logical file name = ZZA_ARCHIVE_DATA_FILE_ZIECI_RF02 and with logical file name = ARCHIVE_DATA_FILE (example in answer 2).
2) AL11 Transaction (Sap Directories):
/usr/sap/CX6/ADK/files/ZA/
cx6adm 09.04.2008 18:02:15 ZIECI_RF02_20080409.180215.ARUIZ
This is the file from ZIECI_RF02 in AL11; I tried this yesterday. The name 000241-001ZIECI_RF02 is the one generated to save the file in repository ZA. In this case the generated name is 000342-001ZIECI_RF02. You can view this in table ADMI_FILES via transaction SE11.
Entries on ADMI_FILES with create date = 09.04.2008
DOCUMENT: 342
ARCHIV KEY: 000342-001ZIECI_RF02
CREAT DATE: 09.04.2008
CREAT TIME : 18:18:28
OBJ COUNT: 1
FILENAME: ZIECI_RF02_20080409.18182
STATUS OPT: Not Stored
STATUS FIL: Archiving Completed
PATHINTERN: ZZA_ARCHIVE_GLOBAL_PATH
CREP:
ARCH DOCID:
Now, I put the same information from FI_TCJ_DOC (Standard Object):
AL11 Transaction (Sap Directories):
/usr/sap/CX6/SYS/global
cx6adm 10.04.2008 11:24:15 FI_FI_TCJ_DOC_20080410_112409_0.ARCHIVE
Entries on ADMI_FILES with create date = 10.04.2008
DOCUMENT: 343
ARCHIV KEY: 000343-001FI_TCJ_DOC
CREAT DATE: 10.04.2008
CREAT TIME: 11:24:09
OBJ COUNT: 2
FILENAME: FI_FI_TCJ_DOC_20080410_112409_0.ARCHI
STATUS OPT: Stored
STATUS FIL: Archiving Completed
PATHINTERN: ARCHIVE_GLOBAL_PATH
CREP: ZA
ARCH DOCID: 47FD890364131EABE1000000805A00A9
Finally, I ran the example with archiving object ZIECI_RF02, but assigning the standard logical file name.
AOBJ (Customazing settings):
Object Name: ZIECI_RF02 Archivado de datos: Facturas de Baja
Logical File Name: ARCHIVE_DATA_FILE
Now, execute the archiving at SARA transaction.
AL11 Transaction (Sap Directories):
/usr/sap/CX6/SYS/global
cx6adm 10.04.2008 12:33:25 FI_ZIECI_RF02_20080410_123324_0.ARCHIVE
Entries on ADMI_FILES with create date = 10.04.2008
DOCUMENT: 345
ARCHIV KEY: 000345-001ZIECI_RF02
CREAT DATE: 10.04.2008
CREAT TIME: 12:33:24
OBJ COUNT: 1
FILENAME: FI_ZIECI_RF02_20080410_123324_0.ARCHIVE
STATUS OPT: Not Stored
STATUS FIL: Archiving Completed
PATHINTERN: ARCHIVE_GLOBAL_PATH
CREP:
ARCH DOCID:
That is the strange part. At first I thought the problem was the SAP directory, but this test points to a different cause.
3) The details of repository ZA are:
Content Rep: ZA
Description: Document Area
Document Area: Data Archiving
Storage type: HTTP Content Server
Version no: 0046 Content Server version 4.6
HTTP server: 128.90.21.59
Port Number: 1090
HTTP Script: ContentServer/ContentServer.dll
Phys. Path: /usr/sap/CX6/SYS/global/
Many thanks for your answer and your time. If you have any questions, just ask me.
Best regards. -
Keep archived logs but deleting backup of db
Hey
I'm running a backup script every night that issues a "backup database plus archivelog" and saves the backup to disk. Due to disk space limitations I have to delete the obsolete backup taken the night before. However, "delete obsolete" also deletes the archived logs on disk. Is there a way to delete the obsolete backup but still keep the logs on disk?
I tried "delete backup of database completed before 'sysdate-1'", but I can't seem to get past the prompting for deletion; it has to run automatically.
Thanks for any advice.
Why don't you work with retention and then delete the obsolete backups?
delete backup of database will remove the full backup of the database (datafiles, archivelog, spfile). You can create a script to delete the backup of a given list of datafiles.
RMAN> report obsolete;
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 2
Report of obsolete backups and copies
Type Key Completion Time Filename/Handle
Archive Log 540 26-MAY-05 /db/ARON/fs1/archived_for_logminer/1_1.dbf
Archive Log 541 26-MAY-05 /db/ARON/fs1/archived_for_logminer/1_2.dbf
RMAN> configure retention policy to redundancy 1;
old RMAN configuration parameters:
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
new RMAN configuration parameters:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1;
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete
RMAN> report obsolete;
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
Report of obsolete backups and copies
Type Key Completion Time Filename/Handle
Backup Set 3125 22-JUL-05
Backup Piece 3127 22-JUL-05 /db/ARON/BACKUP/RMAN/backup_ARON_564316052_208_1_6ggq5hsk_1_1.bck
Backup Set 3126 22-JUL-05
Backup Piece 3128 22-JUL-05 /db/ARON/BACKUP/RMAN/backup_ARON_564316052_207_1_6fgq5hsk_1_1.bck
Backup Set 3139 22-JUL-05
Backup Piece 3140 22-JUL-05 /db/ARON/BACKUP/RMAN/backup_ARON_564316097_209_1_6hgq5hu1_1_1.bck
Backup Set 3190 24-OCT-05
Backup Piece 3193 24-OCT-05 /db/ARON/BACKUP/RMAN/backup_ARON_572546170_212_1_6kh20n3q_1_1.bck
Backup Set 3191 24-OCT-05
Backup Piece 3194 24-OCT-05 /db/ARON/BACKUP/RMAN/backup_ARON_572546170_213_1_6lh20n3q_1_1.bck
Backup Set 3192 24-OCT-05
Backup Piece 3195 24-OCT-05 /db/ARON/BACKUP/RMAN/backup_ARON_572546180_214_1_6mh20n44_1_1.bck
Archive Log 540 26-MAY-05 /db/ARON/fs1/archived_for_logminer/1_1.dbf
Archive Log 541 26-MAY-05 /db/ARON/fs1/archived_for_logminer/1_2.dbf
Bye, Aron -
Discussion: File CC Transport. Setting location of Folder, Server, Archive
Dear Experts,
I want to open a discussion.
This is not something technical but rather to be more managerial.
- When we transport the objects in Integration Builder to another server (like from DEV to QA), who do you think should be responsible to activate the transported objects? Will it be the basis guy or the development guy?
If the Basis how can they find which object to be activated easily? Is it from the change list? Because the Basis guy usually does not know about the objects, he is just given the TR and then transports it.
- This is a similar question but this is related to the File Communication Channel. Because when we transport the file communication channel we need to re-configure the filename, location, archive location, FTP server, username and password (for object has not been transported before).
Who do you think should re-configure that?
Any opinion is appreciated.
Thank you,
Suwandi C.
Hi,
I use the file transport mechanism.
I've been trying to configure the CTS+, but fails as I've read from the following document:
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/20ccd0ec-2839-2b10-0097-828fcd8ed809?overridelayout=true
In Chapter 2 it is stated that
"Please keep in mind that the CTS+ Close Coupling integration is only available for SAP NetWeaver
Process Integration 7.1 starting of SP06.
If you have older Patch Levels, only the loose coupling via File Export and upload of Non-ABAP
objects to a CTS+ communication system exists. "
Since I'm using the PI 7.1 SP04, I guess I have no other choice but to use the File transport mechanism.
So I have no choice but to have the Basis guy activate and configure the communication channels when transporting the PI objects in the Integration Builder.
EDIT:
I've just done some research and it seems that the close coupling for SAP PI is only available starting at version 7.1 SP06. For older versions, the loose coupling CTS by uploading the non-ABAP objects to the TR is still possible. Is this correct?
https://help.sap.com/saphelp_ctsplug20sm71/helpdata/en/09/734a27116b4f6185e072df4e8332fa/content.htm
https://help.sap.com/saphelp_ctsplug20sm71/helpdata/en/f5/6a5c15357b4db19fb07f5ee1a97472/content.htm
If this is correct, I need more insights of how this work. Will it work like this:
The PI developer will choose the PI objects to be transported and export them to a *.tpz file (using the file system mechanism). The developer will then attach the file to a TR in the ABAP transaction SE09. The TR will be given to the Basis guy to be transported. Once transported, the PI objects in the Integration Directory will be associated with the developer's user ID because the CTS transport mechanism is used.
Is that correct?
Thank you,
Suwandi C. -
Backup archive log with delete all input clause
Our database is a 10.2.0.3 RAC db and the database server is Windows 2003.
Our RMAN catalog db was down for a couple of weeks. During this two-week period we used the control file instead. But when I compare the log files from before using the control file and from after going back to the catalog db, I found the differences below. I also pasted the backup script. It looks like after we went back to the catalog db, RMAN is able to delete all archive logs as soon as they are backed up. The only change I can think of is that one of the archive log destinations was changed from F:\archive to G:\archive. This db has a physical standby db which is not up to date. Can you help me figure out why the backup process behaves differently? I am worried that if we bring the standby db up to date, we will not be able to ship the archive logs, since they are deleted by the backup process. Thank you so much for your help. Shirley
10> resync catalog;
11> #change archivelog all crosscheck;
12> crosscheck archivelog all;
13>
14> #Backup Database and archive log files.
15> backup as compressed backupset
16> incremental level 0 format 'F:\backup\%d_LVL0_%T_%u_s%s_p%p' filesperset 5 tag 'INDRAC'
17> database plus archivelog format 'F:\backup\%d_LVL0_%T_%u_s%s_p%p'
18> filesperset 10 tag 'INDRAC'
19> delete all input;
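Note the difference between the two clauses: DELETE INPUT removes only the particular copy of each archived log that was backed up, while DELETE ALL INPUT removes every copy from all enabled archive destinations. If the intent is to keep one copy on disk for the standby, a hedged variant of the script above (same illustrative formats and tags) would end with:

```
backup as compressed backupset
  incremental level 0 format 'F:\backup\%d_LVL0_%T_%u_s%s_p%p' filesperset 5 tag 'INDRAC'
  database plus archivelog format 'F:\backup\%d_LVL0_%T_%u_s%s_p%p'
  filesperset 10 tag 'INDRAC'
  delete input;
```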
Before using control file:
channel ORA_DISK_2: finished piece 1 at 24-MAY-08
piece handle=F:\BACKUP\PRODRAC_LVL0_20080524_04JH7588_S41988_P1 tag=INDRAC comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:04:17
RMAN-08137: WARNING: archive log not deleted as it is still needed
archive log filename=F:\ARCHIVE\PRODRAC_004_04966_0575926036.ARC thread=4 sequence=4966
RMAN-08137: WARNING: archive log not deleted as it is still needed
archive log filename=E:\ARCHIVE\PRODRAC_004_04966_0575926036.ARC thread=4 sequence=4966
RMAN-08137: WARNING: archive log not deleted as it is still needed
archive log filename=F:\ARCHIVE\PRODRAC_004_04967_0575926036.ARC thread=4 sequence=4967
RMAN-08137: WARNING: archive log not deleted as it is still needed
archive log filename=E:\ARCHIVE\PRODRAC_004_04967_0575926036.ARC thread=4 sequence=4967
After went back to catalog:
channel ORA_DISK_2: backup set complete, elapsed time: 00:02:47
channel ORA_DISK_2: deleting archive log(s)
archive log filename=G:\ARCHIVE\PRODRAC_004_05760_0575926036.ARC recid=51689 stamp=660017344
archive log filename=E:\ARCHIVE\PRODRAC_004_05760_0575926036.ARC recid=51688 stamp=660017344
archive log filename=G:\ARCHIVE\PRODRAC_004_05761_0575926036.ARC recid=51697 stamp=660032069
archive log filename=E:\ARCHIVE\PRODRAC_004_05761_0575926036.ARC recid=51696 stamp=660032069
archive log filename=E:\ARCHIVE\PRODRAC_004_05762_0575926036.ARC recid=51704 stamp=660051690
archive log filename=G:\ARCHIVE\PRODRAC_004_05762_0575926036.ARC recid=51705 stamp=660051690
archive log filename=E:\ARCHIVE\PRODRAC_004_05763_0575926036.ARC recid=51710 stamp=660061718
archive log filename=G:\ARCHIVE\PRODRAC_004_05763_0575926036.ARC recid=51711 stamp=660061718
archive log filename=E:\ARCHIVE\PRODRAC_004_05764_0575926036.ARC recid=51716 stamp=660069980
archive log filename=G:\ARCHIVE\PRODRAC_004_05764_0575926036.ARC recid=51717 stamp=660069980
archive log filename=E:\ARCHIVE\PRODRAC_004_05765_0575926036.ARC recid=51720 stamp=660081117
archive log filename=G:\ARCHIVE\PRODRAC_004_05765_0575926036.ARC recid=51721 stamp=660081117
archive log filename=G:\ARCHIVE\PRODRAC_004_05766_0575926036.ARC recid=51723 stamp=660087215
archive log filename=E:\ARCHIVE\PRODRAC_004_05766_0575926036.ARC recid=51722 stamp=660087214
channel ORA_DISK_1: finished piece 1 at 14-JUL-08
piece handle=F:\BACKUP\PRODRAC_LVL0_20080714_1MJLG8GQ_S45110_P1 tag=INDRAC comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:04:43
Shirley,
If there was no change to the E: then the logs should have been kept (RMAN-8137) unless they were possibly aged out of the controlfile (unlikely since it defaults to 65K) or have been applied already. (The F: logs would have been marked as EXPIRED at the next crosscheck). Check to see which are EXPIRED:
RMAN> list expired archivelog all;
To see the earliest log:
select sequence# from v$log_history where rownum <2;
Have you checked V$MANAGED_STANDBY to ensure they were not already applied? Given that your latest RMAN log shows logs being deleted with no RMAN-8137 raised, they are probably no longer needed for the standby.
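To protect a standby from exactly this scenario, a hedged sketch (the deletion policy clause exists in 10g RMAN, though its availability depends on patch level and on how the standby archive destinations are configured; dest_id = 2 below is an assumption for the standby destination):

```
RMAN> configure archivelog deletion policy to applied on standby;

SQL> select sequence#, applied
     from v$archived_log
     where dest_id = 2
     order by sequence#;
```

With this policy in place, RMAN should refuse to delete an archived log until it has been applied on the standby.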
Maybe you are looking for
-
What is the best way to clean up Contacts?
After synching multiple email address books and 3 Mac devices into Contacts, then that backing up to iCloud, I now have a mess in my Contacts App. There are so many duplicated entries! I thought the smartest thing to do was to link duplicates, since
-
I desire to know how to install Adobe Reader on a flash drive.
My desk top computer is older & does not have internet access. I would like to install free Adobe Reader on a flash drive while on my lap top that has internet access & then install it on my desk top. Is this possible & if so How do I do it? [ email
-
HT4427 installing a hard drive on my macbook.
How do I install an internal hard drive in my MacBook?
-
7520 all in one will not print photos from photo tray from computer
7520 all-in-one Photosmart printer will not print photos from the computer from the photo tray
-
Row To Column with distinct values
Hi Oracle Gurus, Please help me on this regard. A table has columns statuscode,reasoncode,date with valid values as statuscode -> status1,status2,status3,status4,status5 reasoncode -> a,b,c,d,e Date will be passed by runtime. Requirement is to take t