Message ID Archive Mode

Hi.
I have a requirement to get the MessageID and add it to the file name in archive mode (sender channel).
I only see an option to Add Time Stamp, but I need the MessageID. Is that possible?
Thanks

Luis,
I have a requirement to get the MessageID and add it to the file name in archive mode (sender channel).
I only see an option to Add Time Stamp, but I need the MessageID. Is that possible?
AFAIK, this is not possible. There is no option in the Sender File CC to add MessageID in the file name for archiving.
What you can do is - create another file channel (Receiver), which will write the file in the archive directory with the MessageID as part of the file name.
I would wait to see what others have to say on this issue.
Regards,
Neetesh
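
If you do build the extra receiver file channel that Neetesh describes, the usual approach (as far as I recall the PI mapping API - please verify on your release) is a small UDF that reads the message ID from the transformation parameters and writes it into the FileName adapter-specific attribute, with "Use Adapter-Specific Message Attributes" and "File Name" enabled on that receiver channel. A rough sketch only; the UDF name and the file-name pattern are my own examples:

// Sketch of a message-mapping UDF (com.sap.aii.mapping.api classes) that puts the
// XI MessageID into the receiver file name via DynamicConfiguration.
// Assumes "Use Adapter-Specific Message Attributes" + "File Name" are ticked
// on the receiver file channel; the "_<id>.xml" pattern is only an example.
public String setFileNameFromMessageId(String fileBase, Container container)
        throws StreamTransformationException {
    java.util.Map params = container.getTransformationParameters();
    String msgId = (String) params.get(StreamTransformationConstants.MESSAGE_ID);
    DynamicConfiguration conf =
        (DynamicConfiguration) params.get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);
    DynamicConfigurationKey key =
        DynamicConfigurationKey.create("http://sap.com/xi/XI/System/File", "FileName");
    conf.put(key, fileBase + "_" + msgId + ".xml");
    return msgId;
}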

Similar Messages

  • RE: Processing Logged messages in batch mode ?

    Paul,
    One way to find out the log file is to inspect your
    <CentralServer>.log. It should have a line something like..
    Redirecting output to <someDirectory>/forte_ex_26564.log.
    That is the file that the current ftexec is writing to. (A scripted version of this follows at the end of this message.)
    This has worked fine for me.
    Another way is to sort the files by date, and look at the latest
    ones.
    ...I'd love to hear about a better way to do this.
    Ajith Kallambella. M
    Forte Systems Engineer
    International Business Corporation.
    -----Original Message-----
    From: [email protected] [mailto:[email protected]]
    Sent: Monday, March 01, 1999 9:06 AM
    To: Forte-Users (e-mail address)
    Subject: Processing Logged messages in batch mode ?
    Hi,
    When I launch partitions, they display a whole bunch of 'useful' messages.
    (maybe using 'task.logmgr.putline')
    I'm afraid these traces go directly in the launcher's log file under
    $FORTE_ROOT/log
    As I get control only when the ftexec ends for the next instruction to be
    interpreted,
    how can I figure out which of these log files relates to the ftexec I just
    got executed ?
    (example: "forte_ex_3613.log")
    I found '/output' for node managers, but for single ftexec(s) ?
    (thinking also about re-used ftexec(s))
    Thanks in advance,
    j-paul gabrielli
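    Ajith's approach above can be scripted for the batch case: scan the central server log for the "Redirecting output to <file>" lines and take the last one. A rough sketch only, assuming the log location and the message wording shown in the mail above:
    // Rough sketch: scan <CentralServer>.log for "Redirecting output to <file>"
    // lines and print the last match, i.e. the log the current ftexec writes to.
    // The marker text and log location are assumptions taken from the thread above.
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class FindFtexecLog {
        public static void main(String[] args) throws IOException {
            String centralLog = args.length > 0 ? args[0] : "centralserver.log"; // e.g. $FORTE_ROOT/log/<CentralServer>.log
            String marker = "Redirecting output to ";
            String lastMatch = null;
            try (BufferedReader in = new BufferedReader(new FileReader(centralLog))) {
                String line;
                while ((line = in.readLine()) != null) {
                    int pos = line.indexOf(marker);
                    if (pos >= 0) {
                        lastMatch = line.substring(pos + marker.length()).trim();
                    }
                }
            }
            System.out.println(lastMatch != null ? lastMatch : "no redirect line found");
        }
    }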

  • FW: Processing Logged messages in batch mode ?

    If you rename your partitions' log files before running the app and do the
    renaming according to each partition's purpose or partition number, then
    after running the app, just go to the appropriate log file. You can do the
    renaming using system agents so that it is not a manual process (a rough sketch of this follows at the end of this thread).
    Essentially, you know in advance where the output goes.
    -Ravi
    -----Original Message-----
    From: J-Paul GABRIELLI [SMTP:[email protected]]
    Sent: Monday, March 01, 1999 11:29 AM
    To: 'Kalidindi, Ravi CWT-MSP'
    Subject: RE: Processing Logged messages in batch mode ?
    Sorry, I was looking for a batch way, when the partition is dead (i.e.
    can't monitor it nor view it in escript).
    No gui :-)
    j-p
    -----Original Message-----
    From: Kalidindi, Ravi CWT-MSP [SMTP:[email protected]]
    Date: Monday, March 1, 1999 17:51
    To: 'Kallambella, Ajith'; 'J-Paul GABRIELLI'; 'Forte'
    Subject: RE: Processing Logged messages in batch mode ?
    In case of installed applications or running in distributed mode, you can
    use econsole to choose the appropriate active partition and view the log
    file for it. The "log file" window also displays the name of the
    particular
    log file. In case of installed applications, you can also rename the log
    file for your partitions.
    Hope that helps
    -Ravi Kalidindi
    Born Info Svcs Group
    -----Original Message-----
    From: Kallambella, Ajith [SMTP:[email protected]]
    Sent: Monday, March 01, 1999 9:47 AM
    To: 'J-Paul GABRIELLI'; 'Forte'
    Subject: RE: Processing Logged messages in batch mode ?
    Paul,
    One way to find out the log file is to inspect your
    <CentralServer>.log. It should have a line something like..
    Redirecting output to <someDirectory>/forte_ex_26564.log.
    That is the file that the current ftexec is writing to.
    This has worked fine for me.
    Another way is to sort the files by date, and look at the latest
    ones.
    ...I'd love to hear about a better way to do this.
    Ajith Kallambella. M
    Forte Systems Engineer
    International Business Corporation.
    -----Original Message-----
    From: [email protected] [mailto:[email protected]]
    Sent: Monday, March 01, 1999 9:06 AM
    To: Forte-Users (e-mail address)
    Subject: Processing Logged messages in batch mode ?
    Hi,
    When I launch partitions, they display a whole bunch of 'useful' messages.
    (maybe using 'task.logmgr.putline')
    I'm afraid these traces go directly in the launcher's log file under
    $FORTE_ROOT/log
    As I get control only when the ftexec ends for the next instruction to be
    interpreted,
    how can I figure out which of these log files relates to the ftexec I just
    got executed ?
    (example: "forte_ex_3613.log")
    I found '/output' for node managers, but for single ftexec(s) ?
    (thinking also about re-used ftexec(s))
    Thanks in advance,
    j-paul gabrielli
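    Ravi's pre-run renaming above can also be done without system agents; a rough sketch, with a purely illustrative directory and naming scheme (not Forte defaults):
    // Hypothetical helper: rename existing forte_ex_*.log files before a run so
    // that the logs produced by the new run are easy to tell apart afterwards.
    // The directory argument and "before_run_" prefix are illustrative only.
    import java.io.File;

    public class RenamePartitionLogs {
        public static void main(String[] args) {
            File logDir = new File(args.length > 0 ? args[0] : "log"); // e.g. $FORTE_ROOT/log
            File[] logs = logDir.listFiles((dir, name) -> name.startsWith("forte_ex_") && name.endsWith(".log"));
            if (logs == null) return;
            for (File log : logs) {
                File renamed = new File(logDir, "before_run_" + log.getName());
                if (!log.renameTo(renamed)) {
                    System.err.println("Could not rename " + log);
                }
            }
        }
    }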

  • Processing Logged messages in batch mode ?

    Hi,
    When I launch partitions, they display a whole bunch of 'useful' messages.
    (maybe using 'task.logmgr.putline')
    I'm afraid these traces go directly in the launcher's log file under $FORTE_ROOT/log
    As I get control only when the ftexec ends for the next instruction to be interpreted,
    how can I figure out which of these log files relates to the ftexec I just got executed ?
    (example: "forte_ex_3613.log")
    I found '/output' for node managers, but for single ftexec(s) ?
    (thinking also about re-used ftexec(s))
    Thanks in advance,
    j-paul gabrielli

    The instrument "LogFile" on the active partition agent contains the name of
    the log file.
    At 10:47 AM 3/1/99 -0500, Kallambella, Ajith wrote:
    Paul,
    One way to find out the log file is to inspect your
    <CentralServer>.log. It should have a line something like..
    Redirecting output to <someDirectory>/forte_ex_26564.log.
    That is the file that the current ftexec is writing to.
    This has worked fine for me.
    Another way is to sort the files by date, and look at the latest
    ones.
    ...I'd love to hear about a better way to do this.
    Ajith Kallambella. M
    Forte Systems Engineer
    International Business Corporation.
    -----Original Message-----
    From: [email protected] [mailto:[email protected]]
    Sent: Monday, March 01, 1999 9:06 AM
    To: Forte-Users (e-mail address)
    Subject: Processing Logged messages in batch mode ?
    Hi,
    When I launch partitions, they display a whole bunch of 'useful' messages.
    (maybe using 'task.logmgr.putline')
    I'm afraid these traces go directly in the launcher's log file under
    $FORTE_ROOT/log
    As I get control only when the ftexec ends for the next instruction to be
    interpreted,
    how can I figure out which of these log files relates to the ftexec I just
    got executed ?
    (example: "forte_ex_3613.log")
    I found '/output' for node managers, but for single ftexec(s) ?
    (thinking also about re-used ftexec(s))
    Thanks in advance,
    j-paul gabrielli
    ============================================
    Don Nelson
    Senior Systems Architect
    Forte Software, Inc.
    Denver, CO
    Phone: 303-265-7709
    Corporate voice mail: 510-986-3810
    aka: [email protected]
    ============================================
    "Nothing spoils fun like finding out it builds character." - Calvin

  • JSF 1.2 app is deployed only in exploded archive mode if JSF 2.0 facet used

    Hi.
    I'm developing a JSF 1.2 + Facelets application. The only way to configure the Eclipse WTP editors to properly handle xhtml pages is to install the JSF 2.0 facet (there was a Facelets plugin before, but it was superseded by the JSF 2.0 facet, if I'm not mistaken). When the application is deployed on WL, OEPE shows a message that JSF 2.0 applications can be deployed only in exploded archive mode.
    Is there any way to make OEPE believe that this application is based on JSF 1.2 (as it really is)?
    Regards,
    Vadim.

    Hi, Ian.
    "You are stating that it's a JSF 2.0 app by selecting that facet - the tooling relies on your facet selection to determine what features are available."
    Thought as much. Determining which JSF implementation will be active at deployment time looks like a very complicated task, and the facet version is a reliable source of that kind of info. Sadly :)
    "What 'eclipse WTP editors' are you trying to configure?"
    The "HTML editor". Without the JSF 2.0 facet it doesn't handle JSF taglibs' namespaces or EL expression content assist/navigation.
    The other option is to install JBoss Tools' RichFaces support, but for various reasons I'd rather not.
    After using exploded deployment for a week now, I see that it is not as slow as I initially thought (at least for a small project): JSP/XHTML changes are picked up and class methods are reloaded. Manual republishing is still not as fast as with split-source deployment, but I don't call it often.
    Thanks.

  • Recovery in archive mode, but no archived logs

    Hi,
    I hope someone can help me with the following question.
    I have a 9.2 database in archive mode. Suppose at t=t0 I make an online backup. At the end of this run, I do an "alter system switch logfile" statement to capture the latest transactions (more or less also at t=t0). Later, at t=t1, we do some transactions and more archived redo logs are created. At t=t2 I need to restore the complete database, but I have LOST all the archive files from t=t0 onwards. Now the question is: is it still possible to recover (even if it means going back to t=t0)? I have tried it, but the system always asks me to apply the (missing) archived redos, and it seems I cannot escape this. But is it possible to get back to t=t0?
    Thanks a lot, for any clue or pointer !

    If you take a hot backup and you lose the redo logs, you have a fundamentally inconsistent backup. Each tablespace will be internally consistent, but they probably won't have the same SCN as the control files, so you'll get this message.
    Do you have an earlier hot backup and the archived logs that would restore that backup to the point in time (t=0) where you took the latest hot backup? Personally, I generally like to keep at least 2 or 3 old backups, with archived log files, just in case something goes wrong with the most recent backup.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Can we change noarchivelog mode to archive mode using RMAN in Enterprise Linux

    How can I change noarchivelog mode to archive mode using Recovery Manager in Enterprise Linux?

    Did you read the other post where you asked this same question: {message:id=3868427}

  • Archiving of ERS messages: maintain archiving parameters for message type

    Dear experts,
    I have problems creating archive messages for ERS messages in ECC 6.0. I always get message "Maintain archiving parameter(s) for output type ERS (appl. MR)" even if I have done all steps as described in SAP note 391822:
    NACE: application "MR", output type "ERS"
    details: storage mode "2 Archive only", document type "ZUNSGUTHCD"
    processing routine: medium "1 print", program "RM08NAST", form routine "ENTRY_ERS", form "MR_PRINT".
    A condition record for message type ERS is existing with function "VN" and medium "1" and date/time "4".
    OAC2: document type "ZUNSGUTHCD", document class "PDF", status "x".
    OAC3: object type "BUS2081", Doc.type "ZUNSGUTHCD", L "x", Content rep. ID "K1", link "TOA01", retention period "0".
    The archive "K1" is existing and working for other documents, but not for ERS.
    Something is missing, but I can't find out what. Any help is greatly appreciated; points are given!
    Regards, Karsten

  • File Adapter archiving mode....

    Putting message into send queue failed, due to: com.sap.aii.af.ra.ms.api.DuplicateMessageException: Message ID a7cbcf00-4e00-11db-bd7f-0018717400ed(OUTBOUND) already exists in database: com.sap.sql.DuplicateKeyException: ORA-00001: unique constraint (SAPSR3DB.SYS_C00135467) violated.
    2006-09-27 17:17:47 Error Returning to application. Exception: com.sap.aii.af.ra.ms.api.DuplicateMessageException: Message ID a7cbcf00-4e00-11db-bd7f-0018717400ed(OUTBOUND) already exists in database: com.sap.sql.DuplicateKeyException: ORA-00001: unique constraint (SAPSR3DB.SYS_C00135467) violated
    2006-09-27 17:17:47 Error Attempt to archive file "D:\SSNEDI\test\in\queue\20060926070906359.txt" after processing failed. Retry
    source directory : D:\SSNEDI\test\in\queue
    target directory : D:\SSNEDI\test\in\queue\FB
    But the file 20060926070906359.txt was not deleted from the source directory,
    and the file also exists in the target directory.
    What is the problem?

    Hi Kim,
    If it is in test mode, the file won't get deleted from your source directory, so the adapter will keep polling the file and the server will be flooded with messages. If it is in delete mode, the file will be deleted; in archive mode it will be moved into the archive directory (see the illustration below); but in test mode the file will not be deleted from your source directory and will be polled continuously by the adapter.
    Also go through this link:
    Re: Archiving ftp files
    Hope this provides a solution.
    Regards.
    Praveen
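    As a side note on what archive mode does with the source file: the behaviour amounts to moving the file into the archive directory after successful processing, roughly as in the sketch below. This is only an illustration using the paths from the error log above, not the adapter's actual code:
    // Illustration only: "archive mode" effectively moves the polled file into the
    // archive directory once processing succeeds. Paths are taken from the log above.
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class ArchiveAfterProcessing {
        public static void main(String[] args) throws IOException {
            Path source  = Paths.get("D:\\SSNEDI\\test\\in\\queue\\20060926070906359.txt");
            Path archive = Paths.get("D:\\SSNEDI\\test\\in\\queue\\FB");
            Files.createDirectories(archive);
            // overwrite a stale copy in the archive directory if one is already there
            Files.move(source, archive.resolve(source.getFileName()), StandardCopyOption.REPLACE_EXISTING);
        }
    }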

  • Archive mode

    After I execute the Begin Archive command for a cube, can I change any database properties?
    I tried adding comments and they do get updated.
    What types of changes will it not accept while the cube is in archive mode?
    Data updates should not be allowed, for sure!

    First, why would you make any changes while you are backing up the database? When you execute the Begin Archive command, you are backing up the database in the state it was in right before you executed the command.
    Placing the database in read-only (or archive) mode protects the database from updates during the backup process.
    After you perform the backup, return the database to read-write mode.
    The begin archive command performs the following tasks:
    * Commits modified data to disk.
    * Switches the database to read-only mode.
    * Reopens the database files in shared, read-only mode.
    * Creates in the ARBORPATH\app\appname\dbname directory a file (default name of archive.lst) containing a list of files to be backed up.
    Attempting to modify data during the backup process results in an error message that the data is in read-only mode; therefore, you should not make any changes to the database during this process.

  • Strange: stored exit -1 in hotbackup and archive mode

    Hello all,
    we got a strange problem with hot backup and archive mode enabled for the calendar database. (JES 4 2005Q4 with all available patches, 116577-42)
    The store daemon exited with -1 if either of the above was activated.
    Checked the database - OK; made a new database with rebuild - same condition.
    Both processes stop after copying ics50alarms.db, and the daemon exits with -1.
    But csbackup works fine.
    Any ideas ?
    Thanks in advance
    Frank

    Hello,
    we found the reason for this malfunction:
    the store daemon runs a Perl script, and this script unfortunately does not check the
    language but greps some system messages.
    So we corrected the language settings (system messages) and the store daemon now runs correctly.
    Regards
    Frank

  • Changing of the timestamp in sender file adapter in archive mode

    Hi,
    I have a requirement wherein I have to archive a file with a timestamp different from the one generated by XI.
    Please let me know if this can be done and, if so, how we can handle the changes to be made to the timestamp in the sender adapter in archive mode.
    regards,
    Srinivas.

    Srinivas,
    Option 1) Create a bat file to run the Perl script below.
    Perl script:
    #!/usr/bin/perl -w
    print("Starter that you want to change: ");
    chomp($badex = <STDIN>);
    print("Starter that you want added: ");
    chomp($goodex = <STDIN>);
    # rename every file beginning with the old starter so it begins with the new one
    foreach $file (<$badex*>){
        @fields = split(/$badex/,$file);
        $goodfile = ("$goodex" . "$fields[1]");
        rename("$file","$goodfile");
    }
    Run that on the OS. That should fix it.
    Option 2) On your local machine create a Java file and add this code to it:
    public class Utils {
        public static int Randomizer() {
            // random integer between 0 and 999
            int randomInt = (int) (Math.random() * 1000);
            return randomInt;
        }
        public static void main(String[] args) {
            System.out.println(Randomizer());
        }
    }
    Save and compile.
    Create a bat file to add the number returned from the randomizer to your target file name,
    so it would be something like:
    mv oldFileName Newfilename+randomizer... and also get this command written to a file - helpful later on. (A combined Java sketch follows at the end of this reply.)
    Hope that helps
    Regards
    Ravi Raman
    PS: Don't forget the points if helpful.
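    For completeness, option 2 above can also be collapsed into a single Java step instead of a separate bat file; a quick sketch combining the randomizer with the rename (the file names used are placeholders):
    // Quick sketch combining the two steps above: generate a random suffix and
    // rename the archived file with it. The file names here are placeholders.
    import java.io.File;

    public class RenameWithRandomSuffix {
        static int randomizer() {
            return (int) (Math.random() * 1000); // random integer between 0 and 999
        }

        public static void main(String[] args) {
            File oldFile = new File(args.length > 0 ? args[0] : "archive_old.txt");
            File newFile = new File(oldFile.getParent(), "archive_" + randomizer() + ".txt");
            if (oldFile.renameTo(newFile)) {
                System.out.println("Renamed to " + newFile);
            } else {
                System.err.println("Rename failed for " + oldFile);
            }
        }
    }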

  • How to recover the data from a  dropped table in production/archive mode

    How do I recover the data/changes of a table that was dropped by accident?
    The database is in archive mode.

    Oracle version? If 10g,
    try this way:
    SQL> create table taj as select * from all_objects where rownum <= 100;
    Table created.
    SQL> drop table taj ;
    Table dropped.
    SQL> show recyclebin
    ORIGINAL NAME    RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
    TAJ              BIN$b3MmS7kYS9ClMvKm0bu8Vw==$0 TABLE        2006-09-10:16:02:58
    SQL> flashback table taj to before drop;
    Flashback complete.
    SQL> show recyclebin;
    SQL> desc taj;
    Name                                      Null?    Type
    OWNER                                              VARCHAR2(30)
    OBJECT_NAME                                        VARCHAR2(30)
    SUBOBJECT_NAME                                     VARCHAR2(30)
    OBJECT_ID                                          NUMBER
    DATA_OBJECT_ID                                     NUMBER
    OBJECT_TYPE                                        VARCHAR2(19)
    CREATED                                            DATE
    LAST_DDL_TIME                                      DATE
    TIMESTAMP                                          VARCHAR2(19)
    STATUS                                             VARCHAR2(7)
    TEMPORARY                                          VARCHAR2(1)
    GENERATED                                          VARCHAR2(1)
    SECONDARY                                          VARCHAR2(1)
    SQL>
    M.S. Taj

  • Exceptions are using rpc/encoded messages in documentwrapped mode

    Version: Weblogic 8.1SP3
    I exposed a web service in document-wrapped(/literal) mode, and it seems WebLogic is using the rpc/encoded convention instead to serialize Exceptions.
    I posted more info on the Axis bug tracker: http://nagoya.apache.org/jira/browse/AXIS-1576
    but I found later that the problem seems to come from WebLogic.
    Does anyone know a workaround for this problem?

    Hi Shabrish,
    As per the details from the MQ Server technical guys, a thread is opened by the PI admin user to put the messages on the MQ server queues, and that thread does not close itself, leaving the message hanging in Delivering mode. Because of this, all messages behind it also get queued in Delivering mode. As soon as we restart the Java stack, it gets flushed, opens a new connection, and delivers all the old messages, but a new message again gets stuck in Delivering mode.
    Any idea on this?
    Regards,
    Anurag

  • Test cold backup database in archive mode

    Hello,
    We do a cold backup of an Oracle database every week; the database is in archive mode.
    We would like to test our cold backup on another server, but the drives where we have to copy
    the datafiles, logs, and rollback segments are different on this server.
    In the production server the oracle_home and the oracle_sid:
    ORACLE_HOME=E:\oracle\product\10.2.0\db_1
    ORACLE_SID=ENCY
    and on the other server to test the backup are:
    ORACLE_HOME=I:\oracle\product\10.2.0\db_2
    ORACLE_SID=ENCY
    Can somebody please tell me the steps that I have to follow to restore the database on the other server,
    to test whether the last cold backup is fine?
    My understanding is: copy the files (datafiles, log files, control files, pfile, ...) to the new server, and once everything is copied onto the new server,
    I have to:
    Modify the parameter CONTROL_FILES in the pfile
    Mount the database and rename the datafiles and redo log files
    Thanks.

    user641364 wrote:
    Hello,
    We do a cold backup of an Oracle database every week; the database is in archive mode.
    We would like to test our cold backup on another server, but the drives where we have to copy
    the datafiles, logs, and rollback segments are different on this server.
    In the production server the oracle_home and the oracle_sid:
    ORACLE_HOME=E:\oracle\product\10.2.0\db_1
    ORACLE_SID=ENCY
    and on the other server to test the backup are:
    ORACLE_HOME=I:\oracle\product\10.2.0\db_2
    ORACLE_SID=ENCY
    The only difference between the servers is a different ORACLE_HOME path?
    All other directories are the same?
    >
    Can somebody help me saying please the steps that I have to follow to restore the database in other server
    to test if the last cold backup is fine?
    My understand is copy the files(datafiles, log files, control files, pfile,...) in the new server, when everything is copied into new server,
    I have to:
    Modify the parameter CONTROL_FILES in the pfile
    Mount the database and rename the datafiles and redo log files
    You should modify and rename only if the paths to the datafile, redo log, control file, dump, and archive log directories are different on the test server.
    If they are the same, then there is no need to rename.
    If they are different, then modify the pfile to use the correct paths.
