Clarification in transports

Hi all,
I have one master data (ID) InfoObject that appears under unassigned nodes in the quality system. Should I capture it under InfoObjects or under InfoSources when collecting objects for transport? Please let me know.
Regards
Ashwin

Hello Ashwin,
In the InfoObject, go to change mode and, on the Master Data/Texts tab, make sure an InfoArea is assigned in Development; if not, assign one and transport the object again. Also make sure the InfoArea is available in Quality, otherwise you have to transport that too.
Thanks
Chandran

Similar Messages

  • Clarification about Transports

    Hi Gurus:
    Small Q. Let's say I have 5 transports and I know the order in which they should be released (one after another).
    Should I wait until transport 1 reaches Quality successfully before releasing transport 2? In this example Tr 2 depends on Tr 1, e.g. Tr 1 = ODS changes and Tr 2 = update rules.
    I think I'm 99% correct, but please confirm.
    In cases where an ODS feeds 5 cubes, I hope I can release the transports of the update rules of all 5 cubes in parallel for this scenario, right?
    Please suggest

    Should I wait until transport 1 reaches Quality successfully before releasing transport 2? In this example Tr 2 depends on Tr 1, e.g. Tr 1 = ODS changes and Tr 2 = update rules.
    Ans : For ex if you have 5 transports
    A
    B
    C
    D
    E
    1. You need to check whether there are any interdependencies, i.e. whether any transport contains objects that are required by another transport.
    2. If there are interdependencies, you need to follow the sequence.
    3. If there are no dependencies, you can go ahead and move all the transports at the same time.
    For your second question: since you are updating 5 different cubes, you can move all the transports at the same time.
    Regards
    vijay thammineni

  • Transporting Function Module & Function group

    Hello All,
    I want some clarification in transporting a FM, FG, Extract Structure.
    First I collected the function group in a transport request (R3TRFUGR********), and collected the function module and the extract structure in the same request.
    This function group has only one function module and the includes LRSALK01, LRSAXD01, LZ_BI_DS_DATATOP, LZ_BI_DS_DATAUXX, RSAUMAC.
    Do I need to collect these includes separately in the same request, or is it enough if I collect only the function group and function module and move the request?
    Please let me know.
    Thank you.

    Hi Saptrain,
    the best is always to let the system handle it: as soon as you assign an object (function group, function module, include or whatever) to a package, it will ask for a transport. Create the transport and the transport management system will include in the transport whatever has to be transported.
    As far as I know (though I have never had to check), a function group (R3TR FUGR) will transport everything that is in the function group: all function modules, all includes, screens, texts, statuses and, I think, even test cases.
    You will find FUGR in the transport after creating the function group.
    If you change something after releasing the transport, only the changed objects (i.e. the includes) will be transported.
    Regards,
    Clemens

  • HTTP Service Maintenance for BSP

    Hi Friends..
    A clarification :
    I transported a BSP to production. When I go there to display my BSP in SE80, it gives a message:
    There is no SICF node yet for this BSP application.
    Could not generate the node automatically.
    Add a node manually using "HTTP Service Maintenance".
    Also, I am not able to run this BSP; it throws an exception.
    I want to know whether we can create the node directly in production for this BSP, or
    whether, if I transport this from the DEV system, it will be created automatically in the production system after the transport.
    I am quite new to BSP.
    Immediate help would be appreciated.
    Regards
    Deepak

    First of all:
    A better place for this post is the BSP forum.
    Coming to your requirement:
    For your BSP application the SICF node should be created automatically if you have transported properly.
    When you create a new BSP application and attach it to a transport request, the following objects should be under the transport task:
    ICF Service    "as soon as you create the BSP application
    Info Object from the MIME Repository   "if you have MIMEs for your application
    BSP (Business Server Pages) Application "once the page is created
    So in your case, just go to the DEV environment, add your BSP application to a new transport request and transport it.
    You can also manually add these things to your transport request via transaction SE03 -> Object Directory Entry:
    R3TR     SICF "SICF service
    R3TR     SMIM "MIME objects
    R3TR     WAPA "BSP pages
    Hope I am clear.
    Regards
    Raja

  • Clarification required for Shipping Type fields in Transportation

    HI
    I am new to the Transportation module. I understand that the shipment document is controlled by the shipping type; if I am wrong, please correct me.
    When I look at the shipping type, I see many fields, a few of which I did not understand, as mentioned below:
    Service Level, Process Control, Leg Indicator, Adopt Route, Determine Legs, Shipping Type Preliminary Leg and Subsequent Leg Shipping Point
    I know this may be a basic question; please don't lock the thread. I also searched Google:
    Difference between stage and leg in shipping & transportation?
    https://wiki.sdn.sap.com/wiki/display/ERPLO/Transportation354
    I tried to understand using the F1 help but couldn't understand it clearly. Can anyone guide me on how these fields are useful in real-time scenarios and where and how they have an impact?
    Regards,
    Prasanna

    Service Level - You can differentiate the type of shipment, like Load, General Cargo, Express Cargo etc.
    Process Control - Just press F1 and read "Examples".
    Leg Indicator - If the cargo goes directly from origin to destination with one shipment document, then you can assign Direct Leg. If multiple stages of transportation are involved and you want to generate a shipment document at each stage, then you have to assign Preliminary Leg, Main Leg and Subsequent Leg.
    Adopt Route - Press F1.
    Determine Legs - Press F1 and go through "Explanation of Leg Determination Procedure", as it is very difficult to explain here.
    I handled this a long time back and hence am not able to give examples for each field. I will think it over and update this in case anything else comes to mind.
    G. Lakshmipathi

  • Clarification in charm for daily transports

    Dear Charm Experts,
    We have configured ChaRM in our test system and have a few clarifications. Please help.
    Objective: to use ChaRM for transport management of regular corrections for support and maintenance of the system (no implementation or upgrade).
    So far we have configured it and are testing the workflow. I am able to create a change request from DSWP, approve it and set it to "In Development", and also create a transport request from the change request.
    Now I am unable to release the transport request; I get the following error:
    "Action 'Regular Correction: Release Transport Request' cannot be
    executed during phase 'Development without Release'"
    My understanding is that I cannot release the request because the maintenance cycle status is "In Development". So to release the TR of the change request I also need to set the status of the maintenance cycle.
    If this is the case, do I need to create a maintenance cycle for each change we need in our landscape? Or how can I create and implement the changes using ChaRM, just like Remedy? I hope you understood my concern; please let me know if you need more information.
    Thank you

    Hi,
    If I understand your problem correctly, you are stuck in the flow of the normal correction.
    If the maintenance cycle (MC) is in status "Development w/o Release" and your change document is "In Development", at this point you can only create the TR and do the development activity.
    Once you set the MC to "Development with Release" and your change document to "In Development", you can release your TR task. If you trigger the action "Test Transport", a Transport of Copies (ToC) is created and imported into the test system, and you can do all your testing. If any correction is needed, do it again in the Dev system, release the new task, trigger the "Test Transport" action again, and a new ToC is created for testing in the test system.
    Once development is done, trigger the action "Release Transport Order"; the change document then moves to "Complete Development".
    Later you can move the MC to phase "Test" and the change request to "To Be Tested"; in this phase corrections are repaired by using test messages to create TRs and import them.
    So please set the phase of your MC to "Development with Release" and proceed.
    Thanks,
    Jansi

  • Re-Transport Clarification in BCS

    Hi All
    Due to some issues with the original transport, I want to create a manual transport for the data basis, consolidation area and special version.
    During the manual transport of the data basis (right-click on the data basis and select Transport), I reached the following options. What should I select among the three options below for each of these tasks?
    Data Basis
    1. Delete Selected Values
    2. Delete Transported Values
    3. Overwrite (grayed out)
    Consolidation Area
    1. Delete Selected Values
    2. Delete Transported Values
    3. Overwrite (not grayed out)
    Special Version
    1. Delete Selected Values
    2. Delete Transported Values
    3. Overwrite (not grayed out)
    Your help is really appreciated.
    Thx
    Ramana

    Hi Dan
    Thanks for your info. We got a response from SAP; one of the options is your method.
    Thx
    Ramana

  • Problem in creation of STO(stock transport order)

    hi all SAP gurus,
    I am facing a problem in the creation of a stock transport order with ME21N. I am getting the error message "Not possible to determine shipping data for material in STO".
    I have checked the stock transport order setup in customizing, but am not getting a clear idea of how to resolve this problem.
    If anybody knows how to do the customizing of the stock transport order setup, please explain it to me.
    thnx in advance.
    rgrds,
    rajesh

    Hi Rajesh,
    when you are creating an STO with document type UB, you have to maintain the material in both plants.
    Regarding the error you are getting, check the settings at
    SPRO - IMG - MM - Purchasing - Set up stock transport order - Assign delivery type & checking rule.
    Here, for document type UB and your supplying plant XXXX, select nothing for the delivery type, use checking rule 01, and then save.
    After this, create the STO as you were doing and see what the system prompts.
    For any further clarification let me know.
    Regards,
    NJ

  • Transports across clients on same SID

    I have a request from the SAP project consultants to set up automatic transports from client 203 to 204 on my development system.
    I was given a script that the basis consultant had written to do this on a windows platform (he does not have iSeries background) but the script seems to refer to a different SID and at this time I cannot get clarification from them on this.
    The question is: in what ways can one do automatic transports from one client to another on the same system on an iSeries?
    The consultants are complaining about using SCC1.
    Anette

    Hi Anette,
    I would say both ways, importing or SCC1, are useful ...
    The automatic transport - should it happen "immediately" or e.g. every 15 minutes?
    I would normally suggest NOT using automatic transports but always using "import all" in DEV.
    Why?
    Because the 15 minutes (7.5 on average) are too long for me => I would import manually anyway - so why run it every 15 minutes?
    => I would set up the STMS with transport groups, e.g. as follows:
    DEV => /QA/ => PRD
    /QA/ consists of:
    DEV.203
    DEV.204
    QAS.100 etc.
    You could even split /QA/ into a "/DEV-QA/" and a real "/QA/" if you prefer that.
    Then every release of a transport puts it automatically into the buffers, and then they can import them themselves, or you could automate it via STMS, e.g. every 15 minutes (you should check this against your IFS backup during the night).
    Regards
    Volker Gueldenpfennig, consolut international ag
    http://www.consolut.de - http://www.4soi.de - http://www.easymarketplace.de

  • Offline transport of transport requests to file on Client-PC and re-import

    Hi all,
    for maintenance reasons, we copied a BW system in a Virtual Machine (VM) and made some BI-IP developments there (new InfoObjects, InfoAreas, DSOs, Cubes, MultiCubes, aggregation levels, filters, planning functions, queries etc.). The VM is "offline" with no transport system in place so far (all BW objects should have been saved to $TEMP). Thus, we cannot use SAP standard nor BW transport systems due to offline environment.
    Now we set up a new BW server we want to "copy" our developments from the VM to.
    What we are looking for:
    (1) How can we export transport requests to a file on a client PC (VM) and then re-import the file on the other physical server, so we can continue using our previous developments (no source systems in use)?
    (2) Any other best-practice approach to this scenario?
    I already found an entry in another thread, which covers a similar issue but does not fully meet my requirements in my point of view: http://it.toolbox.com/wiki/index.php/Upload_and_download_SAP_transport_request .
    Would be great if anyone has a solution to that scenario. If you need further details for clarification, please let me know.
    Thanks in advance.
    Stefan.

    Other solution found

  • Stock transport order requirement date does not match in transactions RRP3 and MD04

    The STO delivery date does not match between transaction MD04 (ECC 6.0) and RRP3 (SCM 5.1). This situation occurs when the delivery date in the STO is past due. During shipment scheduling and availability check, the material staging/availability date (EKET-MBDAT) gets updated with the current date. This EKET-MBDAT date is reflected in the stock/requirements list as the requirement date, which is correct. Still, the APO product view screen displays the past-due date as the requirement date instead of the date in MBDAT.
    Please let me know in case of any more clarification on this.

    Dear Sanjay,
    The STO delivery date is the receipt date at the destination location. This minus the transportation lead time is the STO requirement date at the source.
    When you run an ATP check for this PO/STO, your committed date is updated at the source, and the delivery date is updated as the committed date plus the transportation lead time.
    Check your GR/GI times in the product master as the primary suspect for the date mismatch.
    If you are using a customized program to update EKET-MBDAT, it won't automatically trigger the change in APO.
    In such cases you need to run CCR in APO and reconcile.
    The better way is to periodically schedule IM initialization (RIMODINI). This maps all R/3/ECC data in your integration model for POs/STOs, and you will see consistency between APO and ECC.
    Regards,
    Bipin

  • Clarification on Data Guard(Physical Standyb db)

    Hi guys,
    I have been trying to set up Data Guard with a physical standby database for the past few weeks, and I think I have managed to set it up and also perform a switchover. I have been reading a lot of websites and even the Oracle docs for this.
    However, I need clarification on the setup and whether or not it is working as expected.
    My environment is Windows 32-bit (Windows 2003),
    Oracle 10.2.0.2 (Client/Server),
    2 physical machines.
    Here is what I have done.
    Machine 1
    1. Create a primary database using standard DBCA, hence the Oracle service(oradgp) and password file are also created along with the listener service.
    2. Modify the pfile to include the following:-
    oradgp.__db_cache_size=436207616
    oradgp.__java_pool_size=4194304
    oradgp.__large_pool_size=4194304
    oradgp.__shared_pool_size=159383552
    oradgp.__streams_pool_size=0
    *.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
    *.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
    *.compatible='10.2.0.3.0'
    *.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
    *.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='oradgp'
    *.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=21474836480
    *.fal_client='oradgp'
    *.fal_server='oradgs'
    *.job_queue_processes=10
    *.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgp'
    *.log_archive_dest_2='SERVICE=oradgs LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgs'
    *.log_archive_format='ARC%S_%R.%T'
    *.log_archive_max_processes=30
    *.nls_territory='IRELAND'
    *.open_cursors=300
    *.pga_aggregate_target=203423744
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=612368384
    *.standby_file_management='auto'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
    *.service_names=oradgp
    The locations on the harddisk are all available and archived redo are created (e:\archlogs)
    3. I then add the necessary (4) standby logs on primary.
    4. To replicate the db on the machine 2(standby db), I did an RMAN backup as:-
    RMAN> run
    {
    allocate channel d1 type disk format='M:\DGBackup\stby_%U.bak';
    backup database plus archivelog delete input;
    }
    5. I then copied over the stby_*.bak files created from machine1 to machine2 into the same directory (M:\DGBackup), since I maintained the directory structure exactly the same between the 2 machines.
    6. Then created a standby controlfile. (At this time the db was in open/write mode).
    7. I then copied this standby ctl file to machine2 under the same directory structure (M:\oracle\product\10.2.0\oradata\oradgp) and replicated the same ctl file into 3 different files such as: CONTROL01.CTL, CONTROL02.CTL & CONTROL03.CTL
    Machine2
    8. I created an Oracle service called the same as primary (oradgp).
    9. Created a listener also.
    9. Set the Oracle Home & SID to the same name as primary (oradgp) <<<-- I am not sure about the sid one.
    10. I then copied over the pfile from the primary to standby and created an spfile with this one.
    It looks like this:-
    oradgp.__db_cache_size=436207616
    oradgp.__java_pool_size=4194304
    oradgp.__large_pool_size=4194304
    oradgp.__shared_pool_size=159383552
    oradgp.__streams_pool_size=0
    *.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
    *.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
    *.compatible='10.2.0.3.0'
    *.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
    *.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='oradgp'
    *.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=21474836480
    *.fal_client='oradgs'
    *.fal_server='oradgp'
    *.job_queue_processes=10
    *.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgs'
    *.log_archive_dest_2='SERVICE=oradgp LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgp'
    *.log_archive_format='ARC%S_%R.%T'
    *.log_archive_max_processes=30
    *.nls_territory='IRELAND'
    *.open_cursors=300
    *.pga_aggregate_target=203423744
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=612368384
    *.standby_file_management='auto'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
    *.service_names=oradgs
    log_file_name_convert='junk','junk'
    11. Used RMAN to restore the db as:-
    RMAN> startup mount;
    RMAN> restore database;
    Then RMAN created the datafiles.
    12. I then added the same number (4) of standby redo logs to machine2.
    13. Also added a tempfile though the temp tablespace was created per the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
    14. Ensuring the listener and Oracle service were running and that the database on machine2 was in MOUNT mode, I then started the redo apply using:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    It seems to have started the redo apply as I've checked the alert log and noticed that the sequence# was all "YES" for applied.
    ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
    So copied over the REDO logs from the primary machine and placed them in the same directory structure of the standby.
    ########Q1. I understand that the standby database does not need online REDO Logs but why is it reporting in the alert log then??########
    I wanted to enable realtime apply so, I cancelled the recover by :-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    and issued:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
    This too was successful and I noticed that the recovery mode is set to MANAGED REAL TIME APPLY.
    Checked this via the primary database also and it too reported that the DEST_2 is in MANAGED REAL TIME APPLY.
    Also performed a log swith on primary and it got transported to the standby and was applied (YES).
    Also ensured that there are no gaps via some queries where no rows were returned.
    15. I now wanted to perform a switchover, hence issued:-
    Primary_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
    All the archivers stopped as expected.
    16. Now on machine2:
    Stdby_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
    17. On machine1:
    Primary_Now_Standby_SQL>SHUTDOWN IMMEDIATE;
    Primary_Now_Standby_SQL>STARTUP MOUNT;
    Primary_Now_Standby_SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
    17. On machine2:
    Stdby_Now_Primary_SQL>ALTER DATABASE OPEN;
    Checked by switching the logfile on the new primary and ensured that the standby received this logfile and was applied (YES).
    However, here are my questions for clarifications:-
    Q1. There is a question about ONLINE REDO LOGS within "#" characters.
    Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
    Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
    Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
    SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    MRP0 APPLYING_LOG 1 47 452 1024000
    but :
    SQL> select max(sequence#) from v$archived_log;
    46
    Why is that? Also I have noticed that one of the sequence#s is NOT applied but the later ones are:-
    SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    42 NO
    43 YES
    44 YES
    45 YES
    46 YES
    What could be the possible reasons why sequence# 42 didn't get applied but the others did?
    After reading several documents I am confused at this stage because I have read that you can setup standby databases using 'standby' logs but is there another method without using standby logs?
    Q5. The log switch isn't happening automatically on the primary database where I could see the whole process happening on it own, such as generation of a new logfile, that being transported to the standby and then being applied on the standby.
    Could this be due to inactivity on the primary database as I am not doing anything on it?
    Sorry if I have missed out something guys but I tried to put in as much detail as I remember...
    Thank you very much in advance.
    Regards,
    Bharath

    Parameters:
    Missing on the Primary:
    DB_UNIQUE_NAME=oradgp
    LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
    Missing on the Standby:
    DB_UNIQUE_NAME=oradgs
    LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
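    A sketch of how those missing parameters could be set for the oradgp/oradgs pair in this thread (run as SYSDBA; verify the names against your own environment):

    ```sql
    -- Primary (oradgp). DB_UNIQUE_NAME is static, so it needs SCOPE=SPFILE and a restart;
    -- LOG_ARCHIVE_CONFIG can be changed online.
    ALTER SYSTEM SET DB_UNIQUE_NAME='oradgp' SCOPE=SPFILE;
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(oradgp,oradgs)' SCOPE=BOTH;

    -- Standby (oradgs):
    ALTER SYSTEM SET DB_UNIQUE_NAME='oradgs' SCOPE=SPFILE;
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(oradgp,oradgs)' SCOPE=BOTH;
    ```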
    You said: Also added a tempfile though the temp tablespace was created per the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
    RMAN should have also added the temp file. Note that as of 11g RMAN duplicate for standby will also add the standby redo log files at the standby if they already existed on the Primary when you took the backup.
    You said: ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
    That is just the weird error that the RDBMS returns when the database tries to find the online redo log files. You see that at the start of the MRP because it tries to open them and if it gets the error it will manually create them based on their file definition in the controlfile combined with LOG_FILE_NAME_CONVERT if they are in a different place from the Primary.
    Your questions (Q1 answered above):
    You said: Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
    Up to you. Not a requirement.
    You said: Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
    You are always in MANAGED mode when you use the RECOVER MANAGED STANDBY DATABASE command. If you use manual recovery "RECOVER STANDBY DATABASE" (NOT RECOMMENDED EVER ON A STANDBY DATABASE) then you are effectively in 'non-managed' mode although we do not call it that.
    You said: Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
    Log 46 (in your example) is the last FULL and ARCHIVED log hence that is the latest one to show up in V$ARCHIVED_LOG as that is a list of fully archived log files. Sequence 47 is the one that is current in the Primary online redo log and also current in the standby's standby redo log and as you are using real time apply that is the one it is applying.
    You said: What could be the possible reasons why sequence# 42 didn't get applied but the others did?
    42 was probably a gap. Select the FAL column as well and it will probably say 'YES' for FAL. We do not update the Primary's controlfile every time we resolve a gap. Try the same command on the standby and you will see that 42 was indeed applied. Redo can never be applied out of order, so the max(sequence#) from v$archived_log where applied = 'YES' tells you that every sequence before that number has to have been applied.
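    To verify this on the standby, a query along these lines should work (column names are from the V$ARCHIVED_LOG view; the 42-46 range matches the sequences in this thread and would need adjusting for another environment):

    ```sql
    -- Run on the standby: per-sequence apply status, including whether FAL fetched the log
    SELECT SEQUENCE#, APPLIED, FAL
      FROM V$ARCHIVED_LOG
     WHERE SEQUENCE# BETWEEN 42 AND 46
     ORDER BY SEQUENCE#;

    -- Everything at or below this sequence has been applied
    SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE APPLIED = 'YES';
    ```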
    You said: After reading several documents I am confused at this stage because I have read that you can setup standby databases using 'standby' logs but is there another method without using standby logs?
    Yes. If you do not have standby redo log files on the standby, then we write directly to an archive log, which means potential large data loss at failover and no real time apply. That was the old 9i method for ARCH. Don't do that; always have standby redo logs (SRLs).
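    For illustration, standby redo logs are added like this (the group numbers, file path and 50M size here are placeholders; the size must match your online redo logs, and the usual rule of thumb is one more SRL group per thread than you have online log groups):

    ```sql
    -- Add standby redo log groups on the standby (and ideally also on the primary,
    -- so they are ready after a role switch)
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
      ('M:\oracle\product\10.2.0\oradata\oradgp\srl05.log') SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 6
      ('M:\oracle\product\10.2.0\oradata\oradgp\srl06.log') SIZE 50M;
    ```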
    You said: Q5. The log switch isn't happening automatically on the primary database where I could see the whole process happening on it own, such as generation of a new logfile, that being transported to the standby and then being applied on the standby.
    Could this be due to inactivity on the primary database as I am not doing anything on it?
    Log switches on the Primary happen when the current log gets full, when a log switch has not happened for the number of seconds you specified in the ARCHIVE_LAG_TARGET parameter, or when you say ALTER SYSTEM SWITCH LOGFILE (or use the other methods for switching log files). The heartbeat redo will eventually fill up an online log file, but it is about 13 bytes, so you do the math on how long that would take :^)
    You are shipping redo with ASYNC, so we send the redo as it is committed; there is no wait for the log switch. And we are in real time apply, so there is no wait for the log switch to apply that redo. In theory you could create an online log file large enough to hold an entire day's worth of redo and never switch for the whole day, and the standby would still be caught up with the primary.
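    As a sketch, the two mechanisms mentioned above look like this on the primary (the 1800-second lag target is just an example value):

    ```sql
    -- Force a log switch by hand
    ALTER SYSTEM SWITCH LOGFILE;

    -- Or cap the time between switches so they happen even on an idle primary
    ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 1800 SCOPE=BOTH;
    ```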

  • Transport layer and transport package

    I have a fundamental query on transport package and layer.
    Are objects classified into packages depending on the project, and are they local if they are put into $TMP or a test package? Can only objects belonging to a transport package be transported, and others not? How is a transport package defined? Must all objects be part of a transport package, or else they cannot be transported?
    About the transport layer: I understand this is used to define which path objects take during transport, particularly when there are multiple transport routes. But what are its advantages? How can we define it (transaction code/menu path)?
    Is it mandatory to use transport packages and layers?

    Hi Mike,
    Just to clarify a few basic concepts:
    A transport package (formerly called a "development class", a term you may still encounter) groups related development objects. Whenever you create a new development object (table, program, ...) you have to assign it to a package; the object is then placed into a transport request. Alternatively you can indicate it is a local object (its package is then "$TMP") but such an object is non-transportable.
    A transport layer identifies the route that transportable development objects must follow. Typically in your system you will have two layers: a layer usually called Zxxx (with xxx = SID of your development system) is used for your own developments. Another layer called "SAP" is used for transporting SAP objects, for example changes to standard SAP code implemented via transaction SNOTE and recorded in repairs. Without the "SAP" layer in place, repairs would be non-transportable.
    Each transport package belongs to a layer. You assign the layer when you create a new package in transaction SE80. You can change the layer later on, but this is rarely needed and can cause quite a few side-effects. SAP objects always have layer "SAP". The layers themselves are managed in STMS, under "Transport Routes".
    Hope this clarifies things,
    Rgds, Mark

  • How to transport standard text in SO10 to different system.

    Hi,
    I need to change a standard text in SO10. It is a custom text; we have a custom text ID (zval) associated with it. How do I transport the changes I have made from Dev to Quality to Production?
    I wanted one clarification: when we create the transport manually, will only the changes I made to the text be transported, or will it also include all the other texts initially present (like table entries)? For the program RSTXTRAN, can you tell me what inputs need to be given?
    Please let me know regarding this. Thanks in advance

    Steps:
    Create a workbench CR in transaction SE09.
    Modify the task to Development Correction / Repair.
    Go to the program RSTXTRAN.
    In the name of the correction, give the task number (not the CR number).
    Give the text key, i.e. your standard text name.
    Press F8.
    Press Select All.
    Then press Enter.
    Then press the button "Trsfr texts to corr".
    Now the standard texts will be added to the CR.
    Then release the CR and move it to QA.

  • Problem in transporting the component Controller

    Hi Friends,
    We had a problem when transporting a Web Dynpro component controller. Up to Quality it went OK. But when the transport reached production, we saw that the WD_ASSIST attribute was missing from the Attributes tab of the component controller, due to which it was giving a dump. We created a new transport and sent it, but that didn't work either.
    Can anybody advise a solution? In the dump it says component ZBASE_RATE contains syntax errors.
    THANKS
    Rohit

    Hi,
    It might be a difference between the environments of your quality and production systems.
    Check whether both systems have the same level of Web Dynpro patches.
    You can check with your Basis people for this clarification.
    Regards,
    Veerendra Nath
