Fully Automated SMI Process for Suppliers

Hi There,
I am currently working on the feasibility of implementing the Fully Automated SMI process for suppliers. I would appreciate any links to information regarding implementing this system.
I require the following information:
Technical/functional requirements on the customer and supplier side.
Where this business process currently fits for CPGs (consumer packaged goods companies).
I would appreciate any information on the above.
Dominic


Similar Messages

  • Process for supplier registration

    Hi there,
    We want to use the supplier registration functionality to capture the vendor registration process in the system. We are planning to use SRM for this. Our planned landscape is SRM 7 EHP 1 with ECC 6 EHP 4.
    Our desired business process is like this:
    1. The vendor clicks a link on our company website,
    2. The vendor fills in the general data and submits supporting documents,
    3. The purchaser evaluates the general data and the supporting documents; if everything is okay, it is approved,
    4. After that, the vendor data is replicated to ECC.
    I wonder whether this can be covered by the standard supplier registration functionality in SRM?
    Best regards,
    Josh

    Hi Nikhil,
    I have read some documentation in this link:
    http://help.sap.com/saphelp_srm701/helpdata/EN/cd/d7114b1df64601b4173fbce6513a31/frameset.htm
    Are you referring to the "Supplier Qualification with Automatic Transfer to SAP ERP" described in this link? If yes, I assume the prerequisite is using Supplier Registration within SUS, right? If I don't use SUS, can I still have this functionality?
    Further reading from this link, I see this statement:
    "The accounts payable clerk checks the IDoc list (we05), completes the missing data in the IDoc and restarts the inbound IDoc, if necessary"
    This is rather confusing to me, because SRM supplier data does not cover the full data set (there is no financial data). If we use this functionality, the message will always get stuck in the IDoc, and the AP clerk will always need to check it; is it meant to work like this? And one more thing: is it good practice to let a user complete data in an IDoc? As far as I know, an IDoc is a very technical object; I cannot imagine asking a non-technical user to edit the IDoc and restart it.
    Best regards,
    Josh

  • 2 Automated Row Processes for one page?

    Hey people, me again ...
    I have created two regions on my page. Each region has database fields that reference: Region A -> a view, Region B -> a table with the same (customer) ID.
    For each region I have an ARF (Automatic Row Fetch), which was generated by the wizard.
    When I had just one region, the row fetch worked fine. Now that I have the second region I get an error: "Row cannot be fetched" (debug shows that this happens in my second ARF).
    I tried changing the process point from After Header to every other option I can choose ... same error.
    If I change the sequence
    ARF_A from 10 to 20 and
    ARF_B from 21 to 10, then everything in my SECOND region is shown.
    The same happens when I set "Set Memory Cache on display" for the second ARF.
    Isn't it possible to have two ARFs on one page?
    Or where is my mistake?
    Thank you for your help.
    Jana

    Hey,
    first of all, thank you for your response.
    My first region is based on a view over three tables, and it works fine. I don't want to add more tables to this view because it is already very big.
    So I want another region with just a single table.
    Is there no way to have two ARF processes on the same page?

  • Page having PL/SQL process and Automatic Row Process for 2 different tables

    Hi,
    I have a page containing 2 regions A & B.
    Region-A content would be updated to table T1(PK : Ticket#).
    Region-B content would be inserted into table T2(PK: Attachment# ; FK: Ticket#).
    Region-B is used for uploading a file content into T2.
    Since I cannot use two DML processes on the same page for two different tables with a common column, I have a PL/SQL process to update the record in T1 and an Automatic Row Process (DML) to insert into T2.
    Now the issue: in Region-B, when I select a file using the 'Browse' button and click Upload to fire the Automatic Row Process, the success message is displayed but the file is not uploaded into the table. However, when I moved the entire Region-B and its Automatic Row Process to a different page, clicking Upload worked fine and inserted the record into the table along with the file content.
    An item P10_TICKET_NUMBER with source type as Database column with source value as TICKET_NUMBER is used in Region-A.
    I have gone through the forums and found some of the threads below
    Re: 2 Automated Row Processes for one page?
    Re: Error when trying to create 2 Forms on same page on 2 tables with ID as
    where people face similar issues. I have followed the solution provided in those threads (one PL/SQL process and one Automatic Row Process), but the issue still persists.
    Can anyone throw some light on this, please?
    Thanks,
    Raj.

    Hi Teku,
    Just have a look at this thread, where you can find a solution to your problem.
    INSERTING Records into Second table based on First table Primary Key
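    The pattern that thread describes (insert the parent row first, then reuse its primary key for the child row) can be sketched outside APEX with plain SQL. This is a minimal sqlite3 illustration; the table and column names are invented for the example, not the poster's actual schema:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE t1 (ticket INTEGER PRIMARY KEY AUTOINCREMENT, subject TEXT)")
    cur.execute("CREATE TABLE t2 (attachment INTEGER PRIMARY KEY AUTOINCREMENT,"
                " ticket INTEGER REFERENCES t1(ticket), filename TEXT)")

    # Step 1: insert the parent row (the PL/SQL process for Region-A in the thread)
    cur.execute("INSERT INTO t1 (subject) VALUES (?)", ("Printer broken",))
    ticket_id = cur.lastrowid          # the newly generated primary key

    # Step 2: insert the child row using the parent's key (Region-B's DML process)
    cur.execute("INSERT INTO t2 (ticket, filename) VALUES (?, ?)",
                (ticket_id, "photo.png"))
    conn.commit()

    print(cur.execute("SELECT ticket, filename FROM t2").fetchall())  # [(1, 'photo.png')]
    ```

    In APEX the same ordering is achieved by giving the parent process a lower sequence number than the child process, so the parent key exists before the child insert fires.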
    hope this helps.
    Bye,
    Srikavi

  • Duplicate IR through parallel processing for automated ERS

    Hi,
    We hit a duplicate IR (invoice receipt) issue in production when running the parallel-processing job for automated ERS. It does not happen every time; it occurs only once in a while, for example twice in June. On those days the job took more time than usual. We are unable to replicate the scenario: when I test, the job creates IRs successfully. What could be the reasons for this issue?

    Wow - long post to say "can I use hardware boxes as inserts?" and the answer is yes, and you have been able to for a long time.
    I don't know why you're doing some odd "duplicated track" thing... weird...
    So, for inserts of regular channels, just stick Logic's I/O plug on the channel. Tell it which audio output you want it to send to, and which audio input to receive from. Patch up the appropriate ins and outs on your interface to your hardware box/patchbay/mixer/whatever and bob's your uncle.
    You can also do this on aux channels, so if you want to send a bunch of tracks to a hardware reverb, you'd put the I/O plug on the aux channel you're using in the same way as described above. Now simply use the sends on each channel you want to send to that aux (and therefore hardware reverb).
    Note you'll need to have software monitoring turned on.
    Another way is to just set the output of a channel or aux to the extra audio outputs on your interface, and bring the outputs of your processing hardware back into spare inputs and feed them into the Logic mix using input objects.
    Lots of ways to do it in Logic.
    And no duplicate recordings needed...
    I still don't understand why the Apple developers didn't include such a plug-in, because it could allow amazing routing possibilities. In this case, you could send the audio track to the main output (1-2 or whatever) but also to alternate hardware outputs, so you could use a hardware reverb unit plus a hardware delay unit, etc., to which the audio track is sent, and then blend the results back into the Logic mix more easily.
    You can just do this already with mixer routing alone, no plugins necessary.

  • What is the difference between the DR and SMI processes?

    Actually, when I say that the DR process is for comparison purposes and SMI is for actual replenishment, people are not convinced. I am requesting you experts to come up with a better answer, with an example.
    thanks in advance,
    Nandan

    Hi Nandan,
    DR (Dynamic Replenishment) is basically a visibility process where you can compare customer and supplier planning data, shown as absolute numbers, percentages, and deviations in colour coding.
    Here, Planned Orders & Purchase Requisitions = customer plan,
    and Customer Order = created PO; against these, the supplier can enter responses = supplier plan/supplier order.
    Ex: If the customer plan for LUX = 100 and the supplier plan = 30, the difference of -70 units appears as the deviation in the key figure 'Difference Planned Receipts/Planned Requirement', and the percentage, -70%, appears in RED in the key figure 'Difference Planned Receipts/Planned Requirement (%)'.
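    The deviation key figures in that example work out as a simple calculation (plain Python for illustration, not SNC code; the function name is invented):

    ```python
    def deviation(customer_plan, supplier_plan):
        # Absolute and percentage deviation of the supplier plan from the customer plan
        diff = supplier_plan - customer_plan
        pct = 100.0 * diff / customer_plan if customer_plan else 0.0
        return diff, pct

    # LUX example from the text: customer plan 100 vs supplier plan 30
    print(deviation(100, 30))  # (-70, -70.0)
    ```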
    SMI (Supplier Managed Inventory) is a process where the customer hands over net replenishment planning responsibility to the supplier.
    SMI uses dependent requirements and the stock balance to calculate a replenishment plan based on min/max logic, and subsequently converts the plan into a PO (optional) and an ASN at the time of shipment.
    The POs and ASNs are sent back to the customer's ERP system as an indication of commitment from the supplier.
    Ex: If for LUX the min/max stock level is 70/100, then once the stock level falls below 70, say to 50, a demand of 50 units will show in the key figure 'Demand'. The supplier can enter the key figure 'Planned Receipt' manually or let the system propose it automatically; the rest happens as described above.
    The supplier can then create an ASN for the replenishment plan.
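    The min/max logic described above can be sketched in a few lines of Python (a minimal illustration with an invented function name, not SAP SNC code):

    ```python
    def proposed_receipt(stock, min_level, max_level):
        # Min/max replenishment: when stock falls below the minimum,
        # propose a receipt that refills it to the maximum.
        if stock < min_level:
            return max_level - stock
        return 0

    # LUX example from the text: min/max = 70/100, stock has fallen to 50
    print(proposed_receipt(50, 70, 100))  # proposes 50
    print(proposed_receipt(80, 70, 100))  # stock above minimum: proposes 0
    ```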
    I hope this helps you explain it.
    Regards,
    Vasu

  • Fully automated Outlook 2013 with Exchange 2010 client setup

    Hi,
    I'm trying to create a situation where a user fires up Outlook 2013 for the first time and the Exchange account is configured for them without any intervention whatsoever.
    I currently have it so that when a user opens Outlook, they see the "Welcome to Outlook 2013" screen and click Next. They are asked if they want to set up Outlook to connect to an email account and click Next. On the next screen, their primary SMTP address is automatically filled in, and then Autodiscover takes over and configures the rest for them.
    I have configured AutoDiscover correctly (including SCP), and configured a GPO with "Automatically configure profile based on Active Directory Primary SMTP address".
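    For reference, that GPO setting corresponds to the ZeroConfigExchange registry value; a .reg fragment for the Outlook 2013 (15.0) policy hive might look like the following (please verify the path against your Office ADMX templates before relying on it):

    ```
    Windows Registry Editor Version 5.00

    [HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\15.0\Outlook\AutoDiscover]
    "ZeroConfigExchange"=dword:00000001
    ```

    Note that this value only automates the account setup itself; the first-run "Welcome" screen may be governed separately by the Office first-run policies.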
    However, I still get that "Welcome to Outlook 2013" screen.
    How do I get rid of that to make the whole process fully automated?
    Many thanks,
    -Mark

    Hi Mark,
    I have had this issue before.
    When we first add an account to Outlook, it is better to create a new profile manually.
    I suggest removing the account from Outlook via "Mail" and re-creating the profile.
    The steps are as follows:
    1. Start --> Control Panel --> Mail --> E-mail Accounts... --> select your account and Remove it --> Close
    2. Show Profiles... --> Add --> then configure your account again.
    Hope this helps.
    Thanks
    Mavis
    Mavis Huang
    TechNet Community Support

  • Fully Automated RMAN TSPITR fails

    Environment: Oracle 11.2 on Windows 7 (32-bit).
    I performed a Fully Automated RMAN TSPITR yesterday. When Oracle performed the export of the metadata of the recovery set, the following problem appeared:
    Performing export of metadata...
       EXPDP> Starting "SYS"."TSPITR_EXP_dbtF":
    Removing automatic instance
    shutting down automatic instance
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 08/23/2013 01:01:05
    RMAN-06136: ORACLE error from auxiliary database: ORA-01097: cannot shutdown while in a transaction - commit or rollback first
    RMAN-06962: Error received during export of metadata
    RMAN-06960:    EXPDP> ORA-39123: Data Pump transportable tablespace job aborted
    ORA-39187: The transportable set is not self-contained, violation list is
    ORA-39906: Constraint FK_PK between table SH.PK in tablespace TEST01 and table SH.FK in tablespace TEST02.
    The error message tells me that the transportable set is not self-contained. But I remember that I dropped the constraint FK_PK, and when I run the following code, the result confirms that the constraint has indeed been dropped.
    So why does the problem occur?
    Thanks!
    SQL> select count(*) from dba_constraints where constraint_name ='FK_PK';
      COUNT(*)
             0
    SQL>    Begin
      2         dbms_tts.transport_set_check('TEST02,TEST01',true,true);
      3     End;
      4  /
    PL/SQL procedure successfully completed.
    SQL> select * from transport_set_violations;
    no rows selected

    I only dropped the constraint FK_PK. No indexes from tablespace TEST02 appear in dba_recyclebin.
    SQL> select object_name,original_name,type,ts_name from dba_recyclebin;
    OBJECT_NAME                    ORIGINAL_N TYPE       TS_NAME
    BIN$c96+yjANSE6ThcTovMwoPw==$0 TAB        TABLE      USERS
    BIN$MkU0+TSYSJK1qnCAKglJFg==$0 TABLE1     TABLE      USERS
    BIN$YCQ2IgKaQDWw6Cxrl6QRaA==$0 TAB        TABLE      TEST01
    SQL> show user;
    USER is "SH"
    SQL> select name from v$database;
    NAME
    WAREHOUS
    The following shows starting RMAN and reproducing the problem:
    SQL> $rman target sys/123456@warehouse catalog rman/123456@catalog;
    Recovery Manager: Release 11.2.0.1.0 - Production on Fri Aug 23 10:29:47 2013
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    connected to target database: WAREHOUS (DBID=4011143137)
    connected to recovery catalog database
    RMAN> recover tablespace test02 until scn 1346235 auxiliary destination 'G:\Oracle\TS';
    Starting recover at 23-AUG-13
    starting full resync of recovery catalog
    full resync complete
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=22 device type=DISK
    Creating automatic instance, with SID='atmp'
    initialization parameters used for automatic instance:
    db_name=WAREHOUS
    db_unique_name=atmp_tspitr_WAREHOUS
    compatible=11.2.0.0.0
    db_block_size=8192
    db_files=200
    sga_target=280M
    processes=50
    db_create_file_dest=G:\Oracle\TS
    log_archive_dest_1='location=G:\Oracle\TS'
    #No auxiliary parameter file used
    starting up automatic instance WAREHOUS
    Oracle instance started
    Total System Global Area     292933632 bytes
    Fixed Size                     1374164 bytes
    Variable Size                100665388 bytes
    Database Buffers             184549376 bytes
    Redo Buffers                   6344704 bytes
    Automatic instance created
    Running TRANSPORT_SET_CHECK on recovery set tablespaces
    TRANSPORT_SET_CHECK completed successfully
    contents of Memory Script:
    # set requested point in time
    set until  scn 1346235;
    # restore the controlfile
    restore clone controlfile;
    # mount the controlfile
    sql clone 'alter database mount clone database';
    # archive current online log
    sql 'alter system archive log current';
    # avoid unnecessary autobackups for structural changes during TSPITR
    sql 'begin dbms_backup_restore.AutoBackupFlag(FALSE); end;';
    # resync catalog
    resync catalog;
    executing Memory Script
    executing command: SET until clause
    Starting restore at 23-AUG-13
    allocated channel: ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: SID=59 device type=DISK
    channel ORA_AUX_DISK_1: starting datafile backup set restore
    channel ORA_AUX_DISK_1: restoring control file
    channel ORA_AUX_DISK_1: reading from backup piece G:\ORACLE\BACKUP\WAREHOUSE\LEVEL1_CUMULATIVE_0NOHKRT6_1_1
    channel ORA_AUX_DISK_1: piece handle=G:\ORACLE\BACKUP\WAREHOUSE\LEVEL1_CUMULATIVE_0NOHKRT6_1_1 tag=LEVEL1_CUMULATIVE
    channel ORA_AUX_DISK_1: restored backup piece 1
    channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
    output file name=G:\ORACLE\TS\WAREHOUSE\CONTROLFILE\O1_MF_91FLFZYS_.CTL
    Finished restore at 23-AUG-13
    sql statement: alter database mount clone database
    sql statement: alter system archive log current
    sql statement: begin dbms_backup_restore.AutoBackupFlag(FALSE); end;
    starting full resync of recovery catalog
    full resync complete
    contents of Memory Script:
    # set requested point in time
    set until  scn 1346235;
    # set destinations for recovery set and auxiliary set datafiles
    set newname for clone datafile  1 to new;
    set newname for clone datafile  3 to new;
    set newname for clone datafile  2 to new;
    set newname for clone tempfile  1 to new;
    set newname for datafile  7 to
    "D:\APP\ASUS\ORADATA\WAREHOUSE\TEST02.DBF";
    # switch all tempfiles
    switch clone tempfile all;
    # restore the tablespaces in the recovery set and the auxiliary set
    restore clone datafile  1, 3, 2, 7;
    switch clone datafile all;
    executing Memory Script
    executing command: SET until clause
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    renamed tempfile 1 to G:\ORACLE\TS\WAREHOUSE\DATAFILE\O1_MF_TEMP_%U_.TMP in control file
    Starting restore at 23-AUG-13
    using channel ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: starting datafile backup set restore
    channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_AUX_DISK_1: restoring datafile 00001 to G:\ORACLE\TS\WAREHOUSE\DATAFILE\O1_MF_SYSTEM_%U_.DBF
    channel ORA_AUX_DISK_1: restoring datafile 00003 to G:\ORACLE\TS\WAREHOUSE\DATAFILE\O1_MF_UNDOTBS1_%U_.DBF
    channel ORA_AUX_DISK_1: restoring datafile 00002 to G:\ORACLE\TS\WAREHOUSE\DATAFILE\O1_MF_SYSAUX_%U_.DBF
    channel ORA_AUX_DISK_1: restoring datafile 00007 to D:\APP\ASUS\ORADATA\WAREHOUSE\TEST02.DBF
    channel ORA_AUX_DISK_1: reading from backup piece G:\ORACLE\BACKUP\WAREHOUSE\LEVEL0_0IOHKQV0_1_1
    channel ORA_AUX_DISK_1: piece handle=G:\ORACLE\BACKUP\WAREHOUSE\LEVEL0_0IOHKQV0_1_1 tag=LEVEL0
    channel ORA_AUX_DISK_1: restored backup piece 1
    channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:55
    Finished restore at 23-AUG-13
    datafile 1 switched to datafile copy
    input datafile copy RECID=5 STAMP=824207618 file name=G:\ORACLE\TS\WAREHOUSE\DATAFILE\O1_MF_SYSTEM_91FLGBWL_.DBF
    datafile 3 switched to datafile copy
    input datafile copy RECID=6 STAMP=824207618 file name=G:\ORACLE\TS\WAREHOUSE\DATAFILE\O1_MF_UNDOTBS1_91FLGBYW_.DBF
    datafile 2 switched to datafile copy
    input datafile copy RECID=7 STAMP=824207618 file name=G:\ORACLE\TS\WAREHOUSE\DATAFILE\O1_MF_SYSAUX_91FLGBXY_.DBF
    contents of Memory Script:
    # set requested point in time
    set until  scn 1346235;
    # online the datafiles restored or switched
    sql clone "alter database datafile  1 online";
    sql clone "alter database datafile  3 online";
    sql clone "alter database datafile  2 online";
    sql clone "alter database datafile  7 online";
    # recover and open resetlogs
    recover clone database tablespace  "TEST02", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
    alter clone database open resetlogs;
    executing Memory Script
    executing command: SET until clause
    sql statement: alter database datafile  1 online
    sql statement: alter database datafile  3 online
    sql statement: alter database datafile  2 online
    sql statement: alter database datafile  7 online
    Starting recover at 23-AUG-13
    using channel ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: starting incremental datafile backup set restore
    channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
    destination for restore of datafile 00007: D:\APP\ASUS\ORADATA\WAREHOUSE\TEST02.DBF
    destination for restore of datafile 00001: G:\ORACLE\TS\WAREHOUSE\DATAFILE\O1_MF_SYSTEM_91FLGBWL_.DBF
    destination for restore of datafile 00003: G:\ORACLE\TS\WAREHOUSE\DATAFILE\O1_MF_UNDOTBS1_91FLGBYW_.DBF
    destination for restore of datafile 00002: G:\ORACLE\TS\WAREHOUSE\DATAFILE\O1_MF_SYSAUX_91FLGBXY_.DBF
    channel ORA_AUX_DISK_1: reading from backup piece G:\ORACLE\BACKUP\WAREHOUSE\LEVEL1_CUMULATIVE_0MOHKRRO_1_1
    channel ORA_AUX_DISK_1: piece handle=G:\ORACLE\BACKUP\WAREHOUSE\LEVEL1_CUMULATIVE_0MOHKRRO_1_1 tag=LEVEL1_CUMULATIVE
    channel ORA_AUX_DISK_1: restored backup piece 1
    channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:07
    starting media recovery
    archived log for thread 1 with sequence 14 is already on disk as file D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_18\O1_MF_1_14_911NKBNY_.ARC
    archived log for thread 1 with sequence 15 is already on disk as file D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_19\O1_MF_1_15_912Y0W3B_.ARC
    archived log for thread 1 with sequence 16 is already on disk as file D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_19\O1_MF_1_16_91475BY4_.ARC
    archived log for thread 1 with sequence 17 is already on disk as file D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_19\O1_MF_1_17_9149CJMS_.ARC
    archived log for thread 1 with sequence 18 is already on disk as file D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_20\O1_MF_1_18_915MLYR1_.ARC
    archived log for thread 1 with sequence 19 is already on disk as file D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_20\O1_MF_1_19_915OR34Z_.ARC
    archived log for thread 1 with sequence 20 is already on disk as file D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_20\O1_MF_1_20_916XQMRD_.ARC
    archived log for thread 1 with sequence 21 is already on disk as file D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_20\O1_MF_1_21_91715KYK_.ARC
    archived log for thread 1 with sequence 22 is already on disk as file D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_21\O1_MF_1_22_91884645_.ARC
    archived log for thread 1 with sequence 23 is already on disk as file D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_21\O1_MF_1_23_919JV790_.ARC
    archived log for thread 1 with sequence 24 is already on disk as file D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_22\O1_MF_1_24_91D6GRC1_.ARC
    archived log for thread 1 with sequence 25 is already on disk as file D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_22\O1_MF_1_25_91D7OOOR_.ARC
    archived log file name=D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_18\O1_MF_1_14_911NKBNY_.ARC thread=1 sequence=14
    archived log file name=D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_19\O1_MF_1_15_912Y0W3B_.ARC thread=1 sequence=15
    archived log file name=D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_19\O1_MF_1_16_91475BY4_.ARC thread=1 sequence=16
    archived log file name=D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_19\O1_MF_1_17_9149CJMS_.ARC thread=1 sequence=17
    archived log file name=D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_20\O1_MF_1_18_915MLYR1_.ARC thread=1 sequence=18
    archived log file name=D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_20\O1_MF_1_19_915OR34Z_.ARC thread=1 sequence=19
    archived log file name=D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_20\O1_MF_1_20_916XQMRD_.ARC thread=1 sequence=20
    archived log file name=D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_20\O1_MF_1_21_91715KYK_.ARC thread=1 sequence=21
    archived log file name=D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_21\O1_MF_1_22_91884645_.ARC thread=1 sequence=22
    archived log file name=D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_21\O1_MF_1_23_919JV790_.ARC thread=1 sequence=23
    archived log file name=D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_22\O1_MF_1_24_91D6GRC1_.ARC thread=1 sequence=24
    archived log file name=D:\APP\ASUS\FLASH_RECOVERY_AREA\WAREHOUSE\ARCHIVELOG\2013_08_22\O1_MF_1_25_91D7OOOR_.ARC thread=1 sequence=25
    media recovery complete, elapsed time: 00:00:44
    Finished recover at 23-AUG-13
    database opened
    contents of Memory Script:
    # make read only the tablespace that will be exported
    sql clone 'alter tablespace  TEST02 read only';
    # create directory for datapump import
    sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
    G:\Oracle\TS''";
    # create directory for datapump export
    sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
    G:\Oracle\TS''";
    executing Memory Script
    sql statement: alter tablespace  TEST02 read only
    sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''G:\Oracle\TS''
    sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''G:\Oracle\TS''
    Performing export of metadata...
       EXPDP> Starting "SYS"."TSPITR_EXP_atmp":
    Removing automatic instance
    shutting down automatic instance
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 08/23/2013 10:36:08
    RMAN-06136: ORACLE error from auxiliary database: ORA-01097: cannot shutdown while in a transaction - commit or rollback first
    RMAN-06962: Error received during export of metadata
    RMAN-06960:    EXPDP> ORA-39123: Data Pump transportable tablespace job aborted
    ORA-39187: The transportable set is not self-contained, violation list is
    ORA-39906: Constraint FK_PK between table SH.PK in tablespace TEST01 and table SH.FK in tablespace TEST02.
    RMAN>

  • SRM 3.0 integration with XI 3.0 for Supplier enablement scenario

    Hi,
    I am trying to integrate SRM 3.0 with XI 3.0 for the Supplier Enablement scenario. Can anyone please help me with documentation for this? I have downloaded documentation for SRM from service.sap.com/ibc, but it contains steps only for XI 2.0.
    Does anyone have the steps for XI 3.0?
    Thanks and Regards,
    Manish

    Vadim
          We are looking to integrate SRM 4.0 with R/3 (ECC 5.0) using XI 3.0. I went to the Service Marketplace to look for XI process content for SRM 4.0, but could find XI content for SRM 2.0 only.
    1) Do you know where SAP's pre-delivered XI 3.0 process content can be downloaded from?
    2) Would you have the cookbooks, possibly for all scenarios, for SRM 4.0?
    If you could forward or give pointers to the above, I would appreciate your help. My Yahoo ID is [email protected]
    Thanks for your help in advance.

  • Dunning letters and automatic remittance advice for suppliers

    Hi Friends,
    Can anyone please provide a document for setting up dunning letters and automatic remittance advice for the suppliers process in R12?
    Thanks in Advance.

    Hi Chery,
    Thanks for your update. Yes, I have reviewed the note.
    I am stuck with how to create the xdodelivery.cfg file per Oracle XML Publisher Administration.
    Also, can you please let me know the process for sending dunning letters automatically to customers, as there is no standard process? I know we have to customize the process by creating a bursting program and adding it to the request set, but I am not aware of the details. I have set up the process up to running the 'IEX: Send Dunnings for Delinquent Customers' program and displaying the reminders for the customers configured in XML Publisher.
    Can you please let me know how to send dunning letters automatically to the customers.
    Regards
    Vasu

  • What are the minimum SAP processes for the public sector?

    Dear Experts,
    I understand that SAP Public Sector covers the following key business processes:
    Shared Services
    Human Capital Management
    Government Purchases
    Formulation of Public Sector Accounting and Budget
    Social Services
    Social Security
    Government Programs
    Tax and Revenue Management
    Public Safety
    Organizational Management
    Organizational Support
    For an implementation in SAP Public Sector, I found several solutions:
    - Solutions - SAP Business Suite
    - Software SAP Customer Relationship Management
    - Software ERP SAP
    - SAP Product Lifecycle Management
    - SAP Supply Chain Management
    - SAP Supplier Relationship Management
    If we select SAP ERP software from the processes above: is it correct to implement at least the Formulation of Public Sector Accounting and Budget process, or should all of the public sector processes be implemented? I am very confused ...
    Please, could you explain the SAP Procurement for Public Sector solutions, or do you have a link with advice on this?
    All recommendations are welcome.
    Thank you very much for your support and recommendations

    Dear,
    I found this information and it is good:
    Procurement for Public Sector - Public Sector - SCN Wiki
    PPS - Procurement for Public Sector - enhancements - Supplier Relationship Management - SCN Wiki
    The SAP Procurement for Public Sector (SAP PPS) solution
       Its general characteristics are:
       The preparation phase
       The tendering phase
       The contract execution phase
    The value of the SAP Procurement for Public Sector (SAP PPS) solution - SRM 7.0-based PPS architecture:
    1. A complete solution for the entire life cycle of a public procurement, including tender document management.
    2. Effective contract management to guarantee more transparency, control, and auditability.
    3. Integration with budgetary accounting (Funds Management).
    4. The ability to manage more complex procurements electronically.
    5. A simpler system architecture, which reduces integration cost. SAP PPS - digital signature.
    6. Fewer functional modifications in implementation projects.
    Thanks

  • Regarding Goods Reversal & Goods Issue Process for PO

    Hi Experts,
               Could anyone tell me what the Goods Reversal and Goods Issue process for a PO is...
    How do I create the FM, and could you please share the detailed procedure?
    Thanks & Regards
    HB

    Hi Hans,
    Source: SAP Help
    Purpose
    Inventory Management uses this process in such a way that the goods issue posting is divided into two parts that run in separate systems. Posting the GI document in the supplying plant results in a message to the receiving plant. The receiving plant then performs a complementary posting. The physical goods receipt takes place as usual.
    Prerequisites
    When using batch processing, the following prerequisites must be fulfilled:
    Both the original and target systems have the same batch definition level.
    The batch definition level is either the material or the client.
    An ALE scenario exists for materials and classes (characteristics).
    Unique batch numbers exist cross-system.
    Batches can only be changed in their original system when they are not decoupled. From an organizational point of view, this must also lead to the batch status being changeable in a local SAP R/3 system. For example, this is impossible when a transfer posting to a new batch results in further actions, for example relabeling containers, pallets, and so on.
    Characteristics
    As for the purchase order in a one-system situation, the system should automatically post the material into the stock in transit at the receiving profit center and the corresponding Profit Center Accounting using intra-CC transfer prices at goods issue for the purchase order and the unchecked delivery. This requirement is valid for one-system situations as well as for two-system situations where there is an ALE interface. No internal billing document should be created.
    In a two-system case, the receiving profit center should be derived at goods issue from the unchecked delivery. Profit Center Accounting then takes place with
    Stock change transfer price to stock
    Internal expense to internal sales
    Internal clearing account to stock change transfer price.
    Account determination in a purchase order for an intra-company-code transaction must be different from account determination in external transactions. Automatic GR/IR account clearing is required in both one-system and two-system situations.
    The stock in transit must be visible in the receiving profit center.
    The system must send a shipping notification at goods issue in one-system and two-system situations.
    You need to create an invoice document for the internal and external trading statistics for cross-boundary deliveries as well as for customs purposes.
    GR/GI slips are created.
    Process Flow
    Goods Issue Posting for Stock Transfers
    The delivery triggers the goods issue in the issuing system.
    The call contains the stock transport order data known in the delivery, including the PO item and the logical system of the recipient.
    The transaction (quantity and value updates) is selected using the movement type:
    Movement Type     Function
    641               Goods issue with UB logic (creation of stock in transit at the recipient, immediate value posting).
    647               As 641, however the goods receipt line (movement type 101) is added automatically, so that the goods receipt is posted at the same time as the goods issue (one-step procedure).
    You determine the movement type according to the schedule line category in Sales and Distribution. The goods issue for a cross-system stock transfer must be different from the integrated transaction. This is achieved by adding a new movement type.
    You post quantities and values at goods issue in the same way as a goods issue for a sales order. That is to say, the quantity is posted in the supplying plant and the value is adjusted to that of the stock account. The offsetting posting is made to a clearing account. The known data from the delivery is copied to Accounting to balance the account where necessary.
    The system creates a message to the appropriate receiving system for all items with reference to a cross-system purchase order. The system does not perform any validity checks on the recipient’s data before posting begins. Incorrect Customizing results in the update being terminated.
    If a goods issue has receiving plants in different logical systems, an IDoc is sent for each system.
    In order that the goods receipt is able to use the values on the receiver side, you must add the values used to post the goods movement, in particular the transfer prices, to the IDoc.
    The logic for recognizing the profit center switch functions as follows: At goods issue, the system recognizes that the profit center of the issuing plant is different from the profit center of the receiving plant. The system derives the profit center node from the relevant profit center.
    Data Transfer
    The IDocs sent by the issuing plant trigger the goods issue postings in the receiving plant.
    Background Posting in the Receiving System
    The goods receipt is posted in the receiving system using the IDoc. The interface receives the data from the goods issue in the supplying plant. The following processes now run at the recipient:
    The system finds the update control for the GR part of the posting.
    The goods movement is posted with the new movement type.
    During valuation of the goods receipt, the system might, where necessary (UB logic), refer to the values (legal value and the value from the parallel valuation type, if you are using the transfer price function) from the IDoc.
    The PO history is updated. The PO history is updated with the material document number from the second part of the GI posting. The GI document number is not stored in the supplying plant, because there is no way to display this document.
    In two-step procedures the goods receipt is posted to the stock in transit.
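    The flow above — a goods issue posted in the supplying system, an IDoc carrying quantities and transfer prices, and a background goods receipt into stock in transit at the receiver — can be modeled as a toy sketch. All class and field names here are illustrative stand-ins, not real SAP structures or APIs:

```python
from dataclasses import dataclass

@dataclass
class IDoc:
    """Illustrative stand-in for the goods-issue IDoc (not a real SAP structure)."""
    po_item: str
    quantity: int
    transfer_price: float  # value carried so the receiver can valuate the GR
    receiver: str          # logical system of the receiving plant

class SupplyingPlant:
    def __init__(self, stock):
        self.stock = stock

    def post_goods_issue(self, po_item, qty, price, receiver):
        # Quantity is reduced in the supplying plant; the offsetting value
        # posting to a clearing account is not modeled here.
        self.stock -= qty
        return IDoc(po_item, qty, price, receiver)

class ReceivingPlant:
    def __init__(self):
        self.stock_in_transit = 0
        self.po_history = []

    def post_goods_receipt(self, idoc):
        # Two-step procedure: the GR is posted into stock in transit,
        # valuated with the transfer price carried in the IDoc.
        self.stock_in_transit += idoc.quantity
        self.po_history.append((idoc.po_item, idoc.quantity, idoc.transfer_price))

supplier = SupplyingPlant(stock=100)
receiver = ReceivingPlant()
idoc = supplier.post_goods_issue("PO-4711/10", 30, 12.5, "RCVCLNT100")
receiver.post_goods_receipt(idoc)
print(supplier.stock, receiver.stock_in_transit)  # 70 30
```

    The point of the sketch is only the split: the supplying system posts its half and emits a message, and the receiving system posts its half from that message rather than from a shared database.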
    Reversal
    You can only reverse this goods issue for the PO using the cancellation transaction in SD. You cannot reverse the GI in Inventory Management.
    The material document that is automatically created in the receiving system cannot be canceled. This reversal is triggered by the sending system (the actual reversal of the GI document takes place there) and transmits the data, including the reversal movement type, to the receiving system. No actual reversal is posted in the receiving system, because the material document number of the original document does not exist in this system. This scenario is applicable for cases where you use the two-step procedure (with stock in transit).
    Distribution of Batch Master Data and Characteristics
    The batch information is transported using the message category BATMAS.
    When you create a cross-system goods issue, the system creates the corresponding IDoc using the message category BATMAS.
    When the delivery arrives in the target system, the batch and all the information is already present in the system.
    Changes to the batch data are also distributed using the message category BATMAS.
    The batch can be decoupled in the receiving SAP R/3 system. This means that the batch can have a different status in the receiving system than in the original system. By setting an indicator at material level, you decide whether the batch can be decoupled or whether the batch and all its attributes are copied from the original system. "Decoupled", i.e. "locally independent", batches are no longer distributed from their own system.
    The batch data does not need to be available before the physical goods receipt takes place. The goods receipt into the stock in transit does not usually refer to the batch unless you are working with batches with assigned active ingredient values.
    If the GI cannot be posted for organizational reasons, for example because the goods cannot be loaded onto a truck until 10pm, then you can post the goods into the GR blocked stock. This stock is also non-batch-specific.
    In cases where the GI IDoc arrives before the batch IDoc, then the GI IDoc can be subsequently posted by a periodically scheduled report (transaction BD87). A program like this exists in the SAP standard system. In Customizing for MM Inventory Management (activity Copy, Change Movement Types), you should make settings to define that manual creation of batches at goods receipt is not allowed.
    Shipping Notification
    The shipping notification is required in the receiving system due to its relevance for MRP. In this way, for example, a change in delivery date determined at goods issue is sent to the receiving system using the shipping notification. The shipping notification can also be used when posting the GR batches.
    Reward if found helpful,
    Cheers,
    Chaitanya.

  • Automated build server for Arch? (like the sourceforge build system)

    Has someone considered some kind of automated build system for Arch?
    Something that would work like this:
    - It'd have every library and dependency possible installed.
    - It'd intelligently read the makefiles produced to see what libraries they used and compare them against a lookup table to see what Arch dependencies they then required. Failing that, it could use ldd and a second lookup table that matched libraries to packages.
    - It'd attempt to figure out the target binary to run (again, from the makefiles produced), and then run it. If it worked, it'd be marked as usable. If it didn't work, it'd be marked as needing fixing.
    All of these points can fail, especially in the parsing of the makefiles; in each case, this would be noted by the system and user action could be taken.
    In operation, it wouldn't take away from users managing their own packages. It'd just provide a secure environment to build packages in, and attempt to automate some of the process. In the best cases, the system would theoretically be capable enough to download a package's sourcecode, ./configure it, make it, make a package out of it, get the package verified as usable, then update the repo with it.
    Note the verification step in the previous paragraph: I would never want this to be a fully automated system. Sure, it sounds amazing on paper, and might even work for a little while, but sooner or later something would come crashing down, and since repo management is quite a trust-based issue, everyone would freak out and nobody would want the build server anymore.
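    The ldd-plus-lookup-table fallback described above could look roughly like this sketch. The lookup table and the package names in it are made up for illustration; a real system would generate the table from the package database:

```python
import re

# Hypothetical lookup table mapping shared libraries to Arch packages.
LIB_TO_PKG = {
    "libcurl.so.4": "curl",
    "libssl.so.1.1": "openssl",
    "libz.so.1": "zlib",
}

def deps_from_ldd(ldd_output):
    """Extract package dependencies from `ldd` output lines like
    '\tlibz.so.1 => /usr/lib/libz.so.1 (0x...)'. Libraries missing
    from the table are reported separately so a human can extend it."""
    known, unknown = set(), set()
    for line in ldd_output.splitlines():
        m = re.match(r"\s*(\S+\.so[\d.]*)\s*=>", line)
        if not m:
            continue
        lib = m.group(1)
        if lib in LIB_TO_PKG:
            known.add(LIB_TO_PKG[lib])
        else:
            unknown.add(lib)
    return sorted(known), sorted(unknown)

sample = """\
\tlibz.so.1 => /usr/lib/libz.so.1 (0x00007f...)
\tlibfoo.so.2 => /usr/lib/libfoo.so.2 (0x00007f...)
"""
print(deps_from_ldd(sample))  # (['zlib'], ['libfoo.so.2'])
```

    The "unknown" bucket is exactly the "user action could be taken" case from the list above: the parse fails softly instead of guessing a package.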
    -dav7
    Last edited by dav7 (2008-10-17 18:41:22)

    Who would have access to upload to such a build server? If it's the general public, then this is a security nightmare, as well as a growth curve nightmare. The monetary investment for a project like this would need to come from somewhere.
    And yeah, something like this has been considered, and a working proof-of-concept has been sitting around for years. http://projects.archlinux.org/?p=pacbuild.git;a=summary . What this kind of project really needs is someone with some distributed computing smarts and dedication (and time) to get it off the ground in a form that will survive past a proof-of-concept barebones implementation.
    One of the largest design challenges would be dependency resolution for batch upgrades. For instance, let's say we update libfoobar, which is depended upon by foo, bar, baz, and batman: the system needs to know that libfoobar must be built and installed first in order to compile the rest of them against it.
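    The rebuild-ordering problem described above is essentially a topological sort of the dependency graph. A minimal sketch, reusing the hypothetical package names from the example:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each package maps to the set of packages it depends on.
depends_on = {
    "libfoobar": set(),
    "foo": {"libfoobar"},
    "bar": {"libfoobar"},
    "baz": {"libfoobar"},
    "batman": {"libfoobar"},
}

# static_order() yields each node only after all of its dependencies,
# so libfoobar comes out first and everything else rebuilds against it.
order = list(TopologicalSorter(depends_on).static_order())
print(order)
```

    A real batch-upgrade scheduler would also detect cycles (TopologicalSorter raises CycleError) and could run independent packages in parallel between the ordered "waves".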

  • Returns process for stock transfer

    Hi all,
    How do we carry out the sales returns process for a stock transfer?
    Material is sent from the head office plant to a branch plant; now we need to do a sales return from the branch plant back to the head office plant.
    Kindly let me know what customization and end-user steps need to be taken.

    Hi,
    The standard stock transfer comprises a purchase order, an outbound delivery, and an invoice if relevant.
    The returns process in the same scenario will also involve a purchase order, but of a returns type. Note that the supplying plant and receiving plant remain the same in the returns PO as well; additionally, the returns PO has a 'check' indicating that it is a return.
    The outbound delivery is created with reference to the returns PO (using VL10H, for example, just like a normal stock transfer). When the PGI is posted, the stock is received in the 'supplying plant', thereby completing the returns process. This bit is similar to a customer's returns. The movement type, etc. are taken care of when the appropriate document types are chosen.
    Cheers,
    KC
    SAP SD

  • CO88 - Collective Settlement Processing for Product cost collectors

    Hello Gurus,
    I have a couple of settlement-related problems. Please advise if you have any solution to fix them.
    1. We are using CO88 - Collective Settlement Processing for product cost collectors, but some of the cost collectors are not settling fully, whereas the same cost collectors settle individually without any problem.
    2. Some cost collectors are settled, but very small amounts remain unsettled.
    Thanks in advance.
    Vishwanath.

    Hi
    Welcome to SDN! We hope your first experience here is a good one.
    Issue 1: It depends on the error that you got at the time of settlement. Often, when a PCC is being posted to from Production while you are trying to settle, you can get an error;
    it can be settled later on without any issues.
    Issue 2: Ideally, no amount remains after settlement. The amount you are seeing may pertain to the next month. You can shift to the periodic view using Ctrl+Shift+F11 and see if any balance really exists.
    BR,Ajay M
