LIS Question - Continuation

Hi Roberto,
I have opened a new post so that I can assign points.
One more question: I want to analyze info structure S9xx, and in tcode MC23 I saw that all the characteristics and key figures are selected from the structures MCEKKN, MCKALK, MCEKPO, etc. This is the first time I am working with LIS, and I need to analyze these info structures (both functionally and technically). Is it that they created these info structures and then filled data into the table (i.e. S9xx) via ABAP programs?
Please let me know.
Thanks,
Sabrina

There is an updating program for the info structure. For self-defined info structures (S501-S999) it is a generated program, taking into account the update rules you defined in tr. MC24/25.
You can find the name of the program by looking in table TMC2P (MCINF = your info structure, FPROG = updating program). Please do not change anything in the LIS control tables, though.
To get to know which applications are updating the info structure in your case takes an additional step, since application 41 is a generic one. You can call tr. OMOC and choose "detail" for application 41 to see which applications are active for your info structure.
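For reference, the TMC2P lookup described above boils down to a simple selection (e.g. via SE16). A minimal sketch in SQL terms, where 'S901' is just a placeholder for your actual S9xx info structure name:

```sql
-- Minimal sketch: look up the generated updating program for a
-- self-defined info structure. 'S901' is a placeholder name;
-- substitute your own S9xx structure.
SELECT mcinf,   -- info structure
       fprog    -- generated updating program
FROM   tmc2p
WHERE  mcinf = 'S901';
```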
Hope that helps,
Bernd Sieger

Similar Messages

  • User can't respond to adadmin question "Continue as though successful Y/N"

    User in SR 7712716.992 has a form failing to generate during the 11i to Release 12 upgrade.
    He is at step 12-8, 'Compile and generate using adadmin', of task 'Perform the Upgrade' under
    category 'Upg to Rel 12.0.4' of MW.
    The following Oracle Forms objects did not generate successfully:
    wms forms/US WMSLABEL.fmx
    An error occurred while generating Oracle Forms objects.
    Continue as if it were successful :
    AD Administration could not find a response to the above prompt
    or found an incorrect response in the defaults file.
    You must run AD Administration in an interactive session
    and provide a correct value.
    How does he respond to this question?
    Dave

    How do you get around this issue when you are running a maintenance patch in non-interactive mode? I get the following error and cannot see how to fix it:
    The following Oracle Forms objects did not generate successfully:
    wms forms/US WMSLABEL.fmx
    An error occurred while generating Oracle Forms files.
    Continue as if it were successful :
    AutoPatch could not find a response to the above prompt
    or found an incorrect response in the defaults file.

  • Hierarchy questions continue

    dear all
    I posted the following sample data below
    create table t1 (
      parent_id varchar2(200),
      child_id varchar2(200)
    );
    insert into t1
      (parent_id, child_id)
    values
      ('P1', 'C1');
    insert into t1
      (parent_id, child_id)
    values
      ('P2', 'C2');
    insert into t1
      (parent_id, child_id)
    values
      ('C1', 'D1');
    insert into t1
      (parent_id, child_id)
    values
      ('D1', 'D2');
      insert into t1
      (parent_id, child_id)
    values
      ('D1', 'D3');
      insert into t1
      (parent_id, child_id)
    values
      ('D3', 'D4');
        insert into t1
      (parent_id, child_id)
    values
      ('D3', 'D5');
    and I was given the following sample query below to determine a child's associated parent in a tree instance:
    WITH got_paths AS
    (
      SELECT  SYS_CONNECT_BY_PATH (parent_id, ' -> ')  AS parent_path
      ,       SYS_CONNECT_BY_PATH (child_id,  ' -> ')  AS child_path
      FROM    t1
      START WITH  child_id  = 'D4'
      CONNECT BY  child_id  = PRIOR parent_id
    )
    SELECT  MAX (SUBSTR (child_path, 5, INSTR (child_path, ' -> ', 1, 2) - 4))  AS child
    ,       MAX (SUBSTR (child_path, 5, INSTR (child_path, ' -> ', 1, 2) - 4) || parent_path)  AS path
    FROM    got_paths
    ;
    the query works fine for the sample cases, but I need to modify the query once again, so that this time around I get only this as output instead:
    child Parent
    D4 D3
    Furthermore, the query also makes use of several assumptions associated with the sample data. For example, suppose the sample data were changed to the following:
    insert into t1
      (parent_id, child_id)
    values
      ('P111', 'C111');
    insert into t1
      (parent_id, child_id)
    values
      ('P222', 'C222');
    insert into t1
      (parent_id, child_id)
    values
      ('C111', 'D111');
    insert into t1
      (parent_id, child_id)
    values
      ('D111', 'D222');
      insert into t1
      (parent_id, child_id)
    values
      ('D111', 'D333');
      insert into t1
      (parent_id, child_id)
    values
      ('D333', 'D444');
        insert into t1
      (parent_id, child_id)
    values
      ('D333', 'D555');
    it wouldn't work. I need help modifying it. The modification should work for both cases and for any future case I might not think of.

    Hi,
    Thanks for posting the CREATE TABLE and INSERT statements.
    Whenever you post a question, please say what version of Oracle you're using. That's especially important if you're using CONNECT BY, and even more important if your version is over 7 years old, as Oracle 9 is. Even if you've given your version in some other thread, and even if that thread was only a few hours ago, always say what version of Oracle you're using. Not everyone who wants to help you has read all of your earlier posts, and not everyone who has read all your earlier posts recognizes that you are the same poster, and not everyone who has read all of your earlier posts and recognizes you as the same poster remembers everything from the earlier messages. Is it really that hard to say something like: "I'm using Oracle 9.2.0.1.0"?
    It always helps if you explain why you want the given results from the given data.
    In this case:
    SELECT  child_id
    ,       parent_id
    FROM    t1
    WHERE   child_id = 'D4'
    ;
    produces the correct results, but it may be for the wrong reasons, and therefore it might not work on your full table.
    Is there anything about your data that you're not saying? For example, is child_id unique?
    What results do you want from the second set of data that you posted (the data with 4-character ids, such as 'P111')?
    If you want to pass a parameter (say a child_id) to the query, give a few examples of different inputs and the output you want from each one.
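    For what it's worth, if child_id does turn out to be unique, a sketch that works for both sample data sets (taking the child id as a bind variable, here :target_child, a name I've made up) might be:

```sql
-- Sketch, assuming child_id is unique in t1.
-- Immediate parent of the given child:
SELECT child_id  AS child
,      parent_id AS parent
FROM   t1
WHERE  child_id = :target_child
;
-- Full chain of ancestors, nearest first:
SELECT child_id
,      parent_id
,      LEVEL     AS steps_up
FROM   t1
START WITH child_id = :target_child
CONNECT BY child_id = PRIOR parent_id
ORDER BY LEVEL
;
```

    The first query needs no CONNECT BY at all if each child has exactly one parent; the second shows the whole ancestry if that is what is really wanted.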

  • Question continued about GPIB-232CV-A to AaronK

    Now I am programming in assembly language for the MCS-51 series. I send a write command to the instrument in the RS-232 command style, and the TALK LED turns on. Then I send a read command to get the message, but I receive nothing: the TALK LED stays on while the LISTEN LED is off. I don't know the reason.

    Hi,
    When you write your information, does the instrument on the receiving side actually read the data sent? It is hard to guess at what could be going wrong given this data. Do you have a GPIB analyzer that you could hook up to analyze the signals on the bus? If the data is getting to the instrument, are you sure that the instrument is trying to return data? Also you might consider using a regular PC with RS232 to make sure that both the 232CV is working as well as the instrument you are connected to before you switch to the more advanced microprocessor approach.
    Best Regards,
    Aaron K.
    Application Engineer
    National Instruments

  • Error when logging into Yahoo homepage

    I get a question at the bottom of my screen when I log into my yahoo.com home page:
    Do you want to save yql?format=json&yhlVer=2&yhlClient=rapid&yhlS=150002829&yhlCT=3&yhl... from geo.query.yahoo.com?
    Options under Save are Save or Save As; a Cancel button is also present, and an X is available. Nothing happens when you press any of them. I turned the PC off and also did a restore to 3 days ago. The question continues to pop up. Any help would be appreciated.
    Thanks

    David, welcome to the forum.
    This is a problem that would best be handled by Yahoo Customer Support.  They are the experts on their software and should be able to solve your problem quickly.
    Please click the "Thumbs Up+ button" if I have helped you and click "Accept as Solution" if your problem is solved.

  • Mac Formatted Time Capsule w/ PC Formatted External Drive Via USB?

    I use a Mac-formatted Time Capsule to run Time Machine backups on my MacBook Pro.  My wife backs up her PC laptop on a PC-formatted Western Digital "My Book" hard drive which is powered via A/C wall adapter.  I connected the Western Digital PC drive to the Time Capsule via USB so it would be available on our home network and my wife could run her PC backups wirelessly, BUT the drive is not showing up on our network.  Anybody have any idea what I'm doing wrong?
    Thanks in advance!
    ---Tim

    My questions continue below in red.  Thank you for your patience.
    LaPastenague wrote:
    Tsac77 wrote:
    Thanks for the in depth clarification.  A few follow up questions:
    1) Are you saying that even though I am only using the TC (in this instance) to make the windows drive available on the network, the drive still needs to be formatted in HFS+?
    HFS+ or FAT32.. the latter having severe issues with large disks and large files.
    Great.  I'll do HFS+
    2) If I format the windows drive in HFS+, will the windows laptop be immediately able to read/write to it over the network?  Or will I have to install some sort of program on the windows laptop that allows windows to read/write to HFS+ (which I've always understood to be a Mac-only format)?
    The format of the disk connected to a NAS is totally irrelevant.. if it offers windows networking file protocol.. ie SMB then it can store files on the hard disk.. it is just like sharing the disk in the Mac.. you can write to that directly from the PC. Try it.. The disk is controlled by the Mac. But it presents SMB to the Network.. On your Mac, go to system preferences.. sharing.. it should be in Internet and wireless.. but on Lion I have no idea. Open file sharing which should be ticked..there should be a public folder by default.. you can add others.. go to options, turn on SMB.. then from the windows computer you should see the disk available in networking. If not fix the names to SMB standard.. you will need to do this on the TC as well.. short, no spaces.. pure alphanumeric names. Copy some files from windows to your Mac.. you have now copied to a network drive.. with the drive format totally irrelevant to your windows computer.
    I'm sorry, but I don't know what "windows networking file protocol" and "SMB" mean.  Also, when you say "NAS" are you referring to the Time Capsule or another device?  Can you please explain things like "The disk is controlled by the Mac but it presents SMB to the network" in a less advanced way.  Pretend I'm a child.  You won't be far off.  It seems like this technique you're explaining is a way for the windows PC to write to the external drive via my MacBook.  Is that even close to correct?  The setup I must create needs to work regardless if macbook is on/off/out of the house.
    3) Does plugging a USB drive into the TC never work properly or is this just a semi-common problem that happens sometimes?  My odds of success will probably dictate whether I try this or not.  I already own the windows USB drive and would like to avoid buying more hardware if possible.
    It is problematic enough that I recommend you try it before committing to it. Even if you have to copy the files off to another location and then reformat the drive. Test it for a couple of days.. the issue happens especially when you are using Lion.. but will also happen with windows. Especially when the disk spins down it will not spin up again and become available to the network.
    Got it.
    4) What do you mean by files being in "native format"?  And what does a dead TC have to do with whether or not they can be recovered to the windows laptop?  If the TC dies, can't I just plug the windows USB drive directly into the windows laptop and it will be the same thing as connecting the drive and laptop via network?  How does a dead TC make things more difficult?
    Thanks for your continued help.
    HFS+ is not windows native format.. so files stored by the TC on the USB drive, when the TC dies (they all die eventually), may be difficult to recover unless you plug it into another TC.. as the PC cannot natively read them. If you format FAT32 then the windows computer can read it.. just there are other issues. eg the disk can be slow.. very slow.. and it has far less protection. eg a power failure can corrupt the drive.. this was an issue before windows moved to NTFS.
    Why do you say "files stored BY THE TC on the USB drive"?  Why is the TC storing files on the USB drive?  Isn't plugging the USB into the TC just making the USB available on the network to my windows laptop?  Why is the TC storing files?

  • ABAP mapping

    Hello SAP-PI experts,
    In what cases, and how, do we do ABAP and XSLT mapping?
    What is the use of the User Decision step added in BPM of SAP-PI 7.1?
    Thanks
    Vijaya

    Hello Vijaya
    This forum is for PI developers. It is not the correct location to educate yourself on PI.
    Your user will be monitored and will be locked if this type of questioning continues.
    Regards
    XI/PI Moderator

  • Split Suite Bar and Ribbon Menu in SharePoint Online / O365

    As we all know the suite bar and ribbon menu controls are loaded from a single control. I have a requirement to place a div between suite bar and ribbon menu as shown in the image. Is it possible in SharePoint Online / O365? 
    Current suite bar and ribbon menu control (HTML Master Page Template)
            <div id="ms-designer-ribbon">
                <!--SPM:<%@Register Tagprefix="SharePoint" Namespace="Microsoft.SharePoint.WebControls" Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"%>-->
                <div id="TurnOnAccessibility" style="display:none" class="s4-notdlg noindex">
                    <a id="linkTurnOnAcc" href="#" class="ms-accessible ms-acc-button" onclick="SetIsAccessibilityFeatureEnabled(true);UpdateAccessibilityUI();document.getElementById('linkTurnOffAcc').focus();return false;">
                        <!--MS:<SharePoint:EncodedLiteral runat="server" text="&#60;%$Resources:wss,master_turnonaccessibility%&#62;" EncodeMethod="HtmlEncode">-->
                        <!--ME:</SharePoint:EncodedLiteral>-->
                    </a>
                </div>
                <div id="TurnOffAccessibility" style="display:none" class="s4-notdlg noindex">
                    <a id="linkTurnOffAcc" href="#" class="ms-accessible ms-acc-button" onclick="SetIsAccessibilityFeatureEnabled(false);UpdateAccessibilityUI();document.getElementById('linkTurnOnAcc').focus();return false;">
                        <!--MS:<SharePoint:EncodedLiteral runat="server" text="&#60;%$Resources:wss,master_turnoffaccessibility%&#62;" EncodeMethod="HtmlEncode">-->
                        <!--ME:</SharePoint:EncodedLiteral>-->
                    </a>
                </div>
                <!--SID:02 {Ribbon Snippet}-->
                <!--PS: Start Preview--><div class="DefaultContentBlock" style="background:rgb(0, 114, 198); color:white; width:100%; padding:8px; height:64px; ">In true previews of your site, the SharePoint ribbon will be here.</div><!--PE: End Preview-->
            </div>
            <!--MS:<SharePoint:SPSecurityTrimmedControl runat="server" AuthenticationRestrictions="AnonymousUsersOnly">-->
                <!--SPM:<wssucw:Welcome runat="server" EnableViewState="false"/>-->
            <!--ME:</SharePoint:SPSecurityTrimmedControl>-->
    Actual Requirement 

    Hi,
    According to your description, my understanding is that you want to do customization in SharePoint Online with the requirement above.
    For the first question, continuing from Scott's reply: you can read the RSS feed; the RSS feed effectively contains the items from the list, and you can read the list using the Client Object Model to get the RSS feed.
    For the second question, if you want to get current log in profile in my site, then you can try to use user profile rest api to get it.
    User profiles REST API reference
    For the third question, if you want to deploy the announcement app from on-premise to Online, if it's OOTB app, then it's not necessary, SharePoint Online have the announcement app as well.
    For the fourth question, if the CapIQ is a customize solution, I suggest you can use SharePoint hosted app to publish to the SharePoint online site.
    How to: Publish an app for SharePoint by using Visual Studio
    For the fifth question, if you want to upload questions to the survey rather than creating them in the site, you can use the Client Object Model to create the questions.
    Here is a similar thread for your reference:
    Importing Questions into a SharePoint Survey
    For the sixth question, if the information is stored in the list, then you can read the list using Client Object Model and it's better to create a SharePoint hosted app to achieve it.
    More information:
    How to: Create a basic SharePoint-hosted app
    Retrieve list item
    Thanks
    Best Regards
    TechNet Community Support

  • Why would we go for Queued Delta instead of Unserialized and Direct Delta?

    Hi Experts,
    Why would we go for Queued Delta instead of Unserialized V3 and Direct Delta? Please specify the reasons.
    What happens internally when we use Queued Delta or Direct Delta?
    I will allocate points to those who help me in detail. My advance thanks who respond to my query.

    Hi,
    Direct Delta
    With this update mode, extraction data is transferred directly to the BW delta queues every time a document is posted. In this way, each document posted with delta extraction is converted to exactly one LUW in the related BW delta queues. If you are using this method, there is no need to schedule a job at regular intervals to transfer the data to the BW delta queues. On the other hand, the number of LUWs per DataSource increases significantly in the BW delta queues because the deltas of many documents are not summarized into one LUW in the BW delta queues as was previously the case for the V3 update.
    If you are using this update mode, note that you cannot post any documents during delta initialization in an application from the start of the recompilation run in the OLTP until all delta init requests have been successfully updated in BW. Otherwise, data from documents posted in the meantime is irretrievably lost. The restrictions and problems described in relation to the "Serialized V3 update" do not apply to this update method.
    This update method is recommended for the following general criteria:
    a) A maximum of 10,000 document changes (creating, changing or deleting documents) are accrued between two delta extractions for the application in question. A (considerably) larger number of LUWs in the BW delta queue can result in terminations during extraction.
    b) With a future delta initialization, you can ensure that no documents are posted from the start of the recompilation run in R/3 until all delta-init requests have been successfully posted. This applies particularly if, for example, you want to include more organizational units such as another plant or sales organization in the extraction. Stopping the posting of documents always applies to the entire client.
    Queued Delta
    With this update mode, the extraction data for the affected application is compiled in an extraction queue (instead of in the update data) and can be transferred to the BW delta queues by an update collective run, as previously executed during the V3 update.
    Up to 10,000 delta extractions of documents to an LUW in the BW delta queues are cumulated in this way per DataSource, depending on the application.
    If you use this method, it is also necessary to schedule a job to regularly transfer the data to the BW delta queues ("update collective run"). However, you should note that reports delivered using the logistics extract structures Customizing cockpit are used during this scheduling. This scheduling is carried out with the same report which is used when you use the V3 updating (RMBWV311, RMBWV312 or RMBWV313). There is no point in scheduling with the RSM13005 report for this update method since this report only processes V3 update entries. The simplest way to perform scheduling is via the "Job control" function in the logistics extract structures Customizing Cockpit. We recommend that you schedule the job hourly during normal operation - that is, after successful delta initialization.
    In the case of a delta initialization, the document postings of the affected application can be included again after successful execution of the recompilation run in the OLTP (e.g OLI7BW, OLI8BW or OLI9BW), provided that you make sure that the update collective run is not started before all delta Init requests have been successfully updated in the BW.
    In the posting-free phase during the recompilation run in OLTP, you should execute the update collective run once (as before) to make sure that there are no old delta extraction data remaining in the extraction queues when you resume posting of documents.
    Using transaction SMQ1 and the queue names MCEX11, MCEX12 or MCEX13 you can get an overview of the data in the extraction queues.
    If you want to use the functions of the logistics extract structures Customizing cockpit to make changes to the extract structures of an application (for which you selected this update method), you should make absolutely sure that there is no data in the extraction queue before executing these changes in the affected systems. This applies in particular to the transfer of changes to a production system. You can perform a check when the V3 update is already in use in the respective target system using the RMCSBWCC check report.
    In the following cases, the extraction queues should never contain any data:
    - Importing an R/3 Support Package
    - Performing an R/3 upgrade
    For an overview of the data of all extraction queues of the logistics extract structures Customizing Cockpit, use transaction LBWQ. You may also obtain this overview via the "Log queue overview" function in the logistics extract structures Customizing cockpit. Only the extraction queues that currently contain extraction data are displayed in this case.
    The restrictions and problems described in relation to the "Serialized V3 update" do not apply to this update method.
    This update method is recommended for the following general criteria:
    a) More than 10,000 document changes (creating, changing or deleting a document) are performed each day for the application in question.
    b) In future delta initializations, you must reduce the posting-free phase to executing the recompilation run in R/3. The document postings should be included again when the delta Init requests are posted in BW. Of course, the conditions described above for the update collective run must be taken into account.
    Un-serialized V3 Update
    Note: Before PI Release 2002.1 the only update method available was V3 Update. As of PI 2002.1 three new update methods are available because the V3 update could lead to inconsistencies under certain circumstances. As of PI 2003.1 the old V3 update will not be supported anymore.
    With this update mode, the extraction data of the application in question continues to be written to the update tables using a V3 update module and is retained there until the data is read and processed by a collective update run.
    However, unlike the current default (serialized V3 update), the data is read in the update collective run (without taking the sequence from the update tables into account) and then transferred to the BW delta queues.
    The restrictions and problems described in relation to the "Serialized V3 update" do not apply to this update method since serialized data transfer is never the aim of this update method. However, you should note the following limitation of this update method:
    The extraction data of a document posting, where update terminations occurred in the V2 update, can only be processed by the V3 update when the V2 update has been successfully posted.
    This update method is recommended for the following general criteria:
    a) Due to the design of the data targets in BW and for the particular application in question, it is irrelevant whether or not the extraction data is transferred to BW in exactly the same sequence in which the data was generated in R/3.
    Thanks,
    JituK

  • How to push idoc from r/3

    How do I push an IDoc from R/3?

    Hello
    We expect you to be able to search the forum to get answers to basic questions such as this.
    - This thread will be locked.
    - No points will be awarded to any of the replies.
    - Your user will be monitored and if this type of questioning continues your account will be closed.
    Please familiarise yourself with the forum Rules of Engagement before posting.
    Rules of Engagement
    https://wiki.sdn.sap.com/wiki/display/HOME/RulesofEngagement
    Regards
    XI/PI Moderator

  • Customer Refund Processing

    We have a customer who overpaid their account by $1,000. What is the best (and easiest) way to process a refund check?

    Dear all,
    Sorry to interrupt here. I have a question continuing from the previous scenario.
    1) Customer Overpaid $1000. (Payment on Account)
    2) I return $600 to Customer via Outgoing Payment screen.
    3) Go to Link Payment to Invoices, the $1000 is still available to be offset with other future A/R Invoice.
    The other system users will misunderstand that there is still a $1,000 overpayment for this customer.
    In fact, only $400 of the overpayment is left. What can I do to show the correct result on the screen?
    Thanks.
    Regards,
    Lay Chin

  • Direct Delta, Queued and Unseralized V3

    Hi,
    For the IM DataSources BF and UM, the guide mentions Direct Delta as the preferred selection. How do we decide this?
    Also, for the LO Cockpit DataSources in SD, which selection is chosen?
    Thanks
    Harie

    hi Harie,
    Take a look at OSS notes 505700 (LBWE: New update methods as of PI 2002.1) and 500426 (BW extraction SD: alternatives to V3 updating).
    hope this helps.
    505700
    Symptom
    Up to and including PI 2001.2 or PI-A 2001.2, only the "Serialized V3 update" update method was used for all applications of the logistics extract structures Customizing Cockpit. As a result of the new plug-in, three new update methods are now also available for each application.
    The "Serialized V3 update" update method is no longer offered as of PI 2003.1 or PI-A 2003.1. Between installing PI 2002.1 or PI-A 2002.1 and installing PI 2003.1 or PI-A 2003.1, you should therefore switch to one of these new methods in all applications offering alternative update methods.
    Rather than describing an error, this note helps you make the right selection of update methods for the applications relating to the logistics extract structures Customizing Cockpit.
    Read this note carefully and in particular, refer to the essential notes required for some applications when using the new update methods.
    You will also find more detailed information on the SAP  Service Marketplace at: http://service.sap.com/~sapidb/011000358700005772172002
    Or:
    http://service.sap.com/BW > SAP BW InfoIndex > Delta Handling > New Update Methods in Logistics Extraction with PI2002.1 2.0x/3.0x (ppt)
    Other terms
    V3 update, V3, serialization, TMCEXUPD
    Reason and Prerequisites
    New design
    Solution
    1. Why should I no longer use the "Serialized V3 update" as of PI 2002.1 or PI-A 2002.1?
    The following restrictions and problems with the "serialized V3 update" resulted in alternative update methods being provided with the new plug-in and the "serialized V3 update" update method no longer being provided with an offset of 2 plug-in releases.
    a) If, in an R/3 System, several users who are logged on in different languages enter documents in an application, the V3 collective run only ever processes the update entries for one language during a process call. Subsequently, a process call is automatically started for the update entries from the documents that were entered in the next language. During the serialized V3 update, only update entries that were generated in direct chronological order and with the same logon language can therefore be processed. If the language in the sequence of the update entries changes, the V3 collective update process is terminated and then restarted with the new language.
    For every restart, the VBHDR update table is read sequentially on the database. If the update tables contain a very high number of entries, it may require so much time to process the update data that more new update records are simultaneously generated than the number of records being processed.
    b) The serialized V3 update can only guarantee the correct sequence of extraction data in a document if the document has not been changed twice in one second.
    c) Furthermore, the serialized V3 update can only ensure that the extraction data of a document is in the correct sequence if the times have been synchronized exactly on all system instances, since the time of the update record (which is determined using the locale time of the application server) is used in sorting the update data.
    d) In addition, the serialized V3 update can only ensure that the extraction data of a document is in the correct sequence if an error did not occur beforehand in the V2 update, since the V3 update only processes update data for which the V2 update has been successfully processed.
    2. New "direct delta" update method:
    With this update mode, extraction data is transferred directly to the BW delta queues every time a document is posted. In this way, each document posted with delta extraction is converted to exactly one LUW in the related BW delta queues.
    If you are using this method, there is no need to schedule a job at regular intervals to transfer the data to the BW delta queues. On the other hand, the number of LUWs per DataSource increases significantly in the BW delta queues because the deltas of many documents are not summarized into one LUW in the BW delta queues as was previously the case for the V3 update.
    If you are using this update mode, note that you cannot post any documents during delta initialization in an application from the start of the recompilation run in the OLTP until all delta init requests have been successfully updated in BW. Otherwise, data from documents posted in the meantime is irretrievably lost.
    The restrictions and problems described in relation to the "Serialized V3 update" do not apply to this update method.
    This update method is recommended for the following general criteria:
    a) A maximum of 10,000 document changes (creating, changing or deleting documents) are accrued between two delta extractions for the application in question. A (considerably) larger number of LUWs in the BW delta queue can result in terminations during extraction.
    b) With a future delta initialization, you can ensure that no documents are posted from the start of the recompilation run in R/3 until all delta-init requests have been successfully posted. This applies particularly if, for example, you want to include more organizational units such as another plant or sales organization in the extraction. Stopping the posting of documents always applies to the entire client.
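    The mapping from document postings to delta-queue LUWs under "direct delta" can be illustrated with a minimal Python sketch (this is not SAP code; all names are invented for clarity):

```python
# Illustrative model of the "direct delta" update mode: every document
# posting is transferred straight to the BW delta queue as one LUW.

def post_document_direct_delta(delta_queue, document):
    """Each posted document becomes exactly one LUW in the delta queue."""
    delta_queue.append([document])  # one LUW per document, no batching

delta_queue = []
for doc in ["doc1", "doc2", "doc3"]:
    post_document_direct_delta(delta_queue, doc)

# Three postings -> three LUWs. No collective run is needed, but the
# LUW count grows with every single posting, which is why this mode is
# only recommended below roughly 10,000 document changes per extraction.
print(len(delta_queue))  # 3
```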
    3. The new "queued delta" update method:
    With this update mode, the extraction data for the affected application is compiled in an extraction queue (instead of in the update data) and can be transferred to the BW delta queues by an update collective run, as previously executed during the V3 update.
    In this way, up to 10,000 delta extractions of documents are cumulated into one LUW in the BW delta queues per DataSource, depending on the application.
    If you use this method, it is also necessary to schedule a job to regularly transfer the data to the BW delta queues ("update collective run"). Note, however, that the reports delivered with the logistics extract structures Customizing cockpit must be used for this scheduling. There is no point in scheduling the RSM13005 report for this update method since that report only processes V3 update entries. The simplest way to perform scheduling is via the "Job control" function in the logistics extract structures Customizing cockpit. We recommend that you schedule the job hourly during normal operation - that is, after successful delta initialization.
    In the case of a delta initialization, the document postings of the affected application can be included again after successful execution of the recompilation run in the OLTP, provided that you make sure that the update collective run is not started before all delta Init requests have been successfully updated in the BW.
    In the posting-free phase during the recompilation run in the OLTP, you should execute the update collective run once (as before) to make sure that there is no old delta extraction data remaining in the extraction queues when you resume posting documents.
    If you want to use the functions of the logistics extract structures Customizing cockpit to make changes to the extract structures of an application (for which you selected this update method), you should make absolutely sure that there is no data in the extraction queue before executing these changes in the affected systems. This applies in particular to the transfer of changes to a production system. You can perform a check when the V3 update is already in use in the respective target system using the RMCSBWCC check report.
    In the following cases, the extraction queues should never contain any data:
    - Importing an R/3 Support Package
    - Performing an R/3 upgrade
    - Importing a plug-in Support Package
    - Executing a plug-in upgrade
    For an overview of the data of all extraction queues of the logistics extract structures Customizing Cockpit, use transaction LBWQ. You may also obtain this overview via the "Log queue overview" function in the logistics extract structures Customizing cockpit. Only the extraction queues that currently contain extraction data are displayed in this case.
    The restrictions and problems described in relation to the "Serialized V3 update" do not apply to this update method.
    This update method is recommended for the following general criteria:
    a) More than 10,000 document changes (creating, changing or deleting documents) are performed each day for the application in question.
    b) In future delta initializations, you must reduce the posting-free phase to executing the recompilation run in R/3. The document postings should be included again when the delta Init requests are posted in BW. Of course, the conditions described above for the update collective run must be taken into account.
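    The queued delta mechanics described above (cheap appends to the extraction queue at posting time, then bundling into LUWs of up to 10,000 deltas during the collective run) can be sketched as follows. This is an illustrative Python model, not SAP code, and all names are invented:

```python
# Illustrative model of the "queued delta" update mode.

MAX_DELTAS_PER_LUW = 10_000  # per-DataSource bundling limit

def post_document_queued_delta(extraction_queue, document):
    """At posting time, the delta is only appended to the extraction queue."""
    extraction_queue.append(document)

def update_collective_run(extraction_queue, delta_queue):
    """The scheduled job drains the extraction queue into the BW delta
    queue, bundling up to MAX_DELTAS_PER_LUW deltas into one LUW."""
    while extraction_queue:
        luw = extraction_queue[:MAX_DELTAS_PER_LUW]
        del extraction_queue[:MAX_DELTAS_PER_LUW]
        delta_queue.append(luw)

extraction_queue, delta_queue = [], []
for i in range(25_000):
    post_document_queued_delta(extraction_queue, f"doc{i}")
update_collective_run(extraction_queue, delta_queue)

# 25,000 deltas become 3 LUWs (10,000 + 10,000 + 5,000), so the delta
# queue stays small even under a high document volume.
print(len(delta_queue), len(delta_queue[0]), len(delta_queue[-1]))  # prints: 3 10000 5000
```

    This bundling is the reason queued delta is preferred above roughly 10,000 document changes per day: the delta queue holds a handful of large LUWs instead of one LUW per posting.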
    4. The new "unserialized V3 update" update method:
    With this update mode, the extraction data of the application in question continues to be written to the update tables using a V3 update module and is retained there until the data is read and processed by a collective update run.
    However, unlike the current default values (serialized V3 update), the data is read in the update collective run (without taking the sequence from the update tables into account) and then transferred to the BW delta queues.
    The restrictions and problems described in relation to the "Serialized V3 update" do not apply to this update method since serialized data transfer is never the aim of this update method. However, you should note the following limitation of this update method:
    The extraction data of a document posting, where update terminations occurred in the V2 update, can only be processed by the V3 update when the V2 update has been successfully posted.
    This update method is recommended for the following general criteria:
    a) Due to the design of the data targets in BW and for the particular application in question, it is irrelevant whether or not the extraction data is transferred to BW in exactly the same sequence in which the data was generated in R/3.
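    The sequencing caveat of the unserialized V3 update can be shown in a small Python sketch (not SAP code): the collective run transfers records in storage order rather than creation order, so the original posting sequence is not restored:

```python
# Illustrative model of the "unserialized V3 update": the collective run
# reads the update tables WITHOUT ordering by creation time.

update_table = [
    {"doc": "doc2", "created": 2},  # stored out of creation order
    {"doc": "doc1", "created": 1},
    {"doc": "doc3", "created": 3},
]

def unserialized_collective_run(update_table):
    # Records are transferred in storage order; no sort on "created".
    return [rec["doc"] for rec in update_table]

transferred = unserialized_collective_run(update_table)
print(transferred)  # ['doc2', 'doc1', 'doc3'] - posting order not restored
```

    This is acceptable only when the BW data targets are designed so that the load order of the deltas does not matter, which is exactly criterion a) above.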
    5. Other essential points to consider:
    a) If you want to select a new update method in application 02 (Purchasing), it is IMPERATIVE that you implement note 500736. Otherwise, even if you have selected another update method, the data will still be written to the V3 update. The update data can then no longer be processed using the RMBV302 report.
    b) If you want to select a new update method in application 03 (Inventory Management), it is IMPERATIVE that you implement note 486784. Otherwise, even if you have selected another update method, the data will still be written to the V3 update. The update data can then no longer be processed using the RMBWV303 report.
    c) If you want to select a new update method in application 04 (Production Planning and Control), it is IMPERATIVE that you implement note 491382. Otherwise, even if you have selected another update method, the data will still be written to the V3 update. The update data can then no longer be processed using the RMBWV304 report.
    d) If you want to select a new update method in application 45 (Agency Business), it is IMPERATIVE that you implement note 507357. Otherwise, even if you have selected another update method, the data will still be written to the V3 update. The update data can then no longer be processed using the RMBWV345 report.
    e) If you want to change the update method of an application to "queued delta", we urgently recommended that you install the latest qRFC version. In this case, you must refer to note 438015.
    f) If you use the new selection function in the logistics extract structures Customizing Cockpit in an application to change from the "Serialized V3" update method to the "direct delta" or "queued delta", you must make sure that there are no pending V3 updates for the applications concerned before importing the relevant correction in all affected systems. To check this, use transaction SA38 to run the RMCSBWCC report with one of the following extract structures in the relevant systems:
        Application 02:   MC02M_0HDR
        Application 03:   MC03BF0
        Application 04:   MC04PE0ARB
        Application 05:   MC05Q00ACT
        Application 08:   MC08TR0FKP
        Application 11:   MC11VA0HDR
        Application 12:   MC12VC0HDR
        Application 13:   MC13VD0HDR
        Application 17:   MC17I00ACT
        Application 18:   MC18I00ACT
        Application 40:   MC40RP0REV
        Application 43:   MC43RK0CAS
        Application 44:   MC44RB0REC
        Application 45:   MC45W_0HDR.
    You can switch the update method if this report does not return any information on open V3 updates. Of course, you must not post any documents in the affected application after checking with the report and until you import the relevant Customizing request. This procedure applies in particular to importing the relevant Customizing request into a production system.
    Otherwise, the pending V3 updates are no longer processed by the new update method. They can still be processed after you import the Customizing request by running the RSM13005 report. However, in this case, the serialization of the data in the BW delta queues can no longer be guaranteed.
    g) Early delta initialization in the logistics extraction:
    As of PI 2002.1 and BW Release 3.0B, you can use the early delta initialization to perform the delta initialization for selected DataSources.
    Only the DataSources of applications 11, 12 and 13 support this procedure in the logistics extraction for PI 2002.1.
    The early delta initialization is used to admit document postings in the OLTP system as early as possible during the initialization procedure. If an early delta initialization InfoPackage was started in BW, data may be written immediately to the delta queue of the central delta management.
    When you use the "direct delta" update method in the logistics extraction, you do not have to wait for the successful conclusion of all delta Init requests in order to readmit document postings in the OLTP if you are using an early delta initialization.
    When you use the "queued delta" update method, early delta initialization is essentially of no advantage because here, as with conventional initialization procedures, you can readmit document postings after successfully filling the setup tables, provided that you ensure that no data is transferred from the extraction queue to the delta queue, that is, an update collective run is not triggered until all of the delta init requests have been successfully posted.
    Regardless of the initialization procedure or update method selected, it is nevertheless necessary to stop any document postings for the affected application during a recompilation run in the OLTP (filling the setup tables).
    500426
    Symptom
    Up to plug-in release PI-2001.2, the V3 update is the only update method available for BW extraction with the structures of the logistics extract structures Customizing cockpit.
    To ensure, as far as possible, that document postings are transferred to BW in the correct sequence, the parameter "SERIAL" was added to the V3 collective update (see also note 385099). If this parameter is set at the start of the V3 collective update, the update data is read and processed from the database in the sequence of its creation time.
    However, this procedure gives rise to the following major performance problem:
    If documents are entered in an application in an R/3 System by several users that are logged on in different languages, the V3 collective run will only ever process the update entries for one language in a process call. A process call is then started automatically for the update entries of the documents which were entered in the next language. Accordingly, during the serialized V3 update only update entries which were generated in direct chronological sequence and with the same logon language can ever be processed. If the language changes in the sequence of the update entries, the V3 collective update process is terminated and restarted with the new language.
    During every restart, the update table VBHDR is read sequentially on the database. If there are very many entries in the update tables, this can cause the processing of the update data to take so much time that more new update records are generated simultaneously than are processed.
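    The performance problem can be modeled in a few lines of Python (an illustration only, not SAP code): consecutive update entries are grouped by logon language, and every language change forces a new process call with its own sequential read of VBHDR:

```python
# Illustrative model of the serialized V3 collective run: only update
# entries created consecutively in the SAME logon language can be
# processed in one process call.

from itertools import groupby

# Logon languages of consecutive update entries (invented sample data)
update_entries = ["EN", "EN", "DE", "EN", "FR", "FR", "EN"]

# Each run of identical languages is one process call; every language
# switch terminates the call and triggers a restart with a full
# sequential read of the update header table VBHDR.
process_calls = [list(group) for _, group in groupby(update_entries)]
print(len(process_calls))  # 5
```

    With many users posting in different languages, the number of restarts (and therefore sequential VBHDR reads) grows quickly, which is the performance problem the new update methods avoid.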
    Furthermore, the serialization is not 100% guaranteed in all scenarios when the V3 update is used, as described also in note 384211.
    From plug-in PI 2002.1 the option is offered for the most important applications of the logistics extract structures customizing cockpit of selecting between several alternative update methods.
    The serialized V3 update is no longer provided from plug-in PI-2003.1.
    The modification provided here enables you to use the new update methods for the applications 11, 12 and 13 as early as from patch 5 of PI 2001.2 or PI-A 2001.2.
    The following update methods are offered:
    direct delta (update mode A):
    With this update mode the extraction data is transferred directly into the BW delta queues with every document posting. Every document posting with delta extraction becomes exactly one LUW in the corresponding BW delta queues.
    If you use this method it is therefore no longer necessary to regularly schedule a job to transfer the data to the BW delta queues. On the other hand, there will be a substantial increase in the number of LUWs per DataSource in the BW delta queues since, unlike with the V3 update, the deltas of many documents are no longer grouped together to form one LUW in the BW delta queues.
    Furthermore, you should note when using this update mode that during delta initialization in an application, no document postings may be made from the start of the recompilation run in the OLTP until all delta init requests have been updated successfully in BW. Otherwise, data from documents posted in the interim will be irretrievably lost.
    queued delta (Update mode B):
    With this update mode the extraction data of the affected application is collected in an 'extraction queue' (instead of in the update data) and can be transferred via an update collective run into the BW delta queues, as is already the case during V3 updating.
    For each DataSource, up to 10,000 delta extractions of documents from the application are summarized to form one LUW in the BW delta queues.
    If you are using this method it is thus also necessary to regularly schedule a job for transferring the data to the BW delta queues. This scheduling is carried out with the same report which is used for the V3 update (RMBWV311, RMBWV312 or RMBWV313). It is recommended that you schedule the job on an hourly basis in normal operation - that is, following the successful delta initialization.
    In the case of a delta initialization the document postings of the affected application can be included again after the successful execution of the recompilation run in the OLTP (OLI7BW, OLI8BW or OLI9BW) if you ensure that the update collective run is not started before all delta Init requests are updated successfully in BW.
    In the posting-free phase during the recompilation run in the OLTP - as previously - the update collective run should be run once successfully to ensure that there is no old delta extraction data in the extraction queues when document posting is resumed.
    Using transaction SMQ1 and the queue names MCEX11, MCEX12 or MCEX13 you can get an overview of the data in the extraction queues.
    unserialized V3 update (update mode C):
    With this update mode the extraction data of the affected application is written to the update tables as before with the aid of a V3 update module and kept there until the data is read and processed by an update collective run.
    However, unlike the current default values (serialized V3 update), in the update collective run the data is read and transferred to the BW delta queues without taking into account the sequence from the update tables. This is why the performance problem described above does not arise in this case.
    This method should only be used if, due to the design of the cubes or ODS objects in the BW system, it does not matter whether the data is transferred to BW in exactly the same sequence in which it was generated in the OLTP System.
    Other terms
    V3 update
    Reason and Prerequisites
    Additional desired functions.
    Solution
    If you already want to use one of the above-described new update methods in the applications 11, 12 or 13 with PI 2001.2 or PI-A 2001.2, you can do this with the corrections given here. However, it is vital that you bear in mind the following points:
    1. The corrections described here consist of a modification which is not included in any PI Support Package. Check whether the described corrections are necessary. When you import a PI Support Package, parts of these corrections may be overwritten. When importing the PI Support Package, ensure that all of these corrections are retained (SPAM/SPAU).
    2. Only copy these modifications in consultation with SAP logistics extraction development.
    3. The corrections presented here should only ever be regarded as an interim solution. The functions are only fully available with PI 2002.1 and PI-A 2002.1. You should also consider the option of upgrading soon to PI 2002.1 or PI-A 2002.1.
    Note also that if the update methods of the applications 11, 12 and 13 are changed prematurely, other applications can still only offer the serialized V3 update. The performance problem described above continues to apply to these applications. It should, however, be significantly reduced by the absence of the modules of applications 11, 12 and 13.
    4. The corrections described here necessitate the importing of Support Package 5 of PI 2001.2 or PI-A 2001.2. The corrections described here are not possible in a lower plug-in release or Support Package status since the functions of the new update methods are not available.
    5. If you want to change the update method of an application to "queued delta", you are strongly recommended to first install the most current qRFC version. Refer to note 438015.
    6. If you use the corrections below to change from the update method "Serialized V3" to "Direct delta" or "queued delta" in an application, you must ensure that before the corresponding corrections are imported there are no more open V3 updates for the corresponding applications in all affected systems. To check this, you can execute the report RMCSBWCC in the corresponding systems with transaction SA38 with one of the following extract structures:
        Application 11:   MC11VA0HDR
        Application 12:   MC12VC0HDR
        Application 13:   MC13VD0HDR.
    The change is possible if the report does not report any open V3 updates. Of course, no document postings should be made in the affected application after the check with the report until the corrections have been imported. This procedure applies in particular to the importing of the corrections into a production system.
    7. If you are already using the update mode "queued delta" with one of the applications 11, 12 or 13 with the corrections stored here before PI 2002.1 and if you want to make changes to the extract structures in this phase via the cockpit, you should ensure that there is no data in the extraction queues before you execute these changes in the affected systems. In particular this applies to the transportation of changes into a production system. In this case it is not sufficient to execute the check report RMCSBWCC recommended in the cockpit and to eliminate the problem areas specified there.
    Below we describe how you can check and ensure this condition.
    8. When you import PI 2002.1, the changes made here will become ineffective, and "serialized V3" will again be the preset update method.
    If you have already changed to a different update method in advance in one of the applications 11, 12 or 13 via the corrections stored here, you must consider the following points during the plug-in upgrade to PI 2002.1 or PI-A 2002.1:
    a) Before the upgrade you must ensure that there is no more data in the extraction queues MCEX11, MCEX12 and MCEX13. Otherwise it may no longer be possible to process the data, which may have to be deleted from the queues, due to an extract structure enhancement with the new plug-in. You can do this by starting the collective update report (RMBWV311, RMBWV312 or RMBWV313) and by subsequently checking the extraction queues using transaction SMQ1.
    b) After the upgrade you must ensure that before the inclusion of new document postings, the update method of the corresponding application used in advance by the corrections specified here is also set in the cockpit of the logistics extraction (LBWE), and that this setting is transported into all clients of the affected systems.
    Note in this context that the corrections specified here change the update method so that it is client-independent. The new functions in the cockpit of the logistics extraction from PI 2002.1 or PI-A 2002.1 then allow the client-specific selection of the update methods.
    9. If you want to use the update method "direct delta" in application 11 instead of the serialized V3 update, copy the changes specified in the correction instructions "0120061532 0000373868" given below for the include MCEX11TOP using the ABAP editor (transaction SE38) (these guidelines can be used both for PI 2001.2 and PI-A 2001.2).
    10. If you want to use the update method "direct delta" in application 12 instead of the serialized V3 update, copy the changes specified in the correction instructions "0120061532 0000373868" given below for the include MCEX12TOP using the ABAP editor (transaction SE38) (these guidelines can be used both for PI 2001.2 and PI-A 2001.2).
    11. If you want to use the update method "direct delta" in application 13 instead of the serialized V3 update, copy the changes specified in the correction instructions "0120061532 0000373868" given below for the include MCEX13TOP using the ABAP editor (transaction SE38) (these guidelines can be used both for PI 2001.2 and PI-A 2001.2).
    12. If you want to use the update method "queued delta" in application 11 instead of the serialized V3 update, copy the changes specified in the correction instructions "0120061532 0000380083" given below for the include MCEX11TOP using the ABAP editor (transaction SE38) (these guidelines can be used both for PI 2001.2 and PI-A 2001.2).
    13. If you want to use the update method "queued delta" in application 12 instead of the serialized V3 update, copy the changes specified in the correction instructions "0120061532 0000380083" given below for the include MCEX12TOP using the ABAP editor (transaction SE38) (these guidelines can be used both for PI 2001.2 and PI-A 2001.2).
    14. If you want to use the update method "queued delta" in application 13 instead of the serialized V3 update, copy the changes specified in the correction instructions "0120061532 0000380083" given below for the include MCEX13TOP using the ABAP editor (transaction SE38) (these guidelines can be used both for PI 2001.2 and PI-A 2001.2).
    15. If you want to use the update method "unserialized V3 update" in application 11 instead of the serialized V3 update, copy the changes specified in the correction instructions "0120061532 0000380084" given below for the include MCEX11TOP using the ABAP editor (transaction SE38) (these guidelines can be used both for PI 2001.2 and PI-A 2001.2).
    16. If you want to use the update method "unserialized V3 update" in application 12 instead of the serialized V3 update, copy the changes specified in the correction instructions "0120061532 0000380084" given below for the include MCEX12TOP using the ABAP editor (transaction SE38) (these guidelines can be used both for PI 2001.2 and PI-A 2001.2).
    17. If you want to use the update method "unserialized V3 update" in application 13 instead of the serialized V3 update, copy the changes specified in the correction instructions "0120061532 0000380084" given below for the include MCEX13TOP using the ABAP editor (transaction SE38) (these guidelines can be used both for PI 2001.2 and PI-A 2001.2).

  • Queued Vs Unserialized V3 Update.

    Hi BW experts,
    I am not clear about the difference between Queued delta and Unserialized V3 update. And what are the disadvantages of the V3 update?
    Thanks in Advance.
    Thanks,
    Jelina.

    Hi ,
    <b>The new "queued delta" update method:</b>
        With this update mode, the extraction data for the affected application is compiled in an extraction queue (instead of in the  update data) and can be transferred to the BW delta queues by an update collective run, as previously executed during the V3 update. Up to 10,000 delta extractions of documents to an LUW in the BW delta queues are cumulated in this way per DataSource, depending on the application.
    If you use this method, it is also necessary to schedule a job to regularly transfer the data to the BW delta queues ("update collective run"). However, you should note that reports delivered  using the logistics extract structures Customizing cockpit are used during this scheduling. There is no point in scheduling with the RSM13005 report for this update method since this report only processes V3 update entries. The simplest way to perform scheduling is via the "Job control" function in the logistics extract structures Customizing Cockpit. We recommend that you schedule the  job hourly during normal operation - that is, after successful delta initialization.
    In the case of a delta initialization, the document postings of the affected application can be included again after successful execution of the recompilation run in the OLTP, provided that you make sure that the update collective run is not started before all delta Init requests have been successfully updated in the BW.
    In the posting-free phase during the recompilation run in OLTP, you should execute the update collective run once (as before) to make sure that there are no old delta extraction data remaining in the  extraction queues when you resume posting of documents.
    If you want to use the functions of the logistics extract structures Customizing cockpit to make changes to the extract structures of an application (for which you selected this update method), you should make absolutely sure that there is no data in the extraction queue before executing these changes in the affected systems. This applies in particular to the transfer of changes to a production system. You can perform a check when the V3 update is already in use in the
    respective target system using the RMCSBWCC check report.
    <b>The new "unserialized V3 update" update method:</b>
    With this update mode, the extraction data of the application in question continues to be written to the update tables using a V3 update module and is retained there until the data is read and processed by a collective update run.
    However, unlike the current default values (serialized V3 update), the data is read in the update collective run (without taking the sequence from the update tables into account) and then transferred to the BW delta queues.
    The restrictions and problems described in relation to the  "Serialized V3 update" do not apply to this update method since serialized data transfer is never the aim of this update method. However, you should note the following limitation of this update method:
    The extraction data of a document posting, where update terminations occurred in the V2 update, can only be processed by the V3 update  when the V2 update has been successfully posted.
    This update method is recommended for the following general criteria:
    a) Due to the design of the data targets in BW and for the particular application in question, it is irrelevant whether or not the extraction data is transferred to BW in exactly the same sequence in which the data was generated in R/3.

  • Exceptions page for cookies

    7/9/11
    Hello again. I have now managed to sort this problem myself.
    Barry
    Hello,
    My old XP computer crashed a week ago and the cost to repair it was too much. I have just received a new PC with Windows 7 Home premium and, amongst many other tasks, have reinstalled Firefox (the same version as I was using on the XP PC). I have two problems related to Options in Tools.
    Firstly, I can't seem to get the same info that was automatically available before from the Privacy tag in Options. I have set this window to ask me every time a site wants to add a cookie. Before, if I clicked on "Exceptions" I was presented with a list of all cookies I had either blocked, allowed for this session or allowed. Now the box is always blank with no cookies in it at all, so I can't check whether I have blocked a cookie I shouldn't have and I can't, as I did before, set the status order and then delete all the cookies I had allowed for the session. Can someone help me with this?
    My second problem is also related to cookies. Previously I am sure the Priority tag either stated or asked you to click a box to say "Treat all cookies from this site in the same manner". Now, no matter how many times I click "Block" or "Allow" the same question continuously pops up for those same cookies every time I go to their site and I am forever clicking Block or Allow. I might have to click the same cookie 20 or more times before it stops popping up (I believe this is probably tied to the first problem of no cookies showing in the exceptions list?).
    Can someone help me with this?
    Thanks,
    Barry


  • Type of Physical Inventory

    Dear All,
                how many types of Physical Inventory can we do in SAP? Please explain in detail and also the steps (T-codes).
    Regards
    Pavan

    Hi,
    Physical Inventory
    Purpose
    This component allows you to carry out a physical inventory of your company’s warehouse stocks for balance sheet purposes. Various procedures can be implemented for this.
    Scope of Functions
    In the SAP system, a physical inventory can be carried out both for a company’s own stock and for special stocks. Inventory for a company’s stock and for special stocks (such as consignment stock at customer, external consignment stock from vendor, or returnable packaging) must be taken separately (in different physical inventory documents), however.
    Note that the blocked stock returns and the stock in transfer cannot be inventoried. If these stocks are still to be counted in a physical inventory, you must transfer post these stocks to other stocks capable of inclusion in a physical inventory.
    The stock in a warehouse can be divided into stock types. In the standard system, a physical inventory can be carried out for the following stock types:
    - Unrestricted-use stock in the warehouse
    - Quality inspection stock
    - Blocked stock
    Note
    If batch status management is active, the first stock type covers both unrestricted-use stock and restricted-use stock.
    Inventory of all stock types mentioned can be taken in a single transaction. For the materials to be inventoried, one item is created in the physical inventory document for every stock type.
    Physical inventory takes place at storage location level. A separate physical inventory document is created for every storage location.
    If a material does not exist in a storage location, this means that no goods movement has ever taken place for the material in the storage location. The material, therefore, has never had any stock in this storage location. The material does not exist at stock management level in the storage location. It is therefore not possible to carry out a physical inventory for the material in this storage location.
    This is not to be confused with a material for which a goods movement has taken place and for which the stock balance is currently zero. A physical inventory must be carried out in this case, since storage location data is not deleted when the stock balance is zero.
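    The distinction above can be sketched as follows. This is an illustrative Python model, not the actual SAP data structure: a material is inventory-relevant in a storage location only if a stock record exists there, even when the current balance is zero.

    ```python
    # Hypothetical stock records keyed by (material, storage location).
    stock_records = {
        ("MAT-A", "0001"): 120,  # stock on hand
        ("MAT-B", "0001"): 0,    # goods movements happened; balance currently zero
        # ("MAT-C", "0001") has never had a goods movement -> no record at all
    }

    def can_take_physical_inventory(material: str, storage_location: str) -> bool:
        """True only if the material exists at stock management level in the location."""
        return (material, storage_location) in stock_records

    assert can_take_physical_inventory("MAT-A", "0001") is True
    assert can_take_physical_inventory("MAT-B", "0001") is True   # zero stock still counts
    assert can_take_physical_inventory("MAT-C", "0001") is False  # no record -> no inventory
    ```

    The point is that a zero balance and a missing record look the same in a stock list but behave differently for physical inventory.
    
    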
    Physical Inventory Procedures
    The SAP System supports the following physical inventory procedures:
    · Periodic inventory
    · Continuous inventory
    · Cycle counting
    · Inventory sampling
    Periodic Physical Inventory
    In a periodic inventory, all stocks of the company are physically counted on the balance sheet key date. In this case, every material must be counted. During counting, the entire warehouse must be blocked for material movements.
    Continuous Physical Inventory
    In the continuous physical inventory procedure, stocks are counted continuously during the entire fiscal year. In this case, it is important to ensure that every material is physically counted at least once during the year.
    Cycle Counting
    Cycle counting is a method of physical inventory in which stock is counted at regular intervals within a fiscal year. These intervals (or cycles) depend on the cycle counting indicator set for each material. This allows fast-moving items to be counted more frequently than slow-moving items.
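    The relationship between the cycle counting indicator and the count interval can be sketched like this. The indicator-to-frequency mapping and the number of working days are made-up example values; in SAP the counts per fiscal year are maintained per indicator in Customizing.

    ```python
    # Illustrative mapping: fast movers (indicator A) are counted more often.
    COUNTS_PER_YEAR = {"A": 12, "B": 6, "C": 2, "D": 1}
    WORKING_DAYS_PER_YEAR = 240  # assumption for the example

    def count_interval_days(cc_indicator: str) -> int:
        """Working days between two counts for a given cycle counting indicator."""
        return WORKING_DAYS_PER_YEAR // COUNTS_PER_YEAR[cc_indicator]

    for ind in "ABCD":
        print(ind, count_interval_days(ind))
    # With these example values, an 'A' item is counted every 20 working days,
    # while a 'D' item is counted only once a year (every 240 working days).
    ```
    
    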
    Inventory Sampling
    In inventory sampling (MM – Inventory Sampling), randomly selected stocks of the company are physically counted on the balance sheet key date. If the variances between the count results and the book inventory balances are small enough, it is presumed that the book inventory balances for the other stocks are also correct.
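    The acceptance idea behind inventory sampling can be sketched as follows: count a subset of stock items and, if the overall relative deviation between counted and book quantities stays within a tolerance, accept the book values for the uncounted rest. The 2% tolerance and the data are illustrative assumptions, not SAP defaults.

    ```python
    # Hypothetical book quantities and a physically counted sample.
    book = {"MAT-1": 100, "MAT-2": 250, "MAT-3": 80, "MAT-4": 500, "MAT-5": 60}
    counted = {"MAT-1": 99, "MAT-3": 81, "MAT-5": 60}

    def sample_accepted(book, counted, tolerance=0.02):
        """Accept book stock if the sample's overall relative deviation is within tolerance."""
        book_total = sum(book[m] for m in counted)   # book values of the sampled items
        count_total = sum(counted.values())          # physically counted quantities
        deviation = abs(count_total - book_total) / book_total
        return deviation <= tolerance

    print(sample_accepted(book, counted))  # small deviation -> book values accepted
    ```

    A real sampling procedure uses proper statistical extrapolation; this sketch only shows the accept/reject decision on a sample.
    
    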
    Updating the Logistics Information System (LIS)
    The physical inventory is connected to the Logistics Information System (LIS). When inventory differences are updated in the LIS, they are displayed both on a quantity and on a value basis. The physical inventory items are aggregated at inventory number level, plant level, storage location level, and material level. If you need more detailed information on the inventory differences in the LIS, you can access the list of inventory differences directly from the LIS and continue the analysis at item level.
    Restrictions
    The posting of physical inventory differences is subject to certain time constraints:
    · The posting period is automatically set during counting. Therefore, the inventory difference must be posted in the same period or, if postings to the previous period are allowed, in the following period.
    · The fiscal year is set by specifying a planned count date when creating a physical inventory document. All subsequent postings to this document must take place in this fiscal year and/or in the first period of the following fiscal year, if postings to the previous period are allowed.
