Difference between full and target quantity in item category

What is the difference between Target quantity and Full quantity in the Completion rule in Item Category customization?

Hi,
Full quantity is used for quotations; target quantity is used for contracts.
Completion rules
An item may be considered fully referenced if one of the following criteria is met:
An item is complete as soon as it is used as a reference. In this case, the item is fully referenced, even if only a part of the quantity is copied.
An item is only complete once the full quantity has been copied into a subsequent document. If the item is partially referenced, it is possible to enter several sales documents for the item until the quantity is fully accounted for.
An item is only complete once the target quantity (in a contract, for example) has been copied into a subsequent document. If the item is partially referenced, it is possible to enter several sales documents for the item until the quantity is fully accounted for.
The standard version of the system defines an inquiry item as complete even if only part of the inquiry quantity has been copied into a quotation. When you copy a quotation into a sales order, on the other hand, the quotation is not complete until the quotation quantity has been fully referenced. For example, if a quotation item for 100 pieces is copied into an order for 60 pieces, the quotation item remains open and further orders can be created until the remaining 40 pieces are accounted for.
Contracts, like quotations, are only considered complete once the full target quantity of the contract has been released.

Similar Messages

  • Difference between Trusted and Target Reconciliation?

    Hey, I am new to OIM.
    Can anyone tell me the difference between Trusted and Target Reconciliation?

    Just an example to describe the difference
    In an organization, a user (i.e: John Doe) has access to multiple systems: Microsoft Active Directory, Oracle Database, etc.
    User John Doe is registered in the company's PeopleSoft HR application. In order for OIM to manage the John Doe user, you need to create the John Doe user in OIM. This is normally achieved using Trusted Reconciliation. In this example, the trusted reconciliation runs against PeopleSoft HR, which is the source of truth for people data.
    OIM is also required to manage the user's Active Directory and Oracle Database accounts. In this case, you run Target Reconciliation to reconcile the target system accounts into OIM. This tells OIM that John Doe has an AD account and an Oracle Database account. HTH.

  • EXACT DIFFERENCE BETWEEN FULL LOAD AND REPAIR FULL LOAD?

    Hi Champ,
    Can anyone explain to me the exact difference between a full load and a repair full load, and give some scenarios where we would go for each? Please.....
    10zin

    Hi,
    Full repair can be described as a full load with selections. The main advantage of a repair full load is that it won't affect delta loads in the system. If you load a full request to a target with deltas running, you will have to re-initialize them for the deltas to continue; a repair full load does not affect the deltas.
    This is normally done when we lose some data or there is a data mismatch between the source system and BW.
    Check the OSS Note 739863 'Repairing data in BW' for all the details
    Symptom
    Some data is incorrect or missing in the PSA table or in the ODS object (Enterprise Data Warehouse layer).
    There may be a number of reasons for this problem: errors in the relevant application, errors in the user exit, errors in the DeltaQueue, handling errors in the customer's posting procedure (for example, a change in the extract structure during production operation while the DeltaQueue was not yet empty, or postings before the delta init was completed, and so on), extractor errors, unplanned system terminations in BW and in R/3, and so on.
    Solution
    Read this note in full BEFORE you start actions that may repair your data in BW. Contact SAP Support for help with troubleshooting before you start to repair data.
    BW offers you the option of a full upload in the form of a repair request (as of BW 3.0B). If you want to use this function, we recommend that you use the ODS object layer.
    Note that you should only use this procedure if you have a small number of incorrect or missing records. Otherwise, we always recommend a reinitialization (possibly after a previous selective deletion, followed by a restriction of the Delta-Init selection to exclude areas that were not changed in the meantime).
    1. Repair request: Definition
    If you flag a request as a repair request with full update as the update mode, it can be updated to all data targets, even if these already contain data from delta initialization runs for this DataSource/source system combination. This means that a repair request can be updated into all ODS objects at any time without a check being performed. The system supports loading by repair request into an ODS object without a check being performed for overlapping data or for the sequence of the requests. This action may therefore result in duplicate data and must thus be prepared very carefully.
    The repair request (of the "Full Upload" type) can be loaded into the same ODS object in which the 'normal' delta requests run. You will find this request under the "Repair Request" option in the InfoPackage (Maintenance) menu.
    2. Prerequisites for using the "Repair Request" function
    2.1. Troubleshooting
    Before you start the repair action, you should carry out a thorough analysis of the possible cause of the error to make sure that the error cannot recur when you execute the repair action. For example, if a key figure has already been updated incorrectly in the OLTP system, it will not change after a reload into BW. Use transaction RSA3 (Extractor Checker) in the source system for help with troubleshooting. Another possible source of the problem may be your user exit. To ensure that the user exit is correct, first load data with a probe full request into the PSA table and check whether the data is correct. If it is not correct, search for the error in the user exit. If you do not find it, we recommend that you deactivate the user exit for testing purposes and request a new full upload. If the data then arrives correctly, it is highly probable that the error is indeed in the user exit.
    We always recommend that you load the data into the PSA table in the first step and check the result there.
    2.2. Analyze the effects on the downstream targets
    Before you start the Repair request into the ODS object, make sure that the incorrect data records are selectively deleted from the ODS object. However, before you decide on selective deletion, you should read the Info Help for the "Selective Deletion" function, which you can access by pressing the extra button on the relevant dialog box. The activation queue and the ChangeLog remain unchanged during the selective deletion of the data from the ODS object, which means that the incorrect data is still in the change log afterwards. After the selective deletion, you therefore must not reconstruct the ODS object if it is reconstructed from the ChangeLog. (Reconstruction is usually from the PSA table but, if the data source is the ODS object itself, the ODS object is reconstructed from its ChangeLog). You MUST read the recommendations and warnings about this (press the "Info" button).
    You MUST also take into account the fact that the delta for the downstream data targets is created from the changelog. If you perform selective deletion and then reload data into the deleted area, this may result in data inconsistencies in the downstream data targets.
    If you only use MOVE and do not use ADD for updates in the ODS object, selective deletion may not be required in some cases (for example, if incorrect records only have to be changed, rather than deleted). In this case, the DataMart delta also remains intact.
    2.3. Analysis of the selections
    You must be very precise when you perform selective deletion: Some applications do not provide the option of selecting individual documents for the load process. Therefore, you must first ensure that you can load the same range of documents into BW as you would delete from the ODS object. This note provides some application-specific recommendations to help you "repair" the incorrect data records.
    If you updated the data from the ODS object into the InfoCube, you can also delete it there using the "Selective deletion" function. However, if it is compressed at document level there and deletion is no longer possible, you must delete the InfoCube content and fill the data in the ODS object again after repair.
    You can only perform this action after a thorough analysis of all effects of selective data deletion. We naturally recommend that you test this first in the test system.
    The procedure generally applies to all SAP applications/extractors. The application determines the selections. For example, if you cannot use the document number for selection but you can select documents for an entire period, then you are forced to delete and then update documents for the entire period in the data target. It is therefore important to look closely at the selections in the InfoPackage before you delete data from the data target.
    Some applications have additional special features:
    Logistics cockpit: As preparation for the repair request, delete the SetUp table (if you have not already done so) and fill it selectively with concrete document numbers (or other possible groups of documents determined by the selection). Execute the Repair request.
    Caution: You can currently use the transactions that fill SetUp tables with reconstruction data to select individual documents or entire ranges of documents (at present, it is not possible to select several individual documents if they are not numbered in sequence).
    FI: The Repair request for the Full Upload is not required here. The following efficient alternatives are provided: In the FI area, you can select documents that must be reloaded into BW again, make a small change to them (for example, insert a period into the assignment text) and save them -> as a result, the document is placed in the delta queue again and the previously loaded document under the same number in the BW ODS object is overwritten. FI also has an option for sending the documents selectively from the OLTP system to the BW system using correction programs (see note 616331).
    3. Repair request execution
    How do you proceed if you want to load a repair request into the data target? Go to the maintenance screen of the InfoPackage (Scheduler), set the type of data upload to "Full", and select the "Scheduler" option in the menu -> Full Request Repair -> Flag request as repair request -> Confirm. Update the data into the PSA and then check that it is correct. If the data is correct, continue to update into the data targets.
    Thanks,
    JituK

  • T-Code for reporting stock quantity difference between IM and WM

    Hi All
    I'm probably going to kick myself for asking this question, but is there a report to show the stock quantity differences between IM and WM?
    I have searched the forum first but cannot find info.
    We have various differences between IM and WM shown in MD04 and LS26. This is usually the result of an unplanned plant-to-plant transfer not being receipted in at the destination plant.
    If no standard T-code is available, which tables can I join to create my own query?
    Thanks in advance
    Darren

    compare MMBE with LS26
    Make sure you don't enter a storage location and storage type in the selection screen of LS26.
    If you post a difference with LI20, then you have just posted a difference within WM, meaning you moved a quantity from a bin into the difference storage type 998. The balance of both is still equal to your stock shown in MMBE. You have to clear the difference with IM by executing transaction LI21. Only then will the quantity disappear from the difference storage type and be adjusted in MM and FI.
    LX23 will only report real inconsistencies and will adjust them.
    If you do an MM movement like 303, this creates a transfer requirement in WM, and this TR needs to be converted into a TO to move the stock from the bin to the interim storage type for goods issue.
    In your case you created just a negative quant in the interim storage type for goods issue and still have a positive quant in the bin; the balance is equal to the stock shown in MMBE.
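    If no standard report fits and you want to build your own query, here is a minimal ABAP sketch. It assumes the standard tables MARD (IM storage location stock), LQUA (WM quants) and T320 (IM storage location to warehouse number assignment), and it compares only unrestricted-use stock, so treat it as a starting point rather than a finished reconciliation report:
    REPORT zim_wm_stock_diff.
    DATA lv_wm_qty TYPE lqua-gesme.
    " Read IM stock per material/plant/storage location, mapped to its WM warehouse number
    SELECT mard~matnr, mard~werks, mard~lgort, mard~labst, t320~lgnum
      FROM mard
      INNER JOIN t320
        ON  t320~werks = mard~werks
        AND t320~lgort = mard~lgort
      INTO TABLE @DATA(lt_im).
    LOOP AT lt_im INTO DATA(ls_im).
      " Total WM quant quantity, including interim storage types (e.g. 998/999)
      SELECT SUM( gesme )
        FROM lqua
        WHERE lgnum = @ls_im-lgnum
          AND matnr = @ls_im-matnr
          AND werks = @ls_im-werks
          AND lgort = @ls_im-lgort
        INTO @lv_wm_qty.
      IF lv_wm_qty <> ls_im-labst.
        WRITE: / ls_im-matnr, ls_im-werks, ls_im-lgort,
                 'IM:', ls_im-labst, 'WM:', lv_wm_qty.
      ENDIF.
    ENDLOOP.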

  • What's the difference between "Full Name" and "Account Name"

    Just re-installed a new OS X Lion. On the "Create Your Computer Account" screen, what's the difference between "Full Name" and "Account Name"?

    Your computer name by default would be "John David Appleseed's Macbook Pro"
    But you can always change that in System Preferences->Sharing
    And yes, the first account you set up will be an administrator.

  • Report - quantity and value difference between delivery and invoice?

    Hello experts,
    My client uses "Proof of delivery" (t-code: VLPOD). There is always a quantity and value difference between delivery and invoice.
    Do you know any report which will show these differences between delivery and invoice?
    Best regards,
    Maciej

    Hi,
    Note 867678 - Proof of delivery (POD), delivery and billing document will help you to understand the POD flow.
    You have the t-codes VLPODF/VLPODL/VLPODA, but perhaps they will not help you. So, as the above note suggests, you can use the tables VBFA, LIKP, LIPS, VBRK, VBRP, TVPOD and TVPODG to build your own report.
    I hope this helps you.
    Regards,
    Eduardo
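    For the quantity part, a minimal hedged sketch of such a report is shown below. It assumes the billing items reference the delivery items via VBRP-VGBEL/VGPOS; the report name is made up, and the value comparison can be added analogously from the billing item values:
    REPORT zpod_delivery_invoice_diff.
    " List items where the delivered quantity (LIPS-LFIMG) differs from
    " the billed quantity (VBRP-FKIMG) of the invoice item referencing it
    SELECT lips~vbeln, lips~posnr, lips~matnr,
           lips~lfimg AS delivered_qty,
           vbrp~vbeln AS invoice, vbrp~fkimg AS billed_qty
      FROM lips
      INNER JOIN vbrp
        ON  vbrp~vgbel = lips~vbeln
        AND vbrp~vgpos = lips~posnr
      WHERE lips~lfimg <> vbrp~fkimg
      INTO TABLE @DATA(lt_diff).
    LOOP AT lt_diff INTO DATA(ls_diff).
      WRITE: / ls_diff-vbeln, ls_diff-posnr, ls_diff-matnr,
               ls_diff-delivered_qty, ls_diff-invoice, ls_diff-billed_qty.
    ENDLOOP.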

  • What is the difference between full checkpoint and incremental checkpoint?

    What is the difference between full checkpoint and incremental checkpoint?
    And what is checkpoint queue?
    Can someone clarify these concepts?
    Thanks!

    Hi,
    there are different types of checkpoints:
    - Full checkpoint:
    => DBWR writes all dirty buffers from the Buffer cache to the datafiles and CKPT retrieves a new Checkpoint Change Number from a sys owned sequence and writes this number to all file headers and the controlfile.
    -- can be triggered by different events, like a logswitch, a manual checkpoint (alter system ..), a shutdown and so on
    This is the setup point for SMON for a crash recovery.
    - Partial checkpoint:
    => DBWR writes all dirty buffers of a single tablespace from the Buffer cache to the datafiles and CKPT retrieves a new Checkpoint Change Number from a sys owned sequence and writes this number to all file headers and the controlfile.
    -- can be triggered by an ALTER TABLESPACE OFFLINE, ALTER TABLESPACE READ ONLY, or ALTER TABLESPACE BEGIN BACKUP statement.
    - Incremental checkpoint:
    => DBWR continuously writes dirty buffers from the Buffer cache to the datafiles in small batches, and the current checkpoint position is advanced in the controlfile; the datafile headers are not updated.
    -- frequency is determined by the FAST_START_MTTR_TARGET parameter (new feature in Oracle 9i), with which you can specify the maximum time in seconds that SMON is allowed to take for a crash recovery until the database must be open again.
    Depending on the system, SMON must calculate the maximum number of redo log blocks that it can manage to recover in the specified number of seconds. It then creates so-called incremental checkpoints, which are tracked in the so-called checkpoint queue in memory.
    FAST_START_MTTR_TARGET is auto-tuned in Oracle 10g and Oracle tries to manage (incremental) checkpoint in a fashion that a minimum of I/Os are caused and a minimum time for crash recovery is needed.
    If you set LOG_CHECKPOINTS_TO_ALERT to TRUE you will find checkpoint information in the alertSID.log file. You will see FULL and INCREMENTAL checkpoints then.
    Hope this clarifies your question,
    Lutz Hartmann

  • Difference between Full backup and Incremental level 0 backup

    Hi,
    What is the difference between Full backup and Incremental level 0 backup?
    What I understand is:
    An incremental level 0 backup backs up all blocks containing data and skips blocks that have never been used.
    Similarly, a full backup backs up all blocks containing data (does it skip unused blocks?????).
    The only difference I see is that the level 0 backup is recorded as an incremental backup in the RMAN repository, so it can be used as the parent for a level 1 backup, whereas a full database backup is not recorded as such in the RMAN repository and cannot be used as the parent for a level 1 backup.
    (If you still feel you have more points to share regarding the difference between the backups, I request you to share them with me.)
    So my question here is:
    Can we use an incremental level 0 backup for full database recovery, or is there a compulsion that we must use only a full database backup for full database recovery?
    can anyone throw light on this
    with regards;
    Boo

    nowhere in this thread did we talk about image copies, as we were only talking about the incremental backup.
    The word "always" has a meaning. If you now wish to say, "well, RMAN will mostly use a level 0 or full backup, except when there are image copies around" then the word "always" doesn't fit your newly-revised description. That would make it an inappropriate word to use. And indeed, it's a factually wrong word to use.
    If you want to get really picky about it, incidentally, the words 'datafile copy' appear in a post from me at Aug 4, 2007 6:03 PM, whereas your post (in which the "always" word was used) only appears at Aug 4, 2007 8:32 PM - a full 2 and a half hours (and two posts) AFTER the concept of image copies had been mentioned.
    But regardless of times: you can't just pretend image copies don't exist just because they haven't been mentioned so far in a thread. If you post "this is the way something ALWAYS works", you'd better be completely right about it, because "ALWAYS" is a big word. And there's no grey about this: either "always" is right, or it's wrong. And in this case, it's wrong: there are many times when RMAN will not do what you baldly asserted it would do. Image copies aren't some rare, exotic beast, either... so the occasions when your 'rule' breaks down are many and common.
    until now I was simply believing what the Oracle docs and Metalink notes say
    Try not to do that, because you will fall into error after error if you do. Metalink is not the Holy Gospel of Oracle. Testing is, and it's because I've been testing this stuff since 1999 that I will, on this occasion, decline your kind invitation to do "a small test". Been there, done that.
    All of which leaves us discussing things that are of no relevance to the original poster, and I have no desire to take the thread off into silly areas of dispute. So I will leave it there, and I have nothing further to say on the matter. RMAN does what it does, and what it does is not what you said it would do. End of story.

  • What's the difference between full screen mode in iLife 09 and 11?

    I had an overview of iLife 11 in an Apple store yesterday, and I like its full screen mode. I have iLife 09 but never really used it much, to be honest.
    I just fired it up, and I see it has a full screen mode too, so what's the difference between the '09 and '11 full screen versions?
    Also, I'm a little worried whether my iMac would run it fine. It's a 2007 Alu Core 2 Duo 2GHz, 2GB RAM, so a little older.
    cheers

    If viewing full screen is your primary interest with iPhoto, then iPhoto 11 may not be for you. iPhoto 11 has new themes for books, cards and slideshows which are rather slick, and it offers 2-page spread layouts in the books. In all fairness, it has had its difficulties, as you can easily tell from the posts, though not for everyone.
    Bottom line: it all depends on what you primarily use iPhoto for: excellent image management and organization, or the frills, bells and whistles of its other features.
    Happy Holidays

  • Difference between BBP_GET_STATUS_2 and CLEAN_REQREQ_UP reports

    Could someone explain in a simple way what the difference is between the BBP_GET_STATUS_2 and CLEAN_REQREQ_UP reports? I have read about the differences in the standard documentation but it is a little confusing.
    Thanks!
    Regards,
    Madhur

    Hi
    CLEAN_REQREQ_UP (Cleaner Job)
    You can use this function for document types Shopping cart, (Local) Purchase Order, Confirmation, and Invoice to trigger a synchronization with the associated documents in the back-end system. The system checks whether and how the (follow-on) documents were posted in the back end, and updates the object link and references, as well as the document status.
    A job (background processing) is generated for the program CLEAN_REQREQ_UP. When this is run, the system queries a database table containing the transfer information of the documents to the back end. The entries are checked with the data of the respective back-end systems. If the back-end transfer is successful, the respective entries are deleted and the prerequisites for further processing are created.
    BBP_GET_STATUS_2 (Status Job)
    The status job was created by SAP to update the EBP system with data such as purchase requisition number, purchase order number, goods receipts or invoices recorded manually in R/3, etc. The report should not be run frequently at short intervals unless the order volume from EBP to R/3 is low. Otherwise, a recommended interval for running the report is approximately every hour. Basically, the schedule times depend on your business requirements.
    Until this job runs, the user will not see the number of the backend document created in R/3 for a particular shopping cart in the history tab of the check status transaction.
    Some more information :
    Go to:
    SPRO -> IMG -> Integration with Other SAP Components -> Advanced Planning and Optimization -> Basic Settings for the Data Transfer -> Change Transfer -> Change Transfer for Transaction Data -> Active Online Transfer using BTE
    Here you should maintain the application 'SRMNTY' with the active flag.
    Once this customizing is enabled, whenever a follow-on document (either confirmation or invoice) for an extended classic PO is created in the backend R/3 system, the R/3 system communicates with the SRM system by creating an entry in the table BBP_DOCUMENT_TAB for this PO.
    The item level of the SRM PO has fields to store the actual quantities and values entered for the corresponding confirmations and invoices.
    After that, run the reports CLEAN_REQREQ_UP and BBP_GET_STATUS_2. When CLEAN_REQREQ_UP runs, it updates the PO with statistical information. With the latest information in the BBP_PDIGP table (statistical information) the query should work fine.
    Summer
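    As a side note, if you want to schedule the two reports from ABAP rather than through SM36, a minimal hedged sketch could look like the following (the job name and the hourly period are arbitrary choices, and BBP_GET_STATUS_2 would normally be given its selection variant):
    DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'Z_SRM_STATUS_SYNC',
          lv_jobcount TYPE tbtcjob-jobcount.
    " Create the job...
    CALL FUNCTION 'JOB_OPEN'
      EXPORTING
        jobname  = lv_jobname
      IMPORTING
        jobcount = lv_jobcount.
    " ...add one step per report...
    SUBMIT clean_reqreq_up  VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.
    SUBMIT bbp_get_status_2 VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.
    " ...and release it to run immediately, then repeat hourly.
    CALL FUNCTION 'JOB_CLOSE'
      EXPORTING
        jobname   = lv_jobname
        jobcount  = lv_jobcount
        strtimmed = 'X'
        prdhours  = 1.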

  • Differences between LSMW and BDC

    Hi All
    Please can you give me a few points about the differences between LSMW and BDC?
    Awaiting your response,
    Praveen

    Hi, check the following document.
    There are three types of BDC methods:
    BDC SESSION
    CALL TRANSACTION
    CALL DIALOG
    What is BDC or batch input
    Batch Input is an SAP technique that allows automating the input into transactions. It relies on a BDC (Batch Data Commands) scenario.
    BDC functions:
    · BDC_OPEN_GROUP : Opens a session group
    · BDC_CLOSE_GROUP : Closes a session
    · BDC_INSERT : Insert a BDC scenario in the session
    · The ABAP statement "CALL TRANSACTION" is also called to run directly a transaction from its BDC table.
    The program RSBDCSUB can be run in order to launch the session automatically. Session management is done through transaction code SM35.
    The object itself is maintainable through transaction SE24.
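    To illustrate the session method with the function modules listed above, here is a minimal hedged sketch (the session name, transaction 'ZDEM', program SAPMZDEMO, screen 0100 and the field names are hypothetical):
    DATA: lt_bdcdata TYPE TABLE OF bdcdata,
          ls_bdcdata TYPE bdcdata.
    CALL FUNCTION 'BDC_OPEN_GROUP'
      EXPORTING
        client = sy-mandt
        group  = 'ZDEMO'     " session name as it will appear in SM35
        user   = sy-uname
        keep   = 'X'.
    " First screen of the (hypothetical) transaction
    ls_bdcdata-program  = 'SAPMZDEMO'.
    ls_bdcdata-dynpro   = '0100'.
    ls_bdcdata-dynbegin = 'X'.
    APPEND ls_bdcdata TO lt_bdcdata.
    " One field value on that screen
    CLEAR ls_bdcdata.
    ls_bdcdata-fnam = 'ZDEMO-MATNR'.
    ls_bdcdata-fval = '100-100'.
    APPEND ls_bdcdata TO lt_bdcdata.
    CALL FUNCTION 'BDC_INSERT'
      EXPORTING
        tcode     = 'ZDEM'
      TABLES
        dynprotab = lt_bdcdata.
    CALL FUNCTION 'BDC_CLOSE_GROUP'.
    " Process the session in SM35, or submit RSBDCSUB to start it automatically.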
    BDC methods:
    Method            Description                                     Parameters
    OPEN_SESSION      Opens a session                                 SUBRC (return code, 0 = OK); SESSIONNAME (session to be created)
    CLOSE_SESSION     Closes a session                                None
    RESET_BDCDATA     Resets the internal BDC table                   None; normally for internal purposes
    BDC_DYNPRO        Handles a new screen                            PROGNAME (name of the program); DYNPRONR (screen number)
    BDC_FIELD         Puts a value on the screen                      FIELDNAME (name of the field); FIELDVALUE (value to be passed)
    CONSTRUCTOR       Constructor; initializes NO_DATA                NODATA (no-data character); called automatically when the object is created
    RUN_SESSION       Launches a session with RSBDCBTC                None
    CALL_TRANSACTION  Calls a transaction with the current BDC data   MODE (display mode); UPDATE (update mode); TCODE (transaction to be called)
    BDC_INSERT        Inserts the BDC scenario in the session         TCODE (transaction to be called)
    BDC techniques used in programs:
    1) Building a BDC table and calling a transaction,
    2) Building a session and a set of BDC scenarios and keeping the session available in SM35,
    3) Building a session and launching the transaction right after closing the session.
    BDC using CALL TRANSACTION
    BDC using CALL TRANSACTION involves calling an SAP transaction in the background from within the ABAP program. The process involves building an internal BDC table containing the screen information needed to execute the required transaction and then passing this table to the CALL TRANSACTION command (see the sketch below).
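    The original thread's code example was not preserved; the following is a minimal hedged sketch of the CALL TRANSACTION variant (transaction 'ZDEM', program SAPMZDEMO, screen 0100 and the field names are hypothetical):
    DATA: lt_bdcdata  TYPE TABLE OF bdcdata,
          lt_messages TYPE TABLE OF bdcmsgcoll.
    " Screen of the (hypothetical) transaction
    APPEND VALUE #( program = 'SAPMZDEMO' dynpro = '0100' dynbegin = 'X' ) TO lt_bdcdata.
    " Field value and OK code for that screen
    APPEND VALUE #( fnam = 'ZDEMO-MATNR' fval = '100-100' ) TO lt_bdcdata.
    APPEND VALUE #( fnam = 'BDC_OKCODE'  fval = '/00' )     TO lt_bdcdata.
    CALL TRANSACTION 'ZDEM' USING lt_bdcdata
         MODE   'N'                   " no screen display
         UPDATE 'S'                   " synchronous update
         MESSAGES INTO lt_messages.
    " With CALL TRANSACTION, errors must be handled explicitly:
    LOOP AT lt_messages INTO DATA(ls_msg) WHERE msgtyp = 'E' OR msgtyp = 'A'.
      WRITE: / ls_msg-msgid, ls_msg-msgnr.
    ENDLOOP.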
    What is the difference between batch input and call transaction in BDC?
    Session method.
    1) synchronous processing.
    2) can transfer large amounts of data.
    3) processing is slower.
    4) error log is created
    5) data is not updated until session is processed.
    Call transaction.
    1) asynchronous processing
    2) can transfer small amounts of data
    3) processing is faster.
    4) errors need to be handled explicitly
    5) data is updated automatically
    BATCH INPUT / DIRECT INPUT
    A: Batch inputs cannot be used to fill the "delivery due list" screen because it is not a dynpro; it is a standard SAP report. An SAP report (check with "System -> Status") may be called using the SUBMIT statement with the appropriate options. It is preferable to call the report rather than create a batch input program.
    GO THROUGH THIS LINK
    http://www.guidancetech.com/people/holland/sap/abap/zzsni001.htm
    The LSM Workbench is an SAP R/3 based tool that supports the one-time or periodic transfer of data from non-SAP systems ("legacy systems") to SAP systems.
    The LSM Workbench helps you to organize your data migration project and guides you through the process by using a clear sequence of steps.
    The most common conversion rules are predefined. Reusable conversion rules assure consistent data conversion for different data objects.
    LSMW vs DX Workbench
    The LSM Workbench covers the following steps:
    Read the legacy data from one or several files (e.g. spreadsheet tables, sequential files).
    Convert the data from source format to target format.
    Import the data using standard interfaces (Batch Input, Direct Input, BAPI, IDoc).
    Experiences made in successful implementation projects have shown that using the LSM Workbench significantly contributes to accelerating data migration.
    SAP provides this tool along with documentation to customers and partners free of charge.
    Users of the LSM Workbench receive the usual support via SAP Net - R/3 Frontend (component BC-SRV-DX-LSM).
    Releases:
    Version 1.7.2 of the LSM Workbench ("LSMW 1.7.2") available
    Attention: LSMW 1.7.2 requires an SAP R/3 system with SAP R/3 4.0 or SAP R/3 4.5.
    Version 1.8.0 of the LSM Workbench (1.21 MB) ("LSMW 1.8.0") available
    Attention: LSMW 1.8.0 requires an SAP R/3 system with SAP R/3 4.6.
    Version 3.0 of the LSM Workbench (1.89 MB) ("LSMW 3.0") available for Web Application Server 6.10
    Attention: LSMW 3.0 requires SAP WAS 6.10. The functionality of versions 1.7.2 and 3.0 is identical!
    Version 4.0 of the LSM Workbench ("LSMW 4.0") integrated in Web Application Server 6.20
    Attention: LSMW 4.0 is an integrated part of SAP WAS 6.20.
    Thanks & regards
    Sreenivasulu P

  • What is the difference between Pipeline and Consignment?

    Hello All...
    I need your help... Can you help me understand the difference between Pipeline and Consignment?
    Many thanks; I appreciate your help and comments!

    A pipeline material is a material that flows directly into the production process from a pipeline (for example, oil), from a pipe (for example, tap water), or from another similar source (for example, electricity).
    A material from the pipeline is always available; i.e. it can be withdrawn from the pipeline at any time and in any quantity.
    Depending on the system configuration, a material can be withdrawn only from the pipeline or, in addition to the pipeline, normal stocks of the material can also be managed.
    STEPS TO MAINTAIN PIPELINE MATERIAL
    1. Create a material with the PIPE material type, or use any other material type that allows the pipeline process.
    2. You should have an info record for the material with valid conditions; the price is picked only from the info record.
    3. If required, you can maintain a source list, or else you can select the vendor during goods issue.
    4. From the Inventory Management menu, choose Goods movement -> Goods issue.
    Maintain the data on the initial screen. Choose Movement type -> Consumption -> To cost center (or To order, To network, All account assignments) -> From pipeline (movement types 201 P, 261 P, 281 P, or 291 P).
    5. On the collective entry screen, enter the account assignment. Enter the items.
    You do not have to enter the vendor as this will be found automatically by the system.
    If more than one vendor is possible, a pop-up window appears with a list of pipeline vendors, from which you can select the vendor you require.
    Post the goods movement.
    We cannot stock a pipeline material; it is always available and is consumed directly at the cost center. You only need to pay for that consumption.
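    For completeness, the same pipeline withdrawal could also be posted programmatically with BAPI_GOODSMVT_CREATE; a minimal hedged sketch follows (material, plant, quantity and cost center are made-up values):
    DATA: ls_header TYPE bapi2017_gm_head_01,
          lt_items  TYPE TABLE OF bapi2017_gm_item_create,
          lt_return TYPE TABLE OF bapiret2.
    ls_header-pstng_date = sy-datum.
    ls_header-doc_date   = sy-datum.
    " Movement 201 + special stock indicator P = consumption to cost center from pipeline
    APPEND VALUE #( material   = 'PIPE-WATER'
                    plant      = '1000'
                    move_type  = '201'
                    spec_stock = 'P'
                    entry_qnt  = '10'
                    entry_uom  = 'L'
                    costcenter = '0000001000' ) TO lt_items.
    CALL FUNCTION 'BAPI_GOODSMVT_CREATE'
      EXPORTING
        goodsmvt_header = ls_header
        goodsmvt_code   = '03'       " GM code 03 = goods issue
      TABLES
        goodsmvt_item   = lt_items
        return          = lt_return.
    IF line_exists( lt_return[ type = 'E' ] ).
      CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
    ELSE.
      CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
        EXPORTING
          wait = 'X'.
    ENDIF.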

  • MAIN DIFFERENCES BETWEEN PARALLEL AND SEQUENTIAL PROCESSING???

    HI PALS,
    I WANT THE COMPLETE DIFFERENCES BETWEEN PARALLEL AND SEQUENTIAL PROCESSING!
    IN THE CONTEXT OF RFC.

    Hi
    Parallel Processing
    To achieve a balanced distribution of the system load, you can use destination additions to execute function modules in parallel tasks in any application server or in a predefined application server group of an SAP system.
    Parallel processing is implemented with a special variant of asynchronous RFC. It's important that you use only the correct variant for your own parallel processing applications: the CALL FUNCTION STARTING NEW TASK DESTINATION IN GROUP keyword. Using other variants of asynchronous RFC circumvents the built-in safeguards in the correct keyword, and can bring your system to its knees.
    Details are discussed in the following subsections:
    ·        Prerequisites for Parallel Processing
    ·        Function Modules and ABAP Keywords for Parallel Processing
    ·        Managing Resources in Parallel Processing
    Prerequisites for Parallel Processing
    Before you implement parallel processing, make sure that your application and your SAP system meet these requirements:
    ·        Logically-independent units of work:
    The data processing task that is to be carried out in parallel must be logically independent of other instances of the task. That is, the task can be carried out without reference to other records from the same data set that are also being processed in parallel, and the task is not dependent upon the results of others of the parallel operations. For example, parallel processing is not suitable for data that must be sequentially processed or in which the processing of one data item is dependent upon the processing of another item of the data.
    By definition, there is no guarantee that data will be processed in a particular order in parallel processing or that a particular result will be available at a given point in processing. 
    ·        ABAP requirements:
    -        The function module that you call must be marked as externally callable. This attribute is specified in the Remote function call supported field in the function module definition (transaction SE37).
    -        The called function module may not include a function call to the destination "BACK".
    -        The calling program should not change to a new internal session after making an asynchronous RFC call. That is, you should not use SUBMIT or CALL TRANSACTION in such a report after using CALL FUNCTION STARTING NEW TASK.
    -        You cannot use the CALL FUNCTION STARTING NEW TASK DESTINATION IN GROUP keyword to start external programs.
    ·        System resources: 
    In order to process tasks from parallel jobs, a server in your SAP system must have at least 3 dialog work processes. It must also meet the workload criteria of the parallel processing system: Dispatcher queue less than 10% full, at least one dialog work process free for processing tasks from the parallel job.
    Function Modules and ABAP Keywords for Parallel Processing
    You can implement parallel processing in your applications by using the following function modules and ABAP keywords:
    ·        SPBT_INITIALIZE: Optional function module. 
    Use to determine the availability of resources for parallel processing. 
    You can do the following:
    -        check that the parallel processing group that you have specified is correct.
    -        find out how many work processes are available so that you can more efficiently size the packets of data that are to be processed.
    ·        CALL FUNCTION Remotefunction STARTING NEW TASK Taskname DESTINATION IN GROUP:
    With this ABAP statement, you are telling the SAP system to process function module calls in parallel. Typically, you'll place this keyword in a loop in which you divide up the data that is to be processed into work packets. You can pass the data that is to be processed in the form of an internal table (EXPORTING, TABLES arguments). The keyword implements parallel processing by dispatching asynchronous RFC calls to the servers that are available in the RFC server group specified for the processing.
    Note that your RFC calls with CALL FUNCTION are processed in work processes of type DIALOG. The DIALOG limit on processing of one dialog step (by default 300 seconds, system profile parameter rdisp/max_wprun_time) applies to these RFC calls. Keep this limit in mind when you divide up data for parallel processing calls. 
    ·        SPBT_GET_PP_DESTINATION: Optional function module. 
    Call immediately after the CALL FUNCTION keyword to get the name of the server on which the parallel processing task will be run. 
    ·        SPBT_DO_NOT_USE_SERVER: Optional function module. 
    Excludes a particular server from further use for processing parallel processing tasks. Use in conjunction with SPBT_GET_PP_DESTINATION if you determine that a particular server is not available for parallel processing (for example, COMMUNICATION FAILURE exception if a server becomes unavailable).
    ·        WAIT: ABAP keyword
    WAIT UNTIL
    Required if you wish to wait for all of the asynchronous parallel tasks created with CALL FUNCTION to return. This is normally a requirement for orderly background processing. May be used only if the CALL FUNCTION includes the PERFORMING ON RETURN addition.
    ·        RECEIVE: ABAP keyword
    RECEIVE RESULTS FROM FUNCTION Remotefunction
    Required if you wish to receive the results of the processing of an asynchronous RFC. RECEIVE retrieves IMPORT and TABLE parameters as well as messages and return codes.
    Managing Resources in Parallel Processing
    You use the following destination additions to perform parallel execution of function modules (asynchronous calls) in the SAP system:
    In a predefined group of application servers:
    CALL FUNCTION Remotefunction STARTING NEW TASK Taskname
    DESTINATION IN GROUP Groupname
    In all currently available and active application servers:
    CALL FUNCTION Remotefunction STARTING NEW TASK Taskname
    DESTINATION IN GROUP DEFAULT
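    Putting the pieces together, here is a minimal hedged sketch of the pattern (RFC_SYSTEM_INFO merely stands in for your own remote-enabled function module; the task names and packet count are arbitrary):
    REPORT zparallel_rfc_demo.
    DATA: gv_open TYPE i,
          gv_done TYPE i.
    START-OF-SELECTION.
      " Dispatch tasks in parallel to all available servers (GROUP DEFAULT)
      DO 3 TIMES.
        DATA lv_task TYPE c LENGTH 8.
        lv_task = |TASK{ sy-index }|.
        CALL FUNCTION 'RFC_SYSTEM_INFO'
          STARTING NEW TASK lv_task
          DESTINATION IN GROUP DEFAULT
          PERFORMING receive_result ON END OF TASK.
        gv_open = gv_open + 1.
      ENDDO.
      " Wait until every parallel task has returned its result
      WAIT UNTIL gv_done >= gv_open.
      WRITE: / gv_done, 'parallel tasks completed'.
    FORM receive_result USING pv_task TYPE clike.
      DATA ls_info TYPE rfcsi.
      " Collect the result of the asynchronous call
      RECEIVE RESULTS FROM FUNCTION 'RFC_SYSTEM_INFO'
        IMPORTING
          rfcsi_export = ls_info
        EXCEPTIONS
          communication_failure = 1
          system_failure        = 2.
      gv_done = gv_done + 1.
    ENDFORM.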
    Sequential Processing
    In the following cases, the system chooses sequential (non-parallel) processing:
    ●      In table RSADMIN, entry QUERY_MAX_WP_DIAG has value (column value) 1.
    ●      The entire query consists of one sub-access only.
    ●      The query is running in a batch process.
    ●      The query was started from the query monitor (transaction RSRT) using various debug options (for example, SQL query display, execution plan display). See, Dividing a MultiProvider Query into Sub-Queries.
    ●      The query requests non-cumulative key figures.
    ●      Insufficient dialog processes are available when the query is executed. These are required for parallel processing.
    ●      The query is configured for non-parallel processing.
    ●      You want to save the result of the query in a file or a table.
    In Release SAP NetWeaver 7.0, the system can efficiently manage the large intermediate results produced by parallel processing. In previous releases, the system terminated when it reached a particular intermediate result size and proceeded to read data sequentially. This is no longer the case. Therefore, the RSADMIN parameter that was used in previous releases for reading a MultiProvider sequentially is no longer used.
    Reward if helpful,
    Naresh

  • Difference between expired and obsolete

    Hi All,
    What is the exact difference between expired and obsolete with reference to RMAN?

    afzal wrote:
    Hi All,
    What is the exact difference between expired and obsolete with reference to RMAN?
    As mentioned above, obsolete means that the backup is no longer needed according to the retention policy, whereas expired means that the backup piece could not be found when the CROSSCHECK command was executed. See the following example. Here the retention policy is set to redundancy 1, which means that a backup that has more than one copy is marked as obsolete. Then I delete a backup piece at the OS level, run the CROSSCHECK command, and see that the deleted backup is marked as EXPIRED.
    RMAN> show retention policy;
    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
    RMAN> backup datafile 2;
    RMAN> report obsolete;
    RMAN retention policy will be applied to the command
    RMAN retention policy is set to redundancy 1
    no obsolete backups found
    RMAN> backup datafile 2;
    RMAN> report obsolete;
    RMAN retention policy will be applied to the command
    RMAN retention policy is set to redundancy 1
    Report of obsolete backups and copies
    Type                 Key    Completion Time    Filename/Handle
    Backup Set           1      30-JAN-11        
      Backup Piece       1      30-JAN-11          /u01/oracle/product/10.2.0/db_1/flash_recovery_area/TEST/backupset/2011_01_30/o1_mf_nnndf_TAG20110130T131902_6nbc877l_.bkp
    RMAN> list expired backup;
    RMAN> exit
    Recovery Manager complete.
    [oracle@linux_server ~]$ rm -rf /u01/oracle/product/10.2.0/db_1/flash_recovery_area/TEST/backupset/2011_01_30/o1_mf_nnndf_TAG20110130T131902_6nbc877l_.bkp
    [oracle@linux_server ~]$ rman target /
    RMAN> list expired backup;
    using target database control file instead of recovery catalog
    RMAN> crosscheck backup;
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=159 devtype=DISK
    crosschecked backup piece: found to be 'EXPIRED'
    backup piece handle=/u01/oracle/product/10.2.0/db_1/flash_recovery_area/TEST/backupset/2011_01_30/o1_mf_nnndf_TAG20110130T131902_6nbc877l_.bkp recid=1 stamp=741791943
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u01/oracle/product/10.2.0/db_1/flash_recovery_area/TEST/backupset/2011_01_30/o1_mf_nnndf_TAG20110130T131914_6nbc8lh0_.bkp recid=2 stamp=741791954
    Crosschecked 2 objects
    RMAN> list expired backup;
    List of Backup Sets
    ===================
    BS Key  Type LV Size       Device Type Elapsed Time Completion Time
    1       Full    18.72M     DISK        00:00:03     30-JAN-11     
            BP Key: 1   Status: EXPIRED  Compressed: NO  Tag: TAG20110130T131902
            Piece Name: /u01/oracle/product/10.2.0/db_1/flash_recovery_area/TEST/backupset/2011_01_30/o1_mf_nnndf_TAG20110130T131902_6nbc877l_.bkp
      List of Datafiles in backup set 1
      File LV Type Ckp SCN    Ckp Time  Name
      2       Full 516141     30-JAN-11 /u01/oracle/product/10.2.0/db_1/oradata/test/undotbs01.dbf
    RMAN>

  • What is the difference between DSO and InfoCube

    Hello,
    Kindly tell me, what is the difference between a DSO and an InfoCube?
    And please tell me how to take the decision: in which case do we use a DSO and in which case do we use an InfoCube?

    Hi ,
    DataStore object serves as a storage location for consolidated and cleansed transaction data or master data on a document (atomic) level.
    This data can be evaluated using a BEx query.
    A DataStore object contains key fields (for example, document number/item) and data fields that can also contain character fields (for example, order status, customer) as key figures. The data from a DataStore object can be updated with a delta update into InfoCubes and/or other DataStore objects or master data tables (attributes or texts) in the same system or across different systems.
    Unlike multidimensional data storage using InfoCubes, the data in DataStore objects is stored in transparent, flat database tables. The system does not create fact tables or dimension tables.
    Use
    The cumulative update of key figures is supported for DataStore objects, just as it is with InfoCubes, but with DataStore objects it is also possible to overwrite data fields. This is particularly important with document-related structures. If documents are changed in the source system, these changes include both numeric fields, such as the order quantity, and non-numeric fields, such as the ship-to party, status and delivery date. To reproduce these changes in the DataStore objects in the BI system, you have to overwrite the relevant fields in the DataStore objects and set them to the current value. Furthermore, you can use an overwrite and the existing change log to render a source delta enabled. This means that the delta that is further updated to the InfoCubes, for example, is calculated from two successive after-images.
    An InfoCube describes (from an analysis point of view) a self-contained dataset, for example, for a business-orientated area. You analyze this dataset in a BEx query.
    An InfoCube is a set of relational tables arranged according to the star schema: A large fact table in the middle surrounded by several dimension tables.
    Use
    InfoCubes are filled with data from one or more InfoSources or other InfoProviders. They are available as InfoProviders for analysis and reporting purposes.
    Structure
    The data is stored physically in an InfoCube. It consists of a number of InfoObjects that are filled with data from staging. It has the structure of a star schema.
    The real-time characteristic can be assigned to an InfoCube. Real-time InfoCubes are used differently to standard InfoCubes.
    Hope this helps,
    Regards,
    CSM Reddy
