Statistical Setup

Hi Sirs,
I am running the statistical setup for billing documents (transaction OLI9BW), but I am not able to get the data into the setup tables. All the DataSources for billing are active. Is it necessary for all the sales order DataSources to be active as well? Some order and item DataSources were inactive.
Please throw some light on this.
Regards,
Radha

Hi Radha,
There is no dependency between the Sales Order and Billing applications as far as the statistical setup is concerned.
You can execute the setup table job in the background and check whether it completes successfully. Also check that the base tables for the Billing application, VBRK and VBRP, contain the necessary data, and that, if you are specifying any selection in the setup table job, the selected data is available in the base tables.
Hope this helps.
Regards,
Umesh

Similar Messages

  • System downtime during Statistical Set-up (OLI3BW)

    Hi Gurus,
    Please help me with this.
    Is there a way we can minimise R/3 downtime while doing the statistical set-up for loading data into BW?
    We have the Purchasing cube active, and we are enhancing the cube and the extractor to add new fields. Since the requirement needs history for these new fields, we have to re-initialise the LO set-up; we are also upgrading from unserialised V3 to Queued Delta update. Historically it looks like we need downtime, meaning no transactions should happen while performing OLI3BW. I was wondering whether there is a way we can perform this with no downtime. Our client is a little apprehensive about a long downtime. Please share your thoughts.
    Thanks All.

    hi,
    try checking SAP Note 753654;
    there is also a 'how to' document on minimizing downtime:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5d51aa90-0201-0010-749e-d6b993c7a0d6

  • Statistical set up table / V3

    Hi,
    I just completed a review and a test on IDES (in R/3) on the topic "statistical set-up process for the logistics customizing cockpit".
    1. The document kept referring to a "V3 scheduled" job and "V3" update: Can you explain this "V3" in simple terms for me? Was I supposed to see something called V3? What really is a V3 update?
    2. There was a discussion about running OLI17BW transaction to set up the “statistical table for the logistics application number”.  I followed the discussion but what actually is this “statistical table” and what is it for?
    3. In this same discussion, it stated that “you can no longer use the statistical data already contained in setup table and you have to delete it via LBWG” Can you touch on this quote to help me get the logic behind it? What is this setup table and how do I view its content?
    Thanks

    Hi Amanda,
    The V3 update is the method used to load your qRFC queue. This method is identified in transaction LBWE, and you can change your method there. In your case, you need to select the option next to Application 11: SD Sales BW. Without getting too involved, "Queued Delta" is probably chosen most often.
    In OLI7BW, you are filling the setup tables for initializing SD data transfers. If you want to load a year's worth of data, then you load that year's data into the setup tables using OLI7BW. Then you can go to BW and run a full load or initialization for that data. If, for example, you wanted 2006 sales data, you would identify your SD number ranges that were created in 2006, select those sales documents in OLI7BW, and execute the transaction in the background for large amounts of data (or in the foreground for smaller datasets). After successful completion, you can go into BW, run your InfoPackage wide open, and you will get your 2006 data. If you loaded 2005 and 2006 into the setup tables, running the IP wide open would give you both years.
    LBWG deletes the data in the setup tables. If you load 2006 data into the tables and then load the data a second time, you will have duplicate data. You will need to run LBWG to wipe the data and start over.
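    The fill-twice-then-duplicate behaviour can be sketched in Python (a toy model of a setup table, not the real SAP mechanism; `fill_setup` and `lbwg` are made-up names for illustration):

```python
# Toy model of an LO setup table: OLI7BW appends snapshots of the
# selected documents and does not deduplicate, so running the fill
# twice duplicates every record; LBWG simply empties the table.
setup_table = []

def fill_setup(documents):
    """Simulates a setup run: append the selected documents as-is."""
    setup_table.extend(documents)

def lbwg():
    """Simulates LBWG: delete all setup-table contents."""
    setup_table.clear()

docs_2006 = [{"vbeln": "0000001001"}, {"vbeln": "0000001002"}]
fill_setup(docs_2006)
fill_setup(docs_2006)          # accidental second run
assert len(setup_table) == 4   # duplicates: BW would load them twice

lbwg()                         # wipe and start over
fill_setup(docs_2006)
assert len(setup_table) == 2   # clean, single copy of each document
```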
    Thanks,
    Brian

  • Statistical set up of orders ( oli7bw)

    Hi Gurus,
    Just a small doubt.
    I am going to run OLI7BW for SD orders. This statistical setup is going to take 8 to 10 hours.
    Now can you please suggest:
    1) When to run this setup? During weekends or nights, when there will be no user activity?
    2) Whether to run it in the foreground or background? (I have two options, Execute & Execute in Background; which one should I select?)
    3) What is the significance of 'New Run', 'Block all orders?' & 'Simulation extr. str. BW'?
    Thanks in advance
    Regards
    Janardhan Kumar K

    1) When to run this setup? During weekends or nights, when there will be no user activity?
    The best time to run the setup is when there's little or no activity on the source system.
    2) Whether to run it in the foreground or background? (I have two options, Execute & Execute in Background; which one should I select?)
    It should be run in the background. If you run it in the foreground, you'll lock up your SAP GUI and risk timing out. Execute in Background is the best selection.
    3) What is the significance of 'New Run', 'Block all orders?' & 'Simulation extr. str. BW'?
    'New Run' should be checked if this is a new execution of the setup.
    'Block all orders?', when checked, locks all documents in the source system against updates during the setup (not advisable if the setup runs during times when the source tables can be updated).
    'Simulation extr. str. BW' runs the setup as a simulation, whereby the setup tables aren't updated. This will help you figure out whether there are any issues with the data prior to the actual execution of the setup job.
    If I may make a suggestion: you may want to get the range of sales documents by year/month and execute multiple, concurrent setup jobs. You could probably run 4 concurrent jobs with sales document ranges of 6 months per job. This will significantly increase your throughput in extracting the data into the setup tables.
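    The parallel-jobs suggestion amounts to splitting one document-number interval into sub-ranges, one per job. A generic sketch (the function name and the example document numbers are hypothetical; in practice you would take the boundaries from your SD number range intervals by year/month):

```python
def split_range(first_doc: int, last_doc: int, jobs: int):
    """Split an inclusive sales-document number range into `jobs`
    near-equal sub-ranges, one per concurrent setup job."""
    total = last_doc - first_doc + 1
    size, extra = divmod(total, jobs)
    ranges, start = [], first_doc
    for i in range(jobs):
        # the first `extra` jobs take one extra document each
        end = start + size - 1 + (1 if i < extra else 0)
        ranges.append((start, end))
        start = end + 1
    return ranges

# Example: one two-year document interval split across 4 jobs.
print(split_range(1000000, 1079999, 4))
# → [(1000000, 1019999), (1020000, 1039999), (1040000, 1059999), (1060000, 1079999)]
```

    Each (start, end) pair would then be entered as the sales-document selection of one background setup job.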

  • What is reorganisation for event PF in statistical set up for PP

    Hi All,
    While running the setup tables for PP (OLI4BW), there is a check box for 'Reorganisation for event PF'.
    What does this mean?
    How does it affect the records fetched if it is not checked?
    What type of records does it fetch?
    Please enlighten me on this.
    Regards,
    Samyuktha.

    In AS2 you can just check if the object exists:
    if (obj)
    but no event is dispatched.
    If it exists, it is either on-stage or back-stage, which you can check if needed.

  • Impact of Statistical document reversal once it is cleared

    Hi,
    I have a scenario where lots of documents are posted in this manner:
    1. A statistical entry is posted (sub-transaction - statistical - 0020); then
    2. The same is cleared against a payment and the statistical entry becomes real (sub-transaction - real - 0021); then
    3. The clearing is reset; then
    4. The statistical entry is reversed.
    Now the system is unable to find the credit line items that were once cleared against the statistical line item. This issue is causing a mismatch between the debtors and G/L line items.
    How to find such entries?
    Regards,
    Paresh

    Hi,
    In your point number (2), please explain what you mean by "0021" - Real Posting.
    Also, did you note the FI-CA document number when you made the Security Deposit request? If so, display it in FPE3 and see if there is a Clearing document associated with it.
    Normally, when the deposit is paid, the clearing document creates a document with Main 0020 and Sub 0010, with clearing restriction "2".
    Check the setting in Posting Area 1010 (via transaction FQC0) to ensure that for Statistical Key H, for Main/Sub 0020/0020, the offset Main/Sub is 0020/0010, with Clearing Restriction "2".
    Also, check your settings for Main/Sub 0020/0020 in table V_TFKITVOR. Check the Statistical setting indicator to see if it is properly set for Cash Security Deposit Requests.
    Regards,
    Ivor M.

  • STATISTICAL SETUP- LO

    Hi Gurus,
    What is the statistical setup in LO extraction?
    Regards, Ramu

    Hello Ramu,
    Please go through this.
    Setup of Statistical Data
    During a setup (statistical setup), the information structures are filled with consistent and complete data.
    The statistical setup of one or more information structures is necessary in the following cases:
    •     If an information structure was created using the Logistics Data Warehouse (statistical database)
    •     If the update for the information structures was activated after documents were already in the system that were meant to be included in the statistics
    •     If the statistical update in Customizing was changed
    •     If the statistical data is inconsistent
    Preparation
    To restart a statistical update after a termination, allocate a name to each background run of the setup.
    If a statistical setup is terminated, or if a setup from archived documents is interrupted, the status of the setup is saved under the name of this run.
    When restarting under the same name, processing will continue from the intermediate status that was saved.
    After the run is successfully completed, the intermediate status that was saved is deleted.
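    The named-run restart mechanism can be sketched generically (a toy Python checkpoint model; the run store and function names are made up and are not the actual SAP implementation):

```python
# Toy model of a resumable setup run: progress is checkpointed under
# the run name; restarting with the same name continues from the
# saved intermediate status; on success the checkpoint is deleted.
checkpoints = {}   # run name -> index of the next document to process

def run_setup(run_name, documents, fail_at=None):
    processed = []
    start = checkpoints.get(run_name, 0)   # resume if a checkpoint exists
    for i in range(start, len(documents)):
        if fail_at is not None and i == fail_at:
            checkpoints[run_name] = i      # save intermediate status
            raise RuntimeError("run terminated")
        processed.append(documents[i])
    checkpoints.pop(run_name, None)        # success: delete saved status
    return processed

docs = ["doc1", "doc2", "doc3", "doc4"]
try:
    run_setup("RUN1", docs, fail_at=2)     # terminates after doc1, doc2
except RuntimeError:
    pass
assert checkpoints["RUN1"] == 2
assert run_setup("RUN1", docs) == ["doc3", "doc4"]   # resumes where it stopped
assert "RUN1" not in checkpoints                     # checkpoint cleaned up
```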
    Recommendation
    •     The reports should run as a background job.
    •     You should not change the sequence of steps that describe the statistical setup of information structures (procedure).
    Procedure
    The following steps are necessary to perform a complete statistical set up:
    1. Initialize version for statistical setup
    For reasons of security, the statistical data determined during a setup is not written directly into an information structure, but is saved under a separate version name. This version must be activated for intermediate storage before the statistical setup is carried out.
    For this purpose, the report RMCSISCP is available.
    The actual data is saved under the version name "000".
    2. Save actual data
    The actual data in the system should be saved for safety reasons.
    Again, use report RMCSISCP for this purpose.
    3. Edit all archived documents
    To do this, you can use the various reports that are available in the individual applications.
    4. Edit all documents in the system
    This also takes place via the various reports that are available in the individual applications.
    5. Transfer the data which was obtained in the statistical setup from the statistical setup version to the actual version
    The data obtained is imported into the actual version, replacing the existing data, only after the successful completion of the statistical setup.
    The report RMCSISCP is again used for this purpose.
    Note
    You can see the versions of the standard information structures by using the standard analyses. Here are the following possibilities:
    a) You can assign the version number of the version that you would like to see as a value to the user parameter "MCR".
    On the selection screen of the standard analysis, the field Version under Parameters is automatically completed with the given version number. The version number can be changed simply by overwriting it.
    This setting is user-dependent and applies to all standard analyses.
    b) If you assign the value "X" to the user parameter "MCR", the version "000" will generally appear on the selection screen of the standard analyses. Even in this case, the version number can be overwritten.
    This setting is also user-dependent and applies to all standard analyses.
    If you want to use either one of these functions, select System -> User profile -> User parameters. You can select this function from any menu.
    Recommendation
    You should change the user parameters in an alternative session, as it is not possible to return to the starting point from the user parameters maintenance screen.
    c) Via "User settings", you can also make specifications for the versions, which are also user-dependent, but also only refer to one particular standard analysis.
    To do this, in the selection screen of the standard analysis, select Goto -> User settings.
    You are now in the standard drill-down. Select Goto ->Parameters.
    If you set the indicator "Selectable version", the field "Version" with the version number "000" will also appear on the selection screen. This version number can be overwritten at any time and thus changed.
    6. Follow-up processing
    After successfully completing the statistical setup, you can then delete data that is no longer needed by deleting the version under which this data is saved.
    These include:
    o     Safety copy
    o     Version for statistical setup
    The report RMCSISCP can be used for this purpose.
    Activities
    To carry out a statistical setup, proceed as follows:
    1. Enter your selection requirements.
    2. Save your entries.
    3. Select Program -> Execute.
    Description of the report RMCSISCP
    The report RMCSISCP is used in all information systems in Logistics. For this reason, the report is described below.
    With report RMCSISCP, you can copy or delete the versions of information structures under which statistical data has been saved.
    The data saved in an information structure is distinguished by the version under which it has been saved.
    The version "000" includes the actual data, i.e., the data that was written into the information structure as a result of the statistical update.
    The intermediate results are stored in the versions that begin with "&(" and that are created when an information structure is setup.
    Planning data is stored under all other versions (that means version A000 and all numerical versions except version 000).
    The statistical setup of an information structure is carried out just like the online update, with the difference being that the data is not usually written into version "000" first, but rather into a version that you choose, that begins with "&(". The data is only copied from this version into the "000" version after the successful completion of the initial transfer.
    Note
    While copying into the "000" version, you must not carry out any postings that update the respective information structure.
    To keep the old version "000" (the existing actual data) from being lost, save it under another version. This "safety copy" is also generated with report RMCSISCP.
    You can use the indicator "Initialize target version" to delete the data stored until now under this version. If the indicator "Do not copy initial records" is set, data records are only copied if at least one key figure is not initial.
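    The version mechanics described above can be sketched as a toy model (plain Python dictionaries standing in for LIS versions; the safety-copy version name "A01" is hypothetical, and none of this is the real RMCSISCP code):

```python
# Toy model of LIS versions: "000" holds the actual data, a setup run
# writes into its own "&(..." version, and only after a successful
# setup is that version copied into "000" (after a safety copy).
versions = {"000": {"2006": 500}}        # actual data before the setup

def copy_version(src, dst, initialize_target=True):
    """Mimics an RMCSISCP copy: optionally wipe the target first."""
    if initialize_target:
        versions[dst] = {}
    versions.setdefault(dst, {}).update(versions.get(src, {}))

def delete_version(name):
    versions.pop(name, None)

copy_version("000", "A01")               # step 2: safety copy of actual data
versions["&(SETUP1"] = {"2006": 520}     # steps 3/4: setup fills its version
copy_version("&(SETUP1", "000")          # step 5: transfer to actual version
delete_version("&(SETUP1")               # step 6: delete the setup version
delete_version("A01")                    #         and the safety copy

assert versions == {"000": {"2006": 520}}
```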
    Attention
    The report RMCSISCP should only be used for copying, deleting and for copying + deleting. Copy Management should be used to meet more complex requirements (Tools -> Copy Management).
    Activities
    1. Enter the name of the information structure that is to be copied/deleted.
    2. Choose a processing type.
    The Copy Management screen is called up. The processing type that you have selected is already set.
    3. Here you make all the remaining entries required for selection.
    Additional functions
    You can also call up a log which contains detailed information about all of the completed setup runs.
    This function allows you to make restrictions by specifying the start date of a run, the run name, the user and the application.
    You can also either print the list of selected setup runs or the detail screen for a setup run.
    Thanks, Ramoji

  • Analysis report on the basis of Planner group

    In the existing analysis of Planner group using MCI4, is it possible to add Notification type in the drill-down?
    I need to do the analysis on the basis of Notification type as well. If it is possible to add Notification type to the Planner group analysis, my report will fit in the standard report; otherwise I will have to prepare a Z-report.
    Thanks in advance

    Hi,
    Please check this detailed link: Modify T-Code MCI4 Report.
    I think in your case, while doing the statistical setup, running programme RIPMS001 is enough.
    Regards,
    Pushpa

  • How to find out the standard info stucture for a customized IS in LIS

    Hi All
    I have to run the statistical setup for the customized info structure S763 for the Plant Maintenance application in LIS, but I don't know whether it was copied from a standard SAP info structure or enhanced from one.
    I have checked all the standard info structures for Plant Maintenance, but none of them matches. The closest match is S063, but a few characteristics and a key figure are missing.
    Could anybody guide me on how to find the standard info structure?
    Any help would be of great help.
    Regards
    Saddy

    Hi Ganesh
    No, it didn't solve my problem. MC23 only displays the info structures. I have a customized info structure S763 which was created a long time back. I'm not sure whether it was copied from a standard info structure or enhanced from one.
    I have to give the source info structure name to run the statistical setup. The closest match to S763 is S063, but it is missing a few characteristics and a key figure.
    Does that mean that IS S063 was enhanced? Am I right? If it was enhanced, how will I run the statistical setup for it?
    I hope I have explained clearly.
    Do provide your valuable input.
    Regards
    Saddy

  • How to populate the data in purchasing LIS tables - S011 and S012

    Hello,
    While Running the report mc$g, I am not getting the values into invoice column. Please let me know if there is any setting I need to do in IMG to populate the invoice data.
    Thanks,
    Aditya

    Hi,
    Note 433518 deals with most known problems in relation to updating info structures S011 and S012. Sometimes, when a new support pack or upgrade takes place and a statistical set-up is not performed, inconsistencies can occur.
    S011 forms the data basis for the purchasing group analysis.                 
    S012 forms the data basis for the material groups, vendor and material       
    analysis.                                                                               
    The key figures in the Purchasing Information System are updated when        
    the following three types of events occur:                                                                               
    - Purchasing document (purchase order, scheduling agreement, contract,       
      inquiry/quotation) create/change                                           
    - Goods receipt for a purchase order, scheduling agreement                   
    - Invoice receipt for a purchase order, scheduling agreement
    You can find further information about updating in the Implementation        
    Guide for the Logistics Information System.                                                                               
    For more information on PURCHIS, please visit SAP's online help.  You        
    should find all the information you are looking for there.                                                                               
    Can you please review the following notes as a possible solution for          
    your issue :                                                                 
      501416  Purchase order changes are updated incorrectly in S011/S012        
      459450  FAQ: Logistic Information System (LIS) in Purchasing                                                                               
    Report RMCENEUA is used to set up statistical data for Purchasing information structures such as S011, S012, and S013. The best information about reorganizing statistics in program RMCENEUA / transaction OLI3 can be found in the attached note 64636.
    Regards,
    Edit

  • Update was terminated while creating purchase order from ME21N

    Hi Experts,
    We are getting this dump when we try to create a purchase order from ME21N.
    Our system is ECC 6.0 and the IS-AFS (Apparel & Footwear Solution) V600 component is installed.
    How can we prevent this error?
    Regards
    Here are the dump and the SM21 log:
    Runtime Errors         LOAD_PROGRAM_NOT_FOUND
    Exception              CX_SY_PROGRAM_NOT_FOUND
    Date and Time          10.10.2008 10:22:03
    Short text
         Program "RMCMS431 " not found.
    What happened?
         There are several possibilities:
         Error in the ABAP Application Program
         The current ABAP program "SAPLMCS4" had to be terminated because it has
         come across a statement that unfortunately cannot be executed.
         or
         Error in the SAP kernel.
         The current ABAP "SAPLMCS4" program had to be terminated because the
         ABAP processor detected an internal system error.
    What can you do?
         Note down which actions and inputs caused the error.
         To process the problem further, contact you SAP system
         administrator.
        Using Transaction ST22 for ABAP Dump Analysis, you can look
        at and manage termination messages, and you can also
        keep them for a long time.
    Error analysis
        An exception occurred that is explained in detail below.
        The exception, which is assigned to class 'CX_SY_PROGRAM_NOT_FOUND', was not
         caught in
        procedure "TMC2F_FROUT_CALL" "(FORM)", nor was it propagated by a RAISING
         clause.
        Since the caller of the procedure could not have anticipated that the
        exception would occur, the current program is terminated.
        The reason for the exception is:
        On account of a branch in the program
        (CALL FUNCTION/DIALOG, external PERFORM, SUBMIT)
        or a transaction call, another ABAP/4 program
        is to be loaded, namely "RMCMS431 ".
        However, program "RMCMS431 " does not exist in the library.
        Possible reasons:
        a) Wrong program name specified in an external PERFORM or
           SUBMIT or, when defining a new transaction, a new
           dialog module or a new function module.
        b) Transport error
    How to correct the error
        Check the last transports to the R/3 System.
        Are changes currently being made to the program "SAPLMCS4"?
        Has the correct program been entered in table TSTC for Transaction "ME21N "?
        If the error occures in a non-modified SAP program, you may be able to
        find an interim solution in an SAP Note.
        If you have access to SAP Notes, carry out a search with the following
        keywords:
        "LOAD_PROGRAM_NOT_FOUND" "CX_SY_PROGRAM_NOT_FOUND"
        "SAPLMCS4" or "LMCS4F10"
        "TMC2F_FROUT_CALL"
        If you cannot solve the problem yourself and want to send an error
        notification to SAP, include the following information:
        1. The description of the current problem (short dump)
           To save the description, choose "System->List->Save->Local File
        (Unconverted)".
        2. Corresponding system log
           Display the system log by calling transaction SM21.
           Restrict the time interval to 10 minutes before and five minutes
        after the short dump. Then choose "System->List->Save->Local File
        (Unconverted)".
        3. If the problem occurs in a problem of your own or a modified SAP
        program: The source code of the program
           In the editor, choose "Utilities->More
        Utilities->Upload/Download->Download".
        4. Details about the conditions under which the error occurred or which
        actions and input led to the error.
        The exception must either be prevented, caught within proedure
        "TMC2F_FROUT_CALL" "(FORM)", or its possible occurrence must be declared in the
        RAISING clause of the procedure.
        To prevent the exception, note the following:
    sm21 log ;
    Transaction Canceled 00 671 ( LOAD_PROGRAM_NOT_FOUND 20081010102203saptest_TET_00 EYUCE 100
    Update terminated
    > Update key: F39796DD5421F1509233001E0BD601E0
    > Update module: MCE_STATISTICS_UPD_V2
    Run-time error "LOAD_PROGRAM_NOT_FOUND" occurred

    Please follow OSS note 800335:
    V2 update terminations in Purchasing transactions after upgrading to or installing the AFS 5.0 release.
    Other terms
    AFS, LIS, MIGO, ME22N, OLI3, S433, S431
    Reason and Prerequisites
    AFS Purchasing infostructure S431 is no longer used. S433 is the valid infostructure for the release AFS 5.0.
    Solution
    Please do the following.
    For customers upgrading to AFS 5.0 only:
    > Rebuild Infostructure S433.
    - Goto transaction: OLI3 (Statistical Set up of Infostructures)
    - Info structure to be compiled: S433
    - Specify the 'Name of run'.
    - Execute.
    > Please also follow the instructions given below.
    For both Upgrade and Non-upgrade Customers:
    > Create the report 'ZDELS431'.
    - Transaction: SE38
    - Give Program name as ZDELS431.
    - Create (F5).
    - Title : 'Program to delete S431'.
    - Type  : 1 (Executable Program).
    - Status: T (Test Program).
    > Copy the program text from the note and paste in the program.
    > Save and activate the program.
    > Execute the report for all clients in Update mode.
      (Two check-boxes will appear: P_ALL_CL, P_UPDATE.
       Please check both of them).
    This report will delete all the references to infostructure S431.

  • Row chaining in table with more than 255 columns

    Hi,
    I have a table with 1000 columns.
    I saw the following citation: "Any table with more then 255 columns will have chained
    rows (we break really wide tables up)."
    If I insert a row populated with only the first 3 columns (the others are null), does row chaining occur?
    I tried to insert a row as described above, and no row chaining occurred.
    As I understand it, row chaining occurs in a table with 1000 columns only when the populated data exceeds
    the block size OR when more than 255 columns are populated. Am I right?
    Thanks
    dyahav

    user10952094 wrote:
    Hi,
    I have a table with 1000 columns.
    I saw the following citation: "Any table with more then 255 columns will have chained
    rows (we break really wide tables up)."
    If I insert a row populated with only the first 3 columns (the others are null), is a row chaining occurred?
    I tried to insert a row described above and no row chaining occurred.
    As I understand, a row chaining occurs in a table with 1000 columns only when the populated data increases
    the block size OR when more than 255 columns are populated. Am I right?
    Thanks
    dyahav
    Yesterday, I stated this on the forum: "Tables with more than 255 columns will always have chained rows." My statement needs clarification. It was based on the following:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema.htm#i4383
    "Oracle Database can only store 255 columns in a row piece. Thus, if you insert a row into a table that has 1000 columns, then the database creates 4 row pieces, typically chained over multiple blocks."
    And this paraphrase from "Practical Oracle 8i":
    V$SYSSTAT will show increasing values for CONTINUED ROW FETCH as table rows are read for tables containing more than 255 columns.
    Related information may also be found here:
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96524/c11schem.htm
    "When a table has more than 255 columns, rows that have data after the 255th column are likely to be chained within the same block. This is called intra-block chaining. A chained row's pieces are chained together using the rowids of the pieces. With intra-block chaining, users receive all the data in the same block. If the row fits in the block, users do not see an effect in I/O performance, because no extra I/O operation is required to retrieve the rest of the row."
    http://download.oracle.com/docs/html/B14340_01/data.htm
    "For a table with several columns, the key question to consider is the (average) row length, not the number of columns. Having more than 255 columns in a table built with a smaller block size typically results in intrablock chaining.
    Oracle stores multiple row pieces in the same block, but the overhead to maintain the column information is minimal as long as all row pieces fit in a single data block. If the rows don't fit in a single data block, you may consider using a larger database block size (or use multiple block sizes in the same database). "
    Why not a test case?
    Create a test table named T4 with 1000 columns.
    With the table created, insert 1,000 rows into the table, populating the first 257 columns each with a random 3-byte string, which should result in an average row length of about 771 bytes.
    SPOOL C:\TESTME.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    INSERT INTO T4 (
    COL1,
    COL2,
    COL3,
    COL255,
    COL256,
    COL257)
    SELECT
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=1000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT
      *
    FROM
      T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SPOOL OFF
    What are the results of the above?
    Before the insert:
    NAME                      VALUE                                                
    table fetch continue        166
    After the insert:
    NAME                      VALUE                                                
    table fetch continue        166                                                
    After the select:
    NAME                 STATISTIC#      VALUE                                     
    table fetch continue        252        332
    Another test, this time with an average row length of about 12 bytes:
    DELETE FROM T4;
    COMMIT;
    SPOOL C:\TESTME2.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    INSERT INTO T4 (
      COL1,
      COL256,
      COL257,
      COL999)
    SELECT
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=100000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT
      *
    FROM
      T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SPOOL OFF
    With 100,000 rows each containing about 12 bytes, what should the 'table fetch continued row' statistic show?
    Before the insert:
    NAME                      VALUE                                                
    table fetch continue        332 
    After the insert:
    NAME                      VALUE                                                
    table fetch continue        332
    After the select:
    NAME                 STATISTIC#      VALUE                                     
    table fetch continue        252      33695
    The final test only inserts data into the first 4 columns:
    DELETE FROM T4;
    COMMIT;
    SPOOL C:\TESTME3.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    INSERT INTO T4 (
      COL1,
      COL2,
      COL3,
      COL4)
    SELECT
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=100000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT
      *
    FROM
      T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SPOOL OFF
    What should the 'table fetch continued row' show?
    Before the insert:
    NAME                      VALUE                                                
    table fetch continue      33695
    After the insert:
    NAME                      VALUE                                                
    table fetch continue      33695
    After the select:
    NAME                 STATISTIC#      VALUE                                     
    table fetch continue        252      33695
    My statement "Tables with more than 255 columns will always have chained rows." needs to be clarified:
    "Tables with more than 255 columns will always have chained rows (row pieces) if a column beyond column 255 is used, but the 'table fetch continued row' statistic may only increase in value if the remaining row pieces are found in a different block."
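    The row-piece arithmetic behind this statement can be sketched in a few lines of Python. This is an illustrative simplification, not Oracle code: the 255-columns-per-row-piece limit is Oracle's documented behavior, while the function name and the trailing-NULL assumption are mine.

    ```python
    import math

    # Oracle stores at most 255 columns in one row piece; a row that uses a
    # column beyond position 255 is therefore split into multiple row pieces.
    ORACLE_MAX_COLS_PER_ROW_PIECE = 255

    def row_pieces(highest_used_column: int) -> int:
        """Minimum number of row pieces for a row whose last non-NULL column
        sits at 1-based position highest_used_column (trailing NULLs are not
        stored, which is why the COL1..COL4 test needs only one piece)."""
        if highest_used_column <= 0:
            return 1  # an all-NULL row still occupies a single stub piece
        return math.ceil(highest_used_column / ORACLE_MAX_COLS_PER_ROW_PIECE)

    # The first two test cases populate COL999, so every row needs 4 pieces;
    # the third populates only COL1..COL4, so one piece suffices and the
    # 'table fetch continued row' statistic stays flat.
    print(row_pieces(999))  # 4
    print(row_pieces(4))    # 1
    ```

    Whether the statistic actually increments still depends on the pieces landing in different blocks, as the clarified statement says; the sketch only counts the pieces.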
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
    Edited by: Charles Hooper on Aug 5, 2009 9:52 AM
    Corrected the misspelled view name "V$SYSSTAT", fixed a couple of minor typos, and changed "will" to "may" in the closing paragraph, as this appears to be the behavior based on the test case.

  • Re init 2LIS_11_VAITM delta (BW 3.5)

    Hi
    I had to re-initialize the delta load for 2LIS_11_VAITM. The data flow in BW is to the ODS and then to the cube. I followed these steps:
    BW side
    Deleted data in data targets (ODS and CUBE)
    Deleted INIT request
    R/3 Side
    Deleted the setup tables through SBIW
    Checked in RSA3 that there are 0 records
    Tried to fill the setup tables (OLI7BW): entered a new run, the date and time of termination (a future date), no selection (i.e. year or doc no etc.) and "0" as faulty records, and executed.
    It reached the stage where it asked "Start of Order Processing"; I continued, and the system is still running the same transaction with the timer.
    I checked that in RSA3 there are about 1007 records; I checked again after 4 hrs and it still says 1007 records.
    I monitored in SM37 but cannot find any job executed by my user ID. I tried the RM* job selection, but cannot find any job (scheduled, active, cancelled, finished, released etc.).
    I can find the run in NPRT, but cannot find anything in LBWF.
    Have I done something wrong here? Can anyone please advise me how to proceed with the re-init?
    Many thanks

    Hi,
    1. Delete the data in the DSO/Cube.
    2. Stop the scheduled jobs that transfer data on the R/3 side and in BI; check in SM37 and in LBWE -> Job control.
    3. Delete the setup tables for application 11 using LBWG.
    4. Delete the delta initialization for your InfoPackage in BI using RSA1 -> Scheduler -> "Delete Initialization Options for Source System".
    5. Delete the RFC queue with LBWQ.
    6. Check with RSA7 and LBWQ that the queues contain no data.
    7. Run OLI7BW in the background, in your case with no restrictions.
    8. Start the delta init InfoPackage in BI once the initialization job in R/3 has finished.
    Additionally, check:
    the statistical setup of orders (OLI7BW)
    Regards
    Andreas

  • Changing the extract structure MC02M_0ITM

    Hi,
    When I am trying to change the extract structure MC02M_0ITM, I am getting the following error:
    "struct. from appl. 02 due to open V3 proc. not changed -> Long text", "Message No. MCEX 140". All I was doing was adding more fields from the communication structure to the extract structure when I got the above error. Can someone shed some light on this? I appreciate your help. Thanks.
    Wen.

    Hi Wen
    You have to:
    1) Clear the LBWQ queues if you are using queued delta.
    2) Move all records from RSA7 to BW twice under a delta update call from BW, so that the records stored for the repeat delta are also emptied.
    3) Empty the statistical setup tables - transaction LBWG in R/3.
    Then you can modify the structure in the LO cockpit.
    Arvind

  • Can anyone explain how the LO data source flows from ECC to BI up to cubes?

    Dear all,
    Can anyone explain, step by step, how the data flows? Take 2LIS_11_VAHDR from SD, starting from activating the DataSource in ECC through to the InfoCube in BI 7.0. This will be very helpful.
    Thanks in advance for your answers.
    Edited by: harishk.225 on Dec 23, 2011 9:56 AM

    Hi Harish,
    First go to RSA5 in ECC, select your DataSource 2LIS_11_VAHDR and activate it.
    After activating, check in RSA6 whether the DataSource was activated properly.
    Then log on to the BI system, select the DataSource and click on Replicate.
    Then create the InfoCube, InfoPackage, transformation, DTP - the entire flow. But don't schedule it yet, because there is no data in the setup tables. For a full load we first need to run the statistical setup so that data comes into the setup tables.
    Now go to ECC and enter transaction OLI7BW to fill the setup tables for DataSource 2LIS_11_VAHDR.
    It will ask you for a run name etc.; give the run name and the time limit and execute it.
    If you get an error, first delete the setup tables using transaction LBWG.
    In LBWG it will ask you for the application number; give 11, i.e. for the sales DataSources, and execute it.
    The data will be deleted from the setup tables. To check whether the data was deleted, look at database table MC11VA0HDRSETUP. Always remember that the name of a setup table is the extract structure name followed by SETUP.
    EX: If the extract structure name is MC11VA0HDR then the setup table will be MC11VA0HDRSETUP.
    If the data was deleted, go to OLI7BW again and run the statistical setup, then check again whether table MC11VA0HDRSETUP contains data.
    Then trigger the InfoPackage and DTP in BI. The above steps were for a full load.
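    The setup-table naming convention described above is mechanical enough to express as a tiny helper. A sketch in Python; the function name is hypothetical, for illustration only:

    ```python
    # Setup table name = extract structure name + "SETUP"
    # (the naming convention described in the post above).
    def setup_table_name(extract_structure: str) -> str:
        return extract_structure.upper() + "SETUP"

    print(setup_table_name("MC11VA0HDR"))  # MC11VA0HDRSETUP
    ```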
    Now to Load deltas follow below steps.
    First create an init on the BI side, which enables the delta on the ECC side.
    Then go to ECC and execute transaction LBWE.
    LBWE is the LO Cockpit workbench. It has the following functions: 1. Maintain DataSource, 2. Maintain Extract Structure, 3. Job Control, 4. Update Mode (Delta Type), 5. Activate/Deactivate.
    If you want to add a new field to your DataSource, go to Maintain Extract Structure; remember that before adding to or modifying the DataSource you should first deactivate it using the fifth function, Activate/Deactivate.
    Then select the update mode. There are four modes: 1. Direct Delta (RSA7), 2. Queued Delta (LBWQ), 3. Unserialized V3 Update (SM13), 4. Serialized V3 Update (SM13).
    If you select Direct Delta, the deltas go directly to RSA7.
    If you select Queued Delta, the deltas go to LBWQ; you then have to run the V3 job via the Job Control function to move the data from LBWQ to RSA7. Remember that data always flows to BI from RSA7.
    If you select Unserialized V3, the deltas go to SM13 and again you have to run the V3 job to move them to RSA7.
    The Serialized V3 update is no longer used in the LO cockpit.
    There are differences between Direct Delta, Queued Delta and Unserialized V3 update; please look up the details on the net.
    Then select your update mode, create a delta DTP in BI and start loading.
    Hope this helps you.
    In RSA7 there are two queues, delta and repeat delta; to understand their functionality, please search for documents on the net.
    Regards,
    Asim
