LiveCache Consistency Check question, OM17

I have a general question about the liveCache consistency check (transaction OM17).  I know that it checks for data inconsistencies between the SAP APO database and SAP APO liveCache.  But what does that mean to a functional user?  Can someone explain this in layman's terms?

In layman's terms, this transaction checks for inconsistencies between liveCache and the database.
Here is detailed documentation on each of the object types:
Consistency Check for Setup Matrices
The consistency check for setup matrices contains:
· A check whether the setup matrices exist in liveCache
· A check whether the setup transitions exist in liveCache
· A field comparison between the setup transitions in the database and those in liveCache
When the setup matrices are corrected, the setup matrices in liveCache are completely generated from those in the database. Previously nonexistent setup matrices and setup transitions are newly created in liveCache. Superfluous setup transitions are deleted from liveCache. Setup transitions that differ at the field level are adjusted to match the database status.
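Schematically, the correction treats the database as the leading copy and regenerates the liveCache state from it. The following Python sketch is purely illustrative — the function name and data structures are invented for this example and are not real APO code:

```python
# Hypothetical sketch of the correction logic described above: the database
# is taken as the leading source and the liveCache copy is aligned to it.
# All names and data structures here are illustrative, not real APO code.

def reconcile_setup_transitions(database: dict, livecache: dict) -> dict:
    """Align liveCache setup transitions with the database.

    Both arguments map a transition key to a dict of field values.
    Returns the corrected liveCache state.
    """
    corrected = {}
    for key, db_fields in database.items():
        # Previously nonexistent transitions are newly created in liveCache;
        # transitions that differ at field level are adjusted to the DB status.
        corrected[key] = dict(db_fields)
    # Superfluous transitions (present only in liveCache) are dropped
    # simply by not copying them over.
    return corrected

db = {("M1", "A->B"): {"setup_time": 30}, ("M1", "B->C"): {"setup_time": 15}}
lc = {("M1", "A->B"): {"setup_time": 45},   # differs at field level
      ("M1", "C->A"): {"setup_time": 10}}   # superfluous in liveCache

print(reconcile_setup_transitions(db, lc))
```

After the run, the returned state matches the database exactly: the differing transition is adjusted, the missing one is created, and the superfluous one is gone.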
Consistency Comparison for Block Basis Definitions
Use
When you set this indicator, checks are performed in liveCache and in the database on characteristic values for block basis definitions:
· The existence of block basis definitions is checked.
· The consistency of the characteristic values is checked.
After the checks you can call a correction function in the check results display. When correcting the error, the system deletes obsolete block basis definitions in liveCache. The system completes or corrects missing or inconsistent block basis definitions in liveCache.
Note
The check is performed independently of the planning version.
Consistency Check for Resources
The consistency check for resources contains:
· A check that the resource and corresponding time stream exist in liveCache
· A check that a resource's characteristic blocks exist in liveCache
· A field comparison between the database resource and the liveCache resource
When correcting the resources, the resources in liveCache are completely generated from the database resources. Previously nonexistent resources are created in liveCache.
Consistency check for downtimes caused by maintenance order
The consistency check for maintenance downtimes contains:
· A check that the maintenance downtime has a reference to an existing maintenance order
· A check that the dates of the maintenance downtime correspond to the existing maintenance order
When correcting maintenance downtime errors, downtimes without a maintenance order are deleted, and wrong downtime dates are corrected in relation to the maintenance order.
Consistency Check for Product Location Combinations
Use
If you set this indicator, the system executes a consistency check for product location combinations. The consistency check for product location combinations includes:
· A check for the existence of a product location combination in the database and in liveCache
· A field comparison between product location combinations in the database and in liveCache
· The determination of obsolete entries for product location combinations in the database
· A check for the existence of characteristic value assignments for product location combinations in the database and in liveCache
· A field comparison of characteristic value assignments for product location combinations in the database and in liveCache
After the check you can call a correction function from the display of check results. For the correction, the system deletes obsolete product location combinations from the database and in liveCache. The system corrects inconsistent product location combinations and characteristic value assignments for product location combinations in liveCache.
Consistency Check for Stocks
Use
If you set this indicator, the system executes a consistency check for stocks. The consistency check for stocks includes:
· A check for the existence of a stock in the database and in liveCache
· A field comparison between stocks in the database and in liveCache
· The determination of obsolete entries for stocks in the database
· A check for the existence of characteristic value assignments in the database and in liveCache
· A field comparison between characteristic value assignments for batch stocks in the database and in liveCache
After the check, you can call a correction function from the display of check results. For the correction, the system attempts to correct inconsistent stocks in the database and in liveCache. If a correction is not possible, the stocks are deleted in the database and in liveCache. The system corrects characteristic value assignments for batch stocks in liveCache.
After inconsistent stocks have been corrected, it may be necessary to start the delta report in order to reconcile SAP APO and SAP R/3.
Consistency Comparison of Configuration/CDP for Orders
Use
When you set this indicator, the system performs a consistency check with regard to configuration or CDP (characteristic value assignments/ requirements) for receipts/requirements belonging to orders:
· In the case of products with variant configuration and product variants, the system checks whether there is a referenced configuration in the database.
· In the case of products with CDP, the system checks whether CDP characteristics exist.
Note
In the area Restrictions, you can use the indicator CDP: Detailed Check to define a detailed check for CDP characteristics. If you set this indicator, the CDP data used for the orders is also compared with the product master.
· For products without configuration/CDP, the system checks whether invalid references to variant configuration or invalid CDP characteristics data exist.
After the check, you can call a correction function in the check results display. When executing the correction, the system tries to adjust inconsistent orders in liveCache.
After inconsistent orders have been corrected, you may need to start the delta report to compare the SAP R/3 system and SAP APO again.
Dependencies
The consistency check for configuration or CDP data is very time-consuming. You should therefore limit the comparison as far as possible to certain products or locations.
Consistency Check for Production Campaigns
If orders are assigned to production campaigns that do not exist in the database, this leads to inconsistent campaigns.
You can correct inconsistent production campaigns by removing all orders from them. That means that the campaign assignments are removed from the orders in liveCache.
Consistency Check for Operations
In the database table /SAPAPO/OPR, operations may exist that have no orders in liveCache, no orders for a simulation version, orders for deleted simulation versions, or no external operation number. These operations place an unnecessary load on the database table and can hinder system performance.
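As a rough illustration of what such a cleanup has to detect, here is a hypothetical Python sketch; all field, function, and version names are invented for this example and are not real APO structures:

```python
# Hypothetical sketch of the orphan check described above. An operation
# record is flagged when it has no order in liveCache, belongs to a
# deleted simulation version, or has no external operation number.
# All field and function names are illustrative, not real APO code.

def find_orphan_operations(operations, livecache_orders, simulation_versions):
    """Return the IDs of operations that only burden the database table."""
    orphans = []
    for op in operations:
        no_order = op["order_id"] not in livecache_orders
        deleted_simversion = (op["simversion"] is not None
                              and op["simversion"] not in simulation_versions)
        no_ext_number = not op.get("ext_op_number")
        if no_order or deleted_simversion or no_ext_number:
            orphans.append(op["id"])
    return orphans

ops = [
    {"id": 1, "order_id": "O1", "simversion": None,   "ext_op_number": "0010"},
    {"id": 2, "order_id": "O9", "simversion": None,   "ext_op_number": "0010"},
    {"id": 3, "order_id": "O1", "simversion": "SIM2", "ext_op_number": "0020"},
    {"id": 4, "order_id": "O1", "simversion": None,   "ext_op_number": ""},
]
print(find_orphan_operations(ops, {"O1"}, {"SIM1"}))  # → [2, 3, 4]
```

Operation 2 has no liveCache order, operation 3 belongs to a deleted simulation version, and operation 4 lacks an external operation number, so all three are flagged for deletion.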
Consistency Check for Planning Matrices
As planning matrices are not master data, they are only located in liveCache. For each production version, there is a record in the database with information about matrices that must exist for this production version and whether the last matrix explosion was successful.
The consistency check for planning matrices checks:
· Whether the matrices associated with each record on the database exist in liveCache
· Whether the records associated with all the matrices in liveCache exist on the database
· Whether the last matrix explosion was successful.
If inconsistencies are discovered, they can be corrected. Because corrections are made by recalculating the inconsistent matrices, the process can take a while; for large matrices (those with many orders or many item variants) it should only be run at times when it is guaranteed not to hinder any other system processes.
Consistency Check for Simulation Versions
This is a check for whether simulation versions exist in liveCache.
Correction does not take place automatically. Simulation versions that still exist in the database but no longer exist in liveCache do not influence the running of the system. If necessary, they can be deleted using transaction /SAPAPO/CDPSS0.
Consistency Check for Product Allocations
The consistency check for product allocations checks the data for product allocation assignment from the database and compares this with the incoming orders quantity in Demand Planning. Surpluses or shortages are displayed and can be corrected.
The reconciliation is only executed for product allocation groups with a direct connection to the product allocation group in the planning area, and only if that connection is fully defined.
There may be long runtimes during the consistency check due to the data structure. The following factors can hinder performance:
· Number of characteristics combinations
· Number of periods in a time series
· Number of sales orders that take product allocations from a time stream
Error in the reconcile
If it is not possible to reconcile the incoming orders quantity, the data records are output again with a relevant error message. Check the following possible causes and try again.
Check:
· If the planning area to be checked is locked
· If the time streams are initialized (after liveCache has been initialized)
· If all characteristics combinations are available in the planning area
· The wildcard indicator for collective product allocation
· The settings for your planning area
Due Delivery Schedules/Confirmations Consistency Check
When a scheduling agreement release is received from a customer for sales and distribution scheduling agreement items, a due delivery schedule is created and stored in liveCache. As soon as a confirmation for a due delivery schedule containing at least one schedule line with a quantity larger than zero is generated, an object is also created for it in liveCache. The transaction data for sales and distribution scheduling agreement items contains, amongst other things, an entry with the key of the due delivery schedule object currently located in liveCache and an entry with the key of the confirmation that is currently valid in the database. During the check, the system checks whether liveCache objects exist for sales and distribution scheduling agreement items and whether the transaction data entries are correct.
The following individual checks are made for active sales and distribution scheduling agreement items:
· Is there an operative scheduling agreement release and/or forecast/planning delivery schedule in the database, but no associated liveCache object?
· Is there a confirmation in the database, but no associated liveCache object?
· Is there a due delivery schedule in liveCache, without at least one existing operative scheduling agreement release and/or forecast/planning delivery schedule?
· Is there a confirmation in liveCache, without an existing confirmation in the database?
· Does the key in the database transaction data that identifies the current due delivery schedule in liveCache match that of the actual liveCache object?
· Does a confirmation actually exist in the database for the key in the transaction data that identifies the currently valid confirmation in the database?
If a sales and distribution scheduling agreement item is inactive, there must not be any due delivery schedules or confirmations in liveCache. In this case, the following checks are made:
· Is there a due delivery schedule in liveCache for an inactive sales and distribution scheduling agreement item?
· Is there a confirmation in liveCache for an inactive sales and distribution scheduling agreement item?
Consistency Check for Production Backflushes
Partially confirmed orders cannot be deleted from liveCache. For each partially confirmed order of the database table, there must be a corresponding order in liveCache. If no order exists, there is a data inconsistency that can only be rectified by deleting the order from the database tables of the confirmation.
Entries for orders that have already been confirmed exist in the status matrix. The entry in an order's status matrix is deleted when the confirmed order is deleted by the /SAPAPO/PPC_ORD archiving report. Each status matrix entry for which no database tables of the confirmation exist presents an inconsistency that can only be removed by deleting the status matrix entry.
Consistency Check for iPPE Objects
The iPPE object is not an iPPE master data structure. It is a data extract that is generated for each iPPE access object.
The consistency check for the object checks that it exists in liveCache and also verifies that it is identical to the backup copy in the database. When the object is corrected, the copy from the database is written to liveCache.
It is necessary to check the object if the following error message occurs: 'Error while calling COM routines via application program' (/sapapo/om 102) with return code 1601, 1602, or 1603. This does not apply to liveCache initialization.
Consistency Check for Procurement Scheduling Agreement Items
The following three objects represent procurement scheduling agreement items (scheduling agreements for short):
1. Scheduling agreement schedule lines
2. Release schedule lines
3. Confirmations
All these objects are located in liveCache. Release schedule lines and confirmations are also located in the database with a historical record. Depending on the process that was set up for the scheduling agreement, not all objects exist in liveCache or have historical records.
If goods movements exist for an object, there must always be at least one entry in liveCache. If all schedule lines are covered by goods receipts, at least one schedule line will exist in liveCache with the number '0000000000' and an open quantity of 0.
A liveCache crash, operator errors, and program errors can all cause inconsistencies. Below is a list of all the inconsistent statuses that have been identified and that can be removed:
1. The object is not in liveCache but goods receipts exist.
2. The number of input nodes and output nodes is different.
3. There are no input nodes at the order, but the material exists in the source location for the order.
4. The original quantity at the source of an order is different from that at the destination.
5. The accumulated quantities in liveCache are different from those in the database (the cumulative received quantity, for example).
6. The set process is compared with the status in liveCache.
7. A check is made to see whether the scheduling agreement is being planned in APO or in R/3 and whether the schedule lines have the appropriate specification.
If an inconsistency is identified for a schedule line, no further checks are made for that line; instead, the system moves on to the next schedule line.
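The check flow above — run the individual checks in sequence per schedule line, and short-circuit to the next line at the first inconsistency — can be sketched as follows. The check names and fields are illustrative only, not the actual liveCache checks:

```python
# Sketch of the per-schedule-line check flow described above: the
# individual checks run in sequence, and the first inconsistency found
# stops further checks for that line (the next line is then examined).
# Check names and record fields are illustrative only.

def check_schedule_lines(lines, checks):
    """Return {line_id: name of first failed check} for inconsistent lines."""
    findings = {}
    for line in lines:
        for name, check in checks:
            if not check(line):
                findings[line["id"]] = name   # record and short-circuit
                break
    return findings

checks = [
    ("exists_in_livecache", lambda l: l["in_livecache"]),
    ("quantities_match",    lambda l: l["db_qty"] == l["lc_qty"]),
]
lines = [
    {"id": 10, "in_livecache": True,  "db_qty": 5, "lc_qty": 5},
    {"id": 20, "in_livecache": False, "db_qty": 5, "lc_qty": 0},
    {"id": 30, "in_livecache": True,  "db_qty": 7, "lc_qty": 3},
]
print(check_schedule_lines(lines, checks))
# → {20: 'exists_in_livecache', 30: 'quantities_match'}
```

Line 20 fails the existence check first, so its quantity check never runs — mirroring the behavior where only the first inconsistency per schedule line is reported.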
Consistency Check for MSP Orders
Provides a list of maintenance and slot orders that
·     Exist in the database but have no corresponding orders in the liveCache
·     Exist in the liveCache, but have no corresponding orders in the database
Procedure
From within the list, you can either
·     Correct the inconsistencies
If you choose to do this, the system deletes the selected orders from the database and/or liveCache.
You receive a message that the selected order(s) have been deleted.
·     Leave the inconsistencies in the database and/or liveCache
Such inconsistencies place an unnecessary load on the database and/or liveCache.  Moreover, orders that exist in the liveCache but have no corresponding orders in the database influence the scheduling results of subsequent orders in the liveCache.
Hope this helps
Regards
Kumar Ayyagari

Similar Messages

  • DP Consistency Checks: OM17 vs. TS_LCM_CONS_CHECK?

    SCM 5.0 Demand Planning:
    What exactly does the 'DP Time Series' check do in transaction /SAPAPO/OM17? I cannot find any specific SAP documentation on this particular consistency check.
    What is the difference between the 'DP Time Series' check in transaction /SAPAPO/OM17, and the check in report /SAPAPO/TS_LCM_CONS_CHECK (transaction /SAPAPO/TSCONS)?  The check in OM17 reported inconsistencies that were not reported in the TS_LCM_CONS_CHECK.

    OM17 checks the database tables and liveCache; if they are not in sync, it reports the differences as inconsistencies, and you can then bring them back in sync.
    The consistency check for planning areas (/SAPAPO/TSCONS) checks the time series of a planning area and tries to fix them.
    Changes in a planning area can result in inconsistencies in subordinate objects. These functions let you check the consistency of the data in various areas.
    Features
    You can check the consistency of the following objects from planning area maintenance:
    ·        Notes table
    ·        Selection table
    ·        The planning books that use the planning area
    ·        The time series in LiveCache
    Check below link for further details:
    http://help.sap.com/saphelp_scm41/helpdata/EN/2b/e9346e13b9ef44bddd49ee419c11fd/frameset.htm

  • The time characteristic consistency check has produced an error (DBMAN 62)

    Hi
    We are getting an error while updating to one target: "The time characteristic consistency check has produced an error".
    * Checked the respective PSA data field  which is fine. Dates are in correct format.
    * RSRV ran for the particular target. It had error but the jobs were successful till yesterday.
    * Data from 01.05.2014 is having the problem ( Which I got from the record number in PSA )
    NB: We are running on HANA database but the respective target is not yet HANA Optimised
    Details Figures Attached
    Any Idea.. What could  have gone wrong
    Regards
    Reshoi R

    Hi
    Record Number 2995 is shown in the figures attached.
    I have already carried out the deletion and automatic repair mentioned in RSCV061 even before posting this question, but to no avail (I have shown that also in the figures attached at posting time).
    Reporting is fine. The problem is while loading the data to the cube. The loading is fine up to 30.04.2014, but from 01.05.2014 onwards the records error out while updating.
    The cube is partitioned from month 01.2008 to 12.2015.
    Regards
    Reshoi R

  • SQL SERVER 2014 - Project consistency check failed. The following inconsistencies were detected: package.dtsx. has a different ProtectionLevel than the project.

    I am getting the following error when I right click on a package in solution explorer and execute using SQL Server 2014:
    Project consistency check failed. The following inconsistencies were detected: package.dtsx. has a different ProtectionLevel than the project.
    Luckily my solution only has 10 packages.  I have verified and re-verified that every one is set to DontSaveSensitive.
    I have verified and re-verified that the solution has the protection level to dontsavesensitive.
    What on earth is going on here?  I am using windows authentication and have no need for the package to save any sensitive information, nor should there be any.
    Can someone please help me out here?
    The Degenerate Dimension

    Hi MMiligan,
    Just as the error message says, the exact issue is that the package package.dtsx has a different ProtectionLevel property value than the project.
    So please double check the value of project ProtectionLevel property, like DontSaveSensitive, then make sure all packages in the project use the same ProtectionLevel value.
    If there is still an error message indicating that some packages use a different value, please right-click the indicated package, select View Code, and then make sure there is a line like the one below (the DontSaveSensitive protection level corresponds to the value 0):
    DTS:ProtectionLevel="0"
    The following similar thread is for your reference:
    http://stackoverflow.com/questions/18705161/protection-level-changed-mid-project-now-project-wont-build
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Fileserver consistency checks failing 0x809909B0

    I have a file server with approx. 1.7TB of files which became inconsistent on the 8th of November, and has been unable to complete a consistency check since then.
    Consistency checks fail with the error:
    The protection agent on <server> was temporarily unable to respond because it was in an unexpected state. (ID 60 Details: Internal error code: 0x809909B0)
    I have removed and re-added the server, including deleting the protected data from DPM disk, and reinstalled the DPM agent on the file server, but I still receive the above error when consistency checking. 
    DPM is System Center 2012 R2 v4.2.1254.0 on Windows Server 2012 R2. 
    The file server is running Windows Server 2012 R2 also. 
    KB2919355 is installed on both systems.
    I did have Dedup running on the file server, but have un-deduped the drive as part of troubleshooting this issue.
    Other servers backed up by this DPM server are being backed up fine, including a HyperV cluster, 2 exchange servers and 2 sharepoint farms.
    There are a large number of these errors in the DPMRA log around the time the job fails:
    5288 4F48 11/28 22:49:56.601 29 radefaultsubtask.cpp(355) [00000000013EE9D0] 50C1500F-0353-4D2A-8188-FC03612E2503 WARNING Failed: Hr: = [0x809909b0] : CRADefaultSubTask: WorkitemID does not exist, {EB6F4726-E5C5-4958-B034-542518E5A931}
    5288 4F48 11/28 22:49:56.601 05 defaultsubtask.cpp(546) [00000000013EE9D0] 50C1500F-0353-4D2A-8188-FC03612E2503 WARNING Failed: Hr: = [0x809909b0] : Encountered Failure: : lVal : CommandReceivedSpecific(pCommand, pOvl)
    5288 4F48 11/28 22:49:56.601 05 defaultsubtask.cpp(751) [00000000013EE9D0] 50C1500F-0353-4D2A-8188-FC03612E2503 WARNING Failed: Hr: = [0x809909b0] : Encountered Failure: : lVal : CommandReceived(pAgentOvl)
    5288 4F48 11/28 22:50:03.635 03 runtime.cpp(1376) [00000000012F4850] 50C1500F-0353-4D2A-8188-FC03612E2503 FATAL Subtask failure, sending status response XML=[<?xml version="1.0"?>
    5288 4F48 11/28 22:50:03.635 03 runtime.cpp(1376) [00000000012F4850] 50C1500F-0353-4D2A-8188-FC03612E2503 FATAL <Status xmlns="http://schemas.microsoft.com/2003/dls/StatusMessages.xsd" StatusCode="-2137454160" Reason="Error" CommandID="RACancelAllSubTasks" CommandInstanceID="8a514c9b-6a62-42b4-a03b-0f938e3f20cc" GuidWorkItem="eb6f4726-e5c5-4958-b034-542518e5a931" TETaskInstanceID="50c1500f-0353-4d2a-8188-fc03612e2503"><ErrorInfo xmlns="http://schemas.microsoft.com/2003/dls/GenericAgentStatus.xsd" ErrorCode="536872913" DetailedCode="-2137454160" DetailedSource="2"><Parameter Name="AgentTargetServer" Value="DPM01.domain.com"/></ErrorInfo></Status>
    5288 4F48 11/28 22:50:03.635 03 runtime.cpp(1376) [00000000012F4850] 50C1500F-0353-4D2A-8188-FC03612E2503 FATAL ]
    I have also tried the steps in this article with no change:
    https://social.technet.microsoft.com/Forums/en-US/107bdce0-e13c-427a-9341-07d4b558f007/dpmra-crashing-after-update-to-dpm-2012-r2-ur2
    Currently I have reduced the amount of shares being backed up to just ~400gb of data and I have it in a consistent state, but I need to get the remainder of the data backed up, and I can’t be in a situation where if it becomes inconsistent it can’t run a
    successful consistency check again.

    Hi,
    If you want to pursue this immediately, you can visit the following web site to open a support incident. The charge to your credit card will not be processed until your case is resolved and closed; if it's a code defect in DPM, then we waive all charges.
    http://support.microsoft.com/select/Default.aspx?target=assistance
    In the Quick product finder, enter:System Center 2012 R2  - then select System Center 2012 R2 Data Protection Manager
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • Recovery points and consistency checks are failing for one disk on file server. Access is denied (0x80070005)

     Good day! I've built a new DPM 2012 R2 server and migrated or re-created jobs from my older 2012 SP1 DPM servers as trying to upgrade them was not working out. The DPM R2 server is running great besides this one issue. The problem is with a file
    and print server at a remote office. Since it has around 750 GB of user data on it I chose to do manual replica creation on the data disk, (E:) but created the C:\ drive replica normally, over the WAN. The manual replica creation was successful, and the initial
    consistency check and recovery point creation worked as expected, however, since then it will not create another RP or run a CC on the E:\ drive. Recovery points on the C:\ drive are being created properly. When I try to run a CC I get this error: "An
    unexpected error occurred while the job was running. (ID 104 Details: Access is denied (0x80070005))" I've tried to drop protection on the two directories of the E:\ drive (retaining data) and re-create, but that didn't help. I've even done a recovery
    from the initial RP to verify that it was created properly, and that worked as well.

    Hi,
    The Access Denied error is usually caused by antivirus software.  Please disable real-time scanning of the DPMRA process on the DPM server. 
    928840 You receive job status failure messages in Data Protection Manager
    http://support.microsoft.com/default.aspx?scid=kb;EN-US;928840
    For DPM 2012 the path has changed.
    To resolve this problem, configure the antivirus program to exclude real-time monitoring of the Dpmra.exe program on the Data Protection Manager server. By default, the Dpmra.exe file is located in the following folder:
    %ProgramFiles%\Microsoft System Center 2012\DPM\DPM\Bin

  • SELS error during consistency checks. Fieldname not found in sender

    I'm stuck in the System Analysis section, after starting the programs for Generation and Receiver settings. The task "Execute Consistency Checks" returns an error in PC001_PCL_CHECK.
    Check SELS started at 16.06.2011 14:38:31
    Errors detected for conversion object A910 :
    A910:Fieldname VKORG of structure S_A910 mentioned in selref not found in sender
    Check SELS finished at 16.06.2011 14:39:00
    The help says:
    The selection field mentioned for the reduction of this conversion object does not exist in the sender system. Thus this selection field can not be used for reduction. Please remove the assignment of this conversion object from reduction. For further assistance please contact SAP support.
    My question is simply HOW? Where can I remove this object? I don't even find a structure called S_A910 in any of my systems. I have table A910 but it doesn't have a field called VKORG in it at all... I'm somewhat lost here.
    I've added the latest SPs (17) and also applied all the notes. At least I think I applied them all... I repeated the step several times, no luck.
    Any ideas?
    Thanks in advance
    Stefan

    Hi Sandeep,
    Many thanks for your reply. The note hasn't fixed the issue so I'm still stuck with the exact same error. I did run it as described and while the report updated a few fields, it didn't help to eliminate the error. Reading the note it would appear that the error described there is different from what I'm facing.
    > error message like: "TVKOV: member ID_VKORG of group G_VK has no selref field!"
    I am getting "Fieldname VKORG of structure S_A910 mentioned in selref not found in sender", so to me it looks like a different issue.
    Oh well... Nothing like a bit of fun.

  • Report Group 1SIP Consistency Check

    SAP recommends the following consistency check for CO customers before a system copy:
    "CO customers: An additional consistency check can be performed by running the report group 1SIP before the system copy in the source system, as well as after the copy in the target system, and then comparing the results. No customer data may be changed in the meantime."
    I have the following question:
    1. By "Running the Report Group" does SAP mean running the individual reports under the Report Group 1SIP?
    Edited by: Asim M on May 23, 2008 6:49 PM

    Hi,
    Yes, that is what is meant, i.e. report 1SIP-001 or TA S_ALR_87013611. In this case report group and report are synonymous as there is only one report assigned to the 1SIP report group (customer reports should never be assigned to SAP standard report groups). Therefore running 1SIP via TA GR55 results in the same output.
    Regards
    Karl

  • Importance of Global Consistency check

    Hello,
    I have always checked my rpd for global consistency without knowing the actual meaning behind it. But last night I created a logical column with the following expression:
    max(VALUEOF(NQ_SESSION......))
    Basically, I created an aggregation over a logical column that obtains its value from a session variable. I know that if we need to use a column as an aggregation column we should use the aggregation tab in the column properties. When we choose an aggregation, it disables the editable column formula field.
    I put the above formula which violates the rule. The result is perfect so long as I don't check for global consistency. It throws an error that looks something like this:
    [38083] The Attribute 'Acceptance Rate Target' defines a measure using an obsolete method.
    The question is: what is the significance of the global consistency check, what are the consistency criteria, and is it OK to save the rpd without checking for global consistency? (This does not cause the BI Server to crash when trying to start.)
    Thanks

    First, forget about the variable approach.
    Now you need to do the following steps:
    1) Import the table to the physical layer
    2) Create logical table in existing Business Model in BMM layer with the imported table as logical table source
    3) Create another logical table in existing Business Model in BMM layer with the imported table as logical table source
    4) Create a complex join between both tables; now you will get one logical dimension table and one logical fact table
    5) In the logical fact table you need to select the column ("Target") and add an aggregation rule, like MAX or MIN
    6) Assuming you have a hierarchy for every dimension in your BMM layer, you need to set the logical levels of the new measure to the Grand Total Level of each dimension hierarchy.
    By doing this, you get a "level based measure", for more info: read this:
    http://oraclebizint.wordpress.com/2007/12/03/oracle-bi-ee-101332-level-based-measures-lbms/
    By setting all logical levels to the grand total level, the measure will be "immune" for all dimensions used in your report.
    So when you have a report like
    Month__Actual___Target
    The BI Server will create two queries:
    select month, sum(sales) from calendar, sales_table where calendar.id = sales_table.calendar_id group by month
    and
    select max(value) from target_table
    The BI Server will then stitch both results together.
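    Why the target survives any report grain can be seen with a toy version of that stitch. This is a sketch with made-up numbers, not actual BI Server internals; `monthly_actuals` and `target` are hypothetical stand-ins for the two query results:

```python
# Sketch (made-up data, not BI Server internals): query 1 returns one row per
# month; query 2 returns a single grand-total value. Because the target measure
# is pinned to the Grand Total level of every dimension, "stitching" just
# attaches that one value to each row of the first result.

monthly_actuals = [("Jan", 100), ("Feb", 120), ("Mar", 90)]  # result of query 1
target = 110                                                 # result of query 2

stitched = [(month, actual, target) for month, actual in monthly_actuals]
print(stitched)  # [('Jan', 100, 110), ('Feb', 120, 110), ('Mar', 90, 110)]
```

    The target column repeats unchanged on every row, which is exactly the "immune to all dimensions" behavior described above.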
    Regards,
    Stijn

  • Repository Consistency Check 39008 "does not join"?

    I'm using Administration Tool 11.1.1.6.0 with a Repository version of 318.
    I have imported my star schema metadata from the database using an OCI connection. All the joins were included, so I can go to Physical->Fact Table->Physical Diagram->Object(s) and Direct Joins and it shows my fact table linked to all my dimension tables. I then clicked-and-dragged my schema to the Business layer. I created my dimension by right-clicking on my logical tables in the Business layer and choosing Create Logical Dimension -> Dimension with Level-Based Hierarchy. This worked for all the dimensions that had only one level (a base level and a grand total level), but resulted in some odd errors when done for dimensions with more than one level. I got around these errors by manually creating these dimensions, clicking-and-dragging the logical columns in, and setting up the keys.
    Only now, when I do a consistency check, I get three of the following warnings, one for each dimension that has more than one level:
    WARNINGS:
    Business Model [Business Model]:
    [39008] Logical dimension table [Logical Table] has a source [Physical Table?] that does not join to any fact source.
    At least, I think it is referring to the Physical Table, but changing the name of the Physical Table doesn't change the error message, while changing the Logical Table name does, so I'm not really sure what it refers to. Here is what one looks like precisely:
    [39008] Logical dimension table Time has a source TIME that does not join to any fact source.
    Now, each of the three multi-level dimensions has a base level with a key that is present in the Fact Table. I can even right-click the Fact Table in the Business Layer, go to Business Model Diagram or Physical Model Diagram, and get a diagram of my fact table linked to all of its dimensions, including the three in question. Analyses in OBIEE work so long as I don't use those three dimensions.
    Does anybody have any idea what I'm missing here?

    Thanks, it looks like that field was left blank for those three logical dimensions for some reason. So it was the Logical Dimensions that weren't joining to the Fact Table, rather than the Logical Tables?

  • Security, Consistency Check, Physical Model

    Hi experts,
    just THREE questions:
    1- Can the following object-level security configurations be done in the BI Administration Tool: "Assign users to a web catalog group" and "Create a web catalog group"?
    2- Can a consistency check be performed over a selection of multiple dimension hierarchies?
    3- Is it possible to modify the physical model of a repository to add a derived metric column, and define this column as a value like "dollars divided by units"?
    I think it's possible only through database manipulation and not the Administration Tool. Is that right?
    Thanks a lot.
    Huge

    Huge10 wrote:
    Hi experts,
    just THREE questions:
    1- the following object-level security configurations: "Assign users to a web catalog group" and "Create a Web catalog group" could be done in the BI Administration Tool? No.
    2- a consistency check can be performed over a selection of multiple dimension hierarchies? Yes.
    >
    3- is it possible modify the physical model of a repository for add a derived metric column, and define this column as a value like "dollar divided by units"?
    I think it's possible only through database manipulation and not Administration Tool. Is it right???
    You can do it in the BMM layer as a logical column, without manipulating the DB.

  • Consistency checks in APO

    Can anybody please explain the reasons for doing the
    1.Model consistency check
    2. Re-org Consistency check(/SAPAPO/TS_LCM_REORG)
    What would be the frequency for these consistency checks?

    Okay I'll try and keep the answer simple:
    1) The model consistency check gives an overview of the (in)completeness of the master data objects within the model concerned. Missing data (for example, location coordinates left initial) is reported via error, warning, or information messages, and can be checked and, if necessary, corrected from the log. It should be run when maintaining master data or after activating integration models (master data transfer).
    2) For time series, memory consumption in liveCache grows because superfluous time series remain when the master data carrying the time series information is deleted or removed from the planning version model. This report removes that superfluous data and tidies up liveCache. It should be run daily.
    For more info on consistency checks (mostly SNP and related) see the following document:
    [Internal and External Consistency Checks|https://websmp203.sap-ag.de/~form/sapnet?_FRAME=CONTAINER&_OBJECT=011000358700000330202007E]
    Regards
    Ian

  • Spool Consistency Check

    Hello,
    We are seeing spool consistency error messages in SM21. We then found that no spool consistency jobs were running in our system. Of late, I have started running RSTS0020 three times daily, with an interval of one hour between runs, and I delete only the objects that appear in all three runs.
    I am trying to automate this process. What job should I schedule to perform the same task? One more question that haunts me: even if I schedule the report to run at midnight every day, how can I automatically delete the inconsistent objects shown in the report? And don't I still need to schedule the job three times, one hour apart? I am not sure how to go about this. Please help.
    Thank you.

    Thank you for responding, but I am already doing exactly that. I am running RSTS0020 in SE38, which is the same as the SP12 consistency check, because SP12 runs RSTS0020 in the background. My question was different; here it is again.
    As per note 48400, you have to run RSTS0020 three times with an interval of one hour between runs. Currently I do this manually: each run outputs a list of inconsistent objects, so I take a screenshot of all three runs, compare them, and then select the checkboxes to delete only the objects that appear in all three runs.
    I know RSTS0020 can be run in the background, but I don't know how to perform the latter part of the task: when I execute RSTS0020 in the background, how can I delete the inconsistent objects? Any help is highly appreciated.
    Thank you.
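    For the comparison step itself, "appears in all three runs" is just a set intersection. A minimal sketch with made-up spool object IDs (this is not SAP functionality; actually deleting the objects would still have to happen via RSTS0020/SP12):

```python
# Sketch (not SAP functionality): emulate the manual comparison step --
# keep only the objects reported inconsistent in ALL RSTS0020 runs.

def objects_to_delete(runs):
    """runs: list of iterables of object IDs, one per RSTS0020 run.
    Returns the sorted IDs common to every run."""
    sets = [set(r) for r in runs]
    return sorted(set.intersection(*sets))

# Made-up object IDs for three runs:
run1 = ["SPOOL0001", "SPOOL0002", "SPOOL0003"]
run2 = ["SPOOL0002", "SPOOL0003", "SPOOL0004"]
run3 = ["SPOOL0003", "SPOOL0002"]
print(objects_to_delete([run1, run2, run3]))  # ['SPOOL0002', 'SPOOL0003']
```

    Objects that appear in only one or two runs are skipped, mirroring the manual screenshot comparison: a transiently busy spool object should not be deleted.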

  • Connection dropped between DPM and Agent during consistency check, following DPM 2012 SP1 to DPM 2012 R2

    Recently I updated a DPM 2012 SP1 standalone server to DPM 2012 R2. That in itself was a challenge... Following the upgrade, most of the protection groups required consistency checks. For some time I have been having problems performing these consistency checks on remote servers over a 10 Mbit link.
    On the DPM server I would kick off a ping to the remote host, then start a consistency check. Within a few seconds I would start to get dropped pings and very large ping response times, 3000 ms plus. Within 3 minutes the pings would time out completely and no connections could be made to the remote server (RDP etc.). However, any other server on the network could still connect to the remote client. Eventually the DPM job times out, and some time later the pings also return.
    The resolution to this has taken some digging but this is what I have found. The DPM agent update does not seem to fully work and requires a reinstall. Here is my process:
    Logon to remote client server
    Install .net 4 (a raw install of the client requires this, may not be needed when done via DPM)
    Via Add Remove Programs, remove DPM agent
    Clean up Reg
    Delete "HKLM\SOFTWARE\Microsoft\Microsoft Data Protection Manager"
    Delete "HKLM\System\CurrentControlSet\Services\DPM**" (However this key is not there for me)
    Back onto DPM Server
    Modified protection groups as required to remove all instances of the client server
    Remove Agent for client server
    Install agent via DPM (watch for the glitch right at the end where it does a failed, then succeed…)
    REBOOT client server
    Add back into protection group
    Consistency check kicked off automatically
    This seems to have fixed the issue.

    Hi,
    The prerequisite for DPM 2012 SP1 is a re-release of UR3, version 4.1.3317.0 - please download and install the update specified.
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • DPM 2010 - Consistency checks stops after 23 hours

    When running a consistency check on a very large number of files, the check repeatedly stops after 23-24 hours.
    How do I solve this, so that I get a full consistency check and a good replica?
    Please visit http://www.bleumer.eu

    Hi,
    OK - I'm not aware of any known DPMRA crash issues for DPM 2010 UR7 - so this will need to be investigated by getting the crash stack and possibly a process dump.  Please open a support ticket for investigation.
    In the meantime - run a chkdsk /f against the DPM replica volume to be sure we're not crashing due to a file system problem.

Maybe you are looking for

  • How can I design a BPM with mulitiple IDOC types as sender

    Hi Experts,   I am using PI 7.1   I have a senario that I have to design a BPM in such a way that I will get different IDOC types as sender while the receive step has to trigger the the respective IDOC types at run time. For example: when matmas is t

  • How to set a Japplet as parent for a JDialog ?

    Hi, I am using netbeans IDE 6.0 for swing development. I have a web application where i have main GUI as JApplet, In this I am calling a custom JDialog on a button click. Code for main GUI looks like, import javax.swing.*; import javax.swing.event.*;

  • More than one ipod

    I have an ipod video and nano and is there anyway to use both with the same itunes program on my laptop? my goal is for them to have access to the same music with out having to go in between two programs or importing music/cd's twice.

  • Colon in index marker text

    I have a mif file from which I need to create a 1 level index. I have numerous data which contains a colon; so the MText tag looks something like this - <MText `Accredited: Representative'> Now, I know that the function of a colon is "Separates level

  • Anyone successfully got Prod. BOM components printing on AR Docs?

    Hi All, Has anyone successfully got Production BOM components to print out in their AR documents in a similar way to a Sales BOM using either Crystal or PLD? (price of components is irrelevant, quantity of components is irrelevant as well, we just ne