Remove Incidents in Data Warehouse DB
Hi
Is there a way to delete Incident/Change items from the Data Warehouse DB so "test" work items don't show up in reports?
I know there's no supported way of doing this, except re-installing the DW DB. But I don't want to wipe the history just to remove a couple of Incident/Change items.
If you delete your work items in Service Manager, the DW will have the DeletedDate field populated and all the relationship facts will have IsDeleted = 1. I am sure you have filters on your reports to exclude deleted items. Here is a link describing how to delete an object from SCSM:
http://blogs.technet.com/b/servicemanager/archive/2011/07/13/using-smlets-beta-3-post-9-deleting-objects.aspx
Similar Messages
-
Reinstall Data Warehouse to remove test data
We want to go live with SCSM soon, hopefully in a couple of weeks.
Unfortunately, we don't have the infrastructure that would have let me set up separate test and production environments. I've managed to remove the test data from the ServiceManager database via the SMLets, but we want to remove the test data from the data warehouse as well. I've attempted manually deleting from the database (unsupported, I know), but this did not work and just produced many database errors (as expected).
What I'm wondering is whether it's possible to uninstall/reinstall the DW. Upon reinstall, I know I would likely need to re-register it with SCSM.
In doing this, will all of the custom classifications/statuses/templates/etc. stay in place?
Once installed, would all of the sync jobs pick up as expected and sync the current data in Service Manager (no incidents/changes/etc.) to the DW?
We have all databases/Service Manager/Portal running on one server. DW running on a 2nd server.
If anyone could provide some insight around this, it would be much appreciated. Any direction towards documentation would also be great!
Thanks!
The best scenario (and my recommendation) would be to export and take a backup of all your management packs, reinstall the entire Service Manager environment and import them again. That way you should get a fresh, functional Service Manager and all your settings would be retained.
However, to answer your question, you would need to:
- Unregister with SCSM DW
- Uninstall DW
- Delete the three DW databases
- Install DW
- Register with DW
This would not affect any settings in your SCSM environment and you would have an empty DW.
Regards
//Anders
Anders Asp | Lumagate | www.lumagate.com | Sweden -
View All Incidents Active on Specific Date Data Warehouse
Hi
I'm looking to find out if it is possible to view all incidents that were active on a specific date within the data warehouse. Obviously there are the created, resolved and closed dates; however, filtering on the created date with a specific date only shows incidents created on that date, which isn't the information I'm after.
What I am interested in is how many active incidents did I have at the time of the data warehouse processing and how I would go about that. Obviously it is possible to get this off the live database but it would be good to get this out of the data warehouse.
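For what it's worth, an "active on date X" snapshot is usually a range test rather than an equality filter. A sketch against the DWDataMart, assuming IncidentDimvw exposes CreatedDate and ResolvedDate columns (check the view definition in your version):

```sql
-- Incidents that were open (created but not yet resolved) as of a given date.
DECLARE @AsOfDate datetime = '2013-03-01';  -- hypothetical snapshot date

SELECT COUNT(*) AS ActiveIncidents
FROM [DWDataMart].[dbo].[IncidentDimvw]
WHERE CreatedDate <= @AsOfDate
  AND (ResolvedDate IS NULL OR ResolvedDate > @AsOfDate);
```

Note this still only reflects what the DW had synced at processing time; it is not a substitute for the live database.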
Thanks!
Actually, it turns out the reason none of the data was making sense was that the data in the cube was completely wrong. Despite the DWDataMart IncidentDimvw showing the correct information, the cube was out by a long way (the cube said over 5,000 active incidents whereas the DWDataMart had 17), so I'm guessing there was a problem in processing somewhere along the line. Not quite sure what would cause the cube to be so inaccurate, though, which is perhaps more concerning. -
I'm installing the Data Warehouse component on a new server (Server 2008 R2 SP1 with SQL 2012 SP1 CU6). Reporting Services, Analysis Services and Full-Text services are installed. I'm installing under an account that has DBO rights to the instance.
All pre-reqs pass. The installation gets to the very end and I get the following error:
An error occurred while executing a custom action:_AssignSdkAccountAsSsrsPublisher
This upgrade attempt has failed before permanent modifications were made. Upgrade has successfully rolled back to the Original state of the system. Once the correction are made, you can retry upgrade for this role.
I looked in the setup logs and saw only the following reference to "AssignSdkAccountAsSsrsPublisher":
MSI (s) (1C:00) [15:09:51:914]: NOTE: custom action _AssignSdkAccountAsSsrsPublisher unexpectedly closed the hInstall handle (type MSIHANDLE) provided to it. The custom action should be fixed to not close that handle.
CustomAction _AssignSdkAccountAsSsrsPublisher returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox)
Originally, I had some issues with services not being started during the install because some accounts didn't have "logon as a service" rights, but those have all been rectified. Not sure if something is left over from that, or maybe CU6 wasn't tested with SM R2 and I need to go back to an earlier CU?
That worked for me too. Manually removing all the folders from SSRS and then retrying the install allowed it to proceed and complete successfully the second time around.
The first time I installed SCSM 2012 R2, I got hit with this error, which is why SSRS wasn't in a clean state. Make sure to launch setup.exe elevated as an admin! -
Configuration Dataset = 90% of Data Warehouse - Event Errors 31552
Hi All,
I'm currently running SCOM 2012 R2 and have recently had some problems with the Data Warehouse data sync. We currently have around 800 servers in our production environment and no network devices, and we use Orchestrator for integration with our call-logging system; I believe this is where our problems started. We had a runbook which got itself into a loop and was constantly updating alerts, and it also contributed to a large number of state changes. We have resolved that problem now, but I then started to receive alerts saying SCOM couldn't sync alert data, under event 31552.
Failed to store data in the Data Warehouse.
Exception 'SqlException': Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
One or more workflows were affected by this.
Workflow name: Microsoft.SystemCenter.DataWarehouse.StandardDataSetMaintenance
Instance name: Alert data set
Instance ID: XX
Management group: XX
I have been researching problems with syncing alert data, and came across the queries to manually run the database maintenance. I ran that on the alert instance and it took around 16.5 hours on the first night; it then ran fast (2 seconds) for most of the day, but at about the same time the next day it took another 9.5 hours, so I'm not sure why it's giving different results.
Initially it appeared all of our datasets were out of sync; after the first night all appear to be in sync bar the hourly Performance dataset, which still has around 161 OutstandingAggregations. When I run the maintenance on Performance it doesn't appear to be fixing it (it runs in about 2 seconds, successfully).
I recently ran DWDatarp on the database to see how the Alert Dataset was looking and to my surprise I found that the Configuration Dataset has blown out to take up 90% of the DataWarehouse, table below. Does anyone have any ideas on what might cause this
or how I can fix it?
Dataset name Aggregation name Max Age Current Size, Kb
Alert data set Raw data 400 132,224 ( 0%)
Client Monitoring data set Raw data 30 0 ( 0%)
Client Monitoring data set Daily aggregations 400 16 ( 0%)
Configuration dataset Raw data 400 683,981,456 ( 90%)
Event data set Raw data 100 17,971,872 ( 2%)
Performance data set Raw data 10 4,937,536 ( 1%)
Performance data set Hourly aggregations 400 28,487,376 ( 4%)
Performance data set Daily aggregations 400 1,302,368 ( 0%)
State data set Raw data 180 296,392 ( 0%)
State data set Hourly aggregations 400 17,752,280 ( 2%)
State data set Daily aggregations 400 1,094,240 ( 0%)
Microsoft.Exchange.2010.Dataset.AlertImpact Raw data 7 0 ( 0%)
Microsoft.Exchange.2010.Dataset.AlertImpact Hourly aggregations 3 0 ( 0%)
Microsoft.Exchange.2010.Dataset.AlertImpact Daily aggregations 182 0 ( 0%)
Microsoft.Exchange.2010.Reports.Dataset.Availability Raw data 400 176 ( 0%)
Microsoft.Exchange.2010.Reports.Dataset.Availability Daily aggregations 400 0 ( 0%)
Microsoft.Exchange.2010.Reports.Dataset.TenantMapping Raw data 7 0 ( 0%)
Microsoft.Exchange.2010.Reports.Dataset.TenantMapping Daily aggregations 400 0 ( 0%)
Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data Raw data 3 84,864 ( 0%)
Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data Hourly aggregations 7 407,416 ( 0%)
Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data Daily aggregations 182 143,128 ( 0%)
Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data Raw data 7 6,088 ( 0%)
Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data Hourly aggregations 31 20,056 ( 0%)
Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data Daily aggregations 182 3,720 ( 0%)
I have one other 31553 event showing up on one of the management servers, as follows:
Data was written to the Data Warehouse staging area but processing failed on one of the subsequent operations.
Exception 'SqlException': Sql execution failed. Error 2627, Level 14, State 1, Procedure ManagedEntityChange, Line 368, Message: Violation of UNIQUE KEY constraint 'UN_ManagedEntityProperty_ManagedEntityRowIdFromDAteTime'. Cannot insert duplicate key in
object 'dbo.ManagedEntityProperty'. The duplicate key value is (263, Aug 26 2013 6:02AM).
One or more workflows were affected by this.
Workflow name: Microsoft.SystemCenter.DataWarehouse.Synchronization.ManagedEntity
Instance name: XX
Instance ID: XX
Management group: XX
which from my readings means I'm likely in for an MS support call... :( But I just wanted to see if anyone has any information about the Configuration dataset, as I couldn't find much in my searching.
Hi All,
The results of the MS support call were as follows. I don't recommend doing these steps without an MS support case (any damage you do is your own fault); these particular actions resolved our problems:
1. Regarding the Configuration Dataset being so large.
This was caused by our AlertStage table, which was also very large. We truncated the AlertStage table and ran the maintenance tasks manually to clear this up. As I didn't require any of the alerts sitting in the AlertStage table, we simply did a straight truncation of the table. The document linked by MHG above shows the process of doing a backup and restore of the AlertStage table if you need to keep it. It took a few days of running maintenance tasks to resolve this problem properly. As soon as the truncation had taken place, the Configuration dataset dropped in size to less than a gig.
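For context, the truncation described above amounts to something like the following; the staging table name is taken from common SCOM DW layouts and should be verified in your database first, and this kind of direct edit is unsupported outside an MS support case:

```sql
-- Throw away all staged alert rows (they cannot be recovered afterwards),
-- then let the standard dataset maintenance catch up over the next runs.
USE OperationsManagerDW;
TRUNCATE TABLE Alert.AlertStage;  -- schema/table name may differ by version
-- Maintenance can then be invoked manually, e.g.:
-- EXEC StandardDatasetMaintenance @DatasetId = '<alert dataset GUID>';
```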
2. Error 31553 Duplicate Key Error
This was a problem with duplicate keys in the ManagedEntityProperty table. We identified rows which had duplicate information, which could be gathered from the Events being logged on the Management Server.
We then updated a few of these rows to have a slightly different time from what was already in the database. We noticed that the event kept logging with a different row each time we updated the previous row. We ran the following query to find out how many rows actually had duplicates:
select * from ManagedEntityProperty mep
inner join ManagedEntity me on mep.ManagedEntityRowId = me.ManagedEntityRowId
inner join ManagedEntityStage mes on mes.ManagedEntityGuid = me.ManagedEntityGuid
where mes.ChangeDateTime = mep.FromDateTime
order by mep.ManagedEntityRowId
This returned over 25,000 duplicate rows. Rather than replace the times for all the rows, we removed all duplicates from the database. (Best to have MS check this one out for you if you have a lot of data.)
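For illustration only, the removal can be expressed as a DELETE over the same join used in the SELECT above; as the poster says, have MS check this for you, and take a backup first:

```sql
-- Delete ManagedEntityProperty rows that collide with incoming staged rows
-- on the (ManagedEntityRowId, FromDateTime) unique key.
DELETE mep
FROM ManagedEntityProperty mep
INNER JOIN ManagedEntity me
        ON mep.ManagedEntityRowId = me.ManagedEntityRowId
INNER JOIN ManagedEntityStage mes
        ON mes.ManagedEntityGuid = me.ManagedEntityGuid
WHERE mes.ChangeDateTime = mep.FromDateTime;
```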
After doing this there was a lot of data moving around the staging tables (I assume from the management server that couldn't communicate properly), so once again we truncated the AlertStage table as it wasn't keeping up. Once this was done, everything worked properly and all the queues stayed under control.
To confirm things had been cleaned up, we checked that the AlertStage table had no entries and that the ManagedEntityStage table had no entries. We also confirmed that the 31553 events stopped on the management server.
Hopefully this can help someone, or provide a bit more information on these problems. -
Service Manager Data Warehouse Install - Analysis Server Configuration For OLAP Cubes Fail
Hello everyone,
I have an issue with my installation of the Data Warehouse for System Center Service Manager 2012 SP1.
My install environment is the following:
Windows Server 2012 – System Center Service Manager (Successfully Installed) - Virtual
Windows Server 2012 – System Center Data Warehouse (Pending) - Virtual
Windows Server 2012 – MS SQL Server 2012 – Physical, Clustered, 1st of Four Servers
The SQL Server is a clustered installation with named instances, specifically for SharePoint and Service Manager. Each instance has its own IP address and dynamic ports are turned off. I’m installing using the domain administrator account and I also chose
to run the installer as administrator. The domain admin has sysadmin rights to the service manager server and instance I’m trying to install on. However, the account does not have sysadmin rights to some of the other instances.
The install is smooth up until it needs to connect to the Analysis Services database. I have tried connecting to the Analysis Services instances on other SQL servers on site and all were successful. The only difference between the older SQL servers, the SQL 2012 development server and the SQL 2012 production server I'm trying to install to is that the domain admin account doesn't have sysadmin access on all the databases on the new production server. The SQL server is being installed and configured by a contractor, so if you have troubleshooting suggestions, I'll need to coordinate with the contractor.
Starting with the screen below, I began searching for help online. There seems to be no one else with this issue, or it is not documented properly. I opened a ticket with MS, called the contractor and troubleshot with him, troubleshot as far as I could on my own, and I'm still at a loss as to what is preventing the installer from connecting specifically to the Analysis server.
I first thought the installer or the data warehouse server was at issue, but all signs are pointing at the SQL server. The installer is able to connect to all the other SQL servers, including other 2012 servers (same versions), so it can't be the installer. I'm pretty sure the SQL server is going to be at issue.
After looking at this error, I opened the resource monitor and clicked the dropdown to see if it was trying to connect to the correct server and it was. I then connected to the old and new test and development servers successfully. Then connected to the
SQL 2008 R2 production cluster successfully. I then compared the two servers. The only difference other than the version numbers is that the admin account doesn’t have sysadmin rights on all the SQL 2012 database servers. But the database servers are not the
problem. The analysis servers are.
I then checked the event logs and they are empty as far as this issue is concerned. Actually, there are no errors on the SQL 2012 production box and the Data Warehouse box. I then checked the log that the installer creates during every step of the installation
and this is what is created when the dropdown is clicked for the analysis server configuration screen. The log file location is:
“C:\Users\admin\AppData\Local\Temp\2\SCSMSetupWizard01.txt”
In the file is the following text.
01:03:34:Attempting connection to SQL Server 2012 management scope on SCSMSQL2012
01:03:34:Using SQL Server 2012 management scope on SCSMSQL2012
01:03:36:Collecting SQL instances on server SCSMSQL2012
01:03:36:Attempting connection to SQL Server 2012 management scope on SCSMSQL2012.johnsonbrothers.com
01:03:36:Using SQL Server 2012 management scope on SCSMSQL2012.johnsonbrothers.com
01:03:38:Found SQL Instance: SCSMSQL2012\PWGSQL2012
01:03:38:Found SQL Instance: SCSMSQL2012\SCSMSQL2012
01:03:39:Error:GetSqlInstanceList(), Exception Type: Microsoft.AnalysisServices.ConnectionException, Exception Message: A connection cannot be made. Ensure that the server is running.
01:03:39:StackTrace: at Microsoft.AnalysisServices.XmlaClient.GetTcpClient(ConnectionInfo connectionInfo)
at Microsoft.AnalysisServices.XmlaClient.OpenTcpConnection(ConnectionInfo connectionInfo)
at Microsoft.AnalysisServices.XmlaClient.OpenConnection(ConnectionInfo connectionInfo, Boolean& isSessionTokenNeeded)
at Microsoft.AnalysisServices.XmlaClient.Connect(ConnectionInfo connectionInfo, Boolean beginSession)
at Microsoft.AnalysisServices.Server.Connect(String connectionString, String sessionId, ObjectExpansion expansionType)
at Microsoft.SystemCenter.Essentials.SetupFramework.HelperClasses.SetupValidationHelpers.GetASVersion(StringBuilder sqlInstanceServiceName)
at Microsoft.SystemCenter.Essentials.SetupFramework.HelperClasses.SetupValidationHelpers.GetSqlInstanceList(String sqlServerName, Int32 serviceType)
I'm now investigating the issue according to this output, and decided to ask you all if you've run into this issue and found a resolution.
I am running into the same issue, but I don't see anything in the Instances section related to portipv6. I do see it in the Listener section; I tried to remove it, but it comes back again. Please help.
<ConfigurationSettings>
<Security>
<RequireClientAuthentication>0</RequireClientAuthentication>
<SecurityPackageList/>
</Security>
<Network>
<Listener>
<RequestSizeThreshold>4095</RequestSizeThreshold>
<MaxAllowedRequestSize>0</MaxAllowedRequestSize>
<ServerSendTimeout>60000</ServerSendTimeout>
<ServerReceiveTimeout>60000</ServerReceiveTimeout>
<IPV4Support>2</IPV4Support>
<IPV6Support>2</IPV6Support>
</Listener>
<TCP>
<MaxPendingSendCount>12</MaxPendingSendCount>
<MaxPendingReceiveCount>4</MaxPendingReceiveCount>
<MinPendingReceiveCount>2</MinPendingReceiveCount>
<MaxCompletedReceiveCount>9</MaxCompletedReceiveCount>
<ScatterReceiveMultiplier>5</ScatterReceiveMultiplier>
<MaxPendingAcceptExCount>10</MaxPendingAcceptExCount>
<MinPendingAcceptExCount>2</MinPendingAcceptExCount>
<InitialConnectTimeout>10</InitialConnectTimeout>
<SocketOptions>
<SendBufferSize>0</SendBufferSize>
<ReceiveBufferSize>0</ReceiveBufferSize>
<DisableNonblockingMode>1</DisableNonblockingMode>
<EnableNagleAlgorithm>0</EnableNagleAlgorithm>
<EnableLingerOnClose>0</EnableLingerOnClose>
<LingerTimeout>0</LingerTimeout>
</SocketOptions>
</TCP>
<Requests>
<EnableBinaryXML>0</EnableBinaryXML>
<EnableCompression>0</EnableCompression>
</Requests>
<Responses>
<EnableBinaryXML>1</EnableBinaryXML>
<EnableCompression>1</EnableCompression>
<CompressionLevel>9</CompressionLevel>
</Responses>
<ListenOnlyOnLocalConnections>0</ListenOnlyOnLocalConnections>
</Network>
<Log>
<File>msmdredir.log</File>
<FileBufferSize>0</FileBufferSize>
<MessageLogs>Console;System</MessageLogs>
<Exception>
<CreateAndSendCrashReports>0</CreateAndSendCrashReports>
<CrashReportsFolder/>
<SQLDumperFlagsOn>0x0</SQLDumperFlagsOn>
<SQLDumperFlagsOff>0x0</SQLDumperFlagsOff>
<MiniDumpFlagsOn>0x0</MiniDumpFlagsOn>
<MiniDumpFlagsOff>0x0</MiniDumpFlagsOff>
<MinidumpErrorList>0xC1000000, 0xC1000001, 0xC100000C, 0xC1000016, 0xC1360054, 0xC1360055</MinidumpErrorList>
<ExceptionHandlingMode>0</ExceptionHandlingMode>
<MaxExceptions>500</MaxExceptions>
<MaxDuplicateDumps>1</MaxDuplicateDumps>
</Exception>
</Log>
<Memory>
<HandleIA64AlignmentFaults>0</HandleIA64AlignmentFaults>
<PreAllocate>0</PreAllocate>
<VertiPaqPagingPolicy>0</VertiPaqPagingPolicy>
<PagePoolRestrictNumaNode>0</PagePoolRestrictNumaNode>
</Memory>
<Instances/>
<VertiPaq>
<DefaultSegmentRowCount>0</DefaultSegmentRowCount>
<ProcessingTimeboxSecPerMRow>-1</ProcessingTimeboxSecPerMRow>
<SEQueryRegistry>
<Size>0</Size>
<MinKCycles>0</MinKCycles>
<MinCyclesPerRow>0</MinCyclesPerRow>
<MaxArbShpSize>0</MaxArbShpSize>
</SEQueryRegistry>
</VertiPaq>
</ConfigurationSettings> -
Service Manager 2012 R2 - Data warehouse Issue
I have an issue at a customer with their data warehouse server: whenever we generate a report using Service Manager, we are not seeing data in the report. For example, we only see 4 incidents on reports when we generate them, and these are many-months-old records. Within the database there are 1000+ incidents created, yet generating a report only shows us 4 incidents. I'm trying to figure out why it's only showing a few records when it should show all of them; I now have this issue with two customers.
I can see that the data warehouse jobs are running without issues; they are not failing. Please let me know how I can get this issue fixed.
Open up SQL Management Studio and connect to the server that hosts the data warehouse database. Run a query against the following view.
Incident
SELECT * FROM [DWDataMart].[dbo].[IncidentDimvw]
If the Incident query returns only 4 incidents, like your report, then the sync to the data warehouse is not working correctly. I would recommend running Travis' ETL script, which runs all the data warehouse jobs in the correct order. You can find it here:
https://gallery.technet.microsoft.com/PowerShell-Script-to-Run-a4a2081c
And if that still does not help, there are a few more blog posts for troubleshooting the data warehouse, but let's try this first and go from there.
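One more quick sanity check before digging into the jobs: compare total rows against rows not flagged as deleted, since reports normally filter deleted items out. A sketch, assuming the DeletedDate column mentioned at the top of this page exists on the view:

```sql
SELECT COUNT(*) AS TotalIncidents,
       SUM(CASE WHEN DeletedDate IS NULL THEN 1 ELSE 0 END) AS NotDeleted
FROM [DWDataMart].[dbo].[IncidentDimvw];
```

If TotalIncidents is already tiny, the sync is the problem; if only NotDeleted is tiny, look at the report filter instead.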
Cheers,
Thomas Strömberg
System Center Specialist
Please remember to 'Propose as answer' if you find a reply helpful -
Data Warehouse Jobs stuck at running - Since February!
Folks,
My incidents have not been groomed out of the console since February. I ran Get-SCDWJob and found most of the jobs are disabled (see below). I've tried to enable all of them using PowerShell, and they are never set back to Enabled.
No errors are present in the event log; in fact, the event log shows the jobs starting successfully.
I've restarted the three services. Rebooted the server.
I've been using this blog post as a guide.
http://blogs.msdn.com/b/scplat/archive/2010/06/07/troubleshooting-the-data-warehouse-data-warehouse-isn-t-getting-new-data-or-jobs-seem-to-run-forever.aspx
Anyone have any ideas?
Win 08 R2 and SQL 2008 R2 SP 1.
BatchId Name Status CategoryName StartTime EndTime IsEnabled
13810 DWMaintenance Running Maintenance 3/22/2013 4:26:00 PM True
13807 Extract_DW_ServMgr_MG Running Extract 2/28/2013 7:08:00 PM False
13808 Extract_ServMgr_MG Running Extract 2/28/2013 7:08:00 PM False
13780 Load.CMDWDataMart Running Load 2/28/2013 7:08:00 PM False
13784 Load.Common Running Load 2/28/2013 7:08:00 PM False
13781 Load.OMDWDataMart Running Load 2/28/2013 7:08:00 PM False
13809 MPSyncJob Running Synchronization 2/28/2013 8:08:00 PM True
3405 Process.SystemCenterChangeAndActivityManagementCube Running CubeProcessing 1/31/2013 3:00:00 AM 2/10/2013 2:59:00 PM True
3411 Process.SystemCenterConfigItemCube Running CubeProcessing 1/31/2013 3:00:00 AM 2/10/2013 2:59:00 PM True
3407 Process.SystemCenterPowerManagementCube Running CubeProcessing 1/31/2013 3:00:00 AM 2/10/2013 2:59:00 PM True
3404 Process.SystemCenterServiceCatalogCube Running CubeProcessing 1/31/2013 3:00:00 AM 2/10/2013 2:59:00 PM True
3406 Process.SystemCenterSoftwareUpdateCube Running CubeProcessing 1/31/2013 3:00:00 AM 2/10/2013 2:59:00 PM True
3410 Process.SystemCenterWorkItemsCube Running CubeProcessing 1/31/2013 3:00:00 AM 2/10/2013 2:59:00 PM True
13796 Transform.Common Running Transform 2/28/2013 7:08:00 PM False
Okay, I've done too much work without writing it down. I've gotten it to show me a new error using Marcel's script. The error is below.
It looks like a Cube issue. Not sure how to fix it.
There is no need to wait anymore for Job DWMaintenance because there is an error in module ManageCubeTranslations and the error is: <Errors><Error EventTime="2013-07-29T19:03:30.1401986Z">The workitem to add cube translations was aborted because a lock was unavailable for a cube.</Error></Errors>
Also running the command Get-SCDWJobModule | fl >> c:\temp\jobs290.txt shows the following errors.
JobId : 302
CategoryId : 1
JobModuleId : 6350
BatchId : 3404
ModuleId : 5869
ModuleTypeId : 1
ModuleErrorCount : 0
ModuleRetryCount : 0
Status : Not Started
ModuleErrorSummary : <Errors><Error EventTime="2013-02-10T19:58:30.6412697Z">The connection either timed out or was lost.</Error></Errors>
ModuleTypeName : Health Service Module
ModuleName : Process_SystemCenterServiceCatalogCube
ModuleDescription : Process_SystemCenterServiceCatalogCube
JobName : Process.SystemCenterServiceCatalogCube
CategoryName : CubeProcessing
Description : Process.SystemCenterServiceCatalogCube
CreationTime : 7/29/2013 12:57:39 PM
NotToBePickedBefore :
ModuleCreationTime : 7/29/2013 12:57:39 PM
ModuleModifiedTime :
ModuleStartTime :
ManagementGroup : DW_Freeport_ServMgr_MG
ManagementGroupId : f61a61f2-e0fe-eb37-4888-7e0be9c08593
JobId : 312
CategoryId : 1
JobModuleId : 6436
BatchId : 3405
ModuleId : 5938
ModuleTypeId : 1
ModuleErrorCount : 0
ModuleRetryCount : 0
Status : Not Started
ModuleErrorSummary : <Errors><Error EventTime="2013-02-10T19:58:35.1028411Z">Object reference not set to an instance of an object.</Error></Errors>
ModuleTypeName : Health Service Module
ModuleName : Process_SystemCenterChangeAndActivityManagementCube
ModuleDescription : Process_SystemCenterChangeAndActivityManagementCube
JobName : Process.SystemCenterChangeAndActivityManagementCube
CategoryName : CubeProcessing
Description : Process.SystemCenterChangeAndActivityManagementCube
CreationTime : 2/10/2013 7:58:31 PM
NotToBePickedBefore : 2/10/2013 7:58:35 PM
ModuleCreationTime : 2/10/2013 7:58:31 PM
ModuleModifiedTime : 2/10/2013 7:58:35 PM
ModuleStartTime : 2/10/2013 7:58:31 PM
ManagementGroup : DW_Freeport_ServMgr_MG
ManagementGroupId : f61a61f2-e0fe-eb37-4888-7e0be9c08593
JobId : 331
CategoryId : 1
JobModuleId : 6816
BatchId : 3406
ModuleId : 6242
ModuleTypeId : 1
ModuleErrorCount : 0
ModuleRetryCount : 0
Status : Not Started
ModuleErrorSummary : <Errors><Error EventTime="2013-02-10T19:58:38.7064180Z">Object reference not set to an instance of an object.</Error></Errors>
ModuleTypeName : Health Service Module
ModuleName : Process_SystemCenterSoftwareUpdateCube
ModuleDescription : Process_SystemCenterSoftwareUpdateCube
JobName : Process.SystemCenterSoftwareUpdateCube
CategoryName : CubeProcessing
Description : Process.SystemCenterSoftwareUpdateCube
CreationTime : 2/10/2013 7:58:35 PM
NotToBePickedBefore : 2/10/2013 7:58:39 PM
ModuleCreationTime : 2/10/2013 7:58:35 PM
ModuleModifiedTime : 2/10/2013 7:58:39 PM
ModuleStartTime : 2/10/2013 7:58:35 PM
ManagementGroup : DW_Freeport_ServMgr_MG
ManagementGroupId : f61a61f2-e0fe-eb37-4888-7e0be9c08593
JobId : 334
CategoryId : 1
JobModuleId : 6822
BatchId : 3407
ModuleId : 6246
ModuleTypeId : 1
ModuleErrorCount : 0
ModuleRetryCount : 0
Status : Not Started
ModuleErrorSummary : <Errors><Error EventTime="2013-02-10T19:58:42.2943950Z">Object reference not set to an instance of an object.</Error></Errors>
ModuleTypeName : Health Service Module
ModuleName : Process_SystemCenterPowerManagementCube
ModuleDescription : Process_SystemCenterPowerManagementCube
JobName : Process.SystemCenterPowerManagementCube
CategoryName : CubeProcessing
Description : Process.SystemCenterPowerManagementCube
CreationTime : 2/10/2013 7:58:39 PM
NotToBePickedBefore : 2/10/2013 7:58:42 PM
ModuleCreationTime : 2/10/2013 7:58:39 PM
ModuleModifiedTime : 2/10/2013 7:58:42 PM
ModuleStartTime : 2/10/2013 7:58:39 PM
ManagementGroup : DW_Freeport_ServMgr_MG
ManagementGroupId : f61a61f2-e0fe-eb37-4888-7e0be9c08593
JobId : 350
CategoryId : 1
JobModuleId : 6890
BatchId : 3410
ModuleId : 6299
ModuleTypeId : 1
ModuleErrorCount : 0
ModuleRetryCount : 0
Status : Not Started
ModuleErrorSummary : <Errors><Error EventTime="2013-02-10T19:58:45.8355723Z">Object reference not set to an instance of an object.</Error></Errors>
ModuleTypeName : Health Service Module
ModuleName : Process_SystemCenterWorkItemsCube
ModuleDescription : Process_SystemCenterWorkItemsCube
JobName : Process.SystemCenterWorkItemsCube
CategoryName : CubeProcessing
Description : Process.SystemCenterWorkItemsCube
CreationTime : 2/10/2013 7:58:42 PM
NotToBePickedBefore : 2/10/2013 7:58:46 PM
ModuleCreationTime : 2/10/2013 7:58:42 PM
ModuleModifiedTime : 2/10/2013 7:58:46 PM
ModuleStartTime : 2/10/2013 7:58:42 PM
ManagementGroup : DW_Freeport_ServMgr_MG
ManagementGroupId : f61a61f2-e0fe-eb37-4888-7e0be9c08593
JobId : 352
CategoryId : 1
JobModuleId : 6892
BatchId : 3411
ModuleId : 6300
ModuleTypeId : 1
ModuleErrorCount : 0
ModuleRetryCount : 0
Status : Not Started
ModuleErrorSummary : <Errors><Error EventTime="2013-02-10T19:58:49.6887476Z">Object reference not set to an instance of an object.</Error></Errors>
ModuleTypeName : Health Service Module
ModuleName : Process_SystemCenterConfigItemCube
ModuleDescription : Process_SystemCenterConfigItemCube
JobName : Process.SystemCenterConfigItemCube
CategoryName : CubeProcessing
Description : Process.SystemCenterConfigItemCube
CreationTime : 2/10/2013 7:58:46 PM
NotToBePickedBefore : 2/10/2013 7:58:50 PM
ModuleCreationTime : 2/10/2013 7:58:46 PM
ModuleModifiedTime : 2/10/2013 7:58:50 PM
ModuleStartTime : 2/10/2013 7:58:46 PM
ManagementGroup : DW_Freeport_ServMgr_MG
ManagementGroupId : f61a61f2-e0fe-eb37-4888-7e0be9c08593 -
Analyze command in a Data warehouse env
We do daily data loads into our data warehouse. On certain target tables we have change data capture enabled. As part of loading a table (4 million rows total), we remove data for a certain time period (say one month, 50,000+ rows) and load it again from the source. We also do a full-table analyze as part of this load, and it is taking a long time.
The question is: do we need to run the analyze every day? Would we see a big difference if we ran it once a week?
Thanks.
Hi srwijese,
My DW is actually 12 TB, and after each data load we collect stats on our tables. BUT, we have partitioned tables in most cases, so we just collect at the partition level using the dbms_stats package. I don't know whether your environment is partitioned or not; if it is, collect stats just for the partition that was loaded.
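The partition-level collection described above looks roughly like this with dbms_stats; the owner, table and partition names below are placeholders, not the poster's actual objects:

```sql
-- Gather stats only for the partition that was just reloaded,
-- instead of analyzing the whole 4M-row table every day.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'DWH_OWNER',   -- placeholder schema
    tabname     => 'FACT_SALES',  -- placeholder table
    partname    => 'P_2013_03',   -- the partition covering the reloaded month
    granularity => 'PARTITION',
    cascade     => TRUE);         -- also gather the partition's index stats
END;
/
```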
P.S: If you wish add [email protected] (MSN) to share experiences.
Jonathan Ferreira
http://oracle4dbas.blogspot.com -
Hello all
I am trying to create a dashboard to be used in a weekly meeting for the services team. One of the requirements is to show the number of incidents by category for the last 7, 14 and 30 days, so I am trying to create a date filter for my pivot table, but dates are not coming in as a date format. I read that this is a known "feature" of the data warehouse and heard that there is a fix/update that works around this situation. Has anyone found a fix for this?
I found this:
http://blogs.technet.com/b/servicemanager/archive/2012/12/07/incidents-or-service-requests-sliced-by-months-quarters.aspx
It seems like what I was looking for.
This is cool and works well, but not quite what I am trying to do. I will keep working with it, though.
But I'm still looking for how to filter to 7/14/30 days...
Are analytic functions usefull only for data warehouses?
Hi,
I work with reporting queries on Oracle databases but I don't work on data warehouses, so I'd like to know whether learning the analytic functions (SQL for analysis such as ROLLUP, CUBE, GROUPING, ...) might help me develop better reports, or whether analytic functions are usually useful only for data warehouse queries. I mean, are ROLLUP, CUBE, GROUPING, ... also useful on an operational database, or do they make sense only on a DWH?
Thanks!
Mark1970 wrote:
so is it worth learning them to improve report queries, not on a DWH but on common operational databases?
Why pigeonhole report queries as "operational" or "data warehouse"?
Do you tell a user/manager, "No, this report cannot be done as it looks like a data warehouse report and we have an operational database!"?
Data processing and data reporting requirements do not care what label you assign to your database.
Simple real-world example of using analytic queries on a non-warehouse. We supply data to an external system via XML. They require that we limit the number of parent entities per XML file we supply, e.g. 100 customer elements (together with all their child elements) per file. Analytic SQL enables this by creating "buckets" that can contain only 100 parent elements at a time. The complete process is SQL driven - no slow-by-slow row-by-row processing in PL/SQL using nested cursor loops and silly approaches like that.
Analytic SQL is a tool in the developer's toolbox. It would be unwise to remove it from the toolbox, thinking that it is not applicable and won't be needed for the work that's to be done. -
Data Warehouse Synchronization server - too many entries
I am having problems with App Advisor not getting performance data. I believe it is due to the Data Warehouse Synchronization
server having too many entries, as I should only have 1. How do I remove a server?
It seems you need to restore the OperationsManager database from a previous backup; try the methods in the KB.
http://support.microsoft.com/kb/2771934/en-gb
http://social.technet.microsoft.com/Forums/en-US/0f80e33e-243a-44ab-ba1a-e73ec421de03/two-dw-synchronization-server-instances-by-mistake
Juke Chou
TechNet Community Support -
Suggestions of building this data warehouse
Hi,
We have 5 source systems and all the data are coming in to 5 different ODSes in our data warehouse.
For reporting purposes,
1) Do you suggest that we feed each ODS into its own cube and push them into a MultiProvider,
or
2) feed all 5 ODSes into a single cube?
3) What is the advantage of 1) over 2), and of 2) over 1)?
4) Any suggestions on how best to configure the cubes and/or MultiProvider?
5) How should the key fields of the ODSes be handled in the cube?
6) How best do your suggest that we handle this with respect to BW authorization?
Thanks
Hi,
thanks for the response.
I don't clearly understand some of the points you made.
On 1) & 2), you indicated that
"I would say 1 would be right way to do it. In that case, you can split ur KF values by dataprovider if ever you need them to comparision scenario.".
What did you mean by "...ou can split ur KF values by dataprovider if ever you need them to comparision scenario"?
I was also lost on the "split ur KF laues.." and " comparison scenario"
I would appreciate an example to clarify this.
On 3) you wrote that
"Unless you dont have any field in each ODS to differentiate the data, putting all the data into a single cube would prevent you from spilting the data by dataprovider."
Can you clarify (preferably with an example) what you meant by "Unless you dont have any field in each ODS to differentiate the data"?
For performance purposes, won't multiple cubes be better than placing all the data in a single cube?
I thought I read that feeding the ODSes into multiple cubes and then pushing them into a MultiProvider has the advantage of being able to add or remove a cube without the need to modify the queries.
On 4), what "data structure" were you referring to? (This is logistics data.)
Thanks -
Best practice for a metadata table in a data warehouse environment?
Hi guru's,
In our data warehouse we have 1. a stage schema and 2. a DWH (data warehouse reporting) schema. In staging we have about 300 source tables. In the DWH schema, we create only the tables required from a reporting perspective. Some of the tables in the staging schema have been created in the DWH schema as well, with different table names and column names. The naming convention for these tables and columns in the DWH schema is based more on business names.
In order to keep track of these tables, we are creating a metadata table in the DWH schema, for example:
Stage      DWH_schema
Table_1    Table_A
Table_2    Table_B
Table_3    Table_C
Table_4    Table_D
My question is how we handle the column names in each of these tables. The stage_1, stage_2 and stage_3 column names have been renamed in DWH_schema as part of Table_A, Table_B, Table_C.
As said earlier, we have about 300 tables in stage and maybe around 200 tables in the DWH schema. A lot of the column names have been renamed in the DWH schema from the stage tables. Some of the tables have 200 columns,
so my concern is how we handle the column names in the metadata table. Do we need to keep only table names in the metadata table, not column names?
Any ideas will be greatly appreciated.
Thanks!
Hi,
seems like quite a buzzing question.
In our project we designed a hub and spoke like architecture.
Thus we have 3 layers. L0 is the one closest to the source, and L0 table names are linked to the corresponding source names by means of a naming standard (like tabA, EXT_tabA, tabA_OK1 and so on, based on the implementation of the load procedures).
At L1 we have the ODS, a normalized model; we use business names for tables there and standard names for temporary structures and artifacts.
Both L0 and L1 keep the source's column names as a general rule; new columns, like calculated ones, are business driven, and metadata are standard driven.
Datamodeler fits perfectly for modelling the L1 layer.
L2 is the dimensional schema; business names take over for tables and columns, possibly rewritten at the presentation layer (front-end tool).
Hope this helps, D. -
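On the original question of whether to track columns in the metadata table: one common approach is to keep a column-level lineage map alongside the table-level one, storing rows only for columns that were actually renamed. With ~200 tables of up to 200 columns that is at most a few tens of thousands of rows, which is cheap. The sketch below uses illustrative names from the thread; it is one possible design, not an established standard.

```python
# Sketch: column-level lineage metadata as a lookup keyed by
# (stage_table, stage_column) -> (dwh_table, dwh_column).
# Only renamed columns need an entry; unchanged ones fall through.
# All names here are illustrative, taken from the thread's example.

COLUMN_MAP = {
    ("Table_1", "stage_1"): ("Table_A", "business_name_1"),
    ("Table_1", "stage_2"): ("Table_A", "business_name_2"),
    ("Table_2", "stage_3"): ("Table_B", "business_name_3"),
}

def dwh_name(stage_table: str, stage_column: str) -> tuple:
    """Resolve a staging column to its business name in the DWH schema;
    columns that were not renamed resolve to themselves."""
    return COLUMN_MAP.get((stage_table, stage_column),
                          (stage_table, stage_column))

print(dwh_name("Table_1", "stage_1"))   # renamed column
print(dwh_name("Table_3", "load_date")) # unchanged column, falls through
```

In the database itself the same idea is a four-column metadata table (stage_table, stage_column, dwh_table, dwh_column); ETL and report developers can then join against it instead of maintaining the mapping in documentation.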
Diff b/w Data warehouse and Business Warehouse
Hi all,
what is the difference between Data Warehouse and Business Warehouse?
Hi..
The difference between data warehousing and Business Warehouse is as follows.
Data warehousing is the concept, and BIW is a tool that applies this concept to business applications.
Data warehousing allows you to analyze tons of data (millions and millions of records) in a convenient and optimal way; it is called BIW when applied to business applications, like analyzing the sales of a company.
Advantages: considering the volume of business data, BIW allows you to make decisions faster - I mean, you can analyze data faster. Support for multiple languages, ease of use, and so on.
Refer this
Re: WHAT IS THE DIFFERENCE BETWEEN BIW & DATAWAREHOUSING
hope it helps...