Images in Data Warehouse

Hello everyone,
If we have terabytes of images, should we go for a DWH solution, and why? Please give me some reasons I can use as points in my report.
Regards

Hi,
Please refer to this thread to understand more about my requirement:
Re: Dynamic selection of the images for OBIEE 11g report
I have followed the blog at http://vikramwalia.wordpress.com/category/obiee-11g/. In the final step the images are not displayed (a null value appears in place of the images).
Note: I have deployed the analyticsRes folder through the console and I am able to access the images.
If you are able to resolve this, please let me know, as it is critical.
Thanks,
Hari.
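For anyone hitting the same null-image symptom: the approach in that kind of blog post boils down to building an <img> tag in the analysis column formula and setting the column's Data Format to HTML. Below is a minimal sketch of such a formula; the column name and default file are placeholders, not taken from this thread.
-- OBIEE logical SQL column formula (set the column's Data Format to HTML).
-- If "Product"."Image File Name" is NULL, nothing renders, which matches the
-- symptom described above; the IFNULL fallback makes that visible.
'<img src="/analyticsRes/' || IFNULL("Product"."Image File Name", 'default.jpg') || '" width="100" />'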

Similar Messages

  • Service Manager data warehouse management server Installation fails

    Hi there,
    On a Windows Server 2012 R2 Standard virtual machine, with my user being a Local Admin and SQL Admin, I tried a first installation of the Service Manager data warehouse management server
    and I am facing the error shown in the attached image:
    In the event viewer I get the following error:
    "Microsoft System Center 2012 R2 Service Manager -- The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 25211
    The arguments are: -2147024809, The parameter is incorrect."
    In the Setup log, some of the errors are:
    WixRemoveFoldersEx:  Entering WixRemoveFoldersEx in C:\Windows\Installer\MSI35E.tmp, version 3.7.1224.0
    WixRemoveFoldersEx:  Error 0x80070057: Missing folder property: PSCONFIGFOLDER.A591E3B4_D228_431D_BF89_99D52C8FFB76 for row: wrf4582BC4C5CC47B1D2380408CD7A752DC.A591E3B4_D228_431D_BF89_99D52C8FFB76
    CustomAction WixRemoveFoldersEx.A591E3B4_D228_431D_BF89_99D52C8FFB76 returned actual error code 1603 but will be translated to success due to continue marking
    CAStartServices: CAStartServices was passed . OMCFG
    CAStartServices: Checking if service already started. OMCFG
    CAStartServices: Attempting to start service. OMCFG
    CAStartServices: StartService failed. Error Code: 0x8007042D.
    ConfigureSDKConfigService: CAStartServices failed, trying again.... Error Code: 0x8007042D. OMCFG
    Action start 17:47:05: _SetHealthServiceConfig.80B659D9_F758_4E7D_B4FA_E53FC737DCC9.
    GetMsiFeatureState: Failed to get feature state. Error Code: 0x80070646. MOMServer
    MSI (s) (EC!4C) [17:47:05:483]: Note: 1: 2711 2: MOMGateway
    SetHealthServiceConfig: Failed to get Feature State.. Error Code: 0x80070646. MOMServer
    GetMsiFeatureState: Failed to get feature state. Error Code: 0x80070646. MOMGateway
    I have checked the following post but it did not help me:
    http://social.technet.microsoft.com/Forums/systemcenter/en-US/c42bb04d-a51e-4037-a8a3-37d714d6faac/scsm-management-server-installation-fails?forum=systemcenterservicemanager
    Could you please help me with this issue?
    Thanks a lot,
    M

    Hi,
    Sorry, I cannot post the full log. I have also found these errors in the log:
    Calling custom action CAManaged!Microsoft.MOMv3.Setup.MOMv3ManagedCAs.RegisterSdkSCP
    RegisterSdkSCP: There is no previous serviceConnectionPoint
    RegisterSdkSCP: Creating New serviceConnectionPoint
    RegisterSdkSCP: Adding ACL for current user: DOMAIN\InstallationAccount
    RegisterSdkSCP: Adding ACL for SM Admini: DOMAIN\SCSMDWadmins
    RegisterSdkSCP: Error: Access is denied.
    InstallCounters: LoadPerfCounterTextStrings() failed . Error Code: 0x80070057. momv3 "D:\Program Files\Microsoft System Center 2012 R2\Service Manager\MOMConnectorCounters.ini"
    InstallPerfCountersHelper: pcCounterInstaller->InstallCounters() for the default counters failed. Error Code: 0x80070057. MOMConnector
    InstallPerfCountersLib: InstallHealthServicePerfCounters() failed . Error Code: 0x80070057.
    InstallPerfCountersLib: Retry Count : .
    InstallHSPerfCounters: Failed to install agent perf counters. Error Code: 0x80070057.
    Thanks for your reply.

  • Data Warehouse in Virtual Machine

    Hi all!
    I'm looking for information about the pros and cons of running an Oracle Data Warehouse in a virtual machine.
    I've been searching Google but can't find anything worthwhile.
    Do you know where I can find a document or some tips?
    Thanks!

    I am curious to know why you would want to run a Data Warehouse within a VM image. It would seem to me the benefits would be limited for most data warehouses unless you are using this as a way to quickly spin off QA, training, unit/integration testing environments using limited resources.
    There is nothing specific you need to do for data warehousing when running a database inside a VM, but obviously you will need to set up your VM environment carefully. You can find general information on virtualization here: http://www.oracle.com/technology/tech/virtualization/index.html
    I would be interested in getting more information from you about this requirement.
    Regards
    Keith Laker
    Senior Principal Product Manager, Data Warehousing

  • Best practice for a metadata table in a data warehouse environment?

    Hi gurus,
    In our data warehouse we have 1. a stage schema and 2. a DWH (data warehouse reporting) schema. In staging we have about 300 source tables. In the DWH schema we create only the tables required from a reporting perspective. Some of the tables in the stage schema have been created in the DWH schema as well, with different table and column names. The naming convention for these tables and columns in the DWH schema is based more on business names.
    In order to keep track of these tables we are creating a metadata table in the DWH schema, for example:
    Stage       DWH_schema
    Table_1     Table_A
    Table_2     Table_B
    Table_3     Table_C
    Table_4     Table_D
    My question is how we handle the column names in each of these tables. The stage_1, stage_2 and stage_3 columns have been renamed in DWH_schema, where they are part of Table_A, Table_B and Table_C.
    As said earlier, we have about 300 tables in stage and maybe around 200 tables in the DWH schema. A lot of the column names have been renamed in the DWH schema from the stage tables, and some of the tables have 200 columns.
    So my concern is: how do we handle the column names in the metadata table? Do we need to keep only table names in the metadata table and not column names?
    Any idea will be greatly appreciated.
    Thanks!

    Hi,
    Seems like quite a buzzing question.
    In our project we designed a hub-and-spoke-like architecture.
    Thus we have 3 layers. L0 is the one closest to the source, and L0 table names are linked to the corresponding source names by means of a naming standard (like tabA, EXT_tabA, tabA_OK1 and so on, based on the implementation of the load procedures).
    At L1 we have the ODS, a normalized model; we use business names for tables there and standard names for temporary structures and artifacts.
    Both L0 and L1 keep the source column names as a general rule; new columns such as calculated ones are business driven, and the metadata are standard driven.
    Data Modeler fits perfectly for modelling the L1 layer.
    L2 is the dimensional schema; business names are used for tables and columns, possibly rewritten at the presentation layer (front-end tool).
    Hope this helps, D.
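    To make the column-name question concrete, one option is to keep the mapping at column level rather than table level, so the table-level mapping is just a DISTINCT over the first two columns. A minimal sketch of such a metadata table follows; the table and column names are illustrative only, not taken from this thread.
    -- Hedged sketch: one row per mapped column.
    CREATE TABLE meta_stage_to_dwh_map (
      stage_table_name  VARCHAR2(30)   NOT NULL,
      dwh_table_name    VARCHAR2(30)   NOT NULL,
      stage_column_name VARCHAR2(30)   NOT NULL,
      dwh_column_name   VARCHAR2(30)   NOT NULL,
      transformation    VARCHAR2(4000),  -- optional derivation/rename rule
      CONSTRAINT pk_meta_map PRIMARY KEY (stage_table_name, stage_column_name, dwh_table_name)
    );
    -- Example rows for the renamed columns mentioned above (values are placeholders)
    INSERT INTO meta_stage_to_dwh_map VALUES ('TABLE_1', 'TABLE_A', 'STAGE_1', 'CUSTOMER_NAME', NULL);
    INSERT INTO meta_stage_to_dwh_map VALUES ('TABLE_1', 'TABLE_A', 'STAGE_2', 'CUSTOMER_TYPE', NULL);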

  • Difference between Data Warehouse and Business Warehouse

    Hi all,
    What is the difference between a Data Warehouse and a Business Warehouse?

    hi..
    The difference between data warehousing and Business Warehouse is as follows.
    Data warehousing is the concept, and BIW is a tool that applies this concept to business applications.
    Data warehousing allows you to analyze tons of data (millions and millions of records) in a convenient and optimal way; it is called BIW when applied to business applications such as analyzing the sales of a company.
    Advantages: considering the volume of business data, BIW lets you make decisions faster, i.e. you can analyze data faster. It also supports multiple languages, is easy to use, and so on.
    Refer to this thread:
    Re: WHAT IS THE DIFFERENCE BETWEEN BIW & DATAWAREHOUSING
    Hope it helps...

  • Performance issues with data warehouse loads

    We have performance issues with our data warehouse ETL load process. I have run
    ANALYZE and DBMS_STATS and checked the database environment. What other things can I do to optimize performance? I cannot use Statspack since we are running Oracle 8i. Thanks
    Scott

    Hi,
    You should analyze the DB after you have loaded the tables.
    Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
    If yes:
    Make sure your sequences cache values (ALTER SEQUENCE s CACHE 10000).
    Drop all unneeded indexes while loading and disable triggers if possible.
    How big is your redo log buffer? When loading a large amount of data it may be an option to enlarge this buffer.
    Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
    Is it possible to use a direct load? Or do you already load direct-path?
    Dim
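    A minimal sketch of the load-tuning steps Dim suggests; the sequence, index, and table names are placeholders, not from this thread:
    -- Hedged sketch; adjust object names and test before using in production.
    ALTER SEQUENCE s CACHE 10000;                 -- cut sequence maintenance overhead
    ALTER TABLE fact_sales DISABLE ALL TRIGGERS;  -- only if business rules allow it
    DROP INDEX ix_fact_sales_dim1;                -- rebuild after the load
    -- Direct-path load from the staging table
    INSERT /*+ APPEND */ INTO fact_sales
    SELECT * FROM stg_fact_sales;
    COMMIT;
    CREATE INDEX ix_fact_sales_dim1 ON fact_sales (dim1_key);
    ALTER TABLE fact_sales ENABLE ALL TRIGGERS;
    -- Re-gather statistics after the load
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(user, 'FACT_SALES');
    END;
    /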

  • Does anyone know how to display (in LabVIEW) the memory use during execution of an image and data acquisition VI to predict when it is time to cease the acquisition to prevent the program crashing?

    Does anyone know how to display (in LabVIEW) the memory use during execution of an image and data acquisition VI to predict when it is time to cease the acquisition to prevent the program crashing?
    I am acquiring images and data to a buffer on the edge of the while loop, and I am finding that the crashing of the program is unpredictable, but it is almost always due to memory saturation when the buffers get too big.
    I have attached the VI.
    Thanks for the help
    Attachments:
    new_control_and_acquisition_program.vi ‏946 KB

    Take a look at this document that discusses how to monitor IMAQ memory usage:
    http://digital.ni.com/public.nsf/websearch/8C6E405861C60DE786256DB400755957
    Hope this helps -
    Julie

  • Please help to get an on-hand stock report with last purchase and billed date, warehouse- and item-wise

    Please help me to get an on-hand stock report with last purchase and billed date, warehouse- and item-wise.

    Hi Rajeesh Ambadi...
    Try This
    -- On-hand quantity per item for the prompted warehouse, with last purchase date and price
    SELECT DISTINCT T0.ItemCode, T1.ItemName, T0.OnHand AS 'Total Qty',
           T1.LastPurDat, T1.LastPurPrc
    FROM OITW T0
    INNER JOIN OITM T1 ON T0.ItemCode = T1.ItemCode
    INNER JOIN OITB T2 ON T1.ItmsGrpCod = T2.ItmsGrpCod
    LEFT JOIN IBT1 T3 ON T3.ItemCode = T0.ItemCode AND T3.WhsCode = T0.WhsCode
    WHERE T0.OnHand > 0
      AND T0.WhsCode = '[%0]'   -- warehouse selection prompt
    Hope Helpful
    Regards
    Kennedy
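    If you need the same list across all warehouses instead of prompting for one, a variant of the query above (same tables; treat this as a sketch to adapt) could be:
    SELECT T0.WhsCode, T0.ItemCode, T1.ItemName, T0.OnHand AS 'Total Qty',
           T1.LastPurDat, T1.LastPurPrc
    FROM OITW T0
    INNER JOIN OITM T1 ON T0.ItemCode = T1.ItemCode
    WHERE T0.OnHand > 0
    ORDER BY T0.WhsCode, T0.ItemCode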

  • My backup failed and I get a message: Time Machine couldn't complete the backup to "TimeCap". The backup disk image "/Volumes/Data/Mary Fleming's MacBook Pro .sparsebundle" is already in use.

    I noticed today that my last backup was this a.m., and it has tried several times to back up since, but I keep getting this message:
    Time Machine couldn’t complete the backup to “TimeCap”.
    The backup disk image “/Volumes/Data/ MacBook Pro .sparsebundle” is already in use.
    I have never had this problem before and don't know what to do. I am not that great around computers, so if anyone can help I would appreciate it. I did try to do some research but could not understand any of the content I read. I sure don't know what a "sparsebundle" is; this is the first time I have heard the term.
    If anyone can help, I would appreciate it.
    I have Version 10.9.1.

    This is a standard issue..
    It's been around since Lion, but you missed out.. !! Well, the old rule applies.. there are three types of Mac users.
    1. The bugs have bitten you, and you have done the workarounds.
    2. The bugs are biting you and you are now trying to find out what is going on.
    3. The bugs will bite you. And in the meantime you laugh at everyone with problems and say it must be all their fault. Doesn't happen to my system.
    You were simply in the last group.. now, with the wonders of Mavericks to help, you got bit.
    The solution is simple.. unplug the power cord from the TC, count to 10, and power up the TC again.
    No luck? Then look at the more technical help.
    C12 here. http://pondini.org/TM/Troubleshooting.html
    Your naming might also be an issue if this continues to happen.. see C9.

  • How to display image and data in module pool screen?

    Hi,
    I want to display an image and the relevant data beside the image in a module pool screen. I am using a docking container to display the image.
    At the moment I am able to display either the image or the data, but not both.
    One more thing: I want to display multiple images and their data.
    Please suggest something if you have any idea.
    Regards,
    Dileep.

    You can try the approach below; I have used it in a report.
    * c_95, c_10 and c_1 are constants holding the values 95, 10 and 1.
    DATA: gc_docking          TYPE REF TO cl_gui_docking_container,        "Docking container
          gc_split            TYPE REF TO cl_gui_easy_splitter_container,  "Splitter
          gc_top_container    TYPE REF TO cl_gui_container,                "Top container (image/document)
          gc_bottom_container TYPE REF TO cl_gui_container,                "Bottom container (ALV grid)
          gc_document         TYPE REF TO cl_dd_document,                  "Document
          gc_events           TYPE REF TO lcl_event_class,                 "Local event class (defined in the report)
          gc_grid             TYPE REF TO cl_gui_alv_grid.                 "ALV grid

    * Assumption: the ELSE branch below implies a foreground/background check
    * around the container creation; sy-batch is used for that here.
    IF sy-batch IS INITIAL.
    * Creating the docking container
      CREATE OBJECT gc_docking
        EXPORTING
          ratio = c_95.
      IF sy-subrc EQ 0.
    *   Splitting the docking container
        CREATE OBJECT gc_split
          EXPORTING
            parent        = gc_docking
            sash_position = c_10  "Position of splitter bar (in percent)
            with_border   = c_1.  "With border = 1, without border = 0
      ENDIF.
    *   Placing the containers in the splitter
      gc_top_container    = gc_split->top_left_container.
      gc_bottom_container = gc_split->bottom_right_container.
    *   Creating the grid in the bottom container
      CREATE OBJECT gc_grid
        EXPORTING
          i_parent = gc_bottom_container.
    ELSE.
    * Background job handling
      CREATE OBJECT gc_grid
        EXPORTING
          i_parent = gc_docking.
    ENDIF.
    * Creating the document (this is where the image/HTML for the top container goes)
    CREATE OBJECT gc_document
      EXPORTING
        style = 'ALV_GRID'.
    Regards,
    Sameer

  • Error while creating data warehouse tables.

    Hi,
    I am getting an error while creating data warehouse tables.
    I am using OBIA 7.9.5.
    The contents of the generate_clt log are as below.
    >>>>>>>>>>>>>>>>>>>>>>>>>>
    Schema will be created from the following containers:
    Oracle 11.5.10
    Universal
    Conflict(s) between containers:
    Table Name : W_BOM_ITEM_FS
    Column Name: INTEGRATION_ID.
    The column properties that are different :[keyTypeCode]
    Success!
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    There are two rows in the DAC repository schema for the column and the table.
    The w_etl_table_col.KEY_TYPE_CD value for DW application is UNKNOWN and for the ORA_11i application it is NULL.
    Could this be the cause of the issue? If so, why would the values be different, and how can I resolve it?
    If not, then what could be the problem?
    Any responses will be appreciated.
    Thanks and regards,
    Manoj.

    Strange. The OBIA 7.9.5 Installation and Configuration Guide says the following:
    4.3.4.3 Create ODBC Database Connections
    Note: You must use the Oracle Merant ODBC driver to create the ODBC connections. The Oracle Merant ODBC driver is installed by the Oracle Business Intelligence Applications installer. Therefore, you will need to create the ODBC connections after you have run the Oracle Business Intelligence Applications installer and have installed the DAC Client.
    Several other users are getting the same message creating DW tables.

  • Failure on clean install of Service Manager 2012 R2 Data Warehouse installation - Error about "AssignSdkAccountAsSsrsPublisher"

    I'm installing the Data Warehouse component on a new server (Server 2008 R2 SP1 with SQL 2012 SP1 CU6). Reporting Services, Analysis Services and full-text services are installed. I'm installing under an account that has DBO rights to the instance.
    All pre-reqs pass. The installation gets to the very end and I get the following error:
    An error occurred while executing a custom action:_AssignSdkAccountAsSsrsPublisher
    This upgrade attempt has failed before permanent modifications were made. Upgrade has successfully rolled back to the Original state of the system. Once the correction are made, you can retry upgrade for this role.
    I look in the setup logs and only see the following reference to "AssignSdkAccountAsSsrsPublisher"
    MSI (s) (1C:00) [15:09:51:914]: NOTE: custom action _AssignSdkAccountAsSsrsPublisher unexpectedly closed the hInstall handle (type MSIHANDLE) provided to it. The custom action should be fixed to not close that handle.
    CustomAction _AssignSdkAccountAsSsrsPublisher returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox)
    Originally, I had some issues with services not being started during the install because some accounts didn't have "log on as a service" rights, but those have all been rectified. I'm not sure if something is left over from that, or maybe CU6 wasn't
    tested with SM R2 and I need to go back to an earlier CU?

    That worked for me too. Manually removing all the folders from SSRS and then retrying the install allowed it to proceed and complete successfully the second time around.
    The first time I installed SCSM 2012 R2, I got hit with
    this error, which is why SSRS wasn't in a clean state. Make sure to launch setup.exe elevated as an admin!

  • What are the best solutions for data warehouse configuration in 10gR2

    I need help with solutions to propose to my client for upgrading the data warehouse.
    Current configuration: Oracle Database 9.2.0.8. This database contains the data warehouse and one more data mart on the same host. The sizes are 6 TB (retention policy of 3 years + current year) and 1 TB respectively. The ETL tool and BO reporting tools are also hosted on the same host. This configuration is performing really poorly.
    The client cannot make major architectural or configuration changes to the existing environment now due to some constraints.
    However, they have agreed to separate the databases onto hosts distinct from the ETL tools and BO objects. We are also planning to upgrade the database to 10gR2 to attain stability, get better performance, and overcome the current headaches.
    We cannot upgrade the database to 11g, as BO is at version 6.5, which isn't compatible with Oracle 11g. And the client cannot afford to upgrade anything other than the database.
    So my role is vital: provide a solid solution for better performance and carry out a successful migration of the Oracle database from one host to another (similar platform and OS) in addition to the upgrade.
    I have so far thought of the following:
    Move the Oracle database and data mart to a separate host.
    The host will be the same platform, that is, HP Superdome with HP-UX 32-bit OS (we cannot change to 64-bit as the ETL tool doesn't support it).
    Install a new Oracle Database 10g on the new host and move the data to it.
    Explore all the new 10gR2 features that help a data warehouse, that is, the SQL MODEL clause, parallel processing, partitioning (see the sketch after this list), Data Pump, and SPA to study pre- and post-migration performance.
    Also considering RAC as part of the solution, as our main motive is to show a tremendous performance enhancement.
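    As an illustration of the partitioning point only (the table, column, and partition names are invented for this sketch, not taken from the actual system):
    -- Hedged sketch: range-partition a large fact table by period so the
    -- 3-year retention policy becomes a partition-maintenance operation.
    CREATE TABLE sales_fact (
      sale_date  DATE         NOT NULL,
      product_id NUMBER       NOT NULL,
      store_id   NUMBER       NOT NULL,
      amount     NUMBER(12,2)
    )
    PARTITION BY RANGE (sale_date) (
      PARTITION p_2012_q4 VALUES LESS THAN (TO_DATE('2013-01-01','YYYY-MM-DD')),
      PARTITION p_2013_q1 VALUES LESS THAN (TO_DATE('2013-04-01','YYYY-MM-DD')),
      PARTITION p_max     VALUES LESS THAN (MAXVALUE)
    );
    -- Ageing out old data is then a metadata operation instead of a big DELETE:
    ALTER TABLE sales_fact DROP PARTITION p_2012_q4;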
    I need all your help to prepare a good road map for my assignment. Please suggest.
    Thanks,
    Tapan

    SGA=27.5 GB and PGA=50 MB
    I am also pasting part of the Statspack report, excluding the snapshots around the DB bounce. Please suggest the scope for improvement in this case.
    STATSPACK report for
    Snap Id Snap Time Sessions Curs/Sess Comment
    Begin Snap: 582946 11-Mar-13 20:02:16 46 12.8
    End Snap: 583036 12-Mar-13 18:24:24 60 118.9
    Elapsed: 1,342.13 (mins)
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
    Buffer Cache: 21,296M Std Block Size: 16K
    Shared Pool Size: 6,144M Log Buffer: 16,384K
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 1,343,739.01 139,883.39
    Logical reads: 100,102.54 10,420.69
    Block changes: 3,757.42 391.15
    Physical reads: 6,670.84 694.44
    Physical writes: 874.34 91.02
    User calls: 1,986.04 206.75
    Parses: 247.87 25.80
    Hard parses: 5.82 0.61
    Sorts: 1,566.76 163.10
    Logons: 10.99 1.14
    Executes: 1,309.79 136.35
    Transactions: 9.61
    % Blocks changed per Read: 3.75 Recursive Call %: 43.34
    Rollback per transaction %: 3.49 Rows per Sort: 190.61
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.90 Redo NoWait %: 100.00
    Buffer Hit %: 96.97 In-memory Sort %: 100.00
    Library Hit %: 99.27 Soft Parse %: 97.65
    Execute to Parse %: 81.08 Latch Hit %: 99.58
    Parse CPU to Parse Elapsd %: 3.85 % Non-Parse CPU: 99.34
    Shared Pool Statistics Begin End
    Memory Usage %: 7.11 50.37
    % SQL with executions>1: 62.31 46.46
    % Memory for SQL w/exec>1: 26.75 13.47
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~ % Total
    Event Waits Time (s) Ela Time
    CPU time 492,062 43.66
    db file sequential read 157,418,414 343,549 30.49
    library cache pin 92,339 66,759 5.92
    PX qref latch 63,635 43,845 3.89
    db file scattered read 2,506,806 41,677 3.70
    Background Wait Events for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    Total Wait wait Waits
    Event Waits Timeouts Time (s) (ms) /txn
    log file sequential read 176,386 0 3,793 22 0.2
    log file parallel write 2,685,833 0 1,813 1 3.5
    db file parallel write 239,166 0 1,350 6 0.3
    control file parallel write 33,432 0 79 2 0.0
    LGWR wait for redo copy 478,120 536 75 0 0.6
    rdbms ipc reply 10,027 0 47 5 0.0
    control file sequential read 32,414 0 40 1 0.0
    db file scattered read 4,101 0 30 7 0.0
    db file sequential read 13,946 0 29 2 0.0
    direct path read 203,694 0 14 0 0.3
    log buffer space 363 0 13 37 0.0
    latch free 3,766 0 9 2 0.0
    direct path write 80,491 0 6 0 0.1
    async disk IO 351,955 0 4 0 0.5
    enqueue 28 0 1 21 0.0
    buffer busy waits 1,281 0 1 0 0.0
    log file single write 172 0 0 1 0.0
    rdbms ipc message 10,563,204 251,286 992,837 94 13.7
    pmon timer 34,751 34,736 78,600 2262 0.0
    smon timer 7,462 113 76,463 10247 0.0
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    CPU used by this session 49,206,154 611.0 63.6
    CPU used when call started 49,435,735 613.9 63.9
    CR blocks created 6,740,777 83.7 8.7
    Cached Commit SCN referenced 423,253,503 5,256.0 547.2
    Commit SCN cached 19,165 0.2 0.0
    DBWR buffers scanned 48,276,489 599.5 62.4
    DBWR checkpoint buffers written 6,959,752 86.4 9.0
    DBWR checkpoints 454 0.0 0.0
    DBWR free buffers found 44,817,183 556.5 57.9
    DBWR lru scans 137,149 1.7 0.2
    DBWR make free requests 162,528 2.0 0.2
    DBWR revisited being-written buff 4,220 0.1 0.0
    DBWR summed scan depth 48,276,489 599.5 62.4
    DBWR transaction table writes 5,036 0.1 0.0
    DBWR undo block writes 2,989,436 37.1 3.9
    DDL statements parallelized 3,723 0.1 0.0
    DFO trees parallelized 4,157 0.1 0.0
    DML statements parallelized 3 0.0 0.0
    OS Block input operations 29,850 0.4 0.0
    OS Block output operations 1,591 0.0 0.0
    OS Characters read/written 182,109,814,791 2,261,447.1 235,416.9
    OS Integral unshared data size ################## 242,463,432.4 ############
    OS Involuntary context switches 188,257,786 2,337.8 243.4
    OS Maximum resident set size 43,518,730,619 540,417.4 56,257.5
    OS Page reclaims 159,430,953 1,979.8 206.1
    OS Signals received 5,260,938 65.3 6.8
    OS Socket messages received 79,438,383 986.5 102.7
    OS Socket messages sent 93,064,176 1,155.7 120.3
    OS System time used 10,936,430 135.8 14.1
    OS User time used 132,043,884 1,639.7 170.7
    OS Voluntary context switches 746,207,739 9,266.4 964.6
    PX local messages recv'd 55,120,663 684.5 71.3
    PX local messages sent 55,120,817 684.5 71.3
    Parallel operations downgraded 1 3 0.0 0.0
    Parallel operations not downgrade 4,154 0.1 0.0
    SQL*Net roundtrips to/from client 155,422,335 1,930.0 200.9
    SQL*Net roundtrips to/from dblink 18 0.0 0.0
    active txn count during cleanout 16,529,551 205.3 21.4
    background checkpoints completed 43 0.0 0.0
    background checkpoints started 43 0.0 0.0
    background timeouts 280,202 3.5 0.4
    branch node splits 4,428 0.1 0.0
    buffer is not pinned count 6,382,440,322 79,257.4 8,250.7
    buffer is pinned count 9,675,661,370 120,152.8 12,507.9
    bytes received via SQL*Net from c 67,384,496,376 836,783.4 87,109.3
    bytes received via SQL*Net from d 6,142 0.1 0.0
    bytes sent via SQL*Net to client 50,240,643,657 623,890.4 64,947.1
    bytes sent via SQL*Net to dblink 3,701 0.1 0.0
    calls to get snapshot scn: kcmgss 145,385,064 1,805.4 187.9
    calls to kcmgas 36,816,132 457.2 47.6
    calls to kcmgcs 3,514,770 43.7 4.5
    change write time 369,373 4.6 0.5
    cleanout - number of ktugct calls 20,954,488 260.2 27.1
    cleanouts and rollbacks - consist 6,357,174 78.9 8.2
    cleanouts only - consistent read 10,078,802 125.2 13.0
    cluster key scan block gets 69,403,565 861.9 89.7
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    cluster key scans 41,311,211 513.0 53.4
    commit cleanout failures: block l 413,776 5.1 0.5
    commit cleanout failures: buffer 414 0.0 0.0
    commit cleanout failures: callbac 41,194 0.5 0.1
    commit cleanout failures: cannot 174,382 2.2 0.2
    commit cleanouts 11,469,056 142.4 14.8
    commit cleanouts successfully com 10,839,290 134.6 14.0
    commit txn count during cleanout 17,155,424 213.0 22.2
    consistent changes 145,418,277 1,805.8 188.0
    consistent gets 8,043,252,188 99,881.4 10,397.7
    consistent gets - examination 3,180,028,047 39,489.7 4,110.9
    current blocks converted for CR 9 0.0 0.0
    cursor authentications 14,926 0.2 0.0
    data blocks consistent reads - un 143,706,500 1,784.6 185.8
    db block changes 302,577,666 3,757.4 391.2
    db block gets 336,562,217 4,179.4 435.1
    deferred (CURRENT) block cleanout 2,912,793 36.2 3.8
    dirty buffers inspected 627,174 7.8 0.8
    enqueue conversions 1,296,337 16.1 1.7
    enqueue releases 13,053,200 162.1 16.9
    enqueue requests 13,239,092 164.4 17.1
    enqueue timeouts 185,878 2.3 0.2
    enqueue waits 114,120 1.4 0.2
    exchange deadlocks 7,390 0.1 0.0
    execute count 105,475,101 1,309.8 136.4
    free buffer inspected 1,604,407 19.9 2.1
    free buffer requested 258,126,047 3,205.4 333.7
    hot buffers moved to head of LRU 22,793,576 283.1 29.5
    immediate (CR) block cleanout app 16,436,010 204.1 21.3
    immediate (CURRENT) block cleanou 2,860,013 35.5 3.7
    index fast full scans (direct rea 12,375 0.2 0.0
    index fast full scans (full) 3,733 0.1 0.0
    index fast full scans (rowid rang 192,148 2.4 0.3
    index fetch by key 1,321,024,486 16,404.5 1,707.7
    index scans kdiixs1 406,165,684 5,043.8 525.1
    leaf node 90-10 splits 50,373 0.6 0.1
    leaf node splits 697,235 8.7 0.9
    logons cumulative 884,756 11.0 1.1
    messages received 3,276,719 40.7 4.2
    messages sent 3,257,171 40.5 4.2
    no buffer to keep pinned count 569 0.0 0.0
    no work - consistent read gets 4,406,092,172 54,715.0 5,695.8
    opened cursors cumulative 20,527,704 254.9 26.5
    parse count (failures) 267,088 3.3 0.4
    parse count (hard) 468,996 5.8 0.6
    parse count (total) 19,960,548 247.9 25.8
    parse time cpu 323,024 4.0 0.4
    parse time elapsed 8,393,422 104.2 10.9
    physical reads 537,189,332 6,670.8 694.4
    physical reads direct 292,545,140 3,632.8 378.2
    physical writes 70,409,002 874.3 91.0
    physical writes direct 59,248,394 735.8 76.6
    physical writes non checkpoint 69,103,391 858.1 89.3
    pinned buffers inspected 11,893 0.2 0.0
    prefetched blocks 95,892,161 1,190.8 124.0
    prefetched blocks aged out before 1,495,883 18.6 1.9
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    process last non-idle time ################## ############## ############
    queries parallelized 417 0.0 0.0
    recursive calls 122,323,299 1,519.0 158.1
    recursive cpu usage 3,144,533 39.1 4.1
    redo blocks written 180,881,558 2,246.2 233.8
    redo buffer allocation retries 5,400 0.1 0.0
    redo entries 164,728,513 2,045.6 213.0
    redo log space requests 1,006 0.0 0.0
    redo log space wait time 2,230 0.0 0.0
    redo ordering marks 2,563 0.0 0.0
    redo size 108,208,614,904 1,343,739.0 139,883.4
    redo synch time 558,520 6.9 0.7
    redo synch writes 2,343,824 29.1 3.0
    redo wastage 1,126,585,600 13,990.0 1,456.4
    redo write time 718,655 8.9 0.9
    redo writer latching time 7,763 0.1 0.0
    redo writes 2,685,833 33.4 3.5
    rollback changes - undo records a 522,742 6.5 0.7
    rollbacks only - consistent read 335,177 4.2 0.4
    rows fetched via callback 1,100,990,382 13,672.1 1,423.3
    session connect time ################## ############## ############
    session cursor cache count 1,061 0.0 0.0
    session cursor cache hits 1,687,796 21.0 2.2
    session logical reads 8,061,057,193 100,102.5 10,420.7
    session pga memory 1,573,228,913,832 19,536,421.0 2,033,743.8
    session pga memory max 1,841,357,626,496 22,866,054.4 2,380,359.0
    session uga memory 1,074,114,630,336 13,338,399.4 1,388,529.0
    session uga memory max 386,645,043,296 4,801,374.0 499,823.6
    shared hash latch upgrades - no w 410,360,146 5,095.9 530.5
    sorts (disk) 2,657 0.0 0.0
    sorts (memory) 126,165,625 1,566.7 163.1
    sorts (rows) 24,048,783,304 298,638.8 31,088.3
    summed dirty queue length 5,438,201 67.5 7.0
    switch current to new buffer 1,302,798 16.2 1.7
    table fetch by rowid 6,201,503,534 77,010.5 8,016.8
    table fetch continued row 26,649,697 330.9 34.5
    table scan blocks gotten 1,864,435,032 23,152.6 2,410.2
    table scan rows gotten 43,639,997,280 541,923.3 56,414.3
    table scans (cache partitions) 26,112 0.3 0.0
    table scans (direct read) 246,243 3.1 0.3
    table scans (long tables) 340,200 4.2 0.4
    table scans (rowid ranges) 359,617 4.5 0.5
    table scans (short tables) 9,111,559 113.2 11.8
    transaction rollbacks 4,819 0.1 0.0
    transaction tables consistent rea 824 0.0 0.0
    transaction tables consistent rea 1,386,848 17.2 1.8
    user calls 159,931,913 1,986.0 206.8
    user commits 746,543 9.3 1.0
    user rollbacks 27,020 0.3 0.0
    write clones created in backgroun 7 0.0 0.0
    write clones created in foregroun 4,350 0.1 0.0
    Buffer Pool Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> Standard block size Pools D: default, K: keep, R: recycle
    -> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
    Free Write Buffer
    Number of Cache Buffer Physical Physical Buffer Complete Busy
    P Buffers Hit % Gets Reads Writes Waits Waits Waits
    D 774,144 95.6############ 233,869,082 10,089,734 0 0########
    K 504,000 99.9############ 3,260,227 1,070,338 0 0 65,898
    R 63,504 96.2 196,079,539 7,511,863 535 0 0 0
    Buffer wait Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc
    Tot Wait Avg
    Class Waits Time (s) Time (ms)
    data block 7,791,121 14,676 2
    file header block 587 101 172
    undo header 151,617 71 0
    segment header 299,312 58 0
    1st level bmb 45,235 7 0
    bitmap index block 392 1 3
    undo block 4,250 1 0
    2nd level bmb 14 0 0
    system undo header 2 0 0
    3rd level bmb 1 0 0
    Latch Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
    willing-to-wait latch get requests
    ->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
    ->"Pct Misses" for both should be very close to 0.0
    Pct Avg Wait Pct
    Get Get Slps Time NoWait NoWait
    Latch Requests Miss /Miss (s) Requests Miss
    Consistent RBA 2,686,230 0.0 0.2 0 0
    FAL request queue 86 0.0 0 0
    FAL subheap alocation 0 0 2 0.0
    FIB s.o chain latch 1,089 0.0 0 0
    FOB s.o list latch 4,589,986 0.5 0.0 2 0
    NLS data objects 1 0.0 0 0
    SQL memory manager worka 5,963 0.0 0 0
    Token Manager 0 0 2 0.0
    active checkpoint queue 719,439 0.3 0.1 0 1 0.0
    alert log latch 184 0.0 0 2 0.0
    archive control 4,365 0.0 0 0
    archive process latch 1,808 0.6 0.6 0 0
    begin backup scn array 3,387,572 0.0 0.0 0 0
    cache buffer handles 1,577,222 0.2 0.0 0 0
    cache buffers chains ############## 0.5 0.0 430 354,357,972 0.3
    cache buffers lru chain 17,153,023 0.1 0.0 1 385,505,654 0.5
    cas latch 538,804,153 0.3 0.0 7 0
    channel handle pool latc 1,776,950 0.5 0.0 0 0
    channel operations paren 2,901,371 0.3 0.0 0 0
    checkpoint queue latch 99,329,722 0.0 0.0 0 11,153,369 0.1
    child cursor hash table 3,927,427 0.0 0.0 0 0
    commit callback allocati 8,739 0.0 0 0
    dictionary lookup 7,980 0.0 0 0
    dml lock allocation 6,767,990 0.1 0.0 0 0
    dummy allocation 1,898,183 0.2 0.1 0 0
    enqueue hash chains 27,741,348 0.1 0.1 4 0
    enqueues 17,450,161 0.3 0.1 6 0
    error message lists 132,828 2.6 0.2 1 0
    event group latch 884,066 0.0 0.7 0 0
    event range base latch 1 0.0 0 0
    file number translation 34 38.2 0.9 0 0
    global tx hash mapping 577,859 0.0 0 0
    hash table column usage 4,062 0.0 0 8,757,234 0.0
    hash table modification 16 0.0 0 2 0.0
    i/o slave adaptor 0 0 2 0.0
    job workq parent latch 4 100.0 0.3 0 494 8.7
    job_queue_processes para 1,950 0.0 0 2 0.0
    ksfv messages 0 0 4 0.0
    ktm global data 8,219 0.0 0 0
    lgwr LWN SCN 2,687,862 0.0 0.0 0 0
    library cache 310,882,781 0.9 0.0 34 104,759 4.0
    library cache load lock 30,369 0.0 0.3 0 0
    library cache pin 153,821,358 0.1 0.0 2 0
    library cache pin alloca 126,316,296 0.1 0.0 4 0
    list of block allocation 2,730,808 0.3 0.0 0 0
    loader state object free 566,036 0.1 0.0 0 0
    longop free list parent 197,368 0.0 0 8,390 0.0
    message pool operations 14,424 0.0 0.0 0 0
    messages 25,931,764 0.1 0.0 1 0
    mostly latch-free SCN 40,124,948 0.3 0.0 5 0
    Latch Sleep breakdown for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by misses desc
    Get Spin &
    Latch Name Requests Misses Sleeps Sleeps 1->4
    cache buffers chains ############## 74,770,083 1,062,119 73803903/884
    159/71439/10
    582/0
    redo allocation 170,107,983 3,441,055 149,631 3292872/1467
    48/1426/9/0
    library cache 310,882,781 2,831,747 89,240 2754499/6780
    6/7405/2037/
    0
    shared pool 158,471,190 1,755,922 55,268 1704342/4836
    9/2826/385/0
    cas latch 538,804,153 1,553,992 6,927 1547125/6808
    /58/1/0
    row cache objects 161,142,207 1,176,998 27,658 1154070/1952
    0/2560/848/0
    process queue reference 1,893,917,184 1,119,215 106,454 78758/4351/1
    36/0/0
    Library Cache Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Pct Misses" should be very low
    Get Pct Pin Pct Invali-
    Namespace Requests Miss Requests Miss Reloads dations
    BODY 3,137,721 0.0 3,137,722 0.0 0 0
    CLUSTER 6,741 0.1 4,420 0.2 0 0
    INDEX 353,708 0.8 361,065 1.2 0 0
    SQL AREA 17,052,073 0.3 54,615,678 0.9 410,682 19,628
    TABLE/PROCEDURE 3,521,884 0.2 12,922,737 0.1 619 0
    TRIGGER 1,975,977 0.0 1,975,977 0.0 1 0
    SGA Memory Summary for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    SGA regions Size in Bytes
    Database Buffers 22,330,474,496
    Fixed Size 779,288
    Redo Buffers 17,051,648
    Variable Size 7,180,648,448
    sum 29,528,953,880

  • Configuration Dataset = 90% of Data Warehouse - Event Errors 31552

    Hi All,
    I'm currently running SCOM 2012 R2 and have recently had some problems with the Data Warehouse data sync. We have around 800 servers in our production environment and no network devices, and we use Orchestrator for integration with our call-logging system,
    which I believe is where our problems started. We had a runbook which got itself into a loop and was constantly updating alerts; it also contributed to a large number of state changes. We have resolved that problem now, but I then started to receive alerts
    saying SCOM couldn't sync alert data, under event 31552.
    Failed to store data in the Data Warehouse.
    Exception 'SqlException': Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding. 
    One or more workflows were affected by this.  
    Workflow name: Microsoft.SystemCenter.DataWarehouse.StandardDataSetMaintenance 
    Instance name: Alert data set 
    Instance ID: XX
    Management group: XX
    I have been researching problems with syncing alert data and came across the queries to run the database maintenance manually. I ran that on the alert dataset and it took around 16.5 hours on the first night, then it ran fast (2 seconds) for most of the
    day; when it got to about the same time the next day it took another 9.5 hours, so I'm not sure why it's giving different results.
    Initially it appeared all of our datasets were out of sync; after the first night all appear to be in sync bar the Hourly Performance dataset, which still has around 161 OutstandingAggregations. When I run the maintenance on Performance it doesn't appear
    to be fixing it (it runs in about 2 seconds, successfully).
    I recently ran DWDatarp on the database to see how the Alert dataset was looking, and to my surprise I found that the Configuration dataset has blown out to take up 90% of the data warehouse; see the table below. Does anyone have any ideas on what might cause this
    or how I can fix it?
    Dataset name                   Aggregation name     Max Age     Current Size, Kb
    Alert data set                 Raw data                 400       132,224 (  0%)
    Client Monitoring data set     Raw data                  30             0 (  0%)
    Client Monitoring data set     Daily aggregations       400            16 (  0%)
    Configuration dataset          Raw data                 400   683,981,456 ( 90%)
    Event data set                 Raw data                 100    17,971,872 (  2%)
    Performance data set           Raw data                  10     4,937,536 (  1%)
    Performance data set           Hourly aggregations      400    28,487,376 (  4%)
    Performance data set           Daily aggregations       400     1,302,368 (  0%)
    State data set                 Raw data                 180       296,392 (  0%)
    State data set                 Hourly aggregations      400    17,752,280 (  2%)
    State data set                 Daily aggregations       400     1,094,240 (  0%)
    Microsoft.Exchange.2010.Dataset.AlertImpact Raw data                        7             0 (  0%)
    Microsoft.Exchange.2010.Dataset.AlertImpact Hourly aggregations             3             0 (  0%)
    Microsoft.Exchange.2010.Dataset.AlertImpact Daily aggregations            182             0 (  0%)
    Microsoft.Exchange.2010.Reports.Dataset.Availability Raw data                 400           176 (  0%)
    Microsoft.Exchange.2010.Reports.Dataset.Availability Daily aggregations       400             0 (  0%)
    Microsoft.Exchange.2010.Reports.Dataset.TenantMapping Raw data 7             0 (  0%)
    Microsoft.Exchange.2010.Reports.Dataset.TenantMapping Daily aggregations       400             0 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data Raw data                  3        84,864 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data Hourly aggregations        7       407,416 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data Daily aggregations       182       143,128 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data Raw data                   7         6,088 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data Hourly aggregations       31        20,056 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data Daily aggregations       182         3,720 (  0%)
    I have one other 31553 event showing up on one of the Management servers as follows,
    Data was written to the Data Warehouse staging area but processing failed on one of the subsequent operations.
    Exception 'SqlException': Sql execution failed. Error 2627, Level 14, State 1, Procedure ManagedEntityChange, Line 368, Message: Violation of UNIQUE KEY constraint 'UN_ManagedEntityProperty_ManagedEntityRowIdFromDAteTime'. Cannot insert duplicate key in
    object 'dbo.ManagedEntityProperty'. The duplicate key value is (263, Aug 26 2013  6:02AM). 
    One or more workflows were affected by this.  
    Workflow name: Microsoft.SystemCenter.DataWarehouse.Synchronization.ManagedEntity 
    Instance name: XX 
    Instance ID: XX
    Management group: XX
    Which, from my reading, means I'm likely in for an MS support call.. :( But I just wanted to see if anyone has any information about the Configuration dataset, as I couldn't find much in my searching.

    Hi All,
    The results of the MS Support call were as follows. I don't recommend doing these steps without an MS Support case (any damage you do is your own fault); these particular actions resolved our problems:
    1. Regarding the Configuration dataset being so large.
    This was caused by our AlertStage table, which was also very large. We truncated the alert stage table and ran the maintenance tasks manually to clear this up. As I didn't require any of the alerts sitting in the AlertStage table, we simply did a straight truncation
    of the table. The document linked by MHG above shows the process of doing a backup and restore of the AlertStage table if you need to. It took a few days of running maintenance tasks to resolve this problem properly. As soon as the truncation had taken
    place, the Configuration dataset dropped in size to less than a gig.
    2. Error 31553 Duplicate Key Error
    This was a problem with duplicate keys in the ManagedEntityProperty table. We identified rows which had duplicate information; these could be gathered from the events being logged on the management server.
    We then updated a few of these rows to have a slightly different time from what was already in the database. We noticed that the event kept being logged with a different row each time we updated the previous one. We ran the following query to find out how many rows
    actually had duplicates:
    -- Rows in ManagedEntityProperty whose FromDateTime matches a pending staging change
    SELECT *
    FROM ManagedEntityProperty mep
    INNER JOIN ManagedEntity me ON mep.ManagedEntityRowId = me.ManagedEntityRowId
    INNER JOIN ManagedEntityStage mes ON mes.ManagedEntityGuid = me.ManagedEntityGuid
    WHERE mes.ChangeDateTime = mep.FromDateTime
    ORDER BY mep.ManagedEntityRowId
    This returned over 25,000 duplicate rows. Rather than replace the times for all the rows, we removed all duplicates from the database (best to have MS check this one out for you if you have a lot of data).
    After doing this there was a lot of data moving through the staging tables (I assume from the management server that couldn't communicate properly), so once again we truncated the AlertStage table as it wasn't keeping up. Once this was done everything worked
    properly and all the queues stayed under control.
    To confirm things had been cleared up, we checked that the AlertStage table and the ManagedEntityStage table had no entries. We also confirmed that the 31553 events stopped on the management server.
    Hopefully this can help someone, or provide a bit more information on these problems.
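    For reference, the manual dataset maintenance mentioned above is usually driven from the OperationsManagerDW database along these lines. The table and procedure names below are the commonly documented ones; treat them as assumptions to verify (ideally with MS Support) before running anything:
    -- Hedged sketch, run against OperationsManagerDW.
    -- Look up the dataset to maintain (e.g. the Alert dataset).
    SELECT DatasetId, SchemaName
    FROM StandardDataset
    WHERE SchemaName = 'Alert';
    -- Kick off maintenance for that dataset manually (repeat until it catches up).
    DECLARE @DatasetId uniqueidentifier;
    SELECT @DatasetId = DatasetId FROM StandardDataset WHERE SchemaName = 'Alert';
    EXEC StandardDatasetMaintenance @DatasetId;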

  • Sales order not appearing in Sales Data Warehouse sales order list

    Hi Gurus,
    Can you help me in this regard: my sales order has been invoiced, but it's not appearing in the Sales Data Warehouse sales order list.

    Hi poonam,
    I don't get you clearly, but I take it that your sales order doesn't exist in your cube or ODS in BW.
    If so, check whether the delta load has been done or not.
    Before that, check in tables VBAK (header data) and VBAP (item data) whether your sales order exists; a quick check is sketched below.
    Hope this helps.
    Regards,
    Reddy
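    A minimal sketch of that VBAK/VBAP check (the order number is a placeholder; in practice you would use SE16 or a DB tool with access to the ERP schema):
    -- Does the sales order exist in the source tables at all?
    SELECT k.vbeln, k.erdat, k.auart, p.posnr, p.matnr
    FROM vbak k
    INNER JOIN vbap p ON p.vbeln = k.vbeln
    WHERE k.vbeln = '0000012345';   -- placeholder order number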
