Custom Inherited/Derived Class & the Data Warehouse

Hi there,
I'd like to reference an older post by Travis regarding getting a Custom Class into the Data Warehouse:
DW Reporting on Custom Class
Scenario:
I've created an Inherited/Derived Class, with its own Form (not an Extension). This Class inherits from the Base Incident Class.
The Inherited Class has two (2) new properties (a String and a List property).
The Class and Form are all working well within the SCSM Console.
By default, this Class is inheriting all the Relationships (like AssignedTo, AffectedUser, CreatedBy, HasConfigItem) from the Base Incident Class without me having to do anything in the MP apart from including the <Reference> and <TypeProjection> components.
To get this Class into the Data Warehouse, I am aware that it has to be defined there - that is, a new dimension has to be created for this Derived Class.
Questions:
1. Is it recommended to have these dimension(s) in a separate sealed MP, or to include them in the (sealed) derived Class MP - is there a best practice?
2. Online resources (TechNet and others) state that a dimension has to be created for the new Class (<Dimensions>), but do I also need to include <Outriggers> as well as <Facts> for this Class?
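For what it's worth, once the dimension MP is deployed you can sanity-check the result from the DWDataMart side. A minimal T-SQL sketch, assuming a hypothetical dimension ID of MyDerivedIncidentDim (substitute whatever ID you give your <Dimension> element):

-- Run against the DWDataMart database after the MPSync and transform jobs
-- have completed. A dimension from a synced MP surfaces as a table plus a
-- view carrying the "vw" suffix.
USE DWDataMart;

SELECT name, type_desc
FROM   sys.objects
WHERE  name LIKE 'MyDerivedIncidentDim%';

-- Spot-check that the two new properties arrived as columns:
SELECT TOP (10) *
FROM   dbo.MyDerivedIncidentDimvw;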

In JDeveloper 10.1.2, I configure in adfjclient_binding.xml:
<controlDefinition name="JTable Ext"
                   className="com.lib.swing.JTableExt"
                   classPath=""
                   shortLabel="JTableExt"
                   longLabel="com.lib.swing.JTableExt"
                   tooltipText="com.lib.swing.JTableExt"
                   bindingType="DCTable"
                   icon="/oracle/ideimpl/resource/images/palette/JTable.gif">
  <useTemplate>
    <![CDATA[${FieldName}.setBinding(panelBinding,"${BindingName}","${IteratorBinding.getId()}")]]>
  </useTemplate>
  <imports>
    <![CDATA[javax.swing.JTable;javax.swing.table.TableModel;com.lib.swing.JTableExt]]>
  </imports>
</controlDefinition>
In my projects, I simply drag and drop JTableExt onto my form, and it auto-generates this code:
jTableExt1.setBinding(panelBinding, "SubProfileTypeView2", "SubProfileTypeView1Iterator");
This works because the setBinding function of JTableExt in my library calls setModel and some other functions.
Now, in JDeveloper 10.1.3, when I drag and drop JTableExt onto my form, I have to write this line myself:
jTableExt1.setBinding(panelBinding, "SubProfileTypeView2", "SubProfileTypeView1Iterator");
Is there a way to avoid having to write this code by hand?
Jacque

Similar Messages

  • How to Troubleshoot why data is not moving over into the Data Warehouse after SQL Server Agent Job Run

    Hello,
    Here is my problem:
    Data was imported into the staging area. After resolving some errors and running the job, I got the data to move over to the next area. From there, data should be moving over into the DW. I have been troubleshooting for hours and cannot resolve this issue. I have restarted the SQL Server services, run a couple of packages manually, and the job is running successfully.
    What are some reasons why data is not getting into the data warehouse? Where should I be looking?
    Your help is greatly appreciated!!

    Anything is possible.
    So, just to reiterate: running the job manually works, while running the scheduled job results in neither errors nor data arriving in the DW, right? And it used to work, correct?
    If so, the first step would be to examine the configuration(s). But not before you inspect the package. Do you have the ability to export it to the file system and open it in BIDS?
    Arthur My Blog
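    For example, to see whether the scheduled runs even executed (and what they reported), a quick look at the job history in msdb can help; the job name below is a placeholder:

    -- Recent outcome history for the SQL Server Agent job (replace the name).
    SELECT TOP (20)
           j.name AS job_name,
           h.run_date,
           h.run_time,
           h.run_status,   -- 0 = failed, 1 = succeeded, 3 = canceled
           h.message
    FROM   msdb.dbo.sysjobs j
    JOIN   msdb.dbo.sysjobhistory h ON h.job_id = j.job_id
    WHERE  j.name = N'DW Load'
    ORDER  BY h.run_date DESC, h.run_time DESC;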

  • Windows Client MP is using the default action account to write to the Data Warehouse

    Hi,
    I have recently installed SCOM 2012 R2, and am in the process of migrating everything from SCOM 2007 R2.
    Just after I installed the Windows Client MP (version 6.0.7250.0), I started getting this Event 11852:
    OleDb Module encountered a failure 0x80004005 during execution and will post it as output data item. Unspecified error
    : Cannot open database "OperationsManagerDW" requested by the login. The login failed.
    Workflow name: Microsoft.Windows.Client.Win7.ComputerGroup.MemoryTrendsRAM
    Instance name: Microsoft System Center Data Warehouse
    Instance ID: {16781F33-F72D-033C-1DF4-65A2AFF32CA3}
    Management group: 2012SCOMDEV
    This is happening every 7 days and only on the Win7 management pack. I checked the DW logs and found that it was using the default action account (SCOM ACTION) to send to the DW. This account doesn't have access to the DW DB.
    I assume it should be using the Data Warehouse action account. This is how the account is set up.
    Can anyone tell me where I'm going wrong here? I have not had to touch the permissions for any of the other management packs; the rest of them just work.

    For this issue, you can check below link
    http://blog.metasplo.it/2014/01/scom-2012-oledb-module-0x80004005.html
    Please remember, if you see a post that helped you please click "Vote As Helpful", and if it answered your question, please click "Mark As Answer".
    Mai Ali | My blog: Technical | Twitter: Mai Ali
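    As a supplementary check (a read-only sketch, not the fix itself - the post above covers the Run As profile side), you can list which accounts actually have access to the DW database and confirm the default action account is missing:

    USE OperationsManagerDW;

    -- Who has a database user here, and in which roles?
    SELECT dp.name,
           dp.type_desc,          -- WINDOWS_USER / WINDOWS_GROUP / SQL_USER
           r.name AS role_name
    FROM   sys.database_principals dp
    LEFT JOIN sys.database_role_members drm ON drm.member_principal_id = dp.principal_id
    LEFT JOIN sys.database_principals r     ON r.principal_id = drm.role_principal_id
    WHERE  dp.type IN ('U', 'G', 'S')
    ORDER  BY dp.name;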

  • Upgrade OM 2012 to SP1 Beta - Version of SQL Server for the Operational Database and the Data Warehouse

    Hello,
    When I try to verify the prerequisites to upgrade my SCOM 2012 UR2 platform to SP1 Beta, I get these errors:
    The installed version of SQL Server is not supported for the operational database.
    The installed version of SQL Server is not supported for the data warehouse.
    But when I execute this query Select @@version on my MSSQL Instance, the result is :
    Microsoft SQL Server 2008 R2 (SP1) - 10.50.2500.0 (X64)   Jun 17 2011 00:54:03   Copyright (c) Microsoft Corporation  Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor) 
    But here, we can see that SQL Server 2008 R2 SP1, SQL Server 2008 R2 SP2, SQL Server 2012, and SQL Server 2012 SP1 are supported.
    Do I need to patch my MSSQL Server with a specific cumulative update package?
    Thanks.

    These are the requirements for your SQL:
    SQL Server 2008 and SQL Server 2012 are available in both Standard and Enterprise editions. Operations Manager will function with both editions.
    Operations Manager does not support hosting its databases or SQL Server Reporting Services on a 32-bit edition of SQL Server.
    Using a different version of SQL Server for different Operations Manager features is not supported. The same version should be used for all features.
    SQL Server collation settings for all databases must be one of the following: SQL_Latin1_General_CP1_CI_AS, French_CI_AS, Cyrillic_General_CI_AS, Chinese_PRC_CI_AS, Japanese_CI_AS, Traditional_Spanish_CI_AS, or Latin1_General_CI_AS.  No other collation
    settings are supported.
    The SQL Server Agent service must be started, and the startup type must be set to automatic.
    Side-by-side installation of System Center Operations Manager 2007 R2 reporting and System Center 2012 Service Pack 1 (SP1) Operations Manager reporting on the same server is not supported.
    The db_owner role for the operational database must be a domain account. If you set the SQL Server Authentication to Mixed mode, and then try to add a local SQL Server login on the operational database, the Data Access service will not be able to start.
    For information about how to resolve the issue, see
    System Center Data Access Service Start Up Failure Due to SQL Configuration Change
    If you plan to use the Network Monitoring features of System Center 2012 – Operations Manager, you should move the tempdb database to a separate disk that has multiple spindles. For more information, see
    tempdb Database.
    http://technet.microsoft.com/en-us/library/jj656654.aspx#BKMK_RBF_OperationsDatabase
    Check the SQL Server Agent service and see whether it is set to automatic AND started. This got me confused at my first SP1 install as well. It is not done by default...
    It's doing common things uncommonly well that brings success.
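    To verify the version, edition, and collation points above in one pass, a small T-SQL sketch (database names assumed to be the defaults):

    SELECT SERVERPROPERTY('ProductVersion') AS version,       -- 10.50.2500 = 2008 R2 SP1
           SERVERPROPERTY('ProductLevel')   AS service_pack,
           SERVERPROPERTY('Edition')        AS edition,
           SERVERPROPERTY('Collation')      AS server_collation;

    -- Collation per database must be on the supported list quoted above:
    SELECT name, collation_name
    FROM   sys.databases
    WHERE  name IN (N'OperationsManager', N'OperationsManagerDW');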

  • Service Level Tracking Reports show no data from the data warehouse db.

    Hi, I have some Service Level Tracking reports that will only show data from the last few days.  If I select last month as the time frame, there is no data displayed.  All of these reports have previously worked fine.  Nothing to my knowledge
    has changed, and it appears that the data is still being collected in the data warehouse database.  Any thoughts would be greatly appreciated.

    Is this meant for the SCOM forum?
    https://social.technet.microsoft.com/Forums/systemcenter/en-US/home?category=systemcenteroperationsmanager
    Gerry Hampson | Blog: www.gerryhampsoncm.blogspot.ie | LinkedIn: Gerry Hampson | Twitter: @gerryhampson

  • Anyone know how to use the "custom" option present under the data access tab in the XLS file format of Data Services

    Hi Experts,
    Does anyone know how to use, or what the purpose is of, the "custom" option present under the Data Access tab in the Excel workbook file format of Data Services?
    Thanks in Advance,
    Rajesh.

    Rajesh, what is the custom protocol you are trying to use? It should be something like PSFTP, etc.
    Cheers
    Ganesh Sampath

  • Segregating sections of the data warehouse solution

    I'm helping to set up a data warehouse for my company and had a question about the best way to organize my databases.
    I have a pretty beefy server with a lot of memory, a powerful processor and plenty of drive space. For that reason and others we plan on putting everything on a single server.
    We have a few areas defined for our warehouse: staging, reporting / application data and analysis services data. Staging will be made up of several databases as will the reporting / application area. We have two options in front of us now for how we configure
    our environment. Either we:
    1. Put all databases on a single instance and prefix each database with its area name, for example Staging_ (e.g. Staging_Payments, Staging_Commissions, Staging_Core). In this case the databases for all areas will be together, so to avoid confusion about the purpose of a database (the area it belongs in) we would have to prefix the database name with the area name.
    OR
    2. Create a separate instance on the same server named "Staging", into which we put all of our staging databases and then just name them after their source (e.g. Payments, Commissions, Core).
    I'm partial to option 2 because it feels cleaner but I don't know if there are reasons not to do this. We will have applications and ETLs that have to query data across instances but since we're on the same server I imagine the overhead would be negligible.
    I appreciate anyone's insight.

    Hi Omatase,
    I have set up a lot of SQL Server data warehouses... we are even about to get into hosted/cloud data warehouses.
    Unless you have some compelling reason, your staging area should be just one database, and all staging data should flow into it. You can have your data warehouse on the same machine or replicate a distributed version to another machine. It's all pretty simple now.
    The data warehouse should be a separate database. You should push data from your staging database to your data warehouse database. If the machine can handle the load? Great. We are seeing that faster and faster machines are handling these loads more and more easily.
    There is no reason I can think of to have separate SQL Server instances on the one machine. It will not give you much advantage, for some pretty severe disadvantages.
    These forums are not for people like me to sell our wares so if you want to talk to me in more detail about how we do these things please feel free to email me on [email protected] Ok? We have very complete ways in which we set up data warehouses and ETL
    that are much faster and cheaper than anyone else.
    Peter Nolan
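    For completeness, the "one staging database" advice combines well with schemas, which give the grouping that option 1 was trying to get from name prefixes, without a second instance; a minimal sketch with illustrative names:

    CREATE DATABASE Staging;
    GO
    USE Staging;
    GO
    CREATE SCHEMA Payments;
    GO
    CREATE SCHEMA Commissions;
    GO
    CREATE SCHEMA Core;
    GO
    -- Tables then read naturally as Staging.Payments.Invoice, and cross-area
    -- queries stay on one instance with no linked-server overhead.
    CREATE TABLE Payments.Invoice (
        InvoiceId   int          NOT NULL,
        LoadedAtUtc datetime2(3) NOT NULL DEFAULT sysutcdatetime()
    );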

  • Event data collection process unable to write data to the Data Warehouse

    Alert Description:
    Event data collection process unable to write data to the Data Warehouse. Failed to store data in the Data Warehouse. The operation will be retried.
    Exception 'InvalidOperationException': The given value of type Int32 from the data source cannot be converted to type tinyint of the specified target column.
    Running SCOM 2007 R2 on Server 2008 R2 with SQL Server 2008 R2. I can only find a single reference to this exact error on the Internet. It started occurring on a weekend, and no changes were made to the SCOM server directly before it occurred. Does anyone know what the error means and/or how to fix it?
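    For reference, the exception itself is just a range problem - tinyint holds 0 to 255, so any Int32 value outside that range fails the conversion. A one-liner that reproduces the same class of failure:

    DECLARE @v int = 300;
    SELECT CAST(@v AS tinyint);  -- Msg 220: arithmetic overflow for data type tinyint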

    Hello,
    I would suggest the following threads for your reference:
    Troubles with DataWarehouse database
    http://social.technet.microsoft.com/Forums/en-US/operationsmanagergeneral/thread/5e7005ae-d5d8-4b5c-a51c-740634e3da4e
    Data Warehouse configuration synchronization process failed to read state
    http://social.technet.microsoft.com/Forums/en-US/systemcenter/thread/8ea1f4b9-115b-43cd-b66f-617533703047
    Thanks,
    Yog Li
    TechNet Community Support

  • Alert data collection process unable to write data to the Data Warehouse

    Alert data collection process unable to write data to the Data Warehouse. Failed to store data in the Data Warehouse. The operation will be retried.
    Exception 'InvalidOperationException': The given value of type String from the data source cannot be converted to type nvarchar of the specified target column.
    One or more workflows were affected by this.
    Workflow name: Microsoft.SystemCenter.DataWarehouse.CollectAlertData
    Instance name: Data Warehouse Synchronization Service
    Instance ID: {9A0B3744-A559-3080-EA82-D22638DAC93D}
    Management group: SCOMMG
    Can anybody help?

    About 24 hours ago, one of my four management servers began generating this error every 10 minutes; we only upgraded to SCOM 2012 R2 a couple of weeks ago, and I have NOT installed UR1. No new management packs or database changes have been made within the last week; KB945946 is not related to this. An Event ID 11411 warning started occurring around the same time this started and repeats every 10 minutes, too:
    Alert subscription data source module encountered alert subscriptions that were waiting for a long time to receive an acknowledgement.
    Alert subscription ruleid, alert subscription query low watermark, alert subscription query high watermark:
    5fcdbf15-4f5b-29db-ffdc-f2088a0f33b7, 03/27/2014 00:01:39, 03/27/2014 20:30:00
    Performance on the Data Warehouse database server seems fine; CPU, memory and disk I/O are good.
    How can we identify where the problem is?

  • Design the data warehouse around the reporting system?

    Hi All,
    A Jr. data warehouse developer resisted my suggestion to flatten out activity tables of differing grains into a single fact table.  (Think sales order header, sales order detail, and even a 3rd level of details to each sales order detail.)  Although
    he agreed that flattening out the fact tables into a single fact would be proper for a data warehouse, he's concerned that report developers will have an easier time querying the data warehouse with the 3 separate fact tables.  I'm not sure if it's because
    the report developers don't like learning new schemas or if their reporting tool is just severely limited, mainly because I've never used Cognos.  I assured him that a properly-designed data warehouse will save on query execution time, but he's concerned
    about the reporting tool and how it may not work so well with the data warehouse.  
    Did I give him the proper advice?  It seems like a data warehouse should be built properly regardless of reporting tool shortcomings.  Assuming this tool is lousy, maybe they need a new reporting system for their new data warehouse.
    Thanks,
    Eric

    Hi Eric,
    one of the hard and fast rules of building a data warehouse is that from a logical point of view the fact table presents data at a certain level of granularity and that you do not mix facts in fact tables. This is data warehousing 101.
    From your comment you seem to be suggesting mixing data of different granularity in the one table.
    Now, we have ways and means of co-habiting data that will appear as different fact tables in the one physical table. We control the physical placement of data in fact tables. But on SQL Server we would never mix facts at different granularities or representing
    different data in the one fact table. SQL Server supports that quite poorly.
    It is sad that in 2015 people are still messing up data warehouse projects from pure ignorance of what is available. We have data warehouse data models that are extremely extensive, but people just have to start from scratch, reinvent the wheel, and fail over and over again. Sad but true.
    Best Regards 
    Peter Nolan
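    To make the "one grain per fact table" rule concrete for the sales-order case above, a minimal T-SQL sketch (names illustrative): two fact tables, each at its own declared grain, sharing conformed dimensions.

    CREATE TABLE FactSalesOrderHeader (       -- grain: one row per order
        OrderKey     int            NOT NULL PRIMARY KEY,
        CustomerKey  int            NOT NULL,
        OrderDateKey int            NOT NULL,
        OrderTotal   decimal(18, 2) NOT NULL
    );

    CREATE TABLE FactSalesOrderLine (         -- grain: one row per order line
        OrderKey     int            NOT NULL, -- ties back to the header row
        LineNumber   int            NOT NULL,
        ProductKey   int            NOT NULL,
        Quantity     int            NOT NULL,
        LineAmount   decimal(18, 2) NOT NULL,
        PRIMARY KEY (OrderKey, LineNumber)
    );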

  • Sharepoint Custom calendar – Hover over the date to add a new item is not working – Sharepoint 2010

    Hi,
    In my SharePoint visual web part I am using the default SharePoint calendar view, but mouse hover over the date to add a new item is not working. Please see the image below - I need the same add-new-item functionality.
    [screenshot omitted]

    Hi Sudhanthira,
    I can see a couple of similar queries posted by Madhu.
    Please follow this thread:
    http://social.technet.microsoft.com/Forums/en-US/sharepoint2010programming/thread/b62f9b7e-2ce1-4efd-905c-9cc5471ad216
    To be or Not to Be..The question is this only......

  • Custom Process for updating the date

    Hello everyone,
    I have a page with checkboxes where the user can select a record and update it by pressing a button.
    I have three processes, and each of these processes updates one field (a date field) of the page when its specific button is pressed. Here are the three processes:
    declare
    begin
      for i in 1..apex_application.g_f01.count loop
        UPDATE REC_RET_ADD_RECORD
           SET DATE_ADMIN_APPROVED = :P8_DATE_ADMIN_APPROVED
         WHERE REC_RET_ID = APEX_APPLICATION.G_F01(I);
      end loop;
      commit;
    end;
    Condition: When Admin button pressed (submit)
    The second process:
    declare
    begin
      for i in 1..apex_application.g_f01.count loop
        UPDATE REC_RET_ADD_RECORD
           SET DATE_COMMITTEE_APPROVED = :P8_DATE_COMMITTEE_APPROVED
         WHERE REC_RET_ID = APEX_APPLICATION.G_F01(I);
      end loop;
      commit;
    end;
    Condition: When Committee button pressed (submit)
    The third process:
    declare
    begin
      for i in 1..apex_application.g_f01.count loop
        UPDATE REC_RET_ADD_RECORD
           SET DATE_OHS_APPROVED = :P8_DATE_OHS_APPROVED
         WHERE REC_RET_ID = APEX_APPLICATION.G_F01(I);
      end loop;
      commit;
    end;
    Condition: When OHS button pressed (submit)
    Now the problem is: when the user selects a row, selects the date for the Admin field, and presses the Admin button, the system inserts the date. But after that, if the user selects the same row and updates the OHS date, the system removes the date from the Admin field when it inserts the OHS date into the OHS field, and vice versa.
    I don't know why this is happening. Could you please help me? I appreciate it.

    Here is the report query:
    SELECT apex_item.checkbox (1,
                               REC_RET_ID,
                               'onchange="spCheckChange(this);"',
                               :f_REC_RET_ID_list
                              ) checkbox,
    "REC_RET_ID",
    "RET_SCHEDULE_NUM",
    "RET_COST_CENTER",
    "RET_TITLE",
    "RET_TITLE_DESC",
    "RET_PERIOD",
    "DATE_USER_INSERTED",
    "DATE_ADMIN_APPROVED",
    "DATE_COMMITTEE_APPROVED",
    "RC3_FORM_REQUIRE",
    "DATE_OHS_APPROVED",
    "DESTRUCTION_SUSPENDED",
    "DESTRUCTION_SUSPENDED_NOTE",
    "INACTIVE_DATE",
    "INACTIVE_DATE_APPROVED",
    "CREATION_DATE",
    "CREATED_BY",
    "LAST_UPDATE_DATE",
    "LAST_UPDATE_BY"
    from REC_RET_ADD_RECORD
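    If the root cause turns out to be that more than one of the three processes fires on submit, one way to rule that out is a single process keyed off :REQUEST, so only the column matching the pressed button is ever touched. A sketch, assuming the buttons submit with request values ADMIN, COMMITTEE, and OHS (adjust to your button names):

    begin
      for i in 1..apex_application.g_f01.count loop
        if :REQUEST = 'ADMIN' then
          UPDATE REC_RET_ADD_RECORD
             SET DATE_ADMIN_APPROVED = :P8_DATE_ADMIN_APPROVED
           WHERE REC_RET_ID = apex_application.g_f01(i);
        elsif :REQUEST = 'COMMITTEE' then
          UPDATE REC_RET_ADD_RECORD
             SET DATE_COMMITTEE_APPROVED = :P8_DATE_COMMITTEE_APPROVED
           WHERE REC_RET_ID = apex_application.g_f01(i);
        elsif :REQUEST = 'OHS' then
          UPDATE REC_RET_ADD_RECORD
             SET DATE_OHS_APPROVED = :P8_DATE_OHS_APPROVED
           WHERE REC_RET_ID = apex_application.g_f01(i);
        end if;
      end loop;
      commit;
    end;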

  • Query to view the support group of a generated ticket in the Database, not the Data Warehouse

    Hi,
    I need a query to view the support group of a generated ticket, but the query must work against the operational Database only (I don't know what happened in the DW - it could be an issue).
    Please help me,
    Greetings! 

    it's "possible", in the same way that low temperature fusion and space elevators are "possible". There's nothing conclusively ruling it out, but not something you want to attempt on your own under a time constraint. 
    The "Active" ServiceManager database is a 5th or 6th normal form dynamic software defined snowflake schema representitive object database. Updating the database directly is strictly unsupported, and
    Travis has discouraged people from even trying to read from it. a college of mine does service manager and operations manager databases at a 400+ level full time, and after 2+ years of doing it, he
    almost understands the operational database. it's WELL beyond me, and I do Service Manager basically full time. 
    That being said, if you are brave and only looking for some basic data, you might be able to tease some things out with the SQL management studio and a good grasp of SQL syntax.
     for a production environment, you're much better off fixing the DW. 
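    If you do go down the unsupported read-only route, a starting point might look like this (the table below is the managed-type table for incidents; the support group is stored there as an enumeration GUID in a column whose exact name carries an install-specific suffix, so inspect the table first):

    USE ServiceManager;

    -- Strictly read-only exploration, per the caveats above.
    SELECT TOP (10) *
    FROM   [dbo].[MT_System$WorkItem$Incident];

    -- The enum GUID can then be translated to a display name via
    -- dbo.DisplayStringView (join columns vary by build - verify in SSMS).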

  • How do i bring the management pack to data warehouse?

    I have been working with Service Manager a lot, but this is my first custom management pack in SCOM.
    What I am trying to do is create two classes and a relationship between them. Data will be entered into these classes using PowerShell. These classes should be brought into the Data Warehouse as well. There is no discovery or monitoring involved. I have sealed the management pack.
    I created the classes and the relationship and imported the management pack in the SCOM console. The import was successful, and I can see the tables are created in the OperationsManager database, but I don't see them in the data warehouse.
    I don't see any place from where I can force-sync the management packs.
    Is there something special I have to do to bring these classes into the data warehouse?

    Hi,
    SCOM will not create tables for an MP in the DW. The DW is intended to store the results of monitors and rules (alerts, performance data, event IDs, etc.) for reporting.
    Refer to the link below for more information:
    http://blogs.technet.com/b/stefan_stranger/archive/2009/08/15/everything-you-wanted-to-know-about-opsmgr-data-warehouse-grooming-but-were-afraid-to-ask.aspx
    Regards
    sridhar v
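    To see what the OpsMgr DW actually maintains - datasets rather than per-MP tables - a read-only look at the dataset catalog helps (the grooming post above queries the same tables):

    USE OperationsManagerDW;

    SELECT ds.DatasetDefaultName,
           sds.SchemaName
    FROM   dbo.Dataset ds
    JOIN   dbo.StandardDataset sds ON sds.DatasetId = ds.DatasetId;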

  • What are the best solutions for data warehouse configuration in 10gR2

    I need help with solutions to propose to my client for upgrading the data warehouse.
    Current configuration: Oracle Database 9.2.0.8. This database contains the data warehouse and one more data mart on the same host. Their sizes are respectively 6 terabytes (retention policy of 3 years + current year) and 1 terabyte. The ETL tool and BO reporting tools are also hosted on the same host. This current configuration is performing really poorly.
    The client cannot make major architectural or configuration changes to its existing environment now due to some constraints.
    However, they have agreed to separate the databases out onto hosts separate from the ETL tools and BO objects. We are also planning to upgrade the database to 10gR2 to attain stability and better performance and to overcome the current headaches.
    We cannot upgrade the database to 11g, as BO is at version 6.5, which isn't compatible with Oracle 11g. And the client cannot afford to upgrade anything else other than the database.
    So my role is vital in providing a solution for better performance and in carrying out a successful migration of the Oracle database from one host to another (similar platform and OS) in addition to the upgrade.
    I have till now thought of the following:
    Move the Oracle database and data mart to a separate host.
    The host will be the same platform, that is, HP Superdome with HP-UX 32-bit OS (we cannot change to 64-bit as the ETL tool doesn't support it).
    Install a new Oracle Database 10g on the new host and move the data to it.
    Explore all the new 10gR2 features that help a data warehouse: the SQL MODEL clause, parallel processing, partitioning, Data Pump, and SPA to study pre- and post-migration performance.
    Also thinking of RAC, as our main motive is to show a tremendous performance enhancement.
    I need all your help to prepare a good road map for my assignment. Please suggest.
    Thanks,
    Tapan
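    On the partitioning point, one concrete payoff: range partitioning by date turns the "3 years + current year" retention policy into a metadata operation instead of a huge DELETE. A hedged Oracle sketch with illustrative names:

    CREATE TABLE sales_fact (
        sale_date  DATE         NOT NULL,
        product_id NUMBER       NOT NULL,
        amount     NUMBER(18,2) NOT NULL
    )
    PARTITION BY RANGE (sale_date) (
        PARTITION p2009 VALUES LESS THAN (DATE '2010-01-01'),
        PARTITION p2010 VALUES LESS THAN (DATE '2011-01-01'),
        PARTITION p2011 VALUES LESS THAN (DATE '2012-01-01'),
        PARTITION p2012 VALUES LESS THAN (DATE '2013-01-01'),
        PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    );

    -- Ageing out the oldest year is then near-instant:
    ALTER TABLE sales_fact DROP PARTITION p2009;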

    SGA=27.5 GB and PGA=50 MB
    Also, I am pasting part of the STATSPACK report, eliminating the snaps around the DB bounce. Please suggest the scope for improvement in this case.
    STATSPACK report for
    Snap Id Snap Time Sessions Curs/Sess Comment
    Begin Snap: 582946 11-Mar-13 20:02:16 46 12.8
    End Snap: 583036 12-Mar-13 18:24:24 60 118.9
    Elapsed: 1,342.13 (mins)
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
    Buffer Cache: 21,296M Std Block Size: 16K
    Shared Pool Size: 6,144M Log Buffer: 16,384K
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 1,343,739.01 139,883.39
    Logical reads: 100,102.54 10,420.69
    Block changes: 3,757.42 391.15
    Physical reads: 6,670.84 694.44
    Physical writes: 874.34 91.02
    User calls: 1,986.04 206.75
    Parses: 247.87 25.80
    Hard parses: 5.82 0.61
    Sorts: 1,566.76 163.10
    Logons: 10.99 1.14
    Executes: 1,309.79 136.35
    Transactions: 9.61
    % Blocks changed per Read: 3.75 Recursive Call %: 43.34
    Rollback per transaction %: 3.49 Rows per Sort: 190.61
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.90 Redo NoWait %: 100.00
    Buffer Hit %: 96.97 In-memory Sort %: 100.00
    Library Hit %: 99.27 Soft Parse %: 97.65
    Execute to Parse %: 81.08 Latch Hit %: 99.58
    Parse CPU to Parse Elapsd %: 3.85 % Non-Parse CPU: 99.34
    Shared Pool Statistics Begin End
    Memory Usage %: 7.11 50.37
    % SQL with executions>1: 62.31 46.46
    % Memory for SQL w/exec>1: 26.75 13.47
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~ % Total
    Event Waits Time (s) Ela Time
    CPU time 492,062 43.66
    db file sequential read 157,418,414 343,549 30.49
    library cache pin 92,339 66,759 5.92
    PX qref latch 63,635 43,845 3.89
    db file scattered read 2,506,806 41,677 3.70
    Background Wait Events for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    Total Wait wait Waits
    Event Waits Timeouts Time (s) (ms) /txn
    log file sequential read 176,386 0 3,793 22 0.2
    log file parallel write 2,685,833 0 1,813 1 3.5
    db file parallel write 239,166 0 1,350 6 0.3
    control file parallel write 33,432 0 79 2 0.0
    LGWR wait for redo copy 478,120 536 75 0 0.6
    rdbms ipc reply 10,027 0 47 5 0.0
    control file sequential read 32,414 0 40 1 0.0
    db file scattered read 4,101 0 30 7 0.0
    db file sequential read 13,946 0 29 2 0.0
    direct path read 203,694 0 14 0 0.3
    log buffer space 363 0 13 37 0.0
    latch free 3,766 0 9 2 0.0
    direct path write 80,491 0 6 0 0.1
    async disk IO 351,955 0 4 0 0.5
    enqueue 28 0 1 21 0.0
    buffer busy waits 1,281 0 1 0 0.0
    log file single write 172 0 0 1 0.0
    rdbms ipc message 10,563,204 251,286 992,837 94 13.7
    pmon timer 34,751 34,736 78,600 2262 0.0
    smon timer 7,462 113 76,463 10247 0.0
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    CPU used by this session 49,206,154 611.0 63.6
    CPU used when call started 49,435,735 613.9 63.9
    CR blocks created 6,740,777 83.7 8.7
    Cached Commit SCN referenced 423,253,503 5,256.0 547.2
    Commit SCN cached 19,165 0.2 0.0
    DBWR buffers scanned 48,276,489 599.5 62.4
    DBWR checkpoint buffers written 6,959,752 86.4 9.0
    DBWR checkpoints 454 0.0 0.0
    DBWR free buffers found 44,817,183 556.5 57.9
    DBWR lru scans 137,149 1.7 0.2
    DBWR make free requests 162,528 2.0 0.2
    DBWR revisited being-written buff 4,220 0.1 0.0
    DBWR summed scan depth 48,276,489 599.5 62.4
    DBWR transaction table writes 5,036 0.1 0.0
    DBWR undo block writes 2,989,436 37.1 3.9
    DDL statements parallelized 3,723 0.1 0.0
    DFO trees parallelized 4,157 0.1 0.0
    DML statements parallelized 3 0.0 0.0
    OS Block input operations 29,850 0.4 0.0
    OS Block output operations 1,591 0.0 0.0
    OS Characters read/written 182,109,814,791 2,261,447.1 235,416.9
    OS Integral unshared data size ################## 242,463,432.4 ############
    OS Involuntary context switches 188,257,786 2,337.8 243.4
    OS Maximum resident set size 43,518,730,619 540,417.4 56,257.5
    OS Page reclaims 159,430,953 1,979.8 206.1
    OS Signals received 5,260,938 65.3 6.8
    OS Socket messages received 79,438,383 986.5 102.7
    OS Socket messages sent 93,064,176 1,155.7 120.3
    OS System time used 10,936,430 135.8 14.1
    OS User time used 132,043,884 1,639.7 170.7
    OS Voluntary context switches 746,207,739 9,266.4 964.6
    PX local messages recv'd 55,120,663 684.5 71.3
    PX local messages sent 55,120,817 684.5 71.3
    Parallel operations downgraded 1 3 0.0 0.0
    Parallel operations not downgrade 4,154 0.1 0.0
    SQL*Net roundtrips to/from client 155,422,335 1,930.0 200.9
    SQL*Net roundtrips to/from dblink 18 0.0 0.0
    active txn count during cleanout 16,529,551 205.3 21.4
    background checkpoints completed 43 0.0 0.0
    background checkpoints started 43 0.0 0.0
    background timeouts 280,202 3.5 0.4
    branch node splits 4,428 0.1 0.0
    buffer is not pinned count 6,382,440,322 79,257.4 8,250.7
    buffer is pinned count 9,675,661,370 120,152.8 12,507.9
    bytes received via SQL*Net from c 67,384,496,376 836,783.4 87,109.3
    bytes received via SQL*Net from d 6,142 0.1 0.0
    bytes sent via SQL*Net to client 50,240,643,657 623,890.4 64,947.1
    bytes sent via SQL*Net to dblink 3,701 0.1 0.0
    calls to get snapshot scn: kcmgss 145,385,064 1,805.4 187.9
    calls to kcmgas 36,816,132 457.2 47.6
    calls to kcmgcs 3,514,770 43.7 4.5
    change write time 369,373 4.6 0.5
    cleanout - number of ktugct calls 20,954,488 260.2 27.1
    cleanouts and rollbacks - consist 6,357,174 78.9 8.2
    cleanouts only - consistent read 10,078,802 125.2 13.0
    cluster key scan block gets 69,403,565 861.9 89.7
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    cluster key scans 41,311,211 513.0 53.4
    commit cleanout failures: block l 413,776 5.1 0.5
    commit cleanout failures: buffer 414 0.0 0.0
    commit cleanout failures: callbac 41,194 0.5 0.1
    commit cleanout failures: cannot 174,382 2.2 0.2
    commit cleanouts 11,469,056 142.4 14.8
    commit cleanouts successfully com 10,839,290 134.6 14.0
    commit txn count during cleanout 17,155,424 213.0 22.2
    consistent changes 145,418,277 1,805.8 188.0
    consistent gets 8,043,252,188 99,881.4 10,397.7
    consistent gets - examination 3,180,028,047 39,489.7 4,110.9
    current blocks converted for CR 9 0.0 0.0
    cursor authentications 14,926 0.2 0.0
    data blocks consistent reads - un 143,706,500 1,784.6 185.8
    db block changes 302,577,666 3,757.4 391.2
    db block gets 336,562,217 4,179.4 435.1
    deferred (CURRENT) block cleanout 2,912,793 36.2 3.8
    dirty buffers inspected 627,174 7.8 0.8
    enqueue conversions 1,296,337 16.1 1.7
    enqueue releases 13,053,200 162.1 16.9
    enqueue requests 13,239,092 164.4 17.1
    enqueue timeouts 185,878 2.3 0.2
    enqueue waits 114,120 1.4 0.2
    exchange deadlocks 7,390 0.1 0.0
    execute count 105,475,101 1,309.8 136.4
    free buffer inspected 1,604,407 19.9 2.1
    free buffer requested 258,126,047 3,205.4 333.7
    hot buffers moved to head of LRU 22,793,576 283.1 29.5
    immediate (CR) block cleanout app 16,436,010 204.1 21.3
    immediate (CURRENT) block cleanou 2,860,013 35.5 3.7
    index fast full scans (direct rea 12,375 0.2 0.0
    index fast full scans (full) 3,733 0.1 0.0
    index fast full scans (rowid rang 192,148 2.4 0.3
    index fetch by key 1,321,024,486 16,404.5 1,707.7
    index scans kdiixs1 406,165,684 5,043.8 525.1
    leaf node 90-10 splits 50,373 0.6 0.1
    leaf node splits 697,235 8.7 0.9
    logons cumulative 884,756 11.0 1.1
    messages received 3,276,719 40.7 4.2
    messages sent 3,257,171 40.5 4.2
    no buffer to keep pinned count 569 0.0 0.0
    no work - consistent read gets 4,406,092,172 54,715.0 5,695.8
    opened cursors cumulative 20,527,704 254.9 26.5
    parse count (failures) 267,088 3.3 0.4
    parse count (hard) 468,996 5.8 0.6
    parse count (total) 19,960,548 247.9 25.8
    parse time cpu 323,024 4.0 0.4
    parse time elapsed 8,393,422 104.2 10.9
    physical reads 537,189,332 6,670.8 694.4
    physical reads direct 292,545,140 3,632.8 378.2
    physical writes 70,409,002 874.3 91.0
    physical writes direct 59,248,394 735.8 76.6
    physical writes non checkpoint 69,103,391 858.1 89.3
    pinned buffers inspected 11,893 0.2 0.0
    prefetched blocks 95,892,161 1,190.8 124.0
    prefetched blocks aged out before 1,495,883 18.6 1.9
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    process last non-idle time ################## ############## ############
    queries parallelized 417 0.0 0.0
    recursive calls 122,323,299 1,519.0 158.1
    recursive cpu usage 3,144,533 39.1 4.1
    redo blocks written 180,881,558 2,246.2 233.8
    redo buffer allocation retries 5,400 0.1 0.0
    redo entries 164,728,513 2,045.6 213.0
    redo log space requests 1,006 0.0 0.0
    redo log space wait time 2,230 0.0 0.0
    redo ordering marks 2,563 0.0 0.0
    redo size 108,208,614,904 1,343,739.0 139,883.4
    redo synch time 558,520 6.9 0.7
    redo synch writes 2,343,824 29.1 3.0
    redo wastage 1,126,585,600 13,990.0 1,456.4
    redo write time 718,655 8.9 0.9
    redo writer latching time 7,763 0.1 0.0
    redo writes 2,685,833 33.4 3.5
    rollback changes - undo records a 522,742 6.5 0.7
    rollbacks only - consistent read 335,177 4.2 0.4
    rows fetched via callback 1,100,990,382 13,672.1 1,423.3
    session connect time ################## ############## ############
    session cursor cache count 1,061 0.0 0.0
    session cursor cache hits 1,687,796 21.0 2.2
    session logical reads 8,061,057,193 100,102.5 10,420.7
    session pga memory 1,573,228,913,832 19,536,421.0 2,033,743.8
    session pga memory max 1,841,357,626,496 22,866,054.4 2,380,359.0
    session uga memory 1,074,114,630,336 13,338,399.4 1,388,529.0
    session uga memory max 386,645,043,296 4,801,374.0 499,823.6
    shared hash latch upgrades - no w 410,360,146 5,095.9 530.5
    sorts (disk) 2,657 0.0 0.0
    sorts (memory) 126,165,625 1,566.7 163.1
    sorts (rows) 24,048,783,304 298,638.8 31,088.3
    summed dirty queue length 5,438,201 67.5 7.0
    switch current to new buffer 1,302,798 16.2 1.7
    table fetch by rowid 6,201,503,534 77,010.5 8,016.8
    table fetch continued row 26,649,697 330.9 34.5
    table scan blocks gotten 1,864,435,032 23,152.6 2,410.2
    table scan rows gotten 43,639,997,280 541,923.3 56,414.3
    table scans (cache partitions) 26,112 0.3 0.0
    table scans (direct read) 246,243 3.1 0.3
    table scans (long tables) 340,200 4.2 0.4
    table scans (rowid ranges) 359,617 4.5 0.5
    table scans (short tables) 9,111,559 113.2 11.8
    transaction rollbacks 4,819 0.1 0.0
    transaction tables consistent rea 824 0.0 0.0
    transaction tables consistent rea 1,386,848 17.2 1.8
    user calls 159,931,913 1,986.0 206.8
    user commits 746,543 9.3 1.0
    user rollbacks 27,020 0.3 0.0
    write clones created in backgroun 7 0.0 0.0
    write clones created in foregroun 4,350 0.1 0.0
    Buffer Pool Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> Standard block size Pools D: default, K: keep, R: recycle
    -> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
    Free Write Buffer
    Number of Cache Buffer Physical Physical Buffer Complete Busy
    P Buffers Hit % Gets Reads Writes Waits Waits Waits
    D 774,144 95.6############ 233,869,082 10,089,734 0 0########
    K 504,000 99.9############ 3,260,227 1,070,338 0 0 65,898
    R 63,504 96.2 196,079,539 7,511,863 535 0 0 0
    Buffer wait Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc
    Tot Wait Avg
    Class Waits Time (s) Time (ms)
    data block 7,791,121 14,676 2
    file header block 587 101 172
    undo header 151,617 71 0
    segment header 299,312 58 0
    1st level bmb 45,235 7 0
    bitmap index block 392 1 3
    undo block 4,250 1 0
    2nd level bmb 14 0 0
    system undo header 2 0 0
    3rd level bmb 1 0 0
    Latch Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
    willing-to-wait latch get requests
    ->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
    ->"Pct Misses" for both should be very close to 0.0
    Pct Avg Wait Pct
    Get Get Slps Time NoWait NoWait
    Latch Requests Miss /Miss (s) Requests Miss
    Consistent RBA 2,686,230 0.0 0.2 0 0
    FAL request queue 86 0.0 0 0
    FAL subheap alocation 0 0 2 0.0
    FIB s.o chain latch 1,089 0.0 0 0
    FOB s.o list latch 4,589,986 0.5 0.0 2 0
    NLS data objects 1 0.0 0 0
    SQL memory manager worka 5,963 0.0 0 0
    Token Manager 0 0 2 0.0
    active checkpoint queue 719,439 0.3 0.1 0 1 0.0
    alert log latch 184 0.0 0 2 0.0
    archive control 4,365 0.0 0 0
    archive process latch 1,808 0.6 0.6 0 0
    begin backup scn array 3,387,572 0.0 0.0 0 0
    cache buffer handles 1,577,222 0.2 0.0 0 0
    cache buffers chains ############## 0.5 0.0 430 354,357,972 0.3
    cache buffers lru chain 17,153,023 0.1 0.0 1 385,505,654 0.5
    cas latch 538,804,153 0.3 0.0 7 0
    channel handle pool latc 1,776,950 0.5 0.0 0 0
    channel operations paren 2,901,371 0.3 0.0 0 0
    checkpoint queue latch 99,329,722 0.0 0.0 0 11,153,369 0.1
    child cursor hash table 3,927,427 0.0 0.0 0 0
    commit callback allocati 8,739 0.0 0 0
    dictionary lookup 7,980 0.0 0 0
    dml lock allocation 6,767,990 0.1 0.0 0 0
    dummy allocation 1,898,183 0.2 0.1 0 0
    enqueue hash chains 27,741,348 0.1 0.1 4 0
    enqueues 17,450,161 0.3 0.1 6 0
    error message lists 132,828 2.6 0.2 1 0
    event group latch 884,066 0.0 0.7 0 0
    event range base latch 1 0.0 0 0
    file number translation 34 38.2 0.9 0 0
    global tx hash mapping 577,859 0.0 0 0
    hash table column usage 4,062 0.0 0 8,757,234 0.0
    hash table modification 16 0.0 0 2 0.0
    i/o slave adaptor 0 0 2 0.0
    job workq parent latch 4 100.0 0.3 0 494 8.7
    job_queue_processes para 1,950 0.0 0 2 0.0
    ksfv messages 0 0 4 0.0
    ktm global data 8,219 0.0 0 0
    lgwr LWN SCN 2,687,862 0.0 0.0 0 0
    library cache 310,882,781 0.9 0.0 34 104,759 4.0
    library cache load lock 30,369 0.0 0.3 0 0
    library cache pin 153,821,358 0.1 0.0 2 0
    library cache pin alloca 126,316,296 0.1 0.0 4 0
    list of block allocation 2,730,808 0.3 0.0 0 0
    loader state object free 566,036 0.1 0.0 0 0
    longop free list parent 197,368 0.0 0 8,390 0.0
    message pool operations 14,424 0.0 0.0 0 0
    messages 25,931,764 0.1 0.0 1 0
    mostly latch-free SCN 40,124,948 0.3 0.0 5 0
    Latch Sleep breakdown for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by misses desc
    Get Spin &
    Latch Name Requests Misses Sleeps Sleeps 1->4
    cache buffers chains ############## 74,770,083 1,062,119 73803903/884
    159/71439/10
    582/0
    redo allocation 170,107,983 3,441,055 149,631 3292872/1467
    48/1426/9/0
    library cache 310,882,781 2,831,747 89,240 2754499/6780
    6/7405/2037/
    0
    shared pool 158,471,190 1,755,922 55,268 1704342/4836
    9/2826/385/0
    cas latch 538,804,153 1,553,992 6,927 1547125/6808
    /58/1/0
    row cache objects 161,142,207 1,176,998 27,658 1154070/1952
    0/2560/848/0
    process queue reference 1,893,917,184 1,119,215 106,454 78758/4351/1
    36/0/0
    Library Cache Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Pct Misses" should be very low
    Get Pct Pin Pct Invali-
    Namespace Requests Miss Requests Miss Reloads dations
    BODY 3,137,721 0.0 3,137,722 0.0 0 0
    CLUSTER 6,741 0.1 4,420 0.2 0 0
    INDEX 353,708 0.8 361,065 1.2 0 0
    SQL AREA 17,052,073 0.3 54,615,678 0.9 410,682 19,628
    TABLE/PROCEDURE 3,521,884 0.2 12,922,737 0.1 619 0
    TRIGGER 1,975,977 0.0 1,975,977 0.0 1 0
    SGA Memory Summary for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    SGA regions Size in Bytes
    Database Buffers 22,330,474,496
    Fixed Size 779,288
    Redo Buffers 17,051,648
    Variable Size 7,180,648,448
    sum 29,528,953,880
