Level of granularity

Hi Gurus,
How do we determine the level of granularity of three different ODS?

E.g., if you want your ODS to be at the line-item level, then add the line-item InfoObject along with the other key InfoObjects to make the record unique.
So the keys could be:
companycode
fiscalyear
salesdocnum
lineitem
Regards,
BWer
Assign points if helpful.
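As a plain illustration of why those four fields together make each record unique (a sketch in Java, not SAP BW code; all names are made up): in an ODS, two records with the same key overwrite each other, just like entries under the same composite key in a map.

  import java.util.HashMap;
  import java.util.Map;

  public class OdsKeyDemo {
    // The ODS key fields from the post, bundled as one composite key.
    record LineKey(String companyCode, String fiscalYear, String salesDocNum, int lineItem) {}

    public static void main(String[] args) {
      Map<LineKey, Double> ods = new HashMap<>();
      ods.put(new LineKey("1000", "2023", "4711", 10), 99.5);
      ods.put(new LineKey("1000", "2023", "4711", 10), 42.0); // same key: overwrites
      System.out.println(ods.size()); // 1 - only one record per key combination
    }
  }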

Similar Messages

  • MDX Calculate minimum price at a higher level of granularity

    Hi all!
I can't find a way to calculate the following scenario in an SSRS MDX statement: I need to show the "category best price" of a product category at a lower level of granularity. See the example below:
Category     Subcategory       Price    Category Best Price?
Adventure    Bikes             110      105
Adventure    Kayak             120      105
Adventure    Running Coat      105      105
Fitness      Training Shoes     80       75
Fitness      T-Shirt            75       75
I need to understand how I can build the calculated measure to get the "category best price" column! Note that in the real scenario there are many more values in the rows, so the granularity is much higher. Additionally, the attributes don't belong to the same dimension.
    Thanks a ton for your help!

    Hi mts_aa,
According to your description, you want to show the lowest price of each Category group in each row, right?
In this scenario, we can generate this column at the MDX query level. Please refer to the query below:
with member [Measures].[MinChildren] as
    min([Product].[Product Categories].currentmember.parent.children, [Measures].[Price])
select {[Measures].[Price], [Measures].[MinChildren]} on 0,
    [Product].[Category].[Category].members * [Product].[Subcategory].[Subcategory].members on 1
from [MDX]
You can also do this at the report level. You just need to group the records on Category, then use the expression below in the detail row:
    =Min(Fields!Price.Value,"Category")
    Reference:
    Understanding Groups (Report Builder and SSRS)
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou

  • InfoCube design - high granularity by inclusion of document level detail

    When considering the inclusion of sales document level detail in an infocube (0DOC_NUMBER/0S_ORD_ITEM), two possible high-level designs come to mind:
1. Make sales document number/sales order item master-data-bearing InfoObjects and include them in the InfoCube as line item dimensions. Activate all necessary characteristic navigational attributes for reporting.
2. Model the InfoCube dimensions to contain all characteristics necessary for reporting, including the characteristics sales document number/sales order item; the fact table will therefore have the highest level of granularity.
Knowing the 'textbook' does not suggest the above designs, has anyone attempted maximum granularity in InfoCube design, with success or otherwise? If successful, has this design outperformed DSO or InfoSet reporting performance?
    Thank you - Brendan.

    Hi Brendan,
We have implemented both solutions; it all depends on the number of documents. If it is much more than 1 million, I recommend storing your data in the target; if it is 1 or 2 million, you can use master data (more flexible).
We have some 4 million records in 0MAT_PLANT and are starting to have performance issues.
By the way, if you are using Oracle, NEVER mark a dimension as high cardinality; that doesn't work.
Hope that helps,
    Olivier.

  • Issue with Skip Level Hierarchy in OBIEE 11g

    Hi,
I have implemented a ragged and skip-level hierarchy in the OBIEE 11g RPD as explained in the links below
(I checked the 'Ragged' and 'Skipped Levels' check boxes).
    http://www.rittmanmead.com/2010/07/obiee-11gr1-support-for-ragged-skip-level-value-based-hierarchies/
    http://docs.oracle.com/cd/E17904_01/bi.1111/e10540/dimensions.htm#BABFBBFG
When the presentation hierarchy is dragged into OBI Answers, the leaf level beyond the skipped (null) levels is not shown in the report.
When a level contains intermediate null values, OBIEE fails to show the existing value at the leaf level (the most granular level) or any values after the skipped levels.
For example, the following are the values in the database for the different levels of a department hierarchy:
    A -> B -> C -> Null -> Null -> F
The OBIEE presentation hierarchy can only drill down to C, not beyond that (it does not show the leaf-level value 'F').
    Could you please suggest whether I am missing any step?
    Thanks.

Hi, my problem is that my report is showing the levels as A -> B -> C -> Null -> Null -> F; however, I want to show the data as A -> B -> C -> F, i.e. the null values (columns) should be skipped. Please help.
    Thanks and Regards,
    Deepak

  • Pattern to suppress unchecked warnings at the line level

    Hi,
    As we migrate our APIs to Java 5 (exploiting generics in particular), we are hitting some cases in which we chose not to address the unchecked warnings. To make the code cleaner we thought about annotating methods with @SuppressWarnings("unchecked") but this has the major drawback of turning off all warnings (instead of just the very few we are comfortable leaving behind).
    A colleague of mine had a really interesting idea to solve this problem. It can be expressed by the class below:
public final class Cast {
  @SuppressWarnings("unchecked")
  public static <T> T uncheckedCast(Object object) {
    return (T) object;
  }
}
Now we are using this to suppress the specific unchecked warnings we want to ignore. If we decide to revisit these cases, we can just query the usages of the method above.
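For illustration, a call site might then look like this (a minimal sketch; CastDemo and the legacy value are made-up names):

  import java.util.Arrays;
  import java.util.List;

  public class CastDemo {
    public static void main(String[] args) {
      Object legacy = Arrays.asList("a", "b"); // e.g. returned from a pre-generics API
      // The cast is centralized in Cast, so no @SuppressWarnings is needed here.
      List<String> strings = Cast.uncheckedCast(legacy);
      System.out.println(strings.get(0));
    }
  }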
Although this approach seems really clean to me, I would love to hear your opinion on it. Can you see any major problem that we are getting ourselves into?
    Thanks in advance,
    Marcelo

I'm not sure why, but you cannot apply the annotation to arbitrary lines. The lowest level of granularity is a local variable declaration.
The good news is that the @SuppressWarnings annotation can be applied at that level.
So, whilst this won't work:
  @SuppressWarnings("unchecked")
  someMethodCall((T)obj);
this will:
      @SuppressWarnings("unchecked")
      T tObj = (T)obj;
      someMethodCall(tObj);

  • Low level synchronization in legacy library

    I have an existing C++ library that I need to make available from Java. The library itself is not multithreaded, but it has some global data that it protects using Posix (i.e. native) mutexes. I have no problem with the JNI layer making the library callable, with the exception of the synchronization.
    As near as I can tell, the right way to integrate this is to use MonitorEnter/Exit calls. The problem with this is that I don't have the JNIEnv available at that point in the code, and to make it directly available, I'd have to propagate it through many layers of calls.
    The one possible workaround that I have is to use JNI_GetCreatedJavaVMs to get the VM pointer, and then use GetEnv to get the JNIEnv. This seems to rely on the current constraint of only one VM per process. However, I don't know if that constraint applies to platforms other than Sun's (with the obvious example, for me at least, being WebLogic). Also, this seems to be a temporary solution, as the design of JNI_GetCreatedJavaVMs clearly shows an intent to have more than one VM in the future, but with no easy global way to find the current VM (or JNIEnv, for that matter).
    Any suggestions or corrections would be welcome.
    Gary

    Thanks for your suggestion. It wouldn't work directly for me, since I have no control over the clients of this library, and the clients are already multithreaded. However, I can achieve locking at a similar level of granularity by using JNI callbacks at the very top level of my interface. It's possible that the performance impact of such large granularity locking would not be a problem, though that's very difficult to determine.
    At the moment, though, I'm still strongly leaning towards simply asserting that native implementations of Java threads are a requirement for use of this package.
    Gary
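A minimal Java sketch of that coarse-grained option (the NativeLib class and nativeCall method are made-up names, not the actual library): every entry point into the non-thread-safe native library is funneled through one Java monitor, which is equivalent to bracketing the whole native call with JNI MonitorEnter/MonitorExit and avoids needing the JNIEnv deep inside the C++ layers.

  public final class NativeLib {
    // One monitor guards all access to the library's global data.
    private static final Object GLOBAL_LOCK = new Object();

    private static native void nativeCall(String arg);

    public static void call(String arg) {
      synchronized (GLOBAL_LOCK) {
        nativeCall(arg);
      }
    }
  }

The cost is serializing all callers, which is exactly the performance question about large-granularity locking raised above.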

Fact table with different granularity

I have a fact table with measures at different granularity. For example, there is a fact table named Project, and in it there are measure columns like project value and opportunity value (one opportunity has many projects). I have to sum the project value and the opportunity value by month. How do I sum the opportunity value? (I want to sum the value per distinct opportunity.) Does anyone have a suggestion? Thanks in advance.

The purist answer is that you should NEVER have a fact table with multiple levels of granularity - it's asking for trouble. The better solution is to split off a new fact table at the proper level of granularity. Having said that, I've worked with BI Apps before, so I understand the pain you're going through. In the long run, though, we found it better to add new fact tables and track the data at the proper levels. Less confusing (in the long run) than trying to kludge round data to fit in a square fact table.
    Thx,
    Scott

What's the need for Multiple Aggregation Levels

    Hi Gurus,
Can someone say what the need for multiple aggregation levels on a MultiProvider is?
Is it that aggregation levels just determine the level of granularity you want to report on?
If that is really the case, can't we get the same thing in the query itself with free characteristics, drill-down, and slice-and-dice options?
Is there any other strong reason to go for multiple aggregation levels, or for aggregation levels at all, for that matter?
Thanks for the clarification.
I am trying to understand things better, as I am very new to this area.
    Regards,
    BI-IP Guest

    Manyak,
Really, thanks for your explanation. I think I understand most of it, though not all. In particular, until now I was not aware that an aggregation level supports planning only at the lowest granularity.
I read before that we can't change values at SUM or MIN/MAX values. But now, reading this, I understand that we can't change values at the totals level, only at the item level.
But again, in your example, can we change values at the independent line-item level, or for the region's total level at the end?
What about the case where the client wants to see both country and region in one layout, with both changeable for planning?
One more last question, if that does not bother you:
What happens when one designs an input-ready query over the MultiProvider (containing a real-time cube for the plan) itself, instead of going for multiple aggregation levels, or even a single aggregation level for that matter? All I want to know is what difference it really makes in designing an input-ready query, in getting my layout, or in planning.
These might be too many questions on aggregation levels, but I still request that you make this clear to me.
    Thanks,
    BI-IP Guest

  • APO DP: Disaggregation to product&plant level from higher levels.

    Hi.
We do demand planning on groups of products and for country/region in general; we have around 48,000 CVCs in our current setup. It works very well.
A new situation has arisen where we need to have the forecast split down to product and plant level.
As is, we simply don't have the information at this level of granularity.
I don't see how we can add, for instance, product to our setup; we have around 20,000 products, so the number of CVCs in DP would become massive if we did this.
I was thinking that perhaps something could be done by exporting the relevant key figures to a new DP setup with fewer characteristics (to keep the number of CVCs down) via some InfoCubes; perhaps some disaggregation could be done via some tables and the BW update rules. This still leaves the issue of how to get the figures properly disaggregated to plant and product, though.
Does anyone have experience with getting figures split to lower levels from DP when you're planning at a higher level?

    Simon,
One approach, as you mentioned, can be creating a Z table in which you set up the disaggregation proportions from product group level to product level or product/location level:
Product Group X   100      Product A @ loc1   10
                           Product B @ loc1   90
Download your planning area data into InfoCube C and then use BW routines to convert the data from group level in InfoCube C to the lower level, referring to the Z table, into another InfoCube.
SAP also provides standard functionality for splitting the aggregate demand plan into a detailed-level SNP plan, through functionality like location split or product split.
Essentially you will be using the same concept in your BW solution, or you may also want to consider releasing your DP to an SNP planning area as a solution for disaggregating the data to a lower level.
    Regards,
    Manish
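As a rough illustration of the proportional split such a Z table drives (plain Java with made-up names; in practice this would be a BW update/transformation routine):

  import java.util.Map;

  public class Disaggregate {
    // Split a group-level quantity to product@location level using
    // fixed proportions (summing to 1.0) from a lookup table.
    static Map<String, Double> split(double groupQty, Map<String, Double> proportions) {
      java.util.Map<String, Double> out = new java.util.HashMap<>();
      proportions.forEach((key, p) -> out.put(key, groupQty * p));
      return out;
    }

    public static void main(String[] args) {
      // Product Group X = 100 -> A@loc1 gets 10%, B@loc1 gets 90%
      System.out.println(split(100.0, Map.of("A@loc1", 0.10, "B@loc1", 0.90)));
    }
  }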

  • Load cube data at multiple levels

    Hi All,
I have a time dimension with the following hierarchies:
    all_time --> Year --> Quarter -->Month
    all_time -->Week
My fact table contains data at week and month level. I mapped my fact table time_key to both levels (week and month) in AWM, but when I load data into the cube, AWM loads data only at the week level and ignores all month fact rows in the fact table. How can I load data at both levels?
OLAP version: 10.2.0.4
    Thanks
    Dileep.

    I am trying to wrap my mind around your design. It appears that you have a fact table with two levels of granularity. Why not just have one hierarchy with a granularity at the week level?
    All Time --> Year --> Quarter --> Month --> Week
    Do you have some facts that relate only to a month level? If so, can you just relate the month total to one of the weeks in the month? Or is it acceptable to just have data at the month level and eliminate the week level?
    If your week and month data are mutually exclusive, you could run into problems. When you look at data rolled up to the year level, for example, it would exclude the data that was inserted at the week level.
    When I think of multiple hierarchies of time, I first think of calendar years versus fiscal years. The two hierarchies are completely unique due to a business requirement, yet they are clearly related as measures of time (years, months, etc.). But a fiscal month never rolls up into a calendar quarter, or a calendar day into a fiscal week or year, because the hierarchies are distinct. But the fact table is at only one level of granularity (day or week or month), no matter how it rolls up into the separate hierarchies.
    At first glance, your two hierarchies look like they should be one.

  • Granular Data

Hi guys, I'm trying to define what is actually meant by granular data. Could anyone please define [and give an example of] granular data vs. non-granular data? Your help is highly appreciated.

In other words, regarding the data granularity concept, you have to consider that you can start from the highest level of detail (it depends on your DataSource availability, but in the LO extraction area, for example, you can refer to every schedule-line DataSource) and go down to the lowest (data stored only by company code or market/sales area, without any other subset of info)...
As you can understand, your granularity needs (related to your reporting requirements) will be an important element to consider when you have to choose your DataSource: if you don't need schedule-line info from your deliveries (how many quantities have been delivered, and when, for every item of every single document), you should probably activate only the item extractor, or better the header extractor; but this kind of architectural and data-modeling choice has to consider other elements too: what about possible enhancements in the future? (Your customer may ask you to extend your reporting to a higher level of granularity...)
This is the reason (among many others) why you usually load your data at a high level of granularity into the ODS (where you keep, for example, document, item and schedule line as the key) and then upload it into a cube where you put only the characteristics that are required to build the desired query...
In the end, there are many things to say and to consider... but this is the beauty of data warehousing!
    Hope it helps!
    Bye,
    Roberto

  • Mixed Granularity

    Hi Guys,
I have a situation where I need to display actual versus forecasted values side by side, and also show the difference between the actual and forecasted values.
The problem is that the actual values come from transaction data and are available at a very granular level, for each day and each location; however, the forecasted values provided by the business are at month level and have only one value for each region.
We require other reports to show day-level and location-level details, hence the granular level selected; however, to create a report of actuals versus budgeted, we need to be able to combine the different levels of granularity.
    I welcome suggestions on the most efficient way of doing this.
    thanks in advance,
    Domnic

    Hi Daan,
Thanks for the heads-up. I've created the level-based measures as explained, and they work perfectly; however, I may need a more detailed explanation of how to join the forecasted values to these aggregated measures.
The forecasted values are in a denormalised table containing one value for each region and month.
    Any pointers in this direction would be greatly appreciated.
    - Domnic

  • Fields in the cube

    Hi Experts
Here the client wants 200 fields in the cube.
Is that advisable? Please help me with this.
    Regards
    Anand

    Hi,
It's possible to have that many fields in the cube, but you have to consider what the reporting requirements are, how the data is connected, etc. You also have to consider how much data volume is going to be generated and at what level you are going to store the data in the cube. If your client is asking you to keep line-item-level or granular data in your cube, suggest that they keep that in an ODS and provide jump queries from the cube to the ODS if necessary.
If many of the fields are not going to be required for reporting at present or in the future, suggest to the client that they keep them in the ODS and a smaller subset of the fields in the cubes.
    Cheers,
    Kedar

StatsPack package error

Hi, I'm tuning my Oracle database, and for that I need to install the Statspack package. When I compiled the package, it returned the following error:
    Warning: Package Body created with compilation errors.
    SQL> show error
    Errors for PACKAGE BODY STATSPACK:
    LINE/COL ERROR
    2045/3 PLS-00201: identifier 'SYS.DBMS_SHARED_POOL' must be declared
    2045/3 PL/SQL: Statement ignored
How do I solve the above error? Please check the following package and help me.
Note: I am connected as the PERFSTAT user, not as the SYS user.
    create or replace package body STATSPACK as
    /* Define package variables.
Variables prefixed with p_ are package variables. */
    p_snap_id integer; /* snapshot id */
    p_instance_number number; /* instance number */
    p_instance_name varchar2(16); /* instance name */
    p_startup_time date; /* instance startup time */
    p_parallel varchar2(3); /* parallel server */
    p_version varchar2(17); /* Oracle release */
    p_dbid number; /* database id */
    p_host_name varchar2(64); /* host instance is on */
    p_name varchar2(9); /* database name */
    p_new_sga integer; /* Instance bounced since last snap? */
    tmp_int integer; /* initialise defaults */
    p_def_snap_level number default 5; /* default snapshot lvl */
    p_def_session_id number default 0; /* default session id */
    p_def_ucomment varchar2(160) default null;
    p_def_pin_statspack varchar2(10) default 'TRUE';
    p_def_last_modified date default SYSDATE;
    /* Below are the default threshold (_th) values for choosing SQL statements
    to store in the stats$sqlsummary table - these statements will typically
be the statements using the most resources. */
    p_def_num_sql number default 50; /* Num. SQL statements */
    p_def_executions_th number default 100; /* Num. executions */
    p_def_parse_calls_th number default 1000; /* Num. parse calls */
    p_def_disk_reads_th number default 1000; /* Num. disk reads */
    p_def_buffer_gets_th number default 10000; /* Num. buf gets */
    p_def_sharable_mem_th number default 1048576; /* Sharable memory */
    p_def_version_count_th number default 20; /* Child Cursors */
    p_def_all_init varchar2(10) default 'FALSE';
    cursor get_instance is
    select instance_number, instance_name
    , startup_time, parallel, version
    , host_name
    from v$instance;
    cursor get_db is
    select dbid, name
    from v$database;
    procedure SNAP
    (i_snap_level in number default null
    ,i_session_id in number default null
    ,i_ucomment in varchar2 default null
    ,i_num_sql in number default null
    ,i_executions_th in number default null
    ,i_parse_calls_th in number default null
    ,i_disk_reads_th in number default null
    ,i_buffer_gets_th in number default null
    ,i_sharable_mem_th in number default null
    ,i_version_count_th in number default null
    ,i_all_init in varchar2 default null
    ,i_pin_statspack in varchar2 default null
,i_modify_parameter in varchar2 default 'FALSE'
)
    is
    /* Takes a snapshot by calling the SNAP function, and discards
    the snapshot id. This is useful when automating taking
snapshots from dbms_job. */
    l_snap_id number;
    begin
    l_snap_id := statspack.snap ( i_snap_level, i_session_id, i_ucomment
    , i_num_sql
    , i_executions_th
    , i_parse_calls_th
    , i_disk_reads_th
    , i_buffer_gets_th
    , i_sharable_mem_th
    , i_version_count_th
    , i_all_init
    , i_pin_statspack
    , i_modify_parameter);
    end SNAP;
    procedure MODIFY_STATSPACK_PARAMETER
    ( i_dbid in number default null
    , i_instance_number in number default null
    , i_snap_level in number default null
    , i_session_id in number default null
    , i_ucomment in varchar2 default null
    , i_num_sql in number default null
    , i_executions_th in number default null
    , i_parse_calls_th in number default null
    , i_disk_reads_th in number default null
    , i_buffer_gets_th in number default null
    , i_sharable_mem_th in number default null
    , i_version_count_th in number default null
    , i_all_init in varchar2 default null
    , i_pin_statspack in varchar2 default null
, i_modify_parameter in varchar2 default 'TRUE'
)
    is
    /* Calls QAM with the modify flag, and discards the
output variables. */
    l_snap_level number;
    l_session_id number;
    l_ucomment varchar2(160);
    l_num_sql number;
    l_executions_th number;
    l_parse_calls_th number;
    l_disk_reads_th number;
    l_buffer_gets_th number;
    l_sharable_mem_th number;
    l_version_count_th number;
    l_all_init varchar2(5);
    l_pin_statspack varchar2(10);
    begin
    statspack.qam_statspack_parameter( i_dbid
    , i_instance_number
    , i_snap_level
    , i_session_id
    , i_ucomment
    , i_num_sql
    , i_executions_th
    , i_parse_calls_th
    , i_disk_reads_th
    , i_buffer_gets_th
    , i_sharable_mem_th
    , i_version_count_th
    , i_all_init
    , i_pin_statspack
    , 'TRUE'
    , l_snap_level
    , l_session_id
    , l_ucomment
    , l_num_sql
    , l_executions_th
    , l_parse_calls_th
    , l_disk_reads_th
    , l_buffer_gets_th
    , l_sharable_mem_th
    , l_version_count_th
    , l_all_init
    , l_pin_statspack);
/* As we have explicitly been requested to change the parameters,
independently of taking a snapshot, commit. */
    commit;
    end MODIFY_STATSPACK_PARAMETER;
    procedure QAM_STATSPACK_PARAMETER
    ( i_dbid in number default null
    , i_instance_number in number default null
    , i_snap_level in number default null
    , i_session_id in number default null
    , i_ucomment in varchar2 default null
    , i_num_sql in number default null
    , i_executions_th in number default null
    , i_parse_calls_th in number default null
    , i_disk_reads_th in number default null
    , i_buffer_gets_th in number default null
    , i_sharable_mem_th in number default null
    , i_version_count_th in number default null
    , i_all_init in varchar2 default null
    , i_pin_statspack in varchar2 default null
    , i_modify_parameter in varchar2 default 'FALSE'
    , o_snap_level out number
    , o_session_id out number
    , o_ucomment out varchar2
    , o_num_sql out number
    , o_executions_th out number
    , o_parse_calls_th out number
    , o_disk_reads_th out number
    , o_buffer_gets_th out number
    , o_sharable_mem_th out number
    , o_version_count_th out number
    , o_all_init out varchar2
, o_pin_statspack out varchar2
)
    is
    /* Query And Modify statspack parameter procedure, allows query
    and/or user modification of the statistics collection parameters
    for an instance. If there are no pre-existing parameters for
an instance, insert the Oracle defaults. */
    l_instance_number number;
    l_dbid number;
    ui_all_init varchar2(5);
    l_params_exist varchar2(1);
    begin
    if ((i_dbid is null ) or (i_instance_number is null)) then
    l_dbid := p_dbid;
    l_instance_number := p_instance_number;
    else
    l_dbid := i_dbid;
    l_instance_number := i_instance_number;
    end if;
    /* Upper case any input vars which are inserted */
    ui_all_init := upper(i_all_init);
    if ( (i_modify_parameter is null)
    or (upper(i_modify_parameter) = 'FALSE') ) then
    /* Query values, if none exist, insert the defaults tempered
    with variables supplied */
    begin
    select nvl(i_session_id, session_id)
    , nvl(i_snap_level, snap_level)
    , nvl(i_ucomment, ucomment)
    , nvl(i_num_sql, num_sql)
    , nvl(i_executions_th, executions_th)
    , nvl(i_parse_calls_th, parse_calls_th)
    , nvl(i_disk_reads_th, disk_reads_th)
    , nvl(i_buffer_gets_th, buffer_gets_th)
    , nvl(i_sharable_mem_th, sharable_mem_th)
    , nvl(i_version_count_th, version_count_th)
    , nvl(ui_all_init, all_init)
    , nvl(i_pin_statspack, pin_statspack)
    into o_session_id
    , o_snap_level
    , o_ucomment
    , o_num_sql
    , o_executions_th
    , o_parse_calls_th
    , o_disk_reads_th
    , o_buffer_gets_th
    , o_sharable_mem_th
    , o_version_count_th
    , o_all_init
    , o_pin_statspack
    from stats$statspack_parameter
    where instance_number = l_instance_number
    and dbid = l_dbid;
    exception
    when NO_DATA_FOUND then
    insert into stats$statspack_parameter
    ( dbid
    , instance_number
    , session_id
    , snap_level
    , ucomment
    , num_sql
    , executions_th
    , parse_calls_th
    , disk_reads_th
    , buffer_gets_th
    , sharable_mem_th
    , version_count_th
    , all_init
    , pin_statspack
, last_modified
)
    values
    ( l_dbid
    , l_instance_number
    , p_def_session_id
    , p_def_snap_level
    , p_def_ucomment
    , p_def_num_sql
    , p_def_executions_th
    , p_def_parse_calls_th
    , p_def_disk_reads_th
    , p_def_buffer_gets_th
    , p_def_sharable_mem_th
    , p_def_version_count_th
    , p_def_all_init
    , p_def_pin_statspack
, SYSDATE
)
    returning nvl(i_session_id, p_def_session_id)
    , nvl(i_snap_level, p_def_snap_level)
    , nvl(i_ucomment, p_def_ucomment)
    , nvl(i_num_sql, p_def_num_sql)
    , nvl(i_executions_th, p_def_executions_th)
    , nvl(i_parse_calls_th, p_def_parse_calls_th)
    , nvl(i_disk_reads_th, p_def_disk_reads_th)
    , nvl(i_buffer_gets_th, p_def_buffer_gets_th)
    , nvl(i_sharable_mem_th, p_def_sharable_mem_th)
    , nvl(i_version_count_th, p_def_version_count_th)
    , nvl(ui_all_init, p_def_all_init)
    , nvl(i_pin_statspack, p_def_pin_statspack)
    into o_session_id
    , o_snap_level
    , o_ucomment
    , o_num_sql
    , o_executions_th
    , o_parse_calls_th
    , o_disk_reads_th
    , o_buffer_gets_th
    , o_sharable_mem_th
    , o_version_count_th
    , o_all_init
    , o_pin_statspack;
    end; /* don't modify parameter values */
    elsif upper(i_modify_parameter) = 'TRUE' then
    /* modify values, if none exist, insert the defaults tempered
    with the variables supplied */
    begin
    update stats$statspack_parameter
    set session_id = nvl(i_session_id, session_id)
    , snap_level = nvl(i_snap_level, snap_level)
    , ucomment = nvl(i_ucomment, ucomment)
    , num_sql = nvl(i_num_sql, num_sql)
    , executions_th = nvl(i_executions_th, executions_th)
    , parse_calls_th = nvl(i_parse_calls_th, parse_calls_th)
    , disk_reads_th = nvl(i_disk_reads_th, disk_reads_th)
    , buffer_gets_th = nvl(i_buffer_gets_th, buffer_gets_th)
    , sharable_mem_th = nvl(i_sharable_mem_th, sharable_mem_th)
    , version_count_th = nvl(i_version_count_th, version_count_th)
    , all_init = nvl(ui_all_init, all_init)
    , pin_statspack = nvl(i_pin_statspack, pin_statspack)
    where instance_number = l_instance_number
    and dbid = l_dbid
    returning session_id
    , snap_level
    , ucomment
    , num_sql
    , executions_th
    , parse_calls_th
    , disk_reads_th
    , buffer_gets_th
    , sharable_mem_th
    , version_count_th
    , all_init
    , pin_statspack
    into o_session_id
    , o_snap_level
    , o_ucomment
    , o_num_sql
    , o_executions_th
    , o_parse_calls_th
    , o_disk_reads_th
    , o_buffer_gets_th
    , o_sharable_mem_th
    , o_version_count_th
    , o_all_init
    , o_pin_statspack;
    if SQL%ROWCOUNT = 0 then
    insert into stats$statspack_parameter
    ( dbid
    , instance_number
    , session_id
    , snap_level
    , ucomment
    , num_sql
    , executions_th
    , parse_calls_th
    , disk_reads_th
    , buffer_gets_th
    , sharable_mem_th
    , version_count_th
    , all_init
    , pin_statspack
, last_modified
)
    values
    ( l_dbid
    , l_instance_number
    , nvl(i_session_id, p_def_session_id)
    , nvl(i_snap_level, p_def_snap_level)
    , nvl(i_ucomment, p_def_ucomment)
    , nvl(i_num_sql, p_def_num_sql)
    , nvl(i_executions_th, p_def_executions_th)
    , nvl(i_parse_calls_th, p_def_parse_calls_th)
    , nvl(i_disk_reads_th, p_def_disk_reads_th)
    , nvl(i_buffer_gets_th, p_def_buffer_gets_th)
    , nvl(i_sharable_mem_th, p_def_sharable_mem_th)
    , nvl(i_version_count_th, p_def_version_count_th)
    , nvl(ui_all_init, p_def_all_init)
    , nvl(i_pin_statspack, p_def_pin_statspack)
, SYSDATE
)
    returning session_id
    , snap_level
    , ucomment
    , num_sql
    , executions_th
    , parse_calls_th
    , disk_reads_th
    , buffer_gets_th
    , sharable_mem_th
    , version_count_th
    , all_init
    , pin_statspack
    into o_session_id
    , o_snap_level
    , o_ucomment
    , o_num_sql
    , o_executions_th
    , o_parse_calls_th
    , o_disk_reads_th
    , o_buffer_gets_th
    , o_sharable_mem_th
    , o_version_count_th
    , o_all_init
    , o_pin_statspack;
    end if;
    end; /* modify values */
    else
    /* error */
    raise_application_error
    (-20100,'QAM_STATSPACK_PARAMETER i_modify_parameter value is invalid');
    end if; /* modify */
    end QAM_STATSPACK_PARAMETER;
    procedure STAT_CHANGES
    /* Returns a set of differences of the values from corresponding pairs
    of rows in STATS$SYSSTAT, STATS$LIBRARYCACHE and STATS$WAITSTAT,
    based on the begin and end (bid, eid) snapshot id's specified.
    This procedure is the only call to STATSPACK made by the statsrep
    report.
Modified to include multi-db support. */
    ( bid IN number
    , eid IN number
    , db_ident IN number
    , inst_num IN number
    , parallel IN varchar2
    , lhtr OUT number, bfwt OUT number
    , tran OUT number, chng OUT number
    , ucal OUT number, urol OUT number
    , rsiz OUT number
    , phyr OUT number, phyrd OUT number
    , phyrdl OUT number
    , phyw OUT number, ucom OUT number
    , prse OUT number, hprse OUT number
    , recr OUT number, gets OUT number
    , rlsr OUT number, rent OUT number
    , srtm OUT number, srtd OUT number
    , srtr OUT number, strn OUT number
    , lhr OUT number, bc OUT varchar2
    , sp OUT varchar2, lb OUT varchar2
    , bs OUT varchar2, twt OUT number
    , logc OUT number, prscpu OUT number
    , tcpu OUT number, exe OUT number
    , prsela OUT number
    , bspm OUT number, espm OUT number
    , bfrm OUT number, efrm OUT number
    , blog OUT number, elog OUT number
    , bocur OUT number, eocur OUT number
    , dmsd OUT number, dmfc OUT number -- begin OPS
    , dfcms OUT number, dfcmr OUT number
    , dmsi OUT number, dmrv OUT number
    , dynal OUT number, dynares OUT number
    , pmrv OUT number, pmpt OUT number
    , npmrv OUT number, npmpt OUT number
    , scma OUT number, scml OUT number
    , pinc OUT number, picrnc OUT number
    , picc OUT number, picrcc OUT number
    , pbc OUT number, pbcrc OUT number
    , pcba OUT number, pccrba OUT number
    , pcrbpi OUT number
    , dynapres OUT number, dynapshl OUT number
    , prcma OUT number, prcml OUT number
    , pwrm OUT number, pfpim OUT number
    , pwnm OUT number
    , dpms OUT number, dnpms OUT number
    , glsg OUT number, glag OUT number
    , glgt OUT number, glsc OUT number
    , glac OUT number, glct OUT number
    , glrl OUT number
    , gcge OUT number, gcgt OUT number
    , gccv OUT number, gcct OUT number
    , gccrrv OUT number, gccrrt OUT number
    , gccurv OUT number, gccurt OUT number
    , gccrsv OUT number
    , gccrbt OUT number, gccrft OUT number
    , gccrst OUT number, gccusv OUT number
    , gccupt OUT number, gccuft OUT number
    , gccust OUT number -- end OPS
    ) is
    bval number;
    eval number;
    l_b_session_id number; /* begin session id */
    l_b_serial# number; /* begin serial# */
    l_e_session_id number; /* end session id */
    l_e_serial# number; /* end serial# */
    function LIBRARYCACHE_HITRATIO RETURN number is
    /* Returns Library cache hit ratio for the begin and end (bid, eid)
snapshot id's specified. */
    cursor LH (i_snap_id number) is
    select sum(pins), sum(pinhits)
    from stats$librarycache
    where snap_id = i_snap_id
    and dbid = db_ident
    and instance_number = inst_num;
    bpsum number;
    bhsum number;
    epsum number;
    ehsum number;
    begin
    if not LH%ISOPEN then open LH (bid); end if;
    fetch LH into bpsum, bhsum;
    if LH%NOTFOUND then
    raise_application_error
    (-20100,'Missing start value for stats$librarycache');
    end if; close LH;
    if not LH%ISOPEN then open LH (eid); end if;
    fetch LH into epsum, ehsum;
    if LH%NOTFOUND then
    raise_application_error
    (-20100,'Missing end value for stats$librarycache');
    end if; close LH;
    return (ehsum - bhsum) / (epsum - bpsum);
    end LIBRARYCACHE_HITRATIO;
    function GET_PARAM (i_name varchar2) RETURN varchar2 is
    /* Returns the value for the init.ora parameter for the snapshot
specified. */
    cursor PARAMETER is
    select value
    from stats$parameter
    where snap_id = eid
    and dbid = db_ident
    and instance_number = inst_num
    and name = i_name;
    par_value varchar2(512);
    begin
    if not PARAMETER%ISOPEN then open PARAMETER; end if;
    fetch PARAMETER into par_value;
    if PARAMETER%NOTFOUND then
    raise_application_error
    (-20100,'Missing Init.ora parameter '|| i_name);
    end if; close PARAMETER;
    return par_value;
    end GET_PARAM;
    function GET_SYSSTAT (i_name varchar2, i_beid number) RETURN number is
    /* Returns the value for the System Statistic for the snapshot
specified. */
    cursor SYSSTAT is
    select value
    from stats$sysstat
    where snap_id = i_beid
    and dbid = db_ident
    and instance_number = inst_num
    and name = i_name;
    stat_value varchar2(512);
    begin
    if not SYSSTAT%ISOPEN then open SYSSTAT; end if;
    fetch SYSSTAT into stat_value;
    if SYSSTAT%NOTFOUND then
    raise_application_error
    (-20100,'Missing System Statistic '|| i_name);
    end if; close SYSSTAT;
    return stat_value;
    end GET_SYSSTAT;
    function BUFFER_WAITS RETURN number is
    /* Returns the total number of waits for all buffers in the interval
specified by the begin and end snapshot id's (bid, eid). */
    cursor BW (i_snap_id number) is
    select sum(wait_count)
    from stats$waitstat
    where snap_id = i_snap_id
    and dbid = db_ident
    and instance_number = inst_num;
    bbwsum number; ebwsum number;
    begin
    if not BW%ISOPEN then open BW (bid); end if;
    fetch BW into bbwsum;
    if BW%NOTFOUND then
    raise_application_error
    (-20100,'Missing start value for stats$waitstat');
    end if; close BW;
    if not BW%ISOPEN then open BW (eid); end if;
    fetch BW into ebwsum;
    if BW%NOTFOUND then
    raise_application_error
    (-20100,'Missing end value for stats$waitstat');
    end if; close BW;
    return ebwsum - bbwsum;
    end BUFFER_WAITS;
    function TOTAL_EVENT_TIME RETURN number is
    /* Returns the total amount of time waited for events for
    the interval specified by the begin and end snapshot id's
    (bid, eid) by foreground processes. This excludes idle
wait events. */
    cursor WAITS (i_snap_id number) is
    select sum(time_waited_micro)
    from stats$system_event
    where snap_id = i_snap_id
    and dbid = db_ident
    and instance_number = inst_num
    and event not in (select event from stats$idle_event);
    bwaittime number;
    ewaittime number;
    begin
    if not WAITS%ISOPEN then open WAITS (bid); end if;
    fetch WAITS into bwaittime;
    if WAITS%NOTFOUND then
    raise_application_error
    (-20100,'Missing start value for stats$system_event');
    end if; close WAITS;
    if not WAITS%ISOPEN then open WAITS (eid); end if;
    fetch WAITS into ewaittime;
    if WAITS%NOTFOUND then
    raise_application_error
    (-20100,'Missing end value for stats$system_event');
    end if; close WAITS;
    return ewaittime - bwaittime;
    end TOTAL_EVENT_TIME;
    function LATCH_HITRATIO return NUMBER is
    /* Returns the latch hit ratio specified by the begin and
end snapshot id's (bid, eid). */
    cursor GETS_MISSES (i_snap_id number) is
    select sum(gets), sum(misses)
    from stats$latch
    where snap_id = i_snap_id
    and dbid = db_ident
    and instance_number = inst_num;
    blget number; -- beginning latch gets
    blmis number; -- beginning latch misses
    elget number; -- end latch gets
    elmis number; -- end latch misses
    begin
    if not GETS_MISSES%ISOPEN then open GETS_MISSES (bid); end if;
    fetch GETS_MISSES into blget, blmis;
    if GETS_MISSES%NOTFOUND then
    raise_application_error
    (-20100,'Missing start value for STATS$LATCH gets and misses');
    end if; close GETS_MISSES;
    if not GETS_MISSES%ISOPEN then open GETS_MISSES (eid); end if;
    fetch GETS_MISSES into elget, elmis;
    if GETS_MISSES%NOTFOUND then
    raise_application_error
    (-20100,'Missing end value for STATS$LATCH gets and misses');
    end if; close GETS_MISSES;
    return ( ( elmis - blmis ) / ( elget - blget ) );
    end LATCH_HITRATIO;
    function SGASTAT (i_name varchar2, i_beid number) RETURN number is
    /* Returns the bytes used by i_name in the shared pool
for the begin or end snapshot (bid, eid) specified. */
    cursor bytes_used is
    select bytes
    from stats$sgastat
    where snap_id = i_beid
    and dbid = db_ident
    and instance_number = inst_num
    and pool in ('shared pool', 'all pools')
    and name = i_name;
    total_bytes number;
    begin
    if i_name = 'total_shared_pool' then
    select sum(bytes)
    into total_bytes
    from stats$sgastat
    where snap_id = i_beid
    and dbid = db_ident
    and instance_number = inst_num
    and pool in ('shared pool','all pools');
    else
    open bytes_used; fetch bytes_used into total_bytes;
    if bytes_used%notfound then
    raise_application_error
    (-20100,'Missing value for SGASTAT: '||i_name);
    end if;
    close bytes_used;
    end if;
    return total_bytes;
    end SGASTAT;
    function SYSDIF (i_name varchar2) RETURN number is
    /* Returns the difference between statistics for the statistic
    name specified for the interval between the begin and end
snapshot id's (bid, eid). */
    cursor SY (i_snap_id number) is
    select value
    from stats$sysstat
    where snap_id = i_snap_id
    and dbid = db_ident
    and instance_number = inst_num
    and name = i_name;
    begin
    /* Get start value */
    open SY (bid); fetch SY into bval;
    if SY%notfound then
    raise_application_error
    (-20100,'Missing start value for statistic: '||i_name);
    end if; close SY;
    /* Get end value */
    open SY (eid); fetch SY into eval;
    if SY%notfound then
    raise_application_error
    (-20100,'Missing end value for statistic: '||i_name);
    end if; close SY;
    /* Return difference */
    return eval - bval;
    end SYSDIF;
    function SESDIF (st_name varchar2) RETURN number is
    /* Returns the difference between statistics values for the
    statistic name specified for the interval between the begin and end
    snapshot id's (bid, eid), for the session monitored for that
snapshot. */
    cursor SE (i_snap_id number) is
    select ses.value
    from stats$sysstat sys
    , stats$sesstat ses
    where sys.snap_id = i_snap_id
    and ses.snap_id = i_snap_id
    and ses.dbid = db_ident
    and sys.dbid = db_ident
    and ses.instance_number = inst_num
    and sys.instance_number = inst_num
    and ses.statistic# = sys.statistic#
    and sys.name = st_name;
    begin
    /* Get start value */
    open SE (bid); fetch SE into bval;
    if SE%notfound then
    eval :=0;
    end if; close SE;
    /* Get end value */
    open SE (eid); fetch SE into eval;
    if SE%notfound then
    eval :=0;
    end if; close SE;
    /* Return difference */
    return eval - bval;
    end SESDIF;
    function DLMDIF (i_name varchar2) RETURN number is
    /* Returns the difference between statistics for the statistic
    name specified for the interval between the begin and end
snapshot id's (bid, eid). */
    cursor DLM (i_snap_id number) is
    select value
    from stats$dlm_misc
    where snap_id = i_snap_id
    and dbid = db_ident
    and instance_number = inst_num
    and name = i_name;
    begin
    /* Get start value */
    open DLM (bid); fetch DLM into bval;
    if DLM%notfound then
    raise_application_error
    (-20100,'Missing start value for statistic: '||i_name);
    end if; close DLM;
    /* Get end value */
    open DLM (eid); fetch DLM into eval;
    if DLM%notfound then
    raise_application_error
    (-20100,'Missing end value for statistic: '||i_name);
    end if; close DLM;
    /* Return difference */
    return eval - bval;
    end DLMDIF;
    begin /* main procedure body of STAT_CHANGES */
    lhtr := LIBRARYCACHE_HITRATIO;
    bfwt := BUFFER_WAITS;
    lhr := LATCH_HITRATIO;
    chng := SYSDIF('db block changes');
    ucal := SYSDIF('user calls');
    urol := SYSDIF('user rollbacks');
    ucom := SYSDIF('user commits');
    tran := ucom + urol;
    rsiz := SYSDIF('redo size');
    phyr := SYSDIF('physical reads');
    phyrd := SYSDIF('physical reads direct');
    phyrdl := SYSDIF('physical reads direct (lob)');
    phyw := SYSDIF('physical writes');
    hprse := SYSDIF('parse count (hard)');
    prse := SYSDIF('parse count (total)');
    gets := SYSDIF('session logical reads');
    recr := SYSDIF('recursive calls');
    rlsr := SYSDIF('redo log space requests');
    rent := SYSDIF('redo entries');
    srtm := SYSDIF('sorts (memory)');
    srtd := SYSDIF('sorts (disk)');
    srtr := SYSDIF('sorts (rows)');
    logc := SYSDIF('logons cumulative');
    prscpu := SYSDIF('parse time cpu');
    prsela := SYSDIF('parse time elapsed');
    tcpu := SYSDIF('CPU used by this session');
    exe := SYSDIF('execute count');
    bs := GET_PARAM('db_block_size');
    bc := GET_PARAM('db_block_buffers') * bs;
    if bc = 0 then
    bc := GET_PARAM('db_cache_size')
    + GET_PARAM('db_keep_cache_size')
    + GET_PARAM('db_recycle_cache_size')
    + GET_PARAM('db_2k_cache_size')
    + GET_PARAM('db_4k_cache_size')
    + GET_PARAM('db_8k_cache_size')
    + GET_PARAM('db_16k_cache_size')
    + GET_PARAM('db_32k_cache_size');
    end if;
    sp := GET_PARAM('shared_pool_size');
    lb := GET_PARAM('log_buffer');
    twt := TOTAL_EVENT_TIME; -- total wait time for all non-idle events
    bspm := SGASTAT('total_shared_pool', bid);
    espm := SGASTAT('total_shared_pool', eid);
    bfrm := SGASTAT('free memory', bid);
    efrm := SGASTAT('free memory', eid);
    blog := GET_SYSSTAT('logons current', bid);
    elog := GET_SYSSTAT('logons current', eid);
    bocur := GET_SYSSTAT('opened cursors current', bid);
    eocur := GET_SYSSTAT('opened cursors current', eid);
    /* Do we want to report on cluster-specific statistics? Check
in procedure variable "parallel". */
    if parallel = 'YES' then
    dmsd := DLMDIF('messages sent directly');
    dmfc := DLMDIF('messages flow controlled');
    dmsi := DLMDIF('messages sent indirectly');
    dmrv := DLMDIF('messages received');
    dfcms := DLMDIF('flow control messages sent');
    dfcmr := DLMDIF('flow control messages received');
    dynal := DLMDIF('dynamically allocated enqueues');
    dynares := DLMDIF('dynamically allocated resources');
    pmrv := DLMDIF('gcs msgs received');
    pmpt := DLMDIF('gcs msgs process time(ms)');
    npmrv := DLMDIF('ges msgs received');
    npmpt := DLMDIF('ges msgs process time(ms)');
    scma := DLMDIF('gcs side channel msgs actual');
    scml := DLMDIF('gcs side channel msgs logical');
    pinc := DLMDIF('gcs immediate (null) converts');
    picrnc := DLMDIF('gcs immediate cr (null) converts');
    picc := DLMDIF('gcs immediate (compatible) converts');
    picrcc := DLMDIF('gcs immediate cr (compatible) converts');
    pbc := DLMDIF('gcs blocked converts');
    pbcrc := DLMDIF('gcs blocked cr converts');
    pcba := DLMDIF('gcs compatible basts');
    pccrba := DLMDIF('gcs compatible cr basts');
    pcrbpi := DLMDIF('gcs cr basts to PIs');
    dynapres := DLMDIF('dynamically allocated gcs resources');
    dynapshl := DLMDIF('dynamically allocated gcs shadows');
    prcma := DLMDIF('gcs recovery claim msgs actual');
    prcml := DLMDIF('gcs recovery claim msgs logical');
    pwrm := DLMDIF('gcs write request msgs');
    pfpim := DLMDIF('gcs flush pi msgs');
    pwnm := DLMDIF('gcs write notification msgs');
    dpms := SYSDIF('gcs messages sent');
    dnpms := SYSDIF('ges messages sent');
    glsg := SYSDIF('global lock sync gets');
    glag := SYSDIF('global lock async gets');
    glgt := SYSDIF('global lock get time');
    glsc := SYSDIF('global lock sync converts');
    glac := SYSDIF('global lock async converts');
    glct := SYSDIF('global lock convert time');
    glrl := SYSDIF('global lock releases');
    gcge := SYSDIF('global cache gets');
    gcgt := SYSDIF('global cache get time');
    gccv := SYSDIF('global cache converts');
    gcct := SYSDIF('global cache convert time');
    gccrrv := SYSDIF('global cache cr blocks received');
    gccrrt := SYSDIF('global cache cr block receive time');
    gccurv := SYSDIF('global cache current blocks received');
    gccurt := SYSDIF('global cache current block receive time');
    gccrsv := SYSDIF('global cache cr blocks served');
    gccrbt := SYSDIF('global cache cr block build time');
    gccrft := SYSDIF('global cache cr block flush time');
    gccrst := SYSDIF('global cache cr block send time');
    gccusv := SYSDIF('global cache current blocks served');
    gccupt := SYSDIF('global cache current block pin time');
    gccuft := SYSDIF('global cache current block flush time');
    gccust := SYSDIF('global cache current block send time');
    end if;
    /* Determine if we want to report on session-specific statistics.
Check that the session is the same one for both snapshots. */
    select session_id
    , serial#
    into l_b_session_id
    , l_b_serial#
    from stats$snapshot
    where snap_id = bid
    and dbid = db_ident
    and instance_number = inst_num;
    select session_id
    , serial#
    into l_e_session_id
    , l_e_serial#
    from stats$snapshot
    where snap_id = eid
    and dbid = db_ident
    and instance_number = inst_num;
    if ( (l_b_session_id = l_e_session_id)
    and (l_b_serial# = l_e_serial#)
    and (l_b_session_id != 0) ) then
    /* we have a valid comparison - it is the
    same session - get number of tx performed
    by this session */
    strn := SESDIF('user rollbacks') + SESDIF('user commits');
    if strn = 0 then
    /* No new transactions */
    strn := 1;
    end if;
    else
    /* No valid comparison can be made */
    strn :=1;
    end if;
    end STAT_CHANGES;
    function SNAP
    (i_snap_level in number default null
    ,i_session_id in number default null
    ,i_ucomment in varchar2 default null
    ,i_num_sql in number default null
    ,i_executions_th in number default null
    ,i_parse_calls_th in number default null
    ,i_disk_reads_th in number default null
    ,i_buffer_gets_th in number default null
    ,i_sharable_mem_th in number default null
    ,i_version_count_th in number default null
    ,i_all_init in varchar2 default null
    ,i_pin_statspack in varchar2 default null
,i_modify_parameter in varchar2 default 'FALSE'
)
    RETURN integer IS
    /* This function performs a snapshot of the v$ views into the
    stats$ tables, and returns the snapshot id.
    If parameters are passed, these are the values used, otherwise
the values stored in the stats$statspack_parameter table are used. */
    l_snap_id integer;
    l_snap_level number;
    l_session_id number;
    l_serial# number;
    l_ucomment varchar2(160);
    l_num_sql number;
    l_executions_th number;
    l_parse_calls_th number;
    l_disk_reads_th number;
    l_buffer_gets_th number;
    l_sharable_mem_th number;
    l_version_count_th number;
    l_all_init varchar2(5);
    l_pin_statspack varchar2(10);
    l_sql_stmt varchar2(3000);
    l_slarti varchar2(20);
    l_threshold number;
    l_total_sql number := 0;
    l_total_sql_mem number := 0;
    l_single_use_sql number := 0;
    l_single_use_sql_mem number := 0;
    l_text_subset varchar2(31);
    l_sharable_mem number;
    l_version_count number;
    l_sorts number;
    l_module varchar2(64);
    l_loaded_versions number;
    l_executions number;
    l_loads number;
    l_invalidations number;
    l_parse_calls number;
    l_disk_reads number;
    l_buffer_gets number;
    l_rows_processed number;
    l_address raw(8);
    l_hash_value number;
    l_max_begin_time date;
    cursor GETSERIAL is
    select serial#
    from v$session
    where sid = l_session_id;
    PROCEDURE snap_sql IS
    begin
    /* Gather summary statistics */
    insert into stats$sql_statistics
    ( snap_id
    , dbid
    , instance_number
    , total_sql
    , total_sql_mem
    , single_use_sql
, single_use_sql_mem
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , count(1)
    , sum(sharable_mem)
    , sum(decode(executions, 1, 1, 0))
    , sum(decode(executions, 1, sharable_mem, 0))
    from stats$v$sqlxs
    where is_obsolete = 'N';
    /* Gather SQL statements which exceed any threshold,
excluding obsolete parent cursors. */
    insert into stats$sql_summary
    ( snap_id
    , dbid
    , instance_number
    , text_subset
    , sharable_mem
    , sorts
    , module
    , loaded_versions
    , executions
    , loads
    , invalidations
    , parse_calls
    , disk_reads
    , buffer_gets
    , rows_processed
    , command_type
    , address
    , hash_value
    , version_count
    , cpu_time
    , elapsed_time
    , outline_sid
, outline_category
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , substr(sql_text,1,31)
    , sharable_mem
    , sorts
    , module
    , loaded_versions
    , executions
    , loads
    , invalidations
    , parse_calls
    , disk_reads
    , buffer_gets
    , rows_processed
    , command_type
    , address
    , hash_value
    , version_count
    , cpu_time
    , elapsed_time
    , outline_sid
    , outline_category
    from stats$v$sqlxs
    where is_obsolete = 'N'
    and ( buffer_gets > l_buffer_gets_th
    or disk_reads > l_disk_reads_th
    or parse_calls > l_parse_calls_th
    or executions > l_executions_th
    or sharable_mem > l_sharable_mem_th
or version_count > l_version_count_th );
    /* Insert the SQL Text for hash_values captured in the snapshot
    into stats$sqltext if it's not already there. Identify SQL which
exceeded the threshold by querying stats$sql_summary for this
snapid and database instance. */
    insert into stats$sqltext
    ( hash_value
    , text_subset
    , piece
    , sql_text
    , address
    , command_type
, last_snap_id
)
    select st1.hash_value
    , ss.text_subset
    , st1.piece
    , st1.sql_text
    , st1.address
    , st1.command_type
    , ss.snap_id
    from v$sqltext st1
    , stats$sql_summary ss
    where ss.snap_id = l_snap_id
    and ss.dbid = p_dbid
    and ss.instance_number = p_instance_number
    and st1.hash_value = ss.hash_value
    and st1.address = ss.address
    and not exists (select 1
    from stats$sqltext st2
    where st2.hash_value = ss.hash_value
and st2.text_subset = ss.text_subset);
    IF l_snap_level >= 6 THEN
/* Identify SQL which exceeded the threshold by querying
stats$sql_summary for this snapid and database instance.
Capture the plans which were used for the high-load SQL if
we don't already have this data.
Omit capturing plan usage information for cursors which
have a zero plan hash value.
Currently this is captured in a level 6 (or greater)
snapshot, however this may be integrated into the level 5
snapshot at a later date.
hl - high load. */
    insert into stats$sql_plan_usage
    ( hash_value
    , text_subset
    , plan_hash_value
    , cost
    , snap_id
    , address
, optimizer
)
    select hl.hash_value
    , hl.text_subset
    , hl.plan_hash_value
    , hl.cost
    , max(hl.snap_id)
    , max(hl.address)
    , max(hl.optimizer)
    from (select /*+ ordered use_nl(sq) index(sq) */
    ss.hash_value
    , ss.text_subset
    , sq.plan_hash_value
    , nvl(sq.optimizer_cost,-9) cost
    , ss.snap_id snap_id
    , ss.address
    , sq.optimizer_mode optimizer
    from stats$sql_summary ss
    , v$sql sq
    where ss.snap_id = l_snap_id
    and ss.dbid = p_dbid
    and ss.instance_number = p_instance_number
    and sq.hash_value = ss.hash_value
    and sq.address = ss.address
    and sq.plan_hash_value > 0
    ) hl
where not exists (select /*+ no_unnest */ *
    from stats$sql_plan_usage spu
    where spu.hash_value = hl.hash_value
    and spu.text_subset = hl.text_subset
    and spu.plan_hash_value
    = hl.plan_hash_value
and spu.cost = hl.cost)
    group by hl.hash_value
    , hl.text_subset
    , hl.plan_hash_value
    , hl.cost
    , hl.optimizer;
    /* For all new hash_value, plan_hash_value, cost combinations
    just captured, get the optimizer plans, if we don't already
    have them. Note that the plan (and hence the plan hash value)
    comprises the access path and the join order (and not
variable factors such as the cardinality). */
    insert into stats$sql_plan
    ( plan_hash_value
    , id
    , operation
    , options
    , object_node
    , object#
    , object_owner
    , object_name
    , optimizer
    , parent_id
    , depth
    , position
    , cost
    , cardinality
    , bytes
    , other_tag
    , partition_start
    , partition_stop
    , partition_id
    , other
    , distribution
    , cpu_cost
    , io_cost
    , temp_space
, snap_id
)
    select /*+ ordered use_nl(s) use_nl(sp.p) */
    new_plan.plan_hash_value
    , sp.id
    , max(sp.operation)
    , max(sp.options)
    , max(sp.object_node)
    , max(sp.object#)
    , max(sp.object_owner)
    , max(sp.object_name)
    , max(sp.optimizer)
    , max(sp.parent_id)
    , max(sp.depth)
    , max(sp.position)
    , max(sp.cost)
    , max(sp.cardinality)
    , max(sp.bytes)
    , max(sp.other_tag)
    , max(sp.partition_start)
    , max(sp.partition_stop)
    , max(sp.partition_id)
    , max(sp.other)
    , max(sp.distribution)
    , max(sp.cpu_cost)
    , max(sp.io_cost)
    , max(sp.temp_space)
    , max(new_plan.snap_id)
    from (select /*+ index(spu) */
    distinct
    spu.plan_hash_value
    , spu.hash_value
    , spu.address
    , spu.text_subset
    , spu.snap_id
    from stats$sql_plan_usage spu
    where spu.snap_id = l_snap_id
    and not exists (select /*+ nl_aj */ *
    from stats$sql_plan ssp
    where ssp.plan_hash_value
    = spu.plan_hash_value
)
) new_plan
    , v$sql s
    , v$sql_plan sp
    where sp.hash_value = new_plan.hash_value
    and sp.address = new_plan.address
    and s.hash_value = new_plan.hash_value
    and s.address = new_plan.address
    and s.hash_value = sp.hash_value
    and s.address = sp.address
    and s.child_number = sp.child_number
    group by
    new_plan.plan_hash_value
    , sp.id;
    END IF; /* snap level >=6 */
    END snap_sql;
    begin /* Function SNAP */
    /* Get instance parameter defaults from stats$statspack_parameter,
    or use supplied parameters.
If all parameters are specified, use them; otherwise take the
values of the unspecified parameters from stats$statspack_parameter.
*/
    statspack.qam_statspack_parameter
    ( p_dbid
    , p_instance_number
    , i_snap_level, i_session_id, i_ucomment, i_num_sql
    , i_executions_th, i_parse_calls_th
    , i_disk_reads_th, i_buffer_gets_th, i_sharable_mem_th
    , i_version_count_th, i_all_init
    , i_pin_statspack
    , i_modify_parameter
    , l_snap_level, l_session_id, l_ucomment, l_num_sql
    , l_executions_th, l_parse_calls_th
    , l_disk_reads_th, l_buffer_gets_th, l_sharable_mem_th
    , l_version_count_th, l_all_init
    , l_pin_statspack);
    /* Generate a snapshot id */
    select stats$snapshot_id.nextval
    into l_snap_id
    from dual
    where rownum = 1;
    /* Determine the serial# of the session to maintain stats for,
if this was requested. */
    if l_session_id > 0 then
    if not GETSERIAL%ISOPEN then open GETSERIAL; end if;
    fetch GETSERIAL into l_serial#;
    if GETSERIAL%NOTFOUND then
    /* Session has already disappeared - don't gather
    statistics for this session in this snapshot */
    l_session_id := 0;
    l_serial# := 0;
    end if; close GETSERIAL;
    else
    l_serial# := 0;
    end if;
    /* The instance has been restarted since the last snapshot */
    if p_new_sga = 0
    then
    begin
    p_new_sga := 1;
    /* Get the instance startup time, and other characteristics */
    insert into stats$database_instance
    ( dbid
    , instance_number
    , startup_time
    , snap_id
    , parallel
    , version
    , db_name
    , instance_name
, host_name
)
    select p_dbid
    , p_instance_number
    , p_startup_time
    , l_snap_id
    , p_parallel
    , p_version
    , p_name
    , p_instance_name
    , p_host_name
    from sys.dual;
    commit;
    end;
    end if; /* new SGA */
    /* Work out the max undo stat time, used for gathering undo stat data */
    select nvl(max(begin_time), to_date('01011900','DDMMYYYY'))
    into l_max_begin_time
    from stats$undostat
    where dbid = p_dbid
    and instance_number = p_instance_number;
    /* Save the snapshot characteristics */
    insert into stats$snapshot
    ( snap_id, dbid, instance_number
    , snap_time, startup_time
    , session_id, snap_level, ucomment
    , executions_th, parse_calls_th, disk_reads_th
    , buffer_gets_th, sharable_mem_th
    , version_count_th, serial#, all_init)
    values
    ( l_snap_id, p_dbid, p_instance_number
    , SYSDATE, p_startup_time
    , l_session_id, l_snap_level, l_ucomment
    , l_executions_th, l_parse_calls_th, l_disk_reads_th
    , l_buffer_gets_th, l_sharable_mem_th
    , l_version_count_th, l_serial#, l_all_init);
    /* Begin gathering statistics */
    insert into stats$filestatxs
    ( snap_id
    , dbid
    , instance_number
    , tsname
    , filename
    , phyrds
    , phywrts
    , singleblkrds
    , readtim
    , writetim
    , singleblkrdtim
    , phyblkrd
    , phyblkwrt
    , wait_count
, time
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , tsname
    , filename
    , phyrds
    , phywrts
    , singleblkrds
    , readtim
    , writetim
    , singleblkrdtim
    , phyblkrd
    , phyblkwrt
    , wait_count
    , time
    from stats$v$filestatxs;
    insert into stats$tempstatxs
    ( snap_id
    , dbid
    , instance_number
    , tsname
    , filename
    , phyrds
    , phywrts
    , singleblkrds
    , readtim
    , writetim
    , singleblkrdtim
    , phyblkrd
    , phyblkwrt
    , wait_count
, time
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , tsname
    , filename
    , phyrds
    , phywrts
    , singleblkrds
    , readtim
    , writetim
    , singleblkrdtim
    , phyblkrd
    , phyblkwrt
    , wait_count
    , time
    from stats$v$tempstatxs;
    insert into stats$librarycache
    ( snap_id
    , dbid
    , instance_number
    , namespace
    , gets
    , gethits
    , pins
    , pinhits
    , reloads
    , invalidations
    , dlm_lock_requests
    , dlm_pin_requests
    , dlm_pin_releases
    , dlm_invalidation_requests
, dlm_invalidations
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , namespace
    , gets
    , gethits
    , pins
    , pinhits
    , reloads
    , invalidations
    , dlm_lock_requests
    , dlm_pin_requests
    , dlm_pin_releases
    , dlm_invalidation_requests
    , dlm_invalidations
    from v$librarycache;
    insert into stats$buffer_pool_statistics
    ( snap_id
    , dbid
    , instance_number
    , id
    , name
    , block_size
    , set_msize
    , cnum_repl
    , cnum_write
    , cnum_set
    , buf_got
    , sum_write
    , sum_scan
    , free_buffer_wait
    , write_complete_wait
    , buffer_busy_wait
    , free_buffer_inspected
    , dirty_buffers_inspected
    , db_block_change
    , db_block_gets
    , consistent_gets
    , physical_reads
, physical_writes
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , id
    , name
    , block_size
    , set_msize
    , cnum_repl
    , cnum_write
    , cnum_set
    , buf_got
    , sum_write
    , sum_scan
    , free_buffer_wait
    , write_complete_wait
    , buffer_busy_wait
    , free_buffer_inspected
    , dirty_buffers_inspected
    , db_block_change
    , db_block_gets
    , consistent_gets
    , physical_reads
    , physical_writes
    from v$buffer_pool_statistics;
    insert into stats$rollstat
    ( snap_id
    , dbid
    , instance_number
    , usn
    , extents
    , rssize
    , writes
    , xacts
    , gets
    , waits
    , optsize
    , hwmsize
    , shrinks
    , wraps
    , extends
    , aveshrink
, aveactive
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , usn
    , extents
    , rssize
    , writes
    , xacts
    , gets
    , waits
    , optsize
    , hwmsize
    , shrinks
    , wraps
    , extends
    , aveshrink
    , aveactive
    from v$rollstat;
    insert into stats$rowcache_summary
    ( snap_id
    , dbid
    , instance_number
    , parameter
    , total_usage
    , usage
    , gets
    , getmisses
    , scans
    , scanmisses
    , scancompletes
    , modifications
    , flushes
    , dlm_requests
    , dlm_conflicts
, dlm_releases
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , parameter
    , sum("COUNT")
    , sum(usage)
    , sum(gets)
    , sum(getmisses)
    , sum(scans)
    , sum(scanmisses)
    , sum(scancompletes)
    , sum(modifications)
    , sum(flushes)
    , sum(dlm_requests)
    , sum(dlm_conflicts)
    , sum(dlm_releases)
    from v$rowcache
    group by l_snap_id, p_dbid, p_instance_number, parameter;
/* Collect parameters every snapshot, to cater for dynamic
parameters changeable while the instance is running
*/
    if l_all_init = 'FALSE' then
    insert into stats$parameter
    ( snap_id
    , dbid
    , instance_number
    , name
    , value
    , isdefault
, ismodified
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , name
    , value
    , isdefault
    , ismodified
    from v$system_parameter;
    else
    insert into stats$parameter
    ( snap_id
    , dbid
    , instance_number
    , name
    , value
    , isdefault
, ismodified
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , i.ksppinm
    , sv.ksppstvl
    , sv.ksppstdf
    , decode(bitand(sv.ksppstvf,7),1,'MODIFIED',4,'SYSTEM_MOD','FALSE')
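/* per the decode above: bit value 1 of ksppstvf marks a
session-modified parameter, 4 a system-modified one;
anything else reports FALSE */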
    from stats$x$ksppi i
    , stats$x$ksppsv sv
    where i.indx = sv.indx;
    end if;
    /* To cater for variable size SGA - insert on each snapshot */
    insert into stats$sga
    ( snap_id
    , dbid
    , instance_number
    , name
, value
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , name
    , value
    from v$sga;
    /* Get current allocation of memory in the SGA */
    insert into stats$sgastat
    ( snap_id
    , dbid
    , instance_number
    , pool
    , name
, bytes
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , pool
    , name
    , bytes
    from v$sgastat;
    insert into stats$system_event
    ( snap_id
    , dbid
    , instance_number
    , event
    , total_waits
    , total_timeouts
, time_waited_micro
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , event
    , total_waits
    , total_timeouts
    , time_waited_micro
    from v$system_event;
    insert into stats$bg_event_summary
    ( snap_id
    , dbid
    , instance_number
    , event
    , total_waits
    , total_timeouts
, time_waited_micro
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , e.event
    , sum(e.total_waits)
    , sum(e.total_timeouts)
    , sum(e.time_waited_micro)
    from v$session_event e
    where e.sid in (select s.sid from v$session s where s.type = 'BACKGROUND')
    group by l_snap_id, p_dbid, p_instance_number, e.event;
    insert into stats$sysstat
    ( snap_id
    , dbid
    , instance_number
    , statistic#
    , name
, value
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , statistic#
    , name
    , value
    from v$sysstat;
    insert into stats$waitstat
    ( snap_id
    , dbid
    , instance_number
    , class
    , wait_count
, time
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , class
    , "COUNT"
    , time
    from v$waitstat;
    insert into stats$enqueue_stat
    ( snap_id
    , dbid
    , instance_number
    , eq_type
    , total_req#
    , total_wait#
    , succ_req#
    , failed_req#
, cum_wait_time
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , eq_type
    , total_req#
    , total_wait#
    , succ_req#
    , failed_req#
    , cum_wait_time
    from v$enqueue_stat
    where total_req# != 0;
    insert into stats$latch
    ( snap_id
    , dbid
    , instance_number
    , name
    , latch#
    , level#
    , gets
    , misses
    , sleeps
    , immediate_gets
    , immediate_misses
    , spin_gets
    , sleep1
    , sleep2
    , sleep3
    , sleep4
, wait_time
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , name
    , latch#
    , level#
    , gets
    , misses
    , sleeps
    , immediate_gets
    , immediate_misses
    , spin_gets
    , sleep1
    , sleep2
    , sleep3
    , sleep4
    , wait_time
    from v$latch;
    insert into stats$latch_misses_summary
    ( snap_id
    , dbid
    , instance_number
    , parent_name
    , where_in_code
    , nwfail_count
    , sleep_count
, wtr_slp_count
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , parent_name
    , "WHERE"
    , sum(nwfail_count)
    , sum(sleep_count)
    , sum(wtr_slp_count)
    from v$latch_misses
    where sleep_count > 0
    group by l_snap_id, p_dbid, p_instance_number
    , parent_name, "WHERE";
    insert into stats$resource_limit
    ( snap_id
    , dbid
    , instance_number
    , resource_name
    , current_utilization
    , max_utilization
    , initial_allocation
, limit_value
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , resource_name
    , current_utilization
    , max_utilization
    , initial_allocation
    , limit_value
    from v$resource_limit
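/* limit_value is reported with a leading space, hence the
literal ' UNLIMITED' below */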
    where limit_value != ' UNLIMITED'
    and max_utilization > 0;
    insert into stats$undostat
    ( begin_time
    , end_time
    , dbid
    , instance_number
    , snap_id
    , undotsn
    , undoblks
    , txncount
    , maxquerylen
    , maxconcurrency
    , unxpstealcnt
    , unxpblkrelcnt
    , unxpblkreucnt
    , expstealcnt
    , expblkrelcnt
    , expblkreucnt
    , ssolderrcnt
, nospaceerrcnt
)
    select begin_time
    , end_time
    , p_dbid
    , p_instance_number
    , l_snap_id
    , undotsn
    , undoblks
    , txncount
    , maxquerylen
    , maxconcurrency
    , unxpstealcnt
    , unxpblkrelcnt
    , unxpblkreucnt
    , expstealcnt
    , expblkrelcnt
    , expblkreucnt
    , ssolderrcnt
    , nospaceerrcnt
    from v$undostat
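/* keep only completed collection intervals: 1/(24*6) of a
day is the 10-minute period a v$undostat row covers */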
    where begin_time > l_max_begin_time
    and begin_time + (1/(24*6)) <= end_time;
    insert into stats$db_cache_advice
    ( snap_id
    , dbid
    , instance_number
    , id
    , name
    , block_size
    , buffers_for_estimate
    , advice_status
    , size_for_estimate
    , estd_physical_read_factor
, estd_physical_reads
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , id
    , name
    , block_size
    , buffers_for_estimate
    , advice_status
    , size_for_estimate
    , estd_physical_read_factor
    , estd_physical_reads
    from v$db_cache_advice
    where advice_status = 'ON';
    insert into stats$pgastat
    ( snap_id
    , dbid
    , instance_number
    , name
, value
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , name
    , value
    from v$pgastat;
    insert into stats$instance_recovery
    ( snap_id
    , dbid
    , instance_number
    , recovery_estimated_ios
    , actual_redo_blks
    , target_redo_blks
    , log_file_size_redo_blks
    , log_chkpt_timeout_redo_blks
    , log_chkpt_interval_redo_blks
    , fast_start_io_target_redo_blks
    , target_mttr
    , estimated_mttr
, ckpt_block_writes
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , recovery_estimated_ios
    , actual_redo_blks
    , target_redo_blks
    , log_file_size_redo_blks
    , log_chkpt_timeout_redo_blks
    , log_chkpt_interval_redo_blks
    , fast_start_io_target_redo_blks
    , target_mttr
    , estimated_mttr
    , ckpt_block_writes
    from v$instance_recovery;
    if p_parallel = 'YES' then
    insert into stats$dlm_misc
    ( snap_id
    , dbid
    , instance_number
    , statistic#
    , name
, value
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , statistic#
    , name
    , value
    from v$dlm_misc;
    end if; /* parallel */
    /* Begin gathering Extended Statistics */
    IF l_snap_level >= 5 THEN
    snap_sql;
    END IF; /* snap level >=5 */
    IF l_snap_level >= 10 THEN
    insert into stats$latch_children
    ( snap_id
    , dbid
    , instance_number
    , latch#
    , child#
    , gets
    , misses
    , sleeps
    , immediate_gets
    , immediate_misses
    , spin_gets
    , sleep1
    , sleep2
    , sleep3
    , sleep4
, wait_time
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , latch#
    , child#
    , gets
    , misses
    , sleeps
    , immediate_gets
    , immediate_misses
    , spin_gets
    , sleep1
    , sleep2
    , sleep3
    , sleep4
    , wait_time
    from v$latch_children;
    insert into stats$latch_parent
    ( snap_id
    , dbid
    , instance_number
    , latch#
    , level#
    , gets
    , misses
    , sleeps
    , immediate_gets
    , immediate_misses
    , spin_gets
    , sleep1
    , sleep2
    , sleep3
    , sleep4
, wait_time
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , latch#
    , level#
    , gets
    , misses
    , sleeps
    , immediate_gets
    , immediate_misses
    , spin_gets
    , sleep1
    , sleep2
    , sleep3
    , sleep4
    , wait_time
    from v$latch_parent;
    END IF; /* snap level >=10 */
/* Record session-granular statistics if a specific session
has been requested
*/
    if l_session_id > 0
    then
    insert into stats$sesstat
    ( snap_id
    , dbid
    , instance_number
    , statistic#
, value
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , statistic#
    , value
    from v$sesstat
    where sid = l_session_id;
    insert into stats$session_event
    ( snap_id
    , dbid
    , instance_number
    , event
    , total_waits
    , total_timeouts
    , time_waited_micro
, max_wait
)
    select l_snap_id
    , p_dbid
    , p_instance_number
    , event
    , total_waits
    , total_timeouts
    , time_waited_micro
    , max_wait
    from v$session_event
    where sid = l_session_id;
    end if;
    commit work;
    RETURN l_snap_id;
    end SNAP; /* Function SNAP */
    begin /* STATSPACK body */
/* Query the database id, instance_number, database name, instance
name and startup time for the instance we are working on. */
    /* Get information about the current instance */
    open get_instance;
    fetch get_instance into
    p_instance_number, p_instance_name
    , p_startup_time, p_parallel, p_version
    , p_host_name;
    close get_instance;
    /* Select the database info for the db connected to */
    open get_db;
    fetch get_db into p_dbid, p_name;
    close get_db;
/* Keep the package */
    sys.dbms_shared_pool.keep('PERFSTAT.STATSPACK', 'P');
/* Determine if the instance has been restarted since the previous snapshot */
    begin
    select 1
    into p_new_sga
    from stats$database_instance
    where startup_time = p_startup_time
    and dbid = p_dbid
    and instance_number = p_instance_number;
    exception
    when NO_DATA_FOUND then
    p_new_sga := 0;
    end;
    end STATSPACK;

When I compiled the package body, it was created with the following error:
    Warning: Package Body created with compilation errors.
    SQL> show error
    Errors for PACKAGE BODY STATSPACK:
    LINE/COL ERROR
    2045/3 PLS-00201: identifier 'SYS.DBMS_SHARED_POOL' must be declared
    2045/3 PL/SQL: Statement ignored
    SQL>
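PLS-00201 on SYS.DBMS_SHARED_POOL at that line usually means the PERFSTAT user cannot see the package: DBMS_SHARED_POOL is created by ?/rdbms/admin/dbmspool.sql and is not granted to PUBLIC by default, and a grant received through a role is not visible inside definer-rights PL/SQL. A minimal sketch of the usual remedy, assuming you have SYSDBA access (run the first two statements as SYS, then recompile):
SQL> @?/rdbms/admin/dbmspool.sql
SQL> grant execute on sys.dbms_shared_pool to perfstat;
SQL> alter package perfstat.statspack compile body;
With the direct grant in place, "show errors" should then come back clean.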

  • Best practice to move things between various environments in SharePoint 2013

Hi All SharePoint Gurus!! - In SP 2010 I used the SharePoint Deployment Wizard (spdeploymentwizard.codeplex.com) to move sites, lists, libraries, items, etc. between environments. We just upgraded to SP 2013, and I have a few lists and libraries that I need to push from the Development 2013 environment into the Staging 2013 and Production 2013 environments, but the Deployment Wizard throws an error right at startup. I know SP 2013 provides granular backups, but those only go down to the list/library level. Could anybody let me know whether the Deployment Wizard works with 2013? I love that tool. Also, what is the best practice for moving content between environments?
Regards,
Khushi

Hi Khushi,
I want to let you know that we built a SharePoint migration tool,
MetaVis Migrator, which can copy and migrate content to and from on-premises or hosted SharePoint sites. The tool can copy entire sites with sub-site hierarchies, content types, fields, lists, list views, documents, items with attachments, look-and-feel elements, permissions, groups and other objects - all together or at any level of granularity (for example, just lists, just list views, or selected items). It preserves created/modified properties, all metadata and versions. It looks like Windows Explorer with copy/paste and drag-and-drop, so it is easy to learn, and it requires no server-side installation, so you can run everything from your own computer or any other machine. The tool also supports incremental (delta) copies based on previous migrations, and includes a Pre-Migration Analysis that helps identify customizations.
A free trial is available at http://www.metavistech.com. Feel free to contact us.
Good luck with your migration project,
Mark
