Execution Statistics by Target

I'm looking for a way to capture the following information, or maybe OWB already gathers this info and I just need to know where to look.
I'm looking for the following info for each target:
Target table name
Load start time
Load end time
Rows inserted
Rows modified
Success or failure
I'd like to tie this information to an audit key in the target tables so we can always know when each row was loaded.
I searched on audit tables (which returned mostly error-related material) and statistics (which returned mostly material on ANALYZE TABLE and actual Oracle optimizer statistics).

You can use DBMS_UTILITY.GET_TIME:
declare
  tempo number;
begin
  tempo := dbms_utility.get_time;
  <your code>
  -- GET_TIME returns hundredths of a second; subtract start from end
  tempo := dbms_utility.get_time - tempo;
  dbms_output.put_line('TEMPO>' || tempo);
end;
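For the per-target load statistics, OWB's runtime audit views already capture most of what you list. A minimal sketch, borrowing the view and column names from the "Mapping execution statistics" reply further down; treat them as assumptions to verify against your own runtime repository:
select y.target_name,                -- target table name
       a.start_time, a.end_time,     -- load start/end time
       z.number_records_inserted,    -- rows inserted
       z.number_records_updated,     -- rows modified
       z.run_status                  -- success or failure
from   all_rt_audit_step_runs z,
       all_rt_audit_step_run_targets y,
       all_rt_audit_map_runs a
where  z.step_id    = y.step_id
and    z.map_run_id = a.map_run_id
order by a.start_time;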

Similar Messages

  • Plan Execution Statistics

    Hi all,
    Good day...
    I can see in my db (version 10.2.0.4) that Plan Execution Statistics and Timed OS Statistics are disabled... What is the use of these parameters?
    SQL> /
    STATISTICS_NAME                                                  SESSION_ SYSTEM_S ACTIVAT
    Buffer Cache Advice                                              ENABLED  ENABLED  TYPICAL
    MTTR Advice                                                      ENABLED  ENABLED  TYPICAL
    Timed Statistics                                                 ENABLED  ENABLED  TYPICAL
    Timed OS Statistics                                              DISABLED DISABLED ALL
    Segment Level Statistics                                         ENABLED  ENABLED  TYPICAL
    PGA Advice                                                       ENABLED  ENABLED  TYPICAL
    Plan Execution Statistics                                        DISABLED DISABLED ALL
    Shared Pool Advice                                               ENABLED  ENABLED  TYPICAL
    Modification Monitoring                                          ENABLED  ENABLED  TYPICAL
    Longops Statistics                                               ENABLED  ENABLED  TYPICAL
    Bind Data Capture                                                ENABLED  ENABLED  TYPICAL
    Ultrafast Latch Statistics                                       ENABLED  ENABLED  TYPICAL
    Threshold-based Alerts                                           ENABLED  ENABLED  TYPICAL
    Global Cache Statistics                                          ENABLED  ENABLED  TYPICAL
    Active Session History                                           ENABLED  ENABLED  TYPICAL
    Undo Advisor, Alerts and Fast Ramp up                            ENABLED  ENABLED  TYPICAL
    Streams Pool Advice                                              ENABLED  ENABLED  TYPICAL
    17 rows selected.
    thanks,
    baskar.l

    If I am getting it right, these settings won't change as long as you don't go and manually change them.
    SQL> select STATISTICS_NAME,SYSTEM_STATUS,SESSION_STATUS
      2  from V$statistics_level
      3  /
    STATISTICS_NAME                                                  SYSTEM_S SESSION_
    Buffer Cache Advice                                              ENABLED  ENABLED
    MTTR Advice                                                      ENABLED  ENABLED
    Timed Statistics                                                 ENABLED  ENABLED
    Timed OS Statistics                                              DISABLED DISABLED
    Segment Level Statistics                                         ENABLED  ENABLED
    PGA Advice                                                       ENABLED  ENABLED
    Plan Execution Statistics                                        DISABLED DISABLED
    Shared Pool Advice                                               ENABLED  ENABLED
    Modification Monitoring                                          ENABLED  ENABLED
    Longops Statistics                                               ENABLED  ENABLED
    Bind Data Capture                                                ENABLED  ENABLED
    Ultrafast Latch Statistics                                       ENABLED  ENABLED
    Threshold-based Alerts                                           ENABLED  ENABLED
    Global Cache Statistics                                          ENABLED  ENABLED
    Active Session History                                           ENABLED  ENABLED
    Undo Advisor, Alerts and Fast Ramp up                            ENABLED  ENABLED
    16 rows selected.
    SQL> alter session set timed_os_statistics=4;
    Session altered.
    SQL>  select STATISTICS_NAME,SYSTEM_STATUS,SESSION_STATUS
      2  from V$statistics_level
      3  /
    STATISTICS_NAME                                                  SYSTEM_S SESSION_
    Buffer Cache Advice                                              ENABLED  ENABLED
    MTTR Advice                                                      ENABLED  ENABLED
    Timed Statistics                                                 ENABLED  ENABLED
    Timed OS Statistics                                              DISABLED ENABLED
    Segment Level Statistics                                         ENABLED  ENABLED
    PGA Advice                                                       ENABLED  ENABLED
    Plan Execution Statistics                                        DISABLED DISABLED
    Shared Pool Advice                                               ENABLED  ENABLED
    Modification Monitoring                                          ENABLED  ENABLED
    Longops Statistics                                               ENABLED  ENABLED
    Bind Data Capture                                                ENABLED  ENABLED
    Ultrafast Latch Statistics                                       ENABLED  ENABLED
    Threshold-based Alerts                                           ENABLED  ENABLED
    Global Cache Statistics                                          ENABLED  ENABLED
    Active Session History                                           ENABLED  ENABLED
    Undo Advisor, Alerts and Fast Ramp up                            ENABLED  ENABLED
    16 rows selected.
    This was 10.2.0.1 on OEL4.
    Looking at the docs,
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams211.htm#REFRN10214
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams215.htm#REFRN10217
    For the parameter value TYPICAL, the default value of this parameter is 0, which means it is disabled. I have changed it at my session level and it's working now. From the first link:
    >
    When the STATISTICS_LEVEL parameter is modified by ALTER SYSTEM, all advisories or statistics are dynamically turned on or off, depending on the new value of STATISTICS_LEVEL. When modified by ALTER SESSION, the following advisories or statistics are turned on or off in the local session only. Their system-wide state is not changed:
    Timed statistics
    Timed OS statistics
    Plan execution statistics
    >
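    In other words, to enable these statistics for just your session (a minimal sketch based on the doc excerpt above; the system-wide setting is untouched):
    SQL> alter session set statistics_level = ALL;
    SQL> select statistics_name, session_status, system_status
      2  from v$statistics_level
      3  where statistics_name = 'Plan Execution Statistics';
    SESSION_STATUS should now show ENABLED while SYSTEM_STATUS stays DISABLED, just as in the timed_os_statistics demonstration above.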
    HTH
    Aman....

  • Mapping execution statistics

    Hello,
    Normally, using the repository browser one can see the DML statistics of a mapping execution, but it shows the total number of records selected, inserted, or updated by a mapping.
    If there are more than one target table in a mapping, and I want to see the number of records
    inserted/updated/delete in each of the target tables,
    where do I need to check?
    Thnks
    MD

    Hi,
    The query below will give you all the necessary details.
    Execute it from the repository OWNER schema.
    select x.task_name,
           y.target_name,
           z.number_errors,
           z.number_records_inserted inserted,
           z.number_records_updated updated,
           z.number_records_merged merged,
           z.number_records_selected selected,
           z.elapse_time,
           z.run_status,
           x.execution_audit_id,
           a.map_run_id
    from   all_rt_audit_step_runs z,
           all_rt_audit_step_run_targets y,
           all_rt_audit_map_runs a,
           all_rt_audit_executions x
    where  x.execution_audit_id = a.execution_audit_id
    and    a.map_run_id = z.map_run_id
    and    z.step_id = y.step_id
    and    x.task_name like 'Mapping_name'
    order by x.execution_audit_id, a.map_run_id

  • Report execution statistics

    How can I generate statistics about the execution of custom reports on the system? I know transaction ST03, but it is not possible to analyze data over a long time range.
    I want to generate statistics to understand whether all the custom reports present on the R/3 system are used or not.
    Thank you very much, and excuse my grammatical errors,
    Gianluca

    Hi Gianluca,
    I don't think any real statistics are stored in database tables. This kind of data really lends itself to being stored in files.
    My gut feeling is that if you can't find what you need in STAT, there is probably not going to be an easy way of getting what you want.
    Sorry I couldn't offer a more positive response.
    Cheers,
    Brad

  • Execution Statistics

    I was looking at some statistics from an ADDM report, and the executions of the statement made me curious. When a Statspack (AWR in 10g) report is run, are the executions of a particular statement accurate for the time frame of that report? Also, if I were to look at v$sqlarea or v$sql and pull out the executions, is that cumulative since database startup?
    I had a hard time finding that info in the documentation so any assistance would be greatly appreciated.
    Thanks,
    Brian

    And a follow-up to this for those of you familiar with RAC: are the statistics specific to a particular instance, or are they an indication of the number of times a statement was executed in the database regardless of instance?
    Thanks,
    Brian
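    For what it's worth, a cursor's EXECUTIONS count in v$sql/v$sqlarea accumulates from the moment the statement is loaded into that instance's shared pool, so at most since instance startup, and it resets if the cursor ages out. In RAC, the v$ views are per-instance; the gv$ views expose every instance, keyed by inst_id. A quick way to compare the two:
    select sql_id, executions
    from   v$sqlarea
    where  rownum <= 10;

    select inst_id, sql_id, executions
    from   gv$sqlarea
    where  rownum <= 10;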

  • Upload data from excel to Ztable with statistics

    Hi,
    I have a requirement to upload data from an Excel sheet to a Z table.
    Here I need to provide the user with execution statistics like:
    1. Number of records read from the Excel spreadsheet
    2. Number of records processed successfully
    3. Number of records with errors
    4. Name and location of the error log file (text-file format)
    5. Name and location of the file containing error records (Excel spreadsheet format)
    I would appreciate it if any of you have code written for the same.

    See the example code below for uploading from an Excel file to SAP.
    REPORT ZLWMI151_UPLOAD no standard page heading
                           line-size 100 line-count 60.
    *tables : zbatch_cross_ref.
    data : begin of t_text occurs 0,
           werks(4) type c,
           cmatnr(15) type c,
           srlno(12) type n,
           matnr(7) type n,
           charg(10) type n,
           end of t_text.
    data: begin of t_zbatch occurs 0,
          werks like zbatch_cross_ref-werks,
          cmatnr like zbatch_cross_ref-cmatnr,
          srlno like zbatch_cross_ref-srlno,
          matnr like zbatch_cross_ref-matnr,
          charg like zbatch_cross_ref-charg,
          end of t_zbatch.
    data : g_repid like sy-repid,
           g_line like sy-index,
           g_line1 like sy-index,
           $v_start_col         type i value '1',
           $v_start_row         type i value '2',
           $v_end_col           type i value '256',
           $v_end_row           type i value '65536',
           gd_currentrow type i.
    data: itab like alsmex_tabline occurs 0 with header line.
    data : t_final like zbatch_cross_ref occurs 0 with header line.
    selection-screen : begin of block blk with frame title text.
    parameters : p_file like rlgrap-filename obligatory.
    selection-screen : end of block blk.
    initialization.
      g_repid = sy-repid.
    at selection-screen on value-request for p_file.
      CALL FUNCTION 'F4_FILENAME'
           EXPORTING
                PROGRAM_NAME = g_repid
           IMPORTING
                FILE_NAME    = p_file.
    start-of-selection.
    * Uploading the data into the internal table
      perform upload_data.
      perform modify_table.
    top-of-page.
    * The FLEX_TEXT values were left blank in the original post; spaces are
    * passed here so the call compiles.
      CALL FUNCTION 'Z_HEADER'
           EXPORTING
                FLEX_TEXT1 = ' '
                FLEX_TEXT2 = ' '
                FLEX_TEXT3 = ' '.
    *&      Form  upload_data
    *       Upload the Excel file contents into internal table itab
    FORM upload_data.
      CALL FUNCTION 'ALSM_EXCEL_TO_INTERNAL_TABLE'
           EXPORTING
                FILENAME                = p_file
                I_BEGIN_COL             = $v_start_col
                I_BEGIN_ROW             = $v_start_row
                I_END_COL               = $v_end_col
                I_END_ROW               = $v_end_row
           TABLES
                INTERN                  = itab
           EXCEPTIONS
                INCONSISTENT_PARAMETERS = 1
                UPLOAD_OLE              = 2
                OTHERS                  = 3.
      IF SY-SUBRC <> 0.
        write:/10 'File '.
      ENDIF.
      if sy-subrc eq 0.
        read table itab index 1.
        gd_currentrow = itab-row.
        loop at itab.
          if itab-row ne gd_currentrow.
            append t_text.
            clear t_text.
            gd_currentrow = itab-row.
          endif.
          case itab-col.
            when '0001'.
              t_text-werks = itab-value.
            when '0002'.
              t_text-cmatnr = itab-value.
            when '0003'.
              t_text-srlno = itab-value.
            when '0004'.
              t_text-matnr = itab-value.
            when '0005'.
              t_text-charg = itab-value.
          endcase.
        endloop.
      endif.
      append t_text.
    ENDFORM.                    " upload_data
    *&      Form  modify_table
    *       Modify the table ZBATCH_CROSS_REF
    FORM modify_table.
      loop at t_text.
        t_final-werks = t_text-werks.
        t_final-cmatnr = t_text-cmatnr.
        t_final-srlno = t_text-srlno.
        t_final-matnr = t_text-matnr.
        t_final-charg = t_text-charg.
        t_final-erdat = sy-datum.
        t_final-erzet = sy-uzeit.
        t_final-ernam = sy-uname.
        t_final-rstat = 'U'.
        append t_final.
        clear t_final.
      endloop.
      delete t_final where werks = ''.
      describe table t_final lines g_line.
      sort t_final by werks cmatnr srlno.
    * Deleting the duplicate records
      perform select_data.
      describe table t_final lines g_line1.
      modify zbatch_cross_ref from table t_final.
      if sy-subrc ne 0.
        write:/ 'Update failed'.
      else.
        skip 1.
        write:/12 'Update has been completed successfully'.
        skip 1.
        write:/12 'Records in file ',42 g_line .
        write:/12 'Updated records in table',42 g_line1.
      endif.
      delete from zbatch_cross_ref where werks = ''.
    ENDFORM.                    " modify_table
    *&      Form  select_data
    *       Deleting the duplicate records
    FORM select_data.
      select werks
             cmatnr
             srlno from zbatch_cross_ref
             into table t_zbatch for all entries in t_final
             where werks = t_final-werks
             and  cmatnr = t_final-cmatnr
             and srlno = t_final-srlno.
      sort t_zbatch by werks cmatnr srlno.
      loop at t_zbatch.
        read table t_final with key werks = t_zbatch-werks
                                    cmatnr = t_zbatch-cmatnr
                                    srlno = t_zbatch-srlno.
        if sy-subrc eq 0.
          delete table t_final .
        endif.
        clear: t_zbatch,
               t_final.
      endloop.
    ENDFORM.                    " select_data
    Reward Points if it is helpful
    Thanks
    Seshu

  • B2B Campaign Execution

    Hi all
    In email campaign execution, where the target group contains BPs which are organizations, the emails should go to their respective contact persons maintained in the BP relationship. Is there any specific setting which needs to be maintained for the above scenario to happen?
    The problem I'm facing is that when I execute an email campaign for a target group containing BPs of category Person, the emails go out of the system and an email activity is also created, as the system is able to find the email address from the BP master data.
    But when I execute an email campaign for a target group containing organizations as BPs, the emails do not go to their respective contact persons despite maintaining the proper BP relationship between organization and contact person.
    Are there specific settings I need to maintain in order for this to happen?
    Thanks
    Avi

    Hi Avinesh,
    To send e-mails to the contact persons of the organizations, make sure you have checked the B2B checkbox on the segment tab for your target group.
    Also click on Functions to specify the role of your contact person (which can be seen below the target group assignment).
    Regards,
    Shalini Chauhan

  • NEW FEATURE:AUTOTRACE IN SQL*PLUS 3.3(EXECUTION PLAN)

    Product: SQL*Plus
    Date written: 2003-10-07
    NEW FEATURE: AUTOTRACE IN SQL*PLUS 3.3
    ======================================
    Autotrace is a new feature available from SQL*Plus 3.3 onward. Previously, you had to set SQL_TRACE=TRUE in init.ora and then run the resulting trace file through the TKPROF utility to obtain the execution path and statistics of a SQL statement.
    From SQL*Plus 3.3 onward, however, a much simpler method is provided.
    1. Start SQL*Plus, connect as the scott user, and create the plan table.
    #sqlplus scott/tiger
    SQL>@$ORACLE_HOME/rdbms/admin/utlxplan
    2. Next, as the sys user, create a role named PLUSTRACE.
    SVRMGR>connect internal;
    SVRMGR>create role plustrace;
    SVRMGR>grant select on v_$sesstat to plustrace;
    SVRMGR>grant select on v_$statname to plustrace;
    SVRMGR>grant select on v_$session to plustrace;
    SVRMGR>grant plustrace to dba with admin option;
    SVRMGR>grant plustrace to scott;
    Note: if SQL*Plus 3.3 is installed on the client, the grant statements above are recorded in the script C:\ORAWIN95\PLUS33\PLUSTRCE.SQL. You can simply run it as follows:
    1> connect sys/manager
    2> @$ORACLE_HOME/sqlplus/admin/plustrce.sql
    3> grant plustrace to scott;
    3. Then connect as the scott user and proceed.
    #sqlplus scott/tiger
    SQL>set autotrace on
    SQL>select * from emp;
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE
    1 0 TABLE ACCESS (FULL) OF 'EMP'
    Statistics
    389 recursive calls
    5 db block gets
    53 consistent gets
    12 physical reads
    0 redo size
    1049 bytes sent via SQL*Net to client
    239 bytes received via SQL*Net from client
    4 SQL*Net round-trips to/from client
    0 sorts (memory)
    0 sorts (disk)
    13 rows processed
    4. For reference, SET AUTOTRACE can be used with several options:
    set autotrace on => explain plan and statistics.
    set autotrace on explain => explain plan only.
    set autotrace traceonly => displays only the trace, without the selected result set.
    set autotrace on statistics => SQL statement execution statistics.
    5. The server version does not matter. Even if the server is version 7.2 or earlier, as long as SQL*Plus 3.3 is installed on the client, you can run SQL*Plus 3.3 on the client, connect to the server, and use autotrace as described above without any problem.
    Reference Documents
    <Note:43214.1>

    Hi Roman,
    I don't have an Oracle 9.2 database readily available, but it works fine on 10g XE. Please note 3.1 is not certified with 9i:
    http://www.oracle.com/technetwork/developer-tools/sql-developer/certification-096745.html
    Regards,
    Gary
    SQL Developer Team

  • Performance Tuning -To find the execution time for Select Statement

    Hi,
    There is a program that takes 10 hrs to execute. I need to tune its performance. The program basically reads a few tables like KNA1, ANLA, ANLU, ADRC etc. and updates a custom table. I did my analysis and found a few performance techniques for ABAP coding.
    Now my problem is, to get this object approved I need to submit the execution statistics to the client. I checked both ST05 and SE30. I heard of a tcode where we can execute a select statement and note its time, then modify it and measure its improved performance. Can anybody advise me on this?
    Thanks,
    Rajani.

    Hi,
    This is documentation regarding performance analysis. I hope it will be useful.
    It is a general practice to use Select * from <database>... This statement retrieves all the fields of the structure from the database.
    The effect is manifold:
    • It increases the time to retrieve data from the database
    • There is a large amount of unused data in memory
    • It increases the processing time from the work area or internal tables
    It is always good practice to retrieve only the required fields. Always use the syntax Select f1 f2 ... fn from <database>...
    e.g. Do not use the following statements:
         data: i_mara like mara occurs 0 with header line.
         data: i_marc like marc occurs 0 with header line.
         select * from mara
              into table i_mara
              where matnr in s_matnr.
         select * from marc
              into table i_marc
              for all entries in i_mara
              where matnr eq i_mara-matnr.
    Instead use the following statements:
         data: begin of i_mara occurs 0,
                 matnr like mara-matnr,
               end of i_mara.
         data: begin of i_marc occurs 0,
                 matnr like marc-matnr,
                 werks like marc-werks,
               end of i_marc.
         select matnr from mara
              into table i_mara
              where matnr in s_matnr.
         " completing the cut-off example: the matching narrow select on marc
         select matnr werks from marc
              into table i_marc
              for all entries in i_mara
              where matnr eq i_mara-matnr.

  • Concurrent execution of the same interface

    Hi,
    we use ODI in an application that waits for incoming data files in a directory and uploads their content into a target database.
    The principal question I have is this:
    Can ODI handle concurrent executions of a single interface? In our use case, this happens when two (or more) users submit their files within short period of time, but I think this question is important in general. Is ODI a good fit for concurrent data flows targeting a single datastore?
    I personally doubt that. An ODI interface consists of loading and integration steps. That is, loading data into one or more temporary work tables, joining into a flow table (if having more than one source datastore), and finally integrating (merging) data into the target datastore. Most of the loading and integration knowledge modules follow this pattern:
    1. drop work table
    2. create work table
    3. populate work table
    4. use work table (e.g. for joining into a flow table)
    5. drop work table
    This seems to be a problem - indeed testing with two concurrent executions failed due to the faster instance dropping the work table being used by the other instance.
    I know you can configure the KMs not to drop the work table and/or design custom LKMs/IKMs as you see fit, yet the idea of two concurrent processes using the same work table doesn't look right to me.
    So back to my original question - what is the best practice for concurrent ODI executions on the same target? Is serializing of executions on single target a must? Does ODI provide means to achieve the serialization? Are there other options?
    Thanks in advance for your feedback.

    Sure, I like the idea.
    It's a pity that such KMs do not come out of the box with ODI. Even though I agree with you that it's an easy customization, it's still effort that someone has to pay for, on top of the ODI license cost. After all, ODI is supposed to reduce the amount of development work. Hopefully Oracle realizes this and ships more reusable KMs in the future (the product is still pretty new in the Oracle portfolio).
    Thanks for your feedback; it's definitely more elegant than serializing execution (which remains an option for less demanding scenarios).
    Further elaborating your idea... how about creating an ad-hoc work schema dynamically? Is that feasible? Say, by cloning a template physical schema and setting the work schema? That way the existing KMs could be reused.
    Another fancy idea would be using the substitution API in the work table prefix definition, e.g. "C$<%=odiRef.getSession("SESS_NO")%>". That would be a nice feature for Oracle to implement.
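    To make the prefix idea concrete, a hypothetical sketch (table names and columns are invented for illustration): with a session-scoped prefix, two concurrent runs of the same interface would each create, use, and drop their own work table instead of colliding on one:
    -- session 5001's copy of the work table
    create table C$5001_CUSTOMER (cust_id number, cust_name varchar2(100));
    -- session 5002, running the same interface at the same time
    create table C$5002_CUSTOMER (cust_id number, cust_name varchar2(100));
    -- each session drops only its own table when its run completes
    drop table C$5001_CUSTOMER;
    drop table C$5002_CUSTOMER;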

  • ETL execution time want to reduce

    Hi Everybody,
    I am working on OWB 10g R2.
    Environment: Windows 2003 Server 64-bit on an Itanium server; Oracle 10g database on a NetApp server, mapped as the I: drive on the 186 server where OWB is installed.
    Source: Oracle staging schema
    Target: Oracle target schema
    Problem:
    A month ago our ETL process took 2 hrs to complete; nowadays it takes 5 hrs, and I don't know why.
    Can anybody suggest what I need to check in OWB for optimization?
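    (The query referred to in the reply below is not included in this digest. Judging by the output columns, it was presumably something like this sketch against the OWB audit views, with the view and column names assumed from that output:)
    select map_name, start_time, end_time, elapse_time,
           number_errors, number_logical_errors,
           number_records_selected, number_records_inserted,
           number_records_updated, number_records_deleted,
           number_records_discarded, number_records_merged
    from   all_rt_audit_map_runs
    where  trunc(start_time) = trunc(sysdate)
    order by elapse_time desc;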

    Thanks for reply sir,
    as you suggest a query for checking the execution time in desc order, I am sending you little bit o/p for today date execution.
    MAP_NAME                       START_TIM  END_TIME   ELAPSE_TIME  ERRORS  LOGICAL_ERRORS  SELECTED  INSERTED  UPDATED  DELETED  DISCARDED  MERGED
    M_CONTRACT_SUMMARY_M2__V_1     20-NOV-07  20-NOV-07         1056       0               0    346150         0   346052        0          0       0
    M_POLICY_SUSPENCE_V_1          20-NOV-07  20-NOV-07          884       0               0    246576         0        0        0          0  246576
    M_ACTIVITY_AMT_DETAIL_M3_V_1   20-NOV-07  20-NOV-07          615       0               0     13927     13927        0        0          0       0
    ==================================
    I think elapse time depends on the number of records selected and inserted/merged; if the records are reduced then the time also reduces. But compared to before (when the ETL finished within 2 hrs), we now see more than a 100-second difference per mapping.
    Source tables are analyzed daily before mapping execution starts, and target tables are analyzed in the evening.
    As far as I remember, no major changes were made to the ETL mappings recently. One day a problem arose with the source location for another process, Wonders (as I said before, there are 3 main processes: Sun, Wonders, and Life_asia, of which Sun and Wonders are scheduled), so we corrected that location and deployed all the mappings as the control center messages required.
    The mappings then ran fine, but execution time increased by 1 hr more (5 hrs+) than before (3-4 hrs).
    Normal times were:
    2 hrs for Life_asia
    30 min for Wonders
    15 min for Sun
    Can you suggest what I can do as a temporary/permanent solution for this problem?
    Our system config:
    1 TB HDD, of which 200-300 GB is free
    4 GB RAM
    64-bit Windows OS
    Temp tablespace 99% used, auto-extendable
    Target tablespace 93-95% used
    Data is loaded incrementally, daily.
    The load window was 5 am to 8 am, but nowadays it runs until 12:30 pm, after which the materialized views refresh, and after that the reports and cubes refresh.
    So the whole process is delayed, and this is a live process.
    Let me know if you need any more info.
    About the hardware config: do we need to increase something, like RAM, memory, etc.?
    Awaiting your reply...

  • Ant conditional target in Build.XML

    Hi!
    I am new to the Ant build process and I am looking for a conditional build parameter in build.xml, so that the user can use a command-line option to execute the same target with some modification; if it is not passed, then normal execution takes place. I want something like this:
    <!-- compiling of java files done here normal "ant"-->
        <target name="compile">
            <javac srcdir="${src}" destdir="${src}" classpathref="classpath"/>
            <echo message="compiling done"/>
        </target>
        <!-- Excute when some argument is passed something like conditional statemet-->
        <!-- compile without wsdl when "ant withoutwsdl"-->
        <target name="compile">
             <javac srcdir="${src}" destdir="${src}" excludes="wsdl/**" classpathref="classpath"/>
             <echo message="Compiled without WSDL done."/>
        </target>
    So that I don't have to make many changes to the build process.
    Please help me out.
    Thanks.

    Check the Ant documentation. It tells you how to have conditional execution in your targets (see the if and unless attributes on <target>).
    Of course you can NOT have multiple targets with the same name.

  • BW Statistics and delta management

    Hi,
    This is a very basic question but I could not find myself the way to a solution...
    I got a timeout during an update of the BW Statistics data targets. The request was red and I removed it from the cube.
    I tried to restart the update anew (repeat delta), but after a warning the action is cancelled. The warning is:
    <i>
    Last delta update is not yet completed.
    Therefore, no new delta update is possible.
    You can start the request again
    if the last delta request is red or green in the monitor (QM activity) </i>
    If I set the request to green I will lose the last delta.
    So, what can I do?
    Thxs.
    LauQ

    OK. I did not set it via the QM. As it was red in the monitor I thought that it was already red in the QM status.
    Thanks for this hint. Reward given.
    LauQ.

  • How to get the execution time of a query

    Hi,
    Environment: 10.2.0.4.0
    Just wondering how I can get the query execution time? I am not interested in the query output, nor do I want the statistics; just the execution time.
    Any suggestions will be appreciated
    Thanks in advance
    rogers42

    If you're using SQL*Plus
    SQL> set autotrace traceonly
    SQL> set timing on
    SQL> <<your query here>>
    SQL*Plus will fetch all the data and then report the query plan, execution statistics, and elapsed time. It will not display the actual data.
    SET TIMING ON alone tells SQL*Plus to display the execution time of each SQL statement. The problem is that it also displays all the data, which can skew the results because you're including the time required by SQL*Plus to pipe a bunch of data to the screen.
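    For example (the query itself is just an illustration):
    SQL> set autotrace traceonly
    SQL> set timing on
    SQL> select * from all_objects where rownum <= 1000;
    SQL*Plus then prints the execution plan, the run statistics, and an "Elapsed:" line, but none of the 1,000 rows.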
    Justin

  • Difference: BI Statistics 3.x to 7.0

    Hello All,
    we want to implement a cost sharing model regarding our different InfoAreas.
    Therefore, we want to use BI Statistics.
    In the old BI Statistics 3.x I find a multicube 0BWTC_C10 which provides all key figures that we need and additionally the navigation object InfoArea.
    But as we migrated to 7.0 I read that certain InfoCubes have been replaced, for example
    0tct_mc01 replaces 0bwtc_c02
    0tct_mc02 replaces 0bwtc_c02 and so on.
    However, I cannot find a multicube which combines all of these new InfoCubes.
    Here my question:
    Do I have to use the new infocubes or can I also load my query statistics (new 7.0 queries) into the old infocubes and use the old multicubes?
    Can I also use the old InfoCubes for data-loading statistics although I use the new technology? I still use InfoPackages and DTPs.
    Thank you in advance.
    best regards,
    Ilona

    The old statistics-related targets are obsolete, and you have to go ahead with the new COCKPIT targets.
    So if you want to read the earlier statistics information, you can use RSA1OLD to check the historical data. In BI 7.0 the earlier data is not migrated, and hence you need to work with the new data flow.
    In terms of DTPs, it's optional.
    Hope this helps..
