DBMS_DATAPUMP - how to ensure PL/SQL only completes when the job completes

Hi,
Calling dbms_datapump via PL/SQL - when I look at the output directory where the log and export files are created, the export seems to take a while, but the PL/SQL comes back saying complete far earlier.
Using 11.2.0.3.
I want the PL/SQL to show complete only when the job is actually complete.
It seems to be running in the background.
declare
  -- Local variables here
  i  integer;
  h1 number;                        -- Data Pump job handle
  dir_name    varchar2(30);         -- directory object name
  v_file_name varchar2(100);
  v_log_name  varchar2(100);
  v_job_status ku$_Status;          -- the status object returned by get_status
  v_job_state  VARCHAR2(4000);
  v_status     ku$_Status1010;
  v_logs       ku$_LogEntry1010;
  v_row        PLS_INTEGER;
  v_current_sequence_number archive_audit.aa_etl_run_num_seq%type;
  v_jobState   user_datapump_jobs.state%TYPE;
begin
  --execute immediate ('alter tablespace ARCHIVED_PARTITIONS read only');
  -- Get the last etl_run_num_seq value by querying the public synonym ARCHIVE_ETL_RUN_NUM_SEQ
  -- (need to check there is no caching on etl_run_num_seq)
  select last_number - 1
    into v_current_sequence_number
    from ALL_SEQUENCES A
   WHERE A.SEQUENCE_NAME = 'ETL_RUN_NUM_SEQ';
v_file_name := 'archiveexppre.'||v_current_sequence_number;
v_log_name  := 'archiveexpprelog.'||v_current_sequence_number;
dbms_output.put_line(v_file_name);
dbms_output.put_line(v_log_name);
-- Create a (user-named) Data Pump job to do a schema export.
  dir_name := 'DATA_EXPORTS_DIR';
  h1 := dbms_datapump.open(operation   => 'EXPORT',
                           job_mode    => 'TRANSPORTABLE',
                           remote_link => NULL,
                           job_name    => 'ARCHIVEEXP10'); --||v_current_sequence_number);
  dbms_datapump.add_file(handle =>h1,
                         filename => v_file_name,
                         directory => dir_name,
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE,
                         reusefile => 1); -- value of 1 instructs to overwrite existing file
  dbms_datapump.add_file(handle =>h1,
                         filename => v_log_name,
                         directory => dir_name,
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE,
                         reusefile => 1); -- value of 1 instructs to overwrite existing file
  dbms_datapump.metadata_filter(
      handle => h1,
      name   => 'TABLESPACE_EXPR',
      value  => 'IN(''ARCHIVED_PARTITIONS'')');
  --dbms_datapump.metadata_filter(handle => h1,
  --                       name  => 'TABLE_FILTER',
  --                       value => 'BATCH_AUDIT');
-- Start the datapump_job
-- dbms_datapump.set_parameter(h1, 'TRANSPORTABLE', 'ALWAYS');
  dbms_datapump.start_job(h1);
  -- dbms_datapump.detach(handle => h1);  -- do not detach: wait_for_job below needs the session to stay attached
dbms_datapump.wait_for_job(h1,v_jobState);
dbms_output.put_line('Job has completed');
exception
  when others then
    dbms_datapump.get_status(handle    => h1,
                             mask      => dbms_datapump.KU$_STATUS_WIP,
                             timeout   => 0,
                             job_state => v_job_state,
                             status    => v_job_status);
    dbms_output.put_line(v_job_state);
    RAISE_APPLICATION_ERROR(-20010, DBMS_UTILITY.format_error_backtrace);
end;
/
Running this causes the behaviour above. How do I ensure the PL/SQL block only completes when the dbms_datapump job completes, i.e. runs in the foreground?
Tried adding dbms_datapump.wait_for_job(h1, v_jobState); but get the message "job is not attached to this session" when I add this.
Removed dbms_datapump.detach and now it works OK - seems that dbms_datapump.detach + wait_for_job are mutually exclusive?
Thanks

user5716448 wrote:
Removed dbms_datapump.detach and now it works OK - seems that dbms_datapump.detach + wait_for_job are mutually exclusive?
If you want your block to WAIT till Data Pump finishes, why are you detaching? Detach means you are no longer interested once the job has started. Remove the detach and keep the wait_for_job, as you have found out.
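For reference, a minimal sketch of the foreground pattern; the schema-mode job, file name and directory object here are illustrative:
declare
  h1      number;
  v_state varchar2(30);
begin
  h1 := dbms_datapump.open(operation => 'EXPORT',
                           job_mode  => 'SCHEMA',
                           job_name  => 'DEMO_EXP');           -- illustrative job name
  dbms_datapump.add_file(handle    => h1,
                         filename  => 'demo.dmp',              -- illustrative dump file
                         directory => 'DATA_PUMP_DIR',
                         filetype  => dbms_datapump.ku$_file_type_dump_file,
                         reusefile => 1);
  dbms_datapump.start_job(h1);
  -- no detach here: the session stays attached, so wait_for_job
  -- blocks until the job reaches COMPLETED (or STOPPED)
  dbms_datapump.wait_for_job(h1, v_state);
  dbms_output.put_line('Final state: '||v_state);
end;
/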

Similar Messages

  • How to get the SQL Signon that Agent Jobs "Run As" or "Executed as User"

    How to get the SQL Signon that Agent Jobs "Run As" or "Executed as User"?
    I have an install SQL script that creates a Linked Server. I want to put some security on the Linked Server and only grant the Agent Job signon (the "Run As" or "Executed as User") access to the linked server. I need to retrieve the
    Agent Job signon (something like "NT SERVICE\SQLAgent$FIDEV360BI02").
    I could query certain jobs and SUBSTRING the Message column, using some form of the query below, which would return "Executed as user: NT SERVICE\SQLAgent$SSDEVBI02. The step succeeded." But that is pretty imprecise.
    use msdb
    SELECT [JobName] = JOB.name,
    [Step] = HIST.step_id,
    [StepName] = HIST.step_name,
    [Message] = HIST.message,
    [Status] = CASE WHEN HIST.run_status = 0 THEN 'Failed'
    WHEN HIST.run_status = 1 THEN 'Succeeded'
    WHEN HIST.run_status = 2 THEN 'Retry'
    WHEN HIST.run_status = 3 THEN 'Canceled'
    END,
    [RunDate] = HIST.run_date,
    [RunTime] = HIST.run_time,
    [Duration] = HIST.run_duration,
    [Retries] = HIST.retries_attempted
    FROM sysjobs JOB
    INNER JOIN sysjobhistory HIST ON HIST.job_id = JOB.job_id
    -- CHANGE THIS
    -- WHERE JOB.name like '%GroupMaster%' or Job.name like '%etlv%'
    ORDER BY HIST.run_date, HIST.run_time

    By default, all SQL jobs are executed as the SQL Server Agent account, unless a proxy is set up.
    You can get the proxy information as Olaf mentioned. If the proxy_id is null for the step, the job step was executed as the SQL Server Agent service account.
    So one workaround is to get the SQL Server Agent service account and, where the proxy is null, fall back to it with the ISNULL function. The disadvantage is that if the SQL Server Agent account was switched, you might not get
    accurate information: the new account will show up even though the job really ran as the old account. To recover that history, you need to parse the message column as you mentioned above.
    Try this code:
    /* From SQL 2008 R2 SP1 you can get the service accounts using T-SQL;
       on older versions you have to query the registry keys. */
    declare @sqlserveragentaccount varchar(2000)
    select @sqlserveragentaccount = service_account
    from sys.dm_server_services
    where servicename like '%sql%server%agent%'

    select message, isnull(name, @sqlserveragentaccount) as AccountName
    from sysjobhistory a
    inner join sysjobsteps b on a.step_id = b.step_id and a.job_id = b.job_id
    left outer join sysproxies c on c.proxy_id = b.proxy_id
    Hope it Helps!!

  • Kodo 3.4.1: how to limit # of sql query params when using collection param

    Hi,
    We have a problem when using Kodo 3.4.1 against SQL Server 2005, when we execute query:
              Query query = pm.newQuery( extent, "IdList.contains( this )" );
              query.declareParameters( "Collection IdList" );
              query.declareImports( "import java.util.Collection;" );
              return (List) query.execute( list );
    We got:
    com.microsoft.sqlserver.jdbc.SQLServerException: The incoming tabular data stream (TDS) remote procedure call (RPC) protocol stream is incorrect. Too many parameters were provided in this RPC request. The maximum is 2100.
    at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(Unknown Source)
    at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(Unknown Source)
    at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(Unknown Source)
    at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(Unknown Source)
    at com.microsoft.sqlserver.jdbc.TDSCommand.execute(Unknown Source)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(Unknown Source)
    at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(Unknown Source)
    at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(Unknown Source)
    at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeQuery(Unknown Source)
    at com.solarmetric.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:354)
    at com.solarmetric.jdbc.PoolConnection$PoolPreparedStatement.executeQuery(PoolConnection.java:341)
    at com.solarmetric.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:352)
    at com.solarmetric.jdbc.LoggingConnectionDecorator$LoggingConnection$LoggingPreparedStatement.executeQuery(LoggingConnectionDecorator.java:1106)
    at com.solarmetric.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:352)
    at kodo.jdbc.runtime.JDBCStoreManager$CancelPreparedStatement.executeQuery(JDBCStoreManager.java:1730)
    at com.solarmetric.jdbc.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:339)
    at kodo.jdbc.sql.Select.execute(Select.java:1581)
    at kodo.jdbc.sql.Select.execute(Select.java:1533)
    at kodo.jdbc.runtime.SelectResultObjectProvider.open(SelectResultObjectProvider.java:102)
    at com.solarmetric.rop.EagerResultList.<init>(EagerResultList.java:22)
    at kodo.query.AbstractQuery.execute(AbstractQuery.java:1081)
    at kodo.query.AbstractQuery.executeWithArray(AbstractQuery.java:836)
    at kodo.query.AbstractQuery.execute(AbstractQuery.java:799)
    It seems that there are too many ids in the list, and Kodo does not limit the number of SQL parameters when it crafts the SQL query. Is there a way to ask Kodo to use multiple queries with a smaller number of SQL parameters instead of one big SQL query?
    Thanks

    Hi,
    Sadly, there is no way to do that in Kodo currently. The closest is the inClauseLimit DBDictionary setting, but that setting just breaks things up into multiple IN statements put together with an OR clause. In your case, it looks like the network transport, not the SQL parser, is complaining about size limits.
    You could force the query to run in-memory, but that would probably be prohibitively slow, unless the data set that you're querying against is relatively small.
    -Patrick
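    For reference, the inClauseLimit setting Patrick mentions is a DBDictionary plugin property. In kodo.properties it would look something like the sketch below; the "sqlserver" dictionary alias and the limit value are assumptions to verify against your Kodo version's documentation:
    # cap each generated IN (...) list at 1000 parameters (illustrative value)
    kodo.jdbc.DBDictionary: sqlserver(inClauseLimit=1000)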

  • DBMS_DATAPUMP; how to get the log file of a job?

    Hi
    I want the user to be able to see the logfile of his job from another session.
    this is my procedure
    create or replace procedure get_job_log (p_job_name IN varchar2)
    is
      hdl_job        number;
      l_job_state    VARCHAR2(20);
      l_status       sys.ku$_Status1010;
      l_job_status   sys.ku$_JobStatus1010;
      l_job_logentry sys.ku$_LogEntry1010;
      l_job_logline  sys.ku$_LogLine1010;
    begin
      hdl_job := DBMS_DATAPUMP.ATTACH(
                      job_name  => p_job_name
                     ,job_owner => 'CLONE_USER');
      DBMS_DATAPUMP.GET_STATUS(
         handle    => hdl_job
        ,mask      => dbms_datapump.ku$_status_job_error + dbms_datapump.ku$_status_job_status + dbms_datapump.ku$_status_wip
        --,timeout => 15
        ,job_state => l_job_state
        ,status    => l_status);
      l_job_logentry := l_status.wip;
      for x in l_job_logentry.first .. l_job_logentry.last loop
        dbms_output.put_line(l_job_logentry(x).LogText);
      end loop;
      dbms_datapump.detach(hdl_job);
    end;
    /
    when I run it for the first time, it works... kind of...
    but my problem is that if I try running it again I get:
    ORA-31626: job does not exist
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 902
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 3407
    if I close the sqlplus session, open a new one and run it - it works. So what is the issue here? Is detach not enough? What else should I do?
    my version is 11.1.0.6
    UPDATE:
    It looks like the above is not true. It does not error if it runs to the end of the code... The problem seems to happen only when an exception occurs and .detach is not called. So this is not an issue anymore.
    But I still don't get decent performance from the .get_status function. Sometimes it just hangs and errors on timeout.
    Still the question is: how can I get the LOG of a session using DBMS_DATAPUMP from another session and output it on the screen?

    I believe I found a solution.
    I am now running the above code only once, but in a loop, inserting the output into a table so that the user can query the table and see where the job is.
    Here's the code I'm using (partially copied from http://psoug.org/reference/dbms_datapump.html):
    l_job_state := 'UNDEFINED';
    while (l_job_state != 'COMPLETED') and (l_job_state != 'STOPPED') loop
         DBMS_DATAPUMP.GET_STATUS(
            handle    => hdl_job
           ,mask      => dbms_datapump.ku$_status_job_error + dbms_datapump.ku$_status_job_status + dbms_datapump.ku$_status_wip
         --,timeout   => 15
           ,job_state => l_job_state
           ,status    => l_status);
         -- work-in-progress (log) lines
         if (bitand(l_status.mask, dbms_datapump.ku$_status_wip) != 0) then
           l_job_logentry := l_status.wip;
           if (l_job_logentry is not null) then
                for x in l_job_logentry.first .. l_job_logentry.last loop
                    --dbms_output.put_line (l_job_logentry(x).LogText);
                    writelog (l_job_logentry(x).LogText);
                end loop;
           end if;
         end if;
         -- error lines (l_job_error is declared as sys.ku$_LogEntry1010, like l_job_logentry)
         if (bitand(l_status.mask, dbms_datapump.ku$_status_job_error) != 0) then
            l_job_error := l_status.error;
            if (l_job_error is not null) then
                for x in l_job_error.first .. l_job_error.last loop
                    dbms_output.put_line (l_job_error(x).LogText);
                    writelog ('Error: '|| l_job_error(x).LogText);
                end loop;
            end if;
         end if;
         --dbms_output.put_line ('Current job state: '||l_job_state);
         writelog ('Current job state: '||l_job_state);
    end loop;
    Thanks all for your help.
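    A refinement worth considering with a polling loop like the above (a sketch, assuming EXECUTE on DBMS_LOCK has been granted): pause at the bottom of the loop body so it does not spin on get_status:
    dbms_lock.sleep(10);  -- wait 10 seconds between status polls
    Alternatively, passing a non-null timeout to get_status (as in the commented-out line above) makes the call itself block while waiting for new entries.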

  • How to set pl/sql function as background job in apex?

    Hi,
    I wrote a function which returns a boolean value based on a result. This function updates a table every day. How do I set this up as a background job? Or do I need to use dbms_job?
    Thanks,
    Mahender.

    No, you can use APEX_PLSQL_JOB.
    Example:
    DECLARE
      l_sql VARCHAR2(4000);
      l_job NUMBER;
    BEGIN
      l_sql := 'BEGIN MY_PACKAGE.MY_PROCESS; END;';
      l_job := APEX_PLSQL_JOB.SUBMIT_PROCESS(
                 p_sql    => l_sql,
                 p_status => 'Background process submitted');
      --store l_job for later reference
    END;
    full documentation link [http://download.oracle.com/docs/cd/E14373_01/apirefs.32/e13369/apex_plsql_job.htm#BGBCJIJI]
    If I helped you, please stamp my answer as CORRECT or HELPFUL :)
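    If the update should also run on a fixed daily schedule rather than as a one-off background submission, a DBMS_SCHEDULER job is another option. A minimal sketch; the job name and schedule are illustrative, and MY_PACKAGE.MY_PROCESS is the same hypothetical procedure as above:
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'MY_DAILY_UPDATE',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN MY_PACKAGE.MY_PROCESS; END;',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYHOUR=2',  -- run every day at 02:00
        enabled         => TRUE);
    END;
    /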

  • How can I get UK-only sites when I search on my iMac?

    Searching the web using Firefox often fills the list with completely irrelevant US-based site info. I want to focus much more helpfully by controlling where the info originates. Maybe there's somewhere in Preferences where I can set this, but I can't find it. Just using e.g. 'UK' as a search criterion isn't sensitive enough.

    You can find a link on the Google results page in the sidebar at the left on the www.google.co.uk page:
    The web:
    Pages from the UK

  • The ipod will only charge when lying completely flat

    The iPod touch has to be completely flat when charging. Is this correct?

    No. Take it in for service.
    Apple - Support - iPod - Service FAQ

  • SQL trace for Background jobs

    Hi
    Can anyone suggest how to get an SQL trace for background jobs?
    Thanks in advance.
    Regards
    D.Vadivukkarasi

    Hi
    Check the transaction ST05.
    Plz Reward Points if helpful

  • Run reports only as a job

    Hello,
    could you please kindly advise how to restrict report generation to scheduled jobs only (no foreground / interactive reports possible).
    Kind Regards, Piotr

    Not certain whether this is possible on a report by report basis, but it is possible on a user basis.
    In Disco Admin, bring up the privileges for a user, and on the Scheduled Workbooks tab, select the "User must always schedule workbooks" option.

  • How can I force Time Machine to make a complete backup of my Hard Drive?

    How can I force Time Machine to make a complete backup of my Hard Drive.  I just installed a new external drive for Backup since my previous one failed.  Now when I back up, Time Machine only backs up my data folder and the Users folder.
    When I start a backup. Time Machine says "Oldest Backup: None; Latest Backup: None", so it seems like it should do a complete backup, but it only does a partial. 

    Hi I'd like to jump in here. Your app showed me this:
    Time Machine:
              Skip System Files: NO
              Mobile backups: OFF
              Auto backup: YES
              Volumes being backed up:
                        Macintosh HD: Disk size: 749.3 GB Disk used: 453.81 GB
              Destinations:
                        Plastic Wrapper [Local] (Last used)
                        Total size: 999.86 GB
                        Total number of backups: 64
                        Oldest backup: 2013-07-24 23:25:11 +0000
                        Last backup: 2013-11-17 01:40:47 +0000
                        Size of backup disk: Too small
                                  Backup size 999.86 GB < (Disk used 453.81 GB X 3)
              Time Machine details may not be accurate.
              All volumes being backed up may not be listed.
              /sbin excluded from backup!
              /usr excluded from backup!
              /System excluded from backup!
              /bin excluded from backup!
              /private excluded from backup!
              /Library excluded from backup!
              /Applications excluded from backup!
    Aside from the size of my backup drive, which I will increase at some point, I'd really like to have Time Machine backing up all system folders, especially Applications. How do I reset these hidden exclusions?
    Thanks,
    Darcy

  • How to NOT monitor SQL database on specific servers only

    I have a test server with the SCOM agent installed. I am using it as a Lync watcher node to monitor Lync with SCOM. On this server there is a local SQL Express database. SCOM has management packs for SQL Server and alerts me about this SQL Express database on the test server: buffer cache hit ratio too low, or page life expectancy too low.
    How do I disable these SQL monitors for only this test server?

    I am wondering whether, by using the cmdlet "Remove-SCOMDisabledClassInstance", I might remove other SQL 2012 monitoring on other servers in my environment?
    I also found this blog post
    http://www.systemcentercentral.com/how-to-remove-objects-from-monitoring-remove-scomdisabledclassinstance/
    where there is a slightly different method: they propose to choose "SQL server 2012 installation seed" rather than "SQL Server 2012 DB Engine".
    They also suggest using "Override the Object Discovery" rather than "Disable the Object Discovery". What is the best approach?

  • How to open multiple sql files in only one ssms instance

    How do I open multiple SQL files in only one SSMS instance? I can't get anything I find online to work. I hope you can help.

    I tried opening two files by selecting them and hitting Enter; it opens one SSMS instance with two tabs.

  • How to compare same SQL query performance in different DB servers.

    We have Production and Validation environments of Oracle 11g DB on two Solaris OSs.
    The H/W and DB configurations of the two Oracle DBs are almost the same in PROD and VAL,
    but we detected a large performance difference for the same SQL query in the PROD DB and the VAL DB.
    I would like to find and resolve the cause of this situation.
    How could I do that?
    I plan to compare the SQL execution plans in PROD and VAL and the index fragmentation.
    Before that, I thought I needed to keep the DB statistics in the same condition in PROD and VAL,
    so I plan to execute alter system FLUSH BUFFER_CACHE;
    but I am worried about bad effects of alter system FLUSH BUFFER_CACHE; on end users.
    If we do alter system FLUSH BUFFER_CACHE; and get the execution plan of that SQL query at a time when end users are not using the system, will there be any large bad effect on end users after those operations?
    Could you please let me know your recommendation for comparing SQL query performance?
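    One low-impact way to start the comparison (a sketch: the sql_id is a placeholder you look up in V$SQL on each system) is to pull the actual execution plan and run-time statistics for the statement on both PROD and VAL and compare them, which avoids needing FLUSH BUFFER_CACHE at all:
    select * from table(
      dbms_xplan.display_cursor(
        sql_id          => '&sql_id',        -- placeholder: find the sql_id in V$SQL by SQL text
        cursor_child_no => null,             -- null = all child cursors
        format          => 'ALLSTATS LAST'   -- actual rows/buffers need statistics_level=ALL or the gather_plan_statistics hint
      ));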

    Thank you.
    I got an AWR report for the VAL DB server only, but it looks strange.
    Is there anything wrong in the DB, or in how the AWR report was taken?
    Host Name: xxxx; Platform: Solaris[tm] OE (64-bit); CPUs/Cores/Sockets: (empty); Memory (GB): .00
    Begin Snap: xxxx at 13-Apr-15 04:00:04; End Snap: xxxx at 14-Apr-15 04:00:22 (no Sessions or Cursors/Session values)
    Elapsed: 1,440.30 (mins); DB Time: 0.00 (mins)
    Report Summary
    Cache Sizes (Begin / End): Buffer Cache: M / M; Std Block Size: K; Shared Pool Size: 0M / 0M; Log Buffer: K
    Load Profile (Per Second / Per Transaction / Per Exec / Per Call):
    DB Time(s): 0.0 / 0.0 / 0.00 / 0.00
    DB CPU(s): 0.0 / 0.0 / 0.00 / 0.00
    Logical reads: 0.0 / 1.0
    Block changes: 0.0 / 1.0
    Physical reads: 0.0 / 1.0
    Physical writes: 0.0 / 1.0
    User calls: 0.0 / 1.0
    Parses: 0.0 / 1.0
    W/A MB processed: 16.7 / 1,442,472.0
    Executes: 0.0 / 1.0
    Transactions: 0.0
    (Redo size, Hard parses, Logons and Rollbacks show no values)
    Instance Efficiency Percentages (Target 100%): Library Hit %: 96.69; Execute to Parse %: 0.00 (the other percentages show no values)
    Shared Pool Statistics (Begin / End): % SQL with executions>1: 34.82 / 48.31; % Memory for SQL w/exec>1: 63.66 / 73.05
    Top 5 Timed Foreground Events:
    Event | Waits | Time(s) | Avg wait (ms) | % DB time | Wait Class
    DB CPU | | 0 | | 100.00 |
    (Host CPU and Instance CPU figures show no values)
    Memory Statistics (Begin / End): SGA use (MB): 46,336.0 / 46,336.0; PGA use (MB): 713.6 / 662.6; Host Mem and % Host Mem used for SGA+PGA show no values
    The following sections all report "No data exists for this section of the report": Time Model Statistics, Operating System Statistics (summary and detail), Foreground Wait Events, Wait Event Histograms (all ranges), Service Statistics, Service Wait Class Stats, every "SQL ordered by ..." section, Complete List of SQL Text, Instance Activity Stats and Instance Activity Stats - Absolute Values.
    Foreground Wait Class (Captured Time accounts for .00 (s) of Total DB time; Total FG Wait Time: (s); DB CPU time: .00 (s)):
    Wait Class | Waits | %Time-outs | Total Wait Time (s) | Avg wait (ms) | %DB time
    DB CPU | | | 0 | | 100.00
    Background Wait Events (ordered by wait time desc, waits desc, idle events last; only events with Total Wait Time (s) >= .001 are shown; a %Timeouts value of 0 indicates < .5%):
    Event | Waits | %Time-outs | Total Wait Time (s) | Avg wait (ms) | Waits/txn
    log file parallel write | 527,034 | 0 | 2,209 | 4 | 527,034.00
    db file parallel write | 381,966 | 0 | 249 | 1 | 381,966.00
    os thread startup | 2,650 | 0 | 151 | 57 | 2,650.00
    latch: messages | 125,526 | 0 | 89 | 1 | 125,526.00
    control file sequential read | 148,662 | 0 | 54 | 0 | 148,662.00
    control file parallel write | 41,935 | 0 | 28 | 1 | 41,935.00
    Log archive I/O | 5,070 | 0 | 14 | 3 | 5,070.00
    Disk file operations I/O | 8,091 | 0 | 10 | 1 | 8,091.00
    log file sequential read | 3,024 | 0 | 6 | 2 | 3,024.00
    db file sequential read | 1,299 | 0 | 2 | 2 | 1,299.00
    latch: shared pool | 722 | 0 | 1 | 1 | 722.00
    enq: CF - contention | 4 | 0 | 1 | 208 | 4.00
    reliable message | 1,316 | 0 | 1 | 1 | 1,316.00
    log file sync | 71 | 0 | 1 | 9 | 71.00
    enq: CR - block range reuse ckpt | 36 | 0 | 0 | 13 | 36.00
    enq: JS - queue lock | 459 | 0 | 0 | 1 | 459.00
    log file single write | 414 | 0 | 0 | 1 | 414.00
    enq: PR - contention | 5 | 0 | 0 | 57 | 5.00
    asynch descriptor resize | 67,076 | 100 | 0 | 0 | 67,076.00
    LGWR wait for redo copy | 5,184 | 0 | 0 | 0 | 5,184.00
    rdbms ipc reply | 1,234 | 0 | 0 | 0 | 1,234.00
    ADR block file read | 384 | 0 | 0 | 0 | 384.00
    SQL*Net message to client | 189,490 | 0 | 0 | 0 | 189,490.00
    latch free | 559 | 0 | 0 | 0 | 559.00
    db file scattered read | 17 | 0 | 0 | 6 | 17.00
    resmgr:internal state change | 1 | 100 | 0 | 100 | 1.00
    direct path read | 301 | 0 | 0 | 0 | 301.00
    enq: RO - fast object reuse | 35 | 0 | 0 | 2 | 35.00
    direct path write | 122 | 0 | 0 | 1 | 122.00
    latch: cache buffers chains | 260 | 0 | 0 | 0 | 260.00
    db file parallel read | 1 | 0 | 0 | 41 | 1.00
    ADR file lock | 144 | 0 | 0 | 0 | 144.00
    latch: redo writing | 55 | 0 | 0 | 1 | 55.00
    ADR block file write | 120 | 0 | 0 | 0 | 120.00
    wait list latch free | 2 | 0 | 0 | 10 | 2.00
    latch: cache buffers lru chain | 44 | 0 | 0 | 0 | 44.00
    buffer busy waits | 3 | 0 | 0 | 2 | 3.00
    latch: call allocation | 57 | 0 | 0 | 0 | 57.00
    SQL*Net more data to client | 55 | 0 | 0 | 0 | 55.00
    ARCH wait for archivelog lock | 78 | 0 | 0 | 0 | 78.00
    rdbms ipc message | 3,157,653 | 40 | 4,058,370 | 1285 | 3,157,653.00
    Streams AQ: qmn slave idle wait | 11,826 | 0 | 172,828 | 14614 | 11,826.00
    DIAG idle wait | 170,978 | 100 | 172,681 | 1010 | 170,978.00
    dispatcher timer | 1,440 | 100 | 86,417 | 60012 | 1,440.00
    Streams AQ: qmn coordinator idle wait | 6,479 | 48 | 86,413 | 13337 | 6,479.00
    shared server idle wait | 2,879 | 100 | 86,401 | 30011 | 2,879.00
    Space Manager: slave idle wait | 17,258 | 100 | 86,324 | 5002 | 17,258.00
    pmon timer | 46,489 | 62 | 86,252 | 1855 | 46,489.00
    smon timer | 361 | 66 | 86,145 | 238628 | 361.00
    VKRM Idle | 1 | 0 | 14,401 | 14400820 | 1.00
    SQL*Net message from client | 253,909 | 0 | 419 | 2 | 253,909.00
    class slave wait | 379 | 0 | 0 | 0 | 379.00
    Instance Activity Stats - Thread Activity (statistics identified by '(derived)' come from sources other than SYSSTAT):
    log switches (derived) | Total: 69 | per Hour: 2.87
    IOStat by Function summary ('Data' columns suffixed with M,G,T,P are in multiples of 1024; other suffixed columns are in multiples of 1000; ordered by Data Read + Write desc):
    Function Name | Reads: Data | Reqs/s | Data/s | Writes: Data | Reqs/s | Data/s | Waits: Count | Avg Tm(ms)
    Others | 28.8G | 20.55 | .340727 | 16.7G | 2.65 | .198442 | 1803K | 0.01
    Direct Reads | 43.6G | 57.09 | .517021 | 411M | 0.59 | .004755 | 0 |
    LGWR | 19M | 0.02 | .000219 | 41.9G | 21.87 | .496493 | 2760 | 0.08
    Direct Writes | 16M | 0.00 | .000185 | 8.9G | 1.77 | .105927 | 0 |
    DBWR | 0M | 0.00 | 0M | 6.7G | 4.42 | .079670 | 0 |
    Buffer Cache Reads | 3.1G | 3.67 | .037318 | 0M | 0.00 | 0M | 260.1K | 3.96
    TOTAL | 75.6G | 81.33 | .895473 | 74.7G | 31.31 | .885290 | 2065.8K | 0.51
    IOStat by Filetype summary (Small Read and Large Read are average service times in milliseconds; ordered by Data Read + Write desc):
    Filetype Name | Reads: Data | Reqs/s | Data/s | Writes: Data | Reqs/s | Data/s | Small Read | Large Read
    Data File | 53.2G | 78.33 | .630701 | 8.9G | 7.04 | .105197 | 0.37 | 21.51
    Log File | 13.9G | 0.18 | .164213 | 41.9G | 21.85 | .496123 | 0.02 | 2.93
    Archive Log | 0M | 0.00 | 0M | 13.9G | 0.16 | .164213 | |
    Temp File | 5.6G | 0.67 | .066213 | 8.1G | 0.80 | .096496 | 5.33 | 3713.27
    Control File | 2.9G | 2.16 | .034333 | 2G | 1.46 | .023247 | 0.05 | 19.98

  • How to apply the constraint ONLY to new rows

    Hi, Gurus:
       I have one question as follows:
    We need to migrate a legacy system to a new production server. I am required to add two columns to every table in order to record who updated the row most recently (through triggers), and I should apply a NOT NULL constraint to the columns. However, the legacy system already has data in every table, and the old data has no values for the two new columns. If we apply the constraint, all existing rows will raise an exception. I wonder if there is a way to apply the constraint ONLY to rows that arrive in the future.
    Thanks.
    Sam
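    For the narrow mechanical question, Oracle can enforce a constraint for new and changed rows while skipping existing ones by creating it ENABLE NOVALIDATE. A minimal sketch; the table, column and constraint names are illustrative:
    alter table legacy_table
      add constraint legacy_table_modified_by_nn
      check (modified_by is not null)
      enable novalidate;  -- existing rows are not checked; new inserts and updates must satisfy the constraint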

    We need to migrate a legacy system to a new production server. I am required to add two columns to every table in order to record who updated the row most recently (through triggers), and I should apply a NOT NULL constraint to the columns.
    The best suggestion I can give you is that you make sure management documents the name of the person who came up with that harebrained requirement, so they can be sufficiently punished in the future for the tremendous waste of human and database resources they caused, for which they got virtually NOTHING in return.
    I have seen many systems over the past 25+years that have added columns such as those: CREATED_DATE, CREATED_BY, MODIFIED_DATE, MODIFIED_BY.
    I have yet to see even ONE system where that information is actually useful for any real purpose. Many systems have application/schema users and those users can modify the data. Also, any DBA can modify the data and many of them can connect as the schema owner to do that.
    Many tables also get updated by other applications or bulk load processes and those processes use generic connections that can NOT be tied back to any particular system.
    The net result is that those columns will be populated by user names that are utterly useless for any auditing purposes.
    If a user is allowed to modify a table they are allowed to modify a table. If you want to track that you should implement a proper security strategy using Oracle's AUDIT functionality.
    Cluttering up ALL, or even many, of your tables with such columns is a TERRIBLE idea. Worse, adding triggers that serve no purpose other than capturing useless information, and that cause performance impacts because they are PL/SQL, just aggravates the total impact.
    It is certainly appropriate to be concerned about the security and auditability of your important data. But adding columns and triggers such as those proposed is NOT the proper solution to achieve that security.
    Before your organization makes such an idiotic decision you should propose that the same steps be taken before adding that functionality that you should take before the addition of ANY MAJOR structural or application changes:
    1. document the actual requirement
    2. document and justify the business reasons for that requirement
    3. perform testing that shows the impact of that requirement on the production system
    4. determine the resource cost (people, storage, etc) of implementing that requirement
    5. demonstrate how that information will actually be used EFFECTIVELY for some business purpose
    As regards items #1 and #2 above the requirement should be stated in terms of the PROBLEM to be solved, not some preconceived notion of the solution that should be used.
    Your org should also talk to other orgs or other depts in your same org that have used your proposed solution and find out how useful it has been for them. If you do this research you will likely find that it hasn't met their needs at all.
    And in your own org there are likely some applications with tables that already have such columns. Has anyone there EVER used those columns and found them invaluable for identifying and resolving any actual problem?
    If you can't use them and their data for some important process why add them to begin with?
    IMHO it is a total waste of time and resources to add such columns to ALL of your tables. Any such approach to auditing or security should, at most, be limited to those tables with key data that needs to be protected and only then when you cannot implement the proper 'best practices' auditing.
    A migration is difficult enough without adding useless additional requirements like those. You have FAR more important things you can do with the resources you have available:
    1. Capture ALL DDL for the existing system into a version control system
    2. Train your developers on using the version control system
    3. Determining the proper configuration of the new server and system. It is almost a CERTAINTY that settings will get changed and performance will suffer even though you don't think you have changed anything at all.
    4. Validating that the data has been migrated successfully. That can involve extensive querying and comparison to make sure data has not been altered during the migration. The process of validating a SINGLE TABLE is more difficult if the table structures are not the same. And they won't be if you add two columns to every table; every single query you do will have to specify the columns by name in order to EXCLUDE your two new columns.
    5. Validating the performance of the app on the new system. There WILL BE problems where things don't work like they used to. You need to find those problems and fix them
    6. Capturing the proper statistics after the data has been migrated and all of the indexes have been rebuilt.
    7. Capturing the new execution plans to use as a baseline for when things go wrong in the future.
    If it is worth doing it is worth doing right.

  • How to view the sql query?

    hi,
    How do I view the SQL query formed from the XML structure in the receiver JDBC adapter?

    You can view the SAP Note at
    http://service.sap.com/notes
    but you require an SMP login ID for this, which you should get from your company. The content of the note is as follows:
    Reason and Prerequisites
    You are looking for additional parameter settings. There are two possible reasons why a feature is available via the "additional parameters" table in the "advanced mode" section of the configuration, but not as documented parameter in the configuration UI itself:
    Category 1: The parameter has been introduced for a patch or a SP upgrade where no UI upgrade and/or documentation upgrade was possible. In this case, the parameter will be moved to the UI and the documentation as soon as possible. The parameter in the "additional parameters" table will be deprecated after this move, but still be working. The parameter belongs to the supported adapter functionality and can be used in all, also productive, scenarios.
    Category 2. The parameter has been introduced for testing purposes, proof-of-concept scenarios, as workaround or as pre-released functionality. In this case, the parameter may or may not be moved to the UI and documentation, and the functionality may be changed, replaced or removed. For this parameter category there is no guaranteed support and usage in productive scenarios is not supported.
    When you want to use a parameter documented here, please be aware to which category it belongs!
    Solution
    The following list shows all available parameters of category 1 or 2. Please note:
    Parameter names are always case-sensitive! Parameter values may be case-sensitive, this is documented for each parameter.
    Parameter names and values as documented below must always be used without quotation marks ("), if not explicitly stated otherwise.
    The default value of a parameter is always chosen so that it does not change the standard functionality.
    JDBC Receiver Adapter Parameters
    1. Parameter name: "logSQLStatement"
                  Parameter type: boolean
                  Parameter value: true for any string value, false only for empty string
                  Parameter value default: false (empty String)
                  Available with: SP9
                  Category: 2
                  Description:
                  When implementing a scenario with the JDBC receiver adapter, it may be helpful to see which SQL statement is generated by the JDBC adapter from the XI message content for error analysis. Before SP9, this can only be found in the trace of the JDBC adapter if trace level DEBUG is activated. With SP9, the generated SQL statement will be shown in the details page (audit protocol) of the message monitor for each message directly.
                  This should be used only during the test phase and not in productive scenarios.
    Regards,
    Prateek
