Subscriber PL/SQL procedure runs as SYS instead of user

I am trying to figure out why the PL/SQL procedures run via the AQ subscription are executing as user SYS instead of the user that created the subscriber and registered the procedure. Below is the script I used to add the subscribers and register the procedures. All of this happens under user SCOTT: the queues were created under SCOTT, and the enqueuing of the message happens via a procedure owned and invoked by SCOTT. SCOTT also has:
GRANT EXECUTE ON dbms_aq TO SCOTT;
GRANT EXECUTE ON dbms_aqadm TO SCOTT;
--ADD SUBSCRIBERS AND REGISTER THE QUEUES
begin
  dbms_aqadm.add_subscriber
    (queue_name => 'in_drv_queue',
     subscriber => sys.aq$_agent('XML_drv', null, null),
     rule       => 'tab.user_data.queue_type = ''XML''');
  dbms_aqadm.add_subscriber
    (queue_name => 'in_drv_queue',
     subscriber => sys.aq$_agent('nonXML_drv', null, null),
     rule       => 'tab.user_data.queue_type = ''NON XML''');
end;
/
begin
  dbms_aq.register
    (sys.aq$_reg_info_list
       (sys.aq$_reg_info('in_drv_queue:XML_drv',
                         DBMS_AQ.NAMESPACE_AQ,
                         'plsql://scott.dequeue_xml',
                         HEXTORAW('FF'))),
     1);
  dbms_aq.register
    (sys.aq$_reg_info_list
       (sys.aq$_reg_info('in_drv_queue:nonXML_drv',
                         DBMS_AQ.NAMESPACE_AQ,
                         'plsql://scott.dequeue_nonxml',
                         HEXTORAW('FF'))),
     1);
  commit;
end;
/
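For reference, a procedure registered with a plsql:// URL is invoked with the standard AQ notification callback signature. A minimal sketch of what scott.dequeue_xml could look like (the payload type and the choice to dequeue by the notification's message ID are illustrative assumptions, not from the original post):
create or replace procedure scott.dequeue_xml (
  context  raw,
  reginfo  sys.aq$_reg_info,
  descr    sys.aq$_descriptor,
  payload  raw,
  payloadl number)
as
  l_dequeue_options    dbms_aq.dequeue_options_t;
  l_message_properties dbms_aq.message_properties_t;
  l_msgid              raw(16);
  l_message            scott.in_drv_msg_type;  -- hypothetical payload type
begin
  -- dequeue exactly the message that triggered the notification
  l_dequeue_options.msgid         := descr.msg_id;
  l_dequeue_options.consumer_name := descr.consumer_name;
  dbms_aq.dequeue(
    queue_name         => descr.queue_name,
    dequeue_options    => l_dequeue_options,
    message_properties => l_message_properties,
    payload            => l_message,
    msgid              => l_msgid);
  commit;
end dequeue_xml;
/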
Can anyone provide any suggestions as to why the two dequeue_ procedures run as SYS instead of SCOTT? I'm not sure what I am missing from the various examples I have read.
Thanks
Jason

I'm not familiar with the Audit Trail you refer to, but the following is one possible way to check what the session is "running as" (here using CLIENT_INFO):
SQL> select sys_context('userenv','client_info') from dual;
SYS_CONTEXT('USERENV','CLIENT_
SQL>
SQL> begin
  2     dbms_application_info.set_client_info('Testing');
  3  end;
  4  /
PL/SQL procedure successfully completed
SQL> select sys_context('userenv','client_info') from dual;
SYS_CONTEXT('USERENV','CLIENT_
Testing
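If the goal is to see which database user the notification session actually executes as, the USERENV context also exposes that directly (a complementary check, not from the original reply):
select sys_context('userenv','session_user') as session_user,
       sys_context('userenv','current_user') as current_user
from dual;
Run from inside one of the dequeue_ callbacks (for example written into a log table), this shows the user the registered procedure is executed as.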

Similar Messages

  • Insert to other DB from PL/SQL Procedure

    Hi,
    I have a PL/SQL procedure running in my 10g DB. When this procedure is executed, I want some data to be inserted into tables in another DB (such as an MS SQL Server table).
    Please advise.
    Regards,
    Murugesan.

    Well, you should create a DB link and then use this link to work with the other database:
    CREATE DATABASE LINK <dblinkname> CONNECT TO <user> IDENTIFIED BY <password> USING '<tns_alias>';
    Now you can use it as
    select * from emp@<dblinkname>
    if emp is in the other database.
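    As a rough illustration (all names are placeholders; note that for a non-Oracle target such as SQL Server you would need a configured database gateway / heterogeneous services link rather than a plain Oracle-to-Oracle link):
    -- create the link once, then reference remote tables with @link
    create database link remote_link
      connect to remote_user identified by remote_password
      using 'remote_tns_alias';
    insert into target_table@remote_link (id, name)
    select id, name from local_table;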

  • PL/SQL procedure is 10x slower when running from weblogic

    Hi everyone,
    we've developed a PL/SQL procedure performing reporting - the original solution was written in Java but due to performance problems we've decided to switch this particular piece to PL/SQL. Everything works fine as long as we execute the procedure from SQL Developer - the batch processing 20000 items finishes in about 80 seconds, which is a serious improvement compared to the previous solution.
    But once we call the very same procedure (on exactly the same data) from weblogic, the performance seriously drops - instead of 80 seconds it suddenly runs for about 23 minutes, which is 10x slower. And we don't know why this happens :-(
    We've profiled the procedure (in both environments) using DBMS_PROFILER, and we've found that if the procedure is executed from Weblogic, one of the SQL statements runs noticeably slower and consumes about 800 seconds (90% of the total run time) instead of 0.9 second (2% of the total run time), but we're not sure why - in both cases this query is executed 32742 times, giving 24 ms vs. 0.03 ms on average.
    The SQL is
    SELECT personId INTO v_personId FROM (
            SELECT personId FROM PersonRelations
            WHERE extPersonId LIKE v_person_prefix || '%'
    ) WHERE rownum = 1;
    Basically it returns an ID of the person according to some external ID (or the prefix of the ID). I do understand why this query might be a performance problem (LIKE operator etc.), but I don't understand why it runs quite fast when executed from SQL Developer and 10x slower when executed from Weblogic (exactly the same data, etc.).
    We're using Oracle 10gR2 with Weblogic 10, running on a separate machine - there are no other intensive tasks, so there's nothing that could interfere with the Oracle process. According to the 'top' command, the wait time is below 0.5%, so there should be no serious I/O problems. We've even checked the JDBC connection pool settings in Weblogic, but I doubt this issue is related to JDBC (and everything looks fine anyway). The statistics are fresh and the results are quite consistent.
    Edited by: user6510516 on 17.7.2009 13:46

    The setup is quite simple - the database is running on a dedicated database server (development only). Generally there are no 'intensive' tasks running on this machine, especially not when the procedure I'm talking about was executed. The application server (weblogic 10) is running on different machine so it does not interfere with the database (in this case it was my own workstation).
    No, the procedure is not called 20000x - we have a table with batch of records we need to process, with a given flag (say processed=0). The procedure reads them using a cursor and processes the records one-by-one. By 'processing' I mean computing some sums, updating other table, etc. and finally switching the record to processed=1. I.e. the procedure looks like this:
    CREATE PROCEDURE process_records IS
        v_record records_to_process%ROWTYPE;
    BEGIN
         OPEN records_to_process;
         LOOP
              FETCH records_to_process INTO v_record;
              EXIT WHEN records_to_process%NOTFOUND;
              -- process the record (update table A, insert a record into B, delete from C, query table D ....)
              -- and finally mark the row as 'processed=1'
         END LOOP;
         CLOSE records_to_process;
    END process_records;
    The procedure is actually part of a package and the cursor 'records_to_process' is defined in the body. One of the queries executed in the procedure is the SELECT mentioned above (the one that jumps from 2% to 90%).
    So the only thing we actually do in Weblogic is
    CallableStatement cstmt = connection.prepareCall("{call ProcessPkg.process_records}");
    cstmt.execute();
    and that's it - there is only one call to the JDBC, so the network overhead shouldn't be a problem.
    There are 20000 rows we use for testing - we just update them to 'processed=0' (and clear some of the other tables). So actually each run uses exactly the same data, same code paths and produces the very same results. Yet when executed from SQL developer it takes 80 seconds and when executed from Weblogic it takes 800 seconds :-(
    The only difference I've just noticed is that when using SQL Developer, we're using PL/SQL notation, i.e. "BEGIN ProcessPkg.process_records; END;" instead of "{call }" but I guess that's irrelevant. And yet another difference - weblogic uses JDBC from 10gR2, while the SQL Developer is bundled with JDBC from 11g.
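    One way to narrow this down (a diagnostic sketch, not from the original thread) is to SQL-trace the WebLogic run and compare the TKPROF output and execution plan with a traced SQL Developer run; DBMS_MONITOR can switch tracing on for the service the connection pool uses:
    -- enable SQL trace with waits and binds for the service used by the WebLogic data source
    begin
      dbms_monitor.serv_mod_act_trace_enable(
        service_name => 'your_service_name',   -- placeholder
        waits        => true,
        binds        => true);
    end;
    /
    -- ... run ProcessPkg.process_records from WebLogic, then:
    begin
      dbms_monitor.serv_mod_act_trace_disable(service_name => 'your_service_name');
    end;
    /
    Differences in bind datatypes (for example NVARCHAR binds from JDBC against a VARCHAR2 column) or in the execution plan usually show up directly in the trace.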

  • Run report from PL/sql procedure

    Can anyone please tell me how to run a report from a PL/SQL procedure?

    I am not sure, but depending on your environment you can create a script that runs your PL/SQL code and then generates the report, as is customary in a UNIX environment using shell scripts.

  • Unix shell script run from pl/sql procedure

    Hi Guru
    I want to run a Unix shell script from a PL/SQL procedure. Actually, I want to run it from a Developer 10g form.
    Please guide me in this regards
    Regards
    Jewel

    Look at the HOST or CLIENT_HOST built-ins in the Forms help.
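    HOST/CLIENT_HOST run the command on the Forms tier. If the database server itself needs to run the script from PL/SQL, one option (a sketch; the job name, script path and the required privileges/OS credentials are assumptions) is an external DBMS_SCHEDULER job:
    begin
      dbms_scheduler.create_job(
        job_name   => 'RUN_MY_SCRIPT',                     -- hypothetical name
        job_type   => 'EXECUTABLE',
        job_action => '/home/oracle/scripts/myscript.sh',  -- hypothetical path
        enabled    => true,
        auto_drop  => true);
    end;
    /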

  • SQL Procedure working when run manually, not running from sql server agent

    I have a procedure that runs fine using the execute command in SSMS, however putting the same command in a job gives the following error.
    line 9, character 9, unexpected end of input
    The code takes a very long XML string in UTF-8 encoding and puts it into a single nvarchar(max) cell. Then it puts this string into an XML cell in a different table, allowing me to query the individual parts of the XML code using the nodes function. I cannot put
    the data directly into an nvarchar cell due to encoding differences.
    I can't reproduce the string here as it is very very long.
    I'm just looking for ideas really as to where it might be going wrong.
    Here is what I know so far:
    The procedure runs without issue when executed manually
    I have checked permission issues, and that doesn't seem to be the problem. The agent runs under my own account and I am a sysadmin on the database
    I split the procedure into separate parts to locate exactly where the problem is occurring. Once again the separate procedures run fine when executed manually, but an error occurs when run through SQL Server Agent.
    When the query is run separately through SQL Server Agent it gives a slightly different error. This leads me to believe it is an encoding issue. However, I am getting the XML from a webpage and I can't change the encoding on the webpage.
    line 1, character 38, unable to switch the encoding
    I know this is a long shot since you can't replicate the issue but if anyone could give an idea as to where to start looking for an answer, it would be greatly appreciated.

    Here's how I'm taking the XML data and putting it into an nvarchar(max) column (Column Name TEXT):
    Select @url = 'http://....'
    EXEC @hr=sp_OACreate 'WinHttp.WinHttpRequest.5.1',@win OUT
    IF @hr <> 0 EXEC sp_OAGetErrorInfo @win
    EXEC @hr=sp_OAMethod @win, 'Open',NULL,'GET',@url,'false'
    IF @hr <> 0 EXEC sp_OAGetErrorInfo @win
    EXEC @hr=sp_OAMethod @win,'Send'
    IF @hr <> 0 EXEC sp_OAGetErrorInfo @win
    INSERT #TextData(TEXT)
    EXEC @hr=sp_OAGetProperty @win,'ResponseText'
    IF @hr <> 0 EXEC sp_OAGetErrorInfo @win
    EXEC @hr=sp_OADestroy @win
    IF @hr <> 0 EXEC sp_OAGetErrorInfo @win

  • How to Set arguments to PL-SQL Procedure schedule to run from OEM.

    Hi All,
    OEM Config Details is as follows
    Oracle (R) Enterprise Manager Ver 9.2.0.1.0
    I created a job which is scheduled to run a PL-SQL Procedure. The procedure should accept two arguments as follows.
    BEGIN
    FLSTG.PROC_DELETE_DATA (ARG1,ARG2);
    END;
    Arg1 is of type Varchar2 & Arg2 is of type Date
    How should I set the arguments for the procedure?
    Can someone please help me?
    Thanks in Advance.
    Regards,
    Vidyanand
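    Not from the original reply, but as a plain illustration of passing the two arguments from the job's PL/SQL block (values are placeholders; the date is converted explicitly):
    BEGIN
      FLSTG.PROC_DELETE_DATA('SOME_TEXT_VALUE', TO_DATE('2009-01-31', 'YYYY-MM-DD'));
    END;
    /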

    The only problem is when writing an ODI procedure using the Oracle Technology in ODI Designer, ODI asks for the Schema.
    If I do not assign a value to the schema field when I run my ODI procedure I get the error message :
    java.lang.Exception: Internal error: object ConnectConnection
    ColConnectId:null
    ColContextCode:CTX_SRC
    ColConName:null
    ColIndCommit:null
    ColIsolLevel:null
    ColLschemaName:null
    ColPlanComp:null
    ColTechIntName:null
    DefConnectId:null
    DefContextCode:CTX_SRC
    DefConName:null
    DefIndCommit:null
    DefIsolLevel:null
    DefLschemaName:null
    DefPlanComp:null
    DefTechIntName:ORACLE
    ExeChannel:J
    IndErr:0
    IndLogMethod:null
    IndLogNb:null
    LogLevDet:3
    Nno:1
    OrdTrt:0
    ScenTaskNo:1
    SessNo:4152001
    TaskName1:Traitement
    TaskName2:CNT_SRC_ALL
    TaskName3:DROP COUNT_ROWS

  • Unable to Integrate to BC4J Components & Running PL/SQL Procedure

    I recently shifted to BC4J technology.
    I am having a problem calling one application module from another application module using the same transaction context.
    Another problem is running a PL/SQL procedure from the doDML method of an entity object; I also want to pass the user's input to these procedures.

    Anant,
    If you want to call one application module from another, you
    should nest them in a root application module. The documentation
    describes how to nest application modules; basically you select
    the Application Modules tab in the application module editor.
    Nested application modules all share the transaction context of
    the root app module, so when you call one app module from another
    you're automatically using the same transaction context.
    The Oracle9i JDeveloper Release Candidate (which you can download
    from OTN) contains a sample that calls a PL/SQL stored procedure
    from BC4J. It's in <jdev_home>\BC4J\samples\StoredProc.
    Blaise

  • Basic Report Run from a PL/SQL Procedure

    I am hosed trying to start a BI Publisher report from a PL/SQL procedure. Basically, I have a procedure with a local variable defined as an XMLType. I need to call an RTF source, pass the xmlfile in and tell the RTF Engine where to send the output.
    The users guide implies that it involves calls to a Java program and I am totally ignorant of Java.
    I have successfully linked the RTF source into the XML_Publisher application and previewed it against a known data set. That works.
    So, if I have a PL/SQL package with the local variable L_XML defined as XMLTYPE
    and an output file L_OUTBUT defined as VARCHAR2(100), what do I use to call the RTF Processor and generate the report to the output file?
    That is
    declare
    l_xml xmltype;
    l_output varchar2(100);
    begin
    -- stuff to build the xml variable
    -- insert the call to the RTF Processor here; apparently it should be
    -- something like
    package.procedure('linked_rtf file name', l_xml, l_output);
    end;
    If this calls a Java program, I would also appreciate a sample copy of the script.
    My need is desperate so any assistance will be greatly appreciated.
    Many thanks
    Frank

    I don't think you can. The parameters passed to a VB dll are formatted differently than a C dll, especially strings. I know a C program has to use special data structures when working with a VB dll, and this is basically what Oracle is doing.
    It should be possible, however, to create a C dll that acts as a wrapper for the VB dll. I'm not sure this would save you anything, though. If you have to do that you may as well write the function in C.

  • Execute CDC mappings from a PL/SQL procedure

    Hi,
    I'm using OWB 11.2.0.2 for Linux. I've created some CDC mappings to update cubes with changes coming from other tables and cubes (from the tables that implement those cubes with the relational option). The issues are:
    - The CDC mappings run successfully from the OWB (Project Navigator - Start), but I cannot execute them from a procedure in PL/SQL with the following code:
    PROCEDURE "PROC_RUNCDCMAPPINGS" IS
    -- initialize variables here
    RetVal NUMBER;
    P_ENV WB_RT_MAPAUDIT.WB_RT_NAME_VALUES;
    -- main window
    BEGIN
    RetVal:= BARIK.CDC_LOAD_CUBO_RECARGA.MAIN(P_ENV);
    RetVal:= BARIK.CDC_LOAD_CUBO_TOR.MAIN(P_ENV);
    RetVal:= BARIK.CDC_LOAD_CUBO_TOAE.MAIN(P_ENV);
    RetVal:= BARIK.CDC_LOAD_CUBO_VIAJES.MAIN(P_ENV);
    RetVal:= BARIK.CDC_LOAD_CUBO_TICKETINCIDENCIA.MAIN(P_ENV);
    RetVal:= BARIK.CDC_LOAD_CUBO_LIQMONEDERO.MAIN(P_ENV);
    RetVal:= BARIK.CDC_LOAD_CUBOS_LIQTEMPORALES.MAIN(P_ENV);
    COMMIT;
    END;
    It doesn't report any error (the value of RetVal after execution is 0), but the cubes are not loaded with the changes, and the changes stored in the J$_% tables are not consumed.
    Some of the options that may have an impact on the mappings are:
    - All the CDC are of Simple type
    - There is more than one subscriber to consume the changes, as for some tables the changes must feed more than one CDC.
    - All the mappings include only one execution unit per mapping.
    - The integration/load template is the default: DEFAULT_ORACLE_TARGET_CT
    Another question: as I explained, I need more than one subscriber because the same updates must be consumed by different CDC mappings to load different cubes. However, I have not been able to assign each subscriber to only the tables associated with it, so all the subscribers are subscribed to all the changes in all the CDC tables. Since many of those subscribers never consume the changes of some tables, the unconsumed records remain in the J$_% tables, and I haven't found a way to purge those tables (other than a DELETE from the J$_% tables), nor to associate the tables with the subscribers (so that each subscriber is only subscribed to the changes it is interested in, which would then be consumed and the tables emptied after consumption).
    Any help with these problems will be greatly appreciated.
    Tell me if more info is needed to clarify the situation.
    Best regards,
    Ana

    Hi David,
    Thank you for your reply.
    These are the mappings needed to update the cubes with the changes detected by the CDC system; they are located under the Mapping Templates folder and I'm using code templates for the control of the loading and the integration (the DEFAULT_ORACLE_TARGET_CT mapping).
    What I need is to execute these mappings within a PL/SQL procedure that will be invoked from different tools.
    I've done it for regular mappings (not CDC mappings), and it works. The code is the same as for the CDC ones:
    PROCEDURE "PROC_RUNLOADMAPPINGS" IS
    -- initialize variables here
    RetVal NUMBER;
    P_ENV WB_RT_MAPAUDIT.WB_RT_NAME_VALUES;
    -- main window
    BEGIN
    RetVal:= BARIK.LOAD_CUBO_RECARGA.MAIN(P_ENV);
    RetVal:= BARIK.LOAD_CUBO_TOR.MAIN(P_ENV);
    RetVal:= BARIK.LOAD_CUBO_TOAE.MAIN(P_ENV);
    RetVal:= BARIK.LOAD_CUBO_VIAJES.MAIN(P_ENV);
    RetVal:= BARIK.LOAD_CUBO_TICKETINCIDENCIA.MAIN(P_ENV);
    COMMIT;
    END;
    -- End of PROC_RUNLOADMAPPINGS;
    and when I run it, the mappings are executed, but with the CDC ones it doesn't work (even though no error is reported).
    I know that they are deployed in the selected agent (in my case the Default_Agent), but when I start them from the OWB, the mapping packages are created in the DB schema, so I thought that maybe I could invoke them... So what you are telling me is that the only way to invoke them is from SQL*Plus, not from a regular PL/SQL procedure?
    Thank you very much,
    Ana

  • PL/SQL Procedure Calling Java Host Command Problem

    This is my first post to this forum so I hope I have chosen the correct one for my problem. I have copied a Java procedure to call Unix OS commands from within a PL/SQL procedure. This Java works well for some OS commands (e.g. ls -la); however, it fails when I call others (e.g. env). Can anyone please give me some help or pointers?
    The Java is owned by SYS and looks like this:
    CREATE OR REPLACE AND COMPILE JAVA SOURCE NAMED "ExecCmd" AS
    //ExecCmd.java
    import java.io.*;
    import java.util.*;
    //import java.util.ArrayList;
    public class ExecCmd {
      static public String[] runCommand(String cmd) throws IOException {
        // set up list to capture command output lines
        ArrayList list = new ArrayList();
        // start command running
        System.out.println("OS Command is: " + cmd);
        Process proc = Runtime.getRuntime().exec(cmd);
        // get command's output stream and
        // put a buffered reader input stream on it
        InputStream istr = proc.getInputStream();
        BufferedReader br = new BufferedReader(new InputStreamReader(istr));
        // read output lines from command
        String str;
        while ((str = br.readLine()) != null)
          list.add(str);
        // wait for command to terminate
        try {
          proc.waitFor();
        } catch (InterruptedException e) {
          System.err.println("process was interrupted");
        }
        // check its exit value
        if (proc.exitValue() != 0)
          System.err.println("exit value was non-zero: " + proc.exitValue());
        // close stream
        br.close();
        // return list of strings to caller
        return (String[]) list.toArray(new String[0]);
      }

      public static void main(String args[]) throws IOException {
        try {
          // run a command
          String outlist[] = runCommand(args[0]);
          for (int i = 0; i < outlist.length; i++)
            System.out.println(outlist[i]);
        } catch (IOException e) {
          System.err.println(e);
        }
      }
    }
    The PL/SQL looks like so:
    CREATE or REPLACE PROCEDURE RunExecCmd(Command IN STRING) AS
    LANGUAGE JAVA NAME 'ExecCmd.main(java.lang.String[])';
    I have granted the following permissions to a user who wishes to run the code:
    drop public synonym RunExecCmd;
    create public synonym RunExecCmd for RunExecCmd;
    grant execute on RunExecCmd to FRED;
    grant javasyspriv to FRED;
    Execute dbms_java.grant_permission('FRED','java.io.FilePermission','/bin/env','execute');
    commit
    Execute dbms_java.grant_permission('FRED','java.io.FilePermission','/opt/oracle/live/9.0.1/dbs/*','read, write, execute');
    commit
    The following test harness has been used:
    Set Serverout On size 1000000;
    call dbms_java.set_output(1000000);
    execute RunExecCmd('/bin/ls -la');
    execute RunExecCmd('/bin/env');
    The output is as follows:
    SQL> Set Serverout On size 1000000;
    SQL> call dbms_java.set_output(1000000);
    Call completed.
    SQL> execute RunExecCmd('/bin/ls -la');
    OS Command is: /bin/ls -la
    total 16522
    drwxrwxr-x 2 ora9sys dba 1024 Oct 18 09:46 .
    drwxrwxr-x 53 ora9sys dba 1024 Aug 13 09:09 ..
    -rw-r--r-- 1 ora9sys dba 40 Sep 3 11:35 afiedt.buf
    -rw-r--r-- 1 ora9sys dba 51 Sep 3 09:52 bern1.sql
    PL/SQL procedure successfully completed.
    SQL> execute RunExecCmd('/bin/env');
    OS Command is: /bin/env
    exit value was non-zero: 127
    PL/SQL procedure successfully completed.
    Both commands do work when called from the OS command line.
    Any help or assistance would be really appreciated.
    Regards,
    Bernard.

    Kamal,
    Thanks for that. I have tried to use getErrorStream and it does give me more info. It appears that some of the commands cannot be found. I suspected that this was the case, but I am not sure how this can be, as they all appear to reside in the same directory with the same permissions.
    What is more confusing is output like so:
    SQL> Set Serverout On size 1000000;
    SQL> call dbms_java.set_output(1000000);
    Call completed.
    SQL> execute RunExecCmd('/usr/bin/id');
    OS Command is: /usr/bin/id
    exit value was non-zero: 1
    id: invalid user name: ""
    PL/SQL procedure successfully completed.
    SQL> execute RunExecCmd('/usr/bin/which id');
    OS Command is: /usr/bin/which id
    /usr/bin/id
    PL/SQL procedure successfully completed.
    Regards,
    Bernard

  • Thread safe operations on table in pl/sql procedure?

    I am developing a Java application which will be running in N locations simultaneously; each application will have M threads. Every thread takes a unique ID with status NOT_TAKEN from table Queue and changes its status to TAKEN.
    Problem:
    How to prevent thread from situation like this:
    1. Thread A selects the first ID with status NOT_TAKEN.
    2. At the same time, thread B selects the first ID with status NOT_TAKEN (so it will be the same ID that thread A selected).
    3. Thread A changes status of ID to TAKEN
    4. Thread B changes status of ID to TAKEN
    After this operation thread A and B are using the same ID.
    What I did:
    I have written a PL/SQL procedure which locks table Queue in exclusive mode, selects the first ID, changes its status to TAKEN and unlocks the table. Because the table is locked in exclusive mode, only one thread can run this procedure at a time.
    Question:
    How should this be solved optimally? My solution prevents all other updates on the Queue table while it is locked (such as changing status from TAKEN to OPERATION_DONE), so it has a performance issue.

    As Centinul has said, you need to lock just one row.
    I would just add NOWAIT to the select statement to let the Java thread go and try again, instead of waiting for the other threads.
    Example: (not tested)
    -- Assuming structure of your QueueTable: ( IDCol is PK , StatusCol is VARCHAR2, ... )
    -- or make it part of the package....
    CREATE OR REPLACE
    FUNCTION updateQueue( nQID QueueTable.IDCol%TYPE) RETURN VARCHAR2 AS
       eLocked EXCEPTION;
       PRAGMA EXCEPTION_INIT(eLocked,-54);
       CURSOR curQueueTable IS SELECT 1 CNTR FROM QueueTable WHERE IDCol=nQID AND StatusCol='NOT_TAKEN' FOR UPDATE OF StatusCol NOWAIT;
       recQueueTable curQueueTable%ROWTYPE;
       cRtn VARCHAR2(1);
    BEGIN
       cRtn := 'Y';
       BEGIN
          OPEN curQueueTable;
          FETCH curQueueTable INTO recQueueTable;
          CLOSE curQueueTable;
          IF recQueueTable.CNTR IS NOT NULL AND recQueueTable.CNTR = 1 THEN
              UPDATE QueueTable SET StatusCol = 'TAKEN' WHERE IDCol=nQID;
          ELSE
              -- Already updated
              cRtn := 'N';
          END IF;
             -- You can control your transaction here as well
             -- COMMIT;
             -- But it really should be done in the Java thread.
        EXCEPTION
           WHEN eLocked OR STANDARD.TIMEOUT_ON_RESOURCE THEN
           -- Thread could not get exclusive row lock. Ignore error.
             cRtn := 'N';
             NULL;
          WHEN OTHERS THEN
             -- Handle other errors...
             -- NULL; just kidding...
             RAISE;
       END;
       RETURN cRtn;
    END;
    Edited by: thomaso on Sep 18, 2009 10:30 AM
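    Another common pattern for this kind of work queue (an alternative sketch, using the same hypothetical QueueTable columns) is to let each thread claim the first unlocked row with FOR UPDATE SKIP LOCKED, so threads neither block each other nor serialize on a table lock:
    DECLARE
       CURSOR curNext IS
          SELECT IDCol
            FROM QueueTable
           WHERE StatusCol = 'NOT_TAKEN'
             FOR UPDATE SKIP LOCKED;
       nQID QueueTable.IDCol%TYPE;
    BEGIN
       OPEN curNext;
       FETCH curNext INTO nQID;          -- first NOT_TAKEN row nobody else has locked
       IF curNext%FOUND THEN
          UPDATE QueueTable SET StatusCol = 'TAKEN' WHERE IDCol = nQID;
       END IF;
       CLOSE curNext;
       COMMIT;                           -- or commit from the Java caller
    END;
    /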

  • Pl/sql procedure with shell script

    Hi Guys,
    I will be updating some of the columns in the database through a SQL UPDATE statement. I want to make this process automatic, i.e. instead of running this update process manually, I want to write a Unix script which runs as a cron job. In the UPDATE statement I have to compare dates like e_create_date > to_date(........, 'yymmdd'), where the date should be 2 days before today, and I would like to create a spool file which I would like to send through mail. The script will run automatically.
    I am confused about how to write the shell script and the SQL procedure, and how to call the procedure from inside the shell script. How can this be done?
    Help Appreciated.
    Thanks
    sonu

    Save the stored procedure as create_SP.sql in the OS.
    Save another script, testrun.sql, as follows:
    begin
      sp_test('&1');   -- &1 is the first argument passed on the sqlplus command line
    end;
    /
    If you are on a Unix or Solaris OS, then create a shell script (named test.sh or test.ksh) along these lines:
    var='SYSTEM'
    export var
    sqlplus -s username/password <<EOF
    @create_SP.sql
    -- this creates the stored procedure in the database
    @testrun.sql $var
    EOF
    This will execute the SP with a parameter taken from the OS variable.

  • Calling PL/SQL Procedure In Another Schema Gives Unexpected Result

    I have a SQL Script that does this:
    conn pnr/<password for user pnr>;
    set serveroutput on;
    exec vms.disable_all_fk_constraints;
    SELECT owner, constraint_name, status FROM user_constraints WHERE constraint_type = 'R';
    and the disable_all_fk_constraints procedure that is owned by user 'vms' is defined as:
    create or replace
    procedure disable_all_fk_constraints is
    v_sql   VARCHAR2(4000);
    begin
    dbms_output.put_line('Disabling all referential integrity constraints.');
    for rec in (SELECT table_name, constraint_name FROM user_constraints WHERE constraint_type='R') loop
    dbms_output.put_line('Disabling constraint ' || rec.constraint_name || ' from ' || rec.table_name || '.');
    v_sql := 'ALTER TABLE ' || rec.table_name || ' DISABLE CONSTRAINT ' || rec.constraint_name;
    execute immediate(v_sql);
    end loop;
    end;
    When I run the SQL script, the call to vms.disable_all_fk_constraints disables the FK constraints in the 'vms' schema, whereas I wanted it to disable the FK constraints in the 'pnr' schema (the invoker of the procedure). I know that I could make this work by copying the disable_all_fk_constraints procedure to the 'pnr' schema and calling it as "exec disable_all_fk_constraints;" from within the SQL script, but I want to avoid having to duplicate the PL/SQL procedure in each schema that uses it.
    What can I do?
    Thank you

    You have two issues to solve.
    First you need to write a packaged procedure that works with INVOKER rights. The default is DEFINER rights.
    The difference is exactly what you need. Usually the package has the rights of the schema where it is defined (= definer rights), in your case schema VMS, whereas you need the privileges of the user that calls the package (PNR).
    => Check out the documentation for INVOKER rights
    The second problem is that the view "user_constraints" will not give the results you expect when called from inside a procedure in another schema. An alternative could be to use the view DBA_CONSTRAINTS with a filter on the owner (where owner = 'PNR'). Not sure if there are other working possibilities. Well you could create a list of constraint names that you want to disable, instead of creating the list dynamically.
    And you could have another potential disaster creeping up on you. If you run this thing, then at that moment you don't have any referential integrity anymore. You can't be sure that you can create the FKs again after this action. This is EXTREMELY DANGEROUS. I would never ever do this in any kind of production or test database, and I would be very careful when doing it on a development database.
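    To illustrate the invoker-rights part, here is the same procedure with only the AUTHID clause added (a sketch; as noted above, verify how USER_CONSTRAINTS resolves in your case, or switch to DBA_CONSTRAINTS filtered on owner):
    create or replace
    procedure disable_all_fk_constraints authid current_user is
    v_sql   VARCHAR2(4000);
    begin
    dbms_output.put_line('Disabling all referential integrity constraints.');
    for rec in (SELECT table_name, constraint_name FROM user_constraints WHERE constraint_type='R') loop
    dbms_output.put_line('Disabling constraint ' || rec.constraint_name || ' from ' || rec.table_name || '.');
    v_sql := 'ALTER TABLE ' || rec.table_name || ' DISABLE CONSTRAINT ' || rec.constraint_name;
    execute immediate(v_sql);
    end loop;
    end;
    /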

  • Pl/sql procedure to validate/review

    Hi folks, I'm looking for some good practices/guidelines for evaluating and reviewing the performance of a big PL/SQL procedure. I have already tried DBMS_PROFILER, but it was not good enough. I would like to hear whether there is any other/better option to work with.
    Regards

    Alvaro wrote:
    Run them in development and see if the performance will be acceptable; if not, you can do a TRACE/TKPROF to drill down to the slowdown.
    After that you can perhaps invoke the SQL tuning facility and proceed to tune the individual SQL statements.
    I don't see any quick way of giving you a broad overview of the performance of all PL/SQL on the database. This must be done on a case-by-case basis.
    And that is the basic issue. You CANNOT determine the performance of the code in production using development. Anyone who says it can be done either has a perfectly static and known production environment that is duplicated to the last byte and last h/w bolt on development, or is a liar.
    So as you need to design, work and test within the limits of development, the aims are simple: design and write scalable and performant code. There is no need to be a rocket scientist or to analyse tkprof traces to determine whether code is scalable and performant. One should be able to tell at first glance that a FOR cursor loop doing DMLs inside the loop is not performant and cannot scale, or that a procedure tasked to update almost every single row in a VLT is a major serialised workload that should instead be parallelised. So before diving into the nuts and bolts of a tkprof, get the algorithms and designs right.
    As for unleashing the code on production: no amount of stress testing and tuning on development is adequate to tick off the deployment checklist box that says "code will now run just fine in production". Which is why you need to add to the code the ability for it to report on what it is doing, the workloads it is dealing with, the resources it has available and is using, and how fast it is dealing with the workloads.
    The success of that s/w deployed to production, directly depends on how well that code is instrumented - in order to enable the code to report on problems encountered in the production environment.
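    As one concrete way to add that kind of instrumentation (a sketch; the procedure name and workload size are placeholders), DBMS_APPLICATION_INFO lets the running code report its module, action and progress, which can then be watched in V$SESSION and V$SESSION_LONGOPS:
    create or replace procedure process_batch is
      rindex   binary_integer := dbms_application_info.set_session_longops_nohint;
      slno     binary_integer;
      v_total  number := 20000;   -- placeholder workload size
    begin
      dbms_application_info.set_module(module_name => 'PROCESS_BATCH',
                                       action_name => 'processing records');
      for i in 1 .. v_total loop
        -- ... do the work for one record ...
        dbms_application_info.set_session_longops(
          rindex    => rindex,
          slno      => slno,
          op_name   => 'Processing records',
          sofar     => i,
          totalwork => v_total,
          units     => 'records');
      end loop;
      dbms_application_info.set_module(null, null);
    end;
    /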
