Table names in generated stored procs are qualified with sa schema name

I am using OMW 9.2.0.1.2 with the 9.2.0.1.3 SQL Server plugin to help with a SQL Server 7 to Oracle 9.2.0.1 migration on NT.
As is common with SQL Server databases, the dbo is sa. I don't want my Oracle schema to be called sa. I have successfully gotten around this by renaming the sa user in the Oracle model in OMW.
However, the stored procedure code that OMW generates has table names qualified with sa as the schema (the table names in the original T-SQL procs were not qualified).
How can I stop OMW from generating table names qualified with sa?
Thanks.

Hi,
This is a bug in the OMWB. As a workaround, you can generate the migration scripts (see the reference guide and user guide for more information) from the OMWB Oracle Model and then edit these scripts to ensure that the 'sa' prefix does not appear in the text of the stored procedures. Then use these scripts to generate the schema in your database.
An alternative is to migrate the stored procedures, schema and data over to the Oracle database using OMWB and then open each procedure in Enterprise Manager, remove the references to the 'sa' prefix and re-compile the procedure.
I will keep you updated on the release this fix will appear in.
I hope this helps,
Tom.
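
A hedged sketch of the second workaround in SQL form (standard Oracle dictionary views; the procedure name in the ALTER statement is a placeholder, not from the original post):

-- Find migrated procedure source lines that still carry the 'sa.' qualifier
-- (run as the owner of the migrated schema; USER_SOURCE is a standard view).
SELECT name, line, text
  FROM user_source
 WHERE type = 'PROCEDURE'
   AND LOWER(text) LIKE '%sa.%'
 ORDER BY name, line;

-- After editing each procedure's source to drop the prefix, recompile it, e.g.:
ALTER PROCEDURE my_migrated_proc COMPILE;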

Similar Messages

  • Any function modules or BAPIs available to get the scheme name for internal orders?

    I have the internal order number in table field AUFK-AUFNR, and the corresponding scheme value for the internal order is available in IMPR-PRNAM. Now I want to inner join both tables to extract the data, but there is no common field. Is there any function module or BAPI available to get the scheme name for internal orders?

    Look at the DB view "V_IVP_OR".
    Regards,
    Laurent

  • Are drives with the same name differentiated?

    I'm running OS X 10.6.8 with three external drives on which data is stored (a main drive, A-Data, and two backups). A friend may decide to help me with a project and to do so will have to have the same setup. Naming the drives on the second system is of some concern to me, so I did some research.
    The fellow at http://www.cnet.com/au/news/drives-in-os-x-appearing-with-1-appended-to-their-names/ states:
    If by chance you mount two drives of the same name, because the system can't create two mount points with the same name it appends sequential numbers to new mount points as they are created, and therefore you will see the numbered drive names in the Finder.
    Unless I misunderstand what he is talking about, I found that I can have two drives with the same name. I can have two drives named A-Data on my desktop. And therein lies my problem (maybe). The project involves Premiere and several of its sister programs, all of which require external files (video, audio, stills) to be linked, not stored within the working file.
    QUES 1
    Are disks with the same name distinguishable by OSX or software?
    QUES 2
    Assume that
    I am working on a project which is saved on A-Data (and all the project links are to files on that particular A-Data)
    I then connect another disk with the same name (and with the same folder structure), and backup my files to it.
    I then eject the first A-Data and open the project on the second A-Data.
    Will the software say that it can't find the linked files (because they are on the ejected disk), or will the project open as normal?

    Avoid having the same names if at all possible, especially if you use apps that 'link' to the media via the file path.
    The file path is set by the order in which the disks mount, which can change across reboots.
    To see what is going on, use Disk Utility…
    Select the first volume and look at the mount point (a blue link at the bottom of the window)…
    /Volumes/A-Data
    Now connect the second volume & select it, look at the mount point…
    /Volumes/A-Data 1
    Now reboot, power off the first disk, leave the second one on & power disk one back on when the OS has completed booting. They will show the same paths, but each disk is now the 'other' disk - this makes a big mess for apps like Premiere.
    The only way to make this work is to clone every file to be identical on both disks, and never use Premiere (or any app that links to media) when both disks are connected. Otherwise you risk linking to media files on 'A-Data' and 'A-Data 1'.
    Premiere must have tools for sharing projects - look at them before using disk names in this way.
    You could however clone 'disk 1' to 'disk 2' & then rename it on the second Mac (to make them identical to fix the paths for Premiere), just avoid bringing them to one Mac with the same disk names. It does make more headaches in future though, because your friend will add new media that you need to migrate back to your disk.

  • Using dbms_datapump package to export the schema with the schema name as parameter

    Hi,
    I am using a PL/SQL block to export a schema using the dbms_datapump package. Now I want to pass the schema name as a parameter to the procedure and get the .dmp and .log files with the schema name included.
    CREATE OR REPLACE PROCEDURE export
    IS
    h1 number;
    begin
    h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'export1', version => 'COMPATIBLE');
    dbms_datapump.set_parallel(handle => h1, degree => 1);
    dbms_datapump.add_file(handle => h1, filename => 'EXPDAT.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
    dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
    dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''CHECKOUT'')');
    dbms_datapump.set_parameter(handle => h1, name => 'ESTIMATE', value => 'BLOCKS');
    dbms_datapump.add_file(handle => h1, filename => 'EXPDAT%U' || to_char(sysdate,'dd-mm-yyyy') || '.DMP', directory => 'DATA_PUMP_DIR', filetype => 1);
    dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
    dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
    dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
    dbms_datapump.detach (handle => h1);
    exception
    when others then
    raise_application_error(-20001,'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);
    end;
    Thank you in advanced
    Sri

    user12062360 wrote:
    Hi,
    I am using the pl/sql block to export schema using dbms_datapump package,Now I want to pass the scheme name as the parameter to the procedure and get the .dmp and .log files with the schema name included.
    OK, please proceed to do so
    >
    CREATE OR REPLACE PROCEDURE export
    IS
    h1 number;
    begin
    h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'export1', version => 'COMPATIBLE');
    dbms_datapump.set_parallel(handle => h1, degree => 1);
    dbms_datapump.add_file(handle => h1, filename => 'EXPDAT.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
    dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
    dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''CHECKOUT'')');
    dbms_datapump.set_parameter(handle => h1, name => 'ESTIMATE', value => 'BLOCKS');
    dbms_datapump.add_file(handle => h1, filename => 'EXPDAT%U' || to_char(sysdate,'dd-mm-yyyy') || '.DMP', directory => 'DATA_PUMP_DIR', filetype => 1);
    dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
    dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
    dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
    dbms_datapump.detach (handle => h1);
    exception
    when others then
    raise_application_error(-20001,'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);
    end;
    The EXCEPTION handler is a bug waiting to happen. Eliminate it entirely.
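    A hedged sketch of the parameterised version the poster is after, following the reply's advice to drop the WHEN OTHERS handler. The procedure name, parameter name and file-name pattern below are illustrative, not from the original post; the directory object is the same DATA_PUMP_DIR as above:
    CREATE OR REPLACE PROCEDURE export_schema (p_schema IN VARCHAR2)
    IS
      h1 NUMBER;
    BEGIN
      h1 := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA',
                               job_name  => 'export_' || p_schema);
      -- Log and dump file names are built from the schema name passed in
      dbms_datapump.add_file(handle => h1, filename => p_schema || '_EXPDAT.LOG',
                             directory => 'DATA_PUMP_DIR', filetype => 3);
      dbms_datapump.add_file(handle => h1,
                             filename => p_schema || '_EXPDAT%U_' || to_char(sysdate, 'dd-mm-yyyy') || '.DMP',
                             directory => 'DATA_PUMP_DIR', filetype => 1);
      -- Restrict the job to the schema named by the parameter
      dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR',
                                    value  => 'IN(''' || upper(p_schema) || ''')');
      dbms_datapump.start_job(handle => h1);
      dbms_datapump.detach(handle => h1);
    END;
    /
    It would then be called as, for example: EXEC export_schema('CHECKOUT')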

  • HT204074 Our new iPhone 5's ID name is the same as an existing, still-in-use iPhone 4 in the family; if I remove the device and change its name, can I then immediately associate it with the correct name?

    I have read HT4627, but I think my original question is more specific. Our new iPhone 5's ID name is the same as an existing, still-in-use iPhone 4 in the family; if I remove the device and change its name, can I then immediately associate it with the correct name?

    You can change the name of a device on it via Settings > General > About > Name - you shouldn't need to remove the association and then reassociate it.

  • No table names appear (only Stored Procs) when connected to SQLServer w/dbo

    Greetings,
    I'm using Crystal Reports 11, connecting to a database on SQL Server 2005.
    I am on Windows 7. I also have the problem on Windows Server 2003
    Starting from a blank database, I create my connection through ODBC, and when I expand my database, all I get is a list of stored procedures, no tables.
    I have isolated this to whether I have dbo authority on this particular database or not.
    I.e., a user with dbo authority sees nothing but stored procedure names; if I take dbo authority away, they see tables, views, and stored procs, all as would be expected.
    I have several other SQL Server databases that do not share this issue. Users with dbo authority can see tables as expected.
    I have compared the settings on the databases, and nothing appears to be different (the database with the issue is considerably larger, but I can't see where that would be an issue)
    I will probably also open this issue with Microsoft, to see what they say from their end
    If anyone has seen this before or has any suggestions, they would be greatly appreciated. It seems very strange that giving MORE authority would affect what the user can see.

    Don
    Below are the results from the SQLCON program you suggested. It is returning table info... (I truncated most of it, as I assume it's not all that pertinent.)
    SQLConnect Successful
    [Microsoft][ODBC SQL Server Driver][SQL Server]Changed database context to 'eas'.
    [Microsoft][ODBC SQL Server Driver][SQL Server]Changed language setting to us_english.
    ODBC Version is : 03
    SQL Driver Name is : SQLSRV32
    SQL Driver Version is : 06.01.7600
    SQL Driver Supported ODBC Version is : 03
    SQL DBMS Name is : Microsoft SQL Server
    SQL DBMS Version is : 09.00.4035
    SQLTables Successful
    [Row, Database, Owner, Name, Type]
    1  eas,  dbo,  ABC_ACCOUNT,  TABLE,
    2  eas,  dbo,  ABC_BUSINESS_UNIT,  TABLE,
    3  eas,  dbo,  ABC_COMPANY,  TABLE,
    Here is the listing from the TS server (also seeing the issue):
    SQLConnect Successful
    [Microsoft][ODBC SQL Server Driver][SQL Server]Changed database context to 'eas'.
    [Microsoft][ODBC SQL Server Driver][SQL Server]Changed language setting to us_english.
    ODBC Version is : 03
    SQL Driver Name is : SQLSRV32
    SQL Driver Version is : 03.86.3959
    SQL Driver Supported ODBC Version is : 03
    SQL DBMS Name is : Microsoft SQL Server
    SQL DBMS Version is : 09.00.4035
    SQLTables Successful
    [Row, Database, Owner, Name, Type]
    1  eas,  dbo,  ABC_ACCOUNT,  TABLE,
    2  eas,  dbo,  ABC_BUSINESS_UNIT,  TABLE,
    3  eas,  dbo,  ABC_COMPANY,  TABLE,
    I apologize for not answering your previous question; I think the SQL Driver Name above answers it. If not, let me know and I'll get you the information you're looking for. Thank you again for your time on this.

  • Populating OBIEE event polling table S_NQ_EPT using Informatica/Stored Proc

    Hi,
    We have successfully set up the OBIEE event polling table S_NQ_EPT, and the OBIEE cache purging mechanism is working fine.
    Has anybody set up an event polling table that gets updated after/in a DAC execution plan run using Informatica or an Oracle stored procedure?
    Can anybody suggest pointers on how to insert rows into S_NQ_EPT after a DAC run using Informatica/DAC/an Oracle stored procedure?
    Thanks in Advance,

    Hi Srini,
    Thanks for the reply. Yes, we have set up the event polling table and all the steps mentioned on the site are done. We want to populate the event polling table after the ETL run; any pointers?
    Thanks,
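    A minimal sketch of the kind of insert a post-ETL task could run. The column list follows the usual S_NQ_EPT layout from the sample event table creation script, and the database/schema/table values are placeholders; they must match the names in the repository's physical layer, so verify both against your own setup:
    -- One row per physical table whose cache entries should be purged
    INSERT INTO s_nq_ept
      (update_type, update_ts, database_name, catalog_name, schema_name, table_name, other_reserved)
    VALUES
      (1, SYSDATE, 'MyPhysicalDB', NULL, 'MY_SCHEMA', 'W_SOME_FACT_TABLE', NULL);
    COMMIT;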

  • New Nikon and keeping old - are files with the same names allowed?

    I have been using the Organizer catalog for all my photos for years. When my wife got interested in photography I bought her a Nikon and added her photos into the same catalog - most of the tags overlapped since the pictures are mostly of family and around town. This worked out fine since I used a Canon and the file names were different.
    Recently I looked for a new camera (it's been years and I wanted to move up). Thinking that it made sense to share accessories and lenses, I got myself a new Nikon - a different model than my wife has, if that matters. (No Nikon-Canon discussion please.) Of course now all the pictures that I take are named in the camera and have exactly the same names as my wife's pictures already in the Organizer catalog.
    I haven't tried to import my pictures yet, but I seem to remember that the Organizer catalog won't import a "duplicate" picture even if it's in a different folder on the hard drive. So the question is, how does Organizer know whether a picture is a duplicate or not? If it looks at filenames then it won't import my pictures (or worse, it will import my pictures and "overwrite" my wife's - not on the hard drive in a different folder, but in the catalog - or just mess up all the tags, etc.).
    The only thing I can think of is to download my pictures directly to the hard drive, batch rename them in IrfanView (like adding my camera model to the front of the filename), and then import the renamed pictures into the catalog. This adds a few steps to the workflow over just using the Adobe importer and gives me more opportunity to mess things up if I'm not concentrating as I do all these steps, so it's something I would like to avoid.
    What do all you professional photographers with more than one camera do about this? Should I just start a separate catalog - although the beauty of the PSE catalog is that I can locate all of the pictures I want with just a few clicks of the mouse without looking in more than one place. Extra info - I'm using PSE 5.0.2 on Vista 32.
    Thank you for reading this far.  All suggestions will be much appreciated.
    Blaine

    PSE is smart enough to know that a file in a different directory with a different capture date and time is not the same as the photo you already imported. No need to rename beforehand.

  • Are libraries with the same name a problem?

    In previous versions of FCPX, having the same event on two drives was a real problem.  I had backup drives that mirrored my media drive, and if I started FCPX before closing the backup drives, FCPX would warn me about the same event being present on more than one drive and potential corruption (the same may have been true of projects, too).
    In 10.1, you can easily copy libraries to different drives in Finder.  But then both libraries have the same name.  Is that a problem like it was in earlier versions of FCPX (with events)?  Is there a potential corruption issue?
    I've started renaming the libraries from within FCPX after I copy them (so, for example, I have "Day at the Beach" and "Day at the Beach Backup"). I don't know if that's advisable or necessary, or even if I need to do the renaming from within FCPX rather than the Finder.
    Any advice is appreciated.  Thanks!

    A Library is completely self-contained and should not pose any problem.
    However, placing a Backup ID in the name makes good housekeeping sense to me.
    Al

  • CSS properties are prefixed with style sheet name

    I have two sites, both created with the default templates in Dreamweaver. The properties of the CSS files looked different in each, as shown.
    What is the difference, and how do I clean up the one that is prefixed?

    There is nothing to clean up. .something selectors are classes, #something selectors are IDs. They don't mutually exclude one another but can coexist. If defined in a specific stylesheet and order, they provide contextual rules, e.g. if a class rule appears inside a specific element with an ID. Nothing is wrong. Read up on CSS!
    Mylenium

  • Issue with passing schema name as variable to sql file

    Hi,
    I have a scenario wherein a Java process (Process_1), connected as SYS, invokes SQL files and executes them in SQL*Plus.
    DB: Oracle 11.2.3.0
    Platform: Oracle Linux 5 (64-bit)
    Call_1.sql is being invoked by Java which contains the below content:-
    ALTER SESSION SET CURRENT_SCHEMA=&&1;
    UPDATE <table1> SET <Column1> = &&1;
    COMMIT;
    @Filename_1.sql
    Another process (Process_2), also run as the SYS user, accesses Filename_1.sql.
    The content of Filename_1.sql is:-
    DECLARE
    cnt NUMBER := 0;
    BEGIN
      SELECT COUNT(1) INTO cnt FROM all_tables WHERE table_name = 'TEST' AND owner = '&Schema_name';
      IF cnt = 1 THEN
      BEGIN
        EXECUTE IMMEDIATE 'DROP TABLE TEST';
        dbms_output.put_line('Table dropped with success');
      END;
      END IF;
      SELECT COUNT(1) INTO cnt FROM all_tables WHERE table_name = 'TEST' AND owner = '&Schema_name';
      IF cnt = 0 THEN
      BEGIN
        EXECUTE IMMEDIATE 'CREATE TABLE TEST (name VARCHAR2(100) , ID NUMBER)';
        dbms_output.put_line('Table created with success');
      END;
      END IF;
    End;
    Process_2 uses the "&Schema_name" identifier to populate the owner name in Filename_1.sql, but Process_1 needs to use "&&1" to populate the owner name. I am looking for a way to modify Call_1.sql so that "&&1" can populate the owner name values in Filename_1.sql, while avoiding any changes to Filename_1.sql.
    Any help would be appreciated.
    Thanks.

    Bad day for good code. Have yet to spot any posted today... Sadly, yours is just another ugly hack.
    The appropriate method for using SQL*Plus substitution variables (in an automated fashion) is as command line parameters, not as static/global variables defined by some other script run prior.
    So if a script is, for example, to create a schema, it should look something as follows:
    -- usage: create-schema.sql <schema_name>
    set verify off
    set define on
    create user &1 identified by .. default tablespace .. quota ... ;
    grant ... to &1;
    --eof
    If script 1 wants to call it direct then:
    -- script 1
    @create-schema SCOTT
    If script 2 want to call it using an existing variable:
    -- script 2
    @create-schema &SCHEMA
    Please - when hacking in this fashion, make an attempt to understand why the hack is needed and how it works. (and yes, the majority of SQL*Plus scripts fall into the CLI hack category). There's nothing simple, beautiful, or elegant about SQL*Plus scripts and their mainframe roots.
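    For the original question, one way to keep Filename_1.sql untouched is for Call_1.sql to map its positional parameter onto the named substitution variable the shared script expects. A hedged sketch, using the same file names as in the post:
    -- Call_1.sql (invoked as: @Call_1.sql <schema_name>)
    ALTER SESSION SET CURRENT_SCHEMA = &&1;
    -- ... the existing UPDATE and COMMIT stay as they are ...
    -- Map the positional parameter onto the named variable used by Filename_1.sql
    DEFINE Schema_name = &&1
    @Filename_1.sql
    Process_2 keeps defining Schema_name its own way, so both callers can share Filename_1.sql unchanged.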

  • How to add company name to show up in contact list with person's name on Droid

    I just switched from the BlackBerry Storm to the Motorola Droid. Is there a way to have the company name show up in the contact list under the person's name? I need that for work. It syncs up with a program called Act by Sage, and my Storm used to show both.

    You can edit your contacts, and list the company name under the organization section. However, it will not show up when they call you. You will only be able to see it in your contact list. Hope this helps!

  • Change Table Names in Universe to a Fully Qualified Table Name

    We have implemented the HR Rapid Datamart. The table names just have the table name. We need to have the tables use fully qualified names programmatically. For example, the Company table needs to read as the schema.table name DM.Company. Is there a way to set it in the Custom Parameters section of the Edit Connection window to automatically add the schema name to the table name programmatically, without having to rename the table names in the universe?

    You can add the schema name for all tables in one pass if you select all of them (press the CTRL key while selecting them) and then go to the properties window and type the schema name into the owner field.
    Regards,
    Stratos

  • Stored proc parameters referencing in code

    Hi,
    Quick question ...
    I want my stored proc to accept two parameters: a source table name and a destination table name. The stored proc's goal is to update the destination table with data in the source table.
    Here is my code :
    CREATE OR REPLACE PROCEDURE UPDATE_FPSYNTHESE (
    sourcetable IN VARCHAR2,
    targettable IN VARCHAR2 )
    IS
    BEGIN
    INSERT INTO targettable (SELECT * FROM sourcetable WHERE id NOT IN (SELECT id FROM targettable));
    UPDATE targettable SET name = (select name from sourcetable where targettable.id = sourcetable.id);
    END;
    Here's what happens when I try to create it:
    Errors for PROCEDURE UPDATE_FPSYNTHESE:
    LINE/COL ERROR
    10/1 PL/SQL: SQL Statement ignored
    10/40 PL/SQL: ORA-00942: table or view does not exist
    11/1 PL/SQL: SQL Statement ignored
    11/8 PL/SQL: ORA-00942: table or view does not exist
    Warning: Procedure created with compilation errors.
    It doesn't seem to dereference the parameters ...
    what am I doing wrong ?
    Thanks in advance,
    Best regards,
    Steve

    If the source and target tables must be dynamic, you would have to use dynamic SQL
    CREATE OR REPLACE PROCEDURE update_fpsynthese (
      sourcetable IN VARCHAR2,
      targettable IN VARCHAR2 )
    AS
      sqlStmt VARCHAR2(4000);
    BEGIN
      sqlStmt := 'INSERT INTO ' || targettable ||
                 ' (SELECT * ' ||
                 '    FROM ' || sourcetable ||
                 '   WHERE id NOT IN (SELECT id ' ||
                 '                      FROM ' || targettable || '))';
      EXECUTE IMMEDIATE sqlStmt;
    END;
    Since you are building up strings and executing them, you'll also have to worry about things like SQL injection attacks. You also lose compile-time syntax checking and generally complicate your life from a development & maintenance perspective. Are there really that many tables that you need to do this procedure with?
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
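    The reply covers only the INSERT; for completeness, a hedged sketch that also rebuilds the original UPDATE dynamically, with the same caveats as above about SQL injection and passing only trusted table names:
    CREATE OR REPLACE PROCEDURE update_fpsynthese (
      sourcetable IN VARCHAR2,
      targettable IN VARCHAR2 )
    AS
      sqlStmt VARCHAR2(4000);
    BEGIN
      -- Insert rows that are missing from the target (as in the reply above)
      sqlStmt := 'INSERT INTO ' || targettable ||
                 ' (SELECT * FROM ' || sourcetable ||
                 '   WHERE id NOT IN (SELECT id FROM ' || targettable || '))';
      EXECUTE IMMEDIATE sqlStmt;
      -- The correlated UPDATE from the original post, built the same way
      sqlStmt := 'UPDATE ' || targettable || ' t ' ||
                 '   SET name = (SELECT s.name FROM ' || sourcetable || ' s ' ||
                 '                WHERE s.id = t.id)';
      EXECUTE IMMEDIATE sqlStmt;
    END;
    /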

  • Generate DDL without schema name

    Hi,
    I would like to generate the DDL for just the tables of a given schema. I found that the function DBMS_METADATA.GET_DDL can give me such information, but all the tables are fully qualified with the schema prefixing the table's name. How can I get rid of the schema name?
    Here's what I executed on scott's schema:
    set long 10000;
    execute      dbms_metadata.SET_TRANSFORM_PARAM(dbms_metadata.session_transform, 'SEGMENT_ATTRIBUTES', false);
    execute      dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'STORAGE', false);
    execute      dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'TABLESPACE', false);
    select DBMS_METADATA.GET_DDL('TABLE',table_name) from user_tables;
    and here's what I got:
    CREATE TABLE "SCOTT"."EMP" ...
    Thanks,
    Luc

    Well, dbms_metadata.get_ddl will generate the DDL script with the username by default; if you don't want that, you can try your own script. Check the sample function I created to fix that.
    Hope this helps.
    SRI>set long 1000000
    SRI>create or replace function aaa(nstr varchar2,nuser varchar2) return varchar2 is
    2 begin
    3 return replace(nstr,chr(34)||nuser||chr(34)||'.','');
    4 end;
    5 /
    function created
    SRI>select dbms_metadata.get_ddl('TABLE','DEPT') from dual
    DBMS_METADATA.GET_DDL('TABLE','DEPT')
    CREATE TABLE "SCOTT"."DEPT"
    ( "DEPTNO" NUMBER(2,0),
    "DNAME" VARCHAR2(14),
    "LOC" VARCHAR2(13)
    ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TOOLS"
    SRI>select aaa(dbms_metadata.get_ddl('TABLE','DEPT'),'SCOTT') from dual;
    AAA(DBMS_METADATA.GET_DDL('TABLE','DEPT'),'SCOTT')
    CREATE TABLE "DEPT"
    ( "DEPTNO" NUMBER(2,0),
    "DNAME" VARCHAR2(14),
    "LOC" VARCHAR2(13)
    ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TOOLS"
    -Sri
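    The same idea scales to DDL longer than 32K if the wrapper takes and returns a CLOB instead of VARCHAR2. A minimal sketch along the lines of Sri's function (the function name is arbitrary; assumes a release where REPLACE accepts CLOB arguments):
    CREATE OR REPLACE FUNCTION strip_schema (p_ddl IN CLOB, p_owner IN VARCHAR2)
      RETURN CLOB
    IS
    BEGIN
      -- Remove every "OWNER". prefix from the generated DDL
      RETURN REPLACE(p_ddl, '"' || UPPER(p_owner) || '".');
    END;
    /
    -- Example:
    -- SELECT strip_schema(dbms_metadata.get_ddl('TABLE', table_name), USER) FROM user_tables;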
