Select Schema

Hello,
I am trying to build a form on a table in a schema that was created after my workspace. I have linked the schema with the workspace in the administration application. I also have NULL in the accessible schemas text box for my user.
When I use the wizard to create a form the Table/View Owner dropdown box does not show this new schema.
Any ideas what I am doing wrong?

Chris,
Say you're working on application 100 and its parsing schema (owner) is FOO. You have recently added BAR to the workspace and you want to create a form on BAR.EMP in application 100. To see BAR in the Table/View Owner select list, BAR must grant select on EMP (or any of its other tables/views) to FOO.
Note also, it was not necessary for you to add BAR to the workspace schemas for this purpose.
Scott
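
A minimal sketch of that grant, using the schema and table names from this thread (run it while connected as BAR, or as a DBA):

```sql
-- Let the parsing schema FOO read BAR's EMP table
GRANT SELECT ON bar.emp TO foo;
```

After the grant, BAR should appear in the Table/View Owner select list of the form wizard.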

Similar Messages

  • Dataguard setup for selected schemas

    Version: 10.2, 11.2
    Platform: Solaris 5.10
    In our Production DB, we have some schemas which we do not want to be replicated at the standby site. Logical backup is enough for these schemas.
    Instead of replicating the entire database, is there a way we could set up Data Guard so that logs are shipped only for selected schemas?

    With a Physical Standby Database, you always 'replicate' your whole primary. With a Logical Standby, though, you could exclude certain schemas from the 'replication'.
    Another option would be - as already mentioned by others - to do a regular replication with Streams or GoldenGate and replicate only the schemas and objects that you deem important.
    However, I would recommend that you review the requirement to exclude 'unimportant' schemas from your Standby Database. It makes the whole setup more complicated than necessary.
    Just go with Physical Standby and ignore those schemas - why bother?
    Kind regards
    Uwe Hesse
    http://uhesse.wordpress.com

  • Select schemas from relational model on import from data dictionary option

    Hi All,
    I have one relational model with 3 different schemas.
    I want to compare one of my schemas with the data dictionary of a database.
    I select the Import option in the File menu, choose the 'from data dictionary' option,
    select the connection to my database (with 'swap target model' checked), select a physical model, and select the objects I want to compare from the database.
    My problem is that the result is a comparison between all the objects in my model and the objects I selected in the database.
    What I really want is to compare a list of objects in my model to a list of objects in my database.
    Is this possible, or do I always need to compare all the objects of the model?
    Thanks in advance

    Hi jbellver,
    there has been no development on that problem in DM 3.1.0.691. In the production release you'll be able to compare the objects in a subview with the database, or just select several objects and compare them with the database. And of course the "generate in DDL" filtering can still be used - it works at the "Compare dialog" level.
    Philip

  • Select schema via cursor

    I am working with a database in which all schemas have the same tables. I am trying to write a PL/SQL routine that will loop through each schema, query a table in that schema, and insert that data into a master table (simplified explanation). I can get a cursor that contains all the schemas, but I can't figure out how to set the schema name from a cursor.
    sample code ... (not working)
    declare
      cursor schemaCrsr is
        select SCHEMA_NAME from MASTER.SCHEMA_TABLE;
    begin
      for mySchemaCrsr in schemaCrsr loop
        insert into MASTER.TABLE_X
        select * from mySchemaCrsr.TABLE_X;
      end loop;
    end;
    What is the correct syntax for "mySchemaCrsr.TABLE_X" to enable this? Or is there another way to accomplish this?
    Thanks

    Chris,
    I'm using Oracle version 8.1.6 now, so I can't tell whether this will work in Oracle 7.3.4, but I think I remember using something similar in Oracle 8.0.5. Please try the code below and let us know whether it works for you.
    (Note: this forum sometimes inserts a space between the two | characters of the concatenation operator; if you see '| |' anywhere, remove the space before running the code.)
    Any time you use a variable to represent a schema name, table name, or column name at run time, you have to use some form of dynamic SQL. EXECUTE IMMEDIATE is the newer form of dynamic SQL; DBMS_SQL is the older method.
    Barbara
    DECLARE
      CURSOR schemacrsr IS
        SELECT schema_name
        FROM   master.schema_table;
      cursor_name   INTEGER;
      rows_inserted INTEGER;
    BEGIN
      FOR myschemacrsr IN schemacrsr
      LOOP
        cursor_name := DBMS_SQL.OPEN_CURSOR;
        DBMS_SQL.PARSE
          (cursor_name,
           'INSERT INTO master.table_x SELECT * FROM '
           || myschemacrsr.schema_name
           || '.table_x',
           DBMS_SQL.NATIVE);
        rows_inserted := DBMS_SQL.EXECUTE (cursor_name);
        DBMS_SQL.CLOSE_CURSOR (cursor_name);
      END LOOP;
    END;
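    For reference, the same loop can be written with EXECUTE IMMEDIATE, the newer form of dynamic SQL mentioned above. This is only a sketch, assuming the MASTER.SCHEMA_TABLE and per-schema TABLE_X tables from the question:

    ```sql
    DECLARE
      CURSOR schemacrsr IS
        SELECT schema_name FROM master.schema_table;
    BEGIN
      FOR rec IN schemacrsr LOOP
        -- Build and run the INSERT ... SELECT with the schema name
        -- substituted at run time
        EXECUTE IMMEDIATE
          'INSERT INTO master.table_x SELECT * FROM '
          || rec.schema_name || '.table_x';
      END LOOP;
    END;
    /
    ```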

  • Select schema name. How?

    Hi All,
    Maybe a very simple question: how can I select the name of the schema I'm logged into? I don't have DBA privileges.
    --Sam

    Thanks.
    --Sam
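
    For what it's worth, a common way to get this (no DBA privileges needed) is SYS_CONTEXT:

    ```sql
    -- Current schema (may differ from the login user after
    -- ALTER SESSION SET CURRENT_SCHEMA)
    SELECT sys_context('USERENV', 'CURRENT_SCHEMA') FROM dual;
    -- Login user
    SELECT user FROM dual;
    ```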

  • Backing up selected Schemas

    DB version: 10gR2
    Is it possible to back up just a selected few schemas using RMAN?

    No. Such a task is done by exp or expdp. The smallest unit you can back up with RMAN is a datafile.
    Werner

  • Replicate selected schema

    Hi,
    I am running a Data Guard environment with a primary and a physical standby database. Is it possible not to replicate a particular schema with its objects (tables, views, packages, etc.) and associated tablespaces? Would changing to a logical standby deal with it? What are the implications, and what is the catch? Thanks for any ideas.

    It is not possible with Physical Standby.
    With logical standby, you can achieve it by defining SKIP rules on Standby Database using DBMS_LOGSTDBY package.
    Read more about:
    DBMS_LOGSTDBY.SKIP procedure
    http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_lsbydb.htm#i997288
    DBMS_LOGSTDBY.SKIP_ERROR procedure
    http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_lsbydb.htm#i997648
    Be aware that you have to define skip rules for all objects you don't want to have DML and DDL transactions against them to be applied to the logical standby database.
    You have an option to define a schema level rule by using SCHEMA_DDL or SCHEMA_DML options.
    Example for schema level rule (taken from Oracle® Database PL/SQL Packages and Types Reference):
    SQL> EXECUTE DBMS_LOGSTDBY.SKIP(STMT => 'SCHEMA DDL', -
         schema_name => 'HR', -
         table_name => '%', -
         proc_name => null);
    SQL> EXECUTE DBMS_LOGSTDBY.SKIP(STMT => 'DML', -
         schema_name => 'HR', -
         table_name => '%', -
         proc_name => null);
    Also, you said that you are willing to convert your standby from physical to logical.
    Before doing that, you should be aware that with a logical standby you may have to deal with performance problems, since the way changes are applied to a logical standby is totally different from the way they are applied to a physical standby.
    The SQL Apply process uses LogMiner to mine the archive logs and creates a separate SQL statement for every row affected by a transaction.
    For instance, if your transaction updated 1000 rows, SQL Apply will create 1000 different UPDATE statements. Now imagine what may happen if your UPDATE statement uses a poor execution plan. Fortunately, you have the ability to create as many additional indexes as you need.
    Not all data types are supported with Logical Standby. So far these data types are not supported:
    - BFILE
    - Collections (including VARRAYS and nested tables)
    - Encrypted columns
    - Multimedia data types (including Spatial, Image, and Context)
    - ROWID, UROWID
    - User-defined types
    - XMLType
    So if you have, for instance, context indexes, or using XMLType, applying changes for these columns will not be straightforward. (There is some workaround though)
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/data_support.htm#CHDGFADJ
    You should also be aware that even though you have defined skip rules, they are defined on the logical standby side, so the changes made on the primary database against these tables are still "captured" and shipped to the standby.
    Bottom line is, you can achieve the goal of having transactions against some objects being not applied to Logical Standby, but you should be aware of the complexity, which comes with Logical Standby configuration.
    Also, my opinion is that if you want a reliable DR solution that protects your data, you should go with a physical standby. A logical standby is useful if you want users to run their reports and queries against it, which reduces the overhead on the primary database.
    Cheers,
    Message was edited by:
    tekicora

  • When using "Database diff" and selecting only other schemas to compare, own objects are shown too!

    Hi!
    For comparing a lot of objects I use a privileged account Z, which can read schemas A and B.
    In the compare dialog I set the following options, different from the defaults:
    - step 1
    - option "Maintain"
    - step 3
    - button "More"
    - select schema a
    - button "Lookup"
    - mark all objects and shuttle to the right
    - repeat for schema b
    After the diff report finishes, schema objects of account Z are listed too, even though I did not select them.
    Best regards
    Torsten

    Ah, you're using user Z to select objects from A & B?
    On step 2, do you have anything selected that you have not picked an object type for using the 'shuttle' in Step 3?
    For example, if you picked, 'procedures' and then in the wizard, didn't pick ANY procedures in A or B, it will by default use ALL of the procedures in Z for the compare.

  • Will deleting a column at logical schema delete the same at physical level by DDL Sync?

    Will deleting a column at logical schema delete the same at physical level by DDL Sync?

    Hi David,
    First of all, thanks for your quick response and for logging the enhancement request.
    I am testing more or less your suggestion, but I am not sure I understood exactly what you meant.
    1) I imported from the data dictionary into a new model, and in the options menu on the schema-select screen I unchecked partitions and triggers.
    I assumed the import would then not read partition information from the data dictionary, but the result is that the tables partitioned (by list, in this case) come out partitioned by range, without fields, in the physical model in SDDM.
    2) I select one of the tables, change it to a non-partitioned option, and propagate the option to the rest of the tables.
    3) I imported again from the data dictionary, but this time I included partitions in the options menu on the select-schema screen.
    In the tabular view on the compare-models screen I can select all the tables with a different partitioning option; I can also change to "list partitions" and select only the partitions that I want to import.
    So I have a solution for my problem; thanks a lot for your suggestion.
    I'm not sure the second step is needed, or whether I can avoid it with some configuration setting in one of the preferences screens.
    If not, I think the option to exclude partitions on the select-schema screen is not so clear, at least to me.
    Please, could you confirm whether there is a way to avoid the second step, or whether I misunderstood this option?
    thanks in advance

  • Error in schema validation in XI. Please help!

    Hi Experts,
    I have a FILE-to-proxy scenario. In the sender agreement I have selected schema validation by Adapter (the other option is by Integration Engine). Now when the file is processed, I get an error in the sender file communication channel monitoring:
    Error: com.sap.engine.interfaces.messaging.api.exception.MessageFormatException: Schema DataIn.xsd not found in D:\usr\XXXXXXXXX\validation\schema\31f7c36098e411deadcae4d30a13c639\httptraining-00EIM_POC\DataIn_Async_Out\httptraining-00EIM_POC\DataIn.xsd  (validation\schema)
    Do I need to keep the schema file in some server location?  Do I need to setup this path anywhere? Where do I need to place the schema file?
    What is the difference between validation by adapter and by integration engine?
    Please help!
    Thanks
    Gopal

    >What is the difference between validation by adapter and by integration engine?
    Both are the same. One runs in the Java engine, the other in the ABAP engine. If you set it in one place, that is good enough. In 7.1+ this becomes one of the pipeline steps.
    >Do I need to keep the schema file in some server location? Do I need to set up this path anywhere? Where do I need to place the schema file?
    Yes, you have to place the schema in the server location.
    Refer to Rajasekar's document on placing this file in the server location.

  • Couldn't export schema because of invalid argument value and other errors.

    Hello, All.
    I am using an Oracle 10g database, running on a Red Hat 9 Linux server, to manage our lab information.
    I plan to export a schema (only the tables, not the real data) into a file.
    First, I used Enterprise Manager to export. I went through and completed the export job. After submitting, the system says:
    There is a problem writing to the export files: ORA-39094: Parallel execution not supported in this database edition..
    So I just copied the PL/SQL code that would be sent to the source database and commented out the set_parallel procedure.
    The code is listed here:
    declare
      h1 NUMBER;
    begin
      begin
        h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'EXPORT000468', version => 'COMPATIBLE');
      end;
      --begin
      --dbms_datapump.set_parallel(handle => h1, degree => 1);
      --end;
      begin
        dbms_datapump.add_file(handle => h1, filename => 'EXPDAT.LOG', directory => 'DATA_FILE_DIR', filetype => 3);
      end;
      begin
        dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
      end;
      begin
        dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''FLOWLIMS'')');
      end;
      begin
        dbms_datapump.set_parameter(handle => h1, name => 'ESTIMATE', value => 'BLOCKS');
      end;
      begin
        dbms_datapump.add_file(handle => h1, filename => 'EXPDAT%U.DMP', directory => 'DATA_FILE_DIR', filetype => 1);
      end;
      begin
        dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
      end;
      begin
        dbms_datapump.data_filter(handle => h1, name => 'INCLUDE_ROWS', value => 0);
      end;
      begin
        dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
      end;
      begin
        dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
      end;
      begin
        dbms_datapump.detach(handle => h1);
      end;
    end;
    I pasted the code into a SQL*Plus session and executed it. The system says:
    declare
    ERROR at line 1:
    ORA-39001: invalid argument value
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 2486
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 2718
    ORA-06512: at line 23
    I am not very sure which part is wrong, for I just successfully exported another schema using this method.
    Any advice is highly appreciated!
    Qian

    Well, I will list more details about how to complete the job here.
    1) Go to Enterprise Manager and log in as FLOWLIMS (I just want to export the schema FLOWLIMS)
    2) Go to "Maintenance"
    3) Under "Utilities", select "Export to Files"
    4) On the next page (Export: Export Type), select "Schemas"
    5) On the next page, select the schema "FLOWLIMS"
    6) On the next page, select these parameters:
    Maximum Number of Threads in Export Job: 1
    Estimate Disk Space: Blocks
    7) When I click "Estimate Disk Space Now", it says:
    Export Estimate Failed
    There is a problem writing to the export files: ORA-39094: Parallel execution not supported in this database edition..
    8) So I gave up estimating
    9) Other options:
    I select "Generate Log file"
    The Directory Object is the default value "DATA_FILE_DIR"
    The Log File is the default value "EXPDAT.LOG"
    10) The advanced options are like these:
    Content: What to export from the Source Database: "Metadata only"
    Export content: "include all objects"
    Flashback: select "As the specified System Change Number (SCN)"
    SCN: just accept the default number 28901412
    Query: select nothing. I need all fields of all tables.
    11) On the next page, I accept the default directory object "DATA_FILE_DIR" and the default File Name "EXPDAT%U.DMP". The Maximum File Size is blank; I just leave it blank.
    12) On the next page, "Schedule", I select to start the job immediately.
    13) On the next page, "Review", it shows:
    Export Type          Schemas
    Statistics type          Estimate optimizer statistics when data is imported
    Parallelism          1
    Files to Export          DATA_FILE_DIR EXPDAT%U.DMP
    Log File          DATA_FILE_DIR EXPDAT.LOG
    and the PL/SQL is like that:
    declare
      h1 NUMBER;
    begin
      begin
        h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'EXPORT000487', version => 'COMPATIBLE');
      end;
      begin
        dbms_datapump.set_parallel(handle => h1, degree => 1);
      end;
      begin
        dbms_datapump.add_file(handle => h1, filename => 'EXPDAT.LOG', directory => 'DATA_FILE_DIR', filetype => 3);
      end;
      begin
        dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
      end;
      begin
        dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''FLOWLIMS'')');
      end;
      begin
        dbms_datapump.set_parameter(handle => h1, name => 'ESTIMATE', value => 'BLOCKS');
      end;
      begin
        dbms_datapump.add_file(handle => h1, filename => 'EXPDAT%U.DMP', directory => 'DATA_FILE_DIR', filetype => 1);
      end;
      begin
        dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
      end;
      begin
        dbms_datapump.data_filter(handle => h1, name => 'INCLUDE_ROWS', value => 0);
      end;
      begin
        dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
      end;
      begin
        dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
      end;
      begin
        dbms_datapump.detach(handle => h1);
      end;
    end;
    14) After I click "Submit", it shows:
    Export Submit Failed
    There is a problem writing to the export files: ORA-39094: Parallel execution not supported in this database edition..
    15) I copied the PL/SQL and commented out this part:
    --begin
    --dbms_datapump.set_parallel(handle => h1, degree => 1);
    --end;
    I ran it in iSQL*Plus Release 10.1.0.2, and it shows:
    declare
    ERROR at line 1:
    ORA-39001: invalid argument value
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 2486
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 2718
    ORA-06512: at line 23
    Could anybody help? Thanks a lot!
    Qian

  • Strange error message when importing one schema into another

    Hello,
    using Oracle Workshop for WebLogic 10.3, I am trying to define a new schema A which imports the namespace of an already existing schema B. The schema validator states that schema B is valid.
    However, when I try to add an import directive in the design view and select schema B as the schema location, I get the error message that B.xsd is an invalid schema file. When I manually add the import directive in the source view, I can refer to all elements of the namespace defined in schema B. Unfortunately, schema A is now marked as invalid with the error message "The dependency is not configure in schema resource. Possible reason one or more import/include is not set correctly."
    Even external schema validation tools do not find any validation errors in schema B.
    Any ideas?
    Kind regards
    Alexander

    >The dependency is not configure in schema resource. Possible reason one or more import/include is not set correctly.
    http://forums.oracle.com/forums/thread.jspa?threadID=1048184&tstart=0
    Regards,
    Anuj

  • Using a schema from a different Logical Data Service

    Is it possible to use a schema from a different Logical Data Service in a new Logical Data Service? I want to maintain just one version of the schema.

    >when I try and associate a schema type the 'Select Schema Type' dialog appears. On there the scope is set to project. Next to it is 'All projects', is there a way I can get that ungreyed?
    I don't think so. I assume that is a remnant from ALDSP 2.x, when we had the notion of an ALDSP application that contained multiple ALDSP projects and there was visibility across projects (actually just folders) in the same application.
    >can I use a schema associated with another dataspace project
    No. The contents of a dataspace are not accessible from outside the dataspace other than what is provided by the APIs. And although the schemas could be accessed via catalogservices, this isn't a standard protocol (like http://) and therefore schema processors won't be able to use them.
    You could host your schemas in a web application and expose them via http. Create a Dynamic Web Project and put your schemas where the content gets served from (i.e. alongside index.html); then they would be accessible via http://<yourserver>:<port>/yourWebApp/yourSchema.xsd. All the schemaLocations in the imports would need to be relative. It would probably be easier to create your schema-hosting application first, then build your data services.

  • Schemas stored in BiztalkMgmt DB table

    Hello,
    In the BizTalk Admin Console, I can go to any schema, double-click, and select Schema View. This shows me the actual schema.
    Any idea which BizTalk Management DB table the schema view is stored in?
    My original thought was that this would come from the GACed assembly?
    please let me  know
    regr,
    BM

    Hi,
    The full schema content is present in the bt_XMLShare table of BizTalkMgmtDB, along with the message type and schema ID.
    Refer: BizTalkMgmtDB:
    All Table Details
    Rachit

  • Not able to select physical schema directory for file data server in ODI 11g

    Hi,
    I am a beginner with ODI and am stuck on an error while doing a tutorial (mentioned in this link - http://st-curriculum.oracle.com/obe/fmw/odi/odi_11g/ODIproject_ff-to-ff/ODIproject_flatfile-to-flatfile.htm).
    While creating a physical schema for the default file server (FILE_GENERIC), I am not able to select schema directories, and the Name field with the value 'FILE_GENERIC.Directory' is grayed out (not editable).
    I have gone through many documents but could not find any relevant information about this.
    So could you please let me know whether any configuration is required for this?
    Thanks,
    Anusha

    Hi Oleg,
    Thanks for your reply.
    While creating a physical schema, the Name field is grayed out - is that the default behaviour of the screen? Because in the tutorial I could see the Name field pointing to a file directory path.
    Thanks,
    Anusha
