Blocking DDL Queries Across Schemas

I am experiencing a problem where DDL statements executed on two different schemas seem to be blocking each other, and hence taking longer to execute. When they are not executed in parallel, they complete within the expected time.
What should I research or look into?
P.S. - We are using a 64-bit Oracle 11gR2 Standard Edition RAC server.
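As a first diagnostic step, a minimal sketch of a blocking-session check: querying GV$SESSION (the cluster-wide view, appropriate on RAC) shows which session is waiting on which, and on what wait event. The columns used here are standard in 11gR2.

```sql
-- Identify blocked sessions and their blockers across all RAC instances.
SELECT inst_id,
       sid,
       serial#,
       username,
       event,               -- wait event, e.g. "library cache lock"
       blocking_instance,
       blocking_session,
       seconds_in_wait
FROM   gv$session
WHERE  blocking_session IS NOT NULL;
```

DDL against different schemas can still serialize on shared resources such as library cache or row cache locks, so the wait event reported here is the key clue to follow up on.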

You are saying that DDL statements executed in parallel are blocking each other. So, as Sb has said, post your DDL statements, how and when you are running them, and what error you are getting; in other words, provide as much information as you can (none of us here can see your monitor, what exactly you are doing, or what exactly Oracle is saying). There are many ready-made SQL scripts available on this forum and via Google; search for them, post your own effort at a solution, and then we will try our best to help you.
Not something like: "Doctor (forum members), today I am not feeling well (DDL queries are blocking each other), please give me some medicine (some SQL to solve the question)."
Regards
Girish Sharma

Similar Messages

  • Few queries across the BSEG & REGUP tables.

    Hi all.
    I'm hoping that you might be able to assist. I have a dataset with a few tables from a SAP system, and I have been asked to run a few queries across the BSEG & REGUP tables.
    I have tried using EBELN, BELNR, AUFNR and REBZG to join the two tables, but unfortunately these fields are not consistently populated. Are there any other fields common to both tables that should be populated by default?
    Any assistance that could be provided would be appreciated.

    Hi Ashutosh Kumar
    You can join both table with
                              BUKRS - Company Code
                              BELNR - Accounting Docu. No.
                              GJAHR - Fiscal Year.
                              BSCHL - Posting Key.
                              SAKNR - G/L account number.
    Reward points if helpful.
    Thanks
    Pari

  • FK across schemas in Data Modeling

    Hello,
    how can I create a foreign key across schemas in SQL Developer Data Modeling? Must both tables be on the same relational model? Otherwise I cannot create the relation because both tables must be displayed on the same diagram in order to draw a FK relation between them. Or is there another way? If not, then this is very inconvenient as I must include all the tables from different schemas in one relational model.
    TIA,
    Peter

    Peter,
    Yes you need all the tables in one and the same relational diagram.
    However you can use the SUBVIEW facility.
    When you import one or more schemas from a catalog one global relational model will be created and one subview per subschema. You can create the foreign key on the main relational diagram or you can drag and drop the table you want the foreign key with on the subview and create the relation there. A subview and a main view are always kept in sync.
    When you have created your own main relational diagram you can create one or more subviews yourself. To populate a subview you can drag and drop tables to it or you can use the "select neighbors" facility (click right on table) and then use "create subview from selected". You can also do a multiple select on the diagram (hold shift key) and create a subview from selected.
    Kind regards,
    René De Vleeschauwer
    SQL Developer Data Modeling team

  • Is it possible to fire DDL queries from within a trigger?

    Hi,
    In Oracle 8.0.5, Is it possible to fire DDL queries from within a trigger?
    e.g. when no. of rows is multiple of 1000, I wish to call a procedure which
    analyzes the table and finds the size of the table. I am getting :
    "ORA-04092: cannot COMMIT in a trigger" error message.
    Regards
    Sanchayan

    One way around this is to use the AUTONOMOUS_TRANSACTION pragma, but I think that was introduced in 8i; check your docs. Another way is to set up a procedure to do the ANALYZE and, in your trigger, submit a job using DBMS_JOB to run that procedure.
    In any case, the second option is best, because you really don't want to impose the overhead of running an ANALYZE onto a DML transaction.
    You are also going to run into the mutating table problem: you can't find out how big the table is, because you can't run queries against the table from within a trigger. You could use LAST_ANALYZED from USER_TABLES, but really you should just schedule jobs to run ANALYZE at regular intervals - once a day, once a week, once a month - depending on the volatility and usage of any given table.
    Cheers, APC
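    A minimal sketch of the DBMS_JOB approach described above. The table and procedure names are hypothetical; DBMS_JOB.SUBMIT does not commit by itself, so it is legal inside a trigger, and the queued job only becomes visible when the triggering transaction commits.

    ```sql
    -- Defer the ANALYZE to a background job instead of running it
    -- inside the DML transaction.
    CREATE OR REPLACE TRIGGER trg_defer_analyze
    AFTER INSERT ON my_table            -- hypothetical table name
    DECLARE
      l_job BINARY_INTEGER;
    BEGIN
      DBMS_JOB.SUBMIT(
        job  => l_job,
        what => 'BEGIN analyze_my_table; END;'   -- hypothetical procedure
      );
    END;
    /
    ```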

  • Need the find the all DDLs from the schema of 1 year old

    Hi,
    Can you please provide the information for the following databases, schemas within the given timeframe:
    - LMS_MX / BATCH_MX / JMS_MX
    – DWH_MX
    Start: 01.01.2013
    End: 31.12.2013
    We need all DDL from the schemas above.
    Thanks,
    Can anybody help me out? How do I find such details?

    1978276 wrote:
    I don't know how to use audit logs. Can anybody guide me to find the details for one year?
    Do you have auditing enabled? The default does not audit what you want to query. So unless you had specific auditing enabled, and kept the audit logs for that year, you are out of luck.
    Thus, the details are not available, because you did not instruct Oracle to record them.
    Not an Oracle problem. Your problem.
    And as rp0428 said, this begs the question: just WHAT is your problem?
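    For future reference, a hedged sketch of how DDL auditing could be enabled and queried going forward (it cannot recover history that was never recorded). This assumes the AUDIT_TRAIL initialization parameter is set to DB or DB,EXTENDED; the schema names follow the post above.

    ```sql
    -- Record future DDL of these statement classes (requires
    -- audit_trail=db,extended in the spfile, then a restart).
    AUDIT TABLE, ALTER TABLE, INDEX BY ACCESS;

    -- Later, query what was captured for a given period:
    SELECT username, owner, obj_name, action_name, timestamp
    FROM   dba_audit_trail
    WHERE  owner IN ('LMS_MX', 'BATCH_MX', 'JMS_MX', 'DWH_MX')
    AND    timestamp BETWEEN DATE '2013-01-01' AND DATE '2013-12-31';
    ```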

  • PL Package to sync ddl between two schemas

    About a year or so ago I came across a website (can't remember) that indicated that in 11g there is a package that would automatically update a target schema from a source schema to make it structurally the same (ddl). I've been unable to find such a package and want to know if anyone else out there knows if such a package exists. We are writing some software that would push the ddl changes made in a dev schema to test and then prod without the need to maintain scripts. We would want it to work similar to the way jDeveloper works with database changes made to a model. We could write it from scratch, but didn't want to go that route if there was something available from oracle that we could leverage.
    Thanks in advance for any insight.

    Never used it but probably you are talking about DBMS_COMPARISON?!
    The aforementioned document further links you to Comparing and Converging Data.
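    Note that DBMS_COMPARISON compares and converges *data*. For the structural (DDL) side the poster describes, 11g also ships DBMS_METADATA_DIFF, which may be closer; a hedged sketch (schema and table names are assumptions, and the package may require separate licensing):

    ```sql
    -- Generate the ALTER statements needed to make TEST's copy of EMP
    -- structurally match DEV's copy.
    SELECT DBMS_METADATA_DIFF.COMPARE_ALTER(
             object_type => 'TABLE',
             name1       => 'EMP',
             name2       => 'EMP',
             schema1     => 'DEV',    -- source schema (assumption)
             schema2     => 'TEST'    -- target schema (assumption)
           )
    FROM dual;
    ```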

  • Export DDL for current schema (feature request "Creation of Object script")

    Hello Raptor-Team,
    to generate a DDL script for my current schema, I tried your suggestion on SQL Developer Exchange to click on "other users" and then select my user. Wait a minute... where is my current user in this menu? Am I missing something? ;-)
    As this technique seems to make sense only with DBA users exporting other schemas, I'll log a new request for the often-desired one-button-current-schema-DDL-generator...
    Best regards,
    --==/ Uwe \==--

    Nope - you're not missing something. I've logged it. Sorry. I know suggesting connecting as SYSTEM and then selecting other users is not what you are after, but if you have access to SYSTEM, that'd be a solution. Alternatively, you just have to expand each node type and select the top-level category.
    Sue

  • Spatial Query across schemas. one version enabled table another not -Hanged

    Hi,
    I am executing a PL/SQL procedure in which a spatial query runs across two schemas. One table (in schema X) is version-enabled and the second table (in schema Y) is unversioned. To add to that complexity, I am running the procedure from a third user's logon. I think I have enough privileges, as I don't get any error message.
    The procedure worked fine when no table was version-enabled. It started giving problems when one table was version-enabled.
    I have tried setting DBMS_WM.GotoWorkspace('LIVE') before running the spatial query, but still no luck; the process just hangs on the spatial query.
    I tried using the physical name of the table (table1_LT), which makes it work. But, as per the Workspace Manager guide, applications and programs should NOT use these physical tables (it is not the correct way to query a versioned table).
    1. How can I hint to my query to use the table from only the LIVE version?
    2. Why is the query hanging forever (I even tried leaving it overnight)? Normally it used to take one or two minutes (before versioning).
    I have posted this in the Workspace Manager forum, but no luck (people seem to shy away after seeing "spatial query").
    Any help is highly appreciated.

    Hi,
    I will need to know more details about the specific query you are performing. So, please do the following:
    1. list the actual query that you are using
    2. generate an explain plan of the query both before and after the table was version enabled. use @?/rdbms/admin/utlxpls or anything that generates the predicate information.
    3. also, give any pertinent details about the table(size of the table, number of rows expected to be returned, column types in the table, etc).
    Based on that, I will see if I can suggest a possible hint that may be able to improve the performance of your query.
    Regards,
    Ben
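    Step 2 above can be done with EXPLAIN PLAN plus DBMS_XPLAN, which includes the predicate section Ben asks for; a minimal sketch with placeholder table names:

    ```sql
    -- Capture the plan for the statement, then display it
    -- (DBMS_XPLAN.DISPLAY includes predicate information by default).
    EXPLAIN PLAN FOR
      SELECT /* the actual spatial query goes here */ *
      FROM   x.versioned_table v, y.plain_table p   -- hypothetical names
      WHERE  v.id = p.id;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    ```

    Running this once against the pre-versioning plan (or a test copy) and once against the current state makes the before/after comparison concrete.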

  • Block corruption in sys schema

    hi,
    I have a table in the SYS schema (sorry for creating a table in the SYS schema), and now it reports one corrupted block when selecting from it. I don't have a backup of the table. Is there any option to repair it without shutting down the database? (I cannot use RMAN to do a BLOCKRECOVER.)
    thanks
    alimajaf

    You haven't given many details about your environment or the corruption in question.
    Did you try DBMS_REPAIR?
    Regards
    Venkat
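    A hedged sketch of the DBMS_REPAIR workflow Venkat mentions. The procedure and parameter names are from the standard package; the table name is hypothetical. Note that this marks corrupt blocks so they can be skipped - the rows in those blocks are lost, not recovered.

    ```sql
    -- One-time setup: create the repair tracking table
    -- (the name must begin with REPAIR_).
    BEGIN
      DBMS_REPAIR.ADMIN_TABLES(
        table_name => 'REPAIR_TABLE',
        table_type => DBMS_REPAIR.REPAIR_TABLE,
        action     => DBMS_REPAIR.CREATE_ACTION);
    END;
    /

    -- Find corrupt blocks, mark them, and let SELECTs skip them.
    DECLARE
      l_count PLS_INTEGER;
    BEGIN
      DBMS_REPAIR.CHECK_OBJECT(
        schema_name       => 'SYS',
        object_name       => 'MY_BAD_TABLE',   -- hypothetical table name
        repair_table_name => 'REPAIR_TABLE',
        corrupt_count     => l_count);
      DBMS_REPAIR.FIX_CORRUPT_BLOCKS(
        schema_name       => 'SYS',
        object_name       => 'MY_BAD_TABLE',
        repair_table_name => 'REPAIR_TABLE',
        fix_count         => l_count);
      DBMS_REPAIR.SKIP_CORRUPT_BLOCKS(
        schema_name => 'SYS',
        object_name => 'MY_BAD_TABLE');
    END;
    /
    ```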

  • Call procedure across schema

    I have Oracle 8i.
    I have multiple identical schemas for different countries.
    I wrote one procedure in one schema (country_1) and granted execute on it to all the others.
    Now I am connected to country_2, which doesn't have the procedure, and I am calling the procedure of country_1 (which does have it).
    Up to this point everything is fine, but the problem is that I want to execute the procedure against the currently connected schema (country_2), and instead it executes against country_1 (where the actual procedure is).
    Thanks,
    prathesh
    ([email protected])

    You need to add AUTHID CURRENT_USER to your create procedure command.
    From the documentation
    invoker_rights_clause
    The invoker_rights_clause lets you specify whether the procedure executes with the privileges and in the schema of the user who owns it or with the privileges and in the schema of CURRENT_USER.
    This clause also determines how Oracle resolves external names in queries, DML operations, and dynamic SQL statements in the procedure.
    TTFN
    John
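    A minimal sketch of what John describes, with hypothetical procedure and table names. With AUTHID CURRENT_USER, unqualified names resolve in the caller's schema, so each country schema operates on its own tables:

    ```sql
    -- Owned by COUNTRY_1, but runs with the caller's rights and
    -- the caller's name resolution (invoker rights).
    CREATE OR REPLACE PROCEDURE refresh_totals
    AUTHID CURRENT_USER
    AS
    BEGIN
      -- Unqualified name: resolves to COUNTRY_2.SALES when COUNTRY_2 calls it.
      UPDATE sales SET processed = 'Y' WHERE processed = 'N';
    END;
    /
    GRANT EXECUTE ON refresh_totals TO country_2;
    ```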

  • How to find invalid views across schemas

    I just started on a project that, well let's put it this way, has a messy database. It has multiple schemas and contains many views, synonyms and database links.
    I want to add a column to a table, but I want to make sure
    this doesn't invalidate any views somewhere in the database.
    Is there a way to check for invalid views across multiple schemas? Or better yet, how to find out beforehand what views in what schemas look at the table?
    Thanks,
    Tim

    To find out where the table is used, you can select:
    select owner, name, type
    from all_dependencies
    where referenced_owner = (select user from dual)
    and referenced_name = 'TABLENAME'
    and referenced_type = 'TABLE';
    It not only shows views, but also shows whether the table is used inside a trigger, and so on.
    To find out which objects are invalid:
    select owner, object_name, object_type
    from all_objects
    where status = 'INVALID';
    regards
    Anna

  • Track DDL statements for schema.

    Dear all,
    I want to track all DDL statements being performed on a particular schema.
    I want to capture the entire SQL statement being run.
    How can I achieve this?
    Please advise.
    Waiting for your reply.
    Regards,

    Hi all,
    I have used this trigger to track DDL:
    create table perfstat.stats$ddl_log (
      user_name   varchar2(30),
      ddl_date    date,
      ddl_type    varchar2(30),
      object_type varchar2(18),
      owner       varchar2(30),
      object_name varchar2(128)
    );
    create or replace trigger ddl_trigger
    after ddl on database
    begin
      insert into perfstat.stats$ddl_log (
        user_name,
        ddl_date,
        ddl_type,
        object_type,
        owner,
        object_name
      ) values (
        ora_login_user,
        sysdate,
        ora_sysevent,
        ora_dict_obj_type,
        ora_dict_obj_owner,
        ora_dict_obj_name
      );
    end;
    /
    It's working fine, but I want to capture the entire SQL statement into the stats$ddl_log table.
    What should I modify in the table or trigger so that I can achieve this?
    Waiting for your reply.
    Thanks and regards...
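    A hedged sketch of the usual answer: inside a DDL trigger, the ORA_SQL_TXT event attribute function returns the statement text in chunks, which can be concatenated into a CLOB column. Column and object names follow the post above; the trigger body is otherwise unchanged.

    ```sql
    -- Add a column for the statement text, then rebuild the trigger.
    alter table perfstat.stats$ddl_log add (sql_text clob);

    create or replace trigger ddl_trigger
    after ddl on database
    declare
      l_sql   ora_name_list_t;        -- collection of 64-byte chunks
      l_text  clob;
      l_lines pls_integer;
    begin
      l_lines := ora_sql_txt(l_sql);  -- returns the number of chunks
      for i in 1 .. l_lines loop
        l_text := l_text || l_sql(i);
      end loop;
      insert into perfstat.stats$ddl_log (
        user_name, ddl_date, ddl_type, object_type, owner, object_name, sql_text
      ) values (
        ora_login_user, sysdate, ora_sysevent,
        ora_dict_obj_type, ora_dict_obj_owner, ora_dict_obj_name, l_text
      );
    end;
    /
    ```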

  • My iPad2 screen has 3 blocks of lines across the screen

    Hi,
    Has anyone else encountered this? The blocks of lines appear to be aligned with the spaces between the smart cover and go vertically across my screen. I am using a ZAGG invisishield cover and have wiped it clean. It's most visible when the screen is white.

    Simon-
    My first thought is that the invisishield cover has reacted to the pressure of your smart cover.  In other words, the lines may not be in the iPad, but between the invisishield and the iPad.
    I'm not familiar with the invisishield.  If it can be removed without ruining it, you might see if that makes the lines go away.
    If the lines are actually in the iPad, it may be a defect.  If convenient, show it to an Apple Store Genius.  If not, look for an Apple Authorized Repair Center near you.
    Fred

  • Table does not exist when creating FK Constraint across schemas

    Hi all,
    This will probably boil down to a permissions issue since I'm sketchy on the various levels....
    I'm testing a conversion to Oracle from our legacy system. There are 4 schemas which I've created and each of those schema users have been granted DBA roles.
    After creating a number of tables, I wrote the SQL to create the FK constraints. Most of them went in, but the ones crossing schemas didn't. Logged in as SYS, I can do a SELECT from each table; I can even JOIN the two in the SELECT. However, when I try creating the constraint it gives me: ORA-00942: table or view does not exist
    ALTER TABLE USERA.TABLEA ADD FOREIGN KEY (COLA) REFERENCES USERB.TABLEB (COLA) ON DELETE CASCADE
    Again, I have scads of commands that went in correctly, so this must be a permissions-type thing. I'm the only one logged into the database, since it's my own test system. This is 10g, BTW.
    If you have any suggestions as what to look into, please explain how to actually perform the checks since I'm still learning how to get around.
    Thanks very much!

    The owner of the referencing table (USERA here) needs the REFERENCES privilege on the referenced table; for this particular check, a DBA role is not enough. To bulk grant, you can use dynamic SQL; something like this:
    declare
      l_grantor VARCHAR2(30) := 'USERB';
      l_grantee VARCHAR2(30) := 'USERA';
    begin
      for table_rec in (select owner, table_name from all_tables where owner = l_grantor) loop
        execute immediate 'GRANT REFERENCES ON '||table_rec.owner||'.'||table_rec.table_name||' TO '||l_grantee;
      end loop;
    end;
    /

  • Warehouse partitioning - performance of queries across multiple partitions?

    Hi,
    We are using Oracle 11.2.0.3 and have a large central fact table with several surrogate ids which have bitmap indexes on them and have fks looking at dimension tables + several measures
    (PRODUCT_ID,
    CUSTOMER_ID,
    DAY_ID,
    TRANS_TYPE_ID,
    REGION_ID,
    QTY
    VALUE)
    We have 2 distinct sets of queries users run for the most part: ones accessing all transactions for products regardless of when those transactions happened (i.e. non-financial queries) - about 70% - and queries determining what happened in a particular week - about 20% of queries.
    The table will eventually have approx 4bn rows.
    We are considering adding an extra DATE column and range-partitioning on it, to allow us to drop old partitions every year - however, this column wouldn't be joined to any other table.
    We are then considering sub-partitioning by hash of PRODUCT_ID, which is the surrogate key for the product dimension.
    Thoughts on performance?
    Queries by their nature would hit several sub-partitions.
    Thoughts on the performance of queries which access several sub-partitions/partitions versus queries running against a single table?
    Any other thoughts on a partitioning strategy for our situation are much appreciated.
    Thanks

    >
    Thoughts on the performance of queries which access several sub-partitions/partitions versus queries running against a single table?
    >
    Queries that access multiple partitions can improve performance for two use cases: 1) only a subset of the entire table is needed and 2) if the access is done in parallel.
    Even if 9 of 10 partitions are needed that can still be better than scanning a single table containing all of the data. And when there is a logical partitioning key (transaction date) that matches typical query predicate conditions then you can get guaranteed benefits by limiting a query to only 1 (or a small number) partition when an index on a single table might not get used at all.
    Conversely, if all table data is needed (perhaps there is no good partition key) and parallel option is not available then I wouldn't expect any performance benefit for either single table or partitioning.
    You don't mention if you have licensed the parallel option.
    >
    Any other thoughts on a partitioning strategy for our situation are much appreciated.
    >
    You provide some confusing information. On the one hand you say that 70% of your queries are
    >
    ones accessing all transactions for products regardless of when those transactions happened
    >
    But then you add that you are
    >
    Considering adding an extra DATE column and range-partitioning on it, to allow us to drop old partitions every year
    >
    How can you drop old partitions every year if 70% of the queries need product data 'regardless of the time those transactions happened'?
    What is the actual 'datetime' requirement'? And what is your definition of 'a particular week'? Does a week cross Month and Year boundaries? Does the requirement include MONTHLY, QUARTERLY or ANNUAL reporting?
    Those 'boundary' requirements (and the online/offline need) are critical inputs to the best partitioning strategy. A MONTHLY partitioning strategy means that for some weeks two partitions are needed. A weekly partitioning strategy means that for some months two partitions are needed. Which queries are run more frequently weekly or monthly?
    Why did you mention sub-partitioning? What benefit do you expect, or what problem are you trying to address? And why hash? Hash partitioning only allows pruning for equality or IN-list predicates on the hash key; for range predicates Oracle cannot prune, so ALL hash sub-partitions will be scanned.
    The biggest performance benefit of partitioning is when the partition keys used have a high correspondence with the filter predicates used in the queries that you run.
    Contrarily the biggest management benefit of partitioning is when you can use interval partitioning to automate the creation of new partitions (and subpartitions if used) based solely on the data.
    The other big consideration for partitioning, for both performance and management, is the use of global versus local indexes. With global indexes (e.g. a global primary key) you can't just drop a partition in isolation; the global primary key needs to be maintained by deleting the corresponding index entries.
    On the other hand if your partition key includes the primary key column(s) then you can use a local index for the primary key. Then partition maintenance (drop, exchange) is very efficient.
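    A hedged sketch of the layout discussed above, using the fact-table columns from the question plus the proposed DATE column (table and index names are hypothetical). Interval partitioning automates monthly partition creation, and the local index works because TRANS_DATE is among its columns:

    ```sql
    -- Monthly interval partitions on the new DATE column, hash sub-partitions
    -- on the product surrogate key; old months can be dropped independently.
    CREATE TABLE sales_fact (
      trans_date    DATE    NOT NULL,
      product_id    NUMBER  NOT NULL,
      customer_id   NUMBER  NOT NULL,
      day_id        NUMBER,
      trans_type_id NUMBER,
      region_id     NUMBER,
      qty           NUMBER,
      value         NUMBER
    )
    PARTITION BY RANGE (trans_date)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    SUBPARTITION BY HASH (product_id) SUBPARTITIONS 8
    (
      PARTITION p_initial VALUES LESS THAN (DATE '2013-01-01')
    );

    -- Local index: partition maintenance (DROP, EXCHANGE) stays cheap
    -- because the partition key is part of the indexed columns.
    CREATE INDEX sales_fact_dt_prod_ix
      ON sales_fact (trans_date, product_id) LOCAL;
    ```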
