Concerning parallelism at the table level (10g)!!

Hi, all.
I would like to know which Oracle dictionary view contains the information
about the parallelism of a table.
To be more specific, assuming that I issued the following statement,
--> ALTER TABLE xxx PARALLEL 35;
which Oracle dictionary view holds the parallelism information for a table?
I could not find a column for parallelism in DBA_TABLES.
Thanks in advance.
Best Regards.

Check column DEGREE in DBA_TABLES
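For example (owner and table name are placeholders):

    SELECT table_name, degree, instances
    FROM   dba_tables
    WHERE  owner = 'SCOTT'
    AND    table_name = 'XXX';

Note that DEGREE is a VARCHAR2 column, so it can also show 'DEFAULT' or a blank-padded number rather than a plain integer.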
- Amit
http://askoracledba.blogspot.com/

Similar Messages

  • How to get MR Document from Billing Document at the table level

    Hi All,
    I am looking to find the MR Document based on which the Billing Document was generated. I need to find it out at the table level.
    I looked into DBERCHZ2 table, but I can see a single billing document is linked to multiple Meter Reading Documents via different Line Items.
    Could you guys please help me? Am I seeing the right table? Is there any FM or ABAP program for the same?
    Thanks a lot in advance.
    Regards.

    Hi,
    The link from the billing document to the MR document is the installation.
    From ERCH-BELNR, fetch the contract (VERTRAG), and then from EVER-VERTRAG get the installation (ANLAGE).
    Get the MR document (ABLBELNR) from EABLG based on EABLG-ANLAGE.
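    A rough SQL sketch of that lookup path (field names as given above; in practice you would also restrict EABLG by client and by the billing period, since one installation has many MR documents):

        SELECT g.ablbelnr
          FROM erch  h
          JOIN ever  v ON v.vertrag = h.vertrag
          JOIN eablg g ON g.anlage  = v.anlage
         WHERE h.belnr = :billing_doc;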
    Thanks ,
    Sachin

  • How is a 3.x object migrated to 7.0 at the table level?

    Hi All,
    In the context menu of a 3.x object there is an option called "Migrate". When we click this option, the 3.x object is migrated to a 7.0 object.
    Can anyone please tell me how the system "migrates" at the table level? Which tables are affected? How do the "with export" and "w/o export" options work?
    Thanks in advance .
    Regards,
    Satish

    Yes.
    There are two tables (I don't remember their exact names). When you migrate, say, your DataSource from 3.x to 7.0, the entry is deleted from one table and inserted into the other.
    "With export" means you can migrate back to 3.x again; without export you cannot.
    thx=points
    nain

  • Understanding of the table level information from SAP HANA Studio

    Hello Gurus,
    Need some clarification on the following information provided by SAP HANA Studio.
    I have a table REGIONS and the contents are as follows:
    REGION_ID REGION_NAME
    11      Europe
    12      Americas
    13      Asia
    44      Middle East and Africa
    15      Australia
    6      Africa
    The Runtime Information about the table is as follows:
    Image# 1
    Image# 2
    Image# 3
    The total sizes in KB shown in Image# 2 and Image# 3 do not match. Why?
    The total size in KB shown in Image# 2 and the Total Memory Consumption do not match. Why?
    The Memory Consumption values for Main Storage and Delta Storage do not match between Image# 1 and Image# 2. Why?
    The Estimated Maximum Memory Consumption (Image# 1) and the Estimated Maximum Size (Image# 2) do match.
    Why is the Loaded column in Image# 2 showing the value ‘PARTIALLY’? The table has just 6 rows and is still only partially loaded. Why?
    What is the significance of the Loaded column in Image# 2 and 3?
    Thanks,
    Shirish.

    Have a look at this:
    Playing with SAP HANA
    That presentation should help you answer the questions yourself.
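    In the meantime, the runtime figures shown in the Studio come from the monitoring views, so you can cross-check them with a query like this (schema name is a placeholder; the column list assumes the M_CS_TABLES view for a column-store table):

        SELECT table_name, loaded, record_count,
               memory_size_in_total, memory_size_in_main, memory_size_in_delta,
               estimated_max_memory_size_in_total
          FROM m_cs_tables
         WHERE schema_name = 'MY_SCHEMA'
           AND table_name  = 'REGIONS';

    LOADED = 'PARTIALLY' typically means only some columns of the table are currently in memory; columns are loaded on first access, not row by row.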
    Regards,
    Krishna Tangudu

  • Need to know how SAP is posting a wrong entry at the table level

    Hello SAP Experts ,
    Good Morning Gems!!!
    I have a typical issue here: SAP is posting an entry into the wrong table.
    Production is currently processing everything that it should process. This works as designed.
                PO = 7800507594 (from Plant 5400 to Plant 5409 - only NL, no intercompany)
                Dlvry = 81243277 (has PGI but no Invoice, as it is the same company code)
    The issue is that for some reason, SAP has put an entry into table VKDFS by mistake. This table is used by SAP to determine the list in VF04. This is why VF04 thinks it should create an invoice, but it should not. So we are getting an error message that the invoice cannot be created, which is valid, and a good thing.
    Do we have OSS notes for 2 things:
                1) A note that mentions why it is determining the wrong CoCode or Sales Area which leads to the entry into this table
                            This is to stop it from adding entries incorrectly.
                2) A note that mentions how to fix the existing entries in this table that should not be there?
    Awarded Full Points for the Correct answer
    Thanks and Regards
    Adarsh Srivastava
    Supply Chain Consultant ,
    CSC INDIA

    Dear Friend,
    I guess your item category in the delivery document would be NLN.
    Go to VOV7 and under Business Data there is a field for Billing Relevance. This field should be blank. I mean, if there is any entry in this field (either A or J), then remove it and make it blank.
    Hope this helps...
    Thanks,
    Jignesh Metha

  • How to get the Table Level Constraints List

    hi all,
    i created a table as follows,
    create table temp(fld1 number, fld2 number, fld3 varchar2(10),
    constraint fld1_pk primary key (fld1),
    constraint fld2_uk unique (fld2) );
    The table was created successfully.
    Now I need to get the list of constraints (constraint names) into Java.
    I checked the USER_TAB_COLUMNS view, but there I only got NULLABLE.
    How can I get this with a query?
    regards
    pavan

    Yes!
    SQL> select constraint_name, generated from user_constraints
      2  where table_name = 'EMP'
      3  /
    CONSTRAINT_NAME                GENERATED
    SYS_C003996                    GENERATED NAME
    SYS_C003997                    GENERATED NAME
    EMP_PRIMARY_KEY                USER NAME
    EMP_SELF_KEY                   USER NAME
    EMP_FOREIGN_KEY                USER NAME
    5 rows selected.
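    Applied to the TEMP table from the question, the same view gives the names directly (dictionary values are stored in upper case):

        SELECT constraint_name, constraint_type
          FROM user_constraints
         WHERE table_name = 'TEMP';

    From Java you can simply run that query over JDBC, or use DatabaseMetaData.getPrimaryKeys / getIndexInfo if you prefer the metadata API.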
    Cheers
    Sarma.
    Message was edited by:
    Radhakrishna Sarma

  • How to apply a table-level break in the WebI Report

    Hi,
    How can we apply a table-level break in a WebI report?
    I know a bit, but I don't know whether it is right or wrong. Normally we create a table with two rows (one row for the name and, below it, one row for the object associated with it), then above that table we insert a "row above" for the naming row and a "row below" for the object values, and drag the first column of the first row until it is almost invisible (the naming row), and then do the same in the next row (the values associated with the names). Is this right? If not, please let me know the various ways to do it.
    Thank you in Advance....

    Hi,
    What is your requirement? If it is to show subtotals, then create one table, select one column and apply a break.
    Thanks,
    Amit

  • Queries run PARALLEL but no parallel parameters or table DOP set

    I've just taken over a new system and am seeing something that seems to me to be a bit unusual.
    Queries are running in parallel, but I've checked and all tables have a DOP of 1 and none of the user queries have a parallel hint. Yet I see queries running in parallel in OEM.
    The DB is 11.1.0.7 EE. It does have PARALLEL parameters set at the DB level, such as max servers and min servers.
    I showed this to another DBA and they hadn't seen it, either.
    The docs state that, for a query to run parallel, a DOP > 1 must be set at the table level or in the query with a hint. Of course, these days, the docs are frequently very wrong and out of date. If this were 11.2, I would think the new Automatic DOP was involved.
    Appreciate any ideas on how to investigate this.

    Well, it is certainly not unusual behavior. I have seen it once before, but I am not sure you have the same case; let's try. Can you tell whether your query is using any indexes, and what the output of this query is if an index is in use?
    select index_name, degree, instances from all_indexes where index_name = 'INDEXNAME';
    HTH
    Aman....
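    A broader check along the same lines (schema name is a placeholder; DEGREE is a VARCHAR2 and may be blank-padded or 'DEFAULT'), looking for any table or index that carries a DOP other than 1:

        SELECT owner, table_name, degree
          FROM dba_tables
         WHERE owner = 'APPSCHEMA'
           AND TRIM(degree) NOT IN ('0', '1');

        SELECT owner, index_name, degree, instances
          FROM dba_indexes
         WHERE owner = 'APPSCHEMA'
           AND TRIM(degree) NOT IN ('0', '1');

    A parallel degree on an index alone can be enough for the optimizer to pick a parallel plan, which is what the ALL_INDEXES query above is hinting at.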

  • Numeric getting multiplied by 10 at table level

    Hello Experts,
    1) When we save a PO, suppose we have qty 1 and net price = 80; it is saved in the PO as is, whereas at the table level it is multiplied by 10 and the value 800 is saved.
    2) Even in the material master accounting view, suppose the stock value is 120 and the MAP is, say, 6; in the MBEW table the values are stored as 1200 and 60.
    But in the standard reports the values are correct, whereas in the tables they are stored this way.
    This is a problem, since during FS preparation we will have to fetch the values from the tables.
    Kindly Suggest
    Mahesh

    Hi,
    Please check in the material master accounting view how many price units you have maintained: check whether the price is given per 1 price unit or per 10 / 100 price units.
    That will solve your problem.
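    For example, the stored values can be seen next to the price unit directly in MBEW (standard field names; the per-unit MAP is VERPR / PEINH and the per-unit standard price is STPRS / PEINH):

        SELECT matnr, bwkey, lbkum, salk3, vprsv, verpr, stprs, peinh
          FROM mbew
         WHERE matnr = :material
           AND bwkey = :valuation_area;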
    Thanks & Regards,
    Mani

  • How to generate test data for all the tables in oracle

    I am planning to use PL/SQL to generate test data in all the tables in a schema. The schema name, the minimum number of records in the master tables and the minimum number of records in the child tables are given as input parameters. The data should be consistent in the columns which are used for constraints, i.e. using the same column values.
    planning to implement something like
    execute sp_schema_data_gen (schemaname, minrecinmstrtbl, minrecsforchildtable);
    schemaname = owner,
    minrecinmstrtbl= minimum records to insert into each parent table,
    minrecsforchildtable = minimum records to enter into each child table of a each master table;
    all_tables where owner= schemaname;
    all_tab_columns and all_constraints - where owner = schemaname;
    using the dbms_random pkg.
    Does anyone have a better idea how to do this? Is this functionality already there in the Oracle DB?

    Ah, damorgan, data, test data, metadata and table-driven processes. Love the stuff!
    There are two approaches you can take with this. I'll mention both and then ask which
    one you think you would find most useful for your requirements.
    One approach I would call the generic bottom-up approach which is the one I think you
    are referring to.
    This system is a generic test data generator. It isn't designed to generate data for any
    particular existing table or application but is the general case solution.
    Building on damorgan's advice define the basic hierarchy: table collection, tables, data; so start at the data level.
    1. Identify/document the data types that you need to support. Start small (NUMBER, VARCHAR2, DATE) and add as you go along
    2. For each data type identify the functionality and attributes that you need. For instance for VARCHAR2
    a. min length - the minimum length to generate
    b. max length - the maximum length
    c. prefix - a prefix for the generated data; e.g. for an address field you might want a 'add1' prefix
    d. suffix - a suffix for the generated data; see prefix
    e. whether to generate NULLs
    3. For NUMBER you will probably want at least precision and scale but might want minimum and maximum values or even min/max precision,
    min/max scale.
    4. store the attribute combinations in Oracle tables
    5. build functionality for each data type that can create the range and type of data that you need. These functions should take parameters that can be used to control the attributes and the amount of data generated.
    6. At the table level you will need business rules that control how the different columns of the table relate to each other. For example, for ADDRESS information your business rule might be that ADDRESS1, CITY, STATE, ZIP are required and ADDRESS2 is optional.
    7. Add table-level processes, driven by the saved metadata, that can generate data at the record level by leveraging the data type functionality you have built previously.
    8. Then add the metadata, business rules and functionality to control the TABLE-TO-TABLE relationships; that is, the data model. You need the same DETPNO values in the SCOTT.EMP table that exist in the SCOTT.DEPT table.
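    A minimal sketch of the kind of data-type generator function meant in step 5, using DBMS_RANDOM (all names and parameters here are illustrative, not an existing package):

        CREATE OR REPLACE FUNCTION gen_varchar2 (
            p_min_len  IN PLS_INTEGER,
            p_max_len  IN PLS_INTEGER,
            p_prefix   IN VARCHAR2 DEFAULT NULL,
            p_null_pct IN NUMBER   DEFAULT 0
        ) RETURN VARCHAR2
        IS
            l_len PLS_INTEGER;
        BEGIN
            -- return NULL for roughly p_null_pct percent of the calls
            IF DBMS_RANDOM.VALUE(0, 100) < p_null_pct THEN
                RETURN NULL;
            END IF;
            -- random length between the min and max attributes
            l_len := TRUNC(DBMS_RANDOM.VALUE(p_min_len, p_max_len + 1));
            -- 'U' = random upper-case letters; see DBMS_RANDOM.STRING
            RETURN SUBSTR(p_prefix || DBMS_RANDOM.STRING('U', l_len), 1, p_max_len);
        END gen_varchar2;
        /

    The table-level process in step 7 then just calls one such function per column, driven by the attribute rows stored in step 4.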
    The second approach I have used more often. I would call it the top-down approach, and I use
    it when test data is needed for an existing system. The main use case here is to avoid
    having to copy production data to QA, TEST or DEV environments.
    QA people want to test with data that they are familiar with: names, companies, code values.
    I've found they aren't often fond of random character strings for names of things.
    The second approach I use for mature systems where there is already plenty of data to choose from.
    It involves selecting subsets of data from each of the existing tables and saving that data in a
    set of test tables. This data can then be used for regression testing and for automated unit testing of
    existing functionality and functionality that is being developed.
    QA can use data they are already familiar with and can test the application (GUI?) interface on that
    data to see if they get the expected changes.
    For each table to be tested (e.g. DEPT) I create two test system tables. A BEFORE table and an EXPECTED table.
    1. DEPT_TEST_BEFORE
         This table has all DEPT table columns and a TESTCASE column.
         It holds DEPT-image rows for each test case that show the row as it should look BEFORE the
         test for that test case is performed.
         CREATE TABLE DEPT_TEST_BEFORE (
           TESTCASE NUMBER,
           DEPTNO   NUMBER(2),
           DNAME    VARCHAR2(14 BYTE),
           LOC      VARCHAR2(13 BYTE)
         );
    2. DEPT_TEST_EXPECTED
         This table also has all DEPT table columns and a TESTCASE column.
         It holds DEPT-image rows for each test case that show the row as it should look AFTER the
         test for that test case is performed.
    Each of these tables is a mirror image of the actual application table with one new column
    added that contains a value representing the TESTCASE_NUMBER.
    To create test case #3 identify or create the DEPT records you want to use for test case #3.
    Insert these records into DEPT_TEST_BEFORE:
         INSERT INTO DEPT_TEST_BEFORE
         SELECT 3, D.* FROM DEPT D WHERE DEPTNO = 20;
    Insert records for test case #3 into DEPT_TEST_EXPECTED that show the rows as they should
    look after test #3 is run. For example, if test #3 creates one new record add all the
    records from the BEFORE data set and add a new one for the new record.
    When you want to run a test case (say test case #3) the process is basically (ignore for this illustration that
    there is a foreign key between DEPT and EMP):
    1. delete the records from SCOTT.DEPT that correspond to test case #3 DEPT records.
              DELETE FROM DEPT
              WHERE DEPTNO IN (SELECT DEPTNO FROM DEPT_TEST_BEFORE WHERE TESTCASE = 3);
    2. insert the test data set records for SCOTT.DEPT for test case #3.
              INSERT INTO DEPT
              SELECT DEPTNO, DNAME, LOC FROM DEPT_TEST_BEFORE WHERE TESTCASE = 3;
    3. perform the test.
    4. compare the actual results with the expected results.
         This is done by a function that compares the records in DEPT with the records
         in DEPT_TEST_EXPECTED for test #3.
         I usually store these results in yet another table or just report them out.
    5. Report out the differences.
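    A minimal sketch of the comparison in step 4, using MINUS in both directions (test case number is a placeholder; an empty result means the test passed):

        (SELECT deptno, dname, loc FROM dept
          MINUS
         SELECT deptno, dname, loc FROM dept_test_expected WHERE testcase = 3)
        UNION ALL
        (SELECT deptno, dname, loc FROM dept_test_expected WHERE testcase = 3
          MINUS
         SELECT deptno, dname, loc FROM dept);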
    This second approach uses data the users (QA) are already familiar with, is scaleable and
    is easy to add new data that meets business requirements.
    It is also easy to automatically generate the necessary tables and test setup/breakdown
    using a table-driven metadata approach. Adding a new test table is as easy as calling
    a stored procedure; the procedure can generate the DDL or create the actual tables needed
    for the BEFORE and AFTER snapshots.
    The main disadvantage is that existing data will almost never cover the corner cases.
    But you can add data for these. By corner cases I mean data that defines the limits
    for a data type: a VARCHAR2(30) name field should have at least one test record that
    has a name that is 30 characters long.
    Which of these approaches makes the most sense for you?

  • Encrypt Credit card data - table level

    Hi Team,
    We want to encrypt the credit card data; please let me know how to do this.
    We want to encrypt the data at the table level so that the specific column cannot be viewed by others, and also encrypt the column at the OS level.
    11i Version:
    Database: 10.2.0.5.0
    Apps: 11.5.10.2
    Thanks,

    Hi;
    1. Check what Shree has posted.
    2. If those notes do not help, you can try scrambling / data masking; see
    Re: How to prevent DBA from Seeing salary data
    3. If even that does not help, raise an SR ;)
    PS: Please don't forget to change the thread status to answered, if possible, when you believe your thread has been answered; otherwise other forum users lose time while searching for open, unanswered questions. Thanks for understanding.
    Regard
    Helios
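    For completeness, 10gR2 also offers TDE column encryption, which covers the "at the OS level" part. A minimal sketch (placeholder table/column names; it requires the Advanced Security Option and a configured wallet with a master key, and it is not the EBS-certified credit-card encryption flow):

        ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";
        ALTER TABLE xx_card_data MODIFY (card_number ENCRYPT USING 'AES256');

    TDE only protects the data on disk; anyone who can SELECT the column still sees clear text, so combine it with grants/views or masking for the "cannot be viewed by others" requirement.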

  • Table level supplemental logging

    How is table level supplemental logging different from Database level supplemental logging? Is Database level supplemental logging required for enabling table level supplemental logging?
    I have done 3 test cases, please suggest!
    Case 1
    Enabled only DB level supplemental logging(sl)
    observations--->
    DML on all tables can be tracked with logminer.
    I find this perfect.
    case 2
    Enabling only table level supplemental logging
    Setting---->
    2 tables ---AAA(with table level sl) & BBB (without table level sl)
    Only DDL is recorded with the help of LogMiner, and a few of the operations are listed as internal.
    case3
    Enabling database level SL first and then enabling table level SL only on one table ---> AAA, and no table level SL on BBB
    observation ---> DDL and DML on all the tables are getting tracked. The point is: if this gives the same result
    as DB level SL, what is the significance of enabling table level SL? Or am I missing something?

    I have the same experience: when database level supplemental logging is enabled, adding supplemental logging at the table level does not affect functionality or performance.  Inserting 1 M rows into test table takes 25 sec ( measured on target database ) with table level supplemental logging, and 26 sec without it.  My GoldenGate version is 11.2, Oracle database version 11.2.0.3.0
    If someone can show the benefit of having table level supplemental logging in addition to database level logging, I would very much appreciate it.
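    For reference, these are the statements and checks involved (table name taken from the test case; DBA_LOG_GROUPS lists the table-level log groups):

        -- database-level (minimal) supplemental logging
        ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
        -- database-level identification-key logging
        ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;
        -- table-level supplemental logging on one table
        ALTER TABLE aaa ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

        -- what is currently enabled
        SELECT supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui
          FROM v$database;
        SELECT log_group_name, table_name, log_group_type
          FROM dba_log_groups
         WHERE table_name = 'AAA';

    Table-level SL matters mainly when database-level SL is left at the minimal setting: it tells LogMiner (or a replication tool such as GoldenGate) which extra columns to write into the redo for that particular table.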

  • Showing values in thousands (1000s) in the table and full values in the graph

    Hi ,
      I have a requirement in which the amount needs to be shown in the table in thousands. For example, if the amount is 10000, it should be shown as 10 in the table, while in the graph below the table it should be shown as 10000. Any idea how to do it? Is there any way we can calculate this at the table level?

    Can you explain the requirement more clearly? It is a bit confusing.

  • Please ADVISE: Implementation of ARCs at Table Level

    Dear Colleague,
    While modeling with Designer (DS9i, Release 2), I have made use of arcs, i.e. when only ONE of two or more relationships is applicable at a time, but two or more of the relationships are possible.
    I have the situation in which an ARC contains two relationships that are also part of the primary key.
    My question is: How will Designer implement this at the table level when I generate the table definitions from the entity definitions?
    I assume:
    1. that one column is generated for each relationship.
    2. that a check constraint is generated e.g.
    (col_1 is NULL and col_2 is NOT NULL) OR
    (col_1 is NOT NULL and col_2 is NULL)
    Since one of the two columns must be NULL, these columns cannot be part of the primary key. Does Designer consequently generate no primary key constraint, but only a candidate key constraint?
    Please advise.
    Best regards,
    Randy

    Hi Randy,
    I suggest you make changes in the table design after transformation. You can then arrange your own design solution to the problem (either single key or multi-key with check constraints). Remember the database design transformer is not intended to provide a directly usable design, just a 'first-cut'.
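    If you keep the two relationship columns, the hand-finished DDL typically ends up with the check constraint you already sketched (placeholder table and column names):

        ALTER TABLE child_table ADD CONSTRAINT child_arc_ck CHECK (
              (col_1 IS NOT NULL AND col_2 IS NULL)
           OR (col_1 IS NULL     AND col_2 IS NOT NULL)
        );

    The primary key is then usually built from a surrogate or another mandatory column rather than from the arc columns themselves.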

  • Record not inserting into the table through Forms 10g

    Hi all,
    I have created a form in 10g(10.1.2.0.2) based on just one table that has 4 columns(col1, col2, col3, col4).
    Here col1, col2 and col3 are VARCHAR2 and col4 is DATE, and all the columns are NOT NULL (there are no primary or foreign key constraints, which means duplicates are allowed).
    My form contains 2 blocks, where block1 has one text item (col1) and 3 buttons (Delete, Save, Exit).
    And block2 is a database block and has col2, col3, col4 in a tabular layout frame displaying 10 records.
    When the form is opened the cursor has to be in block1.col1 for querying. Here I enter a value in col1, and when I click on col2 in block2, the EXECUTE_QUERY I placed in the WHEN-NEW-BLOCK-INSTANCE trigger of block2 displays the records.
    The block2 properties are set so that update is not allowed, but insert and query are allowed.
    Everything works fine up to here. But when I want to insert another record into the table from block2, by navigating down to the last empty record and entering new values for col2, col3 and col4, Ctrl+S displays the message "FRM-40400: Transaction complete: 1 record applied and saved." But the record is actually not inserted into the table.
    I also disabled col4 by setting the Enabled property to No, since while inserting a new record the date has to be populated into it and it shouldn't be changed by the user. I am populating the sysdate into the new record by setting the Initial Value property to $$DATE$$.
    Another requirement which I could not work around is that col3 should also be populated with the username of the user while inserting.
    please help me...

    Hi Sarah,
    I do not want to update the existing records, so I kept Update Allowed set to No in the property palette for the items in block2.
    Do I have to do this property at block level also?
    I'm inserting a new record here.
    Edited by: Charan on Sep 19, 2011 8:48 AM
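    For the username requirement, one common approach (a minimal sketch; block/item names taken from the description, assuming Insert Allowed is Yes on block2) is a block-level PRE-INSERT trigger on block2:

        -- PRE-INSERT trigger on block2: stamp the login user before the row is inserted
        BEGIN
          :block2.col3 := GET_APPLICATION_PROPERTY(USERNAME);
        END;

    If the record still does not reach the table after saving, it is also worth re-checking that block2's Insert Allowed property and its DML Data Target (the base table) are set as expected.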
