MRS for Data structure table

I need various data structure tables for MRS.

A possible solution is to create a materialized view and then a check constraint on it; this should work in a multi-user environment.
Example:
SQL> drop table t;
Table dropped.
SQL>
SQL> create table t (
  2  x integer,
  3  y varchar2(10)
  4  );
Table created.
SQL>
SQL> CREATE MATERIALIZED VIEW LOG on t
  2  WITH ROWID (x, y)
  3  including new values;
Materialized view log created.
SQL>
SQL> CREATE MATERIALIZED VIEW t_mv
  2  REFRESH FAST ON COMMIT AS
  3  SELECT count(*) cnt
  4  FROM t;
Materialized view created.
SQL>
SQL> ALTER TABLE t_mv
  2  ADD CONSTRAINT chk check(cnt<=1);
Table altered.
SQL>
SQL> insert into t values(1,'Ok');
1 row created.
SQL> commit;
Commit complete.
SQL>
SQL> insert into t values(2,'KO');
1 row created.
SQL> commit;
commit
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-02290: check constraint (TEST.CHK) violated

Similar Messages

  • Create csv file for data in tables

    Hi All,
    I need to "export" data for about 60 tables in one of my databases to a csv file format using a "|" as a separator.
    I know I can do this using a query like:
    select col1 || '|' || col2 || '|' || col3 from table;
    Some of my tables have more than 50 columns so I'm guessing there is an easier way to do this than to construct select SQL statements for each table?
    Thanks in advance.

    I would point out that the OP did not identify the target for the files, so it could be a non-Oracle database or Excel, in which case external tables would not work, since Oracle only writes to external tables in datapump format. If the target is another Oracle database then external tables would be an option; in that case, insert/select over a database link would also be a potential alternative to using csv files.
    I use a SQL script to generate the Select statement to create my csv files.
    set echo off
    rem
    rem SQL*Plus script to create comma delimited output file from table
    rem
    rem 20000614  Mark D Powell   Automate commonly done task
    rem
    set pagesize  0
    set verify    off
    set feedback  off
    set linesize  999
    set trimspool on
    accept owner    prompt 'Enter table owner => '
    accept tblname  prompt 'Enter table name => '
    spool csv2.sql
    select 'select ' from sys.dual;
    select decode(column_id,1,column_name,
                 '||''|''||'||column_name)
    from   sys.dba_tab_columns
    where  table_name = upper('&&tblname')
    and    owner      = upper('&&owner')
    order by column_id;
    select 'from &&owner..&&tblname;'
    from   sys.dual;
    spool off
    undefine owner
    undefine tblname
    HTH -- Mark D Powell --
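    If the export has to be produced outside SQL*Plus, the same idea (derive the column list instead of hand-writing it) can be sketched in Java with JDBC, reading the column count from ResultSetMetaData. This is only a rough sketch, not part of the original answer: the connection URL, credentials, and table names below are hypothetical, and it assumes the Oracle JDBC driver is on the classpath.
    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.Statement;

    // Pipe-delimited export that discovers each table's columns at run time,
    // so no per-table SELECT list has to be written by hand.
    public class PipeDelimitedExport {
        public static void main(String[] args) throws Exception {
            String[] tables = {"EMP", "DEPT"};   // hypothetical table names
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger")) {
                for (String table : tables) {
                    try (Statement st = con.createStatement();
                         ResultSet rs = st.executeQuery("SELECT * FROM " + table);
                         PrintWriter out = new PrintWriter(table + ".csv")) {
                        ResultSetMetaData md = rs.getMetaData();
                        int cols = md.getColumnCount();
                        while (rs.next()) {
                            StringBuilder line = new StringBuilder();
                            for (int i = 1; i <= cols; i++) {
                                if (i > 1) line.append('|');
                                String v = rs.getString(i);
                                line.append(v == null ? "" : v);  // write NULLs as empty
                            }
                            out.println(line);
                        }
                    }
                }
            }
        }
    }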

  • Need links for data structure and algorithms.

    Hi.
    I am new to Java but need to learn data structures and algorithms.
    Do you guys have any good links or bulletin boards to learn from?
    Thanks in advance

    http://www.amazon.com/exec/obidos/tg/detail/-/1571690956/ref=cm_huw_sim_1_3/104-7657019-1043968?v=glance
    http://www.amazon.com/exec/obidos/tg/detail/-/0534376681/ref=cm_huw_sim_1_4/104-7657019-1043968?v=glance
    http://www.amazon.com/exec/obidos/tg/detail/-/0672324539/ref=cm_huw_sim_1_2/104-7657019-1043968?v=glance
    http://www.amazon.com/exec/obidos/tg/detail/-/0201775786/qid=1060946080/sr=8-1/ref=sr_8_1/104-7657019-1043968?v=glance&s=books&n=507846
    $8 for the first

  • Checking for data in tables

    hi gurus,
    I have an itab containing FI transactional data, which includes G/L accounts in the field belnr.
    Now I need to check whether each G/L account has master data; if no match is found, that account must be stored for display later.
    So I thought of doing it the following way. Is this best practice?
    tables: skb1.
    data : i_skb1 like skb1 occurs 0 with header line.
    data:begin of i_saknr occurs 0,
    saknr like skb1-saknr,
    end of i_saknr.
    loop at itab.
    select single * from skb1 into i_skb1 where saknr EQ itab-belnr.
    if sy-subrc <> 0.
    i_saknr-saknr = itab-belnr.
    append i_saknr.
    endif.
    endloop.
    Is there a better way of doing this?

    Sanjana,
    Do it like this:
    tables: skb1.
    data: i_skb1 like skb1 occurs 0 with header line.
    types: begin of ty_saknr,
             saknr like skb1-saknr,
           end of ty_saknr.
    data: i_saknr type table of ty_saknr,
          wa_saknr type ty_saknr.
    * Fetch all matching master records in one SELECT instead of SELECT SINGLE per loop pass.
    * Guard against an empty itab: FOR ALL ENTRIES with an empty table would select every row.
    if itab[] is not initial.
      select * from skb1 into table i_skb1
        for all entries in itab
        where saknr eq itab-belnr.
    endif.
    loop at itab.
    * Look the account up in the fetched master data, not in itab itself.
      read table i_skb1 with key saknr = itab-belnr.
      if sy-subrc <> 0.
        wa_saknr-saknr = itab-belnr.
        append wa_saknr to i_saknr.
      endif.
    endloop.
    Regards,
    Satish

  • Check for data in table

    So I have a table with one column:
    SUMA_LEI
    LEI
    I have a form built on this table. My purpose is to have just one row in the table, so when the user gets to the form page and a row already exists, they should get an error like: "delete the previous value, then insert the new one". For this I need a PL/SQL process that checks whether a row is already inserted in the table; if so, I get an error, and if not, I'm allowed to continue on that page.
    I need some help as soon as possible, please. I know that I need some "if clause", but beyond that I don't know how to do it.

    A possible solution is to create a materialized view and then a check constraint on it; this should work in a multi-user environment. The full SQL*Plus example (a materialized view log on the table, a REFRESH FAST ON COMMIT materialized view selecting count(*), and a check constraint cnt<=1, so that committing a second row fails with ORA-12008/ORA-02290) is shown at the top of this page.

  • Anyone for Data Structures?

    Hi,
    Can anyone think of a data structure in which I can store an array of integers (which I can increment easily) together with a linked key (a string) that moves with its value when I sort in order of the integer value? Similar to a Hashtable, except that you can't sort one of those?
    I know you can't have an array holding two different types, can you?
    Ideas?
    cheers.

    SortedMap (such as TreeMap) orders by keys, not values.
    You can make any collection and call Collections.sort() on it, but your objects need to be Comparable for it to work. You can make a key-value object that implements Comparable, whose compareTo method responds to the value. Then make the equals method respond to the key, so when you call get(key), you'll get back the correct object. It may be a little tricky, but I think it could work.
    -JBoeing
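    For what it's worth, a minimal sketch of the key-value object JBoeing describes might look like this (class and field names are my own invention; compareTo responds to the value, equals to the key):
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Key-value pair: ordering responds to the integer value, equality to the key.
    class Counter implements Comparable<Counter> {
        final String key;
        int value;

        Counter(String key, int value) {
            this.key = key;
            this.value = value;
        }

        // Sorting keeps each key attached to its value.
        public int compareTo(Counter other) {
            return Integer.compare(this.value, other.value);
        }

        // Lookup by key still works after sorting, as suggested above.
        @Override
        public boolean equals(Object o) {
            return o instanceof Counter && ((Counter) o).key.equals(this.key);
        }

        @Override
        public int hashCode() {
            return key.hashCode();
        }

        @Override
        public String toString() {
            return key + "=" + value;
        }
    }

    public class CounterDemo {
        public static void main(String[] args) {
            List<Counter> counters = new ArrayList<Counter>();
            counters.add(new Counter("foo", 3));
            counters.add(new Counter("bar", 1));
            counters.add(new Counter("baz", 2));

            counters.get(0).value++;                          // increment is easy
            Collections.sort(counters);                       // orders by value
            System.out.println(counters);                     // [bar=1, baz=2, foo=4]

            int i = counters.indexOf(new Counter("baz", -1)); // probe value is ignored
            System.out.println(counters.get(i));              // baz=2
        }
    }
    Here indexOf() stands in for get(key): because equals() ignores the value, a probe object with any value finds the entry with the matching key.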

  • Linked List, Stacks and Other Data Structures in Objective-C 2.0

    I've checked through the documentation that Apple provides for the different frameworks to determine whether there are ready-made classes for data structures such as linked lists, stacks and so forth, but there don't appear to be any available. Is this correct?

    NSArray and NSDictionary are the multi-purpose built-in aggregate types. If you need something else then you can use STL or roll your own.

  • Data Structures and Algorithms in java book

    Hi guys,
    I want to find a good book on Data Structures and Algorithms in Java. I am good at core Java but a beginner at data structures in Java, and a little weak on the concepts.
    Below are the books I have found on the net. Could you help me choose the best of them?
    1. Data Structures and Algorithms in Java - Mitchell Waite
    2. Data Structures in Java - Sandra Anderson
    3. Fundamentals of OOP and Data Structures in Java - Richard Weiner & Lewis J. Pinson
    4. Object Oriented Data Structures Using Java - Nell Dale, Daniel T. Joyce, Chip Weems

    lieni wrote:
    A good data structures book doesn't have to be language-specific. Thx DrLazlo, my speech exactly.
    Yes.
    The OP wrote:
    I have access to these books and don't know which one to start with.
    What I meant is that you shouldn't narrow your search by insisting that the book you choose have "Java" in the title.

  • Data Structures Problems

    hey friends...
    can anyone help me find solved problems for data structures? I have read the materials but still want to practise... please tell me if anyone knows of any.
    Thanks...

    You should know that Data Structures refers to any structure that allows for the storage and retrieval of data. Thus if you have questions you need to be more specific.
    ASK A QUESTION!
    DeltaCoder
    "Change your mind, your body will follow.

  • How to find out the underlying data dictionary table for a structure

    Hi
    As table controls are associated with structures, the data input goes to the data dictionary table aligned with that structure. Is there any way to find out the table related to a particular structure?
    Thanks,
    Dhareppa

    Hi,
    The structure and its fields may be associated with multiple database tables.
    You can try the where-used list of the structure/field and then determine the table by trial and error.
    Also, as suggested above, you can take an SQL trace (transaction ST05) and look for the tables accessed behind the structure/fields; I believe this is the best way to find the DB table.
    Thanks,
    Ravi

  • What is the best data structure for loading an enterprise Power BI site?

    Hi folks, I'd sure appreciate some help here!
    I'm a kinda old-fashioned gal and a bit of a traditionalist, building enterprise data warehouses out of Analysis Services hypercubes with a whole raft of MDX for analytics. Those puppies would sit up and beg when you asked them to deliver up goodies to SSRS or PowerView.
    But Power BI is a whole new game for me.
    Should I be exposing each dimension and fact table in the relational data warehouse as a single OData feed?
    Should I be running Data Management Gateway and exposing each table in my RDW individually?
    Should I be flattening my stars and snowflakes and creating a very wide First Normal Form dataset with everything relating to each fact?
    I guess my real question, folks, is what's the optimum way of exposing data to the Power BI cloud?
    And my subsidiary question is this: am I right in saying that all the data management, validation, cleansing, and regular ETL processes are still required before the data is suitable to expose to Power BI?
    Or, to put it another way, is it not the case that you need to have a clean and properly structured data warehouse before the data is ready to be massaged and presented by Power BI?
    I'd sure value your thoughts and opinions,
    Cheers, Donna
    Donna Kelly

    Dear All,
    My original question was: what's the optimum way of exposing data to the Power BI cloud?
    Having spent the last month faffing about with Power BI – and reading about many people's experiences using it – I think I can offer a few preliminary conclusions.
    Before I do that, though, let me summarise a few points:
    Melissa said "My initial thoughts: I would expose each dim & fact as a separate OData feed" and went on to say "one of the hardest things . . . is the data modeling piece . . . I think we should try to expose the data in a way that'll help usability . . . which wouldn't be a wide, flat table".
    Greg said "data modeling is not a good thing to expose end users to . . . what we've had better luck with is building out the data model, and teaching the users how to combine pre-built elements".
    I had commented ". . . end users and data modelling don't mix . . . self-service so far has been mostly a bust".
    Here at Redwing, we give out a short White Paper on Business Intelligence Reporting. It goes to clients and anyone else who wants one. The heart of the Paper is the Reporting Pyramid, which states: business intelligence is all about the creation and delivery of actionable intelligence to the right audience at the right time.
    For most of the audience, that means corporate BI: pre-built reports delivered on a schedule.
    For most of the remaining audience, that means parameterised, drillable, and sliceable reporting available via the web, running the gamut from the dashboard to the details, available on demand.
    For the relatively few business analysts, that means the ability for business users to create their own semi-customised visual reports when required, to serve their audiences.
    For the very few high-power users, that means the ability to interrogate the data warehouse directly, extract the required data, and construct data mining models, spreadsheets and other intricate analyses as needed.
    On the subject of self-service, the Redwing view says: although many vendors want to sell self-service reporting tools to the enterprise, the facts of the matter are these:
    • 80%+ of all enterprise reporting requirements are satisfied by corporate BI . . . if it's done right.
    • Very few staff members have the time, skills, or inclination to learn and employ self-service business intelligence in the course of their activities.
    I cannot just expose raw data and tell everyone to get on with it. That way lies madness!
    I think that clean and well-structured data is a prerequisite for delivering business intelligence. Assuming that data is properly integrated, historically accurate and non-volatile as well, then I've just described a data warehouse, which is the physical expression of the dimensional model.
    Therefore, exposing the presentation layer of the data warehouse is – in my opinion – the appropriate interface for self-service business intelligence.
    Of course, we can choose to expose perspectives as well, which is functionally identical to building and exposing subject data marts. That way, all calculations, KPIs, definitions, and even field names are all consistent, because they all come from the single source of the truth, and not from spreadmart hell.
    So my conclusion is that exposing the presentation layer of a properly modelled data warehouse is – in general – the way to expose data for self-service.
    That's fine for the general case, but what about Power BI? Well, it's important to distinguish between the new capabilities in Excel and the ones in Office 365. I think that to all intents and purposes, we're talking about exposing data through the Data Management Gateway and reading it via Power Query. The question boils down to what data structures should go down that pipe.
    According to Create a Data Source and Enable OData Feed in Power BI Admin Center, the possibilities are tables and views. I guess I could have repeating data in there, so it could be a flattened structure of the kind Melissa doesn't like (and neither do I).
    I could expose all the dims and all the facts . . . but that would mean essentially re-building the DW in the PowerPivot DM, and that would be just plain stoopid. I mean, not a toy system, but a real one with scores of facts and maybe hundreds of dimensions?
    Fact is, I cannot for the life of me see what advantages DMG/PQ has over just telling corporate users to go directly to the Cube Perspective they want, which already has all the right calcs, KPIs, security, analytics, and field names . . . and most importantly, is already modelled correctly!
    If I'm a real Power User, then I can use PQ on my desktop to pull mashup data from the world, along with all my on-prem data through my exposed Cube presentation layer, and PowerPivot the heck out of that to produce all the reporting I'd ever want. It'd be a zillion times faster reading the data directly from the Cube instead of via the DMG, as well (I think Power BI performance sucks, actually).
    Of course, your enterprise might not have a DW, just a heterogeneous mass of dirty unstructured data. If that's the case, choosing Power BI data structures is the least of your problems!  :-)
    Cheers, Donna
    Donna Kelly

  • How to create internal table for a structure in BSP

    hi,
    I have created a structure in BSP and I want to create an internal table for that structure. But in my coding, i.e.
    <% data: begin of itab_1.
             include type zuvendstr.
             data: end of itab_1.
             data wa_str like line of itab_1.
             loop at itab_1 into wa_str. %>
    <tr>
    <td><%=wa_str-name%> </td>
    <%endloop.%>
    zuvendstr is the structure, wa_str is the work area and itab_1 is the internal table, but it is showing an error that itab_1 is unknown. We cannot define internal tables for a structure in Page Attributes, so please explain how to create an internal table for a structure in a BSP.

    Hi,
    You can define itab_1 like this (assuming zuvendstr is a structure type):
    DATA: itab_1 TYPE TABLE OF zuvendstr.
    Regards,
    Tanguy

  • Creating view for DataSource from table containing Structures.

    Hello All,
    I need to make a DataSource for HR PY master data from table PA9004 (a special table for an HR infotype), which needs some fields like PFOBEE (PF-Opening Balance-Ee).
    Field PFOBEE is part of a structure (PS0008) which is included in table PA9004. I tried the where-used list for this structure in programs, function modules and tables but got no result for any transparent table.
    To form my DataSource I tried creating a database view, which had inconsistencies. I tried using table PA9004, which said "This operation failed, because the template structure quantity fields or currency fields, for example, field PFOBEE refer to a different table."
    Also, a projection view can't be used for generic DataSources, and Create Infostructure (MC21) does not work for HR: PY-IO.
    All suggestions welcome. Mail me at [email protected]

    Hello All,
    I solved this problem by building my generic DataSource (transaction RSO2) on an InfoSet which contained table PA9004. There were some warnings regarding the use of the development class, but finally I had the DataSource ready.
    I checked that the DataSource works using transaction RSA3, and it extracts the data perfectly. I replicated this DataSource in the BW server and it appears exactly where I wanted.
    Some colleagues had suggested writing a function module to extract the data, which must also work, but I have not tried this option yet.
    Experts may please review my solution and offer tips. I am willing to offer points to anyone offering a more elegant solution.

  • How to get the NameTab structure (X031L) for a dynamic internal table?

    When we pass a standard table to the FM 'DD_GET_NAMETAB' we get the nametab structure (X031L). I want to get the same structure for a dynamic internal table. How can I achieve this?
    Please help me...

    Hi,
    try this method:
    REPORT zmaschl_create_data_dynamic.

    TYPE-POOLS: slis.

    DATA: it_fcat TYPE slis_t_fieldcat_alv,
          is_fcat LIKE LINE OF it_fcat.
    DATA: it_fieldcat TYPE lvc_t_fcat,
          is_fieldcat LIKE LINE OF it_fieldcat.
    DATA: new_table TYPE REF TO data.
    DATA: new_line  TYPE REF TO data.
    FIELD-SYMBOLS: <l_table> TYPE ANY TABLE,
                   <l_line>  TYPE ANY,
                   <l_field> TYPE ANY.

    * Build the field catalog
    CALL FUNCTION 'REUSE_ALV_FIELDCATALOG_MERGE'
      EXPORTING
        i_structure_name = 'SYST'
      CHANGING
        ct_fieldcat      = it_fcat[].

    LOOP AT it_fcat INTO is_fcat WHERE NOT reptext_ddic IS INITIAL.
      MOVE-CORRESPONDING is_fcat TO is_fieldcat.
      is_fieldcat-fieldname = is_fcat-fieldname.
      is_fieldcat-ref_field = is_fcat-fieldname.
      is_fieldcat-ref_table = is_fcat-ref_tabname.
      APPEND is_fieldcat TO it_fieldcat.
    ENDLOOP.

    * Create a new table
    CALL METHOD cl_alv_table_create=>create_dynamic_table
      EXPORTING
        it_fieldcatalog = it_fieldcat
      IMPORTING
        ep_table        = new_table.

    * Create a new line with the same structure as the table
    ASSIGN new_table->* TO <l_table>.
    CREATE DATA new_line LIKE LINE OF <l_table>.
    ASSIGN new_line->* TO <l_line>.

    * Test it...
    DO 30 TIMES.
      ASSIGN COMPONENT 'SUBRC' OF STRUCTURE <l_line> TO <l_field>.
      <l_field> = sy-index.
      INSERT <l_line> INTO TABLE <l_table>.
    ENDDO.

    LOOP AT <l_table> ASSIGNING <l_line>.
      ASSIGN COMPONENT 'SUBRC' OF STRUCTURE <l_line> TO <l_field>.
      WRITE <l_field>.
    ENDLOOP.

  • Open ended data structure for retrieving SQL data

    I have a Flex project that reads data from a database and sends it to my Flex app to display in a chart. I need to do this in ActionScript since I have to dynamically create different charts based on the database definitions. Up till now, I have inserted the data for each series into a separate ArrayList (Java) / ArrayCollection (Flex) in this way:
    BindingUtils.bindProperty(lineSeries, "dataProvider", series, "pointList");
    lineSeries.xField="point1";
    lineSeries.yField="point2";
    The series variable is a custom ActionScript/Java object which contains an ArrayCollection called pointList, and the array is made up of custom DataObjects with two fields, point1 & point2.
    But in the new paradigm, I'd basically like to create a big array of arrays, similar to a database table, and then just point the xField to one column and the yField to another column. Anyway, it seems like the problem is that my point1 and point2 in the current implementation are variable names, but with this new dynamic structure, I won't be able to create variables on the fly. So how do I let Flex know to look at column 3, for example, for the yField? The attached code is from the Flex documentation and is an example of what I'd like to emulate (except I need to do it in ActionScript). I'd like to be able to select "month" or "amount", etc. But coming from the Java side, there is no way for me to name the array contained in each column.

    I guess I wasn't clear enough. I am actually serializing a Java object and mapping it to an ActionScript object via BlazeDS. I know how the objects translate; for example, for ArrayCollections in Flex, I use ArrayLists in Java. The problem is that I'm not sure how to get the associative aspects of the arrays in Flex into my Java data structures in a way that will translate to the way Flex can define arrays, such as Month:"January".
    It seems that in Flex, you can create an Array and assign it an element like {month:"January", amount:"450"} and it will create an Object with variables for month and amount. But I don't know how to do this on the fly in Java (or in ActionScript for that matter) when I have no way of knowing beforehand how many variables my object will need. Each series will need an x and y, though usually the x will be dates which are the same for each series. So if I had 10 series sharing an axis, I'd need 11 variables in my object: date, series1, series 2...series 10.
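    A minimal Java-side sketch of that idea (class, column, and value names below are invented for illustration): build each row as a java.util.Map whose keys are the runtime-determined column names. BlazeDS serializes a Map as a dynamic ActionScript Object, so a List of such Maps should arrive in Flex as a collection of {date:..., series1:...} objects, and xField/yField can then name any column.
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Builds row objects whose "variable names" (map keys) are decided at run time,
    // so the number of series does not have to be known in advance.
    public class DynamicRowBuilder {

        public static List<Map<String, Object>> buildRows(String[] columns, Object[][] data) {
            List<Map<String, Object>> rows = new ArrayList<Map<String, Object>>();
            for (Object[] record : data) {
                Map<String, Object> row = new HashMap<String, Object>();
                for (int col = 0; col < columns.length; col++) {
                    row.put(columns[col], record[col]);   // column name chosen at run time
                }
                rows.add(row);
            }
            return rows;
        }

        public static void main(String[] args) {
            // Column set only known at run time, e.g. read from database metadata.
            String[] columns = {"date", "series1", "series2"};
            Object[][] data = {
                {"2008-01", 450, 300},
                {"2008-02", 520, 310},
            };
            System.out.println(buildRows(columns, data));
        }
    }
    On the Flex side, setting lineSeries.yField = "series1" (or any other key present in the maps) would then pick out that column, with no compile-time variable names involved.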
