WM and long table incl. spatial geometry?

Hi all,
KMS maintains the Danish Cadastre of land parcels, currently using Oracle 8.1.7 in conjunction with a GIS system that stores the graphics. In the near future the database and all applications are to be rewritten in order to incorporate all the graphics as SDO_GEOMETRY in Oracle Spatial, which we already use for other purposes, namely
     vers.: (32 bit) Oracle9i Enterprise Edition Release 9.2.0.2.0 - Production on Sun/SunOS 5.8 (64 bit) = Solaris 2.8
I have read some introductory papers about the Workspace Manager on the WM homepage and elsewhere, as well as parts of the "Oracle9i Application Developer's Guide - Workspace Manager Release 2 (9.2)". But I have no practical experience with Workspace Manager.
Could some of you people using Workspace Manager inform me whether it is feasible to version-enable a cadastral table
1) holding on the order of 25 million rows?
2) with each row storing an SDO_GEOMETRY?
Does a version-enabled table
3) respond slower than otherwise to queries and updates?
4) require lots more storage space?
I am aware that one of the reasons behind developing Oracle Workspace Manager is to handle long-term transactions. On the KMS Cadastre there are a lot of short transactions and a few long transactions (days, weeks, even months), which rather often are in conflict because they take place in the same geographical location. The update frequency will of course be low on the 25 million records, as annually only about 20% of all parcels are updated.
- Thanks in advance,
Jens Ole Jensen
Kort & MatrikelStyrelsen (WWW: http://www.kms.dk)
Danmark

Could some of you people using Workspace Manager inform me whether it is feasible to version-enable a cadastral table
1) holding on the order of 25 million rows?
2) with each row storing an SDO_GEOMETRY?
Yes. You can version such a table.
Does a version-enabled table
3) respond slower than otherwise to queries and updates?
There will be a performance impact due to versioning, but it should not be very noticeable.
4) require lots more storage space?
No. Workspace Manager only versions rows that have been changed. In your case, since updates are relatively few, as you mention, and since only changed rows are versioned, the growth in storage space should be modest.
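If it helps to see it concretely: version-enabling is a single call, and long transactions then live in workspaces. A minimal sketch from SQL*Plus, assuming a cadastral table named PARCELS (the table and workspace names are only placeholders):
    EXECUTE DBMS_WM.EnableVersioning('PARCELS');
    -- Workspace Manager renames the table and exposes an updatable
    -- view under the original name; applications keep using PARCELS.

    EXECUTE DBMS_WM.CreateWorkspace('SURVEY_CASE_42');
    EXECUTE DBMS_WM.GotoWorkspace('SURVEY_CASE_42');
    -- Ordinary DML against PARCELS here is isolated from LIVE until:
    EXECUTE DBMS_WM.MergeWorkspace('SURVEY_CASE_42');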
Hope this helps. We will be glad to help in tuning and sizing.
regards
Arun

Similar Messages

  • Cluster table with Spatial column

    Hi,
I tried to create a spatial table (with one SDO_GEOMETRY column) with a cluster on one attribute column. But I keep getting error ORA-03001: unimplemented feature.
Does this mean that I cannot cluster a table with an SDO_GEOMETRY column?
    Thanks
    Helen

    Hi Helen,
    The parameter you mention is only for real application clusters, not for clustering columns of tables.
As far as I can tell, when clustering columns of different tables together, Oracle will try to store all of the data associated with those tables together on disk.
The Oracle Spatial geometry datatype (mdsys.sdo_geometry) includes two varray types of length 1048576. Because these varrays can hold so much data, Oracle "makes arrangements" to store data in these columns outside of the table in a LOB segment (in reality, data is only stored out-of-line if there is over 4 KB of data in the varray).
Because of this (no ability to ensure the spatial data is stored with the clustering columns), the clustering mechanism is disabled when you have spatial data.
I read through the doc and it is unclear - the only restriction I could find documented is using these columns as the clustering key.
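For what it's worth, a minimal sketch of the failing case (the cluster and table names are invented for illustration):
    CREATE CLUSTER region_cluster (region_id NUMBER) SIZE 8192;

    CREATE TABLE parcels_clustered (
      region_id NUMBER,
      geom      MDSYS.SDO_GEOMETRY
    ) CLUSTER region_cluster (region_id);
    -- fails with ORA-03001: unimplemented feature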
    Hope this helps,
    Dan

  • Can't fetch clob and long in one select/query

    I created a nightmare table containing numerous binary data types to test an application I was working on, and believe I have found an undocumented bug in Oracle's JDBC drivers that is preventing me from loading a CLOB and a LONG in a single SQL select statement. I can load the CLOB successfully, but attempting to call ResultSet.get...() for the LONG column always results in
    java.sql.SQLException: Stream has already been closed
    even when processing the columns in the order of the SELECT statement.
    I have demonstrated this behaviour with version 9.2.0.3 of Oracle's JDBC drivers, running against Oracle 9.2.0.2.0.
    The following Java example contains SQL code to create and populate a table containing a collection of nasty binary columns, and then Java code that demonstrates the problem.
    I would really appreciate any workarounds that allow me to pull this data out of a single query.
import java.sql.*;

/*
 * This class was developed to verify that you can't have a CLOB and a LONG
 * column in the same SQL select statement and extract both values. Calling
 * get...() for the LONG column always causes
 * 'java.sql.SQLException: Stream has already been closed'.
 *
 * Table setup:
 *
 *   CREATE TABLE BINARY_COLS_TEST (
 *       PK INTEGER PRIMARY KEY NOT NULL,
 *       CLOB_COL CLOB,
 *       BLOB_COL BLOB,
 *       RAW_COL RAW(100),
 *       LONG_COL LONG
 *   );
 *
 *   INSERT INTO BINARY_COLS_TEST (PK, CLOB_COL, BLOB_COL, RAW_COL, LONG_COL)
 *   VALUES (1,
 *           '-- clob value --',
 *           HEXTORAW('01020304050607'),
 *           HEXTORAW('01020304050607'),
 *           '-- long value --');
 */
public class JdbcLongTest
{
    public static void main(String argv[]) throws Exception
    {
        Driver driver = (Driver)Class.forName("oracle.jdbc.driver.OracleDriver").newInstance();
        DriverManager.registerDriver(driver);
        Connection connection = DriverManager.getConnection(argv[0], argv[1], argv[2]);
        Statement stmt = connection.createStatement();
        ResultSet results = null;
        try
        {
            String query = "SELECT pk, clob_col, blob_col, raw_col, long_col FROM binary_cols_test";
            results = stmt.executeQuery(query);
            while (results.next())
            {
                int pk = results.getInt(1);
                System.out.println("Loaded int");
                Clob clob = results.getClob(2);
                // It doesn't work if you just close the ascii stream.
                // clob.getAsciiStream().close();
                String clobString = clob.getSubString(1, (int)clob.length());
                System.out.println("Loaded CLOB");
                // Streaming not strictly necessary for short values.
                // Blob blob = results.getBlob(3);
                byte blobData[] = results.getBytes(3);
                System.out.println("Loaded BLOB");
                byte rawData[] = results.getBytes(4);
                System.out.println("Loaded RAW");
                byte longData[] = results.getBytes(5); // throws: Stream has already been closed
                System.out.println("Loaded LONG");
            }
        }
        catch (SQLException e)
        {
            e.printStackTrace();
        }
        results.close();
        stmt.close();
        connection.close();
    }
} // public class JdbcLongTest

The problem is that LONGs are not buffered but are read from the wire in the order defined. The problem is the same as:
rs = stmt.executeQuery("select myLong, myNumber from tab");
while (rs.next()) {
    int n = rs.getInt(2);
    String s = rs.getString(1);
}
The above will fail for the same reason. When the statement is executed the LONG is not read immediately. It is buffered in the server waiting to be read. When getInt is called the driver reads the bytes of the LONG and throws them away so that it can get to the NUMBER and read it. Then when getString is called the LONG value is gone, so you get an exception.
    Similar problem here. When the query is executed the CLOB and BLOB locators are read from the wire, but the LONG is buffered in the server waiting to be read. When Clob.getString is called, it has to talk to the server to get the value of the CLOB, so it reads the LONG bytes from the wire and throws them away. That clears the connection so that it can ask the server for the CLOB bytes. When the code reads the LONG value, those bytes are gone so you get an exception.
This is a long-standing restriction on using LONG and LONG RAW values and is a result of the network protocol. It is one of the reasons that Oracle deprecates LONGs and recommends using BLOBs and CLOBs instead.
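As an aside, if changing the schema is an option, Oracle can convert a LONG column to a CLOB in place (supported since 9i), which removes the ordering restriction entirely. A sketch, assuming a table T with a LONG column DATA:
    ALTER TABLE t MODIFY (data CLOB);
    -- Existing LONG data is migrated; JDBC can then fetch the column
    -- as a CLOB alongside the other LOB columns in any order.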
    Douglas

• How can I use TopLink for queries that span two or more tables?

I use TopLink today, and I can query against one table, but how can I use TopLink for queries that span two or more tables?
Thank you for looking at this question.

You can write a custom SQL query and map it to an object as needed. You can also use the TopLink expression framework's "anyOf" or "get" operations to query across two tables, as long as you map them as one-to-one ("get") or one-to-many ("anyOf") in the TopLink Mapping Workbench.
    Zev.
    check out oracle.toplink.expressions.Expression in the 10.1.3 API
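If you go the custom SQL route, the query itself is just an ordinary join that TopLink maps back onto your object. A sketch with invented table and column names:
    SELECT e.emp_id, e.name, a.city
    FROM   employees e
    JOIN   addresses a ON a.emp_id = e.emp_id;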

  • Word File Takes Longer and Longer to Save

I've been working daily on the same 100-page document for several months. I had no problems with the document under Word 2003. However, under Word 2010 the file grows in size over time, and saving the file takes longer and longer, to the point where there are significant timeouts (Word is "not responding") whenever an automatic save occurs.
I don't want to disable automatic saves because Word does crash for me on rare occasions, generally if I do something "too fast".
I've found a workaround for the problem: every three weeks or so, after the automatic saves have become painfully long, I copy the document to the clipboard (except for the last paragraph mark), open a new document based on the relevant template, and paste the clipboard contents into the new document. I rename the old version of the document and the new version becomes the working version. This reduces the file size (currently around 1.4 MB) by about 150 KB and the problem goes away for another three weeks.
    Certain aspects of my situation are unusual, and these may or may not be relevant to the problem:
    At the end of each day I use (via a macro) the Review, Compare feature of Word to compare the document with the previous day’s version to allow me to reread any changes I made to it.
    I use various other macros for intelligent page-turning, resizing windows, smart Find, etc.
    I maintain the document as a DOC file (Word 97-2003 Compatibility Mode) because I need to share the document with an organization that requires this format.
    The document flips back and forth a few times between being a one-column and two-column document.
    The document has a table of contents on the last page.
    The headings in the document have embedded section and subsection numbers.
    The document has numerous embedded SEQ and cross-reference fields.
    The document has embedded EMF pictures that were generated by a non-Microsoft application.
The long save times and the temporary solution I've found suggest that some "junk" is accumulating "in" the last paragraph mark. This junk doesn't cause any operational errors, but it slows things down to the point where the auto-save times out and I temporarily get the distracting "not responding" message. It would be nice if Word could automatically eliminate the junk in the last paragraph mark so that I wouldn't have to do it manually.
    Do you have any suggestions for how I might eliminate the problem?
    I'd be pleased to send a copy of the slow-saving file to a Microsoft Word programmer for diagnosis of the problem.
    I have up-to-date Windows 7 professional (64 bit) and Word 2010 14.0.6129.5000 (32 bit).
    Thanks for your help,
    Don Macnaughton

I am experiencing exactly the same save issue, although I cannot use the suggestion of copying to a new document, as I have a lot of references within the same document and I'm scared that I'll lose them (or mess them up).
    It is nearly a year later, did you have any luck?
    Francois,
I'm still experiencing the problem. However, I've now converted the document from a DOC to a DOCX, but that made no difference. So every 18 or so days I copy all of the document into a new document except for the last paragraph mark, and the problem goes away for another 18 or so days. For my document this solution is fully reliable, although it's less convenient because it's a little complicated and I worry I may make a mistake or some text may be lost in the transition.
So I'm still looking for a solution to the problem. Is there anything unique about your document or your handling of it that might be the cause? Are you using macros, Compare Versions, switching back and forth between one and two columns, or anything else that is common to the features I list in my first post in this thread?
You might want to try my copying solution as a test while keeping your original document as the official version that you continue to work with. You could then check the test document very carefully to see if my solution works with your document. You might find that you can trust my solution (or you might not).
By the way, I make sure that the copy worked properly by doing a Compare Versions of the old and new documents. (Surprisingly, sometimes the compare finds very minor differences between the two documents, but usually not.)
If the problem really bothers you, you can hire Microsoft Support, although that will cost you some money. If you do that, please let us know the outcome.
    Don Macnaughton

  • Update column data to Upper Case in parent and child table

    Hi ,
I am facing an issue while updating a column value to upper case in a parent table and a child table. How can I do that?
When updating the parent row:
ORA-02292: integrity constraint (XXXXXXXXXXXXXX_FK) violated - child record found
When updating the corresponding child row:
ORA-02291: integrity constraint (XXXXXXXXXXXXXXXX_FK) violated - parent key not found
How can I update it in both places?
    Regards,
    AA

I am facing an issue while updating a column value to upper case in a parent table and a child table. How can I do that?
    Why do you need to do that?
    That is just ONE of several questions you should answer before you start modifying your data.
    1. What is your 4 digit Oracle version? (result of SELECT * FROM V$VERSION)
2. If both values are in the same case, what difference does it make what that case is? Then you don't need to alter your original data.
    3. What is the source of the column values you are using now? If you change your data to upper case it will no longer be identical to the source data.
    4. What is your plan for enforcing future values to be stored in UPPER case? Are you going to use a trigger? Have you written and tested such a trigger to see if it will even work the way you expect?
    5. Why aren't you using a surrogate key instead of a 'business' data item? You have just demonstrated one reason why surrogate keys can be useful: their actual value is NOT important.
    You should reexamine your problem and architecture and consider other alternatives.
One alternative is to add a new 'surrogate key' column to use as the primary key. Just create a new sequence and use a trigger to populate the new column. Your current plan will require a trigger to perform the case conversion anyway, so instead just use that trigger to provide the key value.
If the change is being done to facilitate searching, you could just add a VIRTUAL column UPPER_MY_COLUMN and index that instead; a sketch follows. Then you could search on the new virtual column and the data values would still be identical to the original data source.
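A sketch of that virtual-column alternative, assuming Oracle 11g or later (virtual columns are not available before 11g) and placeholder names (table T, column MY_COLUMN):
    ALTER TABLE t ADD (
      upper_my_column VARCHAR2(100)
        GENERATED ALWAYS AS (UPPER(my_column)) VIRTUAL
    );
    CREATE INDEX t_upper_idx ON t (upper_my_column);
    -- Searches can filter on UPPER_MY_COLUMN while the stored data
    -- stays identical to the original source.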

  • Use of T168F and T168 Tables in System

    Hi everybody
I'm making a development whose purpose is to assign the release strategy to RFQs when the user sets the unit prices for items in transaction ME47.
This action is executed in SAPMM06E in routine strategie_ermitteln, which calls FM ME_REL_STRATEGIE_EKKO, but before the call there is a check at the routine's beginning: t160-vorga NE vorga-angb (vorga-angb is a constant 'AG').
t160-vorga has the value 'A' for ME41 (creating an RFQ), so in that transaction the strategy is assigned.
But in ME47, t160-vorga has the value 'AG', so the FM is not called.
I solved this by setting t160-vorga = 'A' (only in memory) in EXIT_SAPMM06E_017, and the release strategy is assigned OK.
But when the user modifies a price using a button, the program performs a validation in routine FCODE, checking that the value of t160-vorga exists in the T168F and T168 tables with field fcode = 'KO'. Because I modified the value from 'AG' to 'A', this record does not exist and the program raises an error. But if I don't modify t160-vorga, the release is not assigned, because of the check in routine strategie_ermitteln.
So I'm thinking of changing my development to call FM ME_REL_STRATEGIE_EKKO myself in EXIT_SAPMM06E_012 on save, but I don't like this solution because I would need to copy not only the FM call but also all the code involved in filling its parameters.
Another solution, I think, is to create the record for vorga = 'A', fcode = 'KO' in the T168F and T168 tables.
If somebody can help, my doubts are:
1. What does creating this record in tables T168/T168F imply; that is, what is the usage of these tables in the system?
2. Where can I see the meaning of the t168-vorga values (AG, A, K, F, etc.)?
3. In which transaction can I create records in these tables?
4. Could there be any problem if I add those records to these tables?
Any help will be appreciated!
(Excuse the long topic text)
    Thanks
    Frank

    Hi Frank,
2. Look at table T167T (to find this, I went into SE11 with data element VORGA and checked where it is used as a table field...).
3. Table T167T can be updated with the SM30/SM31 transactions. But you will get a warning message.
4. See the message in the previous answer.
    Rgd
    Frédéric

  • SQL*Plus 'Copy' command and LONG datatypes

Hi. I'm using Oracle 9.2.0.5 and want to copy LONG to LONG without using an interface in VB or any other programming language.
Some of the fields (plain text) are greater than 32 KB, and I tried the SQL*Plus COPY command, without success.
(For compatibility reasons I can't convert LONG to CLOB; I need to copy LONG to LONG.)
    This is the example I'm working with:
    Table Source_LONG (ID number, DATA long)
    Table Destination_LONG (ID number, DATA long)
The SQL*Plus commands (connected as test_database@environment):
set long 100000
copy from test_database/test_database@environment insert destination_long (id, data) using select id, data from source_long
I tried using both FROM and TO, but got the same results. The fields are copied into destination_long, but they are truncated at 32768 bytes, even with the LONG variable set to 100000. Any ideas?
    Thanks.

I'm working with 2 similar tables with this structure:
SOURCE_LONG (ID number, DATA long)
DESTINATION_LONG (ID number, DATA long)
SOURCE_LONG contains two rows:
ID DATA
1 hello
3 ....text bigger than 32 KB...
I tried your solution:
insert into destination_long(id,data) (select id, to_lob(data) from source_long);
It inserts 2 rows, but only the ID is filled. The DATA is empty in both cases :-(
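For what it's worth, TO_LOB is only supported when the corresponding column of the target table is a LOB, which would explain the empty DATA column here. A sketch of the supported form, using a CLOB staging table (this shows where TO_LOB works; it does not solve the LONG-to-LONG requirement):
    CREATE TABLE destination_clob (id NUMBER, data CLOB);

    INSERT INTO destination_clob (id, data)
      SELECT id, TO_LOB(data) FROM source_long;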

  • Migration of LONG and LONG RAW datatype

Just upgraded a DB from 8.1.7.4 to 10.2.0.1.0. The post-upgrade tasks speak of migrating tables with LONG and LONG RAW datatypes to CLOBs or BLOBs. All of my tables in the DB with LONG or LONG RAW datatypes are in the SYS, SYSMAN, MDSYS, or SYSTEM schemas (as per a query of DBA_TAB_COLUMNS). Are these to be converted? Or does Oracle want us to convert user data only (USER_TAB_COLUMNS)?

USER_TAB_COLUMNS tells you the columns in the tables owned by the current user. There may well be many users on your system that you created that contain objects. I suppose you could log in to each of those schemas and query their USER_TAB_COLUMNS view, but it's probably easier to query DBA_TAB_COLUMNS with an appropriate WHERE clause on the owner of the objects.
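A sketch of that DBA_TAB_COLUMNS query, excluding the Oracle-maintained schemas named in the question:
    SELECT owner, table_name, column_name, data_type
    FROM   dba_tab_columns
    WHERE  data_type IN ('LONG', 'LONG RAW')
    AND    owner NOT IN ('SYS', 'SYSTEM', 'SYSMAN', 'MDSYS');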
    Justin

  • How to use CDHDR and CDPOS tables

    Hello Gurus,
How can data be extracted from the CDHDR and CDPOS tables?
For example, to find the changes to the material master, what object needs to be given in the OBJECTCLAS/OBJECTID values, etc.?
I tried restricting by transaction MM02 and the valid-from and valid-to dates, but it takes a long time.
I would appreciate it if you could provide the objects that can be used to track changed data, such as production orders, etc.
    Thanks
    Aadhya

    Hi Aadhya,
For the Change Document Object (OBJECTCLAS), you can use the following values:
MATERIAL = material master
ORDER = production order/maintenance order
EINKBELEG = purchasing document
VERKBELEG = sales document
The Object Value (OBJECTID) is usually the object's number, such as the material number, production order number, etc.
Get the Document Number (CHANGENR) and go to CDPOS to see the change details.
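CDHDR is a transparent table, so the lookup is a straightforward select; in an ABAP report it would be an Open SQL SELECT of the same shape. A sketch (the material number is invented and must be padded to the stored OBJECTID format):
    SELECT changenr, username, udate
    FROM   cdhdr
    WHERE  objectclas = 'MATERIAL'
    AND    objectid   = '000000000000001234';
    -- Then read CDPOS by OBJECTCLAS, OBJECTID and CHANGENR for the
    -- field-level old/new values.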
    Regards,
    Julian

  • Cluster and pooled tables

When do we use cluster tables, pooled tables, and transparent tables?

For tables:
http://www.erpgenie.com/abap/tables.htm
Pooled Tables, Table Pools, Cluster Tables, and Table Clusters
These types of tables are not transparent in the sense that they are not legible or manageable directly using the underlying database system tools. They are managed from within the R/3 environment, from the ABAP dictionary and also at runtime when they are loaded into application memory. Pool and cluster tables are logical tables. Physically, these logical tables are arranged as records of transparent tables. The pool and cluster tables are grouped together in other tables, which are of the transparent type. The tables that group together pool tables are known as table pools, or just pools; similarly, table clusters, or just clusters, are the tables which group cluster tables. Not all operations that can be performed on transparent tables can be executed on pool or cluster tables.
For instance, you can manage these tables using Open SQL calls from ABAP, but not Native SQL. These tables are meant to be buffered and loaded in memory, because they are commonly used for storing internal control information and other types of data with no external (business) relevance. SAP recommends that tables of pool or cluster type be used exclusively for control information such as program parameters, documentation, and so on. Transaction and application data should be stored in transparent tables.
Table Pools
From the point of view of the underlying DBMS, as from the point of view of the ABAP dictionary, a table pool is a transparent table containing a group of pooled tables which, when created, were assigned to this table pool.
Field     Type       Description
TABNAME   CHAR(10)   Table name
VARKEY    CHAR(n)    Maximum key length, n =< 110
DATALN    INT2(5)    Length of the VARDATA record returned
VARDATA   RAW(m)     Maximum length of the data; varies according to DBMS
Table Clusters
Similarly to pooled tables, cluster tables are logical tables which, when created, are assigned to a table cluster. Therefore, a table cluster, or just cluster, groups together several tables of cluster type. Several logical rows from different cluster tables are brought together in a single physical record. The records from the cluster tables assigned to a cluster are thus stored in a single common table in the database. A cluster contains a transparent cluster key, which must be located at the start of the key of all logical cluster tables to be included in the cluster. As well, a cluster contains a long field (VARDATA), which contains the data of the cluster tables for this key. If the data does not fit into a field, continuation records are created.
Field      Type       Description
CLKEY1     CHAR(*)    First key field
CLKEY2     CHAR(*)    Second key field
...
CLKEYN     CHAR(*)    nth key field
PAGENO     INT2(5)    Number of the next page
TIMESTMP   CHAR(14)   Time stamp
PAGELG     INT2(5)    Length of the VARDATA record returned
VARDATA    RAW(*)     Maximum length of the data section; varies according to database system
Working with Tables
The dictionary includes many functions for working with tables. There are five basic operations you can perform on tables: display, create, delete, modify, copy. Please do not confuse displaying a table with displaying the table entries (table contents). In order to display a table, it must previously exist; otherwise the system will display an error message in the status bar. For the following example, the table TABNA is used. To display this table, from the main dictionary screen, enter the table name in the Object name input field with the radio button selected next to Tables. Then click on the Display button at the bottom of the screen, or press the F7 function key, or, alternatively, select Dictionary object -> Display from the menu.
In this screen, you can see table information such as:
- Table type, shown next to the name of the object. In the example, it is a transparent table.
- Short text description.
- Name of the user who made the last change, and the date of the change.
- Master language.
- Table status. On the screen, you can see this table is saved and active.
- Development class. For information on development classes, refer to Chap. 6.
- Delivery class, which sets the maintenance group for the table. It controls how tables will behave during client copy procedures, upgrades, and so forth.
- Tab. Maint. Allowed flag, which indicates whether you can generate a screen for maintaining table entries.
Then, on the lower part of the screen, you can see the table fields with all associated characteristics such as:
- Field name.
- Key indicator. When set, this field is the primary key, or part of it.
- Data element.
- Basic data type.
- Length.
- Check table.
- Short text, describing the field.
Additional information about the table can be displayed by selecting the corresponding functions from the menu or directly from the application toolbar, such as keys, indexes, or technical settings.
    Regards,
    Balaji
    **Rewards for helpful answers

  • Cannot get long table on second page of BSP

I want to create a BSP with 2+ pages. The first page is a letter, and the second and subsequent pages contain a table which could be one or more pages long.
If I put the letter in the main window of the first page, and the table in the main window of the second page, I do not get a table.
If I put the letter in the main window of the first page, and the table in a secondary window of the second page, then I get the table. But it truncates at the end of the page. It does not go on to multiple pages.
If I put the table on the first page in the main window, then I get the entire table for as many pages as it takes to contain it.
I want the letter on the first page, and the entire table on as many subsequent pages as it takes to contain it. What am I doing wrong?

    Hi,
    actually I did not get it.
    Do you work with frames?
    Do you use server side cookies?
    What are you doing in the events of page1 and page2?
    Best regards
    Renald

  • Cannot get long table on second page of Smartform

    I want to create a Smartform with 2+ pages. The first page is a letter, and the second and subsequent pages contain a table which could be one or more pages long.
    If I put the letter in the main window of the first page, and the table in the main window of the second page, I do not get a table.
    If I put the letter in the main window of the first page, and the table in a secondary window of the second page, then I get the table. But it truncates at the end of the page. It does not go on to multiple pages.
    If I put the table on the first page in the main window, then I get the entire table for as many pages as it takes to contain it.
    I want the letter on the first page, and the entire table on as many subsequent pages as it takes to contain it. What am I doing wrong?

    Hi ,
Define the letter on the first page: if the data for the letter is constant, you can define it in a secondary window; if not, use the main window.
On the second page, define the table in the main window, and also define a third page that is identical to the second page.
Set the next page of the second page to the third page, and the next page of the third page to the third page itself, so the table can flow over as many pages as it needs.
Hope this will help.
    Regards,
    Rohan.

  • Any problem using bseg and bkpf tables

For FI/CO details I am using the BSEG and BKPF tables.
I have noticed that programming against them is difficult because they are cluster tables.
Please suggest other tables.
Will any problem arise if I use these tables?

    >
    mysvijai197715 wrote:
    > Hi Aniesh,
    >
> BSEG and BKPF are cluster tables. They contain transparent tables like BSIS, BSIK, etc. For example, to get vendor details use BSIK. If you use BSEG and BKPF, searches will take a long time; if your company has a lot of data, your server may even shut down. So use only transparent tables like BSIS, BSIK, etc.
    >
    >
    > Regards
    > R.Vijai
Incorrect. BKPF is a transparent table, not a cluster table, and you can use it just like any other transparent table. BSEG is a cluster table, but there is no problem selecting from it as long as you use the key of BUKRS, BELNR, GJAHR - unless you are selecting a very large amount of data, but that can cause problems when selecting from any type of table.
The advantage of using BSEG over the other FI line item tables such as BSIS and BSIK is that it holds all the lines of an FI document, while the others hold only a subset; e.g. BSIK holds only lines that contain a vendor reference, and BSAS holds only cleared G/L account lines. Though you can only use it when you have the key. If you need to search by vendor, you can use BSIK as a starting point, but since I usually need to get hold of all the lines of an FI document, I then have to select from BSEG anyway.
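To illustrate the keyed access (in an ABAP program this is an Open SQL SELECT with an INTO TABLE clause; shown here as a plain SQL sketch with an invented document key):
    SELECT *
    FROM   bseg
    WHERE  bukrs = '1000'        -- company code
    AND    belnr = '0100000001'  -- FI document number
    AND    gjahr = '2009';       -- fiscal year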

  • Fact and dimension table partition

My team is implementing a new data warehouse. I would like to know when we should plan the partitioning of the fact and dimension tables: before the data comes in, or after?

    Hi,
It is recommended to partition the fact table (where we will have huge data volumes). Automate the partitioning so that each day a new partition is created to hold the latest data (by splitting the previous partition in two). Best practice is to create the partitions on transaction timestamps, so load the incremental data into an empty table (Table_IN) and then switch that data into the main table (Table); a sketch of the switch follows. Make sure both tables (Table and Table_IN) are on the same filegroup.
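A sketch of that switch step (names as in the description above; Table_IN must have the same structure as the target, sit on the same filegroup, and carry a check constraint matching the target partition's range):
    ALTER TABLE Table_IN SWITCH TO [Table] PARTITION 5;  -- partition number illustrative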
Refer to the content below for detailed info.
Designing and Administrating Partitions in SQL Server 2012
A popular method of better managing large and active tables and indexes is the use of partitioning. Partitioning is a feature for segregating I/O workload within a SQL Server database so that I/O can be better balanced against available I/O subsystems while providing better user response time, lower I/O latency, and faster backups and recovery. By partitioning tables and indexes across multiple filegroups, data retrieval and management is much quicker because only subsets of the data are used, while the integrity of the database as a whole remains intact.
Tip
Partitioning is typically used for administrative or certain I/O performance scenarios. However, partitioning can also speed up some queries by enabling lock escalation to a single partition, rather than to an entire table. You must allow lock escalation to move up to the partition level by setting it with either the Lock Escalation option of the Database Options page in SSMS or by using the LOCK_ESCALATION option of the ALTER TABLE statement.
After a table or index is partitioned, data is stored horizontally across multiple filegroups, so groups of data are mapped to individual partitions. Typical scenarios for partitioning include large tables that become very difficult to manage, tables that are suffering performance degradation because of excessive I/O or blocking locks, table-centric maintenance processes that exceed the available time for maintenance, and moving historical data from the active portion of a table to a partition with less activity.
Partitioning tables and indexes warrants a bit of planning before putting them into production. The usual approach to partitioning a table or index follows these steps (a T-SQL sketch follows the list):
1. Create the filegroup(s) and file(s) used to hold the partitions defined by the partitioning scheme.
2. Create a partition function to map the rows of the table or index to specific partitions based on the values in a specified column. A very common partitioning function is based on the creation date of the record.
3. Create a partitioning scheme to map the partitions of the partitioned table to the specified filegroup(s) and, thereby, to specific locations on the Windows file system.
4. Create the table or index (or ALTER an existing table or index) by specifying the partition scheme as the storage location for the partitioned object.
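A minimal T-SQL sketch of those four steps (the database, filegroup, function, scheme, table, and boundary dates are all invented for illustration):
    -- 1. Filegroup and file
    ALTER DATABASE SalesDB ADD FILEGROUP FG_2012Q2;
    ALTER DATABASE SalesDB ADD FILE
        (NAME = 'SalesDB_2012Q2', FILENAME = 'C:\Data\SalesDB_2012Q2.ndf')
        TO FILEGROUP FG_2012Q2;

    -- 2. Partition function: two boundary values -> three partitions
    CREATE PARTITION FUNCTION PF_OrderDate (datetime)
        AS RANGE RIGHT FOR VALUES ('2012-01-01', '2012-04-01');

    -- 3. Partition scheme: one filegroup per partition
    CREATE PARTITION SCHEME PS_OrderDate
        AS PARTITION PF_OrderDate TO ([PRIMARY], [PRIMARY], FG_2012Q2);

    -- 4. Table stored on the partition scheme
    CREATE TABLE Orders (Order_ID int, Order_Date datetime)
        ON PS_OrderDate (Order_Date);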
    Although Transact-SQL commands are available to perform every step described earlier, the Create Partition Wizard makes the entire process quick and easy through
    an intuitive point-and-click interface. The next section provides an overview of using the Create Partition Wizard in SQL Server 2012, and an example later in this section shows the Transact-SQL commands.
    Leveraging the Create Partition Wizard to Create Table and Index Partitions
The Create Partition Wizard can be used to divide data in large tables across multiple filegroups to increase performance. It can be invoked by right-clicking any table or index, selecting Storage, and then selecting Create Partition. The first step is to identify which columns to partition by reviewing all the columns available in the Available Partitioning Columns section located on the Select a Partitioning Column dialog box (Figure 3.13, "Selecting a partitioning column").
The next screen is called Select a Partition Function. This page is used for specifying the partition function where the data will be partitioned; the options include using an existing partition function or creating a new one. The subsequent page is called New Partition Scheme. Here a DBA maps the partitions of the table being partitioned to the desired filegroups; either an existing partition scheme can be used or a new one created. The final screen, the Map Partitions page, is used for doing the actual mapping: specify the filegroup to be used for each partition and then enter a range for the values of the partitions.
Note
By opening the Set Boundary Values dialog box, a DBA can set boundary values based on dates (for example, partition everything in a column after a specific date). The data types are based on dates.
Designing table and index partitions is a DBA task that typically requires a joint effort with the database development team. The DBA must have a strong understanding of the database, tables, and columns to make the correct choices for partitioning. For more information on partitioning, review Books Online.
Enhancements to Partitioning in SQL Server 2012
SQL Server 2012 now supports as many as 15,000 partitions. When using more than 1,000 partitions, Microsoft recommends that the instance of SQL Server have at least 16 GB of available memory. This recommendation particularly applies to partitioned indexes, especially those that are not aligned with the base table or with the clustered index of the table. Other Data Manipulation Language (DML) and Data Definition Language (DDL) statements may also run short of memory when processing a large number of partitions.
Certain DBCC commands may take longer to execute when processing a large number of partitions. On the other hand, a few DBCC commands can be scoped to the partition level and, if so, can be used to perform their function on a subset of data in the partitioned table.
Queries may also benefit from a new query engine enhancement called partition elimination. SQL Server uses partition elimination automatically if it is available. Here's how it works. Assume a table has four partitions, with all the data for customers whose names begin with R, S, or T in the third partition. If a query's WHERE clause filters on customer name looking for 'System%', the query engine knows that it needs only partition three to answer the request. Thus, it might greatly reduce I/O for that query. On the other hand, some queries might take longer if there are more than 1,000 partitions and the query is not able to perform partition elimination.
Finally, SQL Server 2012 introduces some changes and improvements to the algorithms used to calculate partitioned index statistics. Primarily, SQL Server 2012 samples rows in a partitioned index when it is created or rebuilt, rather than scanning all available rows. This may sometimes result in somewhat different query behavior compared to the same queries running on earlier versions of SQL Server.
Administrating Data Using Partition Switching
Partitioning is useful to access and manage a subset of data while losing none of the integrity of the entire data set. There is one limitation, though. When a partition is created on an existing table, new data is added to a specific partition or to the default partition if none is specified. That means the default partition might grow unwieldy if it is left unmanaged. (This concept is similar to how a clustered index needs to be rebuilt from time to time to reestablish its fill factor setting.)
Switching partitions is a fast operation because no physical movement of data takes place. Instead, only the metadata pointers to the physical data are altered. You can alter partitions using SQL Server Management Studio or with the ALTER TABLE...SWITCH Transact-SQL statement. Both options enable you to ensure partitions are well maintained. For example, you can transfer subsets of data between partitions, move tables between partitions, or combine partitions together. Because the ALTER TABLE...SWITCH statement does not actually move the data, a few prerequisites must be in place:
• Partitions must use the same column when switching between two partitions.
• The source and target table must exist prior to the switch and must be on the same filegroup, along with their corresponding indexes, index partitions, and indexed view partitions.
• The target partition must exist prior to the switch, and it must be empty, whether adding a table to an existing partitioned table or moving a partition from one table to another. The same holds true when moving a partitioned table to a nonpartitioned table structure.
• The source and target tables must have the same columns in identical order with the same names, data types, and data type attributes (length, precision, scale, and nullability). Computed columns must have identical syntax, as well as primary key constraints. The tables must also have the same settings for the ANSI_NULLS and QUOTED_IDENTIFIER properties. Clustered and nonclustered indexes must be identical. ROWGUID properties and XML schemas must match. Finally, settings for in-row data storage must also be the same.
• The source and target tables must have matching nullability on the partitioning column. Although both NULL and NOT NULL are supported, NOT NULL is strongly recommended.
Likewise, the ALTER TABLE...SWITCH statement will not work under certain circumstances:
• Full-text indexes, XML indexes, and old-fashioned SQL Server rules are not allowed (though CHECK constraints are allowed).
• Tables in a merge replication scheme are not allowed. Tables in a transactional replication scheme are allowed with special caveats. Triggers are allowed on tables but must not fire during the switch.
• Indexes on the source and target table must reside on the same partition as the tables themselves.
• Indexed views make partition switching difficult and have a lot of extra rules about how and when they can be switched. Refer to the SQL Server Books Online if you want to perform partition switching on tables containing indexed views.
• Referential integrity can impact the use of partition switching. First, foreign keys on other tables cannot reference the source table. If the source table holds the primary key, it cannot have a primary or foreign key relationship with the target table. If the target table holds the foreign key, it cannot have a primary or foreign key relationship with the source table.
In summary, simple tables can easily accommodate partition switching. The more complexity a source or target table exhibits, the more likely that careful planning and extra work will be required to even make partition switching possible, let alone efficient.
Here's an example where we create a partitioned table using a previously created partition scheme, called Date_Range_PartScheme1. We then create a new, nonpartitioned table identical to the partitioned table, residing on the same filegroup. We finish up by switching the data from the partitioned table into the nonpartitioned table:
CREATE TABLE TransactionHistory_Partn1 (Xn_Hst_ID int, Xn_Type char(10))
    ON Date_Range_PartScheme1 (Xn_Hst_ID);
GO
CREATE TABLE TransactionHistory_No_Partn (Xn_Hst_ID int, Xn_Type char(10))
    ON main_filegroup;
GO
ALTER TABLE TransactionHistory_Partn1 SWITCH PARTITION 1 TO TransactionHistory_No_Partn;
GO
The next section shows how to use a more sophisticated, but very popular, approach to partition switching called a sliding window partition.
Example and Best Practices for Managing Sliding Window Partitions
Assume that our AdventureWorks business is booming. The sales staff, and by extension the AdventureWorks2012 database, is very busy. We noticed over time that the TransactionHistory table is very active as sales transactions are first entered and are still very active over their first month in the database. But the older the transactions are, the less activity they see. Consequently, we'd like to automatically group transactions into four partitions per year, basically containing one quarter of the year's data each, in a rolling partitioning. Any transaction older than one year will be purged or archived.
The answer to a scenario like the preceding one is called a sliding window partition, because we are constantly loading new data in and sliding old data over, eventually to be purged or archived. Before you begin, you must choose either a LEFT partition function window or a RIGHT partition function window:
1. How data is handled varies according to the choice of LEFT or RIGHT partition function window:
• With a LEFT strategy, partition1 holds the oldest data (Q4 data), partition2 holds data that is 6 to 9 months old (Q3), partition3 holds data that is 3 to 6 months old (Q2), and partition4 holds recent data less than 3 months old.
• With a RIGHT strategy, partition4 holds the oldest data (Q4), partition3 holds Q3 data, partition2 holds Q2 data, and partition1 holds recent data.
• Following the best practice, make sure there are empty partitions on both the leading edge (partition0) and trailing edge (partition5) of the partition range.
• RIGHT range functions usually make more sense to most people because it is natural to start ranges at their lowest value and work upward from there.
2. Assuming that a RIGHT partition function window is used, we first use the SPLIT subclause of the ALTER PARTITION FUNCTION statement to split empty partition5 into two empty partitions, 5 and 6.
3. We use the SWITCH subclause of ALTER TABLE to switch out partition4 to a staging table for archiving, or simply to drop and purge the data. Partition4 is now empty.
4. We can then use MERGE to combine the empty partitions 4 and 5, so that we're back to the same number of partitions as when we started. This way, partition3 becomes the new partition4, partition2 becomes the new partition3, and partition1 becomes the new partition2.
5. We can use SWITCH to push the new quarter's data into the spot of partition1.
A rough T-SQL rendering of one such cycle follows.
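For concreteness, a hedged sketch of steps 2 through 4, reusing the invented PF_OrderDate/PS_OrderDate names from the sketch earlier in this post (boundary dates and table names are likewise invented):
    -- 2. Split the empty rightmost partition so a new empty one exists.
    ALTER PARTITION SCHEME PS_OrderDate NEXT USED [PRIMARY];
    ALTER PARTITION FUNCTION PF_OrderDate() SPLIT RANGE ('2013-01-01');

    -- 3. Switch the oldest populated partition out to a staging table
    --    (same structure and filegroup as the source partition).
    ALTER TABLE TransactionHistory SWITCH PARTITION 2 TO TransactionHistory_Stage;

    -- 4. Merge away the now-empty boundary at the old end.
    ALTER PARTITION FUNCTION PF_OrderDate() MERGE RANGE ('2012-01-01');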
Tip
Use the $PARTITION system function to determine where a partition function places values within a range of partitions.
Some best practices to consider for using a sliding window partition include the following:
• Load the newest data into a heap, and then add indexes after the load is finished. Delete the oldest data or, when working with very large data sets, drop the partition with the oldest data.
• Keep an empty staging partition at the leftmost and rightmost ends of the partition range to ensure that partition splits (when loading new data) and merges (after unloading old data) do not cause data movement.
• Do not split or merge a partition already populated with data, because this can cause severe locking and explosive log growth.
• Create the load staging table in the same filegroup as the partition you are loading.
• Create the unload staging table in the same filegroup as the partition you are deleting.
• Don't load a partition until its range boundary is met. For example, don't create and load a partition meant to hold data that is one to two months old before the current data has aged one month. Instead, continue to allow the latest partition to accumulate data until the data is ready for a new, full partition.
• Unload one partition at a time.
• The ALTER TABLE...SWITCH statement issues a schema lock on the entire table. Keep this in mind if regular transactional activity is still going on while a table is being partitioned.
    Thanks Shiven:) If Answer is Helpful, Please Vote
