Performance problems when filtering and excluding hierarchy node

Dear all,
I set up a query with a hierarchy attached to it, where I use a hard-coded exclusion of a specific node of this hierarchy AND a variable which users can use for flexible filtering purposes.
When I execute the query, the runtime is unimaginably high, and I really doubt whether the underlying SQL statement is generated (or executed) correctly. Before I start tracing everything, I want to know whether you have encountered similar problems with this kind of selection and de-selection in a query.
Thanks,
Andreas

Hi Andreas,
I read in one of the performance docs that
"Inclusion gives better performance than exclusion."
I have never tried it myself, but you can check whether there is any performance improvement by including the nodes you need rather than excluding the ones you don't need.
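Roughly speaking, the difference shows up in the generated WHERE clause. A sketch with made-up table and node names (not the actual BW-generated SQL):

-- Exclusion: the database must qualify every row OUTSIDE the node's leaf set,
-- which usually defeats index-based access.
SELECT SUM(f.amount)
FROM fact_table f
WHERE f.costcenter NOT IN
      (SELECT h.costcenter FROM hier_leaves h WHERE h.node = 'NODE_X');

-- Inclusion: a positive leaf set can normally be resolved via an index.
SELECT SUM(f.amount)
FROM fact_table f
WHERE f.costcenter IN
      (SELECT h.costcenter FROM hier_leaves h WHERE h.node = 'NODE_Y');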
Let me know how it goes.
Ashish

Similar Messages

  • Performance problem when running a personalization rule

    We have a serious performance problem when running a personalization rule.
    The rule is defined like this:
    Definition
    Rule Type: Content
    Content Type: LoadedData
    Name: allAnnouncements
    Description: all announcements of types: announcement, deal, new release,
    tip of the day
    If the user has the following characteristics:
    And when:
    Then display content based on:
    (CONTENT.RessourceType == announcement) or (CONTENT.RessourceType == deal)
    or (CONTENT.RessourceType == new release) or (CONTENT.RessourceType == tip
    of the week)
    and CONTENT.endDate > now
    and CONTENT.startDate <= now
    END---------------------------------
    and is invoked in a JSP page like this:
    <%String customQuery = "(CONTENT.language='en') && (CONTENT.Country='nl' || CONTENT.Country='*') && (!(CONTENT.excludeIds like '*#7#*')) && (CONTENT.userType='retailer')"%>
    <pz:contentselector
    id="cdocs"
    ruleSet="jdbc://com.beasys.commerce.axiom.reasoning.rules.RuleSheetDefinitionHome/b2boost"
    rule="allAnnouncements"
    sortBy="startDate DESC"
    query="<%=customQuery%>"
    contentHome="<%=ContentHelper.DEF_DOCUMENT_MANAGER_HOME%>" />
    The customQuery is constructed at runtime from user information, and cannot be constructed with the rules administration interface.
    When I turn on debugging mode, I can see that the rule is parsed and a SQL
    query is generated, with the correct parameters.
    This is the generated query (with the substitutions):
    select
    WLCS_DOCUMENT.ID,
    WLCS_DOCUMENT.DOCUMENT_SIZE,
    WLCS_DOCUMENT.VERSION,
    WLCS_DOCUMENT.AUTHOR,
    WLCS_DOCUMENT.CREATION_DATE,
    WLCS_DOCUMENT.LOCKED_BY,
    WLCS_DOCUMENT.MODIFIED_DATE,
    WLCS_DOCUMENT.MODIFIED_BY,
    WLCS_DOCUMENT.DESCRIPTION,
    WLCS_DOCUMENT.COMMENTS,
    WLCS_DOCUMENT.MIME_TYPE
    FROM
    WLCS_DOCUMENT
    WHERE
    ((((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'announcement'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'deal'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'new release'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = ''
    AND WLCS_DOCUMENT_METADATA.VALUE = 'tip of the week'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'press release'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'endDate'
    AND WLCS_DOCUMENT_METADATA.VALUE > '2001-10-22 15:53:14.768'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'startDate'
    AND WLCS_DOCUMENT_METADATA.VALUE <= '2001-10-22 15:53:14.768'
    AND ((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'language'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'en'
    AND ((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'nl'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
    AND WLCS_DOCUMENT_METADATA.VALUE = '*'
    AND NOT (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'excludeIds'
    AND WLCS_DOCUMENT_METADATA.VALUE LIKE '%#7#%' ESCAPE '\'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'userType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'retailer'
    At this moment, the server makes the user wait more than 10 minutes for the query to execute.
    This is what I found out about the problem:
    1) When I run the query on an Oracle SQL client (we are using Oracle 8.1.7.0), it takes 5-10 seconds.
    2) If I remove the second term of (CONTENT.Country='nl' || CONTENT.Country='*') in the custom query, thus restricting to CONTENT.Country='nl', the performance is OK.
    3) There are currently more or less 130 records in the DB that have Country='*'.
    4) When I run the page on our QA server (Solaris), which is at the same time our Oracle server, the response time is OK, but if I run it on our development server (W2K), the response time is ridiculously long.
    5) The problem also happens if I add the term (CONTENT.Country='nl' || CONTENT.Country='*') to the rule definition and remove this part from the custom query.
    Am I missing something? Am I using the personalization server correctly?
    Is this performance difference between QA and DEV due to differences in the OS?
    Thank you,
    Luis Muñiz

    Luis,
    I think you are working through Support on this one, so hopefully you are in good shape.
    For others who are seeing this same performance issue with the reference CM implementation, there is a patch available via Support for the 3.2 and 3.5 releases that solves this problem.
    This issue is being tracked internally as CR060645 for WLPS 3.2 and CR055594 for WLPS 3.5.
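    In the meantime, one generic mitigation for this access pattern (my own suggestion, not the content of the patch, and assuming VALUE is an indexable VARCHAR column) is a composite index covering the predicates of the repeated correlated subqueries on the metadata table:
    -- Hypothetical index; verify against your schema before using.
    CREATE INDEX WLCS_DOC_META_NAME_VAL_ID
        ON WLCS_DOCUMENT_METADATA (NAME, VALUE, ID);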
    Regards,
    PJL
    "Luis Muniz" <[email protected]> wrote:
    We have a serious performance problem when running a personalization
    rule.
    The rule is defined like this:
    Definition
    Rule Type: Content
    Content Type: LoadedData
    Name: allAnnouncements
    Description: all announcements of types: announcement, deal, new release,
    tip of the day
    If the user has the following characteristics:
    And when:
    Then display content based on:
    (CONTENT.RessourceType == announcement) or (CONTENT.RessourceType ==
    deal)
    or (CONTENT.RessourceType == new release) or (CONTENT.RessourceType ==
    tip
    of the week)
    and CONTENT.endDate > now
    and CONTENT.startDate <= now
    END---------------------------------
    and is invoked in a JSP page like this:
    <%String customQuery = "(CONTENT.language='en') && (CONTENT.Country='nl'
    || CONTENT.Country='*' ) && (!(CONTENT.excludeIds like '*#7#*')) &&
    (CONTENT.userType ='retailer')"%>
    <pz:contentselector
    id="cdocs"
    ruleSet="jdbc://com.beasys.commerce.axiom.reasoning.rules.RuleSheetDefinitio
    nHome/b2boost"
    rule="allAnnouncements"
    sortBy="startDate DESC"
    query="<%=customQuery%>"
    contentHome="<%=ContentHelper.DEF_DOCUMENT_MANAGER_HOME%>" />
    The customQuery is constructed at runtime from user information, and
    cannot
    be constructed with rules
    administration interface.
    When I turn on debugging mode, I can see that the rule is parsed and
    a SQL
    query is generated, with the correct parameters.
    This is the generated query (with the substitutions):
    select
    WLCS_DOCUMENT.ID,
    WLCS_DOCUMENT.DOCUMENT_SIZE,
    WLCS_DOCUMENT.VERSION,
    WLCS_DOCUMENT.AUTHOR,
    WLCS_DOCUMENT.CREATION_DATE,
    WLCS_DOCUMENT.LOCKED_BY,
    WLCS_DOCUMENT.MODIFIED_DATE,
    WLCS_DOCUMENT.MODIFIED_BY,
    WLCS_DOCUMENT.DESCRIPTION,
    WLCS_DOCUMENT.COMMENTS,
    WLCS_DOCUMENT.MIME_TYPE
    FROM
    WLCS_DOCUMENT
    WHERE
    ((((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'announcement'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'deal'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'new release'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = ''
    AND WLCS_DOCUMENT_METADATA.VALUE = 'tip of the week'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'press release'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'endDate'
    AND WLCS_DOCUMENT_METADATA.VALUE > '2001-10-22 15:53:14.768'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'startDate'
    AND WLCS_DOCUMENT_METADATA.VALUE <= '2001-10-22 15:53:14.768'
    AND ((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'language'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'en'
    AND ((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'nl'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
    AND WLCS_DOCUMENT_METADATA.VALUE = '*'
    AND NOT (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'excludeIds'
    AND WLCS_DOCUMENT_METADATA.VALUE LIKE '%#7#%' ESCAPE '\'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'userType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'retailer'
    At this moment, the server makes the user wait more than 10 min for the
    query to execute.
    This is what I found out about the problem:
    1)When I run the query on an Oracle SQL client (We are using Oracle 8.1.7.0)
    , it takes 5-10 seconds.
    2)If I remove the second term of (CONTENT.Country='nl' ||
    CONTENT.Country='*' ) in the custom query,
    thus retricting to CONTENT.Country='nl', the performance is OK.
    3)There are currently more or less 130 records in the DB that have
    Country='*'
    4)When I run the page on our QA server (solaris), which is at the same
    time
    our Oracle server,
    the response time is OK, but if I run it on our development server (W2K),
    response time is ridiculously long.
    5)The problem happens also if I add the term (CONTENT.Country='nl' ||
    CONTENT.Country='*' )
    to the rule definition, and I remove this part from the custom query.
    Am I missing something? Am I using the personalization server correctly?
    Is this performance difference between QA and DEV due to differences
    in the
    OS?
    Thank you,
    Luis Muñiz

  • Performance problem when printing - table TSPEVJOB big size

    Hi,
    in an SAP ERP system there is a performance problem when printing because table TSPEVJOB has many millions of entries.
    This table has reached a considerable size (about 90 GB); the DB size is 200 GB.
    Standard reports have been scheduled correctly and the kernel of the SAP ERP system is up to date.
    I've tried scheduling report RSPO1041 (instead of RSPO0041) to reduce the number of entries, but it is not possible to run it during normal operations because it locks the printing system.
    Do you know why this table has grown so much?
    Are there any reports to clean up this table, or other methods?
    Thanks.
    Maurizio Manera

    Dear,
    Please see the following notes:
    Note 706478 - Preventing Basis tables from increasing considerably
    Note 195157 - Application log: Deletion of logs
    Note 1566659 - Table TSPEVJOB is filled excessively
    Note 1293472 - Spool work process hangs on semaphore 43
    Note 1362343 - Deadlocks in table TSP02 and TSPEVJOB
    Note 996455 - Deadlocks on TSP02 or TSPEVJOB when you delete
    For more information, see the links below:
    http://www.sqlservercurry.com/2008/04/how-to-delete-records-from-large-table.html
    http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=91179
    http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=83525
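    The pattern those articles describe is a batched delete, so that locks and log growth stay bounded. A rough sketch (table, column, and batch size are made up for illustration; for SAP spool tables the supported route remains the standard reorganization reports such as RSPO1041):
    -- Delete obsolete rows in small batches until none remain.
    WHILE 1 = 1
    BEGIN
        DELETE TOP (10000) FROM dbo.BigLogTable
        WHERE CreatedOn < DATEADD(MONTH, -3, GETDATE());
        IF @@ROWCOUNT = 0 BREAK;
    END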
    If you have any doubts, let me know.
    Thomas
    Edited by: thomas_raja on Aug 7, 2011 12:29 PM

  • Performance problem when using CAPS LOCK piano input

    Dear reader,
    I'm very new to Logic and am running into a performance problem when using the CAPS LOCK piano keyboard for instrument input: when I'm not recording, everything is fine and the program responds instantly to my keystrokes, but as soon as I go into record mode there is sometimes a delay in the response time (I press a key and it takes up to half a second before the note is actually played).
    Is there anything I can do about this to improve performance (for example, turning off certain features of the application), or should I skip the CAPS LOCK keyboard altogether and go straight for an external MIDI keyboard?
    Thanks and regards,
    Tim Metz

    Does your project have Audio tracks, and just how heavy is it (how many tracks)? Also, what kind of Software Instrument do you use?

  • [svn] 3363: Fix performance problem when changing multiple DisplayObject-dependent properties.

    Revision: 3363
    Author: [email protected]
    Date: 2008-09-25 11:58:56 -0700 (Thu, 25 Sep 2008)
    Log Message:
    Fix performance problem when changing multiple DisplayObject-dependent properties. The call to assignDisplayObjects() is now batched up into the next commitProperties() call.
    Bugs: SDK-17033
    Reviewer: Deepa
    Ticket Links:
    http://bugs.adobe.com/jira/browse/SDK-17033
    Modified Paths:
    flex/sdk/trunk/frameworks/projects/flex4/src/flex/core/Group.as

    Hello sir,
    I need your help.
    I installed a fresh copy of Windows 7 from CD-ROM and then installed all the software.
    Now, after one day, a customer complained to me that the CD-ROM does not read any CD. I checked it as well: when I insert a CD it is not read, and when I double-click the CD-ROM icon it just ejects. What can I do about this? Please reply to my email address.
    [text removed for privacy]
    VIMAL

  • TS2634 Problem when syncing: "iTunes could not sync calendars to the iPhone because an error occurred while merging data"

    My problem: when syncing, an error message comes up: "iTunes could not sync calendars to the iPhone because an error occurred while merging data." I've tried turning the phone off and back on, and tried both USB ports, but I get the same thing. What can I do next, since it doesn't give me even one clue as to what the error might be?

    Before I started to resync calendars one by one as suggested in the troubleshooting article, I remembered something that came up when I first attempted a sync: a 'which one do you want to keep' message about a repeating calendar event, with three options, one from Calendar and two from iCal on the phone. I deleted that event, then went through the calendar resync one by one, and all seems OK now.
    In Music there is a way to find 'ghost' items that show up grayed out in the menus, so that you can try to reassociate or delete them. It would be nice to have a similar way to work with Calendar!
    Thanks for pointing me to the right article.

  • Performance problems when running PostgreSQL on ZFS and tomcat

    Hi all,
    I need help with some analysis and problem solution related to the below case.
    The long story:
    I'm running into some massive performance problems on two 8-way HP ProLiant DL385 G5 servers with 14 GB RAM and a ZFS storage pool in raidz configuration. The servers are running Solaris 10 x86 10/09.
    The configuration between the two is pretty much the same and the problem therefore seems generic for the setup.
    Within a non-global zone I’m running a tomcat application (an institutional repository) connecting via localhost to a Postgresql database (the OS provided version). The processor load is typically not very high as seen below:
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        49 postgres  749M  669M   4,7%   7:14:38  13%
         1 jboss    2519M 2536M    18%  50:36:40 5,9%
    We are not 100% sure why we run into performance problems, but when it happens we experience that the application slows down and swaps out (see below). When it settles, everything seems to return to normal. When the problem is acute, the application is totally unresponsive.
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        1 jboss    3104M  913M   6,4%   0:22:48 0,1%
    #sar -g 5 5
    SunOS vbn-back 5.10 Generic_142901-03 i86pc    05/28/2010
    07:49:08  pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
    07:49:13    27.67   316.01   318.58 14854.15     0.00
    07:49:18    61.58   664.75   668.51 43377.43     0.00
    07:49:23   122.02  1214.09  1222.22 32618.65     0.00
    07:49:28   121.19  1052.28  1065.94  5000.59     0.00
    07:49:33    54.37   572.82   583.33  2553.77     0.00
    Average     77.34   763.71   771.43 19680.67     0.00
    Making more memory available to tomcat seemed to worsen the problem, or at least didn't prove to have any positive effect.
    My suspicion is currently focused on PostgreSQL. Turning off fsync boosted performance and made the problem appear less often.
    An unofficial performance test of the database with "vacuum analyze" took 19 minutes on the server and only 1 minute on a desktop PC. This is horrific when taking the hardware into consideration.
    The short story:
    I'm trying different steps but running out of ideas. We've read that the database block size and file system record size should match: PostgreSQL uses 8 KB and ZFS defaults to 128 KB. I didn't find much information on the matter, so if anyone can help, please recommend how to make this change.
    Any other recommendations or ideas we could follow? We know from other installations that the above setup runs without a single problem on Linux on much smaller hardware, without specific tuning. What makes Solaris in this configuration so darn slow?
    Any help is appreciated, and I will try to provide additional information on request if needed.
    Thanks in advance,
    Kasper

    raidz isn't a good match for databases. Databases tend to require good write performance, for which mirroring works better.
    Adding a pair of SSDs as a ZIL would probably also help, but chances are it's not an option for you.
    You can change the record size with "zfs set recordsize=8k <dataset>".
    It will only take effect for newly written data, not existing data.

  • Run Time Error when trying to select hierarchy node

    Hi all,
    the situation is the following:
    I attempted to create a variable of type "hierarchy node" (for the cost center characteristic) in my planning area. I used the replacement type "user-defined value", checked "input allowed by user", and then created the user entry. In the selection condition I ran the match code, and when I try to select a single cost center (not a hierarchy node) by expanding the hierarchy tree, the system dumps with the following message:
    Runtime Errors         MESSAGE_TYPE_X
    Error analysis
    Short text of error message:
    Program is inconsistent -> see long text
    Technical information about the message:
    Diagnosis:
    An inconsistent program status has occurred. The program cannot be continued.
    System response:
    The system crashes.
    Procedure
             1.  Look in OSS for a note under the error message UPC099.
             2.  When you open a problem message, send the first pages of the system crash message, including the section 'Source code excerpt', with the message.
         Procedure for System Administration
        Message class....... "UPC"
        Number.............. 099
    Selecting a hierarchy node (not a single cost center but a text node grouping several cost centers), this problem does not occur.
    I appreciate (and will reward) any hints.
    Thanks in advance
    Fabio

    Hi,
    Go through the OSS notes for error message UPC099:
    Dump during hierarchy selection: 536694
    Dump when reading a hierarchy with intervals: 423953
    Regards,
    Siddhu
    Message was edited by: sidhartha

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for anything more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype
    ) xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
    <Id>123456789</Id>
    {for $e in $r/Element
        return
        <Element>
          <Subelement1>
            {$e/Subelement1/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement1>
    <Subelement2>
    {$e/Subelement2/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
    </Description>
    </Subelement2>
    <Subelement3>
    {$e/Subelement3/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement3>
    </Element>
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
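    For what it's worth, here is roughly how I understand that suggestion: shred the codes out of the record with XMLTABLE and join the code table relationally, so the optimizer is free to pick a hash join. A sketch against the test tables above, covering Subelement1 only (not tested at scale):
    SELECT c.code, c.description
    FROM records r,
         XMLTABLE('/Root/Element/Subelement1/Code'
                  PASSING r.xmlrec
                  COLUMNS code VARCHAR2(4) PATH '.') x,
         codes c
    WHERE r.ssn = '10000'
    AND c.code = x.code;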

  • Performance problems when using Premiere Elements for photo slideshows

    Hello,
    I had been using Premiere Elements 9 (PE9) to make a simple slideshow for my parents from their vacation trip and I ran into some serious performance problems. I had used it to create similar projects before, but not nearly as big. This one has about 260 photos, so basically it is 260 separate clips. I have a POWERHOUSE workstation (see below), so it isn't my PC. Even when PE9 crashes, my performance monitor shows that my CPU and RAM aren't even halfway utilized. I finally switched to Windows Movie Maker of all things and it worked seamlessly, amazing really. I'm wondering if I was just using PE9 for something other than what it was designed for, since there weren't really any video clips, just a ton of photos that I made into video clips, if that makes sense. Based upon my experience with this so far, I can't imagine using PE9 anymore for anything, really. I might need a more professional video editing program in the near future, although PE does seem to have a lot of features. How can I make sure it utilizes my workstation to its full potential? Here are my specs:
    PC
    Intel Core i7-2600K 4.6 GHz Overclocked
    ASUS P8P67 Deluxe Motherboard
    AMD Firepro V8800 Video Card
    Crucial 128 GB SATA 6Gb/s Solid State Drive (Operating System)
    Corsair Vengeance 16GB (4x4GB) Memory
    Corsair H60 Liquid CPU Cooler
    Corsair Professional Series Gold AX850 Power Supply
    Graphite Series 600T Mid-Tower Case
    Western Digital Caviar Black 1 TB SATA III Hard Drive
    Western Digital Caviar Black 2 TB SATA III Hard Drive
    Western Digital Green 3 TB SATA III Hard Drive
    Logitech Wireless Gaming Mouse G700
    I don’t play any games but it’s a great productivity mouse with 13 customizable buttons
    Wacom Intuos5 Pen Tablet
    Yes, this system is blazingly fast. I have yet to feel it slow down, even with Photoshop, Lightroom, InDesign, Illustrator and numerous other apps running at the same time. HOWEVER, Premiere Elements 9 has crashed NUMEROUS times, and every time my system wasn't even close to being fully taxed.
    Monitors – All run on the ATI V8800
    Dell Ultra Sharp 30 inch
    Samsung 27 Inch
    HAANS-G 28 Inch
    Herman Miller Embody Ergonomic Chair (one of my favorite items)

    Andy,
    There ARE some differences between PrE and PrPro w/ an approved CUDA-capable and MPE hardware acceleration-enabled nVidia video card, but those differences show up ONLY in the quality of the Scaling. The processing overhead is almost exactly the same, when it comes to handling the extra pixels.
    As of PrPro CS 5, two things changed:
    The max. size of Still Images went up from 4096 x 4096 pixels, to quite a bit larger (cannot recall the numbers now).
    The Scaling algorithms have been improved, though ONLY with the correct nVidia cards, with MPE hardware support enabled.
    Now, there CAN be another consideration between the two programs, in that PrPro CS 5 - CS 6 are 64-bit ONLY, so one needs a 64-bit computer and OS to run them. PrE can be either 32-bit or 64-bit, so one might, or might not, be taking advantage of a 64-bit program and OS. Still, the processing overhead will be almost identical; it's just that the 64-bit OS can spread it around a bit.
    I still recommend Scaling the large Still Images in PS, prior to Import, to keep that processing overhead as low as is possible. Scaled Still Images work just fine, and I have one Project with 3000+ Scaled Still Images, that edits just fine in PrPro, even on my older 32-bit workstation. Testing that same machine, and PrPro some years ago, I could ONLY work with up to 5 - 4096 x 4096 Stills, before things ground to a crawl.
    Now, Adobe AfterEffects handles large Still Images differently, so I just moved that test Project to AE, and added another 20 large Images, which edited just fine. IIRC, AE can handle Still Images up to 10K x 10K pixels, and that might have gone up, as of CS 5.
    Good luck, and hope that helps,
    Hunt

  • Repair Tool Performance Problem when Zoom on Image

    Aperture Friends,
    I experience a huge performance hit when editing images with the Repair/Clone tool, but only when "zoomed" into an image. The curious part is that the repair-while-zoomed-in function works fine for about 3 images; then, from the 4th image on, the software grinds to almost a complete stop for several seconds after selecting a source point and trying to use the repair. The mouse cursor even changes to a spinning rainbow DVD look during this time.
    These are 15-20 MB RAW files from a Canon 5D, and we are using the most recent version of Aperture on a nearly brand new iMac 27", of course with Snow Leopard and all patches applied. The library is less than 20 GB and we are not using referenced files as of yet.
    I realize that these may be large files, but it is curious that it works with blazing speed to start with, then degrades.
    Is this a common problem, or an identified issue? Might you more experienced folks know of a resolution?

    Hi!
    I'm experiencing a similar problem on my iMac i5, with the difference that it is slow all the time. Even after a reboot it's nearly impossible to use the clone/repair tool. My library is about 40 GB with referenced files (all on an external HD, FW800). I work with a 5D Mk II (which is still not supported for tethered shooting, by the way!).
    (Sorry if I nicked your thread.)
    best regards,
    günther

  • Performance problem when initializing a large array

    I am writing an application in C on a SUN T1 machine running Solaris 10. I compiled my application using cc in Sun Studio 11.
    In my code I allocate a large array on the heap. I then initialize every element in this array. When the array contains up to about 2 million elements, the performance is as I would expect -- increasing run time due to more elements to process, cache misses, and instructions to execute. However, once the array size is on the order of 4 million or more elements, the run time increases dramatically -- a large jump not in line with the other increases due to more elements, instructions, cache misses, etc.
    An interesting aspect is that I experience this problem regardless of element size. The break point in performance happens between 2 and 4 million elements, even if the elements are one byte or 64 bytes.
    Could there be a paging issue or other subtle memory allocation issue happening here?
    Thanks in advance for any help you can give.
    -John

    To save me writing some code to reproduce this odd behaviour, do you have a small test case that shows it?
    tim

  • Large DB Performance problems when updating schemas

    Hi,
    I'm facing performance problems updating the schema of large databases in SQL Server 2008 R2 and I can't find a proper solution. I would have thought that this is a fairly common problem, so here it goes.
    I have a database which is about 700Gb in size. I am going to detail 2 different issues:
    EXAMPLE 1: CHANGING A FIELD TYPE
    In that database I have a table with the schema detailed below as Table_TB. This table contains several million records. As you can see, there is a column of type TEXT (Comment_FD). What I am trying to do is change the column type to NVARCHAR(MAX) to remove the deprecated TEXT type and support Unicode characters in that field.
    The command I am running is the following one:
    ALTER TABLE DBA.Table_TB ALTER COLUMN Comment_FD NVARCHAR(MAX)
    This operation takes several hours to complete.
    I tried the following:
    -Adding the new column to the same table and copying the data over
    -Creating a new entire table with the new field type modified and copy the data over
    -Exporting to disk the contents of the table, truncate the table and reimporting the data both with Management Studio and the bcp tool. 
    -When copying to a new table I made also tried removing the PK of the table to skip the overhead of creating an index. 
    -I also tried using a simple spd to copy the data over (to the column in the same table and the other table) in batches. 
    In ALL my tests I set the SQLServer Recovery model to Simple to skip as much as I can the overhead of generating logs.
    No matter what I do, the time to complete this operation remains unusable.
    Please note that I am reducing this problem to its minimum expression. I can't say precisely how long the operation takes, but a script containing 5 field type changes identical to that one, in that table and two others, takes 7 days to complete!!! The REAL problem is that I have several other fields whose type I need to change, and they currently amount to a total running time of 14 days! And this is just changing a handful of fields in a handful of tables. At some point every string in the system will need to be migrated to get Unicode support, making this completely impracticable.
    **Based on smaller DBs in the same system, I guesstimate this table will contain about 14M records and will be about 44 GB in size.
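    For concreteness, the batched copy I mention above looked roughly like this (a sketch; the new column name and the batch size are made up):
    -- Add the replacement column, then copy the TEXT data over in small batches
    -- so that each transaction stays bounded.
    ALTER TABLE DBA.Table_TB ADD Comment_New_FD NVARCHAR(MAX) NULL;
    GO
    DECLARE @rows INT = 1;
    WHILE @rows > 0
    BEGIN
        UPDATE TOP (10000) t
        SET Comment_New_FD = CONVERT(NVARCHAR(MAX), Comment_FD)
        FROM DBA.Table_TB t
        WHERE t.Comment_New_FD IS NULL
          AND t.Comment_FD IS NOT NULL;
        SET @rows = @@ROWCOUNT;
    END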
    EXAMPLE 2: ADDING COLUMNS
    I have another table in the same DB, with a schema detailed below as Table2_TB, and again several million records in it. I am trying to add a column using the following SQL:
    ALTER TABLE DBA.Table2_TB ADD strFiDatasourceName_FD VARCHAR(64) NOT NULL DEFAULT ''
    This operation takes a bit more than 7 hours to complete.
    **Based on smaller DBs in the same system, I guesstimate this table will contain about 54M records and will be about 98 GB in size.
    QUESTIONS:
    ---->Am I doing something wrong, or is there any way to optimize either the SQL or the SQLServer configuration to speed this up?
    ---->Are these performance levels normal at all when dealing with databases of this size? 
    ---->Anyone out there with experience on DBs of this size?
    ---->Does Microsoft offer some kind of service (cloud?) to make structural changes in large Dbs?
    Thanks a lot for your help in advance!
    This is the schema for the first table:
    CREATE TABLE [DBA].[Table_TB](
    [BranchCode_FD] [char](2) NOT NULL,
    [FolderNo_FD] [int] NOT NULL,
    [DateTime_FD] [datetime] NULL,
    [Staff_FD] [char](3) NULL,
    [Action_FD] [varchar](25) NULL,
    [Comment_FD] [text] NULL,
    [Team_FD] [varchar](8) NULL,
    [Group_FD] [varchar](8) NULL,
    [PopUpDate_FD] [smalldatetime] NULL,
    [nRecordID_FD] [smallint] NOT NULL,
    [nFiFoldItemID_FD] [smallint] NOT NULL,
    PRIMARY KEY CLUSTERED
    (
    [BranchCode_FD] ASC,
    [FolderNo_FD] ASC,
    [nRecordID_FD] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100) ON [PRIMARY]
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [DBA].[Table_TB] ADD CONSTRAINT [DF__Table_TB__DateTime] DEFAULT ('1980-01-01') FOR [DateTime_FD]
    GO
    ALTER TABLE [DBA].[Table_TB] ADD CONSTRAINT [DF__Table_TB__PopUp__096A45D7] DEFAULT ('1980-01-01') FOR [PopUpDate_FD]
    GO
    ALTER TABLE [DBA].[Table_TB] ADD DEFAULT ((-1)) FOR [nFiFoldItemID_FD]
    GO
    This is the schema for the second table:
    CREATE TABLE [DBA].[Table2_TB](
    [strBBranchCode_FD] [char](2) NOT NULL,
    [lFFoldNo_FD] [int] NOT NULL,
    [nFiFoldItemID_FD] [smallint] NOT NULL,
    [strFiType_FD] [char](3) NOT NULL,
    [dtFiCreateDate_FD] [smalldatetime] NOT NULL,
    [strFiBookingRef_FD] [varchar](32) NOT NULL,
    [strFiBookingRefDayMonth_FD] [varchar](5) NOT NULL,
    [strFiBookedVia_FD] [varchar](50) NOT NULL,
    [bFiInterfaced_FD] [smallint] NOT NULL,
    [nFiSortOrder_FD] [smallint] NOT NULL,
    [strFiCreateStaffCode_FD] [char](3) NOT NULL,
    [strPcProductCode_FD] [varchar](5) NOT NULL,
    [dtFiStartDateTime_FD] [smalldatetime] NOT NULL,
    [lFiFinanVendID_FD] [int] NOT NULL,
    [lFiItinVendID_FD] [int] NOT NULL,
    [dtFiVendBalDueDate_FD] [smalldatetime] NOT NULL,
    [dtFiVendDepositDueDate_FD] [smalldatetime] NOT NULL,
    [strFiStatus_FD] [varchar](2) NOT NULL,
    [bFiTransFeeHasBeenApplied_FD] [smallint] NOT NULL,
    [bFiATOLTypeMan_FD] [smallint] NOT NULL,
    [nFiATOLType_FD] [smallint] NOT NULL,
    [strCcClassCode_FD] [varchar](10) NOT NULL,
    [strFiStartPointCode_FD] [varchar](5) NOT NULL,
    [strFiEndPointCode_FD] [varchar](5) NOT NULL,
    [strFiAirlineCode_FD] [varchar](3) NOT NULL,
    [strFiVendDocNo_FD] [varchar](16) NOT NULL,
    [dtFiIssueDate_FD] [smalldatetime] NOT NULL,
    [strFiDiscReasonCode_FD] [varchar](3) NOT NULL,
    [nFiLastFoldPricingID_FD] [smallint] NOT NULL,
    [strFiPrintingNote_FD] [text] NOT NULL,
    [strFiNonPrintingNote_FD] [text] NOT NULL,
    [dtFiStatusExpiryDate_FD] [smalldatetime] NOT NULL,
    [strFiClientFreqTravellerNo_FD] [varchar](20) NOT NULL,
    [strFiRouteNo_FD] [varchar](5) NOT NULL,
    [nFiNumBum_FD] [smallint] NOT NULL,
    [dtFiEndDateTime_FD] [smalldatetime] NOT NULL,
    [strFiFareBase_FD] [varchar](15) NOT NULL,
    [strFiInterfaceItemID_FD] [varchar](15) NOT NULL,
    [strFiEndPointLoc_FD] [varchar](255) NULL,
    [strFiStartPointLoc_FD] [varchar](255) NULL,
    [strMcMealCode_FD] [varchar](5) NOT NULL,
    [strFiSeatNote_FD] [text] NOT NULL,
    [strFiMealNote_FD] [text] NOT NULL,
    [strFiAirCraftType_FD] [varchar](5) NOT NULL,
    [strFiJourneyTime_FD] [varchar](8) NOT NULL,
    [strFiCheckInMins_FD] [varchar](10) NOT NULL,
    [lFiJourneyDist_FD] [int] NOT NULL,
    [nFiNumStop_FD] [smallint] NOT NULL,
    [strFiBaggageAllow_FD] [varchar](15) NOT NULL,
    [strFiIssueStaffCode_FD] [varchar](20) NOT NULL,
    [dtFiDispatchDate_FD] [smalldatetime] NOT NULL,
    [strFiDispatchStaffCode_FD] [varchar](3) NOT NULL,
    [strDmDispatchCode_FD] [varchar](2) NOT NULL,
    [lfFiVendDepositDueAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiRateCode_FD] [varchar](50) NOT NULL,
    [strFiRatePlan_FD] [varchar](2) NOT NULL,
    [strFiCabinNo_FD] [varchar](8) NOT NULL,
    [strFiMileage_FD] [varchar](10) NOT NULL,
    [strFiStartPointLocTelNo_FD] [varchar](40) NULL,
    [strFiBookingGuarantee_FD] [varchar](60) NOT NULL,
    [strFiSpecialRemarks_FD] [varchar](250) NULL,
    [strFiCxnCondition_FD] [varchar](100) NOT NULL,
    [bFiFlyDrive_FD] [smallint] NOT NULL,
    [strFiConfNo_FD] [varchar](32) NOT NULL,
    [lFiLinkID_FD] [int] NOT NULL,
    [strFiCategory_FD] [varchar](15) NOT NULL,
    [nFiNumRoom_FD] [smallint] NOT NULL,
    [nFiNumDay_FD] [smallint] NOT NULL,
    [nFiSaleFoldItemID_FD] [smallint] NOT NULL,
    [bFiRefundItem_FD] [smallint] NOT NULL,
    [strFiFareSavingCode_FD] [varchar](2) NOT NULL,
    [lfFiFareSavingAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiRateNote_FD] [text] NOT NULL,
    [strFiDiscCode_FD] [varchar](20) NOT NULL,
    [strFiReqDispatchMethodCode_FD] [varchar](2) NOT NULL,
    [dtFiReqDispatchDateTime_FD] [smalldatetime] NOT NULL,
    [nFiReqDispatchVoucherType_FD] [smallint] NOT NULL,
    [nFiLastFoldItemDetailID_FD] [smallint] NOT NULL,
    [nFiNumConjunction_FD] [smallint] NOT NULL,
    [lfFiBSPFrgnBaseFareAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiBSPBaseFareAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiBSPTaxDiscrepancy_FD] [decimal](17, 2) NOT NULL,
    [lfFiBSPPenaltyFeeAmt_FD] [decimal](17, 2) NOT NULL,
    [nFiRegion_FD] [smallint] NOT NULL,
    [strFiOpenTktNo_FD] [varchar](16) NOT NULL,
    [strFiTktSource_FD] [varchar](3) NOT NULL,
    [strFiJourneyType_FD] [varchar](3) NOT NULL,
    [strFiTktType_FD] [varchar](3) NOT NULL,
    [strFiInterfaceNameRemark_FD] [varchar](50) NULL,
    [nFiATOLIssuedStatus_FD] [smallint] NOT NULL,
    [strFiFareSavingFareBase_FD] [varchar](13) NOT NULL,
    [strFiPaxType_FD] [varchar](3) NOT NULL,
    [strFiActualCarrier_FD] [varchar](2) NOT NULL,
    [strFiNetRemitType_FD] [varchar](1) NOT NULL,
    [strFiFareConstruction_FD] [text] NOT NULL,
    [lfFiBSPTotVATAmt_FD] [decimal](17, 2) NOT NULL,
    [bFiNetFare_FD] [smallint] NOT NULL,
    [strFiTourCode_FD] [varchar](50) NOT NULL,
    [strFiSuppFOPInfo_FD] [varchar](255) NOT NULL,
    [lfFitBSPPublishedCommPerc_FD] [decimal](12, 6) NOT NULL,
    [strFiBSPFareCurrCode_FD] [varchar](3) NOT NULL,
    [strFiTktIssueIataNo_FD] [varchar](8) NOT NULL,
    [lfFiFareOfferedSavingAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiFareOfferedSavingCode_FD] [varchar](2) NOT NULL,
    [strFiDesc_FD] [varchar](8000) NOT NULL,
    [lfFiBSPFareBuyDiscrepancyAmt_FD] [decimal](17, 2) NOT NULL,
    [bFiNoPrintOnItin_FD] [smallint] NOT NULL,
    [dtFiStatusCodeChangeDateTime_FD] [smalldatetime] NOT NULL,
    [bFiNoPrintIfAllPricingsZeroCustAmt_FD] [smallint] NOT NULL,
    [strFiFareSavingFareBaseLow_FD] [varchar](13) NOT NULL,
    [lfFiFareSavingLowAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiBrochureCode_FD] [varchar](8) NOT NULL,
    [bFiVendPayDepositNow_FD] [smallint] NOT NULL,
    [bFiVendPayBalanceNow_FD] [smallint] NOT NULL,
    [strFiBankBranchCode_FD] [varchar](2) NOT NULL,
    [strFiOperatingAirlineCode_FD] [varchar](2) NOT NULL,
    [strFiFarePassengerTypeCode_FD] [varchar](3) NOT NULL,
    [strFiAssociatedFarePricingInfoID_FD] [varchar](15) NOT NULL,
    [bFiVerificationReq_FD] [smallint] NOT NULL,
    [dtFiLastVerifiedDateTime_FD] [datetime] NOT NULL,
    [lFiLastVerifiedLevel_FD] [int] NOT NULL,
    [lFiLastVerifiedWithCount_FD] [int] NOT NULL,
    [dtFiTktingStatusChangeDateTime_FD] [smalldatetime] NOT NULL,
    [strFiTktingInformation_FD] [text] NOT NULL,
    [strFiTktingStatus_FD] [varchar](2) NOT NULL,
    [lfFiBSPFareSellDiscrepancyAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiBSPTaxBuyDiscrepancyAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiTktingBatchID_FD] [varchar](15) NOT NULL,
    [dtFiTktingBatchDateTime_FD] [smalldatetime] NOT NULL,
    [lIplPolicyLevelID_FD] [int] NOT NULL,
    [strFiTktingDescription_FD] [varchar](1000) NOT NULL,
    [strFiTktingDataVersion_FD] [varchar](10) NOT NULL,
    [strFiSourceTktingSystem_FD] [varchar](20) NOT NULL,
    [strFiOthPtsPmtCode_FD] [varchar](3) NOT NULL,
    [bFiManualPtsEntry_FD] [smallint] NOT NULL,
    [strFiEndorsement_FD] [varchar](500) NOT NULL,
    [strFiOwnedByStaffCode_FD] [char](3) NOT NULL,
    [strFiThirdPartyTrackingID_FD] [varchar](25) NOT NULL,
    [strFiAdditionalPrintingNote_FD] [text] NULL,
    [bFiOverridePrintingNote_FD] [smallint] NOT NULL,
    [lfFiCarbonOffsetWeightAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiCancellationPolicyNote_FD] [text] NULL,
    [bFiPEProcessed_FD] [smallint] NOT NULL,
    [bFiActingAsAgentFor_FD] [smallint] NOT NULL,
    [nFiOriginalBuyingBasis_FD] [smallint] NOT NULL,
    [bFiIsOpenSegment_FD] [smallint] NOT NULL,
    [nFiCreateSource_FD] [smallint] NOT NULL,
    [strFiBookingSourceInvoiceNo_FD] [varchar](7) NOT NULL,
    [strFiGDSPaxTypeCode_FD] [varchar](8) NOT NULL,
    [strFiNetFareGDSAccountCode_FD] [varchar](8) NOT NULL,
    [strFiPOSID_FD] [varchar](50) NOT NULL,
    [bFiIsPOSEditable_FD] [smallint] NOT NULL,
    [lfFiCustExchRate_FD] [decimal](16, 8) NOT NULL,
    [lfFiCustFareSavingLowAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiCustFareSavingAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiCustFareOfferedSavingAmt_FD] [decimal](17, 2) NOT NULL,
    [bFiCCItemPayableToBranch_FD] [smallint] NOT NULL,
    [dtFiExternalAccountingDate_FD] [smalldatetime] NOT NULL,
    [strFiSourceSystemBookResponseText_FD] [text] NOT NULL,
    [bFiIsConnection_FD] [smallint] NOT NULL,
    [bFiIsPEFoldLevelItem_FD] [smallint] NOT NULL,
    [nFiReqdEndorsedConjTktType_FD] [smallint] NOT NULL,
    [bFiEndorsedConjTktDetailIsManual_FD] [smallint] NOT NULL,
    [strFiReqdEndorsedConjTktDetailText_FD] [varchar](30) NOT NULL,
    [lFiLastTktingTemplateID_FD] [int] NOT NULL,
    [dtFiBookedDate_FD] [smalldatetime] NOT NULL,
    [strFiTktingVerificationWarning_FD] [varchar](1000) NOT NULL,
    [strFiTktingVerificationError_FD] [varchar](1000) NOT NULL,
    [strFiTktingError_FD] [varchar](1000) NOT NULL,
    [strFiCustomerAccountingData00_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData01_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData02_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData03_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData04_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData05_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData06_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData07_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData08_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData09_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingDataNote_FD] [varchar](100) NOT NULL,
    [strFiCustomerAccountingData10_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData11_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData12_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData13_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData14_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData15_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData16_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData17_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData18_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData19_FD] [varchar](50) NOT NULL,
    [nFiFlightBasis_FD] [smallint] NOT NULL,
    [lFiStartPointVendID_FD] [int] NOT NULL,
    [lFiEndPointVendID_FD] [int] NOT NULL,
    [lCtID_FD] [int] NOT NULL,
    [strFiContractCode_FD] [varchar](25) NOT NULL,
    [strFiContractPeriodCode_FD] [varchar](50) NOT NULL,
    PRIMARY KEY CLUSTERED
    (
    [strBBranchCode_FD] ASC,
    [lFFoldNo_FD] ASC,
    [nFiFoldItemID_FD] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100) ON [PRIMARY]
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
    GO
    ALTER TABLE [DBA].[Table2_TB] WITH NOCHECK ADD FOREIGN KEY([nFiReqDispatchVoucherType_FD])
    REFERENCES [DBA].[VoucherTypes_TB] ([nVtCode_FD])
    GO
    ALTER TABLE [DBA].[Table2_TB] WITH NOCHECK ADD FOREIGN KEY([strPcProductCode_FD], [strFiType_FD])
    REFERENCES [DBA].[ProductCodes_TB] ([ProductCode_FD], [Type_FD])
    GO
    ALTER TABLE [DBA].[Table2_TB] WITH NOCHECK ADD FOREIGN KEY([strBBranchCode_FD], [lFFoldNo_FD])
    REFERENCES [DBA].[Folder_TB] ([BranchCode_FD], [FolderNo_FD])
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strBB__44AB0736] DEFAULT ('') FOR [strBBranchCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFFol__459F2B6F] DEFAULT (0) FOR [lFFoldNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiFo__46934FA8] DEFAULT ((-1)) FOR [nFiFoldItemID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__478773E1] DEFAULT ('') FOR [strFiType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiC__487B981A] DEFAULT ('1980-01-01') FOR [dtFiCreateDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strFiBookingRef] DEFAULT ('') FOR [strFiBookingRef_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__4A63E08C] DEFAULT ('') FOR [strFiBookingRefDayMonth_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strFiBookedVia_FD] DEFAULT ('') FOR [strFiBookedVia_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiIn__4C4C28FE] DEFAULT (0) FOR [bFiInterfaced_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiSo__4D404D37] DEFAULT ((-1)) FOR [nFiSortOrder_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__4E347170] DEFAULT ('') FOR [strFiCreateStaffCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strPcProductCode] DEFAULT ('') FOR [strPcProductCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiS__501CB9E2] DEFAULT ('1980-01-01') FOR [dtFiStartDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFiFi__5110DE1B] DEFAULT ((-1)) FOR [lFiFinanVendID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFiIt__52050254] DEFAULT ((-1)) FOR [lFiItinVendID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiV__52F9268D] DEFAULT ('1980-01-01') FOR [dtFiVendBalDueDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiV__53ED4AC6] DEFAULT ('1980-01-01') FOR [dtFiVendDepositDueDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__54E16EFF] DEFAULT ('') FOR [strFiStatus_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiTr__55D59338] DEFAULT (0) FOR [bFiTransFeeHasBeenApplied_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiAT__56C9B771] DEFAULT (0) FOR [bFiATOLTypeMan_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiAT__57BDDBAA] DEFAULT (0) FOR [nFiATOLType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strCc__58B1FFE3] DEFAULT ('') FOR [strCcClassCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strFiStartPointCode] DEFAULT ('') FOR [strFiStartPointCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strFiEndPointCode] DEFAULT ('') FOR [strFiEndPointCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__5B8E6C8E] DEFAULT ('') FOR [strFiAirlineCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__5C8290C7] DEFAULT ('') FOR [strFiVendDocNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiI__5D76B500] DEFAULT ('1980-01-01') FOR [dtFiIssueDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__5E6AD939] DEFAULT ('') FOR [strFiDiscReasonCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiLa__5F5EFD72] DEFAULT ((-1)) FOR [nFiLastFoldPricingID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__605321AB] DEFAULT ('') FOR [strFiPrintingNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__614745E4] DEFAULT ('') FOR [strFiNonPrintingNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiS__623B6A1D] DEFAULT ('1980-01-01') FOR [dtFiStatusExpiryDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__632F8E56] DEFAULT ('') FOR [strFiClientFreqTravellerNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6423B28F] DEFAULT ('') FOR [strFiRouteNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__nFiNumBum] DEFAULT ((-1)) FOR [nFiNumBum_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiE__660BFB01] DEFAULT ('1980-01-01') FOR [dtFiEndDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__67001F3A] DEFAULT ('') FOR [strFiFareBase_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__67F44373] DEFAULT ('') FOR [strFiInterfaceItemID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__68E867AC] DEFAULT ('') FOR [strFiEndPointLoc_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__69DC8BE5] DEFAULT ('') FOR [strFiStartPointLoc_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strMc__6AD0B01E] DEFAULT ('') FOR [strMcMealCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6BC4D457] DEFAULT ('') FOR [strFiSeatNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6CB8F890] DEFAULT ('') FOR [strFiMealNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6DAD1CC9] DEFAULT ('') FOR [strFiAirCraftType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6EA14102] DEFAULT ('') FOR [strFiJourneyTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6F95653B] DEFAULT ('') FOR [strFiCheckInMins_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFiJo__70898974] DEFAULT (0) FOR [lFiJourneyDist_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiNu__717DADAD] DEFAULT (0) FOR [nFiNumStop_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7271D1E6] DEFAULT ('') FOR [strFiBaggageAllow_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7365F61F] DEFAULT ('') FOR [strFiIssueStaffCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiD__745A1A58] DEFAULT ('1980-01-01') FOR [dtFiDispatchDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__754E3E91] DEFAULT ('') FOR [strFiDispatchStaffCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strDm__764262CA] DEFAULT ('') FOR [strDmDispatchCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiV__77368703] DEFAULT (0) FOR [lfFiVendDepositDueAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__782AAB3C] DEFAULT ('') FOR [strFiRateCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__791ECF75] DEFAULT ('') FOR [strFiRatePlan_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7A12F3AE] DEFAULT ('') FOR [strFiCabinNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7B0717E7] DEFAULT ('') FOR [strFiMileage_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7BFB3C20] DEFAULT ('') FOR [strFiStartPointLocTelNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7CEF6059] DEFAULT ('') FOR [strFiBookingGuarantee_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7DE38492] DEFAULT ('') FOR [strFiSpecialRemarks_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7ED7A8CB] DEFAULT ('') FOR [strFiCxnCondition_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiFl__7FCBCD04] DEFAULT (0) FOR [bFiFlyDrive_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__00BFF13D] DEFAULT ('') FOR [strFiConfNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFiLi__01B41576] DEFAULT ((-1)) FOR [lFiLinkID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__02A839AF] DEFAULT ('') FOR [strFiCategory_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiNu__039C5DE8] DEFAULT (0) FOR [nFiNumRoom_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiNu__04908221] DEFAULT (0) FOR [nFiNumDay_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiSa__0584A65A] DEFAULT ((-1)) FOR [nFiSaleFoldItemID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiRe__0678CA93] DEFAULT (0) FOR [bFiRefundItem_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__076CEECC] DEFAULT ('') FOR [strFiFareSavingCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiF__08611305] DEFAULT (0) FOR [lfFiFareSavingAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__0955373E] DEFAULT ('') FOR [strFiRateNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__0A495B77] DEFAULT ('') FOR [strFiDiscCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__0B3D7FB0] DEFAULT ('') FOR [strFiReqDispatchMethodCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiR__0C31A3E9] DEFAULT ('1980-01-01') FOR [dtFiReqDispatchDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiRe__0D25C822] DEFAULT (0) FOR [nFiReqDispatchVoucherType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiLa__0F0E1094] DEFAULT ((-1)) FOR [nFiLastFoldItemDetailID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiNu__100234CD] DEFAULT (0) FOR [nFiNumConjunction_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__10F65906] DEFAULT (0) FOR [lfFiBSPFrgnBaseFareAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__11EA7D3F] DEFAULT (0) FOR [lfFiBSPBaseFareAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__12DEA178] DEFAULT (0) FOR [lfFiBSPTaxDiscrepancy_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__13D2C5B1] DEFAULT (0) FOR [lfFiBSPPenaltyFeeAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiDo__14C6E9EA] DEFAULT (0) FOR [nFiRegion_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__15BB0E23] DEFAULT ('') FOR [strFiOpenTktNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__16AF325C] DEFAULT ('') FOR [strFiTktSource_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__17A35695] DEFAULT ('') FOR [strFiJourneyType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__18977ACE] DEFAULT ('') FOR [strFiTktType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__198B9F07] DEFAULT ('') FOR [strFiInterfaceNameRemark_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiAT__1A7FC340] DEFAULT ((-1)) FOR [nFiATOLIssuedStatus_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1B73E779] DEFAULT ('') FOR [strFiFareSavingFareBase_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1C680BB2] DEFAULT ('') FOR [strFiPaxType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1D5C2FEB] DEFAULT ('') FOR [strFiActualCarrier_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1E505424] DEFAULT ('') FOR [strFiNetRemitType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1F44785D] DEFAULT ('') FOR [strFiFareConstruction_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__20389C96] DEFAULT (0) FOR [lfFiBSPTotVATAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiNe__212CC0CF] DEFAULT (0) FOR [bFiNetFare_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__2220E508] DEFAULT ('') FOR [strFiTourCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__23150941] DEFAULT ('') FOR [strFiSuppFOPInfo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFit__24092D7A] DEFAULT (0) FOR [lfFitBSPPublishedCommPerc_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__24FD51B3] DEFAULT ('') FOR [strFiBSPFareCurrCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__25F175EC] DEFAULT ('') FOR [strFiTktIssueIataNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiF__27D9BE5E] DEFAULT (0) FOR [lfFiFareOfferedSavingAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__251D4D44] DEFAULT ('') FOR [strFiFareOfferedSavingCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiDesc_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiBSPFareBuyDiscrepancyAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiNoPrintOnItin_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiStatusCodeChangeDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiNoPrintIfAllPricingsZeroCustAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiFareSavingFareBaseLow_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiFareSavingLowAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiBrochureCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiVendPayDepositNow_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiVendPayBalanceNow_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiBankBranchCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiOperatingAirlineCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiFarePassengerTypeCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiAssociatedFarePricingInfoID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiVerificationReq_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiLastVerifiedDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lFiLastVerifiedLevel_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lFiLastVerifiedWithCount_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiTktingStatusChangeDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingInformation_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingStatus_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiBSPFareSellDiscrepancyAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiBSPTaxBuyDiscrepancyAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingBatchID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiTktingBatchDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lIplPolicyLevelID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingDescription_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingDataVersion_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiSourceTktingSystem_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiOthPtsPmtCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiManualPtsEntry_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiEndorsement_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiOwnedByStaffCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiThirdPartyTrackingID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiAdditionalPrintingNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiOverridePrintingNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiCarbonOffsetWeightAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCancellationPolicyNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiPEProcessed_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiActingAsAgentFor_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [nFiOriginalBuyingBasis_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiIsOpenSegment_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [nFiCreateSource_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (' ') FOR [strFiBookingSourceInvoiceNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiGDSPaxTypeCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiNetFareGDSAccountCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiPOSID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiIsPOSEditable_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [lfFiCustExchRate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [lfFiCustFareSavingLowAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [lfFiCustFareSavingAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [lfFiCustFareOfferedSavingAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [bFiCCItemPayableToBranch_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiExternalAccountingDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiSourceSystemBookResponseText_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [bFiIsConnection_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [bFiIsPEFoldLevelItem_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [nFiReqdEndorsedConjTktType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [bFiEndorsedConjTktDetailIsManual_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiReqdEndorsedConjTktDetailText_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lFiLastTktingTemplateID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiBookedDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingVerificationWarning_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingVerificationError_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingError_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData00_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData01_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData02_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData03_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData04_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData05_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData06_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData07_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData08_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData09_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingDataNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData10_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData11_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData12_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData13_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData14_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData15_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData16_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData17_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData18_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData19_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [nFiFlightBasis_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lFiStartPointVendID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lFiEndPointVendID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lCtID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiContractCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiContractPeriodCode_FD]
    GO

    Hi,
    I just wanted to summarize the conclusions we reached here, in case it helps someone else.
    For the case of the ALTER statement:
    First we analyzed the query performance and found that the query was I/O bound. A handful of useful scripts can be found at the links below. High PAGEIOLATCH wait times suggested memory pressure, so we increased the memory dedicated to the server to 32 GB. This single change was one of the most effective and cut the query execution time by roughly 40-50%; I guess SQL Server needs less paging to perform the operation when it can hold more pages in memory at a time.
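    For anyone repeating the analysis: the first step was simply ranking the server's top waits. A minimal sketch of that check (the sqlskills link below has a far more complete script, including a proper list of benign waits to exclude):
    -- Top waits since the last restart (or since the stats were cleared).
    SELECT TOP (10)
           wait_type,
           wait_time_ms,
           waiting_tasks_count
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT LIKE 'SLEEP%'   -- crude filter; real scripts exclude many more benign waits
    ORDER BY wait_time_ms DESC;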
    We ran more tests tweaking some of the other variables mentioned, such as the server MAXDOP, which in fact made the query slower the higher we set it. The initial server config used automatic CPU affinity but had the I/O affinity mask bound to the first 4 CPUs, and we found that setting both to AUTO performed faster for some reason.
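    For reference, these are the knobs we were turning while testing; a sketch with example values, not recommendations:
    -- 'max degree of parallelism' is an advanced option, so expose those first.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 0;  -- 0 = let SQL Server decide
    RECONFIGURE;
    -- SQL Server 2008 R2 and later: put CPU affinity back under automatic control.
    ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = AUTO;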
    After Microsoft analyzed the diagnostics/metrics data, the only interesting finding was that a couple of the storage volumes were performing slightly slower than the rest, causing a bit of a bottleneck. We recommended that the customer look into it with their storage team, but even with those fixed we wouldn't expect the query to run much faster.
    No other tweaks have been found that speed up the ALTER statement. Basically it comes down to how fast your I/O subsystem is and how much SQL Server can cache in memory. Microsoft made no further suggestions, and advised that any rewritten variant (splitting the column addition and the constraints, for example) will perform worse than the plain and simple ALTER statement.
    On the NVARCHAR type problem:
    As Erland Sommarskog suggested, SQL Server 2014 Enterprise Edition performed this operation about 40% faster than SQL Server 2008 R2 on the same hardware.
    Upgrading the customer infrastructure is not an option for us at the moment, so we have no way to accomplish this in a workable time frame on SQL Server 2008.
    The strategy that looked like the best option was also suggested by E. Sommarskog: bind the application's reads to a computed column that COALESCEs the new and old columns, write to the new converted NVARCHAR column, schedule a background batch job to migrate the existing data to the new column over time, and finally drop the old column.
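    A rough sketch of that strategy, with made-up table and column names for illustration (ours differ):
    -- 1) Add the new NVARCHAR column next to the old VARCHAR one.
    ALTER TABLE DBA.SomeTable_TB ADD strDescN_FD nvarchar(4000) NULL;
    -- 2) Computed column for readers: falls back to the old column until the row is migrated.
    ALTER TABLE DBA.SomeTable_TB ADD strDescRead_FD
        AS COALESCE(strDescN_FD, CONVERT(nvarchar(4000), strDesc_FD));
    -- 3) Background batch job: migrate in small chunks to keep transaction log growth in check.
    WHILE 1 = 1
    BEGIN
        UPDATE TOP (5000) DBA.SomeTable_TB
        SET strDescN_FD = CONVERT(nvarchar(4000), strDesc_FD)
        WHERE strDescN_FD IS NULL;
        IF @@ROWCOUNT = 0 BREAK;
    END;
    -- 4) Once no rows remain unmigrated, drop the old column and rename the new one.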
    Thanks a lot everyone for your help.
    David.
    Useful links:
    http://www.sqlskills.com/blogs/paul/wait-statistics-or-please-tell-me-where-it-hurts/
    http://msdn.microsoft.com/en-us/library/ms189768.aspx
    http://rusanu.com/2014/02/24/how-to-analyse-sql-server-performance/

  • Performance problems with combobox and datasource

    We have a performance problem when binding a DataTable (or something similar) to the DataSource property of a ComboBox. You'll find the source code below. The SQL statement reads about 40000 rows, and the result (all 40000 rows) should be listed in the combobox. The whole process takes about 30 seconds to finish. Any suggestions?
    Dim ds As New DataSet
    strSQL = "Select * from am.city"
    conn = New Oracle.DataAccess.Client.OracleConnection(Configuration.ConfigurationSettings.AppSettings("conORA"))
    da = New Oracle.DataAccess.Client.OracleDataAdapter(strSQL, conn)
    conn.Open()
    da.Fill(ds)
    conn.Close()
    ' Bind the first result table; ValueMember/DisplayMember must reference its columns.
    Dim dt As DataTable = ds.Tables(0)
    ComboBox1.DataSource = dt
    ComboBox1.ValueMember = dt.Columns("id").ColumnName
    ComboBox1.DisplayMember = dt.Columns("city").ColumnName

    But how long does it take to fill the DataTable?
    I can fill a 40000-row DataTable in under 4 seconds.
    Data binding a combo box to that many rows is pretty expensive, and not normally recommended.
    David
    Dim strConnection As String = "Data Source=oracle;User ID=scott;Password=tiger;"
    Dim conn As OracleConnection = New OracleConnection(strConnection)
    conn.Open()
    Dim cmd As New OracleCommand("select * from (select * from all_objects union all select * from all_objects) where rownum <= 40000", conn)
    Dim ds As New DataSet()
    Dim da As New OracleDataAdapter(cmd)
    Dim begin As Date = Now
    da.Fill(ds)
    Console.WriteLine(ds.Tables(0).Rows.Count & " rows loaded in " & Now.Subtract(begin).TotalSeconds & " seconds")
    outputs
    40000 rows loaded in 3.734375 seconds
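    If the list doesn't genuinely need all 40000 entries, another option is to let Oracle narrow it before binding. A sketch, assuming the id and city columns used in the binding above and a hypothetical :prefix bind variable:
    -- Classic Oracle top-N: order inside the inline view, then cap with ROWNUM.
    SELECT id, city FROM (
        SELECT id, city
        FROM am.city
        WHERE UPPER(city) LIKE UPPER(:prefix) || '%'
        ORDER BY city
    ) WHERE ROWNUM <= 500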

  • [URGENT] Performance problem with BC4J and partioned data

    Hi all,
    I have a big performance problem with BC4J and partitioned data. Since a partitioned table shouldn't have a primary key such as a sequence (or anything else), my partitioned table doesn't have any primary key.
    When I debug my BC4J application I can see the message "ignoring row with no primary key" from EntityCache. It takes a long time to retrieve my data even when I use the partition keys. A quick & dirty Forms application was several times faster!
    Is this a bug in BC4J, or is BC4J not suitable for partitioned data? Can anyone give me a hint how to make the BC4J application fast even with partitioned data? In a non-partitioned environment the application works quite well, so the problem must lie somewhere in this area.
    Thanks,
    Axel

    Here's a SQL statement that creates the table.
    CREATE TABLE SEARCH
    (SEAR_PARTKEY_DAY              NUMBER(4)        NOT NULL
    ,SEAR_PARTKEY_EMP            VARCHAR2(2)      NOT NULL
    ,SEAR_ID                     NUMBER(20)       NOT NULL
    ,SEAR_ENTRY_DATE             TIMESTAMP        NOT NULL
    ,SEAR_LAST_MODIFIED            TIMESTAMP             NOT NULL
    ,SEAR_STATUS                 VARCHAR2(100)    DEFAULT '0'
    ,SEAR_ITC_DATE               TIMESTAMP        NOT NULL
    ,SEAR_MESSAGE_CLASS          VARCHAR2(15)     NOT NULL
    ,SEAR_CHIPHERING_TYPE        VARCHAR2(256)   
    ,SEAR_GMAT                   VARCHAR2(1)      DEFAULT 'U'
    ,SEAR_NATIONALITY            VARCHAR2(3)      DEFAULT 'XXX'
    ,SEAR_MESSAGE_ID             VARCHAR2(32)     NOT NULL
    ,SEAR_COMMENT                VARCHAR2(256)    NOT NULL
    ,SEAR_NUMBER_OF              NUMBER(3)        NOT NULL
    ,SEAR_INTERCEPTION_SYSTEM    VARCHAR2(40)    
    ,SEAR_COMM_PRIOD_H           NUMBER(5)        DEFAULT -1
    ,SEAR_PRIOD_R                  NUMBER(5)        DEFAULT -1
    ,SEAR_INMARSAT_CES           VARCHAR2(40)    
    ,SEAR_BEAM                   VARCHAR2(10)    
    ,SEAR_DIALED_NUMBER          VARCHAR2(70)    
    ,SEAR_TRANSMIT_NUMBER        VARCHAR2(70)    
    ,SEAR_CALLED_NUMBER          VARCHAR2(40)    
    ,SEAR_CALLER_NUMBER          VARCHAR2(40)    
    ,SEAR_MATERIAL_TYPE          VARCHAR2(3)      NOT NULL
    ,SEAR_SOURCE                 VARCHAR2(10)    
    ,SEAR_MAPPING                VARCHAR2(100)    DEFAULT '__REST'
    ,SEAR_DETAIL_MAPPING         VARCHAR2(100)
    ,SEAR_PRIORITY               NUMBER(3)        DEFAULT 255
    ,SEAR_LANGUAGE               VARCHAR2(5)      DEFAULT 'XXX'
    ,SEAR_TRANSMISSION_TYPE      VARCHAR2(40)    
    ,SEAR_INMARSAT_STD           VARCHAR2(1)     
    ,SEAR_FILE_NAME              VARCHAR2(100)    NOT NULL
    )
    PARTITION BY RANGE (SEAR_PARTKEY_DAY, SEAR_PARTKEY_EMP)
    (
      PARTITION SEARCH_MAX VALUES LESS THAN (MAXVALUE, MAXVALUE)
        TABLESPACE MIRA4_SEARCH_EVEN  -- TABLESPACE keyword assumed; the original post omitted it
    );
    Of course SEAR_ID is filled by a sequence, but the field is not the primary key, as that would decrease the performance of partitioned data.
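    For what it's worth, BC4J's entity cache drops rows it cannot key (hence the "ignoring row with no primary key" message). One possible compromise, sketched under the assumption that (SEAR_PARTKEY_DAY, SEAR_PARTKEY_EMP, SEAR_ID) is unique, is a primary key that leads with the partition columns, so Oracle can enforce it with a local index instead of a global one:
    -- Sketch only: gives BC4J a key while keeping the index equipartitioned with the table.
    ALTER TABLE SEARCH ADD CONSTRAINT SEARCH_PK
      PRIMARY KEY (SEAR_PARTKEY_DAY, SEAR_PARTKEY_EMP, SEAR_ID)
      USING INDEX LOCAL;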
    We moved our application to native JDBC, and the performance is better than we ever expected!
