Storage Performance problem

Hello,
I'd like to ask for some insight troubleshooting this problem ...
Here we have a database server running on an M5000 (Solaris 10),
with Brocade HBAs to the SAN, using 2 IBM DS4700 storage arrays
(called STOR01 and STOR02).
Problem: commit delays, the DB is slow creating tables, ...
DB DBP @ SUNGZ Global zone.
Storage info:
DBP on SUNGZ
LUN: 01_B_03 (Tower 01, Ctrl B, Array 03), 275 GB.
Copies are tested with a test file of 4859109376 bytes (about 4.8 GB).
My analysis
1)     Poor performance. Suspecting STOR01 to be the culprit.
[root@SUNGZ /DB/DBP/tmp]#time cp temp011.dbf tfd
real 24m30.857s
Moving the data away to STOR02 ...
[root@SUNGZ #]zpool replace ORA-DBP-pool3 OLD NEW
Info on the new LUN: 02_A_06 (Tower 02, Ctrl A, Array 06)
[root@SUNGZ /DB/DBP/tmp]#time cp temp011.dbf tfd
real 16m30.857s
Difference: 8 MINUTES. Better, but still not good.
2)     The freed LUN 01_B_03 is now empty. Testing 01_B_03 again from SUNGZ.
a.     creating the pool: zpool create -m none TFDPOOL LUN
b.     creating the filesystem: zfs create -o mountpoint=/TFD TFDPOOL/TFD
c.     copy STOR02 -> STOR01
[root@SUNGZ /DB/DBP/tmp]#time cp temp011.dbf /TFD/tfd
real 4m47.3
GREAT! ... But it took 24 minutes before the zpool replace on this LUN ...
d.     copy STOR01 -> STOR01
# ls -l /TFD
total 18995430
-rw-r----- 1 root root 4859109376 Aug 13 12:08 tfd2
# time cp temp011.dbf /TFD/tfd
real 4m45.2
GREAT !
e.     copy STOR02 -> STOR02
[root@SUNGZ /DB/DBP/tmp]#time cp temp011.dbf tfd
real 15m59.857s
Still terribly slow!
3)     Other copy tests :
Copy 01_B_05 (DBD): 4.8 GB in 4m22s.
Copy 02_A_04 (DBA): 1.2 GB in 1m04s (4.8 GB in 4m16s).
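A synthetic write test would separate the read side from the write side; a sketch I have not run yet (sizes and mountpoints taken from the tests above, ~4.8 GB of zeros):
# Write straight to each pool; no read path involved.
time dd if=/dev/zero of=/DB/DBP/tmp/ddtest bs=1024k count=4634
time dd if=/dev/zero of=/TFD/ddtest bs=1024k count=4634
# If the first dd is slow and the second is fast, the write path of the
# ORA-DBP-pool3 pool (not the LUN) is the suspect.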
Conclusions:
1)     Moving STOR01 -> STOR02: better, but still poor performance.
Once moved to STOR02, the same poor performance when writing locally.
Other DBs on STOR02 are fine.
The same poor performance behavior on 2 different storage arrays.
2)     After the tower swap, from the same host, the original "slow" LUN 01_B_03 is not the problem:
Copy from STOR02 to STOR02 -> 16m30s and 15m59s (tests 1b & 2e)
Copy from STOR01 to STOR01 -> 4m45s
Copy from STOR02 to STOR01 -> 4m47s
Performance can vary with the storage array workload (differences of 10, 15 or 30 seconds).
The slow-LUN problem moved to STOR02 (which used to be faster than STOR01).
Since writing to STOR01 is fast no matter where the read comes from, the problem is consequently at the WRITE level.
... WRITE level, yes, but now on STOR02 ... and we had the same behavior on STOR01.
Again, the same poor performance behavior on 2 different storage arrays.
Fragmentation? No: copying the same data to STOR01 runs at full speed.
The problem is STORAGE ARRAY INDEPENDENT.
Tested the ZFS NO_CACHEFLUSH parameter on Dev and Acc.
No spectacular wins.
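(For reference, the usual Solaris 10 mechanism for this parameter; a sketch: the /etc/system entry needs a reboot, the mdb write applies live.)
# /etc/system -- stop ZFS from sending cache-flush requests to the array:
set zfs:zfs_nocacheflush = 1
# Or live, without a reboot:
echo zfs_nocacheflush/W0t1 | mdb -kw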
Tests on 2 other Sol10 global zones (Dev on 01_B_05, Acc on 02_A_04) are both OK.
SO, it is hardware-platform independent (V490, M4000, M5000).
If it is storage-array independent, then the problem is HOST SIDE.
(In this case, SUNGZ.)
A ZFS problem?
zpool upgrade listed the pools at version 15, so we ran zpool upgrade -a.
The pools are now at the latest version; no spectacular win.
One possibility: TFDPOOL is a newly created pool, while
ORA-DBP-pool3 was created a while ago ... (see the property comparison below)
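Comparing the two pools' properties might show what else differs; a sketch (property list abbreviated):
# Compare the datasets backing the slow and the fast pool:
zfs get recordsize,compression,atime ORA-DBP-pool3
zfs get recordsize,compression,atime TFDPOOL
zpool get version ORA-DBP-pool3 TFDPOOL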
Any idea where that performance difference may come from ?
Thanks
Thierry
Belgium

Sriram,
Practically speaking, you cannot have server-internal storage that is comparable with SAN or NAS.
In an enterprise, storage ought to be on SAN or NAS. Performance issues may still occur, but you can improve them by taking care of proper HBA and switch configuration, and also the RAID configuration at the storage-array level.
You cannot get a server that is manufactured with many TBs of internal storage; that is impractical.
Performance-wise, take care of the points mentioned above.
Sandeep Reddy Enti
HCC
http://analytiks.blogspot.com

Similar Messages

  • Performance problems when running PostgreSQL on ZFS and tomcat

    Hi all,
    I need help with some analysis and problem solution related to the below case.
    The long story:
    I'm running into some massive performance problems on two 8-way HP ProLiant DL385 G5 servers with 14 GB RAM and a ZFS storage pool in raidz configuration. The servers are running Solaris 10 x86 10/09.
    The configuration between the two is pretty much the same and the problem therefore seems generic for the setup.
    Within a non-global zone I’m running a tomcat application (an institutional repository) connecting via localhost to a Postgresql database (the OS provided version). The processor load is typically not very high as seen below:
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        49 postgres  749M  669M   4,7%   7:14:38  13%
         1 jboss    2519M 2536M    18%  50:36:40 5,9%
    We are not 100% sure why we run into performance problems, but when it happens the application slows down and the system swaps out (see below). When it settles, everything seems to return to normal. When the problem is acute, the application is totally unresponsive.
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        1 jboss    3104M  913M   6,4%   0:22:48 0,1%
    #sar -g 5 5
    SunOS vbn-back 5.10 Generic_142901-03 i86pc    05/28/2010
    07:49:08  pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
    07:49:13    27.67   316.01   318.58 14854.15     0.00
    07:49:18    61.58   664.75   668.51 43377.43     0.00
    07:49:23   122.02  1214.09  1222.22 32618.65     0.00
    07:49:28   121.19  1052.28  1065.94  5000.59     0.00
    07:49:33    54.37   572.82   583.33  2553.77     0.00
    Average     77.34   763.71   771.43 19680.67     0.00
    Making more memory available to tomcat seemed to worsen the problem, or at least didn't prove to have any positive effect.
    My suspicion currently focuses on PostgreSQL. Turning off fsync boosted performance and made the problem appear less often.
    An unofficial performance test on the database with "vacuum analyze" took 19 minutes on the server and only 1 minute on a desktop PC. This is horrific considering the hardware.
    The short story:
    I'm trying different steps but running out of ideas. We've read that the database block size and the file-system block size should match: PostgreSQL uses 8 KB blocks while ZFS defaults to 128 KB records. I didn't find much information on the matter, so if anyone can help, please recommend how to make this change ...
    Any other recommendations and ideas we could follow? We know from other installations that the above setup runs without a single problem on Linux on much smaller hardware without specific tuning. What makes Solaris in this configuration so darn slow?
    Any help appreciated and I will try to provide additional information on request if needed…
    Thanks in advance,
    Kasper

    raidz isn't a good match for databases. Databases tend to require good write performance, for which mirroring works better.
    Adding a pair of SSDs as a ZIL would probably also help, but chances are it's not an option for you.
    You can change the record size with "zfs set recordsize=8k <dataset>".
    It will only take effect for newly written data, not existing data.
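    A minimal sketch of how that could look (dataset name and paths assumed, not from this thread; since recordsize only affects newly written files, the cluster has to be copied):
    # Create an 8 KB-recordsize dataset for the PostgreSQL cluster:
    zfs create -o recordsize=8k -o mountpoint=/pgdata8k tank/pgdata8k
    # With PostgreSQL stopped, copy the cluster so the files are rewritten
    # with the new record size:
    cp -rp /var/postgres/data/* /pgdata8k/
    # Then point PGDATA (or the SMF service) at /pgdata8k and restart.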

  • Performance Problems Bex 7.0 and Office 2007 Workbooks

    Hi
    we have a performance problem with BEx 7.0 and workbooks in Office 2007.
    The workbooks were created with Office 2003 and run with good performance, but in Office 2007 the performance is unacceptable.
    E.g. opening a workbook with Office 2003   --    30 seconds
           opening the same workbook with Office 2007   --    15 minutes
    We have done everything we could find in SAP Notes, whitepapers and SDN messages.
    For example:
    - We installed all Excel patches described in: Microsoft Excel 2007 &
    SAP Business Explorer Compatibility
    - We set the optimization flag: RS_FRONTEND_INIT setting 'ANA_USE_OPTIMIZE_STG = X'
    - We opened the workbooks in Office 2007 with the repair flag
    - We used the flag to open in XLS format
    But the same workbooks are still extremely slow.
    We tried creating a new workbook with Office 2007 and it runs with good performance.
    But there are 500 workbooks; we don't want to recreate them all.
    System Information:
    BW: 7.0 Netweaver 7.01 BI_CONT 7.05
    Client: SAP GUI 7.10, BI Explorer: 902
    Thank you for your help.

    Hello Carsten,
    Try using workbook compression:
      -  Open the specific workbook in BEx Analyzer
      -  Open the Workbook Settings dialog
      -  Check "Use Optimized Storage"
      -  Click the OK button
      -  Save the workbook
    Also, your front-end tools are on a very old version.
    I would recommend installing the latest patch of SAP GUI 7.20 and Business Explorer 7.20.
    Front-end version 7.10 will only be supported until April 2011.
    But if you want to continue using 7.10, update to the latest patch:
    http://service.sap.com/swdc
    > Support Packages and Patches
    > Browse our Download Catalog
      > SAP Frontend Components
    > SAP GUI FOR WINDOWS
    > SAP GUI FOR WINDOWS 7.10 CORE
    > Win32
    _ > gui710_20-10002995.exe
       |  > BI ADDON FOR SAP GUI
       |  > BI 7.0 ADDON FOR SAP GUI 7.10
       |_ > bi710sp14_1400-10004472.exe
    Cheers,
    Edward John

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record, using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not used XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
      SSN varchar2(20),
      XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
      CODE varchar2(4),
      DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
    <Id>123456789</Id>
    {for $e in $r/Element
        return
        <Element>
          <Subelement1>
            {$e/Subelement1/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement1>
    <Subelement2>
    {$e/Subelement2/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
    </Description>
    </Subelement2>
    <Subelement3>
    {$e/Subelement3/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement3>
    </Element>
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
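    For what it's worth, a hedged sketch of that direction, using the example tables above (untested against the real data, and shown for Subelement1 only; the other subelements would be additional XMLTABLE columns or calls): shred the codes with XMLTABLE in the FROM clause so the optimizer can join CODES relationally, ideally with a hash join, instead of probing it per code inside the XQuery:
    select x.code, c.description
    from records r,
         xmltable('/Root/Element/Subelement1/Code'
                  passing r.xmlrec
                  columns code varchar2(4) path '.') x,
         codes c
    where r.ssn = '10000'
      and c.code = x.code;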

  • Performance Problems - Index and Statistics

    Dear Gurus,
    I am having problems losing indexes and statistics on cubes. It seems my indexes are too old, which in fact they are not: they were created just a month back, we check the indexes daily, and the Manage tab returns RED.
    please help

    Dear Mr Syed,
    The solution steps I mentioned in my previous reply already explain the so-called RE-ORG of tables; however, to clarify more on that issue:
    Occasionally, the Oracle Cost-Based Optimizer may calculate the estimated costs for a Full Table Scan lower than those for an Index Scan, although the actual runtime of access via the index would be considerably lower than the runtime of the Full Table Scan. Some important points should be considered in order to improve performance in problem areas such as long running times for change runs and aggregate activation and fill-ups.
    Performance problems based on a wrong optimizer decision indicate that something serious is missing at the database level, and we need to RE-ORG the degenerated indexes in order to improve overall performance and avoid daily manual (RSRV + RSNAORA) activities on almost identical indexes.
    For re-organizing degenerated indexes, 3 options are available (option 2 is sketched below):
    1) DROP INDEX ..., and CREATE INDEX ...
    2) ALTER INDEX <index name> REBUILD (ONLINE PARALLEL x NOLOGGING)
    3) ALTER INDEX <index name> COALESCE [as of Oracle 8i (8.1) only]
    Each option has its pros and cons; option 2 seems to have the most advantages.
    Advantages of option 2:
    1) Fast storage in a different tablespace possible
    2) Creates a new index tree
    3) Gives the option to change storage parameters without deleting the index
    4) As of Oracle 8i (8.1), you can avoid a lock on the table by specifying the ONLINE option. In this case, Oracle waits until the resource has been released, and then starts the rebuild. The "resource busy" error no longer occurs.
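    A minimal example of option 2 (the index name is assumed, for illustration only):
    -- Rebuild a degenerated cube index online, in parallel, without logging:
    ALTER INDEX "/BIC/E100069~P" REBUILD ONLINE PARALLEL 4 NOLOGGING;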
    I would still let the database tech team be the judge and take the call on these.
    This modus operandi could be institutionalized for all affected cubes and their indexes as well.
    However, I leave the thoughts with you.
    Hope it Helps
    Chetan
    @CP..

  • Performance Problem in parsing large XML file (15MB)

    Hi,
    I'm trying to parse a large XML file (15 MB) and am facing a clear performance problem. A simple XML validation using the following code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobfromFile(
        tempCLOB,
        targetFile,
        DBMS_LOB.getLength(targetFile),
        dest_offset,
        src_offset,
        nls_charset_id(CONSTANT_CHARSET),
        lang_context,
        conv_warning);
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    p_xml_document.schemaValidate();
    is taking 30 minutes on an HP-UX machine (4 GB RAM, 2 CPUs; Oracle version 9.2.0.4).
    Please explain what could be going wrong.
    Thanks In Advance,
    Vineet

    Thanks Mark,
    I'll open a TAR and also upload the schema and instance XML.
    If I'm not changing the subject too much :-), one more question in continuation:
    If I skip the schema validation step and directly insert the instance document into a schema-linked XMLType table, what does Oracle XDB do in such a case?
    I'm getting a severe performance hit here too ... the same file as above takes almost 40 minutes to insert.
    code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobfromFile(
        tempCLOB,
        targetFile,
        DBMS_LOB.getLength(targetFile),
        dest_offset,
        src_offset,
        nls_charset_id(CONSTANT_CHARSET),
        lang_context,
        conv_warning);
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    -- p_xml_document.schemaValidate();
    insert into INCOMING_XML values(p_xml_document);
    Here table INCOMING_XML is:
    TABLE of SYS.XMLTYPE(XMLSchema "http://INCOMING_XML.xsd" Element "MatchingResponse") STORAGE Object-relational TYPE "XDBTYPE_MATCHING_RESPONSE"
    This table and type XDBTYPE_MATCHING_RESPONSE were created using the mapping provided in the registered XML Schema.
    Thanks,
    Vineet

  • Performance problem querying multiple CLOBS

    We are running Oracle 8.1.6 Standard Edition on a Sun E420r, 2 x 450 MHz processors, 2 GB memory,
    Solaris 7. I have created Oracle Text indexes on several columns in a large table, including VARCHAR2 and CLOB. I am simulating search-engine queries where the user chooses to find matches on the exact phrase, all of the words (AND), or any of the words (OR). I am hitting performance problems when querying on multiple CLOBs using OR, e.g.
    select count(*) from articles
    where contains (abstract , 'matter OR dark OR detection') > 0
    or contains (subject , 'matter OR dark OR detection') > 0
    Columns abstract and subject are CLOBs. However, this query works fine with AND:
    select count(*) from articles
    where contains (abstract , 'matter AND dark AND detection') > 0
    or contains (subject , 'matter AND dark AND detection') > 0
    The explain plan gives a cost of 2157 for OR and 14.3 for AND.
    I realise that multiple CONTAINS clauses are not a good thing, but the AND query returns in under a second while the OR takes minutes! The indexes are created thus:
    create index article_abstract_search on article(abstract)
    INDEXTYPE IS ctxsys.context parameters ('STORAGE mystore memory 52428800');
    The data and index tables are on separate tablespaces.
    Can anyone suggest what is going on here, and any alternatives?
    Many thanks,
    Geoff Robinson

    Thanks for your reply, Omar.
    I have read the performance FAQ already, and it points out that single CONTAINS clauses are preferred, but I need to check 2 columns. Also, I don't just want a count(*); I will need to select field values. As you can see from my 2 queries, the first uses OR across multiple CLOB columns and the second uses AND, with the OR query taking that much longer. Even with only a single CONTAINS, the cost estimate is 5 times higher for OR than for AND.
    Add an extra CONTAINS and it becomes 300 times more costly!
    The root table is 3 million rows, the 2 token tables have 6.5 and 3 million rows respectively. All tables have been fully analyzed.
    Regards
    Geoff
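    One further option in later Oracle Text releases (9i onwards, so it would need an upgrade from 8.1.6) is a MULTI_COLUMN_DATASTORE, which indexes both columns together so a single CONTAINS replaces the OR of two. A sketch, with the preference name assumed:
    begin
      ctx_ddl.create_preference('abstract_subject_ds', 'MULTI_COLUMN_DATASTORE');
      ctx_ddl.set_attribute('abstract_subject_ds', 'COLUMNS', 'abstract, subject');
    end;
    /
    -- The index is created on one column but indexes the concatenation:
    create index article_combined_search on articles(abstract)
      indextype is ctxsys.context parameters ('DATASTORE abstract_subject_ds');
    -- One CONTAINS now searches both columns:
    select count(*) from articles
    where contains(abstract, 'matter OR dark OR detection') > 0;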

  • Performance problem with Oracle

    We currently have a system being developed in a Unix/WebLogic/Tomcat/Oracle environment. We have developed a screen that contains 5 or 6 different parameters to select from, and multiple values can be selected for each parameter. The idea behind the subsequent screens is to attach information to already existing data, or possible future data, that matches the selection criteria.
    Based on these selections, existing data located within the system in a table is searched and those that match are selected. Also new rows are created in the table against combinations that do not currently have a match. Frequently multiple parameters are selected, and 2000 different combinations need to be searched in the table. Of these selections, only about 100 or 200 combinations will be available in existing data. So the system is having to insert 1800 rows. The user meanwhile waits for the system to come up with data based on their selections. The user is not willing to wait more than 30 seconds to get to the next screen. In the above mentioned scenario, the system takes more than an hour to insert the new records and bring the information up. We need suggestions to see if the performance can be improved this drastically. If not what are the alternatives? Thanks

    The #1 cause of performance problems with Oracle is not using it correctly.
    I find it hard to believe that with the small data volumes mentioned, you can have performance problems.
    You need to perform a sanity check. Are you using Oracle correctly? Do you know what bind variables are? Are you using indexes correctly? Are you using PL/SQL correctly? Is the instance set up correctly? What about storage, are you using SAME (RAID 10) or something else? Etc.
    Fact: Oracle performs exceptionally well.
    A simple example from a benchmark I did on this exact subject: app-tier developers not understanding and not using Oracle correctly. Incorrect usage of Oracle across 100,000 SQL statements: 24+ minutes elapsed time. The exact same 100,000 SQL statements done correctly (using bind variables): 8 seconds elapsed time. (Benchmark on Oracle 10.1.0.3 on a Sunfire V20z server.)
    But then you need to use Oracle correctly. Are you familiar with the Oracle Concepts Guide? Have you read the Oracle Application Developer's Fundamentals Guide?
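    To illustrate the bind-variable point, a minimal sketch (a table T with a numeric ID column is assumed):
    -- Correct: one statement, hard-parsed once, executed 100,000 times
    -- with a bind variable.
    BEGIN
      FOR i IN 1 .. 100000 LOOP
        EXECUTE IMMEDIATE 'insert into t (id) values (:1)' USING i;
      END LOOP;
      COMMIT;
    END;
    /
    -- Incorrect: 100,000 unique literals, each requiring its own hard parse:
    --   insert into t (id) values (1)
    --   insert into t (id) values (2)
    --   ...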

  • Event ID 218 Copy of database 'Mailbox Database'- experienced a performance problem

    Hi,
    We have 16 GB of RAM and 4 processors for Exchange 2010, using an iSCSI StarWind LUN.
    Event ID : 218
    Event Source : ExchangeStoreDB
    Event Category :Database Recovery
    The copy of database 'Mailbox Database - NYM' on this server experienced a performance problem. Failover returned the following
    error: There is only one copy of this mailbox database (Mailbox Database - NYM). Automatic recovery is not available.
    It occurs especially when the Exchange backup starts; we are using Windows backup for the Exchange backups.
    Below event 218 I can see an ESE warning - Information Store (3912) Mailbox Database - NYM: A request to write to the file "D:\Program Files\Microsoft\Exchange Server\V14\Mailbox\Mailbox Database 0959355037\Mailbox Database 0959355037.edb" at
    offset 150801350656 (0x000000231c760000) for 32768 (0x00008000) bytes has not completed for 68 second(s). This problem is likely due to faulty hardware. Please contact your hardware vendor for further assistance diagnosing the problem.
    Also,
    1. How can I enable verbose logging to expert level for the Exchange mailbox database?
    2. What type of performance counters should be set for the Exchange database in Performance Monitor?
    Any Help ?
    Thanks
    Prakash

    We had the same issue on our mailbox server with Exchange 2010 Ent SP3 on Win Svr 2008 R2 SP1, running on VMware v5.1 with the Exchange DBs mounted to the VM via iSCSI LUNs on our NetApp SAN.
    We escalated the ticket, and the Microsoft Exchange escalation team stated that the EIDs of the database corruption and automatic recovery seemed to point to a hardware issue. They told us not to panic and that there was no need to rebuild the environment and migrate the databases immediately. They instead asked us to focus all our efforts on solving the iSCSI environment issues, since each Exchange EID db corruption/autorecovery would be preceded by some type of corresponding iSCSI system EID.
    We hence opened tickets with the MS Storage team and with NetApp support and worked on this ticket with input from all 3 groups.
    After about a month and a half of troubleshooting and trial and error, with tickets open with NetApp, the MS Storage team, and the MS Exchange team, we finally seem to have applied a configuration change that worked.
    NetApp support referred us to the following article relating to VMware:
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2039495
    So, per NetApp support and MS Storage team, do the following:
    1.    Manually dismount Exchange Databases on the mailbox server.
    2.    Open Device Manager, Network Adapters, VMXNET3 Ethernet Adapter #3, Properties
    3.    Change Small RX Buffers from the default 512 to the maximum 8192 using the drop down selector.
    4.    Change the RX Ring #1 from the default 1024 to the maximum of 4096 using the drop down selector.
    5.    Click OK and save the change.
    6.    Re-mount the Exchange Databases
    And, to our surprise, this seems to have done the trick. It looks like the issue was an iSCSI-related NIC setting all along.
    Thanks,
    Detrich

  • Large DB Performance problems when updating schemas

    Hi,
    I'm facing performance problems updating the schema of large databases in SQL Server 2008 R2 and I can't find a proper solution. I would have thought this is a fairly common problem, so here it goes.
    I have a database which is about 700Gb in size. I am going to detail 2 different issues:
    EXAMPLE 1: CHANGING A FIELD TYPE
    In that database I have a table with the schema detailed below as Table_TB. This table contains several million records. As you can see, there is a column of type TEXT (Comment_FD). What I am trying to do is change the column type to NVARCHAR(MAX) to remove the deprecated TEXT type and support Unicode characters in that field.
    The command I am running is the following one:
    ALTER TABLE DBA.Table_TB ALTER COLUMN Comment_FD NVARCHAR(MAX)
    This operation takes several hours to complete.
    I tried the following:
    -Adding the new column to the same table and copying the data over
    -Creating a new entire table with the new field type modified and copy the data over
    -Exporting to disk the contents of the table, truncate the table and reimporting the data both with Management Studio and the bcp tool. 
    -When copying to a new table I also tried removing the PK of the table, to skip the overhead of maintaining an index.
    -I also tried using a simple stored procedure to copy the data over (both to the column in the same table and to the other table) in batches.
    In ALL my tests I set the SQLServer Recovery model to Simple to skip as much as I can the overhead of generating logs.
    No matter what I do, the times to complete this operation are unusable.
    Please note that I am reducing this problem to its minimum expression. I can't say precisely how long the operation takes, but a script containing 5 field type changes identical to that one, in that table and two others, takes 7 days to complete!!! The REAL problem
    is that I have several other fields whose type I need to change, and they currently amount to a total running time of 14 days! And this is just changing a handful of fields in a handful of tables. At some point every string in the system will need
    to be migrated to get Unicode support, making this completely impracticable.
    **Based on smaller DBs in the same system I guesstimate this table will contain about 14M records and will be about 44Gb in size. 
    EXAMPLE 2: ADDING COLUMNS
    I have another table in the same DB, with a schema detailed below as Table2_TB, and again several million records in it. I am trying to add a column using the following SQL:
    ALTER TABLE DBA.Table2_TB ADD strFiDatasourceName_FD VARCHAR(64) NOT NULL DEFAULT ''
    This operation takes a bit more than 7 hours to complete.
    **Based on smaller DBs in the same system I guesstimate this table will contain about 54M records, and a size of 98Gb.
    QUESTIONS:
    ---->Am I doing something wrong, or is there any way to optimize either the SQL or the SQLServer configuration to speed this up?
    ---->Are these performance levels normal at all when dealing with databases of this size? 
    ---->Anyone out there with experience on DBs of this size?
    ---->Does Microsoft offer some kind of service (cloud?) to make structural changes in large Dbs?
    Thanks a lot for your help in advance!
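    For reference, a batched variant of the "add a new column and copy the data over" approach (a sketch; batch size arbitrary, new column name hypothetical) keeps each transaction, and therefore its log usage, small:
    ALTER TABLE DBA.Table_TB ADD Comment_New_FD NVARCHAR(MAX) NULL;
    GO
    DECLARE @rows INT = 1;
    WHILE @rows > 0
    BEGIN
        -- Migrate 50,000 not-yet-copied rows per iteration
        -- (via VARCHAR(MAX), since TEXT does not convert directly to NVARCHAR):
        UPDATE TOP (50000) DBA.Table_TB
        SET    Comment_New_FD = CONVERT(NVARCHAR(MAX), CONVERT(VARCHAR(MAX), Comment_FD))
        WHERE  Comment_New_FD IS NULL
          AND  Comment_FD IS NOT NULL;
        SET @rows = @@ROWCOUNT;
    END
    GO
    -- Later, in a maintenance window, drop Comment_FD and rename:
    -- EXEC sp_rename 'DBA.Table_TB.Comment_New_FD', 'Comment_FD', 'COLUMN';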
    This is the schema for the first table:
    CREATE TABLE [DBA].[Table_TB](
    [BranchCode_FD] [char](2) NOT NULL,
    [FolderNo_FD] [int] NOT NULL,
    [DateTime_FD] [datetime] NULL,
    [Staff_FD] [char](3) NULL,
    [Action_FD] [varchar](25) NULL,
    [Comment_FD] [text] NULL,
    [Team_FD] [varchar](8) NULL,
    [Group_FD] [varchar](8) NULL,
    [PopUpDate_FD] [smalldatetime] NULL,
    [nRecordID_FD] [smallint] NOT NULL,
    [nFiFoldItemID_FD] [smallint] NOT NULL,
    PRIMARY KEY CLUSTERED
    (
    [BranchCode_FD] ASC,
    [FolderNo_FD] ASC,
    [nRecordID_FD] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100) ON [PRIMARY]
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [DBA].[Table_TB] ADD CONSTRAINT [DF__Table_TB__DateTime] DEFAULT ('1980-01-01') FOR [DateTime_FD]
    GO
    ALTER TABLE [DBA].[Table_TB] ADD CONSTRAINT [DF__Table_TB__PopUp__096A45D7] DEFAULT ('1980-01-01') FOR [PopUpDate_FD]
    GO
    ALTER TABLE [DBA].[Table_TB] ADD DEFAULT ((-1)) FOR [nFiFoldItemID_FD]
    GO
    This is the schema for the second table:
    CREATE TABLE [DBA].[Table2_TB](
    [strBBranchCode_FD] [char](2) NOT NULL,
    [lFFoldNo_FD] [int] NOT NULL,
    [nFiFoldItemID_FD] [smallint] NOT NULL,
    [strFiType_FD] [char](3) NOT NULL,
    [dtFiCreateDate_FD] [smalldatetime] NOT NULL,
    [strFiBookingRef_FD] [varchar](32) NOT NULL,
    [strFiBookingRefDayMonth_FD] [varchar](5) NOT NULL,
    [strFiBookedVia_FD] [varchar](50) NOT NULL,
    [bFiInterfaced_FD] [smallint] NOT NULL,
    [nFiSortOrder_FD] [smallint] NOT NULL,
    [strFiCreateStaffCode_FD] [char](3) NOT NULL,
    [strPcProductCode_FD] [varchar](5) NOT NULL,
    [dtFiStartDateTime_FD] [smalldatetime] NOT NULL,
    [lFiFinanVendID_FD] [int] NOT NULL,
    [lFiItinVendID_FD] [int] NOT NULL,
    [dtFiVendBalDueDate_FD] [smalldatetime] NOT NULL,
    [dtFiVendDepositDueDate_FD] [smalldatetime] NOT NULL,
    [strFiStatus_FD] [varchar](2) NOT NULL,
    [bFiTransFeeHasBeenApplied_FD] [smallint] NOT NULL,
    [bFiATOLTypeMan_FD] [smallint] NOT NULL,
    [nFiATOLType_FD] [smallint] NOT NULL,
    [strCcClassCode_FD] [varchar](10) NOT NULL,
    [strFiStartPointCode_FD] [varchar](5) NOT NULL,
    [strFiEndPointCode_FD] [varchar](5) NOT NULL,
    [strFiAirlineCode_FD] [varchar](3) NOT NULL,
    [strFiVendDocNo_FD] [varchar](16) NOT NULL,
    [dtFiIssueDate_FD] [smalldatetime] NOT NULL,
    [strFiDiscReasonCode_FD] [varchar](3) NOT NULL,
    [nFiLastFoldPricingID_FD] [smallint] NOT NULL,
    [strFiPrintingNote_FD] [text] NOT NULL,
    [strFiNonPrintingNote_FD] [text] NOT NULL,
    [dtFiStatusExpiryDate_FD] [smalldatetime] NOT NULL,
    [strFiClientFreqTravellerNo_FD] [varchar](20) NOT NULL,
    [strFiRouteNo_FD] [varchar](5) NOT NULL,
    [nFiNumBum_FD] [smallint] NOT NULL,
    [dtFiEndDateTime_FD] [smalldatetime] NOT NULL,
    [strFiFareBase_FD] [varchar](15) NOT NULL,
    [strFiInterfaceItemID_FD] [varchar](15) NOT NULL,
    [strFiEndPointLoc_FD] [varchar](255) NULL,
    [strFiStartPointLoc_FD] [varchar](255) NULL,
    [strMcMealCode_FD] [varchar](5) NOT NULL,
    [strFiSeatNote_FD] [text] NOT NULL,
    [strFiMealNote_FD] [text] NOT NULL,
    [strFiAirCraftType_FD] [varchar](5) NOT NULL,
    [strFiJourneyTime_FD] [varchar](8) NOT NULL,
    [strFiCheckInMins_FD] [varchar](10) NOT NULL,
    [lFiJourneyDist_FD] [int] NOT NULL,
    [nFiNumStop_FD] [smallint] NOT NULL,
    [strFiBaggageAllow_FD] [varchar](15) NOT NULL,
    [strFiIssueStaffCode_FD] [varchar](20) NOT NULL,
    [dtFiDispatchDate_FD] [smalldatetime] NOT NULL,
    [strFiDispatchStaffCode_FD] [varchar](3) NOT NULL,
    [strDmDispatchCode_FD] [varchar](2) NOT NULL,
    [lfFiVendDepositDueAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiRateCode_FD] [varchar](50) NOT NULL,
    [strFiRatePlan_FD] [varchar](2) NOT NULL,
    [strFiCabinNo_FD] [varchar](8) NOT NULL,
    [strFiMileage_FD] [varchar](10) NOT NULL,
    [strFiStartPointLocTelNo_FD] [varchar](40) NULL,
    [strFiBookingGuarantee_FD] [varchar](60) NOT NULL,
    [strFiSpecialRemarks_FD] [varchar](250) NULL,
    [strFiCxnCondition_FD] [varchar](100) NOT NULL,
    [bFiFlyDrive_FD] [smallint] NOT NULL,
    [strFiConfNo_FD] [varchar](32) NOT NULL,
    [lFiLinkID_FD] [int] NOT NULL,
    [strFiCategory_FD] [varchar](15) NOT NULL,
    [nFiNumRoom_FD] [smallint] NOT NULL,
    [nFiNumDay_FD] [smallint] NOT NULL,
    [nFiSaleFoldItemID_FD] [smallint] NOT NULL,
    [bFiRefundItem_FD] [smallint] NOT NULL,
    [strFiFareSavingCode_FD] [varchar](2) NOT NULL,
    [lfFiFareSavingAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiRateNote_FD] [text] NOT NULL,
    [strFiDiscCode_FD] [varchar](20) NOT NULL,
    [strFiReqDispatchMethodCode_FD] [varchar](2) NOT NULL,
    [dtFiReqDispatchDateTime_FD] [smalldatetime] NOT NULL,
    [nFiReqDispatchVoucherType_FD] [smallint] NOT NULL,
    [nFiLastFoldItemDetailID_FD] [smallint] NOT NULL,
    [nFiNumConjunction_FD] [smallint] NOT NULL,
    [lfFiBSPFrgnBaseFareAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiBSPBaseFareAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiBSPTaxDiscrepancy_FD] [decimal](17, 2) NOT NULL,
    [lfFiBSPPenaltyFeeAmt_FD] [decimal](17, 2) NOT NULL,
    [nFiRegion_FD] [smallint] NOT NULL,
    [strFiOpenTktNo_FD] [varchar](16) NOT NULL,
    [strFiTktSource_FD] [varchar](3) NOT NULL,
    [strFiJourneyType_FD] [varchar](3) NOT NULL,
    [strFiTktType_FD] [varchar](3) NOT NULL,
    [strFiInterfaceNameRemark_FD] [varchar](50) NULL,
    [nFiATOLIssuedStatus_FD] [smallint] NOT NULL,
    [strFiFareSavingFareBase_FD] [varchar](13) NOT NULL,
    [strFiPaxType_FD] [varchar](3) NOT NULL,
    [strFiActualCarrier_FD] [varchar](2) NOT NULL,
    [strFiNetRemitType_FD] [varchar](1) NOT NULL,
    [strFiFareConstruction_FD] [text] NOT NULL,
    [lfFiBSPTotVATAmt_FD] [decimal](17, 2) NOT NULL,
    [bFiNetFare_FD] [smallint] NOT NULL,
    [strFiTourCode_FD] [varchar](50) NOT NULL,
    [strFiSuppFOPInfo_FD] [varchar](255) NOT NULL,
    [lfFitBSPPublishedCommPerc_FD] [decimal](12, 6) NOT NULL,
    [strFiBSPFareCurrCode_FD] [varchar](3) NOT NULL,
    [strFiTktIssueIataNo_FD] [varchar](8) NOT NULL,
    [lfFiFareOfferedSavingAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiFareOfferedSavingCode_FD] [varchar](2) NOT NULL,
    [strFiDesc_FD] [varchar](8000) NOT NULL,
    [lfFiBSPFareBuyDiscrepancyAmt_FD] [decimal](17, 2) NOT NULL,
    [bFiNoPrintOnItin_FD] [smallint] NOT NULL,
    [dtFiStatusCodeChangeDateTime_FD] [smalldatetime] NOT NULL,
    [bFiNoPrintIfAllPricingsZeroCustAmt_FD] [smallint] NOT NULL,
    [strFiFareSavingFareBaseLow_FD] [varchar](13) NOT NULL,
    [lfFiFareSavingLowAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiBrochureCode_FD] [varchar](8) NOT NULL,
    [bFiVendPayDepositNow_FD] [smallint] NOT NULL,
    [bFiVendPayBalanceNow_FD] [smallint] NOT NULL,
    [strFiBankBranchCode_FD] [varchar](2) NOT NULL,
    [strFiOperatingAirlineCode_FD] [varchar](2) NOT NULL,
    [strFiFarePassengerTypeCode_FD] [varchar](3) NOT NULL,
    [strFiAssociatedFarePricingInfoID_FD] [varchar](15) NOT NULL,
    [bFiVerificationReq_FD] [smallint] NOT NULL,
    [dtFiLastVerifiedDateTime_FD] [datetime] NOT NULL,
    [lFiLastVerifiedLevel_FD] [int] NOT NULL,
    [lFiLastVerifiedWithCount_FD] [int] NOT NULL,
    [dtFiTktingStatusChangeDateTime_FD] [smalldatetime] NOT NULL,
    [strFiTktingInformation_FD] [text] NOT NULL,
    [strFiTktingStatus_FD] [varchar](2) NOT NULL,
    [lfFiBSPFareSellDiscrepancyAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiBSPTaxBuyDiscrepancyAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiTktingBatchID_FD] [varchar](15) NOT NULL,
    [dtFiTktingBatchDateTime_FD] [smalldatetime] NOT NULL,
    [lIplPolicyLevelID_FD] [int] NOT NULL,
    [strFiTktingDescription_FD] [varchar](1000) NOT NULL,
    [strFiTktingDataVersion_FD] [varchar](10) NOT NULL,
    [strFiSourceTktingSystem_FD] [varchar](20) NOT NULL,
    [strFiOthPtsPmtCode_FD] [varchar](3) NOT NULL,
    [bFiManualPtsEntry_FD] [smallint] NOT NULL,
    [strFiEndorsement_FD] [varchar](500) NOT NULL,
    [strFiOwnedByStaffCode_FD] [char](3) NOT NULL,
    [strFiThirdPartyTrackingID_FD] [varchar](25) NOT NULL,
    [strFiAdditionalPrintingNote_FD] [text] NULL,
    [bFiOverridePrintingNote_FD] [smallint] NOT NULL,
    [lfFiCarbonOffsetWeightAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiCancellationPolicyNote_FD] [text] NULL,
    [bFiPEProcessed_FD] [smallint] NOT NULL,
    [bFiActingAsAgentFor_FD] [smallint] NOT NULL,
    [nFiOriginalBuyingBasis_FD] [smallint] NOT NULL,
    [bFiIsOpenSegment_FD] [smallint] NOT NULL,
    [nFiCreateSource_FD] [smallint] NOT NULL,
    [strFiBookingSourceInvoiceNo_FD] [varchar](7) NOT NULL,
    [strFiGDSPaxTypeCode_FD] [varchar](8) NOT NULL,
    [strFiNetFareGDSAccountCode_FD] [varchar](8) NOT NULL,
    [strFiPOSID_FD] [varchar](50) NOT NULL,
    [bFiIsPOSEditable_FD] [smallint] NOT NULL,
    [lfFiCustExchRate_FD] [decimal](16, 8) NOT NULL,
    [lfFiCustFareSavingLowAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiCustFareSavingAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiCustFareOfferedSavingAmt_FD] [decimal](17, 2) NOT NULL,
    [bFiCCItemPayableToBranch_FD] [smallint] NOT NULL,
    [dtFiExternalAccountingDate_FD] [smalldatetime] NOT NULL,
    [strFiSourceSystemBookResponseText_FD] [text] NOT NULL,
    [bFiIsConnection_FD] [smallint] NOT NULL,
    [bFiIsPEFoldLevelItem_FD] [smallint] NOT NULL,
    [nFiReqdEndorsedConjTktType_FD] [smallint] NOT NULL,
    [bFiEndorsedConjTktDetailIsManual_FD] [smallint] NOT NULL,
    [strFiReqdEndorsedConjTktDetailText_FD] [varchar](30) NOT NULL,
    [lFiLastTktingTemplateID_FD] [int] NOT NULL,
    [dtFiBookedDate_FD] [smalldatetime] NOT NULL,
    [strFiTktingVerificationWarning_FD] [varchar](1000) NOT NULL,
    [strFiTktingVerificationError_FD] [varchar](1000) NOT NULL,
    [strFiTktingError_FD] [varchar](1000) NOT NULL,
    [strFiCustomerAccountingData00_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData01_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData02_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData03_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData04_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData05_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData06_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData07_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData08_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData09_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingDataNote_FD] [varchar](100) NOT NULL,
    [strFiCustomerAccountingData10_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData11_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData12_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData13_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData14_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData15_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData16_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData17_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData18_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData19_FD] [varchar](50) NOT NULL,
    [nFiFlightBasis_FD] [smallint] NOT NULL,
    [lFiStartPointVendID_FD] [int] NOT NULL,
    [lFiEndPointVendID_FD] [int] NOT NULL,
    [lCtID_FD] [int] NOT NULL,
    [strFiContractCode_FD] [varchar](25) NOT NULL,
    [strFiContractPeriodCode_FD] [varchar](50) NOT NULL,
    PRIMARY KEY CLUSTERED
    (
    [strBBranchCode_FD] ASC,
    [lFFoldNo_FD] ASC,
    [nFiFoldItemID_FD] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100) ON [PRIMARY]
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
    GO
    ALTER TABLE [DBA].[Table2_TB] WITH NOCHECK ADD FOREIGN KEY([nFiReqDispatchVoucherType_FD])
    REFERENCES [DBA].[VoucherTypes_TB] ([nVtCode_FD])
    GO
    ALTER TABLE [DBA].[Table2_TB] WITH NOCHECK ADD FOREIGN KEY([strPcProductCode_FD], [strFiType_FD])
    REFERENCES [DBA].[ProductCodes_TB] ([ProductCode_FD], [Type_FD])
    GO
    ALTER TABLE [DBA].[Table2_TB] WITH NOCHECK ADD FOREIGN KEY([strBBranchCode_FD], [lFFoldNo_FD])
    REFERENCES [DBA].[Folder_TB] ([BranchCode_FD], [FolderNo_FD])
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strBB__44AB0736] DEFAULT ('') FOR [strBBranchCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFFol__459F2B6F] DEFAULT (0) FOR [lFFoldNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiFo__46934FA8] DEFAULT ((-1)) FOR [nFiFoldItemID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__478773E1] DEFAULT ('') FOR [strFiType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiC__487B981A] DEFAULT ('1980-01-01') FOR [dtFiCreateDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strFiBookingRef] DEFAULT ('') FOR [strFiBookingRef_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__4A63E08C] DEFAULT ('') FOR [strFiBookingRefDayMonth_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strFiBookedVia_FD] DEFAULT ('') FOR [strFiBookedVia_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiIn__4C4C28FE] DEFAULT (0) FOR [bFiInterfaced_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiSo__4D404D37] DEFAULT ((-1)) FOR [nFiSortOrder_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__4E347170] DEFAULT ('') FOR [strFiCreateStaffCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strPcProductCode] DEFAULT ('') FOR [strPcProductCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiS__501CB9E2] DEFAULT ('1980-01-01') FOR [dtFiStartDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFiFi__5110DE1B] DEFAULT ((-1)) FOR [lFiFinanVendID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFiIt__52050254] DEFAULT ((-1)) FOR [lFiItinVendID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiV__52F9268D] DEFAULT ('1980-01-01') FOR [dtFiVendBalDueDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiV__53ED4AC6] DEFAULT ('1980-01-01') FOR [dtFiVendDepositDueDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__54E16EFF] DEFAULT ('') FOR [strFiStatus_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiTr__55D59338] DEFAULT (0) FOR [bFiTransFeeHasBeenApplied_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiAT__56C9B771] DEFAULT (0) FOR [bFiATOLTypeMan_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiAT__57BDDBAA] DEFAULT (0) FOR [nFiATOLType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strCc__58B1FFE3] DEFAULT ('') FOR [strCcClassCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strFiStartPointCode] DEFAULT ('') FOR [strFiStartPointCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strFiEndPointCode] DEFAULT ('') FOR [strFiEndPointCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__5B8E6C8E] DEFAULT ('') FOR [strFiAirlineCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__5C8290C7] DEFAULT ('') FOR [strFiVendDocNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiI__5D76B500] DEFAULT ('1980-01-01') FOR [dtFiIssueDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__5E6AD939] DEFAULT ('') FOR [strFiDiscReasonCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiLa__5F5EFD72] DEFAULT ((-1)) FOR [nFiLastFoldPricingID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__605321AB] DEFAULT ('') FOR [strFiPrintingNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__614745E4] DEFAULT ('') FOR [strFiNonPrintingNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiS__623B6A1D] DEFAULT ('1980-01-01') FOR [dtFiStatusExpiryDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__632F8E56] DEFAULT ('') FOR [strFiClientFreqTravellerNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6423B28F] DEFAULT ('') FOR [strFiRouteNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__nFiNumBum] DEFAULT ((-1)) FOR [nFiNumBum_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiE__660BFB01] DEFAULT ('1980-01-01') FOR [dtFiEndDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__67001F3A] DEFAULT ('') FOR [strFiFareBase_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__67F44373] DEFAULT ('') FOR [strFiInterfaceItemID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__68E867AC] DEFAULT ('') FOR [strFiEndPointLoc_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__69DC8BE5] DEFAULT ('') FOR [strFiStartPointLoc_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strMc__6AD0B01E] DEFAULT ('') FOR [strMcMealCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6BC4D457] DEFAULT ('') FOR [strFiSeatNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6CB8F890] DEFAULT ('') FOR [strFiMealNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6DAD1CC9] DEFAULT ('') FOR [strFiAirCraftType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6EA14102] DEFAULT ('') FOR [strFiJourneyTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6F95653B] DEFAULT ('') FOR [strFiCheckInMins_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFiJo__70898974] DEFAULT (0) FOR [lFiJourneyDist_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiNu__717DADAD] DEFAULT (0) FOR [nFiNumStop_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7271D1E6] DEFAULT ('') FOR [strFiBaggageAllow_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7365F61F] DEFAULT ('') FOR [strFiIssueStaffCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiD__745A1A58] DEFAULT ('1980-01-01') FOR [dtFiDispatchDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__754E3E91] DEFAULT ('') FOR [strFiDispatchStaffCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strDm__764262CA] DEFAULT ('') FOR [strDmDispatchCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiV__77368703] DEFAULT (0) FOR [lfFiVendDepositDueAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__782AAB3C] DEFAULT ('') FOR [strFiRateCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__791ECF75] DEFAULT ('') FOR [strFiRatePlan_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7A12F3AE] DEFAULT ('') FOR [strFiCabinNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7B0717E7] DEFAULT ('') FOR [strFiMileage_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7BFB3C20] DEFAULT ('') FOR [strFiStartPointLocTelNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7CEF6059] DEFAULT ('') FOR [strFiBookingGuarantee_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7DE38492] DEFAULT ('') FOR [strFiSpecialRemarks_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7ED7A8CB] DEFAULT ('') FOR [strFiCxnCondition_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiFl__7FCBCD04] DEFAULT (0) FOR [bFiFlyDrive_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__00BFF13D] DEFAULT ('') FOR [strFiConfNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFiLi__01B41576] DEFAULT ((-1)) FOR [lFiLinkID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__02A839AF] DEFAULT ('') FOR [strFiCategory_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiNu__039C5DE8] DEFAULT (0) FOR [nFiNumRoom_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiNu__04908221] DEFAULT (0) FOR [nFiNumDay_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiSa__0584A65A] DEFAULT ((-1)) FOR [nFiSaleFoldItemID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiRe__0678CA93] DEFAULT (0) FOR [bFiRefundItem_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__076CEECC] DEFAULT ('') FOR [strFiFareSavingCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiF__08611305] DEFAULT (0) FOR [lfFiFareSavingAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__0955373E] DEFAULT ('') FOR [strFiRateNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__0A495B77] DEFAULT ('') FOR [strFiDiscCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__0B3D7FB0] DEFAULT ('') FOR [strFiReqDispatchMethodCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiR__0C31A3E9] DEFAULT ('1980-01-01') FOR [dtFiReqDispatchDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiRe__0D25C822] DEFAULT (0) FOR [nFiReqDispatchVoucherType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiLa__0F0E1094] DEFAULT ((-1)) FOR [nFiLastFoldItemDetailID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiNu__100234CD] DEFAULT (0) FOR [nFiNumConjunction_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__10F65906] DEFAULT (0) FOR [lfFiBSPFrgnBaseFareAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__11EA7D3F] DEFAULT (0) FOR [lfFiBSPBaseFareAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__12DEA178] DEFAULT (0) FOR [lfFiBSPTaxDiscrepancy_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__13D2C5B1] DEFAULT (0) FOR [lfFiBSPPenaltyFeeAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiDo__14C6E9EA] DEFAULT (0) FOR [nFiRegion_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__15BB0E23] DEFAULT ('') FOR [strFiOpenTktNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__16AF325C] DEFAULT ('') FOR [strFiTktSource_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__17A35695] DEFAULT ('') FOR [strFiJourneyType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__18977ACE] DEFAULT ('') FOR [strFiTktType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__198B9F07] DEFAULT ('') FOR [strFiInterfaceNameRemark_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiAT__1A7FC340] DEFAULT ((-1)) FOR [nFiATOLIssuedStatus_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1B73E779] DEFAULT ('') FOR [strFiFareSavingFareBase_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1C680BB2] DEFAULT ('') FOR [strFiPaxType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1D5C2FEB] DEFAULT ('') FOR [strFiActualCarrier_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1E505424] DEFAULT ('') FOR [strFiNetRemitType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1F44785D] DEFAULT ('') FOR [strFiFareConstruction_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__20389C96] DEFAULT (0) FOR [lfFiBSPTotVATAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiNe__212CC0CF] DEFAULT (0) FOR [bFiNetFare_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__2220E508] DEFAULT ('') FOR [strFiTourCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__23150941] DEFAULT ('') FOR [strFiSuppFOPInfo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFit__24092D7A] DEFAULT (0) FOR [lfFitBSPPublishedCommPerc_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__24FD51B3] DEFAULT ('') FOR [strFiBSPFareCurrCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__25F175EC] DEFAULT ('') FOR [strFiTktIssueIataNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiF__27D9BE5E] DEFAULT (0) FOR [lfFiFareOfferedSavingAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__251D4D44] DEFAULT ('') FOR [strFiFareOfferedSavingCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiDesc_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiBSPFareBuyDiscrepancyAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiNoPrintOnItin_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiStatusCodeChangeDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiNoPrintIfAllPricingsZeroCustAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiFareSavingFareBaseLow_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiFareSavingLowAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiBrochureCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiVendPayDepositNow_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiVendPayBalanceNow_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiBankBranchCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiOperatingAirlineCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiFarePassengerTypeCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiAssociatedFarePricingInfoID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiVerificationReq_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiLastVerifiedDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lFiLastVerifiedLevel_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lFiLastVerifiedWithCount_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiTktingStatusChangeDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingInformation_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingStatus_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiBSPFareSellDiscrepancyAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiBSPTaxBuyDiscrepancyAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingBatchID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiTktingBatchDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lIplPolicyLevelID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingDescription_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingDataVersion_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiSourceTktingSystem_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiOthPtsPmtCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiManualPtsEntry_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiEndorsement_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiOwnedByStaffCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiThirdPartyTrackingID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiAdditionalPrintingNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiOverridePrintingNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiCarbonOffsetWeightAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCancellationPolicyNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiPEProcessed_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiActingAsAgentFor_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [nFiOriginalBuyingBasis_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiIsOpenSegment_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [nFiCreateSource_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (' ') FOR [strFiBookingSourceInvoiceNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiGDSPaxTypeCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiNetFareGDSAccountCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiPOSID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiIsPOSEditable_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [lfFiCustExchRate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [lfFiCustFareSavingLowAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [lfFiCustFareSavingAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [lfFiCustFareOfferedSavingAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [bFiCCItemPayableToBranch_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiExternalAccountingDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiSourceSystemBookResponseText_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [bFiIsConnection_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [bFiIsPEFoldLevelItem_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [nFiReqdEndorsedConjTktType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [bFiEndorsedConjTktDetailIsManual_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiReqdEndorsedConjTktDetailText_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lFiLastTktingTemplateID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiBookedDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingVerificationWarning_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingVerificationError_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingError_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData00_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData01_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData02_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData03_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData04_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData05_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData06_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData07_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData08_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData09_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingDataNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData10_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData11_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData12_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData13_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData14_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData15_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData16_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData17_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData18_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData19_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [nFiFlightBasis_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lFiStartPointVendID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lFiEndPointVendID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lCtID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiContractCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiContractPeriodCode_FD]
    GO

    Hi,
    I just wanted to summarize the conclusions we got here, in case it helps someone else.
    For the case of the ALTER statement:
    First we analyzed the query performance and found that the query was I/O bound. A handful of useful scripts can be found in the links below. High values on PAGEIOLATCH wait times suggested memory pressure, so we increased the amount of memory dedicated to the server to 32GB. This single change was one of the most effective ones and reduced the query execution time by about 40-50%. I guess SQL Server needs less paging to perform the operation when it can hold more pages in memory at the same time.
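    For reference, the wait times can be inspected with the standard sys.dm_os_wait_stats DMV; a minimal sketch (the excluded idle waits and the ordering are just one common convention):
    -- Top waits since the last stats clear; PAGEIOLATCH_* near the top
    -- usually points at buffer-pool pressure or slow reads.
    SELECT TOP (10)
           wait_type,
           wait_time_ms / 1000.0 AS wait_time_s,
           waiting_tasks_count,
           wait_time_ms * 1.0 / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
    FROM   sys.dm_os_wait_stats
    WHERE  wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'SQLTRACE_BUFFER_FLUSH')
    ORDER BY wait_time_ms DESC;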
    We ran more tests tweaking some of the other variables mentioned, like the server MAXDOP, which in fact made the query slower the higher we set it. The initial server config had CPU affinity set to auto but the I/O affinity mask bound to the first 4 CPUs, and we found that setting both to ALL AUTO performed faster, for some reason.
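    For anyone repeating these tests, both knobs can be changed with sp_configure; a minimal sketch (the values shown just restore the auto defaults, they are not recommendations):
    -- Both settings are advanced options.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 0;  -- 0 = let SQL Server decide
    EXEC sp_configure 'affinity I/O mask', 0;          -- 0 = auto (no dedicated I/O CPUs)
    RECONFIGURE;
    -- Note: changes to 'affinity I/O mask' only take effect after a service restart.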
    After some analysis by Microsoft of the diagnostics/metrics data, the only interesting finding was that a couple of the storage volumes were performing slightly slower than the rest, causing a bit of a bottleneck. We recommended that the customer look into it with their storage team, but even with those fixed we wouldn't expect the query to run much faster.
    No other tweaks were found useful to speed up the ALTER statement. Basically it comes down to how fast your I/O subsystem is and how much SQL Server can cache in memory. No other suggestion was made by Microsoft, and they advised that any other rewritten form of the query (splitting the column addition and the constraints, for example) is going to perform worse than the plain and simple ALTER statement.
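    For context, the plain single-statement form being discussed looks like this (column name hypothetical); on SQL Server 2008 adding a NOT NULL column with a default rewrites every row, which only became a metadata-only operation in SQL Server 2012 Enterprise Edition:
    ALTER TABLE [DBA].[Table2_TB]
        ADD [strFiNewColumn_FD] VARCHAR(20) NOT NULL
            CONSTRAINT [DF__Table2_TB__strFiNewColumn] DEFAULT ('')
    GO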
    On the NVARCHAR type problem:
    As suggested by Erland Sommarskog, SQL Server 2014 Enterprise Edition performed this operation about 40% faster than SQL Server 2008 R2 with the same hardware specs.
    At the moment upgrading the customer's infrastructure is not an option for us, so we don't have a proper solution to accomplish this in a workable time frame on SQL Server 2008.
    The strategy we found might be the best option is the one suggested by E. Sommarskog: bind our code's read access to a COALESCE-based computed column while writing to the new converted NVARCHAR column, schedule a background batch job to migrate all the data to the new column over time, and finally drop the old column.
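    A minimal sketch of that migration, assuming a single VARCHAR column [strFiDesc_FD] is being converted (all the new names are hypothetical):
    -- 1) New target column, nullable so the ALTER itself is cheap.
    ALTER TABLE [DBA].[Table2_TB] ADD [strFiDescN_FD] NVARCHAR(255) NULL
    GO
    -- 2) Computed column for readers: prefers the migrated value,
    --    falls back to the old column until the backfill finishes.
    ALTER TABLE [DBA].[Table2_TB]
        ADD [strFiDescRead_FD] AS COALESCE([strFiDescN_FD], CAST([strFiDesc_FD] AS NVARCHAR(255)))
    GO
    -- 3) Background batch job: migrate in small chunks to keep the
    --    transaction log and blocking under control.
    WHILE 1 = 1
    BEGIN
        UPDATE TOP (10000) [DBA].[Table2_TB]
        SET    [strFiDescN_FD] = CAST([strFiDesc_FD] AS NVARCHAR(255))
        WHERE  [strFiDescN_FD] IS NULL
        IF @@ROWCOUNT = 0 BREAK
    END
    GO
    -- 4) Once no NULLs remain, drop the old column and clean up.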
    Thanks a lot everyone for your help.
    David.
    Useful links:
    http://www.sqlskills.com/blogs/paul/wait-statistics-or-please-tell-me-where-it-hurts/
    http://msdn.microsoft.com/en-us/library/ms189768.aspx
    http://rusanu.com/2014/02/24/how-to-analyse-sql-server-performance/

  • Critical performance problem upon bulk load of groups

    All (including product development),
    I think there are missing indexes on wwsec_flat$ and wwsec_sys_priv$. In any case, I'd like assistance in properly fixing the critical performance problems I'm seeing. Read on...
    During and after a bulk load of about 500 users and groups from an external database, it becomes evident that there's a performance problem somewhere. Many of the calls to wwsec_api.addGroupToList took several minutes to finish. Afterwards the machine went to 100% CPU just from logging in with the portal30 user (which happens to be the group owner for all the groups).
    Running SQL trace points in the direction of the following SQL statement:
    SELECT ID,PARENT_ID,NAME,TITLE_ID,TITLEIMAGE_ID,ROLLOVERIMAGE_ID,
    DESCRIPTION_ID,LAYOUT_ID,STYLE_ID,PAGE_TYPE,CREATED_BY,CREATED_ON,
    LAST_MODIFIED_BY,LAST_MODIFIED_ON,PUBLISHED_ON,HAS_BANNER,HAS_FOOTER,
    EXPOSURE,SHOW_CHILDREN,IS_PUBLIC,INHERIT_PRIV,IS_READY,EXECUTE_MODE,
    CACHE_MODE,CACHE_EXPIRES,TEMPLATE FROM
    WWPOB_PAGE$ WHERE ID = :b1
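    (For anyone reproducing this: the trace above is the standard session-level SQL trace, formatted with tkprof. A minimal sketch, with the sort flags being just one convenient choice:)
    -- In the session that performs the bulk load:
    ALTER SESSION SET timed_statistics = TRUE;
    ALTER SESSION SET sql_trace = TRUE;
    -- ... run the bulk load / reproduce the slowdown ...
    ALTER SESSION SET sql_trace = FALSE;
    -- Then, from the OS shell, format the trace file written to user_dump_dest:
    -- tkprof <tracefile>.trc bulkload.prf sort=exeela,fchela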
    I checked the existing indexes, and see that the following ones are missing (I'm about to test with these, but have not yet done so):
    CREATE UNIQUE INDEX "PORTAL30"."WWSEC_FLAT_IX_GROUP_ID"
    ON "PORTAL30"."WWSEC_FLAT$"("GROUP_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 160K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING
    CREATE UNIQUE INDEX "PORTAL30"."WWSEC_FLAT_IX_PERSON_ID"
    ON "PORTAL30"."WWSEC_FLAT$"("PERSON_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 160K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING
    CREATE UNIQUE INDEX "PORTAL30"."WWSEC_SYS_PRIV_IX_PATCH1"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("OWNER", "GRANTEE_GROUP_ID",
    "GRANTEE_TYPE", "NAME", "OBJECT_TYPE_NAME")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 80K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING
    Note that when I deleted the newly inserted groups, the CPU consumption immediately went down from 100% to some 2-3%.
    This behaviour has been observed on a Sun Solaris system, but I think it's the same on NT (I have observed it during the bulk load on my NT laptop, but so far have not had the time to test further).
    Also note: In the call to addGroupToList, I set owner to true for all groups.
    Also note: During loading of the groups, I logged a few errors, all of the same type ("PORTAL30.WWSEC_API", line 2075), as follows:
    Error: Problem calling addGroupToList for child group'Marketing' (8030), list 'NO_OSL_Usenet'(8017). Reason: java.sql.SQLException: ORA-06510: PL/SQL: unhandled user-defined exception ORA-06512: at "PORTAL30.WWSEC_API", line 2075
    Please help. If you like, I can supply the tables and the Java program that I use. It's fully reproducible.
    Thanks,
    Erik Hagen (you may call me on +47 90631013)

    YES!
    I have now tested with the missing indexes inserted. It seems the call to addGroupToList takes just as long as before, but the result is much better: WITH THE INDEXES DEFINED, THERE IS NO LONGER A PERFORMANCE PROBLEM!! The index definitions that I used are listed below (I added these to the ones already there in Portal 3.0.8, but I guess some of those could have been deleted).
    About the info at http://technet.oracle.com:89/ubb/Forum70/HTML/000894.html: Yes! Thanks! Very interesting, and I guess you found the cause of the error messages and maybe also of the performance problem during the bulk load (I'll look into it as soon as possible and report what I find).
    Note: I have made a pretty foolproof and automated installation script (or actually, it's part of my Java program), that will let anybody interested recreate the problem. Mail your interest to [email protected].
    ============================================
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_PERS_IX1"
    ON "PORTAL30"."WWSEC_PERSON$"("MANAGER")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_IX2
    ON PORTAL30.WWSEC_PERSON$("ORGANIZATION")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_PK
    ON PORTAL30.WWSEC_PERSON$("ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_UK
    ON PORTAL30.WWSEC_PERSON$("USER_NAME")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_UK
    ON PORTAL30.WWSEC_FLAT$("GROUP_ID", "PERSON_ID",
    "SPONSORING_MEMBER_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_PK
    ON PORTAL30.WWSEC_FLAT$("ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX5
    ON PORTAL30.WWSEC_FLAT$("GROUP_ID", "PERSON_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX4
    ON PORTAL30.WWSEC_FLAT$("SPONSORING_MEMBER_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX3
    ON PORTAL30.WWSEC_FLAT$("GROUP_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX2
    ON PORTAL30.WWSEC_FLAT$("PERSON_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX1"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("GRANTEE_GROUP_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX2"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("GRANTEE_USER_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX3"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("OBJECT_TYPE_NAME", "NAME")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_PK"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_UK"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("OBJECT_TYPE_NAME",
    "NAME", "OWNER", "GRANTEE_TYPE", "GRANTEE_GROUP_ID",
    "GRANTEE_USER_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 88K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    ==================================
    Thanks,
    Erik Hagen

  • Random performance problems

    Hello,
    We have a medium-sized internet application which is experiencing random performance problems. The application's data is stored as XML, which is read from and saved to a CLOB on every page hit. The table which holds the CLOB has the caching option enabled, and the CLOB is stored out of line (CLOB sizes range from 14KB to 1MB).
    THE PROBLEM - Normally the read/write takes 5-10 seconds. We have been experiencing times of up to 350+ seconds. This is not a locking issue. When writing the CLOB to the database, we do not chunk; we simply pass the entire XML string at one time.
    Any ideas how you would debug this problem? Has anyone run into similar problems?
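    For reference, a chunked write via DBMS_LOB looks roughly like this; a sketch only, with the table (app_docs), column (xml_doc) and buffer size all made up:
    DECLARE
      l_clob CLOB;
      l_xml  VARCHAR2(32767) := '<doc>...</doc>';  -- stand-in for the real XML
      l_pos  PLS_INTEGER := 1;
      l_buf  VARCHAR2(8000);
    BEGIN
      -- Lock the row and fetch the LOB locator for writing.
      SELECT xml_doc INTO l_clob
      FROM   app_docs
      WHERE  doc_id = 1
      FOR UPDATE;
      DBMS_LOB.TRIM(l_clob, 0);          -- discard the old contents
      WHILE l_pos <= LENGTH(l_xml) LOOP  -- append in 8000-char chunks
        l_buf := SUBSTR(l_xml, l_pos, 8000);
        DBMS_LOB.WRITEAPPEND(l_clob, LENGTH(l_buf), l_buf);
        l_pos := l_pos + LENGTH(l_buf);
      END LOOP;
    END;
    /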

    Joshua,
    Have you checked out the LOB performance guidelines at http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96591/adl09bes.htm#120857? What release (9.2?), configuration (JDBC OCI or thin?), and storage options (tablespaces, extent sizes, etc.) are you using? Do you have connection pooling in place? There are many possibilities.
    If you have a customer ID, you can also file a TAR at http://metalink.oracle.com to have someone help track down what the problem might be.
    Regards,
    Geoff

  • PL/SQL Performance problem

    I am facing a performance problem with my current application (a PL/SQL packaged procedure).
    My application takes data from 4 temporary tables, does a lot of validation, and puts the data into permanent tables (updates if present, else inserts).
    One of the temporary tables is the parent table, and each parent row can have 0 or more rows in the other tables.
    I have analyzed all my tables and indexes and checked all my SQL statements.
    They all seem to be using the indexes correctly.
    There are 1.6 million records combined in all 4 tables.
    I am using Oracle 8i.
    How do I determine what is causing the problem and which part is taking the time?
    Please help.
    The skeleton of the code which we have written looks like this
    MAIN LOOP (255308 records) -- parent temporary table
        ----- lots of validation -----
        update permanent_table1
        if sql%rowcount = 0 then
            insert into permanent_table1
        Loop2 (0-5 records) -- child temporary table1
            ----- lots of validation -----
            update permanent_table2
            if sql%rowcount = 0 then
                insert into permanent_table2
        end loop2
        Loop3 (0-5 records) -- child temporary table2
            ----- lots of validation -----
            update permanent_table3
            if sql%rowcount = 0 then
                insert into permanent_table3
        end loop3
        Loop4 (0-5 records) -- child temporary table3
            ----- lots of validation -----
            update permanent_table4
            if sql%rowcount = 0 then
                insert into permanent_table4
        end loop4
        -- COMMIT after every 3000 records
    END MAIN LOOP
    Thanks
    Ashwin N.

    Do this instead of ditching the PL/SQL.
    DECLARE
      TYPE NumTab  IS TABLE OF NUMBER(4) INDEX BY BINARY_INTEGER;
      TYPE NameTab IS TABLE OF CHAR(15)  INDEX BY BINARY_INTEGER;
      pnums  NumTab;
      pnames NameTab;
      t1 NUMBER;
      t2 NUMBER;
      t3 NUMBER;
    BEGIN
      FOR j IN 1..5000 LOOP  -- load the index-by tables
        pnums(j)  := j;
        pnames(j) := 'Part No. ' || TO_CHAR(j);
      END LOOP;
      t1 := dbms_utility.get_time;
      FOR i IN 1..5000 LOOP  -- row-by-row FOR loop: one context switch per insert
        INSERT INTO parts VALUES (pnums(i), pnames(i));
      END LOOP;
      t2 := dbms_utility.get_time;
      FORALL i IN 1..5000    -- FORALL: a single bulk bind for all 5000 rows
        INSERT INTO parts VALUES (pnums(i), pnames(i));
      t3 := dbms_utility.get_time;  -- get_time returns hundredths of a second
      dbms_output.put_line('Execution Time (secs)');
      dbms_output.put_line('---------------------');
      dbms_output.put_line('FOR loop: ' || TO_CHAR((t2 - t1) / 100));
      dbms_output.put_line('FORALL:   ' || TO_CHAR((t3 - t2) / 100));
    END;
    Try this link, http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/05_colls.htm#23723
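    To the original question of which part is taking the time: a crude but effective approach is to bracket each phase of the loop with dbms_utility.get_time and total the results. A sketch, with a stand-in table (perm_t) in place of the real ones:
    -- Assumes a scratch table: CREATE TABLE perm_t (id NUMBER PRIMARY KEY, val VARCHAR2(30));
    DECLARE
      t_start NUMBER;
      t_val   NUMBER := 0;  -- centiseconds spent in "validation"
      t_dml   NUMBER := 0;  -- centiseconds spent in update/insert
    BEGIN
      FOR i IN 1..10000 LOOP
        t_start := dbms_utility.get_time;
        NULL;               -- stand-in for the validation logic
        t_val := t_val + (dbms_utility.get_time - t_start);
        t_start := dbms_utility.get_time;
        UPDATE perm_t SET val = 'x' WHERE id = i;
        IF SQL%ROWCOUNT = 0 THEN
          INSERT INTO perm_t VALUES (i, 'x');
        END IF;
        t_dml := t_dml + (dbms_utility.get_time - t_start);
      END LOOP;
      dbms_output.put_line('validation: ' || t_val / 100 || 's, DML: ' || t_dml / 100 || 's');
    END;
    /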

  • Performance problem in Zstick report...

    Hi Experts,
    I am facing a performance problem in a custom stock report of Materials Management.
    In this report I am fetching all the materials with their batches to get the desired output; at a time this report processes 36,000+ unique combinations of material and batch.
    This report takes around 30 minutes to execute, and it needs to be viewed regularly every 2 hours.
    To read the batch characteristics values I am using the FM '/SAPMP/CE1_BATCH_GET_DETAIL'.
    Is there any way to improve the performance of this report? The output of the report is in ALV.
    Can I have a refresh button in the report so that the data is refreshed automatically without executing it again, or is there any caching concept?
    Note: I have declared all the itabs with type SORTED, and all the SELECT queries fetch with key and index.
    Thanks
    Rohit Gharwar

    Hello,
    SE30 is old. Switch on a trace in ST12 while running this program and identify where exactly most of the time is being spent. If you see high CPU time, the problem is with the ABAP code; you can figure out from the ST12 trace exactly which program/function module the time is being spent in. If you see high database time in ST12, the problem is a database-related issue, so basically you have to analyze the SQL statements from the performance traces in ST12. This should help resolve your issue.
    Yours Sincerely
    Dileep

  • SQL report performance problem

    I have a SQL classic report in Apex 4.0.2 and database 11.2.0.2.0 with a performance problem.
    The report is based on a PL/SQL function returning a query. The query is based on a view and pl/sql functions. The Apex parsing schema has select grant on the view only, not the underlying objects.
    The generated query runs in 1-2 sec in sqlplus (logged in as the Apex parsing schema user), but takes many minutes in Apex. I have found, by monitoring the database sessions via TOAD, that the explain plans in the Apex and sqlplus sessions are very different.
    The summary:
    In sqlplus: SELECT STATEMENT ALL_ROWS Cost: 3,695
    In Apex:    SELECT STATEMENT ALL_ROWS Cost: 3,108,551
    What could be the cause of this?
    I found a blog and a Metalink note about different explain plans for different users. They suggested setting optimizer_secure_view_merging='FALSE', but that didn't help.
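    One way to see the plan the Apex session actually used (rather than a fresh EXPLAIN PLAN) is dbms_xplan.display_cursor against the cursor cache; a sketch, where the LIKE filter is whatever identifies the report query:
    -- Find the statement as executed by the Apex session.
    SELECT sql_id, child_number, SUBSTR(sql_text, 1, 60) AS sql_text
    FROM   v$sql
    WHERE  sql_text LIKE '%some fragment of the report query%';
    -- Show the actual plan for that cursor (NULL child = all children).
    SELECT * FROM TABLE(dbms_xplan.display_cursor('&sql_id', NULL));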

    Hmmm, it runs fast again in SQL Workshop. I didn't expect that, because both the application and SQL Workshop use SYS.DBMS_SYS_SQL to parse the query.
    Only the explain plan doesn't show anything.
    To add: I changed the report source to the query the pl/sql function would generate, so the selects are the same in SQL Workshop and in the application. Still in the application it's horribly slow.
    So, Apex does do something different in the application compared to SQL Workshop.
    Edited by: InoL on Aug 5, 2011 4:50 PM
