Latency performance problems: losing interest.

When I first got this program it blew me away how many options it contains, but the more I try to work the way I'm used to, the more trouble I run into. I am a former analog person who moved to a Roland HD recorder and now to this. I am having trouble with latency as I try to record live mics. I have read these forums and applied the test, and it reports about 180 samples of latency. But then I reset the "delay" function as suggested and don't notice any change. The mics all sound as if I have an Echoplex on at all times, which is creatively very distracting. On top of that, I have run diagnostic tests to analyze how much CPU is being used for even medium-sized recordings, and it shows almost a full gig of memory free, yet the computer is still choking. So- ***?
dual 450 G4   Mac OS X (10.4.5)  

I'm using a Firepod and I can set the headphone mix to listen to the signal, but then I can't hear the tracks I'm playing to. I have tried all different sample rates, but the delay effect is still there and significant enough to make tracks sound slightly off; i.e. the drum track is down and then the bass track sounds like it's dragging behind. When I set the buffer down to 128 it's slightly better, but still present. I thought the "delay" function was supposed to compensate for this.
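(For scale: latency in seconds is roughly buffer size divided by sample rate, so a 128-sample buffer at 44.1 kHz is about 128 / 44100 ≈ 2.9 ms, and the 180 samples the test reported is only about 4 ms. A slapback you can clearly hear usually means the real round trip through the interface, driver and mixer is far larger than the figure the compensation test reports.)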

Similar Messages

  • Interactive report performance problem over database link - Oracle Gateway

    Hello all;
    This is regarding a thread Interactive report performance problem over database link that was posted by Samo.
The issue that I am facing is that when I use an Oracle function like apex_item.checkbox, the query slows down by 45 seconds.
The query looks like this (due to sensitivity issues, I cannot disclose the real table names):
    SELECT apex_item.checkbox(1,b.col3)
    , a.col1
    , a.col2
    FROM table_one a
    , table_two b
    WHERE a.col3 = 12345
    AND a.col4 = 100
    AND b.col5 = a.col5
    table_one and table_two are remote tables (non-oracle) which are connected using Oracle Gateway.
Now if I run the above query without the apex_item.checkbox function, the response is less than a second, but with apex_item.checkbox the query runs for more than 30 seconds. I have worked around the issue by creating a collection, but that's not good practice.
I would like to get ideas from people on how to resolve this or speed up the query.
Any idea how to use sub-factoring for the above scenario? Or other methods (creating a view or materialized view is not an option)?
    Thank you.
    Shaun S.

    Hi Shaun
    Okay, I have a million questions (could you tell me if both tables are from the same remote source, it looks like they're possibly not?), but let's just try some things first.
By now you should understand the idea of what I termed 'sub-factoring' in a previous post. This is to do with using the WITH blah AS (SELECT... syntax. In most circumstances this 'materialises' the results of the inner select statement, meaning we 'get' the results and then do something with them afterwards. It's a handy trick when dealing with remote sites, as sometimes you want the remote database to do the work. The reason I ask you to use the MATERIALIZE hint for testing is just to force this; in 99.99% of cases it can be removed later. The WITH statement is also handled differently from an inline view like SELECT * FROM (SELECT..., but the same result can be mimicked with a NO_MERGE hint.
Looking at your case I would be interested to see what the explain plans and results would be for something like the following two statements (sorry - you're going to have to check them, it's late!)
    WITH a AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_one),
    b AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_two),
    sourceqry AS
    (SELECT  b.col3 x
           , a.col1 y
           , a.col2 z
FROM a
    , b
    WHERE a.col3 = 12345
    AND   a.col4 = 100
    AND   b.col5 = a.col5)
    SELECT apex_item.checkbox(1,x), y , z
    FROM sourceqry
    WITH a AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_one),
    b AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_two)
SELECT  apex_item.checkbox(1,b.col3), a.col1, a.col2
FROM a
    , b
    WHERE a.col3 = 12345
    AND   a.col4 = 100
AND   b.col5 = a.col5
If the remote tables are at the same site, then you should have the same results. If they aren't, you should get the same results, but different to the original query.
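For reference, the inline-view form with the NO_MERGE hint that I mentioned above would look something like this (again unchecked, same made-up tables):
SELECT apex_item.checkbox(1,b.col3), a.col1, a.col2
FROM (SELECT /*+ NO_MERGE */ * FROM table_one) a
   , (SELECT /*+ NO_MERGE */ * FROM table_two) b
WHERE a.col3 = 12345
AND   a.col4 = 100
AND   b.col5 = a.col5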
We aren't being told the real cardinality of the inner selects here, so the explain plan is distorted (this is normal for queries on remote and especially non-Oracle sites). This hinders tuning normally, but I don't think this is your problem at all. How many distinct values do you normally get for the column aliased 'x', and how many rows are normally returned in total? Also, how are you testing response times: in APEX, SQL Developer, Toad, SQL*Plus etc.?
    Sorry for all the questions but it helps to answer the question, if I can.
    Cheers
    Ben
    http://www.munkyben.wordpress.com
    Don't forget to mark replies helpful or correct ;)

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
* Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records when passing a batch (terrible), or 160 seconds for one record (unacceptable!). How can it take 16 times longer to process a tenth the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
•     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
•     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable, but it interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
create table RECORDS (
SSN varchar2(20),
XMLREC sys.xmltype
) xmltype column XMLREC store as binary xml;
create index records_ssn on records(ssn);
-- A dozen code tables represented by one like this:
create table CODES (
CODE varchar2(4),
DESCRIPTION varchar2(500)
);
create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
</Root>');
for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
select xmlquery('
for $r in Root
return
<Root>
<Id>123456789</Id>
{for $e in $r/Element
return
<Element>
<Subelement1>
{$e/Subelement1/Code}
<Description>
{ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
</Description>
</Subelement1>
<Subelement2>
{$e/Subelement2/Code}
<Description>
{ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
</Description>
</Subelement2>
<Subelement3>
{$e/Subelement3/Code}
<Description>
{ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
</Description>
</Subelement3>
</Element>}
</Root>
' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
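For comparison, here is a hypothetical flattened version of the same lookup (made-up table and column names) of the kind the optimizer happily hash-joins:
-- Flattened equivalent of one record's code lookups; CODES can be
-- hash-joined once instead of being probed per element.
SELECT r.ssn, c1.description, c2.description, c3.description
FROM records_flat r
JOIN codes c1 ON c1.code = r.sub1_code
JOIN codes c2 ON c2.code = r.sub2_code
JOIN codes c3 ON c3.code = r.sub3_code
WHERE r.ssn = '10000';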
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.

  • (new?) performance problem using jDriver after a Sql Server 6.5 to 2000 conversion

    Hi,
    This is similar - yet different - to a few of the old postings about performance
    problems with using jdbc drivers against Sql Server 7 & 2000.
    Here's the situation:
    I am running a standalone java application on a Solaris box using BEA's jdbc driver
    to connect to a Sql Server database on another network. The application retrieves
    data from the database through joins on several tables for approximately 40,000
    unique ids. It then processes all of this data and produces a file. We tuned
    the app so that the execution time for a single run through the application was
    24 minutes running against Sql Server 6.5 with BEA's jdbc driver. After performing
    a DBMS conversion to upgrade it to Sql Server 2000 I switched the jDriver to the
    Sql Server 2000 version. I ran the app and got an alarming execution time of
    5hrs 32 min. After some research, I found the problem with unicode and nvarchar/varchar
    and set the "useVarChars" property to "true" on the driver. The execution time
    for a single run through the application is now 56 minutes.
    56 minutes compared to 5 1/2 hrs is an amazing improvement. However, it is still
    over twice the execution time that I was seeing against the 6.5 database. Theoretically,
    I should be able to switch out my jdbc driver and the DBMS conversion should be
    invisible to my application. That would also mean that I should be seeing the
    same execution times with both versions of the DBMS. Has anybody else seen a
similar situation? Are there any other settings or fixes that I can put into place
    to get my performance back down to what I was seeing with 6.5? I would rather
    not have to go through and perform another round of performance tuning after having
    already done this when the app was originally built.
    thanks,
    mike
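(Aside for later readers: the unicode/varchar mismatch described above is the classic implicit-conversion trap. A hypothetical T-SQL illustration of why the driver property matters:)
-- Hypothetical table; the real schema is not shown in this thread.
CREATE TABLE orders (order_id VARCHAR(20) PRIMARY KEY, amount INT)
-- Parameter sent as unicode (the drivers' default): the VARCHAR column is
-- converted to NVARCHAR row by row, so the index cannot be seeked.
SELECT amount FROM orders WHERE order_id = N'A1001'
-- Parameter sent as varchar (useVarChars=true on the jDriver, or
-- SendStringParametersAsUnicode=false on the MS driver): index seek.
SELECT amount FROM orders WHERE order_id = 'A1001'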

    Mike wrote:
    Joe,
    This was actually my next step. I replaced the BEA driver with
    the MS driver and let it run through with out making any
    configuration changes, just to see what happened. I got an
    execution time of about 7 1/2 hrs (which was shocking). So,
    (comparing apples to apples) while leaving the default unicode
    property on, BEA ran faster than MS, 5 1/2 hrs to 7 1/2 hrs.
    I then set the 'SendStringParametersAsUnicode' to 'false' on the
    MS driver and ran another test. This time the application
    executed in just over 24 minutes. The actual runtime was 24 min
    16 sec, which is still ever so slightly above the actual runtime
    against SS 6.5 which was 23 min 35 sec, but is twice as fast as the
    56 minutes that BEA's driver was giving me.
    I think that this is very interesting. I checked to make sure that
    there were no outside factors that may have been influencing the
    runtimes in either case, and there were none. Just to make sure,
    I ran each driver again and got the same results. It sounds like
    there are no known issues regarding this?
    We have people looking into things on the DBMS side and I'm still
    looking into things on my end, but so far none of us have found
    anything. We'd like to continue using BEA's driver for the
    support and the fact that we use Weblogic Server for all of our
    online applications, but this new data might mean that I have to
switch drivers for this particular application. Thanks.
No, there is no known issue, and if you put a packet sniffer
    between the client and DBMS, you will probably not see any appreciable
difference in the content of the SQL sent by either driver. My suspicion is
    that it involves the historical backward compatibility built in to the DBMS.
    It must still handle several iterations of older applications, speaking obsolete
    versions of the DBMS protocol, and expecting different DBMS behavior!
    Our driver presents itself as a SQL7-level application, and may well be treated
    differently than a newer one. This may include different query processing.
    Because our driver is deprecated, it is unlikely that it will be changed in
    future. We will certainly support you using the MS driver, and if you look
    in the MS JDBC newsgroup, you'll see more answers from BEA folks than
    from MS people!
    Joe
    Mike
The next test you should do, to isolate the issue, is to try another
JDBC driver. MS provides a type-4 driver now, for free. If it is
significantly faster, that would be interesting. However, it would still
not isolate the problem, because we would still need to know what query
plan is created by the DBMS, and why.
    Joe Weinstein at BEA
PS: I can only tell you that our driver has not changed in its semantic
function. It essentially sends SQL to the DBMS. It doesn't alter it.

  • RMAN duplicate target database from active database - performance problem

    Hello. I’m running into a major performance problem when trying to duplicate a database from a target located inside our firewall to an auxiliary located outside our firewall. Both target and auxiliary are located in the same equipment room just on different subnets. Previously I had the auxiliary located on the same subnet as the target behind the firewall and duplicating a 4.5T database took 12 hours. Now with the auxiliary moved outside the firewall attempting to duplicate the same 4.5T database is estimated to exceed 35 hours. The target is a RAC instance using ASM and so is the auxiliary. Ping, tnsping, traceroutes to and from target and auxiliary all indicate no problem or latency. Any ideas on things to consider while hunting for this elusive performance decrease?
    Thanks in advance.

    It would obviously appear network related. Have you captured any network/firewall metrics? Are all components set to full duplex? Would it be possible to take the firewall down temporarily and then test the throughput? Do you encounter any latency if you were to copy a large file across the subnets?
    You may want to check V$RMAN_BACKUP_JOB_DETAILS, V$BACKUP_SYNC_IO or V$BACKUP_ASYNC_IO when the backup is running.
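For example (column names from memory, so verify against your version's reference):
-- Throughput of recent and running RMAN jobs
SELECT session_key, start_time, status,
       input_bytes/1024/1024 AS input_mb,
       output_bytes/1024/1024 AS output_mb,
       input_bytes_per_sec/1024/1024 AS input_mb_per_sec
FROM v$rman_backup_job_details
ORDER BY start_time DESC;
-- Per-file asynchronous I/O rates while the duplicate runs
SELECT filename, status, effective_bytes_per_second
FROM v$backup_async_io
WHERE status = 'IN PROGRESS';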

  • 3D performance problems after upgrading memory

I recently purchased an additional 2GB of memory to try and extend the life of my aging computer.  I installed the memory yesterday and Windows seems to recognize it (now reporting 3.3GB), but when I dropped into WoW (pretty much the only game I have) the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best) and the CPU utilization went to about 80% on both cores (with ~20% kernel usage).  Basically WoW was being software 3D rendered!!!
    I went through the usual reinstall drivers, reboot, etc... and couldn't find a fix.  I powered down, pulled out 2 of the memory sticks, booted up, and dropped into WoW - it ran at the full 60FPS and CPU utilization was very low (i.e. back to GPU Hardware 3D rendering).  I powered down again, swapped the 2 sticks for the other 2 sticks, booted up, and dropped into WoW - again it ran 100% fine.  So I powered down, put all four sticks in, booted back up, and when I dropped into WoW it was running in the software 3D rendering mode (20FPS at best and High CPU/Kernel usage).
I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM set to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions for a fix (other than going to Win7-64)?
    All info in signature is up to date.
    Thanks in advance for any help!

    Quote
    Well his last post was a little over 6 hours ago so he was up pretty late.
    Looks like nothing one does in here goes completely unnoticed.   
    Anyway, I am done sleeping now.
    Quote
    his 2 Pfennig's worth.  I know, I know it's Euro's now.
    Yeah, and what used to be "Pfennige" is now also called "Cents" and here are mine:
    Quote
I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM set to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions for a fix (other than going to Win7-64)?
PAE or Physical Address Extension will not do anything, as Microsoft has castrated this feature to such an extent that it has nothing to do with memory addressing anymore when it comes to Windows XP:
    http://en.wikipedia.org/wiki/Physical_Address_Extension#Microsoft_Windows
    Quote
    Windows XP Service Pack 2 and later, by default, on processors with the no-execute (NX) or execute-disable (XD) feature, runs in PAE mode in order to allow NX. The NX (or XD) bit resides in bit 63 of the page table entry and, without PAE, page table entries only have 32 bits; therefore PAE mode is required if the NX feature is to be exploited. However, desktop versions of Windows (Windows XP, Windows Vista) limit physical address space to 4 GiB for driver compatibility reasons.
The feature is already automatically enabled.  But since its original function (address extension) no longer exists in the desktop versions of Windows XP, it won't really do anything you would ever notice.
About the /MAXMEM Switch:  In 32bit Windows operating systems, every process is limited to 2GB of memory.  The point of the switch is to allow certain applications (or their run-time process) to occupy a higher amount of system memory than 2GB.  However, the culprit here is that only applications that have been programmed (or compiled) accordingly can use this ability.  A special flag (large address aware) has to be implemented.  Otherwise, these applications will be restricted to 2GB even though the switch has been set to extend the 2GB limit to 3GB.  Most 32bit applications come without the "large address aware" flag, and that is why setting the switch usually won't change anything.
In any case, it is unlikely that /PAE (even if it were not castrated) and /MAXMEM would have an impact on your actual issue, because I doubt that it has much to do with either memory addressing or the memory limit of an individual Windows process.
    Quote
    the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best) and the CPU utilization went to about 80% on both cores (with ~20% kernel usages).
    There are a couple of hardware based explanations to consider here.  Let's start with the most obvious one:
    1. 975X Memory Controller
The main reason the system chooses to automatically set the memory speed to DDR2-667 even though DDR2-800 modules are installed is that, by design, the memory controller of the Intel 975X Chipset does not natively support DDR2-800 modules:
    >>Intel® 975X Express Chipset Datasheet - For the Intel® 82975X Memory Controller Hub (MCH)<< [Page 20]
This means that, from the point of view of the memory controller, operating the memory @DDR2-800 actually means overclocking it (with all potential side effects).
    Basically, if your initial problem disappears as soon as you reduce the memory speed to DDR2-667, the design limitation of the memory controller may explain your findings.
    2. Different memory modules
If I read your signature correctly, you are actually mixing two different kits/models of RAM (CM2X1024-6400C4DHX and CM2X1024-6400C4).  This can work of course, but in practice it does not necessarily do so under all circumstances.
This list (-> http://ramlist.i4memory.com/ddr2/) indicates that there are at least 14 different module types/revisions of Corsair DDR2-800 / CL4 modules that utilize a wide range of different memory chips (Elpida, ProMOS, Micron, Infineon, Powerchip, Qimonda, Samsung etc.).  Even though the superficial specifications for these chips appear to be pretty similar (DDR2-800 / CL5 / CL4), this does not necessarily mean that the modules will respond to the same operating conditions in the same way.  There may be small differences in sub-timings/sub-latencies and/or the general responsiveness of the ICs which may affect the operating behaviour of the memory controller (which, by the way, also includes the PCI-Express interface your video card is hooked up to).
    And again:  If running the system @DDR2-667 solves your issue, the possible explanation is that higher clock speeds may amplify (or trigger) potential performance problems that could have to do with the use of non-identical memory modules.
    Furthermore: It is also possible that the memory controller's design limitations and the potential compatibility problems that may be attributed to mixing different modules types may reinforce each other in terms of reduced system performance.
    3. The BIOS may have an impact as well
    There has been known issue with the use of certain video cards in conjunction with 4GB of system memory on this mainboard:
    https://forum-en.msi.com/index.php?topic=107301.0
    https://forum-en.msi.com/index.php?topic=105955.0
    https://forum-en.msi.com/index.php?topic=99818.msg798951#msg798951
    What may have come out as graphics/display corruption in earlier BIOS Releases may come out as reduced system performance when using the latest BIOS Release.  Of course, this is hard to prove, but I thought I'd mention it anyway.  May I ask what amount of video memory your card has onboard?
Fortunately, there is a BIOS version that you could consider trying in this matter.  It is not only the last BIOS release that could be used in order to avoid the corruption issue, but it is (in my opinion) the best BIOS version that was ever released for the 975X Platinum PUE Mainboard:  W7246IMS.716 [v7.1b6].  I have been using this mainboard for almost two years and have tested almost every BIOS release that ever came out, and I always went back to v7.1b6 as "ground zero".
It will properly support your E6600 (so you don't have to worry about that) and, as far as I remember, there are no known compatibility issues with other components.  So maybe you want to give this a shot.
    The bottom line is that in a worst case scenario, the problem you describe could be caused by all of the above things at the same time.  You cannot really do anything about the 975X Chipset Specifications and the only way to rule out explanation #2 is to test modules that are actually identical (same model number, revision and memory chips).  A test of the 7.1b6 BIOS Release is something you should consider.  It may be the only way to test the BIOS Hypothesis.
    This post turned out to be longer than I intended, but then again, I am well-rested after a good sleep and the wake-up coffee is kicking in pretty good.

  • ATV Performance Problems

Help, smart people! I just got an ATV. I successfully synced my iTunes library with my ATV. My ATV is connected to my TV via HDMI and to my MacBook Pro via wireless. I can see all of "my movies" in my ATV menu. However, when I select one of my videos it's very slow, very choppy, and then times out. My ATV has an unobstructed wireless signal and is connected via wireless "n". Anyone having similar performance problems? Suggestions?

    capaho wrote:
    Create a secondary iTunes library....
    I'm not so sure there really is such a thing as a "primary" iTunes library and a "secondary" iTunes library. An ATV can be synced to only one library at a time. Other libraries on the network can only be used for streaming. The "primary" and "secondary" library concept is an interesting if not artificial one.
    This is Winston's nomenclature.
    Personally I preferred the old Sources selection from 1.0/1.1. software.
While the combined sync/stream listings for the sync library are intended to help, I think they confuse new users, and I feel that maintaining and sorting this 'merged list' may be partly responsible for sluggishness in 2.0+.
    You knew what the source of the media was - AppleTV itself (content synced), streamed content from named libraries (including the sync library by default).

  • Performance Problem between Oracle 9i to Oracle 10g using Crystal XI

We have a Crystal XI report using ODBC drivers, 14 tables, and one sub-report. If we execute the report on an Oracle 9i database, the report completes in about 12 seconds. If we execute the report on an Oracle 10g database, it completes in about 35 seconds.
    Our technical Setup:
    Application server: Windows Server 2003, Running Crystal XI SP2 Runtime dlls with Oracle Client 10.01.00.02, .Net Framework 1.1, C# for Crystal Integration, Unmanaged C++ for app server environment calling into C# through a dynamically loaded mixed-mode C++ DLL.
    Database server is Oracle 10g
    What we have concluded:
Reducing the number of tables to 1 reduces the execution time of the report from 180s to 13s. With 1 table and the sub-report we get 30 seconds.
We have done some database tracing and see that Crystal Reports issues the following query when verifying the database, and it takes longer in 10g than in 9i.
    We have done some profiling in the application code. When we retarget the first table to the target database, it takes 20-30 times longer in 10g than in 9i. Retargeting the other tables takes about twice as long. The export to a PDF file takes about 4-5 times as long in 10g as in 9i.
    Oracle 10g no longer supports the /*+ RULE */ hint.
    Verify DB Query:
select /*+ RULE */ *
from (
  select /*+ RULE */
         null table_qualifier, o1.owner table_owner,
         o1.object_name table_name,
         decode(o1.owner,
                'SYS', decode(o1.object_type, 'TABLE', 'SYSTEM TABLE', 'VIEW', 'SYSTEM VIEW', o1.object_type),
                'SYSTEM', decode(o1.object_type, 'TABLE', 'SYSTEM TABLE', 'VIEW', 'SYSTEM VIEW', o1.object_type),
                o1.object_type) table_type,
         null remarks
  from all_objects o1
  where o1.object_type in ('TABLE', 'VIEW')
  union
  select /*+ RULE */
         null table_qualifier, s.owner table_owner,
         s.synonym_name table_name, 'SYNONYM' table_type, null remarks
  from all_objects o3, all_synonyms s
  where o3.object_type in ('TABLE','VIEW')
  and s.table_owner = o3.owner
  and s.table_name = o3.object_name
  union
  select /*+ RULE */
         null table_qualifier, s1.owner table_owner,
         s1.synonym_name table_name, 'SYNONYM' table_type, null remarks
  from all_synonyms s1
  where s1.db_link is not null
) tables
WHERE 1=1 AND TABLE_NAME='QCTRL_VESSEL' AND table_owner='QLM'
ORDER BY 4, 2, 3
    SQL From Main Report:
    SELECT "QCODE_PRODUCT"."PROD_DESCR", "QCTRL_CONTACT"."CONTACT_FIRST_NM", "QCTRL_CONTACT"."CONTACT_LAST_NM", "QCTRL_MEAS_PT"."MP_NM", "QCTRL_ORG"."ORG_NM", "QCTRL_TKT"."SYS_TKT_NO", "QCTRL_TRK_BOL"."START_DT", "QCTRL_TRK_BOL"."END_DT", "QCTRL_TRK_BOL"."DESTINATION", "QCTRL_TRK_BOL"."LOAD_TEMP", "QCTRL_TRK_BOL"."LOAD_PCT", "QCTRL_TRK_BOL"."WEIGHT_OUT", "QCTRL_TRK_BOL"."WEIGHT_IN", "QCTRL_TRK_BOL"."WEIGHT_OUT_UOM_CD", "QCTRL_TRK_BOL"."WEIGHT_IN_UOM_CD", "QCTRL_TRK_BOL"."VAPOR_PRES", "QCTRL_TRK_BOL"."SPECIFIC_GRAV", "QCTRL_TRK_BOL"."PMO_NO", "QCTRL_TRK_BOL"."ODORIZED_VOL", "QARCH_SEC_USER"."SEC_USER_NM", "QCTRL_TKT"."DEM_CTR_NO", "QCTRL_BA_ENTITY"."BA_NM1", "QCTRL_BA_ENTITY_VW"."BA_NM1", "QCTRL_BA_ENTITY"."BA_ID", "QCTRL_TRK_BOL"."VOLUME", "QCTRL_TRK_BOL"."UOM_CD", "QXREF_BOL_PROD"."MOVEMENT_TYPE_CD", "QXREF_BOL_PROD"."BOL_DESCR", "QCTRL_TKT"."VOL", "QCTRL_TKT"."UOM_CD", "QCTRL_PMO"."LINE_UP_BEFORE", "QCTRL_PMO"."LINE_UP_AFTER", "QCODE_UOM"."UOM_DESCR", "QCTRL_ORG_VW"."ORG_NM"
    FROM (((((((((((("QLM"."QCTRL_TRK_BOL" "QCTRL_TRK_BOL" INNER JOIN "QLM"."QCTRL_PMO" "QCTRL_PMO" ON "QCTRL_TRK_BOL"."PMO_NO"="QCTRL_PMO"."PMO_NO") INNER JOIN "QLM"."QCTRL_MEAS_PT" "QCTRL_MEAS_PT" ON "QCTRL_TRK_BOL"."SUP_MP_ID"="QCTRL_MEAS_PT"."MP_ID") INNER JOIN "QLM"."QCTRL_TKT" "QCTRL_TKT" ON "QCTRL_TRK_BOL"."PMO_NO"="QCTRL_TKT"."PMO_NO") INNER JOIN "QLM"."QCTRL_CONTACT" "QCTRL_CONTACT" ON "QCTRL_TRK_BOL"."DRIVER_CONTACT_ID"="QCTRL_CONTACT"."CONTACT_ID") INNER JOIN "QFC_QLM"."QARCH_SEC_USER" "QARCH_SEC_USER" ON "QCTRL_TRK_BOL"."USER_ID"="QARCH_SEC_USER"."SEC_USER_ID") LEFT OUTER JOIN "QLM"."QCODE_UOM" "QCODE_UOM" ON "QCTRL_TRK_BOL"."ODORIZED_VOL_UOM_CD"="QCODE_UOM"."UOM_CD") INNER JOIN "QLM"."QCTRL_ORG_VW" "QCTRL_ORG_VW" ON "QCTRL_MEAS_PT"."ORG_ID"="QCTRL_ORG_VW"."ORG_ID") INNER JOIN "QLM"."QCTRL_BA_ENTITY" "QCTRL_BA_ENTITY" ON "QCTRL_TKT"."DEM_BA_ID"="QCTRL_BA_ENTITY"."BA_ID") INNER JOIN "QLM"."QCTRL_CTR_HDR" "QCTRL_CTR_HDR" ON "QCTRL_PMO"."DEM_CTR_NO"="QCTRL_CTR_HDR"."CTR_NO") INNER JOIN "QLM"."QCODE_PRODUCT" "QCODE_PRODUCT" ON "QCTRL_PMO"."PROD_CD"="QCODE_PRODUCT"."PROD_CD") INNER JOIN "QLM"."QCTRL_BA_ENTITY_VW" "QCTRL_BA_ENTITY_VW" ON "QCTRL_PMO"."VESSEL_BA_ID"="QCTRL_BA_ENTITY_VW"."BA_ID") LEFT OUTER JOIN "QLM"."QXREF_BOL_PROD" "QXREF_BOL_PROD" ON "QCTRL_PMO"."PROD_CD"="QXREF_BOL_PROD"."PURITY_PROD_CD") INNER JOIN "QLM"."QCTRL_ORG" "QCTRL_ORG" ON "QCTRL_CTR_HDR"."BUSINESS_UNIT_ORG_ID"="QCTRL_ORG"."ORG_ID"
    WHERE "QCTRL_TRK_BOL"."PMO_NO"=12345 AND "QXREF_BOL_PROD"."MOVEMENT_TYPE_CD"='TRK'
    SQL From Sub Report:
    SELECT "QXREF_BOL_VESSEL"."PMO_NO", "QXREF_BOL_VESSEL"."VESSEL_NO"
    FROM "QLM"."QXREF_BOL_VESSEL" "QXREF_BOL_VESSEL"
    WHERE "QXREF_BOL_VESSEL"."PMO_NO"=12345
    Does anyone have any suggestions on how we can improve the report performance with 10g?

    Hi Eric,
    Thanks for your response. The optimizer mode in our 9i database is CHOOSE. We changed the optimizer mode from ALL_ROWS to CHOOSE in 10g but it didn't make a difference.
While researching Metalink I came across a couple of documents that indicated performance problems and issues with certain data-dictionary views in 10g. Apparently, the definitions of ALL_OBJECTS, ALL_ARGUMENTS and ALL_SYNONYMS have changed in 10g, resulting in degraded performance when queries are run against these views. These are the same queries that Crystal Reports is issuing. We'll try the workaround suggested in these documents and see if it resolves the issue.
    Here are the Doc Ids, if you are interested:
    Note 377037.1
Note 364822.1
    Thanks again for your response.
    Venu Boddu.
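(For later readers: a common first step for 10g data-dictionary slowness, worth checking against the notes above, is refreshing the dictionary statistics:)
-- Gather statistics on the dictionary and fixed (X$) objects, which the
-- 10g cost-based optimizer relies on when querying the ALL_* views.
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;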

  • 1.1 performance problems related to system configuration?

    It seems like a lot of people are having serious performance problems with Aperture 1.1 in areas were they didn't have any (or at least not so much) problems in the previous 1.01 release.
Most often these problems show up as slow behaviour of the application when switching views (especially into and out of full view), loading images into the viewer, or doing image adjustments. In most cases Aperture works normally for some time and then starts to slow down gradually, up to a point where images are no longer refreshed correctly or the whole application crashes. Most of the time simply restarting Aperture doesn't help; one has to restart the OS.
    Most of the time the problems occur in conjunction with CPU usage rates which are much higher than in 1.0.1.
    For some people even other applications seem to be affected to a point where the whole system has to be restarted to get everything working up at full speed again. Also shutdown times seem to increase dramatically after such an Aperture slowdown.
    My intention in this thread is to collect information from users who are experiencing such problems about their system configuration. At the moment it does not look like these problems are related to special configurations only, but maybe we can find a common point when we collect as much information as possible about system where Aperture 1.1 shows this behaviour.
Before I continue with my configuration, I would like to point out that this thread is not about general speed issues with Aperture. If you're not able to work smoothly with 16MPix RAW files on G5 systems with Radeon 9650 video cards, or Aperture is generally slow on your 14" iBook where you installed it with a hack, then this is not the right thread. I fully understand if you want to complain about these general speed issues, but please refrain from doing so in this thread.
    Here I only want to collect information from people who either know that some things works considerably faster in the previous release or who notice that Aperture 1.1 really slows down after some time of use.
    Enough said, here is my information:
    - Powermac G5 Dualcore 2.0
    - 2.5 GB RAM
    - Nvidia 7800GT (flashed PC version)
    - System disk: Software RAID0 (2 WD 10000rpm 74GB Raptor drives)
    - Aperture library on a hardware RAID0 (2 Maxtor 160GB drives) connected to Highpoint RocketRAID 2320 PCIe adapter
    - Displays: 17" and 15" TFT
I do not think that we need more information; things like external drives (apart from ones used for the actual library), superdrive types, and connected USB stuff like printers, scanners etc. shouldn't make any difference, so no need to report that. Also, it is self-evident that Mac OS 10.4.6 is used.
Of interest might be any internal cards (PCIe/PCI/PCI-X...) built into your system, like my RAID adapter, Decklink cards (wasn't there a report about problems with them?), any other special video or audio cards, or additional graphics cards.
    Again, please only post here if you're experiencing any of the mentioned problems and please try to keep your information as condensed as possible. This thread is about collecting data, there are already enough other threads where the specific problems (or other general speed issues) are discussed.
    Bye,
    Carsten
    BTW: Within the next week I will perform some tests which will include replacing my 7800GT with the original 6600 and removing as much extra stuff from my system as possible to see if that helps.

Yesterday I had my first decent run in 1.1 and was pleased I avoided a lot of the performance issues that seemed to affect others.
After I posted, I got hit by a big slow-down in system performance. I tried to quit Aperture but couldn't; it had no tasks in its activity window. However, Activity Monitor showed Aperture as a 30-thread, 1.4GB-virtual-memory hairball soaking up 80-90% of my 4 CPUs. Given the high CPU activity I suspected the reason was not my 2GB of RAM, although it's obviously better with more. So what caused the sudden decrease in system performance after 6 hours of relatively trouble-free editing/sorting with 1.1?
This morning I re-created the issue. Before I go further: when I ran 1.1 for the first time I did not migrate my whole library to the new RAW algorithm (it's not called the bleeding edge for nothing). So this morning I selected one project and migrated all its RAW images to 1.1, and after the progress bar completed its work, the CPUs ramped and the system got bogged down again.
So Aperture is doing a background task that consumes large amounts of CPU power, shows nothing in its activity window, and takes a very long time to complete. My project had 89 RAW images migrated to the 1.1 algorithm and it took 4 minutes to complete those 'background processes' (more reconstituting of images?). I'm not sure what it's doing, but it takes a long time and gives no obvious sign that it is normal. If you leave it to complete its work, the system returns to normal. More of an issue is that the system allows you to continue to work as the background processes crank, compounding the heavy workload.
Bit of a guess this, but is this what is causing people's system problems? As I said, if I left my quad alone for 4 minutes all returned to normal. It's just that there's no sign it will ever end, so you do more and compound the slow-down.
In the interests of research I did another project, migrating 245 8MB RAWs to the 1.1 algorithm, and it took 8 minutes. The first 5 minutes consumed 1GB of virtual memory over 20 threads at an average 250% CPU usage for Aperture alone. The last three minutes saw the CPUs ramp higher to 350% and virtual memory increase to 1.2GB. After the 8 minutes all returned to normal and the fans slowed down (excellent fan/noise behaviour on these quads).
Is this what others are seeing?
When you force quit Aperture during these system slow-downs, what effect does this have on your images? Do the uncompleted background processes restart when you go to try and view them?
If I get time I'll try and compare to my MBP.

  • DB Performance problem

    Hi Friends,
We are experiencing performance problems with our Oracle applications/database.
    I run the OEM and I got the following report charts:
    http://farm3.static.flickr.com/2447/3613769336_1b142c9dd.jpg?v=0
    http://farm4.static.flickr.com/3411/3612950303_1f83a9f20.jpg?v=0
Are there any clues that these charts can give regarding the performance problem?
What other charts in OEM can help diagnose the performance problem?
    Thanks a lot in advance

    ytterp2009 wrote:
    Hi Charles,
    This is the output of:
    SELECT
    SUBSTR(NAME,1,30) NAME,
    SUBSTR(VALUE,1,40) VALUE
    FROM
    V$PARAMETER
    ORDER BY
    UPPER(NAME);
    (snip)
Are there parameters that need tuning?
Thanks
Thanks for posting the output of the SQL statement. The output answers several potential questions (note to other readers: shift the values in the SQL statement's output down by one row).
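(The shifted output, incidentally, may just be SQL*Plus line wrapping; something like the following keeps each name and value aligned on one line:)
SET LINESIZE 120 PAGESIZE 1000
COLUMN name FORMAT A30
COLUMN value FORMAT A40
SELECT name, value FROM v$parameter ORDER BY UPPER(name);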
    Parameters which I found to be interesting:
    control_files                 C:\ORACLE\PRODUCT\10.2.0\ORADATA\BQDB1\C
    cpu_count                     2
    db_block_buffers              995,648 = 8,156,348,416 bytes = 7.6 GB
    db_block_size                 8192
    db_cache_advice               on
    db_file_multiblock_read_count 16
    hash_area_size                131,072
    log_buffer                    7,024,640
    open_cursors                  300
    pga_aggregate_target          2.68435E+12 = 2,684,350,000,000 = 2,500 GB
    processes                     950
    sessions                      1,200
    session_cached_cursors        20
    shared_pool_size              570,425,344
    sga_max_size                  8,749,318,144
    sga_target                    0
    sort_area_retained_size       0
    sort_area_size                65536
    use_indirect_data_buffers     TRUE
workarea_size_policy          AUTO
From the above, the server is running on Windows, and based on the value for use_indirect_data_buffers it is running a 32 bit version of Windows using a windowing technique to access memory (database buffer cache only) beyond the 4GB upper limit for 32 bit applications. By default, 32 bit Windows limits each process to a maximum of 2GB of memory utilization. This 2GB limit may be raised to 3GB through a change in the Windows configuration, but a certain amount of the lower 4GB region (specifically in the upper 2GB of that region) must be used for the windowing technique to access the upper memory (the default might be 1GB of memory, but verify with Metalink).
    By default on Windows, each session connecting to the database requires 1MB of server memory for the initial connection (this may be decreased, see Metalink), and with SESSIONS set at 1,200, 1.2GB of the lower 2GB (or 3GB) memory region would be consumed just to let the sessions connect, before any processing is performed by the sessions.
    The shared pool is potentially consuming another 544MB (0.531GB) of the lower 2GB (or 3GB) memory region, and the log buffer is consuming another 6.7MB of memory.
    Just with the combination of the memory required per thread for each session, the memory for the shared pool, and the memory for the log buffer, the server is very close to the 2GB memory limit before the clients have performed any real work.
    Note that the workarea_size_policy is set to AUTO, so as long as that parameter is not adjusted at the session level, the sort_area_size and sort_area_retained_size have no impact. However, the 2,500 GB specification (very likely an error) for the pga_aggregate_target is definitely a problem as the memory must come from the lower 2GB (or 3GB) memory region.
    If I recall correctly, a couple years ago Dell performed testing with 32 bit servers using memory windowing to utilize memory above the 4GB limit. Their tests found that the server must have roughly 12GB of memory to match (or possibly exceed) the performance of a server with just 4GB of memory which was not using memory windowing. Enabling memory windowing and placing the database buffer cache into the memory above the 4GB limit has a negative performance impact - Dell found that once 12GB of memory was available in the server, performance recovered to the point that it was just as good as if the server had only 4GB of memory. You might reconsider whether or not to continue using the memory above the 4GB limit.
    db_file_multiblock_read_count is set to 16 - on Oracle 10.2.0.1 and above this parameter should be left unset, allowing Oracle to automatically configure the parameter (it will likely be set to achieve 1MB multi-block reads with a value of 128).
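(A sketch of that change; verify the scope against your environment before running it:)
-- Remove the explicit setting so Oracle can auto-tune multiblock reads;
-- this takes effect at the next instance restart.
ALTER SYSTEM RESET db_file_multiblock_read_count SCOPE=SPFILE SID='*';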
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • 10.1.3.3 ESB DB Adapter Performance Problem

    Hello,
We are trying to update an Oracle database using the DB Adapter. Insertion into the database via the DB Adapter (and only with the DB Adapter) is slow. Even transferring 50 records of ~1K data takes 5-6 seconds.
    Environment:
    Oracle SOA suite 10.1.3 with 10.1.3.3 Patch Applied
    AIX 5
    8 CPU & 20 GB RAM
    Our test setup.
    Tool:ESB & BPEL
    Inbound Adapter to read data from Oracle Table
    TransformActivity to convert source schema to destination schema
Outbound Adapter to write data into the same Oracle table on the same machine (this has the performance problem).
The ESB Console shows much of the total time being spent in the Outbound Adapter activity.
We also created a BPEL process to do the data transfer between Oracle databases. Adapter statistics for the outbound insert activity in the BPEL console show a higher value under "Commit", listed under "Adapter Post Processing".
If we read data from an Oracle table using the DB adapter and write it to a file using the File adapter, transferring 10,000 records (~2K each) takes 2 seconds. Only writing into the database takes a long time, and we are unsure why. Any help to solve this problem would be appreciated.
We have modified the DB values recommended by the Oracle documentation for performance improvement, and we have done JVM tuning. We tried using "UsesBatchWriting" and "UseDirectSql=true". However, there is no improvement.
We also tried creating an outbound adapter which executes custom SQL that inserts 10,000 records into the destination table (insert into dest_table select * from source_table). There is no performance issue with this approach; the custom SQL executes in less than 2 seconds. We also don't see any performance problem if we use any of the SQL clients to update data in the same destination table. Only via the DB Adapter do we face this issue.
Interestingly, in a different setup, a Windows machine with just 1 CPU and 1GB RAM running 10.1.3 is able to transfer 10,000 records (~2K per record) to a different Oracle database over the network (within the LAN).
Please let me know if you would like to know the setting of any parameter in the system. We would appreciate any help in finding where the bottleneck is.
    Thanks

I'm presuming this is just merge and not insert.
Do alter system set sql_trace=true and capture the trace files on the database. It's probably only waiting on 'SQL*Net message from client', but we need to rule that out.
dmstool should show you some of the activity inside the client; it may also be worth doing a truss on the java process to see what syscalls it is waiting on.
Also, are you up to MLR7, the latest ESB release?
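A sketch of the tracing step (the session id and serial# are placeholders; get the real values from V$SESSION):
-- System-wide, as suggested above (remember to turn it off afterwards):
ALTER SYSTEM SET sql_trace = TRUE;
-- Or trace only the adapter's session, including wait events:
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 45, waits => TRUE, binds => FALSE);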

  • Large DB Performance problems when updating schemas

    Hi,
I'm facing performance problems updating the schema of large databases in SQL Server 2008 R2 and I can't find a proper solution. I would have thought this is a fairly common problem, so here goes.
    I have a database which is about 700Gb in size. I am going to detail 2 different issues:
    EXAMPLE 1: CHANGING A FIELD TYPE
In that database I have a table with the schema detailed below as Table_TB. This table contains several million records. As you can see, there is a column of type TEXT (Comment_FD). What I am trying to do is change the column type to NVARCHAR(MAX) to remove the deprecated TEXT type and support UNICODE characters in that field.
    The command I am running is the following one:
    ALTER TABLE DBA.Table_TB ALTER COLUMN Comment_FD NVARCHAR(MAX)
    This operation takes several hours to complete.
    I tried the following:
    -Adding the new column to the same table and copying the data over
    -Creating a new entire table with the new field type modified and copy the data over
-Exporting the contents of the table to disk, truncating the table, and reimporting the data, both with Management Studio and the bcp tool.
-When copying to a new table I also tried removing the PK of the table to skip the overhead of creating an index.
-I also tried using a simple stored procedure to copy the data over (to the column in the same table and to the other table) in batches; a sketch of that approach follows below.
In ALL my tests I set the SQL Server recovery model to SIMPLE to skip as much log-generation overhead as I can.
No matter what I do, the time this operation takes makes it unusable.
Please note that I am reducing this problem to its minimum expression. I can't say precisely how long the operation takes, but a script containing 5 field type changes identical to that one, in that table and two others, takes 7 days to complete!!! The REAL problem is that I have several other fields whose types I need to change, and they currently amount to a total running time of 14 days! And this is just changing a handful of fields in a handful of tables. At some point every string in the system will need to be migrated to get unicode support, making this completely impracticable.
    **Based on smaller DBs in the same system I guesstimate this table will contain about 14M records and will be about 44Gb in size. 
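    For reference, a minimal sketch of the batched copy mentioned above, assuming a hypothetical new column Comment_New_FD has already been added as NVARCHAR(MAX) NULL; the batch size is illustrative. Note that this still rewrites every row, so it spreads the I/O out rather than removing it:
    -- Assumes: ALTER TABLE DBA.Table_TB ADD Comment_New_FD NVARCHAR(MAX) NULL;
    DECLARE @rows INT = 1;
    WHILE @rows > 0
    BEGIN
        UPDATE TOP (50000) DBA.Table_TB
        SET Comment_New_FD = CONVERT(NVARCHAR(MAX), Comment_FD)
        WHERE Comment_New_FD IS NULL
          AND Comment_FD IS NOT NULL;
        SET @rows = @@ROWCOUNT;
        CHECKPOINT; -- in SIMPLE recovery this lets log space be reused between batches
    END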
    EXAMPLE 2: ADDING COLUMNS
    I have another table in the same DB, with a schema detailed below as Table2_TB, and again several million records in it. I am trying to add a column using the following SQL:
    ALTER TABLE DBA.Table2_TB ADD strFiDatasourceName_FD VARCHAR(64) NOT NULL DEFAULT ''
    This operation takes a bit more than 7 hours to complete.
    **Based on smaller DBs in the same system, I guesstimate this table will contain about 54M records and will be about 98 GB in size.
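    For completeness, the "split" variant that comes up later in this thread (add a NULLable column, backfill in batches, then switch the constraint) would look roughly like the sketch below; per the conclusions at the end of the thread, Microsoft advised that such variants performed worse here than the plain ALTER, so this is only for illustration (the constraint name is hypothetical):
    -- Adding a NULLable column with no default is a metadata-only change:
    ALTER TABLE DBA.Table2_TB ADD strFiDatasourceName_FD VARCHAR(64) NULL;
    -- Backfill in batches (see the batched UPDATE sketch under EXAMPLE 1), then:
    ALTER TABLE DBA.Table2_TB ALTER COLUMN strFiDatasourceName_FD VARCHAR(64) NOT NULL;
    ALTER TABLE DBA.Table2_TB ADD CONSTRAINT DF_Table2_TB_strFiDatasourceName
        DEFAULT ('') FOR strFiDatasourceName_FD;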
    QUESTIONS:
    ---->Am I doing something wrong, or is there any way to optimize either the SQL or the SQLServer configuration to speed this up?
    ---->Are these performance levels normal at all when dealing with databases of this size? 
    ---->Anyone out there with experience on DBs of this size?
    ---->Does Microsoft offer some kind of service (cloud?) to make structural changes in large DBs?
    Thanks a lot for your help in advance!
    This is the schema for the first table:
    CREATE TABLE [DBA].[Table_TB](
    [BranchCode_FD] [char](2) NOT NULL,
    [FolderNo_FD] [int] NOT NULL,
    [DateTime_FD] [datetime] NULL,
    [Staff_FD] [char](3) NULL,
    [Action_FD] [varchar](25) NULL,
    [Comment_FD] [text] NULL,
    [Team_FD] [varchar](8) NULL,
    [Group_FD] [varchar](8) NULL,
    [PopUpDate_FD] [smalldatetime] NULL,
    [nRecordID_FD] [smallint] NOT NULL,
    [nFiFoldItemID_FD] [smallint] NOT NULL,
    PRIMARY KEY CLUSTERED
    (
    [BranchCode_FD] ASC,
    [FolderNo_FD] ASC,
    [nRecordID_FD] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100) ON [PRIMARY]
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [DBA].[Table_TB] ADD CONSTRAINT [DF__Table_TB__DateTime] DEFAULT ('1980-01-01') FOR [DateTime_FD]
    GO
    ALTER TABLE [DBA].[Table_TB] ADD CONSTRAINT [DF__Table_TB__PopUp__096A45D7] DEFAULT ('1980-01-01') FOR [PopUpDate_FD]
    GO
    ALTER TABLE [DBA].[Table_TB] ADD DEFAULT ((-1)) FOR [nFiFoldItemID_FD]
    GO
    This is the schema for the second table:
    CREATE TABLE [DBA].[Table2_TB](
    [strBBranchCode_FD] [char](2) NOT NULL,
    [lFFoldNo_FD] [int] NOT NULL,
    [nFiFoldItemID_FD] [smallint] NOT NULL,
    [strFiType_FD] [char](3) NOT NULL,
    [dtFiCreateDate_FD] [smalldatetime] NOT NULL,
    [strFiBookingRef_FD] [varchar](32) NOT NULL,
    [strFiBookingRefDayMonth_FD] [varchar](5) NOT NULL,
    [strFiBookedVia_FD] [varchar](50) NOT NULL,
    [bFiInterfaced_FD] [smallint] NOT NULL,
    [nFiSortOrder_FD] [smallint] NOT NULL,
    [strFiCreateStaffCode_FD] [char](3) NOT NULL,
    [strPcProductCode_FD] [varchar](5) NOT NULL,
    [dtFiStartDateTime_FD] [smalldatetime] NOT NULL,
    [lFiFinanVendID_FD] [int] NOT NULL,
    [lFiItinVendID_FD] [int] NOT NULL,
    [dtFiVendBalDueDate_FD] [smalldatetime] NOT NULL,
    [dtFiVendDepositDueDate_FD] [smalldatetime] NOT NULL,
    [strFiStatus_FD] [varchar](2) NOT NULL,
    [bFiTransFeeHasBeenApplied_FD] [smallint] NOT NULL,
    [bFiATOLTypeMan_FD] [smallint] NOT NULL,
    [nFiATOLType_FD] [smallint] NOT NULL,
    [strCcClassCode_FD] [varchar](10) NOT NULL,
    [strFiStartPointCode_FD] [varchar](5) NOT NULL,
    [strFiEndPointCode_FD] [varchar](5) NOT NULL,
    [strFiAirlineCode_FD] [varchar](3) NOT NULL,
    [strFiVendDocNo_FD] [varchar](16) NOT NULL,
    [dtFiIssueDate_FD] [smalldatetime] NOT NULL,
    [strFiDiscReasonCode_FD] [varchar](3) NOT NULL,
    [nFiLastFoldPricingID_FD] [smallint] NOT NULL,
    [strFiPrintingNote_FD] [text] NOT NULL,
    [strFiNonPrintingNote_FD] [text] NOT NULL,
    [dtFiStatusExpiryDate_FD] [smalldatetime] NOT NULL,
    [strFiClientFreqTravellerNo_FD] [varchar](20) NOT NULL,
    [strFiRouteNo_FD] [varchar](5) NOT NULL,
    [nFiNumBum_FD] [smallint] NOT NULL,
    [dtFiEndDateTime_FD] [smalldatetime] NOT NULL,
    [strFiFareBase_FD] [varchar](15) NOT NULL,
    [strFiInterfaceItemID_FD] [varchar](15) NOT NULL,
    [strFiEndPointLoc_FD] [varchar](255) NULL,
    [strFiStartPointLoc_FD] [varchar](255) NULL,
    [strMcMealCode_FD] [varchar](5) NOT NULL,
    [strFiSeatNote_FD] [text] NOT NULL,
    [strFiMealNote_FD] [text] NOT NULL,
    [strFiAirCraftType_FD] [varchar](5) NOT NULL,
    [strFiJourneyTime_FD] [varchar](8) NOT NULL,
    [strFiCheckInMins_FD] [varchar](10) NOT NULL,
    [lFiJourneyDist_FD] [int] NOT NULL,
    [nFiNumStop_FD] [smallint] NOT NULL,
    [strFiBaggageAllow_FD] [varchar](15) NOT NULL,
    [strFiIssueStaffCode_FD] [varchar](20) NOT NULL,
    [dtFiDispatchDate_FD] [smalldatetime] NOT NULL,
    [strFiDispatchStaffCode_FD] [varchar](3) NOT NULL,
    [strDmDispatchCode_FD] [varchar](2) NOT NULL,
    [lfFiVendDepositDueAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiRateCode_FD] [varchar](50) NOT NULL,
    [strFiRatePlan_FD] [varchar](2) NOT NULL,
    [strFiCabinNo_FD] [varchar](8) NOT NULL,
    [strFiMileage_FD] [varchar](10) NOT NULL,
    [strFiStartPointLocTelNo_FD] [varchar](40) NULL,
    [strFiBookingGuarantee_FD] [varchar](60) NOT NULL,
    [strFiSpecialRemarks_FD] [varchar](250) NULL,
    [strFiCxnCondition_FD] [varchar](100) NOT NULL,
    [bFiFlyDrive_FD] [smallint] NOT NULL,
    [strFiConfNo_FD] [varchar](32) NOT NULL,
    [lFiLinkID_FD] [int] NOT NULL,
    [strFiCategory_FD] [varchar](15) NOT NULL,
    [nFiNumRoom_FD] [smallint] NOT NULL,
    [nFiNumDay_FD] [smallint] NOT NULL,
    [nFiSaleFoldItemID_FD] [smallint] NOT NULL,
    [bFiRefundItem_FD] [smallint] NOT NULL,
    [strFiFareSavingCode_FD] [varchar](2) NOT NULL,
    [lfFiFareSavingAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiRateNote_FD] [text] NOT NULL,
    [strFiDiscCode_FD] [varchar](20) NOT NULL,
    [strFiReqDispatchMethodCode_FD] [varchar](2) NOT NULL,
    [dtFiReqDispatchDateTime_FD] [smalldatetime] NOT NULL,
    [nFiReqDispatchVoucherType_FD] [smallint] NOT NULL,
    [nFiLastFoldItemDetailID_FD] [smallint] NOT NULL,
    [nFiNumConjunction_FD] [smallint] NOT NULL,
    [lfFiBSPFrgnBaseFareAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiBSPBaseFareAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiBSPTaxDiscrepancy_FD] [decimal](17, 2) NOT NULL,
    [lfFiBSPPenaltyFeeAmt_FD] [decimal](17, 2) NOT NULL,
    [nFiRegion_FD] [smallint] NOT NULL,
    [strFiOpenTktNo_FD] [varchar](16) NOT NULL,
    [strFiTktSource_FD] [varchar](3) NOT NULL,
    [strFiJourneyType_FD] [varchar](3) NOT NULL,
    [strFiTktType_FD] [varchar](3) NOT NULL,
    [strFiInterfaceNameRemark_FD] [varchar](50) NULL,
    [nFiATOLIssuedStatus_FD] [smallint] NOT NULL,
    [strFiFareSavingFareBase_FD] [varchar](13) NOT NULL,
    [strFiPaxType_FD] [varchar](3) NOT NULL,
    [strFiActualCarrier_FD] [varchar](2) NOT NULL,
    [strFiNetRemitType_FD] [varchar](1) NOT NULL,
    [strFiFareConstruction_FD] [text] NOT NULL,
    [lfFiBSPTotVATAmt_FD] [decimal](17, 2) NOT NULL,
    [bFiNetFare_FD] [smallint] NOT NULL,
    [strFiTourCode_FD] [varchar](50) NOT NULL,
    [strFiSuppFOPInfo_FD] [varchar](255) NOT NULL,
    [lfFitBSPPublishedCommPerc_FD] [decimal](12, 6) NOT NULL,
    [strFiBSPFareCurrCode_FD] [varchar](3) NOT NULL,
    [strFiTktIssueIataNo_FD] [varchar](8) NOT NULL,
    [lfFiFareOfferedSavingAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiFareOfferedSavingCode_FD] [varchar](2) NOT NULL,
    [strFiDesc_FD] [varchar](8000) NOT NULL,
    [lfFiBSPFareBuyDiscrepancyAmt_FD] [decimal](17, 2) NOT NULL,
    [bFiNoPrintOnItin_FD] [smallint] NOT NULL,
    [dtFiStatusCodeChangeDateTime_FD] [smalldatetime] NOT NULL,
    [bFiNoPrintIfAllPricingsZeroCustAmt_FD] [smallint] NOT NULL,
    [strFiFareSavingFareBaseLow_FD] [varchar](13) NOT NULL,
    [lfFiFareSavingLowAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiBrochureCode_FD] [varchar](8) NOT NULL,
    [bFiVendPayDepositNow_FD] [smallint] NOT NULL,
    [bFiVendPayBalanceNow_FD] [smallint] NOT NULL,
    [strFiBankBranchCode_FD] [varchar](2) NOT NULL,
    [strFiOperatingAirlineCode_FD] [varchar](2) NOT NULL,
    [strFiFarePassengerTypeCode_FD] [varchar](3) NOT NULL,
    [strFiAssociatedFarePricingInfoID_FD] [varchar](15) NOT NULL,
    [bFiVerificationReq_FD] [smallint] NOT NULL,
    [dtFiLastVerifiedDateTime_FD] [datetime] NOT NULL,
    [lFiLastVerifiedLevel_FD] [int] NOT NULL,
    [lFiLastVerifiedWithCount_FD] [int] NOT NULL,
    [dtFiTktingStatusChangeDateTime_FD] [smalldatetime] NOT NULL,
    [strFiTktingInformation_FD] [text] NOT NULL,
    [strFiTktingStatus_FD] [varchar](2) NOT NULL,
    [lfFiBSPFareSellDiscrepancyAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiBSPTaxBuyDiscrepancyAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiTktingBatchID_FD] [varchar](15) NOT NULL,
    [dtFiTktingBatchDateTime_FD] [smalldatetime] NOT NULL,
    [lIplPolicyLevelID_FD] [int] NOT NULL,
    [strFiTktingDescription_FD] [varchar](1000) NOT NULL,
    [strFiTktingDataVersion_FD] [varchar](10) NOT NULL,
    [strFiSourceTktingSystem_FD] [varchar](20) NOT NULL,
    [strFiOthPtsPmtCode_FD] [varchar](3) NOT NULL,
    [bFiManualPtsEntry_FD] [smallint] NOT NULL,
    [strFiEndorsement_FD] [varchar](500) NOT NULL,
    [strFiOwnedByStaffCode_FD] [char](3) NOT NULL,
    [strFiThirdPartyTrackingID_FD] [varchar](25) NOT NULL,
    [strFiAdditionalPrintingNote_FD] [text] NULL,
    [bFiOverridePrintingNote_FD] [smallint] NOT NULL,
    [lfFiCarbonOffsetWeightAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiCancellationPolicyNote_FD] [text] NULL,
    [bFiPEProcessed_FD] [smallint] NOT NULL,
    [bFiActingAsAgentFor_FD] [smallint] NOT NULL,
    [nFiOriginalBuyingBasis_FD] [smallint] NOT NULL,
    [bFiIsOpenSegment_FD] [smallint] NOT NULL,
    [nFiCreateSource_FD] [smallint] NOT NULL,
    [strFiBookingSourceInvoiceNo_FD] [varchar](7) NOT NULL,
    [strFiGDSPaxTypeCode_FD] [varchar](8) NOT NULL,
    [strFiNetFareGDSAccountCode_FD] [varchar](8) NOT NULL,
    [strFiPOSID_FD] [varchar](50) NOT NULL,
    [bFiIsPOSEditable_FD] [smallint] NOT NULL,
    [lfFiCustExchRate_FD] [decimal](16, 8) NOT NULL,
    [lfFiCustFareSavingLowAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiCustFareSavingAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiCustFareOfferedSavingAmt_FD] [decimal](17, 2) NOT NULL,
    [bFiCCItemPayableToBranch_FD] [smallint] NOT NULL,
    [dtFiExternalAccountingDate_FD] [smalldatetime] NOT NULL,
    [strFiSourceSystemBookResponseText_FD] [text] NOT NULL,
    [bFiIsConnection_FD] [smallint] NOT NULL,
    [bFiIsPEFoldLevelItem_FD] [smallint] NOT NULL,
    [nFiReqdEndorsedConjTktType_FD] [smallint] NOT NULL,
    [bFiEndorsedConjTktDetailIsManual_FD] [smallint] NOT NULL,
    [strFiReqdEndorsedConjTktDetailText_FD] [varchar](30) NOT NULL,
    [lFiLastTktingTemplateID_FD] [int] NOT NULL,
    [dtFiBookedDate_FD] [smalldatetime] NOT NULL,
    [strFiTktingVerificationWarning_FD] [varchar](1000) NOT NULL,
    [strFiTktingVerificationError_FD] [varchar](1000) NOT NULL,
    [strFiTktingError_FD] [varchar](1000) NOT NULL,
    [strFiCustomerAccountingData00_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData01_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData02_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData03_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData04_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData05_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData06_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData07_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData08_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData09_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingDataNote_FD] [varchar](100) NOT NULL,
    [strFiCustomerAccountingData10_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData11_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData12_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData13_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData14_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData15_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData16_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData17_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData18_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData19_FD] [varchar](50) NOT NULL,
    [nFiFlightBasis_FD] [smallint] NOT NULL,
    [lFiStartPointVendID_FD] [int] NOT NULL,
    [lFiEndPointVendID_FD] [int] NOT NULL,
    [lCtID_FD] [int] NOT NULL,
    [strFiContractCode_FD] [varchar](25) NOT NULL,
    [strFiContractPeriodCode_FD] [varchar](50) NOT NULL,
    PRIMARY KEY CLUSTERED
    (
    [strBBranchCode_FD] ASC,
    [lFFoldNo_FD] ASC,
    [nFiFoldItemID_FD] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100) ON [PRIMARY]
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
    GO
    ALTER TABLE [DBA].[Table2_TB] WITH NOCHECK ADD FOREIGN KEY([nFiReqDispatchVoucherType_FD])
    REFERENCES [DBA].[VoucherTypes_TB] ([nVtCode_FD])
    GO
    ALTER TABLE [DBA].[Table2_TB] WITH NOCHECK ADD FOREIGN KEY([strPcProductCode_FD], [strFiType_FD])
    REFERENCES [DBA].[ProductCodes_TB] ([ProductCode_FD], [Type_FD])
    GO
    ALTER TABLE [DBA].[Table2_TB] WITH NOCHECK ADD FOREIGN KEY([strBBranchCode_FD], [lFFoldNo_FD])
    REFERENCES [DBA].[Folder_TB] ([BranchCode_FD], [FolderNo_FD])
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strBB__44AB0736] DEFAULT ('') FOR [strBBranchCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFFol__459F2B6F] DEFAULT (0) FOR [lFFoldNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiFo__46934FA8] DEFAULT ((-1)) FOR [nFiFoldItemID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__478773E1] DEFAULT ('') FOR [strFiType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiC__487B981A] DEFAULT ('1980-01-01') FOR [dtFiCreateDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strFiBookingRef] DEFAULT ('') FOR [strFiBookingRef_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__4A63E08C] DEFAULT ('') FOR [strFiBookingRefDayMonth_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strFiBookedVia_FD] DEFAULT ('') FOR [strFiBookedVia_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiIn__4C4C28FE] DEFAULT (0) FOR [bFiInterfaced_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiSo__4D404D37] DEFAULT ((-1)) FOR [nFiSortOrder_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__4E347170] DEFAULT ('') FOR [strFiCreateStaffCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strPcProductCode] DEFAULT ('') FOR [strPcProductCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiS__501CB9E2] DEFAULT ('1980-01-01') FOR [dtFiStartDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFiFi__5110DE1B] DEFAULT ((-1)) FOR [lFiFinanVendID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFiIt__52050254] DEFAULT ((-1)) FOR [lFiItinVendID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiV__52F9268D] DEFAULT ('1980-01-01') FOR [dtFiVendBalDueDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiV__53ED4AC6] DEFAULT ('1980-01-01') FOR [dtFiVendDepositDueDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__54E16EFF] DEFAULT ('') FOR [strFiStatus_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiTr__55D59338] DEFAULT (0) FOR [bFiTransFeeHasBeenApplied_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiAT__56C9B771] DEFAULT (0) FOR [bFiATOLTypeMan_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiAT__57BDDBAA] DEFAULT (0) FOR [nFiATOLType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strCc__58B1FFE3] DEFAULT ('') FOR [strCcClassCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strFiStartPointCode] DEFAULT ('') FOR [strFiStartPointCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strFiEndPointCode] DEFAULT ('') FOR [strFiEndPointCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__5B8E6C8E] DEFAULT ('') FOR [strFiAirlineCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__5C8290C7] DEFAULT ('') FOR [strFiVendDocNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiI__5D76B500] DEFAULT ('1980-01-01') FOR [dtFiIssueDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__5E6AD939] DEFAULT ('') FOR [strFiDiscReasonCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiLa__5F5EFD72] DEFAULT ((-1)) FOR [nFiLastFoldPricingID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__605321AB] DEFAULT ('') FOR [strFiPrintingNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__614745E4] DEFAULT ('') FOR [strFiNonPrintingNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiS__623B6A1D] DEFAULT ('1980-01-01') FOR [dtFiStatusExpiryDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__632F8E56] DEFAULT ('') FOR [strFiClientFreqTravellerNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6423B28F] DEFAULT ('') FOR [strFiRouteNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__nFiNumBum] DEFAULT ((-1)) FOR [nFiNumBum_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiE__660BFB01] DEFAULT ('1980-01-01') FOR [dtFiEndDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__67001F3A] DEFAULT ('') FOR [strFiFareBase_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__67F44373] DEFAULT ('') FOR [strFiInterfaceItemID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__68E867AC] DEFAULT ('') FOR [strFiEndPointLoc_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__69DC8BE5] DEFAULT ('') FOR [strFiStartPointLoc_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strMc__6AD0B01E] DEFAULT ('') FOR [strMcMealCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6BC4D457] DEFAULT ('') FOR [strFiSeatNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6CB8F890] DEFAULT ('') FOR [strFiMealNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6DAD1CC9] DEFAULT ('') FOR [strFiAirCraftType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6EA14102] DEFAULT ('') FOR [strFiJourneyTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6F95653B] DEFAULT ('') FOR [strFiCheckInMins_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFiJo__70898974] DEFAULT (0) FOR [lFiJourneyDist_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiNu__717DADAD] DEFAULT (0) FOR [nFiNumStop_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7271D1E6] DEFAULT ('') FOR [strFiBaggageAllow_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7365F61F] DEFAULT ('') FOR [strFiIssueStaffCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiD__745A1A58] DEFAULT ('1980-01-01') FOR [dtFiDispatchDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__754E3E91] DEFAULT ('') FOR [strFiDispatchStaffCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strDm__764262CA] DEFAULT ('') FOR [strDmDispatchCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiV__77368703] DEFAULT (0) FOR [lfFiVendDepositDueAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__782AAB3C] DEFAULT ('') FOR [strFiRateCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__791ECF75] DEFAULT ('') FOR [strFiRatePlan_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7A12F3AE] DEFAULT ('') FOR [strFiCabinNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7B0717E7] DEFAULT ('') FOR [strFiMileage_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7BFB3C20] DEFAULT ('') FOR [strFiStartPointLocTelNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7CEF6059] DEFAULT ('') FOR [strFiBookingGuarantee_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7DE38492] DEFAULT ('') FOR [strFiSpecialRemarks_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7ED7A8CB] DEFAULT ('') FOR [strFiCxnCondition_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiFl__7FCBCD04] DEFAULT (0) FOR [bFiFlyDrive_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__00BFF13D] DEFAULT ('') FOR [strFiConfNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFiLi__01B41576] DEFAULT ((-1)) FOR [lFiLinkID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__02A839AF] DEFAULT ('') FOR [strFiCategory_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiNu__039C5DE8] DEFAULT (0) FOR [nFiNumRoom_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiNu__04908221] DEFAULT (0) FOR [nFiNumDay_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiSa__0584A65A] DEFAULT ((-1)) FOR [nFiSaleFoldItemID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiRe__0678CA93] DEFAULT (0) FOR [bFiRefundItem_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__076CEECC] DEFAULT ('') FOR [strFiFareSavingCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiF__08611305] DEFAULT (0) FOR [lfFiFareSavingAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__0955373E] DEFAULT ('') FOR [strFiRateNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__0A495B77] DEFAULT ('') FOR [strFiDiscCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__0B3D7FB0] DEFAULT ('') FOR [strFiReqDispatchMethodCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiR__0C31A3E9] DEFAULT ('1980-01-01') FOR [dtFiReqDispatchDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiRe__0D25C822] DEFAULT (0) FOR [nFiReqDispatchVoucherType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiLa__0F0E1094] DEFAULT ((-1)) FOR [nFiLastFoldItemDetailID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiNu__100234CD] DEFAULT (0) FOR [nFiNumConjunction_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__10F65906] DEFAULT (0) FOR [lfFiBSPFrgnBaseFareAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__11EA7D3F] DEFAULT (0) FOR [lfFiBSPBaseFareAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__12DEA178] DEFAULT (0) FOR [lfFiBSPTaxDiscrepancy_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__13D2C5B1] DEFAULT (0) FOR [lfFiBSPPenaltyFeeAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiDo__14C6E9EA] DEFAULT (0) FOR [nFiRegion_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__15BB0E23] DEFAULT ('') FOR [strFiOpenTktNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__16AF325C] DEFAULT ('') FOR [strFiTktSource_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__17A35695] DEFAULT ('') FOR [strFiJourneyType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__18977ACE] DEFAULT ('') FOR [strFiTktType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__198B9F07] DEFAULT ('') FOR [strFiInterfaceNameRemark_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiAT__1A7FC340] DEFAULT ((-1)) FOR [nFiATOLIssuedStatus_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1B73E779] DEFAULT ('') FOR [strFiFareSavingFareBase_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1C680BB2] DEFAULT ('') FOR [strFiPaxType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1D5C2FEB] DEFAULT ('') FOR [strFiActualCarrier_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1E505424] DEFAULT ('') FOR [strFiNetRemitType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1F44785D] DEFAULT ('') FOR [strFiFareConstruction_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__20389C96] DEFAULT (0) FOR [lfFiBSPTotVATAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiNe__212CC0CF] DEFAULT (0) FOR [bFiNetFare_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__2220E508] DEFAULT ('') FOR [strFiTourCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__23150941] DEFAULT ('') FOR [strFiSuppFOPInfo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFit__24092D7A] DEFAULT (0) FOR [lfFitBSPPublishedCommPerc_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__24FD51B3] DEFAULT ('') FOR [strFiBSPFareCurrCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__25F175EC] DEFAULT ('') FOR [strFiTktIssueIataNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiF__27D9BE5E] DEFAULT (0) FOR [lfFiFareOfferedSavingAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__251D4D44] DEFAULT ('') FOR [strFiFareOfferedSavingCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiDesc_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiBSPFareBuyDiscrepancyAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiNoPrintOnItin_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-1-1') FOR [dtFiStatusCodeChangeDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiNoPrintIfAllPricingsZeroCustAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiFareSavingFareBaseLow_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiFareSavingLowAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiBrochureCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiVendPayDepositNow_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiVendPayBalanceNow_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiBankBranchCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiOperatingAirlineCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiFarePassengerTypeCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiAssociatedFarePricingInfoID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiVerificationReq_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiLastVerifiedDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lFiLastVerifiedLevel_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lFiLastVerifiedWithCount_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiTktingStatusChangeDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingInformation_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingStatus_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiBSPFareSellDiscrepancyAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiBSPTaxBuyDiscrepancyAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingBatchID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiTktingBatchDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lIplPolicyLevelID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingDescription_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingDataVersion_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiSourceTktingSystem_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiOthPtsPmtCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiManualPtsEntry_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiEndorsement_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiOwnedByStaffCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiThirdPartyTrackingID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiAdditionalPrintingNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiOverridePrintingNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiCarbonOffsetWeightAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCancellationPolicyNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiPEProcessed_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiActingAsAgentFor_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [nFiOriginalBuyingBasis_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiIsOpenSegment_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [nFiCreateSource_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (' ') FOR [strFiBookingSourceInvoiceNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiGDSPaxTypeCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiNetFareGDSAccountCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiPOSID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiIsPOSEditable_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [lfFiCustExchRate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [lfFiCustFareSavingLowAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [lfFiCustFareSavingAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [lfFiCustFareOfferedSavingAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [bFiCCItemPayableToBranch_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiExternalAccountingDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiSourceSystemBookResponseText_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [bFiIsConnection_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [bFiIsPEFoldLevelItem_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [nFiReqdEndorsedConjTktType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [bFiEndorsedConjTktDetailIsManual_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiReqdEndorsedConjTktDetailText_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lFiLastTktingTemplateID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiBookedDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingVerificationWarning_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingVerificationError_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingError_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData00_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData01_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData02_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData03_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData04_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData05_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData06_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData07_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData08_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData09_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingDataNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData10_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData11_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData12_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData13_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData14_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData15_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData16_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData17_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData18_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData19_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [nFiFlightBasis_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lFiStartPointVendID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lFiEndPointVendID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lCtID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiContractCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiContractPeriodCode_FD]
    GO

    Hi,
    I just wanted to summarize the conclusions we got here, in case it helps someone else.
    For the case of the ALTER statement:
    First we analyzed the query performance and found that the query was I/O bound. A handful of useful scripts can be found at the links below. Some high values for PAGEIOLATCH wait times suggested memory pressure, so we increased the amount of memory dedicated to the server up to 32 GB. This single change was one of the most effective ones and reduced the query execution time by about 40%-50%. I guess SQL Server needs to do less paging to perform the operation when it can keep more pages in memory at the same time.
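    For anyone repeating the analysis, the wait picture comes straight from the standard wait-stats DMV, roughly along the lines of the sqlskills script linked below:
    -- Top I/O-related waits since the last stats clear; high PAGEIOLATCH_* times
    -- point at the buffer pool waiting on disk reads.
    SELECT TOP (10)
           wait_type,
           wait_time_ms,
           waiting_tasks_count,
           wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type LIKE 'PAGEIOLATCH%'
       OR wait_type IN ('WRITELOG', 'IO_COMPLETION')
    ORDER BY wait_time_ms DESC;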
    We ran more tests tweaking some of the other variables mentioned, like the server MAXDOP; in fact, the higher we set it, the slower the query ran. The initial server config used auto CPU affinity but had the I/O affinity mask set to the first 4 CPUs, and we found that setting everything to AUTO performed faster for some reason.
    After some analysis by Microsoft of the diagnostics/metrics data, the only interesting finding was that a couple of the storage volumes were performing slightly slower than the rest, causing a bit of a bottleneck. We recommended that the customer look into it with their storage team, but even with those fixed, we wouldn't expect the query to run much faster.
    No other tweaks have been found useful to speed up the ALTER statement. Basically it comes down to how fast your I/O subsystem is and how much SQL Server can cache in memory. No other suggestion has been made by Microsoft, and they've advised that any other tweaked variant (splitting the column addition and the constraints, for example) is going to perform worse than the plain and simple ALTER statement.
    On the NVARCHAR type problem:
    As suggested by Erland Sommarskog, SQL Server 2014 Enterprise edition performed this operation about 40% faster than SQL Server 2008 R2 with the same hardware specs.
    At the moment upgrading the customer's infrastructure is not an option for us, so we don't have a proper solution to accomplish this in a workable time frame on SQL Server 2008.
    The strategy that we found might be the best option is the one suggested by E. Sommarskog: bind our code's read access to a COALESCE'd computed column while writing to the new, converted NVARCHAR column; schedule a batch job in the background to migrate all the data to the new column over time; and finally remove the old column.
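    A rough sketch of that strategy, with illustrative column names (and assuming the TEXT column can be referenced inside the computed-column expression on 2008 R2; if it cannot, the same COALESCE can be exposed through a view instead):
    -- New target column plus a coalesced column for readers to bind to:
    ALTER TABLE DBA.Table_TB ADD Comment_New_FD NVARCHAR(MAX) NULL;
    ALTER TABLE DBA.Table_TB ADD Comment_Read_FD AS
        COALESCE(Comment_New_FD, CONVERT(NVARCHAR(MAX), Comment_FD));
    -- Writers target Comment_New_FD; a background batch job migrates existing rows
    -- over time (see the batched UPDATE sketch under EXAMPLE 1); once it completes,
    -- drop Comment_FD and rename Comment_New_FD into place.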
    Thanks a lot everyone for your help.
    David.
    Useful links:
    http://www.sqlskills.com/blogs/paul/wait-statistics-or-please-tell-me-where-it-hurts/
    http://msdn.microsoft.com/en-us/library/ms189768.aspx
    http://rusanu.com/2014/02/24/how-to-analyse-sql-server-performance/

  • Critical performance problem upon bulk load of groups

    All (including product development),
    I think there are missing indexes on wwsec_flat$ and wwsec_sys_priv$. Anyway, I'd like assistance on fixing the critical performance problems I see, properly. Read on...
    During and after a bulk load of a few (about 500) users and groups from an external database, it becomes evident that there's a performance problem somewhere. Many of the calls to wwsec_api.addGroupToList took several minutes to finish. Afterwards the machine went to 100% CPU just from logging in with the portal30 user (which happens to be the group owner for all the groups).
    Running SQL trace points in the direction of the following SQL statement:
    SELECT ID,PARENT_ID,NAME,TITLE_ID,TITLEIMAGE_ID,ROLLOVERIMAGE_ID,
    DESCRIPTION_ID,LAYOUT_ID,STYLE_ID,PAGE_TYPE,CREATED_BY,CREATED_ON,
    LAST_MODIFIED_BY,LAST_MODIFIED_ON,PUBLISHED_ON,HAS_BANNER,HAS_FOOTER,
    EXPOSURE,SHOW_CHILDREN,IS_PUBLIC,INHERIT_PRIV,IS_READY,EXECUTE_MODE,
    CACHE_MODE,CACHE_EXPIRES,TEMPLATE FROM
    WWPOB_PAGE$ WHERE ID = :b1
    I checked the existing indexes, and see that the following ones are missing (I'm about to test with these, but have not yet done so):
    CREATE UNIQUE INDEX "PORTAL30"."WWSEC_FLAT_IX_GROUP_ID"
    ON "PORTAL30"."WWSEC_FLAT$"("GROUP_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 160K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING
    CREATE UNIQUE INDEX "PORTAL30"."WWSEC_FLAT_IX_PERSON_ID"
    ON "PORTAL30"."WWSEC_FLAT$"("PERSON_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 160K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING
    CREATE UNIQUE INDEX "PORTAL30"."WWSEC_SYS_PRIV_IX_PATCH1"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("OWNER", "GRANTEE_GROUP_ID",
    "GRANTEE_TYPE", "OWNER", "NAME", "OBJECT_TYPE_NAME")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 80K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING
    Note that when I deleted the newly inserted groups, the CPU consumption immediately went down from 100% to some 2-3%.
    This behaviour has been observed on a Sun Solaris system, but I think it's the same on NT (I have observed it during the bulk load on my NT laptop, but so far have not had the time to test further).
    Also note: In the call to addGroupToList, I set owner to true for all groups.
    Also note: During loading of the groups, I logged a few errors, all of the same type ("PORTAL30.WWSEC_API", line 2075), as follows:
    Error: Problem calling addGroupToList for child group'Marketing' (8030), list 'NO_OSL_Usenet'(8017). Reason: java.sql.SQLException: ORA-06510: PL/SQL: unhandled user-defined exception ORA-06512: at "PORTAL30.WWSEC_API", line 2075
    Please help. If you like, I may supply the tables and the Java program that I use. It's fully reproducible.
    Thanks,
    Erik Hagen (you may call me on +47 90631013)

    YES!
    I have now tested with the missing indexes inserted. It seems the call to addGroupToList takes just as long as before, but the result is much better: WITH THE INDEXES DEFINED, THERE IS NO LONGER A PERFORMANCE PROBLEM!! The index definitions that I used are listed below (I added these to the ones that are there in Portal 3.0.8, but I guess some of those could have been deleted).
    About the info at http://technet.oracle.com:89/ubb/Forum70/HTML/000894.html: Yes! Thanks! Very interesting, and I guess you found the cause of the error messages and maybe also of the performance problem during bulk load (I'll look into it as soon as possible and report what I find).
    Note: I have made a pretty foolproof and automated installation script (or actually, it's part of my Java program), that will let anybody interested recreate the problem. Mail your interest to [email protected].
    ============================================
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_PERS_IX1"
    ON "PORTAL30"."WWSEC_PERSON$"("MANAGER")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_IX2
    ON PORTAL30.WWSEC_PERSON$("ORGANIZATION")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_PK
    ON PORTAL30.WWSEC_PERSON$("ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_UK
    ON PORTAL30.WWSEC_PERSON$("USER_NAME")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_UK
    ON PORTAL30.WWSEC_FLAT$("GROUP_ID", "PERSON_ID",
    "SPONSORING_MEMBER_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_PK
    ON PORTAL30.WWSEC_FLAT$("ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX5
    ON PORTAL30.WWSEC_FLAT$("GROUP_ID", "PERSON_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX4
    ON PORTAL30.WWSEC_FLAT$("SPONSORING_MEMBER_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX3
    ON PORTAL30.WWSEC_FLAT$("GROUP_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX2
    ON PORTAL30.WWSEC_FLAT$("PERSON_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX1"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("GRANTEE_GROUP_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX2"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("GRANTEE_USER_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX3"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("OBJECT_TYPE_NAME", "NAME")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_PK"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_UK"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("OBJECT_TYPE_NAME",
    "NAME", "OWNER", "GRANTEE_TYPE", "GRANTEE_GROUP_ID",
    "GRANTEE_USER_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 88K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    ==================================
    Thanks,
    Erik Hagen

  • SAP Performance Problem with Oracle 10 on Sun SPARC T5240 server

    Dear Friends,
    We have a performance problem after migrating our SAP ERP 6.0 basis system. We moved to new servers a month ago.
    According to the SAP EarlyWatch Alert report, the CPU response time is too high, although CPU utilization never exceeds 5%.
    The current system is:
    Database server: Sun SPARC Enterprise T5240 server - 2 CPUs with 6 cores and 8 threads per core, 1.2 GHz, 32 GB RAM,
    and we use another identically configured server as an application server.
    The database is Oracle 10.2.0 and the operating system is Solaris 10.
    The problem is that the average CPU response time is 450 ms, while the maximum CPU load is 5%.
    With the pre-migration configuration on the old servers, we had a CPU response time of 150 ms and a maximum CPU load of 50%.
    Old configuration: 2 x HP rp3440, 2 x PA-8800 CPUs (2 cores, 1.0 GHz).
    Have you had any experience with a similar situation? Which setting might be wrong, given that the server CPUs are never fully utilized?
    Or do you know of a similar configuration to benchmark against?
    Thanks in advance
    Uzan

    Our organization upgraded an application - the vendor had originally suggested a T2000. When we finally migrated onto it, the performance was worse than with the older version of the application. The vendor hadn't yet tested the combination of the T2000 and 10.2 with their application when they made that hardware recommendation; they subsequently revised their recommendations. We ended up with an M4000.
    The application performance, IMHO, still stinks. But that's because it's a Java-based application written for database independence.
    The interesting thing to note here is that in the application team's 'selective' testing, everything worked fine. When the testing was extended to include users who didn't follow the 'test plan', they saw performance begin to tank. Then, with more users and more testing, they found that the T2000 was not going to perform well. The apps team's test plan was not sufficient to find the issues, and it did not include any load testing!
    Edited by: dbtoo on Jun 5, 2009 11:20 AM

  • ZBook 17 g2 - poor DPC Latency performance when running from z Turbo Drive PCIe SSD

    I'm setting up a new zBook 17 g2 and am getting very poor DPC latency performance (> 6000 us) when running from the PCIe SSD. I've re-installed the OS (Win 7 64-bit) on both the PCIe SSD and a SATA HDD: the DPC latency performance is fine when running from the HDD (50-100 us) but horrible when running from the PCIe SSD (> 6000 us). I've updated the BIOS and tried every combination of driver and component enabling/disabling I can think of. The DPC latency is extremely high from the initial Windows install with no drivers installed, and adding drivers seems to have no effect on it.

    Before purchasing the laptop I found this review: http://www.notebookcheck.net/Review-HP-ZBook-17-E9X11AA-ABA-Workstation.106222.0.html where the DPC latency measurement (middle of the page) looks OK. Of course, that is the prior version of the laptop, and I believe it does not have the PCIe SSD. Combining that with the fact that I get fine performance when running from the HDD, I am led to believe that the PCIe SSD is the cause of the problem.

    Has anyone found a solution to this problem? As it stands right now, my zBook is not usable for digital audio work when running from the PCIe SSD. But it cost me a lot of money, so I'd sure like to use it...!

    Thanks,
    rgames

    Hi mooktank,

    No solution yet but, as of about six weeks ago, HP at least acknowledged that it's a problem (finally). I reproduced it perfectly on another zBook 17 g2 and another PCIe SSD in the same laptop, and HP was able to reproduce the problem as well. So the problem is clearly in the BIOS or in some driver related to the PCIe SSD. It could also be in the firmware of the drive itself, but I can't find any other PCIe drives in the 60 mm form factor, so there's no way to see if a different type of drive would fix the problem.

    My suspicion is that it's related to the PCIe sleep states - those are known to cause exactly these types of problems, because the drive takes quick "naps" to save power and there's a delay when it is told to wake back up. That delay causes a delay in the audio buffer that results in pops/crackles/stutters that would never be noticed doing other tasks like video editing or CAD work. So it's a problem specific to folks who need low-latency audio performance (very few apps require low-latency audio - video editing, for example, uses huge buffers with relatively high latency). A lot of desktops offer a BIOS option to disable those sleep states, but no such option exists in HP's BIOS for this laptop. In theory you can do it from within Windows, but it has no effect on my system. That might be one of those options that Windows allows you to change but that actually has no effect.

    One workaround is to disable CPU throttling. That makes the CPU run at full speed all the time and, I believe, also disables the PCIe and other sleep states. When I disable CPU throttling, DPC latency goes back to normal. However, the CPU is then running at full speed all the time, so your battery life basically goes to nothing and the laptop gets *very* hot. Clearly that shouldn't be necessary, because the laptop runs fine from the SATA SSD. HP needs to fix the latency problem associated with the PCIe drive. The next logical step is to provide a BIOS update that offers a way to disable the PCIe sleep states without disabling CPU throttling, like on many desktop systems.

    The bad news is that HP tech support is not very technical, so it takes forever for them to figure out what I'm talking about. It took a couple of months for them to start using the DPC Latency Checker. Hopefully there will be a fix at some point... in the meantime, I hope that HP sends me a check for spending so much time educating their techs on how computers work. And for countless hours lost re-installing different OSes only to show that the performance is exactly the same as shown in the DPC Latency Checker.

    rgames
