Performance Problem: Synonym in One Schema Referencing a Synonym in a Different Schema

Does anyone have experience using a synonym in one schema that references another synonym in a different schema? I found that if I do it this way, performance drops considerably depending on the SQL. It runs slowly, and in some cases I get Oracle errors (e.g., invalid ROWID). If I change my synonym to reference the actual table in the other schema, I don't get the same degradation in performance.
I'm curious whether anyone has experienced the same problem and what options might be available to improve the situation. I lose the "semantic" layer by doing this, since I always have to reference the physical table instead of the synonym in the other schema.
Thanks for any input or comments you might have.
Sabert

Only 2 schemas. Schema A has a table T1 with a synonym S1. I am running SQL against Schema B, but I have a private synonym S2 in this schema (B) that references synonym S1 in Schema A. Because I am using a synonym to reference another synonym in another schema, I'm having performance issues. As a test, I changed the S2 synonym in schema B to point to the table T1 instead of S1 in schema A, and it performed much better. I just want to understand what I'm seeing here. This example might help:
Schema A:
"CREATE SYNONYM S1 FOR T1"
Schema B (I'm running my SQL against this schema, but my synonym points to the synonym in schema A):
"CREATE SYNONYM S2 FOR A.S1"
"INSERT INTO XXX SELECT * FROM S2"
I had performance issues with this synonym-to-synonym setup, so I changed it to synonym-to-table:
"CREATE SYNONYM S2 FOR A.T1"
Again, with the reference changed from a synonym to the actual table, things work better. I hope this makes sense.
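For reference, a quick way to see what each synonym resolves to, and to flatten the chain so that B's synonym targets the table directly (a minimal sketch using the names above; querying ALL_SYNONYMS assumes you can see both schemas):
-- Inspect the synonym chain: S2 -> A.S1 -> A.T1
SELECT owner, synonym_name, table_owner, table_name
FROM all_synonyms
WHERE synonym_name IN ('S1', 'S2');
-- Run as B: flatten the chain so the synonym points straight at the table
CREATE OR REPLACE SYNONYM s2 FOR a.t1;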
Thanks for your help.

Similar Messages

  • Problem while Creating MVLOG with synonym in Oracle 9i: Is it an Oracle Bug?

    Hi All,
    I am facing a problem while creating an MV log on a synonym in Oracle 9i, but in 10g it works fine. Is it an Oracle bug, or am I missing something?
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Product
    PL/SQL Release 10.2.0.1.0 - Production
    CORE    10.2.0.1.0      Production
    TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL>
    SQL> create table t ( name varchar2(20), id varchar2(1) primary key);
    Table created.
    SQL> create materialized view log on t;
    Materialized view log created.
    SQL> create public synonym syn_t for t;
    Synonym created.
    SQL> CREATE MATERIALIZED VIEW MV_t
      2  REFRESH ON DEMAND
      3  WITH PRIMARY KEY
      4  AS
      5  SELECT name,id
      6  FROM syn_t;
    Materialized view created.
    SQL> CREATE MATERIALIZED VIEW LOG ON  MV_t
      2  WITH PRIMARY KEY
      3   (name)
      4    INCLUDING NEW VALUES;
    Materialized view log created.
    SQL> select * from v$version;
    BANNER
    Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
    PL/SQL Release 9.2.0.6.0 - Production
    CORE    9.2.0.6.0       Production
    TNS for Solaris: Version 9.2.0.6.0 - Production
    NLSRTL Version 9.2.0.6.0 - Production
    SQL>
    SQL> create table t ( name varchar2(20), id varchar2(1) primary key);
    Table created.
    SQL> create materialized view log on t;
    Materialized view log created.
    SQL> create public synonym syn_t for t;
    Synonym created.
    SQL> CREATE MATERIALIZED VIEW MV_t
      2  REFRESH ON DEMAND
      3  WITH PRIMARY KEY
      4  AS
      5  SELECT name,id
      6  FROM syn_t;
    Materialized view created.
    SQL> CREATE MATERIALIZED VIEW LOG ON MV_t
      2  WITH PRIMARY KEY
      3  (name)
      4  INCLUDING NEW VALUES;
    CREATE MATERIALIZED VIEW LOG ON MV_t
    ERROR at line 1:
    ORA-12014: table 'MV_T' does not contain a primary key constraint
    Regards
    Message was edited by:
    Avinash Tripathi

    Hi Nicloei,
    Thanks for the reply. Actually, I don't need a workaround (creating the MV log on the table rather than the synonym is fine with me). I just wanted to know whether it is actually an Oracle bug or something else.
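    For anyone hitting the same thing, the workaround discussed above amounts to building the materialized view against the base table instead of the synonym, so that the MV inherits the primary key (a sketch using the objects from the transcript):
    -- 9i-safe variant: reference the table, not the synonym
    CREATE MATERIALIZED VIEW MV_t
    REFRESH ON DEMAND
    WITH PRIMARY KEY
    AS
    SELECT name, id FROM t;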
    Regards
    Avinash

  • Table T1 in Schema A, Synonym T1 in Schema B, View V1 in Schema B.

    Hi,
    There are two schemas Schema A and Schema B
    Table-> XML_TABLE is in Schema A.
    If I create a view V1 with the following query:
    SELECT *
    FROM XML_TABLE t
    WHERE XMLEXISTS (
    'declare namespace nmsp ="name:namespace"; fn:collection("oradb:/PUBLIC/Table2")/ROW[ColumnName=$d/nmsp:Colmn2/nmsp:id]'
    PASSING t.textxml AS "d")
    My XML Indexes are used and output is fast.
    BUT in Schema B I also have a synonym for table XML_TABLE, pointing to the table in Schema A.
    And I create a similar view V1 in schema B, using the same query as above.
    BUT this one doesn't use the indexes. I can see that in the plan, and the query never gets executed.
    So IS IT correct to have the table in schema A, a synonym in Schema B for this table, and a view in Schema B on top of it?
    How should I make it work?
    OR
    Should I move the view to Schema A only?
    Thanks..
    Edited by: user8941550 on Dec 12, 2012 11:56 PM

    user8941550 wrote:
    BUT this one doesn't use the indexes. I can see that in the plan, and the query never gets executed.
    Ok, I was following you to that point, but what do you mean the query never gets executed?
    So IS IT correct to have the table in schema A, a synonym in Schema B for this table, and a view in Schema B on top of it?
    Well, it's not incorrect. Perhaps it would be more tidy to just have the view defined in schema A along with the table, if it's going to be used in schema A as well, and then have a synonym for schema B to use that view.
    How should I make it work?
    OR
    Should I move the view to Schema A only?
    Perhaps you could post an example of your table with some example data for people to use, along with the views and the statements to set up the synonyms, so others can reproduce this and see what's going on. Also, post the explain plans you're referring to.
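    A minimal sketch of that suggested arrangement, reusing the predicate from the post (run as a privileged user, or as A for the view and grant, and B for the synonym):
    -- Define the view in schema A, next to the table and its XML indexes
    CREATE OR REPLACE VIEW a.v1 AS
    SELECT *
    FROM a.xml_table t
    WHERE XMLEXISTS (
    'declare namespace nmsp ="name:namespace"; fn:collection("oradb:/PUBLIC/Table2")/ROW[ColumnName=$d/nmsp:Colmn2/nmsp:id]'
    PASSING t.textxml AS "d");
    GRANT SELECT ON a.v1 TO b;
    -- Schema B reaches the view through a synonym instead of defining its own copy
    CREATE SYNONYM b.v1 FOR a.v1;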

  • Large DB Performance problems when updating schemas

    Hi,
    I'm facing performance problems updating the schema of large databases in SQL Server 2008 R2, and I can't find a proper solution. I would have thought that this is a fairly common problem, so here it goes.
    I have a database which is about 700Gb in size. I am going to detail 2 different issues:
    EXAMPLE 1: CHANGING A FIELD TYPE
    In that database I have a table with the schema detailed below as Table_TB. This table contains several million records. As you can see, there is a column of type TEXT (Comment_FD). What I am trying to do is change the column type to NVARCHAR(MAX), to remove the deprecated TEXT type and support Unicode characters in that field.
    The command I am running is the following one:
    ALTER TABLE DBA.Table_TB ALTER COLUMN Comment_FD NVARCHAR(MAX)
    This operation takes several hours to complete.
    I tried the following:
    -Adding the new column to the same table and copying the data over.
    -Creating an entire new table with the field type modified and copying the data over.
    -Exporting the contents of the table to disk, truncating the table, and reimporting the data, both with Management Studio and the bcp tool.
    -When copying to a new table, I also tried removing the PK of the table to skip the overhead of maintaining an index.
    -I also tried using a simple stored procedure to copy the data over (to the column in the same table and to the other table) in batches.
    In ALL my tests I set the SQL Server recovery model to SIMPLE to skip as much of the log-generation overhead as I could.
    No matter what I do, the time to complete this operation is unusable.
    Please note that I am reducing this problem to its minimum expression. I can't say precisely how long the operation takes, but a script containing 5 field type changes identical to that one, in that table and two others, takes 7 days to complete!!! The REAL problem is that I have several other fields whose type I need to change, and they currently amount to a total running time of 14 days! And this is just changing a handful of fields in a handful of tables. At some point every string in the system will need to be migrated to get Unicode support, making this completely impractical.
    **Based on smaller DBs in the same system I guesstimate this table will contain about 14M records and will be about 44Gb in size. 
    EXAMPLE 2: ADDING COLUMNS
    I have another table in the same DB, with a schema detailed below as Table2_TB, and again several million records in it. I am trying to add a column using the following SQL:
    ALTER TABLE DBA.Table2_TB ADD strFiDatasourceName_FD VARCHAR(64) NOT NULL DEFAULT ''
    This operation takes a bit more than 7 hours to complete.
    **Based on smaller DBs in the same system I guesstimate this table will contain about 54M records, and a size of 98Gb.
    QUESTIONS:
    ---->Am I doing something wrong, or is there any way to optimize either the SQL or the SQLServer configuration to speed this up?
    ---->Are these performance levels normal at all when dealing with databases of this size? 
    ---->Anyone out there with experience on DBs of this size?
    ---->Does Microsoft offer some kind of service (cloud?) to make structural changes in large Dbs?
    Thanks a lot for your help in advance!
    This is the schema for the first table:
    CREATE TABLE [DBA].[Table_TB](
    [BranchCode_FD] [char](2) NOT NULL,
    [FolderNo_FD] [int] NOT NULL,
    [DateTime_FD] [datetime] NULL,
    [Staff_FD] [char](3) NULL,
    [Action_FD] [varchar](25) NULL,
    [Comment_FD] [text] NULL,
    [Team_FD] [varchar](8) NULL,
    [Group_FD] [varchar](8) NULL,
    [PopUpDate_FD] [smalldatetime] NULL,
    [nRecordID_FD] [smallint] NOT NULL,
    [nFiFoldItemID_FD] [smallint] NOT NULL,
    PRIMARY KEY CLUSTERED
    (
    [BranchCode_FD] ASC,
    [FolderNo_FD] ASC,
    [nRecordID_FD] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100) ON [PRIMARY]
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [DBA].[Table_TB] ADD CONSTRAINT [DF__Table_TB__DateTime] DEFAULT ('1980-01-01') FOR [DateTime_FD]
    GO
    ALTER TABLE [DBA].[Table_TB] ADD CONSTRAINT [DF__Table_TB__PopUp__096A45D7] DEFAULT ('1980-01-01') FOR [PopUpDate_FD]
    GO
    ALTER TABLE [DBA].[Table_TB] ADD DEFAULT ((-1)) FOR [nFiFoldItemID_FD]
    GO
    This is the schema for the second table:
    CREATE TABLE [DBA].[Table2_TB](
    [strBBranchCode_FD] [char](2) NOT NULL,
    [lFFoldNo_FD] [int] NOT NULL,
    [nFiFoldItemID_FD] [smallint] NOT NULL,
    [strFiType_FD] [char](3) NOT NULL,
    [dtFiCreateDate_FD] [smalldatetime] NOT NULL,
    [strFiBookingRef_FD] [varchar](32) NOT NULL,
    [strFiBookingRefDayMonth_FD] [varchar](5) NOT NULL,
    [strFiBookedVia_FD] [varchar](50) NOT NULL,
    [bFiInterfaced_FD] [smallint] NOT NULL,
    [nFiSortOrder_FD] [smallint] NOT NULL,
    [strFiCreateStaffCode_FD] [char](3) NOT NULL,
    [strPcProductCode_FD] [varchar](5) NOT NULL,
    [dtFiStartDateTime_FD] [smalldatetime] NOT NULL,
    [lFiFinanVendID_FD] [int] NOT NULL,
    [lFiItinVendID_FD] [int] NOT NULL,
    [dtFiVendBalDueDate_FD] [smalldatetime] NOT NULL,
    [dtFiVendDepositDueDate_FD] [smalldatetime] NOT NULL,
    [strFiStatus_FD] [varchar](2) NOT NULL,
    [bFiTransFeeHasBeenApplied_FD] [smallint] NOT NULL,
    [bFiATOLTypeMan_FD] [smallint] NOT NULL,
    [nFiATOLType_FD] [smallint] NOT NULL,
    [strCcClassCode_FD] [varchar](10) NOT NULL,
    [strFiStartPointCode_FD] [varchar](5) NOT NULL,
    [strFiEndPointCode_FD] [varchar](5) NOT NULL,
    [strFiAirlineCode_FD] [varchar](3) NOT NULL,
    [strFiVendDocNo_FD] [varchar](16) NOT NULL,
    [dtFiIssueDate_FD] [smalldatetime] NOT NULL,
    [strFiDiscReasonCode_FD] [varchar](3) NOT NULL,
    [nFiLastFoldPricingID_FD] [smallint] NOT NULL,
    [strFiPrintingNote_FD] [text] NOT NULL,
    [strFiNonPrintingNote_FD] [text] NOT NULL,
    [dtFiStatusExpiryDate_FD] [smalldatetime] NOT NULL,
    [strFiClientFreqTravellerNo_FD] [varchar](20) NOT NULL,
    [strFiRouteNo_FD] [varchar](5) NOT NULL,
    [nFiNumBum_FD] [smallint] NOT NULL,
    [dtFiEndDateTime_FD] [smalldatetime] NOT NULL,
    [strFiFareBase_FD] [varchar](15) NOT NULL,
    [strFiInterfaceItemID_FD] [varchar](15) NOT NULL,
    [strFiEndPointLoc_FD] [varchar](255) NULL,
    [strFiStartPointLoc_FD] [varchar](255) NULL,
    [strMcMealCode_FD] [varchar](5) NOT NULL,
    [strFiSeatNote_FD] [text] NOT NULL,
    [strFiMealNote_FD] [text] NOT NULL,
    [strFiAirCraftType_FD] [varchar](5) NOT NULL,
    [strFiJourneyTime_FD] [varchar](8) NOT NULL,
    [strFiCheckInMins_FD] [varchar](10) NOT NULL,
    [lFiJourneyDist_FD] [int] NOT NULL,
    [nFiNumStop_FD] [smallint] NOT NULL,
    [strFiBaggageAllow_FD] [varchar](15) NOT NULL,
    [strFiIssueStaffCode_FD] [varchar](20) NOT NULL,
    [dtFiDispatchDate_FD] [smalldatetime] NOT NULL,
    [strFiDispatchStaffCode_FD] [varchar](3) NOT NULL,
    [strDmDispatchCode_FD] [varchar](2) NOT NULL,
    [lfFiVendDepositDueAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiRateCode_FD] [varchar](50) NOT NULL,
    [strFiRatePlan_FD] [varchar](2) NOT NULL,
    [strFiCabinNo_FD] [varchar](8) NOT NULL,
    [strFiMileage_FD] [varchar](10) NOT NULL,
    [strFiStartPointLocTelNo_FD] [varchar](40) NULL,
    [strFiBookingGuarantee_FD] [varchar](60) NOT NULL,
    [strFiSpecialRemarks_FD] [varchar](250) NULL,
    [strFiCxnCondition_FD] [varchar](100) NOT NULL,
    [bFiFlyDrive_FD] [smallint] NOT NULL,
    [strFiConfNo_FD] [varchar](32) NOT NULL,
    [lFiLinkID_FD] [int] NOT NULL,
    [strFiCategory_FD] [varchar](15) NOT NULL,
    [nFiNumRoom_FD] [smallint] NOT NULL,
    [nFiNumDay_FD] [smallint] NOT NULL,
    [nFiSaleFoldItemID_FD] [smallint] NOT NULL,
    [bFiRefundItem_FD] [smallint] NOT NULL,
    [strFiFareSavingCode_FD] [varchar](2) NOT NULL,
    [lfFiFareSavingAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiRateNote_FD] [text] NOT NULL,
    [strFiDiscCode_FD] [varchar](20) NOT NULL,
    [strFiReqDispatchMethodCode_FD] [varchar](2) NOT NULL,
    [dtFiReqDispatchDateTime_FD] [smalldatetime] NOT NULL,
    [nFiReqDispatchVoucherType_FD] [smallint] NOT NULL,
    [nFiLastFoldItemDetailID_FD] [smallint] NOT NULL,
    [nFiNumConjunction_FD] [smallint] NOT NULL,
    [lfFiBSPFrgnBaseFareAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiBSPBaseFareAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiBSPTaxDiscrepancy_FD] [decimal](17, 2) NOT NULL,
    [lfFiBSPPenaltyFeeAmt_FD] [decimal](17, 2) NOT NULL,
    [nFiRegion_FD] [smallint] NOT NULL,
    [strFiOpenTktNo_FD] [varchar](16) NOT NULL,
    [strFiTktSource_FD] [varchar](3) NOT NULL,
    [strFiJourneyType_FD] [varchar](3) NOT NULL,
    [strFiTktType_FD] [varchar](3) NOT NULL,
    [strFiInterfaceNameRemark_FD] [varchar](50) NULL,
    [nFiATOLIssuedStatus_FD] [smallint] NOT NULL,
    [strFiFareSavingFareBase_FD] [varchar](13) NOT NULL,
    [strFiPaxType_FD] [varchar](3) NOT NULL,
    [strFiActualCarrier_FD] [varchar](2) NOT NULL,
    [strFiNetRemitType_FD] [varchar](1) NOT NULL,
    [strFiFareConstruction_FD] [text] NOT NULL,
    [lfFiBSPTotVATAmt_FD] [decimal](17, 2) NOT NULL,
    [bFiNetFare_FD] [smallint] NOT NULL,
    [strFiTourCode_FD] [varchar](50) NOT NULL,
    [strFiSuppFOPInfo_FD] [varchar](255) NOT NULL,
    [lfFitBSPPublishedCommPerc_FD] [decimal](12, 6) NOT NULL,
    [strFiBSPFareCurrCode_FD] [varchar](3) NOT NULL,
    [strFiTktIssueIataNo_FD] [varchar](8) NOT NULL,
    [lfFiFareOfferedSavingAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiFareOfferedSavingCode_FD] [varchar](2) NOT NULL,
    [strFiDesc_FD] [varchar](8000) NOT NULL,
    [lfFiBSPFareBuyDiscrepancyAmt_FD] [decimal](17, 2) NOT NULL,
    [bFiNoPrintOnItin_FD] [smallint] NOT NULL,
    [dtFiStatusCodeChangeDateTime_FD] [smalldatetime] NOT NULL,
    [bFiNoPrintIfAllPricingsZeroCustAmt_FD] [smallint] NOT NULL,
    [strFiFareSavingFareBaseLow_FD] [varchar](13) NOT NULL,
    [lfFiFareSavingLowAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiBrochureCode_FD] [varchar](8) NOT NULL,
    [bFiVendPayDepositNow_FD] [smallint] NOT NULL,
    [bFiVendPayBalanceNow_FD] [smallint] NOT NULL,
    [strFiBankBranchCode_FD] [varchar](2) NOT NULL,
    [strFiOperatingAirlineCode_FD] [varchar](2) NOT NULL,
    [strFiFarePassengerTypeCode_FD] [varchar](3) NOT NULL,
    [strFiAssociatedFarePricingInfoID_FD] [varchar](15) NOT NULL,
    [bFiVerificationReq_FD] [smallint] NOT NULL,
    [dtFiLastVerifiedDateTime_FD] [datetime] NOT NULL,
    [lFiLastVerifiedLevel_FD] [int] NOT NULL,
    [lFiLastVerifiedWithCount_FD] [int] NOT NULL,
    [dtFiTktingStatusChangeDateTime_FD] [smalldatetime] NOT NULL,
    [strFiTktingInformation_FD] [text] NOT NULL,
    [strFiTktingStatus_FD] [varchar](2) NOT NULL,
    [lfFiBSPFareSellDiscrepancyAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiBSPTaxBuyDiscrepancyAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiTktingBatchID_FD] [varchar](15) NOT NULL,
    [dtFiTktingBatchDateTime_FD] [smalldatetime] NOT NULL,
    [lIplPolicyLevelID_FD] [int] NOT NULL,
    [strFiTktingDescription_FD] [varchar](1000) NOT NULL,
    [strFiTktingDataVersion_FD] [varchar](10) NOT NULL,
    [strFiSourceTktingSystem_FD] [varchar](20) NOT NULL,
    [strFiOthPtsPmtCode_FD] [varchar](3) NOT NULL,
    [bFiManualPtsEntry_FD] [smallint] NOT NULL,
    [strFiEndorsement_FD] [varchar](500) NOT NULL,
    [strFiOwnedByStaffCode_FD] [char](3) NOT NULL,
    [strFiThirdPartyTrackingID_FD] [varchar](25) NOT NULL,
    [strFiAdditionalPrintingNote_FD] [text] NULL,
    [bFiOverridePrintingNote_FD] [smallint] NOT NULL,
    [lfFiCarbonOffsetWeightAmt_FD] [decimal](17, 2) NOT NULL,
    [strFiCancellationPolicyNote_FD] [text] NULL,
    [bFiPEProcessed_FD] [smallint] NOT NULL,
    [bFiActingAsAgentFor_FD] [smallint] NOT NULL,
    [nFiOriginalBuyingBasis_FD] [smallint] NOT NULL,
    [bFiIsOpenSegment_FD] [smallint] NOT NULL,
    [nFiCreateSource_FD] [smallint] NOT NULL,
    [strFiBookingSourceInvoiceNo_FD] [varchar](7) NOT NULL,
    [strFiGDSPaxTypeCode_FD] [varchar](8) NOT NULL,
    [strFiNetFareGDSAccountCode_FD] [varchar](8) NOT NULL,
    [strFiPOSID_FD] [varchar](50) NOT NULL,
    [bFiIsPOSEditable_FD] [smallint] NOT NULL,
    [lfFiCustExchRate_FD] [decimal](16, 8) NOT NULL,
    [lfFiCustFareSavingLowAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiCustFareSavingAmt_FD] [decimal](17, 2) NOT NULL,
    [lfFiCustFareOfferedSavingAmt_FD] [decimal](17, 2) NOT NULL,
    [bFiCCItemPayableToBranch_FD] [smallint] NOT NULL,
    [dtFiExternalAccountingDate_FD] [smalldatetime] NOT NULL,
    [strFiSourceSystemBookResponseText_FD] [text] NOT NULL,
    [bFiIsConnection_FD] [smallint] NOT NULL,
    [bFiIsPEFoldLevelItem_FD] [smallint] NOT NULL,
    [nFiReqdEndorsedConjTktType_FD] [smallint] NOT NULL,
    [bFiEndorsedConjTktDetailIsManual_FD] [smallint] NOT NULL,
    [strFiReqdEndorsedConjTktDetailText_FD] [varchar](30) NOT NULL,
    [lFiLastTktingTemplateID_FD] [int] NOT NULL,
    [dtFiBookedDate_FD] [smalldatetime] NOT NULL,
    [strFiTktingVerificationWarning_FD] [varchar](1000) NOT NULL,
    [strFiTktingVerificationError_FD] [varchar](1000) NOT NULL,
    [strFiTktingError_FD] [varchar](1000) NOT NULL,
    [strFiCustomerAccountingData00_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData01_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData02_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData03_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData04_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData05_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData06_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData07_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData08_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData09_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingDataNote_FD] [varchar](100) NOT NULL,
    [strFiCustomerAccountingData10_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData11_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData12_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData13_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData14_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData15_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData16_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData17_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData18_FD] [varchar](50) NOT NULL,
    [strFiCustomerAccountingData19_FD] [varchar](50) NOT NULL,
    [nFiFlightBasis_FD] [smallint] NOT NULL,
    [lFiStartPointVendID_FD] [int] NOT NULL,
    [lFiEndPointVendID_FD] [int] NOT NULL,
    [lCtID_FD] [int] NOT NULL,
    [strFiContractCode_FD] [varchar](25) NOT NULL,
    [strFiContractPeriodCode_FD] [varchar](50) NOT NULL,
    PRIMARY KEY CLUSTERED
    (
    [strBBranchCode_FD] ASC,
    [lFFoldNo_FD] ASC,
    [nFiFoldItemID_FD] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100) ON [PRIMARY]
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
    GO
    ALTER TABLE [DBA].[Table2_TB] WITH NOCHECK ADD FOREIGN KEY([nFiReqDispatchVoucherType_FD])
    REFERENCES [DBA].[VoucherTypes_TB] ([nVtCode_FD])
    GO
    ALTER TABLE [DBA].[Table2_TB] WITH NOCHECK ADD FOREIGN KEY([strPcProductCode_FD], [strFiType_FD])
    REFERENCES [DBA].[ProductCodes_TB] ([ProductCode_FD], [Type_FD])
    GO
    ALTER TABLE [DBA].[Table2_TB] WITH NOCHECK ADD FOREIGN KEY([strBBranchCode_FD], [lFFoldNo_FD])
    REFERENCES [DBA].[Folder_TB] ([BranchCode_FD], [FolderNo_FD])
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strBB__44AB0736] DEFAULT ('') FOR [strBBranchCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFFol__459F2B6F] DEFAULT (0) FOR [lFFoldNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiFo__46934FA8] DEFAULT ((-1)) FOR [nFiFoldItemID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__478773E1] DEFAULT ('') FOR [strFiType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiC__487B981A] DEFAULT ('1980-01-01') FOR [dtFiCreateDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strFiBookingRef] DEFAULT ('') FOR [strFiBookingRef_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__4A63E08C] DEFAULT ('') FOR [strFiBookingRefDayMonth_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strFiBookedVia_FD] DEFAULT ('') FOR [strFiBookedVia_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiIn__4C4C28FE] DEFAULT (0) FOR [bFiInterfaced_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiSo__4D404D37] DEFAULT ((-1)) FOR [nFiSortOrder_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__4E347170] DEFAULT ('') FOR [strFiCreateStaffCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strPcProductCode] DEFAULT ('') FOR [strPcProductCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiS__501CB9E2] DEFAULT ('1980-01-01') FOR [dtFiStartDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFiFi__5110DE1B] DEFAULT ((-1)) FOR [lFiFinanVendID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFiIt__52050254] DEFAULT ((-1)) FOR [lFiItinVendID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiV__52F9268D] DEFAULT ('1980-01-01') FOR [dtFiVendBalDueDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiV__53ED4AC6] DEFAULT ('1980-01-01') FOR [dtFiVendDepositDueDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__54E16EFF] DEFAULT ('') FOR [strFiStatus_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiTr__55D59338] DEFAULT (0) FOR [bFiTransFeeHasBeenApplied_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiAT__56C9B771] DEFAULT (0) FOR [bFiATOLTypeMan_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiAT__57BDDBAA] DEFAULT (0) FOR [nFiATOLType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strCc__58B1FFE3] DEFAULT ('') FOR [strCcClassCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strFiStartPointCode] DEFAULT ('') FOR [strFiStartPointCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__strFiEndPointCode] DEFAULT ('') FOR [strFiEndPointCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__5B8E6C8E] DEFAULT ('') FOR [strFiAirlineCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__5C8290C7] DEFAULT ('') FOR [strFiVendDocNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiI__5D76B500] DEFAULT ('1980-01-01') FOR [dtFiIssueDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__5E6AD939] DEFAULT ('') FOR [strFiDiscReasonCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiLa__5F5EFD72] DEFAULT ((-1)) FOR [nFiLastFoldPricingID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__605321AB] DEFAULT ('') FOR [strFiPrintingNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__614745E4] DEFAULT ('') FOR [strFiNonPrintingNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiS__623B6A1D] DEFAULT ('1980-01-01') FOR [dtFiStatusExpiryDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__632F8E56] DEFAULT ('') FOR [strFiClientFreqTravellerNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6423B28F] DEFAULT ('') FOR [strFiRouteNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_TB__nFiNumBum] DEFAULT ((-1)) FOR [nFiNumBum_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiE__660BFB01] DEFAULT ('1980-01-01') FOR [dtFiEndDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__67001F3A] DEFAULT ('') FOR [strFiFareBase_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__67F44373] DEFAULT ('') FOR [strFiInterfaceItemID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__68E867AC] DEFAULT ('') FOR [strFiEndPointLoc_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__69DC8BE5] DEFAULT ('') FOR [strFiStartPointLoc_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strMc__6AD0B01E] DEFAULT ('') FOR [strMcMealCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6BC4D457] DEFAULT ('') FOR [strFiSeatNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6CB8F890] DEFAULT ('') FOR [strFiMealNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6DAD1CC9] DEFAULT ('') FOR [strFiAirCraftType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6EA14102] DEFAULT ('') FOR [strFiJourneyTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__6F95653B] DEFAULT ('') FOR [strFiCheckInMins_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFiJo__70898974] DEFAULT (0) FOR [lFiJourneyDist_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiNu__717DADAD] DEFAULT (0) FOR [nFiNumStop_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7271D1E6] DEFAULT ('') FOR [strFiBaggageAllow_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7365F61F] DEFAULT ('') FOR [strFiIssueStaffCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiD__745A1A58] DEFAULT ('1980-01-01') FOR [dtFiDispatchDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__754E3E91] DEFAULT ('') FOR [strFiDispatchStaffCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strDm__764262CA] DEFAULT ('') FOR [strDmDispatchCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiV__77368703] DEFAULT (0) FOR [lfFiVendDepositDueAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__782AAB3C] DEFAULT ('') FOR [strFiRateCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__791ECF75] DEFAULT ('') FOR [strFiRatePlan_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7A12F3AE] DEFAULT ('') FOR [strFiCabinNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7B0717E7] DEFAULT ('') FOR [strFiMileage_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7BFB3C20] DEFAULT ('') FOR [strFiStartPointLocTelNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7CEF6059] DEFAULT ('') FOR [strFiBookingGuarantee_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7DE38492] DEFAULT ('') FOR [strFiSpecialRemarks_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__7ED7A8CB] DEFAULT ('') FOR [strFiCxnCondition_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiFl__7FCBCD04] DEFAULT (0) FOR [bFiFlyDrive_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__00BFF13D] DEFAULT ('') FOR [strFiConfNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lFiLi__01B41576] DEFAULT ((-1)) FOR [lFiLinkID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__02A839AF] DEFAULT ('') FOR [strFiCategory_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiNu__039C5DE8] DEFAULT (0) FOR [nFiNumRoom_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiNu__04908221] DEFAULT (0) FOR [nFiNumDay_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiSa__0584A65A] DEFAULT ((-1)) FOR [nFiSaleFoldItemID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiRe__0678CA93] DEFAULT (0) FOR [bFiRefundItem_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__076CEECC] DEFAULT ('') FOR [strFiFareSavingCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiF__08611305] DEFAULT (0) FOR [lfFiFareSavingAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__0955373E] DEFAULT ('') FOR [strFiRateNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__0A495B77] DEFAULT ('') FOR [strFiDiscCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__0B3D7FB0] DEFAULT ('') FOR [strFiReqDispatchMethodCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__dtFiR__0C31A3E9] DEFAULT ('1980-01-01') FOR [dtFiReqDispatchDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiRe__0D25C822] DEFAULT (0) FOR [nFiReqDispatchVoucherType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiLa__0F0E1094] DEFAULT ((-1)) FOR [nFiLastFoldItemDetailID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiNu__100234CD] DEFAULT (0) FOR [nFiNumConjunction_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__10F65906] DEFAULT (0) FOR [lfFiBSPFrgnBaseFareAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__11EA7D3F] DEFAULT (0) FOR [lfFiBSPBaseFareAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__12DEA178] DEFAULT (0) FOR [lfFiBSPTaxDiscrepancy_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__13D2C5B1] DEFAULT (0) FOR [lfFiBSPPenaltyFeeAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiDo__14C6E9EA] DEFAULT (0) FOR [nFiRegion_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__15BB0E23] DEFAULT ('') FOR [strFiOpenTktNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__16AF325C] DEFAULT ('') FOR [strFiTktSource_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__17A35695] DEFAULT ('') FOR [strFiJourneyType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__18977ACE] DEFAULT ('') FOR [strFiTktType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__198B9F07] DEFAULT ('') FOR [strFiInterfaceNameRemark_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__nFiAT__1A7FC340] DEFAULT ((-1)) FOR [nFiATOLIssuedStatus_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1B73E779] DEFAULT ('') FOR [strFiFareSavingFareBase_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1C680BB2] DEFAULT ('') FOR [strFiPaxType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1D5C2FEB] DEFAULT ('') FOR [strFiActualCarrier_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1E505424] DEFAULT ('') FOR [strFiNetRemitType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__1F44785D] DEFAULT ('') FOR [strFiFareConstruction_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiB__20389C96] DEFAULT (0) FOR [lfFiBSPTotVATAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__bFiNe__212CC0CF] DEFAULT (0) FOR [bFiNetFare_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__2220E508] DEFAULT ('') FOR [strFiTourCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__23150941] DEFAULT ('') FOR [strFiSuppFOPInfo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFit__24092D7A] DEFAULT (0) FOR [lfFitBSPPublishedCommPerc_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__24FD51B3] DEFAULT ('') FOR [strFiBSPFareCurrCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__25F175EC] DEFAULT ('') FOR [strFiTktIssueIataNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__lfFiF__27D9BE5E] DEFAULT (0) FOR [lfFiFareOfferedSavingAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD CONSTRAINT [DF__Table2_T__strFi__251D4D44] DEFAULT ('') FOR [strFiFareOfferedSavingCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiDesc_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiBSPFareBuyDiscrepancyAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiNoPrintOnItin_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-1-1') FOR [dtFiStatusCodeChangeDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiNoPrintIfAllPricingsZeroCustAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiFareSavingFareBaseLow_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiFareSavingLowAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiBrochureCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiVendPayDepositNow_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiVendPayBalanceNow_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiBankBranchCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiOperatingAirlineCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiFarePassengerTypeCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiAssociatedFarePricingInfoID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiVerificationReq_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiLastVerifiedDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lFiLastVerifiedLevel_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lFiLastVerifiedWithCount_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiTktingStatusChangeDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingInformation_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingStatus_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiBSPFareSellDiscrepancyAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiBSPTaxBuyDiscrepancyAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingBatchID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiTktingBatchDateTime_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lIplPolicyLevelID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingDescription_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingDataVersion_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiSourceTktingSystem_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiOthPtsPmtCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiManualPtsEntry_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiEndorsement_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiOwnedByStaffCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiThirdPartyTrackingID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiAdditionalPrintingNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiOverridePrintingNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [lfFiCarbonOffsetWeightAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCancellationPolicyNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiPEProcessed_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiActingAsAgentFor_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [nFiOriginalBuyingBasis_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiIsOpenSegment_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [nFiCreateSource_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (' ') FOR [strFiBookingSourceInvoiceNo_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiGDSPaxTypeCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiNetFareGDSAccountCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiPOSID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT (0) FOR [bFiIsPOSEditable_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [lfFiCustExchRate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [lfFiCustFareSavingLowAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [lfFiCustFareSavingAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [lfFiCustFareOfferedSavingAmt_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [bFiCCItemPayableToBranch_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiExternalAccountingDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiSourceSystemBookResponseText_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [bFiIsConnection_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [bFiIsPEFoldLevelItem_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [nFiReqdEndorsedConjTktType_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [bFiEndorsedConjTktDetailIsManual_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiReqdEndorsedConjTktDetailText_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lFiLastTktingTemplateID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('1980-01-01') FOR [dtFiBookedDate_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingVerificationWarning_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingVerificationError_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiTktingError_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData00_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData01_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData02_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData03_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData04_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData05_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData06_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData07_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData08_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData09_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingDataNote_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData10_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData11_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData12_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData13_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData14_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData15_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData16_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData17_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData18_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiCustomerAccountingData19_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((0)) FOR [nFiFlightBasis_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lFiStartPointVendID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lFiEndPointVendID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ((-1)) FOR [lCtID_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiContractCode_FD]
    GO
    ALTER TABLE [DBA].[Table2_TB] ADD DEFAULT ('') FOR [strFiContractPeriodCode_FD]
    GO

    Hi,
    I just wanted to summarize the conclusions we got here, in case it helps someone else.
    For the case of the ALTER statement:
    First we analyzed the query performance and found that the query was I/O bound. A handful of useful scripts can be found at the links below. Some high PAGEIOLATCH wait times suggested memory pressure, so we increased the amount of memory dedicated to the server to 32GB. This single change was one of the most effective ones and reduced the query execution time by about 40%-50%. I guess SQL Server needs less paging to perform the operation when it can keep more pages in memory at the same time.
    We ran more tests tweaking some of the other variables mentioned, like the server MAXDOP, which in fact made the query slower the higher we set it. The initial server config had CPU affinity set to auto but I/O affinity masked to the first 4 CPUs, and we found that setting both to ALL/AUTO performed faster for some reason.
    After some analysis by Microsoft of the diagnostics/metrics data, the only interesting finding was that a couple of the storage volumes were performing slightly slower than the rest, causing a bit of a bottleneck. We recommended that the customer look into it with their storage team, but even with those fixed, we wouldn't expect the query to run much faster.
    No other tweaks have been found useful to speed up the ALTER statement. Basically it comes down to how fast your I/O subsystem is and how much SQL Server can cache in memory. No other suggestion has been made by Microsoft, and they've advised that any other reworked query (splitting the column addition and the constraints, for example) is going to perform worse than the plain and simple ALTER statement.
    On the NVARCHAR type problem:
    As suggested by Erland Sommarskog, SQL Server 2014 Enterprise Edition performed this operation about 40% faster than SQL Server 2008 R2 with the same hardware specs.
    At the moment upgrading the customer's infrastructure is not an option for us, so we don't have a proper solution to accomplish this in a workable time frame on SQL Server 2008.
    The strategy that looks like the best option is the one suggested by E. Sommarskog: bind our code's read access to a COALESCEd computed column while writing to the new converted NVARCHAR column, schedule a batch job in the background to migrate all the data to the new column over time, and finally remove the old column.
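    A minimal sketch of that strategy (the view name and batch size are illustrative; if your version rejects a computed column over the TEXT column, a view exposing the same COALESCE works for reads):
    -- 1. Add the new Unicode column alongside the old TEXT column (metadata-only)
    ALTER TABLE DBA.Table_TB ADD Comment_New_FD NVARCHAR(MAX) NULL;
    GO
    -- 2. Readers use COALESCE(new, old) so they see one consistent value during migration
    CREATE VIEW DBA.Table_Read_VW AS
    SELECT BranchCode_FD, FolderNo_FD, nRecordID_FD,
           COALESCE(Comment_New_FD, CAST(Comment_FD AS NVARCHAR(MAX))) AS Comment_FD
    FROM DBA.Table_TB;
    GO
    -- 3. Background job: copy in small batches to limit log growth and blocking
    WHILE 1 = 1
    BEGIN
        UPDATE TOP (10000) DBA.Table_TB
        SET Comment_New_FD = CAST(Comment_FD AS NVARCHAR(MAX))
        WHERE Comment_New_FD IS NULL AND Comment_FD IS NOT NULL;
        IF @@ROWCOUNT = 0 BREAK;
    END
    GO
    -- 4. Once no rows remain, drop the old TEXT column and rename/swap as needed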
    Thanks a lot everyone for your help.
    David.
    Useful links:
    http://www.sqlskills.com/blogs/paul/wait-statistics-or-please-tell-me-where-it-hurts/
    http://msdn.microsoft.com/en-us/library/ms189768.aspx
    http://rusanu.com/2014/02/24/how-to-analyse-sql-server-performance/

  • Performance Problem between Oracle 9i to Oracle 10g using Crystal XI

    We have a Crystal XI report using ODBC drivers, 14 tables, and one subreport. If we execute the report against an Oracle 9i database, it completes in about 12 seconds. If we execute it against an Oracle 10g database, it completes in about 35 seconds.
    Our technical Setup:
    Application server: Windows Server 2003, Running Crystal XI SP2 Runtime dlls with Oracle Client 10.01.00.02, .Net Framework 1.1, C# for Crystal Integration, Unmanaged C++ for app server environment calling into C# through a dynamically loaded mixed-mode C++ DLL.
    Database server is Oracle 10g
    What we have concluded:
    Reducing the number of tables to 1 reduces the execution time of the report from 180s to 13s. With 1 table and the subreport we get 30 seconds.
    We have done some database tracing and can see that Crystal Reports issues the following query when verifying the database, and it takes longer in 10g than in 9i.
    We have done some profiling in the application code. When we retarget the first table to the target database, it takes 20-30 times longer in 10g than in 9i. Retargeting the other tables takes about twice as long. The export to a PDF file takes about 4-5 times as long in 10g as in 9i.
    Oracle 10g no longer supports the /*+ RULE */ hint.
    Verify DB Query:
    select /*+ RULE */ *
    from (
      select /*+ RULE */ null table_qualifier, o1.owner table_owner, o1.object_name table_name,
             decode(o1.owner,
                    'SYS',    decode(o1.object_type, 'TABLE','SYSTEM TABLE', 'VIEW','SYSTEM VIEW', o1.object_type),
                    'SYSTEM', decode(o1.object_type, 'TABLE','SYSTEM TABLE', 'VIEW','SYSTEM VIEW', o1.object_type),
                    o1.object_type) table_type,
             null remarks
      from all_objects o1
      where o1.object_type in ('TABLE', 'VIEW')
      union
      select /*+ RULE */ null table_qualifier, s.owner table_owner, s.synonym_name table_name,
             'SYNONYM' table_type, null remarks
      from all_objects o3, all_synonyms s
      where o3.object_type in ('TABLE','VIEW')
        and s.table_owner = o3.owner
        and s.table_name = o3.object_name
      union
      select /*+ RULE */ null table_qualifier, s1.owner table_owner, s1.synonym_name table_name,
             'SYNONYM' table_type, null remarks
      from all_synonyms s1
      where s1.db_link is not null
    ) tables
    WHERE 1=1 AND TABLE_NAME='QCTRL_VESSEL' AND table_owner='QLM'
    ORDER BY 4, 2, 3
    SQL From Main Report:
    SELECT "QCODE_PRODUCT"."PROD_DESCR", "QCTRL_CONTACT"."CONTACT_FIRST_NM", "QCTRL_CONTACT"."CONTACT_LAST_NM", "QCTRL_MEAS_PT"."MP_NM", "QCTRL_ORG"."ORG_NM", "QCTRL_TKT"."SYS_TKT_NO", "QCTRL_TRK_BOL"."START_DT", "QCTRL_TRK_BOL"."END_DT", "QCTRL_TRK_BOL"."DESTINATION", "QCTRL_TRK_BOL"."LOAD_TEMP", "QCTRL_TRK_BOL"."LOAD_PCT", "QCTRL_TRK_BOL"."WEIGHT_OUT", "QCTRL_TRK_BOL"."WEIGHT_IN", "QCTRL_TRK_BOL"."WEIGHT_OUT_UOM_CD", "QCTRL_TRK_BOL"."WEIGHT_IN_UOM_CD", "QCTRL_TRK_BOL"."VAPOR_PRES", "QCTRL_TRK_BOL"."SPECIFIC_GRAV", "QCTRL_TRK_BOL"."PMO_NO", "QCTRL_TRK_BOL"."ODORIZED_VOL", "QARCH_SEC_USER"."SEC_USER_NM", "QCTRL_TKT"."DEM_CTR_NO", "QCTRL_BA_ENTITY"."BA_NM1", "QCTRL_BA_ENTITY_VW"."BA_NM1", "QCTRL_BA_ENTITY"."BA_ID", "QCTRL_TRK_BOL"."VOLUME", "QCTRL_TRK_BOL"."UOM_CD", "QXREF_BOL_PROD"."MOVEMENT_TYPE_CD", "QXREF_BOL_PROD"."BOL_DESCR", "QCTRL_TKT"."VOL", "QCTRL_TKT"."UOM_CD", "QCTRL_PMO"."LINE_UP_BEFORE", "QCTRL_PMO"."LINE_UP_AFTER", "QCODE_UOM"."UOM_DESCR", "QCTRL_ORG_VW"."ORG_NM"
    FROM (((((((((((("QLM"."QCTRL_TRK_BOL" "QCTRL_TRK_BOL" INNER JOIN "QLM"."QCTRL_PMO" "QCTRL_PMO" ON "QCTRL_TRK_BOL"."PMO_NO"="QCTRL_PMO"."PMO_NO") INNER JOIN "QLM"."QCTRL_MEAS_PT" "QCTRL_MEAS_PT" ON "QCTRL_TRK_BOL"."SUP_MP_ID"="QCTRL_MEAS_PT"."MP_ID") INNER JOIN "QLM"."QCTRL_TKT" "QCTRL_TKT" ON "QCTRL_TRK_BOL"."PMO_NO"="QCTRL_TKT"."PMO_NO") INNER JOIN "QLM"."QCTRL_CONTACT" "QCTRL_CONTACT" ON "QCTRL_TRK_BOL"."DRIVER_CONTACT_ID"="QCTRL_CONTACT"."CONTACT_ID") INNER JOIN "QFC_QLM"."QARCH_SEC_USER" "QARCH_SEC_USER" ON "QCTRL_TRK_BOL"."USER_ID"="QARCH_SEC_USER"."SEC_USER_ID") LEFT OUTER JOIN "QLM"."QCODE_UOM" "QCODE_UOM" ON "QCTRL_TRK_BOL"."ODORIZED_VOL_UOM_CD"="QCODE_UOM"."UOM_CD") INNER JOIN "QLM"."QCTRL_ORG_VW" "QCTRL_ORG_VW" ON "QCTRL_MEAS_PT"."ORG_ID"="QCTRL_ORG_VW"."ORG_ID") INNER JOIN "QLM"."QCTRL_BA_ENTITY" "QCTRL_BA_ENTITY" ON "QCTRL_TKT"."DEM_BA_ID"="QCTRL_BA_ENTITY"."BA_ID") INNER JOIN "QLM"."QCTRL_CTR_HDR" "QCTRL_CTR_HDR" ON "QCTRL_PMO"."DEM_CTR_NO"="QCTRL_CTR_HDR"."CTR_NO") INNER JOIN "QLM"."QCODE_PRODUCT" "QCODE_PRODUCT" ON "QCTRL_PMO"."PROD_CD"="QCODE_PRODUCT"."PROD_CD") INNER JOIN "QLM"."QCTRL_BA_ENTITY_VW" "QCTRL_BA_ENTITY_VW" ON "QCTRL_PMO"."VESSEL_BA_ID"="QCTRL_BA_ENTITY_VW"."BA_ID") LEFT OUTER JOIN "QLM"."QXREF_BOL_PROD" "QXREF_BOL_PROD" ON "QCTRL_PMO"."PROD_CD"="QXREF_BOL_PROD"."PURITY_PROD_CD") INNER JOIN "QLM"."QCTRL_ORG" "QCTRL_ORG" ON "QCTRL_CTR_HDR"."BUSINESS_UNIT_ORG_ID"="QCTRL_ORG"."ORG_ID"
    WHERE "QCTRL_TRK_BOL"."PMO_NO"=12345 AND "QXREF_BOL_PROD"."MOVEMENT_TYPE_CD"='TRK'
    SQL From Sub Report:
    SELECT "QXREF_BOL_VESSEL"."PMO_NO", "QXREF_BOL_VESSEL"."VESSEL_NO"
    FROM "QLM"."QXREF_BOL_VESSEL" "QXREF_BOL_VESSEL"
    WHERE "QXREF_BOL_VESSEL"."PMO_NO"=12345
    Does anyone have any suggestions on how we can improve the report performance with 10g?

    Hi Eric,
    Thanks for your response. The optimizer mode in our 9i database is CHOOSE. We changed the optimizer mode from ALL_ROWS to CHOOSE in 10g but it didn't make a difference.
    While researching Metalink I came across a couple of documents describing performance problems and issues with certain data-dictionary views in 10g. Apparently, the definitions of ALL_OBJECTS, ALL_ARGUMENTS and ALL_SYNONYMS changed in 10g, resulting in degraded performance when these views are queried. These are the same views that Crystal Reports is querying. We'll try the workaround suggested in these documents and see if it resolves the issue.
    Here are the Doc Ids, if you are interested:
    Note 377037.1
    Note 364822.1
    Thanks again for your response.
    Venu Boddu.

  • Problem calling a packaged function from an authorization scheme

    I wrote a package and body called pkg_auth with a function returning a boolean called is_authorized with 2 parameters (username and functional area).
    i.e.
    CREATE OR REPLACE PACKAGE pkg_auth
    AS
    FUNCTION is_authorized(p_username VARCHAR2, p_functional_area VARCHAR2) RETURN BOOLEAN;
    END;
    Additionally, I created a public synonym for it and granted execute access on it to APEX_PUBLIC_USER and HTMLDB_PUBLIC_USER.
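    For reference, that amounts to something like this (APP_OWNER is a placeholder for the schema owning the package):
    CREATE PUBLIC SYNONYM pkg_auth FOR app_owner.pkg_auth;
    GRANT EXECUTE ON app_owner.pkg_auth TO apex_public_user;
    GRANT EXECUTE ON app_owner.pkg_auth TO htmldb_public_user;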
    I then created an authorization scheme called 'access_control_db' with Scheme Type 'PL/SQL Function Returning Boolean', and placed the following in the Expression 1 field:
    pkg_auth.is_authorized(v('APP_USER'),'DATABASE')
    and the following in the error field:
    Not permitted to edit database information.
    However, when I apply this authorization scheme to a button, I receive the following error when I go to the page containing that button:
    ORA-06550: line 1, column 44: PLS-00221: 'IS_AUTHORIZED' is not a procedure or is undefined
    ORA-06550: line 1, column 44: PL/SQL: Statement ignored
    Error ERR-1082 Error in executing authorization scheme code.
    Any help would be most appreciated.

    Hello,
    Does putting 'return' in front, i.e.
    return pkg_auth.is_authorized(v('APP_USER'),'DATABASE')
    fix your problem?
    John.
    Blog: http://jes.blogs.shellprompt.net
    Work: http://www.apex-evangelists.com
    Author of Pro Application Express: http://tinyurl.com/3gu7cd
    REWARDS: Please remember to mark helpful or correct posts on the forum, not just for my answers but for everyone!

  • SQL report performance problem

    I have a SQL classic report in Apex 4.0.2 and database 11.2.0.2.0 with a performance problem.
    The report is based on a PL/SQL function returning a query. The query is based on a view and pl/sql functions. The Apex parsing schema has select grant on the view only, not the underlying objects.
    The generated query runs in 1-2 sec in SQL*Plus (logged in as the Apex parsing schema user), but takes many minutes in Apex. By monitoring the database sessions via TOAD, I have found that the explain plans in the Apex and SQL*Plus sessions are very different.
    The summary:
    In SQL*Plus: SELECT STATEMENT ALL_ROWS Cost: 3,695
    In Apex: SELECT STATEMENT ALL_ROWS Cost: 3,108,551
    What could be the cause of this?
    I found a blog and a Metalink note about different explain plans for different users. They suggested setting optimizer_secure_view_merging=FALSE, but that didn't help.

    Hmmm, it runs fast again in SQL Workshop. I didn't expect that, because both the application and SQL Workshop use SYS.DBMS_SYS_SQL to parse the query.
    Only the explain plan doesn't show anything.
    To add: I changed the report source to the query the PL/SQL function would generate, so the SELECTs are the same in SQL Workshop and in the application. Still, in the application it's horribly slow.
    So, Apex does do something different in the application compared to SQL Workshop.
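    One way to see what the application session is really doing, assuming you can locate the statement in V$SQL, is to pull the actual plan for that cursor (the LIKE filter and SQL_ID are placeholders):
    -- Find the slow statement's SQL_ID
    SELECT sql_id, child_number, plan_hash_value, elapsed_time
    FROM v$sql
    WHERE sql_text LIKE '%distinctive fragment of the report query%';
    -- Show the actual plan for that cursor; 'ALLSTATS LAST' adds row counts when
    -- STATISTICS_LEVEL=ALL or the query carries a /*+ gather_plan_statistics */ hint
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ALLSTATS LAST'));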
    Edited by: InoL on Aug 5, 2011 4:50 PM

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record, using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs (see the sketch after this list). We tried this outside PL/SQL, both with XMLFOREST and with XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable but interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
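    To make the explicit-cursor workaround in that bullet concrete, here is a minimal sketch (the legacy table and its columns are invented for illustration; the real version joins about 10 tables):
    declare
      v_doc xmltype;
    begin
      -- loop over distinct employees instead of converting all rows at once,
      -- which avoids the >800-record hang mentioned above
      for e in (select distinct emp_id from legacy_employees) loop
        select xmlelement("Employee",
                 xmlforest(l.emp_id as "Id", l.emp_name as "Name", l.hire_date as "HireDate"))
          into v_doc
          from legacy_employees l
         where l.emp_id = e.emp_id and rownum = 1;
        insert into records(ssn, xmlrec) values (e.emp_id, v_doc);
      end loop;
      commit;
    end;
    /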
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype
    ) xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
      <Root>
        <Id>123456789</Id>
        {for $e in $r/Element
         return
           <Element>
             <Subelement1>
               {$e/Subelement1/Code}
               <Description>
                 {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
               </Description>
             </Subelement1>
             <Subelement2>
               {$e/Subelement2/Code}
               <Description>
                 {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
               </Description>
             </Subelement2>
             <Subelement3>
               {$e/Subelement3/Code}
               <Description>
                 {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
               </Description>
             </Subelement3>
           </Element>}
      </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
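    For comparison, here is a hedged sketch (not from the thread) of shredding the repeating elements with XMLTABLE and joining CODES relationally, which at least hands the optimizer ordinary equijoins it can hash:
    select x.code1, c1.description desc1,
           x.code2, c2.description desc2,
           x.code3, c3.description desc3
    from records r,
         xmltable('/Root/Element' passing r.xmlrec
                  columns code1 varchar2(4) path 'Subelement1/Code',
                          code2 varchar2(4) path 'Subelement2/Code',
                          code3 varchar2(4) path 'Subelement3/Code') x,
         codes c1, codes c2, codes c3
    where r.ssn = '10000'
    and c1.code = x.code1
    and c2.code = x.code2
    and c3.code = x.code3;
    The result would still have to be reassembled into XML (e.g. with XMLELEMENT/XMLAGG), so this shows the join shape rather than a drop-in replacement for the XQuery above.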
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!), should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
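    One more hedged idea: since only the Code paths are ever looked up, restricting the XMLIndex with path subsetting should keep its path table much smaller (syntax as documented for XMLIndex path subsetting; not tested against this schema):
    drop index records_record_xml;
    create index records_record_xml on records(xmlrec)
    indextype is xdb.xmlindex
    parameters ('PATHS (INCLUDE (/Root/Element//Code))');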

  • Performance Problem in parsing large XML file (15MB)

    Hi,
    I'm trying to parse a large XML file (15 MB) and facing a clear performance problem. A simple XML validation using the following code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobfromFile(
    tempCLOB,
    targetFile,
    DBMS_LOB.getLength(targetFile),
    dest_offset,
    src_offset,
    nls_charset_id(CONSTANT_CHARSET),
    lang_context,
    conv_warning
    );
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    p_xml_document.schemaValidate();
    is taking 30 mins on an HP-UX machine (4GB RAM, 2 CPUs; Oracle version 9.2.0.4).
    Please explain what could be going wrong.
    Thanks In Advance,
    Vineet

    Thanks Mark,
    I'll open a TAR and also upload the schema and instance XML.
    If I'm not changing the subject too much :-) one more thing in continuation:
    If I skip the schema validation step and directly insert the instance document into a schema-linked XMLType table, what does Oracle XDB do in such a case?
    I'm getting a severe performance hit here too... the same file as above takes almost 40 mins to insert.
    code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobfromFile(
    tempCLOB,
    targetFile,
    DBMS_LOB.getLength(targetFile),
    dest_offset,
    src_offset,
    nls_charset_id(CONSTANT_CHARSET),
    lang_context,
    conv_warning
    );
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    -- p_xml_document.schemaValidate();
    insert into INCOMING_XML values(p_xml_document);
    Here table INCOMING_XML is :
    TABLE of SYS.XMLTYPE(XMLSchema "http://INCOMING_XML.xsd" Element "MatchingResponse")
      STORAGE Object-relational TYPE "XDBTYPE_MATCHING_RESPONSE"
    This table and type XDBTYPE_MATCHING_RESPONSE were created using the mapping provided in the registered XML Schema.
    Thanks,
    Vineet

  • Performance problem in Mapping Designer using UDF with external imports

    Hello,
    we have a big performance problem when developing (not when executing) graphical mappings, as soon as we use user-defined functions (UDFs) whose "Import" entries reference JAR files imported as "imported archives".
    For example, executing an invoice mapping with a somewhat larger test file in the Mapping Designer:
    - after opening, not in change mode: 6 seconds
    - after switching to change mode: 37 seconds (that's clear, now everything is compiled first)
    - after adding "com.seeburger.functions.permstore.CounterFactory;" into the "Import" field of one UDF, no other change: 227 seconds
    - after saving and submitting the change list (no longer in change mode): 6 seconds
    - after switching to change mode: 227 seconds
    So test execution (and also watching queues) slows down by more than three minutes in change mode when using UDFs with imports that reference external JAR files. It doesn't depend on the Seeburger functions (we also use XI for EDIFACT, so we use some Seeburger functions); I can reproduce it with any other JAR file used from a UDF.
    Using JDK-included classes like "java.text.NumberFormat" in the "Import" field doesn't slow down the testing.
    Can anybody reproduce this? We are using XI 3.0 SP19 on an AIX machine, so we also have to use the Java version from IBM.
    cu
    Manfred

    The problem was fixed by an upgrade of the JDK.

  • Performance Problem - MS SQL 2K and PreparedStatement

    Hi all
    I am using MS SQL 2k and used PreparedStatement to retrieve data. There is a strange and serious performance problem when the PreparedStatement contains "?" and the PreparedStatement.setX() methods are used to set its values. I have performed the test with the following code.
    for (int i = 0; i < 10; i++) {
        try {
            con = DBConnection.getInstance();
            statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = '" + cardNo + "'");
            // statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = ?");
            // statement.setString(1, cardNo);
            rs = statement.executeQuery();
            if (rs.next()) {
                // read the row
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            try {
                rs.close();
                statement.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
    Iterations   Time (ms)
    1            961
    10           1061
    200          1803
    for (int i = 0; i < 10; i++) {
        try {
            con = DBConnection.getInstance();
            // statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = '" + cardNo + "'");
            statement = con.prepareStatement("SELECT * FROM cardno WHERE car_no = ?");
            statement.setString(1, cardNo);
            rs = statement.executeQuery();
            if (rs.next()) {
                // read the row
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            try {
                rs.close();
                statement.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
    Iterations   Time (ms)
    1            1171
    10           2754
    100          18817
    200          36443
    The above tests were performed with the DataDirect JDBC 3.0 driver. The version that uses ? and setString takes much longer to execute, even though it is supposed to be faster because of precompilation of the statement.
    I have tried different drivers - the one provided by MS, the DataDirect and the Sprinta JDBC drivers - but all suffer from the same problem to different extents. So I am wondering whether MS SQL doesn't support precompiled statements, and whether, no matter what JDBC driver I use, I will still have this performance problem. If so, many O/R mappers cannot be used, because I believe most of them, if not all, use precompiled statements.
    Best regards
    Edmond

    Edmond,
    Most JDBC drivers for MS SQL (and I think this includes all the drivers you tested) use sp_executesql to execute PreparedStatements. This is a pretty good solution, as the driver doesn't have to keep any information about the PreparedStatement locally; the server takes care of all the precompiling and caching. And if the statement isn't already precompiled, this is also taken care of transparently by SQL Server.
    The problem with this approach is that all names in the query must be fully qualified. This means that the driver has to parse the query you are submitting and make all names fully qualified (by prepending a database name and schema). This is why creating a PreparedStatement takes so long with these drivers (and why it takes that long every time you create it, even though it's the same PreparedStatement).
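    To illustrate the mechanism (a sketch of what the driver might send, not captured traffic; the database and table names are invented), the statement ends up being submitted roughly like this, with every object name fully qualified:
    EXEC sp_executesql
         N'SELECT * FROM mydb.dbo.cardno WHERE car_no = @P1',
         N'@P1 varchar(20)',
         @P1 = '12345';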
    However, the speed advantage of PreparedStatements only becomes visible if you reuse the statement a lot of times.
    As for why the PreparedStatement with no placeholders is much faster: I think it is because of internal optimisations (maybe the statement is run as a plain statement?).
    As a conclusion, if you can reuse the same PreparedStatement, then the performance hit is not so high. Just ignore it. However, if the PreparedStatement is created each time and only used a few times, then you might have a performance issue. In this case I would recommend you try the jTDS driver ( http://jtds.sourceforge.net ), which uses a completely different approach: temporary stored procedures are created for PreparedStatements. This means that no parsing is done by the driver, and PreparedStatement caching is possible (i.e. the next time you prepare the same statement it will take much less time, as the previously submitted procedure will be reused).
    Alin.

  • Performance Problem  while signing into Application

    Hello
    Could someone please shed some light on which area I can look into for my performance problem? It's an E-Business Suite version 11.5.10.2 which was upgraded from 11.5.8.
    The problem: when the sign-in page is displayed, after the user name / password is entered it takes forever for the system to actually log the user in. Sometimes I have to click twice on the Sign In button.
    I have run purge sign-on audit / purge concurrent request/manager logs / gather schema stats, but it's still slow. Is there any way to check whether the middle tier is the bottleneck?
    Thanks
    Nini

    Can you check the profile option FND%diagnostic% to see whether it is enabled or not?
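    For example, something like this against the FND profile tables (a hedged sketch from memory; this only shows site-level values, and exact table/column usage may vary by release):
    select p.profile_option_name, v.profile_option_value
    from fnd_profile_options p, fnd_profile_option_values v
    where p.profile_option_id = v.profile_option_id
    and upper(p.profile_option_name) like 'FND%DIAGNOSTIC%';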
    fadi

  • How to diagnose performance problems?

    Hi all,
    I'm trying to run some basic performance tests of our app with Coherence, and I'm getting some pretty miserable results. I'm obviously missing something very basic about the configuration, but I can't figure out what it is.
    For our test app, I'm using 3 machines, all Mac Pros running OSX 10.5. Each JVM is configured with 1G RAM and is running in server mode. We're connected on a gigabit LAN (I ran the datagram test; we get about 108Mb/sec, but with high packet loss, around 10-12%). Two of the nodes are storage enabled; the other runs as a client performing an approximation of one of the operations we perform routinely.
    The test consists of running a single operation many times. We're developing a card processing system, so I create a bunch of cards (10,000 by default), each of which has about 10 related objects in different caches. The objects are serialised using POF. I write all these to the caches, then perform a single operation for each card. The operation consists of about 14 reads and 6 writes. I can do this about 6-6.5 times per second, for a total of ~120 Coherence ops/sec, which seems extremely low.
    During the test, the network tops out at about 1.2M/s, and the CPU at about 35% on the storage nodes and almost unnoticeable on the client. There's clearly something blocking somewhere, but I can't find what it is. The client uses a thread pool of 100 threads and has 100 threads assigned to its distributed service, and the storage nodes also have 100 threads assigned to their distributed services. I also tried giving more threads to the invocation service (since we use VersionedPut a lot), but that didn't seem to help either. I've also created the required indexes.
    Any help in diagnosing what the problem would be greatly appreciated. I've attached the config used for the two types of node below.
    Cheers,
    Colin
    Config for storage nodes:
    -Dtangosol.coherence.wka=10.1.1.2
    -Dtangosol.coherence.override=%HOME%/coherence.override.xml
    -Dtangosol.coherence.distributed.localstorage=true
    -Dtangosol.coherence.wkaport=8088
    -Dtangosol.coherence.distributed.threads=100
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>*</cache-name>
          <scheme-name>distributed-scheme</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <distributed-scheme>
          <scheme-name>distributed-scheme</scheme-name>
          <service-name>DistributedCache</service-name>
          <backing-map-scheme>
            <local-scheme>
              <scheme-ref>distributed-backing-map</scheme-ref>
            </local-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
          <serializer>
            <class-name>runtime.util.protobuf.ProtobufPofContext</class-name>
          </serializer>
          <backup-count>1</backup-count>
        </distributed-scheme>
        <local-scheme>
          <scheme-name>distributed-backing-map</scheme-name>
          <listener>
            <class-scheme>
              <class-name>service.data.manager.coherence.impl.utilfile.CoherenceBackingMapListener</class-name>
              <init-params>
                <init-param>
                  <param-type>com.tangosol.net.BackingMapManagerContext</param-type>
                  <param-value>{manager-context}</param-value>
                </init-param>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>{cache-name}</param-value>
                </init-param>
              </init-params>
            </class-scheme>
          </listener>
        </local-scheme>
        <invocation-scheme>
          <service-name>InvocationService</service-name>
          <autostart>true</autostart>
        </invocation-scheme>
      </caching-schemes>
    </cache-config>
    Config for client node:
    -Dtangosol.coherence.wka=10.1.1.2
    -Dtangosol.coherence.wkaport=8088
    -Dtangosol.coherence.distributed.localstorage=false
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>*</cache-name>
          <scheme-name>distributed-scheme</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <distributed-scheme>
          <scheme-name>distributed-scheme</scheme-name>
          <service-name>DistributedCache</service-name>
          <thread-count>100</thread-count>
          <backing-map-scheme>
            <local-scheme>
              <scheme-ref>distributed-backing-map</scheme-ref>
            </local-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
          <backup-count>1</backup-count>
        </distributed-scheme>
        <local-scheme>
          <scheme-name>distributed-backing-map</scheme-name>
        </local-scheme>
        <invocation-scheme>
          <service-name>InvocationService</service-name>
          <autostart>true</autostart>
        </invocation-scheme>
      </caching-schemes>
    </cache-config>

    Hi David,
    Thanks for the quick response. I'm currently using an executor with 100 threads on the client. I thought that might not be enough since the CPU on the client hardly moves at all, but bumping that up to 1000 didn't change things. If I run the same code with a single Coherence instance in my local machine I get around 110/sec, so I suspect that somewhere in the transport layer something doesn't have enough threads.
    The code is somewhat complicated because we have an abstraction layer on top of Coherence. I'll try to post the relevant parts below:
    The main loop is pretty straightforward. This is a Runnable that I post to the client executor:
    public void run() {
        Card card = read(dataManager, Card.class, "cardNumber", cardNumber);
        assert card != null;
        Group group = read(dataManager, Group.class, "id", card.getGroupId());
        assert group != null;
        Account account = read(dataManager, Account.class, "groupId", group.getId());
        assert account != null;
        User user = read(dataManager, User.class, "groupId", group.getId());
        assert user != null;
        ClientUser clientUser = read(dataManager, ClientUser.class, "userId", user.getId());
        HoldLog holdLog = createHoldLog(group, account);
        account.getCurrentHolds().add(holdLog);
        account.setUpdated(now());
        write(dataManager, HoldLog.class, holdLog);
        ClientUser clientUser2 = read(dataManager, ClientUser.class, "userId", user.getId());
        write(dataManager, Account.class, account);
        NetworkMessage networkMessage = createNetworkMessage(message, card, group, holdLog);
        write(dataManager, NetworkMessage.class, networkMessage);
    }
    read does a bit of juggling with our abstraction layer; basically it just gets a ValueExtractor for a named field and then creates an EqualsFilter with it:
    private static <V extends Identifiable> V read(DataManager dataManager,
                                                   Class<V> valueClass,
                                                   String keyField,
                                                   Object keyValue) {
        DataMap<Identifiable> dataMap = dataManager.getDataMap(valueClass.getSimpleName(), valueClass);
        FilterFactory filterFactory = dataManager.getFilterFactory();
        ValueExtractorFactory extractorFactory = dataManager.getValueExtractorFactory();
        Filter<Identifiable> filter = filterFactory.equals(extractorFactory.fieldExtractor(valueClass, keyField), keyValue);
        Set<Map.Entry<GUID, Identifiable>> entries = dataMap.entrySet(filter);
        if (entries.isEmpty()) {
            return null;
        }
        assert entries.size() == 1 : "Expected single entry, found " + entries.size() + " for " + valueClass.getSimpleName();
        return valueClass.cast(entries.iterator().next().getValue());
    }
    write is trivial:
    private static <V extends Identifiable> void write(DataManager dataManager, Class<V> valueClass, V value) {
        DataMap<Identifiable> dataMap = dataManager.getDataMap(valueClass.getSimpleName(), valueClass);
        dataMap.put(value.getId(), value);
    }
    And entrySet directly wraps the Coherence call:
    public Set<Entry<GUID, V>> entrySet(Filter<V> filter) {
        validateFilter(filter);
        return wrapEntrySet(dataMap.entrySet(((CoherenceFilterAdapter<V>) filter).getCoherenceFilter()));
    }
    This just returns a Map.EntrySet implementation that decodes the binary encoded objects.
    I'm not really sure how much more code to post - what I have is really just a thin layer over Coherence, and I'm reasonably sure that the main problem isn't there since the CPU hardly moves on the client. I suspect thread starvation at some point, but I'm not sure where to look.
    Thanks,
    Colin

  • Numbers Import and Load Performance Problems

    Some initial results of converting a single 1.9MB Excel spreadsheet to Numbers:
    _Results using Numbers v1.0_
    Import 1.9MB Excel spreadsheet into Numbers: 7 minutes 3.5 seconds
    Load (saved) Numbers spreadsheet (2.4MB): 5 minutes 11.7 seconds
    _Results using Numbers v1.0.1_
    Import 1.9MB Excel spreadsheet into Numbers: 6 minutes 36.1 seconds
    Load (saved) Numbers spreadsheet (2.4MB): 5 minutes 5.8 seconds
    _Comparison to Excel_
    Excel loads the original 1.9MB spreadsheet in 4.2 seconds.
    Summary
    Numbers v1.0 and v1.0.1 exhibit severe performance problems with loading (of its own files) and importing of Excel V.x files.

    Hello
    It seems that you missed a detail.
    When a Numbers document is 1.9MB on disk, it may be a 7 or 8 MB file to load.
    A Numbers document is not a file but a package, which is a disguised folder.
    The document itself is described in an extremely verbose XML file stored in a gzip archive.
    Opening such a document starts with an unpack sequence, which is a fast one (except maybe if the space available on the drive is short).
    The unpacked file may easily be 10 times larger than the packed one.
    Just an example: the xml.gz file containing the report of my bank operations for 2007 is a 300KB one, but the expanded one, the one which Numbers must read, is a 4MB one, yes, 13.3 times the original.
    And loading it is not sufficient; this huge file must be "interpreted" to build the display.
    As it is very long, Apple treats it as the TRUE description of the document, and so each time it must display something it works like the interpreters that old users like me knew when they used the Basic available on Apple // machines.
    Adding a supplementary stage would have added time to the opening sequence but would have made use of the document faster.
    Of course, it would also have added a supplementary stage during the save process.
    I hope that they will adopt this scheme, but of course I don't know if they will do that.
    Of course, the problem is quite the same when we import a document from Excel or from AppleWorks.
    The app reads the original, which is stored in a compact shape, then deciphers it to create the XML code. Optimisation would perhaps reduce these tasks a bit, but it will continue to be a time-consuming process.
    Yvan KOENIG (from FRANCE dimanche 27 janvier 2008 16:46:12)

  • Oracle performance problem -

    We are facing some critical performance problems with one of the tables. I have a table with the schema given below. When there are around 2 lakh entries in this table, a select count(*) takes around 20 seconds to return the count, whereas any other table in the same database returns the count in less than 2 seconds for around 1 lakh entries. I am not able to figure out where the performance bottleneck is.
    Configuration:
    Oracle 8.0.5 on Windows NT 4.0
    USER_NAME       NOT NULL   VARCHAR2(30)
    TASK_ID                    NUMBER          -- unique
    TASK_NAME       NOT NULL   VARCHAR2(100)
    TERMINAL_NAME              VARCHAR2(30)
    TASK_DATE                  DATE
    TASK_STATUS     NOT NULL   NUMBER
    PARENT_ID                  NUMBER
    NE_NAME         NOT NULL   VARCHAR2(30)
    ENM_JOBID                  VARCHAR2(30)
    CMD_CONTENT                VARCHAR2(4000)
    CMD_RESPONSE               VARCHAR2(4000)
    RESPONSE_TIME              DATE
    TASK_TYPE                  NUMBER
    SCHEDULE_TIME              DATE
    All the tables are present in the same tablespace, so I guess it could not be because of any disk access differences between tables. Have you guys faced any such problems? Could this be because of the huge CMD_RESPONSE and CMD_CONTENT fields?
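    (A hedged suggestion in case the wide CMD_* columns are the culprit: count through an index on a NOT NULL column, so Oracle can scan the small index instead of the wide table rows, and check for row chaining. The table name below is invented, and whether the optimizer actually uses the index depends on your version and statistics.)
    create index task_status_idx on task_tbl(task_status);
    select count(task_status) from task_tbl;
    analyze table task_tbl compute statistics;
    select chain_cnt, avg_row_len, blocks from user_tables where table_name = 'TASK_TBL';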

    Thanks a lot.
    I would like to know how I can optimise performance when having longer records. Will increasing the block size help? If so, is there any other disadvantage to increasing the block size?
    My actual requirement is to read all the records in the table. So a static cursor is created on the server and I do a block fetch (SQLFetchScroll). I use ODBC for this purpose. The first fetch takes a lot of time, around 50 seconds or so for 2 lakh records. Any ideas how to optimise this?
    Best Regards,
    Vignesh
