LDAP Queries Slow on Virtual DCs with W2K12 over Hyper-V W2K12 R2

Hello,
We have two virtual machine DCs. After upgrading the hosts from Hyper-V 2012 to Hyper-V 2012 R2, LDAP queries are very slow on both virtual DCs.
Has anyone run into the same problem?
Thanks,
Alexandre Smialoski

No. But what LDAP query are you running?  
Do you have any network/connectivity issues?
Santhosh Sivarajan | Houston, TX | www.sivarajan.com
ITIL,MCITP,MCTS,MCSE (W2K3/W2K/NT4),MCSA(W2K3/W2K/MSG),Network+,CCNA
Windows Server 2012 Book - Migrating from 2008 to Windows Server 2012
This posting is provided AS IS with no warranties, and confers no rights.

Similar Messages

  • Query ID in Virtual Cube with services-Function module

    Hi,
    I am using virtual cube with services linked to a function module.
    The function module has fixed parameters (such as the InfoProvider name). None of these parameters contains query information such as the query ID or query name.
    Does anyone know how to determine which query executed this function module?
    Best Regards,
    Anil

  • Update query is slow with merge replication

    Hello friend,
    I have a database with merge replication enabled.
    The problem is that an update query is taking a long time.
    But when I disable the merge triggers, it updates quickly.
    I would really appreciate your quick response.
    Thanks.

    Hi Manjula,
    According to your description, the update query is slow after configuring merge replication. Here are some suggestions for troubleshooting the issue.
    1. Perform regular index maintenance (re-index and update statistics) on the following replication system tables; see the T-SQL sketch below.
        •MSmerge_contents
        •MSmerge_genhistory
        •MSmerge_tombstone
        •MSmerge_current_partition_mappings
        •MSmerge_past_partition_mappings
    2. Make sure that the tables involved in the query have suitable indexes, and re-index and update statistics for these tables as well. Additionally, you can use the Database Engine Tuning Advisor to tune databases for better query performance.
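    For item 1, a minimal T-SQL sketch of that maintenance (run it in the publication database during a maintenance window; options such as ONLINE or fill factor should be adjusted for your environment):
        -- Rebuild indexes on the merge replication system tables
        ALTER INDEX ALL ON dbo.MSmerge_contents REBUILD;
        ALTER INDEX ALL ON dbo.MSmerge_genhistory REBUILD;
        ALTER INDEX ALL ON dbo.MSmerge_tombstone REBUILD;
        ALTER INDEX ALL ON dbo.MSmerge_current_partition_mappings REBUILD;
        ALTER INDEX ALL ON dbo.MSmerge_past_partition_mappings REBUILD;
        -- Refresh statistics on the most heavily used merge tables
        UPDATE STATISTICS dbo.MSmerge_contents WITH FULLSCAN;
        UPDATE STATISTICS dbo.MSmerge_genhistory WITH FULLSCAN;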
    Here are some related articles for your reference.
    http://blogs.msdn.com/b/chrissk/archive/2010/02/01/sql-server-merge-replication-best-practices.aspx
    http://technet.microsoft.com/en-us/library/ms177500(v=sql.105).aspx
    Thanks,
    Lydia Zhang

  • Is it possible to retrieve data from an Oracle db with an LDAP query?

    Our application uses an LDAP query to retrieve data from Microsoft Active Directory. Is it also possible to retrieve data from an Oracle database with an LDAP query?

    If you have Oracle Internet Directory, ldapsearch will retrieve data that is physically stored in the database. But something like select * from emp where ename='SCOTT' is probably not possible.
    At least I have never heard of a product that translates LDAP queries into SQL queries. But feel free to write your own in Perl :-)

  • Query Designer slows down after working some time with it

    Hi all,
    the new BEx Query Designer slows down after working with it for some time. The longer it remains open, the slower it gets. Formula editing in particular becomes extremely slow.
    Has anyone else encountered the same problem? Do you have an idea how to fix this? To me it seems as if the Designer allocates more and more RAM and does not free it up.
    My version: BI AddOn 7.X, Support Package 13, Revision 467
    Kind regards,
    Philipp

    I have seen a similar problem on one of my devices, the Samsung A-920. Every time the system popped up the 'Will you allow Network Access' screen, the input from all keypresses from then on was strangely delayed. It looked like the problem was connected with the switching between my app and the system dialog form. I tried for many long hours/days to fix this, but just ended up hacking my phone to remove the security questions. After removing the security questions my problem went away.
    I don't know if it's an option in your application, but is it possible to do everything using just one Canvas, and not switch between displayables? You may want to do an experiment using a single displayable Canvas, and just change how it draws. I know this will make user input much more complicated, but you may be able to avoid the input delays.
    In my case, I think the device wasn't properly releasing/unregistering the input handling from the previous dialogs, so all keypresses still went through the non-current network-security dialog before reaching my app.

  • Query of query - running slower on 64 bit CF than 32 bit CF

    Greetings...
    I am seeing behavior where pages that use query-of-query run slower on 64-bit ColdFusion 9.01 than on 32-bit ColdFusion 9.01.
    My server specs are: dual-processor virtual machine, 4 GB RAM, Windows 2008 Datacenter Server R2 64-bit, ColdFusion 9.01. Note that ColdFusion is literally "straight out of the box" and is using all default settings; the only thing I configured in CF is a single datasource.
    The script I am using to benchmark this runs a query that returns 20,000 rows with the fields id, firstname, lastname, email, city, datecreated. I then loop through all 20,000 records, and for each record I do a query-of-query (on the same master query) to find any other record whose lastname matches that of the record I'm currently on. Note that I'm only interested in using this process for comparative benchmarking purposes, and I know that the process could be written more efficiently.
    Here are my observed execution times for both 64-bit and 32-bit Coldfusion (in seconds) on the same machine.
    64 bit CF 9.01: 63,49,52,52,52,48,50,49,54 (avg=52 seconds)
    32 bit CF 9.01: 47,45,43,43,45,41,44,42,46 (avg=44 seconds)
    It appears from this that 64-bit CF performs worse than 32-bit CF when doing query-of-query operations. Has anyone made similar observations, and is there any way I can tune the environment to improve 64 bit performance?
    Thanks for any help you can provide!
    By the way, here's the code that is generating these results:
    <!--- Allrecs query returns 20000 rows --->
    <CFQUERY NAME="ALLRECS" DATASOURCE="MyDsn">
        SELECT * FROM MyTBL
    </CFQUERY>
    <CFLOOP QUERY="ALLRECS">
        <CFQUERY NAME="SAMELASTNAME" DBTYPE="QUERY">
            SELECT * FROM ALLRECS
            WHERE LN=<CFQUERYPARAM VALUE="#ALLRECS.LN#" CFSQLTYPE="CF_SQL_VARCHAR">
            AND ID<><CFQUERYPARAM VALUE="#AllRecs.ID#" CFSQLTYPE="CF_SQL_INTEGER">
        </CFQUERY>
        <CFIF SameLastName.RecordCount GT 20>
            #AllRecs.LN#, #AllRecs.FN# : #SameLastName.RecordCount# other records with same lastname<BR>
        </CFIF>
    </CFLOOP>
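    As an aside (not part of the original benchmark), the per-record lookup can also be expressed as a single pass over the source table with a window function, which may be a useful database-side baseline when comparing the two CF builds. The column names LN, FN and ID follow the benchmark code above, and MyTBL is the poster's table:
        -- Count, for each row, how many other rows share the same last name
        SELECT t.ID,
               t.LN,
               t.FN,
               COUNT(*) OVER (PARTITION BY t.LN) - 1 AS same_lastname_count
        FROM MyTBL t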

    BoBear2681 wrote:
    ..follow-up: ..Thanks for the follow-up. I'll be interested to hear the progress (or otherwise, as the case may be).
    As an aside. I got sick of trying to deal with Clip because it could only handle very small Clip sizes. AFAIR it was 1 second of 44.1 KHz stereo. From that point, I developed BigClip.
    Unfortunately BigClip as it stands is even less able to fulfil your functional requirement than Clip, in that only one BigClip can be playing at a time. Further, it can be blocked by other sound applications (e.g. VLC Media Player, Flash in a web page..) or vice-versa.

  • Regarding Virtual InfoCube with Services

    Hi All,
    I have a general question regarding two standard InfoCubes.
    We have two InfoCubes, FIGL_C01 and FIGL_C01, based on which we have two Virtual InfoCubes with services, FIGL_VC1 and FIGL_VC2.
    What is the necessity of maintaining these InfoProviders when we already have the base InfoCubes on which we can build the queries?
    What is the basic use of a Virtual InfoCube with Services?
    Regards.

    Hi,
    SAP help says:
    "At query runtime, the service of the virtual InfoCubes uses the data from the basic InfoCubes to determine the balance-dependent location of the financial statement items (contra items) in the financial statement version and presents the result in the query. From the technical point of view, the service determines the item indicator in the key for the financial statement items depending on the balance in the respective node in the financial statement structure."
    For further information about why we need to find the item indicator in the financial statement, refer to:
    http://help.sap.com/saphelp_nw04/helpdata/en/04/7b95fb42b0f94aba334c0890dbbda4/frameset.htm
    Note 673564 describes exactly why the virtual InfoCubes 0FIGL_VC1 and 0FIGL_VC2 were developed (because BW cannot handle the FI financial statement hierarchies).
    With rgds,
    Anil Kumar Sharma .P

  • How to create Virtual InfoProvider with Services - Virtual Characteristic

    Hi all,
    I need to create a virtual InfoProvider that also fills a virtual characteristic, in order to display custom characteristics and calculated values based on user selection.
    Basically, what I want to do is send a variable to a virtual characteristic equal to the values I want to fill in the characteristic. So, for instance, if my base cube contains the InfoObjects Brand, Product Line, Region and Country, I want to have a new InfoObject that has no data but fills itself with the values of the InfoObject I specify at query time on the virtual cube.
    The reason for this is that I am trying to create a WAD using the delta chart (waterfall graph) that will show the difference between the plan and actual totals for a specific characteristic in Profitability Analysis.
    I have a document that explains the situation. Send me your mail address and I will send you the document.
    Kind Regards

    We found a way to build the structure for this.
    Regards

  • Virtual infocube with services - division of suppliers in categories

    Hello,
    I have a problem and the idea to solve it with a virtual InfoCube with services, but unfortunately I don't know if it's really possible to solve it like this.
    The scenario is:
    I have suppliers, turnovers, goods (which have been delivered) and different quality deficiencies. Depending on the turnover with each supplier, they are ranked. For their position in the ranking they get points. From these starting points one has to subtract points for the different quality deficiencies. Depending on this end sum, the suppliers are divided into categories, and this division should be shown in the query.
    But the suppliers can vary in each run of the query depending on the selection in the query. And the turnover of each supplier, which depends on the selected goods, can vary too. So the division into the different categories depends on the selection and is dynamic. One doesn't know it beforehand, and so it isn't possible to save the category of each supplier in an InfoProvider.
    Now I want to know if it's possible to solve this with a virtual InfoCube with services.
    I hope there's somebody who can help me. Thank you.
    Susanne

    Hello Susanne,
    first of all, yes, it can be solved with VirtualProviders. But I wouldn't expect it to have good performance; it depends on the number of data records we are talking about.
    When you create a VirtualProvider you should make sure that the RFC Packing switch is flagged on. You can then get the selections in the tables parameter 'selection' with the structure bapi6200sl. Now you need to read the raw data for your selection: either use the function module RSDRI_INFOPROV_READ or write the data into an ODS to simplify the selection.
    Based on the raw data you do your calculation and write the result back to the 'data' table parameter.
    For details of the implementation of the VirtualProvider and the function module you can check the SDN. These topics are also covered in my book "ABAP Development in SAP BW - User Exits and BAdIs", which was recently translated into English. You can find both the German and English versions at www.sap-hefte.de (http://www.sap-hefte.de/katalog/hefte/titel/gp/titelID-1256) or www.sappress.com (http://www.sappress.com/product.cfm?account=&product=H1948)
    Best regards

  • Virtual InfoCube with Services - function module parameters documentation ?

    Hello,
    I have been trying to use a Virtual Infocube with Services.
    I have seen most of the posts in SDN, and read the documentation in http://help.sap.com/saphelp_nw04/helpdata/en/8d/2b4e3cb7f4d83ee10000000a114084/frameset.htm
    I did not manage to find a precise description of the import parameters of Variant 2.
    In particular, what is the meaning of the
    i_tx_rangetab TYPE rsdri_tx_rangetab parameter? I read in the code of RS_BCT_FIGL_DATA_GET that it has to do with query columns. In the tests I did, this table is always empty.
    Also, i_th_sfc gives you the list of characteristics used in the query, but it does not tell you whether these are in the rows, free characteristics or filter. Is there a way of knowing that?
    Claudio Ciardelli

    Hi Claudio,
    I never implemented a Virtual InfoCube with services with a FM, but I know there are a couple of How-To documents named:
    - How to Reporting from External Data via Virtual InfoProvider
    - How to Implement a Virtual InfoCube with Services
    both with some code samples: did you read them?
    Hope it helps
    GFV

  • Virtual provider with services

    Hi experts,
    we have the same query on a virtual cube (with services) and a BW local cube. Both the local cube and the virtual cube contain the same data, but they show different query results:
    the virtual provider query is not displaying all the values in the master data table.
    For example, we have 10 brands under one company code, but the virtual cube query is only showing 5 of them. I would like to see all the values without any restrictions, so can you please guide me?
    When I check from transaction LISTCUBE, it displays all the values without any restriction.
    Thanks for your time.

    Hi Anupama,
    My question is: if you have the same data in both cubes (virtual and basic), you could create the query on your basic cube only. But that is not the answer.
    Are you using the same conditions on both cubes, i.e. in the Query Designer?
    The virtual cube (with services) is generally built on a function module; is this module supplying the complete master data properly? Check it.
    Hope this will help
    HARI GUPTA

  • How to load data from a virtual cube with services

    Hello all,
    we have set up a virtual cube with services and created a BEx report to get the data from an external database. That works fine. The question is now:
    Is it somehow possible to "load" the data from this virtual cube with services (I know there is not really any data in it...) into another InfoCube?
    If that is possible, can you please give me some guidance on how to set up this scenario?
    Thanks in advance
    Jürgen

    Hi:
    I don't have the system in front of me, so try this.
    I know it works for a RemoteCube.
    Right-click on the cube and select Generate Export DataSource.
    If you can do this successfully, then go to the Source Systems tab and select the BW system. Here, right-click and select Replicate DataSources.
    Next, go to InfoSources and click Refresh. Take the name of the virtual cube, add 8 as a prefix, and search for the InfoSource.
    If you can see it, that means you can load data from this cube to anywhere you want, just like you do from an ODS.
    ELSE:
    Try and see if you can create an InfoSpoke on the virtual cube (transaction RSBO).
    Here, you can load to a database table and then, from this table, you can create a datasource, etc.
    ELSE:
    Create a query and save it as a CSV file and load it anywhere you want. This is more difficult.
    Good luck
    Ram Chamarthy

  • Virtual cube with services read from Multicube?

    Hello All.
    We have logically partitioned the Balance Sheet cube 0FIGL_C01 into 3 new cubes. Now I found that I also had to partition the virtual cube 0FIGL_VC1 into 3 new virtual cubes (I have copied the standard function module to change the data origin).
    Then I have included the 4 virtual cubes in one single MultiCube.
    When I run a query on the MultiCube, I get a short dump.
    My questions are:
    1. Has anyone done the same logical partitioning for the balance sheet before?
    2. Is it possible to use a MultiCube as the source of data for a virtual cube with services, using the balance sheet virtual cube function module?
    Thank you all for your help.
    Regards,
    Alfonso.

    A basic cube can only be a source of data for a virtual cube or for a MultiCube.
    All cubes should have at least one characteristic in common when you add them to a MultiCube.
    Check this.

  • Virtual Cube with Services - Debugging

    I want to debug the function module assigned to a Virtual Cube with services. Using transaction RSRT, I can access the FM using debugger when the query is initially called, by selecting Debug options/Default Breakpoints/VirtualCube. I want to debug subsequent navigation steps on the query. How do I access the debugger for subsequent navigation steps?

    Hi Maverick,
    You can extract the attributes of the characteristic, but you need to configure this depending on the function module you are using to read the data from the basic InfoCubes.
    I suppose your FM uses RSDRI_INFOPROV_READ (check your FM to confirm this) to get the data from the basic InfoCube. If that is the case, you need to configure the interface parameters I_TH_SFC and I_T_RANGE to get the attributes. Hope it helps, and if you need more, let me know. If you are using other function modules you can follow the same logic.
    Regards,
    Ramana
    Message was edited by: Ramana

  • SQL Query very slow.

    I have a table which has 40 million rows in it. Of course, it is partitioned!
    begin
    pk_cm_entity_context.set_entity_in_context(1);
    end;
    SELECT COUNT(1) FROM XFACE_ADDL_DETAILS_TXNLOG;
    alter table XFACE_ADDL_DETAILS_TXNLOG rename to XFACE_ADDLDTS_TXNLOG_PTPART;
    SELECT COUNT(1) FROM XFACE_ADDLDTS_TXNLOG_PTPART;
    -- Create table
    create table XFACE_ADDL_DETAILS_TXNLOG
    (
    REF_TXN_NO CHAR(40),
    REF_USR_NO CHAR(40),
    REF_KEY_NO VARCHAR2(50),
    REF_TXN_NO_ORG CHAR(40),
    REF_USR_NO_ORG CHAR(40),
    RECON_CODE VARCHAR2(25),
    COD_TASK_DERIVED VARCHAR2(5),
    COD_CHNL_ID VARCHAR2(6),
    COD_SERVICE_ID VARCHAR2(10),
    COD_USER_ID VARCHAR2(30),
    COD_AUTH_ID VARCHAR2(30),
    COD_ACCT_NO CHAR(22),
    TYP_ACCT_NO VARCHAR2(4),
    COD_SUB_ACCT_NO CHAR(16),
    COD_DEP_NO NUMBER(5),
    AMOUNT NUMBER(15,2),
    COD_CCY VARCHAR2(3),
    DAT_POST DATE,
    DAT_VALUE DATE,
    TXT_TXN_NARRATIVE VARCHAR2(60),
    DATE_CHEQUE_ISSUE DATE,
    TXN_BUSINESS_TYPE VARCHAR2(10),
    CARD_NO CHAR(20),
    INVENTORY_CODE CHAR(10),
    INVENTORY_NO CHAR(20),
    CARD_PASSBOOK_NO CHAR(30),
    COD_CASH_ANALYSIS CHAR(20),
    BANK_INFORMATION_NO CHAR(8),
    BATCH_NO CHAR(10),
    SUMMARY VARCHAR2(60),
    MAIN_IC_TYPE CHAR(1),
    MAIN_IC_NO CHAR(48),
    MAIN_IC_NAME CHAR(64),
    MAIN_IC_CHECK_RETURN_CODE CHAR(1),
    DEPUTY_IC_TYPE CHAR(1),
    DEPUTY_IC_NO CHAR(48),
    DEPUTY_NAME CHAR(64),
    DEPUTY_IC_CHECK_RETURN_CODE CHAR(1),
    ACCOUNT_PROPERTY CHAR(4),
    CHEQUE_NO CHAR(20),
    COD_EXT_TASK CHAR(10),
    COD_MODULE CHAR(4),
    ACC_PURPOSE_CODE VARCHAR2(15),
    NATIONALITY CHAR(3),
    CUSTOMER_NAME CHAR(192),
    COD_INCOME_EXPENSE CHAR(6),
    COD_EXT_BRANCH CHAR(6),
    COD_ACCT_TITLE CHAR(192),
    FLG_CA_TT CHAR(1),
    DAT_EXT_LOCAL DATE,
    ACCT_OWNER_VALID_RESULT CHAR(1),
    FLG_DR_CR CHAR(1),
    FLG_ONLINE_UPLOAD CHAR(1),
    FLG_STMT_DISPLAY CHAR(1),
    COD_TXN_TYPE NUMBER(1),
    DAT_TS_TXN TIMESTAMP(6),
    LC_BG_GUARANTEE_NO VARCHAR2(20),
    COD_OTHER_ACCT_NO CHAR(22),
    COD_MOD_OTHER_ACCT_NO CHAR(4),
    COD_CC_BRN_SUB_ACCT NUMBER(5),
    COD_CC_BRN_OTHR_ACCT NUMBER(5),
    COD_ENTITY_VPD NUMBER(5) default NVL(sys_context('CLIENTCONTEXT','entity_code'),11),
    COD_EXT_TASK_REV VARCHAR2(10)
    )
    partition by hash (REF_TXN_NO)
    PARTITIONS 128
    store in (FCHDATA1,FCHDATA2,FCHDATA3,FCHDATA4, FCHDATA5, FCHDATA6, FCHDATA7, FCHDATA8);
    insert /*+APPEND NOLOGGING */ into XFACE_ADDL_DETAILS_TXNLOG
    select /*+PARALLEL */ * from XFACE_ADDLDTS_TXNLOG_PTPART;
    -- Add comments to the table
    comment on table XFACE_ADDL_DETAILS_TXNLOG
    is ' Additional Data log table ';
    -- Add comments to the columns
    comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_TXN_NO
    is 'Transaction Reference Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_USR_NO
    is 'User Reference Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_KEY_NO
    is 'Unique key to identify a leg of the transaction';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_TXN_NO_ORG
    is 'Original Transaction Reference Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_USR_NO_ORG
    is 'Original Transaction User Reference Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.RECON_CODE
    is 'Reconciliation of transactions in future';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_TASK_DERIVED
    is 'Transaction mnemonic for the request';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CHNL_ID
    is 'Channel ID';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_SERVICE_ID
    is 'Service ID';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_USER_ID
    is 'User ID';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_AUTH_ID
    is 'Authorizer ID';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_ACCT_NO
    is 'It can be Card number or MCA or GL or CASH GL';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.TYP_ACCT_NO
    is 'Type of input (Valid values CARD, MCA, GL, CASH, LN)';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_SUB_ACCT_NO
    is 'MC Sub Account Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_DEP_NO
    is 'Deposit Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.AMOUNT
    is 'Transaction Amount';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CCY
    is 'Currency Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DAT_POST
    is 'Posting Date of the transaction';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DAT_VALUE
    is 'Value Date of the transaction';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.TXT_TXN_NARRATIVE
    is 'Text Transaction Narrative';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DATE_CHEQUE_ISSUE
    is 'Date of Issue of Cheque';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.TXN_BUSINESS_TYPE
    is 'Transaction Business Type';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.CARD_NO
    is 'Card Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.INVENTORY_CODE
    is 'Inventory Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.INVENTORY_NO
    is 'Inventory Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.CARD_PASSBOOK_NO
    is 'Card Passbook Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CASH_ANALYSIS
    is 'Cash Analysis Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.BANK_INFORMATION_NO
    is 'Bank Information Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.BATCH_NO
    is 'Batch Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.SUMMARY
    is 'Summary';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.MAIN_IC_TYPE
    is 'IC Type';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.MAIN_IC_NO
    is 'IC Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.MAIN_IC_NAME
    is 'IC Name';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.MAIN_IC_CHECK_RETURN_CODE
    is 'IC Check Return Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DEPUTY_IC_TYPE
    is 'Deputy IC Type';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DEPUTY_IC_NO
    is 'Deputy IC Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DEPUTY_NAME
    is 'Deputy Name';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DEPUTY_IC_CHECK_RETURN_CODE
    is 'Deputy IC Check Return Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.ACCOUNT_PROPERTY
    is 'Account Property';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.CHEQUE_NO
    is 'Cheque Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_EXT_TASK
    is 'External Task Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_MODULE
    is 'Module Code - CH, TD, RD , LN, CASH, GL';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.ACC_PURPOSE_CODE
    is 'Account Purpose Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.NATIONALITY
    is 'Nationality';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.CUSTOMER_NAME
    is 'Customer Name';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_INCOME_EXPENSE
    is 'Income Expense Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_EXT_BRANCH
    is 'External Branch Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_ACCT_TITLE
    is 'Account Title Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.FLG_CA_TT
    is 'Cash or Funds Transfer flag';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DAT_EXT_LOCAL
    is 'Local Date';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.ACCT_OWNER_VALID_RESULT
    is 'Account Owner Valid Result';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.FLG_DR_CR
    is 'Flag Debit Credit - D, C.';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.FLG_ONLINE_UPLOAD
    is 'Flag Online Upload - O, U.';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.FLG_STMT_DISPLAY
    is 'Statement Display Flag - Y/N, Y(Normal Reversal), N(Correction Reversal)';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_TXN_TYPE
    is 'To denote the kind of transaction:
    1 - Cash Credit Transaction
    2 - Cash Debit Transaction
    3 - Funds Transfer Credit Transaction
    4 - Funds Transfer Debit Transaction';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DAT_TS_TXN
    is 'Date and Timestamp of the record being inserted';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.LC_BG_GUARANTEE_NO
    is 'LC/BG Guarantee Number for which the request for the Liquidation has been initiated.';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_OTHER_ACCT_NO
    is 'Other Account No';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_MOD_OTHER_ACCT_NO
    is 'Module Code of Other Account No - CH, TD, RD , LN, CASH, GL';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CC_BRN_SUB_ACCT
    is 'Branch Code for Sub Account';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CC_BRN_OTHR_ACCT
    is 'Branch Code for Other Account';
    -- Create/Recreate indexes
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_1;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_2;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_3;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_4;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_5;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_6;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_7;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_8;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_1 on XFACE_ADDL_DETAILS_TXNLOG (REF_TXN_NO, REF_KEY_NO, COD_SUB_ACCT_NO, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH (REF_TXN_NO, REF_KEY_NO, COD_SUB_ACCT_NO) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_2 on XFACE_ADDL_DETAILS_TXNLOG (REF_USR_NO, REF_KEY_NO, COD_SUB_ACCT_NO, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(REF_USR_NO, REF_KEY_NO, COD_SUB_ACCT_NO) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_3 on XFACE_ADDL_DETAILS_TXNLOG (COD_SUB_ACCT_NO, FLG_STMT_DISPLAY, DAT_POST, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(COD_SUB_ACCT_NO, FLG_STMT_DISPLAY) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_4 on
    XFACE_ADDL_DETAILS_TXNLOG (COD_ACCT_NO, REF_TXN_NO, COD_TXN_TYPE, COD_USER_ID, COD_EXT_BRANCH, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(COD_ACCT_NO, REF_TXN_NO, COD_TXN_TYPE, COD_USER_ID, COD_EXT_BRANCH)
    PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_5 on XFACE_ADDL_DETAILS_TXNLOG (COD_USER_ID, DAT_POST, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(COD_USER_ID) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_6 on XFACE_ADDL_DETAILS_TXNLOG (REF_TXN_NO_ORG, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(REF_TXN_NO_ORG) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_7 on XFACE_ADDL_DETAILS_TXNLOG (DAT_EXT_LOCAL, DAT_POST,TXN_BUSINESS_TYPE, FLG_ONLINE_UPLOAD, COD_CHNL_ID, REF_TXN_NO, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(DAT_EXT_LOCAL) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    /* Previous Key order: (COD_EXT_BRANCH,DAT_POST,REF_TXN_NO_ORG,COD_SERVICE_ID,COD_ENTITY_VPD) */
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_8 on XFACE_ADDL_DETAILS_TXNLOG (DAT_POST, COD_EXT_BRANCH, REF_TXN_NO_ORG, COD_SERVICE_ID, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(DAT_POST) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    ALTER TABLE XFACE_ADDL_DETAILS_TXNLOG NOPARALLEL PCTFREE 50 INITRANS 128 LOGGING;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_1 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_2 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_3 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_4 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_5 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_6 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_7 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_8 NOPARALLEL INITRANS 128;
    BEGIN
    DBMS_RLS.ADD_POLICY(OBJECT_SCHEMA => UPPER('FCR44HOST'),
    OBJECT_NAME => UPPER('XFACE_ADDL_DETAILS_TXNLOG'),
    POLICY_NAME => 'FC_ENTITY_POLICY',
    FUNCTION_SCHEMA => UPPER('FCR44HOST'),
    POLICY_FUNCTION => 'pk_cm_vpd_policy.get_entity_predicate',
    STATEMENT_TYPES => 'select,insert,update,delete',
    UPDATE_CHECK => TRUE,
    ENABLE => TRUE,
    STATIC_POLICY => FALSE,
    POLICY_TYPE => DBMS_RLS.SHARED_STATIC,
    LONG_PREDICATE => FALSE,
    SEC_RELEVANT_COLS => NULL,
    SEC_RELEVANT_COLS_OPT => NULL);
    END;
    begin
    dbms_stats.gather_table_stats(ownname => 'FCR44HOST',tabname => 'XFACE_ADDL_DETAILS_TXNLOG', cascade=>true,method_opt=>'for all columns size 1',degree => 32, GRANULARITY => 'PARTITION');
    end;
    Query which takes time.
    INSERT INTO xface_addl_dtls_tlog_temp
    (ref_txn_no,
    ref_usr_no,
    ref_key_no,
    ref_txn_no_org,
    ref_usr_no_org,
    recon_code,
    cod_task_derived,
    cod_chnl_id,
    cod_service_id,
    cod_user_id,
    cod_auth_id,
    cod_acct_no,
    typ_acct_no,
    cod_sub_acct_no,
    cod_dep_no,
    amount,
    cod_ccy,
    dat_post,
    dat_value,
    txt_txn_narrative,
    date_cheque_issue,
    txn_business_type,
    card_no,
    inventory_code,
    inventory_no,
    card_passbook_no,
    cod_cash_analysis,
    bank_information_no,
    batch_no,
    summary,
    main_ic_type,
    main_ic_no,
    main_ic_name,
    main_ic_check_return_code,
    deputy_ic_type,
    deputy_ic_no,
    deputy_name,
    deputy_ic_check_return_code,
    account_property,
    cheque_no,
    cod_ext_task,
    cod_module,
    acc_purpose_code,
    nationality,
    customer_name,
    cod_income_expense,
    cod_ext_branch,
    cod_acct_title,
    flg_ca_tt,
    dat_ext_local,
    acct_owner_valid_result,
    flg_dr_cr,
    flg_online_upload,
    flg_stmt_display,
    cod_txn_type,
    dat_ts_txn,
    lc_bg_guarantee_no,
    cod_other_acct_no,
    cod_mod_other_acct_no,
    cod_cc_brn_sub_acct,
    cod_cc_brn_othr_acct,
    cod_ext_task_rev,
    sessionid)
    SELECT ref_txn_no,
    ref_usr_no,
    ref_key_no,
    ref_txn_no_org,
    ref_usr_no_org,
    recon_code,
    cod_task_derived,
    cod_chnl_id,
    cod_service_id,
    cod_user_id,
    cod_auth_id,
    cod_acct_no,
    typ_acct_no,
    cod_sub_acct_no,
    cod_dep_no,
    amount,
    cod_ccy,
    dat_post,
    dat_value,
    txt_txn_narrative,
    date_cheque_issue,
    txn_business_type,
    card_no,
    inventory_code,
    inventory_no,
    card_passbook_no,
    cod_cash_analysis,
    bank_information_no,
    batch_no,
    summary,
    main_ic_type,
    main_ic_no,
    main_ic_name,
    main_ic_check_return_code,
    deputy_ic_type,
    deputy_ic_no,
    deputy_name,
    deputy_ic_check_return_code,
    account_property,
    cheque_no,
    cod_ext_task,
    cod_module,
    acc_purpose_code,
    nationality,
    customer_name,
    cod_income_expense,
    cod_ext_branch,
    cod_acct_title,
    flg_ca_tt,
    dat_ext_local,
    acct_owner_valid_result,
    flg_dr_cr,
    flg_online_upload,
    flg_stmt_display,
    cod_txn_type,
    dat_ts_txn,
    lc_bg_guarantee_no,
    cod_other_acct_no,
    cod_mod_other_acct_no,
    cod_cc_brn_sub_acct,
    cod_cc_brn_othr_acct,
    cod_ext_task_rev,
    var_l_sessionid
    FROM xface_addl_details_txnlog
    WHERE cod_sub_acct_no = var_pi_cod_acct_no
    AND dat_post between var_pi_start_dat AND var_pi_end_dat;
    The index used is in_xface_addl_details_txnlog_3.
    The first time I execute the query it takes a huge amount of time, but subsequent runs are faster. This is only if I pass the same account and criteria again.
    I observed that the first run does physical reads, which take time, and on subsequent runs there are fewer physical reads.
    Requesting suggestions. This is an account statement inquiry, and a user may have 10000 transactions in a day as well.
    By mistake I earlier raised this in "Oracle -> Text":
    Slow inserts due to physical reads every time for fresh account i am passin
    They suggested using bind variables. But as far as I know, we are already using bind variables to bind the account number and the start and end dates.
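    One way to see where the time goes on the first, physical-read heavy execution is to capture the actual row-source statistics for the statement. A minimal sketch, assuming the session may use the GATHER_PLAN_STATISTICS hint and query the V$ views (:acct, :start_dat and :end_dat are placeholder binds):
        -- Run the statement once with runtime statistics collection enabled
        SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*)
        FROM xface_addl_details_txnlog
        WHERE cod_sub_acct_no = :acct
        AND dat_post BETWEEN :start_dat AND :end_dat;
        -- Show the actual plan with buffer gets and physical reads for the last statement run in this session
        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));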

    My Replies below.
    Whenever you post provide your 4 digit Oracle version (SELECT * FROM V$VERSION).
    Ans :
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    "CORE     11.2.0.3.0     Production"
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    1. If your question is about the INSERT query into xface_addl_dtls_tlog_temp why didn't you post any information about the DDL for that table? Is it the same structure as the table you did post DDL for?
    Ans :
    -- Create table
    create global temporary table XFACE_ADDL_DTLS_TLOG_TEMP
    (
    REF_TXN_NO CHAR(40) not null,
    REF_USR_NO CHAR(40) not null,
    REF_KEY_NO VARCHAR2(50),
    REF_TXN_NO_ORG CHAR(40),
    REF_USR_NO_ORG CHAR(40),
    RECON_CODE VARCHAR2(25),
    COD_TASK_DERIVED VARCHAR2(5),
    COD_CHNL_ID VARCHAR2(6),
    COD_SERVICE_ID VARCHAR2(10),
    COD_USER_ID VARCHAR2(30),
    COD_AUTH_ID VARCHAR2(30),
    COD_ACCT_NO CHAR(22),
    TYP_ACCT_NO VARCHAR2(4),
    COD_SUB_ACCT_NO CHAR(16),
    COD_DEP_NO NUMBER(5),
    AMOUNT NUMBER(15,2),
    COD_CCY VARCHAR2(3),
    DAT_POST DATE,
    DAT_VALUE DATE,
    TXT_TXN_NARRATIVE VARCHAR2(60),
    DATE_CHEQUE_ISSUE DATE,
    TXN_BUSINESS_TYPE VARCHAR2(10),
    CARD_NO CHAR(20),
    INVENTORY_CODE CHAR(10),
    INVENTORY_NO CHAR(20),
    CARD_PASSBOOK_NO CHAR(30),
    COD_CASH_ANALYSIS CHAR(20),
    BANK_INFORMATION_NO CHAR(8),
    BATCH_NO CHAR(10),
    SUMMARY VARCHAR2(60),
    MAIN_IC_TYPE CHAR(1),
    MAIN_IC_NO VARCHAR2(150),
    MAIN_IC_NAME VARCHAR2(192),
    MAIN_IC_CHECK_RETURN_CODE CHAR(1),
    DEPUTY_IC_TYPE CHAR(1),
    DEPUTY_IC_NO VARCHAR2(150),
    DEPUTY_NAME VARCHAR2(192),
    DEPUTY_IC_CHECK_RETURN_CODE CHAR(1),
    ACCOUNT_PROPERTY CHAR(4),
    CHEQUE_NO CHAR(20),
    COD_EXT_TASK CHAR(10),
    COD_MODULE CHAR(4),
    ACC_PURPOSE_CODE VARCHAR2(15),
    NATIONALITY CHAR(3),
    CUSTOMER_NAME CHAR(192),
    COD_INCOME_EXPENSE CHAR(6),
    COD_EXT_BRANCH CHAR(6),
    COD_ACCT_TITLE VARCHAR2(360),
    FLG_CA_TT CHAR(1),
    DAT_EXT_LOCAL DATE,
    ACCT_OWNER_VALID_RESULT CHAR(1),
    FLG_DR_CR CHAR(1),
    FLG_ONLINE_UPLOAD CHAR(1),
    FLG_STMT_DISPLAY CHAR(1),
    COD_TXN_TYPE NUMBER(1),
    DAT_TS_TXN TIMESTAMP(6),
    LC_BG_GUARANTEE_NO VARCHAR2(20),
    COD_OTHER_ACCT_NO CHAR(22),
    COD_MOD_OTHER_ACCT_NO CHAR(4),
    COD_CC_BRN_SUB_ACCT NUMBER(5),
    COD_CC_BRN_OTHR_ACCT NUMBER(5),
    COD_EXT_TASK_REV VARCHAR2(10),
    SESSIONID NUMBER default USERENV('SESSIONID') not null
    )
    on commit delete rows;
    -- Create/Recreate indexes
    create index IN_XFACE_ADDL_DTLS_TLOG_TEMP on XFACE_ADDL_DTLS_TLOG_TEMP (COD_SUB_ACCT_NO, REF_TXN_NO, COD_SERVICE_ID, REF_KEY_NO, SESSIONID);
    2. Why doesn't your INSERT query use APPEND, NOLOGGING and PARALLEL like the first query you posted? If those help for the first query why didn't you try them for the query you are now having problems with?
    Ans :
    I will try to use APPEND, but I cannot use PARALLEL since I have hardware limitations.
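    For reference, a small self-contained sketch of a direct-path insert with the APPEND hint (demo_src and demo_tgt are made-up tables for the illustration; whether direct path actually helps here depends on the data volume and on the global temporary table target):
        -- Hypothetical demo tables
        create table demo_src (id number, val varchar2(30));
        create table demo_tgt (id number, val varchar2(30));
        -- Direct-path insert: data is written above the target's high-water mark
        insert /*+ APPEND */ into demo_tgt
        select id, val from demo_src;
        -- After a direct-path insert the session must commit before it can read demo_tgt again
        commit;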
    3. What does this mean: 'Index referred is in_xface_addl_details_txnlog_3.'? You haven't posted any plan that refers to any index. Do you have an execution plan? Why didn't you post it?
    Ans :
    Plan hash value: 4081844790
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
    | 0 | INSERT STATEMENT | | | | 5 (100)| | | |
    | 1 | LOAD TABLE CONVENTIONAL | | | | | | | |
    | 2 | FILTER | | | | | | | |
    | 3 | PARTITION HASH ALL | | 1 | 494 | 5 (0)| 00:00:01 | 1 | 128 |
    | 4 | TABLE ACCESS BY GLOBAL INDEX ROWID| XFACE_ADDL_DETAILS_TXNLOG | 1 | 494 | 5 (0)| 00:00:01 | ROWID | ROWID |
    | 5 | INDEX RANGE SCAN | IN_XFACE_ADDL_DETAILS_TXNLOG_3 | 1 | | 3 (0)| 00:00:01 | 1 | 128 |
    4. Why are you defining 37 columns as CHAR datatypes? Are you aware that CHAR data REQUIRES the use of the designated number of BYTES/CHARACTERS?
    Ans :
    I understand and appreciate your points, but since it is a huge application built over a period of time, I am afraid I will not be allowed to change the datatypes. There are a lot of queries over this table.
    5. Are you aware that #4 means those 37 columns, even if all of them are NULL, give you a MINIMUM record length of 1012 bytes? Care to guess how many of those records Oracle can fit into an 8k block? And that is if you ignore the other 26 VARCHAR2, NUMBER and DATE columns.
    Two of your columns take 192 bytes MINIMUM even if they are null:
    CUSTOMER_NAME CHAR(192),
    COD_ACCT_TITLE CHAR(192)
    Why are you wasting all of that space? If you are using a multi-byte character set and your data is multi-byte, those 37 columns use even more space, because some characters take more than one byte.
    If the name and title average 30 characters/bytes, then those two columns alone carry 300+ unused bytes per row. With 40 million records, the unused bytes in just those two columns take 12 GB of space.
    With a block size of 8k, that wastes 1.5 million blocks that Oracle has to read just to skip over empty space that isn't being used.
    I highly suspect that your use of CHAR is a large part of this performance problem, and probably of other performance problems in your system, not only for this table but for any other table that uses similar CHAR datatypes and wastes space.
    Please reconsider your use of CHAR datatypes like this. I can't imagine what justification you have for using them.
    Ans :
    I understand your points, but since it is a huge application built over a period of time, I am afraid I will not be allowed to change the datatypes.
    I have to manage in the current situation. I am not expecting the query to respond in milliseconds, but not 40 seconds either, which is what is happening currently.
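    For anyone weighing the CHAR padding point above, a tiny illustrative sketch (the table char_demo is hypothetical, and a single-byte character set is assumed):
        -- CHAR blank-pads to the declared length; VARCHAR2 stores only the actual data
        create table char_demo (c_name char(192), v_name varchar2(192));
        insert into char_demo values ('SCOTT', 'SCOTT');
        select vsize(c_name) as char_bytes, vsize(v_name) as varchar2_bytes from char_demo;
        -- expected result: 192 vs 5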
    Edited by: Rohit Jadhav on Dec 30, 2012 6:44 PM
