Limitation on Session Size (8K) with Clustering

          Hi.
          Is there a limitation on the session size for a clustered environment? I'm not
          sure whether it's true or not. Can anyone please clarify? Also, is it for the
          entire session object or per user?
          Thanks
          Nitesh
          

There is no limit as such. The only limit imposed depends on the heap allocated to
          the JVM.
          Nitesh wrote:
          > Hi.
          >
          > Is there a limitation on the session size for a clustered environment? I'm not
          > sure whether it's true or not. Can anyone please clarify? Also, is it for the
          > entire session object or per user?
          >
          > Thanks
          >
          > Nitesh
          Rajesh Mirchandani
          Developer Relations Engineer
          BEA Support
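
Since clustered containers replicate a session by serializing its attributes, the practical ceiling is the serialized footprint each user adds to the session (on top of the JVM heap, as noted above). Below is a minimal plain-Java sketch for estimating that footprint; the class and method names are illustrative, not a WebLogic API:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    public final class SessionSizeProbe {

        /**
         * Returns the serialized size in bytes of a session attribute,
         * i.e. roughly what a container would replicate across the cluster.
         */
        public static int serializedSize(Serializable attribute) throws IOException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(attribute);
            }
            return bytes.size();
        }

        public static void main(String[] args) throws IOException {
            // Example: a 1,000-element int array serializes to a few KB.
            int size = serializedSize(new int[1000]);
            System.out.println("Serialized size: " + size + " bytes");
        }
    }

Summing this over every attribute a user stores gives a rough per-session replication cost; actual containers may use a different serialization mechanism, so treat the numbers as estimates.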
          

Similar Messages

  • Limitation on Service size for deploying

    Is there any limitation on service size for deploying? The size of my service is approx. 5MB, and I am not able to migrate it through the Catalog Deployer, nor by exporting and importing the file.

    There is no limit as such. The only limit imposed depends on the heap allocated to
              the JVM.
              Rajesh Mirchandani
              Developer Relations Engineer
              BEA Support
              

  • Paper Size issues with CreatePDF Desktop Printer

    Are there any known paper size issues with PDFs created using Acrobat.com's CreatePDF Desktop Printer?
    I've performed limited testing with a trial subscription, in preparation for a rollout to several clients.
    Standard paper size in this country is A4, not Letter.  The desktop printer was created manually on a Windows XP system following the instructions in document cpsid_86984.  MS Word was then used to print a Word document to the virtual printer.  Paper Size in Word's Page Setup was correctly set to A4.  However the resultant PDF file was Letter size, causing the top of each page to be truncated.
    I then looked at the Properties of the printer, and found that it was using an "HP Color LaserJet PS" driver (self-chosen by the printer install procedure).  Its Paper Size was also set to A4.  Word does override some printer driver settings, but in this case both the application and the printer were set to A4, so there should have been no issue.
    On a hunch, I then changed the CreatePDF printer driver to a Xerox Phaser, as suggested in the above Adobe document for other versions of Windows.  (Couldn't find the recommended "Xerox Phaser 6120 PS", so chose the 1235 PS model instead.)  After confirming that it too was set for A4, I repeated the test using the same Word document.  This time the result was fine.
    While I seem to have solved the issue on this occasion, I have not been able to do sufficient testing with a 5-PDF trial, and wish to avoid similar problems with the future live users, all of which use Word and A4 paper.  Any information or recommendations would be appreciated.  Also, is there any information available on the service's sensitivity to different printer drivers used with the CreatePDF's printer definition?  And can we assume that the alternative "Upload and Convert" procedure correctly selects output paper size from the settings of an uploaded document?
    PS - The newly-revised doc cpsid_86984 still seems to need further revising.  Vista and Windows 7 instructions have now been split.  I tried the new Vista instructions on a Vista SP2 PC and found that step 6 appears to be out of place - there was no provision to enter Adobe ID and password at this stage.  It appears that, as with XP and Win7, one must configure the printer after it is installed (and not just if changing the ID or password, as stated in the document).

    Thank you, Rebecca.
    The plot thickens a little, given that it was the same unaltered Word document that first created a letter-size PDF, but correctly created an A4-size PDF after the driver was changed from the HP Color Laser PS to a Xerox Phaser.  I thought that the answer may lie in your comment that "it'll get complicated if there is a particular driver selected in the process of manually installing the PDF desktop printer".  But that HP driver was not (consciously) selected - it became part of the printer definition when the manual install instructions were followed.
    However I haven't yet had a chance to try a different XP system, and given that you haven't been able to reproduce the issue (thank you for trying), I will assume for the time being that it might have been a spurious problem that won't recur.  I'll take your point about using the installer, though when the opportunity arises I might try to satisfy my cursed curiosity by experimenting further with the manual install.  If I come up with anything of interest, I'll post again.

  • By default AIX limits maximum file size to 1GB

    When writing files larger than 1GB in AIX, I receive a "File too large" error.
    This file size limit presents a problem, especially when creating large files,
    such as LDIF exports from a Directory Server instance or message store dumps
    from a Messaging Server instance.

    By default, AIX limits the maximum size of files to 1GB. However, root can
    adjust the maximum file size for itself with the following command:

    $ ulimit -f arbitrary_large_number

    The -f modifier specifies the maximum file size (in 512-byte blocks). For example:

    $ ulimit -f 4194304

    The default maximum file size is set in the /etc/security/limits file. The
    default values in this file are as follows:

    fsize = 2097151
    core = 2097151
    cpu = -1
    data = 262144
    rss = 65536
    stack = 65536
    nofiles = 2000

    To view your local default values, use the following command:

    $ ulimit -a

    Any user can adjust the maximum file size limit downward. For example:

    $ ulimit -f
    2097151
    $ ulimit -f 100
    $ ulimit -f
    100
    $ cp /unix /tmp/MyBigFile   # copy a big file to a location with enough space
    File too large
    $ ls -al /tmp/MyBigFile
    -rwxr--r-- 1 UserID GroupID 51200 Jul 25 08:41 MyBigFile
    $

    However, only root can adjust the ulimit file size limits upward. Such changes
    take effect only after the user logs in again.

    Additionally, it is possible to set specific user limits with the chuser
    command. For information on chuser, see:

    http://www.rs6000.ibm.com/doc_link/en_US/a_doc_lib/cmds/aixcmds1/chuser.htm#A067913d9

    For more information on the ulimit command, see:

    http://www.rs6000.ibm.com/doc_link/en_US/a_doc_lib/cmds/aixcmds5/ulimit.htm

    Note: The shell in use may affect the actual limits imposed by ulimit. In
    particular, /usr/csh may not properly adjust the limit from the default value.
    Also, bash (Bourne Again shell) may treat file sizes in blocks as though they
    were file sizes in kilobytes (KB).

    The size you can handle is not limited... only by your code :-)
    Check MemoryMappedFiles... I handle gigabyte files with no problem ;-)
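
For anyone following that pointer: here is a minimal java.nio sketch of reading a large file through memory mapping. The path is a placeholder, and since a single MappedByteBuffer cannot exceed 2 GB (Integer.MAX_VALUE bytes), the file is mapped in chunks:

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public final class MappedFileReader {

        public static void main(String[] args) throws IOException {
            Path file = Path.of("/tmp/MyBigFile");     // placeholder path
            final long chunkSize = 256L * 1024 * 1024; // map 256 MB at a time

            try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
                long size = channel.size();
                long checksum = 0;
                for (long pos = 0; pos < size; pos += chunkSize) {
                    long len = Math.min(chunkSize, size - pos);
                    MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_ONLY, pos, len);
                    while (buf.hasRemaining()) {
                        checksum += buf.get(); // touch every byte; pages load lazily
                    }
                }
                System.out.println("Read " + size + " bytes, checksum " + checksum);
            }
        }
    }

Because pages are faulted in on demand by the OS, this sidesteps per-process file size buffering in user code, though the filesystem's own limits (like the AIX fsize limit above) still apply when writing.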

  • Oracle ADF viewScope causing session size bloat

    As I'm sure you know, ADF introduces some additional scopes (pageFlowScope, viewScope and backingBeanScope) on top of the standard JSF ones. Our use of one of the ADF scopes, viewScope, appears to be causing our session size to bloat over time.
    Objects that are view scoped (e.g. our backing beans) are managed by ADF and appear to be put into the session in an org.apache.myfaces.trinidadinternal.application.StateManagerImpl$PageState object. The number of these objects in the session is equal to the org.apache.myfaces.trinidad.CLIENT_STATE_MAX_TOKENS setting in our web.xml configuration file.
    Once all of the tokens are 'used up' by navigating around the application, the oldest of these objects is removed from the session and should be garbage collected at some point. However, the reclamation of this space is observed much later, after the session has expired. Because of this, when load testing the application we see the heap space usage gradually increasing until the JVM crashes.
    The monitoring of the creation and destruction of our objects is done by adding log statements in the default constructor and in the finalize method (which overrides the finalize method on Object). The logging statements on object creation are seen when we would expect them, but the logging statements from the finalize method are only seen after session expiry. When a garbage collection is triggered using Oracle JRockit Mission Control we see the heap usage drop significantly, but we don't observe any logging from the finalize method calls.
    Does anyone have any thoughts on why the garbage collector might not be able to reclaim view scoped objects after they are removed from the session?
    Thanks in advance.
    P.S. I have already found VIEW SCOPE IS NOT RELEASING PROPERLY IN ADF, which is a very closely related thread, but unfortunately I was not able to use the replies on there to resolve our issue. I've also posted this same question on Stack Overflow (http://stackoverflow.com/questions/13380151/lifetime-of-view-scoped-objects-oracle-adf). I'll try and update both threads if I find a solution.
    Edited by: 971217 on 14-Nov-2012 07:08
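
A side note on the measurement technique described above: overriding finalize() itself delays reclamation, because finalizable objects are queued for the finalizer thread and need at least one additional GC cycle before their memory is actually freed. A lighter way to observe when an object becomes collectible is plain java.lang.ref, sketched below (the names are illustrative, not part of the original test case):

    import java.lang.ref.ReferenceQueue;
    import java.lang.ref.WeakReference;

    public final class ReclaimWatch {

        public static void main(String[] args) throws InterruptedException {
            ReferenceQueue<Object> queue = new ReferenceQueue<>();

            Object bean = new byte[50_000_000]; // stand-in for a heavy view-scoped bean
            WeakReference<Object> watch = new WeakReference<>(bean, queue);

            bean = null;  // drop the strong reference, as removal from the session should
            System.gc();  // request a collection (a hint only, not guaranteed)

            // The reference is enqueued once the collector reclaims the object.
            if (queue.remove(5_000) == watch) {
                System.out.println("Bean was garbage collected");
            } else {
                System.out.println("Bean is still strongly reachable somewhere");
            }
        }
    }

If the reference never gets enqueued while a heap dump shows the bean retained by PageState, that points at the token cache rather than at the collector.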

    Hi Frank,
    Thanks for your very useful reply. I've managed to recreate the problem today by doing the following.
    1. Create pageOne.jspx and pageTwo.jspx
    2. Create PageOneBB.java and PageTwoBB.java
    3. Register PageOneBB.java and PageTwoBB.java in the adfc-config.xml as view scoped managed beans.
    Then, after building and deploying to my WebLogic server, I continue by doing the following:
    4. Open pageOne.jspx in a browser. Observe the constructor of pageOneBB being called and the correct default text being shown in the box. [Optional] Set the text value to a new string and click on the button.
    5. Get redirected to pageTwo.jspx. Observe the constructor of pageTwoBB being called and the correct default text being shown in the box. [Optional] Set the value to a new string and click on the button.
    6. Monitor the WebLogic server using Oracle JRockit Mission Control. Observe the large lists of booleans being created as expected (5,000,000 per click!).
    7. Note that this number is never reduced - even though the old view scoped beans should have been released for garbage collection.
    8. Repeat steps 4 and 5 until I see the WebLogic server crash due to a java.lang.OutOfMemoryError.
    9. Wait for all of the sessions to expire. I've set my session expiry to 180s for the purpose of this test.
    10. After 180s, observe the finalize method being called on all of the backing bean objects and the heap usage drop significantly.
    11. The server works again, but the problem has been demonstrated in a reproducible way.
    adfc-config.xml
    <managed-bean>
        <managed-bean-name>pageOneBB</managed-bean-name>
        <managed-bean-class>presentation.adf.test.PageOneBB</managed-bean-class>
        <managed-bean-scope>view</managed-bean-scope>
    </managed-bean>
    <managed-bean>
        <managed-bean-name>pageTwoBB</managed-bean-name>
        <managed-bean-class>presentation.adf.test.PageTwoBB</managed-bean-class>
        <managed-bean-scope>view</managed-bean-scope>
    </managed-bean>
    pageOne.jspx
    <?xml version='1.0' encoding='utf-8'?>
    <jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1"
        xmlns:f="http://java.sun.com/jsf/core"
        xmlns:h="http://java.sun.com/jsf/html"
        xmlns:af="http://xmlns.oracle.com/adf/faces/rich"
        xmlns:c="http://java.sun.com/jsp/jstl/core" >
        <jsp:directive.page contentType="text/html;charset=UTF-8" />
        <f:view>
            <af:document id="t" title="Page One">
                <af:form>
                    <af:inputText id="pgOneIn" value="#{viewScope.pageOneBB.testData}" />
                    <af:commandButton id="pgOneButton" partialSubmit="true"
                        blocking="true" action="#{viewScope.pageOneBB.goToPageTwo}"
                        text="Submit" />
                </af:form>
            </af:document>
        </f:view>
    </jsp:root>
    pageTwo.jspx
    <?xml version='1.0' encoding='utf-8'?>
    <jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1"
        xmlns:f="http://java.sun.com/jsf/core"
        xmlns:h="http://java.sun.com/jsf/html"
        xmlns:af="http://xmlns.oracle.com/adf/faces/rich"
        xmlns:c="http://java.sun.com/jsp/jstl/core" >
        <jsp:directive.page contentType="text/html;charset=UTF-8" />
        <f:view>
            <af:document id="t" title="Page Two">
                <af:form>
                    <af:inputText id="pgTwoIn" value="#{viewScope.pageTwoBB.testData}" />
                    <af:commandButton id="pgTwoButton" partialSubmit="true"
                        blocking="true" action="#{viewScope.pageTwoBB.goToPageOne}"
                        text="Submit" />
                </af:form>
            </af:document>
        </f:view>
    </jsp:root>
    PageOneBB.java
    package presentation.adf.test;

    import java.io.IOException;
    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;
    import javax.faces.context.FacesContext;
    import org.apache.log4j.Logger;
    import logger.log4j.RuntimeConfigurableLogger;

    public class PageOneBB implements Serializable {
        /** Default serial version UID. */
        private static final long serialVersionUID = 1L;
        /** Page one default text. */
        private String pageOneData = "Page one default text";
        /** A list of booleans that will become large. */
        private List<Boolean> largeBooleanList = new ArrayList<Boolean>();
        /** The logger. */
        private static final Logger LOG = RuntimeConfigurableLogger.gotLogger(PageOneBB.class);

        /** Default constructor for PageOneBB. */
        public PageOneBB() {
            // new Boolean(true) is deliberate: each call allocates a distinct
            // object, inflating the heap footprint of this bean for the test.
            for (int i = 0; i < 5000000; i++) {
                largeBooleanList.add(new Boolean(true));
            }
            if (LOG.isTraceEnabled()) {
                LOG.trace("Constructor called on PageOneBB. This object has a hash code of " + this.hashCode());
            }
        }

        /** Method for redirecting to page two. */
        public void goToPageTwo() {
            try {
                FacesContext.getCurrentInstance().getExternalContext().redirect("pageTwo.jspx");
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        /** {@inheritDoc} */
        @Override
        protected void finalize() throws Throwable {
            if (LOG.isTraceEnabled()) {
                LOG.trace("Finalize method called on PageOneBB. This object has a hash code of " + this.hashCode());
            }
            super.finalize();
        }

        /**
         * Set the testData.
         * @param testData The testData to set.
         */
        public void setTestData(String testData) {
            if (LOG.isTraceEnabled()) {
                LOG.trace("setTestData method called on PageOneBB with a parameter of " + testData);
            }
            this.pageOneData = testData;
        }

        /**
         * Get the testData.
         * @return The testData.
         */
        public String getTestData() {
            if (LOG.isTraceEnabled()) {
                LOG.trace("getTestData method called on PageOneBB");
            }
            return pageOneData;
        }
    }
    PageTwoBB.java
    package presentation.adf.test;

    import java.io.IOException;
    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;
    import javax.faces.context.FacesContext;
    import org.apache.log4j.Logger;
    import logger.log4j.RuntimeConfigurableLogger;

    public class PageTwoBB implements Serializable {
        /** Default serial version UID. */
        private static final long serialVersionUID = 1L;
        /** Page two default text. */
        private String pageTwoData = "Page two default text";
        /** A list of booleans that will become large. */
        private List<Boolean> largeBooleanList = new ArrayList<Boolean>();
        /** The logger. */
        private static final Logger LOG = RuntimeConfigurableLogger.gotLogger(PageTwoBB.class);

        /** Default constructor for PageTwoBB. */
        public PageTwoBB() {
            // As in PageOneBB, each iteration allocates a distinct object.
            for (int i = 0; i < 5000000; i++) {
                largeBooleanList.add(new Boolean(true));
            }
            if (LOG.isTraceEnabled()) {
                LOG.trace("Constructor called on PageTwoBB. This object has a hash code of " + this.hashCode());
            }
        }

        /** Method for redirecting to page one. */
        public void goToPageOne() {
            try {
                FacesContext.getCurrentInstance().getExternalContext().redirect("pageOne.jspx");
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        /** {@inheritDoc} */
        @Override
        protected void finalize() throws Throwable {
            if (LOG.isTraceEnabled()) {
                LOG.trace("Finalize method called on PageTwoBB. This object has a hash code of " + this.hashCode());
            }
            super.finalize();
        }

        /**
         * Set the testData.
         * @param testData The testData to set.
         */
        public void setTestData(String testData) {
            if (LOG.isTraceEnabled()) {
                LOG.trace("setTestData method called on PageTwoBB with a parameter of " + testData);
            }
            this.pageTwoData = testData;
        }

        /**
         * Get the testData.
         * @return The testData.
         */
        public String getTestData() {
            if (LOG.isTraceEnabled()) {
                LOG.trace("getTestData method called on PageTwoBB");
            }
            return pageTwoData;
        }
    }

  • Server Monitoring with clustered instances

    Anyone using the server monitor or multiserver monitor with
    clustered instances of ColdFusion? In CF 8.0.1 on Solaris, enabling
    monitoring produces a vast number of repeated errors of the form
    included below. This occurs on both clustered instances, as the
    instances are set up to replicate session data using J2EE session
    variables. The monitoring appears to work, but the frequency of the
    errors produced in the output log of *BOTH* of the cluster instances
    is extensive. These errors do not occur when monitoring the
    "cfusion" admin instance. Is this a product issue or a
    configuration issue?
    MM/DD HH:MM:SS error Setup of session replication failed.
    [2]java.io.StreamCorruptedException: unexpected end of block data
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1945)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1869)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:351)
        at java.util.Hashtable.readObject(Hashtable.java:859)
    ...
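
A StreamCorruptedException like the one above generally means some object in the replicated session graph does not survive a serialize/deserialize round trip. Below is a generic plain-Java sketch (not a ColdFusion API) for round-tripping a candidate session value in isolation to surface such failures:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    import java.util.Hashtable;

    public final class ReplicationRoundTrip {

        /** Serializes and deserializes a value, as session replication would. */
        public static Object roundTrip(Serializable value) throws Exception {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(value);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return in.readObject();
            }
        }

        public static void main(String[] args) throws Exception {
            Hashtable<String, Object> session = new Hashtable<>();
            session.put("cart", new int[] {1, 2, 3}); // stand-in for real session data
            System.out.println("Round trip OK: " + roundTrip(session));
        }
    }

If a value fails here (NotSerializableException, or corrupted custom readObject/writeObject logic), it will fail the same way inside the container's replication machinery.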

    Dear Jan,
    I have already added the plugin, but while adding the target I am getting the error below. Can you please give me some idea on this?
    Test Connection failed: [_WinAuthDLLToLoadDynamicProp;em_error=DLL file 'D:\12c_agent\plugins\oracle.em.smss.agent.plugin_12.1.0.2.0\scripts\emx\microsoft_sqlserver_database..\..\..\..\dependencies\oracle.em.smss\jdbcdriver\sqljdbc_auth.dll' is found missing or not was never copied manually. Please copy amd64 version of sqljdbc_auth.dll at the above location and re-try, MSSQL_NumClusterNodes;Can't resolve a non-optional query descriptor property [dllFile] (dllFile), WbemRemote_Determination_DynamicProperty;Can't resolve a non-optional query descriptor property [dllFile] (dllFile), MSSQLInstance_TestMetric_DynamicProperty;Can't resolve a non-optional query descriptor property [dllFile] (dllFile), OSType_TargetHost_DynamicProperty;Can't resolve a non-optional query descriptor property [STDINWBEM_HOST] (ms_sqlserver_host), MSSQL_NumClusterNodes;Can't resolve a non-optional query descriptor property [dllFile] (dllFile)]

  • Standalone OC4J on 3 different machines with clustering enabled

    Hello,
    I just want to know if clustering is possible in my situation.
    I have 3 different machines/servers behind a load balancer. I've installed a standalone OC4J on each machine, deployed my application to each standalone OC4J, and enabled clustering on each OC4J using the peer-to-peer configuration.
    Machine 1 pointing to node of Machine 2
    Machine 2 pointing to node of Machine 3
    Machine 3 pointing to node of Machine 1
    Then I tested my application. On my first try the load balancer pointed me to machine 1, where I created a session, etc. After that I stopped my application on the machine 1 OC4J; when I refreshed my page, the load balancer pointed me to machine 2, and there I saw that my session was lost. Clustering is not working.
    Can anyone tell me whether clustering is possible using OC4Js only? Thanks.

    I have solved this issue.
    It turns out that WLS 10.3 does not always delete an application cleanly.
    I found 3 copies of the application remaining on the server.
    I deleted and reinstalled WLS and the problem was solved.

  • How to pass session variable value with GO URL to override session value

    Hi Gurus,
    We have below requirement.Please help us at the earliest.
    How can we pass a session variable value with a GO URL to override the session value? It is not working after making the changes to the authentication XML file and the session init block creation as explained by Oracle (Bug 14372679, which they claim is fixed in the 1.7 version; ref: Bug 14372679 - REQUEST VARIABLE NOT OVERRIDING SESSION VARIABLE RUNNING THRU A GO URL).
    Please provide a step-by-step solution; no vague answers.
    I followed below steps mentioned.
    RPD:
    -> Created a session variable called STATUS
    -> Create Session Init block called Init_Status with SQL
        select 'ACTIVE' from dual;
    -> Assigned the session variable STATUS to Init block Init_Status
    authenticationschemas.xml:
    Added
    <RequestVariable source="url" type="informational"
    nameInSource="RE_CODE" biVariableName="NQ_SESSION.STATUS"/>
    Report
    Edit column "Contract Status" and added session variable as
    VALUEOF(NQ_SESSION.STATUS)
    URL:
    http://localhost:9704/analytics/saw.dll?PortalGo&Action=prompt&path=%2Fshared%2FQAV%2FTest_Report_By%20Contract%20Status&RE_CODE='EXPIRED'
    Issue:
    When I run the URL above with the parameter EXPIRED, the report still shows ACTIVE only. The URL is not making any difference to the report.
    The report is picking up the default value from the RPD session variable init query.
    Could you please let me know if I am missing something?

    Hi,
    Check those links might help you.
    Integrating Oracle OBIEE Content using GO URL
    How to set session variables using url variables | OBIEE Blog
    OBIEE 10G - How to set a request/session variable using the Saw Url (Go/Dashboard) | GerardNico.com (BI, OBIEE, O…
    Thanks,
    Satya

  • Error while creating table with clusters

    Hi,
    I tried the following:
    CREATE CLUSTER emp_dept (deptno NUMBER(3));
    The cluster is created.
    Then I tried to create a table in the above cluster, but it gives an error:
    create table emp10 (ename char(5), deptno number(2)) cluster emp_dept(deptno);
    The error is:
    ORA-01753 column definition incompatible with clustered column definition
    Could you please help me with this?

    Your cluster is based on a NUMBER(3) data type while the emp10 table has a deptno column with a data type of NUMBER(2). Declaring the column as deptno NUMBER(3) in emp10, matching the cluster key exactly, resolves the error.

  • How to scan legal size document with Adobe Acrobat 6.0?

    How do you scan legal (8.5 x 14.0) size documents with Adobe Acrobat 6.0.0 ? There is no option for that size paper in the scan menu. And I can't find the answer in either my help file or on-line with Adobe.com. Can anyone help me?

    I use an HP OfficeJet G85. I have no problem copying/printing legal (8.5 x 14) size documents with that all-in-one unit. So I assumed it should be able to send the image it collected to my Adobe Acrobat program instead of to the printer. Am I missing something? Or should I put the question to HP instead of Adobe?

  • Limiting the Text Size in CRM 5.0 IC Web Client

    Hello Experts,
    We have a scenario wherein we can enter the text for a CRM document (say, a service document) in the text field of the IC Web Client. We have noticed that we are even allowed to enter text of around 73 MB (which comes to 1000s of Word documents).
    We now have a requirement for limiting this text, as it is causing some problems and somehow affecting system performance.
    Could anyone help me in limiting the text size in CRM 5.0 IC Web Client? I mean, is there any configuration setting for doing this, or do we need to do it in a text determination procedure? Thanks in anticipation for your response.
    Best Regards,
    Kishore K

    Kishore,
    Use the code below to determine the number of characters typed in FoUpText.htm:
    data: col_wrap  type ref to cl_bsp_wd_collection_wrapper,
          lr_entity type ref to if_bol_bo_property_access,
          lv_str    type string,
          lv_len    type i.
    " Get the current text entity from the BTTEXT context node
    col_wrap = typed_context->BTTEXT->get_collection_wrapper( ).
    lr_entity ?= col_wrap->get_current( ).
    " Read the concatenated text lines and measure their length
    lv_str = lr_entity->get_property_as_string( 'CONC_LINES' ).
    lv_len = strlen( lv_str ).
    Place this code in the controller of FoUpText.do wherever appropriate.
    Cheers,
    Ankur

  • When a table with a clustered columnstore index is partitioned, performance degrades if data is located in multiple partitions

    Hello,
    Below I provide complete code to reproduce the behavior I am observing. You could run it in tempdb or any other database; that is not important. The test query provided at the top of the script is pretty silly, but I have observed the same
    performance degradation with about a dozen various queries of different complexity, so this is just the simplest one I am using as an example here. Note that I also included approximate run times in the script comments (obviously based on what I
    observed on my machine). Here are the steps, with numbers corresponding to the numbers in the script:
    1. Run script from #1 to #7.  This will create the two test tables, populate them with records (40 mln. and 10 mln.) and build regular clustered indexes.
    2. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Main'. Scan count 5, logical reads 151435, physical reads 0, read-ahead reads 4, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Txns'. Scan count 5, logical reads 74155, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 5514 ms, elapsed time = 1389 ms.
    3. Run script from #8 to #9. This will replace regular clustered indexes with columnstore clustered indexes.
    4. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54850, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 828 ms, elapsed time = 392 ms.
    As you can see the query is clearly faster.  Yay for columnstore indexes!.. But let's continue.
    5. Run script from #10 to #12 (note that this might take some time to execute). This will move about 80% of the data in both tables to a different partition. You should be able to see that the data has been moved when running step #11.
    6. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54817, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 8172 ms, elapsed time = 3119 ms.
    And now look, the I/O stats look the same as before, but the performance is the slowest of all our tries!
    I am not going to paste the execution plans or the detailed properties of each operator here. They show up as expected -- columnstore index scan, parallel/partitioned = true, and both the estimated and actual number of rows are less than during the second run (when all of the data resided in the same partition).
    So the question is: why is it slower?
    Thank you for any help!
    Here is the code to reproduce this:
    --==> Test Query - begin --<===
    DBCC DROPCLEANBUFFERS
    DBCC FREEPROCCACHE
    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    SELECT COUNT(1)
    FROM Txns AS z WITH(NOLOCK)
    LEFT JOIN Main AS mmm WITH(NOLOCK) ON mmm.ColBatchID = 70 AND z.TxnID = mmm.TxnID AND mmm.RecordStatus = 1
    WHERE z.RecordStatus = 1
    --==> Test Query - end --<===
    --===========================================================
    --1. Clean-up
    IF OBJECT_ID('Txns') IS NOT NULL DROP TABLE Txns
    IF OBJECT_ID('Main') IS NOT NULL DROP TABLE Main
    IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'PS_Scheme') DROP PARTITION SCHEME PS_Scheme
    IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'PF_Func') DROP PARTITION FUNCTION PF_Func
    --2. Create partition function
    CREATE PARTITION FUNCTION PF_Func(tinyint) AS RANGE LEFT FOR VALUES (1, 2, 3)
    --3. Partition scheme
    CREATE PARTITION SCHEME PS_Scheme AS PARTITION PF_Func ALL TO ([PRIMARY])
    --4. Create Main table
    CREATE TABLE dbo.Main(
    SetID int NOT NULL,
    SubSetID int NOT NULL,
    TxnID int NOT NULL,
    ColBatchID int NOT NULL,
    ColMadeId int NOT NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --5. Create Txns table
    CREATE TABLE dbo.Txns(
    TxnID int IDENTITY(1,1) NOT NULL,
    GroupID int NULL,
    SiteID int NULL,
    Period datetime NULL,
    Amount money NULL,
    CreateDate datetime NULL,
    Descr varchar(50) NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --6. Populate data (credit to Jeff Moden: http://www.sqlservercentral.com/articles/Data+Generation/87901/)
    -- 40 mln. rows - approx. 4 min
    --6.1 Populate Main table
    DECLARE @NumberOfRows INT = 40000000
    INSERT INTO Main (
    SetID,
    SubSetID,
    TxnID,
    ColBatchID,
    ColMadeID,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    SetID = ABS(CHECKSUM(NEWID())) % 500 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SubSetID = ABS(CHECKSUM(NEWID())) % 3 + 1,
    TxnID = ABS(CHECKSUM(NEWID())) % 1000000 + 1,
    ColBatchId = ABS(CHECKSUM(NEWID())) % 100 + 1,
    ColMadeID = ABS(CHECKSUM(NEWID())) % 500000 + 1,
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --6.2 Populate Txns table
    -- 10 mln. rows - approx. 1 min
    SET @NumberOfRows = 10000000
    INSERT INTO Txns (
    GroupID,
    SiteID,
    Period,
    Amount,
    CreateDate,
    Descr,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    GroupID = ABS(CHECKSUM(NEWID())) % 5 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SiteID = ABS(CHECKSUM(NEWID())) % 56 + 1,
    Period = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'), -- DATEADD(dd,ABS(CHECKSUM(NEWID())) % @Days, @StartDate)
    Amount = CAST(RAND(CHECKSUM(NEWID())) * 250000 + 1 AS MONEY),
    CreateDate = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'),
    Descr = REPLICATE(CHAR(65 + ABS(CHECKSUM(NEWID())) % 26), ABS(CHECKSUM(NEWID())) % 20),
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --7. Add PK's
    -- 1 min
    ALTER TABLE Txns ADD CONSTRAINT PK_Txns PRIMARY KEY CLUSTERED (RecordStatus ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED INDEX CDX_Main ON Main(RecordStatus ASC, SetID ASC, SubSetId ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Replace regular indexes with clustered columnstore indexes
    --===========================================================
    --8. Drop existing indexes
    ALTER TABLE Txns DROP CONSTRAINT PK_Txns
    DROP INDEX Main.CDX_Main
    --9. Create clustered columnstore indexes (on partition scheme!)
    -- 1 min
    CREATE CLUSTERED COLUMNSTORE INDEX PK_Txns ON Txns ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED COLUMNSTORE INDEX CDX_Main ON Main ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Move about 80% of the data into a different partition
    --===========================================================
    --10. Update "RecordStatus", so that data is moved to a different partition
    -- 14 min (32002557 row(s) affected)
    UPDATE Main
    SET RecordStatus = 2
    WHERE TxnID < 800000 -- range of values is from 1 to 1 mln.
    -- 4.5 min (7999999 row(s) affected)
    UPDATE Txns
    SET RecordStatus = 2
    WHERE TxnID < 8000000 -- range of values is from 1 to 10 mln.
    --11. Check data distribution
    SELECT
    OBJECT_NAME(SI.object_id) AS PartitionedTable
    , DS.name AS PartitionScheme
    , SI.name AS IdxName
    , SI.index_id
    , SP.partition_number
    , SP.rows
    FROM sys.indexes AS SI WITH (NOLOCK)
    JOIN sys.data_spaces AS DS WITH (NOLOCK)
    ON DS.data_space_id = SI.data_space_id
    JOIN sys.partitions AS SP WITH (NOLOCK)
    ON SP.object_id = SI.object_id
    AND SP.index_id = SI.index_id
    WHERE DS.type = 'PS'
    AND OBJECT_NAME(SI.object_id) IN ('Main', 'Txns')
    ORDER BY 1, 2, 3, 4, 5;
    PartitionedTable  PartitionScheme  IdxName   index_id  partition_number  rows
    Main              PS_Scheme        CDX_Main  1         1                 7997443
    Main              PS_Scheme        CDX_Main  1         2                 32002557
    Main              PS_Scheme        CDX_Main  1         3                 0
    Main              PS_Scheme        CDX_Main  1         4                 0
    Txns              PS_Scheme        PK_Txns   1         1                 2000001
    Txns              PS_Scheme        PK_Txns   1         2                 7999999
    Txns              PS_Scheme        PK_Txns   1         3                 0
    Txns              PS_Scheme        PK_Txns   1         4                 0
    --12. Update statistics
    EXEC sys.sp_updatestats
    --==> Run test Query --<===

    Hello Michael,
    I just simulated the situation and got the same results as in your description. However, I did one more test - I rebuilt the two columnstore indexes after the update (and test run). I got the following details:
    Table 'Txns'. Scan count 8, logical reads 12922, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 8, logical reads 57042, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
    CPU time = 251 ms, elapsed time = 128 ms.
    As an explanation of the behavior: because an UPDATE against a clustered columnstore index is executed as a DELETE plus an INSERT, you ended up with the original row groups having almost all of their data flagged as deleted, plus almost the same amount of new row groups holding the new data (coming from the update). I suppose scanning the deleted bitmap caused the additional slowness at your end, or something related to that "fragmentation". Rebuilding the index writes compacted row groups without the deleted rows, which is why the query is fast again afterwards.
    Ivan Donev MCITP SQL Server 2008 DBA, DB Developer, BI Developer

  • Is there a limitation on the Size of Attachments to SBO Documents?

    Hi, Experts.
    Is there a size limitation on the attachments that you can attach to an SBO document like an Activity?
    Does this depend only on the hardware specifications, or does SBO also limit the size of attachments?
    Any help would be appreciated.
    Thanks,
    Marli

    Hi Marli,
    There is no limitation on the size of an attachment. In any case, SBO stores only the name of the file and copies the file itself to the attachment folder. Since only the name is stored in SBO, the size of the attachment does not matter.
    Rahul

  • Built-in restore session hangs/freezes with too many tabs in Fx 29

    Hi, I've used the Session Manager add-on for a while, ever since a previous 'upgrade' to Firefox tended to not save sessions after a crash.
    The upgrade to version 29 has rendered Session Manager unusable, as documented here:
    https://support.mozilla.org/en-US/questions/1000544?esab=a&as=aaq
    However I've determined that this seems to be not a problem with Session Manager, but with Firefox's built-in restore functionality. This is because though Session Manager no longer shows at startup, Firefox's native session restore does, and Session Manager can still be accessed through the "Tools" menu. It simply crashes with even a couple hundred tabs, let alone the 1-3k tab sessions I normally use.
    Experimentation has demonstrated several things. (Both session restore and Session Manager work the same way, which tells me that Session Manager is probably built on the session restore function, simply adding support for multiple sessions and better autosaving.)
    1. If I try to restore a large session, even if I remove almost all of my tabs from the restore, it will start to open everything, then (apparently) hang without loading any of the default tabs, and then freeze if I try to do anything at all, including clicking on the search box, trying to scroll, or clicking on another tab.
    2. If I try to open a session with one window and one tab, it will open.
    Not sure if this is relevant, but if I clicked on another tab before it froze, when I do a session restore from the old session it will open with that one that I clicked as the default tab.
    It used to be that Firefox would only load the last tab I was on for each window, and then load others when I clicked on them. I kind of suspect that the upgrade has Firefox now trying to either load all tabs at once (rather than only when I clicked on them, as previously), or at the very least trying to get the metadata from all the pages.
    I know that the sensible solution is to not have more than a few dozen tabs at any one time. But I'm not really a sensible person; I've spent the past three hours trying to downgrade. I'm dedicated enough to my silly and obsolescent browsing method that I imagine I will keep trying to downgrade, security and time wastage be damned, until the issue is fixed.
    On the other hand, it occurs that there may already be a patch or new add-on which addresses this issue, and that would be a lot easier than downgrading, which Mozilla quite intentionally makes painfully difficult. Is there an easier way to get my multi-k tab sessions back?
    Thanks,
    LK
    PS: OS is Ubuntu 12.10. I have automatic updates disabled but periodically upgrade my whole system from the Terminal. My last system-wide upgrade was yesterday, the penultimate one was sometime in April, which is why I think the problem starts with Fx 29.

    Hi, I'm a tab junkie... and I'm not changing. I have this issue also. After 2 weeks of utter frustration, not liking any of the rollbacks, etc., I dove into other browsers... and found Comodo IceDragon, which is based on Firefox 26.0... but better. I can't even describe how stable it is, even compared to FF26.
    The only glitches were bringing in my history and passwords. I had to use Password Exporter to deal with passwords, and had to export my bookmarks and history to Chrome, then into IceDragon, because it could not do it directly. But Comodo tech support was fast and very helpful!!
    Give it a roll. It's like old home week, but better.

  • MS CRM 2011 session time out without CBA

    Hi CRM Gurus,
    Can we have a session timeout for the CRM website without implementing CBA?
    By default the idle timeout for the app pool is 20 minutes, but it does not seem to have any impact.
    Is there any way to achieve the timeout for the CRM website, and is there any impact on other things like Outlook or the Email Router if we do so?
    Thanks in advance.
    Regards,
    yes.Sudhanshu
    yes.sudhanshu
    http://bproud2banindian.blogspot.com
    http://ms-crm-2011-beta.blogspot.com

    Hi yes.sudhanshu,
    Look for the following registry key on a machine where this issue is occurring:
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\WebClient\Parameters\InternetServerTimeoutInSec
    and increase the value (I think it defaults to 30), then test.
