Oracle B2B performance tuning for large documents

In http://www.b2bgurus.com/2008/02/large-edi-file-processing.html, the following settings are suggested to handle large EDI documents. Does anyone have more information on how these settings work?
oracle.tip.adapter.b2b.sqltablelock=false
oracle.tip.adapter.b2b.edi.mergeMsgSize=500
oracle.tip.adapter.b2b.qMsgRatio=20
Thanks
Bala
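
These oracle.tip.adapter.b2b.* settings are B2B engine properties. A sketch of applying them, assuming a standard Oracle AS 10g Integration B2B layout (the tip.properties path and the opmn component name below are from a default install, so verify them against yours):

# Edit the B2B engine properties file (default 10g location, assumed)
vi $ORACLE_HOME/ip/config/tip.properties
#   ...and add/adjust the lines suggested in the blog post:
#   oracle.tip.adapter.b2b.sqltablelock=false
#   oracle.tip.adapter.b2b.edi.mergeMsgSize=500
#   oracle.tip.adapter.b2b.qMsgRatio=20

# Restart B2B so the engine picks the properties up (component name may differ)
opmnctl restartproc ias-component=B2B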

Without the sar utility (from the sysstat RPM; I'm pretty sure it is not included in OracleVM) you are pretty much out of luck. As OracleVM is based on OEL5, you could try installing the sysstat RPM from OEL5.
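
A minimal sketch of that route (the yum line assumes the OEL5 repository is configured; the sar options are standard sysstat):

# Install sysstat from the OEL5 repository (or the matching RPM from the media)
yum install sysstat

# CPU utilization: 5-second interval, 3 samples
sar -u 5 3

# Memory utilization over the same window
sar -r 5 3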

Similar Messages

  • B2B - OSB Integration for RosettaNet Document

    Hi,
    I have the following requirement:
    http://<hostname>:<port>/b2b/httpreceiver <-- OSB Business Service (BS) <-- OSB Proxy Service (PS)
    The remote trading partner will be using the endpoint URL of the proxy service (PS) to post the RosettaNet message to our server.
    The same setup is working fine for EDI messages.
    But it is not working for inbound RosettaNet transactions; it throws a Document Protocol identification error.
    Please suggest.
    Thanks,
    Monica

    Hello,
    topic: setting up RosettaNet over the Internet transactions.
    source: http://download-uk.oracle.com/docs/cd/B14099_19/integrate.1012/b19370/tutorial_rn_edi.htm#CACHEGJC
    I am new to B2B. I have created a collaboration on both the Acme and GlobalChip servers using RosettaNet, deployed it, and created agreements on both servers. At the last step, while verifying the purchase order transactions, I ran deq.sh on both servers. The t1.trace file was also generated, but the trace file is empty.
    Can you tell me why the trace file is empty, and please let me know what steps I should follow next?
    Regards,
    sanga kiran

  • XSLT of large documents in Oracle

    Hi,
    I'm trying to run XSLT in Oracle for large documents (say 1 MB) via a stored procedure. It works for small documents, but for large ones I'm getting an error:
    ORA-19011: Character string buffer too small
    ORA-06512: at "SYS.XMLTYPE", line 0
    ORA-06512: at line 1
    ORA-06512: at "MAPSUBMISSION_FN", line 7
    Here's the stored procedure I'm using. How should I increase the buffer?
    thanks
    CREATE OR REPLACE FUNCTION MapSubmission_FN(outputId IN NUMBER, newtaskId IN NUMBER, mapId IN NUMBER)
    RETURN NUMBER
    IS
      newoutputId NUMBER;
    BEGIN
      SELECT task_output_seq.NEXTVAL INTO newoutputId FROM dual;
      INSERT INTO task_output_tbl
      VALUES (newoutputId, newtaskId, 'xml',
              (SELECT XMLTRANSFORM(output_tbl.xml_output_data_lob,
                                   (SELECT xml_data FROM xml_tbl_wrk WHERE temp_id = mapId)
                      ).getStringVal()
               FROM task_output_tbl output_tbl
               WHERE task_output_id = outputId),
              current_timestamp);
      RETURN newoutputId;
    END;
    /

    Would you try changing getStringVal() to getClobVal()? getStringVal() returns the result as a VARCHAR2, which in SQL is capped at 4000 bytes (hence ORA-19011), while getClobVal() returns a CLOB with no such limit.
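
    For illustration, here is the function with that single change applied, wrapped in a sqlplus call. This is a sketch: the connect string is a placeholder, and it assumes the target column can hold a CLOB.
    sqlplus -s scott/tiger@orcl <<'SQL'
    CREATE OR REPLACE FUNCTION MapSubmission_FN(outputId IN NUMBER, newtaskId IN NUMBER, mapId IN NUMBER)
    RETURN NUMBER
    IS
      newoutputId NUMBER;
    BEGIN
      SELECT task_output_seq.NEXTVAL INTO newoutputId FROM dual;
      INSERT INTO task_output_tbl
      VALUES (newoutputId, newtaskId, 'xml',
              (SELECT XMLTRANSFORM(output_tbl.xml_output_data_lob,
                                   (SELECT xml_data FROM xml_tbl_wrk WHERE temp_id = mapId)
                      ).getClobVal()  -- was getStringVal(): a VARCHAR2 capped at 4000 bytes in SQL
               FROM task_output_tbl output_tbl
               WHERE task_output_id = outputId),
              current_timestamp);
      RETURN newoutputId;
    END;
    /
    SQL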

  • Large documents and contents-errors in text - for thesis

    Hi - I'll try to make it short. (Quite experienced user on Mac and Pages - but I hate Word.)
    Large document like a master's/PhD thesis in Pages '09 Version 4.1 (923) (using EndNote X4 for the bibliography - but EndNote is not the problem! I tried it without...) - more than a hundred pages. (Now with Lion 10.7.2 it seems to be even worse than with Snow Leopard.)
    Headings are numbered with up to four "levels" (like: 2.3.1.2 Xxxxxx). I seem to be doing a lot of things right, because the table of contents at the beginning of the document is correct with all the points (the TOC alone is 4 pages long - and everything, including the page numbers, is correct) -
    BUT:
    In the text most sub-headlines or new points are correct - but some are just very wrong, like:
    it should be 8.3.2.1 - instead it is 8.3.6 or even 2.1 - and every time I try to change it, it's just wrong in another or the same way... - but in any case (as long as I am doing it right with tabs and using the right level) the table of contents at the beginning of the document is correct.
    That is exactly why I hate Word - it did just the same crap (but Word also mixed up the footnote numbering - in Pages '09 at least all the footnotes are right).
    Any idea? The next thing I will try is just to copy every paragraph into a new document - but I doubt that it will help - is there anybody at Apple who could help???
    Greetings
    simplex

    I didn't answer either because I found your writing impenetrable, and without the file to work with it's improbable we could talk you through it.
    It sounds to me like you have simply skipped a sequence somewhere and it has lost track of the numbering, or you got it to restart somewhere in an attempt to 'fix' your error.
    Peter

  • Oracle 10g Discoverer Reports & export to xls fails for large reports

    Hi ,
    We have the following configuration:
    1: RHEL 5.4
    2: Discoverer Version 10.1.2.48.18
    3: Oracle 10g Apps Version 10.1.2.0.2
    Issue:
    Most small reports work fine, but when large Discoverer reports are executed the page
    keeps refreshing for 15-20 hours with no output; the same happens for export to XLS.
    But the same reports work fine in Oracle9i with the same data volume.
    Observations:
    When the processes are monitored on Linux with the top command, we observe that the Discoverer process
    dis51ws dies for large reports after 1-2 minutes, and the page keeps refreshing with no output.
    For 1-2 minutes it consumes 50-80% CPU utilisation, then the process disappears and the CPU is 80% idle.
    It seems that 10g Apps being installed on RHEL 5.4, a non-certified OS, may be causing the issue.
    Can anyone add more input in this regard?
    We have checked the logs; below are the log details.
    The logs below give "Logkeys: exceptions discoiv.servlet_exceptions" for this report:
    1:
    OC4J~OC4J_BI_Forms~default_island~1:
    10/04/11 12:11:29 Oracle Application Server Containers for J2EE 10g (10.1.2.0.2) initialized
    10/04/11 12:11:59 Using oracle.reports.util.EnvironmentGlobal class
    10/04/11 14:23:37 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:23:38 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114240115278
    10/04/11 14:25:42 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:25:42 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114254315439
    10/04/11 14:28:53 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:28:54 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114285615691
    10/04/11 14:29:13 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:29:13 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114291315728
    10/04/11 14:32:35 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:32:35 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114323615949
    10/04/11 14:32:48 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:32:48 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114324815982
    10/04/11 14:34:44 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:34:44 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114344616128
    10/04/11 14:34:55 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:34:55 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114345516155
    10/04/11 14:36:25 Tutalii: /oracle10gas/app/oracle10g/discoverer/lib/discoverer5.jar archive
    2:
    Discoverer~SessionServer~12
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Active Eul: EULADMIN
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    DCSCORBAInterface::Delete called
    DCSCORBAInterface destructor called
    DCSCORBAInterface::Delete called
    DCSCORBAInterface destructor called
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    3:
    application.log
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Finding async request action forward
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Calling externalize
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.model.WorksheetModel.getStateString EXT_TOOL: dvtb xk-1ml versionzw1.0w kyxdvtbyxbisltyxbicho vzwwjyxjbisltyxjdvtby
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.model.WorksheetModel.getStateString EXT_VIEW: dv xk-1ml versionzw1.0w kyxdv bazw0w cszw25wyxpc vzw1wjyxjdvy
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.model.WorksheetModel.getStateString EXT_VIEW: lc
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.model.WorksheetModel.getStateString EXT_HS: dvhs
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.model.WorksheetModel.getStateString EXT_DS: dv_ds &qls_z!2=New GL Report.Clndr Id&arq=false&qls_x!14=Sheet 1.Closing Balance (Credit)&qls_x!3=New GL Report.Voucher No&qls_z!3=New GL Report.Frm Prd&fm=xml&qls_z!4=New GL Report.To Prd&qls_x!2=New GL Report.Prd Desc&qls_x!11=Sheet 1.Debit Amount&qls_x!4=New GL Report.Gl Voucher No&qls_x!9=Sheet 1.Opening Balance Debit&tss_s!0=New GL Report.Acct Id,lh,group,false&qls_x!1=New GL Report.Acct Sdesc&qls_z!1=New GL Report.Loc Desc&qls_x!10=Sheet 1.Opening Balance (Credit)&qls_x!12=Sheet 1.Credit Amount&aow=false&qls_x!7=New GL Report.Shrt Code&qls_x!13=Sheet 1.Closing Balance (Debit)&qls_x!6=New GL Report.Pmt Rct Dt&qls_x!8=New GL Report.Dtl Nrtn&sss=true&qls_x!5=New GL Report.Voucher Dt&qls_x!0=New GL Report.Acct Id&qls_z!0=New GL Report.Loc Id&ddsver=1
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.model.ViewerModelImpl.externalize [EXTERN_STATE]: $wksht$%24dv_ds%24%26qls_z%212%3DNew+GL+Report.Clndr+Id%26arq%3Dfalse%26qls_x%2114%3DSheet+1.Closing+Balance+%28Credit%29%26qls_x%213%3DNew+GL+Report.Voucher+No%26qls_z%213%3DNew+GL+Report.Frm+Prd%26fm%3Dxml%26qls_z%214%3DNew+GL+Report.To+Prd%26qls_x%212%3DNew+GL+Report.Prd+Desc%26qls_x%2111%3DSheet+1.Debit+Amount%26qls_x%214%3DNew+GL+Report.Gl+Voucher+No%26qls_x%219%3DSheet+1.Opening+Balance+Debit%26tss_s%210%3DNew+GL+Report.Acct+Id%2Clh%2Cgroup%2Cfalse%26qls_x%211%3DNew+GL+Report.Acct+Sdesc%26qls_z%211%3DNew+GL+Report.Loc+Desc%26qls_x%2110%3DSheet+1.Opening+Balance+%28Credit%29%26qls_x%2112%3DSheet+1.Credit+Amount%26aow%3Dfalse%26qls_x%217%3DNew+GL+Report.Shrt+Code%26qls_x%2113%3DSheet+1.Closing+Balance+%28Debit%29%26qls_x%216%3DNew+GL+Report.Pmt+Rct+Dt%26qls_x%218%3DNew+GL+Report.Dtl+Nrtn%26sss%3Dtrue%26qls_x%215%3DNew+GL+Report.Voucher+Dt%26qls_x%210%3DNew+GL+Report.Acct+Id%26qls_z%210%3DNew+GL+Report.Loc+Id%26ddsver%3D1%24dv%24xk-1ml+versionzw1.0w+kyxdv+bazw0w+cszw25wyxpc+vzw1wjyxjdvy%24wd%24false%24lc%24%24dvtb%24xk-1ml+versionzw1.0w+kyxdvtbyxbisltyxbicho+vzwwjyxjbisltyxjdvtby%24wvs%241101%24dvhs%24$cn$&vct=svd&cnk=cf_a101$ap$%26df%3D%26l%3D%26s%3D%26nc%3D%26dl%3D$expl$&sp=&node=&event=focus&state=(117)&root=63&wbt=2$prid$NEW_GL_REPORT%2F31$ctyp$viewer
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Storing state
    10/04/11 14:37:25 discoverer: [INFO] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Externalize Perf: 8ms
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Saving Attributes
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Saving ApplicationRequest
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Checking for SSO mode
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Saving errors, messages
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Returning final forward: "/ExportProgress.uix" redirect: "false"
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute ----------------------- End Request --------------------------
    10/04/11 14:37:25 discoverer: [INFO] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Total Request Time in AppCtrl: 18
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] org.apache.struts.action.RequestProcessor.processForwardConfig processForwardConfig(ForwardConfig[name=long running operation,path=/ExportProgress.uix,redirect=false,contextRelative=false])
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.view.DiscovererPageBroker.isCacheable Setting cacheable to: true
    4: XML log:
    log2010041114285615691.xml
    <MSG_GROUP>DCS</MSG_GROUP> <PROCESS_ID>15691</PROCESS_ID> <FILE_NAME>dcstim.cpp</FILE_NAME> <LINE_NUMBER>184</LINE_NUMBER> <THREAD_ID>-1283799360</THREAD_ID> <LOG_TIME>Sun Apr 11 14:29:13 2010
    </LOG_TIME> ]]>
    </SUPPL_DETAIL> </PAYLOAD> </MESSAGE>
    <MESSAGE><HEADER><TSTZ_ORIGINATING>2010-04-11T14:29:13+00:00</TSTZ_ORIGINATING> <COMPONENT_ID>DISCOVER</COMPONENT_ID> <MSG_TYPE TYPE="NOTIFICATION"></MSG_TYPE><MSG_LEVEL>4</MSG_LEVEL> <HOST_ID>2010041114285615691</HOST_ID> <MODULE_ID>DCS</MODULE_ID> </HEADER> <PAYLOAD><MSG_TEXT><![CDATA[Timer started.]]></MSG_TEXT> <SUPPL_DETAIL><![CDATA[
    <MSG_GROUP>DCS</MSG_GROUP> <PROCESS_ID>15691</PROCESS_ID> <FILE_NAME>dcstim.cpp</FILE_NAME> <LINE_NUMBER>162</LINE_NUMBER> <THREAD_ID>-1283799360</THREAD_ID> <LOG_TIME>Sun Apr 11 14:29:13 2010
    </LOG_TIME> ]]>
    </SUPPL_DETAIL> </PAYLOAD> </MESSAGE>
    <MESSAGE><HEADER><TSTZ_ORIGINATING>2010-04-11T14:29:13+00:00</TSTZ_ORIGINATING> <COMPONENT_ID>DISCOVER</COMPONENT_ID> <MSG_TYPE TYPE="TRACE"></MSG_TYPE><MSG_LEVEL>5</MSG_LEVEL> <HOST_ID>2010041114285615691</HOST_ID> <MODULE_ID>DCS</MODULE_ID> </HEADER> <PAYLOAD><MSG_TEXT><![CDATA[Return DCSModelInterface::SendReceiveData(kScheduleInterface, inTable, outTable)
    outTable = DCITable
         Length=0
    ]]></MSG_TEXT> <SUPPL_DETAIL><![CDATA[
    <MSG_GROUP>DCS</MSG_GROUP> <PROCESS_ID>15691</PROCESS_ID> <FILE_NAME>dcsmdli.cpp</FILE_NAME> <LINE_NUMBER>259</LINE_NUMBER> <THREAD_ID>-1283799360</THREAD_ID> <LOG_TIME>Sun Apr 11 14:29:13 2010
    </LOG_TIME> ]]>
    </SUPPL_DETAIL> </PAYLOAD> </MESSAGE>
    <MESSAGE><HEADER><TSTZ_ORIGINATING>2010-04-11T14:29:13+00:00</TSTZ_ORIGINATING> <COMPONENT_ID>DISCOVER</COMPONENT_ID> <MSG_TYPE TYPE="NOTIFICATION"></MSG_TYPE><MSG_LEVEL>4</MSG_LEVEL> <HOST_ID>2010041114285615691</HOST_ID> <MODULE_ID>DCS</MODULE_ID> </HEADER> <PAYLOAD><MSG_TEXT><![CDATA[DCSModelInterface::SendReceiveData(kScheduleInterface)]]></MSG_TEXT> <SUPPL_DETAIL><![CDATA[
    <MSG_GROUP>DCS </MSG_GROUP> <PROCESS_ID>15691</PROCESS_ID> <FILE_NAME>dcsmdli.cpp</FILE_NAME> <LINE_NUMBER>226</LINE_NUMBER> <THREAD_ID>-1283799360</THREAD_ID> <LOG_TIME>Sun Apr 11 14:29:13 2010
    </LOG_TIME> <LOG_SIZE>0</LOG_SIZE> <EXTRA_INFO><MethodEnd duration="0.1" sizeChange="0" >
    real 0m0.1s
    user 0m0.900s
    sys 0m0.109s
    </MethodEnd></EXTRA_INFO> ]]>
    </SUPPL_DETAIL> </PAYLOAD> </MESSAGE>
    <MESSAGE><HEADER><TSTZ_ORIGINATING>2010-04-11T14:37:13+00:00</TSTZ_ORIGINATING> <COMPONENT_ID>DISCOVER</COMPONENT_ID> <MSG_TYPE TYPE="NOTIFICATION"></MSG_TYPE><MSG_LEVEL>4</MSG_LEVEL> <HOST_ID>2010041114285615691</HOST_ID> <MODULE_ID>DCS</MODULE_ID> </HEADER> <PAYLOAD><MSG_TEXT><![CDATA[Timer stopped.]]></MSG_TEXT> <SUPPL_DETAIL><![CDATA[
    <MSG_GROUP>DCS</MSG_GROUP> <PROCESS_ID>15691</PROCESS_ID> <FILE_NAME>dcstim.cpp</FILE_NAME> <LINE_NUMBER>184</LINE_NUMBER> <THREAD_ID>-1283799360</THREAD_ID> <LOG_TIME>Sun Apr 11 14:37:13 2010
    </LOG_TIME> ]]>
    </SUPPL_DETAIL> </PAYLOAD> </MESSAGE>
    <MESSAGE><HEADER><TSTZ_ORIGINATING>2010-04-11T14:37:13+00:00</TSTZ_ORIGINATING> <COMPONENT_ID>DISCOVER</COMPONENT_ID> <MSG_TYPE TYPE="NOTIFICATION"></MSG_TYPE><MSG_LEVEL>4</MSG_LEVEL> <HOST_ID>2010041114285615691</HOST_ID> <MODULE_ID>DCS</MODULE_ID> </HEADER> <PAYLOAD><MSG_TEXT><![CDATA[Timer started.]]></MSG_TEXT> <SUPPL_DETAIL><![CDATA[
    <MSG_GROUP>DCS</MSG_GROUP> <PROCESS_ID>15691</PROCESS_ID> <FILE_NAME>dcstim.cpp</FILE_NAME> <LINE_NUMBER>162</LINE_NUMBER> <THREAD_ID>-1283799360</THREAD_ID> <LOG_TIME>Sun Apr 11 14:37:13 2010
    </LOG_TIME> ]]>
    </SUPPL_DETAIL> </PAYLOAD> </MESSAGE>
    Regards,

    Hi,
    As per Note 466697.1 it's a memory error, and we need to increase MaxVirtualDiskMem and MaxVirtualHeapMem.
    But we have already gradually increased MaxVirtualDiskMem and MaxVirtualHeapMem to the very high values below, and the issue remains the same.
    As per the note we are getting the "Logkeys: exceptions discoiv.servlet_exceptions" error, but
    after that we are not getting the error below:
    Unexpected error in state machine: java.lang.OutOfMemoryError
    Hence I presume it's a different issue rather than memory.
    Below are pref.txt values..........
    CacheFlushPercentage = 25          # Percent of cache flushed if the cache is full. Valid values 0 - 100%.
    MaxVirtualDiskMem    = 9294967296  # Maximum amount of disk memory allowed for the data cache. Should be >= MaxVirtualHeapMem.
    MaxVirtualHeapMem    = 4294967296  # Maximum amount of heap memory allowed for the data cache.
    QueryBehavior        = 0           # Action to take after opening a workbook (0 = Run Query Automatically, 1 = Don't Run Query, 2 = Ask for Confirmation)
    We have also raised an SR, and all the settings below were tried as per the SR, except for applying a recent patch.
    ====================================================================
    Discoverer performance is largely determined by how well the database
    has been designed and tuned for queries.
    A well-designed database will perform significantly better than a poorly
    designed database.
    Workbook design can also affect query performance.
    1. Apply latest Discoverer patch as documented in <<Note:237607.1>>
    'ALERT: Required and Recommended Patch Levels For All Discoverer Version'.
    2. Increase the maximum JVM heap memory:
    In general, the default values for the minimum heap (-Xms) memory and
    maximum heap (-Xmx) memory are sufficient.
    However, if your organization consistently runs large Discoverer queries
    then you may benefit from increasing the maximum heap memory from the
    default values.
    This is recommended if your users are typically running large queries via
    Discoverer Viewer.
    Increasing the JVM memory can help to avoid "java.lang.OutOfMemoryError"
    in Discoverer Viewer:
    Please see <<Note 563960.1>>, Best Practice: Configuring The
    OC4J_BI_Forms JVM For
    Discoverer Viewer/Portlet 10g Performance And Stability for specific
    details.
    3. Disable Query Prediction:
    Query Prediction provides an estimate of the time required to retrieve
    the information in a query.
    The prediction is computed before the query runs, which itself takes time.
    Edit the <oracle_home>/discoverer/util/pref.txt on the middle-tier
    server and set:
    QPPEnable=0
    Also set:
    QPPObtainCostMethod = 0
    4. Uncheck the 'Enable fantrap detection' checkbox.
    When the box is checked, every query generated by Discoverer is
    interrogated. Discoverer will detect a fan trap and rewrite the query to
    ensure that the aggregation is done at the correct level.
    Please refer to <<Note:210553.1>>, "Oracle BI Discoverer: Fan Trap
    Resolution - Correct Results Everytime", for more information on fan traps.
    To disable, in Plus go to Tools -> Options -> Advanced -> Fan Trap
    settings.
    In Viewer go to Preferences and uncheck the box.
    5. Control Materialized View/Summary redirection.
    In pref.txt, the parameter:
    MaterializedViewRedirectionBehavior = 0
    ensures that Materialized View (MV) redirection is always performed when
    MVs are available; setting it to 2 disables MV redirection entirely.
    6. Improve query performance by optimizing the SQL.
    In pref.txt, modify/add the following parameters (see the apply/restart sketch after this list):
    SQLFlatten = 0
    SQLItemTrim = 0
    SQLJoinTrim = 0
    UseOptimizerHints = 0
    If SQLFlatten is 1, Discoverer will merge inline views into the query SQL
    wherever possible.
    With SQLItemTrim, Discoverer will remove unused folder items from the
    query SQL where possible; with SQLJoinTrim, it will remove unnecessary
    joins from the query where possible.
    UseOptimizerHints will add optimizer hints to the SQL if set > 0.
    Unnecessarily making Discoverer perform these checks consumes resources
    and may reduce performance rather than increase it, so unless you feel
    these checks are needed for your requirements, set these parameters to zero.
    7. When Discoverer builds a query, it makes a database security
    check to confirm that the user has access to the tables referenced in
    the folders. Avoiding this check can save time.
    So, in pref.txt under the Database section, add:
    ObjectsAlwaysAccessible = 1
    8. Whenever you are creating conditions, ensure that you match the case.
    This in turn can reduce the time Discoverer spends on changing the case
    and matching.
    For example:
    Upper(Department) IN (Upper('VIDEO SALES'))
    9. Ensure that summaries are refreshed periodically in Discoverer
    Administrator.
    10. Increase the amount of memory available for the Discoverer data cache. Please refer to <<Note 245752.1>>, Explaining Oracle BI Discoverer
    Session Memory Management
    And Server Cache Settings.
    11. Performance may be enhanced by enabling OracleAS Web Cache.
    ==================================================================
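    As a practical footnote to items 3 through 8 above: after editing pref.txt you must activate the changes on the middle tier. A sketch, assuming a default 10gR2 layout (verify the applypreferences script name and location against your own install):
    # Edit the preferences file per items 3-8 above
    vi $ORACLE_HOME/discoverer/util/pref.txt
    # Activate the new preferences
    cd $ORACLE_HOME/discoverer/util && ./applypreferences.sh
    # Bounce the Discoverer component so new sessions pick them up
    opmnctl restartproc ias-component=Discoverer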
    Thanks for your reply.
    Regards,

  • About shell scripts for large-scale automation of encoding tasks

    In the Compressor user manual it says that we can use the command line to write shell scripts for large-scale automation of encoding tasks.
    I would like to have more information about shell scripting for Compressor; is there any documentation link?
    Thanks

    You can use a script function to set up a more secure environment that you call at the start of every admin script (see the sketch below). This could be your main stamp album for stuff that can be moved there.
    A few more stamps to add to the collection (be sure to read up on them before use):
    1) reset the command hash
    hash -r
    2) prevent core dumps
    ulimit -H -c0
    3) set the IFS to a sane default
    4) clear all aliases (see unalias -a)
    Also you can remove the ALL from sudo and add explicit commands to the sudoers file. There's a lot of fine-tuning you can do in sudoers - incl. env variables as teekay said.
    But I'm no expert so best to check all of the above.
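    A minimal sketch of such a set-up function covering the stamps above (the umask and PATH lines are my own additions, not from the list):
    #!/bin/bash
    # Environment hardening to call at the top of every admin script.
    harden_env() {
        hash -r                  # 1) reset the command hash
        ulimit -H -c0            # 2) prevent core dumps (hard limit)
        IFS=$' \t\n'             # 3) reset IFS to space/tab/newline
        unalias -a               # 4) clear all aliases
        umask 077                # extra: restrictive default file permissions
        PATH=/usr/bin:/bin:/usr/sbin:/sbin   # extra: fixed, trusted PATH
        export PATH
    }
    harden_env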

  • SharePoint Foundation 2013 Optimization For Large File Transfer?

    We are considering upgrading from WSS 3.0 to SharePoint Foundation 2013.
    One of the improvements we want to see after the upgrade is a better user experience when downloading large files.  It can be done now, but it is not reliable.
    Our document library consists of mostly average sized Office documents, but it also includes some audio and video files and software installer package zip files ranging from 100MB to 2GB in size.
    I know we can change the settings to "allow" larger than default file downloads but how do we optimize the server setup to make these large file transfers work as seamlessly as possible? More RAM on the SharePoint Foundation server? Other Windows,
    SharePoint or IIS optimizations?  The files will often be downloaded from the Internet, so we will not have control over the download speed.

    SharePoint is capable of sending large files, it is an HTTP stateless system like any other website in that regard. Given your server is sized appropriately for the amount of concurrent traffic you expect, I don't see any special optimizations required.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
    I see information like this posted warning against doing it as if large files are going to cause your SharePoint server and SQL to crash. 
    http://blogs.technet.com/b/praveenh/archive/2012/11/16/issues-with-uploading-large-documents-on-document-library-wss-3-0-amp-moss-2007.aspx
    "Though SharePoint is meant to handle files that are up to 2 gigs in size, it is not practically feasible and not recommended as well."
    "Not practically feasible" sounds like a pretty dire warning to stay away from large files.
    I had seen some other links warning that large files in the SharePoint database cause problems with fragmentation and large amounts of wasted space that doesn't go away when files are removed, or that the server may run out of memory because downloaded files are held in RAM.

  • Have Windows XP and Adobe 9 Reader and need to send a series of large documents to clients as a matter of urgency

    I have Windows XP and Adobe Reader 9 and need to send a series of large documents to clients as a matter of urgency. When I convert a 10-page MS Word file to PDF, the result is a 6.7 MB file which can't be emailed. Do I combine them and then copy to JPEG 2000, or do I have to save each page separately, which is very time-consuming? Please advise me how to reduce the size and send 10-plus pages quickly with Adobe without the huge hassles I am enduring.

    What kind of software do you use for the conversion to PDF? Adobe Reader can't create PDF files.

  • How do I increase the font size of a large document?

    Whenever I try to increase the font size of a large document, the text boxes cross their boundaries and mix with each other, or partly disappear at the end of every page. Do I really have to adjust every text box in the document, or is there a faster way? I'm having this issue with Adobe Acrobat 11 Pro.

    This is not a feasible thing to do. Editing a PDF is a desperate last resort, for so many reasons.
    Whatever you need to solve, you are unlikely to solve it in Acrobat. Probably best to export the text and remake the document.

  • How can I change a page position in a large document?,

    How can I change a page position in a large document?

    Question asked and answered many times!
    Insert a section break just before the page to move.
    Insert a section break just after the page to move.
    Select the page's thumbnail.
    Cut.
    Insert a section break where you want to insert the page.
    Paste.
    The required info is available in the Pages User Guide, which isn't delivered just so that helpers may help you.
    Yvan KOENIG (VALLAURIS, France) mercredi 5 octobre 2011 14:33:24
    iMac 21”5, i7, 2.8 GHz, 4 Gbytes, 1 Tbytes, mac OS X 10.6.8 and 10.7.0
    My iDisk is : <http://public.me.com/koenigyvan>
    Please : Search for questions similar to your own before submitting them to the community

  • Can't print large documents.

    For years, we've been printing large documents to our plotter without issue. Suddenly last week, nothing large will print.
    I've seen a few different error messages, and have tried different computers and different versions of Illustrator, but nothing large will print.
    When trying to print, I currently get the following:
    I've also seen an error suggesting not enough memory is available, which is bogus. And I think another error that was more vague, something to the effect of "can't print".
    I've set up a clean install of Illustrator on a clean Windows install; same issue.
    I set up an alternate print server with our RIP software, the same software we've been using for years. Same error on the alternate server.
    Nothing changed, and it's affecting everything here, including new installs. I don't know what happened, but this is getting critical; we need the ability to print.

    EFI eXpress v. 4.1
    This all worked fine until a week ago, so I don't think there should be a hardware/software compatibility issue. The software and hardware were all working fine, and then we got blindsided by this stubborn error that came out of nowhere, won't go away, and has no error message or log that gives us any useful information.
    It isn't a font issue. I've created a basic test document that just consists of the word "test". Using this same font, I've had success printing documents that are 37"x79" and 39"x75". But when I print 39"x79", it fails. I've not zeroed in on the size that seems to be the cutoff, but I knew that 37"x75" worked and 40"x80" didn't, and 39"x79" didn't either. So, playing around with the numbers, I found the sizes above that did work.
    Also, using the general Epson driver that you can get on Epson's site, any print job can be sent without issue. It's just that anything beyond 90 inches gets cut off.
    I suppose this suggests the problem lies with EFI. But a clean install of EFI didn't resolve the issue. What it comes down to for me is that the error is generated by Adobe. So, at the least, I'd like an explanation from Adobe as to what the error is, since their software is generating it and giving me no reason why it failed. Plus, there's no way I can justify to management the cost of bringing in third-party support for the EFI software unless I have a better reason why the EFI software worked last week but not this week, so I really need better data out of Adobe as to why it fails.
    I currently have a paid case open with Adobe, but they seem to be having difficulty as well; so far it's been a lot of time on hold and a little bit of "try this, try that, try this", which is getting nowhere. I need something more methodical to happen, some actual following of data or checking of logs, to get detailed information; hopefully the case will get escalated to someone with more in-depth knowledge of these things.
    I will keep posting the details here as they emerge, because there's nothing I hate more than a forum thread describing a problem I'm experiencing that doesn't end with an answer. I think forums should do a better job of either posting a resolution or purging the cases that don't resolve, to get rid of the useless junk that clogs the internet and wastes people's time searching for answers.

  • [CS5 Win] Switching to InDesign is slow in CS5 for certain documents. XML links?

    Hi,
    I put this question in the scripting part, since you might be the ones to notice such things as this, and it might relate to importing XML by scripting. Also I got no positive response in one of the other forums, "complaining" about the sluggish performance redrawing the interface when setting focus to InDesign CS5. But now I can more clearly see that it is not slow for all documents.
    Having certain documents open in InDesign CS5 makes it very slow to switch to. The CPU peaks at 80-100% and it takes about a second or two until InDesign responds.
    The documents do not necessarily need to be large in size, but it appears that it mostly occurs on documents that I've imported data into (or that our customers have), using our own code, which takes an XML file and portions it out into templates.
    One strange thing I just noticed was that selecting "relink all" by alt-clicking the relink button in the Links palette makes InDesign look for (presumably) previously imported XML files. How can that be?
    I never store any links to the imported XML - as far as I know. Where should I look for that kind of link?
    How and why would previously imported XML file paths be stored? I just import the XML into the structure, "deal with it", and delete the node that it was imported to. (The data is left in a structure that I build up myself.)
    Does anyone know if there is a change related to any of the facts stated above, in InDesign CS5?
    I'll attach a screen shot of the links dialogue (above), when it "looks for" an XML file. Could this kind of "missing file" cause the abovementioned "slowness" every time switching to InDesign from another application?
    (InDesign documents that don't relate to XML doesn't make switching to InDesign slow - but it gets slow when such a document is loaded, even under a tab that is not in focus.)
    Best regards,
    Andreas

    I've uploaded a video to demonstrate that InDesign sometimes can keep references to links:
    http://www.youtube.com/watch?v=OoQOSIlfYl4
    It's obvious that there are thousands of "orphans" in the documents. All XML files ever imported are kept in some way... The links (or the names of them) are obviously not removed from the document when deleting the Element in the XML Structure. Since the communication of the code with a database is built around XML import, the number of imported files is very large.
    Can these orphaned link references somehow be removed? When manually checking for missing files, InDesign loops through all of the XML files as seen in the YouTube video above, but ends up saying that there are no missing files. The next time I do the same check, all links are looped through again.
    Perhaps we should "link to file", using .xmlImportPreferences.createLinkToXML = true
    This property has not been explicitly set, and is therefore false. But is there any way to get rid of all the old link names that are inside the document somehow?
    CS4 does the same thing, but there is "no 2 second lag time" switching from another application to InDesign CS4 with this kind of document open.

  • Indices configuration for XML document analysis (indexing time problems)

    Hi all,
    I'm currently developing a tool for XML document analysis using XQuery. We need to analyse the content of a large CMS dump, so I am adding all documents to a Berkeley DB XML database to be able to run XQueries against it.
    In my last run I ran into indexing speed problems, with single documents (typically 10-20 KB in size) taking around 20 sec to be added to the database after 6000 documents (I've got around 20000 in total). The speed of adding docs to the database drops as the number of documents grows.
    I suspect my index configuration to be the reason for this performance drop. Indeed, I've been very generous with indexes, as we have to analyse the data and don't know the structure in advance.
    Currently my index configuration includes:
    - 2 default indices: edge-element-presence-none and edge-attribute-presence-none, to be able to speed up every possible XQuery used to analyse data patterns, e.g. collection()//table//p[contains(.,'help')]
    - 8 edge-attribute-substring-string indices on attributes we use often (id, value, name, ...)
    - 1 edge-element-substring-string index on the root element of the XML documents, to be able to speed up document searches, e.g. collection()//page[contains(.,'help')]
    So here my questions:
    - Are there any possible performance optimisations in Database config (not index config)? I only set the following:
    setTransactional(false);
    envConf.setCacheSize(1024*64);
    envConf.setCacheMax(1024*256);
    - How can I test various index configurations on the fly? Are there any DB tools that allow me to set/remove indexes?
    - Is my index config suspect? ;-)
    Greetings,
    Nils

    Hi Nils,
    The edge-element-substring-string index on the document element is almost certainly the cause of the slow document inserts - that's really not a good idea. Substring indexes are used to optimize "=", contains(), starts-with() and ends-with() when they are applied to the named element that has the substring index, so I don't think that index will do what you want it to.
    John
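
    On the question of testing index configurations on the fly: the dbxml command-line shell that ships with Berkeley DB XML can manage indexes on a live container, roughly like this (the container and element names are placeholders):
    $ dbxml
    dbxml> openContainer cms.dbxml
    dbxml> listIndexes
    dbxml> delIndex "" page edge-element-substring-string
    dbxml> addIndex "" page node-element-presence-none
    dbxml> quit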

  • CLM - Maximum File Size for Contract Documents

    We did a search but did not find an answer for this.  For the CLM system does anybody know SAP's recommendation on the maximum file size that the application can reliably handle for contract documents?  We recently increased the maximum file size from 10MB to 50MB to accommodate a larger than allowed file size.  Now the business has requested the capacity to add files (pdf files) up to 115MB.  Would increasing our file limit to approximately 150MB be within the recommended parameters?

    There is no specific recommendation for maximum file sizes, as this depends on how your time-out parameters are set, your server resources, and the network (also tunneling policies). The fact is, the higher the file size, the more risk you run of unreliable uploads. Compressing files before uploading is good practice for larger files.
    These are the same issues you would run into if you, for example, uploaded to a SharePoint site or even copied to a shared drive; it all depends on the same parameters.
    Tunneling policies could also affect your performance, as they limit upload speeds according to the priority set on certain network activities; if they limit your bandwidth, you run the risk of time-outs. So, as you see, there is no straight answer to your question. I know customers who can upload 150 MB files without any problem, and others who already have issues with 50 MB.
    Unfortunately, as I do not know your architecture and infrastructure, I can only advise you to stress-test the conditions of 150 MB uploads with several user IDs and see if you get time-outs or if it degrades the network performance. One of the questions you should ask your business is how frequent these 150 MB+ uploads will be, and whether this is the appropriate way. It could be that they are very rare, and in that case you could ask them to do this outside the busy hours of the network, which poses less risk of problems.

  • New FAQ Entry on JVM Parameters for Large Cache Sizes

    I've posted a new [FAQ entry|http://www.oracle.com/technology/products/berkeley-db/faq/je_faq.html#60] on JVM parameters for large cache sizes. The text of it is as follows:
    What JVM parameters should I consider when tuning an application with a large cache size?
    If your application has a large cache size, tuning the Java GC may be necessary. You will almost certainly be using a 64-bit JVM (i.e. -d64), the -server option, and setting your initial and maximum heap sizes with -Xms and -Xmx. Be sure that you don't set the cache size too close to the heap size, so that your application has plenty of room for its data and avoids excessive full GCs. We have found that the Concurrent Mark Sweep GC is generally the best in this environment since it yields more predictable GC results. This can be enabled with -XX:+UseConcMarkSweepGC.
    Best practice dictates that you disable System.gc() calls with -XX:+DisableExplicitGC.
    Other JVM options which may prove useful are -XX:NewSize (start with 512m or 1024m as a value), -XX:MaxNewSize (try 1024m as a value), and -XX:CMSInitiatingOccupancyFraction=55. NewSize is typically tuned in relationship to the overall heap size so if you specify this parameter you will also need to provide a -Xmx value. A convenient way of specifying this in relative terms is to use -XX:NewRatio. The values we've suggested are only starting points. The actual values will vary depending on the runtime characteristics of the application.
    You may also want to refer to the following articles:
    * Java SE 6 HotSpot Virtual Machine Garbage Collection Tuning
    * The most complete list of -XX options for Java 6 JVM
    * My Favorite Hotspot JVM Flags
    Edited by: Charles Lamb on Oct 22, 2009 9:13 AM
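
    Pulled together, an invocation along the lines the FAQ suggests might look like the sketch below; the heap size and jar name are placeholders, not recommendations:
    # 64-bit server JVM, CMS collector, explicit GC disabled
    java -d64 -server -Xms8g -Xmx8g \
         -XX:+UseConcMarkSweepGC \
         -XX:+DisableExplicitGC \
         -XX:NewSize=1024m -XX:MaxNewSize=1024m \
         -XX:CMSInitiatingOccupancyFraction=55 \
         -jar my-je-app.jar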

    First of all, please be aware that HSODBC V10 has been desupported and DG4ODBC should be used instead.
    The root cause of the problem you describe could be a timeout in the ODBC driver (especially given your comment that it happens only for larger tables):
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    indicates the driver or the database abends the connection due to a timeout.
    Check out the wait_timeout MySQL variable on the server and increase it.
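    Checking and raising it, roughly (28800 seconds is only an example value; set it under [mysqld] in my.cnf to make it persistent):
    # Inspect the current value
    mysql -u root -p -e "SHOW VARIABLES LIKE 'wait_timeout';"
    # Raise it for new connections (example: 8 hours)
    mysql -u root -p -e "SET GLOBAL wait_timeout = 28800;"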
