Cache Performance in 4.5.1 under AIX

Hello,
We are running WebLogic 4.5.1 under AIX 4.3.2 (with JDK 1.1), using LDAPRealm
with Netscape Directory Server 4.1. We have about 15 groups and about 300
users spread across them. When trying to validate a user who is in the
first group listed in the deployment descriptor, performance is fine.
The further down the group list in the DD, the longer it takes
(approaching several minutes). As a test, we added a new user further
down the DD list and measured time. With no load (no users except us on
the system), we eventually got a cache miss and a security exception was
thrown. I know that you support only 10 groups and that one of them must be
the "everyone" group. But there are only 302 users, so what's going on? I
need to add about 3000+ users very soon. Can we change the cache size or add
more groups? I've combed through the WebLogic properties files. What is the
user limit of WebLogic with LDAP?
Thank you.
Joe


Similar Messages

  • Can Oracle 8 run under AIX 64 bits OS ?

I would like to know whether Oracle 8 can run under an AIX 64-bit
environment?
If the answer is yes, is there any improvement in performance
compared to running it under an AIX 32-bit environment?
    Hope I can get some answers on this.
    Thanks.
    Regards.
    Choo Kong Jam

    http://platforms.oracle.com/ibm

  • Bar Code Printing in Report under AIX Env

Dear All,
I am using Oracle Forms 11g, Reports 11g, and Oracle 11g DB in an AIX environment.
My client system is Windows 7. When I try to print a report on a network printer, the report prints,
but any field containing a barcode does not print.
Please guide me to a solution for this.
Thanks in advance

Which fonts should I use for printing the report on a network printer under an AIX environment?

No idea, I don't work on AIX. Try this site:
    http://www.idautomation.com/
    2D bar codes are not supported in Reports. You will need a Java solution. See this thread:
    Re: how to use 2 d barcodes in reports 6i
    This thread also has the suggestion to create the 2D bar code as an image. Of course, you still need some other program to create this image first.

  • SCM Live cache performance tunning

I got a new project here: I have to work on SCM liveCache performance tuning and ECC integration with SCM.
If you have any documents, can you please help me?
    - Thanks

Hi,
    Try these
    https://websmp103.sap-ag.de/~sapidb/011000358700000567062006E
    https://websmp103.sap-ag.de/~sapidb/011000358700002213412003E
    https://websmp103.sap-ag.de/~sapidb/011000358700000715082008E
    https://websmp103.sap-ag.de/~sapidb/011000358700007382642002E
    https://websmp103.sap-ag.de/~sapidb/011000358700008748092002E
    Rgds,
    DB49

  • HT4134 what happened to my clear cache button that used to be under the safari drop down menu?

    what happened to my clear cache button that used to be under the safari drop down menu?

Thanks for the help, apm60. It seemed to work.

  • CGI support under AIX ?

    We tried to run a simple program using the CGIServlet
    under WLS 5.1 and AIX 4.3.3. The program isn't called
    at all. In the Developer Center, section "CGIServlet
    translations", we found that CGIServlet is only supported
    under NT, Solaris and HP-UX.
    Q: Will CGIServlet be supported under AIX? If so, when
    is this going to happen?
    Regards,
    Jens

We currently have no plans to support the CGI servlet under AIX. It is
something we are considering for a future release -- or we may instead place
that servlet on our Developer Center web site.
    Thanks,
    Michael
    Michael Girdley
    BEA Systems Inc
    "Jens Lührs" <[email protected]> wrote in message
    news:39a24edc$[email protected]..

  • IBM JSSE SSL is very slow under AIX

When I select the JSSE SSL implementation under AIX, my WLS instance becomes very, very slow. Response times for simple user actions are ~26 sec while the CPU is idle. I select the JSSE implementation in the WLS Admin Console as follows:
    Environment > Servers > AdminServer > SSL > Advanced > Use JSSE SSL
    I configured WLS to use TLSv1 as described in:
    Client-cert atn fails under AIX
    My JDK is IBM J9 SR-9. I'm not able to reproduce this issue with the J9 SR-9 build for Win32.
    Is this a known issue under AIX? And is there any workaround available?
    Thanks in advance!
    Bas
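(No resolution was posted in this thread. One frequently reported cause of very slow JSSE operations on Unix-like JVMs is a blocking SecureRandom entropy source -- this is only an assumption here, not a confirmed diagnosis for this report. A common workaround worth trying is to point the JVM at the non-blocking random device:)

```
-Djava.security.egd=file:/dev/./urandom
```

On Sun/Oracle JDKs the /dev/./urandom spelling is commonly used to bypass a special case that maps the plain /dev/urandom path back to the blocking seed source; whether the IBM J9 JDK behaves the same way would need to be verified.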


  • Cache performance issues

    I was able to add indexes to a cache. This cache holds objects which contain all the rows in one of our database tables.
    I run a sql query using hibernate on the database table, and then run the same query using filters on the cache. I am not noticing a significant performance gain. Is there something I'm doing wrong?
    Here's what I'm trying to do:
QueryMap cache = (QueryMap) this.getPrimaryDAO().getCache();
Filter filterStateEq = new EqualsFilter("getStateCode", state);
Filter filterCompanyEq = new EqualsFilter("getCompanyCode", company);
Filter filterCoverageEq = new EqualsFilter("getCovCode", coverage);
Filter filterEffDateLE = new DateLessEqualsFilter("getEffectiveDate", effectiveDate);
Filter filterExpDateGE = new DateGreaterEqualsFilter("getExpirationDate", effectiveDate);
Filter filterAnd = new AllFilter(new Filter[] {
        filterStateEq, filterCompanyEq, filterCoverageEq,
        filterEffDateLE, filterExpDateGE });
Set filteredSet = cache.keySet(filterAnd);
    Basically I'm trying to simulate a sql query:
    select * from where state =
    and company =
    and covcode =
    and ......

    Hi Asim,
The code looks good, and it is quite natural to expect performance numbers similar to the DB considering you are executing the query on a single thread. Where you will see the difference is under load on multiple machines. Let's say you have 10 nodes configured as cache servers (storage enabled). Each of these nodes will own approximately 10% of the total data in the cache. When a client thread calls a distributed query, it will be executed on each cache server node in parallel against the partial dataset it owns. This means that with Coherence the performance of your queries will scale near linearly. Providing similar scalability with a typical commercial DB is going to be incomparably more expensive.
    Best regards,
    Gary Hawks
    Tangosol
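The scaling argument above can be illustrated with a plain-Java sketch (this is not the Coherence API, just an illustration of the mechanism): the key set is dealt out across partitions, each pretend "cache server" filters only its own partition, and the client unions the partial results -- the per-node work shrinks as nodes are added while the answer stays the same.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch of a partitioned (distributed) query: each partition
// is filtered independently and the partial key sets are merged.
public class PartitionedQuerySketch {

    // Deal the keys out across the given number of partitions.
    static List<List<Integer>> partition(List<Integer> keys, int nodes) {
        List<List<Integer>> parts = new ArrayList<>();
        for (int i = 0; i < nodes; i++) {
            parts.add(new ArrayList<>());
        }
        for (int i = 0; i < keys.size(); i++) {
            parts.get(i % nodes).add(keys.get(i));
        }
        return parts;
    }

    // Each partition applies the predicate on its own (in a real grid, in
    // parallel on separate machines); the union is the query result.
    static Set<Integer> queryEvenKeys(List<Integer> keys, int nodes) {
        Set<Integer> result = new HashSet<>();
        for (List<Integer> part : partition(keys, nodes)) {
            for (Integer k : part) {
                if (k % 2 == 0) {
                    result.add(k);
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> keys = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            keys.add(i);
        }
        // 10 partitions of ~10 keys each; same answer as a single-node scan.
        System.out.println(queryEvenKeys(keys, 10).size()); // prints 50
    }
}
```

With 10 partitions, each "node" scans only ~10 of the 100 keys, which is why the reply above describes near-linear scaling under load.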

  • Cache / performance question...

    Hello,
    I've been running tests against an all-write load to try and understand limitations and BDB internals, and am wondering what causes BDB to begin 'forcing' pages from cache even though clean pages still exist, checkpoints are being regularly run, and trickling is being used? As an all-write load (in a testing environment) continues to operate, I notice performance begins to suffer and cannot recover at roughly the time db_stat begins reporting forced writes of both clean and dirty pages. For example, a write load of 5000 txn/sec degrades to 1000/sec and below after ~500K transactions, and I start to see things like the below in db_stat:
    3219 Clean pages forced from the cache
    7271 Dirty pages forced from the cache
    183004 Dirty pages written by trickle-sync thread
    Even when the cache has clean pages, as the database grows over time, I imagine that new keys may be written to "older" pages. This particular example uses DB_HASH. In this case, although the cache has a clean page, a given key "475000" may be written to an old page already on disk. This would, as far as I understand, require that page to be re-read into the cache to complete the write. However, monitoring vmstat does not show me any inbound IO activity whatsoever during the time that the performance begins to drop. I am therefore not certain that old pages are being read from disk, and confused as to why write performance begins to suffer so greatly when the cache fills up.
    I'm trying this without DB_TXN_NOSYNC/etc - my environment favors steady performance over bursty speed, so I don't want to forward-load anything that might trigger a large number of writes later whenever possible.
    I have sample source and testing results that I can attach, but I'm first trying to understand if I'm not understanding something basic before I post too much information. Thanks! :)
    Edited by: user10542315 on Jan 1, 2009 7:22 PM
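The eviction behaviour described above can be sketched generically (this is not BDB's internal page-replacement code, just the general mechanism): once the set of pages touched exceeds the cache capacity, every new page brought in forces another page out, which is when the "pages forced from the cache" counters start climbing.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Generic sketch of a bounded page cache: a fixed-size LRU map counts how
// many entries are forced out as new distinct pages keep arriving.
public class LruEvictionSketch {
    static int evictions(final int cachePages, int pagesTouched) {
        final int[] evicted = {0};
        Map<Integer, String> cache =
                new LinkedHashMap<Integer, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest) {
                if (size() > cachePages) {
                    evicted[0]++;   // a "page forced from the cache"
                    return true;
                }
                return false;
            }
        };
        for (int page = 0; page < pagesTouched; page++) {
            cache.put(page, "data");    // an all-write load touching new pages
        }
        return evicted[0];
    }

    public static void main(String[] args) {
        // Touch 1000 distinct pages with room for 100: 900 forced evictions.
        System.out.println(evictions(100, 1000)); // prints 900
        // A working set that fits in the cache forces nothing out.
        System.out.println(evictions(100, 50));   // prints 0
    }
}
```

The sketch only models capacity pressure; it says nothing about clean-vs-dirty page handling or trickle-sync, which are the BDB-specific parts of the question.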

    Hello,
    Although I'm not an engineer on the product, and I'm sure one will chime in to help you tune your tests, I thought I'd ask you if you've seen the following white paper http://www.oracle.com/technology/products/berkeley-db/pdf/berkeley-db-perf.pdf ? It's a bit dated, but it is trying to do some of the same things you are doing. We're focusing a lot of resources on performance given the radical shifts in the ratio of cores, hyper-threads, memory, disk-speed, and flash-storage devices. All of these have changed the underlying assumptions one makes when building a database engine ("Will we be memory, or CPU bound? How big a bottle-neck will I/O be? How can we avoid that? Can we max out all the cores/threads of all resources when running purely in memory?"). There are a lot of interesting areas to research and improve. We're doing some of that work in 4.8, more later. I can't be specific, but we're trying our best to keep DB at the bleeding edge of hardware capabilities.
    -greg
    Gregory Burd - Product Manager - Oracle Berkeley DB - [email protected]

  • Installation problem under AIX 5.3 with Oracle 10g

    Hello,
I started an installation on an AIX 5.3 machine with Oracle 10. I installed Oracle 10 and the patches for Oracle 10.2.0.2.0, and I also installed all the interim patches through the MOPatch utility, plus the latest R3trans and R3load for a Unicode installation. But I have 4 errors under phase IMPORT ABAP: in this phase 33 packages completed successfully and 4 packages failed with errors.
    In ImportMonitor.Console.Log i have the following errors:
    Import Monitor jobs: running 1, waiting 4, completed 33, failed 0, total 38.
    Import Monitor jobs: running 2, waiting 3, completed 33, failed 0, total 38.
    Import Monitor jobs: running 3, waiting 2, completed 33, failed 0, total 38.
    Loading of 'SAPSSEXC_4' import package: ERROR
    Import Monitor jobs: running 2, waiting 2, completed 33, failed 1, total 38.
    Loading of 'SAPPOOL' import package: ERROR
    Import Monitor jobs: running 1, waiting 2, completed 33, failed 2, total 38.
    Loading of 'DOKCLU' import package: ERROR
    Import Monitor jobs: running 0, waiting 2, completed 33, failed 3, total 38.
    Import Monitor jobs: running 1, waiting 1, completed 33, failed 3, total 38.
    Loading of 'SAPCLUST' import package: ERROR
    Import Monitor jobs: running 0, waiting 1, completed 33, failed 4, total 38.
    Inside SAPSSEXC_4.log i have the following errors:
    /usr/sap/BEQ/SYS/exe/run/R3load: START OF LOG: 20071130102907
    /usr/sap/BEQ/SYS/exe/run/R3load: sccsid @(#) $Id: //bas/700_REL/src/R3ld/R3load/R3ldmain.c#14 $ SAP
    /usr/sap/BEQ/SYS/exe/run/R3load: version R7.00/V1.4
    Compiled Oct 20 2007 02:05:46
    /usr/sap/BEQ/SYS/exe/run/R3load -i SAPSSEXC_4.cmd -dbcodepage 4102 -l SAPSSEXC_4.log -stop_on_error
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): UTF8
    (DB) INFO: T512CLU deleted/truncated #20071130102908
    myCluster (63.20.Imp): 655: error when retrieving table description for physical table T512CLU.
    myCluster (63.20.Imp): 656: return code received from nametab is 2
    myCluster (63.20.Imp): 299: error when retrieving physical nametab for table T512CLU.
    (CNV) ERROR: data conversion failed.  rc = 2
    (DB) INFO: disconnected from DB
    /usr/sap/BEQ/SYS/exe/run/R3load: job finished with 1 error(s)
    /usr/sap/BEQ/SYS/exe/run/R3load: END OF LOG: 20071130102908
    Under SAPPOOL i have:
    usr/sap/BEQ/SYS/exe/run/R3load: START OF LOG: 20071130102907
    /usr/sap/BEQ/SYS/exe/run/R3load: sccsid @(#) $Id: //bas/700_REL/src/R3ld/R3load/R3ldmain.c#14 $ SAP
    /usr/sap/BEQ/SYS/exe/run/R3load: version R7.00/V1.4
    Compiled Oct 20 2007 02:05:46
    /usr/sap/BEQ/SYS/exe/run/R3load -i SAPPOOL.cmd -dbcodepage 4102 -l SAPPOOL.log -stop_on_error
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): UTF8
    (DB) INFO: ATAB deleted/truncated #20071130102908
    failed to read short nametab of table AT01                           (rc=2)
    (CNVPOOL) conversion failed for row 0 of table  VARKEY = ã ±ã ±ã °â  â  â  â  â  â  â  â  â  â  â  â  â  â  â  â  â  â  â  â  â  â  
    (CNV) ERROR: data conversion failed.  rc = 2
    (DB) INFO: disconnected from DB
    /usr/sap/BEQ/SYS/exe/run/R3load: job finished with 1 error(s)
    /usr/sap/BEQ/SYS/exe/run/R3load: END OF LOG: 20071130102908
I read notes 421554 and 898181 and executed the directions from the notes to change R3load and R3trans.
Do you have any idea how I can proceed with these errors?
    Thank you in advance
    Thanasis Porpodas

    Hi,
look at SAP note 921593 and search for "myCluster";
read that section. The following is a part of that note.
    Symptom:
    During the import into a UNICODE system the following error occurs
    (for example in the SAPCLUST.log):
    myCluster (63.2.Imp): 2085: (Warning:) inconsistent field names(source): physical field K1N05 appears as logic K1N5.
    myCluster (63.2.Imp): 2086: (Warning:) further investigation recommended
    myCluster (63.2.Imp): 1924: error when checking key field consistency for logic table TACOPC    .
    myCluster (63.2.Imp): 1927: logic table is canonical.
    myCluster (63.2.Imp): 1930: received return code 2 from c3_uc_check_key_field_descr_consistency.
    myCluster (63.2.Imp): 1224: unable to retrieve nametab info for logic table TACOPC    .
    myCluster (63.2.Imp): 8032: unable to acquire nametab info for logic table TACOPC    .
    myCluster (63.2.Imp): 2807: failed to convert cluster data of cluster item.
    myCluster: CLU4       *00001*
    myCluster (63.2.Imp): 319: error during conversion of cluster item.
    myCluster (63.2.Imp): 322: affected physical table is CLU4.
    (CNV) ERROR: code page conversion failed              rc = 2
    |
    |                              RSCP - Error
    | Error from:            Codepage handling (RSCP)
    | code:  128  RSCPENOOBJ   No such object
    | Dummy module without real rscpmc[34]
    | module: rscpmm  no:    2 line:    75          T100: TS008
    | TSL01: CPV  p3: Dummy-IPC   p4: rscpmc4_init
    `----
    Cause:
    This problem is caused by incorrect data which should have been removed from the source system before the export.
    Solution:
    There are two possible workarounds:
          1. Modify DDL<dbs>.TPL (<dbs> = ADA, DB2, DB4, DB6, IND, MSS, ORA) BEFORE the R3load TSK files are generated;
                  search for the keyword "negdat:" and add "CLU4" and "VER_CLUSTR" to this line.
          2. Modify the TSK file (most probably SAPCLUST.TSK) BEFORE R3load import is (re-)started;
                  search for the lines starting with "D CLU4 I" and "D VER_CLUSTR I" and change the status (i.e. "err" or "xeq") to "ign" or remove the lines.

  • Using a hashmap for caching -- performance problems

    Hello!
    1) DESCRIPTION OF MY PROBLEM:
I was planning to speed up the computations in my algorithm by using a caching mechanism based on a Java HashMap. But it does not work; in fact, performance decreased instead.
    My task is to compute conditional probabilities P(result | event), given an event (e.g. "rainy day") and a result of a measurement (e.g. "degree of humidity").
    2) SITUATION AS AN EXCERPT OF MY CODE:
    Here is an abstraction of my code:
    ====================================================================================================================================
int numOfEvents = 343;
// initialize cache for precomputed probabilities
HashMap<String,Map<String,Double>> precomputedProbabilities = new HashMap<String,Map<String,Double>>(numOfEvents);
// Given a combination of an event and a result, test if the conditional probability has already been computed
if (precomputedProbabilities.containsKey(eventID)) {
    if (precomputedProbabilities.get(eventID).containsKey(result)) {
        return precomputedProbabilities.get(eventID).get(result);
    }
} else {
    // initialize a new hashmap to maintain the mappings
    Map<String,Double> resultProbs4event = new HashMap<String,Double>();
    precomputedProbabilities.put(eventID, resultProbs4event);
}
// unless we could use the above short-cut via the cache, we have to really
// compute the conditional probability for the specific combination of the
// event and result:
// ... make the necessary computations to compute the variable "condProb" ...
// store in map
precomputedProbabilities.get(eventID).put(result, condProb);
    ====================================================================================================================================
    3) FINAL COMMENTS
After introducing this cache mechanism I encountered a severe decrease in the performance of my algorithm. In total there are over 94 million combinations for which the conditional probabilities have to be computed. But there is a lot of redundancy in this set of feasible combinations; basically it can be brought down to just about 260,000 different combinations that have to be captured in the caching structure. Therefore I expected a significant increase in performance.
    What do I do wrong? Or is the overhead of a nested HashMap so severe? The computation of the conditional probabilities only contains basic operations.
    Only for those who are interested in more details
    4) DEEPER CONSIDERATION OF THE PROCEDURE
    Each defined event stores a list of associated results. These results lists include 7 items on average. To actually compute the conditional probability for a combination of an event and a result, I have to run through the results list of this event and perform an Java "equals"-operation for each list item to compute the relative frequency of the result item at hand. So, without using the caching, I would estimate to perform on average:
7 "equals"-operations (to compute the number of occurrences of this result item in a list of 7 items on average)
plus
1 double division (to compute a relative frequency)
for 94 million combinations.
Considering the computation for one combination (event, result), this means comparing the overhead of the look-up operations in the nested HashMap with the computational cost of performing 7 "equals" operations plus one double division.
    I would have expected that it should be less expensive to perform the lookups.
    Best regards!
    Edited by: Coding_But_Still_Alive on Sep 10, 2008 7:01 AM

    Hello!
Thank you very much! I have performed several optimization steps, but caching is still slower than no caching. This may be due to the fact that the eventIDs and results all share long common prefixes; I am not sure how this affects the computation of the values of the hash method.
    * Attention: result and eventID are given as input of the method
Map<String,Map<String,Double>> precomputedProbs = new HashMap<String,Map<String,Double>>(1200);
HashMap<String,Double> results2Probs = (HashMap<String,Double>) this.precomputedProbs.get(eventID);
if (results2Probs != null) {
    Double prob = results2Probs.get(result);
    if (prob != null) {
        return prob;
    }
} else {
    // so far there are no conditional probs for the annotated results of this event
    // initialize a new hashmap to maintain the mappings
    results2Probs = new HashMap<String,Double>(2000);
    precomputedProbs.put(eventID, results2Probs);
}
// Later, in the computation of the conditional probability, the initialized map
// saves one "get"-operation on "precomputedProbs": the variable results2Probs
// still holds a reference to the inner HashMap<String,Double> entry of the
// HashMap "precomputedProbs"
results2Probs.put(result, condProb);

And... because it was asked for, here is the computation of the conditional probability in detail:
    * Attention: result and eventID are given as input of the method
// the computed conditional probability
double condProb = -1.0;
ArrayList<String> resultsList = (ArrayList<String>) this.eventID2resultsList.get(eventID);
if (resultsList != null) {
    // listSize is expected to be about 7 on average
    int listSize = resultsList.size();
    // sanity check
    if (listSize > 0) {
        // check if the given result is defined in the list of defined results
        if (this.definedResults.containsKey(result)) {
            // Analyze the list for matching results
            int occurrence_count = 0;
            for (int i = 0; i < listSize; i++) {
                if (result.equals(resultsList.get(i))) {
                    occurrence_count++;
                }
            }
            if (occurrence_count == 0) {
                condProb = 0.0;
            } else {
                condProb = ((double) occurrence_count) / ((double) listSize);
            }
            // store the conditional prob. for the specific event/result combination;
            // the variable results2Probs still holds a reference to the inner
            // HashMap<String,Double> entry of the HashMap "precomputedProbs"
            results2Probs.put(result, condProb);
            return condProb;
        } else {
            // mark that the result is not part of the list of defined results
            return -1.0;
        }
    } else {
        throw new NullPointerException("Unexpected happening. Event " + eventID + " contains no result definitions.");
    }
} else {
    throw new IllegalArgumentException("unknown event ID: " + eventID);
}

I have performed tests on a decreased data input set: I processed only 100,000 result instances instead of about 250K. This means there are 343 * 100K = 34,300K = 34.3M calls of my method per iteration of the algorithm. I performed 20 iterations. With caching it took 8 min 5 sec, without caching only 7 min 5 sec.
    I also tried to profile the lookup-operations for the HashMaps, but they took less than a ms. The same as with the operations for computing the conditional probability. So regarding this, there was no additional insight from comparing the times on ms level.
    Edited by: Coding_But_Still_Alive on Sep 11, 2008 9:22 AM
    Edited by: Coding_But_Still_Alive on Sep 11, 2008 9:24 AM
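The nested-map bookkeeping discussed in this thread is easy to get wrong. On a Java 8+ JDK the same two-level memoization can be written with computeIfAbsent; this is a sketch only, and the compute() method is a hypothetical stand-in for the actual conditional-probability computation, not the poster's code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a two-level cache using java.util.Map.computeIfAbsent (Java 8+).
// compute() is a hypothetical stand-in for the real probability computation.
public class ProbCache {
    private final Map<String, Map<String, Double>> cache = new HashMap<>();

    // Stand-in computation: a deterministic pseudo-probability in [0, 1).
    static double compute(String eventID, String result) {
        return Math.floorMod((eventID + "|" + result).hashCode(), 1000) / 1000.0;
    }

    // One lookup per level; the inner map is created on first use, and the
    // value is computed at most once per (eventID, result) pair.
    double probability(String eventID, String result) {
        return cache
                .computeIfAbsent(eventID, k -> new HashMap<>())
                .computeIfAbsent(result, r -> compute(eventID, r));
    }
}
```

Note that any such cache only pays off when the computation is expensive relative to two map lookups plus String hashing; with roughly 7 equals() calls and one division per computation, as estimated above, the cache can indeed be a net loss.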

  • WebStart Fails to load cached app when disconnected from internet under Apple OS-X

    Java WebStart broken under OS-X
    Our Java WebStart application fails to re-launch when disconnected from the internet.
    Instructions:
    Running OS-X 10.8 or 10.9 with internet connected.
    Launch a WebStart application (e.g.. from http://www.dsuk.biz/Downloads/SmartType/SmartType2.0.19/APPLICATION.JNLP)
    Disconnect from internet.
    Relaunch the application either with the downloaded JNLP file or the installed desktop shortcut.
    I get "Unable to launch application" error with following exception:
    java.net.UnknownHostException: www.dsuk.biz
      at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
      at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
      at java.net.Socket.connect(Socket.java:589)
      at java.net.Socket.connect(Socket.java:538)
      at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
      at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
      at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
      at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
      at sun.net.www.http.HttpClient.New(HttpClient.java:308)
      at sun.net.www.http.HttpClient.New(HttpClient.java:326)
      at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1167)
      at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1103)
      at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:997)
      at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:931)
      at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1511)
      at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1439)
      at com.sun.deploy.net.HttpUtils.followRedirects(Unknown Source)
      at com.sun.deploy.net.BasicHttpRequest.doRequest(Unknown Source)
      at com.sun.deploy.net.BasicHttpRequest.doGetRequestEX(Unknown Source)
      at com.sun.deploy.cache.ResourceProviderImpl.checkUpdateAvailable(Unknown Source)
      at com.sun.deploy.cache.ResourceProviderImpl.isUpdateAvailable(Unknown Source)
      at com.sun.deploy.cache.ResourceProviderImpl.getResource(Unknown Source)
      at com.sun.deploy.cache.ResourceProviderImpl.getResource(Unknown Source)
      at com.sun.javaws.Launcher.updateFinalLaunchDesc(Unknown Source)
      at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source)
      at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source)
      at com.sun.javaws.Launcher.launch(Unknown Source)
      at com.sun.javaws.Main.launchApp(Unknown Source)
      at com.sun.javaws.Main.continueInSecureThread(Unknown Source)
      at com.sun.javaws.Main.access$000(Unknown Source)
      at com.sun.javaws.Main$1.run(Unknown Source)
      at java.lang.Thread.run(Thread.java:744)
    I have reproduced with Java build 1.8.0-b132 and 1.7.0_51
    Java Web Start 11.0.2.132
    Using JRE version 1.8.0-b132 Java HotSpot(TM) 64-Bit Server VM
    Re launching the application under Windows 7 when disconnected from the internet seems to work fine (tested in virtual Windows 7 64 bit VM under Parallels). It used to work fine on OS-X!
    RANT/QUESTION
    WebStart over the last two years has given me no end of headaches with deployed applications mysteriously breaking when either Oracle or Apple have made changes. So far WebStart just has not been a reliable and robust means of deployment.
    Is WebStart to be taken seriously as a deployment vehicle for a commercial application or should I be looking at other solutions?
    Also: How do I report bug to Oracle? Do you have to pay for support in order to report bugs?


  • File system cache performance

    hi.
    i was wondering if anyone could offer any insight into how to
    assess the performance of the file system cache. I am interested
    in things like hit rate (which % of pages read are coming from the
    cache instead of from disk), the amount of data read from the cache
    over a time span, etc.
    outside of the ::memstat dcmd for mdb, i cannot seem to find a whole lot about this topic.
    thanks.

    sar will give you what you need....

  • Performance of PL/SQL-packages under Oracle 11gR2

    Under Oracle 9i I have used PL/SQL-packages/procedures to perform complicated initializations of the tables of a database schema.
This was always a long job, but an execution time of about 4 hours was acceptable!
    Now I changed to Oracle 11g.
    And now there is the following behaviour:
When I create a NEW instance of the database and then create the schema, the execution time (using the same PL/SQL packages as in Oracle 9i) is more than 12 hours, which is not acceptable anymore!
When I only drop the schema (in the EXISTING instance) with a cascading drop of the schema-owner user, and then create the schema again, the execution time for the same initialization is less than 3 hours, which is OK.
    Does anyone have an idea about the reason for such a 'strange' behaviour?
    ... Or does anyone have a hint where I could look for such reasons?

Hi,
did you compare the execution plans in 9i and 11gR2?
When you moved to 11gR2, did you keep the 9i statistics, so that in case of a regression 11g can use the 9i plan?
thanks

  • Problem with new-line-character and java.io.LineNumberReader under AIX

    Hi folks,
I got the following problem: I wrote a little parser that reads in a plain-text, tabulator-separated, line-formatted logfile (and later on saves the data to a two-dimensional Vector). This logfile was originally generated by an AIX ksh script; however, I copied it to my Windows machine to work with it (for I'm using a Java editor that runs under Windows).
For some reason, Windows, and what is worse Java too, seems not to correctly recognize the new-line character (in the API it is written that this should be a newline '\n' or a carriage-return '\r' or one followed by the other) that marks the end of a line in the logfile.
Also, when I open the logfile with the Notepad editor, this special character does not seem to be recognized; every line is appended right after the other.
On the other side, when I open the logfile with the built-in editor in the CMD shell ("DOS shell"), the newline chars seem to be recognized correctly.
But when I start my parser on the AIX machine, the newline does not seem to be recognized correctly either.
I tried to read the logfile into MS Excel and save it as a plain-text, tabulator-separated, line-formatted logfile again; with such files my parser works fine both on AIX and on Windows.
    Any ideas? Anybody got over the same problem already?
    Greetz FK

Under Windows, text files' lines are usually delimited by \r\n,
under Unix/Linux/AIX etc. by \n,
and under (classic) Mac by \r.
I recommend the following editors, which can handle files with Unix- and Windows-style line delimiters or convert between these types:
Programmer's File Editor (PFE; available on Windows)
The Nirvana Editor (http://www.nedit.org/; available on Unix, MacOS, Windows)
(BTW, good old vim can handle that too. Transferring text files to Windows in order to edit them, even using Excel for this purpose, suggests you are a UNIX newbie (I mean no offense by writing this), so vim is probably beyond your reach for the moment.)
Java normally assumes the line delimiters of the platform where it is running, so transferring the file from Unix to Windows might be disturbing things.
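In fact, java.io.BufferedReader.readLine() (and its subclass java.io.LineNumberReader, named in the title) accepts '\n', '\r', and "\r\n" as line terminators, so a parser that reads line-by-line gets the same lines regardless of which platform wrote the file. A minimal sketch:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// readLine() treats '\n', '\r', and "\r\n" all as end-of-line, so Unix-,
// Windows-, and old-Mac-style files yield the same sequence of lines.
public class LineEndings {
    static int countLines(String text) throws IOException {
        BufferedReader r = new BufferedReader(new StringReader(text));
        int lines = 0;
        while (r.readLine() != null) {
            lines++;
        }
        return lines;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(countLines("a\nb\nc"));     // Unix:    prints 3
        System.out.println(countLines("a\r\nb\r\nc")); // Windows: prints 3
        System.out.println(countLines("a\rb\rc"));     // old Mac: prints 3
    }
}
```

So if the parser reads with readLine() rather than scanning for a single delimiter character, no conversion of the logfile should be necessary in the first place.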
