HP-UX vs. NT performance issue

I have a J2EE application that I've recently ported from a WinNT development box
to a 2-CPU HP L-class server. I am running WebLogic 6.1 SP1 on both machines.
On a particularly memory-intensive piece of the application, I've noticed a performance
drop of 50% to 75% from the NT box to the HP box.
Each server is running the JDK 1.3.1 (w/HotSpot) that comes with the WebLogic install.
I have set all the HP system variables (max_thread_proc, etc.) to the BEA recommendations.
To benchmark, I used an XML->PDF engine (Apache's FOP) to render a 200-page PDF
document from a static XML file. No database connection is involved. Below are
the results I see. Initial heap size = max heap size in each case.
HP server (HP-UX, 2 x 550 MHz, 2 GB RAM):
Heap Size=64M, 1 report = 545ms/page (frequent out-of-memory errors)
Heap Size=64M, 2 reports = didn't try
Heap Size=256M, 1 report = 372ms/page
Heap Size=256M, 2 reports = 1700ms/page
Heap Size=512M, 1 report = 350ms/page
Heap Size=512M, 2 reports = 1675ms/page
NT dev box (NT4 Workstation, PIII 933 MHz, 512 MB RAM):
Heap Size=64M, 1 report = 251ms/page
Heap Size=64M, 2 reports = 750ms/page
Heap Size=256M, 1 report = 245ms/page
Heap Size=256M, 2 reports = 500ms/page
Does this kind of performance gap seem normal? I can see a drop in speed due to
the processors, but two reports at once should be faster on the HP box, right?
It seems WebLogic sometimes switches over to the second processor, but not usually
until I run 3 or more reports at once.
The only difference I can see for sure between the two boxes is that the NT machine
performs at least 10 times as much garbage collection. (Sometimes several times
per page, as opposed to once every 8-10 pages on the NT box.) Garbage collection
occurs a little more frequently on the HP box when I lower the heap size, but still
not nearly as often as on NT, at any heap size. Also, the HP box runs out of
memory if I lower the heap size.
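For reference, here is the kind of probe I could drop into the render loop to compare the two VMs directly (a minimal sketch; renderPage() is a stand-in for the actual FOP call, and the placeholder allocation is purely illustrative):

public class HeapProbe {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        for (int page = 1; page <= 200; page++) {
            long start = System.currentTimeMillis();
            renderPage(page);   // stand-in for the FOP rendering call
            long elapsed = System.currentTimeMillis() - start;
            // Used heap after the render; how often this figure drops
            // back down shows how differently the two VMs collect.
            long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            System.out.println("page " + page + ": " + elapsed
                    + " ms, heap used " + usedMb + " MB of "
                    + (rt.totalMemory() / (1024 * 1024)) + " MB");
        }
    }

    private static void renderPage(int page) {
        // placeholder allocation roughly simulating a page render
        byte[] scratch = new byte[512 * 1024];
    }
}

Running both JVMs with -verbose:gc (and -Xms equal to -Xmx, as in the numbers above) would also make the collection counts directly comparable.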
Is this just some difference in the HotSpot implementations? Tomorrow we are going
to try installing the HP-UX native JDK 1.3.1 from Sun and pointing to that instead
of the WebLogic JDK. But I have a feeling they are the same.
If anyone has any ideas about how I can boost the performance on the HP box, or
what might be causing this gap, I'd very much appreciate hearing them.
Thanks a lot,
Matt Savino
Quest Diagnostics

Thanks Mike. You were right, I meant to say "once every 8-10 pages on the HP box".
The thing that really bugs me is how much more the HP box degrades when I run
two reports concurrently.
And we are bent on using HP because it is a Corporate Standard. Amen.
So are you saying that JVMs run better on Windows, Solaris, or other forms of
Unix than on HP-UX? I'd be very interested in hearing any general opinions on this.
thx,
Matt Savino
"Mike Reiche" <[email protected]> wrote:
Yup, the NT box is faster; looks pretty normal to me. More concurrent requests
(10+) might be a bit of an equalizer. Why are you bent on using HP?
Your comments regarding who does more garbage collection are not clear - I think
you wrote 'once every 8-10 pages on the NT box' when you meant HP. I think you're
saying that the NT box performs more GC than the HP box.
Mike
"Matt Savino" <[email protected]> wrote:
I have a J2EE application that I've recently ported from a WINNT development
box
to a 2-CPU HP L-class server. I am running Weblogic 6.1, SP1 on both
machines.
On a particularly memory-intensive piece of the application, I've notice
a performance
drop of 50% to 75% from the NT box to the HP box.
Each server is running the JDK 131 (w/Hotspot) that comes with the Weblogic
install.
I have set all the HP system variables (max_thread_proc, etc.) to Bea
recommendations.
To benchmark I used an XML->PDF engine (Apache's FOP) to render a 200
page PDF
document from a static XML-file. No database connection is involved.
Below are
the results I see. Initial heap size=max heap size in each case.
HP server (HP-Unix, 2x550 mhz, 2GB ram):
Heap Size=64M, 1 report = 545ms/page (frequent out of memory errors)
Heap Size=64M, 2 reports = didn't try
Heap Size=256M, 1 report = 372ms/page
Heap Size=256M, 2 reports = 1700ms/page
Heap Size=512M, 1 report = 350ms/page
Heap Size=512M, 2 reports = 1675ms/page
Nt dev box (NT4-Worksatation, PIII-933mhz, 512MB ram):
Heap Size=64M, 1 report = 251ms/page
Heap Size=64M, 2 reports = 750ms/page
Heap Size=256M, 1 report = 245ms/page
Heap Size=256M, 2 reports = 500/page
Does this kind of perfomance gap seem normal? I can see a drop in speed
due to
the processors. But two reports at once should be faster on the HP box
right?
It seems sometimes Weblogic switches over to two processors, but not
usually until
I run 3 or more reports at once.
The only difference I can see for sure between the two boxes is that
the NT machine
performs at least 10 times as much garbage collection. (Sometimes several
times
per page, as opposed to once every 8-10 pages on the NT box.) Garbage
collection
occurs a little more frequenly on the HP box when I lower the heap size,
but still
not nearly as often as on the NT--at any heap size. Also the HP boxruns
out of
memory if I lower the heap size.
Is this just some difference in the hotspot implementations? Tomorrow
we are going
to try installing the HP-UX native jdk131 from Sun and pointing to that
instead
of the Weblogic jdk. But I have a feeling they are the same.
If anyone has any ideas about how I can boost the performance on the
HP box, or
what might be causing this gap, I'd be appreciate hearing them verymuch.
Thanks a lot,
Matt Savino
Quest Diagnostics

Similar Messages

  • LG G2 KitKat SEVERE battery drain and performance issues

    After updating my LG G2 to KitKat the other day, my phone performance and battery life have been absolutely awful. Previously I could go 24 hours on a charge with moderate use, and now I'm lucky if I get 4 hours with it sitting on my desk unused with battery saver on. I also get stuttering while navigating, running apps, etc., and it freezes often. I can hardly even use my phone at this point. Anyone else encounter these problems? Anyone know what's wrong?

    I'm having the exact same issues along with many others. I can no longer send or receive texts either, answering a call is a nightmare. My phone used to charge to 100% in about 1.5 hours and last for roughly 20-24 hrs. At the moment it's been charging 1.5 hours and it just hit 17%. Within the first hour of the update my phone went from 93% to 15%. Once it died, I plugged it in and it charged decently fast back to 100% so I unplugged it and sat down and watched it drop a percent per minute until it died on me again. I ran task killers and deleted apps and managed to get 3 hours of life off the 3rd charge of the day and now it's not even wanting to charge fast at all. It drains faster than it charges. I plugged it in to charge yesterday at 4 pm. At 2 am it was finally fully charged and I unplugged it to see if it was gonna behave and it was dead by 7:30 am. I was asleep during that entire time frame so it was NOT being used. Oh and it now overheats so badly that it burns me if I touch my face/ear to it.
    I want to try a hard factory reset but I have over 1000 photos to back up and now my computer will no longer recognize the phone, it won't install any drivers, it's all errors. So I tried Wireless Storage which I already had set up and was working great and now it won't even turn on on the phone. It's useless. So I resorted to the Verizon Cloud and I can't access any of my photos now anywhere else other than my phone because to activate my account I have to receive the temp password from Verizon in a text but it won't send or receive texts at all on either wifi or data. KitKat as good as bricked my 3 week old phone. My hubby's phone on the other hand, is working just fine post update. ARGH!

  • General Ledger Accounting (New): Line Items 0FIGL_O14 Performance issue

    Dear Forum,
    We are facing a performance issue while loading data into 0FIGL_O14 General Ledger Accounting (New): Line Items, from cube ZMMPRC01 -> ODSO 0FIGL_O14 DSO.
    Please see my requirement below for updating the data to the 0FIGL_O14 DSO.
    This report is generated to display Dry Dock and Running Repair expenses for particular purchase orders with their respective G/Ls.
    1) The G/L DSO provides the 0DEBIT_LC and 0DEB_CRE_DC foreign currency amounts with signs (+/-).
    2) The ZMMPRC01 cube provides the 0ORDER_VALUE (purchase order value) and 0INVCD_AMNT (invoice amount).
    While loading the data from cube ZMMPRC01 -> ODSO 0FIGL_O14 DSO, we have created nearly 19 InfoObject-level routines to derive the data for the fields below for MM purchase order related records.
    0CHRT_ACCTS    Chart of accounts
    0ITEM_NUM      Number of line item within accounting document
    0AC_DOC_NO     Accounting document number
    0GL_ACCOUNT    G/L Account
    0COMP_CODE     Company code
    0COSTCENTER    Cost Center
    0CO_AREA       Controlling area
    0COSTELMNT     Cost Element
    0SEGMENT       Segment for Segmental Reporting
    0BUS_AREA      Business area
    0FUNC_AREA     Functional area
    0AC_DOC_NR     Document Number (General Ledger View)
    0AC_DOC_TYP    Document type
    0POST_KEY      Posting key
    0PSTNG_DATE    Posting date in the document
    0DOC_CURRCY    Document currency
    0LOC_CURTP2    Currency Type of Second Local Currency
    0CALQUART1     Quarter
    0CALYEAR       Calendar year
    For reference, please see the logic below used to derive the data for a PO-related record.
    DATA:
          MONITOR_REC    TYPE rsmonitor.
    $$ begin of routine - insert your code only below this line        -
        ... "insert your code here
        " Lookup structure for the G/L fields to be derived from the DSO
        TYPES: BEGIN OF ty_figl,
                 chrt_accts TYPE /BI0/OICHRT_ACCTS,
                 item_num   TYPE /BI0/OIITEM_NUM,
                 ac_doc_no  TYPE /BI0/OIAC_DOC_NO,
                 gl_account TYPE /BI0/OIGL_ACCOUNT,
               END OF ty_figl.
        DATA: it_figl TYPE STANDARD TABLE OF ty_figl,
              wa_figl TYPE ty_figl.
        " Read the matching G/L record for this PO item from the DSO's active table
        SELECT SINGLE chrt_accts item_num ac_doc_no gl_account
          FROM /BI0/AFIGL_O1400
          INTO wa_figl
          WHERE doc_num       = SOURCE_FIELDS-doc_num
            AND doc_item      = SOURCE_FIELDS-doc_item
            AND /bic/z_pcode  = SOURCE_FIELDS-/bic/z_pcode
            AND /bic/z_voy_no = SOURCE_FIELDS-/bic/z_voy_no
            AND fiscyear      = SOURCE_FIELDS-fiscyear.
        IF sy-subrc = 0.
          RESULT = wa_figl-ac_doc_no.
        ENDIF.
        CLEAR wa_figl.
    Please note the same kind of logic is applied for all the above-mentioned fields.
    Here are my concerns and issues.
    For all the above routines I am referring to the /BI0/AFIGL_O1400 DSO and finally loading to the same DSO (/BI0/AFIGL_O1400).
    The worrying part is that my DSO 0FIGL_O1400 currently has nearly 60 lakh (6 million) records, and the MM cube has nearly 55 requests which are required to update the above DSO with the PO-related PO value and invoice amount.
    The big issue here is that while uploading data from the MM cube to the DSO, if for example a request has 25,000 records, only about 500-600 of them get updated to the DSO.
    But it is taking a huge amount of time (nearly 3 days per request) to update these records, and like this I have to pull 50 more requests from the cube to the DSO as per the requirement.
    Please note that as of now I haven't created any indexes on the DSO to improve these loads.
    Please note I am facing this issue in the Production environment and need your help ASAP.
    Thanks & Regards,
    Srinivas Padugula

    Hi,
    If selecting data from 0FIGL_O14 is taking a long time then you can create secondary indexes on the DSO.
    0FIGL_O14 would be huge, as its data volume directly corresponds to the data volume in BSEG.
    But for your requirement, I think what you can do is:
    1. Create a MultiProvider on top of the DSO and the cube and create a BEx report to give you the required fields from both InfoProviders; you can then use the open hub or APD approach to keep the data in a staging table or a direct-update DSO, and then load the data to the DSO.
    2. Create secondary indexes on the DSO so that fetching is faster.
    3. Do the enhancement at the R/3 level to fetch the MM fields during the G/L load.
    Regards,
    Pravin Karkhanis.

  • Performance issue in Oracle 10.1.0.4

    Hi,
    I have a database with version 10.1.0.4 in a Solaris environment, and I am having a strange problem with open cursors. I have set the open_cursors parameter to 1000, but in the snapshot, cursors/session shows 1200, and the count is gradually increasing, sometimes reaching 15000. Secondly, the parse-to-execute ratio is ~86%.
    I am not getting any error related to open_cursors, but users are reporting very slow performance. The application uploads bulk logs into the database and uses bind variables.
    RAM -> 8G
    The value of below parameters:
    open_cursors=1000
    session_cached_cursors=150
    cursor_sharing=similar
    SGA=3.5GB
    Please help to find out the root cause of the issue and suggest how to resolve it.
    Thanks in advance,
    Subash

    select 'session_cached_cursors' parameter,
           lpad(value, 5) value,
           decode(value, 0, ' n/a', to_char(100 * used / value, '990') || '%') usage
      from ( select max(s.value) used
               from v$statname n, v$sesstat s
              where n.name = 'session cursor cache count'
                and s.statistic# = n.statistic# ),
           ( select value
               from v$parameter
              where name = 'session_cached_cursors' )
    union all
    select 'open_cursors', lpad(value, 5),
           to_char(100 * used / value, '990') || '%'
      from ( select max(sum(s.value)) used
               from v$statname n, v$sesstat s
              where n.name in ( 'opened cursors current',
                                'session cursor cache count')
                and s.statistic# = n.statistic#
              group by s.sid ),
           ( select value
               from v$parameter
              where name = 'open_cursors' );
    And check which sessions have many cursors open:
    select a.value, s.username, s.sid, s.serial#, s.machine
      from gv$sesstat a, gv$statname b, gv$session s
     where a.statistic# = b.statistic#
       and s.sid = a.sid
       and b.name = 'session cursor cache count'
     order by a.value;
    Or get information on sessions with more than 1000 open cursors ;)
    select SADDR, SID, USER_NAME, ADDRESS, HASH_VALUE, SQL_ID, SQL_TEXT
      from v$open_cursor
     where sid in ( select sid
                      from v$open_cursor
                     group by sid
                    having count(*) > 1000 );

  • Regarding performance issue

    hi,
    i want to retrieve the no. of records available for a 6 month period. if i write a single query to retrieve data for 6 months, it gets timed out in the jsp. i am using java and jsp for this.
    so using a function call, i want to retrieve the no. of records available weekly or monthly, and then get the total no. of records available for the 6 months.
    how do i split the date range into smaller subsets in java?
    (not even a single day should be left out in between)
    is any built-in function available for this?
    how to do this?
    it's urgent.. can anyone help me?
    thanks,
    Sri

    > hi,
    > i want to retrieve the no. of records available for a 6 month period. if i write a single query to retrieve data for 6 months, it gets timed out in the jsp. i am using java and jsp for this.
    If you have a huge amount of data coming from that single query, wouldn't it be better to do performance tuning on the related database tables?
    Fixing the performance issue at the end point may not provide much improvement.
    :D
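    For the splitting itself, something along these lines would work - a minimal sketch using java.util.Calendar with month-sized chunks (step with Calendar.WEEK_OF_YEAR instead of Calendar.MONTH for weekly chunks); the class name is made up:

    import java.util.ArrayList;
    import java.util.Calendar;
    import java.util.Date;
    import java.util.List;

    // Splits [start, end) into contiguous month-sized sub-ranges with no
    // gaps, so each chunk can be counted separately and the per-chunk
    // counts summed for the 6-month total.
    public class DateRangeSplitter {
        public static List split(Date start, Date end) {
            List ranges = new ArrayList();          // holds Date[2] pairs
            Calendar cur = Calendar.getInstance();
            cur.setTime(start);
            while (cur.getTime().before(end)) {
                Date chunkStart = cur.getTime();
                cur.add(Calendar.MONTH, 1);         // step one month forward
                Date chunkEnd = cur.getTime().before(end) ? cur.getTime() : end;
                ranges.add(new Date[] { chunkStart, chunkEnd });
            }
            return ranges;
        }
    }

    Since every chunk ends exactly where the next one starts, no day is left out in between; each Date[] pair can then be bound into the WHERE clause of the per-period count query.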

  • Performance issue on MATERIALIZED view

    Hi Gurus,
    Need help understanding a performance issue on a MATERIALIZED view. I have created a mat view on a big table of 76 GB; the mat view is 25 GB.
    I refreshed the mat view before running the report. In OEM it is showing a full table scan with an estimated time of 2 hrs, whereas a full table scan on the base table on which the mat view is created takes 20 mins. I am using fast refresh on demand.
    We are using Oracle 10.2.0.4 on the Sun SPARC 64-bit Solaris 10 platform.
    Could you please let me know what could be the reason why mat views are performing poorly?
    Thanks & Regards

    You have an MLOG created on your master table, right?
    OK, then check DBA_MVIEWS. Look for LAST_REFRESH_TYPE; if everything is OK, it should be FAST.
    If everything is OK so far, the problem can be the nature of the master table. If there is a great amount of change in the master table, set up a periodic job which will refresh often (since 'often' is a fuzzy word, it can be every 5, 10, 30 minutes...). Incremental refresh will perform better if there is a small amount of changes in the MLOG table!
    You can check your MLOG table. If it is huge (size in MB) and there are only a few records in it, try to reallocate the space by issuing "ALTER TABLE MLOG$_<table_name> MOVE;"
    Hope something will be helpful...
    Best regards

  • Cluster file systems performance issues

    hi all,
    I've been running a 3 node 10gR2 RAC cluster on linux using OCFS2 filesystem for some time as a test environment which is due to go into production.
    Recently I noticed some performance issues when reading from disk so I did some comparisons and the results don't seem to make any sense.
    For the purposes of my tests I created a single node instance and created the following tablespaces:
    i) a local filesystem using ext3
    ii) an ext3 filesystem on the SAN
    iii) an OCFS2 filesystem on the SAN
    iv) and a raw device on the SAN.
    I created a similar table with the exact same data in each tablespace, containing 900,000 rows, and created the same index on each table.
    (I was trying to generate an I/O-intensive select statement, but also one which is realistic for our application.)
    I then ran the same query against each table (making sure to flush the buffer cache between each query execution).
    I checked that the explain plan were the same for all queries (they were) and the physical reads (from an autotrace) were also comparable.
    The results from the ext3 filesystems (both local and SAN) were approx 1 second, whilst the results from OCFS2 and the raw device were between 11 and 19 seconds.
    I have tried this test every day for the past 5 days and the results are always in this ballpark.
    we currently cannot put this environment into production as queries which read from disk are cripplingly slow....
    I have tried comparing simple file copies at the OS level and the speed differences are not apparent - so the issue only manifests itself when the data is read via an Oracle DB.
    Judging from this, and many other forums, OCFS2 is in quite wide use, so this cannot be an inherent problem with this type of filesystem.
    Also, given the results from my raw device test, I am not sure that moving to ASM would provide any benefits either...
    If anyone has any advice, I'd be very grateful.

    Hi,
    spontaneously, my question would be: How did you eliminate the influence of the Linux File System Cache on ext3? OCFS2 is accessed with the o_direct flag - there will be no caching. The same holds true for RAW devices. This could have an influence on your test and I did not see a configuration step to avoid it.
    What I saw, though, is your "counter test": "I have tried comparing simple file copies at the OS level and the speed differences are not apparent - so the issue only manifests itself when the data is read via an Oracle DB." And I have no good answer to that one.
    Maybe this paper has: http://www.oracle.com/technology/tech/linux/pdf/Linux-FS-Performance-Comparison.pdf - it's a bit older, but explains some of the interdependencies.
    Last question: While you spent a lot of effort on proving that this one query is slower on OCFS2 or RAW than on ext3 for the initial read (that's why you flushed the buffer cache before each run), how realistic is this scenario when this system goes into production? I mean, how many times will this query be read completely from disk as opposed to use some block from the buffer? If you consider that, what impact does the "IO read time from disk" have on the overall performance of the system? If you do not isolate the test to just a read, how do writes compare?
    Just some questions. Thanks.

  • Administration Portal user and group management performance issue

    Hi
    I have implemented a custom iPlanet LDAP authentication provider which provides a realization for the needed authentication, group and user management interfaces (UserReader, UserEditor, GroupReader, GroupEditor).
    The authentication provider itself seems to work fine, but I think I found a possible performance problem in the WebLogic Administration Portal when I studied the authentication framework by remote debugging. Response times (with just one simultaneous user) are quite long (over 8 seconds) with 40 groups on the same hierarchy level in the Users, Groups and Roles tree. I'm developing on a P4 processor with 1 GB RAM (512 MB allocated for WLS).
    After a little debugging I found out that every time a node in the group tree is clicked, the isMember() method of the authentication provider gets called n * (n - 1) times, where n is the number of visible groups in the hierarchy tree.
    What happens is that for each group, the membership of all the other visible groups is checked by the isMember(group, member, recursive) method call, as sketched after this post. Even the use of a membership cache at this point didn't speed up the rendering noticeably.
    By placing a breakpoint in the isMember() method and studying the call stack, one can see that all the isMember() calls are made during the rendering performed by the ControlTreeWalker. For example, if there are 40 groups visible in the tree, the isMember() method gets called 1600 times. This seems quite heavy. With a small number of groups per hierarchy level this problem might not be serious, but in a case where over 30,000 customer companies use the portal, each with their own user groups, it could be an issue.
    The problem does not occur with the WebLogic console; browsing of groups and users (using the same authentication provider) is fast there. When a user is selected from the user list and the Groups tab is checked, the Possible Groups and Current Groups list boxes get populated. When debugging this sequence, one can see that only the listGroups() method of the authentication provider is called once per list box, and the number of method calls is of order n (rather than n^2, which is the case with the Administration Portal).
    Has anyone had similar problems with the Administration Portal's performance?
    Ville
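    To make that call pattern concrete, the rendering behaves roughly like the following (illustrative Java; TreeRenderer and GroupChecker are made-up names, not the portal's actual classes):

    import java.util.List;

    // For each of the n visible groups, membership of every other visible
    // group is checked, giving n * (n - 1) isMember() calls per tree click -
    // and potentially one LDAP round trip per call.
    interface GroupChecker {
        boolean isMember(String group, String member, boolean recursive);
    }

    class TreeRenderer {
        void renderVisibleGroups(List groups, GroupChecker checker) {
            for (int i = 0; i < groups.size(); i++) {
                for (int j = 0; j < groups.size(); j++) {
                    if (i == j) continue;           // skip self-membership
                    checker.isMember((String) groups.get(i),
                                     (String) groups.get(j), true);
                }
            }
        }
    }

    With 40 visible groups that is 40 * 39 = 1560 calls per click, on the order of the 1600 observed.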

    Ville,
    You're correct about the performance degradation issue. This is being addressed in SP2.
    Thanks,
    Phil
    "Ville" <[email protected]> wrote in message
    news:[email protected]...
    >
    Hi
    I have implemented a custom IPlanet LDAP authentication provider whichprovides
    a realization for the needed authentication, group and user managementinterfaces
    (UserReader, UserEditor, GroupReader, GroupEditor).
    The authentication provider itself seems to work fine, but I think I founda possible
    performance problem in the WebLogic Administration Portal, when I studiedthe
    authentication framework by remote debugging. Response times (with justone simultaneous
    user) are quite long (over 8 seconds) with 40 groups on the same hierarchylevel
    on the Users, Groups and Roles tree. I'm developing with P4 processor and1 Gb
    ram (512 Mb allocated for WLS)
    After little debugging I found out that every time a node in the grouptree is
    dlicked isMember() method of the authentication provider gets called n *(n -1)
    times, where n is the number of visible groups in the hierachy tree. '
    What happens is that for each group, the membership of all the othervisible groups
    is checked by the isMember(group, member, recursive), method call. Eventhe usage
    of a membership cache in this point didn't speed up the renderingnoticeably.
    >
    By placing a break point in the isMember() method and studying the callstack,
    one can see that all the isMember() calls are made during the renderingperformed
    by the ControlTreeWalker. For example if there is 40 groups visible inthe tree,
    the isMember() method gets called 1600 times. This seems quite heavy.With a
    small number of groups per hierarchy level this problem might not beserious,
    but in case where there would be over 30 000 customer companies using theportal
    and each having their own user groups, it could be an issue.
    The problem does not occur with WebLogic console and browsing of groupsand users
    (using the same authentication provider) is fast with it. When a user isselected
    from the user list and the Groups tab is checked, the Possible Groups andCurrent
    Groups list boxes will get populated. When debugging this sequence, onecan see
    that only the listGroups() method of the authentication provider is calledonce
    per list box and the order of method calls is of order n (rather than n^2which
    is the case with the Administrator Portal).
    Has anyone had similar problems with Administrator Portal's performance?
    Ville

  • Query Performance Issue - Usage of SAP_DROP_EMPTY_FPARTITIONS Program

    Hi Experts,
    We are facing a query performance issue in our BW Production system. Queries on the Sales MultiProvider are taking a lot of time to run. We need to tune the query performance.
    We need to drop the empty partitions at the database level. Has anyone used the program SAP_DROP_EMPTY_FPARTITIONS to drop empty partitions? If yes, please provide me with details of your experience using this program. Please let me know whether there are any disadvantages to using this program in a Production system.
    Kindly treat this as an urgent requirement.
    Your help will be appreciated....
    Thanks,
    Shalaka

    Hi Shwetha,
    I think that program drops a partition if it contains no records (DEL_CNT), or if the partition's REQUID is no longer in the dimension table (DEL_DIM).
    Hope it helps!
    (and don't forget to reward the answer, if you want !)
    Bye,
    Roberto

  • Windows Phone 8 Silverlight application performance issue.

    Hi Everybody,
    We are developing an application which has 100 pages (a chat application); in this application we can send images, audio, video, contacts, location - everything by attachments. The application runs perfectly, but if we use it for one hour, sending all kinds of attachments and doing some navigation from one page to another, the application slows down and crashes. I have observed a major issue with memory, but I have not been able to resolve it. I want to free all the memory consumed by pages - how can I make this application solid and fast?
    Your help is very much appreciated.
    Thanks in advance.

    As long as pages are kept on the back stack they won't be reclaimed by the garbage collector. So you have to structure your navigation in such a way that the user's navigation doesn't result in an ever longer back stack.
    E.g. if I navigate forward from Page A to Page B and then Page C, I should not provide a button to jump to Page A, but rather have the user do back navigation, which will remove the other pages from the back stack. Alternatively, you could clear the back stack whenever the user visits the start page of your app.
    Another common issue that keeps pages alive even when they are no longer on the back stack is that they have registered to static events, or to events on objects that are kept alive (e.g. singletons or something like that). In that case the event still holds a reference to the page, and that in turn keeps it from being collected by the GC. To fix this you should unregister those events when navigating away from a page, as sketched below.
    Hope these hints help you in fixing your memory consumption problem.
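    The reachability principle behind that last point can be sketched in plain Java terms (EventBus and Page are hypothetical names, not the Silverlight API):

    import java.util.ArrayList;
    import java.util.List;

    // Any object registered with a long-lived (here: static) listener list
    // stays reachable and cannot be garbage collected until it is
    // explicitly unregistered - exactly how a page leaks off the back stack.
    class EventBus {
        private static final List LISTENERS = new ArrayList();

        static void register(Object listener)   { LISTENERS.add(listener); }
        static void unregister(Object listener) { LISTENERS.remove(listener); }
    }

    class Page {
        void onNavigatedTo()   { EventBus.register(this); }

        // Without this call, the Page instance is retained by
        // EventBus.LISTENERS forever, even after the user navigates away.
        void onNavigatedFrom() { EventBus.unregister(this); }
    }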

  • Performance issues with a new Intel Mini

    Here's a quick question from a new Mac user.
    I've just bought a 1.66 GHz Intel dual-core Mini with 512 MB and to be honest I'm quite surprised how slow it is. It certainly doesn't seem any faster than the ancient PC it replaced. I get considerable lags when switching between applications, for example. Using Word I get delays between hitting a key and letters appearing on screen.
    I don't use it for anything special at all just yet - Word, Safari, Palm sync stuff. I notice using Activity Monitor that I only have a tiny bit of free memory (7 MB or so). Is there anything I can do to improve performance? Has anyone else had similar experiences? I'm reluctant to start doing much else if the performance is already that slow. Any tips would be most welcome.
    Will I need to increase my memory? If so, it seems a little disingenuous of Apple to market a product that is not really up to running its OS.
    mac mini   Mac OS X (10.4.8)  

    Allbran, welcome!
    You are certainly not the first to be surprised. As others have mentioned, 512 Meg is not enough to run substantial legacy apps like MS Word and have other programs open too and have good performance.
    MS Word was designed for the older PPC processor and runs in emulation (Rosetta) on the Intel Macs.
    The only practical solutions are to add more RAM or switch to applications that were programmed for the Intel Macs.
    See this link for a discussion of the issues and the range of solutions, which ended in a satisfied user (she upgraded to 2 Gig):
    http://discussions.apple.com/message.jspa?messageID=3463183
    My personal preference would be that anyone selling Macs would make it clear that 512 Meg RAM is not sufficient for anything but casual MS Word use. Unfortunately sellers don't do this, and I suspect many buyers aren't ready to hear it either.
    An additional discussion here:
    http://discussions.apple.com/message.jspa?messageID=3528525#3528525
    "Having just upgraded my Intel 512 to 2gb, I can tell you that the beach-ball is at rest. I went from complete frustration to really enjoying the machine. I can now run office, dreamweaver, Safari, itunes, and other apps concurrently with no slowdowns at all.
    In fact, this is the best-performing Mac I've ever owned, and that covers a lot of machines!"

  • Performance Issue using Crystal Reports for Enterprise and BEx Queries

    Hi all;
    We are getting the following error stack when trying to build a report on top of a BEx query using Crystal Reports for Enterprise:
        |7C4F8ECE44034DB897AD88D6F98B028B3|2011 12 12 17:24:21.277|+0100|>>|E| |crj|20380|  56|ModalContext    | |2|0|0|0|BIPSDK.InfoStore:query|CHVXRIL0047:20380:56.174:1|-|-|BIPSDK.InfoStore:query|CHVXRIL0047:20380:56.174:1|Cut2PbOe3UdzgckPBHn8spEab|||||||||com.crystaldecisions.sdk.occa.infostore.internal.InfoObjects||Assertion failed: Java plugin for CommonConnection is not loaded.
    java.lang.AssertionError
         at com.businessobjects.foundation.logging.log4j.Log4jLogger.assertTrue(Log4jLogger.java:52)
         at com.crystaldecisions.sdk.occa.infostore.internal.InfoObjects.newInfoObject(InfoObjects.java:576)
         at com.crystaldecisions.sdk.occa.infostore.internal.InfoObjects.continueUnpackHelper(InfoObjects.java:548)
         at com.crystaldecisions.sdk.occa.infostore.internal.InfoObjects.continueUnpack(InfoObjects.java:489)
         at com.crystaldecisions.sdk.occa.infostore.internal.InfoObjects.startUnpack(InfoObjects.java:464)
         at com.crystaldecisions.sdk.occa.infostore.internal.InternalInfoStore$XRL3WireStrategy.startUnpackTo(InternalInfoStore.java:1484)
         at com.crystaldecisions.sdk.occa.infostore.internal.InternalInfoStore$XRL3WireStrategy.startUnpackTo(InternalInfoStore.java:1464)
         at com.crystaldecisions.sdk.occa.infostore.internal.InternalInfoStore.unpackAll(InternalInfoStore.java:910)
         at com.crystaldecisions.sdk.occa.infostore.internal.InternalInfoStore.queryHelper(InternalInfoStore.java:944)
         at com.crystaldecisions.sdk.occa.infostore.internal.InternalInfoStore.queryHelper(InternalInfoStore.java:929)
         at com.crystaldecisions.sdk.occa.infostore.internal.InternalInfoStore.query_aroundBody24(InternalInfoStore.java:798)
         at com.crystaldecisions.sdk.occa.infostore.internal.InternalInfoStore.query(InternalInfoStore.java:1)
         at com.crystaldecisions.sdk.occa.infostore.internal.InfoStore.query_aroundBody20(InfoStore.java:175)
         at com.crystaldecisions.sdk.occa.infostore.internal.InfoStore.query_aroundBody21$advice(InfoStore.java:42)
         at com.crystaldecisions.sdk.occa.infostore.internal.InfoStore.query(InfoStore.java:1)
         at com.businessobjects.mds.securedconnection.cms.services.olap.OlapCmsSecuredConnectionService.getConnectionObject(OlapCmsSecuredConnectionService.java:125)
         at com.businessobjects.mds.securedconnection.cms.services.olap.OlapCmsSecuredConnectionService.getOlapSecuredConnection(OlapCmsSecuredConnectionService.java:191)
         at com.businessobjects.mds.securedconnection.loader.internal.SecuredConnectionLoaderImpl.getOlapConnectionFromSecuredConnection(SecuredConnectionLoaderImpl.java:83)
         at com.businessobjects.mds.securedconnection.loader.internal.SecuredConnectionLoaderImpl.getConnectionFromSecuredConnection(SecuredConnectionLoaderImpl.java:60)
         at com.businessobjects.dsl.services.workspace.impl.DirectOlapAccessDataProviderBuilder.loadSecuredConnection(DirectOlapAccessDataProviderBuilder.java:193)
         at com.businessobjects.dsl.services.workspace.impl.DirectOlapAccessDataProviderBuilder.loadSecuredConnection(DirectOlapAccessDataProviderBuilder.java:176)
         at com.businessobjects.dsl.services.workspace.impl.DirectOlapAccessDataProviderBuilder.provideUniverseFromCms(DirectOlapAccessDataProviderBuilder.java:63)
         at com.businessobjects.dsl.services.datasource.impl.AbstractUniverseProvider.provideUniverse(AbstractUniverseProvider.java:41)
         at com.businessobjects.dsl.services.workspace.impl.AbstractDataProviderBuilder.updateQuerySpecDataProvider(AbstractDataProviderBuilder.java:119)
         at com.businessobjects.dsl.services.workspace.impl.AbstractDataProviderBuilder.updateDataProvider(AbstractDataProviderBuilder.java:106)
         at com.businessobjects.dsl.services.workspace.impl.AbstractDataProviderBuilder.addDataProvider(AbstractDataProviderBuilder.java:49)
         at com.businessobjects.dsl.services.workspace.impl.WorkspaceServiceImpl.addDataProvider(WorkspaceServiceImpl.java:56)
         at com.businessobjects.dsl.services.workspace.impl.WorkspaceServiceImpl.addDataProvider(WorkspaceServiceImpl.java:45)
         at com.crystaldecisions.reports.dsl.shared.DSLTransientUniverseServiceProvider.createSessionServicesHelper(DSLTransientUniverseServiceProvider.java:72)
         at com.crystaldecisions.reports.dsl.shared.DSLServiceProvider.createSessionServices(DSLServiceProvider.java:428)
         at com.businessobjects.crystalreports.designer.qpintegration.DSLUtilities.getServiceProvider(DSLUtilities.java:279)
         at com.businessobjects.crystalreports.designer.qpintegration.InitializeDSLRunnable.run(InitializeDSLRunnable.java:82)
         at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:121)
    The problem seems to be that a plugin is not loaded: com.crystaldecisions.sdk.occa.infostore.internal.InfoObjects||Assertion failed: Java plugin for CommonConnection is not loaded.
    Could this affect the performance of Crystal Reports for Enterprise, and how could I fix it?
    Best Regards
    Anis

    Venkat,
    Thanks for your response. Please note, however, that the transaction RAD1 does not exist. Let me provide more details about the current settings of the InfoObject.
    The Characteristic is 'Item' (0CS_ITEM) and upon going to RSA1 >  Modeling > InfoObjects > Item (0CS_ITEM) > Right Click > Display > Business Explorer (tab) > Text Type is set to 'Long Text' and BEx description is set to 'Long description' already.
    When I run/execute the query with this Item characteristic, the results in BEx Analyzer show the appropriate long text; however, Crystal Reports for Enterprise shows the short text only.
    K
    Edited by: Kumar Pathak on Feb 3, 2012 6:18 PM

  • Oracle 11g performance issue

    hi all,
    I am using Oracle 11g Enterprise Edition. I didn't run any queries myself, but the ADDM report shows some system queries running (under SYS, DBSNMP, SYSMAN etc.), whereas on the previous 10g this did not happen. Can anyone tell me how to stop those queries?
    This is the query that was running most frequently:
    select end_time, wait_class#, (time_waited_fg)/(intsize_csec/100), (time_waited)/(intsize_csec/100), 0
    from v$waitclassmetric union all
    select fg.end_time, -1, fg.value, bg.value, dbtime.value
    from v$sysmetric fg, v$sysmetric bg, v$sysmetric dbtime
    where bg.metric_name = 'Background CPU Usage Per Sec' and bg.group_id = 2 and fg.metric_name = 'CPU Usage Per Sec' and fg.group_id = 2 and dbtime.metric_name = 'Average Active Sessions' and dbtime.group_id = 2 and bg.end_time = fg.end_time and fg.end_time = dbtime.end_time order by end_time,wait_class#
    UPDATE MGMT_JOB_EXECUTION SET STEP_STATUS=:B2 WHERE STEP_ID=:B1

    Why are you so concerned about background queries? Do you really think these queries use more resources and have higher elapsed times than your custom queries? The good way to diagnose a performance problem is to identify the most resource-intensive or time-consuming program/module/code, rather than chasing background queries. Yes, in rare cases background queries too run slowly, and then you need to run dbms_stats.gather_dictionary_stats or dbms_stats.gather_fixed_objects_stats (https://blogs.oracle.com/optimizer/entry/fixed_objects_statistics_and_why).

  • Oracle Performance Issue

    Hi,
    I used my notebook with 8 GB of memory to upload data into Oracle 11g, then took a dump of it and imported it onto the prod server, which has 32 GB of memory and 2 quad-core processors.... Now when I run select * from table_name on my laptop, against a table with about 500,000 records, it takes 6 seconds, but on the prod server it takes about 284 seconds...
    The only difference between the installations is that my laptop is running an OLTP installation and prod is running a DW configuration... What I don't understand is why this takes so long on the prod server and is so fast on my laptop...
    Can anyone point me in the right direction?
    Thank you in advance..
    Regards,
    NK

    My prod server is in a data center and I am accessing it through a point-to-point DSL link of 2 Mb, while the 'local' test is on my notebook.... If the execution time is the same, then the only thing I can think of is that the 4 mins or so is spent shipping the result data to my computer, because the link is only 2 Mb.
    I use Golder6 for querying as it's a very lightweight tool and is quite fast compared to Toad.
    Show parameter optimizer on my prod server shows the following:
    SQL> show parameter optimizer;
    NAME                                   TYPE     VALUE
    optimizer_capture_sql_plan_baselines   boolean  FALSE
    optimizer_dynamic_sampling             integer  2
    optimizer_features_enable              string   11.2.0.1
    optimizer_index_caching                integer  0
    optimizer_index_cost_adj               integer  100
    optimizer_mode                         string   ALL_ROWS
    optimizer_secure_view_merging          boolean  TRUE
    optimizer_use_invisible_indexes        boolean  FALSE
    optimizer_use_pending_statistics       boolean  FALSE
    optimizer_use_sql_plan_baselines       boolean  TRUE
    SQL>
    I will paste the same from my notebook when I get back home...
    The table structure is the same on both machines, with an indexed ID column. I am the only user connecting to the prod server, as it's a brand new machine with 2 quad cores (2.4 GHz) and 32 GB of system memory, of which Oracle is allowed to use about 24 GB. Automatic Memory Management is enabled on the prod server.
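    As a back-of-envelope check (assuming roughly 100 bytes per row - an assumption, not a measurement): 500,000 rows is about 50 MB, and a 2 Mbit/s link moves roughly 0.25 MB/s, so just shipping the result set would take on the order of 200 seconds - most of the 284 seconds observed. The 6-second local run never leaves the notebook, so it measures only the query itself.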

  • IDocs Performance issue

    Hi,
          I am not able to figure out a situation where 400 IDocs take 2 minutes on the development server but 1 hour on the QA server; they come from middleware as inbound IDocs.
    Also, if we change the setting in BD51 to 0 -> Mass processing, would it take less time?
    Please guide.
    Thanks,
    Gaurav

    Are these standard SAP IDocs, utilizing SAP standard process codes and function modules? The first thing that comes to mind is that in QA, any data validations, etc., that result in database table reads are slower due to table size. In Development, you probably have virtually no table rows.
