ChipKill kills performance

I just upgraded to BIOS 1.1 on my K8T Master2 FAR, and now I can actually boot Windows with ChipKill turned on (it didn't with 1.0).  However, ChipKill seems to kill memory performance.  For example, my SiSoft Sandra memory score was cut in half compared to ChipKill disabled.  Looking at how the northbridge is configured, it seems like the memory scrubber is set to scrub DRAM every 40ns.  That seems like way overkill, even faster than refresh!  Is anyone else trying to use ChipKill?
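Some back-of-the-envelope math on why that rate would hurt, assuming (as I read the BKDG) that the scrubber reads one 64-byte cache line per scrub interval: 64 bytes every 40ns works out to roughly 1.6 GB/s of background traffic, which is a big slice of dual-channel PC2100's roughly 4.2 GB/s theoretical peak. That alone could plausibly explain a halved Sandra score.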
My system:
K8T Master2 FAR
Opteron 248  
4x256MB PC2100 DRAM  
ATI AIW AGP
Maxtor 160GB ATA

Quote
Originally posted by Bas
Turn it off....
The system does ECC anyway...
It's meant to give extreme RAM correction, e.g. to be used when the system should be extremely stable!
I don't believe that.  When I have ChipKill disabled, the Northbridge configuration registers show that ECC is disabled too.  In that case, there is no checking of DRAM at all.
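For anyone who wants to poke at the same registers: on Linux, the K8's scrub settings live in the northbridge's function 3 PCI config space. My reading of the BKDG puts the Scrub Control register at offset 0x58, with bits [4:0] encoding the DRAM scrub rate (00h = off, 01h = the fastest 40ns setting), so something like
setpci -s 00:18.3 0x58.l
should dump it. Treat the device address and offset as my reading of the docs, not gospel.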

Similar Messages

  • Urgent - Unprocessed messages kill performance and J2EE.

    Dear experts,
    We have a live XI system with which we have the following issue. During go-live a great number of messages were sent through XI. A lot of them did not process well. Some were missing essential data, etc.
    After a while the inbound queues were empty and all looked well. However, every so many hours user BACKGRJOBS suddenly puts thousands of files in the inbound queue and lets it run. These are messages from days ago. Many of them have status "Manual restart". After this, J2EE struggles to work. File adapters stop working, etc.
    And afterwards, the inbound queue is empty again.
    Where are these messages stored? They do not seem to appear in the inbound queue, but then are suddenly put in by the BACKGRUSER.
    And how can I stop it from killing J2EE?
    Kind regards.

    Hi,
    The performance of your J2EE engine is degrading due to the large volume of data you are trying to reprocess at the same time.
    These messages are persisted in the database, and the background job is trying to reprocess them.
    You can do the following:
    1) Wait some time to see if the messages finish reprocessing.
    2) If server performance is poor, restart the J2EE side.
    3) Schedule the job so that it picks up a smaller number of files.
    4) You can also cancel the failed messages from SXMB_MONI and resend the files.
    Regards
    Amitanshu

  • Multiselect kills performance

    Hi,
    We've been struggling with the performance of multiselect from the start and find it hard to get a good grip on the situation.
    We started small, with only a few thousand products (from a specific category), which seemed to have very little to no performance impact.
    The next step was to mark more dimensions as multiselect, including the price dimension, which applies to all our products (>13 mln). We still only wanted multiselect for the few thousand products. The result was a performance killer: Endeca requests took much longer and CPU time increased dramatically.
    We then created separate price dimensions, one for our largest product group (10 mln) and one for the rest. The performance went back to normal.
    Now we've expanded multiselect to more product groups (not our largest), involving around 2 mln products. The performance was very bad and the number of calls that take more than 3 seconds increased tenfold.
    We have a very hard time trying to find out why this happens. We know Endeca needs to do more calculations, but we did not expect such a dramatic decrease in performance.
    Has anyone encountered the same problem or has a suggestion?
    Thanks.
    Maarten

    Hi Maarten,
    A question for you:
    Are you trying to pull records and refinements in the same query?
    One way to handle this is by executing two queries against Endeca:
    1. Get the products for that particular query with no refinement info.
    2. Execute a lightweight query with 0 records per page and expand the required refinements via the Ne parameter (sketched below).
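    To make the two-query idea concrete, here is a rough Java sketch against the classic Endeca Presentation API. The names used (HttpENEConnection, UrlENEQuery, setNavNumERecs, the Ne URL parameter) are from memory, so verify them against your Endeca version:
    import com.endeca.navigation.*;

    public class TwoQueryLookup {
        public static void main(String[] args) throws Exception {
            // Connection to the MDEX engine; host and port are placeholders.
            HttpENEConnection conn = new HttpENEConnection("mdexhost", 15000);
            // Query 1: a page of records for the navigation state,
            // with no refinements expanded.
            ENEQuery recQuery = new UrlENEQuery("N=0", "UTF-8");
            recQuery.setNavNumERecs(20);
            ENEQueryResults recs = conn.query(recQuery);
            // Query 2: a lightweight refinements-only call: zero records,
            // expanding only the needed dimension via Ne
            // (123 is a placeholder dimension id).
            ENEQuery refQuery = new UrlENEQuery("N=0&Ne=123", "UTF-8");
            refQuery.setNavNumERecs(0);
            ENEQueryResults refs = conn.query(refQuery);
            // Merge recs' record list with refs' refinements in the front end.
        }
    }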
    HTH
    Mandar

  • My experience - Slow processing of previews is killing performance?

    Set up:
    24" iMac, Intel Core 2 Duo 2.8Ghz, 4Gb RAM, Radeon HD2600 with 256Mb, 1Tb 7200 Seagate Barracuda HD.
    Aperture 3.02, RAW 3.01
    OSX 10.6.3
    Problem: In a project where previews are turned on, any change to a photo needs sloooow processing of the preview.
    Take a Canon 5D Mark II RAW photo (25MB or so) with some simple adjustments.
    If I remove the curves adjustment (auto), I get an entry in the Activity window for 90 seconds saying the photo is processing.
    Adding a curves adjustment takes 100 seconds of processing before I even make a change.
    If I then hit the Auto button it takes 120 seconds of processing.
    Very recently I have
    1) used all three of the cmd-alt options to rebuild/repair the library
    2) Cleared all caches and deleted the aperture plist and a RAW plist file mentioned in another thread
    3) Copied the library to another disk and back to de-fragment it
    So I hope there are no problems with my library.
    If I try the above actions after turning off previews the processing is down to 4 to 6 seconds for a photo.
    Is anyone else seeing this massive slowdown with previews?
    Is anyone seeing fast performance with previews on large photos?
    I usually work with previews off and build them at the end of my workflow, but I'd like to know if I have a problem with my library / install.

    Hi... I have the exact same machine as you except for the external drive. Also, I shoot RAW with a Nikon D300 (+/- 12MB files). I'm far from seeing the same performance as you; processing is much faster for me, I would say about 2 sec for a photo. The only thing I would do, if you haven't done it already, is set the preview size to no larger than your screen resolution (for the 24" iMac that is "Fit within 1920 x 1920") with a quality no higher than 6.

  • Table Spool (Lazy Spool) & Hash Match (Aggregate) killing performance.

    Hi Folks,
    I have a query that takes about 5 minutes, and I am not sure where the issue is. Is there any way someone can give insight into this query plan, please? Here are the table and index definitions:
    -----Table 1
    CREATE TABLE [dbo].[PropVal](
    [Item] [nvarchar](16) NOT NULL,
    [Symbol] [nvarchar](23) NOT NULL,
    [Date] [smalldatetime] NOT NULL,
    [Value] [nvarchar](max) NOT NULL
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
    GO
    CREATE UNIQUE CLUSTERED INDEX [PropVal_PK] ON [dbo].[PropVal]
    (
    [Symbol] ASC,
    [Item] ASC
    )
    GO
    -----Table 2
    CREATE TABLE [dbo].[Crons](
    [CronID] [nvarchar](23) NOT NULL,
    [Date] [smalldatetime] NOT NULL,
    [XMLBlob] [xml] NOT NULL,
    PRIMARY KEY CLUSTERED ([CronID] ASC )
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
    GO
    CREATE NONCLUSTERED INDEX [test_idx_crons] ON [dbo].[Crons]
    ( [Date] ASC )
    INCLUDE ( [CronID]) 
    GO
    Below is the script:
    SELECT  'CLIENTIDS' AS Item
      , Symbol
      , CONVERT(NCHAR(8), GETUTCDATE(), 112) AS Date
      , STUFF(CAST(( SELECT DISTINCT
                            ',' + [Value]
          FROM   DBO.PropVal WITH (NOLOCK)
          WHERE  [Symbol] IN (
                 SELECT DISTINCT
                        ( [Value] )
                 FROM    DBO.PropVal WITH (NOLOCK)
                 WHERE   [Item] = 'USERID'
                         AND [Value] NOT LIKE 'RIMES-%'
                         AND [Symbol] IN (
                              SELECT  DISTINCT a.[Symbol]
                              FROM    DBO.PropVal a, DBO.Crons b WITH (NOLOCK)
                              WHERE   a.[Item] = 'SYSTEM'
                                      AND a.[Symbol] = SUBSTRING(b.[CronID],1,CHARINDEX('.',b.[CronID])-1)
                                      AND b.[Date] > DATEADD(MONTH,-1,GETUTCDATE())
                                     AND a.[Value] = symbols.Symbol ) )
                         AND [Item] = 'CLIENTID'
                 FOR
                    XML PATH('')
                 ) AS varchar(MAX)), 1, 1, '') AS Value
    FROM    ( SELECT DISTINCT
                        [Value] AS Symbol
              FROM      DBO.PropVal WITH (NOLOCK)
              WHERE     [Item] = 'SYSTEM'
              AND       [Value] LIKE 'SYS-%'
            ) AS symbols
    UNION ALL
    SELECT  'USERIDS' AS Item
      , Symbol
      , CONVERT(NCHAR(8), GETUTCDATE(), 112) AS Date
      , STUFF(CAST(( SELECT DISTINCT
                            ',' + [Value]
          FROM   DBO.PropVal WITH (NOLOCK)
          WHERE  [Symbol] IN (
                 SELECT  [Symbol]
                 FROM    DBO.PropVal WITH (NOLOCK)
                 WHERE   [Item] = 'SYSTEM'
                         AND [Value] = symbols.Symbol )
          AND [Item] = 'USERID'
          AND [Value] NOT LIKE 'RIMES%'
          FOR
             XML PATH('')
          ) AS varchar(MAX)), 1, 1, '') AS Value
    FROM    ( SELECT DISTINCT
                        [Value] AS Symbol
              FROM      DBO.PropVal WITH (NOLOCK)
              WHERE     [Item] = 'SYSTEM'
              AND       [Value] LIKE 'SYS-%'
            ) AS symbols;

    Not that this is a fix to performance, but this query may not be doing what you want.  About 11 lines into the query you have a WHERE clause that reads:
    WHERE [Item] = 'USERID'
    AND [Value] NOT LIKE 'RIMES-%'
    AND [Symbol] IN (
    SELECT DISTINCT a.[Symbol]
    FROM DBO.PropVal a, DBO.Crons b WITH (NOLOCK)
    WHERE a.[Item] = 'SYSTEM'
    AND a.[Symbol] = SUBSTRING(b.[CronID],1,CHARINDEX('.',b.[CronID])-1)
    AND b.[Date] > DATEADD(MONTH,-1,GETUTCDATE())
    AND a.[Value] = symbols.Symbol ) )
    AND [Item] = 'CLIENTID'
    Notice that that is of the form
    WHERE [Item] = 'USERID'
    AND <some other stuff>
    AND [Item] = 'CLIENTID'
    So, since no row can have Item = 'USERID' and also have Item = 'CLIENTID', this WHERE clause will never return any rows.
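    Tom's point is easy to demonstrate in isolation (assuming the two [Item] predicates really do sit at the same level of the WHERE, which the original formatting makes hard to verify):
    -- A row's [Item] cannot equal two different values at once,
    -- so this predicate is unsatisfiable and returns zero rows.
    SELECT [Symbol]
    FROM dbo.PropVal
    WHERE [Item] = 'USERID'
      AND [Item] = 'CLIENTID';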
    Tom

  • Application Outline Update Kills Performance

    Developing a small application on a machine with several gigs of hertz and memory. The keyboard becomes non-responsive with the Application Outline window open: about a half-second delay between typed characters.
    The workaround is to close the Application Outline window, after which the response returns to normal.

    You asked for it, you got it... the Java Studio Creator team has delivered significant design-time performance improvements with this latest hot fix. Connect to the Update Center http://developers.sun.com/prodtech/javatools/jscreator/downloads/updates/index.html and get it today!
    Note: This hot fix requires Sun Java Studio Creator 2 Update 1. If you need to upgrade from Java Studio Creator 2, see the Downloads page http://developers.sun.com/prodtech/javatools/jscreator/downloads/index.jsp.
    Comments? A new forum thread has been started to consolidate discussion about the hotfix and issues related to IDE performance. Come on over to Get Improved Performance with Hot Fix 2 at http://swforum.sun.com/jive/thread.jspa?threadID=105441 and let us know what you think.

  • Apache kills performance

    Hi,
    after installing AS9i Enterprise on Windows with a target 9i DB (without Apache) on the same host, the AS Apache starts a JRE task which uses all of the CPU time. All other Oracle services are shut down except the database and the listener.
    Can it be a configuration problem? I changed nothing except the connection information for Portal, but I think the problem was always there.
    Thank you!

    It seems to me that I'm close to finding out what is wrong...
    On a clean Solaris 11.2 install, when I run
    ab -c 50 -n 3000 http://localhost/
    I get:
    Concurrency Level:      50
    Time taken for tests:   2.898152 seconds
    Complete requests:      3000
    Failed requests:        0
    Write errors:           0
    Total transferred:      888835 bytes
    HTML transferred:       135585 bytes
    Requests per second:    1035.14 [#/sec] (mean)
    Time per request:       48.303 [ms] (mean)
    Time per request:       0.966 [ms] (mean, across all concurrent requests)
    Transfer rate:          299.50 [Kbytes/sec] received
    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    7   19.3      -    119
    Processing:     7   40   92.9     10    490
    Waiting:        6   35   83.2      9    439
    Total:         10   47  107.2     11    584
    and when I run ab -c 50 -n 3000 -k http://localhost/ I get:
    Concurrency Level:      50
    Time taken for tests:   0.415655 seconds
    Complete requests:      3000
    Failed requests:        0
    Write errors:           0
    Keep-Alive requests:    2974
    Total transferred:      996168 bytes
    HTML transferred:       135180 bytes
    Requests per second:    7217.52 [#/sec] (mean)
    Time per request:       6.928 [ms] (mean)
    Time per request:       0.139 [ms] (mean, across all concurrent requests)
    Transfer rate:          2338.48 [Kbytes/sec] received
    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    0    0.2      0      3
    Processing:     0    3   24.8      1    405
    Waiting:        0    3   24.8      1    405
    Total:          0    3   24.9      1    406
    (-k turns on the HTTP KeepAlive feature.)
    What part of TCP should I tune to fix this?

  • Final Cut Pro X Performance Killed

    Yosemite has ABSOLUTELY KILLED performance in Final Cut Pro. Despite running 20 cores, 2 x GPUs and loads of RAM, whenever FCP renders in the background, everything comes to a near COMPLETE STOP. I have to wait for all the background rendering to finish before I can do anything. It is adding HOURS to even simple projects. The only way to fix it and be able to use the machine is to either WAIT for the grass to grow or quit FCP, then everything is back to normal.
    Thank you *&%%^#$% Yosemite for another &*%@^&#$ set of issues.

    Still nothing has changed. I called Apple Support and finally got someone to do something. However I'm the only one having this problem (of course) and their 'bright' solution is to try a different disk. Really?! Thanks for nothing.
    Today, rendering a 5-minute video from optimized media got to 95% after 4 hours (before Yosemite it would take around 5 mins) and then stopped there. After an hour at 95% I guessed it was screwed again, so I quit FCP and rebooted. After the reboot, FCP had completely reset itself, and when I reopened the library, it reported all kinds of missing assets. I opened the library in Finder (show package contents) and everything is still there. Repairing permissions (which are again screwed up) helped nothing. What the f*$#@???? I open the project and everything is there as it should be, and the rendered result in the timeline looks fine. When I try to export a master, FCP warns that it will be rendered with missing assets, however it FINALLY rendered out without any issues. Thank GOD. Free of wretched FCPX, I can take the master now, re-compress it in Compressor and finally get it loaded. I'm only 48 hours late on the deadline thanks to this ongoing S@$t.
    Thanks Apple for F&%@*&g me. A whole Saturday wasted yet again. Yosemite be damned.

  • Functions slowing down performance question

    Hey there.
    I've got a query that really slogs. This query calls quite a few functions and there's no question that some of the work that needs to be done, simply takes time.
    However, someone has adamantly told me that using functions slow down the query compared to the same code in the base SQL.
    I find this hard to believe that the exact same code - whether well written or not - would be much faster in the base view than having a view call the functions.
    Is this correct that functions kill performance?
    Thanks for any advice.
    Russ

    There is the performance impact of context switching between SQL and PL/SQL engines. Pure SQL is always faster.
    SQL> create or replace function f (n number) return number as
      2  begin
      3    return n + 1;
      4  end;
      5  /
    Function created.
    SQL> set timing on
    SQL> select sum(f(level)) from dual
      2  connect by level <= 1000000;
    SUM(F(LEVEL))
       5.0000E+11
    Elapsed: 00:00:07.06
    SQL> select sum(level + 1) from dual
      2  connect by level <= 1000000;
    SUM(LEVEL+1)
      5.0000E+11
    Elapsed: 00:00:01.09
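    For what it's worth, later Oracle releases (12c and up) added ways to shrink that context-switch cost; an aside the timings above don't measure. PRAGMA UDF compiles a function for cheaper invocation from SQL, and a function declared inline in the query's WITH clause gets similar treatment:
    SQL> create or replace function f (n number) return number as
      2    pragma udf;  -- 12c+: optimize for being called from SQL
      3  begin
      4    return n + 1;
      5  end;
      6  /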

  • Poor MDX performance on F4 master data lookup

    Hi,
    I've posted this to this forum as it didn't get much help in the BW 7.0 forum. I'm thinking it was too MDX oriented to get any help there. Hopefully someone has some ideas.
    We have upgraded our BW system to 7.0 EHP1 SP6 from BW 3.5. There is substantial use of SAP BusinessObjects Enterprise XI 3.1 (BOXI) and also significant use of navigational attributes. Everything works fine in 3.5 and we have worked through a number of performance problems in BW 7.0. We are using BOXI 3.1 SP1 but have tested with SP2 and it generates the same MDX. We do however have all the latest MDX-related notes, including the composite note 1142664.
    We have a number of "fat" queries that act as universes for BOXI, and it is when BOXI sends an MDX statement that includes certain crossjoins with navigational attributes that things fall apart. This is an example of one that runs in about a minute in 3.5:
    SELECT { [Measures].[494GFZKQ2EHOMQEPILFPU9QMV], [Measures].[494GFZSELD3E5CY5OFI24BPCN], [Measures].[494GG07RNAAT6M1203MQOFMS7], [Measures].[494GG0N4P7I87V3YBRRF8JK7R] } ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( [0MAT_SALES__ZPRODCAT].[LEVEL01].MEMBERS, EXCEPT( { [0MAT_SALES__ZASS_GRP].[LEVEL01].MEMBERS } , { { [0MAT_SALES__ZASS_GRP].[M5], [0MAT_SALES__ZASS_GRP].[M6] } } ) ), EXCEPT( { [0SALES_OFF].[LEVEL01].MEMBERS } , { { [0SALES_OFF].[#] } } ) ), [0SALES_OFF__ZPLNTAREA].[LEVEL01].MEMBERS ), [0SALES_OFF__ZPLNTREGN].[LEVEL01].MEMBERS ), [ZMFIFWEEK].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES MEMBER_UNIQUE_NAME, MEMBER_NAME, MEMBER_CAPTION ON ROWS FROM [ZMSD01/ZMSD01_QBO_Q0010]
    However, in 7.0 there appear to be some master data lookups that are killing performance before we even get to the BW queries. Note that in RSRT terms this is prior to even getting the popup screen with the "display aggregate".
    They were taking 700 seconds but now take about 150 seconds after an index was created on the ODS /BIC/AZOSDOR0300. From what I can see, the navigational attributes require BW to ask "what are the valid SIDs for SALES_OFF in this multiprovider". The odd thing is that BW 3.5 does no such query. It just hits the fact tables directly.
    SELECT "SID" , "SALES_OFF" FROM ( SELECT "S0000"."SID","P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000"."SALES_OFF" = "S0000"."SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BI0/D0PCA_C021" "D" ) UNION SELECT "S0000"."SID" ,"P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000" . "SALES_OFF" = "S0000" . "SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BIC/DZBSDBL018" "D" ) UNION SELECT "S0000"."SID" ,"P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000" . "SALES_OFF" = "S0000" . "SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BIC/DZBSDOR028" "D" ) UNION SELECT "S0000"."SID" ,"P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000" . "SALES_OFF" = "S0000" . "SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BIC/DZBSDOR038" "D" ) UNION SELECT "S0000"."SID" ,"P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000" . "SALES_OFF" = "S0000" . "SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BIC/DZBSDOR058" "D" ) UNION SELECT "S0000"."SID" ,"P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000" . "SALES_OFF" = "S0000" . "SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BIC/DZBSDOR081" "D" ) UNION SELECT "S0000"."SID" ,"P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000" . "SALES_OFF" = "S0000" . "SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "S0000"."SID" IN ( SELECT "D"."SID_0SALES_OFF" AS "SID" FROM "/BIC/DZBSDPAY016" "D" ) UNION SELECT "S0000"."SID" ,"P0000"."SALES_OFF" FROM "/BI0/PSALES_OFF" "P0000" JOIN "/BI0/SSALES_OFF" "S0000" ON "P0000" . "SALES_OFF" = "S0000" . "SALES_OFF" WHERE "P0000"."OBJVERS" = 'A' AND "P0000"."SALES_OFF" IN ( SELECT "O"."SALES_OFF" AS "KEY" FROM "/BIC/AZOSDOR0300" "O" ) ) ORDER BY "SALES_OFF" ASC
    I had assumed this had something to do with BOXI - but I don't think this is an MDX-specific problem, even though it's hard to test in RSRT as it's a query navigation. Also I assumed it might be something to do with the F4 master data lookup, but that's not the case, because of course this "fat" query doesn't have a selection screen, just a small initial view and a large number of free characteristics. Still, I set the characteristic settings to do lookups only on the master data values, and that made no difference. Nonetheless you can see in the MDXTEST trace the event 6001: F4: Read Data. Curiously this is an extra one that sits between event 40011: MDX Initialization and event 40010: MDX Execution.
    I've tuned this query as much as I can from the Oracle perspective and checked the indexes and statistics. I've also checked that Oracle is perfectly tuned and parameterized for 10.2.0.4 with the May 2010 patchset for AIX. But this query returns an estimated 56 million rows and runs an expensive UNION join on them - so no surprise that it's slow. As a point of interest, changing it from UNION to UNION ALL cuts the time to 30 seconds. I don't think that helps me though, other than confirming that it is the sort which is expensive on 56m records.
    Thinking that the UNORDER MDX statement might make a difference, I changed the MDX to the following, but that didn't make any difference either.
    SELECT { [Measures].[494GFZKQ2EHOMQEPILFPU9QMV], [Measures].[494GFZSELD3E5CY5OFI24BPCN], [Measures].[494GG07RNAAT6M1203MQOFMS7], [Measures].[494GG0N4P7I87V3YBRRF8JK7R] } ON COLUMNS ,
    NON EMPTY UNORDER( CROSSJOIN(
      UNORDER( CROSSJOIN(
        UNORDER( CROSSJOIN(
          UNORDER( CROSSJOIN(
            UNORDER( CROSSJOIN(
              [0MAT_SALES__ZPRODCAT].[LEVEL01].MEMBERS,
              EXCEPT( { [0MAT_SALES__ZASS_GRP].[LEVEL01].MEMBERS } , { { [0MAT_SALES__ZASS_GRP].[M5], [0MAT_SALES__ZASS_GRP].[M6] } } )
            ) ),
            EXCEPT( { [0SALES_OFF].[LEVEL01].MEMBERS } , { { [0SALES_OFF].[#] } } )
          ) ),
          [0SALES_OFF__ZPLNTAREA].[LEVEL01].MEMBERS
        ) ),
        [0SALES_OFF__ZPLNTREGN].[LEVEL01].MEMBERS
      ) ),
      [ZMFIFWEEK].[LEVEL01].MEMBERS
    ) )
    DIMENSION PROPERTIES MEMBER_UNIQUE_NAME, MEMBER_NAME, MEMBER_CAPTION ON ROWS FROM [ZMSD01/ZMSD01_QBO_Q0010]
    Does anyone know why BW 7.0 behaves differently in this respect and what I can do to resolve the problem? It is very difficult to make any changes to the universe or BEx query because there are thousands of Webi queries written over the top and the regression test would be very expensive.
    Regards,
    John

    Hi John,
    couple of comments:
    - first of all, you posted this in the wrong forum; it belongs in the BW forum
    - MDX enhancements in regard to BusinessObjects are part of BW 7.01 SP05 - not just BW 7.0
    I would suggest you post it in the BW forum.
    Ingo

  • Satellite Pro A10 - Does SP2 or 3 slow down the XP performance?

    Hi,
    I have a Satellite Pro A10 PSA15E which over the years has become very slow.
    Recently I upgraded its RAM from 256MB to the maximum 1GB to try to speed things up, but it did not help.
    Next I reformatted the hard disc and reloaded the system using the Toshiba recovery disks. This worked fine and system performance was excellent. However, with online security in mind, I loaded Windows SP2 (with a view to subsequently going to SP3), and this has taken the system back to where I started - very, very slow, in fact pretty well unusable.
    Does anyone know if Toshiba supports SP2 and SP3 on this model of laptop?
    If so, how do you install them without killing performance?
    Thanks in advance.

    Hi!
    I think SP2 and SP3 are supported on this Toshiba notebook; both Service Packs contain really important security updates, and I don't think they are the problem.
    If you install SP2 or SP3 you will also get the Security Center and the improved Windows Firewall. Maybe you should disable both services, because they slow down system performance (I always disable both).
    Just go to Start => Run => services.msc, stop both services, and set their startup type to Disabled.
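    If you prefer the command line, the same thing should be possible with the sc tool; the service names here are from memory (wscsvc = Security Center, SharedAccess = Windows Firewall/ICS on XP), so double-check them in services.msc first:
    sc config wscsvc start= disabled
    sc config SharedAccess start= disabled
    (The space after "start=" really is required by sc's odd syntax.)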
    Bye

  • Performance Issue: Retrieving records from Oracle Database

    While retrieving data from an Oracle database we are facing performance issues.
    The query returns 890 records, and the JSP page takes almost 18 minutes to display them.
    I have observed that CPU usage is 100% while processing the request.
    Could anyone advise which methods, at the DB end or the Java end, we could use to avoid such issues?
    Thanks
    R.

    passion_for_java wrote:
    Will it make any difference if I select columns instead of ls.*
    Possibly, especially if there's a lot of data being returned.
    Less data over the wire means a faster response.
    You may also want to look at your database: is that outer join really needed? Does it perform? Are your indexes good?
    A bad index (or a missing one) can kill query performance (we've seen queries drop from seconds to hours when indexes got corrupted).
    A missing index can cause full table scans, which of course kill performance if the table is large.
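    If you want to check for full table scans before touching the Java side, Oracle's EXPLAIN PLAN makes them visible. A generic sketch (the table, column, and bind names are placeholders; substitute your real query):
    EXPLAIN PLAN FOR
    SELECT ls.col1, ls.col2 FROM some_table ls WHERE ls.some_col = :val;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    Look for "TABLE ACCESS FULL" on the large table in the plan output.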

  • Pathological ParallelGC performance w/ big long-lived object (512MB array)

    Hoping to improve performance, we recently added a Bloom filter -- backed by a single long-lived 512MB array -- to our application. Unfortunately, it has killed performance -- the app now spends ~16 of every ~19 seconds in garbage collection, from the moment the big array is allocated.
    My first theory was that the array was stuck in one of the young generations, never capable of being promoted, and thus being endlessly copied back and forth on every minor young collection. However, some tests indicate the big array winds up in "PS Old" right away... which would seem to be a safe, non-costly place for it to grow old. So I'm perplexed by the GC performance hit.
    Here's the tail of a log from a long-running process -- with UseParallelGC on a dual-opteron machine running 32bit OS/VM -- showing the problem:
    % tail gc.log
    697410.794: [GC [PSYoungGen: 192290K->2372K(195328K)] 1719973K->1535565K(1833728K), 16.4679630 secs]
    697432.415: [GC [PSYoungGen: 188356K->1894K(194752K)] 1721549K->1536592K(1833152K), 16.4797510 secs]
    697451.419: [GC [PSYoungGen: 188262K->4723K(195200K)] 1722960K->1540085K(1833600K), 16.4797410 secs]
    697470.817: [GC [PSYoungGen: 191091K->1825K(195520K)] 1726453K->1541275K(1833920K), 16.4763350 secs]
    697490.087: [GC [PSYoungGen: 189025K->8570K(195776K)] 1728475K->1550136K(1834176K), 16.4764320 secs]
    697509.644: [GC [PSYoungGen: 195770K->5651K(192576K)] 1737336K->1555061K(1830976K), 16.4785310 secs]
    697530.749: [GC [PSYoungGen: 189203K->1971K(194176K)] 1738613K->1556430K(1832576K), 16.4642690 secs]
    697551.998: [GC [PSYoungGen: 185523K->1716K(193536K)] 1739982K->1556999K(1831936K), 16.4680660 secs]
    697572.424: [GC [PSYoungGen: 185524K->4196K(193984K)] 1740807K->1560197K(1832384K), 16.4727490 secs]
    I get similar results from the moment of launch on another machine, and 'jmap -heap' (which isn't working on the long-lived process) indicates the 512MB object is in 'PS Old' right away (this is from a quick launch of a similar app):
    jdk1.5.0_04-32bit/bin/jmap -heap 10586
    Attaching to process ID 10586, please wait...
    Debugger attached successfully.
    Server compiler detected.
    JVM version is 1.5.0_04-b05
    using thread-local object allocation.
    Parallel GC with 2 thread(s)
    Heap Configuration:
    MinHeapFreeRatio = 40
    MaxHeapFreeRatio = 70
    MaxHeapSize = 1887436800 (1800.0MB)
    NewSize = 655360 (0.625MB)
    MaxNewSize = 4294901760 (4095.9375MB)
    OldSize = 1441792 (1.375MB)
    NewRatio = 8
    SurvivorRatio = 8
    PermSize = 16777216 (16.0MB)
    MaxPermSize = 67108864 (64.0MB)
    Heap Usage:
    PS Young Generation
    Eden Space:
    capacity = 157286400 (150.0MB)
    used = 157286400 (150.0MB)
    free = 0 (0.0MB)
    100.0% used
    From Space:
    capacity = 26214400 (25.0MB)
    used = 26209080 (24.99492645263672MB)
    free = 5320 (0.00507354736328125MB)
    99.97970581054688% used
    To Space:
    capacity = 26214400 (25.0MB)
    used = 1556480 (1.484375MB)
    free = 24657920 (23.515625MB)
    5.9375% used
    PS Old Generation
    capacity = 1677721600 (1600.0MB)
    used = 583893848 (556.8445663452148MB)
    free = 1093827752 (1043.1554336547852MB)
    34.80278539657593% used
    PS Perm Generation
    capacity = 16777216 (16.0MB)
    used = 10513680 (10.026626586914062MB)
    free = 6263536 (5.9733734130859375MB)
    62.66641616821289% used
    The 'PS Old' generation also looks way oversized here -- 1.6G out of 1.8G! -- and the young generation starved, although no non-default constraints have been set on generation sizes, and we had hoped the ballyhooed 'ergonomics' would have adjusted generation sizes sensibly over time.
    '-XX:+UseSerialGC' doesn't have the problem, and 'jmap -heap' suggests the big array is in the tenured generation there.
    Any ideas why UseParallelGC is behaving pathologically here? Is this, as I suspect, a bug? Any suggestions for getting it to work better through VM options? (Cap the perm size?)
    Any way to kick a running ParallelGC VM while it's running to resize its generations more sensibly?
    I may also tweak the Bloom filter to use a number of smaller -- and thus more GC-relocatable -- arrays... but I'd expect that to have a slight performance hit from the extra level of indirection/indexing, and it seems I shouldn't have to do this if the VM lets me allocate a giant object in the first place.
    Thanks for any tips/insights.
    - Gordon @ IA

    Yes, in my app, the large array is updated constantly.
    However, in the test case below, I'm getting similar behavior without any accesses to the big array at all.
    (I'll file this test case via the bug-reporting interface as well.)
    Minimal test case which seems to prompt the same behavior:
    /**
     * Demonstrate problematic ParallelGC behavior with a "big" (512MB)
     * object. (bug-id#6298694)
     * @author gojomo/archive,org
     */
    public class BigLumpGCBug {
        int[] bigBitfield;

        public static void main(String[] args) {
            (new BigLumpGCBug()).instanceMain(args);
        }

        private void instanceMain(String[] args) {
            bigBitfield = new int[Integer.MAX_VALUE >>> 4]; // 512MB worth of ints
            while (true) {
                byte[] filler = new byte[1024 * 1024]; // 1MB of short-lived garbage per pass
            }
        }
    }
    Run with java-options "-Xmx700m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps", the GC log is reasonable:
    0.000: [GC 0.001: [DefNew: 173K->63K(576K), 0.0036490 secs]0.005: [Tenured: 39K->103K(1408K), 0.0287510 secs] 173K->103K(1984K), 0.0331310 secs]
    2.532: [GC 2.532: [DefNew: 0K->0K(576K), 0.0041910 secs]2.536: [Tenured: 524391K->524391K(525700K), 0.0333090 secs] 524391K->524391K(526276K), 0.0401890 secs]
    5.684: [GC 5.684: [DefNew: 43890K->0K(49600K), 0.0041230 secs] 568281K->524391K(711296K), 0.0042690 secs]
    5.822: [GC 5.822: [DefNew: 43458K->0K(49600K), 0.0036770 secs] 567849K->524391K(711296K), 0.0038330 secs]
    5.956: [GC 5.957: [DefNew: 43304K->0K(49600K), 0.0039410 secs] 567695K->524391K(711296K), 0.0137480 secs]
    6.112: [GC 6.113: [DefNew: 43202K->0K(49600K), 0.0034930 secs] 567594K->524391K(711296K), 0.0041640 secs]
    Run with the ParallelGC, "-Xmx700m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseParallelGC", the long GCs dominate immediately:
    0.000: [GC [PSYoungGen: 2272K->120K(3584K)] 526560K->524408K(529344K), 60.8538370 secs]
    60.854: [Full GC [PSYoungGen: 120K->0K(3584K)] [PSOldGen: 524288K->524389K(656960K)] 524408K->524389K(660544K) [PSPermGen: 1388K->1388K(8192K)], 0.0279560 secs]
    60.891: [GC [PSYoungGen: 2081K->0K(6656K)] 526470K->524389K(663616K), 57.3028060 secs]
    118.215: [GC [PSYoungGen: 5163K->0K(6656K)] 529553K->524389K(663616K), 59.5562960 secs]
    177.787: [GC
    Thanks,
    - Gordon @ IA

  • Performance concern with deferred loading.

    I've been reading a great tutorial on making Space Invaders. Each frame, it checks whether the sprite is in a HashMap instance. If it finds the key, the value is used for drawing. If it's not found, it loads the sprite. This is referred to as deferred loading, for later, more complex games. Anyway, my concern is that every frame it checks this hash map. Wouldn't this kill performance? Any better ways of handling this?
    Here's a link to the code:
    http://www.planetalia.com/cursos/Java-Invaders/JAVA-INVADERS-11.tutorial
    Thanks,
    Phil

    Anyway, my concern is every frame, it checks this hash map. Wouldn't this kill performance?
    Yeah, it might take the frame as much as one microsecond longer to display. But probably not nearly that much.
    Computers are fast. Displays are slow. People's perceptions of displays are extremely slow. If you saw the code that runs as you drag a piece of code from one place to another in your text editor you would (a) be astonished at how much code gets run repeatedly, and (b) notice that that code is much more complicated than just a little hashmap lookup.
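    For what it's worth, the lookup-then-load pattern the tutorial describes boils down to something like the sketch below (modern Java, with ImageIO standing in for the tutorial's actual loading code):
    import java.awt.Image;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import javax.imageio.ImageIO;

    public class SpriteCache {
        private final Map<String, Image> sprites = new HashMap<>();

        /** Return the cached image, loading it from the classpath on first request only. */
        public Image getSprite(String ref) {
            // One hash lookup per frame is cheap; the expensive
            // disk read/decode happens exactly once per distinct ref.
            return sprites.computeIfAbsent(ref, r -> {
                try {
                    return ImageIO.read(SpriteCache.class.getResource(r));
                } catch (IOException e) {
                    throw new RuntimeException("Failed to load sprite " + r, e);
                }
            });
        }
    }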

  • Late 2011 MacBook Pro - worse graphics performance than early 2011?

    As per Apple.com:
    Apple's Late 2011 MBP 15" claims 2X the graphics performance of the last NVIDIA-based MBPs, as per Apple's tests.
    Apple's Early 2011 MBP 15" claimed 3X the graphics performance of the last NVIDIA-based MBPs, as per Apple's (seemingly identical) tests.
    http://imgur.com/lAvek
    Can anyone at Apple or otherwise comment on this?
    Specifically, it seems the Half-Life 2 test drops from 3X to 2X on the Late 2011 model, even though it has the AMD 6770 card in it.
    -Did Lion kill performance?
    -Did Apple downclock the chips after reviewers (AnandTech??) found that battery life dropped to roughly 2.5 hours while using discrete graphics?

    Luke Heemsbergen wrote:
    ....the Half Life 2 test
    -Did Lion kill performance?
    It's likely the Half-Life test is not compatible with Lion, nor are older 3D games.
    If you want to use a test, use a more neutral test like Cinebench, GeekBench etc.
    Or simply visit Barefeats and they will tell you
    http://www.barefeats.com/
    Remember programs are always in a state of having to be updated for the newer hardware and operating systems.
