Performance issues in Boot Camp with PassMark benchmark test

I wonder if anyone can shed some light on this.
I've got Windows XP 32-bit SP3 installed via Boot Camp. When I test it using the PassMark benchmark (latest version, on a 30-day trial), I get poor CPU performance. I am running a 2.53 GHz 13" MacBook Pro. I can compare my results with other people's results from the PassMark database, both by the same CPU and by other MacBooks.
My scores are nearly half what other similar MacBooks are showing in CPU Math, which is quite a difference.
So I wonder what's up, and how can I check it further? Half sounds like only one CPU running.
I've compared VMware Fusion and Boot Camp as well, and there's not much between them; both show around half the expected performance. (This is Fusion running the Boot Camp install. A separate Windows install in Fusion shows slightly better results, but not by much.)
Any pointers or help would be great. For example, is there some way under OS X I can check the performance of the MacBook as a baseline?
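One quick sanity check for the "only one CPU" theory - a minimal sketch in Python (which ships with OS X); the workload and numbers here are illustrative, not a real benchmark:

    # Check how many CPUs the OS exposes, and whether using both of them
    # actually speeds up a CPU-bound task.
    import multiprocessing as mp
    import os
    import time

    def burn(n):
        total = 0
        for i in range(n):      # CPU-bound busy work
            total += i * i
        return total

    if __name__ == "__main__":
        print("Logical CPUs visible to the OS:", os.cpu_count())
        n = 5_000_000

        start = time.perf_counter()
        burn(n); burn(n)
        serial = time.perf_counter() - start

        start = time.perf_counter()
        with mp.Pool(2) as pool:
            pool.map(burn, [n, n])
        parallel = time.perf_counter() - start

        # On a healthy dual-core machine the parallel run should take roughly
        # half the serial time; ~1x would suggest only one core is in use.
        print(f"serial {serial:.2f}s, parallel {parallel:.2f}s")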
Is there something I've not done whilst installing Windows XP through Boot Camp?
Thanks in advance!
paul

Maybe wait until software catches up and is patched to work with the Intel HD 3000 plus the 6970M.
Apple did get an early jump on Sandy Bridge.
Can't find the thread from someone else (thanks for making searching threads so cumbersome).
AMD Mobility:
http://support.amd.com/us/gpudownload/windows/Pages/radeonmob_win7-64.aspx
And for Nvidia 320M users:
http://www.nvidia.com/object/notebook-win7-winvista-64bit-275.27-beta-driver.html
3DMark06 will almost surely need a patch, I suppose.
I'd hit the web sites for PC laptop gaming if you can.
http://www.bing.com/search?q=6970M+3DMark06
http://www.notebookcheck.net/Intel-HD-Graphics-3000.37948.0.html
This should be right up your alley -
http://forums.macrumors.com/showthread.php?t=1101179&page=2
http://www.bing.com/search?q=6970M+with+Intel+3000+graphics
Most of this type of stuff is at your fingertips with a little searching.

Similar Messages

  • Performance issue in correlation with hidden objects in reports

    Hello,
    Is there a known performance issue in connection with hidden objects in reports using conditional suppressions? (HFM version 11.1.2.1)
    Using comprehensive reports, we see huge performance differences between the same reports with and without hidden objects. Furthermore, we suspect that some of the trouble with our reporting server environment stems from end users running these reports.
    Any advice would be welcome!
    Regards,
    bsc
    Edited by: 972676 on Nov 22, 2012 11:27 AM

    If you said that working with EVDRE for each separate sheet is fine, that means the main problem is related to your custom VB macro's interdependencies.
    I suggest adding a log (writing to a text file) for your macro; you will see that that minute is actually spent performing operations from the custom macro.
    Kind Regards
    Sorin Radulescu
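    To illustrate the kind of per-step logging Sorin suggests (sketched in Python rather than VB; the file and step names are hypothetical), write one timestamped line per macro step so the slow operation becomes obvious:

        # Wrap each step of the macro and append its duration to a text file.
        import time

        def log_step(logfile, label, func, *args):
            start = time.perf_counter()
            result = func(*args)
            elapsed = time.perf_counter() - start
            with open(logfile, "a") as f:
                f.write(f"{time.strftime('%H:%M:%S')} {label}: {elapsed:.2f}s\n")
            return result

        # Usage: data = log_step("macro.log", "load sheet", load_sheet, "Sheet1")
        # Reading macro.log afterwards shows exactly where the minute goes.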

  • Performance issue calling bapi po create in test mode to get error messages

    Hi,
    We have a report which displays in an ALV the purchase orders that got created in SAP but either got blocked for not meeting PO release strategy tolerances or have failed output messages. We display the failed messages too.
    We loop over the internal table of EBAN (purchase requisitions) and call the PO-create BAPI in test mode to get the failed messages.
    Now we are facing a performance issue in production. What would be a more efficient way to get the error messages without affecting performance?
    Regards,
    Suvarna

    Hi Suvarna,
    so you need to reduce the number of PO simulations.
    - Likely you have already checked that all EBAN entries should by now be converted into POs. If there is a large number of "new" EBAN entries, they don't need to be simulated.
    - If it's a temporary problem: help correct the problems (maintain prices or whatever the error reasons are). Then the number of not-yet-converted purchase requisitions (PRs) should drop, too.
    - If it's likely that your volume of open PRs will stay high: create a Z-table with the key of EBAN and a counter, simulate the PO conversions once a day and store the results in the Z-table. In your report you can use those results... if they are "new enough". From time to time new simulations should be done, as missing master data might become available.
    Maybe users should be allowed to start this second report manually (in the background), too -> then they can update the messages themselves after some data corrections, without waiting for the result (just check later in the online report and do something different in between).
    And you might need to explain that PO simulation takes as long as PO creation... there is no easy or fast way around this.
    Best regards,
    Christian
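    A sketch of this caching idea in Python terms (the ABAP version would be the Z-table keyed like EBAN; the daily age limit is illustrative): simulate once, store the messages, and reuse them while they are fresh enough.

        import time

        CACHE = {}            # PR key -> (timestamp, messages)
        MAX_AGE = 24 * 3600   # re-simulate once a day

        def get_messages(pr_key, simulate):
            entry = CACHE.get(pr_key)
            if entry and time.time() - entry[0] < MAX_AGE:
                return entry[1]              # fresh enough: skip the simulation
            messages = simulate(pr_key)      # the expensive PO simulation
            CACHE[pr_key] = (time.time(), messages)
            return messages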

  • How to get around a performance issue when dealing with a lot of data

    Hello All,
    This is an academic question really, I'm not sure what I'm going to do with my issue, but I have some options.  I was wondering if anyone would like to throw in their two cents on what they would do.
    I have a report where the users want to see all agreements and all conditions related to the updating of rebates, plus the affected invoices. From a technical perspective: ENT6038-KONV-KONP-KONA-KNA1. These are the tables I have to hit. The problem is that when they retroactively update rebate conditions they can hit thousands of invoices, which blossoms out to thousands of conditions... you see the problem. I simply have too much data to grab; it times out.
    I've tried everything around the code.  If you have a better way to get price conditions and agreement numbers off of thousands of invoices, please let me know what that is.
    I have a couple of options.
    1) Use shared memory to preload the data for the report. This would work, but I won't know what data needs to be loaded until report run time. They put in a date; I simply can't preload everything. I don't like this option much.
    2) Write a function module to do this work. When the user clicks the button to get this particular data, it launches the FM in the background and e-mails them the results. As you know, a background job won't time out. So far this is my favored option.
    Any other ideas?
    Oh... nope, BI is not an option, we don't have it. I know, I'm not happy about it. We do have a data warehouse, but the prospect of working with that group makes me wince.

    My two cents - firstly, I totally agree with Derick that it's probably a good idea to go back to the business and challenge the requirement in regards to reporting and "whether any user can meaningfully process all those results in an aggregate". But having dealt with customers across industries over a long period of time, it would probably be a bit fanciful to expect them to change their requirements too much; in my experience they neither understand the technology (too well) nor want to hear about the technical limitations of a system. They want what they want, if possible yesterday!
    So, about dealing with performance issues within ABAP: I'm sure you are already using efficient programming techniques like hashed internal tables with unique keys and accessing rows via field-symbols, but what I was going to suggest is to look at using [Extracts|http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9ed135c111d1829f0000e829fbfe/content.htm]. I've had to deal with this a couple of times in the past when handling massive amounts of data, and I found them to be very efficient in regards to performance. A good point to remember when using Extracts, I quote from SAP Help: "The size of an extract dataset is, in principle, unlimited. Extracts larger than 500KB are stored in operating system files. The practical size of an extract is up to 2GB, as long as there is enough space in the filesystem."
    Hope this helps,
    Cheers,
    Sougata.
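    A rough Python analogy of what Extracts buy you (illustrative only; fetch_rows stands in for the expensive invoice/condition reads): append records to a disk-backed file instead of growing an in-memory table, then make a sequential second pass, so the working set stays small.

        import csv
        import tempfile

        def fetch_rows():                     # stand-in for the real DB reads
            for i in range(100_000):
                yield [i, f"invoice-{i}", i * 0.1]

        spill = tempfile.TemporaryFile("w+", newline="")
        writer = csv.writer(spill)
        for row in fetch_rows():
            writer.writerow(row)              # rows go to an OS file, not to RAM

        spill.seek(0)                         # sequential pass, like LOOP over an extract
        total = sum(float(r[2]) for r in csv.reader(spill))
        print(total)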

  • SSAS Strange Performance Issues (Long running with NO read or write activity) - UAT Test Environment

    Hi All,
    I'm looking for some pointers; my team and I have drawn a blank as to what is going on here.
    Our UAT is a virtual machine.
    I have written a simple MDX query which, on a normal freshly processed cube, executes in under 15 seconds, and I can keep re-running it:
    Run 1. 12 secs
    Run 2. 8 Secs
    Run 3. 8 Secs
    Run 4. 7 Secs
    Run 5. 8 Secs
    Run 6. 28 MINUTES!!
    This is on our test environment; I am the only user connected and there is no processing active.
    Could anyone please offer some advice on where to look, or tips on what the issue may be?
    Regards,
    Andy

    Hi aown61,
    According to your description, you get one long-running execution after running the same query several times. Right?
    In this scenario, it's quite strange that one run takes that long. I suggest using SQL Server Profiler to monitor the events during execution. It can track engine process events, such as the start of a batch or a transaction, and you can replay the events captured on the Analysis Services instance to see exactly what happened. For more information, please refer to the link below:
    Use SQL Server Profiler to Monitor Analysis Services
    If you have any question, please feel free to ask.
    Regards,
    Simon Hou
    TechNet Community Support

  • Performance issues home sharing with Apple TV2 from Mountain Lion Server

    I have a Mac Mini which I have just upgraded to Mountain Lion Server from Snow Leopard Server.
    I have noticed that the performance of streaming a film using Home Sharing to an Apple TV2 has degraded compared to the Snow Leopard setup. In fact, the film halts a number of times during playback, which is not ideal.
    I have tested the network between the two devices and cannot find a fault.
    Has anyone come across this problem before?
    Are there any diagnostic tools I can use to measure the Home Sharing streaming service to the Apple TV2 device?
    Any help much appreciated.

    Well, I tried a few other things and one worked, but again only the first time I tried connecting to the desktop PC with iTunes. I flashed my router with the latest update and the ATV2 could see the iTunes library and I was able to play media. Later in the day I was going to show off to my daughter that I had fixed it and, to my dismay, no go. I tried opening the suggested ports but no luck.
    I then tried loading iTunes on a Win7 laptop and it works perfectly with the ATV2. Both the laptop and the ATV2 are connected to the router wirelessly, while the desktop is connected to the router by Ethernet. Not sure if this is part of the issue, as it sounds like this didn't work for others. The only other difference between the laptop and desktop is that the desktop has Win7 SP1 loaded while the laptop does not; now I'm scared to load it, though I don't think that's the issue. All in all, a very vexing situation. Hopefully Apple comes up with a solution soon.

  • WebI report performance issue as compared with backend SAP BW

    Hi
    We have SAP BW as the backend.
    The dashboard and WebI reports are created against SAP BW,
    i.e. through a Universe on top of a BEx query, then a WebI report, then a dashboard through Live Office.
    My point is that when we create WebI reports with a date range as a parameter (sometimes as a mandatory variable which comes up as a prompt in WebI, sometimes taking the L01 calendar date from BEx and creating a prompt in WebI), we find that the reports take a lot of time to open: 5 minutes, 10 minutes, sometimes 22 minutes.
    This type of problem never occurred when the backend was Oracle.
    Drilling in the WebI report also takes a lot of time.
    So can you suggest any solution?

    Hi Gaurav,
    We logged this issue with support already; it is acknowledged.
    What happens is that whenever you use an infoobject in the condition (so you pull the object into the condition and build a condition there, or use that object in a filter object in the universe and then use that filter), this results in the object being added to the result set.
    Since the query will retrieve a lot of different calendar days for the period you selected, the result set will be VERY big and performance virtually non-existent.
    The workaround we used is to use a BEx variable for all date-based selections.
    One optional range variable makes it possible to build various types of selections: less than (with a very early start date), greater than (with a very far-in-the-future end date), and between.
    Because the range selection is now handled by BEx, the calendar day will not be part of the selection...
    Good luck!
    Marianne
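    A toy calculation of Marianne's point (the numbers are made up): once the calendar day is part of the result set, each day becomes its own row instead of being aggregated away.

        rows_after_aggregation = 2_000   # hypothetical result size without the day column
        days_in_selection = 365
        rows_with_day_column = rows_after_aggregation * days_in_selection
        print(rows_with_day_column)      # 730,000 rows transferred instead of 2,000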

  • Performance issue while working with large files.

    Hello Gurus,
    I have to upload about 1 million keys from a CSV file on the application server and then delete the corresponding entries from a DB table containing 18 million entries. This is causing performance problems and my program is very slow. Which approach is better?
    1. First read all the data from the CSV and then use one delete statement?
    2. Or delete each line directly after reading its key from the file?
    Another program has to update about 2 million entries in a DB table containing 20 million entries. Here I also have very big performance problems (the program has been running for more than 14 hours). What is the best way to work with such a large amount of data?
    I tried to rewrite the program so that it runs in parallel, but since this program will only run once, the costs of implementing an aRFC parallelization are too big. Please help; maybe someone who has done migrations is good at this.
    Regards,
    Ioan.

    Hi,
    I would suggest you split the file and then process each set.
    Lock the table to ensure it is available the whole time.
    After each set, do a commit and then proceed.
    This ensures there is no break in the middle that forces you to start again after deleting the already-processed entries from the file.
    Also make use of sorted tables and keys when deleting/updating the DB.
    For the delete, when multiple entries are involved, using an internal table can be tricky, as some records may be successfully deleted and some may not.
    To make sure, first get the count of records in the DB that match internal table set 1.
    Then do the delete from the DB with internal table set 1.
    Again check the count of records in the DB that match internal table set 1 and see that the count is zero.
    This makes sure all the records are deleted, but again adds some overhead.
    And the goal here is to reduce the execution time.
    Gurus may have a better idea...
    Regards
    Sree
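    A sketch of the chunked delete Sree describes, in Python DB-API terms (the table and column names are hypothetical, and the chunk size is illustrative): delete in sets, commit per set, and verify the count so a restart can skip finished chunks.

        import csv

        CHUNK = 500

        def delete_keys(conn, csv_path):
            with open(csv_path, newline="") as f:
                keys = [row[0] for row in csv.reader(f)]
            cur = conn.cursor()
            for i in range(0, len(keys), CHUNK):
                chunk = keys[i:i + CHUNK]
                cur.executemany("DELETE FROM big_table WHERE key_col = ?",
                                [(k,) for k in chunk])
                conn.commit()                  # each committed set is a safe restart point
                marks = ",".join("?" * len(chunk))
                cur.execute("SELECT COUNT(*) FROM big_table "
                            "WHERE key_col IN (" + marks + ")", chunk)
                assert cur.fetchone()[0] == 0  # verify the whole set is really gone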

  • Performance issues for iOS with high resolution.

    I made an app with a resolution of 480x320 for iOS. It works quite well.
    I then remade it with a resolution of 960x640. In AIR for iOS settings I set Resolution to "High".
    The app looked great, however there was a noticeable drop in performance.
    The app functioned the same way as the original lower-resolution app, but it was lagging.
    Has anyone else had this problem?
    Am I doing something wrong?

    With my game, I had around 60 fps on the 3GS and around 50 on the iPhone 4, with the high settings. I got around 10 fps extra by using: stage.quality = StageQuality.LOW;
    That was on AIR 2.6. I tried with AIR 2.7, but it seems like that command can't be used there (?)

  • Performance issue on iPad4 with AIR SDK 3.9

    Hi!
    I have an app that I created with Flex SDK 4.6.0. I first compiled the app with AIR SDK 3.1 and it ran with good performance on the iPad4 (and a little slowly on the iPad2). Then I upgraded the AIR SDK to version 3.9 and suddenly my app started to run slowly (but on the iPad2 performance is good).
    Is there any known problem with AIR SDK 3.9 on the iPad4? Or on iOS 6.1?
    Should I downgrade the AIR SDK back to 3.1 to get good performance on the iPad4?
    Thanks in advance
    UPD: I've downgraded the AIR SDK to 3.1 and my app got its good performance back! (But there are some strange bugs.)
    Message was edited by: yx

    Hi Nimit!
    1. I've upgraded the AIR SDK to the 4.0 beta and the problem has gone away.
    2. Unfortunately, I'm not sure I can share my app - it's not in the policy of the company I'm working for. I'll check with my boss.
    Thank you,
    Olga

  • Performance issues when working with huge lists

    I've got a script that reads a large CSV spreadsheet and parses the data into a list of the form [[A1,B1,C1], [A2,B2,C2], [A3,B3,C3]] and a second list of the form [#A1:B1,#A2:B2,#A3:B3] etc… The actual spreadsheet is about 10 columns x 10,000 rows. Reading the file string goes fast enough; the parsing starts off fast but slows to a crawl after about 500 rows (I put the row count on the stage to check progress). Does anyone know if the getaProp, addProp, and append methods are sensitive to the size of the list?
    A sample of one of the parsing loops is below. I'm aware all interactivity will stop as this is executed. This script is strictly for internal use; it crunches the numbers in two spreadsheets and merges the results to a new CSV file. The program is intended to run overnight and the new file harvested in the morning.

    > Does anyone know if the getaProp, addProp, and append methods are sensitive to the size of the list?
    Is this a trick question? Sure they are. All of them.
    Addprop and append are quite fast (due to the list object preallocating memory scalably, as required), so I doubt that they are the cause of the problem.
    GetAProp will search each item in the list; therefore, if you are searching for the last item, or if the item is not in the list, the more items there are, the slower the command.
    I didn't go through all your code, but I noticed
    - this: repeat with rowCount = 2 to file2string.line.count
    Big no-no! Line counting is far too slow an operation to be evaluated in a loop.
    - and this: myFile2data.append(myLineData)
    String operations like this require memory reallocation, and are therefore very slow. If you do conclude that such an operation causes the problem, consider using a preallocated buffer (create a big string in advance) and then use
    mydata.char[currentoffset..(currentoffset+newstr.length)] = newstr
    This can make code run even hundreds of times faster compared to the append method.
    Applied CD wrote:
    > I've got a script that reads a large CSV spreadsheet and parses the data into a list of the form [[A1,B1,C1], [A2,B2,C2], [A3,B3,C3]] and a second list of the form [#A1:B1,#A2:B2,#A3:B3] etc… The actual spreadsheet is about 10 columns x 10,000 rows. Reading the file string goes fast enough; the parsing starts off fast but slows to a crawl after about 500 rows (I put the row count on the stage to check progress). Does anyone know if the getaProp, addProp, and append methods are sensitive to the size of the list?
    >
    > A sample of one of the parsing loops is below. I'm aware all interactivity will stop as this is executed. This script is strictly for internal use; it crunches the numbers in two spreadsheets and merges the results to a new CSV file. The program is intended to run overnight and the new file harvested in the morning.
    >
    > writeLine("File 2 Data Parsing" & RETURN)
    > myOrderColumn = myHeaders2.getOne("OrderNum")
    > myChargesColumn = myHeaders2.getOne("Cost")
    > myFile2data = []
    > mergedFedExCharges = [:]
    > repeat with rowCount = 2 to file2string.line.count
    >   myLineData = []
    >   repeat with i = 1 to file2string.line[rowCount].item.count
    >     myItem = file2string.line[rowCount].item[i]
    >     if i = 1 then
    >       myItem = chars(myItem,2,myItem.length)
    >     end if
    >     myLineData.append(myItem)
    >   end repeat
    >   if myLineData.count = myHeaders2.count then
    >     myFile2data.append(myLineData)
    >     myOrderSymbol = symbol("s" & myLineData[myOrderColumn])
    >     myCurrentValue = getaProp(mergedFedExCharges,myOrderSymbol)
    >     if voidP(myCurrentValue) then
    >       mergedFedExCharges.addProp(myOrderSymbol,0.00)
    >     end if
    >     mergedFedExCharges[myOrderSymbol] = mergedFedExCharges[myOrderSymbol] + value(myLineData[myChargesColumn])
    >     writeUpdate(myLineData[1])
    >   else
    >     writeError("file 2 : " & string(myLineData) & RETURN)
    >   end if
    > end repeat
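    The same merge in Python terms (an illustration, not Director code; "file2.csv" and the header names are hypothetical): a dict gives constant-time lookups where Lingo's getaProp scans the whole property list, and iterating the reader avoids re-counting lines on every pass.

        import csv
        from collections import defaultdict

        merged = defaultdict(float)          # order number -> summed charges
        rows = []

        with open("file2.csv", newline="") as f:
            reader = csv.reader(f)
            headers = next(reader)
            order_col = headers.index("OrderNum")
            charges_col = headers.index("Cost")
            for line in reader:              # no repeated line counting
                rows.append(line)
                merged[line[order_col]] += float(line[charges_col])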

  • Solve performance issue... with multiple threads

    I made an application with a huge number of rows in a database table...
    Every single process could process thousands of rows.
    Does it make sense to divide one process (for example, by creating ten application modules, one per HTTP request) into multiple threads?
    Has anyone had the same problem as me?
    And what is the best strategy to solve this problem?

    OK, this helps to understand the problem.
    We had a problem like yours. What we ended up doing is reading the files into a temporary DB table, committing every 500 rows to reduce memory usage. We do this without any validation, just to get hold of the data in the DB.
    After all the data from a file is in the DB table, we do the validation (you can even use PL/SQL for this) and show the user all rows which are not valid. This gives the user the chance to correct the rows (or dismiss them).
    After that (now knowing that the data should process without any error) we do the real work of inserting the data.
    All you have to do is work in chunks (we use 500-1000 rows) and commit the data already processed. Flags in the temporary table allow us to restart the process if something happens while processing the data.
    Working in chunks allows the framework to free and regain some memory used while doing the work.
    Timo
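    A sketch of this staged-load pattern in Python DB-API terms (the table and column names are hypothetical): load raw rows with a status flag, committing in chunks; validate separately; the flags make the whole process restartable after a crash.

        CHUNK = 500  # the chunk size Timo mentions

        def load_raw(conn, rows):
            cur = conn.cursor()
            for i, row in enumerate(rows, 1):
                cur.execute("INSERT INTO staging (data, status) VALUES (?, 'NEW')",
                            (row,))
                if i % CHUNK == 0:
                    conn.commit()            # keep memory use bounded
            conn.commit()

        def validate_all(conn, is_valid):
            cur = conn.cursor()
            cur.execute("SELECT id, data FROM staging WHERE status = 'NEW'")
            for row_id, data in cur.fetchall():
                status = "VALID" if is_valid(data) else "INVALID"
                cur.execute("UPDATE staging SET status = ? WHERE id = ?",
                            (status, row_id))
            conn.commit()
            # the real insert then reads only the 'VALID' rows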

  • Performance issue using Universe with SNC connection

    I had a dynamic dashboard using Live Office > WebI report > Universe > BEx query which was working fine, but we recently implemented SNC between BusinessObjects and SAP BW, and the universe connection was changed to use a sign-on connection. After changing the connection in the universe, I see a performance degradation when refreshing the connections in the Xcelsius dashboard. Earlier the connection refresh time was 6 secs; now it is around 30 secs. Interestingly, I have tried refreshing the WebI report and its refresh time did not change (still less than 6 secs), and I have also tried refreshing the Live Office component directly in the design/spreadsheet mode of Xcelsius; even there the refresh time remains less than 6 secs.
    The connection refresh time is bad only when I am in preview mode or when I deploy the SWF to the BusinessObjects server.
    Xcelsius version: 2008 (5.3)
    BO version: 3.1 SP2 fixpack 2.8
    Thanks


  • Performance issue: Calling a BAPI PO create in test mode to get error msgs

    Hi,
    We have an ALV report in which we display purchase orders that got created in SAP but either got blocked for not meeting PO release strategy tolerances or have failed output messages. We display the failed messages as well.
    We loop over the internal table of EBAN (PRs) and call the PO-create BAPI in test mode to get the failed messages.
    Now we are facing a performance issue in production. What would be a more efficient way to get the error messages?
    Regards,
    Ayub H.
    Moderator message: duplicate post (different ID, same company...), see below:
    Performance issue calling bapi po create in test mode to get error messages
    Edited by: Thomas Zloch on Mar 9, 2012

    Hi Suvarna,
    so you need to reduce the number of PO simulations.
    - Likely you have already checked that all EBAN entries should by now be converted into POs. If there is a large number of "new" EBAN entries, they don't need to be simulated.
    - If it's a temporary problem: help correct the problems (maintain prices or whatever the error reasons are). Then the number of not-yet-converted purchase requisitions (PRs) should drop, too.
    - If it's likely that your volume of open PRs will stay high: create a Z-table with the key of EBAN and a counter, simulate the PO conversions once a day and store the results in the Z-table. In your report you can use those results... if they are "new enough". From time to time new simulations should be done, as missing master data might become available.
    Maybe users should be allowed to start this second report manually (in the background), too -> then they can update the messages themselves after some data corrections, without waiting for the result (just check later in the online report and do something different in between).
    And you might need to explain that PO simulation takes as long as PO creation... there is no easy or fast way around this.
    Best regards,
    Christian

  • Performance Issues with 10.6.7 and External USB Drives

    I've had a few performance issues come up with the latest 10.6.7 that seem to be related to external USB drives. I have a 2TB USB drive with my iMovie content on it, and after the 10.6.7 update iMovie is almost unusable. Finder even seems slow when browsing files on this drive. It seems like any access to the drive is delayed in all applications. Before the update the performance was acceptable, but now it is almost unusable. Most of the files on this drive are large DV files.
    Anyone else experience this?

    Matt,
    If you want help, please start your own thread here:
    http://discussions.apple.com/forum.jspa?forumID=1339&start=0
    And if in your previous thread you aren't getting sufficient help for your iPhone, post a new topic here:
    http://discussions.apple.com/forum.jspa?forumID=1139
    You'll get a wider audience, and won't confuse the original poster. Performance issues can be caused by numerous issues as outlined in my FAQ*
    http://www.macmaps.com/Macosxspeed.html
    If every person who had a performance issue posted to this thread, we'd never find a solution for the initial poster. Let's isolate each case one by one. It is NOT necessarily the same issue, even if the symptoms are the same. There are numerous contributing factors at work with computers, and if we don't isolate them, we'll never get to the root cause.

Maybe you are looking for

  • Stateful session bean returns null

    Hi, I call a stateful session bean which should return a Long object. Everything works fine while the bean executes, and the Long object is returned by the bean. But in the EJB client I get a null object. The EJB client is an OC4J 10g standalone server

  • Free Trial Version Download Location

    Dear friends: In "Oracle9i JHeadstart - Frequently Asked Questions" it said: "There is a thirty-day trial version available. This trial is only available in combination with e-consulting online or three days of consulting." I want down

  • Elements 10 locks up upon sign in

    I can use my user name and password successfully on the Adobe site, but when I attempt to sign in within Elements 10 the program locks up completely. Solutions?

  • iSync calendar with Palm missing Location field.

    I think I finally got iSync to sync properly with my Palm - that is, not create duplicate entries on sync. But sadly I can't get it to sync the Location field from iCal to the Palm. The Palm does have a Location field, but it is just empty. It appears

  • My iPod works but makes a noise

    My iPod got wet about a week ago, but I left it to dry and then turned it on. It worked, and it's still working. It's just that I'm worried about this sound it makes, like it was turning off. It only does that when the screen is turned on. Is it gettin