Improve performance on computers with more than 1 GB RAM (Photoshop CS2)

Photoshop CS2 uses complex memory management procedures. On computers with 1 GB of RAM or more, you can optimize Photoshop to take advantage of the quantity of RAM in your system and manage memory more efficiently. For more information on how to enable the "Bigger Tiles" plug-in, see Adobe Knowledgebase Document 331372.

Hi,
1 to 8 GB for a process is a little on the high side :) Why don't you try to "page" it like Windows does: serialize the memory-intensive objects into a folder and load (deserialize) them only when they are required...
Just a suggestion, and not exactly an answer to your question, but I thought I would post this comment... hope it gives some insight.
Cheers,
Chandra
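The spill-to-disk idea above can be sketched in Python (a minimal illustration, not the original poster's code; the `SpilledObject` name and the use of `pickle` are my assumptions):

```python
import os
import pickle
import tempfile

class SpilledObject:
    """Serialize a memory-intensive object to disk and reload it on demand."""

    def __init__(self, obj, spill_dir=None):
        self.spill_dir = spill_dir or tempfile.mkdtemp(prefix="spill_")
        fd, self.path = tempfile.mkstemp(dir=self.spill_dir, suffix=".pkl")
        with os.fdopen(fd, "wb") as f:
            pickle.dump(obj, f)          # write the object out to disk

    def load(self):
        """Deserialize the object back into memory when it is needed."""
        with open(self.path, "rb") as f:
            return pickle.load(f)

big = list(range(100_000))               # stand-in for a memory-hungry object
spilled = SpilledObject(big)
del big                                  # the in-memory copy can now be freed
restored = spilled.load()                # reload only when actually required
```

In a real application you would spill only rarely-used objects and keep hot ones in memory, since the disk round trip is far slower than RAM.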

Similar Messages

  • Problems deploying Windows 7 x64 computers with more than 8GB RAM

    SCCM 2012 R2 CU1
    Windows 7 x64
    We have a weird problem after upgrading from SP1 to R2 CU1. We have built a Windows 7 64 bit image with the latest updates (currently September 2014 updates). It deploys fine to all computers. But after some days – don’t know exactly – guess after 4-7 days
    - we start getting errors on computers with more than 8 GB of RAM. The errors just start out of the blue. We didn't change TS, image or driver package.
    The error shows in 3 different ways – depending of the computer model.
    1) BSOD – STOP 0x000000F4. Disk error.
    2) Task Sequence error 0x87D00269 when installing the first Application in TS. The error seems to have something to do with Management Point not found.
    3) Task Sequence simply stops after installing CMclient. No errors - it simply stops.
For instance, we get the BSOD on the Lenovo ThinkCentre M93p, but on the ThinkCentre M83 we get the TS error 0x87D00269. The M93p and M83 use the same driver package. They then fail consistently every time after the error has begun. If we reduce the memory (remove it physically)
to a maximum of 8GB of RAM, they successfully run the Task Sequence. We get the above errors on both Lenovo and HP machines with more than 8GB.
I made a copy of the production TS and started testing. I created a new driver package for the M93p/M83 containing only the NIC drivers. Now the M93p and M83 successfully finish the TS. If I add SATA, chipset, or MEI/SOL drivers to the package, I get the 0x87D00269
error on both the M93p and M83. At least no BSOD on the M93p ;-)
Tried setting SMSTSMPListRequestTimeout to 180 in the TS. Also tried putting a 5-minute break before installing the first Application in the TS, and putting SMSMP=mp1.mydomain in the installation properties for the CM client. It is still
the same problem.
    Investigating the smsts.log on a computer with the 0x87D00269 error.
    Policy Evaluation failed, hr=0x87d00269 InstallApplication
    25-09-2014 08:59:59 3020 (0x0BCC)
    Setting TSEnv variable 'SMSTSAppPolicyEvaluationJobID__ScopeId_654E40B7-FC55-4213-B807-B97383283607/Application_d9eea5a0-0660-43e6-94b8-13983890bae2'=''InstallApplication 25-09-2014 08:59:59 3020 (0x0BCC)
    EvaluationJob complete InstallApplication25-09-2014 08:59:59 3020 (0x0BCC)
    MP list missing in WMI, sending message to location service to retrieve MP list and retrying. InstallApplication 25-09-2014 08:59:59 3020 (0x0BCC)
    m_hResult, HRESULT=87d00269 (e:\qfe\nts\sms\client\osdeployment\installapplication\installapplication.cpp,1080) InstallApplication
    25-09-2014 08:59:59 3020 (0x0BCC)
    Step 2 out of 2 complete InstallApplication 25-09-2014 08:59:59 3020 (0x0BCC)
    Install application action failed: 'SCIENCE PC Guide'. Error Code 0x87d00269 InstallApplication 25-09-2014 08:59:59 3020 (0x0BCC)
    Sending error status message InstallApplication 25-09-2014 08:59:59 3020 (0x0BCC)
     Setting URL = http://mp1.mydomain, Ports = 80,443, CRL = false InstallApplication 25-09-2014 08:59:59 3020 (0x0BCC)
Investigating the 0x87D00269 error in the site server's Status Message Queries is weird:
    Install Static Applications failed, hr=0x87d00269. The operating system reported error 617: You have attempted to change your password to one that you have used in the past. The policy of your user account does not allow this. Please select a password that
    you have not previously used.
Hopefully this is a bug in translating the error codes? Just to be sure, I checked all the SCCM service accounts and none of them have expired passwords. And again, why only on computers with more than 8GB of RAM?
Has anyone else had problems deploying computers with more than 8GB of RAM?
Why does this happen after a few days? To begin with it runs just perfectly, and then it starts failing on all machines with more than 8GB, without a change in TS, image, or driver package. We have seen this pattern for the last 3 months after
updating our image with the latest Windows Updates. It all started after updating from SP1 to R2 CU1.
Why does it only affect machines with more than 8GB of RAM?
    How to get rid of the 0x87D00269 error?
Seems to be somehow related to the driver installation mechanism in SCCM in combination with Windows 7 64-bit, more than 8GB of RAM, and SCCM 2012 R2 CU1 (or perhaps just R2).
    Any help or hints would be appreciated!
    Thanks
    Anders

We have the same workaround: using an x64 boot image resolved all our issues with WinPE 5.0.
    Machines with >4GB of RAM
    Lenovo ThinkPad T540p failed applying drivers with 80040005
    Black screen/white cursor issues intermittently
No updates from Microsoft yet; our case is still open and they are researching the logs, but we have updated our boot images and all is well now.
    Daniel Ratliff | http://www.PotentEngineer.com

  • GeForce 7300 GT with more than 2GB RAM?

I had a lot of crashes with my Mac Pro, which had the original 2x512MB plus Kingston 2x1GB installed.
I contacted Kingston support when Rember caused a kernel panic every time I tried to test memory with it.
They said it is a "known issue" that the 7300 GT does not work with the current drivers when OpenGL applications run with more than 2GB of memory installed.
But my Mac Pro kept having kernel panics with Rember (and with lots of other applications) without any OpenGL involved. I even ran Rember with the Finder closed.
    With only 2x512 or only 2x1 installed everything seems to be fine, except that there's not enough memory to use FCS efficiently.
    Anybody had same problems?
    Is this really "known issue"?
    If it is, when is solution available?
The Mac Pro and GeForce 7300 GT have been on the market for over half a year now.
Any links and references would be appreciated.
I didn't find anything on Apple's website and will be calling Apple support tomorrow...

    I have 4 GBs installed in my Mac Pro w/7300 card. No problems whatsoever.
    For best performance from quad-channel interleaving you may want to install either six additional 512 MB DIMMs (four in Riser A and four in Riser B) or two more 512 MB DIMMs (in Riser A) and four 1 GB DIMMs (in Riser B.)
    In your current configuration I believe you should have two 512 MB DIMMs in Riser A and two 1 GB DIMMs in Riser B.
    See Memory (FB-DIMM) Replacement Instructions.

  • Successful install of 8.1.7 on Solaris 8 with more than 256Mb RAM

    To all who have struggled with this one, like me. You owe me a bottle of Rogaine.... but I will accept a bottle of Jack instead.
It appears that the kernel parameters published in the Install Guide are only good for Solaris 8 with up to 256Mb of RAM.
If dbassist craters at around 80% in the OUI with a "Can't connect to Oracle" and you have more than 256Mb on board:
DOUBLE the kernel parameters!!!
shmmni = 200
shmseg = 20
semmns = 400
semmni = 200
semmsl = 200
semopm = 200
    touch /reconfigure (they don't mention that in the Guide)
    reboot
    If someone out there can isolate which actual damn kernel parameter is the culprit, I will gladly share some Jack with you.
    Otherwise an email to [email protected] with a comment or three will suffice.
    Cheers

Sorry to ask a dumb question, but I want to be precise: when you say touch /reconfigure, do you mean touch system?
What is the reconfigure?
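To the follow-up question: touch /reconfigure creates an empty marker file named /reconfigure in the root directory (not the /etc/system file), which makes the next boot a reconfiguration boot (the same effect as boot -r). The doubling of the parameters can be sketched as follows; note the shmsys:/semsys: module prefixes are the standard Solaris /etc/system naming, which the post does not spell out, so verify them against your Install Guide:

```python
# Sketch: emit the doubled kernel parameters as /etc/system "set" lines.
# The shmsys:shminfo_* / semsys:seminfo_* prefixes are assumed standard
# Solaris naming; "semmls" in the post is taken to be a typo for semmsl.

DOUBLED = {
    "shmsys:shminfo_shmmni": 200,
    "shmsys:shminfo_shmseg": 20,
    "semsys:seminfo_semmns": 400,
    "semsys:seminfo_semmni": 200,
    "semsys:seminfo_semmsl": 200,
    "semsys:seminfo_semopm": 200,
}

def etc_system_lines(params):
    """Render tunables in /etc/system syntax, e.g. 'set semsys:seminfo_semmns=400'."""
    return ["set %s=%d" % (name, value) for name, value in params.items()]

for line in etc_system_lines(DOUBLED):
    print(line)
# After appending these lines to /etc/system, run "touch /reconfigure"
# and reboot so the kernel picks up the new IPC limits.
```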

  • Mac Pro 2.66 crashing with more than 2GB ram

I currently have a base config of 1GB of RAM (2 512MB chips) and the system runs fine. I then added memory in various configs and it always kernel panics when I exceed 2GB. The system runs fine until I try something memory-intensive like Aperture or Photoshop.
    tried
    4 x 512MB chips (2GB) = perfect
    6 X 512MB chips (3GB) = crash
    2 x 512MB and 2 x 2GB (5GB) = crash
    2 X 2GB (4GB) = crash
I have tried only riser 1; it still crashes, even with only the 4GB running. All memory is genuine Apple and passes the hardware test CD.
    Any help much appreciated
    cheers

    Agreed.
If you cannot get the memory to fail when only one or two pairs are in a given riser, then it will be very difficult to diagnose and you'd have to suspect either a riser or mainboard problem.
You do know that memtest will only test up to 2GB unless you buy the $0.99 version 4.14, right? Lower versions shouldn't KP, though; they should just allocate 2GB and test it.
    If it fails in both the following combinations:
    Riser A - 2x 512
    Riser B - 2x512, 2x 512
    Riser A - 2x 512, 2x 512
    Riser B - 2x512
then I suppose I'd suspect a hardware failure on the motherboard. By the way, both riser boards are identical; you can swap Riser A for Riser B and still have a working system, if that helps in your tests.
I just went through the same thing. Fortunately I had only 2 pairs (2x1GB) to work with, so I labelled the chips A, B, C, D and worked the combinations through risers A and B to determine that the failure followed a given FB-DIMM. After I'd done all this and found the suspect FB-DIMM, I powered up the machine with the cover off for a change and found the red lights... did I feel silly then.

  • P67A-GD65B3 won't post with more than 2 RAM sticks/P

Issue is as stated.  On this board the DIMM slots are listed as 1, 2, 3, 4 (as marked on the motherboard).  The computer will consistently power cycle (not even POST) when RAM is in any dual-channel configuration.  I've tried 4 sticks of memory (2 separate orders of 2x4GB).  All RAM sticks work in non-dual-channel mode.  I've raised the voltage to 1.65.  I've verified the timings are correct at 8-8-8-24.  I've inserted 1 stick at a time, let it boot all the way to Windows, and inserted the next stick.  I tried inserting 1 at a time in numerous different ways.  Anything else I may have missed?  If not, I guess it's RMA time.

    Quote
    2 separate orders of 2x4GB
Check with G.Skill. I believe they only warranty and suggest one kit being installed; differences between batches are noted as the reason, if memory serves me. UEFI/BIOS version 1.D was released by MSI to accommodate the non-standard SPD profile programming; since a newer one, 1.E, is installed, that rules out a problem there. It's also possible the RAM requires X.M.P. to be enabled to make it work correctly. Again, G.Skill is going to be the best source for advice on how to get their product to work.

  • No POST with more than 2 RAM slots?

Hello community! You are my last hope.
I recently purchased the MSI X99S and tried to install 4x4GB Crucial Ballistix DDR4, but it won't work. It does work with only 2 sticks. I verified that the RAM is not broken, though. I also used the required quad-channel DIMM slots. Anyone else having this problem and a potential fix? Currently I can only use 8 GB.

Hello! Thank you for the answer. I have the X99S SLI Plus motherboard. I tried it, and the result is that my PC only boots when the RAM is inserted in DIMM 1 and 3. Is it supposed to work in every single slot? Because it doesn't even work if I put one stick each in DIMM 1-4.

  • RegionRenderer encodeAll The region component with id: pt1:r1 has detected a page fragment with multiple root components. Fragments with more than one root component may not display correctly in a region and may have a negative impact on performance.

    Hi,
    I am using JDEV 11.1.2.1.0
    I am getting the following error :-
    <RegionRenderer> <encodeAll> The region component with id: pt1:r1 has detected a page fragment with multiple root components. Fragments with more than one root component may not display correctly in a region and may have a negative impact on performance. It is recommended that you restructure the page fragment to have a single root component.
    Piece of code is for region is:-
    <f:facet name="second">
        <af:panelStretchLayout id="pa1"
                               binding="#{backingBeanScope.Assign.pa1}">
            <f:facet name="center">
                <af:region value="#{bindings.tfdAssignGraph1.regionModel}" id="r1"
                           binding="#{backingBeanScope.Assign.r1}"/>
            </f:facet>
        </af:panelStretchLayout>
    </f:facet>
    How do I resolve it ?
    Thanks,

    Hi,
    I see at least 3 errors
    1. <RegionRenderer> <encodeAll> The region component with id: pt1:r1 has detected a page fragment with multiple root components.
    the page fragment should only have a single component under the jsp:root tag. If you see more than one, wrap them in e.g. an af:panelGroupLayout or af:group component
2. SAPFunction.jspx/.xml" has an invalid character ".".
Check the document (you can open it in JDeveloper if the customization was a seeded one). It seems that something went bad while editing this file.
3. The expression "#{bindings..regionModel}" (that was specified for the RegionModel "value" attribute of the region component with id "pePanel") evaluated to null.
"pageeditorpanel" seems to be missing in the PageDef file of the page holding the region.
    Frank

  • Perform VENDOR EVALUATION for MORE THAN ONE VENDORS at a time

    Hello all,
Please advise on any process by which I can perform Vendor Evaluation for MORE THAN ONE vendor AT A TIME.
At my location there are around a thousand vendors to be evaluated, and it is difficult to perform the evaluation process one by one.
    (ME61/ME62/ME63)
    Detailed replies with various possibilities would be highly appreciated.
    Thanks & Regards,
    Joy Ghosh

The ability to run vendor evaluation for some thousand vendors at the same time has been in SAP since long before they developed LSMW. The purpose of LSMW is to load data from a legacy system; of course you can (mis-)use it for a lot of other things.
But you should not use LSMW just because you are too lazy to go through the SAP standard menu to find a transaction like ME6G.
There you define a job that runs the RM06LBAT report.
You first have to define a selection variant for this report. This can be done in SE38 by entering the report name, selecting Variant, clicking Display, then entering a name for the variant and clicking Create.

  • Can i use my ipod with more than one computer?

I synced my iPod with one computer, then hooked it up to another and it says I can't sync with more than one library. It says I have to restore it, but I don't want to delete the music on my iPod; I just want to add the music from this other library.

    I have two different iPods and two computers. I regularly use either iPod on either computer.
First, set the preferences so iTunes will not automatically sync when the iPod is connected.
Then connect the iPod and set iTunes to manually manage music.
Drag and drop whatever content you want onto the device showing on the left.
    If this was the default setup, there wouldn't be all these posts about people losing all their music when they connect their ipod.

  • Synching with more than one computer and attempting to delete more than one

Rage is not the word. I have been having the worst time with this thing! UGH! First, I have more than one computer with different music on each, but this thing will only sync with 1 and deletes anything already added from another library. CRAZY! Second, I wanted to rip my CDs to my computer without being connected to the internet, and iTunes won't recognize the music added at a later time; I have to be connected to the internet for it to ID the music as I am ripping... CRAZY! Then I was attempting to delete the duplicated music that was not ID'd, and it would only allow me to delete one song at a time... CRAZY! Is there a way to have this thing sync with more than 1 computer, and delete songs in a group rather than everything in sight? If so, how? If anyone has any remedies for the rest of this craziness I would be most appreciative. It took me forever to even figure out how to get to the point of posting this. I'm beginning to feel very sorry I bought this thing. I thought I could rip my CDs and use my work and home computers to sync with it. Please someone tell me I wasn't wrong to make this assumption. Will I be able to sync with other iPods? I bought this thing thinking I could; was I wrong? I haven't even attempted that yet. I have an 80 GB iPod.

    "I have more than one computer that has different music on each, but this thing will only sync with 1 and delete anything from another library already added"
    You can't "sync" that is automatically update an iPod from multiple libraries or computers. If you want to connect and use an iPod on more than one computer you need to change the update preference in the iPod Summary tab to "Manually manage music and videos" and click Apply.
    Using iPod with Multiple computers
    Managing content manually on iPod
    "I have to be connected to the internet for it to Id the music as I am ripping"
    That's correct, iTunes (and other music programs) downloads the information from an online database that contains the details of many thousands of CDs.
    "Delete songs in a group and not everything in sight"
    Click on the first track you want to delete in a list, hold down the shift key and click on the last one, this highlights everything in between. Press the delete key
"Will I be able to sync with other iPods?"
    What do you want to sync with other iPods, iTunes? You can't sync one iPod with another, you can only update from iTunes. If you are adding a second iPod for your own use, just connect the new iPod to your computer and follow the on screen instructions. It will update from your existing library. Depending on the size of your library and the type of iPod you choose you can have it update all songs and playlists, selected playlists only or you can manage it manually. You can also have a look at this page:How to use multiple iPods with one computer
    You might also be interested in this page: iPod Fast Start: The New User's Guide to iPod

  • Sync with more than one computer

Is it possible to sync with more than one computer using the same BlackBerry Curve 8310? I have a work PC and a laptop, and I am using MS Outlook. I've tried syncing to both, but the appointments all duplicate themselves. Is there a way to set it up so I can sync to both?
    Les

    Hello Keithj,
    Welcome to the BlackBerry Support Community Forums
    You can synchronize with multiple computers but there are a few considerations to take into account when doing so.
    Check out the following KB article for more information:
    http://www.blackberry.com/btsc/KB18693
    Cheers,
    -FB
    Come follow your BlackBerry Technical Team on Twitter! @BlackBerryHelp
    Be sure to click Kudos! for those who have helped you.
    Click "Accept as a Solution" for posts that have solved your issue(s)!

  • Row chaining in table with more than 255 columns

    Hi,
    I have a table with 1000 columns.
I saw the following citation: "Any table with more than 255 columns will have chained
rows (we break really wide tables up)."
If I insert a row populated with only the first 3 columns (the others are null), does row chaining occur?
I tried to insert a row as described above and no row chaining occurred.
As I understand it, row chaining occurs in a table with 1000 columns only when the populated data exceeds
the block size OR when more than 255 columns are populated. Am I right?
    Thanks
    dyahav

    user10952094 wrote:
    Hi,
    I have a table with 1000 columns.
I saw the following citation: "Any table with more than 255 columns will have chained
rows (we break really wide tables up)."
If I insert a row populated with only the first 3 columns (the others are null), does row chaining occur?
I tried to insert a row as described above and no row chaining occurred.
As I understand it, row chaining occurs in a table with 1000 columns only when the populated data exceeds
the block size OR when more than 255 columns are populated. Am I right?
Thanks
dyahav
Yesterday, I stated this on the forum: "Tables with more than 255 columns will always have chained rows." My statement needs clarification. It was based on the following:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema.htm#i4383
    "Oracle Database can only store 255 columns in a row piece. Thus, if you insert a row into a table that has 1000 columns, then the database creates 4 row pieces, typically chained over multiple blocks."
    And this paraphrase from "Practical Oracle 8i":
    V$SYSSTAT will show increasing values for CONTINUED ROW FETCH as table rows are read for tables containing more than 255 columns.
    Related information may also be found here:
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96524/c11schem.htm
    "When a table has more than 255 columns, rows that have data after the 255th column are likely to be chained within the same block. This is called intra-block chaining. A chained row's pieces are chained together using the rowids of the pieces. With intra-block chaining, users receive all the data in the same block. If the row fits in the block, users do not see an effect in I/O performance, because no extra I/O operation is required to retrieve the rest of the row."
    http://download.oracle.com/docs/html/B14340_01/data.htm
    "For a table with several columns, the key question to consider is the (average) row length, not the number of columns. Having more than 255 columns in a table built with a smaller block size typically results in intrablock chaining.
    Oracle stores multiple row pieces in the same block, but the overhead to maintain the column information is minimal as long as all row pieces fit in a single data block. If the rows don't fit in a single data block, you may consider using a larger database block size (or use multiple block sizes in the same database). "
    Why not a test case?
    Create a test table named T4 with 1000 columns.
    With the table created, insert 1,000 rows into the table, populating the first 257 columns each with a random 3 byte string which should result in an average row length of about 771 bytes.
    SPOOL C:\TESTME.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
INSERT INTO T4 (
COL1,
COL2,
COL3,
-- ... COL4 through COL254 elided in this post ...
COL255,
COL256,
COL257)
SELECT
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
-- ... one DBMS_RANDOM.STRING('A',3) per elided column ...
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3),
DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=1000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
SELECT
  *
FROM
  T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
SPOOL OFF
What are the results of the above?
    Before the insert:
    NAME                      VALUE                                                
    table fetch continue        166
    After the insert:
    NAME                      VALUE                                                
    table fetch continue        166                                                
    After the select:
    NAME                 STATISTIC#      VALUE                                     
table fetch continue        252        332
Another test, this time with an average row length of about 12 bytes:
    DELETE FROM T4;
    COMMIT;
    SPOOL C:\TESTME2.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    INSERT INTO T4 (
      COL1,
      COL256,
      COL257,
      COL999)
    SELECT
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=100000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
SELECT
  *
FROM
  T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
SPOOL OFF
With 100,000 rows each containing about 12 bytes, what should the 'table fetch continued row' statistic show?
    Before the insert:
    NAME                      VALUE                                                
    table fetch continue        332 
    After the insert:
    NAME                      VALUE                                                
    table fetch continue        332
    After the select:
    NAME                 STATISTIC#      VALUE                                     
table fetch continue        252      33695
The final test only inserts data into the first 4 columns:
    DELETE FROM T4;
    COMMIT;
    SPOOL C:\TESTME3.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    INSERT INTO T4 (
      COL1,
      COL2,
      COL3,
      COL4)
    SELECT
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=100000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
SELECT
  *
FROM
  T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
SPOOL OFF
What should the 'table fetch continued row' statistic show?
    Before the insert:
    NAME                      VALUE                                                
    table fetch continue      33695
    After the insert:
    NAME                      VALUE                                                
    table fetch continue      33695
    After the select:
    NAME                 STATISTIC#      VALUE                                     
table fetch continue        252      33695
My statement "Tables with more than 255 columns will always have chained rows." needs to be clarified:
"Tables with more than 255 columns will always have chained rows (row pieces) if a column beyond column 255 is used, but the 'table fetch continued row' statistic may only increase in value if the remaining row pieces are found in a different block."
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
    Edited by: Charles Hooper on Aug 5, 2009 9:52 AM
    Paraphrase misspelled the view name "V$SYSSTAT", corrected a couple minor typos, and changed "will" to "may" in the closing paragraph as this appears to be the behavior based on the test case.
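The rule of thumb behind the test cases above (at most 255 columns per row piece, trailing NULL columns not stored) can be sketched as a quick calculation. This is my own illustration, not code from the thread, and it only approximates Oracle's behavior:

```python
import math

ROW_PIECE_COLUMN_LIMIT = 255  # Oracle stores at most 255 columns per row piece

def row_pieces(last_populated_column):
    """Approximate number of row pieces for a row whose highest non-NULL
    column position is `last_populated_column` (trailing NULLs not stored)."""
    if last_populated_column <= 0:
        return 0
    return math.ceil(last_populated_column / ROW_PIECE_COLUMN_LIMIT)

print(row_pieces(3))     # only COL1..COL3 populated: 1 piece, no chaining
print(row_pieces(257))   # data past column 255, as in the first test case: 2 pieces
print(row_pieces(1000))  # matches the "4 row pieces" in the 11.1 documentation quote
```

Note it is the position of the highest populated column that matters, not the count of populated columns, which is why the COL1/COL256/COL257/COL999 insert in the second test case still produced chained row pieces.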

  • Analyse a partitioned table with more than 50 million rows

    Hi,
I have a partitioned table with more than 50 million rows. The last analyse was on 1/25/2007. Do I need to analyse it? (Queries run on this table are very slow.)
If I need to analyse it, what is the best way? Use DBMS_STATS and schedule a job?
    Thanks

    A partitioned table has global statistics as well as partition (and subpartition if the table is subpartitioned) statistics. My guess is that you mean to say that the last time that global statistics were gathered was in 2007. Is that guess accurate? Are the partition-level statistics more recent?
    Do any of your queries actually use global statistics? Or would you expect that every query involving this table would specify one or more values for the partitioning key and thus force partition pruning to take place? If all your queries are doing partition pruning, global statistics are irrelevant, so it doesn't matter how old and out of date they are.
    Are you seeing any performance problems that are potentially attributable to stale statistics on this table? If you're not seeing any performance problems, leaving the statistics well enough alone may be the most prudent course of action. Gathering statistics would only have the potential to change query plans. And since the cost of a query plan regressing is orders of magnitude greater than the benefit of a different query performing faster (at least for most queries in most systems), the balance of risks would argue for leaving the stats alone if there is no problem you're trying to solve.
    If your system does actually use global statistics and there are performance problems that you believe are potentially attributable to stale global statistics and your partition level statistics are accurate, you can gather just global statistics on the table probably with a reasonably small sample size. Make sure, though, that you back up your existing statistics just in case a query plan goes south. Ideally, you'd also have a test environment with identical (or nearly identical) data volumes that you could use to verify that gathering statistics doesn't cause any problems.
    Justin

  • General Scenario- Adding columns into a table with more than 100 million rows

    I was asked/given a scenario, what issues do you encounter when you try to add new columns to a table with more than 200 million rows? How do you overcome those?
    Thanks in advance.
    svk

For such a large table, it is better to add the new column at the end of the table to avoid any performance impact, as RSingh suggested.
Also avoid using any default on the newly added column, or SQL Server will have to fill 200 million fields with this default value. If you need one, add an empty column and update the column in small batches (otherwise you lock up the whole table). Add the default after all the rows have a value for the new column.
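The batched-backfill advice can be sketched with Python's built-in sqlite3 standing in for SQL Server (the table name, column, and batch size are illustrative assumptions, and the final DEFAULT step is engine-specific):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big_table (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO big_table (payload) VALUES (?)",
                 [("row %d" % i,) for i in range(10_000)])

# Step 1: add the column nullable and WITHOUT a default -- a metadata-only
# change, so the engine does not have to touch every existing row.
conn.execute("ALTER TABLE big_table ADD COLUMN status TEXT")

# Step 2: backfill in small batches so no single statement holds locks on
# the whole table for long; each batch covers a range of ids.
BATCH = 1000
max_id = conn.execute("SELECT MAX(id) FROM big_table").fetchone()[0]
for start in range(1, max_id + 1, BATCH):
    conn.execute(
        "UPDATE big_table SET status = 'NEW' "
        "WHERE id >= ? AND id < ? AND status IS NULL",
        (start, start + BATCH),
    )
    conn.commit()  # release locks between batches

# Step 3 (engine-specific, not run here): only now attach the DEFAULT
# constraint so it applies to future inserts, not to the 200 million
# existing rows.
remaining = conn.execute(
    "SELECT COUNT(*) FROM big_table WHERE status IS NULL").fetchone()[0]
print(remaining)  # -> 0
```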
