Big file servers 10TB or above

We have more than 10 TB of data managed on Windows file servers running on VMware, carved into 2 TB RDMs because of the NTFS volume limit plus the RDM size limits. How do you manage your data?
Also, is it possible on Windows Server 2012 R2 to mount an NFS volume, let's say an EMC Isilon folder? If so, does anybody know whether I can put my redirected Documents folders on this volume for offline use?
Thank you

Hi,
If you want to store data that exceeds 2 TB on a single volume, the disk must be initialized using the GUID Partition Table (GPT) partitioning scheme.
For more detailed information, you could refer to the article below:
Windows support for hard disks that are larger than 2 TB
https://support.microsoft.com/en-us/kb/2581408?wa=wsignin1.0
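If the new disk is still uninitialized, a minimal diskpart sketch would look like this (the disk number is just an example; verify it with "list disk" first):
diskpart
list disk
select disk 1
convert gpt
create partition primary
format fs=ntfs label="Data" quick
assign letter=D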
To connect to an NFS share, you need to make sure you have the Client for NFS installed. You could refer to the article below to mount an NFS share.
Mounting an NFS shared resource to a drive letter
https://technet.microsoft.com/en-us/library/cc754350.aspx?f=255&MSPPError=-2147217396
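For example, a rough sketch (the server name and export path are placeholders for your Isilon):
First install the NFS client feature from PowerShell (as administrator):
Install-WindowsFeature -Name NFS-Client
Then, from a command prompt, mount the export to a drive letter:
mount \\isilon01\ifs\data Z: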
NFS cannot handle the ACL requirements for redirected profiles. You MUST use a protocol that supports sufficient ACLs (CIFS/SMB). Please see:
Profile Redirection and GPP drive mapping with NFS Share.
https://social.technet.microsoft.com/Forums/windowsserver/en-US/a339c046-3ea4-41e1-acce-e44eb466f950/profile-redirection-and-gpp-drive-mapping-with-nfs-share
Best Regards,
Mandy
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

Similar Messages

  • Transferring big files

    Hi,
    I would like to transfer really big files (500 MB and above). This is a file-to-file transfer without any conversion (except code page). How can I do that with XI without having the whole payload pass through XI, which would waste a lot of time and space in the database (log tables/message tables)?
    The file is written by a BW process with a temporary filename; after the file is written, it is renamed to the filename XI is looking for. XI should then trigger the file transfer without reading the content.
    Any idea?

    Use the chunk mode of the file adapter if you are on PI 7.11 (for binary data transfers):
    /people/niki.scaglione2/blog/2009/10/31/chunkmode-for-binary-file-transfer-within-pi-71-ehp1
    Check the "Large Files Handling" section in this guide:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/2016a0b1-1780-2b10-97bd-be3ac62214c7?quicklink=index&overridelayout=true
    Regards,
    Ravi

  • Sync two 10.6 file servers

    Hi everyone,
    this might be a very basic question, but I haven’t seen a good guide on how to accomplish this, hence the question.
    I would like to use two absolutely identical file servers (no other services, only AFP), with one of them being a backup that is not in use unless the first one crashes, burns, gets stolen, etc. The servers will each be a Mac mini Server, both with a 12 TB FireWire RAID attached. They need to be in separate locations, but Gigabit Ethernet is available to connect them.
    In order to be clear: I thought of syncing both the system and the data on the RAID.
    Do you have any recommendations on how to best accomplish this? Is there a best practice? Is rsync what I need here?
    Thanks
    Björn

    There are many, many elements to your question and synching the data is the least (and easiest?) part of the equation.
    For one, what's your failover model? Do you want the failover to happen instantly, with no disruption to the users? Or do you mind if the users get disconnected and have to reconnect to their shares?
    Or maybe you want failover to only happen manually? (i.e. only when you know the primary server is going to be down for a while). This is common because the cost of failback (i.e. resynching the 'backup' data to the primary server) is time consuming and could take longer than the primary server would be offline, anyway - if it'll take 2 hours to sync your data back then there's no point in failing over if your server is going to be back in 10 minutes.
    Then there's the volume of data and, more importantly, the rate of change. Even if you have 10TB of data there may be only a few megabytes that change daily and need to be kept in sync. That will have a big impact on your replication strategy.
    While on that subject, how much tolerance do you have for the servers being out of sync? If you need them to be real-time then you don't have the equipment for this - real-time replication of filesystems is a tricky (and expensive) task. If you want to sync daily, or even a few times a day, then that's easier, with the cost being a few hours' lost work should an unexpected failover happen. That may or may not be viable for you.
    Either way I would not recommend Retrospect for this (or even for regular backups). A simple rsync shell script can replicate the data between two servers, it's largely an issue of frequency and volume that you have to consider.
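    For example, a minimal sketch of such a script (hostname and paths are placeholders; -a preserves ownership, permissions and times, -E on Apple's bundled rsync carries extended attributes and resource forks, and --delete makes the target an exact mirror of the source):

    #!/bin/sh
    # One-way sync of the data volume to the standby server.
    rsync -aE --delete /Volumes/DataRAID/ backupserver:/Volumes/DataRAID/

    Run it from launchd or cron at whatever interval matches your tolerance for the servers being out of sync.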

  • Moving big files (600MB) with FTP Adapter: error "The IO operation failed"

    Hi everybody, I have the following problem:
    I need to move big files from one server to another remote server through the FTP protocol. All the configuration is correct and I am able to move small files
    with no problem, but when I move big files the server shows the following error:
    Exception occured when binding was invoked. Exception occured during invocation of JCA binding: "JCA Binding execute of Reference operation 'readEBS' failed due to: The IO operation failed. The IO operation failed. The "OPER[NOOP][NONE]" IO operation for "/tmp/TestLogSOA/DetalleCostos3333333.dvd" failed.". The invoked JCA adapter raised a resource exception. Please examine the above error message carefully to determine a resolution.
    java.sql.SQLException: Unexpected exception while enlisting XAConnection
    java.sql.SQLException: XA error: XAResource.XAER_NOTA start() failed on resource 'SOADataSource_ohsdomain': XAER_NOTA : The XID is not valid
    oracle.jdbc.xa.OracleXAException
    at oracle.jdbc.xa.OracleXAResource.checkError(OracleXAResource.java:1532)
    at oracle.jdbc.xa.client.OracleXAResource.start(OracleXAResource.java:321)
    at weblogic.jdbc.wrapper.VendorXAResource.start(VendorXAResource.java:51)
    at weblogic.jdbc.jta.DataSource.start(DataSource.java:722)
    at weblogic.transaction.internal.XAServerResourceInfo.start(XAServerResourceInfo.java:1228)
    at weblogic.transaction.internal.XAServerResourceInfo.xaStart(XAServerResourceInfo.java:1161)
    at weblogic.transaction.internal.XAServerResourceInfo.enlist(XAServerResourceInfo.java:297)
    at weblogic.transaction.internal.ServerTransactionImpl.enlistResource(ServerTransactionImpl.java:507)
    at weblogic.transaction.internal.ServerTransactionImpl.enlistResource(ServerTransactionImpl.java:434)
    at weblogic.jdbc.jta.DataSource.enlist(DataSource.java:1592)
    at weblogic.jdbc.jta.DataSource.refreshXAConnAndEnlist(DataSource.java:1496)
    at weblogic.jdbc.jta.DataSource.getConnection(DataSource.java:439)
    at weblogic.jdbc.jta.DataSource.connect(DataSource.java:396)
    at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:355)
    at oracle.integration.platform.xml.XMLDocumentManagerImpl.getConnection(XMLDocumentManagerImpl.java:623)
    at oracle.integration.platform.xml.XMLDocumentManagerImpl.insertDocument(XMLDocumentManagerImpl.java:208)
    at sun.reflect.GeneratedMethodAccessor1534.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
    at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    at $Proxy285.insertDocument(Unknown Source)
    at oracle.integration.platform.instance.store.MessageStore.savePayload(MessageStore.java:244)
    at oracle.integration.platform.instance.store.MessageStore.savePayloads(MessageStore.java:99)
    at oracle.integration.platform.instance.InstanceManagerImpl.persistPayloads(InstanceManagerImpl.java:773)
    at oracle.integration.platform.instance.InstanceManagerImpl.persistReferenceInstanceBean(InstanceManagerImpl.java:1106)
    at oracle.integration.platform.blocks.adapter.AbstractAdapterBindingComponent.createAndPersistBindingInstance(AbstractAdapterBindingComponent.java:502)
    at oracle.integration.platform.blocks.adapter.AdapterReference.createAndPersistBindingInstance(AdapterReference.java:356)
    at oracle.integration.platform.blocks.adapter.AdapterReference.request(AdapterReference.java:171)
    at oracle.integration.platform.blocks.mesh.SynchronousMessageHandler.doRequest(SynchronousMessageHandler.java:139)
    at oracle.integration.platform.blocks.mesh.MessageRouter.request(MessageRouter.java:179)
    ...
    Thanks!!!

    Hi idavistro,
    You can try setting the XA Transaction Timeout for SOADataSource.
    1. Log in to the WebLogic Admin Console.
    2. In the left tree, select Services -> Data Sources -> SOADataSource -> Transaction.
    3. Select "Set XA Transaction Timeout".
    4. Set "XA Transaction Timeout" to 0.
    5. Restart the server and check whether the error still appears.
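    If you prefer to script the same change, a rough WLST sketch (connection details are placeholders, and the MBean path may differ in your domain; the two attributes correspond to the console settings above):

    connect('weblogic', 'password', 't3://adminhost:7001')
    edit()
    startEdit()
    cd('/JDBCSystemResources/SOADataSource/JDBCResource/SOADataSource/JDBCXAParams/SOADataSource')
    cmo.setXaSetTransactionTimeout(true)   # "Set XA Transaction Timeout"
    cmo.setXaTransactionTimeout(0)         # 0 defers to the global JTA timeout
    save()
    activate()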
    Regards,
    Neeraj Sehgal

  • Big File issue

    Hi Group,
    When we try to process a big file (around 40 MB), the source file adapter does not pick up the file. We are using AAE. Can anybody suggest whether we need any additional tuning?
    Regards,
    Rajiv

    Hi Rajiv,
    As Michal said above, 40 MB is not a big file, but if you need to tune your server you can set the parameters below on the PI server.
    • UME parameters: we may need to look into the pool size and pool max wait parameters - recommended UME values (like: poolmaxsize=50, poolmaxwait=60000)
    • Tuning parameters: we may need to define the message size limit (like: EO_MSG_SIZE_LIMIT = 0000100) under the tuning category
    • ICM parameters: we may need to consider ICM parameters (e.g. icm/conn_timeout = 900000, icm/HTTP/max_request_size_KB = 2097152)
    Thanks and Regards,
    Naveen.

  • Big file export issue

    Hello,
    We have developed a pair of import/export plug-ins in order to support an in-house format. For several reasons, which are not explained here, we had to develop import/export plug-ins instead of a single format plug-in. Our in-house file format is designed for aerial and satellite images and supports very large files, which can be above 100,000 x 100,000 pixels in size.
    The import plug-in works fine with large images but, unfortunately, we cannot export these images because the export plug-in is grayed out in the drop-down list when a large image is loaded. We have tried several versions of Photoshop up to CS6 but the problem remains. We haven't found any attribute in the export plug-in to indicate that it supports large images.
    Has anyone got an idea?
    Thanks,
    Bruno

    Heh, we also seem to run into the same issue with our Geographic Imager plug-in when exporting georeferenced files that exceed the 30,000-pixel limit in either height or width (yeah, a common case for aerial or satellite data). PS indeed disables the menu and I'm unaware of any workaround for it (I'd also love to know if there is one). What's more important, though, is that Photoshop doesn't actually disable the export plug-in in this case - you can still run it through scripts or actions. And this is why for us specifically this is not really a big deal, because we provide access to all our functionality, including Export, via our own panel.
    Here http://forums.adobe.com/thread/745904 Chris mentioned a PIPL property that was supposed to exist that limits the exported file size, but I think the overall conclusion was that it didn't make it to the release, so unless he or Tom enlightens us here about another magic property, there may be no better solution to it.
    ivar

  • Not enough space on my new SSD drive to import my data from time machine backup, how can I import my latest backup minus some big files?

    I just got a new 256GB SSD drive for my Mac. I want to import my data from a Time Machine backup, but it's larger than 256GB since it used to be on my old optical drive. How can I import my latest backup while keeping some big files out on the external drive?

    Hello Salemr,
    When you restore from a Time Machine backup, you can tell it not to transfer folders like Desktop, Documents, Downloads, Movies, Music, Pictures and Public. Take a look at the article below for the steps to restore from your backup.
    Move your data to a new Mac
    http://support.apple.com/en-us/ht5872
    Regards,
    -Norm G. 

  • Photoshop CC slow performance on big files

    Hello there!
    I've been using PS CS4 since release and upgraded to CS6 Master Collection last year.
    Since my system broke down some weeks ago (the RAM failed), I gave Photoshop CC a try. At the same time I moved into new rooms and couldn't get my hands on the DVD of my CS6, which is resting somewhere at home...
    So I tried CC.
    Right now I'm using it with some big files. File size is between 2 GB and 7.5 GB max (all PSB).
    Photoshop seemed to run fast in the very beginning, but for a few days now it has been so unbelievably slow that I can't work properly.
    I wonder if it is caused by the growing files or some other issue with my machine.
    The files contain a large number of layers and masks, nearly 280 layers in the biggest file (mostly with masks).
    The images are 50 x 70 cm at 300 dpi.
    When I try to make some brush strokes on a layer mask in the biggest file, it takes 5-20 seconds for the brush to draw... I couldn't figure out why.
    And it doesn't depend on the brush size as much as you might expect... even very small brushes (2-10 px) show this issue from time to time.
    Also, switching masks (gradient maps, selective color or levels) on and off takes ages to be displayed, sometimes more than 3 or 4 seconds.
    The same with panning around in the picture, zooming in and out or moving layers.
    It's nearly impossible to work on these files in time.
    I've never seen this on CS6.
    Now I wonder if there's something wrong with PS or the OS. But: I've never been working with files this big before.
    In March I worked on some 5GB files with 150-200 layers in CS6, and it worked like a charm.
    SystemSpecs:
    i7-3930K (3.8 GHz)
    Asus P9X79 Deluxe
    64GB DDR3 1600Mhz Kingston HyperX
    GTX 570
    2x Corsair Force GT3 SSD
    Wacom Intuos 5 M Touch (I have some issues with the touch from time to time)
    WIN 7 Ultimate 64
    all system updates
    newest drivers
    PS CC
    System and PS are running on the first SSD, scratch is on the second. Both are set to be used by PS.
    79% of the RAM is allocated to PS, the cache level is set to 5 or 6, and history states are set to 70. I also tried different cache tile sizes from 128K to 1024K, but it didn't help a lot.
    When I open the largest file, PS takes 20-23 GB of RAM.
    Any suggestions?
    best,
    moslye

    Is it just slow drawing, or is actual computation (image size, rotate, GBlur, etc.) also slow?
    If the slowdown is drawing, then the most likely culprit would be the video card driver. Update your driver from the GPU maker's website.
    If the computation slows down, then something is interfering with Photoshop. We've seen some third party plugins, and some antivirus software cause slowdowns over time.

  • Adobe Photoshop CS3 collapses each time it loads a big file

    I was loading a big batch of photos from iMac iPhoto into Adobe Photoshop CS3 and it kept collapsing, yet each time I reopen Photoshop it loads the photos again and collapses again. Is there a way to stop this cycle?

    I don't think that too many users here actually use iPhoto (even the Mac users)
    However, Google is your friend. A quick search came up with some other non-Adobe forum entries:
    .... but the golden rule of iPhoto is NEVER EVER MESS WITH THE IPHOTO LIBRARY FROM OUTSIDE IPHOTO. In other words, anything you might want to do with the pictures in iPhoto can be done from *within the program,* and that is the only safe way to work with it. Don't go messing around inside the "package" that is the iPhoto Library unless you are REALLY keen to lose data, because that is exactly what will happen.
    .....everything you want to do to a photo in iPhoto can be handled from *within the program.* This INCLUDES using a third-party editor, and saves a lot of time and disk space if you do this way:
    1. In iPhoto's preferences, specify a third-party editor (let's say Photoshop) to be used for editing photos.
    2. Now, when you right-click (or control-click) a photo in iPhoto, you have two options: Edit in Full Screen (ie iPhoto's own editor) or Edit with External Editor. Choose the latter.
    3. Photoshop will open, then the photo you selected will automatically open in PS. Do your editing, and when you save (not save as), PS "hands" the modified photo back to iPhoto, which treats it exactly the same as if you'd done that stuff in iPhoto's own editor and updates the thumbnail to reflect your changes. Best of all, your unmodified original remains untouched so you can always go back to it if necessary.

  • I've doubled my RAM but still can't save big files in photoshop...

    I have just upgraded my RAM from the shipped 4GB to 8GB but I still can't save big images in Photoshop. It seems to have made no difference to the speed of general tasks and saving, and I can't save big files in Photoshop at all. I already moved massive amounts of files off my computer and onto my external hard drive, so there is much less on my computer now, and twice the RAM, but it is making no noticeable difference. When I click Memory under "About This Mac" it shows that the RAM is installed and now has twice the memory. Could this be something to do with Photoshop? I'm running CS6.

    Also, I just converted 220 cm to inches and it is roughly 86.6 inches - over 7 feet in length.
    With an image that large, you may want to consider downsampling to a lower PPI so that images of this size are much easier to work on and process, and easier for your print house to handle as well.
    You might want to consider working with these rather large images at 225 PPI instead of 300 if resolution at close viewing distances is still a concern.
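    To put numbers on that (220 cm at 2.54 cm per inch):

    220 cm / 2.54 = 86.6 in
    86.6 in x 300 ppi = 25,980 px on the long side
    86.6 in x 225 ppi = 19,485 px (about 44% fewer pixels overall)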
    Or, what you could try is working with the images at 300 PPI, but then save/export them as JPEG images at the highest quality setting.
    I do a lot of projects where I use a high-resolution JPEG image to save on image-processing overhead.
    The final printed images still come out pretty clear, clean and crisp without having to process those large files at a print house or having to deal with the full-resolution image in a page layout or illustration program.

  • Error in loading big file

    Hi All,
    I have an application on WebLogic 8.1 SP3; the database is SQL Server 2000. I use the code below to load a local file into the database. The column type in the database is image.
    // is is an InputStream opened on the local file, fileLength is its size
    // in bytes, and conn is the JDBC connection
    PreparedStatement pStatement =
        conn.prepareStatement("insert into file_content(content) values(?)");
    pStatement.setBinaryStream(1, is, fileLength); // stream the file into the image column
    pStatement.executeUpdate();
    pStatement.close();
    pStatement = null;
    This works fine for files smaller than 150 MB, but for big files (>150 MB) it doesn't work and I get the error message below:
    <Feb 11, 2005 12:00:41 PM PST> <Notice> <EJB> <BEA-010014> <Error occurred while attempting to rollback transaction: javax.transaction.SystemException: Heuristic hazard: (weblogic.jdbc.wrapper.JTSXAResourceImpl, HeuristicHazard, (javax.transaction.xa.XAException: [BEA][SQLServer JDBC Driver]Object has been closed.))
    javax.transaction.SystemException: Heuristic hazard: (weblogic.jdbc.wrapper.JTSXAResourceImpl, HeuristicHazard, (javax.transaction.xa.XAException: [BEA][SQLServer JDBC Driver]Object has been closed.))
    at weblogic.transaction.internal.ServerTransactionImpl.internalRollback(ServerTransactionImpl.java:396)
    at weblogic.transaction.internal.ServerTransactionImpl.rollback(ServerTransactionImpl.java:362)
    Can anybody help? Thanks in advance.

    Fred Wang wrote:
    [original question quoted above - snipped]
    I already answered this as to the cause in the MS newsgroups... The DBMS is choking so badly on the size of your image file that it actually kills the connection. Note that the DBMS has to save the whole image to the log as well as to the DBMS table. The fundamental response is to get DBA help to configure the DBMS to be able to do what you want. If you want and can, you could split your image column into multiple image columns, split your data into 100-meg chunks, enter an empty row and then update it column by column, and then concatenate the data again in the client when you read it back, etc., but it's much better to post to the MS server newsgroups for ideas.
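    A rough sketch of that chunked approach in plain JDBC (the table layout and method are made up for illustration; it assumes a file_content table with an int id column and image columns content1, content2, ...):

    import java.io.*;
    import java.sql.*;

    public class ChunkedImageLoader {
        static final int CHUNK = 100 * 1024 * 1024; // 100 MB per image column

        public static void load(Connection conn, int id, File file) throws Exception {
            // Insert the empty row first, then fill one column per chunk; with
            // autocommit on, each update is its own transaction, so the DBMS log
            // only has to absorb one chunk at a time.
            PreparedStatement ins =
                conn.prepareStatement("insert into file_content(id) values(?)");
            ins.setInt(1, id);
            ins.executeUpdate();
            ins.close();

            InputStream in = new BufferedInputStream(new FileInputStream(file));
            byte[] buf = new byte[CHUNK];
            int col = 1;
            int n;
            while ((n = fill(in, buf)) > 0) {
                PreparedStatement upd = conn.prepareStatement(
                    "update file_content set content" + col + " = ? where id = ?");
                upd.setBinaryStream(1, new ByteArrayInputStream(buf, 0, n), n);
                upd.setInt(2, id);
                upd.executeUpdate();
                upd.close();
                col++;
            }
            in.close();
            // The client reassembles the file by reading content1..contentN in order.
        }

        // Read until the buffer is full or the stream ends.
        static int fill(InputStream in, byte[] buf) throws IOException {
            int off = 0;
            while (off < buf.length) {
                int n = in.read(buf, off, buf.length - off);
                if (n < 0) break;
                off += n;
            }
            return off;
        }
    }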
    Joe

  • What are good ways to send a big file (20MB-100MB) to my friend?

    What are good ways to send a big file (20MB-100MB) to my friend?
    Thanks in advance

    If this is over the internet, iChat is probably your best bet,
    but if you just want a transfer,
    plug a FireWire cable into both of your computers, shut down one of them, hold "T" and press its power button, and the restarted computer should pop up as an external drive on the second computer.

  • Can't move big files even with Mac OS Extended (Journaled)!

    Hi all =)
    This time I can't really understand.
    I'm trying to move a big file (an app 9 GB large) from my MacBook's Applications folder to an external USB drive formatted HFS+, but at about 5.5 GB the process stops and I get this error:
    "The Finder can't complete the operation because some data in "application.app" can't be read or written. (error code -36)"
    I searched for this error code on the internet with no results.
    I tried transferring the same file to a different drive (which is also HFS+) but I still get the same error code.
    Both drives have plenty of free space.
    I also tried different USB cables.
    The app in question was just fully downloaded from the App Store with no errors.
    What should I try now? Any suggestion welcome... this situation is so frustrating!
    Thanks

    LowLuster wrote:
    The Applications folder is a System folder and there are restrictions on it. I'm not sure why you are trying to copy an App out of it to an external drive. Try copying it to a folder on the ROOT of the internal drive first, a folder you create using a Admin account, and then copying that to the external.
    Thanks a lot LowLuster, you actually solved my situation!
    But apparently in this "forum" you can't un-mark the chosen answer if you clicked the wrong one by mistake, you can't edit your messages, and you can't even delete your messages... such freedom down here, jeez!

  • Active Directory domain migration with Exchange 2010, System Center 2012 R2 and File Servers

    Greeting dear colleagues!
    I got a task to migrate an existing Active Directory domain to a new forest and a brand new domain.
    I have a single domain at Forest/Domain functional level 2003 with two DCs (2008 R2 and 2012 R2). My domain contains an Exchange 2010 organization, some System Center components (SCCM, SCOM, SCSM) and file servers with mapped "My Documents" user folders. The domain has about 1500 users/computers.
    What do you think - is it really possible to migrate such a domain to a new one with minimum downtime and user interruption? Maybe someone has already done something like that before? Please write about it here; I promise that I won't ask you for step-by-step instructions, maybe only some small questions :)
    For now I'm studying the ADMT manual, of course.
    Thanks in advance, 
    Dmitriy Titov
    Best regards, Dmitriy Titov

    Hi Dmitriy,
    I got a task to migrate an existing Active Directory domain to a new forest and a brand new domain.
    What do you think - is it really possible to migrate such a domain to a new one with minimum downtime and user interruption?
    As far as I know, during an inter-forest migration, user and group objects are cloned rather than moved, which means they can still access resources in the source forest; they can even access resources after the migration is completed. You can ask users to switch domains as soon as the new domain is ready.
    Therefore, there shouldn't be a huge downtime/interruption.
    More information for you:
    ADMT Guide: Migrating and Restructuring Active Directory Domains
    https://technet.microsoft.com/en-us/library/cc974332(v=ws.10).aspx
    Best Regards,
    Amy
    Please remember to mark the replies as answers if they help and un-mark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • How do I eliminate the use and creation of big files in the profile - places.sqlite, urlclassifier3.sqlite, places.sqlite-wal?

    I want to eliminate the use and creation of big files in the profile - places.sqlite (10MB), urlclassifier3.sqlite (5MB), places.sqlite-wal (1MB).
    For urlclassifier, I tried disabling the Safe Browsing options (in the Security menu or in about:config), but the file remains. Even if I delete it, it is re-created at 5 MB.

    Start at http://www.mozilla.com and download the latest version. At the completion of the download, don't let the setup start Firefox for you; when the setup ends, start Firefox in your normal manner - this way you will be less likely to create a new profile.
    The extra startup pages are temporary - read them. The next time Firefox comes up you should be back to starting with your normal home page.
    Note: Firefox must be down once the install starts.
    Close Firefox with File > Exit. Then make sure Firefox is not running:
    On Windows: check the "Processes" tab in the Windows Task Manager.
    On Mac: Firefox > Quit, then Command+Option+Esc; if Firefox is running, use Force Quit.
    Permission errors on Mac: if you were getting permission errors on a Mac, download the latest Firefox version, then uninstall and reinstall Firefox. Do not remove your profile directories and files, as they contain your settings, bookmarks, history, extensions, passwords, and cookies.
    In any case, once you are on 4.0 you might want to take a look at "(fx4)":
    * Fx4: Firefox 4.0 Problems and Orientation (#fx4)
      http://dmcritchie.mvps.org/firefox/firefox-problems#fx4
