BerkeleyDB with SAN

We have a use case where we are using Berkeley DB as read-only. As of now we use a local hard drive for the Berkeley DB files.
Since IOPS are very low with a local hard drive, we are having problems scaling our application. We run the application on multiple nodes, and since we need to replicate the updated Berkeley DB files across nodes, this has become a maintenance problem with real overhead.
We are looking at a SAN solution to replace the local hard drive, which would give better IOPS and relieve us of the overhead of maintaining files across multiple nodes.
We have a few questions on this:
1) I read in the FAQ section that the behaviour is undefined if a SAN is used. That mostly concerns writes to Berkeley DB, and we are using Berkeley DB as a read-only solution. Are there any issues we need to be aware of if we plan to use a SAN?
2) It also says that with SAN drives we may not be able to use the file system cache. We are planning to use the Berkeley DB cache and leverage the Linux file system cache to reduce IOPS. Is there an impact on that if we use a SAN?
3) It mentions mutex locks while accessing Berkeley DB. Can we avoid using locks, since we are using it read-only? Does the Berkeley DB API provide any options to avoid locks on the data?
4) Is there anything else we need to be aware of while using a SAN with Berkeley DB?
We would appreciate any direction on the above questions.
Thanks

Hello,
It sounds like there is some part of the application which is not read only
since there is a need to replicate updated Berkeley DB files across nodes.
Even if the application is read only, if there are other concurrent updaters
accessing the database files, then all of them need to share the same environment.
If all instances of the application(s) are shut down whenever the user database files
are changed, then the read-only nodes, each with its own private environment and
log files, can share access to the SAN database files.
Each node can use whatever BDB and Linux caching it likes. Just be sure to do
something like mounting the SAN database files read-only so there is no chance of
them being updated.
Mutexes are needed whenever a node could have more than one thread or process accessing
the environment.
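For question 3, one way this is often done with the C Base API is for each read-only node to open a cache-only, private environment of its own and open the database file read-only. The sketch below is illustrative, not the only correct configuration: it assumes one process per node, error handling is elided, and the function name is made up.

```c
/* Sketch: per-node, read-only Berkeley DB access (C Base API).
 * Assumes one process per node; error handling elided. */
#include <db.h>
#include <stddef.h>

int open_readonly(const char *home, const char *file, DB_ENV **envp, DB **dbp)
{
    DB_ENV *env;
    DB *db;
    int ret;

    if ((ret = db_env_create(&env, 0)) != 0)
        return ret;
    /* Cache only, private to this process: the locking, logging, and
     * transaction subsystems are not initialized at all. */
    if ((ret = env->open(env, home,
                         DB_CREATE | DB_INIT_MPOOL | DB_PRIVATE, 0)) != 0)
        return ret;

    if ((ret = db_create(&db, env, 0)) != 0)
        return ret;
    /* DB_UNKNOWN: the file already exists; DB_RDONLY: it is never written. */
    if ((ret = db->open(db, NULL, file, NULL, DB_UNKNOWN, DB_RDONLY, 0)) != 0)
        return ret;

    *envp = env;
    *dbp = db;
    return 0;
}
```

If several threads within the process share the handles, DB_THREAD should also be passed on the open calls; Berkeley DB will then still use mutexes internally, but with DB_PRIVATE they remain local to the process.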
You did not mention which product is being used. You may want to consider the
HA product or Concurrent Data Store product if transactions are not used.
http://download.oracle.com/docs/cd/E17076_02/html/programmer_reference/intro_products.html
Thank you,
Sandra

Similar Messages

  • Do we need to create two zones for Two HBA for a host connected with SAN ?

    Hi, while creating zones, do we need to create two zones for the two HBAs of a host connected to the SAN, or is one zone
    enough for a host that has two HBAs? We have two 9124s for our SAN fabric...
    Since I found only the one zone below, I am a little confused: if a host has two HBAs connected to the SAN, should I expect two zones for every host?
    From the zone set, I ran the command show zoneset:
    zone name SQLSVR-X-NNN_CX4 vsan 1
        pwwn 50:06:NN:NN:NN:NN:NN:NN
        pwwn 50:06:NN:NN:NN:NN:NN:NN
        pwwn 10:00:NN:NN:NN:NN:NN:NN
    But I found only one zone for the server's HBA2; at the same time, in the fabric I found switches A & B showing the WWNs of those HBAs on their
    connected N ports. It's not only this server but all hosts. Can you help me clarify this, please: should we create one zone per
    HBA?

    If you have two independent fabrics between hosts and storage, I think the configurations below are recommended.
    Scenario 1:  2 HBAs single port each ( redundancy across HBA / Storage port )
    HBA1 - port 0 ---------> Fabric A ----------> Storage port ( FAx/CLx )
    HBA2 - port 0 ---------> Fabric B ----------> Storage port ( FAy/CLy )
    Scenario 2: 2 HBAs of dual port each
    HBA1 - port 0 -------> Fabric A ---------> Storage port ( FAx/CLx )
    HBA2 - port0 ---------> Fabric A ---------> Storage port ( FAs/CLs )
    HBA1 - port 1 --------> Fabric A --------> Storage port ( FAy/CLy )
    HBA2 - port 1 ---------> Fabric B --------> Storage port ( FAt/CLt )
    The zone in your output is in VSAN 1. If that is a production VSAN, note that Cisco does not recommend using VSAN 1 (the default VSAN) for production.
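    For single-initiator zoning (one zone per host HBA path), the MDS configuration might look roughly like this; the zone name, VSAN number, and WWNs are placeholders:

    ```
    ! Sketch: one zone per host HBA, each paired with the storage port it uses
    zone name HOST1_HBA1_ARRAY vsan 2
      member pwwn 10:00:NN:NN:NN:NN:NN:N1   ! host HBA1 port
      member pwwn 50:06:NN:NN:NN:NN:NN:N1   ! storage array port
    ```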

  • Sun Cluster 3.1 with SAN 6320 - Any Known Issues?

    Hello,
    We are moving to new Sun hardware with following configurations.
    Solaris 8,Sun Cluster 3.1, Oracle 8.0.6.3 on two V1280 connected to Sun Stordege SAN 6320. SAN is also connected to 5 other machines including one windows 2000.
    Following were the limitations which we came across during the testing phase.
    1. The maximum number of LUNs you can have on a 6320 coexisting with Sun Cluster is 16. (You cannot have more than 16 LUNs configured on a 6320!)
    2. Maximum number of CLUSTER nodes that you can have with 6320 is FOUR.
    Refer:
    http://docs-pdf.sun.com/816-3381/816-3381.pdf
    Bug ID: 4840853
    Is anybody else there, already moved/moving to any such configuration and wants to give some tips and suggestions. Please let me know.
    Thanks
    Sair

    An update on the same..
    we are having issues with SAN 6320.
    The SAN hangs when we use 7 nodes with Sun Cluster 3.1 simultaneously accessing the volumes, even though no volume is accessed from more than a single node.
    will update later...

  • SSL Certificates with SAN going away next year

    If my SCCM internet-based client management requires SSL with SAN, what do we do after October 2015, when issuers will no longer issue certs with IP or intranet alternative names? See the GoDaddy article:
    http://support.godaddy.com/help/article/6935/phasing-out-intranet-names-and-ip-addresses-in-ssls?locale=en
    thanks,
    azin
    azwright

    The issuer has issued the cert for now, but says it will no longer support it once it expires.
    Additionally, I thought the whole point of purchasing a 3rd-party cert was that the clients would already trust it and not need to run certutil to import it. Right now I'm getting a GetDP error saying the IP address was not found, but I can actually
    reach my server on the Internet. It then gives me a cert error saying it is not trusted. I will try certutil.exe and see if that resolves the issue, but the expiration, and the cert not being supported after that, is something I will need to dig into further.
    thanks,aw
    azwright

  • MOVED: MSI Neo 2 platinum with San Diego or Venice

    This topic has been moved to Overclockers & Modding Corner.
    MSI Neo 2 platinum with San Diego or Venice


  • How to use scanner ADF with SANE in linux using Morena

    Hi! I'm using the Morena libraries to access my scanner on Linux through the SANE backend. I'm able to scan one page, but I just can't scan all the pages in the feeder; I've tried loops and status checks, but nothing was successful.
    The Linux utility "scanadf" works great and scans all the pages, so I don't know what I'm missing.
    I hope someone out there has used the Morena project to access SANE scanners.
    Best regards!

    I hope that something below will help.
    According to the SANE specification, a SANE backend (i.e. scanner) should report the "out of paper" situation by returning the SANE_STATUS_NO_DOCS status code. Some backends do this by default; some require setting a special SANE option so that they accept that the "application wants to use the ADF". E.g., when you scan multiple pages through the ADF with default options on our HP ScanJet 6300C, the result is endless repeated scans. In that situation you need to examine the list of the backend's options and find the appropriate ADF option name. In the case of our HP ScanJet, you need to call setOption("source", "ADF");
    It is possible that some backends need to be told to pull the next paper into the ADF (something like a "change document" option).
    How To detect out of paper using Morena
    1) Morena produces the acquired data via the Java ImageProducer interface. When an image is acquired correctly, it sends imageComplete(ImageConsumer.STATICIMAGEDONE); when an error occurs, it sends imageComplete(ImageConsumer.IMAGEERROR). The developer can track these values, and the MorenaImage class has its own getStatus() method, which reflects them. Here is an example:
    MorenaImage image;
    SaneSource source=SaneConnection.selectSource(null);
    source.setResolution(100);
    source.setOption("mode", "Color");
    source.setOption("source", "ADF");
    do {
        image = new MorenaImage(source);
        System.out.println("Size of acquired image is " + image.getWidth()
            + " x " + image.getHeight() + " x " + image.getPixelSize()
            + ", image.getStatus=" + image.getStatus());
    } while (image.getStatus() == ImageConsumer.STATICIMAGEDONE);
    2) When developer decides not to buffer the acquired image, it is possible to omit MorenaImage class:
    Image image;
    SaneConnection saneConnection = null;
    try {
        saneConnection = SaneConnection.connect("localhost");
        SaneSource source = saneConnection.selectSource(null, null);
        source.setOption("mode", "Color");
        source.setResolution(100);
        source.setOption("source", "ADF");
        do {
            image = Toolkit.getDefaultToolkit().createImage(source);
            MediaTracker tracker = new MediaTracker(this);
            tracker.addImage(image, 0);
            try {
                tracker.waitForAll();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            tracker.removeImage(image);
            System.out.println("Size of acquired image is " + image.getWidth(this)
                + " x " + image.getHeight(this)
                + ", saneConnection.getResultCode()=" + saneConnection.getResultCode());
        } while (saneConnection.getResultCode() != SaneConstants.SANE_STATUS_NO_DOCS);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (saneConnection != null) {
            try {
                saneConnection.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    3) A quite simple solution is to detect the size of the acquired images: if the size is 0 (or -1 with some scanners), stop scanning.
    Martin Motovsky

  • ASM backup strategies with SAN snapshots

    Hello folks,
    in one of our setups we have a database server (11.2) that holds only the binaries and config (not the datafiles); all the ASM disks are on the SAN. The SAN takes daily snapshots, so the datafiles are backed up. My question is: how do I proceed if the database server fails and needs to be reinstalled? How do I tell the Grid Infrastructure that we already have a diskgroup with disks on the SAN? I found md_backup and md_restore, but I am not sure how or when to use md_restore.
    I have searched for some time, but I am quite confused by the approaches. Please note that RMAN is not an option in this environment.
    Thanks in advance
    Alex

    Hi,
    O.k., that changes things. You should have mentioned at the beginning of your post that you shut down the database before doing the snapshot and that you are not using archivelog mode.
    I would advise also dismounting the diskgroups before doing the snapshots.
    However, when did you last try archivelog mode? 11g improved its performance a lot.
    Note: in this case the snapshot is only a point-in-time view. In case of an error, you will lose data (the hours since your last snapshot).
    Now regarding your questions:
    => You will install a new system with GI and database software. You will need a new diskgroup (e.g. INFRA) for the new OCR and Voting disks (they should not be contained in the other diskgroups like DATA and FRA, so no need to snapshot the old INFRA).
    => Then simply present the LUNs from the DATA and FRA diskgroups to the new server. ASM will be able to mount the diskgroups as soon as all disks are available (and have the right permissions). No need for md_backup or md_restore: MD_BACKUP backs up the contents of the ASM disk headers, but since you still have the disks, this metadata is still intact.
    => Reregister the database and the services with the Grid Infrastructure (srvctl add database / srvctl add service etc.). Just make sure to point to the correct spfile (you can find that e.g. with ASMCMD).
    => Start database...
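    Assuming 11.2 on the new host, the re-registration steps above might look like the following; the database name, Oracle home, service name, and spfile path are placeholders:

    ```
    # Re-register the database and a service with Grid Infrastructure, then start it
    srvctl add database -d ORCL -o /u01/app/oracle/product/11.2.0/dbhome_1 \
        -p +DATA/ORCL/spfileORCL.ora
    srvctl add service -d ORCL -s orcl_svc
    srvctl start database -d ORCL
    ```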
    That is, if your snapshot worked. I just saw another forum thread where a diskgroup got corrupted due to a snapshot (luckily only the FRA).
    And just as a reminder: a snapshot is not a backup. Depending on how the storage does the snapshot, you should take precautions to move it to separate disks and verify that it is usable.
    Regards
    Sebastian

  • Migration 10g on Solaris with SAN

    My predecessors built our database servers with the ORACLE_HOME on the SAN.
    When it came time to upgrade the development server, the Oracle files that were not under ORACLE_HOME were copied to the newly built Solaris server, the SAN connections swapped over to the new server, and the IP address and Server name were swapped.
    The DBA ran the root.sh script and the server is now running as an Oracle server.
    There were a number of problems that had to be fixed.
    This DBA has since been terminated, and there are no notes available regarding the problems that cropped up. The "Powers that Be" want to duplicate the process on the Production server. Has anyone else tried this methodology, and succeeded?

    The old server was running Solaris 10 64-bit.
    The Oracle 10.2.0.2 ORACLE_HOME and the instances were installed on the SAN.
    The new server is running Solaris 10, and will be attached to the SAN.
    It will have the same name as the old server, and the same IP address.
    The plan is to replace the aging/dying server with a new server, swap the SAN connection, swap the server name, swap the IP address, and then rerun root.sh to make the new server an Oracle server
    This was done on the Development side, but that dba is no more.
    There were significant problems, but the SA that is pushing for this method has no clue what the resolution was.

  • How to manually create a standby db with SAN for hardware cluster failover

    Hi all,
    The primary db (Oracle 9iR2, Sun Solaris) keeps its datafiles, redo logs, and control files on the SAN, while the pfile, listener, and tnsnames files are on its local hard disk. I need to create a standby db (not for Data Guard) for hardware cluster failover: if the primary db fails, the hardware cluster will fail over to the standby db (automatically mounting the datafiles, redo logs, and control files on the
    SAN to a mount point and starting the db services). But I don't know how to create the standby db. I would install the db software first; then should I copy the pfile (created from the primary db), listener, and tnsnames files to the standby db? What are the correct steps? Any advice is greatly appreciated.

    Thanks ackermsb for the response. I think I have confused setting up a standby db with creating an HA configuration. What I am trying to achieve is HA with vendor clusterware and Oracle, e.g. Sun Cluster HA for Oracle.
    The steps involved in preparing the primary and standby Oracle are:
    Oracle application files: these include the Oracle binaries, configuration files, and parameter files. They need to be installed separately and locally on the two servers.
    Database-related files: these include the control files, redo logs, and data files, which are placed on a Cluster File System.
    What I don't know how to do is this: after installing the Oracle binaries on the standby server, should I create the listener, tnsnames, and pfile, or copy them from the primary db?
    Thanks in advance.

  • SD driver with SAN disk.

    On Solaris 8 2/02 with the latest kernel patch, my servers (V880 and SF15K) use a JNI card (FCE-6460) and the JNI driver (5.x). The SAN disk is a Hitachi 9960. I try to add a new disk on the fly and then run the devfsadm command to reconfigure devices, but the system cannot generate the new device. JNI's EZ Fibre program does see the new disk, and if I reboot, the system sees it too.
    I would like to ask about the sd driver: can the sd driver pick up a new SAN disk on the fly? If the sd driver doesn't support that, is there a plan to add this capability?
    Best Regards,
    Sombut J.
    Email: [email protected]

    Sorry, I am unable to help you since I'm a new guy in this neighborhood.
    I notice there are many unanswered questions in this forum. This needs to be improved. Come on Sun, jump in here and help us. Don't you care about your products and customers? I belong to a few other forums and this is probably the worst one.
    I believe Sun needs to be involved in this and make this forum more active. It's a good investment to help developers stand on their feet and get going when they're stuck with their problems. I can see people getting frustrated not knowing where to seek help. After all, this forum is set up just for that purpose. But if Sun developers don't jump in here and help us, then this forum should not exist.
    Thank you for listening...
    Quang

  • Premiere CC crashing constantly - possibly something to do with SAN?

    Hello all, I'm working in a 5-10 seat media department that runs Adobe CC (I believe it is current) on Macs running Mavericks, with all the project files living on a fibre-connected SAN.
    We take full advantage of dynamic linking to create 4-15 minute short docs with motion graphics.
    I am a big Adobe fan, but my coworkers are frustrated with Premiere constantly crashing while editing and rendering, and they are talking about looking at FCP and Avid.
    I have found Adobe to be stable when I work off a hard drive plugged directly into the Mac, so I am wondering if this is network related?
    Anyone out there working in a similar scenario that has encountered this crashing issue? How did you address it?
    Thanks for your help!
    -Michael

    Earlier this year, I suffered through the same issue, which became progressively worse over time. In retrospect, I realized it was connected with the minor upgrades of OS X, which in turn installed newer versions of OpenCL. Each one got worse, crashing Premiere more and more often. Ultimately, Premiere became useless; I couldn't do a thing.
    I read about a potential conflict with OpenCL, installed CUDA (which had been installed originally but was apparently lost in a system rebuild), and it worked nearly flawlessly for the past nine months. Yesterday I updated Premiere CS6 and, unknown to me, it set the default render engine back to OpenCL. My system was again totally paralyzed. When I realized what was happening, I deleted all render files, all preferences, anything that might have been corrupt. Back to CUDA, and my system instantly stabilized.
    What a waste of time and frustration. Adobe really needs to get this figured out, or block OpenCL on non-supported systems, or at least offer some semblance of guidance in the crash messages, such as, "You may want to try changing your render engine ..."
    Moral of the story? Don't upgrade. If it works, leave it alone.

  • RAC 10G With SAN storage

    Dear all
    I am trying to install an Oracle 10gR2 database using cluster hardware with 2 nodes and RAID SAN storage. I was thinking I could go with Oracle RAC; can anyone advise me whether that is OK and provide a step-by-step installation manual? Note that I am using Windows 2003 64-bit as the OS. Alternatively, can anyone help me install the best solution for my case?

    Check the following links; they will help:
    Best Practices for Oracle Database 10g RAC on Microsoft 64bit Windows
    RAC Installation Lab

  • MSI Neo 2 platinum with San Diego or Venice

    Hey guys,
    How do you guys get high clocks (2.8 GHz+) using any of the BIOSes for the MSI Neo2 Platinum except 1.36b, with a Venice or San Diego? For all the other BIOSes, the max vcore is 1.55V + 10%, but even at this limit the vcore for me is only ~1.59V as detected by the BIOS, MBM, CPU-Z, and Core Center. So I don't know how to give my CPU more juice without using 1.36. I don't want to use 1.36 because it doesn't support my memory controller and my computer is unstable with it. I want to use a newer BIOS that supports San Diegos, but I also want a high vcore to keep my computer stable at 2.9 GHz. I have run 2.9 GHz before on the 1.36 BIOS, but I need 1.4V + 15% vcore (CPU-Z and MBM report 1.62V); this was stable under Prime95 for 24 hours. Any suggestions? Thanks

    Here are my results with my Neo2/3000+ Venice:
    I use different apps to OC in Windows after booting with safe settings that I know work. That way, if I use an app and push too far, the PC will reset with safe BIOS settings. The key to getting more vcore is not to set the first setting any higher than 1.45v. Setting anything after that will only result in it reading as 1.45v; as in your example, where you set 1.55v+10% and it only reads 1.59v, which is the same as 1.45v+10%.
    What you'll need to do is install Core Center. Once it is installed, go to Start/All Programs/Start Up and delete it from there. That will keep it from starting every time you restart the PC, so it only runs when you click its icon. Next, boot at the highest vcore you need, or 1.45v+10% if you need all 1.59v. Once Windows has loaded, open Core Center and use the voltage options there to bump it up slowly. Both my brother and I have Neo2s; I can get 1.725v, while he can get 1.78v to work, all on BIOS v1.A. Hope this helps, good luck.
    Also, if you don't have them, programs like Core Center, the A64 Tweaker, and Clock Gen are very handy and work really well when OCing. Be careful and have fun!

  • Difficulty with SAN boot of solaris 10 (on V245) from HP EVA 4000 -

    Hi
    This is my first post. I'm trying to get my head around what is preventing me from booting a V245 running Solaris 10 from a LUN presented by an HP EVA 4000.
    The machine boots up fine from a local disk. I've done the LUN masking (presentation) on the EVA and the Brocade zoning, and Solaris can see the disk.
    I've formatted it to closely match the current internal root disk, and used ufsdump/ufsrestore and installboot to copy the contents of / onto the new LUN, which I had mounted as /mnt.
    My problem is that I just can't seem to get at the new disk at the ok prompt; I'm never quite sure that I'm booting from the correct disk.
    I believe the compatibility between the OS, the qlc card driver, and the EVA is in order.
    The qlc HBA is as follows:
    Sun SG-XPCI2FC-QF2/x6768A 1077 2312 1077 10A and have suitable patch 119139-33
    Here's some output:
    root@bwdnbtsl02# uname -a
    SunOS bwdnbtsl02 5.10 Generic_137111-01 sun4u sparc SUNW,Sun-Fire-V245
    root@bwdnbtsl02#
    root@bwdnbtsl02# luxadm probe
    No Network Array enclosures found in /dev/es
    Found Fibre Channel device(s):
    Node WWN:50001fe15009df00 Device Type:Disk device
    Logical Path:/dev/rdsk/c4t600508B4001066A60000C00002B50000d0s2
    root@bwdnbtsl02#
    Now down at ok prompt, probe-scsi-all can see the EVA (HSV) LUNS
    (Here's a sample of the output)
    ok probe-scsi-all
    /pci@1f,700000/pci@0/SUNW,qlc@2,1
    ************************* Fabric Attached Devices ************************
    Adapter portId - 640100
    Device PortId 640000 DeviceId 1 WWPN 210000e08b8a3e4f
    Device PortId dc0000 DeviceId 2 WWPN 50001fe15009df09
    Lun 0 HP HSV200 6220
    Lun 1 DISK HP HSV200 6220
    Device PortId d20000 DeviceId 3 WWPN 50001fe150092f19
    Lun 0 HP HSV200 6220
    /pci@1f,700000/pci@0/SUNW,qlc@2
    ************************* Fabric Attached Devices ************************
    Adapter portId - 630100
    Device PortId 630000 DeviceId 1 WWPN 210100e08baa3e4f
    Device PortId 780000 DeviceId 2 WWPN 50001fe15009df08
    Lun 0 HP HSV200 6220
    Lun 1 DISK HP HSV200 6220
    Device PortId 6e0000 DeviceId 3 WWPN 50001fe150092f18
    Lun 0 HP HSV200 6220
    So, i attempt to boot with the following:
    ok boot /pci@1f,700000/pci@0/SUNW,qlc@2/fp@0,0/disk@w50001fe15009df08,0:a
    Boot device: /pci@1f,700000/pci@0/SUNW,qlc@2/fp@0,0/disk@w50001fe15009df08,0:a File and args:
    Can't read disk label.
    Can't open disk label package
    Can't open boot device
    {1} ok
    If anyone can put me straight here i'd be most grateful.
    thanks john

    Hi guys,
    I am interested in your question, though I have never tried anything like what you describe.
    Maybe the cause is the driver: perhaps it can only load while the OS is running, or it is related to the server model.
    I think you could try the following: when you install the OS, choose the EVA disk as the install target.
    Remember, if the test completes, please tell me.
    The following is my contact: [email protected]
    best regards

  • SCVMM Bare Metal Deployment with SAN Boot

    Can VMM assign "boot targets" to hosts while deploying them? Or do I have to do it manually before initiating VMM deployment? Specifically talking about PowerEdge M420. But also interested if it is possible in general?

    Hi Sir,
    http://support.microsoft.com/kb/305547
    This article shows that the key point is to make the LUN work properly like a local disk; that also means we need to add the HBA driver to the WinPE image before the deployment.
    I have also read these blogs about adding drivers for SCVMM Bare Metal Deployment:
    http://www.thecloudbuilderblog.com/blog/2012/1/26/scvmm-2012-bare-metal-provisioning-adding-drivers-to-winpe.html
    http://www.virtualizationadmin.com/articles-tutorials/microsoft-hyper-v-articles/installation-and-deployment/performing-bare-metal-installation-hyper-v-using-system-center-virtual-machine-manager-2012.html
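    To illustrate the WinPE point from the KB article: injecting an HBA driver into the boot image can be sketched with DISM (all paths below are placeholders):

    ```
    rem Mount the WinPE boot image, add the HBA driver, commit the change
    dism /Mount-Wim /WimFile:C:\images\boot.wim /Index:1 /MountDir:C:\mount
    dism /Image:C:\mount /Add-Driver /Driver:C:\drivers\hba /Recurse
    dism /Unmount-Wim /MountDir:C:\mount /Commit
    ```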
    Best Regards,
    Elton Ji
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected] .
