Space reclaimer on Thick Provisioned LUN

Hi, could someone clarify something for me? I understand the need to reclaim space on thin provisioned LUNs using SnapDrive's space reclaimer. Is this a requirement for thick provisioned LUNs, or will running it only fix the space reporting in System Manager?

As an example: a 10TB thick provisioned LUN is presented to a physical Windows server and mapped to a drive. 6TB of data is written to the LUN, and Windows and System Manager both agree that there is 4TB free. All of the data is then deleted. Windows sees 10TB free, while System Manager still sees 4TB free, as it has no way of knowing the blocks have been marked as deleted. Expected behaviour. Now, if we write another 6TB to the volume, will there be space? The LUN will eventually show 100% full, but after this will Windows quite happily carry on writing to the drive?

I could test this, but I wondered if anyone had any real world experience. In my organisation we have a process that regularly runs SnapDrive space reclaimer on thick provisioned LUNs, and I am wondering whether we actually need to (other than getting System Manager to report correctly)? Many thanks, Jason
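For context, the "space reclaimer" process in question is the one started from SnapDrive for Windows; from the command line it looks roughly like this (the drive letter is only an example, and the exact sdcli syntax should be checked against your SnapDrive release):

    sdcli spacereclaimer start -d J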

It's still complicated. I do not really know what combination of settings System Manager is using for "thick provisioned"; if we are talking about space guarantee == volume and fractional reserve == 100%, then you should always have enough space to (over-)write the full LUN size. But note that under some conditions NetApp may not honor the volume space guarantee ...
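If in doubt about what is actually configured, the effective settings can be checked on the controller itself; on 7-mode something like this (volume and LUN paths are placeholders) shows the guarantee, fractional reserve, and LUN space reservation:

    vol options vol_lun01
    lun show -v /vol/vol_lun01/lun01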

Similar Messages

  • Find all Connector Space Objects That Were Provisioned

    I'm trying to run a query on the FIM Synchronization Database to find all of the objects in a connector space that were created there via provisioning rules. Some objects in the connector space have joined and some have been created via provisioning, but I can't find the field in the FIM Sync DB where this is specified. Does anyone know how I can pull this information?
    Cheers,
    Dan

    Thanks Sameera. It's part of a larger query, so it would be great if I could find out where it is in the database. I've looked all through it and joined everything I can. I thought it might be stored in the connector space hologram data, which is encrypted in the DB, so I checked that with PowerShell and WMI but couldn't get anything out of that either. I'll probably just have to run the query I have and then link it up with the connection information in Excel like you've described. Thanks again.

  • RMAN Space Reclaim

    Dear Gurus,
    I'm using RMAN with a third-party application for backup to disk. Our policy is to keep daily backups for 1 month and monthly backups for 1 year. I'm not setting an RMAN retention policy; I used KEEP UNTIL TIME in my RMAN scripts to set the daily and monthly retention periods. However, I cannot delete obsolete/expired backups because I'm not setting the retention policy.
    This is sample of my script:
    RMAN> backup
    keep until time "sysdate+30" logs (database);
    RMAN> delete obsolete;
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of delete command at 08/29/2009 16:03:35
    RMAN-06525: RMAN retention policy is set to none
    Please advise how I can reclaim the space used by expired backups.

    Hi,
    From Metalink:
    Error:  RMAN-06525 (RMAN-6525)
    Text:   RMAN retention policy is set to none
    Cause:  Command DELETE OBSOLETE and REPORT OBSOLETE requires that either:
            * RMAN retention policy is not NONE, or
            * RMAN retention policy is specified with the REPORT/DELETE command.
    Action: Either configure RMAN retention policy with the CONFIGURE command or
            specify it at the end of the DELETE/REPORT command.
    Refer to [http://download.oracle.com/docs/cd/B19306_01/backup.102/b14194/rcmsynta040.htm#i83923]
    Anand
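    For example, either of the following satisfies that requirement (the 30-day window is only an illustration; match it to your own keep policy):
    RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
    RMAN> DELETE OBSOLETE;
    or, without changing the configured policy:
    RMAN> DELETE OBSOLETE RECOVERY WINDOW OF 30 DAYS;
    Backups taken with KEEP attributes are judged against their own KEEP UNTIL TIME rather than the configured policy, so they become obsolete once that time passes.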

  • Space Reclaim

    Dear Experts,
    I need to reclaim space from huge indexes (Segment Advisor is showing a lot of fragmentation). What is the best way to return that fragmented space to the tablespace? We also have a space constraint, i.e. the indexes are so large that we don't have as much free space as the size of the index. :)
    Thanks & Regards
    Sunil Kumar

    sunil kumar wrote:
    I don't even have enough free space to rebuild the index, as a rebuild requires at least the size of the index in free space. As I mentioned earlier, we have a huge index to rebuild, around 200GB, so what would be the better option?
    Not only do you need free space to put the rebuilt index, you will need a large amount of TEMPORARY tablespace to write the intermediate sort runs.
    Can you give us some idea of the scale of the problem - you've said that there is a "lot" of wasted space, but that doesn't mean much. Can you give some examples of: (current size, predicted size). Basically I wouldn't be in a great hurry to rebuild a large index unless the predicted size was less than 50% of the current size. I'd take note of it, of course, and ask myself why the index had got to that size - the answer makes a big difference to what you do about it.
    If you're sure you need to rebuild lots of indexes to reclaim space, and you don't have the space to do it - what about starting with the smallest first, as this may gradually free up space that can be used for the larger rebuilds.
    Regards
    Jonathan Lewis
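    As a hedged illustration of "measure first, then decide" (the index name is a placeholder): INDEX_STATS gives the current-versus-used picture, and COALESCE makes sparse leaf blocks reusable without needing a second full-size copy of the index (though, unlike a rebuild, it does not hand space back to the tablespace):
    ANALYZE INDEX big_idx VALIDATE STRUCTURE;   -- populates INDEX_STATS; note it locks the table while it runs
    SELECT lf_rows, del_lf_rows, used_space, btree_space FROM index_stats;
    ALTER INDEX big_idx COALESCE;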

  • Table Partitions and Rolling Window: free space not reclaimed

    I have a couple of processes that involve the rolling window scenario where we drop an old partition and add a new partition monthly. We are seeing, however, that free space is not being reclaimed, and so I keep having to feed new datafiles into the tablespace (which is LMT). I was (mistakenly, it seems) under the impression that dropping a partition frees up that space for re-use.
    We're currently on Oracle 10gR2 (10.2.0.2), but these tablespaces were originally created in 9i. I've read that one solution is to implement "table shrinking"; however, I also see that it is only available for tables in an ASSM tablespace. I see that ASSM is the default for tablespaces in 10gR2, but since my tablespaces were created in 9i, I assume they are not ASSM.
    I'd like to see what other people think of this scenario. Surely a lot of people do the rolling window scenario and need space reclaimed.
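    (As an aside, a quick way to check whether the dropped partitions' space is actually showing up as free within the tablespace is a query along these lines; the tablespace name is a placeholder.)
    SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS free_mb
    FROM dba_free_space
    WHERE tablespace_name = 'PART_DATA'
    GROUP BY tablespace_name;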

    No, it is of concern because Sanadra was messing with the partitions. She could have inadvertently caused something to happen, in this case the free-space issue, and it's imperative to get all of the details.
    My iMac 27" Late-2012 does not have a Fusion Drive, yet shows that the type is "internal"; the original poster doesn't indicate that.

  • Dedupe on NetApp and disk reclaim on VMware. Odd results

    Hi, I am currently in the process of reclaiming disk space back from our NetApp FAS8020 array running 7-mode 8.2.1. All of our flexvols are VMware datastores using VMFS, and all are thin provisioned volumes. NONE of our datastores are presented using NFS. On the VMware layer we have a mixture of VMs using thin and thick provisioned disks; any new VMs are normally created using thin provisioned disks. Our VMware environment is ESXi 5.0.0 U3 and we also use VSC 4.2.2.
    This has been quite a journey for us, and after a number of hurdles we are now able to see volume space being reclaimed on the NetApp, with the free space returning to the aggregate. To get this all working we had to perform a few steps provided by NetApp and VMware. If we used NFS we could have used the disk reclaim feature in VSC, but because that only works with NFS volumes this wasn't an option for us.
    NETAPP - Enable space allocation on the LUN (lun set space_alloc ... to enabled) - https://kb.netapp.com/support/index?page=content&id=3013572. This is disabled by default on any version of ONTAP.
    VMWARE - Enable BlockDelete (set it to 1) on each ESXi host in the cluster - http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007427. This is disabled by default on the version of ESXi we are running.
    VMWARE - Rescan the VMFS datastores in VMware and update the VSC settings for each host (set recommended host settings). Once performed, check that the delete status is showing as 'supported': esxcli storage core device vaai status get -d naa - http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2014849
    VMWARE - Log in to the ESXi host, go to /vmfs/volumes and the datastore where you want to run disk reclaim, and run vmkfstools -y percentage_of_deleted_blocks_to_reclaim
    NETAPP - Run sis start -s -d -o /vol/lun - this will rerun deduplication, delete the existing checkpoints and start afresh.
    Whilst I believe we are seeing savings on the volumes, we are not seeing the savings at the LUN layer in NetApp. The volume usage comes down, and with dedupe on I would expect the volume usage to be lower than the datastore usage, but the LUN usage doesn't go down. Does anyone know why this might be the case? Both our flexvols and LUNs are created thin provisioned, and space reservation is unchecked on the LUN.
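    To make the steps above concrete, the commands involved look roughly like this (volume, LUN, datastore and device names are placeholders; double-check the exact syntax for your ONTAP and ESXi releases):
    filer> lun set space_alloc /vol/vmfs_vol01/lun01 enable
    ~ # esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete
    ~ # esxcli storage core device vaai status get -d naa.60a98000xxxxxxxx
    ~ # cd /vmfs/volumes/datastore01 && vmkfstools -y 60
    filer> sis start -s -d /vol/vmfs_vol01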

    Hi,
    Simple answer is yes. It's just the matter of visibility of the disks on the virtual servers. You need to configure the disks appropriately so that some of them are accessible from both nodes e.g. OCR or Voting disks and some are local, but many of the answers depend on the setup that you are going to choose.
    Regards,
    Jarek

  • FAS2040 volume and LUN configuration

    So, we've inherited some badly configured NetApp FAS2040s and I'm in the process of upgrading them to Data ONTAP 8.0.5 from 7.3.4 (7-mode). We're contractors, so these things change hands quite a bit, and just getting access to NetApp support was something of a task. Anyway, I've been working on this issue since no one else feels comfortable with it, and I've run into some snags.
    I want to maximize space, since we have 4 of these 2040s (12 x 600GB 15k SAS drives), a 2020, and a FAS2040 with the DS4xx-something shelf unit. Working with one aggregate is fine, as we don't think we'll need the added protection of having a root aggregate and a separate aggregate for data volumes. What is the best way to set up these volumes and LUNs? I've looked at it from both angles: having 1 volume with 2-3 LUNs, or having multiple volumes with one LUN each. I think the single volume works well enough, and we could take advantage of deduplication more efficiently.
    However, I'm confused about something. If I make 2x 1.75TB or 2x 1.5TB LUNs, does that give me enough space for snapshots? I don't want to take snapshots, but I'm confused about how these things operate, as I was under the impression that I should have fractional reserve at 100% and snapshot reserve at 0, but I was getting out-of-space alerts (everything is thick provisioned). I then proceeded to change the FR to 0 (unchecking it in OnCommand) and the alert went away in the vCenter plugin.
    I say I don't want snapshots because I thought that taking advantage of SnapVault would be a great idea, since we are running an old 4.1 VMware infrastructure, can't use VDP, and have no way of buying backup/DR software. I figured allocating a 2040 to backups with SnapVault, or using the NetApp plugin in vCenter to do scheduled backups, would be decent enough for now.
    So... I'm incredibly confused about how to design the volumes so as not to run out of space with this fractional reserve and snapshots. Am I making my LUNs too large for my volumes? Is there some mathematical formula I should use?

    Hi,
    1) For the recommendations regarding the root volume, refer to the link (page 159).
    2) One LUN per volume is recommended for ease of management.
    3) Fractional reserve is the LUN overwrite reserve; these links may help you: Considerations
    Video
    Thanks
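    As a rough worked example of the sizing question (the numbers are illustrative only, assuming a fully space-reserved LUN with snapshots enabled):
    volume size  =  LUN size  +  (fractional reserve % x LUN size)  +  expected snapshot delta
                 =  1.5 TB    +  (100% x 1.5 TB)                    +  0.5 TB   =   3.5 TB
    With fractional reserve at 0% and no snapshots, the volume only needs to be marginally larger than the LUN; however, once snapshots exist, overwrites are then no longer guaranteed.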

  • Repair Degraded Storage Spaces Virtual Disk - All physical disks show healthy

    I've seen this mentioned a few times but no definitive answers. I have a Storage Pool with 8 physical disks. Each physical disk shows as healthy and auto-select. I have a thick provisioned virtual disk created on this Storage Pool in a parity configuration.
    This virtual disk is configured for the maximum amount on the storage pool. 50% of the space is available on the volume of the virtual disk... I click Repair and it does nothing.
    I tested yanking a physical disk. The virtual disk becomes unavailable. No redundancy currently. How the heck do I fix this???

    Hi,
    First please try to run a repair again with Powershell cmdlet:
    Repair-VirtualDisk
    If issue still exists please try optimize to see if it will help:
    Optimize-Volume
    If you have any feedback on our support, please send to [email protected].
    Repair-VirtualDisk:
    Running it shows a progress bar for a very short period of time, then goes back to the prompt. No messages or errors. Running Get-VirtualDisk still shows the VD as degraded.
    Optimize-Volume:
    PS C:\Users\Administrator> Optimize-Volume -driveletter D -analyze -defrag -verbose
    VERBOSE: Invoking defragmentation on DATA (D:)...
    Optimize-Volume : A general error occurred that is not covered by a more specific error code.
    At line:1 char:1
    + Optimize-Volume -driveletter D -analyze -defrag -verbose
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : NotSpecified: (MSFT_Volume (Ob...8-8907-3485...):ROOT/Microsoft/...age/MSFT_Volume) [Optimize-Volume], CimException
    + FullyQualifiedErrorId : HRESULT 0x89000000,Optimize-Volume
    PS C:\Users\Administrator> Optimize-Volume -driveletter D -analyze -defrag -verbose
    VERBOSE: Invoking defragmentation on DATA (D:)...
    Optimize-Volume : A general error occurred that is not covered by a more specific error code.
    At line:1 char:1
    + Optimize-Volume -driveletter D -analyze -defrag -verbose
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : NotSpecified: (MSFT_Volume (Ob...8-8907-3485...):ROOT/Microsoft/...age/MSFT_Volume) [Optimize-Volume], CimException
    + FullyQualifiedErrorId : HRESULT 0x89000000,Optimize-Volume
    PS C:\Users\Administrator> Optimize-Volume D -analyze -verbose
    VERBOSE: Invoking analysis on DATA (D:)...
    Optimize-Volume : A general error occurred that is not covered by a more specific error code.
    At line:1 char:1
    + Optimize-Volume D -analyze -verbose
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : NotSpecified: (MSFT_Volume (Ob...8-8907-3485...):ROOT/Microsoft/...age/MSFT_Volume) [Optimize-Volume], CimException
    + FullyQualifiedErrorId : HRESULT 0x89000000,Optimize-Volume
    PS C:\Users\Administrator> Optimize-Volume D -Verbose
    VERBOSE: Invoking defragmentation on DATA (D:)...
    Optimize-Volume : A general error occurred that is not covered by a more specific error code.
    At line:1 char:1
    + Optimize-Volume D -Verbose
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : NotSpecified: (MSFT_Volume (Ob...8-8907-3485...):ROOT/Microsoft/...age/MSFT_Volume) [Optimize-Volume], CimException
    + FullyQualifiedErrorId : HRESULT 0x89000000,Optimize-Volume
    Both error out...
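    A hedged PowerShell sketch of the usual diagnostic sequence (the friendly name "Data" is a placeholder); it won't fix anything if the pool genuinely lacks healthy capacity for a parity repair, but it shows which disks back the degraded virtual disk and whether a repair job is actually running:
    # list the physical disks backing the degraded virtual disk
    Get-VirtualDisk -FriendlyName "Data" | Get-PhysicalDisk |
        Select-Object FriendlyName, OperationalStatus, HealthStatus, Usage
    # kick off a repair and watch its progress
    Repair-VirtualDisk -FriendlyName "Data"
    Get-StorageJob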

  • Best way to have resizeable LUNS for datafiles - non RAC system

    All,
    (thanks Avi for the help so far - I know it's a holiday there, so I'll wait for your return and see if any other users can chip in also)
    one of our systems (many on the go here) is being provided by an external vendor - I am reviewing their design and I have some concerns about the LUNs to house the datafiles:
    they don't want to pre-assign full-size LUNs - sized for future growth - and want more flexibility to give each env less disk space in the beginning and allocate more as each env grows
    they are not going to use RAC (the system has nowhere near the uptime/capacity reqs - and we are removing it as it has caused enormous issues with the previous vendors and their lack of skills with it - we want simplicity)
    They have said they do not want to use ASM (I have asked for that previously; I think they have never used it before - I may be able to change their minds on this, but they are saying that as it's not RAC it's not needed)
    but they are wondering how they give smaller LUNs to each env and increase the size as they grow - they don't want to forever be adding /u0X /u0Y /u0Z extra filesystems (E-Business Suite Rapid Clone doesn't like working with many filesystems anyway, and I find it inelegant to have so many mount points)
    they have suggested using large OVM repos and serving the data filesystems out of those (I have told them to use the repos just for the guest OSes and to use directly physically attached LUNs for the datafiles (5TB of them))
    now they have suggested creating a large LUN (large enough for many envs at the same time [dev / test1 / test2 etc]) .... and putting OCFS2 on it so that they can mount it to all the domU/guests and allocate space as needed out of that:
    so that they have guests/VMs (DEV1 - DEV2 - TEST1, say) (all separate VMs), all mounting the same OCFS2 cluster filesystem (as /u01 maybe), and they can share that for the datafiles under a separate dir, so that each DB VM would see:
    /u01/ and as subdirectories to that DEV1 DEV2 TEST1 so:
    /u01/DEV1
    /u01/DEV2
    /u01/TEST1
    and only use the right directory for each guests datafiles (thus sharing the space in u01(the big LUN) as needed per env)....
    I really don't like that, as each guest is going to have the same oracle unix user details and would be able to write to each other's dirs - I'd prefer dedicated LUNs for each VM, not mounted to many VMs
    so I am looking for a way to suggest something better....
    should I just insist on ASM (but this is a risk as I fear they are not experienced with it)
    or go with OEL/RHEL LVM and standard ext filesystems that can be extended - what are the risks with this? (On A Linux Guest For OVM, Which Partitions Can Be LVM? [ID 1080783.1] seems to say there is little performance impact)
    or is there another option?
    Thanks all
    Martin
    Edited by: Martin Brambley on 11-Jun-2012 08:53

    Martin, what route did you end up going?
    We are about to deploy several hundred OEL VMs that are going to run non-RAC database instances. We don't plan to use ASM either. Our plan right now is to use one large 3TB LUN virtual disk to carve out the operating system space for the VMs, and then have a separate physically attached LUN for each VM that will host a /u01 filesystem using LVM. I have concerns with this, as we don't know how much space /u01 will ultimately need, and if we end up having to extend /u01 on all of these VMs, that sounds like it will be messy. Right now I've got 400 separate 25GB LUNs presented to all of my OVM servers that we plan to use for /u01 filesystems.
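    For what it's worth, growing an LVM-backed /u01 after presenting an extra LUN is fairly painless; a minimal sketch, assuming ext3/ext4 (which grow online) and placeholder device/VG/LV names:
    pvcreate /dev/xvdc                      # initialise the newly presented LUN
    vgextend vg_u01 /dev/xvdc               # add it to the existing volume group
    lvextend -L +25G /dev/vg_u01/lv_u01     # grow the logical volume
    resize2fs /dev/vg_u01/lv_u01            # grow the filesystem online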

  • NOT ABLE TO RECLAIM STORAGE USED BY XMLTYPE COLUMN

    Since we are on Oracle 9i, the 10g solution DBMS_XMLSCHEMA.CopyEvolve() is not available, so we are trying to do it ourselves.
    see the doc,
    Re: how to make xml schema change when there are existing records in the ta
    However, our requirement is different in the following ways:
    1. We hope to be able to add/drop as many XMLTYPE columns to a table as we
    want (all structured storage, but with different schemas).
    2. When an XMLTYPE schema needs to be updated, we will add a new
    xmltype column, assign it the new schema and migrate the data from the old column. After that we want to drop the old xmltype column and its schema to reclaim all the storage.
    The underlying XML tables are dropped and their space is reclaimed.
    However, we found out that the segment/blocks/bytes of the xmltype column itself are not released.
    Question:
    1. Since an XMLType column associated with an XML schema uses structured
    storage, the XML documents should be parsed and stored in the underlying tables. However, it seems that the XMLType column itself is acquiring a substantial amount of space.
    By querying the USER_SEGMENTS view, we saw that the underlying structure is actually taking a constant and small amount of space, whereas the main table is taking huge space (we populated the table with 61K rows). For instance:
    SEGMENT_NAME                  BYTES        SEGMENT_TYPE
    PERFORMANCEEVENT              260046848    TABLE
    SYS_C007080275                1048576      INDEX
    Performance1833_TAB           65536        TABLE
    Date1831_TAB                  65536        TABLE
    LocationAddress1814_TAB       65536        TABLE
    SYS_IL0010401244C00018$$      65536        LOBINDEX
    SYS_IL0010401244C00012$$      65536        LOBINDEX
    Where PERFORMANCEEVENT is the main table.
    Why?

    Try reading this post.
    It will give you an idea of how and when XML is shredded into tables.
    XMLType column based on XML Schema: several questions
    In your case, since you are not using the default table, the data is stored within the table itself, as Oracle has not shredded the data.
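    If the aim is then to release the space held by the main table after dropping the old column, one common approach is a segment rebuild; this is a hedged sketch only (the column name is hypothetical, a MOVE needs free space for the rebuilt copy, and indexes must be rebuilt afterwards):
    ALTER TABLE performanceevent DROP COLUMN old_xml_col;
    ALTER TABLE performanceevent MOVE;        -- rebuilds the table segment, releasing freed blocks
    ALTER INDEX sys_c007080275 REBUILD;       -- indexes go UNUSABLE after a MOVE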

  • Remove hidden space

    How to remove or unlock hidden space on your SSD?

    If it were possible, that would be a very foolish thing to do. That extra space, the unallocated space on all SSD units, is there for a reason. That space is called over-provisioning. If part of the normally allocated space fails, the SSD will automatically use that extra space so the drive won't give errors or fail. Just use the drive as it is designed.

  • Running out of disk space (large VM boxes)

    Gentlemen:
    CUCM 6.1.4
    Unity version = 5.0(1.0)
    Cisco Message Store Manager Version 1.4.0.38
    Exchange Information Store Version: 6.5.7638.1
    Running out of disk space (large VM boxes)
    Here is the situation: my client never set up any storage or flush procedures within Cisco Unity Tools Message Store Manager (MSM). Now they have several voice mailboxes with half a GB of data or more in their Deleted Items folder.
    I ran the "flush messages from deleted items folder" task successfully on these voice mailboxes.
    The question is: should I see some disk space reclaimed on the server hard drives?
    Thanks, Tom

    With Exchange, you will not see disk space reclaimed until an offline defrag is run against the Exchange databases.
    http://support.microsoft.com/kb/328804
    Hope this helps.
    Brandon
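    The offline defrag in that KB article is done with eseutil against a dismounted store; roughly like this (the database path is a placeholder, and you need free disk space of around 110% of the database size for the temporary copy):
    eseutil /d "E:\Exchsrvr\MDBDATA\priv1.edb"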

  • PI and MSE VMware disk provisioning

    Hello,
    Can someone tell me if it's permitted to thin provision the VMware appliance for both PI and MSE?
    Thank you

    I have in a lab, but will not deploy it that way in production. I even have it thick provisioned in my home lab. Remember that TAC will only support thick provisioning since that is how they configure the ova.
    Sent from Cisco Technical Support iPhone App

  • Cannot recover hard disk space when deleting and huge sparseimage

    hi there,
    System Version: Mac OS X 10.4.3 (8F46)
    Kernel Version: Darwin 8.3.0
    my problem is that I found the Mac was running slowly and I noticed the hard disks were getting full:
    [veronique:/Users/e] e# df -H
    Filesystem Size Used Avail Capacity Mounted on
    /dev/disk0s3 60G 59G 729M 99% /
    devfs 102K 102K 0B 100% /dev
    fdesc 1.0K 1.0K 0B 100% /dev
    <volfs> 524K 524K 0B 100% /.vol
    /dev/disk1s2 60G 59G 725M 99% /Users/e
    automount -nsl [246] 0B 0B 0B 100% /Network
    automount -fstab [250] 0B 0B 0B 100% /automount/Servers
    automount -static [250] 0B 0B 0B 100% /automount/static
    I'm not sure why they became this full, but I then looked for big files and found a sparseimage of 40GB. I can mount it and all, so it's in good shape.
    I needed to free up space, so I deleted a lot of music files, pictures, a Virtual PC file of several gigs and so forth, but when I ran a new df -H nothing had changed. So I'm guessing all the data, including the deleted data, is in the sparseimage? Is this so? I've emptied the trash several times.
    I've now tried to turn FileVault OFF, but it asked me to free up 1.9GB.. catch 22.. if I rid myself of files it won't see more freed-up space...
    Is there something I can do? I could mount the sparseimage, browse it and delete files there, or back up whatever I need and then get rid of it? Should I reinstall?
    any help needs to go to [email protected] - I went into my settings but didn't find out how to change my email. That's not impressive, I know.
    eric

    Here's what I had to resort to to forcibly reclaim missing free space on my FileVault-encrypted PowerBook G4.
    After several days and more restart/shutdown trials than I can count, I could not for the life of me get my PowerBook to give me the "Reclaim free space" option that usually pops up. Not sure why. But, I was missing nearly 12GB of free space and kind of needed it.
    I was able to force a space reclaim... but the process isn't exactly point-and-click.
    •I mounted the PowerBook via Firewire target disk mode on my desktop and copied over my PB account's .sparseimage file to my desktop (didn't want to work on the original file in case I screwed it up).
    •Used the terminal command "hdiutil compact" on the .sparseimage file (type "hdiutil compact " (with a trailing space) then before hitting return, drag the icon of the .sparseimage file on to the terminal. That'll insert the path of the file. Hit return.)
    •Let it run. Took about 15 minutes or so. On a 27GB .sparseimage, the procedure recovered 12.5GB!
    •Deleted the original .sparseimage from my PowerBook, emptied the trash, and copied over the newly compacted one. Rebooted the PB, and voila!
    Do keep in mind though that you'll need at least as much free space as your Home folder occupies itself if you want to turn FileVault off.
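    In short, the core of the fix is one command run against a copy of the sparse image (the path is a placeholder):
    hdiutil compact /Volumes/Backup/e.sparseimage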
    PowerMac Dual 2.5GHz G5   Mac OS X (10.4)  

  • Problems with Storage Spaces

    I have a storage space with 10 drives, with over 10GB of free disk space in the pool. Occasionally, certain of my spaces are inaccessible, presumably because two drives intermittently fail; these two drives total 2.5G of physical space.
    I have retired the two drives, but Storage Spaces will not let me remove them. When I click Remove it says: "can't remove the drive from the pool", "drive could not be removed because not
    all data could be reallocated. Add an additional drive to this pool and reattempt this operation".
    How can I address this problem? 

    Hi,
    According to Storage Space FAQ, you could find the reason:
    Why do I have a low capacity warning even though I still have unused pool capacity?
    Storage Spaces provides advance notification of thinly provisioned storage spaces when the storage pool does not have enough capacity spread among a sufficient number of disks to continue to write new data. The default warning point is 70% capacity utilization.
    To learn when Storage Spaces will generate a warning, consider the following example.
    A two column, two-way mirror space that uses thin provisioning in a four disk pool
    Two of the disks have 1TB capacity and two have 2TB capacity. Because a two column, two-way mirror space needs four disks (number_of_disks = NumberOfColumns * NumberOfDataCopies), it will evenly consume all four disks as it writes new data. When capacity
    utilization of the two 1TB disks reaches 70%, Storage Spaces will warn of a low capacity condition. Even though the entire pool has 3.2TB free capacity, the thinly provisioned space will soon not be able to write any more data because the 1TB disks are nearly
    fully consumed.
    You can easily keep individual storage spaces’ low capacity warning synchronized with each other and with the pool by following the guidance in the next section, “How do I increase pool capacity?” from the moment of creating the pool and through all
    subsequent expansions of the pool.
    For more details, please refer to the link below:
    Storage Spaces Frequently Asked Questions (FAQ):
    http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx
    Roger Lu
    TechNet Community Support
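    To see how your own pool and spaces line up with that explanation, the relevant properties can be pulled with something like this (output will of course differ per system):
    Get-StoragePool | Select-Object FriendlyName, Size, AllocatedSize
    Get-VirtualDisk | Select-Object FriendlyName, ProvisioningType, ResiliencySettingName,
        NumberOfColumns, NumberOfDataCopies, Size, FootprintOnPool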

Maybe you are looking for

  • iMac 27 Intel will not boot normally, only in Safe Mode

    I posted previously, but the answer to the question was "did it wake up in safe mode?". I did not give it a chance; I restored from backup to 10.10, upgraded to 10.10.1, and it worked for 1 week before returning to the previous problem. I ran AHT - passed

  • Master Data Services - Can not add new User and MDS can not Identify LOCAL Users

    Team, we are using SQL Server 2008 R2 and the system has been working for a long time, but suddenly we observed the two issues mentioned. The server MyServer has already been restarted, but that did not help. MDS is installed and configured on the SAME machine (MyServer). I have two

  • Website not responding errors

    Hello... I feel like since I upgraded to Lion, Safari hasn't been the same. I've never had a problem with websites freezing on me and now, it seems like I often have to force reload a website because the website is not responding. This happens both a

  • Incorrect Address Mapping

    Hi: There is a problem with the field mapping from quick address into CRM. Sometimes the house name is mapped onto the second line of the address instead of the first line of the address. Need your immediate inputs. Regards,

  • Phone freezes when making a phone call

    Hey, I've only had my new phone for 3 months now, and for about a month I have had a problem when calling. When I make a call the screen goes black and I can't unlock the screen anymore, and it only works again once the other person hangs up the phone,