Linux LVM (Logical Volume Manager) for CentOS on Azure?

Hi.  I am trying out Azure and installed an OpenLogic CentOS 6 virtual machine.  I notice that it is not running LVM (Logical Volume Manager) by default.  I would like to ask if it is possible to:
1. have CentOS Linux installed with LVM by default when creating a Linux virtual machine on Azure
2. switch to LVM after adding a new disk
On the other hand, is it a good idea to use LVM at all?  Will it affect performance or features on Azure?
Thanks.

Hi,
Based on my experience, you can add data disks to an Azure VM, and you can install the Logical Volume Manager to manage the disks attached to it. However, there is no Linux VM image with LVM configured by default. If you want to have this, please submit your requirement
in Azure feedback:
http://feedback.azure.com/forums/34192--general-feedback
Also note that an Azure VM can have only one OS disk, so LVM is practical only across the attached data disks, not for the OS disk itself.
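For example, after attaching a data disk, a minimal LVM setup might look like the following sketch (the device name /dev/sdc and the volume names are assumptions; data disks on Azure can appear under different names):

```shell
# Minimal sketch, assuming the new data disk appeared as /dev/sdc
# and has a single partition /dev/sdc1 of type "Linux LVM".
pvcreate /dev/sdc1                      # mark the partition as an LVM physical volume
vgcreate datavg /dev/sdc1               # create a volume group on it
lvcreate -l 100%FREE -n datalv datavg   # one logical volume using all the space
mkfs.ext4 /dev/datavg/datalv            # create a filesystem on the LV
mkdir -p /data && mount /dev/datavg/datalv /data
```

Further data disks can later be folded in with vgextend and the volume grown with lvextend, which is the main attraction of LVM in this scenario.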
Best regards,
Susie
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

Similar Messages

  • Unable to find Logical Volume manager in OEL6

    Hi Gurus,
    I am new to Linux. I have installed OEL6 on my Windows box with VirtualBox. After logging in, I don't see the Logical Volume Manager, nor the Network option under the Administration menu. I tried to install lvm-1.0.8-14.x86_64.rpm (downloaded from Red Hat) but had no luck.
    Please help me to fix this.
    Thank you,
    chandra

    I remember the Red Hat 6.0 release notes outlining that the Samba and LVM configuration GUIs were removed without a replacement; the same goes for the system-config-network tool. More recent versions of the release notes at http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/6.0_Release_Notes/storage.html outline that system-config-lvm is in the process of transitioning to a more maintainable tool named gnome-disk-utility.
    You did not specify which version of Oracle Linux you installed; I suggest you refer to the release notes. A text-based version of "system-config-network" should still be installed, which you can open from the command line.
    Regarding the LVM GUI, you should set up the system software repository following the instructions at http://public-yum.oracle.com. You can then use yum to install the system-config-lvm package.

  • RAID on LVM logical volumes

    Hey,
    I've got a 1T and a 500G hard drive in my workstation. The 500G drive is my media disk. I initialized a software RAID-1 with one missing drive on a 500G logical volume on the 1T disk. After formatting and copying all the data from the media disk over to the new RAID partition, I added the old 500G partition to the RAID.
    After rebooting my machine, the RAID was in an inactive state, and hence the filesystem could not be mounted.
    I found out that in /etc/rc.sysinit the search for software RAID arrays is performed before the search for LVM volume groups. I tried switching the RAID and LVM sections, but after a reboot nothing changed. I presume it's caused by the LV devices missing from /proc/partitions, which md uses by default to scan for RAID members.
    I decided to add some lines to /etc/rc.local (also, my RAID device is /dev/md127 for some reason):
    mdadm --stop /dev/md127
    mdadm --assemble /dev/md127 /dev/vg00/raid-data /dev/sdb1
    mount /mnt/raid
    It works now, but it is not a clean solution.
    How is RAID on top of LVM logical volumes supposed to be set up in Arch Linux? Are there any alternative solutions to this problem (surely there are)?
    Thanks in advance.

    delcypher wrote:
    Hi I've been looking into backing up one of my LVM logical volumes and I came across this (http://www.novell.com/coolsolutions/appnote/19386.html) article.
    I'm a little confused by it as it states that "Nice work, your logical volume is now fault tolerant!".
    Are you sure that's the article? I can't find anything on that page about "nice work" or "fault tolerant"...?
    delcypher wrote:My concern is that because my volume group spans two drives, if either of them fails I will lose all my data.
    Correct.
    delcypher wrote:Is there any setup by which I can have a perfect mirror of my volume group, so that if one of my drives fails I still have a perfectly functional volume group?
    Use 2 disks to create a RAID-1 array, then use that as your PV.
    sda1 + sdb1 = md0 => pv
    delcypher wrote:I understand linux supports software raid but would I need two drives identically sized (1TB) or can I just have one drive (i.e. 2TB) that is a mirror of my volumegroup?
    You can use 2 partitions on the same disk to create a RAID array, but that defeats the whole purpose of RAID.
    It sounds like you're merging the meanings of 'redundant' and 'backup' -- they are distinct things, so you need to decide if you want redundancy or backup. Or both ideally.
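    The recommended layering (RAID-1 first, LVM on top of the md device) can be sketched as follows; device names, the array size, and the VG/LV names are assumptions for illustration:

    ```shell
    # Sketch: build the mirror from one partition on each disk,
    # then use the md device as the LVM physical volume.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    pvcreate /dev/md0                 # PV on top of the array, not the other way around
    vgcreate vg00 /dev/md0
    lvcreate -L 400G -n media vg00
    # record the array so it assembles early at boot
    mdadm --detail --scan >> /etc/mdadm.conf
    ```

    With this ordering, the array is assembled before LVM scans for volume groups, which avoids the rc.sysinit ordering problem described above.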

  • Extending LVM logical volume

    Hi, I recently added a new hard drive to my computer, which already had an LVM setup on the first drive.
    I created an LVM physical volume on my new hard drive and added it to my existing volume group (volgroup):
    pvcreate /dev/sdb1
    vgextend volgroup /dev/sdb1
    I then proceeded to extend one of the LVM logical volumes so that it used all the free space in my volume group (volgroup).
    I tried
    lvextend -l 100%FREE volgroup/home
    but this didn't extend the volume (it stayed the same size)... but
    lvextend -l +100%FREE volgroup/home
    did. What I don't understand is why. The description of the -l option is as follows
    -l, --extents [+]LogicalExtentsNumber[%{VG|LV|PVS|FREE|ORIGIN}]
    Extend or set the logical volume size in units of logical extents. With the + sign the value is added to the actual size of the logical volume, and without it the value is taken as an absolute one. The number can also be expressed as a percentage of the total space in the Volume Group with the suffix %VG, relative to the existing size of the Logical Volume with the suffix %LV, of the remaining free space for the specified PhysicalVolume(s) with the suffix %PVS, as a percentage of the remaining free space in the Volume Group with the suffix %FREE, or (for a snapshot) as a percentage of the total space in the Origin Logical Volume with the suffix %ORIGIN.
    As far as I can see the "+" only makes sense when setting the logical volume size in units of logical extents. If you're using %FREE that should be the percentage of free space of the volume group.
    Any ideas?

    --- Logical volume ---
    LV Name /dev/volgroup/home
    VG Name volgroup
    LV UUID PjPjuh-lPuN-j79Z-OIdI-ZniR-NOQT-NiRGbc
    LV Write Access read/write
    LV Status available
    # open 1
    LV Size 1.65 TiB
    Current LE 433488
    Segments 2
    Allocation inherit
    Read ahead sectors auto
    - currently set to 256
    Block device 254:4
    It was originally ~900 GiB. Thanks for your explanation.
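    For what it's worth, the observed behaviour is consistent with the man page: without the +, "-l 100%FREE" sets the LV to an absolute size equal to the remaining free space, and when that figure is smaller than the current LV size, lvextend refuses to shrink and leaves the volume unchanged. The relative form is the one to use when growing into new space; a typical sequence (volume names taken from this thread) is:

    ```shell
    lvextend -l +100%FREE volgroup/home   # grow the LV by all remaining free extents
    resize2fs /dev/volgroup/home          # then grow the ext filesystem to match
    ```

    The second step is easy to forget: the filesystem does not grow automatically when the LV does.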

  • Solaris9 Logical Volume Manager

    Dear All,
    I want to know more about the Logical Volume Manager in Solaris 9.
    I know that it makes logical volumes from one or more disks.
    Can these logical volumes be used like disks for striping datafiles onto, if I use a RAID-5 controller with an Oracle database?

    No replies?

  • Finding whole mapping from database file - filesystems - logical volume manager - logical partitions

    Hello,
    I am trying to reverse-engineer the mapping from database files down to their physical carriers on logical partitions (fdisk), but I am unable to trace the whole path from the filesystems down to the partitions through the intermediate logical volumes.
    1. select from dba_data_files ...
    2. df -k
    to get the listing of filesystems
    3. vgdisplay
    4. lvdisplay
    5. cat /proc/partitions
    6. fdisk /dev/sda -l
       fdisk /dev/sdb -l
    The problem I have is that I am not able to determine which partitions make up each logical volume, and then which logical volumes make up each filesystem.
    Thank you for any hint or direction.

    Hello Wadhah,
    Before starting the discussion, let me explain that I am a newcomer to Oracle Linux. My DBA experience with Oracle comes from IBM UNIX (AIX 6.1) and Oracle 11gR2.
    The first task is to get the complete picture of one database on Oracle Linux for future maintenance tasks, to make the database more flexible, and to prepare for more intense work:
    - adding datafiles,
    - optimizing/relocating archived redo log files onto a filesystem separate from ORACLE_BASE,
    - separating auditing log files from $ORACLE_BASE onto their own filesystem,
    - separating the diag directory onto its own filesystem (logging, tracing),
    - adding/enlarging the TEMP tablespace,
    - adding/enlarging undo,
    - enlarging redo for a higher transaction rate (to reduce the number of log switches per unit time seen in alert_SID.log),
    - adding online redo log and control file mirrors.
    So in this context I am trying to inspect the content of the disk space from the highest logical level (V$ and DBA views) down to fdisk partitions.
    The idea was to go in these steps:
    1. select paths of present online redo groups, datafiles, controlfiles, temp, undo
       from V$, dba views
    2. For the paths obtained in step 1,
       locate the filesystems, and for those filesystems determine which are on logical volumes and which are directly on partitions.
    3. For all used logical volumes, locate the underlying partitions and their disks: /dev/sda, /dev/sdb, ...
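    The missing links in the chain (filesystem → LV → PV → partition) can be exposed with standard LVM2 reporting options; a sketch:

    ```shell
    df -k                              # filesystem -> mount device (often /dev/mapper/...)
    lvs -o lv_name,vg_name,devices     # logical volume -> the PVs/partitions it sits on
    pvs -o pv_name,vg_name             # physical volume -> volume group
    lvdisplay -m                       # per-LV segment-to-PV mapping in detail
    fdisk -l /dev/sda                  # partition -> disk
    ```

    The "devices" column of lvs is the key piece the numbered procedure above was missing: it shows exactly which partitions each logical volume is built from.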

  • Logical Volume Manager

    I am planning on installing Arch on a laptop soon. I have played with it in VirtualBox on a separate computer, and I am going to hope that it works with my touchpad / wireless mouse.
    I might butcher some jargon in this, but this is how I think LVMs work, and I would like to make sure.
    I am going to make partitions for boot, swap, /, and /home.
    I know /boot only needs to be around 100 MB, swap is conventionally twice the RAM, / around 10-15 GB (as I saw in another thread), and /home gets the rest.
    However, I read that with an install on LVM it is easy to resize partitions if needed.
    I only have one hard drive, and I'm mostly doing this for flexibility: if I give / too much space and end up using only around 4 GB, I would like to reclaim the other ~11 GB for /home for music or videos.
    So I think that means I would have something like this:
    sda1: boot, around 100MB.
    sda2: LVM, which contains swap, /, and /home.
    pvcreate /dev/sda2 creates the physical volume on the LVM, allowing me to partition it.
    After that, this is where I am confused.
    The Arch Wiki says:
    Create Volume group(s)
    The next step is to create a volume group on these physical volumes. First you need to create a volume group on one of the new partitions and then add to it all the other physical volumes you want to have in it:
    # vgcreate VolGroup00 /dev/sda2
    # vgextend VolGroup00 /dev/sdb1
    Also you can use any other name you like instead of VolGroup00 for a volume group when creating it. You can track how your volume group grows with:
    Can I skip this since I have only 1 HD? I guess that /dev/sdb1 would be the LVM partition from a second hard drive?
    I suppose I would still have to do vgcreate VolGroup00 /dev/sda2 just to create the volume group, though. Is this correct?
    After this step, I am pretty much lost. Here's what the wiki says, and how I am interpreting it...
    # lvcreate -L 10G VolGroup00 -n lvolhome
    This will create a logical volume that you can access later with /dev/mapper/Volgroup00-lvolhome or /dev/VolGroup00/lvolhome. Same as with the volume groups you can use any name you want for your logical volume when creating it.
    So later, I would turn this into my home partition during the Arch installation? I would create a lvolswap, and a lvolroot?
    Then, during the installation process, I would format them to ext3, and mount them as /home, /, and then select lvolswap as my swap partition?
    That's about it for now, I guess.
    Last edited by COMMUNISTCHINA (2008-08-15 21:53:41)

    COMMUNISTCHINA wrote:
    Berticus wrote:
    COMMUNISTCHINA wrote:I dunno. I tried using LVM on a virtualbox and I keep getting a kernel panic. I followed the Wiki.
    If I put GRUB on /boot, it doesn't work, but I got it to work if I installed it on the / LV.
    odd, to my knowledge grub can't be on an LVM. why not install grub on MBR?
    I actually figured this out maybe an hour ago.
    I changed my GRUB configuration file, but since it's in a virtualbox, out of habit I type arch root=/dev/sda3 on startup, but I needed to type root=/dev/VG00/lvolroot. I had grub on /boot (not on the LVM), but when I was trying to boot, I told it to go to the wrong place for root. I could probably unmount the .iso for the vbox, but whatever.
    I will read a little more about RAID.
    If I end up figuring out how to do it, would I need my external drive hooked up all the time? I have to tote the laptop around, and I wouldn't want the external mucking up the portability.
    I wasn't aware you were on a laptop. In that case, a file server would do you best. But cheapest solution would be just to stick with an external hard drive and not RAID.
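    To answer the original layout question: with a single drive, vgextend is indeed unnecessary. A sketch of the whole sequence, using the names from the thread (the sizes are assumptions):

    ```shell
    pvcreate /dev/sda2                    # physical volume on the LVM partition
    vgcreate VolGroup00 /dev/sda2         # volume group on the single PV; no vgextend needed
    lvcreate -L 2G  -n lvolswap VolGroup00
    lvcreate -L 15G -n lvolroot VolGroup00
    lvcreate -l 100%FREE -n lvolhome VolGroup00
    mkswap    /dev/VolGroup00/lvolswap
    mkfs.ext3 /dev/VolGroup00/lvolroot
    mkfs.ext3 /dev/VolGroup00/lvolhome
    ```

    During installation the volumes are then mounted as / and /home and lvolswap selected as swap, exactly as interpreted above; /boot stays on sda1 outside the LVM.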

  • Extending an LVM Logical Volume to Another Disk

    I have read the article in the wiki and also this guide, since my drives are encrypted. I have already set up Arch on one drive with LVM on top of an encrypted volume (following the linked guide strictly, with exceptions for the cipher and the partitioning). So far everything works very well, but I'm only halfway to where I want to end up. In addition to the one HDD I want to use two more, all of them encrypted (no problem here), with one logical volume spread across all three drives (here is the problem).
    So, to clarify, on /dev/sda I have two partitions, /boot (unencrypted, 130 MB) and /dev/sda2 (encrypted, ~2TB) on which the LVM volumes are.
    /dev/sda2 is added to vgroup and has several logical volumes of which /home is the largest.
    On the other hand I have /dev/sdb and /dev/sdc. Both should only contain one encrypted partition and extend /home. That is why I chose LVM in the first place.
    I started with /dev/sdc since all the data on it has been backed up already. The entire drive is now one big encrypted partition called /dev/sdc1. I mounted it with sudo cryptsetup luksOpen /dev/sdc1 lvm. And here I am stuck.
    1) First of all, I'm not entirely clear on the last part, "lvm", of that command. I initially thought it would be the name this partition would be mounted under, but the guide I took this from does not explain it, and I want to be extra careful here because I have a lot of data on /dev/sda2 that is not backed up and I don't want to accidentally erase the logical volumes I already have.
    2) I guess the next step would be creating a physical volume on the opened partition via lvm pvcreate /dev/mapper/lvm, but again I want to be extra clear on this: I really do not want to tamper with the volumes I already have unless I know exactly what effects it will have. If I were to open the encrypted partition with sudo cryptsetup luksOpen /dev/sdc1 aaa, would I then have to create the physical volume with lvm pvcreate /dev/mapper/aaa? Because I already used the name lvm when I created all the volumes on the first drive (sda), and I don't know whether they are still mapped under the same name (after many reboots and setting up the Arch which I am currently using), in which case I might be in danger of overwriting something.
    3) From what I have read so far I could extend /home now with lvextend -l +100%FREE vgroup/home, is that correct? After that I would only have to extend the file system (ext4) on /home and it would be done?
    I hope it is clear what I am trying to do, if not please do ask. Thanks in advance for reading all of this.

    Turns out this is not quite over yet. After successfully dealing with LVM I was about to reboot when something occurred to me: The second HDD (sdc1) will not be decrypted at boot and I have no idea what would happen if LVM tries to mount a volume that is spread across two disks of which one is not available.
    So I searched for some time, and all I could find was this thread. He is talking about altering the encrypt hook and generating a new initramfs to be able to enter more than one cryptdevice in GRUB. I have never dealt with hooks before and have never touched bash scripts like this. So my options were pretty weak: do what he describes, although I have no idea how it works, or revert all the changes to LVM. Naturally I chose option number one, patched the hook, edited the menu.lst, rebooted, and got "Waiting 10 seconds for device /dev/sda2".
    So yeah it's my fault for trying something I have no clue about.
    I tried to access the volumes from the live CD, but even though I decrypted the drives and activated LVM with vgchange -ay, I cannot access the logical volumes. I naively tried to access files such as menu.lst but I can't get there, probably because I just lack the knowledge of how to do that. But even if I could access the files, could I rebuild my system's initramfs from within the live CD?
    Last edited by venehan_snakes (2011-10-18 16:49:48)
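    For the record, the extension sequence the poster describes looks like the following sketch (names are from the thread; treat it as illustrative and have backups before touching in-use volume groups):

    ```shell
    cryptsetup luksOpen /dev/sdc1 crypt-sdc   # the last argument is only the /dev/mapper name, not a mount
    pvcreate  /dev/mapper/crypt-sdc           # PV on the opened (decrypted) mapping
    vgextend  vgroup /dev/mapper/crypt-sdc    # add it to the existing volume group
    lvextend -l +100%FREE vgroup/home         # grow /home into the new space
    resize2fs /dev/vgroup/home                # grow the ext4 filesystem to match
    ```

    As the follow-up post discovered, the new device must also be unlocked at boot (via the initramfs hooks or crypttab) before LVM activates the group, otherwise the spanned volume cannot come up.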

  • Veritas volume manager for solaris 10

    Hi All
    Which version of Veritas Volume Manager supports Solaris 10 06/06?
    Can you post a link for reference?
    Regards
    RPS

    Hello,
    We are currently using Solaris 9 with Veritas Volume Manager 3.5, so I would like to know whether, if I upgrade to Solaris 10 06/06, I can still use 3.5 or not.
    Using the Veritas (Symantec) support site, I have found the following document
    VERITAS Storage Solutions 3.5 Maintenance Pack 4 for Solaris
    http://seer.support.veritas.com/docs/278582.htm
    The latest supported version listed for VxVM 3.5 with MP4 applied is Solaris 9. That means the answer is NO.
    I understand that searching the Veritas knowledge base might be tough and time consuming, but it's their product ...
    Michael

  • [solved] Filesystem check fail - Cannot access LVM Logical Volumes

    I am getting a "File System Check Failed" on startup. I recently did a full system upgrade, but I'm not entirely sure that is the cause of the issue, as I don't reboot very often.
    I get the error right before this line is echoed:
    /dev/mapper/Arch_LVM-Root:
    The super block could not be read or does not describe a correct ext2 filesystem...
    This is odd, because the only ext2 filesystem I have is on a non-LVM boot partition...
    I can log-in and mount / as read/write and I can activate LVM with
    modprobe dm-mod
    and
    vgchange -ay Arch_LVM
    and they show up in lvdisplay but their status is "NOT available"
    I just need to mount these logical volumes so I can retrieve some personal data from my home directory. I am also hesitant to use LVM again if I can't retrieve my data.
    any suggestions?
    Last edited by action_owl (2010-08-15 02:15:58)

    I just popped in the install disk and was able to mount and access the LVM groups as expected, something must have been wonky with my filesystem
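    For anyone hitting the same symptom, a typical rescue sequence from a live environment (volume names from this thread; the LV name Root is the one visible in the error above) is:

    ```shell
    modprobe dm-mod                  # ensure the device-mapper module is loaded
    vgscan                           # rescan for volume groups
    vgchange -ay Arch_LVM            # activate all LVs in the group
    lvdisplay                        # LV Status should now read "available"
    mount /dev/Arch_LVM/Root /mnt    # then mount read-only if preferred and copy data off
    ```

    If lvdisplay still shows "NOT available" after vgchange -ay, running vgscan --mknodes to recreate the device nodes is a common next step.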

  • [solved]Booting a LVM logical volume spanned across two LUKS paritions

    I have two partitions formatted as LUKS that are both LVM physical volumes for an LVM logical volume holding my root filesystem.  I can't seem to figure out what the kernel line should be.  I tried specifying cryptdevice twice, but it seems to honor only one of the cryptdevice= options.  What now?
    Last edited by synthead (2011-12-09 16:17:06)

    I found this: https://bbs.archlinux.org/viewtopic.php … 95#p827495
    Modifying /lib/initcpio/hooks/encrypt with the patch as recommended did the trick.  I'll file a feature request for this
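    (A note for later readers: the stock encrypt hook at the time honored only a single cryptdevice=, hence the patch. On newer systems the systemd-based sd-encrypt hook handles multiple LUKS devices natively, one rd.luks.name= parameter per device. The UUIDs and volume names below are placeholders, not values from this thread:)

    ```
    rd.luks.name=<uuid-of-first-luks-part>=cryptlvm1 rd.luks.name=<uuid-of-second-luks-part>=cryptlvm2 root=/dev/mapper/<vg>-<root-lv>
    ```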

  • Rvm - Ramdisk volume manager for Arch

    Hi,
    I tried an experiment, this is a little tool that allows you to create tmpfs based ramdisks with Arch Linux installed on them.  This script depends on `arch-install-scripts' as it uses pacstrap.
    You can for example do:
    sudo ./rvm create arch /home/bla/arch-mnt
    This will essentially install arch in `/home/bla/arch-mnt' (base and base-devel so far).  The target dir `arch-mnt' will automatically be mounted as tmpfs (2G at the moment, although a command line option for the preferred size would be useful).  On top of this a .tgz of the installation will be created and placed in RVM_VOLUMES_PATH (which needs to be set manually in the shell script for now, typically `/home/<user>/.rvm').
    We can then do:
    sudo ./rvm start arch /home/bla/arch-mnt
    This will unpack the `arch.tgz' from RVM_VOLUMES_PATH into the mount point.  We can then use `arch-chroot'.
    Similarly to stop the ramdisk we do:
    sudo ./rvm stop arch /home/bla/arch-mnt
    This will backup everything from the mount point to a .tgz and copy it back to RVM_VOLUMES_PATH.
    There are a couple of things that could be done better and will be, this is just a barebones version of the script.  We could for example have rc.d/systemd scripts to start/stop etc.
    Code is available on github
    Last edited by dimigon (2012-09-20 14:35:28)


  • Volume manager for fluxbox

    Any suggestions for a tray icon to control the volume in Fluxbox?
    more info here: https://bbs.archlinux.org/viewtopic.php?pid=1209271
    I tried:
    volwheel
    gvtray
    fbmix
    with no success.

    volumeicon creates a tray icon; if you left-click it you can mute/unmute the volume, and if you right-click it, it opens a terminal with alsamixer.
    pnmixer also creates a tray icon; left-clicking it opens a menu where you can mute or unmute, plus a button that links to the sound preferences.
    Neither of them pops up a slider; the only one that does is volwheel, but it has a lot of problems: when you move the slider, it returns to 0.
    (thanks for the reply)

  • Yosemite install fails: "not enough free space in the Core Storage Logical Volume Group for this operation"

    This is on a Mac Mini, running Mavericks, with a fusion drive (1TB/128GB), and 85GB free space.
    Any ideas?

    I too experienced the Low Disk space event…  I hit the restart button and the install started again… I thought I got caught in a loop.
    On the third time seeing the crawl bar for the Yosemite install I hit the Power Button and force killed the whole process.
    Upon rebooting I saw horizontal black and white bars across the screen and VERY Slow everything.
    Boot, Icon Bounce, Application Launch, Keyboard response… etc.
    I was able to get to the Desktop and then I re-installed Yosemite.  Dragging a window was laggy, and jerky…
    On the third install of Yosemite things improved some, almost but not quite as fast as Mavericks …  Some improvement but copy/writes remained very slow.
    I now notice that some Applications such as “Clean My Mac” fail to launch and am still finding out if all are functional.
    I have a home rolled Fusion Drive.
    Currently I am considering a Clean Yosemite install and a TM restore.
    Console and Activity Monitor report callservicesd and Chromehelper among others not responding…
    I am on hold for the last ten minutes to Apple Support... They must be VERY busy today...

  • 10g ASM on Logical Volumes vs. Raw devices and SAN Virtualization

    We are setting up our standards for Oracle 10g non-RAC systems and evaluating the value of Oracle ASM in our environment.
    As per the official Oracle documentation, raw devices are preferred to using Logical Volumes when using ASM.
    From here: http://download.oracle.com/docs/cd/B19306_01/server.102/b15658/appa_aix.htm#sthref723
    "Note: Do not add logical volumes to Automatic Storage Management disk groups. Automatic Storage Management works best when you add raw disk devices to disk groups. If you are using Automatic Storage Management, then do not use LVM for striping. Automatic Storage Management implements striping and mirroring."
    Also, as per Metalink note 452924.1:
    "10) Avoid using a Logical Volume Manager (LVM) because an LVM would be redundant."
    The issue is: if we use raw disk devices presented to ASM, the disks don't show up as used in the unix/AIX system tools (i.e. smit, lspv, etc.). Hence, when looking for raw devices on the system to add to filesystems/volume groups/etc., it's highly possible that a UNIX admin will grab a raw device that is already in use by Oracle ASM.
    Additionally, we are using a an IBM DS8300 SAN with IBM SAN Volume Controller (SVC) in front of it. Hence, we already have storage virtualization and I/O balancing at the SAN/hardware level.
    I'm looking for a little clarification on the following questions, as my understanding of the responses to them seems to conflict:
    QUESTION #1: Can anyone clarify/provide additional detail as to why Logical volumes are not preferred when using Oracle ASM? Does the argument still hold in a SAN Virtualized environment?
    QUESTION #2: Does virtualization at the software level (ASM) make sense in our environment? As we already have I/O balancing provided at the hardware level via our SVC, what do we gain by adding yet another level of I/O balancing at the ASM level? Or, as in the arguments the Oracle documentation makes against using LVM, is this unnecessary redundant striping (double-striped, or in our case triple-striped/plaid)?
    QUESTION #3: So does SAN virtualization conflict with or complement the virtualization provided by ASM?

    After more research, discussions, and SRs, I've come to the following conclusion.
    Basically, in an intelligent storage environment (i.e. SVC), you're not getting 100% bang for the buck from ASM, which is the cat's meow in a commodity-hardware/unintelligent-storage environment.
    Using ASM in an SVC environment potentially wastes CPU cycles having ASM balance I/O that is already balanced on the backend (if you shuffle a deck of cards that is already shuffled you're not doing any harm, but then why are you shuffling them again?).
    That being said, there may still be some value in using ASM from the standpoint of storage management for multiple instances on a server. For example, one could minimize space wastage by sharing a "pool" of storage between multiple instances, rather than having to manage space on an instance-by-instance (or filesystem-by-filesystem) level.
    Also, in the case of an unfriendly OS where one is unable to grow a filesystem dynamically (i.e. a database outage is required), ASM would provide a definite benefit in being able to dynamically allocate disks to the "pool". Of course, with most higher-end systems, dynamic filesystem growth is pretty much a given.
    In the case of RAC, regardless of the backend, ASM with raw is a no-brainer.
    In the case of a standalone instance, it's a judgement call. My vote in the case of intelligent storage where one could dynamically grow filesystems, would be to keep ASM out of the picture.
    Your vote may be different....just make sure you're putting in a solution to a problem and not a solution that's looking for a problem(s).
    And there's the whole culture of IT thing as well (i.e. do your storage guys know what you're doing and vice versa).....which can destroy any technological solution, regardless of how great it is.
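    On the "admin grabs an in-use raw device" risk from QUESTION #1's context: one practical mitigation is to ask the ASM instance itself which device paths it has claimed and publish that list to the UNIX admins. A hedged sketch, assuming a local 10g ASM instance reachable as SYSDBA (v$asm_disk is the standard view; the SID is an assumption):

    ```shell
    # List device paths claimed by ASM so they can be cross-checked
    # before any "free-looking" raw device is reused for a filesystem or VG.
    export ORACLE_SID=+ASM
    sqlplus -s / as sysdba <<'EOF'
    set heading off feedback off
    select path, name, header_status from v$asm_disk;
    EOF
    ```

    Feeding this list into the shop's inventory (or simply naming the raw devices distinctively) addresses the visibility gap without changing the ASM design decision either way.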
