Zones and Zone Sets

I am trying to create a zone (hard zone - VPF) that spans 2 different blades. I would like the fibre attached to these 2 different blades to see each other and anything else plugged into the switch. I get an error message saying that I can't create this type of zone because it crosses 2 different blades. Is there a way around this? What am I doing wrong?
Thanks

Any zones not added to the zoneset are effectively unused, so depending on whether the zones that are not yet in the zoneset are required, yes, you will have to add them in.
However, the WWPNs you have listed do not appear to use the Cisco-recommended format for WWPN Pools (20:00:00:25:B5:xx:yy:zz), so perhaps you are not using pools, or these zones are not related to UCS. If they are indeed UCS-related zones then yes, add them to the zoneset, but consider using the pools instead.
My suggestion would be to use WWPN-based zones throughout and not mix them with interface-based zones.
On each MDS switch, create a zone that contains only the two WWPNs of the storage controllers, ideally using an easy-to-identify alias. This guarantees the SAN ports can always see each other. Then create an additional zone for each UCS blade that contains the blade WWPN from the WWPN Pool together with the storage WWPNs (or their alias). This allows the vHBA to see the SAN ports. A rough CLI sketch follows the reference link below.
Here is a reference document you may find helpful.
http://www.cisco.com/en/US/docs/storage/san_switches/mds9000/sw/san-os/quick/guide/qcg_zones.html
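
The sketch below illustrates that layout on an MDS switch. The VSAN number, alias, zone names and WWPNs are placeholders of my own, not values taken from your setup, so adjust them before use.
conf t
! alias for the two storage controller ports so the zones stay readable
fcalias name storage-ctrls vsan 10
  member pwwn 50:00:00:00:00:00:00:01
  member pwwn 50:00:00:00:00:00:00:02
! storage-only zone so the SAN ports can always see each other
zone name z_storage vsan 10
  member fcalias storage-ctrls
! one zone per UCS blade: the blade vHBA WWPN plus the storage alias
zone name z_blade1 vsan 10
  member pwwn 20:00:00:25:b5:01:00:01
  member fcalias storage-ctrls
! put the zones in the zoneset and activate it
zoneset name zs_prod vsan 10
  member z_storage
  member z_blade1
zoneset activate name zs_prod vsan 10
end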

Similar Messages

  • Customer Zone vs Zone

    Hello world,
    Looking for explanation on the difference between "Customer Zone" and "Zone" Ship-to Consumption level in ASCP plan options Organization tab, and where are they set up?
    Cheers!

    Hi,
    You need to use the HZ_CUST_ACCOUNT_ROLES table. This table will have the party_id of the relationship between the contact and the customer party.
    Regards,
    Ravi

  • How to display system time zone if user time zone is not set?

    Hi,
    My application implicitly chooses ALA as the time zone if the user time zone is not set.
    How do I check this?

    Hi,
    Try using these FM's
    TZON_GET_USER_TIMEZONE
    GET_SYSTEM_TIMEZONE
    Z_LOCATION_TIMEZONE
    You can set the timezone in sy-zonlo by using the 1st FM and setting sy-zonlo in your application.
    I think the system takes ALA as the default because it's the 1st timezone in the table (check FM BWDT_LIST_TIMEZONES).
    Regards,
    Amit

  • Whole root zone and packages

    I installed the "Entire Distribution" of Solaris 10 in my global zone, plus some software from the Companion CD.
    Now I would like to install a whole root zone with a basic set of packages (like the "End User Distribution") and without the packages from the Companion CD.
    I can use pkgrm to delete the unnecessary packages in my whole root zone, but is there a more elegant way to get a whole root zone with only the basic packages?
    Regards. P.

    The "easy" way would be to do a minimal installation in the Global zone and then install the required services in the local zone.
    HTH,
    Roger S.
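    For reference, here is a minimal sketch of creating a whole root zone on Solaris 10; the zone name and path are placeholders. Because a whole root zone copies its packages from the global zone, a minimal global installation is what keeps the zone small, which is the point of the suggestion above.
    # create a whole root zone ("create -b" starts from a blank config with no inherit-pkg-dir entries)
    zonecfg -z testzone "create -b; set zonepath=/zones/testzone; set autoboot=true"
    zoneadm -z testzone install
    zoneadm -z testzone boot
    # any remaining unwanted packages can still be trimmed inside the zone with pkgrm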

  • Solaris 10 Zones and networking..

    My machine has only one NIC (rtls0) and only one public IPv4 address, and at the moment I am unable to get more than one public IP. I've created a few zones on the machine and assigned each an internal IP. I can connect (say via SSH) to these zones just fine from the global zone using their internal IPs, but obviously the outside world cannot. So I decided to use the built-in firewall/NAT features in Solaris to forward certain ports to the internal zones (for example, forward port 2223 at the global level to SSH on one of the zones). I have been through ipf and IPv4 port forwarding, including enabling forwarding via routeadm and setting the /dev/tcp ip_forwarding value to 1. Then I add the following rule to ipnat:
    rdr rtls0 PUBLIC_IP/32 port 2223 -> 192.168.1.2 port 22 tcp
    It still has zero effect: nothing is forwarded, and nothing I've done works. I'm on my last leg with this issue. Am I doing something wrong with ipfilter, or is there a better way to go about this with Solaris zones? (Surely there must be an easier way to create self-contained zones that run services without having to assign each one its own public IP?) Any help is appreciated, thanks.

    First thing to check is if your zone can access the global zone (try pinging). If this isn't the case you probably need to setup a routing entry allowing the non-global zone some access.
    For example, say the global is 10.0.0.1 and the non-global 192.168.0.1 on eri0 you'd use something like:
    route add 10.0.0.1 192.168.0.1 -iface
    This tells your non-global zone that it can reach the global zone through the eri0 interface. Of course you can also expand this to networks and such.
    Another very important factor to keep in mind is Internet access from the non-global zone (as a test). Your ipnat.conf entry should be enough; my guess as to why the data is not being routed is a non-static ARP entry for your Internet gateway. This is only a guess, but if you have a default route in your routing table for Internet access (netstat -rn), make sure that the host the default route points to also has a static ARP entry (man arp). If this is indeed the case you may also need to set up a routing entry as mentioned above to allow your zone access to this remote gateway.
    After that things should work as usual. Hope this helps.
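    In case it is useful, here is a rough sketch of the pieces that normally have to be in place for an rdr rule like the one above to take effect. The interface name and addresses follow the original post (PUBLIC_IP stays a placeholder); this is a sketch of the usual setup, not a verified fix.
    # enable IPv4 forwarding and the ipfilter service in the global zone
    routeadm -u -e ipv4-forwarding
    svcadm enable network/ipfilter
    # /etc/ipf/ipnat.conf should contain the redirect, e.g.:
    #   rdr rtls0 PUBLIC_IP/32 port 2223 -> 192.168.1.2 port 22 tcp
    # the matching traffic must also be permitted in /etc/ipf/ipf.conf; then reload both rule sets
    ipf -Fa -f /etc/ipf/ipf.conf
    ipnat -CF -f /etc/ipf/ipnat.conf
    ipnat -l     # verify the active NAT rules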

  • IPMP configuration and zones - how to?

    Hello all,
    So, I've been thrown in at the deep end and have been given a brand new M4000 to get configured to host two zones. I have little zone experience and my last Solaris exposure was 7 !
    Anyway, enough of the woe, this M4000 has two quad port NICs, and so, I'm going to configure two ports per subnet using IPMP and on top of the IPMP link, I will configure two v4 addresses and give one to one zone and one to the other.
    My question is, how can this be best accomplished with regards to giving each zone a different address on the IPMP link.
    IP addresses available = 10.221.91.2 (for zone1) and 10.221.91.3 (for zone2)
    So far, in the global zone I have
    ipadm create-ip net2     <-----port 0 of NIC1
    ipadm create-ip net6     <-----port 0 of NIC2
    ipadm create-ipmp -i net2,net6 ipmp0
    ipadm create-addr -T static -a 10.221.91.2/24 ipmp0/zone1
    ipadm create-addr -T static -a 10.221.91.3/24 ipmp0/zone2
    the output of ipmpstat -i and ipmpstat -a is all good. I can ping the addresses from external hosts.
    So, how do I now assign each address to the correct zone? I assume I'm using shared-ip?
    in the zonecfg, do I simply (as per this documentation: http://docs.oracle.com/cd/E23824_01/html/821-1460/z.admin.task-54.html#z.admin.task-60):
    zonecfg:zone1> add net
    zonecfg:zone1:net> set address=10.221.91.2
    zonecfg:zone1:net> set physical=net2
    zonecfg:zone1:net> end
    and what if I have many many addresses to configure per interface... for example zone1 and zone2 will also require 6 addresses on another subnet (221.206.29.0)... so how would that look in the zonecfg?
    Is IPMP the correct way to be doing this? The client wants resilience above all, but these network connections are coming out of different switches thus LACP/Trunking is probably out of the question.
    Many thanks for your thoughts... please let me know if you want more information
    Solaris 11 is a different beast altogether.

    Thanks for the reply....
    It still didn't work... but you pointed me in the right direction. I had to remove the addresses I had configured on ipmp0 and instead put them in the zonecfg. Makes sense really. Below I have detailed my steps as per your recommendation...
    I had configured the zone as minimally as I could:
    zonepath=/zones/zone1
    ip-type=shared
    net:
    address: 10.221.91.2
    physical=ipmp0
    but after it is installed, I try and boot it and I get:
    zone 'zone1': ipmp0:2: could not bring network interface up: address in use by zone 'global: Cannot assign the requested address
    So, I changed the ip-type to exclusive and I got:
    WARNING: skipping network interface 'ipmp0' which is used in the global zone.
    zone 'zone1': failed to add network device
    which was a bit of a shame.
    So, finally, I removed the addresses from ipmp0
    ipadm delete-addr ipmp0/zone1
    ipadm delete-addr ipmp0/zone2
    and set the address in zonecfg together with the physical=ipmp0 as per your recommendation and it seems to be working.
    So, am I correct in taking away from this that if using IPMP in shared-ip zones, don't set the address in the global zone, but stick it in the zone config and everyone is happy?
    I think this was the only way to achieve multiple IP addresses on one interface but over two ports?
    Lastly, why oh why is the gateway address in netstat -rn coming up as the address of the host?
    Anyway, thanks for your help.
    ;)
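    For anyone following along, the working configuration described above would look roughly like this (addresses as in the original post). This is only a sketch; the key point is that with shared-ip zones the per-zone addresses live in the zone configuration on top of ipmp0, not on ipmp0 in the global zone.
    # global zone: build the IPMP group, but do not put the zone addresses on it
    ipadm create-ip net2
    ipadm create-ip net6
    ipadm create-ipmp -i net2,net6 ipmp0
    # each shared-ip zone then gets its address on top of ipmp0 via zonecfg
    zonecfg -z zone1
    zonecfg:zone1> add net
    zonecfg:zone1:net> set address=10.221.91.2/24
    zonecfg:zone1:net> set physical=ipmp0
    zonecfg:zone1:net> end
    zonecfg:zone1> commit
    # addresses on further subnets are just additional "add net" blocks of the same form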

  • EBS with ZFS and Zones

    I will post this one again in desperation, I have had a SUN support call open on this subject for some time now but with no results.
    If I can't get a straight answer soon, I will be forced to port the application over to Windows, a desperate measure.
    Has anyone managed to recover a server and a zone that uses ZFS filesystems for the data partitions.
    I attempted a restore of the server and then the client zone, but it appears to corrupt my ZFS filesystems.
    The steps I have taken are listed below:
    Built a server and created a zone, added a ZFS filesystem to this zone and installed the EBS 7.4 client software into the zone, making the host server the EBS server.
    Completed a backup.
    Destroyed the zone and host server.
    Installed the OS and re-created a zone with the same configuration.
    Added the ZFS filesystem and made this available within the zone.
    Installed EBS and carried out a complete restore.
    Logged into the zone and installed the EBS client software then carried out a complete restore.
    After a server reload this leaves the ZFS filesystem corrupt:
    status: One or more devices could not be used because the label is missing
    or invalid. There are insufficient replicas for the pool to continue
    functioning.
    action: Destroy and re-create the pool from a backup source.
    see: http://www.sun.com/msg/ZFS-8000-5E
    scrub: none requested
    config:
    NAME STATE READ WRITE CKSUM
    p_1 UNAVAIL 0 0 0 insufficient replicas
    mirror UNAVAIL 0 0 0 insufficient replicas
    c0t8d0 FAULTED 0 0 0 corrupted data
    c2t1d0 FAULTED 0 0 0 corrupted data

    I finally got a solution to the issue, thanks to a SUN tech guy rather than a member of the EBS support team.
    The whole issue revolves around the file /etc/zfs/zpool.cache, which needs to be backed up prior to carrying out a restore.
    Below is a full set of steps to recover a server using EBS7.4 that has zones installed and using ZFS:
    Instructions On How To Restore A Server With A Zone Installed
    Using the server's control guide, re-install the OS from CD, configuring the system disk to the original sizes; do not patch at this stage.
    Create the zpool's and the zfs file systems that existed for both the global and non-global zones.
    Carry out a restore using:
    If you don't have a bootstrap printout, read the backup tape to get the backup indexes.
    cd /usr/sbin/nsr
    Use scanner -B -im <device>
    to get the ssid number and record number
    scanner -B -im /dev/rmt/0hbn
    cd /usr/sbin/nsr
    Enter: ./mmrecov
    You will be prompted for the SSID number followed by the file and record number.
    All of this information is on the Bootstrap report.
    After the index has been recovered:
    Stop the backup daemons with: /etc/rc2.d/S95networker stop
    Copy the original res file to res.org and then copy res.R to res.
    Start the backup daemons with: /etc/rc2.d/S95networker start
    Now run: nsrck -L7 to reconstruct the indexes.
    You should now have your backup indexes intact and be able to perform standard restores.
    If the system is using ZFS:
    cp /etc/zfs/zpool.cache /etc/zfs/zpool.cache.org
    To restore the whole system:
    Shutdown any sub zones
    cd /
    Run /usr/sbin/nsr/nsrmm -m to mount the tape
    Enter: recover
    At the recover prompt enter: force
    Now enter: add * (to restore the complete server; this will list out all the files in the backup library selected for restore)
    Now enter: recover to start the whole-system recovery, and ensure the backup tape is loaded into the server.
    If the system is using ZFS:
    cp /etc/zfs/zpool.cache.org /etc/zfs/zpool.cache
    Reboot the server
    The non-global zone should now be bootable; use zoneadm -z <zonename> boot
    Start an X session on the non-global zone and carry out a selective restore of all the ZFS file systems.
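    One hedged aside that is not part of the original procedure: if a pool still shows as faulted after the reboot, it may be worth checking its state and re-importing it against the restored cache file before destroying anything.
    # check pool health after the restore
    zpool status
    # if a pool is missing, try importing everything listed in the restored cache file
    zpool import -c /etc/zfs/zpool.cache -a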

  • ZFS mount points and zones

    folks,
    A little history: we've been running Cluster 3.2.x with failover zones (using the containers data service), where the zone root is installed on a failover zpool (using HAStoragePlus). It has worked OK but could be better; the real problem is the lack of agents that work in this configuration (we're mostly an Oracle shop). We've been using the joost manifests inside the zones, which are OK and have worked, but we wouldn't mind giving the Oracle data services a go, and the patching process in the current setup is more than a little painful...
    We've started to look at failover applications among zones on the nodes, so we'd have something like node1:zone and node2:zone as potentials, with the apps failing over between them on node failure and switchover. This way we'd actually be able to use the agents for Oracle (DB, AS and EBS).
    With the current cluster we create various ZFS filesystems within the pool (such as oradata) and, through the zone boot resource, have them mounted where we want inside the zone (in this case $ORACLE_BASE/oradata), with the global zone having the mount point /export/zfs/<instance>/oradata.
    Is there a way of achieving something like this with failover apps inside static zones? I know we can set the filesystem mountpoint to whatever we want, but we rather like having the various Oracle zones all with a similar install (/app/oracle etc.).
    We haven't looked at zone clusters at this stage, if for no other reason than time...
    Or is there a better way?
    thanks muchly,
    nelson

    I must be missing something... any ideas what and where?
    nelson
    devsun012~> zpool import Zbob
    devsun012~> zfs list|grep bob
    Zbob 56.9G 15.5G 21K /export/zfs/bob
    Zbob/oracle 56.8G 15.5G 56.8G /export/zfs/bob/oracle
    Zbob/oratab 1.54M 15.5G 1.54M /export/zfs/bob/oratab
    devsun012~> zpool export Zbob
    devsun012~> zoneadm -z bob list -v
    ID NAME STATUS PATH BRAND IP
    1 bob running /opt/zones/bob native shared
    devsun013~> zoneadm -z bob list -v
    ID NAME STATUS PATH BRAND IP
    16 bob running /opt/zones/bob native shared
    devsun012~> clrt list|egrep 'oracle_|HA'
    SUNW.HAStoragePlus:6
    SUNW.oracle_server:6
    SUNW.oracle_listener:5
    devsun012~> clrg create -n devsun012:bob,devsun013:bob bob-rg
    devsun012~> clrslh create -g bob-rg -h bob bob-lh-rs
    devsun012~> clrs create -g bob-rg -t SUNW.HAStoragePlus \
    root@devsun012 > -p FileSystemMountPoints=/app/oracle:/export/zfs/bob/oracle \
    root@devsun012 > bob-has-rs
    clrs: devsun013:bob - Entry for file system mount point /export/zfs/bob/oracle is absent from global zone /etc/vfstab.
    clrs: (C189917) VALIDATE on resource bob-has-rs, resource group bob-rg, exited with non-zero exit status.
    clrs: (C720144) Validation of resource bob-has-rs in resource group bob-rg on node devsun013:bob failed.
    clrs: (C891200) Failed to create resource "bob-has-rs".
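    One thing that may be worth trying (an assumption on my part, not something taken from the output above): when the storage is a ZFS pool, HAStoragePlus is normally given the pool through its Zpools extension property rather than FileSystemMountPoints, which is what triggers the /etc/vfstab check in the error.
    # hand the whole pool to HAStoragePlus instead of listing mount points
    clrs create -g bob-rg -t SUNW.HAStoragePlus \
        -p Zpools=Zbob \
        bob-has-rs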

  • Coverage Zones and PAC files

    Just wondering if anybody has used coverage zones and PAC files to do auth browser configuration.
    I'm looking at a global deployment of Content Engines (ACNS 5.1) and would like to set each CE up as a PAC server to respond to a request and direct the user to the nearest CE. So if I'm in Milan I use the Milan CE and when I go to London I use the London CE.
    I'm trying to get my head round this in the lab at present and want to get away with not using a Content Router, only CEs. Can this be done?
    Any experience will be really helpful and I'll post any findings back to this thread.
    Thanks
    Mark

    Here is something that I found. ACNS 5.1 has the new auto proxy-config option which combines coverage zones with a proxy pac file. Essentially, you create a pac file that contains a special macro, and configure the CE (through the CDM GUI) to use that pac file in conjunction with the coverage zone information. Then, when the client requests that pac file from the CE, the CE replaces that macro in the pac file with one (or more) CE names based on the coverage zone that matches the requesting client's IP address. Link to the configuration.
    http://www.cisco.com/univercd/cc/td/doc/product/webscale/uce/acns51/deploy51/51router.htm#wp1039339

  • Snapshot zone and dataset

    Hi,
    I am trying (!!) to manage an x86-64 Solaris machine, and I want to create a snapshot in a secondary (non-global) zone.
    That zone exists, is ready and booted, and I can log in to it. So I use zfs snapshot koonytan/root/datatank to create the snapshot, and I get an error saying that there is no dataset for the snapshot. OK.
    I searched for "dataset" and found in the Oracle/Sun documentation that I have to use zonecfg to create the dataset:
    zonecfg -z koonytank
    zonecfg:zion> add dataset
    zonecfg:zion:dataset> set name=koonytank/root/dataset
    zonecfg:zion:dataset> end
    verify, commit, exit: no errors.
    Then I reboot the zone with zoneadm and I get an error saying the zone "koonytank" could not be verified, and the zone doesn't boot.
    I have to remove the dataset declaration to reboot the zone.
    So my question is: how do I declare a dataset? I have heard of a flag in the global zone that will be inherited by the zone, but I have found nothing on that.
    I thank you in advance
    Pierre Léonard
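    A minimal sketch of dataset delegation as I understand it; the pool and zone names follow the post above, but the dataset layout (a dataset outside the zone root, here koonytank/datatank) is my own assumption. The dataset has to exist in the global zone before the zone will verify, which would explain the verification error.
    # global zone: create the dataset that will be handed to the zone
    zfs create koonytank/datatank
    # delegate it to the zone
    zonecfg -z koonytank
    zonecfg:koonytank> add dataset
    zonecfg:koonytank:dataset> set name=koonytank/datatank
    zonecfg:koonytank:dataset> end
    zonecfg:koonytank> commit
    zonecfg:koonytank> exit
    zoneadm -z koonytank boot
    # inside the zone the delegated dataset is visible and can be snapshotted
    zfs snapshot koonytank/datatank@before-change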

    Why do we require the reverse relation of rsign type 'A', what is the use of that?
    So that all HR objects are linked both ways... depending on your reporting needs you could start looking at the org structure from the bottom (e.g. from the employee's point of view) or from the top (org unit's point of view)...
    E.g., if you want to find the position of a given employee, it is better to use the relation P->S (bottom up) than O->S->P (top down), since the position is not known...
    What is the SCLAS field in HRP1001 for? Should it be 'O' for all? Shall I use all these 6 fields in the select query, if I choose to solve my problem with select queries?
    Well, why not just look at its description ("Type of related object")...
    Object O can be linked to another object O (then OTYPE = "O" and SCLAS = "O") -> this is the case for your zone/region...
    But object O can be linked to an object S (position) (so OTYPE = "O" and SCLAS = "S")...
    To summarize:
    - SCLAS has the same meaning as OTYPE, but for the linked object
    - SOBID has the same meaning as OBJID, but for the linked object
    Let's say you want to check if relation exists between zone and region:
    SELECT count(*) FROM HRP1001
    WHERE OTYPE = 'O'
         AND OBJID = ID(Region)
         AND PLVAR = '01'
         AND SUBTY = 'B0002'   "(= concatenation of RSIGN and RELAT)
         AND ISTAT = '1'   "Active
         AND SCLAS = 'O'
         AND SOBID = ID(region).
    cheers, I need to go now

  • Synchronization messes up time zones and scrambles calendar

    Hi, my problem is I live in Venezuela, and 6 months ago, Venezuela modified its time zone to GMT-04:30. Microsoft already has honored this change, and created a new time zone with that offset.
    Apparently, RIM hasn't. So I use the standard GMT-04:00 in my Pearl and set it not to synchronize time with any server.
    But when I try to synchronize with Outlook, it appears to work fine, yet the Pearl changes to some strange GMT-10 African zone, and each entry in the calendar is physically moved to the new time, obviously messing up my calendar.
    How do I approach RIM to get them to create the revised time zone for Venezuela?
    Any other ideas?

    Do what I did - turn off time synch on the desktop, and just configure the phone to synch with the network.  It's likely that your phone network has the proper time and date.

  • Default gateways and zones in a multihomed system

    We do have some problems concerning default routes and zones in a multihomed system.
    I found several posts in this forum, most of them referring to a document by meljr, but my feeling is that the paper is either not correct or not applicable to our situation. Perhaps somebody can give me a hint.
    Let me sketch our test environment. We have a multihomed Solaris 10 system attached to three different DMZ's using three different network adapters. We set up two local zones with IP's of the DMZ's of adapter 1 and 2, leaving adapter 0 for the IP of the global zone.
    Now we set up default routes to ensure that network traffic from the local zones is routed in the corresponding DMZ's. That makes three different default routes on the global zone. On startup of the local zones, netstat reports the expected default routes to the correct DMZ gateways inside each zone.
    Now what happens... My ssh connection to the global zone sometimes breaks. When this happens, no pings to the IP of the global zone are possible. Meanwhile, pings from other machines in our network (even from different subnets) might produce replies; some don't. So far I can't tell you if there's anything deterministic about it... More interesting: the local zone connections aren't affected at all!
    So we did some more testing. Binding an IP address to the DMZ interfaces where the zones are tied to makes no difference (we tried both, with or without dedicated addresses for the adapter in the global zone). So the setup we're using right now is made of 5 IP addresses.
    IP1, subnet 1: adapter 0, global zone
    IP2, subnet 2: adapter 1, global zone
    IP3, subnet 2: adapter 1, local zone 1
    IP4, subnet 3: adapter 2, global zone
    IP5, subnet 3: adapter 2, local zone 2
    In the global zone there are three default gateways defined, one in each DMZ subnet. Inside the local zones, at startup you'll find the corresponding gateway into the DMZ. Everything looks fine...
    I opened five ssh connections to the different IP's. Now what happened... After approx. half an hour, the connections to two IPs of the global zone (adapter 0 and adapter 1) broke down, while the connections to all other IP's were still open. This behaviour can be reconstructed!
    So perhaps somebody has an explanation for this behaviour, or can answer some questions:
    1. How are the three default gateways handled? Is there still some kind of round-robin implementation? How can I guarantee that network traffic from outside isn't routed into the DMZ's without preventing the local zones from talking to each other (actually we only need to communicate on some ports, but the single IP-stack concept only gives us all or nothing...)?
    2. If I do a ping from local zone 1 to the default gateway of local zone 2, this route is added as an additional default gateway inside local zone 1! So does this mean the routing decision is made only inside the global zone, without taking into account which zone the packet is sent from?
    3. After all, how are the IP packets routed from the different zones and the global zone, and how are they routed back to calling systems from the various DMZ's and other networks routed via these DMZ's?
    The scenario seems to be covered by http://meljr.com/~meljr/Solaris10LocalZoneDefaultRoute.html, but configuring the machine like stated in the paper leaves me with the problems described.
    I'd be happy for any helpful comment!

    You can have multiple gateway entries in the defaultrouter file, but the global zone can only use one default gateway; however, you can specify different gateways for different zones.
    Using the per-zone default gateway, you should be able to connect via the different networks.
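    If the zones are shared-ip, one option is to set a per-zone default router directly on the zone's net resource (hedged: this depends on the Solaris 10 update level, as the defrouter property appeared around Solaris 10 10/08), so each zone uses the gateway of its own DMZ. IP3 and the gateway below are placeholders matching the labels used in the question.
    # per-zone default gateway for a shared-ip zone
    zonecfg -z zone1
    zonecfg:zone1> select net address=IP3
    zonecfg:zone1:net> set defrouter=GATEWAY_IN_SUBNET_2
    zonecfg:zone1:net> end
    zonecfg:zone1> commit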

  • Zones and proftpd

    Hello,
    I would like some information about installing proftpd on Solaris. I have installed proftpd and it is running fine; the problem is that I can't connect to it. I get the proftpd banner when connecting with FileZilla, but my user login fails. I don't want anonymous connections. proftpd is installed in a special zone with a virtual network interface, and I have created a user in that zone to connect to the FTP server.
    useradd -d /export/toto -m -c "Mister toto" toto
    passwd -r files toto
    I want to know whether the user must be created in the special zone rather than in the global zone, and whether the useradd command is correct. My user is not in the file /etc/ftpd/ftpusers. What could be the problem?
    Thanks
    PS : sorry for my english

    I'm having a problem with proftpd on Solaris 10 with SMF.
    I am using ProFTPD Version 1.3.0 from blastwave.
    I have modified the inetadm setting so that exec="/opt/csw/sbin/in.proftpd"
    If I reboot my zone the first ftp connection will fail with a "421 Service not available, remote server has closed connection." message.
    An immediate retry to connect to the ftp server works, and all subsequent connections work. By looking at ps output it seems that the first (failed) connection attempt starts an in.proftpd daemon which stays running and then parents in.proftpd daemons for subsequent connections. If I create a start script to start the first in.proftpd daemon during boot (/etc/rc3.d/S90proftpd) then things work alright. The question is, why won't inetd just start in.proftpd like it should to respond to the first request?
    In my proftpd.conf file I have "ServerType standalone" If I set "ServerType inetd" then it just doesn't work at all.
    bash-3.00# inetadm -l ftp
    SCOPE NAME=VALUE
    name="ftp"
    endpoint_type="stream"
    proto="tcp6" <----- If I change this to "tcp" then it doesn't work at all
    isrpc=FALSE
    wait=FALSE
    exec="/opt/csw/sbin/in.proftpd"
    user="root"
    default bind_addr=""
    default bind_fail_max=-1
    default bind_fail_interval=-1
    default max_con_rate=-1
    default max_copies=-1
    default con_rate_offline=-1
    default failrate_cnt=40
    default failrate_interval=60
    default inherit_env=TRUE
    default tcp_trace=FALSE
    default tcp_wrappers=FALSE
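    For reference, the inetd-side change described above can be made with inetadm; this is just a sketch of that step (path as in the post), not a fix for the first-connection failure.
    # point the inetd-managed ftp service at the blastwave proftpd binary
    inetadm -m ftp exec="/opt/csw/sbin/in.proftpd"
    # inetadm -m applies the change immediately; verify with:
    inetadm -l ftp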

  • Map Zones and Regions to the PSAs

    Dear Experts
    Request your advice on the below query.
    My requirement is to map zone and region against the PSAs.
    In our current setting PA is name of the company.
    PSA is the geographic location. Now I want to maintain further indicators, Zone and Region, for each PSA.
    Is there a standard solution available?
    Regards
    Sanjay.

    Hi,
    You have already taken the company as the Personnel Area and the geographic location as the Personnel Subarea.
    Further division of a personnel subarea is not possible, but you can do this at the Organizational Management level.
    In order to link the org units to the PA you need to add new fields to the Organizational Assignment infotype.
    You can control authorizations through structural authorizations.
    Cheers
    Ramesh

  • HT1657 I downloaded a movie from iTunes on my iPhone 4S, went out of the Wi-Fi zone and now my movie is nowhere to be found. I can't find it on my iPhone, Mac or iPad. Where is it?


    It sounds like the movie didn't finish downloading before the Wi-Fi signal dropped...
    Try re-downloading it; you won't be charged again. See:
    Downloading past purchases from the App Store, iBookstore, and iTunes Store
