Striping with ZFS and SAN (AMS500 HDS)

Is there a way to do striping with ZFS?
I have an HDS SAN (AMS500), and I want to do striping on LUNs from the storage. I don't want any RAID-5 (I already have that on the disks in the HDS storage).
I only want to stripe across 2 LUNs (25GB each) that come from different controllers.
Is striping those 2 LUNs with ZFS the best way to do it?
thanks.

In theory, yes, and I would have expected that to work: a ZFS pool built from two plain (non-mirrored, non-raidz) LUNs is dynamically striped across them.
Of course, ZFS is supposed to be a "smart" volume manager.
You're not supposed to have to control factors like striping and concatenation.
You just tell it what storage is available and it works out the best way to lay it out.
So there's always a chance it's trying to do something "smart" which doesn't turn out to be what you want.
You might have to take this up on the OpenSolaris ZFS forums.
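As a minimal sketch, assuming the two 25GB LUNs show up as c2t0d0 and c3t0d0 (hypothetical device names; use the ones format reports on your system), a plain pool with both LUNs as top-level devices gives you the striping:
# create a pool that dynamically stripes writes across the two LUNs
# (no "mirror" or "raidz" keyword means plain dynamic striping)
zpool create datapool c2t0d0 c3t0d0
# confirm both LUNs show up as top-level vdevs
zpool status datapool
zpool list datapool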

Similar Messages

  • Where can I find the latest research on Solaris 10, zfs and SANs?

    I know Arul and Christian Bilien have done a lot of writing about storage technologies as they relate to Oracle. Where are the latest findings? Obviously there are some exotic configurations that can be implemented to optimize performance, but is there a set of "best practices" that generally works for "most people"? Is there common advice for folks using Solaris 10 and ZFS on SAN hardware (e.g., EMC)? Does double-striping have to be configured with meticulous care, or does it work "pretty well" just by taking some rough guesses?
    Thanks much!

    Hello,
    I have a couple of links that I have used:
    http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
    http://www.solarisinternals.com/wiki/index.php/ZFS_for_Databases
    These are not exactly new, so you may have encountered them already.
    List of ZFS blogs follows:
    http://www.opensolaris.org/os/community/zfs/blogs/
    Again, there does not seem to be huge activity on the blogs featured there.
    jason.
    http://jarneil.wordpress.com
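    A rough sketch of the kind of tuning those guides walk through for databases on ZFS (pool and dataset names below are made up):
    # match the dataset recordsize to the database block size before creating datafiles
    zfs set recordsize=8k dbpool/oradata
    # keep redo logs in their own dataset at the default recordsize
    zfs create dbpool/oralog
    # review the SAN-backed pool layout
    zpool status dbpool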

  • EBS 7.4 with ZFS and Zones

    The EBS 7.4 product claims to support ZFS and zones, yet it fails to explain how to recover systems running this type of configuration.
    Has anyone out there been able to recover a server using the EBS software that is running with ZFS file systems both in the global zone and sub-zones (NB: the server's system file stores /, /usr and /var are UFS for all zones)?
    Edited by: neilnewman on Apr 3, 2008 6:42 AM

  • Lucreate not working with ZFS and non-global zones

    I replied to this thread: Re: lucreate and non-global zones so as not to duplicate content, but for some reason it was locked. So I'll post here... I'm experiencing the exact same issue on my system. Below is the lucreate and zfs list output.
    # lucreate -n patch20130408
    Creating Live Upgrade boot environment...
    Analyzing system configuration.
    No name for current boot environment.
    INFORMATION: The current boot environment is not named - assigning name <s10s_u10wos_17b>.
    Current boot environment is named <s10s_u10wos_17b>.
    Creating initial configuration for primary boot environment <s10s_u10wos_17b>.
    INFORMATION: No BEs are configured on this system.
    The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <s10s_u10wos_17b> PBE Boot Device </dev/dsk/c1t0d0s0>.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <patch20130408>.
    Source boot environment is <s10s_u10wos_17b>.
    Creating file systems on boot environment <patch20130408>.
    Populating file systems on boot environment <patch20130408>.
    Temporarily mounting zones in PBE <s10s_u10wos_17b>.
    Analyzing zones.
    WARNING: Directory </zones/APP> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/APP-patch20130408>.
    WARNING: Device <tank/zones/APP> is shared between BEs, remapping to <tank/zones/APP-patch20130408>.
    WARNING: Directory </zones/DB> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/DB-patch20130408>.
    WARNING: Device <tank/zones/DB> is shared between BEs, remapping to <tank/zones/DB-patch20130408>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b@patch20130408> on <rpool/ROOT/patch20130408>.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b/var> on <rpool/ROOT/s10s_u10wos_17b/var@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b/var@patch20130408> on <rpool/ROOT/patch20130408/var>.
    Creating snapshot for <tank/zones/DB> on <tank/zones/DB@patch20130408>.
    Creating clone for <tank/zones/DB@patch20130408> on <tank/zones/DB-patch20130408>.
    Creating snapshot for <tank/zones/APP> on <tank/zones/APP@patch20130408>.
    Creating clone for <tank/zones/APP@patch20130408> on <tank/zones/APP-patch20130408>.
    Mounting ABE <patch20130408>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <patch20130408>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <s10s_u10wos_17b>.
    Making boot environment <patch20130408> bootable.
    Population of boot environment <patch20130408> successful.
    Creation of boot environment <patch20130408> successful.
    # zfs list
    NAME                                           USED  AVAIL  REFER  MOUNTPOINT
    rpool                                         16.6G   257G   106K  /rpool
    rpool/ROOT                                    4.47G   257G    31K  legacy
    rpool/ROOT/s10s_u10wos_17b                    4.34G   257G  4.23G  /
    rpool/ROOT/s10s_u10wos_17b@patch20130408      3.12M      -  4.23G  -
    rpool/ROOT/s10s_u10wos_17b/var                 113M   257G   112M  /var
    rpool/ROOT/s10s_u10wos_17b/var@patch20130408   864K      -   110M  -
    rpool/ROOT/patch20130408                       134M   257G  4.22G  /.alt.patch20130408
    rpool/ROOT/patch20130408/var                  26.0M   257G   118M  /.alt.patch20130408/var
    rpool/dump                                    1.55G   257G  1.50G  -
    rpool/export                                    63K   257G    32K  /export
    rpool/export/home                               31K   257G    31K  /export/home
    rpool/h                                       2.27G   257G  2.27G  /h
    rpool/security1                               28.4M   257G  28.4M  /security1
    rpool/swap                                    8.25G   257G  8.00G  -
    tank                                          12.9G   261G    31K  /tank
    tank/swap                                     8.25G   261G  8.00G  -
    tank/zones                                    4.69G   261G    36K  /zones
    tank/zones/DB                                 1.30G   261G  1.30G  /zones/DB
    tank/zones/DB@patch20130408                   1.75M      -  1.30G  -
    tank/zones/DB-patch20130408                   22.3M   261G  1.30G  /.alt.patch20130408/zones/DB-patch20130408
    tank/zones/APP                                3.34G   261G  3.34G  /zones/APP
    tank/zones/APP@patch20130408                  2.39M      -  3.34G  -
    tank/zones/APP-patch20130408                  27.3M   261G  3.33G  /.alt.patch20130408/zones/APP-patch20130408

    "I replied to this thread: Re: lucreate and non-global zones so as not to duplicate content, but for some reason it was locked. So I'll post here..."
    The thread was locked because you were not replying to it.
    You were hijacking that other person's discussion from 2012 to ask your own new question.
    You have now properly asked your question, and people can pay attention to you and not confuse you with that other person.

  • EBS with ZFS and Zones

    I will post this one again in desperation; I have had a SUN support call open on this subject for some time now, but with no results.
    If I can't get a straight answer soon, I will be forced to port the application over to Windows, a desperate measure.
    Has anyone managed to recover a server and a zone that uses ZFS filesystems for the data partitions?
    I attempted a restore of the server and then the client zone, but it appears to corrupt my ZFS file systems.
    The steps I have taken are listed below:
    Built a server and created a zone, added a ZFS filesystem to this zone and installed the EBS 7.4 client software into the zone, making the host server the EBS server.
    Completed a backup.
    Destroyed the zone and host server.
    Installed the OS and re-created a zone with the same configuration.
    Added the ZFS filesystem and made this available within the zone.
    Installed EBS and carried out a complete restore.
    Logged into the zone and installed the EBS client software then carried out a complete restore.
    After a server reload this leaves the ZFS filesystem corrupt.
    status: One or more devices could not be used because the label is missing
    or invalid. There are insufficient replicas for the pool to continue
    functioning.
    action: Destroy and re-create the pool from a backup source.
    see: http://www.sun.com/msg/ZFS-8000-5E
    scrub: none requested
    config:
    NAME        STATE     READ WRITE CKSUM
    p_1         UNAVAIL      0     0     0  insufficient replicas
      mirror    UNAVAIL      0     0     0  insufficient replicas
        c0t8d0  FAULTED      0     0     0  corrupted data
        c2t1d0  FAULTED      0     0     0  corrupted data

    I finally got a solution to the issue, thanks to a SUN tech guy rather than a member of the EBS support team.
    The whole issue revolves around the file /etc/zfs/zpool.cache, which needs to be backed up prior to carrying out a restore.
    Below is a full set of steps to recover a server using EBS7.4 that has zones installed and using ZFS:
    Instructions On How To Restore A Server With A Zone Installed
    Using the server's control guide, re-install the OS from CD, configuring the system disk to the original sizes; do not patch at this stage.
    Create the zpool's and the zfs file systems that existed for both the global and non-global zones.
    Carry out a restore using:
    If you don't have a bootstrap printout, read the backup tape to get the backup indexes.
    cd /usr/sbin/nsr
    Use scanner -B -im <device>
    to get the ssid number and record number
    scanner -B -im /dev/rmt/0hbn
    cd /usr/sbin/nsr
    Enter: ./mmrecov
    You will be prompted for the SSID number followed by the file and record number.
    All of this information is on the Bootstrap report.
    After the index has been recovered:
    Stop the backup daemons with: /etc/rc2.d/S95networker stop
    Copy the original res file to res.org and then copy res.R to res.
    Start the backup daemons with: /etc/rc2.d/S95networker start
    Now run nsrck -L7 to reconstruct the indexes.
    You should now have your backup indexes intact and be able to perform standard restores.
    If the system is using ZFS:
    cp /etc/zfs/zpool.cache /etc/zfs/zpool.cache.org
    To restore the whole system:
    Shutdown any sub zones
    cd /
    Run /usr/sbin/nsr/nsrmm -m to mount the tape
    Enter: recover
    At the Recover prompt enter: force
    Now enter: add * (to restore the complete server; this will now list out all the files in the backup library selected for restore)
    Now enter: recover to start the whole system recovery, and ensure the backup tape is loaded into the server.
    If the system is using ZFS:
    cp /etc/zfs/zpool.cache.org /etc/zfs/zpool.cache
    Reboot the server
    The non-global zone should now be bootable; use zoneadm -z <zonename> boot
    Start an X session onto the non-global zone and carry out a selective restore of all the ZFS file systems.
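    (If the zpool.cache copy was missed before the restore, the pools can usually still be brought back by re-importing them from their devices; a sketch, using the pool name from the error output above:)
    zpool import          # scan attached devices and list importable pools
    zpool import -f p_1   # force-import the pool; this rebuilds /etc/zfs/zpool.cache
    zpool status p_1      # verify the pool and its filesystems are healthy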

  • More Major Issues with ZFS + Dedup

    I'm having more problems - this time, very, very serious ones, with ZFS and deduplication. Deduplication is basically making my iSCSI targets completely inaccessible to the clients trying to access them over COMSTAR. I have two commands right now that are completely hung:
    1) zfs destroy pool/volume
    2) zfs set dedup=off pool
    The first command I started four hours ago, and it has barely removed 10G of the 50G that were allocated to that volume. It also seems to periodically cause the ZFS system to stop responding to any other I/O requests, which in turn causes major issues on my iSCSI clients. I cannot kill or pause the destroy command, and I've tried renicing it, to no avail. If anyone has any hints or suggestions on what I can do to overcome this issue, I'd very much appreciate that. I'm open to suggestions that will kill the destroy command, or will at least reprioritize it such that other I/O requests have precedence over this destroy command.
    Thanks,
    Nick

    To add some more detail, I've been reviewing iostat and zpool iostat for a couple of hours, and am seeing some very, very strange behavior. There seem to be three distinct patterns going on here.
    The first is extremely heavy writes. Using zpool iostat, I see write bandwidth in the 15MB/s range sustained for a few minutes. I'm guessing this is when ZFS is allowing normal access to volumes and when it is actually removing some of the data for the volume I tried to destroy. This only lasts for two to three minutes at a time before progressing to the next pattern.
    The second pattern is characterized by heavy, heavy read access - several thousand read operations per second, and several MB/s bandwidth. This also lasts for five or ten minutes before proceeding to the third pattern. During this time there is very little, if any, write activity.
    The third and final pattern is characterized by absolutely no write activity (0s in both the write ops/sec and the write bandwidth columns) and very, very small read activity. By small read activity, I mean 100-200 read ops per second, and 100-200K read bandwidth per second. This lasts for 30 to 40 minutes, and then the pattern proceeds back to the first one.
    I have no idea what to make of this, and I'm out of my league in terms of ZFS tools to figure out what's going on. This is extremely frustrating because all of my iSCSI clients are essentially dead right now - this destroy command has completely taken over my ZFS storage, and it seems like all I can do is sit and wait for it to finish, which, as this rate, will be another 12 hours.
    Also, during this time, if I look at the plain iostat command, I see that the read ops for the physical disk and the actv are within normal ranges, as are asvc_t and %w. %b, however is pegged at 99-100%.
    Edited by: Nick on Jan 4, 2011 10:57 AM
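    (For anyone hitting something similar: a common cause of a crawling destroy on a deduped volume is a dedup table that no longer fits in RAM, so every freed block needs random reads to update it. A quick check, assuming the pool is literally named pool:)
    zdb -DD pool             # print dedup table (DDT) statistics; compare the table size to installed RAM
    zpool iostat -v pool 5   # watch how the background destroy is progressing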

  • Start-up / shut-down (and other) problems with ZFS over iSCSI

    Hi,
    I've had a limited time in which to try some concepts on an evaluation x4500 server. This is unfortunately my last day, so by the time anyone can reply I will probably not be able to do further tests, but maybe someone will be able to reproduce the issues.
    I'm using the evaluation server to export some iSCSI targets, and I'm connecting to them from my laptop, which has a fresh installation of SXDE 01/08. I was able to attach to the targets with
    iscsiadm add static-config \
    iqn.1986-03.com.sun:02:f4281081-c3fc-e448-c8f0-943d8861c9e8,192.168.15.62
    iscsiadm add static-config \
    iqn.1986-03.com.sun:02:371c6fe7-f0c8-e02c-c865-baef90fb71ce,192.168.15.62
    iscsiadm add static-config \
    iqn.1986-03.com.sun:02:ee74d5de-3046-6295-f35e-afe60e13db23,192.168.15.62
    iscsiadm modify discovery -s enable
    This made the appropriate entries appear in /dev/dsk so I could then set up a simple zpool and zfs filesystem with:
    zpool create -m none dPool c3t010000144FA70C1400002A0047C2BF73d0 \
    c3t010000144FA70C1400002A0047C2BF81d0 c3t010000144FA70C1400002A0047C2BF8Bd0
    zfs create -o mountpoint=/data -o sharenfs=on dPool/data
    The general concept I'm testing is having a ZFS-based server using an IP SAN as a growable source of storage, and making the data available to clients over NFS/CIFS or other services. In principle this solution should also allow failover to another server, since all the ZFS data and metadata is in the IP SAN, not on the server. Although not done in this example, it should also be possible to run raidz across multiple iSCSI disk arrays. However, it's not been a bed of roses. I've had a lot of errors in dmesg like the following, which I think are causing zfs/zpool commands to stall at times:
    Feb 28 14:30:39 F4060 iscsi: [ID 866572 kern.warning] WARNING: iscsi connection(ffffff014f6b6b78) protocol error - received an unsupported opcode:0x41
    Feb 28 14:30:41 F4060 iscsi: [ID 158826 kern.warning] WARNING: iscsi connection(10) login failed - failed to receive login response
    Feb 28 14:30:41 F4060 scsi_vhci: [ID 734749 kern.warning] WARNING: vhci_scsi_reset 0x1
    Feb 28 14:30:41 F4060 iscsi: [ID 339442 kern.notice] NOTICE: iscsi connection failed to set socket option TCP_NODELAY, SO_RCVBUF or SO_SNDBUF
    Feb 28 14:30:41 F4060 iscsi: [ID 933263 kern.notice] NOTICE: iscsi connection(13) unable to connect to target iqn.1986-03.com.sun:02:ee74d5de-3046-6295-f35e-afe60e13db23
    Feb 28 14:30:41 F4060 iscsi: [ID 339442 kern.notice] NOTICE: iscsi connection failed to set socket option TCP_NODELAY, SO_RCVBUF or SO_SNDBUF
    Feb 28 14:30:41 F4060 iscsi: [ID 933263 kern.notice] NOTICE: iscsi connection(7) unable to connect to target iqn.1986-03.com.sun:02:f4281081-c3fc-e448-c8f0-943d8861c9e8
    Does anyone know why a Solaris iSCSI target would send an unsupported opcode (0x41) to a Solaris iSCSI initiator? Surely they should be talking the same language!
    The main problems however are with shutdown and start-up. On occasions, I suspect that the ordering of ZFS, iSCSI and network services gets a bit out of sync. On one occasion the laptop even refused to complete the shutdown because it was reporting a continuous stream of console messages like
    Feb 27 18:26:37 F4060 iscsi: [ID 933263 kern.notice] NOTICE: iscsi connection(13) unable to connect to target iqn.1986-03.com.sun:02:ee74d5de-3046-6295-f35e-afe60e13db23
    Feb 27 18:26:37 F4060 iscsi: [ID 933263 kern.notice] NOTICE: iscsi connection(10) unable to connect to target iqn.1986-03.com.sun:02:371c6fe7-f0c8-e02c-c865-baef90fb71ce
    Feb 27 18:26:37 F4060 iscsi: [ID 933263 kern.notice] NOTICE: iscsi connection(7) unable to connect to target iqn.1986-03.com.sun:02:f4281081-c3fc-e448-c8f0-943d8861c9e8
    I also get these on start-up, where it looks like ZFS tries to load the zpool configuration before iSCSI has found the disks, and even worse, iSCSI is starting up before nwamd has time to do its network auto-magic, and complains that the devices are unavailable.
    If these problems sorted themselves out after everything came up, I wouldn't really mind some temporary complaints in the log file, but what I get after a reboot is a working zpool but an unmounted ZFS filesystem! Here is what I have today:
    bash-3.2# zpool status
      pool: dPool
    state: ONLINE
    scrub: scrub completed with 0 errors on Thu Feb 28 14:33:43 2008
    config:
            NAME                                     STATE     READ WRITE CKSUM
            dPool                                    ONLINE       0     0     0
              c3t010000144FA70C1400002A0047C2BF73d0  ONLINE       0     0     0
              c3t010000144FA70C1400002A0047C2BF81d0  ONLINE       0     0     0
              c3t010000144FA70C1400002A0047C2BF8Bd0  ONLINE       0     0     0
    errors: No known data errors
    bash-3.2# zfs list
    NAME         USED  AVAIL  REFER  MOUNTPOINT
    dPool        480M  28.9G     1K  none
    dPool/data   480M  28.9G   480M  /data
    This all looks fine, and you can see that I was even able to scrub the pool data with no problems. But where are the 480MB of data I have put in the /data mountpoint:
    bash-3.2# ls /data
    bash-3.2# df -h /data
    Filesystem             size   used  avail capacity  Mounted on
    /dev/dsk/c1d0s0         15G   4.4G    11G    30%    /
    As you can see, /data is unmounted, causing df to revert to the / filesystem containing the empty /data mountpoint, instead of showing the zpool mount.
    Since zfs is supposed to take care of its own mounts rather than using vfstab, I can't use "mount /data" to force this to mount. The only workaround I've found is to export and import the zpool. Then I get the filesystem to reappear:
    bash-3.2# df -h /data
    Filesystem             size   used  avail capacity  Mounted on
    dPool/data              29G   480M    29G     2%    /data
    Does anyone know if these are known issues with snv_79b, and is there a fix available or in the works?
    TIA,
    Graham
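    (A workaround sketch for the unmounted-filesystem symptom, using the dataset names from the post above; it does not address the underlying service ordering, but it avoids the full export/import cycle once the network and iSCSI targets are reachable:)
    zfs get mounted dPool/data   # check whether the dataset is simply unmounted
    zfs mount -a                 # mount every dataset the pool knows about
    # if that fails, fall back to the export/import workaround described above:
    zpool export dPool
    zpool import dPool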

  • SAN Replication with ZFS

    We have Solaris 10 Oracle database servers, with ZFS filesystems on an HP XP SAN array. We are going to replicate the data to a backup data center using the XP array itself. We will build identical servers at the backup data center. My question is, how will the servers at the backup data center recognize the data once it is replicated by the SAN? Will I simply need to import the zpool, and is it that simple? I can't find any documentation on the topic. Thanks.

    I've used Business Copy (BC) on an HP XP1024 array and imported a copy of a ZFS storage pool on a different host using something like (Veritas Volume Manager example):
    zpool import -d /dev/vx/dsk/hpxpdg1 -f poolHPXP0
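    A rough sketch of the flow with plain ZFS device paths (the pool name oradata is made up); on the replicated copy, the pool simply has to be imported, with -f if it was never exported on the source:
    # on the primary, if the replica can be split during a maintenance window, a clean export helps:
    zpool export oradata
    # on the backup data center server, once the replicated LUNs are presented:
    zpool import             # lists pools discovered on the visible devices
    zpool import -f oradata  # -f is required if the pool is still marked active by the source host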

  • Ask the Expert: Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI

    Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI with Vishal Mehta and Manuel Velasco.
    The current industry trend is to use SAN (FC/FCoE/iSCSI) for booting operating systems instead of using local storage.
    Boot from SAN offers many benefits, including:
    Servers without local storage can run cooler and use the extra space for other components.
    Redeployment of servers caused by hardware failures becomes easier with boot from SAN servers.
    SAN storage allows the administrator to use storage more efficiently.
    Boot from SAN offers reliability because the user can access the boot disk through multiple paths, which protects the disk from being a single point of failure.
    Cisco UCS takes away much of the complexity with its service profiles and associated boot policies to make boot from SAN deployment an easy task.
    Vishal Mehta is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco Nexus 5000, Cisco UCS, Cisco Nexus 1000v, and virtualization. He has presented at Cisco Live in Orlando 2013 and will present at Cisco Live Milan 2014 (BRKCOM-3003, BRKDCT-3444, and LABDCT-2333). He holds a master’s degree from Rutgers University in electrical and computer engineering and has CCIE certification (number 37139) in routing and switching and service provider.
    Manuel Velasco is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco UCS, Cisco Nexus 1000v, and virtualization. Manuel holds a master’s degree in electrical engineering from California Polytechnic State University (Cal Poly) and VMware VCP and CCNA certifications.
    Remember to use the rating system to let Vishal and Manuel know if you have received an adequate response. 
    Because of the volume expected during this event, our experts might not be able to answer every question. Remember that you can continue the conversation in the Data Center community, under subcommunity Unified Computing, shortly after the event. This event lasts through April 25, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hello Evan
    Thank you for asking this question. Most common TAC cases that we have seen on Boot-from-SAN failures are due to misconfiguration.
    So our methodology is to verify configuration and troubleshoot from server to storage switches to storage array.
    Before diving into troubleshooting, make sure there is a clear understanding of the topology. This is vital in any troubleshooting scenario. Know what devices you have and how they are connected, how many paths are connected, switch/NPV mode and so on.
    Always try to troubleshoot one path at a time and verify that the setup is in compliance with the SW/HW interop matrix tested by Cisco.
    Step 1: Check at server
    a. make sure to have uniform firmware version across all components of UCS
    b. Verify if VSAN is created and FC uplinks are configured correctly. VSANs/FCoE-vlan should be unique per fabric
    c. Verify at service profile level for configuration of vHBAs - vHBA per Fabric should have unique VSAN number
    Note down the WWPN of your vhba. This will be needed in step 2 for zoning on the SAN switch and step 3 for LUN masking on the storage array.
    d. verify if Boot Policy of the service profile is configured to Boot From SAN - the Boot Order and its parameters such as Lun ID and WWN are extremely important
    e. finally at UCS CLI - verify the flogi of vHBAs (for NPV mode, command is (from nxos) – show npv flogi-table)
    Step 2: Check at Storage Switch
    a. Verify the mode (by default UCS is in FC end-host mode, so storage switch has to be in NPIV mode; unless UCS is in FC Switch mode)
    b. Verify the switch port connecting to UCS is UP as an F-Port and is configured for correct VSAN
    c. Check if both the initiator (Server) and the target (Storage) are logged into the fabric switch (command for MDS/N5k - show flogi database vsan X)
    d. Once confirmed that initiator and target devices are logged into the fabric, query the name server to see if they have registered themselves correctly. (command - show fcns database vsan X)
    e. Most important configuration to check on Storage Switch is the zoning
    Zoning is basically access control for our initiator to  targets. Most common design is to configure one zone per initiator and target.
    Zoning will require you to configure a zone, put that zone into your current zoneset, then ACTIVATE it. (command - show zoneset active vsan X)
    Step 3: Check at Storage Array
    When the Storage array logs into the SAN fabric, it queries the name server to see which devices it can communicate with.
    LUN masking is a crucial step on the Storage Array which gives a particular host (server) access to a specific LUN.
    Assuming that both the storage and initiator have FLOGI’d into the fabric and the zoning is correct (as per Step 1 & 2)
    Following needs to be verified at Storage Array level
    a. Are the wwpn of the initiators (vhba of the hosts) visible on the storage array?
    b. If above is yes then Is LUN Masking applied?
    c. What LUN number is presented to the host - this is the number that we see in Lun ID on the 'Boot Order' of Step 1
    Below document has details and troubleshooting outputs:
    http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-b-series-blade-servers/115764-ucs-san-tshoot-00.html
    Hope this answers your question.
    Thanks,
    Vishal 
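    (For quick reference, the fabric-side checks above come down to a handful of CLI commands; VSAN 10 is just a placeholder:)
    show npv flogi-table           # on the UCS FI in end-host/NPV mode: did the vHBA FLOGI?
    show flogi database vsan 10    # on the MDS/N5k: are the initiator and target logged in?
    show fcns database vsan 10     # have they registered with the name server?
    show zoneset active vsan 10    # is the initiator-target zone in the active zoneset?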

  • SAP NW7.1 with 10g and Netapp SAN on VM-Ware

    Hello together,
    I want to install several NW CE 7.1 installations with a 10g Oracle DB on Windows Server 2003.
    Now, our admins talk about the Windows hard disk drives like C: or D:
    Some admins want to implement different hard disk drives (like C: and D: etc.) on the Windows Server 2003 to optimize disk access, even if the server is connected to a NetApp NAS. They say that the Windows 2003 server optimizes disk access if more than one disk drive is configured.
    Some other solution architects say that this is not necessary, because the C:, D: and E: partitions are not on a real hard disk, but on a SAN connection. It will be OK if you define a C: partition with Windows and a D: partition with Oracle and SAP on it, because the SAN is so fast that you do not need to separate the disk load.
    How do you configure your systems with Oracle?
    best regards,
    Carsten Schulz

    Hi,
    I would mix local storage + SAN storage here to get an optimal file distribution.
    Along with the partition layout suggested by Eric (as well as by SAP and Oracle), I would add an additional striped partition on the local hard disk of the box, which will be used specifically for paging activity (virtual memory).
    It is good to distribute all the suggested files (of different purposes) across different disks using RAID levels, both to deal with data-loss situations and to achieve higher I/O performance.
    The RAID levels also play an important role here. A storage partition at RAID-5 is recommended for storing the SAP data files (Oracle database files). I prefer RAID-0 for the OS executables and for the SAP executables (sapmnt) on local hard disks (or on SAN storage LUNs).
    Origlogs and Mirrlogs are very sensitive and important files (online redo log files) which contain all committed and non-committed transaction entries and which are required for instance recovery. So they need to be stored at different locations to deal with any unexpected file loss/corruption situation whenever demanded in future.
    Along with the other files, mirroring of the control files onto separate disk locations is also recommended. A control file is a necessary file for the database to start and operate successfully.
    A separate location for storing offline redo-log files (ORAARCH) is recommended to deal with media recovery, which is required when performing complete/point-in-time database recovery.
    Regards,
    Bhavik G. Shroff

  • About ASM and SAN...

    Hello Guys,
    I have to implement a 3-node RAC 10gR2 on CentOS 4. I have studied many documents about RAC installation and configuration. I have learned how to set up the network requirements with private, public and virtual IPs and all the other stuff. I have learned the installation of Clusterware and the database with cluster-enabled functionality.
    BUT the storage options are still not clear to me. We have purchased a SAN and we are planning to implement ASM for the storage. Now I want to know:
    How many disks and disk partitions will a 3-node structure require on the SAN?
    How will ASM access the SAN, or you could say, how will the OS access this shared storage?
    The voting disk and OCR cannot be stored on shared storage and need to be stored on raw devices... what can these raw devices be? How can they be accessed by all nodes?
    The above three questions are disturbing me a lot. If they are clear to me, the whole storage concept will be clear and I can implement RAC.
    Please help me by answering the above 3 questions. I will be very grateful to you.
    Regards,
    Imran

    How many disks and disk partitions will a 3-node structure require on the SAN?
    There's no real answer to that! With Oracle generally, RAC or no RAC, the answer to how many disks you should have is "as many as possible". Partitioning is really up to you, too, depending on what you find easiest to manage. If you have a single SAN array, for example, comprised of 15 disks that you choose to partition into three or four logical volumes so that you can call one 'data', one 'redo', one 'OS', and one 'other' -that's entirely up to you, since Oracle could care less how you partition, what you call them or how many of them there are. Moreover, everything on every partition is being striped across those 15 disks anyway, so who cares?
    I think, however, you might be thinking of the RAC-specific issues of the voting disk and the Oracle Cluster Registry. If you were using a cluster file system, they could be just two files on the file system, about 120M in size between them. Since you are going to use ASM and these two elements can't be stored inside an ASM array, you'll have to create two raw partitions for this purpose. The rest you then chop up for ASM's use.
    It is NOT true, incidentally, that "the voting disk and OCR cannot be stored on shared storage". By definition, the voting disk and OCR must be on shared storage! Indeed, raw partitions, ASM arrays and cluster file systems are ALL shared storage technologies. It just so happens that those two files can't use ASM... but raw or cfs are fine.
    A raw partition is not, of course, intrinsically 'shared storage'... but if it's a raw partition on your SAN, to which all three of your nodes are physically attached, then it is shareable. It's shareable simply because three nodes can see it. And because there's no file system there with exclusive and blocking file locks, what one node does to a raw partition doesn't stop another node accessing it simultaneously (which is the definition of shared storage, of course).
    How will ASM access SAN? By you partitioning the SAN into a number of logical volumes, each one of which will be kept raw, and you then declaring each such volume as a candidate disk. You'll wrap all candidate disks up into an ASM disk group... and then Oracle will write to that disk group and hence through to the underlying logical volumes. Which comes back to the original question: how many logical volumes should you create out of, say, a 15 disk LUN on a SAN?
    Depends, as I said, on a lot of things, but for example RAID5 runs best when there are either 5 or 9 disks in the array (or did when last I looked at an EMC Clariion SAN!). So if your underlying RAID technology was going to be RAID5, you might well create 3 5-disk logical volumes on the one LUN. To let ASM use all 15 disks, you'd then create a 3-disk diskgroup (where 1 ASM disk = 1 SAN logical volume). On the other hand, you might want to keep some disks back for future storage, in which case a 1-disk ASM diskgroup representing a single 9-disk logical volume might be the go, with the remaining 6 disks on the LUN available for future expansion.
    It's a complicated topic, unfortunately. You're dealing with physical storage which is already abstracted into logical volumes and then abstracted even further by wrapping those logical volumes up into ASM disk groups. You balance performance, expandability, management convenience, your SAN vendor's optimisation tricks and so on... and hopefully come out with something that works for you!
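    (A minimal sketch of the raw-device piece on CentOS 4; /dev/sdb1 and /dev/sdb2 are made-up names for two small SAN partitions that all three nodes can see, roughly 100MB for the OCR and 20MB for the voting disk. Group and ownership choices follow the usual Oracle 10gR2 conventions and may need adjusting for your install user/groups:)
    # /etc/sysconfig/rawdevices -- bind the shared partitions to raw devices on every node
    /dev/raw/raw1 /dev/sdb1
    /dev/raw/raw2 /dev/sdb2
    # then on each node:
    service rawdevices restart
    chown root:oinstall /dev/raw/raw1     # OCR
    chown oracle:oinstall /dev/raw/raw2   # voting disk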

  • ZFS and fragmentation

    I do not see Oracle on ZFS often; in fact, I was called in to meet my first. The database was experiencing heavy IO problems, both from undersized IOPS capability and from a lack of performance on the backups - the reading part of it. The IOPS capability was easily extended by adding more LUNs, so I was left with the very poor bandwidth experienced by RMAN reading the datafiles. iostat showed that during a simple datafile copy (both cp and dd with 1MiB blocksize), the average IO blocksize was very small, and varying wildly. I feared fragmentation, so I set off to test.
    I wrote a small C program that initializes a 10 GiB datafile on ZFS, and repeatedly does:
    1 - 1000 random 8KiB writes with random data (contents) at 8KiB boundaries (mimicking an 8KiB database block size)
    2 - a full read of the datafile from start to finish in 128*8KiB=1MiB IOs (mimicking datafile copies, RMAN backups, full table scans, index fast full scans)
    3 - goto 1
    So it's a datafile that gets random writes and is full-scanned to see the impact of the random writes on the multiblock read performance. Note that the datafile is not grown; all writes are over existing data.
    Even though I expected fragmentation (it must have come from somewhere), I was appalled by the results. ZFS truly sucks big time in this scenario. Where EXT3, on which I ran the same tests (on the exact same storage), had stable read timings (around 10ms for a 1MiB IO), ZFS started off with 10ms and went up to 35ms for one 128*8KiB IO after 100,000 random writes into the file. It has not reached the end of the test yet - the service times are still increasing, so the test is taking very long. I do expect it to stop somewhere, as the file would eventually be completely fragmented and could not be fragmented more.
    I started noticing statements that seem to acknowledge this behavior in some Oracle whitepapers, such as the otherwise unexplained advice to copy datafiles regularly. Indeed, copying the file back and forth defragments it. I don't have to tell you all this means downtime.
    On the production server this issue has gotten so bad that migrating to a new, different filesystem by copying the files will take much longer than restoring from disk backup - the disk backups are written once and are not fragmented. They are lucky the application does not require full table scans or index fast full scans, or perhaps unlucky, because this issue would have become impossible to ignore earlier.
    I observed the fragmentation with all settings for logbias and recordsize that are recommended by Oracle for ZFS. The ZFS caches were allowed to use 14GiB RAM (and mostly did), bigger than the file itself.
    The question is, of course, am I missing something here? Who else has seen this behavior?

    Stephan,
    "well i got a multi billion dollar enterprise client running his whole Oracle infrastructure on ZFS (Solaris x86) and it runs pretty good."
    For random reads there is almost no penalty, because randomness is not increased by fragmentation. The problem is in scan-reads (aka scattered reads). The SAN cache may reduce the impact, and in the case of tiered storage, SSDs obviously do not suffer as much from fragmentation as rotational devices.
    "In fact ZFS introduces a "new level of complexity", but it is worth for some clients (especially the snapshot feature for example)."
    certainly, ZFS has some very nice features.
    "Maybe you hit a sync I/O issue. I have written a blog post about a ZFS issue and its sync I/O behavior with RMAN: [Oracle] RMAN (backup) performance with synchronous I/O dependent on OS limitations
    Unfortunately you have not provided enough information to confirm this."
    Thanks for that article. In my case it is a simple fact that the datafiles are getting fragmented by random writes. This fact is easily established by doing large scanning read IOs and observing the average block size during the read. Moreover, fragmentation MUST be happening, because that's what ZFS is designed to do with random writes - it allocates a new block for each write; data is not overwritten in place. I can 'make' test files fragmented by simply doing random writes to them, and this reproduces on both Solaris and Linux. Obviously this ruins scanning read performance on rotational devices (i.e. devices for which the seek time is a function of the distance between consecutive file offsets).
    "How does the ZFS pool layout look like?"
    Separate pools for datafiles, redo+control, archives, disk backups and oracle_home+diag. There is no separate device for the ZIL (ZFS intent log), but I tested with setups that do have a separate ZIL device; fragmentation still occurs.
    "Is the whole database in the same pool?"
    As in all the datafiles: yes.
    "At first you should separate the log and data files into different pools. ZFS works with "copy on write""
    It's already configured like that.
    "How does the ZFS free space look like? Depending on the free space of the ZFS pool you can delay the "ZFS ganging" or sometimes let (depending on the pool usage) it disappear completely."
    Yes, I have read that. We never surpassed 55% pool usage.
    thanks!
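    (For anyone who wants to reproduce the observation without the C program, the scan-read half can be approximated with dd while watching the average IO size; the pool and file names below are placeholders:)
    # sequential scan of a datafile in 1MiB IOs, as RMAN or a full table scan would issue
    dd if=/dbpool/oradata/users01.dbf of=/dev/null bs=1024k
    # in another session, watch ops/sec versus bandwidth; bandwidth divided by ops is the average IO size
    zpool iostat -v dbpool 5
    iostat -xn 5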

  • Problem with layout and spry menu in IE

    My navigation bar looks fine in Firefox and Safari, but is all messed up in IE.
    here is a link to the page:
    http://vacationlandphotography.com/tls/header.html
    here is my code:
    <html>
    <head>
    <title>The Landing School</title>
    <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
    <!-- ImageReady Styles (header.psd) -->
    <style type="text/css">
    <!--
    body {
        text-align:center;
        background-color: #000000;
    }
    #wrapper {
        margin: 0 auto;
        width: 1000;
        text-align:left;
    }
    #Table_01 {
        margin: 0 auto;
        left:0px;
        top:0px;
        width:1000px;
        height:133px;
    }
    #header-01 {
        position:relative;
        left:0px;
        top:0px;
        width:1000px;
        height:102px;
    }
    #header-02 {
        position:relative;
        left:0px;
        top:0px;
        width:720px;
        height:30px;
    }
    #header-03 {
        position:relative;
        left:697px;
        top:-31px;
        width:141px;
        height:31px;
    }
    #header-04 {
        position:relative;
        left:838px;
        top:-61px;
        width:29px;
        height:31px;
    }
    #header-05 {
        position:relative;
        left:867px;
        top:-92px;
        width:30px;
        height:31px;
    }
    #header-06 {
        position:relative;
        left:897px;
        top:-123px;
        width:31px;
        height:31px;
    }
    #header-07 {
        position:relative;
        left:928px;
        top:-154px;
        width:27px;
        height:31px;
    }
    #header-08 {
        position:relative;
        left:955px;
        top:-185px;
        width:45px;
        height:31px;
    }
    -->
    </style>
    <!-- End ImageReady Styles -->
    <script src="SpryAssets/SpryMenuBar.js" type="text/javascript"></script>
    <link href="SpryAssets/SpryMenuBarHorizontal.css" rel="stylesheet" type="text/css">
    </head>
    <body style="margin-top: 0px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px;">
    <div id="wrapper">
    <!-- ImageReady Slices (header.psd) -->
    <div id="Table_01">
        <div id="header-01">
            <img src="images/header_01.gif" width="1000" height="102" alt="">  </div>
        <div id="header-02">
          <ul id="MenuBar1" class="MenuBarHorizontal">
            <li><a class="MenuBarItemSubmenu" href="#">Wooden Boat Building</a>
                <ul>
                  <li><a href="#">Overview </a></li>
                  <li><a href="#">Career Options</a></li>
                  <li><a href="#">Alumni Profile</a></li>
                  <li><a href="#">Syllabus</a></li>
                  <li><a href="#">Admission</a></li>
              </ul>
            </li>
            <li><a href="#" class="MenuBarItemSubmenu">Composite Boat Building</a>
              <ul>
                <li><a href="#">Overview</a></li>
                <li><a href="#">Career Options</a></li>
                <li><a href="#">Alumni Profile</a></li>
                <li><a href="#">Syllabus</a></li>
                <li><a href="#">Admission</a></li>
              </ul>
            </li>
            <li><a class="MenuBarItemSubmenu MenuBarItemSubmenu" href="#">Yacht Design</a>
              <ul>
                <li><a href="#">Overview</a></li>
                <li><a href="#">Career Options</a></li>
                <li><a href="#">Alumni Profile</a></li>
                <li><a href="#">Syllabus</a></li>
                <li><a href="#">Admission</a></li>
              </ul>
            </li>
            <li><a href="#" class="MenuBarItemSubmenu">Marine Systems</a>
              <ul>
                <li><a href="#">Overview</a></li>
                <li><a href="#">Career Options</a></li>
                <li><a href="#">Alumni Profile</a></li>
                <li><a href="#">Syllabus</a></li>
                <li><a href="#">Admission</a></li>
              </ul>
            </li>
            <li><a href="#" class="MenuBarItemSubmenu">Overview</a>
              <ul>
                <li><a href="#">Mission</a></li>
                <li><a href="#">Location</a></li>
                <li><a href="#">Associate’s Degree</a></li>
                <li><a href="#">Tuition and Financial Aid</a></li>
                <li><a href="#">Admission</a></li>
                <li><a href="#">Faculty</a></li>
              </ul>
            </li>
          </ul>
      </div>
    <div id="header-03">
            <img src="images/header_03.gif" width="141" height="31" alt="">  </div>
    <div id="header-04">
        <a href="https://www.facebook.com/pages/The-Landing-School/81582557331?ref=ts" target="_top"><img src="images/header_04.gif" alt="" width="29" height="31" border="0"></a>    </div>
    <div id="header-05">
            <a href="http://twitter.com/#!/landingschool" target="_top"><img src="images/header_05.gif" alt="" width="30" height="31" border="0"></a>    </div>
    <div id="header-06">
            <a href="http://www.youtube.com/user/TheLandingSchool" target="_top"><img src="images/header_06.gif" alt="" width="31" height="31" border="0"></a>    </div>
    <div id="header-07">
            <a href="http://landingschool.blogspot.com/" target="_top"><img src="images/header_07.gif" alt="" width="27" height="31" border="0"></a>    </div>
    <div id="header-08">
            <img src="images/header_08.gif" width="45" height="31" alt="">
        </div>
    </div>
    <!-- End ImageReady Slices -->
    </div>
    <script type="text/javascript">
    <!--
    var MenuBar1 = new Spry.Widget.MenuBar("MenuBar1", {imgDown:"SpryAssets/SpryMenuBarDownHover.gif", imgRight:"SpryAssets/SpryMenuBarRightHover.gif"});
    //-->
    </script>
    </body>
    </html>
    here is my CSS:
    @charset "UTF-8";
    /* SpryMenuBarHorizontal.css - Revision: Spry Preview Release 1.4 */
    /* Copyright (c) 2006. Adobe Systems Incorporated. All rights reserved. */
    /* LAYOUT INFORMATION: describes box model, positioning, z-order */
    /* The outermost container of the Menu Bar, an auto width box with no margin or padding */
    ul.MenuBarHorizontal {
        margin: 0;
        padding: 0;
        list-style-type: none;
        font-size: 100%;
        cursor: none;
        width: 700;
        font-family: Arial;
        color: #FFFFFF;
        border: thin none #000000;
        height: auto;
    }
    /* Set the active Menu Bar with this class, currently setting z-index to accomodate IE rendering bug: http://therealcrisp.xs4all.nl/meuk/IE-zindexbug.html */
    ul.MenuBarActive {
        z-index: 1000;
        background-color: #FFFFFF;
        font-family: Arial, Helvetica, sans-serif;
        color: #000000;
    }
    /* Menu item containers, position children relative to this container and are a fixed width */
    ul.MenuBarHorizontal li {
        margin: 0;
        padding: 0;
        list-style-type: none;
        font-size: 11px;
        text-align: center;
        cursor: crosshair;
        width: 138px;
        float: left;
        height: 29px;
        background-color: #FFFFFF;
        border: 1px solid #000000;
        font-family: Arial;
        color: #FFFFFF;
        word-spacing: -0.1em;
        letter-spacing: normal;
    }
    /* Submenus should appear below their parent (top: 0) with a higher z-index, but they are initially off the left side of the screen (-1000em) */
    ul.MenuBarHorizontal ul {
        margin: 0;
        padding: 0;
        list-style-type: none;
        font-size: 100%;
        font: Arial;
        z-index: 1020;
        cursor: default;
        width: 8.2em;
        position: relative;
        left: -500em;
        background-color: #999999;
    }
    /* Submenu that is showing with class designation MenuBarSubmenuVisible, we set left to auto so it comes onto the screen below its parent menu item */
    ul.MenuBarHorizontal ul.MenuBarSubmenuVisible {
        left: auto;
    }
    /* Menu item containers are same fixed width as parent */
    ul.MenuBarHorizontal ul li {
        width: 8.2em;
    }
    /* Submenus should appear slightly overlapping to the right (95%) and up (-5%) */
    ul.MenuBarHorizontal ul ul {
        position: absolute;
        margin: -5% 0 0 125%;
    }
    /* Submenu that is showing with class designation MenuBarSubmenuVisible, we set left to 0 so it comes onto the screen */
    ul.MenuBarHorizontal ul.MenuBarSubmenuVisible ul.MenuBarSubmenuVisible {
        left: auto;
        top: 0;
    }
    /* DESIGN INFORMATION: describes color scheme, borders, fonts */
    /* Submenu containers have borders on all sides */
    ul.MenuBarHorizontal ul {
        border: 0px solid #CCC;
    }
    /* Menu items are a light gray block with padding and no text decoration */
    ul.MenuBarHorizontal a {
        display: block;
        cursor: pointer;
        font: Arial;
        background-color: #999;
        padding: 0.5em 0.75em;
        color: #333;
        text-decoration: none;
    }
    /* Menu items that have mouse over or focus have a blue background and white text */
    ul.MenuBarHorizontal a:hover, ul.MenuBarHorizontal a:focus {
        background-color: #003366;
        color: #FFFFFF;
        border: 0px;
    }
    /* Menu items that are open with submenus are set to MenuBarItemHover with a blue background and white text */
    ul.MenuBarHorizontal a.MenuBarItemHover, ul.MenuBarHorizontal a.MenuBarItemSubmenuHover, ul.MenuBarHorizontal a.MenuBarSubmenuVisible {
        background-color: #003366;
        color: #FFFFFF;
    }
    /* SUBMENU INDICATION: styles if there is a submenu under a given menu item */
    /* Menu items that have a submenu have the class designation MenuBarItemSubmenu and are set to use a background image positioned on the far left (95%) and centered vertically (50%) */
    ul.MenuBarHorizontal a.MenuBarItemSubmenu {
        background-image: none;
        background-repeat: no-repeat;
        background-position: 95% 50%;
        background-color: #FFFFFF;
        color: #000000;
        font-family: Arial, Helvetica, sans-serif;
        font-size: 11px;
    }
    /* Menu items that have a submenu have the class designation MenuBarItemSubmenu and are set to use a background image positioned on the far left (95%) and centered vertically (50%) */
    ul.MenuBarHorizontal ul a.MenuBarItemSubmenu {
        background-image: url(SpryMenuBarRight.gif);
        background-repeat: no-repeat;
        background-position: 95% 50%;
    }
    /* Menu items that are open with submenus have the class designation MenuBarItemSubmenuHover and are set to use a "hover" background image positioned on the far left (95%) and centered vertically (50%) */
    ul.MenuBarHorizontal a.MenuBarItemSubmenuHover {
        background-image: url(SpryMenuBarDownHover.gif);
        background-repeat: no-repeat;
        background-position: 95% 50%;
    }
    /* Menu items that are open with submenus have the class designation MenuBarItemSubmenuHover and are set to use a "hover" background image positioned on the far left (95%) and centered vertically (50%) */
    ul.MenuBarHorizontal ul a.MenuBarItemSubmenuHover {
        background-image: url(SpryMenuBarRightHover.gif);
        background-repeat: no-repeat;
        background-position: 95% 50%;
    }
    /* BROWSER HACKS: the hacks below should not be changed unless you are an expert */
    /* HACK FOR IE: to make sure the sub menus show above form controls, we underlay each submenu with an iframe */
    ul.MenuBarHorizontal iframe {
        position: absolute;
        z-index: 1010;
    }
    /* HACK FOR IE: to stabilize appearance of menu items; the slash in float is to keep IE 5.0 from parsing */
    @media screen, projection {
        ul.MenuBarHorizontal li.MenuBarItemIE {
            background: #FFF;
        }
    }
    thank you!
    Tote Road

    Your first difficulty is outlined here in the W3C Validation tool: http://validator.w3.org/check?uri=http%3A%2F%2Fvacationlandphotography.com%2Ftls%2Fheader.html&charset=%28detect+automatically%29&doctype=Inline&group=0
    Without a doctype declaration, the browsers don't know what to do with your code. Some apparently guess well. Internet Explorer, not so much.
    After you've fixed your page up by copying this
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    in place of this:
    <html>
    <head>
    either things will be improved or there will be something else cropping up. I opt for the former, myself!
    Beth

  • Problem with exporting and printing from pages to PDF

    I have a problem with my Pages
    My font will not be embedded in my PDF files.
    I have saved as a PS file and run it through Distiller, and my fonts are missing.
    I need to send my file to the print shop, but they will not accept my file, and I now understand why.
    Fonts are missing...
    Is there a workaround?
    I have no problem in InDesign or Quark, but Pages.....
    I need help
    thanks a zillion

    The font is coming from a notation software, Sibelius 4, and is named Opus.
    Hmmm ... Sibelius is a music notation software and notations are marked up in MusicXML. Presumably the font file is an SFNT with TrueType splines, but it is probably not installed in OS X system folders - rather in an internal Sibelius application font folder. So presumably you do not see the font in FontBook and OS X font auditing does not apply to the font.
    Sibelius exports EPS files, right? If memory serves, an EPS is still legal even if the font resource is not embedded. And in any case, we know from the behaviour that the font resource is not embedded for some reason. So how do you get Sibelius to put the font resource inside the graphic ... normally there is a button in the EPS export procedure that gives you the option to Embed All Fonts.
    You do not seem to get this button, though. Otherwise you would presumably have checked it already and the problem would have gone away.
    The next point in troubleshooting this is that you are not following the path that would let OS X detect that an external font resource is not embedded.
    I go from pages-print-printer- acrobat Pro 8-save as Pdf-x
    What you are doing here is telling Pages to tell OS X to generate a PostScript program within which is nested your Encapsulated PostScript program with the call to an unresolved external font resource.
    So why does Acrobat Pro not detect the unresolved external font resource? Hmm ... did you try the Preflight option in Acrobat 8 Pro? It should provide information on unresolved embeddings.
    I have also tried pages-print-printer- acrobat Pro 8 and save pdf as postscript and put the postscript file in destiller 8 pro with defalt setting high quality print
    The whole problem with EPS and PS is that this sort of situation is possible in the first place (plus, what is worse, the PS program can include custom additions to the graphics model that then fail in the PS interpreter whence Apple GX normalizing, Adobe Distiller normalizing, and Apple Quartz normalizing). You want to get as far away from EPS and PS as possible, believe me.
    So, you have not done what I posted that you should do in the first place. If I were you, I would first get rid of the problem that the EPS is making a call to an external font and then get rid of the problem that the PostScript is preserving the external call.
    To get rid of the problem that the EPS is preserving an external call, simply open the EPS in Apple Preview which includes a NORMALIZER for EPS/PS, and then save out the graphic as PDF. Alternatively, if you don't trust Quartz for some reason, set up a hotfolder for Distiller, make sure the option to embed all fonts is enabled, and convert the EPS to PDF.
    Now replace your EPS in Pages with PDF in Pages, and don't save PostScript to disk but save PDF to disk through the proper procedure which is File > Print > PDF > Save as PDF/X [for your custom configuration of the PDF/X-3 filter considering that no sane person in North Europe prints lowend US SWOP, we use ISO].
    If you begin by telling OS X that you want PDF within which fonts are supposed to be embedded ALWAYS, then you have started the right way. Otherwise, you have not told the operating system what you want to do, and this then leads you into places where you are unlikely to have the expertise to troubleshoot problems.
    So, forget placing EPS in the first place, place PDF. And forget saving PostScript to disk, save PDF to disk. If that does not sort your problem, here is the dirty solution for professional prepress.
    Adobe Photoshop has an EPS rasterizer that has wide tolerances for poor PostScript programming (so does Adobe Illustrator 6 and higher by the way).
    Therefore, if an EPS is posing problems, one workaround is to rasterize the EPS at high resolution in Photoshop and place that high resolution PDF in your layout.
    Take care that you rasterize as 1 bit at the required resolution of the print provider, probably 2450 dpi. When you save the 1 bit as PDF, Photoshop automatically compresses to a very, very small file (don't be surprised if 15Mb compresses to something like 0.5Mb).
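    For what it is worth, Ghostscript can do a comparable 1-bit rasterization from the command line; again, this is a sketch rather than part of the original advice. The file names are placeholders, tiffg4 writes a 1-bit CCITT Group 4 TIFF (which, like the Photoshop PDF route, compresses line art down to a very small file), and 2400 dpi stands in for whatever resolution your print provider actually requires.

    gs -dBATCH -dNOPAUSE -dEPSCrop -r2400 \
       -sDEVICE=tiffg4 -sOutputFile=score-1bit.tif score.eps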
    Rasterizing in Photoshop should not be necessary if you simply start by telling the operating system what it is you are trying to do. Then the operating system should be able to take the right decisions for you, and tell you if it finds problems it cannot resolve without turning to you.
    Good luck,
    Henrik
    would-be technical writer

  • Problems sharing internet with iptables and dnsmasq

    I followed this exactly: http://wiki.archlinux.org/index.php/Internet_Share
    And here is my dnsmasq.conf
    # Configuration file for dnsmasq.
    # Format is one option per line, legal options are the same
    # as the long options legal on the command line. See
    # "/usr/sbin/dnsmasq --help" or "man 8 dnsmasq" for details.
    # The following two options make you a better netizen, since they
    # tell dnsmasq to filter out queries which the public DNS cannot
    # answer, and which load the servers (especially the root servers)
    # unnecessarily. If you have a dial-on-demand link they also stop
    # these requests from bringing up the link unnecessarily.
    # Never forward plain names (without a dot or domain part)
    #domain-needed
    # Never forward addresses in the non-routed address spaces.
    #bogus-priv
    # Uncomment this to filter useless windows-originated DNS requests
    # which can trigger dial-on-demand links needlessly.
    # Note that (amongst other things) this blocks all SRV requests,
    # so don't use it if you use e.g. Kerberos, SIP, XMPP or Google-talk.
    # This option only affects forwarding, SRV records originating for
    # dnsmasq (via srv-host= lines) are not suppressed by it.
    #filterwin2k
    # Change this line if you want dns to get its upstream servers from
    # somewhere other that /etc/resolv.conf
    #resolv-file=
    # By default, dnsmasq will send queries to any of the upstream
    # servers it knows about and tries to favour servers to are known
    # to be up. Uncommenting this forces dnsmasq to try each query
    # with each server strictly in the order they appear in
    # /etc/resolv.conf
    #strict-order
    # If you don't want dnsmasq to read /etc/resolv.conf or any other
    # file, getting its servers from this file instead (see below), then
    # uncomment this.
    #no-resolv
    # If you don't want dnsmasq to poll /etc/resolv.conf or other resolv
    # files for changes and re-read them then uncomment this.
    #no-poll
    # Add other name servers here, with domain specs if they are for
    # non-public domains.
    #server=/localnet/192.168.0.1
    # Example of routing PTR queries to nameservers: this will send all
    # address->name queries for 192.168.3/24 to nameserver 10.1.2.3
    #server=/3.168.192.in-addr.arpa/10.1.2.3
    # Add local-only domains here, queries in these domains are answered
    # from /etc/hosts or DHCP only.
    #local=/localnet/
    # Add domains which you want to force to an IP address here.
    # The example below send any host in doubleclick.net to a local
    # webserver.
    #address=/doubleclick.net/127.0.0.1
    # --address (and --server) work with IPv6 addresses too.
    #address=/www.thekelleys.org.uk/fe80::20d:60ff:fe36:f83
    # You can control how dnsmasq talks to a server: this forces
    # queries to 10.1.2.3 to be routed via eth1
    # --server=10.1.2.3@eth1
    # and this sets the source (ie local) address used to talk to
    # 10.1.2.3 to 192.168.1.1 port 55 (there must be a interface with that
    # IP on the machine, obviously).
    # [email protected]#55
    # If you want dnsmasq to change uid and gid to something other
    # than root, you will need to have CONFIG_SECURITY_CAPABILITIES
    # enabled in your kernel. The default uid and gid of nobody will
    # be used if capability is available and this is not set.
    #user=
    #group=
    # If you want dnsmasq to listen for DHCP and DNS requests only on
    # specified interfaces (and the loopback) give the name of the
    # interface (eg eth0) here.
    # Repeat the line for more than one interface.
    interface=eth0
    # Or you can specify which interface _not_ to listen on
    #except-interface=
    # Or which to listen on by address (remember to include 127.0.0.1 if
    # you use this.)
    #listen-address=
    # If you want dnsmasq to provide only DNS service on an interface,
    # configure it as shown above, and then use the following line to
    # disable DHCP on it.
    #no-dhcp-interface=
    # On systems which support it, dnsmasq binds the wildcard address,
    # even when it is listening on only some interfaces. It then discards
    # requests that it shouldn't reply to. This has the advantage of
    # working even when interfaces come and go and change address. If you
    # want dnsmasq to really bind only the interfaces it is listening on,
    # uncomment this option. About the only time you may need this is when
    # running another nameserver on the same machine.
    bind-interfaces
    # If you don't want dnsmasq to read /etc/hosts, uncomment the
    # following line.
    #no-hosts
    # or if you want it to read another file, as well as /etc/hosts, use
    # this.
    #addn-hosts=/etc/banner_add_hosts
    # Set this (and domain: see below) if you want to have a domain
    # automatically added to simple names in a hosts-file.
    #expand-hosts
    # Set the domain for dnsmasq. this is optional, but if it is set, it
    # does the following things.
    # 1) Allows DHCP hosts to have fully qualified domain names, as long
    # as the domain part matches this setting.
    # 2) Sets the "domain" DHCP option thereby potentially setting the
    # domain of all systems configured by DHCP
    # 3) Provides the domain part for "expand-hosts"
    #domain=thekelleys.org.uk
    # Set a different domain for a particular subnet
    #domain=wireless.thekelleys.org.uk,192.168.2.0/24
    # Same idea, but range rather then subnet
    #domain=reserved.thekelleys.org.uk,192.68.3.100,192.168.3.200
    # Uncomment this to enable the integrated DHCP server, you need
    # to supply the range of addresses available for lease and optionally
    # a lease time. If you have more than one network, you will need to
    # repeat this for each network on which you want to supply DHCP
    # service.
    #dhcp-range=192.168.20.100,192.168.20.149,12h
    # This is an example of a DHCP range where the netmask is given. This
    # is needed for networks we reach the dnsmasq DHCP server via a relay
    # agent. If you don't know what a DHCP relay agent is, you probably
    # don't need to worry about this.
    dhcp-range=192.168.20.100,192.168.20.149,255.255.255.0,12h
    # This is an example of a DHCP range with a network-id, so that
    # some DHCP options may be set only for this network.
    #dhcp-range=red,192.168.0.50,192.168.0.150
    # Supply parameters for specified hosts using DHCP. There are lots
    # of valid alternatives, so we will give examples of each. Note that
    # IP addresses DO NOT have to be in the range given above, they just
    # need to be on the same network. The order of the parameters in these
    # do not matter, it's permissible to give name, address and MAC in any order
    # Always allocate the host with ethernet address 11:22:33:44:55:66
    # The IP address 192.168.0.60
    dhcp-host=00:E0:B8:9C:B7:2C,192.168.20.1
    # Always set the name of the host with hardware address
    # 11:22:33:44:55:66 to be "fred"
    #dhcp-host=11:22:33:44:55:66,fred
    # Always give the host with ethernet address 11:22:33:44:55:66
    # the name fred and IP address 192.168.0.60 and lease time 45 minutes
    #dhcp-host=11:22:33:44:55:66,fred,192.168.0.60,45m
    # Give a host with ethernet address 11:22:33:44:55:66 or
    # 12:34:56:78:90:12 the IP address 192.168.0.60. Dnsmasq will assume
    # that these two ethernet interfaces will never be in use at the same
    # time, and give the IP address to the second, even if it is already
    # in use by the first. Useful for laptops with wired and wireless
    # addresses.
    #dhcp-host=11:22:33:44:55:66,12:34:56:78:90:12,192.168.0.60
    # Give the machine which says its name is "bert" IP address
    # 192.168.0.70 and an infinite lease
    #dhcp-host=bert,192.168.0.70,infinite
    # Always give the host with client identifier 01:02:02:04
    # the IP address 192.168.0.60
    #dhcp-host=id:01:02:02:04,192.168.0.60
    # Always give the host with client identifier "marjorie"
    # the IP address 192.168.0.60
    #dhcp-host=id:marjorie,192.168.0.60
    # Enable the address given for "judge" in /etc/hosts
    # to be given to a machine presenting the name "judge" when
    # it asks for a DHCP lease.
    #dhcp-host=judge
    # Never offer DHCP service to a machine whose ethernet
    # address is 11:22:33:44:55:66
    #dhcp-host=11:22:33:44:55:66,ignore
    # Ignore any client-id presented by the machine with ethernet
    # address 11:22:33:44:55:66. This is useful to prevent a machine
    # being treated differently when running under different OS's or
    # between PXE boot and OS boot.
    #dhcp-host=11:22:33:44:55:66,id:*
    # Send extra options which are tagged as "red" to
    # the machine with ethernet address 11:22:33:44:55:66
    #dhcp-host=11:22:33:44:55:66,net:red
    # Send extra options which are tagged as "red" to
    # any machine with ethernet address starting 11:22:33:
    #dhcp-host=11:22:33:*:*:*,net:red
    # Ignore any clients which are specified in dhcp-host lines
    # or /etc/ethers. Equivalent to ISC "deny unknown-clients".
    # This relies on the special "known" tag which is set when
    # a host is matched.
    #dhcp-ignore=#known
    # Send extra options which are tagged as "red" to any machine whose
    # DHCP vendorclass string includes the substring "Linux"
    #dhcp-vendorclass=red,Linux
    # Send extra options which are tagged as "red" to any machine one
    # of whose DHCP userclass strings includes the substring "accounts"
    #dhcp-userclass=red,accounts
    # Send extra options which are tagged as "red" to any machine whose
    # MAC address matches the pattern.
    #dhcp-mac=red,00:60:8C:*:*:*
    # If this line is uncommented, dnsmasq will read /etc/ethers and act
    # on the ethernet-address/IP pairs found there just as if they had
    # been given as --dhcp-host options. Useful if you keep
    # MAC-address/host mappings there for other purposes.
    #read-ethers
    # Send options to hosts which ask for a DHCP lease.
    # See RFC 2132 for details of available options.
    # Common options can be given to dnsmasq by name:
    # run "dnsmasq --help dhcp" to get a list.
    # Note that all the common settings, such as netmask and
    # broadcast address, DNS server and default route, are given
    # sane defaults by dnsmasq. You very likely will not need
    # any dhcp-options. If you use Windows clients and Samba, there
    # are some options which are recommended, they are detailed at the
    # end of this section.
    # Override the default route supplied by dnsmasq, which assumes the
    # router is the same machine as the one running dnsmasq.
    #dhcp-option=3,1.2.3.4
    # Do the same thing, but using the option name
    #dhcp-option=option:router,1.2.3.4
    # Override the default route supplied by dnsmasq and send no default
    # route at all. Note that this only works for the options sent by
    # default (1, 3, 6, 12, 28) the same line will send a zero-length option
    # for all other option numbers.
    #dhcp-option=3
    # Set the NTP time server addresses to 192.168.0.4 and 10.10.0.5
    #dhcp-option=option:ntp-server,192.168.0.4,10.10.0.5
    # Set the NTP time server address to be the same machine as
    # is running dnsmasq
    #dhcp-option=42,0.0.0.0
    # Set the NIS domain name to "welly"
    #dhcp-option=40,welly
    # Set the default time-to-live to 50
    #dhcp-option=23,50
    # Set the "all subnets are local" flag
    #dhcp-option=27,1
    # Send the etherboot magic flag and then etherboot options (a string).
    #dhcp-option=128,e4:45:74:68:00:00
    #dhcp-option=129,NIC=eepro100
    # Specify an option which will only be sent to the "red" network
    # (see dhcp-range for the declaration of the "red" network)
    # Note that the net: part must precede the option: part.
    #dhcp-option = net:red, option:ntp-server, 192.168.1.1
    # The following DHCP options set up dnsmasq in the same way as is specified
    # for the ISC dhcpcd in
    # http://www.samba.org/samba/ftp/docs/textdocs/DHCP-Server-Configuration.txt
    # adapted for a typical dnsmasq installation where the host running
    # dnsmasq is also the host running samba.
    # you may want to uncomment some or all of them if you use
    # Windows clients and Samba.
    #dhcp-option=19,0 # option ip-forwarding off
    #dhcp-option=44,0.0.0.0 # set netbios-over-TCP/IP nameserver(s) aka WINS server(s)
    #dhcp-option=45,0.0.0.0 # netbios datagram distribution server
    #dhcp-option=46,8 # netbios node type
    # Send RFC-3397 DNS domain search DHCP option. WARNING: Your DHCP client
    # probably doesn't support this......
    #dhcp-option=option:domain-search,eng.apple.com,marketing.apple.com
    # Send RFC-3442 classless static routes (note the netmask encoding)
    #dhcp-option=121,192.168.1.0/24,1.2.3.4,10.0.0.0/8,5.6.7.8
    # Send vendor-class specific options encapsulated in DHCP option 43.
    # The meaning of the options is defined by the vendor-class so
    # options are sent only when the client supplied vendor class
    # matches the class given here. (A substring match is OK, so "MSFT"
    # matches "MSFT" and "MSFT 5.0"). This example sets the
    # mtftp address to 0.0.0.0 for PXEClients.
    #dhcp-option=vendor:PXEClient,1,0.0.0.0
    # Send microsoft-specific option to tell windows to release the DHCP lease
    # when it shuts down. Note the "i" flag, to tell dnsmasq to send the
    # value as a four-byte integer - that's what microsoft wants. See
    # http://technet2.microsoft.com/WindowsServer/en/library/a70f1bb7-d2d4-49f0-96d6-4b7414ecfaae1033.mspx?mfr=true
    #dhcp-option=vendor:MSFT,2,1i
    # Send the Encapsulated-vendor-class ID needed by some configurations of
    # Etherboot to allow it to recognise the DHCP server.
    #dhcp-option=vendor:Etherboot,60,"Etherboot"
    # Send options to PXELinux. Note that we need to send the options even
    # though they don't appear in the parameter request list, so we need
    # to use dhcp-option-force here.
    # See http://syslinux.zytor.com/pxe.php#special for details.
    # Magic number - needed before anything else is recognised
    #dhcp-option-force=208,f1:00:74:7e
    # Configuration file name
    #dhcp-option-force=209,configs/common
    # Path prefix
    #dhcp-option-force=210,/tftpboot/pxelinux/files/
    # Reboot time. (Note 'i' to send 32-bit value)
    #dhcp-option-force=211,30i
    # Set the boot filename for netboot/PXE. You will only need
    # this if you want to boot machines over the network and you will need
    # a TFTP server; either dnsmasq's built in TFTP server or an
    # external one. (See below for how to enable the TFTP server.)
    #dhcp-boot=pxelinux.0
    # Boot for Etherboot gPXE. The idea is to send two different
    # filenames, the first loads gPXE, and the second tells gPXE what to
    # load. The dhcp-match sets the gpxe tag for requests from gPXE.
    #dhcp-match=gpxe,175 # gPXE sends a 175 option.
    #dhcp-boot=net:#gpxe,undionly.kpxe
    #dhcp-boot=mybootimage
    # Encapsulated options for Etherboot gPXE. All the options are
    # encapsulated within option 175
    #dhcp-option=encap:175, 1, 5b # priority code
    #dhcp-option=encap:175, 176, 1b # no-proxydhcp
    #dhcp-option=encap:175, 177, string # bus-id
    #dhcp-option=encap:175, 189, 1b # BIOS drive code
    #dhcp-option=encap:175, 190, user # iSCSI username
    #dhcp-option=encap:175, 191, pass # iSCSI password
    # Test for the architecture of a netboot client. PXE clients are
    # supposed to send their architecture as option 93. (See RFC 4578)
    #dhcp-match=peecees, option:client-arch, 0 #x86-32
    #dhcp-match=itanics, option:client-arch, 2 #IA64
    #dhcp-match=hammers, option:client-arch, 6 #x86-64
    #dhcp-match=mactels, option:client-arch, 7 #EFI x86-64
    # Do real PXE, rather than just booting a single file, this is an
    # alternative to dhcp-boot.
    #pxe-prompt="What system shall I netboot?"
    # or with timeout before first available action is taken:
    #pxe-prompt="Press F8 for menu.", 60
    # Available boot services. for PXE.
    #pxe-service=x86PC, "Boot from local disk", 0
    # Loads <tftp-root>/pxelinux.0 from dnsmasq TFTP server.
    #pxe-service=x86PC, "Install Linux", pxelinux
    # Loads <tftp-root>/pxelinux.0 from TFTP server at 1.2.3.4.
    # Beware this fails on old PXE ROMS.
    #pxe-service=x86PC, "Install Linux", pxelinux, 1.2.3.4
    # Use bootserver on network, found by multicast or broadcast.
    #pxe-service=x86PC, "Install windows from RIS server", 1
    # Use bootserver at a known IP address.
    #pxe-service=x86PC, "Install windows from RIS server", 1, 1.2.3.4
    # If you have multicast-FTP available,
    # information for that can be passed in a similar way using options 1
    # to 5. See page 19 of
    # http://download.intel.com/design/archives/wfm/downloads/pxespec.pdf
    # Enable dnsmasq's built-in TFTP server
    #enable-tftp
    # Set the root directory for files available via TFTP.
    #tftp-root=/var/ftpd
    # Make the TFTP server more secure: with this set, only files owned by
    # the user dnsmasq is running as will be sent over the net.
    #tftp-secure
    # Set the boot file name only when the "red" tag is set.
    #dhcp-boot=net:red,pxelinux.red-net
    # An example of dhcp-boot with an external TFTP server: the name and IP
    # address of the server are given after the filename.
    # Can fail with old PXE ROMS. Overridden by --pxe-service.
    #dhcp-boot=/var/ftpd/pxelinux.0,boothost,192.168.0.3
    # Set the limit on DHCP leases, the default is 150
    #dhcp-lease-max=150
    # The DHCP server needs somewhere on disk to keep its lease database.
    # This defaults to a sane location, but if you want to change it, use
    # the line below.
    #dhcp-leasefile=/var/lib/misc/dnsmasq.leases
    # Set the DHCP server to authoritative mode. In this mode it will barge in
    # and take over the lease for any client which broadcasts on the network,
    # whether it has a record of the lease or not. This avoids long timeouts
    # when a machine wakes up on a new network. DO NOT enable this if there's
    # the slightest chance that you might end up accidentally configuring a DHCP
    # server for your campus/company. The ISC server uses
    # the same option, and this URL provides more information:
    # http://www.isc.org/index.pl?/sw/dhcp/authoritative.php
    #dhcp-authoritative
    # Run an executable when a DHCP lease is created or destroyed.
    # The arguments sent to the script are "add" or "del",
    # then the MAC address, the IP address and finally the hostname
    # if there is one.
    #dhcp-script=/bin/echo
    # Set the cachesize here.
    #cache-size=150
    # If you want to disable negative caching, uncomment this.
    #no-negcache
    # Normally responses which come from /etc/hosts and the DHCP lease
    # file have Time-To-Live set as zero, which conventionally means
    # do not cache further. If you are happy to trade lower load on the
    # server for potentially stale data, you can set a time-to-live (in
    # seconds) here.
    #local-ttl=
    # If you want dnsmasq to detect attempts by Verisign to send queries
    # to unregistered .com and .net hosts to its sitefinder service and
    # have dnsmasq instead return the correct NXDOMAIN response, uncomment
    # this line. You can add similar lines to do the same for other
    # registries which have implemented wildcard A records.
    #bogus-nxdomain=64.94.110.11
    # If you want to fix up DNS results from upstream servers, use the
    # alias option. This only works for IPv4.
    # This alias makes a result of 1.2.3.4 appear as 5.6.7.8
    #alias=1.2.3.4,5.6.7.8
    # and this maps 1.2.3.x to 5.6.7.x
    #alias=1.2.3.0,5.6.7.0,255.255.255.0
    # and this maps 192.168.0.10->192.168.0.40 to 10.0.0.10->10.0.0.40
    #alias=192.168.0.10-192.168.0.40,10.0.0.0,255.255.255.0
    # Change these lines if you want dnsmasq to serve MX records.
    # Return an MX record named "maildomain.com" with target
    # servermachine.com and preference 50
    #mx-host=maildomain.com,servermachine.com,50
    # Set the default target for MX records created using the localmx option.
    #mx-target=servermachine.com
    # Return an MX record pointing to the mx-target for all local
    # machines.
    #localmx
    # Return an MX record pointing to itself for all local machines.
    #selfmx
    # Change the following lines if you want dnsmasq to serve SRV
    # records. These are useful if you want to serve ldap requests for
    # Active Directory and other windows-originated DNS requests.
    # See RFC 2782.
    # You may add multiple srv-host lines.
    # The fields are <name>,<target>,<port>,<priority>,<weight>
    # If the domain part is missing from the name (so that it just has the
    # service and protocol sections) then the domain given by the domain=
    # config option is used. (Note that expand-hosts does not need to be
    # set for this to work.)
    # A SRV record sending LDAP for the example.com domain to
    # ldapserver.example.com port 289
    #srv-host=_ldap._tcp.example.com,ldapserver.example.com,389
    # A SRV record sending LDAP for the example.com domain to
    # ldapserver.example.com port 289 (using domain=)
    #domain=example.com
    #srv-host=_ldap._tcp,ldapserver.example.com,389
    # Two SRV records for LDAP, each with different priorities
    #srv-host=_ldap._tcp.example.com,ldapserver.example.com,389,1
    #srv-host=_ldap._tcp.example.com,ldapserver.example.com,389,2
    # A SRV record indicating that there is no LDAP server for the domain
    # example.com
    #srv-host=_ldap._tcp.example.com
    # The following line shows how to make dnsmasq serve an arbitrary PTR
    # record. This is useful for DNS-SD. (Note that the
    # domain-name expansion done for SRV records _does_not
    # occur for PTR records.)
    #ptr-record=_http._tcp.dns-sd-services,"New Employee Page._http._tcp.dns-sd-services"
    # Change the following lines to enable dnsmasq to serve TXT records.
    # These are used for things like SPF and zeroconf. (Note that the
    # domain-name expansion done for SRV records _does_not
    # occur for TXT records.)
    #Example SPF.
    #txt-record=example.com,"v=spf1 a -all"
    #Example zeroconf
    #txt-record=_http._tcp.example.com,name=value,paper=A4
    # Provide an alias for a "local" DNS name. Note that this _only_ works
    # for targets which are names from DHCP or /etc/hosts. Give host
    # "bert" another name, bertrand
    #cname=bertand,bert
    # For debugging purposes, log each DNS query as it passes through
    # dnsmasq.
    #log-queries
    # Log lots of extra information about DHCP transactions.
    #log-dhcp
    # Include another lot of configuration options.
    #conf-file=/etc/dnsmasq.more.conf
    #conf-dir=/etc/dnsmasq.d
    eth0 is set up for 192.168.20.1, netmask 255.255.255.0. I can get an IP on my client machine and ping 192.168.20.1, but cannot access the internet. resolv.conf on the client machine has a nameserver of 192.168.20.1. Also, Firefox times out trying to access Google via its static IP.
    What should I do to give my client machine internet access?
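    Since Google cannot be reached even by its IP address, the missing piece is probably not DNS but packet forwarding and NAT on the router box, which is the other half of that wiki page. Below is a minimal sketch of those steps, assuming eth0 is the LAN side (192.168.20.1) and eth1 is the interface with the actual internet connection - the interface names are placeholders, so adjust them to your setup.

    # enable IPv4 forwarding for the current session
    sysctl -w net.ipv4.ip_forward=1

    # masquerade LAN traffic out of the internet-facing interface
    iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
    iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
    iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT

    Note that sysctl -w only lasts until reboot; to make it stick, set net.ipv4.ip_forward=1 in /etc/sysctl.conf and save the rules with iptables-save so they are restored at boot.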

    http://wiki.archlinux.org/index.php/DNS_with_bind -> Did you try that too?
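    Whichever DNS server is used, it may help to verify each layer separately on the router box. These checks are a suggestion rather than something from the thread, and google.com is just an example target.

    # is forwarding actually on? (should print 1)
    sysctl net.ipv4.ip_forward

    # are the NAT and FORWARD rules in place, and are their packet counters increasing?
    iptables -t nat -L -n -v
    iptables -L FORWARD -n -v

    # does dnsmasq answer DNS queries on the LAN address?
    dig @192.168.20.1 google.com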
