Does distributed processing work best on a wired network?

Tried setting up Qmaster (QuickCluster) today with two MacBook Airs over a wireless network; the submitted batch job simply went into waiting mode, and I noticed a lot of network traffic (at a slow rate) to my client system. I expected wireless to be slower than wired, but I also expected some sort of asynchronous behavior from Compressor/clustering, starting the encoding/compression process while data was still being transferred... but the CPU idled on both nodes. Anyone?

Location Services (including Maps and "Find My Mac") relies on a database of known Wi-Fi access points. You don't necessarily have to be connected to a wireless network, but Wi-Fi does have to be turned on and you have to be within range of one of those access points.

Similar Messages

  • Can anyone make distributed processing work?

    I have two high-spec iMacs, connected via Thunderbolt and Ethernet. I go through the Compressor setup for group transcoding and have tried many different settings combinations; it NEVER uses the second machine in the transcode.
    Episode Pro on the same machines does it no problem. Has anyone got this working in Compressor?

    I haven't tried with the 4.1 versions because the only machine I could test it with is running an earlier OS. It's purposely frozen in that state and it will be some time before it's upgraded. I have considered installing 10.9 on an external drive with FCP X and Compressor and booting from that. If I have the time I'll try it and post the results.
    In the meantime, I hope someone else will have some theories on why your processing setup isn't working and chime in.
    Finally, just to add I think a lot of folks have taken the simpler route and opted for multiple instances to speed up their work.
    Russ

  • What's the best network for distributed processing?

    Hey guys,
    I have a network of three computers... I have distributed processing all set up on Compressor, and it runs really fast through the ethernet connection I have set up. 
    What's the absolute best way to set up this network?
    The macbooks both have a firewire input, but that's not being used to transfer the data in this case... it's all ethernet. 
    The mac pro has 2 firewire ports... leaving one potentially open for a network. 
    Do you think it would be beneficial to set up a firewire network for these three computers, or is there any reason why a simple ethernet network is preferable?
    Thanks in advance for all of your help...
    Oh!  One more question!
    If I was going to set up a firewire network, how would I do it?
    ---Trav

    Hi MrWizard, do you have Compressor.app V4.1 (Dec 2013) installed, or one of the older versions? According to your signature you're on the older OS X 10.7.5.
    The idea you post is sound, HOWEVER... it's somewhat fickle and troublesome. I'm assuming the old V3.4 Compressor/Qmaster in what follows...
    Hardware:
    You can CERTAINLY use the FireWire ports as network NICs using IP over FireWire. It certainly works between TWO Macs, as I've used it years back.
    Assuming you have a FW hub, or use the single FW hub in the Mac Pro... or scout around for a FW800 switch (if they exist!), or just use FW800-to-400 cables and a cheap legacy FW400 hub... I saw one once in Taipei!
    Simply configure the FireWire INTERFACE on each machine in System Preferences / Network (use + to add the interface, etc.), then Apply.
    Either:
    hard-code (manual) an IP address (1.1.1.x) for each, or
    install and deploy OS X Server on your Mac Pro (IP 1.1.1.1) and set up a simple DNS domain ("mrwizards.cool_network") for that FW subnet (1.1.1.x), then a DHCP server for that subnet so that when the two MacBook Pros connect they get an IP address such as 1.1.1.2-253 and assigned machine names. Set the DNS search on each machine to add 1.1.1.1 (your Mac Pro) to resolve machine names. More work, however easier to manage.
    Test your local DNS between the machines, or TRACEROUTE / PING the machines from each other, and verify the IP path over the FW NICs works OK.
    The more you can verify the IP traffic around your 1.1.1.x subnet, the better.
    Once you configure these as a separate NETWORK, you will still need some way to physically provide the FireWire links... and this is only the start of your headaches. I'm assuming you are on an old Compressor/Qmaster V3.5 with the legacy FCS 3 suite... true/false?
    The configuration takes patience, trial and error, and a clear understanding of what you are trying to achieve, and whether it's really worth the effort.
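    For the quick hard-coded route, a minimal sketch of what that looks like in Terminal on each Mac; the service name "FireWire" and the 1.1.1.x addresses are assumptions (taken from the addressing scheme above) and may differ on your machines, so check the first command's output:
    # List the network service names so you know what the FireWire port is called
    networksetup -listallnetworkservices
    # Give this Mac a fixed address on the isolated FireWire subnet
    # (-setmanual takes: service name, IP, subnet mask, router; on an isolated
    #  subnet, pointing the "router" at the Mac Pro, 1.1.1.1, is good enough)
    sudo networksetup -setmanual "FireWire" 1.1.1.2 255.255.255.0 1.1.1.1
    # Verify the IP path over the FireWire NICs, as suggested above
    ping -c 4 1.1.1.1
    traceroute 1.1.1.1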
    TRANSCODING over a nodal cluster, AKA DISTRIBUTED SEGMENTED MULTIPASS TRANSCODING!
    On paper you'd imagine the throughput would be worth it. Sadly the converse is true!... It often takes LONGER to use a bunch of nodes than to submit to a CLUSTER on a SINGLE machine, because:
    - the elapsed time for the submission depends on the processing speed of the slowest machine and its network connection, and
    - the final assembly of the QuickTime pieces for the object.
    Hmmm, often quicker just to use a single CLUSTER on the Mac Pro!
    You MAY want to consider this option, assuming the hardware you have mentioned:
    What about this instead of FireWire NICs?
    USE Gigabit Ethernet all around.
    Steps:
    Set the Wi-Fi on each host so that the Wi-Fi network is only used for your usual internal activities and Bonjour (.local) etc. Assume low IP traffic there; all works OK.
    OPTION: on the Mac Pro, dedicate en0 (the first Ethernet NIC) to the Wi-Fi router, and turn off Wi-Fi on the Mac Pro.
    Utilise the Ethernet (en0, built-in Ethernet) on the MacBooks (older ones? They will be very slow, BTW).
    Connect the Mac Pro's second Ethernet port and the MacBook (Pros?) Ethernet ports to a cheap 8-port Ethernet hub/switch (HK$150 / €15; they are all switches these days). This gives you an isolated subnet... no router needed. Plug in the hub/switch and power on!
    Look at the hard-coding step above. QUICK START: just hard-code IP addresses as follows on each machine (System Preferences / Network):
    Mac Pro: IP 1.1.1.1
    MacBook (Pro) #2: IP address 1.1.1.2
    MacBook (Pro) #3: IP address 1.1.1.3
    Test the 1.1.1.1-1.1.1.3 network (ping or traceroute).
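    A quick sanity check of that subnet from any one of the three machines (a sketch, assuming the 1.1.1.1-1.1.1.3 addresses above) could be:
    # Ping each node on the isolated subnet once; any "UNREACHABLE" means
    # fix the cabling/IP settings before going anywhere near Qmaster.
    for ip in 1.1.1.1 1.1.1.2 1.1.1.3; do
        if ping -c 1 "$ip" > /dev/null 2>&1; then
            echo "$ip ok"
        else
            echo "$ip UNREACHABLE"
        fi
    done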
    For V3.5 Compressor and Qmaster with ye olde FCS, dig into the Qmaster prefs as follows:
    Mac Pro: set services AND ALSO make it the controller. Set NETWORK INTERFACE to the second Ethernet interface (not "All"). Set instances to more than one, but NOT the max! Tick MANAGED SERVICES.
    MacBook (Pro)s: set services only. Set NETWORK INTERFACE to the Ethernet interface (not "All"). Set instances to more than one, but NOT the max! Tick MANAGED SERVICES.
    Launch Apple Qadministrator.app and make a managed cluster; easy (DON'T waste your time with QuickCluster!... just don't).
    IMPORTANT: make sure the SOURCE and TARGETS of ALL the objects that the transcodes will use are AVAILABLE (mounted and accessible) on ALL the machines that you want to participate.
    Set up the batch and make sure SEGMENTED is ticked in the inspector... then submit the job.
    See how you go.
    However, if you have Compressor V4.1 (Dec 2013), then just set your network up and it should just work without all the voodoo in the previous steps.
    Post your results for others to see.
    Warwick
    Hong Kong

  • How does the login process work with a bind to AD?

    Hi there,
    I am trying to bind our Mac to our Active Directory. Before doing so I'd like to understand well how the login process works. Is there any reference I could look up to?
    From what I've understood so far, with an example user "testAccount", no automount, no AD extension, and the AD plugin set as follows:
    a) if mobile is set, then a /Users/testAccount is created and if UNC is set then a smbHome is mounted "on desktop". If UNC is not set, do nothing more
    b) if force is set, then a /Users/testAccount is created and if UNC is set then a smbHome is mounted "on desktop". If UNC is not set, do nothing more
    c) if force is NOT set:
    --c1) if UNC is not set and the NFSHomeDirectory cannot be mapped at all, then log the user in with a temporary home
    --c2) if UNC is set mount SMBhome and use it as mounted home folder (NFSHomeDirectory-->SMBhome "/Network/Servers/my.server.com/users/testAccount")
    --c3) if UNC is not set then retrieve the homeDirectory-NIS's attribute in AD (NFSHomeDirectory--> "/homes/testAccount") and create /homes/testAccount
    My doubt now is about point c2): after I log in, running "# mount" gives:
    trigger on /Network/Servers/my.server.com/users (autofs, automounted)
    //[email protected]/users on /Network/Servers/my.server.com/users (smbfs, nodev, nosuid, automounted, mounted by testaccount)
    but my SMBHome is not correctly mounted on the remote server (though the Library folder and MCX files are created!) and I get home errors because the system is looking for "/homes/testAccount", which I don't know where it is coming from, given that
    # dscl /Active\ Directory/my.server.com -read /Users/testaccount | grep homes only outputs dsAttrTypeNative:unixHomeDirectory: /homes/testAccount
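    One way to see everything the AD node is handing out for the account (a rough sketch reusing the node path and account name from the command above; attribute names can vary by OS X version, and the local lookup only returns anything if a local or mobile record exists):
    # Dump the home-related attributes the AD node returns for the account
    dscl "/Active Directory/my.server.com" -read /Users/testaccount \
        SMBHome HomeDirectory NFSHomeDirectory
    # Compare with what the local directory node has cached, if anything
    dscl . -read /Users/testaccount NFSHomeDirectory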
    thanks,
    a.

    Is there any reference I could look up to?
    http://www.macwindows.com

  • Just bought a MacBook Air. It does not appear to be compatible with my LCD projector. What LCD projectors work best with the MacBook Air?

    My Canon LCD LV-7210 projector does not recognize my MacBook Air, nor does my MacBook Air recognize that it is attached to an LCD projector. Yes, I do have the proper adapter.
    What LCD projectors work best with the MacBook Air? Does this LCD projector have auto-detect for the computer?
    Thanks!

    I've just purchased a *27in (aluminium case) LCD Cinema Display* and am using it with my MacBook Pro, linked by the supplied three-jack cable (Mini DVI display jack).
    The 27" display uses Mini DisplayPort, not Mini DVI.
    But I also have *a white-cased MacBook (purchased in 2008) which has an older type of display socket*, certainly not compatible with the Mini DVI type.
    The early MacBook uses Mini DVI.
    1. Is there a cable or adaptor available to link my older MacBook to the Cinema Display? Nothing in the online store looks 'right'
    No. Mini DVI is single-link, and does not support the resolution needed for the 27" display.

  • EDI-How does the Inbound File process work?

    Hi
    How does the Inbound process work in EDI?
    When a file is put in the Inbound EDI directory of the App server, how does the system start the process?
    Which programs are called/triggered?
    How does the Inbound process work?
    Please be brief and precise.

    Hi
    Check these links:
    http://help.sap.com/saphelp_erp2004/helpdata/en/dc/6b835943d711d1893e0000e8323c4f/content.htm
    http://www.sapgenie.com/sapgenie/docs/ale_scenario_development_procedure.doc
    http://edocs.bea.com/elink/adapter/r3/userhtm/ale.htm#1008419
    http://www.netweaverguru.com/EDI/HTML/IDocBook.htm
    http://www.sapgenie.com/sapedi/index.htm
    http://www.sappoint.com/abap/ale.pdf
    http://www.sappoint.com/abap/ale2.pdf
    http://www.sapgenie.com/sapedi/idoc_abap.htm
    http://help.sap.com/saphelp_erp2005/helpdata/en/0b/2a60bb507d11d18ee90000e8366fc2/frameset.htm
    http://help.sap.com/saphelp_erp2005/helpdata/en/78/217da751ce11d189570000e829fbbd/frameset.htm
    http://www.allsaplinks.com/idoc_sample.html
    http://www.sappoint.com/abap.html
    Please check these PDF documents for ALE and IDoc.
    http://www.sappoint.com/abap/ale.pdf
    http://www.sappoint.com/abap/ale2.pdf
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/BCMIDALEIO/BCMIDALEIO.pdf
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/BCMIDALEPRO/BCMIDALEPRO.pdf
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/CABFAALEQS/CABFAALEQS.pdf
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/BCSRVEDISC/CAEDISCAP_STC.pdf
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/BCSRVEDI/CAEDI.pdf
    Check the link below. It gives the step-by-step procedure for IDoc creation.
    http://www.supinfo-projects.com/cn/2005/idocs_en/2/
    Reward points if useful
    Regards
    Anji

  • How does the repair process work?

    How does the repair process work? I recently broke my iPhone (water damage) and still have warranty coverage until July 30.

    http://www.apple.com/support/iphone/repair/other/

  • I am considering an iPhone 5 which I want to "tether" to my laptop and iPad. Does anyone know which network works best? T-Mobile tell me that they will only tether up to 1GB data; will this be sufficient?

    I am considering an iPhone 5 which I want to "tether" to my laptop and iPad. Does anyone know which network works best? T-Mobile tell me that they will only tether up to 1GB of data; will this be sufficient?

    I have the AT&T family plan for use with my laptop, iPhone and iPad, with 6 GB per month. I tend to use tethering far more often with the MacBook Pro than the iPad (mine has cellular), and I don't think I've had any month where 1 GB would have been sufficient for tethering. Do they have a plan where they will let you pay for additional tethering? If so, I would go to at least 2 GB.
    Or look into AT&T.

  • Having trouble setting up Distributed Processing / Qmaster

    Hey Guys,
    I was able to (at one point) use Compressor 3 via Final Cut Pro Studio with Qadministrator to allow Distributed processing either via managed clusters or QuickCluster - both worked on both macs (a 2007 2.6Ghz Core 2 Duo iMac and a 2009 3.06Ghz Core 2 Duo iMac).
    I just upgraded to Final Cut Pro X and Compressor 4. As far as I can tell, the Qmaster settings now reside in the Compressor application itself, so I tried setting it up as best as I could using both the QuickCluster and managed cluster options (very similar to when using the older Qmaster) but no dice. I can see my controller's cluster from the secondary iMac, but it always displays submissions as "Not Available" and it does not help with processing. I've tried everything I can think of - I tried using FCS Remover for the older version of Final Cut, I tried looking around via terminal to see if there are any residual files & settings prior to the FCPX install, I've tried following as many instructions as I could find (including apple's official documentation on Compressor 4 on setting up a cluster) but NOTHING seems to work. I'm at a loss!!
    Unfortunately, any documentation or references to issues with Qmaster / distributed processing relate to older versions of Compressor and whatnot.
    Can anyone help, or have any suggestions? I have no idea how to get this working and I am having trouble finding anything useful online during my research.
    Perhaps someone is familiar and can help me set it up correctly? I'm verrrry new to Final Cut in general, so I apologize in advance if I'm a bit slow, but I'll try to keep up!
    Thanks,

    In spite of all Apple's hype I'm not sure distributed processing is actually working.
    First I ran into the problem with permissions on the /Users/Shared/Library/Application Support folder. There's some info about that in this discussion. You'll need to fix it on each computer you're trying to use as a node.
      https://discussions.apple.com/thread/3139466?start=0&tstart=0
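    For reference, the check and the blunt fix on each node come down to something like this (the wide-open chmod is my assumption of the usual workaround, not an Apple-documented requirement, so tighten it if you prefer):
    # See who owns the shared support folder and what the current permissions are
    ls -led "/Users/Shared/Library/Application Support"
    # Blunt workaround: make the folder (and the Compressor/Qmaster items inside
    # it) writable for everyone again; adjust ownership/permissions to taste
    sudo chmod -R 777 "/Users/Shared/Library/Application Support"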
    Then I finally found some decent documentation on Compressor 4 here
      http://help.apple.com/compressor/mac/4.0/en/compressor/usermanual/#chapter=29%26section=1
    However, no matter what I tried, I could not get the compression to spread across more than one computer. I tried managed clusters, quick clusters, and even "This Computer Plus". I was testing on a Mac Pro, a Mac mini, and a MacBook Air. I could disable the Mac Pro and the processing would move to the mini; disable the mini and it would move to the MacBook Air. No matter what I do, though, it won't run on multiple machines.
    I'm also having trouble doing any kind of compositing in FCPX and getting it to compress properly.  I see this error
    7/20/11 11:07:42.438 PM ProMSRendererTool: [23:07:42.438] <<<< VTVideoDecoderSelection >>>> VTSelectAndCreateVideoDecoderInstanceInternal: no video decoder found for 'png '
    and then I end up with a hung job in the Share Monitor that if I try to cancel just sits there forever saying "canceling".
    I'm seeing a bunch of Adobe AIR Encrypted Local Storage errors in the log too.  Don't know what that has to do with FCPX but I'll have a look and try and figure it out.

  • Distributed Processing

    I have a design question. I am looking to do some distributed processing on my data set (which is basically an object with six integer fields and a string). However, the processing will involve more than one object at a time (e.g. each request needs to work with all objects with attribute = value), which is potentially a large number. I think that rules out entry processors...unless I can run a filter within an entry processor. There is however also the scalability issue in that I will eventually have about 50 million objects minimum.
    As an invocable, I think I'll run into re-entrancy issues but assuming I can run queries on the backing map, I believe there is still the issue that there is no guarantee that all data will be processed and fail-overs etc. And from the documentation, I get the impression that invocables are meant more for management related tasks than raw data processing. Is there a way to guarantee fail-overs and complete processing? Should I really be doing heavy-duty querying on the backing map?
    I am also unable to partition some of the data off to another cache service as although an object A will have many related objects B, an object B can also have many related A (n:n relationship). Objects are related if they have the same value x for any attribute y.
    Is there any other option for processing this data? Result set sizes can run into millions.
    Please feel free to ask me more details. I have only attempted to give an overview of my problem here.
    Example
    Cache contains 10 million objects of type A, which define many-to-many relationship between a and b
    Cache contains 10 million objects of type A, which define many-to-many relationship between b and c
    Cache contains 10 million objects of type A, which define many-to-many relationship between c and d
    Same type objects are used since nature of relationship is the same in all cases.
    The challenge is to find the set <a,d>, where the <a,b>, <b,c> and <c,d> relationships hold true. Sets a, b, c and d are not necessarily distinct, in that values in d could also be in a, for example.
    Another example is to find set <d, b> where d values in set (x, y, z).
    Thanks in advance for your advice.

    Hi,
    user11218537 wrote:
    Thanks Robert. Very helpful comments.
    The way I have been doing intersections is by using the InFilter, with the appropriate extractors.
    For example, running with the same example you have put down:
    1. Retrieve a.x1...a.x6 from a. Let's call this S1.
    2. Retrieve the union of all b.x(i)-s for b-s where b.x1...b.x6 intersected with S1 is not empty. Let's call this union S2.
    3. Retrieve the union of all c.x(i)-s for c-s where c.x1...c.x6 intersected with S2 is not empty. Let's call this union S3.
    4. Retrieve all d.id-s where d.x1...d.x6 intersected with S3 is not empty.
    The thing is that in the above case, S1 runs into millions, as do S2, S3, S4. I have indexes defined on all six fields so retrieving S1 is not the issue, say with extractor getProperty1()
    In this case, I may have misunderstood what you had to do, but in my example S1 would be the up-to-six different integer values. You could possibly provide a bit more detailed information on what you need to do.
    But say S1 is 1 million strong. Defining then an InFilter from these values to retrieve matching S2 causes problems with delays and outofmemory exceptions. I am currently doing something like InFilter(S1, getProperty3()) for example, saying S2 contains objects where property 3 has any of the values in S1.
    Do you mean I should be doing the intersection manually, rather than cascading down the set of values via the InFilter? To resolve the outofmemory issues (sometimes with heap space, sometimes with "PacketReceiver"), I chop down the sets into 1000-10,000 strong InFilter and run the multiple smaller instances. Takes a long time though. Basically, what is the correct way to get 1 million objects with matching values from say a 50 million strong set? I'm guessing InFilter is inherently parallel in its implementation.
    My point is that you don't get them on the client side. You just work with the property values in a parallel aggregator to join up the values to S2.
    I'll look at the ParallelAwareAggregator to see if I could do the same in parallel. So instead of doing millions, each node is doing 10-100,000 from the set S1, and a proportional number from S2, S3 and S4. I'm guessing the logic will be the same, where InFilters are used to do the intersection.
    And you don't need to return the full set of matching cache entries to the client side at all, only the next partial result (which you still need to union up on the client side in aggregateResults).
    I will look at whether reverse maps are possible, but if I used a reverse map, would that be faster since I do have already have relevant indexes defined. I.e. is cache.get(keys) faster than cache.keySet(new Filter(keys, getProperty1()), if getProperty1() is indexed?
    Thanks.
    The reverse cache, among others:
    - gives you a single node to go for the set of cache keys having that particular x(i) value for the entire cache not only for that particular node, so it makes it more scalable than having to go for it to all nodes... also, since it is materialized in a cache, it possibly has a lower memory footprint as well (single serialized set, instead of one set per node and objects for each backing map key in the reverse index value)
    - gives you some control on the amount of data you retrieve from the cache at the same time, if you bump into OutOfMemory problems (you don't need to come up with the full matching set of keys in one go).
    You can also access the reverse cache from an aggregator to cut down on the latency for multiple reverse cache lookups if the reverse cache values likely have a large common part...
    Best regards,
    Robert

  • Why does iTunes stutter when using AE on a wired network?

    iTunes stutters badly when I try to use AE with remote speakers. I am running on an older PC with 512 MB of RAM and a Pentium 4 2.6 GHz chip. All my Apple iTunes software and firmware is up to date. I have already gone to QuickTime and am now using Safe Mode with WaveOut only, and that doesn't work, even though many posts suggest this as a solution. I also note that when I turn off the remote speakers, iTunes plays back perfectly on my computer speakers. I monitored my CPU activity and it does seem to correspond with the stuttering, but it only reaches a peak of 15-20%, so it's not like I am taxing the system. Also, it's not a wireless problem, since I am doing this with an AE unit in wired mode (though I am using a powerline system to wire my house rather than Ethernet cable). I have searched high and low for an answer. I am guessing that my old processor just cannot handle the demands that the AE unit requires to stay in sync, or something along those lines. Thanks for any help or wisdom.

    Hi, I am having the exact same problem. When I connect to the AE wirelessly, iTunes streams music to it and through my amp / speakers without any skips / stuttering. When I dock the laptop (which disables the wireless card) and iTunes streams using the wired LAN, it starts to skip every 5 or 6 seconds after approximately 4 minutes of play. I have also tried changing the QuickTime settings (though I feel these may only fix the skipping problem when playing through the PC directly) and the buffer time in iTunes. This problem is really annoying; can anyone help?

  • [SOLVED] GRUB2 does not process hooks: System doesn't boot

    My initial system was an SSD where /dev/sda1 was my boot partition and /dev/sda2 was an (encrypted) LVM containing home, root, var and swap. Since all partitions were ext3, I decided to do a clean format to ext4 and copy my data back on the partitions. First I archived everything but home with the arch live CD onto a server with:
    rsync -a /mnt/* root@server:/path/to/backupdir/
    Everything went fine, but after copying back, the system would not boot (it was GRUB Legacy). Since it was not in the MBR but on sda1, I figured I could upgrade to GRUB2 now. So I followed the described procedure (from a live CD) and installed GRUB2, this time into the MBR of sda. I then regenerated the image via mkinitcpio -p linux and generated a configuration file.
    When I try to start the system, GRUB2 gets loaded, but after the two messages for loading the ramdisk it remains silent for some time (no output at all) until it finally complains it cannot find my root. But I did not see any output of any hook being processed (including encrypt) so of course it cannot find my root, since it is still encrypted.
    I reformatted my boot partition again and reinstalled and regenerated everything again (I copied the directory contents from my backup but moved the old grub folder). Still the same issue. I know I could probably just reinstall everything and restore the settings, but I'd really prefer to restore my system, since this should be a lot faster.
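    Roughly, the live-CD sequence described above looks like this; the device names and the VolGroup00 volume group are taken from the grub.cfg below and may differ on another box, so treat it as a sketch rather than a recipe:
    # From the Arch live CD: unlock the encrypted container and activate LVM
    cryptsetup luksOpen /dev/sda2 lvm
    vgchange -ay VolGroup00
    # Mount the installed system and chroot into it
    mount /dev/mapper/VolGroup00-root /mnt
    mount /dev/sda1 /mnt/boot
    arch-chroot /mnt
    # Then, inside the chroot: install GRUB2 to the MBR, regenerate the
    # initramfs (this is what runs the HOOKS list below) and the GRUB2 config
    grub-install /dev/sda
    mkinitcpio -p linux
    grub-mkconfig -o /boot/grub/grub.cfg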
    Here are the relevant configuration files:
    rc.conf
    # /etc/rc.conf - Main Configuration for Arch Linux
    # LOCALIZATION
    # LOCALE: available languages can be listed with the 'locale -a' command
    # LANG in /etc/locale.conf takes precedence
    # DAEMON_LOCALE: If set to 'yes', use $LOCALE as the locale during daemon
    # startup and during the boot process. If set to 'no', the C locale is used.
    # HARDWARECLOCK: set to "", "UTC" or "localtime", any other value will result
    # in the hardware clock being left untouched (useful for virtualization)
    # Note: Using "localtime" is discouraged, using "" makes hwclock fall back
    # to the value in /var/lib/hwclock/adjfile
    # TIMEZONE: timezones are found in /usr/share/zoneinfo
    # Note: if unset, the value in /etc/localtime is used unchanged
    # KEYMAP: keymaps are found in /usr/share/kbd/keymaps
    # CONSOLEFONT: found in /usr/share/kbd/consolefonts (only needed for non-US)
    # CONSOLEMAP: found in /usr/share/kbd/consoletrans
    # USECOLOR: use ANSI color sequences in startup messages
    LOCALE="en_US.UTF-8"
    DAEMON_LOCALE="no"
    HARDWARECLOCK="UTC"
    TIMEZONE="Europe/Berlin"
    KEYMAP="de-latin1-nodeadkeys"
    #CONSOLEFONT=
    #CONSOLEMAP=
    USECOLOR="yes"
    # HARDWARE
    # MODULES: Modules to load at boot-up. Blacklisting is no longer supported.
    # Replace every !module by an entry as on the following line in a file in
    # /etc/modprobe.d:
    # blacklist module
    # See "man modprobe.conf" for details.
    MODULES=(acpi-cpufreq cpufreq_ondemand tun fuse vboxdrv)
    # Udev settle timeout (default to 30)
    UDEV_TIMEOUT=30
    # Scan for FakeRAID (dmraid) Volumes at startup
    USEDMRAID="no"
    # Scan for BTRFS volumes at startup
    USEBTRFS="no"
    # Scan for LVM volume groups at startup, required if you use LVM
    USELVM="yes"
    # NETWORKING
    # HOSTNAME: Hostname of machine. Should also be put in /etc/hosts
    HOSTNAME="archlaptop"
    # Use 'ip addr' or 'ls /sys/class/net/' to see all available interfaces.
    # Wired network setup
    # - interface: name of device (required)
    # - address: IP address (leave blank for DHCP)
    # - netmask: subnet mask (ignored for DHCP) (optional, defaults to 255.255.255.0)
    # - broadcast: broadcast address (ignored for DHCP) (optional)
    # - gateway: default route (ignored for DHCP)
    # Static IP example
    # interface=eth0
    # address=192.168.0.2
    # netmask=255.255.255.0
    # broadcast=192.168.0.255
    # gateway=192.168.0.1
    # DHCP example
    # interface=eth0
    # address=
    # netmask=
    # gateway=
    interface=wlan0
    address=
    netmask=
    broadcast=
    gateway=
    # Setting this to "yes" will skip network shutdown.
    # This is required if your root device is on NFS.
    NETWORK_PERSIST="no"
    # Enable these netcfg profiles at boot-up. These are useful if you happen to
    # need more advanced network features than the simple network service
    # supports, such as multiple network configurations (ie, laptop users)
    # - set to 'menu' to present a menu during boot-up (dialog package required)
    # - prefix an entry with a ! to disable it
    # Network profiles are found in /etc/network.d
    # This requires the netcfg package
    NETWORKS=(FlosAP)
    # DAEMONS
    # Daemons to start at boot-up (in this order)
    # - prefix a daemon with a ! to disable it
    # - prefix a daemon with a @ to start it up in the background
    # If you are sure nothing else touches your hardware clock (such as ntpd or
    # a dual-boot), you might want to enable 'hwclock'. Note that this will only
    # make a difference if the hwclock program has been calibrated correctly.
    # If you use a network filesystem you should enable 'netfs'.
    DAEMONS=(syslog-ng dbus acpid crond alsa networkmanager @bumblebeed laptop-mode !hwclock ntpd psd)
    mkinitcpio.conf
    # vim:set ft=sh
    # MODULES
    # The following modules are loaded before any boot hooks are
    # run. Advanced users may wish to specify all system modules
    # in this array. For instance:
    # MODULES="piix ide_disk reiserfs"
    MODULES=""
    # BINARIES
    # This setting includes any additional binaries a given user may
    # wish into the CPIO image. This is run first, so it may be used to
    # override the actual binaries used in a given hook.
    # (Existing files are NOT overwritten if already added)
    # BINARIES are dependency parsed, so you may safely ignore libraries
    BINARIES=""
    # FILES
    # This setting is similar to BINARIES above, however, files are added
    # as-is and are not parsed in any way. This is useful for config files.
    # Some users may wish to include modprobe.conf for custom module options
    # like so:
    # FILES="/etc/modprobe.d/modprobe.conf"
    FILES=""
    # HOOKS
    # This is the most important setting in this file. The HOOKS control the
    # modules and scripts added to the image, and what happens at boot time.
    # Order is important, and it is recommended that you do not change the
    # order in which HOOKS are added. Run 'mkinitcpio -H <hook name>' for
    # help on a given hook.
    # 'base' is _required_ unless you know precisely what you are doing.
    # 'udev' is _required_ in order to automatically load modules
    # 'filesystems' is _required_ unless you specify your fs modules in MODULES
    # Examples:
    ## This setup specifies all modules in the MODULES setting above.
    ## No raid, lvm2, or encrypted root is needed.
    # HOOKS="base"
    ## This setup will autodetect all modules for your system and should
    ## work as a sane default
    # HOOKS="base udev autodetect pata scsi sata filesystems"
    ## This is identical to the above, except the old ide subsystem is
    ## used for IDE devices instead of the new pata subsystem.
    # HOOKS="base udev autodetect ide scsi sata filesystems"
    ## This setup will generate a 'full' image which supports most systems.
    ## No autodetection is done.
    # HOOKS="base udev pata scsi sata usb filesystems"
    ## This setup assembles a pata mdadm array with an encrypted root FS.
    ## Note: See 'mkinitcpio -H mdadm' for more information on raid devices.
    # HOOKS="base udev pata mdadm encrypt filesystems"
    ## This setup loads an lvm2 volume group on a usb device.
    # HOOKS="base udev usb lvm2 filesystems"
    HOOKS="base udev autodetect pata scsi sata keymap encrypt lvm2 resume filesystems usbinput"
    # COMPRESSION
    # Use this to compress the initramfs image. With kernels earlier than
    # 2.6.30, only gzip is supported, which is also the default. Newer kernels
    # support gzip, bzip2 and lzma. Kernels 2.6.38 and later support xz
    # compression.
    #COMPRESSION="gzip"
    #COMPRESSION="bzip2"
    #COMPRESSION="lzma"
    #COMPRESSION="xz"
    #COMPRESSION="lzop"
    # COMPRESSION_OPTIONS
    # Additional options for the compressor
    #COMPRESSION_OPTIONS=""
    grub.cfg
    # DO NOT EDIT THIS FILE
    # It is automatically generated by grub-mkconfig using templates
    # from /etc/grub.d and settings from /etc/default/grub
    ### BEGIN /etc/grub.d/00_header ###
    insmod part_gpt
    insmod part_msdos
    if [ -s $prefix/grubenv ]; then
    load_env
    fi
    set default="0"
    if [ x"${feature_menuentry_id}" = xy ]; then
    menuentry_id_option="--id"
    else
    menuentry_id_option=""
    fi
    export menuentry_id_option
    if [ "${prev_saved_entry}" ]; then
    set saved_entry="${prev_saved_entry}"
    save_env saved_entry
    set prev_saved_entry=
    save_env prev_saved_entry
    set boot_once=true
    fi
    function savedefault {
    if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
    fi
    }
    function load_video {
    if [ x$feature_all_video_module = xy ]; then
    insmod all_video
    else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
    fi
    }
    if loadfont unicode ; then
    set gfxmode=auto
    load_video
    insmod gfxterm
    set locale_dir=$prefix/locale
    set lang=en_US
    insmod gettext
    fi
    terminal_input console
    terminal_output gfxterm
    set timeout=5
    ### END /etc/grub.d/00_header ###
    ### BEGIN /etc/grub.d/10_linux ###
    menuentry 'Arch GNU/Linux, with Linux core repo kernel' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-core repo kernel-true-194e65d3-b357-430d-b4bb-67a8300d287d' {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_msdos
    insmod ext2
    set root='hd0,msdos1'
    if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 b69dee88-a8c9-4af7-a938-7ca6c8ff368c
    else
    search --no-floppy --fs-uuid --set=root b69dee88-a8c9-4af7-a938-7ca6c8ff368c
    fi
    echo 'Loading Linux core repo kernel ...'
    linux /vmlinuz-linux root=/dev/mapper/VolGroup00-root ro quiet
    echo 'Loading initial ramdisk ...'
    initrd /initramfs-linux.img
    }
    menuentry 'Arch GNU/Linux, with Linux core repo kernel (Fallback initramfs)' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-core repo kernel-fallback-194e65d3-b357-430d-b4bb-67a8300d287d' {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_msdos
    insmod ext2
    set root='hd0,msdos1'
    if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 b69dee88-a8c9-4af7-a938-7ca6c8ff368c
    else
    search --no-floppy --fs-uuid --set=root b69dee88-a8c9-4af7-a938-7ca6c8ff368c
    fi
    echo 'Loading Linux core repo kernel ...'
    linux /vmlinuz-linux root=/dev/mapper/VolGroup00-root ro quiet
    echo 'Loading initial ramdisk ...'
    initrd /initramfs-linux-fallback.img
    }
    ### END /etc/grub.d/10_linux ###
    ### BEGIN /etc/grub.d/20_linux_xen ###
    ### END /etc/grub.d/20_linux_xen ###
    ### BEGIN /etc/grub.d/20_memtest86+ ###
    ### END /etc/grub.d/20_memtest86+ ###
    ### BEGIN /etc/grub.d/30_os-prober ###
    ### END /etc/grub.d/30_os-prober ###
    ### BEGIN /etc/grub.d/40_custom ###
    # This file provides an easy way to add custom menu entries. Simply type the
    # menu entries you want to add after this comment. Be careful not to change
    # the 'exec tail' line above.
    ### END /etc/grub.d/40_custom ###
    ### BEGIN /etc/grub.d/41_custom ###
    if [ -f ${config_directory}/custom.cfg ]; then
    source ${config_directory}/custom.cfg
    elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
    source $prefix/custom.cfg;
    fi
    ### END /etc/grub.d/41_custom ###
    Thank you in advance. If you need any more information, please let me know.
    Regards,
    javex
    Last edited by javex (2012-08-10 18:33:41)

    Thank you for your reply. I looked further into the grub.cfg and removed the quiet part. Apparently the problem is that it runs all hooks but does not prompt me for a passphrase when running the encrypt hook. Why does this occur?
    Edit: I solved this: apparently I forgot to specify a cryptdevice. Since the article about dm-crypt does not talk about GRUB2, I missed that. I will rework that section to cover both GRUB2 and GRUB Legacy.
    Last edited by javex (2012-08-10 18:33:22)
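    For anyone else landing here: with the encrypt hook in mkinitcpio.conf, the kernel line GRUB2 generates needs a cryptdevice= parameter. A sketch of the fix, assuming the encrypted container is /dev/sda2 and using the VolGroup00 names from the grub.cfg above:
    # /etc/default/grub -- have grub-mkconfig add the cryptdevice parameter
    # (the name after the colon is just the /dev/mapper name for the unlocked container)
    GRUB_CMDLINE_LINUX="cryptdevice=/dev/sda2:lvm"
    # Then regenerate the config:
    #   grub-mkconfig -o /boot/grub/grub.cfg
    # so the menu entry's kernel line ends up roughly as:
    #   linux /vmlinuz-linux root=/dev/mapper/VolGroup00-root ro cryptdevice=/dev/sda2:lvm quiet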

  • Compressor won't do distributed processing with FCPX.

    Running 4.1.3 with FCPX and Send to Compressor has all of the distributed processing groups greyed out in the selection drop down. I can only run compressor on This Computer.
    If I export the file from FCPX and then drop that into Compressor from the finder it works fine. Is there a reason it is not working when directly connected?
    I have deleted and reinstalled  both apps but get the same results.

    The thought is that the Send to Compressor menu item should have an option to use Compressor with distributed processing. Why can it not just export to a temp location for me and then drop that into Compressor, instead of requiring me to export it, open the Finder, and drop it in myself? There could be a second menu item, "Send to Compressor (Distributed Encoding)", for which you set up a defined temp location in Preferences.
    Not a huge deal to click a few extra things, but would be nice to have things connected a little better.
    Side:
    I got a 4K camera and was about to get a 5K iMac when I realized by chance that the new 5K model does not support Target Display Mode (TDM) and I cannot cycle through my screenless Macs. Big bummer.

  • What amplifier would work best for Airport Express?

    I have built-in ceiling speakers in all my rooms, and all the rooms have their own Apple Mac mini or laptop.
    All the speaker wiring comes to a central media room, and in it I want to place 8 AirPort Expresses to allow each room to play through iTunes onto its ceiling speaker. Effectively we could also put all the speakers on from one room and create a party mode.
    The problem is that the AirPort Express requires powered speakers, and these ceiling speakers are not powered, of course. This means I would need to put an amplifier between the AirPort Express and the speaker (I think).
    Just wondering if anyone has successfully done this?
    I have had a look at multi zone amplifier like
    http://www.htd.com/whole-house-audio/mid-level-whole-house-audio/MCA-66-Audio-Controller-Amplifier
    Most multi-zone amplifiers have their own keypad, but with Apple you do not require this, as you can control it straight from your Mac! So what I want to do is control the input level on the amplifier and fix the output level on the amplifier, if I am thinking right... as I thought this would work best:
    then if you put the volume control on the Mac to MAX, the amplifier would already be preset to max and the ceiling speakers would rock.
    Any suggestions or .....
    Thanks

    http://www.macworld.com/article/143258/2009/10/airtuneswholehomeaudio.html
    In this Macworld article, they just use multiple Parasound Zamps, which are single zone amps that turn on and off by themselves, depending on when sound is passing through. I'm planning on doing this with my new construction because it looks like a simple setup with no remotes needed save the iPhone, which I always have on me.

  • How does the invocation between process and sub-process work?

    As I have read through some posts here, I know that the web service which is used to trigger the process is asynchronous.
    So is the invocation between a process and its sub-process also asynchronous, using a web service?
    Suppose there is a process like the one below:
    Start Event -> Sub-Process -> Automated Task -> End Event
    I have some questions I am confused about:
    1: Is invoking the sub-process asynchronous?
    2: If the invocation is asynchronous, is the next "Automated Task" triggered before the completion of the nested sub-process? It seems to be, but that does not seem reasonable.
    3: If the invocation is asynchronous, how does the process get the return value of the nested sub-process?

    Hi John,
    the invocation of sub-processes is done via an internal runtime mechanism and not via normal Web Service call.
    The sub-process call is synchronous, so the calling process is waiting for the reply from the sub-process. There are a few things to keep in mind when using sub-processes:
    If you are branching off parallel execution paths (via a split gateway) within the sub-process and are using a regular end event, only the first token arriving at the end event will return to the calling process. The sub-process will keep running until either the last of its tokens flows into the end event or until the calling process is being terminated. In any case, the calling process will not react to any more tokens arriving at the sub-processes end event.
    If you are using parallelism within the sub-process, but want it to terminate once the first token arrives at the end, use a terminating end event in the sub-process.
    Don't use the correlation condition at the start event of a sub-process. If this condition evaluates to 'false', the sub-process will never be started, but the calling process will wait forever for a reply.
    If you want the sub-process call to behave asynchronously, you can model the sub-process to split off a token to the execution path where it does the actual work and have another straight path directly from the split gateway to the (non-terminating) end event. That way the calling process will continue right away, but of course you cannot expect any response coming out of the sub-process.
    Best regards,
    Oliver
