Map listeners and write-through strategy.

Hi.
The write-through strategy seems to be a synchronous operation, and if it fails, no data should appear in the cache. Logically this means that no events will be produced if persisting fails (which is exactly what we need). But I could not find any mention of this in the documentation. Can anyone verify this?
Thanks, Anton.

If you are talking about throwing an exception in your CacheStore code,
it will happen before anything occurs in the internal cache managed by
Coherence, and no events will be generated (events that would otherwise
have been generated had the operation succeeded).
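
To illustrate, here is a minimal sketch of that behavior, assuming a write-through cache (hypothetically named "example-cache") whose read-write backing map is configured to use the CacheStore below; the class name, cache name, and failure mode are illustrative, not taken from the original post:

    import java.util.Collection;
    import java.util.Map;

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.CacheStore;
    import com.tangosol.util.AbstractMapListener;
    import com.tangosol.util.MapEvent;

    // Hypothetical write-through CacheStore that fails to persist. It would be
    // wired to the cache via a read-write-backing-map-scheme in the cache
    // configuration (not shown here).
    public class FailingCacheStore implements CacheStore {
        public Object load(Object key)          { return null; }
        public Map    loadAll(Collection keys)  { return null; }
        public void   erase(Object key)         { }
        public void   eraseAll(Collection keys) { }

        public void store(Object key, Object value) {
            // Simulate a database failure: the exception propagates back to the
            // client's put(), the entry never reaches the backing map, and no
            // MapEvent is delivered to listeners.
            throw new RuntimeException("database unavailable");
        }

        public void storeAll(Map entries) {
            throw new RuntimeException("database unavailable");
        }

        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("example-cache");

            cache.addMapListener(new AbstractMapListener() {
                public void entryInserted(MapEvent evt) {
                    System.out.println("insert event: " + evt); // not reached if store() throws
                }
            });

            try {
                cache.put("key-1", "value-1");
            } catch (RuntimeException e) {
                // The store failure surfaces here; the cache stays unchanged.
                System.out.println("put failed: " + e.getMessage());
                System.out.println("contains key-1? " + cache.containsKey("key-1")); // expected: false
            }
        }
    }

(See also the "Write-through Cache behavior during Transactional Operation" message below, where the <rollback-cachestore-failures> element that controls whether CacheStore exceptions are propagated back to the client is discussed.)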

Similar Messages

  • Transactional Caches and Write Through

    I've been trying to implement the use of multiple caches, each with write through, all within a transaction.
         The CacheFactory.commitTransactionCollection(..) method only seems to work correctly if the first transactionMap throws an exception in the database code.
         If the second transactionMap throws exceptions, the caches do not appear to rollback correctly.
         I can wrap the whole operation in a JDBC transaction that rolls back the database correctly but the caches are not all rolled back because they are committed one by one?
         For example, I write to two transaction maps, each one created from separate caches. When committing the transaction maps, the second transaction map causes a database exception. It appears the first transaction map has already committed its objects and doesn't roll back.
         Is it possible to use Coherence with multiple transaction maps and get all the caches and databases rolled back?
         I've also been trying to look at using coherence-tx.rar as described in the forums within WebLogic but I'm getting @@@@@ Failed to commit: javax.transaction.SystemException: Could not contact coordinator at null+SMARTPC:7001+null+t3+
         (SMARTPC being my pc name)
         Has anybody else had this problem? Bonus points for describing how to fix it!
         Mike

    > The transaction support in Coherence is for Local Transactions.
         > Basically, what this means is that the first phase of the commit
         > ("prepare") acquires locks and ensures that there are no conflicts.
         > The second phase ("commit") does nothing but push data out to
         > the caches.
         This means that once prepare succeeds (all locks acquired), commit will try to copy local data into the base map. If there is a failure on any put, rollback will undo any changes made. All locks are cleared at the end.
         > The problem is that when you are using a
         > CacheStore module, the exception is occurring during
         > the second phase.
         If you start using a CacheStore module, then the database update has to be part of the atomic procedure.
         >
         > For this reason, write-through and cache transactions
         > are not a supported combination.
         This is not true for a cache transaction that updates a single cache entry, right?
         >
         > For single-cache-entry updates, CacheStore operations
         > are fully fault-tolerant in that the cache and
         > database are guaranteed to be consistent during any
         > server failure (including failures during partial
         > updates). While the mechanisms for fault-tolerance
         > vary, this is true for both write-through and
         > write-behind caches.
         For the write-through case, I believe the database and cache are atomically updated.
         > Coherence does not support two-phase CacheStore
         > operations across multiple CacheStore instances. In
         > other words, if two cache entries are updated,
         > triggering calls to CacheStore modules sitting on
         > separate servers, it is possible for one database
         > update to succeed and for the other to fail.
         But once we have multiple CacheStore modules, as soon as one atomic write-through put succeeds, the database has already been updated for that specific put. There is no way to roll back that database update (although we can roll back the cache update). Therefore, you may end up with partial commits in situations where multiple cache entries are updated across different CacheStore modules.
         If I use write-behind CacheStore modules, can I roll back entirely and avoid partial commits, since writes are not immediately propagated to the database? In essence, write-behind cache stores would be no different from local transactions... Is my understanding correct?
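    For reference, here is a rough sketch of the multi-cache local transaction pattern Mike describes, using the pre-3.6 TransactionMap API and CacheFactory.commitTransactionCollection(); the cache names and values are hypothetical, and (per the discussion above) a write-through CacheStore failure in the commit phase of the second map would still not undo the first map's database write:

    import java.util.Arrays;
    import java.util.Collection;

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.TransactionMap;

    public class TwoCacheLocalTx {
        public static void main(String[] args) {
            NamedCache cacheA = CacheFactory.getCache("cacheA");   // hypothetical cache names
            NamedCache cacheB = CacheFactory.getCache("cacheB");

            TransactionMap txA = CacheFactory.getLocalTransaction(cacheA);
            TransactionMap txB = CacheFactory.getLocalTransaction(cacheB);

            for (TransactionMap tx : new TransactionMap[] {txA, txB}) {
                tx.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
                tx.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);
                tx.begin();
            }

            txA.put("a-key", "a-value");
            txB.put("b-key", "b-value");

            // Prepare (lock) and then commit both maps as a collection. The
            // two-phase protection covers the caches only: any database work
            // done by a write-through CacheStore happens during the commit
            // phase and is not rolled back if a later map fails.
            Collection txns = Arrays.asList(txA, txB);
            CacheFactory.commitTransactionCollection(txns, 1);
        }
    }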

  • Mapping Street2 and Street3 through idoc CREMAS

    Hi
    Is there a way to populate the fields STREET2 and STREET3 through the idoc CREMAS05?
    Thanks
    Nag

    Hi Kris
    Thanks for the info. But I could not find any table structure in the BAPI_BUPA_ADDRESS_CHANGE function which holds STREET2 and STREET3. Can you provide further info?
    Thanks for your inputs so far.
    Nag

  • Permissions to read and write on an external hard drive

    Currently I am working with two MacBook Airs, one with Mac OS X 10.9.5 and the other with Mac OS X 10.7.5.
    Also, I have most of my files on an external hard drive.
    When I try to move a file from the MacBook Air with Mac OS X 10.7.5 to the hard drive, I have no problem.
    When I try to move a file from the MacBook Air with Mac OS X 10.9.5 to the same hard drive, I am unable to do so
    because the hard drive appears as read only (drwxr-xr-x).
    Does anyone know how to change the settings to allow writing to the hard drive (being able to save files from my
    computer to the hard drive)?
    Thank you for your help.
    By the way, the hard drive is not an Apple drive.

    Dear Kappy
    That's the problem. In the information window no little lock appears anywhere, as it does in other windows
    such as Security & Privacy or Parental Controls.
    Perhaps the lock to unlock this window is somewhere else in the preference settings.
    I can try to see whether I can change these settings from the MacBook Air with Mac OS X 10.7.5,
    but so far I haven't been able to find a way to change the read and write permissions
    through the information window.
    Thank you for your collaboration
    Ramon

  • Write-through limitation and putAll

    Please find the quote below from the developer guide, particularly this sentence: "In other words, if two cache entries are updated, triggering calls to CacheStore modules sitting on separate cache servers, it is possible for one database update to succeed and for the other to fail."
    If a putAll is called on a cache, will it result in one CacheStore.storeAll or in many storeAll calls triggered from different Coherence nodes/servers? (Assume a distributed topology, Coherence 3.7.1.)
    Will a store transaction failure lead to a putAll transaction failure?
    Are there any patterns that show how this works in Coherence with typical databases?
    14.7.2 Write-Through Limitations
    Coherence does not support two-phase CacheStore operations across multiple CacheStore instances. In other words, if two cache entries are updated, triggering calls to CacheStore modules sitting on separate cache servers, it is possible for one database update to succeed and for the other to fail. In this case, it may be preferable to use a cache-aside architecture (updating the cache and database as two separate components of a single transaction) with the application server transaction manager. In many cases it is possible to design the database schema to prevent logical commit failures (but obviously not server failures). Write-behind caching avoids this issue as "puts" are not affected by database behavior (as the underlying issues have been addressed earlier in the design process).

    gs100 wrote:
    Thanks for the input, I have further questions based on these suggestions.
    1. Let us say one of the putAll calls fails; we would know that it has failed due to one or more underlying store/storeAll calls. And even if we roll back the Coherence transaction, the store/storeAll that succeeded would not be rolled back automatically, is that correct? If true, this means it would leave the underlying DB/store in a state inconsistent with the in-memory cache?
    I guess that is one of the reasons why the transaction framework does not support cache stores... also, write-behind would coalesce updates, which would have funny consequences with regards to the transactional context...
    2. How do we get the custom implementation of putAll that you suggested, so it can handle specific errors? Any pointers on this would be helpful.
    I guess it is not going to be posted; the Coherence team may or may not add something which is a bit more deterministic with regards to errors.
    A few aspects of Coherence behaviour (a.k.a pitfalls) which you need to be aware of to be able to implement your own solution:
    Exceptions propagating back to the client can happen in:
    - entry-processor (not for putAll specifically)
    - result serialization code (not for putAll specifically, but for processAll/aggregate for example)
    - deserialization code (indexes/filter-based backing map listeners/cache stores lead to deserialization even for putAll)
    - triggers (intentionally, too)
    - cache stores
    There is no place where you could catch any exceptions from inside the NamedCache call, so they will come out.
    Coherence may execute the operation on one thread per partition or one thread per multiple partitions, but never on multiple threads per partition. This means there may be multiple exceptions even from a single storage node, but only at most one exception would be generated per partition (starting with 3.5).
    If you send multiple partitions with the same NamedCache call, you can lose exceptions as you wouldn't know if an exception would have or wouldn't have happened with a partition if it was sent alone instead of together with another on the same node.
    As you need to be able to return all exceptions from your method call, you have to produce and catch all of them and collect them otherwise you would lose all but one. To produce and catch all exceptions you have to produce all exceptions independently, i.e. different partitions must be operated on independently.
    To send an operation to a single partition only, you can separate the operations to different partitions by separating the keysets for different partitions with key-based operations, or applying a PartitionedFilter for filter-based operations.
    It is up to you where and how you iterate through the partitions. You can do it on the caller, you can do it on storage node from an Invocable sent via an InvocationService (in this case you can be either optimistic with ownership or chase a partition).
    3. Because we are thinking that the putAll Coherence implemented is the most optimized (parallelism), I am not sure how a custom implementation can be as optimal (hope we don't end up calling one by one).
    You cannot implement it as optimally as Coherence itself does, as Coherence interleaves operations (messages) to independent partitions/nodes from a single thread without waiting for the responses from individual nodes/partitions.
    You can either parallelize operations to multiple threads, or do the iteration on the single thread at the cost of higher latency.
    Best regards,
    Robert
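
    To make the per-partition splitting Robert describes concrete, here is a rough sketch (not from the original thread), assuming a partitioned (distributed) cache and using the standard KeyPartitioningStrategy to group keys; the error-collection policy shown is just one possibility:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    import com.tangosol.net.NamedCache;
    import com.tangosol.net.PartitionedService;

    // Sketch: split a putAll() into one call per partition, so that a failure
    // in one partition's cache store cannot hide exceptions (or successes)
    // from other partitions.
    public class PerPartitionPutAll {
        public static List<Exception> putAllByPartition(NamedCache cache, Map<?, ?> map) {
            PartitionedService service = (PartitionedService) cache.getCacheService();

            // Group the entries by the partition that owns each key.
            Map<Integer, Map<Object, Object>> byPartition = new HashMap<Integer, Map<Object, Object>>();
            for (Map.Entry<?, ?> entry : map.entrySet()) {
                int part = service.getKeyPartitioningStrategy().getKeyPartition(entry.getKey());
                Map<Object, Object> partMap = byPartition.get(part);
                if (partMap == null) {
                    partMap = new HashMap<Object, Object>();
                    byPartition.put(part, partMap);
                }
                partMap.put(entry.getKey(), entry.getValue());
            }

            // One putAll() per partition: at most one exception can come back
            // per partition, and all of them are collected instead of losing
            // all but one.
            List<Exception> failures = new ArrayList<Exception>();
            for (Map<Object, Object> partMap : byPartition.values()) {
                try {
                    cache.putAll(partMap);
                } catch (RuntimeException e) {
                    failures.add(e);
                }
            }
            return failures;
        }
    }

    As Robert notes, this loses the interleaving that a single putAll() gets for free; you could regain some parallelism by issuing the per-partition calls from multiple threads, at the cost of extra complexity.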

  • Write through and CacheStore

    Hi,
         I'm running a near cache implementation, with the front being a local cache and the back being a distributed cache. The distributed cache has a local cache and a read-write-backing-map-scheme for persisting each cache to disk every t minutes (for backup purposes - we still use a cache in memory).
         I have a few questions about the Write through capabilities and CacheStore so as to better understand what's going on here:
         1. We only need to store the data to disk for backup in case of complete cluster failure (for example, all n machines in our cluster go down). Right now my implementation of the CacheStore has one line which reads "return null" for the following methods:
         load(..)
         loadAll(..)
         Is there a more efficient/effective way to write to disk and ignore reads if an item does not exist in the distributed map, rather than extending CacheStore and returning null for all the methods noted above?
         My reading and writing to disk occurs using the ExternalizableHelper class; thanks for this nice work.
         2. How are CacheStore's instantiated initially? Since we want each cache (say we have two different caches here for simplicity) backed up to file every t minutes, do we have to have a separate CacheStore object for each cache? What is the best practice to attach a cachestore to a particular cache?
         For example, I start two Tangosol instances, one on machineA and one on machineB, both storing data as per my configuration. The 2 caches being used are "cacheA" and "cacheB". So when I start the two Tangosol instances, I have to instantiate CacheStore twice so that I can separately write "cacheA" and "cacheB" to their own separate files.
         3. When backup to disk occurs, is there any removing of items from the distributed cache?
         4. I'm not completely sure on the write delay here. What if an item in the cache is just added once, and no updates occur on it (ie. just one put, and 0+ gets). After a specified amount of time, will this be written to disk, or does an update on this object in the cache have to occur before this item can be added to the write queue and eventually written to disk? Once an item is added for the first time, does this trigger the update time for this object to be the first write time?
         Thanks,
         - Noah

    Hi Noah,
         1. No, load() and loadAll() returning null is the most effective way of implementing this.
         2. You can pass the cache name as a constructor parameter - see Parameter Macros in the Coherence User Guide.
         3. No, nothing is removed from the cache
         4. Writes are only triggered by put()'s.
         For more information please take a look at this forum post: What is Read-Through/Write-Through/Write-Behind Caching? (http://www.tangosol.net/forums/thread.jspa?threadID=445&tstart=0)
         Regards,
         Dimitri
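
    As a concrete illustration of Dimitri's point 2, here is a minimal sketch (the class name, file naming, and I/O are illustrative) of a backup-only CacheStore that receives its cache name through its constructor; in the cache configuration the constructor argument would be supplied with the {cache-name} parameter macro as an <init-param> of the cachestore-scheme:

    import java.util.Collection;
    import java.util.Map;

    import com.tangosol.net.cache.CacheStore;

    // Illustrative backup-only CacheStore: each cache writes to its own file,
    // derived from the cache name that Coherence passes to the constructor
    // via the {cache-name} parameter macro in the cache configuration.
    public class FileBackupStore implements CacheStore {
        private final String cacheName;

        public FileBackupStore(String cacheName) {
            this.cacheName = cacheName;              // e.g. "cacheA" or "cacheB"
        }

        // Backup only: reads always miss, so gets are served purely from the
        // distributed cache (see point 1 above).
        public Object load(Object key)         { return null; }
        public Map    loadAll(Collection keys) { return null; }

        public void store(Object key, Object value) {
            // Write the entry to a file derived from cacheName (for example
            // cacheName + ".bin"), e.g. using ExternalizableHelper as the
            // original poster already does; the actual I/O is omitted here.
        }

        public void storeAll(Map entries) {
            for (Object o : entries.entrySet()) {
                Map.Entry entry = (Map.Entry) o;
                store(entry.getKey(), entry.getValue());
            }
        }

        public void erase(Object key)          { }
        public void eraseAll(Collection keys)  { }
    }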

  • Trying to download updates to CoPilot Live and CoPilot GPS with maps. File sizes are large and taking hours to download on a wireless connection. How can I download app updates and new maps while connected to a PC and iTunes through a hard-wired internet link?

    Trying to download updates to CoPilot Live and CoPilot GPS with maps. File sizes are large and taking hours to download on a wireless connection. How can I download updates and new maps while connected to a PC and iTunes through a hard-wired internet link?

    I'm on my iPad, so I don't know if this is the page with an actual download. I don't see a button, but assume that is because I  am on an iPad. It is in the DL section of Apple downloads.
    http://support.apple.com/kb/DL1708

  • How to create a new sheet in an Excel file and write into it

    Hi to all, my requirement is this:
    I have an Excel file (.xls) with multiple sheets (three sheets, to be precise: sheet1, sheet2 and sheet3).
    I must create a new sheet in the same Excel file, say sheet4, read data from sheet1, sheet2 and sheet3, and write them into sheet4.
    How can I accomplish this in SSIS?
    Especially: is it possible to do this in SSIS at all?
    thanks in advance.

    You need to create the sheet with the required metadata. There's no use creating a blank sheet.
    Also, the metadata has to be fixed using a sample sheet prior to the start of the package for setting the mapping. You may create an Excel template for this purpose just to set the mapping at design time.
    At runtime, pass the actual Excel sheet path to the ExcelFilePath property of the source to point to the correct file.
    Also, the CREATE TABLE statement should include the same metadata info (columns).
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs
    Hi, well it works fine! I have used a sample sheet prior to the start of the package for setting the mapping.
    Next, at run time I pass the actual Excel sheet path to the ExcelFilePath property of the source to point to the correct file.
    However, I have also set the ValidateExternalMetadata property of the Excel Destination to false and the DelayValidation property of the package to true.
    Now my problem is the following:
    I stored the three sheet names (sheet1, sheet2, sheet3) in an Object-type user variable through a script task, and the foreach loop loops through the three sheets stored in the variable.
    Now, I want the CURRENT VALUE of the Object-type user variable at each iteration to be mapped into another user variable, which in turn is given as input to the Excel Source.
    To summarize:
    -) I have to copy the content of three sheets (sheet1, sheet2, sheet3) of the Excel file into another, new sheet, sheet4.
    -) I stored the three sheet names (sheet1, sheet2, sheet3) in an Object-type user variable.
    -) With a foreach loop container I loop over the three sheets stored in the Object-type user variable.
    -) Within the foreach loop container I have a Data Flow Task that transfers data from the source sheet (sheet1, sheet2, sheet3) into the destination sheet, sheet4.
    -) PROBLEM: how do I change the name of the source sheet dynamically at each iteration of the foreach loop? In the destination Excel task the sheet is always the same at each iteration, sheet4. But in the source Excel task the name of the sheet must change at each
    iteration: at the first iteration the name of the source sheet must be "sheet1$", at the second iteration "sheet2$", and at the last iteration "sheet3$".
    How do I change the sheet name dynamically at each iteration?
    thanks.

  • Map/Reduce and Affinity

    First of all, let me describe the use case.
    There are two types of objects (and accordingly two distributed caches): Child(cid, pid, cdata1, ...) and Parent(pid, pdata1, ...). The Child's key has a key association, so child objects are stored in the same partition as their parent. I need to run a query against parents with several constraints applied. This set of constraints contains a constraint based on data from the corresponding child objects. How can I do this efficiently, without precalculating a value, putting it in the Parent objects, and applying the constraint to it?
    Example:
    User(uid, firstname, lastname) and Address(aid, uid, zip) (user can have several addresses)
    query is: select all users which firstname equal to 'John' and have address with zip equal to '65535'
    The second case is much the same, but the query is slightly different: select all addresses with zip equal to '65535' whose user's firstname is equal to 'John'.
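
    (For context, here is a minimal sketch of the key association described above; the class and field names are illustrative. The child key reports the parent id as its associated key, so Coherence places each Child entry in the same partition as its Parent.)

    import java.io.Serializable;

    import com.tangosol.net.cache.KeyAssociation;

    // Illustrative child-cache key: getAssociatedKey() returns the parent id,
    // so the default key association keeps each Child entry in the same
    // partition as the Parent entry keyed by that pid.
    public class ChildKey implements KeyAssociation, Serializable {
        private final Object cid;   // child id
        private final Object pid;   // parent id

        public ChildKey(Object cid, Object pid) {
            this.cid = cid;
            this.pid = pid;
        }

        public Object getAssociatedKey() {
            return pid;             // co-locate with the Parent entry keyed by pid
        }

        public boolean equals(Object o) {
            if (!(o instanceof ChildKey)) {
                return false;
            }
            ChildKey that = (ChildKey) o;
            return cid.equals(that.cid) && pid.equals(that.pid);
        }

        public int hashCode() {
            return 31 * cid.hashCode() + pid.hashCode();
        }
    }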

    Hi,
    I was asking about immutability of fields used to establish the join relationship between the parent and the child and the rest of the attributes in one of the tables.
    But that imposes too many restrictions, I believe.
    OK, in the general case of map/reduce you can just do an aggregation and idempotently put the reduced info into a cache in a DIFFERENT cache service from the one you are aggregating over (always the same data on the same key, even for retries). This also means that the granularity of such data must be chosen so that an entry of reduced data is reduced from entries in only a single partition of the mapped data, if this is not the final reduce step.
    Let's look at an example: you are doing a two step map/reduce.
    First step maps entries like this:
    Mapped data keys are:
    P1K1
    P1K2
    P2K3
    P3K4
    where with the notation P2K3 I mean that Coherence assigned the key to partition 2. This assignment has to do with Coherence's key-partitioning strategy (which is pluggable, by the way), you cannot explicitly decide it on an entry-by-entry basis.
    Let's assume that P1 and P2 are at the time both owned by node X, P3 is owned by node Y.
    When you are doing an aggregation of this cache, it is possible (likely) that your aggregator on node X will get entries for keys P1K1, P1K2, P2K3.
    Your aggregator must create a reduced result for (P1K1 + P1K2) which is a separate from the reduced result entry from P2K3. That result entry must only depend on the data in the entries from which it was reduced respectively (P1K1 + P1K2 or P2K3, but not both) and not on anything else. This is because Coherence may reexecute that
    aggregation later on a node which has only P1 but not P2 if node X died, and the cache might also have one of the reduced results and you would end up either with corrupt results or you would have to write more complex code to cover this case.
    You have to put these RP1K12, RP2K3, RP3K4 reduced entries to a cache separate from the mapped cache so that you are not facing the possibility of deadlocks or thread-starvation in the service of the aggregated cache.
    This unfortunately will mean that the reduced result will likely not be on the same node as the mapped data, and that would mean lower performance compared to if you were able to do this in the same node in case your both caches you want to join together were on the same node.
    Another approach, for affine data, is that from the aggregator you enqueue the reduced data to a local queue, from which its consumer fires off the second step of aggregations against the other cache to be mapped, with that aggregation holding the first step's reduced data in its state. You will have to take care of duplicate checking of partial reduced data in the final reduce step, so each step of your reduced data must carry its source partitions from all preceding steps.
    I will possibly show an example of this in a couple of days, but I will have to iron out a couple of things, first, like ensuring at-least once guarantees for starting the next step.
    Alternatively you can build something similar with InvocationService which helps a bit by providing an asynchronous invocation possibility.
    Best regards,
    Robert

  • Can you please tell me how to write a Design Strategy for the IDoc --> File scenario

    Hi,
    Can anybody please tell me how to write the design strategy, preprocessing logic and post-processing logic for the IDoc --> File scenario?
        My idoc is:
                Message Type: ZMMIPORDCHG
                Basic Type  : ORDERS05.
    Lookups are available.

    Hi,
    take a look at this document for an IDOC to IDOC Scenario,
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cdded790-0201-0010-6db8-beb9bb2b2660
    Use it as a template to design one side of the Interface and also take a look at this blog,
    /people/prateek.shah/blog/2005/06/08/introduction-to-idoc-xi-file-scenario-and-complete-walk-through-for-starters
    Regards,
    Bhavesh

  • Write-through Cache behavior during Transactional Operation

    If a put is called on a write-through cache during a transaction (with Optimistic Read-Committed settings) that involves multiple caches, some set to write-through and others to write-behind, when will the store operation on the corresponding CacheStore be attempted?
         a) Immediately after the put() is called on the cache but before the transaction commit
         or
         b) Immediately after the transaction is committed irrespective of when the put is called

    Hi Abhay,
         The backing map (in this case, <tt>com.tangosol.net.cache.ReadWriteBackingMap</tt>) is responsible for calling the CacheStore implementation. When "commit" is called, Coherence will synchronously send the data to the backing map; the backing map then determines what to do with the data. In the case of ReadWriteBackingMap, it will either (depending on its configuration) synchronously call CacheStore (meaning that a store exception will interrupt your transaction) or queue the update for later (meaning that any store exception will occur after the cache transaction has completed).
         In 3.0, the <rollback-cachestore-failures> element under <read-write-backing-map-scheme> controls whether CacheStore exceptions are propagated back to the client. If you are using a release prior to 3.0, please see this FAQ Item on CacheStore Exceptions.
         Jon Purdy
         Tangosol, Inc.

  • Switching from write through to write behind automatically

    Hi,
    We are considering a Coherence solution to protect a customer facing application from outages due to database failures. This is for a financial company and the monetary value of each transaction is large and we want to provide 100% guarantee against data loss while not incurring any outages. We want to provide a write-through persistence to the database through Coherence which can switch to a write-behind automatically at runtime if the database persistence fails. Is this doable automatically and would it solve the problem I am trying to solve without losing any inflight transactions? Are there any real customer cases that were successful in achieving this using Coherence?
    Thanks
    Sairam
    Edited by: SKR on Feb 16, 2012 3:14 PM
    Edited by: SKR on Feb 16, 2012 3:15 PM

    SKR wrote:
    Jonathan.Knight wrote:
    Hi Sairam
    I know you can change the write-delay in JMX for a cache using write-behind, but I'm pretty certain you cannot make a write-through cache suddenly become a write-behind cache.
    I'm not sure why you think changing from write-through to write-behind will allow you to guarantee 100% no data loss - do you mean no loss of updates to the DB or no loss of data in the cache cluster? There are certainly scenarios that can occur where you can lose data from either the cluster or the DB that write-through or write-behind will not save you from. Presumably you want to use write-behind to allow for the DB to go down, although you will still need to configure Coherence to properly retry failed write-behind calls (CacheStore behaviour on failure). What happens to your data if you are using write-behind and you lose a partition from your cluster (i.e. you lose a physical machine, or two or more JVMs in a short space of time)? You have data loss - you cannot guarantee against this, you can only mitigate it and have a recovery policy/procedure.
    JK
    JK,
    Thanks for your reply. I should have explained the scenario better. What we are trying to do is to have our transactions commit to the database synchronously using write-through, so that during normal operation the data will be committed, persisted and durable in the database. But our RW database becomes a single point of failure, and if some problem occurs to the database during peak load time, we run the risk of an outage till we fix the database problem or fail over to the standby (we don't have a RAC architecture or automatic failover, and the manual switchover takes about 10 - 15 mins minimum). We want to avoid this by providing a cache-only operation mode during such a failure, where the customers can continue to transact and the writes will get queued in the cache. I do understand that losing both the database and the cache, or losing the primary and the backup in the cache, would result in data loss. But I am assuming such a dual failure is rare.
    We do not want to run write-behind all the time but only during the database failure window. From what you mentioned, it seems the runtime switching from write-through to write-behind is not available as an option.
    Sairam
    Hi Sairam,
    I would suggest that you configure write-behind to have a fairly short write-delay, and you only return a confirmation to the client
    - either after the write-behind succeeded (you can use a backing map listener to listen for the removal of the decoration that marks the entry as dirty)
    - or if the database went down (noticeable from the failure), then it is up to you whether you send a confirmation which also mentions that it is not persisted to disk yet, or not at all
    Best regards,
    Robert

  • Start up disk is full, and need new storage and back up strategy

    I am maxed out on both my iMac and MacBook Pro and recognise I need to introduce a new external hard drive into the equation.
    I currently have an Apple Time Capsule with a 2 TB drive, which backs up the 600 GB iMac and 250 GB MacBook.
    I would like to understand if I could point applications such as iTunes, iPhoto and iMovie towards an external drive (as long as it is always connected to either machine...or perhaps the network so all machines have access to the external drive)....and hopefully I would also be able to access my iTunes library (on the external drive) via an Apple TV...without any fuss.
    I would also like to use my Apple Time Capsule to back up both the iMac and MacBook internal drives...as well as recognising the data on the external drives that belongs to the respective computers.....e.g. the iTunes library etc.
    Is this at all possible???
    If so, which external hard drive solutions should I use ( I have seen some mention synology...but looked at the website, and so many models and not sure what I would need)???
    And how would I go about lifting and shifting data, redirecting applications to read and write to external drives, and getting time capsule to back up as above.....and possibly even have an additional back up strategy that allows me to take regular copies of all hard drives and keep content in an off site location for data security...possibly in a bootable format???
    Welcome all help and advice please.

    You should never, EVER let a computer hard drive get completely full, EVER!
    With Macs and OS X, you shouldn't let the hard drive get below 15 GB of free space.
    If it does, it's time for some hard drive housecleaning.
    Follow some of my tips for cleaning out, deleting and archiving data from your Mac's internal hard drive.
    Have you emptied your iMac's Trash icon in the Dock?
    If you use iPhoto, iPhoto has its own trash that needs to be emptied, also.
    If you store images in other locations other than iPhoto, then you will have to weed through these to determine what to archive and what to delete.
    If you use Apple Mail app, Apple Mail also has its own trash area that needs to be emptied, too!
    Delete any old or no longer needed emails and/or archive to disc, flash drives or external hard drive, older emails you want to save.
    Other things you can do to gain space.
    Once you have around 15 GBs regained, do a search, download and install OmniDisk Sweeper.
    This app will help you locate files that you can move/archive and/or delete from your system.
    STAY AWAY FROM DELETING ANY FILES FROM OS X SYSTEM FOLDER!
    Look through your Documents folder and delete any type of old useless type files like "Read Me" type files.
    Again, archive to disc, flash drives, ext. hard drives or delete any old documents you no longer use or immediately need.
    Look in your Applications folder, if you have applications you haven't used in a long time, if the app doesn't have a dedicated uninstaller, then you can simply drag it into the OS X Trash icon. IF the application has an uninstaller app, then use it to completely delete the app from your Mac.
    Download an app called OnyX for your version of OS X.
    When you install and launch it, let it do its initial automatic tests, then go to the cleaning and maintenance tabs and run the maintenance tabs that let OnyX clean out all web browser cache files, web browser histories, system cache files, delete old error log files.
    Typically, iTunes and iPhoto libraries are the biggest users of HD space.
    Move these files/data off of your internal drive to the external hard drive and delete them from the internal hard drive.
    If you have any other large folders of personal data or projects, these should be archived or moved, also, to the optical discs, flash drives or external hard drive and then either archived to disc and/or deleted off your internal hard drive.
    Good Luck!

  • Hi, I am having trouble getting an album to download. I have tried it on both my iPhone and laptop through iTunes but neither works. I am wondering if it could be the size of the album stopping it downloading (212 Tracks) Any Ideas?

    Hi, I am having trouble getting an album to download. I have tried it on both my iPhone and laptop through iTunes but neither works. I am wondering if it could be the size of the album stopping it downloading (212 Tracks) Any Ideas?

    These alerts occur due to timeouts or conflicts trying to write a file during download.
    If you encounter this issue while downloading something from the iTunes Store:
    Delete your iTunes Downloads folder, located in:
    Mac OS X:
  ~/Music/iTunes/iTunes Media/Downloads   Note: "iTunes Media" may appear as "iTunes Music". Also, the tilde (~) refers to your Home directory.
    After locating your iTunes Downloads folder:
    Quit iTunes.
    Delete the Downloads folder on your computer.
    Open iTunes.
    Choose Store > Check for Available Downloads.
    Enter your account name and password.
    Also review this support article, as the issue might be caused by your internet connection: http://support.apple.com/kb/ts1368
    Hope this helps.

  • How Do RoboHelp 9 WebHelp Generated Files Handle Map IDs and Aliases?

    The text below was written by our team's developer/architect. I am the help author who uses RoboHelp to write content and generate the help files, but I am clueless how it all gets generated and is deployed. Please help. We use RoboHelp 9. I use it in Windows XP and our app and help run on IE 7, 9, and Firefox (multiple versions).
    "Our application uses the numeric identifiers associated with the Map ID. For example, to get to the <appname>_home_page.htm file, we use the number 1053. <appname> = pecs, in this example.
    All of this is used in a call to a RoboHelp method defined in the RoboHelp_CSH.js file. The method we are calling is the RH_ShowHelp() JavaScript method and the code to perform the call, when you click on Page Help, is this:
    RH_ShowHelp(0, ''/pecsHelp/index.htm>pecsHelp',HH_HELP_CONTEXT,topic);
    Topic is translated to the Map ID number for the page help. HH_HELP_CONTEXT is defined in the RoboHelp_CSH.js file. This method translates into a URL and from what I have seen, the URL that gets generated is this:
    http://{server}[:port]/pecsHelp/index.htm/{server}[:port]/pecsHelp/index.htm#<id=1053>>pecsHelp
    Server and port get replaced with the appropriate values. I have no clue how id=1053 is supposed to get translated to mean "pecs_home_page.htm". If you check the PECS_help.h file, you will see the following entry:
    #define PECS_Home_Page1 1053
    Then in the RoboHelp alias file (PECS 3.0.ali), the following line is in the file:
    <alias name="PECS_Home_Page1" link="pecs_home_page.htm"> </alias>
    But both of these files are used during the WebHelp generation process and I don't know how the WebHelp generated files handle the Map ID and aliases."

    You need to assign the numbers you find in the pecs_help.h file to topics in your help. You do this in Context Sensitive Help > Map Files > All Map IDs. (From RH7, but I assume the location is similar in RH9.) This creates the entries in the .ali file.
    Peter Grainge suggests a couple of sites to read for a greater understanding here:
    http://www.grainge.org/pages/authoring/calling_webhelp/using_map_ids.htm
    (Although the second site is based on RH X5, the basic concepts and procedures should be very similar.)
    HTH,
    Amber
