Issues distributing large content to pull DPs

I am having issues distributing large files to pull DPs. All other packages go through fine, but when I try to distribute WIM files over 4 GB, they never start downloading. The pull DP shows a bits.tmp stub file the size of the actual file, and DataTransferService.log shows ErrorCode 0x80004005.
I initially thought it might be because my pull DP is running on a Windows Server 2003 server (IIS 6), but I tried distributing to a test Windows 7 pull DP setup and got the same result. The WAIK tools for Windows 7 have been installed on the pull DP and the proper DLLs are in the system32 folder.
I have been able to distribute WIM files of roughly 3.7 GB and smaller, but nothing larger than that to pull DPs. If I make these regular DPs instead of pull DPs, everything works fine. This, and the fact that all other packages distribute fine, tells me permissions and boundaries are not the issue. The WebDAV authoring rule is set to allow read access to all content for all users as well.
My site server is running SCCM 2012 R2 with CU1 on Windows Server 2008 R2 (so IIS 7.5), and I have updated the client on my pull DP as well.
Bitsadmin does not give me much more to work with: the job state is TRANSIENT_ERROR with no progress on the byte count, error code 0x80004005 (Unspecified error), and error context 0x00000005 (the error occurred while the remote file was being processed).
I have also modified the applicationHost.config file to allow all file extensions in the requestFiltering section, on the remote chance that this was somehow blocking the transfer. The issue seems to crop up partway through the transfer, not before it starts.
One other test I ran was distributing ISO files as packages instead of WIM files under operating system images; I got the same results with large ISO files as with WIM files.
Everything seems to point at some sort of 4 GB package limit for a pull DP, caused either by a setting on my site server or by the CCM client itself.
I am running out of ideas and was wondering if anyone has had this issue and has resolved it.
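A quick scan like the minimal sketch below can confirm which source files actually cross that threshold; the source path and extensions are just placeholders for this environment:

    import os

    SOURCE_ROOT = r"\\siteserver\Sources\OSImages"   # placeholder content source path
    LIMIT = 4 * 1024 ** 3                            # 4 GB in bytes

    # Walk the source share and report any image file larger than 4 GB.
    for dirpath, _dirs, files in os.walk(SOURCE_ROOT):
        for name in files:
            if name.lower().endswith((".wim", ".iso")):
                full = os.path.join(dirpath, name)
                size = os.path.getsize(full)
                if size > LIMIT:
                    print(f"{full}: {size / 1024 ** 3:.1f} GB")

Anything a scan like this flags lines up with the images described above that stall at the bits.tmp stage; everything under 4 GB distributes normally.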

This is a known issue for SCCM 2012. An official fix will be released in the near future. If you want to test it right away, contact CSS and open a case to get a test fix.
There is a KB article for this issue (an English version is not available yet):
http://support.microsoft.com/kb/2927111/fr
Juke Chou
TechNet Community Support

Similar Messages

  • Prestaging Content on Pull DP temporarily

    Hi,
    I've read the following in the MS documentation, which says that a pull DP configured for prestaged content does not pull from a source DP:
    "The allow prestaged content configuration for a distribution point overrides the pull-distribution point configuration. A pull-distribution point that is configured for prestaged content waits for the content. It does not pull content from the source distribution point, and, like a standard distribution point that has the prestage content configuration, does not receive content from the site server."
    My question is: once I have prestaged my content onto the pull DP, can I clear the checkbox that enables the pull DP for prestaged content and have the pull DP revert to pulling packages from its designated source DPs?
    In other words, once a pull DP is selected for prestaged content and the prestaged content has been distributed, can the configuration be reversed by unchecking the Allow Prestaged Content checkbox, so that the pull DP returns to normal pull DP behaviour and pulls packages over the network from its source DPs?
    Thanks!
    Grant

    Thank you Torsten. One quick follow-on question:
    I've read this in the docs as well:
    "When the distribution point is configured as a pull-distribution point, do not enable the distribution point for prestaged content. The prestage content configuration for a distribution point overrides the pull-distribution point configuration. A pull-distribution point that is configured for prestaged content does not pull content from source distribution point and does not receive content from the site server."
    Just wanted to be sure this also means that if we deploy a large number of pull DPs, check the box to allow prestaged content, import our content, and then uncheck the box enabling prestaged content, the pull DPs will go back to acting normally and pulling from their source DPs.
    Thanks again,
    Grant

  • Issues distributing content to DPs

    Hi, I have lots of issues distributing packages to DPs (driver packages seem especially bad). If you check distmgr.log I can't see anything really happening, just messages like the ones below.
    PkgXferMgr.log doesn't reveal much either; it doesn't seem to be doing a lot.
    I am not sure if I am doing something fundamentally wrong when distributing packages and apps. It just seems I have the same issues I had in 2007, where when I try to deploy, say, 20 packages to several DPs it is pot luck whether they make it to all of them.
    I really need this to work properly, so that for example when I create a new OSD image with 30 new apps I can deploy it to 20 DPs and be fairly confident it will make it to at least 18 of them. Is there a "best practice" for distributing packages?
    Also, the "All Active Content Distribution" report shows incorrect information. Is there a way of seeing what packages are currently being transferred, with a percentage complete or something?
    Appreciate any help here guys.
    permanently deleting E7BAD7D896128F3D1B76BE40B689A0A114CCD993E3E33981A7D6D51026FAD052 SMS_DISTRIBUTION_MANAGER 1/30/2014 10:41:01 AM 5468 (0x155C)
    Sleep 30 minutes... SMS_DISTRIBUTION_MANAGER 1/30/2014 10:41:59 AM 5256 (0x1488)
    permanently deleting DAF6E0AAA324A3E9250FBA7976CF6AD3EC60F37043CB9D3206E87DFF39484CBC SMS_DISTRIBUTION_MANAGER 1/30/2014 10:42:05 AM 5468 (0x155C)
    permanently deleting C50F170E53C27691CD60DFA2EA980576E7DEFC4136F15A0A29DEEE3B9548022D SMS_DISTRIBUTION_MANAGER 1/30/2014 10:43:10 AM 5468 (0x155C)
    permanently deleting 05967E93CD4A44B127E26F567986649DEDEDFB87BD463FCA32D2F5298A77FE36 SMS_DISTRIBUTION_MANAGER 1/30/2014 10:44:14 AM 5468 (0x155C)
    permanently deleting C7F74F40D708D590EDB5D2ED064CF9C279FB1EBE33EDED073391E4D5E1CEE046 SMS_DISTRIBUTION_MANAGER 1/30/2014 10:45:21 AM 5468 (0x155C)
    permanently deleting CDE3F166613BE98B2EE3EBE892DBBA176E22E0ADC1FCE31E5AF9D010154B3743 SMS_DISTRIBUTION_MANAGER 1/30/2014 10:46:28 AM 5468 (0x155C)
    permanently deleting D84E3303730133A2488C1EE4165AB7771AAD1D4E5E34D4BB4C99E53E96C8605B SMS_DISTRIBUTION_MANAGER 1/30/2014 10:47:37 AM 5468 (0x155C)

    Hi,
    It is also possible that the pull-distribution point cannot download the content from the source distribution point. In that case, PkgXferMgr.log, PullDP.log and DataTransferService.log need to be checked.
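    For example, a rough sketch like the one below can pull the failure lines out of those three logs in one pass; the log folder is a placeholder (PkgXferMgr.log lives on the site server, while PullDP.log and DataTransferService.log are on the pull DP):

        import os
        import re

        LOG_DIR = r"C:\Program Files\Microsoft Configuration Manager\Logs"  # placeholder; adjust per machine
        LOGS = ("PkgXferMgr.log", "PullDP.log", "DataTransferService.log")
        # Match common failure markers and HRESULT-style codes such as 0x80004005.
        PATTERN = re.compile(r"(failed|error code|0x8[0-9A-Fa-f]{7})", re.IGNORECASE)

        for log in LOGS:
            path = os.path.join(LOG_DIR, log)
            if not os.path.exists(path):
                continue
            with open(path, errors="ignore") as fh:
                for line in fh:
                    if PATTERN.search(line):
                        print(f"{log}: {line.strip()}")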
    Juke Chou
    TechNet Community Support

  • Pull DPs and rate limiting.

    Hi,
    I have a 2012 SP1 environment with 77 sites globally. These sites vary from 34 Mbps to 1 Mbps links and use rate limiting. I want to implement pull DPs to help with distribution issues, but am up against the rate-limiting issue. I can't configure these remote sites to use a more local site and keep their control over how much bandwidth they use, so I wondered how other people got around this?
    I currently have a star network, but want to make 5 of the larger-bandwidth sites be source locations for content too.
    Thanks
    Kim 

    Hi,
    For pull DPs you can restrict the bandwidth that is used by BITS on the pull DP. It is covered here as well: http://blogs.technet.com/b/configmgrteam/archive/2013/06/06/introducing-the-pull-distribution-points.aspx
    You can configure it either through the admin console or through group policy, for instance.
    Regards,
    Jörgen
    -- My System Center blog ccmexec.com -- Twitter
    @ccmexec

  • Need help with "Very large content bounds" error...

    Hey guys,
    I've been having an issue with Adobe Muse [V7.0, Build 314, CL 778523] - one of the widgets I tried from the Exchange library seemed to bug out and created a large content box.
    This resulted in this error:
    Assert: "Very large content bounds (W:532767.1999999997 H:147446.49743999972) detected in BoxUtils::childBounds"
    Does anyone know how I could fix this issue?

    Hi there -
    Your file has been repaired and emailed back to you. Please let us know if you run into any other issues.
    Thanks!
    -Sam

  • Distributing the content directly to Secondary Site

    We have a particular application that is used at only one of our sites, which has a secondary site (SS1).
    My question is: when I distribute the content, do I always need to distribute it to the primary site (PS1) first, or can I skip the primary and distribute directly to the secondary site SS1?
    We have 1 primary site and 4 secondary sites, along with a few additional DPs.

    Can you please help me understand how I should distribute the content? The case is given below:
    Primary site @ UK
    Secondary site (along with the DP role) @ Australia
    Location of content: network share in Australia.
    The application I want to distribute is only used in Australia.
    So after creating the application in SCCM, how should I distribute the content?
    A) Only to the secondary site
    B) To the primary site first and then to the secondary site
    If I use A, the status keeps showing "Waiting for content" for days, but if I use B it works perfectly fine without any hiccups.

  • Oracle RDF / Joseki : issue with large literals

    Hi,
    I have been using Joseki to query an Oracle RDF model. There seems to be an issue with large literals (according to a few unreliable tests, I would say this concerns literals around and over 4000 chars).
    Here are the two potential behaviours :
    First case:
    If the results contain several lines, one of which contains an overly large literal, there is NO exception on the server side, but the resulting XML is incomplete.
    It is missing the "line" containing the large literal, and the XML stops there (which means it also misses the closing </results> and </sparql> tags). In my case, I am using the results through Jena's sparqlService, which means I get this message:
    XMLStreamException: Unexpected EOF; was expecting a close tag for element <results>
    at [row,col {unknown-source}]: [31,0]
    Second case:
    If the query only returns one line which contains an overly large literal, the client receives a plain "HttpException: 500 Internal Server Error".
    Here is the error message from my server:
    INFO [[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'] (SPARQL.java:165) - Throwable: weblogic.jdbc.wrapper.Clob_oracle_sql_CLOB cannot be cast to oracle.sql.CLOB
    java.lang.ClassCastException: weblogic.jdbc.wrapper.Clob_oracle_sql_CLOB cannot be cast to oracle.sql.CLOB
    at oracle.spatial.rdf.client.jena.OracleSemIterator.getNodesFromResultSet(OracleSemIterator.java:579)
    at oracle.spatial.rdf.client.jena.OracleSemIterator.next(OracleSemIterator.java:445)
    at oracle.spatial.rdf.client.jena.OracleLeanQueryIter.moveToNextBinding(OracleLeanQueryIter.java:135)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.nextBinding(QueryIteratorBase.java:98)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterConvert.moveToNextBinding(QueryIterConvert.java:56)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.nextBinding(QueryIteratorBase.java:98)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterRepeatApply.moveToNextBinding(QueryIterRepeatApply.java:76)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.nextBinding(QueryIteratorBase.java:98)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:54)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:69)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIterConvert.hasNextBinding(QueryIterConvert.java:50)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:69)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorWrapper.hasNextBinding(QueryIteratorWrapper.java:30)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:69)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorWrapper.hasNextBinding(QueryIteratorWrapper.java:30)
    at com.hp.hpl.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:69)
    at com.hp.hpl.jena.sparql.engine.ResultSetStream.hasNext(ResultSetStream.java:62)
    at org.joseki.processors.SPARQL.executeQuery(SPARQL.java:309)
    at org.joseki.processors.SPARQL.execQueryWorker(SPARQL.java:288)
    at org.joseki.processors.SPARQL.execQueryProtected(SPARQL.java:126)
    at org.joseki.processors.SPARQL.execOperation(SPARQL.java:120)
    at org.joseki.processors.ProcessorBase.exec(ProcessorBase.java:112)
    at org.joseki.ServiceRequest.exec(ServiceRequest.java:36)
    at org.joseki.Dispatcher.dispatch(Dispatcher.java:59)
    at org.joseki.http.Servlet.doCommon(Servlet.java:177)
    at org.joseki.http.Servlet.doGet(Servlet.java:138)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Would there be any fix or workaround?
    Please let me know if you need further information or tests.
    Thanks,
    Regards,
    Julien

    Thanks for your reply.
    While trying to build up a small test case, I found out why there were discrepancies between the two cases I described.
    Indeed, usually, the two cases return the same thing (no exception on the server side, but incomplete resulting XML).
    The exception I described happened when I tried something else. Since I saw that issues were coming from long literals, I used fn:string-length (ARQ) to figure out how long they were.
    The test case resulting in the CLOB-cast-exception is:
    - a literal that is too large
    - only one result "line" containing this literal
    - usage of fn:string-length (which does not change the behaviour in the other cases, i.e. no long literals and/or several lines)
    Anyway, you will receive the other test cases shortly.
    Thanks,
    Regards,
    Julien

  • Issues distributing packages and receiving Win32 Error = 0

    Within the last month or so, we've been experiencing issues distributing various packages to our DPs. There doesn't seem to be any rhyme or reason to it; it can be any combination of packages and DPs.
    We've noticed that just before the Win32 error occurs, the following 4 lines usually precede the error:
    UpdateStagedFile failed; 0x80070040
    UpdateStagedFolderRdc failed; 0x80070040
    UpdateStagedFolderRdc failed; 0x80070040
    UpdateStagedFolderRdcW failed; 0x80070040
    We've had an open ticket with MS for over a week now, but there hasn't been any progress. 
    Any light you guys could shed on this would be greatly appreciated!

    This is a known issue for SCCM 2012. An official fix will be released in the near future. If you want to test it right away, contact CSS and open a case to get a test fix.
    There is a KB article for this issue (an English version is not available yet):
    http://support.microsoft.com/kb/2927111/fr
    Juke Chou
    TechNet Community Support

  • Issue Playing SCORM Content on iLearning 5.0b

    Hey there - anyone else have an issue playing SCORM content on iLearning 5.0b where you have a version of Sun JRE later than 10 and the "Enable next generation Java plugin" setting in its control panel checked (which is the default on install)?
    We experienced this issue with OLM and Oracle issued a patch and updated SCORM adapter that fixed the issue for our OLM instance but I can't seem to find anything related to an updated iLearning SCORM adapter.
    Thanks

    Lee, I think patch #7565299 solves this if I recall correctly.
    Scott
    http://www.seertechsolutions.com

  • Error CSendFileAction::SendFiles failed; 0x80070003 when distribute package content to DP

    Hello,
    When I try to distribute package content to a DP, SCCM starts distributing the content but stops after 1-2 minutes with this error:
    Sending Started [D:\SCCMContentLib\FileLib\9E97\9E971632A3E9860A15AF04EFEC3A9D5AF9E7220CD4A731C3D9262D00670496A5] SMS_PACKAGE_TRANSFER_MANAGER 09/11/2012 09:54:10 8464 (0x2110)
    Attempt to write 27 bytes to \\PWXXXXXXX\SMS_DP$\CMP00012.3-9E971632A3E9860A15AF04EFEC3A9D5AF9E7220CD4A731C3D9262D00670496A5 at position 0 SMS_PACKAGE_TRANSFER_MANAGER 09/11/2012 09:54:10 8464 (0x2110)
    Wrote 27 bytes to \\PWXXXXXXX\SMS_DP$\CMP00012.3-9E971632A3E9860A15AF04EFEC3A9D5AF9E7220CD4A731C3D9262D00670496A5 at position 0 SMS_PACKAGE_TRANSFER_MANAGER 09/11/2012 09:54:10 8464 (0x2110)
    Sending completed [D:\SCCMContentLib\FileLib\9E97\9E971632A3E9860A15AF04EFEC3A9D5AF9E7220CD4A731C3D9262D00670496A5] SMS_PACKAGE_TRANSFER_MANAGER 09/11/2012 09:54:10 8464 (0x2110)
    CSendFileAction::SendFiles failed; 0x80070003 SMS_PACKAGE_TRANSFER_MANAGER 09/11/2012 09:54:10 8464 (0x2110)
    CSendFileAction::SendFiles failed; 0x80070003 SMS_PACKAGE_TRANSFER_MANAGER 09/11/2012 09:54:10 8464 (0x2110)
    CSendFileAction::SendFiles failed; 0x80070003 SMS_PACKAGE_TRANSFER_MANAGER 09/11/2012 09:54:10 8464 (0x2110)
    CSendFileAction::SendFiles failed; 0x80070003 SMS_PACKAGE_TRANSFER_MANAGER 09/11/2012 09:54:10 8464 (0x2110)
    CSendFileAction::SendFiles failed; 0x80070003 SMS_PACKAGE_TRANSFER_MANAGER 09/11/2012 09:54:10 8464 (0x2110)
    CSendFileAction::SendFiles failed; 0x80070003 SMS_PACKAGE_TRANSFER_MANAGER 09/11/2012 09:54:10 8464 (0x2110)
    CSendFileAction::SendFiles failed; 0x80070003 SMS_PACKAGE_TRANSFER_MANAGER 09/11/2012 09:54:10 8464 (0x2110)
    CSendFileAction::SendFiles failed; 0x80070003 SMS_PACKAGE_TRANSFER_MANAGER 09/11/2012 09:54:10 8464 (0x2110)
    Notifying pkgXferJobMgr SMS_PACKAGE_TRANSFER_MANAGER 09/11/2012 09:54:10 8464 (0x2110)
    Any suggestions?

    Hello Torsten,
    what does this error mean:
    ========  Finished Processing Cycle ( 07.04.2014 16:17:56 W. Europe Daylight Time) ========
    Sleeping for maximum 300 seconds.
    Failed to get object class
    ExecStaticMethod failed (800706be) SMS_DistributionPoint, AddFile
    CSendFileAction::AddFile failed; 0x800706be
    Deleting remote file \\SERVER-XXX-4.xxx.xx\SMS_DP$\SON00052.10-41692C6C16C15C06108A70D9896C0CDE344322B374B01F499BF54F55690E118D
    CSendFileAction::SendFiles failed; 0x800706be
    CSendFileAction::SendFiles failed; 0x800706be
    Notifying pkgXferJobMgr
    Sending failed. Failure count = 1, Restart time = 07.04.2014 16:50:47 W. Europe Daylight Time
    Sent status to the distribution manager for pkg SON00052, version 10, status 4 and distribution point ["Display=\\SERVER-DLG-4.roehm.net\"]MSWNET:["SMS_SITE=SON"]\\SERVER-XXX-4.xxx.xx\
    Thank you in advance.
    BR
    Tim

  • Error - Assert: "Very large content bounds....  HELP!

    After 2 months of working on my company's website, I ended up with an "error has occurred" message.
    Message says - Assert: "Very large content bounds (W:12903330.289490186 H:953897.3592778504) detected in BoxUtils::childBounds".
    I can't get around this message; it keeps popping up and prevents me from moving forward.
    Does anyone have insight on what I should do? I'm not an HTML coder; I have 15 years of experience using Adobe PageMaker and InDesign.
    How can I resolve this and continue working on my website?

    Could you please send us the .muse file at [email protected] along with a link to this thread? If the file is larger than 20Mb you can use a service like Adobe SendNow, SendThisFile, WeTransfer, Dropbox, etc. We'll need to repair the error in the file and return it to you. Thanks.

  • SCCM 2012 R2 Pull DPs

    Hello to all,
    this article states that "There should be a good network connectivity between pull-distribution points and source distribution points so that you can achieve efficient and fast software distribution."
    Question: as far as I know, DPs can receive compressed data from the source. Is it really important to have good connectivity to the source for pull DPs to be used? Regular DPs can be used on a site that has no secondary and still get software/packages from its secondary server in a compressed way.
    Regards, EEOC.

        Question: as far as I know, DPs can receive compressed data from the source. Is it really important to have good connectivity to the source for pull DPs to be used? Regular DPs can be used on a site that has no secondary and still get software/packages from its secondary server in a compressed way.
    That is incorrect. DPs, and transfers to DPs, add no compression. Even site-to-site transfers are not compressed; they are simply put into an archive, to my knowledge.
    The statement is a cause-and-effect statement: if the connection is not good, then you will not achieve efficient and fast software distribution. It doesn't say it won't work, though.
    Jason | http://blog.configmgrftw.com | @jasonsandys

  • Sporadic issues uploading large dynamic content blocks using REST API

    I have been using calls to endpoints like https://secure.eloqua.com/API/REST/1.0/assets/dynamicContent/157 (where "157" is a content block ID) to update existing dynamic content blocks. This has generally worked well for about a month and a half. Starting Saturday morning around Midnight, though, the majority of these calls started failing, with socket error "connection was forcibly closed by remote host".
    That's the .NET verbiage; at a lower level, this is socket error #10054. When using CURL, I get the result pictured below:
    This is sporadic; over the course of today, maybe 25% of these calls have succeeded and 75% have failed. The calls are mostly identical. At other times, we've gone days, maybe weeks without an issue, and I figured that the isolated failures of this sort that we did get related to scheduled downtime, loss of connectivity, and other transient conditions.
    The payload (PUT data) I'm sending up is ~450KB of JSON. It contains the top-level details about the block itself, plus all of the dynamic content / rules criteria, including the default.
    At first, I wondered if some sort of weird character (ampersand, less than / greater than...) had entered my data, flowed into the API call unescaped, and created a problem (e.g. invalid JSON). However, I did some troubleshooting, and the failure we're seeing does not seem to correlate with the absence or presence of any one content / rule item in particular. Rather, it seems to be volume related, i.e. I can send any one of the rules I want to create, or just a few, and that will succeed. But I cannot send all (or most) of the rules at once without experiencing failures most of the time (lately).
    So, if there were a way for me to split up the creation of the block over multiple API calls (one call to create the block, another to add the default content rule, yet another for the first non-default content rule, and so on), I think that would work. I find no indication on Topliners or in the Eloqua API documentation to indicate that this is possible, though.
    It has occurred to me that we are over some API call threshold. I know that some Eloqua installations have a per-day API call limit as low as 20,000. My question is, what do I see when I exceed the limit? Do I get the "forcibly closed" error, or another error perhaps, or is my activity simply throttled by the server? And is there somewhere in Eloqua that I can see where I stand with respect to this limit?
    Finally, I know we've done some things recently that have increased our overall processing load on the Eloqua servers. In particular, a larger number of contacts than previously have been flowing in from our CRM system. One theory I had was that we're allocated a certain amount of total processing power, and that if we overload this capacity complex API calls are subject to timing out. (My communications timeout is set to 10 minutes, and the error message comes up after about 2-3 minutes, but I don't know what kind of timeouts Oracle might have set up on their own equipment.) That being said, my code needs to basically wait indefinitely for this data, and certainly not give up after 2-3 minutes. If there's a way to indicate this fact to Oracle, I'd certainly like to know about that.
    Thank you all for reading and attempting to help!

    Hi James,
    As far as I have been told by Support, the API limit is a soft limit that is not currently enforced.
    I have seen other API endpoints failing from time to time (mostly on the Bulk API) and have had to write logic to redo calls that fail. My failures seem to occur primarily due to server load, and I can force them by running a lot of exports at the same time.
    In regards to the dynamic content rules, I think you might be able to add the rules one or two at a time. What I would test is to create a dynamic content block with only the default content, then loop through your rules and include only the new rules in the Rules section of the dynamic content on each call.
    One more thing: try using the REST/2.0 endpoint.
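    As a rough illustration of that retry logic, here is a minimal sketch; the endpoint URL is the one from the original post, while the credentials, timeout and back-off values are placeholders rather than Eloqua-verified settings:

        import time

        import requests

        URL = "https://secure.eloqua.com/API/REST/1.0/assets/dynamicContent/157"  # endpoint from the post
        AUTH = ("SiteName\\user.name", "password")  # placeholder credentials

        def put_with_retry(payload, attempts=5, backoff=30):
            """PUT the dynamic content payload, retrying dropped connections and 5xx responses."""
            for attempt in range(1, attempts + 1):
                try:
                    resp = requests.put(URL, json=payload, auth=AUTH, timeout=600)
                    if resp.status_code < 500:
                        return resp  # success, or a client error worth inspecting rather than retrying
                    print(f"attempt {attempt}: HTTP {resp.status_code}, retrying")
                except requests.exceptions.ConnectionError as err:
                    print(f"attempt {attempt}: connection dropped ({err}), retrying")
                time.sleep(backoff * attempt)  # simple linear back-off between attempts
            raise RuntimeError("all retry attempts failed")

    Splitting the rules across several smaller payloads, as suggested above, and wrapping each call in something like this should make the occasional forcibly-closed connection much less painful.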

  • Is there an issue with large libraries on iPod Classic 160 GB?

    I got a new iPod Classic a week ago and have already taken it back and exchanged it once. The issue is that it works fine when I load, say, 20,000 songs onto it, but when I try to sync my whole library of around 31,500 songs it no longer works. All I get after the 'OK to disconnect' message has come up is the Apple logo. Attempting to reset doesn't work; reconnecting to the computer, I get the message (some of the time) that iTunes has detected an iPod in recovery mode.
    I can restore the iPod by putting into disk mode, but restoring doesn't solve the problem - it comes back again if I try to sync the full library.
    So, I currently have a new iPod Classic with a big hard drive that is supposed to hold up to 40,000 songs according to specifications and what I was told by the expert in the Apple Store, but so far the behaviour of the two iPods has indicated that in reality it can't handle that many.
    This is not the first time I've seen this kind of issue with an iPod. I had exactly the same issue when the first 60 GB iPods came out: the software only worked with a smaller number of songs.

    I've been able to fill up a 160 GB Classic quite happily, although at some point I recall my iPod starting to crash as it filled up. One problem seemed to be that iTunes and Windows would get confused when trying to add a really large number of songs at once, for which the workaround below helps during the rebuild.
    Break up large transfers
    In iTunes select the menu item File... New Smart Playlist. Change the first drop-down box to Playlist, the next to "is", and the next to Music or whatever playlist holds the bulk of the content you want on your device. Tick Limit to, type in say 10, change the drop-down to GB, and set the last drop-down to artist. When you click OK you can enter a name for the playlist, e.g. Transfer. Now sync this playlist to your iPod rather than your entire library. When the sync is complete, modify the rule (File... Edit Playlist) to increase the size by your chosen amount, then sync and repeat. You can experiment with different size increments; if it doesn't work, just choose something a bit smaller until it works each time. Before long you should have all your music on your iPod. Once that's done you can move on to other media such as podcasts, videos, photos, playlists etc.
    The other problem I found is that sometimes the iPod would crash on exiting, or even on entering, an iPod game. After a number of restores I determined that the number and complexity of the playlists I was putting on was the cause of the problem. I believe these are loaded into working RAM, so use too much memory for them and the iPod struggles with any other memory-hungry task.
    tt2

  • Acrobat Connect Pro LMS 7.5 server cache issue - displaying old content

    Our Adobe Acrobat Connect Pro server is showing old Captivate-created content from about 4-6 weeks ago.
    I loaded 35+ sets of Captivate 3 (SCORM 1.2, HTML, zipped) content onto our Acrobat Connect Pro LMS about 6 weeks ago.
    I converted all training content from Captivate 3.0.1 to 4.0.1 four weeks ago by opening each file in Captivate 4 and following the prompts to "Save As" new files with different file names.
    I reloaded all content 4 weeks ago, and again 2 weeks ago.
    I reloaded about half of all content again last week.
    End-user acceptance testing performed this week showed that most of the courses are displaying old content, ranging from 2 to 6 weeks old.
    Attempted fixes and workarounds:
    Deleting the content entirely and reloading it from scratch: this will not work long term, as we lose usage data each time we reload completely new files.
    Contacted Adobe and provided times to help track incidents of the issue. We reached Tier 2, who told us it was our problem and that everything appeared to be working fine from their side.
    New workaround: load new content and reattach the course to the new content. This presents the same long-term issue as the first workaround, but enables us to retain older versions of content in the system, should we need to revert or report on it.
    Gaining server side access is a bit challenging due to the hosting situation we have, so I am looking (ideally) for a solution that can be performed from the Administrator/Author Frontend.  However, I want to learn the real cause of the problem, wherever it might reside, so that it can be properly corrected and avoided in the future.  I am calling this a server cache issue, as it seems the server has somewhere retained unwanted old versions of content, preventing current content from being displayed to end users.  Viewing content as an end user = see old content.  Viewing content from the Content area (Author view) shows the current files, so I know they are on the server and are loading correctly, up to a point.
    I am preparing all content for another round of loading/reloading due to other issues and updates, so republishing and reloading all 35+ files into the LMS is unavoidable at this point.
    This issue is keeping our LMS from launching to several thousand users across the country, so any suggestions or helpful tips are much appreciated.

    I think I have isolated the source of this problem. It's the Pitstop Professional 9 plug in. I un-installed this, and everything opens quicker than greased lightning. I re-installed it and it's back to slowsville.
    Unfortunately Pitstop is essential to my workflow.
    Until recently I did my pre-press on a Mac G5 with Acrobat Pro 7 and Pitstop 6.5. I never had this problem with slow file opening, but it seems that the delays would occur when I used the plug-in with large, complex files. So it would open files as fast as you'd expect from an elderly machine, but starting to use Pitstop would result in a prolonged period of staring at a spinning beach ball.
    I wonder, is there any way to stop the Pitstop plug-in from initializing until it is used, so the plug-in stays inert until you select the tool from the menus?
