Adding footage fails with larger documents

Hi,
I'd be thankful for any advice: I have tried several times to add footage to a document (2,300 KB), but it doesn't work. With smaller documents there's no problem. It's a PDF of a magazine we publish for open access.
Does anyone have an idea that would help me?
Thanks.

Hi MScallion,
You can try the following Jython script to transfer the files using FTP.
     import snpsftp
     # open an FTP session to the remote host with the given credentials
     ftp = snpsftp.SnpsFTP('HOST_NAME', 'USER_NAME', 'PASSWORD')
     # ASCII transfer mode (suitable for text files such as File1.txt)
     ftp.setmode('ASCII')
     # fetch File1.txt from SOURCE_DIR on the server into OUTPUT_DIR
     ftp.mget('SOURCE_DIR', 'File1.txt', 'OUTPUT_DIR')
     ftp.close()
Thanks,
Yellanki
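If the file being transferred is a PDF rather than plain text, an ASCII-mode transfer would corrupt it. Here is a minimal alternative sketch using Python's standard ftplib in binary mode (host, credentials, directories, and the file name are placeholders):
     from ftplib import FTP

     ftp = FTP('HOST_NAME')
     ftp.login('USER_NAME', 'PASSWORD')
     # change to the remote directory that holds the file
     ftp.cwd('SOURCE_DIR')
     # RETR in binary mode so the PDF bytes arrive unchanged
     with open('OUTPUT_DIR/magazine.pdf', 'wb') as local_file:
         ftp.retrbinary('RETR magazine.pdf', local_file.write)
     ftp.quit()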

Similar Messages

  • Wpg_docload fails with "large" files

    Hi people,
    I have an application that allows the user to query and download files stored in an external application server that exposes its functionality via webservices. There's a lot of overhead involved:
    1. The user queries the file from the application and gets a link that allows her to download the file. She clicks on it.
    2. Oracle submits a request to the webservice and gets an XML response back. One of the elements of the XML response is itself an embedded XML document, and one of its elements is the file, encoded in base64.
    3. The embedded XML document is extracted from the response, and the contents of the file are stored into a CLOB.
    4. The CLOB is converted into a BLOB.
    5. The BLOB is pushed to the client.
    Problem is, it only works with "small" files, less than 50 KB. With "large" files (more than 50 KB), the user clicks on the download link and about one second later gets:
    The requested URL /apex/SCHEMA.GET_FILE was not found on this server
    When I run the webservice outside Oracle, it works fine. I suppose it has to do with PGA/SGA tuning.
    It looks a lot like the problem described at this Ask Tom question.
    Here's my slightly modified code (XMLRPC_API is based on Jason Straub's excellent Flexible Web Service API: http://jastraub.blogspot.com/2008/06/flexible-web-service-api.html):
    CREATE OR REPLACE PROCEDURE get_file ( p_file_id IN NUMBER )
    IS
        l_url                  VARCHAR2( 255 );
        l_envelope             CLOB;
        l_xml                  XMLTYPE;
        l_xml_cooked           XMLTYPE;
        l_val                  CLOB;
        l_length               NUMBER;
        l_filename             VARCHAR2( 2000 );
        l_filename_with_path   VARCHAR2( 2000 );
        l_file_blob            BLOB;
    BEGIN
        SELECT FILENAME, FILENAME_WITH_PATH
          INTO l_filename, l_filename_with_path
          FROM MY_FILES
         WHERE FILE_ID = p_file_id;
        l_envelope := q'!<?xml version="1.0"?>!';
        l_envelope := l_envelope || '<methodCall>';
        l_envelope := l_envelope || '<methodName>getfile</methodName>';
        l_envelope := l_envelope || '<params>';
        l_envelope := l_envelope || '<param>';
        l_envelope := l_envelope || '<value><string>' || l_filename_with_path || '</string></value>';
        l_envelope := l_envelope || '</param>';
        l_envelope := l_envelope || '</params>';
        l_envelope := l_envelope || '</methodCall>';
        l_url := 'http://127.0.0.1/ws/xmlrpc_server.php';
        -- Download XML response from webservice. The file content is in an embedded XML document encoded in base64
        l_xml := XMLRPC_API.make_request( p_url      => l_url,
                                          p_envelope => l_envelope );
        -- Extract the embedded XML document from the XML response into a CLOB
        l_val := DBMS_XMLGEN.convert( l_xml.extract('/methodResponse/params/param/value/string/text()').getclobval(), 1 );
        -- Make a XML document out of the extracted CLOB
        l_xml := xmltype.createxml( l_val );
        -- Get the actual content of the file from the XML
        l_val := DBMS_XMLGEN.convert( l_xml.extract('/downloadResult/contents/text()').getclobval(), 1 );
        -- Convert from CLOB to BLOB
        l_file_blob := XMLRPC_API.clobbase642blob( l_val );
        -- Figure out how big the file is
        l_length    := DBMS_LOB.getlength( l_file_blob );
        -- Push the file to the client
        owa_util.mime_header( 'application/octet', FALSE );
        htp.p( 'Content-length: ' || l_length );
        htp.p( 'Content-Disposition: attachment;filename="' || l_filename || '"' );
        owa_util.http_header_close;
        wpg_docload.download_file( l_file_blob );
    END get_file;
    /
    I'm running XE, PGA is 200 MB, SGA is 800 MB. Any ideas?
    Regards,
    Georger
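    For reference, the same getfile round trip can be exercised outside the database, which helps show whether the 50 KB limit is in the webservice or in the PL/SQL layer. A minimal sketch, assuming the xmlrpc_server.php endpoint and the element names used in the XPath expressions above (the file path is a placeholder):
    import base64
    import xml.etree.ElementTree as ET
    from xmlrpc.client import ServerProxy

    # call the same getfile method the procedure invokes via XMLRPC_API
    proxy = ServerProxy('http://127.0.0.1/ws/xmlrpc_server.php')
    embedded_xml = proxy.getfile('/path/to/file.bin')

    # the returned string is itself an XML document:
    # <downloadResult><contents>...base64...</contents></downloadResult>
    root = ET.fromstring(embedded_xml)
    payload = base64.b64decode(root.findtext('contents'))
    print(len(payload), 'bytes decoded')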

    Script: http://www.indesignsecrets.com/downloads/MultiPageImporter2.5JJB.jsx.zip
    It works great for files up to ~400 pages; when I have more pages than that, I get the crash at around page 332.
    Thanks

  • Problems with larger documents.

    Here's my problem: I am a composer and I need to print my scores (document size is 9x12), but it seems that Adobe only supports 8.5x11. What I'm looking to do is save my 9x12 document as a PDF and have it actually show up in Adobe Reader as a 9x12 document so that it can be accurately printed at its intended size. Every time I try to do just that, the document either rescales to accommodate Adobe, or parts of the document get cut off. That's not what I want; I want a 9x12 document. In the unfortunate case that what I'm trying to do is impossible, please point me to an alternative universal document reader that will do what I want.

    I set up the score for my music in my notation software (Finale 2011). I must then convert the score from Finale into a PDF, as there are additional items I need to add to the score that can't be added from Finale. So I export the document from Finale as a 9x12 file. I use one of those PDF creators that acts as a printer, since Finale 2011 for Windows doesn't have direct PDF conversion capabilities. Once the PDF is created, I open it only to see that my large document has been subjected to the 8.5x11 parameters of Adobe Acrobat, and much of it has been cropped out. I don't want to rescale, because again, it is a 9x12 document and that is all it will ever be. Is there a better alternative to this process that might yield the desired outcome?
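    One way to narrow down where the resizing happens is to check the page size actually stored in the exported PDF before it ever reaches Reader. Here is a small sketch, assuming the third-party pypdf package and a hypothetical file name (a true 9x12 inch page should report 648 x 864 points):
    from pypdf import PdfReader

    reader = PdfReader('score.pdf')          # hypothetical exported file
    box = reader.pages[0].mediabox
    # PDF units are points; 72 points = 1 inch, so 9x12 in = 648 x 864 pt
    print(box.width / 72, 'x', box.height / 72, 'inches')
    If the file already reports 8.5x11 at this point, the page size is being set by the PDF printer driver (its paper size would need to be changed to a custom 9x12), not by Adobe Reader.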

  • Pagecount: problem with larger documents

    Hi,
    To print the page count I print &SFSY-PAGE&/&SFSY-JOBPAGES& on every page. This works for smaller documents (2-3 pages), but with larger documents (12-15 pages) it prints 1/, 2/, 3/, 4/, etc. until close to the end, where it prints 12/13, 13/13 correctly.
    Anyone had this issue before?
    Thanks!

    Hi,
    I think you should specify a length option for the field, like this:
    &SFSY-PAGE(4ZC)& / &SFSY-FORMPAGES(4ZC)&
    "4: set length to 4
    "Z: Suppresses leading zeros in numbers
    "C: This effect corresponds to that of the ABAP statement CONDENSE
    Please try,
    Thanks

  • Using DBMS_METADATA.PUT with large documents

    Hi all,
    I am using the DBMS_METADATA functions and procedures to programmatically store and re-creating database objects. All was working well until I ran into some large create scripts (particularly on partitioned indexes and tables). If I only have a few partitions, the DBMS_METADATA.PUT successfully creates the index, but if my table has 40 partitions, the DBMS_METADATA.PUT fails.
    I came across one post that indicated that this is the result of EXECUTE IMMEDIATE being restricted to 32K.
    If that is the case, has anyone successfully found a workaround that will allow me to recreate tables and indexes that have large create scripts?
    Thanks
    Steve

    Again a post with no version number so anyone trying to help has to guess.
    In 11g Oracle merged NDS and DBMS_SQL so that limit no longer exists. In previous versions back to, probably, 7.3.4 you can use the DBMS_SQL built-in package.
    Look at the example "execute_plsql_block" in Morgan's Library at www.psoug.org under DBMS_SQL.
    In 10gR2 and 11gR1, and possibly earlier versions but I do not recall, dbms_metadata.put has a clob option. Look at the demo, again, in Morgan's Library under the package's name.
    But if the point is partitioned indexes, be sure to familiarize yourself with DBMS_PCLXUTIL too.
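    If the create scripts are run from a client anyway, another way around the 32K EXECUTE IMMEDIATE limit is to send the full DDL text through a client driver, which is not bound by that PL/SQL VARCHAR2 limit. This is a different route from the DBMS_SQL / CLOB options above; a rough sketch, assuming the python-oracledb driver, hypothetical credentials, and a script file containing a single CREATE statement:
    import oracledb

    # hypothetical connection details
    conn = oracledb.connect(user='scott', password='tiger', dsn='localhost/XEPDB1')
    cur = conn.cursor()

    # read the large create script; strip the trailing ';' because
    # cursor.execute() expects one statement without a terminator
    with open('create_partitioned_index.sql') as f:
        ddl_text = f.read().strip().rstrip(';')

    cur.execute(ddl_text)   # DDL statements are auto-committed by Oracle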

  • Photo sharing with Apple TV fails with large library over 20000 photos

    I've been an Apple TV 2nd gen user for several years and enjoy photo sharing and viewing my library on the big screen. I shared two folders under My Pictures: one with fewer than 500 pics remained stable, and the other folder grew by several hundred photos every few months. Once the total number of photos to share reached almost 20,000, Apple TV failed to launch the slideshow through the menu. It basically jumped back out to the main menu after spinning for several minutes.
    So, I tried splitting the large folder into two separate folders (10K photos each) and reset the home sharing option in iTunes to point to three folders, instead of two, with the same total exceeding 20,000 pics.  The ATV seems to recognize the folders, and the count is correct, but I am still unable to get slideshow to launch.  It's as if I passed a threshold.
    Any thoughts?  Thanks.

    I've solved the problem.
    In Photos, select Preferences. In the General tab, make sure that the path to your external Photos library shows in the Library Location. Then click the Use as System Photo Library button.
    In iTunes 12.1.12, you will now be able to select your specific albums as before.
    I updated my Apple TVs to the latest updates.
    The specific photo albums can be accessed in Apple TV now.

  • Constant crashes with large document when scrolling through pages

    Need help ASAP with crashing issue. Have shut down and restarted, and nothing helps.

    Nobody can help you. You have not even bothered to tell us what program you are referring to, much less provided any technical info such as what system or what documents. Sorry, but this is pointless.
    Mylenium

  • Help with large document please.

    I'm using a MacPro 2 x 2.26 Xeon/6GB RAM/Radeon graphics card. For the 6th year I'm producing an A4 product catalogue for a client, with mostly full-page photographs on every page. This year it will have grown to 400 pages. I'm halfway through and already the file is becoming a bit unwieldy, with daily crashes and slow loading. Does anybody know if increasing the RAM would help, or should I try to split the catalogue into smaller chunks and then join them up into a book? That's not something I've tried before, so I'm not sure about the drawbacks. The client does have a habit of changing the page order quite a bit, which might be a bit of a problem if I have to move between the smaller documents. Or am I wrong?
    Any advice most welcome.

    Thanks for your comments Peter. I think I'm at a stage where I can easily split a duplicate file into 10 or maybe 11 documents but I will need to reference 5 or 6 words on each page for the index. Hopefully that won't tax the system too much!

  • Sharepoint 2013 adding ECT fails with "Access Denied by Business Connectivity Service"

    Using SharePoint Designer 2013 I am attempting to set up an External Content Type to a SQL DB. I have set up the SQL database with a valid login that was also used to set up an account with the Secure Store Service. I am running SharePoint Designer 2013 and have opened my site with administrative credentials. No matter what, I continue to get the "access denied" message when I try to add this SQL database to my ECT section in SPD. All users have access to invoke the BCS app.
    I have deleted and recreated the BCS service application and it is running with farm credentials, and temporarily I added the farm account to the local Administrators group... and again verified that all users have rights to run BCS.
    In all other aspects my SharePoint sites are working: I can modify and add via SPD and publish, etc., but I cannot add a connection to an external SQL server. I have also verified through Excel that I can connect to my SQL DB with the same credentials that I am trying in SharePoint, and everything works.
    Most of the posts I see in this area relate to permissions or access problems AFTER the ECT connection is created. My problem is I can't even get a connection created.

    Here are the error logs generated when I try to connect; maybe this will help someone tell me where to correct the issue. (I removed the actual domain names, but my account, which is an admin on the SharePoint system and domain, was listed.)
    06/25/2013 16:48:00.24 w3wp.exe (0x1908) 0x0EE4 Business Connectivity Services Business Data 9f4c Unexpected 'Business Data Connectivity Service' BdcServiceApplication logging server side AccessDeniedException before marshalling
    and rethrowing on client side: Access Denied for User '0#.w|"domain\my account', which may be an impersonation by 'Domain\"sharepoint admin account"'. Securable IMetadataCatalog with Name 'ApplicationRegistry' denied access. Stack Trace:   
    at Microsoft.SharePoint.BusinessData.SharedService.ModelAccessor.Create(MetadataObjectStruct rawValues, MetadataObjectStruct applicationRegistryStruct, DbSessionWrapper dbSessionWrapper)     at Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplication.Execute[T](String
    operationName, UInt32 maxRunningTime, ExecuteDelegate`1 operation) 97fe289c-5245-e040-0f76-59614537398e
    06/25/2013 16:48:00.24 w3wp.exe (0x1908) 0x0EE4 Business Connectivity Services Business Data g0kc High Access Denied for User '0#.w|domain\my user account', which may be an impersonation by 'Domain\"sharepoint admin account"'.
    Securable IMetadataCatalog with Name 'ApplicationRegistry' has ACL that contains: 97fe289c-5245-e040-0f76-59614537398e

  • Adding UDFs fails with SQL-Error

    I got our customer's DB (SBO05APL18) and upgraded to PL36. The upgrade worked perfectly fine, although it took ages (2.5 GB DB).
    Everything seems to work so far, but if I try to add a UDF in this DB I get the following SQL error message in B1:
    11/06/2008  11:04:17: 1). [Microsoft][SQL Native Client]Restricted data type attribute violation
    It doesn't matter what type or location I choose for the UDF.
    Specs: Win2000 PL4 MSSQL05.
    Before you make suggestions: this DB is not the only one running on this machine and other DBs work fine.
    I compared the preferences, compatibility, collation and permissions. They are all equal.
    Couldn't find any hints on the net so far. Help needed!!

    Hi,
    Just a guess... Did you verify that the DB isn't in read-only mode (since that seems to fit your assumption)?
    Launch SQL Server Management Studio, right-click on your DB, then choose the last option (Properties).
    Select the Options page (top-left list); near the end of the list on the right you should find a group named State, with the Read-Only state of the database. If it's True, change it to False.
    Regards,
    Eric

  • When printing from Preview, the document prints with large print and many lines added and pages added

    When printing from Preview, the document prints with large print and many lines added and pages added. Why does this happen?

    Thank you!!! That was the problem: the scale was set at 200% (don't know why). When I reduced it to 100%, it printed perfectly.
    Thanks again for helping me resolve this problem.

  • Combine two large documents fails

    I have two large PDF files with multiple embedded videos. One is 1 GB and the other is 2.12 GB. When I try to combine them and save, the save fails with the message: "The document could not be saved. The file is too big." This has happened on two computers, each a Mac running 10.7.5. I have tried this in both Acrobat X and Acrobat XI.
    Note that InDesign also fails to export this as a single large file, but will export two files that each appear to be fine.

    Two things you can test: maintain the vendor code in the customer and the other way round to say SAP.
    Then the standard SAP program RFKORD10 allows several parameters that you can check.
    If none of them are working, then you need to create a specific program based on RFKORD10 that will do what you want.

  • How do you apply 2 different types of page (first with large logo and 2nd/follow with small) in the same document?

    I'm using Pages 5.1 and I need 2 different types of page in 1 document: the first page with a large logo and the second with a small one. In Pages '09 there was an option for a different first page ...

    that's not my problem, and I am unfortunately not a professional. Sorry.
    In Pages '09 there was an option "Different First Page" (like MS Word). I could create different pages (first and second) with two different logos and text boxes. When the document was opened, there was only one page, with the large logo; while writing, the small logo and the other text boxes appeared on the second page (and subsequent pages).
    Unfortunately I miss this function, i.e. the inclusion of items on a second page. However, it doesn't necessarily have to be active.

  • Jvm startup fails with error when using large -Xmx value

    I'm running JDK 1.6.0_02-b05 on a RHEL5 server. I'm getting an error when starting the JVM with a large -Xmx value. The host has ample memory to succeed, yet it fails. I see this error when starting Tomcat with a bunch of options, but I found that it can be easily reproduced by starting the JVM with -Xmx2048M and -version. So it's this boiled-down test case that I've been examining more closely.
    host% free -mt
                 total       used       free     shared    buffers     cached
    Mem:          6084       3084       3000          0        184       1531
    -/+ buffers/cache:        1368       4716
    Swap:         6143          0       6143
    Total:       12228       3084       9144
    free shows the host has 6 GB of RAM, approximately half of which is available. Swap is completely free, meaning I should have access to about 9 GB of memory at this point.
    host% java -version
    java version "1.6.0_02"
    Java(TM) SE Runtime Environment (build 1.6.0_02-b05)
    Java HotSpot(TM) Server VM (build 1.6.0_02-b05, mixed mode)
    java -version succeeds
    host% java -Xmx2048M -version
    Error occurred during initialization of VM
    Could not reserve enough space for object heap
    Could not create the Java virtual machine.
    java -Xmx2048M -version fails. A trace reveals that the mmap call fails:
    mmap2(NULL, 2214592512, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
    Any ideas?

    These are the relevant java options we are using:
    -server -XX:-OmitStackTraceInFastThrow -XX:+PrintClassHistogram -XX:+UseLargePages -Xms6g -Xmx6g -XX:NewSize=256m -XX:MaxNewSize=256m -XX:PermSize=128m -XX:MaxPermSize=192m -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -XX:+ExplicitGCInvokesConcurrent -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djava.awt.headless=true
    This is a web application that is very dynamic and uses lots of database calls to build pages. We use a large clustered cache to reduce trips to the database, so being able to access lots of memory is important to our application.
    I'll explain some of the less common options:
    We use the concurrent garbage collector to reduce stop-the-world GCs. Here are the CMS options:
    -XX:+UseConcMarkSweepGC
    -XX:+CMSClassUnloadingEnabled
    -XX:+CMSPermGenSweepingEnabled
    An explicitly coded GC invokes the concurrent GC instead of the stop-the-world GC:
    -XX:+ExplicitGCInvokesConcurrent
    The default PermGen sizes were not large enough for our application, so we increased them:
    -XX:PermSize=128m
    -XX:MaxPermSize=192m
    We had some exceptions that were omitting their stack traces. This option fixes that problem:
    -XX:-OmitStackTraceInFastThrow
    We see approximately a 10% to 20% performance improvement with large page support. This is an advanced feature:
    -XX:+UseLargePages
    UseLargePages requires OS-level configuration as well. On SUSE 10 we configured the OS's hugepages by executing
    echo "vm.nr_hugepages = 3172" >> /etc/sysctl.conf
    and then rebooting. kernel.shmmax may also need to be modified. If you use large pages, be sure to google for complete instructions.
    When we transitioned to 64-bit we moved from much slower systems with 4 GB of RAM to much faster machines with 8 GB of RAM, so I can't answer the question of degraded performance. However, with our application the bigger our cache the better our performance, so if 64-bit is slower we more than make up for it by being able to access more memory. I bet the performance difference depends on the application; you should do your own profiling.
    You can run both the 32-bit version and the 64-bit version on most 64-bit OSes. So if there is a significant difference, run the version you need for the application. For example, if you need the memory use the 64-bit version; if you don't, use the 32-bit version.

  • 2012 New Cluster Adding A Storage Pool fails with Error Code 0x8007139F

    Trying to setup a brand new cluster (first node) on Server 2012. Hardware passes cluster validation tests and consists of a dell 2950 with an MD1000 JBOD enclosure configured with a bunch of 7.2K RPM SAS and 15k SAS Drives. There is no RAID card or any other
    storage fabric, just a SAS adapter and an external enclosure.
    I can create a regular storage pool just fine and access it with no issues on the same box when I don't add it to the cluster. However when I try to add it to the cluster I keep getting these errors on adding a disk:
    Error Code: 0x8007139F if I try to add a disk (The group or resource is not in the correct state to perform the requested operation)
    When adding the Pool I get this error:
    Error Code 0x80070016 The Device Does not recognize the command
    Full Error on adding the pool
    Cluster resource 'Cluster Pool 1' of type 'Storage Pool' in clustered role 'b645f6ed-38e4-11e2-93f4-001517b8960b' failed. The error code was '0x16' ('The device does not recognize the command.').
    Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster
    Manager or the Get-ClusterResource Windows PowerShell cmdlet.
    If I try to just add the raw disks to the storage, without using a pool or anything, almost every one of them fails with "incorrect function" except for one (a 7.2K RPM SAS drive). I cannot see any difference between it and the other disks. Any ideas? The error codes aren't anything helpful. I would imagine there's something in the drive configuration or hardware I am missing here; I just don't know what, considering the validation is passing and I am meeting the listed prerequisites.
    If I can provide any more details that would assist, please let me know. Kind of at a loss here.

    Hi,
    You mentioned that you use a Dell MD1000 as storage; the Dell MD1000 is Direct Attached Storage (DAS).
    Windows Server clusters do support DAS storage; failover clusters include improvements to the way the cluster communicates with storage, improving the performance of a storage area network (SAN) or direct attached storage (DAS).
    But the RAID controller PERC 5/6 in the MD1000 may not support cluster technology. I did not find an official article for it, but I found that its next generation, the MD1200 with the PERC H800 RAID controller, still does not support cluster technology.
    You may contact Dell to check that.
    For more information please refer to following MS articles:
    Technical Guidebook for PowerVault MD1200 and MD 1220
    http://www.dell.com/downloads/global/products/pvaul/en/storage-powervault-md12x0-technical-guidebook.pdf
    Dell™ PERC 6/i, PERC 6/E and CERC 6/I User’s Guide
    http://support.dell.com/support/edocs/storage/RAID/PERC6/en/PDF/en_ug.pdf
    Hope this helps!
    Lawrence
    TechNet Community Support
