Excel shows an error for large data dumps

Hi,
I have a report that displays 10,00,000 records. I created an agent to run the report and load the data into Excel, but when we open the Excel file it throws an error.
At minimum I need to display 5,00,000 records.
The error is: "A DDE error has occurred, and a description of the error cannot be displayed because it is too long. If the file name or path is long, try renaming the file or copying it into a different folder."
Thanks,

Hi,
You can filter the data at the report level as well as the dashboard level using prompts.
Example:
If you have a Period table in the report, filter on year = 2012 only;
or
if you have a prompt in the dashboard, filter there.
An agent can also apply a condition rule before delivering the report.
Check this: makshu.blogspot.com/increase-row-limits-in-table-properties.html
Regards
VG

Similar Messages


  • Web show document timeout for large data in file

    Hi,
I'm using Oracle Application Server 10.1.2.0.2 as middleware and we launch an Excel file using web.show_document. The problem I'm facing is that if the data in the Excel file is too large (I'm unable to quantify "large"!), the browser is unable to show the document. I have tried increasing the timeout in the Apache settings, but the problem persists. Has anybody faced such a problem with web.show_document?

OK, then you should get a thread dump of WLS just after your DBA confirms that all the heavy lifting for a given transaction is done, to see what WLS thinks it needs to be doing. Actually, I would open a support case and get instructions on how to turn on JTA logging, so we'll see step-by-step, timestamp-by-timestamp the progress of the transaction.
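For reference, a quick way to capture that thread dump, assuming a Unix host (the PID is whatever the WLS server's Java process is):
kill -3 <wls_pid>    # SIGQUIT; the JVM prints a full thread dump to the server's stdout log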

  • INPUT: KT4/V vs CRC error in large data transfer/CD burning HERE!

This issue can be solved with a BIOS update. KT4V & KT4 Ultra users who are having this problem can request the TEST BIOS to test on your system. You may either pm/email me or Bas, or get it at ftp://ftp.heppen.be/MSI/
Please report back whether the test BIOS really fixes the problem, causes any new problem, or brings any performance hit.
** this sounds like a Christmas Gift to KT4V users AND New Year Gift to KT4 Ultra users!!!  :D  **
To all KT4 Ultra and KT4V users: whether you have data corruption or CRC errors in large data transfers and CD burning or not, your inputs are needed.
Please list your system specs in as much detail as possible. Below is a guideline; you may copy it (CTRL-C) and fill in your specs in your post.
    1. System specs:
    CPU:
    Motherboard:
    RAM Slot-1: (exact brand and model)
    RAM Slot-2:
    RAM Slot-3:
    display card:  [no overclock]
    IDE-1M:  (exact HDD brand and model pls)
    IDE-1S:
    IDE-2M:
    IDE-2S:
    IDE-3:
    SER-1:
    SER-2:
    PCI-1:
    PCI-2:
    PCI-3:
    PCI-4:
    PCI-5:
    PCI-6:
    PSU: (brand, model, total power, (estimated) combined power)
    BIOS revision:
    Operating System:
    VIA 4-in-1 drivers : (if you installed it, tell us the version)
    other drivers, services or applications might affect the data transfer such as : PCI Latency patch, WPCREDIT modifications, VCool, CoolerXP...
    2. CRC ERROR?
    PASS or FAIL
    If PASS, let us know your BIOS settings.
    If FAIL, proceed as below:
    3. Please use these BIOS settings:
    1. Load BIOS Setup Default
2. NO OVERCLOCK ON FSB! Set it according to your CPU
3. Set RAM to
a) SPD; if that fails, try
b) User defined with the slowest RAM timings, i.e. 266, 2.5, 3, 6, 3, disable interleave, 4, disable 1T, normal
    If PASS, go for more extreme BIOS settings as you usually use:
    1. High Performance Default
    2. set RAM to the extreme timings
    3. DO NOT overclock yet until both 1. and 2. are PASS
    4. Try these suggestions:
    1. Microsoft IDE drivers (uninstall VIA 4-in-1's)
2. a different version of the VIA 4-in-1 IDE filter driver?
    3. VIA IDE Miniport driver?
    4. use IDE-3 RAID channel for one HDD data transfer
    5. same HDD transfer, ie C:\dir1\*.* -> C:\dir2\*.*
    6. burn CD at 1x speed
    7. Set the HDD and/or CD to PIO mode, or slower UDMA mode.
    8. If and only if you know how to update BIOS correctly and willing to take some risks, try the BETA BIOS KT4 (1.25), KT4V (1.64) too.
Please report back the results of your tests and experiments with these suggestions.
If you have any workaround for this issue other than setting the FSB back to 100 MHz, please tell us too.
    Thanks for your inputs!

My system, which I just got 2 days ago:
    CPU: Athlon XP 2000+
    Motherboard: MSI KT4V (MS-6712)
    RAM Slot-1:
    RAM Slot-2: 512 Mb Kingston DDR 333 CAS 2.5
    RAM Slot-3:
    display card: Abit Siluro GF3 Ti200
    IDE-1M: Western Digital WD800JB (8 Mb Cache) - 80 GB
    IDE-1S: Seagate U-Series ST360020A - 60 GB
    IDE-2M: Sony DVD-ROM 16x (DDU1621)
    IDE-2S: Creative CDRW121032
    IDE-3:
    SER-1:
    SER-2:
    PCI-1:
    PCI-2:
    PCI-3: Accton 1207F 10/100 Fast Ethernet Card
    PCI-4: SBLIVE 5.1 Platinum with Live!Drive II
    PCI-5:
    PCI-6:
    PSU: 400Watts (Generic)
    BIOS revision: 1.6
    Operating System: Windows 2000 with SP3
    VIA 4-in-1 drivers : Hyperion 4.45, only AGP and INF installed. IDE drivers are standard Win2k/SP3 ones.
    2. CRC ERROR?
    FAIL.
When I got my system, I tried installing Windows ME because I wanted to dual-boot alongside Win2K. When I tried to install the NVIDIA Detonator drivers (ver 30.82), the install proceeded normally and asked for a reboot, which I did; it then just hung before the start of Windows ME. I rebooted, selected "Normal" when Windows ME detected the failed startup, and was then able to enter Windows ME, but it reported that the NVIDIA Detonator drivers were invalid and of the wrong type for my display card.
Later I tried to move my files from C:\ to D:\ and it reported that my destination file was invalid.
When I changed my OS to Win2k/SP3 (no more dual-boot) and installed the same NVIDIA Detonator driver version, it worked. When I started to copy files again, it later BSODed and said PAGE_FAULT_ERROR (something like this). When installing from CDs, it reports that my .CAB files are corrupt or that there is insufficient swap file space (I set mine manually at 1.2 GB). And there are times, during reboots into Win2K, when I find my keyboard and mouse (all PS/2) not working although Windows loads as usual.
Later I changed my PCI latency setting from 32 to 96, and I managed to install from CD without much further issue.
Reading the posts here, I hadn't realized that the MSI KT4 series and the KT400 chipset had so many issues! I have read countless sites like ExtremeTech and AnandTech, and none reported the particular errors I encountered during my first 2 days with this setup. (This is my first Athlon setup; I was previously an Intel person.)
So far, my conclusions for my settings are:
- 32-bit settings in the BIOS for CD/DVD/CD-RW must be disabled; I concur with Shumway's recommendations.
- DMA settings in Windows 2000 for CD/DVD/CD-RW must be set to PIO mode, otherwise copying from CD to HDD gives read errors.
- Installing the PCI latency fix really does wonders for my setup (PCI Latency fix ver 1.9). Now I can copy files between all my drives without worrying so much about CRC errors. Thank you, George E. Breese.
- I really want to know why I can't install the NVIDIA Detonator drivers in WinME while in Win2K I can.
I'll post again once I have done some more tests on my system, especially CD-R writes.
    Angel17

  • Excel 2012 SP1 plug-in for Master Data Services - Bugs???

Excel 2012 SP1 plug-in for Master Data Services, 32-bit or 64-bit
    1.  
    My entity A has 50,000 records.
    Entity B has a domain attribute "City" from entity A.
I successfully add new records to entity B, but after a refresh I cannot see any data in the domain attribute "City" for the new records (empty cells)!!! Although in the Web client (or through SSMS in the table) I can see the data in the "City" attribute (e.g. code_095 {Moscow}).
If I reduce the number of records in entity A to 20,000, then I do see code_095 {Moscow} in the Excel 2012 SP1 plug-in for MDS.
    Bug???
    2.
Why does the Excel 2012 SP1 plug-in for MDS reset the cell formatting??? Bug???
    3.
I have created rows for my own headings (business captions) above the MDS table in the Excel worksheet.
Excel 2012 SP1 plug-in for MDS: why, after applying a filter (query from the server), is the Excel worksheet completely recreated?
And all my heading rows have been removed!!! What the hell?
    from Moscow with money

    Superbluesman, is this still an issue? This looks like a bug. Did you file a Connect Item?
       Thanks!
Ed Price, Azure & Power BI Customer Program Manager (Blog, Small Basic, Wiki Ninjas, Wiki)
Answer an interesting question? Create a wiki article about it!

  • WCF service connection forcibly closed by the remote host for large data

Hello,
A WCF service is used to generate an Excel report. When the stored procedure returns large data (around 30,000 records), the service fails to return the data. Below is the error log:
System.ServiceModel.CommunicationException: An error occurred while receiving the HTTP response to <service url>. This could be due to the service endpoint binding not using the HTTP protocol. This could also be due to an HTTP request context being aborted by the server (possibly due to the service shutting down). See server logs for more details.
---> System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive.
---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
       at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
       at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
       --- End of inner exception stack trace ---
       at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
       at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
       at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead)
       --- End of inner exception stack trace ---
       at System.Net.HttpWebRequest.GetResponse()
       at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout).
       --- End of inner exception stack trace ---
    Server stack trace:
       at System.ServiceModel.Channels.HttpChannelUtilities.ProcessGetResponseWebException(WebException webException, HttpWebRequest request, HttpAbortReason abortReason)
       at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
       at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
       at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)
       at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
       at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
       at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)
    Exception rethrown at [0]:
       at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
       at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
       at IDataSetService.GetMastersData(Int32 tableID, String userID, String action, Int32 maxRecordLimit, Auditor& audit, DataSet& resultSet, Object[] FieldValues)
       at SPARC.UI.Web.Entities.Reports.Framework.Presenters.MasterPresenter.GetDataSet(Int32 masterID, Object[] procParams, Auditor& audit, Int32 maxRecordLimit).
    WEB CONFIG SETTINGS OF SERVICE
    <httpRuntime maxRequestLength="2147483647" executionTimeout="360"/>
    <binding name="BasicHttpBinding_Common"  closeTimeout="10:00:00"  openTimeout="10:00:00"
           receiveTimeout="10:00:00"  sendTimeout="10:00:00"  allowCookies="false"
           bypassProxyOnLocal="false"  hostNameComparisonMode="StrongWildcard"
           maxBufferSize="2147483647"  maxBufferPoolSize="0"  maxReceivedMessageSize="2147483647"
           messageEncoding="Text"  textEncoding="utf-8"   transferMode="Buffered"
           useDefaultWebProxy="true">
    <readerQuotas     maxDepth="2147483647"
          maxStringContentLength="2147483647"  maxArrayLength="2147483647"
          maxBytesPerRead="2147483647"  maxNameTableCharCount="2147483647" />
         <security mode="None"> 
    WEB CONFIG SETTINGS OF CLIENT
    <httpRuntime maxRequestLength="2147483647" requestValidationMode="2.0"/>
    <binding name="BasicHttpBinding_Common"
           closeTimeout="10:00:00"       openTimeout="10:00:00"
           receiveTimeout="10:00:00"       sendTimeout="10:00:00"
            allowCookies="false"        bypassProxyOnLocal="false"
            hostNameComparisonMode="StrongWildcard"        maxBufferSize="2147483647"
            maxBufferPoolSize="2147483647"        maxReceivedMessageSize="2147483647"
            messageEncoding="Text"        textEncoding="utf-8"
            transferMode="Buffered"        useDefaultWebProxy="true">
     <readerQuotas
           maxDepth="2147483647"
           maxStringContentLength="2147483647"
           maxArrayLength="2147483647"
           maxBytesPerRead="2147483647"
           maxNameTableCharCount="2147483647" />   

Binding configuration on a WCF service to override the default settings is not done the same way as it is in the client-side config file.
A custom binding must be used in the WCF service-side config to override the default binding settings on the service side.
http://robbincremers.me/2012/01/01/wcf-custom-binding-by-configuration-and-by-binding-standardbindingelement-and-standardbindingcollectionelement/
The readerQuotas and everything else must be given in the custom binding to override any default settings on the WCF service side.
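For illustration, a minimal service-side customBinding sketch (the binding name is illustrative; the contract name is taken from the stack trace above):
<customBinding>
  <binding name="LargeDataCustomBinding"
           closeTimeout="10:00:00" openTimeout="10:00:00"
           receiveTimeout="10:00:00" sendTimeout="10:00:00">
    <textMessageEncoding messageVersion="Soap11">
      <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
                    maxArrayLength="2147483647" maxBytesPerRead="2147483647"
                    maxNameTableCharCount="2147483647" />
    </textMessageEncoding>
    <httpTransport maxBufferSize="2147483647" maxReceivedMessageSize="2147483647" />
  </binding>
</customBinding>
<!-- and reference it from the service endpoint: -->
<endpoint address="" binding="customBinding"
          bindingConfiguration="LargeDataCustomBinding"
          contract="IDataSetService" />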
    Also, you are posting to the wrong forum.
    http://social.msdn.microsoft.com/Forums/vstudio/en-us/home?forum=wcf

  • XML Solutions for Large Data Sets

    Hi,
    I'm working with a large data set (9 million records comprising 36 gigabytes) and am exploring the use of XML with it.
I've experimented with a JDBC app (taken straight from Steve Muench's excellent Oracle XML Applications) for writing to CLOBs, but achieve throughputs of much less than 40k/s (the minimum speed required to process the data in < 10 days).
    What kind of throughputs are possible loading XML records from CLOBs into multiple tables (using server-side Java apps)?
    Could anyone comment whether XML is a feasible possibility for this size data set?
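For illustration, a minimal modern sketch of such a JDBC CLOB write loop (connection string, table, and column names are hypothetical; batching the inserts is usually the first lever on throughput):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ClobLoader {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details and schema.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass")) {
            conn.setAutoCommit(false);                 // commit per batch, not per row
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO xml_docs (id, doc) VALUES (?, ?)")) {
                for (int i = 0; i < 10000; i++) {
                    ps.setInt(1, i);
                    // setString writes to a CLOB column with current drivers;
                    // the JDBC of that era went through oracle.sql.CLOB instead.
                    ps.setString(2, "<record id=\"" + i + "\"></record>");
                    ps.addBatch();                     // amortize network round trips
                }
                ps.executeBatch();
                conn.commit();
            }
        }
    }
}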
    Regards,
    Mike

    Just would like to identify myself (I'm the submitter):
    Michael Driscoll <[email protected]>.

  • Java.lang.OutOfMemoryError: allocLargeObjectOrArray error for large payload

Ours is an outbound flow where an FTP adapter picks up the files and calls a requester service; the requester service calls the EBS, the EBS calls the provider service, and finally the file is written using B2B.
For the last 4/5 days we have been getting java.lang.OutOfMemoryError: allocLargeObjectOrArray.
We get this error when large payloads are used during testing.
    As per our understanding, when you have a tree of composite invocations (so A invokes B invokes C invokes D via flowN 100 times), none of the memory is released until they all complete.
    1. Could you please let us know exactly when memory is released?
    2. How to tune/optimize this?
    Our flow is like:
    SyncDisbursePaymentGetFtpAdapter --> CreateDisbursedPaymentEbizReqABCSImp l--> SyncDisbursePaymentRBTTEBS --> SyncDisbursedPaymentJPMC_CHKProvABCSImpl--> AIAB2BInterface --> Oracle B2B
<Dec 12, 2012 8:17:06 PM EST> <Warning> <RMI> <BEA-080003> <RuntimeException thrown by rmi server: javax.management.remote.rmi.RMIConnectionImpl.invoke(Ljavax.management.ObjectName;Ljava.lang.String;Ljava.rmi.MarshalledObject;[Ljava.lang.String;Ljavax.security.auth.Subject;)
    javax.management.RuntimeErrorException: java.lang.OutOfMemoryError: allocLargeObjectOrArray: [B, size 667664.
    javax.management.RuntimeErrorException: java.lang.OutOfMemoryError: allocLargeObjectOrArray: [B, size 667664
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:858)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:869)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:838)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
    at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$16.run(WLSMBeanServerInterceptorBase.java:449)
    Truncated. see log file for complete stacktrace
    Caused By: java.lang.OutOfMemoryError: allocLargeObjectOrArray: [B, size 667664
    at java.util.Arrays.copyOf(Arrays.java:2786)
    at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
    at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1847)
    at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1756)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1169)
    Truncated. see log file for complete stacktrace
Does anyone have any idea how to rectify this error, as the whole UAT environment is down because of this issue?

    Please find the required info:
    1. Operating System--> LINUX
    2. JVM (Sun or JRockit)-->JRockit
    3. Domain info (production mode enabled?, log levels, number of servers in cluster, number of servers in a machine)-->
a) production mode enabled --> Production mode is not enabled; we are going to enable it.
b) Log levels --> There are many logs (b2b, soa, bpel, integration); which log do I need to set to finest (32)?
c) number of servers in cluster --> 2
d) number of servers in a machine --> 1
4. Payload info (size, xml/non-xml?)
a) size --> more than 1 MB, and up to 25 MB
b) xml/non-xml --> xml
We are trying to make the changes you suggested and will update accordingly.
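For reference, heap tuning for a WebLogic managed server is usually done through USER_MEM_ARGS; a minimal sketch with illustrative values for a JRockit JVM:
# in $DOMAIN_HOME/bin/setDomainEnv.sh
USER_MEM_ARGS="-Xms4g -Xmx4g -Xns1g"   # fixed 4 GB heap; -Xns enlarges the JRockit nursery for short-lived large payloads
export USER_MEM_ARGS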

  • Regarding sql function error  for Hijri date to Gregorian date

    Hi ,
I want to convert the Hijri date format into the Gregorian date format. I wrote the script with the sql function like this:
    $Hijri_Date = '16/04/1428';
    $Gregorian_Date = sql('DS_REPO','SELECT CONVERT(DATE,[$Hijri_Date],131)');
    print($Gregorian_Date);
Here the $Hijri_Date data type is varchar and the $Gregorian_Date data type is date,
but I am getting an error like:
    7868     5812     DBS-070401     10/26/2010 10:37:18 PM     |Session Job_Hijradata_Conversion
    7868     5812     DBS-070401     10/26/2010 10:37:18 PM     ODBC data source <UIPL-LAP-0013\SQLEXPRESS> error message for operation <SQLExecute>: <[Microsoft][SQL Server Native Client
    7868     5812     DBS-070401     10/26/2010 10:37:18 PM     10.0][SQL Server]Explicit conversion from data type int to date is not allowed.>.
    7868     5812     RUN-050304     10/26/2010 10:37:18 PM     |Session Job_Hijradata_Conversion
    7868     5812     RUN-050304     10/26/2010 10:37:18 PM     Function call <sql ( DS_REPO, SELECT CONVERT(DATE,16/04/1428,131) ) > failed, due to error <70401>: <ODBC data source
    7868     5812     RUN-050304     10/26/2010 10:37:18 PM     <UIPL-LAP-0013\SQLEXPRESS> error message for operation <SQLExecute>: <[Microsoft][SQL Server Native Client 10.0][SQL
    7868     5812     RUN-050304     10/26/2010 10:37:18 PM     Server]Explicit conversion from data type int to date is not allowed.>.>.
    7868     5812     RUN-053008     10/26/2010 10:37:18 PM     |Session Job_Hijradata_Conversion
Please help me out with this problem,
or suggest any other solution to convert the Hijri date format to the Gregorian date format.
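For what it's worth, the log shows the statement that actually ran: SELECT CONVERT(DATE,16/04/1428,131). The variable was substituted without quotes, so SQL Server evaluated 16/04/1428 as integer arithmetic, hence "Explicit conversion from data type int to date is not allowed". A minimal sketch of the usual fix, assuming Data Services' curly-brace substitution (which wraps the value in quotes; please verify on your version):
$Hijri_Date = '16/04/1428';
# {$...} substitutes the value with quotes; style 131 reads the string as a Hijri dd/mm/yyyy date
$Gregorian_Date = sql('DS_REPO', 'SELECT CONVERT(DATE, {$Hijri_Date}, 131)');
print($Gregorian_Date);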
    Thanks&Regards,
    Ramana.

    Hi ,
In Data Quality there is no built-in function for converting a Hijri date to a Gregorian date. We have a function for converting a Julian date to a Gregorian date.
    Thanks&Regards,
    Ramana.

  • Showing Keyfigures  error for result and overall result in the Query output

    Hi All,
When executing the query, the results show correct figures for quantity & value, but ERROR appears instead of the UOM (unit of measure) & currency.
    Can anybody help?
    Rgds,
    C.V.

    Hi There,
Try checking whether the particular UOM and currency you are reporting on are loaded into BW.
    hope it helps,
    Regards,
    Parth.

  • Data Model best Practices for Large Data Models

    We are currently rolling out Hyperion IR 11.1.x and are trying to establish best practces for BQYs and how to display these models to our end users.
    So far, we have created an OCE file that limits the selectable tables to only those that are within the model.
    Then, we created a BQY that brings in the tables to a data model, created metatopics for the main tables and integrated the descriptions via lookups in the meta topics.
This seems to be OK; however, any time I try to add items to a query, as soon as I add columns from different tables, the app freezes up, hogs a bunch of memory, and then closes itself.
Obviously, this isn't acceptable to give to our end users, so I'm asking for suggestions.
Are there settings I can change to get around this memory-hogging issue? Do I need to use a smaller model?
And in general, how are you all deploying this tool to your users? Our users are accustomed to a pre-built data model where they can just click, add the fields they want, and hit submit. How do I get close to that ideal with this tool?
    thanks for any help/advice.

I answered my own question. In the case of the large data model, the tool by default was attempting to calculate every possible join path to get from Table A to Table B (even though there is a direct join between them).
In the data model options, I changed the join setting to use the join path with the least number of topics. This skipped the extraneous steps and allowed me to proceed as normal.
Hope this helps anyone else who bumps into this issue.

  • ESS error for Personal data and Family members

    Hi,
I am getting an error in ESS for Personal Data and Family Members.
The error message is:
A critical error has occurred. Processing of the service had to be terminated. Unsaved data has been lost.
Please contact your system admin.
An exception occurred that was not caught; error key RFC_ERROR_SYSTEM_FAILURE.
    Thanks
    Sasikanth

Check the table V_T7XSSPERSUBTYP.
Also check via transaction HRUSER whether the user has an employee assigned.
Check the log using transaction ST22.
You should also run a trace using transaction ST01.
Hope this helps you.
    Regards

  • Log and transfer error for large avchd file

The Log and Transfer process for large (>15 min) AVCHD MTS files results in an error. The process is aborted and the destination ProRes 422 file disappears from the Finder.
I shoot AVCHD MTS files with a Sony HXR-NX5 onto 32 GB SDHC cards.
I copy the entire card image to an attached disk.
In FCP 6.0.6 I use Log and Transfer.
The process works flawlessly for smaller clips.
The longer clips fail.
During the process I see the ProRes 422 MOV file being created and growing; before completion it ends in an abort.
In the Log and Transfer dialog I get a red octagon sign with an exclamation mark. I am not able to get the details of the error.
    Q: what should I do to be able to use my longer clips?
    Coen Jeukens
    the Netherlands

All disks, internal and external (hardwired), are Mac OS Extended.
I have reset all prefs. Currently rerunning a 19 min import (it will take over an hour).
If that fails, I will resort to setting ins and outs.
Thinking out loud: my Sony NX5 produces MTS files of max 2 GB. If a clip is longer, it is split across multiple files on the SDHC card.
FCP seems to recognise this artificial split; in L&T it combines the split files into a single clip.
Q: could it be that L&T stumbles over this artificial split of a clip?
    Coen

  • "Date picker" in report - - View source shows no label for the date picker

    Hi
In one of my reports, I am using multiple item types (textarea/text/date picker/select list).
I wanted to key some JavaScript off the label, but I do not see a "<label>" for the date picker in the view source, which is why my JavaScript fails.
    Snippet of view Source:
    <td class="t3data" ><label for="f08_0001" class="hideMe508">CATEGORY</label><textarea name="f08" rows="4" cols="16" wrap="VIRTUAL" id="f08_0001">EMPLOYEE INDUCTION</textarea></td>
    <td class="t3data" ><span class="lov"><input type="text" name="f11" size="15" maxlength="2000" value="29-FEB-08" style="padding-right:5px;" id="f11_0001" /><script type="text/javascript">
As you can see above, we have a "label for" for the textarea but not for the date picker.
Is there a reason for this?
    Thanks
    Nitin

    Hi,
OK - the headers attribute should also help, as it will identify the correct cells. You would need to know the HTML tags used within each cell for each datatype.
Would using cloneNode help you? It's a method of creating a copy of an entire row in a single instruction.
    Andy
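For illustration, a minimal sketch of both suggestions, assuming the markup in the snippet above (the table id is hypothetical; name="f11" comes from the view source):
// Select the date-picker inputs by their name attribute instead of a <label>:
var pickers = document.getElementsByName('f11');
for (var i = 0; i < pickers.length; i++) {
    alert(pickers[i].value);                          // e.g. "29-FEB-08"
}
// cloneNode copies an entire row, cells and all, in one instruction:
var table = document.getElementById('report_table'); // hypothetical id
var copy = table.rows[1].cloneNode(true);            // true = deep copy
table.tBodies[0].appendChild(copy);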

  • Mapping error for larger messages

Hi,
My scenario is File to File with some conditions (the conditions are written in a UDF).
When I execute the mapping with a small payload, it executes correctly.
But when I execute it with a larger payload, the mapping runs but does not apply any of the conditions in the UDF; it executes as a one-to-one mapping.
Thanks in advance

Hi,
This should not be the case... if your UDF works fine for a smaller payload then it should work for a larger payload too.
- Test the mapping using the payload, then in the mapping use Display Queue wherever you have used the UDF and try to see what the input and the output are; this will give you a clear idea of what exactly is going on.
- I guess something is wrong with the logic you used in the UDF, or it is input-payload dependent, so use Display Queue to get a clear idea of how the UDF behaves.
    Thanks,
    Bhupesh
