Large Core file

Hello,
I found a 1.2 GB core file in the home directory of my domain (/oracle/product/ias10g/j2ee/home).
It looks like a memory dump file. It is full of binary data.
Can you please let me know what this file is exactly and when/how it is generated? Or maybe point me to another resource?
Is there a way for me to configure the generation of this file?
Thanks

Similar Messages

  • How to limit the size of large core file

    These days my project is running into big trouble with large core files.
    I want to limit the size of the core files.
    Please help with this.
    Regards ,
    Abhi

    Hi Abhishek,
    Can you explain more?
    OS?
    DB?
    SAP version?
    Is the /usr/sap/SID/DVEBMGS00/work/ folder full and causing the issue?
    Regards,
    V Srinivasan

  • How to increase the performance of Photoshop CS6 v13.0.6 with transformations in LARGE image files (25,000 x 50,000 pixels) on an iMac 3.4 GHz Intel Core i7, 16 GB memory, Mac OS 10.7.5? Should I purchase a Mac Pro?

    I have hundreds of these large image files to process. I frequently use the SKEW, WARP, and IMAGE ROTATION features in Photoshop. I process the files ONE AT A TIME and have only Photoshop running on my iMac. Most of the time I am watching SLOW progress bars while the transformations complete.
    I have allocated as much memory as I can (about 14 GB) exclusively to Photoshop using the Performance preferences panel. Does anyone have a trick for speeding up the processing of these files, or is my only solution to buy a faster Mac, like the new Mac Pro?

  • Help! mdb open core file Error

    Please help me!
    When I use mdb to open a core file, the result is an error:
    -bash-3.00# mdb ./core
    mdb: core file data for mapping at ffb80000 not saved: Bad address
    There is plenty of free disk space!
    Why? Where is it going wrong?

    My core file's size is 2 GB, a largefile!
    But mdb supports largefiles!
    I do not know whether this is a problem or not.
    Please help me!
    By the way, my English is very poor!

  • I need to sort very large Excel files and perform other operations. How much faster would this be on a Mac Pro rather than my MacBook Pro i7, 2.6, 15R?

    I am a scientist and run my own business. Money is tight. I have some very large Excel files (~200 MB) that I need to sort and perform logic operations on. I currently use a MacBook Pro (i7 core, 2.6 GHz, 16 GB 1600 MHz DDR3) and I am thinking about buying a multicore Mac Pro. Some of the operations take half an hour to perform. How much faster should I expect these operations to be on a new Mac Pro? Is there a significant speed advantage in the 6-core vs the 4-core? Practically speaking, what are the features I should look at, and what is the speed bump I should expect if I go to 32 GB or 64 GB? Related to this, I am using a 32-bit version of Excel. Is there a 64-bit spreadsheet that I can use on a Mac that has no limit on column and row size?

    Grant Bennet-Alder,
    It’s funny you mentioned using Activity Monitor.  I use it all the time to watch when a computation cycle is finished so I can avoid a crash.  I keep it up in the corner of my screen while I respond to email or work on a grant.  Typically the %CPU will hang at ~100% (sometimes even saying the application is not responding in red) but will almost always complete the cycle if I let it go for 30 minutes or so.  As long as I leave Excel alone while it is working it will not crash.  I had not thought of using the Activity Monitor as you suggested. Also I did not realize using a 32 bit application limited me to 4GB of memory for each application.  That is clearly a problem for this kind of work.  Is there any work around for this?   It seems like a 64-bit spreadsheet would help.  I would love to use the new 64 bit Numbers but the current version limits the number of rows and columns.  I tried it out on my MacBook Pro but my files don’t fit.
    The hatter,
    This may be the solution for me. I’m OK with assembling the unit you described (I’ve even etched my own boards) but feel very bad about needing to step away from Apple products.  When I started computing this was the sort of thing computers were designed to do.  Is there any native 64-bit spreadsheet that allows unlimited rows/columns, which will run on an Apple?  Excel is only 64-bit on their machines.
    Many thanks to both of you for your quick and on point answers!

  • Software available for working with large video files?

    Hello,
    I'm working in PP CS6. I was wondering if there are any workarounds or 3rd party plugins/software that
    make working with really large video files easier and faster?
    Thanks.
    Mark

    Hi Jeff,
    Thanks for helping. This is the first time I shot video with my Nikon D5200. It was only a 3 minute test clip
    set at the highest resolution, 1920x1080-60i. I saw the red line above the clip in PP CS6 and hit the enter
    key to render the clip.
    It took almost 18 minutes or so to render the clip. This is probably normal, but I was wondering if there is
    a way to reduce the file size so it doesn't take quite as long to render. I just remember, a few years back
    when the Red camera was out, guys were working with really huge files, and there was a program
    from Cine something that they used to reduce the file size and make it more manageable when editing.
    I could be mistaken. I've been out of the editing loop for a few years and am just getting back into it.
    Thanks.
    Mark
    Here's my PC's components list you asked for:
    VisionDAW 4U 8-Core Xeon Workstation
      2 Intel QUAD-Core Xeon 5365-3.0GHz, 8MB, 1333MHz Processors
      16GB 667MHz Fully Buffered Server Memory Modules (2x2GB)
      Microsoft® Windows® 7 Ultimate (x64)
      WDC 250GB, Ultra ATA100, 7200 rpm, 8MB Buffer Main OS HD
      2 WDC 750GB, SATA II, 7200 RPM, 16MB Buffer HD (RAID 0)
      2 WDC 750GB, SATA II, 7200 rpm, 16MB Buffer HD (Samples)
      2 WDC 1TB Enterprise Class, SATA II, 7200 RPM, 32MB Buffer Hard Drive
      MOTU 24 I/O (main) / MOTU 2408mk3 (slave)
      Plextor PX-800A 18X Dbl. Layer DVD+/-RW Optical Drive
      Buffalo Blu-ray Drive (External) BR-816SU2
      Front Panel USB Access
      Integrated FireWire (1394a) interface
      Thermaltake Toughpower 850W Power Supply
      3xUAD1 Universal Audio Cards
      NVIDIA QUADRO FX 1800 / Memory 768 MB GDDR3
      CUDA Parallel Processor Cores / 64
      Maximum Display Resolution Digital @60Hz = 2560x1600
      Memory Interface 192Bit
      Memory Bandwidth (GB/sec) / 38.4 GB/sec
      PCI-Express, DUAL-Link DVI 1
      Digital Outputs 3 (2 out of 3 active at a time)
      Dual 25.5" Samsung 2693HM LCD HD Monitors

  • Flex file upload issue with large image files

    Hello, I have created a sample Flex application to upload an image, and also a Java servlet to upload and save the image, deployed on a local Tomcat server. I am testing the application on a LAN. I can upload small as well as large image files (1 MB) from some PCs, but on some other PCs I get an IOError while uploading large image files, although it works fine for small images. The image upload hangs at 10%-20% and throws an IOError. Surprisingly, it works OK on XP systems and causes issues on Windows 7 systems.
    Please give me any ideas toward a solution.
    On the Tomcat server side it gives the following error:
    request: org.apache.catalina.connector.RequestFacade@c19694
    org.apache.commons.fileupload.FileUploadBase$IOFileUploadException: Processing of multipart/form-data request failed. Stream ended unexpectedly
            at org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:371)
            at org.apache.commons.fileupload.servlet.ServletFileUpload.parseRequest(ServletFileUpload.java:126)
            at flex.servlets.UploadImage.doPost(UploadImage.java:47)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
            at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:877)
            at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:594)
            at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1675)
            at java.lang.Thread.run(Thread.java:722)
    Caused by: org.apache.commons.fileupload.MultipartStream$MalformedStreamException: Stream ended unexpectedly
            at org.apache.commons.fileupload.MultipartStream$ItemInputStream.makeAvailable(MultipartStream.java:982)
            at org.apache.commons.fileupload.MultipartStream$ItemInputStream.read(MultipartStream.java:886)
            at java.io.InputStream.read(InputStream.java:101)
            at org.apache.commons.fileupload.util.Streams.copy(Streams.java:96)
            at org.apache.commons.fileupload.util.Streams.copy(Streams.java:66)
            at org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:366)
    UploadImage.java:
    package flex.servlets;

    import java.io.*;
    import java.util.*;
    import java.util.regex.*;

    import javax.servlet.*;
    import javax.servlet.http.*;

    import org.apache.commons.fileupload.*;
    import org.apache.commons.fileupload.disk.DiskFileItemFactory;
    import org.apache.commons.fileupload.servlet.ServletFileUpload;

    public class UploadImage extends HttpServlet {

        /**
         * @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse response)
         */
        protected void doGet(HttpServletRequest request,
                HttpServletResponse response) throws ServletException, IOException {
            doPost(request, response);
        }

        public void doPost(HttpServletRequest request,
                HttpServletResponse response) throws ServletException, IOException {
            PrintWriter out = response.getWriter();
            boolean isMultipart = ServletFileUpload.isMultipartContent(request);
            System.out.println("request: " + request);
            if (!isMultipart) {
                System.out.println("File Not Uploaded");
            } else {
                FileItemFactory factory = new DiskFileItemFactory();
                ServletFileUpload upload = new ServletFileUpload(factory);
                List<FileItem> items;
                try {
                    items = upload.parseRequest(request);
                    System.out.println("items: " + items);
                } catch (FileUploadException e) {
                    // Parsing failed (e.g. "Stream ended unexpectedly"): log and stop
                    // instead of dereferencing an unassigned list.
                    e.printStackTrace();
                    return;
                }
                Iterator<FileItem> itr = items.iterator();
                while (itr.hasNext()) {
                    FileItem item = itr.next();
                    if (item.isFormField()) {
                        // Ordinary form field: just log its name and value.
                        String name = item.getFieldName();
                        System.out.println("name: " + name);
                        String value = item.getString();
                        System.out.println("value: " + value);
                    } else {
                        try {
                            // Build a unique name: the upload name with '.' and '*'
                            // removed, plus a random suffix and the original extension.
                            String itemName = item.getName();
                            Random generator = new Random();
                            int r = Math.abs(generator.nextInt());
                            String reg = "[.*]";
                            String replacingtext = "";
                            System.out.println("Text before replacing is:-" + itemName);
                            Pattern pattern = Pattern.compile(reg);
                            Matcher matcher = pattern.matcher(itemName);
                            StringBuffer buffer = new StringBuffer();
                            while (matcher.find()) {
                                matcher.appendReplacement(buffer, replacingtext);
                            }
                            matcher.appendTail(buffer);
                            int indexOfDot = itemName.indexOf(".");
                            String domainName = itemName.substring(indexOfDot); // file extension
                            System.out.println("domainName: " + domainName);
                            String finalimage = buffer.toString() + "_" + r + domainName;
                            System.out.println("Final Image===" + finalimage);
                            // NOTE: the upload is saved under a fixed name, so finalimage
                            // is computed but never actually used here.
                            File savedFile = new File(getServletContext()
                                    .getRealPath("assets/images/") + "/LowesFloorPlan.png");
                            item.write(savedFile);
                            out.println("<html>");
                            out.println("<body>");
                            out.println("<table><tr><td>");
                            out.println("");
                            out.println("</td></tr></table>");
                            out.println("image inserted successfully");
                            out.println("</body>");
                            out.println("</html>");
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }
            }
        }
    }

    This only happens on Windows 7 systems, and the root cause is an SSL certificate issue.
    Workaround for this:
    Open the application in IE and click the certificate error link in the address bar. Click Install Certificate and you are done.
    Happy programming.
    Thanks
    DevSachin
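
    As a general note on large uploads (separate from the SSL root cause above), Commons FileUpload can also enforce explicit size limits, so oversized requests fail fast with a clear exception instead of dying mid-stream. A minimal sketch; the limit values are arbitrary examples, not taken from this thread:

        // Sketch: explicit upload limits with Commons FileUpload (values are examples).
        DiskFileItemFactory factory = new DiskFileItemFactory();
        factory.setSizeThreshold(1024 * 1024);  // keep items up to 1 MB in memory
        factory.setRepository(new File(System.getProperty("java.io.tmpdir"))); // spool larger items to disk
        ServletFileUpload upload = new ServletFileUpload(factory);
        upload.setFileSizeMax(10L * 1024 * 1024);  // max 10 MB per file
        upload.setSizeMax(20L * 1024 * 1024);      // max 20 MB per request
        // parseRequest(request) now throws FileUploadBase.SizeLimitExceededException
        // (or FileSizeLimitExceededException) instead of failing partway through.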

  • Core File Support

    Hello,
    I'm not sure if this is the correct forum for my question but this looks like the best one.
    We have a large application written in C that runs on SunOS 5.8 Generic February 2000
    (a.k.a. Solaris 8). We have an internal debugger which our developers use to debug
    our application. We're in the process of trying to add core file support to the debugger.
    What's the best way to get to the dynamic linker's link map at the time of the core dump?
    I've looked at the Linker and Libraries Guide, the rtld_debugger_interface for agents, and rd_loadobj_iter, but they seem to be aimed more at targeting a running process. I know the link map is somewhere in the core file; I'm just not sure how to get at it. Any help is appreciated.
    Regards,
    Jim

    The core(4) man page describes the core file format. The elf(3ELF) man page describes the libelf library that may be used to read the file. In particular, look at elf32_getphdr(3ELF). You can also use "dump -ov core" to see the same information.
    Richard

  • Speeding download of large sized file

    Hi,
    I could really use some help. I'm stumped. I'm using LiveCycle Designer ES and have a large form with a lot of graphics (~15 MB). I'm planning on emailing it to customers, but it's so large it is impractical. I've tried getting rid of objects but am down to the core information, so I can't reduce the size that way. Two questions:
    1) Is there a way within LiveCycle, or through publishing in Acrobat, that I can significantly further reduce the size of the file? I've read the Adobe documentation and have already tried their suggestions (e.g. using one of the supported fonts instead of embedding a font, etc.)
    2) If I can't significantly reduce the file size, is there a way that I can make the user not wait as long for a download? I had 2 ideas but don't know if they are at all doable:
    a. Package the PDF file with a download accelerator that will work just for that file to speed up the download, and doesn't need a separate installation. Does anyone know if there is such a product?
    b. Split the large PDF file into sections. Have the user download the first section. While s/he is filling that out, download the next section in the background, and so on. If this is programmable, could someone explain how to approach it?
    Thanks very much for any help with this. This is a great forum.
    Jeff

    Hello David,
    The only "improvement" or workaround I see in NW 04 is to use the Web Dynpro Binary Cache to store your large resource. In NW 04s there is a more elegant solution called "on demand streams", but it is not applicable in NW 04.
    Try the following workaround:
    - Do not store your large resource in a context value attribute of type binary (as described in the tutorial on File Up/Download: https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/d2201c20-0801-0010-49b3-94fca8f590d5)
    - Do not use the FileDownload UI element.
    - Instead, use a LinkToURL UI element and bind it to a context attribute named 'ResourceURL' of type String. The large resource is not written to the context; it is written to the Web Dynpro Binary Cache as an object of type java.io.InputStream.
    See the ExcelExport tutorial https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/uuid/1208c2cd-0401-0010-4ab6-f4736074acc6
    and the JavaDoc on WDWebResource (https://media.sdn.sap.com/javadocs/NW04/SPS15/wd/com/sap/tc/webdynpro/services/sal/url/api/WDWebResource.html):
    static IWDCachedWebResource getWebResource(InputStream input, WDWebResourceType resourceType)
    Returns an IWDWebResource for a given web resource.
    - Try IWDCachedWebResource.setAttachement(true).
    - Store the URL of the cached web resource in the context attribute 'ResourceURL' to which the LinkToURL UI element is bound.
    Please try out this approach and tell me whether it circumvents your OutOfMemory problem.
    Regards, Bertram
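
    A minimal sketch of that approach inside a Web Dynpro controller, based only on the WDWebResource JavaDoc quoted above; the generated accessors (wdContext, setResourceURL), the resource type constant, and the getLargeResourceAsStream() helper are assumptions for illustration:

        // Sketch only: 'ResourceURL' is the String context attribute bound to the
        // LinkToURL UI element; getLargeResourceAsStream() is a hypothetical helper.
        try {
            InputStream input = getLargeResourceAsStream();
            IWDCachedWebResource resource =
                    WDWebResource.getWebResource(input, WDWebResourceType.XLS); // type is an assumption
            resource.setAttachement(true); // note: the API spells it "Attachement"
            wdContext.currentContextElement().setResourceURL(resource.getURL());
        } catch (Exception e) {
            wdComponentAPI.getMessageManager().reportException(e.getLocalizedMessage(), true);
        }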

  • Large Swap file

      Model Name:    MacBook Pro
      Model Identifier:    MacBookPro6,2
      Processor Name:    Intel Core i5
      Processor Speed:    2.53 GHz
      Number Of Processors:    1
      Total Number Of Cores:    2
      L2 Cache (per core):    256 KB
      L3 Cache:    3 MB
      Memory:    8 GB
      Processor Interconnect Speed:    4.8 GT/s
    Running the latest version of Parallels and Windows 7 64 bit.
    Using Activity Monitor I find that I always have a large swap file (even when Parallels is not running), a huge amount of inactive memory, and very little free memory. It slows down my system. Any suggestions?

    First, upgrade your system to 10.6.7 or 10.6.8. Second, large swap files don't necessarily mean a thing. You might see the following:
    About OS X Memory Management and Usage
    Reading system memory usage in Activity Monitor
    Memory Management in Mac OS X
    Performance Guidelines- Memory Management in Mac OS X
    A detailed look at memory usage in OS X
    Understanding top output in the Terminal
    The amount of available RAM for applications is the sum of Free RAM and Inactive RAM. This will change as applications are opened and closed or change from active to inactive status. The Swap figure represents an estimate of the total amount of swap space required for VM if used, but does not necessarily indicate the actual size of the existing swap file. If you are really in need of more RAM that would be indicated by how frequently the system uses VM. If you open the Terminal and run the top command at the prompt you will find information reported on Pageins () and Pageouts (). Pageouts () is the important figure. If the value in the parentheses is 0 (zero) then OS X is not making instantaneous use of VM which means you have adequate physical RAM for the system with the applications you have loaded. If the figure in parentheses is running positive and your hard drive is constantly being used (thrashing) then you need more physical RAM.
    Adding RAM only makes it possible to run more programs concurrently.  It doesn't speed up the computer nor make games run faster.  What it can do is prevent the system from having to use disk-based VM when it runs out of RAM because you are trying to run too many applications concurrently or using applications that are extremely RAM dependent.  It will improve the performance of applications that run mostly in RAM or when loading programs.
    Bear in mind you are running Parallels and a VM concurrently with some other OS X applications. Too many concurrent applications will result in using too much memory and increasing swapping.

  • Validating large xml files

    In our application we have a requirement to validate a large XML file (around 40 MB) against a simple schema file.
    I have tried all the options (SAX and DOM) mentioned in the article below on the Sun site:
    http://java.sun.com/developer/EJTechTips/2005/tt1025.html
    but with all of these approaches the validation process just hangs for hours.
    Even when I run my test case with 1 GB of memory allocated, on a recent processor (Core 2 Duo), it still hangs and only returns after a very long time.
    If needed I can provide a sample test case, schema file, and XML file.
    Please help me figure out whether this is a problem with the JAXP 1.3 Schema Validation Framework (SVF) handling large files, or whether there is a more efficient way of validating large XML files.
    Regards
    Rahul

    You may want to split the XML into 10 MB chunks and parse them separately, or store it in a database and read the XML using XQuery.
    I am sure there may be better ways to do this; let's see the experts' opinions here.
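
    For what it's worth, validating straight from a StreamSource with javax.xml.validation (never building a DOM) is usually the most memory-friendly JAXP route; a minimal sketch, with placeholder file names:

        import java.io.File;
        import javax.xml.XMLConstants;
        import javax.xml.transform.stream.StreamSource;
        import javax.xml.validation.Schema;
        import javax.xml.validation.SchemaFactory;
        import javax.xml.validation.Validator;

        public class BigXmlValidate {
            public static void main(String[] args) throws Exception {
                SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
                Schema schema = sf.newSchema(new File("simple.xsd"));     // placeholder name
                Validator validator = schema.newValidator();
                // A StreamSource lets the validator read the document incrementally
                // instead of building an in-memory tree of the whole 40 MB file.
                validator.validate(new StreamSource(new File("large.xml"))); // placeholder name
                System.out.println("valid");
            }
        }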

  • Import Large XML File to Table

    I have a large (819MB) XML file I'm trying to import into a table in the format:
    <ROW_SET>
    <ROW>
    <column_name>value</column_name>
    </ROW>
    <ROW>
    <column_name>value</column_name>
    </ROW>
    </ROW_SET>
    I've tried importing it with xmlsequence(...).extract(...) and ran into the "number of nodes exceeds maximum" error.
    I've tried importing it with XMLTable(... passing XMLTYPE(bfilename('DIR_OBJ','large_819mb_file.xml'), nls_charset_id('UTF8'))) and I gave up after it ran for 15+ hours ( COLLECTION ITERATOR PICKLER FETCH issue ).
    I've tried importing it with:
    insCtx := DBMS_XMLStore.newContext('schemaname.tablename');
    DBMS_XMLStore.clearUpdateColumnList(insCtx);
    DBMS_XMLStore.setUpdateColumn(insCtx,'column1name');
    DBMS_XMLStore.setUpdateColumn(insCtx,'columnNname');
    ROWS := DBMS_XMLStore.insertXML(insCtx, XMLTYPE(bfilename('DIR_OBJ','large_819mb_file.xml'), nls_charset_id('UTF8')));
    and ran into ORA-04030: out of process memory when trying to allocate 1032 bytes (qmxlu subheap,qmemNextBuf:alloc).
    All I need to do is read the XML file and move the data into a matching table in a reasonable time. Once I have the data in the database, I no longer need the XML file.
    What would be the best way to import large XML files?
    Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    "CORE     11.2.0.1.0     Production"
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production

    This (rough) approach should work for you.
    CREATE TABLE HOLDS_XML
            (xml_col XMLTYPE)
          XMLTYPE xml_col STORE AS SECUREFILE BINARY XML;
    INSERT INTO HOLDS_XML
    VALUES (xmltype(bfilename('DIR_OBJ','large_819mb_file.xml'), nls_charset_id('UTF8')));
    -- Should be using AL32UTF8 for DB character set with XML
    SELECT ...
      FROM HOLDS_XML HX,
           XMLTable(...
              PASSING HX.xml_col ...)

    How it differs from your approach:
    By using the HOLDS_XML table with SECUREFILE BINARY XML storage (which became the default in 11.2.0.2), we are providing a place for Oracle to store a parsed version of the XML. This allows the XML to be stored on disk instead of in memory. Oracle can then access the needed pieces of XML from disk by streaming them, instead of holding the whole XML in memory and parsing it repeatedly to find the information needed. That is what COLLECTION ITERATOR PICKLER FETCH means: a lot of memory work. You can search on that term to learn more about it if needed.
    The XMLTable approach then simply reads this XML from disk and should be able to parse the XML with no issue. You have the option of adding indexes to the XML, but since you are simply reading it all one time and tossing it, there is (most likely) no advantage to indexes.

  • Arbitrary waveform generation from large text file

    Hello,
    I'm trying to use a PXI-6733 card hooked up to a BNC-2110 in a PXI-1031-DC chassis to output arbitrary waveforms at a sample rate of 100 kS/s. The waveforms I want to generate are generally going to be sine waves of frequencies less than 10 kHz, but they need to be very high quality signals, hence the high sample rate. Eventually we would like to go as high as 200 kS/s, but for right now we just want to get it to work at the lower rate.
    Someone in the department has already created large text files (> 1 GB) for me with 9 columns of numbers representing the output voltages for the channels (there will be 6 channels outputting sine waves and 3 other channels with a periodic DC voltage). The reason for the large file is that we want a continuous signal for around 30 minutes to allow for equipment testing and configuration while the signals are being generated.
    I'm supposed to use this file to generate the output voltages on the 6733 card, but I keep getting numerous errors and have been unable to get something that works. The code, as written, currently generates error code -200290 immediately after the buffered data is output from the card. Nothing ever seems to get enqueued or dequeued, and although I've read the LabVIEW help on buffers, I'm still very confused about their operation, so I'm not even sure if the buffer is working properly. I was hoping some of you could look at my code and give me some suggestions (or sample code too!) for the best way to achieve this goal.
    Thanks a lot,
    Chris (new LabVIEW user)

    Chris:
    For context, I've pasted in the "explain error" output from LabVIEW to refer to while we work on this. More after the code...
    Error -200290 occurred at an unidentified location
    Possible reason(s):
    The generation has stopped to prevent the regeneration of old samples. Your application was unable to write samples to the background buffer fast enough to prevent old samples from being regenerated.
    To avoid this error, you can do any of the following:
    1. Increase the size of the background buffer by configuring the buffer.
    2. Increase the number of samples you write each time you invoke a write operation.
    3. Write samples more often.
    4. Reduce the sample rate.
    5. Change the data transfer mechanism from interrupts to DMA if your device supports DMA.
    6. Reduce the number of applications your computer is executing concurrently.
    In addition, if you do not need to write every sample that is generated, you can configure the regeneration mode to allow regeneration, and then use the Position and Offset attributes to write the desired samples.
    By default, the analog output on the device does what is called regeneration. Basically, if we're outputting a repeating waveform, we can simply fill the buffer once and the DAQ device will reuse the samples, reducing load on the system. What appears to be happening is that the VI can't read samples out from the file fast enough to keep up with the DAQ card. The DAQ card is set to NOT allow regeneration, so once it empties the buffer, it stops the task since there aren't any new samples available yet.
    If we go through the options, we have a few things we can try:
    1. Increase background buffer size.
    I don't think this is the best option. Our issue is with filling the buffer, and this requires more advanced configuration.
    2. Increase the number of samples written.
    This may be a better option. If we increase how many samples we commit to the buffer, we can increase the minimum time between writes in the consumer loop.
    3. Write samples more often.
    This probably isn't as feasible. If anything, you should probably have a short "Wait" function in the consumer loop where the DAQmx write is occurring, just to regulate loop timing and give the CPU some breathing space.
    4. Reduce the sample rate.
    Definitely not a feasible option for your application, so we'll just skip that one.
    5. Use DMA instead of interrupts.
    I'm 99.99999999% sure you're already using DMA, so we'll skip this one also.
    6. Reduce the number of concurrent apps on the PC.
    This is to make sure that the CPU time required to maintain good loop rates isn't being taken by, say, an antivirus scanner or something. Generally, if you don't have anything major running other than LabVIEW, you should be fine.
    I think our best bet is to increase the "Samples to Write" quantity (to increase the minimum loop period), and possibly to delay the DAQmx Start Task and consumer loop until the producer loop has had a chance to build the queue up a little. That should reduce the chance that the DAQmx task will empty the system buffer and ensure that we can prime the queue with a large quantity of samples. The consumer loop will wait for elements to become available in the queue, so I have a feeling that the file read may be what is slowing the program down. Once the queue empties, we'll see the DAQmx error surface again. The only real solution is to load the file to memory farther ahead of time.
    Hope that helps!
    Caleb Harris
    National Instruments | Mechanical Engineer | http://www.ni.com/support
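
    The priming idea in the last paragraph is language-agnostic; here is a rough sketch of the same producer/consumer shape in Java, where the queue capacity, priming level, and the two helpers standing in for the file reader and the DAQmx write are all hypothetical:

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.TimeUnit;

        public class PrimedProducerConsumer {
            public static void main(String[] args) throws Exception {
                // Bounded queue of sample blocks (capacity is an arbitrary example).
                BlockingQueue<double[]> queue = new ArrayBlockingQueue<>(64);

                Thread producer = new Thread(() -> {
                    try {
                        double[] buf;
                        // Read blocks from the big file until it is exhausted.
                        while ((buf = readNextBlockFromFile()) != null) {
                            queue.put(buf); // blocks while the queue is full
                        }
                    } catch (InterruptedException ignored) { }
                });
                producer.start();

                // Prime the queue before starting the consumer so the device
                // write never starves right after startup.
                while (queue.size() < 32 && producer.isAlive()) {
                    Thread.sleep(10);
                }

                double[] block;
                while ((block = queue.poll(1, TimeUnit.SECONDS)) != null) {
                    writeBlockToDevice(block); // stand-in for the DAQmx write
                }
            }

            // Hypothetical helpers: return null at end of file; write to hardware.
            static double[] readNextBlockFromFile() { return null; }
            static void writeBlockToDevice(double[] b) { }
        }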

  • Problems viewing large tif files in preview

    I'm having trouble viewing large TIFF files in Preview. The TIFF files are on a CD-ROM; when I click a file to open it, it opens only the first 250 pages of the file. The file has approximately 3,000 pages. How can I get Preview to open the rest of the pages?
    Thanks for any suggestions.
    mac mini   Mac OS X (10.4.6)

    No trick; I didn't create the CD-ROM, but it has only 3 TIFF files with approximately 3,000 pages each, not 3,000 large TIFF files, plus several smaller PDF files, and those aren't giving me any problems.
    I don't know whether they're compressed, but I still can't get more than the first 250 pages to open, even after copying the file to my desktop. If anyone has any other ideas, I'd much appreciate it.
    mac mini   Mac OS X (10.4.6)

  • CMS crash with core files and multiple report output generation

    Happy new year to everyone,
    Our BO XI R3.1 SP6 FP2 environment has recently started behaving weirdly, triggering multiple outputs to users' inboxes and multiple email notifications from scheduled reports. We have also noticed the CMS crashing with core file generation (almost 4 GB) at the time of the multiple report output.
    Most of the time the CMS crashes and recycles itself. A few times the CMS service alone shut down.
    OS details: RHEL 5.5, 32 GB RAM, 8-core processor on each of the clustered nodes, Oracle 10gR2.4 CMS DB server, Oracle 11gR2.4 reporting DB server, and Oracle 11.1.0.6 client.
    2015/01/21 23:54:37.946|>=| | |28123|1534131088|{|||||||||||||||DBQueue::Read
    2015/01/21 23:54:37.946|==| | |28123|1496185744|
    |||||||||||||||(OracleStatement.cpp:156) Prepare: SQL: SELECT ObjectID,
    Version, LastModifyTime, CRC, Properties FROM CMS_InfoObjects6 WHERE ObjectID
    IN (1004050) ORDER BY ObjectID
    2015/01/21 23:54:37.946|==| | |28123|1496185744| ||||||||||||||(OracleStatement.cpp:183) Prepared statement Execute
    2015/01/21 23:54:37.965|==| | |28123|1496451984| |||||||||||||||SResourceSource::LoadString 50293
    2015/01/21 23:54:37.966|==| | |28123|1496451984| |||||||||||||||SResourceSource::LoadString Unknown exception in database thread
    2015/01/21 23:54:37.967|==| | |28123|1496451984| |||||||||||||||SResourceSource::LoadString 33007
    2015/01/21 23:54:37.967|==| | |28123|1496451984| |||||||||||||||SResourceSource::LoadString CMS is unstable and will shut down immediately. Reason: %1...
    2015/01/21 23:54:38.506|==| | |28123|1496185744| |||||||||||||||(OracleStatement.cpp:156) Prepare: SQL: SELECT ObjectID,
    Version, LastModifyTime, CRC, Properties FROM CMS_InfoObjects6 WHERE ObjectID IN (1009213) ORDER BY ObjectID
    2015/01/21 23:54:38.506|==| | |28123|1496185744| |||||||||||||||(OracleStatement.cpp:183) Prepared statement Execute
    2015/01/21 23:54:38.512|==| | |28123|1455592672| |||||||||||||||(sidaemon.cpp:549) SUNIXDaemon::run: server restart flag is 1..
    2015/01/21 23:54:38.513|==| | |28123|1455592672| |||||||||||||||(sidaemon.cpp:552) SUNIXDaemon::run: in abort ...
    2015/01/21 23:54:38.513|==| | |28123|1455592672| |||||||||||||||(sidaemon.cpp:555) SUNIXDaemon::run: doing the WithAbort case ...
    2015/01/21 23:54:38.520|==| | |28123|1496185744| |||||||||||||||(dbq.cpp:1357) DBQ: Time required to read 1 objects: 20.000000 ms
    Thank you,
    Karthik

    Hi Denis,
    I've been trying my best for the last few weeks to understand the core issue along with SAP, however it is still a mystery.
    > ulimit -a
    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 270335
    max locked memory       (kbytes, -l) 32
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 1024
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 10240
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 270335
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited
    Below are the observations from troubleshooting so far:
    1. The CMS breaks at a threshold of 3.9 GB.
    2. The CMS DB sits on a different Linux server than the BOE server.
    3. All core files were generated by the boe_cmsd process and are almost 4 GB in size (the same as the max threshold at which it breaks).
    4. A shell script which I've added on the BOE servers shows that the CMS DB is available/connecting at the time of the CMS crash.
    5. SAP analysed the core files and is suspicious of the lines below.
         #3  0x58687b80 in skgesigCrash ()
          from /opt/oracle/product/11.1.0/client_1/lib32/libclntsh.so
         #4  0x58687e0d in skgesig_sigactionHandler ()
    I'll continue troubleshooting with the hope of fixing it at the earliest.
    Thanks,
    Karthik

Maybe you are looking for

  • Delivery note of GR not appearing in Freight Invoice

    While doing GR I entered the Delivery Note. But while preparing the Invoice against the Freight Vendor, the delivery note does not appear in MIRO, whereas the delivery note does appear while preparing the Invoice against Goods/Service items. I want the delivery note to appear

  • IOS upgrade does it again.  It trashes my old calendar entries

    I have come to rely on the feature in the iOS upgrade that it will trash my old calendar entries. Upgrading to iOS 4 was no exception. It changed start and end times, repeat information, adds entries,combined multiple repeating entries into 1 even th

  • Best Setup for Lion Server Time Machine Backup with Drobo?

    I've been thinking about this a lot, yet I don't feel I have a good solution for this, so I'm going to throw it out to the community. I have a home server setup using a Mac Mini running Lion Server 10.7.2 with a Firewire 800 Drobo attached.  The Drob

  • Publishing web templates in static HTML

    Hi, When broadcasting web templates to HTML, must you publish them into the portal or can they simply reside on the BW server as static HTML pages and accessible via a URL? I ask because I am implementing a dashboard and the web templates should be p

  • Changing only column header color

    Hello, I changed the background color of the tabel to white. But column headers are also appearing as white. How can I change the color of column headers only. I tried several combinations...no luck. Thanks, Sunita.