Big SIC files?

Hello,
I am getting this error more and more often:
SMS Software Inventory Processor failed to process the file C:\Program Files (x86)\Microsoft Configuration Manager\inboxes\auth\sinv.box\JP98C7JM.SIC because it is larger than the defined maximum allowable size of 5000000.
Let me know what I should do. Decrease the information collected? But it is always the field that is not collected that management needs, and they always need it for yesterday...
Should I exclude the MPs (I have only 2) using http://technet.microsoft.com/en-us/library/hh691018.aspx ?
Should I keep several versions of the MOF file depending on the requests from management? If so, that will add some delay before the new MOF is compiled and then picked up by the next inventory, won't it?
Thanks,
DOm
System Center Operations Manager 2007 / System Center Configuration Manager 2007 R2 / Forefront Client Security / Forefront Identity Manager

Hello,
The size limit is already really big, "5,000,000"...
Apparently the classes involved are:
Start Group
    Name = "CCM Recently Used Applications"
    ID = 3174
    Class = "MICROSOFT|CCM_RECENTLY_USED_APPS|1.0"
    Pragma = "SMS:UPDATE"
Start Group
    Name = "Software Shortcut"
    ID = 2
    Class = "MICROSOFT|SOFTWARE_SHORTCUT|1.0"
    Pragma = "SMS:DELETE"
Thanks to Sherry for the tip: indeed, the big MIF files are from Citrix...
http://www.myitforum.com/forums/CCM_Recently_Used_Apps-Asset-Intelligence-Do-you-enable-it-m221190.aspx
Any other ideas?
Thanks,
Dom
System Center Operations Manager 2007 / System Center Configuration Manager 2007 R2 / Forefront Client Security / Forefront Identity Manager

Similar Messages

  • How to separate a big flat file

    Hi Everyone,
    I have a performance issue loading a very big flat file. I have a 200-million-record flat file and I am trying to load it via Oracle Data Integrator into SQL Server 2005. Oracle Data Integrator executes a bcp command like the one below:
    bcp CDWH_DW.dbo.Campaign_History_Outbound in "\\server\CDWH\FlatFile2Send\FILES\ailing_active.txt" -f"\\Server\Campaign\Campaign_Response_Outbound.fmt" -o"\\server\CDWH\FlatFile2Send\LOG\sony_mailing.log" -T -S"sqlserver" -C"RAW" -m 1
    If there is a problem with the data in this huge file, it is very difficult to find what is wrong, so I am trying to find a solution for this. Do you have a suggestion? I am thinking of separating this huge file into a lot of small pieces and loading them in a loop, but I am not sure how to split the big file into small pieces.
    Thank you for your help.
    Kind Regards
    Caner Sahan

    Hi Caner...
    I believe that is possible.
    Is there any column in the text file that can be used as a PK?
    What is the format of this text file: delimited or positional?
    Do you want this only for development, or will you keep it for production?
    Cezar Santos
    http://odiexperts.com

  • Slow extraction from big XML files with PL/SQL

    Hello,
    I have a performance problem with extracting attributes from big XML files. I tested with a size of ~30 MB.
    The XML file is the response of a web service. This response includes some metadata of a document and the document itself. The document is embedded inline as Base64. Here is an example of an XML file I want to analyse:
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
       <soap:Body>
          <ns2:GetDocumentByIDResponse xmlns:ns2="***">
             <ArchivedDocument>
                <ArchivedDocumentDescription version="1" currentVersion="true" documentClassName="Allgemeines Dokument" csbDocumentID="***">
                   <Metadata archiveDate="2013-08-01+02:00" documentID="123">
                      <Descriptor type="Integer" name="fachlicheId">
                      <Value>123</Value>
                      </Descriptor>
                      <Descriptor type="String" name="user">
                         <Value>***</Value>
                      </Descriptor>
                      <InternalDescriptor type="Date" ID="DocumentDate">
                         <Value>2013-08-01+02:00</Value>
                      </InternalDescriptor>
                      <!-- Here some more InternalDescriptor Nodes -->
                   </Metadata>
                   <RepresentationDescription default="true" description="Description" documentPartCount="1" mimeType="application/octet-stream">
                      <DocumentPartDescription fileName="20mb.test" mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" hashValue=""/>
                   </RepresentationDescription>
                </ArchivedDocumentDescription>
                <DocumentPart mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" representationNumber="0">
                   <Data fileName="20mb.test">
                      <BinaryData>
                        <!-- Here is the BASE64 converted document -->
                      </BinaryData>
                   </Data>
                </DocumentPart>
             </ArchivedDocument>
          </ns2:GetDocumentByIDResponse>
       </soap:Body>
    </soap:Envelope>
    Now I want to extract the filename and the Base64-encoded document from this XML response.
    For the extraction of the filename I use the following command:
    v_filename := apex_web_service.parse_xml(v_xml, '//ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName');
    For the extraction of the binary data I use the following command:
    v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
    My problem is the performance of this extraction. Here is a summary of the start and end times for the commands:
    Command:    v_filename_bcm := apex_web_service.parse_xml(v_xml, '//ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName');
    Start Time: 10.09.13 - 15:46:11,402668000
    End Time:   10.09.13 - 15:47:21,407895000
    Difference: 00:01:10,005227

    Command:    v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
    Start Time: 10.09.13 - 15:47:21,407895000
    End Time:   10.09.13 - 15:47:22,336786000
    Difference: 00:00:00,928891
    As you can see, the extraction of the filename is slower than the document extraction: for the extraction of the filename I need about 01:10 minutes.
    I wondered about this and started some tests.
    I tried using an exact, non-dynamic filename, so I ran these commands:
    v_filename := '20mb_1.test';
    v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
    Under these conditions the time for the document extraction soars. You can see this in the following table:
    Command:    v_filename_bcm := '20mb_1.test';
    Start Time: 10.09.13 - 16:02:33,212035000
    End Time:   10.09.13 - 16:02:33,212542000
    Difference: 00:00:00,000507

    Command:    v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
    Start Time: 10.09.13 - 16:02:33,212542000
    End Time:   10.09.13 - 16:03:40,342396000
    Difference: 00:01:07,129854
    So I'm looking for a faster extraction from the XML file. Do you have any ideas? If you need more information, please ask.
    Thank you,
    Matthias
    PS: I use Oracle 11.2.0.2.0.

    Although using an XML schema is good advice for an XML-centric application, I think it's a little overkill in this situation.
    Here are two approaches you can test :
    Using the DOM interface over your XMLType variable, for example :
    DECLARE
      v_xml    xmltype := xmltype('<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> 
           <soap:Body> 
              <ns2:GetDocumentByIDResponse xmlns:ns2="***"> 
                 <ArchivedDocument> 
                    <ArchivedDocumentDescription version="1" currentVersion="true" documentClassName="Allgemeines Dokument" csbDocumentID="***"> 
                       <Metadata archiveDate="2013-08-01+02:00" documentID="123"> 
                          <Descriptor type="Integer" name="fachlicheId"> 
                             <Value>123</Value> 
                          </Descriptor> 
                          <Descriptor type="String" name="user"> 
                             <Value>***</Value> 
                          </Descriptor> 
                          <InternalDescriptor type="Date" ID="DocumentDate"> 
                             <Value>2013-08-01+02:00</Value> 
                          </InternalDescriptor> 
                          <!-- Here some more InternalDescriptor Nodes --> 
                       </Metadata> 
                       <RepresentationDescription default="true" description="Description" documentPartCount="1" mimeType="application/octet-stream"> 
                          <DocumentPartDescription fileName="20mb.test" mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" hashValue=""/> 
                       </RepresentationDescription> 
                    </ArchivedDocumentDescription> 
                    <DocumentPart mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" representationNumber="0"> 
                       <Data fileName="20mb.test"> 
                          <BinaryData> 
                            ABC123 
                          </BinaryData> 
                       </Data> 
                    </DocumentPart> 
                 </ArchivedDocument> 
              </ns2:GetDocumentByIDResponse> 
           </soap:Body> 
        </soap:Envelope>');
      domDoc    dbms_xmldom.DOMDocument;
      docNode   dbms_xmldom.DOMNode;
      node      dbms_xmldom.DOMNode;
      nsmap     varchar2(2000) := 'xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns2="***"';
      xpath_pfx varchar2(2000) := '/soap:Envelope/soap:Body/ns2:GetDocumentByIDResponse/';
      istream   sys.utl_characterinputstream;
      buf       varchar2(32767);
      numRead   pls_integer := 1;
      filename       varchar2(30);
      base64clob     clob;
    BEGIN
      domDoc := dbms_xmldom.newDOMDocument(v_xml);
      docNode := dbms_xmldom.makeNode(domdoc);
      filename := dbms_xslprocessor.valueOf(
                    docNode
                  , xpath_pfx || 'ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName'
                  , nsmap
                  );
      node := dbms_xslprocessor.selectSingleNode(
                docNode
              , xpath_pfx || 'ArchivedDocument/DocumentPart/Data/BinaryData/text()'
              , nsmap
              );
      --create an input stream to read the node content :
      istream := dbms_xmldom.getNodeValueAsCharacterStream(node);
      dbms_lob.createtemporary(base64clob, false);
      -- read the content in 32k chunk and append data to the CLOB :
      loop
        istream.read(buf, numRead);
        exit when numRead = 0;
        dbms_lob.writeappend(base64clob, numRead, buf);
      end loop;
      -- free resources :
      istream.close();
      dbms_xmldom.freeDocument(domDoc);
    END;
    Using a temporary XMLType storage (binary XML) :
    create table tmp_xml of xmltype
    xmltype store as securefile binary xml;
    insert into tmp_xml values( v_xml );
    select x.*
    from tmp_xml t
       , xmltable(
           xmlnamespaces(
             'http://schemas.xmlsoap.org/soap/envelope/' as "soap"
            , '***' as "ns2"
            )
         , '/soap:Envelope/soap:Body/ns2:GetDocumentByIDResponse/ArchivedDocument/DocumentPart/Data'
           passing t.object_value
           columns filename    varchar2(30) path '@fileName'
                 , base64clob  clob         path 'BinaryData'
          ) x;

  • Best strategy to upload a big data file and then insert its content into the DB

    Hi,
    Here's our requirement. We have a web business application developed with JSF 1.2 and Java EE 6, WebLogic as the application server, and Oracle for the back-end data tier. We need to upload big data files (80 to 100 MB) from a web page and persist the content in database tables.
    What's the best way, in terms of performance, to implement this use case? Once the file is uploaded to the server, a command button on the web page triggers a JSF controller action to save the data in the database.
    Currently we plan to keep the content of the HTTP request in memory and call an insert for each line of the file, but I think that is bad and not scalable.
    Is it better to write the file to the server's disk and then use multiple threads to send the lines to the database? How do you use multithreading in a JSF managed bean?
    Thanks
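    On the Oracle side, one set-based alternative to inserting line by line is to write the uploaded file to a directory the database can read and load it through an external table. This is only a sketch under assumptions not stated in the thread: the directory object UPLOAD_DIR, the file name upload.csv, and the stage/target table and column names are all hypothetical.
    CREATE TABLE upload_stage_ext (
      customer_id NUMBER,
      event_date  VARCHAR2(20),
      amount      NUMBER
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY upload_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ';'
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('upload.csv')
    )
    REJECT LIMIT UNLIMITED;
    -- one set-based statement instead of one INSERT per line
    INSERT /*+ APPEND */ INTO target_table (customer_id, event_date, amount)
    SELECT customer_id, event_date, amount
    FROM   upload_stage_ext;
    With this approach the JSF action only moves the file and runs a single INSERT ... SELECT, so the managed bean would not need its own multithreading.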


  • Can't import big video file from iPhone to MacBook.

    I have a big video file I took on my iPhone – 4.9 GB – which neither iPhoto nor Image Capture will let me import. Any suggestions on how I can import this onto my MacBook?

    Perhaps try one of the apps that transfer photos and videos over Wi-Fi between phone and computer. There are several in the App Store.

  • What's the best way to get a big text file into a CLOB variable?

    Hi,
    I have a very primitive type of question.
    I have a big text file, say 30 MB, in a Unix directory (Solaris). I want to get the whole text of that file into a CLOB variable.
    I saw the procedures dbms_lob.loadfromfile/loadclobfromfile. In both cases, according to the documentation, the target into which the data is collected will be a BLOB and not a CLOB, so I would have to convert that BLOB into a CLOB.
    If I want to avoid all this conversion, what's the best way to get the text from a file into a CLOB variable?
    Please suggest.
    Regards

    In addition, LoadFromFile is overloaded to handle both BLOB and CLOB:
    PROCEDURE LOADFROMFILE
    Argument Name                  Type                    In/Out Default?
    DEST_LOB                       BLOB                    IN/OUT
    SRC_LOB                        BINARY FILE LOB         IN
    AMOUNT                         NUMBER(38)              IN
    DEST_OFFSET                    NUMBER(38)              IN     DEFAULT
    SRC_OFFSET                     NUMBER(38)              IN     DEFAULT
    PROCEDURE LOADFROMFILE
    Argument Name                  Type                    In/Out Default?
    DEST_LOB                       CLOB                    IN/OUT
    SRC_LOB                        BINARY FILE LOB         IN
    AMOUNT                         NUMBER(38)              IN
    DEST_OFFSET                    NUMBER(38)              IN     DEFAULT
    SRC_OFFSET                     NUMBER(38)              IN     DEFAULT
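    For the original goal (a text file straight into a CLOB), note that DBMS_LOB.LOADCLOBFROMFILE targets a CLOB directly, so no BLOB-to-CLOB conversion is needed. A minimal sketch, assuming a directory object TEXT_DIR pointing at the Unix directory and a hypothetical file name big_text.txt:
    DECLARE
      l_clob        CLOB;
      l_bfile       BFILE := BFILENAME('TEXT_DIR', 'big_text.txt');
      l_dest_offset INTEGER := 1;
      l_src_offset  INTEGER := 1;
      l_lang_ctx    INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
      l_warning     INTEGER;
    BEGIN
      DBMS_LOB.CREATETEMPORARY(l_clob, TRUE);
      DBMS_LOB.FILEOPEN(l_bfile, DBMS_LOB.FILE_READONLY);
      -- load the whole file; character-set conversion is driven by bfile_csid
      DBMS_LOB.LOADCLOBFROMFILE(
        dest_lob     => l_clob,
        src_bfile    => l_bfile,
        amount       => DBMS_LOB.LOBMAXSIZE,
        dest_offset  => l_dest_offset,
        src_offset   => l_src_offset,
        bfile_csid   => DBMS_LOB.DEFAULT_CSID,
        lang_context => l_lang_ctx,
        warning      => l_warning
      );
      DBMS_LOB.FILECLOSE(l_bfile);
      -- l_clob now holds the file contents
    END;
    /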

  • How can I transfer a big .pdf file, e.g. 9 MB, from my Macintosh Performa 5200 with OS 8.1 to a USB flash drive?

    How can I transfer a big .pdf file, e.g. 9 MB, from my Macintosh Performa 5200 with OS 8.1 to a USB flash drive? E.g., is there any adapter available to connect SCSI to USB? Or is it better to use compression software and transfer it to ten 3.5" floppy disks?
    Thank you
    Emanuel

    Hello Emanuel,
    The Performa 5200CD did not have built-in Ethernet as standard, so unless an Ethernet card (or an external SCSI or LocalTalk-to-Ethernet adapter) has been added, that method would not be available in this case.
    Your suggestion involving compression software (such as an appropriate version of StuffIt) with segmenting capabilities could of course be one alternative.
    If you have an internal or external modem for the Performa, another way could be to use the telephone lines for transfers. A communications program would have to be used on both sides (for example, ZTerm or the communications section of ClarisWorks on the Performa).
    It is even possible to connect two serial modems directly. A simple line simulator (in principle, a 9 V battery in series with a 680 ohm resistor in one of the leads in an RJ-11 to RJ-11 cable), which can be built in a couple of minutes, is sometimes needed. Do NOT use a line simulator for units connected to the public telephone network.
    Yet another solution could be a null-modem transfer to a PC with a (DB-9M) serial port. This would require a null-modem cable (which can be made by combining a Macintosh modem cable (MiniDIN-8M to DB-25M) with a normal PC-style (DB-25F to DB-9F) null-modem cable). HyperTerminal or another communications program can be used on the PC.
    What do you have to work with (other computers/models/platforms)? Is this a one-time transfer, or do you plan to send additional files later? Is the intention to continue to use the Performa 5200?
    Jan

  • Advice needed: how to solve an out-of-memory problem (or how to work with big CSV files)

    Hello:)
    I'm in trouble: I have a big CSV file (over 5 GB of web-analytics data) and 64-bit Excel (and 6 GB of RAM).
    I can't load the file into the data model because of its size; I get an "out of memory" error in Power Query.
    This is the first time I have encountered such a problem.
    What options do I have for working with such a file? Increase the memory in my computer? Would that solve the problem? How much do I need to work with a 6 GB CSV?
    Or maybe I can upload my data somewhere in Azure and work with it there?
    So the question is: is there any way to deal with big files using Power Query, or do I need to become a developer and learn SQL or other languages?
    Thanks in advance.
    Max

    Hi Miguel!
    Thanks for your answer.
    I've tried to load this file on a virtual PC from the Azure cloud with this configuration, and I have increased the memory limit in the Power Query settings, but the problem is still the same.
    What am I doing wrong?

  • Big XML Files

    Hello,
    Does anybody have experience with large XML documents in Oracle XML DB? I mean bigger than the purchaseOrder example, for example a whole book. I am looking for a storage method for documents at least 10 MB large.
    Thanks
    Krisztian

    Sometimes the chapters are as big as a whole book (ca. 200-300 pages, legislation commentaries). What I'm looking for is a way to store big XML files and access them flexibly at different levels. E.g., a law can have 50 articles and sometimes even 2,400 articles. If I need to share the editing work, an editor can get the whole document, but sometimes only fragments. Even better would be if more than one editor could work on one document, on different fragments. But the fragments must be created dynamically.
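    As an illustration of fragment-level access on a reasonably recent release, a commentary could be stored in an XMLType table and single articles pulled out with XMLQuery. This is only a sketch; the table, column, and element names (law_documents, doc, law, article) are hypothetical:
    -- return one article of a stored law document as a standalone fragment
    SELECT XMLQuery('/law/article[@no = $n]'
                    PASSING t.doc, 42 AS "n"
                    RETURNING CONTENT) AS article_fragment
    FROM   law_documents t
    WHERE  t.doc_id = 1;
    Updating a single fragment in place (so that several editors can touch different articles) would similarly go through XML DB's update functions rather than replacing the whole document.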

  • VM crashes with big class files generated from JSPs

    Hi,
    When calling certain JSP pages with WebLogic Server 5.1 (SP 6), the HotSpot Virtual Machine (JDK 1.3) crashes with a core dump.
    Using WebLogic as the JSP engine, every JSP page produces one java file, which javac compiles into one class file. Each generated java file consists of just one method: _jspService(...){...}. One method is allowed to be at most 64 K (the dynamic part 32 K at maximum). As we include other JSP pages and components and use tag libraries, the WebLogic JSP engine generates a very big java file (more than 1 MB). Javac compiles this to a class file whose single method exceeds the 64 K limit. Since javac does not reject the class file with the too-big method, the virtual machine crashes.
    Running the same JSP page on Windows NT 4 with WebLogic Server 5.1 (Service Pack 6) and JDK 1.3, BUT with the java option -classic, works.
    Unfortunately it seems that there is no -classic option for java on Solaris for JDK 1.3.
    Using JDK 1.2 (JDK_1.2.2_05a) on Solaris or the jikes compiler from Jakarta causes an exception instead of a core dump but still does not work.
    Using Solaris jdk1.2.2_05a, the same page request results in the following exception:
    Tue Nov 21 09:08:16 CET 2000:<I> <WebAppServletContext-maxblue> Generated java file:
    /opt/tadevw/maxblue/weblogic/maxblue_cluster/maxblue_server/public_html/WEB-INF/_tmp_war/jsp_servlet/_portfolio/_PortfolioMyportfolio.java
    Tue Nov 21 09:08:24 CET 2000:<E> <WebAppServletContext-maxblue> Servlet failed with Exception
    java.lang.VerifyError: (class: jsp_servlet/_portfolio/_PortfolioMyportfolio, method: _jspService signature: (Ljavax/servlet/http/HttpServletRequest;Ljavax/servlet/http/HttpServletResponse;)V) Illegal target of jump or branch
    Is there a way to make the WebLogic JSP engine create/generate more than one method (without reengineering the source code)?
    Regards,
    Boris

    Thank you! I had no idea. But what is a FAT32 drive? Is it the workbench in the Mac that is used for transferring between the external hard drives? Is there any way around this, then?
    I thought that in these HD home-movie days this was an easy thing dealt with every day. I still need to understand what practical solutions I can find. The files are .mov (QuickTime) files.
    It can't be the case that files larger than 4 GB are locked on the hard drive forever?
    Sverre

  • Building big XML file from scratch - Urgent

    Oracle 8.1.7.3 on the Windows NT platform.
    What is the best way to generate a fairly big XML file from multiple tables?
    I have information stored in many relational tables from which I need to generate an XML flat file, stored either in a CLOB field or in a text file in a system directory. This XML file will then be used
    as input to generate a report, either in HTML using XSLT, in a PDF file using Apache FOP, or in an MS Excel file using SoftArtisan ExcelWriter.
    My XML file has many levels in its structure: it is composed of one root element with 2 children, each child has 3 children, one of these 3 children has 2 children, and so on. In all there are more or less 10 nested levels.
    To generate this XML file, I tried to use XSU for PL/SQL with the nested cursor() feature, plus XSLT to transform the raw XML file to my requirements.
    But obviously there are some limitations with this method. In particular, if the inner cursor returns an empty set I get an 'exhausted resultset' Java error... A TAR confirmed that limitation.
    So I had to give up this method and use basic nested PL/SQL cursors. Each fetched row is inserted into a table (varchar2) with a sequence number, so that with a cursor like 'select xml_chunk from my_table order by sequence' I get the whole XML file, which I save either in a flat file or in a CLOB (using the append method).
    This method works fine, but it takes time and it's not flexible at all, as I have to construct each XML tag. I guess this way of proceeding is not the most efficient...
    Using the DOM method won't be better, as I still need a PL/SQL cursor to select each level of my XML structure, and in addition I would almost surely run into memory problems.
    So what solutions would you suggest to generate this XML file? It must be quite fast. The XML file can be up to 2 MB big. My system is actually a kind of on-the-fly report generation: the XML file needs to be created with up-to-date data many times during working hours!
    Quick answers or suggestions would be greatly appreciated. It's very urgent!!
    Thanks

    It looks like the best way is to use SAX processing for your application. Do you know the DTD or XML schema of your output XML document?
    Would you send me sample code for the method "use XSU for PL/SQL with the nested cursor() feature plus XSLT to transform the raw XML file", so that I can reproduce the problem?
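    For reference, a nested-cursor XSU call of the kind described above would look roughly like this. The dept/emp tables are hypothetical stand-ins for the real schema; the empty-inner-cursor limitation mentioned in the question appears when the CURSOR() subquery returns no rows:
    DECLARE
      l_xml CLOB;
    BEGIN
      -- XSU renders each nested CURSOR() expression as a nested ROWSET/ROW group
      l_xml := DBMS_XMLQUERY.GETXML(
                 'SELECT d.deptno,
                         d.dname,
                         CURSOR(SELECT e.empno, e.ename
                                FROM   emp e
                                WHERE  e.deptno = d.deptno) AS emp_list
                  FROM   dept d');
      DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(l_xml, 200, 1));
    END;
    /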

  • One big GIF file vs. multiple GIF files

    Hi,
    I have a question about performance:
    Is it better to have one big GIF file with all tiles, sprites, and all other graphics resources for a game, or is it better to have, for example, a medium GIF file holding all tiles and a separate GIF file for each sprite?
    In my opinion the second way is better, because a big GIF file is difficult to handle: I have to crop individual tiles and frames for the sprite animation, and the image gets messy.
    But anyway, I just wanted to hear your opinions on this issue.
    Tks

    It totally depends on the number of colors in your images.
    Remember a GIF is limited to 256 colors; if you need more than this you will obviously have to split your tiles & animation frames across multiple GIFs.
    Of course, the better solution is to use PNGs.
    There is also the consideration of acceleration: large images will not be cached in VRAM. If you look at the assets of commercial titles, you will find the best size to go for is either 256x256 or 512x512.
    You should perform the cropping as you render the images, as it will be no slower than precropping them, and will take a good deal less vram. (a 20x20 image when cached in vram will be expanded to 32x32)

  • Big FMB files

    We have a strange case with a big FMB file (approximately 3 MB).
    When I open the file in Forms Builder and save it to another directory, the file size changes to approximately 2.5 MB.
    After compiling and resaving the file, it returns to its original size of 3 MB.
    Our programmer told me that sometimes he loses some triggers from the FMB while saving the file to another directory.
    What could the problem be?
    P.S. We are talking about v. 6i of Forms Builder.
    Thanks!

    The varying size of your form is nothing to be concerned about; see:
    FMB size shrinks dramatically
    "Our programmer told me that sometimes he loses some triggers from the FMB while saving the file to another directory." Are those referenced triggers? Did you/he investigate a little? It's a little bit hard to diagnose a problem when the only information you have is that it exists (you know what? My database sometimes throws a no_data_found. Do you know why?); it makes solving the problem more like a quiz show than an investigation...
    cheers

  • How quickly parse big XML file (60 MB) ???

    How do I quickly parse a big XML file (60 MB)?

    I assume you mean load it into XML DB? Fundamentally, your document is about at the upper limit for 9.2.x. I would strongly recommend trying to break it up into a set of smaller documents using a SAX parser before loading it into XML DB. In 10g it should be possible to load much bigger documents than this.
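    For completeness, once the document (or the smaller pieces produced by the SAX split) is ready on disk, it can be loaded into an XMLType table from PL/SQL. This is only a sketch: the directory object XML_DIR and the file name are hypothetical, and the appropriate storage clause for the XMLType table depends on the release in use:
    CREATE TABLE big_xml_docs OF XMLTYPE;
    DECLARE
      l_bfile BFILE := BFILENAME('XML_DIR', 'big_document.xml');
    BEGIN
      -- the XMLType(BFILE, csid) constructor parses the file from disk
      INSERT INTO big_xml_docs
      VALUES (XMLTYPE(l_bfile, NLS_CHARSET_ID('AL32UTF8')));
      COMMIT;
    END;
    /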

  • Error: code too large for try statement, when compiling a big java file.

    Hi,
    I have a big Java file (around 16,000 lines). When compiling it, I got the following error message:
    MyMain.java:15233: code too large for try statement
    } catch ( Throwable t ) {
    In MyMain.java, I just repeat the following statements about 1,000 times:
    try {
        if ( year >= 2002 ) {
            System.out.println( "year: Evaluation version is not valid" );
        } else {
            System.out.println( "year: Evaluation version is still valid" );
        }
    } catch ( Throwable t ) {
        if ( year >= 2002 ) {
            System.out.println( "year: Evaluation version is not valid" );
        } else {
            System.out.println( "year: Evaluation version is still valid" );
        }
    }
    I tried the 1.3 and 1.4 javac compilers; both gave this error.
    How can I make the compiler compile this code?
    Thanks,

    Hi,
    I have a big java file ( around 16000 lines). When
    compiling it, I got following error message:
    MyMain.java:15233: code too large for try statement
    } catch ( Throwable t ) {
    I tried 1.3 and 1.4 javac compiler, there was some
    error.
    How to make compiler to compile this code?
    You don't. Each method has an absolute limit on the number of byte codes. You have reached that limit. The limit is part of the specification and JVMs will refuse to run classes that exceed the limit. So even if you could compile it, it wouldn't run.
    It is quite common for code generators to generate large monolithic blocks of code. Presumably this is how you got to this spot. Modify the code generator to break it into smaller blocks.
    If you did it manually then you did it wrong. And you will have to manually break it into smaller blocks. (And re-examine your design since it is probably wrong.)
