Creating large XML files

I have been working with a DOM parser to create XML files, but as the size of the XML file increases, the memory allocated to the JVM gets consumed at a high rate. So I need to find another approach that makes my life easier. A SAX parser is not a solution for me, as it makes document manipulation awkward and involves a lot of complex coding in callback functions.
Thank you in advance

vivek_kumar_kohli wrote:
I have been working with a DOM parser to create XML files
Eh? The parser is for reading XML files, not creating them. You can create them from a DOM using the Transformer mechanism, but that's not about the parser.
but as the size of the XML file increases, the memory allocated to the JVM gets consumed at a high rate. So I need to find another approach that makes my life easier. A SAX parser is not a solution for me, as it makes document manipulation awkward and involves a lot of complex coding in callback functions.
It's the nature of the beast: either you build the whole document tree in memory, in which case you need the whole thing stored and lots of RAM, or you process it looking at a small part at a time, which is going to be more complicated.
Using SAX isn't that complicated; basically I just use a different ContentHandler for each significant element type. You work with a stack of content handlers, pushing a new one onto the stack when a tag opens and popping it off when the element closes.
This kind of code replaces code which walks the DOM tree, and it's really not all that different in overall structure.
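A minimal sketch of that handler-stack idea, assuming made-up element names and handler classes (an "order" element and an OrderHandler); it only shows the dispatch plumbing, not a complete application:

import java.util.ArrayDeque;
import java.util.Deque;
import org.xml.sax.Attributes;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;

// Dispatching handler: keeps a stack of per-element handlers and forwards
// each SAX event to whichever handler is currently on top of the stack.
public class DispatchingHandler extends DefaultHandler {
    private final Deque<DefaultHandler> stack = new ArrayDeque<DefaultHandler>();

    public DispatchingHandler(DefaultHandler rootHandler) {
        stack.push(rootHandler);
    }

    public void startElement(String uri, String local, String qName, Attributes atts)
            throws SAXException {
        if ("order".equals(qName)) {          // hypothetical "significant" element
            stack.push(new OrderHandler());   // its own handler takes over
        }
        stack.peek().startElement(uri, local, qName, atts);
    }

    public void characters(char[] ch, int start, int length) throws SAXException {
        stack.peek().characters(ch, start, length);
    }

    public void endElement(String uri, String local, String qName) throws SAXException {
        stack.peek().endElement(uri, local, qName);
        if ("order".equals(qName)) {
            stack.pop();                      // element closed, discard its handler
        }
    }

    // Placeholder handler for one element type; real code would collect data here.
    static class OrderHandler extends DefaultHandler { }
}

The dispatcher is what gets registered with the SAXParser; each per-element handler stays small, which is what keeps the callback code manageable.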
Outputting XML isn't that hard to do with simple println calls.
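And a rough sketch of that kind of plain println output, with made-up element names and a tiny escaping helper; only the row being written is ever held in memory:

import java.io.PrintWriter;

// Write a large document element by element with ordinary println calls.
public class PrintlnXmlWriter {
    public static void main(String[] args) throws Exception {
        PrintWriter out = new PrintWriter("out.xml", "UTF-8");
        out.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
        out.println("<rows>");
        for (int i = 0; i < 1000000; i++) {
            out.println("  <row id=\"" + i + "\">" + escape("value " + i) + "</row>");
        }
        out.println("</rows>");
        out.close();
    }

    // Minimal escaping of the characters XML treats specially in text content.
    static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }
}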

Similar Messages

  • Xerces: Creating large XML docs

    Hi
    I am using the Xerces DOM parser to create large XML files. However, I need to flush the output every now and then to a file since I don't want the memory to keep growing. I tried doing the following:
    _domWriter->writeNode(fileFormatTarget, domDocument);
    fileFormatTarget->flush();
    But if I do a flush many times, the whole XML file is written again to the file.
    Is there a way to keep flushing the document to the file and releasing the memory after each flush?
    I tried:
    _domWriter->release()
    But I can't use the DOMWriter any longer.
    Thanks
    Bijal

    Bijal,
    "Bijal Vora" <[email protected]> wrote in message news:19360326.1100183759055.JavaMail.root@jserv5...
    XML in general and DOM in particular is not suitable for processing large amounts of data. For your case the only feasible option could be using a custom SAX handler, IMHO.
    Regards,
    Slava Imeshev

  • How to set SAXParser at command-line interface to create a large XML file

    Hi,
    I am trying to create a large XML file (more than 50 MB) by selecting from an Oracle database, but it failed with an "out of memory" error. According to the "Oracle XML Developer Guide", we should use SAXParser for parsing a large XML file, but there is no example showing how to set SAXParser at the command line.
    Following is what I use to get xml files. It works only when the file is small.
    java OracleXML getXML -DateFormat -withDTD -rowsetTag PO_HDR -conn
    "jdbc:oracle:oci8:@server_name" -user "ID/password" "select * from table_name"
    When I set SAXParser in the way below,
    java oracle.xml.parser.v2.SAXParser OracleXML getXML -DateFormat -withDTD -rowsetTag PO_HDR -conn
    "jdbc:oracle:oci8:@server_name" -user "ID/password" "select * from table_name"
    it failed with the error message: "In class oracle.xml.parser.v2.SAXParser: void main(String argv[]) is not defined"
    Does anyone know how to solve the problem? I'd appreciate your help very much.
    Yi

    Here are my ideas:
    Register the XML schema.
    Using xmldom, generate the desired XML output and return it as XMLType.
    Then you can use something like this to check:
    declare
       xmldoc xmltype;
    begin
       -- populate xmldoc from your xmldom function
       -- validate against the registered XML schema
       if xmldoc.isSchemaValid(schema_url, root_element) = 1 then
          null; -- valid against the schema
       else
          null; -- invalid
       end if;
    end;

  • Is there a way to import large XML files into HANA efficiently are their any data services provided to do this?

    1. Is there a way to import large XML files into HANA efficiently?
    2. Will it process it node by node or the entire file at a time?
    3. Are there any data services provided to do this?
    This is for a project use case; I also have a requirement to process bulk XML files. Please suggest how to accomplish this task.

    Hi Patrick,
         I am addressing a similar issue: getting data from huge XMLs into HANA.
    Using OData services, can we handle huge data (i.e. create the schema / load into HANA) on the fly?
    In my scenario,
    I get a folder of different complex XML files which are to be loaded into the HANA database.
    Then I have to transform and cleanse the data.
    Can I use OData services to transform and cleanse the data?
    If so, how can I create OData services dynamically?
    Any help is highly appreciated.
    Thank you.
    Regards,
    Alekhya

  • Performance Problem in parsing large XML file (15MB)

    Hi,
    I'm trying to parse a large XML file (15 MB) and am facing a clear performance problem. A simple XML validation using the following code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobFromFile(
      tempCLOB,
      targetFile,
      DBMS_LOB.getLength(targetFile),
      dest_offset,
      src_offset,
      nls_charset_id(CONSTANT_CHARSET),
      lang_context,
      conv_warning
    );
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    p_xml_document.schemaValidate();
    is taking 30 minutes on an HP-UX machine (4 GB RAM, 2 CPUs; Oracle version 9.2.0.4).
    Please explain what could be going wrong.
    Thanks In Advance,
    Vineet

    Thanks Mark,
    I'll open a TAR and also upload the schema and instance XML.
    If I'm not changing the track too much :-) one more thing in continuation:
    If I skip the schema validation step and directly insert the instance document into a schema-linked XMLType table, what does Oracle XDB do in such a case?
    I'm getting a severe performance hit here too... the same file as above takes almost 40 minutes to insert.
    code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobFromFile(
      tempCLOB,
      targetFile,
      DBMS_LOB.getLength(targetFile),
      dest_offset,
      src_offset,
      nls_charset_id(CONSTANT_CHARSET),
      lang_context,
      conv_warning
    );
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    -- p_xml_document.schemaValidate();
    insert into INCOMING_XML values(p_xml_document);
    Here table INCOMING_XML is :
    TABLE of SYS.XMLTYPE(XMLSchema "http://INCOMING_XML.xsd" Element "MatchingResponse") STORAGE Object-relational TYPE "XDBTYPE_MATCHING_RESPONSE"
    This table and type XDBTYPE_MATCHING_RESPONSE were created using the mapping provided in the registered XML Schema.
    Thanks,
    Vineet

  • Query in a large xml file

    Hello,
    I'm trying to work with very large xml files which are created from csv files. These files may be very large - up to 1 GB! Until now I have managed to do several validations on these big xml files, and the only thing that works for me is the SAX parser; DOM is out of the question because it fills up memory.
    My next task is to do queries on these files, smth like:
    select field1,field2 from file.xml
    where field3 = 'A'
    and (fileld4>'B' or field1='C')
    order by field2.
    I searched the net to find out how to run queries on xml files (since I have never done queries on xml before), but I couldn't find which "query language" is best for large files. If I use XPath (XSLT), will that not cause me memory problems, because XSLT represents the file as a memory object?
    My idea is to parse the file with SAX, check every row against the where condition, and then write it immediately to a result xml file (a sketch of this idea follows the sample below). But validating the where statement can be very complicated without using some tool. Also, the order by statement is another problematic issue.
    Does anyone have some more intelligent ideas about how I can do this? Please help! :(
    The xml file looks like this:
    <doc>
    <row id ="1">
    <column id="1" name="column1">value</column>
    <column id="N" name="columnN">value</column>
    </row>
    <row id ="M">
    <column id="1" name="column1">value</column>
    <column id="N" name="columnN">value</column>
    </row>
    </doc>
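    A rough sketch of that SAX filtering idea against the sample layout above (the mapping of field1..field4 onto column1..column4 and the literal WHERE test are placeholders); matching rows are streamed straight to the result file, and the ORDER BY would still need a separate sort pass over the result:

    import java.io.File;
    import java.io.FileWriter;
    import java.io.PrintWriter;
    import java.util.HashMap;
    import java.util.Map;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    // Streams the <row> elements, keeps only the current row's columns in memory,
    // and writes a row to the result file when it passes the WHERE test.
    public class RowFilter extends DefaultHandler {
        private final PrintWriter out;
        private final Map<String, String> row = new HashMap<String, String>();
        private final StringBuilder text = new StringBuilder();
        private String currentColumn;

        RowFilter(PrintWriter out) { this.out = out; }

        public void startElement(String uri, String local, String qName, Attributes atts) {
            if ("row".equals(qName)) {
                row.clear();
            } else if ("column".equals(qName)) {
                currentColumn = atts.getValue("name");
                text.setLength(0);
            }
        }

        public void characters(char[] ch, int start, int length) {
            text.append(ch, start, length);
        }

        public void endElement(String uri, String local, String qName) {
            if ("column".equals(qName)) {
                row.put(currentColumn, text.toString());
            } else if ("row".equals(qName) && matches(row)) {
                // Project the wanted fields and write the matching row immediately.
                out.println("<row><field1>" + row.get("column1") + "</field1>"
                        + "<field2>" + row.get("column2") + "</field2></row>");
            }
        }

        // Placeholder WHERE clause: field3 = 'A' AND (field4 > 'B' OR field1 = 'C').
        private static boolean matches(Map<String, String> row) {
            String f1 = row.get("column1"), f3 = row.get("column3"), f4 = row.get("column4");
            return "A".equals(f3) && ("C".equals(f1) || (f4 != null && f4.compareTo("B") > 0));
        }

        public static void main(String[] args) throws Exception {
            PrintWriter out = new PrintWriter(new FileWriter("result.xml"));
            out.println("<result>");
            SAXParserFactory.newInstance().newSAXParser()
                    .parse(new File("file.xml"), new RowFilter(out));
            out.println("</result>");
            out.close();
        }
    }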

    Hi all,
    Thank you very much for your replies.
    First, Saxon didn't work because it uses an in-memory parser, and that is what I was trying to avoid.
    A different database is also out of the question, because the customer insists on XML, and also there are some files that can never be converted to a database table, because eventually, with some transformations, they are changed and are no longer completely like the standard csv format.
    I think that maybe http://exist.sourceforge.net is the right solution for me, but I will probably try it in the next version of my project.
    For now I have managed to build the project with only SAXParser and a lot of back-end programming, and it works OK, although it was very hard to make and will be harder to maintain, so I will try to look at the eXist project.
    Thanks everyone for the help.

  • How to parse large xml file

    I need to parse large xml file which contains following tag. The size of the file is upto 10MB-50MB or more.
    <departments>
      <department>
        <a_depart id="124">
          <b_depart id="Bss_253">
            <bss_depart id="253">
              <attributes>
                <name_one>abc</name_one>
              </attributes>
            </bss_depart>
          </b_depart>
        </a_depart>
      </department>
      <department>
        <a_depart id="124">
          <b_depart id="Bss_254">
            <mss_depart id="253">
              <attributes>
                <name_one>abc</name_one>
                <name_two>xyz</name_two>
              </attributes>
            </mss_depart>
          </b_depart>
        </a_depart>
      </department>
      <department>
        <a_depart id="124">
          <b_depart id="Bss_254">
            <mss_depart id="255">
              <attributes>
                <name_one>abc</name_one>
                <name_two>xyz</name_two>
              </attributes>
            </mss_depart>
          </b_depart>
        </a_depart>
      </department>
      <department>
        <a_depart id="125">
          <b_depart id="Bss_254">
            <mss_depart id="253">
              <attributes>
                <name_one>abc</name_one>
                <name_two>xyz</name_two>
              </attributes>
            </mss_depart>
          </b_depart>
        </a_depart>
      </department>
    </departments>
    I want to get the information from that xml file, like mss_depart id=233, building an XPath dynamically for every id and loading
    it using dom4j, which is very, very slow.
    Is there any other solution, reading the data using a SAX parser only?
    I want to execute the XPath / retrieve the data in the following way:
    //a_depart/@id ------> all the ids of a_depart tags if it returns 3 values say 123,124,125
    after that i want to execute
    //a_depart[@id='123']/b_depart/@id and so on, to retrieve the values at all the levels...
         I am executing the following XPath for every unique id at all levels:
         List l = doc.selectNodes(xPathForID);
         List l1 = doc.selectNodes(xPathForAttributes+attributes.get(j)+"/text()");
    But it is very slow and takes a lot of time.
    Is there any other way to solve this problem? If there is, please mail me; it is urgent.
    I am using jdk1.4 and jdk1.5.
    Is there any support in jdk1.5 for a SAX parser to execute XPath directly, without using dom4j?
    Thanks in advance....

    I doubt you will find a preexisting solution to your problem.
    SAX is usually recommended for processing big files (where "big" is undefined). It works on big files by avoiding the messy problem of storing the data -- that is left as an exercise to you.
    DOM (and its variants) works by building a Document object as the head of the tree of objects for the entire contents. With DOM, you can then use XPath, because there is something to search that is already in memory. To use XPath, you seem to have two choices: build a DOM-ish tree, or find an XPath processor (I'm not sure if one exists) that can process the XML file directly -- but that will be slow, since you are looking for "all" occurrences of an attribute, which means reading the entire file each time.
    It might be worth exploring a hybrid approach -- use SAX to get some information, and build your own objects to store the data. Maybe a HashMap as the main index. But that will keep you from using XPath, since you do not have the data structures it expects.
    A third alternative would be to look at JAXB. It builds Java code from a Schema of your data, and then when you import the data it creates the necessary objects and fills in the values. But I don't think XPath will work there either.
    Dave Patterson
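    As an illustration of that hybrid SAX-plus-HashMap idea against the department sample above (assuming a reasonably current JDK for the generics), one pass can index the ids by level so that no XPath has to rescan the file afterwards:

    import java.io.File;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    // One SAX pass: remember which b_depart ids appear under each a_depart id.
    public class DepartmentIndexer extends DefaultHandler {
        // a_depart id -> b_depart ids found underneath it
        final Map<String, List<String>> index = new HashMap<String, List<String>>();
        private String currentADepart;

        public void startElement(String uri, String local, String qName, Attributes atts) {
            if ("a_depart".equals(qName)) {
                currentADepart = atts.getValue("id");
                if (!index.containsKey(currentADepart)) {
                    index.put(currentADepart, new ArrayList<String>());
                }
            } else if ("b_depart".equals(qName) && currentADepart != null) {
                index.get(currentADepart).add(atts.getValue("id"));
            }
        }

        public static void main(String[] args) throws Exception {
            DepartmentIndexer handler = new DepartmentIndexer();
            SAXParserFactory.newInstance().newSAXParser()
                    .parse(new File("departments.xml"), handler);
            System.out.println(handler.index);   // e.g. {124=[Bss_253, Bss_254, ...], 125=[...]}
        }
    }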

  • How to insert large xml file to XMLType column?

    Hi,
    I have a table with one column of type XMLType (binary XML storage option and free-text indexing). When I try to insert a large XML document, around 8 KB or more, I get the error ORA-01704: string literal too long.
    Insert into TEST values(XMLTYPE('xml HERE'));
    How do I insert large XML values into an XMLType column?
    Regards,
    Sprightee

    For a large XML file, you basically have two options - you can load the string directly as an XMLType, or you can load the string as a CLOB and cast it on the database side to an XMLType.
    If you decide to load the XML as XmlType client-side, then you may be interested to know that versions of Oracle after 11.2.0.2 support the JDBC 4.0 SQLXML standard. See the JDBC driver release documentation here:
    http://docs.oracle.com/cd/E18283_01/java.112/e16548/jdbcvers.htm#BABGHBCC
    If you want to load as a CLOB, then you'll need to use PreparedStatement's setClob() method, or allocate an oracle.sql.clob object.
    For versions before 11.2.0.2, you can create an XMLType with a constructor that includes an InputStream or byte[] array.
    HTH
    Edited by: 938186 on Jun 23, 2012 11:43 AM
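    A small JDBC sketch of the CLOB route described above, assuming an already-open Connection and the single-column TEST table from the question; with drivers older than JDBC 4.0 the stream would need to be wrapped in an explicit oracle.sql.CLOB instead:

    import java.io.FileReader;
    import java.io.Reader;
    import java.sql.Connection;
    import java.sql.PreparedStatement;

    // Bind the document as a character stream so it never appears as a SQL
    // string literal (the inline literal is what triggers ORA-01704).
    public class InsertLargeXml {
        static void insert(Connection conn, String xmlPath) throws Exception {
            Reader reader = new FileReader(xmlPath);
            PreparedStatement ps = conn.prepareStatement("INSERT INTO TEST VALUES (XMLTYPE(?))");
            try {
                ps.setClob(1, reader);   // JDBC 4.0 setClob(int, Reader)
                ps.executeUpdate();
            } finally {
                ps.close();
                reader.close();
            }
        }
    }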

  • Large XML files containing time based data

    Hi,
    I have an extremely large XML file, gigabytes in size; it holds some info for displaying graphical representations for users, one of which is the updated view at every time instance. I'm stuck handling this large XML file. Obviously I tried using a DOM parser and got an out-of-memory exception. I thought of using SAX, as it is event based, but then I'll be stuck going backward through the representation; I need users to go backward and forward, showing the graphical representation at different time instances.

    It is gigabytes?
    And this file gets modified by some external application(s), rather than being created one time?
    And you want to keep reprocessing it at the same time other application(s) are modifying it?
    If yes to all of those, then I doubt there is any real solution except to copy it. If you attempt to sync it with the file system then you are either going to be locking it, preventing updates, or you are going to end up with corrupted data and thus ill formed xml.
    If the time data is historical in that a 'user' does something and a new entry is made and you want to report in various ways on that data then parsing it into a database would seem like reasonable solution.

  • It is very slow to load a large XML file to XDB repository.

    I have a large xml file (about 100MB).
    I tried to create a resource in XDB and load this file to XDB repository as follows:
    declare
    res boolean;
    begin
    res := DBMS_XDB.createResource('/public/standard.xml',
           XMLType(bfilename('XMLDIR', 'standard.xml'), nls_charset_id('AL32UTF8')));
    end;
    I waited about one hour but the program is still running. At last I killed it.
    I think even for a large xml file the loading time shouldn't be that long.
    Does anyone have suggestions?
    Thanks a lot

    Use the FTP method:
    you can FTP the file directly into the XML DB repository.
    It will be much faster.

  • Import Large XML File to Table

    I have a large (819MB) XML file I'm trying to import into a table in the format:
    <ROW_SET>
    <ROW>
    <column_name>value</column_name>
    </ROW>
    <ROW>
    <column_name>value</column_name>
    </ROW>
    </ROW_SET>
    I've tried importing it with xmlsequence(...).extract(...) and ran into the number of nodes exceed maximum error.
    I've tried importing it with XMLTable(... passing XMLTYPE(bfilename('DIR_OBJ','large_819mb_file.xml'), nls_charset_id('UTF8'))) and I gave up after it ran for 15+ hours ( COLLECTION ITERATOR PICKLER FETCH issue ).
    I've tried importing it with:
    insCtx := DBMS_XMLStore.newContext('schemaname.tablename');
    DBMS_XMLStore.clearUpdateColumnList(insCtx);
    DBMS_XMLStore.setUpdateColumn(insCtx,'column1name');
    DBMS_XMLStore.setUpdateColumn(insCtx,'columnNname');
    ROWS := DBMS_XMLStore.insertXML(insCtx, XMLTYPE(bfilename('DIR_OBJ','large_819mb_file.xml'), nls_charset_id('UTF8')));
    and ran into ORA-04030: out of process memory when trying to allocate 1032 bytes (qmxlu subheap,qmemNextBuf:alloc).
    All I need to do is read the XML file and move the data into a matching table in a reasonable time. Once I have the data in the database, I no longer need the XML file.
    What would be the best way to import large XML files?
    Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    "CORE     11.2.0.1.0     Production"
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production

    This (rough) approach should work for you.
    CREATE TABLE HOLDS_XML
            (xml_col XMLTYPE)
          XMLTYPE xml_col STORE AS SECUREFILE BINARY XML;
    INSERT INTO HOLDS_XML
    VALUES (xmltype(bfilename('DIR_OBJ','large_819mb_file.xml'), nls_charset_id('UTF8')))
    -- Should be using AL32UTF8 for DB character set with XML
    SELECT ...
      FROM HOLDS_XML HX,
           XMLTable(...
              PASSING HX.xml_col ...);
    How it differs from your approach:
    By using the HOLDS_XML table with SECUREFILE BINARY XML storage (which became the default in 11.2.0.2) we are providing a place for Oracle to store a parsed version of the XML. This allows the XML to be stored on disk instead of in memory. Oracle can then access the needed pieces of XML from disk by streaming them, instead of holding the whole XML in memory and parsing it repeatedly to find the information needed. That is what COLLECTION ITERATOR PICKLER FETCH means: a lot of memory work. You can search on that term to learn more about it if needed.
    The XMLTable approach then simply reads this XML from disk and should be able to parse the XML with no issue. You have the option of adding indexes to the XML, but since you are simply reading it all one time and tossing it, there is (most likely) no advantage to indexes.

  • Parsing large xml file and display using swing

    Hi all,
    I want to read a large xml file and display graphically in swing as a tree structure.
    I implemented it and it works fine for files of around 5 MB after increasing the JVM heap size (-Xmx). If the file size is larger than 5 MB it throws an out-of-memory error. I'm creating a custom data structure from the xml and I'm using SAX parsing.
    After displaying the data structure, the user can do some operations on it, like search etc.
    Can any of you suggest a method to support larger files? What I'm looking for is to create the data structure in the file system rather than in memory.
    Any other tips for memory management would be greatly appreciated
    Thanks in Advance.
    Nisha

    Use a memory-mapped file?
    http://javaalmanac.com/egs/java.nio/CreateMemMap.html
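    For reference, a minimal memory-mapping sketch along the lines of the linked example (the file name is a placeholder; note that a single mapping is limited to 2 GB):

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    // Map the file into memory: the OS pages the contents in on demand instead of
    // the whole file having to live on the JVM heap.
    public class MemMapExample {
        public static void main(String[] args) throws Exception {
            RandomAccessFile raf = new RandomAccessFile("big.xml", "r");
            FileChannel channel = raf.getChannel();
            MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            while (buf.hasRemaining()) {
                buf.get();   // read bytes (or bulk get) as needed
            }
            channel.close();
            raf.close();
        }
    }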

  • Creating an xml file from abap code

    Hello All,
    Could you let me know which FM I need to execute in order to create an XML file from my ABAP code?
    Thanks in advance,
    Paul.

    This has been discussed before
    XML files from ABAP programs

  • Need help Take out the null values from the ResultSet and Create a XML file

    hi,
    I wrote something which connects to a database and gets a ResultSet. From that ResultSet I am creating
    an XML file. In my program there are two main classes, Frame1 and ResultSetToXML. ResultSetToXML
    takes a ResultSet and a boolean value in its constructor. I am passing the ResultSet and boolean value
    from the Frame1 class. The boolean value controls whether the null values from the ResultSet are added
    to the XML file. When I run the program it works all right and adds both the null and non-null values to
    the file. But when I pass the boolean value to take out the null values, it does not take them out and still adds
    both the null and non-null values.
    Please look at the code I am posting. I show step by step where it should take out the null values but does not.
    Any help is always appreciated.
    Thanks in advance.
    ============================================================================
    Frame1 Class
    ============
    public class Frame1 extends JFrame {
        private JPanel contentPane;
        private XQuery xQuery1 = new XQuery();
        private XYLayout xYLayout1 = new XYLayout();
        public Document doc;
        private JButton jButton2 = new JButton();
        private Connection con;
        private Statement stmt;
        private ResultSetToXML rstx;

        //Construct the frame
        public Frame1() {
            enableEvents(AWTEvent.WINDOW_EVENT_MASK);
            try {
                jbInit();
            }
            catch (Exception e) {
                e.printStackTrace();
            }
        }

        //Component initialization
        private void jbInit() throws Exception {
            //setIconImage(Toolkit.getDefaultToolkit().createImage(Frame1.class.getResource("[Your Icon]")));
            contentPane = (JPanel) this.getContentPane();
            xQuery1.setSql("");
            xQuery1.setUrl("jdbc:odbc:SCANODBC");
            xQuery1.setUserName("SYSDBA");
            xQuery1.setPassword("masterkey");
            xQuery1.setDriver("sun.jdbc.odbc.JdbcOdbcDriver");
            contentPane.setLayout(xYLayout1);
            this.setSize(new Dimension(400, 300));
            this.setTitle("Frame Title");
            xQuery1.setSql("Select * from Pinfo where pid=2 or pid=4");
            jButton2.setText("Get XML from DB");
            try {
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
            }
            catch (java.lang.ClassNotFoundException ex) {
                System.err.print("ClassNotFoundException: ");
                System.err.println(ex.getMessage());
            }
            try {
                con = DriverManager.getConnection("jdbc:odbc:SCANODBC", "SYSDBA", "masterkey");
                stmt = con.createStatement();
            }
            catch (SQLException ex) {
                System.err.println("SQLException: " + ex.getMessage());
            }
            jButton2.addActionListener(new java.awt.event.ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    jButton2_actionPerformed(e);
                }
            });
            contentPane.add(jButton2, new XYConstraints(126, 113, -1, -1));
        }

        //Overridden so we can exit when window is closed
        protected void processWindowEvent(WindowEvent e) {
            super.processWindowEvent(e);
            if (e.getID() == WindowEvent.WINDOW_CLOSING) {
                System.exit(0);
            }
        }

        void jButton2_actionPerformed(ActionEvent e) {
            try {
                OutputStream out;
                XMLOutputter outputter;
                Element root;
                org.jdom.Document doc;
                root = new Element("PINFO");
                String query = "SELECT * FROM PINFO WHERE PID=2 OR PID=4";
                ResultSet rs = stmt.executeQuery(query);
                /*===========This is where i am passing the ResultSet and boolean=======
                ===========value to either add the null or not null values in the file======*/
                rstx = new ResultSetToXML(rs, true);
            } //end of try
            catch (SQLException ex) {
                System.err.println("SQLException: " + ex.getMessage());
            }
        }
    }
    ======================================================================================
    ResultSetToXML class
    ====================
    public class ResultSetToXML {
        private OutputStream out;
        private Element root;
        private XMLOutputter outputter;
        private Document doc;

        // Constructor
        public ResultSetToXML(ResultSet rs, boolean checkifnull) {
            try {
                String tagname = "";
                String tagvalue = "";
                root = new Element("pinfo");
                while (rs.next()) {
                    Element users = new Element("Record");
                    for (int i = 1; i <= rs.getMetaData().getColumnCount(); ++i) {
                        tagname = rs.getMetaData().getColumnName(i);
                        tagvalue = rs.getString(i);
                        System.out.println(tagname);
                        System.out.println(tagvalue);
                        /*============if the boolean value is false it adds the null and not
                        null value to the file =====================*/
                        /*============else it checks if the value is null or the length is
                        less than 0 and does the else clause in the if(checkifnull)===*/
                        if (checkifnull) {
                            if ((tagvalue == null) || tagvalue.length() < 0) {
                                users.addContent((new Element(tagname).setText(tagvalue)));
                            }
                            else {
                                users.addContent((new Element(tagname).setText(tagvalue)));
                            }
                        }
                        else {
                            users.addContent((new Element(tagname).setText(tagvalue)));
                        }
                    }
                    root.addContent(users);
                }
                out = new FileOutputStream("c:/XMLFile.xml");
                doc = new Document(root);
                outputter = new XMLOutputter();
                outputter.output(doc, out);
            }
            catch (IOException ioe) {
                System.out.println(ioe);
            }
            catch (SQLException sqle) {
            }
        }
    }

    Can someone please help me with this problem
    Thanks.
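    One guess at the cause: in the column loop, both branches of the checkifnull test call addContent, so a null column is written either way. A small sketch of how that fragment might be restructured, assuming the intent is to drop null columns when checkifnull is true (JDOM as in the original code):

    import org.jdom.Element;

    // Helper for the column loop: add the column only when it has a value,
    // or when the caller asked for nulls to be kept.
    class NullAwareColumnWriter {
        static void addColumn(Element record, String tagname, String tagvalue, boolean checkifnull) {
            if (tagvalue == null || tagvalue.length() == 0) {
                if (!checkifnull) {
                    record.addContent(new Element(tagname));   // keep an empty element for the null column
                }
                // when checkifnull is true the null column is simply skipped
            } else {
                record.addContent(new Element(tagname).setText(tagvalue));
            }
        }
    }

    Inside the while loop this would be called as NullAwareColumnWriter.addColumn(users, tagname, tagvalue, checkifnull) instead of the nested if/else.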

  • How to make a bean create an xml file..

    How does one create an xml file from a bean?
    If anyone has the answer, or some sample code, I would really appreciate it!!!
    Thanks a lot in advance!!!

    just "wrap" this up in a bean.
    import java.io.*;
    import org.jdom.*;
    import org.jdom.output.*;
    /** Make up and write an XML document, using JDOM
     * @author Ian Darwin, [email protected]
     * @version $Id: DocWriteJDOM.java,v 1.2 2001/11/21 23:08:17 ian Exp $
     */
    public class DocWriteJDOM {
        public static void main(String[] av) throws Exception {
            DocWriteJDOM dw = new DocWriteJDOM();
            Document doc = dw.makeDoc();
            // Create an output formatter, and have it write the doc.
            new XMLOutputter().output(doc, System.out);
        }

        /** Generate the XML document */
        protected Document makeDoc() throws Exception {
            Document doc = new Document(new Element("Poem"));
            doc.getRootElement().
                addContent(new Element("Stanza").
                    addContent(new Element("Line").
                        setText("Once, upon a midnight dreary")).
                    addContent(new Element("Line").
                        setText("While I pondered, weak and weary")));
            return doc;
        }
    }
