Best data structure for dealing with very large CSV files

Hi, I'm writing an object that stores data from a very large CSV file. The idea being that you initialize the object with the CSV file, and it then has lots of methods to make manipulating and working with the CSV file simpler. Operations like copy column, eliminate rows, perform some equation on all values in a certain column, etc. Also a method for printing back to a file.
However, the CSV files will probably be in the 10 MB range, maybe larger, so simply loading them into an array isn't possible, as it produces an OutOfMemoryError.
Does anyone have a data structure they could recommend that can store the large amount of data required and is easily writable? I've currently been using a RandomAccessFile, but it is awkward to write to, as well as needing an external file that would have to be cleaned up after the object is removed (something very hard to guarantee occurs).
Any suggestions would be greatly appreciated.

How much internal storage ("RAM") is in the computer where your program should run? I think I have 640 MB in mine, and I can't believe loading 10 MB of data would be prohibitive, not even if the size doubles when the data comes into Java variables.
If the data size turns out to be prohibitive of loading into memory, how about a relational database?
Another thing you may want to consider is more object-oriented (in the sense of domain-oriented) analysis and design. If the data is concerned with real-life things (persons, projects, monsters, whatever), row and column operations may be fine for now, but future requirements could easily make you prefer something else (for example, a requirement to sort projects by budget or monsters by proximity to the hero).
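If memory does turn out to be the limit, another option is to stream the file instead of holding it all: read one row, transform it, write it out. Below is a minimal sketch of that idea for the "perform an equation on a column" operation. The file names, column index, and split-on-comma parsing are assumptions for illustration only; real CSV with quoted fields would need a proper parser, and a header row would have to be skipped.

    import java.io.*;

    public class CsvColumnOp {
        public static void main(String[] args) throws IOException {
            int column = 2; // hypothetical target column (0-based)
            try (BufferedReader in = new BufferedReader(new FileReader("big.csv"));
                 PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter("big-out.csv")))) {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] fields = line.split(",", -1);
                    // apply the equation to the chosen column; only one row
                    // is ever held in memory at a time
                    fields[column] = String.valueOf(Double.parseDouble(fields[column]) * 2.0);
                    out.println(String.join(",", fields));
                }
            }
        }
    }

Row elimination and column copying work the same way; only operations that need random access (sorting, say) push you toward a database or an index.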

Similar Messages

  • JRockit for applications with very large heaps

    I am using JRockit for an application that acts as an in-memory database, storing a large amount of data in RAM (50 GB). Out of the box we got about a 25% performance increase as compared to the HotSpot JVM (great work, guys). Once the server starts up, almost all of the objects will be stored in the old generation and a smaller number will be stored in the nursery. The operation that we are trying to optimize needs to visit basically every object in RAM, and we want to optimize for throughput (total time to run this operation, not worrying about GC pauses).
    Currently we are using huge pages, -XXaggressive and -XX:+UseCallProfiling. We are giving the application 50 GB of RAM for both the max and min. I tried adjusting the TLA size to be larger, which seemed to degrade performance. I also tried a few other GC schemes, including singlepar, which also had negative effects (currently we are using the default, which optimizes for throughput).
    I used JRMC to profile the operation, and here are the results that I thought were interesting:
    liveset 30%
    heap fragmentation 2.5%
    GC Pause time average 600ms
    GC Pause time max 2.5 sec
    It had to do 4 young-generation collections, which were very fast, and then 2 old-generation collections, which were each about 2.5 s (the entire operation takes 45 s).
    For the long old-generation collections, about 50% of the time was spent in mark and 50% in sweep. Drilling down to sub-level 2, 1.3 seconds were spent in objects and 1.1 seconds in external compaction.
    Heap usage: Although 50 GB is committed, usage fluctuates between 20 GB and 32 GB. To give you an idea of what is stored in the heap, about 50% of it is char[] and another 20% is int[] and long[].
    My question is: are there any other flags that I could try that might help improve performance, or is there anything I should be looking at more closely in JRMC to help tune this application? Are there any specific tips for applications with large heaps? We can also assume that memory could be doubled or even tripled if that would improve performance, but we noticed that larger heaps did not always improve performance.
    Thanks in advance for any help you can provide.

    Any suggestions for using JRockit with very large heaps?
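    For reference, the launch line implied by the post would look roughly like this (a sketch only: the heap sizes and flag spellings are taken from the description above, the main class is a placeholder, and large pages are configured separately at the OS level):

        java -Xms50g -Xmx50g -XXaggressive -XX:+UseCallProfiling com.example.InMemoryDb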

  • Best technology to navigate through a very large XML file in a web page

    Hi!
    I have a very large XML file that needs to be displayed in my web page, maybe as a tree structure. Visitors should be able to go to nodes at any depth and access the children elements or text element of those nodes.
    I thought about using a DOM parser with Java but dropped that idea, as the DOM would be stored in memory and is hence space-consuming. Nor does SAX work for me, as every time there is a click on any of the nodes, my SAX parser parses the whole document for the node, which is time-consuming.
    Could anyone please tell me the best technology and best parser to be used for very large XML files?

    Thank you for your suggestion. I have a question, though. If I use a relational database and try to access it for EACH and EVERY click the user makes, wouldn't that take much time to populate the page with data? Isn't an XML store more efficient here? Please reply.

    You have the choice of reading a small number of records (10 children per element?) from a database, or parsing multiple megabytes. Reading 10 records from a database should take maybe 100 milliseconds (1/10 of a second). I have written a web application that reads several hundred records and returns them with acceptable response time, and I am no expert. To parse an XML file of many megabytes... you have already tried this, so you know how long it takes, right? If you haven't tried it then you should. It's possible to waste a lot of time considering alternatives -- the term is "analysis paralysis". Speculating on how fast something might be doesn't get you very far.
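    To make the per-click database read concrete, here is a minimal JDBC sketch that fetches just the children of the clicked node; the nodes table and its adjacency-list columns are invented for illustration:

        import java.sql.*;
        import java.util.*;

        public class TreeDao {
            // Reads only the children of one node -- never the whole tree.
            public List<String> children(Connection conn, long parentId) throws SQLException {
                List<String> names = new ArrayList<>();
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT name FROM nodes WHERE parent_id = ? ORDER BY name")) {
                    ps.setLong(1, parentId);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            names.add(rs.getString("name"));
                        }
                    }
                }
                return names;
            }
        }

    Each click then costs one small indexed query rather than a full parse of the file.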

  • Import very large CSV files into SAP

    Hi
    We're not using PI, but middleware called Trading Networks. Our design is fixed (not my decision): we are not to upload files to the application server and import them from there. Instead, our design dictates that we must write RFCs, and Trading Networks will call the RFC per interface with the very large file sent as a table of strings. This takes 14 minutes to import into a plain SAP Z-table from which we'll integrate. As a test, we uploaded the file to the application server and integrated it into the Z-table from there; this took 4 minutes. However, our architect is not impressed, fearing that we'll stretch the available application server to its limits.
    I want to propose that the large file be split into, e.g., 4 parts at the Trading Networks level, calling 4 parallel threads of the RFC, which could reduce integration time to, e.g., 3 minutes 30 seconds. Does anyone have suggestions in this regard, especially a proposed, working, elegant solution for integrating large files in our current environment? This will form the foundation of our project.
    Thank you and best regards,
    Adrian

    Zip compression can be tried: the RFC will receive a zip stream, which can be decompressed using CL_ABAP_ZIP.

  • HTMLDB_tools, processing a very large CSV file in a collection

    Hi Guys,
    I'm new to APEX and trying to load approx. 1,000,000 rows into a table from a CSV file. Does anyone have a better way of doing it than htmldb_tools.parse_file (which is very slow)?
    Cheers

    It's not Apex, but you could use SQL*Loader. It's really very fast!
    greets, Dik
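    A minimal sketch of what Dik is suggesting (the control file, then the command line; the file, table, and column names are placeholders):

        -- load.ctl
        LOAD DATA
        INFILE 'data.csv'
        APPEND INTO TABLE my_table
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        (col1, col2, col3)

        sqlldr userid=scott/tiger control=load.ctl direct=true

    direct=true enables the direct-path load, which is what makes SQL*Loader so much faster than row-by-row parsing for a million rows.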

  • Best data component to deal with a MySQL table

    Hi.
    I have an LMS (learning management system) with a lot of info in a MySQL database with a few tables. I also use a lot of PHP.
    I was told Flash data components aren't the best in the world, so what should I do?
    Somebody mentioned that I should use Flex data components, as they were built for this sort of thing and are a lot more powerful. So where do I get one of those? I.e., do I have to download anything, as I only have Flash CS5.5? And how do I get that component into Flash?
    Or would it be better to use a third-party plugin for displaying tables? Somebody mentioned that Excel and its pivot tables are amazing and simple.
    At the end of the day I also want clients to be able to see a dashboard.

    I am looking into various methods.
    Excel has a powerful pivot table which is amazing for tables, graphs, etc., in a few drags with no programming.
    My brother uses it at Lloyds Bank for number-crunching millions of rows, so it's even more than I need.
    I don't know of anybody mentioning Excel if they use Flex, i.e. maybe I am mixing both worlds. I will look into Flex data components, as there are tutorials on lynda.com on using MXML. Basically, a knowledgeable programmer told me I have to know at least as much as the programmer that I subcontract to, so I have been reading my butt off. I am very weak on databases, so that's my next field of research.
    THANKS.

  • What is best practice for dealing with Engineering Spare Parts?

    Hello All,
    I am after some advice regarding the process for handling engineering spare parts in PM. (We run ECC 5)
    Our current process is as follows:
    All materials are set up as HIBEs
    Each material is batch managed
    The Batch field is used for the Bin location
    We are now looking to roll out PM to a site that has in excess of 50,000 spare parts and want to make sure we use best practice for handling the spare parts. We are now considering using a basic WM setup to handle the movement of parts.
    Please can you provide me with some feedback on what you feel the best practice is for dealing with these parts?
    We are looking to set up a solution that will allow us to generate pick lists etc. and implement a scanning solution to move parts in and out of stores.
    Regards
    Chris

    Hi,
    I hope all the 50,000 spare parts are maintained as stock items.
    1. Based on the usage of those spare parts, try to define safety stock & define MRP as "Reorder Point Planning". By this, you can avoid petty cash purchases.
    2. By keeping the spare parts (at least the critical components) in stock, planned maintenance as well as unplanned maintenance will not get delayed.
    3. By doing GI based on reservation, qty can be tracked against the order & equipment.
    As this question is MM & WM related, they can give better clarity on this.
    Regards,
    Maheswaran.

  • Best practice for dealing with Recordsets

    Hi all,
    I'm wondering what best practice is for dealing with data retrieved via JDBC as RecordSets, without involving third-party products such as Hibernate etc. I've been told NOT to use RecordSets throughout my applications, since they take up resources and are expensive. I'm wondering which collection type is best to convert RecordSets into. The apps I'm building are web-based, using JSPs as the presentation layer, beans, and servlets.
    Many thanks
    Erik

    There is no requirement that DAOs have a direct mapping to database tables. One of the advantages of the DAO pattern is that the business layer isn't directly aware of the persistence layer. If the joined data is used in the business code as if it were an unnormalized table, then you might want to provide a DAO for the joined data. If the joined data provides a subsidiary object within some particular object, you might add the access method to the DAO for the outer object.
    eg:
    In a user permissioning system where:
    1 user has many userRoles
    1 role has many userRoles
    1 role has many rolePermissions
    1 permission has many rolePermissions
    ie. there is a many to many relationship between users and roles, and between roles and permissions.
    The administrator needs to be able to add and delete permissions for roles and roles for users, so the CRUD for the rolePermissions table is probably most useful in the RoleDAO, and the CRUD for the userRoles table in the UserDAO. DAOs can also call each other.
    During operation the system needs to be able to get all permissions for a user at login, so the UserDAO should provide a readPermissions method that does a rather complex join across the user, userRole, rolePermission and permission tables.
    Note that if the system I just described were done with LDAP, a hierarchical database or an object database, the userRoles and rolePermissions tables wouldn't even exist; these are RDBMS artifacts, since relational databases don't understand many-to-many relationships. This is a good reason to avoid providing DAOs that give access to those tables.
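    Back on the original question of which collection type to use: a common pattern (a sketch only -- the JDBC class is actually called ResultSet, and the User bean and table here are invented) is to copy each row into a plain bean inside the DAO and return a List, so every JDBC resource is closed before the data reaches the JSPs:

        import java.sql.*;
        import java.util.*;
        import javax.sql.DataSource;

        public class UserDAO {
            private final DataSource ds;

            public UserDAO(DataSource ds) { this.ds = ds; }

            // Copies every row into a bean so the ResultSet can be closed
            // here, instead of leaking JDBC resources into the JSP layer.
            public List<User> findAll() throws SQLException {
                List<User> users = new ArrayList<>();
                try (Connection c = ds.getConnection();
                     PreparedStatement ps = c.prepareStatement("SELECT id, name FROM users");
                     ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        users.add(new User(rs.getLong("id"), rs.getString("name")));
                    }
                }
                return users;
            }
        }

        class User {
            private final long id;
            private final String name;
            User(long id, String name) { this.id = id; this.name = name; }
            public long getId() { return id; }
            public String getName() { return name; }
        }

    This keeps connections and cursors out of the presentation layer entirely, which is exactly the resource concern raised in the question.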

  • Strategies for dealing with large forms

    I've created a dynamic form that is about 30 pages long. Although performance in Reader is OK, when editing, every change takes about 10 seconds to be digested by Designer. I've tried editing parts and then copying them across, but formatting information seems to be lost when pasting.
    What are the practical limits to how long a form can be, and are there any suggestions for how to deal with longer forms?
    Many thanks
    Alex

    Performance may be okay in Reader 8, but a 30-page dynamic form will probably be unusable in Reader 7. I have done work with very large forms before, and I have maintained each page in a separate XDP and then brought them together once all the kinks were worked out. I don't know why you would be losing formatting information.
    There is a technique that I have used where you use dynamic subforms to show only one page of the form at a time. That approach allows a very large form to be used in version 7 with acceptable response time. However, it's a very complex approach, and I have come to think that it's too complex for comfort. The more complex your approach is the more likely it is to fail in a future version of Reader.

  • What's the best strategy for dealing with 40+ hours of footage

    We have been editing a documentary with 45+ hours of footage and presently have captured roughly 230 GB. Needless to say, it's a lot of files. What's the best strategy for dealing with so much captured footage? It's almost impossible to remember it all, and labeling it while logging seems inadequate, as it is difficult to actually read comments in dozens and dozens of folders.
    Just looking for suggestions on how to deal with this problem for this and future projects.
    G5 Dual Core 2.3, Mac OS X (10.4.6), 2.5 GB RAM, 2 internal SATA II 250 GB drives

    Ditto, ditto, ditto on all of the previous posts. I've done four long form documentaries.
    First I listen to all the sound bites and digitize only the ones that I think I will need. I will take in much more than I use, but I like to transcribe bites from the non-linear timeline. It's easier for me.
    I had so many interviews in the last doc that I gave each interviewee a bin. You must decide how you want to organize the sound bites. Do you want a bin for each interviewee, or do you want to do it by subject? That will depend on your documentary and subject matter.
    I then have b-roll bins. Sometimes I base them on location and sometimes I base them on subject matter. This last time I based them on location, because I would have a good idea of what was in each bin by remembering where and when it was shot.
    Perhaps you weren't at the shoot and don't have this advantage. It's crucial that you organize your b-roll bins in a way that makes sense to you.
    I then have music bins and bins for my voice over.
    Many folks recommend that you work in small sequences and nest. This is a good idea for long form stuff. That way you don't get lost in the timeline.
    I also make a "used" bin. Once I've used a shot, I pull it out of its bin and put it "away." That keeps me from repeatedly looking at footage that I've already used.
    The previous posts are right. If you've digitized 45 hours of footage, you've put in too much. It's time to start deleting some media. Remember that when you hit the edit suite, you should be on the downhill slide. You should have a script and a clear idea of where you're going.
    I don't have enough fingers to count the number of times that I've had producers walk into my edit suite with a bunch of raw tape and tell me that they "want to make something cool." They generally have no idea where they're going and end up wondering why the process is so hard.
    Refine your story and base your clip selections on that story.
    Good luck
    Dual 2 GHz Power Mac G5   Mac OS X (10.4.8)  

  • What are the best tools for opening very large XML files and examining the tree and confirming they are valid?

    I am generating some very large XML files (600,000+ lines, 50 MB+). I finally have them all valid XML and valid UTF-8.
    But the files are so large Safari and Chrome will often not open them. FireFox will though.
    Instead of these browsers, I was wondering if there are any other recommended apps for the Mac for opening and viewing the XML, getting an error message if it is not valid for some reason, and examining the XML tree?
    I opened the file in the default app for XML which is Xcode, but that is just like opening it in a plain text editor. You can't expand/collapse the XML tree like you can with a browser, and it doesn't report errors.
    Thanks,
    Doug

    Hi Tom,
    I had not seen that list. I'll look it over.
    I'm also in touch with the developer of BBEdit (they are quite responsive) and they are willing to look at the file in question and see why it is not reporting UTF-8 errors while Chrome is.
    For now I have all the invalid characters quashed and things are working. But it would be useful in the future.
    By the by, some of those editors are quite pricey!
    doug
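    One further option, not mentioned in the thread but worth knowing: macOS ships with xmllint, which can check a huge file for well-formedness from Terminal in streaming mode, without building an in-memory tree:

        xmllint --stream --noout huge.xml

    It prints nothing on success and reports any error with its line number. It won't give you the collapsible tree view, but it answers the validity question quickly.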

  • HELP!! Very Large Spooling / File Size after Data Merge

    My question is: if the image is the same and only the text is different, why not use the same image over and over again?
    Here is what happens...
    Using CS3 and XP (P4 2.4 GHz, 1 GB RAM, 256 MB video card), I have taken a postcard PDF (the backside), placed it in a document, and then drawn a text box. Then I select a data source and put the fields I wish to print (name, address, zip, etc.) in the text box.
    Now, under the Create Merged Document menu I select Multiple Records and then use the Multiple Records Layout tab to adjust the placement of this postcard on the page. I use the preview multiple records option to lay out 4 postcards on my page. Then I merge the document (it has 426 records).
    Now that my merged document is created with four postcards per page and the mailing data on each card, I go to print. When I print, the file spools up huge! The PDF I originally placed in the document is 2.48 MB, but when it spools I can only print 25 pages at a time, and that still takes FOREVER. So again my question is: if the image is the same and only the text is different, why not use the same image over and over again?
    How can I prevent the gigantic spooling? I have tried putting the PDF on the master page and then using the document page to create the merged document, and still the same result. I have also tried creating a merged document with just the addresses and then adding the PDF on the master page afterward, but again, a huge file size while spooling. Am I missing something? Any help is appreciated :)

    The size of the EMF spool file may become very large when you print a document that contains lots of raster data
    Article ID: 919543 | Last Review: June 7, 2006 | Revision: 2.0
    SYMPTOMS
    When you print a document that contains lots of raster data, the size of the Enhanced Metafile (EMF) spool file may become very large. Files such as Adobe .pdf files or Microsoft Word .doc documents may contain lots of raster data. Adobe .pdf files and Word .doc documents that contain gradients are even more likely to contain lots of raster data.
    CAUSE
    This problem occurs because Graphics Device Interface (GDI) does not compress raster data when the GDI processes EMF spool files and generates EMF spool files.
    This problem is very prominent with printers that support higher resolutions. The size of the raster data increases by four times if the dots-per-inch (dpi) in the file increases by two times. For example, a .pdf file of 1 megabyte (MB) may generate an EMF spool file of 500 MB. Therefore, you may notice that the printing process decreases in performance.
    RESOLUTION
    To resolve this problem, bypass EMF spooling. To do this, follow these steps:
    1. Open the properties dialog box for the printer.
    2. Click the Advanced tab.
    3. Click the Print directly to the printer option.
    Note: This will disable all print-processor-based features, such as the following: N-up, Watermark, Booklet printing, Driver collation, and Scale-to-fit.
    STATUS
    Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the "Applies to" section.
    MORE INFORMATION
    Steps to reproduce the problem
    1. Open the properties dialog box for any inbox printer.
    2. Click the Advanced tab.
    3. Make sure that the Print directly to the printer option is not selected.
    4. Click to select the Keep printed documents check box.
    5. Print an Adobe .pdf document that contains many groups of raster data.
    6. Check the size of the EMF spool file.

  • Namespace not found error when creating a data server for XML with a namespace

    Hi
    I tried creating an XML data server in ODI with a namespace in the XML file. I followed the steps below but could not succeed in creating the data server; however, when I remove the namespace from the XML file, I am able to reverse-engineer it.
    1) Create xml data server
    2) select xml driver - com.sunopsis.jdbc.driver.xml.SnpsXmlDriver
    3) Provide the jdbc url - jdbc:snps:xml?f=D:/xmlnew/sample_namespace.xml&s=xmlns&d=D:/xmlnew/sample_namespace.dtd
    xml content
    <f:root xmlns:f="http://www.w3.org/TR/html4/">
    <table>
    <name>African Coffee Table</name>
    <width>80</width>
    <length>120</length>
    </table>
    </f:root>
    DTD content
    <!ELEMENT f:root ( table ) >
    <!ELEMENT length ( #PCDATA ) >
    <!ELEMENT name ( #PCDATA ) >
    <!ELEMENT table ( name, width, length ) >
    <!ELEMENT width ( #PCDATA ) >
    When I test the connection, it shows the following error:
    java.sql.SQLException: The model generated by the model mapper was not accepted by a validator: Model not accepted: Namespace not found:
         at oracle.odi.jdbc.datasource.LoginTimeoutDatasourceAdapter.doGetConnection(LoginTimeoutDatasourceAdapter.java:133)
         at oracle.odi.jdbc.datasource.LoginTimeoutDatasourceAdapter.getConnection(LoginTimeoutDatasourceAdapter.java:62)
         at com.sunopsis.sql.SnpsConnection.testConnection(SnpsConnection.java:1100)
         at com.sunopsis.graphical.dialog.SnpsDialogTestConnet.getLocalConnect(SnpsDialogTestConnet.java:371)
         at com.sunopsis.graphical.dialog.SnpsDialogTestConnet.localConnect(SnpsDialogTestConnet.java:794)
         at com.sunopsis.graphical.dialog.SnpsDialogTestConnet.jButtonTest_ActionPerformed(SnpsDialogTestConnet.java:754)
         at com.sunopsis.graphical.dialog.SnpsDialogTestConnet.connEtoC1(SnpsDialogTestConnet.java:137)
         at com.sunopsis.graphical.dialog.SnpsDialogTestConnet.access$1(SnpsDialogTestConnet.java:133)
         at com.sunopsis.graphical.dialog.SnpsDialogTestConnet$IvjEventHandler.actionPerformed(SnpsDialogTestConnet.java:87)
         at javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:1995)
         at javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2318)
         at javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:387)
         at javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:242)
         at javax.swing.plaf.basic.BasicButtonListener.mouseReleased(BasicButtonListener.java:236)
         at java.awt.AWTEventMulticaster.mouseReleased(AWTEventMulticaster.java:272)
         at java.awt.Component.processMouseEvent(Component.java:6289)
         at javax.swing.JComponent.processMouseEvent(JComponent.java:3267)
         at java.awt.Component.processEvent(Component.java:6054)
         at java.awt.Container.processEvent(Container.java:2041)
         at java.awt.Component.dispatchEventImpl(Component.java:4652)
         at java.awt.Container.dispatchEventImpl(Container.java:2099)
         at java.awt.Component.dispatchEvent(Component.java:4482)
         at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4577)
         at java.awt.LightweightDispatcher.processMouseEvent(Container.java:4238)
         at java.awt.LightweightDispatcher.dispatchEvent(Container.java:4168)
         at java.awt.Container.dispatchEventImpl(Container.java:2085)
         at java.awt.Window.dispatchEventImpl(Window.java:2478)
         at java.awt.Component.dispatchEvent(Component.java:4482)
         at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:644)
         at java.awt.EventQueue.access$000(EventQueue.java:85)
         at java.awt.EventQueue$1.run(EventQueue.java:603)
         at java.awt.EventQueue$1.run(EventQueue.java:601)
         at java.security.AccessController.doPrivileged(Native Method)
         at java.security.AccessControlContext$1.doIntersectionPrivilege(AccessControlContext.java:87)
         at java.security.AccessControlContext$1.doIntersectionPrivilege(AccessControlContext.java:98)
         at java.awt.EventQueue$2.run(EventQueue.java:617)
         at java.awt.EventQueue$2.run(EventQueue.java:615)
         at java.security.AccessController.doPrivileged(Native Method)
         at java.security.AccessControlContext$1.doIntersectionPrivilege(AccessControlContext.java:87)
         at java.awt.EventQueue.dispatchEvent(EventQueue.java:614)
         at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269)
         at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184)
         at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174)
         at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169)
         at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161)
         at java.awt.EventDispatchThread.run(EventDispatchThread.java:122)
    Caused by: java.sql.SQLException: The model generated by the model mapper was not accepted by a validator: Model not accepted: Namespace not found:
         at com.sunopsis.jdbc.driver.xml.SnpsXmlDTD.initialize(SnpsXmlDTD.java:389)
         at com.sunopsis.jdbc.driver.xml.SnpsXmlDTD.initialize(SnpsXmlDTD.java:421)
         at com.sunopsis.jdbc.driver.xml.SnpsXmlDTD.<init>(SnpsXmlDTD.java:150)
         at com.sunopsis.jdbc.driver.xml.SnpsXmlSchema.<init>(SnpsXmlSchema.java:478)
         at com.sunopsis.jdbc.driver.xml.SnpsXmlSchemaManager.createNewSchema(SnpsXmlSchemaManager.java:292)
         at com.sunopsis.jdbc.driver.xml.SnpsXmlSchemaManager.getSchemaFromProperties(SnpsXmlSchemaManager.java:270)
         at com.sunopsis.jdbc.driver.xml.SnpsXmlDriver.connect(SnpsXmlDriver.java:114)
         at oracle.odi.jdbc.datasource.DriverManagerUtils$DriverProxy.connect(DriverManagerUtils.java:23)
         at oracle.odi.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriverManager(DriverManagerDataSource.java:368)
         at oracle.odi.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriverManager(DriverManagerDataSource.java:352)
         at oracle.odi.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriverManager(DriverManagerDataSource.java:316)
         at oracle.odi.jdbc.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:275)
         at oracle.odi.jdbc.datasource.LoginTimeoutDatasourceAdapter.doGetConnection(LoginTimeoutDatasourceAdapter.java:99)
         at oracle.odi.jdbc.datasource.LoginTimeoutDatasourceAdapter.getConnection(LoginTimeoutDatasourceAdapter.java:62)
         at oracle.odi.jdbc.datasource.LoginTimeoutDatasourceAdapter$ConnectionProcessor.run(LoginTimeoutDatasourceAdapter.java:217)
         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
         at java.lang.Thread.run(Thread.java:662)

    Hi,
    Thanks for your reply.
    This is the DTD for my XML doc.
    <!ELEMENT Data (Department+)>
    <!ELEMENT EmployeeID (#PCDATA)>
    <!ATTLIST EmployeeID col (EMPID) #IMPLIED>
    <!ELEMENT Education (EmployeeID, Sequence, Dgree)>
    <!ATTLIST Education table NMTOKEN #IMPLIED>
    <!ELEMENT Employee (EmployeeName, EmployeeID, DepartmentID, Education*)>
    <!ATTLIST Employee table NMTOKEN #IMPLIED>
    <!ELEMENT EmployeeName (#PCDATA)>
    <!ATTLIST EmployeeName col NMTOKEN #IMPLIED>
    <!ELEMENT DepartName (#PCDATA)>
    <!ATTLIST DepartName col NMTOKEN #IMPLIED>
    <!ELEMENT Table (Column+)>
    <!ATTLIST Table importType NMTOKEN #IMPLIED>
    <!ATTLIST Table parentTable NMTOKEN #IMPLIED>
    <!ATTLIST Table tag NMTOKEN #IMPLIED>
    <!ATTLIST Table columns NMTOKEN #IMPLIED>
    <!ATTLIST Table name NMTOKEN #IMPLIED>
    <!ELEMENT DepartID (#PCDATA)>
    <!ATTLIST DepartID col NMTOKEN #IMPLIED>
    <!ELEMENT MetaData (Table+)>
    <!ELEMENT Sequence (#PCDATA)>
    <!ATTLIST Sequence col NMTOKEN #IMPLIED>
    <!ELEMENT Dgree (#PCDATA)>
    <!ATTLIST Dgree col NMTOKEN #IMPLIED>
    <!ELEMENT Export (MetaData, Data)>
    <!ELEMENT DepartmentID (#PCDATA)>
    <!ATTLIST DepartmentID col NMTOKEN #IMPLIED>
    <!ELEMENT Column (#PCDATA)>
    <!ATTLIST Column deleteKey NMTOKEN #IMPLIED>
    <!ATTLIST Column isKey NMTOKEN #IMPLIED>
    <!ELEMENT Department (DepartName, DepartID, Employee+)>
    <!ATTLIST Department table NMTOKEN #IMPLIED>
    Thanks again!
    Yan
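    One thing that may be worth checking (my guess, not something confirmed in the thread): a DTD validator treats a namespace declaration as an ordinary attribute, so for the first sample the xmlns:f attribute would itself have to be declared in the DTD, along these lines:

        <!ELEMENT f:root ( table ) >
        <!ATTLIST f:root xmlns:f CDATA #FIXED "http://www.w3.org/TR/html4/" >

    If the driver's validator still reports "Namespace not found" after that, the s=... parameter in the JDBC URL would be the next thing to look at.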

  • iPod touch 4th gen, running iOS 5, boots up with very large icons, impossible to navigate; how to get back to the standard-sized home screen?

    iPod touch 4th gen, running iOS 5, boots up with very large icons and is impossible to navigate. I need to return to the standard-sized home screen.

    Triple-click the Home button, then go to Settings > General > Accessibility and turn Zoom off. If problems persist, see:
    iPhone: Configuring accessibility features (including VoiceOver and Zoom)

  • J1INUT Error - No data exist for processing with the given selection option

    Hi Gurus,
    I am using transaction J1INUT (utilization of provision of TDS on services), for which I have made the provision with the help of J1INPR. But when I execute transaction J1INUT, the following error message is displayed:
    No data exist for processing with the given selection options
    I have followed the below steps.
    1) ME21N - PO Creation
    2) ML81N - Service Entry
    3) J1INPR - Provision of TDS
    4) MIRO -  Invoice Posting
    I have checked table J_1IEWTPROV, and the system is updating that table. I have even activated table TRWCA for field IND.
    But I am still getting the same error. Any suggestions to resolve this?
    Appreciate your inputs. Thanks in Advance
    Regards,
    DeepaK

    Hi Deepak,
    Refer to the link below and follow the steps - Provision for Taxes on Service Recieved.
    Re: Provision for Taxes on Service Recieved
    Hope it is useful to you.
    Regards,
    Govind Bhaskaran
