Structure of XML that can be interpreted by KM?
Hi,
My requirement is as follows: I need to bring the documents stored in an external repository (a Lotus Domino server) into the KM repository, so that the external repository can be retired entirely. My approach is to convert all documents in the external repository into XML documents that KM can understand. What I would like to know is the structure of that XML. Is there a specific XML structure that KM can read, so that it stores the documents accordingly in its repository?
Any help will be greatly appreciated.
Regards,
M.Subathra
Any thoughts on this?
Similar Messages
-
Is there a preferred format of XML that can be loaded into an Oracle table?
XML files vary quite a bit, from mere fragments to files carrying a full schema. What is the preferred format of an XML file that can be loaded into an Oracle table, assuming the schema agrees with the table structure?
It does not seem to load an XML file even with the W3C namespace and all.
Krishnaswamy Jayaram

With XE there is XML support in the database, but no XQuery, JNDI, or Servlet support.
If you give a specific example it could be helpful, here are simple write and read tests on XE;
HR on 20/03/2006 10:28:22 at XE > CREATE TABLE xml_tab (xmlval XMLTYPE);
Table created.
HR on 20/03/2006 10:28:31 at XE > INSERT INTO xml_tab VALUES (
  2  XMLTYPE.CREATEXML('<emp>
  3    <empno>221</empno>
  4    <ename>John</ename>
  5  </emp>'));
1 row created.
HR on 20/03/2006 10:28:41 at XE > INSERT INTO xml_tab VALUES (
  2  XMLTYPE.CREATEXML('<po>
  3    <pono>331</pono>
  4    <poname>PO_1</poname>
  5  </po>'));
1 row created.
HR on 20/03/2006 10:28:49 at XE > select x.xmlval.getstringval() from xml_tab x;
X.XMLVAL.GETSTRINGVAL()
--------------------------------------------------------------------------------
<emp>
  <empno>221</empno>
  <ename>John</ename>
</emp>
<po>
  <pono>331</pono>
  <poname>PO_1</poname>
</po>
-
COI: manual postings that can be interpreted by COI on divestiture
Hello Experts
I hope you can help. I am using COI functionality at a customer site. As an example, I need to post a manual journal on posting level 30 against a minority interest item. The journal should be recognised by the 'Divestiture' activity at a later stage, when that activity is run. I tried creating a doc type with Application = Consolidation of Investment and Posting = Manual Posting, but the task linked to that doc type short-dumps when run. I also created a doc type with Application = Other and Posting = Manual Posting, and then changed the Document Field Options so that activity and number were available. The task linked to that doc type runs and posts, but when I divest at a later stage, the COI divestiture activity does not take this posting into account. How can I get COI to recognise that this manual posting is there?
Regards

KK,
Check the hierarchy and make sure that when a cons unit appears in the hierarchy in two places, there is a split or shared ownership of that cons unit. Otherwise it should appear in the hierarchy in a single place only.
Also, where a parent cons unit exists, it is normally in a hierarchy node below the node of its parent cons unit. For example:
PP-US (top node)
    AS1000  overall parent
    AS1001  sub cons unit 1
    AUESTP (sub cons unit 2 node)
        AS5000  sub cons unit 2 - parent unit to cons unit 3
        AS6000  sub cons unit 3
In the COI log itself, each cons group will include the lower-level cons units, because by default they belong to that group. So in the above example, AS6000 belongs to PP-US as well as to AUESTP and will therefore appear in the log for both. In many cases the documents will only appear at the level where the investee is stationed in the hierarchy.
For first consolidation, only purchase method documents occur for the elimination of the investment and corresponding equity. Because balances for equity held cons units are not included in reporting, there is no need for such elimination.
Minority interest is only applicable for purchase method cons units and will not calculate or post for equity held cons units.
I hope this helps.
Edited by: Dan Sullivan on May 13, 2008 3:31 PM
Edited by: Dan Sullivan on May 13, 2008 3:33 PM -
Do you have examples of CSV Format XML that can handle relationshiptypes?
Hi,
I have created a Windows Computer extended class that, for the sake of the example, has an additional property ServerNameRow, and a relationship (selected via a Single Instance Picker control) BusinessUnitCustomersListPickerClass_Relationship. (The Single Instance Picker selects the primary name field/property only.)
I want to import data via CSV and/or PowerShell. Both need a format XML file.
I don't know what the syntax is for the relationship BusinessUnitCustomersListPickerClass_Relationship.
What I'd need is some examples, preferably of a non-extended class and of an extended class's format XML.

http://blogs.technet.com/b/servicemanager/archive/2009/05/26/using-the-csv-import-feature.aspx
At the bottom of that blog post is an attachment (CSVImport.docx). It contains all the information you'll need for constructing CSV imports, including examples.
To import relationships, you'll need to define a type projection targeting the windows computer class that contains a component for your custom relationship type.
Also note you should not use the extension class itself in your definition file or in the type projection definition. Just use the Windows Computer class. Don't worry: your extension class property will be recognized by the import process. Extensions to a class are different from inherited classes. -
Query language for XML that is going to remove use of database
Friends,
I want to know whether my idea is feasible or not.
I want to make a query language for XML, similar to SQL, which would remove the need for a database to a large extent.
There will be queries for making tables, extracting data, and making foreign keys, all through XML and no database.
I just want to know whether there is any market value for such a project and whether it would be sellable, so that I can start working on it.

There is no way to judge any future market for such a thing.
As it is, XML is widely abused, having mutated from its original purpose, a markup language readable by both machine and human, to a general data interchange language. It winds up being extremely slow and convoluted, in some cases overwhelming the hardware when a simple and common error occurs, then having to catch up. A big part of this is the silly and redundant method of defining metadata, again and again and again and again.
So what you want to do is create a database which will be highly formatted, with redundant information, dynamically. This is silly. Why does data need to be stored in a human-readable format? It doesn't always even need to be presented in a human-readable format. Why the heck would you need to read the bits of an mp4? Steganography?
What you are proposing is the exact opposite of what is needed. What is needed is a way to describe metadata for many different kinds of data, in a manner that can be interpreted by both man and machine at appropriate times, extensible for various paradigms.
What can be sold, on the other hand, is another question entirely, and not technical at all.
First definition of database that pops up during a search: an organized body of related information
Why would you want to get rid of that? -
Can we express batch relationship structure in XML in the database table
Hi
please help me ..
I have a batch XML structure. Can we express the batch relationship structure in XML in the database table?
If yes, then how?
Thanks
Amu
Edited by: amu_2007 on Mar 25, 2010 6:57 PM
Edited by: amu_2007 on Mar 25, 2010 7:03 PM

But what is the problem with the initial solution given, which split the XML into the data?
I mean, you could do something like this:
SQL> create table batch (customer VARCHAR2(10)
2 ,cust_name VARCHAR2(10)
3 ,cust_type VARCHAR2(10)
4 )
5 /
Table created.
SQL>
SQL> create table section (customer VARCHAR2(10)
2 ,sect_name VARCHAR2(10)
3 ,sect_depend VARCHAR2(10)
4 )
5 /
Table created.
SQL> create table job_sections (customer VARCHAR2(10)
2 ,sect_name VARCHAR2(10)
3 ,job_sect_name VARCHAR2(10)
4 ,job_sect_depend VARCHAR2(10)
5 )
6 /
Table created.
SQL> create table job (customer VARCHAR2(10)
2 ,sect_name VARCHAR2(10)
3 ,job_sect_name VARCHAR2(10)
4 ,job_type VARCHAR2(10)
5 ,job_sub_type VARCHAR2(10)
6 ,job_depend VARCHAR2(10)
7 )
8 /
Table created.
SQL>
SQL>
SQL> insert all
2 when batch_rn = 1 then
3 into batch (customer, cust_name, cust_type) values (customer, cust_name, cust_type)
4 when section_rn = 1 then
5 into section (customer, sect_name, sect_depend) values (customer, sect_name, sect_dependency)
6 when job_sections_rn = 1 then
7 into job_sections (customer, sect_name, job_sect_name, job_sect_depend) values (customer, sect_name, job_sect_name, job_sect_dependency)
8 when 1=1 then
9 into job (customer, sect_name, job_sect_name, job_type, job_sub_type, job_depend) values (customer, sect_name, job_sect_name, job_type, job_sub_type, job_dependency)
10 --
11 WITH t as (select XMLTYPE('
12 <BATCH customer="ABC" name="ABC1" type="ABC_TYPE">
13 <BATCH_SECTIONS>
14 <SECTION name="X" dependency="NULL">
15 <JOB_SECTIONS name="JOB1" dependency="NULL" >
16 <JOBS>
17 <JOB type="X" sub_type="xx" dependency="NULL" />
18 <JOB type="X" sub_type="yy" dependency="NULL" />
19 <JOB type="X" sub_type="zz" dependency="NULL" />
20 </JOBS>
21 </JOB_SECTIONS>
22 </SECTION>
23 <SECTION name="Y" dependency="X">
24 <JOB_SECTIONS name="JOB2" dependency="X" >
25 <JOBS>
26 <JOB type="Y" sub_type="xx" dependency="X" />
27 <JOB type="Y" sub_type="yy" dependency="X" />
28 <JOB type="Y" sub_type="zz" dependency="X" />
29 </JOBS>
30 </JOB_SECTIONS>
31 </SECTION>
32 <SECTION name="Z" dependency="Y">
33 <JOB_SECTIONS name="JOB3" dependency="NULL" >
34 <JOBS>
35 <JOB type="....." sub_type="...." dependency="NULL" />
36 </JOBS>
37 </JOB_SECTIONS>
38 <JOB_SECTIONS name="JOB4" dependency="NULL">
39 <JOBS>
40 <JOB type="...." sub_type="...." dependency="NULL" />
41 </JOBS>
42 </JOB_SECTIONS>
43 </SECTION>
44 </BATCH_SECTIONS>
45 </BATCH>
46 ') as xml from dual)
47 --
48 -- END OF TEST DATA
49 --
50 ,flat as (select a.customer, a.cust_name, a.cust_type
51 ,b.sect_name, NULLIF(b.sect_dependency,'NULL') as sect_dependency
52 ,c.job_sect_name, NULLIF(c.job_sect_dependency,'NULL') as job_sect_dependency
53 ,d.job_type, d.job_sub_type, NULLIF(d.job_dependency,'NULL') as job_dependency
54 from t
55 ,XMLTABLE('/BATCH'
56 PASSING t.xml
57 COLUMNS customer VARCHAR2(10) PATH '/BATCH/@customer'
58 ,cust_name VARCHAR2(10) PATH '/BATCH/@name'
59 ,cust_type VARCHAR2(10) PATH '/BATCH/@type'
60 ,bat_sections XMLTYPE PATH '/BATCH/BATCH_SECTIONS'
61 ) a
62 ,XMLTABLE('/BATCH_SECTIONS/SECTION'
63 PASSING a.bat_sections
64 COLUMNS sect_name VARCHAR2(10) PATH '/SECTION/@name'
65 ,sect_dependency VARCHAR2(10) PATH '/SECTION/@dependency'
66 ,section XMLTYPE PATH '/SECTION'
67 ) b
68 ,XMLTABLE('/SECTION/JOB_SECTIONS'
69 PASSING b.section
70 COLUMNS job_sect_name VARCHAR2(10) PATH '/JOB_SECTIONS/@name'
71 ,job_sect_dependency VARCHAR2(10) PATH '/JOB_SECTIONS/@dependency'
72 ,job_sections XMLTYPE PATH '/JOB_SECTIONS'
73 ) c
74 ,XMLTABLE('/JOB_SECTIONS/JOBS/JOB'
75 PASSING c.job_sections
76 COLUMNS job_type VARCHAR2(10) PATH '/JOB/@type'
77 ,job_sub_type VARCHAR2(10) PATH '/JOB/@sub_type'
78 ,job_dependency VARCHAR2(10) PATH '/JOB/@dependency'
79 ) d
80 )
81 --
82 select customer, cust_name, cust_type, sect_name, sect_dependency, job_sect_name, job_sect_dependency, job_type, job_sub_type, job_dependency
83 ,row_number() over (partition by customer order by 1) as batch_rn
84 ,row_number() over (partition by customer, sect_name order by 1) as section_rn
85 ,row_number() over (partition by customer, sect_name, job_sect_name order by 1) as job_sections_rn
86 from flat
87 /
16 rows created.
SQL> select * from batch;
CUSTOMER CUST_NAME CUST_TYPE
ABC ABC1 ABC_TYPE
SQL> select * from section;
CUSTOMER SECT_NAME SECT_DEPEN
ABC X
ABC Y X
ABC Z Y
SQL> select * from job_sections;
CUSTOMER SECT_NAME JOB_SECT_N JOB_SECT_D
ABC X JOB1
ABC Y JOB2 X
ABC Z JOB3
ABC Z JOB4
SQL> select * from job;
CUSTOMER SECT_NAME JOB_SECT_N JOB_TYPE JOB_SUB_TY JOB_DEPEND
ABC X JOB1 X xx
ABC X JOB1 X yy
ABC X JOB1 X zz
ABC Y JOB2 Y xx X
ABC Y JOB2 Y yy X
ABC Y JOB2 Y zz X
ABC Z JOB3 ..... ....
ABC Z JOB4 .... ....
8 rows selected.
SQL>

But it would depend on what you are actually after in terms of primary keys, table relationships etc.
Let me put this simply for you...
IF YOU DON'T DEMONSTRATE TO US WHAT OUTPUT YOU REQUIRE, WE CAN'T GIVE YOU AN ANSWER -
Rest method that can support request/responce in both xml and json formats
Hi,
I want to create a REST method that can support request/response in both XML and JSON formats.
I am trying the approach below, but it's not working and I am getting an error.
Any idea on this?
Code in IService.cs :
[OperationContract]
[WebGet(UriTemplate = "/Login/{UserID}/{Password}")]
Responce Login(string UserID, string Password);
Code in Service.cs :
public Responce Login(string UserID, string Password)
{
    Responce objResponce = new Responce();
    try
    {
        objResponce.MessageType = Responce.ResponceType.Warning.ToString();
        string Con = GetConnectionString(UserID, Password); // method to check whether the user is valid
        if (Con.Trim().Length != 0)
            objResponce.Message = "You have logged in successfully";
        else
            objResponce.Message = "Please enter a valid UserID and Password";
    }
    catch (Exception)
    {
        throw; // was "through ex;" - a plain throw rethrows and preserves the stack trace
    }
    return objResponce;
}
My Config settings :
<services>
<service name="OnePointAPI.OnePointAPIService">
<endpoint address="JSON" binding="webHttpBinding" contract="OnePointAPI.IOnePointAPIService" behaviorConfiguration="webJSON" ></endpoint>
<!-- basicHttpBinding is a SOAP binding and cannot be paired with the webHttp behavior; use webHttpBinding -->
<endpoint address="XML" binding="webHttpBinding" contract="OnePointAPI.IOnePointAPIService" behaviorConfiguration="webXML" ></endpoint>
</service>
</services>
<behaviors>
<serviceBehaviors>
<behavior>
<!-- To avoid disclosing metadata information, set the values below to false before deployment -->
<serviceMetadata httpGetEnabled="true" httpsGetEnabled="true"/>
<!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
<serviceDebug includeExceptionDetailInFaults="false"/>
</behavior>
</serviceBehaviors>
<endpointBehaviors>
<behavior name="webJSON">
<webHttp defaultOutgoingResponseFormat="Json"/>
</behavior>
<behavior name="webXML">
<webHttp defaultOutgoingResponseFormat="Xml" />
</behavior>
</endpointBehaviors>
</behaviors>
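Incidentally, from .NET Framework 4 onwards a single webHttpBinding endpoint can serve both formats: the webHttp behavior supports automatic format selection, choosing JSON or XML per request from the caller's Accept header. A sketch, reusing the service and contract names above (the behavior name webAuto is made up):

```xml
<services>
  <service name="OnePointAPI.OnePointAPIService">
    <!-- one webHttpBinding endpoint instead of separate JSON and XML endpoints -->
    <endpoint address="" binding="webHttpBinding"
              contract="OnePointAPI.IOnePointAPIService"
              behaviorConfiguration="webAuto" />
  </service>
</services>
<behaviors>
  <endpointBehaviors>
    <behavior name="webAuto">
      <!-- picks Json or Xml per request, based on the Accept header -->
      <webHttp automaticFormatSelectionEnabled="true" />
    </behavior>
  </endpointBehaviors>
</behaviors>
```

Note that a webHttp behavior only works on a webHttpBinding endpoint; basicHttpBinding is a SOAP binding.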
Anwar Shaik

In several days (on the 19th) I will lecture at SQLSaturday #360, and my last demo (hopefully I will have the time) is a full implementation of JSON. I will show several functions using a JSON serializer and deserializer and, as mentioned, a full implementation including fast indexes on a JSON column (finding a specific node in 10 million rows in less than a second). If you want, I will probably publish it later, or you can come to the lecture :-)
* I am using the Json.NET framework, by the way.
Regarding your question, all you need is to find a nice serializer/deserializer framework (you can use Json.NET) in order to do what you are looking for (if I understand what you asked).
Ronen Ariely
[Personal Site] [Blog] [Facebook] -
XmError: 7000 ODI XML Transformation That Can Be Executed Within a BPEL Pro
Hi, I am Sudhakar.
I am using an XML file as source and target, and one CSV file at the source, following the demo given on the Oracle site. I am facing a problem loading data into the target XML file.
The source XML file contains: client_id, address, and other columns.
The CSV source file contains: client_id, new_address, row_id.
The target XML file (same as the source) contains: client_id, address, and other columns.
I am just joining the XML source and the file with a left outer join. The problem is that the data from the file is not received into the target XML file, and it does not show any errors; only the XML data is stored in the target XML file.
I have set up: SQL to SQL, SQL to SQL append, file to SQL.
After that I created a variable, opened SQL to SQL append and, in the detail tab, wrote: CREATE XMLFILE (name of the variable) FROM SCHEMA geo
Then I created a package, joined the variable and the interface, and executed it.
In the operator it shows errors: Sunopsis.jdbx.xml....
Please send the solution for how to do this.
thanks
user11366851

I tried according to your suggestion; it is not working.
Actually I am doing the second Oracle Data Integrator demo on the Oracle site:
http://www.oracle.com/technology/obe/fusion_middleware/odi/ODIxml_BPEL/ODIxml_BPEL.htm
In this demo I am facing a problem with the COALESCE function, which is used in the target datastore (address column).
Please tell me the steps from the start onwards.
thanks and regards
user11366851 -
Maximum XML file size that can parsed with c++ XML parser
Hi!
What is the maximum file size that can be parsed using the XML parser (version 1) for C++ on Linux?
I'm getting an error (error no. 231) when I try to parse an XML file of 3 MB on Red Hat Linux 6.1.
Regards,
anjana

Moving to the XML DB forum.
-
Maximum setting that can be done for pivot view in instanceconfig.xml.
Hi All,
Can anyone let me know whether there is a maximum setting that can be applied for pivot view in instanceconfig.xml?
Regards,
Apoorv

Hi,
Our instanceconfig has the following settings..
<PivotView>
<MaxCells> 4000000 </MaxCells>
<MaxVisibleColumns> 5000 </MaxVisibleColumns>
<MaxVisiblePages> 2500 </MaxVisiblePages>
<MaxVisibleRows> 75000 </MaxVisibleRows>
<MaxVisibleSections> 3000 </MaxVisibleSections>
<ResultRowLimit>20000</ResultRowLimit>
</PivotView>
<CubeMaxRecords> 1000000 </CubeMaxRecords>
<CubeMaxPopulatedCells> 1000000 </CubeMaxPopulatedCells>
But is there some kind of documentation or Oracle recommendation on maximum settings for pivot view?
We need it, as it seems even the existing settings are not quite sufficient in fulfilling user requirements.
I am calling an XML file that comes from an RTMP server, and I want to play a video. When I pause, it shows a cross-domain error. What can I do?
Please quote the exact error message, word-for-word, verbatim.
What is your operating system?
What version of Lightroom? -
Is there any script or batch program that can pick a xml file...
Hi,
Is there any script or batch program that can pick a .xml file from a folder and place it in a different folder in the same directory tree, periodically, without using an XI interface?
Thanks,
np

Hi Nadini,
Please refer to the link below for how to schedule a batch file:
[Schedule Batch File-How to?|http://www.tech-archive.net/Archive/WinXP/microsoft.public.windowsxp.general/2006-04/msg01349.html]
And please refer to the links below for how to write a batch file and for batch file commands:
http://www.wikihow.com/Write-a-Batch-File
http://www.aumha.org/a/batches.php
These two links were a click away in Google :).
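For the move itself, the whole batch job is a one-liner on Windows (move "C:\in\*.xml" "C:\out\" in a .bat file, scheduled with the Task Scheduler); the folder names here are assumptions. A POSIX shell equivalent that can be run from cron looks like this:

```shell
#!/bin/sh
# Hypothetical folders - replace with the real source and target paths.
SRC="/tmp/xml_inbox"
DST="/tmp/xml_done"

mkdir -p "$SRC" "$DST"

# Move every .xml file; the -e guard skips the unexpanded pattern
# when the source folder contains no .xml files at all.
for f in "$SRC"/*.xml; do
  [ -e "$f" ] || continue
  mv "$f" "$DST"/
done
```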
regards
Aashish Sinha -
Size limitation that can be passed to Java stored procedure
Hello!
I enjoy using Oracle8i these days, but I have some questions about Java stored procedures. I want to pass XML data to a Java stored procedure as an IN parameter, but I got some errors when the data size is large. Is there any limitation on the data size that can be passed to a Java stored procedure?
Would you please help me? This message is long, but would you please read it?
Contents
1. Outline: what I want to do, and the error message I got
2. About the data size boundary: the boundary size; I found that it depends on which calling sequence I use
3. The source code of the Java stored procedure
4. The source code of the Java code that calls the Java stored procedure
5. The call spec
6. Environment
1. Outline
I want to pass XML data to a Java stored procedure, but I got some errors when the data size is large. The error message I got is below.
[ Error messages and stack trace ]
java.sql.SQLException: ORA-01460: unimplemented or unreasonable conversion requested
        at oracle.jdbc.ttc7.TTIoer.processError(Compiled Code)
        at oracle.jdbc.ttc7.Oall7.receive(Compiled Code)
        at oracle.jdbc.ttc7.TTC7Protocol.doOall7(Compiled Code)
        at oracle.jdbc.ttc7.TTC7Protocol.parseExecuteFetch(TTC7Protocol.java:721)
        at oracle.jdbc.driver.OracleStatement.doExecuteOther(Compiled Code)
        at oracle.jdbc.driver.OracleStatement.doExecuteWithBatch(Compiled Code)
        at oracle.jdbc.driver.OracleStatement.doExecute(Compiled Code)
        at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(Compiled Code)
        at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:256)
        at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:273)
        at javaSp.javaSpTestMain.sample_test(javaSpTestMain.java:37)
        at javaSp.javaSpTestMain.main(javaSpTestMain.java:72)
2. About the data size boundary
I don't know exactly where the boundary is, but I found that it changes depending on whether I use prepareCall("CALL insertData(?)") or prepareCall("begin insertData(?); end;").
When I use prepareCall("CALL insertData(?)"):
data size 3931 bytes - no error
data size 4045 bytes - error
When I use prepareCall("begin insertData(?); end;"):
data size 32612 bytes - no error
data size 32692 bytes - error
3. The source code of the Java stored procedure
public class javaSpBytesSample {

    public static int insertData(byte[] xmlDataBytes) throws SQLException {
        int oraCode = 0;
        String xmlData = new String(xmlDataBytes);
        try {
            // Inside the server, use the default session connection
            Connection l_connection =
                DriverManager.getConnection("jdbc:default:connection:");
            // Parse the XML data
            dits_parser dp = new dits_parser(xmlData);
            // Get the number of records
            int datanum = dp.getElementNum("name");
            // Insert the data
            for (int i = 0; i < datanum; i++) {
                PreparedStatement l_stmt = l_connection.prepareStatement(
                    "INSERT INTO test (LPID, NAME, SEX) values (?, ?, ?)");
                l_stmt.setString(1, "LIPD_null");
                l_stmt.setString(2, dp.getElemntValueByTagName("name", i));
                l_stmt.setString(3, dp.getElemntValueByTagName("sex", i));
                l_stmt.execute();
                l_stmt.close(); // close the Statement
            }
            l_connection.commit(); // commit once, instead of a COMMIT statement per row
        } catch (SQLException e) {
            System.out.println(e.toString());
            return e.getErrorCode();
        }
        return oraCode;
    }
}
4. The source code of the Java code that calls the Java stored procedure
public static void sample_test(int num) {
    // Make test data
    Patient p = new Patient();
    byte[] xmlData = p.generateXMLData(num);
    try {
        // Load the Oracle JDBC driver
        DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
        Connection m_connection = DriverManager.getConnection(
            "jdbc:oracle:thin:@max:1521:test", "testuser", "testuser");
        CallableStatement l_stmt =
            // m_connection.prepareCall(" CALL insertData(?)");
            m_connection.prepareCall("begin insertData(?); end;");
        l_stmt.setBytes(1, xmlData);
        l_stmt.execute();
        l_stmt.close();
        System.out.println("SUCCESS to insert data");
    } catch (SQLException e) {
        System.out.println(e.toString());
        e.printStackTrace();
    }
}
5. The call spec
CREATE OR REPLACE PROCEDURE insertData (xmlData IN LONG RAW)
AS LANGUAGE JAVA
NAME 'javaSp.javaSpBytesSample.insertData(byte[])';
6. Environment
OS: Windows NT 4.0 SP3
RDBMS: Oracle 8i Enterprise Edition Release 8.1.5.0.0 for Windows NT
JDBC Driver: Oracle JDBC Drivers 8.1.5.0.0
JVM: Java 1.1.6_Borland (the test program that calls the Java stored procedure runs on this JVM)
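For what it's worth, the boundaries reported in section 2 line up with the classic bind-size limits for LONG RAW parameters (roughly 4 KB for a direct CALL, roughly 32 KB through an anonymous PL/SQL block). The usual workaround is to declare the parameter as a LOB, to which those limits do not apply. A sketch, reusing the procedure and class names from the call spec above:

```sql
-- Sketch: pass the XML as a BLOB instead of LONG RAW.
CREATE OR REPLACE PROCEDURE insertData (xmlData IN BLOB)
AS LANGUAGE JAVA
NAME 'javaSp.javaSpBytesSample.insertData(oracle.sql.BLOB)';
```

On the client side the data is then bound as a LOB locator (setBlob) rather than with setBytes.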
I am passing an array of objects from Java to a C file. The total size of the data I am sending is around 1 GB, which I have to load into shared memory after receiving it in the C file. I am working on HP-UX (64-bit). Everything works fine for around 400 MB of data. When I try to send around 500 MB, disk utilization hits 100%, as does memory utilization, and I get a "Not enough space" error when I try to access shared memory. I have allocated nearly 2.5 GB in my SHMMAX variable, I have around 45 GB of disk free, and the JVM heap size is at 2048 MB.

Where did you get the 400/500 number from? Is that the size of the file?
What do you do with the data? Are you doing nothing but copying it byte for byte into shared memory?
If yes then a simple test is to write a C application that does the same thing. If it has problems then it means you have an environment problem.
If no then you are probably increasing the size of the data by creating a structure to hold it. How much overhead does that add to the size of the data? -
I've been looking for a solution to this problem for some time now, and still no luck.
I'm going to point out first that I do NOT know the exact structure of the Tree Model. This is because the user can manipulate the tree as they wish. My problem comes when I want to save/load this tree structure, I thought that XML seems like the ideal choice.
My problem is that because I don't know the structure of the Tree Model, I'm not sure how to loop through every node in the Tree Model and add it to my XML Document.
People have suggested looping through the Tree Models children, except that wouldn't return the many other nodes that may be in the Tree Model.
If I haven't explained it very well, or you need any extra info, just reply and ask.
Hope someone out there can give me a hand, cheers!

You could get the root node as a DefaultMutableTreeNode:
TreeModel treeMdl = jTree.getModel();
DefaultMutableTreeNode root = (DefaultMutableTreeNode) treeMdl.getRoot();
and traverse the tree from there.
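Building on that, a minimal sketch of such a traversal that visits every node, however the user has shaped the tree, and serialises it to nested XML (the <node> element and its name attribute are made up for illustration):

```java
import javax.swing.tree.DefaultMutableTreeNode;

public class TreeToXml {

    // Recursively emit one <node> element per tree node, children nested inside.
    static String toXml(DefaultMutableTreeNode node) {
        StringBuilder sb = new StringBuilder();
        sb.append("<node name=\"").append(node.getUserObject()).append("\">");
        for (int i = 0; i < node.getChildCount(); i++) {
            sb.append(toXml((DefaultMutableTreeNode) node.getChildAt(i)));
        }
        sb.append("</node>");
        return sb.toString();
    }

    public static void main(String[] args) {
        DefaultMutableTreeNode root = new DefaultMutableTreeNode("root");
        DefaultMutableTreeNode folder = new DefaultMutableTreeNode("folder");
        root.add(folder);
        folder.add(new DefaultMutableTreeNode("leaf"));
        System.out.println(toXml(root));
        // -> <node name="root"><node name="folder"><node name="leaf"></node></node></node>
    }
}
```

A real implementation should also escape XML special characters in the user objects' strings.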
DefaultMutableTreeNode class has some related methods:
public Enumeration preorderEnumeration()
public Enumeration postorderEnumeration()
public Enumeration breadthFirstEnumeration()
public Enumeration depthFirstEnumeration()
public int getChildCount()
public TreeNode getChildAt(int index) -
How to structure an XML doc using schema?
Hi there,
I have an XML file with grouped <employeename> tags inside one <employee> tag, like:
<employee>
<employeename>John</employeename>
<employeename>Dave</employeename>
</employee>
I want to restructure the above XML file to be like this:
<employee>
<employeename>John</employeename>
</employee>
<employee>
<employeename>Dave</employeename>
</employee>
Can I do that in an XML schema (.xsd file)? If so, how?
Thanks,
Chandi

Hi Dave,
Thanks for your response.
I create this XML from .xsd files using a tool (MapForce by Altova, which allows me to map relational database table columns to elements in the .xsd file and generates Java code for me); the data is read from a database to compose this XML file.
Everything works fine with this XML file. I just want to change the structure of the XML file when it is composed; I was wondering if I could do that by changing the existing .xsd file?
It's not easy to use XSLT here: I'm dealing with over 600 hierarchical elements across over 15 .xsd files.
So, what are my options?
All I want to know is what I should change in the .xsd in order to compose this XML in the structure that I want.
Currently it groups all the employee names (<employeename> tags) under one <employee> tag; instead I want an individual <employee> tag around each and every <employeename> tag.
Is it possible at all?
Thanks,
Chandi
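For the record, the regrouping itself is a tiny transformation when done as a post-processing step rather than in the schema (an .xsd only validates a structure; it cannot transform one). A sketch in Python using only the standard library; the <employees> wrapper element name is an assumption:

```python
import xml.etree.ElementTree as ET

src = ("<employee>"
       "<employeename>John</employeename>"
       "<employeename>Dave</employeename>"
       "</employee>")

root = ET.fromstring(src)
out = ET.Element("employees")             # wrapper element name is made up
for name in root.findall("employeename"):
    emp = ET.SubElement(out, "employee")  # one <employee> per name
    emp.append(name)                      # move the <employeename> inside it

print(ET.tostring(out, encoding="unicode"))
```

The same mapping can also be written as a short XSLT stylesheet, but changing the .xsd alone will not reshape the output.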