Using import to reference a local schema
When I have an import in the WSDL of the form:
<import namespace="namespace" location="c:/eclipse3/eclipse/workspace/an.xsd"/>
I can see the types declared in the schema, but the BPEL won't compile, giving the error "WSDLException: faultCode=INVALID_WSDL: Invalid URL or
file: c:/eclipse3/eclipse/workspace/an.xsd: unknown protocol: c".
How do I reference a local XSD?
thanks
I don't know for sure whether you can, but judging by the error you're getting, the parser seems to be reading your 'URL' as beginning with the protocol 'c', in the same way an HTTP request would begin with 'http'; that's why you're getting the 'unknown protocol' error.
Perhaps this would work:
file:///C:/eclipse3/eclipse/workspace/an.xsd
Just a guess, mind. If not, I imagine you'll need to install a local server.
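For what it's worth, the file-URI form can be derived mechanically; a quick Python sketch (an illustration of the URL shape only, not of the BPEL tooling):

```python
from pathlib import PureWindowsPath

# A bare Windows path is not a URL: the "c:" prefix parses as an
# (unknown) protocol scheme, which is exactly the error reported.
# The corresponding file URI spells the scheme out explicitly.
path = PureWindowsPath("c:/eclipse3/eclipse/workspace/an.xsd")
uri = path.as_uri()
print(uri)  # file:///c:/eclipse3/eclipse/workspace/an.xsd
```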
Similar Messages
-
Importing cross references in R12.1 using an API
Hello All,
Is there any API to import cross references in Oracle R12.1? Basically, the API should insert records into the mtl_cross_references_b table.
Any pointers will be of great help.
Regards,
Sumi

Index cross references are designed to enable you to have index keywords that, when clicked, redirect the user to another keyword. For example, I use them for acronyms to redirect users to the longer version. Are your cross references to topics inside the project? If so, can't you use the index cross references by indexing in the same manner? If the cross reference you want is outside the project, using the index is not the way I'd suggest. Why not use a link in a topic to a URL, file, etc.? You may be able to do what you want through a redirect topic, but I haven't looked into this. Come back with some more details if I am off the mark. -
Different characters between DB schemas when I use import
dear all.
source: Centos 4.0 and Oracle Database 10g Enterprise Edition Release 10.2.0.1.0
target: RHEL AS 4.0 and Oracle Database 10g Release 10.2.0.1.0
When:
1. I used exp from the source, with these parameters:
exp USER owner= file= log= GRANTS=y INDEXES=Y CONSISTENT=Y ROWS=Y statistics=none
2. Then I used imp to migrate the first schema, with the following parameters:
imp USER file= log= fromuser= touser= ignore=Y buffer=536870912
Then, when I compare the two schemas, TOAD for Oracle v9 shows me the following difference in a package.
- There is a line of text:
||' Nº:'|| --- this is in the original package..... but
||' N?:'|| ----- this is in the migrated package; this is how it appears after the migration.
What parameter must I use (I think it could be NLS_LANG) with imp and exp in order to keep the same characters in all database objects?
Thanks a lot. I didn't have any errors in the log.
This is the import log:
Connected to: Oracle Database 10g Release 10.2.0.1.0 - Production
Export file created by EXPORT:V10.02.01 via conventional path
import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
export client uses US7ASCII character set (possible charset conversion)
tables list....
tables list....
tables list....
Import terminated successfully without warnings.
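The 'º' turning into '?' is the classic signature of a lossy pass through a 7-bit character set: the export client ran as US7ASCII (see the log line above), and 'º' has no ASCII equivalent. Setting NLS_LANG in the environment to a character set that contains the character (for example the database's own WE8ISO8859P1) before running exp and imp is the usual fix. A small Python sketch (illustration only, not Oracle code) reproduces the corruption:

```python
# MASCULINE ORDINAL INDICATOR ('º') does not exist in US7ASCII (plain
# ASCII), so a lossy conversion substitutes '?', exactly as observed
# in the migrated package.
original = "||' Nº:'||"
converted = original.encode("ascii", errors="replace").decode("ascii")
print(converted)  # ||' N?:'||
```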
thanks for your answers... -
"ORA-00902: invalid datatype" when trying to register local schema
I'm trying to register a local schema which uses/includes one or more global schemas. I validated the schemas using XMLSpy.
Registering the global schemas was successful, and was done under the user "Generic". For local schemas that use a global schema, a new user is created and the following script is executed.
SQL> --
SQL> -- Create user
SQL> --
SQL> create user &1 identified by &2
2 temporary TABLESPACE temp
3 default TABLESPACE users;
old 1: create user &1 identified by &2
new 1: create user Simple identified by Simple
User created.
Elapsed: 00:00:00.03
SQL> --
SQL> -- Grant privileges
SQL> --
SQL> grant create session to &1;
old 1: grant create session to &1
new 1: grant create session to Simple
Grant succeeded.
Elapsed: 00:00:00.01
SQL> grant resource to &1;
old 1: grant resource to &1
new 1: grant resource to Simple
Grant succeeded.
Elapsed: 00:00:00.01
SQL> -- Dunno if following is required for local schemas, but if I remove I can't FTP
SQL> grant dba, xdbadmin to &1;
old 1: grant dba, xdbadmin to &1
new 1: grant dba, xdbadmin to Simple
Grant succeeded.
Elapsed: 00:00:00.02
SQL>
SQL> --
SQL> -- Connect ;-)
SQL> --
SQL> connect &1/&2@SVF91;
Connected.
===========================================================
SQL> --
SQL> -- Register the schema in Oracle
SQL> --
SQL> BEGIN
2 DBMS_XMLSchema.registerSchema(
3 schemaURL => '&1',
4 schemaDoc => xdbURIType('&2').getClob(),
5 local => TRUE,
6 genTypes => TRUE,
7 genBean => FALSE,
8 genTables => TRUE);
9 END;
10 /
old 3: schemaURL => '&1',
new 3: schemaURL => 'http://http://ehvl091a:8080/home/Simple/xsd/SimplePOI.xsd',
old 4: schemaDoc => xdbURIType('&2').getClob(),
new 4: schemaDoc => xdbURIType('/home/Simple/xsd/SimplePOI.xsd').getClob(),
BEGIN
ERROR at line 1:
ORA-31084: error while creating table "SIMPLE"."SIMPLE_XPOI" for element
"XPOIS"
ORA-00902: invalid datatype
ORA-06512: at "XDB.DBMS_XMLSCHEMA_INT", line 20
ORA-06512: at "XDB.DBMS_XMLSCHEMA", line 31
ORA-06512: at line 2
The schema I use is:
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:POI="POI" xmlns:xdb="http://xmlns.oracle.com/xdb" targetNamespace="POI" elementFormDefault="qualified" attributeFormDefault="unqualified" xdb:storeVarrayAsTable="true">
<xs:include schemaLocation="http://ehvl091a:8080/home/Generic/xsd/POI.xsd"/>
<xs:element name="XPOIS" xdb:defaultTable="SIMPLE_XPOI">
<xs:annotation>
<xs:documentation>A collection of XPOI</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence>
<xs:element name="Simple" minOccurs="3" maxOccurs="unbounded" xdb:columnProps="NOT SUBSTITUTABLE">
<xs:complexType>
<xs:complexContent>
<xs:extension base="POI:POIType"/>
</xs:complexContent>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
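As a quick sanity check, the occurrence constraints in the schema can be listed with Python's standard library (the snippet embeds a trimmed copy of the schema above, for illustration only); the unusual minOccurs="3" on the nested element is worth ruling out alongside the columnProps annotation:

```python
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"
schema = ET.fromstring("""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="POI">
  <xs:element name="XPOIS">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Simple" minOccurs="3" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
""")
# List every declared element with its occurrence constraints.
decls = [(el.get("name"), el.get("minOccurs"), el.get("maxOccurs"))
         for el in schema.iter(XS + "element")]
for name, lo, hi in decls:
    print(name, lo, hi)
```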
I'm new at this XML (Oracle) stuff, so I could really use some pointers on how to tackle this. Thanks in advance.

The following example works for me...
SQL> connect / as sysdba
Connected.
SQL> --
SQL> drop user global cascade
2 /
User dropped.
SQL> drop user local cascade
2 /
User dropped.
SQL> create user global identified by global
2 /
User created.
SQL> grant connect, resource, alter session, create view, xdbadmin to global
2 /
Grant succeeded.
SQL> create user local identified by local
2 /
User created.
SQL> grant connect, resource, alter session, create view to local
2 /
Grant succeeded.
SQL> connect global/global
Connected.
SQL> --
SQL> var schemaURL varchar2(256)
SQL> var schemaPath varchar2(256)
SQL> --
SQL> begin
2 :schemaURL := 'poTypes.xsd';
3 :schemaPath := '/public/poTypes.xsd';
4 end;
5 /
PL/SQL procedure successfully completed.
SQL> declare
2 res boolean;
3 xmlSchema xmlType := xmlType(
4 '<!-- edited with XML Spy v4.0 U (http://www.xmlspy.com) by Mark (Drake) -->
5 <xs:schema xmlns="http://xmlns.oralce.com/demo/xdb/purchaseOrderTypes" targetNamespace="http://xmlns.oralce.com/demo/xdb/purchaseOrderT
ypes" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xdb="http://xmlns.oracle.com/xdb" version="1.0" xdb:storeVarrayAsTable="true">
6 <xs:complexType name="LineItemsType" xdb:SQLType="LINEITEMS_T">
7 <xs:sequence>
8 <xs:element name="LineItem" type="LineItemType" maxOccurs="unbounded" xdb:SQLName="LINEITEM" xdb:SQLCollType="LINEIT
EM_V"/>
9 </xs:sequence>
10 </xs:complexType>
11 <xs:complexType name="LineItemType" xdb:SQLType="LINEITEM_T">
12 <xs:sequence>
13 <xs:element name="Description" type="DescriptionType" xdb:SQLName="DESCRIPTION"/>
14 <xs:element name="Part" type="PartType" xdb:SQLName="PART"/>
15 </xs:sequence>
16 <xs:attribute name="ItemNumber" type="xs:integer" xdb:SQLName="ITEMNUMBER" xdb:SQLType="NUMBER"/>
17 </xs:complexType>
18 <xs:complexType name="PartType" xdb:SQLType="PART_T">
19 <xs:attribute name="Id" xdb:SQLName="PART_NUMBER" xdb:SQLType="VARCHAR2">
20 <xs:simpleType>
21 <xs:restriction base="xs:string">
22 <xs:minLength value="10"/>
23 <xs:maxLength value="14"/>
24 </xs:restriction>
25 </xs:simpleType>
26 </xs:attribute>
27 <xs:attribute name="Quantity" type="moneyType" xdb:SQLName="QUANTITY"/>
28 <xs:attribute name="UnitPrice" type="quantityType" xdb:SQLName="UNITPRICE"/>
29 </xs:complexType>
30 <xs:simpleType name="ReferenceType">
31 <xs:restriction base="xs:string">
32 <xs:minLength value="18"/>
33 <xs:maxLength value="30"/>
34 </xs:restriction>
35 </xs:simpleType>
36 <xs:complexType name="ActionsType" xdb:SQLType="ACTIONS_T">
37 <xs:sequence>
38 <xs:element name="Action" maxOccurs="4" xdb:SQLName="ACTION" xdb:SQLCollType="ACTION_V">
39 <xs:complexType xdb:SQLType="ACTION_T">
40 <xs:sequence>
41 <xs:element name="User" type="UserType" xdb:SQLName="ACTIONED_BY"/>
42 <xs:element name="Date" type="DateType" minOccurs="0" xdb:SQLName="DATE_ACTIONED"/>
43 </xs:sequence>
44 </xs:complexType>
45 </xs:element>
46 </xs:sequence>
47 </xs:complexType>
48 <xs:complexType name="RejectionType" xdb:SQLType="REJECTION_T">
49 <xs:all>
50 <xs:element name="User" type="UserType" minOccurs="0" xdb:SQLName="REJECTED_BY"/>
51 <xs:element name="Date" type="DateType" minOccurs="0" xdb:SQLName="DATE_REJECTED"/>
52 <xs:element name="Comments" type="CommentsType" minOccurs="0" xdb:SQLName="REASON_REJECTED"/>
53 </xs:all>
54 </xs:complexType>
55 <xs:complexType name="ShippingInstructionsType" xdb:SQLType="SHIPPING_INSTRUCTIONS_T">
56 <xs:sequence>
57 <xs:element name="name" type="NameType" minOccurs="0" xdb:SQLName="SHIP_TO_NAME"/>
58 <xs:element name="address" type="AddressType" minOccurs="0" xdb:SQLName="SHIP_TO_ADDRESS"/>
59 <xs:element name="telephone" type="TelephoneType" minOccurs="0" xdb:SQLName="SHIP_TO_PHONE"/>
60 </xs:sequence>
61 </xs:complexType>
62 <xs:simpleType name="moneyType">
63 <xs:restriction base="xs:decimal">
64 <xs:fractionDigits value="2"/>
65 <xs:totalDigits value="12"/>
66 </xs:restriction>
67 </xs:simpleType>
68 <xs:simpleType name="quantityType">
69 <xs:restriction base="xs:decimal">
70 <xs:fractionDigits value="4"/>
71 <xs:totalDigits value="8"/>
72 </xs:restriction>
73 </xs:simpleType>
74 <xs:simpleType name="UserType">
75 <xs:restriction base="xs:string">
76 <xs:minLength value="1"/>
77 <xs:maxLength value="10"/>
78 </xs:restriction>
79 </xs:simpleType>
80 <xs:simpleType name="RequestorType">
81 <xs:restriction base="xs:string">
82 <xs:minLength value="0"/>
83 <xs:maxLength value="128"/>
84 </xs:restriction>
85 </xs:simpleType>
86 <xs:simpleType name="CostCenterType">
87 <xs:restriction base="xs:string">
88 <xs:minLength value="1"/>
89 <xs:maxLength value="4"/>
90 </xs:restriction>
91 </xs:simpleType>
92 <xs:simpleType name="VendorType">
93 <xs:restriction base="xs:string">
94 <xs:minLength value="0"/>
95 <xs:maxLength value="20"/>
96 </xs:restriction>
97 </xs:simpleType>
98 <xs:simpleType name="PurchaseOrderNumberType">
99 <xs:restriction base="xs:integer"/>
100 </xs:simpleType>
101 <xs:simpleType name="SpecialInstructionsType">
102 <xs:restriction base="xs:string">
103 <xs:minLength value="0"/>
104 <xs:maxLength value="2048"/>
105 </xs:restriction>
106 </xs:simpleType>
107 <xs:simpleType name="NameType">
108 <xs:restriction base="xs:string">
109 <xs:minLength value="1"/>
110 <xs:maxLength value="20"/>
111 </xs:restriction>
112 </xs:simpleType>
113 <xs:simpleType name="AddressType">
114 <xs:restriction base="xs:string">
115 <xs:minLength value="1"/>
116 <xs:maxLength value="256"/>
117 </xs:restriction>
118 </xs:simpleType>
119 <xs:simpleType name="TelephoneType">
120 <xs:restriction base="xs:string">
121 <xs:minLength value="1"/>
122 <xs:maxLength value="24"/>
123 </xs:restriction>
124 </xs:simpleType>
125 <xs:simpleType name="DateType">
126 <xs:restriction base="xs:date"/>
127 </xs:simpleType>
128 <xs:simpleType name="CommentsType">
129 <xs:restriction base="xs:string">
130 <xs:minLength value="1"/>
131 <xs:maxLength value="2048"/>
132 </xs:restriction>
133 </xs:simpleType>
134 <xs:simpleType name="DescriptionType">
135 <xs:restriction base="xs:string">
136 <xs:minLength value="1"/>
137 <xs:maxLength value="256"/>
138 </xs:restriction>
139 </xs:simpleType>
140 </xs:schema>');
141 begin
142 if (dbms_xdb.existsResource(:schemaPath)) then
143 dbms_xdb.deleteResource(:schemaPath);
144 end if;
145 res := dbms_xdb.createResource(:schemaPath,xmlSchema);
146 end;
147 /
PL/SQL procedure successfully completed.
SQL> begin
2 dbms_xmlschema.registerSchema
3 (
4 :schemaURL,
5 xdbURIType(:schemaPath).getClob(),
6 FALSE,TRUE,FALSE,FALSE
7 );
8 end;
9 /
PL/SQL procedure successfully completed.
SQL> connect local/local
Connected.
SQL> --
SQL> var schemaURL varchar2(256)
SQL> var schemaPath varchar2(256)
SQL> --
SQL> begin
2 :schemaURL := 'po.xsd';
3 :schemaPath := '/public/po.xsd';
4 end;
5 /
PL/SQL procedure successfully completed.
SQL> declare
2 res boolean;
3 xmlSchema xmlType := xmlType(
4 '<xs:schema xmlns="http://xmlns.oralce.com/demo/xdb/purchaseOrder" targetNamespace="http://xmlns.oralce.com/demo/xdb/purchaseOrder" xm
lns:types="http://xmlns.oralce.com/demo/xdb/purchaseOrderTypes" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xdb="http://xmlns.oracle.c
om/xdb" version="1.0" xdb:storeVarrayAsTable="true">
5 <xs:import namespace="http://xmlns.oralce.com/demo/xdb/purchaseOrderTypes" schemaLocation="poTypes.xsd"/>
6 <xs:element name="PurchaseOrder" type="PurchaseOrderType" xdb:defaultTable="PURCHASEORDER"/>
7 <xs:complexType name="PurchaseOrderType" xdb:SQLType="PURCHASEORDER_T">
8 <xs:sequence>
9 <xs:element name="Reference" type="types:ReferenceType" xdb:SQLName="REFERENCE"/>
10 <xs:element name="Actions" type="types:ActionsType" xdb:SQLName="ACTIONS"/>
11 <xs:element name="Reject" type="types:RejectionType" minOccurs="0" xdb:SQLName="REJECTION"/>
12 <xs:element name="Requestor" type="types:RequestorType" xdb:SQLName="REQUESTOR"/>
13 <xs:element name="User" type="types:UserType" xdb:SQLName="USERID"/>
14 <xs:element name="CostCenter" type="types:CostCenterType" xdb:SQLName="COST_CENTER"/>
15 <xs:element name="ShippingInstructions" type="types:ShippingInstructionsType" xdb:SQLName="SHIPPING_INSTRUCTIONS"/>
16 <xs:element name="SpecialInstructions" type="types:SpecialInstructionsType" xdb:SQLName="SPECIAL_INSTRUCTIONS"/>
17 <xs:element name="LineItems" type="types:LineItemsType" xdb:SQLName="LINEITEMS"/>
18 </xs:sequence>
19 </xs:complexType>
20 </xs:schema>');
21 begin
22 if (dbms_xdb.existsResource(:schemaPath)) then
23 dbms_xdb.deleteResource(:schemaPath);
24 end if;
25 res := dbms_xdb.createResource(:schemaPath,xmlSchema);
26 end;
27 /
PL/SQL procedure successfully completed.
SQL> begin
2 dbms_xmlschema.registerSchema
3 (
4 :schemaURL,
5 xdbURIType(:schemaPath).getClob(),
6 TRUE,TRUE,FALSE,TRUE
7 );
8 end;
9 /
PL/SQL procedure successfully completed.
SQL> desc PURCHASEORDER
Name Null? Type
TABLE of SYS.XMLTYPE(XMLSchema "po.xsd" Element "PurchaseOrder") STORAGE Object-relational TYPE "PURCHASEORDER_T"
SQL> desc PURCHASEORDER_T
PURCHASEORDER_T is NOT FINAL
Name Null? Type
SYS_XDBPD$ XDB.XDB$RAW_LIST_T
REFERENCE VARCHAR2(30 CHAR)
ACTIONS GLOBAL.ACTIONS_T
REJECTION GLOBAL.REJECTION_T
REQUESTOR VARCHAR2(128 CHAR)
USERID VARCHAR2(10 CHAR)
COST_CENTER VARCHAR2(4 CHAR)
SHIPPING_INSTRUCTIONS GLOBAL.SHIPPING_INSTRUCTIONS
_T
SPECIAL_INSTRUCTIONS VARCHAR2(2048 CHAR)
LINEITEMS GLOBAL.LINEITEMS_T
SQL>

I'm not sure what you were trying here in your XML schema
xdb:columnProps="NOT SUBSTITUTABLE">
or whether or not this annotation is the source of the problem -
Using custom backing map for distributed scheme
Hello,
We are evaluating the use of a distributed scheme for storing a large amount of data. We analyzed memory usage for a distributed-scheme backed by a local-scheme (with the binary unit calculator) and got interesting results.
On a single JVM:
110M - byte arrays - our data in binary form
20M - com.tangosol.util.Binary objects - overhead of binary objects to wrap byte arrays
27M - com.tangosol.net.cache.LocalCache$Entry objects - local cache entry overhead
When we tried our own implementation of java.util.Map as the backing map (using some clever tricks to get a smaller size at the cost of CPU processing):
On a single JVM:
36M - byte arrays - our application data
So it looks like we are going to stick with our implementation.
My question:
Are there any known drawbacks to using an implementation of java.util.Map as the backing map for a distributed-scheme?
Our implementation is thread safe of course (we have already tested it with the thread-count option of distributed-scheme).

Hi Alexey,
alexey.ragozin wrote:
But I already have this table in the first map (String -> Binary); in a former project we used a handmade Map implementation with an internKey() method to avoid duplication of data.
I'm looking for a way to reuse our old and proven techniques with Coherence caches.
Thank you,
Alexey

Unfortunately there is no way the extractor could get hold of the already existing key reference in the reverse index. I agree that the extracted value reference in the reverse index entry (the key from that entry) should be reused by Coherence as the forward index value when a reverse index entry for the same extracted value exists, but apparently it is not.
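For readers unfamiliar with the internKey() trick mentioned above: the idea is to canonicalize equal keys to one shared instance so duplicate copies of key data can be garbage-collected. A minimal Python sketch of the concept (not the Coherence API; the class and method names are made up for illustration):

```python
class InterningMap(dict):
    """A dict that canonicalizes equal keys to a single shared instance,
    so duplicate copies of key data become garbage-collectable."""

    def __init__(self):
        super().__init__()
        self._canon = {}  # key value -> first-seen instance of that key

    def intern_key(self, key):
        # Return the canonical instance for any key equal to `key`.
        return self._canon.setdefault(key, key)

    def __setitem__(self, key, value):
        super().__setitem__(self.intern_key(key), value)

m = InterningMap()
k1 = "user:42"
k2 = "".join(["user:", "42"])  # equal content, distinct instance
m[k1] = b"payload-a"
m[k2] = b"payload-b"           # stored under the same canonical key
print(len(m))                  # 1
```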
Try to submit an enhancement request for this (and please share the ticket number for it so we can also look for it in the release notes).
Best regards,
Robert -
Configuration Question on local-scheme and high-units
I run my Tangosol cluster with 12 nodes on 3 machines (each machine with 4 cache server nodes). I have 2 important configuration questions. I'd appreciate it if you could answer them ASAP.
- My requirement is that I need only 10000 objects in the cluster, so that the resources can be freed up when other caches are loaded. I configured <high-units> to be 10000, but I am not sure if this is per node or for the whole cluster. I see that the total number of objects in the cluster goes up to 15800 even though I configured 10K as the high-units (and there is some free memory on the servers in this case). Can you please explain this?
- Is there an easy way to know the memory stats of the cluster? The memory command on the cluster doesn't seem to be giving me the correct stats. Is there any other utility that I can use?
I started all the nodes with the same configuration as below. Can you please answer the above questions ASAP?
<distributed-scheme>
<scheme-name>TestScheme</scheme-name>
<service-name>DistributedCache</service-name>
<backing-map-scheme>
<local-scheme>
<high-units>10000</high-units>
<eviction-policy>LRU</eviction-policy>
<expiry-delay>1d</expiry-delay>
<flush-delay>1h</flush-delay>
</local-scheme>
</backing-map-scheme>
</distributed-scheme>
Thanks
Ravi

I run my Tangosol cluster with 12 nodes on 3 machines (each machine with 4 cache server nodes). I have 2 important configuration questions. Appreciate if you can answer them ASAP.
- My requirement is that I need only 10000 objects to be in cluster so that the resources can be freed upon when other caches are loaded. I configured the <high-units> to be 10000 but I am not sure if this is per node or for the whole cluster. I see that the total number of objects in the cluster goes till 15800 objects even when I configured for the 10K as high-units (there is some free memory on servers in this case). Can you please explain this?
It is per backing map, which is practically per node in case of distributed caches.
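Since the limit is per backing map, the arithmetic for the cluster as a whole looks roughly like this (a sketch using the numbers from the question; it ignores backup copies and uneven partition distribution):

```python
nodes = 12                   # 3 machines x 4 cache server nodes
high_units_per_node = 10000  # <high-units> as configured

# Worst case before any single node starts evicting: every node may
# hold up to its own limit, so the 15800 observed entries are well
# within bounds.
cluster_capacity = nodes * high_units_per_node
print(cluster_capacity)      # 120000

# To cap the whole cluster at ~10000 entries, each node's limit must
# be divided by the node count.
per_node_limit = 10000 // nodes
print(per_node_limit)        # 833
```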
- Is there an easy way to know the memory stats of the cluster? The memory command on the cluster doesn't seem to be giving me the correct stats. Is there any other utility that I can use?
Yes, you can get this and quite a number of other information via JMX. Please check this wiki page for more information.
I started all the nodes with the same configuration as below. Can you please answer the above questions ASAP?
<distributed-scheme>
<scheme-name>TestScheme</scheme-name>
<service-name>DistributedCache</service-name>
<backing-map-scheme>
<local-scheme>
<high-units>10000</high-units>
<eviction-policy>LRU</eviction-policy>
<expiry-delay>1d</expiry-delay>
<flush-delay>1h</flush-delay>
</local-scheme>
</backing-map-scheme>
</distributed-scheme>
Thanks
Ravi

Best regards,
Robert -
Copy all tables and data from remote schema to local schema
Hi,
I am using Oracle 10g Enterprise edition as a database and Windows 7 as the operating system.
My requirement is to copy all the tables and data from the remote machine's database to my local machine.
I created a DB link between my local schema and the remote machine's schema; the database link was created successfully. The error I get when I try to import all the tables from the remote database is mentioned below.
I created a directory and granted all the privileges to read and write on the directory.
Directory name : DUMP
Local Schema name / PASSWORD : PRODUCTION / PRODUCTION
Remote Schema name / Password : portal / m3892!2
When I run this script from the command prompt:
impdp PRODUCTION/PRODUCTION DIRECTORY=DUMP LOGFILE=LOCALPROD_MERUPROD.log network_link=MERU_DEV_LOCAL_PROD schemas=portal remap_schema=portal:PRODUCTION TABLE_EXISTS_ACTION=REPLACE
It gives me the following error; please suggest what the mistake is, as I couldn't trace it:
ORA-31631: privileges are required
ORA-39149: cannot link privileged user to non-privileged user
Thanks
Sudhir

Thanks, I gave the grant permission for the "portal" remote user and no longer get the error. The issue I am facing now is that I need to copy the entire production schema to my local system.
I am currently using the script below to copy. It is not working: there is no error message, but I still don't see any tables in my local system after executing it. What might be the reason?
impdp PRODUCTION/PRODUCTION DIRECTORY=DUMP LOGFILE=LOCALPROD_MERUPROD.log network_link=MERU_DEV_LOCAL_PROD schemas=portal remap_schema=portal:PRODUCTION TABLE_EXISTS_ACTION=REPLACE
Or please suggest some other way to copy all the tables, procedures and functions from the remote database to my local database using an impdp script.
Thanks
Sudhir -
How to use import parameter to be instead of SQL where sub-sentence ?
I wrote an RFC to read data from an SAP table. To fetch data flexibly, I want to use an import parameter xx instead of a hard-coded WHERE clause in the SQL statement.
For example, "SELECT * FROM T WHERE XXX", where "XXX" is an importing parameter.
How can I use it?
Thanks a lot.
Frank.

FUNCTION ZRFC_04.
*"*"Local Interface:
*" IMPORTING
*" VALUE(TARGETTABLE) LIKE MAKT-MAKTX
*" VALUE(TWHERE) LIKE MAKT-MAKTX
*" EXPORTING
*" VALUE(ZRETURN) LIKE MAKT-MAKTX
*" TABLES
*" TMP_TEST1 STRUCTURE ZTEST1
DATA:
TRANSACTION_ID LIKE ARFCTID,
V_VAILD(1) TYPE C,
scond(80) TYPE c.
V_VAILD = 'X'.
GET PARAMETER twhere fields scond.
The error " 'LATE FIELDS' expected, not 'TWHERE FIELDS' " is generated. -
How can I import tables from a different schema into the existing relational model, so that these tables are added to the existing relational/logical model?
Note: I already have the relational/logical model ready from one schema, and I need to add a few more tables to this relational/logical model.
Can I import the same way as I did previously? But even if I do, how can I add the tables to the model, given that the logical model has already been engineered?
Please help.
thanks

Hi,
Before you start, you should probably take a backup copy of your design (the .dmd file and associated folder), in case the update does not work out as you had hoped.
You need to use Import > Data Dictionary again, to start the Data Dictionary Import Wizard.
In step 1 use a suitable database connection that can access the relevant table definitions.
In step 2 select the schema (or schemas) to import. The "Import to" field in the lower left part of the main panel allows you to select which existing Relational Model to import into (or to specify that a new Relational Model is to be created).
In step 3 select the tables to import. (Note that if there are any Foreign Key constraints between the new tables and any tables you had previously imported, you should also include the previous tables; otherwise the Foreign Key constraints will not be imported.)
After the import itself has completed, the "Compare Models" dialog is displayed. This shows the differences between the model being imported and the previous state of the model, and allows you to select which changes are to be applied.
Just selecting the Merge button should apply all the additions and changes in the new import.
Having updated your Relational Model, you can then update your Logical Model. To do this you repeat the "Engineer to Logical Model". This displays the "Engineer to Logical Model" dialog, which shows the changes which will be applied to the Logical Model, and allows you to select which changes are to be applied.
Just selecting the Engineer button should apply all the additions and changes.
I hope this helps you achieve what you want.
David -
Dump - Access using NULL object reference is not possible!!!
Hi,
I'm using the BCS class for sending email in HTM format, so I use the code below, which is working:
DATA: gr_document TYPE REF TO cl_document_bcs,
gr_document = cl_document_bcs=>create_document(
i_type = 'HTM'
i_text = t_html
i_importance = '5'
i_subject = gc_subject ).
The next task is to send the image, so I'm creating another object of the same class; code below:
*Image from MIME
DATA: o_mr_api TYPE REF TO if_mr_api.
DATA is_folder TYPE boole_d.
DATA l_img1 TYPE xstring.
DATA l_img2 TYPE xstring.
DATA l_loio TYPE skwf_io.
DATA: lo_document TYPE REF TO cl_document_bcs.
IF o_mr_api IS INITIAL.
o_mr_api = cl_mime_repository_api=>if_mr_api~get_api( ).
ENDIF.
CALL METHOD o_mr_api->get
EXPORTING
i_url = '/SAP/PUBLIC/ZDEMO/tick.png'
IMPORTING
e_is_folder = is_folder
e_content = l_img1
e_loio = l_loio
EXCEPTIONS
parameter_missing = 1
error_occured = 2
not_found = 3
permission_failure = 4
OTHERS = 5.
CALL METHOD o_mr_api->get
EXPORTING
i_url = '/SAP/PUBLIC/ZDEMO/Delete.png'
IMPORTING
e_is_folder = is_folder
e_content = l_img2
e_loio = l_loio
EXCEPTIONS
parameter_missing = 1
error_occured = 2
not_found = 3
permission_failure = 4
OTHERS = 5.
*Convert XSTRING to ITAB
DATA :lt_hex1 TYPE solix_tab,
lt_hex2 TYPE solix_tab,
ls_hex LIKE LINE OF lt_hex1,
lv_img1_size TYPE sood-objlen,
lv_img2_size TYPE sood-objlen.
CLEAR : lt_hex1, lt_hex2, ls_hex, lv_img1_size, lv_img2_size.
WHILE l_img1 IS NOT INITIAL.
ls_hex-line = l_img1.
APPEND ls_hex TO lt_hex1.
SHIFT l_img1 LEFT BY 255 PLACES IN BYTE MODE.
ENDWHILE.
WHILE l_img2 IS NOT INITIAL.
ls_hex-line = l_img2.
APPEND ls_hex TO lt_hex2.
SHIFT l_img2 LEFT BY 255 PLACES IN BYTE MODE.
ENDWHILE.
*Findthe Size of the image
DESCRIBE TABLE lt_hex1 LINES lv_img1_size.
DESCRIBE TABLE lt_hex2 LINES lv_img2_size.
lv_img1_size = lv_img1_size * 255.
lv_img2_size = lv_img2_size * 255.
*Attach Images
clear: lo_document.
lo_document->add_attachment(
EXPORTING
i_attachment_type = 'png' " Document Class for Attachment
i_attachment_subject = 'img1' " Attachment Title
i_attachment_size = lv_img1_size " Size of Document Content
i_att_content_hex = lt_hex1 " Content (Binary)
lo_document->add_attachment(
EXPORTING
i_attachment_type = 'png' " Document Class for Attachment
i_attachment_subject = 'img2' " Attachment Title
i_attachment_size = lv_img2_size " Size of Document Content
i_att_content_hex = lt_hex2 " Content (Binary)
but it throws the dump "Access using NULL object reference is not possible" when it tries to access the method add_attachment...
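The symptom is consistent with the CLEAR on lo_document: the reference is set to null and never bound to an instance before add_attachment is invoked. A Python analogue of the same failure (illustration only, not ABAP or the BCS API):

```python
class Document:
    # Stand-in for cl_document_bcs; the method name mirrors the ABAP call.
    def add_attachment(self, subject: str) -> None:
        print(f"attached {subject}")

lo_document = None  # CLEAR: the reference no longer points at any object
try:
    lo_document.add_attachment("img1")  # dereferencing a null reference
except AttributeError as exc:
    # Python's counterpart of "Access using NULL object reference"
    print("dump:", exc)
```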
Thanks,
Siva

Yes, there is a COMMIT WORK after that. Below is the code:
gr_document = cl_document_bcs=>create_document(
i_type = 'HTM'
i_text = t_html
i_importance = '5'
i_subject = gc_subject ).
*Image from MIME
DATA: o_mr_api TYPE REF TO if_mr_api.
DATA is_folder TYPE boole_d.
DATA l_img1 TYPE xstring.
DATA l_img2 TYPE xstring.
DATA l_loio TYPE skwf_io.
DATA: lo_document TYPE REF TO cl_document_bcs.
IF o_mr_api IS INITIAL.
o_mr_api = cl_mime_repository_api=>if_mr_api~get_api( ).
ENDIF.
CALL METHOD o_mr_api->get
EXPORTING
i_url = '/SAP/PUBLIC/ZDEMO/tick.png'
IMPORTING
e_is_folder = is_folder
e_content = l_img1
e_loio = l_loio
EXCEPTIONS
parameter_missing = 1
error_occured = 2
not_found = 3
permission_failure = 4
OTHERS = 5.
CALL METHOD o_mr_api->get
EXPORTING
i_url = '/SAP/PUBLIC/ZDEMO/Delete.png'
IMPORTING
e_is_folder = is_folder
e_content = l_img2
e_loio = l_loio
EXCEPTIONS
parameter_missing = 1
error_occured = 2
not_found = 3
permission_failure = 4
OTHERS = 5.
*Convert XSTRING to ITAB
DATA :lt_hex1 TYPE solix_tab,
lt_hex2 TYPE solix_tab,
ls_hex LIKE LINE OF lt_hex1,
lv_img1_size TYPE sood-objlen,
lv_img2_size TYPE sood-objlen.
CLEAR : lt_hex1, lt_hex2, ls_hex, lv_img1_size, lv_img2_size.
WHILE l_img1 IS NOT INITIAL.
ls_hex-line = l_img1.
APPEND ls_hex TO lt_hex1.
SHIFT l_img1 LEFT BY 255 PLACES IN BYTE MODE.
ENDWHILE.
WHILE l_img2 IS NOT INITIAL.
ls_hex-line = l_img2.
APPEND ls_hex TO lt_hex2.
SHIFT l_img2 LEFT BY 255 PLACES IN BYTE MODE.
ENDWHILE.
*Findthe Size of the image
DESCRIBE TABLE lt_hex1 LINES lv_img1_size.
DESCRIBE TABLE lt_hex2 LINES lv_img2_size.
lv_img1_size = lv_img1_size * 255.
lv_img2_size = lv_img2_size * 255.
*Attach Images
create object lo_document type cl_document_bcs.
lo_document->add_attachment(
EXPORTING
i_attachment_type = 'png' " Document Class for Attachment
i_attachment_subject = 'img1' " Attachment Title
i_attachment_size = lv_img1_size " Size of Document Content
i_att_content_hex = lt_hex1 " Content (Binary)
lo_document->add_attachment(
EXPORTING
i_attachment_type = 'png' " Document Class for Attachment
i_attachment_subject = 'img2' " Attachment Title
i_attachment_size = lv_img2_size " Size of Document Content
i_att_content_hex = lt_hex2 " Content (Binary)
"Add document to send request
CALL METHOD gr_send_request->set_document( gr_document ).
TRY.
CALL METHOD gr_send_request->SET_SEND_IMMEDIATELY
EXPORTING
I_SEND_IMMEDIATELY = 'X'.
* CATCH CX_SEND_REQ_BCS INTO BCS_EXCEPTION .
**Catch exception here
ENDTRY.
DATA: LO_SENDER TYPE REF TO IF_SENDER_BCS VALUE IS INITIAL.
TRY.
LO_SENDER = CL_SAPUSER_BCS=>CREATE( SY-UNAME ). "sender is the logged in user
* Set sender to send request
gr_send_request->SET_SENDER(
EXPORTING
I_SENDER = LO_SENDER ).
* CATCH CX_ADDRESS_BCS.
****Catch exception here
ENDTRY.
"Send email
CALL METHOD gr_send_request->send(
EXPORTING
i_with_error_screen = 'X'
RECEIVING
result = gv_sent_to_all ).
IF gv_sent_to_all = 'X'.
WRITE 'Email sent!'.
ENDIF.
"Commit to send email
COMMIT WORK.
"Exception handling
CATCH cx_bcs INTO gr_bcs_exception.
WRITE:
'Error!',
'Error type:',
gr_bcs_exception->error_type.
ENDTRY. -
Error message when importing data using Import and export wizard
I am getting the below error message when importing data using the Import and Export Wizard:
Error 0xc0202009: Data Flow Task 1: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.
<dir>
<dir>
Messages
Error 0xc0202009: Data Flow Task 1: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.
An OLE DB record is available. Source: "Microsoft SQL Server Native Client 11.0" Hresult: 0x80004005 Description: "Could not allocate a new page for database REPORTING' because of insufficient disk space in filegroup 'PRIMARY'.
Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.".
(SQL Server Import and Export Wizard)
Error 0xc0209029: Data Flow Task 1: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "Destination - Buyer_.Inputs[Destination Input]" failed because error code 0xC020907B occurred, and the error row disposition on "Destination
- Buyer_First_Qtr.Inputs[Destination Input]" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.
(SQL Server Import and Export Wizard)
Error 0xc0047022: Data Flow Task 1: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "Destination - Buyer" (28) failed with error code 0xC0209029 while processing input "Destination Input" (41). The
identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information
about the failure.
(SQL Server Import and Export Wizard)
Error 0xc02020c4: Data Flow Task 1: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.
(SQL Server Import and Export Wizard)
Error 0xc0047038: Data Flow Task 1: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on Source - Buyer_First_Qtr returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput().
The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
(SQL Server Import and Export Wizard)
Smash126
Hi Smash126,
Based on the error message "Could not allocate a new page for database 'REPORTING' because of insufficient disk space in filegroup 'PRIMARY'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup", we know that the PRIMARY filegroup of the REPORTING database has run out of disk space.
To fix this issue, add a new file to the PRIMARY filegroup on the database's Files page, or enable Autogrowth on the existing files in the filegroup to provide the necessary space.
The following document about Add Data or Log Files to a Database is for your reference:
http://msdn.microsoft.com/en-us/library/ms189253.aspx
If there are any other questions, please feel free to ask.
Thanks,
Katherine Xiong
TechNet Community Support -
The interface you are trying to use is related to a logical schema that no
"The interface you are trying to use is related to a logical schema that no longer exists"
I'm facing this error when importing a project in Designer connected to a new work repository.
I have an TEST Data Integrator environment and now I need to move objects already created to a new DEV environment. I've created a new master and work repository with distinct ID's according note https://metalink.oracle.com/metalink/plsql/f?p=130:14:4335668333625114484::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,423815.1,1,1,1,helvetica
Any ideas?
Thanks
Hi,
Nothing occurs. My steps:
1) Export Master Repository from 1st environment (topoloy -> export master repository)
2) Create Master Repository on 2nd environment (through repcreate.bat)
3) Export Topology (1st environment)
4) Export all projects (1st environment)
5) Import Topology (2nd environment) ----> com.sunopsis.core.n: This import action has been cancelled because it could damage your repository (problem with the identifier sequences)
Is this sequence of operations correct?
Thanks -
Export and Import of APPS and Applsys Schemas in R12
Hi ,
Can you please let me know if exporting and importing the APPS and APPLSYS schemas in R12 will cause any issues?
Thanks and Regards,
Jagadeesha.
Can you please let me know if exporting and importing the APPS and APPLSYS schemas in R12 will cause any issues?
If you mean exporting/importing only those two schemas between different instances, then this is not supported (due to other schema dependencies); the only supported way to export those two schemas is by exporting/importing the complete database.
Export/Import Process for Oracle E-Business Suite Release 12 using 10gR2 [ID 454616.1]
Export/import process for R12 using 11gR1 or 11gR2 [ID 741818.1]
Thanks,
Hussein -
What is the difference when using the import statement with the static keyword?
10. package com.sun.scjp;
11. public class Geodetics {
12. public static final double DIAMETER = 12756.32; // kilometers
13. }
Which two correctly access the DIAMETER member of the Geodetics class? (Choose two.)
A. import com.sun.scjp.Geodetics;
public class TerraCarta {
public double halfway()
{ return Geodetics.DIAMETER/2.0; }
B. import static com.sun.scjp.Geodetics;
public class TerraCarta{
public double halfway() { return DIAMETER/2.0; } }
C. import static com.sun.scjp.Geodetics.*;
public class TerraCarta {
public double halfway() { return DIAMETER/2.0; } }
D. package com.sun.scjp;
public class TerraCarta {
public double halfway() { return DIAMETER/2.0; } }
The correct answers are A and C. I understood why A is an answer, but can anyone explain the package import using the static keyword? The above example can be used as a reference.
Thanks for your consideration.
amtidumpti wrote:
10. package com.sun.scjp;
11. public class Geodetics {
12. public static final double DIAMETER = 12756.32; // kilometers
13. }
Which two correctly access the DIAMETER member of the Geodetics class? (Choose two.)
A. import com.sun.scjp.Geodetics;
public class TerraCarta {
public double halfway()
{ return Geodetics.DIAMETER/2.0; }
B. import static com.sun.scjp.Geodetics;
public class TerraCarta{
public double halfway() { return DIAMETER/2.0; } }
C. import static com.sun.scjp.Geodetics.*;
public class TerraCarta {
public double halfway() { return DIAMETER/2.0; } }
D. package com.sun.scjp;
public class TerraCarta {
public double halfway() { return DIAMETER/2.0; } }
The correct answers are A and C. I understood why A is an answer, but can anyone explain the package import using the static keyword? The above example can be used as a reference.
Thanks for your consideration.
Here's a link to a small tutorial:
[http://www.deitel.com/articles/java_tutorials/20060211/index.html] -
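The difference between options A and C can be checked with a small, self-contained sketch. Since com.sun.scjp is not a real package, java.lang.Math.PI stands in here for Geodetics.DIAMETER; the class and method names otherwise mirror the exam question:

```java
// Option C style: "import static" brings the static member itself into
// scope, so it can be used without the class-name prefix.
import static java.lang.Math.PI;

public class TerraCarta {

    // With the static import above, PI is usable unqualified (option C).
    static double halfwayStatic() {
        return PI / 2.0;
    }

    // With a regular import (option A), the member is accessed
    // through the class name instead.
    static double halfwayQualified() {
        return Math.PI / 2.0;
    }

    public static void main(String[] args) {
        // Both forms resolve to the same constant.
        System.out.println(halfwayStatic() == halfwayQualified()); // prints "true"
    }
}
```

Option B, by contrast, fails to compile: an `import static` declaration must name a static member (or end in `.*`), never a bare class.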
Threading and Re-Use of buffers Using Call By Reference node (Duct Tape required)
I have been trying to get the following information into the public domain for years and now that I have the answers, I will share with those that may be interested.
Warning!
Wrap your head in duct tape before reading, just for safety's sake.
My two questions have been:
1) Can LV re-use the buffers of the calling VI when using VI Server Call by Reference?
2) Is the UI thread used when using Call by Reference?
1. When calling a VI using the call by reference node, does the data going into the connector pane of the node get copied, or is it in-line as it would be with a properly set up subVI?
Short answer: It is somewhere in-between.
Long answer:
The compiler doesn't know what VI will be called, but it does have a hint:
the reference wired into the Call By Reference node. It uses that to get the "Prototype" for the call. So for the best performance, use a prototype that has the same "in-placeness characteristics" as the called VI. That being said, users don't know what the "in-placeness characteristics" are.
Before I go into the details, I should say that the overhead of these copies shouldn't matter much unless it is a large data structure (an array with a lot of elements, or a cluster/class with many fields or containing large arrays etc.).
Example 1:
If the prototype does not modify the data then the compiler assumes that the Call By Reference node will not modify the data. However at run-time a check is made to see if the actual called VI will modify the data. If so, then a copy is made and passed in so that the original data can remain unmodified.
Example 2:
If the prototype contains an input that is wired through to an output in such a way that both the input and output terminals can use the same memory buffer, but at run-time a check determines that the actual called VI's input and output do not share a buffer, then a copy will be made from the actual call's output to the original VIs (combined input and output) buffer.
I should also mention that even with this "attempt to agree with the prototype" behavior, it's not always possible to get as good performance as a regular SubVI call. For instance if you have a situation where the prototype does not modify the data and passes it through to an output then the compiler must assume that the data is modified (because as in example 2, there exist VIs that may modify it even if the actual VI called does not).
And there are some caveats:
1) This "using a prototype" behavior was new to 2009. Before that we used a more naive way of passing data that assumed all inputs will be modified and no outputs share a buffer with an input.
2) This behavior is subject to change in future versions, if we find further optimizations.
3) This behavior is the same as we use for dynamic dispatch VIs (when using LV classes)
4) If you want to create a VI only to be used as a prototype, then you can use features of the In-Place Element Structure to control the "in-placeness characteristics" Namely the In/Out Element border nodes, the "Mark as modifier" feature of the border nodes (note the pencil icon on the In Element), and the Always Copy node.
5) The prototype is only the first reference ever wired into the Call By Reference node. So if you do make a new prototype VI, you can't just make a reference out of it to wire to the Call By Reference node. I suggest deleting the Call By Reference node and dropping a new one.
6) For remote calls, we always have to "make copies" by passing data over a network.
I hope this helps, if you want any further information/clarification, then feel free to ask.
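The run-time check described in examples 1 and 2 above — copy the caller's buffer only when the actual callee may modify it — can be sketched outside LabVIEW. The Java below is a hypothetical analogy with invented names, not LabVIEW's actual implementation:

```java
import java.util.Arrays;

// Sketch of the "copy only if the actual callee modifies the data" idea:
// the caller's original buffer stays intact because a copy is made
// exactly when the callee declares that it mutates its input.
public class InPlaceSketch {

    interface Callee {
        boolean modifiesInput();   // analogous to the run-time "will it modify?" check
        int[] call(int[] data);
    }

    static final Callee READER = new Callee() {
        public boolean modifiesInput() { return false; }
        public int[] call(int[] data) { return data; }  // passes data through untouched
    };

    static final Callee MODIFIER = new Callee() {
        public boolean modifiesInput() { return true; }
        public int[] call(int[] data) { data[0] = -1; return data; }  // mutates its input
    };

    // Copy the buffer only when the callee may modify it, so the
    // caller's data is never changed behind its back.
    static int[] callByReference(Callee callee, int[] original) {
        int[] arg = callee.modifiesInput()
                ? Arrays.copyOf(original, original.length)
                : original;
        return callee.call(arg);
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3};
        callByReference(MODIFIER, data);
        System.out.println(Arrays.toString(data)); // prints "[1, 2, 3]" — original survives
    }
}
```

As in LabVIEW's prototype-based scheme, the cost of the copy is only paid on the modifying path; a read-only callee sees the caller's buffer directly.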
2. Does the call by reference node execute in the UI thread? If the call is being made by a remote machine over ethernet, which thread does the host (the machine that makes the call by reference) execute on and which thread does the target (the machine that contains the VI file) execute on?
In the local case, the Call by Reference node does not require the UI thread and can run in whatever thread the VI wants to execute in.
When calling a remote VI, the Call by Reference node uses the UI thread (detailed below) on both the client and on the server.
The client uses the UI thread to send the call request to the server and then again when the response comes back. The UI thread is not blocked during the time in between.
The server receives the TCP message in the UI thread and then starts the call in the UI thread. The server also uses the UI thread to send the response back to the client. The UI thread is not blocked on the server during the execution of the VI.
I hope people find this when they need it!
Ben
Ben Rayner
I am currently active on.. MainStream Preppers
Rayner's Ridge is under construction
I never use duct tape. I wrap my head in aluminum foil and thus get much better shielding from the aliens trying to tap my mind.
Also easier to remove later, but why risk taking it off??
LabVIEW Champion. Do more with less code and in less time.
Maybe you are looking for
-
Oracle Developer 6.0 not working in Windows NT
I have succesfully installed Oracle Developer 6.0 in the following M/c HP-Vectra Pentium III 64MB RAM Windows NT workstation with Service Pack 4 loaded. There is no error during the installation. I have selected Complete Installtion option. I have ad
-
ADAPTER.SOAP_EXCEPTION:Maximum request length exceeded.
Hi frnds, Plz look for the error i am getting when sending large record... - <SAP:Error xmlns:SAP="http://sap.com/xi/XI/Message/30" xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/" SOAP:mustUnderstand="1"> <SAP:Category>XIAdapter</SAP:Categor
-
Export webi report to excel print on 1 page
is there anyway in BO for a webi report when exported to excel to automatically print on one page? THe work around is to set the print area in excel itself wondering if it can be done in BO itself BO Edge XI 3.0 SQL 2005
-
Workflow approval for documents
Hi, Looking through the forum, it seems that it is possible to set up workflows for document approval with Solution Manager. I don't have much experience with this platform; can anybody point me to documentation, blogs or white papers that explain ho
-
Itunes 64 bit Win is not a valid 32 bit application
thrice I've downloaded itunes64setup for my win7 HP, but it every time I try to run it I get the error itunes64setup.exe is not a valid 32 bit application I'm getting sick of apple. They're worse than microsoft ever was.