Datatype mapping between Oracle and MS Access (re-post)

Hi, I am posting my query again in the hope that someone will respond.
I am writing an application to transfer data between tables of Oracle 8 and MS Access (both ways). I need to know the mapping of the various datatypes between these two databases. As many of you know, Access has some proprietary datatypes that do not map easily to a full-scale database server like Oracle. Any expert thoughts on this would be really helpful, thanks.
Raheel.

Check this link:
http://my.kharkov.org/docs/oracle/nt.816/z26073/ch3.htm#1053639
Hope it helps; let me know.
Thank you
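
For orientation, the mappings commonly applied by Oracle's migration tools are roughly: Access TEXT(n) to VARCHAR2(n), MEMO to CLOB (or LONG on older releases), YES/NO to NUMBER(1), CURRENCY to NUMBER(15,4), DATE/TIME to DATE, AUTONUMBER to NUMBER backed by a sequence, and OLE OBJECT to BLOB or LONG RAW. Treat these as a starting point rather than a definitive list. As a rough sketch of the transfer itself, a JDBC row copy in one direction could look like the following (the DSN, table, column, and credential names are made up for illustration):

    import java.sql.*;

    public class AccessToOracleCopy {
        public static void main(String[] args) throws Exception {
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");   // JDBC-ODBC bridge (pre-Java 8)
            Class.forName("oracle.jdbc.driver.OracleDriver");
            Connection src = DriverManager.getConnection("jdbc:odbc:AccessDsn");
            Connection dst = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "pass");
            try (Statement q = src.createStatement();
                 ResultSet rs = q.executeQuery("SELECT name, price, active FROM items");
                 PreparedStatement ins = dst.prepareStatement(
                         "INSERT INTO items (name, price, active) VALUES (?, ?, ?)")) {
                while (rs.next()) {
                    ins.setString(1, rs.getString(1));          // TEXT(40)  -> VARCHAR2(40)
                    ins.setBigDecimal(2, rs.getBigDecimal(2));  // CURRENCY  -> NUMBER(15,4)
                    ins.setInt(3, rs.getBoolean(3) ? 1 : 0);    // YES/NO    -> NUMBER(1)
                    ins.executeUpdate();
                }
            }
            src.close();
            dst.close();
        }
    }

Note that the JDBC-ODBC bridge was removed in Java 8, so on a modern JVM the Access side would need a dedicated driver (e.g. UCanAccess) instead.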

Similar Messages

  • Datatype mapping problem

    Hi
    I have a table with a column of type int4[], and I am having a problem setting the type of the corresponding entity bean field. I keep getting a message about mismatched datatypes: byte array instead of integer[].
    To be precise, the message is:
    java.sql.SQLException: ERROR: column "abc" is of type integer[] but expression is of type bytea
    and my bean's create() method is called with an int[] argument.
    Can anyone help? I'm using PostgreSQL 8.0 as the database server and JBoss as the application server.
    Thanks in advance,
    Michal

    Hi,
    Have you checked in the mapping that your field's type is the same in the external table AND in the aggregator? If you changed the type only after doing your mapping, it can happen that the field's type in the aggregator was not updated.
    Hope it helps,
    Florent
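
    The error itself points at the cause: the persistence layer is binding the field as bytea (a serialized byte array) instead of as a SQL array, so the fix belongs in the JDBC type mapping rather than in the bean code. For reference, a minimal standalone sketch of binding an int array as integer[] with a reasonably recent PostgreSQL JDBC driver (connection details and table name are made up):

        import java.sql.*;

        public class ArrayInsert {
            public static void main(String[] args) throws Exception {
                Connection con = DriverManager.getConnection(
                        "jdbc:postgresql://localhost/mydb", "user", "pass");
                // createArrayOf binds the value as integer[], not bytea
                Array abc = con.createArrayOf("integer", new Integer[] { 1, 2, 3 });
                try (PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO mytable (abc) VALUES (?)")) {
                    ps.setArray(1, abc);
                    ps.executeUpdate();
                }
                con.close();
            }
        }

    With CMP entity beans the default mapping for an unknown Java field type is a serialized byte array, which is exactly the bytea the error complains about, so a custom JDBC type mapping for the field is usually needed on the JBoss side.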

  • Blackberry Maps & UK Post Codes

    I'm struggling to make the most of BlackBerry Maps on my 8520.
    When I try to search for an address using a post code, it only recognises the first 4 or 5 characters, depending on whether it's a 6- or 7-character post code. So when I try to put in my home address, it locates a place 3 miles away from where I actually live.
    When I try to enter town and county, I keep getting a message saying they are not recognised.
    I have only managed to save my location by manually finding it on the map.
    Any advice please? And is there a BlackBerry Maps instruction manual anywhere?

    http://us.blackberry.com/smartphones/features/blackberry_maps_detail.jsp
    http://www.knowyourmobile.com/smartphones/rim-blackberry/blackberry-guides/344267/how_to_set_up_and_...
    teffers wrote:
    Any info on why the Blackberry won't accept the last 2 digits of a post-code when trying to find an address? 
    It's not sufficiently accurate.

  • Pre-mapping vs post-mapping tool

    Hello everyone.
    The problem is that I need to find mapping-rule mistakes by comparing the amounts in the mapped accounts with the amounts in the accounts imported from the previous system. I have not been able to build an automatic tool in BPC to compare the imported data against the data before the mapping was applied.
    How can I build it?
    Thanks,
    Oscar

    Hi Nilanjan,
    Thanks for the answer, but I think I haven't explained it correctly.
    I import the data into an Application with 10 dimensions, using a txt file with the information for all the dimensions. Some of these dimensions have a conversion file with a mapping. The problem is that when we upload the data from the file to BPC, it takes a lot of time to find which lines from the txt file have been uploaded to each BPC member, so we would like to know if it is possible to build some kind of "log" where we could find the destination members for each line in the txt file.
    A good example of this "log" would be something similar to the lines below:
    Data from the txt file                            BPC members where the line has been stored
    Dim1      Dim2      Dim3      ... Dim10          Dim1        Dim2        Dim3        ... Dim10
    TxtData1  TxtData2  TxtData3  ... TxtData10      BPCMember1  BPCMember2  BPCMember3  ... BPCMember10
    Thanks again for your help,
    Oscar.
    Edited by: Oscar G.C. on Sep 24, 2010 9:08 AM
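
    BPC does not produce such a log out of the box. One pragmatic workaround is a small offline tool that replays the conversion rules against the txt file and writes one log line per record. Below is a minimal sketch of that idea, assuming the conversion rules for a dimension can be exported as simple source-to-member pairs (all file names and members are hypothetical):

        import java.io.*;
        import java.util.*;

        public class MappingLog {
            public static void main(String[] args) throws Exception {
                // Hypothetical conversion rules for one dimension (e.g. Dim1),
                // taken from the BPC conversion file as source -> member pairs
                Map<String, String> dim1 = new HashMap<String, String>();
                dim1.put("TxtData1", "BPCMember1");

                BufferedReader in = new BufferedReader(new FileReader("upload.txt"));
                PrintWriter log = new PrintWriter(new FileWriter("mapping.log"));
                String line;
                while ((line = in.readLine()) != null) {
                    String[] cols = line.split("\t");
                    // Unmapped values pass through unchanged, as in BPC
                    String member = dim1.containsKey(cols[0]) ? dim1.get(cols[0]) : cols[0];
                    log.println(cols[0] + " -> " + member);
                }
                log.close();
                in.close();
            }
        }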

  • NVARCHAR to NVARCHAR2 datatype mapping between SQL Server and Oracle

    A question regarding the size allocation that occurs when the migration tool converts NVARCHAR columns in SQL Server to NVARCHAR2 in Oracle.
    The migration tool has converted NVARCHAR columns to twice their size as NVARCHAR2 in Oracle. For example, a table whose column is mycol NVARCHAR(40) gets migrated as mycol NVARCHAR2(80).
    In SQL Server land, declaring a column as NVARCHAR(40) will physically store the column as 80 bytes. However, as I understand from this excerpt (taken from http://www.oracle.com/technology/oramag/oracle/03-nov/o63tech_glob.html), Oracle in essence will do the same thing for NVARCHAR2:
    To make it easy to allocate proper storage for Unicode values, Oracle9i Database introduced character semantics. You can now use a declaration such as VARCHAR2(3 CHAR), and Oracle will set aside the correct number of bytes to accommodate three characters in the underlying character set. In the case of AL32UTF8, Oracle will allocate 12 bytes, because the maximum length of a UTF-8 character encoding is four bytes (3 characters * 4 bytes/character = 12 bytes). On the other hand, if you're using AL16UTF16, in which case your declaration would be NVARCHAR2(3), Oracle allocates just six bytes (3 characters * 2 bytes/character = 6 bytes). One difference worth noting is that for UTF-8, a declaration using character semantics allows enough room for surrogate characters, whereas for UTF-16 that is not the case. A declaration such as NVARCHAR2(3) provides room for three UTF-16 code units, but a single supplementary character may consume two of those code units.
    As I read this, NVARCHAR(40) in SQL Server should be equivalent to NVARCHAR2(40) in Oracle, not the NVARCHAR2(80) it gets translated to. Am I missing something? We have a rather large database to migrate, so I need to get this size allocation right (and I obviously don't want to truncate any data).
    Thanks -

    Right, what I'm suggesting is that NVARCHAR(8) in SQL Server is equivalent to NVARCHAR2(8) in Oracle. Truncation will occur after 8 of any character, including, say, double-byte Kanji characters, which would physically store 16 bytes in the NVARCHAR(8) / NVARCHAR2(8) column definition. I tried that in both SQL Server and Oracle, and the behavior is the same.
    Also, technically, NVARCHAR2(8) and VARCHAR2(8 CHAR) are roughly the same in terms of behavior, except that in the first case the physical storage is 16 bytes, while in the second it is 24 bytes (to allow for 'supplemental' characters, i.e. true UTF-8 compliance). The same truncation occurs after 8 of any character, be it single-byte ASCII or double-byte Kanji, which is the behavior I expect.
    Thanks for looking into this. What I decided to do was the following:
    For what was originally defined in SQL Server as varchar (and translated to VARCHAR2 with the CHAR designation), I removed the CHAR designation, since I truly want that number of bytes, not characters, to be stored (these are columns where I do not care about UTF-8 or UTF-16 compliance); that also mimics the behavior I currently have on the SQL Server side. For anything that was nvarchar in SQL Server, I created the columns as VARCHAR2 with the CHAR designation, with the same size (not double the value, which is what the documentation says it gets translated to). I did this since we will probably need true UTF-8 compliance down the line, and since I had the hood open, it was easy enough to do. I could also have converted NVARCHAR(n) to NVARCHAR2(n) (and not n*2), instead of what was happening in the migration.
    I tested strings in both cases, and I think I have my solution.
    Edited by: kpw on Sep 24, 2008 11:21 AM
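
    To sanity-check the character-semantics behavior described above before running the full migration, a small JDBC probe is enough: insert eight two-byte Kanji characters into an NVARCHAR2(8) column. Under character semantics (AL16UTF16 national character set) the insert succeeds; if the limit were 8 bytes it would fail. A sketch, with connection details and table name made up:

        import java.sql.*;

        public class NcharProbe {
            public static void main(String[] args) throws Exception {
                Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "pass");
                Statement st = con.createStatement();
                st.execute("CREATE TABLE nchar_probe (c NVARCHAR2(8))");
                PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO nchar_probe VALUES (?)");
                // Eight Kanji characters = 16 bytes in AL16UTF16, but only
                // 8 characters, so this fits NVARCHAR2(8)
                ps.setNString(1, "\u6f22\u5b57\u6f22\u5b57\u6f22\u5b57\u6f22\u5b57");
                ps.executeUpdate();
                st.execute("DROP TABLE nchar_probe");
                con.close();
            }
        }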

  • Mapping to an IDOC using an externally created XSL file

    I want to map a message to an IDoc using an ABAP XSLT transform. I am creating the XSLT in an external tool.
    I need to use an XSD definition of the IDOC as the target document of the mapping. I can get that in two ways:
    1) Get the XSD definition of the IDOC from transaction WE60 and use the menu command 'Documentation -> XML Schema'
    2) In the Integration Repository, export the XSD of the imported IDOC.
    The problem is, there are significant syntax differences between these versions, and I don't know which is the correct version. For instance, the XSD exported from the Integration Repository has a mandatory attribute 'SEGMENT' for each segment that must be filled to create a valid XML message; this is not the case when the XSD definition is created in WE60.
    Can somebody advise which is the correct version to use?

    Hi Anthony,
    This is really strange...
    In both cases the XSD should be the same. Can you tell me for which IDoc you are generating the XSD?
    Even if you generate the XML Schema using WE60, the mandatory attribute 'SEGMENT' has to be there for each segment. This validation is done when the IDoc is posted; it checks all mandatory fields.
    My thoughts differ somewhat from what Wojciech has suggested:
    "If I were in your shoes I would use the xsd from the integration repository because that's the expected result...."
    But the final goal is not to achieve the mapping; it is to post the IDoc to the R/3 system successfully, with proper data.
    Your second question:
    "Do I assume then that placing a constant value of 1 in the SEGMENT attribute is just to satisfy the validation of the XML message against the XSD?"
    I think that if you assign the constant "1" to a segment, that segment will be generated when the IDoc is created. It is mandatory to assign a constant value to every segment you want generated.
    "Can I assume that the IDoc adapter will fill the SEGMENT attribute with the correct value?"
    I don't think the IDoc adapter will fill this attribute automatically; you need to assign a constant to the SEGMENT field.
    Nilesh

  • Get File name of the inbound file during mapping

    Scenario: read the file name of the inbound file (which contains a date required for the mapping) at runtime.
    The requirement is to read the date from the name of the inbound file (passed to the XI pipeline by the file adapter) and populate it in the outbound mapping structure.
    Any idea how to do this?
    (I went through a few options using java.util.Map, without success so far.)

    Hi Anand,
    I posted the same question a while ago, without any help....
    Can I find out the full filename of input file in message mapping?
    Posted: Nov 23, 2004 1:00 PM
    I have in XI 2.0 the following scenario :
    In the inbound file adapter I read my input file. The filename of the input file is part fixed, part variable (like INDATA01.txt, INDATA03.txt, etc.).
    So in my adapter configuration, I specify the filename with a wildcard (INDATA*.txt).
    What I now want to do is use the full filename in my message mapping, so I can do something different for every file number. Is there a way to get the full filename in my message mapping? (I did not find the filename in the XML in the message trace.)
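
    For the record, XI 2.0 offers no supported way to read the filename inside a message mapping. From XI 3.0 onwards, the sender file adapter can write the filename into the message header as an adapter-specific attribute (enable "Adapter-Specific Message Attributes" and "File Name" on the sender channel), where a Java user-defined function can pick it up. A sketch of such a UDF (imports are declared in the mapping editor, not in the function body):

        // Declare com.sap.aii.mapping.api.* in the UDF's import list.
        // Requires the adapter-specific attribute "File Name" to be
        // enabled on the sender file adapter channel (XI 3.0+).
        public String getFileName(Container container) {
            DynamicConfiguration conf = (DynamicConfiguration) container
                    .getTransformationParameters()
                    .get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);
            DynamicConfigurationKey key = DynamicConfigurationKey.create(
                    "http://sap.com/xi/XI/System/File", "FileName");
            return conf.get(key);  // e.g. "INDATA01.txt"
        }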

  • Help: I'm having problems with site maps, Google, etc. with iWeb '09

    Hi guys, I have iWeb '09 and I'm trying to get Google to recognize my site map. I've posted the site map into the URL folder of my domain name, and I've also posted the Google HTML file, but Google reports an error and will not verify. Why is iWeb so bad at this? What can I do? I really need help.

    An HTML verification file is better than a tag.
    You can make up a sitemap for free or use a third-party application.
    You will find all this information in this section of iWeb for Musicians:
    http://www.iwebformusicians.com/SearchEngines/SEO.html

  • LDAP Field Mapping in 4.6C - Using WebAS 6.10+ as an LDAP Gateway

    Dear All,
      We have a need to enable CUP functionality (we use GRC AC 5.3) for one of our oldest R/3 systems, on 4.6C. All the other R/3 backends are on 4.7+ releases, so it's a multiple-backend configuration for GRC AC.
      However, the LDAP Field Mapping functionality is missing in 4.6C; it was enabled through LDAPMAP in the higher releases only.
      At the same time, I discovered in an SAP HR document a diagram showing that 4.7+ can indeed map and post data directly to LDAP, but that for 4.6C and below you can use WebAS 6.10+ as an LDAP gateway, meaning that 4.6C calls, through RFC, functions in the higher-release R/3 system and uses its field mapping and onward transfer of user data to the target LDAP server.
      But I cannot find anywhere how to configure the 4.7 / 6.0 servers to act as an LDAP gateway for the older 4.6C server, to bypass its limitation: the absence of built-in LDAP Field Mapping functionality.
      Advice on how to realise this concept will be highly appreciated.
    Thanks,
    Laziz

    Hi,
    In order to migrate users, groups and passwords you have to use the ldapaddent command, as you did, with this syntax:
    # ldapaddent -D "cn=Directory Manager" -w secret -f /etc/group group
    # ldapaddent -D "cn=Directory Manager" -w secret -f /etc/passwd passwd
    # ldapaddent -D "cn=Directory Manager" -w secret -f /etc/shadow shadow
    Note that you must use the passwd container instead of the people container.
    I suggest you check this article from BigAdmin: http://www.sun.com/bigadmin/features/articles/nis_ldap_part1.jsp
    G.

  • How to execute two stored procedures in a mapping?

    Hi All,
    I have a mapping with a source and a target table. In the same mapping, I have a stored procedure attached to a post-mapping process, and during execution of the mapping the post-mapping process works fine. Now I want to add one more stored procedure attached to a second post-mapping process, but this is currently not allowed. Is there any workaround to implement this? Please post your suggestions.
    Thanks in Advance,
    Kishan

    Hello Kishan,
    The only way to achieve this currently is to create a super-procedure that calls all your stored procedures in sequence, and use that super-procedure with the post-mapping operator.
    Regards, Hans Henrik
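
    As an illustration of the super-procedure approach (all procedure names below are hypothetical), the wrapper is created once in the target schema, and the post-mapping operator then references only the wrapper:

        import java.sql.*;

        public class CreateWrapper {
            public static void main(String[] args) throws Exception {
                Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "pass");
                Statement st = con.createStatement();
                // One PL/SQL super-procedure that runs both post-mapping
                // procedures in sequence
                st.execute(
                    "CREATE OR REPLACE PROCEDURE post_mapping_wrapper AS " +
                    "BEGIN proc_one; proc_two; END;");
                st.close();
                con.close();
            }
        }

    The same DDL can of course be run directly from SQL*Plus; the point is simply that the mapping only ever sees one procedure.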

  • BPM steps for mapping failure (like if/else)

    Hi
    I had a requirment in XI for...
    File  -->  XI  -->  Abap Proxy.
    Condition : XI has to do some field validation in Mapping and post the data to abap proxy.  and if validation in mapping failed
    (Mapping failed), XI has to write the data in some database table.
    Note :
    Validation : Char,integer or length check.
    what i am trying is that..
    Send Step -> Transformation(it failed ,exception should called another interface) -->  Abap proxy
                                                                       i.e database table
    M i going with correct flow..
    Regards,
    Ajay P

    Hi
    i created the BPM as per the answers...and activated
    http://img390.imageshack.us/img390/9182/bpmshotsdd5.jpg
    but when i am testing it..
    1.Case :
    mapping do'nt fail
    in Sxmb_moni
    first message is sucessfull  but second message fail with error
    of   " NO RECEIVER DETERMINATION "
    id  ID  i have created 3 receiver determinations 3  inteface determination..
    MY ID part is quite similar to
    ID of scenario of File to RFC with BPM.
    Regards
    AjayP

  • Multi mapping in proxy

    Hi all,
    I am trying to do a file-to-RFC scenario where I have to post multiple invoices in a single BAPI, and I am using a proxy in my scenario. In such a case, can I do multi-mapping and post the data to R/3? Could you please suggest?
    Thanks in advance,
    Kalam.

    "I am using proxy in my scenario In such case can i do multi mapping and post the data to R/3"
    You have to use BPM for this, because the ABAP proxy uses the ABAP stack, and multi-mapping without BPM requires the Java stack.
    Prateek

  • Account Posting in Case of Subcontracting

    Hi Experts,
    How do we map account posting in the case of subcontracting?
    Do we post something to a customer account, or is the posting only to G/L accounts?
    Thanks in Advance..
    Regards,
    HP

    Closing the thread because of no response.

  • Unexpected namespace change after mapping change

    Posted: Apr 18, 2006 5:17 AM
    Hi all
    We developed a mapping about one year ago on XI SP9.
    The source of the message was an IDoc and the target an XML sample (not an XSD!) of what our customer expected.
    Everything was working fine at that point.
    Here is an extract of the output we had:
    <?xml version="1.0" encoding="UTF-8" ?>
    <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://schemas.xmlsoap.org/soap/envelope/ http://schemas.xmlsoap.org/soap/envelope/">
    <env:Header xmlns:p="http://localhost/webs/msgDetails.xsd">
    <p:msgDetails xmlns:p="http://localhost/webs/msgDetails.xsd">
    <p:sender>
    <p:senderId>XXX</p:senderId>
    <p:senderName>AUSTRALIA/NEW ZEALAND</p:senderName>
    </p:sender>
    We are now running SP14 and we had to change the mapping. Since we changed it, the namespaces defined in the sample XML we use as a target structure are no longer used; instead they are replaced by ns0, ns1...
    Now the output looks like this:
    <?xml version="1.0" encoding="UTF-8" ?>
    <ns0:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns0="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://www.w3.org/2001/XMLSchema-instance" ns1:schemaLocation="http://schemas.xmlsoap.org/soap/envelope/ http://schemas.xmlsoap.org/soap/envelope/">
    <ns0:Header>
    <ns2:msgDetails xmlns:p="http://localhost/webs/msgDetails.xsd" xmlns:ns2="http://localhost/webs/msgDetails.xsd">
    <ns2:sender>
    <ns2:senderId>21873</ns2:senderId>
    <ns2:senderName>XXX</ns2:senderName>
    </ns2:sender>
    Any idea what has happened here?
    Thanks in advance
    Gregory

    Hi Gregory,
    after a quick look I would say that the two structures you have posted are logically equivalent.
    Let me make this clear using the sender element.
    In the first example you have a tag <p:sender>. Obviously the namespace prefix 'p' is used here. In order to find the qualified name of the element represented by that tag, you have to find the namespace declaration for prefix p. As the tag itself does not contain such a declaration, you have to go to the parent tag. Indeed, there you find a declaration for p which says that p is just an alias for 'http://localhost/webs/msgDetails.xsd'. Hence the qualified name of the element is 'sender' in namespace 'http://localhost/webs/msgDetails.xsd'.
    Now let us look at the corresponding tag <ns2:sender> in the second document. In order to identify the qualified name of the element, you have to find out which namespace is bound to the prefix ns2 used here. Again, you cannot find this in the tag itself but in its parent. There you find that ns2 is bound to 'http://localhost/webs/msgDetails.xsd'. (Here you also see that prefix p is bound to that namespace, which seems superfluous to me, as the prefix is never used.)
    Consequently, the qualified name of that element is likewise 'sender' in namespace 'http://localhost/webs/msgDetails.xsd'.
    Thus, both tags represent elements with the same qualified name. For more details see http://www.w3.org/TR/REC-xml-names/.
    Greetings Stephan
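
    Stephan's point can be checked mechanically: parse both outputs with a namespace-aware parser and compare namespace URI plus local name instead of prefixes. A minimal sketch (the file names are made up):

        import java.io.File;
        import javax.xml.parsers.*;
        import org.w3c.dom.*;

        public class QNameCheck {
            public static void main(String[] args) throws Exception {
                DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
                f.setNamespaceAware(true);  // otherwise prefixes count as part of the name
                DocumentBuilder b = f.newDocumentBuilder();

                String ns = "http://localhost/webs/msgDetails.xsd";
                Element e1 = (Element) b.parse(new File("old_output.xml"))
                        .getElementsByTagNameNS(ns, "sender").item(0);
                Element e2 = (Element) b.parse(new File("new_output.xml"))
                        .getElementsByTagNameNS(ns, "sender").item(0);

                // Both lines print the same URI and local name, whatever the prefix
                System.out.println(e1.getNamespaceURI() + " : " + e1.getLocalName());
                System.out.println(e2.getNamespaceURI() + " : " + e2.getLocalName());
            }
        }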

  • Horizontal mapping question

    Hello,
    I have Person, Post and Organization classes extending a Party class. I map Person, Post and Organization each onto its own table.
    Person and Organization have a bi-directional association with each other (via an inverted relation): Organization has a collection of Persons, and Person references back to Organization.
    I would like to know how Kodo will fetch the persons for a given organization. Will Kodo optimize the SQL so as not to query all the tables of Party implementations when fetching a relation expressed on specific subclasses (Organization and Person in this case)?
    Could you please post some details on the horizontal mapping rules, which cases are the most expensive in terms of SQL and performance, and whether there are any limitations as far as eager fetching is concerned.
    Thank you
    Alex

    Alex,
    Things only get complex if you have relations to the virtual class (Party, in this case).
    If you declare a field to be a relation to a subclass, Kodo won't attempt to look in the other subclass tables.
    If you query on Party, Kodo will issue three queries: one for Person, one for Post, and one for Organization. This is analogous to Kodo's support for queries on interfaces.
    Declaring a relation to Party will be the most expensive, as Kodo will not be able to do the join on the database side, but will have to bring the OID(s) into the JVM and then do the lookup from there.
    -Patrick
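
    In code, the distinction looks like this (a sketch following the classes from the original post; persistence metadata omitted, one file per class in practice):

        // Party is the horizontally mapped base class
        abstract class Party {
        }

        class Organization extends Party {
            // Collection typed to the concrete subclass Person: Kodo only
            // has to look in Person's table when fetching it
            private java.util.Set persons;  // inverse of Person.organization
        }

        class Person extends Party {
            // Relation declared against the concrete subclass, not Party,
            // which avoids the in-JVM OID lookup described above
            private Organization organization;
        }

        class Post extends Party {
        }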
