Conversion between Coordinates

I am using TextLayout to render Unicode text and have calculated the bounding box of each character in the TextLayout. The font size is set to 12. For the same text I am also generating an image, and I have found the bounding boxes of each character in that image with the help of an already available tool. The tool chooses the bottom-left corner as the image origin and calculates its coordinates from there, so the two sets of coordinates obviously differ. Can anyone please suggest a method for converting between the TextLayout coordinates and the image coordinates described above?
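
In case it is useful: the conversion is essentially a flip of the y axis. Below is a minimal Java sketch of it (Java, since TextLayout is a Java 2D class). It assumes the rendered image is imageHeight pixels tall, that the TextLayout boxes have already been translated into pixel coordinates of that image (TextLayout bounds are relative to the layout's drawing origin, so the drawing position must be added first), and that the tool reports each box by its bottom-left corner; the class and method names are only illustrative.

import java.awt.geom.Rectangle2D;

public class CoordinateFlip {

    // TextLayout/Java2D box (origin top-left, y grows downward)
    // -> image box (origin bottom-left, y grows upward).
    static Rectangle2D toBottomLeft(Rectangle2D box, double imageHeight) {
        double yFromBottom = imageHeight - (box.getY() + box.getHeight());
        return new Rectangle2D.Double(box.getX(), yFromBottom, box.getWidth(), box.getHeight());
    }

    // Inverse direction: image box -> TextLayout/Java2D box (same formula, the flip is its own inverse).
    static Rectangle2D toTopLeft(Rectangle2D box, double imageHeight) {
        double yFromTop = imageHeight - (box.getY() + box.getHeight());
        return new Rectangle2D.Double(box.getX(), yFromTop, box.getWidth(), box.getHeight());
    }

    public static void main(String[] args) {
        Rectangle2D glyphBox = new Rectangle2D.Double(10, 3, 7, 12); // a box from the TextLayout side
        System.out.println(toBottomLeft(glyphBox, 20)); // y becomes 20 - (3 + 12) = 5
    }
}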

Similar Messages

  • Can I have a Face Time conversation between two Apple devices using the same Apple ID?

    Can I have a Face Time conversation between two Apple devices using the same Apple ID?

    Yes, you can, but the two devices can't have the same email address associated with both. Each device needs its own unique email address. In your case, the iPod can call your iPhone if you use the phone number, but the iPhone cannot call your iPod because it must use the email address, and if the email address is the same on both, the iPhone will think it's calling itself, so the call will not go through.

  • Precision loss - conversions between exact values and floating point values

    Hi!
    I read this in your SQL Reference manual, but I don't quite get it.
    Conversions between exact numeric values (TT_TINYINT, TT_SMALLINT, TT_INTEGER, TT_BIGINT, NUMBER) and floating-point values (BINARY_FLOAT, BINARY_DOUBLE) can be inexact because the exact numeric values use decimal precision whereas the floating-point numbers use binary precision.
    Could you please give two examples: one where a TT_TINYINT is converted to a BINARY_DOUBLE and one where a TT_BIGINT is converted into a DOUBLE, and in both cases show where precision is lost? This would be very helpful.
    Thanks!
    Sune

    chokpa wrote:
    public Example (float... values){}
    new Example (1, 1e2, 3.0, 4.754);
    It accepts it if I just use 1, 2, 3, 4 as the values being passed in, but doesn't like it if I use actual float values.
    Those are double literals, try
    new Example (1f, 1e2f, 3.0f, 4.754f);
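
    Since the original question (precision loss between exact and floating-point values) is not directly answered above, here is a small Java sketch, offered only as an illustration of the underlying effect for the TT_BIGINT case: a 64-bit integer can need more significant bits than a double's 53-bit significand, so large values get rounded, whereas a TT_TINYINT always fits exactly. The class name is made up.

    public class PrecisionLoss {
        public static void main(String[] args) {
            long big = 9007199254740993L;       // 2^53 + 1: needs 54 significant bits
            double asDouble = (double) big;     // a double carries only 53 significant bits
            long roundTrip = (long) asDouble;
            System.out.println(big);            // 9007199254740993
            System.out.println(roundTrip);      // 9007199254740992 -> the last unit is lost
            byte tiny = 100;                    // analogue of TT_TINYINT: always exact as a double
            System.out.println((double) tiny);  // 100.0
        }
    }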

  • When using the analog inputs and analog outputs of the PCI-7344, what is the conversion between the voltage entering the card to counts? Similarly, what is the conversion between counts to voltage at the output of the card?

    I am using the PCI-7344 to control my system. The analog inputs are connected to the output of my system, while the analog outputs serve as feedback to the system. The system is a servo. I want to know the conversion from the voltage read at the input to card counts, and finally from counts to the voltage output to the system.

    Carole,
    If you are trying to do analog feedback with a servo motor, Chapter 14 of the manual talks about how to set up the torque feedback. Also linked below is a LabVIEW example of analog feedback.
    Chapter 14
    NI-Motion User Manual
    Automatic Analog Feedback with FlexMotion example
    A. Talley
    National Instruments

  • Explain an object-oriented method for a conversion between the E/R diagram

    Can somebody explain to me, with an example, an object-oriented method for converting an E/R diagram into a relational database schema?
    Thanks,

    Hello, here are some tips on how to convert an ERD into relational objects.
    (1) One-to-many relationships:
    One-to-many relationships can be explained by the following example.
    Take the example of Department and Employee. One employee can work in only one department, but one department can have more than one employee. In a one-to-many relationship the primary key of the "one" side (here the Department table) goes to the "many" side as a foreign key (here the Employee table).
    (2) Many-to-many relationships:
    Many-to-many relationships can be explained by the following example.
    One student can take many subjects and one subject can be taken by more than one student.
    In this case a new table emerges which includes the primary keys of both tables and forms a composite primary key. E.g. if the PK of Student is student_id and the PK of Course is course_id, then a new table will emerge named (say) student_course, with a composite primary key of student_id and course_id (student_id and course_id are also foreign keys). A small sketch of this junction table is given after this reply.
    (3) One-to-one:
    Example: a person can have only one tax file number and one tax file number can be allocated to only one person. In this case the PK of either table can go into the other table as a foreign key.
    (4) Superclass-subclass relationships:
    Example: a person can be a man or a woman. Such relationships are called superclass-subclass relationships. There are many ways to convert such relationships into tables:
    we can make one table for the superclass (e.g. table Person) and put the PK of the superclass into each subclass as its primary key (e.g. if person_id is the PK of table Person, then it goes into both subclasses (Man, Woman) as their PK, and it is also a foreign key referencing the superclass (Person)).
    Note: this is just one way to handle superclass relationships; there are many other approaches you can choose from.
    Thanks
    Ish
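
    To make point (2) above concrete, here is a minimal sketch of the student/course junction table. It is written as Java/JDBC so the DDL can actually be run; the in-memory H2 database (the jdbc:h2:mem: URL and the driver on the classpath) is an assumption for illustration, not something from the original post, and the table and column names simply mirror the example above.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ManyToManySketch {
        public static void main(String[] args) throws Exception {
            // Assumed: H2 driver on the classpath, throwaway in-memory database.
            try (Connection con = DriverManager.getConnection("jdbc:h2:mem:erd");
                 Statement st = con.createStatement()) {
                st.execute("CREATE TABLE student (student_id INT PRIMARY KEY, name VARCHAR(50))");
                st.execute("CREATE TABLE course (course_id INT PRIMARY KEY, title VARCHAR(50))");
                // Junction table: the composite primary key is made of the two foreign keys.
                st.execute("CREATE TABLE student_course ("
                         + "student_id INT, "
                         + "course_id INT, "
                         + "PRIMARY KEY (student_id, course_id), "
                         + "FOREIGN KEY (student_id) REFERENCES student(student_id), "
                         + "FOREIGN KEY (course_id) REFERENCES course(course_id))");
                System.out.println("student_course created with composite PK (student_id, course_id)");
            }
        }
    }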

  • Conversions between character sets when using exp and imp utilities

    I use the EE8ISO8859P2 character set on my server. When exporting the database with NLS_LANG not set,
    the conversion should be done between EE8ISO8859P2 and US7ASCII, so some characters not present in
    US7ASCII should not be converted successfully. But when I import such a dump, all characters not
    present in the US7ASCII character set are imported into the database. I thought that some characters
    should be lost when doing such conversions; can someone tell me why this is not so?

    Not exactly. If the import is done into a database with the same character set, it does not matter how the dump has been exported. Conversion (and possible corruption) may happen if the destination database has a different character set. See this example:
    [ora102 work db102]$ echo $NLS_LANG
    AMERICAN_AMERICA.WE8ISO8859P15
    [ora102 work db102]$ sqlplus test/test
    SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:47:01 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    TEST@db102 SQL> create table test(col1 varchar2(1));
    Table created.
    TEST@db102 SQL> insert into test values(chr(166));
    1 row created.
    TEST@db102 SQL> select * from test;
    C
    ¦
    TEST@db102 SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    [ora102 work db102]$ export NLS_LANG=AMERICAN_AMERICA.EE8ISO8859P2
    [ora102 work db102]$ sqlplus test/test
    SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:47:55 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    TEST@db102 SQL> select col1, dump(col1) from test;
    C
    DUMP(COL1)
    ©
    Typ=1 Len=1: 166
    TEST@db102 SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    [ora102 work db102]$ echo $NLS_LANG
    AMERICAN_AMERICA.EE8ISO8859P2
    [ora102 work db102]$ exp test/test file=test.dmp tables=test
    Export: Release 10.2.0.1.0 - Production on Tue Jul 25 14:48:47 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export done in EE8ISO8859P2 character set and AL16UTF16 NCHAR character set
    server uses WE8ISO8859P15 character set (possible charset conversion)
    About to export specified tables via Conventional Path ...
    . . exporting table                           TEST          1 rows exported
    Export terminated successfully without warnings.
    [ora102 work db102]$ sqlplus test/test
    SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:48:56 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    TEST@db102 SQL> drop table test purge;
    Table dropped.
    TEST@db102 SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    [ora102 work db102]$ imp test/test file=test.dmp
    Import: Release 10.2.0.1.0 - Production on Tue Jul 25 14:49:15 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export file created by EXPORT:V10.02.01 via conventional path
    import done in EE8ISO8859P2 character set and AL16UTF16 NCHAR character set
    import server uses WE8ISO8859P15 character set (possible charset conversion)
    . importing TEST's objects into TEST
    . importing TEST's objects into TEST
    . . importing table                         "TEST"          1 rows imported
    Import terminated successfully without warnings.
    [ora102 work db102]$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P15
    [ora102 work db102]$ sqlplus test/test
    SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:49:34 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    TEST@db102 SQL> select col1, dump(col1) from test;
    C
    DUMP(COL1)
    ¦
    Typ=1 Len=1: 166
    TEST@db102 SQL>
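
    For what it's worth, the same effect can be reproduced outside the database: the byte value 166 stored above is simply interpreted through whatever character set the client session declares. A minimal Java sketch of that interpretation (ISO-8859-15 and ISO-8859-2 correspond to WE8ISO8859P15 and EE8ISO8859P2; their availability in the JRE is assumed, and the class name is made up):

    import java.nio.charset.Charset;

    public class SameByteDifferentCharset {
        public static void main(String[] args) {
            byte[] raw = { (byte) 166 };  // the single byte stored in the table above
            // One byte, two interpretations -- this is all that changes between the sessions.
            System.out.println("ISO-8859-15: " + new String(raw, Charset.forName("ISO-8859-15")));
            System.out.println("ISO-8859-2 : " + new String(raw, Charset.forName("ISO-8859-2")));
        }
    }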

  • Query on conversion between String to Enum type

    Hi All,
    I would like to get advice on how to convert between char and an enum type. Below is an example of generating unique random alphabet letters before converting them back to their corresponding letters that belong to an enum type called definition.Alphabet, which is part of a global project used by other applications:
    package definition;

    public enum Alphabet {
        A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S,
        T, U, V, W, X, Y, Z;

        public StringBuffer uniqueRandomAlphabet() {
            String currentAlphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
            StringBuffer randomAlphabetSB = new StringBuffer();
            for (int numberOfAlphabet = 26; numberOfAlphabet > 0; numberOfAlphabet--) {
                int character = (int) (Math.random() * numberOfAlphabet);
                String characterPicked = currentAlphabet.substring(character, character + 1);
                // System.out.println(characterPicked);
                randomAlphabetSB.append(characterPicked);
                // Rebuild the pool without the picked character.
                StringBuffer remainingAlphabet = new StringBuffer(currentAlphabet.length());
                remainingAlphabet.setLength(currentAlphabet.length());
                int current = 0;
                for (int currentAlphabetIndex = 0; currentAlphabetIndex < currentAlphabet.length(); currentAlphabetIndex++) {
                    char cur = currentAlphabet.charAt(currentAlphabetIndex);
                    if (cur != characterPicked.charAt(0)) {
                        remainingAlphabet.setCharAt(current++, cur);
                    }
                }
                currentAlphabet = remainingAlphabet.toString();
            }
            return randomAlphabetSB;
            // System.out.println(randomAlphabetSB);
        }
    }
    I got the following compilation error when trying to pass (Alphabet) StringBuffer[0] to a method that expects Alphabet.A type:
    inconvertible types
    required: definition.Alphabet
    found: char
    Any ideas on how to get around this? An alternative solution is to have a huge switch statement to assemble the Alphabet values into an ArrayList<Alphabet>, but I am wondering whether there is a shorter, more direct conversion path.
    I am using JDK1.6.0_17, Netbeans 6.7 on Windows XP.
    Thanks a lot,
    Jack

    "I would like to get advice on how to convert between char and Enum type. Below is an example of generating unique random alphabet letters"
    If I understand well, you may be interested in the method shuffle(...) in class java.util.Collections, which randomly reorders a list.
    "before converting them back to their corresponding letters that belong to the enum type called definition.Alphabet"
    If I understand well, you may be interested in the built-in method Alphabet.valueOf(...), which will return the appropriate instance by name (you'll probably have no problem building a valid String name from a lowercase char).
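
    Putting the two suggestions together, here is a minimal, self-contained Java sketch (the Alphabet enum is redeclared locally so the snippet compiles on its own; in the original project it would of course be definition.Alphabet, and the class name here is made up):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    public class ShuffleAndValueOf {
        enum Alphabet { A, B, C, D, E, F, G, H, I, J, K, L, M,
                        N, O, P, Q, R, S, T, U, V, W, X, Y, Z }

        public static void main(String[] args) {
            // Collections.shuffle replaces the hand-rolled random-selection loop above.
            List<Alphabet> letters = new ArrayList<Alphabet>(Arrays.asList(Alphabet.values()));
            Collections.shuffle(letters);
            System.out.println(letters);

            // From a char (e.g. one taken out of the StringBuffer) back to the enum constant:
            char c = 'Q';
            Alphabet a = Alphabet.valueOf(String.valueOf(c));  // no cast, no big switch needed
            System.out.println(a);
        }
    }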

  • Source system mapping/conversion  between BW systems

    Hello  BO/BW Forum
    I have a question regarding source system mapping between BW systems.
    We are on BW 7 and we still have BW 3.5 objects (cubes, InfoObjects, ODS etc.).
    We have one source in BW Dev that needs to be split into two different sources in BW QA,
    e.g. BW Dev maps to ECC Dev client 110, and when we transport it should map
    to two sources, one to ECC QA client 200 and the other to ECC QA client 210.
    In the conversion in BW QA we have mapped it to the two ECC QA sources, but
    the problem is that this would work if we only had BW 7 objects; since we also have
    BW 3.5 objects the mapping is not complete. We also checked the 7.0 button in the conversion.
    Any assistance is highly appreciated.

    Hi,
    Hope you are aware that InfoSources interact with replicated DataSources.
    As per your post, BW1 is a source system for BW2 and BW3.
    So you have to create InfoSources in BW2 and BW3, and you have to replicate the DataSources in BW2 and BW3.
    Regards,
    Suman

  • Conversion between local time and UTC: Anomaly at Daylight saving time edge

    Hi all,
    I want to convert back and forth between local time and UTC, including all peculiarities like DST etc. This is the core of my code:
        Date d=new Date(toEpoch());
        Calendar cal=Calendar.getInstance(tz);
        cal.setTime(d);
        int offs=(cal.get( Calendar.ZONE_OFFSET )+cal.get( Calendar.DST_OFFSET ));
        System.out.println("zone_offset: "+cal.get( Calendar.ZONE_OFFSET )+", dst_offset: "+cal.get( Calendar.DST_OFFSET )+", offset: "+offs);
        Calendar cal2=Calendar.getInstance(TimeZone.getTimeZone("Etc/UTC"));
        System.out.println(cal2.getTimeZone().getID());
        cal2.setTime(d);
        cal2.add(Calendar.MILLISECOND,-offs);
        d=cal2.getTime();
    Looks (and, actually, is) rather complicated. I take two different calendar objects to be absolutely sure that no accidental DST conversion hits me on the UTC side.
    The general case works as expected:
    Date (fourteen digits, 'x'=Exit): 20050909190000
    Entered date: Fr, 9.9.2005, 19:00
    zone_offset: 3600000, dst_offset: 3600000, offset: 7200000
    Etc/UTC
    Date in UTC:  Fr, 9.9.2005, 17:00
    I live in timezone Europe/Berlin, so during DST we have GMT+2. Let's see what's happening on Christmas (non-DST):
    Date (fourteen digits, 'x'=Exit): 20051224180000
    Entered date: Sa, 24.12.2005, 18:00
    zone_offset: 3600000, dst_offset: 0, offset: 3600000
    Etc/UTC
    Date in UTC:  Sa, 24.12.2005, 17:00
    As expected!
    Now we enter the dark zone of DST: each year DST is turned on and off, and it seems as if this leads to problems in both cases. Let's start with the next upcoming change: on Oct. 30th, 2005, DST is turned off between 2:00 and 3:00 at night. So we have 2:30 once with DST and (one hour later) 2:30 again without DST. Java, however, garbles things completely (I'm leaving the unneeded lines of my debug output out now):
    Entered date: So, 30.10.2005, 01:59
    zone_offset: 3600000, dst_offset: 3600000, offset: 7200000
    Date in UTC:  Sa, 29.10.2005, 23:59
    Entered date: So, 30.10.2005, 02:00
    zone_offset: 3600000, dst_offset: 0, offset: 3600000
    Date in UTC:  So, 30.10.2005, 02:00
    Entered date: So, 30.10.2005, 02:30
    zone_offset: 3600000, dst_offset: 0, offset: 3600000
    Date in UTC:  So, 30.10.2005, 02:30
    Entered date: So, 30.10.2005, 02:59
    zone_offset: 3600000, dst_offset: 0, offset: 3600000
    Date in UTC:  So, 30.10.2005, 02:59
    Entered date: So, 30.10.2005, 03:00
    zone_offset: 3600000, dst_offset: 0, offset: 3600000
    Date in UTC:  So, 30.10.2005, 02:00
    So, directly before the DST change, the offset is computed correctly. From 3:00 onward, also. But in between, Calendar takes neither an offset of two hours (which would be somewhat correct) nor of one hour (also somewhat correct). Even though it claims to take the one-hour zone offset into consideration, the "GMT" time is actually the same as the local time.
    Things become even weirder at the other edge of the DST time range: this year, DST began on March 27th, 2005. Let's see what is happening:
    Entered date: So, 27.3.2005, 01:59
    zone_offset: 3600000, dst_offset: 0, offset: 3600000
    Date in UTC:  So, 27.3.2005, 00:59
    Entered date: So, 27.3.2005, 02:00
    zone_offset: 3600000, dst_offset: 3600000, offset: 7200000
    Date in UTC:  So, 27.3.2005, 00:00
    Entered date: So, 27.3.2005, 02:30
    zone_offset: 3600000, dst_offset: 3600000, offset: 7200000
    Date in UTC:  So, 27.3.2005, 00:30
    Entered date: So, 27.3.2005, 02:59
    zone_offset: 3600000, dst_offset: 3600000, offset: 7200000
    Date in UTC:  So, 27.3.2005, 00:59
    Entered date: So, 27.3.2005, 03:00
    zone_offset: 3600000, dst_offset: 3600000, offset: 7200000
    Date in UTC:  So, 27.3.2005, 00:00
    Entered date: So, 27.3.2005, 04:59
    zone_offset: 3600000, dst_offset: 3600000, offset: 7200000
    Date in UTC:  So, 27.3.2005, 01:59
    Entered date: So, 27.3.2005, 05:00
    zone_offset: 3600000, dst_offset: 3600000, offset: 7200000
    Date in UTC:  So, 27.3.2005, 03:00
    While at 1:59 everything is OK, 2:00 is handled as within DST. That's fine, as the hour between 2:00 and 3:00 did not exist that night in our local time zone, but at 3:00 the system totally screws things up as it suddenly subtracts three hours (while pretending to use two hours). This goes on until 5:00 in the morning, when the time is shown correctly again.
    Can anyone help me? What am I doing wrong? How do I actually convert correctly between a DST-aware local time zone and GMT/UTC?
    Best regards,
    Dirk

    Well, I would not say I did "just about everything" wrong. Actually, Calendar is not GMT-only: it has time zone information and converts its internal time according to that information when I use the getTime() function (which is what I do).
    In my applications, I handle dates in my own data structures. My EDate object (which is part of the QJCC library hosted on SourceForge) stores years, months, days, minutes, hours, and seconds seperately. Using this class, I managed to get an actually working time zone conversion this way:
      public void toUTC(TimeZone tz) {
        Calendar cal = Calendar.getInstance(tz);
        cal.clear();
        cal.set(year, month - 1, day, hour, minute, second);
        int offs = cal.get(Calendar.ZONE_OFFSET) + cal.get(Calendar.DST_OFFSET);
        //System.out.println("zone offset: " + cal.get(Calendar.ZONE_OFFSET) + ", DST offset: " + cal.get(Calendar.DST_OFFSET));
        second -= (offs / 1000);
        normalize();
      }
    normalize() does something similar to the lenient stuff in Calendar.
    I will now write the method for the other direction. The most interesting item will be the answer to the question:
    "Tell me the correct DST_OFFSET in TimeZone X when I have the actual point of time given in UTC."
    Perhaps someone can give me a hint?!?
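
    Regarding the question at the end (the offset of a time zone at a point in time given in UTC), java.util.TimeZone can answer it directly: TimeZone.getOffset(long) takes the instant as UTC milliseconds and returns the total offset (zone plus DST) in effect at that instant. A minimal sketch, using the October 2005 switch-over from the posts above as the test instant (class and method names are made up):

    import java.util.Calendar;
    import java.util.TimeZone;

    public class OffsetAtUtcInstant {
        // Total offset (zone + DST) of 'tz' at 'utcMillis',
        // where utcMillis is milliseconds since 1970-01-01T00:00:00 UTC.
        static int offsetMillisAt(TimeZone tz, long utcMillis) {
            return tz.getOffset(utcMillis);   // already includes DST if it is in effect
        }

        public static void main(String[] args) {
            TimeZone berlin = TimeZone.getTimeZone("Europe/Berlin");
            // Build a UTC instant: 2005-10-29 23:59 UTC, just before the switch-over.
            Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("Etc/UTC"));
            utc.clear();
            utc.set(2005, Calendar.OCTOBER, 29, 23, 59, 0);
            System.out.println(offsetMillisAt(berlin, utc.getTimeInMillis()) / 3600000.0); // 2.0 (CEST)
            utc.set(2005, Calendar.OCTOBER, 30, 2, 0, 0);
            System.out.println(offsetMillisAt(berlin, utc.getTimeInMillis()) / 3600000.0); // 1.0 (CET)
        }
    }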

  • Ipad2 hacked?! Mysterious iMessage conversation between unknown persons on my iPad

    I have owned my iPad 2 since last sept. it is basically my computer and with me almost all of the time.
    I do not own an iPhone.
    The ipad is pass code locked.
    Until today I have never used the iMessage app. Not opened it.
    It did not indicate I had any unread messages, for what I thought are obvious reasons.
    I opened it for the first time today to message a friend.
    I found a conversation already on my iPad, between unknown persons, talking about smoking something and using the n word a lot. The number from the device that is NOT my iPad is totally unfamiliar: nowhere on any contact list, phone log, email, etc. When I contacted this person to ask who they were, they were coy, would not tell me, and seemed utterly unconcerned that a stranger had just read their conversation.
    Also, iMessage claimed the conversation had been read on the date it was sent, back in March.
    Again, I have never even opened this app until today.
    This iPad has not been repaired or out of my possession or used by others at all.
    I also don't live with anyone who speaks in this manner. The diction is just not right. Nor anyone who smokes anything of any kind. My partner and I are too old for that ****.
    Furthermore, the date and time for the conversation were for March 19th in the early afternoon. I know exactly where I was. I was at home. ON MY IPAD. I even have some writing saved on that day that would have been from around that time in the day. It's an east bay number, but I live in sf AND, at that time, my iPad number had a Kentucky number attached to it (I only got it changed to a local area code earlier this month).
    This is pretty scary. Please tell me I am missing something and this is no big deal and explain to me how. Because this looks bad. It looks to me that two people discussing drugs used my iPad to port a conversation about it and that is really, really, really disturbing.

    The feature really doesn't work very well yet. Most likely a future update will fix it or at least make it more obvious why it doesn't behave consistently.
    For what it's worth, after following the caller ID instructions that you have tried, it took about a day for syncing to start working but some conversations and messages still never show up on my iPad.
    Not much we can do but wait for iOS 5.1 I'm afraid. No doubt apple are aware of the issue at this point.

  • Conversions between character sets when using 'exp' and 'expdp' utilities

    When I make an export with the exp utility with the NLS_LANG environment
    variable not set, the export is done with the US7ASCII character set,
    but when the server uses some other character set, e.g. EE8ISO8859P2,
    some national characters can be lost in such a conversion.
    But when I use the expdp utility without NLS_LANG set,
    the log file does not mention that the export is done in a character set
    other than the server's; does that mean it uses the character set of the server?

  • Testing conversion between reference and local currency...

    Hi,
    I am working on exchange rate configuration. If we define an exchange rate reference currency in the
    table for defining exchange rate types, SAP says that when using multiple currencies we need not maintain
    translation rates between each and every pair of currencies, but just between the reference currency and the foreign currencies.
    I am using 3 foreign currencies (X1, X2, X3) and one local or reference currency (BMD).
    I have defined the translation rates X1 -> BMD, X2 -> BMD and X3 -> BMD. Now I want to test whether translation between X1 and X2 takes place
    correctly. How can I test this? How can I prototype this in a function module?
    Regards,
    Rajesh.

    Hi,
    Use the FM:
    READ_EXCHANGE_RATE - retrieves the exchange rate on a particular date
    Hope this helps.
    Reward if helpful.
    Regards,
    Sipra
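
    Independently of the SAP function module, the arithmetic being tested is just the cross rate through the reference currency: rate(X1 -> X2) = rate(X1 -> BMD) / rate(X2 -> BMD). A small Java sketch of that check follows; the currency codes are taken from the question, but the rate values and the class name are made up for illustration.

    import java.util.HashMap;
    import java.util.Map;

    public class CrossRateCheck {
        public static void main(String[] args) {
            // Only rates against the reference currency (BMD) are maintained.
            Map<String, Double> toReference = new HashMap<String, Double>();
            toReference.put("X1", 0.50);   // 1 X1 = 0.50 BMD (illustrative value)
            toReference.put("X2", 0.25);   // 1 X2 = 0.25 BMD (illustrative value)

            // Cross rate X1 -> X2 derived through the reference currency.
            double x1ToX2 = toReference.get("X1") / toReference.get("X2");
            System.out.println("1 X1 = " + x1ToX2 + " X2");   // expected: 2.0
        }
    }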

  • GUID Conversion between CHAR22 and CHAR32

    I am invoking an RFC function through XI. The GUID type is RAW16.
    The result value for the GUID is CHAR22, for example: Q9dnXjzNAEoCAAAAChIaKA==.
    I want to get the CHAR32 value, and I don't know how to convert it.
    Could you help me with this conversion?
    I have used GUID_CONVERT, but it returns a wrong CHAR32 value. The CHAR22 for the right GUID is GzTdNZpD04e200002X8QAC.

    Hi Ulf,
    The rawtohex function you found must be used in an expression inside a graphical information model, or inside CE_CALC in scripts. It will probably solve your issue.
    Regards, Fernando Da Rós
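
    For what it's worth, the CHAR22 value in the question looks like a base64 encoding of the 16 raw bytes, while CHAR32 is their hexadecimal representation, so outside ABAP the conversion is simply base64-decode followed by hex-encode. A minimal Java sketch of that interpretation (whether your CHAR22 values use the plain or a SAP-specific base64 alphabet is an assumption to verify; the class name is made up):

    import java.util.Base64;

    public class Char22ToChar32 {
        public static void main(String[] args) {
            String char22 = "Q9dnXjzNAEoCAAAAChIaKA==";      // base64 of 16 raw bytes
            byte[] raw16 = Base64.getDecoder().decode(char22);
            StringBuilder char32 = new StringBuilder(32);
            for (byte b : raw16) {
                char32.append(String.format("%02X", b & 0xFF)); // two hex digits per byte
            }
            System.out.println(char32);                       // 32 hex characters
        }
    }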

  • RAW Conversion between CS2 and Aperture

    Has anyone done any sort of comparison of RAW conversion output from CS2 versus Aperture? Recently I've been using CS2 for my post-processing work and have really noticed that the CS2 processing seems to be much clearer, with better tones and detail. Lately I don't seem to be able to get a decent print out of Aperture; I've been using it since it first shipped and I know it fairly well, it just feels like I'm in some sort of a slump with it.

    If you search this forum for deer.nef you will find a couple of recent threads on that topic.
    I have done my own tests, converting several images using both apps. My conclusion is that both deliver a good conversion. Some images may come out better from one or the other, but you will probably get the best result from the one you are most familiar with.
    Note: the current RAW conversion in Lightroom is not as up to date as the one in the Photoshop CS3 Public Beta. That will change next Monday. Try your own images in both; you may have a personal style that works better with one or the other.

  • Column maping and conversion between dissimilar tables fails.

    Hello,
    I have two Oracle databases of different versions and replicate a single schema in one direction (9.2.0.8 -> 11.2.0.3).
    The replication tool is GoldenGate 11.1.1.1.2_05.
    All objects of the two replicated schemas are identical except one table -
    in 9.2.0.8 the table looks like this -
    create table test.t1 (id number, nm number);
    create index test.i_t1 on test.t1(id);
    however, in 11.2.0.3 -
    create table test.t11g (id number, vch varchar2(5));
    create index test.i_t1 on test.t11g(id);
    and all objects, except this table in its two incarnations, replicate successfully.
    Now about the GoldenGate structure - it looks like this -
    (DB 9.2.0.8) -> (Extractor) -> (Local trail) -> (Pump) -> (Network) -> (Remote trail) -> (Replicator) -> (DB 11.2.0.3)
    I decided to do the mapping and conversion at the target site, so I have these parameters for the replicat -
    --Replicat group --
    REPLICAT REP2
    --source and target definitions
    SOURCEDEFS /u01/app/oracle/product/11.1.1.1.2ogg411g/dirdef/sourcedef
    --target database login --
    USERID ogg, PASSWORD XXX
    --file for dicarded transaction --
    DISCARDFILE /u01/app/oracle/product/11.1.1.1.2ogg411g/discard/rep1_discard.txt, APPEND, MEGABYTES 10
    --ddl support
    DDL
    --Specify table mapping ---
    MAP test.t1, TARGET test.t11g, COLMAP (USEDEFAULTS, VCH=@STRNUM(NM));
    MAP test.*, TARGET test.*;
    DDLERROR 24344 DISCARD;
    And, of course, I have created the definitions file at the source site and copied it afterwards to the target site, so the SOURCEDEFS parameter reflects reality.
    For completeness I'll also provide the parameters of the extract and of the data pump extract -
    --extract group--
    EXTRACT ext1
    --connection to database--
    USERID ogg, PASSWORD xxx
    EXTTRAIL /u01/app/oracle/product/11.1.1.12ogg/dirdat/ss
    SEQUENCE test.*
    --DDL support
    DDL INCLUDE MAPPED OBJNAME test.*
    --DML
    TABLE test.*;
    TABLEEXCLUDE test.DIFFTYPE;
    -- Identify the data pump group:
    EXTRACT pump11
    --connection to database--
    USERID ogg, PASSWORD xxx
    RMTHOST db-dev-2, MGRPORT 7869
    RMTTRAIL /u01/app/oracle/product/11.1.1.1.2ogg411g/dirdat/tt
    sequence test.*
    -- Allow mapping, filtering, conversion or pass data through as-is:
    PASSTHRU
    -- Specify tables to be captured:
    TABLE test.*;
    Before generating the definitions file, these parameters were created at the source site -
    DEFSFILE /u01/app/oracle/product/11.1.1.12ogg/dirdef/sourcedef
    USERID ogg, PASSWORD xxx
    TABLE test.*;
    Whenever I try to insert a row at the source site my replicat abends immediately, and in $GG_HOME/ggserr.log these lines appear -
    2012-10-11 23:49:17  WARNING OGG-00869  Oracle GoldenGate Delivery for Oracle, rep2.prm:  Failed to retrieve column list handle for table TEST.T1.
    2012-10-11 23:49:17  ERROR   OGG-00199  Oracle GoldenGate Delivery for Oracle, rep2.prm:  Table TEST.T1 does not exist in target database.
    2012-10-11 23:49:17  ERROR   OGG-01668  Oracle GoldenGate Delivery for Oracle, rep2.prm:  PROCESS ABENDING.
    It seems to me that the source of the issue is some kind of bug in GoldenGate, but upgrading the system would be my last resort.
    These are my first steps in this area, so I would appreciate any help.
    Edited by: ArtemKhisamiev on 12.10.2012 0:23

    I've tried to slightly change the replicat params from this -
    --Replicat group --
    REPLICAT REP2
    <...>
    --Specify table mapping ---
    MAP test.t1, TARGET test.t11g, COLMAP (USEDEFAULTS, VCH=@STRNUM(NM));
    MAP test.*, TARGET test.*;
    DDLERROR 24344 DISCARD;
    to something like this -
    --Replicat group --
    REPLICAT REP2
      <...>
    --Specify table mapping ---
    MAP test.T1, TARGET test.T11G, COLMAP (USEDEFAULTS, VCH=@STRNUM(NM));
    MAP test.CITIES, TARGET test.CITIES;
    MAP test.COUNTRIES, TARGET test.COUNTRIES;
    MAP test.LOBEXP, TARGET test.LOBEXP;
    DDLERROR 24344 DISCARD;
    Now replication works. So the problem was the asterisk, which of course also stands for test.T1 - the table which is substituted by test.T11g at the target site.
    Then, how can I exclude test.T1 from the wildcard mapping? Enumerating every table in a schema is madness, especially if the schema really contains a lot of objects.
    Edited by: ArtemKhisamiev on 12.10.2012 1:38
    Edited by: ArtemKhisamiev on 12.10.2012 3:22
