Problem with anchored objects in file from CS4

I opened, in CS5, a file that was created in CS4 and discovered all my anchored objects were gone!
Analyzing the document, I realized they were still there, but inside a kind of mask ("Illustrator style") within a text frame, which seems odd to me.
I show here:
1. one picture of the CS4 file
2. one picture of the same file, but opened in CS5
3. one picture of this file, in CS5, showing the "ghost" when I put the cursor over the "missing" anchored object
If I cut and paste the objects they appear, but if I make them anchored objects again, they disappear again.
I had to fix everything in the document I was working on (it's a 124-page document...) without anchored objects, just grouping them, because it was urgent (as always...), but I am reporting this because maybe people haven't noticed it yet and maybe there's a way to fix it.

It's very hard to tell what might be causing this without seeing the actual file. I looked around and found a CS4 file on my system with hundreds of anchored photos, and they show fine when I open it in CS5.
My first impression is that this is either a layer problem of some sort, or perhaps the objects have been set not to print and you have a preview mode enabled, but the screen captures don't seem to support the latter.
What happens if you export the file to .idml in CS4 and open that in CS5?

Similar Messages

  • [CS3] GREP-Problem with Anchored Objects

    Hi,
    I have the following problem:
    Lines beginning with a, b, c, etc. should be formatted identically, so I use the following GREP:
    Search:
    ^([a-z])(\.??)([ \t])
    Replace:
    $1.\t
    Now there are some a, b, c lines that begin with an anchored object, so I tried:
    Search:
    ^(~a*?[a-z])(\.??)([ \t])
    Replace:
    $1.\t
    But with this, the anchored object is deleted. Is this a bug or am I doing something wrong?
    Thanks
    Tobias

    That's a known bug -- Dave Saunders discovered it, to his surprise, I might add. There seems to be no reason for it.
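    A possible workaround (my own untested sketch, not from this thread) is to keep the anchored-object marker out of the found text by asserting it in a lookbehind, so the replacement never touches the marker itself:
    Search:
    (?<=^~a)([a-z])(\.??)([ \t])
    Replace:
    $1.\t
    InDesign's GREP does support lookbehind, but since the bug involves anchored objects, test this on a copy of the document first.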

  • CS2 problem with 5D MKII RAW files from Lightroom 2.2

    Ok, this one is weird. I have tried it on two different computers with the same resulting problem. If I convert a full-size RAW 5D MkII file to TIFF in the new Lightroom 2.2, then bring it into CS2, convert it to 8-bit, and try to save as JPG, it always comes up with a program error. I tried an sRAW file with the same process and it worked fine.
    Just to make sure I didn't have something else messed up, I converted to TIFF in DPP and then went to CS2 and saved as JPG just fine.
    Anyone else try this and see a problem? Looks like it's a combination of a problem with the new Lightroom, plus CS2.
    It almost seems like CS2 doesn't have enough memory to convert the file to jpeg. I have had this program error before in CS2, but with much larger files. The 5D MKII should not overrun memory.

    This is a known bug in CS2 where it trips up because of the (valid) metadata written by Lightroom. You need to delete the XMP metadata written by Lightroom from the file before saving as JPG, or export from Lightroom with Minimize Metadata checked.
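    If you prefer to strip the XMP block outside Photoshop, one option (not mentioned above, just a sketch) is the ExifTool command line, whose documented syntax for deleting all XMP tags from a file is:
    exiftool -xmp:all= photo.tif
    Here photo.tif is only a placeholder for the TIFF exported from Lightroom; ExifTool keeps a backup copy of the original file by default.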

  • Any problems with migrating In Design files from Mac to PC?

    My wife is a commercial artist, and we are considering getting CS4 here at home on our PC.  She has a Mac at work.  Are there any problems with switching files between the Mac and the PC so she can get some work done on the same files at both the office and at home?
    Thanks!

    One recurring problem I have is the different types of the Symbol font. Mac InDesign insists on adding (TT) -- "Symbol (TT)" -- to 'Windows' documents, and vice versa. I think the root of the problem is Symbol on Windows is an OTF font, whereas on the Mac it's still a 'regular' TTF. My Mac compadres tell me it's not easy to transfer the Windows font to the Mac -- its own version apparently being rooted deep inside the system.
    However, it's not a big issue, as we by default encapsulate Symbol with a character style. So when ID alerts us, all we have to do is redefine this style, be it on Windows or on Mac. Fortunately I've never seen text reflowing because of this (knocks on plywood desk).

  • Problem with downloading a PDF file from a webserver run on Linux

    Hi,
    I've written a simple functionality that manages file attachments.
    Everything works fine (attaching, downloading, deleting) when the webserver runs on Windows.
    However, when I deployed the code to a Resin webserver running on Linux and use a Windows browser to connect to the app, downloading of PDF files doesn't work (uploading and downloading of txt, doc, xls, and jpg files is OK).
    The downloaded PDF file is almost twice as big as the original (~28KB when the original is ~12KB) and it can't be opened.
    I guess it is a problem with writing to the output stream of HttpServletResponse, but I can't pinpoint the problem.
    Here is the code I use for downloading file:
    private boolean downloadFile(HttpServletResponse response, String filePath,
                   String originalFilename) {
         File file = new File(filePath);
         String contentType = URLConnection
                   .guessContentTypeFromName(originalFilename);
         // If the content type is unknown, set the default value.
         if (contentType == null) {
              contentType = "application/octet-stream";
         }
         BufferedInputStream input = null;
         BufferedOutputStream output = null;
         try {
              input = new BufferedInputStream(new FileInputStream(file));
              int contentLength = input.available();
              response.reset();
              response.setContentLength(contentLength);
              response.setContentType(contentType);
              response.setHeader("Content-disposition",
                        "attachment; filename=\"" + originalFilename + "\"");
              output = new BufferedOutputStream(response.getOutputStream());
              int bufSize = 10000;
              byte[] buf = new byte[bufSize];
              int bytesNo = 0;
              while ((bytesNo = input.read(buf, 0, bufSize)) != -1) {
                   output.write(buf, 0, bytesNo);
              }
              output.flush();
              input.close();
              output.close();
         } catch (IOException e) {
              log.debug(e.getMessage());
              e.printStackTrace();
         }
    }
    Can you point out any problem?
    Thanks in advance,
    Ala

    matali wrote:
              int bufSize = 10000;
              byte[] buf = new byte[bufSize];
              int bytesNo = 0;               
              while ((bytesNo = input.read(buf, 0, bufSize)) != -1) {
                   output.write(buf, 0, bytesNo);
    This piece is completely wrong and doesn't work for files bigger than 10000 bytes. Replace it by
    byte[] buffer = new byte[10240]; // 10KB exactly.
    int length = 0;
    while ((length = input.read(buffer)) > 0) { // Read next 10KB of input to buffer.
        output.write(buffer, 0, length); // Write specified length of buffer to output.
    }
    Or just call output.write(input.read()) in a loop over the contentLength, since you're already using a BufferedInputStream/BufferedOutputStream.
    Another thing: the method is declared to return a boolean, but it never actually returns one. I would let it throw an exception in case of failure instead of returning a boolean, and let the calling method handle the exception.
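    A minimal sketch of that refactoring (my own illustration, not code from this thread): the method throws IOException instead of returning a boolean, closes the streams in a finally block, and takes the content length from the file size rather than input.available():
    private void downloadFile(HttpServletResponse response, String filePath,
              String originalFilename) throws IOException {
         File file = new File(filePath);
         String contentType = URLConnection.guessContentTypeFromName(originalFilename);
         if (contentType == null) {
              contentType = "application/octet-stream";
         }
         BufferedInputStream input = null;
         BufferedOutputStream output = null;
         try {
              input = new BufferedInputStream(new FileInputStream(file));
              response.reset();
              // the file size is more reliable than input.available() for Content-Length
              response.setContentLength((int) file.length());
              response.setContentType(contentType);
              response.setHeader("Content-disposition",
                        "attachment; filename=\"" + originalFilename + "\"");
              output = new BufferedOutputStream(response.getOutputStream());
              byte[] buffer = new byte[10240];
              int length;
              while ((length = input.read(buffer)) > 0) {
                   output.write(buffer, 0, length);
              }
              output.flush();
         } finally {
              // close quietly so a failure here doesn't mask an earlier exception
              if (input != null) { try { input.close(); } catch (IOException ignore) {} }
              if (output != null) { try { output.close(); } catch (IOException ignore) {} }
         }
    }
    The caller then wraps the call in its own try/catch and decides how to report the failure.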

  • I've switched from Google to Firefox because I was so tired of Google crashed all the time. Now I have the same problem with Firefox, Bank id, file down

    Hi!
    I've switched from Google to Firefox because I was so tired of Google crashing all the time. Now I have the same problem with Firefox: BankID, file downloading, accessing web pages, none of it works. What is the error?
    Ronnie.

    When Firefox crashes, it usually records information about what was happening at that moment and displays the Mozilla Crash Reporter. Are you seeing that, or is Firefox just freezing and you're killing it?
    Assuming you have crashes, you can submit that data to Mozilla and share it with forum volunteers to see whether it points to the solution. Please check the support article "[[Firefox Crashes]]" (especially the last section) for steps to get those crash report IDs, and then post some of the recent ones here.
    If that's not possible, have you tried Firefox's Safe Mode? That's a standard diagnostic tool to deactivate extensions and some advanced features of Firefox. More info: [[Troubleshoot Firefox issues using Safe Mode]].
    You can restart Firefox in Safe Mode using either:
    * "3-bar" menu button > "?" button > Restart with Add-ons Disabled
    * Help menu > Restart with Add-ons Disabled
    Not all add-ons are disabled: Flash and other plugins still run
    After Firefox shuts down, a small dialog should appear. Click "Start in Safe Mode" (''not'' Reset).
    Any difference?

  • What is the problem with converting objects with gradients to gradient mesh?

    What is the problem with converting objects with gradients to gradient mesh?

    slange,
    Some minor corruption states may arise that can be cured by what you did, as you can see in the list. Sometimes restarting three times is needed.
    The following is a general list of things you may try when the issue is not tied to a specific file (you may have tried/done some of them already): 1) and 2) are the easy ones for temporary strangeness, 3) and 4) are specifically aimed at possibly corrupt preferences, 5) is a list in itself, and 6) is the last resort.
    If possible/applicable, you should save current artwork first, of course.
    1) Close down Illy and open again;
    2) Restart the computer (you may do that up to 3 times);
    3) Close down Illy and press Ctrl+Alt+Shift/Cmd+Option+Shift during startup (easy but irreversible);
    4) Move the folder (follow the link with that name) with Illy closed (more tedious but also more thorough and reversible);
    5) Look through and try out the relevant among the Other options (follow the link with that name, Item 7) is a list of usual suspects among other applications that may disturb and confuse Illy, Item 15) applies to CC, CS6, and maybe CS5);
    Even more seriously, you may:
    6) Uninstall, run the Cleaner Tool (if you have CS3/CS4/CS5/CS6/CC), and reinstall.
    http://www.adobe.com/support/contact/cscleanertool.html

  • Problem with Preview and PSD files - random gray square

    Hi guys, hope you can help...
    I've got a problem with Preview and PSD files.
    If I open in Preview both an original JPG straight from my reflex camera and the Photoshop version of the same picture, the PSD file shows a gray square (of what seems to be an unrendered image) in a random area of the photo (sometimes in the center). The square is quite big...
    If I zoom in or out it disappears... if I scroll to another photo and then back to the PSD, the square is there again... sometimes in a different position.
    I've tried the same PSD on my older iMac with Leopard... and got no problem at all.
    I suspect it has something to do with my ATI...
    (this is the second iMac 27...the first went back for gray banding on the lcd screen and flickering and yellow tinge........)
    Thanks for your help.
    DAve.

    maybe I'm onto something...
    I've just found out that opening Preview in 32-bit mode (instead of the default 64-bit) works flawlessly with any PSD file. If I switch back to 64-bit mode, Preview is much faster but the gray square comes back...
    It seems like the i7 is much faster than the ATI...
    Any more realistic ideas?

  • Problem with Append mode in File Receiver

    Hello,
    I am facing a problem with Append Mode in the File Receiver.
    In the channel config, I have given:
    Construction Mode : Append
    File Type : Text
    Message Protocol : File Content Conversion
    The size of the file which i am trying to send is about 9.5MB.
    I got this error,
    "Recovering from loss of connection to database; message
    loaded into queue by recover job: System Job (Failover Recovery)".
    So, it would seem that there was a loss of connection to the database
    while the file was being written.
    Note -  XI successfully recovered from the connection loss and   
    successfully wrote the file, however since the communication channel  
    was set to append, it appended to the partial file that was written   
    before the database connection loss. This is not correct. The file    
    should have been overwritten after the recovery even though the communication
    channel was configured to append.                                     
    Can anyone help me in this regard?
    Thanks,
    Soorya.

    Hi Venkat,
    I would suggest you split the file into chunks if you face any problem processing it all at once in append mode. Also, the memory requirements are a must for processing huge files:
    Q: Which memory requirements does the File Adapter have? Is there a restriction on the maximum file size it can process?
    A: The maximum file size that can be processed by the File Adapter depends on a number of factors:
    o The most important one is the size of the Java heap, which is shared among all messages processed at a certain point in time. In order to be able to process larger messages without an out of memory error (OOM), it is recommended to increase the size of the available Java heap and/or to reduce the concurrency in the system so that fewer messages are processed in parallel.
    o Another factor negatively influencing the maximum message size in releases up to and including XI 3.0 SP 13 is an enabled character set (encoding) conversion if the message type is set to "Text".
    o Using the transport protocol "File Transfer Protocol (FTP)" also uses more memory for processing than the transport protocol "File System (NFS)" (up to and including XI 3.0 SP 13).
    o If the Message Protocol "File Content Conversion" is used in a File Sender channel, consider that not only the size of the input file affects the File Adapter's memory usage, but even more the size of the XML resulting from the conversion, which is usually a few factors larger than the original plain text file.
    To reduce the memory consumption in this scenario, consider configuring the setting "Maximum Recordsets per Message" for the sender channel. This will cause the input file to be split into multiple smaller messages.
    Please refer to the following links:
    You may plan the availability of your communication channel using the "Planning Availability Times" feature:
    http://help.sap.com/saphelp_nw04/helpdata/en/45/06bd029da31122e10000000a11466f/frameset.htm
    /people/sravya.talanki2/blog/2005/11/29/night-mare-processing-huge-files-in-sap-xi
    Also check the links below for reference:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/10748ef7-b2f0-2910-7cb8-c81e7f284af5
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/7086f109-aaa7-2a10-0cb5-f69bd2affd2b
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/2498bf90-0201-0010-4884-83568752a857
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cc1ec146-0a01-0010-90a9-b1df1d2f346f
    Regards,
    Vinod.

  • Problem with:  select 'c' as X from dual

    Problem with 'select 'c' as X from dual'
    I get two different results when I execute the above with SQL*Plus (or Java) depending on the instance I am connected to. For one instance the result is a single character, and for the other the character is padded with blanks to 32 characters in the SQL*Plus window (and in Java). Does anyone know what database setting causes this to happen? Is it a version issue?
    Test #1: Oracle 9.2.0.6 - SQL*Plus result is padded with blanks
    SQL*Plus: Release 9.2.0.1.0 - Production on Mon Dec 10 09:27:58 2007
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    Connected to:
    Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
    With the Partitioning option
    JServer Release 9.2.0.6.0 - Production
    SQL> select 'c' as X from dual;
    X
    c
    SQL>
    Test #2: Oracle 9.2.0.1 - SQL*Plus result is a single character.
    SQL*Plus: Release 9.2.0.1.0 - Production on Mon Dec 10 09:29:27 2007
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    Connected to:
    Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    SQL> select 'c' as X from dual;
    X
    c
    SQL>

    Using 9.2.0.6 on AIX 5.2 I get the single-character result:
    UT1 > select 'c' as X from dual;
    X
    c
    If the databases are on different Oracle Homes you may want to check the sqlplus global logon files for any set commands.
    If you executed the two sql statements from different OS directories you may also want to check your sqlpath for sqlplus .logon files.
    Try issuing CLEAR COLUMNS and repeating the statement. Are the results the same?
    HTH -- Mark D Powell --
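    As an illustration of the login-script theory (a hypothetical glogin.sql entry, not something taken from this thread), a stray COLUMN command is enough to produce the padded output:
    -- hypothetical setting picked up from glogin.sql / login.sql:
    COLUMN X FORMAT A32
    -- with it in effect, the one-character literal is displayed in a 32-character column:
    SELECT 'c' AS X FROM dual;
    -- resetting column formatting restores the single-character display:
    CLEAR COLUMNS
    SELECT 'c' AS X FROM dual;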

  • Problem with RTPExport output video files

    Hi, I have a problem with RTPExport output video files. One side streams H263/RTP (AVTransmit2.java) and the other side writes this stream to a file with RTPExport.java. When network conditions are ideal, the output video file has the same fps and the same number of frames as the original file. The problem occurs when there is packet loss in the network; then the output file has a different fps and also fewer frames than the original video (because it didn't write the missing frames to the file, which is why it gets shorter). Please, how can I achieve an output file that will have the same fps as the original one? How can I write to a file an identical copy of what I see while receiving video with AVReceive2.java? Is there a way to modify RTPExport or AVReceive2 to do this? Thanks a lot!

    Trubka wrote:
    When network conditions are ideal, the output video file has the same fps and the same number of frames as the original file. The problem occurs when there is packet loss in the network; then the output file has a different fps and also fewer frames than the original video (because it didn't write the missing frames to the file, which is why it gets shorter).
    Okay, first off, the second file is smaller on purpose. RTP intentionally drops packets that are old/out of order in order to make sure real-time video stays as close to real time as it can. This is by design, so there's really nothing that can be done about it.
    How can I write to a file an identical copy of what I see while receiving video with AVReceive2.java?
    Technically speaking, what you're getting from RTPExport is exactly what you got on the receiving end. Any frames that are dropped during transmission will not be seen by the receiver, nor saved by the receiver.
    Please, how can I achieve an output file that will have the same fps as the original one?
    I'm not 100% sure that you can, but you can give the following idea a try. I make no guarantees that it'll work, but it should work for you...
    [http://java.sun.com/javase/technologies/desktop/media/jmf/2.1.1/solutions/RTPConnector.html]
    That page shows an example of a "custom transport layer" for RTP connections. Essentially, it's some code that's handed the RTP packets on the transmission end and is expected to deliver those RTP packets on the other end. It doesn't care about how they get from A to B, only that they do.
    If you were to replace the UDP socket in that example with a TCP socket, you would be guaranteed not to drop packets due to network reasons. Every RTP packet you were handed by the transmitter, you would then hand to the receiver. There is no guarantee that none of the packets would be cast away as "old" by the RTP protocol itself, but there's also no guarantee any of them would be. It's a crap-shoot at best, but it's certainly worth a try.

  • A problem with importing layered PSD file into Flash

    Hi.
    There's a problem with importing layered PSD file into Flash.
    If I import a layered PSD file, in some parts the color of the lower layer shows at the edges of objects or shadows. However, if I crop each layer first and then import them, there's no problem.
    If the upper layer has brush or transparency effects, it gets worse.
    Any help with this problem?
    Thanks.

    How was the original art created? Was the original RGB or CMYK? What is the resolution of the Photoshop file? Flash only works well with RGB and 72 pixels per inch resolution. If your original art is not set this way, then Flash will attempt to convert it as it imports it. Flash uses the sRGB color space. You'll get the best color translation if your Photoshop file is using this color preference.

  • CS3: HDR Merge Problem with Canon 50d RAW files

    I recently upgraded from a 20D to a 50D.  I'm running CS3 under WinXPpro with 3GB of RAM.  HDR merge of 3 CR2 files is causing CS3 to generate a program error and crash.  This never happened with my older (smaller) files from the 20D.  Anybody know how to fix this?
    TIA,

    The suggestion was to try an HDR merge with smaller 50D sRAWs as an *experiment* to see if the size matters more than the camera model. Another benefit of the 50D is 14 bits per pixel instead of 12, and an sRAW has less noise per pixel than a full-size RAW, so using smaller sRAW images might work better than the 20D's.
    The "size" being referred to is the dimensions in pixels, not the on-disk size, because the in-memory size of an image should be similar whether it comes from a 16-bit TIFF or a 16-bit RAW conversion.
    Merging RAWs will involve ACR during the HDR process, whereas merging TIFFs won't unless you have the setting to prefer ACR for TIFFs, too, which you don't indicate.
    There can be data in the highlights and especially shadows from a RAW that gets thrown away when it's converted to TIFF, so merging RAWs is better than merging from TIFFs.
    I currently have only CS4 on my machine so I forget, but with CS3, does it matter what output size you have set in ACR when you do an HDR conversion from RAW? In other words, does HDR using ACR ignore the output size or not? If not, then perhaps setting a smaller size in ACR (the same area where you set the bit depth and conversion colorspace) would make things work, or setting a larger size with 20D images would make them fail. This is suggested as an experiment as well, to try to isolate the issue, not necessarily as a permanent solution.
    I don't think there is anything inherently wrong with a 50D RAW that would make it not work, but the larger size would use more memory and maybe your machine is getting close to some sort of limit.
    Other suggestions would be to reset your preferences and check your disk(s) with scandisk, and/or change your scratch disk to somewhere else to make sure there's no corruption keeping the HDR process from working with the larger 50D images; along those lines, also try purging your ACR cache or moving it to another drive. You might also try changing the amount of memory available to Photoshop (RAM vs. scratch disk) in the Preferences.
    What module does the crash occur in, if the error says?  Is it related to display or something else?

  • Problem with embedded objects

    Hi,
    I have a problem with embedded objects which contain embedded objects.
    When I create an Object in the persistent memory and commit this object I get the following error:
    ORA-22805: cannot insert NULL object into object tables or nested tables
    In the constructor of my persistent object I create the embedded members in transient memory:
    ATestPersObj::ATestPersObj() : m_count(0), m_lang(NULL), m_cost(NULL) {
      m_lang = new AEnumLanguage_OraType();
      m_cost = new AAmount_OraType();
    }
    Or when I dereference a reference I get this error:
    ORA-00600: internal error code, arguments: [kokeicadd2], [16], [5], [], [], [], [], []
    Can somebody give me a hint?
    I've defined the following Type:
    CREATE OR REPLACE TYPE AENUM_ORATYPE AS OBJECT (
                             VALUE  NUMBER(10,0)
                           ) NOT FINAL ;
    CREATE OR REPLACE TYPE AENUMLANGUAGE_ORATYPE UNDER AENUM_ORATYPE (
                           ) FINAL ;
    CREATE OR REPLACE TYPE ACURRENCY_ORATYPE AS OBJECT (
                             ISONR          NUMBER(5,0)
                           ) FINAL ;
    CREATE OR REPLACE TYPE AAMOUNT_ORATYPE AS OBJECT (
                             CCY        ACURRENCY_ORATYPE,
                             LO32BITS   NUMBER(10,0),
                             HI32BITS   NUMBER(10,0)
                           ) NOT FINAL ;
    CREATE OR REPLACE TYPE ATESTPERSOBJ AS OBJECT (
                             COUNT    NUMBER(4),
                             LANG     AENUMLANGUAGE_ORATYPE,
                             COST     AAMOUNT_ORATYPE   
                           ) FINAL ;
    oracle::occi::Ref<ATestPersObj> pObjR = new(c.getConnPtr(), "TTESTPERSOBJ") ATestPersObj();
    pObjR->setCount(i+2001);
    pObjR->setLang(AEnumLanguage(i+1));
    pObjR->setCost(AAmount(ACurrency(), 2.5));
    c.commit();
    c.execQueryRefs("SELECT REF(a) FROM TTESTPERSOBJ a", persObjListR);
    len = persObjListR.size();
    for (int i = 0; i < len; i++) {
      oracle::occi::Ref<ATestPersObj> pObjR = persObjListR[i];
      pObjR->getCount();
      pObjR->getLang();
    }
    c.commit();
    With kind regards
    Daniel
    Message was edited by:
    DanielF

  • Problem with inherited Objects

    Hi,
    I have a problem with inherited objects inside Flex, using WSDL as the source of the objects. The AS classes are generated in Flex Builder 3.
    Inside the wsdl I have 2 complex types:
    <complexType abstract="true" name="PersistentObject">
    <sequence>
    <element name="id" nillable="true" type="xsd:string"/>
    <element name="insertTimeStamp" nillable="true"
    type="xsd:dateTime"/>
    <element name="insertUsername" nillable="true"
    type="xsd:string"/>
    <element name="updateTimeStamp" nillable="true"
    type="xsd:dateTime"/>
    <element name="updateUsername" nillable="true"
    type="xsd:string"/>
    </sequence>
    </complexType>
    and
    <complexType name="Contact">
    <complexContent>
    <extension base="tns3:PersistentObject">
    <sequence>
    <element name="birthday" nillable="true"
    type="xsd:dateTime"/>
    <element name="firstName" nillable="true"
    type="xsd:string"/>
    <element name="lastName" nillable="true"
    type="xsd:string"/>
    <element name="middleName" nillable="true"
    type="xsd:string"/>
    <element name="newPassword" nillable="true"
    type="xsd:string"/>
    <element name="password" nillable="true"
    type="xsd:string"/>
    <element name="title" nillable="true"
    type="xsd:string"/>
    <element name="username" nillable="true"
    type="xsd:string"/>
    </sequence>
    </extension>
    </complexContent>
    </complexType>
    The classes in ActionScript seem plausible:
    public class PersistentObject
    {
        // Constructor, initializes the type class
        public function PersistentObject() {}
        public var id:String;
        public var insertTimeStamp:Date;
        public var insertUsername:String;
        public var updateTimeStamp:Date;
        public var updateUsername:String;
    }
    and
    public class Contact extends PersistentObject
    {
        // Constructor, initializes the type class
        public function Contact() {}
        public var birthday:Date;
        public var firstName:String;
        public var lastName:String;
        public var middleName:String;
        public var newPassword:String;
        public var password:String;
        public var title:String;
        public var username:String;
    }
    When I want to retrieve an object of type Contact, it seems
    that only a couple of entries are filled. While debugging the flex
    XMLDecoder, I noticed something strange. It seems, like the decoder
    is expecting the result xml data to be in alphabetical order:
    birthday, firstname, lastname, etc. But since the object is
    inherited, the data that is actually received contains elements
    from the parent class: birthday, firstname, id, inserttimestamp, and so on.
    The resulting object has just birthday and firstname filled,
    which is somehow wrong. This seems to be a problem inside the
    parser itself. What can I do?

    I am having a problem with an extended class as well.
    When I step through the code, everything is going fine and the decoder (mx.rpc.xml::XMLDecoder) sees that the class is an extension and wants to get the values for the superclass first.  When it gets into getApplicableValues(), it's looking for the values to be in the order of the definition which would be ok if the values collection didn't include the values from the subclass as well!  It goes through the whole definition and doesn't find anything for the superclass because the values aren't where it expects them.  When it pops back up to the subclass and starts to decode those values, it finds them because the definition order and values order match.
    Is this a known issue?  Or, am I misunderstanding something?
    Thanks,
    Chuck
