NPE deep within the TopLink stack

We are seeing an NPE deep within the TopLink stack, as shown below. Any ideas why this could happen?
java.lang.NullPointerException
at oracle.toplink.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:127)
at oracle.toplink.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:441)
at oracle.toplink.publicinterface.Session.executeCall(Session.java:728)
at oracle.toplink.internal.queryframework.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:117)
at oracle.toplink.internal.queryframework.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:103)
at oracle.toplink.internal.queryframework.DatasourceCallQueryMechanism.selectOneRow(DatasourceCallQueryMechanism.java:501)
at oracle.toplink.internal.queryframework.ExpressionQueryMechanism.selectOneRowFromTable(ExpressionQueryMechanism.java:872)
at oracle.toplink.internal.queryframework.ExpressionQueryMechanism.selectOneRow(ExpressionQueryMechanism.java:847)
at oracle.toplink.queryframework.ReadObjectQuery.executeObjectLevelReadQuery(ReadObjectQuery.java:415)
at oracle.toplink.queryframework.ObjectLevelReadQuery.executeDatabaseQuery(ObjectLevelReadQuery.java:812)
at oracle.toplink.queryframework.DatabaseQuery.execute(DatabaseQuery.java:620)
at oracle.toplink.queryframework.ObjectLevelReadQuery.execute(ObjectLevelReadQuery.java:780)
at oracle.toplink.queryframework.ReadObjectQuery.execute(ReadObjectQuery.java:388)
at oracle.toplink.queryframework.ObjectLevelReadQuery.executeInUnitOfWork(ObjectLevelReadQuery.java:841)
at oracle.toplink.publicinterface.UnitOfWork.internalExecuteQuery(UnitOfWork.java:2631)
at oracle.toplink.publicinterface.Session.executeQuery(Session.java:993)
at oracle.toplink.internal.indirection.QueryBasedValueHolder.instantiate(QueryBasedValueHolder.java:62)
at oracle.toplink.internal.indirection.QueryBasedValueHolder.instantiateForUnitOfWorkValueHolder(QueryBasedValueHolder.java:77)
at oracle.toplink.internal.indirection.UnitOfWorkValueHolder.instantiateImpl(UnitOfWorkValueHolder.java:143)
at oracle.toplink.internal.indirection.UnitOfWorkValueHolder.instantiate(UnitOfWorkValueHolder.java:217)
at oracle.toplink.internal.indirection.DatabaseValueHolder.getValue(DatabaseValueHolder.java:61)
at com.integral.finance.dealing.RequestC.getParentRequest(RequestC.java:803)
at com.integral.is.management.monitor.TradeMonitorMessageBuilderC.addCustomFieldstoTradeObject(TradeMonitorMessageBuilderC.java:327)
at com.integral.is.management.monitor.TradeMonitorMessageBuilderC.createTradeMessage(TradeMonitorMessageBuilderC.java:181)
at com.integral.is.management.monitor.TradeMonitorMessageDispatcher.run(TradeMonitorMessageDispatcher.java:63)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:650)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:676)
at java.lang.Thread.run(Thread.java:595)

That is odd. What are you doing to get this error? Are multiple threads running? Are you by chance using a DatabaseSession concurrently? In general it is recommended to use a ServerSession in a multithreaded application.
James : http://www.eclipselink.org
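
For reference, here is a minimal sketch of that pattern: one shared ServerSession for the application, and a per-thread ClientSession/UnitOfWork for each worker. This is not from the reply above; the project file name, the bootstrap via XMLProjectReader, and the oracle.toplink.* package names are assumptions based on the 10g API and may need adjusting for your TopLink version.

    import oracle.toplink.sessions.Project;
    import oracle.toplink.sessions.Session;
    import oracle.toplink.sessions.UnitOfWork;
    import oracle.toplink.threetier.ServerSession;
    import oracle.toplink.tools.workbench.XMLProjectReader;

    public class SharedSessionHolder
    {
        private static ServerSession serverSession;

        // One pooled, thread-safe ServerSession for the whole application.
        public static synchronized ServerSession getServerSession()
        {
            if (serverSession == null)
            {
                // "toplink-project.xml" is a placeholder for however the Project is built.
                Project project = XMLProjectReader.read("toplink-project.xml");
                serverSession = (ServerSession) project.createServerSession();
                serverSession.login();
            }
            return serverSession;
        }

        // Each worker thread reads and writes through its own ClientSession/UnitOfWork.
        public static void doWork()
        {
            Session client = getServerSession().acquireClientSession();
            try
            {
                UnitOfWork uow = client.acquireUnitOfWork();
                // ... read, register and modify objects here ...
                uow.commit();
            }
            finally
            {
                client.release();
            }
        }
    }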

Similar Messages

  • Sorting of images within a stack

    When working with stacks some strange things happen:
    - When I try to duplicate one image out of a stack into an album, the whole stack is moved to the album. When I then delete this stack in the album, the order of the images of the same stack in my projects is changed.
    - When I reject images within a stack, the rejected images are not automatically moved to the rejected images folder. Instead they stay in the stack even though they are marked as rejected.
    - When I sort images within a stack by rating and then try to limit the image view to, for instance, only images with 3 stars or more, nothing happens. Showing only rejected images does not work within a stack either. Instead, filtering by rating is applied to the whole project and not just the stack.
    Am I doing something wrong, or is this a known Aperture bug?
    MBP   Mac OS X (10.4.9)  

    David, your understanding (use) of stacks is causing a conflict. The whole idea is basically to group a series of similar images and choose the best one of that series, called the Pick. Then, within that series you can rank those images in an order of Pick to next best, and so on. You can view stacks open or closed - when closed, the Pick is the one on top.
    Here's some info from the Help Menu that will help answer your questions:
    Note: When you place a stack in a book album or web gallery or web journal album, Aperture displays the stack pick. If you drag a stacked image that is not the stack pick into the book or web gallery or web journal album, Aperture reminds you to select the stack pick. If you don’t want to place the pick in the album, but want to use a different version from within the stack, select the version you want and then make it the album pick by choosing Stacks > Set Album Pick.
    Dragging Stacks
    You can drag an entire stack to a new location, and you can drag specific images within a stack to a new location. When a stack is closed, dragging the stack moves the entire stack. When a stack is open, you can drag individual versions to new locations in the Browser. You can also drag images into a stack. If you drag a stacked image into a different project, however, the entire stack moves to the new location.
    If you still want to be able to use a stacked image as you have stated, then take that image out of the stack, split the stack, or create a version of the image and remove it from the stack.
    Take a quick refresher through the help menu - it will give you some further details and ideas so you can accomplish exactly what you need to do.
    Hope that helps!!

  • Exact meaning of IL offset within a stack trace.

    I understand that when an exception is thrown (and there's no PDB around) the stack trace exposes the IL offset of where the exception arose (in the lowest frame).
    I'm not very clear, though, on what this means: is it the IL offset of the IL operation that threw the exception, or the IL offset of the previously executed IL operation, or something else?
    I'm examining ILDASM dumps of assemblies and the IL offset sometimes doesn't quite make sense.
    e.g. one null reference exception reports an offset of IL_0063 in the stack frame, but that is just a br.s operation.
    Any help on exactly how the system determines the IL offset when it reports the stack trace would be greatly appreciated.
    Thx

    Regardless of whether or not there is a PDB, managed code is able to provide a full trace with method names for the managed portion of the stack due to the metadata within the assembly. Where are you getting the offset? The offset that Visual Studio provides is the native offset within the jitted native method rather than the IL offset within its managed counterpart (or at least it was in VS2010; I haven't verified it since then).
    My understanding is that, when provided, the IL offset is simply mapped from the native offset using a table generated at JIT time (which may be an approximation due to optimizations). For the leaf native frame, the native offset is where the exception was thrown, while non-leaf native frames use the return address from the stack, which may make it look like the exception was thrown by the instruction following the one where it actually happened.

  • Detect clicked cluster in mouse down event for clusters within multiple stacked clusters

    With the help of Ben (see http://forums.ni.com/t5/LabVIEW/Determine-cluster-element-clicked-in-mouse-down-event/td-p/1245770)
    I could easily find out what sub-cluster had been clicked on (mouse down event) within the main cluster, by using the Label.Text Property.
    However if you have a cluster within a cluster within a cluster then you probably have to use individual mouse down events for each sub-sub cluster.
    I just wanted to use one "Main Cluster":Mouse Down event and from that determine which of the sub-sub clusters had been clicked on - is this even remotely possible?
    Chris.

    Chris Reed wrote:
    With the help of Ben (see http://forums.ni.com/t5/LabVIEW/Determine-cluster-element-clicked-in-mouse-down-event/td-p/1245770)
    I could easily find out what sub-cluster had been clicked on (mouse down event) within the main cluster, by using the Label.Text Property.
    However if you have a cluster within a cluster within a cluster then you probably have to use individual mouse down events for each sub-sub cluster.
    I just wanted to use one "Main Cluster":Mouse Down event and from that determine which of the sub-sub clusters had been clicked on - is this even remotely possible?
    Chris.
    Yes but... you will have to pass through 26 Kudos worth of Nuggets to get there (Well maybe you can skip the last 5 or so).
    This Nugget by Ton teaches us how to use Dynamic Event Registration. (15 Kudos, must read and understand)
    This Nugget by me talks about getting at references inside arbitrary data structures. (11 Kudos, You don't have to read the whole thing, only enough to get at nested objects).
    So use the stuff I wrote about to gather up the references to the clusters. Build them into an array, and then use dynamic event registration together with what you learned in the thread you linked in your question.
    So Possible? Yes!
    Easy? YOU tell me.
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • Organize within a stack

    I have many stacks of multiple photos, often due to pre-Aperture methods of creating various versions for high res, printing, low res, b&w, email, web use, etc.
    Is there a way in AP2 to open a stack and arrange it by file size with the largest file being the pick automatically? Or by pixel size or any other metadata for that matter. Thanks.

    And to be a little more specific, say I have a stack with 20 images. There may be 2 large color TIFF files, three large JPEG files, a few GIFs and some smaller JPEGs of various sizes. Since Aperture generates lower-res and other options from a Master file depending on how I am going to use it, I no longer need all these versions of the same image for different purposes. I like this. I want to trash all the low-res versions and only work with the high-res image. In the end, I want to compare the two large TIFF files (likely two scans of the same image) and see which is the better image to work with. If I could organize my stacks by file size automatically, this would make quick work of weeding out the, well, weeds I guess. The library is 10,000+. Lots of stacks.
    By the way, is there a way to create a smart album with the search criteria "image is in a stack"?
    Thanks

  • Auto Sync metadata within a stack?

    Is there a setting to enable pictures in the same stack to auto-sync metadata? Currently, if you select pictures in Grid view, it will only select the top picture. So after I have updated keywords, etc., I must manually expand each stack and update the other pictures in that stack. Is there an easier way?

    No, by design, Lr will only apply metadata or develop settings to the top image in a collapsed stack. So the only way is to apply the metadata to the expanded stack.

  • How to add a pane from DEEP within System Prefs to the dock

    I frequently use my HP Deskjet scanner, and the way I scan stuff is by doing:
    System Prefs > Printers & Scanners > [select the printer] > Scan > Open Scanner...
    Is there a way to essentially record the LAST action, the "Open scanner" mouse-click, and add it to my dock for quick (and instant) scanner access?
    Saving these 15 seconds ten times a day might actually add a week to my life. And please, explain it like I'm 5.

    I cannot help with your exact question; someone else may.
    However, could you not use the Image Capture application? It is supplied with OS X and is in the Applications folder. If it scans the way you want, just drag it to the Dock for easy access.

  • Sort images within stack by rating?

    Hi Gang,
    I'm editing a massive job and there's one part of the process that is taking forever. After I've rated every image within a stack, I seem to have to re-sort them manually to be shown from highest rating to lowest, either by dragging or by using the "promote/demote" buttons.
    Is there any way to make images in a stack sort themselves by rating? This would literally save me hours of time.

    I know of no way to automatically sort a stack by rating. You could create Smart Albums with only certain ratings included. I know that's not exactly what you wanted to do, but it would accomplish a segregation of your photos by rating.
    Sorry not to be able to provide an answer, but I don't think there is one.
    Joel

  • "Fill base line"/"Fill to" problem within stack plots

    Hi there,
    I am trying to plot multiple data sets in a waveform chart with stacked plots, but I can't control the Fill To option correctly for any plot other than the first plot in the window. As shown in the attachment, even when I set the fill baseline to zero for the second plot (the green one) in the first window, it behaves as if it fills to -infinity. I am using LabVIEW 2010 DS2.
    Any solution? Or have I made a mistake here?
    Thanks.
    Attachments:
    Stack Plot Fill Baseline.vi ‏12 KB

    I agree things behave a bit weirdly if the number of traces is not a multiple of the number of stacked plots. Do you want three stacked plots, or should two traces share one of the plots? Maybe you should use more defined data structures, e.g. an array where each element is a cluster of size 3.
    Try the following:
    resize the plot legend so 4 stacked plots show.
    Now resize it again for two stacked plots.
    In my case, the fill to zero is now correct.
    LabVIEW Champion. Do more with less code and in less time.

  • Logging just exceptions and 'critical' TopLink info to the log

    I'm trying to just log exceptions and other 'critical' info that occur within TopLink to the log rather than getting lots of SQL statements, unit of work info etc.
    I'm running TopLink 9.0.3.5 in WebLogic Server 7.0 (SP4) using container managed persistence for Entity Beans.
    If I start up WebLogic with the toplink.log.level=INFO option, then I get SQL statements, unit of work info, and JTS registration info, as well as any exceptions that are logged.
    If I leave the logging like this, my WebLogic log will likely be huge and performance will be degraded by writing a lot of info that will never be used.
    If I start up WebLogic with the toplink.log.level=NONE option, then I don't get any log statements, not even exception info (although obviously clients still get the exception stack). I need to get the exceptions and 'critical' TopLink info into the WebLogic log because I cannot rely on getting the information from client logs.
    In the TopLink for WebLogic 2.5.1 product the default logging behaviour was to log only exceptions and other 'critical' info to the log.
    Is there a way to configure TopLink 9.0.3 so that only exceptions and any other 'critical' TopLink information is written to the log (and SQL statements, unit of work info and JTS registration info are not written)?
    Thanks.
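
    One approach used with the classic oracle.toplink API is to install a custom SessionLog that forwards only entries carrying an exception. The sketch below is written from memory of that era's API; the class and method names (DefaultSessionLog, SessionLogEntry.getException(), setSessionLog(), logMessages()) and the right place to install it in a WebLogic CMP deployment should be verified against the 9.0.3 Javadoc.

        import oracle.toplink.sessions.DefaultSessionLog;
        import oracle.toplink.sessions.Session;
        import oracle.toplink.sessions.SessionLogEntry;

        // Sketch: suppress everything except log entries that carry an exception.
        public class ExceptionOnlySessionLog extends DefaultSessionLog
        {
            public void log(SessionLogEntry entry)
            {
                // Forward only exception entries; SQL, unit-of-work and JTS chatter is dropped.
                if (entry.getException() != null)
                {
                    super.log(entry);
                }
            }

            // Install wherever your deployment hands you the session (e.g. a pre-login hook).
            public static void install(Session session)
            {
                session.setSessionLog(new ExceptionOnlySessionLog());
                session.logMessages(); // logging must stay enabled for entries to reach the log
            }
        }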

    You don't mention which version of 10g you have, but there is a bug in all versions 10.1.2.0.2 and newer in that usernames are no longer being inserted in the Apache log files when portal pages are viewed. It was somewhat hit or miss before, but good enough to get a feeling of what was being used. Now, it does not even provide that. The bug number reference from Metalink is 5638057. It is shown as "Closed -- not feasible to fix", but will be addressed in 11.0.
    I am experimenting with getting this data a couple of ways. One, if you happen to use WebTrends, you can manually set the authenticated users field to whatever you'd like, so I am using the APIs to retrieve the username and the user company (organization) and concatenating them together.
    The other option I am considering is a procedure call in the footer of each page that automatically updates a new table with the session id, username, page, timestamp, and whatever other information you may want each time the page is visited. This table can then be dumped to a data file if desired or left in the database and analyzed using a tool like Discoverer.
    Rgds/Mark M.

  • C++ stack overflow on destruction of Set::View

    Hi,
    I've encountered a problem when querying large sets of data from Coherence.
    Given the following code:
        // load cache with a large data set
        char buff[128] = {0};
        for (int i = 0; i < 100000; ++i)
        {
            sprintf_s(buff, 128, "key%05d", i);
            // random binary data
            int len = 128 + rand() % 512;
            Array<octet_t>::Handle hab = Array<octet_t>::create(len);
            hCache->put(String::create(buff), hab);
        }

        // query the cache, and print the results
        struct tmpIt
        {
            Filter::View vAll;
            Set::View    vSetResult;
        };
        tmpIt *t = new tmpIt;
        t->vAll       = AlwaysFilter::create();
        t->vSetResult = hCache->entrySet(t->vAll);

        // iterate over the results
        int ncount = 0;
        for (Iterator::Handle hIter = t->vSetResult->iterator(); hIter->hasNext(); )
        {
            Map::Entry::View vEntry = cast<Map::Entry::View>(hIter->next());
            String::View vKey   = cast<String::View>(vEntry->getKey());
            Object::View vValue = vEntry->getValue();
            Array<octet_t>::Handle dt = cast<Array<octet_t>::Handle>(vEntry->getValue());
            ncount++;
            if ((ncount % 2000) == 0)
                std::cout << ncount << std::endl;
        }

        // delete the struct, thus forcing the destruction of vSetResult
        delete t;

    Deleting the struct 't' causes a stack overflow deep within Coherence. I'm guessing there's some kind of recursive delete that works on smaller data sets.
    Can you please advise me on what to do in this situation, as this has put a halt to my Coherence integration :)
    Cheers
    Rich
    Edited by: user9929344 on 22-Oct-2008 01:57

    Hi Rich,
    I've tried your test out in a few environments using both debug and release builds and have not been able to reproduce the issue. My guess is that it may have something to do with your build settings. Can you try building it using the build.cmd script which we ship with the product? This can be done by creating a new subdirectory under examples for your test, for instance "overflow", placing your source code in that directory, and then running "build overflow" from the examples directory. You can then run the test by executing "run overflow". This is how I've tested it, and all seems to be OK. Assuming this resolves the issue on your side, you can have a look at the compiler/linker settings used in the build.cmd script and try applying them to your build process. If it does not resolve the issue, it would be useful if you could send us a fully buildable example (and build script) which reproduces it. Also, if you could include the OS and compiler versions, that would help.
    In testing your source code I ran into a few small things I thought I should point out to you.
    - While loading, if you batch your puts into a local HashMap and then do periodic hCache->putAll() operations, loading will be faster:
        // load cache with a large data set
        char buff[128] = {0};
        Map::Handle hMapBatch = HashMap::create();
        for (int i = 0, c = atoi(argv[2]); i < c; ++i)
        {
            sprintf(buff, "key%05d", i);
            // random binary data
            int len = 128 + rand() % 512;
            Array<octet_t>::Handle hab = Array<octet_t>::create(len);
            hMapBatch->put(String::create(buff), hab);
            if ((i % 2000) == 0)
            {
                hCache->putAll(hMapBatch);
                hMapBatch->clear();
                std::cout << "loaded " << i << std::endl;
            }
        }
        hCache->putAll(hMapBatch);
        hMapBatch->clear();
    - During iteration you should not cast the values back to Handles, but rather only to Views. It is not guaranteed that the cache will return non-const references to the data.
    I did validate that the test did not fail prior to making the above modifications.
    thanks,
    mark

  • How to put a single stack item to an album?

    Maybe a silly question, but I can't find out how to put a single item from a stack into an album. When I drag a stacked item to an album, the entire stack appears in the album. This is not what I want, I want to get the single item in the album.
    I know I can make a single stacked item in an album the album pick. But this is not enough. Sometimes I need to put two items from the same stack into this album.
    Koen
    Message was edited by: Koen van Dijken

    Koen van Dijken wrote:
    Would this be as designed, and so a wrong use of stacks by me?
    It is as designed, but I don't think what you want to do is a particularly wrong use of stacks. I can see using stacks as a way to reduce clutter in the browser while still wanting to be able to get to individual images in the stack to use.
    What is interesting is the wording in the Aperture users guide concerning stacks:
    Dragging Stacks
    You can drag an entire stack to a new location, *and you can drag specific images within a stack to a new location*. When a stack is closed, dragging the stack moves the entire stack. *When a stack is open, you can drag individual images to new locations in the Browser*. You can also drag images into a stack. If you drag an image within a stack into a different project, however, the entire stack moves to the new location.
    (emphasis added)
    So the first part sounds like you should be able to drag an image out of the stack to an album, the second just mentions dragging an image out of the stack in the browser.
    And in a way you can do this; the only thing is that what you end up doing is unstacking the image you drag out. Not exactly what you would want to do.
    I think it should be OK to place individual stack items into albums and have them remain in the stack. Any reason this would be a bad idea?

  • Accessing the Session in Toplink Essentials

    I tried to access the session this way:
    http://ontoplink.blogspot.com/2007/01/accessing-session-in-toplink-essentials.html
    and get exception:
    (Oracle TopLink Essentials - 2.0 (Build b41-beta2 (03/30/2007))): oracle.toplink.essentials.exceptions.ValidationException
    Exception Description: Could not find the session with the name [my_name] in the session.xml file []
    Is session.xml needed for this? I hope not, because it requires project.xml, and that may require something else... All I need is:
    session.getEventManager().addListener(myEventListener);
    to force TopLink to switch the database schema in the listener's preLogin method. In Hibernate, this can be done with one line in the config file.
    Lumir

    OK, I was trying to obtain the session BEFORE creating the EntityManagerFactory and EntityManager.
    Now, when obtaining it AFTER creating the EMF and EM, it works.
    But this does not solve the problem of changing the DB schema, because AFTER the EMF and EM are created the session is already logged in, so the EventListener's preLogin isn't called.
    Can anyone point me to how to change the DB schema? I know how to do it for a connection defined using toplink.jdbc.* properties, but not for JNDI:
    public class ToplinkSessionCustomizer implements SessionCustomizer
    {
        public void customize(Session session) throws Exception
        {
            if (jdbcUrl == null)
            {
                // JDBC URL is not set, assume JNDI lookup
                JNDIConnector connector = (JNDIConnector) session.getLogin().getConnector();
                connector.setLookupType(JNDIConnector.STRING_LOOKUP);

                // 1. Does not work, causes an NPE
                // at oracle.toplink.essentials.descriptors.ClassDescriptor.verifyTableQualifiers(ClassDescriptor.java:3518)
                session.getDatasourcePlatform().setTableQualifier(schema);

                SessionEventAdapter preLoginEventListener = new SessionEventAdapter()
                {
                    // Listen for preLogin events
                    public void preLogin(SessionEvent event)
                    {
                        event.getSession().getLogin().setTableQualifier("myschema");
                    }
                };
                // 2. Does not work, eventManager is null, causes an NPE!
                session.getEventManager().addListener(preLoginEventListener);
            }
            else
            {
                // This works OK
                DatabaseLogin login = session.getLogin();
                login.useOracleThinJDBCDriver();
                login.setDatabaseURL(jdbcUrl);
                login.setUserName(userName);
                login.setPassword(password);
                login.setTableQualifier(schema);
            }
        }
    }
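
    For what it's worth, here is a minimal sketch of wiring such a customizer in when the factory is created, so that it runs before login. The toplink.session.customizer property name, the com.example package, and the "myPU" unit name are assumptions to be checked against your TopLink Essentials build, not something confirmed in this thread.

        import java.util.HashMap;
        import java.util.Map;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class SchemaSwitchBootstrap
        {
            public static EntityManagerFactory createFactory()
            {
                Map props = new HashMap();
                // Assumed property name for registering a SessionCustomizer in TopLink Essentials.
                props.put("toplink.session.customizer", "com.example.ToplinkSessionCustomizer");
                // "myPU" is a placeholder persistence-unit name.
                return Persistence.createEntityManagerFactory("myPU", props);
            }
        }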

  • Stacked and standard item groups

    Hi,
    Can anyone give me a clue as to whether the following problem can be solved?
    I have a table with many columns which I want to display on a single form with multiple tab-canvases. And on each tab-canvas I want to use item groups (two side-by side!).
    I managed to generate this only partly by using stacked item groups for each tab-canvas and item groups within these stacked item groups. But the item groups are all placed beneath each other! I cannot find a way to control the layout generation. It looks like the Forms generator does not use the tabulation preferences.
    I am using Designer 6 (6.0.3.10) and Forms 6i (6.0.8.10).
    Thanks.

    It is not possible to get a service item without delivery into the invoice along with the delivery item, unless the service item is configured as an item relevant for delivery.

  • Stacking Images and Ratings: Bug or Feature?

    So, I went through my main library of images (7000+ NEF raw files) and applied star ratings to some of the images; several (50+) of those got 5 stars.
    Later, I got the idea that it would be good to manually stack most of the images into what I would consider logical collections or stacks: command+K on a selected group.
    And this is where I'm confused.
    I noticed today that a custom search that I created doesn't return all of the images that it should. Here's an example:
    I created a smart album that looks for .nef files with a 5 star rating; another one looks for .psd files with a 5 star rating.
    Anyway, many of the 5 star images no longer show up in this smart album, and it would appear that is related to those images being inside a stack that was manually created. Removing those images from a stack (as an experiment) brought them back into the smart album.
    So, I'm scratching my head. Is this supposed to work this way? Why wouldn't Aperture "see" a 5-star image within a stack, even when that stack was manually created without regard to time/date/etc.? Why are these images hidden from a smart album?
    I may be missing some fundamental point in the whole stacking concept.
    Has anyone else noticed this behavior?

    FOUND THE COMMON DENOMINATOR!!!
    "The built-in 5-star rating smart album does not work as it cannot find rated images inside stacks."
    As an update to my previous post, I have found the cause of the problem and a PARTIAL workaround. Still a pain in the neck though!
    The built in '5 star', '1 star or Better' and the 'rejected' smart folders actually DO work, PROVIDING (and here's the problem) the rated picture is the FIRST picture in the stack!!
    It only recognises the 1st image in the stack and NO others. This means that your rated image has to be the FIRST one in the stack.
    The problem is that if you have more than one rated image in the same stack, only the FIRST image is taken into account.
    This only applies to the built in smart folders provided at library level.
    All other smart folders that we create should have the 'ignore stack' checkbox checked to view all the images.
    Anyone got a thought on whether this is a feature or a bug?
    Spike

Maybe you are looking for

  • How can I use Back to my Mac when my ISP blocks port 1900?

    I was just forced to switch ISPs (don't ask...) and it turns out that my new ISP (Astound) lied to me and actually does block port 1900, which means that Back to my Mac (on which I rely) does not work. Has anyone seen this and found a viable workarou

  • Payment information required while I selected non long back

    I Don't know what to do, can't download any app in all my apple devices ! I Opened they'd purchased history from the laptop once as before a year or couple of years I purchased some apps from another countries and I didn't install them again but late

  • Query string

    Hi, What is the easiest way to get query string parameters in Adobe Edge Animate CC? /Daniel

  • Flash Player for Symbian 60 5th Edition?

    I couldnt find where to download for my Sony Ericsoon Vivaz mobile phone that run S60 5th Edition same as Nokia version but seem it dont support Adobe Flash so do I have to wait for new version which woul dbe 4.0 in the future cos Ive tried the versi

  • Typefi Publishing System

    Hi to All You Techies out there, I am researching and writing a white paper on a relatively new software application, Typefi Publishing System (TPS), which, as you may know, uses MS Word, Adobe InDesign, and Acrobat to translate content to XML, creat