TopLink Timestamp Issue

We have noticed that when TopLink is used to insert a datetime into our table, the time entered is always an hour ahead. We are in the EST zone. When we use custom SQL with SYSDATE to insert or update dates, the datetime is stored with the correct time. The BPEL transforms always show the time correctly in the BPEL monitor, but by the time it gets inserted into the table it is an hour ahead. I thought maybe it was getting the date from the server, but we checked and that time is local time.
Any ideas?

Are you trying to insert the date from the BPEL Console or from the ESB Console? If you are inserting it from the ESB Console, check the SYS time in the ESB database.
I suspect there might be an hour's time difference between the ESB and BPEL metadata databases.
-Ramana.
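One pattern worth ruling out (an assumption here, not something the post confirms): JDBC drivers bind java.sql.Timestamp values using the JVM's default time zone, so if the JVM hosting TopLink defaults to a zone one hour off from the database session, every insert shifts by an hour even though BPEL displays the value correctly. A pure-JDK sketch of the effect:

```java
import java.time.Instant;
import java.time.ZoneId;

public class TzShiftDemo {
    // Render the same instant in two zones one hour apart; this is the
    // size of shift a JVM-default-zone mismatch introduces on every bind.
    static int hourIn(String zone) {
        Instant t = Instant.parse("2007-01-15T12:00:00Z"); // arbitrary winter instant
        return t.atZone(ZoneId.of(zone)).getHour();
    }

    public static void main(String[] args) {
        System.out.println("New York: " + hourIn("America/New_York")); // 7 (UTC-5 in winter)
        System.out.println("Chicago:  " + hourIn("America/Chicago"));  // 6 -- one hour behind
    }
}
```

If TimeZone.getDefault() on the BPEL server turns out to be wrong, starting that JVM with -Duser.timezone=America/New_York is a common fix.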

Similar Messages

  • SCORM timestamp issue

    Hi there,
    I've been having issues with multiple quiz attempts: not all interactions are being sent to the LMS. Most answers are sent, though the LMS randomly drops a couple of answers on each re-attempt of the quiz, even though each question is set to report its answer. It also doesn't appear to occur on the first attempt, only on new attempts after the first failed one. The LMS provider says this is a timestamp issue in the course, which sends an incorrect SCORM format; the LMS therefore rejects the value when the same interaction id is already there from a previous failed attempt. Below is the error from the LMS. Can anyone advise how this might be corrected inside Captivate?
    [11:05:25.897] SetValue('cmi.interactions.29.timestamp', '2013-09-13T11:05:25.0+00:60') returned 'false' in 0.001 seconds
    [11:05:25.898] GetLastError() returned '406' in 0 seconds
    [11:05:25.899] GetErrorString('406') returned 'Data Model Element Type Mismatch' in 0 seconds
    [11:05:25.900] GetDiagnostic('') returned 'The cmi.interactions.29.timestamp value of '2013-09-13T11:05:25.0+00:60' is not a valid time type.' in 0 seconds

    I'm using Captivate 6.
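For reference, the LMS is right to reject that value: in an ISO 8601 zone offset the minutes field must be 00-59, so '+00:60' is malformed; the equivalent valid offset is '+01:00'. A small JDK check (an illustration only, not Captivate code) reproduces the behaviour the LMS is reporting:

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeParseException;

public class ScormTimestampCheck {
    // Returns true when the string is a valid ISO 8601 offset date-time.
    static boolean isValid(String ts) {
        try {
            OffsetDateTime.parse(ts);
            return true;
        } catch (DateTimeParseException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValid("2013-09-13T11:05:25.0+00:60")); // offset minutes must be 0-59
        System.out.println(isValid("2013-09-13T11:05:25.0+01:00")); // the equivalent valid form
    }
}
```

Since the offset is generated when the course reports the interaction, the fix does have to happen on the authoring/player side rather than in the LMS.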

  • TopLink Map issue

    Hi all
    I'm very new to ADF.
    While creating a TopLink object I selected EMP as my table. The wizard asked for a TopLink map, so I created a new one and it took some default value.
    Now when I rebuild this project it gives the following error:
    TopLink Map 'tlMap1' -> One of the packages is incomplete.
    Package dataprj -> One of the descriptors in this package is incomplete.
    Descriptor Emp -> No primary keys specified in APPS.EMP table.
    End TopLink Map 'tlMap1'
    Please help me fix this issue.

    hi user530923
    Maybe you should look for the "EMP" node in the "Offline Database Sources" node of your JDeveloper project (using the Applications Navigator in JDeveloper).
    If you right-click that "EMP" node, you can select "Properties...". The Edit Offline Table dialog should show. Select "Primary Key Information" on the left. Has a primary key been defined?
    You could also check the "Descriptor Info" tab for your "Emp" descriptor. Select "Application Sources" > "TopLink" > "tlMap1" in your JDeveloper project, double-click the "Emp" descriptor in the Structure pane, and check "Primary Keys" on the "Descriptor Info" tab.
    success
    Jan Vervecken

  • HTC One M8 call history timestamp issue

    I have waited patiently for several months through a couple firmware updates, expecting to see this issue addressed. However, it seems to have received very little attention or recognition.
    Using the stock Phone app, calls prior to the current day do not display a timestamp. Only calls made or received during the current day display the time of the call; calls from earlier days display only the date, and calls placed yesterday display simply "yesterday." Long-pressing an entry without a timestamp and selecting "View History" only displays the call duration.
    A few friends have the same phone on another carrier and their devices display the name, phone number, date and timestamp in the "Call History" and "Phone" logs, even without the need to long press and "View History." They are also using the same stock Phone application. It is quite frustrating to not see the timestamps of missed calls.
    The few references online regarding this issue seem to have concluded this is a carrier programming issue, as those on Verizon have the same issue and those on another carrier do not.
    Does anyone using a Verizon HTC One M8 not have this issue using the stock Phone app?
    Does anyone know of a fix or workaround to address this without installing 3rd-party apps?
    Is Verizon planning to address this issue? Have they already?
    Whom shall I contact to raise this issue?
    Thanks in advance.

    Hi Pamela,
    Here's how things look.
    I can't remember it being any other way on my handset.
    Would appreciate anyone kindly confirming (or denying) this is how entries look on their Verizon HTC M8.
    (Censored for Privacy)
    Call History:
    Phone History:
    Long-Press entry > View Call History:

  • DbAdapter / Toplink Performance issues

    I'm using the DbAdapter / Toplink to fetch a sizable dataset (500 - 30,000 rows) for later emission to a file. The fetch of this data via the DbAdapter / Toplink seems to work reasonably well (albeit quite slowly) for datasets between 500 and 7,000 rows. If I attempt to read a larger dataset, the DbAdapter invocation simply times out. I've tweaked the following configuration options, suspecting they might help with the long-running invoke of the DbAdapter partner link:
    * created an onAlarm branch and allowed 15 minutes for the dataset to be returned.
    * increased the syncMaxWaitTime attribute to 15 minutes.
    * configured the BPEL OC4J instance to use 2048MB for the JVM heap.
    I've tested the query being used by the DbAdapter via SQL Navigator and it returns the data sub-second. I suspect the performance issue is related to the DbAdapter / Toplink rendering the thousands of rows into an XML DOM representation. Might this be the case?
    To help further my hypothesis, I'd like to enable the Toplink profiler. I've found a document in the Oracle doc library that suggests it can be enabled with the following directive, "logProfile" (http://www.oracle.com/technology/products/ias/toplink/doc/1013/MAIN/_html/optimiz003.htm#BEEBCBJF). Any idea where I'd specify this directive? Perhaps somewhere in the toplink_mappings.xml file?
    I could be completely off base and the performance issue could be attributed to some other aspect of my BPEL process. Has anyone else encountered this sort of behavior when working with DbAdapters that return thousands of rows?
    Thanks,
    Peter

    Hi Peter,
    if you are still experiencing the timeout problem you may want to alter the transaction-config timeout setting in the server.xml file. The tuning guide says to change one file, but we have changed it in two locations:
    j2ee/OC4J_BPEL/config (to 700000) and integration/orabpel/system/appserver/oc4j/j2ee/home/config/ (600000). We have previously found that syncMaxWaitTime in the console needs to be set to a value lower than the settings in the above files (540 secs).
    For the performance question: have you captured the SQL that BPEL is actually running from the domain log (set collaxa.cube.ws logging to debug)? Is it from a single table or from multiple? We managed to improve the performance by setting use-joining in the toplink-mappings file, but then ended up creating a view in the source system to limit the queries being executed. I am not sure about BPEL's capacity to handle large payloads, but it would be interesting to know.
    Ashley
    Ashley
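The two timeout locations Ashley describes can be sketched like this (the values come from the post; treat the exact element placement as an assumption and check your OC4J version's documentation):

```xml
<!-- j2ee/OC4J_BPEL/config/server.xml -->
<transaction-config timeout="700000"/>

<!-- integration/orabpel/system/appserver/oc4j/j2ee/home/config/server.xml -->
<transaction-config timeout="600000"/>
```

Keep syncMaxWaitTime in the BPEL console below both values (e.g. 540 seconds), as noted above.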

  • Toplink Caching issue

    I have three entities, say C1, P1, and Card. The relations between these entities are:
    P1 to Card (one-to-one mapping)
    C1 to Card (one-to-many mapping)
    I have three use cases. Use case 1 uses the main entity C1: it creates one record using C1 (keep in mind that there will not be any data for the Card entity at that time).
    The second use case, use case 2, uses entity P1 and creates data in P1 as well as Card.
    The third use case again takes C1 and tries to get the Card data that was inserted through P1.
    Unfortunately, it comes back as null. But if I restart the server and access the third use case, then I get all the Card data from the C1 entity that was saved through the P1 entity.
    To summarise my finding, the issue is with TopLink caching. When I accessed the first use case, C1 was cached. In the second use case, I updated entity Card using P1. When I accessed the third use case, TopLink returned the cached entity. TopLink is not able to identify whether a related entity has changed.
    A workaround is not to use the cache for the C1 entity. It works, but it is not the correct solution.
    Has anyone faced this issue in TopLink? Correct me if my findings are not correct.

    Hello,
    It's not that the cache isn't working; it's that you are updating relationships in the database but not in the cache. Take the case of a 1:M from B->C used in a previous response. It looks like the foreign key used in this relationship (in the C table) is mapped to the C object either as a direct-to-field mapping or through a 1:1 to the A object. When you change this so that it now somehow points to the B object, you haven't updated the B object to include the C in its collection. Because B has a 1:M mapping collection of Cs, when you next get that B object from the cache, its collection will not contain the newly referenced C (assuming indirection is not used or has been previously triggered). This is a result of mapping the 1:M in the first place, since it stores the collection in the object.
    If you are going to make changes that affect relationships and yet not update the relationships directly I would suggest either:
    A) refreshing affected objects from the database when done. In this case, refreshing the B object will result in TopLink requerying the B->C relation and picking up any changes (depending on your refresh options on the B descriptor though - please check the docs if you need more info).
    B) Not mapping these relationships. Instead, when you need to find C objects referenced from B, query for them.
    Both options are less efficient as they cause more hits to the database. This is why it would be preferred if you updated the relationships when you change fields that affect those relationships, but it is up to you to decide which will work best for your application.
    Please note that the cache is working, in fact the situation you are seeing is due to it working. I am not sure of how your situation would work with DAO as you mentioned in a previous post, but it sounds like it would always hit the database. This situation is a result of you making changes in the database but not changing the object model to reflect those changes, proving that the cache is indeed working. While you mention that some objects are not in your use case, they are in the database but just not mapped in TopLink. In the case of a 1:M mapping, the database shows a 1:1 back relationship that you have not mapped in your object model. So this would be more a case of your object model not matching your database model, and not being taken into account in your use cases.
    Best Regards,
    Chris

  • ConversionException in Toplink TIMESTAMP mapping

    We have a column mapped in Toplink as java.sql.Timestamp. In order to make this work they've added a few lines to the toplink Oracle.xml file defining the timestamp type.
    Inside JDeveloper this runs fine, however when run under Ant I get the following:
    EXCEPTION [TOPLINK-3001] (TopLink - 9.0.3 (Build 423)): oracle.toplink.exceptions.ConversionException
    EXCEPTION DESCRIPTION: The object [oracle.sql.TIMESTAMP@258], of class [class oracle.sql.TIMESTAMP], could not be converted to [class java.sql.Timestamp].
    I've downloaded the newest JDBC drivers but the problem persists.
    Anyone have some advice?

    EXAMPLE CONVERSION MANAGER SOLUTION:

    package example;

    import java.sql.SQLException;
    import java.sql.Timestamp;
    import oracle.sql.*;
    import oracle.toplink.exceptions.ConversionException;
    import oracle.toplink.internal.helper.ConversionManager;

    /**
     * Example conversion manager that adds the ability to convert Oracle9i DB
     * TIMESTAMP values into java.sql.Timestamp. This is required for TopLink
     * 9.0.3 and 9.0.4.
     * To use this class you must register the conversion manager prior to
     * loading your project:
     * <code>
     * ConversionManager.setDefaultManager(new MyConversionManager());
     * </code>
     */
    public class MyConversionManager extends ConversionManager {
        private static final Class TIMESTAMP_TYPE = TIMESTAMP.class;

        protected Timestamp convertObjectToTimestamp(Object original) {
            if (original.getClass() == TIMESTAMP_TYPE) {
                try {
                    return ((TIMESTAMP) original).timestampValue();
                } catch (SQLException se) {
                    throw ConversionException.couldNotBeConverted(original, Timestamp.class, se);
                }
            }
            // NOTE: similar checks could be added for TIMESTAMPTZ and TIMESTAMPLTZ
            return super.convertObjectToTimestamp(original);
        }
    }

  • ODI timestamp issue

    Hi,
    We have a source column with datatype DATE (Oracle 10g).
    When we just select the date from the table, the output is like '17-SEP-10 08:02:32'.
    In the target, the date column's datatype is also DATE (Oracle 10g).
    It's a one-to-one mapping. After running the interface, if we check the date column we see '17-SEP-10 12:00:00' for all the records.
    What might be the issue? The timestamp is not the same as the source; instead we are getting the timestamp 12:00:00, the same for all records.
    Do any changes need to be made here? Please let me know.
    Thanks,
    Ananth

    I'm not sure it's the same for you, but some time ago we saw an issue with our dates between source and target DBs (both 10g).
    When the source had HH24:MI:SS filled correctly, the target always had 00:00:00.
    It occurred only when we were loading data from one server to another (never when source and target were on the same server).
    It looks really similar to your issue.
    After some searching, it seemed to come from our JDBC driver (the thin one).
    To correct this, I add a property in Topology each time I use this JDBC URL.
    For each data server I add the following property on the Properties tab:
    Key = oracle.jdbc.V8Compatible
    Value = true
    You can maybe try this.
    Hope it helps.
    Regards,
    Brice

  • BUGREPORT: Timestamp issue when creating new url mappings

    We have come across an issue when adding multiple databases and subsequent URL mappings.
    The issue presented in the log is:
    ####<Nov 16, 2012 10:49:52 AM CST> <Error> <HTTP> <adeoraapp03.santos.com> <WLS_APEX> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1353025192413> <BEA-101020> <[ServletContext@955221946[app:apex module:apex.war path:/apex spec-version:2.5]] Servlet failed with Exception
    java.lang.IllegalArgumentException: Not a correctly formatted timestamp: 2012-11-15T23:52:58.0080Z
         at oracle.dbtools.common.util.Timestamps.valueOf(Timestamps.java:61)
         at oracle.dbtools.common.config.db.UrlMappings$Builder$1PoolFilter.startElement(UrlMappings.java:199)
         at oracle.dbtools.common.x3p.MatchFilter.startElement(MatchFilter.java:54)
         at oracle.dbtools.common.x3p.impl.Event.invoke(Event.java:52)
         at oracle.dbtools.common.x3p.impl.Chain$EventIterator.advance(Chain.java:125)
         at oracle.dbtools.common.x3p.impl.Chain$EventIterator.advance(Chain.java:79)
         at oracle.dbtools.common.util.AbstractIterator.next(AbstractIterator.java:28)
         at oracle.dbtools.common.x3p.impl.X3PReaderAdaptor.next(X3PReaderAdaptor.java:34)
         at oracle.dbtools.common.config.db.UrlMappings$Builder.read(UrlMappings.java:170)
         at oracle.dbtools.common.config.db.UrlMappings.existing(UrlMappings.java:99)
         at oracle.dbtools.common.config.db.UrlMappings.urlMappings(UrlMappings.java:93)
         at oracle.dbtools.common.config.db.DatabasePoolConfig.loadFromXML(DatabasePoolConfig.java:285)
         at oracle.dbtools.common.config.db.DatabasePoolConfig.loadFromDBFromTime(DatabasePoolConfig.java:181)
         at oracle.dbtools.common.config.db.DatabasePoolConfig.getPoolInfo(DatabasePoolConfig.java:54)
         at oracle.dbtools.rt.jdbc.DatabaseConnectionFilter.poolInfo(DatabaseConnectionFilter.java:60)
         at oracle.dbtools.rt.jdbc.DatabaseConnectionFilter.applyDatabaseConnectionInfo(DatabaseConnectionFilter.java:71)
         at oracle.dbtools.rt.web.HttpEndpointBase.service(HttpEndpointBase.java:119)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:301)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:184)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3732)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3696)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2273)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2179)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1490)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
    >
    This seems to be due to a time format issue when adding a new database mapping to the url-mappings file.
    When running the following command:
    @:as11g_pfrd_prod> java -jar apex.war map-url --type base-path /apexpoc apexpoc
    Nov 16, 2012 10:22:57 AM oracle.dbtools.common.config.file.ConfigurationFolder logConfigFolder
    INFO: Using configuration folder: /data1/software/oracle/product/as11g_pfrd_prod/user_projects/domains/PFRD_Domain/servers/WLS_APEX/apex_config/apex
    @:as11g_pfrd_prod> pwd
    /data1/software/oracle/product/as11g_pfrd_prod/user_projects/domains/PFRD_Domain/servers/WLS_APEX/stage/apex
    After running the command, the following entry was added to url-mapping.xml (the timestamp is in UTC):
    <pool base-path="/apexpoc" name="apexpoc" updated="2012-11-15T23:52:58.0080Z"/>
    This caused the above stack trace, and also an internal error 500 in the browser when trying to access APEX.
    We removed the URL mapping and ran the following command (at a different time of day):
    @:as11g_pfrd_prod> pwd
    /data1/software/oracle/product/as11g_pfrd_prod/user_projects/domains/PFRD_Domain/servers/WLS_APEX/stage/apex
    @:as11g_pfrd_prod> java -jar apex.war map-url --type base-path /apexpoc apexpoc
    Nov 16, 2012 10:58:35 AM oracle.dbtools.common.config.file.ConfigurationFolder logConfigFolder
    INFO: Using configuration folder: /data1/software/oracle/product/as11g_pfrd_prod/user_projects/domains/PFRD_Domain/servers/WLS_APEX/apex_config/apex
    This resulted in a new entry in the URL mappings file:
    <pool base-path="/apexpoc" name="apexpoc" updated="2012-11-16T00:28:35.479Z"/>
    and the APEX Listener worked again.
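The two entries differ only in the fraction: '.0080' (four digits) fails while '.479' (three digits) works, which suggests (an assumption based on just these two samples) that the listener's Timestamps.valueOf expects exactly three fractional digits, even though four digits are legal ISO 8601. A strict-pattern JDK check reproduces that behaviour:

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class FractionCheck {
    // Strict ISO timestamp with exactly three fractional digits, mimicking
    // (as an assumption) the format the listener's parser expects.
    private static final DateTimeFormatter STRICT =
        DateTimeFormatter.ofPattern("uuuu-MM-dd'T'HH:mm:ss.SSSX");

    static boolean parses(String s) {
        try {
            OffsetDateTime.parse(s, STRICT);
            return true;
        } catch (DateTimeParseException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(parses("2012-11-16T00:28:35.479Z"));  // the entry that worked
        System.out.println(parses("2012-11-15T23:52:58.0080Z")); // the entry that failed
    }
}
```

Until the listener itself is fixed, re-running map-url until the generated 'updated' value happens to carry a three-digit fraction (as in the second run above) appears to be the workaround.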

    Thanks Colm,
    Was this a known issue? If so, do we have a common thread for known issues?
    And do we have any idea when 2.0 Final is coming out?

  • Timestamp issues between mysql 5.0 and 5.1

    Slightly off the Dreamweaver trail here, but that's what I'm using (CS3)... here's hoping someone knows what the issue is.
    I've created 'closing soon' and 'expired' labels for some ads I'm running; the PHP code is below.
    <?php
    // get the current timestamp
    $now = time();
    // get the number of seconds in seven days
    $sevendays = 60*60*24*7;
    // get the expiry date
    $expires = $row_listJobs['job_service_expiry_date'];
    if ($expires < $now) {
        // display the expired image
        echo '<img src="../../images/buttons_labels/expired.gif" alt="expired job" />';
    } elseif ($now + $sevendays > $expires) {
        // display the final-week image
        echo '<img src="../../images/buttons_labels/closingsoon.gif" alt="job closing soon" />';
    }
    ?>
    This works absolutely fine when using a recordset with UNIX_TIMESTAMP in it. Behind the scenes, the 'job_service_expiry_date' field in MySQL 5.1.30 (with PHP 5.2.8) provides a 'Default' parameter whereby I can set the default 'as defined', followed by 0000-00-00 00:00:00. There is another date field for the job-posted date, but this is just an ordinary date stamp. None of the fields are set up with current timestamps, as I want to be able to update records with dates in the future.
    The problem arises specifically when uploading this to my server (it's a 1&1 shared host, so things are a little restricted). There I'm running MySQL 5.0.67 with PHP 5.2.9, and the timezone is set to Europe/Berlin (I'm in the UK). Everything behind the scenes is the same EXCEPT for the ability to set the 'Default, as defined' option, so all I'm left with is to set it as a TIMESTAMP with default 0000-00-00 00:00:00.
    Here's the issue: on browsing the relevant page I'm now presented with the error "Function xnameofdatabasex UNIX_TIMESTAMP does not exist" and the page won't load. This didn't happen during local testing; the page displayed fine and the PHP code worked perfectly.
    Has anyone else come across this problem and found a way around it?
    Could it be the difference in the mysql server versions that is causing the issue?? In which case any pointers on how to amend the fields, SQL query or the php code so that it does work would be really appreciated.  Could it work, for example, by setting the posting date as current timestamp and then the expiry date as just a date, with the php code set to display images depending on days ahead of the posted date eg closing soon +25 days, and expired >30 days for example......need help with this as I'm no php expert!
    I can add lines to the .htaccess files, but I've tried adding php_value date.timezone 'Europe/London' to it, but get a 500 error.  My thinking here was that perhaps it was the timezone difference confusing the php?....but I'm not sure.
    Incidentally, any ideas on how to set the timezone through .htaccess without getting a 500 error (within a Dreamweaver CS3 environment) would also be welcome.
    Thanks in advance
    Matt

    Thanks David
    I tried adding that line to .htaccess but received the 500 internal server error when trying to access the site. I've added the other line for the time being; hopefully that works!
    The SQL looks like this:
    SELECT jobs.job_id, jobs.client_id, jobs.category_id, LEFT(jobs.job_longdesc, 200) AS first200,
    jobs.job_imagemini, jobs.job_salary, jobs.job_location, jobs.job_country, jobs.job_title,
    jobs.job_posted_date, categories.category_id, categories.category_name,
    clients.client_id, UNIX_TIMESTAMP (jobs.job_service_expiry_date) AS job_service_expiry_date,
    jobs.job_featured
    FROM jobs, categories, clients
    WHERE jobs.client_id = clients.client_id AND jobs.category_id = categories.category_id AND jobs.job_longdesc LIKE %var1% AND jobs.job_country LIKE %var2% AND jobs.job_title LIKE %var3%
    ORDER BY jobs.job_featured ASC, jobs.job_posted_date DESC, jobs.job_title
    Cheers
    Matt
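One thing worth checking in the SQL above (an assumption, but the error message matches this symptom exactly): in MySQL, a space between a function name and its opening parenthesis makes the parser treat the name as an identifier unless the IGNORE_SPACE SQL mode is enabled, which yields precisely a "FUNCTION db.UNIX_TIMESTAMP does not exist" error. The query contains such a space:

```sql
-- Fails on servers whose sql_mode lacks IGNORE_SPACE:
SELECT UNIX_TIMESTAMP (jobs.job_service_expiry_date) FROM jobs;

-- Removing the space is unambiguous on any configuration:
SELECT UNIX_TIMESTAMP(jobs.job_service_expiry_date) FROM jobs;
```

If the local server and the 1&1 host differ in sql_mode, that would explain why the page worked in local testing but not in production.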

  • Date Timestamp issue

    Hi All,
    I have a table in one database with a column of datatype DATE, containing values that have a date with a timestamp. When I build an interface in ODI and do a 1-1 mapping to another table in a different database, whose column is also of datatype DATE, only the date appears and the timestamp is missing. The version of ODI I am using is 10.1.3.2.
    If the tables are in the same database then the same thing works: it accepts the date with the timestamp.
    Any solution?
    Thanks,
    Vikram
    Edited by: Vikram Datta on Nov 14, 2008 3:06 AM

    Hi Vikram,
    Unfortunately this is an issue that happens when the data "travels" through the agent.
    There are some ways to solve it:
    1) Create a temp table (yellow interface) just like the target table but with all DATE columns as VARCHAR, and use the to_char function at the source. After that, create a second interface from this temp table to the target, but now with to_date at the staging area.
    2) Customize your KM to do it for you. I mean, make the KM work like the above, but at its work tables (C$, I$), keeping the transformation implicit (invisible) to the programmer while issuing an explicit conversion to the database.
    3) Take a look at the default timestamp format of the target database and transform to it (to_char), letting an "implicit conversion" happen at the target database. The problem here is: if the default format changes, all interfaces will need to be altered.
    4) Customize the KM to set the datetime format at the database for the ODI step and use this format in the to_char mapping.
    For any solution where to_char is used, the mapping must be at the source.
    I have always used the second option.
    Does that help you?
    Edited by: Cezar Santos on 16/11/2008 11:25
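Options 1 and 3 above boil down to shipping the value as text across the agent. A sketch, with hypothetical column names and an explicit format mask (check the target database's expected format before relying on any implicit conversion):

```sql
-- Source-side mapping expression: serialize the DATE including its time part
TO_CHAR(SRC.CREATION_DATE, 'YYYY-MM-DD HH24:MI:SS')

-- Staging/target-side mapping expression: rebuild the DATE explicitly
TO_DATE(STG.CREATION_DATE_TXT, 'YYYY-MM-DD HH24:MI:SS')
```

Using the same mask on both sides is what keeps the time component from being silently truncated in transit.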

  • Toplink Deployment issues

    Hello,
    I am trying to come up with a way to deploy the entity objects independent of any application server, be it WebLogic, WebSphere, or OAS. I'm not sure if it is possible, but I would like to ask the community for their thoughts on my ideas. If somebody has already done this, please excuse this thread.
    1) I have a startup /shutdown class that is registered with the application server. This class will load the sessions.xml file and create a server session object. This class is a singleton and has one method that returns a server session object. The shutdown class will kill the server session object at the time of shutdown of the application server. So this ensures that the sessions.xml and project.xml that lie at the same level as the classes directory will be loaded by the system class loader along with the toplink.jar file. All the descriptors get loaded at this time.
    2) I package my EJB/Servlet classes along with my POJO into either a jar or a war file and they get loaded by the corresponding EJB/Servlet class loader.The Session bean or the servlet helper class will call the getServerSession() on the startup class to obtain any client sessions and perform persistence activity.
    Anybody see any issues with this approach ?
    Thanks,
    Aswin.


  • Oracle Toplink Version issue

    Hi all,
    Can anyone tell me which version of TopLink is stable? Currently we are using TopLink v9.0.3 for our project. I found in this forum that there is a bug (#2662726) in v9.0.3 related to improper setting of the target foreign key, and a suggestion to use v9.0.3.2, where the bug has been fixed.
    I am unable to find the v9.0.3.2 download on the Oracle site; I found only the latest version, 9.0.4 (the 10g product). I tried to install TopLink 10g, which is just an extraction of a zip file, but I end up with the error "Windows cannot find '-Xmx128m' file" when I try to start workbench.cmd.
    I found somewhere that there is a workaround for that bug: just reversing the field names in the addTargetForeignKeyFieldName() call. I don't want to take the risk.
    Please suggest the correct stable versions, and download locations, for both the app server and TopLink. Our Oracle Application Server version is 9.0.3. If I am going to use the latest version of TopLink, can it solve the issue mentioned above, and will it be compatible with the app server version?
    Thanks in advance.
    Regards,
    Karthick.

    Hi Doug,
    Thanks for your immediate response.
    I installed the latest patchset, 9.0.3.6, and tried to generate the project Java source using the Workbench.
    The foreign key mapping problem is now OK, but I am having problems with the generated persistent-class code.
    I have certain fields in an Oracle database table of type NUMBER, with sizes varying from 1 to 6. I used the Workbench to generate code for that table and found that every attribute with a direct-to-field mapping to a NUMBER datatype is of type Double (irrespective of size, even for NUMBER(1)).
    I think it is a huge waste of memory to use Double where an Integer or another type would do.
    I would be very grateful for your help in this regard.
    -karthick.

  • IPhone 4 SMS Timestamp Issue

    Hi,
    I have never had any problems with my SMS timestamps, but all the messages I am receiving today are displaying the wrong ones. For example, with one person I was texting, the timestamp in the main inbox was coming up as 'yesterday' when we had been sending messages to each other this afternoon. I deleted all the messages that I got from her 'yesterday' and a new timestamp appeared: the time of the first message I received from her today. No matter how many texts we send back and forth, it won't update. I've made sure all the time/date settings of the phone are right and have even restored it in the hope of fixing it. If anyone could shed some light on this it would be a great help; it's driving me absolutely crazy!
    Thanks!

    Is it just with the one person? Is it with all people on a particular carrier? I'm wondering if it could be a carrier issue. I'd suggest calling yours as a start.
    Best of luck.

  • Date and timestamp issue please help

    In the following, creation_date is of the DATE datatype.
    creation_time is a VARCHAR2 holding only MI:SS (16:02).
    :DATE_LAST_CHECKED is a VARCHAR2 holding a date string without a time.
    If that is the case, how can I compare these two expressions including the time? I need to compare with the timestamp.
    Please let me know.
    TO_DATE(TO_CHAR(TRUNC(csa.CREATION_DATE),'DD-MON-YYYY') ||' ' || csa.creation_time,'DD-MON-YYYY HH:MI pm')
    = TO_DATE(NVL(:DATE_LAST_CHECKED,SYSDATE-5),'DD-MON-YYYY HH24:MI:SS')

    I won't even ask why creation_date and creation_time are split out into separate columns. The DATE datatype supports both and you will have to code conversions like this all over the place.
    to_date(to_char(csa.creation_date, 'mm/dd/yyyy') || csa.creation_time, 'mm/dd/yyyyhh:mi pm')
    = nvl(to_date(:date_last_checked, 'mm/dd/yyyy'), trunc(sysdate - 5))
