[b]Adding millisecond to a date[/b]

Hi,
I am creating a view from a table, and one of the view columns is based on two table columns: a date/time column and an integer column (containing milliseconds).
My question is: how do I add the milliseconds to the date?
I tried using Date + interval, but the interval only takes seconds.
Please note that the integer column containing the milliseconds could be greater than 1000.
Thanks,
Vasu

You'll have to modify this expression to cope with >= 1000 milliseconds: the concatenation treats the integer as fractional digits, so a value such as 1500 would be read as .1500 seconds rather than 1.5 seconds. One way that copes with any value is numtodsinterval(millis / 1000, 'SECOND').
SQL> select cast (sysdate as timestamp) + cast ('0 00:00:00.' || millis as interval day to second) from (select 572 millis from dual);
CAST(SYSDATEASTIMESTAMP)+CAST('000:00:00.'||MILLISASINTERVALDAYTOSECOND)
02-AUG-03 10.47.01.572000000 AM
d.
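For what it's worth, outside the database the rollover past 1000 ms is automatic. A hypothetical Java sketch of the same arithmetic (values and names invented for illustration):
import java.time.LocalDateTime;
public class AddMillis {
    public static void main(String[] args) {
        // integer millisecond column added to a date/time value;
        // 1572 ms rolls over into 1.572 s, the case the SQL above must handle
        LocalDateTime base = LocalDateTime.of(2003, 8, 2, 10, 47, 1);
        int millis = 1572;
        System.out.println(base.plusNanos(millis * 1_000_000L)); // 2003-08-02T10:47:02.572
    }
}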

Similar Messages

  • Adding milliseconds to long date format (tzntstmpl)

I have a date field called START_DATE defined as type TZNTSTMPL (long date/time format - YYYYMMDDHHMMSS.mmmuuun) and another field called MILLISECONDS, also defined as type TZNTSTMPL.
I want to subtract 625 milliseconds from START_DATE, so I put .625 in the MILLISECONDS field and subtract MILLISECONDS from START_DATE.
The starting value in START_DATE is "20090701095000.5410000". After I subtract MILLISECONDS from START_DATE the result is "20090701094999.9160000".
The result should have been "20090701094959.9160000". The problem is that the SS (seconds) part becomes 99, which is invalid.
How can I get it to subtract correctly?
Are there any ADD or SUBTRACT functions that work with the long date/time format?
    Regards,
    Mike...

    Vindo,
Thanks, that is exactly what I was looking for. It handled adding to and subtracting from the long date format correctly.
    Regards,
    Mike...
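(The reply that solved this isn't quoted above, but the general fix is to convert the packed YYYYMMDDHHMMSS.mmm value into a real date-time, do the arithmetic there, and pack the result back, rather than subtracting the raw decimals. A minimal Java sketch of that idea, using three fractional digits for brevity; names invented:)
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
public class TimestampMath {
    // Packed YYYYMMDDHHMMSS.mmm is a decimal number, so plain subtraction
    // breaks at digit boundaries (seconds can become 99). Convert first.
    static final DateTimeFormatter F = DateTimeFormatter.ofPattern("yyyyMMddHHmmss.SSS");
    static String subtractMillis(String packed, long millis) {
        LocalDateTime t = LocalDateTime.parse(packed, F);
        return t.minusNanos(millis * 1_000_000L).format(F);
    }
    public static void main(String[] args) {
        // 20090701095000.541 minus 625 ms -> 20090701094959.916
        System.out.println(subtractMillis("20090701095000.541", 625));
    }
}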

  • How do I delete duplicate songs in my iTunes? I have hundreds and I don't want to delete them one by one. They were all added on the same date, so sorting by the date won't help.

    How do I delete duplicate songs in my iTunes? I have hundreds and I don't want to delete them one by one. They were all added on the same date, so sorting by the date won't help.

Hi, if this is in regard to your library, simply open up iTunes and do the following steps:
    Click File
    Scroll down to "show duplicates"
    A list will then appear of your duplicate song titles.
Be sure to CAREFULLY review each song to make sure it is a duplicate (as I have some music that is the same song but live, acoustic, etc...)
    Proceed to manually delete each song from the list and leave alone any song that you wish to keep.
    Best of luck,
    Cait

  • I added 0AMOUNT in generic data source and in rsa3 i am seeing the data ..b

I added 0AMOUNT in the generic data source, and in RSA3 I am seeing the data, but I am not seeing any data in the target table.
What would be the cause?

    Hi,
I guess you mean the target table in BW, correct?
First replicate your DataSource in BW.
Open your transfer rules. In the tab Transfer structure/DataSource, locate your field in the right pane (it should be greyed, not blue); move it to the left (to the transfer structure); reactivate and reload.
You should now see the field in your PSA table.
Hope this helps...
    Olivier.

  • IPhone 4, has timestamp been added to messages to date?

    I just got an iPhone 4, has timestamp been added to messages to date?

OK, sorted: it looks like the folder where the music is stored had changed to iTunes Media while all my music is in iTunes Music. I have reset the default folder back to iTunes Music and that seems to have solved the issue. Thanks to those that helped.

  • BDB dumps core after adding approx 19MB of data

    Hi,
    BDB core dumps after adding about 19MB of data & killing and restarting it several times.
    Stack trace :
    #0 0xc00000000033cad0:0 in kill+0x30 () from /usr/lib/hpux64/libc.so.1
    (gdb) bt
    #0 0xc00000000033cad0:0 in kill+0x30 () from /usr/lib/hpux64/libc.so.1
    #1 0xc000000000260cf0:0 in raise+0x30 () from /usr/lib/hpux64/libc.so.1
    #2 0xc0000000002fe710:0 in abort+0x190 () from /usr/lib/hpux64/libc.so.1
    warning:
    ERROR: Use the "objectdir" command to specify the search
    path for objectfile db_err.o.
    If NOT specified will behave as a non -g compiled binary.
    warning: No unwind information found.
    Skipping this library /integhome/jobin/B063_runEnv/add-ons/lib/libicudata.sl.34.
    #3 0xc000000022ec2340:0 in __db_assert+0xc0 ()
    from /integhome/jobin/B063_runEnv/service/sys/servicerun/bin/libdb_cxx-4.3.so
    warning:
    ERROR: Use the "objectdir" command to specify the search
    path for objectfile db_meta.o.
    If NOT specified will behave as a non -g compiled binary.
    #4 0xc000000022ed2870:0 in __db_new+0x780 ()
    from /integhome/jobin/B063_runEnv/service/sys/servicerun/bin/libdb_cxx-4.3.so
    warning:
    ERROR: Use the "objectdir" command to specify the search
    path for objectfile bt_split.o.
    If NOT specified will behave as a non -g compiled binary.
    #5 0xc000000022ded690:0 in __bam_root+0xb0 ()
    from /integhome/jobin/B063_runEnv/service/sys/servicerun/bin/libdb_cxx-4.3.so
    #6 0xc000000022ded2d0:0 in __bam_split+0x1e0 ()
    from /integhome/jobin/B063_runEnv/service/sys/servicerun/bin/libdb_cxx-4.3.so
    warning:
    ERROR: Use the "objectdir" command to specify the search
    path for objectfile bt_cursor.o.
    If NOT specified will behave as a non -g compiled binary.
    #7 0xc000000022dc83f0:0 in __bam_c_put+0x360 ()
    from /integhome/jobin/B063_runEnv/service/sys/servicerun/bin/libdb_cxx-4.3.so
    warning:
    ERROR: Use the "objectdir" command to specify the search
    path for objectfile db_cam.o.
    If NOT specified will behave as a non -g compiled binary.
    #8 0xc000000022eb8c10:0 in __db_c_put+0x740 ()
    from /integhome/jobin/B063_runEnv/service/sys/servicerun/bin/libdb_cxx-4.3.so
    warning:
    ERROR: Use the "objectdir" command to specify the search
    path for objectfile db_am.o.
    If NOT specified will behave as a non -g compiled binary.
    #9 0xc000000022ea4100:0 in __db_put+0x4c0 ()
    from /integhome/jobin/B063_runEnv/service/sys/servicerun/bin/libdb_cxx-4.3.so---Type <return> to continue, or q <return> to quit---
    warning:
    ERROR: Use the "objectdir" command to specify the search
    path for objectfile db_iface.o.
    If NOT specified will behave as a non -g compiled binary.
    #10 0xc000000022eca7a0:0 in __db_put_pp+0x240 ()
    from /integhome/jobin/B063_runEnv/service/sys/servicerun/bin/libdb_cxx-4.3.so
    warning:
    ERROR: Use the "objectdir" command to specify the search
    path for objectfile cxx_db.o.
    If NOT specified will behave as a non -g compiled binary.
    #11 0xc000000022d92c90:0 in Db::put(DbTxn*,Dbt*,Dbt*,unsigned int)+0x120 ()
    from /integhome/jobin/B063_runEnv/service/sys/servicerun/bin/libdb_cxx-4.3.so
What is the behaviour of BDB if it's killed & restarted while a transaction is in progress?
Does anybody have an idea as to why BDB dumps core in the above scenario?
    Regards
    Sandhya

    Hi Bogdan,
As suggested by you, I am using the flags below to open an environment.
    DB_RECOVER |DB_CREATE | DB_INIT_LOG | DB_INIT_MPOOL | DB_INIT_TXN|DB_THREAD
DB_INIT_LOCK is not used because at the application level we maintain a lock to guard against multiple simultaneous accesses.
The following message is output on the console & then it dumps core with the same stack trace as posted before.
__db_assert: "last == pgno" failed: file "../dist/../db/db_meta.c", line 163
I ran the db_verify, db_stat and db_recover tools on the DB & their results are below.
    db_verify <dbfile>
    db_verify: Page 4965: partially zeroed page
    db_verify: ./configserviceDB: DB_VERIFY_BAD: Database verification failed
    db_recover -v
    Finding last valid log LSN: file: 1 offset 42872
    Recovery starting from [1][42200]
    Recovery complete at Sat Jul 28 17:40:36 2007
    Maximum transaction ID 8000000b Recovery checkpoint [1][42964]
    db_stat -d <dbfile>
    53162 Btree magic number
    9 Btree version number
    Big-endian Byte order
    Flags
    2 Minimum keys per-page
    8192 Underlying database page size
    1 Number of levels in the tree
    60 Number of unique keys in the tree
    60 Number of data items in the tree
    0 Number of tree internal pages
    0 Number of bytes free in tree internal pages (0% ff)
    1 Number of tree leaf pages
    62 Number of bytes free in tree leaf pages (99% ff)
    0 Number of tree duplicate pages
    0 Number of bytes free in tree duplicate pages (0% ff)
    0 Number of tree overflow pages
    0 Number of bytes free in tree overflow pages (0% ff)
    0 Number of empty pages
    0 Number of pages on the free list
    db_stat -E <dbfile>
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Default database environment information:
    4.3.28 Environment version
    0x120897 Magic number
    0 Panic value
    2 References
    0 The number of region locks that required waiting (0%)
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Per region database environment information:
    Mpool Region:
    2 Region ID
    -1 Segment ID
    1MB 264KB Size
    0 The number of region locks that required waiting (0%)
    Log Region:
    3 Region ID
    -1 Segment ID
    1MB 64KB Size
    0 The number of region locks that required waiting (0%)
    Transaction Region:
    4 Region ID
    -1 Segment ID
    16KB Size
    0 The number of region locks that required waiting (0%)
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    DB_ENV handle information:
    Set Errfile
    db_stat Errpfx
    !Set Errcall
    !Set Feedback
    !Set Panic
    !Set Malloc
    !Set Realloc
    !Set Free
    Verbose flags
    !Set App private
    !Set App dispatch
    !Set Home
    !Set Log dir
    /integhome/jobin/B064_July2/runEnv/temp Tmp dir
    !Set Data dir
    0660 Mode
    DB_INIT_LOG, DB_INIT_MPOOL, DB_INIT_TXN, DB_USE_ENVIRON Open flags
    !Set Lockfhp
    Set Rec tab
    187 Rec tab slots
    !Set RPC client
    0 RPC client ID
    0 DB ref count
    -1 Shared mem key
    400 test-and-set spin configuration
    !Set DB handle mutex
    !Set api1 internal
    !Set api2 internal
    !Set password
    !Set crypto handle
    !Set MT mutex
    DB_ENV_LOG_AUTOREMOVE, DB_ENV_OPEN_CALLED Flags
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Default logging region information:
    0x40988 Log magic number
    10 Log version number
    1MB Log record cache size
    0660 Log file mode
    1Mb Current log file size
    632B Log bytes written
    632B Log bytes written since last checkpoint
    1 Total log file writes
    0 Total log file write due to overflow
    1 Total log file flushes
    1 Current log file number
    42872 Current log file offset
    1 On-disk log file number
    42872 On-disk log file offset
    1 Maximum commits in a log flush
    1 Minimum commits in a log flush
    1MB 64KB Log region size
    0 The number of region locks that required waiting (0%)
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Log REGINFO information:
    Log Region type
    3 Region ID
    __db.003 Region name
    0xc00000000b774000 Original region address
    0xc00000000b774000 Region address
    0xc00000000b883dd0 Region primary address
    0 Region maximum allocation
    0 Region allocated
    REGION_JOIN_OK Region flags
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    DB_LOG handle information:
    !Set DB_LOG handle mutex
    0 Log file name
    !Set Log file handle
    Flags
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    LOG handle information:
    0 file name list mutex (0%)
    0x40988 persist.magic
    10 persist.version
    0 persist.log_size
    0660 persist.mode
    1/42872 current file offset LSN
    1/42872 first buffer byte LSN
    0 current buffer offset
    42872 current file write offset
    68 length of last record
    0 log flush in progress
    0 Log flush mutex (0%)
    1/42872 last sync LSN
    1/41475 cached checkpoint LSN
    1MB log buffer size
    1MB log file size
    1MB next log file size
    0 transactions waiting to commit
    1/0 LSN of first commit
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    LOG FNAME list:
    0 File name mutex (0%)
    1 Fid max
    ID Name Type Pgno Txnid DBP-info
    0 configserviceDB btree 0 0 No DBP 0 0 0
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Default cache region information:
    1MB 262KB 960B Total cache size
    1 Number of caches
    1MB 264KB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    43312 Requested pages found in the cache (89%)
    4968 Requested pages not found in the cache
    640 Pages created in the cache
    4965 Pages read into the cache
    621 Pages written from the cache to the backing file
    4818 Clean pages forced from the cache
    621 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    166 Current total page count
    146 Current clean page count
    20 Current dirty page count
    131 Number of hash buckets used for page location
    53888 Total number of times hash chains searched for a page
    4 The longest hash chain searched for a page
    92783 Total number of hash buckets examined for page location
    0 The number of hash bucket locks that required waiting (0%)
    0 The maximum number of times any hash bucket lock was waited for
    0 The number of region locks that required waiting (0%)
    5615 The number of page allocations
    10931 The number of hash buckets examined during allocations
    22 The maximum number of hash buckets examined for an allocation
    5439 The number of pages examined during allocations
    11 The max number of pages examined for an allocation
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Pool File: temporary
    1024 Page size
    0 Requested pages mapped into the process' address space
    43245 Requested pages found in the cache (99%)
    1 Requested pages not found in the cache
    635 Pages created in the cache
    0 Pages read into the cache
    617 Pages written from the cache to the backing file
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Pool File: configserviceDB
    8192 Page size
    0 Requested pages mapped into the process' address space
    65 Requested pages found in the cache (1%)
    4965 Requested pages not found in the cache
    1 Pages created in the cache
    4965 Pages read into the cache
    0 Pages written from the cache to the backing file
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Mpool REGINFO information:
    Mpool Region type
    2 Region ID
    __db.002 Region name
    0xc00000000b632000 Original region address
    0xc00000000b632000 Region address
    0xc00000000b773f08 Region primary address
    0 Region maximum allocation
    0 Region allocated
    REGION_JOIN_OK Region flags
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    MPOOL structure:
    0/0 Maximum checkpoint LSN
    131 Hash table entries
    64 Hash table last-checked
    48905 Hash table LRU count
    48914 Put counter
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    DB_MPOOL handle information:
    !Set DB_MPOOL handle mutex
    1 Underlying cache regions
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    DB_MPOOLFILE structures:
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    MPOOLFILE structures:
    File #1: temporary
    0 Mutex (0%)
    0 Reference count
    18 Block count
    634 Last page number
    0 Original last page number
    0 Maximum page number
    0 Type
    0 Priority
    0 Page's LSN offset
    32 Page's clear length
    0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 f8 0 0 0 0 ID
    deadfile, file written Flags
    File #2: configserviceDB
    0 Mutex (0%)
    1 Reference count
    148 Block count
    4965 Last page number
    4964 Original last page number
    0 Maximum page number
    0 Type
    0 Priority
    0 Page's LSN offset
    32 Page's clear length
    0 0 b6 59 40 1 0 2 39 ac 13 6f 0 a df 18 0 0 0 0 ID
    file written Flags
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Cache #1:
    BH hash table (131 hash slots)
    bucket #: priority, mutex
    pageno, file, ref, LSN, mutex, address, priority, flags
    bucket 0: 47385, 0/0%:
    4813, #2, 0, 0/1, 0/0%, 0x04acf0, 47385
    4944, #2, 0, 0/0, 0/0%, 0x020c18, 48692

  • Adding time issue using Date's getTime method

    The following code is incorrectly adding 5 hours to the resultant time.
    I need to be able to add dates, and this just isn't working right.
    Is this a bug or am I missing something?
long msecSum = 0 ;
DateFormat dateFormat = new SimpleDateFormat("HH:mm:ss.SSS") ;
try {
    Date date1 = dateFormat.parse("01:02:05.101") ;
    Date date2 = dateFormat.parse("02:03:10.102") ;
    System.out.println("Date1: " + dateFormat.format(date1));
    System.out.println("Date2: " + dateFormat.format(date2));
    msecSum = date1.getTime() + date2.getTime() ; // adds 5 hours !!!
    System.out.println("Sum: " + dateFormat.format(msecSum)) ;
} catch (Exception e) {
    System.out.println("Unable to process time values");
}
    Results:
    Date1: 01:02:05.101
    Date2: 02:03:10.102
    Sum: 08:05:15.203 // should be 3 hours, not 8

Dates shouldn't be added, but if you promise not to tell anyone: the extra 5 hours is your time zone offset. Each parsed Date carries the offset once, so the sum carries it twice, and formatting removes only one of them. Parsing and formatting in GMT avoids it:
long msecSum = 0 ;
DateFormat dateFormat = new SimpleDateFormat("HH:mm:ss.SSS") ;
dateFormat.setTimeZone(TimeZone.getTimeZone("GMT"));
try {
    Date date1 = dateFormat.parse("01:02:05.101") ;
    Date date2 = dateFormat.parse("02:03:10.102") ;
    System.out.println("Date1: " + dateFormat.format(date1));
    System.out.println("Date2: " + dateFormat.format(date2));
    msecSum = date1.getTime() + date2.getTime() ; // no longer adds 5 hours
    System.out.println("Sum: " + dateFormat.format(msecSum)) ;
} catch (Exception e) {
    System.out.println("Unable to process time values");
}
Me? I would just parse the String "01:02:05.101" to extract hours, minutes, seconds and milliseconds and do the math.
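A minimal sketch of that parse-and-add approach (class and method names are invented for illustration); it treats the strings as durations, so no time zone is involved at all:
import java.util.regex.Matcher;
import java.util.regex.Pattern;
public class DurationSum {
    private static final Pattern HMS = Pattern.compile("(\\d+):(\\d+):(\\d+)\\.(\\d{3})");
    // Parse "HH:mm:ss.SSS" into plain milliseconds.
    static long toMillis(String s) {
        Matcher m = HMS.matcher(s);
        if (!m.matches()) throw new IllegalArgumentException(s);
        return ((Long.parseLong(m.group(1)) * 60 + Long.parseLong(m.group(2))) * 60
                + Long.parseLong(m.group(3))) * 1000 + Long.parseLong(m.group(4));
    }
    static String format(long ms) {
        return String.format("%02d:%02d:%02d.%03d",
                ms / 3600000, ms / 60000 % 60, ms / 1000 % 60, ms % 1000);
    }
    public static void main(String[] args) {
        long sum = toMillis("01:02:05.101") + toMillis("02:03:10.102");
        System.out.println(format(sum)); // 03:05:15.203
    }
}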

  • Adding new field to data source -can not see them

    Experts,
    I have added 4 fields to the data source 0FI_AA_11 .
I can see these fields in the append structure and in RSA2, but I cannot see them in RSA6. I reactivated the append structure, but I still cannot see these fields in RSA6.
Please advise me on how I can see them in RSA6.

    Hi Manoj,
       Have a look, similar post:
    Re: not able to see new fields in datasource
    Hope it Helps
    Srini

  • Newly added field in the data Source not getting populated

    Hello All,
We have added a few fields in the Data Source. The Data Source is based on an InfoSet. We have included the fields in the InfoSet and have updated the code to fetch the values for the newly added fields.
When we perform the test extraction for this Data Source in RSA3, the newly added fields are not getting populated with values. In the system-generated query's selection list, the newly added fields are not selected.
Please let me know how to get the newly added fields selected in the system-generated query of the InfoSet.
    Regards,
    -Purnima

    Hi,
As you said, you have added the field in the InfoSet. Have you included the same in the data source? Try that if not.
If you are trying to create a query in the source system to check the data - I guess in R/3 (ECC) - then you have to include the new field in the selection criteria (there is an option available in the top menu).
I would suggest that before creating any query you go to RSO2, select the data source, display the field structure and check whether the field is there or not. You may also see InfoSet-level data directly via the data display in the top menu. Try that option as well to check.
I hope it will help.
Thanks,
    S

  • With SPD adding 1 month to date not working?

    I have a calculated column [1stDayofMth] =DATE(YEAR([Start Date]),MONTH([Start Date]),1) which appears to be working fine.
    My SPD WF uses the [1stDayofMth] column and does a few calculations to find the next 2 months 1st days
    [Month 1 Resume Date] = [1stDayofMth] + "1 Months" 
    [Month 2 Resume Date] = [1stDayofMth] + "2 Months"
These dates are used to pause the workflow. The problem is that when I log the results to history, these calculations are not the 1st day of the next month on some list items.
    eg (text from workflow history)
    1stDayofMth="11/1/2014 12:00:00 AM"
    Month 1 Resume Date="11/30/2014 11:00:00 PM"  |  Month 2 Resume Date="12/31/2014 11:00:00 PM"  |  Month 3 Resume Date="1/4/2015 11:00:00 PM"
    The [1stDayofMth] column is set to "Date Only" and is showing as only the date, my logging was set to show the String and I can see the time is included as well. Is this the cause?
    Stunpals - Disclaimer: This posting is provided "AS IS" with no warranties.

As a band-aid I inserted a 2nd add of 2 hours to the date, which moves it from the 30th at 11:00pm to the 1st at 1:00am.
I haven't done a lot of digging, but it looks like the issue of adding 1 month to [1stDayofMth] only occurs in months with 30 days.
    Stunpals - Disclaimer: This posting is provided "AS IS" with no warranties.

  • Extracting time from date and adding it to another date.

    Dear All,
    The values of two fields in my table are as follows:
    n_date n_time
    12/7/2007 1/1/1970 5:50:23 PM
    Both of these fields are of date data type.
How can I add the time from n_time to n_date so that n_date reflects a date with the time added?
The problem is that I want to sort on n_date, and I am not getting the right result when two rows have the same value for n_date but different values for n_time.
How can I create a new expression for sorting these values?
    Thanks in advance.
    Regards,
    Sameer

    Hi,
    Check this.
    with data as
    ( select to_date('12/7/2007','dd/mm/yyyy') as ndate, to_date('1/1/1970 5:50:23','dd/mm/yyyy hh24:mi:ss') ntime from dual)
select ndate + (ntime - trunc(ntime)) from data
Here (ntime - trunc(ntime)) is the fractional part of a day, i.e. the time of day, so adding it to ndate yields a date that carries the time and sorts the way you want.
    Regards
    RK
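The same idea outside SQL, as a minimal Java sketch (field names taken from the question): keep only the time-of-day from n_time and attach it to n_date.
import java.time.LocalDate;
import java.time.LocalDateTime;
public class CombineDateTime {
    // n_date contributes the date, n_time contributes the time of day.
    static LocalDateTime combine(LocalDate nDate, LocalDateTime nTime) {
        return nDate.atTime(nTime.toLocalTime());
    }
    public static void main(String[] args) {
        LocalDate nDate = LocalDate.of(2007, 7, 12);
        LocalDateTime nTime = LocalDateTime.of(1970, 1, 1, 17, 50, 23);
        System.out.println(combine(nDate, nTime)); // 2007-07-12T17:50:23
    }
}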

  • Is there a completely reliable method of adding months to a date in ABAP?

Does anyone know of a completely reliable and consistent ABAP function module that can be used to add months to a date? One that will always get the correct last day of the month when requested to add 1 month to the last day of the previous month. Something as reliable as using the ADD_MONTHS function in Oracle SQL. I don't want to use any of the specific 'get last day of the month' function modules, since the start date may not necessarily be the last day of a month.
    In the past I have trusted the following.  Now they have betrayed me. 
    MONTHS_PLUS_DETERMINE  
Correctly provides 28.02.09 when adding 1 month to 31.01.09.
    Incorrectly gives me 28.03.09 instead of 31.03.09 when adding 1 month to 28.02.09
    RP_CALC_DATE_IN_INTERVAL and RP_CALC_DATE_IN_INTERVAL_SG
    Both incorrectly give me 01.03.09 when asked to add 1 month to 31.01.09.
    Both incorrectly give me 28.03.09 when asked to add 1 month to 28.02.09.
    We're on ECC6.

    >
    Suhas Saha wrote:
    > Hello Christine,
    >
    > Did you check the method ADD_MONTHS_TO_DATE of the class CL_HRPAD_DATE_COMPUTATIONS ?
    >
    >
    > *     Adds No. of Months to Date
    >       TRY.
    >           CALL METHOD cl_hrpad_date_computations=>add_months_to_date
    >             EXPORTING
    >               start_date = sy-datum
    >               months     = l_v_month
    >             RECEIVING
    >               date       = l_v_date.
    >         CATCH cx_hrpa_violated_postcondition .
    >       ENDTRY.
    >
    >
    > I dont have any idea how ADD_MONTHS function in Oracle SQL works, though ):
    >
    > Hope this helps.
    >
    > BR,
    > Suhas
That also sometimes works... but adding 1 month to 28.02.2009 gives me 28.03.2009, and adding 1 month to 29.02.2008 gives me 29.03.2008.
This is how to use ADD_MONTHS in Oracle SQL - a bit naughty since you have to use native SQL to do it, but it ALWAYS seems to work. I pass a date, a month number and + or - into the function module.
    * For use with class based exception CX_SY_OPEN_SQL_DB.
    DATA:
      ex_check_os       TYPE REF TO cx_sy_open_sql_db,
      ex_check_rs       TYPE REF TO cx_sy_native_sql_error,
      ex_result(200)    TYPE C,
      ex_text           TYPE STRING,
      lv_new_date       TYPE datum,
      lv_old_date       TYPE datum,
      lv_months         TYPE I.
      lv_old_date = iv_date.
      lv_months = iv_months.
      IF iv_sign = '-'.
         lv_months = lv_months * -1.
      ENDIF.
  TRY.
      EXEC SQL.
        SELECT to_char(add_months(to_date(:lv_old_date,'YYYYMMDD'),:lv_months),'YYYYMMDD')
        INTO :lv_new_date
        FROM sys.dual a
      ENDEXEC.
*     Set the success result inside TRY, so a caught exception
*     does not get its return code overwritten afterwards.
      ev_return = 0.
      ev_date = lv_new_date.
    CATCH cx_sy_native_sql_error INTO ex_check_rs.
      ex_text = ex_check_rs->get_text( ).
      ev_error_message = ex_text.
      ev_return = 4.
  ENDTRY.
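For anyone who, like Suhas, wonders what Oracle's ADD_MONTHS actually does: if the start date is the last day of its month, the result is the last day of the target month; otherwise the day of month is kept (clamped to the target month's length). A sketch of that rule in Java, just to pin down the semantics (class and method names invented):
import java.time.LocalDate;
public class AddMonths {
    // Mimics Oracle ADD_MONTHS' end-of-month rule.
    static LocalDate addMonths(LocalDate date, int months) {
        boolean lastDay = date.getDayOfMonth() == date.lengthOfMonth();
        LocalDate shifted = date.plusMonths(months); // clamps, but does not preserve last-day
        return lastDay ? shifted.withDayOfMonth(shifted.lengthOfMonth()) : shifted;
    }
    public static void main(String[] args) {
        System.out.println(addMonths(LocalDate.of(2009, 1, 31), 1)); // 2009-02-28
        System.out.println(addMonths(LocalDate.of(2009, 2, 28), 1)); // 2009-03-31
    }
}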

  • Converting from milliseconds to a date format in java

This is so that the date can be inserted into a date column in MySQL.
    What I have is something like 1119193190
    I do:
    SimpleDateFormat sdf = new SimpleDateFormat("MMM dd,yyyy HH:mm");
    Date resultdate = new Date(yourmilliseconds);
    System.out.println(sdf.format(resultdate));
    and Java gives me something like:
    Jul 04,2004 14:06
But then, when inserting into a MySQL table, Java is all like um....no:
com.mysql.jdbc.MysqlDataTruncation: Data truncation: Incorrect date value: 'Jul 04,2004 14:06' for column 'prodDate' at row 1
prodDate is of type date in mysql.
    Help?

    jverd wrote:
    "Jul 04,2004 14:06" is a String, not a Date.
    PreparedStatement ps = conn.prepareStatement("insert into T(name, birthdate) values(?, ?)");
    ps.setString(1, "Joe Smith");
    java.sql.Date date = new java.sql.Date(yourmillis);
    ps.setDate(2, date);
    ps.executeUpdate();
I am a bit confused.
This is what I have:
for(int i = 0; i < productions.size(); i++) {
    //Create a new Production from the ArrayList
    Production p = (Production) productions.get(i);
    //Convert the date from milliseconds to YYYY-MM-DD format. for mysql?
    SimpleDateFormat dateFormatter = new SimpleDateFormat("MMM dd,yyyy HH:mm");
    Date convertedDate = new Date(p.getDate());
    //Build a query to insert the Production into the table
    String insertQuery = "INSERT INTO WELL_PROD VALUES(" +
            "'" + p.getLocation() + "'," +
            "'" + dateFormatter.format(convertedDate) + "'," +
            "'" + p.getOilProd() + "'," +
            "'" + p.getWaterProd() + "'," +
            "'" + p.getGasProd() + "')";
    //Print the query to the screen
    System.out.println("-> INSERTING the following query into well_prod: ");
    System.out.println("   " + insertQuery);
    //Update the database using the constructed query
    int result = statement.executeUpdate(insertQuery);
    //Print out the result of the insertion
    System.out.println("   INSERT RESULT: " + result);
}
Are you saying something like
java.sql.Date date = new java.sql.Date(Something , what I have no idea, Should go here);
instead of
SimpleDateFormat dateFormatter = new SimpleDateFormat("MMM dd,yyyy HH:mm");
and then carry on as normal?
If so, what should go in those brackets based on the code?
java.sql.Date date = new java.sql.Date("MMM dd,yyyy HH:mm");
    This is all being read in from a text file and converted over before being spit out to the data base, it all works except for the date...
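What goes in those brackets is the millisecond value itself - p.getDate() - not a format string; the driver then sends a real DATE. A sketch of jverd's suggestion applied to the loop above (column layout and getter types are assumed from the posted code):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
public class WellProdInsert {
    // Inserts one row using a real DATE value instead of formatted text.
    static void insertRow(Connection conn, String location, long dateMillis,
                          String oil, String water, String gas) throws SQLException {
        String sql = "INSERT INTO WELL_PROD VALUES (?, ?, ?, ?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, location);
            ps.setDate(2, new java.sql.Date(dateMillis)); // MySQL DATE column gets a date, not text
            ps.setString(3, oil);
            ps.setString(4, water);
            ps.setString(5, gas);
            ps.executeUpdate();
        }
    }
}
Called as insertRow(connection, p.getLocation(), p.getDate(), ...), assuming p.getDate() returns epoch milliseconds.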

  • Adding a custom meta data field which lists out content id based on query

How can we add a custom metadata field which lists out content IDs based on a query like dDocType <matches> `AssociatedProduct`?
An alternative would be a custom metadata field that allows selection of a content ID using the link wizard we typically use in Site Studio. This second option would be preferable as it is user friendly. Can we do this on a check-in screen?
    -Pratap

    Thanks for the reply Deepak.
We got it resolved. We made the following changes in the /ucm/custom/SiteStudio/resources/ss_custom_field_resources.htm file and it worked cleanly.
Added the following section at the end, before the body tag:
    ===================================================================================================
    <@dynamichtml ss_parent_definition_field_entry@>
         <$include super.std_edit_entry$>
         <$if isQuery and isTrue(isQuery)$></td><td><$endif$>
         <$include ss_contributor_base_scripts$>
         <script type="text/javascript" src="<$HttpRelativeWebRoot$>resources/<$SSContributorSourceDir$>/sitestudio/wcm.contentserver.popup.js"></script>
         <script language="JavaScript">
     function OnSelectParentId()
     {
          var selectParentIdOptions = {};
          selectParentIdOptions.httpCgiPath = '<$HttpCgiPath$>';
          selectParentIdOptions.queryText = 'dDocType <matches> `Country`';
          selectParentIdOptions.coreContentOnly = '<$if coreContentOnly and isTrue(coreContentOnly)$>1<$else$>0<$endif$>';
          selectParentIdOptions.callback = function( returnParams )
          {
               returnParams = returnParams || {};
               if( returnParams && returnParams['dDocName'] && ( returnParams.dDocName.length > 0 ) )
               {
                    // Set the actual metadata value
                    <$if isQuery AND isTrue(isQuery)$>
                         for (var i=0; i < document.<$formName$>.elements.length; i++)
                         {
                              var elt=document.<$formName$>.elements[i];
                              if (elt.name=="<$fieldName$>")
                                   elt.value = returnParams.dDocName;
                         }
                    <$else$>
                         document.<$formName$>.<$fieldName$>.value = returnParams.dDocName;
                    <$endif$>
               }
          };
          WCM.ContentServerPopup.ChooseManagedDocument(selectParentIdOptions);
     }
         </script>
         <input type="button" value="<$lc("wwBrowse")$>..." onclick="OnSelectParentId();">
    <@end@>
    ======================================================================================
Then modified the section which shows xWebsiteSection, xRegionDefinition etc. to include my custom metadata definition as well ('xParentContentType'):
    ===================================================================
    <@dynamichtml std_edit_entry@>
         <$if fieldName and ( fieldName like "xWebsites|xDontShowInListsForWebsites" )$>
              <$include ss_website_query_text_field$>
         <$elseif fieldName and strEquals( fieldName, "xWebsiteSection" )$>
              <$include ss_website_section_field_entry$>
         <$elseif fieldName and strEquals( fieldName, "xRegionDefinition" )$>
              <$include ss_region_definition_field_entry$>
         <$elseif fieldName and strEquals( fieldName, "xParentContentType" )$>
              <$include ss_parent_definition_field_entry$>
         <$else$>
              <$include super.std_edit_entry$>
         <$endif$>
    <@end@>
    ====================================================================
    This worked fine.
    Regards,
    Pratap

  • Adding large amounts of data to multiple traces

    Hi,
I have a CNiGraph object. I want to add 100 traces (plots) with 500,000 points each, and later update each trace with 50,000 points/sec. How can I do this? I tried and got a result of 45 seconds for 300 points/trace for 100 traces, so I must be missing something. What is the fastest way to add the points to the traces? For the init I would use PlotXvsY (as ALL traces can have different X values and the length of each trace may not be the same), then ChartXvsY. I don't even get as far as the ChartXvsY, since the first stage takes forever. Probably one reason is that I am getting a screen update for every trace added (can this be disabled?). I also don't want the graph to rescale on every trace update, just move it to the end (scroll). I tried turning off the AutoScale function for the plots and setting their visibility to false, but that still took around 23 seconds (for 100 traces, 300 points/trace).
    Please help.
    Thanks,
    Miklos

Instead of using URLConnection, open a Socket to the server port (80 probably), send a POST HTTP request followed by the data; you may then (optionally) receive data from the server to check that the servlet is OK. This is the same protocol as URLConnection, but you have control over when the data is actually sent...
Socket sock=new Socket(getHost(),80);
DataOutputStream dos=new DataOutputStream(sock.getOutputStream());
dos.writeBytes("POST /servletname HTTP/1.0\r\n");
dos.writeBytes("Content-type: text/plain\r\n");  //optional, but good if you know
dos.writeBytes("Content-length: "+lengthOfData+"\r\n");  //again, optional, but good if you can know it without caching the data first
dos.writeBytes("\r\n");   // gotta have a blank line before the data
  // send data now
DataInputStream dis=new DataInputStream(sock.getInputStream());  //optional if you want to receive
  // receive any feedback from the servlet if you want
dis.close();
dos.close();
sock.close();
I'm guessing that URLConnection caches the data so it can fill in "Content-length".

Maybe you are looking for

  • Create an XML file from Java

I'm a new user of XML. I have a very important question: is it possible to create an XML file using some Java package? If yes, what package must I use? Is JDOM the right product? Thanks

  • I can't use my current subscriptin of 800 mins for...

Hi... I just bought an 800 min subscription for one month to India, and it was working fine until December 31st. What happened? I need to make a phone call, but I can't use my subscription. I tried logging in and out several times. Help! Thanks, and H

  • How do i load a set of backing tracks

I'm brand new to MainStage and I'm sure the question has been asked before, but I want to put my band's backing tracks in a set. I've seen how it should look, but I haven't found exactly how to do it. I have WAV files ready to go. Just need the how to

  • H.323 video QOS over LAN to LAN

    I need to configure QOS for video over gig fiber LAN(4506 catalyst switch) to LAN(3500 switch)

  • DIY hard drive replacement for MPB

    Hello, Has anyone actually replaced the HD in their MBP? I'm interested in upping the size of my drive to 200gb. Apple won't do the upgrade. There are outfits that will do the switch but the charge is around $450. I can score the same drive from Newe