Adding an hour to Date
How can I add an hour to a java.util.Date? I've searched the forum, but found only GregorianCalendar, which seems to deal only with adding and subtracting days...
newio
// Deprecated approach: the Date(year-1900, month, day) constructor
// and get/setHours have been deprecated since JDK 1.1.
Date d = new Date(103, 4, 3); // 3 May 2003
d.setHours(d.getHours() + 10);

// Calendar-based equivalent:
Calendar c = Calendar.getInstance();
c.set(Calendar.HOUR, c.get(Calendar.HOUR) + 10);
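The usual idiom for this is Calendar.add rather than set, since add rolls larger fields (day, month, year) over automatically. A minimal sketch (the class and helper names here are illustrative, not from the original thread):

```java
import java.util.Calendar;
import java.util.Date;

public class AddHour {
    // Adds a number of hours to a Date via Calendar.add, which
    // rolls minutes/days/months over automatically.
    static Date addHours(Date d, int hours) {
        Calendar c = Calendar.getInstance();
        c.setTime(d);
        c.add(Calendar.HOUR_OF_DAY, hours);
        return c.getTime();
    }

    public static void main(String[] args) {
        Date now = new Date();
        Date later = addHours(now, 1);
        // exactly one hour (3,600,000 ms) later
        System.out.println(later.getTime() - now.getTime());
    }
}
```

Note that HOUR_OF_DAY is the 24-hour field; Calendar.HOUR is the 12-hour field used with AM_PM.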
Similar Messages
-
Adding time issue using Date's getTime method
The following code is incorrectly adding 5 hours to the resultant time.
I need to be able to add dates, and this just isn't working right.
Is this a bug or am I missing something?
long msecSum = 0;
DateFormat dateFormat = new SimpleDateFormat("HH:mm:ss.SSS");
try {
    Date date1 = dateFormat.parse("01:02:05.101");
    Date date2 = dateFormat.parse("02:03:10.102");
    System.out.println("Date1: " + dateFormat.format(date1));
    System.out.println("Date2: " + dateFormat.format(date2));
    msecSum = date1.getTime() + date2.getTime(); // adds 5 hours !!!
    System.out.println("Sum: " + dateFormat.format(msecSum));
} catch (Exception e) {
    System.out.println("Unable to process time values");
}
Results:
Date1: 01:02:05.101
Date2: 02:03:10.102
Sum: 08:05:15.203 // should be 3 hours, not 8

Dates shouldn't be added, but if you promise not to tell anyone:
long msecSum = 0;
DateFormat dateFormat = new SimpleDateFormat("HH:mm:ss.SSS");
dateFormat.setTimeZone(TimeZone.getTimeZone("GMT"));
try {
    Date date1 = dateFormat.parse("01:02:05.101");
    Date date2 = dateFormat.parse("02:03:10.102");
    System.out.println("Date1: " + dateFormat.format(date1));
    System.out.println("Date2: " + dateFormat.format(date2));
    msecSum = date1.getTime() + date2.getTime(); // no timezone offset now
    System.out.println("Sum: " + dateFormat.format(msecSum));
} catch (Exception e) {
    System.out.println("Unable to process time values");
}

Me? I would just parse the String "01:02:05.101" to extract the hours, minutes, seconds and milliseconds and do the math. -
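The "parse the String and do the math" suggestion from the reply above could look like the sketch below (helper names are illustrative). Treating the strings as durations in milliseconds sidesteps java.util.Date and its timezone-dependent epoch entirely:

```java
public class SumDurations {
    // Parses "HH:mm:ss.SSS" as a plain duration in milliseconds,
    // with no java.util.Date and no timezone involved.
    static long toMillis(String hms) {
        String[] p = hms.split("[:.]"); // hours, minutes, seconds, millis
        return ((Long.parseLong(p[0]) * 60 + Long.parseLong(p[1])) * 60
                + Long.parseLong(p[2])) * 1000 + Long.parseLong(p[3]);
    }

    // Formats a millisecond duration back to "HH:mm:ss.SSS".
    static String fromMillis(long ms) {
        return String.format("%02d:%02d:%02d.%03d",
                ms / 3600000, ms / 60000 % 60, ms / 1000 % 60, ms % 1000);
    }

    public static void main(String[] args) {
        long sum = toMillis("01:02:05.101") + toMillis("02:03:10.102");
        System.out.println(fromMillis(sum)); // 03:05:15.203
    }
}
```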
How can I display a constant 1 hour of data in my VI
I am currently designing a VI which reads data from a spreadsheet which is being updated from another source.
I currently have my VI reading the information and displaying it on waveform charts.
I have 13 sample points each of which has a chart of its own. I wish to plot the data and be able to review it whilst it is running which is not a problem as I have activated the scroll bar function within the chart.
Now the next task I wish to achieve is to keep only a certain amount of history data to review, e.g. 1 hour of data.
So if I have been running the VI for 8 hours, there will still only be the previous hour's data to review.
Can anybody help with how to achieve this? Has anybody else needed to do anything like this?
Thanks in advance

Hi n_,
Would this not keep all of the data plotted stored in memory somewhere?
That depends on how you created/control those buffers…
I wish to use this to monitor a process constantly over years
So you need to limit the history length of your charts or use your own buffers…
(When it needs to run for "years" you should NOT use a Windows PC and you should stay away from any BuildArray function.)
Best regards,
GerdW
CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
Kudos are welcome -
How do I delete duplicate songs in my iTunes? I have hundreds and I don't want to delete them one by one. They were all added on the same date, so sorting by the date won't help.
Hi, if this is in regards to your library simply open up itunes and do the following steps:
Click File
Scroll down to "show duplicates"
A list will then appear of your duplicate song titles.
Be sure to CAREFULLY review each song to make sure it is a duplicate ( as I have some music that is the same song but live, acoustic etc...)
Proceed to manually delete each song from the list and leave alone any song that you wish to keep.
Best of luck,
Cait -
What I have is a VI that uses the following subVIs: it starts with FP Open VI, then FP Create Tag VI, into a While Loop which contains an FP Read VI outputting data into an Index Array VI, outputting to a Display (DBL). This shows the output of an FP-AI-100 monitoring a 9V battery. I have to monitor this battery for a 4-hour period. My problem is storing the 4 hours of data and getting it out of the While Loop into a Write to Spreadsheet File VI; all I seem to accomplish is just one data sample, which I get into a spreadsheet file with no problem. I just can't get 4 hours' worth. By the way, this is my first VI and I'm self-trained, so have mercy.

I figured it out, thanks.
John Morris
Glendinning Marine -
I added 0AMOUNT in a generic DataSource and in RSA3 I am seeing the data, but...
I added 0AMOUNT in a generic DataSource and in RSA3 I am seeing the data, but I am not seeing any data in the target table.
What would be the cause?

Hi,
I guess you mean the target table in BW, correct?
First replicate your DSource in BW
Open your TRules. In the tab Transfer structure/DataSource, locate your field in the right pane (it should be greyed, not blue); move it to the left (into the transfer structure); reactivate and reload.
You should now see the field in your PSA table.
hope this helps...
Olivier. -
IPhone 4, has timestamp been added to messages to date?
I just got an iPhone 4, has timestamp been added to messages to date?
Ok, sorted: it looks like the folder where the music is stored had changed to iTunes Media, when all my music is in iTunes Music. I have reset the default folder back to iTunes Music and that seems to have solved the issue. Thanks to those that helped.
-
BDB dumps core after adding approx 19MB of data
Hi,
BDB core dumps after adding about 19MB of data & killing and restarting it several times.
Stack trace :
#0 0xc00000000033cad0:0 in kill+0x30 () from /usr/lib/hpux64/libc.so.1
(gdb) bt
#0 0xc00000000033cad0:0 in kill+0x30 () from /usr/lib/hpux64/libc.so.1
#1 0xc000000000260cf0:0 in raise+0x30 () from /usr/lib/hpux64/libc.so.1
#2 0xc0000000002fe710:0 in abort+0x190 () from /usr/lib/hpux64/libc.so.1
warning:
ERROR: Use the "objectdir" command to specify the search
path for objectfile db_err.o.
If NOT specified will behave as a non -g compiled binary.
warning: No unwind information found.
Skipping this library /integhome/jobin/B063_runEnv/add-ons/lib/libicudata.sl.34.
#3 0xc000000022ec2340:0 in __db_assert+0xc0 ()
from /integhome/jobin/B063_runEnv/service/sys/servicerun/bin/libdb_cxx-4.3.so
warning:
ERROR: Use the "objectdir" command to specify the search
path for objectfile db_meta.o.
If NOT specified will behave as a non -g compiled binary.
#4 0xc000000022ed2870:0 in __db_new+0x780 ()
from /integhome/jobin/B063_runEnv/service/sys/servicerun/bin/libdb_cxx-4.3.so
warning:
ERROR: Use the "objectdir" command to specify the search
path for objectfile bt_split.o.
If NOT specified will behave as a non -g compiled binary.
#5 0xc000000022ded690:0 in __bam_root+0xb0 ()
from /integhome/jobin/B063_runEnv/service/sys/servicerun/bin/libdb_cxx-4.3.so
#6 0xc000000022ded2d0:0 in __bam_split+0x1e0 ()
from /integhome/jobin/B063_runEnv/service/sys/servicerun/bin/libdb_cxx-4.3.so
warning:
ERROR: Use the "objectdir" command to specify the search
path for objectfile bt_cursor.o.
If NOT specified will behave as a non -g compiled binary.
#7 0xc000000022dc83f0:0 in __bam_c_put+0x360 ()
from /integhome/jobin/B063_runEnv/service/sys/servicerun/bin/libdb_cxx-4.3.so
warning:
ERROR: Use the "objectdir" command to specify the search
path for objectfile db_cam.o.
If NOT specified will behave as a non -g compiled binary.
#8 0xc000000022eb8c10:0 in __db_c_put+0x740 ()
from /integhome/jobin/B063_runEnv/service/sys/servicerun/bin/libdb_cxx-4.3.so
warning:
ERROR: Use the "objectdir" command to specify the search
path for objectfile db_am.o.
If NOT specified will behave as a non -g compiled binary.
#9 0xc000000022ea4100:0 in __db_put+0x4c0 ()
from /integhome/jobin/B063_runEnv/service/sys/servicerun/bin/libdb_cxx-4.3.so
warning:
ERROR: Use the "objectdir" command to specify the search
path for objectfile db_iface.o.
If NOT specified will behave as a non -g compiled binary.
#10 0xc000000022eca7a0:0 in __db_put_pp+0x240 ()
from /integhome/jobin/B063_runEnv/service/sys/servicerun/bin/libdb_cxx-4.3.so
warning:
ERROR: Use the "objectdir" command to specify the search
path for objectfile cxx_db.o.
If NOT specified will behave as a non -g compiled binary.
#11 0xc000000022d92c90:0 in Db::put(DbTxn*,Dbt*,Dbt*,unsigned int)+0x120 ()
from /integhome/jobin/B063_runEnv/service/sys/servicerun/bin/libdb_cxx-4.3.so
What is the behaviour of BDB if it is killed and restarted while a transaction is in progress?
Does anybody have an idea why BDB dumps core in the above scenario?
Regards
Sandhya

Hi Bogdan,
As you suggested, I am using the flags below to open an environment:
DB_RECOVER | DB_CREATE | DB_INIT_LOG | DB_INIT_MPOOL | DB_INIT_TXN | DB_THREAD
DB_INIT_LOCK is not used because at our application level we maintain a lock to guard against multiple simultaneous accesses.
The following message is output on the console, and then it dumps core with the same stack trace as posted before.
__db_assert: "last == pgno" failed: file "../dist/../db/db_meta.c", line 163
I ran the db_verify, db_stat and db_recover tools on the DB and their results are as below.
db_verify <dbfile>
db_verify: Page 4965: partially zeroed page
db_verify: ./configserviceDB: DB_VERIFY_BAD: Database verification failed
db_recover -v
Finding last valid log LSN: file: 1 offset 42872
Recovery starting from [1][42200]
Recovery complete at Sat Jul 28 17:40:36 2007
Maximum transaction ID 8000000b Recovery checkpoint [1][42964]
db_stat -d <dbfile>
53162 Btree magic number
9 Btree version number
Big-endian Byte order
Flags
2 Minimum keys per-page
8192 Underlying database page size
1 Number of levels in the tree
60 Number of unique keys in the tree
60 Number of data items in the tree
0 Number of tree internal pages
0 Number of bytes free in tree internal pages (0% ff)
1 Number of tree leaf pages
62 Number of bytes free in tree leaf pages (99% ff)
0 Number of tree duplicate pages
0 Number of bytes free in tree duplicate pages (0% ff)
0 Number of tree overflow pages
0 Number of bytes free in tree overflow pages (0% ff)
0 Number of empty pages
0 Number of pages on the free list
db_stat -E <dbfile>
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Default database environment information:
4.3.28 Environment version
0x120897 Magic number
0 Panic value
2 References
0 The number of region locks that required waiting (0%)
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Per region database environment information:
Mpool Region:
2 Region ID
-1 Segment ID
1MB 264KB Size
0 The number of region locks that required waiting (0%)
Log Region:
3 Region ID
-1 Segment ID
1MB 64KB Size
0 The number of region locks that required waiting (0%)
Transaction Region:
4 Region ID
-1 Segment ID
16KB Size
0 The number of region locks that required waiting (0%)
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
DB_ENV handle information:
Set Errfile
db_stat Errpfx
!Set Errcall
!Set Feedback
!Set Panic
!Set Malloc
!Set Realloc
!Set Free
Verbose flags
!Set App private
!Set App dispatch
!Set Home
!Set Log dir
/integhome/jobin/B064_July2/runEnv/temp Tmp dir
!Set Data dir
0660 Mode
DB_INIT_LOG, DB_INIT_MPOOL, DB_INIT_TXN, DB_USE_ENVIRON Open flags
!Set Lockfhp
Set Rec tab
187 Rec tab slots
!Set RPC client
0 RPC client ID
0 DB ref count
-1 Shared mem key
400 test-and-set spin configuration
!Set DB handle mutex
!Set api1 internal
!Set api2 internal
!Set password
!Set crypto handle
!Set MT mutex
DB_ENV_LOG_AUTOREMOVE, DB_ENV_OPEN_CALLED Flags
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Default logging region information:
0x40988 Log magic number
10 Log version number
1MB Log record cache size
0660 Log file mode
1Mb Current log file size
632B Log bytes written
632B Log bytes written since last checkpoint
1 Total log file writes
0 Total log file write due to overflow
1 Total log file flushes
1 Current log file number
42872 Current log file offset
1 On-disk log file number
42872 On-disk log file offset
1 Maximum commits in a log flush
1 Minimum commits in a log flush
1MB 64KB Log region size
0 The number of region locks that required waiting (0%)
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Log REGINFO information:
Log Region type
3 Region ID
__db.003 Region name
0xc00000000b774000 Original region address
0xc00000000b774000 Region address
0xc00000000b883dd0 Region primary address
0 Region maximum allocation
0 Region allocated
REGION_JOIN_OK Region flags
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
DB_LOG handle information:
!Set DB_LOG handle mutex
0 Log file name
!Set Log file handle
Flags
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
LOG handle information:
0 file name list mutex (0%)
0x40988 persist.magic
10 persist.version
0 persist.log_size
0660 persist.mode
1/42872 current file offset LSN
1/42872 first buffer byte LSN
0 current buffer offset
42872 current file write offset
68 length of last record
0 log flush in progress
0 Log flush mutex (0%)
1/42872 last sync LSN
1/41475 cached checkpoint LSN
1MB log buffer size
1MB log file size
1MB next log file size
0 transactions waiting to commit
1/0 LSN of first commit
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
LOG FNAME list:
0 File name mutex (0%)
1 Fid max
ID Name Type Pgno Txnid DBP-info
0 configserviceDB btree 0 0 No DBP 0 0 0
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Default cache region information:
1MB 262KB 960B Total cache size
1 Number of caches
1MB 264KB Pool individual cache size
0 Maximum memory-mapped file size
0 Maximum open file descriptors
0 Maximum sequential buffer writes
0 Sleep after writing maximum sequential buffers
0 Requested pages mapped into the process' address space
43312 Requested pages found in the cache (89%)
4968 Requested pages not found in the cache
640 Pages created in the cache
4965 Pages read into the cache
621 Pages written from the cache to the backing file
4818 Clean pages forced from the cache
621 Dirty pages forced from the cache
0 Dirty pages written by trickle-sync thread
166 Current total page count
146 Current clean page count
20 Current dirty page count
131 Number of hash buckets used for page location
53888 Total number of times hash chains searched for a page
4 The longest hash chain searched for a page
92783 Total number of hash buckets examined for page location
0 The number of hash bucket locks that required waiting (0%)
0 The maximum number of times any hash bucket lock was waited for
0 The number of region locks that required waiting (0%)
5615 The number of page allocations
10931 The number of hash buckets examined during allocations
22 The maximum number of hash buckets examined for an allocation
5439 The number of pages examined during allocations
11 The max number of pages examined for an allocation
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Pool File: temporary
1024 Page size
0 Requested pages mapped into the process' address space
43245 Requested pages found in the cache (99%)
1 Requested pages not found in the cache
635 Pages created in the cache
0 Pages read into the cache
617 Pages written from the cache to the backing file
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Pool File: configserviceDB
8192 Page size
0 Requested pages mapped into the process' address space
65 Requested pages found in the cache (1%)
4965 Requested pages not found in the cache
1 Pages created in the cache
4965 Pages read into the cache
0 Pages written from the cache to the backing file
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Mpool REGINFO information:
Mpool Region type
2 Region ID
__db.002 Region name
0xc00000000b632000 Original region address
0xc00000000b632000 Region address
0xc00000000b773f08 Region primary address
0 Region maximum allocation
0 Region allocated
REGION_JOIN_OK Region flags
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
MPOOL structure:
0/0 Maximum checkpoint LSN
131 Hash table entries
64 Hash table last-checked
48905 Hash table LRU count
48914 Put counter
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
DB_MPOOL handle information:
!Set DB_MPOOL handle mutex
1 Underlying cache regions
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
DB_MPOOLFILE structures:
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
MPOOLFILE structures:
File #1: temporary
0 Mutex (0%)
0 Reference count
18 Block count
634 Last page number
0 Original last page number
0 Maximum page number
0 Type
0 Priority
0 Page's LSN offset
32 Page's clear length
0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 f8 0 0 0 0 ID
deadfile, file written Flags
File #2: configserviceDB
0 Mutex (0%)
1 Reference count
148 Block count
4965 Last page number
4964 Original last page number
0 Maximum page number
0 Type
0 Priority
0 Page's LSN offset
32 Page's clear length
0 0 b6 59 40 1 0 2 39 ac 13 6f 0 a df 18 0 0 0 0 ID
file written Flags
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Cache #1:
BH hash table (131 hash slots)
bucket #: priority, mutex
pageno, file, ref, LSN, mutex, address, priority, flags
bucket 0: 47385, 0/0%:
4813, #2, 0, 0/1, 0/0%, 0x04acf0, 47385
4944, #2, 0, 0/0, 0/0%, 0x020c18, 48692 -
Flag "Copy from Worklist Without Hours" in data entry profile
Hi,
please, could anyone explain to me the flag "Copy from Worklist Without Hours" in the data entry profile?
I created an enhancement to fill the worklist field. When I run CAT2, I expect that, with this flag set, the worklist record will be copied into the data entry section.
Is that right?
Thanks
Regards

not answered
-
Why do iDVD DVD's only hold an hour of data?
I am making DVD's on iDVD with AVI files. How come a DVD you buy in the store has 2 or 3 hours of data on it when the ones I make on iDVD or on a Windows program only hold 1 hour? Is there software you can buy for either a PC or Mac that allows you to put more data on a DVD and still retain excellent photo quality?
How come a DVD you buy in the store has 2 or 3 hours of data on it when the ones I make on iDVD or on a Windows program only hold 1 hour?
The DVDs you buy in a store are double layer.
iDVD 5 will let you put up to 2 hrs of video on a DVD.
The audio on iDVD-produced discs is uncompressed and takes up more space than the compressed audio on commercial discs. (Apple's DVD SP will let you produce DVDs with compressed audio.)
and still retain excellent photo quality?
The video on NTSC DVDs is about 640x480 (square) pixels (0.3 megapixels) and so will never look as good as your original multi-megapixel digital stills - especially when viewed full screen on a computer monitor.
Commercial DVDs are produced with very high quality hardware compressors and start off with high quality content optimized for DVD production. -
Problems While Extracting Hours From Date Field
Hi Guys,
Hope you are doing well.
I am facing some problems while extracting hours from date field. Below is an example of my orders table:-
select * from orders;
Order_NO Arrival Time Product Name
1 20-NOV-10 10:10:00 AM Desktop
2 21-NOV-10 17:26:34 PM Laptop
3 22-JAN-11 08:10:00 AM Printer
Earlier there was a requirement to count how many orders take place in the orders table each day, for which I used to write a query with:
arrival_time >= trunc((sysdate-1),'DD')
and arrival_time < trunc((sysdate),'DD')
The above query gives me how many orders took place yesterday.
Now I have a new requirement: to report, every 4 hours, how many orders took place. For example, if the current time is 8:00 AM IST then the query should fetch how many orders took place from 4:00 AM till 8:00 AM. The report will run next at 12:00 PM IST, which will give me the orders placed from 8:00 AM till 12:00 PM.
The report will run every 4 hours a day and cover the orders placed in the last 4 hours. I have a scheduler which will run this query, but how do I make the query fetch the order details that arrived in the last 4 hours? I am not able to achieve this using trunc.
Can you please assist me how to make this happen. I have checked "Extract" also but I am not satisfied.
Please help.
Thanks In Advance
Arijit

You may try something like:
with testdata as (
  select sysdate - level/24 t from dual
  connect by level < 11
)
select
  to_char(sysdate, 'DD-MM-YYYY HH24:MI:SS') s
, to_char(t, 'DD-MM-YYYY HH24:MI:SS') t
from testdata
where
  t >= trunc(sysdate, 'HH') - numtodsinterval(4, 'HOUR')
S T
19-06-2012 16:08:21 19-06-2012 15:08:21
19-06-2012 16:08:21 19-06-2012 14:08:21
19-06-2012 16:08:21 19-06-2012 13:08:21
19-06-2012 16:08:21 19-06-2012 12:08:21

trunc( , 'HH') truncates the minutes and seconds from the date.
EXTRACT(HOUR ...) works only on timestamps.
regards
Edited by: chris227 on 19.06.2012 14:13 -
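On the application side, the same "last four full hours" window that trunc(sysdate,'HH') - numtodsinterval(4,'HOUR') produces can be computed before binding it into the query. A minimal Java sketch (class and method names are made up for illustration):

```java
import java.util.Calendar;
import java.util.Date;

public class FourHourWindow {
    // Returns {start, end} where end is "now" truncated to the hour
    // (like Oracle's trunc(date, 'HH')) and start is four hours earlier.
    static Date[] window(Date now) {
        Calendar c = Calendar.getInstance();
        c.setTime(now);
        c.set(Calendar.MINUTE, 0);       // zero out everything below
        c.set(Calendar.SECOND, 0);       // the hour field, i.e.
        c.set(Calendar.MILLISECOND, 0);  // trunc(date, 'HH')
        Date end = c.getTime();
        c.add(Calendar.HOUR_OF_DAY, -4); // back four hours
        return new Date[] { c.getTime(), end };
    }
}
```

The two dates would then be bound as `arrival_time >= :start and arrival_time < :end`, mirroring the yesterday-only query earlier in the thread.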
Adding day/hour/minute/second to a date value
How does one add a day/hour/minute/second to a date value?
SQL> select to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') to_day,
2 to_char(sysdate+1, 'DD/MM/YYYY HH24:MI:SS') add_day,
3 to_char(sysdate + 1/24, 'DD/MM/YYYY HH24:MI:SS') add_hour,
4 to_char(sysdate + 1/(24*60), 'DD/MM/YYYY HH24:MI:SS') add_minute,
5 to_char(sysdate + 1/(24*60*60), 'DD/MM/YYYY HH24:MI:SS') add_second
6 from dual
7 /
TO_DAY ADD_DAY ADD_HOUR ADD_MINUTE ADD_SECOND
10/10/2006 11:54:23 11/10/2006 11:54:23 10/10/2006 12:54:23 10/10/2006 11:55:23 10/10/2006 11:54:24
SQL>

Cheers
Sarma. -
Hi!
I have been trying to check a date column for dates that occur before a certain date plus one hour.
Let's say that I want to find all dates before 2002-aug-12 12:00 and all dates before said time plus one hour.
Anyone know of a simple way to do this, I am kinda stuck.
Regards,
Nisse Marcusson

SELECT date_column
FROM table_name
WHERE date_column < TO_DATE ('2002-aug-12 12:00', 'yyyy-mon-dd hh24:mi') + 1/24 -
With SPD adding 1 month to date not working?
I have a calculated column [1stDayofMth] =DATE(YEAR([Start Date]),MONTH([Start Date]),1) which appears to be working fine.
My SPD WF uses the [1stDayofMth] column and does a few calculations to find the next 2 months 1st days
[Month 1 Resume Date] = [1stDayofMth] + "1 Months"
[Month 2 Resume Date] = [1stDayofMth] + "2 Months"
These dates are used to pause the workflow. The problem when I log the results to history these calculations are not the 1st day of the next month on some list items?
eg (text from workflow history)
1stDayofMth="11/1/2014 12:00:00 AM"
Month 1 Resume Date="11/30/2014 11:00:00 PM" | Month 2 Resume Date="12/31/2014 11:00:00 PM" | Month 3 Resume Date="1/4/2015 11:00:00 PM"
The [1stDayofMth] column is set to "Date Only" and is showing as only the date, my logging was set to show the String and I can see the time is included as well. Is this the cause?
Stunpals - Disclaimer: This posting is provided "AS IS" with no warranties.

As a band-aid I inserted a second action adding 2 hours to the date, which moves it from the 30th at 11:00 PM to the 1st at 1:00 AM.
I haven't done a lot of digging but it looks like the issue of adding 1 month to the 1stDayofMonth is only occurring on months with 30 days?
Stunpals - Disclaimer: This posting is provided "AS IS" with no warranties.
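The 11:00 PM timestamps in the history suggest the dates are being stored in UTC and shifted by the DST change when rendered locally, rather than the month arithmetic itself failing; the 2-hour band-aid above points the same way. For comparison, calendar-aware month addition (sketched here in Java purely as a reference point, not as SPD's actual behaviour) keeps the day of month and clamps it to the target month's length instead of adding a fixed number of days:

```java
import java.util.Calendar;

public class MonthAdd {
    // Calendar.add(MONTH, n) is calendar-aware: it keeps the day of
    // month, clamping to the last valid day of the target month.
    static Calendar plusMonths(int year, int month, int day, int n) {
        Calendar c = Calendar.getInstance();
        c.clear();
        c.set(year, month, day);
        c.add(Calendar.MONTH, n);
        return c;
    }

    public static void main(String[] args) {
        Calendar c = plusMonths(2015, Calendar.JANUARY, 31, 1);
        // Jan 31 + 1 month clamps to Feb 28 (2015 is not a leap year)
        System.out.println(c.get(Calendar.MONTH) == Calendar.FEBRUARY
                && c.get(Calendar.DAY_OF_MONTH) == 28);
    }
}
```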