Problems with date in procedure on Oracle 11g
Hi gurus,
I have some problems with the date format on Oracle 11g.
Let me explain the situation:
When I run a query like
select to_number(to_char(to_date('01.04.2009','dd.mm.yyyy'), 'yyyy'))
from sys.dual
I get 2009 as the result.
When I do the same in a procedure of a package, like this:
my_year := to_number(to_char(to_date('01.04.2009','dd.mm.yyyy'), 'yyyy'));
the variable my_year contains the value 9 instead of 2009.
Can someone explain to me what's going wrong?
I have already tried changing the nls_date_format environment variable for the session and for the whole database, with no success.
Regards,
Björn
Thank you all for your replies so far:
@Alex: You are right, using your short script in SQL*Plus gives me 2009 as well.
So I am now posting the essential excerpts of the procedure, because the whole thing is too large:
function insert_szrl (my_fremd_name varchar, my_elementadresse varchar,
                      my_zeitstempel varchar, my_wert float,
                      my_status varchar, my_zyklus varchar,
                      my_offset integer,
                      my_quelle varchar, my_nzm_daten integer) return integer is
begin
  my_date := to_date (substr (my_zeitstempel, 1, 10), 'dd.mm.yyyy') + my_tageswechsel + 1/24;
  if my_zyklus = 'mm' then
    my_zeitstempeldate := add_months(to_date(last_day(to_date(my_date, 'dd.mm.yyyy')), 'dd.mm.yyyy'), -1) + 1 + (my_tageswechsel + 1/24);
    my_days := to_date(last_day(to_date(my_date, 'dd.mm.yyyy')), 'dd.mm.yyyy') - add_months(to_date(last_day(to_date(my_date, 'dd.mm.yyyy')), 'dd.mm.yyyy'), -1);
    my_year := to_number(to_char(to_date(my_date,'dd.mm.yyyy'), 'yyyy'));
    ptime.umschalttage_tuned (my_year, my_ws, my_sw);
  end if;
While debugging the complete procedure, the date has looked like '01.04.2009 07:00:00' from the start.
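A likely explanation (an assumption, since the session's NLS settings aren't shown here): my_date is already a DATE, so to_date(my_date, 'dd.mm.yyyy') forces an implicit to_char using the session's NLS_DATE_FORMAT. If that format uses a two-digit year (e.g. 'DD.MM.RR'), the intermediate string becomes '01.04.09', and the 'yyyy' mask then reads '09' literally as year 9. The same two-digit-year pitfall can be reproduced in Java with SimpleDateFormat:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class YearPitfall {
    public static void main(String[] args) throws ParseException {
        // Original value with a four-digit year.
        Date d = new SimpleDateFormat("dd.MM.yyyy").parse("01.04.2009");

        // Implicit round-trip through a two-digit-year format
        // (analogous to an Oracle NLS_DATE_FORMAT of 'DD.MM.RR').
        String twoDigit = new SimpleDateFormat("dd.MM.yy").format(d); // "01.04.09"

        // Re-parsing with "yyyy" interprets "09" literally as year 9 AD,
        // just as Oracle's 'yyyy' mask does with a two-digit year string.
        Date mangled = new SimpleDateFormat("dd.MM.yyyy").parse(twoDigit);
        int year = Integer.parseInt(new SimpleDateFormat("yyyy").format(mangled));
        System.out.println(year); // prints 9
    }
}
```

If this is the cause, the fix on the Oracle side is to drop the redundant to_date() calls around values that are already DATEs.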
Edited by: user10994305 on 19.05.2009 15:58
Similar Messages
-
Problem with date format from Oracle DB
Hi,
I am facing a problem with date fields from Oracle DB sources. The field in the DB table has base type DATE and DDIC type DATS.
I mapped the date fields to Date characteristics in BI. Now the data that arrives in the PSA is in a weird format; it shows up like -0.PR.09-A.
I have tried changing the field settings in the DataSource to internal and external, and I have also tried mapping these date fields to text fields, without luck. Everything delivers the same format.
I have also tried conversion routines such as CONVERSION_EXIT_IDATE_INPUT to change the format. That also delivers the same old result.
If anybody has any suggestions, or if anybody has experienced such problems, please share your experience with me.
Thanks in advance.
Regards
Varada

Thanks for all your replies. The only solutions offered so far involve creating a view in the database. I want a solution that can be done in BI. I'd appreciate it if some of you have ideas on that.
The issue again in detail
I am facing an issue with date fields from Oracle data. The data sent from Oracle is in the format -0.AR.04-M. I am able to convert this date in BI with a conversion routine into the format 04-MAR-0.
The problem is that I am getting data of length 10 (output format) in the format -0.AR.04-M, where the month is not numeric. Since it is text, it takes one character of spacing more.
I have tried converting in different ways and increased the length in BI; the result is the same. I am wondering if we can change the date format in the database.
I am puzzled by this date format. I have checked date fields from other Oracle DB connections in BI; they receive data in the format 20.081.031, which BI can convert. Only the system I am working with is causing a problem.
Regards
Varada -
Problem with SDO_ADMIN.update_index procedure
Hi all:
We use Oracle 10g XE for development, utilizing Oracle Spatial (SDO) in the relational model. With this version everything works great; all procedures work without problems.
Recently we wanted to try Oracle 11g. I created the schema and the relational model, the same as in 10g XE.
The problem is that when I want to update the sdoindex tables with the procedure SDO_ADMIN.UPDATE_INDEX, no response is returned: no error and no update of the _sdoindex table.
I tried the procedure SDO_ADMIN.POPULATE_INDEX, with the same result.
I also tried calling POPULATE_INDEX with a wrong SDO_GID; 10g XE returns an ORA-13182 error, but no error is returned in 11g.
In 10g XE I needed to grant permissions from MDSYS to the schema holding the model, but in 11g I already have permission to execute the package.
After the migration the tables still have data, and with this data the geometry functions work; I can query data and it is found in the geometry, but no new data can be added.
I copied some data to 10g XE and there the functions work great.
I use the relational model, this is a example of tables.
ROADS
ROADS_SDODIM
ROADS_SDOGEOM
ROADS_SDOINDEX
ROADS_SDOLAYER
Any help will be appreciated
Thanks
Juan Pablo

You may get better answers here: Spatial
-
Tp ended with error code 0247 - addtobuffer has problems with data- and/or
Hello Experts,
If you give some idea, it will be greatly appreciated.
This transport issue started after a power outage; the SAP system went through a hard shutdown.
Then we brought the system up. Before that, we did not have this transport issue.
Our TMS landscape is
DEV - QA - PRD
SED - SEQ - SEP
DEV hosts the TMS domain controller.
FYI:
*At OS level, when we run the scp command as the root user, it works fine for any TR.
In STMS, while adding the TR in SEQ (the QA system), we are getting an error like this:
Error:
Transport control program tp ended with error code 0247
Message no. XT200
Diagnosis
An error occurred when executing a tp command.
Command: ADDTOBUFFER SEDK906339 SEQ client010 pf=/us
Return code: 0247
Error text: addtobuffer has problems with data- and/or
Request: SEDK906339
System Response
The function terminates.
Procedure
Correct the error and execute the command again if necessary.
This is tp version 372.04.71 (release 700, unicode enabled)
Addtobuffer failed for SEDK906339.
Neither datafile nor cofile exist (cofile may also be corrupted).
standard output from tp and from tools called by tp:
tp returncode summary:
TOOLS: Highest return code of single steps was: 0
ERRORS: Highest tp internal error was: 0247

When we do scp using SM69,
SEDADM@DEVSYS:/usr/sap/trans/cofiles/K906339.SED SEQADM@QASYS:/usr/sap/trans/cofiles/.
it throws an error like the one below:
Host key verification failed.
External program terminated with exit code 1
Thanks
Praba -
I have one problem with Data Guard. My archive log files are not applied.
I have one problem with Data Guard: my archive log files are not applied, although I have received all archive log files on my physical standby DB.
I have created a physical standby database on Oracle 10gR2 (Windows XP Professional). The primary database is on another computer.
In Enterprise Manager on the primary database it looks OK; I get the Data Guard status "Normal".
But, as I wrote above, the archive log files are not applied.
After I created the Physical Standby database, I have also done:
1. I connected to the Physical Standby database instance.
CONNECT SYS/SYS@luda AS SYSDBA
2. I started the Oracle instance at the Physical Standby database without mounting the database.
STARTUP NOMOUNT PFILE=C:\oracle\product\10.2.0\db_1\database\initluda.ora
3. I mounted the Physical Standby database:
ALTER DATABASE MOUNT STANDBY DATABASE
4. I started redo apply on Physical Standby database
alter database recover managed standby database disconnect from session
5. I switched the log files on Physical Standby database
alter system switch logfile
6. I verified the redo data was received and archived on Physical Standby database
select sequence#, first_time, next_time from v$archived_log order by sequence#
SEQUENCE# FIRST_TIME NEXT_TIME
3 2006-06-27 2006-06-27
4 2006-06-27 2006-06-27
5 2006-06-27 2006-06-27
6 2006-06-27 2006-06-27
7 2006-06-27 2006-06-27
8 2006-06-27 2006-06-27
7. I checked whether the archived redo log files were applied on the Physical Standby database
select sequence#,applied from v$archived_log;
SEQUENCE# APP
4 NO
3 NO
5 NO
6 NO
7 NO
8 NO
8. on Physical Standby database
select * from v$archive_gap;
No rows
9. on Physical Standby database
SELECT MESSAGE FROM V$DATAGUARD_STATUS;
MESSAGE
ARC0: Archival started
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
ARC4: Archival started
ARC5: Archival started
ARC6: Archival started
ARC7: Archival started
ARC8: Archival started
ARC9: Archival started
ARCa: Archival started
ARCb: Archival started
ARCc: Archival started
ARCd: Archival started
ARCe: Archival started
ARCf: Archival started
ARCg: Archival started
ARCh: Archival started
ARCi: Archival started
ARCj: Archival started
ARCk: Archival started
ARCl: Archival started
ARCm: Archival started
ARCn: Archival started
ARCo: Archival started
ARCp: Archival started
ARCq: Archival started
ARCr: Archival started
ARCs: Archival started
ARCt: Archival started
ARC0: Becoming the 'no FAL' ARCH
ARC0: Becoming the 'no SRL' ARCH
ARC1: Becoming the heartbeat ARCH
Attempt to start background Managed Standby Recovery process
MRP0: Background Managed Standby Recovery process started
Managed Standby Recovery not using Real Time Apply
MRP0: Background Media Recovery terminated with error 1110
MRP0: Background Media Recovery process shutdown
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[1]: Assigned to RFS process 2148
RFS[1]: Identified database type as 'physical standby'
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[2]: Assigned to RFS process 2384
RFS[2]: Identified database type as 'physical standby'
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[3]: Assigned to RFS process 3188
RFS[3]: Identified database type as 'physical standby'
Primary database is in MAXIMUM PERFORMANCE mode
Primary database is in MAXIMUM PERFORMANCE mode
RFS[3]: No standby redo logfiles created
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[4]: Assigned to RFS process 3168
RFS[4]: Identified database type as 'physical standby'
RFS[4]: No standby redo logfiles created
Primary database is in MAXIMUM PERFORMANCE mode
RFS[3]: No standby redo logfiles created
10. on Physical Standby database
SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
RFS IDLE 0 0 0 0
RFS IDLE 0 0 0 0
RFS IDLE 1 9 13664 2
RFS IDLE 0 0 0 0
10) on Primary database:
select message from v$dataguard_status;
MESSAGE
ARC0: Archival started
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
ARC4: Archival started
ARC5: Archival started
ARC6: Archival started
ARC7: Archival started
ARC8: Archival started
ARC9: Archival started
ARCa: Archival started
ARCb: Archival started
ARCc: Archival started
ARCd: Archival started
ARCe: Archival started
ARCf: Archival started
ARCg: Archival started
ARCh: Archival started
ARCi: Archival started
ARCj: Archival started
ARCk: Archival started
ARCl: Archival started
ARCm: Archival started
ARCn: Archival started
ARCo: Archival started
ARCp: Archival started
ARCq: Archival started
ARCr: Archival started
ARCs: Archival started
ARCt: Archival started
ARCm: Becoming the 'no FAL' ARCH
ARCm: Becoming the 'no SRL' ARCH
ARCd: Becoming the heartbeat ARCH
Error 1034 received logging on to the standby
Error 1034 received logging on to the standby
LGWR: Error 1034 creating archivelog file 'luda'
LNS: Failed to archive log 3 thread 1 sequence 7 (1034)
FAL[server, ARCh]: Error 1034 creating remote archivelog file 'luda'
11) on primary db
select name,sequence#,applied from v$archived_log;
NAME SEQUENCE# APP
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00003_0594204176.001 3 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00004_0594204176.001 4 NO
Luda 4 NO
Luda 3 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00005_0594204176.001 5 NO
Luda 5 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00006_0594204176.001 6 NO
Luda 6 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00007_0594204176.001 7 NO
Luda 7 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00008_0594204176.001 8 NO
Luda 8 NO
12) on standby db
select name,sequence#,applied from v$archived_log;
NAME SEQUENCE# APP
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00004_0594204176.001 4 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00003_0594204176.001 3 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00005_0594204176.001 5 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00006_0594204176.001 6 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00007_0594204176.001 7 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00008_0594204176.001 8 NO
13) my init.ora files
On standby db
irina.__db_cache_size=79691776
irina.__java_pool_size=4194304
irina.__large_pool_size=4194304
irina.__shared_pool_size=75497472
irina.__streams_pool_size=0
*.audit_file_dest='C:\oracle\product\10.2.0\admin\luda\adump'
*.background_dump_dest='C:\oracle\product\10.2.0\admin\luda\bdump'
*.compatible='10.2.0.1.0'
*.control_files='C:\oracle\product\10.2.0\oradata\luda\luda.ctl'
*.core_dump_dest='C:\oracle\product\10.2.0\admin\luda\cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='luda','irina'
*.db_name='irina'
*.db_unique_name='luda'
*.db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
*.fal_client='luda'
*.fal_server='irina'
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(irina,luda)'
*.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/luda/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=luda'
*.log_archive_dest_2='SERVICE=irina LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=irina'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_max_processes=30
*.log_file_name_convert='C:/oracle/product/10.2.0/oradata/irina/','C:/oracle/product/10.2.0/oradata/luda/'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=167772160
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='C:\oracle\product\10.2.0\admin\luda\udump'
On primary db
irina.__db_cache_size=79691776
irina.__java_pool_size=4194304
irina.__large_pool_size=4194304
irina.__shared_pool_size=75497472
irina.__streams_pool_size=0
*.audit_file_dest='C:\oracle\product\10.2.0/admin/irina/adump'
*.background_dump_dest='C:\oracle\product\10.2.0/admin/irina/bdump'
*.compatible='10.2.0.1.0'
*.control_files='C:\oracle\product\10.2.0\oradata\irina\control01.ctl','C:\oracle\product\10.2.0\oradata\irina\control02.ctl','C:\oracle\product\10.2.0\oradata\irina\control03.ctl'
*.core_dump_dest='C:\oracle\product\10.2.0/admin/irina/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='luda','irina'
*.db_name='irina'
*.db_recovery_file_dest='C:\oracle\product\10.2.0/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
*.fal_client='irina'
*.fal_server='luda'
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(irina,luda)'
*.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/irina/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=irina'
*.log_archive_dest_2='SERVICE=luda LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=luda'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_max_processes=30
*.log_file_name_convert='C:/oracle/product/10.2.0/oradata/luda/','C:/oracle/product/10.2.0/oradata/irina/'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=167772160
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='C:\oracle\product\10.2.0/admin/irina/udump'
Please help me!!!!

Hi,
After several tries my redo logs are now being applied. I think in my case it had to do with tnsnames.ora. At the moment I have both databases in both tnsnames.ora files using the SID and not the SERVICE_NAME.
Now I want to use DGMGRL. Adding a configuration and a standby database works fine, but when I try to enable the configuration, DGMGRL gives no feedback and looks like it is hanging. The log, though, says that it succeeded.
In another session, 'show configuration' produces the following, confirming that the enable succeeded:
DGMGRL> show configuration
Configuration
Name: avhtest
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
avhtest - Primary database
avhtestls53 - Physical standby database
Current status for "avhtest":
Warning: ORA-16610: command 'ENABLE CONFIGURATION' in progress
Is there anybody who has experienced the same problem and/or knows the solution?
With kind regards,
Martin Schaap -
How to make data base link from oracle 11g r2 to microsoft sql 2008 express
I need to make a database link from Oracle 11g R2 to Microsoft SQL 2008 Express to set up replication between them.
Please help me!
I don't know what the user and password are in the command which creates the database link.

To replicate data you can use Database Gateway for ODBC or Database Gateway for MS SQL Server. Please use the search engine of this forum if you want more details about each product.
Some SQL Servers are set up to use Windows authentication only. In this case you won't be able to connect to the SQL Server, as Windows authentication isn't supported with the gateways. You have to make sure your SQL Server supports username/password authentication; a common user is the "sa" user. Regarding the username/password, please get in touch with your SQL Server admin.
I'm having a problem with dates when I send my Numbers doc to Excel. The dates are all out, and recipients have to cut and paste individual entries onto their spreadsheet. Any idea how I can prevent this?
I'm using Lion on an MBP, and Numbers is the latest version.

Could you give more details about what is wrong with your dates?
Microsoft products aren't allowed on my machines, but I use LibreOffice, which is a clone of Office.
When I export from Numbers to Excel and open the result with LibreOffice, the dates are treated correctly.
To be precise, dates after 01/01/1904 are treated correctly; dates before 01/01/1904 are exported as strings, but since that is flagged during the export process, it's not surprising.
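For background, Excel has two serial-date systems, and the 1904 system (historically the Mac default) counts days from 1904-01-01, which is why that date is the cut-off mentioned above. A minimal illustrative sketch (the serial values are made up for the example):

```java
import java.time.LocalDate;

public class ExcelSerial {
    // Convert a serial number in Excel's 1904 date system to a date:
    // serial 0 corresponds to 1904-01-01, and each unit is one day.
    static LocalDate from1904Serial(long serial) {
        return LocalDate.of(1904, 1, 1).plusDays(serial);
    }

    public static void main(String[] args) {
        System.out.println(from1904Serial(0));   // 1904-01-01
        System.out.println(from1904Serial(366)); // 1905-01-01 (1904 is a leap year)
    }
}
```

Dates before the epoch would need a negative serial, which many spreadsheet tools refuse to produce, hence the fallback to strings on export.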
Yvan KOENIG (VALLAURIS, France) mardi 3 janvier 2012
iMac 21.5”, i7, 2.8 GHz, 12 GB, 1 TB, Mac OS X 10.6.8 and 10.7.2
My iDisk is : http://public.me.com/koenigyvan
Please : Search for questions similar to your own before submitting them to the community
For iWork's applications dedicated to iOS, go to :
https://discussions.apple.com/community/app_store/iwork_for_ios -
Problem with date format when ask prompt web-intelligence
BO XI R2 with SP5, installed on Windows 2003 with Russian support.
Inside BO every label and button uses Russian. But when I invoke a web report and the prompt appears, there is a problem with the date format.
It looks like a Korean date format, 'jj.nn.aaa H:mm:ss'. I have checked the system date settings in Windows; everything is right.
What do I have to do?
Where can I change the date format for BO?

GK, try this...
decode(instr(packagename.functionname(param1, param2), '2400'), 0,
       to_date(to_char(to_date(rtrim(packagename.functionname(param1, param2), '(PT)'), 'Month dd, yyyy "at" hh24mi'), 'mm/dd/yyyy hh24mi'), 'mm/dd/yyyy hh24mi'),
       to_date(to_char(to_date(rtrim(packagename.functionname(param1, param2), '(PT)'), 'Month dd, yyyy "at" "2400"') + 1, 'mm/dd/yyyy "0000"'), 'mm/dd/yyyy "0000"'))

- Marilyn
Got one more problem, Marilyn and Radhakrishnan...
Regarding the solution you provided me earlier in the thread "Problem with date format"...
What is happening is: I am able to change 2400 to 0000, but when it changes from 2400 on Jan 1st to 0000, the hour changes but not the date. The date still remains Jan 1st instead of Jan 2nd.
E.g.: Jan 1st 2400 -- changed to -- Jan 1st 0000
instead of Jan 2nd 0000
Could you please help me in this issue...
Thanks,
GK

GK, try this...
decode(instr(packagename.functionname(param1, param2), '2400'), 0,
       to_date(to_char(to_date(rtrim(packagename.functionname(param1, param2), '(PT)'), 'Month dd, yyyy "at" hh24mi'), 'mm/dd/yyyy hh24mi'), 'mm/dd/yyyy hh24mi'),
       to_date(to_char(to_date(rtrim(packagename.functionname(param1, param2), '(PT)'), 'Month dd, yyyy "at" "2400"') + 1, 'mm/dd/yyyy "0000"'), 'mm/dd/yyyy "0000"'))

- Marilyn
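The decode() above handles the '2400' case by adding one day by hand before re-formatting, which is what makes Jan 1st 2400 come out as Jan 2nd 0000. For comparison (an illustrative sketch, not from the thread), a lenient date parser can do the same rollover automatically; Java's SimpleDateFormat is lenient by default and normalizes hour 24 to 00:00 of the next day:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class Rollover {
    public static void main(String[] args) throws ParseException {
        SimpleDateFormat in =
            new SimpleDateFormat("MMMM dd, yyyy 'at' HHmm", Locale.ENGLISH);
        // Lenient parsing accepts the out-of-range hour 24 and rolls it
        // over to 00:00 of the following day, giving Jan 2nd automatically.
        Date d = in.parse("January 1, 2009 at 2400");

        SimpleDateFormat out = new SimpleDateFormat("MM/dd/yyyy HHmm");
        System.out.println(out.format(d)); // 01/02/2009 0000
    }
}
```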
I am a rookie in SAP and I have a small problem with dates. Do we have any function module to find the first day of the month, given the system's current date? Please help me out.
Hi,
As Ganesan said, you can do that.
Here is some sample code.
data v type sy-datum.
data d type DTRESR-WEEKDAY.

v+6(2) = '01'.
v+4(2) = sy-datum+4(2).
v+0(4) = sy-datum+0(4).

CALL FUNCTION 'DATE_TO_DAY'
  EXPORTING
    date    = v
  IMPORTING
    WEEKDAY = d.

write d.
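The ABAP above builds the first-of-month date by overwriting the day component with '01', then looks up its weekday. For comparison (an illustrative sketch, not SAP code), the same computation in Java:

```java
import java.time.DayOfWeek;
import java.time.LocalDate;

public class FirstOfMonth {
    public static void main(String[] args) {
        // Stand-in for sy-datum: an example "current date".
        LocalDate today = LocalDate.of(2009, 5, 19);

        // Equivalent of forcing the day component to '01'.
        LocalDate first = today.withDayOfMonth(1);

        // Equivalent of the DATE_TO_DAY weekday lookup.
        DayOfWeek weekday = first.getDayOfWeek();
        System.out.println(first + " is a " + weekday); // 2009-05-01 is a FRIDAY
    }
}
```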
Problem with date format dd/mm/yyyy: I need to convert it to yyyy-mm-dd
Dear friends,
I have a problem with date formats. I receive dates in the format dd/mm/yyyy, but I can upload to MySQL only in the format yyyy-mm-dd.
How should I handle this situation? I've written the code lines below, but I have some problem with them. Please help me solve this.
String pattern = "yyyy-mm-dd";
SimpleDateFormat format = new SimpleDateFormat(pattern);
try {
    Date date = format.parse("2006-02-12");
    System.out.println(date);
} catch (ParseException e) {
    e.printStackTrace();
}
System.out.println(format.format(new Date()));
This output gives me Tue Apr 03 00:00:00 IST 2007.
But I need the date format in yyyy-mm-dd.
regards,
maza
thanks in advance.

Thanks Dear BalusC,
I tried with this:
rs.getString("DATA_SCAD1"); // where the source comes from .xls files
String pattern = "yyyy-MM-dd";
SimpleDateFormat format = new SimpleDateFormat(pattern);
try {
    Date date = format.parse("DATA_SCAD1");
    System.out.println(date);
} catch (ParseException e) {
    e.printStackTrace();
}
System.out.println(format.format(new Date()));
This output gives me Tue Apr 03 00:00:00 IST 2007.
But I want to display the date in the format yyyy-mm-dd.
regards,
maza -
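For what it's worth, the snippets above have two issues: lowercase mm in a pattern means minutes (months are uppercase MM), and format.parse("DATA_SCAD1") parses the literal string "DATA_SCAD1" rather than the column value. A minimal corrected sketch (the sample input value is an assumption):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateConvert {
    public static void main(String[] args) throws ParseException {
        // In the real code this would be rs.getString("DATA_SCAD1").
        String fromXls = "12/02/2006";

        SimpleDateFormat in  = new SimpleDateFormat("dd/MM/yyyy"); // source format
        SimpleDateFormat out = new SimpleDateFormat("yyyy-MM-dd"); // MySQL format

        Date date = in.parse(fromXls);
        System.out.println(out.format(date)); // 2006-02-12
    }
}
```

Note that MM (month) versus mm (minute) is exactly the kind of silent mix-up SimpleDateFormat will not warn about.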
Hi to everyone,
I have a problem with data acquisition in LV 7.1.
I migrated from Traditional NI-DAQ to NI-DAQmx in my LabVIEW application.
The problem is that when I acquire data with Traditional (without writing anywhere, just reading), there is no scan backlog. But when acquisition is based on DAQmx, the scan backlog indicator shows numbers from 20 to 50 for about 6 minutes, and then the number increases quite quickly until I get an error ("Unable to acquire data. The data was overwritten.").
The acquisition settings are the same in both cases. When I acquire with DAQmx I use global channels. Could the reason for this phenomenon lie in global-channel data processing? It seems strange that it flows quite smoothly for about 6 minutes and then gets stuck.
Ero

If you have an old DAQ unit it may not be compatible with DAQmx. Which DAQ unit do you have? I think NI has a list showing which DAQ driver you can use with your card.
Besides which, my opinion is that Express VIs (Carthage must be destroyed) should be deleted.
(Sorry, no LabVIEW "brag list" so far)
Has anyone else had a problem with data? After updating my phone to the new update, I have all this data usage and keep getting notifications that my data usage is getting high. I never had this problem before.

Fair enough. No need for any unnecessary posts either. Your issue has been addressed ad nauseam in this forum already, had you bothered to search it (as forum etiquette would dictate) before posting.
In any case, I hope that your problem was solved.
I have a problem with data transfer between Windows Server 2012RT and Windows 7 (no more than 14 kbps), while between Windows Server 2012RT and Windows 8.1 the speed is OK.
Hi,
Regarding the issue here, please take a look at the links below to see if they help:
Slow data transfer speed in Windows 7 or in Windows Server 2008 R2
And a blog here:
Windows Server 2012 slow network/SMB/CIFS problem
Hope this helps.
Best regards
Michael
URGENT! JDev 10.1.2: Problem with data control generated from session bean
I have a problem with a data control generated from a session bean which returns a collection of data transfer objects.
The DTOs seem to be correct. The session bean loads the data correctly, and the objects are full of data. Using the console to display the DTO content works fine.
When generating a data control from this session bean and associating the DTO included in the collection, only the first object level and one-to-one DTO objects are correctly set in the data control. Objects that represent collections within the DTO (one-to-many foreign keys) are set as collections with an iterator, but the structure of the object is not set. I don't know how to associate this second level of collection with the DTO bean class to obtain the attribute definitions.
I created a test case with the HR schema, like the hrApp demo application in the tutorial, with the departments and employees tables. I got the same problem.
Is it a bug?
Is there a workaround to force the data control to understand the collection's data structure?
Help is welcome! This is urgent!!!

We found the problem by assigning the child DTO bean class to the node representing the iterator in the XML file corresponding to the master DTO.