Importing timestamp columns appears to use to_date instead of to_timestamp

I'm trying to import data (using the latest version, 1.5.4 with Patch 2 applied) into an Oracle 10g database that contains timestamp columns. The input data has times with fractional (millisecond) values. The data was exported from a Sybase database using SQL Developer, and the timestamp format in the Excel (xls) file is YYYY-MM-DD HH24:MI:SS.FF3. When I specify this format for the TIMESTAMP columns on the import screens, the importer generates an insert statement like this:
INSERT INTO A (TMS) VALUES (to_date('2008-12-04 12:12:39.967', 'YYYY-MM-DD HH24:MI:SS.FF3'));
This command fails to execute with this error:
Error report:
SQL Error: ORA-01821: date format not recognized
01821. 00000 - "date format not recognized"
*Cause:   
*Action:
I found that if to_timestamp is used instead of to_date, the row is inserted without issue and with the correct time precision. My question is: why isn't SQL Developer using to_timestamp when importing into a TIMESTAMP column, and should it?
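For reference, the equivalent statement with to_timestamp succeeds and preserves the milliseconds:
INSERT INTO A (TMS) VALUES (to_timestamp('2008-12-04 12:12:39.967', 'YYYY-MM-DD HH24:MI:SS.FF3'));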
Any advice would be appreciated.
Thanks

In 1.5.4 I see a bug where the "Format" field doesn't show up on the relevant page of the import wizard, preventing the user from entering a mask when the column type is TIMESTAMP. This has been fixed in the code line under development and should be available when 2.1 is released.
To give you a bit more detail on the confusing DATE/TIMESTAMP behaviour...
SQL Developer misrepresenting DATE as TIMESTAMP (and vice versa) stems from the behaviour of the Oracle JDBC driver. Following are the details I obtained from the JDBC team when I filed a bug against them ("WRONG VALUE RETURNED FOR GETCOLUMNTYPE FOR DATE COLUMN"):
oracle.jdbc.mapDateToTimestamp is set to true by default, which makes the driver report a DATE column as TIMESTAMP type. To turn this off, pass -Doracle.jdbc.mapDateToTimestamp=false at the command line.
To apply this option in SQL Developer, you can add an AddVMOption -Doracle.jdbc.mapDateToTimestamp=false line to its configuration.
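For example, assuming a default install (the exact conf file can vary by version), the line goes in sqldeveloper.conf:
AddVMOption -Doracle.jdbc.mapDateToTimestamp=false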
A bit more history on the option:
8i and older Oracle databases did not support SQL TIMESTAMP, however Oracle
DATE contains a time component, which is an extension to the SQL standard. In
order to correctly handle the time component of Oracle DATE the 8i and
earlier drivers mapped Oracle DATE to java.sql.Timestamp. This preserved the
time component.
Oracle database 9.0.1 included support for SQL TIMESTAMP. In the process of
implementing support for SQL TIMESTAMP we changed the 9i JDBC driver to map
Oracle DATE to java.sql.Date. This was an incorrect decision since it
truncates the time component of Oracle DATE. There was also a backwards
compatibility problem trying to write java.sql.Timestamps to 8i databases.
These are separate problems but we "fixed" both under the control of a single
flag, V8Compatible. This flag was introduced in a 9.2 patch set.
By default the flag is false. When it is set to false the driver maps Oracle
DATE to java.sql.Date, losing the time component and it writes
java.sql.Timestamps to the database as SQL TIMESTAMPS. When the flag is set
to true the driver maps Oracle DATE to java.sql.Timestamp and writes
java.sql.Timestamps to the database as Oracle DATEs.
In 11.1 the V8Compatible flag was deprecated because it controlled Database
8i compatibility which is no longer supported. The additional behavior it
controlled, how SQL DATEs are handled, is controlled by a new flag,
mapDateToTimestamp. In 11.1 setting V8Compatible will just set
mapDateToTimestamp. This new flag only controls how SQL DATEs are
represented, nothing more. This flag will be supported for the foreseeable
future.
Finally, the default value for V8Compatible is false in 9i and 10g. This
means that by default the drivers incorrectly map SQL DATEs to java.sql.Date.
In 11.1 the default value of mapDateToTimestamp is true which means that by
default the drivers will correctly map SQL DATEs to java.sql.Timestamp
retaining the time information. Any customer that is currently setting
V8Compatible = true in order to get the DATE to Timestamp mapping will get
that behavior by default in 11.1. Any customer that wants the incorrect but
10g compatible DATE to java.sql.Date mapping can get that by setting
mapDateToTimestamp = false in 11.1.
About the only way to see the difference between mapDateToTimestamp settings
is to call getObject on a DATE column. If mapDateToTimestamp is true, the
default setting, the result will be a java.sql.Timestamp. If
mapDateToTimestamp is false, then getObject on a DATE column will return a
java.sql.Date.
HTH

Similar Messages

  • Interactive Report Column Heading Filters using LIKE instead of =

    Still on APEX 3.1
    I have an interactive report with a column like the one below:
    select case when trunc(c.resp_contact_dt, 'MM') = trunc(sysdate, 'MM') or trunc(c.oth_contact_dt, 'MM') = trunc(sysdate, 'MM') then '<img src="/i/contact_2_green.png" alt="RESP A: ' || to_char(c.resp_attempt_dt, 'MM/DD/YYYY') || ' C:' || to_char(c.resp_contact_dt, 'MM/DD/YYYY') ||'
    ' || 'OTH A: ' || to_char(c.oth_attempt_dt, 'MM/DD/YYYY') || ' C:' || to_char(c.oth_contact_dt, 'MM/DD/YYYY') || '">'
    when trunc(c.resp_attempt_dt, 'MM') = trunc(sysdate, 'MM') or trunc(c.oth_attempt_dt, 'MM') = trunc(sysdate, 'MM') then '<img src="/i/contact_2_yellow.png" alt="RESP A: ' || to_char(c.resp_attempt_dt, 'MM/DD/YYYY') || ' C:' || to_char(c.resp_contact_dt, 'MM/DD/YYYY') ||'
    ' || 'OTH A: ' || to_char(c.oth_attempt_dt, 'MM/DD/YYYY') || ' C:' || to_char(c.oth_contact_dt, 'MM/DD/YYYY') || '">'
    else '<img src="/i/contact_1_red.png" alt="RESP A: ' || to_char(c.resp_attempt_dt, 'MM/DD/YYYY') || ' C:' || to_char(c.resp_contact_dt, 'MM/DD/YYYY') ||'
    ' || 'OTH A: ' || to_char(c.oth_attempt_dt, 'MM/DD/YYYY') || ' C:' || to_char(c.oth_contact_dt, 'MM/DD/YYYY') || '">' end as Contact,
    This column header has a user-defined LOV:
    select '%green%' d,
    '<img src="/i/contact_2_green.png">' r from dual
    Union select '%red%' d,
    '<img src="/i/contact_1_red.png">' r from dual
    union select '%yellow%' d,
    '<img src="/i/contact_2_yellow.png">' r from dual
    When the user selects the red image from the column header link,
    the filter generated is "where contact = '%red%'", which fails.
    Is there a way to generate the filter "contact like '%red%'" without the user having to change '=' to 'like'?
    Is this possible in 4.0?

    I am assuming, given the crickets, that either my SQL approach is incorrect and there is a better way to get a popup with a changing image into an interactive report,
    or
    there is no way to do what I am trying.

  • Compare date datatype against timestamp column

    Hi,
    I have a table "test" with a TIMESTAMP column as "insert_timestamp".
    I have an application with a query like
    select * from test where insert_timestamp between i_start_date and i_end_date;
    i_start_date and i_end_date are DATE data type variables.
    When I ran this query, it returned results:
    select * from test where insert_timestamp between sysdate - 1 and sysdate;
    My question is: is it OK to compare against a TIMESTAMP column in a table using DATE data type variables?
    I am just making sure the app doesn't break during execution due to this.
    Thanks in advance

    Yes, you can.
    But with TIMESTAMP you get precision down to fractions of a second, which you do not get with the DATE type.
    So if you have data that differs only in fractions of a second, like
    1.0001 sec, 1.0003 sec, 1.0004 sec, 1.0005 sec, 1.0006 sec...
    then to get the data between 1.0003 and 1.0005 seconds you cannot use a DATE variable.
    Ravi Kumar
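    A minimal sketch of that limitation (the table here is hypothetical): a DATE bound is implicitly widened to TIMESTAMP, so the comparison itself is legal, but a DATE can only express whole seconds, so rows that differ only in fractional seconds cannot be told apart by DATE bounds:
    CREATE TABLE t (ts TIMESTAMP);
    INSERT INTO t VALUES (TIMESTAMP '2010-12-18 12:00:01.0003');
    INSERT INTO t VALUES (TIMESTAMP '2010-12-18 12:00:01.0005');
    -- Both rows fall inside any one-second DATE window:
    SELECT * FROM t
    WHERE  ts BETWEEN TO_DATE('18-DEC-2010 12:00:01', 'DD-MON-YYYY HH24:MI:SS')
                  AND TO_DATE('18-DEC-2010 12:00:02', 'DD-MON-YYYY HH24:MI:SS');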

  • DRAG & DROP IN SQL CALENDAR USING TIMESTAMP COLUMN

    Hi there,
    I'm having difficulty with the drag & drop process in my SQL calendar. The default code has been mentioned before as:
    declare
    l_date_value varchar2(32767) := apex_application.g_x01;
    l_primary_key_value varchar2(32767) := apex_application.g_x02;
    begin
    update EMP set HIREDATE = to_date(l_date_value,'YYYYMMDD HH24MISS')
    where ROWID = l_primary_key_value;
    end;
    My SQL calendar query, though, does not have a simple date column: it uses a TIMESTAMP column in the Date Value attribute. APPOINTMENTID is the primary key column of my APPOINTMENTS table, as you can see, and the APP_TIMESTAMP column is my Date Column (which in my database schema is created by "merging" my APP_DATE and APP_TIME columns):
    SELECT APPOINTMENTID,DECODE(APP_STATUS,'0','#005C09','1','#EF5800','2','#000099','3','purple')COLOR,
    (TO_CHAR(APP_TIMESTAMP,'HH24:MI')||' / '||(SELECT LAST_NAME||' '||FIRST_NAME
    FROM PATIENTS C
    WHERE C.PAT_ID = A.PAT_ID)||' / '||UPPER(APP_DESCR)||' / '||APP_TEL_NO) APP_ROW,
    APP_TIMESTAMP
    FROM APPOINTMENTS A, PATIENTS C
    WHERE C.PAT_ID(+) = A.PAT_ID
    So far, in my on-demand process I have written the code shown below, and the result is just the dragging functionality:
    declare
    l_date_value varchar2(32767) := apex_application.g_x01;
    l_primary_key_value varchar2(32767) := apex_application.g_x02;
    begin
    update APPOINTMENTS
    set APP_TIMESTAMP = to_date(l_date_value,'DD/MM/RRRR HH24MI')
    where APPOINTMENTID = l_primary_key_value;
    end;
    Any suggestion is welcome,
    Thanks

    Hi,
    I just found a solution. I implemented a separate event handler for each column, so I can assign the exercise to the right column.
    Does anybody else have a better solution?
    Best regards
    Max
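    Returning to the timestamp aspect of the original question: since APP_TIMESTAMP is a TIMESTAMP column, a hedged variant of the on-demand process (same schema and format mask as posted above) would convert with to_timestamp, so that any fractional-second precision is preserved:
    declare
    l_date_value varchar2(32767) := apex_application.g_x01;
    l_primary_key_value varchar2(32767) := apex_application.g_x02;
    begin
    update APPOINTMENTS
    set APP_TIMESTAMP = to_timestamp(l_date_value,'DD/MM/RRRR HH24MI')
    where APPOINTMENTID = l_primary_key_value;
    end;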

  • Firefox can't read any Bookmark that was imported from my PC with file extension .url. Safari reads them fine. Is there a fix, so I can use Firefox instead of Safari? Many thanks if so. I have the latest version of Firefox

    Firefox can't read any Bookmark on my Mac that was imported from my PC with file extension .url. Safari reads them all fine. Is there a fix, so I can use Firefox instead of Safari? Many thanks if so. I have the latest version of Firefox
    == URL of affected sites ==
    http://anysite.url
    == User Agent ==
    Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_8; en-us) AppleWebKit/531.22.7 (KHTML, like Gecko) Version/4.0.5 Safari/531.22.7

    Hello JF.
    I don't think that extension is supported. I believe Firefox can only read .json and .html.
    You may want to read this though:
    Importing bookmarks and other data from Safari: http://support.mozilla.com/en-US/kb/Importing+bookmarks+and+other+data+from+Safari

  • Using to_date causes CBO to use wrong index

    I'm working on switching an application from the rule-based optimizer to the cost-based optimizer on Oracle 9iR2. I'm having problems with the following query:
    select report_rsn, reports.facility_code,
           exams.facility_code
      from exams join reports using (report_rsn)
    where reports.transcribed_date
            between to_date('17-Mar-2005')
                and to_date('18-Mar-2005')
       and exams.dictated_for_rsn = 3323
       and exams.status IN ('S','T')
    There is an index on the reports.transcribed_date column and a compound index on (exams.dictated_for_rsn, exams.status).
    EXPLAIN PLAN tells me this query will use the transcribed_date index as the driving index, but when I run the query through SQL_TRACE, tkprof says that it used the other index instead. That took a long time (90 seconds).
    If I change the to_date() functions to be ANSI date literals (DATE '2005-03-17' and DATE '2005-03-18'), EXPLAIN PLAN says the same as before, and tkprof says that it actually used that plan. The query returned almost instantly.
    We have CURSOR_SHARING set to 'SIMILAR', and my best guess is that there is something involving the "bind parameter peeking" that is causing the problem. It appears as if the optimizer does not evaluate the to_date() function between "peeking" at the bind variable and determining the execution plan, so it does not know that the two dates are only a day apart. However, I haven't been able to find anything in the docs to confirm my suspicions.
    I've done a search for queries in our system that use the to_date() function as part of the where clause, and it doesn't look like a Herculean task to simply change them all, but I'm wondering two things:
    1) Is there a better way to solve the problem?
    2) Is there any information on this behavior out there that I can read to verify my suspicions? This might be the most important, because in order to finalize the switch to CBO, I need to be able to tell my supervisor that "this sort of thing" won't happen on our production box.
    -- Jeff Beal

    Jeff,
    Could you post your plan under both scenarios? What are the indexes on the tables, and on what columns are they present?

    Here are the relevant* (different) portions of the execution plan (as given by TKPROF):
    * I actually ran TKProf against a much larger query, but the controlling part is the query that I included.
    When using the DATE literal:
    Rows     Row Source Operation
          0  NESTED LOOPS
          0   TABLE ACCESS BY INDEX ROWID REPORTS
          6    INDEX RANGE SCAN IX_REPORTS_TRDATE (object id 109215)
          0   TABLE ACCESS BY INDEX ROWID EXAMS
          0    INDEX RANGE SCAN IX_FK_EXAMS_11 (object id 96878)
    When using the to_date function:
    Rows     Row Source Operation
      46517  NESTED LOOPS
      46517   INLIST ITERATOR
      46517    TABLE ACCESS BY INDEX ROWID EXAMS
      46517     INDEX RANGE SCAN IX_FK_EXAMS_4 (object id 139880)
          0   TABLE ACCESS BY INDEX ROWID REPORTS
      46517    INDEX UNIQUE SCAN PK_REPORTS (object id 109164)
    'ix_fk_exams_4' is a compound index on the exams table; the first column is 'dictated_for_rsn' and the second is 'status'. 'pk_reports' is an index on reports.report_rsn, the primary key of the table. 'ix_reports_trdate' is on the reports.transcribed_date column. 'ix_fk_exams_11' is on exams.report_rsn (the foreign key to the reports table).
    Just to give you an idea of size, the exams table has about six million rows; the reports table has about 5.5 million. I would categorize the transcribed_date column as evenly distributed, along with the values in dictated_for_rsn. The status column is heavily skewed, with 'T' and 'S' being the two most common statuses.
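    For reference, the rewrite described above (replacing the to_date() calls with ANSI date literals) would look like this:
    select report_rsn, reports.facility_code,
           exams.facility_code
      from exams join reports using (report_rsn)
     where reports.transcribed_date
             between DATE '2005-03-17'
                 and DATE '2005-03-18'
       and exams.dictated_for_rsn = 3323
       and exams.status IN ('S','T');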

  • How to get data by year when there is no date/timestamp column

    hi,
    how can I select rows by year from tables that do not have any date/timestamp column?

    Well, Govind, it depends on the data type of that column and the format in which the data is stored.
    If the data type is varchar2/varchar and all the values are in a uniform format, then there is no problem. All you need is the to_date function to convert the supplied strings to a date, and then the to_char function to extract just the YY, YYYY, RR, or RRRR aspect of the data.
    For example: say the column is called 'hire_date', its data type is varchar2, and the entries in this column are all in a uniform format of month,date,year, like January,12,1999. What you need to do is convert this string to a date value using the to_date function, like to_date(hire_date,'format_model'), mentioning the format of the hire_date string in the format model. The output of this function can be fed into to_char to extract the year, like to_char(output_of_to_date,'YYYY').
    I hope you got what I meant. Let me know if it was of any use.
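    A minimal sketch of what that reply describes (the table name is hypothetical; the stored strings are assumed to look like 'January,12,1999'):
    select to_char(to_date(hire_date, 'Month,DD,YYYY'), 'YYYY') as hire_year
      from employees;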

  • Partitioning on timestamp column

    Has anybody ever used a range partition on a timestamp column?
    For DATE columns my partition looks like this:
    PARTITION P_2011_02_15 VALUES LESS THAN (TO_DATE(' 2011-02-16 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    Can somebody provide the syntax for the following: TIMESTAMP, TIMESTAMP WITH TIME ZONE, and TIMESTAMP WITH LOCAL TIME ZONE?
    Thanks

    sb92075 wrote:
    "Has anybody ever used a range partition on a timestamp column? No, because it is just plain silly. Most (sane & reasonable) folks partition down to DAY."
    The smallest partitions we have (in terms of time range) are hourly, with up to 50 million rows in a single hourly partition...
    But I agree in principle: finer-grained range partitions, down to minutes or even seconds...? That is very unlikely.
    Unless the question is whether the partition range (for a daily-partitioned table, for example) can be on a column that is not a DATE data type but a TIMESTAMP data type. In that case the answer is yes: the data type does not prevent a partition range definition.
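    A hedged sketch (table and column names are hypothetical) of a daily range partition keyed on a TIMESTAMP column, analogous to the DATE example above; a TIMESTAMP literal serves as the partition bound:
    CREATE TABLE events (
      evt_ts  TIMESTAMP NOT NULL,
      payload VARCHAR2(100)
    )
    PARTITION BY RANGE (evt_ts)
    (
      PARTITION p_2011_02_15 VALUES LESS THAN (TIMESTAMP '2011-02-16 00:00:00')
    );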

  • SSDT tries to alter timestamp column in TFS build

    We're trying to perform an upgrade test against a copy (backup/restore) of our customer database as the target. There are some tables with a timestamp column in the database. The way we do this is to have a database project with a publish profile targeting that copy of the customer database; the TFS build server then builds the project but only generates a publish script (/p:UpdateDatabase=False is set in the build definition's msbuild arguments).
    Example of table definition:
    CREATE TABLE dbo.CodeTable1
    (ID INT IDENTITY(1,1) NOT NULL PRIMARY KEY
    ,Code CHAR(6)
    ,[Timestamp] TIMESTAMP NULL);
    We would like the "Code" column to be CHAR(7), so in the project we modify the table definition:
    CREATE TABLE dbo.CodeTable1
    (ID INT IDENTITY(1,1) NOT NULL PRIMARY KEY
    ,Code CHAR(7)
    ,[Timestamp] TIMESTAMP NULL);
    Expecting the SSDT build to generate this alter script:
    ALTER TABLE dbo.CodeTable1 ALTER COLUMN Code CHAR(7);
    To our surprise the generated script was:
    ALTER TABLE dbo.CodeTable1 ALTER COLUMN Code CHAR(7);
    ALTER TABLE dbo.CodeTable1 ALTER COLUMN [Timestamp] TIMESTAMP NULL;
    This causes an error when the script is executed: "Cannot alter column 'TIMESTAMP' to be data type timestamp."
    Why is SSDT generating a change script for that timestamp column?
    We then tried a local build in VS, and the issue does not happen there: SSDT correctly generates an alter script only for the "Code" column to CHAR(7).
    Both local machine and TFS Build server are having VS 2013 Update 4- SSDT 12.0.50318.0 installed.
    As we tried to troubleshoot further, we found that it seems to happen only on a database restored from a backup copy of our customer database. It doesn't happen for databases created from scratch by an SSDT build, or ones we created manually. We've tried to make sure all database properties are the same as those of the database that built correctly. But still, if the target database is the one we restored from a customer's copy, SSDT always tries to alter the timestamp column (on a server build).
    Anyone have same experience?
    I have posted a bug in ms connect: https://connect.microsoft.com/SQLServer/feedback/details/1266051
    Thanks!

    Thanks Paul!
    However, it doesn't happen when I build the database project locally, or if the target database was created by SSDT (or manually, for that matter). The issue happens when I change the target database to the one we restored from a backup copy of our customer's database and run the build through our TFS build server.
    So I thought there must be something different about the restored database (which causes SSDT to alter the timestamp column) as opposed to one SSDT or we manually created (which doesn't). Maybe there is a difference in database properties/settings?
    Whatever it is, I just couldn't find it.
    The only thing we can do now as a workaround is to get the schema creation script from the customer's database, run that script to re-create the database from scratch, and use that as the target database instead; as luck would have it, the issue is then gone.
    Still, why the heck does SSDT try to alter the timestamp column in that specific case and not in the other cases described above?
    Elvin

  • Doubt in Timestamp column

    Hi,
    In the query below I am trying to retrieve the data that is greater than 18-DEC-10,
    but it's retrieving all the rows. Why? Please help: how do I handle this timestamp column?
    desc vcps_misc_ch
    Name Null Type
    ENT_NM NOT NULL VARCHAR2(30)
    ATRBT_NM NOT NULL VARCHAR2(30)
    TXN_TYPE NOT NULL CHAR(1)
    TXN_DT NOT NULL TIMESTAMP(6)
    PREV_VALUE VARCHAR2(4000)
    CURR_VALUE VARCHAR2(4000)
    LAST_UPDT_GMIN VARCHAR2(9)
    LAST_UPDT_NM VARCHAR2(4000)
    REF_ID NOT NULL VARCHAR2(2000)
    SELECT txn_dt,to_char(txn_dt,'dd-mon-yy hh24:mi:ss')
    FROM vcps_misc_ch
    WHERE to_char(txn_dt,'dd-mon-yy hh24:mi:ss') > TO_CHAR(TO_date('18-DEC-10','DD-MON-YY'),'dd-mon-yyyy hh24:mi:ss')
    and rownum<10
    TXN_DT TO_CHAR(TXN_DT,'DD-MON-YYHH24:MI:SS')
    26-FEB-10 01.14.43.055154000 PM 26-feb-10 13:14:43
    25-MAR-10 05.23.35.601172000 PM 25-mar-10 17:23:35
    26-MAY-10 08.12.40.106995000 AM 26-may-10 08:12:40
    27-MAY-10 10.38.32.033523000 AM 27-may-10 10:38:32
    28-MAY-10 11.40.23.313450000 AM 28-may-10 11:40:23
    28-MAY-10 01.09.52.332828000 PM 28-may-10 13:09:52
    18-JUN-10 02.44.37.614339000 PM 18-jun-10 14:44:37
    18-JUN-10 02.46.47.141109000 PM 18-jun-10 14:46:47
    24-JUN-10 10.45.43.814528000 AM 24-jun-10 10:45:43
    9 rows selected

    Hi,
    Always compare TIMESTAMP columns to other TIMESTAMPs.
    To compare a TIMESTAMP column to a value that is being supplied as a string, convert the string to a TIMESTAMP, not the other way around. It's more efficient, and less prone to error. (As written, your WHERE clause compares two strings character by character, so '26-feb-10 ...' sorts after '18-dec-2010 ...' simply because '2' > '1'; that's why every row comes back.)
    For example:
    SELECT      txn_dt
    ,     to_char (txn_dt,'dd-mon-yy hh24:mi:ss')
    FROM      vcps_misc_ch
    WHERE      txn_dt >= TO_TIMESTAMP ( '18-DEC-2010 14:45:00'
                                      , 'DD-MON-YYYY HH24:MI:SS'
                                      );
    or
    SELECT      txn_dt
    ,     to_char (txn_dt,'dd-mon-yy hh24:mi:ss')
    FROM      vcps_misc_ch
    WHERE      txn_dt >= TO_TIMESTAMP ( p_date
                                      , 'DD-MON-YYYY HH24:MI:SS'
                                      );
    Do not nest conversion functions (such as "TO_CHAR ( TO_TIMESTAMP ...)").  There's almost always a simpler, more efficient way.
    TO_CHAR is appropriate for displaying a date (as in the SELECT clause above). If you're tempted to use TO_CHAR for any other purpose (in a WHERE clause, for example), ask yourself why.

  • Sqlplus query issue: customerID column appears twice

    I have tried several ways to write a query that shows customers who have made more than 3 rentals from 01 Nov to 31 Dec. However, for some reason the customerID column appears twice.
    ** I'm using 3 tables to get this result:
    select RentalAgreement.membershipcardid as "MCardID",
    customer.customerid,customer.cname,customer.address,
    membershipcard.customerid,invoice.rentc,RentalAgreement.DateAndTimeOfIssue,
    count(RentalAgreement.membershipcardid) as "NOfRents" from RentalAgreement,customer,membershipcard,invoice
    where rentalAgreement.membershipcardid=MembershipCard.MembershipCardID
    and customer.customerid = membershipcard.customerid
    and RentalAgreement.DateAndTimeOfIssue between '01-Nov-2012' and '31-Dec-2012'
    group by rentalAgreement.membershipcardid, MembershipCard.customerid, customer.cname, customer.address, invoice.rentc, customer.customerid, RentalAgreement.DateAndTimeOfIssue;
    MCardID CUSTOMERID CNAME  ADDRES CUSTOMERID RENTC DATEANDTI NOfRents
    107     27         EDuois N151AS 27         60    11-NOV-12        5
    108     28         WDuois N151AS 28         60    19-NOV-12        5
    101     22         ALuois N193AS 22         60    22-NOV-12        5
    101     22         ALuois N193AS 22         60    21-NOV-12        5
    101     22         ALuois N193AS 22         60    20-NOV-12        5
    101     22         ALuois N193AS 22         60    24-NOV-12        5
    109     30         ADDuis N151AS 30         60    03-DEC-12        5
    Thanks for your time.
    I'm not sure if the logic of this query is right; I would appreciate any online guide for PL/SQL.

    Hi,
    978308 wrote:
    Hi
    Thank you for your answer. (I'm using Oracle 11.2.0.1.0.)
    Sorry, my question wasn't clear, since I didn't attach my tables.
    As you suggested, it is not necessary to use PL/SQL. I tried to use the logic of algebra and plain SQL to solve query number 2:
    /* query No2
    produce a list of customers who have made more than three scooter rentals in the past 2 months
    show appropriate customer and rental agreement details */
    However, I hit many errors, maybe because my class diagram wasn't properly completed, so I decided to use PL/SQL to get the result.
    It is true my query is incomplete, because first of all I wanted to remove the duplicate columns; then I planned to work on the number of occurrences.
    If I delete "and customer.customerid = membershipcard.customerid", SQL shows 500 customers; the repetition is even higher.
    The condition "customer.customerid = membershipcard.customerid" in the WHERE clause looks right; you don't have to change that part.
    The SELECT clause determines what columns appear in the result set. If you only want customerid to appear in one column, then only include one customerid column in the SELECT clause. There need not be any connection between the SELECT clause and the WHERE clause; it's perfectly okay to use a column in the SELECT clause but not the WHERE clause, and it's perfectly okay to use a column in the WHERE clause but not the SELECT clause.
    /* query No2
    produce a list of customers who have made more than three scooter rentals in the past 2 months
    show appropriate customer and rental agreement details */
    Once again, please format your code, and use code tags when you post it here to preserve the spacing. It's hard to read unformatted queries like this, and therefore it's hard to debug them.
    See the forum FAQ {message:id=9360002}
    select RentalAgreement.membershipcardid as "MemberID",
    customer.customerid,
    customer.cname,
    customer.address,
    membershipcard.customerid,
    RentalAgreement.DateAndTimeOfIssue,
    RentalAgreement.EScotterID,
    count(RentalAgreement.membershipcardid) as "NumberOfRent"
    from RentalAgreement,customer,membershipcard
    where rentalAgreement.membershipcardid=MembershipCard.MembershipCardID
    and customer.customerid = membershipcard.customerid
    and RentalAgreement.DateAndTimeOfIssue between '01-Nov-2012' and '31-Dec-2012'
    Once again, don't compare a DATE (such as RentalAgreement.DateAndTimeOfIssue) to a VARCHAR2 (such as '01-Nov-2012'). You're using TO_DATE correctly in your INSERT statements; use TO_DATE in the query, too.
    group by rentalAgreement.membershipcardid,
    MembershipCard.customerid,
    customer.cname,customer.address,
    RentalAgreement.EScotterID,
    customer.customerid,
    RentalAgreement.DateAndTimeOfIssue;"GROUP BY a, b, c, d" means that each row of output will represent a distinct combination of columns a, b, c and d, and aggregate functions will operate on the whole group.  In the query above, that means count(RentalAgreement.membershipcardid) will show the number of rows that had a value for membershipcardid for each combination of the 6 columns listed in the GROUP BY clause, including custiomerid and dateandtimeofissue.  That means that rentals for each customer *and date* will be counted separately.  Look at the output you're getting from query 2:Member CUSTOMER CUSTOMER DATEANDTIME ESCOTTER Number
    ID ID CNAME ADDRES ID OFISSUE ID OfRent
    102 23 BLuois N113AS 23 11-Nov-2012 1 1
    101 22 ALuois N193AS 22 21-Nov-2012 1 1
    101 22 ALuois N193AS 22 24-Nov-2012 5 1
    101 22 ALuois N193AS 22 20-Nov-2012 2 1
    101 22 ALuois N193AS 22 22-Nov-2012 4 1
    101 22 ALuois N193AS 22 19-Nov-2012 2 1
    The count is always 1.  That is, customer 22 did have 5 rentals, but only 1 rental per day.
    You haven't posted the results you want from the same data. I'm guessing you want something like this:
    MemberID CUSTOMERID CNAME  ADDRES DATEANDTIMEOFISSUE ESCOTTERID NumberOfRent
    101      22         ALuois N193AS 19-Nov-2012        2          5
    101      22         ALuois N193AS 20-Nov-2012        2          5
    101      22         ALuois N193AS 21-Nov-2012        1          5
    101      22         ALuois N193AS 22-Nov-2012        4          5
    101      22         ALuois N193AS 24-Nov-2012        5          5
    showing that customer 22 had 5 rentals in the period of interest. That is, you *do* want to combine all rows for a customer when COUNTing, but you *don't* want to combine the rows when displaying. The aggregate GROUP BY combines the rows, so how can you still get the details (such as date and scooter ID) for each separate rental? One way is to do a GROUP BY just to get the customerid and COUNT, and use that in an IN subquery (or maybe a join), where the main query does not use GROUP BY.
    If this is a school assignment, I don't want to ruin it for you, so I'll use an example with the scott.dept and emp tables. Say we want to find which departments have more than 1 CLERK working in them. One way to do that is:
    SELECT  *
    FROM    scott.dept
    WHERE   deptno IN (
                        SELECT    deptno
                        FROM      scott.emp
                        WHERE     job = 'CLERK'
                        GROUP BY  deptno
                        HAVING    COUNT (*) > 1
                      );
    Output:
    DEPTNO DNAME    LOC
    20     RESEARCH DALLAS
    Another, completely different way would be to use the analytic COUNT function, not the aggregate function.
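    A hedged sketch of that analytic alternative (same scott.emp example): the analytic COUNT repeats the per-department total on every row instead of collapsing the rows, so the detail rows stay visible:
    SELECT  *
    FROM    (
              SELECT  e.*
              ,       COUNT (*) OVER (PARTITION BY deptno) AS clerk_cnt  -- per-department total, no rows collapsed
              FROM    scott.emp e
              WHERE   job = 'CLERK'
            )
    WHERE   clerk_cnt > 1;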

  • Can you specify format mask for date or timestamp columns

    Hi,
    Recently, while developing a PL/SQL application, I found that any mismatch between the data in a table and the format mask specified in the to_date/to_timestamp function causes an exception. I'd like to make my application more robust and immune to this kind of problem, and to know exactly which format mask to use for which columns in a db table.
    Is there a way to specify, on a table column, the format mask for date/timestamp columns, so that I know exactly what format mask to use when converting? I also don't want to depend on the NLS_DATE_FORMAT setting, and I don't want to write a long CASE statement checking every date format Oracle allows.
    Many thanks.

    As far as I know, this isn't possible, because if you enter the date '12/11/2007',
    how would Oracle know whether the 12 or the 11 is the month?
    The person who entered that date will also be confused in the future.
    Either fix one date format for your whole application (application configuration),
    or store the date format in another column of the table.
    If you make the date column a VARCHAR and store all the dates without a format,
    then you may face problems in the future while fetching records, searching, etc.
    Regards
    Singh

  • How to import timestamps from excel into labview

    Hello everyone, how do I import timestamps from a column in Excel into LabVIEW?
    I have been bugged by this problem for a long time now... can anyone help, please?
    Now on LabVIEW 10.0 on Win7

    LabVIEW and Excel use different reference times (LabVIEW's epoch is 1 Jan 1904, if I remember right). You need to convert between the two references. I don't remember the details of how I did it, and I'm away from my code base.
    Felix
    www.aescusoft.de
    My latest community nugget on producer/consumer design
    My current blog: A journey through uml

  • How can I import db column comments into the OBIEE presentation layer?

    We have a very well designed data mart: it is a star schema, and all the fact and dimension table columns have comments containing their definition and use. Virtually no change was required in the physical or business layer. The only modification done in the presentation layer was to hide the PK/FK columns.
    Is there a way to import these column comments into the presentation layer so that the business user can see this comment in the tool tip while hovering over the presentation column in Answers?
    Thanks for your help!

    Hi,
    I assume the comments you mean are stored in user_tab_comments and user_col_comments.
    When this is the case you should do the following:
    Go to your subject area in your presentation layer. For now I assume the name of this subject area is "Subject Area".
    Then right click on this subject area and check "Externalize Descriptions".
    Then create an initialization block (session) using this query:
    (select 'CD_Subject_Area_' || table_name, comments from user_tab_comments)
    union all
    (select 'CD_Subject_Area_' || table_name || '_' || column_name, comments from user_col_comments)
    Use "Row-wise initialization" for this initialization block.
    Two comments:
    1) Like I said, I assume "Subject Area" is the name of the subject area in your presentation layer, so you need to replace this with the name of your own subject area.
    Be sure to replace each space ' ' with an underscore '_'.
    2) You may need to refine the above queries by filtering table_name to just the tables you are using.
    Good luck.
    Regards,
    Stijn

  • After load test is completed, "90% Page Time", "95% Page Time" columns appear blank and "99% Page Time" column is not available.

    Hi,
    I am using Visual Studio 2013. After running a load test on my application, I need to view the "90% Page Time", "95% Page Time" and "99% Page Time" percentile data in the tabular results view. But since I started using Visual Studio 2013, these columns appear blank after load test execution completes. Is a specific configuration change required to view data in these columns?
    Please refer to the attached screenshot.
    Help much appreciated.
    Thanks,
    Rahul S.

    Hi Rahul S,
    To collect the data: in the Load Test Editor, under the Run Settings node, select the run setting node to change. In the Properties window, for the Timing Details Storage property, select All Individual Details.
    Best Regards,
    Jack
